Search Results

Search found 22463 results on 899 pages for 'sub query'.


  • Ajax Code to run PHP query After Facebook Like Button is Clicked

    - by John
    I have the PHP below in a file called fblike.php. In another file, I have the Facebook Like button. The Like button functions. I would like to run the code below when the Facebook Like button is clicked. I know that FB.Event.subscribe('edge.create', function(response) {} is supposed to run whenever the Like button is clicked. I know that I am probably supposed to use Javascript and maybe Ajax to cause the PHP in fblike.php to run, but after multiple tries, I can't get it to work. What is the specific Ajax code that I could include within the Facebook Event? Do I need to do anything to the Like button code to allow the Facebook Event to work?

        $submissionid = $_POST['submissionid'];
        $uid = $_POST['uid'];

        mysql_connect("server", "username", "password") or die(mysql_error());
        mysql_select_db("database") or die(mysql_error());

        $q = "INSERT INTO fblikes VALUES (NULL, '$submissionid', '$uid', NULL)";
        $r = mysql_query($q);

        if($r) {
            echo "Success!";
        } elseif(!$r) {
            echo "Failed!";
        }

    Read the article

  • SQL Server Query with "ELSE:"

    - by Mike D
    I'm maintaining various VB6 projects in which some of the queries passed to the server contain "ELSE:", with a colon, in CASE statements. Can someone tell me what the **ll the colon is used for? It causes errors in SQL 2005 and greater, but SQL 2000 works with no complaints. I'd like to just remove it from the code and re-compile, but I'm afraid it'll break 10 other things in the application.. Thanks in advance...
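
    A minimal T-SQL sketch of the cleanup being asked about, using a hypothetical Orders table, and assuming the colon was simply tolerated by SQL Server 2000 and carried no meaning of its own: dropping it is the only change, so the expression should behave the same on 2005+.

        -- Hedged example with a hypothetical Orders table; removing the stray colon after ELSE is the only change.
        SELECT OrderID,
               CASE
                   WHEN Status = 1 THEN 'Open'
                   WHEN Status = 2 THEN 'Shipped'
                   ELSE 'Unknown'          -- was written as: ELSE: 'Unknown'
               END AS StatusText
        FROM Orders;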

    Read the article

  • help for a query

    - by stighy
    Hi, I have a problem updating a table. Here is the table structure:

        Table1: tableid, ..., ..., productID_1, productID_2, productID_3
        Table2: productID, Total

    I have to totalize each product in Table2. For example:

        SELECT COUNT(*) as tot, ProductID_1 FROM Table1 GROUP Table1

    and then:

        UPDATE table2 SET total = ..??? (how can I do this) WHERE productID_1 = ....

    Hope you can help me. Thank you
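
    A hedged sketch of one way to do the UPDATE, assuming MySQL-style syntax and the column names from the question: a correlated subquery counts, per Table2 row, how often that product appears as productID_1 in Table1.

        -- Sketch only: counts occurrences of each product in Table1.productID_1 and stores them in Table2.Total.
        UPDATE Table2
        SET Total = (SELECT COUNT(*)
                     FROM Table1
                     WHERE Table1.productID_1 = Table2.productID);

    The same pattern can be repeated, or the three product columns combined in a UNION ALL subquery, if productID_2 and productID_3 should be counted as well.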

    Read the article

  • Find query that caused update

    - by MadMax
    We have been having problems with ghost updates in our DB (SQL Server 2005): fields are changing and we cannot find the routine that is updating them. Is there a way, using an update trigger (or any other way), to tell what caused the update?
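
    One hedged approach for SQL Server 2005 is an AFTER UPDATE trigger that records where each update came from; the names below (dbo.Orders, dbo.UpdateAudit) are hypothetical stand-ins for the table being ghost-updated and an audit table you create yourself.

        -- Sketch only: logs host, application, login and spid for every UPDATE on the watched table.
        CREATE TABLE dbo.UpdateAudit (
            AuditID   int IDENTITY(1,1) PRIMARY KEY,
            TableName sysname,
            HostName  nvarchar(128),
            AppName   nvarchar(128),
            LoginName nvarchar(128),
            Spid      int,
            AuditedAt datetime DEFAULT GETDATE()
        );
        GO
        CREATE TRIGGER trg_Orders_AuditUpdate ON dbo.Orders
        AFTER UPDATE
        AS
        BEGIN
            INSERT INTO dbo.UpdateAudit (TableName, HostName, AppName, LoginName, Spid)
            VALUES ('dbo.Orders', HOST_NAME(), APP_NAME(), ORIGINAL_LOGIN(), @@SPID);
        END;

    HOST_NAME(), APP_NAME() and ORIGINAL_LOGIN() usually narrow down which machine, program and login issued the statement; a server-side Profiler trace filtered on the same table is a common complement.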

    Read the article

  • How to write a query to get all information from one table to another

    - by Dave
    I am building an Access database that gets data from an outside source and places it in a table that is linked to the data source. As we all know, you are not allowed to reconfigure a linked table. What I want to do is take the data from that linked table and make another table to which I can add new fields, and keep it in sync with the data that gets put into the linked table. Please help.
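
    A hedged sketch in Access SQL, with hypothetical names LinkedData (the linked table), LocalData (the local, extendable copy) and an assumed ID key column: a make-table query creates the copy once, and an append query with a NOT IN subquery pulls over new rows on later refreshes.

        -- One-time creation of the local table from the linked one (make-table query):
        SELECT LinkedData.* INTO LocalData FROM LinkedData;

        -- Later refreshes: append only the rows not yet copied (assumes an ID key column):
        INSERT INTO LocalData
        SELECT *
        FROM LinkedData
        WHERE LinkedData.ID NOT IN (SELECT ID FROM LocalData);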

    Read the article

  • sql count() query for tables

    - by air
    I have two tables:

        table1 fields: fid, fname, fage
        a , abc , 20
        b , bcv , 21
        c , cyx , 19

        table2 fields: recno, fid, status
        1 , a , ok
        2 , c , ok
        3 , a , ok
        4 , b , ok
        5 , a , ok

    I want to display records like this: fid from table1, count(recno) from table2 and fage from table1:

        fid , count(recno) , fage
        a   , 3            , 20
        b   , 2            , 21
        c   , 1            , 19

    I tried many SQL queries but got errors. Thanks
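
    A hedged sketch of the aggregate being described, reusing the column names from the question; the LEFT JOIN keeps table1 rows even when they have no matching table2 rows, in which case COUNT(t2.recno) returns 0.

        SELECT t1.fid,
               COUNT(t2.recno) AS cnt,
               t1.fage
        FROM table1 AS t1
        LEFT JOIN table2 AS t2 ON t2.fid = t1.fid
        GROUP BY t1.fid, t1.fage
        ORDER BY t1.fid;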

    Read the article

  • Succinct LINQ to XML Query

    - by Kent Boogaart
    Assuming you have the following XML:

        <?xml version="1.0" encoding="utf-8"?>
        <content>
          <info>
            <media>
              <image>
                <info>
                  <imageType>product</imageType>
                </info>
                <imagedata fileref="http://www.example.com/image1.jpg" />
              </image>
              <image>
                <info>
                  <imageType>manufacturer</imageType>
                </info>
                <imagedata fileref="http://www.example.com/image2.jpg" />
              </image>
            </media>
          </info>
        </content>

    Using LINQ to XML, what is the most succinct, robust way to obtain a System.Uri for an image of a given type? At the moment I have this:

        private static Uri GetImageUri(XElement xml, string imageType)
        {
            return (from imageTypeElement in xml.Descendants("imageType")
                    where imageTypeElement.Value == imageType
                        && imageTypeElement.Parent != null
                        && imageTypeElement.Parent.Parent != null
                    from imageDataElement in imageTypeElement.Parent.Parent.Descendants("imagedata")
                    let fileRefAttribute = imageDataElement.Attribute("fileref")
                    where fileRefAttribute != null && !string.IsNullOrEmpty(fileRefAttribute.Value)
                    select new Uri(fileRefAttribute.Value)).FirstOrDefault();
        }

    This works, but feels way too complicated. Especially when you consider the XPath equivalent. Can anyone point out a better way?

    Read the article

  • .NET DB Query Without Allocations?

    - by Michael Covelli
    I have been given the task of re-writing some libraries written in C# so that there are no allocations once startup is completed. I just got to one project that does some DB queries over an OdbcConnection every 30 seconds. I've always just used .ExecuteReader() which creates an OdbcDataReader. Is there any pattern (like the SocketAsyncEventArgs socket pattern) that lets you re-use your own OdbcDataReader? Or some other clever way to avoid allocations? I haven't bothered to learn LINQ since all the dbs at work are Oracle based and the last I checked, there was no official Linq To Oracle provider. But if there's a way to do this in Linq, I could use one of the third-party ones.

    Read the article

  • Django get() query not working

    - by pimcoooooooo
    The line

        this_category = Category.objects.get(name=cat_name)

    gives the error:

        get() takes exactly 2 non-keyword arguments (1 given)

    I am using the appengine helper, so maybe that is causing problems. Category is my model. Category.objects.all() works fine. Filter is also similarly not working. Thanks,

    Read the article

  • php: parse error on mysql query

    - by dwstein
    I'm getting the following error:

        Parse error: syntax error, unexpected T_VARIABLE in /home/a4999406/public_html/willingLog.html on line 48

    on the following code (line 48 is the first row of this code):

        $rows = mysql_num_rows($result);
        for ($j=0; $j<$rows: ++$j) {
            echo 'ID: ' . mysql_result($result, $j, 'id') . '<br />';
            echo 'First: ' . mysql_result($result, $j, 'first') . '<br />';
            echo 'Last: ' . mysql_result($result, $j, 'last') . '<br />';
            echo 'Email: ' . mysql_result($result, $j, 'email') . '<br />';
        }

    Anyone know what i'm doing wrong?

    Read the article

  • Will [WithEvents = Nothing] RemoveHandlers in the derived class?

    - by serhio
    I tend to set WithEvents variables to Nothing in the destructor, because this will "remove" all the handlers associated with the Handles keyword. Will this have the same effect for derived classes?

        Class A
            Protected WithEvents _Foo as Button

            Private Sub _Foo_Click Handles _Foo.Click
                ' ... some Click action '
            End Sub

            Public Sub Dispose(disposing as Boolean)
                If disposing then _Foo = Nothing ' remove handler _Foo_Click '
            End Sub
        End Class

        Class B
            Inherits A

            Private Sub _Foo_Move Handles _Foo.Move
                ' ... some Move action '
            End Sub

            ' ????? will the base Dispose remove handler _Foo_Move or not? '
            Public Overrides Sub Dispose(disposing as Boolean)
                'If disposing then _Foo = Nothing '
                MyBase.Dispose(disposing)
            End Sub
        End Class

    Read the article

  • Using query to change table mapping

    - by crapbag
    I have a table mytable(id, key, value). I realize that key is generating a lot of data redundancy, since my key is a string (my keys are really long, but repetitive). How do I build a separate table that has (key, keyID), and then alter my table so that I end up with mytable(id, keyID, value) and keyTable(keyID, key)?
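
    A hedged MySQL-flavoured sketch of that split, reusing the names from the question (backticks because key is a reserved word); adjust types and syntax for your actual engine.

        -- New lookup table for the distinct key strings:
        CREATE TABLE keyTable (
            keyID INT AUTO_INCREMENT PRIMARY KEY,
            `key` VARCHAR(255) NOT NULL UNIQUE
        );

        INSERT INTO keyTable (`key`)
        SELECT DISTINCT `key` FROM mytable;

        -- Add the new column, back-fill it via a join, then drop the old string column:
        ALTER TABLE mytable ADD COLUMN keyID INT;

        UPDATE mytable m
        JOIN keyTable k ON k.`key` = m.`key`
        SET m.keyID = k.keyID;

        ALTER TABLE mytable DROP COLUMN `key`;

    A foreign key from mytable.keyID to keyTable.keyID can be added afterwards if the engine supports it.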

    Read the article

  • mysql query for change in values in a logging table

    - by kiasectomondo
    I have a table like this:

        Index , PersonID , ItemCount , UnixTimeStamp
        1     , 1        , 1         , 1296000000
        2     , 1        , 2         , 1296000100
        3     , 2        , 4         , 1296003230
        4     , 2        , 6         , 1296093949
        5     , 1        , 0         , 1296093295

    Time and index always go up. It's basically a logging table that logs the itemcount each time it changes. I get the most recent ItemCount for each person like this:

        SELECT *
        FROM table a
        INNER JOIN (SELECT MAX(index) as i FROM table GROUP BY PersonID) b
            ON a.index = b.i;

    What I want to do is get the most recent record for each PersonID that is at least 24 hours older than the most recent record for that PersonID. Then I want to take the difference in ItemCount between these two, to get the change in itemcount for each person over the last 24 hours:

        personID   ChangeInItemCountOverAtLeast24Hours
        1          3
        2          -11
        3          6

    I'm sort of stuck on what to do next. How can I join another itemcount based on the latest adjusted timestamp of individual rows?
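
    A hedged MySQL sketch, assuming the table is called log and using the column names from the question: the first correlated subquery pins each person's newest row, the second finds their newest row at least 24 hours (86400 seconds) older than that, and the outer SELECT takes the difference in ItemCount.

        SELECT latest.PersonID,
               latest.ItemCount - baseline.ItemCount AS ChangeInItemCountOverAtLeast24Hours
        FROM log AS latest
        JOIN log AS baseline ON baseline.PersonID = latest.PersonID
        WHERE latest.UnixTimeStamp = (SELECT MAX(l2.UnixTimeStamp)
                                      FROM log AS l2
                                      WHERE l2.PersonID = latest.PersonID)
          AND baseline.UnixTimeStamp = (SELECT MAX(l3.UnixTimeStamp)
                                        FROM log AS l3
                                        WHERE l3.PersonID = latest.PersonID
                                          AND l3.UnixTimeStamp <= latest.UnixTimeStamp - 86400);

    People with no row at least 24 hours older than their newest row simply drop out of the result; use a LEFT JOIN variant if they should appear with a NULL change instead.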

    Read the article

  • TIME REDUCE(OPTIMISE QUERY)

    - by user2527657
        select a.userid,
               (select firstName from user where userid=NOTUSED.userid) as z,
               (select max(login_time) from userLoginTime AS b
                where userid = a.user_id
                GROUP BY b.user_id
                ORDER BY b.user_id) as y
        From (SELECT DISTINCT a.user_id
              FROM user AS a
              LEFT OUTER JOIN (SELECT (userid) FROM userlogintime where serialid=15400012) AS b
                  ON user.user_id = b.user_id
              where a.Serialid=15400012 AND b.userid IS NULL) NOTUSED,
             Relation r,
             user a
        where r.childuserid = NOTUSED.userid
          and guarduserid = a.userid

    Read the article

  • 3G USB Modem Not Working in 12.04

    - by Seyed Mohammad
    When I connect my 3G USB modem to my laptop with 12.04, nothing shows up in Network-Manager. This modem works in 11.10 and is shown in Network-Manager there, but not in 12.04! Here are the outputs of lsusb and usb-devices on two machines, one with 11.10 and the other with 12.04:

        Ubuntu-11.10:

        $ lsusb
        Bus 002 Device 009: ID 1c9e:6061

        $ usb-devices
        T: Bus=02 Lev=02 Prnt=02 Port=03 Cnt=01 Dev#= 9 Spd=12 MxCh= 0
        D: Ver= 1.10 Cls=00(>ifc ) Sub=00 Prot=00 MxPS=64 #Cfgs= 1
        P: Vendor=1c9e ProdID=6061 Rev=00.00
        S: Manufacturer=3G USB Modem ??
        S: SerialNumber=000000000002
        C: #Ifs= 4 Cfg#= 1 Atr=a0 MxPwr=500mA
        I: If#= 0 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=ff Prot=ff Driver=option
        I: If#= 1 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=ff Prot=ff Driver=option
        I: If#= 2 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=ff Prot=ff Driver=option
        I: If#= 3 Alt= 0 #EPs= 2 Cls=08(stor.) Sub=06 Prot=50 Driver=usb-storage

        Ubuntu-12.04:

        $ lsusb
        Bus 002 Device 003: ID 1c9e:6061 OMEGA TECHNOLOGY WL-72B 3.5G MODEM

        $ usb-devices
        T: Bus=02 Lev=01 Prnt=01 Port=01 Cnt=01 Dev#= 3 Spd=12 MxCh= 0
        D: Ver= 1.10 Cls=00(>ifc ) Sub=00 Prot=00 MxPS=64 #Cfgs= 1
        P: Vendor=1c9e ProdID=6061 Rev=00.00
        S: Manufacturer=Qualcomm, Incorporated
        S: Product=USB MMC Storage
        S: SerialNumber=000000000002
        C: #Ifs= 1 Cfg#= 1 Atr=c0 MxPwr=100mA
        I: If#= 0 Alt= 0 #EPs= 2 Cls=08(stor.) Sub=06 Prot=50 Driver=(none)

    As the output above shows, the device is detected as a modem in 11.10, but in 12.04 it is detected as USB storage (the device is both a 3G modem and an SD-card USB adapter). Any help?!

    Read the article

  • Creating SparseImages for Pivot

    - by John Conwell
    Learning how to programmatically make collections for Microsoft Live Labs Pivot has been a pretty interesting ride. There are very few examples out there, and the folks at MS Live Labs are often slow on any feedback.  But that is what Reflector is for, right? Well, I was creating these InfoCard images (similar to the Car images in the "New Cars" sample collection that that MS created for Pivot), and wanted to put a Tag Cloud into the info card.  The problem was the size of the tag cloud might vary in order for all the tags to fit into the tag cloud (often times being bigger than the info card itself).  This was because the varying word lengths and calculated font sizes. So, to fix this, I made the tag cloud its own separate image from the info card.  Then, I would create a sparse image out of the two images, where the tag cloud fit into a small section of the info card.  This would allow the user to see the info card, but then zoom into the tag cloud and see all the tags at a normal resolution.  Kind'a cool. But...I couldn't find one code example (not one!) of how to create a sparse image.  There is one page on the SeaDragon site (http://www.seadragon.com/developer/creating-content/deep-zoom-tools/) that gives over the API for creating images and collections, and it sparsely goes over how to create a sparse image, but unless you are familiar with the API already, the documentation doesn't help very much. The key is the Image.ViewportWidth and Image.ViewportOrigin properties of the image that is getting super imposed on the main image.  I'll walk through the code below.  I've setup a couple Point structs to represent the parent and sub image sizes, as well as where on the parent I want to position the sub image.  Next, create the parent image.  This is pretty straight forward.  Then I create the sub image.  Then I calculate several ratios; the height to width ratio of the sub image, the width ratio of the sub image to the parent image, the height ratio of the sub image to the parent image, then the X and Y coordinates on the parent image where I want the sub image to be placed represented as a ratio of the position to the parent image size. After all these ratios have been calculated, I use them to calculate the Image.ViewportWidth and Image.ViewportOrigin values, then pass the image objects into the SparseImageCreator and call Create. The key thing that was really missing from the API documentation page is that when setting up your sub images, everything is expressed in a ratio in relation to the main parent image.  If I had known this, it would have saved me a lot of trial and error time.  And how did I figure this out?  Reflector of course!  There is a tool called Deep Zoom Composer that came from MS Live Labs which can create a sparse image.  I just dug around the tool's code until I found the method that create sparse images.  But seriously...look at the API documentation from the SeaDragon size and look at the code below and tell me if the documentation would have helped you at all.  I don't think so!   
        public static void WriteDeepZoomSparseImage(string mainImagePath, string subImagePath, string destination)
        {
            Point parentImageSize = new Point(720, 420);
            Point subImageSize = new Point(490, 310);
            Point subImageLocation = new Point(196, 17);
            List<Image> images = new List<Image>();

            // create main image
            Image mainImage = new Image(mainImagePath);
            mainImage.Size = parentImageSize;
            images.Add(mainImage);

            // create sub image
            Image subImage = new Image(subImagePath);
            double hwRatio = subImageSize.X / subImageSize.Y;        // height width ratio of the tag cloud
            double nodeWidth = subImageSize.X / parentImageSize.X;   // sub image width to parent image width ratio
            double nodeHeight = subImageSize.Y / parentImageSize.Y;  // sub image height to parent image height ratio
            double nodeX = subImageLocation.X / parentImageSize.X;   // x coordinate position on parent / width of parent
            double nodeY = subImageLocation.Y / parentImageSize.Y;   // y coordinate position on parent / height of parent

            subImage.ViewportWidth = (nodeWidth < double.Epsilon) ? 1.0 : (1.0 / nodeWidth);
            subImage.ViewportOrigin = new Point(
                (nodeWidth < double.Epsilon) ? -1.0 : (-nodeX / nodeWidth),
                (nodeHeight < double.Epsilon) ? -1.0 : ((-nodeY / nodeHeight) / hwRatio));
            images.Add(subImage);

            // create sparse image
            SparseImageCreator creator = new SparseImageCreator();
            creator.Create(images, destination);
        }

    Read the article

  • basic json > struct question

    - by danwoods
    I'm working with twitter's api, trying to get the json data from http://search.twitter.com/trends/current.json which looks like:

        {"as_of":1268069036,"trends":{"2010-03-08 17:23:56":[{"name":"Happy Women's Day","query":"\"Happy Women's Day\" OR \"Women's Day\""},{"name":"#MusicMonday","query":"#MusicMonday"},{"name":"#MM","query":"#MM"},{"name":"Oscars","query":"Oscars OR #oscars"},{"name":"#nooffense","query":"#nooffense"},{"name":"Hurt Locker","query":"\"Hurt Locker\""},{"name":"Justin Bieber","query":"\"Justin Bieber\""},{"name":"Cmon","query":"Cmon"},{"name":"My World 2","query":"\"My World 2\""},{"name":"Sandra Bullock","query":"\"Sandra Bullock\""}]}}

    My structs look like:

        type trend struct {
            name  string
            query string
        }

        type trends struct {
            id            string
            arr_of_trends []trend
        }

        type Trending struct {
            as_of      string
            trends_obj trends
        }

    and then I parse the JSON into a variable of type Trending. I'm very new to JSON, so my main concern is making sure I've set up the data structure correctly to hold the returned JSON data. I'm writing this in Go for a project for school. (This is not part of a particular assignment, just something I'm demoing for a presentation on the language.)

    Read the article

  • Failed upgrade of PHP on Ubuntu 12.04, error: Sub-process /usr/bin/dpkg returned an error code (1)

    - by DanielAttard
    I just tried to upgrade my version of PHP on Ubuntu 12.04 and now I have messed it up. First I did this:

        sudo add-apt-repository ppa:ondrej/php5-oldstable

    Then I did this:

        sudo apt-get update

    Then finally I did this:

        sudo apt-get install php5

    And now I am getting an error message about Sub-process /usr/bin/dpkg returned an error code (1). What have I done wrong? How can I fix this problem? Thanks. Here are the errors received:

        Do you want to continue [Y/n]? Y
        debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable
        Setting up libapache2-mod-php5 (5.4.28-1+deb.sury.org~precise+1) ...
        debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable
        dpkg: error processing libapache2-mod-php5 (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Setting up php5-cli (5.4.28-1+deb.sury.org~precise+1) ...
        debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable
        dpkg: error processing php5-cli (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        dpkg: dependency problems prevent configuration of php5-curl:
         php5-curl depends on phpapi-20100525+lfs; however:
          Package phpapi-20100525+lfs is not installed.
          Package libapache2-mod-php5 which provides phpapi-20100525+lfs is not configured yet.
          Package php5-cli which provides phpapi-20100525+lfs is not configured yet.
        dpkg: error processing php5-curl (--configure):
         dependency problems - leaving unconfigured
        No apport report written because MaxReports is reached already
        dpkg: dependency problems prevent configuration of php5-gd:
         php5-gd depends on phpapi-20100525+lfs; however:
          Package phpapi-20100525+lfs is not installed.
          Package libapache2-mod-php5 which provides phpapi-20100525+lfs is not configured yet.
          Package php5-cli which provides phpapi-20100525+lfs is not configured yet.
        dpkg: error processing php5-gd (--configure):
         dependency problems - leaving unconfigured
        No apport report written because MaxReports is reached already
        dpkg: dependency problems prevent configuration of php5-mcrypt:
         php5-mcrypt depends on phpapi-20100525+lfs; however:
          Package phpapi-20100525+lfs is not installed.
          Package libapache2-mod-php5 which provides phpapi-20100525+lfs is not configured yet.
          Package php5-cli which provides phpapi-20100525+lfs is not configured yet.
        dpkg: error processing php5-mcrypt (--configure):
         dependency problems - leaving unconfigured
        No apport report written because MaxReports is reached already
        dpkg: dependency problems prevent configuration of php5-mysql:
         php5-mysql depends on phpapi-20100525+lfs; however:
          Package phpapi-20100525+lfs is not installed.
          Package libapache2-mod-php5 which provides phpapi-20100525+lfs is not configured yet.
          Package php5-cli which provides phpapi-20100525+lfs is not configured yet.
        dpkg: error processing php5-mysql (--configure):
         dependency problems - leaving unconfigured
        No apport report written because MaxReports is reached already
        dpkg: dependency problems prevent configuration of php5:
         php5 depends on libapache2-mod-php5 (>= 5.4.28-1+deb.sury.org~precise+1) | libapache2-mod-php5filter (>= 5.4.28-1+deb.sury.org~precise+1) | php5-cgi (>= 5.4.28-1+deb.sury.org~precise+1) | php5-fpm (>= 5.4.28-1+deb.sury.org~precise+1); however:
          Package libapache2-mod-php5 is not configured yet.
          Package libapache2-mod-php5filter is not installed.
          Package php5-cgi is not installed.
          Package php5-fpm is not installed.
        dpkg: error processing php5 (--configure):
         dependency problems - leaving unconfigured
        No apport report written because MaxReports is reached already
        Errors were encountered while processing:
         libapache2-mod-php5 php5-cli php5-curl php5-gd php5-mcrypt php5-mysql php5
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • Can I use TCP as DNS query protocol on Mac OS?

    - by Brian
    Hi, I'm using Mac OS, Snow Leopard 10.6.2, and I'm suffering from UDP packet loss during DNS queries, so my web browser is too slow to surf the internet nicely. But it worked very well when I tried a DNS query over TCP using the dig command. However, I can't find a control switch to make DNS queries use TCP. Is there a way to change it in Mac OS? Thank you.

    Read the article

  • Can I use TCP as DNS query protocol on Mac OS?

    - by Brian
    Hi, I'm using Mac OS, Snow Leopard 10.6.2, and I'm suffering from UDP packet loss during DNS queries. So I tried a DNS query over TCP using the dig command, and it worked very well. However, I can't find a control switch to make DNS queries use TCP. Is there a way to change it in Mac OS? Thank you.

    Read the article

  • How to get just value from database query in Excel?

    - by Corin
    I'm creating a spreadsheet as a collection point for information from a number of MS Access databases. I will run a query on each database to get a count of records in a particular table. Each database has the same structure but different content, as they are used in different situations. So the query returns a single value, rec_count. I've figured out how to create that query, save it and then use it as the data source. So far so good. The problem is that Excel treats the query results as a table, so instead of getting just the single value the query returns, I also get the field name. Thus the result takes up two cells instead of one. When linking in the data source, I only see Table, PivotTable Report and PivotChart as options for viewing the data. I don't want any of those. I just want the single value without any formatting, column headers, etc. Is there a way to do this in Excel 2007?

    Read the article

  • How to find and fix performance problems in ORM powered applications

    - by FransBouma
    Once in a while we get requests about how to fix performance problems with our framework. As it comes down to following the same steps and looking into the same things every single time, I decided to write a blogpost about it instead, so more people can learn from this and solve performance problems in their O/R mapper powered applications. In some parts it's focused on LLBLGen Pro but it's also usable for other O/R mapping frameworks, as the vast majority of performance problems in O/R mapper powered applications are not specific for a certain O/R mapper framework. Too often, the developer looks at the wrong part of the application, trying to fix what isn't a problem in that part, and getting frustrated that 'things are so slow with <insert your favorite framework X here>'. I'm in the O/R mapper business for a long time now (almost 10 years, full time) and as it's a small world, we O/R mapper developers know almost all tricks to pull off by now: we all know what to do to make task ABC faster and what compromises (because there are almost always compromises) to deal with if we decide to make ABC faster that way. Some O/R mapper frameworks are faster in X, others in Y, but you can be sure the difference is mainly a result of a compromise some developers are willing to deal with and others aren't. That's why the O/R mapper frameworks on the market today are different in many ways, even though they all fetch and save entities from and to a database. I'm not suggesting there's no room for improvement in today's O/R mapper frameworks, there always is, but it's not a matter of 'the slowness of the application is caused by the O/R mapper' anymore. Perhaps query generation can be optimized a bit here, row materialization can be optimized a bit there, but it's mainly coming down to milliseconds. Still worth it if you're a framework developer, but it's not much compared to the time spend inside databases and in user code: if a complete fetch takes 40ms or 50ms (from call to entity object collection), it won't make a difference for your application as that 10ms difference won't be noticed. That's why it's very important to find the real locations of the problems so developers can fix them properly and don't get frustrated because their quest to get a fast, performing application failed. Performance tuning basics and rules Finding and fixing performance problems in any application is a strict procedure with four prescribed steps: isolate, analyze, interpret and fix, in that order. It's key that you don't skip a step nor make assumptions: these steps help you find the reason of a problem which seems to be there, and how to fix it or leave it as-is. Skipping a step, or when you assume things will be bad/slow without doing analysis will lead to the path of premature optimization and won't actually solve your problems, only create new ones. The most important rule of finding and fixing performance problems in software is that you have to understand what 'performance problem' actually means. Most developers will say "when a piece of software / code is slow, you have a performance problem". But is that actually the case? If I write a Linq query which will aggregate, group and sort 5 million rows from several tables to produce a resultset of 10 rows, it might take more than a couple of milliseconds before that resultset is ready to be consumed by other logic. 
If I solely look at the Linq query, the code consuming the resultset of the 10 rows and then look at the time it takes to complete the whole procedure, it will appear to me to be slow: all that time taken to produce and consume 10 rows? But if you look closer, if you analyze and interpret the situation, you'll see it does a tremendous amount of work, and in that light it might even be extremely fast. With every performance problem you encounter, always do realize that what you're trying to solve is perhaps not a technical problem at all, but a perception problem. The second most important rule you have to understand is based on the old saying "Penny wise, Pound Foolish": the part which takes e.g. 5% of the total time T for a given task isn't worth optimizing if you have another part which takes a much larger part of the total time T for that same given task. Optimizing parts which are relatively insignificant for the total time taken is not going to bring you better results overall, even if you totally optimize that part away. This is the core reason why analysis of the complete set of application parts which participate in a given task is key to being successful in solving performance problems: No analysis -> no problem -> no solution. One warning up front: hunting for performance will always include making compromises. Fast software can be made maintainable, but if you want to squeeze as much performance out of your software, you will inevitably be faced with the dilemma of compromising one or more from the group {readability, maintainability, features} for the extra performance you think you'll gain. It's then up to you to decide whether it's worth it. In almost all cases it's not. The reason for this is simple: the vast majority of performance problems can be solved by implementing the proper algorithms, the ones with proven Big O-characteristics so you know the performance you'll get plus you know the algorithm will work. The time taken by the algorithm implementing code is inevitable: you already implemented the best algorithm. You might find some optimizations on the technical level but in general these are minor. Let's look at the four steps to see how they guide us through the quest to find and fix performance problems. Isolate The first thing you need to do is to isolate the areas in your application which are assumed to be slow. For example, if your application is a web application and a given page is taking several seconds or even minutes to load, it's a good candidate to check out. It's important to start with the isolate step because it allows you to focus on a single code path per area with a clear begin and end and ignore the rest. The rest of the steps are taken per identified problematic area. Keep in mind that isolation focuses on tasks in an application, not code snippets. A task is something that's started in your application by either another task or the user, or another program, and has a beginning and an end. You can see a task as a piece of functionality offered by your application.  Analyze Once you've determined the problem areas, you have to perform analysis on the code paths of each area, to see where the performance problems occur and which areas are not the problem. 
This is a multi-layered effort: an application which uses an O/R mapper typically consists of multiple parts: there's likely some kind of interface (web, webservice, windows etc.), a part which controls the interface and business logic, the O/R mapper part and the RDBMS, all connected with either a network or inter-process connections provided by the OS or other means. Each of these parts, including the connectivity plumbing, eat up a part of the total time it takes to complete a task, e.g. load a webpage with all orders of a given customer X. To understand which parts participate in the task / area we're investigating and how much they contribute to the total time taken to complete the task, analysis of each participating task is essential. Start with the code you wrote which starts the task, analyze the code and track the path it follows through your application. What does the code do along the way, verify whether it's correct or not. Analyze whether you have implemented the right algorithms in your code for this particular area. Remember we're looking at one area at a time, which means we're ignoring all other code paths, just the code path of the current problematic area, from begin to end and back. Don't dig in and start optimizing at the code level just yet. We're just analyzing. If your analysis reveals big architectural stupidity, it's perhaps a good idea to rethink the architecture at this point. For the rest, we're analyzing which means we collect data about what could be wrong, for each participating part of the complete application. Reviewing the code you wrote is a good tool to get deeper understanding of what is going on for a given task but ultimately it lacks precision and overview what really happens: humans aren't good code interpreters, computers are. We therefore need to utilize tools to get deeper understanding about which parts contribute how much time to the total task, triggered by which other parts and for example how many times are they called. There are two different kind of tools which are necessary: .NET profilers and O/R mapper / RDBMS profilers. .NET profiling .NET profilers (e.g. dotTrace by JetBrains or Ants by Red Gate software) show exactly which pieces of code are called, how many times they're called, and the time it took to run that piece of code, at the method level and sometimes even at the line level. The .NET profilers are essential tools for understanding whether the time taken to complete a given task / area in your application is consumed by .NET code, where exactly in your code, the path to that code, how many times that code was called by other code and thus reveals where hotspots are located: the areas where a solution can be found. Importantly, they also reveal which areas can be left alone: remember our penny wise pound foolish saying: if a profiler reveals that a group of methods are fast, or don't contribute much to the total time taken for a given task, ignore them. Even if the code in them is perhaps complex and looks like a candidate for optimization: you can work all day on that, it won't matter.  As we're focusing on a single area of the application, it's best to start profiling right before you actually activate the task/area. Most .NET profilers support this by starting the application without starting the profiling procedure just yet. 
You navigate to the particular part which is slow, start profiling in the profiler, in your application you perform the actions which are considered slow, and afterwards you get a snapshot in the profiler. The snapshot contains the data collected by the profiler during the slow action, so most data is produced by code in the area to investigate. This is important, because it allows you to stay focused on a single area. O/R mapper and RDBMS profiling .NET profilers give you a good insight in the .NET side of things, but not in the RDBMS side of the application. As this article is about O/R mapper powered applications, we're also looking at databases, and the software making it possible to consume the database in your application: the O/R mapper. To understand which parts of the O/R mapper and database participate how much to the total time taken for task T, we need different tools. There are two kind of tools focusing on O/R mappers and database performance profiling: O/R mapper profilers and RDBMS profilers. For O/R mapper profilers, you can look at LLBLGen Prof by hibernating rhinos or the Linq to Sql/LLBLGen Pro profiler by Huagati. Hibernating rhinos also have profilers for other O/R mappers like NHibernate (NHProf) and Entity Framework (EFProf) and work the same as LLBLGen Prof. For RDBMS profilers, you have to look whether the RDBMS vendor has a profiler. For example for SQL Server, the profiler is shipped with SQL Server, for Oracle it's build into the RDBMS, however there are also 3rd party tools. Which tool you're using isn't really important, what's important is that you get insight in which queries are executed during the task / area we're currently focused on and how long they took. Here, the O/R mapper profilers have an advantage as they collect the time it took to execute the query from the application's perspective so they also collect the time it took to transport data across the network. This is important because a query which returns a massive resultset or a resultset with large blob/clob/ntext/image fields takes more time to get transported across the network than a small resultset and a database profiler doesn't take this into account most of the time. Another tool to use in this case, which is more low level and not all O/R mappers support it (though LLBLGen Pro and NHibernate as well do) is tracing: most O/R mappers offer some form of tracing or logging system which you can use to collect the SQL generated and executed and often also other activity behind the scenes. While tracing can produce a tremendous amount of data in some cases, it also gives insight in what's going on. Interpret After we've completed the analysis step it's time to look at the data we've collected. We've done code reviews to see whether we've done anything stupid and which parts actually take place and if the proper algorithms have been implemented. We've done .NET profiling to see which parts are choke points and how much time they contribute to the total time taken to complete the task we're investigating. We've performed O/R mapper profiling and RDBMS profiling to see which queries were executed during the task, how many queries were generated and executed and how long they took to complete, including network transportation. All this data reveals two things: which parts are big contributors to the total time taken and which parts are irrelevant. Both aspects are very important. The parts which are irrelevant (i.e. 
don't contribute significantly to the total time taken) can be ignored from now on, we won't look at them. The parts which contribute a lot to the total time taken are important to look at. We now have to first look at the .NET profiler results, to see whether the time taken is consumed in our own code, in .NET framework code, in the O/R mapper itself or somewhere else. For example if most of the time is consumed by DbCommand.ExecuteReader, the time it took to complete the task is depending on the time the data is fetched from the database. If there was just 1 query executed, according to tracing or O/R mapper profilers / RDBMS profilers, check whether that query is optimal, uses indexes or has to deal with a lot of data. Interpret means that you follow the path from begin to end through the data collected and determine where, along the path, the most time is contributed. It also means that you have to check whether this was expected or is totally unexpected. My previous example of the 10 row resultset of a query which groups millions of rows will likely reveal that a long time is spend inside the database and almost no time is spend in the .NET code, meaning the RDBMS part contributes the most to the total time taken, the rest is compared to that time, irrelevant. Considering the vastness of the source data set, it's expected this will take some time. However, does it need tweaking? Perhaps all possible tweaks are already in place. In the interpret step you then have to decide that further action in this area is necessary or not, based on what the analysis results show: if the analysis results were unexpected and in the area where the most time is contributed to the total time taken is room for improvement, action should be taken. If not, you can only accept the situation and move on. In all cases, document your decision together with the analysis you've done. If you decide that the perceived performance problem is actually expected due to the nature of the task performed, it's essential that in the future when someone else looks at the application and starts asking questions you can answer them properly and new analysis is only necessary if situations changed. Fix After interpreting the analysis results you've concluded that some areas need adjustment. This is the fix step: you're actively correcting the performance problem with proper action targeted at the real cause. In many cases related to O/R mapper powered applications it means you'll use different features of the O/R mapper to achieve the same goal, or apply optimizations at the RDBMS level. It could also mean you apply caching inside your application (compromise memory consumption over performance) to avoid unnecessary re-querying data and re-consuming the results. After applying a change, it's key you re-do the analysis and interpretation steps: compare the results and expectations with what you had before, to see whether your actions had any effect or whether it moved the problem to a different part of the application. Don't fall into the trap to do partly analysis: do the full analysis again: .NET profiling and O/R mapper / RDBMS profiling. It might very well be that the changes you've made make one part faster but another part significantly slower, in such a way that the overall problem hasn't changed at all. 
Performance tuning is dealing with compromises and making choices: to use one feature over the other, to accept a higher memory footprint, to go away from the strict-OO path and execute queries directly onto the RDBMS, these are choices and compromises which will cross your path if you want to fix performance problems with respect to O/R mappers or data-access and databases in general. In most cases it's not a big issue: alternatives are often good choices too and the compromises aren't that hard to deal with. What is important is that you document why you made a choice, a compromise: which analysis data, which interpretation led you to the choice made. This is key for good maintainability in the years to come. Most common performance problems with O/R mappers Below is an incomplete list of common performance problems related to data-access / O/R mappers / RDBMS code. It will help you with fixing the hotspots you found in the interpretation step. SELECT N+1: (Lazy-loading specific). Lazy loading triggered performance bottlenecks. Consider a list of Orders bound to a grid. You have a Field mapped onto a related field in Order, Customer.CompanyName. Showing this column in the grid will make the grid fetch (indirectly) for each row the Customer row. This means you'll get for the single list not 1 query (for the orders) but 1+(the number of orders shown) queries. To solve this: use eager loading using a prefetch path to fetch the customers with the orders. SELECT N+1 is easy to spot with an O/R mapper profiler or RDBMS profiler: if you see a lot of identical queries executed at once, you have this problem. Prefetch paths using many path nodes or sorting, or limiting. Eager loading problem. Prefetch paths can help with performance, but as 1 query is fetched per node, it can be the number of data fetched in a child node is bigger than you think. Also consider that data in every node is merged on the client within the parent. This is fast, but it also can take some time if you fetch massive amounts of entities. If you keep fetches small, you can use tuning parameters like the ParameterizedPrefetchPathThreshold setting to get more optimal queries. Deep inheritance hierarchies of type Target Per Entity/Type. If you use inheritance of type Target per Entity / Type (each type in the inheritance hierarchy is mapped onto its own table/view), fetches will join subtype- and supertype tables in many cases, which can lead to a lot of performance problems if the hierarchy has many types. With this problem, keep inheritance to a minimum if possible, or switch to a hierarchy of type Target Per Hierarchy, which means all entities in the inheritance hierarchy are mapped onto the same table/view. Of course this has its own set of drawbacks, but it's a compromise you might want to take. Fetching massive amounts of data by fetching large lists of entities. LLBLGen Pro supports paging (and limiting the # of rows returned), which is often key to process through large sets of data. Use paging on the RDBMS if possible (so a query is executed which returns only the rows in the page requested). When using paging in a web application, be sure that you switch server-side paging on on the datasourcecontrol used. In this case, paging on the grid alone is not enough: this can lead to fetching a lot of data which is then loaded into the grid and paged there. Keep note that analyzing queries for paging could lead to the false assumption that paging doesn't occur, e.g. 
when the query contains a field of type ntext/image/clob/blob and DISTINCT can't be applied while it should have (e.g. due to a join): the datareader will do DISTINCT filtering on the client. this is a little slower but it does perform paging functionality on the data-reader so it won't fetch all rows even if the query suggests it does. Fetch massive amounts of data because blob/clob/ntext/image fields aren't excluded. LLBLGen Pro supports field exclusion for queries. You can exclude fields (also in prefetch paths) per query to avoid fetching all fields of an entity, e.g. when you don't need them for the logic consuming the resultset. Excluding fields can greatly reduce the amount of time spend on data-transport across the network. Use this optimization if you see that there's a big difference between query execution time on the RDBMS and the time reported by the .NET profiler for the ExecuteReader method call. Doing client-side aggregates/scalar calculations by consuming a lot of data. If possible, try to formulate a scalar query or group by query using the projection system or GetScalar functionality of LLBLGen Pro to do data consumption on the RDBMS server. It's far more efficient to process data on the RDBMS server than to first load it all in memory, then traverse the data in-memory to calculate a value. Using .ToList() constructs inside linq queries. It might be you use .ToList() somewhere in a Linq query which makes the query be run partially in-memory. Example: var q = from c in metaData.Customers.ToList() where c.Country=="Norway" select c; This will actually fetch all customers in-memory and do an in-memory filtering, as the linq query is defined on an IEnumerable<T>, and not on the IQueryable<T>. Linq is nice, but it can often be a bit unclear where some parts of a Linq query might run. Fetching all entities to delete into memory first. To delete a set of entities it's rather inefficient to first fetch them all into memory and then delete them one by one. It's more efficient to execute a DELETE FROM ... WHERE query on the database directly to delete the entities in one go. LLBLGen Pro supports this feature, and so do some other O/R mappers. It's not always possible to do this operation in the context of an O/R mapper however: if an O/R mapper relies on a cache, these kind of operations are likely not supported because they make it impossible to track whether an entity is actually removed from the DB and thus can be removed from the cache. Fetching all entities to update with an expression into memory first. Similar to the previous point: it is more efficient to update a set of entities directly with a single UPDATE query using an expression instead of fetching the entities into memory first and then updating the entities in a loop, and afterwards saving them. It might however be a compromise you don't want to take as it is working around the idea of having an object graph in memory which is manipulated and instead makes the code fully aware there's a RDBMS somewhere. Conclusion Performance tuning is almost always about compromises and making choices. It's also about knowing where to look and how the systems in play behave and should behave. The four steps I provided should help you stay focused on the real problem and lead you towards the solution. Knowing how to optimally use the systems participating in your own code (.NET framework, O/R mapper, RDBMS, network/services) is key for success as well as knowing what's going on inside the application you built. 
I hope you'll find this guide useful in tracking down performance problems and dealing with them in a useful way.  
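
    To make the SELECT N+1 problem described above concrete, here is a hedged SQL illustration with hypothetical Orders and Customers tables: the first block is the query pattern lazy loading tends to produce, the second is the single joined query an eager-loading prefetch path effectively boils down to.

        -- Lazy loading: one query for the orders, then one query per order for its customer.
        SELECT OrderID, CustomerID, OrderDate FROM Orders;
        SELECT CustomerID, CompanyName FROM Customers WHERE CustomerID = 1;
        SELECT CustomerID, CompanyName FROM Customers WHERE CustomerID = 2;
        -- ... and so on, once per order row shown in the grid ...

        -- Eager loading: a single round trip that brings back the same data.
        SELECT o.OrderID, o.OrderDate, c.CompanyName
        FROM Orders AS o
        JOIN Customers AS c ON c.CustomerID = o.CustomerID;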

    Read the article

  • MySQL query being performed when PHP if condition not met?

    - by Ryan
    The script I'm using is:

        if($profile['username'] == $user['username']) {
            $db->query("UPDATE users SET newcomments = 0 WHERE username = '$user[username]'");
            echo "This is a test";
        }

    (Note that $db->query is exactly the same as mysql_query.) For some very odd reason, the MySQL query is being performed even if the defined condition is false. The "This is a test" works properly and only appears when the condition is met, but the MySQL query is performed anyway. What's the problem with it?

    Read the article
