Search Results

Search found 8083 results on 324 pages for 'total newbie'.

Page 208/324

  • MySQL query, SUM of multiple columns

    - by Mick
    Hi, I have multiple columns in a MySQL table. Three of the columns are named i100s, i60s and i25s, and what I want to do is get the sum of all three. Currently I have this code: $query = "SELECT SUM(i100s),SUM(i60s),SUM(i25s) AS tkit FROM event WHERE acc='100'"; $result = mysql_query($query) or die(mysql_error()); $row = mysql_fetch_assoc($result); $total = $row['tkit']; But it is not returning the correct result. Can anyone help me, please? Thanks, Mick
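    A likely fix, sketched from the query above (table and column names are taken from the question): in the original statement the tkit alias only applies to SUM(i25s), so the other two sums come back under different keys and are never read. Summing the three columns inside a single aggregate returns one combined value:

        -- Sum the three columns into one aliased value; NULL cells are treated as 0.
        SELECT SUM(COALESCE(i100s, 0) + COALESCE(i60s, 0) + COALESCE(i25s, 0)) AS tkit
        FROM event
        WHERE acc = '100';

    With that query, $row['tkit'] holds the combined total.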

    Read the article

  • C# switch: case not falling through to other cases limitation

    - by Mike Fielden
    This question is kind of an add-on to this question. In C#, a switch case cannot fall through to other cases; this causes a compilation error. In this case I am just adding some number to the month total for the selected month and each subsequent month thereafter. (simple example, not meant to be real)

        switch (month) {
            case 0: add something to month totals
            case 1: add something to month totals
            case 2: add something to month totals
            default: break;
        }

    Is there a logical alternative to this in C# without having to write out a ton of if statements?

        if (month <= 0) add something to month
        if (month <= 1) add something to month
        if (month <= 2) add something to month
        .... etc
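    A hedged sketch of two idiomatic alternatives (the total and monthAmounts names are made up for illustration): C# allows explicit fall-through with goto case, and because every month from the selected one onward gets an amount, the whole switch can also collapse into a loop:

        // Option 1: explicit fall-through with goto case.
        switch (month)
        {
            case 0:
                total += monthAmounts[0];
                goto case 1;
            case 1:
                total += monthAmounts[1];
                goto case 2;
            case 2:
                total += monthAmounts[2];
                break;
        }

        // Option 2: the cumulative pattern expressed as a loop over an array.
        for (int m = month; m < monthAmounts.Length; m++)
        {
            total += monthAmounts[m];
        }

    The loop version scales to twelve months without any extra cases.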

    Read the article

  • MySQL event to update another database table

    - by Lee
    Hey all, I have just taken over a project for a client and the database schema is in a total mess. I would like to rename a load of fields and make it a proper relational database, but doing this will be a painstaking process, as they have an API running off it as well. So the idea is to create a new database and start rewriting the code to use that instead. But I need a way to keep the two sets of tables in sync during this process. Would you agree that I should use MySQL events to keep updating the new tables on inserts, updates and deletes? Or can you suggest a better way? Hope you can advise! Thanks for any input.
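    A hedged sketch of the usual alternative: row-level triggers on the legacy tables that mirror each change into the new schema (the database, table and column names below are hypothetical). Unlike scheduled events, triggers fire with every statement, so nothing slips through between runs:

        -- Hypothetical example: mirror inserts from a legacy table into the new schema.
        DELIMITER $$
        CREATE TRIGGER legacy_customer_ai
        AFTER INSERT ON legacy_db.customer
        FOR EACH ROW
        BEGIN
            INSERT INTO new_db.customers (id, full_name, email)
            VALUES (NEW.id, NEW.name, NEW.email);
        END$$
        DELIMITER ;

        -- Matching AFTER UPDATE and AFTER DELETE triggers keep the copies aligned.

    MySQL events are better suited to periodic batch work; for keeping two schemas continuously in step during a migration, per-row triggers are the more common pattern.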

    Read the article

  • linux user login/logout log for computer restriction

    - by Cedric
    Hi! I would like to know how to log the logins and logouts of a user. I know it's possible to use the command "last", but that command is based on a file the user has read/write permission on, hence the possibility of tampering with the data. I would like to keep this log over two months. Why do I want to do this? I would like to prevent a normal user from using a computer for more than an hour a day (except weekends) and more than 10 hours in total a week. Cedric. System used: Kubuntu. Programming language: bash script.
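    A minimal sketch of the accounting half of this, assuming the wtmp-based output of `last` and GNU date (multi-day sessions and "still logged in" lines are not handled, and enforcement, e.g. a cron job that warns or locks the account past 60 minutes, would sit on top of it):

        #!/bin/bash
        # Sketch: total minutes user $1 has been logged in today, according to `last`.
        user="${1:?usage: $0 username}"
        today="$(date '+%b %e')"        # e.g. "Apr  7", the date format `last` prints

        total=0
        while read -r line; do
            # `last` prints completed session durations like (00:42) at line end
            dur=$(echo "$line" | grep -o '([0-9][0-9]*:[0-9][0-9])' | tr -d '()')
            [ -z "$dur" ] && continue
            h=${dur%%:*}; m=${dur##*:}
            total=$(( total + 10#$h * 60 + 10#$m ))
        done < <(last "$user" | grep "$today")

        echo "$user has used $total minutes today"
        # A root-owned cron job could compare $total against the daily/weekly limits.

    To make the record tamper-resistant, the cron job can copy the relevant totals into a root-only file each night, so the two-month history does not depend on wtmp staying intact.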

    Read the article

  • phing FtpDeploy "connection to host failed"

    - by Jorre
    I'm getting the following error when trying to deploy a ZIP file to a remote FTP server. I tried connecting to the server using an FTP client (FileZilla) and all goes well. Also, when connecting to a public FTP server like ftp.belnet.be, connections work fine. I'm trying to send the file to a vsftpd server behind a router using port forwarding. Again, this works fine from any location using FileZilla, but phing is not connecting:

        BUILD FAILED
        /deployment/build.xml:60:12: Could not connect to FTP server x.x.x.x on port 21: Connection to host failed
        Total time: 2 minutes 30.09 seconds

    Read the article

  • Is there a way to list all the database queries my wordpress install is making for a given event?

    - by mattloak
    Using a method similar to the one described here: (http://stackoverflow.com/questions/14873/how-do-i-display-database-query-statistics-on-wordpress-site), I can see the total number of queries being made when I load a page. Now I'd like to output a list of the queries that are being made when the page loads. This would allow me to see who my biggest resource hogs are without having to go through the process of elimination of all my plugins and theme scripts. How would I do this? Thanks.
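    A sketch of the standard WordPress approach, using the documented SAVEQUERIES constant and the $wpdb->queries array (where to print the dump, and gating it to administrators, are choices made here for illustration):

        <?php
        // In wp-config.php: make WordPress record every query, its duration and caller.
        define('SAVEQUERIES', true);

        // In the theme's footer.php (example placement), dump the list for admins only:
        if (defined('SAVEQUERIES') && SAVEQUERIES && current_user_can('manage_options')) {
            global $wpdb;
            echo '<pre>';
            foreach ($wpdb->queries as $q) {
                // $q[0] = SQL, $q[1] = time taken in seconds, $q[2] = calling function trace
                printf("%f sec | %s | %s\n\n", $q[1], $q[2], $q[0]);
            }
            echo '</pre>';
        }

    The caller trace in $q[2] is what identifies which plugin or theme function issued each query, so the resource hogs show up without disabling plugins one by one.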

    Read the article

  • How to sync SQLite data to a server on iPhone

    - by crawler486
    Hello guys, I'm a total noob when it comes to iPhone development and have been tasked to create a database app that can download and upload SQLite data to and from a server via HTTP using web services. So far I already have a form for retrieving and saving data to a SQLite database, and now I need some information on how I can upload the SQLite data to a server. The SQLite database will only have one table with 3 columns and about 200 rows max. I hope somebody can point me in the right direction or show me some sample code. Appreciate any help.

    Read the article

  • Changing a SUM returned NULL to zero

    - by Lee_McIntosh
    I have a stored procedure as follows: CREATE PROC [dbo].[Incidents] (@SiteName varchar(200)) AS SELECT ( SELECT SUM(i.Logged) FROM tbl_Sites s INNER JOIN tbl_Incidents i ON s.Location = i.Location WHERE s.Sites = @SiteName AND i.[month] = DATEADD(mm, DATEDIFF(mm, 0, GetDate()) -1,0) GROUP BY s.Sites ) AS LoggedIncidents 'tbl_Sites contains a list of reported-on sites. 'tbl_Incidents contains a generated list of total incidents by site/date (monthly). 'If a site doesn't have any incidents that month it won't be listed. The problem I'm having is that one site doesn't have any incidents this month, and as such I get a NULL value returned for that site when I run this sproc, but I need a zero returned to be used within a chart in SSRS. I've tried using COALESCE and ISNULL to no avail: SELECT COALESCE(SUM(c.Logged,0)) SELECT SUM(ISNULL(c.Logged,0)) Is there a way to get this formatted correctly? Cheers, Lee
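    A hedged sketch of the usual fix: the NULL appears because the correlated subquery returns no row at all for a site with no incidents, so ISNULL/COALESCE has to wrap the whole subquery rather than the column inside SUM:

        -- Wrap the entire scalar subquery so a missing month becomes 0 instead of NULL.
        SELECT ISNULL(
            (SELECT SUM(i.Logged)
             FROM tbl_Sites s
             INNER JOIN tbl_Incidents i ON s.Location = i.Location
             WHERE s.Sites = @SiteName
               AND i.[month] = DATEADD(mm, DATEDIFF(mm, 0, GETDATE()) - 1, 0)
             GROUP BY s.Sites), 0) AS LoggedIncidents

    COALESCE((SELECT ...), 0) works the same way; the key point is that the wrapping happens outside the subquery, where the empty result has already become NULL.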

    Read the article

  • time required in java

    - by Amol
    I would like to know how much time is required to execute the conditional and looping constructs individually. For example, if I had the option to use an "if/else", "while", "for", or "foreach" loop, which one would execute faster? I know the difference would be very small, and most would say it doesn't matter, but if I had a program that processed thousands of records, this point would come into the picture. I would also like to know: if I declare a variable at the beginning in Java, versus declaring it just before I use it, will it make any difference? Will the total time required be reduced? If yes, which is used in practice (declaring variables at the beginning, or declaring them just before they are used)?
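    A minimal sketch of how to measure this instead of guessing (a crude System.nanoTime() timing, not a proper benchmark harness; JIT warm-up effects dominate small loops, hence the warm-up pass):

        public class LoopTiming {
            public static void main(String[] args) {
                int[] data = new int[1000000];
                // warm up so the JIT compiles both methods before they are timed
                for (int i = 0; i < 5; i++) { sumFor(data); sumForEach(data); }

                long t0 = System.nanoTime();
                long a = sumFor(data);
                long t1 = System.nanoTime();
                long b = sumForEach(data);
                long t2 = System.nanoTime();

                System.out.printf("for: %d ns, for-each: %d ns (sums %d/%d)%n",
                                  t1 - t0, t2 - t1, a, b);
            }

            static long sumFor(int[] d) {
                long s = 0;
                for (int i = 0; i < d.length; i++) s += d[i];
                return s;
            }

            static long sumForEach(int[] d) {
                long s = 0;
                for (int v : d) s += v;
                return s;
            }
        }

    On the second question: where a local variable is declared has no effect on runtime, because the JVM sizes the whole stack frame when the method is entered; declaring variables close to where they are used is purely a readability choice.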

    Read the article

  • Why doesn't Maven's mvn clean ever work the first time?

    - by hoffmandirt
    Nine times out of ten when I run mvn clean on my projects I experience a build error. I have to execute mvn clean multiple times until the build error goes away. Does anyone else experience this? Is there any way to fix this within Maven? If not, how do you get around it? I wrote a bat file that deletes the target folders and that works well, but it's not practical when you are working on multiple projects. I am using Maven 2.2.1.

        [ERROR] BUILD ERROR
        [INFO] ------------------------------------------------------------------------
        [INFO] Failed to delete directory: C:\Documents and Settings\user\My Documents\software-development\a\b\c\application-domain\target.
               Reason: Unable to delete directory C:\Documents and Settings\user\My Documents\software-development\a\b\c\application-domain\target\classes\com\a\b
        [INFO] ------------------------------------------------------------------------
        [INFO] For more information, run Maven with the -e switch
        [INFO] ------------------------------------------------------------------------
        [INFO] Total time: 6 seconds
        [INFO] Finished at: Fri Oct 23 15:22:48 EDT 2009
        [INFO] Final Memory: 11M/254M
        [INFO] ------------------------------------------------------------------------

    Read the article

  • Importing content into Drupal

    - by anru
    I have a WordPress site with 5k posts, and each post has on average 25 comments, so 125k nodes in total have to be added. I need to import those posts and comments into Drupal 6. I have written a script to import the posts and comments into Drupal via Drupal's cron service, but the cron run keeps timing out, because importing 125k nodes one by one is very slow. What can I do to improve the import speed? I am using Drupal's built-in node_save() and comment_save() methods. I have not found a way to use customized SQL queries to speed up the import yet. I execute my script through Drupal's cron.php; that means even if I set 'max_execution_time' to unlimited, that only affects PHP, and the Apache server has its own timeout setting.
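    A hedged sketch of an approach that takes Apache out of the picture entirely: bootstrap Drupal from a command-line script (or run it via drush) and loop over the source data with the same node_save()/comment_save() calls. The install path, content type, field layout and the load_wordpress_posts() loader below are hypothetical, not taken from the original script:

        <?php
        // import.php - run from the shell: php import.php
        // Bootstraps Drupal 6 fully so node_save()/comment_save() are available;
        // only PHP's own limits apply, and those are lifted below.
        $_SERVER['REMOTE_ADDR'] = '127.0.0.1';
        chdir('/var/www/drupal');                       // assumption: Drupal 6 root
        require_once './includes/bootstrap.inc';
        drupal_bootstrap(DRUPAL_BOOTSTRAP_FULL);

        set_time_limit(0);                              // no web-server timeout on the CLI
        ini_set('memory_limit', '512M');

        foreach (load_wordpress_posts() as $post) {     // hypothetical loader
          $node = new stdClass();
          $node->type   = 'story';
          $node->title  = $post['title'];
          $node->body   = $post['body'];
          $node->uid    = 1;
          $node->status = 1;
          node_save($node);

          foreach ($post['comments'] as $c) {           // hypothetical structure
            comment_save(array(
              'nid'     => $node->nid,
              'subject' => $c['subject'],
              'comment' => $c['body'],
              'uid'     => 0,
              'status'  => COMMENT_PUBLISHED,
            ));
          }
        }

    Running outside cron.php also lets the import be restarted in batches (for example 1,000 posts per run) without blocking the site's normal cron tasks.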

    Read the article

  • What does "warning: unable to unlink website: Operation not permitted" mean when checking out a Git branch?

    - by James A. Rosen
    I'm trying to create a local branch that tracks a remote branch. Here's what I get: > git checkout master > git push origin origin:refs/heads/myBranch Total 0 (delta 0), reused 0 (delta 0) To [email protected]:myrepo/myproject.git * [new branch] origin/HEAD -> myBranch > git fetch origin > git checkout --track -b myBranch origin/myBranch warning: unable to unlink website: Operation not permitted Branch myBranch set up to track remote branch myBranch from origin. Switched to a new branch 'myBranch' What does "warning: unable to unlink website: Operation not permitted" mean? Did everything work fine?

    Read the article

  • Doctrine_Table_Exception: Unknown relation alias [closed]

    - by Sadiqur Rahman
    I am getting the following error message:

        Doctrine_Table_Exception: Unknown relation alias shoesTable in
        /home/public_html/projects/giftshoes/system/database/doctrine/Doctrine/Relation/Parser.php on line 237

    My code is below:

        ------------BaseShoe------------
        <?php
        // Connection Component Binding
        Doctrine_Manager::getInstance()->bindComponent('Shoes', 'sadiqsof_giftshoes');

        /**
         * BaseShoes
         *
         * This class has been auto-generated by the Doctrine ORM Framework
         *
         * @property integer $sku
         * @property string $name
         * @property string $keywords
         * @property string $description
         * @property string $manufacturer
         * @property float $sale_price
         * @property float $price
         * @property string $url
         * @property string $image
         * @property string $category
         * @property Doctrine_Collection $Viewes
         *
         * @package ##PACKAGE##
         * @subpackage ##SUBPACKAGE##
         * @author ##NAME## <##EMAIL##>
         * @version SVN: $Id: Builder.php 6820 2009-11-30 17:27:49Z jwage $
         */
        abstract class BaseShoes extends Doctrine_Record
        {
            public function setTableDefinition()
            {
                $this->setTableName('shoes');
                $this->hasColumn('sku', 'integer', 4, array('type' => 'integer', 'fixed' => 0, 'unsigned' => false, 'primary' => true, 'autoincrement' => false, 'length' => '4'));
                $this->hasColumn('name', 'string', 255, array('type' => 'string', 'fixed' => 0, 'unsigned' => false, 'primary' => false, 'notnull' => true, 'autoincrement' => false, 'length' => '255'));
                $this->hasColumn('keywords', 'string', 255, array('type' => 'string', 'fixed' => 0, 'unsigned' => false, 'primary' => false, 'notnull' => true, 'autoincrement' => false, 'length' => '255'));
                $this->hasColumn('description', 'string', null, array('type' => 'string', 'fixed' => 0, 'unsigned' => false, 'primary' => false, 'notnull' => true, 'autoincrement' => false, 'length' => ''));
                $this->hasColumn('manufacturer', 'string', 20, array('type' => 'string', 'fixed' => 0, 'unsigned' => false, 'primary' => false, 'notnull' => true, 'autoincrement' => false, 'length' => '20'));
                $this->hasColumn('sale_price', 'float', null, array('type' => 'float', 'fixed' => 0, 'unsigned' => false, 'primary' => false, 'notnull' => true, 'autoincrement' => false, 'length' => ''));
                $this->hasColumn('price', 'float', null, array('type' => 'float', 'fixed' => 0, 'unsigned' => false, 'primary' => false, 'notnull' => true, 'autoincrement' => false, 'length' => ''));
                $this->hasColumn('url', 'string', null, array('type' => 'string', 'fixed' => 0, 'unsigned' => false, 'primary' => false, 'notnull' => true, 'autoincrement' => false, 'length' => ''));
                $this->hasColumn('image', 'string', null, array('type' => 'string', 'fixed' => 0, 'unsigned' => false, 'primary' => false, 'notnull' => true, 'autoincrement' => false, 'length' => ''));
                $this->hasColumn('category', 'string', 50, array('type' => 'string', 'fixed' => 0, 'unsigned' => false, 'primary' => false, 'notnull' => true, 'autoincrement' => false, 'length' => '50'));
            }

            public function setUp()
            {
                parent::setUp();
                $this->hasMany('Viewes', array('local' => 'sku', 'foreign' => 'sku'));
            }
        }

        --------------ShoesTable--------
        <?php
        class ShoesTable extends Doctrine_Table
        {
            function getAllShoes($from = 0, $total = 15)
            {
                $q = Doctrine_Query::create()
                    ->from('Shoes')
                    ->limit($total)
                    ->offset($from);
                return $q->execute(array(), Doctrine::HYDRATE_ARRAY);
            }
        }

        ---------------Shoes Model-----------------
        <?php
        /**
         * Shoes
         *
         * This class has been auto-generated by the Doctrine ORM Framework
         *
         * @package ##PACKAGE##
         * @subpackage ##SUBPACKAGE##
         * @author ##NAME## <##EMAIL##>
         * @version SVN: $Id: Builder.php 6820 2009-11-30 17:27:49Z jwage $
         */
        class Shoes extends BaseShoes
        {
            function __construct()
            {
                parent::__construct();
                $this->shoesTable = Doctrine::getTable('Shoes');
            }

            function getAllShoes()
            {
                return $this->shoesTable->getAllShoes();
            }
        }
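    A hedged guess at the cause, based only on the code above: assigning to an undeclared property of a Doctrine_Record ($this->shoesTable = ...) goes through the record's magic setter, which treats the unknown name as a relation alias and ends up in Relation/Parser.php with "Unknown relation alias shoesTable". A sketch of a fix is to keep the table object out of record state:

        <?php
        // Sketch: do not store the table object as a record "property".
        class Shoes extends BaseShoes
        {
            public function getAllShoes()
            {
                // Doctrine::getTable('Shoes') returns the ShoesTable instance,
                // so its custom finder can be called directly.
                return Doctrine::getTable('Shoes')->getAllShoes();
            }
        }

        // Or skip the record entirely and call the table class from calling code:
        $shoes = Doctrine::getTable('Shoes')->getAllShoes();

    Custom finders generally belong on the Doctrine_Table subclass anyway, so the record does not need to hold a reference to its own table.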

    Read the article

  • ZIPLIB problem on opening zip files

    - by Ahmet vardar
    I am using this class to create zip <?php // vim: expandtab sw=4 ts=4 sts=4: class zipfile { var $datasec = array(); var $ctrl_dir = array(); var $eof_ctrl_dir = "\x50\x4b\x05\x06\x00\x00\x00\x00"; var $old_offset = 0; function unix2DosTime($unixtime = 0) { $timearray = ($unixtime == 0) ? getdate() : getdate($unixtime); if ($timearray['year'] < 1980) { $timearray['year'] = 1980; $timearray['mon'] = 1; $timearray['mday'] = 1; $timearray['hours'] = 0; $timearray['minutes'] = 0; $timearray['seconds'] = 0; } // end if return (($timearray['year'] - 1980) << 25) | ($timearray['mon'] << 21) | ($timearray['mday'] << 16) | ($timearray['hours'] << 11) | ($timearray['minutes'] << 5) | ($timearray['seconds'] >> 1); } // end of the 'unix2DosTime()' method function addFile($data, $name, $time = 0) { $name = str_replace('\\', '/', $name); $dtime = dechex($this->unix2DosTime($time)); $hexdtime = '\x' . $dtime[6] . $dtime[7] . '\x' . $dtime[4] . $dtime[5] . '\x' . $dtime[2] . $dtime[3] . '\x' . $dtime[0] . $dtime[1]; eval('$hexdtime = "' . $hexdtime . '";'); $fr = "\x50\x4b\x03\x04"; $fr .= "\x14\x00"; // ver needed to extract $fr .= "\x00\x00"; // gen purpose bit flag $fr .= "\x08\x00"; // compression method $fr .= $hexdtime; // last mod time and date // "local file header" segment $unc_len = strlen($data); $crc = crc32($data); $zdata = gzcompress($data); $zdata = substr(substr($zdata, 0, strlen($zdata) - 4), 2); // fix crc bug $c_len = strlen($zdata); $fr .= pack('V', $crc); // crc32 $fr .= pack('V', $c_len); // compressed filesize $fr .= pack('V', $unc_len); // uncompressed filesize $fr .= pack('v', strlen($name)); // length of filename $fr .= pack('v', 0); // extra field length $fr .= $name; // "file data" segment $fr .= $zdata; // "data descriptor" segment (optional but necessary if archive is not // served as file) $fr .= pack('V', $crc); // crc32 $fr .= pack('V', $c_len); // compressed filesize $fr .= pack('V', $unc_len); // uncompressed filesize // add this entry to array $this -> datasec[] = $fr; // now add to central directory record $cdrec = "\x50\x4b\x01\x02"; $cdrec .= "\x00\x00"; // version made by $cdrec .= "\x14\x00"; // version needed to extract $cdrec .= "\x00\x00"; // gen purpose bit flag $cdrec .= "\x08\x00"; // compression method $cdrec .= $hexdtime; // last mod time & date $cdrec .= pack('V', $crc); // crc32 $cdrec .= pack('V', $c_len); // compressed filesize $cdrec .= pack('V', $unc_len); // uncompressed filesize $cdrec .= pack('v', strlen($name) ); // length of filename $cdrec .= pack('v', 0 ); // extra field length $cdrec .= pack('v', 0 ); // file comment length $cdrec .= pack('v', 0 ); // disk number start $cdrec .= pack('v', 0 ); // internal file attributes $cdrec .= pack('V', 32 ); // external file attributes - 'archive' bit set $cdrec .= pack('V', $this -> old_offset ); // relative offset of local header $this -> old_offset += strlen($fr); $cdrec .= $name; // optional extra field, file comment goes here // save to central directory $this -> ctrl_dir[] = $cdrec; } // end of the 'addFile()' method function file() { $data = implode('', $this -> datasec); $ctrldir = implode('', $this -> ctrl_dir); return $data . $ctrldir . $this -> eof_ctrl_dir . pack('v', sizeof($this -> ctrl_dir)) . // total # of entries "on this disk" pack('v', sizeof($this -> ctrl_dir)) . // total # of entries overall pack('V', strlen($ctrldir)) . // size of central dir pack('V', strlen($data)) . 
    // offset to start of central dir "\x00\x00"; // .zip file comment length } // end of the 'file()' method function addFiles($files ) { foreach($files as $file) { if (is_file($file)) //directory check { $data = implode("",file($file)); $this->addFile($data,$file); } } } function output($file) { $fp=fopen($file,"w"); fwrite($fp,$this->file()); fclose($fp); } } // end of the 'zipfile' class ?> It creates the zip file, but when I try to open it on Mac OS X Snow Leopard and Windows 7, it doesn't open. On the Mac I got this error: Error 1: operation not permitted. Any idea? Thanks
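    If the hand-rolled writer isn't essential, a minimal sketch using PHP's built-in ZipArchive extension (bundled since PHP 5.2) sidesteps the manual header building entirely, and the archives it produces open in Archive Utility and Windows Explorer. The filenames passed in at the bottom are just examples:

        <?php
        // Sketch: build a zip with ZipArchive instead of manually packed headers.
        function create_zip(array $files, $destination)
        {
            $zip = new ZipArchive();
            if ($zip->open($destination, ZipArchive::CREATE | ZipArchive::OVERWRITE) !== true) {
                return false;                          // could not create the archive
            }
            foreach ($files as $file) {
                if (is_file($file)) {
                    $zip->addFile($file, basename($file));   // store under its base name
                }
            }
            return $zip->close();                      // close() is what writes the file
        }

        create_zip(array('report.txt', 'data.csv'), 'backup.zip');   // hypothetical names

    If the custom class has to stay (for example to stream the archive without touching disk), the usual suspects to audit are the general-purpose bit flag versus the appended data descriptor and binary-safe file writing, but that is a debugging exercise beyond this sketch.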

    Read the article

  • Simple reduction (NP completeness)

    - by Allen
    Hey guys, I'm looking for a means to prove that the bicriteria shortest path problem is NP-complete. That is, given a graph with lengths and weights, I need to know whether there exists a path in the graph from s to t with total length <= L and weight <= W. I know that I must take an NP-complete problem and reduce it to this one. We have at our disposal the following problems to choose from: 3-SAT, independent set, vertex cover, Hamiltonian cycle, and 3-dimensional matching. Any ideas on which may be viable? Thanks
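    For context, a hedged sketch of the textbook hardness argument (it reduces from PARTITION, which is not on the list of allowed starting problems, so it may not satisfy the assignment as stated, but it shows where the difficulty lives): given positive integers a_1, ..., a_n summing to 2B, build vertices v_0, ..., v_n and, for each i, two parallel edges from v_{i-1} to v_i, one with length a_i and weight 0, the other with length 0 and weight a_i. Every v_0-to-v_n path P then satisfies

        \mathrm{length}(P) + \mathrm{weight}(P) = \sum_{i=1}^{n} a_i = 2B,

    so a path with length <= B and weight <= B exists iff both are exactly B, i.e. iff the numbers can be split into two halves of equal sum. Parallel edges can be removed by subdividing one edge of each pair. Membership in NP is the easy half: a candidate path can be checked against L and W in polynomial time.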

    Read the article

  • jQuery plugin, return value from function

    - by Marius
    Hello there. Markup:

        <input type="text" name="email" />

    Code:

        $(':text').focusout(function(){
            $(this).validate(function(){
                $(this).attr('name');
            });
        });

    Plugin:

        (function($){
            $.fn.validate = function(type) {
                return this.each(function(type) {
                    if (type == 'email') {
                        matches = this.val().match('/.+@.+\..{2,7}/');
                        (matches != null) ? alert('valid') : alert('invalid');
                    }
                    /*else if (type == 'name') {
                    } else if (type == 'age') {
                    } else if (type == 'text') {
                    }*/
                    else {
                        alert('total failure');
                    }
                });
            };
        })(jQuery);

    The problem is that when I execute the code above, it runs the plugin as if type were the string "function(){ $(this).attr('name'); });" instead of executing it as a function. How do I solve this? Thank you for your time. Kind regards, Marius
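    A hedged sketch of what likely goes wrong and one way to restructure it (not the only way): the caller passes an anonymous function as type, the each() callback shadows that parameter, and inside each() the element is a plain DOM node, so this.val() and the quoted regex never behave as intended. Passing the name string directly and using a regex literal looks like this:

        // Caller: pass the field's name as the validation type, not a callback.
        $(':text').focusout(function () {
            $(this).validate($(this).attr('name'));
        });

        // Plugin: keep the outer `type` argument; don't shadow it in each().
        (function ($) {
            $.fn.validate = function (type) {
                return this.each(function () {
                    var value = $(this).val();                        // wrap the DOM node
                    if (type === 'email') {
                        var matches = value.match(/.+@.+\..{2,7}/);   // regex literal, not a string
                        alert(matches !== null ? 'valid' : 'invalid');
                    } else {
                        alert('total failure');
                    }
                });
            };
        })(jQuery);

    Returning this.each(...) keeps the plugin chainable, which is the usual jQuery plugin convention.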

    Read the article

  • Create a thumbnail of a dwg in in a linux envrionment

    - by Kyle
    Creating a Ruby on Rails site that uses RMagick to create thumbnails for many types of images. RMagick cannot read DWG files, however. I've tried a few things and looked into the Java library JDWGLib, which would probably allow me to write a converter, but it would be a total from-the-ground-up solution, where I just need a thumbnail. I also considered using a viewer program to open the file in a remote X session and doing a screen capture; however, I'm not sure how I could ever guarantee that the viewer had finished opening when I took the screenshot. I'm not concerned with being able to manipulate the file other than to create the thumbnail. It is going to be used for commercial purposes, so any libraries used need to be compatible.

    Read the article

  • Rails plugin acts_as_taggable_on :through

    - by Craig
    I have two models: class Employee < ActiveRecord::Base has_many :projects end class Project < ActiveRecord::Base acts_as_taggable_on :skills, :roles end I would like to find Employees using the tags associated with their projects. The geokit-rails plugin supports a similar concept, using its ':through' relationship. Ideally, I would be able to: specify which tags (i.e. skills, roles) would be included in the conditions order the employees by the total number of projects with matching tags be able to access the matching-tag count for each employee for the purposes of building a tag cloud Any thoughts would be appreciated.

    Read the article

  • Problem with truncation of floating point values in DBSlayer.

    - by chrisdew
    When I run a query through DBslayer http://code.nytimes.com/projects/dbslayer the floating point results are truncated to a total of six digits (plus decimal point and negative sign when needed). { ... "lat":52.2228,"lng":-2.19906, ... } When I run the same query in MySQL, the results are as expected. | 52.22280884 | -2.19906425 | Firstly, am I correct in identifying DBSlayer as the cause of this effect? (Or the JSON library it uses, etc.) Secondly, is this floating point precision configurable within DBSlayer? Thanks, Chris. P.S. Ubuntu 9.10, x86_64 Path: . URL: http://dbslayer.googlecode.com/svn/trunk Repository Root: http://dbslayer.googlecode.com/svn Repository UUID: 5df2be84-4748-0410-afd4-f777a056bd0c Revision: 65 Node Kind: directory Schedule: normal Last Changed Author: dgottfrid Last Changed Rev: 65 Last Changed Date: 2008-03-28 22:52:46 +0000 (Fri, 28 Mar 2008)
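    A hedged workaround sketch (it does not fix DBSlayer or its JSON encoder, it just sidesteps the float-to-JSON conversion): cast the columns to strings on the MySQL side so the full precision travels through the JSON layer untouched, then convert back in the client. The table and id below are hypothetical:

        -- Return the coordinates as strings so no float formatting happens downstream.
        SELECT CAST(lat AS CHAR) AS lat,
               CAST(lng AS CHAR) AS lng
        FROM places
        WHERE id = 42;

    The values then arrive as "52.22280884" / "-2.19906425" and can be parsed to doubles on the application side.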

    Read the article

  • heroku using git branch is confusing!

    - by Stacia
    Ok, so I have a big GitHub project that I'm not supposed to merge my little Stacia branch into. However, it seems like Heroku only takes pushing master seriously. It looks like I pushed my branch, but, for example, if I only have my branch, it even acts like there's no code on the server. I can't even get my gems installed, since the .gems file is on my branch. Basically I don't even want Heroku to know there's a master; I just want to use my test Stacia branch, but it keeps ignoring my local branch. Is there a way to do this? And again, I don't want to overwrite anything in the main GitHub repository (eeek!), but it would probably be ok if I had both master and my branch on Heroku and merged them there. I am a total git novice (on Windows, no less), so please bear with me.
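    A sketch of the standard pattern (the branch name stacia and the remote name heroku are assumed here): Heroku only builds what is on its own master, so push the local branch into the remote's master using a refspec, without touching GitHub at all:

        # Push the local 'stacia' branch to Heroku's master; GitHub ('origin') is untouched.
        git push heroku stacia:master

        # If Heroku refuses because its master has diverged and yours should win:
        git push heroku stacia:master --force

        # Day-to-day GitHub work keeps going to the branch only:
        git push origin stacia

    The refspec on the left of the colon is the local branch, the right side is the remote branch it lands on, so master never has to exist locally for the deploy to work.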

    Read the article

  • PHP Curl and Loop based on a numeric value

    - by danit
    I'm using the Twitter API to collect the number of tweets I've favorited, or, to be accurate, the total pages of favorited tweets. I use this URL: http://api.twitter.com/1/users/show/username.xml and grab the XML element 'favorites_count'. For this example, let's assume favorites_count=5. The Twitter API uses this URL to get the favorites: http://twitter.com/favorites.xml (must be authenticated). You can only get the last 20 favorites using this URL; however, you can alter the URL to include a 'page' option by adding ?page=3 to the end of the favorites URL, e.g. http://twitter.com/favorites.xml?page=2. So what I need to do is use cURL (I think) to collect the favorite tweets, using the URLs: http://twitter.com/favorites.xml?page=1 http://twitter.com/favorites.xml?page=2 http://twitter.com/favorites.xml?page=3 http://twitter.com/favorites.xml?page=4 etc... Some kind of loop to visit each URL, collect the tweets and then output the contents. Can anyone help with this: - Need to use cURL to authenticate - Collect the number of pages of tweets (already scripted this) - Then use a loop to go through each page URL based on the pages value?
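    A hedged sketch of the loop, using basic-auth cURL against the URLs quoted above (Twitter has since replaced basic auth with OAuth, so treat this purely as the paging pattern; the credentials and page count are placeholders):

        <?php
        // Sketch: fetch each favorites page with cURL and collect the tweet texts.
        $username   = 'yourname';      // placeholder
        $password   = 'yourpass';      // placeholder
        $totalPages = 5;               // the page count derived from favorites_count

        $allTweets = array();
        for ($page = 1; $page <= $totalPages; $page++) {
            $ch = curl_init("http://twitter.com/favorites.xml?page=" . $page);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);                  // return body as a string
            curl_setopt($ch, CURLOPT_USERPWD, $username . ':' . $password);  // basic auth
            $xmlString = curl_exec($ch);
            curl_close($ch);

            if ($xmlString === false) {
                continue;                                                    // skip failed pages
            }
            $xml = simplexml_load_string($xmlString);
            foreach ($xml->status as $status) {
                $allTweets[] = (string) $status->text;
            }
        }

        foreach ($allTweets as $text) {
            echo $text, "\n";
        }

    The loop bound comes straight from the page count already being collected, so no extra "are there more pages?" probing is needed.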

    Read the article

  • jQuery / jqgrid / Editing form events

    - by MiBol
    I'm working with jqGrid and I want to know if there is an event to catch a double click in the editing form. For example: I have a grid with ColumnA and ColumnB. I want to catch the event when the user performs a double click on ColumnB (in the editing form). Thanks! I found the solution to my problem ^^ Here is the code; for this example I use alert("TEST!!!")... [Thanks to Oleg for waking up my mind :P] In the colModel: { name: 'Total_uploads', index: 'Total_uploads', width: '100', editable: true, edittype: 'text', editoptions: { size: 10, maxlength: '20', dataInit: function (el) { $(el).click(function () { alert("TEST!!!"); }); } }, editrules: { required: true }, formoptions: { label: 'Total uploads: ', elmsuffix: '&nbsp;&nbsp; <span style="color : #0C66BE; font-family: Calibri">(*)</span>' } }

    Read the article

  • Fixing warning from git

    - by japancheese
    I've been doing a workflow of making a git repository on a remote central server, cloning that repo on my local dev machine, doing some work, and then pushing the changes back to the same repo on the remote server. However (and I believe this started after an update I did to git recently), after pushing up a change I'm getting the following warning:

        Counting objects: 2724, done.
        Delta compression using up to 2 threads.
        Compressing objects: 100% (2666/2666), done.
        Writing objects: 100% (2723/2723), 5.90 MiB | 313 KiB/s, done.
        Total 2723 (delta 219), reused 0 (delta 0)
        warning: updating the currently checked out branch; this may cause confusion,
        as the index and working tree do not reflect changes that are now in HEAD.

    Can someone explain exactly what this warning means, and what I'm doing wrong in my workflow that causes it?
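    A sketch of the usual remedy (the paths are placeholders): the warning appears because the central repository has a working tree and the branch being pushed into is the one checked out there, so its checked-out files no longer match HEAD. Serving a bare repository, which has no working tree, avoids the situation entirely:

        # On the server: migrate the existing repo to a bare one (no working tree).
        git clone --bare /path/to/old/repo /srv/git/myproject.git   # placeholder paths

        # On the dev machine: point origin at the bare repo and push as usual.
        git remote set-url origin user@server:/srv/git/myproject.git
        git push origin master

    If someone still needs a checkout on the server, it can be a separate clone of the bare repo that pulls after each push, rather than the repo being pushed into.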

    Read the article

  • Strange: Planner chooses the plan with lower cost, but (very) long query runtime

    - by S38
    Facts: PGSQL 8.4.2, Linux I make use of table inheritance Each Table contains 3 million rows Indexes on joining columns are set Table statistics (analyze, vacuum analyze) are up-to-date Only used table is "node" with varios partitioned sub-tables Recursive query (pg = 8.4) Now here is the explained query: WITH RECURSIVE rows AS ( SELECT * FROM ( SELECT r.id, r.set, r.parent, r.masterid FROM d_storage.node_dataset r WHERE masterid = 3533933 ) q UNION ALL SELECT * FROM ( SELECT c.id, c.set, c.parent, r.masterid FROM rows r JOIN a_storage.node c ON c.parent = r.id ) q ) SELECT r.masterid, r.id AS nodeid FROM rows r QUERY PLAN ----------------------------------------------------------------------------------------------------------------------------------------------------------------- CTE Scan on rows r (cost=2742105.92..2862119.94 rows=6000701 width=16) (actual time=0.033..172111.204 rows=4 loops=1) CTE rows -> Recursive Union (cost=0.00..2742105.92 rows=6000701 width=28) (actual time=0.029..172111.183 rows=4 loops=1) -> Index Scan using node_dataset_masterid on node_dataset r (cost=0.00..8.60 rows=1 width=28) (actual time=0.025..0.027 rows=1 loops=1) Index Cond: (masterid = 3533933) -> Hash Join (cost=0.33..262208.33 rows=600070 width=28) (actual time=40628.371..57370.361 rows=1 loops=3) Hash Cond: (c.parent = r.id) -> Append (cost=0.00..211202.04 rows=12001404 width=20) (actual time=0.011..46365.669 rows=12000004 loops=3) -> Seq Scan on node c (cost=0.00..24.00 rows=1400 width=20) (actual time=0.002..0.002 rows=0 loops=3) -> Seq Scan on node_dataset c (cost=0.00..55001.01 rows=3000001 width=20) (actual time=0.007..3426.593 rows=3000001 loops=3) -> Seq Scan on node_stammdaten c (cost=0.00..52059.01 rows=3000001 width=20) (actual time=0.008..9049.189 rows=3000001 loops=3) -> Seq Scan on node_stammdaten_adresse c (cost=0.00..52059.01 rows=3000001 width=20) (actual time=3.455..8381.725 rows=3000001 loops=3) -> Seq Scan on node_testdaten c (cost=0.00..52059.01 rows=3000001 width=20) (actual time=1.810..5259.178 rows=3000001 loops=3) -> Hash (cost=0.20..0.20 rows=10 width=16) (actual time=0.010..0.010 rows=1 loops=3) -> WorkTable Scan on rows r (cost=0.00..0.20 rows=10 width=16) (actual time=0.002..0.004 rows=1 loops=3) Total runtime: 172111.371 ms (16 rows) (END) So far so bad, the planner decides to choose hash joins (good) but no indexes (bad). 
Now after doing the following: SET enable_hashjoins TO false; The explained query looks like that: QUERY PLAN ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- CTE Scan on rows r (cost=15198247.00..15318261.02 rows=6000701 width=16) (actual time=0.038..49.221 rows=4 loops=1) CTE rows -> Recursive Union (cost=0.00..15198247.00 rows=6000701 width=28) (actual time=0.032..49.201 rows=4 loops=1) -> Index Scan using node_dataset_masterid on node_dataset r (cost=0.00..8.60 rows=1 width=28) (actual time=0.028..0.031 rows=1 loops=1) Index Cond: (masterid = 3533933) -> Nested Loop (cost=0.00..1507822.44 rows=600070 width=28) (actual time=10.384..16.382 rows=1 loops=3) Join Filter: (r.id = c.parent) -> WorkTable Scan on rows r (cost=0.00..0.20 rows=10 width=16) (actual time=0.001..0.003 rows=1 loops=3) -> Append (cost=0.00..113264.67 rows=3001404 width=20) (actual time=8.546..12.268 rows=1 loops=4) -> Seq Scan on node c (cost=0.00..24.00 rows=1400 width=20) (actual time=0.001..0.001 rows=0 loops=4) -> Bitmap Heap Scan on node_dataset c (cost=58213.87..113214.88 rows=3000001 width=20) (actual time=1.906..1.906 rows=0 loops=4) Recheck Cond: (c.parent = r.id) -> Bitmap Index Scan on node_dataset_parent (cost=0.00..57463.87 rows=3000001 width=0) (actual time=1.903..1.903 rows=0 loops=4) Index Cond: (c.parent = r.id) -> Index Scan using node_stammdaten_parent on node_stammdaten c (cost=0.00..8.60 rows=1 width=20) (actual time=3.272..3.273 rows=0 loops=4) Index Cond: (c.parent = r.id) -> Index Scan using node_stammdaten_adresse_parent on node_stammdaten_adresse c (cost=0.00..8.60 rows=1 width=20) (actual time=4.333..4.333 rows=0 loops=4) Index Cond: (c.parent = r.id) -> Index Scan using node_testdaten_parent on node_testdaten c (cost=0.00..8.60 rows=1 width=20) (actual time=2.745..2.746 rows=0 loops=4) Index Cond: (c.parent = r.id) Total runtime: 49.349 ms (21 rows) (END) - incredibly faster, because indexes were used. Notice: Cost of the second query ist somewhat higher than for the first query. So the main question is: Why does the planner make the first decision, instead of the second? Also interesing: Via SET enable_seqscan TO false; i temp. disabled seq scans. Than the planner used indexes and hash joins, and the query still was slow. So the problem seems to be the hash join. Maybe someone can help in this confusing situation? thx, R.
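    A hedged note and sketch rather than a definitive answer: the session parameter is spelled enable_hashjoin (singular) in 8.4, and switching planner features off is best kept to diagnosis. For this symptom (the planner prefers sequential scans and hash joins although index-driven nested loops win), the more durable knobs are the cost settings and per-column statistics; the numbers below are illustrative, not tuned for this system:

        -- Diagnosis only: compare plans with hash joins disabled for this session.
        SET enable_hashjoin = off;

        -- More durable tuning: tell the planner random I/O is cheaper than the default 4.0
        -- (reasonable when the working set is mostly cached), and give it finer statistics
        -- on the join column, here for the parent table and likewise for each child.
        SET random_page_cost = 2.0;
        ALTER TABLE a_storage.node ALTER COLUMN parent SET STATISTICS 1000;
        ANALYZE a_storage.node;

    The large gap between estimated rows (600070 per recursion step) and actual rows (1) in the plan is what drives the planner toward the hash join, so better statistics on the partitioned children attack the cause rather than the symptom.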

    Read the article

  • Noob Rails question about learning Rails

    - by user271916
    Hi all, I have been programming for a while and for the past 3 or 4 months have been learning Ruby. I am not an expert by any means, but I believe I have the basics down. I decided to start learning RoR and bought "Agile Web Development with Rails, 3rd Edition", and I have been dutifully going through the chapters one by one. Currently I am in chapter 8 and have had no problems so far. My question is: I know I have learned several things so far, and I know that I am starting to get a sense of the Rails framework, but I have this fear that I am just not learning as much as I should. Some things I get, and I understand the interconnections; with other things I feel I am just going through the motions and don't fully comprehend how they tie together. Now, there is still a large amount of the book for me to complete. I guess I am just wondering: if I complete this book, what should I expect to be able to accomplish on my own, and what should my next steps be? Thanks

    Read the article
