Search Results

Search found 10505 results on 421 pages for 'total lunar eclipse'.

Page 290 of 421

  • Problem with truncation of floating point values in DBSlayer.

    - by chrisdew
    When I run a query through DBSlayer (http://code.nytimes.com/projects/dbslayer), the floating-point results are truncated to a total of six digits (plus the decimal point and a negative sign when needed):

        { ... "lat":52.2228,"lng":-2.19906, ... }

    When I run the same query in MySQL, the results are as expected:

        | 52.22280884 | -2.19906425 |

    Firstly, am I correct in identifying DBSlayer (or the JSON library it uses, etc.) as the cause of this effect? Secondly, is this floating-point precision configurable within DBSlayer?

    Thanks, Chris.

    P.S. Ubuntu 9.10, x86_64. The DBSlayer build comes from:

        Path: .
        URL: http://dbslayer.googlecode.com/svn/trunk
        Repository Root: http://dbslayer.googlecode.com/svn
        Repository UUID: 5df2be84-4748-0410-afd4-f777a056bd0c
        Revision: 65
        Node Kind: directory
        Schedule: normal
        Last Changed Author: dgottfrid
        Last Changed Rev: 65
        Last Changed Date: 2008-03-28 22:52:46 +0000 (Fri, 28 Mar 2008)
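    For what it's worth, the pattern in the truncated values matches a default six-significant-digit float format (C's "%g", which many C JSON encoders fall back to). A minimal Python sketch, purely as an illustration and not DBSlayer's actual code:

        # Illustration only: how a 6-significant-digit format truncates the
        # coordinates exactly as shown in the JSON output above.
        lat, lng = 52.22280884, -2.19906425

        print("%g, %g" % (lat, lng))        # 52.2228, -2.19906  (default precision: 6 significant digits)
        print("%.10g, %.10g" % (lat, lng))  # 52.22280884, -2.19906425 (explicit higher precision)

    If DBSlayer or its JSON layer exposes a printf-style precision setting, raising it as in the second line would be the kind of fix to look for.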

    Read the article

  • Importing content into Drupal

    - by anru
    I have a WordPress site with 5k posts, and each post has on average 25 comments, so about 125k nodes have to be added in total. I need to import those posts and comments into Drupal 6. I have written a script that imports them through Drupal's cron service, but the cron run keeps timing out, because importing 125k nodes one by one is very slow. What can I do to improve the import speed? I am using Drupal's built-in node_save() and comment_save() methods; I have not yet found a way to speed up the import with custom SQL queries. Because I execute my script through Drupal's cron.php, even setting 'max_execution_time' to unlimited only affects PHP; the Apache server has its own timeout setting.
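    One generic way around a single long-running pass is to import a bounded batch per cron run and remember the offset, so no run has to fit all 125k saves inside the web server's timeout. A rough Python sketch of that idea only (the helper names are hypothetical placeholders, not Drupal APIs; in Drupal the same pattern is what a batch or queue approach gives you):

        # Generic illustration of chunked, resumable importing: each run
        # processes a bounded batch and records how far it got.
        # load_items, import_one, read_offset and save_offset are hypothetical helpers.
        BATCH_SIZE = 500

        def run_one_cron_batch(load_items, import_one, read_offset, save_offset):
            offset = read_offset()                    # where the previous run stopped
            items = load_items(offset, BATCH_SIZE)    # fetch at most BATCH_SIZE items
            for item in items:
                import_one(item)                      # e.g. the node_save()-style call
            save_offset(offset + len(items))          # resume here on the next run
            return len(items)                         # 0 means the import is finished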

    Read the article

  • jQuery plugin, return value from function

    - by Marius
    Hello there,

    Markup:

        <input type="text" name="email" />

    Code:

        $(':text').focusout(function(){
            $(this).validate(function(){
                $(this).attr('name');
            });
        });

    Plugin:

        (function($){
            $.fn.validate = function(type) {
                return this.each(function(type) {
                    if (type == 'email') {
                        matches = this.val().match('/.+@.+\..{2,7}/');
                        (matches != null) ? alert('valid') : alert('invalid');
                    }
                    /*else if (type == 'name') {
                    } else if (type == 'age') {
                    } else if (type == 'text') {
                    }*/
                    else {
                        alert('total failure');
                    }
                });
            };
        })(jQuery);

    The problem is that when I execute the code above, it runs the plugin as if type were the string "function(){ $(this).attr('name'); });" instead of executing it as a function. How do I solve this?

    Thank you for your time.

    Kind regards,
    Marius

    Read the article

  • Simple reduction (NP completeness)

    - by Allen
    Hey guys, I'm looking for a way to prove that the bicriteria shortest path problem is NP-complete. That is, given a graph whose edges have both lengths and weights, I need to decide whether there exists a path from s to t with total length <= L and total weight <= W. I know that I must take a known NP-complete problem and reduce it to this one. We have the following problems at our disposal to choose from: 3-SAT, independent set, vertex cover, Hamiltonian cycle, and 3-dimensional matching. Any ideas on which may be viable? Thanks.
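    For reference, the decision problem being targeted can be pinned down with a brute-force check; the Python sketch below only enumerates simple s-t paths and tests the two bounds (exponential time, purely an illustration of the problem statement, and it says nothing about which listed problem to reduce from):

        def exists_bicriteria_path(adj, s, t, L, W):
            """Brute-force check: is there a simple s-t path with total length <= L
            and total weight <= W?  adj maps u -> list of (v, length, weight).
            Exponential time -- illustration of the decision problem only."""
            def dfs(u, length, weight, visited):
                if length > L or weight > W:
                    return False
                if u == t:
                    return True
                for v, l, w in adj.get(u, []):
                    if v not in visited and dfs(v, length + l, weight + w, visited | {v}):
                        return True
                return False
            return dfs(s, 0, 0, {s})

        # tiny example graph: two s-t paths, one short but heavy, one light but long
        adj = {
            's': [('a', 1, 5), ('b', 4, 1)],
            'a': [('t', 1, 5)],
            'b': [('t', 4, 1)],
        }
        print(exists_bicriteria_path(adj, 's', 't', L=3, W=20))  # True via s-a-t (length 2, weight 10)
        print(exists_bicriteria_path(adj, 's', 't', L=3, W=8))   # False: the short path is too heavy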

    Read the article

  • Strange: Planner picks the lower-cost plan, but with a (very) long query runtime

    - by S38
    Facts:

      - PostgreSQL 8.4.2, Linux
      - I make use of table inheritance
      - Each table contains 3 million rows
      - Indexes on the joining columns are set
      - Table statistics (analyze, vacuum analyze) are up to date
      - The only table used is "node", with various partitioned sub-tables
      - Recursive query (pg = 8.4)

    Now here is the explained query:

        WITH RECURSIVE rows AS (
            SELECT * FROM (
                SELECT r.id, r.set, r.parent, r.masterid
                FROM d_storage.node_dataset r
                WHERE masterid = 3533933
            ) q
            UNION ALL
            SELECT * FROM (
                SELECT c.id, c.set, c.parent, r.masterid
                FROM rows r
                JOIN a_storage.node c ON c.parent = r.id
            ) q
        )
        SELECT r.masterid, r.id AS nodeid FROM rows r

    QUERY PLAN:

        CTE Scan on rows r  (cost=2742105.92..2862119.94 rows=6000701 width=16) (actual time=0.033..172111.204 rows=4 loops=1)
          CTE rows
            ->  Recursive Union  (cost=0.00..2742105.92 rows=6000701 width=28) (actual time=0.029..172111.183 rows=4 loops=1)
                  ->  Index Scan using node_dataset_masterid on node_dataset r  (cost=0.00..8.60 rows=1 width=28) (actual time=0.025..0.027 rows=1 loops=1)
                        Index Cond: (masterid = 3533933)
                  ->  Hash Join  (cost=0.33..262208.33 rows=600070 width=28) (actual time=40628.371..57370.361 rows=1 loops=3)
                        Hash Cond: (c.parent = r.id)
                        ->  Append  (cost=0.00..211202.04 rows=12001404 width=20) (actual time=0.011..46365.669 rows=12000004 loops=3)
                              ->  Seq Scan on node c  (cost=0.00..24.00 rows=1400 width=20) (actual time=0.002..0.002 rows=0 loops=3)
                              ->  Seq Scan on node_dataset c  (cost=0.00..55001.01 rows=3000001 width=20) (actual time=0.007..3426.593 rows=3000001 loops=3)
                              ->  Seq Scan on node_stammdaten c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=0.008..9049.189 rows=3000001 loops=3)
                              ->  Seq Scan on node_stammdaten_adresse c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=3.455..8381.725 rows=3000001 loops=3)
                              ->  Seq Scan on node_testdaten c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=1.810..5259.178 rows=3000001 loops=3)
                        ->  Hash  (cost=0.20..0.20 rows=10 width=16) (actual time=0.010..0.010 rows=1 loops=3)
                              ->  WorkTable Scan on rows r  (cost=0.00..0.20 rows=10 width=16) (actual time=0.002..0.004 rows=1 loops=3)
        Total runtime: 172111.371 ms
        (16 rows)

    So far, so bad: the planner decides to use hash joins (good) but no indexes (bad).

    Now, after doing the following:

        SET enable_hashjoin TO false;

    the explained query looks like this:

    QUERY PLAN:

        CTE Scan on rows r  (cost=15198247.00..15318261.02 rows=6000701 width=16) (actual time=0.038..49.221 rows=4 loops=1)
          CTE rows
            ->  Recursive Union  (cost=0.00..15198247.00 rows=6000701 width=28) (actual time=0.032..49.201 rows=4 loops=1)
                  ->  Index Scan using node_dataset_masterid on node_dataset r  (cost=0.00..8.60 rows=1 width=28) (actual time=0.028..0.031 rows=1 loops=1)
                        Index Cond: (masterid = 3533933)
                  ->  Nested Loop  (cost=0.00..1507822.44 rows=600070 width=28) (actual time=10.384..16.382 rows=1 loops=3)
                        Join Filter: (r.id = c.parent)
                        ->  WorkTable Scan on rows r  (cost=0.00..0.20 rows=10 width=16) (actual time=0.001..0.003 rows=1 loops=3)
                        ->  Append  (cost=0.00..113264.67 rows=3001404 width=20) (actual time=8.546..12.268 rows=1 loops=4)
                              ->  Seq Scan on node c  (cost=0.00..24.00 rows=1400 width=20) (actual time=0.001..0.001 rows=0 loops=4)
                              ->  Bitmap Heap Scan on node_dataset c  (cost=58213.87..113214.88 rows=3000001 width=20) (actual time=1.906..1.906 rows=0 loops=4)
                                    Recheck Cond: (c.parent = r.id)
                                    ->  Bitmap Index Scan on node_dataset_parent  (cost=0.00..57463.87 rows=3000001 width=0) (actual time=1.903..1.903 rows=0 loops=4)
                                          Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_stammdaten_parent on node_stammdaten c  (cost=0.00..8.60 rows=1 width=20) (actual time=3.272..3.273 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_stammdaten_adresse_parent on node_stammdaten_adresse c  (cost=0.00..8.60 rows=1 width=20) (actual time=4.333..4.333 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_testdaten_parent on node_testdaten c  (cost=0.00..8.60 rows=1 width=20) (actual time=2.745..2.746 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
        Total runtime: 49.349 ms
        (21 rows)

    This is incredibly faster, because indexes were used. Notice that the estimated cost of the second query is somewhat higher than that of the first query.

    So the main question is: why does the planner make the first decision instead of the second?

    Also interesting: via SET enable_seqscan TO false; I temporarily disabled seq scans. Then the planner used indexes and hash joins, and the query was still slow. So the problem seems to be the hash join.

    Maybe someone can help in this confusing situation?

    thx, R.

    Read the article

  • Create a thumbnail of a DWG in a Linux environment

    - by Kyle
    I'm creating a Ruby on Rails site that uses RMagick to create thumbnails for many types of images. However, RMagick cannot read DWG files. I've tried a few things and looked into the Java library JDWGLib, which would probably allow me to write a converter, but that would be a total from-the-ground-up solution, where I just need a thumbnail. I've also considered using a viewer program to open the file in a remote X session and do a screen capture, but I'm not sure how I could ever guarantee that the viewer had finished opening the file when I took the screenshot. I'm not concerned with being able to manipulate the file other than to create the thumbnail. It is going to be used for commercial purposes, so any libraries used need to be compatible with that.

    Read the article

  • Heroku using a git branch is confusing!

    - by Stacia
    Ok, so I have a big GitHub project that I'm not supposed to merge my little Stacia branch into. However, it seems like Heroku only takes pushing master seriously. It looks like I pushed my branch, but, for example, if I only have my branch, Heroku even acts like there's no code on the server. I can't even get my gems installed, since the .gems file is on my branch. Basically, I don't even want Heroku to know there's a master; I just want to use my test Stacia branch, but it keeps ignoring my local branch. Is there a way to do this? And again, I don't want to overwrite anything in the main GitHub repository (eeek!), but it would probably be ok if I had both master and my branch on Heroku and merged them there. I am a total git novice (on Windows, no less), so please bear with me.

    Read the article

  • How to use third-party themes in a Swing application?

    - by swift
    I want to use some third-party themes (like Synthetica, http://www.javasoft.de/synthetica/themes/) in my Swing application. I am using the Eclipse IDE, got the theme's jar file, and made the following modification (according to the theme's readme file) in my code:

        try {
            UIManager.setLookAndFeel(new SyntheticaBlackMoonLookAndFeel());
        } catch (Exception e) {
            e.printStackTrace();
        }

    But after this modification it shows the following error:

        The type de.javasoft.plaf.synthetica.SyntheticaLookAndFeel cannot be resolved.
        It is indirectly referenced from required .class files

    What does this mean? I tried searching on the net but can't really find any useful answers.

    Contents of the readme file:

        System Requirements
        ===================
        Java SE 5 (JRE 1.5.0) or above
        Synthetica V2.2.0 or above

        Integration
        ===========
        1. Ensure that your classpath contains all Synthetica libraries
           (including Synthetica's core library 'synthetica.jar').
        2. Enable the Synthetica Look and Feel at startup time in your application:

           import de.javasoft.plaf.synthetica.SyntheticaBlackMoonLookAndFeel;

           try {
               UIManager.setLookAndFeel(new SyntheticaBlackMoonLookAndFeel());
           } catch (Exception e) {
               e.printStackTrace();
           }

    Read the article

  • Rails plugin acts_as_taggable_on :through

    - by Craig
    I have two models:

        class Employee < ActiveRecord::Base
          has_many :projects
        end

        class Project < ActiveRecord::Base
          acts_as_taggable_on :skills, :roles
        end

    I would like to find Employees using the tags associated with their projects. The geokit-rails plugin supports a similar concept, using its ':through' relationship. Ideally, I would be able to:

      - specify which tags (i.e. skills, roles) would be included in the conditions
      - order the employees by the total number of projects with matching tags
      - access the matching-tag count for each employee for the purposes of building a tag cloud

    Any thoughts would be appreciated.

    Read the article

  • C#: resize a window beyond the display resolution

    - by Sebastian
    I am a total newbie in .NET programming, so please be patient ;-). I have a problem with resizing a window. From my app, I want to resize another app's window and take a screenshot of it. I do the resizing based on this example: http://blogs.geekdojo.net/richard/archive/2003/09/24/181.aspx. But I have a problem: I work on a laptop with a 1024x640 pixel screen resolution, and I want to resize the window to 1200x1600 px. I can't do that because of display limitations. Is there any tricky solution to resize a window to this resolution and take a screenshot of the whole window? I've also tried the Sdesk program, which is suggested here: http://stackoverflow.com/questions/445893/create-window-larger-than-desktop-display-resolution. Any help?

    Read the article

  • GlassFish Starting Up Java SE Client - No Initial Context Exception

    - by Marcel
    Hi, I have developed a Java SE client that calls some session beans on a GlassFish server. I connect to the bean's remote interface like this:

        context = new InitialContext();
        em = (ICrudService) context.lookup("java:global/BackITServer/CrudServiceImpl");

    This works fine from inside Eclipse (with gf-client on the build path). When I export my project as a runnable jar and call it on the console with

        java -jar BackItClient.jar

    I get a NoInitialContextException. I would very much appreciate some help.

    Thank you, greetings, Marcel

    PS: Do I really have to pack all the jars which gf-client references into my jar?

    Read the article

  • Stackless installation and configuration with Django

    - by crashekar
    I am trying to run a Django command extension which uses Stackless. I have installed Stackless Python (compiled with Python 2.5), so whenever I type python2.5 at the console it fires up, indicating that the version is

        Python 2.5.2 Stackless 3.1b3 060516 (python-2.52:72942, May 26 2009, 23:07:34)
        [GCC 4.3.3] on linux2

    But in Eclipse I have configured my Django application to run with Python 2.6, specifically in the PyDev settings. So obviously when I write import stackless it says that there is no such package. The problem is that even if I add the '/usr/local/lib/python2.5/site-packages' directory, it does not import stackless. What is the solution to this issue?
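    A quick way to see the mismatch is to run a small check from each interpreter (nothing PyDev-specific is assumed here); the stackless module is built into the Stackless interpreter binary itself, so a plain python2.6 cannot import it even with python2.5's site-packages on sys.path:

        import sys

        # Run this once from the interpreter PyDev is configured to use and once
        # from the stackless python2.5 console, and compare the output.
        print(sys.version)
        print(sys.executable)
        print(sys.path)

        try:
            import stackless
            print("stackless is available")
        except ImportError:
            print("stackless is NOT available in this interpreter")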

    Read the article

  • Doctrine_Table_Exception: Unknown relation alias [closed]

    - by Sadiqur Rahman
    I am getting the following error message:

        Doctrine_Table_Exception: Unknown relation alias shoesTable in
        /home/public_html/projects/giftshoes/system/database/doctrine/Doctrine/Relation/Parser.php on line 237

    My code is below.

    BaseShoes:

        <?php
        // Connection Component Binding
        Doctrine_Manager::getInstance()->bindComponent('Shoes', 'sadiqsof_giftshoes');

        /**
         * BaseShoes
         *
         * This class has been auto-generated by the Doctrine ORM Framework
         *
         * @property integer $sku
         * @property string $name
         * @property string $keywords
         * @property string $description
         * @property string $manufacturer
         * @property float $sale_price
         * @property float $price
         * @property string $url
         * @property string $image
         * @property string $category
         * @property Doctrine_Collection $Viewes
         *
         * @package    ##PACKAGE##
         * @subpackage ##SUBPACKAGE##
         * @author     ##NAME## <##EMAIL##>
         * @version    SVN: $Id: Builder.php 6820 2009-11-30 17:27:49Z jwage $
         */
        abstract class BaseShoes extends Doctrine_Record
        {
            public function setTableDefinition()
            {
                $this->setTableName('shoes');
                $this->hasColumn('sku', 'integer', 4, array('type' => 'integer', 'fixed' => 0, 'unsigned' => false, 'primary' => true, 'autoincrement' => false, 'length' => '4'));
                $this->hasColumn('name', 'string', 255, array('type' => 'string', 'fixed' => 0, 'unsigned' => false, 'primary' => false, 'notnull' => true, 'autoincrement' => false, 'length' => '255'));
                $this->hasColumn('keywords', 'string', 255, array('type' => 'string', 'fixed' => 0, 'unsigned' => false, 'primary' => false, 'notnull' => true, 'autoincrement' => false, 'length' => '255'));
                $this->hasColumn('description', 'string', null, array('type' => 'string', 'fixed' => 0, 'unsigned' => false, 'primary' => false, 'notnull' => true, 'autoincrement' => false, 'length' => ''));
                $this->hasColumn('manufacturer', 'string', 20, array('type' => 'string', 'fixed' => 0, 'unsigned' => false, 'primary' => false, 'notnull' => true, 'autoincrement' => false, 'length' => '20'));
                $this->hasColumn('sale_price', 'float', null, array('type' => 'float', 'fixed' => 0, 'unsigned' => false, 'primary' => false, 'notnull' => true, 'autoincrement' => false, 'length' => ''));
                $this->hasColumn('price', 'float', null, array('type' => 'float', 'fixed' => 0, 'unsigned' => false, 'primary' => false, 'notnull' => true, 'autoincrement' => false, 'length' => ''));
                $this->hasColumn('url', 'string', null, array('type' => 'string', 'fixed' => 0, 'unsigned' => false, 'primary' => false, 'notnull' => true, 'autoincrement' => false, 'length' => ''));
                $this->hasColumn('image', 'string', null, array('type' => 'string', 'fixed' => 0, 'unsigned' => false, 'primary' => false, 'notnull' => true, 'autoincrement' => false, 'length' => ''));
                $this->hasColumn('category', 'string', 50, array('type' => 'string', 'fixed' => 0, 'unsigned' => false, 'primary' => false, 'notnull' => true, 'autoincrement' => false, 'length' => '50'));
            }

            public function setUp()
            {
                parent::setUp();
                $this->hasMany('Viewes', array('local' => 'sku', 'foreign' => 'sku'));
            }
        }

    ShoesTable:

        <?php
        class ShoesTable extends Doctrine_Table
        {
            function getAllShoes($from = 0, $total = 15)
            {
                $q = Doctrine_Query::create()
                    ->from('Shoes')
                    ->limit($total)
                    ->offset($from);
                return $q->execute(array(), Doctrine::HYDRATE_ARRAY);
            }
        }

    Shoes model:

        <?php
        /**
         * Shoes
         *
         * This class has been auto-generated by the Doctrine ORM Framework
         *
         * @package    ##PACKAGE##
         * @subpackage ##SUBPACKAGE##
         * @author     ##NAME## <##EMAIL##>
         * @version    SVN: $Id: Builder.php 6820 2009-11-30 17:27:49Z jwage $
         */
        class Shoes extends BaseShoes
        {
            function __construct()
            {
                parent::__construct();
                $this->shoesTable = Doctrine::getTable('Shoes');
            }

            function getAllShoes()
            {
                return $this->shoesTable->getAllShoes();
            }
        }

    Read the article

  • IBM RAD with Java 1.5 won't compile code with generics

    - by Matt1776
    Hello, I have some code that uses generics, and my IBM RAD IDE will not compile it, instead treating the generics as errors. I have checked the version of the JRE it's pointing to across all the enterprise projects, and it is 1.5, which I am told does support generics. I also checked that all the libraries for WAS were pointing to the correct version and that the compiler compliance level was set correctly (it was at 5.0, and I changed it to 6.0 with no luck either). Does anyone have any suggestions as to anything else I can try? I have issues like this with RAD all the time; I don't know about anyone else, but they took Eclipse and made it complicated and dysfunctional.

    Read the article

  • Noob Rails question about learning Rails

    - by user271916
    Hi all. I have been programming for a while, and for the past 3 or 4 months I have been learning Ruby. I am not an expert by any means, but I believe I have the basics down. I decided to start learning RoR, bought "Agile Web Development with Rails, 3rd Edition", and have been dutifully going through the chapters one by one. Currently I am in chapter 8 and have had no problems so far. My question is this: I know I have learned several things so far, and I am starting to get a sense of the Rails framework, but I have this fear that I am just not learning as much as I should. Some things I get, and I understand the interconnections, while with other things I feel I am just going through the motions and don't fully comprehend how everything fits together. There is still a large amount of the book left for me to complete. I guess I am just wondering: if I complete this book, what should I expect to be able to accomplish on my own, and what should my next steps be? Thanks.

    Read the article

  • ZIPLIB problem on opening zip files

    - by Ahmet vardar
    I am using this class to create a zip:

        <?php
        // vim: expandtab sw=4 ts=4 sts=4:
        class zipfile
        {
            var $datasec = array();
            var $ctrl_dir = array();
            var $eof_ctrl_dir = "\x50\x4b\x05\x06\x00\x00\x00\x00";
            var $old_offset = 0;

            function unix2DosTime($unixtime = 0)
            {
                $timearray = ($unixtime == 0) ? getdate() : getdate($unixtime);

                if ($timearray['year'] < 1980) {
                    $timearray['year']    = 1980;
                    $timearray['mon']     = 1;
                    $timearray['mday']    = 1;
                    $timearray['hours']   = 0;
                    $timearray['minutes'] = 0;
                    $timearray['seconds'] = 0;
                } // end if

                return (($timearray['year'] - 1980) << 25)
                    | ($timearray['mon'] << 21)
                    | ($timearray['mday'] << 16)
                    | ($timearray['hours'] << 11)
                    | ($timearray['minutes'] << 5)
                    | ($timearray['seconds'] >> 1);
            } // end of the 'unix2DosTime()' method

            function addFile($data, $name, $time = 0)
            {
                $name     = str_replace('\\', '/', $name);

                $dtime    = dechex($this->unix2DosTime($time));
                $hexdtime = '\x' . $dtime[6] . $dtime[7]
                          . '\x' . $dtime[4] . $dtime[5]
                          . '\x' . $dtime[2] . $dtime[3]
                          . '\x' . $dtime[0] . $dtime[1];
                eval('$hexdtime = "' . $hexdtime . '";');

                $fr  = "\x50\x4b\x03\x04";
                $fr .= "\x14\x00";            // ver needed to extract
                $fr .= "\x00\x00";            // gen purpose bit flag
                $fr .= "\x08\x00";            // compression method
                $fr .= $hexdtime;             // last mod time and date

                // "local file header" segment
                $unc_len = strlen($data);
                $crc     = crc32($data);
                $zdata   = gzcompress($data);
                $zdata   = substr(substr($zdata, 0, strlen($zdata) - 4), 2); // fix crc bug
                $c_len   = strlen($zdata);
                $fr .= pack('V', $crc);              // crc32
                $fr .= pack('V', $c_len);            // compressed filesize
                $fr .= pack('V', $unc_len);          // uncompressed filesize
                $fr .= pack('v', strlen($name));     // length of filename
                $fr .= pack('v', 0);                 // extra field length
                $fr .= $name;

                // "file data" segment
                $fr .= $zdata;

                // "data descriptor" segment (optional but necessary if archive is not
                // served as file)
                $fr .= pack('V', $crc);              // crc32
                $fr .= pack('V', $c_len);            // compressed filesize
                $fr .= pack('V', $unc_len);          // uncompressed filesize

                // add this entry to array
                $this -> datasec[] = $fr;

                // now add to central directory record
                $cdrec  = "\x50\x4b\x01\x02";
                $cdrec .= "\x00\x00";                // version made by
                $cdrec .= "\x14\x00";                // version needed to extract
                $cdrec .= "\x00\x00";                // gen purpose bit flag
                $cdrec .= "\x08\x00";                // compression method
                $cdrec .= $hexdtime;                 // last mod time & date
                $cdrec .= pack('V', $crc);           // crc32
                $cdrec .= pack('V', $c_len);         // compressed filesize
                $cdrec .= pack('V', $unc_len);       // uncompressed filesize
                $cdrec .= pack('v', strlen($name));  // length of filename
                $cdrec .= pack('v', 0);              // extra field length
                $cdrec .= pack('v', 0);              // file comment length
                $cdrec .= pack('v', 0);              // disk number start
                $cdrec .= pack('v', 0);              // internal file attributes
                $cdrec .= pack('V', 32);             // external file attributes - 'archive' bit set
                $cdrec .= pack('V', $this -> old_offset); // relative offset of local header

                $this -> old_offset += strlen($fr);

                $cdrec .= $name;

                // optional extra field, file comment goes here
                // save to central directory
                $this -> ctrl_dir[] = $cdrec;
            } // end of the 'addFile()' method

            function file()
            {
                $data    = implode('', $this -> datasec);
                $ctrldir = implode('', $this -> ctrl_dir);

                return $data . $ctrldir . $this -> eof_ctrl_dir .
                    pack('v', sizeof($this -> ctrl_dir)) . // total # of entries "on this disk"
                    pack('v', sizeof($this -> ctrl_dir)) . // total # of entries overall
                    pack('V', strlen($ctrldir)) .          // size of central dir
                    pack('V', strlen($data)) .             // offset to start of central dir
                    "\x00\x00";                            // .zip file comment length
            } // end of the 'file()' method

            function addFiles($files)
            {
                foreach ($files as $file) {
                    if (is_file($file)) // directory check
                    {
                        $data = implode("", file($file));
                        $this->addFile($data, $file);
                    }
                }
            }

            function output($file)
            {
                $fp = fopen($file, "w");
                fwrite($fp, $this->file());
                fclose($fp);
            }
        } // end of the 'zipfile' class
        ?>

    It creates a zip file, but when I try to open it on Mac OS X Snow Leopard and Windows 7, it doesn't open. On the Mac I had this error:

        Error 1: operation not permitted

    Any idea? Thanks.

    Read the article

  • How to run unit tests under PyDev for Django?

    - by photon
    I configured the properties for my Django project under PyDev. I can run the Django app under PyDev or from a console window, and I can also run the unit tests for the app from a console window, but I have problems running the unit tests under PyDev. I guess it's something related to PyDev's run configurations, so I made several attempts, but with no success.

    Once I got messages like this:

        ImportError: Could not import settings 'D:\django_projects\MyProject' (Is it on sys.path? Does it have syntax errors?): No module named D:\django_projects\MyProject
        ERROR: Module: MyUnittestFile could not be imported.

    Another time I got messages like this:

        ImportError: Could not import settings 'MyProject.settngs' (Is it on sys.path? Does it have syntax errors?): No module named settngs
        ERROR: Module: MyUnittestFile could not be imported.

    I use PyDev 1.5.6 on Eclipse and Windows XP. Any ideas about this problem?
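    Both error messages point at the DJANGO_SETTINGS_MODULE value that the run configuration passes (a filesystem path in one case, a misspelled module name in the other). One generic workaround, assuming the project really is MyProject under D:\django_projects as the messages suggest, is to bootstrap the environment at the top of the test module, before anything imports the Django settings; this is plain Django/Python, not a PyDev feature:

        import os
        import sys

        # Adjust these to the real project layout; the values here are taken
        # from the paths quoted in the error messages above.
        PROJECT_PARENT = r"D:\django_projects"   # the directory that CONTAINS MyProject
        if PROJECT_PARENT not in sys.path:
            sys.path.insert(0, PROJECT_PARENT)

        # Must be the dotted module name, not a filesystem path.
        os.environ.setdefault("DJANGO_SETTINGS_MODULE", "MyProject.settings")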

    Read the article

  • PHP Curl and Loop based on a numeric value

    - by danit
    I'm using the Twitter API to collect the number of tweets I've favorited (well, to be accurate, the total number of pages of favorited tweets). I use this URL:

        http://api.twitter.com/1/users/show/username.xml

    and grab the XML element 'favorites_count'. For this example, let's assume favorites_count = 5.

    The Twitter API uses this URL to get the favorites (must be authenticated):

        http://twitter.com/favorites.xml

    You can only get the last 20 favorites using this URL; however, you can alter the URL to include a 'page' option by adding ?page=3 to the end of the favorites URL, e.g.:

        http://twitter.com/favorites.xml?page=2

    So what I need to do is use cURL (I think) to collect the favorite tweets, but using the URLs:

        http://twitter.com/favorites.xml?page=1
        http://twitter.com/favorites.xml?page=2
        http://twitter.com/favorites.xml?page=3
        http://twitter.com/favorites.xml?page=4
        etc...

    Some kind of loop to visit each URL, collect the tweets, and then output the contents. Can anyone help with this?

    - Need to use cURL to authenticate
    - Collect the number of pages of tweets (already scripted this)
    - Then use a loop to go through each page URL based on the pages value (a sketch of that loop follows below)
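    The paging part on its own is just a counted loop over the page parameter; here is a language-neutral sketch in Python (authentication and XML parsing are deliberately left as placeholders, since those are the cURL-specific pieces, and the page count is whatever value was already derived from favorites_count):

        def fetch_page(url):
            """Placeholder for the authenticated request made with cURL in PHP."""
            raise NotImplementedError

        def collect_favorites(favorites_pages):
            tweets = []
            for page in range(1, favorites_pages + 1):
                url = "http://twitter.com/favorites.xml?page=%d" % page
                xml = fetch_page(url)    # authenticated request for one page
                tweets.append(xml)       # parse the XML and extract tweets here instead
            return tweets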

    Read the article

  • Android: Android SDK and AVD Manager doesn't launch after SDK upgrade?

    - by user187532
    Hi folks, I have an Android development Eclipse setup on Mac OS X. I recently upgraded the Android SDK from 1.5 to the available versions (1.6, 2.0.1 and 2.1) and the docs; after the upgrade I restarted my MacBook, and all the new versions were installed. After this, when I try to launch "Window > Android SDK and AVD Manager", it doesn't launch at all. What might be the reason? Does anyone have a solution? The reason I'm trying to launch the Android SDK and AVD Manager is that I need to add new Android Virtual Device (AVD) targets for the 2.0.1 and 2.1 versions. Thank you in advance for your suggestions.

    Read the article

  • Visual Studio ReportViewer repeating data block on every page

    - by muhan
    I am using ReportViewer to generate a sales invoice to be printed by the user. How can I get data-bound fields to print on every page of a multi-page invoice? The invoice is printed on a pre-printed form. I want the printed form to look roughly like:

        page 1
            customer
            john smith
            123 main st.
            city, CA 90000

            some item1    $100
            some item2    $150
            some item3    $150

        page 2
            customer
            john smith
            123 main st.
            city, CA 90000

            some item4    $500
            some item5    $250
            some item6    $950

            Total        $2100

    I'm using one list, which contains data-bound textbox fields for the customer info, and a table for the items. The problem is that if there are many items, so that they flow over to page 2, the next page contains only items and not the customer info, which needs to be printed on the second page as well. I tried using the page header, but I can't use data-bound items in the page header. Please help!

    Read the article

  • What's the reason for leaving an extra blank line at the end of a code file?

    - by Lord Torgamus
    Eclipse and MyEclipse create new Java files with an extra blank line after the last closing brace by default. I think CodeWarrior did the same thing a few years back, and that some people leave such blank lines in their code either by intention or laziness. So, this seems to be at least a moderately widespread behavior. As a former human language editor -- copy editing newspapers, mostly -- I find that those lines look like sloppiness or accidents, and I can't think of a reason to leave them in source files. I know they don't affect compilation in C-style languages, including Java. Are there benefits to having those lines, and if so, what are they?

    Read the article

  • jqGrid local data manipulation; problem with row ids when deleting and adding new rows

    - by Sam
    I'm using jqGrid as a client-side grid input, allowing the user to input multiple records before POSTing all the data back at once. I'm having a problem where, if the user has added a few records (say 3), the ids for the records will be 1, 2, 3. If the user deletes record 2, you're left with 1 and 3 as the ids of the records. When the user now adds a new record, jqGrid assigns that record the id 3 again, since it just seems to count the total records and increment that by one for the new record. This causes problems when selecting rows, as the row ids are now 1, 3 and 3. Does anyone know how to access the row ids of the records? I could probably use the afterSubmit event and reassign the row ids, increasing from 1 (so after I delete row id 2, this would set the other rows' ids to 1 and 2). Any other suggestions to solve this problem? Thanks.

    Read the article

  • Log4j configuration: ConsoleAppender to System.err?

    - by Gerard
    I saw that one of our tools uses a ConsoleAppender to System.err next to System.out in its log4j configuration. Fragments of the configuration:

        <appender name="CONSOLE" class="org.apache.log4j.ConsoleAppender">
            <!-- Log to STDOUT. -->
            <param name="Target" value="System.out"/>
            ....

        <appender name="CONSOLE_ERR" class="org.apache.log4j.ConsoleAppender">
            <!-- Log to STDERR. -->
            <param name="Target" value="System.err"/>

    In Eclipse this results in duplicate messages in the console, so I believe it is of no use, right? On the Linux server I see only one message in the PuTTY console, so where would that System.err message go?

    Read the article

  • Fixing warning from git

    - by japancheese
    I've been using a workflow of making a git repository on a remote central server, cloning that repo on my local dev machine, doing some work, and then pushing the changes back to the same repo on the remote server. However (and I believe this started after a recent update I did to git), after pushing up a change, I'm getting the following warning:

        Counting objects: 2724, done.
        Delta compression using up to 2 threads.
        Compressing objects: 100% (2666/2666), done.
        Writing objects: 100% (2723/2723), 5.90 MiB | 313 KiB/s, done.
        Total 2723 (delta 219), reused 0 (delta 0)
        warning: updating the currently checked out branch; this may cause confusion,
        as the index and working tree do not reflect changes that are now in HEAD.

    Can someone explain to me exactly what this warning means, and what I'm doing wrong in my workflow that causes it?

    Read the article

  • Blueimp file upload: send data to the server on .fileupload

    - by MyName
    OK, I've Googled, Googled and Googled again, to no avail, for how I can send data to the server (like an ajax call's data option) with the file upload:

        $('#file_upload').fileupload({
            dataType: 'json',
            url: "@(Url.Action("UploadFiles", "ExcelUpload"))",
            // formData: function(form) { return [{ name: "dataTable", value : "@(Model)"}];},
            progressall: function (e, data) {
                $(this).find('.progressbar').progressbar({ value: parseInt(data.loaded / data.total * 100, 10) });
            },
            done: function (e, data) {
                BadFile(e, data);
            }
        });

    The controller would look something like this:

        [HttpPost]
        public ContentResult UploadFiles(Mytype param1, MyType Param2)
        {
            ..
        }

    I want to do something similar to this:

        $.ajax({
            url: "@(Url.Action("Action", "Controller"))",
            type: "post",
            data: {
                param1: value,
                param2: @(Model)
            }
        });

    on the fileupload callback. Is this possible? How can I pass values to the server side? Should I switch to a different uploader? Please help me out! I need to resolve this as soon as possible.

    Read the article
