Search Results

Search found 4382 results on 176 pages for 'priority queue'.


  • Find Port Number and Domain Name to connect to Hive Table

    - by user1419563
    I am new to Hive, MapReduce and Hadoop. I use PuTTY to connect to the Hive table and access the records in the tables. I open PuTTY, enter ares-ingest.vip.host.com as the host name, click Open, enter my username and password, and then run a few commands to get to the Hive shell:

    ```
    $ bash
    bash-3.00$ hive
    Hive history file=/tmp/rjamal/hive_job_log_rjamal_201207010451_1212680168.txt
    hive> set mapred.job.queue.name=hdmi-technology;
    hive> select * from table LIMIT 1;
    ```

    So my question is: I was trying to connect to the Hive tables using the SQuirreL SQL Client, with the connection URL jdbc:hive://ares-ingest.vip.host.com:10000/default, and whenever I try to connect with these attributes I always get:

    ```
    Hive: Could not establish connection to ares-ingest.vip.host.com:10000/default:
    java.net.ConnectException: Connection timed out: connect
    ```

    It might be that I am using the wrong port number or domain name here. Is there any way, from the command prompt, to find out the domain name and the port number where the Hive server is running, so that I can use them to connect to the Hive table from the SQuirreL SQL Client? As I understand it, the host and port are determined by where the Hive server is running.
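
    A quick client-side sanity check that works from any machine with Python installed is a plain socket probe: it tells you whether anything is listening on a candidate host/port pair before JDBC is involved at all. This is a minimal sketch, not a confirmed fix; the host is the one from the question, and the ports are just the conventional Hive Thrift default (10000) plus a common alternative, both assumptions:

    ```python
    import socket

    host = "ares-ingest.vip.host.com"  # host from the question
    ports = [10000, 10001]             # assumed candidates, not confirmed

    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(5)  # fail fast instead of hanging on a filtered port
        try:
            s.connect((host, port))
            print("listening: %s:%d" % (host, port))
        except OSError as exc:
            print("no listener: %s:%d (%s)" % (host, port, exc))
        finally:
            s.close()
    ```

    On the server side, the port is usually whatever the Hive server process was started with (10000 by default), so checking how the server is launched there is the authoritative answer; a timeout rather than "connection refused" often points at a firewall rather than a wrong port.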

  • Programming language shootout: code most like pseudocode for Dijkstra's Algorithm

    - by Casebash
    Okay, so this question here asked which language is most like executable pseudocode, so why not find out by actually writing some code! Here we have a competition where I will award a 100 point bounty (I know it's not much, but I am poor after the recalc) to the code which most resembles this pseudocode. I've read through this a few times, so I'm pretty sure that the pseudocode below is correct and about as unambiguous as pseudocode can be. Personally, I'm going to have a go in Python and probably Haskell as well, but I'm just learning the latter, so my attempt will probably be pretty poor.

    Note: Obviously, to implement anything looking like this you'll have to define quite a few library functions.

    ```
    define DirectedGraph G with:
        Vertices as V, Edges as E
    define Vertex A, Z
    declare each e in E as having properties:
        Boolean fixed with: initial=false
        Real minSoFar with: initial=0 for A else infinity
    define PriorityQueue pq with:
        objects=V
        initial=A
        priority v=v.minSoFar
    create triggers for v in V:
        when v.minSoFar event reduced then pq.addOrUpdate v
        when v.fixed event becomesTrue then pq.remove v
    Repeat until Z.fixed==True:
        define Vertex U=pq.pop()
        U.fixed=True
        for Edge E adjacentTo U with other Vertex V:
            V.minSoFar=U.minSoFar+length(E) if reducesValue
    return Z.name, Z.minSoFar
    ```
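
    Since the question explicitly invites Python attempts, here is one hedged sketch of how the pseudocode maps onto Python's standard heapq module. It is an illustration of the algorithm, not a bounty entry; the graph representation (a dict of adjacency dicts) and all names are my own assumptions. Instead of triggers that update or remove queue entries, it pushes a fresh entry whenever minSoFar is reduced and skips stale entries on pop:

    ```python
    import heapq

    def dijkstra(graph, a, z):
        """graph: dict mapping vertex -> {neighbour: edge_length}."""
        min_so_far = {v: float("inf") for v in graph}
        min_so_far[a] = 0
        fixed = set()
        pq = [(0, a)]  # (priority, vertex)

        while pq:
            dist, u = heapq.heappop(pq)
            if u in fixed:
                continue          # stale entry: u was already fixed
            fixed.add(u)
            if u == z:
                return z, dist    # Z.fixed == True
            for v, length in graph[u].items():
                candidate = dist + length
                if candidate < min_so_far[v]:   # "if reducesValue"
                    min_so_far[v] = candidate
                    heapq.heappush(pq, (candidate, v))

        raise ValueError("Z is unreachable from A")

    # usage sketch
    g = {"A": {"B": 1, "C": 4}, "B": {"C": 2, "Z": 5}, "C": {"Z": 1}, "Z": {}}
    print(dijkstra(g, "A", "Z"))  # ('Z', 4)
    ```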

  • Dojo addOnLoad, but is Dojo loaded?

    - by adamrice32
    I've encountered what seems like a chicken-and-egg problem, and I have what I think is a logical solution. However, it occurred to me that others must have encountered something similar, so I figured I'd float it out there for the masses. The situation is that I want to use Dojo's addOnLoad function to queue up a number of callbacks which should be executed after the DOM has finished rendering on the client side. So what I'm doing is as follows:

    ```html
    <html>
      <head>
        <script type="text/javascript" src="dojo.xd.js"></script>
        ...
      </head>
      <body>
        ...
        <script type="text/javascript">
          dojo.addOnLoad( ... );
          dojo.addOnLoad( ... );
          ...
        </script>
      </body>
    </html>
    ```

    Now, the issue is that I seem to be calling dojo.addOnLoad before the entire Dojo library has been downloaded by the browser. This makes sense in a way, because the inline SCRIPT contents should be executed before the entire DOM is loaded (and the normal body onload callback is triggered).

    My question is this: is my approach sound, or would it make more sense to register a normal/standard body onload JavaScript callback which does the same work that each of the dojo.addOnLoad calls is doing in the SCRIPT block? Of course, this begs the question: why would you ever use dojo.addOnLoad if you're not guaranteed that the Dojo library will be loaded prior to using the library? Hopefully this situation makes sense to someone other than me. Seems like someone else may have encountered this situation. Thoughts?

    Best Regards,
    Adam Rice

  • Schedule multiple events with NSTimer?

    - by AWright4911
    I have a schedule cache stored in a pList. For the example below, I have scheduled times of April 13, 2010 2:00PM and April 13, 2010 2:05PM. How can I add both of these to a queue to fire on their own?

    ```
    item 0
        Hour    14
        Minute  00
        Month   04
        Day     13
        Year    2010
    item 1
        Hour    14
        Minute  05
        Month   04
        Day     13
        Year    2010
    ```

    This is how I am attempting to schedule multiple events to fire at a specific date/time:

    ```objc
    -(void) buildScheduleCache {
        MPNotifyViewController *notifier = [MPNotifyViewController alloc];
        [notifier setStatusText:@"Rebuilding schedule cache, this will only take a moment."];
        [notifier show];

        NSCalendarDate *now = [NSCalendarDate calendarDate];
        NSFileManager *manager = [[NSFileManager defaultManager] autorelease];
        path = @"/var/mobile/Library/MobileProfiles/Custom Profiles";
        theProfiles = [manager directoryContentsAtPath:path];
        myPrimaryinfo = [[NSMutableArray arrayWithCapacity:6] retain];
        keys = [NSArray arrayWithObjects:@"Profile",@"MPSYear",@"MPSMonth",@"MPSDay",@"MPSHour",@"MPSMinute",nil];

        for (NSString *profile in theProfiles) {
            plistDict = [[[NSMutableDictionary alloc] initWithContentsOfFile:[NSString stringWithFormat:@"%@/%@",path,profile]] autorelease];
            [myPrimaryinfo addObject:[NSDictionary dictionaryWithObjects:
                [NSArray arrayWithObjects:
                    [NSString stringWithFormat:@"%@",profile],
                    [NSString stringWithFormat:@"%@",[plistDict objectForKey:@"MPSYear"]],
                    [NSString stringWithFormat:@"%@",[plistDict objectForKey:@"MPSMonth"]],
                    [NSString stringWithFormat:@"%@",[plistDict objectForKey:@"MPSDay"]],
                    [NSString stringWithFormat:@"%@",[plistDict objectForKey:@"MPSHour"]],
                    [NSString stringWithFormat:@"%@",[plistDict objectForKey:@"MPSMinute"]],
                    nil] forKeys:keys]];

            profileSched = [NSCalendarDate dateWithYear:[plistDict objectForKey:@"MPSYear"]
                                                  month:[plistDict objectForKey:@"MPSMonth"]
                                                    day:[plistDict objectForKey:@"MPSDay"]
                                                   hour:[plistDict objectForKey:@"MPSHour"]
                                                 minute:[plistDict objectForKey:@"MPSMinute"]
                                                 second:01
                                               timeZone:[now timeZone]];
            [self rescheduleTimer];
        }

        NSString *testPath = @"/var/mobile/Library/MobileProfiles/Schedules.plist";
        [myPrimaryinfo writeToFile:testPath atomically:YES];
    }

    -(void) rescheduleTimer {
        timer = [[NSTimer alloc] initWithFireDate:profileSched
                                         interval:0.0f
                                           target:self
                                         selector:@selector(theFireEvent)
                                         userInfo:nil
                                          repeats:YES];
        NSRunLoop *runLoop = [NSRunLoop currentRunLoop];
        [runLoop addTimer:timer forMode:NSDefaultRunLoopMode];
    }
    ```
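
    As a language-neutral illustration of the idea being asked about (each entry becomes its own event in a time-ordered queue, rather than one timer being reused in a loop), here is a hedged Python sketch using the standard sched module; the two timestamps are the ones from the plist above, and everything else is my own naming:

    ```python
    import sched
    import time
    from datetime import datetime

    scheduler = sched.scheduler(time.time, time.sleep)

    def fire(label):
        print("fired:", label, "at", datetime.now())

    # One event per plist entry, at an absolute timestamp; the scheduler
    # keeps all pending events in an internal priority queue.
    for when in [datetime(2010, 4, 13, 14, 0), datetime(2010, 4, 13, 14, 5)]:
        scheduler.enterabs(when.timestamp(), 1, fire, (when.isoformat(),))

    scheduler.run()  # blocks; past-dated events fire immediately
    ```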

  • Large number of simultaneous long-running operations in Qt

    - by Hostile Fork
    I have some long-running operations that number in the hundreds. At the moment they are each on their own thread. My main goal in using threads is not to speed these operations up. The more important thing in this case is that they appear to run simultaneously.

    I'm aware of cooperative multitasking and fibers. However, I'm trying to avoid anything that would require touching the code in the operations, e.g. peppering them with things like yieldToScheduler(). I also don't want to prescribe that these routines be stylized to emit queues of bite-sized task items... I want to treat them as black boxes.

    For the moment I can live with these downsides:

    - Maximum # of threads tends to be O(1000)
    - Cost per thread is O(1MB)

    To address the bad cache performance due to context switches, I did have the idea of a timer which would juggle the priorities such that only idealThreadCount() threads were ever at Normal priority, with all the rest set to Idle. This would let me widen the timeslices, which would mean fewer context switches and still be okay for my purposes.

    Question #1: Is that a good idea at all? One certain downside is that it won't work on Linux (the docs say there is no QThread::setPriority() there).

    Question #2: Any other ideas or approaches? Is QtConcurrent thinking about this scenario?

    (Some related reading: how-many-threads-does-it-take-to-make-them-a-bad-choice, many-threads-or-as-few-threads-as-possible, maximum-number-of-threads-per-process-in-linux)

  • Most efficient way of creating tree from adjacency list

    - by Jeff Meatball Yang
    I have an adjacency list of objects (rows loaded from a SQL database with each key and its parent key) that I need to use to build an unordered tree. It's guaranteed to have no cycles. This is taking way too long (it processed only ~3K out of 870K nodes in about 5 minutes, running on my Core 2 Duo workstation with plenty of RAM). Any ideas on how to make this faster?

    ```csharp
    public class StampHierarchy {
        private StampNode _root;
        private SortedList<int, StampNode> _keyNodeIndex;

        // takes a list of nodes and builds a tree, starting at _root
        private void BuildHierarchy(List<StampNode> nodes)
        {
            Stack<StampNode> processor = new Stack<StampNode>();
            _keyNodeIndex = new SortedList<int, StampNode>(nodes.Count);

            // find the root
            _root = nodes.Find(n => n.Parent == 0);

            // find children...
            processor.Push(_root);
            while (processor.Count != 0)
            {
                StampNode current = processor.Pop();

                // keep a direct link to the node via the key
                _keyNodeIndex.Add(current.Key, current);

                // add children
                current.Children.AddRange(nodes.Where(n => n.Parent == current.Key));

                // queue the children
                foreach (StampNode child in current.Children)
                {
                    processor.Push(child);
                    nodes.Remove(child); // thought this might help the Where above
                }
            }
        }
    }

    public class StampNode {
        // properties: int Key, int Parent, string Name, List<StampNode> Children
    }
    ```
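
    For what it's worth, the quadratic cost here comes from scanning the whole node list once per node (the Where plus the Remove). The standard linear-time approach is to group the rows by parent key once, up front, so each node's children come from a single dictionary lookup. A hedged Python sketch of that idea, with my own names and a (key, parent) row format standing in for StampNode:

    ```python
    from collections import defaultdict

    def build_hierarchy(rows):
        """rows: iterable of (key, parent) pairs; parent == 0 marks the root."""
        children_of = defaultdict(list)   # parent key -> list of child keys
        for key, parent in rows:
            children_of[parent].append(key)

        tree = {key: [] for key, _ in rows}   # key -> list of child keys
        root = children_of[0][0]              # assumes exactly one root

        stack = [root]
        while stack:
            current = stack.pop()
            tree[current] = children_of[current]  # O(1) lookup, no list scan
            stack.extend(children_of[current])
        return root, tree

    rows = [(1, 0), (2, 1), (3, 1), (4, 2)]
    print(build_hierarchy(rows))  # (1, {1: [2, 3], 2: [4], 3: [], 4: []})
    ```

    The same shape works in C# with a Dictionary<int, List<StampNode>> built in one pass before the stack walk.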

  • Can't connect to SQL Server 2008 - looks like Shared Memory problem

    - by Proposition Joe
    I am unable to connect to my local instance of SQL Server 2008 Express using SQL Server Management Studio. I believe the problem is related to a change I made to the connection protocols. Before the error occurred, I had Shared Memory enabled and Named Pipes and TCP/IP disabled. I then enabled both Named Pipes and TCP/IP, and this is when I started experiencing the problem.

    When I try to connect to the server with SSMS (with either my SQL Server sysadmin login or with Windows authentication), I get the following error message:

    ```
    A connection was successfully established with the server, but then an error occurred
    during the login process. (provider: Named Pipes Provider, error: 0 - No process is
    on the other end of the pipe.) (Microsoft SQL Server, Error: 233)
    ```

    Why is it returning a Named Pipes error? Why would it not just use Shared Memory, which has a higher priority in the list of connection protocols? It seems like it is not listening on Shared Memory for some reason. When I set Named Pipes to enabled and try to connect, I get the same error message. My Windows account does not have administrator privileges on my computer - perhaps this makes a difference in some way (as some of the discussion in this post about a "SuperSocketNetLib\Lpc" registry key seems to suggest).

  • .NET Remoting memory leak?

    - by PrimeTSS
    I have a Remoting class configured as a Singleton:

    ```xml
    <configuration>
      <system.runtime.remoting>
        <application>
          <service>
            <wellknown mode="Singleton"
                       type="PTSSLinkClasses.PTSSLinkClientDesktopRemotable, PTSSLinkClasses"
                       objectUri="PTSSLinkDesktop" />
          </service>
          <channels>
            <channel ref="http" port="8901"/>
          </channels>
        </application>
      </system.runtime.remoting>
    </configuration>
    ```

    It is created within a "server" service. Another client service consumes this remote object, calling it every 0.5 seconds using a timer (polling, for testing). If the server service is stopped, so that the remote object is not available, memory usage for the client service keeps increasing.

    I have overridden InitializeLifetimeService to return null:

    ```csharp
    public override Object InitializeLifetimeService()
    {
        return null;
    }
    ```

    If a remote object is not available, does .NET queue all the call requests to this object until all the memory is consumed? How can I detect that the remote object is not available and stop trying to call the remote method?

  • ez Components and the AWS PHP SDK make ez Components freak out

    - by David
    Hi, I am trying to work with ez Components and the AWS PHP SDK at the same time. I have a file called resize.php which just handles resizing images using the ez Components ImageTransition tools. I queue the image for resizing in Amazon SQS. If I load the AWS PHP SDK and ez Components in the same file, PHP always complains about not finding the ez Components classes. The code looks something like this:

    amazonSQS.php:

    ```php
    require 'modules/resize.php';
    require 'modules/aws/sdk.class.php';

    $sqs = new AmazonSQS();
    $response = $sqs->send_message($queue_url, $message);
    ```

    resize.php:

    ```php
    function resize_image($filename) {
        $settings = new ezcImageConverterSettings(
            array(
                //new ezcImageHandlerSettings( 'GD', 'ezcImageGdHandler' ),
                new ezcImageHandlerSettings( 'ImageMagick', 'ezcImageImagemagickHandler' ),
            )
        );
    ```

    Error message:

    ```
    Fatal error: Class 'ezcImageConverterSettings' not found in
    /home/www.com/public_html/modules/resize.php on line 10
    ```

    If I call resize.php from another PHP file which does not include the AWS SDK, it works fine. I load ez Components like this:

    ```php
    require 'ezc/Base/ezc_bootstrap.php';
    ```

    It is installed as a PEAR package. Any ideas?

  • Practical value for concurrent-request-timeout parameter

    - by Andrei
    In the Seam Reference Guide, one can find this paragraph: "We can set a sensible default for the concurrent request timeout (in ms) in components.xml:"

    ```xml
    <core:manager concurrent-request-timeout="500" />
    ```

    However, we found that 500 ms is not nearly enough time for most of the cases we had to deal with, especially with the severe restriction Seam places on conversation access. In our application we have a combination of page-scoped Ajax requests (triggered by various user actions), some globally scoped polling notification logic (part of the header, so included in every page), and regular links that invoke actions and/or navigate to other pages. Therefore, we get the dreaded concurrent-access-to-conversation exception way too often, even without any significant load on the site.

    After researching the options for quite a bit, we ended up bumping this value to several seconds (we're debating whether to raise it to 10s), as none of the recommended solutions seemed able to solve our issue completely (even forcing a global queue for all the Ajax requests would still leave us exposed to a user deciding to click a link right when one of our polling calls was in progress). And we'd much rather have the users wait for a second or two instead of getting an error page just because they clicked a link at the wrong moment.

    And now to the question: is there something obvious we're missing (like a way to allow concurrent access to conversations while taking care of the needed locking ourselves, for instance :)? How do people solve this problem (Ajax requests mixed with user-driven interaction) in Seam? Disabling all the links on the page while Ajax requests are in progress (as suggested by one blog post) is really not a viable option. Any other suggestions?

    TIA,
    Andrei

  • Modify code to change timestamp timezone in sitemap

    - by Aahan Krish
    Below is the code from a plugin I use for sitemaps. I would like to know if there's a way to enforce a different timezone on all the date variables and functions in it. If not, how do I modify the code to change the timestamps to reflect a different timezone - say, for example, the America/New_York timezone? Please search for "date" to quickly get to the relevant code blocks. Code on Pastebin.

    ```php
    <?php
    /**
     * @package XML_Sitemaps
     */
    class WPSEO_Sitemaps {

        .....

        /**
         * Build the root sitemap -- example.com/sitemap_index.xml -- which lists sub-sitemaps
         * for other content types.
         *
         * @todo lastmod for sitemaps?
         */
        function build_root_map() {
            global $wpdb;

            $options = get_wpseo_options();

            $this->sitemap = '<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">' . "\n";

            $base = $GLOBALS['wp_rewrite']->using_index_permalinks() ? 'index.php/' : '';

            // reference post type specific sitemaps
            foreach ( get_post_types( array( 'public' => true ) ) as $post_type ) {
                if ( $post_type == 'attachment' )
                    continue;

                if ( isset( $options['post_types-' . $post_type . '-not_in_sitemap'] ) && $options['post_types-' . $post_type . '-not_in_sitemap'] )
                    continue;

                $count = $wpdb->get_var( $wpdb->prepare( "SELECT COUNT(ID) FROM $wpdb->posts WHERE post_type = %s AND post_status = 'publish' LIMIT 1", $post_type ) );

                // don't include post types with no posts
                if ( !$count )
                    continue;

                $n = ( $count > 1000 ) ? (int) ceil( $count / 1000 ) : 1;
                for ( $i = 0; $i < $n; $i++ ) {
                    $count = ( $n > 1 ) ? $i + 1 : '';

                    if ( empty( $count ) || $count == $n ) {
                        $date = $this->get_last_modified( $post_type );
                    } else {
                        $date = $wpdb->get_var( $wpdb->prepare( "SELECT post_modified_gmt FROM $wpdb->posts WHERE post_status = 'publish' AND post_type = %s ORDER BY post_modified_gmt ASC LIMIT 1 OFFSET %d", $post_type, $i * 1000 + 999 ) );
                        $date = date( 'c', strtotime( $date ) );
                    }

                    $this->sitemap .= '<sitemap>' . "\n";
                    $this->sitemap .= '<loc>' . home_url( $base . $post_type . '-sitemap' . $count . '.xml' ) . '</loc>' . "\n";
                    $this->sitemap .= '<lastmod>' . htmlspecialchars( $date ) . '</lastmod>' . "\n";
                    $this->sitemap .= '</sitemap>' . "\n";
                }
            }

            // reference taxonomy specific sitemaps
            foreach ( get_taxonomies( array( 'public' => true ) ) as $tax ) {
                if ( in_array( $tax, array( 'link_category', 'nav_menu', 'post_format' ) ) )
                    continue;

                if ( isset( $options['taxonomies-' . $tax . '-not_in_sitemap'] ) && $options['taxonomies-' . $tax . '-not_in_sitemap'] )
                    continue;

                // don't include taxonomies with no terms
                if ( !$wpdb->get_var( $wpdb->prepare( "SELECT term_id FROM $wpdb->term_taxonomy WHERE taxonomy = %s AND count != 0 LIMIT 1", $tax ) ) )
                    continue;

                // Retrieve the post_types that are registered to this taxonomy and then
                // retrieve last modified date for all of those combined.
                $taxobj = get_taxonomy( $tax );
                $date = $this->get_last_modified( $taxobj->object_type );

                $this->sitemap .= '<sitemap>' . "\n";
                $this->sitemap .= '<loc>' . home_url( $base . $tax . '-sitemap.xml' ) . '</loc>' . "\n";
                $this->sitemap .= '<lastmod>' . htmlspecialchars( $date ) . '</lastmod>' . "\n";
                $this->sitemap .= '</sitemap>' . "\n";
            }

            // allow other plugins to add their sitemaps to the index
            $this->sitemap .= apply_filters( 'wpseo_sitemap_index', '' );
            $this->sitemap .= '</sitemapindex>';
        }

        /**
         * Build a sub-sitemap for a specific post type -- example.com/post_type-sitemap.xml
         *
         * @param string $post_type Registered post type's slug
         */
        function build_post_type_map( $post_type ) {
            $options = get_wpseo_options();

            ............

            // We grab post_date, post_name, post_author and post_status too so we can throw these
            // objects into get_permalink, which saves a get_post call for each permalink.
            while ( $total > $offset ) {
                $join_filter  = apply_filters( 'wpseo_posts_join', '', $post_type );
                $where_filter = apply_filters( 'wpseo_posts_where', '', $post_type );

                // Optimized query per this thread:
                // http://wordpress.org/support/topic/plugin-wordpress-seo-by-yoast-performance-suggestion
                // Also see http://explainextended.com/2009/10/23/mysql-order-by-limit-performance-late-row-lookups/
                $posts = $wpdb->get_results( "SELECT l.ID, post_content, post_name, post_author, post_parent, post_modified_gmt, post_date, post_date_gmt
                    FROM (
                        SELECT ID FROM $wpdb->posts {$join_filter}
                        WHERE post_status = 'publish'
                        AND post_password = ''
                        AND post_type = '$post_type'
                        {$where_filter}
                        ORDER BY post_modified ASC
                        LIMIT $steps OFFSET $offset
                    ) o JOIN $wpdb->posts l ON l.ID = o.ID ORDER BY l.ID" );
                /* $posts = $wpdb->get_results("SELECT ID, post_content, post_name, post_author, post_parent, post_modified_gmt, post_date, post_date_gmt FROM $wpdb->posts {$join_filter} WHERE post_status = 'publish' AND post_password = '' AND post_type = '$post_type' {$where_filter} ORDER BY post_modified ASC LIMIT $steps OFFSET $offset"); */

                $offset = $offset + $steps;

                foreach ( $posts as $p ) {
                    $p->post_type   = $post_type;
                    $p->post_status = 'publish';
                    $p->filter      = 'sample';

                    if ( wpseo_get_value( 'meta-robots-noindex', $p->ID ) && wpseo_get_value( 'sitemap-include', $p->ID ) != 'always' )
                        continue;
                    if ( wpseo_get_value( 'sitemap-include', $p->ID ) == 'never' )
                        continue;
                    if ( wpseo_get_value( 'redirect', $p->ID ) && strlen( wpseo_get_value( 'redirect', $p->ID ) ) > 0 )
                        continue;

                    $url = array();
                    $url['mod'] = ( isset( $p->post_modified_gmt ) && $p->post_modified_gmt != '0000-00-00 00:00:00' ) ? $p->post_modified_gmt : $p->post_date_gmt;
                    $url['chf'] = 'weekly';
                    $url['loc'] = get_permalink( $p );

                    .............
        }

        /**
         * Build a sub-sitemap for a specific taxonomy -- example.com/tax-sitemap.xml
         *
         * @param string $taxonomy Registered taxonomy's slug
         */
        function build_tax_map( $taxonomy ) {
            $options = get_wpseo_options();

            ..........

                // Grab last modified date
                $sql = "SELECT MAX(p.post_date) AS lastmod
                    FROM $wpdb->posts AS p
                    INNER JOIN $wpdb->term_relationships AS term_rel ON term_rel.object_id = p.ID
                    INNER JOIN $wpdb->term_taxonomy AS term_tax ON term_tax.term_taxonomy_id = term_rel.term_taxonomy_id
                        AND term_tax.taxonomy = '$c->taxonomy'
                        AND term_tax.term_id = $c->term_id
                    WHERE p.post_status = 'publish'
                    AND p.post_password = ''";
                $url['mod'] = $wpdb->get_var( $sql );
                $url['chf'] = 'weekly';
                $output .= $this->sitemap_url( $url );
            }
        }

        /**
         * Build the <url> tag for a given URL.
         *
         * @param array $url Array of parts that make up this entry
         * @return string
         */
        function sitemap_url( $url ) {
            if ( isset( $url['mod'] ) )
                $date = mysql2date( "Y-m-d\TH:i:s+00:00", $url['mod'] );
            else
                $date = date( 'c' );

            $output  = "\t<url>\n";
            $output .= "\t\t<loc>" . $url['loc'] . "</loc>\n";
            $output .= "\t\t<lastmod>" . $date . "</lastmod>\n";
            $output .= "\t\t<changefreq>" . $url['chf'] . "</changefreq>\n";
            $output .= "\t\t<priority>" . str_replace( ',', '.', $url['pri'] ) . "</priority>\n";

            if ( isset( $url['images'] ) && count( $url['images'] ) > 0 ) {
                foreach ( $url['images'] as $img ) {
                    $output .= "\t\t<image:image>\n";
                    $output .= "\t\t\t<image:loc>" . esc_html( $img['src'] ) . "</image:loc>\n";
                    if ( isset( $img['title'] ) )
                        $output .= "\t\t\t<image:title>" . _wp_specialchars( html_entity_decode( $img['title'], ENT_QUOTES, get_bloginfo( 'charset' ) ) ) . "</image:title>\n";
                    if ( isset( $img['alt'] ) )
                        $output .= "\t\t\t<image:caption>" . _wp_specialchars( html_entity_decode( $img['alt'], ENT_QUOTES, get_bloginfo( 'charset' ) ) ) . "</image:caption>\n";
                    $output .= "\t\t</image:image>\n";
                }
            }
            $output .= "\t</url>\n";

            return $output;
        }

        /**
         * Get the modification date for the last modified post in the post type:
         *
         * @param array $post_types Post types to get the last modification date for
         * @return string
         */
        function get_last_modified( $post_types ) {
            global $wpdb;
            if ( !is_array( $post_types ) )
                $post_types = array( $post_types );

            $result = 0;
            foreach ( $post_types as $post_type ) {
                $key  = 'lastpostmodified:gmt:' . $post_type;
                $date = wp_cache_get( $key, 'timeinfo' );
                if ( !$date ) {
                    $date = $wpdb->get_var( $wpdb->prepare( "SELECT post_modified_gmt FROM $wpdb->posts WHERE post_status = 'publish' AND post_type = %s ORDER BY post_modified_gmt DESC LIMIT 1", $post_type ) );
                    if ( $date )
                        wp_cache_set( $key, $date, 'timeinfo' );
                }
                if ( strtotime( $date ) > $result )
                    $result = strtotime( $date );
            }

            // Transform to W3C Date format.
            $result = date( 'c', $result );
            return $result;
        }
    }

    global $wpseo_sitemaps;
    $wpseo_sitemaps = new WPSEO_Sitemaps();
    ```

  • Agile and Scrum burning me down - please help me figure out the truth

    - by jadook
    Hi all, a while back I installed MS TFS 2008 and then started preparing myself to use the Agile process guidance template shipped with TFS. With a little googling I worked through Mike Cohn's materials:

    - I watched his Google-sponsored conference talks on YouTube: http://www.youtube.com/watch?v=fb9Rzyi8b90 and http://www.youtube.com/watch?v=jeT0pOVg0EI
    - I read his book "Agile Estimating and Planning"
    - I watched the video series on his website: http://www.mountaingoatsoftware.com/presentations-tag/video-recorded

    I was very happy absorbing the techniques he uses with teams, and how great a software process/methodology agile and Scrum are - until I saw Mike answering a question regarding the architect role and talking about the requirements document. At that point everything started falling apart, for the following reason:

    Last year I was assigned to do the full analysis, including requirements gathering, for a big, very high priority project. Within two months of hard work, dedication and commitment I delivered the whole analysis to the full satisfaction of the customer and my boss, with zero amendments. Later on, the project entered the architecture and development phases. Because the system included many competitive and exciting features, I requested patenting it, and that is now in process.

    So imagine you are the kind of person who loves facing all kinds of challenges and returning with excellent experience and results for the stakeholders and yourself. How fairly will agile and Scrum processes credit and admit your talent and passion, while the Scrum master/coach treats the team as one unit that accomplishes user stories and converges through a trial-and-error approach?

    With those dark thoughts about agile and Scrum, I found many people who are "anti-agile", and on top of them "Crispin Rogers Johnson": http://agile-crispin.blogspot.com/ - that guy makes a counter-statement for everything Mike Cohn talks about.

    I really don't know what to do next, so any guidance will be appreciated. Thanks.

  • Is ASP.NET MVC destined to replace Webforms?

    - by johnny
    I found these questions, but a couple of them are a little old:

    - http://stackoverflow.com/questions/191556/should-i-pursue-asp-net-webforms-or-asp-net-mvc
    - http://stackoverflow.com/questions/88787/do-you-think-asp-net-mvc-will-compete-with-asp-net-webforms
    - http://stackoverflow.com/questions/722637/asp-net-mvc-asp-net-webforms-why

    I do not believe these are duplicates, and they might be old enough that new light can be shed. If not, please close this.

    I know that no one framework or language is necessarily the only tool for every job. But do you see MVC eclipsing WebForms, or WebForms going lower on the priority list for Microsoft? They will have to keep WebForms for a long time because so many have invested in it, but they don't have to keep adding new functionality to it.

    I don't know if this is a good example, but it reminds me of Web Parts. I never saw much improvement in it from Microsoft. It works, and I thought it was great until I started to really try to get a lot out of it. Then, from what I could see, it just wasn't being pursued by Microsoft that much, though it stayed in Visual Studio. Maybe that's a bad example; it's just what I remembered.

    EDIT: Also, if anyone has any statements from Microsoft on this subject, they are appreciated. No offense to anyone - I was only hoping for something official.

  • configure batch to send minute info instead of the entire stdout

    - by Daniel
    Hi all, I am working on a Red Hat server along with several other users. We use the batch utility to set up a job queue. Some of the programs that I use write to stdout during the run, with info on how much data has been processed so far, the estimated time until completion, etc.

    ```
    batch -q z
    at> myScript -i somefile -o someotherfile
    ```

    By default, the batch utility sends me an email (since I configured it using .forward) with the entire output from stdout. Since the script writes something to stdout a few times each second, the amount of log output I get from a two-day script can be ~20 MB. Clearly not what I want. I can of course pipe stdout to a file, like so:

    ```
    batch -q z
    at> myScript -i somefile -o someotherfile > myscript.stdout.log
    ```

    but then I get a blank e-mail from the utility. So to my question: is it possible to configure batch so that it sends the time the job started and ended, or the run time, or some other valuable information, instead of a 20 MB mail or a blank mail? Note that the scripts I use are binaries and I cannot customize the code to output less info in the first place (which would be the optimal solution, I guess).

    Thanks,
    Daniel

  • Debugging Key-Value-Observing overflow.

    - by Paperflyer
    I wrote an audio player. Recently I started refactoring some of the communication flow to make it fully MVC-compliant. Now it crashes, which in itself is not surprising. However, it crashes after a few seconds inside the Cocoa key-value-observing routines, with a HUGE stack trace of recursive calls to NSKeyValueNotifyObserver. Obviously, it is recursively observing a value and thus overflowing the NSArray that holds pending notifications. According to the stack trace, the program loops from observeValueForKeyPath to setMyValue and back. Here is the relevant code:

    ```objc
    - (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context
    {
        if ([keyPath isEqual:@"myValue"] && object == myModel && [self myValue] != [myModel myValue]) {
            [self setMyValue:[myModel myValue]];
        }
    }
    ```

    and

    ```objc
    - (void)setMyValue:(float)value
    {
        myValue = value;
        [myModel setMyValue:value];
    }
    ```

    myModel changes myValue every 0.05 seconds, and if I log the calls to these two functions, they get called only every 0.05 seconds, just as they should be, so this part is working properly. The stack trace looks like this:

    ```
    -[MyDocument observeValueForKeyPath:ofObject:change:context:]
    NSKeyValueNotifyObserver
    NSKeyValueDidChange
    -[NSObject(NSKeyValueObserverNotification) didChangeValueForKey:]
    -[MyDocument setMyValue:]
    _NSSetFloatValueAndNotify
    ... repeated some ~8k times until the crash
    ```

    Do you have any idea why I could still be spamming the KVO queue?

  • Declare a Nullable int (int?) using XAML

    - by Nate Zaugg
    I am trying to bind a combo box to a property on my ViewModel. The target type is short? and I would like to have null be an option. Basically, I would like the value of the first item in the combo box to be {x:Null}.

    ```xml
    <ComboBox Grid.Row="9" Grid.Column="1" SelectedValue="{Binding Priority}">
        <clr:Int16></clr:Int16>
        <clr:Int16>1</clr:Int16>
        <clr:Int16>2</clr:Int16>
        <clr:Int16>3</clr:Int16>
        <clr:Int16>4</clr:Int16>
        <clr:Int16>5</clr:Int16>
        <clr:Int16>6</clr:Int16>
        <clr:Int16>7</clr:Int16>
        <clr:Int16>8</clr:Int16>
        <clr:Int16>9</clr:Int16>
        <clr:Int16>10</clr:Int16>
    </ComboBox>
    ```

    Any suggestions?

  • xp_smtp_sendmail blank space added to html randomly

    - by Igor Timofeyev
    I have a proc where I generate a small HTML doc with a link and send it out via the xp_smtp_sendmail proc. The link is generated based on query results and is long. This works in most cases. However, sometimes the link gets broken due to spaces being inserted into querystring variable names, e.g. &Na me=John. This can vary between email clients (the same link works in Gmail but might not work in Comcast because of the spaces). The space seems to be randomly inserted, so in each broken email link the space might break a different querystring variable. When I PRINT from the proc, the link is clean, with no spaces.

    Here is a sample of the mail proc being executed within the main proc (which gets the query results and generates the HTML for @Message). The space seems to be inserted regardless of whether I encode the URL or not. Thank you in advance for your help. I can send a cleaner version of the code if it's not displayed properly here.

    ```sql
    -- ...query results above
    SET @Message = NULL
    SET @Message = @Message +
        + '<br/>Dear ' + @FirstName + ' ' + @LastName + ','
        + '<br/><br/>Recently you took "' + @Title + '". '
        + 'In response to the question "What is it?" '
        + 'you responded "' + @Response + '".'
        + '<br/><br/>Following up on previous mailing'
        + '<br/><br/>Please click on the link below'
        + '<br/><br/><a href="' + @Link + '">Please click here</a>'
        + '<br/><br/>plain text'
        + '<br/><br/>plain text,'
        + '<br/><br/>plain text<br/> plain text<br/> plain text<br/> plain text<br/> plain text<br/> plain text'

    EXEC @rc = master.dbo.xp_smtp_sendmail
        @FROM        = '[email protected]',
        @FROM_NAME   = 'Any User',
        @TO          = @Email,
        @priority    = N'NORMAL',
        @subject     = N'My email',
        @message     = @Message,
        @messagefile = N'',
        @type        = N'text/html',
        @attachment  = N'',
        @attachments = N'',
        @codepage    = 0,
        @server      = 'smtp.server.any'
    ```

  • Best Practice, objects design ASP.NET MVC

    - by DoomStone
    Hello Stack Overflow,

    I have a code design question that has been troubling me for a while. You see, I'm doing a refactoring of my website, Cosplay Denmark, a site where cosplayers can upload images of themselves in their costumes. The original site was done in PHP with Zend MVC, but my refactoring is being done in ASP.NET MVC 2.

    If you take the site http://www.cosplaydanmark.dk/Costumes/ (you can switch to English in the left column (Sprog)), you see a list of all the animes we have images for on the site; we show the name, how many different characters, and how many images there are under each anime.

    http://www.cosplaydanmark.dk/Costumes/Bleach - If you click on an anime, you get a list of the characters within the given anime for which we have images; here we show the character name, how many galleries, and how many images.

    http://www.cosplaydanmark.dk/Costumes/Bleach/Ichigo_Kurosaki/ - If you click on the character name, you get a list of the galleries under the given character in the given anime, with some information about each gallery, such as the image count.

    http://www.cosplaydanmark.dk/Costumes/Bleach/Ichigo_Kurosaki/Admi/ - If you click on a gallery, you get a list of the images in the gallery.

    My database looks like this at the moment. As you might imagine, there are a lot of different queries needed to create the site: on the first page I need to do a select on the "animes" table and, for each result, a count select on characters and galleries.

    My plan to create this will be one of the following, where the IList would be a lazy-loaded list. But I can't decide what the best solution for this would be, and also whether there is a better way of doing it. My priority is good performance with a minimal loss of features and code upkeep. I'm using a service pattern with a LINQ to SQL repository. My design is not absolute - I'm willing to change it if it could increase performance :D

    I hope I have described my question well enough for you to understand what I mean, but ask away if there is anything I have missed.

  • iPhone - dequeueReusableCellWithIdentifier usage

    - by Jukurrpa
    Hi, I'm working on an iPhone app which has a pretty large UITableView with data taken from the web, so I'm trying to optimize its creation and usage. I found out that dequeueReusableCellWithIdentifier is pretty useful, but after seeing many source codes that use it, I'm wondering whether the way I use this function is the right one.

    Here is what people usually do:

    ```objc
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"Cell"];
    if (cell == nil) {
        cell = [[UITableViewCell alloc] initWithFrame:CGRectZero reuseIdentifier:@"Cell"];
        // Add elements to the cell
    }
    return cell;
    ```

    And here is the way I did it:

    ```objc
    NSString *identifier = [NSString stringWithFormat:@"Cell %d", indexPath.row]; // the cell row
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:identifier];
    if (cell != nil)
        return cell;
    cell = [[UITableViewCell alloc] initWithFrame:CGRectZero reuseIdentifier:identifier];
    // Add elements to the cell
    return cell;
    ```

    The difference is that people use the same identifier for every cell, so dequeuing one only avoids allocating a new one. For me, the point of queuing was to give each cell a unique identifier, so when the app asks for a cell it has already displayed, neither allocation nor element adding has to be done.

    In the end I don't know which is best. The "common" method caps the table's memory usage at the exact number of cells it displays, whilst the method I use seems to favor speed, as it keeps all calculated cells, but can cause large memory consumption (unless there's an inner limit to the queue). Am I wrong to use it this way? Or is it just up to the developer, depending on his needs?

  • FancyURLopener failing since moving to Python 3.1.2

    - by Andrew Shepherd
    I had an application that was downloading a .CSV file from a password-protected website and then processing it further. I was using FancyURLopener and simply hard-coding the username and password (obviously, security is not a high priority in this particular instance). Since moving to Python 3.1.2, this code has stopped working. Does anyone know of the changes that have happened to the implementation? Here is a cut-down version of the code:

    ```python
    import urllib.request

    class TracOpener(urllib.request.FancyURLopener):
        def prompt_user_passwd(self, host, realm):
            return ('andrew_ee', '_my_unenctryped_password')

    csvUrl = 'http://mysite/report/19?format=csv@USER=fred_nukre'
    opener = TracOpener()
    f = opener.open(csvUrl)
    s = f.read()
    f.close()
    s
    ```

    For the sake of completeness, here's the entire call stack:

    ```
    Traceback (most recent call last):
      File "C:\reporting\download_csv_file.py", line 12, in <module>
        f = opener.open(csvUrl);
      File "C:\Program Files\Python31\lib\urllib\request.py", line 1454, in open
        return getattr(self, name)(url)
      File "C:\Program Files\Python31\lib\urllib\request.py", line 1628, in open_http
        return self._open_generic_http(http.client.HTTPConnection, url, data)
      File "C:\Program Files\Python31\lib\urllib\request.py", line 1624, in _open_generic_http
        response.status, response.reason, response.msg, data)
      File "C:\Program Files\Python31\lib\urllib\request.py", line 1640, in http_error
        result = method(url, fp, errcode, errmsg, headers)
      File "C:\Program Files\Python31\lib\urllib\request.py", line 1878, in http_error_401
        return getattr(self,name)(url, realm)
      File "C:\Program Files\Python31\lib\urllib\request.py", line 1950, in retry_http_basic_auth
        return self.open(newurl)
      File "C:\Program Files\Python31\lib\urllib\request.py", line 1454, in open
        return getattr(self, name)(url)
      File "C:\Program Files\Python31\lib\urllib\request.py", line 1628, in open_http
        return self._open_generic_http(http.client.HTTPConnection, url, data)
      File "C:\Program Files\Python31\lib\urllib\request.py", line 1590, in _open_generic_http
        auth = base64.b64encode(user_passwd).strip()
      File "C:\Program Files\Python31\lib\base64.py", line 56, in b64encode
        raise TypeError("expected bytes, not %s" % s.__class__.__name__)
    TypeError: expected bytes, not str
    ```
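
    The traceback shows retry_http_basic_auth passing a str where base64.b64encode now wants bytes, which looks like a Python 3 porting bug in FancyURLopener's basic-auth path. One way to sidestep it entirely (a hedged sketch, not a confirmed fix; the URL and credentials are the placeholders from the question) is to use the non-legacy handler-based API, which does its own base64 handling:

    ```python
    import urllib.request

    csv_url = 'http://mysite/report/19?format=csv@USER=fred_nukre'

    password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    # None -> apply these credentials to whatever realm the server presents
    password_mgr.add_password(None, csv_url, 'andrew_ee', '_my_unenctryped_password')

    opener = urllib.request.build_opener(
        urllib.request.HTTPBasicAuthHandler(password_mgr))

    f = opener.open(csv_url)
    try:
        s = f.read()  # bytes; decode if the CSV parser wants str
    finally:
        f.close()
    ```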

  • Finding source of over release

    - by Benedict Lowndes
    Hi, I'm consistently seeing the same message sent in as a crash report from users of an app. It's clear that an object is being over-released, but I'm unable to replicate it, and I'm looking for tips on tracing its source. The relevant section of the crash report shows this:

    ```
    Application Specific Information:
    objc_msgSend() selector name: release

    Thread 0 Crashed:  Dispatch queue: com.apple.main-thread
    0   libobjc.A.dylib           0x90892edb objc_msgSend + 27
    1   com.apple.CoreFoundation  0x95ec5a40 __CFBasicHashStandardCallback + 384
    2   com.apple.CoreFoundation  0x95ec564e __CFBasicHashDrain + 478
    3   com.apple.CoreFoundation  0x95ead6f1 _CFRelease + 353
    4   com.apple.CoreFoundation  0x95eda0ed _CFAutoreleasePoolPop + 253
    5   com.apple.Foundation      0x97ecedd6 NSPopAutoreleasePool + 76
    6   com.apple.Foundation      0x97ececfe -[NSAutoreleasePool drain] + 130
    7   com.apple.AppKit          0x9211255f -[NSApplication run] + 1013
    8   com.apple.AppKit          0x9210a535 NSApplicationMain + 574
    9   TheApp                    0x000020a6 start + 54
    ```

    I've used Zombies and Leaks but haven't seen anything there. I've gone through the code and can't see it. What's the next step? Are there any hints I can discern from this information as to its source? Does the fact that nearly the same crash report is coming in repeatedly mean that it's the same object being over-released, or, because this refers to the autorelease pool, could it be any object? Does the reference to _CFRelease mean it's a Core Foundation object that's being over-released?

  • HTTP Error: 400 when sending msmq message over http

    - by dontera
    I am developing a solution which will use MSMQ to transmit data between two machines. Due to the separation of said machines, we need to use HTTP transport for the messages. In my test environment I am using a Windows 7 x64 development machine, which is attempting to send messages using a homebrew app to any of several test machines I have control over. All machines run either Windows Server 2003 or Server 2008 with MSMQ and MSMQ HTTP support installed.

    For any test destination, I can use the following queue path name with success:

    ```
    FORMATNAME:DIRECT=TCP:[machine_name_or_ip]\private$\test_queue
    ```

    But for any test destination, the following always fails:

    ```
    FORMATNAME:DIRECT=HTTP://[machine_name_or_ip]/msmq/private$/test_queue
    ```

    I have used all permutations of machine names/IPs available. I have created mappings using the method described at this blog post. All result in the same HTTP error: 400. The following is the code used to send messages:

    ```csharp
    MessageQueue mq = new MessageQueue(queuepath);

    System.Messaging.Message msg = new System.Messaging.Message
    {
        Priority = MessagePriority.Normal,
        Formatter = new XmlMessageFormatter(),
        Label = "test"
    };
    msg.Body = txtMessageBody.Text;
    msg.UseDeadLetterQueue = true;
    msg.UseJournalQueue = true;
    msg.AcknowledgeType = AcknowledgeTypes.FullReachQueue | AcknowledgeTypes.FullReceive;
    msg.AdministrationQueue = new MessageQueue(@".\private$\Ack");

    if (SendTransactional)
        mq.Send(msg, MessageQueueTransactionType.Single);
    else
        mq.Send(msg);
    ```

    Additional information: in the IIS logs on the destination machines, I can see each message I send being recorded as a POST with a status code of 200. I am open to any suggestions.

  • Delphi dbExpress and Interbase: Unicode migration steps and risks?

    - by mjustin
    Currently, our database uses Win1252 as its only character encoding. We will have to support Unicode in the database tables soon, which means we have to perform this migration for four databases and around 80 Delphi applications which run in-house in a 24/7 environment. Are there recommendations for database migrations to UTF-8 (or UNICODE_FSS) for Delphi applications? Some questions are listed below. Many thanks in advance for your answers!

    - Are there tools which help with the migration of the existing databases (sizes between 250 MB and 2 GB, no Blob fields), by dumping the data, recreating the database with UNICODE_FSS or UTF-8, and loading the data back?
    - Are there known problems with Delphi 2009, dbExpress and Interbase 7.5 related to Unicode character sets?
    - Would you recommend upgrading the databases to Interbase 2009 first? (This upgrade is planned but does not have a high priority.)
    - Can we simply migrate the database and Delphi will handle the Unicode character sets automatically, or will we have to change all character field types in every data module (dfm and source code) too?
    - Which strategy would you recommend for working on the migration in parallel with the normal development and maintenance of the existing application? The application runs in-house, so development and database administration are done internally.

    Update: one problem I have found is that there are two different persistent field types for Unicode and non-Unicode character fields. For the existing database, dbExpress creates TStringField objects. For the Unicode database fields, dbExpress creates (or expects!) TWideStringField objects. So we cannot just change the database and the connection code page to Unicode; we also have to modify all data modules to use the new field type. The modified data module, however, will not be backwards compatible.

  • C++ thread safety - exchange data between worker and controller

    - by peterchen
    I still feel a bit unsafe about the topic and hope you folks can help me. For passing data (configuration or results) between a worker thread polling something and a controlling thread interested in the most recent data, I've ended up using more or less the following pattern repeatedly:

    ```cpp
    Mutex m;
    tData * stage;   // temporary, accessed concurrently

    // send data, gives up ownership, receives old stage if any
    tData * Send(tData * newData)
    {
        ScopedLock lock(m);
        swap(newData, stage);
        return newData;
    }

    // receiving thread fetches latest data here
    tData * Fetch(tData * prev)
    {
        ScopedLock lock(m);
        if (stage != 0)
        {
            // ... release prev
            prev = stage;
            stage = 0;
        }
        return prev; // now current
    }
    ```

    Note: this is not supposed to be a full producer-consumer queue; only the most recent data is relevant. Also, I've skimped on resource management somewhat here. When necessary I use two such stages: one to send config changes to the worker, and one for sending back results.

    Now, my questions, assuming that ScopedLock acts as a full memory barrier:

    - Do stage and/or workerData need to be volatile?
    - Is volatile necessary for tData members?
    - Can I use smart pointers instead of the raw pointers - say, boost::shared_ptr?
    - Anything else that can go wrong?

    I am basically trying to avoid "volatile infection" spreading into tData, and to minimize lock contention (a lock-free implementation seems possible, too). However, I'm not sure this is the easiest solution. Since all this is more or less platform-dependent, let's say Visual C++ x86 or x64, though differences/notes for other platforms are welcome, too. (A preliminary "thanks, but" for recommending libraries such as Intel TBB - I am trying to understand the platform issues here.)
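
    For comparison, the exchange pattern itself (independent of the C++ memory-model questions above) can be described as a small "latest value mailbox": the worker swaps the newest item in under a lock, and the controller fetches-and-clears it. Here is a hedged Python sketch of that shape, where the lock alone settles the visibility questions that volatile raises in C++; all names are my own:

    ```python
    import threading

    class LatestValueMailbox:
        """Holds only the most recent item; not a full producer-consumer queue."""

        def __init__(self):
            self._lock = threading.Lock()
            self._stage = None

        def send(self, new_data):
            """Worker: stage new_data; return the previously staged item, if any."""
            with self._lock:
                old, self._stage = self._stage, new_data
            return old  # caller may recycle/release it

        def fetch(self, prev):
            """Controller: return the latest staged item, or prev if nothing newer."""
            with self._lock:
                if self._stage is not None:
                    prev, self._stage = self._stage, None
            return prev
    ```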

  • Is This a Valid Way to Use Blocks in Objective-C?

    - by Carter
    I've been building an HTTP client that uses web services to synchronize information between the client and server. I've been using blocks and NSURLConnection to achieve this on the client side, but I'm getting frequent EXC_BAD_ACCESS crashes in objc_msgSend(). From what I understand, this usually means that a stored block that has fallen off the stack has been called. I think I've coded things correctly to avoid this, but I'm still stuck.

    Here is conceptually what my code is doing. It starts by calling synchronizeWithWebServer. That method invokes listRootObjectsOnServerWithBlock:, which takes a block to be called when the method returns. listRootObjectsOnServerWithBlock: initiates an NSURLConnection to the web server asynchronously, which in turn expects a block to be called when it returns. Inside that block I want to be able to execute the original block (so aptly named 'block').

    This is only a simplified version of my code. The real synchronization process is more complex, but it's mostly more of the same as what you see below. Sometimes the code works perfectly, but about 80% of the time it crashes very early in the routine. It seems to be more vulnerable to crashing when my data set gets larger.

    ```objc
    - (void)synchronizeWithWebServer {
        [self listRootObjectsOnServerWithBlock:^(NSArray *results, NSError *error) {
            // Iterate over result objects and perform some other similar routines.
        }];
    }

    - (void)listRootObjectsOnServerWithBlock:(void (^)(NSArray *results, NSError *error))block {
        // Create NSURLRequest here.

        // Create connection asynchronously.
        block = [block copy];
        [NSURLConnection sendAsynchronousRequest:urlRequest
                                           queue:[NSOperationQueue currentQueue]
                               completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
            // Parse response from web server (stored in NSData *data).
            NSArray *results = .....
            // Call 'block'
            block(results, error);
            [block release];
        }];
    }
    ```
