Search Results

Search found 4532 results on 182 pages for 'category theory'.

Page 163/182 | < Previous Page | 159 160 161 162 163 164 165 166 167 168 169 170  | Next Page >

  • WTF is wtf? (in WebKit code base)

    - by Motti
    I downloaded Chromium's code base and ran across the WTF namespace. namespace WTF { /* * C++'s idea of a reinterpret_cast lacks sufficient cojones. */ template<typename TO, typename FROM> TO bitwise_cast(FROM in) { COMPILE_ASSERT(sizeof(TO) == sizeof(FROM), WTF_wtf_reinterpret_cast_sizeof_types_is_equal); union { FROM from; TO to; } u; u.from = in; return u.to; } } // namespace WTF Does this mean what I think it means? Could be so; the bitwise_cast implementation specified here will not compile if either TO or FROM is not a POD, and is not (AFAIK) more powerful than C++'s built-in reinterpret_cast. The only point of light I see here is that nobody seems to be using bitwise_cast in the Chromium project. I see there's some legalese so I'll put in the little letters to keep out of trouble. /* * Copyright (C) 2008 Apple Inc. All Rights Reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */

    Read the article

  • Server 2008 Likes to restart itself

    - by Campo
    I have a weird issue here. I notice about once a week the web server restarts itself. This would be only a minor issue if we were not planning on implementing an IP failover. I have checked the event logs. I don't see anything that indicates a reason for the restart. I need some help diagnosing the reason the server restarts. It happened last night at 5:00 AM. The last event in the log was 1 hour before the unexpected shutdown. Here is the log for the shutdown event. Any help is much appreciated. I know there isn't much to go on yet. Log Name: System Source: EventLog Date: 5/5/2010 5:01:12 AM Event ID: 6008 Task Category: None Level: Error Keywords: Classic User: N/A Computer: SERVERNAME Description: The previous system shutdown at 4:56:41 AM on 5/5/2010 was unexpected. Event Xml: <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event"> <System> <Provider Name="EventLog" /> <EventID Qualifiers="32768">6008</EventID> <Level>2</Level> <Task>0</Task> <Keywords>0x80000000000000</Keywords> <TimeCreated SystemTime="2010-05-05T09:01:12.000Z" /> <EventRecordID>346094</EventRecordID> <Channel>System</Channel> <Computer>SERVERNAME</Computer> <Security /> </System> <EventData> <Data>4:56:41 AM</Data> <Data>5/5/2010</Data> <Data> </Data> <Data> </Data> <Data>39594</Data> <Data> </Data> <Data> </Data> <Binary>DA070500030005000400380029008E03DA070500030005000800380029008E033C0000003C000000000000000000000000000000000000000100000000000000</Binary> </EventData> </Event>

    Read the article

  • Why Won't My ASP.NET Hyperlink Work in IE?

    - by Giffyguy
    I'm making a very simple ad button system using ASP.NET 2.0. The advertisement is a 150x150px square that is displayed on "the r house." (Scroll down a little and you'll see the bright green "Angry Octopus" on the right side of the screen.) Now, I am not the administrator of "the r house." Instead, I am the administrator of angryoctopus.net. Therefore, I don't have the ability to change the ad display code on a whim. So I gave "the r house" this snippet of code to display our ad nicely, while still allowing me to customize the back-end code on my end: <iframe src="http://www.angryoctopus.net/Content/Ad/150x150.aspx" frameborder="0" width="150" height="150" scrolling="no" style="padding: 0; margin: 0;"></iframe> You'll find this snippet in the page source of "the r house." On my side, the code looks like this: <asp:HyperLink runat="server" NavigateUrl="http://www.angryoctopus.net/" Target="_top"> <asp:Panel ID="pnlMain" runat="server" BackColor="#D1E231" style="padding: 0; margin: 0" Width="150" Height="150"> <asp:Image runat="server" ImageUrl="http://www.angryoctopus.net/Content/Ad/150x150.png" BorderStyle="None" style="padding: 0; margin: 0" /> </asp:Panel> </asp:HyperLink> ... and there's some insignificant back-end C# code for hit-counting. This looks all well and good from the code standpoint, as far as I can tell. Everything works in Firefox and Chrome. Also, everything appears to work in IE8 in all of my tests. I haven't tested IE7. But when you view "the r house" in IE(8) the hyperlink doesn't do anything, and the cursor doesn't indicate that the hyperlink is even there, although you can see the target URL in the status bar. I've considered that the fact that "the r house" uses XHTML 1.0 Strict could be causing problems, but that would probably affect Firefox and Chrome too, right? (My aspx pages use XHTML 1.0 Transitional.) My only other theory is that some random CSS class could be applying a weird attribute to my iframe, but again I would expect that to affect Firefox and Chrome as well. Is this a security issue with IE? Does anyone know what part of the r house's website could be blocking the hyperlink in IE? And how can I get around this without having to hard code anything on the r house's website? Is there an alternative to iframe that would do the same job without requiring complicated scripting?

    Read the article

  • How to catch 'exceptions' for out of order execution in Workflow Foundation 4?

    - by Alex Key
    Hi, I am attempting to model a workflow using a "WCF Workflow Service" in .net / vs 2010 that needs to handle out of order execution gracefully (but not allow it - if that makes sense!?) For example, I have 2 receive activities, one called Initialize and the other called GetValue, inside a FlowChart. In most cases Initialize should be called first and GetValue after (as modelled in the flow chart). However, if GetValue is executed before Initialize I do not want to return an "out of order" exception (although when I look at the WCF test client, I can't actually see an exception), but instead a custom exception saying something like "you must initialize first". In theory I could model this with lots of parallel activities and conditions to check if Initialized / Running / Terminated etc. But the business process I am modelling is very, very similar to a state machine... except it must handle people executing things in the wrong order. Ideally I would like to catch the "out of order" exception (though I don't think it's really an exception as such), check the 'exception' to see which function was attempted and then handle it. I have done some research around enabling AllowBufferedReceive. However, I don't want to be able to execute out of order (I don't think), but instead give a detailed response if it does happen. I've looked at the new beta state machine template for WF 4 - but I'm not sure if it does what I'm after. I'm not sure if I have the wrong end of the stick, so any help would be greatly appreciated. [EDIT] To help clarify... Sorry, it's a tricky one to explain. The standard I am trying to implement (the e-learning standard SCORM RTE) is structured like a state machine, i.e. certain functions can only be executed in certain states. However, the standard specifies that if the calling client tries to execute a function that it is not meant to, then a warning should be issued... for example "you cannot use GetValue(), because you have not yet Initialized". Ideally I'd like to structure the workflow as the theoretical state machine and not have to use multiple if/else's to handle all the scenarios where something could be executed out-of-order. I'd like to catch an out-of-order exception (but I don't think there is such an exception - as it's not in the debugger) and rethrow it.
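
    In case a sketch of the general idea helps (outside WF entirely): the "warning instead of fault" behaviour can be expressed as a small state table that maps each state to the operations it allows, so the descriptive message lives in one place rather than in scattered if/else checks. A hypothetical Python illustration - the state and operation names here are made up, not SCORM RTE or WF 4 API names:

      class WrongStateError(Exception):
          """Raised when an operation arrives while the session is in the wrong state."""

      # Which operations are legal in which state (illustrative only).
      ALLOWED = {
          "NotInitialized": {"Initialize"},
          "Running": {"GetValue", "SetValue", "Terminate"},
          "Terminated": set(),
      }

      class Session:
          def __init__(self):
              self.state = "NotInitialized"

          def handle(self, operation):
              if operation not in ALLOWED[self.state]:
                  # The "custom exception" instead of a generic out-of-order fault.
                  raise WrongStateError(
                      f"you cannot use {operation}() because the session is {self.state}")
              if operation == "Initialize":
                  self.state = "Running"
              elif operation == "Terminate":
                  self.state = "Terminated"
              # ... perform the operation itself here ...

      s = Session()
      try:
          s.handle("GetValue")
      except WrongStateError as err:
          print(err)   # you cannot use GetValue() because the session is NotInitialized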

    Read the article

  • sendmail and MX records when mail server is not on web host

    - by Jim Nelson
    This is a problem I'm sure is easy to fix, but I've been banging my head on it all day. I'm developing a new web site for a client. The web site resides at (this is an example) website.com. I have a PHP form script to email visitors' requests to [email protected]. When I coded this on a staging server on a different domain, all worked fine. When I moved it to website.com, the mail messages never arrived. The web server is on a virtual host with a major ISP. Here's what I've learned since then: My client's mail server is Microsoft Exchange on a box physically in their office. Whenever someone on the outside world emails [email protected], the mail arrives. But if the web server sends to the same email address, it fails every time. This is not a PHP problem. I secure shell in to the web server and have tested this both with sendmail and the UNIX mail application. I've also tested it by emailing various email accounts from the shell. I can email myself, for example, just nobody at the website.com domain. In short, when I'm logged in to website.com, mail to [email protected], [email protected], [email protected] all fail. All other addresses work fine. What I've discovered is those dropped emails are routed to the web server's "catchall" account where they sit in its inbox. I've done an MX lookup on website.com. The MX record points to mailsec.website.com. I can telnet to mailsec.website.com port 25 and see the SMTP server. It appears to me that website.com isn't doing an MX lookup when it's sending mail to [email protected]. My theory is that it recognizes the domain as local, sees that there's no "requests" user account to deliver it to, and drops the mail into the catchall account. What I want is to force sendmail to do the MX lookup and send the message on to the Exchange server. I'm at wit's end here. I can't figure out how to do this. For that matter, I may be way off base here and have misdiagnosed this entirely. Internet mail and MX has always seemed a black art to me, and my ignorance is certainly showing in this question.
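
    For what it's worth, one way to test that theory from the web server itself is to compare what DNS says with what sendmail actually does: look up the MX record and hand a message straight to that host, bypassing local routing. A rough diagnostic sketch (Python, assuming the third-party dnspython package; the addresses are placeholders standing in for the real ones):

      import smtplib
      import dns.resolver   # third-party "dnspython" package; resolve() needs version 2.x

      domain = "website.com"                       # placeholder for the real domain
      answers = sorted(dns.resolver.resolve(domain, "MX"), key=lambda r: r.preference)
      mx_host = str(answers[0].exchange).rstrip(".")
      print("MX for", domain, "->", mx_host)       # expect mailsec.website.com

      # Deliver directly to the MX host. If this message arrives but mail sent
      # through the local sendmail does not, the MX record is fine and the
      # problem is sendmail treating the domain as locally hosted.
      with smtplib.SMTP(mx_host, 25) as smtp:
          smtp.sendmail("webserver@website.com",
                        ["requests@website.com"],
                        "Subject: MX delivery test\n\nSent via explicit MX lookup.")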

    Read the article

  • Increase efficiency for an R simulator of the Monty Hall Puzzle

    - by jahan_m
    The Monty Hall Problem is a simple puzzle involving probability that even stumps professionals in careers dealing with some heavy-duty math. Here's the basic problem: Suppose you're on a game show, and you're given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what's behind the doors, opens another door, say No. 3, which has a goat. He then says to you, "Do you want to pick door No. 2?" Is it to your advantage to switch your choice? You can find numerous explanations of the solution here: http://en.wikipedia.org/wiki/Monty_Hall_problem Goal of my simulation: Prove that a switching strategy will win you the car 2/3 of the time. I got curious and wanted to write a little function that simulates the problem many times and returns the proportion of wins if you switched and the proportion of wins if you stayed with your first choice. The function then plots the cumulative wins. First and foremost, I'm interested in hearing if my simulation is indeed replicating the Monty Problem, or if some aspect of the code got it wrong. Secondly, this function takes a long time to run once I get to about 10,000 simulations. I know I don't need this many simulations to prove this but I'd love to hear some ideas on how to make it more efficient. Thanks for your feedback! Monty_Hall=function(repetitions){ doors=c('A','B','C') stay_wins=0 switch_wins=0 series=data.frame(sim_num=seq(repetitions),cum_sum_stay=replicate(repetitions,0),cum_sum_switch=replicate(repetitions,0)) for(i in seq(repetitions)){ winning_door=sample(doors,1) contestant_chooses=sample(doors,1) if(contestant_chooses==winning_door) stay_wins=stay_wins+1 else switch_wins=switch_wins+1 series[i,'cum_sum_stay']=stay_wins series[i,'cum_sum_switch']=switch_wins } plot(series$sim_num,series$cum_sum_switch,col=2,ylab='Cumulative # of wins', xlab='Simulation #',main=sprintf('%d Simulations of the Monty Hall Paradox',repetitions),type='l') lines(series$sim_num,series$cum_sum_stay,col=4) legend('topleft',legend=c('Cumulative wins from switching', 'Cumulative wins from staying'),col=c(2,4),lty=1) result=list(series=series,stay_wins=stay_wins,switch_wins=switch_wins, proportion_stay_wins=stay_wins/repetitions, proportion_switch_wins=switch_wins/repetitions) return(result) } #Theory predicts that it is to the contestant's advantage if he #switches his choice to the other door. This function simulates the game #many times, and shows you the proportion of games in which staying or #switching would win the car. It also plots the cumulative wins for each strategy. Monty_Hall(100)
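
    One observation that may help on the efficiency front: a "stay" win happens exactly when the first pick equals the winning door, and a "switch" win happens otherwise, so the whole simulation can be vectorised instead of updating a data frame row by row (which is likely what dominates the run time). The same idea works in R with sample() and cumsum(); here is a rough sketch of it written in Python/NumPy:

      import numpy as np

      def monty_hall(repetitions):
          rng = np.random.default_rng()
          winning = rng.integers(0, 3, size=repetitions)   # winning door per game
          chosen = rng.integers(0, 3, size=repetitions)    # contestant's first pick
          stay_wins = chosen == winning                    # staying wins iff the first pick was right
          switch_wins = ~stay_wins                         # switching wins every other game
          return {
              "proportion_stay_wins": stay_wins.mean(),
              "proportion_switch_wins": switch_wins.mean(),
              "cum_stay": stay_wins.cumsum(),              # ready for plotting, no per-row loop
              "cum_switch": switch_wins.cumsum(),
          }

      print(monty_hall(10000)["proportion_switch_wins"])   # about 2/3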

    Read the article

  • translating specifications into query predicates

    - by Jeroen
    I'm trying to find a nice and elegant way to query database content based on DDD "specifications". In domain-driven design, a specification is used to check if some object, also known as the candidate, is compliant with a (domain-specific) requirement. For example, the specification 'IsTaskDone' goes like: class IsTaskDone extends Specification<Task> { boolean isSatisfiedBy(Task candidate) { return candidate.isDone(); } } The above specification can be used for many purposes, e.g. it can be used to validate if a task has been completed, or to filter all completed tasks from a collection. However, I want to re-use this nice, domain-related specification to query the database. Of course, the easiest solution would be to retrieve all entities of our desired type from the database, and filter that list in-memory by looping and removing non-matching entities. But clearly that would not be optimal for performance, especially when the entity count in our db increases. Proposal So my idea is to create a 'ConversionManager' that translates my specification into persistence-technique-specific criteria - think of the JPA Predicate class. The service looks as follows: public interface JpaSpecificationConversionManager { <T> Predicate getPredicateFor(Specification<T> specification, Root<T> root, CriteriaQuery<?> cq, CriteriaBuilder cb); JpaSpecificationConversionManager registerConverter(JpaSpecificationConverter<?, ?> converter); } By using our manager, the users can register their own conversion logic, isolating the domain-related specification from persistence-specific logic. To minimize the configuration of our manager, I want to use annotations on my converter classes, allowing the manager to automatically register those converters. JPA repository implementations could then use my manager, via dependency injection, to offer a find-by-specification method. Providing a find-by-specification method should drastically reduce the number of methods on our repository interface. In theory, this all sounds decent, but I feel like I'm missing something critical. What do you guys think of my proposal, does it comply with the DDD way of thinking? Or is there already a framework that does something identical to what I just described?
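
    As a language-neutral picture of the proposed ConversionManager (not an existing framework - the names below are invented), here is a minimal Python sketch of a registry that maps specification types to converter functions producing persistence-specific predicates, reduced to plain SQL fragments for brevity:

      class IsTaskDone:
          """Domain specification: satisfied when the task is finished."""
          def is_satisfied_by(self, task):
              return task.is_done

      class SpecificationConversionManager:
          """Maps specification classes to persistence-specific converters."""
          def __init__(self):
              self._converters = {}

          def register(self, spec_type):
              def decorator(converter):
                  self._converters[spec_type] = converter
                  return converter
              return decorator

          def predicate_for(self, spec):
              converter = self._converters.get(type(spec))
              if converter is None:
                  raise LookupError(f"no converter registered for {type(spec).__name__}")
              return converter(spec)

      manager = SpecificationConversionManager()

      @manager.register(IsTaskDone)
      def is_task_done_to_sql(spec):
          # Persistence knowledge lives here, not in the domain specification.
          return "task.done = TRUE"

      print(manager.predicate_for(IsTaskDone()))   # -> task.done = TRUE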

    Read the article

  • Routing and Remote Access Service won't start after full disk

    - by NKCSS
    The server's HDD ran out of disk space, and after a reboot, RRAS won't start anymore on my 2008 R2 server. Error Details: Log Name: System Source: RemoteAccess Date: 2/5/2012 9:39:52 PM Event ID: 20153 Task Category: None Level: Error Keywords: Classic User: N/A Computer: Windows14111.<snip> Description: The currently configured accounting provider failed to load and initialize successfully. The connection was prevented because of a policy configured on your RAS/VPN server. Specifically, the authentication method used by the server to verify your username and password may not match the authentication method configured in your connection profile. Please contact the Administrator of the RAS server and notify them of this error. Event Xml: <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event"> <System> <Provider Name="RemoteAccess" /> <EventID Qualifiers="0">20153</EventID> <Level>2</Level> <Task>0</Task> <Keywords>0x80000000000000</Keywords> <TimeCreated SystemTime="2012-02-05T20:39:52.000Z" /> <EventRecordID>12148869</EventRecordID> <Channel>System</Channel> <Computer>Windows14111.<snip></Computer> <Security /> </System> <EventData> <Data>The connection was prevented because of a policy configured on your RAS/VPN server. Specifically, the authentication method used by the server to verify your username and password may not match the authentication method configured in your connection profile. Please contact the Administrator of the RAS server and notify them of this error.</Data> <Binary>2C030000</Binary> </EventData> </Event> I think it has something to do with a corrupt config file, but I am unsure of what to do. I removed the RRAS role, rebooted, and re-added it, but it keeps failing with the same error. Thanks in advance. [UPDATE] If I set the accounting provider from 'Windows' to '' the service starts, but VPN won't work. Any ideas how this can be repaired?

    Read the article

  • How can I refactor this JavaScript code to avoid making functions in a loop?

    - by Bungle
    I wrote the following code for a project that I'm working on: var clicky_tracking = [ ['related-searches', 'Related Searches'], ['related-stories', 'Related Stories'], ['more-videos', 'More Videos'], ['web-headlines', 'Publication'] ]; for (var x = 0, length_x = clicky_tracking.length; x < length_x; x++) { links = document.getElementById(clicky_tracking[x][0]) .getElementsByTagName('a'); for (var y = 0, length_y = links.length; y < length_y; y++) { links[y].onclick = (function(name, url) { return function() { clicky.log(url, name, 'outbound'); }; }(clicky_tracking[x][1], links[y].href)); } } What I'm trying to do is: define a two-dimensional array, with each of the inner arrays containing two elements: an id attribute value (e.g., "related-searches") and a corresponding description (e.g., "Related Searches"); for each of the inner arrays, find the element in the document with the corresponding id attribute, and then gather a collection of all <a> elements (hyperlinks) within it; loop through that collection and attach an onclick handler to each hyperlink, which should call clicky.log, passing in as parameters the description that corresponds to the id (e.g., "Related Searches" for the id "related-searches") and the value of the href attribute for the <a> element that was clicked. Hopefully that wasn't thoroughly confusing! The code may be more self-explanatory than that. I believe that what I've implemented here is a closure, but JSLint complains: http://img.skitch.com/20100526-k1trfr6tpj64iamm8r4jf5rbru.png So, my questions are: How can I refactor this code to make JSLint agreeable? Or, better yet, is there a best-practices way to do this that I'm missing, regardless of what JSLint thinks? Should I rely on event delegation instead? That is, attaching onclick event handlers to the document elements with the id attributes in my arrays, and then looking at event.target? I've done that once before and understand the theory, but I'm very hazy on the details, and would appreciate some guidance on what that would look like - assuming this is a viable approach. Thanks very much for any help!

    Read the article

  • Strange Map Reduce Behavior in CouchDB. Rereduce?

    - by Tony
    I have a mapreduce issue with CouchDB (both functions shown below): when I run it with grouplevel = 2 (exact) I get accurate output: {"rows":[ {"key":["2011-01-11","staff-1"],"value":{"total":895.72,"count":2,"services":6,"services_ignored":6,"services_liked":0,"services_disliked":0,"services_disliked_avg":0,"Revise":{"total":275.72,"count":1},"Review":{"total":620,"count":1}}}, {"key":["2011-01-11","staff-2"],"value":{"total":8461.689999999999,"count":2,"services":41,"services_ignored":37,"services_liked":4,"services_disliked":0,"services_disliked_avg":0,"Revise":{"total":4432.4,"count":1},"Review":{"total":4029.29,"count":1}}}, {"key":["2011-01-11","staff-3"],"value":{"total":2100.72,"count":1,"services":10,"services_ignored":4,"services_liked":3,"services_disliked":3,"services_disliked_avg":2.3333333333333335,"Revise":{"total":2100.72,"count":1}}}, However, changing to grouplevel=1 so the values for all the different staff keys should be all grouped by date no longer gives accurate output (notice the total is correct but all others are wrong): {"rows":[ {"key":["2011-01-11"],"value":{"total":11458.130000000001,"count":2,"services":0,"services_ignored":0,"services_liked":0,"services_disliked":0,"services_disliked_avg":0,"None":{"total":11458.130000000001,"count":2}}}, My only theory is this has something to do with rereduce, which I have not yet learned. Should I explore that option or am I missing something else here? This is the Map function: function(doc) { if(doc.doc_type == 'Feedback') { emit([doc.date.split('T')[0], doc.staff_id], doc); } } And this is the Reduce: function(keys, vals) { // sum all key points by status: total, count, services (liked, rejected, ignored) var ret = { 'total':0, 'count':0, 'services': 0, 'services_ignored': 0, 'services_liked': 0, 'services_disliked': 0, 'services_disliked_avg': 0, }; var total_disliked_score = 0; // handle status function handle_status(doc) { if(!doc.status || doc.status == '' || doc.status == undefined) { status = 'None'; } else if (doc.status == 'Declined') { status = 'Rejected'; } else { status = doc.status; } if(!ret[status]) ret[status] = {'total':0, 'count':0}; ret[status]['total'] += doc.total; ret[status]['count'] += 1; }; // handle likes / dislikes function handle_services(services) { ret.services += services.length; for(var a in services) { if (services[a].user_likes == 10) { ret.services_liked += 1; } else if (services[a].user_likes >= 1) { ret.services_disliked += 1; total_disliked_score += services[a].user_likes; if (total_disliked_score >= ret.services_disliked) { ret.services_disliked_avg = total_disliked_score / ret.services_disliked; } } else { ret.services_ignored += 1; } } } // loop thru docs for(var i in vals) { // increment the total $ ret.total += vals[i].total; ret.count += 1; // update totals and sums for the status of this route handle_status(vals[i]); // do the likes / dislikes stats if(vals[i].groups) { for(var ii in vals[i].groups) { if(vals[i].groups[ii].services) { handle_services(vals[i].groups[ii].services); } } } // handle deleted services if(vals[i].hidden_services) { if (vals[i].hidden_services) { handle_services(vals[i].hidden_services); } } } return ret; }
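
    The rereduce suspicion is worth pursuing: when CouchDB combines reductions across keys or B-tree nodes it calls the same function again with rereduce=true, and this reduce only knows how to read raw documents, so when it is fed its own partial summaries the nested fields (groups, hidden_services, status) are simply absent and those counters stay at zero, while total, which exists on the partials too, keeps adding up. A stripped-down Python sketch of that mechanism, with a deliberately simplified value shape:

      def naive_reduce(values, rereduce=False):
          # Mirrors the question's reduce: it only understands raw documents.
          out = {"total": 0, "services": 0}
          for v in values:
              out["total"] += v["total"]                    # present on docs AND partial results
              out["services"] += len(v.get("groups", []))   # only present on raw docs
          return out

      docs_staff_1 = [{"total": 100, "groups": ["svc1", "svc2", "svc3"]}]
      docs_staff_2 = [{"total": 200, "groups": ["svc4", "svc5"]}]

      # group_level=2: one reduction per (date, staff) key, counts come out right
      partial_1 = naive_reduce(docs_staff_1)   # {'total': 100, 'services': 3}
      partial_2 = naive_reduce(docs_staff_2)   # {'total': 200, 'services': 2}

      # group_level=1: the partials for the shared date key get re-reduced
      merged = naive_reduce([partial_1, partial_2], rereduce=True)
      print(merged)   # {'total': 300, 'services': 0} - total right, services lost

      # A rereduce-aware version would branch on the flag and merge the partial
      # dictionaries field by field instead of re-parsing them as documents.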

    Read the article

  • What would you do to make this code more "over-engineered"? [closed]

    - by Mez
    A friend and I got bored, and, long story short, decided to make an over-engineered FizzBuzz in PHP <?php interface INumber { public function go(); public function setNumber($i); } class FBNumber implements INumber { private $value; private $fizz; private $buzz; public function __construct($fizz = 3 , $buzz = 5) { $this->setFizz($fizz); $this->setBuzz($buzz); } public function setNumber($i) { if(is_int($i)) { $this->value = $i; } } private function setFizz($i) { if(is_int($i)) { $this->fizz = $i; } } private function setBuzz($i) { if(is_int($i)) { $this->buzz = $i; } } private function isFizz() { return ($this->value % $this->fizz == 0); } private function isBuzz() { return ($this->value % $this->buzz == 0); } private function isNeither() { return (!$this->isBuzz() AND !$this->isFizz()); } private function isFizzBuzz() { return ($this->isFizz() OR $this->isBuzz()); } private function fizz() { if ($this->isFizz()) { return "Fizz"; } } private function buzz() { if ($this->isBuzz()) { return "Buzz"; } } private function number() { if ($this->isNeither()) { return $this->value; } } public function go() { return $this->fizz() . $this->buzz() . $this->number(); } } class FizzBuzz { private $limit; private $number_class; private $numbers = array(); function __construct(INumber $number_class, $limit = 100) { $this->number_class = $number_class; $this->limit = $limit; } private function collectNumbers() { for ($i=1; $i <= $this->limit; $i++) { $n = clone($this->number_class); $n->setNumber($i); $this->numbers[$i] = $n->go(); unset($n); } } private function printNumbers() { $return = ''; foreach($this->numbers as $number){ $return .= $number . "\n"; } return $return; } public function go() { $this->collectNumbers(); return $this->printNumbers(); } } $fb = new FizzBuzz(new FBNumber()); echo $fb->go(); In theory, what could we/would you do to make it even more "over-engineered"?

    Read the article

  • Using Core Data Concurrently and Reliably

    - by John Topley
    I'm building my first iOS app, which in theory should be pretty straightforward but I'm having difficulty making it sufficiently bulletproof for me to feel confident submitting it to the App Store. Briefly, the main screen has a table view, upon selecting a row it segues to another table view that displays information relevant for the selected row in a master-detail fashion. The underlying data is retrieved as JSON data from a web service once a day and then cached in a Core Data store. The data previous to that day is deleted to stop the SQLite database file from growing indefinitely. All data persistence operations are performed using Core Data, with an NSFetchedResultsController underpinning the detail table view. The problem I am seeing is that if you switch quickly between the master and detail screens several times whilst fresh data is being retrieved, parsed and saved, the app freezes or crashes completely. There seems to be some sort of race condition, maybe due to Core Data importing data in the background whilst the main thread is trying to perform a fetch, but I'm speculating. I've had trouble capturing any meaningful crash information, usually it's a SIGSEGV deep in the Core Data stack. The table below shows the actual order of events that happen when the detail table view controller is loaded: Main Thread Background Thread viewDidLoad Get JSON data (using AFNetworking) Create child NSManagedObjectContext (MOC) Parse JSON data Insert managed objects in child MOC Save child MOC Post import completion notification Receive import completion notification Save parent MOC Perform fetch and reload table view Delete old managed objects in child MOC Save child MOC Post deletion completion notification Receive deletion completion notification Save parent MOC Once the AFNetworking completion block is triggered when the JSON data has arrived, a nested NSManagedObjectContext is created and passed to an "importer" object that parses the JSON data and saves the objects to the Core Data store. The importer executes using the new performBlock method introduced in iOS 5: NSManagedObjectContext *child = [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType]; [child setParentContext:self.managedObjectContext]; [child performBlock:^{ // Create importer instance, passing it the child MOC... }]; The importer object observes its own MOC's NSManagedObjectContextDidSaveNotification and then posts its own notification which is observed by the detail table view controller. When this notification is posted the table view controller performs a save on its own (parent) MOC. I use the same basic pattern with a "deleter" object for deleting the old data after the new data for the day has been imported. This occurs asynchronously after the new data has been fetched by the fetched results controller and the detail table view has been reloaded. One thing I am not doing is observing any merge notifications or locking any of the managed object contexts or the persistent store coordinator. Is this something I should be doing? I'm a bit unsure how to architect this all correctly so would appreciate any advice.

    Read the article

  • What is the best practice to segment c#.net projects based on a single base project

    - by Anthony
    Honestly, I can't word my question any better without describing it. I have a base project (with all its glory, dlls, resources etc) which is a CMS. I need to use this project as a base for other custom-bake projects. This base project is to be maintained and updated among all custom-bake projects. I use subversion (Collabnet and Tortoise SVN). I have two questions: 1 - Can I use subversion to share the base project among other projects? What I mean here is: can I "Checkout" the base project into another "Checked Out" project and have both update and commit separately? So, to paint a picture, let's say I am working on a custom project and I modify the core/base project in some way (which I know will suit the others); can I then commit those changes and, upon doing so, when I update the base project in the other "Checked out" resources, will it pull the changes? In short, I would like not to have to manually deploy updated core files, whenever I make changes, into each separate project. 2 - If I create a custom file (let's say a web control or aspx page etc) can I have it compile separately from the base project? Another tricky one to explain. When I publish my web application it creates DLLs based on the namespaces of projects attached to it. So I may have a number of DLLs including the "Website's" namespace DLL, which could simply be website. I want to be able to make a separate, custom control which does not compile into those DLLs, as the custom files should not rely on those DLLs to run. Is it as simple as setting a separate namespace for those files, like CustomFiles.ProjectName for example? Think of the whole idea as adding modules to the .NET project; I don't want the module's code in any of the core DLLs, but I do need the module to be able to access the core DLLs. (There is no need for the core project to access the module code as it should be one way only in theory, though I reckon it would not be possible anyway without using JSON/SOAP or something like that, maybe I am wrong.) I want to create a pluggable environment much like that of Joomla/WordPress; since PHP generally doesn't have to be compiled first, I see this as the reason why all this is possible/easy there. The idea is to allow pluggable themes, modules etc etc. (I haven't tried simply adding .NET themes after compile/publish but I am assuming this is possible anyway? OR does the compiler need to reference items in the files?)

    Read the article

  • How can the Private Bytes of a process be significantly less than its effect on the system commit charge?

    - by bacar
    On a 64-bit Windows Server 2003, I can see using taskmgr or process explorer that the total commit charge is around 3.5GB, yet when I sum the Private Bytes consumed by each process (by running pslist -m and adding all values under the Priv column) the total comes in at 1.6GB. I know which process seems to be causing this (sqlservr.exe) as when I kill the process, the commit charge drops dramatically. However the process in question is consuming only ~220MB of Private Bytes yet killing the process drops the commit charge by ~1.6GB. How is this possible? How can the commit charge be so significantly greater than Private Bytes, which should represent the amount of committed memory? If some other factor contributes to the commit charge, what is that factor and how can I view its impact in process explorer? Note: I claim that I understand the difference between reserved and committed memory already: my investigations above relate specifically to Private Bytes which includes only committed memory and excludes reserved memory. the Virtual Size of the process in this case is over 4GB, but this should be irrelevant - Virtual Size in procexp represents reserved, not committed memory, and should not contribute to the commit charge. I'm particularly interested in generalised answers to this question: I'm assuming that if sqlservr.exe can behave in this way, that any process potentially could. Further Investigations I notice that pointing Sysinternals VMMap at this process reports a committed "Private Data" of 1.6GB despite Procexp's reported a Private Bytes of 220MB. This is particularly strange given that the documentation for this field in the "Windows® Sysinternals Administrator's Reference" states that: Private Data memory is memory that is allocated by VirtualAlloc and that is not further handled by the Heap Manager or the .NET runtime, or assigned to the Stack category... VMMap’s definition of “Private Data” is more granular than that of Process Explorer’s “private bytes.” Procexp’s “private bytes” includes all private committed memory belonging to the process. i.e. that VMMap's committed "Private Data" should be smaller than procexp's "Private Bytes". Also, after reading the 'Process committed memory' section of Mark Russinovich's excellent Pushing the Limits of Windows: Virtual Memory, he highlights two cases which won't show up in Private Bytes: File mapping views with copy-on-write semantics (however, according to VMMap there is no significant space allocated to Mapped Files). pagefile-backed virtual memory (however, I tried testlimit with the -l flag as suggested, and no significant memory is consumed by pagefile-backed sections)

    Read the article

  • How does Sentry aggregate errors?

    - by Hugo Rodger-Brown
    I am using Sentry (in a django project), and I'd like to know how I can get the errors to aggregate properly. I am logging certain user actions as errors, so there is no underlying system exception, and am using the culprit attribute to set a friendly error name. The message is templated, and contains a common message ("User 'x' was unable to perform action because 'y'"), but is never exactly the same (different users, different conditions). Sentry clearly uses some set of attributes under the hood to determine whether to aggregate errors as the same exception, but despite having looked through the code, I can't work out how. Can anyone short-cut my having to dig further into the code and tell me what properties I need to set in order to manage aggregation as I would like? [UPDATE 1: event grouping] This line appears in sentry.models.Group: class Group(MessageBase): """ Aggregated message which summarizes a set of Events. """ ... class Meta: unique_together = (('project', 'logger', 'culprit', 'checksum'),) ... Which makes sense - project, logger and culprit I am setting at the moment - the problem is checksum. I will investigate further; however, 'checksum' suggests binary equivalence, which is never going to work - it must be possible to group instances of the same exception with different attributes? [UPDATE 2: event checksums] The event checksum comes from the sentry.manager.get_checksum_from_event method: def get_checksum_from_event(event): for interface in event.interfaces.itervalues(): result = interface.get_hash() if result: hash = hashlib.md5() for r in result: hash.update(to_string(r)) return hash.hexdigest() return hashlib.md5(to_string(event.message)).hexdigest() Next stop - where do the event interfaces come from? [UPDATE 3: event interfaces] I have worked out that interfaces refer to the standard mechanism for describing data passed into sentry events, and that I am using the standard sentry.interfaces.Message and sentry.interfaces.User interfaces. Both of these will contain different data depending on the exception instance - and so a checksum will never match. Is there any way that I can exclude these from the checksum calculation? (Or at least the User interface value, as that has to be different - the Message interface value I could standardise.) [UPDATE 4: solution] Here are the two get_hash functions for the Message and User interfaces respectively: # sentry.interfaces.Message def get_hash(self): return [self.message] # sentry.interfaces.User def get_hash(self): return [] Looking at these two, only the Message.get_hash interface will return a value that is picked up by the get_checksum_for_event method, and so this is the one that will be returned (hashed etc.) The net effect of this is that the checksum is evaluated on the message alone - which in theory means that I can standardise the message and keep the user definition unique. I've answered my own question here, but hopefully my investigation is of use to others having the same problem. (As an aside, I've also submitted a pull request against the Sentry documentation as part of this ;-)) (Note to anyone using / extending Sentry with custom interfaces - if you want to avoid your interface being used to group exceptions, return an empty list.)
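
    To make the conclusion above concrete: because the fallback checksum is just an md5 of the event message, any interpolated values split the groups, while a standardised (templated) message keeps them together. A tiny sketch of that behaviour using nothing but hashlib (the message strings are invented examples):

      import hashlib

      def fallback_checksum(message):
          # Same idea as the last line of get_checksum_from_event quoted above.
          return hashlib.md5(message.encode("utf-8")).hexdigest()

      # Fully interpolated messages: a new checksum (and a new group) per user.
      a = fallback_checksum("User 'alice' was unable to perform action because 'quota exceeded'")
      b = fallback_checksum("User 'bob' was unable to perform action because 'quota exceeded'")
      print(a == b)   # False -> two separate groups

      # Standardised message, with the varying parts carried in other event data:
      msg_for_alice = "User %s was unable to perform action because %s"
      msg_for_bob = "User %s was unable to perform action because %s"
      print(fallback_checksum(msg_for_alice) == fallback_checksum(msg_for_bob))   # True -> one group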

    Read the article

  • Coping with feelings of technical mediocrity

    - by Karim
    As I've progressed as a programmer, I've noticed more nuance and more areas I could study in depth. In the process, I've gone from thinking of myself, at one point, as a "guru" to now much less - even mediocre or inadequate. Is this normal, or is it a sign of a destructive, excessive ambition? Background: I started to program when I was still a kid, about 10 or 11 years old. I really enjoy my work and never get bored of it. It's amazing how somebody could be paid for what he really likes to do and would be doing it anyway even for free. When I first started to program, I felt proud of what I was doing; each application I built was a success for me, and after 2-3 years I had the feeling that I was a coding guru. It was a nice feeling. ;-) But the longer I was in the field and the more types of software I started to develop, the more I began to feel that I was completely wrong in thinking I was a guru. I felt that I was not even a mediocre developer. Each new field I start to work on gives me this feeling. Like when I once developed a device driver for a client, I saw how much I needed to learn about device drivers. When I developed a video filter for an application, I saw how much I still needed to learn about DirectShow, color spaces, and all the theory behind that. The worst thing was when I started to learn algorithms. It was several years ago. I knew then the basic structures and algorithms - sorting, some types of trees, some hash tables, strings, etc. - and when I really wanted to learn one group of structures, I learned about 5-6 new types and saw that in fact even this small group has several hundred subtypes of structures. It's depressing how little time people have in their lives to learn all this stuff. I'm now a software developer with about 10 years of experience and I still feel that I'm not a proficient developer when I think about things that others do in the industry.

    Read the article

  • Extend argparse to write set names in the help text for optional argument choices and define those sets once at the end

    - by Kent
    Example of the problem If I have a list of valid option strings which is shared between several arguments, the list is written in multiple places in the help string, making it harder to read: def main(): elements = ['a', 'b', 'c', 'd', 'e', 'f'] parser = argparse.ArgumentParser() parser.add_argument( '-i', nargs='*', choices=elements, default=elements, help='Space separated list of case sensitive element names.') parser.add_argument( '-e', nargs='*', choices=elements, default=[], help='Space separated list of case sensitive element names to ' 'exclude from processing') parser.parse_args() When running the above function with the command line argument --help it shows: usage: arguments.py [-h] [-i [{a,b,c,d,e,f} [{a,b,c,d,e,f} ...]]] [-e [{a,b,c,d,e,f} [{a,b,c,d,e,f} ...]]] optional arguments: -h, --help show this help message and exit -i [{a,b,c,d,e,f} [{a,b,c,d,e,f} ...]] Space separated list of case sensitive element names. -e [{a,b,c,d,e,f} [{a,b,c,d,e,f} ...]] Space separated list of case sensitive element names to exclude from processing What would be nice It would be nice if one could define an option list name, and in the help output write the option list name in multiple places and define it last of all. In theory it would work like this: def main_optionlist(): elements = ['a', 'b', 'c', 'd', 'e', 'f'] # Two instances of OptionList are equal if and only if they # have the same name (ALFA in this case) ol = OptionList('ALFA', elements) parser = argparse.ArgumentParser() parser.add_argument( '-i', nargs='*', choices=ol, default=ol, help='Space separated list of case sensitive element names.') parser.add_argument( '-e', nargs='*', choices=ol, default=[], help='Space separated list of case sensitive element names to ' 'exclude from processing') parser.parse_args() And when running the above function with the command line argument --help it would show something similar to: usage: arguments.py [-h] [-i [ALFA [ALFA ...]]] [-e [ALFA [ALFA ...]]] optional arguments: -h, --help show this help message and exit -i [ALFA [ALFA ...]] Space separated list of case sensitive element names. -e [ALFA [ALFA ...]] Space separated list of case sensitive element names to exclude from processing sets in optional arguments: ALFA {a,b,c,d,e,f} Question I need to: Replace the {'l', 'i', 's', 't', 's'} shown with the option name, in the optional arguments. At the end of the help text show a section explaining which elements each option name consists of. So I ask: Is this possible using argparse? Which classes would I have to inherit from and which methods would I need to override? I have tried looking at the source for argparse, but as this modification feels pretty advanced I don't know how to get going.
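
    For comparison, stock argparse already gets fairly close to the output sketched above without any subclassing: metavar replaces the {a,b,c,d,e,f} choice list in the usage and help text, and epilog can carry the legend that defines the set (a real "sets in optional arguments" section heading would still need a custom HelpFormatter). A sketch of that approach:

      import argparse

      elements = ['a', 'b', 'c', 'd', 'e', 'f']
      parser = argparse.ArgumentParser(
          epilog="sets in optional arguments:\n  ALFA {%s}" % ','.join(elements),
          formatter_class=argparse.RawDescriptionHelpFormatter)   # keep the epilog's line break
      parser.add_argument(
          '-i', nargs='*', choices=elements, default=elements, metavar='ALFA',
          help='Space separated list of case sensitive element names.')
      parser.add_argument(
          '-e', nargs='*', choices=elements, default=[], metavar='ALFA',
          help='Space separated list of case sensitive element names to '
               'exclude from processing')
      parser.parse_args()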

    Read the article

  • Applying styles to a GridView matching certain criteria

    - by NickK
    Hi everyone. I'm fairly new to ASP.Net so it's probably just me being a bit stupid, but I just can't figure out why this isn't working. Basically, I have a GridView control (GridView1) on a page which is reading from a database. I already have a CSS style applied to the GridView and all I want to do is change the background image applied in the style depending on if a certain cell has data in it or not. The way I'm trying to handle this change is updating the CSS class applied to each row through C#. I have the code below doing this: protected void GridView1_RowDataBound(object sender, GridViewRowEventArgs e) { GridViewRow row = e.Row; string s = row.Cells[7].Text; if (s.Length > 0) { row.CssClass = "newRowBackground"; } else { row.CssClass = "oldRowBackground"; } } In theory, the data from Cell[7] will either be null or be a string (in this case, likely a person's name). The problem is that when the page loads, every row in the GridView has the new style applied to it, whether it's empty or not. However, when I change it to use hard coded examples, it works fine. So for example, the below would work exactly how I want it to: protected void GridView1_RowDataBound(object sender, GridViewRowEventArgs e) { GridViewRow row = e.Row; string s = row.Cells[7].Text; if (s == "Smith") //Matching a name in one of the rows { row.CssClass = "newRowBackground"; } else { row.CssClass = "oldRowBackground"; } } It seems as if the top piece of code is always returning the string with a value greater than 0, but when I check the database the fields are all null (except for my test record of "Smith"). I'm probably doing something very simple that's wrong here, but I can't see what. Like I said, I'm still very new to this. One thing I have tried is changing the argument in the if statement to things like: if (s != null), if (s != "") and if (s == string.empty) all with no luck. Any help is greatly appreciated and don't hesitate to tell me if I'm just being stupid here. :)

    Read the article

  • Insert Registration Data in MySQL using PHP

    - by J M 4
    I may not be asking this in the best way possible but I will try my hardest. Thank you ahead of time for your help: I am creating an enrollment website which allows an individual OR manager to enroll for medical testing services for professional athletes. I will NOT be using the site as a query DB in which anybody can view information stored within the database. The information is instead simply stored, and passed along in a CSV format to our network provider so they can use it as needed after the fact. There are two possible scenarios: Scenario 1 - Individual Enrollment If an individual athlete chooses to enroll him/herself, they enter their personal information, submit their payment information (credit/bank account) for processing, and their information is stored in an online database as Athlete1. Scenario 2 - Manager Enrollment If a manager chooses to enroll several athletes he manages/promotes for, he enters his personal information, then enters the personal information for each athlete he wishes to pay for (name, address, ssn, dob, etc), then submits payment information for ALL athletes he is enrolling. This number can range from 1 single athlete, up to 20 athletes per single enrollment (he can return and complete a follow-up enrollment for additional athletes). Initially, I was building the database to house ALL information regardless of enrollment type in a single table which housed over 400 columns (think 20 athletes with over 10 fields per athlete such as name, dob, ssn, etc). Now that I think about it more, I believe creating multiple tables (manager(s), athlete(s)) may be a better idea here, but I'm still not quite sure how to go about it for the following very important reasons: Issue 1 If I list the manager as the parent table, I am afraid the individual enrolling athlete will not show up in the primary table and will not be included in the overall registration file which needs to be sent on to the network providers. Issue 2 All athletes being enrolled by a manager are being stored in SESSION as F1FirstName, F2FirstName where F1 and F2 relate to the id of the fighter. I am not sure, technically speaking, how to store multiple pieces of information within the same table under separate rows using PHP. For example, all athletes will have a first name. The very basic theory of what I am trying to do is: if number_of_athletes > 1, store F1FirstName in row 1, column 1 of Table "Athletes"; store F1LastName in row 1, column 2 of Table "Athletes"; store F2FirstName in row 2, column 1 of Table "Athletes"; store F2LastName in row 2, column 2 of table "Athletes"; Does this make sense? I know this question is very long and probably difficult so I appreciate the guidance.
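
    A normalised two-table layout usually resolves both issues: one row per enrollment (whether it came from an individual or a manager) and one row per athlete, linked by a foreign key, so the F1/F2 session fields just turn into repeated inserts instead of 400 columns. The sketch below uses Python's built-in sqlite3 purely to keep the example self-contained; the real site would do the same thing from PHP against MySQL with the full field list, and the table and column names are illustrative:

      import sqlite3

      db = sqlite3.connect(":memory:")
      db.executescript("""
          CREATE TABLE enrollments (
              id         INTEGER PRIMARY KEY,
              payer_name TEXT,
              payer_type TEXT      -- 'individual' or 'manager'
          );
          CREATE TABLE athletes (
              id            INTEGER PRIMARY KEY,
              enrollment_id INTEGER REFERENCES enrollments(id),
              first_name    TEXT,
              last_name     TEXT
              -- dob, ssn, address, ... one column each, never F1/F2 copies
          );
      """)

      # A manager enrolling two athletes (values pulled from the F1/F2 session fields).
      cur = db.execute("INSERT INTO enrollments (payer_name, payer_type) VALUES (?, ?)",
                       ("Example Manager", "manager"))
      enrollment_id = cur.lastrowid
      athletes_from_session = [("F1FirstName", "F1LastName"), ("F2FirstName", "F2LastName")]
      db.executemany(
          "INSERT INTO athletes (enrollment_id, first_name, last_name) VALUES (?, ?, ?)",
          [(enrollment_id, first, last) for first, last in athletes_from_session])
      db.commit()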

    Read the article

  • Directly Jump to another C++ function

    - by maligree
    I'm porting a small academic OS from TriCore to ARM Cortex (Thumb-2 instruction set). For the scheduler to work, I sometimes need to JUMP directly to another function without modifying the stack or the link register. On TriCore (or, rather, on tricore-g++), this wrapper template (for any three-argument function) works: template< class A1, class A2, class A3 > inline void __attribute__((always_inline)) JUMP3( void (*func)( A1, A2, A3), A1 a1, A2 a2, A3 a3 ) { typedef void (* __attribute__((interrupt_handler)) Jump3)( A1, A2, A3); ( (Jump3)func )( a1, a2, a3 ); } //example for using the template: JUMP3( superDispatch, this, me, next ); This would generate the assembler instruction J (a.k.a. JUMP) instead of CALL, leaving the stack and CSAs unchanged when jumping to the (otherwise normal) C++ function superDispatch(SchedulerImplementation* obj, Task::Id from, Task::Id to). Now I need an equivalent behaviour on ARM Cortex (or, rather, for arm-none-linux-gnueabi-g++), i.e. generate a B (a.k.a. BRANCH) instruction instead of BLX (a.k.a. BRANCH with link and exchange). But there is no interrupt_handler attribute for arm-g++ and I could not find any equivalent attribute. So I tried to resort to asm volatile and writing the asm code directly: template< class A1, class A2, class A3 > inline void __attribute__((always_inline)) JUMP3( void (*func)( A1, A2, A3), A1 a1, A2 a2, A3 a3 ) { asm volatile ( "mov.w r0, %1;" "mov.w r1, %2;" "mov.w r2, %3;" "b %0;" : : "r"(func), "r"(a1), "r"(a2), "r"(a3) : "r0", "r1", "r2" ); } So far, so good - in theory, at least. Thumb-2 requires function arguments to be passed in the registers, i.e. r0..r2 in this case, so it should work. But then the linker dies with undefined reference to `r6' on the closing bracket of the asm statement ... and I don't know what to make of it. OK, I'm not an expert in C++, and the asm syntax is not very straightforward... so has anybody got a hint for me? A hint to the correct __attribute__ for arm-g++ would be one way; a hint to fix the asm code would be another. Another way might be to tell the compiler that a1..a3 should already be in the registers r0..r2 when the asm statement is entered (I looked into that a bit, but did not find any hint).

    Read the article

  • Declare Locally or Globally in Delphi?

    - by lkessler
    I have a procedure my program calls tens of thousands of times that uses a generic structure like this: procedure PrintIndiEntry(JumpID: string); type TPeopleIncluded = record IndiPtr: pointer; Relationship: string; end; var PeopleIncluded: TList<TPeopleIncluded>; PI: TPeopleIncluded; begin { PrintIndiEntry } PeopleIncluded := TList<TPeopleIncluded>.Create; { A loop here that determines a small number (up to 100) people to process } while ... do begin PI.IndiPtr := ...; PI.Relationship := ...; PeopleIncluded.Add(PI); end; DoSomeProcess(PeopleIncluded); PeopleIncluded.Clear; PeopleIncluded.Free; end { PrintIndiEntry } Alternatively, I can declare PeopleIncluded globally rather than locally as follows: unit process; interface type TPeopleIncluded = record IndiPtr: pointer; Relationship: string; end; var PeopleIncluded: TList<TPeopleIncluded>; PI: TPeopleIncluded; procedure PrintIndiEntry(JumpID: string); begin { PrintIndiEntry } { A loop here that determines a small number (up to 100) people to process } while ... do begin PI.IndiPtr := ...; PI.Relationship := ...; PeopleIncluded.Add(PI); end; DoSomeProcess(PeopleIncluded); PeopleIncluded.Clear; end { PrintIndiEntry } procedure InitializeProcessing; begin PeopleIncluded := TList<TPeopleIncluded>.Create; end; procedure FinalizeProcessing; begin PeopleIncluded.Free; end; My question is whether in this situation it is better to declare PeopleIncluded globally rather than locally. I know the theory is to define locally whenever possible, but I would like to know if there are any issues to worry about with regard to doing tens of thousands of "create"s and "free"s? Making them global will do only one create and one free. What is the recommended method to use in this case? If the recommended method is to still define it locally, then I'm wondering if there are any situations where it is better to define globally when defining locally is still an option.

    Read the article

  • Setting Up My Server to Do DNS On OpenSuse 11.3

    - by adaykin
    Hello, I am attempting to use my server as a DNS server. I am having trouble getting the domain set up. Here is what I have so far: /var/lib/named/master/andydaykin.com: $TTL 2d @ IN SOA andydaykin.com. root.andydaykin.com. ( 2011011000 ; serial 0 ; refresh 0 ; retry 0 ; expiry 0 ) ; minimum andydaykin.com. IN NS ns1.andydaykin.com. andydaykin.com. IN SOA ns1.andydaykin.com. hostmaster.andydaykin.com. ( @.andydaykin.com. IN NS ns1.andydaykin.com. ns1.andydaykin.com. IN A 204.12.227.33 www.andydaykin.com. IN A 204.12.227.33 /etc/resolve.conf: search andydaykin.com nameserver 204.12.227.33 /etc/named.conf: options { # The directory statement defines the name server's working directory directory "/var/lib/named"; dump-file "/var/log/named_dump.db"; statistics-file "/var/log/named.stats"; listen-on port 53 { 127.0.0.1; }; listen-on-v6 { any; }; notify no; disable-empty-zone "1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.IP6.ARPA"; include "/etc/named.d/forwarders.conf"; }; zone "." in { type hint; file "root.hint"; }; zone "localhost" in { type master; file "localhost.zone"; }; zone "0.0.127.in-addr.arpa" in { type master; file "127.0.0.zone"; }; Include the meta include file generated by createNamedConfInclude. This includes all files as configured in NAMED_CONF_INCLUDE_FILES from /etc/sysconfig/named include "/etc/named.conf.include"; zone "andydaykin.com" in { file "master/andydaykin.com"; type master; allow-transfer { any; }; }; logging { category default { log_syslog; }; channel log_syslog { syslog; }; }; What am I doing wrong?

    Read the article

  • avoiding enums as interface identifiers c++ OOP

    - by AlasdairC
    Hi, I'm working on a plugin framework using dynamically loaded shared libraries which is based on Eclipse's (and probably others') extension-point model. All plugins share similar properties (name, id, version etc) and each plugin could in theory satisfy any extension-point. The actual plugin (ie Dll) handling is managed by another library; all I am doing really is managing collections of interfaces for the application. I started by using an enum PluginType to distinguish the different interfaces, but I quickly realised that using template functions made the code far cleaner and would leave the grunt work up to the compiler, rather than forcing me to use lots of switch {...} statements. The only issue is where I need to specify similar functionality for class members - the most obvious example is the default plugin which provides a particular interface. A Settings class handles all settings, including the default plugin for an interface. ie Skin newSkin = settings.GetDefault<ISkin>(); How do I store the default ISkin in a container without resorting to some other means of identifying the interface? As I mentioned above, I currently use a std::map<PluginType, IPlugin> Settings::defaults member to achieve this (where IPlugin is an abstract base class which all plugins derive from). I can then dynamic_cast to the desired interface when required, but this really smells of bad design to me and introduces more harm than good I think. I would welcome any tips. Edit: here's an example of the current use of default plugins typedef boost::shared_ptr<ISkin> Skin; typedef boost::shared_ptr<IPlugin> Plugin; enum PluginType { skin, ..., ... } class Settings { public: void SetDefault(const PluginType type, boost::shared_ptr<IPlugin> plugin) { m_default[type] = plugin; } boost::shared_ptr<IPlugin> GetDefault(const PluginType type) { return m_default[type]; } private: std::map<PluginType, boost::shared_ptr<IPlugin> m_default; }; SkinManager::Initialize() { Plugin thedefault = g_settings.GetDefault(skinplugin); Skin defaultskin = boost::dynamic_pointer_cast<ISkin>(theskin); defaultskin->Initialize(); } I would much rather call GetDefault like the following, with automatic casting to the derived class. However, I need to specialize for every class type. template<> Skin Settings::GetDefault<ISkin>() { return boost::dynamic_pointer_cast<ISkin>(m_default(skin)); }
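
    If it helps to see the "key the container by the interface type itself" idea in isolation before wrestling with the C++ template specializations: here is a small, hypothetical Python analogue where the interface class replaces the PluginType enum, so GetDefault<ISkin>() becomes get_default(ISkin) and the caller never needs a cast or a switch:

      class IPlugin: ...
      class ISkin(IPlugin): ...
      class GreenSkin(ISkin): ...

      class Settings:
          def __init__(self):
              self._defaults = {}                     # interface class -> plugin instance

          def set_default(self, interface, plugin):
              if not isinstance(plugin, interface):
                  raise TypeError(f"{type(plugin).__name__} does not implement {interface.__name__}")
              self._defaults[interface] = plugin

          def get_default(self, interface):
              # The entry was stored under its interface, so no downcast is needed here.
              return self._defaults[interface]

      settings = Settings()
      settings.set_default(ISkin, GreenSkin())
      default_skin = settings.get_default(ISkin)      # analogous to GetDefault<ISkin>()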

    Read the article

  • SwingWorker exceptions lost even when using wrapper classes

    - by Ti Strga
    I've been struggling with the usability problem of SwingWorker eating any exceptions thrown in the background task, for example, described on this SO thread. That thread gives a nice description of the problem, but doesn't discuss recovering the original exception. The applet I've been handed needs to propagate the exception upwards. But I haven't been able to even catch it. I'm using the SimpleSwingWorker wrapper class from this blog entry specifically to try and address this issue. It's a fairly small class but I'll repost it at the end here just for reference. The calling code looks broadly like try { // lots of code here to prepare data, finishing with SpecialDataHelper helper = new SpecialDataHelper(...stuff...); helper.execute(); } catch (Throwable e) { // used "Throwable" here in desperation to try and get // anything at all to match, including unchecked exceptions // // no luck, this code is never ever used :-( } The wrappers: class SpecialDataHelper extends SimpleSwingWorker { public SpecialDataHelper (SpecialData sd) { this.stuff = etc etc etc; } public Void doInBackground() throws Exception { OurCodeThatThrowsACheckedException(this.stuff); return null; } protected void done() { // called only when successful // never reached if there's an error } } The feature of SimpleSwingWorker is that the actual SwingWorker's done()/get() methods are automatically called. This, in theory, rethrows any exceptions that happened in the background. In practice, nothing is ever caught, and I don't even know why. The SimpleSwingWorker class, for reference, and with nothing elided for brevity: import java.util.concurrent.ExecutionException; import javax.swing.SwingWorker; /** * A drop-in replacement for SwingWorker<Void,Void> but will not silently * swallow exceptions during background execution. * * Taken from http://jonathangiles.net/blog/?p=341 with thanks. */ public abstract class SimpleSwingWorker { private final SwingWorker<Void,Void> worker = new SwingWorker<Void,Void>() { @Override protected Void doInBackground() throws Exception { SimpleSwingWorker.this.doInBackground(); return null; } @Override protected void done() { // Exceptions are lost unless get() is called on the // originating thread. We do so here. try { get(); } catch (final InterruptedException ex) { throw new RuntimeException(ex); } catch (final ExecutionException ex) { throw new RuntimeException(ex.getCause()); } SimpleSwingWorker.this.done(); } }; public SimpleSwingWorker() {} protected abstract Void doInBackground() throws Exception; protected abstract void done(); public void execute() { worker.execute(); } }
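
    The "exceptions are lost unless get() is called" behaviour described in the wrapper's comment is not unique to SwingWorker, and seeing it in a smaller setting can make the mechanism easier to picture; the following is only an analogy, not a fix for the Swing code. In Python's concurrent.futures, for example, an exception raised in the background is stored on the future and re-raised only where result() is called, the moral equivalent of calling get() inside done():

      from concurrent.futures import ThreadPoolExecutor

      def background_task():
          raise RuntimeError("boom")        # like doInBackground() throwing

      with ThreadPoolExecutor(max_workers=1) as pool:
          future = pool.submit(background_task)
          # Nothing has been reported yet; the exception sits inside the future.
          try:
              future.result()               # like SwingWorker.get(): re-raises here
          except RuntimeError as err:
              print("caught:", err)         # caught: boom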

    Read the article

  • PHP/MySQL Interview - How would you have answered?

    - by martincarlin87
    I was asked this interview question so thought I would post it here to see how other users would answer: Please write some code which connects to a MySQL database (any host/user/pass), retrieves the current date & time from the database, compares it to the current date & time on the local server (i.e. where the application is running), and reports on the difference. The reporting aspect should be a simple HTML page, so that in theory this script can be put on a web server, set to point to a particular database server, and it would tell us whether the two servers’ times are in sync (or close to being in sync). This is what I put: // Connect to database server $dbhost = 'localhost'; $dbuser = 'xxx'; $dbpass = 'xxx'; $dbname = 'xxx'; $conn = mysql_connect($dbhost, $dbuser, $dbpass) or die (mysql_error()); // Select database mysql_select_db($dbname) or die(mysql_error()); // Retrieve the current time from the database server $sql = 'SELECT NOW() AS db_server_time'; // Execute the query $result = mysql_query($sql) or die(mysql_error()); // Since query has now completed, get the time of the web server $php_server_time = date("Y-m-d h:m:s"); // Store query results in an array $row = mysql_fetch_array($result); // Retrieve time result from the array $db_server_time = $row['db_server_time']; echo $db_server_time . '<br />'; echo $php_server_time; if ($php_server_time != $db_server_time) { // Server times are not identical echo '<p>Database server and web server are not in sync!</p>'; // Convert the time stamps into seconds since 01/01/1970 $php_seconds = strtotime($php_server_time); $sql_seconds = strtotime($db_server_time); // Subtract smaller number from biggest number to avoid getting a negative result if ($php_seconds > $sql_seconds) { $time_difference = $php_seconds - $sql_seconds; } else { $time_difference = $sql_seconds - $php_seconds; } // convert the time difference in seconds to a formatted string displaying hours, minutes and seconds $nice_time_difference = gmdate("H:i:s", $time_difference); echo '<p>Time difference between the servers is ' . $nice_time_difference; } else { // Timestamps are exactly the same echo '<p>Database server and web server are in sync with each other!</p>'; } Yes, I know that I have used the deprecated mysql_* functions but that aside, how would you have answered, i.e. what changes would you make and why? Are there any factors I have omitted which I should take into consideration? The interesting thing is that my results always seem to be an exact number of minutes apart when executed on my hosting account: 2012-12-06 11:47:07 2012-12-06 11:12:07

    Read the article
