Search Results

Search found 344 results on 14 pages for 'ethan hunt'.

Page 3/14 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Developer hardware autonomy in a managed desktop environment [closed]

    - by Troy Hunt
    I’m looking for some feedback on how developer PCs are managed within environments that have a strict managed desktop policy (normally large corporations). For example, many corporate environments control the installation of software and the deployment of patches and virus updates through a centralised channel. This usually means also dictating the OS version and architecture (32 bit versus 64 bit) which will likely also mean standardised hardware configurations. I’m particularly interested in feedback from developers who work in this sort of environment but have a high degree of autonomy over their machines. This might mean choosing your own hardware vendor, OS type and version and perhaps how the machines are built and maintained. I have several specific questions: How do you satisfy the needs of security, governance, etc. whilst maintaining your autonomy? For example, how do you address concerns about keeping virus definitions and OS patches up to date? Do you have a process for gaining exemption from standard desktop builds and if so, what do you need to demonstrate in order to get this? How have you justified this need to the decision makers? Essentially, what is the benefit to your role as a developer by having this degree of autonomy? Thanks very much everyone. Update: There's a great post from Jean-Paul Boodhoo which addresses the developer tool component of the question here: http://blog.jpboodhoo.com/TheFallacyOfTheStandardizedDeveloperMachineimage.aspx

    Read the article

  • What is the risk of introducing non-standard image machines to a corporate environment

    - by Troy Hunt
    I’m after some feedback from those in the managed desktop or network security space on the risks of introducing machines that are not built on a standard desktop image into a large corporate environment. This particular context relates to the standard corporate image (32 bit Win XP) in a large multi-national not being suitable for a particular segment of users. In short, I’m looking at what hurdles we might come across by proposing the introduction of machines which are built and maintained by a handful of software developers and not based on the corporate desktop image (proposing 64 bit Win 7). I suspect the barriers are primarily around virus definition updates, the rollout of service packs and patches and the compatibility of existing applications with the newer OS. In terms of viruses and software updates, if machines were using common virus protection software with automated updates and using Windows Update for service packs and patches, is there still a viable risk to the corporate environment? For that matter, are large corporate environments normally vulnerable to the introduction of a machine not based on a standard image? I’m trying to get my head around how real the risk of infection and other adverse events are from machines being plugged into the network. There are multiple scenarios outside of just the example above where this might happen (i.e. a vendor plugging in a machine for internet access during a presentation). Would a large corporate network normally be sufficiently hardened against such innocuous activity? I appreciate the theory as to why policies such as standard desktop images exist, I’m just interested in the actual, practical risk and how much a network should be protected by means other than what is managed on individual PCs.

    Read the article

  • How do you make a randomly generated url address after form input?

    - by pmal10
    This is my first time ever posting on a Stack Exchange website, so I don't know much, but my friend, a guy named Ethan, knows more. But, to get on topic, I have a problem, or rather a question. Is there a way to get a URL from what you posted? I don't want to use the GET method on the post, because what I want to make is something like this: http://testwebsiteblahblahblah.com/forminput?formID=817 Is there a way to do it with JavaScript, HTML (CSS), ASP, or PHP?

    Read the article

  • SEO’s Future Is Now

    The world of search is changing right before our eyes. Google is making waves, Microsoft just badda boomed Bing’ed themselves right back into direct competition with Google’s giant chunk of the searc... [Author: Ethan Luke - Computers and Internet - March 22, 2010]

    Read the article

  • SEO’s Perception Gap

    Search engine optimization is a newly emerging industry that is still growing every day. Its close ties to the Internet and Google allow the service to ride the coat-tails of search into an ever-cha... [Author: Ethan Luke - Computers and Internet - March 22, 2010]

    Read the article

  • SEO’s Perception Gap

    Search engine optimization is a newly emerging industry that is still growing every day. Its close ties to the Internet and Google allow the service to ride the coat-tails of search into an ever-cha... [Author: Ethan Luke - Computers and Internet - April 09, 2010]

    Read the article

  • List of resources for database continuous integration

    - by David Atkinson
    Because there is so little information on database continuous integration out in the wild, I've taken it upon myself to aggregate as much as possible and post the links to this blog. Because it's my area of expertise, this will focus on SQL Server and Red Gate tooling, although I am keen to include any quality articles that discuss the topic in general terms. Please let me know if you find a resource that I haven't listed! General database Continuous Integration · What is Database Continuous Integration? (David Atkinson) · Continuous Integration for SQL Server Databases (Troy Hunt) · Installing NAnt to drive database continuous integration (David Atkinson) · Continuous Integration Tip #3 - Version your Databases as part of your automated build (Doug Rathbone) · How the "migrations" approach makes database continuous integration possible (David Atkinson) · Continuous Integration for the Database (Keith Bloom) Setting up Continuous Integration with Red Gate tools · Continuous integration for databases using Red Gate tools - A technical overview (White Paper, Roger Hart and David Atkinson) · Continuous integration for databases using Red Gate SQL tools (Product pages) · Database continuous integration step by step (David Atkinson) · Database Continuous Integration with Red Gate Tools (video, David Atkinson) · Database schema synchronisation with RedGate (Vincent Brouillet) · Database continuous integration and deployment with Red Gate tools (David Duffett) · Automated database releases with TeamCity and Red Gate (Troy Hunt) · How to build a database from source control (David Atkinson) · Continuous Integration Automated Database Update Process (Lance Lyons) Other · Evolutionary Database Design (Martin Fowler) · Recipes for Continuous Database Integration: Evolutionary Database Development (book, Pramod J Sadalage) · Recipes for Continuous Database Integration (book, Pramod Sadalage) · The Red Gate Guide to SQL Server Team-based Development (book, Phil Factor, Grant Fritchey, Alex Kuznetsov, Mladen Prajdic) · Using SQL Test Database Unit Testing with TeamCity Continuous Integration (Dave Green) · Continuous Database Integration (covers MySQL, Pearson Education) Technorati Tags: SQL Server, Continuous Integration

    Read the article

  • Finding bugs is difficult, right?

    - by Laila
    Something I hear developers tell us all the time is that they take pride in being a developer, and that bugs are a dent in that pride. Someone once told me "I know I have found bugs years later, and it's the worst feeling in the world." So how can you avoid that sinking feeling when you find out a bug has been in production months before someone lets you know about it? Besides, let's face it: hearing about a bug often means a world of pain, because it can take hours to track down where the problem is and more hours (if not days) to fix it. And during that time, you're not working on something new, and that, my friends, is really frustrating! So to cheer you up, we've created a Bug Hunt game, where you battle against the clock to spot bugs. We've really enjoyed putting this together and hope you enjoy playing it too. Once you're done with the bug hunt, we explain how easy it can be to find and fix bugs in real life, using a neat mechanism that we call Automated Error Reporting. Play the game now.

    Read the article

  • Django automatically compress Model Field on save() and decompress when field is accessed

    - by Brian M. Hunt
    Given a Django model like so: from django.db import models class MyModel(models.Model): textfield = models.TextField() How can one automatically compress textfield (e.g. with zlib) on save() and decompress it when the property textfield is accessed (i.e. not on load), with a workflow like this: m = MyModel() m.textfield = "Hello, world, how are you?" m.save() # compress textfield on save m.textfield # no decompression id = m.id() m = MyModel.get(pk=id) # textfield still compressed m.textfield # textfield decompressed I'd be inclined to think that you would overload MyModel.save, but I don't know the pattern for in-place modification of the element when saving. I also don't know the best way in Django to decompress the field when it's accessed (overload __getattr__?). Or would a better way to do this be to have a custom field type? I'm certain I've seen an example of almost exactly this, but alas I've not been able to find it recently. Thank you for reading – and for any input you may be able to provide.
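
    One possible direction, not taken from the question itself, is to keep the compressed text in the real column and expose a Python property that compresses on assignment and decompresses on access. A minimal sketch, assuming Python 2-era Django and plain byte strings; _textfield and the property are illustrative names, not Django API:

        import zlib
        from django.db import models

        class MyModel(models.Model):
            # Stored, compressed representation; db_column keeps the original column name.
            _textfield = models.TextField(db_column='textfield')

            def _get_textfield(self):
                # Decompress only when the attribute is read.
                return zlib.decompress(self._textfield.decode('base64'))

            def _set_textfield(self, value):
                # Compress eagerly on assignment, so save() just writes the stored value.
                self._textfield = zlib.compress(value).encode('base64')

            textfield = property(_get_textfield, _set_textfield)

    Note this compresses on assignment rather than inside save(), which sidesteps the in-place-modification question at the cost of deviating slightly from the workflow above; a custom field type would be the tidier long-term answer.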

    Read the article

  • What enterprise architecture tools support DoDAF 2.0?

    - by David Hunt
    What tools best support the DoD Architecture Framework (DoDAF) Version 2.0, including support for transfer of the architecture data in accordance with the DoDAF Meta Model (DM2) Physical Exchange Specification (PES)? My initial research found that MagicDraw and Casewise claim support for version 2.0; and several other tools have support for earlier (or unspecified) DoDAF/MoDAF versions including Sparx Enterprise Architect, Troux, IDS Scheer ARIS, Artisan Studio and Rational System Architect. Experiences with any enterprise architecture tools and DoDAF 2.0 would be appreciated. The immediate need is for Data and Information Viewpoint models (DIV-1, DIV-2/OV-7, DIV-3/SV-11), but models in the other Viewpoints will be developed. Thanks -

    Read the article

  • Mapping JSON data in JQGrid

    - by hunt
    Hi, I am using jqGrid 3.6.4 and jQuery 1.4.2. In my sample I am getting the following JSON data format and I want to map this JSON data into the rows of a jqGrid { "page": "1", "total": 1, "records": "6", "rows": [ { "head": { "student_name": "Mr S. Jack ", "year": 2007 }, "sub": [ { "course_description": "Math ", "date": "22-04-2010", "number": 1, "time_of_add": "2:00", "day": "today" } ] } ] } My jqGrid code is as follows jQuery("#"+subgrid_table_id).jqGrid({ url:"http://localhost/stud/beta/web/GetStud.php?sid="+sid, datatype: "json", colNames: ['Stud Name','Year','Date','Number'], colModel: [ {name:'Stud Name',index:'student_name', width:100, jsonmap:"student_name"}, {name:'Year',index:'year', width:100, jsonmap:"year"}, {name:'Date',index:'date', width:100, jsonmap:"date"}, {name:'Number',index:'number', width:100, jsonmap:"number"} ], height:'100%', jsonReader: { repeatitems : false, root:"head" } }); So now the problem is that as my data, i.e. student_name and year, is under "head", jqGrid is unable to locate these two fields. At the same time the other two column values, i.e. Date and Number, lie under "sub", and I am not able to map those columns with jqGrid either, so kindly help me with how to locate these attributes in jqGrid. Thanks

    Read the article

  • Config transformations and “TransformXml task failed” error message

    - by Troy Hunt
    I’ve just enabled config transformations on a .NET 3.5 project in VS2010 RC after watching Scott Hanselman’s video on web deployment. Unfortunately every time I go to publish I now get the following error: The "TransformXml" task failed unexpectedly. System.UriFormatException: Invalid URI: The URI is empty. at System.Uri.CreateThis(String uri, Boolean dontEscape, UriKind uriKind) at System.Uri..ctor(String uriString) at Microsoft.Web.Publishing.Tasks.TransformXml.Execute() at Microsoft.Build.BackEnd.TaskExecutionHost.Microsoft.Build.BackEnd.ITaskExecutionHost.Execute() at Microsoft.Build.BackEnd.TaskBuilder.ExecuteInstantiatedTask(ITaskExecutionHost taskExecutionHost, TaskLoggingContext taskLoggingContext, TaskHost taskHost, ItemBucket bucket, TaskExecutionMode howToExecuteTask, Boolean& taskResult) If I take a brand new VS2010 web application which already has the config transformations by default I don’t have a problem so I suspect my issue is project related. Has anyone come across this before or have any ideas on a fix?

    Read the article

  • Transactions in CodeIgniter with multiple tables

    - by Ethan
    Hey SO, I'm new to transactions in general, but especially with CodeIgniter. I'm using InnoDB and everything, but my transactions aren't rolling back when I want them to. Here's my code (slightly simplified). $dog_db = $this->load->database('dog', true); $dog_db->trans_begin(); $dog_id = $this->dogs->insert($new_dog); //Gets primary key of insert if(!$dog_id) { $dog_db->trans_rollback(); throw new Exception('We have had an error trying to add this dog. Please go back and try again.'); } $new_review['dog_id'] = $dog_id; $new_review['user_id'] = $user_id; $new_review['date_added'] = time(); if(!$this->reviews->insert($new_review)) //If the insert fails { $dog_db->trans_rollback(); throw new Exception('We have had an error trying to add this dog. Please go back and try again.'); } //ADD DESCRIPTION $new_description['description'] = $add_dog['description']; $new_description['dog_id'] = $dog_id; $new_description['user_id'] = $user_id; $new_description['date_added'] = time(); if(!$this->descriptions->insert($new_description)) { $dog_db->trans_rollback(); throw new Exception('We have had an error trying to add this dog. Please go back and try again.'); } $booze_db->trans_rollback(); //THIS IS JUST TO SEE IF IT WORKS throw new Exception('We have had an error trying to add this dog. Please go back and try again.'); $booze_db->trans_commit(); } catch(Exception $e) { echo $e->getMessage(); } I'm not getting any error messages, but it's not rolling back either. It should roll back at that final trans_rollback right before the commit. My models are all on the "dog" database, so I think that the transaction would carry into the models' functions. Maybe you just can't use models like this. Any help would be greatly appreciated! Thanks!

    Read the article

  • Using ExpressionEngine or Joomla templates inside a pre-existing page?

    - by Ethan
    Hey SO, So I'm new to both Joomla and Expression Engine, and want to know if I can use them the way I'd like. I've already made a full site, and would like to integrate blogging into the site. The site is on CodeIgniter. Is there a way that I could create a form template for submitting a post which would then save to my Joomla/CodeIgniter DB? Then, on a different page, use a different Joomla/CodeIgniter template to display the blog in the form I would like. Note that this wouldn't necessarily be powered by EE or Joomla. From what I understand, and from all the examples I've seen, you have to make the HTML of the entire page inside of their templates. At worst, if neither works, is there anything I can use to do this? Thanks!

    Read the article

  • having a weird bug with mongrel - please help!

    - by Ethan
    this is from the development log... /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_controller/dispatcher.rb:101:in `dispatch_cgi' /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_controller/dispatcher.rb:27:in `dispatch' /usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel/rails.rb:76:in `process' /usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel/rails.rb:74:in `synchronize' /usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel/rails.rb:74:in `process' /usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel.rb:159:in `process_client' /usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel.rb:158:in `each' /usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel.rb:158:in `process_client' /usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel.rb:285:in `run' /usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel.rb:285:in `initialize' /usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel.rb:285:in `new' /usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel.rb:285:in `run' /usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel.rb:268:in `initialize' /usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel.rb:268:in `new' /usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel.rb:268:in `run' /usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel/configurator.rb:282:in `run' /usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel/configurator.rb:281:in `each' /usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel/configurator.rb:281:in `run' /usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/mongrel_rails:128:in `run' /usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel/command.rb:212:in `run' /usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/mongrel_rails:281 /usr/bin/mongrel_rails:19:in `load' /usr/bin/mongrel_rails:19 /!\ FAILSAFE /!\ Thu Apr 15 20:19:18 +0000 2010 Status: 500 Internal Server Error please help - any ideas would be amazing - been stuck on trying to fix this thing for a week!

    Read the article

  • What’s the status of CAT.NET?

    - by Troy Hunt
    I’m trying to find Microsoft CAT.NET for VS2010 and it looks like there was a beta of their 2.0 version but every link to it in Microsoft Connect is now dead. This is the most recent reference I could find: http://blogs.msdn.com/securitytools/archive/2010/02/05/how-to-use-cat-net-2-0-beta.aspx Some references suggest it may have been rolled into FxCop. Does anyone know the status of the project?

    Read the article

  • Getting uninitialized constant error when trying to run tests

    - by Ethan
    I just updated all my gems and I'm finding that I'm getting errors when trying to run Test::Unit tests. I'm getting the error copied below. That comes from creating new, empty Rails project, scaffolding a simple model, and running rake test. Tried Googling "uninitialized constant" and TestResultFailureSupport. The only thing I found was this bug report from 2007. I'm using OS X. These are the gems that I updated right before the tests stopped working: $ sudo gem outdated Password: RedCloth (4.2.1 < 4.2.2) RubyInline (3.8.1 < 3.8.2) ZenTest (4.1.1 < 4.1.3) bluecloth (2.0.4 < 2.0.5) capistrano (2.5.5 < 2.5.8) haml (2.0.9 < 2.2.1) hoe (2.2.0 < 2.3.2) json (1.1.6 < 1.1.7) mocha (0.9.5 < 0.9.7) rest-client (1.0.2 < 1.0.3) thoughtbot-factory_girl (1.2.1 < 1.2.2) thoughtbot-shoulda (2.10.1 < 2.10.2) Has anyone else seen this issue? Any troubleshooting suggestions? UPDATE On a hunch I downgraded ZenTest from 4.1.3 back to 4.1.1 and now everything works again. Still curious to know if anyone else has seen this or has any interesting comments or insights. $ rake test (in /Users/me/foo) /usr/local/bin/ruby -I"lib:test" "/usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake/rake_test_loader.rb" "test/unit/helpers/users_helper_test.rb" "test/unit/user_test.rb" /usr/local/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/dependencies.rb:105:in `const_missing': uninitialized constant Test::Unit::TestResult::TestResultFailureSupport (NameError) from /usr/local/lib/ruby/gems/1.8/gems/test-unit-2.0.2/lib/test/unit/testresult.rb:28 from /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require' from /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require' from /usr/local/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/dependencies.rb:158:in `require' from /usr/local/lib/ruby/gems/1.8/gems/test-unit-2.0.2/lib/test/unit/ui/testrunnermediator.rb:9 from /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require' from /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require' from /usr/local/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/dependencies.rb:158:in `require' ... 6 levels... from /usr/local/lib/ruby/1.8/test/unit/autorunner.rb:214:in `run' from /usr/local/lib/ruby/1.8/test/unit/autorunner.rb:12:in `run' from /usr/local/lib/ruby/1.8/test/unit.rb:278 from /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake/rake_test_loader.rb:5 /usr/local/bin/ruby -I"lib:test" "/usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake/rake_test_loader.rb" "test/functional/users_controller_test.rb"

    Read the article

  • replace double quotes to parse JSON in PHP

    - by hunt
    Hi, I have the following JSON format { "status": "ACTIVE", "result": false, "isworking": false, "margin": 1, "employee": { "111": { "val1": 5.7000000000000002, "val2": "9/2", "val3": 5.7000000000000002 }, "222": { "val1": 31.550000000000001, "val2": "29/1", "val3": 31.550000000000001 } } } Now the problem is that when I am trying to decode the above JSON response in PHP using json_decode($res,true) { true param for associative array } I am getting the following result, as a few fields like "result":false are not "result":"false", i.e. in many places double quotes are missing in the values of the JSON. See the val1 and val3 fields in the resultant data after decoding in PHP (associative array) Array ( [status] => ACTIVE [result] => [isworking] => [margin] => 1 [employee] => Array ( [111] => Array ( [val1] => 5.7 [val2] => 9/2 [val3] => 5.7 ) [222] => Array ( [val1] => 31.55 [val2] => 29/1 [val3] => 31.55 ) ) ) Please help me on how I would insert double quotes into the values? Thanks

    Read the article

  • DataMapper Dates

    - by Ethan Turkeltaub
    Forgive me if this has a simple answer, but how do you get a Date from a DataMapper property? For example: require 'rubygems' require 'sinatra' require 'datamapper' class Test include DataMapper::Resource property :id, Serial property :created_at, Date end get '/:id' do test = Test.get(1) test.created_at = ? end

    Read the article

  • ServiceRoute + WebServiceHostFactory kills WSDL generation? How to create extensionless WCF service

    - by Ethan J. Brown
    I'm trying to use extensionless / .svc-less WCF services. Can anyone else confirm or deny the issue I'm experiencing? I use routing in code, and do this in Application_Start of global.asax.cs: RouteTable.Routes.Add(new ServiceRoute("Data", new WebServiceHostFactory(), typeof(DataDips))); I have tested in both IIS 6 and IIS 7.5 and I can use the service just fine (i.e. my extensionless handler is correctly configured for ASP.NET). However, metadata generation is totally screwed up. I can hit my /mex endpoint with the WCF Test Client (and I presume svcutil.exe) -- but the ?wsdl generation you typically get with .svc is toast. I can't hit it with a browser (get 400 bad request), I can't hit it with wsdl.exe, etc. Metadata generation is configured correctly in web.config. This is a problem of course, because the service is exposed as basicHttpBinding so that an old style ASMX client can get to it. But of course, the client can't generate the proxy without a WSDL description. If I instead use serviceActivation routing in config like this, rather than registering a route in code: <serviceHostingEnvironment aspNetCompatibilityEnabled="true"> <serviceActivations> <add relativeAddress="Data.svc" service="DataDips" /> </serviceActivations> </serviceHostingEnvironment> Then voila... it works. But then I don't have a clean extensionless url. If I change relativeAddress from Data.svc to Data, then I get a configuration exception as this is not supported by config. (Must use an extension registered to WCF). I've also attempted to use this code in conjunction with the above config: RouteTable.Routes.MapPageRoute("","Data/{*data}","~/Data.svc/{*data}",false); My thinking is that I can just point the extensionless url at the configured .svc url. This doesn't work -- the /Data.svc continues to work, but /Data returns a 404. Anyone with any bright ideas?

    Read the article

  • jQueryUI autocomplete - when no results are returned

    - by Brian M. Hunt
    I'm wondering how one can catch and add a custom handler when empty results are returned from the server when using jQueryUI autocomplete. There seem to be a few questions on this point related to the various jQuery plugins (e.g. jQuery autocomplete display “No data” error message when results empty), but I am wondering if there's a better/simpler way to achieve the same with the jQueryUI autocomplete. It seems to me this is a common use case, and I thought perhaps that jQueryUI had improved on the jQuery autocomplete by adding the ability to cleanly handle this situation. However I've not been able to find documentation of such functionality, and before I hack away at it I'd like to throw out some feelers in case others have seen this before. While probably not particularly influential, I can have the server return anything - e.g. HTTP 204: No Content to a 200/JSON empty list - whatever makes it easiest to catch the result in jQueryUI's autocomplete. My first thought is to pass a callback with two arguments, namely a request object and a response callback to handle the code, per the documentation: The third variation, the callback, provides the most flexibility, and can be used to connect any data source to Autocomplete. The callback gets two arguments: A request object, with a single property called "term", which refers to the value currently in the text input. For example, when the user entered "new yo" in a city field, the Autocomplete term will equal "new yo". A response callback, which expects a single argument to contain the data to suggest to the user. This data should be filtered based on the provided term, and can be in any of the formats described above for simple local data (String-Array or Object-Array with label/value/both properties). When the response callback receives no data, it instead returns a special one-line object-array that has a label and an indicator that there's no data (so the select/focus recognize it as the indicator that no-data was returned). This seems overcomplicated. I'd prefer to be able to use a source: "http://...", and just have a callback somewhere indicating that no data was returned. Thank you for reading. Brian

    Read the article

  • GWT and java.io.Serializable

    - by Ethan Leroy
    Hello, In my GWT app I have the following model class: import com.google.gwt.user.client.rpc.IsSerializable; public class TestEntity implements IsSerializable { public String testString; } This class implements the GWT custom IsSerializable marker interface - which I really don't like, because I use my model classes not only for GWT. So I prefer java.io.Serializable. But if I modify the class to implement Serializable instead of IsSerializable, the GWT RPC mechanism doesn't work anymore. I don't get an error on the server side, but on the client AsyncCallback.onFailure is invoked. I am using... GWT 1.7.0. Spring 2.5.6.SEC01 Spring and GWT are configured as described here.

    Read the article

  • Python-daemon doesn't kill its kids

    - by Brian M. Hunt
    When using python-daemon, I'm creating subprocesses like so: import multiprocessing class Worker(multiprocessing.Process): def __init__(self, queue): self.queue = queue # we wait for things from this in Worker.run() ... q = multiprocessing.Queue() with daemon.DaemonContext(): for i in xrange(3): Worker(q) while True: # let the Workers do their thing q.put(_something_we_wait_for()) When I kill the parent daemonic process (i.e. not a Worker) with a Ctrl-C or SIGTERM, etc., the children don't die. How does one kill the kids? My first thought is to use atexit to kill all the workers, like so: with daemon.DaemonContext(): workers = list() for i in xrange(3): workers.append(Worker(q)) @atexit.register def kill_the_children(): for w in workers: w.terminate() while True: # let the Workers do their thing q.put(_something_we_wait_for()) However, the children of daemons are tricky things to handle, and I'd be obliged for thoughts and input on how this ought to be done. Thank you.
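
    One approach worth considering, as a sketch only and assuming the workers list from the second snippet above, is to install a SIGTERM handler in the daemon parent that terminates and joins the workers before exiting:

        import signal
        import sys

        def make_reaper(workers):
            # Build a SIGTERM handler that shuts down the given worker processes.
            def reaper(signum, frame):
                for w in workers:
                    w.terminate()  # send SIGTERM to each multiprocessing.Process child
                for w in workers:
                    w.join()       # reap the children so no zombies are left behind
                sys.exit(0)
            return reaper

        # Inside the DaemonContext, once the workers list exists:
        # signal.signal(signal.SIGTERM, make_reaper(workers))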

    Read the article

  • Get signal names from numbers in Python

    - by Brian M. Hunt
    Is there a way to map a signal number (e.g. signal.SIGINT) to its respective name (i.e. "SIGINT")? I'd like to be able to print the name of a signal in the log when I receive it; however, I cannot find a map from signal numbers to names in Python, i.e. import signal def signal_handler(signum, frame): logging.debug("Received signal (%s)" % sig_names[signum]) signal.signal(signal.SIGINT, signal_handler) For some dictionary sig_names, so when the process receives SIGINT it prints: Received signal (SIGINT) Thank you.
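
    For what it's worth, one way to build such a sig_names dictionary is to read it off the signal module itself; a rough sketch (the SIG_ check skips constants like SIG_IGN and SIG_DFL, and aliases such as SIGABRT/SIGIOT share a number, so one name wins arbitrarily):

        import signal

        # Map signal numbers to names, e.g. {2: 'SIGINT', 15: 'SIGTERM', ...}
        sig_names = dict(
            (getattr(signal, name), name)
            for name in dir(signal)
            if name.startswith('SIG') and not name.startswith('SIG_')
        )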

    Read the article
