Search Results

Search found 2677 results on 108 pages for 'eren trigger'.

Page 100/108 | < Previous Page | 96 97 98 99 100 101 102 103 104 105 106 107  | Next Page >

  • Which network protocol to use for lightweight notification of remote apps (Delphi 2005)

    - by Chris Thornton
    I have this situation: client-initiated SOAP 1.1 communication between one server and, let's say, tens of thousands of clients. Clients are external, coming in through our firewall, authenticated by certificate, HTTPS, etc. They can be anywhere, and usually have their own firewalls, NAT routers, etc. They're truly external, not just remote corporate offices. They could be in a corporate/campus network, DSL/cable, even dialup.

    Currently, clients push new data to the server and pull new data from the server on a 15-minute polling loop. The server currently does not push data - the client hits the "messagecount" method to see if there is new data to pull. If 0, it sleeps for another 15 min and checks again. We're trying to get that down to 7 seconds.

    If this were an internal app, with one or just a few dozen clients, we'd write a client "listener" SOAP service and would push data to it. But since they're external, sit behind their own firewalls, and sometimes private networks behind NAT routers, this is not practical. So we're left with polling on a much quicker loop. 10K clients, each checking their messagecount every 10 seconds, is going to be 1000 messages/sec that will mostly just waste bandwidth, server, firewall, and authenticator resources. So I'm trying to design something better than what would amount to a self-inflicted DoS attack.

    I don't think it's practical to have the server send SOAP messages to the client (push), as this would require too much configuration at the client end. But I think there are alternatives that I don't know about. Such as:

    1) Is there a way for the client to make a request for GetMessageCount() via SOAP 1.1, get the response, and then perhaps "stay on the line" for 5-10 minutes to get additional responses in case new data arrives? I.e. the server says "0", then a minute later, in response to some SQL trigger (the server is C# on SQL Server, btw), knows that this client is still "on the line" and sends the updated message count of "5"?

    2) Is there some other protocol that we could use to "ping" the client, using information gathered from their last GetMessageCount() request?

    3) I don't even know. I guess I'm looking for some magic protocol where the client can send a GetMessageCount() request which would include info for "oh, by the way, in case the answer changes in the next hour, ping me at this address...".

    Also, I'm assuming that any of these "keep the line open" schemes would seriously impact the server sizing, as it would need to keep many thousands of connections open simultaneously. That would likely impact the firewalls too, I think. Is there anything out there like that? Or am I pretty much stuck with polling? TIA, Chris
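    What option 1 describes is essentially HTTP long polling. Purely as an illustration of the pattern - not Delphi or SOAP, and the /messagecount endpoint and wait parameter are hypothetical - a browser-style JavaScript sketch looks like this:

        // Minimal long-polling loop (sketch; endpoint and parameter are made up).
        // The server is expected to hold the request open until the message count
        // changes or its own timeout expires, then answer with JSON.
        function pollMessageCount() {
            fetch('/messagecount?wait=300')
                .then(function (resp) { return resp.json(); })
                .then(function (data) {
                    if (data.count > 0) {
                        pullNewData(); // hypothetical: fetch the pending messages
                    }
                    pollMessageCount(); // immediately re-arm the "open line"
                })
                .catch(function () {
                    // back off briefly on errors before reconnecting
                    setTimeout(pollMessageCount, 10000);
                });
        }

    The server-sizing concern raised above still applies: each waiting client holds one connection open, so the server and firewall must be sized for concurrent idle connections rather than request throughput.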

    Read the article

  • Need Google Map InfoWindow Hyperlink to Open Content in Overlay (Fusion Table Usage)

    - by McKev
    I have the following code established to render the map in my site. When the map is clicked, the info window pops up with a bunch of content, including a hyperlink to open up a website with a form in it. I would like to utilize a function like fancybox to open up this link "form" in an overlay. I have read that fancybox doesn't support calling the function from within an iframe, and was wondering if there was a way to pass the link data to the DOM and trigger the fancybox (or another overlay option) in another way? Maybe a callback trick - any tips would be much appreciated!

        <style>
        #map-canvas { width: 850px; height: 600px; }
        </style>
        <script type="text/javascript" src="http://maps.google.com/maps/api/js?sensor=true"></script>
        <script src="http://gmaps-utility-gis.googlecode.com/svn/trunk/fusiontips/src/fusiontips.js" type="text/javascript"></script>
        <script type="text/javascript">
        var map;
        var tableid = "1nDFsxuYxr54viD_fuH7fGm1QRZRdcxFKbSwwRjk";
        var layer;
        var initialLocation;
        var browserSupportFlag = new Boolean();
        var uscenter = new google.maps.LatLng(37.6970, -91.8096);

        function initialize() {
          map = new google.maps.Map(document.getElementById('map-canvas'), {
            zoom: 4,
            mapTypeId: google.maps.MapTypeId.ROADMAP
          });
          layer = new google.maps.FusionTablesLayer({
            query: { select: "'Geometry'", from: tableid },
            map: map
          });
          // http://gmaps-utility-gis.googlecode.com/svn/trunk/fusiontips/docs/reference.html
          layer.enableMapTips({
            select: "'Contact Name','Contact Title','Contact Location','Contact Phone'",
            from: tableid,
            geometryColumn: 'Geometry',
            suppressMapTips: false,
            delay: 500,
            tolerance: 8
          });

          // Try W3C Geolocation (preferred)
          if (navigator.geolocation) {
            browserSupportFlag = true;
            navigator.geolocation.getCurrentPosition(function(position) {
              initialLocation = new google.maps.LatLng(position.coords.latitude, position.coords.longitude);
              map.setCenter(initialLocation);
              // Custom marker
              var pinColor = "A83C0A";
              var pinImage = new google.maps.MarkerImage(
                "http://chart.apis.google.com/chart?chst=d_map_pin_letter&chld=%E2%80%A2|" + pinColor,
                new google.maps.Size(21, 34),
                new google.maps.Point(0, 0),
                new google.maps.Point(10, 34));
              var pinShadow = new google.maps.MarkerImage(
                "http://chart.apis.google.com/chart?chst=d_map_pin_shadow",
                new google.maps.Size(40, 37),
                new google.maps.Point(0, 0),
                new google.maps.Point(12, 35));
              new google.maps.Marker({ position: initialLocation, map: map, icon: pinImage, shadow: pinShadow });
            }, function() {
              handleNoGeolocation(browserSupportFlag);
            });
          } else {
            // Browser doesn't support geolocation
            browserSupportFlag = false;
            handleNoGeolocation(browserSupportFlag);
          }

          function handleNoGeolocation(errorFlag) {
            if (errorFlag == true) {
              // Geolocation service failed
              initialLocation = uscenter;
            } else {
              // Browser doesn't support geolocation
              initialLocation = uscenter;
            }
            map.setCenter(initialLocation);
          }
        }
        google.maps.event.addDomListener(window, 'load', initialize);
        </script>
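    One commonly suggested pattern (a sketch, not a verified recipe - the fancybox-form class is an assumption, and it presumes fancybox v2 and jQuery 1.7+) is to mark the links emitted in the info window HTML and bind a delegated click handler at the document level, so the handler still fires for links injected after page load:

        // Delegated handler: catches clicks on links rendered inside info
        // windows, even though they are added to the DOM after page load.
        $(document).on('click', 'a.fancybox-form', function (e) {
            e.preventDefault();
            $.fancybox.open({
                href: this.href,  // pass the link target to the overlay
                type: 'iframe'    // load the form page inside an iframe overlay
            });
        });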

    Read the article

  • I get a `Cannot read property 'slice' of undefined` message when I use the scrollTo jQuery plugin inside this function

    - by alexchenco
    I'm using the jQuery scrollTo plugin. I get this error in my JS console when I place `$(".menu").scrollTo( $("li.matched").attr("id"), 800 );` inside the function below:

        Uncaught TypeError: Cannot read property 'slice' of undefined   index.html.js:16827
        d.fn.scrollTo                                                   index.html.js:16827
        jQuery.extend.each                                              index.html.js:662
        d.fn.scrollTo                                                   index.html.js:16827
        jQuery.extend.each                                              index.html.js:662
        jQuery.fn.jQuery.each                                           index.html.js:276
        d.fn.scrollTo                                                   index.html.js:16827
        popupPlace                                                      index.html.js:18034
        (anonymous function)                                            index.html.js:17745
        jQuery.extend._Deferred.deferred.resolveWith                    index.html.js:1018
        done                                                            index.html.js:7247
        jQuery.ajaxTransport.send.script.onload.script.onreadystatechange

    Here is the function:

        function popupPlace(dict) {
          $popup = $('div#dish-popup');
          $popup.render(dict, window.dishPopupTemplate);
          if (typeof(dict.dish) === 'undefined') {
            $popup.addClass('place-only');
          } else {
            $popup.removeClass('place-only');
          }
          var $place = $('div#dish-popup div.place');
          var place_id = dict.place._id;
          if (liked[place_id]) {
            $place.addClass('liked');
          } else {
            $place.removeClass('liked');
          }
          if (dict.place.likes) {
            $place.addClass('has-likes');
          } else {
            $place.addClass('zero-likes');
          }
          var tokens = window.currentSearchTermTokens;
          var tokenRegex = tokens && new RegExp($.map(tokens, RegExp.escape).join('|'), 'gi');
          $.each(dict.place.products, function(n, product) {
            $product = $('#menu-item-' + product.id);
            if (liked[place_id + '/' + product.id]) {
              $product.addClass('liked');
            }
            if (tokens && matchesDish(product, tokens)) {
              $product.addClass('matched');
              $product.highlight(tokenRegex);
            } else {
              $product.removeClass('matched');
              $product.removeHighlight();
            }
            if (product.likes) {
              $product.addClass('has-likes');
            } else {
              $product.addClass('zero-likes');
            }
          });
          $('#overlay').show();
          $('#dish-popup-container').show();
          // Scroll to matched dish
          //$("a#scrolll").attr("href", "#" + $("li.matched").attr("id"));
          //$("a#scrolll").trigger("click");
          $(".menu").scrollTo( $("li.matched").attr("id"), 800 );
          // Hide dish results on mobile devices to prevent having a blank space at the bottom of the site
          if (Modernizr.mq('only screen and (max-width: 640px)')) {
            $('ol.results').hide();
          }
          $(".close-dish-popup").click(function() {
            $("#overlay").hide();
            $("#dish-popup-container").hide();
            $('ol.results').show();
            changeState({}, ['dish', 'place', 'serp']);
          });
          showPopupMap(dict.place, "dish-popup-map");
        }

    Any suggestions on how to fix this?
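    A likely cause, judging from the trace (an assumption, not confirmed): when no li.matched element exists, $("li.matched").attr("id") returns undefined, and scrollTo ends up calling slice on an undefined selector. Note also that scrollTo expects a selector or element, so a bare id needs a "#" prefix. A minimal guard might look like:

        // Only scroll when a matched item actually exists, and pass a
        // proper selector string rather than a bare id.
        var $matched = $('li.matched').first();
        if ($matched.length) {
            $('.menu').scrollTo('#' + $matched.attr('id'), 800);
        }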

    Read the article

  • I have to do two seemingly mutually exclusive things on leaving an asp:textbox. Please help me get

    - by aape
    This project has gone from being a simple '99 Ford F-150 to the Homer. I've got controls with a gridview with textboxes for data entry. All the user controls on the pages are in AJAX UpdatePanels. The user types in a database column or budget entity or some other financial thing they want to include in the report. The textboxes in the gridview have autopostback = true set.

    Overly long background info: when the user leaves the textbox, during the postback (triggered by onTextChanged) I do some validation back on the server on their entry - regexes, do they have rights to that column, is that column locked, etc. If it fails, I put an error message next to the textbox. If it passes, I wipe out any title or error that used to be next to the code.

    Focus is getting lost from the postback if they're tabbing out of the box, rather than going to the next textbox in the gridview. So to fix that I need, if they're leaving the textbox via the Tab key, to also figure out what textbox or gridview row they're on, and if they're not on the last row, after the validation and labeling, put the focus on the textbox in the next row. I can't figure out how, in onTextChanged, to find what caused me to leave the textbox, so I'm thinking of using a JavaScript onkeyup to test the key pressed and then find the next box, etc. But the onTextChanged fires first and then the JS never does. Also, since the control is all AJAXed, the JavaScript can't find the textboxes, because when you enter the page everything is collapsed (the requirements people loooove to collapse and expand things), and so when it's expanded, all the 'new' textboxes are up in the viewstate stuff in the page source, and not down where JavaScript can see them.

    The questions: So I'm wondering if I can have an onblur in the JavaScript that can trigger a postback where I can do my validation and such, and either 1) include the key pressed, or pick it out of the sender in the event, or 2) follow up the onblur with onkeyup and somehow figure out what textbox is next on the grid and throw focus there. Or is there another .NET-based approach that could work for this? In terms of tearing the whole thing down and starting from scratch, I couldn't sell that to the bosses; I'm past the point of no return as far as that goes.
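    For the "which key caused the blur" part, one possible approach (a sketch only - the next-textbox lookup is a placeholder, and it assumes the standard ASP.NET AJAX client API is available) is to record the Tab key on keydown, before the change/postback fires, and restore focus after the partial postback completes:

        // Record whether Tab is what moved focus away, before onTextChanged posts back.
        var leftByTab = false;
        document.addEventListener('keydown', function (e) {
            leftByTab = (e.keyCode === 9); // 9 = Tab
        }, true);

        // After the UpdatePanel partial postback finishes, restore focus.
        // Finding "the textbox in the next row" is left as a placeholder,
        // since it depends on the grid's generated client IDs.
        Sys.WebForms.PageRequestManager.getInstance().add_endRequest(function () {
            if (leftByTab) {
                leftByTab = false;
                var next = document.getElementById('nextTextBoxClientId'); // hypothetical
                if (next) { next.focus(); }
            }
        });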

    Read the article

  • Maximum nametable char count exceeded

    - by doc
    I'm having issues with the maximum nametable char count quota. I followed a couple of answers here and it solved the problem for a while, but now I'm having the same issue. My server-side config is as follows:

        <system.serviceModel>
          <bindings>
            <netTcpBinding>
              <binding name="GenericBinding" maxBufferPoolSize="2147483647"
                       maxBufferSize="2147483647" maxReceivedMessageSize="2147483647">
                <readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647"
                              maxArrayLength="2147483647" maxBytesPerRead="2147483647"
                              maxNameTableCharCount="2147483647" />
                <security mode="None" />
              </binding>
            </netTcpBinding>
          </bindings>
          <behaviors>
            <serviceBehaviors>
              <behavior>
                <serviceMetadata httpGetEnabled="false" />
                <serviceDebug includeExceptionDetailInFaults="true" />
                <dataContractSerializer maxItemsInObjectGraph="1000000" />
              </behavior>
            </serviceBehaviors>
          </behaviors>
          <services>
            <service name="REMWCF.RemWCFSvc">
              <endpoint address="" binding="netTcpBinding" contract="REMWCF.IRemWCFSvc"
                        bindingConfiguration="GenericBinding" />
              <endpoint address="mex" binding="mexTcpBinding" contract="IMetadataExchange" />
              <host>
                <baseAddresses>
                  <add baseAddress="net.tcp://localhost:9081/RemWCFSvc" />
                </baseAddresses>
              </host>
            </service>
          </services>
        </system.serviceModel>

    I also have the same TCP binding in the devenv configuration. Have I reached the limit of contracts supported? Is there a way to turn off that quota?

    EDIT - error message:

        Error: Cannot obtain Metadata from net.tcp://localhost:9081/RemWCFSvc/mex
        If this is a Windows (R) Communication Foundation service to which you have access, please check
        that you have enabled metadata publishing at the specified address. For help enabling metadata
        publishing, please refer to the MSDN documentation at http://go.microsoft.com/fwlink/?LinkId=65455.
        WS-Metadata Exchange Error URI: net.tcp://localhost:9081/RemWCFSvc/mex
        Metadata contains a reference that cannot be resolved: 'net.tcp://localhost:9081/RemWCFSvc/mex'.
        There is an error in the XML document.
        The maximum nametable character count quota (16384) has been exceeded while reading XML data.
        The nametable is a data structure used to store strings encountered during XML processing -
        long XML documents with non-repeating element names, attribute names and attribute values may
        trigger this quota. This quota may be increased by changing the MaxNameTableCharCount property
        on the XmlDictionaryReaderQuotas object used when creating the XML reader.

    I'm getting that error when trying to run the WCF service (which is hosted in a Windows service app).

    Read the article

  • Issue with blocking the UI during an onchange request - it prevents other events from firing.

    - by jfrobishow
    I am having issues with the jQuery blockUI plugin and firing two events that are (I think, unless I am losing it) unrelated.

    Basically I have textboxes with onchange events bound to them. The event is responsible for blocking the UI, doing the ajax call, and on success unblocking the UI. The ajax call is saving the text in memory. The other control is a button with an onclick event which also blocks the UI, fires an ajax request saving what's in memory to the database, and on success unblocks the UI.

    Both of these work fine separately. The issue arises when I trigger the onchange by clicking on the button: then only the onchange is fired and the onclick is ignored. I can change the text in the textbox, click on the link, and IF jQuery.blockUI() is present, the onchange alone is fired and the save is never called. If I remove the blockUI, both functions are called.

    Here's a fully working example where you can see the issue. Please note the setTimeout calls are left over from when I was trying to simulate the ajax delay, but the issue happens without them.

        <html>
        <head>
          <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script>
          <script src="http://github.com/malsup/blockui/raw/master/jquery.blockUI.js?v2.31"></script>
          <script>
            function doSomething(){
              $.blockUI();
              alert("doing something");
              //setTimeout(function(){
                $.unblockUI();
              //},500);
            }

            function save(){
              $.blockUI();
              //setTimeout(function(){
                alert("saving");
                $.unblockUI();
              //}, 1000);
            }
          </script>
        </head>
        <body>
          <input type="text" onchange="doSomething();">
          <a href="#" onclick="save()">save</a>
        </body>
        </html>
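    One plausible explanation (an assumption, but consistent with the symptoms): onchange fires on blur, before the link's click has completed, and the overlay element blockUI inserts at that moment swallows the mouseup, so the click never materializes. A sketch of one workaround is to defer the blocking slightly so the in-flight click can land first; binding the save to mousedown instead of click is another commonly suggested variant:

        function doSomething() {
            // Defer the overlay so a click already in progress on the
            // save link can complete before the page is blocked.
            setTimeout(function () {
                $.blockUI();
                // ... ajax call here, with $.unblockUI() in its success callback ...
                $.unblockUI();
            }, 100);
        }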

    Read the article

  • Multiple Timers with setInterval

    - by visibleinvisibly
    I am facing a problem with setInterval being used in a loop. I have a function subscribeFeed() which takes an array of URLs as input. It loops through the URL array and subscribes each URL to getFeedAutomatically() using a setInterval function. So if three URLs are in the array, then three setIntervals will be called.

    The problems are:

    1) How to distinguish which setInterval is called for which URL?
    2) It is causing a runtime exception in setInterval (I guess because of a closure problem in JavaScript).

        // constructor
        function myfeed() {
          this.feedArray = [];
        }
        myfeed.prototype.constructor = myfeed;

        myfeed.prototype.subscribeFeed = function(feedUrl) {
          var i = 0;
          var url;
          var count = 0;
          var _this = this;
          var feedInfo = {
            url: [],
            status: ""
          };
          var urlinfo = [];
          feedUrl = (feedUrl instanceof Array) ? feedUrl : [feedUrl];
          //notifyInterval = (notifyInterval instanceof Array) ? notifyInterval : [notifyInterval];
          for (i = 0; i < feedUrl.length; i++) {
            urlinfo[i] = {
              url: '',
              notifyInterval: 5000, // default notify/refresh interval for the feed
              isenable: true,       // true allows the feed to be fetched from the URL
              timerID: null,        // default ID is null
              called: false,
              position: 0,
              getFeedAutomatically: function(url) {
                _this.getFeedUpdate(url);
              }
            };
            urlinfo[i].url = feedUrl[i].URL;
            // override the default notify interval
            if (feedUrl[i].NotifyInterval /*&& (feedUrl[i] != undefined)*/) {
              urlinfo[i].notifyInterval = feedUrl[i].NotifyInterval;
            }
            // trigger the feed-registered event with the info about URL and status
            feedInfo.url[i] = feedUrl[i].URL;
            // set the interval to get the feed
            urlinfo[i].timerID = setInterval(function() {
              urlinfo[i].getFeedAutomatically(urlinfo[i].url);
            }, urlinfo[i].notifyInterval);
            this.feedArray.push(urlinfo[i]);
          }
        }

        // The getFeedUpdate function will make an Ajax request and continue
        myfeed.prototype.getFeedUpdate = function( ) {
        }

    I am posting the same on jsfiddle: http://jsfiddle.net/visibleinvisibly/S37Rj/ Thanks in advance.
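    The runtime error is the classic loop/closure problem: every timer callback closes over the same variable i, which has already advanced to feedUrl.length by the time any timer fires, so urlinfo[i] is undefined. One standard fix (a sketch reusing the poster's own names) is to capture the current entry per iteration with an immediately invoked function; this also answers question 1, since each callback then knows exactly which URL it belongs to:

        for (i = 0; i < feedUrl.length; i++) {
            // ... build urlinfo[i] exactly as before ...
            (function (entry) {
                // 'entry' is fixed per iteration, so each timer sees its
                // own URL instead of whatever i points at later.
                entry.timerID = setInterval(function () {
                    entry.getFeedAutomatically(entry.url);
                }, entry.notifyInterval);
            })(urlinfo[i]);
        }

    Each entry keeps its own timerID, so a specific feed's polling can later be stopped with clearInterval(entry.timerID).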

    Read the article

  • jQuery draggable + toggleClass problem

    - by vrynxzent
    The problem here is that the toggled class's positioning (top: 0px; left: 0px) will not take effect - only the width, height, and background-color are applied. It works if I don't drag the div (draggable); once I start to drag the element, the toggled class's positioning no longer has any effect. I don't know if there's a function in jQuery to help with this.

        <html>
        <head>
          <title></title>
          <script type="text/javascript" src="jquery-1.4.2.js"></script>
          <script type="text/javascript" src="jquery.ui.core.js"></script>
          <script type="text/javascript" src="jquery.ui.widget.js"></script>
          <script type="text/javascript" src="jquery.ui.mouse.js"></script>
          <script type="text/javascript" src="jquery.ui.resizable.js"></script>
          <script type="text/javascript" src="jquery.ui.draggable.js"></script>
          <script type="text/javascript">
            $(document).ready(function(){
              $("#x").draggable().dblclick(function(){
                $(this).toggleClass("hi");
              });
            });
          </script>
          <style>
            .hello {
              background: red;
              width: 200px;
              height: 200px;
              position: relative;
              top: 100px;
              left: 100px;
            }
            .hi {
              background: yellow;
              position: relative;
              width: 300px;
              height: 300px;
              top: 0px;
              left: 0px;
            }
          </style>
        </head>
        <body>
          <div id="x" class="hello"></div>
        </body>
        </html>
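    A likely cause (an assumption, but it matches the symptoms): jQuery UI's draggable positions the element by writing inline top/left styles during the drag, and inline styles always win over rules from a toggled class. A minimal sketch of a workaround is to clear those inline properties when toggling:

        $(document).ready(function () {
            $("#x").draggable().dblclick(function () {
                $(this)
                    .toggleClass("hi")
                    // remove the inline top/left written by draggable so the
                    // class's own top/left can take effect again
                    .css({ top: "", left: "" });
            });
        });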

    Read the article

  • EJB / JSF java.lang.ClassNotFoundException: com.ericsantanna.jobFC.dao.DAOFactoryRemote from [Module "com.sun.jsf-impl:main" from local module loader

    - by Eric Sant'Anna
    This is my first time using EJB and JSF, and I can't resolve this:

        20:23:12,457 Grave [javax.enterprise.resource.webcontainer.jsf.application] (http-localhost-127.0.0.1-8081-2)
        com.ericsantanna.jobFC.dao.DAOFactoryRemote from [Module "com.sun.jsf-impl:main" from local module loader
        @439db2b2 (roots: C:\jboss-as-7.1.1.Final\modules)]: java.lang.ClassNotFoundException:
        com.ericsantanna.jobFC.dao.DAOFactoryRemote from [Module "com.sun.jsf-impl:main" from local module loader
        @439db2b2 (roots: C:\jboss-as-7.1.1.Final\modules)]

    I'm getting this when I perform an action like a selectOneMenu change or a commandButton click.

    DAOFactory.class:

        @Singleton
        @Remote(DAOFactoryRemote.class)
        public class DAOFactory implements DAOFactoryRemote {

            private static final long serialVersionUID = 6030538139815885895L;

            @PersistenceContext
            private EntityManager entityManager;

            @EJB
            private JobDAORemote jobDAORemote;

            /**
             * Default constructor.
             */
            public DAOFactory() {
                // TODO Auto-generated constructor stub
            }

            @Override
            public JobDAORemote getJobDAO() {
                JobDAO jobDAO = (JobDAO) jobDAORemote;
                jobDAO.setEntityManager(entityManager);
                return jobDAO;
            }

    JobDAO.class:

        @Stateless
        @Remote(JobDAORemote.class)
        public class JobDAO implements JobDAORemote {

            private static final long serialVersionUID = -5483992924812255349L;

            private EntityManager entityManager;

            /**
             * Default constructor.
             */
            public JobDAO() {
                // TODO Auto-generated constructor stub
            }

            @Override
            public void insert(Job t) {
                entityManager.persist(t);
            }

            @Override
            public Job findById(Class<Job> classe, Long id) {
                return entityManager.getReference(classe, id);
            }

            @Override
            public Job findByName(Class<Job> clazz, String name) {
                return entityManager
                    .createQuery("SELECT job FROM " + clazz.getName() + " job WHERE job.nome = :nome", Job.class)
                    .setParameter("name", name)
                    .getSingleResult();
            }
            ...

    TriggerFormBean.class:

        @ManagedBean
        @ViewScoped
        @Stateless
        public class TriggerFormBean implements Serializable {

            private static final long serialVersionUID = -3293560384606586480L;

            @EJB
            private DAOFactoryRemote daoFactory;

            @EJB
            private TriggerManagerRemote triggerManagerRemote;
            ...

    triggerForm.xhtml (the portion with the problem):

        </p:layoutUnit>
        <p:layoutUnit id="eastConditionPanel" position="center" size="50%">
          <p:panel header="Conditions to Release" style="width:97%;height:97%;">
            <h:panelGrid columns="2" cellpadding="3">
              <h:outputLabel value="Condition Name:" for="conditionName" />
              <p:inputText id="conditionName" value="#{triggerFormBean.newCondition.name}" />
            </h:panelGrid>
            <p:commandButton value="Add Condition" update="conditionsToReleaseList"
                             id="addConditionToRelease" actionListener="#{triggerFormBean.addNewCondition}" />
            <p:orderList id="conditionsToReleaseList" value="#{triggerFormBean.trigger.conditionsToRelease}"
                         var="condition" controlsLocation="none" itemLabel="#{condition.name}"
                         itemValue="#{condition}" iconOnly="true" style="width:97%;heigth:97%;"/>
          </p:panel>
        </p:layoutUnit>

    In TriggerFormBean.class, if I comment out daoFactory I get the same exception for triggerManagerRemote; both are annotated with @EJB. I don't understand the relationship between my DAOFactory and the module "com.sun.jsf-impl:main"... Thanks.

    Read the article

  • Visual Studio not recognizing "BuildStep"

    - by AmbiguousX
    I'm trying to add an automatic post-build trigger to run NDepend after an automated team build in TFS 2010. NDepend's website provided code for integrating this capability, and so I have pasted their code into my .csproj file where they said for it to go, but I receive errors on the build. The errors refer to two of the three BuildStep tags I have in the code snippet. The following two snippets are giving me errors:

        <BuildStep TeamFoundationServerUrl="$(TeamFoundationServerUrl)"
                   BuildUri="$(BuildUri)"
                   Message="Running NDepend analysis">
          <Output TaskParameter="Id" PropertyName="StepId" />
        </BuildStep>

    and

        <BuildStep TeamFoundationServerUrl="$(TeamFoundationServerUrl)"
                   BuildUri="$(BuildUri)"
                   Id="$(StepId)"
                   Status="Failed" />

    However, this code snippet is NOT throwing up any problems:

        <BuildStep TeamFoundationServerUrl="$(TeamFoundationServerUrl)"
                   BuildUri="$(BuildUri)"
                   Id="$(StepId)"
                   Status="Succeeded" />

    I just don't understand why one works fine and a nearly identically laid out BuildStep tag does not. Is there something simple that I'm just overlooking?

    EDIT: Here is how it looks all together, if this makes a difference:

        <Target Name="NDepend">
          <PropertyGroup>
            <NDPath>c:\tools\NDepend\NDepend.console.exe</NDPath>
            <NDProject>$(SolutionDir)MyProject.ndproj</NDProject>
            <NDOut>$(TargetDir)NDepend</NDOut>
            <NDIn>$(TargetDir)</NDIn>
          </PropertyGroup>
          <Exec Command='"$(NDPath)" "$(NDProject)" /OutDir "$(NDOut)" /InDirs "$(NDIn)"'/>
        </Target>

        <Target Name="AfterBuild">
          <BuildStep TeamFoundationServerUrl="$(TeamFoundationServerUrl)"
                     BuildUri="$(BuildUri)"
                     Message="Running NDepend analysis">
            <Output TaskParameter="Id" PropertyName="StepId" />
          </BuildStep>
          <PropertyGroup>
            <NDPath>c:\tools\NDepend\NDepend.console.exe</NDPath>
            <NDProject>$(SolutionRoot)\Main\src\MyProject.ndproj</NDProject>
            <NDOut>$(BinariesRoot)\NDepend</NDOut>
            <NDIn>$(BinariesRoot)\Release</NDIn>
          </PropertyGroup>
          <Exec Command='$(NDPath) "$(NDProject)" /OutDir "$(NDOut)" /InDirs "$(NDIn)"'/>
          <BuildStep TeamFoundationServerUrl="$(TeamFoundationServerUrl)"
                     BuildUri="$(BuildUri)"
                     Id="$(StepId)"
                     Status="Succeeded" />
          <OnError ExecuteTargets="MarkBuildStepAsFailed" />
        </Target>

        <Target Name="MarkBuildStepAsFailed">
          <BuildStep TeamFoundationServerUrl="$(TeamFoundationServerUrl)"
                     BuildUri="$(BuildUri)"
                     Id="$(StepId)"
                     Status="Failed" />
        </Target>

    EDIT: Added a bounty because I really need to get this going for my team. Thank you in advance!

    Read the article

  • iPad web app: Prevent input focus AFTER ajax call

    - by Mike Barwick
    So I've read around and can't for the life of me figure out how to solve my issue effectively. In short, I have a web app built for the iPad, which works as it should. However, I have an ajax form which also submits as it should - but after the callback, when I clear/reset my form, the iPad automatically focuses on an input and opens the keyboard again. This is far from ideal.

    I managed to hack my way around it, but it's still not perfect. The code below is run in my ajax callback, and it works - except there's still a flash of the keyboard quickly opening and closing. Note that my code won't work unless I use setTimeout. Also, from my understanding, document.activeElement.blur() only works when there's a click event, so I triggered one via JS.

    IN OTHER WORDS, HOW DO I PREVENT THE KEYBOARD FROM REOPENING AFTER AN AJAX CALL IN WEB APP MODE?

    PS: The ajax call works fine and doesn't open the keyboard in Safari on the iPad, just in web app mode. Here's my code:

        hideKeyboard: function () {
            // iOS web app only, iPad
            IS_IPAD = navigator.userAgent.match(/iPad/i) != null;
            if (IS_IPAD) {
                $(window).one('click', function () {
                    document.activeElement.blur();
                });
                setTimeout(function () {
                    $(window).trigger('click');
                }, 500);
            }
        }

    Maybe it's related to how I'm clearing my forms, so here's that code. Note that all inputs have tabindex="-1" as well.

        clearForm: function () {
            // text, textarea, etc
            $('#campaign-form-wrap > form')[0].reset();
            // checkboxes
            $('input[type="checkbox"]').removeAttr('checked');
            $('#campaign-form-wrap > form span.custom.checkbox').removeClass('checked');
            // radio inputs
            $('input[type="radio"]').removeAttr('checked');
            $('#campaign-form-wrap > form span.custom.radio').removeClass('checked');
            // selects
            $('form.custom .user-generated-field select').each(function () {
                var selection = $(this).find('option:first').text(),
                    labelFor = $(this).attr('name'),
                    label = $('[for="' + labelFor + '"]');
                label.find('.selection-choice').html(selection);
            });
            optin.hideKeyboard();
        }
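    One thing worth trying (a sketch, under the assumption that the form reset is what re-focuses an input): drop focus synchronously at the top of the ajax callback, before the form is touched, instead of simulating a click afterwards:

        clearForm: function () {
            // Blur first, so nothing is focused while the form is reset
            // and the keyboard has no input to attach to.
            if (document.activeElement && document.activeElement !== document.body) {
                document.activeElement.blur();
            }
            $('#campaign-form-wrap > form')[0].reset();
            // ... rest of the reset logic unchanged ...
        }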

    Read the article

  • Performance - FunctionCall vs Event vs Action vs Delegate

    - by hwcverwe
    Currently I am using Microsoft Sync Framework to synchronize databases. I need to gather information per record which is inserted/updated/deleted by Microsoft Sync Framework and do something with this information. The sync speed can go over 50,000 records per minute, so my additional code needs to be very lightweight or it will be a huge performance penalty.

    Microsoft Sync Framework raises a SyncProgress event for each record. I am subscribed to it like this:

        // Assembly1
        SyncProvider.SyncProgress += OnSyncProgress;

        // ....

        private void OnSyncProgress(object sender, DbSyncProgressEventArgs e)
        {
            switch (args.Stage)
            {
                case DbSyncStage.ApplyingInserts:
                    // MethodCall/Delegate/Action<>/EventHandler<> => HandleInsertedRecordInformation
                    // Do something with inserted record info
                    break;
                case DbSyncStage.ApplyingUpdates:
                    // MethodCall/Delegate/Action<>/EventHandler<> => HandleUpdatedRecordInformation
                    // Do something with updated record info
                    break;
                case DbSyncStage.ApplyingDeletes:
                    // MethodCall/Delegate/Action<>/EventHandler<> => HandleDeletedRecordInformation
                    // Do something with deleted record info
                    break;
            }
        }

    Somewhere else, in another assembly, I have three methods:

        // Assembly2
        public class SyncInformation
        {
            public void HandleInsertedRecordInformation(...) {...}
            public void HandleUpdatedRecordInformation(...) {...}
            public void HandleDeletedRecordInformation(...) {...}
        }

    Assembly2 has a reference to Assembly1, so Assembly1 does not know anything about the existence of the SyncInformation class which needs to handle the gathered information. That gives me the following options to trigger this code:

    1. Use events and subscribe to them in Assembly2, via:
       1.1. EventHandler<>
       1.2. Action<>
       1.3. delegates
    2. Use dependency injection: public class Assembly2.SyncInformation : Assembly1.ISyncInformation
    3. Other?

    I know the performance depends on the OnSyncProgress switch, on whether I use a method call, delegate, Action<>, or EventHandler<>, and on the implementation of the SyncInformation class. I currently don't care about the implementation of the SyncInformation class; I am mainly focused on the OnSyncProgress method and how to call the SyncInformation methods.

    So my questions are:

    - What is the most efficient approach?
    - What is the most inefficient approach?
    - Is there a better way than using a switch in OnSyncProgress?

    Read the article

  • Java Flow Control Problem

    - by Kyle_Solo
    I am programming a simple 2D game engine. I've decided how I'd like the engine to function: it will be composed of objects containing "events" that my main game loop will trigger when appropriate.

    A little more about the structure: every GameObject has an updateEvent method. objectList is a list of all the objects that will receive update events; only objects on this list have their updateEvent method called by the game loop. I'm trying to implement this method in the GameObject class (this specification is what I'd like the method to achieve):

        /**
         * This method removes a GameObject from objectList. The GameObject
         * should immediately stop executing code, that is, absolutely no more
         * code inside update events will be executed for the removed game object.
         * If necessary, control should transfer to the game loop.
         * @param go The GameObject to be removed
         */
        public void remove(GameObject go)

    So if an object tries to remove itself inside of an update event, control should transfer back to the game engine:

        public void updateEvent() {
            // object's update event
            remove(this);
            System.out.println("Should never reach here!");
        }

    Here's what I have so far. It works, but the more I read about using exceptions for flow control the less I like it, so I want to see if there are alternatives.

    The remove method:

        public void remove(GameObject go) {
            // add to removedList
            // flag as removed
            // throw an exception if removing self from inside an updateEvent
        }

    The game loop:

        for (GameObject go : objectList) {
            try {
                if (!go.removed) {
                    go.updateEvent();
                } else {
                    // object is scheduled to be removed, do nothing
                }
            } catch (ObjectRemovedException e) {
                // control has been transferred back to the game loop
                // no need to do anything here
            }
        }
        // now remove the objects that are in removedList from objectList

    Two questions:

    1. Am I correct in assuming that the only way to implement the stop-right-away part of the remove method as described above is by throwing a custom exception and catching it in the game loop? (I know, using exceptions for flow control is like goto, which is bad. I just can't think of another way to do what I want!)
    2. For the removal from the list itself, it is possible for one object to remove one that is farther down on the list. Currently I'm checking a removed flag before executing any code, and at the end of each pass removing the objects to avoid concurrent modification. Is there a better, preferably instant/non-polling way to do this?

    Read the article

  • SQL Cartesian product joining table to itself and inserting into existing table

    - by Emma
    I am working in phpMyAdmin using SQL. I want to take the primary key (EntryID) from TableA and create a cartesian product (if I am using the term correctly) in TableB (an empty table already created) for all entries which share the same value for FieldB in TableA, except where TableA.EntryID equals TableA.EntryID.

    So, for example, if the values in TableA were:

        TableA.EntryID    TableA.FieldB
        1                 23
        2                 23
        3                 23
        4                 25
        5                 25
        6                 25

    The result in TableB would be:

        Primary key    EntryID1    EntryID2    FieldD (default or manually entered)
        1              1           2           Default value
        2              1           3           Default value
        3              2           1           Default value
        4              2           3           Default value
        5              3           1           Default value
        6              3           2           Default value
        7              4           5           Default value
        8              4           6           Default value
        9              5           4           Default value
        10             5           6           Default value
        11             6           4           Default value
        12             6           5           Default value

    I am used to working in Access and this is the first query I have attempted in SQL. I started trying to work out the query and got this far. I know it's not right yet, as I'm still trying to get used to the syntax and pieced this together from various articles I found online. In particular, I wasn't sure where the INSERT INTO text went (to create what would be an append query in Access):

        SELECT EntryID
        FROM TableA.EntryID TableA.EntryID
        WHERE TableA.FieldB = TableA.FieldB
        TableA.EntryID <> TableA.EntryID
        INSERT INTO TableB.EntryID1 TableB.EntryID2

    After I've got that query right, I need to do a TRIGGER query (I think), so that if an entry changes its value in TableA.FieldB (changing its membership of that grouping to another grouping), the cartesian product will be re-run on THAT entry, unless TableB.FieldD = valueA or valueB (manually entered values).

    I have been using the Designer tab. Does there have to be a relationship link between TableA and TableB? If so, would it be two links from the EntryID primary key in TableA, one to each EntryID in TableB? I assume this would not work because they are numbered EntryID1 and EntryID2 and the name needs to be the same to set up a relationship. If you can offer any suggestions, I would be very grateful.

    Research: http://www.fluffycat.com/SQL/Cartesian-Joins/ - Cartesian join example two. Q: You said you can have a Cartesian join by joining a table to itself. Show that!

        Select * From Film_Table T1, Film_Table T2;

    Read the article

  • How can I make a Google Maps icon always appear in the center of the map when clicked?

    - by JHM_67
    For simplicity's sake, let's use the XML example on Econym's site: http://econym.org.uk/gmap/example_map3.htm

    Once clicked, I would like the icon's info balloon to be displayed in the middle of the map. What might I need to add to Mike's code to get this to work? I apologize for asking a lot. Thanks in advance.

        <script type="text/javascript">
        //<![CDATA[
        if (GBrowserIsCompatible()) {
          // side_bar
          var side_bar_html = "";
          var gmarkers = [];

          function createMarker(point, name, html) {
            var marker = new GMarker(point);
            GEvent.addListener(marker, "click", function() {
              marker.openInfoWindowHtml(html);
            });
            gmarkers.push(marker);
            side_bar_html += '<a href="javascript:myclick(' + (gmarkers.length-1) + ')">' + name + '<\/a><br>';
            return marker;
          }

          function myclick(i) {
            GEvent.trigger(gmarkers[i], "click");
          }

          var map = new GMap2(document.getElementById("map"));
          map.addControl(new GLargeMapControl());
          map.addControl(new GMapTypeControl());
          map.setCenter(new GLatLng(43.907787, -79.359741), 9);

          GDownloadUrl("example.xml", function(doc) {
            var xmlDoc = GXml.parse(doc);
            var markers = xmlDoc.documentElement.getElementsByTagName("marker");
            for (var i = 0; i < markers.length; i++) {
              // obtain the attributes of each marker
              var lat = parseFloat(markers[i].getAttribute("lat"));
              var lng = parseFloat(markers[i].getAttribute("lng"));
              var point = new GLatLng(lat, lng);
              var html = markers[i].getAttribute("html");
              var label = markers[i].getAttribute("label");
              var marker = createMarker(point, label, html);
              map.addOverlay(marker);
            }
            document.getElementById("side_bar").innerHTML = side_bar_html;
          });
        } else {
          alert("Sorry, the Google Maps API is not compatible with this browser");
        }
        //]]>
        </script>
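    With the v2 API this example uses, one small change (a sketch based on the v2 GMarker/GMap2 methods) is to re-center the map inside the marker's click listener before opening the info window, so sidebar clicks and direct marker clicks both center the map:

        GEvent.addListener(marker, "click", function() {
            map.setCenter(marker.getLatLng()); // center the map on this marker
            marker.openInfoWindowHtml(html);   // then open its info balloon
        });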

    Read the article

  • How can I use $(this) in a function called by the onClick event?

    - by tepkenvannkorn
    I want to set the current state to selected when clicking on each link. I can do this by:

        <ul class="places">
          <li class="selected">
            <a href="javascript:void(0)" onclick="myClick(0);">
              <span class="date">Saturday November 2, 2013</span>
              <span class="time">10am – 12pm</span>
              <span class="location">Western Sydney Parklands</span>
            </a>
          </li>
          <li>
            <a href="javascript:void(0)" onclick="myClick(1);">
              <span class="date">Saturday November 9, 2013</span>
              <span class="time">10am – 12pm</span>
              <span class="location">Bankstown High School</span>
            </a>
          </li>
          <li>
            <a href="javascript:void(0)" onclick="myClick(2);">
              <span class="date">Tuesday November 12, 2013</span>
              <span class="time">9am – 11am</span>
              <span class="location">Greystanes Park</span>
            </a>
          </li>
        </ul>

        $(document).ready(function() {
          $('.places li a').click(function() {
            $('.places li').removeClass('selected');
            $(this).parent().addClass('selected');
          });
        });

    But this double-triggers the onclick event on each link, because the function myClick() is also called to push data to the map. So I decided to implement this in the myClick() function instead:

        function myClick(id) {
          google.maps.event.trigger(markers[id], 'click');
          $('.places li').removeClass('selected');
          $(this).parent().addClass('selected');
        }

    The problem is that I cannot use $(this) to add the class to its parent li. See what I have tried here. Any help would be very much appreciated. Thanks!
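    The usual fix (a sketch against the poster's own markup): inside a plain onclick="myClick(1)" call, this refers to window, not the clicked link, so pass the element in explicitly:

        <!-- pass the anchor itself along with the index -->
        <a href="javascript:void(0)" onclick="myClick(this, 1);">...</a>

        function myClick(el, id) {
            google.maps.event.trigger(markers[id], 'click');
            $('.places li').removeClass('selected');
            $(el).parent().addClass('selected'); // el is the clicked <a>
        }

    This keeps the highlighting in one place, so the separate .click() binding (and the double trigger) can be dropped.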

    Read the article

  • Long pause when accessing DFS namespace

    - by Matt
    We've recently migrated our Windows network to use DFS for shared files. DFS is working well, except for one annoying problem: users experience a significant delay when they try to access a DFS namespace that they have not accessed for some time. I have tried to troubleshoot the issue but have not had any success so far, and I was hoping someone here may have some pointers to help resolve the problem.

    Firstly, some background on our network:

    - The network uses a Windows 2008 functional level Active Directory domain with two Windows 2008 DCs and two DNS servers (one on each of the DCs). The network is DNS only - no WINS. All computers are located at the same site and connected by Gigabit Ethernet.
    - We have approximately 20 Domain-based DFS namespaces in Windows 2008 mode, and each DFS namespace has two Windows 2008 DFS namespace servers (the same two servers for all namespaces). All namespace servers are in FQDN mode and all folder targets are specified using their FQDN.
    - All computers are up-to-date with Service Packs and patches.
    - The actual folder targets (i.e. the SMB shares our DFS folders point to) are scattered across several file and application servers, all running Windows 2008 bar two application servers which run Windows 2003 R2, with no replication setup at all (e.g. all DFS folders currently only have one folder target).

    Some more detail on the problem: the namespace access delay is generally 1-10 seconds long and seems to occur when a particular computer has not accessed the requested namespace for approximately five minutes or more. For example, if the user has not accessed \\domain.name\namespace1\ for more than five minutes and attempts to access \\domain.name\namespace1\ via Windows Explorer, the Explorer window will freeze for 1-10 seconds before finally resuming and displaying the folders that exist in \\domain.name\namespace1. If they then close the Explorer window and attempt to access \\domain.name\namespace1\ again within five minutes, the contents will be displayed almost instantly - if they wait longer than five minutes it will go through the 1-10 second pause again. Once "inside" the namespace everything is nice and snappy; it's just the initial connection to the namespace that is slow.

    The browsing delays seem to affect all variants of Windows that we use (Windows 2008 x64 SP2, Windows 2003 R2 x86 SP2, Windows XP Pro x86 SP3) - it is possibly a bit worse in Windows XP / 2003 than in Windows 2008, but I'm not sure if the difference isn't just psychological. Accessing the underlying folder targets directly exhibits no delay at all - i.e. if the SMB shares pointed to by DFS are accessed directly (bypassing DFS) then there is no pause.

    During troubleshooting I noticed that the "Cache duration" for all of our DFS roots is set to 300 seconds - 5 minutes. Given that this is the same amount of time required to trigger the pause, I assume that this caching is somehow related, although I am unsure exactly what is cached on the client and hence what needs to be looked up again after 5 minutes have elapsed.

    In trying to resolve the problem I have already tried / checked the following (without success):

    - Run dcdiag on both Domain Controllers - no problems found
    - Done some basic DNS server checks without finding any problems - I don't know how to check the DNS servers in detail, but I would add that the network is not exhibiting any other strange behavior that may point to a DNS problem
    - Disabled anti-virus on clients and servers
    - Removing one of the namespace servers from a couple of namespaces - no difference

    So that's where I'm up to - and I'm out of ideas. Can anyone suggest what may be causing the delays and/or what I should be trying next?

    Read the article

  • Can't sync filesystem without reboot

    - by Fabio
    I'm having an issue with a Linux server. Once a week the running MySQL instance hangs and there is no way to fully stop it. If I kill it, it remains in zombie status and init does not reap its PID. The server is used for staging deployments and some internal tools, so it's not under heavy load. The only process in constant use is mysqld, and for this reason I think it's the only process which suffers from this issue.

    I've searched the system logs for errors, and the only thing I found is this error (repeated a couple of times) in the dmesg output:

        [706560.640085] INFO: task mysqld:31965 blocked for more than 120 seconds.
        [706560.640198] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        [706560.640312] mysqld          D ffff88032fd93f40     0 31965      1 0x00000000
        [706560.640317]  ffff880242a27d18 0000000000000086 ffff88031a50dd00 ffff880242a27fd8
        [706560.640321]  ffff880242a27fd8 ffff880242a27fd8 ffff88031e549740 ffff88031a50dd00
        [706560.640325]  ffff88031a50dd00 ffff88032fd947f8 0000000000000002 ffffffff8112f250
        [706560.640328] Call Trace:
        [706560.640338]  [<ffffffff8112f250>] ? __lock_page+0x70/0x70
        [706560.640344]  [<ffffffff816cb1b9>] schedule+0x29/0x70
        [706560.640347]  [<ffffffff816cb28f>] io_schedule+0x8f/0xd0
        [706560.640350]  [<ffffffff8112f25e>] sleep_on_page+0xe/0x20
        [706560.640353]  [<ffffffff816c9900>] __wait_on_bit+0x60/0x90
        [706560.640356]  [<ffffffff8112f390>] wait_on_page_bit+0x80/0x90
        [706560.640360]  [<ffffffff8107dce0>] ? autoremove_wake_function+0x40/0x40
        [706560.640363]  [<ffffffff8112f891>] filemap_fdatawait_range+0x101/0x190
        [706560.640366]  [<ffffffff81130975>] filemap_write_and_wait_range+0x65/0x70
        [706560.640371]  [<ffffffff8122e441>] ext4_sync_file+0x71/0x320
        [706560.640376]  [<ffffffff811c3e6d>] do_fsync+0x5d/0x90
        [706560.640379]  [<ffffffff811c40d0>] sys_fsync+0x10/0x20
        [706560.640383]  [<ffffffff816d495d>] system_call_fastpath+0x1a/0x1f

    When this happens, the only way to make everything work again is a full reboot, but in order to do that I'm forced to use this command after I've manually stopped all running processes:

        echo b > /proc/sysrq-trigger

    otherwise the normal reboot process hangs forever. I've traced the reboot scripts and found that the reboot process also hangs on a sync call, this one in /etc/init.d/sendsigs (I'm on Ubuntu):

        # Flush the kernel I/O buffer before we start to kill
        # processes, to make sure the IO of already stopped services to
        # not slow down the remaining processes to a point where they
        # are accidentily killed with SIGKILL because they did not
        # manage to shut down in time.
        sync

    I'm almost sure that the cause of this is a hardware issue (the RAID controller???), also because I have two other machines with the same hardware and software configuration and they don't suffer from this, but I can't find any hint in syslog or dmesg. I've also installed the smartmontools and mcelog packages, but neither of them reported any issue. What can I do to track down the cause of this issue?

    Today it happened again; here is the status of the system after triggering a reboot:

        init---console-kit-dae---64*[{console-kit-dae}]
            +-dbus-daemon
            +-mcelog
            +-mysqld---{mysqld}
            +-newrelic-daemon---newrelic-daemon---11*[{newrelic-daemon}]
            +-ntpd
            +-polkitd---{polkitd}
            +-python3
            +-rpc.idmapd
            +-rpc.statd
            +-rpcbind
            +-sh---rc---S20sendsigs---sync
            +-smartd
            +-snmpd
            +-sshd---sshd---zsh---sudo---zsh---pstree
            +-sshd---sshd---zsh---sudo---zsh

    And here is the status of the sync process:

        # ps aux | grep sync
        root      3637  0.1  0.0   4352   372 ?  D  05:53  0:00 sync

    i.e. uninterruptible sleep...

    Hardware specs as reported by lshw (I think the RAID controller is a fake RAID; I usually don't deal with hardware and, for the record, I don't have physical access to the machine):

        description: Computer
        product: X7DBP ()
        vendor: Supermicro
        version: 0123456789
        serial: 0123456789
        width: 64 bits
        capabilities: smbios-2.4 dmi-2.4 vsyscall32
        configuration: administrator_password=disabled boot=normal frontpanel_password=unknown
                       keyboard_password=unknown power-on_password=disabled
                       uuid=53D19F64-D663-A017-8922-0030487C1FEE
          *-core
               description: Motherboard
               product: X7DBP
               vendor: Supermicro
               physical id: 0
               version: PCB Version
               serial: 0123456789
             *-firmware
                  description: BIOS
                  vendor: Phoenix Technologies LTD
                  physical id: 0
                  version: 6.00
                  date: 05/29/2007
                  size: 106KiB
                  capacity: 960KiB
                  capabilities: pci pnp upgrade shadowing escd cdboot bootselect edd int13floppy2880
                                acpi usb ls120boot zipboot biosbootspecification
             *-storage
                  description: RAID bus controller
                  product: 631xESB/632xESB SATA RAID Controller
                  vendor: Intel Corporation
                  physical id: 1f.2
                  bus info: pci@0000:00:1f.2
                  version: 09
                  width: 32 bits
                  clock: 66MHz
                  capabilities: storage pm bus_master cap_list
                  configuration: driver=ahci latency=0
                  resources: irq:19 ioport:18a0(size=8) ioport:1874(size=4) ioport:1878(size=8)
                             ioport:1870(size=4) ioport:1880(size=32) memory:d8500400-d85007ff

    Read the article

  • IP Micro-outages, telephone micro-outages, and CATV micro-outages

    - by Michael Graff
    This is a long and complicated question, mostly because it has been going on for 2.5 years without a solution in sight. It is also only one-third computer related; the other two-thirds are cable TV and cable-phone related.

    Background

    I have COX Communications as a cable provider, and we get Internet, digital cable TV, and digital phone service through them. The Internet is an SB5101 right now, and has been a DPC2100 and SB5120 in the past. Same results. The phone service is provided through a telephone interface mounted on the outside of the house (not classic VoIP) and the CATV is through a Scientific Atlanta receiver without DVR. I do have a TiVo connected to the CATV box.

    Symptoms

    The CATV shows "blocking" - sometimes very short duration, where a few blocks appear on the screen. Sometimes it lasts long enough that the video "pauses" for 2-5 seconds, and rarely, but not unseen, the audio also fails. The CATV decoder box shows no correctable (FEC) or uncorrectable errors. That is, all BER counters are zero for the video stream.

    The Internet shows "micro-outages" where it appears that sent packets are not making it out, but I continue to receive packets from local modems. That is, pings stop coming back, but I continue to see modems broadcast for DHCP, and sometimes they ask more than once. The cable modem shows no errors during this time, but cable modems lie like you would not believe. It is actually possible to unplug the coax from the modem for 20 seconds and it reports NO ERRORS to the provider's tools.

    The phone service cuts out for 1-3 seconds, infrequently. When this happens, I hear NOTHING (not even comfort noise) and the remote side hears a "click" as if I were getting a call-waiting message. However, there is no call incoming, other than the one I'm currently on of course.

    Things SEEM to happen more frequently when the temperature outside swings from cold to warm, so fall/spring seems worse than summer/winter. All micro-outages occur between once or twice a day (which I could ignore) and 10 times per hour. All SNR, signal levels, noise levels, etc. show very close to optimal when measured.

    COX's diagnosis

    This is a continual pain for me. Over the last 2.5 years, they have opened, "fixed" something, and closed the tickets. They close them without confirming that things are indeed better, and when I reopen they cannot do that, but instead they open a new ticket and send yet another low-level tech out to do the same signal tests and report that all is OK. I've finally gotten a line tech who has a clue and is motivated enough to pursue this with me. We have tried things like switching the local nodes over to UPS and generator power, but this does not trigger the noise. We have tried replacing all cabling, the tap outside my house, the modem, the CATV decoder - all without resolution. Recently they have decided it is both my computer or switch, my TiVo, and my phone that are all broken and causing this issue.

    My debugging steps

    I spent the worst day of my TV-watching life yesterday and part of today. I watched live TV without the TiVo. I witnessed blocking, but it did "feel different" and was actually more severe. Some days it is better, some days it is worse, so perhaps this was just a very bad day. Today, I connected the TiVo to my DVD player and ran two very long movies through it. I saw no blocking at all during nearly 6 hours of video.

    Suggestions?

    Does anyone have any suggestions on what to do next? I understand perhaps only the IP side can be addressed here, but it is one of the more limiting debugging options.

    Read the article

  • IMAPSync Migration to Exchange 2010 SP1: Exchange drops connections while checking for existence of folders

    - by Benjamin Priestman
    I'm migrating from Zimbra Collaboration Suite to Exchange 2010 SP1. I'm testing IMAPSync as a possible migration tool and have hit a problem with the IMAP server in Exchange 2010.

    For each account it migrates, IMAPSync loops through the list of folders in the source mailbox and tests for the existence of each one in the destination mailbox. It then goes on to create those folders that do not exist and copy over the messages. It's the initial testing for the existence of the folders that is giving me a problem. The response given by the Exchange server when the folder does not yet exist is given as an error:

        R="16 NO IMAPSyncTest/8 doesn't exist."

    After ten of these errors have been issued in succession, the Exchange server appears to stop responding to the IMAP session. Enabling protocol logging for IMAP confirms that the 10th request for a non-existent folder is the last request to be logged on the server. IMAPSync carries on merrily without seeming to realise its connection has gone, and thus fails to create any folders. I've logged this with the tool's creator.

    Does anyone have any idea why Exchange is stopping responding to the connections, though? The behaviour looks rather like throttling, although the "ten strikes and you're out" trigger does not seem to correspond to any of the triggers on the ThrottlingPolicies. Just to check, I've tried creating a new ThrottlingPolicy, turned everything that I think might be relevant up to 11, and applied it to my test mailbox. Policy settings are listed below, along with IMAP settings. Everything else should be pretty much as default.

    Throttling policy:

        RunspaceId                                : afa3159c-32a6-4906-986f-8adfbe50868b
        IsDefault                                 : False
        AnonymousMaxConcurrency                   : 1
        AnonymousPercentTimeInAD                  :
        AnonymousPercentTimeInCAS                 :
        AnonymousPercentTimeInMailboxRPC          :
        EASMaxConcurrency                         : 10
        EASPercentTimeInAD                        :
        EASPercentTimeInCAS                       :
        EASPercentTimeInMailboxRPC                :
        EASMaxDevices                             : 10
        EASMaxDeviceDeletesPerMonth               :
        EWSMaxConcurrency                         : 10
        EWSPercentTimeInAD                        : 50
        EWSPercentTimeInCAS                       : 90
        EWSPercentTimeInMailboxRPC                : 60
        EWSMaxSubscriptions                       : 5000
        EWSFastSearchTimeoutInSeconds             : 60
        EWSFindCountLimit                         : 1000
        IMAPMaxConcurrency                        : 1000
        IMAPPercentTimeInAD                       : 400
        IMAPPercentTimeInCAS                      : 400
        IMAPPercentTimeInMailboxRPC               : 400
        OWAMaxConcurrency                         : 5
        OWAPercentTimeInAD                        : 30
        OWAPercentTimeInCAS                       : 150
        OWAPercentTimeInMailboxRPC                : 150
        POPMaxConcurrency                         : 20
        POPPercentTimeInAD                        :
        POPPercentTimeInCAS                       :
        POPPercentTimeInMailboxRPC                :
        PowerShellMaxConcurrency                  : 18
        PowerShellMaxTenantConcurrency            :
        PowerShellMaxCmdlets                      :
        PowerShellMaxCmdletsTimePeriod            :
        ExchangeMaxCmdlets                        :
        PowerShellMaxCmdletQueueDepth             :
        PowerShellMaxDestructiveCmdlets           :
        PowerShellMaxDestructiveCmdletsTimePeriod :
        RCAMaxConcurrency                         : 1000
        RCAPercentTimeInAD                        : 400
        RCAPercentTimeInCAS                       : 400
        RCAPercentTimeInMailboxRPC                : 400
        CPAMaxConcurrency                         : 20
        CPAPercentTimeInCAS                       : 205
        CPAPercentTimeInMailboxRPC                : 200
        MessageRateLimit                          :
        RecipientRateLimit                        :
        ForwardeeLimit                            :
        CPUStartPercent                           : 75
        AdminDisplayName                          :
        ExchangeVersion                           : 0.10 (14.0.100.0)
        Name                                      : TestMigrationThrottling
        DistinguishedName                         : CN=TestMigrationThrottling,CN=Global Settings,CN=Our Company,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=cimex,DC=com
        Identity                                  : TestMigrationThrottling
        Guid                                      : 240049b3-2023-4df1-8edc-fbfc1fc80b87
        ObjectCategory                            : domain.com/Configuration/Schema/ms-Exch-Throttling-Policy
        ObjectClass                               : {top, msExchGenericPolicy, msExchThrottlingPolicy}
        WhenChanged                               : 21/04/2011 18:48:19
        WhenCreated                               : 21/04/2011 18:07:20
        WhenChangedUTC                            : 21/04/2011 17:48:19
        WhenCreatedUTC                            : 21/04/2011 17:07:20
        OrganizationId                            :
        OriginatingServer                         : a-domain-controller
        IsValid                                   : True

    IMAP settings:

        RunspaceId                                : afa3159c-32a6-4906-986f-8adfbe50868b
        ProtocolName                              : IMAP4
        Name                                      : 1
        MaxCommandSize                            : 10240
        ShowHiddenFoldersEnabled                  : False
        UnencryptedOrTLSBindings                  : {192.168.x.x:143}
        SSLBindings                               : {192.168.x.x:993}
        InternalConnectionSettings                : {mail.office.domain.com:143:TLS, mail.office.domain.com:993:SSL}
        ExternalConnectionSettings                : {mail.office.domain.com:143:TLS, mail.office.domain.com:993:SSL}
        X509CertificateName                       : mail.domain.com
        Banner                                    : The Microsoft Exchange IMAP4 service is ready.
        LoginType                                 : SecureLogin
        AuthenticatedConnectionTimeout            : 00:30:00
        PreAuthenticatedConnectionTimeout         : 00:01:00
        MaxConnections                            : 2147483647
        MaxConnectionFromSingleIP                 : 2147483647
        MaxConnectionsPerUser                     : 16
        MessageRetrievalMimeFormat                : BestBodyFormat
        ProxyTargetPort                           : 143
        CalendarItemRetrievalOption               : iCalendar
        OwaServerUrl                              :
        EnableExactRFC822Size                     : False
        LiveIdBasicAuthReplacement                : False
        SuppressReadReceipt                       : False
        ProtocolLogEnabled                        : True
        EnforceCertificateErrors                  : False
        LogFileLocation                           : C:\Program Files\Microsoft\Exchange Server\V14\Logging\Imap4
        LogFileRollOverSettings                   : Daily
        LogPerFileSizeQuota                       : 0 B (0 bytes)
        ExtendedProtectionPolicy                  : None
        EnableGSSAPIAndNTLMAuth                   : True
        Server                                    : CMX-OFFICE-EX01
        AdminDisplayName                          :
        ExchangeVersion                           : 0.10 (14.0.100.0)
        DistinguishedName                         : CN=1,CN=IMAP4,CN=Protocols,CN=EXCHANGE01,CN=Servers,CN=Exchange Administrative Group (FYDIBOHF23SPDLT),CN=Administrative Groups,CN=Our COmpany,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=domain,DC=com
        Identity                                  : EXCHANGE01\1
        Guid                                      : 48f9dc37-74c2-4fb0-a042-641f863f45f2
        ObjectCategory                            : domain.com/Configuration/Schema/ms-Exch-Protocol-Cfg-IMAP-Server
        ObjectClass                               : {top, protocolCfg, protocolCfgIMAP, protocolCfgIMAPServer}
        WhenChanged                               : 21/04/2011 17:03:39
        WhenCreated                               : 15/04/2011 13:51:58
        WhenChangedUTC                            : 21/04/2011 16:03:39
        WhenCreatedUTC                            : 15/04/2011 12:51:58
        OrganizationId                            :
        OriginatingServer                         : a-domain-server
        IsValid                                   : True

    Read the article

  • nginx server over https using up all available file handles (upd: infinite loop?)

    - by mmr
    Hi all, so I have an nginx server that's working over HTTPS with Sinatra. When I try to download a JNLP file in a configuration that works fine over Mongrel and http (no s), the nginx server fails to serve the file with a 504 error. Subsequent checking of the logs states that this error is due to overflowing the available number of file handles, i.e., "24: too many open files".

    Running sudo lsof -p <nginx worker pid> gets me a huge list of files, all looking like:

        nginx 1771 nobody 11u IPv4 10867997 0t0 TCP localhost:44704->localhost:https (ESTABLISHED)
        nginx 1771 nobody 12u IPv4 10868113 0t0 TCP localhost:https->localhost:44704 (ESTABLISHED)
        nginx 1771 nobody 13u IPv4 10868114 0t0 TCP localhost:44705->localhost:https (ESTABLISHED)
        nginx 1771 nobody 14u IPv4 10868191 0t0 TCP localhost:https->localhost:44705 (ESTABLISHED)
        nginx 1771 nobody 15u IPv4 10868192 0t0 TCP localhost:44706->localhost:https (ESTABLISHED)
        nginx 1771 nobody 16u IPv4 10868255 0t0 TCP localhost:https->localhost:44706 (ESTABLISHED)
        nginx 1771 nobody 17u IPv4 10868256 0t0 TCP localhost:44707->localhost:https (ESTABLISHED)
        nginx 1771 nobody 18u IPv4 10868330 0t0 TCP localhost:https->localhost:44707 (ESTABLISHED)
        nginx 1771 nobody 19u IPv4 10868331 0t0 TCP localhost:44708->localhost:https (ESTABLISHED)
        nginx 1771 nobody 20u IPv4 10868434 0t0 TCP localhost:https->localhost:44708 (ESTABLISHED)

    Increasing the number of files that can be opened is no help, because then nginx just blows right past that limit. And no wonder - it looks like it's in some kind of loop to pull all available files. Any idea what's going on, and how to fix it?

    EDIT: nginx 0.7.63, Ubuntu Linux, Sinatra 1.0

    EDIT 2: Here's the offending code. It's Sinatra serving JNLP, which I finally figured out:

        get '/uploader' do
          # read in the launch.jnlp file
          theJNLP = ""
          File.open("/launch.jnlp", "r+") do |file|
            while theTemp = file.gets
              theJNLP = theJNLP + theTemp
            end
          end
          content_type :jnlp
          theJNLP
        end

    If I serve this with Sinatra via Mongrel and http, everything works fine. If I serve this with Sinatra and nginx via https, I get the above error. All other parts of the website appear to be equivalent.

    EDIT: I have since upgraded to Passenger 2.2.14, Ruby 1.9.1, nginx 0.8.40, OpenSSL 1.0.0a, and no change.

    EDIT: The culprit appears to be infinite redirects due to using SSL. I don't know how to fix this, other than hosting the JNLP file in the root directory of the server (which I'd rather not do, since it limits me to one JNLP-based app at a time). The relevant lines from nginx.conf:

        # HTTPS server
        #
        server {
          listen 443;
          server_name MyServer.org;
          root /My/Root/Dir;
          passenger_enabled on;
          expires 1d;
          proxy_set_header X-FORWARDED_PROTO https;
          proxy_set_header X_FORWARDED_PROTO https; # the almighty google is not clear on which to use
          location /upload {
            proxy_pass https://127.0.0.1:443;
          }
        }

    The funny thing about this is, first, I was putting the JNLP into a directory called 'uploader', not 'upload', but that still appeared to trigger the problem, since that proxy_pass directive appeared in the logs. Second, moving the JNLP into root avoided the problem, because there wasn't any of this proxying due to SSL. So, how can I avoid the infinite proxy_pass loop in nginx?

    Read the article

  • C#: System.Collections.Concurrent.ConcurrentQueue vs. Queue

    - by James Michael Hare
I love new toys, so of course when .NET 4.0 came out I felt like the proverbial kid in the candy store!  Now, some people get all excited about the IDE and its new features, or about changes to WPF and Silverlight, and yes, those are all very fine and grand.  But me, I get all excited about things that tend to affect my life on the backside of development.  That's why when I heard there were going to be concurrent container implementations in the latest version of .NET I was salivating like Pavlov's dog at the dinner bell.

They seem so simple, really, that one could easily overlook them.  Essentially they are implementations of containers (many that mirror the generic collections, others are new) that have either been optimized with very efficient, limited, or no locking but are still completely thread-safe -- and I just had to see what kind of an improvement that would translate into.

Since part of my job as a solutions architect here where I work is to help design, develop, and maintain the systems that process tons of requests each second, the thought of extremely efficient thread-safe containers was extremely appealing.  Of course, they also rolled out a whole parallel development framework, which I won't get into in this post but will cover bits and pieces of as time goes by.

This time, I was mainly curious as to how well these new concurrent containers would perform compared to areas in our code where we manually synchronize them using lock or some other mechanism.  So I set about to run a processing test with a series of producers and consumers that would be processing either a traditional System.Collections.Generic.Queue or a System.Collections.Concurrent.ConcurrentQueue.

Now, I wanted to keep the code as common as possible to make sure that the only variance was the container, so I created a test Producer and a test Consumer.  The test Producer takes an Action<string> delegate which is responsible for taking a string and placing it on whichever queue we're testing in a thread-safe manner:

    internal class Producer
    {
        public int Iterations { get; set; }
        public Action<string> ProduceDelegate { get; set; }

        public void Produce()
        {
            for (int i = 0; i < Iterations; i++)
            {
                ProduceDelegate("Hello");
            }
        }
    }

Then likewise, I created a consumer that takes a Func<string> that will read from whichever queue we're testing and return either the string if data exists or null if not.  If the item doesn't exist, it will do a 10 ms wait before testing again.  Once all the producers are done and join the main thread, a flag will be set in each of the consumers to tell them that once the queue is empty they can shut down, since no other data is coming:

    internal class Consumer
    {
        public Func<string> ConsumeDelegate { get; set; }
        public bool HaltWhenEmpty { get; set; }

        public void Consume()
        {
            bool processing = true;

            while (processing)
            {
                string result = ConsumeDelegate();

                if (result == null)
                {
                    if (HaltWhenEmpty)
                    {
                        processing = false;
                    }
                    else
                    {
                        Thread.Sleep(TimeSpan.FromMilliseconds(10));
                    }
                }
                else
                {
                    DoWork(); // do something non-trivial so consumers lag behind a bit
                }
            }
        }
    }

Okay, now that we've done that, we can launch varying numbers of threads using lambdas for each different method of production/consumption.
First let's look at the lambdas for a typical System.Collections.Generic.Queue with locking:

    // lambda for putting to typical Queue with locking...
    Action<string> productionDelegate = s =>
    {
        lock (_mutex)
        {
            _mutexQueue.Enqueue(s);
        }
    };

    // and lambda for typical getting from Queue with locking...
    Func<string> consumptionDelegate = () =>
    {
        lock (_mutex)
        {
            if (_mutexQueue.Count > 0)
            {
                return _mutexQueue.Dequeue();
            }
        }
        return null;
    };

Nothing new or interesting here, just typical locks on an internal object instance.  Now let's look at using a ConcurrentQueue from the System.Collections.Concurrent library:

    // lambda for putting to a ConcurrentQueue, notice it needs no locking!
    Action<string> productionDelegate = s =>
    {
        _concurrentQueue.Enqueue(s);
    };

    // lambda for getting from a ConcurrentQueue, once again, no locking required.
    Func<string> consumptionDelegate = () =>
    {
        string s;
        return _concurrentQueue.TryDequeue(out s) ? s : null;
    };

So I pass each of these lambdas and the number of producer and consumer threads to launch, and take a look at the timing results.  Basically I'm timing from the time all threads start and begin producing/consuming to the time that all threads rejoin.  I won't bore you with the test code; basically it just launches the producers and consumers in their own threads, then waits for them all to rejoin.  The following are the timings from the start of all threads to the Join() on all threads completing.  The producers create 10,000,000 items evenly divided between themselves, and when all producers are done they trigger the consumers to stop once the queue is empty.

These are the results in milliseconds from the ordinary Queue with locking:

    Consumers   Producers        1        2        3   Time (ms)
    ---------   ---------   ------   ------   ------   ---------
            1           1     4284     5153     4226     4554.33
           10          10     4044     3831     5010     4295.00
          100         100     5497     5378     5612     5495.67
         1000        1000    24234    25409    27160    25601.00

And the following are the results in milliseconds from the ConcurrentQueue with no locking necessary:

    Consumers   Producers        1        2        3   Time (ms)
    ---------   ---------   ------   ------   ------   ---------
            1           1     3647     3643     3718     3669.33
           10          10     2311     2136     2142     2196.33
          100         100     2480     2416     2190     2362.00
         1000        1000     7289     6897     7061     7082.33

Note that even though 2000 threads is obviously quite extreme, the concurrent queue actually scales really well, whereas the traditional queue with simple locking scales much more poorly.

I love the new concurrent collections.  They look so much simpler without littering your code with the locking logic, and they perform much better.  All in all, a great new toy to add to your arsenal of multi-threaded processing!
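The author leaves the test harness out. As a rough sketch of what one might look like, assuming the Producer/Consumer classes above and the delegate pairs just shown (the Harness and RunTest names are my own, not from the post):

    using System;
    using System.Collections.Generic;
    using System.Diagnostics;
    using System.Threading;

    internal static class Harness
    {
        // Launches the requested producer/consumer threads against the given
        // delegates and returns elapsed milliseconds from start to full join.
        public static long RunTest(int producers, int consumers, int totalItems,
                                   Action<string> produce, Func<string> consume)
        {
            var threads = new List<Thread>();
            var consumerObjects = new List<Consumer>();

            for (int i = 0; i < producers; i++)
            {
                var p = new Producer { Iterations = totalItems / producers,
                                       ProduceDelegate = produce };
                threads.Add(new Thread(p.Produce));
            }
            for (int i = 0; i < consumers; i++)
            {
                var c = new Consumer { ConsumeDelegate = consume };
                consumerObjects.Add(c);
                threads.Add(new Thread(c.Consume));
            }

            var timer = Stopwatch.StartNew();
            threads.ForEach(t => t.Start());

            // Wait for the producers (the first 'producers' threads), then tell
            // the consumers to halt once the queue drains. (In production code a
            // volatile flag would be safer than a plain bool property.)
            for (int i = 0; i < producers; i++) threads[i].Join();
            consumerObjects.ForEach(c => c.HaltWhenEmpty = true);
            for (int i = producers; i < threads.Count; i++) threads[i].Join();

            timer.Stop();
            return timer.ElapsedMilliseconds;
        }
    }

A run like the 10x10 row in the tables would then be something like Harness.RunTest(10, 10, 10000000, productionDelegate, consumptionDelegate).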

    Read the article

  • Parallelism in .NET – Part 18, Task Continuations with Multiple Tasks

    - by Reed
In my introduction to Task continuations I demonstrated how the Task class provides a more expressive alternative to traditional callbacks.  Task continuations provide a much cleaner syntax than traditional callbacks, but there are other reasons to switch to using continuations…

Task continuations provide a clean syntax and a very simple, elegant means of synchronizing asynchronous method results with the user interface.  In addition, continuations provide a very simple, elegant means of working with collections of tasks.

Prior to .NET 4, working with multiple related asynchronous method calls was very tricky.  If, for example, we wanted to run two asynchronous operations, followed by a single method call which we wanted to run when the first two methods completed, we'd have to program all of the handling ourselves.  We would likely need to take some approach such as using a shared callback which synchronized against a common variable, or using a WaitHandle shared within the callbacks to allow one to wait for the second.  Although this could be accomplished easily enough, it requires manually placing this handling into every algorithm which requires this form of blocking.  This is error prone, difficult, and can easily lead to subtle bugs.

Similar to how the Task class's static methods provide a way to block until multiple tasks have completed, TaskFactory contains static methods which allow a continuation to be scheduled upon the completion of multiple tasks: TaskFactory.ContinueWhenAll.

This allows you to easily specify a single delegate to run when a collection of tasks has completed.  For example, suppose we have a class which fetches data from the network.  This can be a long running operation, and it can potentially fail in certain situations, such as a server being down.  As a result, we have three separate servers which we will "query" for our information.  Now, suppose we want to grab data from all three servers, and verify that the results are the same from all three.

With traditional asynchronous programming in .NET, this would require using three separate callbacks, and managing the synchronization between the various operations ourselves.  The Task and TaskFactory classes simplify this for us, allowing us to write:

    var server1 = Task.Factory.StartNew(() => networkClass.GetResults(firstServer));
    var server2 = Task.Factory.StartNew(() => networkClass.GetResults(secondServer));
    var server3 = Task.Factory.StartNew(() => networkClass.GetResults(thirdServer));

    var result = Task.Factory.ContinueWhenAll(
        new[] { server1, server2, server3 },
        (tasks) =>
        {
            // Propagate exceptions (see below)
            Task.WaitAll(tasks);
            return this.CompareTaskResults(
                tasks[0].Result, tasks[1].Result, tasks[2].Result);
        });

This is clean, simple, and elegant.  The one complication is the Task.WaitAll(tasks); statement.
Although the continuation will not complete until all three tasks (server1, server2, and server3) have completed, there is a potential snag.  If the networkClass.GetResults method fails and raises an exception, we want to make sure to handle it cleanly.  By using Task.WaitAll, any exceptions raised within any of our original tasks will get wrapped into a single AggregateException by the WaitAll method, providing us a simplified means of handling the exceptions.  If we wait on the continuation, we can trap this AggregateException and handle it cleanly.  Without this line, it's possible that an exception could remain uncaught and unhandled by a task, which later might trigger a nasty UnobservedTaskException.  This would happen any time two of our original tasks failed.

Just as we can schedule a continuation to occur when an entire collection of tasks has completed, we can just as easily set up a continuation to run when any single task within a collection completes.  If, for example, we didn't need to compare the results of all three network locations, but only needed one, we could still schedule three tasks.  We could then have our completion logic work on the first task which completed, and ignore the others.  This is done via TaskFactory.ContinueWhenAny:

    var server1 = Task.Factory.StartNew(() => networkClass.GetResults(firstServer));
    var server2 = Task.Factory.StartNew(() => networkClass.GetResults(secondServer));
    var server3 = Task.Factory.StartNew(() => networkClass.GetResults(thirdServer));

    var result = Task.Factory.ContinueWhenAny(
        new[] { server1, server2, server3 },
        (firstTask) =>
        {
            return this.ProcessTaskResult(firstTask.Result);
        });

Here, instead of working with all three tasks, we're just using the first task which finishes.  This is very useful, as it allows us to easily work with the results of multiple operations and "throw away" the others.  However, you must take care when using ContinueWhenAny to properly handle exceptions.  At some point, you should always wait on each task (or use the Task.Result property) in order to propagate any exceptions raised from within the task.  Failing to do so can lead to an UnobservedTaskException.
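To make the "wait on the continuation" advice concrete, here is a minimal sketch of consuming result from the ContinueWhenAll example above and trapping the wrapped failures (this snippet is my own illustration, not from the original post):

    try
    {
        // Blocking on the continuation's Result surfaces any failures from the
        // three server tasks, wrapped into a single AggregateException.
        var comparison = result.Result;
        Console.WriteLine("All servers agree: {0}", comparison);
    }
    catch (AggregateException ae)
    {
        // Flatten in case exceptions were nested by the continuation itself,
        // then handle (or log) each original failure.
        foreach (var ex in ae.Flatten().InnerExceptions)
        {
            Console.WriteLine("Server query failed: {0}", ex.Message);
        }
    }

Because every faulted task is observed here, no UnobservedTaskException can surface later from this chain.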

    Read the article

  • Introducing Oracle VM Server for SPARC

    - by Honglin Su
As you are watching Oracle's Virtualization Strategy Webcast and exploring the great virtualization offerings of the Oracle VM product line, I'd like to introduce Oracle VM Server for SPARC -- a highly efficient, enterprise-class virtualization solution for Sun SPARC Enterprise Systems with Chip Multithreading (CMT) technology.

Oracle VM Server for SPARC, previously called Sun Logical Domains, leverages the built-in SPARC hypervisor to subdivide supported platforms' resources (CPUs, memory, network, and storage) by creating partitions called logical (or virtual) domains. Each logical domain can run an independent operating system. Oracle VM Server for SPARC provides the flexibility to deploy multiple Oracle Solaris operating systems simultaneously on a single platform. Oracle VM Server also allows you to create up to 128 virtual servers on one system to take advantage of the massive thread scale offered by the CMT architecture.

Oracle VM Server for SPARC integrates both the industry-leading CMT capability of the UltraSPARC T1, T2 and T2 Plus processors and the Oracle Solaris operating system. This combination helps to increase flexibility, isolate workload processing, and improve the potential for maximum server utilization. Oracle VM Server for SPARC delivers the following:

·   Leading Price/Performance - The low-overhead architecture provides scalable performance under increasing workloads without additional license cost. This enables you to meet the most aggressive price/performance requirements.
·   Advanced RAS - Each logical domain is an entirely independent virtual machine with its own OS. It supports virtual disk multipathing and failover as well as faster network failover with link-based IP multipathing (IPMP) support. Moreover, it's fully integrated with Solaris FMA (Fault Management Architecture), which enables predictive self-healing.
·   CPU Dynamic Resource Management (DRM) - Enable your resource management policy and domain workload to trigger the automatic addition and removal of CPUs. This ability helps you to better align with your IT and business priorities.
·   Enhanced Domain Migrations - Perform domain migrations interactively and non-interactively to bring more flexibility to the management of your virtualized environment. Improve active domain migration performance by compressing memory transfers and taking advantage of cryptographic acceleration hardware. These methods provide faster migration for load balancing, power saving, and planned maintenance.
·   Dynamic Crypto Control - Dynamically add and remove cryptographic units (aka MAU) to and from active domains. Also, migrate active domains that have cryptographic units.
·   Physical-to-virtual (P2V) Conversion - Quickly convert an existing SPARC server running the Oracle Solaris 8, 9 or 10 OS into a virtualized Oracle Solaris 10 image. Use this image to facilitate OS migration into the virtualized environment.
·   Virtual I/O Dynamic Reconfiguration (DR) - Add and remove virtual I/O services and devices without needing to reboot the system.
·   CPU Power Management - Implement power saving by disabling each core on a Sun UltraSPARC T2 or T2 Plus processor that has all of its CPU threads idle.
·   Advanced Network Configuration - Configure the following network features to obtain more flexible network configurations, higher performance, and scalability: jumbo frames, VLANs, virtual switches for link aggregations, and network interface unit (NIU) hybrid I/O.
·   Official Certification Based On Real-World Testing - Use Oracle VM Server for SPARC with the most sophisticated enterprise workloads under real-world conditions, including Oracle Real Application Clusters (RAC).
·   Affordable, Full-Stack Enterprise Class Support - Obtain worldwide support from Oracle for the entire virtualization environment and workloads together. The support covers hardware, firmware, OS, virtualization, and the software stack.

SPARC Server Virtualization

Oracle offers a full portfolio of virtualization solutions to address your needs. SPARC is the leading platform to have the hard partitioning capability that provides the physical isolation needed to run independent operating systems. Many customers have already used Oracle Solaris Containers for application isolation. Oracle VM Server for SPARC provides another important feature with OS isolation. This gives you the flexibility to deploy multiple operating systems simultaneously on a single Sun SPARC T-Series server with finer granularity for computing resources.

For SPARC CMT processors, the natural level of granularity is an execution thread, not a time-sliced microsecond of execution resources. Each CPU thread can be treated as an independent virtual processor. The scheduler is naturally built into the CPU for lower overhead and higher performance. Your organization can couple Oracle Solaris Containers and Oracle VM Server for SPARC with the breakthrough space and energy savings afforded by Sun SPARC Enterprise systems with CMT technology to deliver a more agile, responsive, and low-cost environment.

Management with Oracle Enterprise Manager Ops Center

The Oracle Enterprise Manager Ops Center Virtualization Management Pack provides full lifecycle management of virtual guests, including Oracle VM Server for SPARC and Oracle Solaris Containers. It helps you streamline operations and reduce downtime. Together, the Virtualization Management Pack and the Ops Center Provisioning and Patch Automation Pack provide an end-to-end management solution for physical and virtual systems through a single web-based console. This solution automates the lifecycle management of physical and virtual systems and is the most effective systems management solution for Oracle's Sun infrastructure.

Ease of Deployment with Configuration Assistant

The Oracle VM Server for SPARC Configuration Assistant can help you easily create logical domains. After gathering the configuration data, the Configuration Assistant determines the best way to create a deployment to suit your requirements. The Configuration Assistant is available as both a graphical user interface (GUI) and a terminal-based tool.

Oracle Solaris Cluster HA Support

The Oracle Solaris Cluster HA for Oracle VM Server for SPARC data service provides a mechanism for orderly startup and shutdown, fault monitoring, and automatic failover of the Oracle VM Server guest domain service. In addition, applications that run on a logical domain, as well as its resources and dependencies, can be controlled and managed independently. These are managed as if they were running in a classical Solaris Cluster hardware node.

Supported Systems

Oracle VM Server for SPARC is supported on all Sun SPARC Enterprise Systems with CMT technology.
UltraSPARC T2 Plus Systems

·   Sun SPARC Enterprise T5140 Server
·   Sun SPARC Enterprise T5240 Server
·   Sun SPARC Enterprise T5440 Server
·   Sun Netra T5440 Server
·   Sun Blade T6340 Server Module
·   Sun Netra T6340 Server Module

UltraSPARC T2 Systems

·   Sun SPARC Enterprise T5120 Server
·   Sun SPARC Enterprise T5220 Server
·   Sun Netra T5220 Server
·   Sun Blade T6320 Server Module
·   Sun Netra CP3260 ATCA Blade Server

Note that UltraSPARC T1 systems are supported on earlier versions of the software. Sun SPARC Enterprise Systems with CMT technology come with the right to use (RTU) of Oracle VM Server, and the software is pre-installed. If you have the systems under warranty or with support, you can download the software and system firmware as well as their updates. Oracle Premier Support for Systems provides fully-integrated support for your server hardware, firmware, OS, and virtualization software. Visit oracle.com/support for information about Oracle's support offerings for Sun systems.

For more information about Oracle's virtualization offerings, visit oracle.com/virtualization.
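For readers who script deployments rather than use the Configuration Assistant, domains are driven by the Logical Domains Manager's ldm command from the control domain. A minimal sketch; the domain name, resource sizes, and service names (primary-vsw0, primary-vds0) are illustrative and assume the default control-domain services already exist:

    # Create a guest domain and give it CPU threads and memory
    ldm add-domain ldom1
    ldm add-vcpu 8 ldom1
    ldm add-memory 4G ldom1

    # Attach a virtual network device and a virtual disk from the
    # control domain's virtual switch and disk services
    ldm add-vnet vnet1 primary-vsw0 ldom1
    ldm add-vdisk vdisk1 vol1@primary-vds0 ldom1

    # Bind the resources and start the domain
    ldm bind-domain ldom1
    ldm start-domain ldom1

Once started, the guest's console is reachable through the virtual console service, and an OS can be installed into the domain like any other Solaris system.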

    Read the article

  • I&rsquo;m sorry RPGs, it&rsquo;s not you, it&rsquo;s me: The birth of my game idea

    - by George Clingerman
One of the things I've had to give up in order to have some development time at night is gaming. It's something I refused to admit for years, but I've just had to face the facts: I'm no longer a gamer. I just don't have hours and hours of free time to pour into gaming, and when I do have hours and hours of free time I want to pour them into game development.

That doesn't mean I don't game at all! I play games pretty much every day. It just means I've moved more into the casual game realm. It's all I have time for when juggling priorities in my life. That means that games like Gears of War 2 sit shrink-wrapped on my shelf, and although I popped Dragon Age into my Xbox 360 one time, I barely made it through the opening sequence and haven't had time to sit down and play again. Instead I'm playing short games like Jamestown, Atom Zombie Smasher, Fortix, or, if I have time to jump in and play a few rounds, maybe some Monday Night Combat or Team Fortress 2. These are games I can instantly get into, play for just a short period of time, and then walk away.

Breath of Death VII saved my life: Back in the day (way, way back in the day) I used to be a pretty big RPG fan. Not big by a lot of RPG gamers' standards (most of the RPGs RPG fans talk about I've never heard of), but I used to LOVE to play them on the NES, SNES and Genesis and considered that my genre. Final Fantasy, Shining in the Darkness, Bard's Tale, Faxanadu, Shadowrun, Ultima, Dragon Warrior, Chrono Trigger, Phantasy Star, Shining Force... well, the list could go on, but those are the ones I remember off the top of my head. I loved playing RPGs, and they were my games of choice.

After my first son was born (this was just about 12 years ago), I tried to continue playing RPGs and purchased games like Baldur's Gate I & II, Neverwinter Nights, Fable, then a few of the Final Fantasys, then Kingdom Hearts. I kept buying these games and then only playing for about fifteen minutes and never getting back to them. I still loved RPGs, but they just no longer fit into my life (I still haven't accepted that, since I still purchased Dragon Age II for some reason and convinced myself I'd find the time). Adding three more sons to the mix (that's 4 total) didn't help much in finding more RPG time (except for Breath of Death VII and other XBLIG RPG titles, thanks guys!).

All work and no RPG: A few months ago I was sitting thinking about the lack of RPGs in my life and talking to my wife about why I wish RPGs were different and easier for a dad like me to get into. She seemed like she was listening, so I started listing all the things that made them impossible for me to play. Here's a short list I came up with:

·   They take 15 billion hours to complete.
·   I have a few minutes at a time I can grab to play them if I want to have time to code. At that rate it would take me 9 trillion years to beat just one RPG.
·   There are such long spans of time between when I can play them that I forget what I was even doing, so I have to spend most of the playtime I have just figuring that out, and then my play time is over. Repeat.
·   I'll never finish one, and since it takes so long to get to the fun part in an RPG, I'm never having fun.
·   RPGs aren't fun if you don't have hours to play them at a time.

As you can see based on my science and math, RPGs aren't fun for me any more. From there my brain started toying around with ideas of RPGs that would work for me. They would have to be short RPGs, you know, ones you could beat in a single play session. A dad-sized play session.
I started thinking, wouldn't it be awesome if there was a fifteen-minute RPG? That got me laughing, and I took that as a good sign that it sounded fun, so I thought about it a little more. I immediately discarded the idea of doing a real RPG. I'm sure a short RPG like that could be done, but it wasn't the vibe I had in my head. No, this was going to be something that just had the core essence of an RPG. In reality what I'd be making would be more of an arcade-style game, one with high scores and lots of crazy action on the screen. And that's when it hit me: it would be a speed-run RPG. That's the basics of the game I'm working on.

The Elevator Pitch: It's a 2D top-down, RPG-themed arcade game focused on speed. It sounds like an RPG, smells like an RPG, but it's merely emulating an RPG. The game is focused on fun and mayhem in RPG form, with players leveling up in seconds instead of hours and rushing to finish quests as quickly as possible because they've only got fifteen minutes before EVIL overtakes the world. If the player takes longer than fifteen minutes, it's game over, man. One- to four-player co-operative play to really see just how fast players can level up and beat the game. Gamers will compete on leaderboards for bragging rights for fastest 1-, 2-, 3-, and 4-player speed runs, lowest-leveled characters to beat the game, highest-leveled characters to beat the game, and so on. Times will be tracked for everything from how long a player sat distributing stats, equipping items, and talking to NPCs to running around the level. These stats will be shown at the end of each quest/level so the players can work on improving their speed run for that part of the game next time around. It's the perfect RPG for those of us who only have fifteen minutes of game time!

Where I'm at: I'm still at the prototyping stage, attempting to put all the basic framework pieces in place that will, at minimum, give me one level to rush through. I've been working on this prototype for about a month now, though, so I'm going to have to step it up a bit or I'm not going to get finished in time (remember, I've only got 85 days left!). Lots of the game code is in place (although pretty sloppy), but I still can't play through that first quest/level just yet. That's my goal to finish up by the end of next Sunday (3/25/2012). You can all hold me to that and cheer me on or heckle me throughout the week. Either way, that should help me stay a bit more motivated and focused. In my head this feels like it's going to be a fun game, so I'm looking forward to seeing how it actually plays!

    Read the article

< Previous Page | 96 97 98 99 100 101 102 103 104 105 106 107  | Next Page >