Search Results

Search found 18156 results on 727 pages for 'tear down'.


  • How to autostart this slide

    - by lchales
    Hello there: first of all i have no idea on coding or anything related, simple question: is there any simple way to tell this code to autostart the slide? at the current moment the images change on click. currently the index page only have one image, what i want is to add a few but without the need to click to see the next one here is the code from my index: <script type="text/javascript"> //<![CDATA[ /* the images preload plugin */ (function($) { $.fn.preload = function(options) { var opts = $.extend({}, $.fn.preload.defaults, options), o = $.meta ? $.extend({}, opts, this.data()) : opts; var c = this.length, l = 0; return this.each(function() { var $i = $(this); $('<img/>').load(function(i){ ++l; if(l == c) o.onComplete(); }).attr('src',$i.attr('src')); }); }; $.fn.preload.defaults = { onComplete : function(){return false;} }; })(jQuery); //]]> </script><script type="text/javascript"> //<![CDATA[ $(function() { var $tf_bg = $('#tf_bg'), $tf_bg_images = $tf_bg.find('img'), $tf_bg_img = $tf_bg_images.eq(0), $tf_thumbs = $('#tf_thumbs'), total = $tf_bg_images.length, current = 0, $tf_content_wrapper = $('#tf_content_wrapper'), $tf_next = $('#tf_next'), $tf_prev = $('#tf_prev'), $tf_loading = $('#tf_loading'); //preload the images $tf_bg_images.preload({ onComplete : function(){ $tf_loading.hide(); init(); } }); //shows the first image and initializes events function init(){ //get dimentions for the image, based on the windows size var dim = getImageDim($tf_bg_img); //set the returned values and show the image $tf_bg_img.css({ width : dim.width, height : dim.height, left : dim.left, top : dim.top }).fadeIn(); //resizing the window resizes the $tf_bg_img $(window).bind('resize',function(){ var dim = getImageDim($tf_bg_img); $tf_bg_img.css({ width : dim.width, height : dim.height, left : dim.left, top : dim.top }); }); //expand and fit the image to the screen $('#tf_zoom').live('click', function(){ if($tf_bg_img.is(':animated')) return false; var $this = $(this); if($this.hasClass('tf_zoom')){ resize($tf_bg_img); $this.addClass('tf_fullscreen') .removeClass('tf_zoom'); } else{ var dim = getImageDim($tf_bg_img); $tf_bg_img.animate({ width : dim.width, height : dim.height, top : dim.top, left : dim.left },350); $this.addClass('tf_zoom') .removeClass('tf_fullscreen'); } } ); //click the arrow down, scrolls down $tf_next.bind('click',function(){ if($tf_bg_img.is(':animated')) return false; scroll('tb'); }); //click the arrow up, scrolls up $tf_prev.bind('click',function(){ if($tf_bg_img.is(':animated')) return false; scroll('bt'); }); //mousewheel events - down / up button trigger the scroll down / up $(document).mousewheel(function(e, delta) { if($tf_bg_img.is(':animated')) return false; if(delta > 0) scroll('bt'); else scroll('tb'); return false; }); //key events - down / up button trigger the scroll down / up $(document).keydown(function(e){ if($tf_bg_img.is(':animated')) return false; switch(e.which){ case 38: scroll('bt'); break; case 40: scroll('tb'); break; } }); } //show next / prev image function scroll(dir){ //if dir is "tb" (top -> bottom) increment current, //else if "bt" decrement it current = (dir == 'tb')?current + 1:current - 1; //we want a circular slideshow, //so we need to check the limits of current if(current == total) current = 0; else if(current < 0) current = total - 1; //flip the thumb $tf_thumbs.flip({ direction : dir, speed : 400, onBefore : function(){ //the new thumb is set here var content = '<span id="tf_zoom" class="tf_zoom"><\/span>'; content +='<img src="' + 
$tf_bg_images.eq(current).attr('longdesc') + '" alt="Thumb' + (current+1) + '"/>'; $tf_thumbs.html(content); } }); //we get the next image var $tf_bg_img_next = $tf_bg_images.eq(current), //its dimentions dim = getImageDim($tf_bg_img_next), //the top should be one that makes the image out of the viewport //the image should be positioned up or down depending on the direction top = (dir == 'tb')?$(window).height() + 'px':-parseFloat(dim.height,10) + 'px'; //set the returned values and show the next image $tf_bg_img_next.css({ width : dim.width, height : dim.height, left : dim.left, top : top }).show(); //now slide it to the viewport $tf_bg_img_next.stop().animate({ top : dim.top },700); //we want the old image to slide in the same direction, out of the viewport var slideTo = (dir == 'tb')?-$tf_bg_img.height() + 'px':$(window).height() + 'px'; $tf_bg_img.stop().animate({ top : slideTo },700,function(){ //hide it $(this).hide(); //the $tf_bg_img is now the shown image $tf_bg_img = $tf_bg_img_next; //show the description for the new image $tf_content_wrapper.children() .eq(current) .show(); }); //hide the current description $tf_content_wrapper.children(':visible') .hide() } //animate the image to fit in the viewport function resize($img){ var w_w = $(window).width(), w_h = $(window).height(), i_w = $img.width(), i_h = $img.height(), r_i = i_h / i_w, new_w,new_h; if(i_w > i_h){ new_w = w_w; new_h = w_w * r_i; if(new_h > w_h){ new_h = w_h; new_w = w_h / r_i; } } else{ new_h = w_w * r_i; new_w = w_w; } $img.animate({ width : new_w + 'px', height : new_h + 'px', top : '0px', left : '0px' },350); } //get dimentions of the image, //in order to make it full size and centered function getImageDim($img){ var w_w = $(window).width(), w_h = $(window).height(), r_w = w_h / w_w, i_w = $img.width(), i_h = $img.height(), r_i = i_h / i_w, new_w,new_h, new_left,new_top; if(r_w > r_i){ new_h = w_h; new_w = w_h / r_i; } else{ new_h = w_w * r_i; new_w = w_w; } return { width : new_w + 'px', height : new_h + 'px', left : (w_w - new_w) / 2 + 'px', top : (w_h - new_h) / 2 + 'px' }; } }); //]]> </script>
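    A minimal auto-advance sketch, assuming it is added at the end of the init() function in the code above (the scroll() function and the $tf_* variables it uses are already defined there; the 5000 ms delay is an arbitrary choice, not something from the original code):

        // Auto-advance: reuse the scroll() function the "down" arrow already calls.
        var autoSlide = setInterval(function () {
            if ($tf_bg_img.is(':animated')) return;   // skip a tick while an image is animating
            scroll('tb');                             // advance to the next image
        }, 5000);

        // Optional: stop auto-advancing once the user navigates manually.
        $tf_next.add($tf_prev).bind('click', function () {
            clearInterval(autoSlide);
        });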

    Read the article

  • Use IIS Application Initialization for keeping ASP.NET Apps alive

    - by Rick Strahl
    I've been working quite a bit with Windows Services in the recent months, and well, it turns out that Windows Services are quite a bear to debug, deploy, update and maintain. The process of getting services set up,  debugged and updated is a major chore that has to be extensively documented and or automated specifically. On most projects when a service is built, people end up scrambling for the right 'process' to use for administration. Web app deployment and maintenance on the other hand are common and well understood today, as we are constantly dealing with Web apps. There's plenty of infrastructure and tooling built into Web Tools like Visual Studio to facilitate the process. By comparison Windows Services or anything self-hosted for that matter seems convoluted.In fact, in a recent blog post I mentioned that on a recent project I'd been using self-hosting for SignalR inside of a Windows service, because the application is in fact a 'service' that also needs to send out lots of messages via SignalR. But the reality is that it could just as well be an IIS application with a service component that runs in the background. Either way you look at it, it's either a Windows Service with a built in Web Server, or an IIS application running a Service application, neither of which follows the standard Service or Web App template.Personally I much prefer Web applications. Running inside of IIS I get all the benefits of the IIS platform including service lifetime management (crash and restart), controlled shutdowns, the whole security infrastructure including easy certificate support, hot-swapping of code and the the ability to publish directly to IIS from within Visual Studio with ease.Because of these benefits we set out to move from the self hosted service into an ASP.NET Web app instead.The Missing Link for ASP.NET as a Service: Auto-LoadingI've had moments in the past where I wanted to run a 'service like' application in ASP.NET because when you think about it, it's so much easier to control a Web application remotely. Services are locked into start/stop operations, but if you host inside of a Web app you can write your own ticket and control it from anywhere. In fact nearly 10 years ago I built a background scheduling application that ran inside of ASP.NET and it worked great and it's still running doing its job today.The tricky part for running an app as a service inside of IIS then and now, is how to get IIS and ASP.NET launched so your 'service' stays alive even after an Application Pool reset. 7 years ago I faked it by using a web monitor (my own West Wind Web Monitor app) I was running anyway to monitor my various web sites for uptime, and having the monitor ping my 'service' every 20 seconds to effectively keep ASP.NET alive or fire it back up after a reload. I used a simple scheduler class that also includes some logic for 'self-reloading'. Hacky for sure, but it worked reliably.Luckily today it's much easier and more integrated to get IIS to launch ASP.NET as soon as an Application Pool is started by using the Application Initialization Module. The Application Initialization Module basically allows you to turn on Preloading on the Application Pool and the Site/IIS App, which essentially fires a request through the IIS pipeline as soon as the Application Pool has been launched. This means that effectively your ASP.NET app becomes active immediately, Application_Start is fired making sure your app stays up and running at all times. 
All the other features like Application Pool recycling and auto-shutdown after idle time still work, but IIS will then always immediately re-launch the application.Getting started with Application InitializationAs of IIS 8 Application Initialization is part of the IIS feature set. For IIS 7 and 7.5 there's a separate download available via Web Platform Installer. Using IIS 8 Application Initialization is an optional install component in Windows or the Windows Server Role Manager: This is an optional component so make sure you explicitly select it.IIS Configuration for Application InitializationInitialization needs to be applied on the Application Pool as well as the IIS Application level. As of IIS 8 these settings can be made through the IIS Administration console.Start with the Application Pool:Here you need to set both the Start Automatically which is always set, and the StartMode which should be set to AlwaysRunning. Both have to be set - the Start Automatically flag is set true by default and controls the starting of the application pool itself while Always Running flag is required in order to launch the application. Without the latter flag set the site settings have no effect.Now on the Site/Application level you can specify whether the site should pre load: Set the Preload Enabled flag to true.At this point ASP.NET apps should auto-load. This is all that's needed to pre-load the site if all you want is to get your site launched automatically.If you want a little more control over the load process you can add a few more settings to your web.config file that allow you to show a static page while the App is starting up. This can be useful if startup is really slow, so rather than displaying blank screen while the user is fiddling their thumbs you can display a static HTML page instead: <system.webServer> <applicationInitialization remapManagedRequestsTo="Startup.htm" skipManagedModules="true"> <add initializationPage="ping.ashx" /> </applicationInitialization> </system.webServer>This allows you to specify a page to execute in a dry run. IIS basically fakes request and pushes it directly into the IIS pipeline without hitting the network. You specify a page and IIS will fake a request to that page in this case ping.ashx which just returns a simple OK string - ie. a fast pipeline request. This request is run immediately after Application Pool restart, and while this request is running and your app is warming up, IIS can display an alternate static page - Startup.htm above. So instead of showing users an empty loading page when clicking a link on your site you can optionally show some sort of static status page that says, "we'll be right back".  I'm not sure if that's such a brilliant idea since this can be pretty disruptive in some cases. Personally I think I prefer letting people wait, but at least get the response they were supposed to get back rather than a random page. But it's there if you need it.Note that the web.config stuff is optional. If you don't provide it IIS hits the default site link (/) and even if there's no matching request at the end of that request it'll still fire the request through the IIS pipeline. 
Ideally though you want to make sure that an ASP.NET endpoint is hit either with your default page, or by specify the initializationPage to ensure ASP.NET actually gets hit since it's possible for IIS fire unmanaged requests only for static pages (depending how your pipeline is configured).What about AppDomain Restarts?In addition to full Worker Process recycles at the IIS level, ASP.NET also has to deal with AppDomain shutdowns which can occur for a variety of reasons:Files are updated in the BIN folderWeb Deploy to your siteweb.config is changedHard application crashThese operations don't cause the worker process to restart, but they do cause ASP.NET to unload the current AppDomain and start up a new one. Because the features above only apply to Application Pool restarts, AppDomain restarts could also cause your 'ASP.NET service' to stop processing in the background.In order to keep the app running on AppDomain recycles, you can resort to a simple ping in the Application_End event:protected void Application_End() { var client = new WebClient(); var url = App.AdminConfiguration.MonitorHostUrl + "ping.aspx"; client.DownloadString(url); Trace.WriteLine("Application Shut Down Ping: " + url); }which fires any ASP.NET url to the current site at the very end of the pipeline shutdown which in turn ensures that the site immediately starts back up.Manual Configuration in ApplicationHost.configThe above UI corresponds to the following ApplicationHost.config settings. If you're using IIS 7, there's no UI for these flags so you'll have to manually edit them.When you install the Application Initialization component into IIS it should auto-configure the module into ApplicationHost.config. Unfortunately for me, with Mr. Murphy in his best form for me, the module registration did not occur and I had to manually add it.<globalModules> <add name="ApplicationInitializationModule" image="%windir%\System32\inetsrv\warmup.dll" /> </globalModules>Most likely you won't need ever need to add this, but if things are not working it's worth to check if the module is actually registered.Next you need to configure the ApplicationPool and the Web site. The following are the two relevant entries in ApplicationHost.config.<system.applicationHost> <applicationPools> <add name="West Wind West Wind Web Connection" autoStart="true" startMode="AlwaysRunning" managedRuntimeVersion="v4.0" managedPipelineMode="Integrated"> <processModel identityType="LocalSystem" setProfileEnvironment="true" /> </add> </applicationPools> <sites> <site name="Default Web Site" id="1"> <application path="/MPress.Workflow.WebQueueMessageManager" applicationPool="West Wind West Wind Web Connection" preloadEnabled="true"> <virtualDirectory path="/" physicalPath="C:\Clients\…" /> </application> </site> </sites> </system.applicationHost>On the Application Pool make sure to set the autoStart and startMode flags to true and AlwaysRunning respectively. On the site make sure to set the preloadEnabled flag to true.And that's all you should need. You can still set the web.config settings described above as well.ASP.NET as a Service?In the particular application I'm working on currently, we have a queue manager that runs as standalone service that polls a database queue and picks out jobs and processes them on several threads. The service can spin up any number of threads and keep these threads alive in the background while IIS is running doing its own thing. These threads are newly created threads, so they sit completely outside of the IIS thread pool. 
In order for this service to work all it needs is a long running reference that keeps it alive for the life time of the application.In this particular app there are two components that run in the background on their own threads: A scheduler that runs various scheduled tasks and handles things like picking up emails to send out outside of IIS's scope and the QueueManager. Here's what this looks like in global.asax:public class Global : System.Web.HttpApplication { private static ApplicationScheduler scheduler; private static ServiceLauncher launcher; protected void Application_Start(object sender, EventArgs e) { // Pings the service and ensures it stays alive scheduler = new ApplicationScheduler() { CheckFrequency = 600000 }; scheduler.Start(); launcher = new ServiceLauncher(); launcher.Start(); // register so shutdown is controlled HostingEnvironment.RegisterObject(launcher); }}By keeping these objects around as static instances that are set only once on startup, they survive the lifetime of the application. The code in these classes is essentially unchanged from the Windows Service code except that I could remove the various overrides required for the Windows Service interface (OnStart,OnStop,OnResume etc.). Otherwise the behavior and operation is very similar.In this application ASP.NET serves two purposes: It acts as the host for SignalR and provides the administration interface which allows remote management of the 'service'. I can start and stop the service remotely by shutting down the ApplicationScheduler very easily. I can also very easily feed stats from the queue out directly via a couple of Web requests or (as we do now) through the SignalR service.Registering a Background Object with ASP.NETNotice also the use of the HostingEnvironment.RegisterObject(). This function registers an object with ASP.NET to let it know that it's a background task that should be notified if the AppDomain shuts down. RegisterObject() requires an interface with a Stop() method that's fired and allows your code to respond to a shutdown request. Here's what the IRegisteredObject::Stop() method looks like on the launcher:public void Stop(bool immediate = false) { LogManager.Current.LogInfo("QueueManager Controller Stopped."); Controller.StopProcessing(); Controller.Dispose(); Thread.Sleep(1500); // give background threads some time HostingEnvironment.UnregisterObject(this); }Implementing IRegisterObject should help with reliability on AppDomain shutdowns. Thanks to Justin Van Patten for pointing this out to me on Twitter.RegisterObject() is not required but I would highly recommend implementing it on whatever object controls your background processing to all clean shutdowns when the AppDomain shuts down.Testing it outI'm still in the testing phase with this particular service to see if there are any side effects. But so far it doesn't look like it. With about 50 lines of code I was able to replace the Windows service startup to Web start up - everything else just worked as is. An honorable mention goes to SignalR 2.0's oWin hosting, because with the new oWin based hosting no code changes at all were required, merely a couple of configuration file settings and an assembly directive needed, to point at the SignalR startup class. Sweet!It also seems like SignalR is noticeably faster running inside of IIS compared to self-host. 
Startup feels faster because of the preload.Starting and Stopping the 'Service'Because the application is running as a Web Server, it's easy to have a Web interface for starting and stopping the services running inside of the service. For our queue manager the SignalR service and front monitoring app has a play and stop button for toggling the queue.If you want more administrative control and have it work more like a Windows Service you can also stop the application pool explicitly from the command line which would be equivalent to stopping and restarting a service.To start and stop from the command line you can use the IIS appCmd tool. To stop:> %windir%\system32\inetsrv\appcmd stop apppool /apppool.name:"Weblog"and to start> %windir%\system32\inetsrv\appcmd start apppool /apppool.name:"Weblog"Note that when you explicitly force the AppPool to stop running either in the UI (on the ApplicationPools page use Start/Stop) or via command line tools, the application pool will not auto-restart immediately. You have to manually start it back up.What's not to like?There are certainly a lot of benefits to running a background service in IIS, but… ASP.NET applications do have more overhead in terms of memory footprint and startup time is a little slower, but generally for server applications this is not a big deal. If the application is stable the service should fire up and stay running indefinitely. A lot of times this kind of service interface can simply be attached to an existing Web application, or if scalability requires be offloaded to its own Web server.Easier to work withBut the ultimate benefit here is that it's much easier to work with a Web app as opposed to a service. While developing I can simply turn off the auto-launch features and launch the service on demand through IIS simply by hitting a page on the site. If I want to shut down an IISRESET -stop will shut down the service easily enough. I can then attach a debugger anywhere I want and this works like any other ASP.NET application. Yes you end up on a background thread for debugging but Visual Studio handles that just fine and if you stay on a single thread this is no different than debugging any other code.SummaryUsing ASP.NET to run background service operations is probably not a super common scenario, but it probably should be something that is considered carefully when building services. Many applications have service like features and with the auto-start functionality of the Application Initialization module, it's easy to build this functionality into ASP.NET. Especially when combined with the notification features of SignalR it becomes very, very easy to create rich services that can also communicate their status easily to the outside world.Whether it's existing applications that need some background processing for scheduling related tasks, or whether you just create a separate site altogether just to host your service it's easy to do and you can leverage the same tool chain you're already using for other Web projects. 
If you have lots of service projects it's worth considering… give it some thought…

© Rick Strahl, West Wind Technologies, 2005-2013
Posted in ASP.NET  SignalR  IIS

    Read the article

  • Using HTML 5 SessionState to save rendered Page Content

    - by Rick Strahl
    HTML 5 SessionState and LocalStorage are very useful and super easy to use to manage client side state. For building rich client side or SPA style applications it's a vital feature to be able to cache user data as well as HTML content in order to swap pages in and out of the browser's DOM. What might not be so obvious is that you can also use the sessionState and localStorage objects even in classic server rendered HTML applications to provide caching features between pages. These APIs have been around for a long time and are supported by most relatively modern browsers and even all the way back to IE8, so you can use them safely in your Web applications. SessionState and LocalStorage are easy The APIs that make up sessionState and localStorage are very simple. Both object feature the same API interface which  is a simple, string based key value store that has getItem, setItem, removeitem, clear and  key methods. The objects are also pseudo array objects and so can be iterated like an array with  a length property and you have array indexers to set and get values with. Basic usage  for storing and retrieval looks like this (using sessionStorage, but the syntax is the same for localStorage - just switch the objects):// set var lastAccess = new Date().getTime(); if (sessionStorage) sessionStorage.setItem("myapp_time", lastAccess.toString()); // retrieve in another page or on a refresh var time = null; if (sessionStorage) time = sessionStorage.getItem("myapp_time"); if (time) time = new Date(time * 1); else time = new Date(); sessionState stores data that is browser session specific and that has a liftetime of the active browser session or window. Shut down the browser or tab and the storage goes away. localStorage uses the same API interface, but the lifetime of the data is permanently stored in the browsers storage area until deleted via code or by clearing out browser cookies (not the cache). Both sessionStorage and localStorage space is limited. The spec is ambiguous about this - supposedly sessionStorage should allow for unlimited size, but it appears that most WebKit browsers support only 2.5mb for either object. This means you have to be careful what you store especially since other applications might be running on the same domain and also use the storage mechanisms. That said 2.5mb worth of character data is quite a bit and would go a long way. The easiest way to get a feel for how sessionState and localStorage work is to look at a simple example. You can go check out the following example online in Plunker: http://plnkr.co/edit/0ICotzkoPjHaWa70GlRZ?p=preview which looks like this: Plunker is an online HTML/JavaScript editor that lets you write and run Javascript code and similar to JsFiddle, but a bit cleaner to work in IMHO (thanks to John Papa for turning me on to it). The sample has two text boxes with counts that update session/local storage every time you click the related button. The counts are 'cached' in Session and Local storage. The point of these examples is that both counters survive full page reloads, and the LocalStorage counter survives a complete browser shutdown and restart. Go ahead and try it out by clicking the Reload button after updating both counters and then shutting down the browser completely and going back to the same URL (with the same browser). What you should see is that reloads leave both counters intact at the counted values, while a browser restart will leave only the local storage counter intact. 
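    One practical note on the API above: setItem() will throw once the storage quota is exceeded (the roughly 2.5 MB WebKit limit mentioned earlier), and values must be strings. A minimal guarded sketch that also handles JSON serialization - the helper names here are illustrative, not from any library:

        // Guarded JSON wrappers around sessionStorage (sketch; helper names are made up).
        function saveJson(key, value) {
            if (!window.sessionStorage) return false;            // feature detection
            try {
                sessionStorage.setItem(key, JSON.stringify(value));
                return true;
            } catch (e) {
                return false;                                    // quota exceeded or storage disabled
            }
        }
        function loadJson(key) {
            if (!window.sessionStorage) return null;
            var raw = sessionStorage.getItem(key);
            try {
                return raw ? JSON.parse(raw) : null;
            } catch (e) {
                return null;                                     // stored value wasn't valid JSON
            }
        }

        // Usage:
        saveJson("myapp_user", { name: "Rick", id: "ricks", level: 8 });
        var user = loadJson("myapp_user");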
The code to deal with the SessionStorage (and LocalStorage not shown here) in the example is isolated into a couple of wrapper methods to simplify the code: function getSessionCount() { var count = 0; if (sessionStorage) { var count = sessionStorage.getItem("ss_count"); count = !count ? 0 : count * 1; } $("#txtSession").val(count); return count; } function setSessionCount(count) { if (sessionStorage) sessionStorage.setItem("ss_count", count.toString()); } These two functions essentially load and store a session counter value. The two key methods used here are: sessionStorage.getItem(key); sessionStorage.setItem(key,stringVal); Note that the value given to setItem and return by getItem has to be a string. If you pass another type you get an error. Don't let that limit you though - you can easily enough store JSON data in a variable so it's quite possible to pass complex objects and store them into a single sessionStorage value:var user = { name: "Rick", id="ricks", level=8 } sessionStorage.setItem("app_user",JSON.stringify(user)); to retrieve it:var user = sessionStorage.getItem("app_user"); if (user) user = JSON.parse(user); Simple! If you're using the Chrome Developer Tools (F12) you can also check out the session and local storage state on the Resource tab:   You can also use this tool to refresh or remove entries from storage. What we just looked at is a purely client side implementation where a couple of counters are stored. For rich client centric AJAX applications sessionStorage and localStorage provide a very nice and simple API to store application state while the application is running. But you can also use these storage mechanisms to manage server centric HTML applications when you combine server rendering with some JavaScript to perform client side data caching. You can both store some state information and data on the client (ie. store a JSON object and carry it forth between server rendered HTML requests) or you can use it for good old HTTP based caching where some rendered HTML is saved and then restored later. Let's look at the latter with a real life example. Why do I need Client-side Page Caching for Server Rendered HTML? I don't know about you, but in a lot of my existing server driven applications I have lists that display a fair amount of data. Typically these lists contain links to then drill down into more specific data either for viewing or editing. You can then click on a link and go off to a detail page that provides more concise content. So far so good. But now you're done with the detail page and need to get back to the list, so you click on a 'bread crumbs trail' or an application level 'back to list' button and… …you end up back at the top of the list - the scroll position, the current selection in some cases even filters conditions - all gone with the wind. You've left behind the state of the list and are starting from scratch in your browsing of the list from the top. Not cool! Sound familiar? This a pretty common scenario with server rendered HTML content where it's so common to display lists to drill into, only to lose state in the process of returning back to the original list. Look at just about any traditional forums application, or even StackOverFlow to see what I mean here. Scroll down a bit to look at a post or entry, drill in then use the bread crumbs or tab to go back… In some cases returning to the top of a list is not a big deal. 
On StackOverFlow that sort of works because content is turning around so quickly you probably want to actually look at the top posts. Not always though - if you're browsing through a list of search topics you're interested in and drill in there's no way back to that position. Essentially anytime you're actively browsing the items in the list, that's when state becomes important and if it's not handled the user experience can be really disrupting. Content Caching If you're building client centric SPA style applications this is a fairly easy to solve problem - you tend to render the list once and then update the page content to overlay the detail content, only hiding the list temporarily until it's used again later. It's relatively easy to accomplish this simply by hiding content on the page and later making it visible again. But if you use server rendered content, hanging on to all the detail like filters, selections and scroll position is not quite as easy. Or is it??? This is where sessionStorage comes in handy. What if we just save the rendered content of a previous page, and then restore it when we return to this page based on a special flag that tells us to use the cached version? Let's see how we can do this. A real World Use Case Recently my local ISP asked me to help out with updating an ancient classifieds application. They had a very busy, local classifieds app that was originally an ASP classic application. The old app was - wait for it: frames based - and even though I lobbied against it, the decision was made to keep the frames based layout to allow rapid browsing of the hundreds of posts that are made on a daily basis. The primary reason they wanted this was precisely for the ability to quickly browse content item by item. While I personally hate working with Frames, I have to admit that the UI actually works well with the frames layout as long as you're running on a large desktop screen. You can check out the frames based desktop site here: http://classifieds.gorge.net/ However when I rebuilt the app I also added a secondary view that doesn't use frames. The main reason for this of course was for mobile displays which work horribly with frames. So there's a somewhat mobile friendly interface to the interface, which ditches the frames and uses some responsive design tweaking for mobile capable operation: http://classifeds.gorge.net/mobile  (or browse the base url with your browser width under 800px)   Here's what the mobile, non-frames view looks like:   As you can see this means that the list of classifieds posts now is a list and there's a separate page for drilling down into the item. And of course… originally we ran into that usability issue I mentioned earlier where the browse, view detail, go back to the list cycle resulted in lost list state. Originally in mobile mode you scrolled through the list, found an item to look at and drilled in to display the item detail. Then you clicked back to the list and BAM - you've lost your place. Because there are so many items added on a daily basis the full list is never fully loaded, but rather there's a "Load Additional Listings"  entry at the button. Not only did we originally lose our place when coming back to the list, but any 'additionally loaded' items are no longer there because the list was now rendering  as if it was the first page hit. The additional listings, and any filters, the selection of an item all were lost. Major Suckage! 
Using Client SessionStorage to cache Server Rendered Content To work around this problem I decided to cache the rendered page content from the list in SessionStorage. Anytime the list renders or is updated with Load Additional Listings, the page HTML is cached and stored in Session Storage. Any back links from the detail page or the login or write entry forms then point back to the list page with a back=true query string parameter. If the server side sees this parameter it doesn't render the part of the page that is cached. Instead the client side code retrieves the data from the sessionState cache and simply inserts it into the page. It sounds pretty simple, and the overall the process is really easy, but there are a few gotchas that I'll discuss in a minute. But first let's look at the implementation. Let's start with the server side here because that'll give a quick idea of the doc structure. As I mentioned the server renders data from an ASP.NET MVC view. On the list page when returning to the list page from the display page (or a host of other pages) looks like this: https://classifieds.gorge.net/list?back=True The query string value is a flag, that indicates whether the server should render the HTML. Here's what the top level MVC Razor view for the list page looks like:@model MessageListViewModel @{ ViewBag.Title = "Classified Listing"; bool isBack = !string.IsNullOrEmpty(Request.QueryString["back"]); } <form method="post" action="@Url.Action("list")"> <div id="SizingContainer"> @if (!isBack) { @Html.Partial("List_CommandBar_Partial", Model) <div id="PostItemContainer" class="scrollbox" xstyle="-webkit-overflow-scrolling: touch;"> @Html.Partial("List_Items_Partial", Model) @if (Model.RequireLoadEntry) { <div class="postitem loadpostitems" style="padding: 15px;"> <div id="LoadProgress" class="smallprogressright"></div> <div class="control-progress"> Load additional listings... </div> </div> } </div> } </div> </form> As you can see the query string triggers a conditional block that if set is simply not rendered. The content inside of #SizingContainer basically holds  the entire page's HTML sans the headers and scripts, but including the filter options and menu at the top. In this case this makes good sense - in other situations the fact that the menu or filter options might be dynamically updated might make you only cache the list rather than essentially the entire page. In this particular instance all of the content works and produces the proper result as both the list along with any filter conditions in the form inputs are restored. Ok, let's move on to the client. On the client there are two page level functions that deal with saving and restoring state. Like the counter example I showed earlier, I like to wrap the logic to save and restore values from sessionState into a separate function because they are almost always used in several places.page.saveData = function(id) { if (!sessionStorage) return; var data = { id: id, scroll: $("#PostItemContainer").scrollTop(), html: $("#SizingContainer").html() }; sessionStorage.setItem("list_html",JSON.stringify(data)); }; page.restoreData = function() { if (!sessionStorage) return; var data = sessionStorage.getItem("list_html"); if (!data) return null; return JSON.parse(data); }; The data that is saved is an object which contains an ID which is the selected element when the user clicks and a scroll position. These two values are used to reset the scroll position when the data is used from the cache. 
Finally the html from the #SizingContainer element is stored, which makes for the bulk of the document's HTML. In this application the HTML captured could be a substantial bit of data. If you recall, I mentioned that the server side code renders a small chunk of data initially and then gets more data if the user reads through the first 50 or so items. The rest of the items retrieved can be rather sizable. Other than the JSON deserialization that's Ok. Since I'm using SessionStorage the storage space has no immediate limits. Next is the core logic to handle saving and restoring the page state. At first though this would seem pretty simple, and in some cases it might be, but as the following code demonstrates there are a few gotchas to watch out for. Here's the relevant code I use to save and restore:$( function() { … var isBack = getUrlEncodedKey("back", location.href); if (isBack) { // remove the back key from URL setUrlEncodedKey("back", "", location.href); var data = page.restoreData(); // restore from sessionState if (!data) { // no data - force redisplay of the server side default list window.location = "list"; return; } $("#SizingContainer").html(data.html); var el = $(".postitem[data-id=" + data.id + "]"); $(".postitem").removeClass("highlight"); el.addClass("highlight"); $("#PostItemContainer").scrollTop(data.scroll); setTimeout(function() { el.removeClass("highlight"); }, 2500); } else if (window.noFrames) page.saveData(null); // save when page loads $("#SizingContainer").on("click", ".postitem", function() { var id = $(this).attr("data-id"); if (!id) return true; if (window.noFrames) page.saveData(id); var contentFrame = window.parent.frames["Content"]; if (contentFrame) contentFrame.location.href = "show/" + id; else window.location.href = "show/" + id; return false; }); … The code starts out by checking for the back query string flag which triggers restoring from the client cache. If cached the cached data structure is read from sessionStorage. It's important here to check if data was returned. If the user had back=true on the querystring but there is no cached data, he likely bookmarked this page or otherwise shut down the browser and came back to this URL. In that case the server didn't render any detail and we have no cached data, so all we can do is redirect to the original default list view using window.location. If we continued the page would render no data - so make sure to always check the cache retrieval result. Always! If there is data the it's loaded and the data.html data is restored back into the document by simply injecting the HTML back into the document's #SizingContainer element:$("#SizingContainer").html(data.html); It's that simple and it's quite quick even with a fully loaded list of additional items and on a phone. The actual HTML data is stored to the cache on every page load initially and then again when the user clicks on an element to navigate to a particular listing. The former ensures that the client cache always has something in it, and the latter updates with additional information for the selected element. For the click handling I use a data-id attribute on the list item (.postitem) in the list and retrieve the id from that. That id is then used to navigate to the actual entry as well as storing that Id value in the saved cached data. The id is used to reset the selection by searching for the data-id value in the restored elements. 
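    One small note: the getUrlEncodedKey() and setUrlEncodedKey() helpers used in the snippet above are the author's own utilities and aren't shown in the post. Purely to make the back-flag check concrete, a hypothetical read-only stand-in could look like this (not the author's implementation):

        // Hypothetical stand-in for getUrlEncodedKey() as used above.
        function getUrlEncodedKey(key, url) {
            var query = (url || window.location.href).split("?")[1] || "";
            var match = new RegExp("(^|&)" + key + "=([^&]*)").exec(query);
            return match ? decodeURIComponent(match[2].replace(/\+/g, " ")) : "";
        }

        // getUrlEncodedKey("back", "https://classifieds.gorge.net/list?back=True")  ->  "True"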
The overall process of this save/restore process is pretty straight forward and it doesn't require a bunch of code, yet it yields a huge improvement in the usability of the site on mobile devices (or anybody who uses the non-frames view). Some things to watch out for As easy as it conceptually seems to simply store and retrieve cached content, you have to be quite aware what type of content you are caching. The code above is all that's specific to cache/restore cycle and it works, but it took a few tweaks to the rest of the script code and server code to make it all work. There were a few gotchas that weren't immediately obvious. Here are a few things to pay attention to: Event Handling Logic Timing of manipulating DOM events Inline Script Code Bookmarking to the Cache Url when no cache exists Do you have inline script code in your HTML? That script code isn't going to run if you restore from cache and simply assign or it may not run at the time you think it would normally in the DOM rendering cycle. JavaScript Event Hookups The biggest issue I ran into with this approach almost immediately is that originally I had various static event handlers hooked up to various UI elements that are now cached. If you have an event handler like:$("#btnSearch").click( function() {…}); that works fine when the page loads with server rendered HTML, but that code breaks when you now load the HTML from cache. Why? Because the elements you're trying to hook those events to may not actually be there - yet. Luckily there's an easy workaround for this by using deferred events. With jQuery you can use the .on() event handler instead:$("#SelectionContainer").on("click","#btnSearch", function() {…}); which monitors a parent element for the events and checks for the inner selector elements to handle events on. This effectively defers to runtime event binding, so as more items are added to the document bindings still work. For any cached content use deferred events. Timing of manipulating DOM Elements Along the same lines make sure that your DOM manipulation code follows the code that loads the cached content into the page so that you don't manipulate DOM elements that don't exist just yet. Ideally you'll want to check for the condition to restore cached content towards the top of your script code, but that can be tricky if you have components or other logic that might not all run in a straight line. Inline Script Code Here's another small problem I ran into: I use a DateTime Picker widget I built a while back that relies on the jQuery date time picker. I also created a helper function that allows keyboard date navigation into it that uses JavaScript logic. Because MVC's limited 'object model' the only way to embed widget content into the page is through inline script. This code broken when I inserted the cached HTML into the page because the script code was not available when the component actually got injected into the page. As the last bullet - it's a matter of timing. There's no good work around for this - in my case I pulled out the jQuery date picker and relied on native <input type="date" /> logic instead - a better choice these days anyway, especially since this view is meant to be primarily to serve mobile devices which actually support date input through the browser (unlike desktop browsers of which only WebKit seems to support it). Bookmarking Cached Urls When you cache HTML content you have to make a decision whether you cache on the client and also not render that same content on the server. 
In the Classifieds app I didn't render server side content so if the user comes to the page with back=True and there is no cached content I have to a have a Plan B. Typically this happens when somebody ends up bookmarking the back URL. The easiest and safest solution for this scenario is to ALWAYS check the cache result to make sure it exists and if not have a safe URL to go back to - in this case to the plain uncached list URL which amounts to effectively redirecting. This seems really obvious in hindsight, but it's easy to overlook and not see a problem until much later, when it's not obvious at all why the page is not rendering anything. Don't use <body> to replace Content Since we're practically replacing all the HTML in the page it may seem tempting to simply replace the HTML content of the <body> tag. Don't. The body tag usually contains key things that should stay in the page and be there when it loads. Specifically script tags and elements and possibly other embedded content. It's best to create a top level DOM element specifically as a placeholder container for your cached content and wrap just around the actual content you want to replace. In the app above the #SizingContainer is that container. Other Approaches The approach I've used for this application is kind of specific to the existing server rendered application we're running and so it's just one approach you can take with caching. However for server rendered content caching this is a pattern I've used in a few apps to retrofit some client caching into list displays. In this application I took the path of least resistance to the existing server rendering logic. Here are a few other ways that come to mind: Using Partial HTML Rendering via AJAXInstead of rendering the page initially on the server, the page would load empty and the client would render the UI by retrieving the respective HTML and embedding it into the page from a Partial View. This effectively makes the initial rendering and the cached rendering logic identical and removes the server having to decide whether this request needs to be rendered or not (ie. not checking for a back=true switch). All the logic related to caching is made on the client in this case. Using JSON Data and Client RenderingThe hardcore client option is to do the whole UI SPA style and pull data from the server and then use client rendering or databinding to pull the data down and render using templates or client side databinding with knockout/angular et al. As with the Partial Rendering approach the advantage is that there's no difference in the logic between pulling the data from cache or rendering from scratch other than the initial check for the cache request. Of course if the app is a  full on SPA app, then caching may not be required even - the list could just stay in memory and be hidden and reactivated. I'm sure there are a number of other ways this can be handled as well especially using  AJAX. AJAX rendering might simplify the logic, but it also complicates search engine optimization since there's no content loaded initially. So there are always tradeoffs and it's important to look at all angles before deciding on any sort of caching solution in general. State of the Session SessionState and LocalStorage are easy to use in client code and can be integrated even with server centric applications to provide nice caching features of content and data. 
In this post I've shown a very specific scenario of storing HTML content for the purpose of remembering list view data and state and making the browsing experience for lists a bit more friendly, especially if there's dynamically loaded content involved. If you haven't played with sessionStorage or localStorage I encourage you to give it a try. There's a lot of cool stuff that you can do with this beyond the specific scenario I've covered here…

Resources: Overview of localStorage (also applies to sessionStorage), Web Storage Compatibility, Modernizr Test Suite

© Rick Strahl, West Wind Technologies, 2005-2013
Posted in JavaScript  HTML5  ASP.NET  MVC
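    As a footnote to the "Other Approaches" section above, here is a rough sketch of what the "Partial HTML Rendering via AJAX" alternative could look like, with a single code path for cached and fresh content. The #SizingContainer element is the container used in the article; the list/partial URL and the list_partial_html cache key are illustrative placeholders, not part of the actual application:

        // Sketch of the AJAX-partial alternative: one code path for cached and fresh content.
        function loadList() {
            var cached = window.sessionStorage && sessionStorage.getItem("list_partial_html");
            if (cached) {
                $("#SizingContainer").html(cached);                      // restore from the client cache
                return;
            }
            $.get("list/partial", function (html) {
                $("#SizingContainer").html(html);                        // render the freshly loaded partial
                if (window.sessionStorage)
                    sessionStorage.setItem("list_partial_html", html);   // cache it for the next visit
            });
        }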

    Read the article

  • Brightness controls not adjusting the brightness in ubuntu 10.10 64 bit on hp dm3t notebook

    - by user26768
    I picked up an HP dm3t laptop with Intel HD graphics and installed Ubuntu 10.10 64-bit on it. It works great -- the only problem is that the brightness controls don't work. The brightness is always at full. When I try to adjust it down, the indicator graphic shows it going down, but the actual brightness doesn't change. Is there anything I can try to make this work? I'd really appreciate any help.

    Read the article

  • Using autohotkey with a gamepad

    - by Phenom
    I can map buttons on a gamepad like this:

        Joy2::
        Send {Up down}   ; Hold down the up-arrow key.
        KeyWait Joy2     ; Wait for the user to release the joystick button.
        Send {Up up}     ; Release the up-arrow key.
        return

    However, I want to remap the directional pad to do the same thing instead. How can I do this?

    Read the article

  • Web log analyser with daily statistics per URL

    - by Mat
    Are there any good web server log analysis tools that can give me daily statistics on individual URLs? I'm looking for something that can drill down into particular URLs on particular days, rather than just a monthly summary report. The following don't seem to meet my needs, as they don't offer drilling down to get more detailed info: AWStats, Analog, Webalizer. (I'm running an nginx frontend into Apache, with nginx outputting 'combined'-format logfiles, if it makes any difference.)

    Read the article

  • MacBook hangs when restarting to Vista through BootCamp

    - by John
    I have Vista Pro installed on my MacBook and while it mostly works well, sometimes when I select Windows from the boot screen after turning the machine on, the screen just goes black and everything stops. If I hold the power button until it powers down and repeat, it works. It's not getting as far as Windows at all, I think, because if Windows was previously shut down to hibernate, it still restores fine... and I'm pretty sure that if Windows' startup had hung and been force-rebooted, my session would be borked.

    Read the article

  • How to shutdown VMware Fusion virtual machine on host shutdown

    - by Nikksno
    I have a Mac mini running Mavericks server. I installed the Atmail server + webmail vm [a linux centos distribution] in VMware Fusion Professional 6 with the VMware Tools addon. It works flawlessly. I've set it to start on boot and that works very reliably. However I've been looking for a way to also safely and gracefully shut it down whenever OS X shuts down for whatever reason. The Mac is connected to a UPS and configured to perform an automatic shutdown in case the battery starts running low so that's no additional problem. Now the first thing I did was to go into Fusion's prefs and select "Power off the vm" when closing it. However I noticed that for some arcane reason closing the vm window would actually forcibly power off the vm: so then I found this post that showed me how to change the default power options and I managed to have the vm cleanly shutdown when closing its window or quitting Fusion altogether. At this point I was hoping to have solved the problem but as it turns out upon invoking system shutdown OS X doesn't wait for the vm to shutdown and terminates Fusion before it has a chance to do so. At this point I started looking for a way to automate the process of shutting down the guest os via some advanced setting but had no luck in doing so. That's when I found a command to shut the vm down: vmrun and it worked. The only thing left was to find out a way to execute this script on os x shutdown and giving it a little time to power off completely. However this turned out to be a nightmare: I spent hours looking through several ways to do this with Startup Items, rc.shutdown, cron, launchd, etc... but none of them worked the way I had configured them. I have to say that I found very limited information on using launchd for a shutdown script execution and I know it's the latest thing in the OS X world so I'm hoping someone out there among you will be able to help me out with this. I still think this is an extremely basic feature to ask for and I was really surprised to find this little documentation on so many different aspects of this problem. Is Fusion too basic of an application for this? I really hope someone can help. Thank you very much in advance.

    Read the article

  • Cliq Wireless questions

    - by Nathan Adams
    Heres the deal: I am by no means a Linux expert, even less when it comes to the Android OS but lets see if we can't solve this problem. The problem I am having is that on the Cliq we have a broadcom chip. In order to use the wireless card you must first insert the module into the kernel. Fine: # insmod /system/lib/dhd.ko insmod /system/lib/dhd.ko # lsmod lsmod dhd 164936 0 - Live 0xbf000000 # BUT netcfg (or ifconfig in busybox) does not recognize that there is a wireless adapter there: # netcfg netcfg lo UP 127.0.0.1 255.0.0.0 0x00000049 dummy0 DOWN 0.0.0.0 0.0.0.0 0x00000082 rmnet0 UP 14.67.164.2 255.255.255.252 0x00001043 rmnet1 DOWN 0.0.0.0 0.0.0.0 0x00001002 rmnet2 DOWN 0.0.0.0 0.0.0.0 0x00001002 usb0 DOWN 0.0.0.0 0.0.0.0 0x00001002 # busybox ifconfig busybox ifconfig lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:282 errors:0 dropped:0 overruns:0 frame:0 TX packets:282 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:18754 (18.3 KiB) TX bytes:18754 (18.3 KiB) rmnet0 Link encap:Ethernet HWaddr EE:83:E8:B4:4A:ED inet addr:14.x.x.x Bcast:14.67.164.3 Mask:255.255.255.252 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:7148 errors:0 dropped:0 overruns:0 frame:0 TX packets:7659 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:2609236 (2.4 MiB) TX bytes:908575 (887.2 KiB) # For giggles if we attempt to launch wpa_supplicant anyways we get this: # wpa_supplicant -Dwext -ieth0 -c/data/misc/wifi/wpa_supplicant.conf wpa_supplicant -Dwext -ieth0 -c/data/misc/wifi/wpa_supplicant.conf ioctl[SIOCSIWPMKSA]: No such device ioctl[SIOCSIWMODE]: No such device Could not configure driver to use managed mode ioctl[SIOCGIFFLAGS]: No such device Could not set interface 'eth0' UP ioctl[SIOCGIWRANGE]: No such device ioctl[SIOCGIFINDEX]: No such device CTRL-EVENT-STATE-CHANGE id=-1 state=0 ioctl[SIOCSIWENCODEEXT]: No such device ioctl[SIOCSIWENCODE]: No such device ioctl[SIOCSIWENCODEEXT]: No such device ioctl[SIOCSIWENCODE]: No such device ioctl[SIOCSIWENCODEEXT]: No such device ioctl[SIOCSIWENCODE]: No such device ioctl[SIOCSIWENCODEEXT]: No such device ioctl[SIOCSIWENCODE]: No such device ioctl[SIOCSIWAUTH]: No such device WEXT auth param 7 value 0x0 - Failed to disable WPA in the driver. ioctl[SIOCSIWAUTH]: No such device WEXT auth param 5 value 0x0 - ioctl[SIOCSIWAUTH]: No such device WEXT auth param 4 value 0x0 - ioctl[SIOCSIWAP]: No such device ioctl[SIOCGIFFLAGS]: No such device # In dmesg we get: <4>[18300.494065] dhd_oob_enable_intr : enable <4>[18305.019976] dhd_net_start failed bus is not ready <4>[18305.020278] dhdsdio_probe: dhd_net_start failed! Do I need to specify the firmware with insmod? Why are we trying to control the interface manually instead of through the Android API? The Android API doesn't support ad-hoc connections as far as I can tell. The card, I am sure, most certainly can.

    Read the article

  • load balancing in Tomcat

    - by Alvin
    Hi all, I want to implement load balancing in Tomcat 6.0 so that we can run more than one Tomcat instance, and when any one instance is down another instance will serve our application. That way our application will never be down, even when a large number of concurrent requests comes in. But I have no idea how to implement it. Please give me your suggestions.

    Read the article

  • Sublime Text 2 'text bubbling'?

    - by Alex Mcp
    In vim and Notepad++ I have an awesome feature either mapped or built in that I've seen called text bubbling. I know about the Sublime documentation for mapping my own, but wanted to make sure I wasn't duplicating functionality: Basically when I have either block of text selected, or just a cursor on a line, I push (ctrl + up/down) or some other mapping, and the text is moved up or down, in a block, and the rest of the text 'flows' around it. Is this a native feature in Sublime Text or should I script it in?
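    For what it's worth, Sublime Text has built-in swap_line_up / swap_line_down commands that do exactly this (bound to Ctrl+Shift+Up/Down on Windows and Linux by default - check Preferences > Key Bindings for the defaults on your platform). If you ever need to rebind them, a user keymap entry would look roughly like this:

        // Sublime user key bindings sketch (its keymap files tolerate // comments).
        [
            { "keys": ["ctrl+shift+up"],   "command": "swap_line_up" },
            { "keys": ["ctrl+shift+down"], "command": "swap_line_down" }
        ]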

    Read the article

  • VMWare player - how do I start VM on machine with lower RAM? [closed]

    - by katit
    I moved an image from one machine to another. The problem is, I didn't shut down the instance, just suspended it. On machine #1 I have 32 GB of RAM and the instance had 16 GB allocated. On machine #2 I have only 10 GB, and the instance won't resume (due to memory). But I can't lower the amount of memory - I guess because the machine is "suspended". Is there any way to lower the memory, or to "shut down" the instance without powering it on? How do I start it on machine #2?
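
    One hedged way out, assuming you keep a backup copy of the VM folder first: a suspended VM stores its memory image in the .vmss/.vmem files next to the .vmx, so deleting those discards the suspended state (anything that lived only in the guest's RAM is lost) and lets the VM cold-boot, after which memsize in the .vmx can be lowered. A rough sketch; the folder path and the 6144 MB value are placeholders:

        import glob
        import os
        import re

        vm_dir = r"C:\VMs\MyVM"   # hypothetical path to the copied VM folder

        # 1. Discard the suspended state so the VM cold-boots instead of resuming.
        for pattern in ("*.vmss", "*.vmem"):
            for f in glob.glob(os.path.join(vm_dir, pattern)):
                os.remove(f)

        # 2. Lower the allocated RAM to something machine #2 can actually give it.
        vmx_path = glob.glob(os.path.join(vm_dir, "*.vmx"))[0]
        with open(vmx_path) as fh:
            config = fh.read()
        config = re.sub(r'memsize\s*=\s*"\d+"', 'memsize = "6144"', config)
        with open(vmx_path, "w") as fh:
            fh.write(config)

    The same two steps can also be done by hand in Explorer and a text editor.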

    Read the article

  • How to manage service failover?

    - by Jader Dias
    I am using Windows Network Load Balancing to keep my apps available even when one of the servers is down. But when all servers are up and one instance of a service on one of them is down, I would like requests not to be sent to that server, because those requests will be lost. Is there any solution that addresses this problem?
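
    NLB only tracks the health of the host, not of individual services, so the usual workaround is a small watchdog on each node that probes the app and drains the node out of the cluster while the probe fails. A rough sketch, assuming the wlbs.exe cluster-control utility is available on the nodes and that /health is a hypothetical endpoint your service exposes:

        import subprocess
        import time
        import urllib.error
        import urllib.request

        HEALTH_URL = "http://localhost/health"   # hypothetical app health endpoint
        CHECK_INTERVAL = 10                      # seconds between probes

        def app_is_healthy():
            try:
                with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
                    return resp.status == 200
            except (urllib.error.URLError, OSError):
                return False

        in_rotation = True
        while True:
            healthy = app_is_healthy()
            if not healthy and in_rotation:
                # Stop taking new connections on this node; existing ones drain out.
                subprocess.call(["wlbs", "drainstop"])
                in_rotation = False
            elif healthy and not in_rotation:
                subprocess.call(["wlbs", "start"])
                in_rotation = True
            time.sleep(CHECK_INTERVAL)

    A dedicated load balancer or reverse proxy with per-service health checks solves the same problem without custom scripting.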

    Read the article

  • Typing filename in standard open file dialog (Windows 7) - file name suggestion

    - by bybor
    When you use the standard Windows open-file dialog and start typing, it puts files whose names start with what you typed into a drop-down list. But on another PC with the same Windows 7, it also puts the first of them into the input box in which you type - like Firefox does with URLs - allowing you to press Enter immediately (without pressing 'Down' to select the file). I don't know why this behavior differs between the machines, but I want the suggested file name shown in the input box. How can this be achieved? Thanks.
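
    The inline suggestion seems to follow the same "inline AutoComplete" option that Internet Explorer exposes (Internet Options > Advanced > "Use inline AutoComplete"), stored per user in the registry. The key and value name below are an assumption to verify on a machine where the behaviour already works before relying on them; a sketch of switching it on:

        import winreg

        # Assumed location of the Explorer inline-AutoComplete setting (verify first).
        KEY_PATH = r"Software\Microsoft\Windows\CurrentVersion\Explorer\AutoComplete"

        with winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER, KEY_PATH, 0,
                                winreg.KEY_SET_VALUE) as key:
            winreg.SetValueEx(key, "Append Completion", 0, winreg.REG_SZ, "yes")

    Log off and back on so Explorer and the common dialogs pick up the change.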

    Read the article

  • Recovering Excel file

    - by Kristin Rousselo
    I have AutoRecover turned on in Excel and I save my work several times a day. Tonight I turned on the computer and found it had shut down for some reason. When I opened Excel, it showed only one AutoRecover file, from 8 am this morning, even though I worked on my spreadsheet for several hours this afternoon. Is there any way to recover the work I did this afternoon?

    Read the article

  • Password Won't Work after Crash

    - by Jack Cornell
    My Windows 7 computer locked up, so I shut it down (holding down the power button until it turned off). Logging back on, I got an error message that it couldn't load my profile (as if I were entering the wrong password). I logged on to the guest account, but I can't change anything because it won't accept my password. Is this a serious problem, or do I just need to reset the password with one of the options available on your site?

    Read the article

  • Why won't my Western Digital My Book Home 500GB drive mount on my Macbook Pro via Firewire?

    - by Ryan O
    I can connect the drive via USB and it mounts correctly. However, when I connect via FireWire, the drive powers down. The drive does realize it is plugged in: when I plug in via FireWire, the lights on the front flash the same way as when it is mounting via USB, but then the drive powers down. I've upgraded the firmware on the drive, but it still won't mount via FireWire.

    Read the article

  • Freezing a dead(ish) laptop battery...

    - by Wesley
    I have a Compaq CQ50-215CA laptop with a battery that does not hold a charge properly. Vista's battery meter does not read the remaining charge correctly: the laptop will randomly shut down at ~60%, and sometimes the meter climbs back to 100% before the machine shuts down without warning. So, does freezing a dead-ish laptop battery somehow repair it and let it hold a charge again?

    Read the article

  • Windows Server 2003 W3SVC Failing, Brute Force attack possibly the cause

    - by Roaders
    This week my website has disappeared twice for no apparent reason. I logged onto my server (Windows Server 2003 Service Pack 2) and restarted the World Wide Web Publishing Service; the website was still down. I tried restarting a few other services like DNS and ColdFusion, and the website was still down. In the end I restarted the server and the website reappeared. Last night the website went down again. This time I logged on and looked at the event log. SCARY STUFF! There were hundreds of these, at a frequency of around 3-5 a minute:

        Event Type: Information
        Event Source: TermService
        Event Category: None
        Event ID: 1012
        Date: 30/01/2012  Time: 15:25:12
        User: N/A
        Computer: SERVER51338
        Description: Remote session from client name a exceeded the maximum allowed
        failed logon attempts. The session was forcibly terminated.

    At about the time my website died there was one of these:

        Event Type: Information
        Event Source: W3SVC
        Event Category: None
        Event ID: 1074
        Date: 30/01/2012  Time: 19:36:14
        User: N/A
        Computer: SERVER51338
        Description: A worker process with process id of '6308' serving application
        pool 'DefaultAppPool' has requested a recycle because the worker process
        reached its allowed processing time limit.

    Which is obviously what killed the web service. There were then a few of these, with no more of the first error type:

        Event Type: Error
        Event Source: TermDD
        Event Category: None
        Event ID: 50
        Date: 30/01/2012  Time: 20:32:51
        User: N/A
        Computer: SERVER51338
        Description: The RDP protocol component "DATA ENCRYPTION" detected an error
        in the protocol stream and has disconnected the client.
        Data:
        0000: 00 00 04 00 02 00 52 00   ......R.
        0008: 00 00 00 00 32 00 0a c0   ....2..À
        0010: 00 00 00 00 32 00 0a c0   ....2..À
        0018: 00 00 00 00 00 00 00 00   ........
        0020: 00 00 00 00 00 00 00 00   ........
        0028: 92 01 00 00               ...

    I am concerned that someone is trying to brute-force their way into my server. I have disabled all the accounts apart from the IIS ones and Administrator (which I have renamed). I have also changed the password to an even more secure one. I don't know why this brute-force attack caused the web service to stop, and I don't know why restarting the service didn't fix the problem. What should I do to make sure my server is secure, and what should I do to make sure the web server doesn't go down any more? Thanks.
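
    Before hardening anything it helps to confirm the scale of the attack and then close off the path it is using: restrict RDP to a VPN or to known management IPs (or at least move it off port 3389), turn on an account lockout policy, and note that the W3SVC 1074 entry is just the application pool recycling after hitting its processing-time limit, which heavy load can trigger. A rough triage sketch that counts the TermService 1012 events, assuming Python plus the pywin32 package on the server; adjust the log name if those entries live in a different log on your machine:

        import win32evtlog  # from the pywin32 package

        server = None        # None = local machine
        log_name = "System"  # log containing the TermService 1012 entries above

        handle = win32evtlog.OpenEventLog(server, log_name)
        flags = (win32evtlog.EVENTLOG_BACKWARDS_READ |
                 win32evtlog.EVENTLOG_SEQUENTIAL_READ)

        count = 0
        while True:
            records = win32evtlog.ReadEventLog(handle, flags, 0)
            if not records:
                break
            for rec in records:
                if rec.SourceName == "TermService" and (rec.EventID & 0xFFFF) == 1012:
                    count += 1

        win32evtlog.CloseEventLog(handle)
        print("TermService 1012 (failed-logon limit) events found:", count)

    If the count keeps climbing after the RDP exposure is reduced, the traffic is coming from somewhere you have not blocked yet.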

    Read the article

  • how to disable these logs on the screen?

    - by user62367
    Using Fedora 14 (the script: http://pastebin.com/raw.php?i=jUvcfugw): I mount an anonymous Samba share and the script checks it every 5 seconds. It works - OK, great! But when I shut down my Fedora box, I can see the lines this script prints, many times, about ~50x on the screen. How can I disable these lines from appearing when shutting down? I (and other people) don't want to stare at them for ~5 seconds. Thank you!

    Read the article

  • How to hide a trusted domain in the logon screen?

    - by Massimo
    I need to create a bidirectional trust between two Active Directory domains. But management is worried that users will be puzzled when they see another domain name in the drop-down list on the Windows logon screen (many of them use Windows XP), and that help-desk calls for failed logins caused by selecting the wrong domain will skyrocket. Also, the two domain names are quite similar, adding to the possible confusion. Is there any way to hide a trusted domain from the drop-down list on the Windows logon screen?

    Read the article

  • Windows Datacenter 2008 recovery ISO

    - by Will
    Hello, I am using VirtualBox with Windows Datacenter 2008 to play around with some web development. The last time I had to shut down the computer, it installed an update and shut down normally. When I rebooted, it started a check disk and processed a bunch of files (maybe from earlier hard power-offs). Now I get a blue screen of death every time it loads (safe mode, etc.). I have googled around for a boot/recovery disk, but can't seem to find one for Datacenter. Cheers -Will

    Read the article

  • games not running on full screen

    - by ALY
    I have an HP dv6 laptop with a Core 2 Duo processor, a 320 GB hard drive and an Intel video card with 1 GB. Every time I try to play a game I have to run it windowed rather than full screen: if it runs full screen it freezes and minimizes down to the Windows taskbar, and when I try to bring it back up it goes down again. I've tried this with many games and it's the same. I've tried decreasing the screen and game resolution, and downloading the latest version of DirectX - same thing. I use Windows 7, so can anyone help?

    Read the article
