Search Results

Search found 4283 results on 172 pages for 'initial'.


  • Localhost stop resolving/serving local site after a few clicks IIS 7.5

    - by Jo-Pierre
    I searched and tried to find the answer in a previous question, but I'm not sure it was ever resolved, and I couldn't comment on it to find out more, so I decided to post a new question. The previous question is here: http://serverfault.com/questions/314333/localhost-stop-resolving-after-a-few-minutes-iis-7-5. I set up a new website on Windows 7 with IIS 7.5, give it a host header, add a matching 127.0.0.1 entry to the hosts file, and browse the site. After about the second or third time clicking around on the local site, it starts hanging: it looks like it is trying to load, but eventually comes back in Firefox with "The connection was reset" (this takes about 30-50 seconds). I then used a program like CurrPorts to view the listening ports, and for the initial request everything looks good. Once the site is hanging, though, I no longer see the hit coming through at all; it's as if the browser loses touch with IIS. If I open a different browser, it works fine for about two clicks, then the same problem occurs. Does anyone know what could be causing this, or how to resolve it?
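
    For illustration, this is the kind of hosts-file mapping being described, plus a quick way to check whether anything is still listening on port 80 once the hang starts ("mysite.local" is an example host header, not the real one):

        hosts file entry (C:\Windows\System32\drivers\etc\hosts):
            127.0.0.1    mysite.local

        quick check from a command prompt while the site is hanging:
            netstat -ano | findstr :80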

    Read the article

  • Is there anything like Heroku for PHP and/or .NET?

    - by Wayne M
    In my area PHP is very widespread, and so is .NET. Ruby not so much; most places have never heard of it. For some personal things I am "forced" to choose Rails because I want to take advantage of Heroku - the ability to deploy and scale in the cloud very easily is the main reason. They also offer a small free plan, with no ads or strings attached, that I can use for demo sites or, in this case, for my business's static page; as a totally bootstrapped startup I have maybe $50 in initial capital and cannot afford monthly fees while I'm getting started. Are there any similar offerings for other languages? Specifically, I really like the small, free 5 MB site that Heroku offers - is there anything like that for PHP and/or .NET? I'm not even that concerned about the "cloud" part, though that would be a nice bonus. If there is, I might be able to kill two birds with one stone and pick up a useful skill while doing my own thing, instead of using something that nobody else around here knows or cares about. I should add that I'm specifically interested in something with a free plan. As I said, Heroku offers 5 MB sites, and you can have as many of them as you want for free; I have yet to find anything similar for any other platform (most of the "free" hosts require ugly banners on your page or don't allow you to use your own domain name), and to be honest I'm not thrilled about using Ruby on Rails for everything simply to take advantage of this. I'm asking here because I already asked on Stack Overflow and someone suggested it would be better suited to this site.

    Read the article

  • Remote offscreen rendering

    - by redmoskito
    My research lab recently added a server with a beefy NVIDIA graphics card, which we would like to use for scientific computation. Since it isn't a workstation, we'll have to run our jobs remotely, over an SSH connection. Most of our applications need to do OpenGL rendering to an offscreen buffer and then run image analysis on the result in CUDA. My initial investigation suggests that X11 forwarding is a bad idea, because the OpenGL rendering would occur on the client machine (or rather on the X11 server - what a confusing naming convention!) and would suffer network bottlenecks when sending our massive textures. We will never need to display the output, so it seems like X11 forwarding shouldn't be necessary, but OpenGL needs $DISPLAY set to something valid or our applications won't run. I'm sure render farms exist that do this, but how is it accomplished? I suspect this is a simple X11 configuration issue, but I'm too unfamiliar with it to know where to start. We're running Ubuntu Server 10.04 with no gdm, GNOME, etc. installed; the xserver-xorg package is installed, however.
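
    For reference, the $DISPLAY requirement can be satisfied without any forwarding by running an X server locally on the compute node. A minimal sketch, assuming the xvfb package and a made-up job name (note that Xvfb renders in software, so a headless X server on the NVIDIA driver is the usual route when GPU-backed OpenGL is actually needed):

        sudo apt-get install xvfb
        Xvfb :1 -screen 0 1280x1024x24 &     # virtual framebuffer display, no monitor needed
        export DISPLAY=:1
        ./offscreen_render_job               # hypothetical application

        # GPU-accelerated variant (assumes the proprietary NVIDIA driver is installed):
        sudo nvidia-xconfig --allow-empty-initial-configuration
        sudo X :0 &                          # headless X server bound to the NVIDIA card
        DISPLAY=:0 ./offscreen_render_job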

    Read the article

  • XAMPP server giving 404 error when requested by ipv4 connection

    - by boyb
    This is in reference to a previous question that I asked and that was answered by womble: http://serverfault.com/a/406280/127729. Quoting that answer: "So, now we have the real DNS records, we can do some diagnosis. dig for both A and AAAA on akosiboybastos.broker.freenet6.net gives a valid response, with an appropriate address. Good. dig for both A and AAAA on bastosforum.strangled.net gives the same responses (with a CNAME response thrown in). Also good. This means that the problem is not DNS-related, as those records are in order. wget -6 bastosforum.strangled.net/ gives a 200 OK response. wget -4 bastosforum.strangled.net/ gives a 404 Not Found response. This means that your webserver is misconfigured so that it's not serving the response you desire on IPv4. Given that the initial DNS problem asked in this question has been solved, I would recommend posting a new question with relevant webserver-related configuration, if you can't determine the configuration error yourself." I am using XAMPP (latest version) running phpBB 3.0.10 over an IPv6 tunnel from freenet6, and my domain is akosiboybastos.broker.freenet6.com; nothing fancy about the installation, just an out-of-the-box install with a few cosmetic mods. Both IPv4 and IPv6 traffic can connect using that URL, but when I put a CNAME record on my test domain, bastosforum.strangled.net, pointing it to akosiboybastos.broker.freenet6.com, only IPv6 can connect. As womble suggested, this is a misconfigured web server. To be honest, I don't know where to start checking on the server, since it works fully if you use the domain given by freenet6 (akosiboybastos.broker.freenet6.com). Any information on how to go about this server issue is welcome, as I'm really a noob when it comes to computers. Regards, boyb
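
    Given that the site answers on the freenet6 hostname over both protocols but returns 404 for the CNAME only over IPv4, one thing worth checking is whether a name-based virtual host matches only the original hostname. A hedged sketch for XAMPP's httpd-vhosts.conf (the paths are the XAMPP defaults and are assumptions, not taken from this setup):

        NameVirtualHost *:80

        <VirtualHost *:80>
            DocumentRoot "C:/xampp/htdocs"
            ServerName  akosiboybastos.broker.freenet6.com
            ServerAlias bastosforum.strangled.net
        </VirtualHost>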

    Read the article

  • How do I rsync an entire folder based on the existence of a specific file type in that folder

    - by inquam
    I have a server set up that receives movies into a folder, and I then serve these movies over DLNA. But all kinds of files end up in that initial folder: pictures, music, documents, etc. I thought I'd fix this by running the following command inside that folder: rsync -rvt --include='*/' --include='*.avi' --include='*.mkv' --exclude='*' . ../Movies/ This works: it scans the given folder and moves all the movies with the given extensions to the Movies folder. But I wonder if there is any way to tell rsync that, if a folder is found containing a movie with one of the given extensions, it should sync that entire folder, including other files such as .srt, to make it easier to move subtitles along with the movie. I have a solution figured out via a PHP script (yes, I actually do most of my scripting on Linux in PHP... just a habit that stuck a long time ago), but if rsync can handle it from the start, that would be super. Also, I have noticed that this rsync line actually copies all the top-level folders in the given folder: if a folder contains no movie, it still creates an empty folder. How do I prevent rsync from doing this, and save myself the trouble of deleting all the empty folders in Movies?
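
    Two possible approaches, sketched under the assumption of GNU find and folder names without embedded newlines: rsync's --prune-empty-dirs (-m) drops the directories that end up empty, and a small find loop copies the whole parent folder of every matching movie, subtitles included.

        # 1) same filter as above, but skip directories that end up empty:
        rsync -rvtm --include='*/' --include='*.avi' --include='*.mkv' --exclude='*' . ../Movies/

        # 2) sync the entire folder of every movie that is found (.srt files come along):
        find . -type f \( -name '*.avi' -o -name '*.mkv' \) -printf '%h\n' | sort -u |
        while read -r dir; do
            rsync -rvt "$dir" ../Movies/
        done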

    Read the article

  • How do I disable tablet gestures in windows 8?

    - by ???
    I'm using a Wacom Intuos4 and I have recently upgraded to Windows 8. I don't have a problem when using Photoshop however I occasionally draw on flash based online boards. The problem is, when I drag the pen in a direction repetitively (which is basically all I do when drawing) it's detected as a gesture, sometimes causing Chrome to go to the previous page (left drag) and making me lose the entire thing. Is there a way to disable these "gestures"? I believe this is not something caused by Windows 8 (or Charms) because I run Windows in English although it's not the initial language that Windows was installed in. I changed to English long after the installation. When Windows takes a move as a gesture, a small text pops up next to the cursor informing me about what I have just done and those pop ups are not even in English. I'm sorry for failing to be any more specific here but these gestures could be a feature of either Windows (unlikely), the tablet, Chrome or the computer itself. It's an Acer Aspire and it has one of those little stickers on it that specifies some of the features and one of them reads "Multi-Gesture" (referring to the touchpad, I guess). Could it be that this Multi-Gesture feature somehow decided to expand and apply for my tablet as well? If so, how do I disable it?

    Read the article

  • Windows 8 install from USB freezes

    - by Rafael Almeida
    I'm trying to install Windows 8 from an 8 GB Kingston DataTraveler. I'm using the Windows 7 USB/DVD Download Tool to put the ISO onto the flash drive. It copies the files, but at the end it says it 'had a problem with bootsect' and could not make the flash drive bootable. This seems to be because my current system is 32-bit Windows 7, while the bootsect.exe in the ISO is a 64-bit executable. So I downloaded the 32-bit bootsect.exe and made the drive bootable by running: bootsect /nt60 E: /mbr I then restarted and managed to boot from the flash drive, but now everything is very slow. It takes about two minutes for the initial black screen with the Windows logo and spinner to go away, then it goes to a purple-ish blank screen that stays for about five more minutes, and finally it shows the dialog asking for the installation, date/time and keyboard languages. I fill them in, click "Install now", and it takes about three more minutes on a "Setup is starting" screen. After that the PC apparently reboots, the CPU fan speeds up considerably, and there is no video and nothing more happens, even after more than ten minutes. What is happening? I have already tried another USB port and even installing from a Samsung G3 Station 2 TB external hard disk, but the same thing happens. The file transfer speed to the Kingston drive was only about 3 megabytes per second.
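
    If the tool-prepared stick is suspect, the drive can also be prepared entirely by hand. The sketch below assumes E: is the flash drive and D: is the mounted ISO (drive letters and the disk number are assumptions); the indented lines are typed at the DISKPART> prompt in an elevated command prompt:

        diskpart
            list disk
            select disk 2          (pick the 8 GB Kingston - verify the number first!)
            clean
            create partition primary
            active
            format fs=ntfs quick
            assign
            exit
        xcopy D:\*.* E:\ /e /h /f
        bootsect /nt60 E: /mbr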

    Read the article

  • Using screen to monitor non-interactive scripts (or some other solution)

    - by Michael
    I have some autonomous scripts that run commands on remote machines over ssh. These scripts rely on getting stdout, stderr, and the return code of each command run. I want to be able to monitor the progress of the scripts on each target machine so that I can see if something has hung and possibly intervene if necessary. My initial idea was to have the scripts run commands in a screen session, so that the person monitoring could simply attach to the session with screen -x. However, it was hard to do that from a script since screen is an interactive program. I can send a command to the screen session with screen -S session -X stuff "command^M", but then I don't get the output and return code that I need back. My second idea was to put script /path/to/log in ~/.bash_profile and log the entire session to a file. Then the monitoring person could simply tail the log file. However, this doesn't provide the interactivity that I was looking for. Any ideas on how to solve this problem?
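
    One approach that keeps stdout, stderr and the exit code available to the controlling script while still letting a person watch live is to append everything to a log through tee and let the observer tail it. A minimal sketch, assuming bash on the target machines and a made-up log path and command name:

        LOG=/tmp/remote-job.log                      # hypothetical log location
        some_command 2>&1 | tee -a "$LOG"            # observer runs: tail -f /tmp/remote-job.log
        rc=${PIPESTATUS[0]}                          # exit code of some_command, not of tee
        echo "some_command exited with $rc" | tee -a "$LOG"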

    Read the article

  • Blue screen of Death on Install

    - by Toby Allen
    I have a machine with Windows Vista installed. It has an Intel X25 SSD as the system drive, and I want to reinstall it with XP (I plan to format and overwrite Vista). When I boot from the Dell XP CD, it loads the initial drivers and then I get a blue screen. This is quite concerning. The installed OS works OK, but it's giving me problems, so I want to remove it. Should I just format the SSD and try again? Will that make any difference? Can I do something to avoid hitting the blue screen? It's possible I had corrupt sectors on one of the other disks; will a new XP install use the system drive or drive 0? Can I force the install to use a specific drive? Error: *** STOP: 0x0000007B (0xF78D2524,0x0000034,0x00000000,0x00000000) Update: I never did find the answer, however: I removed the SSD and tried to install on the other disk - CRASH. I disconnected the other disk and tried to install with only the SSD plugged in - CRASH. I removed one stick of RAM - CRASH. I used a Windows 7 CD - NO CRASH.

    Read the article

  • How to divert traffic based on hostname using HAProxy?

    - by Bosky
    I've had some initial success with HAProxy, setting up a bunch of app servers listening on various ports. I now have another web server listening on one port, and I'd like to know what changes to make to my config to route traffic by hostname as well. The current setup is the following, assuming my Apache web server is running at example.com:8001 and my bunch of app servers are at 0.0.0.0:8081, 0.0.0.0:8082 and 0.0.0.0:8083:

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            maxconn 4096
            debug
            #quiet
            #user haproxy
            #group haproxy

        defaults
            log global
            mode http
            option httplog
            option dontlognull
            retries 3
            redispatch
            maxconn 2000
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000

        listen appservers 0.0.0.0:80
            mode http
            balance roundrobin
            option httpclose
            option forwardfor
            #option httpchk HEAD /check.txt HTTP/1.0
            server inst1 0.0.0.0:8081 cookie server01 check inter 2000 fall 3
            server inst2 0.0.0.0:8082 cookie server02 check inter 2000 fall 3
            server inst3 0.0.0.0:8083 cookie server01 check inter 2000 fall 3
            server inst4 0.0.0.0:8084 cookie server02 check inter 2000 fall 3
            capture cookie vgnvisitor= len 32

    (Any other comments on the above setup are welcome.) Now I'd like to keep all of the above, but in addition, if the hostname is myspecialtopleveldomain.com, I would like to route that traffic to example.com:8001.
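
    With HAProxy of that vintage, one common way to do this is to keep the global and defaults sections as they are and replace the single listen block with a frontend that inspects the Host header plus two backends. The sketch below is illustrative only; the frontend/backend names are invented:

        frontend http-in
            bind 0.0.0.0:80
            mode http
            capture cookie vgnvisitor= len 32
            acl is_special hdr(host) -i myspecialtopleveldomain.com
            use_backend apache_site if is_special
            default_backend appservers

        backend appservers
            mode http
            balance roundrobin
            option httpclose
            option forwardfor
            server inst1 0.0.0.0:8081 cookie server01 check inter 2000 fall 3
            server inst2 0.0.0.0:8082 cookie server02 check inter 2000 fall 3
            server inst3 0.0.0.0:8083 cookie server01 check inter 2000 fall 3
            server inst4 0.0.0.0:8084 cookie server02 check inter 2000 fall 3

        backend apache_site
            mode http
            server apache1 example.com:8001 check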

    Read the article

  • Relevance and Necessity of SNMP

    - by Adam Tannon
    Edit: I am in the process of designing a Java-based monitoring tool that will send back periodic "health checks" of a Java app deployed to a cluster of GlassFish servers. I am trying to figure out the best protocol for this monitoring tool to send information back to the monitoring server on. After an initial research effort on my part, it seems like SNMP is just a protocol for monitor-type applications to communicate the "health status" of something (a part of a network, a server, a cluster, an application, etc.) to the rest of the network. If the above is incorrect, please correct me!!! Assuming the generalization is more or less accurate, my next question is: why is this a protocol!?!? In the age of REST/SOAP/TCP protocols, why is there the need for a standardized protocol that only fits one type of application (monitoring)? In other words, if I'm a developer assigned to building a new monitoring tool that periodically polls a server and reports on its CPU and available memory, what advantages does SNMP give me over just POSTing to a RESTful API via plain 'ole HTTP? I'm sure I'm missing something here - I just need someone to help connect the dots! Thanks in advance!

    Read the article

  • Putting and configuring grub on an external drive

    - by HisDudeness
    I want to put a bunch of small emergency operating systems (GParted Live, DSL, Puppy Linux and so on) on a partitioned USB pen drive, with a dedicated GRUB boot loader on it, which should start when I select the drive in the BIOS boot menu. The problem is that when writing the GRUB boot entries I have to say where the kernel and initial ramdisk files are located, but the USB drive can get a different device name depending on when I plug it in and whether other external drives are mounted. So how can I write entries that automatically refer to the drive GRUB is installed on, without specifying absolute device names that might change (I mean things like (hd1,msdos1) or /dev/sdb1)? And while we are at it, can GRUB work on a device that has no operating system on it for it to refer to? I mean, I want to run sudo grub-install /dev/sdb from the LMDE system I'm on right now, but I won't have LMDE on my pen drive. Is that a problem? And if I install GRUB on another device, will I keep the GRUB I have right now on my HDD?
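
    As a hedged sketch: GRUB 2 can locate partitions by filesystem UUID instead of a fixed (hdX,Y) name, and installing with --boot-directory onto /dev/sdb should leave the GRUB in the HDD's boot sector untouched. The UUID, kernel paths and kernel arguments below are placeholders, not values from this setup:

        # install GRUB onto the pen drive, keeping its files on the stick itself
        # (assumes the stick is /dev/sdb and its first partition is mounted at /mnt/usb):
        sudo grub-install --boot-directory=/mnt/usb/boot /dev/sdb

        # in the stick's grub.cfg, locate the partition by UUID instead of (hdX,Y):
        menuentry "GParted Live (illustrative entry)" {
            search --no-floppy --fs-uuid --set=root 1234-ABCD
            linux  /live/vmlinuz boot=live
            initrd /live/initrd.img
        }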

    Read the article

  • How does KMS (Windows Server 2008 R2) differentiate clients?

    - by Joe Taylor
    I have recently installed a KMS server in our domain and deployed 75 new Windows 7 machines using an image I made with Acronis True Image. There are two variations of this image currently rolled out. When I try to activate the machines, they report that the KMS count is not sufficient. On the server, slmgr /dlv shows:

        Key Management Service is enabled on this machine.
        Current count: 2
        Listening on Port: 1688
        DNS publishing enabled
        KMS Priority: Normal
        KMS cumulative requests received from clients: 366
        Failed requests received: 2
        Requests with License status unlicensed: 0
        Requests with License status licensed: 0
        Requests with License status Initial Grace period: 1
        Requests with License status License expired or hardware out of tolerance: 0
        Requests with License status Non genuine grace period: 0
        Requests with License status Notification: 363

    Is this to do with the fact that I've used the same image for all the PCs? If so, how do I get around it? Would changing the SID help? OK, knowing I've been thick, what's the best way to rectify the situation? Can I sysprep each individual machine to OOBE, or would NewSID work?
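
    For context on the question in the title: a KMS host counts clients by their client machine ID (CMID), which cloned machines share unless the image was generalized, so identical clones all look like one client. One commonly suggested way to reset the CMID on an already-deployed clone, offered here as a sketch rather than a verified procedure:

        slmgr.vbs /rearm      (run in an elevated prompt, then reboot)
        slmgr.vbs /ato        (re-attempt activation against the KMS host)
        slmgr.vbs /dlv        (the client machine ID shown should now be unique per machine)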

    Read the article

  • Ubuntu 12.04 can't boot after installing with software RAID 1

    - by Bill
    I've been trying to install Ubuntu with software RAID on my server, and there is obviously something I don't understand about the process. This is the guide I followed: https://help.ubuntu.com/11.04/serverguide/advanced-installation.html I have two identical 1 TB disks in the server. I went through the initial install process and manually set up my partitions. On each disk I created: a 100 MB partition for EFI boot (I didn't originally have this, but added it based on a forum post I found after my original install failed to boot; I ended up with an EFI boot partition because that is what the 'guided partitioning' decided to do), a 970 MB partition for /, and a 30 MB partition for swap. I then created new RAID 1 devices combining the two partitions, one from each disk, so that each partition is mirrored, and configured their usage as stated above. After saving the configuration I said yes to booting in a degraded state. The rest of the setup went normally, with no errors of any kind; I saw GRUB being installed, again without errors. However, after rebooting the server I get the dreaded 'Insert boot media' and nothing happens. I loaded up the recovery disk and the mdadm configuration looks correct: md0 is my EFI boot partition, md1 is my / partition using ext4, and md2 is my swap partition. Running file -s /dev/md0 doesn't indicate that GRUB is there, so I attempted to reinstall GRUB using the recovery disk; I selected the md0 device and it appeared to install just fine. Running file -s /dev/md1 shows the error "needs journal recovery" - I'm not sure whether that's related or how to fix it. Rebooting gives me the same problem, no boot media found. I've searched around the internet but can't figure out what to do next or, more importantly, how to troubleshoot what exactly is going wrong. Thanks!
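
    For what it's worth, a pattern that often applies to mirrored system disks is that the boot loader must be written to each physical member, not to the md device. A rough sketch from a rescue shell follows, with device names assumed; if the machine really boots via UEFI, the EFI system partition generally has to live outside the md array and be handled separately, so treat this strictly as the BIOS/legacy-boot variant:

        # "needs journal recovery" usually just means an unclean unmount; check the filesystem first:
        sudo fsck.ext4 /dev/md1          # run only while md1 is not mounted

        # then install GRUB onto BOTH physical members:
        sudo mount /dev/md1 /mnt
        for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
        sudo chroot /mnt
        grub-install /dev/sda
        grub-install /dev/sdb
        update-grub
        exit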

    Read the article

  • Graphics card ATI Asus 9250 128 MB AGP problem.Monitor switching off.

    - by Dominick1978
    I have an old system with these specs:

        Motherboard: VIA P4X266A (with AGP 4x)
        CPU: P4 1.7 GHz
        Memory: 1152 MB DDR SDRAM
        Graphics card: Voodoo 3 2000 16 MB
        PSU: 300 W

    It also has one DVD-ROM drive, one DVD-RW drive and two hard drives (all four connected via Molex), plus a SoundBlaster sound card and an Ethernet card (both connected via PCI). OS: XP Pro. Recently I bought an Asus 9250 128 MB AGP card to replace the Voodoo. When I switch on the PC, the initial screens (up until just after the XP logo) are sometimes distorted, with blurred colours. Once XP is loaded there is some flickering, but otherwise everything is OK. XP can't recognize the card and sees it as just a VGA adapter. I downloaded the latest XP drivers from the ATI website and installed them; after the restart everything is fine (no distorted image or blurring) until just after the XP logo, and then the monitor turns off while the PC is still running. I have tried many drivers but the problem persists (of course I removed the Voodoo drivers from the display adapter properties first). Only once did I manage to get into XP (after changing the BIOS setting for the graphics card from 256 MB to 128 MB), but the drivers in the control panel had an exclamation mark ("ATI 9250!"), with "ATI 9250 secondary" below them without one under the display adapter tab, and the ATI Catalyst program said it couldn't find the card. That was the only time I got beyond the XP logo; now the monitor switches off automatically. So, what do you think? 1) Is the card broken? 2) Is it the card's drivers? 3) Is the PSU at fault? 4) Anything else? Thanks for your help, and excuse my English!

    Read the article

  • Why can't I connect to my home SSH (SFTP) server? What am I doing wrong?

    - by Rolo
    I am new to this topic of creating an SFTP server on one's own computer. I would like to access a folder on my Windows XP computer via SFTP from another computer or from a phone. This is what I have done so far: I have installed SSH for Windows, and everything is set up correctly, because I can access the folder on my PC via WinSCP. However, I cannot access it from my phone; it simply doesn't connect. The phone can be on the same wireless network as the Windows XP computer, but I would prefer to be able to access this when not on the same network. From what I have read and understood, the following information is needed to connect:

        1) Host name: my computer's IP address, which I get by typing ipconfig at a cmd prompt (on the computer itself I simply use localhost or 127.0.0.1).
        2) Port number: port 22 (I have also added this to the port-forwarding section of my router).
        3) Username: my Windows XP username. This, however, is my full name, including my middle initial followed by a period. I am wondering whether this is causing problems when connecting from the phone, since the name contains spaces and punctuation (the period).
        4) Password: the password of my Windows XP account.

    Extra info: by phone I mean an Android phone, and I am using an FTP/SFTP app to access my PC over the phone's cellular network (I also tried the wireless network, but that didn't work either). I have tried more than one program: one tells me "Connection timed out" and another tells me "timeout: socket is not established". I know I could use the noip site, but I would prefer to get a direct connection working first; also, being new to this, I would like to understand exactly what noip does and whether they would be able to see my files as they are transferred from phone to PC. Thanking you in advance for your help.

    Read the article

  • Easiest way to find out if user has either Windows 7 or Vista (through telephone support)?

    - by Rabarberski
    If you have to provide some initial troubleshooting support by phone [or email] and you don't have access to the PC itself, what is the easiest, most foolproof question for finding out whether the 'dumb' user is running Windows 7 or Windows Vista? For example, telling Windows XP apart from Vista/7 is easy: just ask the user whether the button in the bottom-left corner is (a) square with the word 'Start' on it, or (b) round. But how do you tell the difference between Vista and 7? Edit: all the existing answers require the user to type something, and to do it correctly. Sometimes even that is hard for a computer-illiterate user. My XP example only requires looking. If it exists (although I am afraid it doesn't), a solution based purely on something that is visually different between Vista and 7 would stand above all others. (Which makes Dan's suggestion to "turn over the box and look at the label" not so stupid.) Perhaps the small 'show desktop' rectangle at the right end of the taskbar (was that present in Vista)?

    Read the article

  • Lenovo Thinkpad T430 not booting from HDD if there is a USB modem connected

    - by user93353
    I have a Lenovo ThinkPad T430 running Windows 7. I use a ZTE USB modem (something like this) for my internet connection, and I usually leave the modem plugged into the USB port even when the laptop is shut down or hibernating. This worked fine on my earlier laptops, but the Lenovo doesn't boot if the modem is in the USB port: it shows the initial character-based screen with the ThinkPad message and BIOS details and then just waits. If I pull out the modem, it carries on. I have disabled USB as a boot option in my BIOS settings, but even then this still happens sometimes (though not every time), and likewise when resuming from hibernation. The USB modem also carries drivers and an ISP connection client that get installed the first time you use it on a machine. I have used multiple laptops (HP, Dell, Acer, Gateway) and never faced this problem before, and I have friends who use other ThinkPad models who haven't faced this issue either. Are there any resolutions or workarounds for this?

    Read the article

  • Methods and practices for managing a network that has no internet connection

    - by FaultyJuggler
    Originally asked on Super User, but I realized it belongs here. Long story short, I am setting up a network of 32 servers of varying specs that will be used for testing and development. We will be using Red Hat Linux. We don't have a router yet and are looking into making one of the servers act as our router/DHCP server, etc. The small cluster will be on an isolated network with no internet access. I can use external hard drives and discs to transfer anything from external sources onto machines on the network, so this isn't a locked-down secure network; it just won't have a direct connection to the outside world. I've worked on such setups before, but always long after they were set up, so I'm reaching out to see how other groups have handled the initial setup and ongoing maintenance of this kind of environment. What is the best way to get all the machines configured and kept up to date? What are the best ways to automate updates, network-wide installs, and so on? Given that I have large multi-terabyte external hard drives that can be used to drop whatever files are needed onto a central server, how do I then distribute those files and install their contents? I've done Perl scripting and some teammates have played with Puppet, so we aren't completely in the dark; I just want to avoid reinventing the wheel, since this is a common challenge.
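
    One common pattern for Red Hat-style machines with no internet access is to keep one box as a local yum repository fed from the external drives and point every node at it. A sketch only; host names and paths are made up:

        # on the machine chosen as the central repo server:
        yum install createrepo                     # from the install media or the external drive
        mkdir -p /srv/repo/updates
        cp /media/usbdrive/rpms/*.rpm /srv/repo/updates/
        createrepo /srv/repo/updates               # generate the repodata
        # then export /srv/repo over HTTP (e.g. Apache) or NFS

        # on every other node, a repo file such as /etc/yum.repos.d/local.repo:
        #   [local-updates]
        #   name=Local updates
        #   baseurl=http://repo-server/repo/updates
        #   enabled=1
        #   gpgcheck=0
        yum clean all && yum update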

    Read the article

  • Git workflow for two tight-knit projects

    - by Pioul
    Two very similar projects: I'm maintaining an online Markdown editor, using Git as the RCS (and, incidentally, made available on GitHub). From this web app I've created a Chrome app: the code is the same, aside from some Chrome technicalities. I care about open-sourcing both projects. Still, since the Chrome app's code is the same as the web app's except for some dull details, I initially chose to (1) not publish the Chrome app on GitHub, and (2) not use Git to manage its code. Instead, I would manually review the web app's commits and then replicate the few changes in the Chrome app. ... slightly drifting apart: however, I've now decided to add a feature to the Chrome app only. So even though both codebases will remain broadly similar, they'll diverge enough to make me reconsider my initial decision not to version-control or share the Chrome app's source code. Since I'm now willing to use Git to version-control both apps, and I want to share both of them on GitHub, how should I go about it? Should I use two different repositories, or one repo with two long-running branches? What would be the pros and cons of each approach in this context? And what would be the easiest/fastest way to regularly "import" commits from the web app into the Chrome app, given that the web app is going to remain the master branch? Is cherry-picking the only solution?
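
    To make the single-repo, two-branch option concrete, here is one possible workflow, sketched with an invented branch name; regular merges pull shared work from master into the Chrome branch, while cherry-picking remains an option when only selected commits should cross over:

        # one-time setup, inside the existing web-app repository:
        git checkout -b chrome master          # long-running branch for the Chrome app
        # ...commit the Chrome-specific technicalities and the Chrome-only feature here...

        # routine sync: pull in everything the web app gained since the last sync
        git checkout chrome
        git merge master                       # or: git cherry-pick <sha> for selected commits

        # both lines of development live in one GitHub repository:
        git push origin master chrome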

    Read the article

  • Accessing CIFS shares from an OS X machine incredibly slow

    - by Aron Rotteveel
    This is a long shot, because this issue seems over-reported and unanswered on the internet (see references below), but it is about time it was permanently solved. The facts:

        Server: Windows Server 2008, acting as a file server
        Client: OS X Lion 10.7.3
        Method of connecting: directly via IP through Finder: smb://192.168.1.100/share

    The problem: the initial connection attempt takes about a minute. After the connection is made, it takes one more minute to show the directories in Finder. After navigating to any other directory, it takes several seconds or minutes to parse the directory, seemingly depending on the size of its contents. In fact, my entire Finder has this problem after connecting: when using Finder to show my desktop, it can literally take up to a minute to load. Obviously, this is not right. I have no clue how to fix this and would appreciate any help I can get. I am unsure what other relevant information I can provide, but if there is any, please let me know so I can update the post. I seem not to be the only one having this problem: most importantly, an apple.stackexchange.com entry, unsolved and unanswered; several users on the Apple support forums; and users on EduGeek.

    Read the article

  • Help - since adding an elastic load balancer to my EC2 web application I cannot connect with the MySQL database (not in AWS)

    - by undefined
    I have a web application that uses an EC2 instance to receive uploaded images, resize them, store them on S3 and update my MySQL database with the image record. The database is hosted outside Amazon Web Services, so this obviously involves communication between the EC2 instance and the database. Images are posted to the upload server from a Flash client, which receives the IP address of the upload server when it loads and then sends images to 1.12.23.34/resize_script.php. This worked great... until I started trying to add a load balancer. Since ELBs use a DNS name rather than an IP address, I now pass that name to Flash instead. Now when I upload images I get the following response from the server: Could not connect to MySQL: Lost connection to MySQL server at 'reading initial communication packet', system error: 111 What might be causing the lost connection to the MySQL server? Are there any additional steps I need to take to allow my upload servers to be load balanced? I have already set the host property of this user's MySQL privileges to %. Any pointers greatly appreciated, thanks.
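
    As a hedged aside, system error 111 is "connection refused", which usually points at the MySQL bind address or a firewall between the instances and the database rather than at the ELB itself. A quick checking sketch, with host, user and database names as placeholders:

        # from one of the EC2 upload instances:
        telnet db.example-host.com 3306     # refused immediately -> bind-address/firewall on the DB host
                                            # hangs/times out     -> security group or network path

        # on the database server: make sure MySQL listens on an external interface (my.cnf):
        #   bind-address = 0.0.0.0
        # and that the user may connect from the instances' source addresses:
        #   GRANT ALL ON uploads.* TO 'uploader'@'%' IDENTIFIED BY '...';   -- hypothetical user/db
        #   FLUSH PRIVILEGES;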

    Read the article

  • How do I change the default ftp folder in Mac OS X 10.6?

    - by Wild_Eep
    I'm running WordPress 2.9.1 from a Mac running 10.6.3. WordPress is installed to the /Library/WebServer/Documents folder. WordPress has a feature called Auto Update. Clicking an auto update button will download and install updated versions of the WordPress software, or third-party plugin tools. It's a convenient way to keep things up to date. WordPress uses FTP to download the files. I've enabled FTP and set up a user account and opened the requisite ports in my firewall for FTP traffic. This doesn't seem to be enough for my self-hosted installation, though. I'm sure this feature was originally designed for someone who has access to a remote shared webserver, and that it's merely a configuration challenge related to the FTP setup. I feel that if I can adjust the initial directory that the FTP service presents to the AutoUpdate feature, everything else will work properly. So, my question is, how do I adjust what folder is presented when a given user connects to a Mac running 10.6.3 via FTP?
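
    Since the built-in FTP service on OS X presents the connecting user's home directory, one possible approach is to point a dedicated FTP user's home at the WordPress folder. This is a sketch only; the "wpftp" account name is made up:

        # inspect, then re-point, the account's home directory:
        sudo dscl . -read /Users/wpftp NFSHomeDirectory
        sudo dscl . -change /Users/wpftp NFSHomeDirectory /Users/wpftp /Library/WebServer/Documents
        # give that account write access so auto-update can replace the WordPress files:
        sudo chown -R wpftp /Library/WebServer/Documents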

    Read the article

  • A way of doing real-world test-driven development (and some thoughts about it)

    - by Thomas Weller
    Lately, I exchanged some arguments with Derick Bailey about some details of the red-green-refactor cycle of the Test-driven development process. In short, the issue revolved around the fact that it’s not enough to have a test red or green, but it’s also important to have it red or green for the right reasons. While for me, it’s sufficient to initially have a NotImplementedException in place, Derick argues that this is not totally correct (see these two posts: Red/Green/Refactor, For The Right Reasons and Red For The Right Reason: Fail By Assertion, Not By Anything Else). And he’s right. But on the other hand, I had no idea how his insights could have any practical consequence for my own individual interpretation of the red-green-refactor cycle (which is not really red-green-refactor, at least not in its pure sense, see the rest of this article). This made me think deeply for some days now. In the end I found out that the ‘right reason’ changes in my understanding depending on what development phase I’m in. To make this clear (at least I hope it becomes clear…) I started to describe my way of working in some detail, and then something strange happened: The scope of the article slightly shifted from focusing ‘only’ on the ‘right reason’ issue to something more general, which you might describe as something like  'Doing real-world TDD in .NET , with massive use of third-party add-ins’. This is because I feel that there is a more general statement about Test-driven development to make:  It’s high time to speak about the ‘How’ of TDD, not always only the ‘Why’. Much has been said about this, and me myself also contributed to that (see here: TDD is not about testing, it's about how we develop software). But always justifying what you do is very unsatisfying in the long run, it is inherently defensive, and it costs time and effort that could be used for better and more important things. And frankly: I’m somewhat sick and tired of repeating time and again that the test-driven way of software development is highly preferable for many reasons - I don’t want to spent my time exclusively on stating the obvious… So, again, let’s say it clearly: TDD is programming, and programming is TDD. Other ways of programming (code-first, sometimes called cowboy-coding) are exceptional and need justification. – I know that there are many people out there who will disagree with this radical statement, and I also know that it’s not a description of the real world but more of a mission statement or something. But nevertheless I’m absolutely sure that in some years this statement will be nothing but a platitude. Side note: Some parts of this post read as if I were paid by Jetbrains (the manufacturer of the ReSharper add-in – R#), but I swear I’m not. Rather I think that Visual Studio is just not production-complete without it, and I wouldn’t even consider to do professional work without having this add-in installed... The three parts of a software component Before I go into some details, I first should describe my understanding of what belongs to a software component (assembly, type, or method) during the production process (i.e. the coding phase). Roughly, I come up with the three parts shown below:   First, we need to have some initial sort of requirement. This can be a multi-page formal document, a vague idea in some programmer’s brain of what might be needed, or anything in between. In either way, there has to be some sort of requirement, be it explicit or not. 
– At the C# micro-level, the best way that I found to formulate that is to define interfaces for just about everything, even for internal classes, and to provide them with exhaustive xml comments. The next step then is to re-formulate these requirements in an executable form. This is specific to the respective programming language. - For C#/.NET, the Gallio framework (which includes MbUnit) in conjunction with the ReSharper add-in for Visual Studio is my toolset of choice. The third part then finally is the production code itself. It’s development is entirely driven by the requirements and their executable formulation. This is the delivery, the two other parts are ‘only’ there to make its production possible, to give it a decent quality and reliability, and to significantly reduce related costs down the maintenance timeline. So while the first two parts are not really relevant for the customer, they are very important for the developer. The customer (or in Scrum terms: the Product Owner) is not interested at all in how  the product is developed, he is only interested in the fact that it is developed as cost-effective as possible, and that it meets his functional and non-functional requirements. The rest is solely a matter of the developer’s craftsmanship, and this is what I want to talk about during the remainder of this article… An example To demonstrate my way of doing real-world TDD, I decided to show the development of a (very) simple Calculator component. The example is deliberately trivial and silly, as examples always are. I am totally aware of the fact that real life is never that simple, but I only want to show some development principles here… The requirement As already said above, I start with writing down some words on the initial requirement, and I normally use interfaces for that, even for internal classes - the typical question “intf or not” doesn’t even come to mind. I need them for my usual workflow and using them automatically produces high componentized and testable code anyway. To think about their usage in every single situation would slow down the production process unnecessarily. So this is what I begin with: namespace Calculator {     /// <summary>     /// Defines a very simple calculator component for demo purposes.     /// </summary>     public interface ICalculator     {         /// <summary>         /// Gets the result of the last successful operation.         /// </summary>         /// <value>The last result.</value>         /// <remarks>         /// Will be <see langword="null" /> before the first successful operation.         /// </remarks>         double? LastResult { get; }       } // interface ICalculator   } // namespace Calculator So, I’m not beginning with a test, but with a sort of code declaration - and still I insist on being 100% test-driven. There are three important things here: Starting this way gives me a method signature, which allows to use IntelliSense and AutoCompletion and thus eliminates the danger of typos - one of the most regular, annoying, time-consuming, and therefore expensive sources of error in the development process. In my understanding, the interface definition as a whole is more of a readable requirement document and technical documentation than anything else. So this is at least as much about documentation than about coding. The documentation must completely describe the behavior of the documented element. I normally use an IoC container or some sort of self-written provider-like model in my architecture. 
In either case, I need my components defined via service interfaces anyway. - I will use the LinFu IoC framework here, for no other reason as that is is very simple to use. The ‘Red’ (pt. 1)   First I create a folder for the project’s third-party libraries and put the LinFu.Core dll there. Then I set up a test project (via a Gallio project template), and add references to the Calculator project and the LinFu dll. Finally I’m ready to write the first test, which will look like the following: namespace Calculator.Test {     [TestFixture]     public class CalculatorTest     {         private readonly ServiceContainer container = new ServiceContainer();           [Test]         public void CalculatorLastResultIsInitiallyNull()         {             ICalculator calculator = container.GetService<ICalculator>();               Assert.IsNull(calculator.LastResult);         }       } // class CalculatorTest   } // namespace Calculator.Test       This is basically the executable formulation of what the interface definition states (part of). Side note: There’s one principle of TDD that is just plain wrong in my eyes: I’m talking about the Red is 'does not compile' thing. How could a compiler error ever be interpreted as a valid test outcome? I never understood that, it just makes no sense to me. (Or, in Derick’s terms: this reason is as wrong as a reason ever could be…) A compiler error tells me: Your code is incorrect, but nothing more.  Instead, the ‘Red’ part of the red-green-refactor cycle has a clearly defined meaning to me: It means that the test works as intended and fails only if its assumptions are not met for some reason. Back to our Calculator. When I execute the above test with R#, the Gallio plugin will give me this output: So this tells me that the test is red for the wrong reason: There’s no implementation that the IoC-container could load, of course. So let’s fix that. With R#, this is very easy: First, create an ICalculator - derived type:        Next, implement the interface members: And finally, move the new class to its own file: So far my ‘work’ was six mouse clicks long, the only thing that’s left to do manually here, is to add the Ioc-specific wiring-declaration and also to make the respective class non-public, which I regularly do to force my components to communicate exclusively via interfaces: This is what my Calculator class looks like as of now: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult         {             get             {                 throw new NotImplementedException();             }         }     } } Back to the test fixture, we have to put our IoC container to work: [TestFixture] public class CalculatorTest {     #region Fields       private readonly ServiceContainer container = new ServiceContainer();       #endregion // Fields       #region Setup/TearDown       [FixtureSetUp]     public void FixtureSetUp()     {        container.LoadFrom(AppDomain.CurrentDomain.BaseDirectory, "Calculator.dll");     }       ... Because I have a R# live template defined for the setup/teardown method skeleton as well, the only manual coding here again is the IoC-specific stuff: two lines, not more… The ‘Red’ (pt. 2) Now, the execution of the above test gives the following result: This time, the test outcome tells me that the method under test is called. 
And this is the point, where Derick and I seem to have somewhat different views on the subject: Of course, the test still is worthless regarding the red/green outcome (or: it’s still red for the wrong reasons, in that it gives a false negative). But as far as I am concerned, I’m not really interested in the test outcome at this point of the red-green-refactor cycle. Rather, I only want to assert that my test actually calls the right method. If that’s the case, I will happily go on to the ‘Green’ part… The ‘Green’ Making the test green is quite trivial. Just make LastResult an automatic property:     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult { get; private set; }     }         One more round… Now on to something slightly more demanding (cough…). Let’s state that our Calculator exposes an Add() method:         ...   /// <summary>         /// Adds the specified operands.         /// </summary>         /// <param name="operand1">The operand1.</param>         /// <param name="operand2">The operand2.</param>         /// <returns>The result of the additon.</returns>         /// <exception cref="ArgumentException">         /// Argument <paramref name="operand1"/> is &lt; 0.<br/>         /// -- or --<br/>         /// Argument <paramref name="operand2"/> is &lt; 0.         /// </exception>         double Add(double operand1, double operand2);       } // interface ICalculator A remark: I sometimes hear the complaint that xml comment stuff like the above is hard to read. That’s certainly true, but irrelevant to me, because I read xml code comments with the CR_Documentor tool window. And using that, it looks like this:   Apart from that, I’m heavily using xml code comments (see e.g. here for a detailed guide) because there is the possibility of automating help generation with nightly CI builds (using MS Sandcastle and the Sandcastle Help File Builder), and then publishing the results to some intranet location.  This way, a team always has first class, up-to-date technical documentation at hand about the current codebase. (And, also very important for speeding up things and avoiding typos: You have IntelliSense/AutoCompletion and R# support, and the comments are subject to compiler checking…).     Back to our Calculator again: Two more R# – clicks implement the Add() skeleton:         ...           public double Add(double operand1, double operand2)         {             throw new NotImplementedException();         }       } // class Calculator As we have stated in the interface definition (which actually serves as our requirement document!), the operands are not allowed to be negative. So let’s start implementing that. Here’s the test: [Test] [Row(-0.5, 2)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); } As you can see, I’m using a data-driven unit test method here, mainly for these two reasons: Because I know that I will have to do the same test for the second operand in a few seconds, I save myself from implementing another test method for this purpose. Rather, I only will have to add another Row attribute to the existing one. From the test report below, you can see that the argument values are explicitly printed out. 
This can be a valuable documentation feature even when everything is green: One can quickly review what values were tested exactly - the complete Gallio HTML-report (as it will be produced by the Continuous Integration runs) shows these values in a quite clear format (see below for an example). Back to our Calculator development again, this is what the test result tells us at the moment: So we’re red again, because there is not yet an implementation… Next we go on and implement the necessary parameter verification to become green again, and then we do the same thing for the second operand. To make a long story short, here’s the test and the method implementation at the end of the second cycle: // in CalculatorTest:   [Test] [Row(-0.5, 2)] [Row(295, -123)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); }   // in Calculator: public double Add(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }     if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }     throw new NotImplementedException(); } So far, we have sheltered our method from unwanted input, and now we can safely operate on the parameters without further caring about their validity (this is my interpretation of the Fail Fast principle, which is regarded here in more detail). Now we can think about the method’s successful outcomes. First let’s write another test for that: [Test] [Row(1, 1, 2)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } Again, I’m regularly using row based test methods for these kinds of unit tests. The above shown pattern proved to be extremely helpful for my development work, I call it the Defined-Input/Expected-Output test idiom: You define your input arguments together with the expected method result. There are two major benefits from that way of testing: In the course of refining a method, it’s very likely to come up with additional test cases. In our case, we might add tests for some edge cases like ‘one of the operands is zero’ or ‘the sum of the two operands causes an overflow’, or maybe there’s an external test protocol that has to be fulfilled (e.g. an ISO norm for medical software), and this results in the need of testing against additional values. In all these scenarios we only have to add another Row attribute to the test. Remember that the argument values are written to the test report, so as a side-effect this produces valuable documentation. (This can become especially important if the fulfillment of some sort of external requirements has to be proven). 
So your test method might look something like that in the end: [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 2)] [Row(0, 999999999, 999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, double.MaxValue)] [Row(4, double.MaxValue - 2.5, double.MaxValue)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } And this will produce the following HTML report (with Gallio):   Not bad for the amount of work we invested in it, huh? - There might be scenarios where reports like that can be useful for demonstration purposes during a Scrum sprint review… The last requirement to fulfill is that the LastResult property is expected to store the result of the last operation. I don’t show this here, it’s trivial enough and brings nothing new… And finally: Refactor (for the right reasons) To demonstrate my way of going through the refactoring portion of the red-green-refactor cycle, I added another method to our Calculator component, namely Subtract(). Here’s the code (tests and production): // CalculatorTest.cs:   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtract(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, result); }   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtractGivesExpectedLastResult(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, calculator.LastResult); }   ...   // ICalculator.cs: /// <summary> /// Subtracts the specified operands. /// </summary> /// <param name="operand1">The operand1.</param> /// <param name="operand2">The operand2.</param> /// <returns>The result of the subtraction.</returns> /// <exception cref="ArgumentException"> /// Argument <paramref name="operand1"/> is &lt; 0.<br/> /// -- or --<br/> /// Argument <paramref name="operand2"/> is &lt; 0. /// </exception> double Subtract(double operand1, double operand2);   ...   // Calculator.cs:   public double Subtract(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }       if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }       return (this.LastResult = operand1 - operand2).Value; }   Obviously, the argument validation stuff that was produced during the red-green part of our cycle duplicates the code from the previous Add() method. So, to avoid code duplication and minimize the number of code lines of the production code, we do an Extract Method refactoring. 
One more time, this is only a matter of a few mouse clicks (and giving the new method a name) with R#: Having done that, our production code finally looks like that: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         #region ICalculator           public double? LastResult { get; private set; }           public double Add(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 + operand2).Value;         }           public double Subtract(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 - operand2).Value;         }           #endregion // ICalculator           #region Implementation (Helper)           private static void ThrowIfOneOperandIsInvalid(double operand1, double operand2)         {             if (operand1 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand1");             }               if (operand2 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand2");             }         }           #endregion // Implementation (Helper)       } // class Calculator   } // namespace Calculator But is the above worth the effort at all? It’s obviously trivial and not very impressive. All our tests were green (for the right reasons), and refactoring the code did not change anything. It’s not immediately clear how this refactoring work adds value to the project. Derick puts it like this: STOP! Hold on a second… before you go any further and before you even think about refactoring what you just wrote to make your test pass, you need to understand something: if your done with your requirements after making the test green, you are not required to refactor the code. I know… I’m speaking heresy, here. Toss me to the wolves, I’ve gone over to the dark side! Seriously, though… if your test is passing for the right reasons, and you do not need to write any test or any more code for you class at this point, what value does refactoring add? Derick immediately answers his own question: So why should you follow the refactor portion of red/green/refactor? When you have added code that makes the system less readable, less understandable, less expressive of the domain or concern’s intentions, less architecturally sound, less DRY, etc, then you should refactor it. I couldn’t state it more precise. From my personal perspective, I’d add the following: You have to keep in mind that real-world software systems are usually quite large and there are dozens or even hundreds of occasions where micro-refactorings like the above can be applied. It’s the sum of them all that counts. And to have a good overall quality of the system (e.g. in terms of the Code Duplication Percentage metric) you have to be pedantic on the individual, seemingly trivial cases. My job regularly requires the reading and understanding of ‘foreign’ code. 
So code quality/readability really makes a HUGE difference for me – sometimes it can be even the difference between project success and failure… Conclusions The above described development process emerged over the years, and there were mainly two things that guided its evolution (you might call it eternal principles, personal beliefs, or anything in between): Test-driven development is the normal, natural way of writing software, code-first is exceptional. So ‘doing TDD or not’ is not a question. And good, stable code can only reliably be produced by doing TDD (yes, I know: many will strongly disagree here again, but I’ve never seen high-quality code – and high-quality code is code that stood the test of time and causes low maintenance costs – that was produced code-first…) It’s the production code that pays our bills in the end. (Though I have seen customers these days who demand an acceptance test battery as part of the final delivery. Things seem to go into the right direction…). The test code serves ‘only’ to make the production code work. But it’s the number of delivered features which solely counts at the end of the day - no matter how much test code you wrote or how good it is. With these two things in mind, I tried to optimize my coding process for coding speed – or, in business terms: productivity - without sacrificing the principles of TDD (more than I’d do either way…).  As a result, I consider a ratio of about 3-5/1 for test code vs. production code as normal and desirable. In other words: roughly 60-80% of my code is test code (This might sound heavy, but that is mainly due to the fact that software development standards only begin to evolve. The entire software development profession is very young, historically seen; only at the very beginning, and there are no viable standards yet. If you think about software development as a kind of casting process, where the test code is the mold and the resulting production code is the final product, then the above ratio sounds no longer extraordinary…) Although the above might look like very much unnecessary work at first sight, it’s not. With the aid of the mentioned add-ins, doing all the above is a matter of minutes, sometimes seconds (while writing this post took hours and days…). The most important thing is to have the right tools at hand. Slow developer machines or the lack of a tool or something like that - for ‘saving’ a few 100 bucks -  is just not acceptable and a very bad decision in business terms (though I quite some times have seen and heard that…). Production of high-quality products needs the usage of high-quality tools. This is a platitude that every craftsman knows… The here described round-trip will take me about five to ten minutes in my real-world development practice. I guess it’s about 30% more time compared to developing the ‘traditional’ (code-first) way. But the so manufactured ‘product’ is of much higher quality and massively reduces maintenance costs, which is by far the single biggest cost factor, as I showed in this previous post: It's the maintenance, stupid! (or: Something is rotten in developerland.). In the end, this is a highly cost-effective way of software development… But on the other hand, there clearly is a trade-off here: coding speed vs. code quality/later maintenance costs. The here described development method might be a perfect fit for the overwhelming majority of software projects, but there certainly are some scenarios where it’s not - e.g. 
if time-to-market is crucial for a software project. So this is a business decision in the end. It's just that you have to know what you're doing and what consequences this might have…

Some last words

First, I'd like to thank Derick Bailey again. His two aforementioned posts (which I strongly recommend reading) inspired me to think deeply about my own personal way of doing TDD and to clarify my thoughts about it. I wouldn't have done that without this inspiration. I really enjoy that kind of discussion…

I agree with him in all respects. But I don't know (yet?) how to bring his insights into the described production process without slowing things down. The method described above has proved to be 'good enough' in my practical experience. But of course, I'm open to suggestions here… My rationale for now is this: if the test is initially red during the red-green-refactor cycle, the 'right reason' is that it actually calls the right method, but that method is not yet operational. Later on, when the cycle is finished and the tests become part of the regular, automated Continuous Integration process, 'red' certainly must occur for the 'right reason': in this phase, 'red' MUST mean nothing but an unfulfilled assertion - Fail By Assertion, Not By Anything Else!

    Read the article

  • Using HTML 5 SessionState to save rendered Page Content

    - by Rick Strahl
HTML 5 SessionState and LocalStorage are very useful and super easy to use for managing client side state. For building rich client side or SPA style applications it's a vital feature to be able to cache user data as well as HTML content in order to swap pages in and out of the browser's DOM. What might not be so obvious is that you can also use the sessionStorage and localStorage objects even in classic server rendered HTML applications to provide caching features between pages. These APIs have been around for a long time and are supported by most relatively modern browsers, all the way back to IE8, so you can use them safely in your Web applications.

SessionState and LocalStorage are easy

The APIs that make up sessionStorage and localStorage are very simple. Both objects feature the same API: a simple, string based key/value store with getItem, setItem, removeItem, clear and key methods. The objects are also pseudo arrays, so they can be iterated like an array via a length property, and you can use array indexers to set and get values. Basic usage for storing and retrieval looks like this (using sessionStorage, but the syntax is the same for localStorage - just switch the objects):

// set
var lastAccess = new Date().getTime();
if (sessionStorage)
    sessionStorage.setItem("myapp_time", lastAccess.toString());

// retrieve in another page or on a refresh
var time = null;
if (sessionStorage)
    time = sessionStorage.getItem("myapp_time");

if (time)
    time = new Date(time * 1);
else
    time = new Date();

sessionStorage stores data that is specific to the browser session and has a lifetime of the active browser session or window. Shut down the browser or tab and the storage goes away. localStorage uses the same API, but the data is stored permanently in the browser's storage area until it is deleted via code or by clearing out browser cookies (not the cache). Space for both sessionStorage and localStorage is limited. The spec is ambiguous about this - supposedly sessionStorage should allow for unlimited size, but it appears that most WebKit browsers support only 2.5MB for either object. This means you have to be careful what you store, especially since other applications might be running on the same domain and also use the storage mechanisms. That said, 2.5MB worth of character data is quite a bit and would go a long way.

The easiest way to get a feel for how sessionStorage and localStorage work is to look at a simple example. You can check out the following example online in Plunker: http://plnkr.co/edit/0ICotzkoPjHaWa70GlRZ?p=preview. Plunker is an online HTML/JavaScript editor that lets you write and run JavaScript code; it's similar to JsFiddle, but a bit cleaner to work in IMHO (thanks to John Papa for turning me on to it). The sample has two text boxes with counts that update session/local storage every time you click the related button. The counts are 'cached' in Session and Local storage. The point of these examples is that both counters survive full page reloads, and the LocalStorage counter survives a complete browser shutdown and restart. Go ahead and try it out by clicking the Reload button after updating both counters and then shutting down the browser completely and going back to the same URL (with the same browser). What you should see is that reloads leave both counters intact at the counted values, while a browser restart will leave only the local storage counter intact.
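Before looking at the wrapper methods the example itself uses, here's a generic helper along the same lines that makes a handy starting point: it adds JSON (de)serialization, a guard for missing storage support, and a try/catch around writes, since setItem can throw once the storage quota is exhausted. This is only a minimal sketch of the idea - the appStorage name and its methods are illustrative, not part of the sample's code:

// Minimal storage wrapper: JSON (de)serialization, a feature check,
// and a guard against quota-exceeded errors on write.
var appStorage = {
    set: function (key, value) {
        if (!window.sessionStorage)
            return false;                               // storage not available
        try {
            sessionStorage.setItem(key, JSON.stringify(value));
            return true;
        } catch (ex) {
            // most browsers throw a DOMException when the quota is exhausted
            return false;
        }
    },
    get: function (key, defaultValue) {
        if (!window.sessionStorage)
            return defaultValue;
        var raw = sessionStorage.getItem(key);
        return raw === null ? defaultValue : JSON.parse(raw);
    },
    remove: function (key) {
        if (window.sessionStorage)
            sessionStorage.removeItem(key);
    }
};

// usage: the counter survives page reloads within the same browser session
var visits = appStorage.get("myapp_visits", 0) + 1;
appStorage.set("myapp_visits", visits);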
The code that deals with SessionStorage (LocalStorage is not shown here) in the example is isolated into a couple of wrapper methods to simplify the code:

function getSessionCount() {
    var count = 0;
    if (sessionStorage) {
        count = sessionStorage.getItem("ss_count");
        count = !count ? 0 : count * 1;
    }
    $("#txtSession").val(count);
    return count;
}

function setSessionCount(count) {
    if (sessionStorage)
        sessionStorage.setItem("ss_count", count.toString());
}

These two functions essentially load and store a session counter value. The two key methods used here are:

sessionStorage.getItem(key);
sessionStorage.setItem(key, stringVal);

Note that the value given to setItem and returned by getItem has to be a string. If you pass another type you get an error. Don't let that limit you though - you can easily serialize data as JSON, so it's quite possible to take complex objects and store them in a single sessionStorage value:

var user = { name: "Rick", id: "ricks", level: 8 };
sessionStorage.setItem("app_user", JSON.stringify(user));

to retrieve it:

var user = sessionStorage.getItem("app_user");
if (user)
    user = JSON.parse(user);

Simple! If you're using the Chrome Developer Tools (F12) you can also check out the session and local storage state on the Resources tab, and you can use this tool to refresh or remove entries from storage.

What we just looked at is a purely client side implementation where a couple of counters are stored. For rich, client centric AJAX applications sessionStorage and localStorage provide a very nice and simple API to store application state while the application is running. But you can also use these storage mechanisms to manage server centric HTML applications when you combine server rendering with some JavaScript to perform client side data caching. You can either store some state information and data on the client (i.e. store a JSON object and carry it forward between server rendered HTML requests), or you can use it for good old HTTP based caching where some rendered HTML is saved and then restored later. Let's look at the latter with a real life example.

Why do I need Client-side Page Caching for Server Rendered HTML?

I don't know about you, but in a lot of my existing server driven applications I have lists that display a fair amount of data. Typically these lists contain links to drill down into more specific data, either for viewing or editing. You click on a link and go off to a detail page that provides more concise content. So far so good. But now you're done with the detail page and need to get back to the list, so you click on a 'bread crumbs trail' or an application level 'back to list' button and… …you end up back at the top of the list - the scroll position, the current selection, in some cases even filter conditions - all gone with the wind. You've left behind the state of the list and are starting your browsing of the list from scratch at the top. Not cool!

Sound familiar? This is a pretty common scenario with server rendered HTML content: it's so common to display lists to drill into, only to lose state in the process of returning to the original list. Look at just about any traditional forums application, or even StackOverflow, to see what I mean. Scroll down a bit to look at a post or entry, drill in, then use the bread crumbs or tab to go back… In some cases returning to the top of a list is not a big deal.
On StackOverflow that sort of works because content turns around so quickly that you probably want to look at the top posts anyway. Not always, though - if you're browsing through a list of search topics you're interested in and drill in, there's no way back to that position. Essentially, anytime you're actively browsing the items in a list, state becomes important, and if it's not handled the user experience can be really disruptive.

Content Caching

If you're building client centric SPA style applications this is a fairly easy problem to solve - you tend to render the list once and then update the page content to overlay the detail content, only hiding the list temporarily until it's used again later. It's relatively easy to accomplish this simply by hiding content on the page and later making it visible again. But if you use server rendered content, hanging on to all the detail like filters, selections and scroll position is not quite as easy. Or is it??? This is where sessionStorage comes in handy. What if we just save the rendered content of a previous page, and then restore it when we return to that page based on a special flag that tells us to use the cached version? Let's see how we can do this.

A real World Use Case

Recently my local ISP asked me to help out with updating an ancient classifieds application. They had a very busy local classifieds app that was originally a classic ASP application. The old app was - wait for it - frames based, and even though I lobbied against it, the decision was made to keep the frames based layout to allow rapid browsing of the hundreds of posts that are made on a daily basis. The primary reason they wanted this was precisely the ability to quickly browse content item by item. While I personally hate working with frames, I have to admit that the UI actually works well with the frames layout as long as you're running on a large desktop screen. You can check out the frames based desktop site here: http://classifieds.gorge.net/

However, when I rebuilt the app I also added a secondary view that doesn't use frames. The main reason for this, of course, was mobile displays, which work horribly with frames. So there's a somewhat mobile friendly version of the interface, which ditches the frames and uses some responsive design tweaking for mobile capable operation: http://classifeds.gorge.net/mobile (or browse the base url with your browser width under 800px). Here's what the mobile, non-frames view looks like: the list of classifieds posts is now a plain list, and there's a separate page for drilling down into an item. And of course… originally we ran into that usability issue I mentioned earlier, where the browse, view detail, go back to the list cycle resulted in lost list state. Originally in mobile mode you scrolled through the list, found an item to look at and drilled in to display the item detail. Then you clicked back to the list and BAM - you've lost your place. Because so many items are added on a daily basis the full list is never fully loaded; instead there's a "Load Additional Listings" entry at the bottom. Not only did we originally lose our place when coming back to the list, but any additionally loaded items were no longer there, because the list was now rendering as if it were the first page hit. The additional listings, any filters and the selection of an item were all lost. Major suckage!
Using Client SessionStorage to cache Server Rendered Content

To work around this problem I decided to cache the rendered page content from the list in sessionStorage. Anytime the list renders or is updated with Load Additional Listings, the page HTML is cached and stored in session storage. Any back links from the detail page or the login or write entry forms then point back to the list page with a back=true query string parameter. If the server side sees this parameter it doesn't render the part of the page that is cached. Instead the client side code retrieves the data from the sessionStorage cache and simply inserts it into the page.

It sounds pretty simple, and the overall process is really easy, but there are a few gotchas that I'll discuss in a minute. First, let's look at the implementation. Let's start with the server side because that'll give a quick idea of the document structure. As I mentioned, the server renders data from an ASP.NET MVC view. The list page URL, when returning from the display page (or a host of other pages), looks like this:

https://classifieds.gorge.net/list?back=True

The query string value is a flag that indicates whether the server should render the HTML. Here's what the top level MVC Razor view for the list page looks like:

@model MessageListViewModel
@{
    ViewBag.Title = "Classified Listing";
    bool isBack = !string.IsNullOrEmpty(Request.QueryString["back"]);
}
<form method="post" action="@Url.Action("list")">
    <div id="SizingContainer">
        @if (!isBack)
        {
            @Html.Partial("List_CommandBar_Partial", Model)
            <div id="PostItemContainer" class="scrollbox" xstyle="-webkit-overflow-scrolling: touch;">
                @Html.Partial("List_Items_Partial", Model)
                @if (Model.RequireLoadEntry)
                {
                    <div class="postitem loadpostitems" style="padding: 15px;">
                        <div id="LoadProgress" class="smallprogressright"></div>
                        <div class="control-progress">
                            Load additional listings...
                        </div>
                    </div>
                }
            </div>
        }
    </div>
</form>

As you can see, the query string flag drives a conditional block which, when the flag is set, is simply not rendered. The content inside of #SizingContainer basically holds the entire page's HTML sans the headers and scripts, but including the filter options and menu at the top. In this case that makes good sense - in other situations, the fact that the menu or filter options might be dynamically updated might make you cache only the list rather than essentially the entire page. In this particular instance all of the content works and produces the proper result, as both the list and any filter conditions in the form inputs are restored.

Ok, let's move on to the client. On the client there are two page level functions that deal with saving and restoring state. As in the counter example I showed earlier, I like to wrap the logic that saves and restores values from sessionStorage into separate functions because they are almost always used in several places.

page.saveData = function(id) {
    if (!sessionStorage)
        return;
    var data = {
        id: id,
        scroll: $("#PostItemContainer").scrollTop(),
        html: $("#SizingContainer").html()
    };
    sessionStorage.setItem("list_html", JSON.stringify(data));
};

page.restoreData = function() {
    if (!sessionStorage)
        return;
    var data = sessionStorage.getItem("list_html");
    if (!data)
        return null;
    return JSON.parse(data);
};

The data that is saved is an object which contains the id of the element selected when the user clicks, the scroll position, and the page HTML. The id and scroll values are used to restore the selection and scroll position when the data is read back from the cache.
Finally, the html from the #SizingContainer element is stored, which makes up the bulk of the document's HTML. In this application the HTML captured could be a substantial bit of data. If you recall, I mentioned that the server side code renders a small chunk of data initially and then gets more data if the user reads through the first 50 or so items. The rest of the items retrieved can be rather sizable. Other than the JSON deserialization that's OK. Since I'm using sessionStorage the storage space has no immediate limits.

Next is the core logic that handles saving and restoring the page state. At first this might seem pretty simple, and in some cases it is, but as the following code demonstrates there are a few gotchas to watch out for. Here's the relevant code I use to save and restore:

$( function() {
    …
    var isBack = getUrlEncodedKey("back", location.href);
    if (isBack) {
        // remove the back key from URL
        setUrlEncodedKey("back", "", location.href);

        var data = page.restoreData();  // restore from sessionStorage
        if (!data) {
            // no data - force redisplay of the server side default list
            window.location = "list";
            return;
        }

        $("#SizingContainer").html(data.html);

        var el = $(".postitem[data-id=" + data.id + "]");
        $(".postitem").removeClass("highlight");
        el.addClass("highlight");

        $("#PostItemContainer").scrollTop(data.scroll);

        setTimeout(function() {
            el.removeClass("highlight");
        }, 2500);
    }
    else if (window.noFrames)
        page.saveData(null);  // save when page loads

    $("#SizingContainer").on("click", ".postitem", function() {
        var id = $(this).attr("data-id");
        if (!id)
            return true;

        if (window.noFrames)
            page.saveData(id);

        var contentFrame = window.parent.frames["Content"];
        if (contentFrame)
            contentFrame.location.href = "show/" + id;
        else
            window.location.href = "show/" + id;

        return false;
    });
    …

The code starts out by checking for the back query string flag, which triggers restoring from the client cache. If the flag is present, the cached data structure is read from sessionStorage. It's important to check whether data was actually returned: if the user had back=true on the query string but there is no cached data, he likely bookmarked this page or shut down the browser and came back to this URL later. In that case the server didn't render any detail and we have no cached data, so all we can do is redirect to the original default list view using window.location. If we continued, the page would render no data - so make sure to always check the cache retrieval result. Always!

If there is data, it's loaded and data.html is restored into the document by simply injecting the HTML back into the document's #SizingContainer element:

$("#SizingContainer").html(data.html);

It's that simple, and it's quite quick even with a fully loaded list of additional items, even on a phone. The actual HTML data is stored to the cache on every initial page load and then again when the user clicks on an element to navigate to a particular listing. The former ensures that the client cache always has something in it, and the latter updates it with the information for the selected element. For the click handling I use a data-id attribute on the list item (.postitem) and retrieve the id from that. That id is then used to navigate to the actual entry and is also stored in the cached data. The id is later used to reset the selection by searching for the data-id value in the restored elements.
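The getUrlEncodedKey and setUrlEncodedKey functions used above aren't shown in this post; they simply read and write a query string value on a URL. A simplified stand-in - my own approximation, not the actual implementation - could look like this:

// Approximate stand-ins for the query string helpers used above.
// These are simplified sketches, not the original implementations.
function getUrlEncodedKey(key, url) {
    // return the value of a query string key, or "" if it's not present
    var query = url.indexOf("?") > -1 ? url.substring(url.indexOf("?") + 1) : "";
    var match = new RegExp("(^|&)" + key + "=([^&]*)").exec(query);
    return match ? decodeURIComponent(match[2]) : "";
}

function setUrlEncodedKey(key, value, url) {
    // return the url with the given key replaced, added, or removed (empty value)
    var parts = url.split("?");
    var pairs = parts[1] ? parts[1].split("&") : [];
    pairs = pairs.filter(function (pair) {
        return pair.indexOf(key + "=") !== 0;
    });
    if (value !== "" && value != null)
        pairs.push(key + "=" + encodeURIComponent(value));
    return parts[0] + (pairs.length ? "?" + pairs.join("&") : "");
}

Note that a stand-in like this only returns the modified URL - to actually strip the back flag from the address bar you'd still have to apply the result, for example via history.replaceState() where it's available.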
The overall save/restore process is pretty straightforward and doesn't require a lot of code, yet it yields a huge improvement in the usability of the site on mobile devices (or for anybody who uses the non-frames view).

Some things to watch out for

As easy as it conceptually seems to simply store and retrieve cached content, you have to be quite aware of what type of content you are caching. The code above is all that's specific to the cache/restore cycle and it works, but it took a few tweaks to the rest of the script code and server code to make it all work. There were a few gotchas that weren't immediately obvious. Here are a few things to pay attention to:

Event handling logic
Timing of manipulating DOM elements
Inline script code
Bookmarking the cache URL when no cache exists

Do you have inline script code in your HTML? That script code isn't going to run if you restore from cache and simply assign the HTML, or it may not run at the time you'd expect in the normal DOM rendering cycle.

JavaScript Event Hookups

The biggest issue I ran into with this approach almost immediately is that I originally had various static event handlers hooked up to UI elements that are now cached. If you have an event handler like:

$("#btnSearch").click( function() {…});

that works fine when the page loads with server rendered HTML, but that code breaks when you load the HTML from cache. Why? Because the elements you're trying to hook those events to may not actually be there - yet. Luckily there's an easy workaround: deferred events. With jQuery you can use the .on() event handler instead:

$("#SelectionContainer").on("click", "#btnSearch", function() {…});

which monitors a parent element for the events and checks for the inner selector elements to handle events on. This effectively defers event binding to runtime, so as more items are added to the document the bindings still work. For any cached content, use deferred events.

Timing of manipulating DOM Elements

Along the same lines, make sure that your DOM manipulation code runs after the code that loads the cached content into the page, so that you don't manipulate DOM elements that don't exist just yet. Ideally you'll want to check for the restore-from-cache condition towards the top of your script code, but that can be tricky if you have components or other logic that might not all run in a straight line.

Inline Script Code

Here's another small problem I ran into: I use a DateTime picker widget I built a while back that relies on the jQuery date time picker. I also created a helper function that adds keyboard date navigation to it using JavaScript logic. Because of MVC's limited 'object model', the only way to embed widget content into the page is through inline script. This code broke when I inserted the cached HTML into the page, because the script code was not available when the component actually got injected into the page. As with the last bullet - it's a matter of timing. There's no good workaround for this - in my case I pulled out the jQuery date picker and relied on native <input type="date" /> logic instead - a better choice these days anyway, especially since this view is meant primarily to serve mobile devices, which actually support date input through the browser (unlike desktop browsers, of which only WebKit seems to support it).
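One way to soften both the timing and the inline script issues - this is a suggestion on my part, not something the Classifieds app actually does - is to keep all script-driven widget setup in a single named initialization function and call it once on the normal page load and once right after cached HTML has been injected:

// Keep plugin/widget hookups in one function so they can be re-applied
// after cached HTML has been injected back into the page.
function initListWidgets(containerSelector) {
    var $container = $(containerSelector);

    // hypothetical example hookups - replace with whatever the page really uses
    $container.find("input[type=date]").attr("placeholder", "yyyy-mm-dd");
    // $container.find(".date-field").datepicker();   // e.g. a jQuery UI widget
}

$(function () {
    var cached = page.restoreData();        // cache helpers shown earlier
    if (cached)
        $("#SizingContainer").html(cached.html);

    // run widget setup after any cached HTML is in place,
    // so it sees the same DOM in both the fresh and the cached case
    initListWidgets("#SizingContainer");
});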
Bookmarking Cached Urls

When you cache HTML content you have to decide whether you cache on the client and skip rendering that same content on the server. In the Classifieds app I didn't render the server side content, so if the user comes to the page with back=True and there is no cached content I have to have a Plan B. Typically this happens when somebody ends up bookmarking the back URL. The easiest and safest solution for this scenario is to ALWAYS check the cache result to make sure it exists, and if it doesn't, have a safe URL to go back to - in this case the plain uncached list URL, which amounts to effectively redirecting. This seems really obvious in hindsight, but it's easy to overlook, and you may not see a problem until much later, when it's not obvious at all why the page is not rendering anything.

Don't use <body> to replace Content

Since we're practically replacing all the HTML in the page it may seem tempting to simply replace the HTML content of the <body> tag. Don't. The body tag usually contains key things that should stay in the page and be there when it loads - specifically script tags and elements, and possibly other embedded content. It's best to create a top level DOM element specifically as a placeholder container for your cached content, wrapping just the actual content you want to replace. In the app above, #SizingContainer is that container.

Other Approaches

The approach I've used for this application is kind of specific to the existing server rendered application we're running, so it's just one approach you can take with caching. However, for server rendered content caching this is a pattern I've used in a few apps to retrofit some client caching into list displays. Here are a few other ways that come to mind:

Using Partial HTML Rendering via AJAX: Instead of rendering the page initially on the server, the page would load empty and the client would render the UI by retrieving the respective HTML from a Partial View and embedding it into the page. This effectively makes the initial rendering and the cached rendering logic identical and removes the need for the server to decide whether this request should be rendered or not (i.e. no checking for a back=true switch). All the caching logic lives on the client in this case.

Using JSON Data and Client Rendering: The hardcore client option is to do the whole UI SPA style: pull data from the server and render it on the client using templates or client side databinding with Knockout/Angular et al. As with the partial rendering approach, the advantage is that there's no difference in the logic between pulling the data from cache or rendering from scratch, other than the initial check for the cache request. Of course, if the app is a full-on SPA app then caching may not even be required - the list could just stay in memory and be hidden and reactivated.

I'm sure there are a number of other ways this can be handled as well, especially using AJAX. AJAX rendering might simplify the logic, but it also complicates search engine optimization since there's no content loaded initially. So there are always tradeoffs, and it's important to look at all angles before deciding on any sort of caching solution in general.
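To make the first of these alternatives a little more concrete, a bare-bones version of the partial-rendering variant might look like the sketch below. The list_partial URL is hypothetical and the markup handling is simplified - the point is only that the initial render and the cached render share one code path:

// Sketch: fill the list either from sessionStorage or from a partial view,
// so initial rendering and cache restoration run through the same code.
function loadList() {
    var cached = window.sessionStorage ? sessionStorage.getItem("list_html") : null;
    if (cached) {
        $("#SizingContainer").html(cached);     // restore previously rendered HTML
        return;
    }

    // hypothetical endpoint returning just the rendered list markup
    $.get("list_partial", function (html) {
        $("#SizingContainer").html(html);
        if (window.sessionStorage)
            sessionStorage.setItem("list_html", html);  // cache for the next visit
    });
}

$(function () {
    loadList();
});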
State of the Session

SessionState and LocalStorage are easy to use in client code and can be integrated even with server centric applications to provide nice caching features for content and data. In this post I've shown a very specific scenario of storing HTML content for the purpose of remembering list view data and state and making the browsing experience for lists a bit more friendly, especially if there's dynamically loaded content involved. If you haven't played with sessionStorage or localStorage I encourage you to give it a try. There's a lot of cool stuff that you can do with this beyond the specific scenario I've covered here…

Resources

Overview of localStorage (also applies to sessionStorage)
Web Storage Compatibility
Modernizr Test Suite

© Rick Strahl, West Wind Technologies, 2005-2013
Posted in JavaScript  HTML5  ASP.NET  MVC

    Read the article

< Previous Page | 100 101 102 103 104 105 106 107 108 109 110 111  | Next Page >