Search Results

Search found 11888 results on 476 pages for 'hero vs zero'.


  • ASP.Net aspx markup

    - by Batuta
    I am working on an old Web Forms application. When I switch from design view to source view of the aspx page, the aspx markup becomes rearranged. For example, a label is written as follows: <asp:label id="Label20" style="Z-INDEX: 119; LEFT: 16px; POSITION: absolute; TOP: 424px" runat="server" Height="24px" Width="72px">Instructions:</asp:label> After toggling from design to source it suddenly becomes: <asp:label id="Label20" style="Z-INDEX: 119; LEFT: 16px; POSITION: absolute; TOP: 424px" runat="server" Height="24px" Width="72px">Instructions:</asp:label> Notice that the alignment, margins, and tab stops have changed. Any idea how to prevent VS from doing this? Thanks.

    Read the article

  • git destroyed my changes

    - by mare
    I made a commit in my repository a week ago but never actually pushed it to the remote on GitHub; I did that today. In the time since that commit I made many changes to the source. Only the initial commit was pushed to the remote, and in the process my local files were overwritten as well. What can I do to get my current files back? For better understanding, this is what I've done: created a new VS project and a new git repository in it; performed an initial scan, stage, and commit but without adding a remote or pushing; worked on the files for a week; (today) forgot to rescan, stage, and commit again, just created a new GitHub repository and ran: git remote add origin [email protected]:myaccount/webshop.git git push origin master Now the files in the GitHub repository are the ones from the initial commit, and those were also copied over my current files, so locally I'm back at the initial commit state too, which is awful. Help appreciated.
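
    A hedged sketch of the commands commonly used in this situation to look for any work git may still be holding (they only help if the later changes were ever staged or committed locally; nothing below is specific to this repository):

        # Show every commit HEAD has recently pointed at (finds "lost" local commits).
        git reflog

        # Look for dangling blobs/commits that are no longer referenced; anything
        # found is written under .git/lost-found/ for inspection.
        git fsck --lost-found
        ls .git/lost-found/other/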

    Read the article

  • Generic SQL builder .NET

    - by Patrick
    I'm looking for a way to write an SQL statement in C# that targets different providers. A typical example of SQL syntax differing between providers is LIMIT in PostgreSQL vs. TOP in MSSQL. Is the only way to handle differences like these to write if-statements depending on which provider the user selects, or to use try/catch as flow control (LIMIT didn't work, so I'll try TOP instead)? I've seen the LINQ Take method, but I'm wondering if this can be done without LINQ. In other words, does C# have some generic SQL provider class that I have failed to find?
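
    For reference, ADO.NET does not ship a dialect-aware SQL builder, so the if-statement approach the question mentions is usually wrapped in a small helper keyed on the provider invariant name. A minimal sketch (the class name and the handful of provider strings shown are illustrative, not a built-in API):

        using System;

        public static class SqlDialect
        {
            // providerName is assumed to be the ADO.NET invariant name the user selected,
            // e.g. "System.Data.SqlClient" or "Npgsql".
            public static string SelectTop(string providerName, string table, int count)
            {
                switch (providerName)
                {
                    case "System.Data.SqlClient":
                        return string.Format("SELECT TOP {0} * FROM {1}", count, table);
                    case "Npgsql":
                        return string.Format("SELECT * FROM {0} LIMIT {1}", table, count);
                    default:
                        throw new NotSupportedException("Unknown provider: " + providerName);
                }
            }
        }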

    Read the article

  • Why are namespaces acting up in Visual Studio 2010?

    - by duluca
    I've just converted a project to VS 2010 and something really weird is going on with namespaces. Let me give an example; the following code used to work in VS2008: namespace MySystem.Core.Object { using MySystem.Core.OtherObject; ... } But now it doesn't. It either wants the using directive moved outside of the namespace, like this: using MySystem.Core.OtherObject; namespace MySystem.Core.Object { ... } or rewritten like this: namespace MySystem.Core.Object { using OtherObject; ... } I understand why this works, and maybe it is the correct way of handling this, but now we'd have to change thousands of lines of code! Which is not cool. Any idea how to circumvent this requirement?

    Read the article

  • Check If Stored Procedure Returns Value

    - by Eric
    Hello all, I am using Linq 2 Sql in VS 2010, and I have the following stored procedure to check a username and password ALTER PROCEDURE dbo.CheckUser ( @username varchar(50), @password varchar(50) ) AS SELECT * FROM Users Where UserName=@username AND Password=@password The problem I'm having is that it throws an exception if the username and password are incorrect. I'd like to perform a check to see if there is a return value, rather than using try/catch to determine whether the procedure returned a value. Should I do this check in code (C#)? Or is there a way to do it in SQL? Thanks.
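
    For reference, when the Linq to SQL designer maps a stored procedure it generates a method on the DataContext that returns an enumerable result set, so an empty result can be tested directly instead of relying on an exception. A hedged sketch (the DataContext and method names below are whatever the designer generated in your project; username and password come from the login form):

        using System.Linq;

        using (var db = new UsersDataContext())   // hypothetical generated DataContext
        {
            // CheckUser is the designer-generated wrapper for dbo.CheckUser.
            var match = db.CheckUser(username, password).FirstOrDefault();
            bool isValid = match != null;          // no row returned => invalid credentials
        }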

    Read the article

  • Difference between local and instance variables in ruby

    - by fflyer05
    I am working on a script that creates several fairly complex nested hash data structures and then iterates through them, conditionally creating database records. This is a standalone script using ActiveRecord. After several minutes of running I noticed a significant lag in server responsiveness and discovered that the script, despite being niced to +19, was using a steady 85%-90% of total server memory. In this case I am using instance variables simply for readability: it helps to know what is going to be re-used outside of the loop vs. what won't. Is there a reason not to use instance variables when they are not needed? Are there differences in memory allocation and management between local and instance variables? Would it help to set @variable = nil when it's no longer needed?
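
    A small Ruby sketch of the idea in that last question: once the only reference is cleared, the structure becomes eligible for garbage collection (build_structure and persist below are hypothetical stand-ins for the script's own methods):

        @data = build_structure        # large nested hash
        persist(@data)

        @data = nil                    # drop the only reference to the hash
        GC.start                       # hint the collector; memory may not return to the OS immediately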

    Read the article

  • How to set AssemblyInfo.cs based on the tfs project build number?

    - by Ahok Rudraraju
    The project is hosted on a TFS server and I need to access the build number, which I assume is automatically generated whenever you build a project. I need to retrieve that build number and display it on the web pages so that QA and testing people know exactly which build they are working on. I found how to create customized build numbers at the following link: http://msdn.microsoft.com/en-us/library/aa395241(v=vs.100).aspx but it does not solve my problem, as I do not have access to the build definition file. I am looking for some kind of post-deployment task that can access the build number, or maybe generate one, and write it to a file from where I can read it. I don't know if that makes sense, as this is my first time working on .NET.
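
    For the display side: once a build number has been stamped into AssemblyInfo.cs (however that ends up happening), reading it back at runtime is straightforward. A minimal sketch; buildLabel is an illustrative page control, and this does not by itself pull the TFS build number:

        using System.Reflection;

        // Reads the version compiled into the assembly (from AssemblyVersion in AssemblyInfo.cs).
        string buildVersion = Assembly.GetExecutingAssembly().GetName().Version.ToString();
        buildLabel.Text = "Build " + buildVersion;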

    Read the article

  • Timezone settings in MySQL - Using NOW()?

    - by matt74tm
    Somewhat related to 'Doing calculations in MySQL vs PHP'. Right now, our database assumes that the system time is in UTC and uses that to calculate NOW(). PHP explicitly sets the timezone to UTC (so it's impervious to server time zone shifts). An accidental shift of time zones on the server messed this relationship up at the database level, and I'm now trying to figure out the ideal configuration: configure MySQL to be in UTC, but with the constraint that our application may be on someone else's server where they might have a different TZ (so I can't set the timezone at the MySQL/server level). How do I configure it at the specific database level?
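
    For context, MySQL's time_zone variable exists at the global and session level rather than per database, so the usual workaround is for the application to pin the session time zone on every connection it opens. An illustrative snippet:

        -- Run once per connection (e.g. right after connecting from PHP) so that
        -- NOW() and other time functions are evaluated in UTC for this session only.
        SET time_zone = '+00:00';
        SELECT NOW();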

    Read the article

  • unable to start program problem in VS2008_C++

    - by epsilon_G
    Hi. I was trying to build some projects but I couldn't compile them. At first I suspected the code, so I reduced it to the simplest possible "hello world" program, but the same thing happens: I still can't compile it. Someone else used my copy of VS, and when he compiled a program he answered "No" to a dialog box that popped up. I think the problem is in the configuration of the IDE; the error is "The system cannot find the file specified". Unfortunately, I took screenshots of the problem on my PC but I can't post them here.

    Read the article

  • Can single-buffer blocking WSASend deliver partial data?

    - by CodeAngry
    I've pretty much always used send() with sockets and now I'm moving on to the WSA functions. With send(), I have a sendall() helper that ensures all data is delivered even if it doesn't happen in one try and a partial send occurs on the first call. So, instead of learning the hard way or over-complicating code when I don't have to, I decided to ask: can a blocking WSASend() send partial data, or does it send everything before it returns or fails? Or should I check the bytes sent vs. the bytes expected and keep at it until everything is delivered? ANSWER: Overlapped WSASend() does not send partial data; if it ever does, it means the connection has terminated. I've never encountered that case yet.
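
    For reference, this is roughly what the sendall() helper mentioned above looks like with the classic send() API; the same retry pattern applies if you decide to check the bytes-sent count reported by WSASend() (the function name and error handling are illustrative):

        #include <winsock2.h>

        // Keep calling send() until every byte has been reported sent or an error occurs.
        bool sendAll(SOCKET s, const char* data, int len)
        {
            int total = 0;
            while (total < len)
            {
                int sent = send(s, data + total, len - total, 0);
                if (sent == SOCKET_ERROR)
                    return false;           // caller can inspect WSAGetLastError()
                total += sent;
            }
            return true;
        }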

    Read the article

  • what's the job of std::unique_lock when used with std::condition_variable::wait()

    - by Mike
    I'm quite confused about the need for a std::unique_lock when waiting on a std::condition_variable, so I looked into the library code in VS 2013 and got more confused. This is how std::condition_variable::wait() is implemented:

        void wait(unique_lock<mutex>& _Lck)
        {   // wait for signal
            _Cnd_waitX(&_Cnd, &_Lck.mutex()->_Mtx);
        }

    Is this some kind of joke? Wrap a mutex in a unique_lock and do nothing but get it back later? Why not just take a mutex in the parameter list?
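
    To illustrate what the question is getting at: wait() has to unlock the mutex while the thread sleeps and re-lock it before returning, so it needs a lock object it can unlock and re-lock rather than a bare mutex. A standard usage sketch:

        #include <condition_variable>
        #include <mutex>

        std::mutex m;
        std::condition_variable cv;
        bool ready = false;

        void consumer()
        {
            std::unique_lock<std::mutex> lock(m);
            cv.wait(lock, [] { return ready; });   // mutex released while waiting,
                                                   // re-acquired before wait() returns
            // ... safely read the shared state here, still holding the lock ...
        }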

    Read the article

  • Organizing c# code into different files

    - by Adam S
    Hi everyone. I've gotten to the point where my main code file is about a thousand lines long and it's getting unmanageable; that is, I'm starting to get confused and not know where to find things. It's well commented, but there's just too much stuff. I'd really like to organize my code into different files, each with its own purpose, while still getting all the help VS gives me as I type when I edit those other files. (A screenshot illustrating the project was attached to the original question.) Is what I'm trying to do even possible?
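
    One mechanism often suggested for exactly this situation is C#'s partial classes, which let a single class span several files while keeping full IntelliSense in each of them. A hedged sketch (the class and file names are illustrative):

        // File: Game.Core.cs
        public partial class Game
        {
            private int score;
            public void Reset() { score = 0; }
        }

        // File: Game.Rendering.cs
        public partial class Game
        {
            public void Draw()
            {
                // members declared in the other file, such as score, are visible here
            }
        }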

    Read the article

  • Java exception translations

    - by user3079275
    Apologies if this has been discussed in other threads, but I find it helps clarify my thinking when I am forced to write down my questions. I am trying to properly understand the concept of checked vs. unchecked exceptions and exception translation in Java, but I am getting confused. So far I understand that checked exceptions are exceptions that always need to be caught in a try/catch block, otherwise I get a compile-time error. This is to force programmers to think about abnormal situations that might happen at run time (like a full disk, etc.). Is this right? What I did not get is why we have unchecked exceptions; when are they useful? Is it only during development, to debug code that might access an illegal array index, etc.? This confusion is because I see that Error is also unchecked, as is RuntimeException, but it's not clear to me why they are both lumped together into an unchecked category.
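
    A small illustration of the distinction being asked about: the compiler forces a checked exception such as IOException to be caught or declared, while an unchecked one such as ArrayIndexOutOfBoundsException (a subclass of RuntimeException) compiles without any handling:

        import java.io.FileReader;
        import java.io.IOException;

        public class ExceptionDemo {
            // Checked: will not compile unless IOException is declared or caught.
            static void readFirstByte(String path) throws IOException {
                try (FileReader reader = new FileReader(path)) {
                    reader.read();
                }
            }

            // Unchecked: compiles fine, but may throw ArrayIndexOutOfBoundsException at run time.
            static int firstElement(int[] values) {
                return values[0];
            }
        }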

    Read the article

  • JavaScript: String Concatenation slow performance? Array.join('')?

    - by NickNick
    I've read that if I have a for loop, I should not use string concatenation because it's slow. Such as: var str = ''; for (i=0;i<10000000;i++) { str += 'a'; } And instead, I should use Array.join(), since it's much faster: var tmp = []; for (i=0;i<10000000;i++) { tmp.push('a'); } var str = tmp.join(''); However, I have also read that string concatenation is ONLY a problem for Internet Explorer, and that browsers such as Safari/Chrome, which use WebKit, actually perform FASTER using string concatenation than Array.join(). I've attempted to find a performance comparison of string concatenation vs. Array.join() across all major browsers and haven't been able to find one. So which is faster and more efficient JavaScript: string concatenation or Array.join()?
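
    Absent a published comparison, a quick (and unscientific) way to measure both approaches in whatever browser you care about is the console timer; the numbers vary widely by engine:

        console.time('concat');
        var s = '';
        for (var i = 0; i < 1000000; i++) { s += 'a'; }
        console.timeEnd('concat');

        console.time('join');
        var parts = [];
        for (var j = 0; j < 1000000; j++) { parts.push('a'); }
        var joined = parts.join('');
        console.timeEnd('join');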

    Read the article

  • Why is it called NoSQL?

    - by beef jerky
    I've recently worked with MongoDB and learned about its schemaless design. However, I'm confused by the term NoSQL. Why is it called that? Doesn't it use SQL or SQL-like queries? I've also read in an article that the main difference lies in how data is stored; in the case of MongoDB, it's stored as JSON-like documents. Is this true? Also, I'm confused about why I always see 'NoSQL vs. relational databases'. Aren't NoSQL databases relational? I believe documents in MongoDB are still related/linked through some keys (please correct me if I'm wrong). So why is it labeled non-relational? Thanks in advance!
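
    On the "doesn't it use SQL?" part: MongoDB's query language is not SQL; queries are expressed as documents in its own shell/driver syntax. A side-by-side illustration of the same lookup (the users collection is illustrative):

        // SQL (relational database):
        //   SELECT * FROM users WHERE age > 21;

        // MongoDB shell equivalent -- a query document, not an SQL string:
        db.users.find({ age: { $gt: 21 } });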

    Read the article

  • Extra line breaks inserted in MrEd text%

    - by Jesse Millikan
    In a DrScheme project, I'm using a MrEd editor-canvas% with text% and inserting a string from a literal in a Scheme file. This results in an extra blank line in the editor for each line of text I'm trying to insert. Is this a Windows vs. Unix line-break problem? I can't find anything in the documentation about how text% treats line breaks.

        ; Inside a class definition:
        (define/public (edit-pattern p j b d h)
          (send input-beat set-value (number->string b))
          (send input-dwell set-value (number->string d))
          (send hold-beats set-value (number->string h))
          (send juggler-t erase)
          ; Why do these add extra newlines?
          (send juggler-t insert j)
          (send pattern-t erase)
          (send pattern-t insert p))

        (define juggler-ec (new editor-canvas% [parent this] [line-count 12]))
        (define juggler-t (new text%))
        (send juggler-ec set-editor juggler-t)

        (define pattern-ec (new editor-canvas% [parent this] [line-count 20]))
        (define pattern-t (new text%))
        (send pattern-ec set-editor pattern-t)

        ; Lots of other stuff...
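
    If it does turn out to be a Windows vs. Unix line-ending issue, one workaround is to strip carriage returns from the string before inserting it. A small sketch using the standard regexp-replace* procedure (strip-cr is just an illustrative helper name):

        ; Remove carriage returns so only \n line breaks reach the text% editor.
        (define (strip-cr s)
          (regexp-replace* "\r" s ""))

        (send juggler-t insert (strip-cr j))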

    Read the article

  • Week in Geek: FBI Back Door in OpenBSD Edition

    - by Asian Angel
    This week we learned how to migrate bookmarks from Delicious to Diigo, fix annoying arrows, play old-school DOS games, schedule smart computer shutdowns, use breaks in Microsoft Word to better format documents, check the condition of hard-disks using Linux disk utilities, & what the Linux fstab is and how it works. Photo by Jameson42. Random Geek Links Another week with extra news link goodness to help keep you up to date. Photo by justmakeit. Report of FBI back door roils OpenBSD community Allegations that the FBI surreptitiously placed a back door into the OpenBSD operating system have alarmed the computer security community, prompting calls for an audit of the source code and claims that the charges must be a hoax. Fortinet: Job outlook improving for cybercrooks In an ironic twist in the job market, more positions will open up for developers who can write customized malware packers, people who can break CAPTCHA codes, and distributors who can spread malicious code, according to Fortinet. Enisa: Malware for smartphones is a ’serious risk’ Businesses and consumers are at risk of data breaches through smartphone use, according to the European Network and Information Security Agency. The trick with the f: Google and Microsoft web sites distribute malware Last week, Google’s DoubleClick advertising platform and Microsoft’s rad.msn.com online ad network briefly distributed malware to other web sites in the form of advertising banners. New scam tactic: Fake disk defraggers It would appear that scammers are trying out new programs to see which might best confuse potential victims and evade detection by legitimate antivirus software. Microsoft closes IE and Stuxnet holes As previously announced, Microsoft has released 17 security updates to close 40 security holes. All four Windows holes so far disclosed in connection with Stuxnet have now been closed. Microsoft Offers H.264 Support to Firefox on Windows via Add-On The new HTML5 Extension for Windows Media Player Firefox Plug-in add-on from Microsoft offers users that are running Firefox on Windows 7 H.264 support for HTML5 video playback. Google proclaims Chrome business-ready Google has announced that Chrome is ready for corporate use. Microsoft Tells Exchange Customers to Think Twice Before Opting for Google Message Continuity This week, Microsoft is telling companies still running Exchange 2010’s precursors that they should carefully consider the implications of embracing Google Message Continuity. Who Google has in mind for its Chrome OS users Steven Vaughan-Nichols explains why he feels that Chrome OS will be ideal for either office-workers or people who need a computer, but do not know the first thing about how to use one safely. Oracle takes office suite to the cloud Oracle has introduced Cloud Office 1.0, a cloud-based version of its office suite, which is aimed at web and mobile users. Mozilla pays premiums for reports of vulnerabilities The Mozilla Foundation has followed Google’s example by expanding its rewards program for reports of vulnerabilities in its Web applications. Who bought those 882 Novell patents? Not just Microsoft The mysterious CPTN Holdings — the organization that bought the 882 Novell patents as part of the terms of the Attachmate acquisition of Novell – has been unmasked (Microsoft, Apple, EMC and Oracle). 
Appeals court: Feds need warrants for e-mail Police must obtain search warrants before perusing Internet users’ e-mail records, a federal appeals court ruled today in a landmark decision that struck down part of a 1986 law allowing warrantless access. Geek Video of the Week What happens when someone plays a wicked prank by shoveling crazy snow paths that lead to dead ends or turn back on themselves? Watch to find out! Photo by CollegeHumor. Janitor Snow Shoveling Prank Random TinyHacker Links The Oatmeal on Cat vs Internet What lengths will our poor neglected kitty hero have to go to in order to get some attention? Guide On Using JoliCloud With Windows JoliCloud is a nifty operating system that’s made for people who need a light-weight OS that’s mostly cloud based. Check this guide on using it with Windows. Use Cameyo to Easily Create Portable Programs Here’s a nifty tool to make portable apps out of programs in Windows. Check out the guide to do it. Better Family Tech Support A nice new site by Google to help members of family understand how computers work. Track Your Stolen Mobile Phone With F-Secure A useful anti-theft tool for your mobile phone. Super User Questions Another week with great answers to popular questions from Super User. What Chrome password manager fits my requirements? What’s the best way to be able to reimage windows computers? Could you suggest feature-rich disk-based personal backup program for linux (and I’ve seen a few)? What is IPv6 and why should I care? Is there any way to find out what programs are trying to connect to Internet on windows? How-To Geek Weekly Article Recap Here are our hottest articles full of geeky goodness from this past week at HTG. 20 OS X Keyboard Shortcuts You Might Not Know Microsoft Security Essentials 2.0 Kills Viruses Dead. Download It Now. Is Your Desktop Printer More Expensive Than Printing Services? Ask the Readers: Would You Be Willing to Give Windows Up and Use a Different O.S.? The Twelve Days of Geekmas One Year Ago on How-To Geek Enjoy reading through our latest batch of retro-geek goodness from one year ago. Macrium Reflect is a Free and Easy To Use Backup Utility How To Turn a Physical Computer Into A Virtual Machine with Disk2vhd How To Restore Windows 7 from a System Image How To Manage Hard Drive Space Used by Windows 7 Backup and Restore How To Manage Hibernate Mode in Windows 7 The Geek Note That is all we have for you this week, so see you back here again after the holidays! Got a great tip? Send it in to us at [email protected]. Photo by mitjamavsar. Latest Features How-To Geek ETC The Complete List of iPad Tips, Tricks, and Tutorials The 50 Best Registry Hacks that Make Windows Better The How-To Geek Holiday Gift Guide (Geeky Stuff We Like) LCD? LED? Plasma? The How-To Geek Guide to HDTV Technology The How-To Geek Guide to Learning Photoshop, Part 8: Filters Improve Digital Photography by Calibrating Your Monitor Deathwing the Destroyer – WoW Cataclysm Dragon Wallpaper Drag2Up Lets You Drag and Drop Files to the Web With Ease The Spam Police Parts 1 and 2 – Goodbye Spammers [Videos] Snow Angels Theme for Windows 7 Exploring the Jungle Ruins Wallpaper Protect Your Privacy When Browsing with Chrome and Iron Browser

    Read the article

  • To SYNC or not to SYNC – Part 3

    - by AshishRay
    I can't believe it has been almost a year since my last blog post. I know, that's an absolute no-no in the blogosphere. And I know that "I have been busy" is not a good excuse. So - without trying to come up with an excuse - let me state this - my apologies for taking such a long time to write the next Part. Without further ado, here goes. This is Part 3 of a multi-part blog article where we are discussing various aspects of setting up Data Guard synchronous redo transport (SYNC). In Part 1 of this article, I debunked the myth that Data Guard SYNC is similar to a two-phase commit operation. In Part 2, I discussed the various ways that network latency may or may not impact a Data Guard SYNC configuration. In this article, I will talk in details regarding why Data Guard SYNC is a good thing. I will also talk about distance implications for setting up such a configuration. So, Why Good? Why is Data Guard SYNC a good thing? Because, at the end of the day, this gives you the assurance of zero data loss - it doesn’t matter what outage may befall your primary system. Befall! Boy, that sounds theatrical. But seriously - think about this - it minimizes your data risks. That’s a big deal. Whether you have an outage due to bad disks, faulty hardware components, hardware / software bugs, physical data corruptions, power failures, lightning that takes out significant part of your data center, fire that melts your assets, water leakage from the cooling system, human errors such as accidental deletion of online redo log files - it doesn’t matter - you can have that “Om - peace” look on your face and then you can failover to the standby system, without losing a single bit of data in your Oracle database. You will be a hero, as shown in this not so imaginary conversation: IT Manager: Well, what’s the status? You: John is doing the trace analysis on the storage array. IT Manager: So? How long is that gonna take? You: Well, he is stuck, waiting for a response from <insert your not-so-favorite storage vendor here>. IT Manager: So, no root cause yet? You: I told you, he is stuck. We have escalated with their Support, but you know how long these things take. IT Manager: Darn it - the site is down! You: Not really … IT Manager: What do you mean? You: John is stuck, but Sreeni has already done a failover to the Data Guard standby. IT Manager: Whoa, whoa - wait! Failover means we lost some data, why did you do this without letting the Business group know? You: We didn’t lose any data. Remember, we had set up Data Guard with SYNC? So now, any problems on the production – we just failover. No data loss, and we are up and running in minutes. The Business guys don’t need to know. IT Manager: Wow! Are we great or what!! You: I guess … Ok, so you get it - SYNC is good. But as my dear friend Larry Carpenter says, “TANSTAAFL”, or "There ain't no such thing as a free lunch". Yes, of course - investing in Data Guard SYNC means that you have to invest in a low-latency network, you have to monitor your applications and database especially in peak load conditions, and you cannot under-provision your standby systems. But all these are good and necessary things, if you are supporting mission-critical apps that are supposed to be running 24x7. The peace of mind that this investment will give you is priceless, especially if you are serious about HA. How Far Can We Go? Someone may say at this point - well, I can’t use Data Guard SYNC over my coast-to-coast deployment. Most likely - true. So how far can you go? 
Well, we have customers who have deployed Data Guard SYNC over 300+ miles! Does this mean that you can also deploy over similar distances? Duh - no! I am going to say something here that most IT managers don’t like to hear - “It depends!” It depends on your application design, application response time / throughput requirements, network topology, etc. However, because of the optimal way we do SYNC, customers have been able to stretch Data Guard SYNC deployments over longer distances compared to traditional, storage-centric ways of doing this. The MAA Database 10.2 best practices paper Data Guard Redo Transport & Network Configuration, and Oracle Database 11.2 High Availability Best Practices Manual talk about some of these SYNC-related metrics. For example, a test deployment of Data Guard SYNC over 330 miles with 10ms latency showed an impact less than 5% for a busy OLTP application. Even if you can’t deploy Data Guard SYNC over your WAN distance, or if you already have an ASYNC standby located 1000-s of miles away, here’s another nifty way to boost your HA. Have a local standby, configured SYNC. How local is “local”? Again - it depends. One customer runs a local SYNC standby across the campus. Another customer runs it across 15 miles in another data center. Both of these customers are running Data Guard SYNC as their HA standard. If a localized outage affects their primary system, no problem! They have all the data available on the standby, to which they can failover. Very fast. In seconds. Wait - did I say “seconds”? Yes, Virginia, there is a Santa Claus. But you have to wait till the next blog article to find out more. I assure you tho’ that this time you won’t have to wait for another year for this.

    Read the article

  • Apache segfault glibc segfault

    - by tester
    I keep getting (about every 5-6 hours) this segfault in apache: [Tue Jun 26 12:43:10 2012] [notice] child pid 26810 exit signal Aborted (6) *** glibc detected *** /usr/sbin/apache2: free(): invalid pointer: 0xb68c2628 *** ======= Backtrace: ========= /lib/i386-linux-gnu/libc.so.6(+0x6ff22)[0xb75aef22] /lib/i386-linux-gnu/libc.so.6(+0x70bc2)[0xb75afbc2] /lib/i386-linux-gnu/libc.so.6(cfree+0x6d)[0xb75b2cad] /usr/lib/apache2/modules/libphp5.so(destroy_zend_class+0x228)[0xb5d40518] /usr/lib/apache2/modules/libphp5.so(zend_hash_clean+0x77)[0xb5d58957] /usr/lib/php5/220100525+lfs/apc.so(apc_interned_strings_shutdown+0x32)[0xb64930b2] /usr/lib/apache2/modules/libphp5.so(+0x318ff0)[0xb5d56ff0] /usr/lib/apache2/modules/libphp5.so(zend_hash_graceful_reverse_destroy+0x27)[0xb5d58a67] /usr/lib/apache2/modules/libphp5.so(zend_destroy_modules+0x3c)[0xb5d506cc] /usr/lib/apache2/modules/libphp5.so(+0x30c743)[0xb5d4a743] /usr/lib/apache2/modules/libphp5.so(php_module_shutdown+0x42)[0xb5ce5172] /usr/lib/apache2/modules/libphp5.so(php_module_shutdown_wrapper+0x17)[0xb5ce5257] /usr/lib/apache2/modules/libphp5.so(+0x3bebe1)[0xb5dfcbe1] /usr/lib/libapr-1.so.0(+0x19846)[0xb76f2846] /usr/lib/libapr-1.so.0(apr_pool_destroy+0x52)[0xb76f19ec] /usr/sbin/apache2(+0x4ccee)[0xb77eccee] ======= Memory map: ======== b2e18000-b2e2c000 rw-s 00000000 00:04 8841030 /dev/zero (deleted) b2e2c000-b2eaa000 rw-s 00000000 00:04 8841029 /dev/zero (deleted) b2eaa000-b2eab000 ---p 00000000 00:00 0 b2eab000-b36ab000 rw-p 00000000 00:00 0 b5900000-b5921000 rw-p 00000000 00:00 0 b5921000-b5a00000 ---p 00000000 00:00 0 b5a3e000-b60bd000 r-xp 00000000 ca:00 44137 /usr/lib/apache2/modules/libphp5.so b60bd000-b611e000 r--p 0067f000 ca:00 44137 /usr/lib/apache2/modules/libphp5.so b611e000-b6123000 rw-p 006e0000 ca:00 44137 /usr/lib/apache2/modules/libphp5.so b6123000-b6142000 rw-p 00000000 00:00 0 b6142000-b6147000 r-xp 00000000 ca:00 24570 /lib/i386-linux-gnu/libnss_dns-2.13.so b6147000-b6148000 r--p 00004000 ca:00 24570 /lib/i386-linux-gnu/libnss_dns-2.13.so b6148000-b6149000 rw-p 00005000 ca:00 24570 /lib/i386-linux-gnu/libnss_dns-2.13.so b6149000-b6175000 rw-p 00000000 00:00 0 b6175000-b6180000 r-xp 00000000 ca:00 24572 /lib/i386-linux-gnu/libnss_files-2.13.so b6180000-b6181000 r--p 0000a000 ca:00 24572 /lib/i386-linux-gnu/libnss_files-2.13.so b6181000-b6182000 rw-p 0000b000 ca:00 24572 /lib/i386-linux-gnu/libnss_files-2.13.so b6182000-b618c000 r-xp 00000000 ca:00 24576 /lib/i386-linux-gnu/libnss_nis-2.13.so b618c000-b618d000 r--p 00009000 ca:00 24576 /lib/i386-linux-gnu/libnss_nis-2.13.so b618d000-b618e000 rw-p 0000a000 ca:00 24576 /lib/i386-linux-gnu/libnss_nis-2.13.so b618e000-b6196000 r-xp 00000000 ca:00 24562 /lib/i386-linux-gnu/libnss_compat-2.13.so b6196000-b6197000 r--p 00007000 ca:00 24562 /lib/i386-linux-gnu/libnss_compat-2.13.so b6197000-b6198000 rw-p 00008000 ca:00 24562 /lib/i386-linux-gnu/libnss_compat-2.13.so b6198000-b6270000 rw-p 00000000 00:00 0 b6270000-b6274000 rw-p 00000000 00:00 0 b6468000-b6474000 rw-p 00000000 00:00 0 b6475000-b6479000 rw-p 00000000 00:00 0 b6479000-b649a000 r-xp 00000000 ca:00 65670 /usr/lib/php5/220100525+lfs/apc.so b649a000-b649b000 r--p 00021000 ca:00 65670 /usr/lib/php5/220100525+lfs/apc.so b649b000-b649c000 rw-p 00022000 ca:00 65670 /usr/lib/php5/220100525+lfs/apc.so b649c000-b64a1000 rw-p 00000000 00:00 0 b64a1000-b64a6000 rw-p 00000000 00:00 0 b64a7000-b64aa000 rw-p 00000000 00:00 0 b64aa000-b64af000 rw-p 00000000 00:00 0 b64b0000-b64b3000 rw-p 00000000 00:00 0 b64bf000-b64c4000 rw-p 
00000000 00:00 0 b64c4000-b64c9000 rw-p 00000000 00:00 0 b64c9000-b64cc000 rw-p 00000000 00:00 0 b64cd000-b64cf000 rw-p 00000000 00:00 0 b64ea000-b64fd000 r-xp 00000000 ca:00 24598 /lib/i386-linux-gnu/libresolv-2.13.so b64fd000-b64fe000 r--p 00012000 ca:00 24598 /lib/i386-linux-gnu/libresolv-2.13.so b64fe000-b64ff000 rw-p 00013000 ca:00 24598 /lib/i386-linux-gnu/libresolv-2.13.so b64ff000-b6501000 rw-p 00000000 00:00 0 b650e000-b652a000 r-xp 00000000 ca:00 22450 /lib/i386-linux-gnu/libgcc_s.so.1 b652a000-b652b000 r--p 0001b000 ca:00 22450 /lib/i386-linux-gnu/libgcc_s.so.1 b652b000-b652c000 rw-p 0001c000 ca:00 22450 /lib/i386-linux-gnu/libgcc_s.so.1 b652c000-b6534000 rw-p 00000000 00:00 0 b65dd000-b65df000 rw-p 00000000 00:00 0 b67ad000-b67c2000 r-xp 00000000 ca:00 22063 /lib/i386-linux-gnu/libnsl-2.13.so b67c2000-b67c3000 r--p 00015000 ca:00 22063 /lib/i386-linux-gnu/libnsl-2.13.so b67c3000-b67c4000 rw-p 00016000 ca:00 22063 /lib/i386-linux-gnu/libnsl-2.13.so b67c4000-b67c6000 rw-p 00000000 00:00 0 b67c6000-b67ee000 r-xp 00000000 ca:00 21904 /lib/i386-linux-gnu/libm-2.13.so b67ee000-b67ef000 r--p 00028000 ca:00 21904 /lib/i386-linux-gnu/libm-2.13.so b67ef000-b67f0000 rw-p 00029000 ca:00 21904 /lib/i386-linux-gnu/libm-2.13.so b67f0000-b67f7000 r-xp 00000000 ca:00 24600 /lib/i386-linux-gnu/librt-2.13.so b67f7000-b67f8000 r--p 00006000 ca:00 24600 /lib/i386-linux-gnu/librt-2.13.so b67f8000-b67f9000 rw-p 00007000 ca:00 24600 /lib/i386-linux-gnu/librt-2.13.so b6886000-b69af000 rw-p 00000000 00:00 0 b69af000-b6b3c000 r-xp 00000000 ca:00 23592 /lib/i386-linux-gnu/libcrypto.so.1.0.0 b6b3c000-b6b4a000 r--p 0018d000 ca:00 23592 /lib/i386-linux-gnu/libcrypto.so.1.0.0 b6b4a000-b6b50000 rw-p 0019b000 ca:00 23592 /lib/i386-linux-gnu/libcrypto.so.1.0.0 b6b50000-b6b53000 rw-p 00000000 00:00 0 b6b53000-b6b9b000 r-xp 00000000 ca:00 23621 /lib/i386-linux-gnu/libssl.so.1.0.0 b6b9b000-b6b9d000 r--p 00047000 ca:00 23621 /lib/i386-linux-gnu/libssl.so.1.0.0 b6b9d000-b6ba0000 rw-p 00049000 ca:00 23621 /lib/i386-linux-gnu/libssl.so.1.0.0 b6ba0000-b6c7e000 r-xp 00000000 ca:00 9878 /usr/lib/i386-linux-gnu/libstdc++.so.6.0.16 b6c7e000-b6c7f000 ---p 000de000 ca:00 9878 /usr/lib/i386-linux-gnu/libstdc++.so.6.0.16 b6c7f000-b6c83000 r--p 000de000 ca:00 9878 /usr/lib/i386-linux-gnu/libstdc++.so.6.0.16 b6c83000-b6c84000 rw-p 000e2000 ca:00 9878 /usr/lib/i386-linux-gnu/libstdc++.so.6.0.16 b6c84000-b6c8b000 rw-p 00000000 00:00 0 b6c93000-b6cd4000 rw-p 00000000 00:00 0 b6cd4000-b6ce0000 rw-p 00000000 00:00 0 b6cea000-b6cef000 r-xp 00000000 ca:00 45178 /usr/lib/apache2/modules/mod_status.so b6cef000-b6cf0000 r--p 00004000 ca:00 45178 /usr/lib/apache2/modules/mod_status.so b6cf0000-b6cf1000 rw-p 00005000 ca:00 45178 /usr/lib/apache2/modules/mod_status.so b6cf1000-b6d19000 r-xp 00000000 ca:00 45175 /usr/lib/apache2/modules/mod_ssl.so b6d19000-b6d1a000 ---p 00028000 ca:00 45175 /usr/lib/apache2/modules/mod_ssl.so b6d1a000-b6d1b000 r--p 00028000 ca:00 45175 /usr/lib/apache2/modules/mod_ssl.so b6d1b000-b6d1c000 rw-p 00029000 ca:00 45175 /usr/lib/apache2/modules/mod_ssl.so b6d1c000-b6d1e000 rw-p 00000000 00:00 0 b6d1e000-b6d20000 r-xp 00000000 ca:00 45166 /usr/lib/apache2/modules/mod_setenvif.so b6d20000-b6d21000 r--p 00001000 ca:00 45166 /usr/lib/apache2/modules/mod_setenvif.so b6d21000-b6d22000 rw-p 00002000 ca:00 45166 /usr/lib/apache2/modules/mod_setenvif.so b6d22000-b6d30000 r-xp 00000000 ca:00 45195 /usr/lib/apache2/modules/mod_rewrite.so b6d30000-b6d31000 r--p 0000e000 ca:00 45195 /usr/lib/apache2/modules/mod_rewrite.so 
b6d31000-b6d32000 rw-p 0000f000 ca:00 45195 /usr/lib/apache2/modules/mod_rewrite.so b6d32000-b6d45000 r-xp 00000000 ca:00 45168 /usr/lib/apache2/modules/mod_proxy.so b6d45000-b6d46000 r--p 00012000 ca:00 45168 /usr/lib/apache2/modules/mod_proxy.so b6d46000-b6d47000 rw-p 00013000 ca:00 45168 /usr/lib/apache2/modules/mod_proxy.so b6d47000-b6d4e000 r-xp 00000000 ca:00 9904 /usr/lib/i386-linux-gnu/libkrb5support.so.0.1 b6d4e000-b6d4f000 r--p 00006000 ca:00 9904 /usr/lib/i386-linux-gnu/libkrb5support.so.0.1 b6d4f000-b6d50000 rw-p 00007000 ca:00 9904 /usr/lib/i386-linux-gnu/libkrb5support.so.0.1 b6d50000-b6e97000 r-xp 00000000 ca:00 3416 /usr/lib/libxml2.so.2.7.8 b6e97000-b6e9b000 r--p 00147000 ca:00 3416 /usr/lib/libxml2.so.2.7.8 b6e9b000-b6e9c000 rw-p 0014b000 ca:00 3416 /usr/lib/libxml2.so.2.7.8 b6e9c000-b6e9d000 rw-p 00000000 00:00 0 b6e9d000-b6ec4000 r-xp 00000000 ca:00 12282 /usr/lib/i386-linux-gnu/libk5crypto.so.3.1 b6ec4000-b6ec5000 r--p 00026000 ca:00 12282 /usr/lib/i386-linux-gnu/libk5crypto.so.3.1 b6ec5000-b6ec6000 rw-p 00027000 ca:00 12282 /usr/lib/i386-linux-gnu/libk5crypto.so.3.1 b6ec6000-b6f88000 r-xp 00000000 ca:00 13335 /usr/lib/i386-linux-gnu/libkrb5.so.3.3 b6f88000-b6f8e000 r--p 000c1000 ca:00 13335 /usr/lib/i386-linux-gnu/libkrb5.so.3.3 b6f8e000-b6f8f000 rw-p 000c7000 ca:00 13335 /usr/lib/i386-linux-gnu/libkrb5.so.3.3 b6f8f000-b6fca000 r-xp 00000000 ca:00 9854 /usr/lib/i386-linux-gnu/libgssapi_krb5.so.2.2 b6fca000-b6fcb000 ---p 0003b000 ca:00 9854 /usr/lib/i386-linux-gnu/libgssapi_krb5.so.2.2 b6fcb000-b6fcc000 r--p 0003b000 ca:00 9854 /usr/lib/i386-linux-gnu/libgssapi_krb5.so.2.2 b6fcc000-b6fcd000 rw-p 0003c000 ca:00 9854 /usr/lib/i386-linux-gnu/libgssapi_krb5.so.2.2 b6fcd000-b6fdc000 r-xp 00000000 ca:00 21797 /lib/libbz2.so.1.0.4 b6fdc000-b6fdd000 r--p 0000e000 ca:00 21797 /lib/libbz2.so.1.0.4 b6fdd000-b6fde000 rw-p 0000f000 ca:00 21797 /lib/libbz2.so.1.0.4 b6fde000-b702a000 r-xp 00000000 ca:00 2505 /usr/lib/libqdbm.so.14.13.0 b702a000-b702b000 r--p 0004c000 ca:00 2505 /usr/lib/libqdbm.so.14.13.0 b702b000-b702c000 rw-p 0004d000 ca:00 2505 /usr/lib/libqdbm.so.14.13.0 b702c000-b71aa000 r-xp 00000000 ca:00 10201 /usr/lib/i386-linux-gnu/libdb-4.8.so b71aa000-b71ac000 r--p 0017d000 ca:00 10201 /usr/lib/i386-linux-gnu/libdb-4.8.so b71ac000-b71ad000 rw-p 0017f000 ca:00 10201 /usr/lib/i386-linux-gnu/libdb-4.8.so b71ad000-b71f7000 r-xp 00000000 ca:00 23521 /lib/libssl.so.0.9.8 b71f7000-b71f8000 r--p 0004a000 ca:00 23521 /lib/libssl.so.0.9.8 b71f8000-b71fb000 rw-p 0004b000 ca:00 23521 /lib/libssl.so.0.9.8 b71fb000-b7359000 r-xp 00000000 ca:00 835379 /lib/libcrypto.so.0.9.8 b7359000-b735a000 ---p 0015e000 ca:00 835379 /lib/libcrypto.so.0.9.8 b735a000-b7362000 r--p 0015e000 ca:00 835379 /lib/libcrypto.so.0.9.8 b7362000-b7371000 rw-p 00166000 ca:00 835379 /lib/libcrypto.so.0.9.8 b7371000-b7374000 rw-p 00000000 00:00 0 b7374000-b73ba000 r-xp 00000000 ca:00 2503 /usr/lib/libonig.so.2.0.0 b73ba000-b73bd000 rw-p 00045000 ca:00 2503 /usr/lib/libonig.so.2.0.0 b73be000-b73c0000 rw-p 00000000 00:00 0 b73c0000-b73c7000 r-xp 00000000 ca:00 45171 /usr/lib/apache2/modules/mod_proxy_http.so b73c7000-b73c8000 r--p 00006000 ca:00 45171 /usr/lib/apache2/modules/mod_proxy_http.so b73c8000-b73c9000 rw-p 00007000 ca:00 45171 /usr/lib/apache2/modules/mod_proxy_http.so b73c9000-b73dc000 r-xp 00000000 ca:00 22461 /lib/i386-linux-gnu/libz.so.1.2.3.4 b73dc000-b73dd000 r--p 00012000 ca:00 22461 /lib/i386-linux-gnu/libz.so.1.2.3.4 b73dd000-b73de000 rw-p 00013000 ca:00 22461 /lib/i386-linux-gnu/libz.so.1.2.3.4 
b73de000-b73e3000 rw-p 00000000 00:00 0 b73e3000-b73ea000 r-xp 00000000 ca:00 45188 /usr/lib/apache2/modules/mod_negotiation.so b73ea000-b73eb000 r--p 00006000 ca:00 45188 /usr/lib/apache2/modules/mod_negotiation.so b73eb000-b73ec000 rw-p 00007000 ca:00 45188 /usr/lib/apache2/modules/mod_negotiation.so b73ec000-b73f1000 rw-p 00000000 00:00 0 b73f2000-b73f5000 r-xp 00000000 ca:00 45149 /usr/lib/apache2/modules/mod_reqtimeout.so b73f5000-b73f6000 r--p 00002000 ca:00 45149 /usr/lib/apache2/modules/mod_reqtimeout.so b73f6000-b73f7000 rw-p 00003000 ca:00 45149 /usr/lib/apache2/modules/mod_reqtimeout.so b73f7000-b73fc000 rw-p 00000000 00:00 0 b73fc000-b73fe000 rw-p 00000000 00:00 0 b73fe000-b7400000 r-xp 00000000 ca:00 22437 /lib/i386-linux-gnu/libkeyutils.so.1.3 b7400000-b7401000 r--p 00001000 ca:00 22437 /lib/i386-linux-gnu/libkeyutils.so.1.3 b7401000-b7402000 rw-p 00002000 ca:00 22437 /lib/i386-linux-gnu/libkeyutils.so.1.3 b7402000-b7407000 rw-p 00000000 00:00 0 b7407000-b7409000 r-xp 00000000 ca:00 22344 /lib/i386-linux-gnu/libcom_err.so.2.1 b7409000-b740a000 r--p 00001000 ca:00 22344 /lib/i386-linux-gnu/libcom_err.so.2.1 b740a000-b740b000 rw-p 00002000 ca:00 22344 /lib/i386-linux-gnu/libcom_err.so.2.1 b740b000-b7410000 rw-p 00000000 00:00 0 b7411000-b7413000 rw-p 00000000 00:00 0 b7413000-b7416000 rw-p 00000000 00:00 0 b7416000-b7418000 rw-p 00000000 00:00 0 b7418000-b741c000 r-xp 00000000 ca:00 45176 /usr/lib/apache2/modules/mod_mime.so b741c000-b741d000 r--p 00003000 ca:00 45176 /usr/lib/apache2/modules/mod_mime.so b741d000-b741e000 rw-p 00004000 ca:00 45176 /usr/lib/apache2/modules/mod_mime.so b741e000-b7422000 r-xp 00000000 ca:00 45162 /usr/lib/apache2/modules/mod_headers.so b7422000-b7423000 r--p 00003000 ca:00 45162 /usr/lib/apache2/modules/mod_headers.so b7423000-b7424000 rw-p 00004000 ca:00 45162 /usr/lib/apache2/modules/mod_headers.so b7424000-b7426000 r-xp 00000000 ca:00 45161 /usr/lib/apache2/modules/mod_expires.so b7426000-b7427000 r--p 00001000 ca:00 45161 /usr/lib/apache2/modules/mod_expires.so b7427000-b7428000 rw-p 00002000 ca:00 45161 /usr/lib/apache2/modules/mod_expires.so b7428000-b742a000 r-xp 00000000 ca:00 45189 /usr/lib/apache2/modules/mod_dir.so b742a000-b742b000 r--p 00001000 ca:00 45189 /usr/lib/apache2/modules/mod_dir.so b742b000-b742c000 rw-p 00002000 ca:00 45189 /usr/lib/apache2/modules/mod_dir.so b742c000-b742e000 rw-p 00000000 00:00 0 b742f000-b7430000 r-xp 00000000 ca:00 45158 /usr/lib/apache2/modules/mod_env.so b7430000-b7431000 r--p 00000000 ca:00 45158 /usr/lib/apache2/modules/mod_env.so b7431000-b7432000 rw-p 00001000 ca:00 45158 /usr/lib/apache2/modules/mod_env.so b7432000-b7437000 rw-p 00000000 00:00 0 b7437000-b743c000 r-xp 00000000 ca:00 45155 /usr/lib/apache2/modules/mod_deflate.so b743c000-b743d000 r--p 00004000 ca:00 45155 /usr/lib/apache2/modules/mod_deflate.so b743d000-b743e000 rw-p 00005000 ca:00 45155 /usr/lib/apache2/modules/mod_deflate.so b743e000-b7443000 rw-p 00000000 00:00 0 b7443000-b7448000 r-xp 00000000 ca:00 45184 /usr/lib/apache2/modules/mod_cgi.so b7448000-b7449000 r--p 00004000 ca:00 45184 /usr/lib/apache2/modules/mod_cgi.so b7449000-b744a000 rw-p 00005000 ca:00 45184 /usr/lib/apache2/modules/mod_cgi.so b744a000-b744f000 rw-p 00000000 00:00 0 b744f000-b7457000 r-xp 00000000 ca:00 45179 /usr/lib/apache2/modules/mod_autoindex.so b7457000-b7458000 r--p 00007000 ca:00 45179 /usr/lib/apache2/modules/mod_autoindex.so b7458000-b7459000 rw-p 00008000 ca:00 45179 /usr/lib/apache2/modules/mod_autoindex.so b7459000-b745e000 rw-p 00000000 00:00 
0 b745e000-b745f000 r-xp 00000000 ca:00 45136 /usr/lib/apache2/modules/mod_authz_user.so b745f000-b7460000 r--p 00000000 ca:00 45136 /usr/lib/apache2/modules/mod_authz_user.so b7460000-b7461000 rw-p 00001000 ca:00 45136 /usr/lib/apache2/modules/mod_authz_user.so b7461000-b7466000 rw-p 00000000 00:00 0 b7466000-b7468000 r-xp 00000000 ca:00 45134 /usr/lib/apache2/modules/mod_authz_host.so b7468000-b7469000 r--p 00001000 ca:00 45134 /usr/lib/apache2/modules/mod_authz_host.so b7469000-b746a000 rw-p 00002000 ca:00 45134 /usr/lib/apache2/modules/mod_authz_host.so b746a000-b746f000 rw-p 00000000 00:00 0 b746f000-b7471000 r-xp 00000000 ca:00 45135 /usr/lib/apache2/modules/mod_authz_groupfile.so b7471000-b7472000 r--p 00001000 ca:00 45135 /usr/lib/apache2/modules/mod_authz_groupfile.so b7472000-b7473000 rw-p 00002000 ca:00 45135 /usr/lib/apache2/modules/mod_authz_groupfile.so b7473000-b7478000 rw-p 00000000 00:00 0 b7478000-b7479000 r-xp 00000000 ca:00 45140 /usr/lib/apache2/modules/mod_authz_default.so b7479000-b747a000 r--p 00000000 ca:00 45140 /usr/lib/apache2/modules/mod_authz_default.so b747a000-b747b000 rw-p 00001000 ca:00 45140 /usr/lib/apache2/modules/mod_authz_default.so b747b000-b7480000 rw-p 00000000 00:00 0 b7480000-b7481000 r-xp 00000000 ca:00 44436 /usr/lib/apache2/modules/mod_authn_file.so b7481000-b7482000 ---p 00001000 ca:00 44436 /usr/lib/apache2/modules/mod_authn_file.so b7482000-b7483000 r--p 00001000 ca:00 44436 /usr/lib/apache2/modules/mod_authn_file.so b7483000-b7484000 rw-p 00002000 ca:00 44436 /usr/lib/apache2/modules/mod_authn_file.so b7484000-b7489000 rw-p 00000000 00:00 0 b7489000-b748b000 r-xp 00000000 ca:00 45141 /usr/lib/apache2/modules/mod_auth_basic.so b748b000-b748c000 r--p 00001000 ca:00 45141 /usr/lib/apache2/modules/mod_auth_basic.so b748c000-b748d000 rw-p 00002000 ca:00 45141 /usr/lib/apache2/modules/mod_auth_basic.so b748d000-b7492000 rw-p 00000000 00:00 0 b7492000-b7495000 r-xp 00000000 ca:00 45194 /usr/lib/apache2/modules/mod_alias.so b7495000-b7496000 r--p 00002000 ca:00 45194 /usr/lib/apache2/modules/mod_alias.so b7496000-b7497000 rw-p 00003000 ca:00 45194 /usr/lib/apache2/modules/mod_alias.so b7497000-b74d8000 rw-p 00000000 00:00 0 b74d8000-b74db000 r-xp 00000000 ca:00 21902 /lib/i386-linux-gnu/libdl-2.13.so b74db000-b74dc000 r--p 00002000 ca:00 21902 /lib/i386-linux-gnu/libdl-2.13.so b74dc000-b74dd000 rw-p 00003000 ca:00 21902 /lib/i386-linux-gnu/libdl-2.13.so b74dd000-b74de000 rw-p 00000000 00:00 0 b74de000-b74e2000 r-xp 00000000 ca:00 22401 /lib/i386-linux-gnu/libuuid.so.1.3.0 b74e2000-b74e3000 r--p 00003000 ca:00 22401 /lib/i386-linux-gnu/libuuid.so.1.3.0 b74e3000-b74e4000 rw-p 00004000 ca:00 22401 /lib/i386-linux-gnu/libuuid.so.1.3.0 b74e4000-b750a000 r-xp 00000000 ca:00 22420 /lib/i386-linux-gnu/libexpat.so.1.5.2 b750a000-b750b000 ---p 00026000 ca:00 22420 /lib/i386-linux-gnu/libexpat.so.1.5.2 b750b000-b750d000 r--p 00026000 ca:00 22420 /lib/i386-linux-gnu/libexpat.so.1.5.2 b750d000-b750e000 rw-p 00028000 ca:00 22420 /lib/i386-linux-gnu/libexpat.so.1.5.2 b750e000-b7516000 r-xp 00000000 ca:00 21889 /lib/i386-linux-gnu/libcrypt-2.13.so b7516000-b7517000 r--p 00007000 ca:00 21889 /lib/i386-linux-gnu/libcrypt-2.13.so b7517000-b7518000 rw-p 00008000 ca:00 21889 /lib/i386-linux-gnu/libcrypt-2.13.so b7518000-b753f000 rw-p 00000000 00:00 0 b753f000-b76b7000 r-xp 00000000 ca:00 21864 /lib/i386-linux-gnu/libc-2.13.so b76b7000-b76b9000 r--p 00178000 ca:00 21864 /lib/i386-linux-gnu/libc-2.13.so b76b9000-b76ba000 rw-p 0017a000 ca:00 21864 
/lib/i386-linux-gnu/libc-2.13.so b76ba000-b76bd000 rw-p 00000000 00:00 0 b76bd000-b76d4000 r-xp 00000000 ca:00 24594 /lib/i386-linux-gnu/libpthread-2.13.so b76d4000-b76d5000 r--p 00016000 ca:00 24594 /lib/i386-linux-gnu/libpthread-2.13.so b76d5000-b76d6000 rw-p 00017000 ca:00 24594 /lib/i386-linux-gnu/libpthread-2.13.so b76d6000-b76d9000 rw-p 00000000 00:00 0 b76d9000-b770c000 r-xp 00000000 ca:00 6233 /usr/lib/libapr-1.so.0.4.5 b770c000-b770d000 r--p 00032000 ca:00 6233 /usr/lib/libapr-1.so.0.4.5 b770d000-b770e000 rw-p 00033000 ca:00 6233 /usr/lib/libapr-1.so.0.4.5 b770e000-b772f000 r-xp 00000000 ca:00 6236 /usr/lib/libaprutil-1.so.0.3.12 b772f000-b7730000 r--p 00020000 ca:00 6236 /usr/lib/libaprutil-1.so.0.3.12 b7730000-b7731000 rw-p 00021000 ca:00 6236 /usr/lib/libaprutil-1.so.0.3.12 b7731000-b776e000 r-xp 00000000 ca:00 22336 /lib/i386-linux-gnu/libpcre.so.3.12.1 b776e000-b776f000 r--p 0003c000 ca:00 22336 /lib/i386-linux-gnu/libpcre.so.3.12.1 b776f000-b7770000 rw-p 0003d000 ca:00 22336 /lib/i386-linux-gnu/libpcre.so.3.12.1 b7770000-b7780000 rw-p 00000000 00:00 0 b7780000-b779e000 r-xp 00000000 ca:00 21844 /lib/i386-linux-gnu/ld-2.13.so b779e000-b779f000 r--p 0001d000 ca:00 21844 /lib/i386-linux-gnu/ld-2.13.so b779f000-b77a0000 rw-p 0001e000 ca:00 21844 /lib/i386-linux-gnu/ld-2.13.so b77a0000-b7803000 r-xp 00000000 ca:00 44432 /usr/lib/apache2/mpm-prefork/apache2 b7803000-b7805000 r--p 00063000 ca:00 44432 /usr/lib/apache2/mpm-prefork/apache2 b7805000-b7807000 rw-p 00065000 ca:00 44432 /usr/lib/apache2/mpm-prefork/apache2 b7807000-b780a000 rw-p 00000000 00:00 0 b7a17000-b7a55000 rw-p 00000000 00:00 0 [heap] b7a55000-b7b9f000 rw-p 00000000 00:00 0 [heap] b7b9f000-b7c1a000 rw-p 00000000 00:00 0 [heap] bf9a1000-bf9c2000 rw-p 00000000 00:00 0 [stack] f57fe000-f57ff000 r-xp 00000000 00:00 0 [vdso] [Tue Jun 26 13:15:10 2012] [notice] child pid 26840 exit signal Aborted (6) Sometimes it recovers, but sometimes it kills the server. It's unclear to me what glibc is doing to crash.. can anyone decipher what's crashing in this error log?

    Read the article

  • Integrating JavaScript Unit Tests with Visual Studio

    - by Stephen Walther
    Modern ASP.NET web applications take full advantage of client-side JavaScript to provide better interactivity and responsiveness. If you are building an ASP.NET application in the right way, you quickly end up with lots and lots of JavaScript code. When writing server code, you should be writing unit tests. One big advantage of unit tests is that they provide you with a safety net that enable you to safely modify your existing code – for example, fix bugs, add new features, and make performance enhancements -- without breaking your existing code. Every time you modify your code, you can execute your unit tests to verify that you have not broken anything. For the same reason that you should write unit tests for your server code, you should write unit tests for your client code. JavaScript is just as susceptible to bugs as C#. There is no shortage of unit testing frameworks for JavaScript. Each of the major JavaScript libraries has its own unit testing framework. For example, jQuery has QUnit, Prototype has UnitTestJS, YUI has YUI Test, and Dojo has Dojo Objective Harness (DOH). The challenge is integrating a JavaScript unit testing framework with Visual Studio. Visual Studio and Visual Studio ALM provide fantastic support for server-side unit tests. You can easily view the results of running your unit tests in the Visual Studio Test Results window. You can set up a check-in policy which requires that all unit tests pass before your source code can be committed to the source code repository. In addition, you can set up Team Build to execute your unit tests automatically. Unfortunately, Visual Studio does not provide “out-of-the-box” support for JavaScript unit tests. MS Test, the unit testing framework included in Visual Studio, does not support JavaScript unit tests. As soon as you leave the server world, you are left on your own. The goal of this blog entry is to describe one approach to integrating JavaScript unit tests with MS Test so that you can execute your JavaScript unit tests side-by-side with your C# unit tests. The goal is to enable you to execute JavaScript unit tests in exactly the same way as server-side unit tests. You can download the source code described by this project by scrolling to the end of this blog entry. Rejected Approach: Browser Launchers One popular approach to executing JavaScript unit tests is to use a browser as a test-driver. When you use a browser as a test-driver, you open up a browser window to execute and view the results of executing your JavaScript unit tests. For example, QUnit – the unit testing framework for jQuery – takes this approach. The following HTML page illustrates how you can use QUnit to create a unit test for a function named addNumbers(). 
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <html> <head> <title>Using QUnit</title> <link rel="stylesheet" href="http://github.com/jquery/qunit/raw/master/qunit/qunit.css" type="text/css" /> </head> <body> <h1 id="qunit-header">QUnit example</h1> <h2 id="qunit-banner"></h2> <div id="qunit-testrunner-toolbar"></div> <h2 id="qunit-userAgent"></h2> <ol id="qunit-tests"></ol> <div id="qunit-fixture">test markup, will be hidden</div> <script type="text/javascript" src="http://code.jquery.com/jquery-latest.js"></script> <script type="text/javascript" src="http://github.com/jquery/qunit/raw/master/qunit/qunit.js"></script> <script type="text/javascript"> // The function to test function addNumbers(a, b) { return a+b; } // The unit test test("Test of addNumbers", function () { equals(4, addNumbers(1,3), "1+3 should be 4"); }); </script> </body> </html> This test verifies that calling addNumbers(1,3) returns the expected value 4. When you open this page in a browser, you can see that this test does, in fact, pass. The idea is that you can quickly refresh this QUnit HTML JavaScript test driver page in your browser whenever you modify your JavaScript code. In other words, you can keep a browser window open and keep refreshing it over and over while you are developing your application. That way, you can know very quickly whenever you have broken your JavaScript code. While easy to setup, there are several big disadvantages to this approach to executing JavaScript unit tests: You must view your JavaScript unit test results in a different location than your server unit test results. The JavaScript unit test results appear in the browser and the server unit test results appear in the Visual Studio Test Results window. Because all of your unit test results don’t appear in a single location, you are more likely to introduce bugs into your code without noticing it. Because your unit tests are not integrated with Visual Studio – in particular, MS Test -- you cannot easily include your JavaScript unit tests when setting up check-in policies or when performing automated builds with Team Build. A more sophisticated approach to using a browser as a test-driver is to automate the web browser. Instead of launching the browser and loading the test code yourself, you use a framework to automate this process. There are several different testing frameworks that support this approach: · Selenium – Selenium is a very powerful framework for automating browser tests. You can create your tests by recording a Firefox session or by writing the test driver code in server code such as C#. You can learn more about Selenium at http://seleniumhq.org/. LTAF – The ASP.NET team uses the Lightweight Test Automation Framework to test JavaScript code in the ASP.NET framework. You can learn more about LTAF by visiting the project home at CodePlex: http://aspnet.codeplex.com/releases/view/35501 jsTestDriver – This framework uses Java to automate the browser. jsTestDriver creates a server which can be used to automate multiple browsers simultaneously. This project is located at http://code.google.com/p/js-test-driver/ TestSwam – This framework, created by John Resig, uses PHP to automate the browser. Like jsTestDriver, the framework creates a test server. You can open multiple browsers that are automated by the test server. 
Learn more about TestSwarm by visiting the following address: https://github.com/jeresig/testswarm/wiki Yeti – This is the framework introduced by Yahoo for automating browser tests. Yeti uses server-side JavaScript and depends on Node.js. Learn more about Yeti at http://www.yuiblog.com/blog/2010/08/25/introducing-yeti-the-yui-easy-testing-interface/ All of these frameworks are great for integration tests – however, they are not the best frameworks to use for unit tests. In one way or another, all of these frameworks depend on executing tests within the context of a “living and breathing” browser. If you create an ASP.NET Unit Test then Visual Studio will launch a web server before executing the unit test. Why is launching a web server so bad? It is not the worst thing in the world. However, it does introduce dependencies that prevent your code from being tested in isolation. One of the defining features of a unit test -- versus an integration test – is that a unit test tests code in isolation. Another problem with launching a web server when performing unit tests is that launching a web server can be slow. If you cannot execute your unit tests quickly, you are less likely to execute your unit tests each and every time you make a code change. You are much more likely to fall into the pit of failure. Launching a browser when performing a JavaScript unit test has all of the same disadvantages as launching a web server when performing an ASP.NET unit test. Instead of testing a unit of JavaScript code in isolation, you are testing JavaScript code within the context of a particular browser. Using the frameworks listed above for integration tests makes perfect sense. However, I want to consider a different approach for creating unit tests for JavaScript code. Using Server-Side JavaScript for JavaScript Unit Tests A completely different approach to executing JavaScript unit tests is to perform the tests outside of any browser. If you really want to test JavaScript then you should test JavaScript and leave the browser out of the testing process. There are several ways that you can execute JavaScript on the server outside the context of any browser: Rhino – Rhino is an implementation of JavaScript written in Java. The Rhino project is maintained by the Mozilla project. Learn more about Rhino at http://www.mozilla.org/rhino/ V8 – V8 is the open-source Google JavaScript engine written in C++. This is the JavaScript engine used by the Chrome web browser. You can download V8 and embed it in your project by visiting http://code.google.com/p/v8/ JScript – JScript is the JavaScript Script Engine used by Internet Explorer (up to but not including Internet Explorer 9), Windows Script Host, and Active Server Pages. Internet Explorer is still the most popular web browser. Therefore, I decided to focus on using the JScript Script Engine to execute JavaScript unit tests. Using the Microsoft Script Control There are two basic ways that you can pass JavaScript to the JScript Script Engine and execute the code: use the Microsoft Windows Script Interfaces or use the Microsoft Script Control. The difficult and proper way to execute JavaScript using the JScript Script Engine is to use the Microsoft Windows Script Interfaces. You can learn more about the Script Interfaces by visiting http://msdn.microsoft.com/en-us/library/t9d4xf28(VS.85).aspx The main disadvantage of using the Script Interfaces is that they are difficult to use from .NET. 
There is a great series of articles on using the Script Interfaces from C# located at http://www.drdobbs.com/184406028. I picked the easier alternative and used the Microsoft Script Control. The Microsoft Script Control is an ActiveX control that provides a higher level abstraction over the Window Script Interfaces. You can download the Microsoft Script Control from here: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=d7e31492-2595-49e6-8c02-1426fec693ac After you download the Microsoft Script Control, you need to add a reference to it to your project. Select the Visual Studio menu option Project, Add Reference to open the Add Reference dialog. Select the COM tab and add the Microsoft Script Control 1.0. Using the Script Control is easy. You call the Script Control AddCode() method to add JavaScript code to the Script Engine. Next, you call the Script Control Run() method to run a particular JavaScript function. The reference documentation for the Microsoft Script Control is located at the MSDN website: http://msdn.microsoft.com/en-us/library/aa227633%28v=vs.60%29.aspx Creating the JavaScript Code to Test To keep things simple, let’s imagine that you want to test the following JavaScript function named addNumbers() which simply adds two numbers together: MvcApplication1\Scripts\Math.js function addNumbers(a, b) { return 5; } Notice that the addNumbers() method always returns the value 5. Right-now, it will not pass a good unit test. Create this file and save it in your project with the name Math.js in your MVC project’s Scripts folder (Save the file in your actual MVC application and not your MVC test application). Creating the JavaScript Test Helper Class To make it easier to use the Microsoft Script Control in unit tests, we can create a helper class. This class contains two methods: LoadFile() – Loads a JavaScript file. Use this method to load the JavaScript file being tested or the JavaScript file containing the unit tests. ExecuteTest() – Executes the JavaScript code. Use this method to execute a JavaScript unit test. Here’s the code for the JavaScriptTestHelper class: JavaScriptTestHelper.cs   using System; using System.IO; using Microsoft.VisualStudio.TestTools.UnitTesting; using MSScriptControl; namespace MvcApplication1.Tests { public class JavaScriptTestHelper : IDisposable { private ScriptControl _sc; private TestContext _context; /// <summary> /// You need to use this helper with Unit Tests and not /// Basic Unit Tests because you need a Test Context /// </summary> /// <param name="testContext">Unit Test Test Context</param> public JavaScriptTestHelper(TestContext testContext) { if (testContext == null) { throw new ArgumentNullException("TestContext"); } _context = testContext; _sc = new ScriptControl(); _sc.Language = "JScript"; _sc.AllowUI = false; } /// <summary> /// Load the contents of a JavaScript file into the /// Script Engine. /// </summary> /// <param name="path">Path to JavaScript file</param> public void LoadFile(string path) { var fileContents = File.ReadAllText(path); _sc.AddCode(fileContents); } /// <summary> /// Pass the path of the test that you want to execute. 
/// </summary> /// <param name="testMethodName">JavaScript function name</param> public void ExecuteTest(string testMethodName) { dynamic result = null; try { result = _sc.Run(testMethodName, new object[] { }); } catch { var error = ((IScriptControl)_sc).Error; if (error != null) { var description = error.Description; var line = error.Line; var column = error.Column; var text = error.Text; var source = error.Source; if (_context != null) { var details = String.Format("{0} \r\nLine: {1} Column: {2}", source, line, column); _context.WriteLine(details); } } throw new AssertFailedException(error.Description); } } public void Dispose() { _sc = null; } } }     Notice that the JavaScriptTestHelper class requires a Test Context to be instantiated. For this reason, you can use the JavaScriptTestHelper only with a Visual Studio Unit Test and not a Basic Unit Test (These are two different types of Visual Studio project items). Add the JavaScriptTestHelper file to your MVC test application (for example, MvcApplication1.Tests). Creating the JavaScript Unit Test Next, we need to create the JavaScript unit test function that we will use to test the addNumbers() function. Create a folder in your MVC test project named JavaScriptTests and add the following JavaScript file to this folder: MvcApplication1.Tests\JavaScriptTests\MathTest.js /// <reference path="JavaScriptUnitTestFramework.js"/> function testAddNumbers() { // Act var result = addNumbers(1, 3); // Assert assert.areEqual(4, result, "addNumbers did not return right value!"); }   The testAddNumbers() function takes advantage of another JavaScript library named JavaScriptUnitTestFramework.js. This library contains all of the code necessary to make assertions. Add the following JavaScriptUnitTestFramework.js to the same folder as the MathTest.js file: MvcApplication1.Tests\JavaScriptTests\JavaScriptUnitTestFramework.js var assert = { areEqual: function (expected, actual, message) { if (expected !== actual) { throw new Error("Expected value " + expected + " is not equal to " + actual + ". " + message); } } }; There is only one type of assertion supported by this file: the areEqual() assertion. Most likely, you would want to add additional types of assertions to this file to make it easier to write your JavaScript unit tests. Deploying the JavaScript Test Files This step is non-intuitive. When you use Visual Studio to run unit tests, Visual Studio creates a new folder and executes a copy of the files in your project. After you run your unit tests, your Visual Studio Solution will contain a new folder named TestResults that includes a subfolder for each test run. You need to configure Visual Studio to deploy your JavaScript files to the test run folder or Visual Studio won’t be able to find your JavaScript files when you execute your unit tests. You will get an error that looks something like this when you attempt to execute your unit tests: You can configure Visual Studio to deploy your JavaScript files by adding a Test Settings file to your Visual Studio Solution. It is important to understand that you need to add this file to your Visual Studio Solution and not a particular Visual Studio project. Right-click your Solution in the Solution Explorer window and select the menu option Add, New Item. Select the Test Settings item and click the Add button. After you create a Test Settings file for your solution, you can indicate that you want a particular folder to be deployed whenever you perform a test run. 
Select the menu option Test, Edit Test Settings to edit your test configuration file. Select the Deployment tab and select your MVC test project’s JavaScriptTests folder to deploy. Click the Apply button and the Close button to save the changes and close the dialog. Creating the Visual Studio Unit Test The very last step is to create the Visual Studio unit test (the MS Test unit test). Add a new unit test to your MVC test project by selecting the menu option Add New Item and selecting the Unit Test project item (Do not select the Basic Unit Test project item): The difference between a Basic Unit Test and a Unit Test is that a Unit Test includes a Test Context. We need this Test Context to use the JavaScriptTestHelper class that we created earlier. Enter the following test method for the new unit test: [TestMethod] public void TestAddNumbers() { var jsHelper = new JavaScriptTestHelper(this.TestContext); // Load JavaScript files jsHelper.LoadFile("JavaScriptUnitTestFramework.js"); jsHelper.LoadFile(@"..\..\..\MvcApplication1\Scripts\Math.js"); jsHelper.LoadFile("MathTest.js"); // Execute JavaScript Test jsHelper.ExecuteTest("testAddNumbers"); } This code uses the JavaScriptTestHelper to load three files: JavaScriptUnitTestFramework.js – Contains the assert functions. Math.js – Contains the addNumbers() function from your MVC application which is being tested. MathTest.js – Contains the JavaScript unit test function. Next, the test method calls the JavaScriptTestHelper ExecuteTest() method to execute the testAddNumbers() JavaScript function. Running the Visual Studio JavaScript Unit Test After you complete all of the steps described above, you can execute the JavaScript unit test just like any other unit test. You can use the keyboard combination CTRL-R, CTRL-A to run all of the tests in the current Visual Studio Solution. Alternatively, you can use the buttons in the Visual Studio toolbar to run the tests: (Unfortunately, the Run All Impacted Tests button won’t work correctly because Visual Studio won’t detect that your JavaScript code has changed. Therefore, you should use either the Run Tests in Current Context or Run All Tests in Solution options instead.) The results of running the JavaScript tests appear side-by-side with the results of running the server tests in the Test Results window. For example, if you Run All Tests in Solution then you will get the following results: Notice that the TestAddNumbers() JavaScript test has failed. That is good because our addNumbers() function is hard-coded to always return the value 5. If you double-click the failing JavaScript test, you can view additional details such as the JavaScript error message and the line number of the JavaScript code that failed: Summary The goal of this blog entry was to explain an approach to creating JavaScript unit tests that can be easily integrated with Visual Studio and Visual Studio ALM. I described how you can use the Microsoft Script Control to execute JavaScript on the server. By taking advantage of the Microsoft Script Control, we were able to execute our JavaScript unit tests side-by-side with all of our other unit tests and view the results in the standard Visual Studio Test Results window. 
You can download the code discussed in this blog entry from here: http://StephenWalther.com/downloads/Blog/JavaScriptUnitTesting/JavaScriptUnitTests.zip Before running this code, you need to first install the Microsoft Script Control which you can download from here: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=d7e31492-2595-49e6-8c02-1426fec693ac
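The Script Control calls that the JavaScriptTestHelper wraps – AddCode() to load script into the engine and Run() to invoke a function – can also be used directly. The following console-app sketch (an illustration added here, not code from the original post; it assumes the COM reference to Microsoft Script Control 1.0 described above, and the addNumbers()/testAddNumbers() functions simply mirror the ones built earlier) shows that flow in isolation:

    // Minimal sketch of driving the JScript engine through the Microsoft Script Control.
    using System;
    using MSScriptControl;

    class ScriptControlSketch
    {
        static void Main()
        {
            var sc = new ScriptControl();
            sc.Language = "JScript";  // use the JScript Script Engine
            sc.AllowUI = false;       // suppress any UI from script errors

            // Load the code under test, then a tiny test function
            sc.AddCode("function addNumbers(a, b) { return a + b; }");
            sc.AddCode("function testAddNumbers() { return addNumbers(1, 3) === 4; }");

            // Run() invokes a named function and returns its result
            object passed = sc.Run("testAddNumbers", new object[] { });
            Console.WriteLine("testAddNumbers passed: {0}", passed);
        }
    }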

    Read the article

  • West Wind WebSurge - an easy way to Load Test Web Applications

    - by Rick Strahl
    A few months ago on a project the subject of load testing came up. We were having some serious issues with a Web application that would start spewing SQL lock errors under somewhat heavy load. These sorts of errors can be tough to catch, precisely because they only occur under load and not during typical development testing. To replicate this error more reliably we needed to put a load on the application and run it for a while before these SQL errors would flare up. It’s been a while since I’d looked at load testing tools, so I spent a bit of time looking at different tools and frankly didn’t really find anything that was a good fit. A lot of tools were either a pain to use, didn’t have the basic features I needed, or were extravagantly expensive. In the end I got frustrated enough to build an initially small custom load test solution that then morphed into a more generic library, then gained a console front end and eventually turned into a full blown Web load testing tool that is now called West Wind WebSurge. I got seriously frustrated looking for tools every time I needed some quick and dirty load testing for an application. If my aim is to just put an application under heavy enough load to find a scalability problem in code, or to simply try and push an application to its limits on the hardware it’s running, I shouldn’t have to struggle to set up tests. It should be easy enough to get going in a few minutes, so that testing can be set up quickly and done on a regular basis without a lot of hassle. And that was the goal when I started to build out my initial custom load tester into a more widely usable tool. If you’re in a hurry and you want to check it out, you can find more information and download links here: West Wind WebSurge Product Page Walk through Video Download link (zip) Install from Chocolatey Source on GitHub For a more detailed discussion of the why’s and how’s and some background continue reading. How did I get here? When I started out on this path, I wasn’t planning on building a tool like this myself – but I got frustrated enough looking at what’s out there to think that I can do better than what’s available for the most common simple load testing scenarios. When we ran into the SQL lock problems I mentioned, I started looking around at what’s available for Web load testing solutions that would work for our whole team, which consisted of a few developers and a couple of IT guys, both of whom needed to be able to run the tests. It had been a while since I looked at tools and I figured that by now there should be some good solutions out there, but as it turns out I didn’t really find anything that fit our relatively simple needs without costing an arm and a leg… I spent the better part of a day installing and trying various load testing tools and to be frank most of them were either terrible at what they do, incredibly unfriendly to use, used some terminology I couldn’t even parse, or were extremely expensive (and I mean in the ‘sell your liver’ range of expensive). Pick your poison. There are also a number of online solutions for load testing and they actually looked more promising, but those wouldn’t work well for our scenario as the application is running inside of a private VPN with no outside access into the VPN. Most of those online solutions also ended up being very pricey as well – presumably because the bandwidth required to test over the open Web can be enormous. 
When I asked around on Twitter about what people were using – I got mostly… crickets. Several people mentioned Visual Studio Load Test, and most other suggestions pointed to online solutions. I did get a bunch of responses though with people asking to let them know what I found – apparently I’m not alone when it comes to finding load testing tools that are effective and easy to use. As to Visual Studio, the higher end SKUs of Visual Studio and the test edition include a Web load testing tool, which is quite powerful, but there are a number of issues with that: First it’s tied to Visual Studio so it’s not very portable – you need a VS install. I also find the test setup and terminology used by the VS test runner extremely confusing. Heck, it’s complicated enough that there’s even a Pluralsight course on using the Visual Studio Web test from Steve Smith. And of course you need to have one of the high end Visual Studio SKUs, and those are mucho Dinero ($$$) – just for load testing that’s rarely an option. Some of the tools are ultra extensive and let you run analysis tools on the target servers, which is useful, but in most cases – just plain overkill that only distracts from what I tend to be ultimately interested in: Reproducing problems that occur at high load, and finding the upper limits and ‘what if’ scenarios as load is ramped up increasingly against a site. Yes it’s useful to have Web app instrumentation, but often that’s not what you’re interested in. I still fondly remember the early days of Web testing when Microsoft had the WAST (Web Application Stress Tool), which was rather simple – and also somewhat limited – but easily allowed you to create stress tests very quickly. It had some serious limitations (mainly that it didn’t work with SSL), but the idea behind it was excellent: Create tests quickly and easily and provide a decent engine to run them locally with minimal setup. You could get set up and run tests within a few minutes. Unfortunately, that tool died a quiet death, like so many of Microsoft’s tools that probably were built by an intern and then abandoned, even though there was a lot of potential and it was actually fairly widely used. Eventually the tool was no longer downloadable and now it simply doesn’t work anymore on higher end hardware. West Wind WebSurge – Making Load Testing Quick and Easy So I ended up creating West Wind WebSurge out of rebellious frustration… The goal of WebSurge is to make it drop dead simple to create load tests. It’s super easy to capture sessions either using the built-in capture tool (big props to Eric Lawrence, Telerik and FiddlerCore which made that piece a snap), using the full version of Fiddler and exporting sessions, or by manually or programmatically creating text files based on plain HTTP headers to create requests. I’ve been using this tool for 4 months now on a regular basis on various projects as a reality check for performance and scalability and it’s worked extremely well for finding small performance issues. I also use it regularly as a simple URL tester, as it allows me to quickly enter a URL plus headers and content and test that URL and its results along with the ability to easily save one or more of those URLs. A few weeks back I made a walk through video that goes over most of the features of WebSurge in some detail: Note that the UI has slightly changed since then, so there are some UI improvements. 
Most notably the test results screen has been updated recently to a different layout and to provide more information about each URL in a session at a glance. The video and the main WebSurge site have a lot of info on basic operations. For the rest of this post I’ll talk about a few deeper aspects that may be of interest while also giving a glance at how WebSurge works. Session Capturing As you would expect, WebSurge works with Sessions of Urls that are played back under load. Here’s what the main Session View looks like: You can create session entries manually by individually adding URLs to test (on the Request tab on the right) and saving them, or you can capture output from Web Browsers, Windows Desktop applications that call services, or your own applications using the built-in Capture tool. With this tool you can capture anything HTTP – SSL requests and content from Web pages, AJAX calls, SOAP or REST services – again anything that uses Windows or .NET HTTP APIs. Behind the scenes the capture tool uses FiddlerCore, so basically anything you can capture with Fiddler you can also capture with the WebSurge Session capture tool. Alternately you can actually use Fiddler as well, and then export the captured Fiddler trace to a file, which can then be imported into WebSurge. This is a nice way to let somebody capture a session without having to actually install WebSurge, or for your customers to provide an exact playback scenario for a given set of URLs that cause a problem. Note that not all applications work with Fiddler’s proxy unless you configure a proxy. For example, .NET Web applications that make HTTP calls usually don’t show up in Fiddler by default. For those .NET applications you can explicitly override proxy settings to capture those requests to service calls. The capture tool also has handy optional filters that allow you to filter by domain, to help block out noise that you typically don’t want to include in your requests. For example, if your pages include links to CDNs, or Google Analytics or social links you typically don’t want to include those in your load test, so by capturing just from a specific domain you are guaranteed content from only that one domain. Additionally you can provide url filters in the configuration file – filters let you provide filter strings that, if contained in a url, will cause requests to be ignored. Again this is useful if you don’t filter by domain but you want to filter out things like static image, css and script files etc. Often you’re not interested in the load characteristics of these static and usually cached resources as they just add noise to tests and often skew the overall url performance results. In my testing I tend to care only about my dynamic requests. SSL Captures require Fiddler Note that in order to capture SSL requests you’ll have to install Fiddler’s SSL certificate. The easiest way to do this is to install Fiddler and use its SSL configuration options to get the certificate into the local certificate store. There’s a document on the Telerik site that provides the exact steps to get SSL captures to work with Fiddler and therefore with WebSurge. Session Storage A group of URLs entered or captured make up a Session. Sessions can be saved and restored easily as they use a very simple text format that is simply stored on disk. The format is a set of slightly customized HTTP header traces separated by a separator line. 
The headers are standard HTTP headers except that the full URL instead of just the domain relative path is stored as part of the 1st HTTP header line for easier parsing. Because it’s just text and uses the same format that Fiddler uses for exports, it’s super easy to create Sessions by hand or under program control, writing out to a simple text file. You can see what this format looks like in the Capture window figure above – the raw captured format is also what’s stored to disk and what WebSurge parses from. The only ‘custom’ part of these headers is that the 1st line contains the full URL instead of the domain relative path and Host: header. The rest of each entry is just plain standard HTTP headers, with each individual URL isolated by a separator line. The format used here matches what Fiddler produces for exports, so it’s easy to exchange or view data either in Fiddler or WebSurge. Urls can also be edited interactively so you can modify the headers easily as well: Again – it’s just plain HTTP headers so anything you can do with HTTP can be added here. Use it for single URL Testing Incidentally I’ve also found this form to be an excellent way to test and replay individual URLs for simple non-load testing purposes. Because you can capture a single or many URLs and store them on disk, this also provides a nice HTTP playground where you can record URLs with their headers, and fire them one at a time or as a session and see results immediately. It’s actually an easy tool to use for REST presentations, and I find the simple UI flow actually easier than using Fiddler natively. Finally you can save one or more URLs as a session for later retrieval. I’m using this more and more for simple URL checks. Overriding Cookies and Domains Speaking of HTTP headers – you can also overwrite cookies used as part of the options. One thing that happens with modern Web applications is that you have session cookies in use for authorization. These cookies tend to expire at some point which would invalidate a test. Using the Options dialog you can actually override the cookie: which replaces the cookie for all requests with the cookie value specified here. You can capture a valid cookie from a manual HTTP request in your browser and then paste it into the cookie field, to replace the existing Cookie with the new one that is now valid. Likewise you can easily replace the domain, so if you captured urls on west-wind.com and now you want to test on localhost you can do that easily as well. You could even do something like capture on store.west-wind.com and then test on localhost/store which would also work. Running Load Tests Once you’ve created a Session you can specify the length of the test in seconds, and specify the number of simultaneous threads to run each session on. Sessions run through each of the URLs in the session sequentially by default. One option in the options list above is that you can also randomize the URLs so each thread runs requests in a different order. This avoids bunching up URLs initially when tests start, as all threads run the same requests simultaneously, which can sometimes skew the results of the first few minutes of a test. While sessions run, some progress information is displayed: By default there’s a live view of requests displayed in a Console-like window. On the bottom of the window there’s a running total summary that displays where you’re at in the test, how many requests have been processed and what the requests per second count is currently for all requests. 
Note that for tests that run over a thousand requests a second it’s a good idea to turn off the console display. While the console display is nice to see that something is happening and also gives you a slight idea of what’s happening with actual requests, once a lot of requests are processed, this UI updating actually adds a lot of CPU overhead to the application which may cause the actual load generated to be reduced. If you are running 1000 requests a second there’s not much to see anyway, as requests roll by way too fast to read individual lines. If you look on the options panel, there is a NoProgressEvents option that disables the console display. Note that the summary display is still updated approximately once a second so you can always tell that the test is still running. Test Results When the test is done you get a simple Results display: On the right you get an overall summary as well as a breakdown by each URL in the session. Both successes and failures are highlighted so it’s easy to see what’s breaking in your load test. The report can be printed, or you can also open the HTML document in your default Web Browser for printing to PDF or saving the HTML document to disk. The list on the right shows you a partial list of the URLs that were fired so you can look in detail at the request and response data. The list can be filtered by success and failure requests. Each list is partial only (at the moment) and limited to a max of 1000 items in order to render reasonably quickly. Each item in the list can be clicked to see the full request and response data: This is particularly useful for errors, so you can quickly see and copy what request data was used, and in the case of a GET request you can also just click the link to quickly jump to the page. For non-GET requests you can find the URL in the Session list, and use the context menu to Test the URL as configured including any HTTP content data to send. You get to see the full HTTP request and response as well as a link in the Request header to go visit the actual page. Not so useful for a POST as above, but definitely useful for GET requests. Finally you can also get a few charts. The most useful one is probably the Requests per Second chart which can be accessed from the Charts menu or shortcut. Here’s what it looks like: Results can also be exported to JSON, XML and HTML. Keep in mind that these files can get very large rather quickly though, so exports can end up taking a while to complete. Command Line Interface WebSurge runs with a small core load engine and this engine is plugged into the front end application I’ve shown so far. There’s also a command line interface available to run WebSurge from the Windows command prompt. Using the command line you can run tests for either an individual URL (similar to AB.exe for example) or a full Session file. By default when it runs, WebSurgeCli shows progress every second showing total request count, failures and the requests per second for the entire test. A silent option can turn off this progress display and display only the results. The command line interface can be useful for build integration, which allows checking for failures or hitting a specific requests per second count. It’s also nice to use this as a quick and dirty URL test facility similar to the way you’d use Apache Bench (ab.exe). Unlike ab.exe though, WebSurgeCli supports SSL and makes it much easier to create multi-URL tests using either manual editing or the WebSurge UI. 
Current Status Currently West Wind WebSurge is still in Beta status. I’m still adding small new features and tweaking the UI in an attempt to make it as easy and self-explanatory as possible to run. Documentation for the UI and specialty features is also still a work in progress. I plan on open-sourcing this product, but it won’t be free. There’s a free version available that provides a limited number of threads and request URLs to run. A relatively low cost license removes the thread and request limitations. Pricing info can be found on the Web site – there’s an introductory price which is $99 at the moment, which I think is reasonable compared to most other for-pay solutions out there that are exorbitant by comparison… The reason the code is not available yet is – well, the UI portion of the app is a bit embarrassing in its current monolithic state. The UI started as a very simple interface originally that later got a lot more complex – yeah, that never happens, right? Unless there’s a lot of interest I don’t foresee re-writing the UI entirely (which would be ideal), but in the meantime at least some cleanup is required before I dare to publish it :-). The code will likely be released with version 1.0. I’m very interested in feedback. Do you think this could be useful to you and provide value over other tools you may or may not have used before? I hope so – it already has provided a ton of value for me and the work I do that made the development worthwhile at this point. You can leave a comment below, or for more extensive discussions you can post a message on the West Wind Message Board in the WebSurge section. Microsoft MVPs and Insiders get a free License If you’re a Microsoft MVP or a Microsoft Insider you can get a full license for free. Send me a link to your current, official Microsoft profile and I’ll send you a not-for-resale license. Send any messages to [email protected]. Resources For more info on WebSurge and to download it to try it out, use the following links. West Wind WebSurge Home Download West Wind WebSurge Getting Started with West Wind WebSurge Video © Rick Strahl, West Wind Technologies, 2005-2014. Posted in ASP.NET

    Read the article

  • Tip/Trick: Fix Common SEO Problems Using the URL Rewrite Extension

    - by ScottGu
    Search engine optimization (SEO) is important for any publicly facing web-site.  A large % of traffic to sites now comes directly from search engines, and improving your site’s search relevancy will lead to more users visiting your site from search engine queries.  This can directly or indirectly increase the money you make through your site. This blog post covers how you can use the free Microsoft URL Rewrite Extension to fix a bunch of common SEO problems that your site might have.  It takes less than 15 minutes (and no code changes) to apply 4 simple URL Rewrite rules to your site, and in doing so cause search engines to drive more visitors and traffic to your site.  The techniques below work equally well with both ASP.NET Web Forms and ASP.NET MVC based sites.  They also work with all versions of ASP.NET (and even work with non-ASP.NET content). [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu] Measuring the SEO of your website with the Microsoft SEO Toolkit A few months ago I blogged about the free SEO Toolkit that we’ve shipped.  This useful tool enables you to automatically crawl/scan your site for SEO correctness, and it then flags any SEO issues it finds.  I highly recommend downloading and using the tool against any public site you work on.  It makes it easy to spot SEO issues you might have in your site, and pinpoint ways to optimize it further. Below is a simple example of a report I ran against one of my sites (www.scottgu.com) prior to applying the URL Rewrite rules I’ll cover later in this blog post:   Search Relevancy and URL Splitting Two of the important things that search engines evaluate when assessing your site’s “search relevancy” are: How many other sites link to your content.  Search engines assume that if a lot of people around the web are linking to your content, then it is likely useful and so weight it higher in relevancy. The uniqueness of the content it finds on your site.  If search engines find that the content is duplicated in multiple places around the Internet (or on multiple URLs on your site) then it is likely to drop the relevancy of the content. One of the things you want to be very careful to avoid when building public facing sites is to not allow different URLs to retrieve the same content within your site.  Doing so will hurt with both of the situations above.  In particular, allowing external sites to link to the same content with multiple URLs will cause your link-count and page-ranking to be split up across those different URLs (and so give you a smaller page rank than what it would otherwise be if it was just one URL).  Not allowing external sites to link to you in different ways sounds easy in theory – but you might wonder what exactly this means in practice and how you avoid it. 4 Really Common SEO Problems Your Sites Might Have Below are 4 really common scenarios that can cause your site to inadvertently expose multiple URLs for the same content.  When this happens external sites linking to yours will end up splitting their page links across multiple URLs - and as a result cause you to have a lower page ranking with search engines than you deserve. SEO Problem #1: Default Document IIS (and other web servers) supports the concept of a “default document”.  This allows you to avoid having to explicitly specify the page you want to serve at either the root of the web-site/application, or within a sub-directory.  
This is convenient – but means that by default this content is available via two different publicly exposed URLs (which is bad).  For example: http://scottgu.com/ http://scottgu.com/default.aspx SEO Problem #2: Different URL Casings Web developers often don’t realize URLs are case sensitive to search engines on the web.  This means that search engines will treat the following links as two completely different URLs: http://scottgu.com/Albums.aspx http://scottgu.com/albums.aspx SEO Problem #3: Trailing Slashes Consider the below two URLs – they might look the same at first, but they are subtly different. The trailing slash creates yet another situation that causes search engines to treat the URLs as different and so split search rankings: http://scottgu.com http://scottgu.com/ SEO Problem #4: Canonical Host Names Sometimes sites support scenarios where they respond to both a leading “www” hostname prefix as well as just the hostname itself.  This causes search engines to treat the URLs as different and split search rankings: http://scottgu.com/albums.aspx/ http://www.scottgu.com/albums.aspx/ How to Easily Fix these SEO Problems in 10 minutes (or less) using IIS Rewrite If you haven’t been careful when coding your sites, chances are you are suffering from one (or more) of the above SEO problems.  Addressing these issues will improve your search engine relevancy ranking and drive more traffic to your site. The “good news” is that fixing the above 4 issues is really easy using the URL Rewrite Extension.  This is a completely free Microsoft extension available for IIS 7.x (on Windows Server 2008, Windows Server 2008 R2, Windows 7 and Windows Vista).  The great thing about using the IIS Rewrite extension is that it allows you to fix the above problems *without* having to change any code within your applications.  You can easily install the URL Rewrite Extension in under 3 minutes using the Microsoft Web Platform Installer (a free tool we ship that automates setting up web servers and development machines).  Just click the green “Install Now” button on the URL Rewrite Spotlight page to install it on your Windows Server 2008, Windows 7 or Windows Vista machine: Once installed you’ll find that a new “URL Rewrite” icon is available within the IIS 7 Admin Tool: Double-clicking the icon will open up the URL Rewrite admin panel – which will display the list of URL Rewrite rules configured for a particular application or site: Notice that our rewrite rule list above is currently empty (which is the default when you first install the extension).  We can click the “Add Rule…” link button in the top-right of the panel to add and enable new URL Rewriting logic for our site.  Scenario 1: Handling Default Document Scenarios One of the SEO problems I discussed earlier in this post was the scenario where the “default document” feature of IIS causes you to inadvertently expose two URLs for the same content on your site.  For example: http://scottgu.com/ http://scottgu.com/default.aspx We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the second URL to instead go to the first one.  We will setup the HTTP redirect to be a “permanent redirect” – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve.  Let’s look at how we can create such a rule.  We’ll begin by clicking the “Add Rule” link in the screenshot above.  
This will cause the below dialog to display: We’ll select the “Blank Rule” template within the “Inbound rules” section to create a new custom URL Rewriting rule.  This will display an empty pane like below: Don’t worry – setting up the above rule is easy.  The following 4 steps explain how to do so: Step 1: Name the Rule Our first step will be to name the rule we are creating.  Naming it with a descriptive name will make it easier to find and understand later.  Let’s name this rule our “Default Document URL Rewrite” rule: Step 2: Setup the Regular Expression that Matches this Rule Our second step will be to specify a regular expression filter that will cause this rule to execute when an incoming URL matches the regex pattern.   Don’t worry if you aren’t good with regular expressions - I suck at them too. The trick is to know someone who is good at them or copy/paste them from a web-site.  Below we are going to specify the following regular expression as our pattern rule: (.*?)/?Default\.aspx$ This pattern will match any URL string that ends with Default.aspx. The "(.*?)" matches any preceding character zero or more times. The "/?" part says to match the slash symbol zero or one times. The "$" symbol at the end will ensure that the pattern will only match strings that end with Default.aspx.  Combining all these regex elements allows this rule to work not only for the root of your web site (e.g. http://scottgu.com/default.aspx) but also for any application or subdirectory within the site (e.g. http://scottgu.com/photos/default.aspx.  Because the “ignore case” checkbox is selected it will match both “Default.aspx” as well as “default.aspx” within the URL.   One nice feature built-into the rule editor is a “Test pattern” button that you can click to bring up a dialog that allows you to test out a few URLs with the rule you are configuring: Above I've added a “products/default.aspx” URL and clicked the “Test” button.  This will give me immediate feedback on whether the rule will execute for it.  Step 3: Setup a Permanent Redirect Action We’ll then setup an action to occur when our regular expression pattern matches the incoming URL: In the dialog above I’ve changed the “Action Type” drop down to be a “Redirect” action.  The “Redirect Type” will be a HTTP 301 Permanent redirect – which means search engines will follow it. I’ve also set the “Redirect URL” property to be: {R:1}/ This indicates that we want to redirect the web client requesting the original URL to a new URL that has the originally requested URL path - minus the "Default.aspx" in it.  For example, requests for http://scottgu.com/default.aspx will be redirected to http://scottgu.com/, and requests for http://scottgu.com/photos/default.aspx will be redirected to http://scottgu.com/photos/ The "{R:N}" regex construct, where N >= 0, is called a back-reference and N is the back-reference index. In the case of our pattern "(.*?)/?Default\.aspx$", if the input URL is "products/Default.aspx" then {R:0} will contain "products/Default.aspx" and {R:1} will contain "products".  We are going to use this {R:1}/ value to be the URL we redirect users to.  
Step 4: Apply and Save the Rule Our final step is to click the “Apply” button in the top right hand of the IIS admin tool – which will cause the tool to persist the URL Rewrite rule into our application’s root web.config file (under a <system.webServer/rewrite> configuration section): <configuration>     <system.webServer>         <rewrite>             <rules>                 <rule name="Default Document" stopProcessing="true">                     <match url="(.*?)/?Default\.aspx$" />                     <action type="Redirect" url="{R:1}/" />                 </rule>             </rules>         </rewrite>     </system.webServer> </configuration> Because IIS 7.x and ASP.NET share the same web.config files, you can actually just copy/paste the above code into your web.config files using Visual Studio and skip the need to run the admin tool entirely.  This also makes adding/deploying URL Rewrite rules with your ASP.NET applications really easy. Step 5: Try the Rule Out Now that we’ve saved the rule, let’s try it out on our site.  Try the following two URLs on my site: http://scottgu.com/ http://scottgu.com/default.aspx Notice that the second URL automatically redirects to the first one.  Because it is a permanent redirect, search engines will follow the URL and should update the page ranking of http://scottgu.com to include links to http://scottgu.com/default.aspx as well. Scenario 2: Different URL Casing Another common SEO problem I discussed earlier in this post is that URLs are case sensitive to search engines on the web.  This means that search engines will treat the following links as two completely different URLs: http://scottgu.com/Albums.aspx http://scottgu.com/albums.aspx We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL to instead go to the second (all lower-case) one.  Like before, we will setup the HTTP redirect to be a “permanent redirect” – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve. To create such a rule we’ll click the “Add Rule” link in the URL Rewrite admin tool again.  This will cause the “Add Rule” dialog to appear again: Unlike the previous scenario (where we created a “Blank Rule”), with this scenario we can take advantage of a built-in “Enforce lowercase URLs” rule template.  When we click the “ok” button we’ll see the following dialog which asks us if we want to create a rule that enforces the use of lowercase letters in URLs: When we click the “Yes” button we’ll get a pre-written rule that automatically performs a permanent redirect if an incoming URL has upper-case characters in it – and automatically send users to a lower-case version of the URL: We can click the “Apply” button to use this rule “as-is” and have it apply to all incoming URLs to our site.  Because my www.scottgu.com site uses ASP.NET Web Forms, I’m going to make one small change to the rule we generated above – which is to add a condition that will ensure that URLs to ASP.NET’s built-in “WebResource.axd” handler are excluded from our case-sensitivity URL Rewrite logic.  URLs to the WebResource.axd handler will only come from server-controls emitted from my pages – and will never be linked to from external sites.  While my site will continue to function fine if we redirect these URLs to automatically be lower-case – doing so isn’t necessary and will add an extra HTTP redirect to many of my pages.  
The good news is that adding a condition that prevents my URL Rewriting rule from happening with certain URLs is easy.  We simply need to expand the “Conditions” section of the form above We can then click the “Add” button to add a condition clause.  This will bring up the “Add Condition” dialog: Above I’ve entered {URL} as the Condition input – and said that this rule should only execute if the URL does not match a regex pattern which contains the string “WebResource.axd”.  This will ensure that WebResource.axd URLs to my site will be allowed to execute just fine without having the URL be re-written to be all lower-case. Note: If you have static resources (like references to .jpg, .css, and .js files) within your site that currently use upper-case characters you’ll probably want to add additional condition filter clauses so that URLs to them also don’t get redirected to be lower-case (just add rules for patterns like .jpg, .gif, .js, etc).  Your site will continue to work fine if these URLs get redirected to be lower case (meaning the site won’t break) – but it will cause an extra HTTP redirect to happen on your site for URLs that don’t need to be redirected for SEO reasons.  So setting up a condition clause makes sense to add. When I click the “ok” button above and apply our lower-case rewriting rule the admin tool will save the following additional rule to our web.config file: <configuration>     <system.webServer>         <rewrite>             <rules>                 <rule name="Default Document" stopProcessing="true">                     <match url="(.*?)/?Default\.aspx$" />                     <action type="Redirect" url="{R:1}/" />                 </rule>                 <rule name="Lower Case URLs" stopProcessing="true">                     <match url="[A-Z]" ignoreCase="false" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{URL}" pattern="WebResource.axd" negate="true" />                     </conditions>                     <action type="Redirect" url="{ToLower:{URL}}" />                 </rule>             </rules>         </rewrite>     </system.webServer> </configuration> Try the Rule Out Now that we’ve saved the rule, let’s try it out on our site.  Try the following two URLs on my site: http://scottgu.com/Albums.aspx http://scottgu.com/albums.aspx Notice that the first URL (which has a capital “A”) automatically does a redirect to a lower-case version of the URL.  Scenario 3: Trailing Slashes Another common SEO problem I discussed earlier in this post is the scenario of trailing slashes within URLs.  The trailing slash creates yet another situation that causes search engines to treat the URLs as different and so split search rankings: http://scottgu.com http://scottgu.com/ We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL (that does not have a trailing slash) to instead go to the second one that does.  Like before, we will setup the HTTP redirect to be a “permanent redirect” – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve.  To create such a rule we’ll click the “Add Rule” link in the URL Rewrite admin tool again.  This will cause the “Add Rule” dialog to appear again: The URL Rewrite admin tool has a built-in “Append or remove the trailing slash symbol” rule template.  
When we select it and click the “ok” button we’ll see the following dialog which asks us if we want to create a rule that automatically redirects users to a URL with a trailing slash if one isn’t present: Like within our previous lower-casing rewrite rule we’ll add one additional condition clause that will exclude WebResource.axd URLs from being processed by this rule.  This will avoid an unnecessary redirect for happening for those URLs. When we click the “OK” button we’ll get a pre-written rule that automatically performs a permanent redirect if the URL doesn’t have a trailing slash – and if the URL is not processed by either a directory or a file.  This will save the following additional rule to our web.config file: <configuration>     <system.webServer>         <rewrite>             <rules>                 <rule name="Default Document" stopProcessing="true">                     <match url="(.*?)/?Default\.aspx$" />                     <action type="Redirect" url="{R:1}/" />                 </rule>                 <rule name="Lower Case URLs" stopProcessing="true">                     <match url="[A-Z]" ignoreCase="false" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{URL}" pattern="WebResource.axd" negate="true" />                     </conditions>                     <action type="Redirect" url="{ToLower:{URL}}" />                 </rule>                 <rule name="Trailing Slash" stopProcessing="true">                     <match url="(.*[^/])$" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />                         <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />                         <add input="{URL}" pattern="WebResource.axd" negate="true" />                     </conditions>                     <action type="Redirect" url="{R:1}/" />                 </rule>             </rules>         </rewrite>     </system.webServer> </configuration> Try the Rule Out Now that we’ve saved the rule, let’s try it out on our site.  Try the following two URLs on my site: http://scottgu.com http://scottgu.com/ Notice that the first URL (which has no trailing slash) automatically does a redirect to a URL with the trailing slash.  Because it is a permanent redirect, search engines will follow the URL and update the page ranking. Scenario 4: Canonical Host Names The final SEO problem I discussed earlier are scenarios where a site works with both a leading “www” hostname prefix as well as just the hostname itself.  This causes search engines to treat the URLs as different and split search rankling: http://www.scottgu.com/albums.aspx http://scottgu.com/albums.aspx We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL (that has a www prefix) to instead go to the second URL.  Like before, we will setup the HTTP redirect to be a “permanent redirect” – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve.  To create such a rule we’ll click the “Add Rule” link in the URL Rewrite admin tool again.  This will cause the “Add Rule” dialog to appear again: The URL Rewrite admin tool has a built-in “Canonical domain name” rule template.  
When we select it and click the “ok” button we’ll see the following dialog which asks us if we want to create a redirect rule that automatically redirects users to a primary host name URL: Above I’m entering the primary URL address I want to expose to the web: scottgu.com.  When we click the “OK” button we’ll get a pre-written rule that automatically performs a permanent redirect if the URL has another leading domain name prefix.  This will save the following additional rule to our web.config file: <configuration>     <system.webServer>         <rewrite>             <rules>                 <rule name="Cannonical Hostname">                     <match url="(.*)" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{HTTP_HOST}" pattern="^scottgu\.com$" negate="true" />                     </conditions>                     <action type="Redirect" url="http://scottgu.com/{R:1}" />                 </rule>                 <rule name="Default Document" stopProcessing="true">                     <match url="(.*?)/?Default\.aspx$" />                     <action type="Redirect" url="{R:1}/" />                 </rule>                 <rule name="Lower Case URLs" stopProcessing="true">                     <match url="[A-Z]" ignoreCase="false" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{URL}" pattern="WebResource.axd" negate="true" />                     </conditions>                     <action type="Redirect" url="{ToLower:{URL}}" />                 </rule>                 <rule name="Trailing Slash" stopProcessing="true">                     <match url="(.*[^/])$" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />                         <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />                         <add input="{URL}" pattern="WebResource.axd" negate="true" />                     </conditions>                     <action type="Redirect" url="{R:1}/" />                 </rule>             </rules>         </rewrite>     </system.webServer> </configuration> Try the Rule Out Now that we’ve saved the rule, let’s try it out on our site.  Try the following two URLs on my site: http://www.scottgu.com/albums.aspx http://scottgu.com/albums.aspx Notice that the first URL (which has the “www” prefix) now automatically does a redirect to the second URL which does not have the www prefix.  Because it is a permanent redirect, search engines will follow the URL and update the page ranking. 4 Simple Rules for Improved SEO The above 4 rules are pretty easy to setup and should take less than 15 minutes to configure on existing sites you already have.  The beauty of using a solution like the URL Rewrite Extension is that you can take advantage of it without having to change code within your web-site – and without having to break any existing links already pointing at your site.  Users who follow existing links will be automatically redirected to the new URLs you wish to publish.  And search engines will start to give your site a higher search relevancy ranking – which will list your site higher in search results and drive more traffic to it. 
Customizing your URL Rewriting rules further is easy to-do either by editing the web.config file directly, or alternatively, just double click the URL Rewrite icon within the IIS 7.x admin tool and it will list all the active rules for your web-site or application: Clicking any of the rules above will open the rules editor back up and allow you to tweak/customize/save them further. Summary Measuring and improving SEO is something every developer building a public-facing web-site needs to think about and focus on.  If you haven’t already, download and use the SEO Toolkit to analyze the SEO of your sites today. New URL Routing features in ASP.NET MVC and ASP.NET Web Forms 4 make it much easier to build applications that have more control over the URLs that are published.  Tools like the URL Rewrite Extension that I’ve talked about in this blog post make it much easier to improve the URLs that are published from sites you already have built today – without requiring you to change a lot of code. The URL Rewrite Extension provides a bunch of additional great capabilities – far beyond just SEO - as well.  I’ll be covering these additional capabilities more in future blog posts. Hope this helps, Scott
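    As a closing illustration (this snippet is not from the post above – the route name, URL pattern and page are hypothetical examples), the ASP.NET 4 URL Routing support mentioned in the summary lets you decide up front exactly which lower-case, consistently-formed URL gets published for a page, rather than rewriting it after the fact:

    // Hypothetical example of ASP.NET 4 Web Forms routing (System.Web.Routing).
    // Registering a route in Global.asax gives you full control over the public URL
    // that search engines see for a given physical page.
    using System;
    using System.Web;
    using System.Web.Routing;

    public class Global : HttpApplication
    {
        protected void Application_Start(object sender, EventArgs e)
        {
            RouteTable.Routes.MapPageRoute(
                "product-details",         // route name
                "products/{name}",         // the single canonical URL you publish
                "~/ProductDetails.aspx");  // the page that actually serves it
        }
    }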

    Read the article

  • SQL Server Editions and Integration Services

    The SQL Server 2005 and SQL Server 2008 product family has quite a few editions now, so what does this mean for SQL Server Integration Services? Starting from the bottom we have the free edition known as Express, and the entry level Workgroup edition, as well as the new Web edition. None of these three include the full SSIS product, but they do all include the SQL Server Import and Export Wizard, with access to basic data sources but nothing more, so for simple loading and extraction of data this should suffice. You will not be able to build packages though; this is just a one-shot deal aimed at using the wizard on an ad-hoc basis. To get the full power of Integration Services you need to start with Standard edition. This includes the BI Development Studio, for building your own packages, and a fully functional IDE integrated into Visual Studio. (You get the full VS 2005/2008 IDE with the product). All core functions will be available but with a restricted set of transformations and tasks. The SQL Server 2005 Features Comparison or Features Supported by the Editions of SQL Server 2008 describes standard edition as having basic transforms, compared to Enterprise which includes the advanced transforms. I think basic is a little harsh considering the power you get with Standard, but the advanced covers the truly ground-breaking capabilities of data mining, text mining and cleansing or fuzzy transforms. The power of performing these operations within your ETL pipeline should not be underestimated, but not all processes will require these capabilities, so it seems like a reasonable delineation. Thankfully there are no feature limitations or artificial governors within Standard compared to Enterprise. The same control flow and data flow engines underpin both editions, with the same configuration and deployment options allowing you to work seamlessly between environments and editions if using the common components. In fact there are no governors at all in SSIS, so whilst the SQL Database engine is limited to 4 CPUs in Standard edition, SSIS is only limited by the base operating system. The advanced transforms only available with Enterprise edition: Data Mining Training Destination Data Mining Query Component Fuzzy Grouping Fuzzy Lookup Term Extraction Term Lookup Dimension Processing Destination Partition Processing Destination The advanced tasks only available with Enterprise edition: Data Mining Query Task So in summary, if you want SQL Server Integration Services, you need SQL Server Standard edition, and for the more advanced tasks and transforms you need SQL Server Enterprise edition. To recap, the answer to the often asked question is no, SQL Server Integration Services is not available in SQL Server Express or Workgroup editions.

    Read the article

  • Initializing and drawing a mesh using OpenTK

    - by Boreal
    I'm implementing a "Mesh" class to use in my OpenTK game. You pass in a vertex array and an index array, and then you can call Mesh.Draw() to draw it using a shader. I've heard VBO's and VAO's are the way to go for this approach, but nowhere have I found a guide that shows how to get Data Video Memory Shader. Can someone give me a quick rundown of how this works? EDIT: So far, I have this: struct Vertex { public Vector3 position; public Vector3 normal; public Vector3 color; public static int memSize = 9 * sizeof(float); public static byte[] memOffset = { 0, 3 * sizeof(float), 6 * sizeof(float) }; } class Mesh { private uint vbo; private uint ibo; // stores the numbers of vertices and indices private int numVertices; private int numIndices; public Mesh(int numVertices, Vertex[] vertices, int numIndices, ushort[] indices) { // set numbers this.numVertices = numVertices; this.numIndices = numIndices; // generate buffers GL.GenBuffers(1, out vbo); GL.GenBuffers(1, out ibo); GL.BindBuffer(BufferTarget.ArrayBuffer, vbo); GL.BindBuffer(BufferTarget.ElementArrayBuffer, ibo); // send data to the buffers GL.BufferData(BufferTarget.ArrayBuffer, new IntPtr(Vertex.memSize * numVertices), vertices, BufferUsageHint.StaticDraw); GL.BufferData(BufferTarget.ElementArrayBuffer, new IntPtr(sizeof(ushort) * numIndices), indices, BufferUsageHint.StaticDraw); } public void Render() { // bind buffers GL.BindBuffer(BufferTarget.ArrayBuffer, vbo); GL.BindBuffer(BufferTarget.ElementArrayBuffer, ibo); // define offsets GL.VertexPointer(3, VertexPointerType.Float, Vertex.memSize, new IntPtr(Vertex.memOffset[0])); GL.NormalPointer(NormalPointerType.Float, Vertex.memSize, new IntPtr(Vertex.memOffset[1])); GL.ColorPointer(3, ColorPointerType.Float, Vertex.memSize, new IntPtr(Vertex.memOffset[2])); // draw GL.DrawElements(BeginMode.Triangles, numIndices, DrawElementsType.UnsignedInt, (IntPtr)0); } } class Application : GameWindow { Mesh triangle; protected override void OnLoad(EventArgs e) { base.OnLoad(e); GL.ClearColor(0.1f, 0.2f, 0.5f, 0.0f); GL.Enable(EnableCap.DepthTest); GL.Enable(EnableCap.VertexArray); GL.Enable(EnableCap.NormalArray); GL.Enable(EnableCap.ColorArray); Vertex v0 = new Vertex(); v0.position = new Vector3(-1.0f, -1.0f, 4.0f); v0.normal = new Vector3(0.0f, 0.0f, -1.0f); v0.color = new Vector3(1.0f, 1.0f, 0.0f); Vertex v1 = new Vertex(); v1.position = new Vector3(1.0f, -1.0f, 4.0f); v1.normal = new Vector3(0.0f, 0.0f, -1.0f); v1.color = new Vector3(1.0f, 0.0f, 0.0f); Vertex v2 = new Vertex(); v2.position = new Vector3(0.0f, 1.0f, 4.0f); v2.normal = new Vector3(0.0f, 0.0f, -1.0f); v2.color = new Vector3(0.2f, 0.9f, 1.0f); Vertex[] va = { v0, v1, v2 }; ushort[] ia = { 0, 1, 2 }; triangle = new Mesh(3, va, 3, ia); } protected override void OnRenderFrame(FrameEventArgs e) { base.OnRenderFrame(e); GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit); Matrix4 modelview = Matrix4.LookAt(Vector3.Zero, Vector3.UnitZ, Vector3.UnitY); GL.MatrixMode(MatrixMode.Modelview); GL.LoadMatrix(ref modelview); triangle.Render(); SwapBuffers(); } } It doesn't draw anything.
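    A hedged guess at what is going wrong (this diagnosis is not part of the original question): the index buffer is filled with ushort values but DrawElements is told they are unsigned ints, and the client-state arrays are switched on with GL.Enable instead of GL.EnableClientState. The corrected calls would look roughly like this, with the rest of the code above unchanged:

    // Sketch of the likely fixes (illustrative, not from the original post).
    // Enable the client-state arrays (ArrayCap in recent OpenTK builds; older builds
    // expose the same values for GL.EnableClientState via EnableCap):
    GL.EnableClientState(ArrayCap.VertexArray);
    GL.EnableClientState(ArrayCap.NormalArray);
    GL.EnableClientState(ArrayCap.ColorArray);

    // The indices were uploaded as ushort[], so the element type must match:
    GL.DrawElements(BeginMode.Triangles, numIndices,
                    DrawElementsType.UnsignedShort, IntPtr.Zero);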

    Read the article

  • Localization in ASP.NET MVC 2 using ModelMetadata

    - by rajbk
    This post uses an MVC 2 RTM application inside VS 2010 that is targeting the .NET Framework 4. .NET 4 DataAnnotations comes with a new Display attribute that has several properties including specifying the value that is used for display in the UI and a ResourceType. Unfortunately, this attribute is new and is not supported in MVC 2 RTM. The good news is it will be supported and is currently available in the MVC Futures release. The steps to get this working are shown below: Download the MVC futures library   Add a reference to the Microsoft.Web.MVC.AspNet4 dll.   Add a folder in your MVC project where you will store the resx files   Open the resx file and change "Access Modifier" to "Public". This allows the resources to be accessible from other assemblies. Internally, it changes the "Custom Tool" used to generate the code behind from ResXFileCodeGenerator to "PublicResXFileCodeGenerator"    Add your localized strings in the resx.   Register the new ModelMetadataProvider protected void Application_Start() { AreaRegistration.RegisterAllAreas();   RegisterRoutes(RouteTable.Routes);   //Add this ModelMetadataProviders.Current = new DataAnnotations4ModelMetadataProvider(); DataAnnotations4ModelValidatorProvider.RegisterProvider(); }   Use the Display attribute in your Model public class Employee { [Display(Name="ID")] public int ID { get; set; }   [Display(ResourceType = typeof(Common), Name="Name")] public string Name { get; set; } } Use the new HTML UI Helpers in your strongly typed view: <%: Html.EditorForModel() %> <%: Html.EditorFor(m => m) %> <%: Html.LabelFor(m => m.Name) %> …and you are good to go. Adventure is out there!
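    To see the localized label actually render, a controller has to hand an Employee instance to a strongly typed view. The sketch below is illustrative only (the controller name, action and sample data are not from the post):

    // Hypothetical controller: returns a strongly typed view of Employee so that
    // Html.LabelFor(m => m.Name) resolves its text through the Common resource file.
    using System.Web.Mvc;

    public class EmployeeController : Controller
    {
        public ActionResult Details()
        {
            var model = new Employee { ID = 1, Name = "Sample Employee" };
            return View(model);  // renders the strongly typed view shown above
        }
    }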

    Read the article

< Previous Page | 378 379 380 381 382 383 384 385 386 387 388 389  | Next Page >