Search Results

Search found 3004 results on 121 pages for 'plain'.

Page 16 of 121

  • Optional parens in Ruby for method with uppercase start letter?

    - by RasmusKL
    I just started out using IronRuby (but the behaviour seems consistent when I tested it in plain Ruby) for a DSL in my .NET application - and as part of this I'm defining methods to be called from the DSL via define_method. However, I've run into an issue regarding optional parens when calling methods starting with an uppercase letter. Given the following program:

        class DemoClass
          define_method :test do
            puts "output from test"
          end

          define_method :Test do
            puts "output from Test"
          end

          def run
            puts "Calling 'test'"
            test()
            puts "Calling 'test'"
            test
            puts "Calling 'Test()'"
            Test()
            puts "Calling 'Test'"
            Test
          end
        end

        demo = DemoClass.new
        demo.run

    Running this code in a console (using plain Ruby) yields the following output:

        ruby .\test.rb
        Calling 'test'
        output from test
        Calling 'test'
        output from test
        Calling 'Test()'
        output from Test
        Calling 'Test'
        ./test.rb:13:in `run': uninitialized constant DemoClass::Test (NameError)
                from ./test.rb:19:in `<main>'

    I realize that the Ruby convention is that constants start with an uppercase letter and that the general naming convention for methods in Ruby is lowercase. But the parens are really killing my DSL syntax at the moment. Is there any way around this issue?
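
    Not part of the question, but a hedged sketch of the usual workarounds: a bare capitalized name is parsed as a constant reference, so anything that forces method-call semantics - an explicit receiver or send - avoids the NameError. Whether either reads acceptably in a DSL is a judgment call:

        class DemoClass
          define_method :Test do
            puts "output from Test"
          end

          def run
            self.Test     # explicit receiver forces a method call instead of constant lookup
            send(:Test)   # dynamic dispatch sidesteps the parser entirely
          end
        end

        DemoClass.new.run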

    Read the article

  • How can I convince IE to simply display application/json rather than offer to download it?

    - by Cheeso
    While debugging jQuery apps that use AJAX, I often have the need to see the JSON that is being returned by the service to the browser. So I'll drop the URL for the JSON data into the address bar. This is nice with ASP.NET because in the event of a coding error, I can see the ASP.NET diagnostic in the browser. But when the server-side code works correctly and actually returns JSON, IE prompts me to download it, so I can't see the response. Can I get IE to NOT do that, in other words, to just display it as if it were plain text? I know I could do this if I set the Content-Type header to be text/plain. But this is specifically in the context of an ASP.NET MVC app, which sets the response automagically when I use JsonResult on one of my action methods. Also I kinda want to keep the appropriate content-type, and not change it just to support debugging efforts.
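
    Not from the post itself: the workaround most often cited for this is a registry merge that maps the application/json MIME type to IE's HTML viewer so the browser renders the response inline. The exact CLSID and Encoding values below are reproduced from memory - treat them as an assumption to verify before importing:

        Windows Registry Editor Version 5.00

        ; Tell IE to display application/json in place rather than offer a download
        [HKEY_CLASSES_ROOT\MIME\Database\Content Type\application/json]
        "CLSID"="{25336920-03F9-11cf-8FD0-00AA00686F13}"
        "Encoding"=hex:08,00,00,00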

    Read the article

  • PHP and C# communication with Encrypt/Decrypt

    - by SilentWarrior
    Hello, I have been searching and can't find a consistent solution to my problem: I want to encrypt something in C# and decrypt it in PHP, but also be able to encrypt in PHP and decrypt in C#, using the same key on both ends. All the solutions I found don't seem to work both ways; most of them only work in one language and then fail in the other, either by decrypting wrong or by blowing up the offsets. I would like to use TripleDES but it isn't a requirement, I just want something relatively strong for plain text communication (will either use JSON or just plain key-value pairs for complex stuff). Thanks in advance. PS: http://pastie.org/643106 - this is what I have been testing with.
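
    Not from the original post, but one way this interop usually succeeds is to pin down every variable - algorithm, mode, padding, key, IV, and text encoding - identically on both sides. A minimal C# sketch, swapping in AES for TripleDES (the post says 3DES isn't a requirement) and assuming AES-128-CBC, PKCS7 padding and a Base64 wire format; the matching PHP call would be something like openssl_encrypt($plain, 'aes-128-cbc', $key, OPENSSL_RAW_DATA, $iv), and the key/IV values here are placeholders:

        using System;
        using System.Security.Cryptography;
        using System.Text;

        class CryptoInterop
        {
            // Hypothetical 16-byte key and IV; in practice share these with the PHP side.
            static readonly byte[] Key = Encoding.ASCII.GetBytes("0123456789abcdef");
            static readonly byte[] IV  = Encoding.ASCII.GetBytes("fedcba9876543210");

            static string Encrypt(string plainText)
            {
                using (var aes = new AesCryptoServiceProvider())
                {
                    aes.Mode = CipherMode.CBC;        // must match the PHP side exactly
                    aes.Padding = PaddingMode.PKCS7;  // mcrypt zero-pads by default - a classic mismatch
                    aes.Key = Key;
                    aes.IV = IV;
                    using (var enc = aes.CreateEncryptor())
                    {
                        byte[] data = Encoding.UTF8.GetBytes(plainText);
                        byte[] cipher = enc.TransformFinalBlock(data, 0, data.Length);
                        return Convert.ToBase64String(cipher); // Base64 survives transport intact
                    }
                }
            }

            static string Decrypt(string base64Cipher)
            {
                using (var aes = new AesCryptoServiceProvider())
                {
                    aes.Mode = CipherMode.CBC;
                    aes.Padding = PaddingMode.PKCS7;
                    aes.Key = Key;
                    aes.IV = IV;
                    using (var dec = aes.CreateDecryptor())
                    {
                        byte[] data = Convert.FromBase64String(base64Cipher);
                        return Encoding.UTF8.GetString(dec.TransformFinalBlock(data, 0, data.Length));
                    }
                }
            }

            static void Main()
            {
                Console.WriteLine(Decrypt(Encrypt("hello from C#"))); // round-trip sanity check
            }
        }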

    Read the article

  • Hide and Show content based on Cookie value

    - by danit
    Here is my jQuery:

        $("#tool").click(function() {
          $(".chelp").slideToggle();
          $("#wrapper").animate({ opacity: 1.0 }, 200).slideToggle(200, function() {
            $("#tool img").toggle();
          });
        });

    When you click #tool img, #wrapper is hidden along with .chelp. I need to control this with a cookie, so when the user hides #wrapper it remains hidden on all pages or when they re-visit the page. I know there is a jQuery cookie plugin, but I'd like to do this with plain JavaScript rather than including another plugin. Can anyone tell me how I can build this in plain JavaScript and merge it with the jQuery to create a cookie, then check the cookie each time the page loads to see if #wrapper should be hidden or displayed?
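
    Not from the question, but a minimal sketch of the plain-JavaScript cookie plumbing being asked for; the "wrapperHidden" cookie name and the restore-on-load wiring are assumptions:

        // Minimal plain-JS cookie helpers - a sketch, not a full RFC-compliant parser.
        function setCookie(name, value, days) {
          var expires = new Date();
          expires.setDate(expires.getDate() + days);
          document.cookie = name + "=" + encodeURIComponent(value) +
                            "; expires=" + expires.toUTCString() + "; path=/";
        }

        function getCookie(name) {
          var match = document.cookie.match(new RegExp("(?:^|; )" + name + "=([^;]*)"));
          return match ? decodeURIComponent(match[1]) : null;
        }

        // Persist the toggle state whenever the user clicks...
        $("#tool").click(function () {
          // ... existing slideToggle/animate logic ...
          var hidden = getCookie("wrapperHidden") === "1";
          setCookie("wrapperHidden", hidden ? "0" : "1", 30);
        });

        // ...and restore it on every page load.
        $(function () {
          if (getCookie("wrapperHidden") === "1") {
            $("#wrapper, .chelp").hide();
            $("#tool img").toggle();
          }
        });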

    Read the article

  • Is there any sample Java code that does AES encryption exactly like this website?

    - by user1068636
    http://www.hanewin.net/encrypt/aes/aes-test.htm

    If you go to this website and enter the following:

        "Key In Hex": 00000000000000000000000000123456
        "Plain Text in Hex": 00000000000000000000000000000000

    and click on the "Encrypt" button, you will see that the ciphertext in hex is:

        3fa9f2a6e4c2b440fb6f676076a8ba04

    Is there a Java program with which I can do this (i.e. is there an AES library that will take the "Key In Hex" above with the "Plain Text In Hex" above and generate the ciphertext in hex above)? I would appreciate any advice or links to Java sample code that does this.
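
    Not an official sample from that site, but the page appears to show a single raw AES-128 block (no chaining mode, no padding), which maps to "AES/ECB/NoPadding" in the standard JCE. A sketch that should reproduce the site's output if that reading is right:

        import javax.crypto.Cipher;
        import javax.crypto.spec.SecretKeySpec;

        public class AesBlockDemo {

            static byte[] hexToBytes(String hex) {
                byte[] out = new byte[hex.length() / 2];
                for (int i = 0; i < out.length; i++) {
                    out[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
                }
                return out;
            }

            static String bytesToHex(byte[] bytes) {
                StringBuilder sb = new StringBuilder();
                for (byte b : bytes) sb.append(String.format("%02x", b));
                return sb.toString();
            }

            public static void main(String[] args) throws Exception {
                byte[] key   = hexToBytes("00000000000000000000000000123456");
                byte[] plain = hexToBytes("00000000000000000000000000000000");

                // One raw 16-byte block: ECB with no padding is a single AES permutation.
                Cipher cipher = Cipher.getInstance("AES/ECB/NoPadding");
                cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"));

                System.out.println(bytesToHex(cipher.doFinal(plain)));
                // Expected (per the site): 3fa9f2a6e4c2b440fb6f676076a8ba04
            }
        }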

    Read the article

  • Postfix MySql Dovecot - SMTP Authentication Failure

    - by borncamp
    Hello, I have a Postfix setup with Dovecot and MySql. The server is running Debian Squeeze. The MySql server is a slave that has data pushed to it from a primary (postfix) mail server (running a different OS). The emails are stored on a replicated GlusterFS volume. I am able to check email using Thunderbird over IMAP. However, SMTP requests fail. After turning on query logs for the MySql server, I have noticed that no query statement is executed to retrieve the user information when an SMTP client tries to authenticate. I'd like to know what I'm doing wrong or what the next troubleshooting steps are. I'm about to pull my hair out. Below is some log and configuration data that I thought would be relevant. Your help is much obliged.

    The file /var/log/mail.log shows:

        Oct 11 14:54:16 mailbox2 postfix/smtpd[25017]: connect from unknown[192.168.0.44]
        Oct 11 14:54:19 mailbox2 postfix/smtpd[25017]: warning: unknown[192.168.0.44]: SASL PLAIN authentication failed:
        Oct 11 14:54:25 mailbox2 postfix/smtpd[25017]: warning: unknown[192.168.0.44]: SASL LOGIN authentication failed: VXNlcm5hbWU6
        Oct 11 14:55:48 mailbox2 postfix/smtpd[25017]: warning: unknown[192.168.0.44]: SASL PLAIN authentication failed: VXNlcm5hbWU6
        Oct 11 14:55:54 mailbox2 postfix/smtpd[25017]: warning: unknown[192.168.0.44]: SASL LOGIN authentication failed: VXNlcm5hbWU6
        Oct 11 14:55:57 mailbox2 postfix/smtpd[25017]: disconnect from unknown[192.168.0.44]

    This is my dovecot.conf file:

        log_timestamp = "%Y-%m-%d %H:%M:%S "
        mail_location = maildir:/var/mail/virtual/%d/%n/
        auth_mechanisms = plain login
        disable_plaintext_auth = no
        namespace {
          inbox = yes
          location =
          prefix = INBOX.
          separator = .
          type = private
        }
        passdb {
          args = /etc/dovecot/dovecot-mysql.conf
          driver = sql
        }
        protocols = imap pop3
        service auth {
          unix_listener /var/spool/postfix/private/auth {
            group = postfix
            mode = 0660
            user = postfix
          }
          unix_listener auth-master {
            mode = 0600
            user = postfix
          }
          user = root
        }
        ssl_cert = </etc/ssl/certs/dovecot.pem
        ssl_key = </etc/ssl/private/dovecot.pem
        userdb {
          args = /etc/dovecot/dovecot-mysql.conf
          driver = sql
        }
        protocol lda {
          auth_socket_path = /var/run/dovecot/auth-master
          mail_plugins = sieve
          postmaster_address = [email protected]
        }
        protocol pop3 {
          pop3_uidl_format = %08Xu%08Xv
        }

    Here is my dovecot-mysql.conf file:

        connect = host=127.0.0.1 dbname=postfix user=postfix password=ffjM2MYAqQtAzRHX
        driver = mysql
        default_pass_scheme = MD5-CRYPT
        password_query = SELECT username AS user,password FROM mailbox WHERE username = '%u' AND active='1'
        user_query = SELECT CONCAT('/var/mail/virtual/', maildir) AS home, 1001 AS uid, 109 AS gid, CONCAT('*:messages=10000:bytes=',quota) as quota_rule, 'Trash:ignore' AS quota_rule2 FROM mailbox WHERE username = '%u' AND active='1'

    Here is my output from 'postconf -n':

        append_dot_mydomain = no
        biff = no
        bounce_template_file = /etc/postfix/bounce.cf
        broken_sasl_auth_clients = yes
        config_directory = /etc/postfix
        delay_warning_time = 0h
        dovecot_destination_recipient_limit = 1
        inet_interfaces = all
        local_recipient_maps = $virtual_mailbox_maps
        local_transport = virtual
        mailbox_command = procmail -a "$EXTENSION"
        mailbox_size_limit = 0
        maximal_queue_lifetime = 1d
        message_size_limit = 25600000
        mydestination = mailbox2.cws.net, debian.local.cws.net, localhost.local.cws.net, localhost
        myhostname = mailbox2.cws.net
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 172.18.0.119 63.164.138.3
        myorigin = /etc/mailname
        proxy_read_maps = $local_recipient_maps $mydestination $virtual_alias_maps $virtual_alias_domains $virtual_mailbox_maps $virtual_mailbox_domains $relay_recipient_maps $relay_domains $canonical_maps $sender_canonical_maps $recipient_canonical_maps $relocated_maps $transport_maps $mynetworks $virtual_mailbox_limit_maps
        readme_directory = no
        recipient_delimiter = +
        relay_domains =
        relayhost =
        smtp_connect_timeout = 10
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
        smtpd_client_message_rate_limit = 50
        smtpd_client_recipient_rate_limit = 500
        smtpd_client_restrictions = permit_sasl_authenticated, permit_mynetworks
        smtpd_delay_reject = yes
        smtpd_discard_ehlo_keyword_address_maps = hash:/etc/postfix/discard_ehlo
        smtpd_helo_required = yes
        smtpd_helo_restrictions = permit_mynetworks, reject_invalid_helo_hostname, permit
        smtpd_recipient_restrictions = permit_mynetworks,permit_sasl_authenticated,reject_unauth_destination
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_authenticated_header = yes
        smtpd_sasl_path = private/auth
        smtpd_sasl_security_options = noanonymous
        smtpd_sasl_tls_security_options = $smtpd_sasl_security_options
        smtpd_sasl_type = dovecot
        smtpd_sender_restrictions = permit_mynetworks, reject_non_fqdn_sender, reject_unknown_sender_domain, permit
        smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_use_tls = yes
        transport_maps = hash:/etc/postfix/transport
        virtual_alias_maps = proxy:mysql:/etc/postfix/sql/mysql_virtual_alias_maps.cf, proxy:mysql:/etc/postfix/sql/mysql_virtual_alias_domain_maps.cf, proxy:mysql:/etc/postfix/sql/mysql_virtual_alias_domain_catchall_maps.cf
        virtual_gid_maps = static:1001
        virtual_mailbox_base = /var/mail/virtual/
        virtual_mailbox_domains = proxy:mysql:/etc/postfix/sql/mysql_virtual_domains_maps.cf
        virtual_mailbox_maps = proxy:mysql:/etc/postfix/sql/mysql_virtual_mailbox_maps.cf, proxy:mysql:/etc/postfix/sql/mysql_virtual_alias_domain_mailbox_maps.cf
        virtual_transport = dovecot
        virtual_uid_maps = static:1001
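
    Not part of the original post, but a quick way to take Thunderbird out of the loop when debugging this: hand-build the SASL PLAIN token and submit it by hand while watching mail.log and the MySQL query log. The credentials below are placeholders:

        # AUTH PLAIN expects base64("\0username\0password")
        printf '\0user@example.com\0secret' | openssl base64

        # Then speak SMTP by hand (the token is the printf output above):
        telnet localhost 25
        EHLO test.example.com
        AUTH PLAIN AHVzZXJAZXhhbXBsZS5jb20Ac2VjcmV0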

    Read the article

  • DKIMPROXY signing wrong domain

    - by user64566
    Just.... won't sign a thing... The dkimproxy_out.conf:

        # specify what address/port DKIMproxy should listen on
        listen 127.0.0.1:10028
        # specify what address/port DKIMproxy forwards mail to
        relay 127.0.0.1:10029
        # specify what domains DKIMproxy can sign for (comma-separated, no spaces)
        domain tinymagnet.com,hypnoenterprises.com
        # specify what signatures to add
        signature dkim(c=relaxed)
        signature domainkeys(c=nofws)
        # specify location of the private key
        keyfile /etc/postfix/dkim/private.key
        # specify the selector (i.e. the name of the key record put in DNS)
        selector mail

    Here is a direct connection straight to the server, making it clear that this is a problem with dkimproxy and not postfix...

        mmxbass@hypno1:~$ telnet localhost 10028
        Trying 127.0.0.1...
        Connected to localhost.localdomain.
        Escape character is '^]'.
        220 hypno1.hypnoenterprises.com ESMTP Postfix (Debian/GNU)
        EHLO hypno1.hypnoenterprises.com
        250-hypno1.hypnoenterprises.com
        250-PIPELINING
        250-SIZE
        250-ETRN
        250-STARTTLS
        250-AUTH PLAIN LOGIN
        250-AUTH=PLAIN LOGIN
        250-ENHANCEDSTATUSCODES
        250-8BITMIME
        250 DSN
        MAIL FROM:<[email protected]>
        250 2.1.0 Ok
        RCPT TO:<[email protected]>
        250 2.1.5 Ok
        DATA
        354 End data with <CR><LF>.<CR><LF>
        SUBJECT:test
        .
        250 2.0.0 Ok: queued as B62A78D94F
        QUIT
        221 2.0.0 Bye

    Now let's look at the mail headers as reported by myiptest.com:

        From [email protected] Thu Dec 23 18:57:14 2010
        Return-path:
        Envelope-to: [email protected]
        Delivery-date: Thu, 23 Dec 2010 18:57:14 +0000
        Received: from [184.82.95.154] (helo=hypno1.hypnoenterprises.com) by myiptest.com with esmtp (Exim 4.69) (envelope-from ) id 1PVqLi-0004YR-5f for [email protected]; Thu, 23 Dec 2010 18:57:14 +0000
        Received: from hypno1.hypnoenterprises.com (localhost.localdomain [127.0.0.1]) by hypno1.hypnoenterprises.com (Postfix) with ESMTP id 878418D902 for ; Thu, 23 Dec 2010 13:57:26 -0500 (EST)
        DKIM-Signature: v=1; a=rsa-sha1; c=simple; d=hypnoenterprises.com; h=from:to:subject:date:mime-version:content-type:content-transfer-encoding:message-id; s=mail; bh=uoq1oCgLlTqpdDX/iUbLy7J1Wic=; b=HxBKTGjzTpZSZU8xkICtARCKxqriqZK+qHkY1U8qQlOw+SS1wlZxzTeDGIOgeiTviGDpcKWkLLTMlUvx8dY4FuT8K1/raO9nMC7xjG2uLayPX0zLzm4Srs44jlfRQIjrQd9tNnp35Wkry6dHPv1u21WUvnDWaKARzGGHRLfAzW4=
        Received: from localhost (localhost.localdomain [127.0.0.1]) by hypno1.hypnoenterprises.com (Postfix) with ESMTP id 2A04A8D945 for ; Thu, 23 Dec 2010 13:57:26 -0500 (EST)
        X-Virus-Scanned: Debian amavisd-new at hypno1.hypnoenterprises.com
        Received: from hypno1.hypnoenterprises.com ([127.0.0.1]) by localhost (hypno1.hypnoenterprises.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id Ua7BnnzmIaUO for ; Thu, 23 Dec 2010 13:57:25 -0500 (EST)
        Received: from phoenix.localnet (c-76-23-245-211.hsd1.ma.comcast.net [76.23.245.211]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by hypno1.hypnoenterprises.com (Postfix) with ESMTPSA id 48A0D8D90D for ; Thu, 23 Dec 2010 13:57:25 -0500 (EST)
        From: Joshua Pech
        To: [email protected]
        Subject: test
        Date: Thu, 23 Dec 2010 13:57:25 -0500
        User-Agent: KMail/1.13.5 (Linux/2.6.32-5-amd64; KDE/4.4.5; x86_64; ; )
        MIME-Version: 1.0
        Content-Type: Text/Plain; charset="us-ascii"
        Content-Transfer-Encoding: 7bit
        Message-Id:
        DomainKey-Status: no signature
        Received-SPF: pass (myiptest.com: domain of tinymagnet.com designates 184.82.95.154 as permitted sender)

    Notice how the DKIM signature specifies d=hypnoenterprises.com.... why?

    Read the article

  • West Wind WebSurge - an easy way to Load Test Web Applications

    - by Rick Strahl
    A few months ago on a project the subject of load testing came up. We were having some serious issues with a Web application that would start spewing SQL lock errors under somewhat heavy load. These sorts of errors can be tough to catch, precisely because they only occur under load and not during typical development testing. To replicate this error more reliably we needed to put a load on the application and run it for a while before these SQL errors would flare up. It's been a while since I'd looked at load testing tools, so I spent a bit of time looking at different tools and frankly didn't really find anything that was a good fit. A lot of tools were either a pain to use, didn't have the basic features I needed, or were extravagantly expensive. In the end I got frustrated enough to build an initially small custom load test solution that then morphed into a more generic library, then gained a console front end and eventually turned into a full-blown Web load testing tool that is now called West Wind WebSurge.

    I got seriously frustrated looking for tools every time I needed some quick and dirty load testing for an application. If my aim is to just put an application under heavy enough load to find a scalability problem in code, or to simply try and push an application to its limits on the hardware it's running on, I shouldn't have to struggle to set up tests. It should be easy enough to get going in a few minutes, so that testing can be set up quickly and done on a regular basis without a lot of hassle. And that was the goal when I started to build out my initial custom load tester into a more widely usable tool. If you're in a hurry and you want to check it out, you can find more information and download links here:

        West Wind WebSurge Product Page
        Walk through Video
        Download link (zip)
        Install from Chocolatey
        Source on GitHub

    For a more detailed discussion of the whys and hows and some background, continue reading.

    How did I get here?

    When I started out on this path, I wasn't planning on building a tool like this myself - but I got frustrated enough looking at what's out there to think that I can do better than what's available for the most common simple load testing scenarios. When we ran into the SQL lock problems I mentioned, I started looking around at what's available for Web load testing solutions that would work for our whole team, which consisted of a few developers and a couple of IT guys, all of whom needed to be able to run the tests. It had been a while since I looked at tools and I figured that by now there should be some good solutions out there, but as it turns out I didn't really find anything that fit our relatively simple needs without costing an arm and a leg... I spent the better part of a day installing and trying various load testing tools and to be frank most of them were either terrible at what they do, incredibly unfriendly to use, used some terminology I couldn't even parse, or were extremely expensive (and I mean in the 'sell your liver' range of expensive). Pick your poison. There are also a number of online solutions for load testing and they actually looked more promising, but those wouldn't work well for our scenario as the application is running inside of a private VPN with no outside access into the VPN. Most of those online solutions also ended up being very pricey - presumably because the bandwidth required to test over the open Web can be enormous.
    When I asked around on Twitter what people were using, I got mostly... crickets. Several people mentioned Visual Studio Load Test, and most other suggestions pointed to online solutions. I did get a bunch of responses though with people asking to let them know what I found - apparently I'm not alone when it comes to finding load testing tools that are effective and easy to use. As to Visual Studio, the higher-end SKUs of Visual Studio and the test edition include a Web load testing tool, which is quite powerful, but there are a number of issues with that: first, it's tied to Visual Studio so it's not very portable - you need a VS install. I also find the test setup and terminology used by the VS test runner extremely confusing. Heck, it's complicated enough that there's even a Pluralsight course on using the Visual Studio Web test from Steve Smith. And of course you need to have one of the high-end Visual Studio SKUs, and those are mucho dinero ($$$) - just for the load testing that's rarely an option. Some of the tools are ultra-extensive and let you run analysis tools on the target servers, which is useful, but in most cases just plain overkill that only distracts from what I tend to be ultimately interested in: reproducing problems that occur at high load, and finding the upper limits and 'what if' scenarios as load is ramped up increasingly against a site. Yes it's useful to have Web app instrumentation, but often that's not what you're interested in.

    I still fondly remember the early days of Web testing when Microsoft had the WAST (Web Application Stress Tool) tool, which was rather simple - and also somewhat limited - but easily allowed you to create stress tests very quickly. It had some serious limitations (mainly that it didn't work with SSL), but the idea behind it was excellent: create tests quickly and easily and provide a decent engine to run them locally with minimal setup. You could get set up and run tests within a few minutes. Unfortunately, that tool died a quiet death, like so many of Microsoft's tools that were probably built by an intern and then abandoned, even though there was a lot of potential and it was actually fairly widely used. Eventually the tool was no longer downloadable and now it simply doesn't work anymore on higher-end hardware.

    West Wind WebSurge - Making Load Testing Quick and Easy

    So I ended up creating West Wind WebSurge out of rebellious frustration... The goal of WebSurge is to make it drop-dead simple to create load tests. It's super easy to capture sessions either using the built-in capture tool (big props to Eric Lawrence, Telerik and FiddlerCore, which made that piece a snap), using the full version of Fiddler and exporting sessions, or by manually or programmatically creating text files based on plain HTTP headers to create requests. I've been using this tool for 4 months now on a regular basis on various projects as a reality check for performance and scalability, and it's worked extremely well for finding small performance issues. I also use it regularly as a simple URL tester, as it allows me to quickly enter a URL plus headers and content and test that URL and its results, along with the ability to easily save one or more of those URLs. A few weeks back I made a walk-through video that goes over most of the features of WebSurge in some detail. Note that the UI has slightly changed since then, so there are some UI improvements.
    Most notably, the test results screen has been updated recently to a different layout and to provide more information about each URL in a session at a glance. The video and the main WebSurge site have a lot of info on basic operations. For the rest of this post I'll talk about a few deeper aspects that may be of interest while also giving a glance at how WebSurge works.

    Session Capturing

    As you would expect, WebSurge works with Sessions of URLs that are played back under load. You can create session entries manually by individually adding URLs to test (on the Request tab on the right) and saving them, or you can capture output from Web browsers, Windows desktop applications that call services, or your own applications, using the built-in Capture tool. With this tool you can capture anything HTTP: SSL requests and content from Web pages, AJAX calls, SOAP or REST services - again, anything that uses Windows or .NET HTTP APIs. Behind the scenes the capture tool uses FiddlerCore, so basically anything you can capture with Fiddler you can also capture with the WebSurge session capture tool. Alternately you can actually use Fiddler as well, and then export the captured Fiddler trace to a file, which can then be imported into WebSurge. This is a nice way to let somebody capture a session without having to actually install WebSurge, or for your customers to provide an exact playback scenario for a given set of URLs that cause a problem.

    Note that not all applications work with Fiddler's proxy unless you configure a proxy. For example, .NET Web applications that make HTTP calls usually don't show up in Fiddler by default. For those .NET applications you can explicitly override proxy settings to capture those requests. The capture tool also has handy optional filters that allow you to filter by domain, to help block out noise that you typically don't want to include in your requests. For example, if your pages include links to CDNs, or Google Analytics or social links, you typically don't want to include those in your load test, so by capturing just from a specific domain you are guaranteed content from only that one domain. Additionally you can provide URL filters in the configuration file - filters let you provide strings that, if contained in a URL, will cause requests to be ignored. Again this is useful if you don't filter by domain but you want to filter out things like static image, CSS and script files. Often you're not interested in the load characteristics of these static and usually cached resources, as they just add noise to tests and often skew the overall URL performance results. In my testing I tend to care only about my dynamic requests.

    SSL Captures require Fiddler

    Note that in order to capture SSL requests you'll have to install Fiddler's SSL certificate. The easiest way to do this is to install Fiddler and use its SSL configuration options to get the certificate into the local certificate store. There's a document on the Telerik site that provides the exact steps to get SSL captures to work with Fiddler and therefore with WebSurge.

    Session Storage

    A group of URLs entered or captured makes up a Session. Sessions can be saved and restored easily, as they use a very simple text format that is simply stored on disk. The format is slightly customized HTTP header traces separated by a separator line (see the sketch just below).
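
    Based purely on the description here and on Fiddler's export format, a captured session file might look something like this - the exact separator string WebSurge expects is an assumption:

        GET http://localhost/store/products HTTP/1.1
        Accept: application/json
        User-Agent: WebSurge

        ------------------------------------------------------------------

        POST http://localhost/store/cart HTTP/1.1
        Content-Type: application/json

        {"productId": 42, "quantity": 1}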
    The headers are standard HTTP headers, except that the full URL instead of just the domain-relative path is stored as part of the 1st HTTP header line for easier parsing. Because it's just text and uses the same format that Fiddler uses for exports, it's super easy to create Sessions by hand or under program control, writing out to a simple text file. The raw captured format is also what's stored to disk and what WebSurge parses from. The only 'custom' part of these headers is that the 1st line contains the full URL instead of the domain-relative path and Host: header. The rest of each entry is just plain, standard HTTP headers, with each individual URL isolated by a separator line. The format used here matches what Fiddler produces for exports, so it's easy to exchange or view data either in Fiddler or WebSurge. URLs can also be edited interactively, so you can modify the headers easily as well. Again - it's just plain HTTP headers, so anything you can do with HTTP can be added here.

    Use it for single URL Testing

    Incidentally, I've also found this form to be an excellent way to test and replay individual URLs for simple non-load-testing purposes. Because you can capture a single or many URLs and store them on disk, this also provides a nice HTTP playground where you can record URLs with their headers, and fire them one at a time or as a session and see results immediately. It's an easy tool for REST presentations, and I find the simple UI flow easier than using Fiddler natively. Finally you can save one or more URLs as a session for later retrieval. I'm using this more and more for simple URL checks.

    Overriding Cookies and Domains

    Speaking of HTTP headers - you can also overwrite cookies as part of the options. One thing that happens with modern Web applications is that you have session cookies in use for authorization. These cookies tend to expire at some point, which would invalidate a test. Using the Options dialog you can override the cookie, which replaces the cookie for all requests with the cookie value specified there. You can capture a valid cookie from a manual HTTP request in your browser and then paste it into the cookie field, to replace the existing cookie with the new one that is now valid. Likewise you can easily replace the domain, so if you captured URLs on west-wind.com and now you want to test on localhost you can do that easily as well. You could even do something like capture on store.west-wind.com and then test on localhost/store, which would also work.

    Running Load Tests

    Once you've created a Session you can specify the length of the test in seconds, and specify the number of simultaneous threads to run each session on. Sessions run through each of the URLs in the session sequentially by default. One option in the options list is that you can also randomize the URLs so each thread runs requests in a different order. This avoids bunching up URLs initially when tests start, as all threads running the same requests simultaneously can sometimes skew the results of the first few minutes of a test. While sessions run, some progress information is displayed: by default there's a live view of requests displayed in a console-like window. On the bottom of the window there's a running total summary that displays where you're at in the test, how many requests have been processed and what the requests-per-second count currently is for all requests.
    Note that for tests that run over a thousand requests a second, it's a good idea to turn off the console display. The console display is nice for seeing that something is happening and gives you a slight idea of what's happening with actual requests, but once a lot of requests are processed this UI updating adds a lot of CPU overhead to the application, which may cause the actual load generated to be reduced. If you are running 1000 requests a second there's not much to see anyway, as requests roll by way too fast to see individual lines. If you look on the options panel, there is a NoProgressEvents option that disables the console display. Note that the summary display is still updated approximately once a second, so you can always tell that the test is still running.

    Test Results

    When the test is done you get a simple Results display. On the right you get an overall summary as well as a breakdown by each URL in the session. Both successes and failures are highlighted so it's easy to see what's breaking in your load test. The report can be printed, or you can open the HTML document in your default Web browser for printing to PDF or saving the HTML document to disk. The list on the right shows you a partial list of the URLs that were fired so you can look in detail at the request and response data. The list can be filtered by success and failure requests. Each list is partial only (at the moment) and limited to a max of 1000 items in order to render reasonably quickly. Each item in the list can be clicked to see the full request and response data. This is particularly useful for errors, so you can quickly see and copy what request data was used, and in the case of a GET request you can also just click the link to quickly jump to the page. For non-GET requests you can find the URL in the Session list, and use the context menu to test the URL as configured, including any HTTP content data to send. You get to see the full HTTP request and response, as well as a link in the Request header to go visit the actual page. Not so useful for a POST as above, but definitely useful for GET requests. Finally you can also get a few charts. The most useful one is probably the Requests per Second chart, which can be accessed from the Charts menu or shortcut. Results can also be exported to JSON, XML and HTML. Keep in mind that these files can get very large rather quickly though, so exports can end up taking a while to complete.

    Command Line Interface

    WebSurge runs with a small core load engine, and this engine is plugged into the front-end application I've shown so far. There's also a command line interface available to run WebSurge from the Windows command prompt. Using the command line you can run tests for either an individual URL (similar to ab.exe, for example) or a full Session file. By default, WebSurgeCli shows progress every second, showing total request count, failures and the requests per second for the entire test. A silent option can turn off this progress display and display only the results. The command line interface can be useful for build integration, which allows checking for failures or for hitting a specific requests-per-second count. It's also nice to use this as a quick and dirty URL test facility, similar to the way you'd use Apache Bench (ab.exe). Unlike ab.exe though, WebSurgeCli supports SSL and makes it much easier to create multi-URL tests using either manual editing or the WebSurge UI.
    Current Status

    Currently West Wind WebSurge is still in Beta status. I'm still adding small new features and tweaking the UI in an attempt to make it as easy and self-explanatory as possible to run. Documentation for the UI and specialty features is also still a work in progress. I plan on open-sourcing this product, but it won't be free. There's a free version available that provides a limited number of threads and request URLs to run. A relatively low-cost license removes the thread and request limitations. Pricing info can be found on the Web site - there's an introductory price which is $99 at the moment, which I think is reasonable compared to most other for-pay solutions out there that are exorbitant by comparison... The reason the code is not available yet is - well, the UI portion of the app is a bit embarrassing in its current monolithic state. The UI started as a very simple interface originally that later got a lot more complex - yeah, that never happens, right? Unless there's a lot of interest I don't foresee rewriting the UI entirely (which would be ideal), but in the meantime at least some cleanup is required before I dare to publish it :-). The code will likely be released with version 1.0. I'm very interested in feedback. Do you think this could be useful to you and provide value over other tools you may or may not have used before? I hope so - it has already provided a ton of value for me and the work I do, which made the development worthwhile at this point. You can leave a comment below, or for more extensive discussions you can post a message on the West Wind Message Board in the WebSurge section.

    Microsoft MVPs and Insiders get a free License

    If you're a Microsoft MVP or a Microsoft Insider you can get a full license for free. Send me a link to your current, official Microsoft profile and I'll send you a not-for-resale license. Send any messages to [email protected].

    Resources

    For more info on WebSurge and to download it to try it out, use the following links:

        West Wind WebSurge Home
        Download West Wind WebSurge
        Getting Started with West Wind WebSurge Video

    © Rick Strahl, West Wind Technologies, 2005-2014. Posted in ASP.NET

    Read the article

  • Exam 71-515: TS: Web Applications Development with Microsoft .NET Framework 4

    - by Ricardo Peres
    I took the 71-515 exam today. 85 questions, 240 minutes. Here are some notes:

        - A great number of jQuery questions, mostly having to do with AJAX
        - Lots of MVC 2 questions as well
        - A number of classic ASP.NET Web Forms questions, of which only a few were related to the new .NET 4 features
        - Some Entity Framework
        - Some plain old JavaScript, like changing an image dynamically

    I think I did OK. As with my previous exam, I still don't know if I passed or not; I will have to wait for the end of the beta period.

    Read the article

  • Product Naming Conventions - Does it make sense

    - by NeilHambly
    Maybe it's just me, but with some of the MS products being released in 2010 carrying "2010" in their product names, does the naming of the SQL Server product suite make sense? Our latest SQL Server release, which is now just about to ship, is "SQL Server 2008 R2". My question is: do you think this product name is good, bad, or just plain confusing? IMHO we would have been better placed if it had been named "SQL Server 2010"...(read more)

    Read the article

  • Cartoon Games and Arcade Games SEO Part 2

    Now students, today we will be talking about SEO. But not just some regular, plain, old SEO. I'm talking arcades here, and that means we gotta put a little spin on it and change things up just a bit. Isn't that exciting! Learning new techniques is great, so let's start.

    Read the article

  • Show raw Text Code from a URL with CodePaste.NET

    - by Rick Strahl
    I introduced CodePaste.NET more than 2 years ago. In case you haven't checked it out, it's a code-sharing site where you can post some code, assign a title and syntax scheme to it, and then share it with others via a short URL. The idea is super simple and it's not the first time this has been done, but it's focused on Microsoft languages and caters to that crowd.

    Show your own code from the Web

    There's another feature that I tweeted about recently that's been there for some time, but is not used very much: CodePaste.NET has the ability to show raw text-based code from a URL on the Web in syntax-colored format for any of the formats provided. I use this all the time with code links to my Subversion repository, which only displays code as plain text. Using CodePaste.NET allows me to show syntax-colored versions of the same code. For example I can go from this URL:

        http://www.west-wind.com:8080/svn/WestwindWebToolkit/trunk/Westwind.Utilities/SupportClasses/PropertyBag.cs

    to a nicely colored source code view at this URL:

        http://codepaste.net/ShowUrl?url=http%3A%2F%2Fwww.west-wind.com%3A8080%2Fsvn%2FWestwindWebToolkit%2Ftrunk%2FWestwind.Utilities%2FSupportClasses%2FPropertyBag.cs&Language=C%23

    Use the Form or access URLs directly

    To get there, navigate to the Web Code icon on the CodePaste.NET site, paste your original URL and select a language to display. The form creates a link like the one shown above, which has two query string parameters:

        url - the URL for the raw text on the Web
        language - the code language used for syntax highlighting

    Note that the parameters must be URL encoded to work - especially the # in C# - because otherwise the # will be interpreted by the browser as a hash tag to jump to in the target URL. The URL must be Web accessible so that CodePaste can download it and then apply the syntax coloring; it doesn't work with localhost URLs, for example. The code must be returned in plain text - HTML-based text doesn't work.

    Hope some of you find this a useful feature. Enjoy...

    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in .NET

    Read the article

  • How Assassin’s Creed Should Have Ended [Video]

    - by Asian Angel
    Altair is on the run yet again from Italy's finest and keeps managing to hide in plain sight. But will his luck hold out or will his final attempt to escape end in tragedy? How It Should Have Ended: Assassin's Creed [via Dorkly Bits]

    Read the article

  • Good design cannot be over-design

    - by ??? Shengyuan Lu
    Many engineers intend to design software so as to build a "flexible" system, with many design patterns and interfaces in it. Eventually too many interfaces and complex inheritance hierarchies mess up the system. In most cases I think improper design caused the mess, rather than over-design. If a design is reasonable, it's hard for it to be "over". Alternatively, if we don't have enough skill to achieve a flexible design, we should choose a plain and practical design. What's your opinion about my understanding?

    Read the article

  • Apache2 "pseudo" doc root

    - by Brent
    I have several folders in my /www folder that contain various applications. To keep things organized, I keep them in their own folders - this includes my base application. Examples:

        phpmyadmin = /www/phpmyadmin
        phpvirtualbox = /www/phpvirtualbox
        root domain site = /www/Landing

    The reason I segregate all of my sites is that I actively develop on some of these (my root site) and when I publish via Visual Studio, I choose to delete prior to upload - if I put the Landing page in the base folder, it would be devastating for me. My goal is that when I go to www.example.com, I go to my page. If I go to www.example.com/phpmyadmin, it does not work because of this in the Apache2 config:

        <Location "/"> # Error is the "/"
          Allow from all
          Order allow,deny
          MonoSetServerAlias domain
          SetHandler mono
          SetOutputFilter DEFLATE
          SetEnvIfNoCase Request_URI "\.(?:gif|jpe?g|png)$" no-gzip dont-vary
        </Location>
        <IfModule mod_deflate.c>
          AddOutputFilterByType DEFLATE text/html text/plain text/xml text/javascript
        </IfModule>

    If I change the location to say "/Other", then the base site is broken, and the aliases are restored for the other sites. If it is "/", then the base site works and no aliases work. What could I do to allow it to treat my /www/Landing as my webroot, but when I go to an alias, it goes to the alias?

    Edit: Added in the default VirtualHost info.

        DocumentRoot /var/www
        <VirtualHost *:80>
          ServerAdmin [email protected]
          ServerName www.example.com
          ExpiresActive On
          ExpiresByType image/gif A2592000
          ExpiresByType image/png A2592000
          ExpiresByType image/jpg A2592000
          ExpiresByType image/jpeg A2592000
          ExpiresByType text/css "access plus 1 days"
          MonoServerPath domain "/usr/bin/mod-mono-server4"
          MonoDebug domain true
          MonoSetEnv domain MONO_IOMAP=all
          MonoApplications domain "/:/var/www/Landing"
          RewriteCond %{REQUEST_FILENAME} !-d
          RewriteCond %{REQUEST_FILENAME} !-f
          RewriteRule (.*) /Landing/$1 [L]
          #Need to watch what the Location is set to. Can cause issues for alias
          <Location "/">
            Allow from all
            Order allow,deny
            MonoSetServerAlias domain
            SetHandler mono
            SetOutputFilter DEFLATE
            SetEnvIfNoCase Request_URI "\.(?:gif|jpe?g|png)$" no-gzip dont-vary
          </Location>
          <IfModule mod_deflate.c>
            AddOutputFilterByType DEFLATE text/html text/plain text/xml text/javascript
          </IfModule>
          ErrorLog ${APACHE_LOG_DIR}/error.log
          LogLevel warn
          CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>
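
    Not from the original post: one plausible direction, given that the mono handler in <Location "/"> swallows everything, is to carve the other apps back out with more specific Location blocks that reset the handler. A sketch under that assumption (untested, and the Alias paths are illustrative):

        Alias /phpmyadmin /www/phpmyadmin

        # More specific Location blocks are merged after "/", so this hands
        # /phpmyadmin back to the default processing chain instead of mod_mono.
        <Location "/phpmyadmin">
          SetHandler None
        </Location>

        # The catch-all rewrite may need the same carve-out:
        RewriteCond %{REQUEST_URI} !^/phpmyadmin
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule (.*) /Landing/$1 [L]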

    Read the article

  • VIM does not detect syntax of .ssh/config

    - by Erik
    On a plain Ubuntu installation (12.04 in my case), when I have no ~/.vimrc, VIM does not detect the syntax of .ssh/config. Syntax highlighting works, but it does not set the correct filetype.

        vi ~/.ssh/config
        :set syn?
        > syntax=conf

    When I do:

        :set syn=sshconfig

    then the syntax highlighting is as it should be. Why isn't the filetype automatically identified? And how can it be set automatically?
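
    Not from the question itself, but one standard way to force the mapping is a filetype autocommand in ~/.vimrc - the pattern below mirrors the one modern Vim runtimes ship with, so treat it as a sketch to adapt:

        " ~/.vimrc - minimal sketch
        syntax on
        filetype plugin on

        " Map any */.ssh/config buffer to the sshconfig filetype
        autocmd BufNewFile,BufRead */.ssh/config setfiletype sshconfig

    Alternatively, if modelines are enabled, a modeline comment at the top of the file itself (# vim: ft=sshconfig) achieves the same for just that one file.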

    Read the article

  • Service Stack

    - by csharp-source.net
    ServiceStack allows you to build re-usable, SOA-style web services with plain POCO DataContract classes. The same DTOs can be shared with a .NET client application, eliminating the need for any generated code. With no configuration required, the web services you create are immediately discoverable and callable via the following supported endpoints:

        - REST and XML
        - REST and JSON
        - SOAP 1.1 / 1.2

    Services can run on both Mono and the .NET Framework and be hosted in either an ASP.NET Web Application, a Windows Service or a Console application.
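
    A hedged sketch of what such a shared DTO might look like - the classes below are illustrative, not taken from the ServiceStack docs; only the plain-DataContract style is the point:

        using System.Runtime.Serialization;

        // A plain POCO request/response pair. Because these are ordinary
        // DataContract classes, the same assembly can be referenced from a
        // .NET client - no generated proxy code required.
        [DataContract]
        public class GetCustomer
        {
            [DataMember]
            public int Id { get; set; }
        }

        [DataContract]
        public class GetCustomerResponse
        {
            [DataMember]
            public string Name { get; set; }

            [DataMember]
            public string Email { get; set; }
        }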

    Read the article

  • Database Activity Monitoring Part 2 - SQL Injection Attacks

    If you think through the web sites you visit on a daily basis, the chances are that you will need to log in to verify who you are. In most cases your username would be stored in a relational database along with all the other registered users on that web site. Hopefully your password will be encrypted and not stored in plain text.
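
    As a taste of the injection theme in the article's title (this example is not from the article body itself), the standard defence is to keep user input out of the SQL string entirely via parameters. A minimal C# sketch, with an assumed users table:

        using System.Data.SqlClient;

        class LoginCheck
        {
            // Vulnerable: string concatenation lets input like "' OR '1'='1" rewrite the query.
            //   "SELECT * FROM users WHERE username = '" + userName + "'"
            //
            // Parameterized: the value travels separately from the SQL text,
            // so it can never be interpreted as SQL.
            static bool UserExists(SqlConnection conn, string userName)
            {
                using (var cmd = new SqlCommand(
                    "SELECT COUNT(*) FROM users WHERE username = @username", conn))
                {
                    cmd.Parameters.AddWithValue("@username", userName);
                    return (int)cmd.ExecuteScalar() > 0;
                }
            }
        }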

    Read the article

  • Source of (programmer) inefficiency

    - by Daniel
    I am interested to gain better insight into the possible reasons for personal inefficiency as programmers (and only in programming) due to - simply - our own errors (because we are human - well, almost all of us). I am not interested in how productive we are or in how many adjustments the customer asks for when the work is done, but in where and how each of us spends that part of our time on tasks that are unproductive and where there is no one to blame except ourselves. Excluding ego-feeding and/or self-gratification, what I am trying to get (for all of us) is:

        - what the common issues eating our time are;
        - insight into the reasons for those issues;
        - simple ways for us, personally (not delegating actions to others or our organizations), to correct our own problems.

    Please, do not think in academic terms, but aim at the opportunity to compare our daily experiences and understand what our personal deficiencies are and how we each try to fix them. If you are interested in responding to this post, please:

        - extend the list if you see something important (or obvious) missing;
        - highlight or name honestly your first issue, telling how you try to address and solve it, acting on yourself and yourself only, in a sort of "continuous quality improvement".

    My criterion for accepting the answer is: choose the best solution (feasibility and utility) to fix one (or more) of the problems on the list. Of course, selecting an error is not a vote on our skills: maybe we are hyper-professional programmers and we lose only ten minutes a year, or we are terribly inefficient, losing a couple of days a week. The reasons for inefficiency could be really the same - just on a different scale.

    A possible list:

        - Plain errors in names (variables, functions).
        - Inability to see the obvious in your code.
        - Misreading.
        - Lack of concentration.
        - Trying to use a technology you have not mastered.
        - Errors with data types.
        - Time required to understand your previous code or your documentation.
        - Trying to do something more than requested because you enjoy it.
        - Using solutions more complicated than required because you enjoy it.
        - Plain logical errors.
        - Errors due to your fault in communications.
        - Distraction.

    My first personal issue: "Trying to use a technology you have not mastered." I have to use several technologies daily, and I often need to spend significant time correcting code because my assumptions were plainly wrong. The reason for this: production needs put on high pressure and make it difficult to find the time to learn. I try to address this by reading technical books - as many as I can - even if this actually consumes a lot of time.

    Read the article

  • Can't fix Plymouth resolution by any means

    - by Uri Herrera
    Due to a bad update to 11.04 I had to completely reinstall Ubuntu. Now I have everything pretty much the way it was before, except for Plymouth. I've tried every script I found, every tutorial I've found; I've installed Plymouth Manager, and I've installed a Plymouth theme that worked before... and still, just plain white text - if it even feels like showing me something, because at times I don't even get to see the text mode. So, is there anything I can do without removing the ATI driver? Can I get rid of Plymouth if it's not going to work properly? Are there any alternatives to Plymouth?

    Read the article

  • A big flat text file or a HTML site for language documentation?

    - by Bad Sector
    A project of mine is a small embeddable Tcl-like scripting language, LIL. While I'm mostly making it for my own use, I think it is interesting enough for others to use, so I want it to have a nice (but not very "wordy") documentation.

    So far I'm using a single flat readme.txt file. It explains the language's syntax, features, standard functions, how to use the C API, etc. It is also easy to scan and read in almost every environment out there, from basic text-only terminals to full-fledged high-end graphical desktop environments. However, while I tried to keep things nicely formatted (as much as this is possible in plain text), I still think that, being a big (and growing) wall of text, it isn't as easy on the eyes as it could be. I also feel that sometimes I'm not writing as much as I want in order to avoid expanding the text too much.

    So I thought I could use another project of mine, QuHelp, which is basically a help site generator for sites like this one, with a sidebar that provides a tree of topics/subtopics and offline full text search. With this I can use HTML to format the documentation, and if I use QuHelp for some other project that uses LIL, I can import LIL's documentation as part of the other project's documentation. However, converting the existing documentation to QuHelp/HTML isn't a small task, especially when it comes to functions (I'll need to put more detail on them than what currently exists in the readme.txt file). It also loses the wide range of availability that it currently has: even if QuHelp's generated code degrades gracefully down to console-only web browsers, plain text is readable from everywhere, including from popular editors such as Vim and Emacs (someone once told me that he likes LIL's documentation because it is readable without leaving his editor).

    So, my question is simply this: should I keep the documentation as it is now, in the form of a single readme.txt file, or should I convert it to something like the site I mentioned above? There is also the option to do both, but I'm not sure if I'll be able to always keep them in sync or if it is worth the effort. After asking around in IRC I've got mixed answers: some liked the wide availability of the single text file; others said that it looks as bad as a man page (personally I don't mind that - I can read man pages just fine - but other people might have issues reading them). What do you think?

    Read the article
