Search Results

Search found 4845 results on 194 pages for 'hand'.


  • How to handle failure to release a resource which is contained in a smart pointer?

    - by cj
    How should an error during resource deallocation be handled, when the object representing the resource is contained in a shared pointer? Smart pointers are a useful tool to manage resources safely. Examples of such resources are memory, disk files, database connections, or network connections. // open a connection to the local HTTP port boost::shared_ptr<Socket> socket = Socket::connect("localhost:80"); In a typical scenario, the class encapsulating the resource should be noncopyable and polymorphic. A good way to support this is to provide a factory method returning a shared pointer, and declare all constructors non-public. The shared pointers can now be copied from and assigned to freely. The object is automatically destroyed when no reference to it remains, and the destructor then releases the resource. /** A TCP/IP connection. */ class Socket { public: static boost::shared_ptr<Socket> connect(const std::string& address); virtual ~Socket(); protected: Socket(const std::string& address); private: // not implemented Socket(const Socket&); Socket& operator=(const Socket&); }; But there is a problem with this approach. The destructor must not throw, so a failure to release the resource will remain undetected. A common way out of this problem is to add a public method to release the resource. class Socket { public: virtual void close(); // may throw // ... }; Unfortunately, this approach introduces another problem: Our objects may now contain resources which have already been released. This complicates the implementation of the resource class. Even worse, it makes it possible for clients of the class to use it incorrectly. The following example may seem far-fetched, but it is a common pitfall in multi-threaded code. socket->close(); // ... size_t nread = socket->read(&buffer[0], buffer.size()); // wrong use! Either we ensure that the resource is not released before the object is destroyed, thereby losing any way to deal with a failed resource deallocation. Or we provide a way to release the resource explicitly during the object's lifetime, thereby making it possible to use the resource class incorrectly. There is a way out of this dilemma. But the solution involves using a modified shared pointer class. These modifications are likely to be controversial. Typical shared pointer implementations, such as boost::shared_ptr, require that no exception be thrown when their object's destructor is called. Generally, no destructor should ever throw, so this is a reasonable requirement. These implementations also allow a custom deleter function to be specified, which is called in lieu of the destructor when no reference to the object remains. The no-throw requirement is extended to this custom deleter function. The rationale for this requirement is clear: The shared pointer's destructor must not throw. If the deleter function does not throw, nor will the shared pointer's destructor. However, the same holds for other member functions of the shared pointer which lead to resource deallocation, e.g. reset(): If resource deallocation fails, no exception can be thrown. The solution proposed here is to allow custom deleter functions to throw. This means that the modified shared pointer's destructor must catch exceptions thrown by the deleter function. On the other hand, member functions other than the destructor, e.g. reset(), shall not catch exceptions of the deleter function (and their implementation becomes somewhat more complicated). 
Here is the original example, using a throwing deleter function: /** A TCP/IP connection. */ class Socket { public: static SharedPtr<Socket> connect(const std::string& address); protected: Socket(const std::string& address); virtual ~Socket() { } private: struct Deleter; // not implemented Socket(const Socket&); Socket& operator=(const Socket&); }; struct Socket::Deleter { void operator()(Socket* socket) { // Close the connection. If an error occurs, delete the socket // and throw an exception. delete socket; } }; SharedPtr<Socket> Socket::connect(const std::string& address) { return SharedPtr<Socket>(new Socket(address), Deleter()); } We can now use reset() to free the resource explicitly. If there is still a reference to the resource in another thread or another part of the program, calling reset() will only decrement the reference count. If this is the last reference to the resource, the resource is released. If resource deallocation fails, an exception is thrown. SharedPtr<Socket> socket = Socket::connect("localhost:80"); // ... socket.reset();
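For illustration, a minimal sketch of what such a modified pointer class might look like (this SharedPtr is a simplified, hypothetical stand-in, not boost's class; the member names and the use of std::function are assumptions). The point is only the asymmetry described above: the destructor swallows exceptions thrown by the deleter, while reset() lets them propagate to the caller.

    #include <cstddef>
    #include <functional>
    #include <utility>

    // Hypothetical, simplified (single-threaded) reference-counted pointer
    // illustrating the proposed semantics. Not production code.
    template <typename T>
    class SharedPtr {
    public:
        SharedPtr(T* p, std::function<void(T*)> deleter)
            : ptr_(p), deleter_(std::move(deleter)), count_(new std::size_t(1)) {}

        SharedPtr(const SharedPtr& other)
            : ptr_(other.ptr_), deleter_(other.deleter_), count_(other.count_) {
            if (count_) ++*count_;
        }

        SharedPtr& operator=(const SharedPtr&) = delete;   // kept minimal

        ~SharedPtr() {
            try {
                release();   // may throw if this is the last reference
            } catch (...) {
                // A destructor must not throw; the deallocation error is lost here.
            }
        }

        // Explicit release: if this is the last reference, the deleter runs and
        // any exception it throws reaches the caller.
        void reset() { release(); }

        T* operator->() const { return ptr_; }

    private:
        void release() {
            if (!count_) return;            // already empty
            std::size_t* count = count_;
            T* object = ptr_;
            count_ = nullptr;               // clear our state first, so a throwing
            ptr_ = nullptr;                 // deleter cannot be invoked twice
            if (--*count == 0) {
                delete count;
                deleter_(object);           // may throw
            }
        }

        T* ptr_;
        std::function<void(T*)> deleter_;
        std::size_t* count_;
    };

Plugging in a Socket-specific deleter like the question's Socket::Deleter, socket.reset() would surface a failed close as an exception, while letting the last copy go out of scope would silently drop the error, which is exactly the trade-off described above.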

    Read the article

  • apache2 doesn't start with location

    - by Geod24
    I have a small domain, which I use only for personal purposes. I'm the main user, and have at most 3-4 users at the same time. I use apache2 with passenger to serve redmine. So I start with an empty apache2: root@xxxxx:/home/# service apache2 start [ ok ] Starting web server: apache2. root@xxxxx:/home/# a2dissite Your choices are: Which site(s) do you want to disable (wildcards ok)? Then I enable my site and restart (not reload) apache2: root@xxxxx:/home/# a2ensite 200-redmine Enabling site 200-redmine. To activate the new configuration, you need to run: service apache2 reload root@xxxxx:/home/# service apache2 restart [FAIL] Restarting web server: apache2 failed! [warn] The apache2 instance did not start within 20 seconds. Please read the log files to discover problems ... (warning). root@xxxxx:/home/# service apache2 restart [FAIL] Restarting web server: apache2 failed! [warn] There are processes named 'apache2' running which do not match your pid file which are left untouched in the name of safety, Please review the situation by hand. ... (warning). root@xxxxx:/home/# pidof apache2 20948 Here's my 200-redmine.conf: PerlLoadModule Apache::Redmine <VirtualHost *:80> ServerName redmine.xxxxx.xxx DocumentRoot /var/www/redmine/public/ ErrorLog ${APACHE_LOG_DIR}/redmine.error.log CustomLog ${APACHE_LOG_DIR}/redmine.access.log common MaxRequestLen 20971520 <Directory "/var/www/redmine/public/"> Options Indexes ExecCGI FollowSymLinks Order allow,deny Allow from all AllowOverride all </Directory> SetEnv GIT_PROJECT_ROOT /opt/git/ SetEnv GIT_HTTP_EXPORT_ALL ScriptAlias /git/ /usr/lib/git-core/git-http-backend/ <Location /git> PerlAuthenHandler Apache::Authn::Redmine::authen_handler PerlAccessHandler Apache::Authn::Redmine::access_handler AuthType Basic Require valid-user AuthName "Redmine Git Repository" RedmineDSN "DBI:mysql:database=redmine;host=localhost:3306" RedmineDbUser "redmine" RedmineDbPass "password" RedmineCacheCredsMax 50 </Location> </VirtualHost> Now if I comment out the ScriptAlias stuff, it works! In addition, if I start the server with 200-redmine disabled and then enable the site, it works, but apache2 will then die randomly, and the Location block doesn't work.
The logs show nothing: root@xxxxx:/home/# ll /var/log/apache2/ total 8 drwxr-xr-x 2 root root 4096 Oct 30 07:52 coredump -rw-r--r-- 1 root root 0 Nov 4 02:39 default.access.log -rw-r--r-- 1 root root 2356 Nov 4 02:39 default.error.log -rw-r--r-- 1 root root 0 Nov 4 02:39 other_vhosts_access.log -rw-r--r-- 1 root root 0 Nov 4 02:39 redmine.access.log -rw-r--r-- 1 root root 0 Nov 4 02:39 redmine.error.log root@xxxxx:/home/# ll /var/log/apache2/coredump/ total 0 root@xxxxx:/home/# cat /var/log/apache2/default.error.log [ 2013-11-04 02:39:36.0130 21471/7fcf090f4740 agents/Watchdog/Main.cpp:452 ]: Options: { 'analytics_log_user' => 'nobody', 'default_group' => 'nogroup', 'default_python' => 'python', 'default_ruby' => '/usr/bin/ruby', 'default_user' => 'nobody', 'log_level' => '0', 'max_instances_per_app' => '0', 'max_pool_size' => '6', 'passenger_root' => '/usr/lib/ruby/vendor_ruby/phusion_passenger/locations.ini', 'pool_idle_time' => '300', 'temp_dir' => '/tmp', 'union_station_gateway_address' => 'gateway.unionstationapp.com', 'union_station_gateway_port' => '443', 'user_switching' => 'true', 'web_server_pid' => '21470', 'web_server_type' => 'apache', 'web_server_worker_gid' => '33', 'web_server_worker_uid' => '33' } [ 2013-11-04 02:39:36.0255 21474/7f9a99fda740 agents/HelperAgent/Main.cpp:597 ]: PassengerHelperAgent online, listening at unix:/tmp/passenger.1.0.21470/generation-0/request [ 2013-11-04 02:39:36.0507 21479/7f8316b0f740 agents/LoggingAgent/Main.cpp:330 ]: PassengerLoggingAgent online, listening at unix:/tmp/passenger.1.0.21470/generation-0/logging [ 2013-11-04 02:39:36.0511 21471/7fcf090f4740 agents/Watchdog/Main.cpp:635 ]: All Phusion Passenger agents started! [ 2013-11-04 02:39:36.3158 21495/7fba6f686740 agents/Watchdog/Main.cpp:452 ]: Options: { 'analytics_log_user' => 'nobody', 'default_group' => 'nogroup', 'default_python' => 'python', 'default_ruby' => '/usr/bin/ruby', 'default_user' => 'nobody', 'log_level' => '0', 'max_instances_per_app' => '0', 'max_pool_size' => '6', 'passenger_root' => '/usr/lib/ruby/vendor_ruby/phusion_passenger/locations.ini', 'pool_idle_time' => '300', 'temp_dir' => '/tmp', 'union_station_gateway_address' => 'gateway.unionstationapp.com', 'union_station_gateway_port' => '443', 'user_switching' => 'true', 'web_server_pid' => '21491', 'web_server_type' => 'apache', 'web_server_worker_gid' => '33', 'web_server_worker_uid' => '33' } [ 2013-11-04 02:39:36.3304 21498/7f0106d9b740 agents/HelperAgent/Main.cpp:597 ]: PassengerHelperAgent online, listening at unix:/tmp/passenger.1.0.21491/generation-0/request [ 2013-11-04 02:39:36.3522 21503/7f92ad392740 agents/LoggingAgent/Main.cpp:330 ]: PassengerLoggingAgent online, listening at unix:/tmp/passenger.1.0.21491/generation-0/logging [ 2013-11-04 02:39:36.3525 21495/7fba6f686740 agents/Watchdog/Main.cpp:635 ]: All Phusion Passenger agents started! 
And at last: root@xxxxx:/home/# apache2ctl -t -D DUMP_VHOSTS VirtualHost configuration: *:80 is a NameVirtualHost default server redmine.xxxx.xxx (/etc/apache2/sites-enabled/200-redmine.conf:5) port 80 namevhost redmine.xxxx.xxx (/etc/apache2/sites-enabled/200-redmine.conf:5) port 80 namevhost redmine.xxxxx.xxx (/etc/apache2/sites-enabled/200-redmine.conf:5) root@xxxxx:/home/# uname -a Linux xxxx.xxx 3.2.0-4-amd64 #1 SMP Debian 3.2.51-1 x86_64 GNU/Linux root@xxxxx:/home/# dpkg --list | grep apache2 ii apache2 2.4.6-3 amd64 Apache HTTP Server ii apache2-bin 2.4.6-3 amd64 Apache HTTP Server (binary files and modules) ii apache2-data 2.4.6-3 all Apache HTTP Server (common files) ii apache2-utils 2.4.6-3 amd64 Apache HTTP Server (utility programs for web servers) ii libapache2-mod-fcgid 1:2.3.9-1 amd64 FastCGI interface module for Apache 2 ii libapache2-mod-passenger 4.0.10-1 amd64 Rails and Rack support for Apache2 ii libapache2-mod-perl2 2.0.8+httpd24-r1449661-6+b1 amd64 Integration of perl with the Apache2 web server ii libapache2-mod-perl2-dev 2.0.8+httpd24-r1449661-6 all Integration of perl with the Apache2 web server - development files ii libapache2-mod-perl2-doc 2.0.8+httpd24-r1449661-6 all Integration of perl with the Apache2 web server - documentation ii libapache2-mod-proxy-html 1:2.4.6-3 amd64 Transitional package for apache2-bin ii libapache2-mod-svn 1.7.13-2 amd64 Apache Subversion server modules for Apache httpd ii libapache2-reload-perl 0.12-2 all module for reloading Perl modules when changed on disk ii libapache2-svn 1.7.13-2 all Apache Subversion server modules for Apache httpd (dummy package) root@xxxxx:/home/# a2dismod Your choices are: access_compat alias auth_basic authn_core authn_file authz_core authz_host authz_svn authz_user autoindex dav dav_svn deflate dir env fcgid filter mime mpm_event negotiation passenger perl proxy proxy_http rewrite setenvif status Which module(s) do you want to disable (wildcards ok)?

    Read the article

  • SSRS Export to Excel not working through VPN (Juniper SA4000)

    - by Veynom
    We have a SharePoint (MOSS 2007 on Win2003 R2) with SSRS reports (from SQL 2005) embedded in it. When we connect to the SharePoint portal through our VPN (the firewall is a Juniper SA4000) using Internet Explorer (6, 7, and 8) and try to export any SSRS report to Excel, we get an error message: Internet Explorer cannot download . Internet Explorer was not able to open the internet site. The requested site is either unavailable or cannot be found. Please try again later. When not using the VPN (LAN from the office), everything (including exporting to Excel) works fine. When using Firefox through the VPN, it works fine. When exporting to any other format (pdf or text or whatever), everything is fine under both IE and FF. Our firewall people suspect something in SSRS/MOSS/Office. Our MOSS consultants suspect something in the Juniper SA4000 firewall. When using Fiddler and not connected through the VPN, I see the following traffic once I click the "Export" button: (Response was a request for client credentials) GET /ReportServer/Reserved.ReportViewerWebControl.axd?ExecutionID=j1pqbvbqkb34qf45fhlgnx55&ControlID=733607a7d607476abb1e6b8794202158&Culture=127&UICulture=9&ReportStack=1&OpType=Export&FileName=Product+Application+Report&ContentDisposition=OnlyHtmlInline&Format=EXCEL HTTP/1.1 Accept: */* Accept-Language: en-US,fr-be;q=0.5 User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; GTB5; .NET CLR 2.0.50727; .NET CLR 1.1.4322; InfoPath.2; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; MS-RTC LM 8; OfficeLiveConnector.1.3; OfficeLivePatch.0.0; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729) Accept-Encoding: gzip, deflate Connection: Keep-Alive Host: r1frchcurdb01.r1.group.corp HTTP/1.1 401 Unauthorized Content-Length: 1656 Content-Type: text/html Server: Microsoft-IIS/6.0 WWW-Authenticate: Negotiate WWW-Authenticate: NTLM X-Powered-By: ASP.NET Date: Mon, 08 Jun 2009 09:25:21 GMT Proxy-Support: Session-Based-Authentication then (Generic Response successful): GET /ReportServer/Reserved.ReportViewerWebControl.axd?ExecutionID=j1pqbvbqkb34qf45fhlgnx55&ControlID=733607a7d607476abb1e6b8794202158&Culture=127&UICulture=9&ReportStack=1&OpType=Export&FileName=Product+Application+Report&ContentDisposition=OnlyHtmlInline&Format=EXCEL HTTP/1.1 Accept: */* Accept-Language: en-US,fr-be;q=0.5 User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; GTB5; .NET CLR 2.0.50727; .NET CLR 1.1.4322; InfoPath.2; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; MS-RTC LM 8; OfficeLiveConnector.1.3; OfficeLivePatch.0.0; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729) Accept-Encoding: gzip, deflate Connection: Keep-Alive Host: r1frchcurdb01.r1.group.corp Authorization: Negotiate 
YIIIewYGKwYBBQUCoIIIbzCCCGugJDAiBgkqhkiC9xIBAgIGCSqGSIb3EgECAgYKKwYBBAGCNwICCqKCCEEEggg9YIIIOQYJKoZIhvcSAQICAQBugggoMIIIJKADAgEFoQMCAQ6iBwMFACAAAACjggdNYYIHSTCCB0WgAwIBBaEPGw1SMS5HUk9VUC5DT1JQoi4wLKADAgECoSUwIxsESFRUUBsbcjFmcmNoY3VyZGIwMS5yMS5ncm91cC5jb3Jwo4IG+zCCBvegAwIBF6EDAgEKooIG6QSCBuUaO7Wi3upEDz+lNA+GLwydOvR57pEzpN0hzKoBe4T6YCx8g05XW4A2cD3P29EuWz2fZiPeHFs8HYYCm7YLkXoKMayzZzgIG91s8Y/H9CZmfTH06rmJfWLyg47AvJ1gkgCLnOxqy1fHuTCeuUpDIkx2eG/bbMM1yYcgR94m+R59qeNwaNFxQJwFt8xpGFeZbfPSPK6mFRDKwbVuyW+c5BmL+SfGfPY0LBPo6wVLihS2vOgNLH/UVyoJZm5XZWTG4ou++o7AdCEoacxdKvXWWMMKweokjFVvrsffDKhu6ecgRxSpvgwNT35Wy6pvrw8hXD0QwhkJ0dB6CAfrpBtvhLThnjDZJdnP+Y+0IMAcDfPpNBteZUGCcJAdmcq/Fi+eSz6Y9T9xKJoTxdT1VssTDAlP0aHsXNmnwx7OcWo5Zgyh0Cucjd7G5LCQeodj0H65dMipYOurtCnL1K8DYwQBnxD8AtkSrPfL1LM1lWTiUCjMtNmX7oIU5mFzQzr3DqbB3TPN4tp+1xulSHuSjqGaiTmbrwaV6m0dtpGlmOO148Gzgclf5k+eZhdITje6mTZyFkunr2spXnEv7uzjghx73YKRGK/nXXQdfEWIrmUyDbc5qGxYBnXfhtg2xNdSIwTgZJTEIm7Mqnat3mpPtQiw8PUPUNfByPSlFMpvZcmy6tJoAnUyFrJfJ9NgXenVcj+YZOAo/uhFcpXgNHV0o2hi8j4vJ982HgRSB1zw+yEPQUp1P61lVIsl/M893lY2sgwFk+DCGmNwbumAKn5gDfq702yy1Drb9Vk6OwlhPXp0c98BFDvZj7sWxHE6e30f1RMOVTN1btngiWr+xO3Dnt+GpKPeKe3ISNcKyzU0zT+kaqx59IPOCRYFB/ZWBmW5+VIV3IxtdRf/rlc7b269JyUZx+xN8n5Stg/fq/XrAbKgXUlnwXGmUGh0deqzqciZc4uFFzrgA6Ba2ZG/3OIcAWPZck37YvIqiUzeustijlGr969ahw2ugHksGuKcihzCpdKLoEciphgpf+kGkXQfSnIuQCqD5N8Vx/uNs7IY87geu3+hwk1GpxUHmikb8/v6qiNL2W5LMIVrSXxuuT1Czoomqxrcic/F9PbBzk62pItS/fhWj3rj23GqwQOCzsQ+UeGTxf2JGtNZsS5nbfv9/7ZYxH0fqUdjHfJpZHd4f5Mk7L+5taNmsyY0BKU0QxIWJQZkLihpsXDInWQ67IUN5nsvvo7Ix1Rh1mZ3Wf/Z9a91O3QhUt3g1AsuzpZtWC+2oFs887KAwkGKr0ex/2pWZtzlCDCULk28f64XYcd+HYQ4Tv1l48okH43jXhSNZnLTTLM4zHcCBpBuBoiVuuKY+haHwuf50C0esn/3EdSf32P/Jvuc/Y12PlqhG8j8fPOmSfTAeT2Rh0aLGLlyOfOTdj/+Uy2RhYwck4mUr1BMJVyLCGCTsMxvxrmQj+bSQbzStj8YrDsQM6/vw3fMs+ZX3NKsADwmRL3PhNQXR1+HVD+oWC2XWITTMVO+SJjl937mvVMjNYsC1sdDUzVlyCLSJxNaa+xLEKymM33tp+4GixDar5QTaTntcFqGUZ/YSvrcBFjQOgXBS62OqIFxKWy6csu3K7eIoprDfhNS/la+4gC871TaRw6rwQNMLwsVjYDgq+e8MXACKXr7jEZyojpT5aOvd0QQte8ThCv82JYTrJvnBIiXWQSNt0MH0ynP7h0uG1eip0RH2VF5nQ7SVNmyVGDXJu/VpVAY2lgfFNIzJS9SxPXGHOn7EZ7cEcOoF7XiVV2baNwqloH3GntBTBFYZ+9N6mH6lRSH8HbOlzCXCQjZEV+qvBwzKVWuC5WbKR26KkjfQ3Ntsjo9cGV5I51Mhve7j7Hkttf1otbOzrj8+ErpFJd9/5UEC4qo6KmQ0Tar3rsfcvbBr18uzxEjV8LY+ZM6WTn5H/6hb/2eFwnVieWOhi1l+O0t+vGyUJpPwwPQQ6hzbxKIWPGjLK4lQnTbNRdQFvpmfIOKAMFui3w1SgxsWNxmMJJeGViJKBP592JzAbu3vFGPdAfnFEOb3ffTOid/hrTu/hAZRSjK+YQEKrju6sohPkHRoRyI+tlyYEWTyQ7oe3fR64J2lQJ8bCt8xtRVYIX8NY2gPvRCLTHbB3jSrK+Fnn7zqASM5yYC+JSiezdqEMkVsgc5HHH1mQxsMS8EgR6mj42QErDtKSKuGMExCPRW/8jvMK0eGepfA2U/WOiv2x2AgamQw1sI8sZtsBdD4jnP5WNkZSJ0i0SuybiCUiArPJgXjJTL+zgEX6kGOMrCFCtArunu6pDaXPQvAKngLFXc4CGdpMnDaVoeCyVO9J32tNfEauxEpIG9MIG6oAMCAReigbIEga8OSk5Xji0SAuy9NwxX1G4Avd5wZWj/OgZZ2VnSIiB7GpMJ4Dsnr/6jfLGj1V9V1zpHNoZG18ipW5R+OlF2rm7T4QKaZWsgwBvuyS5xtn9FGJqGYEmQLSqXbXsLfdT6xqPhB3poBbBtzPGCX/ZgW38iTyJOcbT6QW7TAj340+5EiZvy61jLFU1HtJ3wv41Hv6puSWCLotoS85SPerGz+SVYOZUbmKcx2lTXUTPinPhN HTTP/1.1 200 OK Date: Mon, 08 Jun 2009 09:25:21 GMT Server: Microsoft-IIS/6.0 X-Powered-By: ASP.NET WWW-Authenticate: Negotiate oYGgMIGdoAMKAQChCwYJKoZIgvcSAQICooGIBIGFYIGCBgkqhkiG9xIBAgICAG9zMHGgAwIBBaEDAgEPomUwY6ADAgEXolwEWm70xlMp4oj/PyvriNMeNDigow6/MX2DpaYQdBfGkiF0Dcc323tHLRBxBL03QpvwdGBxZGAJI6V1G8sc/lVBzhlCNsZkbJcNfnMNgOgc7UPrz+ZVav/EVm3sDQ== X-AspNet-Version: 2.0.50727 Content-Disposition: attachment; filename="Product Application Report.xls" Cache-Control: private Expires: Mon, 08 Jun 2009 09:24:21 GMT Content-Type: application/vnd.ms-excel Content-Length: 23012 When using the VPN, I see no traffic in Fiddler and the error message is 
displayed before anything else. Update 17/06/2009: I could get a hand on some logs from our SA4000. Maybe this could help more. Info PTR23232 2009/06/15 17:22:38 - <SA4000> - [<SA4000 IP>] - <user>[SA4000 group names] - Start Policy [WEBURL/PROTOCOL] evaluation for resource http://<DB server>:80/ReportServer/Reserved.ReportViewerWebControl.axd?ExecutionID=rua1g355tic24245f2e13lim&ControlID=44168efcd36e461493f7a69962580b91&Culture=127&UICulture=9&ReportStack=1&OpType=Export&FileName=Product+Application+Report&ContentDisposition=OnlyHtmlInline&Format=EXCEL Info PTR23233 2009/06/15 17:22:38 - <SA4000> - [<SA4000 IP>] - <user>[SA4000 group names] - Applying Policy [Enable HTTP 1.1]... Info PTR23240 2009/06/15 17:22:38 - <SA4000> - [<SA4000 IP>] - <user>[SA4000 group names] - Resource filter [http://nsrvnts2:80/*] does not match Info PTR23240 2009/06/15 17:22:38 - <SA4000> - [<SA4000 IP>] - <user>[SA4000 group names] - Resource filter [http://nsrvnts3:80/*] does not match Info PTR23233 2009/06/15 17:22:38 - <SA4000> - [<SA4000 IP>] - <user>[SA4000 group names] - Applying Policy [Disable HTTP 1.1]... Info PTR23239 2009/06/15 17:22:38 - <SA4000> - [<SA4000 IP>] - <user>[SA4000 group names] - Action [HTTP 1.0] is returned Info PTR23234 2009/06/15 17:22:38 - <SA4000> - [<SA4000 IP>] - <user>[SA4000 group names] - Policy [Disable HTTP 1.1] applies to resource Info PTR23308 2009/06/15 17:22:38 - <SA4000> - [<SA4000 IP>] - <user>[SA4000 group names] - Skip Policy [WEBURL/COMPRESSION] evaluation because Compression option is not enabled Info PTR23232 2009/06/15 17:22:38 - <SA4000> - [<SA4000 IP>] - <user>[SA4000 group names] - Start Policy [WEBURL/WEBPDSID] evaluation for resource http://<DB server>:80/ReportServer/Reserved.ReportViewerWebControl.axd?ExecutionID=rua1g355tic24245f2e13lim&ControlID=44168efcd36e461493f7a69962580b91&Culture=127&UICulture=9&ReportStack=1&OpType=Export&FileName=Product+Application+Report&ContentDisposition=OnlyHtmlInline&Format=EXCEL Info PTR23233 2009/06/15 17:22:38 - <SA4000> - [<SA4000 IP>] - <user>[SA4000 group names] - Applying Policy [Corporate BI Portal]... Info PTR23240 2009/06/15 17:22:38 - <SA4000> - [<SA4000 IP>] - <user>[SA4000 group names] - Resource filter [http://<SharePoint>:80/*] does not match Info PTR23240 2009/06/15 17:22:38 - <SA4000> - [<SA4000 IP>] - <user>[SA4000 group names] - Resource filter [http://<SharePoint>/*] does not match Info PTR23235 2009/06/15 17:22:38 - <SA4000> - [<SA4000 IP>] - <user>[SA4000 group names] - No Policy applies to resource Any tip welcome. :)

    Read the article

  • Is post-sudden-power-loss filesystem corruption on an SSD drive's ext3 partition "expected behavior"?

    - by Jeremy Friesner
    My company makes an embedded Debian Linux device that boots from an ext3 partition on an internal SSD drive. Because the device is an embedded "black box", it is usually shut down the rude way, by simply cutting power to the device via an external switch. This is normally okay, as ext3's journalling keeps things in order, so other than the occasional loss of part of a log file, things keep chugging along fine. However, we've recently seen a number of units where after a number of hard-power-cycles the ext3 partition starts to develop structural issues -- in particular, we run e2fsck on the ext3 partition and it finds a number of issues like those shown in the output listing at the bottom of this Question. Running e2fsck until it stops reporting errors (or reformatting the partition) clears the issues. My question is... what are the implications of seeing problems like this on an ext3/SSD system that has been subjected to lots of sudden/unexpected shutdowns? My feeling is that this might be a sign of a software or hardware problem in our system, since my understanding is that (barring a bug or hardware problem) ext3's journalling feature is supposed to prevent these sorts of filesystem-integrity errors. (Note: I understand that user-data is not journalled and so munged/missing/truncated user-files can happen; I'm specifically talking here about filesystem-metadata errors like those shown below) My co-worker, on the other hand, says that this is known/expected behavior because SSD controllers sometimes re-order write commands and that can cause the ext3 journal to get confused. In particular, he believes that even given normally functioning hardware and bug-free software, the ext3 journal only makes filesystem corruption less likely, not impossible, so we should not be surprised to see problems like this from time to time. Which of us is right? Embedded-PC-failsafe:~# ls Embedded-PC-failsafe:~# umount /mnt/unionfs Embedded-PC-failsafe:~# e2fsck /dev/sda3 e2fsck 1.41.3 (12-Oct-2008) embeddedrootwrite contains a file system with errors, check forced. Pass 1: Checking inodes, blocks, and sizes Pass 2: Checking directory structure Invalid inode number for '.' in directory inode 46948. Fix<y>? yes Directory inode 46948, block 0, offset 12: directory corrupted Salvage<y>? yes Entry 'status_2012-11-26_14h13m41.csv' in /var/log/status_logs (46956) has deleted/unused inode 47075. Clear<y>? yes Entry 'status_2012-11-26_10h42m58.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47076. Clear<y>? yes Entry 'status_2012-11-26_11h29m41.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47080. Clear<y>? yes Entry 'status_2012-11-26_11h42m13.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47081. Clear<y>? yes Entry 'status_2012-11-26_12h07m17.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47083. Clear<y>? yes Entry 'status_2012-11-26_12h14m53.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47085. Clear<y>? yes Entry 'status_2012-11-26_15h06m49.csv' in /var/log/status_logs (46956) has deleted/unused inode 47088. Clear<y>? yes Entry 'status_2012-11-20_14h50m09.csv' in /var/log/status_logs (46956) has deleted/unused inode 47073. Clear<y>? yes Entry 'status_2012-11-20_14h55m32.csv' in /var/log/status_logs (46956) has deleted/unused inode 47074. Clear<y>? yes Entry 'status_2012-11-26_11h04m36.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47078. Clear<y>? 
yes Entry 'status_2012-11-26_11h54m45.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47082. Clear<y>? yes Entry 'status_2012-11-26_12h12m20.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47084. Clear<y>? yes Entry 'status_2012-11-26_12h33m52.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47086. Clear<y>? yes Entry 'status_2012-11-26_10h51m59.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47077. Clear<y>? yes Entry 'status_2012-11-26_11h17m09.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47079. Clear<y>? yes Entry 'status_2012-11-26_12h54m11.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47087. Clear<y>? yes Pass 3: Checking directory connectivity '..' in /etc/network/run (46948) is <The NULL inode> (0), should be /etc/network (46953). Fix<y>? yes Couldn't fix parent of inode 46948: Couldn't find parent directory entry Pass 4: Checking reference counts Unattached inode 46945 Connect to /lost+found<y>? yes Inode 46945 ref count is 2, should be 1. Fix<y>? yes Inode 46953 ref count is 5, should be 4. Fix<y>? yes Pass 5: Checking group summary information Block bitmap differences: -(208264--208266) -(210062--210068) -(211343--211491) -(213241--213250) -(213344--213393) -213397 -(213457--213463) -(213516--213521) -(213628--213655) -(213683--213688) -(213709--213728) -(215265--215300) -(215346--215365) -(221541--221551) -(221696--221704) -227517 Fix<y>? yes Free blocks count wrong for group #6 (17247, counted=17611). Fix<y>? yes Free blocks count wrong (161691, counted=162055). Fix<y>? yes Inode bitmap differences: +(47089--47090) +47093 +47095 +(47097--47099) +(47101--47104) -(47219--47220) -47222 -47224 -47228 -47231 -(47347--47348) -47350 -47352 -47356 -47359 -(47457--47488) -47985 -47996 -(47999--48000) -48017 -(48027--48028) -(48030--48032) -48049 -(48059--48060) -(48062--48064) -48081 -(48091--48092) -(48094--48096) Fix<y>? yes Free inodes count wrong for group #6 (7608, counted=7624). Fix<y>? yes Free inodes count wrong (61919, counted=61935). Fix<y>? yes embeddedrootwrite: ***** FILE SYSTEM WAS MODIFIED ***** embeddedrootwrite: ********** WARNING: Filesystem still has errors ********** embeddedrootwrite: 657/62592 files (24.4% non-contiguous), 87882/249937 blocks Embedded-PC-failsafe:~# Embedded-PC-failsafe:~# e2fsck /dev/sda3 e2fsck 1.41.3 (12-Oct-2008) embeddedrootwrite contains a file system with errors, check forced. Pass 1: Checking inodes, blocks, and sizes Pass 2: Checking directory structure Directory entry for '.' in ... (46948) is big. Split<y>? yes Missing '..' in directory inode 46948. Fix<y>? yes Setting filetype for entry '..' in ... (46948) to 2. Pass 3: Checking directory connectivity '..' in /etc/network/run (46948) is <The NULL inode> (0), should be /etc/network (46953). Fix<y>? yes Pass 4: Checking reference counts Inode 2 ref count is 12, should be 13. Fix<y>? yes Pass 5: Checking group summary information embeddedrootwrite: ***** FILE SYSTEM WAS MODIFIED ***** embeddedrootwrite: 657/62592 files (24.4% non-contiguous), 87882/249937 blocks Embedded-PC-failsafe:~# Embedded-PC-failsafe:~# e2fsck /dev/sda3 e2fsck 1.41.3 (12-Oct-2008) embeddedrootwrite: clean, 657/62592 files, 87882/249937 blocks
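To make the write-reordering concern concrete, here is a small user-space analogue (a sketch only, not ext3 code; atomic_replace is a hypothetical helper). The write/fsync/rename pattern only survives a power cut if the fsync'd data actually reaches stable storage before the rename becomes durable; a journal needs the same guarantee for its commit record relative to the blocks it describes, and a drive that acknowledges writes from a volatile cache and then reorders them is exactly what can break that guarantee (hence write barriers / cache flushes).

    #include <cstddef>
    #include <cstdio>      // std::rename
    #include <string>
    #include <fcntl.h>     // open
    #include <unistd.h>    // write, fsync, close

    // Sketch: atomically replace a file in the current directory. The ordering
    // (1) data fsync'd, (2) rename, (3) directory fsync'd must hold on the
    // physical medium, not just in the drive's volatile cache, or a power cut
    // can leave the new name pointing at missing or garbage contents.
    bool atomic_replace(const std::string& path, const char* data, std::size_t len) {
        const std::string tmp = path + ".tmp";

        int fd = ::open(tmp.c_str(), O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) return false;
        bool ok = ::write(fd, data, len) == static_cast<ssize_t>(len)
               && ::fsync(fd) == 0;                       // (1) new contents durable
        ::close(fd);
        if (!ok) return false;

        if (std::rename(tmp.c_str(), path.c_str()) != 0)  // (2) publish atomically
            return false;

        int dirfd = ::open(".", O_RDONLY);                // (3) make the rename itself
        if (dirfd >= 0) { ::fsync(dirfd); ::close(dirfd); } // durable (assumes path is
        return true;                                         // in the current directory)
    }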

    Read the article

  • Does anyone really understand how HFSC scheduling in Linux/BSD works?

    - by Mecki
    I read the original SIGCOMM '97 PostScript paper about HFSC; it is very technical, but I understand the basic concept. Instead of giving a linear service curve (as with pretty much every other scheduling algorithm), you can specify a convex or concave service curve, and thus it is possible to decouple bandwidth and delay. However, even though this paper mentions the two kinds of scheduling algorithms being used (real-time and link-share), it always mentions only ONE curve per scheduling class (the decoupling is done by specifying this curve; only one curve is needed for that). Now HFSC has been implemented for BSD (OpenBSD, FreeBSD, etc.) using the ALTQ scheduling framework and for Linux using the TC scheduling framework (part of iproute2). Both implementations added two additional service curves that were NOT in the original paper! A real-time service curve and an upper-limit service curve. Again, please note that the original paper mentions two scheduling algorithms (real-time and link-share), but in that paper both work with one single service curve. There have never been two independent service curves for either one, as you currently find in BSD and Linux. Even worse, some versions of ALTQ seem to add an additional queue priority to HFSC (there is no such thing as priority in the original paper either). I found several BSD HowTos mentioning this priority setting (even though the man page of the latest ALTQ release knows no such parameter for HFSC, so officially it does not even exist). This all makes HFSC scheduling even more complex than the algorithm described in the original paper, and there are tons of tutorials on the Internet that often contradict each other, one claiming the opposite of the other. This is probably the main reason why nobody really seems to understand how HFSC scheduling really works. Before I can ask my questions, we need a sample setup of some kind. I'll use a very simple one, as seen in the image below: Here are some questions I cannot answer because the tutorials contradict each other: Why do I need a real-time curve at all? Assuming A1, A2, B1, B2 are all 128 kbit/s link-share (no real-time curve for either one), then each of those will get 128 kbit/s if the root has 512 kbit/s to distribute (and A and B are both 256 kbit/s, of course), right? Why would I additionally give A1 and B1 a real-time curve with 128 kbit/s? What would this be good for? To give those two a higher priority? According to the original paper, I can give them a higher priority by using a curve; that's what HFSC is all about, after all. By giving both classes a curve of [256kbit/s 20ms 128kbit/s], both automatically have twice the priority of A2 and B2 (while still only getting 128 kbit/s on average). Does the real-time bandwidth count towards the link-share bandwidth? E.g. if A1 and B1 both only have 64kbit/s real-time and 64kbit/s link-share bandwidth, does that mean once they are served 64kbit/s via real-time, their link-share requirement is satisfied as well (they might get excess bandwidth, but let's ignore that for a second), or does that mean they get another 64 kbit/s via link-share? So does each class have a bandwidth "requirement" of real-time plus link-share? Or does a class only have a higher requirement than the real-time curve if the link-share curve is higher than the real-time curve (current link-share requirement equals specified link-share requirement minus real-time bandwidth already provided to this class)? 
Is the upper-limit curve applied to real-time as well, only to link-share, or maybe to both? Some tutorials say one way, some say the other way. Some even claim upper-limit is the maximum of real-time bandwidth + link-share bandwidth. What is the truth? Assuming A2 and B2 are both 128 kbit/s, does it make any difference if A1 and B1 are 128 kbit/s link-share only, or 64 kbit/s real-time and 128 kbit/s link-share, and if so, what difference? If I use the separate real-time curve to increase the priorities of classes, why would I need "curves" at all? Why isn't real-time a flat value and link-share another flat value? Why are they both curves? The need for curves is clear in the original paper, because there is only one attribute of that kind per class. But now, having three attributes (real-time, link-share, and upper-limit), why do I still need curves for each one? Why would I want the curves' shapes (not the average bandwidth, but their slopes) to be different for real-time and link-share traffic? According to the little documentation available, real-time curve values are totally ignored for inner classes (classes A and B); they are only applied to leaf classes (A1, A2, B1, B2). If that is true, why does the ALTQ HFSC sample configuration (search for 3.3 Sample configuration) set real-time curves on inner classes and claim that those set the guaranteed rate of those inner classes? Isn't that completely pointless? (Note: pshare sets the link-share curve in ALTQ and grate the real-time curve; you can see this in the paragraph above the sample configuration.) Some tutorials say the sum of all real-time curves may not be higher than 80% of the line speed, others say it must not be higher than 70% of the line speed. Which one is right, or are they maybe both wrong? One tutorial said you should forget all the theory. No matter how things really work (schedulers and bandwidth distribution), imagine the three curves according to the following "simplified mind model": real-time is the guaranteed bandwidth that this class will always get. Link-share is the bandwidth that this class wants in order to become fully satisfied, but satisfaction cannot be guaranteed. In case there is excess bandwidth, the class might even get offered more bandwidth than necessary to become satisfied, but it may never use more than upper-limit says. For all this to work, the sum of all real-time bandwidths may not be above xx% of the line speed (see the question above; the percentage varies). Question: Is this more or less accurate, or a total misunderstanding of HFSC? And if the assumption above is really accurate, where is prioritization in that model? E.g. every class might have a real-time bandwidth (guaranteed), a link-share bandwidth (not guaranteed) and maybe an upper limit, but some classes still have higher priority needs than other classes. In that case I must still prioritize somehow, even among the real-time traffic of those classes. Would I prioritize by the slope of the curves? And if so, which curve? The real-time curve? The link-share curve? The upper-limit curve? All of them? Would I give all of them the same slope, or each a different one, and how do I find the right slope? I still haven't lost hope that there exists at least a handful of people in this world who really understand HFSC and are able to answer all these questions accurately. And doing so without contradicting each other in their answers would be really nice ;-)
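A brief note on notation, as the ALTQ/tc HFSC implementations appear to specify it: an [m1 d m2] triple denotes a two-piece linear service curve with slope m1 for the first d of a backlog period and slope m2 afterwards,

    S(t) = \begin{cases} m_1 \, t, & 0 \le t \le d \\ m_1 d + m_2 (t - d), & t > d \end{cases}

So [256kbit/s 20ms 128kbit/s] describes a class that is served at 256 kbit/s during the first 20 ms of a backlog period (roughly 256 kbit/s x 20 ms ≈ 5.1 kbit, i.e. about 640 bytes) and at 128 kbit/s thereafter: the steep first segment (m1, d) is what lowers delay, while m2 sets the long-term rate, which is the bandwidth/delay decoupling referred to at the start of the question.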

    Read the article

  • Allowing connections initiated from outside

    - by Mark S. Rasmussen
    I've got an old Juniper SSG5 running ScreenOS 5.4.0r6.0. Once a day, more or less, it'll start randomly dropping packets at a rate of ~5-10%. We currently solve this issue by simply rebooting the unit, after which it resumes working in perfect condition. As this error has started appearing randomly, without any configuration or hardware changes, I'm assuming I've got an aging unit about to fail. As such, I've got a replacement SSG5 running ScreenOS 6.0. I've dumped the config on the 5.4 and imported it into a clean 6.0, and it seems to gladly accept it, and all my configuration seems to be A-OK. However, upon connecting the new unit, all outside-initiated connections seem to be blocked. If I browse our external IP from the inside, everything works perfectly, and it's not just port 80, SSH, Crashplan - all of our policies route correctly. All normal networking, initiated from the inside, work perfectly as well. If on the other hand I browse our external IP from the outside, everything is blocked. Barring differences between ScreenOS 5.4 and 6.0, the config is identical. Is there a setting somewhere that defines whether outside/inside initiated connections are allowed? unset key protection enable set clock timezone 1 set vrouter trust-vr sharable set vrouter "untrust-vr" exit set vrouter "trust-vr" unset auto-route-export exit set service "MyVOIP_UDP4569" protocol udp src-port 0-65535 dst-port 4569-4569 set service "MyVOIP_TCP22" protocol tcp src-port 0-65535 dst-port 22-22 set service "MyRDP" protocol tcp src-port 0-65535 dst-port 3389-3389 set service "MyRsync" protocol tcp src-port 0-65535 dst-port 873-873 set service "NZ_FTP" protocol tcp src-port 0-65535 dst-port 40000-41000 set service "NZ_FTP" + tcp src-port 0-65535 dst-port 21-21 set service "PPTP-VPN" protocol 47 src-port 2048-2048 dst-port 2048-2048 set service "PPTP-VPN" + tcp src-port 1024-65535 dst-port 1723-1723 set service "NZ_FMS_1935" protocol tcp src-port 0-65535 dst-port 1935-1935 set service "NZ_FMS_1935" + udp src-port 0-65535 dst-port 1935-1935 set service "NZ_FMS_8080" protocol tcp src-port 0-65535 dst-port 8080-8080 set service "CrashPlan Server" protocol tcp src-port 0-65535 dst-port 4280-4280 set service "CrashPlan Console" protocol tcp src-port 0-65535 dst-port 4282-4282 unset alg sip enable set alg appleichat enable unset alg appleichat re-assembly enable set alg sctp enable set auth-server "Local" id 0 set auth-server "Local" server-name "Local" set auth default auth server "Local" set auth radius accounting port 1646 set admin name "netscreen" set admin password "XXX" set admin auth web timeout 10 set admin auth dial-in timeout 3 set admin auth server "Local" set admin format dos set vip multi-port set zone "Trust" vrouter "trust-vr" set zone "Untrust" vrouter "trust-vr" set zone "DMZ" vrouter "trust-vr" set zone "VLAN" vrouter "trust-vr" set zone "Untrust-Tun" vrouter "trust-vr" set zone "Trust" tcp-rst set zone "Untrust" block unset zone "Untrust" tcp-rst set zone "MGT" block unset zone "V1-Trust" tcp-rst unset zone "V1-Untrust" tcp-rst set zone "DMZ" tcp-rst unset zone "V1-DMZ" tcp-rst unset zone "VLAN" tcp-rst set zone "Untrust" screen tear-drop set zone "Untrust" screen syn-flood set zone "Untrust" screen ping-death set zone "Untrust" screen ip-filter-src set zone "Untrust" screen land set zone "V1-Untrust" screen tear-drop set zone "V1-Untrust" screen syn-flood set zone "V1-Untrust" screen ping-death set zone "V1-Untrust" screen ip-filter-src set zone "V1-Untrust" screen land set interface ethernet0/0 
phy full 100mb set interface ethernet0/3 phy full 100mb set interface ethernet0/4 phy full 100mb set interface ethernet0/5 phy full 100mb set interface ethernet0/6 phy full 100mb set interface "ethernet0/0" zone "Untrust" set interface "ethernet0/1" zone "Null" set interface "bgroup0" zone "Trust" set interface "bgroup1" zone "Trust" set interface "bgroup2" zone "Trust" set interface bgroup2 port ethernet0/2 set interface bgroup0 port ethernet0/3 set interface bgroup0 port ethernet0/4 set interface bgroup1 port ethernet0/5 set interface bgroup1 port ethernet0/6 unset interface vlan1 ip set interface ethernet0/0 ip 215.173.182.18/29 set interface ethernet0/0 route set interface bgroup0 ip 192.168.1.1/24 set interface bgroup0 nat set interface bgroup1 ip 192.168.2.1/24 set interface bgroup1 nat set interface bgroup2 ip 192.168.3.1/24 set interface bgroup2 nat set interface ethernet0/0 gateway 215.173.182.17 unset interface vlan1 bypass-others-ipsec unset interface vlan1 bypass-non-ip set interface ethernet0/0 ip manageable set interface bgroup0 ip manageable set interface bgroup1 ip manageable set interface bgroup2 ip manageable set interface bgroup0 manage mtrace unset interface bgroup1 manage ssh unset interface bgroup1 manage telnet unset interface bgroup1 manage snmp unset interface bgroup1 manage ssl unset interface bgroup1 manage web unset interface bgroup2 manage ssh unset interface bgroup2 manage telnet unset interface bgroup2 manage snmp unset interface bgroup2 manage ssl unset interface bgroup2 manage web set interface ethernet0/0 vip 215.173.182.19 2048 "PPTP-VPN" 192.168.1.131 set interface ethernet0/0 vip 215.173.182.19 + 4280 "CrashPlan Server" 192.168.1.131 set interface ethernet0/0 vip 215.173.182.19 + 4282 "CrashPlan Console" 192.168.1.131 set interface ethernet0/0 vip 215.173.182.22 22 "MyVOIP_TCP22" 192.168.2.127 set interface ethernet0/0 vip 215.173.182.22 + 4569 "MyVOIP_UDP4569" 192.168.2.127 set interface ethernet0/0 vip 215.173.182.22 + 3389 "MyRDP" 192.168.2.202 set interface ethernet0/0 vip 215.173.182.22 + 873 "MyRsync" 192.168.2.201 set interface ethernet0/0 vip 215.173.182.22 + 80 "HTTP" 192.168.2.202 set interface ethernet0/0 vip 215.173.182.22 + 2048 "PPTP-VPN" 192.168.2.201 set interface ethernet0/0 vip 215.173.182.22 + 8080 "NZ_FMS_8080" 192.168.2.216 set interface ethernet0/0 vip 215.173.182.22 + 1935 "NZ_FMS_1935" 192.168.2.216 set interface bgroup0 dhcp server service set interface bgroup1 dhcp server service set interface bgroup2 dhcp server service set interface bgroup0 dhcp server auto set interface bgroup1 dhcp server auto set interface bgroup2 dhcp server auto set interface bgroup0 dhcp server option domainname companyalan set interface bgroup0 dhcp server option dns1 192.168.1.131 set interface bgroup1 dhcp server option domainname companyblan set interface bgroup1 dhcp server option dns1 192.168.2.202 set interface bgroup2 dhcp server option dns1 8.8.8.8 set interface bgroup2 dhcp server option wins1 8.8.4.4 set interface bgroup0 dhcp server ip 192.168.1.2 to 192.168.1.116 set interface bgroup1 dhcp server ip 192.168.2.2 to 192.168.2.116 set interface bgroup2 dhcp server ip 192.168.3.2 to 192.168.3.126 unset interface bgroup0 dhcp server config next-server-ip unset interface bgroup1 dhcp server config next-server-ip unset interface bgroup2 dhcp server config next-server-ip set interface "ethernet0/0" mip 215.173.182.21 host 192.168.2.202 netmask 255.255.255.255 vr "trust-vr" set interface "serial0/0" modem settings "USR" init "AT&F" set interface 
"serial0/0" modem settings "USR" active set interface "serial0/0" modem speed 115200 set interface "serial0/0" modem retry 3 set interface "serial0/0" modem interval 10 set interface "serial0/0" modem idle-time 10 set flow tcp-mss unset flow tcp-syn-check unset flow tcp-syn-bit-check set flow reverse-route clear-text prefer set flow reverse-route tunnel always set pki authority default scep mode "auto" set pki x509 default cert-path partial set pki x509 dn name "[email protected]" set dns host dns1 0.0.0.0 set dns host dns2 0.0.0.0 set dns host dns3 0.0.0.0 set address "Trust" "192.168.1.0/24" 192.168.1.0 255.255.255.0 set address "Trust" "192.168.2.0/24" 192.168.2.0 255.255.255.0 set address "Trust" "192.168.3.0/24" 192.168.3.0 255.255.255.0 set crypto-policy exit set ike respond-bad-spi 1 set ike ikev2 ike-sa-soft-lifetime 60 unset ike ikeid-enumeration unset ike dos-protection unset ipsec access-session enable set ipsec access-session maximum 5000 set ipsec access-session upper-threshold 0 set ipsec access-session lower-threshold 0 set ipsec access-session dead-p2-sa-timeout 0 unset ipsec access-session log-error unset ipsec access-session info-exch-connected unset ipsec access-session use-error-log set vrouter "untrust-vr" exit set vrouter "trust-vr" exit set l2tp default ppp-auth chap set url protocol websense exit set policy id 1 from "Trust" to "Untrust" "Any" "Any" "ANY" permit set policy id 1 exit set policy id 2 from "Untrust" to "Trust" "Any" "VIP(215.173.182.19)" "PPTP-VPN" permit traffic set policy id 2 exit set policy id 3 from "Untrust" to "Trust" "Any" "VIP(215.173.182.22)" "HTTP" permit log set policy id 3 set service "MyRDP" set service "MyRsync" set service "MyVOIP_TCP22" set service "MyVOIP_UDP4569" exit set policy id 6 from "Trust" to "Trust" "192.168.1.0/24" "192.168.2.0/24" "ANY" deny set policy id 6 exit set policy id 7 from "Trust" to "Trust" "192.168.2.0/24" "192.168.1.0/24" "ANY" deny set policy id 7 exit set policy id 8 from "Trust" to "Trust" "192.168.3.0/24" "192.168.1.0/24" "ANY" deny set policy id 8 exit set policy id 9 from "Trust" to "Trust" "192.168.3.0/24" "192.168.2.0/24" "ANY" deny set policy id 9 exit set policy id 10 from "Untrust" to "Trust" "Any" "MIP(215.173.182.21)" "NZ_FTP" permit set policy id 10 exit set policy id 11 from "Untrust" to "Trust" "Any" "VIP(215.173.182.22)" "PPTP-VPN" permit set policy id 11 exit set policy id 12 from "Untrust" to "Trust" "Any" "VIP(215.173.182.22)" "NZ_FMS_1935" permit set policy id 12 set service "NZ_FMS_8080" exit set policy id 13 from "Untrust" to "Trust" "Any" "VIP(215.173.182.19)" "CrashPlan Console" permit set policy id 13 set service "CrashPlan Server" exit set nsmgmt bulkcli reboot-timeout 60 set ssh version v2 set config lock timeout 5 unset license-key auto-update set telnet client enable set snmp port listen 161 set snmp port trap 162 set vrouter "untrust-vr" exit set vrouter "trust-vr" unset add-default-route exit set vrouter "untrust-vr" exit set vrouter "trust-vr" exit Note that I've previously posted a similar question (pertaining to the same device & replacement, but ultimately caused by a malfunctioning switch, and thus clouding the current issue): Outbound traffic being blocked for MIP/VIPped servers (Juniper SSG5)

    Read the article

  • ubuntu 10.04 logs itself out overnight

    - by Corey
    Every night when I leave work, I lock the screen via ubuntu's "power" button in the top right hand panel. When I come to work in the morning, I'm greeted with the log-in screen. This doesn't happen every night, but most. I'm running ubuntu 10.04 on a Dell inspiron. I've included some HW specs, and also dmesg output. Please let me know what other logs may be useful. thanks! Corey ~$ dmesg [20559.696062] type=1503 audit(1285957687.048:16): operation="open" pid=6212 parent=1 profile="/usr/bin/evince" requested_mask="::r" denied_mask="::r" fsuid=1000 ouid=0 name="/usr/local/lib/libltdl.so.7.2.2" [21127.951621] type=1503 audit(1285958255.300:17): operation="open" pid=6390 parent=1 profile="/usr/bin/evince" requested_mask="::r" denied_mask="::r" fsuid=1000 ouid=0 name="/usr/local/lib/libltdl.so.7.2.2" [291038.528014] [drm:i915_hangcheck_elapsed] *ERROR* Hangcheck timer elapsed... GPU hung [291038.528025] render error detected, EIR: 0x00000000 [291038.528042] [drm:i915_do_wait_request] *ERROR* i915_do_wait_request returns -5 (awaiting 22973891 at 22973890) [291038.828014] [drm:i915_hangcheck_elapsed] *ERROR* Hangcheck timer elapsed... GPU hung [291038.828023] render error detected, EIR: 0x00000000 [291038.828042] [drm:i915_do_wait_request] *ERROR* i915_do_wait_request returns -5 (awaiting 22973894 at 22973890) ~$ lspci -vv 00:00.0 Host bridge: Intel Corporation 4 Series Chipset DRAM Controller (rev 03) Subsystem: Dell Device 02e1 Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+ >SERR- <PERR- INTx- Latency: 0 Capabilities: <access denied> Kernel driver in use: agpgart-intel Kernel modules: intel-agp 00:02.0 VGA compatible controller: Intel Corporation 4 Series Chipset Integrated Graphics Controller (rev 03) Subsystem: Dell Device 02e1 Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+ Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Interrupt: pin A routed to IRQ 27 Region 0: Memory at fe400000 (64-bit, non-prefetchable) [size=4M] Region 2: Memory at d0000000 (64-bit, prefetchable) [size=256M] Region 4: I/O ports at dc00 [size=8] Capabilities: <access denied> Kernel driver in use: i915 Kernel modules: i915 00:1b.0 Audio device: Intel Corporation N10/ICH 7 Family High Definition Audio Controller (rev 01) Subsystem: Dell Device 02e1 Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0, Cache Line Size: 32 bytes Interrupt: pin A routed to IRQ 16 Region 0: Memory at feaf8000 (64-bit, non-prefetchable) [size=16K] Capabilities: <access denied> Kernel driver in use: HDA Intel Kernel modules: snd-hda-intel 00:1c.0 PCI bridge: Intel Corporation N10/ICH 7 Family PCI Express Port 1 (rev 01) Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx+ Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0, Cache Line Size: 32 bytes Bus: primary=00, secondary=01, subordinate=01, sec-latency=0 I/O behind bridge: 00001000-00001fff Memory behind bridge: 80000000-801fffff Prefetchable memory behind bridge: 0000000080200000-00000000803fffff Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- 
<MAbort- <SERR- <PERR- BridgeCtl: Parity- SERR+ NoISA+ VGA- MAbort- >Reset- FastB2B- PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn- Capabilities: <access denied> Kernel driver in use: pcieport Kernel modules: shpchp 00:1c.1 PCI bridge: Intel Corporation N10/ICH 7 Family PCI Express Port 2 (rev 01) Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx+ Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0, Cache Line Size: 32 bytes Bus: primary=00, secondary=02, subordinate=02, sec-latency=0 I/O behind bridge: 0000e000-0000efff Memory behind bridge: feb00000-febfffff Prefetchable memory behind bridge: 00000000fdf00000-00000000fdffffff Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR- BridgeCtl: Parity- SERR+ NoISA+ VGA- MAbort- >Reset- FastB2B- PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn- Capabilities: <access denied> Kernel driver in use: pcieport Kernel modules: shpchp 00:1d.0 USB Controller: Intel Corporation N10/ICH7 Family USB UHCI Controller #1 (rev 01) Subsystem: Dell Device 02e1 Control: I/O+ Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap- 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Interrupt: pin A routed to IRQ 23 Region 4: I/O ports at d880 [size=32] Kernel driver in use: uhci_hcd 00:1d.1 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #2 (rev 01) Subsystem: Dell Device 02e1 Control: I/O+ Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap- 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Interrupt: pin B routed to IRQ 19 Region 4: I/O ports at d800 [size=32] Kernel driver in use: uhci_hcd 00:1d.2 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #3 (rev 01) Subsystem: Dell Device 02e1 Control: I/O+ Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap- 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Interrupt: pin C routed to IRQ 18 Region 4: I/O ports at d480 [size=32] Kernel driver in use: uhci_hcd 00:1d.3 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #4 (rev 01) Subsystem: Dell Device 02e1 Control: I/O+ Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap- 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Interrupt: pin D routed to IRQ 16 Region 4: I/O ports at d400 [size=32] Kernel driver in use: uhci_hcd 00:1d.7 USB Controller: Intel Corporation N10/ICH 7 Family USB2 EHCI Controller (rev 01) (prog-if 20) Subsystem: Dell Device 02e1 Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Interrupt: pin A routed to IRQ 23 Region 0: Memory at feaf7c00 (32-bit, non-prefetchable) [size=1K] Capabilities: <access denied> Kernel driver in use: ehci_hcd 00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev e1) (prog-if 01) Control: I/O- Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- 
<MAbort- >SERR- <PERR- INTx- Latency: 0 Bus: primary=00, secondary=03, subordinate=03, sec-latency=32 Secondary status: 66MHz- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort+ <SERR- <PERR- BridgeCtl: Parity- SERR+ NoISA+ VGA- MAbort- >Reset- FastB2B- PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn- Capabilities: <access denied> 00:1f.0 ISA bridge: Intel Corporation 82801GB/GR (ICH7 Family) LPC Interface Bridge (rev 01) Subsystem: Dell Device 02e1 Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Capabilities: <access denied> Kernel modules: iTCO_wdt, intel-rng 00:1f.2 IDE interface: Intel Corporation N10/ICH7 Family SATA IDE Controller (rev 01) (prog-if 8f [Master SecP SecO PriP PriO]) Subsystem: Dell Device 02e1 Control: I/O+ Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz+ UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Interrupt: pin B routed to IRQ 19 Region 0: I/O ports at d080 [size=8] Region 1: I/O ports at d000 [size=4] Region 2: I/O ports at cc00 [size=8] Region 3: I/O ports at c880 [size=4] Region 4: I/O ports at c800 [size=16] Capabilities: <access denied> Kernel driver in use: ata_piix 00:1f.3 SMBus: Intel Corporation N10/ICH 7 Family SMBus Controller (rev 01) Subsystem: Dell Device 02e1 Control: I/O+ Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap- 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Interrupt: pin B routed to IRQ 5 Region 4: I/O ports at 0400 [size=32] Kernel modules: i2c-i801 02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller (rev 02) Subsystem: Dell Device 02e1 Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+ Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0, Cache Line Size: 32 bytes Interrupt: pin A routed to IRQ 26 Region 0: I/O ports at e800 [size=256] Region 2: Memory at fdfff000 (64-bit, prefetchable) [size=4K] Region 4: Memory at fdfe0000 (64-bit, prefetchable) [size=64K] Expansion ROM at febe0000 [disabled] [size=128K] Capabilities: <access denied> Kernel driver in use: r8169 Kernel modules: r8169 log$ tail -n 15 Xorg.0.log.old for help. Please also check the log file at "/var/log/Xorg.0.log" for additional information. (II) Power Button: Close (II) UnloadModule: "evdev" (II) Power Button: Close (II) UnloadModule: "evdev" (II) USB Optical Mouse: Close (II) UnloadModule: "evdev" (II) Dell Dell USB Entry Keyboard: Close (II) UnloadModule: "evdev" (II) Macintosh mouse button emulation: Close (II) UnloadModule: "evdev" (II) AIGLX: Suspending AIGLX clients for VT switch ddxSigGiveUp: Closing log

    Read the article

  • Does anyone really understand how HFSC scheduling in Linux/BSD works?

    - by Mecki
    I read the original SIGCOMM '97 PostScript paper about HFSC, it is very technically, but I understand the basic concept. Instead of giving a linear service curve (as with pretty much every other scheduling algorithm), you can specify a convex or concave service curve and thus it is possible to decouple bandwidth and delay. However, even though this paper mentions to kind of scheduling algorithms being used (real-time and link-share), it always only mentions ONE curve per scheduling class (the decoupling is done by specifying this curve, only one curve is needed for that). Now HFSC has been implemented for BSD (OpenBSD, FreeBSD, etc.) using the ALTQ scheduling framework and it has been implemented Linux using the TC scheduling framework (part of iproute2). Both implementations added two additional service curves, that were NOT in the original paper! A real-time service curve and an upper-limit service curve. Again, please note that the original paper mentions two scheduling algorithms (real-time and link-share), but in that paper both work with one single service curve. There never have been two independent service curves for either one as you currently find in BSD and Linux. Even worse, some version of ALTQ seems to add an additional queue priority to HSFC (there is no such thing as priority in the original paper either). I found several BSD HowTo's mentioning this priority setting (even though the man page of the latest ALTQ release knows no such parameter for HSFC, so officially it does not even exist). This all makes the HFSC scheduling even more complex than the algorithm described in the original paper and there are tons of tutorials on the Internet that often contradict each other, one claiming the opposite of the other one. This is probably the main reason why nobody really seems to understand how HFSC scheduling really works. Before I can ask my questions, we need a sample setup of some kind. I'll use a very simple one as seen in the image below: Here are some questions I cannot answer because the tutorials contradict each other: What for do I need a real-time curve at all? Assuming A1, A2, B1, B2 are all 128 kbit/s link-share (no real-time curve for either one), then each of those will get 128 kbit/s if the root has 512 kbit/s to distribute (and A and B are both 256 kbit/s of course), right? Why would I additionally give A1 and B1 a real-time curve with 128 kbit/s? What would this be good for? To give those two a higher priority? According to original paper I can give them a higher priority by using a curve, that's what HFSC is all about after all. By giving both classes a curve of [256kbit/s 20ms 128kbit/s] both have twice the priority than A2 and B2 automatically (still only getting 128 kbit/s on average) Does the real-time bandwidth count towards the link-share bandwidth? E.g. if A1 and B1 both only have 64kbit/s real-time and 64kbit/s link-share bandwidth, does that mean once they are served 64kbit/s via real-time, their link-share requirement is satisfied as well (they might get excess bandwidth, but lets ignore that for a second) or does that mean they get another 64 kbit/s via link-share? So does each class has a bandwidth "requirement" of real-time plus link-share? Or does a class only have a higher requirement than the real-time curve if the link-share curve is higher than the real-time curve (current link-share requirement equals specified link-share requirement minus real-time bandwidth already provided to this class)? 
Is the upper-limit curve applied to real-time as well, only to link-share, or maybe to both? Some tutorials say one way, some say the other. Some even claim upper-limit is the maximum for real-time bandwidth + link-share bandwidth. What is the truth? Assuming A2 and B2 are both 128 kbit/s, does it make any difference if A1 and B1 are 128 kbit/s link-share only, or 64 kbit/s real-time and 128 kbit/s link-share, and if so, what difference? If I use the separate real-time curve to increase the priorities of classes, why would I need "curves" at all? Why isn't real-time a flat value and link-share also a flat value? Why are both curves? The need for curves is clear in the original paper, because there is only one attribute of that kind per class. But now, having three attributes (real-time, link-share, and upper-limit), why do I still need curves on each one? Why would I want the curves' shapes (not their average bandwidth, but their slopes) to be different for real-time and link-share traffic? According to the little documentation available, real-time curve values are totally ignored for inner classes (class A and B); they are only applied to leaf classes (A1, A2, B1, B2). If that is true, why does the ALTQ HFSC sample configuration (search for 3.3 Sample configuration) set real-time curves on inner classes and claim that those set the guaranteed rate of those inner classes? Isn't that completely pointless? (Note: pshare sets the link-share curve in ALTQ and grate the real-time curve; you can see this in the paragraph above the sample configuration.) Some tutorials say the sum of all real-time curves may not be higher than 80% of the line speed, others say it must not be higher than 70% of the line speed. Which one is right, or are they maybe both wrong? One tutorial said you should forget all the theory. No matter how things really work (schedulers and bandwidth distribution), imagine the three curves according to the following "simplified mind model": real-time is the guaranteed bandwidth that this class will always get. Link-share is the bandwidth that this class wants in order to become fully satisfied, but satisfaction cannot be guaranteed. In case there is excess bandwidth, the class might even get offered more bandwidth than necessary to become satisfied, but it may never use more than upper-limit says. For all this to work, the sum of all real-time bandwidths may not be above xx% of the line speed (see the question above; the percentage varies). Question: Is this more or less accurate, or a total misunderstanding of HFSC? And if the assumption above is really accurate, where is prioritization in that model? E.g. every class might have a real-time bandwidth (guaranteed), a link-share bandwidth (not guaranteed) and maybe an upper limit, but still some classes have higher priority needs than other classes. In that case I must still prioritize somehow, even among the real-time traffic of those classes. Would I prioritize by the slope of the curves? And if so, which curve? The real-time curve? The link-share curve? The upper-limit curve? All of them? Would I give all of them the same slope or each a different one, and how do I find out the right slope? I still haven't lost hope that there exists at least a handful of people in this world who really understand HFSC and are able to answer all these questions accurately. And doing so without contradicting each other in the answers would be really nice ;-)

    Read the article

  • Windows Server 2008 R2 network adapter stops working, requires hard reboot

    - by Geoff Dalgas
    TL;DR version: Turns out this was a Windows Server 2008 R2 kernel networking bug. After siccing Microsoft support on it, we (eventually) got an unpublished kernel hotfix from Microsoft to address it. If you, too, are experiencing mysterious low-level network driver failures requiring a reboot/bluescreen cycle, you might want that hotfix (or maybe Service Pack 1 whenever it is released, too.) We have been using HAProxy along with heartbeat from the Linux-HA project. We are using two Linux instances to provide failover. Each server has its own public IP, and a single IP is shared between the two using a virtual interface (eth1:1) at IP 69.59.196.211. The virtual interface (eth1:1) IP 69.59.196.211 is configured as the gateway for the Windows servers behind them, and we use ip_forwarding to route traffic. We are experiencing an occasional network outage on one of our Windows servers behind our Linux gateways. HAProxy will detect the server is offline, which we can verify by remoting to the failed server and attempting to ping the gateway: Pinging 69.59.196.211 with 32 bytes of data: Reply from 69.59.196.220: Destination host unreachable. Running arp -a on this failed server shows that there is no entry for the gateway address (69.59.196.211): Interface: 69.59.196.220 --- 0xa Internet Address Physical Address Type 69.59.196.161 00-26-88-63-c7-80 dynamic 69.59.196.210 00-15-5d-0a-3e-0e dynamic 69.59.196.212 00-21-5e-4d-45-c9 dynamic 69.59.196.213 00-15-5d-00-b2-0d dynamic 69.59.196.215 00-21-5e-4d-61-1a dynamic 69.59.196.217 00-21-5e-4d-2c-e8 dynamic 69.59.196.219 00-21-5e-4d-38-e5 dynamic 69.59.196.221 00-15-5d-00-b2-0d dynamic 69.59.196.222 00-15-5d-0a-3e-09 dynamic 69.59.196.223 ff-ff-ff-ff-ff-ff static 224.0.0.22 01-00-5e-00-00-16 static 224.0.0.252 01-00-5e-00-00-fc static 225.0.0.1 01-00-5e-00-00-01 static On our Linux gateway instances arp -a shows: peak-colo-196-220.peak.org (69.59.196.220) at <incomplete> on eth1 stackoverflow.com (69.59.196.212) at 00:21:5e:4d:45:c9 [ether] on eth1 peak-colo-196-215.peak.org (69.59.196.215) at 00:21:5e:4d:61:1a [ether] on eth1 peak-colo-196-219.peak.org (69.59.196.219) at 00:21:5e:4d:38:e5 [ether] on eth1 peak-colo-196-222.peak.org (69.59.196.222) at 00:15:5d:0a:3e:09 [ether] on eth1 peak-colo-196-209.peak.org (69.59.196.209) at 00:26:88:63:c7:80 [ether] on eth1 peak-colo-196-217.peak.org (69.59.196.217) at 00:21:5e:4d:2c:e8 [ether] on eth1 Why would arp occasionally set the entry for this failed server as <incomplete>? Should we be defining our arp entries statically? I've always left arp alone since it works 99% of the time, but in this one instance it appears to be failing. Are there any additional troubleshooting steps we can take to help resolve this issue? THINGS WE HAVE TRIED I added a static arp entry for testing on one of the Linux gateways, which still didn't help. 
root@haproxy2:~# arp -a peak-colo-196-215.peak.org (69.59.196.215) at 00:21:5e:4d:61:1a [ether] on eth1 peak-colo-196-221.peak.org (69.59.196.221) at 00:15:5d:00:b2:0d [ether] on eth1 stackoverflow.com (69.59.196.212) at 00:21:5e:4d:45:c9 [ether] on eth1 peak-colo-196-219.peak.org (69.59.196.219) at 00:21:5e:4d:38:e5 [ether] on eth1 peak-colo-196-209.peak.org (69.59.196.209) at 00:26:88:63:c7:80 [ether] on eth1 peak-colo-196-217.peak.org (69.59.196.217) at 00:21:5e:4d:2c:e8 [ether] on eth1 peak-colo-196-220.peak.org (69.59.196.220) at 00:21:5e:4d:30:8d [ether] PERM on eth1 root@haproxy2:~# arp -i eth1 -s 69.59.196.220 00:21:5e:4d:30:8d root@haproxy2:~# ping 69.59.196.220 PING 69.59.196.220 (69.59.196.220) 56(84) bytes of data. --- 69.59.196.220 ping statistics --- 7 packets transmitted, 0 received, 100% packet loss, time 6006ms Rebooting the windows web server solves this issue temporarily with no other changes to the network but our experience shows this issue will come back. Swapping network cards and switches I noticed the link light on the port of the switch for the failed windows server was running at 100Mb instead of 1Gb on the failed interface. I moved the cable to several other open ports and the link indicated 100Mb for each port that I tried. I also swapped the cable with the same result. I tried changing the properties of the network card in windows and the server locked up and required a hard reset after clicking apply. This windows server has two physical network interfaces so I have swapped the cables and network settings on the two interfaces to see if the problem follows the interface. If the public interface goes down again we will know that it is not an issue with the network card. (We also tried another switch we have on hand, no change) Changing network hardware driver versions We've had the same problem with the latest Broadcom driver, as well as the built-in driver that ships in Windows Server 2008 R2. Replacing network cables As a last ditch effort we remembered another change that occurred was the replacement of all of the patch cords between our servers / switch. We had purchased two sets, one green of lengths 1ft - 3ft for the private interfaces and another set of red cables for the public interfaces. We swapped out all of the public interface patch cables with a different brand and ran our servers without issue for a full week ... aaaaaand then the problem recurred. Disable checksum offload, remove TProxy We also tried disabling TCP/IP checksum offload in the driver, no change. We're now pulling out TProxy and moving to a more traditional x-forwarded-for network arrangement without any fancy IP address rewriting. We'll see if that helps. Switch Virtualization providers On the off chance this was related to Hyper-V in some way (we do host Linux VMs on it), we switched to VMWare Server. No change. Switch host model We've reached the end of our troubleshooting rope and are now formally involving Microsoft support. They recommended changing the host model: http://en.wikipedia.org/wiki/Host_model http://technet.microsoft.com/en-us/magazine/2007.09.cableguy.aspx We did that, and.. we'll see.

    Read the article

  • Set up tunnel to HE.net and now only ipv6.google.com works, but other sites ping fine.

    - by AndrejaKo
    I'm setting up IPv6 using my router, which is running OpenWrt (Backfire 10.03.1-rc4). I made a tunnel using Hurricane Electric's tunnel broker, set it up on the router, and I'm using radvd to hand out IPv6 addresses. My problem is that on computers on the network, I can only access ipv6.google.com using a browser; other sites seem to load forever and won't open in any browser. I can ping and traceroute to them fine, but can't open them with a browser. I can open any site normally with a browser from the router. Stopping the firewall service on the router doesn't help, so it's probably not a firewall issue. All AAAA records resolve fine, so it's probably not a DNS issue. Computers on the network get their IPv6 addresses fine, so it's probably not a radvd issue. A similar setup worked fine for SixXS, but I'm having problems with my PoP there, so I decided to move to HE. Here are some traceroutes: From a client computer: Tracing route to ipv6.he.net [2001:470:0:64::2] over a maximum of 30 hops: 1 <1 ms 1 ms 1 ms 2001:470:1f0b:de5::1 2 62 ms 63 ms 62 ms andrejako-1.tunnel.tserv6.fra1.ipv6.he.net [2001:470:1f0a:de5::1] 3 60 ms 60 ms 63 ms gige-g2-4.core1.fra1.he.net [2001:470:0:69::1] 4 63 ms 68 ms 68 ms 10gigabitethernet1-4.core1.ams1.he.net [2001:470:0:47::1] 5 84 ms 74 ms 76 ms 10gigabitethernet1-4.core1.lon1.he.net [2001:470:0:3f::1] 6 146 ms 147 ms 151 ms 10gigabitethernet4-4.core1.nyc4.he.net [2001:470:0:128::1] 7 200 ms 198 ms 202 ms 10gigabitethernet5-3.core1.lax1.he.net [2001:470:0:10e::1] 8 219 ms * 210 ms 10gigabitethernet2-2.core1.fmt2.he.net [2001:470:0:18d::1] 9 221 ms 338 ms 209 ms gige-g4-18.core1.fmt1.he.net [2001:470:0:2d::1] 10 206 ms 210 ms 207 ms ipv6.he.net [2001:470:0:64::2] Trace complete. and another from a client computer: Tracing route to whatismyipv6.com [2001:4870:a24f:2::90] over a maximum of 30 hops: 1 7 ms 1 ms 1 ms 2001:470:1f0b:de5::1 2 69 ms 70 ms 63 ms AndrejaKo-1.tunnel.tserv6.fra1.ipv6.he.net [2001:470:1f0a:de5::1] 3 57 ms 65 ms 58 ms gige-g2-4.core1.fra1.he.net [2001:470:0:69::1] 4 73 ms 74 ms 75 ms 10gigabitethernet1-4.core1.ams1.he.net [2001:470:0:47::1] 5 71 ms 74 ms 76 ms 10gigabitethernet1-4.core1.lon1.he.net [2001:470:0:3f::1] 6 141 ms 149 ms 148 ms 10gigabitethernet2-3.core1.nyc4.he.net [2001:470:0:3e::1] 7 141 ms 147 ms 143 ms 10gigabitethernet1-2.core1.nyc1.he.net [2001:470:0:37::2] 8 144 ms 145 ms 142 ms 2001:504:1::a500:4323:1 9 226 ms 225 ms 218 ms 2001:4870:a240::2 10 220 ms 224 ms 219 ms 2001:4870:a240::2 11 219 ms 218 ms 220 ms 2001:4870:a24f::2 12 221 ms 222 ms 220 ms www.whatismyipv6.com [2001:4870:a24f:2::90] Trace complete. 
Here's some firewall info on the router: root@OpenWrt:/# iptables -L -n Chain INPUT (policy ACCEPT) target prot opt source destination ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 syn_flood tcp -- 0.0.0.0/0 0.0.0.0/0 tcp flags:0x17/0x02 input_rule all -- 0.0.0.0/0 0.0.0.0/0 input all -- 0.0.0.0/0 0.0.0.0/0 Chain FORWARD (policy DROP) target prot opt source destination zone_wan_MSSFIX all -- 0.0.0.0/0 0.0.0.0/0 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED forwarding_rule all -- 0.0.0.0/0 0.0.0.0/0 forward all -- 0.0.0.0/0 0.0.0.0/0 reject all -- 0.0.0.0/0 0.0.0.0/0 Chain OUTPUT (policy ACCEPT) target prot opt source destination ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 output_rule all -- 0.0.0.0/0 0.0.0.0/0 output all -- 0.0.0.0/0 0.0.0.0/0 Chain forward (1 references) target prot opt source destination zone_lan_forward all -- 0.0.0.0/0 0.0.0.0/0 zone_wan_forward all -- 0.0.0.0/0 0.0.0.0/0 zone_wan_forward all -- 0.0.0.0/0 0.0.0.0/0 Chain forwarding_lan (1 references) target prot opt source destination Chain forwarding_rule (1 references) target prot opt source destination nat_reflection_fwd all -- 0.0.0.0/0 0.0.0.0/0 Chain forwarding_wan (1 references) target prot opt source destination Chain input (1 references) target prot opt source destination zone_lan all -- 0.0.0.0/0 0.0.0.0/0 zone_wan all -- 0.0.0.0/0 0.0.0.0/0 zone_wan all -- 0.0.0.0/0 0.0.0.0/0 Chain input_lan (1 references) target prot opt source destination Chain input_rule (1 references) target prot opt source destination Chain input_wan (1 references) target prot opt source destination Chain nat_reflection_fwd (1 references) target prot opt source destination ACCEPT tcp -- 192.168.1.0/24 192.168.1.2 tcp dpt:80 Chain output (1 references) target prot opt source destination zone_lan_ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 zone_wan_ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 Chain output_rule (1 references) target prot opt source destination Chain reject (7 references) target prot opt source destination REJECT tcp -- 0.0.0.0/0 0.0.0.0/0 reject-with tcp-reset REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable Chain syn_flood (1 references) target prot opt source destination RETURN tcp -- 0.0.0.0/0 0.0.0.0/0 tcp flags:0x17/0x02 limit: avg 25/sec burst 50 DROP all -- 0.0.0.0/0 0.0.0.0/0 Chain zone_lan (1 references) target prot opt source destination input_lan all -- 0.0.0.0/0 0.0.0.0/0 zone_lan_ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 Chain zone_lan_ACCEPT (2 references) target prot opt source destination ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 Chain zone_lan_DROP (0 references) target prot opt source destination DROP all -- 0.0.0.0/0 0.0.0.0/0 DROP all -- 0.0.0.0/0 0.0.0.0/0 Chain zone_lan_MSSFIX (0 references) target prot opt source destination TCPMSS tcp -- 0.0.0.0/0 0.0.0.0/0 tcp flags:0x06/0x02 TCPMSS clamp to PMTU Chain zone_lan_REJECT (1 references) target prot opt source destination reject all -- 0.0.0.0/0 0.0.0.0/0 reject all -- 0.0.0.0/0 0.0.0.0/0 Chain zone_lan_forward (1 references) target prot opt source destination zone_wan_ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 forwarding_lan all -- 0.0.0.0/0 0.0.0.0/0 zone_lan_REJECT all -- 0.0.0.0/0 0.0.0.0/0 Chain zone_wan (2 references) target prot opt source destination ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:68 ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0 icmp type 8 ACCEPT 41 -- 0.0.0.0/0 0.0.0.0/0 input_wan all -- 0.0.0.0/0 0.0.0.0/0 zone_wan_REJECT all 
-- 0.0.0.0/0 0.0.0.0/0 Chain zone_wan_ACCEPT (2 references) target prot opt source destination ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 Chain zone_wan_DROP (0 references) target prot opt source destination DROP all -- 0.0.0.0/0 0.0.0.0/0 DROP all -- 0.0.0.0/0 0.0.0.0/0 DROP all -- 0.0.0.0/0 0.0.0.0/0 DROP all -- 0.0.0.0/0 0.0.0.0/0 Chain zone_wan_MSSFIX (1 references) target prot opt source destination TCPMSS tcp -- 0.0.0.0/0 0.0.0.0/0 tcp flags:0x06/0x02 TCPMSS clamp to PMTU TCPMSS tcp -- 0.0.0.0/0 0.0.0.0/0 tcp flags:0x06/0x02 TCPMSS clamp to PMTU Chain zone_wan_REJECT (2 references) target prot opt source destination reject all -- 0.0.0.0/0 0.0.0.0/0 reject all -- 0.0.0.0/0 0.0.0.0/0 reject all -- 0.0.0.0/0 0.0.0.0/0 reject all -- 0.0.0.0/0 0.0.0.0/0 Chain zone_wan_forward (2 references) target prot opt source destination ACCEPT tcp -- 0.0.0.0/0 192.168.1.2 forwarding_wan all -- 0.0.0.0/0 0.0.0.0/0 zone_wan_REJECT all -- 0.0.0.0/0 0.0.0.0/0 Here's some routing info: root@OpenWrt:/# ip -f inet6 route 2001:470:1f0a:de5::/64 via :: dev 6in4-henet proto kernel metric 256 mtu 1280 advmss 1220 hoplimit 0 2001:470:1f0b:de5::/64 dev br-lan proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0 fe80::/64 dev eth0 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0 fe80::/64 dev br-lan proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0 fe80::/64 dev eth0.1 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0 fe80::/64 dev eth0.2 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0 fe80::/64 via :: dev 6in4-henet proto kernel metric 256 mtu 1280 advmss 1220 hoplimit 0 default dev 6in4-henet metric 1024 mtu 1280 advmss 1220 hoplimit 0 I have computers running windows 7 SP1 and openSUSE 11.3 and all of them have same problem. I also made a thread about this on HE's forum, but it seems that people there are out of ideas what to do.

    Read the article

  • Permissions denied on Apache rewrite module virtual host configuration

    - by sina
    All of a sudden I keep getting "Permissions denied" on apache 2 virtualhost once we moved it to its own conf file. I have tried all the suggestions I have found here but none work. Please can someone tell me what I am doing wrong? Thanks! <VirtualHost *:80> DocumentRoot "/var/www/mm" <Directory "/var/www/mm"> Options +Indexes +MultiViews +FollowSymLinks AllowOverride all Order deny,allow Allow from all AddType text/vnd.sun.j2me.app-descriptor .jad AddType application/vnd.rim.cod .cod </Directory> Alias /holdspace "/var/www/mm/holdspace" RewriteLogLevel 9 RewriteLog "/var/log/httpd/rewrite.log" RewriteEngine on # 91xx RewriteCond %{HTTP_USER_AGENT} BlackBerry.9105 RewriteRule ^/download/(.*) /holdspace/bb6-360x480/$1 [L] # 92xx RewriteCond %{HTTP_USER_AGENT} BlackBerry.9220 RewriteRule ^/download/(.*) /holdspace/bb5-320x240/$1 [L] Errors in error.log: [Wed May 28 12:44:58 2014] [error] [client 197.255.173.95] (13)Permission denied: access to /download/eazymoney.jad denied [Wed May 28 12:44:58 2014] [error] [client 197.255.173.95] (13)Permission denied: access to /error/HTTP_FORBIDDEN.html.var denied [Wed May 28 12:44:59 2014] [error] [client 197.255.173.95] (13)Permission denied: access to /favicon.ico denied [Wed May 28 12:44:59 2014] [error] [client 197.255.173.95] (13)Permission denied: access to /error/HTTP_FORBIDDEN.html.var denied [Wed May 28 12:44:59 2014] [error] [client 197.255.173.95] (13)Permission denied: access to /favicon.ico denied [Wed May 28 12:44:59 2014] [error] [client 197.255.173.95] (13)Permission denied: access to /error/HTTP_FORBIDDEN.html.var denied Errors in rewrite.log: 197.255.173.95 - - [28/May/2014:12:46:01 +0100] [41.203.113.103/sid#7fe41704ca28][rid#7fe417123378/initial/redir#1] (3) applying pattern '^/download/(.*)' to uri '/error/HTTP_FORBIDDEN.html.var' 197.255.173.95 - - [28/May/2014:12:46:01 +0100] [41.203.113.103/sid#7fe41704ca28][rid#7fe417123378/initial/redir#1] (3) applying pattern '^/download/(.*)' to uri '/error/HTTP_FORBIDDEN.html.var' Apache Configuration file: ServerTokens Prod ServerRoot "/etc/httpd" PidFile run/httpd.pid Timeout 60 KeepAlive Off MaxKeepAliveRequests 100 KeepAliveTimeout 15 <IfModule prefork.c> StartServers 8 MinSpareServers 5 MaxSpareServers 20 ServerLimit 256 MaxClients 256 MaxRequestsPerChild 4000 </IfModule> <IfModule worker.c> StartServers 4 MaxClients 300 MinSpareThreads 25 MaxSpareThreads 75 ThreadsPerChild 25 MaxRequestsPerChild 0 </IfModule> Listen 80 LoadModule auth_basic_module modules/mod_auth_basic.so LoadModule auth_digest_module modules/mod_auth_digest.so LoadModule authn_file_module modules/mod_authn_file.so LoadModule authn_alias_module modules/mod_authn_alias.so LoadModule authn_anon_module modules/mod_authn_anon.so LoadModule authn_dbm_module modules/mod_authn_dbm.so LoadModule authn_default_module modules/mod_authn_default.so LoadModule authz_host_module modules/mod_authz_host.so LoadModule authz_user_module modules/mod_authz_user.so LoadModule authz_owner_module modules/mod_authz_owner.so LoadModule authz_groupfile_module modules/mod_authz_groupfile.so LoadModule authz_dbm_module modules/mod_authz_dbm.so LoadModule authz_default_module modules/mod_authz_default.so LoadModule ldap_module modules/mod_ldap.so LoadModule authnz_ldap_module modules/mod_authnz_ldap.so LoadModule include_module modules/mod_include.so LoadModule log_config_module modules/mod_log_config.so LoadModule logio_module modules/mod_logio.so LoadModule env_module modules/mod_env.so LoadModule ext_filter_module modules/mod_ext_filter.so 
LoadModule mime_magic_module modules/mod_mime_magic.so LoadModule expires_module modules/mod_expires.so LoadModule deflate_module modules/mod_deflate.so LoadModule headers_module modules/mod_headers.so LoadModule usertrack_module modules/mod_usertrack.so LoadModule setenvif_module modules/mod_setenvif.so LoadModule mime_module modules/mod_mime.so LoadModule dav_module modules/mod_dav.so LoadModule status_module modules/mod_status.so LoadModule autoindex_module modules/mod_autoindex.so LoadModule info_module modules/mod_info.so LoadModule dav_fs_module modules/mod_dav_fs.so LoadModule vhost_alias_module modules/mod_vhost_alias.so LoadModule negotiation_module modules/mod_negotiation.so LoadModule dir_module modules/mod_dir.so LoadModule actions_module modules/mod_actions.so LoadModule speling_module modules/mod_speling.so LoadModule userdir_module modules/mod_userdir.so LoadModule alias_module modules/mod_alias.so LoadModule substitute_module modules/mod_substitute.so LoadModule rewrite_module modules/mod_rewrite.so LoadModule proxy_module modules/mod_proxy.so LoadModule proxy_ftp_module modules/mod_proxy_ftp.so LoadModule proxy_http_module modules/mod_proxy_http.so LoadModule proxy_ajp_module modules/mod_proxy_ajp.so LoadModule proxy_connect_module modules/mod_proxy_connect.so LoadModule cache_module modules/mod_cache.so LoadModule suexec_module modules/mod_suexec.so LoadModule disk_cache_module modules/mod_disk_cache.so LoadModule cgi_module modules/mod_cgi.so LoadModule version_module modules/mod_version.so Include conf.d/*.conf User apache Group apache ServerAdmin root@localhost ServerName sv001zma002.africa.int.myorg.com UseCanonicalName Off DocumentRoot "/var/www/html" <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory "/var/www/html"> Options FollowSymLinks AllowOverride None Order allow,deny Allow from all </Directory> <IfModule mod_userdir.c> UserDir disabled </IfModule> DirectoryIndex index.html index.html.var AccessFileName .htaccess <Files ~ "^\.ht"> Order allow,deny Deny from all Satisfy All </Files> TypesConfig /etc/mime.types DefaultType text/plain <IfModule mod_mime_magic.c> MIMEMagicFile conf/magic </IfModule> HostnameLookups Off ErrorLog logs/error_log LogLevel warn LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %b" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent CustomLog logs/access_log combined ServerSignature Off TraceEnable Off Alias /icons/ "/var/www/icons/" <Directory "/var/www/icons"> Options MultiViews FollowSymLinks AllowOverride None Order allow,deny Allow from all </Directory> <IfModule mod_dav_fs.c> DAVLockDB /var/lib/dav/lockdb </IfModule> ScriptAlias /cgi-bin/ "/var/www/cgi-bin/" <Directory "/var/www/cgi-bin"> AllowOverride None Options None Order allow,deny Allow from all </Directory> IndexOptions FancyIndexing VersionSort NameWidth=* HTMLTable Charset=UTF-8 AddIconByEncoding (CMP,/icons/compressed.gif) x-compress x-gzip AddIconByType (TXT,/icons/text.gif) text/* AddIconByType (IMG,/icons/image2.gif) image/* AddIconByType (SND,/icons/sound2.gif) audio/* AddIconByType (VID,/icons/movie.gif) video/* AddIcon /icons/binary.gif .bin .exe AddIcon /icons/binhex.gif .hqx AddIcon /icons/tar.gif .tar AddIcon /icons/world2.gif .wrl .wrl.gz .vrml .vrm .iv AddIcon /icons/compressed.gif .Z .z .tgz .gz .zip AddIcon /icons/a.gif .ps .ai .eps AddIcon /icons/layout.gif .html .shtml .htm .pdf AddIcon /icons/text.gif .txt AddIcon /icons/c.gif .c 
AddIcon /icons/p.gif .pl .py AddIcon /icons/f.gif .for AddIcon /icons/dvi.gif .dvi AddIcon /icons/uuencoded.gif .uu AddIcon /icons/script.gif .conf .sh .shar .csh .ksh .tcl AddIcon /icons/tex.gif .tex AddIcon /icons/bomb.gif core AddIcon /icons/back.gif .. AddIcon /icons/hand.right.gif README AddIcon /icons/folder.gif ^^DIRECTORY^^ AddIcon /icons/blank.gif ^^BLANKICON^^ DefaultIcon /icons/unknown.gif ReadmeName README.html HeaderName HEADER.html IndexIgnore .??* *~ *# HEADER* README* RCS CVS *,v *,t AddLanguage ca .ca AddLanguage cs .cz .cs AddLanguage da .dk AddLanguage de .de AddLanguage el .el AddLanguage en .en AddLanguage eo .eo AddLanguage es .es AddLanguage et .et AddLanguage fr .fr AddLanguage he .he AddLanguage hr .hr AddLanguage it .it AddLanguage ja .ja AddLanguage ko .ko AddLanguage ltz .ltz AddLanguage nl .nl AddLanguage nn .nn AddLanguage no .no AddLanguage pl .po AddLanguage pt .pt AddLanguage pt-BR .pt-br AddLanguage ru .ru AddLanguage sv .sv AddLanguage zh-CN .zh-cn AddLanguage zh-TW .zh-tw LanguagePriority en ca cs da de el eo es et fr he hr it ja ko ltz nl nn no pl pt pt-BR ru sv zh-CN zh-TW ForceLanguagePriority Prefer Fallback AddDefaultCharset UTF-8 AddType application/x-compress .Z AddType application/x-gzip .gz .tgz AddType application/x-x509-ca-cert .crt AddType application/x-pkcs7-crl .crl AddHandler type-map var AddType text/html .shtml AddOutputFilter INCLUDES .shtml ProxyErrorOverride On Alias /error/ "/var/www/error/" <IfModule mod_negotiation.c> <IfModule mod_include.c> <Directory "/var/www/error"> AllowOverride None Options IncludesNoExec AddOutputFilter Includes html AddHandler type-map var Order allow,deny Allow from all LanguagePriority en es de fr ForceLanguagePriority Prefer Fallback </Directory> ErrorDocument 400 /error/HTTP_BAD_REQUEST.html.var ErrorDocument 401 /error/HTTP_UNAUTHORIZED.html.var ErrorDocument 403 /error/HTTP_FORBIDDEN.html.var ErrorDocument 404 /error/HTTP_NOT_FOUND.html.var </IfModule> </IfModule> BrowserMatch "Mozilla/2" nokeepalive BrowserMatch "MSIE 4\.0b2;" nokeepalive downgrade-1.0 force-response-1.0 BrowserMatch "RealPlayer 4\.0" force-response-1.0 BrowserMatch "Java/1\.0" force-response-1.0 BrowserMatch "JDK/1\.0" force-response-1.0 BrowserMatch "Microsoft Data Access Internet Publishing Provider" redirect-carefully BrowserMatch "MS FrontPage" redirect-carefully BrowserMatch "^WebDrive" redirect-carefully BrowserMatch "^WebDAVFS/1.[0123]" redirect-carefully BrowserMatch "^gnome-vfs/1.0" redirect-carefully BrowserMatch "^XML Spy" redirect-carefully BrowserMatch "^Dreamweaver-WebDAV-SCM1" redirect-carefully ErrorDocument 400 "Bad Request"

    Read the article

  • Rails 2 and Nginx: https pages can't load css or js (but will load graphics)

    - by Max Williams
    ADMISSION: I've posted this same question on Stack Overflow, before realising it's probably better suited to Super User, but it kind of depends on the answer: if it turns out to be a problem in my nginx config, it's definitely Super User; if it turns out to be a problem in my Rails config (or code), then it's arguably Stack Overflow. I'm adding some https pages to my Rails site. In order to test it locally, I'm running my site under one mongrel_rails instance (on 3000) and nginx. I've managed to get my nginx config to the point where I can actually go to the https pages, and they load. Except, the javascript and css files all fail to load: looking in the Network tab in Chrome web tools, I can see that it is trying to load them via an https url. E.g. one of the non-working file urls is https://cmw-local.co.uk/stylesheets/cmw-logged-out.css?1383759216 I have these set up (or at least think I do) in my nginx config to redirect to the http versions of the static files. This seems to be working for graphics, but not for css and js files. If I click on this in the Network tab, it takes me to the above url, which redirects to the http version. So the redirect seems to be working in some sense, but not when the files are loaded by an https page. Like I say, I thought I had this covered in the second try_files directive in my config below, but maybe not. Can anyone see what I'm doing wrong? Thanks, Max. Here's my nginx config - sorry it's a bit lengthy! I think the error is likely to be in the first (ssl) server block: server { listen 443 ssl; keepalive_timeout 70; ssl_certificate /home/max/work/charanga/elearn_container/elearn/config/nginx/certs/max-local-server.crt; ssl_certificate_key /home/max/work/charanga/elearn_container/elearn/config/nginx/certs/max-local-server.key; ssl_session_cache shared:SSL:10m; ssl_session_timeout 10m; ssl_protocols SSLv3 TLSv1; ssl_ciphers RC4:HIGH:!aNULL:!MD5; ssl_prefer_server_ciphers on; server_name elearning.dev cmw-dev.co.uk cmw-dev.com cmw-nginx.co.uk cmw-local.co.uk; root /home/max/work/charanga/elearn_container/elearn; # ensure that we serve css, js, other statics when requested # as SSL, but if the files don't exist (i.e. 
any non /basket controller) # then redirect to the non-https version location / { try_files $uri @non-ssl-redirect; } # securely serve everything under /basket (/basket/checkout etc) # we need general too, because of the email/username checking location ~ ^/(basket|general|cmw/account/check_username_availability) { # make sure cached copies are revalidated once they're stale add_header Cache-Control "public, must-revalidate, proxy-revalidate"; # this serves Rails static files that exist without running # other rewrite tests try_files $uri @rails-ssl; expires 1h; } location @non-ssl-redirect { return 301 http://$host$request_uri; } location @rails-ssl { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; proxy_read_timeout 180; proxy_next_upstream off; proxy_pass http://127.0.0.1:3000; expires 0d; } } #upstream elrs { # server 127.0.0.1:3000; #} server { listen 80; server_name elearning.dev cmw-dev.co.uk cmw-dev.com cmw-nginx.co.uk cmw-local.co.uk; root /home/max/work/charanga/elearn_container/elearn; access_log /home/max/work/charanga/elearn_container/elearn/log/access.log; error_log /home/max/work/charanga/elearn_container/elearn/log/error.log debug; client_max_body_size 50M; index index.html index.htm; # gzip html, css & javascript, but don't gzip javascript for pre-SP2 MSIE6 (i.e. those *without* SV1 in their user-agent string) gzip on; gzip_http_version 1.1; gzip_vary on; gzip_comp_level 6; gzip_proxied any; gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; #text/html # make sure gzip does not lose large gzipped js or css files # see http://blog.leetsoft.com/2007/7/25/nginx-gzip-ssl gzip_buffers 16 8k; # Disable gzip for certain browsers. #gzip_disable "MSIE [1-6].(?!.*SV1)"; gzip_disable "MSIE [1-6]"; # blank gif like it's 1995 location = /images/blank.gif { empty_gif; } # don't serve files beginning with dots location ~ /\. 
{ access_log off; log_not_found off; deny all; } # we don't care if these are missing location = /robots.txt { log_not_found off; } location = /favicon.ico { log_not_found off; } location ~ affiliate.xml { log_not_found off; } location ~ copyright.xml { log_not_found off; } # convert urls with multiple slashes to a single / if ($request ~ /+ ) { rewrite ^(/)+(.*) /$2 break; } # X-Accel-Redirect # Don't tie up mongrels with serving the lesson zips or exes, let Nginx do it instead location /zips { internal; root /var/www/apps/e_learning_resource/shared/assets; } location /tmp { internal; root /; } location /mnt{ root /; } # resource library thumbnails should be served as usual location ~ ^/resource_library/.*/*thumbnail.jpg$ { if (!-f $request_filename) { rewrite ^(.*)$ /images/no-thumb.png break; } expires 1m; } # don't make Rails generate the dynamic routes to the dcr and swf, we'll do it here location ~ "lesson viewer.dcr" { rewrite ^(.*)$ "/assets/players/lesson viewer.dcr" break; } # we need this rule so we don't serve the older lessonviewer when the rule below is matched location = /assets/players/virgin_lesson_viewer/_cha5513/lessonViewer.swf { rewrite ^(.*)$ /assets/players/virgin_lesson_viewer/_cha5513/lessonViewer.swf break; } location ~ v6lessonViewer.swf { rewrite ^(.*)$ /assets/players/v6lessonViewer.swf break; } location ~ lessonViewer.swf { rewrite ^(.*)$ /assets/players/lessonViewer.swf break; } location ~ lgn111.dat { empty_gif; } # try to get autocomplete school names from memcache first, then # fallback to rails when we can't location /schools/autocomplete { set $memcached_key $uri?q=$arg_q; memcached_pass 127.0.0.1:11211; default_type text/html; error_page 404 =200 @rails; # 404 not really! Hand off to rails } location / { # make sure cached copies are revalidated once they're stale add_header Cache-Control "public, must-revalidate, proxy-revalidate"; # this serves Rails static files that exist without running other rewrite tests try_files $uri @rails; expires 1h; } location @rails { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; proxy_read_timeout 180; proxy_next_upstream off; proxy_pass http://127.0.0.1:3000; expires 0d; } }

    Read the article

  • Microsoft and jQuery

    - by Rick Strahl
    The jQuery JavaScript library has been steadily getting more popular and, with recent developments from Microsoft, jQuery is also getting ever more exposure on the ASP.NET platform, including now directly from Microsoft. jQuery is a lightweight, open source DOM manipulation library for JavaScript that has changed how many developers think about JavaScript. You can download it and find more information on jQuery at www.jquery.com. For me jQuery has had a huge impact on how I develop Web applications and was probably the main reason I went from dreading JavaScript development to actually looking forward to implementing client side JavaScript functionality. It has also had a profound impact on my JavaScript skill level, simply from seeing how the library accomplishes things (and often reviewing the terse but excellent source code). jQuery made an uncomfortable development platform (JavaScript + DOM) a joy to work on. Although jQuery is by no means the only JavaScript library out there, its ease of use, small size, huge community of plug-ins and pure usefulness have made it easily the most popular JavaScript library available today. As a long-time jQuery user, I've been excited to see the developments from Microsoft that are bringing jQuery to more ASP.NET developers and providing more integration with jQuery for ASP.NET's core features rather than relying on the ASP.NET AJAX library. Microsoft and jQuery – making Friends jQuery is an open source project, but in the last couple of years Microsoft has really thrown its weight behind this open source library as a supported component on the Microsoft platform. When I say supported I literally mean supported: Microsoft now offers actual tech support for jQuery as part of their Product Support Services (PSS), as jQuery integration has become part of several of the ASP.NET toolkits and ships in several of the default Web project templates in Visual Studio 2010. The ASP.NET MVC 3 framework (still in Beta) also uses jQuery for a variety of client side support features, including client side validation, and we can look forward to more integration of client side functionality via jQuery in both MVC and WebForms in the future. In other words, jQuery is becoming an optional but included component of the ASP.NET platform. PSS support means that support staff will answer jQuery related questions as part of any support incidents related to ASP.NET, which provides some peace of mind to corporate development shops that require end-to-end support from Microsoft. In addition to including jQuery and supporting it, Microsoft has also been getting involved in providing development resources for extending jQuery's functionality via plug-ins. Microsoft's last version of the Microsoft Ajax Library – which is the successor to the native ASP.NET AJAX Library – included some really cool functionality for client templates, databinding and localization. As it turns out, Microsoft has rebuilt most of that functionality using jQuery as the base API and provided jQuery plug-ins of these components. Very recently these three plug-ins were submitted and approved for inclusion in the official jQuery plug-in repository, and they have been taken over by the jQuery team for further improvements and maintenance. 
Even more surprising: the jQuery Templates component has actually been approved for inclusion in the next major update of the jQuery core, jQuery 1.5, which means it will become a native feature that doesn't require additional script files to be loaded. Imagine this – an open source contribution from Microsoft that has been accepted into a major open source project for a core feature improvement. Microsoft has come a long way indeed! What the Microsoft Involvement with jQuery means to you For Microsoft, jQuery support is a strategic decision that affects their direction in client side development, but nothing stopped you from using jQuery in your applications prior to Microsoft's official backing, and in fact a large chunk of developers did so readily before Microsoft's announcement. Official support from Microsoft brings a few benefits to developers, however. jQuery support in Visual Studio 2010 means built-in support for jQuery IntelliSense, automatically added jQuery scripts in many project types, and a common base for client side functionality that actually uses what most developers are already using. If you have already been using jQuery and were worried about straying from the Microsoft line and their internal Microsoft Ajax Library – worry no more. With official support and the change in direction towards jQuery, Microsoft is now following along with what most in the ASP.NET community had already been doing by using jQuery, which is likely the reason for Microsoft's shift in direction in the first place. ASP.NET AJAX and the Microsoft AJAX Library weren't bad technology – there was tons of useful functionality buried in these libraries. However, these libraries never got off the ground, mainly because early incarnations were squarely aimed at control/component developers rather than application developers. For all the functionality that these libraries provided to control developers, they lacked useful and easily usable application developer functionality that was accessible in day to day client side development. The result was that even though Microsoft shipped support for these tools in the box (in .NET 3.5 and 4.0), the Microsoft client libraries were largely ignored by the developer community (apart from the internal support in ASP.NET for things like the UpdatePanel and the ASP.NET AJAX Control Toolkit, and some third party vendors), opening the door for other client side solutions. Microsoft seems to be acknowledging developer choice in this case: many more developers were going down the jQuery path rather than using the Microsoft-built libraries, and there seems to be little sense in continuing development of a technology that largely goes unused by the majority of developers. Kudos to Microsoft for recognizing this and gracefully changing directions. Note that even though there will be no further development in the Microsoft client libraries, they will continue to be supported, so if you're using them in your applications there's no reason to start running for the exit in a panic and start re-writing everything with jQuery. Although that might be a reasonable choice in some cases, jQuery and the Microsoft libraries work well side by side, so you can leave existing solutions untouched even as you enhance them with jQuery. 
The Microsoft jQuery Plug-ins – Solid Core Features One of the most interesting developments in Microsoft's embracing of jQuery is that Microsoft has started contributing to jQuery via the standard mechanism available to all jQuery developers: by submitting plug-ins. Microsoft took some of the nicest new features of the unpublished Microsoft Ajax Client Library, re-wrote these components for jQuery and then submitted them as plug-ins to the jQuery plug-in repository. Accepted plug-ins get taken over by the jQuery team, and that's exactly what happened with the three plug-ins submitted by Microsoft, with the templating plug-in even slated to be published as part of the jQuery core in the next major release (1.5). The following plug-ins are provided by Microsoft: jQuery Templates – a client side template rendering engine jQuery Data Link – a client side databinder that can synchronize changes without code jQuery Globalization – provides formatting and conversion features for dates and numbers The first two are ports of functionality that was slated for the Microsoft Ajax Library, while the globalization library provides functionality that was already found in the original ASP.NET AJAX library. To me all three plug-ins address a pressing need in client side applications and provide functionality I've previously used in other incarnations, but with more complete implementations. Let's take a close look at these plug-ins. jQuery Templates http://api.jquery.com/category/plugins/templates/ Client side templating is a key component for building rich JavaScript applications in the browser. Templating on the client lets you avoid manually creating markup by building DOM nodes and injecting them individually into the document via code. Rather, you can create markup templates – similar to the way you create classic ASP server markup – and merge data into these templates to render HTML, which you can then inject into the document or use to replace existing content. Output from templates is rendered as a jQuery matched set and can then be easily inserted into the document as needed. Templating is key to minimizing client side code and reducing repeated rendering logic. Instead, a single template can be used in many places for updating and adding content to existing pages. Further, if you build pure AJAX interfaces that rely entirely on client rendering of the initial page content, templates allow you to use a single markup template to handle all rendering of each specific HTML section/element. I've used a number of different client rendering template engines with jQuery in the past, including jTemplates (a PHP style templating engine) and a modified version of John Resig's MicroTemplating engine which I built into my own set of libraries because it's such a commonly used feature in my client side applications. jQuery Templates adds a much richer templating model that allows for sub-templates and access to the data items. Like John Resig's original Micro Template engine, the core of the templating engine generates JavaScript code, which means that templates can include JavaScript code. To give you a basic idea of how templates work, imagine I have an application that downloads a set of stock quotes based on a symbol list and then displays them in the document. 
To do this you can create an 'item' template that describes how each of the quotes is rendered as a template inside of the document: <script id="stockTemplate" type="text/x-jquery-tmpl"> <div id="divStockQuote" class="errordisplay" style="width: 500px;"> <div class="label">Company:</div><div><b>${Company}(${Symbol})</b></div> <div class="label">Last Price:</div><div>${LastPrice}</div> <div class="label">Net Change:</div><div> {{if NetChange > 0}} <b style="color:green" >${NetChange}</b> {{else}} <b style="color:red" >${NetChange}</b> {{/if}} </div> <div class="label">Last Update:</div><div>${LastQuoteTimeString}</div> </div> </script> The 'template' is little more than HTML with some markup expressions inside of it that define the template language. Notice the embedded ${} expressions which reference data from the quote objects returned from an AJAX call on the server. You can embed any JavaScript or value expression in these template expressions. There are also a number of structural commands like {{if}} and {{each}} that provide for rudimentary logic inside of your templates, as well as commands ({{tmpl}} and {{wrap}}) for nesting templates. You can find more about the full set of markup expressions in the documentation. To load up this data you can use code like the following: <script type="text/javascript"> //var Proxy = new ServiceProxy("../PageMethods/PageMethodsService.asmx/"); $(document).ready(function () { $("#btnGetQuotes").click(GetQuotes); }); function GetQuotes() { var symbols = $("#txtSymbols").val().split(","); $.ajax({ url: "../PageMethods/PageMethodsService.asmx/GetStockQuotes", data: JSON.stringify({ symbols: symbols }), // parameter map type: "POST", // data has to be POSTed contentType: "application/json", timeout: 10000, dataType: "json", success: function (result) { var quotes = result.d; var jEl = $("#stockTemplate").tmpl(quotes); $("#quoteDisplay").empty().append(jEl); }, error: function (xhr, status) { alert(status + "\r\n" + xhr.responseText); } }); }; </script> In this case an ASMX AJAX service is called to retrieve the stock quotes. The service returns an array of quote objects. The result is returned as an object with the .d property (in Microsoft service style) that holds the actual array of quotes. The template is applied with: var jEl = $("#stockTemplate").tmpl(quotes); which selects the template script tag and uses the .tmpl() function to apply the data to it. The result is a jQuery matched set of elements that can then be appended to the quote display element in the page. The template is merged against an array in this example. When the data is an array the template is automatically applied to each array item. If you pass a single data item – like say a stock quote – the template works exactly the same way but is applied only once. Templates also have access to a $data item which provides the current data item and information about the template that is currently executing. This makes it possible to keep track of context within the template itself and also to pass context from a parent template to a child template, which is very powerful. Templates can be evaluated by using the template selector and calling the .tmpl() function on the jQuery matched set as shown above, or you can use the static $.tmpl() function to provide a template as a string. This allows you to dynamically create templates in code or – more likely – to load templates from the server via AJAX calls. 
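As a rough sketch of that string-based option, something along these lines should work; the field names, the #quoteDisplay target and the /templates/quote.tmpl URL are illustrative assumptions carried over from (or invented for) the example above rather than anything the plug-in prescribes:

// Minimal sketch: applying a template supplied as a string via the static $.tmpl() function.
// Field names and the #quoteDisplay target are assumed from the example above.
function renderQuotes(markup, quotes) {
    // $.tmpl(markup, data) compiles the markup string and returns a jQuery matched set;
    // when data is an array, the template is applied once per array item.
    var rendered = $.tmpl(markup, quotes);
    $("#quoteDisplay").empty().append(rendered);
}

// Inline string template:
var quoteMarkup = "<div><b>${Company} (${Symbol})</b>: ${LastPrice}</div>";
renderQuotes(quoteMarkup, [{ Company: "Acme", Symbol: "ACME", LastPrice: 12.34 }]);

// Or fetch the template markup from the server first (the URL is hypothetical):
$.get("/templates/quote.tmpl", function (tmplText) {
    // 'quotes' would come from an AJAX call as in the example above:
    // renderQuotes(tmplText, quotes);
});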
In short, there are options. The above shows off some of the basics, but there's much more functionality available in the template engine. Check the documentation link for more information and links to additional examples. The plug-in download also comes with a number of examples that demonstrate functionality. jQuery Templates will become a native component in jQuery Core 1.5, so it's definitely worthwhile checking out the engine today and getting familiar with this interface. As much as I'm stoked about templating becoming part of the jQuery core, because it's such an integral part of many applications, there are also a couple of shortcomings in the current incarnation: Lack of Error Handling Currently if you embed an expression that is invalid it's simply not rendered. No error is rendered into the template, nor do the various template functions throw errors, which leaves finding bugs as a runtime exercise. I would like some mechanism – optional if possible – to get error info about what is failing in a template when it's rendered. No String Output Templates are always rendered into a jQuery matched set and there's no way that I can see to render directly to a string. String output can be useful for debugging as well as opening up templating for creating non-HTML string output. Limited JavaScript Access Unlike John Resig's original MicroTemplating Engine, which was entirely based on JavaScript code generation, these templates are limited to a few structured commands that can 'execute'. There's no arbitrary code execution inside of template script, which means you're limited to calling expressions available on global objects or on the data item passed in. This may or may not be a big deal depending on the complexity of your template logic. Error handling has been discussed quite a bit and it's likely there will be some solution to that particular issue by the time jQuery Templates ships. The others are relatively minor issues, but something to think about anyway. jQuery Data Link http://api.jquery.com/category/plugins/data-link/ jQuery Data Link provides the ability to do two-way data binding between input controls and an underlying object's properties. The typical scenario is linking a textbox to a property of an object, having the object updated when the text in the textbox is changed, and having the textbox change when the value in the object or the entire object changes. The plug-in also supports converter functions that can be applied to provide the conversion logic from a string to some other value (typically necessary for mapping textbox string input to, say, a number property) and to apply additional formatting and calculations. In theory this sounds great; however, in reality this plug-in has some serious usability issues. Using the plug-in you can do things like the following to bind data: person = { firstName: "rick", lastName: "strahl"}; $(document).ready( function() { // provide for two-way linking of inputs $("form").link(person); // bind to non-input elements explicitly $("#objFirst").link(person, { firstName: { name: "objFirst", convertBack: function (value, source, target) { $(target).text(value); } } }); $("#objLast").link(person, { lastName: { name: "objLast", convertBack: function (value, source, target) { $(target).text(value); } } }); }); This code hooks up two-way linking between a couple of textboxes on the page and the person object. 
The first line in the .ready() handler maps the object to form fields whose names match the property names on the object. Note that .link() does NOT bind values into the textboxes when you call .link() – changes are mapped only when values change and you move out of the field. Strike one. The two following commands allow manual binding of values to specific DOM elements, which is effectively a one-way bind. You specify the object and then an explicit mapping where name is an ID in the document. The converter is required to explicitly assign the value to the element. Strike two. You can also detect changes to the underlying object and cause updates to the bound input elements. Unfortunately the syntax to do this is not very natural, as you have to rely on the jQuery data object. Updating an object's properties and getting change notification looks like this: function updateFirstName() { $(person).data("firstName", person.firstName + " (code updated)"); } This works fine in causing any linked fields to be updated. In the bindings above both the firstName input field and the objFirst DOM element get updated. But the syntax requires you to use a jQuery .data() call for each property change to ensure that the changes are tracked properly. Really? Sure, you're binding through multiple layers of abstraction now, but how is that better than just manually assigning values? The code savings (if any) are going to be minimal. As much as I would like to have a WPF/Silverlight/Observable-like binding mechanism in client script, this plug-in doesn't help much towards that goal in its current incarnation. While you can bind values, the 'binder' is too limited to be really useful. If initial values can't be assigned from the mappings, you're going to end up duplicating work loading the data using some other mechanism. There's no easy way to re-bind a different object altogether, since updates trigger only through the .data members. Finally, any non-input elements have to be bound via code that's fairly verbose and frankly may be more voluminous than what you might write by hand for manual binding and unbinding. Two-way binding can be very useful, but it has to be easy and, most importantly, natural. If it's more work to hook up a binding than to write a couple of lines that do the binding/unbinding, this sort of thing helps very little in most scenarios. In talking to some of the developers, the feature set for Data Link is not complete and they are still soliciting input for features and functionality. If you have ideas on how you want this feature to be more useful, get involved and post your recommendations. As it stands, it looks to me like this component needs a lot of love to become useful. For this component to really provide value, bindings need to be easily refreshable and need to work at the object level, not just the property level. It seems to me we would be much better served by a model binder object that can perform these binding/unbinding tasks in bulk, rather than a tool where each link has to be mapped first. I also find the choice of creating a jQuery plug-in questionable – it seems a standalone object – albeit one that relies on the jQuery library – would provide a more intuitive interface than the current forcing of options onto a plug-in style interface. Out of the three Microsoft-created components this is by far the least useful and least polished implementation at this point. 
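To recap the pattern being criticized here, a minimal sketch of the full round trip looks roughly like this, using the same hypothetical person object and firstName/lastName inputs as above; it is only an illustration of the behavior described, not a recommended design:

// Minimal sketch of the Data Link round trip discussed above.
var person = { firstName: "rick", lastName: "strahl" };

$(document).ready(function () {
    // Two-way link: editing the inputs updates person when you leave a field,
    // but note that .link() does NOT push the initial values into the inputs.
    $("form").link(person);

    // Pushing a change from code back to linked elements has to go through
    // jQuery's .data() API rather than a plain property assignment:
    $(person).data("firstName", "Rick");

    // A plain assignment updates the object, but no linked element is notified:
    person.lastName = "Strahl";
});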
jQuery Globalization http://github.com/jquery/jquery-global Globalization in JavaScript applications often gets short shrift, and part of the reason for this is that JavaScript natively has little support for formatting and parsing numbers and dates. There are a number of JavaScript libraries out there that provide some support for globalization, but most are limited to a particular portion of globalization. As .NET developers we're fairly spoiled by the richness of the APIs provided in the framework, and when dealing with client development one really notices the lack of these features. While you may not necessarily need to localize your application, the globalization plug-in also helps with some basic tasks for non-localized applications: dealing with formatting and parsing of dates and time values. Dates in particular are problematic in JavaScript as there are no formatters whatsoever except the .toString() method, which outputs a verbose and next to useless long string. With the globalization plug-in you get a good chunk of the formatting and parsing functionality that the .NET framework provides on the server. You can write code like the following, for example, to format numbers and dates: var date = new Date(); var output = $.format(date, "MMM. dd, yy") + "\r\n" + $.format(date, "d") + "\r\n" + // 10/25/2010 $.format(1222.32213, "N2") + "\r\n" + $.format(1222.33, "c") + "\r\n"; alert(output); This becomes even more useful if you combine it with templates, which can also include any JavaScript expressions. Assuming the globalization plug-in is loaded, you can create template expressions that use the $.format function. Here's the template I used earlier for the stock quote again, with a couple of formats applied: <script id="stockTemplate" type="text/x-jquery-tmpl"> <div id="divStockQuote" class="errordisplay" style="width: 500px;"> <div class="label">Company:</div><div><b>${Company}(${Symbol})</b></div> <div class="label">Last Price:</div> <div>${$.format(LastPrice,"N2")}</div> <div class="label">Net Change:</div><div> {{if NetChange > 0}} <b style="color:green" >${NetChange}</b> {{else}} <b style="color:red" >${NetChange}</b> {{/if}} </div> <div class="label">Last Update:</div> <div>${$.format(LastQuoteTime,"MMM dd, yyyy")}</div> </div> </script> There are also parsing methods that can parse dates and numbers from strings easily: alert($.parseDate("25.10.2010")); alert($.parseInt("12.222")); // de-DE uses . for thousands separators As you can see, culture specific options are taken into account when parsing. The globalization plug-in provides rich support for a variety of locales: you can get a list of all available cultures, query cultures for culture items (like currency symbol, separators etc.), and use localized string names for all calendar related items (days of week, months), all generated off of .NET's supported locales. In short, you get much of the same functionality that you already might be using in .NET on the server side. The plug-in includes a huge number of locales and a Globalization.all.min.js file that contains the text defaults for each of these locales, as well as small locale specific script files that define each locale's specific settings. It's highly recommended that you NOT use the huge globalization file that includes all locales, but rather add script references to only those languages you explicitly care about. Overall this plug-in is a welcome helper. 
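For instance, assuming only the German culture script has been referenced, culture-aware formatting and parsing might look roughly like the following sketch; the $.preferCulture() call is an assumption about how this version of the plug-in selects the active culture, so verify it against the plug-in documentation before relying on it:

// Rough sketch of locale-aware formatting/parsing; assumes the de-DE culture
// script file has been referenced instead of Globalization.all.min.js, and
// assumes $.preferCulture() is the culture-selection call in this plug-in version.
$.preferCulture("de-DE");

var price = $.format(1222.33, "c");               // currency formatted per de-DE conventions
var stamp = $.format(new Date(), "MMM dd, yyyy"); // month names come out localized

// Parsing honors the active culture too: "." is the thousands separator in
// de-DE, so this yields the number 12222, and dates parse as dd.MM.yyyy.
var qty  = $.parseInt("12.222");
var when = $.parseDate("25.10.2010");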
Even if you use it with a single locale (like en-US) and do no other localization, you’ll gain solid support for number and date formatting which is a vital feature of many applications. Changes for Microsoft It’s good to see Microsoft coming out of its shell and away from the ‘not-built-here’ mentality that has been so pervasive in the past. It’s especially good to see it applied to jQuery – a technology that has stood in drastic contrast to Microsoft’s own internal efforts in terms of design, usage model and… popularity. It’s great to see that Microsoft is paying attention to what customers prefer to use and supporting the customer sentiment – even if it meant drastically changing course of policy and moving into a more open and sharing environment in the process. The additional jQuery support that has been introduced in the last two years certainly has made lives easier for many developers on the ASP.NET platform. It’s also nice to see Microsoft submitting proposals through the standard jQuery process of plug-ins and getting accepted for various very useful projects. Certainly the jQuery Templates plug-in is going to be very useful to many especially since it will be baked into the jQuery core in jQuery 1.5. I hope we see more of this type of involvement from Microsoft in the future. Kudos!© Rick Strahl, West Wind Technologies, 2005-2010Posted in jQuery  ASP.NET  

    Read the article

  • Using FiddlerCore to capture HTTP Requests with .NET

    - by Rick Strahl
    Over the last few weeks I’ve been working on my Web load testing utility West Wind WebSurge. One of the key components of a load testing tool is the ability to capture URLs effectively so that you can play them back later under load. One of the options in WebSurge for capturing URLs is to use its built-in capture tool which acts as an HTTP proxy to capture any HTTP and HTTPS traffic from most Windows HTTP clients, including Web Browsers as well as standalone Windows applications and services. To make this happen, I used Eric Lawrence’s awesome FiddlerCore library, which provides most of the functionality of his desktop Fiddler application, all rolled into an easy to use library that you can plug into your own applications. FiddlerCore makes it almost too easy to capture HTTP content! For WebSurge I needed to capture all HTTP traffic in order to capture the full HTTP request – URL, headers and any content posted by the client. The result of what I ended up creating is this semi-generic capture form: In this post I’m going to demonstrate how easy it is to use FiddlerCore to build this HTTP Capture Form.  If you want to jump right in here are the links to get Telerik’s Fiddler Core and the code for the demo provided here. FiddlerCore Download FiddlerCore on NuGet Show me the Code (WebSurge Integration code from GitHub) Download the WinForms Sample Form West Wind Web Surge (example implementation in live app) Note that FiddlerCore is bound by a license for commercial usage – see license.txt in the FiddlerCore distribution for details. Integrating FiddlerCore FiddlerCore is a library that simply plugs into your application. You can download it from the Telerik site and manually add the assemblies to your project, or you can simply install the NuGet package via:       PM> Install-Package FiddlerCore The library consists of the FiddlerCore.dll as well as a couple of support libraries (CertMaker.dll and BCMakeCert.dll) that are used for installing SSL certificates. I’ll have more on SSL captures and certificate installation later in this post. But first let’s see how easy it is to use FiddlerCore to capture HTTP content by looking at how to build the above capture form. Capturing HTTP Content Once the library is installed it’s super easy to hook up Fiddler functionality. Fiddler includes a number of static class methods on the FiddlerApplication object that can be called to hook up callback events as well as actual start monitoring HTTP URLs. In the following code directly lifted from WebSurge, I configure a few filter options on Form level object, from the user inputs shown on the form by assigning it to a capture options object. In the live application these settings are persisted configuration values, but in the demo they are one time values initialized and set on the form. 
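Before that code, a quick note on the capture options object it populates. The excerpt doesn't show that class itself, but judging from the properties referenced below (IgnoreResources, ProcessId, CaptureDomain and the two exclusion lists) it is presumably a simple settings holder along these lines; the class name, defaults and list contents are guesses for illustration rather than WebSurge's actual code:
using System.Collections.Generic;

// Hypothetical sketch of the capture options object used below;
// property names inferred from usage, everything else assumed.
public class UrlCaptureConfiguration
{
    public bool IgnoreResources { get; set; }

    // 0 means "don't filter by process"
    public int ProcessId { get; set; }

    public string CaptureDomain { get; set; } = string.Empty;

    // URL fragments treated as static resources when IgnoreResources is true
    public List<string> ExtensionFilterExclusions { get; set; } =
        new List<string> { ".css", ".js", ".png", ".jpg", ".gif", ".ico", ".woff" };

    // Additional URL substrings to skip entirely
    public List<string> UrlFilterExclusions { get; set; } = new List<string>();
}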
Once these options are set, I hook up the AfterSessionComplete event to capture every URL that passes through the proxy after the request is completed and start up the Proxy service:void Start() { if (tbIgnoreResources.Checked) CaptureConfiguration.IgnoreResources = true; else CaptureConfiguration.IgnoreResources = false; string strProcId = txtProcessId.Text; if (strProcId.Contains('-')) strProcId = strProcId.Substring(strProcId.IndexOf('-') + 1).Trim(); strProcId = strProcId.Trim(); int procId = 0; if (!string.IsNullOrEmpty(strProcId)) { if (!int.TryParse(strProcId, out procId)) procId = 0; } CaptureConfiguration.ProcessId = procId; CaptureConfiguration.CaptureDomain = txtCaptureDomain.Text; FiddlerApplication.AfterSessionComplete += FiddlerApplication_AfterSessionComplete; FiddlerApplication.Startup(8888, true, true, true); } The key lines for FiddlerCore are just the last two lines of code that include the event hookup code as well as the Startup() method call. Here I only hook up to the AfterSessionComplete event but there are a number of other events that hook various stages of the HTTP request cycle you can also hook into. Other events include BeforeRequest, BeforeResponse, RequestHeadersAvailable, ResponseHeadersAvailable and so on. In my case I want to capture the request data and I actually have several options to capture this data. AfterSessionComplete is the last event that fires in the request sequence and it’s the most common choice to capture all request and response data. I could have used several other events, but AfterSessionComplete is one place where you can look both at the request and response data, so this will be the most common place to hook into if you’re capturing content. The implementation of AfterSessionComplete is responsible for capturing all HTTP request headers and it looks something like this:private void FiddlerApplication_AfterSessionComplete(Session sess) { // Ignore HTTPS connect requests if (sess.RequestMethod == "CONNECT") return; if (CaptureConfiguration.ProcessId > 0) { if (sess.LocalProcessID != 0 && sess.LocalProcessID != CaptureConfiguration.ProcessId) return; } if (!string.IsNullOrEmpty(CaptureConfiguration.CaptureDomain)) { if (sess.hostname.ToLower() != CaptureConfiguration.CaptureDomain.Trim().ToLower()) return; } if (CaptureConfiguration.IgnoreResources) { string url = sess.fullUrl.ToLower(); var extensions = CaptureConfiguration.ExtensionFilterExclusions; foreach (var ext in extensions) { if (url.Contains(ext)) return; } var filters = CaptureConfiguration.UrlFilterExclusions; foreach (var urlFilter in filters) { if (url.Contains(urlFilter)) return; } } if (sess == null || sess.oRequest == null || sess.oRequest.headers == null) return; string headers = sess.oRequest.headers.ToString(); var reqBody = sess.GetRequestBodyAsString(); // if you wanted to capture the response //string respHeaders = session.oResponse.headers.ToString(); //var respBody = session.GetResponseBodyAsString(); // replace the HTTP line to inject full URL string firstLine = sess.RequestMethod + " " + sess.fullUrl + " " + sess.oRequest.headers.HTTPVersion; int at = headers.IndexOf("\r\n"); if (at < 0) return; headers = firstLine + "\r\n" + headers.Substring(at + 1); string output = headers + "\r\n" + (!string.IsNullOrEmpty(reqBody) ? 
reqBody + "\r\n" : string.Empty) + Separator + "\r\n\r\n"; BeginInvoke(new Action<string>((text) => { txtCapture.AppendText(text); UpdateButtonStatus(); }), output); } The code starts by filtering out some requests based on the CaptureOptions I set before the capture is started. These options/filters are applied when requests actually come in. This is very useful to help narrow down the requests that are captured for playback based on options the user picked. I find it useful to limit requests to a certain domain for captures, as well as filtering out some request types like static resources – images, css, scripts etc. This is of course optional, but I think it's a common scenario and WebSurge makes good use of this feature. AfterSessionComplete, like other FiddlerCore events, provides a Session object parameter which contains all the request and response details. There are oRequest and oResponse objects to hold their respective data. In my case I'm interested in the raw request headers and body only; as you can see in the commented code, you can also retrieve the response headers and body. Here the code captures the request headers and body and simply appends the output to the textbox on the screen. Note that the Fiddler events are asynchronous, so in order to display the content in the UI they have to be marshaled back to the UI thread with BeginInvoke, which here simply takes the generated headers and appends them to the existing textbox text on the form. As each request is processed, the headers are captured and appended to the bottom of the textbox, resulting in a session HTTP capture in the format that WebSurge internally supports, which is basically raw request headers with a customized first HTTP header line that includes the full URL rather than a server-relative URL. When the capture is done the user can either copy the raw HTTP session to the clipboard, or directly save it to file. This raw capture format is the same format WebSurge and also Fiddler use to import/export request data. While this code is application specific, it demonstrates the kind of logic that you can easily apply to the request capture process, which is one of the reasons why FiddlerCore is so powerful. You get to choose what content you want to look at as part of your own application logic, and you can then decide how to capture or use that data as part of your application. The actual captured data in this case is only a string. The user can edit the data by hand or, in the case of WebSurge, save it to disk and automatically open the captured session as a new load test. Stopping the FiddlerCore Proxy Finally, to stop capturing requests you simply disconnect the event handler and call the FiddlerApplication.Shutdown() method:void Stop() { FiddlerApplication.AfterSessionComplete -= FiddlerApplication_AfterSessionComplete; if (FiddlerApplication.IsStarted()) FiddlerApplication.Shutdown(); } As you can see, adding HTTP capture functionality to an application is very straightforward. FiddlerCore offers tons of features I'm not even touching on here – I suspect basic captures are the most common scenario, but a lot of different things can be done with FiddlerCore's simple API interface. Sky's the limit! The source code for this sample capture form (WinForms) is provided as part of this article. 
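Stripped of the WebSurge-specific filtering and WinForms plumbing, the same capture loop boils down to very little code. Here's a minimal console sketch; it assumes only the FiddlerCore package reference and uses the same Startup()/Shutdown() calls and AfterSessionComplete event shown above (SSL decryption is left off to avoid the certificate setup covered next):
using System;
using Fiddler;

class MinimalCapture
{
    static void Main()
    {
        FiddlerApplication.AfterSessionComplete += session =>
        {
            // Skip the HTTPS tunnel setup requests
            if (session.RequestMethod == "CONNECT")
                return;

            // Dump method, full URL and raw request headers to the console
            Console.WriteLine("{0} {1}", session.RequestMethod, session.fullUrl);
            Console.WriteLine(session.oRequest.headers.ToString());
        };

        // Listen on port 8888, register as the system proxy, no SSL decryption
        FiddlerApplication.Startup(8888, true, false, false);

        Console.WriteLine("Capturing... press Enter to stop.");
        Console.ReadLine();

        if (FiddlerApplication.IsStarted())
            FiddlerApplication.Shutdown();
    }
}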
Adding Fiddler Certificates with FiddlerCore One of the sticking points in West Wind WebSurge has been that if you wanted to capture HTTPS/SSL traffic, you needed to have the full version of Fiddler and have HTTPS decryption enabled. Essentially you had to use Fiddler to configure HTTPS decryption and the associated installation of the Fiddler local client certificate that is used for local decryption of incoming SSL traffic. While this works just fine, requiring to have Fiddler installed and then using a separate application to configure the SSL functionality isn’t ideal. Fortunately FiddlerCore actually includes the tools to register the Fiddler Certificate directly using FiddlerCore. Why does Fiddler need a Certificate in the first Place? Fiddler and FiddlerCore are essentially HTTP proxies which means they inject themselves into the HTTP conversation by re-routing HTTP traffic to a special HTTP port (8888 by default for Fiddler) and then forward the HTTP data to the original client. Fiddler injects itself as the system proxy in using the WinInet Windows settings  which are the same settings that Internet Explorer uses and that are configured in the Windows and Internet Explorer Internet Settings dialog. Most HTTP clients running on Windows pick up and apply these system level Proxy settings before establishing new HTTP connections and that’s why most clients automatically work once Fiddler – or FiddlerCore/WebSurge are running. For plain HTTP requests this just works – Fiddler intercepts the HTTP requests on the proxy port and then forwards them to the original port (80 for HTTP and 443 for SSL typically but it could be any port). For SSL however, this is not quite as simple – Fiddler can easily act as an HTTPS/SSL client to capture inbound requests from the server, but when it forwards the request to the client it has to also act as an SSL server and provide a certificate that the client trusts. This won’t be the original certificate from the remote site, but rather a custom local certificate that effectively simulates an SSL connection between the proxy and the client. If there is no custom certificate configured for Fiddler the SSL request fails with a certificate validation error. The key for this to work is that a custom certificate has to be installed that the HTTPS client trusts on the local machine. For a much more detailed description of the process you can check out Eric Lawrence’s blog post on Certificates. If you’re using the desktop version of Fiddler you can install a local certificate into the Windows certificate store. Fiddler proper does this from the Options menu: This operation does several things: It installs the Fiddler Root Certificate It sets trust to this Root Certificate A new client certificate is generated for each HTTPS site monitored Certificate Installation with FiddlerCore You can also provide this same functionality using FiddlerCore which includes a CertMaker class. 
Using CertMaker is straight forward to use and it provides an easy way to create some simple helpers that can install and uninstall a Fiddler Root certificate:public static bool InstallCertificate() { if (!CertMaker.rootCertExists()) { if (!CertMaker.createRootCert()) return false; if (!CertMaker.trustRootCert()) return false; } return true; } public static bool UninstallCertificate() { if (CertMaker.rootCertExists()) { if (!CertMaker.removeFiddlerGeneratedCerts(true)) return false; } return true; } InstallCertificate() works by first checking whether the root certificate is already installed and if it isn’t goes ahead and creates a new one. The process of creating the certificate is a two step process – first the actual certificate is created and then it’s moved into the certificate store to become trusted. I’m not sure why you’d ever split these operations up since a cert created without trust isn’t going to be of much value, but there are two distinct steps. When you trigger the trustRootCert() method, a message box will pop up on the desktop that lets you know that you’re about to trust a local private certificate. This is a security feature to ensure that you really want to trust the Fiddler root since you are essentially installing a man in the middle certificate. It’s quite safe to use this generated root certificate, because it’s been specifically generated for your machine and thus is not usable from external sources, the only way to use this certificate in a trusted way is from the local machine. IOW, unless somebody has physical access to your machine, there’s no useful way to hijack this certificate and use it for nefarious purposes (see Eric’s post for more details). Once the Root certificate has been installed, FiddlerCore/Fiddler create new certificates for each site that is connected to with HTTPS. You can end up with quite a few temporary certificates in your certificate store. To uninstall you can either use Fiddler and simply uncheck the Decrypt HTTPS traffic option followed by the remove Fiddler certificates button, or you can use FiddlerCore’s CertMaker.removeFiddlerGeneratedCerts() which removes the root cert and any of the intermediary certificates Fiddler created. Keep in mind that when you uninstall you uninstall the certificate for both FiddlerCore and Fiddler, so use UninstallCertificate() with care and realize that you might affect the Fiddler application’s operation by doing so as well. When to check for an installed Certificate Note that the check to see if the root certificate exists is pretty fast, while the actual process of installing the certificate is a relatively slow operation that even on a fast machine takes a few seconds. Further the trust operation pops up a message box so you probably don’t want to install the certificate repeatedly. Since the check for the root certificate is fast, you can easily put a call to InstallCertificate() in any capture startup code – in which case the certificate installation only triggers when a certificate is in fact not installed. Personally I like to make certificate installation explicit – just like Fiddler does, so in WebSurge I use a small drop down option on the menu to install or uninstall the SSL certificate:   This code calls the InstallCertificate and UnInstallCertificate functions respectively – the experience with this is similar to what you get in Fiddler with the extra dialog box popping up to prompt confirmation for installation of the root certificate. 
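If you'd rather not expose a menu option, the lazy approach described above amounts to a guard at the top of the capture startup path. A rough sketch, assuming it lives in the same class as the Start() method and the static InstallCertificate() helper shown earlier:
void StartWithSslCapture()
{
    // CertMaker.rootCertExists() is fast, so InstallCertificate() is cheap to
    // call on every start; the slow create/trust work (and its confirmation
    // dialog) only happens when no Fiddler root certificate is installed yet.
    if (!InstallCertificate())
        throw new InvalidOperationException(
            "The Fiddler root certificate could not be installed; HTTPS traffic cannot be decrypted.");

    FiddlerApplication.AfterSessionComplete += FiddlerApplication_AfterSessionComplete;

    // Same Startup() overload as before, this time with SSL decryption enabled
    FiddlerApplication.Startup(8888, true, true, true);
}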
Once the cert is installed you can then capture SSL requests. There's a gotcha however… Gotcha: FiddlerCore Certificates don't stick by Default When I originally tried to use the Fiddler certificate installation I ran into an odd problem. I was able to install the certificate and immediately after installation was able to capture HTTPS requests. Then I would exit the application, come back in, try the same HTTPS capture again, and it would fail due to a missing certificate. CertMaker.rootCertExists() would return false after every restart, and if I re-installed the certificate a new certificate would get added to the certificate store, resulting in a bunch of duplicated root certificates with different keys. What the heck? CertMaker and BcMakeCert create non-sticky Certificates It turns out that FiddlerCore by default uses different components from what the full version of Fiddler uses. Fiddler uses a Windows utility called MakeCert.exe to create the Fiddler Root certificate. FiddlerCore however installs the CertMaker.dll and BCMakeCert.dll assemblies, which use a different crypto library (Bouncy Castle) for certificate creation than MakeCert.exe, which uses the Windows Crypto API. The assemblies provide support for non-Windows operation for Fiddler under Mono, as well as support for some non-Windows certificate platforms like iOS and Android for decryption. The bottom line is that the FiddlerCore-provided Bouncy Castle assemblies are not sticky by default, as the certificates created with them are not cached the way they are in Fiddler proper. To get certificates to 'stick' you have to explicitly cache the certificates in Fiddler's internal preferences. A cache-aware version of InstallCertificate looks something like this:public static bool InstallCertificate() { if (!CertMaker.rootCertExists()) { if (!CertMaker.createRootCert()) return false; if (!CertMaker.trustRootCert()) return false; App.Configuration.UrlCapture.Cert = FiddlerApplication.Prefs.GetStringPref("fiddler.certmaker.bc.cert", null); App.Configuration.UrlCapture.Key = FiddlerApplication.Prefs.GetStringPref("fiddler.certmaker.bc.key", null); } return true; } public static bool UninstallCertificate() { if (CertMaker.rootCertExists()) { if (!CertMaker.removeFiddlerGeneratedCerts(true)) return false; } App.Configuration.UrlCapture.Cert = null; App.Configuration.UrlCapture.Key = null; return true; } In this code I store the Fiddler cert and private key in application configuration settings that are stored with the application settings (the App.Configuration.UrlCapture object). These settings automatically persist when WebSurge is shut down. The values are read out of Fiddler's internal preferences store, which is set after a new certificate has been created. Likewise I clear out the configuration settings when the certificate is uninstalled. In order for these settings to be used you have to also load the configuration settings into the Fiddler preferences *before* a call to rootCertExists() is made. 
I do this in the capture form’s constructor:public FiddlerCapture(StressTestForm form) { InitializeComponent(); CaptureConfiguration = App.Configuration.UrlCapture; MainForm = form; if (!string.IsNullOrEmpty(App.Configuration.UrlCapture.Cert)) { FiddlerApplication.Prefs.SetStringPref("fiddler.certmaker.bc.key", App.Configuration.UrlCapture.Key); FiddlerApplication.Prefs.SetStringPref("fiddler.certmaker.bc.cert", App.Configuration.UrlCapture.Cert); }} This is kind of a drag to do and not documented anywhere that I could find, so hopefully this will save you some grief if you want to work with the stock certificate logic that installs with FiddlerCore. MakeCert provides sticky Certificates and the same functionality as Fiddler But there’s actually an easier way. If you want to skip the above Fiddler preference configuration code in your application you can choose to distribute MakeCert.exe instead of certmaker.dll and bcmakecert.dll. When you use MakeCert.exe, the certificates settings are stored in Windows so they are available without any custom configuration inside of your application. It’s easier to integrate and as long as you run on Windows and you don’t need to support iOS or Android devices is simply easier to deal with. To integrate into your project, you can remove the reference to CertMaker.dll (and the BcMakeCert.dll assembly) from your project. Instead copy MakeCert.exe into your output folder. To make sure MakeCert.exe gets pushed out, include MakeCert.exe in your project and set the Build Action to None, and Copy to Output Directory to Copy if newer. Note that the CertMaker.dll reference in the project has been removed and on disk the files for Certmaker.dll, as well as the BCMakeCert.dll files on disk. Keep in mind that these DLLs are resources of the FiddlerCore NuGet package, so updating the package may end up pushing those files back into your project. Once MakeCert.exe is distributed FiddlerCore checks for it first before using the assemblies so as long as MakeCert.exe exists it’ll be used for certificate creation (at least on Windows). Summary FiddlerCore is a pretty sweet tool, and it’s absolutely awesome that we get to plug in most of the functionality of Fiddler right into our own applications. A few years back I tried to build this sort of functionality myself for an app and ended up giving up because it’s a big job to get HTTP right – especially if you need to support SSL. FiddlerCore now provides that functionality as a turnkey solution that can be plugged into your own apps easily. The only downside is FiddlerCore’s documentation for more advanced features like certificate installation which is pretty sketchy. While for the most part FiddlerCore’s feature set is easy to work with without any documentation, advanced features are often not intuitive to gleam by just using Intellisense or the FiddlerCore help file reference (which is not terribly useful). While Eric Lawrence is very responsive on his forum and on Twitter, there simply isn’t much useful documentation on Fiddler/FiddlerCore available online. If you run into trouble the forum is probably the first place to look and then ask a question if you can’t find the answer. The best documentation you can find is Eric’s Fiddler Book which covers a ton of functionality of Fiddler and FiddlerCore. The book is a great reference to Fiddler’s feature set as well as providing great insights into the HTTP protocol. 
The second half of the book that gets into the innards of HTTP is an excellent read for anybody who wants to know more about some of the more arcane aspects and special behaviors of HTTP – it’s well worth the read. While the book has tons of information in a very readable format, it’s unfortunately not a great reference as it’s hard to find things in the book and because it’s not available online you can’t electronically search for the great content in it. But it’s hard to complain about any of this given the obvious effort and love that’s gone into this awesome product for all of these years. A mighty big thanks to Eric Lawrence  for having created this useful tool that so many of us use all the time, and also to Telerik for picking up Fiddler/FiddlerCore and providing Eric the resources to support and improve this wonderful tool full time and keeping it free for all. Kudos! Resources FiddlerCore Download FiddlerCore NuGet Fiddler Capture Sample Form Fiddler Capture Form in West Wind WebSurge (GitHub) Eric Lawrence’s Fiddler Book© Rick Strahl, West Wind Technologies, 2005-2014Posted in .NET  HTTP

    Read the article

  • Integrating HTML into Silverlight Applications

    - by dwahlin
    Looking for a way to display HTML content within a Silverlight application? If you haven’t tried doing that before it can be challenging at first until you know a few tricks of the trade.  Being able to display HTML is especially handy when you’re required to display RSS feeds (with embedded HTML), SQL Server Reporting Services reports, PDF files (not actually HTML – but the techniques discussed will work), or other HTML content.  In this post I'll discuss three options for displaying HTML content in Silverlight applications and describe how my company is using these techniques in client applications. Displaying HTML Overlays If you need to display HTML over a Silverlight application (such as an RSS feed containing HTML data in it) you’ll need to set the Silverlight control’s windowless parameter to true. This can be done using the object tag as shown next: <object data="data:application/x-silverlight-2," type="application/x-silverlight-2" width="100%" height="100%"> <param name="source" value="ClientBin/HTMLAndSilverlight.xap"/> <param name="onError" value="onSilverlightError" /> <param name="background" value="white" /> <param name="minRuntimeVersion" value="4.0.50401.0" /> <param name="autoUpgrade" value="true" /> <param name="windowless" value="true" /> <a href="http://go.microsoft.com/fwlink/?LinkID=149156&v=4.0.50401.0" style="text-decoration:none"> <img src="http://go.microsoft.com/fwlink/?LinkId=161376" alt="Get Microsoft Silverlight" style="border-style:none"/> </a> </object> By setting the control to “windowless” you can overlay HTML objects by using absolute positioning and other CSS techniques. Keep in mind that on Windows machines the windowless setting can result in a performance hit when complex animations or HD video are running since the plug-in content is displayed directly by the browser window. It goes without saying that you should only set windowless to true when you really need the functionality it offers. For example, if I want to display my blog’s RSS content on top of a Silverlight application I could set windowless to true and create a user control that grabbed the content and output it using a DataList control: <style type="text/css"> a {text-decoration:none;font-weight:bold;font-size:14pt;} </style> <div style="margin-top:10px; margin-left:10px;margin-right:5px;"> <asp:DataList ID="RSSDataList" runat="server" DataSourceID="RSSDataSource"> <ItemTemplate> <a href='<%# XPath("link") %>'><%# XPath("title") %></a> <br /> <%# XPath("description") %> <br /> </ItemTemplate> </asp:DataList> <asp:XmlDataSource ID="RSSDataSource" DataFile="http://weblogs.asp.net/dwahlin/rss.aspx" XPath="rss/channel/item" CacheDuration="60" runat="server" /> </div> The user control can then be placed in the page hosting the Silverlight control as shown below. This example adds a Close button, additional content to display in the overlay window and the HTML generated from the user control. 
<div id="RSSDiv"> <div style="background-color:#484848;border:1px solid black;height:35px;width:100%;"> <img alt="Close Button" align="right" src="Images/Close.png" onclick="HideOverlay();" style="cursor:pointer;" /> </div> <div style="overflow:auto;width:800px;height:565px;"> <div style="float:left;width:100px;height:103px;margin-left:10px;margin-top:5px;"> <img src="http://weblogs.asp.net/blogs/dwahlin/dan2008.jpg" style="border:1px solid Gray" /> </div> <div style="float:left;width:300px;height:103px;margin-top:5px;"> <a href="http://weblogs.asp.net/dwahlin" style="margin-left:10px;font-size:20pt;">Dan Wahlin's Blog</a> </div> <br /><br /><br /> <div style="clear:both;margin-top:20px;"> <uc:BlogRoller ID="BlogRoller" runat="server" /> </div> </div> </div> Of course, we wouldn’t want the RSS HTML content to be shown until requested. Once it’s requested the absolute position of where it should show above the Silverlight control can be set using standard CSS styles. The following ID selector named #RSSDiv handles hiding the overlay div shown above and determines where it will be display on the screen. #RSSDiv { background-color:White; position:absolute; top:100px; left:300px; width:800px; height:600px; border:1px solid black; display:none; } Now that the HTML content to display above the Silverlight control is set, how can we show it as a user clicks a HyperlinkButton or other control in the application? Fortunately, Silverlight provides an excellent HTML bridge that allows direct access to content hosted within a page. The following code shows two JavaScript functions that can be called from Siverlight to handle showing or hiding HTML overlay content. The two functions rely on jQuery (http://www.jQuery.com) to make it easy to select HTML objects and manipulate their properties: function ShowOverlay() { rssDiv.css('display', 'block'); } function HideOverlay() { rssDiv.css('display', 'none'); } Calling the ShowOverlay function is as simple as adding the following code into the Silverlight application within a button’s Click event handler: private void OverlayHyperlinkButton_Click(object sender, RoutedEventArgs e) { HtmlPage.Window.Invoke("ShowOverlay"); } The result of setting the Silverlight control’s windowless parameter to true and showing the HTML overlay content is shown in the following screenshot:   Thinking Outside the Box to Show HTML Content Setting the windowless parameter to true may not be a viable option for some Silverlight applications or you may simply want to go about showing HTML content a different way. The next technique I’ll show takes advantage of simple HTML, CSS and JavaScript code to handle showing HTML content while a Silverlight application is running in the browser. Keep in mind that with Silverlight’s HTML bridge feature you can always pop-up HTML content in a new browser window using code similar to the following: System.Windows.Browser.HtmlPage.Window.Navigate( new Uri("http://silverlight.net"), "_blank"); For this example I’ll demonstrate how to hide the Silverlight application while maximizing a container div containing the HTML content to show. This allows HTML content to take up the full screen area of the browser without having to set windowless to true and when done right can make the user feel like they never left the Silverlight application. 
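Before looking at the markup, note how little Silverlight-side code this approach needs: one HtmlPage.Window.Invoke() call in each direction. A minimal sketch (the wrapper method names are mine; the JavaScript functions they call are the ones defined below):
// Assumes: using System.Windows.Browser; and that the host page defines the
// ShowJobPlanIFrame/HideJobPlanIFrame functions shown in the next snippets.
public void ShowHtmlContent(string url)
{
    // Hide the Silverlight app and expand the iframe container to full screen
    HtmlPage.Window.Invoke("ShowJobPlanIFrame", url);
}

public void HideHtmlContent()
{
    // Tear the iframe down and restore the Silverlight app to full width
    HtmlPage.Window.Invoke("HideJobPlanIFrame");
}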
The following HTML shows several div elements that are used to display HTML within the same browser window as the Silverlight application: <div id="JobPlanDiv"> <div style="vertical-align:middle"> <img alt="Close Button" align="right" src="Images/Close.png" onclick="HideJobPlanIFrame();" style="cursor:pointer;" /> </div> <div id="JobPlan_IFrame_Container" style="height:95%;width:100%;margin-top:37px;"></div> </div> The JobPlanDiv element acts as a container for two other divs that handle showing a close button and hosting an iframe that will be added dynamically at runtime. JobPlanDiv isn’t visible when the Silverlight application loads due to the following ID selector added into the page: #JobPlanDiv { position:absolute; background-color:#484848; overflow:hidden; left:0; top:0; height:100%; width:100%; display:none; } When the HTML content needs to be shown or hidden the JavaScript functions shown next can be used: var jobPlanIFrameID = 'JobPlan_IFrame'; var slHost = null; var jobPlanContainer = null; var jobPlanIFrameContainer = null; var rssDiv = null; $(document).ready(function () { slHost = $('#silverlightControlHost'); jobPlanContainer = $('#JobPlanDiv'); jobPlanIFrameContainer = $('#JobPlan_IFrame_Container'); rssDiv = $('#RSSDiv'); }); function ShowJobPlanIFrame(url) { jobPlanContainer.css('display', 'block'); $('<iframe id="' + jobPlanIFrameID + '" src="' + url + '" style="height:100%;width:100%;" />') .appendTo(jobPlanIFrameContainer); slHost.css('width', '0%'); } function HideJobPlanIFrame() { jobPlanContainer.css('display', 'none'); $('#' + jobPlanIFrameID).remove(); slHost.css('width', '100%'); } ShowJobPlanIFrame() handles showing the JobPlanDiv div and adding an iframe into it dynamically. Once JobPlanDiv is shown, the Silverlight control host has its width set to a value of 0% to allow the control to stay alive while making it invisible to the user. I found that this technique works better across multiple browsers as opposed to manipulating the Silverlight control host div’s display or visibility properties. Now that you’ve seen the code to handle showing and hiding the HTML content area, let’s switch focus to the Silverlight application. As a user clicks on a link such as “View Report” the ShowJobPlanIFrame() JavaScript function needs to be called. The following code handles that task: private void ReportHyperlinkButton_Click(object sender, RoutedEventArgs e) { ShowBrowser(_BaseUrl + "/Report.aspx"); } public void ShowBrowser(string url) { HtmlPage.Window.Invoke("ShowJobPlanIFrame", url); } Any URL can be passed into the ShowBrowser() method which handles invoking the JavaScript function. This includes standard web pages or even PDF files. We’ve used this technique frequently with our SmartPrint control (http://www.smartwebcontrols.com) which converts Silverlight screens into PDF documents and displays them. Here’s an example of the content generated:   Silverlight 4’s WebBrowser Control Both techniques shown to this point work well when Silverlight is running in-browser but not so well when it’s running out-of-browser since there’s no host page that you can access using the HTML bridge. Fortunately, Silverlight 4 provides a WebBrowser control that can be used to perform the same functionality quite easily. We’re currently using it in client applications to display PDF documents, SSRS reports and standard HTML content. Using the WebBrowser control simplifies the application quite a bit since no JavaScript is required if the application only runs out-of-browser. 
Here’s a simple example of defining the WebBrowser control in XAML. I typically define it in MainPage.xaml when a Silverlight Navigation template is used to create the project so that I can re-use the functionality across multiple screens. <Grid x:Name="WebBrowserGrid" HorizontalAlignment="Stretch" VerticalAlignment="Stretch" Visibility="Collapsed"> <StackPanel HorizontalAlignment="Stretch" VerticalAlignment="Stretch"> <Border Background="#484848" HorizontalAlignment="Stretch" Height="40"> <Image x:Name="WebBrowserImage" Width="100" Height="33" Cursor="Hand" HorizontalAlignment="Right" Source="/HTMLAndSilverlight;component/Assets/Images/Close.png" MouseLeftButtonDown="WebBrowserImage_MouseLeftButtonDown" /> </Border> <WebBrowser x:Name="JobPlanReportWebBrowser" HorizontalAlignment="Stretch" VerticalAlignment="Stretch" /> </StackPanel> </Grid> Looking through the XAML you can see that a close image is defined along with the WebBrowser control. Because the URL that the WebBrowser should navigate to isn’t known at design time no value is assigned to the control’s Source property. If the XAML shown above is left “as is” you’ll find that any HTML content assigned to the WebBrowser doesn’t display properly. This is due to no height or width being set on the control. To handle this issue the following code is added into the XAML’s code-behind file to dynamically determine the height and width of the page and assign it to the WebBrowser. This is done by handling the SizeChanged event. void MainPage_SizeChanged(object sender, SizeChangedEventArgs e) { WebBrowserGrid.Height = JobPlanReportWebBrowser.Height = ActualHeight; WebBrowserGrid.Width = JobPlanReportWebBrowser.Width = ActualWidth; } When the user wants to view HTML content they click a button which executes the code shown in next: public void ShowBrowser(string url) { if (Application.Current.IsRunningOutOfBrowser) { JobPlanReportWebBrowser.NavigateToString("<html><body><iframe src='" + url + "' style='width:100%;height:97%;' /></body></html>"); WebBrowserGrid.Visibility = Visibility.Visible; } else { HtmlPage.Window.Invoke("ShowJobPlanIFrame", url); } } private void WebBrowserImage_MouseLeftButtonDown(object sender, MouseButtonEventArgs e) { WebBrowserGrid.Visibility = Visibility.Collapsed; }   Looking through the code you’ll see that it checks to see if the Silverlight application is running out-of-browser and then either displays the WebBrowser control or runs the JavaScript function discussed earlier. Although the WebBrowser control’s Source property could be assigned the URI of the page to navigate to, by assigning HTML content using the NavigateToString() method and adding an iframe, content can be shown from any site including cross-domain sites. This is especially handy when you need to grab a page from a reporting site that’s in a different domain than the Silverlight application. Here’s an example of viewing  PDF file inside of an out-of-browser application. The first image shows the application running out-of-browser before the user clicks a PDF HyperlinkButton.  The second image shows the PDF being displayed.   While there are certainly other techniques that can be used, the ones shown here have worked well for us in different applications and provide the ability to display HTML content in-browser or out-of-browser. Feel free to add a comment if you have another tip or trick you like to use when working with HTML content in Silverlight applications.   
Download Code Sample   For more information about onsite, online and video training, mentoring and consulting solutions for .NET, SharePoint or Silverlight please visit http://www.thewahlingroup.com.

    Read the article

  • Tools and Utilities for the .NET Developer

    - by mbcrump
    Tweet this list! Add a link to my site to your bookmarks to quickly find this page again! Add me to twitter! This is a list of the tools/utilities that I use to do my job/hobby. I wanted this page to load fast and contain information that only you care about. If I have missed a tool that you like, feel free to contact me and I will add it to the list. Also, this list took a lot of time to complete. Please do not steal my work, if you like the page then please link back to my site. I will keep the links/information updated as new tools/utilities are created.  Windows/.NET Development – This is a list of tools that any Windows/.NET developer should have in his bag. I have used at some point in my career everything listed on this page and below is the tools worth keeping. Name Description License AnkhSVN Subversion support for Visual Studio. It also works with VS2010. Free Aurora XAML Designer One of the best XAML creation tools available. Has a ton of built in templates that you can copy/paste into VS2010. COST/Trial BeyondCompare Beyond Compare 3 is the ideal tool for comparing files and folders on your Windows or Linux system. Visualize changes in your code and carefully reconcile them. COST/Trial BuildIT Automated Task Tool Its main purpose is to automate tasks, whether it is the final packaging of a product, an automated daily build, maybe sending out a mailing list, even backing-up files. Free C Sharper for VB Convert VB to C#. COST CLRProfiler Analyze and improve the behavior of your .NET app. Free CodeRush Direct competitor to ReSharper, contains similar feature. This is one of those decide for yourself. COST/Trial Disk2VHD Disk2vhd is a utility that creates VHD (Virtual Hard Disk - Microsoft's Virtual Machine disk format) versions of physical disks for use in Microsoft Virtual PC or Microsoft Hyper-V virtual machines (VMs). Free Eazfuscator.NET Is a free obfuscator for .NET. The main purpose is to protect intellectual property of software. Free EQATEC Profiler Make your .NET app run faster. No source code changes are needed. Just point the profiler to your app, run the modified code, and get a visual report. COST Expression Studio 3/4 Comes with Web, Blend, Sketch Flow and more. You can create websites, produce beautiful XAML and more. COST/Trial Expresso The award-winning Expresso editor is equally suitable as a teaching tool for the beginning user of regular expressions or as a full-featured development environment for the experienced programmer or web designer with an extensive knowledge of regular expressions. Free Fiddler Fiddler is a web debugging proxy which logs all HTTP(s) traffic between your computer and the internet. Free Firebug Powerful Web development tool. If you build websites, you will need this. Free FxCop FxCop is an application that analyzes managed code assemblies (code that targets the .NET Framework common language runtime) and reports information about the assemblies, such as possible design, localization, performance, and security improvements. Free GAC Browser and Remover Easy way to remove multiple assemblies from the GAC. Assemblies registered by programs like Install Shield can also be removed. Free GAC Util The Global Assembly Cache tool allows you to view and manipulate the contents of the global assembly cache and download cache. Free HelpScribble Help Scribble is a full-featured, easy-to-use help authoring tool for creating help files from start to finish. 
You can create Win Help (.hlp) files, HTML Help (.chm) files, a printed manual and online documentation (on a web site) all from the same Help Scribble project. COST/Trial IETester IETester is a free Web Browser that allows you to have the rendering and JavaScript engines of IE9 preview, IE8, IE7 IE 6 and IE5.5 on Windows 7, Vista and XP, as well as the installed IE in the same process. Free iTextSharp iText# (iTextSharp) is a port of the iText open source java library for PDF generation written entirely in C# for the .NET platform. Use the iText mailing list to get support. Free Kaxaml Kaxaml is a lightweight XAML editor that gives you a "split view" so you can see both your XAML and your rendered content. Free LINQPad LinqPad lets you interactively query databases in a LINQ. Free Linquer Many programmers are familiar with SQL and will need a help in the transition to LINQ. Sometimes there are complicated queries to be written and Linqer can help by converting SQL scripts to LINQ. COST/Trial LiquidXML Liquid XML Studio 2010 is an advanced XML developers toolkit and IDE, containing all the tools needed for designing and developing XML schema and applications. COST/Trial Log4Net log4net is a tool to help the programmer output log statements to a variety of output targets. log4net is a port of the excellent log4j framework to the .NET runtime. We have kept the framework similar in spirit to the original log4j while taking advantage of new features in the .NET runtime. For more information on log4net see the features document. Free Microsoft Web Platform Installer The Microsoft Web Platform Installer 2.0 (Web PI) is a free tool that makes getting the latest components of the Microsoft Web Platform, including Internet Information Services (IIS), SQL Server Express, .NET Framework and Visual Web Developer easy. Free Mono Development Don't have Visual Studio - no problem! This is an open Source C# and .NET development environment for Linux, Windows, and Mac OS X Free Net Mass Downloader While it’s great that Microsoft has released the .NET Reference Source Code, you can only get it one file at a time while you’re debugging. If you’d like to batch download it for reading or to populate the cache, you’d have to write a program that instantiated and called each method in the Framework Class Library. Fortunately, .NET Mass Downloader comes to the rescue! Free nMap Nmap ("Network Mapper") is a free and open source (license) utility for network exploration or security auditing. Many systems and network administrators also find it useful for tasks such as network inventory, managing service upgrade schedules, and monitoring host or service uptime. Free NoScript (Firefox add-in) The NoScript Firefox extension provides extra protection for Firefox, Flock, Seamonkey and other Mozilla-based browsers: this free, open source add-on allows JavaScript, Java and Flash and other plug-ins to be executed only by trusted web sites of your choice (e.g. your online bank), and provides the most powerful Anti-XSS protection available in a browser. Free NotePad 2 Notepad2, a fast and light-weight Notepad-like text editor with syntax highlighting. This program can be run out of the box without installation, and does not touch your system's registry. Free PageSpy PageSpy is a small add-on for Internet Explorer that allows you to select any element within a webpage, select an option in the context menu, and view detailed information about both the coding behind the page and the element you selected. 
Free Phrase Express PhraseExpress manages your frequently used text snippets in customizable categories for quick access. Free PowerGui PowerGui is a free community for PowerGUI, a graphical user interface and script editor for Microsoft Windows PowerShell! Free Powershell Comes with Win7, but you can automate tasks by using the .NET Framework. Great for network admins. Free Process Explorer Ever wondered which program has a particular file or directory open? Now you can find out. Process Explorer shows you information about which handles and DLLs processes have opened or loaded. Also, included in the SysInterals Suite. Free Process Monitor Process Monitor is an advanced monitoring tool for Windows that shows real-time file system, Registry and process/thread activity. Free Reflector Explore and analyze compiled .NET assemblies, viewing them in C#, Visual Basic, and IL. This is an Essential for any .NET developer. Free Regular Expression Library Stuck on a Regular Expression but you think someone has already figured it out? Chances are they have. Free Regulator Regulator makes Regular Expressions easy. This is a must have for a .NET Developer. Free RenameMaestro RenameMaestro is probably the easiest batch file renamer you'll find to instantly rename multiple files COST ReSharper The one program that I cannot live without. Supports VS2010 and offers simple refactoring, code analysis/assistance/cleanup/templates. One of the few applications that is worth the $$$. COST/Trial ScrewTurn Wiki ScrewTurn Wiki allows you to create, manage and share wikis. A wiki is a collaboratively-edited, information-centered website: the most famous is Wikipedia. Free SharpDevelop What is #develop? SharpDevelop is a free IDE for C# and VB.NET projects on Microsoft's .NET platform. Free Show Me The Template Show Me The Template is a tool for exploring the templates, be their data, control or items panel, that comes with the controls built into WPF for all 6 themes. Free SnippetCompiler Compiles code snippets without opening Visual Studio. It does not support .NET 4. Free SQL Prompt SQL Prompt is a plug-in that increases how fast you can work with SQL. It provides code-completion for SQL server, reformatting, db schema information and snippets. Awesome! COST/Trial SQLinForm SQLinForm is an automatic SQL code formatter for all major databases  including ORACLE, SQL Server, DB2, UDB, Sybase, Informix, PostgreSQL, Teradata, MySQL, MS Access etc. with over 70 formatting options. COST/OnlineFree SSMS Tools SSMS Tools Pack is an add-in for Microsoft SQL Server Management Studio (SSMS) including SSMS Express. Free Storm STORM is a free and open source tool for testing web services. Free Telerik Code Convertor Convert code from VB to C Sharp and Vice Versa. Free TurtoiseSVN TortoiseSVN is a really easy to use Revision control / version control / source control software for Windows.Since it's not an integration for a specific IDE you can use it with whatever development tools you like. Free UltraEdit UltraEdit is the ideal text, HTML and hex editor, and an advanced PHP, Perl, Java and JavaScript editor for programmers. UltraEdit is also an XML editor including a tree-style XML parser. An industry-award winner, UltraEdit supports disk-based 64-bit file handling (standard) on 32-bit Windows platforms (Windows 2000 and later). COST/Trial Virtual Windows XP Comes with some W7 version and allows you to run WinXP along side W7. Free VirtualBox Virtualization by Sun Microsystems. You can virtualize Windows, Linux and more. 
Free Visual Log Parser SQL queries against a variety of log files and other system data sources. Free WinMerge WinMerge is an Open Source differencing and merging tool for Windows. WinMerge can compare both folders and files, presenting differences in a visual text format that is easy to understand and handle. Free Wireshark Wireshark is one of the best network protocol analyzer's for Unix and windows. This has been used several times to get me out of a bind. Free XML Notepad 07 Old, but still one of my favorite XML viewers. Free Productivity Tools – This is the list of tools that I use to save time or quickly navigate around Windows. Name Description License AutoHotKey Automate almost anything by sending keystrokes and mouse clicks. You can write a mouse or keyboard macro by hand or use the macro recorder. Free CLCL CLCL is clipboard caching utility. Free Ditto Ditto is an extension to the standard windows clipboard. It saves each item placed on the clipboard allowing you access to any of those items at a later time. Ditto allows you to save any type of information that can be put on the clipboard, text, images, html, custom formats, ..... Free Evernote Remember everything from notes to photos. It will synch between computers/devices. Free InfoRapid Inforapid is a search tool that will display all you search results in a html like browser. If you click on a word in that browser, it will start another search to the word you clicked on. Handy if you want to trackback something to it's true origin. The word you looked for will be highlighted in red. Clicking on the red word will open the containing file in a text based viewer. Clicking on any word in the opened document will start another search on that word. Free KatMouse The prime purpose of the KatMouse utility is to enhance the functionality of mice with a scroll wheel, offering 'universal' scrolling: moving the mouse wheel will scroll the window directly beneath the mouse cursor (not the one with the keyboard focus, which is default on Windows OSes). This is a major increase in the usefulness of the mouse wheel. Free ScreenR Instant Screencast with nothing to download. Works with Mac or PC and free. Free Start++ Start++ is an enhancement for the Start Menu in Windows Vista. It also extends the Run box and the command-line with customizable commands.  For example, typing "w Windows Vista" will take you to the Windows Vista page on Wikipedia! Free Synergy Synergy lets you easily share a single mouse and keyboard between multiple computers with different operating systems, each with its own display, without special hardware. It's intended for users with multiple computers on their desk since each system uses its own monitor(s). Free Texter Texter lets you define text substitution hot strings that, when triggered, will replace hotstring with a larger piece of text. By entering your most commonly-typed snippets of text into Texter, you can save countless keystrokes in the course of the day. Free Total Commander File handling, FTP, Archive handling and much more. Even works with Win3.11. COST/Trial Available Wizmouse WizMouse is a mouse enhancement utility that makes your mouse wheel work on the window currently under the mouse pointer, instead of the currently focused window. This means you no longer have to click on a window before being able to scroll it with the mouse wheel. This is a far more comfortable and practical way to make use of the mouse wheel. Free Xmarks Bookmark sync and search between computers. 
Free

General Utilities – This is a list for power users or anyone who wants more out of Windows. I usually install a majority of these whenever I get a new system.

µTorrent – A lightweight, efficient BitTorrent client for Windows or Mac with many features. I use this for downloading LEGAL media. (Free)
Audacity – Free, open source software for recording and editing sounds; available for Mac OS X, Microsoft Windows, GNU/Linux, and other operating systems. (Free)
AVast Free – Free antivirus. (Free)
CDBurnerXP – Burns CDs and DVDs, including Blu-ray and HD-DVD; can also create and burn ISOs; multilanguage interface. (Free)
CDex – Extracts digital audio CDs into MP3/WAV. (Free)
Combofix – A legitimate spyware remover created by sUBs; scans a computer for known malware and spyware (SurfSideKick, QooLogic, Look2Me, or any combination of these) and removes them. (Free)
CPU-Z – Provides information about the main devices in your system. (Free)
Cropper – A screen capture utility written in C#. It makes it fast and easy to grab parts of your screen or a web site, including text and images, and is great for documentation that needs screenshots of your application or site. (Free)
DropBox – Drag and drop files to sync between computers. (Free)
DVDFab – Converts/copies DVDs and Blu-rays to different formats such as MP4, MKV, and AVI. (Cost/trial available)
FastStone Capture – A powerful, lightweight, full-featured screen capture tool; capture and annotate windows, objects, menus, the full screen, rectangular/freehand regions, and even scrolling windows/web pages. (Free)
ffdshow – A DirectShow decoding filter for DivX, XviD, H.264, FLV1, WMV, MPEG-1, MPEG-2, and MPEG-4 movies. (Free)
FileZilla – A fast, reliable cross-platform FTP, FTPS, and SFTP client with lots of useful features and an intuitive interface; a server version is also available. (Free)
FireFox – Web browser; do you really need an explanation? (Free)
FireGestures – A customizable mouse gestures extension that executes commands and user scripts with five types of gestures. (Free)
FoxIt Reader – A lightweight PDF viewer. Install it with the advanced setting or it will install a toolbar and set up some shortcuts. (Free)
gSynchIt – Syncs Gmail and Outlook; even supports Outlook 2010 32/64-bit. (Cost/trial available)
Hulu Desktop – At home or in a hotel, this has replaced my cable/satellite subscription. (Free)
ImgBurn – A lightweight CD/DVD/HD-DVD/Blu-ray burning application that everyone should have in their toolkit. (Free)
InfraRecorder – A free CD/DVD burning solution for Microsoft Windows with a wide range of powerful features, an easy to use interface, and Windows Explorer integration. (Free)
KeePass – A free, open source password manager that helps you manage your passwords securely. (Free)
LastPass – Another password manager; synchronizes between browsers, automatic form filling, and more. (Free)
Live Essentials – One download, lots of programs, including Mail, Live Writer, Movie Maker, and more. (Free)
MonitorES – A small Windows utility that turns off the monitor when you lock your machine; it also pauses running media programs and sets your IM status to "Away" (or a custom message), restoring everything when you return. (Free)
mRemote – A full-featured, multi-tab remote connections manager. (Free)
Open Office – OpenOffice.org 3 is the leading open-source office suite for word processing, spreadsheets, presentations, graphics, databases, and more. It stores data in an international open standard format, reads and writes files from other common office packages, and can be used completely free of charge for any purpose. (Free)
Paint.NET – A simple, intuitive, and innovative user interface for editing photos. (Free)
Picasa – Free photo editing software from Google that makes your pictures look great. (Free)
Pidgin – An easy to use, free chat client; connect to AIM, MSN, Yahoo, and more chat networks all at once. (Free)
PING – A live Linux ISO based on the excellent Linux From Scratch (LFS) documentation; burn it to CD and boot it, or integrate it into a PXE/RIS environment. (Free)
Putty – An SSH and telnet client, developed originally by Simon Tatham for the Windows platform. (Free)
Revo Uninstaller – Helps you uninstall software and remove unwanted programs, even when the "Windows Add or Remove Programs" applet cannot; a faster and more powerful alternative to the built-in applet. (Free)
Security Essentials – Microsoft Security Essentials is a free consumer anti-malware solution. (Free)
SetupVirtualCloneDrive – Virtual CloneDrive works and behaves just like a physical CD/DVD drive, but exists only virtually; point it at an .ISO file and it appears in Windows Explorer as a drive. (Free)
Shark 007 Codec Pack – Plays just about any file format; also includes my W7 Media Playlist Generator. (Free)
Snagit 9 – Screen capture on steroids; add arrows, captions, and more to any screenshot. (Cost/trial available)
SysinternalsSuite – Go ahead and download the entire Sysinternals suite; I have mentioned multiple programs from it already. (Free)
TeraCopy – A compact program designed to copy and move files at the maximum possible speed, with a lot of features. (Free for home use)
TrueCrypt – Free open-source disk encryption software for Windows 7/Vista/XP, Mac OS X, and Linux. (Free)
TweetDeck – A fully featured Twitter client. (Free)
UltraVNC – Displays the screen of another computer (via internet or network) on your own screen and lets you control it with your mouse and keyboard, as if you were sitting in front of it. (Free)
Unlocker – Unlocks locked files. Pretty simple, right? (Free)
VLC Media Player – A highly portable multimedia player and framework capable of reading most audio and video formats. (Free)
Windows 7 Media Playlist – This program is special to my heart because I wrote it; it has been mentioned on podcasts and various websites. It quickly creates WVX video playlists for Windows Media Center. (Free)
WinRAR – A powerful archive manager; back up data, reduce the size of email attachments, decompress RAR, ZIP, and other downloaded files, and create new RAR and ZIP archives. (Cost/trial available)

Blogging – I use the following for my blog.

Insert Code for Windows Live Writer – Formats a snippet of text in a number of programming languages such as C#, HTML, MSH, JavaScript, Visual Basic, and T-SQL. (Free)
LiveWriter – Included in Live Essentials, but the ultimate in Windows blogging. (Free)
PasteAsVSCode – A plug-in for Windows Live Writer that pastes clipboard content as Visual Studio code, preserving syntax highlighting, indentation, and background color; converts the RTF output by Visual Studio into HTML. (Free)

Desktop Management – The list below represents the best in Windows desktop management.

7 Stacks – Lets you have "stacks" of icons in your taskbar. (Free)
Executor – A multi-purpose launcher and a more advanced, customizable version of Windows Run. (Free)
Fences – Helps you organize your desktop and can hide your icons when they are not in use. (Free)
RocketDock – A smoothly animated, alpha-blended application launcher with a clean interface for dropping shortcuts on; each item is completely customizable, so there is no end to what you can add and launch from the dock. (Free)
WindowsTab – Tabbing is an essential feature of modern web browsers; Window Tabs brings the productivity of tabbed window management to all of your desktop applications. (Free)

    Read the article

  • E-Business Suite Technology Sessions at OpenWorld 2012

    - by Max Arderius
    Oracle OpenWorld 2012 is almost here! We're looking forward to updating you on our products, strategy, and roadmaps. This year, the E-Business Suite Applications Technology Group (ATG) will participate in 25 speaker sessions, two Meet the Experts round-table discussions, five demoground booths and seven Special Interest Group meetings as guest speakers. We hope to see you at our sessions.  Please join us to hear the latest news and connect with senior ATG development staff. Here's a downloadable listing of all Applications Technology Group-related sessions with times and locations: FOCUS ON Oracle E-Business Suite - Applications Tools and Technology (PDF) General Sessions GEN8474 - Oracle E-Business Suite - Strategy, Update, and RoadmapCliff Godwin, SVP, Oracle Monday, Oct 1, 12:15 PM - 1:15 PM - Moscone West 2002/2004 In this session, hear Oracle E-Business Suite General Manager Cliff Godwin deliver an update on the Oracle E-Business Suite product line. This session covers the value delivered by the current release of Oracle E-Business Suite, the momentum, and how Oracle E-Business Suite applications integrate into Oracle’s overall applications strategy. You’ll come away with an understanding of the value Oracle E-Business Suite applications deliver now and will deliver in the future. GEN9173 - Optimize and Extend Oracle Applications - The Path to Oracle Fusion ApplicationsNadia Bendjedou, Oracle; Corre Curtice, Bhavish Madurai (CSC) Tuesday, Oct 2, 10:15 AM - 11:15 AM - Moscone West 3002/3004 One of the main objectives of this session is to help organizations build their IT roadmap for the next five years and be aligned with the Oracle Applications strategy in general and the Oracle Fusion Applications strategy in particular. Come hear about some of the common sense, practical steps you can take to optimize the performance of your Oracle Applications today and prepare your path to Oracle Fusion Applications for when your organization is ready to embrace them. Each step you take in adopting Oracle Fusion technology gets you partway to Oracle Fusion Applications. Conference Sessions CON9024 - Oracle E-Business Suite Technology: Latest Features and Roadmap Lisa Parekh, Oracle Monday, Oct 1, 10:45 AM - 11:45 AM - Moscone West 2016 This Oracle development session provides a comprehensive overview of Oracle’s product strategy for Oracle E-Business Suite technology, the capabilities and associated business benefits of recent releases, and a review of capabilities on the product roadmap. This is the cornerstone session for the Oracle E-Business Suite technology stack. Come hear about the latest new usability enhancements of the user interface; systems administration and configuration management tools; security-related updates; and tools and options for extending, customizing, and integrating Oracle E-Business Suite with other applications. CON9021 - Oracle E-Business Suite Future Directions: Deployment and System AdministrationMax Arderius, Oracle Monday, Oct 1, 3:15 PM - 4:15 PM - Moscone West 2016  What’s coming in the next major version of Oracle E-Business Suite 12? This Oracle Development session covers the latest technology stack, including the use of Oracle WebLogic Server (Oracle Fusion Middleware 11g) and Oracle Database 11g Release 2 (11.2). Topics include an architectural overview of the latest updates, installation and upgrade options, new configuration options, and new tools for hot cloning and automated “lights-out” cloning. 
Come learn how online patching (based on the Oracle Database 11g Release 2 Edition-Based Redefinition feature) will reduce your database patching downtimes to however long it takes to bounce your database server. CON9017 - Desktop Integration in Oracle E-Business Suite 12.1 Padmaprabodh Ambale, Gustavo Jimenez, Oracle Monday, Oct 1, 4:45 PM - 5:45 PM - Moscone West 2016 This presentation covers the latest functional enhancements in Oracle Web Applications Desktop Integrator and Oracle Report Manager, enhanced Microsoft Office support, and greater support for building custom desktop integration solutions. The session also presents tips and tricks for upgrading from Oracle Applications Desktop Integrator to Oracle Web Applications Desktop Integrator and Oracle Report Manager. CON9023 - Oracle E-Business Suite Technology Certification Primer and Roadmap Steven Chan, Oracle Tuesday, Oct 2, 10:15 AM - 11:15 AM - Moscone West 2016  Is your Oracle E-Business Suite technology stack up to date? Are you taking advantage of all the latest options and capabilities? This Oracle development session summarizes the latest certifications and roadmap for the Oracle E-Business Suite technology stack, including elements such as database releases and options, Java, Oracle Forms, Oracle Containers for J2EE, desktop operating systems, browsers, JRE releases, development and Web authoring tools, user authentication and management, business intelligence, Oracle Application Management Packs, security options, clouds, Oracle VM, and virtualization. The session also covers the most commonly asked questions about tech stack component support dates and upgrade implications. CON9028 - Minimizing Oracle E-Business Suite Maintenance DowntimesSantiago Bastidas, Elke Phelps, Oracle Tuesday, Oct 2, 11:45 AM - 12:45 PM - Moscone West 2016 This Oracle development session features a survey of the best techniques sysadmins can use to minimize patching downtimes. It starts with an architectural-level review of Oracle E-Business Suite fundamentals and then moves to a practical view of the various tools and approaches for downtimes. Topics include patching shortcuts, merging patches, distributing worker processes across multiple servers, running ADPatch in noninteractive mode, staged APPL_TOPs, shared file systems, deferring systemwide database tasks, avoiding resource bottlenecks, and more. An added bonus: hear about the upcoming Oracle E-Business Suite 12 online patching capabilities based on the groundbreaking Oracle Database 11g Release 2 Edition-Based Redefinition feature. CON9116 - Extending the Use of Oracle E-Business Suite with the Oracle Endeca PlatformOsama Elkady, Muhannad Obeidat, Oracle Tuesday, Oct 2, 11:45 AM - 12:45 PM - Moscone West 2018 The Oracle Endeca platform includes a leading unstructured data correlation and analytics engine, together with a best-in class catalog search and guided navigation solution, to improve the productivity of all types of users in your enterprise. This development session focuses on the details behind the Oracle Endeca platform’s integration into Oracle E-Business Suite. It demonstrates how easily you can extend the use of the Oracle Endeca platform into other areas of Oracle E-Business Suite and how you can bring in your own data and build new Oracle Endeca applications for Oracle E-Business Suite. 
CON9005 - Oracle E-Business Suite Integration Best PracticesVeshaal Singh, Oracle, Jeffrey Hand, Zebra Technologies Tuesday, Oct 2, 1:15 PM - 2:15 PM - Moscone West 2018 Oracle is investing across applications and technologies to make the application integration experience easier for customers. Today Oracle has certified Oracle E-Business Suite on Oracle Fusion Middleware 11g and provides a comprehensive set of integration technologies. Learn about Oracle’s integration offering across data- and process-centric integrations. These technologies can be used to address various application integration challenges and styles. In this session, you will get an understanding of how, when, and where you can leverage Oracle’s integration technologies to connect end-to-end business processes across your enterprise, including your Oracle Applications portfolio.  CON9026 - Latest Oracle E-Business Suite 12.1 User Interface and Usability EnhancementsPadmaprabodh Ambale, Oracle Tuesday, Oct 2, 1:15 PM - 2:15 PM - Moscone West 2016 This Oracle development session details the latest UI enhancements to Oracle Application Framework in Oracle E-Business Suite 12.1. Developers will get a detailed look at new features to enhance usability, offer more capabilities for personalization and extensions, and support the development and use of dashboards and Web services. Topics include new rich UI capabilities such as new home page features, Navigator and Favorites pull-down menus, REST interface, embedded widgets for analytics content, Oracle Application Development Framework (Oracle ADF) task flows, third-party widgets, a look-ahead list of values, inline attachments, pop-ups, personalization and extensibility enhancements, business layer extensions, Oracle ADF integration, and mobile devices. CON8805 - Planning Your Oracle E-Business Suite Upgrade from 11i to Release 12.1 and BeyondAnne Carlson, Oracle Tuesday, Oct 2, 5:00 PM - 6:00 PM - Moscone West 3002/3004 Attend this session to hear the latest Oracle E-Business Suite 12.1 upgrade planning tips from Oracle’s support, consulting, development, and IT organizations. You’ll get specific cross-product advice on how to understand the factors that affect your project’s duration, decide on your project’s scope, develop a robust testing strategy, leverage Oracle Support resources, and more. In a nutshell, this session tells you things you need to know before embarking upon your Release 12.1 upgrade project. CON9053 - Advanced Management of Oracle E-Business Suite with Oracle Enterprise ManagerAngelo Rosado, Oracle Tuesday, Oct 2, 5:00 PM - 6:00 PM - Moscone West 2016 The task of managing and monitoring Oracle E-Business Suite environments can be very challenging. Oracle Enterprise Manager is the only product on the market that is designed to monitor and manage all the different technologies that constitute Oracle E-Business Suite applications, including end user, midtier, configuration, host, and database management—to name just a few. Customers that have implemented Oracle Enterprise Manager have experienced dramatic improvements in system visibility and diagnostic capability as well as administrator productivity. The purpose of this session is to highlight the key features and benefits of Oracle Enterprise Manager and Oracle Application Management Suite for Oracle E-Business Suite. 
CON8809 - Oracle E-Business Suite 12.1 Upgrade Best Practices: Technical InsightIsam Alyousfi, Udayan Parvate, Oracle Wednesday, Oct 3, 10:15 AM - 11:15 AM - Moscone West 3011 This session is ideal for organizations thinking about upgrading to Oracle E-Business Suite 12.1. It covers the fundamentals of upgrading to Release 12.1, including the technology stack components and supported upgrade paths. Hear from Oracle Development about the set of best practices for patching in general and executing the Release 12.1 technical upgrade, with special considerations for minimizing your downtime. Also get to know about relatively recent upgrade resources. CON9032 - Upgrading Your Customizations of Oracle E-Business Suite 12.1Sara Woodhull, Oracle Wednesday, Oct 3, 10:15 AM - 11:15 AM - Moscone West 2016 Have you personalized Oracle Forms or Oracle Application Framework screens in Oracle E-Business Suite? Have you used mod_plsql in Release 11i? Have you extended or customized your Release 11i environment with other tools? The technical options for upgrading these customizations as part of your Oracle E-Business Suite Release 12.1 upgrade can be bewildering. Come to this Oracle development session to learn about selecting the best upgrade approach for your existing customizations. The session will help you understand customization scenarios and use cases, tools, and technologies to ensure that your Oracle E-Business Suite Release 12.1 environment fits your users’ needs closely and that any future customizations will be easy to upgrade. CON9259 - Oracle E-Business Suite Internationalization and Multilingual FeaturesMaher Al-Nubani, Oracle Wednesday, Oct 3, 10:15 AM - 11:15 AM - Moscone West 2018 Oracle E-Business Suite supports more countries, languages, and regions than ever. Come to this Oracle development session to get an overview of internationalization features and capabilities and see new Release 12 features such as calendar support for Hijra and Thai, new group separators, lightweight multilingual support (MLS) setup, new character sets such as AL32UTF, newly supported languages, Mac certifications, Oracle iSetup support for moving MLS setups, new file export options for Unicode, new MLS number spelling options, and more. CON7188 - Mobile Apps for Oracle E-Business Suite with Oracle ADF Mobile and Oracle SOA SuiteSrikant Subramaniam, Joe Huang, Veshaal Singh, Oracle Wednesday, Oct 3, 10:15 AM - 11:15 AM - Moscone West 3001 Follow your mobile customers, employees, and partners with Oracle Fusion Middleware. See how native iPhone and iPad applications can easily be built for Oracle E-Business Suite with the new Oracle ADF Mobile and Oracle SOA Suite. Using Oracle ADF Mobile, developers can quickly develop native applications for Apple iOS and other mobile platforms. The Oracle SOA Suite/Oracle ADF Mobile combination can execute business transactions on Oracle E-Business Suite. This session includes a demo in which a mobile user approves a business transaction in Oracle E-Business Suite and a demo of the tools used to build a native on-device solution. 
These concepts for mobile applications also apply to other Oracle applications.CON9029 - Oracle E-Business Suite Directions: Slashing Downtimes with Online PatchingKevin Hudson, Oracle Wednesday, Oct 3, 11:45 AM - 12:45 PM - Moscone West 2016 Oracle E-Business Suite will soon include online patching (based on the Oracle Database 11g Release 2 Edition-Based Redefinition feature), which will reduce your database patching downtimes to however long it takes to bounce your database server. This Oracle development session details how online patching works, with special attention to what’s happening at a database object level when database patches are applied to an Oracle E-Business Suite environment that’s still running. Come learn about the operational and system management implications for minimizing maintenance downtimes when applying database patches with this new technology and the related impact on customizations you might have built on top of Oracle E-Business Suite. CON8806 - Upgrading to Oracle E-Business Suite 12.1: Technical and Functional PanelAndrew Katz, Komori America Corporation; Sandra Vucinic, VLAD Group, Inc. ;Srini Chavali, Cummins Inc.; Amrita Mehrok, Nadia Bendjedou, Anne Carlson Oracle Wednesday, Oct 3, 1:15 PM - 2:15 PM - Moscone West 2018 In this panel discussion, Oracle experts, customers, and partners share their experiences in upgrading to the latest release of Oracle E-Business Suite, Release 12.1. The panelists cover aspects of a typical Release 12 upgrade, technical (upgrading the technical infrastructure) as well as functional (upgrading to the new financial infrastructure). Hear directly from the experts who either develop the product or support, implement, or upgrade it, and find out how to apply their lessons learned to your organization. CON9027 - Personalize and Extend Oracle E-Business Suite Applications with Rich MashupsGustavo Jimenez, Padmaprabodh Ambale, Oracle Wednesday, Oct 3, 1:15 PM - 2:15 PM - Moscone West 2016 This session covers the use of several Oracle Fusion Middleware technologies to personalize and extend your existing Oracle E-Business Suite applications. The Oracle Fusion Middleware technologies covered include Oracle Application Development Framework (Oracle ADF), Oracle WebCenter, Oracle Endeca applications, and Oracle Business Intelligence Enterprise Edition with Oracle E-Business Suite Oracle Application Framework applications. CON9036 - Advanced Oracle E-Business Suite Architectures: Maximum Availability, Security, and MoreElke Phelps, Oracle Wednesday, Oct 3, 3:30 PM - 4:30 PM - Moscone West 2016 This session includes architecture diagrams and configuration instructions for building a maximum availability architecture (MAA) that will help you design a disaster recovery solution that fits the needs of your business. Database and application high-availability features it describes include Oracle Data Guard, Oracle Real Application Clusters (Oracle RAC), Oracle Active Data Guard, load-balancing Web and forms services, parallel concurrent processing, and the use of Oracle Exalogic and Oracle Exadata to provide a highly available environment. The session also covers the latest updates to systems management tools, AutoConfig, cloud computing, virtualization, and Oracle WebLogic Server and provides sneak previews of upcoming functionality. 
CON9047 - Efficiently Scaling Oracle E-Business Suite on Oracle Exadata and Oracle ExalogicIsam Alyousfi, Nishit Rao, Oracle Wednesday, Oct 3, 5:00 PM - 6:00 PM - Moscone West 2016 Oracle Exadata and Oracle Exalogic are designed from the ground up with optimizations in software and hardware to deliver superfast performance for mission-critical applications such as Oracle E-Business Suite. Oracle E-Business Suite applications run three to eight times as fast on the Oracle Exadata/Oracle Exalogic platform in standard benchmark tests. Besides performance, customers benefit from simplified support, enhanced manageability, and the ability to consolidate multiple Oracle E-Business Suite instances. Attend this session to understand best practices for Oracle E-Business Suite deployment on Oracle Exalogic and Oracle Exadata through customer case studies. Learn how adopting the Exa* platform increases efficiency, simplifies scaling, and boosts performance for peak loads. CON8716 - Web Services and SOA Integration Options for Oracle E-Business SuiteRekha Ayothi, Veshaal Singh, Oracle Thursday, Oct 4, 11:15 AM - 12:15 PM - Moscone West 2016 This Oracle development session provides a deep dive into a subset of the Web services and SOA-related integration options available to Oracle E-Business Suite systems integrators. It offers a technical look at Oracle E-Business Suite Integrated SOA Gateway, Oracle SOA Suite, Oracle Application Adapters for Data Integration for Oracle E-Business Suite, and other Web services options for integrating Oracle E-Business Suite with other applications. Systems integrators and developers will get an overview of the latest integration capabilities and technologies available out of the box with Oracle E-Business Suite and possibly a sneak preview of upcoming functionality and features. CON9030 - Recommendations for Oracle E-Business Suite Performance TuningIsam Alyousfi, Samer Barakat, Oracle Thursday, Oct 4, 11:15 AM - 12:15 PM - Moscone West 2018 Need to squeeze more performance out of your existing servers? This packed Oracle development session summarizes practical tips and lessons learned from performance-tuning and benchmarking the world’s largest Oracle E-Business Suite environments. Apps sysadmins will learn concrete tips and techniques for identifying and resolving performance bottlenecks on all layers, with special attention to application- and database-tier servers. Learn about tuning Oracle Forms, Oracle Concurrent Manager, Apache, and Oracle Discoverer. Track down memory leaks and other issues at the Java and JVM layers. The session also covers Oracle E-Business Suite product-level tuning, including Oracle Workflow, Oracle Order Management, Oracle Payroll, and other modules. CON3429 - Using Oracle ADF with Oracle E-Business Suite: The Full Integration ViewSiva Puthurkattil, Lake County; Juan Camilo Ruiz, Sara Woodhull, Oracle Thursday, Oct 4, 11:15 AM - 12:15 PM - Moscone West 3003 Oracle E-Business Suite delivers functionality for handling the core business of your organization. However, user requirements and new technologies are driving an emerging need to implement new types of user interfaces for these applications. This session provides an overview of how to use Oracle Application Development Framework (Oracle ADF) to deliver cutting-edge Web 2.0 and mobile rich user interfaces that front existing Oracle E-Business Suite processes, and it also explores all the existing types of integration between the two worlds. 
CON9020 - Integrating Oracle E-Business Suite with Oracle Identity Management SolutionsSunil Ghosh, Elke Phelps, Oracle Thursday, Oct 4, 12:45 PM - 1:45 PM - Moscone West 2016 Need to integrate Oracle E-Business Suite with Microsoft Windows Kerberos, Active Directory, CA Netegrity SiteMinder, or other third-party authentication systems? Want to understand your options when Oracle Premier Support for Oracle Single Sign-On ends in December 2011? This Oracle Development session covers the latest certified integrations with Oracle Access Manager 11g and Oracle Internet Directory 11g, which can be used individually or as bridges for integrating with third-party authentication solutions. The session presents an architectural overview of how Oracle Access Manager, its WebGate and AccessGate components, and Oracle Internet Directory work together, with implications for Oracle Discoverer, Oracle Portal, and other Oracle Fusion identity management products. CON9019 - Troubleshooting, Diagnosing, and Optimizing Oracle E-Business Suite TechnologyGustavo Jimenez, Oracle Thursday, Oct 4, 2:15 PM - 3:15 PM - Moscone West 2016 This session covers how you can proactively diagnose Oracle E-Business Suite applications, including extensions built with Oracle Fusion Middleware technologies such as Oracle Application Development Framework (Oracle ADF) and Oracle WebCenter to catch potential issues in the middle tier before they become more serious. Topics include debugging, logging infrastructure, warning signs, performance tuning, information required when logging service requests, general JVM optimization, and an overall picture of all the moving parts that make it possible for Oracle E-Business Suite to isolate and fix problems. Also learn how Oracle Diagnostics Framework will help prevent downtime caused by failures. CON9031 - The Top 10 Things You Can Do to Secure Your Oracle E-Business Suite InstanceEric Bing, Erik Graversen, Oracle Thursday, Oct 4, 2:15 PM - 3:15 PM - Moscone West 2018 Learn the top 10 things you can do to secure your applications and your sensitive data. This Oracle development session for system administrators and security professionals explores some of the most important and overlooked things you can do to secure your Oracle E-Business Suite instance. It also covers data masking and other mechanisms for protecting sensitive data. Special Interest Groups (SIG) Some of our most senior staff have been invited to participate on the following SIG meetings as guest speakers: SIG10525 - OAUG - Archive & Purge SIGBrian Bent - Pre-Sales Engineer, TierData, Inc. Sunday, Sep 30, 10:30 AM - 12:00 PM - Moscone West 3011 The Archive and Purge SIG is an organization in which users can share their experiences and solicit functional and technical advice on archiving and purging data in Oracle E-Business Suite. This session provides an opportunity for users to network and share best practices, tips, and tricks. Guest: Oracle E-Business Suite Database Performance, Archive & Purging - Q&A SessionIsam Alyousfi, Senior Director, Applications Performance, Oracle SIG10547 - OAUG - Oracle E-Business (EBS) Applications Technology SIGSrini Chavali - IT Director, Cummins Inc Sunday, Sep 30, 10:30 AM - 12:00 PM - Moscone West 3018 The general purpose of the EBS Applications Technology SIG is to inform and educate its members about current and future components of the tech stack as they relate to Oracle E-Business Suite. Attend this meeting for networking and education and to share best practices. 
Guest: Oracle E-Business Suite Technology Certification Roadmap - Presentation and Q&ASteven Chan, Sr. Director, Applications Technology Group, Oracle SIG10559 - OAUG - User Management SIGSusan Behn - VP of Oracle Delivery, Infosemantics, Inc. Sunday, Sep 30, 10:30 AM - 12:00 PM - Moscone West 3024 The E-Business Suite User Management SIG focuses on the components of user management that enable Oracle E-Business Suite users to define administrative functions and manage users’ access to functions and data based on roles within an organization—rather than the user’s individual identity—which is referred to as role-based access control (RBAC). This meeting includes an introduction to Oracle User Management that covers the Oracle User Management building blocks and presents an example of creating a security policy.Guest: Security and User Management - Q&A SessionEric Bing, Sr. Director, EBS Security, OracleSara Woodhull, Principal Product Manager, Applications Technology Group, Oracle SIG10515 - OAUG – Upgrade SIGBarbara Matthews - Consultant, On Call DBASandra Vucinic, VLAD Group, Inc. Sunday, Sep 30, 12:00 PM - 2:00 PM - Moscone West 3009 This Upgrade SIG session starts with a business meeting and then features a Q&A panel discussion on Oracle E-Business Suite upgrade topics. The session• Reviews Upgrade SIG goals and objectives• Provides answers, during the Q&A session, to questions related to Oracle E-Business Suite upgrades• Shares “real world” experiences, tips, and techniques for Oracle E-Business Suite upgrades to Release 12.1. Guest: Oracle E-Business Suite Upgrade - Q&A SessionAnne Carlson - Sr. Director, Oracle E-Business Suite Product Strategy, OracleUdayan Parvate - Director, EBS Release Engineering, OracleSuzana Ferrari, Sr. Principal Consultant, OracleIsam Alyousfi, Sr. Director, Applications Performance, Oracle SIG10552 - OAUG - Oracle E-Business Suite SIGDonna Rosentrater - Manager, Global Sourcing & Procurement Systems, TJX Sunday, Sep 30, 12:15 PM - 1:45 PM - Moscone West 3020 The E-Business Suite SIG, affiliated with OAUG, supports Oracle E-Business Suite users through networking, education, and sharing of best practices. This SIG meeting will feature a general discussion of Oracle E-Business Suite product strategies in Release 12 and migration to Oracle Fusion Applications. Guest: Oracle E-Business Suite - Q&A SessionJeanne Lowell, Vice President, EBS Product Strategy, OracleNadia Bendjedou, Sr. Director, Product Strategy, Oracle SIG10556 - OAUG - SysAdmin SIGRandy Giefer - Sr Systems and Security Architect, Solution Beacon, LLC Sunday, Sep 30, 12:15 PM - 1:45 PM - Moscone West 3022 The SysAdmin SIG provides a forum in which OAUG members and participants can share updates, tips, and successful practices relating to system administration in an Oracle applications environment. The SysAdmin SIG strives to enable system administrators to become more effective and efficient in their jobs by providing them with access to people and information that can increase their system administration knowledge and experience. Attend this meeting to network, share best practices, and benefit from educational content. Guest: Oracle E-Business Suite 12.2 Online Patching- Presentation and Q&AKevin Hudson, Sr. 
Director, Applications Technology Group, Oracle SIG10553 - OAUG - Database SIGMichael Brown - Senior DBA, COLIBRI LTD LC Sunday, Sep 30, 2:00 PM - 3:15 PM - Moscone West 3020 The OAUG Database SIG provides an opportunity for applications database administrators to learn from and share their experiences with supporting the various Oracle applications environments. This session will include a brief business meeting followed by a short presentation. It will end with an open discussion among the attendees about items of interest to those present. Guest: Oracle E-Business Suite Database Performance - Presentation and Q&AIsam Alyousfi, Sr. Director, Applications Performance, Oracle Meet the Experts We're planning two round-table discussions where you can review your questions with senior E-Business Suite ATG staff: MTE9648 - Meet the Experts for Oracle E-Business Suite: Planning Your Upgrade Jeanne Lowell - VP, EBS Product Strategy, Oracle John Abraham - Sr. Principal Product Manager, Oracle Nadia Bendjedou - Sr. Director - Product Strategy, Oracle Anne Carlson - Sr. Director, Applications Technology Group, Oracle Udayan Parvate - Director, EBS Release Engineering, Oracle Isam Alyousfi, Sr. Director, Applications Performance, Oracle Monday, Oct 1, 3:15 PM - 4:15 PM - Moscone West 2001A Don’t miss this Oracle Applications Meet the Experts session with experts who specialize in Oracle E-Business Suite upgrade best practices. This is the place where attendees can have informal and semistructured but open one-on-one discussions with Strategy and Development regarding Oracle Applications strategy and your specific business and IT strategy. The experts will be available to discuss the value of the latest releases and share insights into the best path for your enterprise, so come ready with your questions. Space is limited, so make sure you register. MTE9649 - Meet the Oracle E-Business Suite Tools and Technology Experts Lisa Parekh - Vice President, Technology Integration, Oracle Steven Chan - Sr. Director, Oracle Elke Phelps - Sr. Principal Product Manager, Applications Technology Group, Oracle Max Arderius - Manager, Applications Technology Group, Oracle Tuesday, Oct 2, 1:15 PM - 2:15 PM - Moscone West 2001A Don’t miss this Oracle Applications Meet the Experts session with experts who specialize in Oracle E-Business Suite technology. This is the place where attendees can have informal and semistructured but open one-on-one discussions with Strategy and Development regarding Oracle Applications strategy and your specific business and IT strategy. The experts will be available to discuss the value of the latest releases and share insights into the best path for your enterprise, so come ready with your questions. Space is limited, so make sure you register. Demos We have five booths in the exhibition demogrounds this year, where you can try ATG technologies firsthand and get your questions answered. Please stop by and meet our staff at the following locations: Advanced Architecture and Technology Stack for Oracle E-Business Suite (W-067) New User Productivity Capabilities in Oracle E-Business Suite (W-065) End-to-End Management of Oracle E-Business Suite (W-063) Oracle E-Business Suite 12.1 Technical Upgrade Best Practices (W-066) SOA-Based Integration for Oracle E-Business Suite (W-064)

    Read the article

  • Metro: Declarative Data Binding

    - by Stephen.Walther
    The goal of this blog post is to describe how declarative data binding works in the WinJS library. In particular, you learn how to use both the data-win-bind and data-win-bindsource attributes. You also learn how to use calculated properties and converters to format the value of a property automatically when performing data binding. By taking advantage of WinJS data binding, you can use the Model-View-ViewModel (MVVM) pattern when building Metro style applications with JavaScript. By using the MVVM pattern, you can prevent your JavaScript code from spinning into chaos. The MVVM pattern provides you with a standard pattern for organizing your JavaScript code which results in a more maintainable application. Using Declarative Bindings You can use the data-win-bind attribute with any HTML element in a page. The data-win-bind attribute enables you to bind (associate) an attribute of an HTML element to the value of a property. Imagine, for example, that you want to create a product details page. You want to show a product object in a page. In that case, you can create the following HTML page to display the product details: <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>Application1</title> <!-- WinJS references --> <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet"> <script src="//Microsoft.WinJS.0.6/js/base.js"></script> <script src="//Microsoft.WinJS.0.6/js/ui.js"></script> <!-- Application1 references --> <link href="/css/default.css" rel="stylesheet"> <script src="/js/default.js"></script> </head> <body> <h1>Product Details</h1> <div class="field"> Product Name: <span data-win-bind="innerText:name"></span> </div> <div class="field"> Product Price: <span data-win-bind="innerText:price"></span> </div> <div class="field"> Product Picture: <br /> <img data-win-bind="src:photo;alt:name" /> </div> </body> </html> The HTML page above contains three data-win-bind attributes – one attribute for each product property displayed. You use the data-win-bind attribute to set properties of the HTML element associated with the data-win-attribute. The data-win-bind attribute takes a semicolon delimited list of element property names and data source property names: data-win-bind=”elementPropertyName:datasourcePropertyName; elementPropertyName:datasourcePropertyName;…” In the HTML page above, the first two data-win-bind attributes are used to set the values of the innerText property of the SPAN elements. The last data-win-bind attribute is used to set the values of the IMG element’s src and alt attributes. By the way, using data-win-bind attributes is perfectly valid HTML5. The HTML5 standard enables you to add custom attributes to an HTML document just as long as the custom attributes start with the prefix data-. So you can add custom attributes to an HTML5 document with names like data-stephen, data-funky, or data-rover-dog-is-hungry and your document will validate. The product object displayed in the page above with the data-win-bind attributes is created in the default.js file: (function () { "use strict"; var app = WinJS.Application; app.onactivated = function (eventObject) { if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) { var product = { name: "Tesla", price: 80000, photo: "/images/TeslaPhoto.png" }; WinJS.Binding.processAll(null, product); } }; app.start(); })(); In the code above, a product object is created with a name, price, and photo property. 
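As a quick aside, the element side of a binding expression is not limited to simple top-level properties such as innerText or src – a dotted path into the element generally works as well. The fragment below is only a hedged sketch and not part of the original sample; highlight is a hypothetical view model property used purely for illustration:

<!-- minimal sketch: bind the text and, via a dotted path, the element's style.color -->
<span data-win-bind="innerText:name; style.color:highlight"></span>

Back in default.js, the last step is the call that actually applies these bindings.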
The WinJS.Binding.processAll() method is called to perform the actual binding (Don’t confuse WinJS.Binding.processAll() and WinJS.UI.processAll() – these are different methods). The first parameter passed to the processAll() method represents the root element for the binding. In other words, binding happens on this element and its child elements. If you provide the value null, then binding happens on the entire body of the document (document.body). The second parameter represents the data context. This is the object that has the properties which are displayed with the data-win-bind attributes. In the code above, the product object is passed as the data context parameter. Another word for data context is view model.  Creating Complex View Models In the previous section, we used the data-win-bind attribute to display the properties of a simple object: a single product. However, you can use binding with more complex view models including view models which represent multiple objects. For example, the view model in the following default.js file represents both a customer and a product object. Furthermore, the customer object has a nested address object: (function () { "use strict"; var app = WinJS.Application; app.onactivated = function (eventObject) { if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) { var viewModel = { customer: { firstName: "Fred", lastName: "Flintstone", address: { street: "1 Rocky Way", city: "Bedrock", country: "USA" } }, product: { name: "Bowling Ball", price: 34.55 } }; WinJS.Binding.processAll(null, viewModel); } }; app.start(); })(); The following page displays the customer (including the customer address) and the product. Notice that you can use dot notation to refer to child objects in a view model such as customer.address.street. <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>Application1</title> <!-- WinJS references --> <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet"> <script src="//Microsoft.WinJS.0.6/js/base.js"></script> <script src="//Microsoft.WinJS.0.6/js/ui.js"></script> <!-- Application1 references --> <link href="/css/default.css" rel="stylesheet"> <script src="/js/default.js"></script> </head> <body> <h1>Customer Details</h1> <div class="field"> First Name: <span data-win-bind="innerText:customer.firstName"></span> </div> <div class="field"> Last Name: <span data-win-bind="innerText:customer.lastName"></span> </div> <div class="field"> Address: <address> <span data-win-bind="innerText:customer.address.street"></span> <br /> <span data-win-bind="innerText:customer.address.city"></span> <br /> <span data-win-bind="innerText:customer.address.country"></span> </address> </div> <h1>Product</h1> <div class="field"> Name: <span data-win-bind="innerText:product.name"></span> </div> <div class="field"> Price: <span data-win-bind="innerText:product.price"></span> </div> </body> </html> A view model can be as complicated as you need and you can bind the view model to a view (an HTML document) by using declarative bindings. Creating Calculated Properties You might want to modify a property before displaying the property. For example, you might want to format the product price property before displaying the property. You don’t want to display the raw product price “80000”. Instead, you want to display the formatted price “$80,000”. You also might need to combine multiple properties. 
For example, you might need to display the customer full name by combining the values of the customer first and last name properties. In these situations, it is tempting to call a function when performing binding. For example, you could create a function named fullName() which concatenates the customer first and last name. Unfortunately, the WinJS library does not support the following syntax: <span data-win-bind=”innerText:fullName()”></span> Instead, in these situations, you should create a new property in your view model that has a getter. For example, the customer object in the following default.js file includes a property named fullName which combines the values of the firstName and lastName properties: (function () { "use strict"; var app = WinJS.Application; app.onactivated = function (eventObject) { if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) { var customer = { firstName: "Fred", lastName: "Flintstone", get fullName() { return this.firstName + " " + this.lastName; } }; WinJS.Binding.processAll(null, customer); } }; app.start(); })(); The customer object has a firstName, lastName, and fullName property. Notice that the fullName property is defined with a getter function. When you read the fullName property, the values of the firstName and lastName properties are concatenated and returned. The following HTML page displays the fullName property in an H1 element. You can use the fullName property in a data-win-bind attribute in exactly the same way as any other property. <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>Application1</title> <!-- WinJS references --> <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet"> <script src="//Microsoft.WinJS.0.6/js/base.js"></script> <script src="//Microsoft.WinJS.0.6/js/ui.js"></script> <!-- Application1 references --> <link href="/css/default.css" rel="stylesheet"> <script src="/js/default.js"></script> </head> <body> <h1 data-win-bind="innerText:fullName"></h1> <div class="field"> First Name: <span data-win-bind="innerText:firstName"></span> </div> <div class="field"> Last Name: <span data-win-bind="innerText:lastName"></span> </div> </body> </html> Creating a Converter In the previous section, you learned how to format the value of a property by creating a property with a getter. This approach makes sense when the formatting logic is specific to a particular view model. If, on the other hand, you need to perform the same type of formatting for multiple view models then it makes more sense to create a converter function. A converter function is a function which you can apply whenever you are using the data-win-bind attribute. Imagine, for example, that you want to create a general function for displaying dates. You always want to display dates using a short format such as 12/25/1988. The following JavaScript file – named converters.js – contains a shortDate() converter: (function (WinJS) { var shortDate = WinJS.Binding.converter(function (date) { return date.getMonth() + 1 + "/" + date.getDate() + "/" + date.getFullYear(); }); // Export shortDate WinJS.Namespace.define("MyApp.Converters", { shortDate: shortDate }); })(WinJS); The file above uses the Module Pattern, a pattern which is used through the WinJS library. 
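If the pattern itself is unfamiliar, its general shape is simply an immediately invoked function that keeps its helpers private and publishes only what it hands to WinJS.Namespace.define. The following minimal sketch uses placeholder names (MyApp.SomeNamespace and someHelper are hypothetical, not part of the converter example):

(function (WinJS) {
    // anything declared inside the function scope stays private to the module
    var someHelper = function () {
        // ... implementation details ...
    };

    // only the members passed to WinJS.Namespace.define are visible from outside
    WinJS.Namespace.define("MyApp.SomeNamespace", {
        someHelper: someHelper
    });
})(WinJS);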
To learn more about the Module Pattern, see my blog entry on namespaces and modules: http://stephenwalther.com/blog/archive/2012/02/22/windows-web-applications-namespaces-and-modules.aspx The file contains the definition for a converter function named shortDate(). This function converts a JavaScript date object into a short date string such as 12/1/1988. The converter function is created with the help of the WinJS.Binding.converter() method. This method takes a normal function and converts it into a converter function. Finally, the shortDate() converter is added to the MyApp.Converters namespace. You can call the shortDate() function by calling MyApp.Converters.shortDate(). The default.js file contains the customer object that we want to bind. Notice that the customer object has a firstName, lastName, and birthday property. We will use our new shortDate() converter when displaying the customer birthday property: (function () { "use strict"; var app = WinJS.Application; app.onactivated = function (eventObject) { if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) { var customer = { firstName: "Fred", lastName: "Flintstone", birthday: new Date("12/1/1988") }; WinJS.Binding.processAll(null, customer); } }; app.start(); })(); We actually use our shortDate converter in the HTML document. The following HTML document displays all of the customer properties: <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>Application1</title> <!-- WinJS references --> <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet"> <script src="//Microsoft.WinJS.0.6/js/base.js"></script> <script src="//Microsoft.WinJS.0.6/js/ui.js"></script> <!-- Application1 references --> <link href="/css/default.css" rel="stylesheet"> <script src="/js/default.js"></script> <script type="text/javascript" src="js/converters.js"></script> </head> <body> <h1>Customer Details</h1> <div class="field"> First Name: <span data-win-bind="innerText:firstName"></span> </div> <div class="field"> Last Name: <span data-win-bind="innerText:lastName"></span> </div> <div class="field"> Birthday: <span data-win-bind="innerText:birthday MyApp.Converters.shortDate"></span> </div> </body> </html> Notice the data-win-bind attribute used to display the birthday property. It looks like this: <span data-win-bind="innerText:birthday MyApp.Converters.shortDate"></span> The shortDate converter is applied to the birthday property when the birthday property is bound to the SPAN element’s innerText property. Using data-win-bindsource Normally, you pass the view model (the data context) which you want to use with the data-win-bind attributes in a page by passing the view model to the WinJS.Binding.processAll() method like this: WinJS.Binding.processAll(null, viewModel); As an alternative, you can specify the view model declaratively in your markup by using the data-win-datasource attribute. 
For example, the following default.js script exposes a view model with the fully-qualified name of MyWinWebApp.viewModel: (function () { "use strict"; var app = WinJS.Application; app.onactivated = function (eventObject) { if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) { // Create view model var viewModel = { customer: { firstName: "Fred", lastName: "Flintstone" }, product: { name: "Bowling Ball", price: 12.99 } }; // Export view model to be seen by universe WinJS.Namespace.define("MyWinWebApp", { viewModel: viewModel }); // Process data-win-bind attributes WinJS.Binding.processAll(); } }; app.start(); })(); In the code above, a view model which represents a customer and a product is exposed as MyWinWebApp.viewModel. The following HTML page illustrates how you can use the data-win-bindsource attribute to bind to this view model: <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>Application1</title> <!-- WinJS references --> <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet"> <script src="//Microsoft.WinJS.0.6/js/base.js"></script> <script src="//Microsoft.WinJS.0.6/js/ui.js"></script> <!-- Application1 references --> <link href="/css/default.css" rel="stylesheet"> <script src="/js/default.js"></script> </head> <body> <h1>Customer Details</h1> <div data-win-bindsource="MyWinWebApp.viewModel.customer"> <div class="field"> First Name: <span data-win-bind="innerText:firstName"></span> </div> <div class="field"> Last Name: <span data-win-bind="innerText:lastName"></span> </div> </div> <h1>Product</h1> <div data-win-bindsource="MyWinWebApp.viewModel.product"> <div class="field"> Name: <span data-win-bind="innerText:name"></span> </div> <div class="field"> Price: <span data-win-bind="innerText:price"></span> </div> </div> </body> </html> The data-win-bindsource attribute is used twice in the page above: it is used with the DIV element which contains the customer details and it is used with the DIV element which contains the product details. If an element has a data-win-bindsource attribute then all of the child elements of that element are affected. The data-win-bind attributes of all of the child elements are bound to the data source represented by the data-win-bindsource attribute. Summary The focus of this blog entry was data binding using the WinJS library. You learned how to use the data-win-bind attribute to bind the properties of an HTML element to a view model. We also discussed several advanced features of data binding. We examined how to create calculated properties by including a property with a getter in your view model. We also discussed how you can create a converter function to format the value of a view model property when binding the property. Finally, you learned how to use the data-win-bindsource attribute to specify a view model declaratively.

    Read the article

  • West Wind WebSurge - an easy way to Load Test Web Applications

    - by Rick Strahl
A few months ago on a project the subject of load testing came up. We were having some serious issues with a Web application that would start spewing SQL lock errors under somewhat heavy load. These sorts of errors can be tough to catch, precisely because they only occur under load and not during typical development testing. To replicate the error more reliably we needed to put a load on the application and run it for a while before the SQL errors would flare up.

It had been a while since I'd looked at load testing tools, so I spent a bit of time looking at different tools and frankly didn't find anything that was a good fit. A lot of tools were either a pain to use, didn't have the basic features I needed, or were extravagantly expensive. In the end I got frustrated enough to build an initially small custom load test solution that morphed into a more generic library, then gained a console front end, and eventually turned into a full-blown Web load testing tool that is now called West Wind WebSurge.

I get frustrated looking for tools every time I need some quick and dirty load testing for an application. If my aim is just to put an application under heavy enough load to find a scalability problem in code, or to push an application to its limits on the hardware it's running on, I shouldn't have to struggle to set up tests. It should be easy enough to get going in a few minutes, so that testing can be done on a regular basis without a lot of hassle. That was the goal when I started to build out my initial custom load tester into a more widely usable tool.

If you're in a hurry and want to check it out, you can find more information and download links here:

West Wind WebSurge Product Page
Walk-through Video
Download link (zip)
Install from Chocolatey
Source on GitHub

For a more detailed discussion of the whys and hows and some background, continue reading.

How did I get here?

When I started out on this path, I wasn't planning on building a tool like this myself – but I got frustrated enough looking at what's out there to think I could do better than what's available for the most common simple load testing scenarios. When we ran into the SQL lock problems I mentioned, I started looking around at what's available for Web load testing solutions that would work for our whole team, which consisted of a few developers and a couple of IT guys, both of whom needed to be able to run the tests. It had been a while since I'd looked at tools, and I figured that by now there should be some good solutions out there, but as it turns out I didn't find anything that fit our relatively simple needs without costing an arm and a leg… I spent the better part of a day installing and trying various load testing tools, and to be frank most of them were either terrible at what they do, incredibly unfriendly to use, used terminology I couldn't even parse, or were extremely expensive (and I mean in the 'sell your liver' range of expensive). Pick your poison.

There are also a number of online solutions for load testing, and they actually looked more promising, but those wouldn't work well for our scenario because the application runs inside a private VPN with no outside access into the VPN. Most of those online solutions also ended up being very pricey – presumably because the bandwidth required to test over the open Web can be enormous.

When I asked around on Twitter about what people were using, I got mostly… crickets. Several people mentioned Visual Studio Load Test, and most other suggestions pointed to online solutions. I did get a bunch of responses with people asking to let them know what I found – apparently I'm not alone when it comes to finding load testing tools that are effective and easy to use.

As to Visual Studio, the higher-end SKUs of Visual Studio and the test edition include a Web load testing tool, which is quite powerful, but there are a number of issues with it. First, it's tied to Visual Studio, so it's not very portable – you need a VS install. I also find the test setup and terminology used by the VS test runner extremely confusing. Heck, it's complicated enough that there's even a Pluralsight course on using the Visual Studio Web test from Steve Smith. And of course you need one of the high-end Visual Studio SKUs, and those are mucho dinero ($$$) – just for load testing, that's rarely an option.

Some of the tools are ultra extensive and let you run analysis tools on the target servers, which is useful, but in most cases just plain overkill that distracts from what I'm ultimately interested in: reproducing problems that occur at high load, and finding the upper limits and 'what if' scenarios as load is ramped up against a site. Yes, it's useful to have Web app instrumentation, but often that's not what you're after.

I still fondly remember the early days of Web testing when Microsoft had the WAST (Web Application Stress Tool), which was rather simple – and also somewhat limited – but easily allowed you to create stress tests very quickly. It had some serious limitations (mainly that it didn't work with SSL), but the idea behind it was excellent: create tests quickly and easily and provide a decent engine to run them locally with minimal setup. You could get set up and run tests within a few minutes. Unfortunately, that tool died a quiet death, like so many Microsoft tools that probably were built by an intern and then abandoned, even though there was a lot of potential and it was actually fairly widely used. Eventually the tool was no longer downloadable, and now it simply doesn't work anymore on higher-end hardware.

West Wind WebSurge – Making Load Testing Quick and Easy

So I ended up creating West Wind WebSurge out of rebellious frustration… The goal of WebSurge is to make it drop-dead simple to create load tests. It's super easy to capture sessions either using the built-in capture tool (big props to Eric Lawrence, Telerik, and FiddlerCore, which made that piece a snap), using the full version of Fiddler and exporting sessions, or by manually or programmatically creating text files based on plain HTTP headers.

I've been using this tool for four months now on a regular basis on various projects as a reality check for performance and scalability, and it has worked extremely well for finding small performance issues. I also use it regularly as a simple URL tester, since it lets me quickly enter a URL plus headers and content, test that URL and its results, and easily save one or more of those URLs.

A few weeks back I made a walk-through video that goes over most of the features of WebSurge in some detail. Note that the UI has changed slightly since then, so there are some UI improvements.
Most notably the test results screen has been updated recently to a different layout and to provide more information about each URL in a session at a glance. The video and the main WebSurge site has a lot of info of basic operations. For the rest of this post I’ll talk about a few deeper aspects that may be of interest while also giving a glance at how WebSurge works. Session Capturing As you would expect, WebSurge works with Sessions of Urls that are played back under load. Here’s what the main Session View looks like: You can create session entries manually by individually adding URLs to test (on the Request tab on the right) and saving them, or you can capture output from Web Browsers, Windows Desktop applications that call services, your own applications using the built in Capture tool. With this tool you can capture anything HTTP -SSL requests and content from Web pages, AJAX calls, SOAP or REST services – again anything that uses Windows or .NET HTTP APIs. Behind the scenes the capture tool uses FiddlerCore so basically anything you can capture with Fiddler you can also capture with Web Surge Session capture tool. Alternately you can actually use Fiddler as well, and then export the captured Fiddler trace to a file, which can then be imported into WebSurge. This is a nice way to let somebody capture session without having to actually install WebSurge or for your customers to provide an exact playback scenario for a given set of URLs that cause a problem perhaps. Note that not all applications work with Fiddler’s proxy unless you configure a proxy. For example, .NET Web applications that make HTTP calls usually don’t show up in Fiddler by default. For those .NET applications you can explicitly override proxy settings to capture those requests to service calls. The capture tool also has handy optional filters that allow you to filter by domain, to help block out noise that you typically don’t want to include in your requests. For example, if your pages include links to CDNs, or Google Analytics or social links you typically don’t want to include those in your load test, so by capturing just from a specific domain you are guaranteed content from only that one domain. Additionally you can provide url filters in the configuration file – filters allow to provide filter strings that if contained in a url will cause requests to be ignored. Again this is useful if you don’t filter by domain but you want to filter out things like static image, css and script files etc. Often you’re not interested in the load characteristics of these static and usually cached resources as they just add noise to tests and often skew the overall url performance results. In my testing I tend to care only about my dynamic requests. SSL Captures require Fiddler Note, that in order to capture SSL requests you’ll have to install the Fiddler’s SSL certificate. The easiest way to do this is to install Fiddler and use its SSL configuration options to get the certificate into the local certificate store. There’s a document on the Telerik site that provides the exact steps to get SSL captures to work with Fiddler and therefore with WebSurge. Session Storage A group of URLs entered or captured make up a Session. Sessions can be saved and restored easily as they use a very simple text format that simply stored on disk. The format is slightly customized HTTP header traces separated by a separator line. 
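To give a rough idea of what that looks like, here is a hypothetical two-request session. The URLs and headers are made up for illustration, and the line between the two requests is only a placeholder – WebSurge writes its own separator line, so don't treat this as the literal file format:

GET http://west-wind.com/websurge/ HTTP/1.1
Accept: text/html
User-Agent: West Wind WebSurge

(separator line written by WebSurge goes here)

GET http://west-wind.com/websurge/features.aspx HTTP/1.1
Accept: text/html
User-Agent: West Wind WebSurge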
The headers are standard HTTP headers, except that the full URL instead of just the domain relative path is stored as part of the first HTTP header line for easier parsing. Because it's just text and uses the same format that Fiddler uses for exports, it's super easy to create Sessions by hand or under program control by writing out a simple text file. You can see what this format looks like in the Capture window figure above – the raw captured format is also what's stored to disk and what WebSurge parses. The only 'custom' part of these headers is that the first line contains the full URL instead of the domain relative path and Host: header. The rest of each entry is just plain standard HTTP headers, with each individual URL isolated by a separator line. Because the format matches what Fiddler produces for exports, it's easy to exchange or view data in either Fiddler or WebSurge. URLs can also be edited interactively so you can modify the headers easily as well. Again – it's just plain HTTP headers, so anything you can do with HTTP can be added here.

Use it for single URL Testing

Incidentally, I've also found this form to be an excellent way to test and replay individual URLs for simple non-load testing purposes. Because you can capture one or many URLs and store them on disk, this also provides a nice HTTP playground where you can record URLs with their headers, fire them one at a time or as a session, and see results immediately. It's actually handy for REST presentations, and I find the simple UI flow easier than using Fiddler natively. Finally, you can save one or more URLs as a session for later retrieval. I'm using this more and more for simple URL checks.

Overriding Cookies and Domains

Speaking of HTTP headers – you can also override cookies as part of the options. One thing that happens with modern Web applications is that session cookies are used for authorization. These cookies tend to expire at some point, which would invalidate a test. Using the Options dialog you can override the cookie, which replaces the cookie for all requests with the cookie value specified there. You can capture a valid cookie from a manual HTTP request in your browser and then paste it into the cookie field to replace the existing cookie with the new one that is now valid. Likewise you can easily replace the domain, so if you captured URLs on west-wind.com and now you want to test on localhost, you can do that easily as well. You could even do something like capture on store.west-wind.com and then test on localhost/store, which would also work.

Running Load Tests

Once you've created a Session you can specify the length of the test in seconds and the number of simultaneous threads to run each session on. Sessions run through each of the URLs in the session sequentially by default. One option in the options list above lets you randomize the URLs so each thread runs requests in a different order. This avoids bunching up URLs when tests start, since all threads running the same requests simultaneously can sometimes skew the results of the first few minutes of a test. While sessions run, some progress information is displayed: by default there's a live view of requests displayed in a console-like window. At the bottom of the window there's a running total summary that displays where you're at in the test, how many requests have been processed, and what the current requests per second count is for all requests.
Note that for tests that run over a thousand requests a second, it's a good idea to turn off the console display. While the console display is nice for seeing that something is happening and gives you a slight idea of what's going on with actual requests, once a lot of requests are processed this UI updating adds significant CPU overhead to the application, which may reduce the actual load generated. If you are running 1,000 requests a second there's not much to see anyway, as requests roll by far too fast to read individual lines. If you look on the options panel, there is a NoProgressEvents option that disables the console display. The summary display is still updated approximately once a second, so you can always tell that the test is still running.

Test Results

When the test is done you get a simple Results display: on the right you get an overall summary as well as a breakdown by each URL in the session. Both successes and failures are highlighted so it's easy to see what's breaking in your load test. The report can be printed, or you can open the HTML document in your default Web browser for printing to PDF or saving the HTML document to disk. The list on the right shows a partial list of the URLs that were fired so you can look in detail at the request and response data. The list can be filtered by success and failure requests. Each list is partial only (at the moment) and limited to a maximum of 1000 items in order to render reasonably quickly. Each item in the list can be clicked to see the full request and response data. This is particularly useful for errors, so you can quickly see and copy what request data was used, and in the case of a GET request you can also just click the link to jump to the page. For non-GET requests you can find the URL in the Session list and use the context menu to test the URL as configured, including any HTTP content data to send. You get to see the full HTTP request and response as well as a link in the Request header to visit the actual page – not so useful for a POST as above, but definitely useful for GET requests. Finally, you can also get a few charts. The most useful one is probably the Requests per Second chart, which can be accessed from the Charts menu or shortcut. Results can also be exported to JSON, XML and HTML. Keep in mind that these files can get very large rather quickly, so exports can take a while to complete.

Command Line Interface

WebSurge runs on a small core load engine, and this engine is plugged into the front end application I've shown so far. There's also a command line interface available to run WebSurge from the Windows command prompt. Using the command line you can run tests for either an individual URL (similar to ab.exe, for example) or a full Session file. By default, WebSurgeCli shows progress every second with the total request count, failures and the requests per second for the entire test. A silent option can turn off this progress display and show only the results. The command line interface can be useful for build integration, which allows checking for failures or verifying that a specific requests per second count is hit. It's also nice to use as a quick and dirty URL test facility, similar to the way you'd use Apache Bench (ab.exe). Unlike ab.exe though, WebSurgeCli supports SSL and makes it much easier to create multi-URL tests using either manual editing or the WebSurge UI.
Current Status

Currently West Wind WebSurge is still in Beta status. I'm still adding small new features and tweaking the UI in an attempt to make it as easy and self-explanatory as possible to run. Documentation for the UI and specialty features is also still a work in progress. I plan on open-sourcing this product, but it won't be free. There's a free version available that provides a limited number of threads and request URLs to run. A relatively low cost license removes the thread and request limitations. Pricing info can be found on the Web site – there's an introductory price of $99 at the moment, which I think is reasonable compared to most other for-pay solutions out there that are exorbitant by comparison… The reason the code is not available yet is – well, the UI portion of the app is a bit embarrassing in its current monolithic state. The UI started as a very simple interface that later got a lot more complex – yeah, that never happens, right? Unless there's a lot of interest I don't foresee rewriting the UI entirely (which would be ideal), but in the meantime at least some cleanup is required before I dare to publish it :-). The code will likely be released with version 1.0.

I'm very interested in feedback. Do you think this could be useful to you and provide value over other tools you may or may not have used before? I hope so – it has already provided a ton of value for me and the work I do, which made the development worthwhile at this point. You can leave a comment below, or for more extensive discussions you can post a message on the West Wind Message Board in the WebSurge section.

Microsoft MVPs and Insiders get a free License

If you're a Microsoft MVP or a Microsoft Insider you can get a full license for free. Send me a link to your current, official Microsoft profile and I'll send you a not-for-resale license. Send any messages to [email protected].

Resources

For more info on WebSurge and to download it to try it out, use the following links: West Wind WebSurge Home, Download West Wind WebSurge, Getting Started with West Wind WebSurge Video.

© Rick Strahl, West Wind Technologies, 2005-2014. Posted in ASP.NET

    Read the article

  • Creating STA COM compatible ASP.NET Applications

    - by Rick Strahl
    When building ASP.NET applications that interface with old school COM objects like those created with VB6 or Visual FoxPro (MTDLL), it's extremely important that the threads serving requests use Single Threaded Apartment (STA) threading. STA is a built-in COM technology that allows essentially single threaded components to operate reliably in a multi-threaded environment. STAs guarantee that COM objects instantiated on a specific thread stay on that specific thread, and any access to a COM object from another thread is automatically marshaled to the STA thread. The end effect is that you can have multiple threads, but a COM object instance lives on a fixed, never-changing thread.

ASP.NET by default uses MTA (multi-threaded apartment) threads, which are truly free spinning threads that pay no heed to COM object marshaling. This is vastly more efficient than STA threading, which has a bit of overhead in determining whether it's OK to run code on a given thread or whether some sort of thread/COM marshaling needs to occur. MTA COM components can be very efficient, but STA COM components in a multi-threaded environment always tend to have a fair amount of overhead.

It's amazing how much COM Interop I still see today, so while it seems really old school to be talking about this topic, it's actually quite apropos for me as I have many customers using legacy COM systems that need to interface with other .NET applications. In this post I'm consolidating some of the hacks I've used to integrate with various ASP.NET technologies when using STA COM components.

STA in ASP.NET

Support for STA threading in the ASP.NET framework is fairly limited. Specifically, only the original ASP.NET WebForms technology supports STA threading directly, via its STA Page Handler implementation or what you might know as ASPCOMPAT mode. For WebForms, running STA components is as easy as specifying the ASPCOMPAT attribute in the @Page tag:

<%@ Page Language="C#" AspCompat="true" %>

which runs the page in STA mode. Removing it runs the page in MTA mode. Simple. Unfortunately all other ASP.NET technologies built on top of the core ASP.NET engine do not support STA natively. So if you want to use STA COM components in MVC or with classic ASMX Web Services, there's no automatic option like the ASPCOMPAT keyword available.

So what happens when you run an STA COM component in an MTA application? In low volume environments, nothing much will happen. The COM objects will appear to work just fine, as there are no simultaneous thread interactions and the COM component will happily run on a single thread, or multiple single threads one at a time. So during testing, running components in MTA environments may appear to work just fine. However, as load increases and threads get re-used by ASP.NET, COM objects will end up getting created on multiple different threads. This can result in crashes, hangs, or data corruption in the STA components, which store their state in thread local storage on the STA thread. If threads overlap, this global store can easily get corrupted, which in turn causes problems. STA ensures that any COM object instance loaded always stays on the same thread it was instantiated on.

What about COM+?

COM+ is supposed to address the problem of STA in MTA applications by providing an abstraction with its own thread pool manager for COM objects. It steps into the COM instantiation pipeline and hands out COM instances from its own internally maintained STA thread pool.
This guarantees that the COM instantiation threads are STA threads if using STA components. COM+ works, but in my experience the technology is very, very slow for STA components. It adds a ton of overhead and reduces COM performance noticably in load tests in IIS. COM+ can make sense in some situations but for Web apps with STA components it falls short. In addition there's also the need to ensure that COM+ is set up and configured on the target machine and the fact that components have to be registered in COM+. COM+ also keeps components up at all times, so if a component needs to be replaced the COM+ package needs to be unloaded (same is true for IIS hosted components but it's more common to manage that). COM+ is an option for well established components, but native STA support tends to provide better performance and more consistent usability, IMHO. STA for non supporting ASP.NET Technologies As mentioned above only WebForms supports STA natively. However, by utilizing the WebForms ASP.NET Page handler internally it's actually possible to trick various other ASP.NET technologies and let them work with STA components. This is ugly but I've used each of these in various applications and I've had minimal problems making them work with FoxPro STA COM components which is about as dififcult as it gets for COM Interop in .NET. In this post I summarize several STA workarounds that enable you to use STA threading with these ASP.NET Technologies: ASMX Web Services ASP.NET MVC WCF Web Services ASP.NET Web API ASMX Web Services I start with classic ASP.NET ASMX Web Services because it's the easiest mechanism that allows for STA modification. It also clearly demonstrates how the WebForms STA Page Handler is the key technology to enable the various other solutions to create STA components. Essentially the way this works is to override the WebForms Page class and hijack it's init functionality for processing requests. Here's what this looks like for Web Services:namespace FoxProAspNet { public class WebServiceStaHandler : System.Web.UI.Page, IHttpAsyncHandler { protected override void OnInit(EventArgs e) { IHttpHandler handler = new WebServiceHandlerFactory().GetHandler( this.Context, this.Context.Request.HttpMethod, this.Context.Request.FilePath, this.Context.Request.PhysicalPath); handler.ProcessRequest(this.Context); this.Context.ApplicationInstance.CompleteRequest(); } public IAsyncResult BeginProcessRequest( HttpContext context, AsyncCallback cb, object extraData) { return this.AspCompatBeginProcessRequest(context, cb, extraData); } public void EndProcessRequest(IAsyncResult result) { this.AspCompatEndProcessRequest(result); } } public class AspCompatWebServiceStaHandlerWithSessionState : WebServiceStaHandler, IRequiresSessionState { } } This class overrides the ASP.NET WebForms Page class which has a little known AspCompatBeginProcessRequest() and AspCompatEndProcessRequest() method that is responsible for providing the WebForms ASPCOMPAT functionality. These methods handle routing requests to STA threads. Note there are two classes - one that includes session state and one that does not. If you plan on using ASP.NET Session state use the latter class, otherwise stick to the former. This maps to the EnableSessionState page setting in WebForms. This class simply hooks into this functionality by overriding the BeginProcessRequest and EndProcessRequest methods and always forcing it into the AspCompat methods. 
The way this works is that BeginProcessRequest() fires first to set up the threads and starts intializing the handler. As part of that process the OnInit() method is fired which is now already running on an STA thread. The code then creates an instance of the actual WebService handler factory and calls its ProcessRequest method to start executing which generates the Web Service result. Immediately after ProcessRequest the request is stopped with Application.CompletRequest() which ensures that the rest of the Page handler logic doesn't fire. This means that even though the fairly heavy Page class is overridden here, it doesn't end up executing any of its internal processing which makes this code fairly efficient. In a nutshell, we're highjacking the Page HttpHandler and forcing it to process the WebService process handler in the context of the AspCompat handler behavior. Hooking up the Handler Because the above is an HttpHandler implementation you need to hook up the custom handler and replace the standard ASMX handler. To do this you need to modify the web.config file (here for IIS 7 and IIS Express): <configuration> <system.webServer> <handlers> <remove name="WebServiceHandlerFactory-Integrated-4.0" /> <add name="Asmx STA Web Service Handler" path="*.asmx" verb="*" type="FoxProAspNet.WebServiceStaHandler" precondition="integrated"/> </handlers> </system.webServer> </configuration> (Note: The name for the WebServiceHandlerFactory-Integrated-4.0 might be slightly different depending on your server version. Check the IIS Handler configuration in the IIS Management Console for the exact name or simply remove the handler from the list there which will propagate to your web.config). For IIS 5 & 6 (Windows XP/2003) or the Visual Studio Web Server use:<configuration> <system.web> <httpHandlers> <remove path="*.asmx" verb="*" /> <add path="*.asmx" verb="*" type="FoxProAspNet.WebServiceStaHandler" /> </httpHandlers> </system.web></configuration> To test, create a new ASMX Web Service and create a method like this: [WebService(Namespace = "http://foxaspnet.org/")] [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)] public class FoxWebService : System.Web.Services.WebService { [WebMethod] public string HelloWorld() { return "Hello World. Threading mode is: " + System.Threading.Thread.CurrentThread.GetApartmentState(); } } Run this before you put in the web.config configuration changes and you should get: Hello World. Threading mode is: MTA Then put the handler mapping into Web.config and you should see: Hello World. Threading mode is: STA And you're on your way to using STA COM components. It's a hack but it works well! I've used this with several high volume Web Service installations with various customers and it's been fast and reliable. ASP.NET MVC ASP.NET MVC has quickly become the most popular ASP.NET technology, replacing WebForms for creating HTML output. MVC is more complex to get started with, but once you understand the basic structure of how requests flow through the MVC pipeline it's easy to use and amazingly flexible in manipulating HTML requests. In addition, MVC has great support for non-HTML output sources like JSON and XML, making it an excellent choice for AJAX requests without any additional tools. Unlike WebForms ASP.NET MVC doesn't support STA threads natively and so some trickery is needed to make it work with STA threads as well. MVC gets its handler implementation through custom route handlers using ASP.NET's built in routing semantics. 
To work in an STA handler requires working in the Page Handler as part of the Route Handler implementation. As with the Web Service handler the first step is to create a custom HttpHandler that can instantiate an MVC request pipeline properly:public class MvcStaThreadHttpAsyncHandler : Page, IHttpAsyncHandler, IRequiresSessionState { private RequestContext _requestContext; public MvcStaThreadHttpAsyncHandler(RequestContext requestContext) { if (requestContext == null) throw new ArgumentNullException("requestContext"); _requestContext = requestContext; } public IAsyncResult BeginProcessRequest(HttpContext context, AsyncCallback cb, object extraData) { return this.AspCompatBeginProcessRequest(context, cb, extraData); } protected override void OnInit(EventArgs e) { var controllerName = _requestContext.RouteData.GetRequiredString("controller"); var controllerFactory = ControllerBuilder.Current.GetControllerFactory(); var controller = controllerFactory.CreateController(_requestContext, controllerName); if (controller == null) throw new InvalidOperationException("Could not find controller: " + controllerName); try { controller.Execute(_requestContext); } finally { controllerFactory.ReleaseController(controller); } this.Context.ApplicationInstance.CompleteRequest(); } public void EndProcessRequest(IAsyncResult result) { this.AspCompatEndProcessRequest(result); } public override void ProcessRequest(HttpContext httpContext) { throw new NotSupportedException("STAThreadRouteHandler does not support ProcessRequest called (only BeginProcessRequest)"); } } This handler code figures out which controller to load and then executes the controller. MVC internally provides the information needed to route to the appropriate method and pass the right parameters. Like the Web Service handler the logic occurs in the OnInit() and performs all the processing in that part of the request. Next, we need a RouteHandler that can actually pick up this handler. Unlike the Web Service handler where we simply registered the handler, MVC requires a RouteHandler to pick up the handler. RouteHandlers look at the URL's path and based on that decide on what handler to invoke. The route handler is pretty simple - all it does is load our custom handler: public class MvcStaThreadRouteHandler : IRouteHandler { public IHttpHandler GetHttpHandler(RequestContext requestContext) { if (requestContext == null) throw new ArgumentNullException("requestContext"); return new MvcStaThreadHttpAsyncHandler(requestContext); } } At this point you can instantiate this route handler and force STA requests to MVC by specifying a route. 
The following sets up the ASP.NET Default Route:Route mvcRoute = new Route("{controller}/{action}/{id}", new RouteValueDictionary( new { controller = "Home", action = "Index", id = UrlParameter.Optional }), new MvcStaThreadRouteHandler()); RouteTable.Routes.Add(mvcRoute);   To make this code a little easier to work with and mimic the behavior of the routes.MapRoute() functionality extension method that MVC provides, here is an extension method for MapMvcStaRoute(): public static class RouteCollectionExtensions { public static void MapMvcStaRoute(this RouteCollection routeTable, string name, string url, object defaults = null) { Route mvcRoute = new Route(url, new RouteValueDictionary(defaults), new MvcStaThreadRouteHandler()); RouteTable.Routes.Add(mvcRoute); } } With this the syntax to add  route becomes a little easier and matches the MapRoute() method:RouteTable.Routes.MapMvcStaRoute( name: "Default", url: "{controller}/{action}/{id}", defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional } ); The nice thing about this route handler, STA Handler and extension method is that it's fully self contained. You can put all three into a single class file and stick it into your Web app, and then simply call MapMvcStaRoute() and it just works. Easy! To see whether this works create an MVC controller like this: public class ThreadTestController : Controller { public string ThreadingMode() { return Thread.CurrentThread.GetApartmentState().ToString(); } } Try this test both with only the MapRoute() hookup in the RouteConfiguration in which case you should get MTA as the value. Then change the MapRoute() call to MapMvcStaRoute() leaving all the parameters the same and re-run the request. You now should see STA as the result. You're on your way using STA COM components reliably in ASP.NET MVC. WCF Web Services running through IIS WCF Web Services provide a more robust and wider range of services for Web Services. You can use WCF over HTTP, TCP, and Pipes, and WCF services support WS* secure services. There are many features in WCF that go way beyond what ASMX can do. But it's also a bit more complex than ASMX. As a basic rule if you need to serve straight SOAP Services over HTTP I 'd recommend sticking with the simpler ASMX services especially if COM is involved. If you need WS* support or want to serve data over non-HTTP protocols then WCF makes more sense. WCF is not my forte but I found a solution from Scott Seely on his blog that describes the progress and that seems to work well. I'm copying his code below so this STA information is all in one place and quickly explain. Scott's code basically works by creating a custom OperationBehavior which can be specified via an [STAOperation] attribute on every method. Using his attribute you end up with a class (or Interface if you separate the contract and class) that looks like this: [ServiceContract] public class WcfService { [OperationContract] public string HelloWorldMta() { return Thread.CurrentThread.GetApartmentState().ToString(); } // Make sure you use this custom STAOperationBehavior // attribute to force STA operation of service methods [STAOperationBehavior] [OperationContract] public string HelloWorldSta() { return Thread.CurrentThread.GetApartmentState().ToString(); } } Pretty straight forward. The latter method returns STA while the former returns MTA. To make STA work every method needs to be marked up. The implementation consists of the attribute and OperationInvoker implementation. 
Here are the two classes required to make this work from Scott's post:public class STAOperationBehaviorAttribute : Attribute, IOperationBehavior { public void AddBindingParameters(OperationDescription operationDescription, System.ServiceModel.Channels.BindingParameterCollection bindingParameters) { } public void ApplyClientBehavior(OperationDescription operationDescription, System.ServiceModel.Dispatcher.ClientOperation clientOperation) { // If this is applied on the client, well, it just doesn’t make sense. // Don’t throw in case this attribute was applied on the contract // instead of the implementation. } public void ApplyDispatchBehavior(OperationDescription operationDescription, System.ServiceModel.Dispatcher.DispatchOperation dispatchOperation) { // Change the IOperationInvoker for this operation. dispatchOperation.Invoker = new STAOperationInvoker(dispatchOperation.Invoker); } public void Validate(OperationDescription operationDescription) { if (operationDescription.SyncMethod == null) { throw new InvalidOperationException("The STAOperationBehaviorAttribute " + "only works for synchronous method invocations."); } } } public class STAOperationInvoker : IOperationInvoker { IOperationInvoker _innerInvoker; public STAOperationInvoker(IOperationInvoker invoker) { _innerInvoker = invoker; } public object[] AllocateInputs() { return _innerInvoker.AllocateInputs(); } public object Invoke(object instance, object[] inputs, out object[] outputs) { // Create a new, STA thread object[] staOutputs = null; object retval = null; Thread thread = new Thread( delegate() { retval = _innerInvoker.Invoke(instance, inputs, out staOutputs); }); thread.SetApartmentState(ApartmentState.STA); thread.Start(); thread.Join(); outputs = staOutputs; return retval; } public IAsyncResult InvokeBegin(object instance, object[] inputs, AsyncCallback callback, object state) { // We don’t handle async… throw new NotImplementedException(); } public object InvokeEnd(object instance, out object[] outputs, IAsyncResult result) { // We don’t handle async… throw new NotImplementedException(); } public bool IsSynchronous { get { return true; } } } The key in this setup is the Invoker and the Invoke method which creates a new thread and then fires the request on this new thread. Because this approach creates a new thread for every request it's not super efficient. There's a bunch of overhead involved in creating the thread and throwing it away after each thread, but it'll work for low volume requests and insure each thread runs in STA mode. If better performance is required it would be useful to create a custom thread manager that can pool a number of STA threads and hand off threads as needed rather than creating new threads on every request. If your Web Service needs are simple and you need only to serve standard SOAP 1.x requests, I would recommend sticking with ASMX services. It's easier to set up and work with and for STA component use it'll be significantly better performing since ASP.NET manages the STA thread pool for you rather than firing new threads for each request. One nice thing about Scotts code is though that it works in any WCF environment including self hosting. It has no dependency on ASP.NET or WebForms for that matter. STA - If you must STA components are a  pain in the ass and thankfully there isn't too much stuff out there anymore that requires it. But when you need it and you need to access STA functionality from .NET at least there are a few options available to make it happen. 
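Whichever of these hosting hacks you end up using, the actual call into a legacy component from .NET looks much the same once the request is running on an STA thread. Here's a minimal late-bound sketch; the ProgID "MyCompany.LegacyServer" and the GetCustomerName() method are hypothetical placeholders, not part of any of the code above:

using System;
using System.Runtime.InteropServices;

public static class LegacyComClient
{
    // Late-bound COM call so no interop assembly is required.
    // The ProgID and method below are hypothetical examples.
    public static string GetCustomerName(int customerId)
    {
        Type comType = Type.GetTypeFromProgID("MyCompany.LegacyServer");
        if (comType == null)
            throw new InvalidOperationException("The COM server is not registered.");

        dynamic server = Activator.CreateInstance(comType);
        try
        {
            // Dispatch-based call into the legacy component
            return server.GetCustomerName(customerId);
        }
        finally
        {
            // Release the runtime callable wrapper promptly so the
            // single-threaded server isn't kept alive longer than needed.
            Marshal.ReleaseComObject(server);
        }
    }
}

Run under one of the STA handlers above, the component is created and called on the same STA thread, which is the whole point of the exercise.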
Each of these solutions is a bit hacky, but they work - I've used all of them in production with good results with FoxPro components. I hope compiling all of these in one place makes STA consumption a little bit easier. I feel your pain :-)

Resources

Download STA Handler Code Examples
Scott Seely's original STA WCF OperationBehavior Article

© Rick Strahl, West Wind Technologies, 2005-2012. Posted in FoxPro, ASP.NET, .NET, COM

    Read the article

  • An Introduction to Meteor

    - by Stephen.Walther
    The goal of this blog post is to give you a brief introduction to Meteor which is a framework for building Single Page Apps. In this blog entry, I provide a walkthrough of building a simple Movie database app. What is special about Meteor? Meteor has two jaw-dropping features: Live HTML – If you make any changes to the HTML, CSS, JavaScript, or data on the server then every client shows the changes automatically without a browser refresh. For example, if you change the background color of a page to yellow then every open browser will show the new yellow background color without a refresh. Or, if you add a new movie to a collection of movies, then every open browser will display the new movie automatically. With Live HTML, users no longer need a refresh button. Changes to an application happen everywhere automatically without any effort. The Meteor framework handles all of the messy details of keeping all of the clients in sync with the server for you. Latency Compensation – When you modify data on the client, these modifications appear as if they happened on the server without any delay. For example, if you create a new movie then the movie appears instantly. However, that is all an illusion. In the background, Meteor updates the database with the new movie. If, for whatever reason, the movie cannot be added to the database then Meteor removes the movie from the client automatically. Latency compensation is extremely important for creating a responsive web application. You want the user to be able to make instant modifications in the browser and the framework to handle the details of updating the database without slowing down the user. Installing Meteor Meteor is licensed under the open-source MIT license and you can start building production apps with the framework right now. Be warned that Meteor is still in the “early preview” stage. It has not reached a 1.0 release. According to the Meteor FAQ, Meteor will reach version 1.0 in “More than a month, less than a year.” Don’t be scared away by that. You should be aware that, unlike most open source projects, Meteor has financial backing. The Meteor project received an $11.2 million round of financing from Andreessen Horowitz. So, it would be a good bet that this project will reach the 1.0 mark. And, if it doesn’t, the framework as it exists right now is still very powerful. Meteor runs on top of Node.js. You write Meteor apps by writing JavaScript which runs both on the client and on the server. You can build Meteor apps on Windows, Mac, or Linux (Although the support for Windows is still officially unofficial). If you want to install Meteor on Windows then download the MSI from the following URL: http://win.meteor.com/ If you want to install Meteor on Mac/Linux then run the following CURL command from your terminal: curl https://install.meteor.com | /bin/sh Meteor will install all of its dependencies automatically including Node.js. However, I recommend that you install Node.js before installing Meteor by installing Node.js from the following address: http://nodejs.org/ If you let Meteor install Node.js then Meteor won’t install NPM which is the standard package manager for Node.js. If you install Node.js and then you install Meteor then you get NPM automatically. Creating a New Meteor App To get a sense of how Meteor works, I am going to walk through the steps required to create a simple Movie database app. Our app will display a list of movies and contain a form for creating a new movie. 
The first thing that we need to do is create our new Meteor app. Open a command prompt/terminal window and execute the following command: Meteor create MovieApp After you execute this command, you should see something like the following: Follow the instructions: execute cd MovieApp to change to your MovieApp directory, and run the meteor command. Executing the meteor command starts Meteor on port 3000. Open up your favorite web browser and navigate to http://localhost:3000 and you should see the default Meteor Hello World page: Open up your favorite development environment to see what the Meteor app looks like. Open the MovieApp folder which we just created. Here’s what the MovieApp looks like in Visual Studio 2012: Notice that our MovieApp contains three files named MovieApp.css, MovieApp.html, and MovieApp.js. In other words, it contains a Cascading Style Sheet file, an HTML file, and a JavaScript file. Just for fun, let’s see how the Live HTML feature works. Open up multiple browsers and point each browser at http://localhost:3000. Now, open the MovieApp.html page and modify the text “Hello World!” to “Hello Cruel World!” and save the change. The text in all of the browsers should update automatically without a browser refresh. Pretty amazing, right? Controlling Where JavaScript Executes You write a Meteor app using JavaScript. Some of the JavaScript executes on the client (the browser) and some of the JavaScript executes on the server and some of the JavaScript executes in both places. For a super simple app, you can use the Meteor.isServer and Meteor.isClient properties to control where your JavaScript code executes. For example, the following JavaScript contains a section of code which executes on the server and a section of code which executes in the browser: if (Meteor.isClient) { console.log("Hello Browser!"); } if (Meteor.isServer) { console.log("Hello Server!"); } console.log("Hello Browser and Server!"); When you run the app, the message “Hello Browser!” is written to the browser JavaScript console. The message “Hello Server!” is written to the command/terminal window where you ran Meteor. Finally, the message “Hello Browser and Server!” is execute on both the browser and server and the message appears in both places. For simple apps, using Meteor.isClient and Meteor.isServer to control where JavaScript executes is fine. For more complex apps, you should create separate folders for your server and client code. Here are the folders which you can use in a Meteor app: · client – This folder contains any JavaScript which executes only on the client. · server – This folder contains any JavaScript which executes only on the server. · common – This folder contains any JavaScript code which executes on both the client and server. · lib – This folder contains any JavaScript files which you want to execute before any other JavaScript files. · public – This folder contains static application assets such as images. For the Movie App, we need the client, server, and common folders. Delete the existing MovieApp.js, MovieApp.html, and MovieApp.css files. We will create new files in the right locations later in this walkthrough. Combining HTML, CSS, and JavaScript Files Meteor combines all of your JavaScript files, and all of your Cascading Style Sheet files, and all of your HTML files automatically. If you want to create one humongous JavaScript file which contains all of the code for your app then that is your business. 
However, if you want to build a more maintainable application, then you should break your JavaScript files into many separate JavaScript files and let Meteor combine them for you. Meteor also combines all of your HTML files into a single file. HTML files are allowed to have the following top-level elements: <head> — All <head> files are combined into a single <head> and served with the initial page load. <body> — All <body> files are combined into a single <body> and served with the initial page load. <template> — All <template> files are compiled into JavaScript templates. Because you are creating a single page app, a Meteor app typically will contain a single HTML file for the <head> and <body> content. However, a Meteor app typically will contain several template files. In other words, all of the interesting stuff happens within the <template> files. Displaying a List of Movies Let me start building the Movie App by displaying a list of movies. In order to display a list of movies, we need to create the following four files: · client\movies.html – Contains the HTML for the <head> and <body> of the page for the Movie app. · client\moviesTemplate.html – Contains the HTML template for displaying the list of movies. · client\movies.js – Contains the JavaScript for supplying data to the moviesTemplate. · server\movies.js – Contains the JavaScript for seeding the database with movies. After you create these files, your folder structure should looks like this: Here’s what the client\movies.html file looks like: <head> <title>My Movie App</title> </head> <body> <h1>Movies</h1> {{> moviesTemplate }} </body>   Notice that it contains <head> and <body> top-level elements. The <body> element includes the moviesTemplate with the syntax {{> moviesTemplate }}. The moviesTemplate is defined in the client/moviesTemplate.html file: <template name="moviesTemplate"> <ul> {{#each movies}} <li> {{title}} </li> {{/each}} </ul> </template> By default, Meteor uses the Handlebars templating library. In the moviesTemplate above, Handlebars is used to loop through each of the movies using {{#each}}…{{/each}} and display the title for each movie using {{title}}. The client\movies.js JavaScript file is used to bind the moviesTemplate to the Movies collection on the client. Here’s what this JavaScript file looks like: // Declare client Movies collection Movies = new Meteor.Collection("movies"); // Bind moviesTemplate to Movies collection Template.moviesTemplate.movies = function () { return Movies.find(); }; The Movies collection is a client-side proxy for the server-side Movies database collection. Whenever you want to interact with the collection of Movies stored in the database, you use the Movies collection instead of communicating back to the server. The moviesTemplate is bound to the Movies collection by assigning a function to the Template.moviesTemplate.movies property. The function simply returns all of the movies from the Movies collection. The final file which we need is the server-side server\movies.js file: // Declare server Movies collection Movies = new Meteor.Collection("movies"); // Seed the movie database with a few movies Meteor.startup(function () { if (Movies.find().count() == 0) { Movies.insert({ title: "Star Wars", director: "Lucas" }); Movies.insert({ title: "Memento", director: "Nolan" }); Movies.insert({ title: "King Kong", director: "Jackson" }); } }); The server\movies.js file does two things. First, it declares the server-side Meteor Movies collection. 
When you declare a server-side Meteor collection, a collection is created in the MongoDB database associated with your Meteor app automatically (Meteor uses MongoDB as its database automatically). Second, the server\movies.js file seeds the Movies collection (MongoDB collection) with three movies. Seeding the database gives us some movies to look at when we open the Movies app in a browser. Creating New Movies Let me modify the Movies Database App so that we can add new movies to the database of movies. First, I need to create a new template file – named client\movieForm.html – which contains an HTML form for creating a new movie: <template name="movieForm"> <fieldset> <legend>Add New Movie</legend> <form> <div> <label> Title: <input id="title" /> </label> </div> <div> <label> Director: <input id="director" /> </label> </div> <div> <input type="submit" value="Add Movie" /> </div> </form> </fieldset> </template> In order for the new form to show up, I need to modify the client\movies.html file to include the movieForm.html template. Notice that I added {{> movieForm }} to the client\movies.html file: <head> <title>My Movie App</title> </head> <body> <h1>Movies</h1> {{> moviesTemplate }} {{> movieForm }} </body> After I make these modifications, our Movie app will display the form: The next step is to handle the submit event for the movie form. Below, I’ve modified the client\movies.js file so that it contains a handler for the submit event raised when you submit the form contained in the movieForm.html template: // Declare client Movies collection Movies = new Meteor.Collection("movies"); // Bind moviesTemplate to Movies collection Template.moviesTemplate.movies = function () { return Movies.find(); }; // Handle movieForm events Template.movieForm.events = { 'submit': function (e, tmpl) { // Don't postback e.preventDefault(); // create the new movie var newMovie = { title: tmpl.find("#title").value, director: tmpl.find("#director").value }; // add the movie to the db Movies.insert(newMovie); } }; The Template.movieForm.events property contains an event map which maps event names to handlers. In this case, I am mapping the form submit event to an anonymous function which handles the event. In the event handler, I am first preventing a postback by calling e.preventDefault(). This is a single page app, no postbacks are allowed! Next, I am grabbing the new movie from the HTML form. I’m taking advantage of the template find() method to retrieve the form field values. Finally, I am calling Movies.insert() to insert the new movie into the Movies collection. Here, I am explicitly inserting the new movie into the client-side Movies collection. Meteor inserts the new movie into the server-side Movies collection behind the scenes. When Meteor inserts the movie into the server-side collection, the new movie is added to the MongoDB database associated with the Movies app automatically. If server-side insertion fails for whatever reasons – for example, your internet connection is lost – then Meteor will remove the movie from the client-side Movies collection automatically. In other words, Meteor takes care of keeping the client Movies collection and the server Movies collection in sync. If you open multiple browsers, and add movies, then you should notice that all of the movies appear on all of the open browser automatically. You don’t need to refresh individual browsers to update the client-side Movies collection. Meteor keeps everything synchronized between the browsers and server for you. 
Removing the Insecure Module To make it easier to develop and debug a new Meteor app, by default, you can modify the database directly from the client. For example, you can delete all of the data in the database by opening up your browser console window and executing multiple Movies.remove() commands. Obviously, enabling anyone to modify your database from the browser is not a good idea in a production application. Before you make a Meteor app public, you should first run the meteor remove insecure command from a command/terminal window: Running meteor remove insecure removes the insecure package from the Movie app. Unfortunately, it also breaks our Movie app. We’ll get an “Access denied” error in our browser console whenever we try to insert a new movie. No worries. I’ll fix this issue in the next section. Creating Meteor Methods By taking advantage of Meteor Methods, you can create methods which can be invoked on both the client and the server. By taking advantage of Meteor Methods you can: 1. Perform form validation on both the client and the server. For example, even if an evil hacker bypasses your client code, you can still prevent the hacker from submitting an invalid value for a form field by enforcing validation on the server. 2. Simulate database operations on the client but actually perform the operations on the server. Let me show you how we can modify our Movie app so it uses Meteor Methods to insert a new movie. First, we need to create a new file named common\methods.js which contains the definition of our Meteor Methods: Meteor.methods({ addMovie: function (newMovie) { // Perform form validation if (newMovie.title == "") { throw new Meteor.Error(413, "Missing title!"); } if (newMovie.director == "") { throw new Meteor.Error(413, "Missing director!"); } // Insert movie (simulate on client, do it on server) return Movies.insert(newMovie); } }); The addMovie() method is called from both the client and the server. This method does two things. First, it performs some basic validation. If you don’t enter a title or you don’t enter a director then an error is thrown. Second, the addMovie() method inserts the new movie into the Movies collection. When called on the client, inserting the new movie into the Movies collection just updates the collection. When called on the server, inserting the new movie into the Movies collection causes the database (MongoDB) to be updated with the new movie. You must add the common\methods.js file to the common folder so it will get executed on both the client and the server. Our folder structure now looks like this: We actually call the addMovie() method within our client code in the client\movies.js file. Here’s what the updated file looks like: // Declare client Movies collection Movies = new Meteor.Collection("movies"); // Bind moviesTemplate to Movies collection Template.moviesTemplate.movies = function () { return Movies.find(); }; // Handle movieForm events Template.movieForm.events = { 'submit': function (e, tmpl) { // Don't postback e.preventDefault(); // create the new movie var newMovie = { title: tmpl.find("#title").value, director: tmpl.find("#director").value }; // add the movie to the db Meteor.call( "addMovie", newMovie, function (err, result) { if (err) { alert("Could not add movie " + err.reason); } } ); } }; The addMovie() method is called – on both the client and the server – by calling the Meteor.call() method. This method accepts the following parameters: · The string name of the method to call. 
· The data to pass to the method (You can actually pass multiple params for the data if you like). · A callback function to invoke after the method completes. In the JavaScript code above, the addMovie() method is called with the new movie retrieved from the HTML form. The callback checks for an error. If there is an error then the error reason is displayed in an alert (please don’t use alerts for validation errors in a production app because they are ugly!). Summary The goal of this blog post was to provide you with a brief walk through of a simple Meteor app. I showed you how you can create a simple Movie Database app which enables you to display a list of movies and create new movies. I also explained why it is important to remove the Meteor insecure package from a production app. I showed you how to use Meteor Methods to insert data into the database instead of doing it directly from the client. I’m very impressed with the Meteor framework. The support for Live HTML and Latency Compensation are required features for many real world Single Page Apps but implementing these features by hand is not easy. Meteor makes it easy.

    Read the article

  • Toorcon14

    - by danx
    Toorcon 2012 Information Security Conference, San Diego, CA, http://www.toorcon.org/ Dan Anderson, October 2012

It's almost Halloween, and we all know what that means—yes, of course, it's time for another Toorcon Conference! Toorcon is an annual conference for people interested in computer security. This includes the whole range of hackers, computer hobbyists, professionals, security consultants, press, law enforcement, prosecutors, FBI, etc. We're at Toorcon 14—see earlier blogs for some of the previous Toorcons I've attended (back to 2003). This year's "con" was held at the Westin on Broadway in downtown San Diego, California. The following are not necessarily my views—I'm just the messenger—although I could have misquoted or misparaphrased the speakers. Also, I only reviewed some of the talks below, which I attended and which interested me.

MalAndroid—the Crux of Android Infections, Aditya K. Sood
Programming Weird Machines with ELF Metadata, Rebecca "bx" Shapiro
Privacy at the Handset: New FCC Rules?, Valkyrie
Hacking Measured Boot and UEFI, Dan Griffin
You Can't Buy Security: Building the Open Source InfoSec Program, Boris Sverdlik
What Journalists Want: The Investigative Reporters' Perspective on Hacking, Dave Maas & Jason Leopold
Accessibility and Security, Anna Shubina
Stop Patching, for Stronger PCI Compliance, Adam Brand
McAfee Secure & Trustmarks — a Hacker's Best Friend, Jay James & Shane MacDougall

MalAndroid—the Crux of Android Infections
Aditya K. Sood, IOActive, Michigan State PhD candidate

Aditya talked about Android smartphone malware. There's a lot of old Android software out there—over 50% Gingerbread (2.3.x)—and most of it has unpatched vulnerabilities. Of 9 Android vulnerabilities, 8 have known exploits (such as the old Gingerbread Global Object Table exploit). Android protection includes sandboxing, a security scanner, app permissions, and a screened Android app market. The Android permission checker has fine-grained resource control and policy enforcement. Android static analysis also includes a static analysis app checker (bouncer) and a vulnerability checker.

What security problems does Android have?
User-centric security, which depends on the user to grant permissions and make smart decisions. But users don't care or think about malware (they're not aware, not paranoid). All they want is functionality, extensibility, mobility.
Android had no "proper" encryption before Android 3.0.
No built-in protection against social engineering and web tricks.
Alternative Android app markets are unsafe. Simply visiting some markets can infect Android.

Aditya classified Android malware types as:
Type A—Apps. These interact with the Android app framework. For example, a fake Netflix app, or Android Gold Dream (a game), which uploads user files in a stealthy manner to a remote location.
Type K—Kernel. Exploits underlying Linux libraries or the kernel.
Type H—Hybrid. These use multiple layers (app framework, libraries, kernel). These are most commonly used by Android botnets, which are popular with Chinese botnet authors.

What are the threats from Android malware? These include leaking info (contacts), banking fraud, corporate network attacks, malware advertising, and malware "hacktivism" (the promotion of social causes, for example promoting specific leaders of the Tunisian or Iranian revolutions).

Android malware is frequently "masqueraded", that is, repackaged inside a legit app along with the malware. To avoid detection, the hidden malware is not unwrapped until runtime.
The malware payload can be hidden in, for example, PNG files. Less common are Android bootkits—there's not many around. What they do is hijack the Android init framework—alteering system programs and daemons, then deletes itself. For example, the DKF Bootkit (China). Android App Problems: no code signing! all self-signed native code execution permission sandbox — all or none alternate market places no robust Android malware detection at network level delayed patch process Programming Weird Machines with ELF Metadata Rebecca "bx" Shapiro, Dartmouth College, NH https://github.com/bx/elf-bf-tools @bxsays on twitter Definitions. "ELF" is an executable file format used in linking and loading executables (on UNIX/Linux-class machines). "Weird machine" uses undocumented computation sources (I think of them as unintended virtual machines). Some examples of "weird machines" are those that: return to weird location, does SQL injection, corrupts the heap. Bx then talked about using ELF metadata as (an uintended) "weird machine". Some ELF background: A compiler takes source code and generates a ELF object file (hello.o). A static linker makes an ELF executable from the object file. A runtime linker and loader takes ELF executable and loads and relocates it in memory. The ELF file has symbols to relocate functions and variables. ELF has two relocation tables—one at link time and another one at loading time: .rela.dyn (link time) and .dynsym (dynamic table). GOT: Global Offset Table of addresses for dynamically-linked functions. PLT: Procedure Linkage Tables—works with GOT. The memory layout of a process (not the ELF file) is, in order: program (+ heap), dynamic libraries, libc, ld.so, stack (which includes the dynamic table loaded into memory) For ELF, the "weird machine" is found and exploited in the loader. ELF can be crafted for executing viruses, by tricking runtime into executing interpreted "code" in the ELF symbol table. One can inject parasitic "code" without modifying the actual ELF code portions. Think of the ELF symbol table as an "assembly language" interpreter. It has these elements: instructions: Add, move, jump if not 0 (jnz) Think of symbol table entries as "registers" symbol table value is "contents" immediate values are constants direct values are addresses (e.g., 0xdeadbeef) move instruction: is a relocation table entry add instruction: relocation table "addend" entry jnz instruction: takes multiple relocation table entries The ELF weird machine exploits the loader by relocating relocation table entries. The loader will go on forever until told to stop. It stores state on stack at "end" and uses IFUNC table entries (containing function pointer address). The ELF weird machine, called "Brainfu*k" (BF) has: 8 instructions: pointer inc, dec, inc indirect, dec indirect, jump forward, jump backward, print. Three registers - 3 registers Bx showed example BF source code that implemented a Turing machine printing "hello, world". More interesting was the next demo, where bx modified ping. Ping runs suid as root, but quickly drops privilege. BF modified the loader to disable the library function call dropping privilege, so it remained as root. Then BF modified the ping -t argument to execute the -t filename as root. It's best to show what this modified ping does with an example: $ whoami bx $ ping localhost -t backdoor.sh # executes backdoor $ whoami root $ The modified code increased from 285948 bytes to 290209 bytes. 
A BF tool compiles "executable" by modifying the symbol table in an existing ELF executable. The tool modifies .dynsym and .rela.dyn table, but not code or data. Privacy at the Handset: New FCC Rules? "Valkyrie" (Christie Dudley, Santa Clara Law JD candidate) Valkyrie talked about mobile handset privacy. Some background: Senator Franken (also a comedian) became alarmed about CarrierIQ, where the carriers track their customers. Franken asked the FCC to find out what obligations carriers think they have to protect privacy. The carriers' response was that they are doing just fine with self-regulation—no worries! Carriers need to collect data, such as missed calls, to maintain network quality. But carriers also sell data for marketing. Verizon sells customer data and enables this with a narrow privacy policy (only 1 month to opt out, with difficulties). The data sold is not individually identifiable and is aggregated. But Verizon recommends, as an aggregation workaround to "recollate" data to other databases to identify customers indirectly. The FCC has regulated telephone privacy since 1934 and mobile network privacy since 2007. Also, the carriers say mobile phone privacy is a FTC responsibility (not FCC). FTC is trying to improve mobile app privacy, but FTC has no authority over carrier / customer relationships. As a side note, Apple iPhones are unique as carriers have extra control over iPhones they don't have with other smartphones. As a result iPhones may be more regulated. Who are the consumer advocates? Everyone knows EFF, but EPIC (Electrnic Privacy Info Center), although more obsecure, is more relevant. What to do? Carriers must be accountable. Opt-in and opt-out at any time. Carriers need incentive to grant users control for those who want it, by holding them liable and responsible for breeches on their clock. Location information should be added current CPNI privacy protection, and require "Pen/trap" judicial order to obtain (and would still be a lower standard than 4th Amendment). Politics are on a pro-privacy swing now, with many senators and the Whitehouse. There will probably be new regulation soon, and enforcement will be a problem, but consumers will still have some benefit. Hacking Measured Boot and UEFI Dan Griffin, JWSecure, Inc., Seattle, @JWSdan Dan talked about hacking measured UEFI boot. First some terms: UEFI is a boot technology that is replacing BIOS (has whitelisting and blacklisting). UEFI protects devices against rootkits. TPM - hardware security device to store hashs and hardware-protected keys "secure boot" can control at firmware level what boot images can boot "measured boot" OS feature that tracks hashes (from BIOS, boot loader, krnel, early drivers). "remote attestation" allows remote validation and control based on policy on a remote attestation server. Microsoft pushing TPM (Windows 8 required), but Google is not. Intel TianoCore is the only open source for UEFI. Dan has Measured Boot Tool at http://mbt.codeplex.com/ with a demo where you can also view TPM data. TPM support already on enterprise-class machines. UEFI Weaknesses. 
UEFI toolkits are evolving rapidly, but UEFI has weaknesses: it assumes the user is an ally; it trusts the TPM implicitly, and trusts that it is attached to the computer; the hibernate file is unprotected (disk encryption protects against this); protection is migrating from hardware to firmware; there are delays in patching and whitelist updates; and it is unclear whether UEFI will really be adopted by the mainstream (smartphone hardware support, bank support, apathetic consumer support).

You Can't Buy Security: Building the Open Source InfoSec Program
Boris Sverdlik, ISDPodcast.com co-host

Boris talked about problems typical of current security audits. "IT Security" is an oxymoron—IT exists to enable business, uptime, utilization, and reporting, but doesn't care about security—IT has a conflict of interest. There's no magic bullet ("blinky box") and no one-size-fits-all solution (e.g., Intrusion Detection Systems). Regulations don't make you secure. The cloud is not secure (because of shared data and admin access). Defense and pen testing are not sexy. Auditors are not the solution (security is not a checklist)—what's needed is experience and adaptability, which means soft skills.

Step 1: The first thing is to Google and learn the company end-to-end before you start. Get to know the management team (not just the IT team) and meet as many people as you can. Don't use arbitrary values such as CISSP scores. Quantitative risk assessment is a myth (e.g., SLE = AV × EF). Learn the different business units and the legal/regulatory obligations, learn the business and where the money is made, verify the company is protected from script kiddies (easy), learn what the sensitive information is (IP, internal use only), and start with low-hanging fruit (customer service reps and social engineering).

Step 2: Policies. Keep policies short and relevant. Generic SANS "security" boilerplate policies don't make sense and are not followed. Focus on acceptable use, data usage, communications, and physical security.

Step 3: Implementation—keep it simple, stupid. Open source, although useful, is not free (there is an implementation cost). Use access controls with authentication and authorization for local and remote access: MS Windows has this built in; otherwise use OpenLDAP, OpenIAM, etc.

Application security: everyone tries to reinvent the wheel—use existing static analysis tools. Review high-risk apps and major revisions. Don't run apps of different risk levels on the same system. Assume the host/client is compromised and use app-level security controls.

Network security: VLAN != segregated, because there are too many workarounds. Use explicit firewall rules, use active and passive network monitoring (snort is free), disallow end-user access to the production environment, and have a proxy instead of direct Internet access. Also, SSL certificates are not good two-factor auth, and SSL does not mean "safe."

Operational controls: have change, patch, asset, and vulnerability management (OSSI is free). For change management, always review code before pushing to production. For logging, have centralized security logging for business-critical systems, separate security logging from administrative/IT logging, and lock down the logs (they contain everything). Monitor with OSSIM (open source). Use intrusion detection, but not just to tick a checkbox: build rules from a whitelist perspective (snort). OSSEC has 95% of what you need. Vulnerability management is a QA function when done right: OpenVAS and Seccubus are free.

Security awareness: the reality is that users will always click everything. Build real awareness, not a compliance-driven checkbox, and have it integrated into the culture.
Pen test by crowdsourcing—test with logging. COSSP, http://www.cossp.org/ — the Comprehensive Open Source Security Project.

What Journalists Want: The Investigative Reporters' Perspective on Hacking
Dave Maas, San Diego CityBeat
Jason Leopold, Truthout.org

The difference between hackers and investigative journalists: for hackers, the motivation varies, but the methods are the same, with technological specialties. For investigative journalists, it's about one thing—The Story—and they need broad info-gathering skills. J-School in 60 seconds: the generic formula is a person or issue of public interest, plus new information or a new angle. The generic criteria are proximity, prominence, timeliness, human interest, oddity, or consequence. Media awareness of hackers and trends: journalists are becoming extremely aware of hackers through congressional debates (privacy, data breaches), the demand for data-mining journalists, journalists' use of coding and web development, and journalists busted for hacking (Murdoch).

Info gathering by investigative journalists includes public records laws. The federal Freedom of Information Act (FOIA) is good, but slow; the California Public Records Act is a lot stronger. FOIA takes forever because of foot-dragging—it helps to be specific—and you often need to sue (especially the FBI). CPRA is faster, and requests can be vague. There are also dumps and leaks (a la Wikileaks). Journalists want leads, protection for themselves and their sources, and tools adapted for news gathering (Google hacking). Anonymity is important to whistleblowers. They want no digital footprint left behind (e.g., email, web logs). They don't trust encryption; they want to feel safe and secure. Whistleblower laws are very weak—there's no upside for whistleblowers—they have to be very passionate to do it.

Accessibility and Security or: How I Learned to Stop Worrying and Love the Halting Problem
Anna Shubina, Dartmouth College

Anna talked about how accessibility and security are related. For our purposes, accessibility of digital content (not real-world accessibility) mostly refers to blind users and screen readers. Accessibility is about parsing documents, as are many security issues. "Rich" executable content causes accessibility to fail, and often causes security to fail. For example, MS Word has an executable format—it's not a document exchange format—and it is more dangerous than PDF or HTML. Accessibility is often the first, and maybe the only, sanity check on parsing; there is no choice about it, because someone may want to read what you write. Google, for example, is very particular about which web browser you use and is bad at supporting other browsers; it uses JavaScript instead of links, often requiring a mouseover to display content. PDF is a security nightmare: an executable format with embedded Flash, JavaScript, etc., and 15 million lines of code. Google Chrome doesn't handle PDF correctly, causing several security bugs. PDF has an accessibility checker and PDF tagging to help with accessibility, but no PDF checker checks for incorrect tags or untagged content, or validates lists or tables—and none checks executable content at all.

The "Halting Problem" asks: can one decide whether a program will ever stop? The answer, in general, is no (Rice's theorem). The same holds true for accessibility checkers. Language-theoretic security says that complicated data formats are hard to parse and cannot be fully checked, because of the Halting Problem.
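As a side note on why "checking and verifying tools" can never be complete, here is the classic halting-problem contradiction as a short Python sketch; the halts() function is a hypothetical assumption, not something that exists.

# Hypothetical: assume a perfect analyzer halts(prog, arg) that always
# returns True if prog(arg) would eventually stop, and False otherwise.

def paradox(prog):
    if halts(prog, prog):      # if prog(prog) would halt...
        while True:            # ...then loop forever
            pass
    # ...otherwise halt immediately.

# Now ask: does paradox(paradox) halt?
#   - If halts(paradox, paradox) is True, then paradox loops forever.
#   - If it is False, then paradox halts.
# Either answer contradicts itself, so no such halts() can exist. The same
# diagonal argument limits any checker of executable document content.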
W3C Web Accessibility Guidelines: "Perceivable, Operable, Understandable, Robust." Not much help there, except for "Robust," but here are some gems (paraphrasing): all information should be parsable; if it is not parsable, it cannot be converted to alternate formats; maximize compatibility in new document formats. Executable web pages are bad for security and accessibility. The claim is that they make for a better web experience—but is it really necessary to stuff web pages with JavaScript for a better experience? A good example is The Drudge Report: hand-written HTML with no JavaScript, yet it drives a lot of web traffic thanks to good content. A bad example is Google News: hidden scrollbars and second-guessing of user input. Solutions: recognize that accessibility and security problems come from the same source, expose the "better user experience" myth, keep your corner of the Internet parsable, and remember the Halting Problem—recognize false solutions (checking and verifying tools).

Stop Patching, for Stronger PCI Compliance
Adam Brand, Protiviti, @adamrbrand, http://www.picfun.com/

Adam talked about PCI compliance for retail sales. Take an example: for PCI compliance, 50% of Brian's time (an IT guy), 960 hours a year, was spent patching POSs in 850 restaurants. Applying some patches often makes no sense (like fixing a browser vulnerability on a server). "Scanner worship" is the overuse of vulnerability scanners—it gives a warm and fuzzy feeling and it's simple (red or green results—fix the reds)—but scanners give a false sense of security. In reality, breaches from missing patches are uncommon; the more common problems are default passwords, cleartext authentication, and misconfiguration (open firewall ports).

Patching myths: Myth 1: install within 30 days of patch release (but PCI §6.1 allows a "risk-based approach" instead). Myth 2: the vendor decides what's critical (also PCI §6.1; but §6.2 requires user ranking of vulnerabilities instead). Myth 3: scan and rescan until it passes (but PCI §11.2.1b says this applies only to high-risk vulnerabilities). Adam says good recommendations come from NIST 800-40: use sane patching and focus on what's really important. From NIST 800-40: Proactive: use a proactive vulnerability management process—change control, configuration management, file integrity monitoring. Monitor: start with the NVD and other vulnerability alerts, not scanner results. Evaluate: public-facing system? workstation? internal server? (risk ranking). Decide: on an action and timeline. Test: pre-test patches (stability, functionality, rollback) for change control. Install: notify, change control, tickets.

McAfee Secure & Trustmarks — a Hacker's Best Friend
Jay James, Shane MacDougall, Tactical Intelligence Inc., Canada

"McAfee Secure Trustmark" is a website seal marketed by McAfee. A website gets this badge if it passes McAfee's remote scanning. The problem is that the removal of a trustmark acts as a flag that you're vulnerable, and it is easy to spot a status change by viewing McAfee's list on its website or on Google. "Secure TrustGuard" is similar to McAfee. Jay and Shane wrote Perl scripts to gather sites from McAfee and from search engines. If a site's certification image changes to a 1x1-pixel image, the site is no longer certified. Their scripts take deltas of the scans to see what changed daily. The bottom line is that a change in trustmark status is a flag telling hackers to attack your site. The entire idea of seals is silly—you're raising a flag that announces whether you're vulnerable.
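Jay and Shane's monitoring scripts were written in Perl; as an illustration only, here is a small Python sketch of the same idea. The seal URL and state file are hypothetical placeholders, and the 1x1-pixel heuristic is simply the one described in the talk.

# Illustrative sketch of the trustmark-monitoring idea from the talk.
# SEAL_URLS and STATE_FILE are hypothetical placeholders, not real endpoints.
import io
import json
import os

import requests            # third-party: pip install requests
from PIL import Image      # third-party: pip install pillow

SEAL_URLS = {
    "example-shop.com": "https://example-shop.com/images/security-seal.png",
}
STATE_FILE = "seal_state.json"

def seal_status(url):
    """Return 'certified' unless the seal has shrunk to a 1x1 placeholder."""
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    width, height = Image.open(io.BytesIO(resp.content)).size
    return "not-certified" if (width, height) == (1, 1) else "certified"

def main():
    previous = {}
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            previous = json.load(f)

    current = {site: seal_status(url) for site, url in SEAL_URLS.items()}

    # Report the daily delta, which is exactly the signal the talk warns about.
    for site, status in current.items():
        if previous.get(site) != status:
            print(f"{site}: {previous.get(site, 'unknown')} -> {status}")

    with open(STATE_FILE, "w") as f:
        json.dump(current, f)

if __name__ == "__main__":
    main()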

    Read the article

  • Scrum in 5 Minutes

    - by Stephen.Walther
    The goal of this blog entry is to explain the basic concepts of Scrum in less than five minutes. You learn how Scrum can help a team of developers to successfully complete a complex software project. Product Backlog and the Product Owner Imagine that you are part of a team which needs to create a new website – for example, an e-commerce website. You have an overwhelming amount of work to do. You need to build (or possibly buy) a shopping cart, install an SSL certificate, create a product catalog, create a Facebook page, and at least a hundred other things that you have not thought of yet. According to Scrum, the first thing you should do is create a list. Place the highest priority items at the top of the list and the lower priority items lower in the list. For example, creating the shopping cart and buying the domain name might be high priority items and creating a Facebook page might be a lower priority item. In Scrum, this list is called the Product Backlog. How do you prioritize the items in the Product Backlog? Different stakeholders in the project might have different priorities. Gary, your division VP, thinks that it is crucial that the e-commerce site has a mobile app. Sally, your direct manager, thinks taking advantage of new HTML5 features is much more important. Multiple people are pulling you in different directions. According to Scrum, it is important that you always designate one person, and only one person, as the Product Owner. The Product Owner is the person who decides what items should be added to the Product Backlog and the priority of the items in the Product Backlog. The Product Owner could be the customer who is paying the bills, the project manager who is responsible for delivering the project, or a customer representative. The critical point is that the Product Owner must always be a single person and that single person has absolute authority over the Product Backlog. Sprints and the Sprint Backlog So now the developer team has a prioritized list of items and they can start work. The team starts implementing the first item in the Backlog — the shopping cart — and the team is making good progress. Unfortunately, however, half-way through the work of implementing the shopping cart, the Product Owner changes his mind. The Product Owner decides that it is much more important to create the product catalog before the shopping cart. With some frustration, the team switches their developmental efforts to focus on implementing the product catalog. However, part way through completing this work, once again the Product Owner changes his mind about the highest priority item. Getting work done when priorities are constantly shifting is frustrating for the developer team and it results in lower productivity. At the same time, however, the Product Owner needs to have absolute authority over the priority of the items which need to get done. Scrum solves this conflict with the concept of Sprints. In Scrum, a developer team works in Sprints. At the beginning of a Sprint the developers and the Product Owner agree on the items from the backlog which they will complete during the Sprint. This subset of items from the Product Backlog becomes the Sprint Backlog. During the Sprint, the Product Owner is not allowed to change the items in the Sprint Backlog. In other words, the Product Owner cannot shift priorities on the developer team during the Sprint. Different teams use Sprints of different lengths such as one month Sprints, two-week Sprints, and one week Sprints. 
For high-stress, time-critical projects, teams typically choose shorter sprints, such as one-week sprints. For more mature projects, longer one-month sprints might be more appropriate. A team can pick whatever Sprint length makes sense for them, just as long as the team is consistent. You should pick a Sprint length and stick with it.

Daily Scrum

During a Sprint, the developer team needs to have meetings to coordinate their work on completing the items in the Sprint Backlog. For example, the team needs to discuss who is working on what and whether any blocking issues have been discovered. Developers hate meetings (well, sane developers hate meetings). Meetings take developers away from their work of actually implementing stuff as opposed to talking about implementing stuff. However, a developer team which never has meetings and never coordinates their work also has problems. For example, Fred might get stuck on a programming problem for days and never reach out for help even though Tom (who sits in the cubicle next to him) has already solved the very same problem. Or, both Ted and Fred might have started working on the same item from the Sprint Backlog at the same time. In Scrum, these conflicting needs – limiting meetings but enabling team coordination – are resolved with the idea of the Daily Scrum. The Daily Scrum is a meeting for coordinating the work of the developer team which happens once a day. To keep the meeting short, each developer answers only the following three questions:

1. What have you done since yesterday?
2. What do you plan to do today?
3. Any impediments in your way?

During the Daily Scrum, developers are not allowed to talk about issues with their cat, do demos of their latest work, or tell heroic stories of programming problems overcome. The meeting must be kept short — typically about 15 minutes. Issues which come up during the Daily Scrum should be discussed in separate meetings which do not involve the whole developer team.

Stories and Tasks

Items in the Product or Sprint Backlog – such as building a shopping cart or creating a Facebook page – are often referred to as User Stories or Stories. The Stories are created by the Product Owner and should represent some business need. Unlike the Product Owner, the developer team needs to think about how a Story should be implemented. At the beginning of a Sprint, the developer team takes the Stories from the Sprint Backlog and breaks the stories into tasks. For example, the developer team might take the Create a Shopping Cart story and break it into the following tasks:

· Enable users to add and remove items from the shopping cart
· Persist the shopping cart to the database between visits
· Redirect the user to the checkout page when the Checkout button is clicked

During the Daily Scrum, members of the developer team volunteer to complete the tasks required to implement the next Story in the Sprint Backlog. When a developer talks about what he did yesterday or plans to do tomorrow, the developer should be referring to a task. Stories are owned by the Product Owner and a story is all about business value. In contrast, the tasks are owned by the developer team and a task is all about implementation details. A story might take several days or weeks to complete. A task is something which a developer can complete in less than a day. Some teams get lazy about breaking stories into tasks.
Neglecting to break stories into tasks can lead to “Never Ending Stories” If you don’t break a story into tasks, then you can’t know how much of a story has actually been completed because you don’t have a clear idea about the implementation steps required to complete the story. Scrumboard During the Daily Scrum, the developer team uses a Scrumboard to coordinate their work. A Scrumboard contains a list of the stories for the current Sprint, the tasks associated with each Story, and the state of each task. The developer team uses the Scrumboard so everyone on the team can see, at a glance, what everyone is working on. As a developer works on a task, the task moves from state to state and the state of the task is updated on the Scrumboard. Common task states are ToDo, In Progress, and Done. Some teams include additional task states such as Needs Review or Needs Testing. Some teams use a physical Scrumboard. In that case, you use index cards to represent the stories and the tasks and you tack the index cards onto a physical board. Using a physical Scrumboard has several disadvantages. A physical Scrumboard does not work well with a distributed team – for example, it is hard to share the same physical Scrumboard between Boston and Seattle. Also, generating reports from a physical Scrumboard is more difficult than generating reports from an online Scrumboard. Estimating Stories and Tasks Stakeholders in a project, the people investing in a project, need to have an idea of how a project is progressing and when the project will be completed. For example, if you are investing in creating an e-commerce site, you need to know when the site can be launched. It is not enough to just say that “the project will be done when it is done” because the stakeholders almost certainly have a limited budget to devote to the project. The people investing in the project cannot determine the business value of the project unless they can have an estimate of how long it will take to complete the project. Developers hate to give estimates. The reason that developers hate to give estimates is that the estimates are almost always completely made up. For example, you really don’t know how long it takes to build a shopping cart until you finish building a shopping cart, and at that point, the estimate is no longer useful. The problem is that writing code is much more like Finding a Cure for Cancer than Building a Brick Wall. Building a brick wall is very straightforward. After you learn how to add one brick to a wall, you understand everything that is involved in adding a brick to a wall. There is no additional research required and no surprises. If, on the other hand, I assembled a team of scientists and asked them to find a cure for cancer, and estimate exactly how long it will take, they would have no idea. The problem is that there are too many unknowns. I don’t know how to cure cancer, I need to do a lot of research here, so I cannot even begin to estimate how long it will take. So developers hate to provide estimates, but the Product Owner and other product stakeholders, have a legitimate need for estimates. Scrum resolves this conflict by using the idea of Story Points. Different teams use different units to represent Story Points. For example, some teams use shirt sizes such as Small, Medium, Large, and X-Large. Some teams prefer to use Coffee Cup sizes such as Tall, Short, and Grande. Finally, some teams like to use numbers from the Fibonacci series. 
These alternative units are converted into a Story Point value. Regardless of the type of unit which you use to represent Story Points, the goal is the same. Instead of attempting to estimate a Story in hours (which is doomed to failure), you use a much less fine-grained measure of work. A developer team is much more likely to be able to estimate that a Story is Small or X-Large than the exact number of hours required to complete the story. So you can think of Story Points as a compromise between the needs of the Product Owner and the developer team. When a Sprint starts, the developer team devotes more time to thinking about the Stories in a Sprint and the developer team breaks the Stories into Tasks. In Scrum, you estimate the work required to complete a Story by using Story Points and you estimate the work required to complete a task by using hours. The difference between Stories and Tasks is that you don’t create a task until you are just about ready to start working on a task. A task is something that you should be able to create within a day, so you have a much better chance of providing an accurate estimate of the work required to complete a task than a story. Burndown Charts In Scrum, you use Burndown charts to represent the remaining work on a project. You use Release Burndown charts to represent the overall remaining work for a project and you use Sprint Burndown charts to represent the overall remaining work for a particular Sprint. You create a Release Burndown chart by calculating the remaining number of uncompleted Story Points for the entire Product Backlog every day. The vertical axis represents Story Points and the horizontal axis represents time. A Sprint Burndown chart is similar to a Release Burndown chart, but it focuses on the remaining work for a particular Sprint. There are two different types of Sprint Burndown charts. You can either represent the remaining work in a Sprint with Story Points or with task hours (the following image, taken from Wikipedia, uses hours). When each Product Backlog Story is completed, the Release Burndown chart slopes down. When each Story or task is completed, the Sprint Burndown chart slopes down. Burndown charts typically do not always slope down over time. As new work is added to the Product Backlog, the Release Burndown chart slopes up. If new tasks are discovered during a Sprint, the Sprint Burndown chart will also slope up. The purpose of a Burndown chart is to give you a way to track team progress over time. If, halfway through a Sprint, the Sprint Burndown chart is still climbing a hill then you know that you are in trouble. Team Velocity Stakeholders in a project always want more work done faster. For example, the Product Owner for the e-commerce site wants the website to launch before tomorrow. Developers tend to be overly optimistic. Rarely do developers acknowledge the physical limitations of reality. So Project stakeholders and the developer team often collude to delude themselves about how much work can be done and how quickly. Too many software projects begin in a state of optimism and end in frustration as deadlines zoom by. In Scrum, this problem is overcome by calculating a number called the Team Velocity. The Team Velocity is a measure of the average number of Story Points which a team has completed in previous Sprints. 
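To make the arithmetic concrete, here is a small Python sketch (not part of Scrum itself, and the sprint numbers are made up) that computes a Team Velocity from past Sprints along with the daily "remaining points" series a Sprint Burndown chart would plot:

# Illustrative only: toy numbers, not from any real project.

completed_points_per_sprint = [18, 22, 17, 23]   # Story Points finished in past Sprints

# Team Velocity: the average number of Story Points completed per Sprint.
velocity = sum(completed_points_per_sprint) / len(completed_points_per_sprint)
print(f"Team velocity: {velocity:.1f} points per sprint")   # 20.0

# Sprint Burndown data: committed points minus points completed so far,
# sampled once per day. This is the series the Burndown chart plots.
committed_points = 20
points_done_by_day = [0, 3, 3, 8, 12, 15, 15, 18, 20, 20]   # cumulative, one entry per day
remaining = [committed_points - done for done in points_done_by_day]
print("Remaining points by day:", remaining)

# A quick health check half-way through the Sprint: if more than half of the
# committed points remain, the chart is "still climbing the hill".
midpoint = len(remaining) // 2
if remaining[midpoint] > committed_points / 2:
    print("Warning: behind schedule at the Sprint midpoint")

If new tasks are discovered mid-Sprint, the remaining series can go up as well as down, which is exactly the "slope up" behaviour described above.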
Knowing the Team Velocity is important during the Sprint Planning meeting when the Product Owner and the developer team work together to determine the number of stories which can be completed in the next Sprint. If you know the Team Velocity then you can avoid committing to do more work than the team has been able to accomplish in the past, and your team is much more likely to complete all of the work required for the next Sprint.

Scrum Master

There are three roles in Scrum: the Product Owner, the developer team, and the Scrum Master. I’ve already discussed the Product Owner. The Product Owner is the one and only person who maintains the Product Backlog and prioritizes the stories. I’ve also described the role of the developer team. The members of the developer team do the work of implementing the stories by breaking the stories into tasks. The final role, which I have not discussed, is the role of the Scrum Master. The Scrum Master is responsible for ensuring that the team is following the Scrum process. For example, the Scrum Master is responsible for making sure that there is a Daily Scrum meeting and that everyone answers the standard three questions. The Scrum Master is also responsible for removing (non-technical) impediments which the team might encounter. For example, if the team cannot start work until everyone installs the latest version of Microsoft Visual Studio then the Scrum Master has the responsibility of working with management to get the latest version of Visual Studio as quickly as possible. The Scrum Master can be a member of the developer team. Furthermore, different people can take on the role of the Scrum Master over time. The Scrum Master, however, cannot be the same person as the Product Owner.

Using SonicAgile

SonicAgile (SonicAgile.com) is an online tool which you can use to manage your projects using Scrum. You can use the SonicAgile Product Backlog to create a prioritized list of stories. You can estimate the size of the Stories using different Story Point units such as Shirt Sizes and Coffee Cup sizes. You can use SonicAgile during the Sprint Planning meeting to select the Stories that you want to complete during a particular Sprint. You can configure Sprints to be any length of time. SonicAgile calculates Team Velocity automatically and displays a warning when you add too many stories to a Sprint. In other words, it warns you when it thinks you are overcommitting in a Sprint. SonicAgile also includes a Scrumboard which displays the list of Stories selected for a Sprint and the tasks associated with each story. You can drag tasks from one task state to another. Finally, SonicAgile enables you to generate Release Burndown and Sprint Burndown charts. You can use these charts to view the progress of your team. To learn more about SonicAgile, visit SonicAgile.com.

Summary

In this post, I described many of the basic concepts of Scrum. You learned how a Product Owner uses a Product Backlog to create a prioritized list of tasks. I explained why work is completed in Sprints so the developer team can be more productive. I also explained how a developer team uses the daily scrum to coordinate their work. You learned how the developer team uses a Scrumboard to see, at a glance, who is working on what and the state of each task. I also discussed Burndown charts. You learned how you can use both Release and Sprint Burndown charts to track team progress in completing a project.
Finally, I described the crucial role of the Scrum Master – the person who is responsible for ensuring that the rules of Scrum are being followed. My goal was not to describe all of the concepts of Scrum. This post was intended to be an introductory overview. For a comprehensive explanation of Scrum, I recommend reading Ken Schwaber’s book Agile Project Management with Scrum: http://www.amazon.com/Agile-Project-Management-Microsoft-Professional/dp/073561993X/ref=la_B001H6ODMC_1_1?ie=UTF8&qid=1345224000&sr=1-1

    Read the article

  • NTFS Corruption: Files created in Linux corrupted when Windows Boots

    - by Logan Mayfield
I'm getting some file loss and corruption on my Win7/Ubuntu 12.04 dual boot setup. I have a large shared NTFS partition. I have my Windows Docs/Music/etc. directories on that partition and have the comparable directories in Linux set up as symlinks. I'm using ntfs-3g on the Linux side of things to manage the NTFS partition. The shared partition is on a logical partition along with my Linux /home and /swap partitions. The NTFS partition is mounted at boot time via fstab with the following options: ntfs-3g users,nls=utf8,locale=en_US.UTF-8,exec,rw

The problem seems to be confined to newly created and recently edited files. I have not seen data loss or corruption when creating/editing files in Windows and then moving over to Ubuntu. I've been using the sync command aggressively in Ubuntu to try to ensure everything is getting written to the HDD. I do not use hibernate in Windows, so I know it's not the usual missing-files-due-to-hibernation problem. I'm not seeing any mount-related issues in dmesg. Most recently I had a set of files related to a LaTeX document go bad. Some of them show up in Ubuntu but I am unable to delete them. In the GUI file browser they are given thumbnails associated with files I created on my last boot of Windows. To be more specific: I created a few png files in Windows. The files corrupted by that Windows boot are associated with running PdfLatex on a file and are not image files. However, two of the corrupted files show up with the thumbnail image of one of the previously mentioned png files. The png files are not in the same directory as the LaTeX files, but they are both within the Document folder tree. I've had success with using NTFS for shared data in the past and am hoping there's some quirk here I'm missing and it's not just bad luck. On one hand this appears to be some kind of Windows problem, as data loss occurs when I boot to Windows after having worked in Ubuntu for a while. However, I'm assuming it's more on the Ubuntu end as it requires the special NTFS drivers.

Edit for more info: This is a Lenovo Thinkpad L430, purchased new in the last month, so it's a fairly fresh install. Many of the files on the shared partition were copied over from a previous NTFS-formatted shared partition on another HDD. As requested, here's a sample chkdsk log. Some of the files it's mentioning were files that got deleted off the partition while in Ubuntu. Others were created/edited but not deleted.

Checking file system on D: Volume dismounted. All opened handles to this volume are now invalid. Volume label is Files. CHKDSK is verifying files (stage 1 of 3)... Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x789f47 for possibly 0x21 clusters. Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x42 is already in use. Deleting corrupt attribute record (128, "") from file record segment 66. 86496 file records processed. File verification completed. 385 large file records processed. 0 bad file records processed. 0 EA records processed. 0 reparse records processed. CHKDSK is verifying indexes (stage 2 of 3)... Deleted invalid filename Screenshot from 2012-09-09 09:51:27.png (72) in directory 46. The NTFS file name attribute in file 0x48 is incorrect. 53 00 63 00 72 00 65 00 65 00 6e 00 73 00 68 00 S.c.r.e.e.n.s.h. 6f 00 74 00 20 00 66 00 72 00 6f 00 6d 00 20 00 o.t. .f.r.o.m. . 32 00 30 00 31 00 32 00 2d 00 30 00 39 00 2d 00 2.0.1.2.-.0.9.-. 30 00 39 00 20 00 30 00 39 00 3a 00 35 00 31 00 0.9. .0.9.:.5.1. 
3a 00 32 00 37 00 2e 00 70 00 6e 00 67 00 0d 00 :.2.7...p.n.g... 00 00 00 00 00 00 90 94 49 1f 5e 00 00 80 d4 00 ......I.^.... File 72 has been orphaned since all its filenames were invalid Windows will recover the file in the orphan recovery phase. Correcting minor file name errors in file 72. Index entry found.000 of index $I30 in file 0x5 points to unused file 0x11. Deleting index entry found.000 in index $I30 of file 5. Index entry found.001 of index $I30 in file 0x5 points to unused file 0x16. Deleting index entry found.001 in index $I30 of file 5. Index entry found.002 of index $I30 in file 0x5 points to unused file 0x15. Deleting index entry found.002 in index $I30 of file 5. Index entry DOWNLO~1 of index $I30 in file 0x28 points to unused file 0x2b6. Deleting index entry DOWNLO~1 in index $I30 of file 40. Unable to locate the file name attribute of index entry Screenshot from 2012-09-09 09:51:27.png of index $I30 with parent 0x2e in file 0x48. Deleting index entry Screenshot from 2012-09-09 09:51:27.png in index $I30 of file 46. An index entry of index $I30 in file 0x32 points to file 0x151e8 which is beyond the MFT. Deleting index entry latexsheet.tex in index $I30 of file 50. An index entry of index $I30 in file 0x58bc points to file 0x151eb which is beyond the MFT. Deleting index entry D8CZ82PK in index $I30 of file 22716. An index entry of index $I30 in file 0x58bc points to file 0x151f7 which is beyond the MFT. Deleting index entry EGA4QEAX in index $I30 of file 22716. An index entry of index $I30 in file 0x58bc points to file 0x151e9 which is beyond the MFT. Deleting index entry NGTB469M in index $I30 of file 22716. An index entry of index $I30 in file 0x58bc points to file 0x151fb which is beyond the MFT. Deleting index entry WU5RKXAB in index $I30 of file 22716. Index entry comp220-lab3.synctex.gz of index $I30 in file 0xda69 points to unused file 0xd098. Deleting index entry comp220-lab3.synctex.gz in index $I30 of file 55913. Unable to locate the file name attribute of index entry comp220-numberGrammars.aux of index $I30 with parent 0xda69 in file 0xa276. Deleting index entry comp220-numberGrammars.aux in index $I30 of file 55913. The file reference 0x500000000cd43 of index entry comp220-numberGrammars.out of index $I30 with parent 0xda69 is not the same as 0x600000000cd43. Deleting index entry comp220-numberGrammars.out in index $I30 of file 55913. The file reference 0x500000000cd45 of index entry comp220-numberGrammars.pdf of index $I30 with parent 0xda69 is not the same as 0xc00000000cd45. Deleting index entry comp220-numberGrammars.pdf in index $I30 of file 55913. An index entry of index $I30 in file 0xda69 points to file 0x15290 which is beyond the MFT. Deleting index entry gram.aux in index $I30 of file 55913. An index entry of index $I30 in file 0xda69 points to file 0x15291 which is beyond the MFT. Deleting index entry gram.out in index $I30 of file 55913. An index entry of index $I30 in file 0xda69 points to file 0x15292 which is beyond the MFT. Deleting index entry gram.pdf in index $I30 of file 55913. Unable to locate the file name attribute of index entry comp230-quiz1.synctex.gz of index $I30 with parent 0xda6f in file 0xd183. Deleting index entry comp230-quiz1.synctex.gz in index $I30 of file 55919. An index entry of index $I30 in file 0xf3cc points to file 0x15283 which is beyond the MFT. Deleting index entry require-transform.rkt in index $I30 of file 62412. An index entry of index $I30 in file 0xf3cc points to file 0x15284 which is beyond the MFT. 
Deleting index entry set.rkt in index $I30 of file 62412. An index entry of index $I30 in file 0xf497 points to file 0x15280 which is beyond the MFT. Deleting index entry logger.rkt in index $I30 of file 62615. An index entry of index $I30 in file 0xf497 points to file 0x15281 which is beyond the MFT. Deleting index entry misc.rkt in index $I30 of file 62615. An index entry of index $I30 in file 0xf497 points to file 0x15282 which is beyond the MFT. Deleting index entry more-scheme.rkt in index $I30 of file 62615. An index entry of index $I30 in file 0xf5bf points to file 0x15285 which is beyond the MFT. Deleting index entry core-layout.rkt in index $I30 of file 62911. An index entry of index $I30 in file 0xf5e0 points to file 0x15286 which is beyond the MFT. Deleting index entry ref.scrbl in index $I30 of file 62944. An index entry of index $I30 in file 0xf6f0 points to file 0x15287 which is beyond the MFT. Deleting index entry base-render.rkt in index $I30 of file 63216. An index entry of index $I30 in file 0xf6f0 points to file 0x15288 which is beyond the MFT. Deleting index entry html-properties.rkt in index $I30 of file 63216. An index entry of index $I30 in file 0xf6f0 points to file 0x15289 which is beyond the MFT. Deleting index entry html-render.rkt in index $I30 of file 63216. An index entry of index $I30 in file 0xf6f0 points to file 0x1528b which is beyond the MFT. Deleting index entry latex-prefix.rkt in index $I30 of file 63216. An index entry of index $I30 in file 0xf6f0 points to file 0x1528c which is beyond the MFT. Deleting index entry latex-render.rkt in index $I30 of file 63216. An index entry of index $I30 in file 0xf6f0 points to file 0x1528e which is beyond the MFT. Deleting index entry scribble.tex in index $I30 of file 63216. An index entry of index $I30 in file 0xf717 points to file 0x1528a which is beyond the MFT. Deleting index entry lang.rkt in index $I30 of file 63255. An index entry of index $I30 in file 0xf721 points to file 0x1528d which is beyond the MFT. Deleting index entry lang.rkt in index $I30 of file 63265. An index entry of index $I30 in file 0xf764 points to file 0x1528f which is beyond the MFT. Deleting index entry lang.rkt in index $I30 of file 63332. An index entry of index $I30 in file 0x14261 points to file 0x15270 which is beyond the MFT. Deleting index entry fddff3ae9ae2221207f144821d475c08ec3d05 in index $I30 of file 82529. An index entry of index $I30 in file 0x14621 points to file 0x15268 which is beyond the MFT. Deleting index entry FETCH_HEAD in index $I30 of file 83489. An index entry of index $I30 in file 0x14650 points to file 0x15272 which is beyond the MFT. Deleting index entry 86 in index $I30 of file 83536. An index entry of index $I30 in file 0x14651 points to file 0x15266 which is beyond the MFT. Deleting index entry pack-7f54ce9f8218d2cd8d6815b8c07461b50584027f.idx in index $I30 of file 83537. An index entry of index $I30 in file 0x14651 points to file 0x15265 which is beyond the MFT. Deleting index entry pack-7f54ce9f8218d2cd8d6815b8c07461b50584027f.pack in index $I30 of file 83537. An index entry of index $I30 in file 0x146f1 points to file 0x15275 which is beyond the MFT. Deleting index entry master in index $I30 of file 83697. An index entry of index $I30 in file 0x146f6 points to file 0x15276 which is beyond the MFT. Deleting index entry remotes in index $I30 of file 83702. An index entry of index $I30 in file 0x1477d points to file 0x15278 which is beyond the MFT. Deleting index entry pad.rkt in index $I30 of file 83837. 
An index entry of index $I30 in file 0x14797 points to file 0x1527c which is beyond the MFT. Deleting index entry pad1.rkt in index $I30 of file 83863. An index entry of index $I30 in file 0x14810 points to file 0x1527d which is beyond the MFT. Deleting index entry cm.rkt in index $I30 of file 83984. An index entry of index $I30 in file 0x14926 points to file 0x1527e which is beyond the MFT. Deleting index entry multi-file-search.rkt in index $I30 of file 84262. An index entry of index $I30 in file 0x149ef points to file 0x1527f which is beyond the MFT. Deleting index entry com.rkt in index $I30 of file 84463. An index entry of index $I30 in file 0x14b47 points to file 0x15202 which is beyond the MFT. Deleting index entry COMMIT_EDITMSG in index $I30 of file 84807. An index entry of index $I30 in file 0x14b47 points to file 0x15279 which is beyond the MFT. Deleting index entry index in index $I30 of file 84807. An index entry of index $I30 in file 0x14b4c points to file 0x15274 which is beyond the MFT. Deleting index entry master in index $I30 of file 84812. An index entry of index $I30 in file 0x14b61 points to file 0x1520b which is beyond the MFT. Deleting index entry 02 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x1525a which is beyond the MFT. Deleting index entry 28 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x15208 which is beyond the MFT. Deleting index entry 29 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x1521f which is beyond the MFT. Deleting index entry 2c in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x15261 which is beyond the MFT. Deleting index entry 2e in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x151f0 which is beyond the MFT. Deleting index entry 45 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x1523e which is beyond the MFT. Deleting index entry 47 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x151e5 which is beyond the MFT. Deleting index entry 49 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x15214 which is beyond the MFT. Deleting index entry 58 in index $I30 of file 84833. Index entry 6e of index $I30 in file 0x14b61 points to unused file 0xd182. Deleting index entry 6e in index $I30 of file 84833. Unable to locate the file name attribute of index entry a0 of index $I30 with parent 0x14b61 in file 0xd29c. Deleting index entry a0 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x1521b which is beyond the MFT. Deleting index entry cd in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x15249 which is beyond the MFT. Deleting index entry d6 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x15242 which is beyond the MFT. Deleting index entry df in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x15227 which is beyond the MFT. Deleting index entry ea in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x1522e which is beyond the MFT. Deleting index entry f3 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x151f2 which is beyond the MFT. Deleting index entry ff in index $I30 of file 84833. 
An index entry of index $I30 in file 0x14b62 points to file 0x15254 which is beyond the MFT. Deleting index entry 1ed39b36ad4bd48c91d22cbafd7390f1ea38da in index $I30 of file 84834. An index entry of index $I30 in file 0x14b75 points to file 0x15224 which is beyond the MFT. Deleting index entry 96260247010fe9811fea773c08c5f3a314df3f in index $I30 of file 84853. An index entry of index $I30 in file 0x14b79 points to file 0x15219 which is beyond the MFT. Deleting index entry 8f689724ca23528dd4f4ab8b475ace6edcb8f5 in index $I30 of file 84857. An index entry of index $I30 in file 0x14b7c points to file 0x15223 which is beyond the MFT. Deleting index entry 1df17cf850656be42c947cba6295d29c248d94 in index $I30 of file 84860. An index entry of index $I30 in file 0x14b7c points to file 0x15217 which is beyond the MFT. Deleting index entry 31db8a3c72a3e44769bbd8db58d36f8298242c in index $I30 of file 84860. An index entry of index $I30 in file 0x14b7c points to file 0x15267 which is beyond the MFT. Deleting index entry 8e1254d755ff1882d61c07011272bac3612f57 in index $I30 of file 84860. An index entry of index $I30 in file 0x14b82 points to file 0x15246 which is beyond the MFT. Deleting index entry f959bfaf9643c1b9e78d5ecf8f669133efdbf3 in index $I30 of file 84866. An index entry of index $I30 in file 0x14b88 points to file 0x151fe which is beyond the MFT. Deleting index entry 7e9aa15b1196b2c60116afa4ffa613397f2185 in index $I30 of file 84872. An index entry of index $I30 in file 0x14b8a points to file 0x151ea which is beyond the MFT. Deleting index entry 73cb0cd248e494bb508f41b55d862e84cdd6e0 in index $I30 of file 84874. An index entry of index $I30 in file 0x14b8e points to file 0x15264 which is beyond the MFT. Deleting index entry bd555d9f0383cc14c317120149e9376a8094c4 in index $I30 of file 84878. An index entry of index $I30 in file 0x14b96 points to file 0x15212 which is beyond the MFT. Deleting index entry 630dba40562d991bc6cbb6fed4ba638542e9c5 in index $I30 of file 84886. An index entry of index $I30 in file 0x14b99 points to file 0x151ec which is beyond the MFT. Deleting index entry 478be31ca8e538769246e22bba3330d81dc3c8 in index $I30 of file 84889. An index entry of index $I30 in file 0x14b99 points to file 0x15258 which is beyond the MFT. Deleting index entry 66c60c0a0f3253bc9a5112697e4cbb0dfc0c78 in index $I30 of file 84889. An index entry of index $I30 in file 0x14b9c points to file 0x15238 which is beyond the MFT. Deleting index entry 1c7ceeddc2953496f9ffbfc0b6fb28846e3fe3 in index $I30 of file 84892. An index entry of index $I30 in file 0x14b9c points to file 0x15247 which is beyond the MFT. Deleting index entry ae6e32ffc49d897d8f8aeced970a90d3653533 in index $I30 of file 84892. An index entry of index $I30 in file 0x14ba0 points to file 0x15233 which is beyond the MFT. Deleting index entry f71c7d874e45179a32e138b49bf007e5bbf514 in index $I30 of file 84896. Index entry 2e04fefbd794f050d45e7a717d009e39204431 of index $I30 in file 0x14ba7 points to unused file 0xd097. Deleting index entry 2e04fefbd794f050d45e7a717d009e39204431 in index $I30 of file 84903. An index entry of index $I30 in file 0x14baa points to file 0x15241 which is beyond the MFT. Deleting index entry 0dda7dec1c635cd646dfef308e403c2843d5dc in index $I30 of file 84906. An index entry of index $I30 in file 0x14baa points to file 0x151fc which is beyond the MFT. Deleting index entry 98151e654dd546edcfdec630bc82d90619ac8e in index $I30 of file 84906. 
An index entry of index $I30 in file 0x14bb1 points to file 0x151e9 which is beyond the MFT. Deleting index entry 1997c5be62ffeebc99253cced7608415e38e4e in index $I30 of file 84913. An index entry of index $I30 in file 0x14bb1 points to file 0x1521d which is beyond the MFT. Deleting index entry 6bf3aedefd3ac62d9c49cad72d05e8c0ad242c in index $I30 of file 84913. An index entry of index $I30 in file 0x14bb1 points to file 0x151f4 which is beyond the MFT. Deleting index entry 907b755afdca14c00be0010962d0861af29264 in index $I30 of file 84913. An index entry of index $I30 in file 0x14bb3 points to file 0x15218 which is beyond the MFT. Deleting index entry

    Read the article

  • Security Issues with Single Page Apps

    - by Stephen.Walther
Last week, I was asked to do a code review of a Single Page App built using the ASP.NET Web API, Durandal, and Knockout (good stuff!). In particular, I was asked to investigate whether there are any special security issues associated with building a Single Page App which are not present in the case of a traditional server-side ASP.NET application. In this blog entry, I discuss two areas in which you need to exercise extra caution when building a Single Page App. I discuss how Single Page Apps are extra vulnerable to both Cross-Site Scripting (XSS) attacks and Cross-Site Request Forgery (CSRF) attacks. The goal of this blog post is NOT to persuade you to avoid writing Single Page Apps. I’m a big fan of Single Page Apps. Instead, the goal is to ensure that you are fully aware of some of the security issues related to Single Page Apps and ensure that you know how to guard against them.

Cross-Site Scripting (XSS) Attacks

According to WhiteHat Security, over 65% of public websites are open to XSS attacks. That’s bad. By taking advantage of XSS holes in a website, a hacker can steal your credit cards, passwords, or bank account information. Any website that redisplays untrusted information is open to XSS attacks. Let me give you a simple example. Imagine that you want to display the name of the current user on a page. To do this, you create the following server-side ASP.NET page located at http://MajorBank.com/SomePage.aspx: <%@Page Language="C#" %> <html> <head> <title>Some Page</title> </head> <body> Welcome <%= Request["username"] %> </body> </html> Nothing fancy here. Notice that the page displays the current username by using Request[“username”]. Using Request[“username”] displays the username regardless of whether the username is present in a cookie, a form field, or a query string variable. Unfortunately, by using Request[“username”] to redisplay untrusted information, you have now opened your website to XSS attacks. Here’s how. Imagine that an evil hacker creates the following link on another website (hackers.com): <a href="/SomePage.aspx?username=<script src=Evil.js></script>">Visit MajorBank</a> Notice that the link includes a query string variable named username and the value of the username variable is an HTML <SCRIPT> tag which points to a JavaScript file named Evil.js. When anyone clicks on the link, the <SCRIPT> tag will be injected into SomePage.aspx and the Evil.js script will be loaded and executed. What can a hacker do in the Evil.js script? Anything the hacker wants. For example, the hacker could display a popup dialog on the MajorBank.com site which asks the user to enter their password. The script could then post the password back to hackers.com and now the evil hacker has your secret password. ASP.NET Web Forms and ASP.NET MVC have two automatic safeguards against this type of attack: Request Validation and Automatic HTML Encoding.

Protecting Coming In (Request Validation)

In a server-side ASP.NET app, you are protected against the XSS attack described above by a feature named Request Validation. If you attempt to submit “potentially dangerous” content — such as a JavaScript <SCRIPT> tag — in a form field or query string variable then you get an exception. Unfortunately, Request Validation only applies to server-side apps. Request Validation does not help in the case of a Single Page App. In particular, the ASP.NET Web API does not pay attention to Request Validation. You can post any content you want – including <SCRIPT> tags – to an ASP.NET Web API action. 
For example, the following HTML page contains a form. When you submit the form, the form data is submitted to an ASP.NET Web API controller on the server using an Ajax request: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title></title> </head> <body> <form data-bind="submit:submit"> <div> <label> User Name: <input data-bind="value:user.userName" /> </label> </div> <div> <label> Email: <input data-bind="value:user.email" /> </label> </div> <div> <input type="submit" value="Submit" /> </div> </form> <script src="Scripts/jquery-1.7.1.js"></script> <script src="Scripts/knockout-2.1.0.js"></script> <script> var viewModel = { user: { userName: ko.observable(), email: ko.observable() }, submit: function () { $.post("/api/users", ko.toJS(this.user)); } }; ko.applyBindings(viewModel); </script> </body> </html> The form above is using Knockout to bind the form fields to a view model. When you submit the form, the view model is submitted to an ASP.NET Web API action on the server. Here’s the server-side ASP.NET Web API controller and model class: public class UsersController : ApiController { public HttpResponseMessage Post(UserViewModel user) { var userName = user.UserName; return Request.CreateResponse(HttpStatusCode.OK); } } public class UserViewModel { public string UserName { get; set; } public string Email { get; set; } } If you submit the HTML form, you don’t get an error. The “potentially dangerous” content is passed to the server without any exception being thrown. In the screenshot below, you can see that I was able to post a username form field with the value “<script>alert(‘boo’)</script”. So what this means is that you do not get automatic Request Validation in the case of a Single Page App. You need to be extra careful in a Single Page App about ensuring that you do not display untrusted content because you don’t have the Request Validation safety net which you have in a traditional server-side ASP.NET app. Protecting Going Out (Automatic HTML Encoding) Server-side ASP.NET also protects you from XSS attacks when you render content. By default, all content rendered by the razor view engine is HTML encoded. For example, the following razor view displays the text “<b>Hello!</b>” instead of the text “Hello!” in bold: @{ var message = "<b>Hello!</b>"; } @message   If you don’t want to render content as HTML encoded in razor then you need to take the extra step of using the @Html.Raw() helper. In a Web Form page, if you use <%: %> instead of <%= %> then you get automatic HTML Encoding: <%@ Page Language="C#" %> <% var message = "<b>Hello!</b>"; %> <%: message %> This automatic HTML Encoding will prevent many types of XSS attacks. It prevents <script> tags from being rendered and only allows &lt;script&gt; tags to be rendered which are useless for executing JavaScript. (This automatic HTML encoding does not protect you from all forms of XSS attacks. For example, you can assign the value “javascript:alert(‘evil’)” to the Hyperlink control’s NavigateUrl property and execute the JavaScript). The situation with Knockout is more complicated. If you use the Knockout TEXT binding then you get HTML encoded content. 
On the other hand, if you use the HTML binding then you do not: <!-- This JavaScript DOES NOT execute --> <div data-bind="text:someProp"></div> <!-- This Javacript DOES execute --> <div data-bind="html:someProp"></div> <script src="Scripts/jquery-1.7.1.js"></script> <script src="Scripts/knockout-2.1.0.js"></script> <script> var viewModel = { someProp : "<script>alert('Evil!')<" + "/script>" }; ko.applyBindings(viewModel); </script>   So, in the page above, the DIV element which uses the TEXT binding is safe from XSS attacks. According to the Knockout documentation: “Since this binding sets your text value using a text node, it’s safe to set any string value without risking HTML or script injection.” Just like server-side HTML encoding, Knockout does not protect you from all types of XSS attacks. For example, there is nothing in Knockout which prevents you from binding JavaScript to a hyperlink like this: <a data-bind="attr:{href:homePageUrl}">Go</a> <script src="Scripts/jquery-1.7.1.min.js"></script> <script src="Scripts/knockout-2.1.0.js"></script> <script> var viewModel = { homePageUrl: "javascript:alert('evil!')" }; ko.applyBindings(viewModel); </script> In the page above, the value “javascript:alert(‘evil’)” is bound to the HREF attribute using Knockout. When you click the link, the JavaScript executes. Cross-Site Request Forgery (CSRF) Attacks Cross-Site Request Forgery (CSRF) attacks rely on the fact that a session cookie does not expire until you close your browser. In particular, if you visit and login to MajorBank.com and then you navigate to Hackers.com then you will still be authenticated against MajorBank.com even after you navigate to Hackers.com. Because MajorBank.com cannot tell whether a request is coming from MajorBank.com or Hackers.com, Hackers.com can submit requests to MajorBank.com pretending to be you. For example, Hackers.com can post an HTML form from Hackers.com to MajorBank.com and change your email address at MajorBank.com. Hackers.com can post a form to MajorBank.com using your authentication cookie. After your email address has been changed, by using a password reset page at MajorBank.com, a hacker can access your bank account. To prevent CSRF attacks, you need some mechanism for detecting whether a request is coming from a page loaded from your website or whether the request is coming from some other website. The recommended way of preventing Cross-Site Request Forgery attacks is to use the “Synchronizer Token Pattern” as described here: https://www.owasp.org/index.php/Cross-Site_Request_Forgery_%28CSRF%29_Prevention_Cheat_Sheet When using the Synchronizer Token Pattern, you include a hidden input field which contains a random token whenever you display an HTML form. When the user opens the form, you add a cookie to the user’s browser with the same random token. When the user posts the form, you verify that the hidden form token and the cookie token match. Preventing Cross-Site Request Forgery Attacks with ASP.NET MVC ASP.NET gives you a helper and an action filter which you can use to thwart Cross-Site Request Forgery attacks. 
For example, the following razor form for creating a product shows how you use the @Html.AntiForgeryToken() helper: @model MvcApplication2.Models.Product <h2>Create Product</h2> @using (Html.BeginForm()) { @Html.AntiForgeryToken(); <div> @Html.LabelFor( p => p.Name, "Product Name:") @Html.TextBoxFor( p => p.Name) </div> <div> @Html.LabelFor( p => p.Price, "Product Price:") @Html.TextBoxFor( p => p.Price) </div> <input type="submit" /> } The @Html.AntiForgeryToken() helper generates a random token and assigns a serialized version of the same random token to both a cookie and a hidden form field. (Actually, if you dive into the source code, the AntiForgeryToken() does something a little more complex because it takes advantage of a user’s identity when generating the token). Here’s what the hidden form field looks like: <input name=”__RequestVerificationToken” type=”hidden” value=”NqqZGAmlDHh6fPTNR_mti3nYGUDgpIkCiJHnEEL59S7FNToyyeSo7v4AfzF2i67Cv0qTB1TgmZcqiVtgdkW2NnXgEcBc-iBts0x6WAIShtM1″ /> And here’s what the cookie looks like using the Google Chrome developer toolbar: You use the [ValidateAntiForgeryToken] action filter on the controller action which is the recipient of the form post to validate that the token in the hidden form field matches the token in the cookie. If the tokens don’t match then validation fails and you can’t post the form: public ActionResult Create() { return View(); } [ValidateAntiForgeryToken] [HttpPost] public ActionResult Create(Product productToCreate) { if (ModelState.IsValid) { // save product to db return RedirectToAction("Index"); } return View(); } How does this all work? Let’s imagine that a hacker has copied the Create Product page from MajorBank.com to Hackers.com – the hacker grabs the HTML source and places it at Hackers.com. Now, imagine that the hacker trick you into submitting the Create Product form from Hackers.com to MajorBank.com. You’ll get the following exception: The Cross-Site Request Forgery attack is blocked because the anti-forgery token included in the Create Product form at Hackers.com won’t match the anti-forgery token stored in the cookie in your browser. The tokens were generated at different times for different users so the attack fails. Preventing Cross-Site Request Forgery Attacks with a Single Page App In a Single Page App, you can’t prevent Cross-Site Request Forgery attacks using the same method as a server-side ASP.NET MVC app. In a Single Page App, HTML forms are not generated on the server. Instead, in a Single Page App, forms are loaded dynamically in the browser. Phil Haack has a blog post on this topic where he discusses passing the anti-forgery token in an Ajax header instead of a hidden form field. He also describes how you can create a custom anti-forgery token attribute to compare the token in the Ajax header and the token in the cookie. See: http://haacked.com/archive/2011/10/10/preventing-csrf-with-ajax.aspx Also, take a look at Johan’s update to Phil Haack’s original post: http://johan.driessen.se/posts/Updated-Anti-XSRF-Validation-for-ASP.NET-MVC-4-RC (Other server frameworks such as Rails and Django do something similar. For example, Rails uses an X-CSRF-Token to prevent CSRF attacks which you generate on the server – see http://excid3.com/blog/rails-tip-2-include-csrf-token-with-every-ajax-request/#.UTFtgDDkvL8 ). 
Use the following code to create an action filter which you can use to match the header and cookie tokens:

using System.Collections.Generic;
using System.Linq;
using System.Net.Http;
using System.Web.Helpers;
using System.Web.Http.Controllers;

namespace MvcApplication2.Infrastructure
{
    public class ValidateAjaxAntiForgeryToken : System.Web.Http.AuthorizeAttribute
    {
        protected override bool IsAuthorized(HttpActionContext actionContext)
        {
            // TryGetValues does not throw when the header is missing
            IEnumerable<string> headerValues;
            actionContext.Request.Headers.TryGetValues("__RequestVerificationToken", out headerValues);
            var headerToken = (headerValues ?? Enumerable.Empty<string>()).FirstOrDefault();

            var cookieToken = actionContext
                .Request
                .Headers
                .GetCookies()
                .Select(c => c[AntiForgeryConfig.CookieName])
                .FirstOrDefault();

            // check for missing cookie or header
            if (cookieToken == null || headerToken == null)
            {
                return false;
            }

            // ensure that the cookie matches the header
            try
            {
                AntiForgery.Validate(cookieToken.Value, headerToken);
            }
            catch
            {
                return false;
            }

            return base.IsAuthorized(actionContext);
        }
    }
}

Notice that the action filter derives from the base AuthorizeAttribute. The ValidateAjaxAntiForgeryToken filter only works when the user is authenticated; it will not work for anonymous requests.

Add the action filter to your ASP.NET Web API controller actions like this:

[ValidateAjaxAntiForgeryToken]
public HttpResponseMessage PostProduct(Product productToCreate)
{
    // add product to db
    return Request.CreateResponse(HttpStatusCode.OK);
}

After you complete these steps, it won’t be possible for a hacker to pretend to be you at Hackers.com and submit a form to MajorBank.com. The header token used in the Ajax request won’t travel to Hackers.com.

This approach works, but I am not entirely happy with it. The one thing that I don’t like about it is that it creates a hard dependency on Razor: the single page in your Single Page App must be generated from a server-side Razor view. A better solution would be to generate the anti-forgery token in JavaScript. Unfortunately, until all browsers support a way to generate cryptographically strong random numbers – for example, by supporting the window.crypto.getRandomValues() method – there is no good way to generate anti-forgery tokens in JavaScript.
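For what it is worth, in a browser that does implement window.crypto.getRandomValues(), producing a random value looks roughly like the sketch below; generateToken is a hypothetical helper of my own, shown only to illustrate the API mentioned above. It does not remove the need to get a matching token to the server, which is why the server-side approach is still preferable.

// Generate a 32-byte random value and return it as a hex string.
// Only works in browsers that implement window.crypto.getRandomValues().
function generateToken() {
    var bytes = new Uint8Array(32);
    window.crypto.getRandomValues(bytes);
    return Array.prototype.map.call(bytes, function (b) {
        return ("0" + b.toString(16)).slice(-2);
    }).join("");
}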
So, at least right now, the best solution for generating the tokens is the server-side solution, with the (regrettable) dependency on Razor.

Conclusion

The goal of this blog entry was to explore some ways in which you need to handle security differently in the case of a Single Page App than in the case of a traditional server app. In particular, I focused on how to prevent Cross-Site Scripting and Cross-Site Request Forgery attacks in the case of a Single Page App.

I want to emphasize that I am not suggesting that Single Page Apps are inherently less secure than server-side apps. Whatever type of web application you build – regardless of whether it is a Single Page App, an ASP.NET MVC app, an ASP.NET Web Forms app, or a Rails app – you must constantly guard against security vulnerabilities.
