Search Results

Search found 1822 results on 73 pages for 'bandwidth caps'.


  • CodePlex Daily Summary for Saturday, August 18, 2012

    CodePlex Daily Summary for Saturday, August 18, 2012

    Popular Releases

    - ExtAspNet v3.1.9 (2012-08-18): Chinese-language release notes; the changes touch other/addtab.aspx, BoundField Tooltip handling (Dennis_Liu), the Window GetShowReference method, JavaScript/HTML generation via HtmlNodeBuilder, the WindowField, LinkButton and HyperLink controls, the grid/griddynamiccolumns2.aspx example, Type Reset handling, and int/short/double field handling.
    - XDA ROM HUB Release v0.9.3: Added a new UI, removed unnecessary features, fixed a lot of bugs, adding new firmwares (please help me), added a new app for Android, ability to back up apps to the SD card, create Nandroid backups without a PC, and back up apps and restore from CWM; improved download manager, now supporting pause and resume and converting bytes to megabytes; improved dialogs; removed support for 7z files; changed installer. Known bugs: a dialog message will always appear when clic...
    - Task Card Creator 2010, TaskCardCreator2010 4.0.2.0: UI/UX improved using a contextual ribbon tab for reports, finishing the "new 4.0 UI"; report template help improved; new project branch to support TFS 2012 (http://taskcardcreator2012.codeplex.com). From 4.0.1.0: more modern user interface, smarter report-generation algorithm, quality setting added, terms harmonized, miscellaneous optimizations, and a fix for a critical issue introduced in 4.0.0.0.
    - AcDown Downloader Framework v4.0.1: Chinese-language release notes; AcDown is a C# download manager built on .NET Framework 2.0 that supports sites such as Acfun, Bilibili, YouTube and SF, works with the AcPlay player, and runs on 32- and 64-bit Windows XP/Vista/7/8 (Windows XP requires the x86 .NET Framework 2.0).
    - Fluent Validation for .NET 3.4: Changes since 3.3: made ValidationResult.IsValid virtual; added a private no-arg constructor to ValidationFailure to help with serialization; added Turkish error messages; worked around a reflection bug in .NET 4.5 that caused VerificationExceptions; assemblies are now unsigned to ease versioning/upgrades (signed NuGet packages are available: FluentValidation-signed, FluentValidation.MVC3-signed, FluentV...).
    - fastJSON v2.0.2: bug fix for $types and arrays.
    - AssaultCube Reloaded 2.5.3 "Unnamed Fixed": If you are using deltas, download 2.5.2 first, then overwrite with the delta packages. Linux has Ubuntu 11.10 32-bit and Ubuntu 10.10 64-bit precompiled binaries, and the source is included so you can compile your own. Mac and other operating systems are still being packaged; try to compile it, and if that fails, download a virtual machine. The server pack is ready for both Windows and Linux, but you might need to compile your ...
    - Coding4Fun Tools, Coding4Fun.Phone.Toolkit v1.6.1: bug-fix release: better support for transparent images; IsFrozen respected if not bound; corrected deadlock state.
    - WPF Application Framework (WAF) 2.5.0.7 (Milestone 7): contains the source code of WAF and the sample applications. Requires .NET Framework 4.0 (the package contains a Visual Studio 2010 solution file; the unit-test projects require Visual Studio 2010 Professional). Changelog (legend: [B] breaking change, [O] member marked obsolete): WAF: add CollectionHelper.GetNextElementOrDefault method; InfoMan: support creating a new email and saving it in the Send b...
    - Diablo III Drop Statistics Service, D3DSS 1.0.1 (Chinese-language release notes).
    - myCollections 2.2.3.0: added a setup package; added Amazon Spain for Apps, Books, Games, Movie, Music, Nds and Tvshow, TVDB Spain for Tvshow and TMDB Spain for Movies; added auto-rename of files from title; added more filters when adding files (vob, mpls, ifo...); improved Books author and Music artist credits; rewrote find-duplicates for better performance; custom links can now be added to items; types can now be added directly from the type list using the right mouse button; bug ...
    - Player Framework by Microsoft, Player Framework for Windows 8 Preview 5 (Refresh): support for Windows 8 and Visual Studio RTM, Smooth Streaming SDK beta 2 and live playback; new bitrate meter and SD/HD indicators; automatic smooth-streaming track restriction for snapped mode to conserve bandwidth; new "Go Live" button and SeekToLive API; support for offset start times, a live position distinct from the end time, and multiple audio streams (smooth and progressive content); improved IntelliSense in the JS version. New to the Preview 5 refresh: Req...
    - TFS Workbench v2.2.0.10: compiled installers; fixed a bug that stopped the change-workspace action from working.
    - Microsoft Ajax Minifier 4.60: allow the CSS3 grid-column and grid-row repeat syntax; the -analyze scope-report output can be produced as XML for easier programmatic processing and saved to a separate output file.
    - ClosedXML - The easy way to OpenXML 0.67.2: fix when copying conditional formats with relative formulas; 0.67.1: misc fixes to the conditional formats; 0.67.0: conditional formats now accept formulas, major performance improvement when opening files with merged ranges, misc fixes.
    - Umbraco CMS 4.8.1: bug fixes: the database upgrade did not run when upgrading to 4.8.0 (upgrading with SQLCE is problematic; there is a workaround at http://bit.ly/TEmMJN); the changes to the <imaging> section in umbracoSettings.config caused errors when not applied during the upgrade, so defaults are now used if any keys are missing; scheduled unpublishes now only unpublish nodes set to published rather than newest; work item 30937: fixed problem with Fi...
    - patterns & practices - Unity, Unity 3.0 for .NET 4.5 and WinRT Preview (3.0.1208.0): enables Unity to work on .NET 4.5 with both the WinRT and desktop profiles; an updated version of the port after .NET Framework 4.5 and Windows 8 RTM'ed. Please see the release notes; post feedback on the Unity forum, and submit and vote on new features on the Uservoice site.
    - Self-Tracking Entity Generator for WPF and Silverlight v2.0.0 for VS11: for Entity Framework 5.0 and Visual Studio 2012.
    - NPOI 2.0: new features: (a) OpenXml4Net implementation (the same as Microsoft's System.Packaging), supporting both .NET 2.0 and .NET 4.0; (b) Excel 2007 read/write library (NPOI.XSSF); (c) Word 2007 read/write library (NPOI.XWPF); (d) the NPOI.SS namespace becomes the interface shared between XSSF and HSSF; (e) load an xlsx template and save it as a new xlsx file (partially supported); (f) diagonal line in cell, both in xls and xlsx; (g) support for isRightToLeft and setRightToLeft on the common spreadsheet Sheet interface, as per existin...
    - BugNET Issue Tracker, BugNET 1.1: includes bug fixes from the 1.0 release for email notifications, RSS feeds and several other issues; see the change log for the full list: http://support.bugnetproject.com/Projects/ReleaseNotes.aspx?pid=1&m=76. Upgrade notes: in the profile section of web.config, <add name="NotificationTypes" type="String" defaultValue="Email" customProviderData="NotificationTypes;nvarchar;255" /> was removed and <add name="ReceiveEmailNotifi... was added.

    New Projects

    - cocos2d simpledemo: simple cocos2d demo.
    - DiscuzViet: Discuz Việt Nam - discuz.vn.
    - EaseSettings: a library designed to allow easy use of settings, highly structured in tree-node form.
    - Face Detection & Recognition: provides an interface for face detection and recognition.
    - FIM 2010 GoogleApps MA: a Forefront Identity Manager 2010 Management Agent for Google Apps; synchronizes users between Google Apps and Forefront Identity Manager 2010.
    - Get System Info: Get System Information is an open-source project licensed under the GNU Public License v2 that lets you view information about your system.
    - go launcher: Go launcher.
    - grOOvy language (the spec): a formal specification of what the Groovy language does.
    - JavaScript Prototype Extension: a standalone JavaScript extension built directly on JavaScript prototypes, to let developers use JS like C# or Java.
    - Jump N Roll Windows Phone 7: a remake of the classic Amiga game for Windows Phone 7, using VB.NET and XNA; version 1 is in a working but unfinished state.
    - KingOfHorses: a capstone project from FPT University.
    - Lan House Manager: a 2009 Computer Science final-year project at UNISANTA, built with .NET, C#, WPF, ASP.NET MVC, Entity Framework and SQL Server.
    - My source code: this project contains my source code.
    - MySimpleProjectForKudu: cool stuff.
    - NWP INTRANET: a complete and useful intranet application for a wide range of companies.
    - P.O.W. - PowerShell on the Web - DevOps Automation Portal: the PowerShell DevOps Portal provides a SharePoint-like experience centered on executing PowerShell scripts to automate DevOps-oriented tasks.
    - Piwik for Microsoft Silverlight Analytics Framework: adds Piwik support to the MSAF.
    - PlaygroundSite: Pluralsight tutorial.
    - Skewboid Approach to Pareto Relaxation and Filtering: for multi-objective decision-making and optimization; an adjustment parameter, alpha, is added to the conventional determination of Pareto optimality.
    - Software Factories: Software Factory Generators.
    - spaceship cocos2dx demo: testing only.
    - Spring_DWR: a sample from a requirement to use Spring with Direct Web Remoting.
    - testgit08172012git01: yuitest.
    - tom08172012tfs01: ytryytrt.
    - touchdragdemo: touch drag demo.
    - Virtual Pilot Client: an open-source pilot client for the VATSIM network; see the discussions at http://forums.vatsim.net/viewforum.php?f=124.
    - Visual Studio IDE Service - DTE: Visual Studio Services.
    - WCF AOP Sample: a simple implementation of AOP for WCF, illustrating simple logging and parameter masking using basic WCF extensions and Unity.
    - WeatherApp: an application for viewing the weather.

    Read the article

  • Throttling Cache Events

    - by dxfelcey
    The real-time eventing feature in Coherence is great for relaying state changes to other systems or to users. However, not all changes need to, or can, be sent to consumers. For instance:
    - Rapid changes cannot always be consumed or interpreted as fast as they are sent. A user watching changing stock prices may only be able to interpret and react to one change per second.
    - A client may be using a low-bandwidth connection, so rapidly sending events will only result in them being queued and delayed.
    - A large number of clients may need to be notified of state changes, and sending 100 events per second to 1000 clients cannot be supported with the available hardware, while 10 events per second to 1000 clients can. Note that this example assumes many of the state changes are to the same value.
    One simple approach to throttling Coherence cache events is to use a cache store to capture changes to one cache (the data cache) and periodically insert those changes into another cache (the events cache). Consumers interested in state changes to entries in the first cache register an interest (an event listener) against the second, events cache (a minimal listener sketch follows the configuration below). Because the cache store uses the write-behind feature, rapid updates to the same cache entry are coalesced, so updates are merged and written to the events cache at the configured interval. That interval is simply the write-behind delay time in the cache configuration, as shown below.

      <caching-schemes>
        <distributed-scheme>
          <scheme-name>CustomDistributedCacheScheme</scheme-name>
          <service-name>CustomDistributedCacheService</service-name>
          <thread-count>1</thread-count>
          <backing-map-scheme>
            <read-write-backing-map-scheme>
              <scheme-name>CustomRWBackingMapScheme</scheme-name>
              <internal-cache-scheme>
                <local-scheme />
              </internal-cache-scheme>
              <cachestore-scheme>
                <class-scheme>
                  <scheme-name>CustomCacheStoreScheme</scheme-name>
                  <class-name>com.oracle.coherence.test.CustomCacheStore</class-name>
                  <init-params>
                    <init-param>
                      <param-type>java.lang.String</param-type>
                      <param-value>{cache-name}</param-value>
                    </init-param>
                    <init-param>
                      <param-type>java.lang.String</param-type>
                      <!-- The name of the cache to write events to -->
                      <param-value>cqc-test</param-value>
                    </init-param>
                  </init-params>
                </class-scheme>
              </cachestore-scheme>
              <write-delay>1s</write-delay>
              <write-batch-factor>0</write-batch-factor>
            </read-write-backing-map-scheme>
          </backing-map-scheme>
          <autostart>true</autostart>
        </distributed-scheme>
      </caching-schemes>
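    On the consumer side, a minimal sketch of a listener against the events cache might look like the following. This assumes the standard com.tangosol NamedCache/MapListener API; the class name ThrottledEventConsumer is illustrative only, and the cache name cqc-test is taken from the configuration above.

      import com.tangosol.net.CacheFactory;
      import com.tangosol.net.NamedCache;
      import com.tangosol.util.MapEvent;
      import com.tangosol.util.MapListener;

      public class ThrottledEventConsumer {
          public static void main(String[] args) {
              // Listen on the events cache ("cqc-test" in the configuration above),
              // not on the data cache, so updates arrive at the write-behind interval.
              NamedCache eventsCache = CacheFactory.getCache("cqc-test");
              eventsCache.addMapListener(new MapListener() {
                  public void entryInserted(MapEvent e) {
                      System.out.println("New entry: " + e.getKey() + "=" + e.getNewValue());
                  }
                  public void entryUpdated(MapEvent e) {
                      System.out.println("Coalesced update: " + e.getKey() + "=" + e.getNewValue());
                  }
                  public void entryDeleted(MapEvent e) {
                      System.out.println("Removed entry: " + e.getKey());
                  }
              });
              // Block so the process stays alive to receive events.
              synchronized (ThrottledEventConsumer.class) {
                  try {
                      ThrottledEventConsumer.class.wait();
                  } catch (InterruptedException ignored) {
                      Thread.currentThread().interrupt();
                  }
              }
          }
      }

    Because the data cache's store only writes at the write-delay interval, this listener sees at most one coalesced update per entry per interval rather than every individual change.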
    The cache store implementation to perform this throttling is trivial and only involves overriding the basic cache store functions (the source and publishing cache names are compared with equals() so that a cache never publishes into itself):

      import java.util.Collection;
      import java.util.Map;

      import com.tangosol.net.CacheFactory;
      import com.tangosol.net.cache.CacheStore;

      public class CustomCacheStore implements CacheStore {

          private String publishingCacheName;
          private String sourceCacheName;

          public CustomCacheStore(String sourceCacheName, String publishingCacheName) {
              this.publishingCacheName = publishingCacheName;
              this.sourceCacheName = sourceCacheName;
          }

          @Override
          public Object load(Object key) {
              // Nothing is ever loaded back from the events cache.
              return null;
          }

          @Override
          public Map loadAll(Collection keyCollection) {
              return null;
          }

          @Override
          public void erase(Object key) {
              if (!sourceCacheName.equals(publishingCacheName)) {
                  CacheFactory.getCache(publishingCacheName).remove(key);
                  CacheFactory.log("Erasing entry: " + key, CacheFactory.LOG_DEBUG);
              }
          }

          @Override
          public void eraseAll(Collection keyCollection) {
              if (!sourceCacheName.equals(publishingCacheName)) {
                  for (Object key : keyCollection) {
                      CacheFactory.getCache(publishingCacheName).remove(key);
                      CacheFactory.log("Erasing collection entry: " + key, CacheFactory.LOG_DEBUG);
                  }
              }
          }

          @Override
          public void store(Object key, Object value) {
              // Called by the write-behind thread, so rapid updates arrive here already coalesced.
              if (!sourceCacheName.equals(publishingCacheName)) {
                  CacheFactory.getCache(publishingCacheName).put(key, value);
                  CacheFactory.log("Storing entry (key=value): " + key + "=" + value, CacheFactory.LOG_DEBUG);
              }
          }

          @Override
          public void storeAll(Map entryMap) {
              if (!sourceCacheName.equals(publishingCacheName)) {
                  CacheFactory.getCache(publishingCacheName).putAll(entryMap);
                  CacheFactory.log("Storing entries: " + entryMap, CacheFactory.LOG_DEBUG);
              }
          }
      }

    As you can see, each cache store operation on the data cache results in a similar operation on the events cache. This is a very simple pattern with a lot of additional possibilities, but it also has a few drawbacks you should be aware of:
    - It uses additional memory, because a duplicate copy of each entry held in the data cache must also be held in the events cache (two copies if the events cache has backups).
    - A data cache may already use a cache store, in which case a "multiplexing cache store" pattern must also be used so that changes are sent to both the existing cache store and the throttling cache store (a sketch of one is shown at the end of this post).
    If you would like to try out this throttling example you can download it here. I hope it's useful, and let me know if you spot any further optimizations.
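    On the second drawback, the multiplexing cache store itself is not included in the post. A minimal sketch, assuming the same com.tangosol CacheStore interface (the class name MultiplexingCacheStore and its varargs constructor are illustrative, not part of the original), could simply fan each operation out to a list of delegate stores:

      import java.util.Arrays;
      import java.util.Collection;
      import java.util.List;
      import java.util.Map;

      import com.tangosol.net.cache.CacheStore;

      public class MultiplexingCacheStore implements CacheStore {

          // Delegate stores, e.g. the existing persistence store plus the throttling CustomCacheStore.
          private final List delegates;

          public MultiplexingCacheStore(CacheStore... stores) {
              this.delegates = Arrays.asList(stores);
          }

          @Override
          public Object load(Object key) {
              // Loads go to the first delegate; the real persistence store is assumed to be first,
              // since the throttling store never loads anything.
              return ((CacheStore) delegates.get(0)).load(key);
          }

          @Override
          public Map loadAll(Collection keys) {
              return ((CacheStore) delegates.get(0)).loadAll(keys);
          }

          @Override
          public void store(Object key, Object value) {
              for (Object d : delegates) {
                  ((CacheStore) d).store(key, value);
              }
          }

          @Override
          public void storeAll(Map entries) {
              for (Object d : delegates) {
                  ((CacheStore) d).storeAll(entries);
              }
          }

          @Override
          public void erase(Object key) {
              for (Object d : delegates) {
                  ((CacheStore) d).erase(key);
              }
          }

          @Override
          public void eraseAll(Collection keys) {
              for (Object d : delegates) {
                  ((CacheStore) d).eraseAll(keys);
              }
          }
      }

    Configured in place of the original cache store, a multiplexer along these lines keeps the existing persistence behaviour while also feeding the events cache.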

    Read the article

  • Cisco 800 series won't forward port

    - by sam
    Hello ServerFault, I am trying to forward port 444 from my cisco router to my Web Server (192.168.0.2). As far as I can tell, my port forwarding is configured correctly, yet no traffic will pass through on port 444. Here is my config: ! version 12.3 service config no service pad service tcp-keepalives-in service tcp-keepalives-out service timestamps debug uptime service timestamps log uptime service password-encryption no service dhcp ! hostname QUESTMOUNT ! logging buffered 16386 informational logging rate-limit 100 except warnings no logging console no logging monitor enable secret 5 -removed- ! username administrator secret 5 -removed- username manager secret 5 -removed- clock timezone NZST 12 clock summer-time NZDT recurring 1 Sun Oct 2:00 3 Sun Mar 3:00 aaa new-model ! ! aaa authentication login default local aaa authentication login userlist local aaa authentication ppp default local aaa authorization network grouplist local aaa session-id common ip subnet-zero no ip source-route no ip domain lookup ip domain name quest.local ! ! no ip bootp server ip inspect name firewall tcp ip inspect name firewall udp ip inspect name firewall cuseeme ip inspect name firewall h323 ip inspect name firewall rcmd ip inspect name firewall realaudio ip inspect name firewall streamworks ip inspect name firewall vdolive ip inspect name firewall sqlnet ip inspect name firewall tftp ip inspect name firewall ftp ip inspect name firewall icmp ip inspect name firewall sip ip inspect name firewall fragment maximum 256 timeout 1 ip inspect name firewall netshow ip inspect name firewall rtsp ip inspect name firewall skinny ip inspect name firewall http ip audit notify log ip audit po max-events 100 ip audit name intrusion info list 3 action alarm ip audit name intrusion attack list 3 action alarm drop reset no ftp-server write-enable ! ! ! ! crypto isakmp policy 1 authentication pre-share ! crypto isakmp policy 2 encr 3des authentication pre-share group 2 ! crypto isakmp client configuration group staff key 0 qS;,sc:q<skro1^, domain quest.local pool vpnclients acl 106 ! ! crypto ipsec transform-set tr-null-sha esp-null esp-sha-hmac crypto ipsec transform-set tr-des-md5 esp-des esp-md5-hmac crypto ipsec transform-set tr-des-sha esp-des esp-sha-hmac crypto ipsec transform-set tr-3des-sha esp-3des esp-sha-hmac ! crypto dynamic-map vpnusers 1 description Client to Site VPN Users set transform-set tr-des-md5 ! ! crypto map cm-cryptomap client authentication list userlist crypto map cm-cryptomap isakmp authorization list grouplist crypto map cm-cryptomap client configuration address respond crypto map cm-cryptomap 65000 ipsec-isakmp dynamic vpnusers ! ! ! ! interface Ethernet0 ip address 192.168.0.254 255.255.255.0 ip access-group 102 in ip nat inside hold-queue 100 out ! interface ATM0 no ip address no atm ilmi-keepalive dsl operating-mode auto ! interface ATM0.1 point-to-point pvc 0/100 encapsulation aal5mux ppp dialer dialer pool-member 1 ! ! interface Dialer0 bandwidth 640 ip address negotiated ip access-group 101 in no ip redirects no ip unreachables ip nat outside ip inspect firewall out ip audit intrusion in encapsulation ppp no ip route-cache no ip mroute-cache dialer pool 1 dialer-group 1 no cdp enable ppp pap sent-username -removed- password 7 -removed- ppp ipcp dns request crypto map cm-cryptomap ! 
ip local pool vpnclients 192.168.99.1 192.168.99.254 ip nat inside source list 105 interface Dialer0 overload ip nat inside source static tcp 192.168.0.2 444 interface Dialer0 444 ip nat inside source static tcp 192.168.0.51 9000 interface Dialer0 9000 ip nat inside source static udp 192.168.0.2 1433 interface Dialer0 1433 ip nat inside source static tcp 192.168.0.2 1433 interface Dialer0 1433 ip nat inside source static tcp 192.168.0.2 25 interface Dialer0 25 ip classless ip route 0.0.0.0 0.0.0.0 Dialer0 ip http server no ip http secure-server ! ip access-list logging interval 10 logging 192.168.0.2 access-list 1 remark The local LAN. access-list 1 permit 192.168.0.0 0.0.0.255 access-list 2 permit 192.168.0.0 access-list 2 remark Where management can be done from. access-list 2 permit 192.168.0.0 0.0.0.255 access-list 3 remark Traffic not to check for intrustion detection. access-list 3 deny 192.168.99.0 0.0.0.255 access-list 3 permit any access-list 101 remark Traffic allowed to enter the router from the Internet access-list 101 permit ip 192.168.99.0 0.0.0.255 192.168.0.0 0.0.0.255 access-list 101 deny ip 0.0.0.0 0.255.255.255 any access-list 101 deny ip 10.0.0.0 0.255.255.255 any access-list 101 deny ip 127.0.0.0 0.255.255.255 any access-list 101 deny ip 169.254.0.0 0.0.255.255 any access-list 101 deny ip 172.16.0.0 0.15.255.255 any access-list 101 deny ip 192.0.2.0 0.0.0.255 any access-list 101 deny ip 192.168.0.0 0.0.255.255 any access-list 101 deny ip 198.18.0.0 0.1.255.255 any access-list 101 deny ip 224.0.0.0 0.15.255.255 any access-list 101 deny ip any host 255.255.255.255 access-list 101 permit tcp 67.228.209.128 0.0.0.15 any eq 1433 access-list 101 permit tcp host 120.136.2.22 any eq 1433 access-list 101 permit tcp host 123.100.90.58 any eq 1433 access-list 101 permit udp 67.228.209.128 0.0.0.15 any eq 1433 access-list 101 permit udp host 120.136.2.22 any eq 1433 access-list 101 permit udp host 123.100.90.58 any eq 1433 access-list 101 permit tcp any any eq 444 access-list 101 permit tcp any any eq 9000 access-list 101 permit tcp any any eq smtp access-list 101 permit udp any any eq non500-isakmp access-list 101 permit udp any any eq isakmp access-list 101 permit esp any any access-list 101 permit tcp any any eq 1723 access-list 101 permit gre any any access-list 101 permit tcp any any eq 22 access-list 101 permit tcp any any eq telnet access-list 102 remark Traffic allowed to enter the router from the Ethernet access-list 102 permit ip any host 192.168.0.254 access-list 102 deny ip any host 192.168.0.255 access-list 102 deny udp any any eq tftp log access-list 102 permit ip 192.168.0.0 0.0.0.255 192.168.99.0 0.0.0.255 access-list 102 deny ip any 0.0.0.0 0.255.255.255 log access-list 102 deny ip any 10.0.0.0 0.255.255.255 log access-list 102 deny ip any 127.0.0.0 0.255.255.255 log access-list 102 deny ip any 169.254.0.0 0.0.255.255 log access-list 102 deny ip any 172.16.0.0 0.15.255.255 log access-list 102 deny ip any 192.0.2.0 0.0.0.255 log access-list 102 deny ip any 192.168.0.0 0.0.255.255 log access-list 102 deny ip any 198.18.0.0 0.1.255.255 log access-list 102 deny udp any any eq 135 log access-list 102 deny tcp any any eq 135 log access-list 102 deny udp any any eq netbios-ns log access-list 102 deny udp any any eq netbios-dgm log access-list 102 deny tcp any any eq 445 log access-list 102 permit ip 192.168.0.0 0.0.0.255 any access-list 102 permit ip any host 255.255.255.255 access-list 102 deny ip any any log access-list 105 remark Traffic to NAT access-list 105 deny ip 
192.168.0.0 0.0.0.255 192.168.99.0 0.0.0.255 access-list 105 permit ip 192.168.0.0 0.0.0.255 any access-list 106 remark User to Site VPN Clients access-list 106 permit ip 192.168.0.0 0.0.0.255 any dialer-list 1 protocol ip permit ! line con 0 no modem enable line aux 0 line vty 0 4 access-class 2 in transport input telnet ssh transport output none ! scheduler max-task-time 5000 ! end any ideas? :)

    Read the article

  • Too many apache processes, killing the CPU

    - by RULE101
    I am noticed that too many apache processes killing the CPU in my dedicated server. 14193 (Trace) (Kill) nobody 0 66.1 0.0 /usr/local/apache/bin/httpd -k start -DSSL 14128 (Trace) (Kill) nobody 0 65.9 0.0 /usr/local/apache/bin/httpd -k start -DSSL 14136 (Trace) (Kill) nobody 0 65.9 0.0 /usr/local/apache/bin/httpd -k start -DSSL 14129 (Trace) (Kill) nobody 0 65.8 0.0 /usr/local/apache/bin/httpd -k start -DSSL 13419 (Trace) (Kill) nobody 0 65.7 0.0 /usr/local/apache/bin/httpd -k start -DSSL 13421 (Trace) (Kill) nobody 0 65.7 0.0 /usr/local/apache/bin/httpd -k start -DSSL 13426 (Trace) (Kill) nobody 0 65.7 0.0 /usr/local/apache/bin/httpd -k start -DSSL 13428 (Trace) (Kill) nobody 0 65.7 0.0 /usr/local/apache/bin/httpd -k start -DSSL 13429 (Trace) (Kill) nobody 0 65.7 0.0 /usr/local/apache/bin/httpd -k start -DSSL 12173 (Trace) (Kill) nobody 0 65.5 0.0 /usr/local/apache/bin/httpd -k start -DSSL 14073 (Trace) (Kill) nobody 0 65.5 0.0 /usr/local/apache/bin/httpd -k start -DSSL I am getting high load email notification from cpanel during the day. FROM httpd.conf Include "/usr/local/apache/conf/includes/pre_main_global.conf" Include "/usr/local/apache/conf/includes/pre_main_2.conf" LoadModule bwlimited_module modules/mod_bwlimited.so LoadModule h264_streaming_module /usr/local/apache/modules/mod_h264_streaming.so AddHandler h264-streaming.extensions .mp4 Include "/usr/local/apache/conf/php.conf" Include "/usr/local/apache/conf/includes/errordocument.conf" ErrorLog "logs/error_log" ScriptAliasMatch ^/?controlpanel/?$ /usr/local/cpanel/cgi-sys/redirect.cgi ScriptAliasMatch ^/?cpanel/?$ /usr/local/cpanel/cgi-sys/redirect.cgi ScriptAliasMatch ^/?kpanel/?$ /usr/local/cpanel/cgi-sys/redirect.cgi ScriptAliasMatch ^/?securecontrolpanel/?$ /usr/local/cpanel/cgi-sys/sredirect.cgi ScriptAliasMatch ^/?securecpanel/?$ /usr/local/cpanel/cgi-sys/sredirect.cgi ScriptAliasMatch ^/?securewhm/?$ /usr/local/cpanel/cgi-sys/swhmredirect.cgi ScriptAliasMatch ^/?webmail/?$ /usr/local/cpanel/cgi-sys/wredirect.cgi ScriptAliasMatch ^/?whm/?$ /usr/local/cpanel/cgi-sys/whmredirect.cgi RewriteEngine on AddType text/html .shtml Alias /akopia /usr/local/cpanel/3rdparty/interchange/share/akopia/ Alias /bandwidth /usr/local/bandmin/htdocs/ Alias /img-sys /usr/local/cpanel/img-sys/ Alias /interchange /usr/local/cpanel/3rdparty/interchange/share/interchange/ Alias /interchange-5 /usr/local/cpanel/3rdparty/interchange/share/interchange-5/ Alias /java-sys /usr/local/cpanel/java-sys/ Alias /mailman/archives /usr/local/cpanel/3rdparty/mailman/archives/public/ Alias /pipermail /usr/local/cpanel/3rdparty/mailman/archives/public/ Alias /sys_cpanel /usr/local/cpanel/sys_cpanel/ ScriptAlias /cgi-sys /usr/local/cpanel/cgi-sys/ ScriptAlias /mailman /usr/local/cpanel/3rdparty/mailman/cgi-bin/ <Directory "/"> AllowOverride All Options All </Directory> <Directory "/usr/local/apache/htdocs"> Options All AllowOverride None Require all granted </Directory> <Files ~ "^error_log$"> Order allow,deny Deny from all Satisfy All </Files> <Files ".ht*"> Require all denied </Files> <IfModule log_config_module> LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %b" common CustomLog "logs/access_log" common <IfModule logio_module> LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio </IfModule> </IfModule> <IfModule alias_module> ScriptAlias /cgi-bin/ "/usr/local/apache/cgi-bin/" </IfModule> <Directory "/usr/local/apache/cgi-bin"> AllowOverride None 
Options All Require all granted </Directory> <IfModule mime_module> TypesConfig conf/mime.types AddType application/x-compress .Z AddType application/x-gzip .gz .tgz </IfModule> <IfModule prefork.c> Mutex default mpm-accept </IfModule> <IfModule mod_log_config.c> LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %b" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent CustomLog logs/access_log common </IfModule> <IfModule worker.c> Mutex default mpm-accept </IfModule> # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Direct modifications to the Apache configuration file may be lost upon subsequent regeneration of the # # configuration file. To have modifications retained, all modifications must be checked into the # # configuration system by running: # # /usr/local/cpanel/bin/apache_conf_distiller --update # # To see if your changes will be conserved, regenerate the Apache configuration file by running: # # /usr/local/cpanel/bin/build_apache_conf # # and check the configuration file for your alterations. If your changes have been ignored, then they will # # need to be added directly to their respective template files. # # # # It is also possible to add custom directives to the various "Include" files loaded by this httpd.conf # # For detailed instructions on using Include files and the apache_conf_distiller with the new configuration # # system refer to the documentation at: http://www.cpanel.net/support/docs/ea/ea3/customdirectives.html # # # # This configuration file was built from the following templates: # # /var/cpanel/templates/apache2/main.default # # /var/cpanel/templates/apache2/main.local # # /var/cpanel/templates/apache2/vhost.default # # /var/cpanel/templates/apache2/vhost.local # # /var/cpanel/templates/apache2/ssl_vhost.default # # /var/cpanel/templates/apache2/ssl_vhost.local # # # # Templates with the '.local' extension will be preferred over templates with the '.default' extension. # # The only template updated by the apache_conf_distiller is main.default. 
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # PidFile logs/httpd.pid # Defined in /var/cpanel/cpanel.config: apache_port Listen 0.0.0.0:80 User nobody Group nobody ExtendedStatus On ServerAdmin [email protected] ServerName server.powerlabel.net LogLevel warn # These can be set in WHM under 'Apache Global Configuration' Timeout 300 ServerSignature On <IfModule prefork.c> </IfModule> RewriteEngine on RewriteMap LeechProtect prg:/usr/local/cpanel/bin/leechprotect Mutex file:/usr/local/apache/logs rewrite-map <IfModule !mod_ruid2.c> UserDir public_html </IfModule> <IfModule mod_ruid2.c> UserDir disabled </IfModule> # DirectoryIndex is set via the WHM -> Service Configuration -> Apache Setup -> DirectoryIndex Priority DirectoryIndex index.html.var index.htm index.html index.shtml index.xhtml index.wml index.perl index.pl index.plx index.ppl index.cgi index.jsp index.js index.jp index.php4 index.php3 index.php index.phtml default.htm default.html home.htm index.php5 Default.html Default.htm home.html # SSLCipherSuite can be set in WHM under 'Apache Global Configuration' SSLPassPhraseDialog builtin SSLUseStapling on SSLStaplingCache shmcb:/usr/local/apache/logs/stapling_cache_shmcb(256000) SSLSessionCache shmcb:/usr/local/apache/logs/ssl_gcache_data_shmcb(1024000) SSLSessionCacheTimeout 300 Mutex file:/usr/local/apache/logs ssl-cache SSLRandomSeed startup builtin SSLRandomSeed connect builtin # Defined in /var/cpanel/cpanel.config: apache_ssl_port Listen 0.0.0.0:443 AddType application/x-x509-ca-cert .crt AddType application/x-pkcs7-crl .crl AddHandler cgi-script .cgi .pl .plx .ppl .perl AddHandler server-parsed .shtml AddType text/html .shtml AddType application/x-tar .tgz AddType text/vnd.wap.wml .wml AddType image/vnd.wap.wbmp .wbmp AddType text/vnd.wap.wmlscript .wmls AddType application/vnd.wap.wmlc .wmlc AddType application/vnd.wap.wmlscriptc .wmlsc <Location /whm-server-status> SetHandler server-status Order deny,allow Deny from all Allow from 127.0.0.1 </Location> # SUEXEC is supported Include "/usr/local/apache/conf/includes/pre_virtualhost_global.conf" Include "/usr/local/apache/conf/includes/pre_virtualhost_2.conf" What can cause this and how can i fix it ?

    Read the article

  • "domain crashed" when creating new Xen instance

    - by user47650
    I have downloaded a Xen virtual machine image from the appscale project, and I am trying to start it up. However once I run the command; xm create -c -f xen.conf The instance immediately crashes and provides no console output. however it produces logs that I have posted below. but this is the error; [2011-03-01 12:34:03 xend.XendDomainInfo 3580] WARNING (XendDomainInfo:1178) Domain has crashed: name=appscale-1.4b id=10. I have managed to mount the root.img file locally and verify that it is actually an ext3 file system. I am running Xen 3.0.3 that is a stock RPM from the CentOS 5 repos; # rpm -qa | grep -i xen xen-libs-3.0.3-105.el5_5.5 xen-3.0.3-105.el5_5.5 xen-libs-3.0.3-105.el5_5.5 kernel-xen-2.6.18-194.32.1.el5 any suggestions on how to proceed with troubleshooting? (i am a newbie to Xen) so far I have enabled console logging, but the log file is empty. ==> domain-builder-ng.log <== xc_dom_allocate: cmdline=" ip=:1.2.3.4::::eth0:dhcp root=/dev/sda1 ro xencons=tty console=tty1 console=hvc0 debugger=y debug=y sync_console", features="" xc_dom_kernel_file: filename="/boot/vmlinuz-2.6.27-7-server" xc_dom_malloc_filemap : 2284 kB xc_dom_ramdisk_file: filename="/boot/initrd.img-2.6.27-7-server" xc_dom_malloc_filemap : 9005 kB xc_dom_boot_xen_init: ver 3.1, caps xen-3.0-x86_64 xen-3.0-x86_32p xc_dom_parse_image: called xc_dom_find_loader: trying ELF-generic loader ... failed xc_dom_find_loader: trying Linux bzImage loader ... xc_dom_malloc : 9875 kB xc_dom_do_gunzip: unzip ok, 0x234bb2 -> 0x9a4de0 OK elf_parse_binary: phdr: paddr=0x200000 memsz=0x447000 elf_parse_binary: phdr: paddr=0x647000 memsz=0xab888 elf_parse_binary: phdr: paddr=0x6f3000 memsz=0x908 elf_parse_binary: phdr: paddr=0x6f4000 memsz=0x1c2f9c elf_parse_binary: memory: 0x200000 -> 0x8b6f9c elf_xen_parse_note: GUEST_OS = "linux" elf_xen_parse_note: GUEST_VERSION = "2.6" elf_xen_parse_note: XEN_VERSION = "xen-3.0" elf_xen_parse_note: VIRT_BASE = 0xffffffff80000000 elf_xen_parse_note: ENTRY = 0xffffffff8071e200 elf_xen_parse_note: HYPERCALL_PAGE = 0xffffffff80209000 elf_xen_parse_note: FEATURES = "!writable_page_tables|pae_pgdir_above_4gb" elf_xen_parse_note: PAE_MODE = "yes" elf_xen_parse_note: LOADER = "generic" elf_xen_parse_note: unknown xen elf note (0xd) elf_xen_parse_note: SUSPEND_CANCEL = 0x1 elf_xen_parse_note: HV_START_LOW = 0xffff800000000000 elf_xen_parse_note: PADDR_OFFSET = 0x0 elf_xen_addr_calc_check: addresses: virt_base = 0xffffffff80000000 elf_paddr_offset = 0x0 virt_offset = 0xffffffff80000000 virt_kstart = 0xffffffff80200000 virt_kend = 0xffffffff808b6f9c virt_entry = 0xffffffff8071e200 xc_dom_parse_elf_kernel: xen-3.0-x86_64: 0xffffffff80200000 -> 0xffffffff808b6f9c xc_dom_mem_init: mem 1024 MB, pages 0x40000 pages, 4k each xc_dom_mem_init: 0x40000 pages xc_dom_boot_mem_init: called x86_compat: guest xen-3.0-x86_64, address size 64 xc_dom_malloc : 2048 kB ==> xend.log <== [2011-03-01 12:34:01 xend.XendDomainInfo 3580] INFO (XendDomainInfo:2330) Dev 2049 still active, looping... [2011-03-01 12:34:01 xend.XendDomainInfo 3580] INFO (XendDomainInfo:2330) Dev 2049 still active, looping... [2011-03-01 12:34:01 xend.XendDomainInfo 3580] INFO (XendDomainInfo:2330) Dev 2049 still active, looping... [2011-03-01 12:34:01 xend.XendDomainInfo 3580] INFO (XendDomainInfo:2330) Dev 2049 still active, looping... [2011-03-01 12:34:01 xend.XendDomainInfo 3580] INFO (XendDomainInfo:957) Dev 0 still active, looping... [2011-03-01 12:34:01 xend.XendDomainInfo 3580] INFO (XendDomainInfo:957) Dev 0 still active, looping... 
[2011-03-01 12:34:01 xend.XendDomainInfo 3580] INFO (XendDomainInfo:957) Dev 0 still active, looping... [2011-03-01 12:34:02 xend.XendDomainInfo 3580] INFO (XendDomainInfo:957) Dev 0 still active, looping... [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:2114) UUID Created: True [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:2115) Devices to release: [], domid = 9 [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:2127) Releasing PVFB backend devices ... [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:207) XendDomainInfo.create(['domain', ['domid', 9], ['uuid', 'd5f22dd4-8dc2-f51f-84e9-eea7d71ea1d0'], ['vcpus', 1], ['vcpu_avail', 1], ['cpu_cap', 0], ['cpu_weight', 256], ['memory', 1024], ['shadow_memory', 0], ['maxmem', 1024], ['features', ''], ['name', 'appscale-1.4b'], ['on_poweroff', 'destroy'], ['on_reboot', 'restart'], ['on_crash', 'restart'], ['image', ['linux', ['kernel', '/boot/vmlinuz-2.6.27-7-server'], ['ramdisk', '/boot/initrd.img-2.6.27-7-server'], ['ip', ':1.2.3.4::::eth0:dhcp'], ['root', '/dev/sda1 ro'], ['args', 'xencons=tty console=tty1 console=hvc0 debugger=y debug=y sync_console']]], ['cpus', []], ['device', ['vif', ['backend', 0], ['script', 'vif-bridge'], ['mac', '00:16:3B:72:10:E4']]], ['device', ['vbd', ['backend', 0], ['dev', 'sda1:disk'], ['uname', 'file:/local/xen/domains/appscale1.4/root.img'], ['mode', 'w']]], ['state', '----c-'], ['shutdown_reason', 'crash'], ['cpu_time', 0.000339131], ['online_vcpus', 1], ['up_time', '0.952092885971'], ['start_time', '1299011639.92'], ['store_mfn', 1169289], ['console_mfn', 1169288]]) [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:329) parseConfig: config is ['domain', ['domid', 9], ['uuid', 'd5f22dd4-8dc2-f51f-84e9-eea7d71ea1d0'], ['vcpus', 1], ['vcpu_avail', 1], ['cpu_cap', 0], ['cpu_weight', 256], ['memory', 1024], ['shadow_memory', 0], ['maxmem', 1024], ['features', ''], ['name', 'appscale-1.4b'], ['on_poweroff', 'destroy'], ['on_reboot', 'restart'], ['on_crash', 'restart'], ['image', ['linux', ['kernel', '/boot/vmlinuz-2.6.27-7-server'], ['ramdisk', '/boot/initrd.img-2.6.27-7-server'], ['ip', ':1.2.3.4::::eth0:dhcp'], ['root', '/dev/sda1 ro'], ['args', 'xencons=tty console=tty1 console=hvc0 debugger=y debug=y sync_console']]], ['cpus', []], ['device', ['vif', ['backend', 0], ['script', 'vif-bridge'], ['mac', '00:16:3B:72:10:E4']]], ['device', ['vbd', ['backend', 0], ['dev', 'sda1:disk'], ['uname', 'file:/local/xen/domains/appscale1.4/root.img'], ['mode', 'w']]], ['state', '----c-'], ['shutdown_reason', 'crash'], ['cpu_time', 0.000339131], ['online_vcpus', 1], ['up_time', '0.952092885971'], ['start_time', '1299011639.92'], ['store_mfn', 1169289], ['console_mfn', 1169288]] [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:446) parseConfig: result is {'features': '', 'image': ['linux', ['kernel', '/boot/vmlinuz-2.6.27-7-server'], ['ramdisk', '/boot/initrd.img-2.6.27-7-server'], ['ip', ':1.2.3.4::::eth0:dhcp'], ['root', '/dev/sda1 ro'], ['args', 'xencons=tty console=tty1 console=hvc0 debugger=y debug=y sync_console']], 'cpus': [], 'vcpu_avail': 1, 'backend': [], 'uuid': 'd5f22dd4-8dc2-f51f-84e9-eea7d71ea1d0', 'on_reboot': 'restart', 'cpu_weight': 256.0, 'memory': 1024, 'cpu_cap': 0, 'localtime': None, 'timer_mode': None, 'start_time': 1299011639.9200001, 'on_poweroff': 'destroy', 'on_crash': 'restart', 'device': [('vif', ['vif', ['backend', 0], ['script', 'vif-bridge'], ['mac', '00:16:3B:72:10:E4']]), 
('vbd', ['vbd', ['backend', 0], ['dev', 'sda1:disk'], ['uname', 'file:/local/xen/domains/appscale1.4/root.img'], ['mode', 'w']])], 'bootloader': None, 'maxmem': 1024, 'shadow_memory': 0, 'name': 'appscale-1.4b', 'bootloader_args': None, 'vcpus': 1, 'cpu': None} [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:1784) XendDomainInfo.construct: None [2011-03-01 12:34:02 xend 3580] DEBUG (balloon:145) Balloon: 3034420 KiB free; need 4096; done. [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:1953) XendDomainInfo.initDomain: 10 256.0 [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:1994) _initDomain:shadow_memory=0x0, maxmem=0x400, memory=0x400. [2011-03-01 12:34:02 xend 3580] DEBUG (balloon:145) Balloon: 3034412 KiB free; need 1048576; done. [2011-03-01 12:34:02 xend 3580] INFO (image:139) buildDomain os=linux dom=10 vcpus=1 [2011-03-01 12:34:02 xend 3580] DEBUG (image:208) domid = 10 [2011-03-01 12:34:02 xend 3580] DEBUG (image:209) memsize = 1024 [2011-03-01 12:34:02 xend 3580] DEBUG (image:210) image = /boot/vmlinuz-2.6.27-7-server [2011-03-01 12:34:02 xend 3580] DEBUG (image:211) store_evtchn = 1 [2011-03-01 12:34:02 xend 3580] DEBUG (image:212) console_evtchn = 2 [2011-03-01 12:34:02 xend 3580] DEBUG (image:213) cmdline = ip=:1.2.3.4::::eth0:dhcp root=/dev/sda1 ro xencons=tty console=tty1 console=hvc0 debugger=y debug=y sync_console [2011-03-01 12:34:02 xend 3580] DEBUG (image:214) ramdisk = /boot/initrd.img-2.6.27-7-server [2011-03-01 12:34:02 xend 3580] DEBUG (image:215) vcpus = 1 [2011-03-01 12:34:02 xend 3580] DEBUG (image:216) features = ==> domain-builder-ng.log <== xc_dom_build_image: called xc_dom_alloc_segment: kernel : 0xffffffff80200000 -> 0xffffffff808b7000 (pfn 0x200 + 0x6b7 pages) xc_dom_pfn_to_ptr: domU mapping: pfn 0x200+0x6b7 at 0x2aaaab5f6000 elf_load_binary: phdr 0 at 0x0x2aaaab5f6000 -> 0x0x2aaaaba3d000 elf_load_binary: phdr 1 at 0x0x2aaaaba3d000 -> 0x0x2aaaabae8888 elf_load_binary: phdr 2 at 0x0x2aaaabae9000 -> 0x0x2aaaabae9908 elf_load_binary: phdr 3 at 0x0x2aaaabaea000 -> 0x0x2aaaabb9a004 xc_dom_alloc_segment: ramdisk : 0xffffffff808b7000 -> 0xffffffff82382000 (pfn 0x8b7 + 0x1acb pages) xc_dom_malloc : 160 kB xc_dom_pfn_to_ptr: domU mapping: pfn 0x8b7+0x1acb at 0x2aaab0000000 xc_dom_do_gunzip: unzip ok, 0x8cb5e7 -> 0x1aca210 xc_dom_alloc_segment: phys2mach : 0xffffffff82382000 -> 0xffffffff82582000 (pfn 0x2382 + 0x200 pages) xc_dom_pfn_to_ptr: domU mapping: pfn 0x2382+0x200 at 0x2aaab1acb000 xc_dom_alloc_page : start info : 0xffffffff82582000 (pfn 0x2582) xc_dom_alloc_page : xenstore : 0xffffffff82583000 (pfn 0x2583) xc_dom_alloc_page : console : 0xffffffff82584000 (pfn 0x2584) nr_page_tables: 0x0000ffffffffffff/48: 0xffff000000000000 -> 0xffffffffffffffff, 1 table(s) nr_page_tables: 0x0000007fffffffff/39: 0xffffff8000000000 -> 0xffffffffffffffff, 1 table(s) nr_page_tables: 0x000000003fffffff/30: 0xffffffff80000000 -> 0xffffffffbfffffff, 1 table(s) nr_page_tables: 0x00000000001fffff/21: 0xffffffff80000000 -> 0xffffffff827fffff, 20 table(s) xc_dom_alloc_segment: page tables : 0xffffffff82585000 -> 0xffffffff8259c000 (pfn 0x2585 + 0x17 pages) xc_dom_pfn_to_ptr: domU mapping: pfn 0x2585+0x17 at 0x2aaab1ccb000 xc_dom_alloc_page : boot stack : 0xffffffff8259c000 (pfn 0x259c) xc_dom_build_image : virt_alloc_end : 0xffffffff8259d000 xc_dom_build_image : virt_pgtab_end : 0xffffffff82800000 xc_dom_boot_image: called arch_setup_bootearly: doing nothing xc_dom_compat_check: supported guest type: xen-3.0-x86_64 <= 
matches xc_dom_compat_check: supported guest type: xen-3.0-x86_32p xc_dom_update_guest_p2m: dst 64bit, pages 0x40000 clear_page: pfn 0x2584, mfn 0x11d788 clear_page: pfn 0x2583, mfn 0x11d789 xc_dom_pfn_to_ptr: domU mapping: pfn 0x2582+0x1 at 0x2aaab1ce2000 start_info_x86_64: called setup_hypercall_page: vaddr=0xffffffff80209000 pfn=0x209 domain builder memory footprint allocated malloc : 12139 kB anon mmap : 0 bytes mapped file mmap : 11289 kB domU mmap : 35 MB arch_setup_bootlate: shared_info: pfn 0x0, mfn 0xd6fe1 shared_info_x86_64: called vcpu_x86_64: called vcpu_x86_64: cr3: pfn 0x2585 mfn 0x11d787 launch_vm: called, ctxt=0x97b21f8 xc_dom_release: called ==> xend.log <== [2011-03-01 12:34:02 xend 3580] DEBUG (DevController:114) DevController: writing {'mac': '00:16:3B:72:10:E4', 'handle': '0', 'protocol': 'x86_64-abi', 'backend-id': '0', 'state': '1', 'backend': '/local/domain/0/backend/vif/10/0'} to /local/domain/10/device/vif/0. [2011-03-01 12:34:02 xend 3580] DEBUG (DevController:116) DevController: writing {'domain': 'appscale-1.4b', 'handle': '0', 'script': '/etc/xen/scripts/vif-bridge', 'state': '1', 'frontend': '/local/domain/10/device/vif/0', 'mac': '00:16:3B:72:10:E4', 'online': '1', 'frontend-id': '10'} to /local/domain/0/backend/vif/10/0. [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:634) Checking for duplicate for uname: /local/xen/domains/appscale1.4/root.img [file:/local/xen/domains/appscale1.4/root.img], dev: sda1:disk, mode: w [2011-03-01 12:34:02 xend 3580] DEBUG (blkif:27) exception looking up device number for sda1:disk: [Errno 2] No such file or directory: '/dev/sda1:disk' [2011-03-01 12:34:02 xend 3580] DEBUG (blkif:27) exception looking up device number for sda1: [Errno 2] No such file or directory: '/dev/sda1' [2011-03-01 12:34:02 xend 3580] DEBUG (DevController:114) DevController: writing {'virtual-device': '2049', 'device-type': 'disk', 'protocol': 'x86_64-abi', 'backend-id': '0', 'state': '1', 'backend': '/local/domain/0/backend/vbd/10/2049'} to /local/domain/10/device/vbd/2049. [2011-03-01 12:34:02 xend 3580] DEBUG (DevController:116) DevController: writing {'domain': 'appscale-1.4b', 'frontend': '/local/domain/10/device/vbd/2049', 'format': 'raw', 'dev': 'sda1', 'state': '1', 'params': '/local/xen/domains/appscale1.4/root.img', 'mode': 'w', 'online': '1', 'frontend-id': '10', 'type': 'file'} to /local/domain/0/backend/vbd/10/2049. [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:993) Storing VM details: {'shadow_memory': '0', 'uuid': 'd5f22dd4-8dc2-f51f-84e9-eea7d71ea1d0', 'on_reboot': 'restart', 'start_time': '1299011642.74', 'on_poweroff': 'destroy', 'name': 'appscale-1.4b', 'xend/restart_count': '0', 'vcpus': '1', 'vcpu_avail': '1', 'memory': '1024', 'on_crash': 'restart', 'image': "(linux (kernel /boot/vmlinuz-2.6.27-7-server) (ramdisk /boot/initrd.img-2.6.27-7-server) (ip :1.2.3.4::::eth0:dhcp) (root '/dev/sda1 ro') (args 'xencons=tty console=tty1 console=hvc0 debugger=y debug=y sync_console'))", 'maxmem': '1024'} [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:1028) Storing domain details: {'console/ring-ref': '1169288', 'console/port': '2', 'name': 'appscale-1.4b', 'console/limit': '1048576', 'vm': '/vm/d5f22dd4-8dc2-f51f-84e9-eea7d71ea1d0', 'domid': '10', 'cpu/0/availability': 'online', 'memory/target': '1048576', 'store/ring-ref': '1169289', 'store/port': '1'} [2011-03-01 12:34:02 xend 3580] DEBUG (DevController:158) Waiting for devices vif. 
[2011-03-01 12:34:02 xend 3580] DEBUG (DevController:164) Waiting for 0. [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:1250) XendDomainInfo.handleShutdownWatch [2011-03-01 12:34:02 xend 3580] DEBUG (DevController:509) hotplugStatusCallback /local/domain/0/backend/vif/10/0/hotplug-status. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:509) hotplugStatusCallback /local/domain/0/backend/vif/10/0/hotplug-status. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:523) hotplugStatusCallback 1. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:158) Waiting for devices usb. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:158) Waiting for devices vbd. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:164) Waiting for 2049. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:509) hotplugStatusCallback /local/domain/0/backend/vbd/10/2049/hotplug-status. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:509) hotplugStatusCallback /local/domain/0/backend/vbd/10/2049/hotplug-status. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:523) hotplugStatusCallback 1. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:158) Waiting for devices irq. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:158) Waiting for devices vkbd. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:158) Waiting for devices vfb. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:158) Waiting for devices pci. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:158) Waiting for devices ioports. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:158) Waiting for devices tap. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:158) Waiting for devices vtpm. [2011-03-01 12:34:03 xend.XendDomainInfo 3580] WARNING (XendDomainInfo:1178) Domain has crashed: name=appscale-1.4b id=10. [2011-03-01 12:34:03 xend.XendDomainInfo 3580] ERROR (XendDomainInfo:2654) VM appscale-1.4b restarting too fast (2.275545 seconds since the last restart). Refusing to restart to avoid loops. [2011-03-01 12:34:03 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:2189) XendDomainInfo.destroy: domid=10 ==> xen-hotplug.log <== Nothing to flush. ==> xend.log <== [2011-03-01 12:34:03 xend.XendDomainInfo 3580] INFO (XendDomainInfo:2330) Dev 2049 still active, looping... [2011-03-01 12:34:03 xend.XendDomainInfo 3580] INFO (XendDomainInfo:2330) Dev 2049 still active, looping... [2011-03-01 12:34:03 xend.XendDomainInfo 3580] INFO (XendDomainInfo:2330) Dev 2049 still active, looping... [2011-03-01 12:34:03 xend.XendDomainInfo 3580] INFO (XendDomainInfo:2330) Dev 2049 still active, looping... [2011-03-01 12:34:03 xend.XendDomainInfo 3580] INFO (XendDomainInfo:2330) Dev 2049 still active, looping... [2011-03-01 12:34:03 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:2114) UUID Created: True [2011-03-01 12:34:03 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:2115) Devices to release: [], domid = 10 [2011-03-01 12:34:03 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:2127) Releasing PVFB backend devices ... 
    And this is the xen.conf file that I am using:

      # cat xen.conf
      # Configuration file for the Xen instance AppScale, created
      # by VMBuilder
      kernel = '/boot/vmlinuz-2.6.27-7-server'
      ramdisk = '/boot/initrd.img-2.6.27-7-server'
      memory = 1024
      vcpus = 1
      root = '/dev/sda1 ro'
      disk = [ 'file:/local/xen/domains/appscale1.4/root.img,sda1,w', ]
      name = 'appscale-1.4b'
      dhcp = 'dhcp'
      vif = [ 'mac=00:16:3B:72:10:E4' ]
      on_poweroff = 'destroy'
      on_reboot = 'restart'
      on_crash = 'restart'
      extra = 'xencons=tty console=tty1 console=hvc0 debugger=y debug=y sync_console'

    Read the article

  • DirectX works for 64-bit but not 32-bit

    - by dtbarne
    I'm trying to play a game (Civilization 5) which was previously working but no longer is. I believe I've narrowed it down to a DirectX issue, because I get an error running dxdiag.exe in 32-bit mode. My goal (at least I believe) is to get Direct3D Acceleration "Enabled" in dxdiag, as it is in the 64-bit dxdiag. A very similar issue is here: http://answers.microsoft.com/en-us/windows/forum/windows_7-gaming/direct3d-acceleration-is-not-available-in-windows/4c345e6e-dc68-e011-8dfc-68b599b31bf5?page=1 The proposed answer, which looks very promising, doesn't seem to work for me. Like other users in that thread, HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Direct3D\Drivers does not have a SoftwareOnly key to change. I even tried manually adding it as a string and as a dword, to no avail. I have an NVIDIA GeForce GT 525M, and before you ask, yes, I've tried updating (also uninstalling and reinstalling) my drivers. I've also tried doing the same with DirectX (and Civilization 5, for that matter). I've been debugging for some 4+ hours now after a full day of work and I've run out of ideas. I'm hoping somebody knows the solution here! :)
    Here's what I see when I open dxdiag: "DxDiag has detected that there might have been a problem accessing Direct3D the last time this program was used. Would you like to bypass Direct3D this time?"
    No - Crash
    Yes - Works, but in the Display tab:
    DirectDraw Acceleration: Disabled
    Direct3D Acceleration: Not Available
    AGP Texture Acceleration: Not Available
    If I click "Run 64-bit DxDiag", all three are "Enabled". I should also note that I've tried the following steps that Microsoft suggests, but I'm not able to, as the "Change Settings" button is disabled:
    Some programs run very slowly, or not at all, unless Microsoft DirectDraw or Direct3D hardware acceleration is turned on. To determine this, click the Display tab, and then under DirectX Features, check to see whether DirectDraw, Direct3D, and AGP Texture Acceleration appear as Enabled. If not, try turning on hardware acceleration. Click to open Screen Resolution. Click Advanced settings. Click the Troubleshoot tab, and then click Change settings. If you're prompted for an administrator password or confirmation, type the password or provide confirmation. Move the Hardware Acceleration slider to Full.
    Full dxdiag dump:
    ------------------
    System Information
    ------------------
    Time of this report: 11/8/2012, 23:13:24
    Machine name: DTBARNE
    Operating System: Windows 7 Professional 64-bit (6.1, Build 7601) Service Pack 1 (7601.win7sp1_gdr.120830-0333)
    Language: English (Regional Setting: English)
    System Manufacturer: Dell Inc.
    System Model: Dell System XPS L502X
    BIOS: Default System BIOS
    Processor: Intel(R) Core(TM) i5-2450M CPU @ 2.50GHz (4 CPUs), ~2.5GHz
    Memory: 8192MB RAM
    Available OS Memory: 8086MB RAM
    Page File: 2466MB used, 13704MB available
    Windows Dir: C:\Windows
    DirectX Version: DirectX 11
    DX Setup Parameters: Not found
    User DPI Setting: Using System DPI
    System DPI Setting: 96 DPI (100 percent)
    DWM DPI Scaling: Disabled
    DxDiag Version: 6.01.7601.17514 32bit Unicode
    DxDiag Previously: Crashed in Direct3D (stage 2). Re-running DxDiag with "dontskip" command line parameter or choosing not to bypass information gathering when prompted might result in DxDiag successfully obtaining this information
    ------------
    DxDiag Notes
    ------------
    Display Tab 1: No problems found.
    Sound Tab 1: No problems found.
    Sound Tab 2: No problems found.
    Input Tab: No problems found.
-------------------- DirectX Debug Levels -------------------- Direct3D: 0/4 (retail) DirectDraw: 0/4 (retail) DirectInput: 0/5 (retail) DirectMusic: 0/5 (retail) DirectPlay: 0/9 (retail) DirectSound: 0/5 (retail) DirectShow: 0/6 (retail) --------------- Display Devices --------------- Card name: Intel(R) HD Graphics 3000 Manufacturer: Chip type: DAC type: Device Key: Enum\PCI\VEN_8086&DEV_0126&SUBSYS_04B61028&REV_09 Display Memory: Dedicated Memory: n/a Shared Memory: n/a Current Mode: 1920 x 1080 (32 bit) (60Hz) Monitor Name: Generic PnP Monitor Monitor Model: Monitor Id: Native Mode: Output Type: Driver Name: Driver File Version: () Driver Version: DDI Version: Driver Model: WDDM 1.1 Driver Attributes: Final Retail Driver Date/Size: , 0 bytes WHQL Logo'd: n/a WHQL Date Stamp: n/a Device Identifier: Vendor ID: Device ID: SubSys ID: Revision ID: Driver Strong Name: oem11.inf:IntelGfx.NTamd64.6.0:iSNBM0:8.15.10.2696:pci\ven_8086&dev_0126&subsys_04b61028 Rank Of Driver: 00E60001 Video Accel: Deinterlace Caps: n/a D3D9 Overlay: DXVA-HD: DDraw Status: Disabled D3D Status: Not Available AGP Status: Not Available ------------- Sound Devices ------------- Description: Speakers (High Definition Audio Device) Default Sound Playback: Yes Default Voice Playback: Yes Hardware ID: HDAUDIO\FUNC_01&VEN_10EC&DEV_0665&SUBSYS_102804B6&REV_1000 Manufacturer ID: 1 Product ID: 65535 Type: WDM Driver Name: HdAudio.sys Driver Version: 6.01.7601.17514 (English) Driver Attributes: Final Retail WHQL Logo'd: Yes Date and Size: 11/20/2010 22:23:47, 350208 bytes Other Files: Driver Provider: Microsoft HW Accel Level: Basic Cap Flags: 0xF1F Min/Max Sample Rate: 100, 200000 Static/Strm HW Mix Bufs: 1, 0 Static/Strm HW 3D Bufs: 0, 0 HW Memory: 0 Voice Management: No EAX(tm) 2.0 Listen/Src: No, No I3DL2(tm) Listen/Src: No, No Sensaura(tm) ZoomFX(tm): No Description: Digital Audio (S/PDIF) (High Definition Audio Device) Default Sound Playback: No Default Voice Playback: No Hardware ID: HDAUDIO\FUNC_01&VEN_10EC&DEV_0665&SUBSYS_102804B6&REV_1000 Manufacturer ID: 1 Product ID: 65535 Type: WDM Driver Name: HdAudio.sys Driver Version: 6.01.7601.17514 (English) Driver Attributes: Final Retail WHQL Logo'd: Yes Date and Size: 11/20/2010 22:23:47, 350208 bytes Other Files: Driver Provider: Microsoft HW Accel Level: Basic Cap Flags: 0xF1F Min/Max Sample Rate: 100, 200000 Static/Strm HW Mix Bufs: 1, 0 Static/Strm HW 3D Bufs: 0, 0 HW Memory: 0 Voice Management: No EAX(tm) 2.0 Listen/Src: No, No I3DL2(tm) Listen/Src: No, No Sensaura(tm) ZoomFX(tm): No --------------------- Sound Capture Devices --------------------- Description: Microphone (High Definition Audio Device) Default Sound Capture: Yes Default Voice Capture: Yes Driver Name: HdAudio.sys Driver Version: 6.01.7601.17514 (English) Driver Attributes: Final Retail Date and Size: 11/20/2010 22:23:47, 350208 bytes Cap Flags: 0x1 Format Flags: 0xFFFFF ------------------- DirectInput Devices ------------------- Device Name: Mouse Attached: 1 Controller ID: n/a Vendor/Product ID: n/a FF Driver: n/a Device Name: Keyboard Attached: 1 Controller ID: n/a Vendor/Product ID: n/a FF Driver: n/a Poll w/ Interrupt: No ----------- USB Devices ----------- + USB Root Hub | Vendor/Product ID: 0x8086, 0x1C26 | Matching Device ID: usb\root_hub20 | Service: usbhub | +-+ Generic USB Hub | | Vendor/Product ID: 0x8087, 0x0024 | | Location: Port_#0001.Hub_#0002 | | Matching Device ID: usb\class_09 | | Service: usbhub ---------------- Gameport Devices ---------------- ------------ PS/2 Devices 
------------ + Standard PS/2 Keyboard | Matching Device ID: *pnp0303 | Service: i8042prt | + Terminal Server Keyboard Driver | Matching Device ID: root\rdp_kbd | Upper Filters: kbdclass | Service: TermDD | + Synaptics PS/2 Port TouchPad | Matching Device ID: *dll04b6 | Upper Filters: SynTP | Service: i8042prt | + Terminal Server Mouse Driver | Matching Device ID: root\rdp_mou | Upper Filters: mouclass | Service: TermDD ------------------------ Disk & DVD/CD-ROM Drives ------------------------ Drive: C: Free Space: 26.2 GB Total Space: 122.0 GB File System: NTFS Model: M4-CT128M4SSD2 ATA Device Drive: D: Model: Optiarc DVDRWBD BC-5540H ATA Device Driver: c:\windows\system32\drivers\cdrom.sys, 6.01.7601.17514 (English), , 0 bytes -------------- System Devices -------------- Name: High Definition Audio Controller Device ID: PCI\VEN_8086&DEV_1C20&SUBSYS_04B61028&REV_05\3&11583659&0&D8 Driver: n/a Name: PCI standard host CPU bridge Device ID: PCI\VEN_8086&DEV_0104&SUBSYS_04B61028&REV_09\3&11583659&0&00 Driver: n/a Name: PCI standard PCI-to-PCI bridge Device ID: PCI\VEN_8086&DEV_1C1A&SUBSYS_04B61028&REV_B5\3&11583659&0&E5 Driver: n/a Name: PCI standard PCI-to-PCI bridge Device ID: PCI\VEN_8086&DEV_0101&SUBSYS_20108086&REV_09\3&11583659&0&08 Driver: n/a Name: PCI standard PCI-to-PCI bridge Device ID: PCI\VEN_8086&DEV_1C18&SUBSYS_04B61028&REV_B5\3&11583659&0&E4 Driver: n/a Name: Intel(R) Centrino(R) Advanced-N 6230 Device ID: PCI\VEN_8086&DEV_0091&SUBSYS_52218086&REV_34\4&2634DE8D&0&00E1 Driver: n/a Name: PCI standard ISA bridge Device ID: PCI\VEN_8086&DEV_1C4B&SUBSYS_04B61028&REV_05\3&11583659&0&F8 Driver: n/a Name: PCI standard PCI-to-PCI bridge Device ID: PCI\VEN_8086&DEV_1C16&SUBSYS_04B61028&REV_B5\3&11583659&0&E3 Driver: n/a Name: Realtek PCIe GBE Family Controller Device ID: PCI\VEN_10EC&DEV_8168&SUBSYS_04B61028&REV_06\4&109EAB2F&0&00E5 Driver: n/a Name: Intel(R) Management Engine Interface Device ID: PCI\VEN_8086&DEV_1C3A&SUBSYS_04B61028&REV_04\3&11583659&0&B0 Driver: n/a Name: PCI standard PCI-to-PCI bridge Device ID: PCI\VEN_8086&DEV_1C12&SUBSYS_04B61028&REV_B5\3&11583659&0&E1 Driver: n/a Name: NVIDIA GeForce GT 525M Device ID: PCI\VEN_10DE&DEV_0DF5&SUBSYS_04B61028&REV_A1\4&4DCA75F&0&0008 Driver: n/a Name: Standard Enhanced PCI to USB Host Controller Device ID: PCI\VEN_8086&DEV_1C2D&SUBSYS_04B61028&REV_05\3&11583659&0&D0 Driver: n/a Name: PCI standard PCI-to-PCI bridge Device ID: PCI\VEN_8086&DEV_1C10&SUBSYS_04B61028&REV_B5\3&11583659&0&E0 Driver: n/a Name: Standard Enhanced PCI to USB Host Controller Device ID: PCI\VEN_8086&DEV_1C26&SUBSYS_04B61028&REV_05\3&11583659&0&E8 Driver: n/a Name: Standard AHCI 1.0 Serial ATA Controller Device ID: PCI\VEN_8086&DEV_1C03&SUBSYS_04B61028&REV_05\3&11583659&0&FA Driver: n/a Name: SM Bus Controller Device ID: PCI\VEN_8086&DEV_1C22&SUBSYS_04B61028&REV_05\3&11583659&0&FB Driver: n/a Name: Intel(R) HD Graphics 3000 Device ID: PCI\VEN_8086&DEV_0126&SUBSYS_04B61028&REV_09\3&11583659&0&10 Driver: n/a Name: Renesas Electronics USB 3.0 Host Controller Device ID: PCI\VEN_1033&DEV_0194&SUBSYS_04B61028&REV_04\4&3494AC3A&0&00E3 Driver: n/a ------------------ DirectShow Filters ------------------ DirectShow Filters: WMAudio Decoder DMO,0x00800800,1,1,WMADMOD.DLL,6.01.7601.17514 WMAPro over S/PDIF DMO,0x00600800,1,1,WMADMOD.DLL,6.01.7601.17514 WMSpeech Decoder DMO,0x00600800,1,1,WMSPDMOD.DLL,6.01.7601.17514 MP3 Decoder DMO,0x00600800,1,1,mp3dmod.dll,6.01.7600.16385 Mpeg4s Decoder DMO,0x00800001,1,1,mp4sdecd.dll,6.01.7600.16385 WMV Screen decoder 
DMO,0x00600800,1,1,wmvsdecd.dll,6.01.7601.17514 WMVideo Decoder DMO,0x00800001,1,1,wmvdecod.dll,6.01.7601.17514 Mpeg43 Decoder DMO,0x00800001,1,1,mp43decd.dll,6.01.7600.16385 Mpeg4 Decoder DMO,0x00800001,1,1,mpg4decd.dll,6.01.7600.16385 DV Muxer,0x00400000,0,0,qdv.dll,6.06.7601.17514 Color Space Converter,0x00400001,1,1,quartz.dll,6.06.7601.17713 WM ASF Reader,0x00400000,0,0,qasf.dll,12.00.7601.17514 Screen Capture filter,0x00200000,0,1,wmpsrcwp.dll,12.00.7601.17514 AVI Splitter,0x00600000,1,1,quartz.dll,6.06.7601.17713 VGA 16 Color Ditherer,0x00400000,1,1,quartz.dll,6.06.7601.17713 SBE2MediaTypeProfile,0x00200000,0,0,sbe.dll,6.06.7601.17528 Microsoft DTV-DVD Video Decoder,0x005fffff,2,4,msmpeg2vdec.dll,6.01.7140.0000 AC3 Parser Filter,0x00600000,1,1,mpg2splt.ax,6.06.7601.17528 StreamBufferSink,0x00200000,0,0,sbe.dll,6.06.7601.17528 MJPEG Decompressor,0x00600000,1,1,quartz.dll,6.06.7601.17713 MPEG-I Stream Splitter,0x00600000,1,2,quartz.dll,6.06.7601.17713 SAMI (CC) Parser,0x00400000,1,1,quartz.dll,6.06.7601.17713 VBI Codec,0x00600000,1,4,VBICodec.ax,6.06.7601.17514 MPEG-2 Splitter,0x005fffff,1,0,mpg2splt.ax,6.06.7601.17528 Closed Captions Analysis Filter,0x00200000,2,5,cca.dll,6.06.7601.17514 SBE2FileScan,0x00200000,0,0,sbe.dll,6.06.7601.17528 Microsoft MPEG-2 Video Encoder,0x00200000,1,1,msmpeg2enc.dll,6.01.7601.17514 Internal Script Command Renderer,0x00800001,1,0,quartz.dll,6.06.7601.17713 MPEG Audio Decoder,0x03680001,1,1,quartz.dll,6.06.7601.17713 DV Splitter,0x00600000,1,2,qdv.dll,6.06.7601.17514 Video Mixing Renderer 9,0x00200000,1,0,quartz.dll,6.06.7601.17713 Microsoft MPEG-2 Encoder,0x00200000,2,1,msmpeg2enc.dll,6.01.7601.17514 ACM Wrapper,0x00600000,1,1,quartz.dll,6.06.7601.17713 Video Renderer,0x00800001,1,0,quartz.dll,6.06.7601.17713 MPEG-2 Video Stream Analyzer,0x00200000,0,0,sbe.dll,6.06.7601.17528 Line 21 Decoder,0x00600000,1,1,qdvd.dll,6.06.7601.17835 Video Port Manager,0x00600000,2,1,quartz.dll,6.06.7601.17713 Video Renderer,0x00400000,1,0,quartz.dll,6.06.7601.17713 VPS Decoder,0x00200000,0,0,WSTPager.ax,6.06.7601.17514 WM ASF Writer,0x00400000,0,0,qasf.dll,12.00.7601.17514 VBI Surface Allocator,0x00600000,1,1,vbisurf.ax,6.01.7601.17514 File writer,0x00200000,1,0,qcap.dll,6.06.7601.17514 iTV Data Sink,0x00600000,1,0,itvdata.dll,6.06.7601.17514 iTV Data Capture filter,0x00600000,1,1,itvdata.dll,6.06.7601.17514 DVD Navigator,0x00200000,0,3,qdvd.dll,6.06.7601.17835 Overlay Mixer2,0x00200000,1,1,qdvd.dll,6.06.7601.17835 AVI Draw,0x00600064,9,1,quartz.dll,6.06.7601.17713 RDP DShow Redirection Filter,0xffffffff,1,0,DShowRdpFilter.dll, Microsoft MPEG-2 Audio Encoder,0x00200000,1,1,msmpeg2enc.dll,6.01.7601.17514 WST Pager,0x00200000,1,1,WSTPager.ax,6.06.7601.17514 MPEG-2 Demultiplexer,0x00600000,1,1,mpg2splt.ax,6.06.7601.17528 DV Video Decoder,0x00800000,1,1,qdv.dll,6.06.7601.17514 SampleGrabber,0x00200000,1,1,qedit.dll,6.06.7601.17514 Null Renderer,0x00200000,1,0,qedit.dll,6.06.7601.17514 MPEG-2 Sections and Tables,0x005fffff,1,0,Mpeg2Data.ax,6.06.7601.17514 Microsoft AC3 Encoder,0x00200000,1,1,msac3enc.dll,6.01.7601.17514 StreamBufferSource,0x00200000,0,0,sbe.dll,6.06.7601.17528 Smart Tee,0x00200000,1,2,qcap.dll,6.06.7601.17514 Overlay Mixer,0x00200000,0,0,qdvd.dll,6.06.7601.17835 AVI Decompressor,0x00600000,1,1,quartz.dll,6.06.7601.17713 AVI/WAV File Source,0x00400000,0,2,quartz.dll,6.06.7601.17713 Wave Parser,0x00400000,1,1,quartz.dll,6.06.7601.17713 MIDI Parser,0x00400000,1,1,quartz.dll,6.06.7601.17713 Multi-file Parser,0x00400000,1,1,quartz.dll,6.06.7601.17713 File stream 
renderer,0x00400000,1,1,quartz.dll,6.06.7601.17713 Microsoft DTV-DVD Audio Decoder,0x005fffff,1,1,msmpeg2adec.dll,6.01.7140.0000 StreamBufferSink2,0x00200000,0,0,sbe.dll,6.06.7601.17528 AVI Mux,0x00200000,1,0,qcap.dll,6.06.7601.17514 Line 21 Decoder 2,0x00600002,1,1,quartz.dll,6.06.7601.17713 File Source (Async.),0x00400000,0,1,quartz.dll,6.06.7601.17713 File Source (URL),0x00400000,0,1,quartz.dll,6.06.7601.17713 Infinite Pin Tee Filter,0x00200000,1,1,qcap.dll,6.06.7601.17514 Enhanced Video Renderer,0x00200000,1,0,evr.dll,6.01.7601.17514 BDA MPEG2 Transport Information Filter,0x00200000,2,0,psisrndr.ax,6.06.7601.17669 MPEG Video Decoder,0x40000001,1,1,quartz.dll,6.06.7601.17713 WDM Streaming Tee/Splitter Devices: Tee/Sink-to-Sink Converter,0x00200000,1,1,ksproxy.ax,6.01.7601.17514 Video Compressors: WMVideo8 Encoder DMO,0x00600800,1,1,wmvxencd.dll,6.01.7600.16385 WMVideo9 Encoder DMO,0x00600800,1,1,wmvencod.dll,6.01.7600.16385 MSScreen 9 encoder DMO,0x00600800,1,1,wmvsencd.dll,6.01.7600.16385 DV Video Encoder,0x00200000,0,0,qdv.dll,6.06.7601.17514 MJPEG Compressor,0x00200000,0,0,quartz.dll,6.06.7601.17713 Cinepak Codec by Radius,0x00200000,1,1,qcap.dll,6.06.7601.17514 Intel IYUV codec,0x00200000,1,1,qcap.dll,6.06.7601.17514 Intel IYUV codec,0x00200000,1,1,qcap.dll,6.06.7601.17514 Microsoft RLE,0x00200000,1,1,qcap.dll,6.06.7601.17514 Microsoft Video 1,0x00200000,1,1,qcap.dll,6.06.7601.17514 Audio Compressors: WM Speech Encoder DMO,0x00600800,1,1,WMSPDMOE.DLL,6.01.7600.16385 WMAudio Encoder DMO,0x00600800,1,1,WMADMOE.DLL,6.01.7600.16385 IMA ADPCM,0x00200000,1,1,quartz.dll,6.06.7601.17713 PCM,0x00200000,1,1,quartz.dll,6.06.7601.17713 Microsoft ADPCM,0x00200000,1,1,quartz.dll,6.06.7601.17713 GSM 6.10,0x00200000,1,1,quartz.dll,6.06.7601.17713 CCITT A-Law,0x00200000,1,1,quartz.dll,6.06.7601.17713 CCITT u-Law,0x00200000,1,1,quartz.dll,6.06.7601.17713 MPEG Layer-3,0x00200000,1,1,quartz.dll,6.06.7601.17713 Audio Capture Sources: Microphone (High Definition Aud,0x00200000,0,0,qcap.dll,6.06.7601.17514 PBDA CP Filters: PBDA DTFilter,0x00600000,1,1,CPFilters.dll,6.06.7601.17528 PBDA ETFilter,0x00200000,0,0,CPFilters.dll,6.06.7601.17528 PBDA PTFilter,0x00200000,0,0,CPFilters.dll,6.06.7601.17528 Midi Renderers: Default MidiOut Device,0x00800000,1,0,quartz.dll,6.06.7601.17713 Microsoft GS Wavetable Synth,0x00200000,1,0,quartz.dll,6.06.7601.17713 WDM Streaming Capture Devices: HD Audio Microphone 2,0x00200000,1,1,ksproxy.ax,6.01.7601.17514 Integrated Webcam,0x00200000,1,2,ksproxy.ax,6.01.7601.17514 WDM Streaming Rendering Devices: HD Audio Headphone/Speakers,0x00200000,1,1,ksproxy.ax,6.01.7601.17514 HD Audio SPDIF out,0x00200000,1,1,ksproxy.ax,6.01.7601.17514 BDA Network Providers: Microsoft ATSC Network Provider,0x00200000,0,1,MSDvbNP.ax,6.06.7601.17514 Microsoft DVBC Network Provider,0x00200000,0,1,MSDvbNP.ax,6.06.7601.17514 Microsoft DVBS Network Provider,0x00200000,0,1,MSDvbNP.ax,6.06.7601.17514 Microsoft DVBT Network Provider,0x00200000,0,1,MSDvbNP.ax,6.06.7601.17514 Microsoft Network Provider,0x00200000,0,1,MSNP.ax,6.06.7601.17514 Video Capture Sources: Integrated Webcam,0x00200000,1,2,ksproxy.ax,6.01.7601.17514 Multi-Instance Capable VBI Codecs: VBI Codec,0x00600000,1,4,VBICodec.ax,6.06.7601.17514 BDA Transport Information Renderers: BDA MPEG2 Transport Information Filter,0x00600000,2,0,psisrndr.ax,6.06.7601.17669 MPEG-2 Sections and Tables,0x00600000,1,0,Mpeg2Data.ax,6.06.7601.17514 BDA CP/CA Filters: Decrypt/Tag,0x00600000,1,1,EncDec.dll,6.06.7601.17708 
Encrypt/Tag,0x00200000,0,0,EncDec.dll,6.06.7601.17708 PTFilter,0x00200000,0,0,EncDec.dll,6.06.7601.17708 XDS Codec,0x00200000,0,0,EncDec.dll,6.06.7601.17708 WDM Streaming Communication Transforms: Tee/Sink-to-Sink Converter,0x00200000,1,1,ksproxy.ax,6.01.7601.17514 Audio Renderers: Speakers (High Definition Audio,0x00200000,1,0,quartz.dll,6.06.7601.17713 Default DirectSound Device,0x00800000,1,0,quartz.dll,6.06.7601.17713 Default WaveOut Device,0x00200000,1,0,quartz.dll,6.06.7601.17713 Digital Audio (S/PDIF) (High De,0x00200000,1,0,quartz.dll,6.06.7601.17713 DirectSound: Digital Audio (S/PDIF) (High Definition Audio Device),0x00200000,1,0,quartz.dll,6.06.7601.17713 DirectSound: Speakers (High Definition Audio Device),0x00200000,1,0,quartz.dll,6.06.7601.17713 --------------- EVR Power Information --------------- Current Setting: {651288E5-A7ED-4076-A96B-6CC62D848FE1} (Balanced) Quality Flags: 2576 Enabled: Force throttling Allow half deinterlace Allow scaling Decode Power Usage: 100 Balanced Flags: 1424 Enabled: Force throttling Allow batching Force half deinterlace Force scaling Decode Power Usage: 50 PowerFlags: 1424 Enabled: Force throttling Allow batching Force half deinterlace Force scaling Decode Power Usage: 0

    Read the article

  • How do I prevent TCP connection freezes over an OpenVPN network?

    - by Jason R
    New details added at the end of this question; it's possible that I'm zeroing in on the cause. I have a UDP OpenVPN-based VPN set up in tap mode (I need tap because I need the VPN to pass multicast packets, which doesn't seem to be possible with tun networks) with a handful of clients across the Internet. I've been experiencing frequent TCP connection freezes over the VPN. That is, I will establish a TCP connection (e.g. an SSH connection, but other protocols have similar issues), and at some point during the session, it seems that traffic will cease being transmitted over that TCP session. This seems to be related to points at which large data transfers occur, such as if I execute an ls command in an SSH session, or if I cat a long log file. Some Google searches turn up a number of answers like this previous one on Server Fault, indicating that the likely culprit is an MTU issue: that during periods of high traffic, the VPN is trying to send packets that get dropped somewhere in the pipes between the VPN endpoints. The above-linked answer suggests using the following OpenVPN configuration settings to mitigate the problem: fragment 1400 mssfix This should limit the MTU used on the VPN to 1400 bytes and fix the TCP maximum segment size to prevent the generation of any packets larger than that. This seems to mitigate the problem a bit, but I still frequently see the freezes. I've tried a number of sizes as arguments to the fragment directive: 1200, 1000, 576, all with similar results. I can't think of any strange network topology between the two ends that could trigger such a problem: the VPN server is running on a pfSense machine connected directly to the Internet, and my client is also connected directly to the Internet at another location. One other strange piece of the puzzle: if I run the tracepath utility, then that seems to band-aid the problem. A sample run looks like: [~]$ tracepath -n 192.168.100.91 1: 192.168.100.90 0.039ms pmtu 1500 1: 192.168.100.91 40.823ms reached 1: 192.168.100.91 19.846ms reached Resume: pmtu 1500 hops 1 back 64 The above run is between two clients on the VPN: I initiated the trace from 192.168.100.90 to the destination of 192.168.100.91. Both clients were configured with fragment 1200; mssfix; in an attempt to limit the MTU used on the link. The above results would seem to suggest that tracepath was able to detect a path MTU of 1500 bytes between the two clients. I would assume that it would be somewhat smaller due to the fragmentation settings specified in the OpenVPN configuration. I found that result somewhat strange. Even stranger, however: if I have a TCP connection in the stalled state (e.g. an SSH session with a directory listing that froze in the middle), then executing the tracepath command shown above causes the connection to start up again! I can't figure out any reasonable explanation for why this would be the case, but I feel like this might be pointing toward a solution to ultimately eradicate the problem. Does anyone have any recommendations for other things to try? Edit: I've come back and looked at this a bit further, and have found only more confounding information: I set the OpenVPN connection to fragment at 1400 bytes, as shown above. Then, I connected to the VPN from across the Internet and used Wireshark to look at the UDP packets that were sent to the VPN server while the stall occurred. None were greater than the specified 1400 byte count, so the fragmentation seems to be functioning properly. 
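    For reference, the fragment and mssfix directives discussed above live in the OpenVPN configuration on both ends of the tunnel. A minimal sketch of how they might sit in a tap-mode client config is shown below; the remote address, port and certificate paths are placeholders rather than values from this setup:

        # client.conf - minimal tap-mode client sketch (placeholder host and file names)
        client
        dev tap
        proto udp
        remote vpn.example.com 1194
        ca ca.crt
        cert client.crt
        key client.key
        # Internally fragment any datagram larger than 1400 bytes...
        fragment 1400
        # ...and clamp the TCP maximum segment size so peers avoid generating them at all.
        mssfix 1400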
To verify that even a 1400-byte MTU would be sufficient, I pinged the VPN server using the following (Linux) command: ping <host> -s 1450 -M do This (I believe) sends a 1450-byte packet with fragmentation disabled (I at least verified that it didn't work if I set it to an obviously-too-large value like 1600 bytes). These seem to work just fine; I get replies back from the host with no issue. So, maybe this isn't an MTU issue at all. I'm just confused as to what else it might be! Edit 2: The rabbit hole just keeps getting deeper: I've now isolated the problem a bit more. It seems to be related to the exact OS that the VPN client uses. I have successfully duplicated the problem on at least three Ubuntu machines (versions 12.04 through 13.04). I can reliably duplicate an SSH connection freeze within a minute or so by just cat-ing a large log file. However, if I do the same test using a CentOS 6 machine as a client, then I don't see the problem! I've tested using the exact same OpenVPN client version as I was using on the Ubuntu machines. I can cat log files for hours without seeing the connection freeze. This seems to provide some insight as to the ultimate cause, but I'm just not sure what that insight is. I have examined the traffic over the VPN using Wireshark. I'm not a TCP expert, so I'm not sure what to make of the gory details, but the gist is that at some point, a UDP packet gets dropped due to the limited bandwidth of the Internet link, causing TCP retransmissions inside the VPN tunnel. On the CentOS client, these retransmissions occur properly and things move on happily. At some point with the Ubuntu clients, though, the remote end starts retransmitting the same TCP segment over and over (with the transmit delay increasing between each retransmission). The client sends what looks like a valid TCP ACK to each retransmission, but the remote end still continues to transmit the same TCP segment periodically. This extends ad infinitum and the connection stalls. My question here would be: Does anyone have any recommendations for how to troubleshoot and/or determine the root cause of the TCP issue? It's as if the remote end isn't accepting the ACK messages sent by the VPN client. One common difference between the CentOS node and the various Ubuntu releases is that Ubuntu has a much more recent Linux kernel version (from 3.2 in Ubuntu 12.04 to 3.8 in 13.04). A pointer to some new kernel bug maybe? I'm assuming that if that were so, then I wouldn't be the only one experiencing the problem; I don't think this seems like a particularly exotic setup.
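    One way to watch the retransmission behaviour described above without a full Wireshark session is to capture on the VPN interface and check the kernel's per-connection TCP counters while a transfer is stalled. The interface name below is an example; the address is the peer used in the earlier tracepath run:

        # Capture TCP traffic to the stalled peer on the VPN's tap interface
        tcpdump -ni tap0 'tcp and host 192.168.100.91'
        # Show per-connection TCP internals (retransmits, cwnd, rtt) for connections to that peer
        ss -ti dst 192.168.100.91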

    Read the article

  • West Wind WebSurge - an easy way to Load Test Web Applications

    - by Rick Strahl
    A few months ago on a project the subject of load testing came up. We were having some serious issues with a Web application that would start spewing SQL lock errors under somewhat heavy load. These sorts of errors can be tough to catch, precisely because they only occur under load and not during typical development testing. To replicate this error more reliably we needed to put a load on the application and run it for a while before these SQL errors would flare up. It’s been a while since I’d looked at load testing tools, so I spent a bit of time looking at different tools and frankly didn’t really find anything that was a good fit. A lot of tools were either a pain to use, didn’t have the basic features I needed, or were extravagantly expensive. In the end I got frustrated enough to build an initially small custom load test solution that then morphed into a more generic library, then gained a console front end and eventually turned into a full blown Web load testing tool that is now called West Wind WebSurge. I got seriously frustrated looking for tools every time I needed some quick and dirty load testing for an application. If my aim is to just put an application under heavy enough load to find a scalability problem in code, or to simply try and push an application to its limits on the hardware it’s running on, I shouldn’t have to struggle to set up tests. It should be easy enough to get going in a few minutes, so that testing can be set up quickly and run on a regular basis without a lot of hassle. And that was the goal when I started to build out my initial custom load tester into a more widely usable tool. If you’re in a hurry and you want to check it out, you can find more information and download links here: West Wind WebSurge Product Page Walk through Video Download link (zip) Install from Chocolatey Source on GitHub For a more detailed discussion of the whys and hows and some background, continue reading. How did I get here? When I started out on this path, I wasn’t planning on building a tool like this myself – but I got frustrated enough looking at what’s out there to think that I can do better than what’s available for the most common simple load testing scenarios. When we ran into the SQL lock problems I mentioned, I started looking around at what’s available in Web load testing solutions that would work for our whole team, which consisted of a few developers and a couple of IT guys, all of whom needed to be able to run the tests. It had been a while since I looked at tools and I figured that by now there should be some good solutions out there, but as it turns out I didn’t really find anything that fit our relatively simple needs without costing an arm and a leg… I spent the better part of a day installing and trying various load testing tools and to be frank most of them were either terrible at what they do, incredibly unfriendly to use, used some terminology I couldn’t even parse, or were extremely expensive (and I mean in the ‘sell your liver’ range of expensive). Pick your poison. There are also a number of online solutions for load testing and they actually looked more promising, but those wouldn’t work well for our scenario as the application is running inside of a private VPN with no outside access into the VPN. Most of those online solutions also ended up being very pricey as well – presumably because the bandwidth required to test over the open Web can be enormous. 
When I asked around on Twitter about what people were using – I got mostly… crickets. Several people mentioned Visual Studio Load Test, and most other suggestions pointed to online solutions. I did get a bunch of responses though with people asking to let them know what I found – apparently I’m not alone when it comes to finding load testing tools that are effective and easy to use. As to Visual Studio, the higher end SKUs of Visual Studio and the test edition include a Web load testing tool, which is quite powerful, but there are a number of issues with that: First it’s tied to Visual Studio so it’s not very portable – you need a VS install. I also find the test setup and terminology used by the VS test runner extremely confusing. Heck, it’s complicated enough that there’s even a Pluralsight course on using the Visual Studio Web test from Steve Smith. And of course you need to have one of the high end Visual Studio SKUs, and those are mucho Dinero ($$$) – just for load testing, that’s rarely an option. Some of the tools are ultra extensive and let you run analysis tools on the target servers, which is useful, but in most cases – just plain overkill that only distracts from what I tend to be ultimately interested in: reproducing problems that occur at high load, and finding the upper limits and ‘what if’ scenarios as load is ramped up increasingly against a site. Yes it’s useful to have Web app instrumentation, but often that’s not what you’re interested in. I still fondly remember the early days of Web testing when Microsoft had the WAST (Web Application Stress Tool) tool, which was rather simple – and also somewhat limited – but easily allowed you to create stress tests very quickly. It had some serious limitations (mainly that it didn’t work with SSL), but the idea behind it was excellent: create tests quickly and easily and provide a decent engine to run them locally with minimal setup. You could get set up and run tests within a few minutes. Unfortunately, that tool died a quiet death, like so many of Microsoft’s tools that were probably built by an intern and then abandoned, even though there was a lot of potential and it was actually fairly widely used. Eventually the tool was no longer downloadable and now it simply doesn’t work anymore on higher end hardware. West Wind Web Surge – Making Load Testing Quick and Easy So I ended up creating West Wind WebSurge out of rebellious frustration… The goal of WebSurge is to make it drop dead simple to create load tests. It’s super easy to capture sessions, either using the built-in capture tool (big props to Eric Lawrence, Telerik and FiddlerCore which made that piece a snap), using the full version of Fiddler and exporting sessions, or by manually or programmatically creating text files based on plain HTTP headers to create requests. I’ve been using this tool for 4 months now on a regular basis on various projects as a reality check for performance and scalability and it’s worked extremely well for finding small performance issues. I also use it regularly as a simple URL tester, as it allows me to quickly enter a URL plus headers and content, test that URL and inspect its results, along with the ability to easily save one or more of those URLs. A few weeks back I made a walk-through video that goes over most of the features of WebSurge in some detail: Note that the UI has slightly changed since then, so there are some UI improvements. 
Most notably the test results screen has been updated recently to a different layout and to provide more information about each URL in a session at a glance. The video and the main WebSurge site have a lot of info on basic operations. For the rest of this post I’ll talk about a few deeper aspects that may be of interest while also giving a glance at how WebSurge works. Session Capturing As you would expect, WebSurge works with Sessions of URLs that are played back under load. Here’s what the main Session View looks like: You can create session entries manually by individually adding URLs to test (on the Request tab on the right) and saving them, or you can capture output from Web browsers, Windows desktop applications that call services, or your own applications, using the built-in Capture tool. With this tool you can capture anything HTTP: SSL requests and content from Web pages, AJAX calls, SOAP or REST services – again, anything that uses Windows or .NET HTTP APIs. Behind the scenes the capture tool uses FiddlerCore, so basically anything you can capture with Fiddler you can also capture with the WebSurge session capture tool. Alternatively you can use Fiddler itself, and then export the captured Fiddler trace to a file, which can then be imported into WebSurge. This is a nice way to let somebody capture a session without having to actually install WebSurge, or to have your customers provide an exact playback scenario for a given set of URLs that cause a problem. Note that not all applications send their traffic through Fiddler’s proxy unless you explicitly configure a proxy. For example, .NET Web applications that make HTTP calls usually don’t show up in Fiddler by default. For those .NET applications you can explicitly override proxy settings to capture those service call requests. The capture tool also has handy optional filters that allow you to filter by domain, to help block out noise that you typically don’t want to include in your requests. For example, if your pages include links to CDNs, Google Analytics or social links, you typically don’t want to include those in your load test, so by capturing just from a specific domain you are guaranteed content from only that one domain. Additionally you can provide URL filters in the configuration file – filters are strings that, if contained in a URL, will cause the request to be ignored. Again this is useful if you don’t filter by domain but you want to filter out things like static image, CSS and script files, etc. Often you’re not interested in the load characteristics of these static and usually cached resources, as they just add noise to tests and often skew the overall URL performance results. In my testing I tend to care only about my dynamic requests. SSL Captures require Fiddler Note that in order to capture SSL requests you’ll have to install Fiddler’s SSL certificate. The easiest way to do this is to install Fiddler and use its SSL configuration options to get the certificate into the local certificate store. There’s a document on the Telerik site that provides the exact steps to get SSL captures to work with Fiddler and therefore with WebSurge. Session Storage A group of URLs entered or captured makes up a Session. Sessions can be saved and restored easily, as they use a very simple text format that is simply stored on disk. The format is slightly customized HTTP header traces separated by a separator line. 
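    To make that storage format concrete, a hand-written two-URL session might look roughly like the following. The exact separator token isn't reproduced here, so the dashed line is just a stand-in, and the URLs and headers are made up for illustration:

        GET http://localhost/store/albums HTTP/1.1
        Accept: application/json
        User-Agent: WebSurge-Test

        ------------------------------------------------------------

        POST http://localhost/store/albums HTTP/1.1
        Content-Type: application/json
        User-Agent: WebSurge-Test

        {"title": "Dummy Album", "artist": "Test Artist"}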
The headers are standard HTTP headers except that the full URL, instead of just the domain relative path, is stored as part of the first HTTP header line for easier parsing. Because it’s just text and uses the same format that Fiddler uses for exports, it’s super easy to create Sessions by hand or under program control, writing out to a simple text file. You can see what this format looks like in the Capture window figure above – the raw captured format is also what’s stored to disk and what WebSurge parses from. The only ‘custom’ part of these headers is that the first line contains the full URL instead of the domain relative path and Host: header. The rest of each entry is just plain standard HTTP headers, with each individual URL isolated by a separator line. The format used here is also what Fiddler produces for exports, so it’s easy to exchange or view data either in Fiddler or WebSurge. URLs can also be edited interactively so you can modify the headers easily as well: Again – it’s just plain HTTP headers, so anything you can do with HTTP can be added here. Use it for single URL Testing Incidentally I’ve also found this form to be an excellent way to test and replay individual URLs for simple non-load testing purposes. Because you can capture a single URL or many URLs and store them on disk, this also provides a nice HTTP playground where you can record URLs with their headers, and fire them one at a time or as a session and see results immediately. It’s actually handy for REST presentations, and I find the simple UI flow easier than using Fiddler natively. Finally you can save one or more URLs as a session for later retrieval. I’m using this more and more for simple URL checks. Overriding Cookies and Domains Speaking of HTTP headers – you can also overwrite cookies as part of the options. One thing that happens with modern Web applications is that you have session cookies in use for authorization. These cookies tend to expire at some point, which would invalidate a test. Using the Options dialog you can override the cookie, which replaces the cookie for all requests with the cookie value specified there. You can capture a valid cookie from a manual HTTP request in your browser and then paste it into the cookie field, to replace the existing cookie with the new one that is now valid. Likewise you can easily replace the domain, so if you captured URLs on west-wind.com and now you want to test on localhost you can do that easily as well. You could even do something like capture on store.west-wind.com and then test on localhost/store, which would also work. Running Load Tests Once you’ve created a Session you can specify the length of the test in seconds, and specify the number of simultaneous threads to run each session on. Sessions run through each of the URLs in the session sequentially by default. One option in the options list above is that you can also randomize the URLs so each thread runs requests in a different order. This avoids bunching up URLs initially when tests start, as all threads run the same requests simultaneously, which can sometimes skew the results of the first few minutes of a test. While sessions run, some progress information is displayed: By default there’s a live view of requests displayed in a Console-like window. On the bottom of the window there’s a running total summary that displays where you’re at in the test, how many requests have been processed and what the requests per second count currently is for all requests. 
Note that for tests that run over a thousand requests a second it’s a good idea to turn off the console display. While the console display is nice to see that something is happening and also gives you a slight idea of what’s happening with actual requests, once a lot of requests are processed, this UI updating actually adds a lot of CPU overhead to the application, which may cause the actual load generated to be reduced. If you are running 1000 requests a second there’s not much to see anyway, as requests roll by way too fast to read individual lines. If you look on the options panel, there is a NoProgressEvents option that disables the console display. Note that the summary display is still updated approximately once a second so you can always tell that the test is still running. Test Results When the test is done you get a simple Results display: On the right you get an overall summary as well as a breakdown by each URL in the session. Both successes and failures are highlighted so it’s easy to see what’s breaking in your load test. The report can be printed, or you can also open the HTML document in your default Web browser for printing to PDF or saving the HTML document to disk. The list on the right shows you a partial list of the URLs that were fired so you can look in detail at the request and response data. The list can be filtered by success and failure requests. Each list is partial only (at the moment) and limited to a max of 1000 items in order to render reasonably quickly. Each item in the list can be clicked to see the full request and response data: This is particularly useful for errors, so you can quickly see and copy what request data was used, and in the case of a GET request you can also just click the link to quickly jump to the page. For non-GET requests you can find the URL in the Session list, and use the context menu to Test the URL as configured, including any HTTP content data to send. You get to see the full HTTP request and response as well as a link in the Request header to go visit the actual page. Not so useful for a POST as above, but definitely useful for GET requests. Finally you can also get a few charts. The most useful one is probably the Requests per Second chart, which can be accessed from the Charts menu or shortcut. Here’s what it looks like: Results can also be exported to JSON, XML and HTML. Keep in mind that these files can get very large rather quickly though, so exports can end up taking a while to complete. Command Line Interface WebSurge runs with a small core load engine and this engine is plugged into the front end application I’ve shown so far. There’s also a command line interface available to run WebSurge from the Windows command prompt. Using the command line you can run tests for either an individual URL (similar to AB.exe for example) or a full Session file. By default, when it runs, WebSurgeCli shows progress every second, displaying the total request count, failures and the requests per second for the entire test. A silent option can turn off this progress display and display only the results. The command line interface can be useful for build integration, which allows checking for failures or verifying that a specific requests per second count is reached. It’s also nice to use this as a quick and dirty URL test facility, similar to the way you’d use Apache Bench (ab.exe). Unlike ab.exe though, WebSurgeCli supports SSL and makes it much easier to create multi-URL tests using either manual editing or the WebSurge UI. 
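    For comparison, the Apache Bench style of single-URL test mentioned above looks something like this (the URL and counts are arbitrary; ab ships with the Apache httpd tools):

        # 1000 requests total, 10 concurrent, against a single URL
        ab -n 1000 -c 10 http://localhost/store/albums/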
Current Status West Wind WebSurge is currently still in beta. I’m still adding small new features and tweaking the UI in an attempt to make it as easy and self-explanatory as possible to run. Documentation for the UI and specialty features is also still a work in progress. I plan on open-sourcing this product, but it won’t be free. There’s a free version available that provides a limited number of threads and request URLs to run. A relatively low cost license removes the thread and request limitations. Pricing info can be found on the Web site – there’s an introductory price of $99 at the moment, which I think is reasonable compared to most other for-pay solutions out there that are exorbitant by comparison… The reason the code is not available yet is – well, the UI portion of the app is a bit embarrassing in its current monolithic state. The UI started as a very simple interface originally that later got a lot more complex – yeah, that never happens, right? Unless there’s a lot of interest I don’t foresee re-writing the UI entirely (which would be ideal), but in the meantime at least some cleanup is required before I dare to publish it :-). The code will likely be released with version 1.0. I’m very interested in feedback. Do you think this could be useful to you and provide value over other tools you may or may not have used before? I hope so – it has already provided a ton of value for me and the work I do, which made the development worthwhile at this point. You can leave a comment below, or for more extensive discussions you can post a message on the West Wind Message Board in the WebSurge section. Microsoft MVPs and Insiders get a free License If you’re a Microsoft MVP or a Microsoft Insider you can get a full license for free. Send me a link to your current, official Microsoft profile and I’ll send you a not-for-resale license. Send any messages to [email protected]. Resources For more info on WebSurge and to download it to try it out, use the following links. West Wind WebSurge Home Download West Wind WebSurge Getting Started with West Wind WebSurge Video © Rick Strahl, West Wind Technologies, 2005-2014. Posted in ASP.NET

    Read the article

  • Oracle Coherence, Split-Brain and Recovery Protocols In Detail

    - by Ricardo Ferreira
    This article provides a high level conceptual overview of Split-Brain scenarios in distributed systems. It will focus on a specific example of cluster communication failure and recovery in Oracle Coherence. This includes a discussion on the witness protocol (used to remove failed cluster members) and the panic protocol (used to resolve Split-Brain scenarios). Note that the removal of cluster members does not necessarily indicate a Split-Brain condition. Oracle Coherence does not (and cannot) detect a Split-Brain as it occurs; the condition is only detected when cluster members that previously lost contact with each other regain contact. Cluster Topology and Configuration In order to create a good didactic example for this article, let's assume a cluster topology and configuration. In this example we have a six member cluster, consisting of one JVM on each physical machine. The member IDs are as follows:
    Member ID   IP Address
    1           10.149.155.76
    2           10.149.155.77
    3           10.149.155.236
    4           10.149.155.75
    5           10.149.155.79
    6           10.149.155.78
    Members 1, 2, and 3 are connected to a switch, and members 4, 5, and 6 are connected to a second switch. There is a link between the two switches, which provides network connectivity between all of the machines. Member 1 is the first member to join this cluster, thus making it the senior member. Member 6 is the last member to join this cluster. Here is a log snippet from Member 6 showing the complete member set: 2010-02-26 15:27:57.390/3.062 Oracle Coherence GE 3.5.3/465p2 <Info> (thread=main, member=6): Started DefaultCacheServer... SafeCluster: Name=cluster:0xDDEB Group{Address=224.3.5.3, Port=35465, TTL=4} MasterMemberSet ( ThisMember=Member(Id=6, Timestamp=2010-02-26 15:27:58.635, Address=10.149.155.78:8088, MachineId=1102, Location=process:228, Role=CoherenceServer) OldestMember=Member(Id=1, Timestamp=2010-02-26 15:27:06.931, Address=10.149.155.76:8088, MachineId=1100, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:511, Role=CoherenceServer) ActualMemberSet=MemberSet(Size=6, BitSetCount=2 Member(Id=1, Timestamp=2010-02-26 15:27:06.931, Address=10.149.155.76:8088, MachineId=1100, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:511, Role=CoherenceServer) Member(Id=2, Timestamp=2010-02-26 15:27:17.847, Address=10.149.155.77:8088, MachineId=1101, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:296, Role=CoherenceServer) Member(Id=3, Timestamp=2010-02-26 15:27:24.892, Address=10.149.155.236:8088, MachineId=1260, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:32459, Role=CoherenceServer) Member(Id=4, Timestamp=2010-02-26 15:27:39.574, Address=10.149.155.75:8088, MachineId=1099, Location=process:800, Role=CoherenceServer) Member(Id=5, Timestamp=2010-02-26 15:27:49.095, Address=10.149.155.79:8088, MachineId=1103, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:3229, Role=CoherenceServer) Member(Id=6, Timestamp=2010-02-26 15:27:58.635, Address=10.149.155.78:8088, MachineId=1102, Location=process:228, Role=CoherenceServer) ) RecycleMillis=120000 RecycleSet=MemberSet(Size=0, BitSetCount=0 ) ) At approximately 15:30, the connection between the two switches is severed. Thirty seconds later (the default packet timeout in development mode) the logs indicate communication failures across the cluster. In this example, the communication failure was caused by a network failure. 
In a production setting, this type of communication failure can have many root causes, including (but not limited to) network failures, excessive GC, high CPU utilization, swapping/virtual memory, and exceeding maximum network bandwidth. In addition, this type of failure is not necessarily indicative of a split brain. Any communication failure will be logged in this fashion. Member 2 logs a communication failure with Member 5: 2010-02-26 15:30:32.638/196.928 Oracle Coherence GE 3.5.3/465p2 <Warning> (thread=PacketPublisher, member=2): Timeout while delivering a packet; requesting the departure confirmation for Member(Id=5, Timestamp=2010-02-26 15:27:49.095, Address=10.149.155.79:8088, MachineId=1103, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:3229, Role=CoherenceServer) by MemberSet(Size=2, BitSetCount=2 Member(Id=1, Timestamp=2010-02-26 15:27:06.931, Address=10.149.155.76:8088, MachineId=1100, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:511, Role=CoherenceServer) Member(Id=4, Timestamp=2010-02-26 15:27:39.574, Address=10.149.155.75:8088, MachineId=1099, Location=process:800, Role=CoherenceServer) ) The Coherence clustering protocol (TCMP) is a reliable transport mechanism built on UDP. In order for the protocol to be reliable, it requires an acknowledgement (ACK) for each packet delivered. If a packet fails to be acknowledged within the configured timeout period, the Coherence cluster member will log a packet timeout (as seen in the log message above). When this occurs, the cluster member will consult with other members to determine who is at fault for the communication failure. If the witness members agree that the suspect member is at fault, the suspect is removed from the cluster. If the witnesses unanimously disagree, the accuser is removed. This process is known as the witness protocol. Since Member 2 cannot communicate with Member 5, it selects two witnesses (Members 1 and 4) to determine if the communication issue is with Member 5 or with itself (Member 2). However, Member 4 is on the switch that is no longer accessible by Members 1, 2 and 3; thus a packet timeout for member 4 is recorded as well: 2010-02-26 15:30:35.648/199.938 Oracle Coherence GE 3.5.3/465p2 <Warning> (thread=PacketPublisher, member=2): Timeout while delivering a packet; requesting the departure confirmation for Member(Id=4, Timestamp=2010-02-26 15:27:39.574, Address=10.149.155.75:8088, MachineId=1099, Location=process:800, Role=CoherenceServer) by MemberSet(Size=2, BitSetCount=2 Member(Id=1, Timestamp=2010-02-26 15:27:06.931, Address=10.149.155.76:8088, MachineId=1100, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:511, Role=CoherenceServer) Member(Id=6, Timestamp=2010-02-26 15:27:58.635, Address=10.149.155.78:8088, MachineId=1102, Location=process:228, Role=CoherenceServer) ) Member 1 has the ability to confirm the departure of member 4, however Member 6 cannot as it is also inaccessible. 
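    As a rough illustration of that decision rule, the witness logic can be summarized in a few lines of plain Java. This is not Coherence code and the names are invented for the example; it only captures the rule described above:

        import java.util.Arrays;
        import java.util.List;

        class WitnessProtocolSketch {
            // Decide which member to remove after a packet timeout: if every witness has
            // also lost contact with the suspect, the suspect is removed; if every witness
            // can still reach it, the accuser is removed instead.
            static int memberToRemove(int accuserId, int suspectId, List<Boolean> witnessCanReachSuspect) {
                boolean allConfirmSuspectDown = witnessCanReachSuspect.stream().noneMatch(reachable -> reachable);
                boolean allDisagree = witnessCanReachSuspect.stream().allMatch(reachable -> reachable);
                if (allConfirmSuspectDown) {
                    return suspectId;   // witnesses agree the suspect is at fault
                }
                if (allDisagree) {
                    return accuserId;   // witnesses unanimously disagree, so the accuser goes
                }
                return -1;              // mixed evidence: this simplified sketch removes nobody
            }

            public static void main(String[] args) {
                // Member 2 accuses Member 5; both witnesses have also lost contact with it.
                System.out.println(memberToRemove(2, 5, Arrays.asList(false, false))); // prints 5
            }
        }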
At the same time, Member 3 sends a request to remove Member 6, which is followed by a report from Member 3 indicating that Member 6 has departed the cluster: 2010-02-26 15:30:35.706/199.996 Oracle Coherence GE 3.5.3/465p2 <D5> (thread=Cluster, member=2): MemberLeft request for Member 6 received from Member(Id=3, Timestamp=2010-02-26 15:27:24.892, Address=10.149.155.236:8088, MachineId=1260, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:32459, Role=CoherenceServer) 2010-02-26 15:30:35.709/199.999 Oracle Coherence GE 3.5.3/465p2 <D5> (thread=Cluster, member=2): MemberLeft notification for Member 6 received from Member(Id=3, Timestamp=2010-02-26 15:27:24.892, Address=10.149.155.236:8088, MachineId=1260, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:32459, Role=CoherenceServer) The log for Member 3 determines how Member 6 departed the cluster: 2010-02-26 15:30:35.161/191.694 Oracle Coherence GE 3.5.3/465p2 <Warning> (thread=PacketPublisher, member=3): Timeout while delivering a packet; requesting the departure confirmation for Member(Id=6, Timestamp=2010-02-26 15:27:58.635, Address=10.149.155.78:8088, MachineId=1102, Location=process:228, Role=CoherenceServer) by MemberSet(Size=2, BitSetCount=2 Member(Id=1, Timestamp=2010-02-26 15:27:06.931, Address=10.149.155.76:8088, MachineId=1100, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:511, Role=CoherenceServer) Member(Id=2, Timestamp=2010-02-26 15:27:17.847, Address=10.149.155.77:8088, MachineId=1101, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:296, Role=CoherenceServer) ) 2010-02-26 15:30:35.165/191.698 Oracle Coherence GE 3.5.3/465p2 <Info> (thread=Cluster, member=3): Member departure confirmed by MemberSet(Size=2, BitSetCount=2 Member(Id=1, Timestamp=2010-02-26 15:27:06.931, Address=10.149.155.76:8088, MachineId=1100, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:511, Role=CoherenceServer) Member(Id=2, Timestamp=2010-02-26 15:27:17.847, Address=10.149.155.77:8088, MachineId=1101, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:296, Role=CoherenceServer) ); removing Member(Id=6, Timestamp=2010-02-26 15:27:58.635, Address=10.149.155.78:8088, MachineId=1102, Location=process:228, Role=CoherenceServer) In this case, Member 3 happened to select two witnesses that it still had connectivity with (Members 1 and 2) thus resulting in a simple decision to remove Member 6. Given the departure of Member 6, Member 2 is left with a single witness to confirm the departure of Member 4: 2010-02-26 15:30:35.713/200.003 Oracle Coherence GE 3.5.3/465p2 <Info> (thread=Cluster, member=2): Member departure confirmed by MemberSet(Size=1, BitSetCount=2 Member(Id=1, Timestamp=2010-02-26 15:27:06.931, Address=10.149.155.76:8088, MachineId=1100, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:511, Role=CoherenceServer) ); removing Member(Id=4, Timestamp=2010-02-26 15:27:39.574, Address=10.149.155.75:8088, MachineId=1099, Location=process:800, Role=CoherenceServer) In the meantime, Member 4 logs a missing heartbeat from the senior member. This message is also logged on Members 5 and 6. 2010-02-26 15:30:07.906/150.453 Oracle Coherence GE 3.5.3/465p2 <Info> (thread=PacketListenerN, member=4): Scheduled senior member heartbeat is overdue; rejoining multicast group. 
Next, Member 4 logs a TcpRing failure with Member 2, thus resulting in the termination of Member 2: 2010-02-26 15:30:21.421/163.968 Oracle Coherence GE 3.5.3/465p2 <D4> (thread=Cluster, member=4): TcpRing: Number of socket exceptions exceeded maximum; last was "java.net.SocketTimeoutException: connect timed out"; removing the member: 2 For quick process termination detection, Oracle Coherence utilizes a feature called TcpRing which is a sparse collection of TCP/IP-based connections between different members in the cluster. Each member in the cluster is connected to at least one other member, which (if at all possible) is running on a different physical box. This connection is not used for any data transfer, only heartbeat communications are sent once a second per each link. If a certain number of exceptions are thrown while trying to re-establish a connection, the member throwing the exceptions is removed from the cluster. Member 5 logs a packet timeout with Member 3 and cites witnesses Members 4 and 6: 2010-02-26 15:30:29.791/165.037 Oracle Coherence GE 3.5.3/465p2 <Warning> (thread=PacketPublisher, member=5): Timeout while delivering a packet; requesting the departure confirmation for Member(Id=3, Timestamp=2010-02-26 15:27:24.892, Address=10.149.155.236:8088, MachineId=1260, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:32459, Role=CoherenceServer) by MemberSet(Size=2, BitSetCount=2 Member(Id=4, Timestamp=2010-02-26 15:27:39.574, Address=10.149.155.75:8088, MachineId=1099, Location=process:800, Role=CoherenceServer) Member(Id=6, Timestamp=2010-02-26 15:27:58.635, Address=10.149.155.78:8088, MachineId=1102, Location=process:228, Role=CoherenceServer) ) 2010-02-26 15:30:29.798/165.044 Oracle Coherence GE 3.5.3/465p2 <Info> (thread=Cluster, member=5): Member departure confirmed by MemberSet(Size=2, BitSetCount=2 Member(Id=4, Timestamp=2010-02-26 15:27:39.574, Address=10.149.155.75:8088, MachineId=1099, Location=process:800, Role=CoherenceServer) Member(Id=6, Timestamp=2010-02-26 15:27:58.635, Address=10.149.155.78:8088, MachineId=1102, Location=process:228, Role=CoherenceServer) ); removing Member(Id=3, Timestamp=2010-02-26 15:27:24.892, Address=10.149.155.236:8088, MachineId=1260, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:32459, Role=CoherenceServer) Eventually we are left with two distinct clusters consisting of Members 1, 2, 3 and Members 4, 5, 6, respectively. In the latter cluster, Member 4 is promoted to senior member. The connection between the two switches is restored at 15:33. Upon the restoration of the connection, the cluster members immediately receive cluster heartbeats from the two senior members. In the case of Members 1, 2, and 3, the following is logged: 2010-02-26 15:33:14.970/369.066 Oracle Coherence GE 3.5.3/465p2 <Warning> (thread=Cluster, member=1): The member formerly known as Member(Id=4, Timestamp=2010-02-26 15:30:35.341, Address=10.149.155.75:8088, MachineId=1099, Location=process:800, Role=CoherenceServer) has been forcefully evicted from the cluster, but continues to emit a cluster heartbeat; henceforth, the member will be shunned and its messages will be ignored. 
Likewise for Members 4, 5, and 6: 2010-02-26 15:33:14.343/336.890 Oracle Coherence GE 3.5.3/465p2 <Warning> (thread=Cluster, member=4): The member formerly known as Member(Id=1, Timestamp=2010-02-26 15:30:31.64, Address=10.149.155.76:8088, MachineId=1100, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:511, Role=CoherenceServer) has been forcefully evicted from the cluster, but continues to emit a cluster heartbeat; henceforth, the member will be shunned and its messages will be ignored. This message indicates that a senior heartbeat is being received from members that were previously removed from the cluster, in other words, something that should not be possible. For this reason, the recipients of these messages will initially ignore them. After several iterations of these messages, the existence of multiple clusters is acknowledged, thus triggering the panic protocol to reconcile this situation. When the presence of more than one cluster (i.e. Split-Brain) is detected by a Coherence member, the panic protocol is invoked in order to resolve the conflicting clusters and consolidate into a single cluster. The protocol consists of the removal of smaller clusters until there is one cluster remaining. In the case of equal size clusters, the one with the older Senior Member will survive. Member 1, being the oldest member, initiates the protocol: 2010-02-26 15:33:45.970/400.066 Oracle Coherence GE 3.5.3/465p2 <Warning> (thread=Cluster, member=1): An existence of a cluster island with senior Member(Id=4, Timestamp=2010-02-26 15:27:39.574, Address=10.149.155.75:8088, MachineId=1099, Location=process:800, Role=CoherenceServer) containing 3 nodes have been detected. Since this Member(Id=1, Timestamp=2010-02-26 15:27:06.931, Address=10.149.155.76:8088, MachineId=1100, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:511, Role=CoherenceServer) is the senior of an older cluster island, the panic protocol is being activated to stop the other island's senior and all junior nodes that belong to it. Member 3 receives the panic: 2010-02-26 15:33:45.803/382.336 Oracle Coherence GE 3.5.3/465p2 <Error> (thread=Cluster, member=3): Received panic from senior Member(Id=1, Timestamp=2010-02-26 15:27:06.931, Address=10.149.155.76:8088, MachineId=1100, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:511, Role=CoherenceServer) caused by Member(Id=4, Timestamp=2010-02-26 15:27:39.574, Address=10.149.155.75:8088, MachineId=1099, Location=process:800, Role=CoherenceServer) Member 4, the senior member of the younger cluster, receives the kill message from Member 3: 2010-02-26 15:33:44.921/367.468 Oracle Coherence GE 3.5.3/465p2 <Error> (thread=Cluster, member=4): Received a Kill message from a valid Member(Id=3, Timestamp=2010-02-26 15:27:24.892, Address=10.149.155.236:8088, MachineId=1260, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:32459, Role=CoherenceServer); stopping cluster service. In turn, Member 4 requests the departure of its junior members 5 and 6: 2010-02-26 15:33:44.921/367.468 Oracle Coherence GE 3.5.3/465p2 <Error> (thread=Cluster, member=4): Received a Kill message from a valid Member(Id=3, Timestamp=2010-02-26 15:27:24.892, Address=10.149.155.236:8088, MachineId=1260, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:32459, Role=CoherenceServer); stopping cluster service. 
2010-02-26 15:33:43.343/349.015 Oracle Coherence GE 3.5.3/465p2 <Error> (thread=Cluster, member=6): Received a Kill message from a valid Member(Id=4, Timestamp=2010-02-26 15:27:39.574, Address=10.149.155.75:8088, MachineId=1099, Location=process:800, Role=CoherenceServer); stopping cluster service. Once Members 4, 5, and 6 restart, they rejoin the original cluster with senior member 1. The log below is from Member 4. Note that it receives a different member id when it rejoins the cluster. 2010-02-26 15:33:44.921/367.468 Oracle Coherence GE 3.5.3/465p2 <Error> (thread=Cluster, member=4): Received a Kill message from a valid Member(Id=3, Timestamp=2010-02-26 15:27:24.892, Address=10.149.155.236:8088, MachineId=1260, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:32459, Role=CoherenceServer); stopping cluster service. 2010-02-26 15:33:46.921/369.468 Oracle Coherence GE 3.5.3/465p2 <D5> (thread=Cluster, member=4): Service Cluster left the cluster 2010-02-26 15:33:47.046/369.593 Oracle Coherence GE 3.5.3/465p2 <D5> (thread=Invocation:InvocationService, member=4): Service InvocationService left the cluster 2010-02-26 15:33:47.046/369.593 Oracle Coherence GE 3.5.3/465p2 <D5> (thread=OptimisticCache, member=4): Service OptimisticCache left the cluster 2010-02-26 15:33:47.046/369.593 Oracle Coherence GE 3.5.3/465p2 <D5> (thread=ReplicatedCache, member=4): Service ReplicatedCache left the cluster 2010-02-26 15:33:47.046/369.593 Oracle Coherence GE 3.5.3/465p2 <D5> (thread=DistributedCache, member=4): Service DistributedCache left the cluster 2010-02-26 15:33:47.046/369.593 Oracle Coherence GE 3.5.3/465p2 <D5> (thread=Invocation:Management, member=4): Service Management left the cluster 2010-02-26 15:33:47.046/369.593 Oracle Coherence GE 3.5.3/465p2 <D5> (thread=Cluster, member=4): Member 6 left service Management with senior member 5 2010-02-26 15:33:47.046/369.593 Oracle Coherence GE 3.5.3/465p2 <D5> (thread=Cluster, member=4): Member 6 left service DistributedCache with senior member 5 2010-02-26 15:33:47.046/369.593 Oracle Coherence GE 3.5.3/465p2 <D5> (thread=Cluster, member=4): Member 6 left service ReplicatedCache with senior member 5 2010-02-26 15:33:47.046/369.593 Oracle Coherence GE 3.5.3/465p2 <D5> (thread=Cluster, member=4): Member 6 left service OptimisticCache with senior member 5 2010-02-26 15:33:47.046/369.593 Oracle Coherence GE 3.5.3/465p2 <D5> (thread=Cluster, member=4): Member 6 left service InvocationService with senior member 5 2010-02-26 15:33:47.046/369.593 Oracle Coherence GE 3.5.3/465p2 <D5> (thread=Cluster, member=4): Member(Id=6, Timestamp=2010-02-26 15:33:47.046, Address=10.149.155.78:8088, MachineId=1102, Location=process:228, Role=CoherenceServer) left Cluster with senior member 4 2010-02-26 15:33:49.218/371.765 Oracle Coherence GE 3.5.3/465p2 <Info> (thread=main, member=n/a): Restarting cluster 2010-02-26 15:33:49.421/371.968 Oracle Coherence GE 3.5.3/465p2 <D5> (thread=Cluster, member=n/a): Service Cluster joined the cluster with senior service member n/a 2010-02-26 15:33:49.625/372.172 Oracle Coherence GE 3.5.3/465p2 <Info> (thread=Cluster, member=n/a): This Member(Id=5, Timestamp=2010-02-26 15:33:50.499, Address=10.149.155.75:8088, MachineId=1099, Location=process:800, Role=CoherenceServer, Edition=Grid Edition, Mode=Development, CpuCount=2, SocketCount=1) joined cluster "cluster:0xDDEB" with senior Member(Id=1, Timestamp=2010-02-26 15:27:06.931, Address=10.149.155.76:8088, MachineId=1100, 
Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:511, Role=CoherenceServer, Edition=Grid Edition, Mode=Development, CpuCount=2, SocketCount=2) Cool, isn't it?
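    As a footnote, the survivor rule the panic protocol applies (smaller islands are stopped; between equal-sized islands, the one with the older senior member wins) boils down to a comparison like the following sketch. Again, the names are invented and this is not the actual Coherence implementation:

        class PanicProtocolSketch {
            // Returns true if island A survives over island B under the rule described above.
            static boolean islandASurvives(int sizeA, long seniorJoinMillisA, int sizeB, long seniorJoinMillisB) {
                if (sizeA != sizeB) {
                    return sizeA > sizeB;                     // the larger island wins
                }
                return seniorJoinMillisA < seniorJoinMillisB; // equal size: older senior member wins
            }

            public static void main(String[] args) {
                // Two islands of three members each; the island whose senior joined earlier survives,
                // matching the example where Member 1's island outlives Member 4's.
                System.out.println(islandASurvives(3, 1L, 3, 2L)); // prints true
            }
        }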

    Read the article

  • Ancillary Objects: Separate Debug ELF Files For Solaris

    - by Ali Bahrami
    We introduced a new ELF object type in Solaris 11 Update 1 called the Ancillary Object. This posting describes them, using material originally written during their development, the PSARC case, and the Solaris Linker and Libraries Manual. ELF objects contain allocable sections, which are mapped into memory at runtime, and non-allocable sections, which are present in the file for use by debuggers and observability tools, but which are not mapped or used at runtime. Typically, all of these sections exist within a single object file. Ancillary objects allow them to instead go into a separate file. There are different reasons given for wanting such a feature. One can debate whether the added complexity is worth the benefit, and in most cases it is not. However, one important case stands out — customers with very large 32-bit objects who are not ready or able to make the transition to 64-bits. We have customers who build extremely large 32-bit objects. Historically, the debug sections in these objects have used the stabs format, which is limited, but relatively compact. In recent years, the industry has transitioned to the powerful but verbose DWARF standard. In some cases, the size of these debug sections is large enough to push the total object file size past the fundamental 4GB limit for 32-bit ELF object files. The best, and ultimately only, solution to overly large objects is to transition to 64-bits. However, consider environments where: Hundreds of users may be executing the code on large shared systems (32-bit code uses less memory and bus bandwidth, and on SPARC runs just as fast as 64-bit code otherwise). Complex finely tuned code, where the original authors may no longer be available. Critical production code, that was expensive to qualify and bring online, and which is otherwise serving its intended purpose without issue. Users in these risk averse and/or high scale categories have good reasons to push 32-bit objects to the limit before moving on. Ancillary objects offer these users a longer runway. Design The design of ancillary objects is intended to be simple, both to help human understanding when examining elfdump output, and to lower the bar for debuggers such as dbx to support them. The primary and ancillary objects have the same set of section headers, with the same names, in the same order (i.e. each section has the same index in both files). A single section of type SHT_SUNW_ANCILLARY is added to both objects, containing information that allows a debugger to identify and validate both files relative to each other. Given one of these files, the ancillary section allows you to identify the other. Allocable sections go in the primary object, and non-allocable ones go into the ancillary object. A small set of non-allocable sections, notably the symbol table, is copied into both objects. As noted above, most sections are only written to one of the two objects, but both objects have the same section header array. The section header in the file that does not contain the section data is tagged with the SHF_SUNW_ABSENT section header flag to indicate its placeholder status. Compiler writers and others who produce objects can set the SHF_SUNW_PRIMARY section header flag to mark non-allocable sections that should go to the primary object rather than the ancillary. If you don't request an ancillary object, the Solaris ELF format is unchanged. Users who don't use ancillary objects do not pay for the feature. 
This is important, because they exist to serve a small subset of our users, and must not complicate the common case. If you do request an ancillary object, the runtime behavior of the primary object will be the same as that of a normal object. There is no added runtime cost. The primary and ancillary object together represent a logical single object. This is facilitated by the use of a single set of section headers. One can easily imagine a tool that can merge a primary and ancillary object into a single file, or the reverse. (Note that although this is an interesting intellectual exercise, we don't actually supply such a tool because there's little practical benefit above and beyond using ld to create the files). Among the benefits of this approach are: There is no need for per-file symbol tables to reflect the contents of each file. The same symbol table that would be produced for a standard object can be used. The section contents are identical in either case — there is no need to alter data to accommodate multiple files. It is very easy for a debugger to adapt to these new files, and the processing involved can be encapsulated in input/output routines. Most of the existing debugger implementation applies without modification. The limit of a 4GB 32-bit output object is now raised to 4GB of code, and 4GB of debug data. There is also the future possibility (not currently supported) to support multiple ancillary objects, each of which could contain up to 4GB of additional debug data. It must be noted however that the 32-bit DWARF debug format is itself inherently 32-bit limited, as it uses 32-bit offsets between debug sections, so the ability to employ multiple ancillary object files may not turn out to be useful. Using Ancillary Objects (From the Solaris Linker and Libraries Guide) By default, objects contain both allocable and non-allocable sections. Allocable sections are the sections that contain executable code and the data needed by that code at runtime. Non-allocable sections contain supplemental information that is not required to execute an object at runtime. These sections support the operation of debuggers and other observability tools. The non-allocable sections in an object are not loaded into memory at runtime by the operating system, and so, they have no impact on memory use or other aspects of runtime performance no matter their size. For convenience, both allocable and non-allocable sections are normally maintained in the same file. However, there are situations in which it can be useful to separate these sections. To reduce the size of objects in order to improve the speed at which they can be copied across wide area networks. To support fine grained debugging of highly optimized code requires considerable debug data. In modern systems, the debugging data can easily be larger than the code it describes. The size of a 32-bit object is limited to 4 Gbytes. In very large 32-bit objects, the debug data can cause this limit to be exceeded and prevent the creation of the object. To limit the exposure of internal implementation details. Traditionally, objects have been stripped of non-allocable sections in order to address these issues. Stripping is effective, but destroys data that might be needed later. The Solaris link-editor can instead write non-allocable sections to an ancillary object. This feature is enabled with the -z ancillary command line option. $ ld ... 
    -z ancillary[=outfile] ...

    By default, the ancillary file is given the same name as the primary output object, with a .anc file extension. However, a different name can be specified by supplying an outfile value to the -z ancillary option. When -z ancillary is specified, the link-editor performs the following actions. All allocable sections are written to the primary object. In addition, all non-allocable sections containing one or more input sections that have the SHF_SUNW_PRIMARY section header flag set are written to the primary object. All remaining non-allocable sections are written to the ancillary object. The following non-allocable sections are written to both the primary object and ancillary object.

    .shstrtab        The section name string table.
    .symtab          The full non-dynamic symbol table.
    .symtab_shndx    The symbol table extended index section associated with .symtab.
    .strtab          The non-dynamic string table associated with .symtab.
    .SUNW_ancillary  Contains the information required to identify the primary and ancillary objects, and to identify the object being examined.

    The primary object and all ancillary objects contain the same array of section headers. Each section has the same section index in every file. Although the primary and ancillary objects all define the same section headers, the data for most sections will be written to a single file as described above. If the data for a section is not present in a given file, the SHF_SUNW_ABSENT section header flag is set, and the sh_size field is 0. This organization makes it possible to acquire a full list of section headers, a complete symbol table, and a complete list of the primary and ancillary objects from either of the primary or ancillary objects.

    The following example illustrates the underlying implementation of ancillary objects. An ancillary object is created by adding the -z ancillary command line option to an otherwise normal compilation. The file utility shows that the result is an executable named a.out, and an associated ancillary object named a.out.anc.

    $ cat hello.c
    #include <stdio.h>
    int
    main(int argc, char **argv)
    {
            (void) printf("hello, world\n");
            return (0);
    }
    $ cc -g -zancillary hello.c
    $ file a.out a.out.anc
    a.out: ELF 32-bit LSB executable 80386 Version 1 [FPU], dynamically linked, not stripped, ancillary object a.out.anc
    a.out.anc: ELF 32-bit LSB ancillary 80386 Version 1, primary object a.out
    $ ./a.out
    hello, world

    The resulting primary object is an ordinary executable that can be executed in the usual manner. It is no different at runtime than an executable built without the use of ancillary objects, and then stripped of non-allocable content using the strip or mcs commands. As previously described, the primary object and ancillary objects contain the same section headers. To see how this works, it is helpful to use the elfdump utility to display these section headers and compare them. The following table shows the section header information for a selection of headers from the previous link-edit example.
    Index  Section Name     Type            Primary Flags          Ancillary Flags               Primary Size  Ancillary Size
    13     .text            PROGBITS        ALLOC EXECINSTR        ALLOC EXECINSTR SUNW_ABSENT   0x131         0
    20     .data            PROGBITS        WRITE ALLOC            WRITE ALLOC SUNW_ABSENT       0x4c          0
    21     .symtab          SYMTAB          0                      0                             0x450         0x450
    22     .strtab          STRTAB          STRINGS                STRINGS                       0x1ad         0x1ad
    24     .debug_info      PROGBITS        SUNW_ABSENT            0                             0             0x1a7
    28     .shstrtab        STRTAB          STRINGS                STRINGS                       0x118         0x118
    29     .SUNW_ancillary  SUNW_ancillary  0                      0                             0x30          0x30

    The data for most sections is only present in one of the two files, and absent from the other file. The SHF_SUNW_ABSENT section header flag is set when the data is absent. The data for allocable sections needed at runtime is found in the primary object. The data for non-allocable sections used for debugging but not needed at runtime is placed in the ancillary file. A small set of non-allocable sections is fully present in both files. These are the .SUNW_ancillary section used to relate the primary and ancillary objects together, the section name string table .shstrtab, as well as the symbol table .symtab and its associated string table .strtab.

    It is possible to strip the symbol table from the primary object. A debugger that encounters an object without a symbol table can use the .SUNW_ancillary section to locate the ancillary object, and access the symbols contained within. The primary object, and all associated ancillary objects, contain a .SUNW_ancillary section that allows all the objects to be identified and related together.

    $ elfdump -T SUNW_ancillary a.out a.out.anc

    a.out:
    Ancillary Section:  .SUNW_ancillary
         index  tag                value
           [0]  ANC_SUNW_CHECKSUM  0x8724
           [1]  ANC_SUNW_MEMBER    0x1        a.out
           [2]  ANC_SUNW_CHECKSUM  0x8724
           [3]  ANC_SUNW_MEMBER    0x1a3      a.out.anc
           [4]  ANC_SUNW_CHECKSUM  0xfbe2
           [5]  ANC_SUNW_NULL      0

    a.out.anc:
    Ancillary Section:  .SUNW_ancillary
         index  tag                value
           [0]  ANC_SUNW_CHECKSUM  0xfbe2
           [1]  ANC_SUNW_MEMBER    0x1        a.out
           [2]  ANC_SUNW_CHECKSUM  0x8724
           [3]  ANC_SUNW_MEMBER    0x1a3      a.out.anc
           [4]  ANC_SUNW_CHECKSUM  0xfbe2
           [5]  ANC_SUNW_NULL      0

    The ancillary sections for both objects contain the same number of elements, and are identical except for the first element. Each object, starting with the primary object, is introduced with a MEMBER element that gives the file name, followed by a CHECKSUM that identifies the object. In this example, the primary object is a.out, and has a checksum of 0x8724. The ancillary object is a.out.anc, and has a checksum of 0xfbe2. The first element in a .SUNW_ancillary section, preceding the MEMBER element for the primary object, is always a CHECKSUM element, containing the checksum for the file being examined.

    The presence of a .SUNW_ancillary section in an object indicates that the object has associated ancillary objects. The names of the primary and all associated ancillary objects can be obtained from the ancillary section of any one of the files. It is possible to determine which file is being examined from the larger set of files by comparing the first checksum value to the checksum of each member that follows.

    Debugger Access and Use of Ancillary Objects

    Debuggers and other observability tools must merge the information found in the primary and ancillary object files in order to build a complete view of the object. This is equivalent to processing the information from a single file. This merging is simplified by the primary object and ancillary objects containing the same section headers, and a single symbol table. The following steps can be used by a debugger to assemble the information contained in these files.
    1. Starting with the primary object, or any of the ancillary objects, locate the .SUNW_ancillary section. The presence of this section identifies the object as part of an ancillary group, and contains information that can be used to obtain a complete list of the files and determine which of those files is the one currently being examined.
    2. Create a section header array in memory, using the section header array from the object being examined as an initial template.
    3. Open and read each file identified by the .SUNW_ancillary section in turn. For each file, fill in the in-memory section header array with the information for each section that does not have the SHF_SUNW_ABSENT flag set.

    The result will be a complete in-memory copy of the section headers with pointers to the data for all sections. Once this information has been acquired, the debugger can proceed as it would in the single file case, to access and control the running program.

    Note - The ELF definition of ancillary objects provides for a single primary object, and an arbitrary number of ancillary objects. At this time, the Oracle Solaris link-editor only produces a single ancillary object containing all non-allocable sections. This may change in the future. Debuggers and other observability tools should be written to handle the general case of multiple ancillary objects.

    ELF Implementation Details (From the Solaris Linker and Libraries Guide)

    To implement ancillary objects, it was necessary to extend the ELF format to add a new object type (ET_SUNW_ANCILLARY), a new section type (SHT_SUNW_ANCILLARY), and two new section header flags (SHF_SUNW_ABSENT, SHF_SUNW_PRIMARY). In this section, I will detail these changes, in the form of diffs to the Solaris Linker and Libraries manual.

    Part IV ELF Application Binary Interface
    Chapter 13: Object File Format

    Object File Format

    Edit Note: This existing section at the beginning of the chapter describes the ELF header. There's a table of object file types, which now includes the new ET_SUNW_ANCILLARY type.

    e_type Identifies the object file type, as listed in the following table.

    Name                Value   Meaning
    ET_NONE             0       No file type
    ET_REL              1       Relocatable file
    ET_EXEC             2       Executable file
    ET_DYN              3       Shared object file
    ET_CORE             4       Core file
    ET_LOSUNW           0xfefe  Start operating system specific range
    ET_SUNW_ANCILLARY   0xfefe  Ancillary object file
    ET_HISUNW           0xfefd  End operating system specific range
    ET_LOPROC           0xff00  Start processor-specific range
    ET_HIPROC           0xffff  End processor-specific range

    Sections

    Edit Note: This overview section defines the section header structure, and provides a high level description of known sections. It was updated to define the new SHF_SUNW_ABSENT and SHF_SUNW_PRIMARY flags and the new SHT_SUNW_ANCILLARY section.

    ...
    sh_type Categorizes the section's contents and semantics. Section types and their descriptions are listed in Table 13-5.
    sh_flags Sections support 1-bit flags that describe miscellaneous attributes. Flag definitions are listed in Table 13-8.
    ...

    Table 13-5 ELF Section Types, sh_type

    Name                 Value
    . . .
    SHT_LOSUNW           0x6fffffee
    SHT_SUNW_ancillary   0x6fffffee
    . . .

    ...
    SHT_LOSUNW - SHT_HISUNW Values in this inclusive range are reserved for Oracle Solaris OS semantics.
    SHT_SUNW_ANCILLARY Present when a given object is part of a group of ancillary objects. Contains information required to identify all the files that make up the group. See Ancillary Section.
    ...

    Table 13-8 ELF Section Attribute Flags

    Name                 Value
    . . .
    SHF_MASKOS           0x0ff00000
    SHF_SUNW_NODISCARD   0x00100000
    SHF_SUNW_ABSENT      0x00200000
    SHF_SUNW_PRIMARY     0x00400000
    SHF_MASKPROC         0xf0000000
    . . .

    ...
    SHF_SUNW_ABSENT Indicates that the data for this section is not present in this file. When ancillary objects are created, the primary object and any ancillary objects will all have the same section header array, to facilitate merging them to form a complete view of the object, and to allow them to use the same symbol tables. Each file contains a subset of the section data. The data for allocable sections is written to the primary object while the data for non-allocable sections is written to an ancillary file. The SHF_SUNW_ABSENT flag is used to indicate that the data for the section is not present in the object being examined. When the SHF_SUNW_ABSENT flag is set, the sh_size field of the section header must be 0. An application encountering an SHF_SUNW_ABSENT section can choose to ignore the section, or to search for the section data within one of the related ancillary files.

    SHF_SUNW_PRIMARY The default behavior when ancillary objects are created is to write all allocable sections to the primary object and all non-allocable sections to the ancillary objects. The SHF_SUNW_PRIMARY flag overrides this behavior. Any output section containing one or more input sections with the SHF_SUNW_PRIMARY flag set is written to the primary object without regard for its allocable status.
    ...

    Two members in the section header, sh_link and sh_info, hold special information, depending on section type.

    Table 13-9 ELF sh_link and sh_info Interpretation

    sh_type              sh_link                                                    sh_info
    . . .
    SHT_SUNW_ANCILLARY   The section header index of the associated string table.  0
    . . .

    Special Sections

    Edit Note: This section describes the sections used in Solaris ELF objects, using the types defined in the previous description of section types. It was updated to define the new .SUNW_ancillary (SHT_SUNW_ANCILLARY) section.

    Various sections hold program and control information. Sections in the following table are used by the system and have the indicated types and attributes.

    Table 13-10 ELF Special Sections

    Name              Type                 Attribute
    . . .
    .SUNW_ancillary   SHT_SUNW_ancillary   None
    . . .

    ...
    .SUNW_ancillary Present when a given object is part of a group of ancillary objects. Contains information required to identify all the files that make up the group. See Ancillary Section for details.
    ...

    Ancillary Section

    Edit Note: This new section provides the format reference describing the layout of a .SUNW_ancillary section and the meaning of the various tags. Note that these sections use the same tag/value concept used for dynamic and capabilities sections, and will be familiar to anyone used to working with ELF.

    In addition to the primary output object, the Solaris link-editor can produce one or more ancillary objects. Ancillary objects contain non-allocable sections that would normally be written to the primary object. When ancillary objects are produced, the primary object and all of the associated ancillary objects contain a SHT_SUNW_ancillary section, containing information that identifies these related objects. Given any one object from such a group, the ancillary section provides the information needed to identify and interpret the others. This section contains an array of the following structures. See sys/elf.h.
    typedef struct {
            Elf32_Word      a_tag;
            union {
                    Elf32_Word      a_val;
                    Elf32_Addr      a_ptr;
            } a_un;
    } Elf32_Ancillary;

    typedef struct {
            Elf64_Xword     a_tag;
            union {
                    Elf64_Xword     a_val;
                    Elf64_Addr      a_ptr;
            } a_un;
    } Elf64_Ancillary;

    For each object with this type, a_tag controls the interpretation of a_un.

    a_val These objects represent integer values with various interpretations.
    a_ptr These objects represent file offsets or addresses.

    The following ancillary tags exist.

    Table 13-NEW1 ELF Ancillary Array Tags

    Name                 Value   a_un
    ANC_SUNW_NULL        0       Ignored
    ANC_SUNW_CHECKSUM    1       a_val
    ANC_SUNW_MEMBER      2       a_ptr

    ANC_SUNW_NULL Marks the end of the ancillary section.

    ANC_SUNW_CHECKSUM Provides the checksum for a file in the a_val element. When ANC_SUNW_CHECKSUM precedes the first instance of ANC_SUNW_MEMBER, it provides the checksum for the object from which the ancillary section is being read. When it follows an ANC_SUNW_MEMBER tag, it provides the checksum for that member.

    ANC_SUNW_MEMBER Specifies an object name. The a_ptr element contains the string table offset of a null-terminated string that provides the file name.

    An ancillary section must always contain an ANC_SUNW_CHECKSUM before the first instance of ANC_SUNW_MEMBER, identifying the current object. Following that, there should be an ANC_SUNW_MEMBER for each object that makes up the complete set of objects. Each ANC_SUNW_MEMBER should be followed by an ANC_SUNW_CHECKSUM for that object. A typical ancillary section will therefore be structured as:

    Tag                  Meaning
    ANC_SUNW_CHECKSUM    Checksum of this object
    ANC_SUNW_MEMBER      Name of object #1
    ANC_SUNW_CHECKSUM    Checksum for object #1
    . . .
    ANC_SUNW_MEMBER      Name of object N
    ANC_SUNW_CHECKSUM    Checksum for object N
    ANC_SUNW_NULL

    An object can therefore identify itself by comparing the initial ANC_SUNW_CHECKSUM to each of the ones that follow, until it finds a match.

    Related Other Work

    The GNU developers have also encountered the need/desire to support separate debug information files, and use the solution detailed at http://sourceware.org/gdb/onlinedocs/gdb/Separate-Debug-Files.html. At the current time, the separate debug file is constructed by building the standard object first, and then copying the debug data out of it in a separate post-processing step. Hence, it is limited to a total of 4GB of code and debug data, just as a single object file would be. They are aware of this, and I have seen online comments indicating that they may add direct support for generating these separate files to their link-editor. It is worth noting that the GNU objcopy utility is available on Solaris, and that the Studio dbx debugger is able to use these GNU style separate debug files even on Solaris. Although this is interesting in terms of giving Linux users a familiar environment on Solaris, the 4GB limit means it is not an answer to the problem of very large 32-bit objects. We have also encountered issues with objcopy not understanding Solaris-specific ELF sections when using this approach.

    The GNU community also has a current effort to adapt their DWARF debug sections in order to move them to separate files before passing the relocatable objects to the linker. The details of Project Fission can be found at http://gcc.gnu.org/wiki/DebugFission. The goal of this project appears to be to reduce the amount of data seen by the link-editor. The primary effort revolves around moving DWARF data to separate .dwo files so that the link-editor never encounters them.
The details of modifying the DWARF data to be usable in this form are involved — please see the above URL for details.
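
    The tag walk described above is mechanical enough to sketch in a few lines of C. The following is a rough, illustrative sketch rather than anything taken from the posting or from the Solaris headers: the Elf32_Word and Elf32_Addr typedefs stand in for the real ELF base types from sys/elf.h, and the strtab argument (the string table that the section's sh_link points at, already read into memory) is an assumption made for the example.

    #include <stdio.h>
    #include <stdint.h>

    /* Stand-ins for the ELF base types; real code would use <sys/elf.h>. */
    typedef uint32_t Elf32_Word;
    typedef uint32_t Elf32_Addr;

    /* Tag values as listed in Table 13-NEW1 above. */
    #define ANC_SUNW_NULL     0
    #define ANC_SUNW_CHECKSUM 1
    #define ANC_SUNW_MEMBER   2

    /* Mirror of the Elf32_Ancillary structure shown above. */
    typedef struct {
            Elf32_Word a_tag;
            union {
                    Elf32_Word a_val;
                    Elf32_Addr a_ptr;
            } a_un;
    } Elf32_Ancillary;

    /*
     * Walk an in-memory copy of a .SUNW_ancillary array and print the
     * member files, marking the one whose checksum matches the leading
     * CHECKSUM entry (the file the section was read from).
     */
    static void
    list_ancillary_group(const Elf32_Ancillary *anc, const char *strtab)
    {
            Elf32_Word self = 0;     /* checksum of the file being examined */
            const char *member = "?";

            /* The first element is always the CHECKSUM of this file. */
            if (anc->a_tag == ANC_SUNW_CHECKSUM) {
                    self = anc->a_un.a_val;
                    anc++;
            }

            /* Each MEMBER names a file; the CHECKSUM after it identifies it. */
            for (; anc->a_tag != ANC_SUNW_NULL; anc++) {
                    if (anc->a_tag == ANC_SUNW_MEMBER)
                            member = strtab + anc->a_un.a_ptr;
                    else if (anc->a_tag == ANC_SUNW_CHECKSUM)
                            printf("%-12s checksum 0x%x%s\n", member,
                                (unsigned int)anc->a_un.a_val,
                                anc->a_un.a_val == self ? "  (this file)" : "");
            }
    }

    A real debugger would go on to open each named member and verify its checksum against the value recorded here, along the lines of the debugger steps described above.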

    Read the article

  • background image not showing in html

    - by Registered User
    I am having following css <!DOCTYPE html > <html> <head> <meta charset="utf-8"> <title>Black Goose Bistro Summer Menu</title> <link href='http://fonts.googleapis.com/css?family=Marko+One' rel='stylesheet' type='text/css'> <style> body { font-family: Georgia, serif; font-size: 100%; line-height: 175%; margin: 0 15% 0; background-image:url(images/bullseye.png); } #header { margin-top: 0; padding: 3em 1em 2em 1em; text-align: center; } a { text-decoration: none; } h1 { font: bold 1.5em Georgia, serif; text-shadow: .1em .1em .2em gray; } h2 { font-size: 1em; text-transform: uppercase; letter-spacing: .5em; text-align: center; } dt { font-weight: bold; } strong { font-style: italic; } ul { list-style-type: none; margin: 0; padding: 0; } #info p { font-style: italic; } .price { font-family: Georgia, serif; font-style: italic; } p.warning, sup { font-size: small; } .label { font-weight: bold; font-variant: small-caps; font-style: normal; } h2 + p { text-align: center; font-style: italic; } ); </style> </head> <body> <div id="header"> <h1>Black Goose Bistro &bull; Summer Menu</h1> <div id="info"> <p>Baker's Corner, Seekonk, Massachusetts<br> <span class="label">Hours: Monday through Thursday:</span> 11 to 9, <span class="label">Friday and Saturday;</span> 11 to midnight</p> <ul> <li><a href="#appetizers">Appetizers</a></li> <li><a href="#entrees">Main Courses</a></li> <li><a href="#toast">Traditional Toasts</a></li> <li><a href="#dessert">Dessert Selection</a></li> </ul> </div> </div> <div id="appetizers"> <h2>Appetizers</h2> <p>This season, we explore the spicy flavors of the southwest in our appetizer collection.</p> <dl> <dt>Black bean purses</dt> <dd>Spicy black bean and a blend of mexican cheeses wrapped in sheets of phyllo and baked until golden. <span class="price">$3.95</span></dd> <dt class="newitem">Southwestern napoleons with lump crab &mdash; <strong>new item!</strong></dt> <dd>Layers of light lump crab meat, bean and corn salsa, and our handmade flour tortillas. <span class="price">$7.95</span></dd> </dl> </div> <div id="entrees"> <h2>Main courses</h2> <p>Big, bold flavors are the name of the game this summer. Allow us to assist you with finding the perfect wine.</p> <dl> <dt class="newitem">Jerk rotisserie chicken with fried plantains &mdash; <strong>new item!</strong></dt> <dd>Tender chicken slow-roasted on the rotisserie, flavored with spicy and fragrant jerk sauce and served with fried plantains and fresh mango. <strong>Very spicy.</strong> <span class="price">$12.95</span></dd> <dt>Shrimp sate kebabs with peanut sauce</dt> <dd>Skewers of shrimp marinated in lemongrass, garlic, and fish sauce then grilled to perfection. Served with spicy peanut sauce and jasmine rice. <span class="price">$12.95</span></dd> <dt>Grilled skirt steak with mushroom fricasee</dt> <dd>Flavorful skirt steak marinated in asian flavors grilled as you like it<sup>*</sup>. Served over a blend of sauteed wild mushrooms with a side of blue cheese mashed potatoes. <span class="price">$16.95</span></dd> </dl> </div> <div id="toast"> <h2>Traditional Toasts</h2> <p>The ultimate comfort food, our traditional toast recipes are adapted from <a href="http://www.gutenberg.org/files/13923/13923-h/13923-h.htm"><cite>The Whitehouse Cookbook</cite></a> published in 1887.</p> <dl> <dt>Cream toast</dt> <dd>Simple cream sauce over highest quality toasted bread, baked daily. 
    <span class="price">$3.95</span></dd> <dt>Mushroom toast</dt> <dd>Layers of light lump crab meat, bean and corn salsa, and our handmade flour tortillas. <span class="price">$6.95</span></dd> <dt>Nun's toast</dt> <dd>Onions and hard-boiled eggs in a cream sauce over buttered hot toast. <span class="price">$6.95</span></dd> <dt>Apple toast</dt> <dd>Sweet, cinnamon stewed apples over delicious buttery grilled bread. <span class="price">$6.95</span></dd> </dl> </div> <div id="dessert"> <h2>Dessert Selection</h2> <p>Be sure to save room for our desserts, made daily by our own <a href="http://www.jwu.edu/college.aspx?id=19510">Johnson & Wales</a> trained pastry chef.</p> <dl> <dt class="newitem">Lemon chiffon cake &mdash; <strong>new item!</strong></dt> <dd>Light and citrus flavored sponge cake with buttercream frosting as light as a cloud. <span class="price">$2.95</span></dd> <dt class="newitem">Molten chocolate cake</dt> <dd>Bubba's special dark chocolate cake with a warm, molten center. Served with or without a splash of almond liqueur. <span class="price">$3.95</span></dd> </dl> </div> <p class="warning"><sup>*</sup> We are required to warn you that undercooked food is a health risk.</p> </body> </html>

    but the background image does not appear on the body. You can see background-image:url(images/bullseye.png); in the body rule. This HTML page is bistro.html, and in the directory that contains it there is a folder named images, and inside the images folder I have a file bullseye.png. I expect the PNG to appear in the background, but that does not happen. For the sake of the question I am also posting the image here. Let me know if the syntax of the CSS is wrong. Following is the image: http://i.stack.imgur.com/YUKgg.png

    Read the article

  • FreeBSD performance tuning. Sysctls, loader.conf, kernel.

    - by SaveTheRbtz
    I wanted to share knowledge of tuning FreeBSD via sysctls, so i'm posting them with comments. Based on Igor Sysoev (author of nginx) presentation about FreeBSD tuning up to 100,000-200,000 active connections. Sysctls are for 7.x FreeBSD. Since 7.2 amd64 some of them are tuned well by default. Prior 7.0 some of them are boot only (set via /boot/loader.conf) or does not exist at all. Highload web server sysctls: # Max. backlog size kern.ipc.somaxconn=4096 # Shared memory // 7.2+ can use shared memory > 2Gb kern.ipc.shmmax=2147483648 # Sockets kern.ipc.maxsockets=204800 # Do not use lager sockbufs on 8.0 # ( http://old.nabble.com/Significant-performance-regression-for-increased-maxsockbuf-on-8.0-RELEASE-tt26745981.html#a26745981 ) kern.ipc.maxsockbuf=262144 # Recive clusters (on amd64 7.2+ 65k is default) # For such high value vm.kmem_size must be increased to 3G #kern.ipc.nmbclusters=229376 # Jumbo pagesize(4k/8k) clusters # Used as general packet storage for jumbo frames # can be monitored via `netstat -m` #kern.ipc.nmbjumbop=192000 # Jumbo 9k/16k clusters # If you are using them #kern.ipc.nmbjumbo9=24000 #kern.ipc.nmbjumbo16=10240 # Every socket is a file, so increase them kern.maxfiles=204800 kern.maxfilesperproc=200000 kern.maxvnodes=200000 # Turn off receive autotuning #net.inet.tcp.recvbuf_auto=0 # Small receive space, only usable on http-server, on file server this # should be increased to 65535 or even more #net.inet.tcp.recvspace=8192 # Small send space is useful for http servers that serve small files # Autotuned since 7.x net.inet.tcp.sendspace=16384 # This should be enabled if you going to use big spaces (>64k) #net.inet.tcp.rfc1323=1 # Turn this off on highspeed, lossless connections (LAN 1Gbit+) #net.inet.tcp.delayed_ack=0 # This feature is useful if you are serving data over modems, Gigabit Ethernet, # or even high speed WAN links (or any other link with a high bandwidth delay product), # especially if you are also using window scaling or have configured a large send window. # You can try setting it to 0 on fileserver with 1GBit+ interfaces # Automatically disables on small RTT ( http://www.freebsd.org/cgi/cvsweb.cgi/src/sys/netinet/tcp_subr.c?#rev1.237 ) #net.inet.tcp.inflight.enable=0 # Disable randomizing of ports to avoid false RST # Before usage check SA here www.bsdcan.org/2006/papers/ImprovingTCPIP.pdf # (it's also says that port randomization auto-disables at some conn.rates, but I didn't tested it thou) #net.inet.ip.portrange.randomized=0 # Increase portrange # For outgoing connections only. Good for seed-boxes and ftp servers. net.inet.ip.portrange.first=1024 net.inet.ip.portrange.last=65535 # Security net.inet.ip.redirect=0 net.inet.ip.sourceroute=0 net.inet.ip.accept_sourceroute=0 net.inet.icmp.maskrepl=0 net.inet.icmp.log_redirect=0 net.inet.icmp.drop_redirect=1 net.inet.tcp.drop_synfin=1 # Security net.inet.udp.blackhole=1 net.inet.tcp.blackhole=2 # Increases default TTL, sometimes useful # Default is 64 net.inet.ip.ttl=128 # Lessen max segment life to conserve resources # ACK waiting time in miliseconds (default: 30000 from RFC) net.inet.tcp.msl=5000 # Max bumber of timewait sockets net.inet.tcp.maxtcptw=40960 # Don't use tw on local connections # As of 15 Apr 2009. Igor Sysoev says that nolocaltimewait has some buggy realization. 
# So disable it or now till get fixed #net.inet.tcp.nolocaltimewait=1 # FIN_WAIT_2 state fast recycle net.inet.tcp.fast_finwait2_recycle=1 # Time before tcp keepalive probe is sent # default is 2 hours (7200000) #net.inet.tcp.keepidle=60000 # Should be increased until net.inet.ip.intr_queue_drops is zero net.inet.ip.intr_queue_maxlen=4096 # Interrupt handling via multiple CPU, but with context switch. # You can play with it. Default is 1; #net.isr.direct=0 # This is for routers only #net.inet.ip.forwarding=1 #net.inet.ip.fastforwarding=1 # This speed ups dummynet when channel isn't saturated net.inet.ip.dummynet.io_fast=1 # Increase dummynet(4) hash #net.inet.ip.dummynet.hash_size=2048 #net.inet.ip.dummynet.max_chain_len # Should be increased when you have A LOT of files on server # (Increase until vfs.ufs.dirhash_mem becames lower) vfs.ufs.dirhash_maxmem=67108864 # Explicit Congestion Notification (see http://en.wikipedia.org/wiki/Explicit_Congestion_Notification) net.inet.tcp.ecn.enable=1 # Flowtable - flow caching mechanism # Useful for routers #net.inet.flowtable.enable=1 #net.inet.flowtable.nmbflows=65535 # Extreme polling tuning #kern.polling.burst_max=1000 #kern.polling.each_burst=1000 #kern.polling.reg_frac=100 #kern.polling.user_frac=1 #kern.polling.idle_poll=0 # IPFW dynamic rules and timeouts tuning # Increase dyn_buckets till net.inet.ip.fw.curr_dyn_buckets is lower net.inet.ip.fw.dyn_buckets=65536 net.inet.ip.fw.dyn_max=65536 net.inet.ip.fw.dyn_ack_lifetime=120 net.inet.ip.fw.dyn_syn_lifetime=10 net.inet.ip.fw.dyn_fin_lifetime=2 net.inet.ip.fw.dyn_short_lifetime=10 # Make packets pass firewall only once when using dummynet # i.e. packets going thru pipe are passing out from firewall with accept #net.inet.ip.fw.one_pass=1 # shm_use_phys Wires all shared pages, making them unswappable # Use this to lessen Virtual Memory Manager's work when using Shared Mem. # Useful for databases #kern.ipc.shm_use_phys=1 /boot/loader.conf: # Accept filters for data, http and DNS requests # Usefull when your software uses select() instead of kevent/kqueue or when you under DDoS # DNS accf available on 8.0+ accf_data_load="YES" accf_http_load="YES" accf_dns_load="YES" # Async IO system calls aio_load="YES" # Adds NCQ support in FreeBSD # WARNING! all ad[0-9]+ devices will be renamed to ada[0-9]+ # 8.0+ only #ahci_load= #siis_load= # Increase kernel memory size to 3G. # # Use ONLY if you have KVA_PAGES in kernel configuration, and you have more than 3G RAM # Otherwise panic will happen on next reboot! # # It's required for high buffer sizes: kern.ipc.nmbjumbop, kern.ipc.nmbclusters, etc # Useful on highload stateful firewalls, proxies or ZFS fileservers # (FreeBSD 7.2+ amd64 users: Check that current value is lower!) #vm.kmem_size="3G" # Older versions of FreeBSD can't tune maxfiles on the fly #kern.maxfiles="200000" # Useful for databases # Sets maximum data size to 1G # (FreeBSD 7.2+ amd64 users: Check that current value is lower!) 
#kern.maxdsiz="1G" # Maximum buffer size(vfs.maxbufspace) # You can check current one via vfs.bufspace # Should be lowered/upped depending on server's load-type # Usually decreased to preserve kmem # (default is 200M) #kern.maxbcache="512M" # Sendfile buffers # For i386 only #kern.ipc.nsfbufs=10240 # syncache Hash table tuning net.inet.tcp.syncache.hashsize=1024 net.inet.tcp.syncache.bucketlimit=100 # Incresed hostcache net.inet.tcp.hostcache.hashsize="16384" net.inet.tcp.hostcache.bucketlimit="100" # TCP control-block Hash table tuning net.inet.tcp.tcbhashsize=4096 # Enable superpages, for 7.2+ only # Also read http://lists.freebsd.org/pipermail/freebsd-hackers/2009-November/030094.html vm.pmap.pg_ps_enabled=1 # Usefull if you are using Intel-Gigabit NIC #hw.em.rxd=4096 #hw.em.txd=4096 #hw.em.rx_process_limit="-1" # Also if you have ALOT interrupts on NIC - play with following parameters # NOTE: You should set them for every NIC #dev.em.0.rx_int_delay: 250 #dev.em.0.tx_int_delay: 250 #dev.em.0.rx_abs_int_delay: 250 #dev.em.0.tx_abs_int_delay: 250 # There is also multithreaded version of em drivers can be found here: # http://people.yandex-team.ru/~wawa/ # # for additional em monitoring and statistics use # `sysctl dev.em.0.stats=1 ; dmesg` # #Same tunings for igb #hw.igb.rxd=4096 #hw.igb.txd=4096 #hw.igb.rx_process_limit=100 # Some useful netisr tunables. See sysctl net.isr #net.isr.defaultqlimit=4096 #net.isr.maxqlimit: 10240 # Bind netisr threads to CPUs #net.isr.bindthreads=1 # Nicer boot logo =) loader_logo="beastie" And finally here is my additions to GENERIC kernel # Just some of them, see also # cat /sys/{i386,amd64,}/conf/NOTES # This one useful only on i386 #options KVA_PAGES=512 # You can play with HZ in environments with high interrupt rate (default is 1000) # 100 is for my notebook to prolong it's battery life #options HZ=100 # Polling is goot on network loads with high packet rates and low-end NICs # NB! Do not enable it if you want more than one netisr thread #options DEVICE_POLLING # Eliminate datacopy on socket read-write # To take advantage with zero copy sockets you should have an MTU of 8K(amd64) # (4k for i386). This req. is only for receiving data. # Read more in man zero_copy_sockets #options ZERO_COPY_SOCKETS # Support TCP sign. Used for IPSec options TCP_SIGNATURE options IPSEC # This ones can be loaded as modules. They described in loader.conf section #options ACCEPT_FILTER_DATA #options ACCEPT_FILTER_HTTP # Adding ipfw, also can be loaded as modules options IPFIREWALL options IPFIREWALL_VERBOSE options IPFIREWALL_VERBOSE_LIMIT=10 options IPFIREWALL_DEFAULT_TO_ACCEPT options IPFIREWALL_FORWARD # Adding kernel NAT options IPFIREWALL_NAT options LIBALIAS # Traffic shaping options DUMMYNET # Divert, i.e. 
for userspace NAT options IPDIVERT # This is for OpenBSD's pf firewall device pf device pflog # pf's QoS - ALTQ options ALTQ options ALTQ_CBQ # Class Bases Queuing (CBQ) options ALTQ_RED # Random Early Detection (RED) options ALTQ_RIO # RED In/Out options ALTQ_HFSC # Hierarchical Packet Scheduler (HFSC) options ALTQ_PRIQ # Priority Queuing (PRIQ) options ALTQ_NOPCC # Required for SMP build # Pretty console # Manual can be found here http://forums.freebsd.org/showthread.php?t=6134 #options VESA #options SC_PIXEL_MODE # Disable reboot on Ctrl Alt Del #options SC_DISABLE_REBOOT # Change normal|kernel messages color options SC_NORM_ATTR=(FG_GREEN|BG_BLACK) options SC_KERNEL_CONS_ATTR=(FG_YELLOW|BG_BLACK) # More scroll space options SC_HISTORY_SIZE=8192 # Adding hardware crypto device device crypto device cryptodev # Useful network interfaces device vlan device tap #Virtual Ethernet driver device gre #IP over IP tunneling device if_bridge #Bridge interface device pfsync #synchronization interface for PF device carp #Common Address Redundancy Protocol device enc #IPsec interface device lagg #Link aggregation interface device stf #IPv4-IPv6 port # Also for my notebook, but may be used with Opteron #device amdtemp # Support for ECMP. More than one route for destination # Works even with default route so one can use it as LB for two ISP # For now code is unstable and panics (panic: rtfree 2) on route deletions. #options RADIX_MPATH # Multicast routing #options MROUTING #options PIM # DTrace options KDTRACE_HOOKS # all architectures - enable general DTrace hooks options DDB_CTF # all architectures - kernel ELF linker loads CTF data #options KDTRACE_FRAME # amd64-only # Adaptive spining in lockmgr (8.x+) # See http://www.mail-archive.com/[email protected]/msg10782.html options ADAPTIVE_LOCKMGRS # UTF-8 in console (9.x+) #options TEKEN_UTF8 #options TEKEN_XTERM # NCQ support # WARNING! all ad[0-9]+ devices will be renamed to ada[0-9]+ #options ATA_CAM # FreeBSD 9+ # Deadlock resolver thread # For additional information see http://www.mail-archive.com/[email protected]/msg18124.html #options DEADLKRES PS. Also most of FreeBSD's limits can be monitored by # vmstat -z and # limits PPS. variety of network counters can be monitored via # netstat -s In FreeBSD-9 netstat's -Q option appeared, try following command to display netisr stats # netstat -Q PPPS. also see # man 7 tuning PPPPS. I wanted to thank FreeBSD community, especially author of nginx - Igor Sysoev, nginx-ru@ and FreeBSD-performance@ mailing lists for providing useful information about FreeBSD tuning. So here is the question: What tunings are you using on yours FreeBSD servers? You can also post your /etc/sysctl.conf, /boot/loader.conf, kernel options, etc with description of its' meaning (do not copy-paste from sysctl -d). Don't forget to specify server type (web, smb, gateway, etc) Let's share experience!
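
    Since lists like this tend to drift from what a machine is actually running, it can help to audit the live values before copying settings around. Below is a minimal C sketch, not part of the original post, that reads a handful of the tunables mentioned above through FreeBSD's sysctlbyname(3); the particular names are only examples, and the program reads values without setting anything.

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/sysctl.h>

    int
    main(void)
    {
            /* A few of the tunables discussed above; purely illustrative. */
            const char *names[] = {
                    "kern.ipc.somaxconn",
                    "kern.maxfiles",
                    "kern.maxfilesperproc",
                    "net.inet.tcp.sendspace",
            };
            size_t i;

            for (i = 0; i < sizeof(names) / sizeof(names[0]); i++) {
                    int value;
                    size_t len = sizeof(value);

                    /* sysctlbyname(3) reads the current value; newp is NULL,
                     * so nothing is changed. */
                    if (sysctlbyname(names[i], &value, &len, NULL, 0) == -1) {
                            perror(names[i]);
                            continue;
                    }
                    printf("%s = %d\n", names[i], value);
            }
            return (0);
    }

    The same information is available from the shell via sysctl, of course; the point is only that the values are easy to check programmatically before and after applying a tuning pass.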

    Read the article

  • Sendmail Failing to Forward Locally Addressed Mail to Exchange Server

    - by DomainSoil
    I've recently gained employment as a web developer with a small company. What they neglected to tell me upon hire was that I would be administering the server along with my other daily duties. Now, truth be told, I'm not clueless when it comes to these things, but this is my first rodeo working with a rack server/console. However, I'm confident that I will be able to work through any solutions you provide.

    Short Description: When a customer places an order via our (Magento CE 1.8.1.0) website, a copy of said order is supposed to be BCC'd to our sales manager. I say supposed because this was a working feature before the old administrator left.

    Long Description: Shortly after I started, we had a server crash which required a server restart. After the restart, we noticed a few features on our site weren't working, but all of those have been cleaned up except this one. I had to create an account on our server for root access. When a customer places an order, our site's software (Magento CE 1.8.1.0) is configured to BCC the customer's order email to our sales manager. We use a Microsoft Exchange 2007 Server for our mail, which is hosted on a different machine (in-house) that I don't have access to ATM, but I'm sure I could if needed. As far as I can tell, all other external emails work. Only INTERNAL email addresses fail to deliver. I know this because I've also tested my own internal address via our website. I set up an account with an internal email, made a test order, and never received the email. I changed my email for the account to an external GMail account, and received emails as expected.

    Let's dive into the logs and configs. For privacy/security reasons, names have been changed to the following:

    domain.com = Our Top Level Domain.
    email.local = Our Exchange Server.
    example.com = ANY other TLD.
    OLDadmin = Our previous Server Administrator.
    NEWadmin = Me.
    SALES@ = Our Sales Manager.
    Customer# = A Customer.

    Here's a list of the programs and config files that are relevant to this issue:

    Server:
    > [root@www ~]# cat /etc/centos-release
    CentOS release 6.3 (final)

    Sendmail:
    > [root@www ~]# sendmail -d0.1 -bt < /dev/null
    Version 8.14.4
    ========SYSTEM IDENTITY (after readcf)========
    (short domain name) $w = domain
    (canonical domain name) $j = domain.com
    (subdomain name) $m = com
    (node name) $k = www.domain.com
    > [root@www ~]# rpm -qa | grep -i sendmail
    sendmail-cf-8.14.4-8.e16.noarch
    sendmail-8.14-4-8.e16.x86_64

    nslookup:
    > [root@www ~]# nslookup email.local
    Name: email.local
    Address: 192.168.1.50

    hostname:
    > [root@www ~]# hostname
    www.domain.com

    /etc/mail/access:
    > [root@www ~]# vi /etc/mail/access
    Connect:localhost.localdomain RELAY
    Connect:localhost RELAY
    Connect:127.0.0.1 RELAY

    /etc/mail/domaintable:
    > [root@www ~]# vi /etc/mail/domaintable
    #

    /etc/mail/local-host-names:
    > [root@www ~]# vi /etc/mail/local-host-names
    #

    /etc/mail/mailertable:
    > [root@www ~]# vi /etc/mail/mailertable
    #

    /etc/mail/sendmail.cf:
    > [root@www ~]# vi /etc/mail/sendmail.cf ###################################################################### ##### ##### DO NOT EDIT THIS FILE! Only edit the source .mc file.
##### ###################################################################### ###################################################################### ##### $Id: cfhead.m4,v 8.120 2009/01/23 22:39:21 ca Exp $ ##### ##### $Id: cf.m4,v 8.32 1999/02/07 07:26:14 gshapiro Exp $ ##### ##### setup for linux ##### ##### $Id: linux.m4,v 8.13 2000/09/17 17:30:00 gshapiro Exp $ ##### ##### $Id: local_procmail.m4,v 8.22 2002/11/17 04:24:19 ca Exp $ ##### ##### $Id: no_default_msa.m4,v 8.2 2001/02/14 05:03:22 gshapiro Exp $ ##### ##### $Id: smrsh.m4,v 8.14 1999/11/18 05:06:23 ca Exp $ ##### ##### $Id: mailertable.m4,v 8.25 2002/06/27 23:23:57 gshapiro Exp $ ##### ##### $Id: virtusertable.m4,v 8.23 2002/06/27 23:23:57 gshapiro Exp $ ##### ##### $Id: redirect.m4,v 8.15 1999/08/06 01:47:36 gshapiro Exp $ ##### ##### $Id: always_add_domain.m4,v 8.11 2000/09/12 22:00:53 ca Exp $ ##### ##### $Id: use_cw_file.m4,v 8.11 2001/08/26 20:58:57 gshapiro Exp $ ##### ##### $Id: use_ct_file.m4,v 8.11 2001/08/26 20:58:57 gshapiro Exp $ ##### ##### $Id: local_procmail.m4,v 8.22 2002/11/17 04:24:19 ca Exp $ ##### ##### $Id: access_db.m4,v 8.27 2006/07/06 21:10:10 ca Exp $ ##### ##### $Id: blacklist_recipients.m4,v 8.13 1999/04/02 02:25:13 gshapiro Exp $ ##### ##### $Id: accept_unresolvable_domains.m4,v 8.10 1999/02/07 07:26:07 gshapiro Exp $ ##### ##### $Id: masquerade_envelope.m4,v 8.9 1999/02/07 07:26:10 gshapiro Exp $ ##### ##### $Id: masquerade_entire_domain.m4,v 8.9 1999/02/07 07:26:10 gshapiro Exp $ ##### ##### $Id: proto.m4,v 8.741 2009/12/11 00:04:53 ca Exp $ ##### # level 10 config file format V10/Berkeley # override file safeties - setting this option compromises system security, # addressing the actual file configuration problem is preferred # need to set this before any file actions are encountered in the cf file #O DontBlameSendmail=safe # default LDAP map specification # need to set this now before any LDAP maps are defined #O LDAPDefaultSpec=-h localhost ################## # local info # ################## # my LDAP cluster # need to set this before any LDAP lookups are done (including classes) #D{sendmailMTACluster}$m Cwlocalhost # file containing names of hosts for which we receive email Fw/etc/mail/local-host-names # my official domain name # ... define this only if sendmail cannot automatically determine your domain #Dj$w.Foo.COM # host/domain names ending with a token in class P are canonical CP. # "Smart" relay host (may be null) DSemail.local # operators that cannot be in local usernames (i.e., network indicators) CO @ % ! # a class with just dot (for identifying canonical names) C.. # a class with just a left bracket (for identifying domain literals) C[[ # access_db acceptance class C{Accept}OK RELAY C{ResOk}OKR # Hosts for which relaying is permitted ($=R) FR-o /etc/mail/relay-domains # arithmetic map Karith arith # macro storage map Kmacro macro # possible values for TLS_connection in access map C{Tls}VERIFY ENCR # who I send unqualified names to if FEATURE(stickyhost) is used # (null means deliver locally) DRemail.local. # who gets all local email traffic # ($R has precedence for unqualified names if FEATURE(stickyhost) is used) DHemail.local. 
# dequoting map Kdequote dequote # class E: names that should be exposed as from this host, even if we masquerade # class L: names that should be delivered locally, even if we have a relay # class M: domains that should be converted to $M # class N: domains that should not be converted to $M #CL root C{E}root C{w}localhost.localdomain C{M}domain.com # who I masquerade as (null for no masquerading) (see also $=M) DMdomain.com # my name for error messages DnMAILER-DAEMON # Mailer table (overriding domains) Kmailertable hash -o /etc/mail/mailertable.db # Virtual user table (maps incoming users) Kvirtuser hash -o /etc/mail/virtusertable.db CPREDIRECT # Access list database (for spam stomping) Kaccess hash -T<TMPF> -o /etc/mail/access.db # Configuration version number DZ8.14.4 /etc/mail/sendmail.mc: > [root@www ~]# vi /etc/mail/sendmail.mc divert(-1)dnl dnl # dnl # This is the sendmail macro config file for m4. If you make changes to dnl # /etc/mail/sendmail.mc, you will need to regenerate the dnl # /etc/mail/sendmail.cf file by confirming that the sendmail-cf package is dnl # installed and then performing a dnl # dnl # /etc/mail/make dnl # include(`/usr/share/sendmail-cf/m4/cf.m4')dnl VERSIONID(`setup for linux')dnl OSTYPE(`linux')dnl dnl # dnl # Do not advertize sendmail version. dnl # dnl define(`confSMTP_LOGIN_MSG', `$j Sendmail; $b')dnl dnl # dnl # default logging level is 9, you might want to set it higher to dnl # debug the configuration dnl # dnl define(`confLOG_LEVEL', `9')dnl dnl # dnl # Uncomment and edit the following line if your outgoing mail needs to dnl # be sent out through an external mail server: dnl # define(`SMART_HOST', `email.local')dnl dnl # define(`confDEF_USER_ID', ``8:12'')dnl dnl define(`confAUTO_REBUILD')dnl define(`confTO_CONNECT', `1m')dnl define(`confTRY_NULL_MX_LIST', `True')dnl define(`confDONT_PROBE_INTERFACES', `True')dnl define(`PROCMAIL_MAILER_PATH', `/usr/bin/procmail')dnl define(`ALIAS_FILE', `/etc/aliases')dnl define(`STATUS_FILE', `/var/log/mail/statistics')dnl define(`UUCP_MAILER_MAX', `2000000')dnl define(`confUSERDB_SPEC', `/etc/mail/userdb.db')dnl define(`confPRIVACY_FLAGS', `authwarnings,novrfy,noexpn,restrictqrun')dnl define(`confAUTH_OPTIONS', `A')dnl dnl # dnl # The following allows relaying if the user authenticates, and disallows dnl # plaintext authentication (PLAIN/LOGIN) on non-TLS links dnl # dnl define(`confAUTH_OPTIONS', `A p')dnl dnl # dnl # PLAIN is the preferred plaintext authentication method and used by dnl # Mozilla Mail and Evolution, though Outlook Express and other MUAs do dnl # use LOGIN. Other mechanisms should be used if the connection is not dnl # guaranteed secure. dnl # Please remember that saslauthd needs to be running for AUTH. 
dnl # dnl TRUST_AUTH_MECH(`EXTERNAL DIGEST-MD5 CRAM-MD5 LOGIN PLAIN')dnl dnl define(`confAUTH_MECHANISMS', `EXTERNAL GSSAPI DIGEST-MD5 CRAM-MD5 LOGIN PLAIN')dnl dnl # dnl # Rudimentary information on creating certificates for sendmail TLS: dnl # cd /etc/pki/tls/certs; make sendmail.pem dnl # Complete usage: dnl # make -C /etc/pki/tls/certs usage dnl # dnl define(`confCACERT_PATH', `/etc/pki/tls/certs')dnl dnl define(`confCACERT', `/etc/pki/tls/certs/ca-bundle.crt')dnl dnl define(`confSERVER_CERT', `/etc/pki/tls/certs/sendmail.pem')dnl dnl define(`confSERVER_KEY', `/etc/pki/tls/certs/sendmail.pem')dnl dnl # dnl # This allows sendmail to use a keyfile that is shared with OpenLDAP's dnl # slapd, which requires the file to be readble by group ldap dnl # dnl define(`confDONT_BLAME_SENDMAIL', `groupreadablekeyfile')dnl dnl # dnl define(`confTO_QUEUEWARN', `4h')dnl dnl define(`confTO_QUEUERETURN', `5d')dnl dnl define(`confQUEUE_LA', `12')dnl dnl define(`confREFUSE_LA', `18')dnl define(`confTO_IDENT', `0')dnl dnl FEATURE(delay_checks)dnl FEATURE(`no_default_msa', `dnl')dnl FEATURE(`smrsh', `/usr/sbin/smrsh')dnl FEATURE(`mailertable', `hash -o /etc/mail/mailertable.db')dnl FEATURE(`virtusertable', `hash -o /etc/mail/virtusertable.db')dnl FEATURE(redirect)dnl FEATURE(always_add_domain)dnl FEATURE(use_cw_file)dnl FEATURE(use_ct_file)dnl dnl # dnl # The following limits the number of processes sendmail can fork to accept dnl # incoming messages or process its message queues to 20.) sendmail refuses dnl # to accept connections once it has reached its quota of child processes. dnl # dnl define(`confMAX_DAEMON_CHILDREN', `20')dnl dnl # dnl # Limits the number of new connections per second. This caps the overhead dnl # incurred due to forking new sendmail processes. May be useful against dnl # DoS attacks or barrages of spam. (As mentioned below, a per-IP address dnl # limit would be useful but is not available as an option at this writing.) dnl # dnl define(`confCONNECTION_RATE_THROTTLE', `3')dnl dnl # dnl # The -t option will retry delivery if e.g. the user runs over his quota. dnl # FEATURE(local_procmail, `', `procmail -t -Y -a $h -d $u')dnl FEATURE(`access_db', `hash -T<TMPF> -o /etc/mail/access.db')dnl FEATURE(`blacklist_recipients')dnl EXPOSED_USER(`root')dnl dnl # dnl # For using Cyrus-IMAPd as POP3/IMAP server through LMTP delivery uncomment dnl # the following 2 definitions and activate below in the MAILER section the dnl # cyrusv2 mailer. dnl # dnl define(`confLOCAL_MAILER', `cyrusv2')dnl dnl define(`CYRUSV2_MAILER_ARGS', `FILE /var/lib/imap/socket/lmtp')dnl dnl # dnl # The following causes sendmail to only listen on the IPv4 loopback address dnl # 127.0.0.1 and not on any other network devices. Remove the loopback dnl # address restriction to accept email from the internet or intranet. dnl # DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA')dnl dnl # dnl # The following causes sendmail to additionally listen to port 587 for dnl # mail from MUAs that authenticate. Roaming users who can't reach their dnl # preferred sendmail daemon due to port 25 being blocked or redirected find dnl # this useful. dnl # dnl DAEMON_OPTIONS(`Port=submission, Name=MSA, M=Ea')dnl dnl # dnl # The following causes sendmail to additionally listen to port 465, but dnl # starting immediately in TLS mode upon connecting. Port 25 or 587 followed dnl # by STARTTLS is preferred, but roaming clients using Outlook Express can't dnl # do STARTTLS on ports other than 25. 
Mozilla Mail can ONLY use STARTTLS dnl # and doesn't support the deprecated smtps; Evolution <1.1.1 uses smtps dnl # when SSL is enabled-- STARTTLS support is available in version 1.1.1. dnl # dnl # For this to work your OpenSSL certificates must be configured. dnl # dnl DAEMON_OPTIONS(`Port=smtps, Name=TLSMTA, M=s')dnl dnl # dnl # The following causes sendmail to additionally listen on the IPv6 loopback dnl # device. Remove the loopback address restriction listen to the network. dnl # dnl DAEMON_OPTIONS(`port=smtp,Addr=::1, Name=MTA-v6, Family=inet6')dnl dnl # dnl # enable both ipv6 and ipv4 in sendmail: dnl # dnl DAEMON_OPTIONS(`Name=MTA-v4, Family=inet, Name=MTA-v6, Family=inet6') dnl # dnl # We strongly recommend not accepting unresolvable domains if you want to dnl # protect yourself from spam. However, the laptop and users on computers dnl # that do not have 24x7 DNS do need this. dnl # FEATURE(`accept_unresolvable_domains')dnl dnl # dnl FEATURE(`relay_based_on_MX')dnl dnl # dnl # Also accept email sent to "localhost.localdomain" as local email. dnl # LOCAL_DOMAIN(`localhost.localdomain')dnl dnl # dnl # The following example makes mail from this host and any additional dnl # specified domains appear to be sent from mydomain.com dnl # MASQUERADE_AS(`domain.com')dnl dnl # dnl # masquerade not just the headers, but the envelope as well dnl FEATURE(masquerade_envelope)dnl dnl # dnl # masquerade not just @mydomainalias.com, but @*.mydomainalias.com as well dnl # FEATURE(masquerade_entire_domain)dnl dnl # MASQUERADE_DOMAIN(domain.com)dnl dnl MASQUERADE_DOMAIN(localhost.localdomain)dnl dnl MASQUERADE_DOMAIN(mydomainalias.com)dnl dnl MASQUERADE_DOMAIN(mydomain.lan)dnl MAILER(smtp)dnl MAILER(procmail)dnl dnl MAILER(cyrusv2)dnl /etc/mail/trusted-users: > [root@www ~]# vi /etc/mail/trusted-users # /etc/mail/virtusertable: > [root@www ~]# vi /etc/mail/virtusertable [email protected] [email protected] [email protected] [email protected] /etc/hosts: > [root@www ~]# vi /etc/hosts 127.0.0.1 localhost.localdomain localhost ::1 localhost6.localdomain6 localhost6 192.168.1.50 email.local I've only included the "local info" part of sendmail.cf, to save space. If there are any files that I've missed, please advise so I may produce them. Now that that's out of the way, lets look at some entries from /var/log/maillog. The first entry is from an order BEFORE the crash, when the site was working as expected. ##Order 200005374 Aug 5, 2014 7:06:38 AM## Aug 5 07:06:39 www sendmail[26149]: s75C6dqB026149: from=OLDadmin, size=11091, class=0, nrcpts=2, msgid=<[email protected]>, relay=OLDadmin@localhost Aug 5 07:06:39 www sendmail[26150]: s75C6dXe026150: from=<[email protected]>, size=11257, class=0, nrcpts=2, msgid=<[email protected]>, proto=ESMTP, daemon=MTA, relay=localhost.localdomain [127.0.0.1] Aug 5 07:06:39 www sendmail[26149]: s75C6dqB026149: [email protected],=?utf-8?B?dGhvbWFzICBHaWxsZXNwaWU=?= <[email protected]>, ctladdr=OLDadmin (501/501), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=71091, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (s75C6dXe026150 Message accepted for delivery) Aug 5 07:06:40 www sendmail[26152]: s75C6dXe026150: to=<[email protected]>,<[email protected]>, delay=00:00:01, xdelay=00:00:01, mailer=relay, pri=161257, relay=email.local. [192.168.1.50], dsn=2.0.0, stat=Sent ( <[email protected]> Queued mail for delivery) This next entry from maillog is from an order AFTER the crash. 
##Order 200005375 Aug 5, 2014 9:45:25 AM## Aug 5 09:45:26 www sendmail[30021]: s75EjQ4O030021: from=OLDadmin, size=11344, class=0, nrcpts=2, msgid=<[email protected]>, relay=OLDadmin@localhost Aug 5 09:45:26 www sendmail[30022]: s75EjQm1030022: <[email protected]>... User unknown Aug 5 09:45:26 www sendmail[30021]: s75EjQ4O030021: [email protected], ctladdr=OLDadmin (501/501), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=71344, relay=[127.0.0.1] [127.0.0.1], dsn=5.1.1, stat=User unknown Aug 5 09:45:26 www sendmail[30022]: s75EjQm1030022: from=<[email protected]>, size=11500, class=0, nrcpts=1, msgid=<[email protected]>, proto=ESMTP, daemon=MTA, relay=localhost.localdomain [127.0.0.1] Aug 5 09:45:26 www sendmail[30021]: s75EjQ4O030021: to==?utf-8?B?S2VubmV0aCBCaWViZXI=?= <[email protected]>, ctladdr=OLDadmin (501/501), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=71344, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (s75EjQm1030022 Message accepted for delivery) Aug 5 09:45:26 www sendmail[30021]: s75EjQ4O030021: s75EjQ4P030021: DSN: User unknown Aug 5 09:45:26 www sendmail[30022]: s75EjQm3030022: <[email protected]>... User unknown Aug 5 09:45:26 www sendmail[30021]: s75EjQ4P030021: to=OLDadmin, delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=42368, relay=[127.0.0.1] [127.0.0.1], dsn=5.1.1, stat=User unknown Aug 5 09:45:26 www sendmail[30022]: s75EjQm3030022: from=<>, size=12368, class=0, nrcpts=0, proto=ESMTP, daemon=MTA, relay=localhost.localdomain [127.0.0.1] Aug 5 09:45:26 www sendmail[30021]: s75EjQ4P030021: s75EjQ4Q030021: return to sender: User unknown Aug 5 09:45:26 www sendmail[30022]: s75EjQm5030022: from=<>, size=14845, class=0, nrcpts=1, msgid=<[email protected]>, proto=ESMTP, daemon=MTA, relay=localhost.localdomain [127.0.0.1] Aug 5 09:45:26 www sendmail[30021]: s75EjQ4Q030021: to=postmaster, delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=43392, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (s75EjQm5030022 Message accepted for delivery) Aug 5 09:45:26 www sendmail[30025]: s75EjQm5030022: to=root, delay=00:00:00, xdelay=00:00:00, mailer=local, pri=45053, dsn=2.0.0, stat=Sent Aug 5 09:45:27 www sendmail[30024]: s75EjQm1030022: to=<[email protected]>, delay=00:00:01, xdelay=00:00:01, mailer=relay, pri=131500, relay=email.local. [192.168.1.50], dsn=2.0.0, stat=Sent ( <[email protected]> Queued mail for delivery) To add a little more, I think I've pinpointed the actual crash event. 
##THE CRASH## Aug 5 09:39:46 www sendmail[3251]: restarting /usr/sbin/sendmail due to signal Aug 5 09:39:46 www sm-msp-queue[3260]: restarting /usr/sbin/sendmail due to signal Aug 5 09:39:46 www sm-msp-queue[29370]: starting daemon (8.14.4): queueing@01:00:00 Aug 5 09:39:47 www sendmail[29372]: starting daemon (8.14.4): SMTP+queueing@01:00:00 Aug 5 09:40:02 www sendmail[29465]: s75Ee2vT029465: Authentication-Warning: www.domain.com: OLDadmin set sender to root using -f Aug 5 09:40:02 www sendmail[29464]: s75Ee2IF029464: Authentication-Warning: www.domain.com: OLDadmin set sender to root using -f Aug 5 09:40:02 www sendmail[29465]: s75Ee2vT029465: from=root, size=1426, class=0, nrcpts=1, msgid=<[email protected]>, relay=OLDadmin@localhost Aug 5 09:40:02 www sendmail[29464]: s75Ee2IF029464: from=root, size=1426, class=0, nrcpts=1, msgid=<[email protected]>, relay=OLDadmin@localhost Aug 5 09:40:02 www sendmail[29466]: s75Ee23t029466: from=<[email protected]>, size=1784, class=0, nrcpts=1, msgid=<[email protected]>, proto=ESMTP, daemon=MTA, relay=localhost.localdomain [127.0.0.1] Aug 5 09:40:02 www sendmail[29466]: s75Ee23t029466: to=<[email protected]>, delay=00:00:00, mailer=local, pri=31784, dsn=4.4.3, stat=queued Aug 5 09:40:02 www sendmail[29467]: s75Ee2wh029467: from=<[email protected]>, size=1784, class=0, nrcpts=1, msgid=<[email protected]>, proto=ESMTP, daemon=MTA, relay=localhost.localdomain [127.0.0.1] Aug 5 09:40:02 www sendmail[29467]: s75Ee2wh029467: to=<[email protected]>, delay=00:00:00, mailer=local, pri=31784, dsn=4.4.3, stat=queued Aug 5 09:40:02 www sendmail[29464]: s75Ee2IF029464: to=OLDadmin, ctladdr=root (0/0), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=31426, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (s75Ee23t029466 Message accepted for delivery) Aug 5 09:40:02 www sendmail[29465]: s75Ee2vT029465: to=OLDadmin, ctladdr=root (0/0), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=31426, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (s75Ee2wh029467 Message accepted for delivery) Aug 5 09:40:06 www sm-msp-queue[29370]: restarting /usr/sbin/sendmail due to signal Aug 5 09:40:06 www sendmail[29372]: restarting /usr/sbin/sendmail due to signal Aug 5 09:40:06 www sm-msp-queue[29888]: starting daemon (8.14.4): queueing@01:00:00 Aug 5 09:40:06 www sendmail[29890]: starting daemon (8.14.4): SMTP+queueing@01:00:00 Aug 5 09:40:06 www sendmail[29891]: s75Ee23t029466: to=<[email protected]>, delay=00:00:04, mailer=local, pri=121784, dsn=5.1.1, stat=User unknown Aug 5 09:40:06 www sendmail[29891]: s75Ee23t029466: s75Ee6xY029891: DSN: User unknown Aug 5 09:40:06 www sendmail[29891]: s75Ee6xY029891: to=<[email protected]>, delay=00:00:00, xdelay=00:00:00, mailer=local, pri=33035, dsn=2.0.0, stat=Sent Aug 5 09:40:06 www sendmail[29891]: s75Ee2wh029467: to=<[email protected]>, delay=00:00:04, mailer=local, pri=121784, dsn=5.1.1, stat=User unknown Aug 5 09:40:06 www sendmail[29891]: s75Ee2wh029467: s75Ee6xZ029891: DSN: User unknown Aug 5 09:40:06 www sendmail[29891]: s75Ee6xZ029891: to=<[email protected]>, delay=00:00:00, xdelay=00:00:00, mailer=local, pri=33035, dsn=2.0.0, stat=Sent Something to note about the maillog's: Before the crash, the msgid included localhost.localdomain; after the crash it's been domain.com. Thanks to all who take the time to read and look into this issue. I appreciate it and look forward to tackling this issue together.

    Read the article

  • Async ignored on AJAX requests on Nginx server

    - by eComEvo
    Despite sending an async request to the server over AJAX, the server will not respond until the previous unrelated request has finished. The following code is only broken in this way on Nginx, but runs perfectly on Apache. This call will start a background process and it waits for it to complete so it can display the final result. $.ajax({ type: 'GET', async: true, url: $(this).data('route'), data: $('input[name=data]').val(), dataType: 'json', success: function (data) { /* do stuff */} error: function (data) { /* handle errors */} }); The below is called after the above, which on Apache requires 100ms to execute and repeats itself, showing progress for data being written in the background: checkStatusInterval = setInterval(function () { $.ajax({ type: 'GET', async: false, cache: false, url: '/process-status?process=' + currentElement.attr('id'), dataType: 'json', success: function (data) { /* update progress bar and status message */ } }); }, 1000); Unfortunately, when this script is run from nginx, the above progress request never even finishes a single request until the first AJAX request that sent the data is done. If I change the async to TRUE in the above, it executes one every interval, but none of them complete until that very first AJAX request finishes. Here is the main nginx conf file: #user nobody; worker_processes 1; #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; #pid logs/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; server_names_hash_bucket_size 64; # configure temporary paths # nginx is started with param -p, setting nginx path to serverpack installdir fastcgi_temp_path temp/fastcgi; uwsgi_temp_path temp/uwsgi; scgi_temp_path temp/scgi; client_body_temp_path temp/client-body 1 2; proxy_temp_path temp/proxy; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; #access_log logs/access.log main; # Sendfile copies data between one FD and other from within the kernel. # More efficient than read() + write(), since the requires transferring data to and from the user space. sendfile on; # Tcp_nopush causes nginx to attempt to send its HTTP response head in one packet, # instead of using partial frames. This is useful for prepending headers before calling sendfile, # or for throughput optimization. tcp_nopush on; # don't buffer data-sends (disable Nagle algorithm). Good for sending frequent small bursts of data in real time. tcp_nodelay on; types_hash_max_size 2048; # Timeout for keep-alive connections. Server will close connections after this time. keepalive_timeout 90; # Number of requests a client can make over the keep-alive connection. This is set high for testing. keepalive_requests 100000; # allow the server to close the connection after a client stops responding. Frees up socket-associated memory. reset_timedout_connection on; # send the client a "request timed out" if the body is not loaded by this time. Default 60. client_header_timeout 20; client_body_timeout 60; # If the client stops reading data, free up the stale client connection after this much time. Default 60. 
send_timeout 60; # Size Limits client_body_buffer_size 64k; client_header_buffer_size 4k; client_max_body_size 8M; # FastCGI fastcgi_connect_timeout 60; fastcgi_send_timeout 120; fastcgi_read_timeout 300; # default: 60 secs; when step debugging with XDEBUG, you need to increase this value fastcgi_buffer_size 64k; fastcgi_buffers 4 64k; fastcgi_busy_buffers_size 128k; fastcgi_temp_file_write_size 128k; # Caches information about open FDs, freqently accessed files. open_file_cache max=200000 inactive=20s; open_file_cache_valid 30s; open_file_cache_min_uses 2; open_file_cache_errors on; # Turn on gzip output compression to save bandwidth. # http://wiki.nginx.org/HttpGzipModule gzip on; gzip_disable "MSIE [1-6]\.(?!.*SV1)"; gzip_http_version 1.1; gzip_vary on; gzip_proxied any; #gzip_proxied expired no-cache no-store private auth; gzip_comp_level 6; gzip_buffers 16 8k; gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript; # show all files and folders autoindex on; server { # access from localhost only listen 127.0.0.1:80; server_name localhost; root www; # the following default "catch-all" configuration, allows access to the server from outside. # please ensure your firewall allows access to tcp/port 80. check your "skype" config. # listen 80; # server_name _; log_not_found off; charset utf-8; access_log logs/access.log main; # handle files in the root path /www location / { index index.php index.html index.htm; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root www; } # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100 # location ~ \.php$ { try_files $uri =404; fastcgi_pass 127.0.0.1:9100; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } # add expire headers location ~* ^.+.(gif|ico|jpg|jpeg|png|flv|swf|pdf|mp3|mp4|xml|txt|js|css)$ { expires 30d; } # deny access to .htaccess files (if Apache's document root concurs with nginx's one) # deny access to git & svn repositories location ~ /(\.ht|\.git|\.svn) { deny all; } } # include config files of "enabled" domains include domains-enabled/*.conf; } Here is the enabled domain conf file: access_log off; access_log C:/server/www/test.dev/logs/access.log; error_log C:/server/www/test.dev/logs/error.log; # HTTP Server server { listen 127.0.0.1:80; server_name test.dev; root C:/server/www/test.dev/public; index index.php; rewrite_log on; default_type application/octet-stream; #include /etc/nginx/mime.types; # Include common configurations. include domains-common/location.conf; } # HTTPS server server { listen 443 ssl; server_name test.dev; root C:/server/www/test.dev/public; index index.php; rewrite_log on; default_type application/octet-stream; #include /etc/nginx/mime.types; # Include common configurations. include domains-common/location.conf; include domains-common/ssl.conf; } Contents of ssl.conf: # OpenSSL for HTTPS connections. 
ssl on; ssl_certificate C:/server/bin/openssl/certs/cert.pem; ssl_certificate_key C:/server/bin/openssl/certs/cert.key; ssl_session_timeout 5m; ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers HIGH:!aNULL:!MD5; ssl_prefer_server_ciphers on; # Pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100 location ~ \.php$ { try_files $uri =404; fastcgi_param HTTPS on; fastcgi_pass 127.0.0.1:9100; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } Contents of location.conf: # Remove trailing slash to please Laravel routing system. if (!-d $request_filename) { rewrite ^/(.+)/$ /$1 permanent; } location / { try_files $uri $uri/ /index.php?$query_string; } # We don't need .ht files with nginx. location ~ /(\.ht|\.git|\.svn) { deny all; } # Added cache headers for images. location ~* \.(png|jpg|jpeg|gif)$ { expires 30d; log_not_found off; } # Only 3 hours on CSS/JS to allow me to roll out fixes during early weeks. location ~* \.(js|css)$ { expires 3h; log_not_found off; } # Add expire headers. location ~* ^.+.(gif|ico|jpg|jpeg|png|flv|swf|pdf|mp3|mp4|xml|txt)$ { expires 30d; } # Pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100 location ~ \.php$ { try_files $uri /index.php =404; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; fastcgi_pass 127.0.0.1:9100; } Any ideas where this is going wrong?

    Read the article

  • FreeBSD 8.1 unstable network connection

    - by frankcheong
    I have three FreeBSD 8.1 running on three different hardware and therefore consist of different network adapter as well (bce, bge and igb). I found that the network connection is kind of unstable which I have tried to scp some 10MB file and found that I cannot always get the files completed successfully. I have further checked with my network admin and he claim that the problem is being caused by the network driver which cannot support the load whereby he tried to ping using huge packet size (around 15k) and my server will drop packet consistently at a regular interval. I found that this statement may not be valid since the three server is using three different network drive and it would be quite impossible that the same problem is being caused by three different network adapter and thus different network driver. Since then I have tried to tune up the performance by playing around with the /etc/sysctl.conf figures with no luck. kern.ipc.somaxconn=1024 kern.ipc.shmall=3276800 kern.ipc.shmmax=1638400000 # Security net.inet.ip.redirect=0 net.inet.ip.sourceroute=0 net.inet.ip.accept_sourceroute=0 net.inet.icmp.maskrepl=0 net.inet.icmp.log_redirect=0 net.inet.icmp.drop_redirect=1 net.inet.tcp.drop_synfin=1 # Security net.inet.udp.blackhole=1 net.inet.tcp.blackhole=2 # Required by pf net.inet.ip.forwarding=1 #Network Performance Tuning kern.ipc.maxsockbuf=16777216 net.inet.tcp.rfc1323=1 net.inet.tcp.sendbuf_max=16777216 net.inet.tcp.recvbuf_max=16777216 # Setting specifically for 1 or even 10Gbps network net.local.stream.sendspace=262144 net.local.stream.recvspace=262144 net.inet.tcp.local_slowstart_flightsize=10 net.inet.tcp.nolocaltimewait=1 net.inet.tcp.mssdflt=1460 net.inet.tcp.sendbuf_auto=1 net.inet.tcp.sendbuf_inc=16384 net.inet.tcp.recvbuf_auto=1 net.inet.tcp.recvbuf_inc=524288 net.inet.tcp.sendspace=262144 net.inet.tcp.recvspace=262144 net.inet.udp.recvspace=262144 kern.ipc.maxsockbuf=16777216 kern.ipc.nmbclusters=32768 net.inet.tcp.delayed_ack=1 net.inet.tcp.delacktime=100 net.inet.tcp.slowstart_flightsize=179 net.inet.tcp.inflight.enable=1 net.inet.tcp.inflight.min=6144 # Reduce the cache size of slow start connection net.inet.tcp.hostcache.expire=1 Our network admin also claim that they see quite a lot of network up and down from their cisco switch log while I cannot find any up down message inside the dmesg. Have further checked the netstat -s but dont have concrete idea. tcp: 133695291 packets sent 39408539 data packets (3358837321 bytes) 61868 data packets (89472844 bytes) retransmitted 24 data packets unnecessarily retransmitted 0 resends initiated by MTU discovery 50756141 ack-only packets (2148 delayed) 0 URG only packets 0 window probe packets 4372385 window update packets 39781869 control packets 134898031 packets received 72339403 acks (for 3357601899 bytes) 190712 duplicate acks 0 acks for unsent data 59339201 packets (3647021974 bytes) received in-sequence 114 completely duplicate packets (135202 bytes) 27 old duplicate packets 0 packets with some dup. 
data (0 bytes duped) 42090 out-of-order packets (60817889 bytes) 0 packets (0 bytes) of data after window 0 window probes 3953896 window update packets 64181 packets received after close 0 discarded for bad checksums 0 discarded for bad header offset fields 0 discarded because packet too short 45192 discarded due to memory problems 19945391 connection requests 1323420 connection accepts 0 bad connection attempts 0 listen queue overflows 0 ignored RSTs in the windows 21133581 connections established (including accepts) 21268724 connections closed (including 32737 drops) 207874 connections updated cached RTT on close 207874 connections updated cached RTT variance on close 132439 connections updated cached ssthresh on close 42392 embryonic connections dropped 72339338 segments updated rtt (of 69477829 attempts) 390871 retransmit timeouts 0 connections dropped by rexmit timeout 0 persist timeouts 0 connections dropped by persist timeout 0 Connections (fin_wait_2) dropped because of timeout 13990 keepalive timeouts 2 keepalive probes sent 13988 connections dropped by keepalive 173044 correct ACK header predictions 36947371 correct data packet header predictions 1323420 syncache entries added 0 retransmitted 0 dupsyn 0 dropped 1323420 completed 0 bucket overflow 0 cache overflow 0 reset 0 stale 0 aborted 0 badack 0 unreach 0 zone failures 1323420 cookies sent 0 cookies received 1864 SACK recovery episodes 18005 segment rexmits in SACK recovery episodes 26066896 byte rexmits in SACK recovery episodes 147327 SACK options (SACK blocks) received 87473 SACK options (SACK blocks) sent 0 SACK scoreboard overflow 0 packets with ECN CE bit set 0 packets with ECN ECT(0) bit set 0 packets with ECN ECT(1) bit set 0 successful ECN handshakes 0 times ECN reduced the congestion window udp: 5141258 datagrams received 0 with incomplete header 0 with bad data length field 0 with bad checksum 1 with no checksum 0 dropped due to no socket 129616 broadcast/multicast datagrams undelivered 0 dropped due to full socket buffers 0 not for hashed pcb 5011642 delivered 5016050 datagrams output 0 times multicast source filter matched sctp: 0 input packets 0 datagrams 0 packets that had data 0 input SACK chunks 0 input DATA chunks 0 duplicate DATA chunks 0 input HB chunks 0 HB-ACK chunks 0 input ECNE chunks 0 input AUTH chunks 0 chunks missing AUTH 0 invalid HMAC ids received 0 invalid secret ids received 0 auth failed 0 fast path receives all one chunk 0 fast path multi-part data 0 output packets 0 output SACKs 0 output DATA chunks 0 retransmitted DATA chunks 0 fast retransmitted DATA chunks 0 FR's that happened more than once to same chunk 0 intput HB chunks 0 output ECNE chunks 0 output AUTH chunks 0 ip_output error counter Packet drop statistics: 0 from middle box 0 from end host 0 with data 0 non-data, non-endhost 0 non-endhost, bandwidth rep only 0 not enough for chunk header 0 not enough data to confirm 0 where process_chunk_drop said break 0 failed to find TSN 0 attempt reverse TSN lookup 0 e-host confirms zero-rwnd 0 midbox confirms no space 0 data did not match TSN 0 TSN's marked for Fast Retran Timeouts: 0 iterator timers fired 0 T3 data time outs 0 window probe (T3) timers fired 0 INIT timers fired 0 sack timers fired 0 shutdown timers fired 0 heartbeat timers fired 0 a cookie timeout fired 0 an endpoint changed its cookiesecret 0 PMTU timers fired 0 shutdown ack timers fired 0 shutdown guard timers fired 0 stream reset timers fired 0 early FR timers fired 0 an asconf timer fired 0 auto close timer fired 0 asoc 
free timers expired 0 inp free timers expired 0 packet shorter than header 0 checksum error 0 no endpoint for port 0 bad v-tag 0 bad SID 0 no memory 0 number of multiple FR in a RTT window 0 RFC813 allowed sending 0 RFC813 does not allow sending 0 times max burst prohibited sending 0 look ahead tells us no memory in interface 0 numbers of window probes sent 0 times an output error to clamp down on next user send 0 times sctp_senderrors were caused from a user 0 number of in data drops due to chunk limit reached 0 number of in data drops due to rwnd limit reached 0 times a ECN reduced the cwnd 0 used express lookup via vtag 0 collision in express lookup 0 times the sender ran dry of user data on primary 0 same for above 0 sacks the slow way 0 window update only sacks sent 0 sends with sinfo_flags !=0 0 unordered sends 0 sends with EOF flag set 0 sends with ABORT flag set 0 times protocol drain called 0 times we did a protocol drain 0 times recv was called with peek 0 cached chunks used 0 cached stream oq's used 0 unread messages abandonded by close 0 send burst avoidance, already max burst inflight to net 0 send cwnd full avoidance, already max burst inflight to net 0 number of map array over-runs via fwd-tsn's ip: 137814085 total packets received 0 bad header checksums 0 with size smaller than minimum 0 with data size < data length 0 with ip length > max ip packet size 0 with header length < data size 0 with data length < header length 0 with bad options 0 with incorrect version number 1200 fragments received 0 fragments dropped (dup or out of space) 0 fragments dropped after timeout 300 packets reassembled ok 137813009 packets for this host 530 packets for unknown/unsupported protocol 0 packets forwarded (0 packets fast forwarded) 61 packets not forwardable 0 packets received for unknown multicast group 0 redirects sent 137234598 packets sent from this host 0 packets sent with fabricated ip header 685307 output packets dropped due to no bufs, etc. 
52 output packets discarded due to no route 300 output datagrams fragmented 1200 fragments created 0 datagrams that can't be fragmented 0 tunneling packets that can't find gif 0 datagrams with bad address in header icmp: 0 calls to icmp_error 0 errors not generated in response to an icmp message Output histogram: echo reply: 305 0 messages with bad code fields 0 messages less than the minimum length 0 messages with bad checksum 0 messages with bad length 0 multicast echo requests ignored 0 multicast timestamp requests ignored Input histogram: destination unreachable: 530 echo: 305 305 message responses generated 0 invalid return addresses 0 no return routes ICMP address mask responses are disabled igmp: 0 messages received 0 messages received with too few bytes 0 messages received with wrong TTL 0 messages received with bad checksum 0 V1/V2 membership queries received 0 V3 membership queries received 0 membership queries received with invalid field(s) 0 general queries received 0 group queries received 0 group-source queries received 0 group-source queries dropped 0 membership reports received 0 membership reports received with invalid field(s) 0 membership reports received for groups to which we belong 0 V3 reports received without Router Alert 0 membership reports sent arp: 376748 ARP requests sent 3207 ARP replies sent 245245 ARP requests received 80845 ARP replies received 326090 ARP packets received 267712 total packets dropped due to no ARP entry 108876 ARP entrys timed out 0 Duplicate IPs seen ip6: 2226633 total packets received 0 with size smaller than minimum 0 with data size < data length 0 with bad options 0 with incorrect version number 0 fragments received 0 fragments dropped (dup or out of space) 0 fragments dropped after timeout 0 fragments that exceeded limit 0 packets reassembled ok 2226633 packets for this host 0 packets forwarded 0 packets not forwardable 0 redirects sent 2226633 packets sent from this host 0 packets sent with fabricated ip header 0 output packets dropped due to no bufs, etc. 
8 output packets discarded due to no route 0 output datagrams fragmented 0 fragments created 0 datagrams that can't be fragmented 0 packets that violated scope rules 0 multicast packets which we don't join Input histogram: UDP: 2226633 Mbuf statistics: 962679 one mbuf 1263954 one ext mbuf 0 two or more ext mbuf 0 packets whose headers are not continuous 0 tunneling packets that can't find gif 0 packets discarded because of too many headers 0 failures of source address selection Source addresses selection rule applied: icmp6: 0 calls to icmp6_error 0 errors not generated in response to an icmp6 message 0 errors not generated because of rate limitation 0 messages with bad code fields 0 messages < minimum length 0 bad checksums 0 messages with bad length Histogram of error messages to be generated: 0 no route 0 administratively prohibited 0 beyond scope 0 address unreachable 0 port unreachable 0 packet too big 0 time exceed transit 0 time exceed reassembly 0 erroneous header field 0 unrecognized next header 0 unrecognized option 0 redirect 0 unknown 0 message responses generated 0 messages with too many ND options 0 messages with bad ND options 0 bad neighbor solicitation messages 0 bad neighbor advertisement messages 0 bad router solicitation messages 0 bad router advertisement messages 0 bad redirect messages 0 path MTU changes rip6: 0 messages received 0 checksum calculations on inbound 0 messages with bad checksum 0 messages dropped due to no socket 0 multicast messages dropped due to no socket 0 messages dropped due to full socket buffers 0 delivered 0 datagrams output netstat -m 516/5124/5640 mbufs in use (current/cache/total) 512/1634/2146/32768 mbuf clusters in use (current/cache/total/max) 512/1536 mbuf+clusters out of packet secondary zone in use (current/cache) 0/1303/1303/12800 4k (page size) jumbo clusters in use (current/cache/total/max) 0/0/0/6400 9k jumbo clusters in use (current/cache/total/max) 0/0/0/3200 16k jumbo clusters in use (current/cache/total/max) 1153K/9761K/10914K bytes allocated to network (current/cache/total) 0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters) 0/0/0 requests for jumbo clusters denied (4k/9k/16k) 0/8/6656 sfbufs in use (current/peak/max) 0 requests for sfbufs denied 0 requests for sfbufs delayed 0 requests for I/O initiated by sendfile 0 calls to protocol drain routines Anyone got an idea what might be the possible cause?

    Read the article

  • What's New in ASP.NET 4.0 Part Two: WebForms and Visual Studio Enhancements

    - by Rick Strahl
    In the last installment I talked about the core changes in the ASP.NET runtime that I’ve been taking advantage of. In this column, I’ll cover the changes to the Web Forms engine and some of the cool improvements in Visual Studio that make Web and general development easier. WebForms The WebForms engine is the area that has received most significant changes in ASP.NET 4.0. Probably the most widely anticipated features are related to managing page client ids and of ViewState on WebForm pages. Take Control of Your ClientIDs Unique ClientID generation in ASP.NET has been one of the most complained about “features” in ASP.NET. Although there’s a very good technical reason for these unique generated ids - they guarantee unique ids for each and every server control on a page - these unique and generated ids often get in the way of client-side JavaScript development and CSS styling as it’s often inconvenient and fragile to work with the long, generated ClientIDs. In ASP.NET 4.0 you can now specify an explicit client id mode on each control or each naming container parent control to control how client ids are generated. By default, ASP.NET generates mangled client ids for any control contained in a naming container (like a Master Page, or a User Control for example). The key to ClientID management in ASP.NET 4.0 are the new ClientIDMode and ClientIDRowSuffix properties. ClientIDMode supports four different ClientID generation settings shown below. For the following examples, imagine that you have a Textbox control named txtName inside of a master page control container on a WebForms page. <%@Page Language="C#"      MasterPageFile="~/Site.Master"     CodeBehind="WebForm2.aspx.cs"     Inherits="WebApplication1.WebForm2"  %> <asp:Content ID="content"  ContentPlaceHolderID="content"               runat="server"               ClientIDMode="Static" >       <asp:TextBox runat="server" ID="txtName" /> </asp:Content> The four available ClientIDMode values are: AutoID This is the existing behavior in ASP.NET 1.x-3.x where full naming container munging takes place. <input name="ctl00$content$txtName" type="text"        id="ctl00_content_txtName" /> This should be familiar to any ASP.NET developer and results in fairly unpredictable client ids that can easily change if the containership hierarchy changes. For example, removing the master page changes the name in this case, so if you were to move a block of script code that works against the control to a non-Master page, the script code immediately breaks. Static This option is the most deterministic setting that forces the control’s ClientID to use its ID value directly. No naming container naming at all is applied and you end up with clean client ids: <input name="ctl00$content$txtName"         type="text" id="txtName" /> Note that the name property which is used for postback variables to the server still is munged, but the ClientID property is displayed simply as the ID value that you have assigned to the control. This option is what most of us want to use, but you have to be clear on that because it can potentially cause conflicts with other controls on the page. If there are several instances of the same naming container (several instances of the same user control for example) there can easily be a client id naming conflict. 
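Where Static cannot be guaranteed everywhere (for instance inside a reusable user control that may appear more than once), emitting client script through the server-side ClientID property sidesteps the conflict problem entirely. The following code-behind fragment is only a minimal sketch, not taken from the article; the class and control names are hypothetical, and it behaves the same under any ClientIDMode because the rendered id is read back from the control at runtime:

using System;
using System.Web.UI;
using System.Web.UI.WebControls;

// Hypothetical page class; assumes the markup declares a TextBox with ID="txtName".
public partial class CustomerForm : Page
{
    // Stands in for the designer-generated field for the txtName control.
    protected TextBox txtName;

    protected void Page_Load(object sender, EventArgs e)
    {
        // txtName.ClientID is "txtName" under Static, or the mangled
        // ctl00_content_txtName style id under AutoID - either way the
        // emitted script targets whatever id was actually rendered.
        string script = string.Format(
            "document.getElementById('{0}').focus();", txtName.ClientID);
        ClientScript.RegisterStartupScript(GetType(), "focusName", script, true);
    }
}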
Note that if you assign Static to a data-bound control, like a list child control in templates, you do not get unique ids either, so for list controls where you rely on unique id for child controls, you’ll probably want to use Predictable rather than Static. I’ll write more on this a little later when I discuss ClientIDRowSuffix. Predictable The previous two values are pretty self-explanatory. Predictable however, requires some explanation. To me at least it’s not in the least bit predictable. MSDN defines this value as follows: This algorithm is used for controls that are in data-bound controls. The ClientID value is generated by concatenating the ClientID value of the parent naming container with the ID value of the control. If the control is a data-bound control that generates multiple rows, the value of the data field specified in the ClientIDRowSuffix property is added at the end. For the GridView control, multiple data fields can be specified. If the ClientIDRowSuffix property is blank, a sequential number is added at the end instead of a data-field value. Each segment is separated by an underscore character (_). The key that makes this value a bit confusing is that it relies on the parent NamingContainer’s ClientID to build its own ClientID value. This effectively means that the value is not predictable at all but rather very tightly coupled to the parent naming container’s ClientIDMode setting. For my simple textbox example, if the ClientIDMode property of the parent naming container (Page in this case) is set to “Predictable” you’ll get this: <input name="ctl00$content$txtName" type="text"         id="content_txtName" /> which gives an id that based on walking up to the currently active naming container (the MasterPage content container) and starting the id formatting from there downward. Think of this as a semi unique name that’s guaranteed unique only for the naming container. If, on the other hand, the Page is set to “AutoID” you get the following with Predictable on txtName: <input name="ctl00$content$txtName" type="text"         id="ctl00_content_txtName" /> The latter is effectively the same as if you specified AutoID because it inherits the AutoID naming from the Page and Content Master Page control of the page. But again - predictable behavior always depends on the parent naming container and how it generates its id, so the id may not always be exactly the same as the AutoID generated value because somewhere in the NamingContainer chain the ClientIDMode setting may be set to a different value. For example, if you had another naming container in the middle that was set to Static you’d end up effectively with an id that starts with the NamingContainers id rather than the whole ctl000_content munging. The most common use for Predictable is likely to be for data-bound controls, which results in each data bound item getting a unique ClientID. Unfortunately, even here the behavior can be very unpredictable depending on which data-bound control you use - I found significant differences in how template controls in a GridView behave from those that are used in a ListView control. For example, GridView creates clean child ClientIDs, while ListView still has a naming container in the ClientID, presumably because of the template container on which you can’t set ClientIDMode. Predictable is useful, but only if all naming containers down the chain use this setting. Otherwise you’re right back to the munged ids that are pretty unpredictable. 
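One practical way to keep an entire naming container on Predictable without touching every hosting page is to set the mode in code. This is a small illustrative sketch rather than anything from the article (the control name is made up), but Control.ClientIDMode is settable at runtime, so a user control can opt its children in during OnInit:

using System;
using System.Web.UI;

// Hypothetical user control that forces deterministic ids for its children,
// regardless of how the hosting page is configured.
public partial class AddressEditor : UserControl
{
    protected override void OnInit(EventArgs e)
    {
        base.OnInit(e);
        // Children left on the default Inherit setting now resolve to Predictable.
        this.ClientIDMode = ClientIDMode.Predictable;
    }
}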
Another property, ClientIDRowSuffix, can be used in combination with ClientIDMode of Predictable to force a suffix onto list client controls. For example: <asp:GridView runat="server" ID="gvItems"              AutoGenerateColumns="false"             ClientIDMode="Static"              ClientIDRowSuffix="Id">     <Columns>     <asp:TemplateField>         <ItemTemplate>             <asp:Label runat="server" id="txtName"                        Text='<%# Eval("Name") %>'                   ClientIDMode="Predictable"/>         </ItemTemplate>     </asp:TemplateField>     <asp:TemplateField>         <ItemTemplate>         <asp:Label runat="server" id="txtId"                     Text='<%# Eval("Id") %>'                     ClientIDMode="Predictable" />         </ItemTemplate>     </asp:TemplateField>     </Columns>  </asp:GridView> generates client Ids inside of a column in the master page described earlier: <td>     <span id="txtName_0">Rick</span> </td> where the value after the underscore is the ClientIDRowSuffix field - in this case “Id” of the item data bound to the control. Note that all of the child controls require ClientIDMode=”Predictable” in order for the ClientIDRowSuffix to be applied, and the parent GridView controls need to be set to Static either explicitly or via Naming Container inheritance to give these simple names. It’s a bummer that ClientIDRowSuffix doesn’t work with Static to produce this automatically. Another real problem is that other controls process the ClientIDMode differently. For example, a ListView control processes the Predictable ClientIDMode differently and produces the following with the Static ListView and Predictable child controls: <span id="ctrl0_txtName_0">Rick</span> I couldn’t even figure out a way using ClientIDMode to get a simple ID that also uses a suffix short of falling back to manually generated ids using <%= %> expressions instead. Given the inconsistencies inside of list controls using <%= %>, ids for the ListView might not be a bad idea anyway. Inherit The final setting is Inherit, which is the default for all controls except Page. This means that controls by default inherit the parent naming container’s ClientIDMode setting. For more detailed information on ClientID behavior and different scenarios you can check out a blog post of mine on this subject: http://www.west-wind.com/weblog/posts/54760.aspx. ClientID Enhancements Summary The ClientIDMode property is a welcome addition to ASP.NET 4.0. To me this is probably the most useful WebForms feature as it allows me to generate clean IDs simply by setting ClientIDMode="Static" on either the page or inside of Web.config (in the Pages section) which applies the setting down to the entire page which is my 95% scenario. For the few cases when it matters - for list controls and inside of multi-use user controls or custom server controls) - I can use Predictable or even AutoID to force controls to unique names. For application-level page development, this is easy to accomplish and provides maximum usability for working with client script code against page controls. ViewStateMode Another area of large criticism for WebForms is ViewState. ViewState is used internally by ASP.NET to persist page-level changes to non-postback properties on controls as pages post back to the server. 
It’s a useful mechanism that works great for the overall mechanics of WebForms, but it can also cause all sorts of overhead for page operation as ViewState can very quickly get out of control and consume huge amounts of bandwidth in your page content. ViewState can also wreak havoc with client-side scripting applications that modify control properties that are tracked by ViewState, which can produce very unpredictable results on a Postback after client-side updates. Over the years in my own development, I’ve often turned off ViewState on pages to reduce overhead. Yes, you lose some functionality, but you can easily implement most of the common functionality in non-ViewState workarounds. Relying less on heavy ViewState controls and sticking with simpler controls or raw HTML constructs avoids getting around ViewState problems. In ASP.NET 3.x and prior, it wasn’t easy to control ViewState - you could turn it on or off and if you turned it off at the page or web.config level, you couldn’t turn it back on for specific controls. In short, it was an all or nothing approach. With ASP.NET 4.0, the new ViewStateMode property gives you more control. It allows you to disable ViewState globally either on the page or web.config level and then turn it back on for specific controls that might need it. ViewStateMode only works when EnableViewState="true" on the page or web.config level (which is the default). You can then use ViewStateMode of Disabled, Enabled or Inherit to control the ViewState settings on the page. If you’re shooting for minimal ViewState usage, the ideal situation is to set ViewStateMode to disabled on the Page or web.config level and only turn it back on particular controls: <%@Page Language="C#"      CodeBehind="WebForm2.aspx.cs"     Inherits="Westwind.WebStore.WebForm2"        ClientIDMode="Static"                ViewStateMode="Disabled"     EnableViewState="true"  %> <!-- this control has viewstate  --> <asp:TextBox runat="server" ID="txtName"  ViewStateMode="Enabled" />       <!-- this control has no viewstate - it inherits  from parent container --> <asp:TextBox runat="server" ID="txtAddress" /> Note that the EnableViewState="true" at the Page level isn’t required since it’s the default, but it’s important that the value is true. ViewStateMode has no effect if EnableViewState="false" at the page level. The main benefit of ViewStateMode is that it allows you to more easily turn off ViewState for most of the page and enable only a few key controls that might need it. For me personally, this is a perfect combination as most of my WebForm apps can get away without any ViewState at all. But some controls - especially third party controls - often don’t work well without ViewState enabled, and now it’s much easier to selectively enable controls rather than the old way, which required you to pretty much turn off ViewState for all controls that you didn’t want ViewState on. Inline HTML Encoding HTML encoding is an important feature to prevent cross-site scripting attacks in data entered by users on your site. In order to make it easier to create HTML encoded content, ASP.NET 4.0 introduces a new Expression syntax using <%: %> to encode string values. 
The encoding expression syntax looks like this: <%: "<script type='text/javascript'>" +     "alert('Really?');</script>" %> which produces properly encoded HTML: &lt;script type=&#39;text/javascript&#39; &gt;alert(&#39;Really?&#39;);&lt;/script&gt; Effectively this is a shortcut to: <%= HttpUtility.HtmlEncode( "<script type='text/javascript'>" + "alert('Really?');</script>") %> Of course the <%: %> syntax can also evaluate expressions just like <%= %> so the more common scenario applies this expression syntax against data your application is displaying. Here’s an example displaying some data model values: <%: Model.Address.Street %> This snippet shows displaying data from your application’s data store or more importantly, from data entered by users. Anything that makes it easier and less verbose to HtmlEncode text is a welcome addition to avoid potential cross-site scripting attacks. Although I listed Inline HTML Encoding here under WebForms, anything that uses the WebForms rendering engine including ASP.NET MVC, benefits from this feature. ScriptManager Enhancements The ASP.NET ScriptManager control in the past has introduced some nice ways to take programmatic and markup control over script loading, but there were a number of shortcomings in this control. The ASP.NET 4.0 ScriptManager has a number of improvements that make it easier to control script loading and addresses a few of the shortcomings that have often kept me from using the control in favor of manual script loading. The first is the AjaxFrameworkMode property which finally lets you suppress loading the ASP.NET AJAX runtime. Disabled doesn’t load any ASP.NET AJAX libraries, but there’s also an Explicit mode that lets you pick and choose the library pieces individually and reduce the footprint of ASP.NET AJAX script included if you are using the library. There’s also a new EnableCdn property that forces any script that has a new WebResource attribute CdnPath property set to a CDN supplied URL. If the script has this Attribute property set to a non-null/empty value and EnableCdn is enabled on the ScriptManager, that script will be served from the specified CdnPath. [assembly: WebResource(    "Westwind.Web.Resources.ww.jquery.js",    "application/x-javascript",    CdnPath =  "http://mysite.com/scripts/ww.jquery.min.js")] Cool, but a little too static for my taste since this value can’t be changed at runtime to point at a debug script as needed, for example. Assembly names for loading scripts from resources can now be simple names rather than fully qualified assembly names, which make it less verbose to reference scripts from assemblies loaded from your bin folder or the assembly reference area in web.config: <asp:ScriptManager runat="server" id="Id"          EnableCdn="true"         AjaxFrameworkMode="disabled">     <Scripts>         <asp:ScriptReference          Name="Westwind.Web.Resources.ww.jquery.js"         Assembly="Westwind.Web" />     </Scripts>        </asp:ScriptManager> The ScriptManager in 4.0 also supports script combining via the CompositeScript tag, which allows you to very easily combine scripts into a single script resource served via ASP.NET. Even nicer: You can specify the URL that the combined script is served with. 
Check out the following script manager markup that combines several static file scripts and a script resource into a single ASP.NET served resource from a static URL (allscripts.js): <asp:ScriptManager runat="server" id="Id"          EnableCdn="true"         AjaxFrameworkMode="disabled">     <CompositeScript          Path="~/scripts/allscripts.js">         <Scripts>             <asp:ScriptReference                    Path="~/scripts/jquery.js" />             <asp:ScriptReference                    Path="~/scripts/ww.jquery.js" />             <asp:ScriptReference            Name="Westwind.Web.Resources.editors.js"                 Assembly="Westwind.Web" />         </Scripts>     </CompositeScript> </asp:ScriptManager> When you render this into HTML, you’ll see a single script reference in the page: <script src="scripts/allscripts.debug.js"          type="text/javascript"></script> All you need to do to make this work is ensure that allscripts.js and allscripts.debug.js exist in the scripts folder of your application - they can be empty but the file has to be there. This is pretty cool, but you want to be real careful that you use unique URLs for each combination of scripts you combine or else browser and server caching will easily screw you up royally. The script manager also allows you to override native ASP.NET AJAX scripts now as any script references defined in the Scripts section of the ScriptManager trump internal references. So if you want custom behavior or you want to fix a possible bug in the core libraries that normally are loaded from resources, you can now do this simply by referencing the script resource name in the Name property and pointing at System.Web for the assembly. Not a common scenario, but when you need it, it can come in real handy. Still, there are a number of shortcomings in this control. For one, the ScriptManager and ClientScript APIs still have no common entry point so control developers are still faced with having to check and support both APIs to load scripts so that controls can work on pages that do or don’t have a ScriptManager on the page. The CdnUrl is static and compiled in, which is very restrictive. And finally, there’s still no control over where scripts get loaded on the page - ScriptManager still injects scripts into the middle of the HTML markup rather than in the header or optionally the footer. This, in turn, means there is little control over script loading order, which can be problematic for control developers. MetaDescription, MetaKeywords Page Properties There are also a number of additional Page properties that correspond to some of the other features discussed in this column: ClientIDMode, ClientTarget and ViewStateMode. Another minor but useful feature is that you can now directly access the MetaDescription and MetaKeywords properties on the Page object to set the corresponding meta tags programmatically. Updating these values programmatically previously required either <%= %> expressions in the page markup or dynamic insertion of literal controls into the page. 
You can now just set these properties programmatically on the Page object in any Control derived class on the page or the Page itself: Page.MetaKeywords = "ASP.NET,4.0,New Features"; Page.MetaDescription = "This article discusses the new features in ASP.NET 4.0"; Note, that there’s no corresponding ASP.NET tag for the HTML Meta element, so the only way to specify these values in markup and access them is via the @Page tag: <%@Page Language="C#"      CodeBehind="WebForm2.aspx.cs"     Inherits="Westwind.WebStore.WebForm2"      ClientIDMode="Static"                MetaDescription="Article that discusses what's                      new in ASP.NET 4.0"     MetaKeywords="ASP.NET,4.0,New Features" %> Nothing earth shattering but quite convenient. Visual Studio 2010 Enhancements for Web Development For Web development there are also a host of editor enhancements in Visual Studio 2010. Some of these are not Web specific but they are useful for Web developers in general. Text Editors Throughout Visual Studio 2010, the text editors have all been updated to a new core engine based on WPF which provides some interesting new features for various code editors including the nice ability to zoom in and out with Ctrl-MouseWheel to quickly change the size of text. There are many more API options to control the editor and although Visual Studio 2010 doesn’t yet use many of these features, we can look forward to enhancements in add-ins and future editor updates from the various language teams that take advantage of the visual richness that WPF provides to editing. On the negative side, I’ve noticed that occasionally the code editor and especially the HTML and JavaScript editors will lose the ability to use various navigation keys like arrows, back and delete keys, which requires closing and reopening the documents at times. This issue seems to be well documented so I suspect this will be addressed soon with a hotfix or within the first service pack. Overall though, the code editors work very well, especially given that they were re-written completely using WPF, which was one of my big worries when I first heard about the complete redesign of the editors. Multi-Targeting Visual Studio now targets all versions of the .NET framework from 2.0 forward. You can use Visual Studio 2010 to work on your ASP.NET 2, 3.0 and 3.5 applications which is a nice way to get your feet wet with the new development environment without having to make changes to existing applications. It’s nice to have one tool to work in for all the different versions. Multi-Monitor Support One cool feature of Visual Studio 2010 is the ability to drag windows out of the Visual Studio environment and out onto the desktop including onto another monitor easily. Since Web development often involves working with a host of designers at the same time - visual designer, HTML markup window, code behind and JavaScript editor - it’s really nice to be able to have a little more screen real estate to work on each of these editors. Microsoft made a welcome change in the environment. IntelliSense Snippets for HTML and JavaScript Editors The HTML and JavaScript editors now finally support IntelliSense scripts to create macro-based template expansions that have been in the core C# and Visual Basic code editors since Visual Studio 2005. Snippets allow you to create short XML-based template definitions that can act as static macros or real templates that can have replaceable values that can be embedded into the expanded text. 
The XML syntax for these snippets is straight forward and it’s pretty easy to create custom snippets manually. You can easily create snippets using XML and store them in your custom snippets folder (C:\Users\rstrahl\Documents\Visual Studio 2010\Code Snippets\Visual Web Developer\My HTML Snippets and My JScript Snippets), but it helps to use one of the third-party tools that exist to simplify the process for you. I use SnippetEditor, by Bill McCarthy, which makes short work of creating snippets interactively (http://snippeteditor.codeplex.com/). Note: You may have to manually add the Visual Studio 2010 User specific Snippet folders to this tool to see existing ones you’ve created. Code snippets are some of the biggest time savers and HTML editing more than anything deals with lots of repetitive tasks that lend themselves to text expansion. Visual Studio 2010 includes a slew of built-in snippets (that you can also customize!) and you can create your own very easily. If you haven’t done so already, I encourage you to spend a little time examining your coding patterns and find the repetitive code that you write and convert it into snippets. I’ve been using CodeRush for this for years, but now you can do much of the basic expansion natively for HTML and JavaScript snippets. jQuery Integration Is Now Native jQuery is a popular JavaScript library and recently Microsoft has recently stated that it will become the primary client-side scripting technology to drive higher level script functionality in various ASP.NET Web projects that Microsoft provides. In Visual Studio 2010, the default full project template includes jQuery as part of a new project including the support files that provide IntelliSense (-vsdoc files). IntelliSense support for jQuery is now also baked into Visual Studio 2010, so unlike Visual Studio 2008 which required a separate download, no further installs are required for a rich IntelliSense experience with jQuery. Summary ASP.NET 4.0 brings many useful improvements to the platform, but thankfully most of the changes are incremental changes that don’t compromise backwards compatibility and they allow developers to ease into the new features one feature at a time. None of the changes in ASP.NET 4.0 or Visual Studio 2010 are monumental or game changers. The bigger features are language and .NET Framework changes that are also optional. This ASP.NET and tools release feels more like fine tuning and getting some long-standing kinks worked out of the platform. It shows that the ASP.NET team is dedicated to paying attention to community feedback and responding with changes to the platform and development environment based on this feedback. If you haven’t gotten your feet wet with ASP.NET 4.0 and Visual Studio 2010, there’s no reason not to give it a shot now - the ASP.NET 4.0 platform is solid and Visual Studio 2010 works very well for a brand new release. Check it out. © Rick Strahl, West Wind Technologies, 2005-2010Posted in ASP.NET  

    Read the article

  • 256 Windows Azure Worker Roles, Windows Kinect and a 90's Text-Based Ray-Tracer

    - by Alan Smith
    For a couple of years I have been demoing a simple render farm hosted in Windows Azure using worker roles and the Azure Storage service. At the start of the presentation I deploy an Azure application that uses 16 worker roles to render a 1,500 frame 3D ray-traced animation. At the end of the presentation, when the animation was complete, I would play the animation and delete the Azure deployment. The standing joke with the audience was that it was a “$2 demo”, as the compute charges for running the 16 instances for an hour were $1.92; factor in the bandwidth charges and it’s a couple of dollars. The point of the demo is that it highlights one of the great benefits of cloud computing: you pay for what you use, and if you need massive compute power for a short period of time, using Windows Azure can work out very cost effective. The “$2 demo” was great for presenting at user groups and conferences in that it could be deployed to Azure, used to render an animation, and then removed in a one hour session. I have always had the idea of doing something a bit more impressive with the demo, and scaling it from a “$2 demo” to a “$30 demo”. The challenge was to create a visually appealing animation in high definition format and keep the demo time down to one hour. This article will take a run through how I achieved this.
Ray Tracing
Ray tracing, a technique for generating high quality photorealistic images, gained popularity in the 90’s with companies like Pixar creating feature length computer animations, and also the emergence of shareware text-based ray tracers that could run on a home PC. In order to render a ray traced image, the ray of light that would pass from the view point must be tracked until it intersects with an object. At the intersection, the color, reflectiveness, transparency, and refractive index of the object are used to calculate if the ray will be reflected or refracted. Each pixel may require thousands of calculations to determine what color it will be in the rendered image.
Pin-Board Toys
Having very little artistic talent and a basic understanding of maths, I decided to focus on an animation that could be modeled fairly easily and would look visually impressive. I’ve always liked the pin-board desktop toys that became popular in the 80’s, and when I was working as a 3D animator back in the 90’s I always had the idea of creating a 3D ray-traced animation of a pin-board, but never found the energy to do it. Even if I had a go at it, the render time to produce an animation that would look respectable on a 486 would have been measured in months.
PolyRay
Back in 1995 I landed my first real job, after spending three years being a beach-ski-climbing-paragliding-bum, and was employed to create 3D ray-traced animations for a CD-ROM that school kids would use to learn physics. I had got into the strange and wonderful world of text-based ray tracing, and was using a shareware ray-tracer called PolyRay. PolyRay takes a text file describing a scene as input and, after a few hours processing on a 486, produces a high quality ray-traced image. The following is an example of a basic PolyRay scene file.
background Midnight_Blue   static define matte surface { ambient 0.1 diffuse 0.7 } define matte_white texture { matte { color white } } define matte_black texture { matte { color dark_slate_gray } } define position_cylindrical 3 define lookup_sawtooth 1 define light_wood <0.6, 0.24, 0.1> define median_wood <0.3, 0.12, 0.03> define dark_wood <0.05, 0.01, 0.005>     define wooden texture { noise surface { ambient 0.2  diffuse 0.7  specular white, 0.5 microfacet Reitz 10 position_fn position_cylindrical position_scale 1  lookup_fn lookup_sawtooth octaves 1 turbulence 1 color_map( [0.0, 0.2, light_wood, light_wood] [0.2, 0.3, light_wood, median_wood] [0.3, 0.4, median_wood, light_wood] [0.4, 0.7, light_wood, light_wood] [0.7, 0.8, light_wood, median_wood] [0.8, 0.9, median_wood, light_wood] [0.9, 1.0, light_wood, dark_wood]) } } define glass texture { surface { ambient 0 diffuse 0 specular 0.2 reflection white, 0.1 transmission white, 1, 1.5 }} define shiny surface { ambient 0.1 diffuse 0.6 specular white, 0.6 microfacet Phong 7  } define steely_blue texture { shiny { color black } } define chrome texture { surface { color white ambient 0.0 diffuse 0.2 specular 0.4 microfacet Phong 10 reflection 0.8 } }   viewpoint {     from <4.000, -1.000, 1.000> at <0.000, 0.000, 0.000> up <0, 1, 0> angle 60     resolution 640, 480 aspect 1.6 image_format 0 }       light <-10, 30, 20> light <-10, 30, -20>   object { disc <0, -2, 0>, <0, 1, 0>, 30 wooden }   object { sphere <0.000, 0.000, 0.000>, 1.00 chrome } object { cylinder <0.000, 0.000, 0.000>, <0.000, 0.000, -4.000>, 0.50 chrome }   After setting up the background and defining colors and textures, the viewpoint is specified. The “camera” is located at a point in 3D space, and it looks towards another point. The angle, image resolution, and aspect ratio are specified. Two lights are present in the image at defined coordinates. The three objects in the image are a wooden disc to represent a table top, and a sphere and cylinder that intersect to form a pin that will be used for the pin board toy in the final animation. When the image is rendered, the following image is produced. The pins are modeled with a chrome surface, so they reflect the environment around them. Note that the scale of the pin shaft is not correct, this will be fixed later. Modeling the Pin Board The frame of the pin-board is made up of three boxes, and six cylinders, the front box is modeled using a clear, slightly reflective solid, with the same refractive index of glass. The other shapes are modeled as metal. object { box <-5.5, -1.5, 1>, <5.5, 5.5, 1.2> glass } object { box <-5.5, -1.5, -0.04>, <5.5, 5.5, -0.09> steely_blue } object { box <-5.5, -1.5, -0.52>, <5.5, 5.5, -0.59> steely_blue } object { cylinder <-5.2, -1.2, 1.4>, <-5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, -1.2, 1.4>, <5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <-5.2, 5.2, 1.4>, <-5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, 5.2, 1.4>, <5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <0, -1.2, 1.4>, <0, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <0, 5.2, 1.4>, <0, 5.2, -0.74>, 0.2 steely_blue }   In order to create the matrix of pins that make up the pin board I used a basic console application with a few nested loops to create two intersecting matrixes of pins, which models the layout used in the pin boards. The resulting image is shown below. The pin board contains 11,481 pins, with the scene file containing 23,709 lines of code. 
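Those per-pin object lines are exactly the sort of output a small console generator produces. The sketch below is illustrative only and is not the author's actual tool: the spacing, loop counts and pin radii are guesses, and only the PolyRay object syntax is taken from the sample scene above.

using System;
using System.Globalization;
using System.IO;
using System.Text;

// Rough sketch of the console app described above: nested loops that emit one
// sphere + cylinder pair per pin as PolyRay "object" lines. Two offset passes
// produce the two intersecting matrixes of pins.
class PinBoardGenerator
{
    static void Main()
    {
        const double spacing = 0.11;            // assumed distance between pin centres
        var sb = new StringBuilder();

        for (int layer = 0; layer < 2; layer++) // two offset, intersecting matrixes
        {
            double offset = layer * spacing / 2.0;
            for (double y = -1.2 + offset; y <= 5.2; y += spacing)
            {
                for (double x = -5.2 + offset; x <= 5.2; x += spacing)
                {
                    // z is 0 here; per frame it is overwritten from the depth image
                    // to slide the pin out of the board.
                    sb.AppendFormat(CultureInfo.InvariantCulture,
                        "object {{ sphere <{0:0.000}, {1:0.000}, 0.000>, 0.05 chrome }}\n", x, y);
                    sb.AppendFormat(CultureInfo.InvariantCulture,
                        "object {{ cylinder <{0:0.000}, {1:0.000}, 0.000>, <{0:0.000}, {1:0.000}, -0.400>, 0.025 chrome }}\n", x, y);
                }
            }
        }

        File.WriteAllText("pins.pi", sb.ToString());
    }
}

Emitting one complete scene file per frame keeps each render job self-contained, which is what later allows the frames to be handed out to independent worker roles.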
For the complete animation 2,000 scene files will be created, which is over 47 million lines of code. Each pin in the pin-board will slide out a specific distance when an object is pressed into the back of the board. This is easily modeled by setting the Z coordinate of the pin to a specific value. In order to set all of the pins in the pin-board to the correct position, a bitmap image can be used. The position of the pin can be set based on the color of the pixel at the appropriate position in the image. When the Windows Azure logo is used to set the Z coordinate of the pins, the following image is generated. The challenge now was to make a cool animation. The Azure Logo is fine, but it is static. Using a normal video to animate the pins would not work; the colors in the video would not be the same as the depth of the objects from the camera. In order to simulate the pin board accurately a series of frames from a depth camera could be used.
Windows Kinect
The Kinect controllers for the X-Box 360 and Windows feature a depth camera. The Kinect SDK for Windows provides a programming interface for Kinect, providing easy access for .NET developers to the Kinect sensors. The Kinect Explorer provided with the Kinect SDK is a great starting point for exploring Kinect from a developer's perspective. Both the X-Box 360 Kinect and the Windows Kinect will work with the Kinect SDK; the Windows Kinect is required for commercial applications, but the X-Box Kinect can be used for hobby projects. The Windows Kinect has the advantage of providing a mode to allow depth capture with objects closer to the camera, which makes for a more accurate depth image for setting the pin positions.
Creating a Depth Field Animation
The depth field animation used to set the positions of the pins in the pin board was created using a modified version of the Kinect Explorer sample application. In order to simulate the pin board accurately, a small section of the depth range from the depth sensor will be used. Any part of the object in front of the depth range will result in a white pixel; anything behind the depth range will be black. Within the depth range the pixels in the image will be set to RGB values from 0,0,0 to 255,255,255. A screen shot of the modified Kinect Explorer application is shown below. The Kinect Explorer sample application was modified to include slider controls that are used to set the depth range that forms the image from the depth stream. This allows the fine tuning of the depth image that is required for simulating the position of the pins in the pin board. The Kinect Explorer was also modified to record a series of images from the depth camera and save them as a sequence of JPEG files that will be used to animate the pins in the animation; the Start and Stop buttons are used to start and stop the image recording. An example of one of the depth images is shown below. Once a series of 2,000 depth images has been captured, the task of creating the animation can begin.
Rendering a Test Frame
In order to test the creation of frames and get an approximation of the time required to render each frame, a test frame was rendered on-premise using PolyRay. The output of the rendering process is shown below. The test frame contained 23,629 primitive shapes, most of which are the spheres and cylinders that are used for the 11,800 or so pins in the pin board.
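As a brief aside before the render-time numbers, the depth-range windowing described above boils down to a simple mapping from a depth sample to a gray value. The following is only a sketch under assumed near/far values, not the modified Kinect Explorer code itself:

using System;

static class DepthWindow
{
    // Map a Kinect depth sample (millimetres) to the 0-255 gray value used to
    // drive the pin positions: white in front of the window, black behind it,
    // and a linear ramp inside it. The near/far defaults are illustrative.
    public static byte ToGray(int depthMm, int nearMm = 800, int farMm = 1200)
    {
        if (depthMm <= nearMm) return 255;                       // in front of the range -> white
        if (depthMm >= farMm) return 0;                          // behind the range -> black
        double t = (double)(farMm - depthMm) / (farMm - nearMm); // 1.0 at near, 0.0 at far
        return (byte)Math.Round(t * 255);                        // closer -> brighter -> pin pushed further out
    }
}

Per frame, that gray value would then be scaled into the Z offset written into each pin's sphere and cylinder object lines.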
The 1280x720 image contains 921,600 pixels, but as anti-aliasing was used the number of rays that were calculated was 4,235,777, with 3,478,754,073 object boundaries checked. The test frame of the pin board with the depth field image applied is shown below. The tracing time for the test frame was 4 minutes 27 seconds, which means rendering the 2,000 frames in the animation would take over 148 hours, or a little over 6 days. Although this is much faster than an old 486, waiting almost a week to see the results of an animation would make it challenging for animators to create, view, and refine their animations. It would be much better if the animation could be rendered in less than one hour.

Windows Azure Worker Roles

The cost of creating an on-premise render farm to render animations increases in proportion to the number of servers. The table below shows the cost of servers for creating a render farm, assuming a cost of $500 per server.

Number of Servers    Cost
1                    $500
16                   $8,000
256                  $128,000

As well as the cost of the servers, there would be additional costs for networking, racks, etc. Hosting an environment of 256 servers on-premise would require a server room with cooling, and some pretty hefty power cabling.

The Windows Azure compute services provide worker roles, which are ideal for performing processor-intensive compute tasks. With the scalability available in Windows Azure, a job that takes 256 hours to complete could be performed using different numbers of worker roles. The time and cost of using 1, 16 or 256 worker roles is shown below.

Number of Worker Roles    Render Time    Cost
1                         256 hours      $30.72
16                        16 hours       $30.72
256                       1 hour         $30.72

Using worker roles in Windows Azure results in the same cost for the 256-hour job, irrespective of the number of worker roles used. Provided the compute task can be broken down into many small units, and the worker role compute power can be used effectively, it makes sense to scale the application so that the task is completed quickly, making the results available in a timely fashion. The task of rendering 2,000 frames in an animation is one that can easily be broken down into 2,000 individual pieces, which can be performed by a number of worker roles.

Creating a Render Farm in Windows Azure

The architecture of the render farm is shown in the following diagram. The render farm is a hybrid application with the following components:

·         On-Premise
o   Windows Kinect – Used in combination with the Kinect Explorer to create a stream of depth images.
o   Animation Creator – This application uses the depth images from the Kinect sensor to create scene description files for PolyRay. These files are then uploaded to the jobs blob container, and job messages added to the jobs queue (a sketch of this submission step is shown at the end of this section).
o   Process Monitor – This application queries the role instance lifecycle table and displays statistics about the render farm environment and render process.
o   Image Downloader – This application polls the image queue and downloads the rendered animation files once they are complete.
·         Windows Azure
o   Azure Storage – Queues and blobs are used for the scene description files and completed frames. A table is used to store the statistics about the rendering environment.

The architecture of each worker role is shown below.

The worker role is configured to use local storage, which provides file storage on the worker role instance that can be used by the applications to render the image and transform the format of the image.
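Before looking at the worker role itself, the Animation Creator's submission step referenced above can be sketched roughly as follows. It reuses the JobQueue and CloudRayBlob helper classes that appear in the worker role code later in this article; the scene file naming convention and the local scene folder are assumptions for illustration, not details from the original application.

// Hypothetical submission loop for the Animation Creator (sketch only).
using System.IO;

public static class JobSubmitter
{
    public static void SubmitJobs(string sceneFolder, int frameCount)
    {
        JobQueue jobQueue = new JobQueue("renderjobs");
        CloudRayBlob sceneBlob = new CloudRayBlob("scenes");

        for (int frame = 1; frame <= frameCount; frame++)
        {
            // Scene files are assumed to be named frame0001.pi, frame0002.pi, ...
            string sceneFile = string.Format("frame{0:0000}.pi", frame);
            string sceneFilePath = Path.Combine(sceneFolder, sceneFile);

            // Upload the scene description to the scenes blob container,
            // then queue a job message containing the file name.
            sceneBlob.UploadFile(sceneFilePath);
            jobQueue.Add(sceneFile);
        }
    }
}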
The service definition for the worker role with the local storage configuration highlighted is shown below.

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="CloudRay" >
  <WorkerRole name="CloudRayWorkerRole" vmsize="Small">
    <Imports>
    </Imports>
    <ConfigurationSettings>
      <Setting name="DataConnectionString" />
    </ConfigurationSettings>
    <LocalResources>
      <LocalStorage name="RayFolder" cleanOnRoleRecycle="true" />
    </LocalResources>
  </WorkerRole>
</ServiceDefinition>

The two executable programs, PolyRay.exe and DTA.exe, are included in the Azure project with the Copy to Output Directory property set to Copy Always. PolyRay will take the scene description file and render it to a Truevision TGA file. As the TGA format has not seen much use since the mid 90’s, it is converted to a JPG image using Dave's Targa Animator, another shareware application from the 90’s.

Each worker role will use the following process to render the animation frames.

1. The worker process polls the job queue; if a job is available, the scene description file is downloaded from blob storage to local storage.
2. PolyRay.exe is started in a process with the appropriate command line arguments to render the image as a TGA file.
3. DTA.exe is started in a process with the appropriate command line arguments to convert the TGA file to a JPG file.
4. The JPG file is uploaded from local storage to the images blob container.
5. A message is placed on the images queue to indicate a new image is available for download.
6. The job message is deleted from the job queue.
7. The role instance lifecycle table is updated with statistics on the number of frames rendered by the worker role instance, and the CPU time used.

The code for this is shown below.

public override void Run()
{
    // Set environment variables
    string polyRayPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), PolyRayLocation);
    string dtaPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), DTALocation);

    LocalResource rayStorage = RoleEnvironment.GetLocalResource("RayFolder");
    string localStorageRootPath = rayStorage.RootPath;

    JobQueue jobQueue = new JobQueue("renderjobs");
    JobQueue downloadQueue = new JobQueue("renderimagedownloadjobs");
    CloudRayBlob sceneBlob = new CloudRayBlob("scenes");
    CloudRayBlob imageBlob = new CloudRayBlob("images");
    RoleLifecycleDataSource roleLifecycleDataSource = new RoleLifecycleDataSource();

    Frames = 0;

    while (true)
    {
        // Get the render job from the queue
        CloudQueueMessage jobMsg = jobQueue.Get();

        if (jobMsg != null)
        {
            // Get the file details
            string sceneFile = jobMsg.AsString;
            string tgaFile = sceneFile.Replace(".pi", ".tga");
            string jpgFile = sceneFile.Replace(".pi", ".jpg");

            string sceneFilePath = Path.Combine(localStorageRootPath, sceneFile);
            string tgaFilePath = Path.Combine(localStorageRootPath, tgaFile);
            string jpgFilePath = Path.Combine(localStorageRootPath, jpgFile);

            // Copy the scene file to local storage
            sceneBlob.DownloadFile(sceneFilePath);

            // Run the ray tracer.
            string polyrayArguments =
                string.Format("\"{0}\" -o \"{1}\" -a 2", sceneFilePath, tgaFilePath);
            Process polyRayProcess = new Process();
            polyRayProcess.StartInfo.FileName =
                Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), polyRayPath);
            polyRayProcess.StartInfo.Arguments = polyrayArguments;
            polyRayProcess.Start();
            polyRayProcess.WaitForExit();

            // Convert the image
            string dtaArguments =
                string.Format(" {0} /FJ /P{1}", tgaFilePath, Path.GetDirectoryName(jpgFilePath));
            Process dtaProcess = new Process();
            dtaProcess.StartInfo.FileName =
                Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), dtaPath);
            dtaProcess.StartInfo.Arguments = dtaArguments;
            dtaProcess.Start();
            dtaProcess.WaitForExit();

            // Upload the image to blob storage
            imageBlob.UploadFile(jpgFilePath);

            // Add a download job.
            downloadQueue.Add(jpgFile);

            // Delete the render job message
            jobQueue.Delete(jobMsg);

            Frames++;
        }
        else
        {
            Thread.Sleep(1000);
        }

        // Log the worker role activity.
        roleLifecycleDataSource.Alive
            ("CloudRayWorker", RoleLifecycleDataSource.RoleLifecycleId, Frames);
    }
}

Monitoring Worker Role Instance Lifecycle

In order to get more accurate statistics about the lifecycle of the worker role instances used to render the animation, data was tracked in an Azure storage table. The following class was used to track the worker role lifecycles in Azure storage.

public class RoleLifecycle : TableServiceEntity
{
    public string ServerName { get; set; }
    public string Status { get; set; }
    public DateTime StartTime { get; set; }
    public DateTime EndTime { get; set; }
    public long SecondsRunning { get; set; }
    public DateTime LastActiveTime { get; set; }
    public int Frames { get; set; }
    public string Comment { get; set; }

    public RoleLifecycle()
    {
    }

    public RoleLifecycle(string roleName)
    {
        PartitionKey = roleName;
        RowKey = Utils.GetAscendingRowKey();
        Status = "Started";
        StartTime = DateTime.UtcNow;
        LastActiveTime = StartTime;
        EndTime = StartTime;
        SecondsRunning = 0;
        Frames = 0;
    }
}

A new instance of this class is created and added to the storage table when the role starts. It is then updated each time the worker renders a frame to record the total number of frames rendered and the total processing time. These statistics are used by the monitoring application to determine how effectively the resources in the render farm are being used.

Rendering the Animation

The Azure solution was deployed to Windows Azure with the service configuration set to 16 worker role instances. This allows the application to be tested in the cloud environment and its performance to be determined. When I demo the application at conferences and user groups, I often start with 16 instances and then scale up the application to the full 256 instances. The configuration to run 16 instances is shown below.
<?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">   <Role name="CloudRayWorkerRole">     <Instances count="16" />     <ConfigurationSettings>       <Setting name="DataConnectionString"         value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />     </ConfigurationSettings>   </Role> </ServiceConfiguration>     About six minutes after deploying the application the first worker roles become active and start to render the first frames of the animation. The CloudRay Monitor application displays an icon for each worker role instance, with a number indicating the number of frames that the worker role has rendered. The statistics on the left show the number of active worker roles and statistics about the render process. The render time is the time since the first worker role became active; the CPU time is the total amount of processing time used by all worker role instances to render the frames.   Five minutes after the first worker role became active the last of the 16 worker roles activated. By this time the first seven worker roles had each rendered one frame of the animation.   With 16 worker roles u and running it can be seen that one hour and 45 minutes CPU time has been used to render 32 frames with a render time of just under 10 minutes.     At this rate it would take over 10 hours to render the 2,000 frames of the full animation. In order to complete the animation in under an hour more processing power will be required. Scaling the render farm from 16 instances to 256 instances is easy using the new management portal. The slider is set to 256 instances, and the configuration saved. We do not need to re-deploy the application, and the 16 instances that are up and running will not be affected. Alternatively, the configuration file for the Azure service could be modified to specify 256 instances.   <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">   <Role name="CloudRayWorkerRole">     <Instances count="256" />     <ConfigurationSettings>       <Setting name="DataConnectionString"         value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />     </ConfigurationSettings>   </Role> </ServiceConfiguration>     Six minutes after the new configuration has been applied 75 new worker roles have activated and are processing their first frames.   Five minutes later the full configuration of 256 worker roles is up and running. We can see that the average rate of frame rendering has increased from 3 to 12 frames per minute, and that over 17 hours of CPU time has been utilized in 23 minutes. In this test the time to provision 140 worker roles was about 11 minutes, which works out at about one every five seconds.   We are now half way through the rendering, with 1,000 frames complete. This has utilized just under three days of CPU time in a little over 35 minutes.   The animation is now complete, with 2,000 frames rendered in a little over 52 minutes. The CPU time used by the 256 worker roles is 6 days, 7 hours and 22 minutes with an average frame rate of 38 frames per minute. The rendering of the last 1,000 frames took 16 minutes 27 seconds, which works out at a rendering rate of 60 frames per minute. 
The frame counts in the server instances indicate that the use of a queue to distribute the workload has been very effective in spreading the load across the 256 worker role instances. The 16 instances that were deployed first have rendered between 11 and 13 frames each, whilst the 240 instances that were added when the application was scaled have rendered between 6 and 9 frames each.

Completed Animation

I’ve uploaded the completed animation to YouTube; a low-resolution preview is shown below.

Pin Board Animation Created using Windows Kinect and 256 Windows Azure Worker Roles

The animation can be viewed in 1280x720 resolution at the following link: http://www.youtube.com/watch?v=n5jy6bvSxWc

Effective Use of Resources

According to the CloudRay monitor statistics the animation took 6 days, 7 hours and 22 minutes of CPU time to render, which works out at 152 hours of compute time, rounded up to the nearest hour. As worker role instances are billed for the full hour, it may have been possible to render the animation using fewer than 256 worker roles. When deciding the optimal usage of resources, the time required to provision and start the worker roles must also be considered. In the demo I started with 16 worker roles and then scaled the application to 256 worker roles. It would have been more optimal to start the application with maybe 200 worker roles and utilize the full hour that I was being billed for. This would, however, have prevented showing the ease of scalability of the application. The new management portal displays the CPU usage across the worker roles in the deployment. The average CPU usage across all instances is 93.27%, with over 99% used when all the instances are up and running. This shows that the worker role resources are being used very effectively.

Grid Computing Scenarios

Although I am using this scenario for a hobby project, there are many scenarios where a large amount of compute power is required for a short period of time. Windows Azure provides a great platform for developing these types of grid computing applications, and can work out very cost effective.

·         Windows Azure can provide massive compute power, on demand, in a matter of minutes.
·         The use of queues to manage the load balancing of jobs between role instances is a simple and effective solution.
·         Using a cloud-computing platform like Windows Azure allows proof-of-concept scenarios to be tested and evaluated on a very low budget.
·         There are no charges for inbound data transfer, which makes the uploading of large data sets to Windows Azure Storage services cost effective. (Transaction charges still apply.)

Tips for using Windows Azure for Grid Computing Scenarios

I found a render farm a fairly simple scenario to implement using Windows Azure. I was impressed by the ease of scalability that Azure provides, and by the short time that the application took to scale from 16 to 256 worker role instances. In this case it was around 13 minutes; in other tests it took between 10 and 20 minutes. The following tips may be useful when implementing a grid computing project in Windows Azure.

·         Using an Azure Storage queue to load-balance the units of work across multiple worker roles is simple and very effective. The design I have used in this scenario could easily scale to many thousands of worker role instances.
·         Windows Azure accounts are typically limited to 20 cores.
If you need to use more than this, a call to support and a credit card check will be required.
·         Be aware of how the billing model works. You will be charged for worker role instances for the full clock hour in which the instance is deployed. Schedule the workload to start just after the clock hour has started.
·         Monitor the utilization of the resources you are provisioning, and ensure that you are not paying for worker roles that are idle.
·         If you are deploying third party applications to worker roles, you may well run into licensing issues. Purchasing software licenses on a per-processor basis when using hundreds of processors for a short time period would not be cost effective.
·         Third party software may also require installation onto the worker roles, which can be accomplished using start-up tasks (a minimal sketch is shown after this list). Bear in mind that adding a startup task and a possible reboot will add to the time required for the worker role instance to start and activate. An alternative may be to use a prepared VM and use VM roles.
·         Consider using the Windows Azure Autoscaling Application Block (WASABi) to autoscale the worker roles in your application. When using a large number of worker roles, the utilization must be carefully monitored; if the scaling algorithms are not optimal, it could get very expensive!
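As a hedged illustration of the startup-task tip above, the service definition could declare a task that installs the third-party software when the role instance starts. The install.cmd script name is an assumption for illustration; the Startup/Task element is part of the standard service definition schema, with executionContext="elevated" allowing the script to run with administrative rights.

<ServiceDefinition name="CloudRay" >
  <WorkerRole name="CloudRayWorkerRole" vmsize="Small">
    <Startup>
      <!-- install.cmd is a hypothetical script that installs the third-party software
           before the role's Run method starts. -->
      <Task commandLine="install.cmd" executionContext="elevated" taskType="simple" />
    </Startup>
  </WorkerRole>
</ServiceDefinition>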

    Read the article

  • Did I lose my RAID again?

    - by BarsMonster
Hi! A little history: 2 years ago I was really excited to find out that mdadm is so powerful that it can even reshape arrays, so you can start with a smaller array and then grow it as you need. I bought 3x1TB drives and made a RAID-5. It was fine for a year. Then I bought 2 more, and tried to reshape to RAID-6 out of 5 drives, and due to some mess with superblock versions, lost all content. Had to rebuild it from scratch, but 2TB of data were gone. Yesterday I bought 2 more drives, and this time I had everything: a properly built array, a UPS. I disabled the write intent map, added the 2 new drives as spares and ran a command to grow the array to 7 disks. It started working, but the speed was ridiculously slow, ~100kb/sec. After processing the first 37MB at such an amazing speed, one of the old HDDs failed. I properly shut down the PC and disconnected the failed drive. After bootup it appeared that it had recreated the intent map as it was still in the mdadm config, so I removed it from the config and rebooted again. Now all I see is that all mdadm processes deadlock and don't do anything.

PID  USER  PR  NI  VIRT   RES  SHR  S  %CPU  %MEM  TIME+    COMMAND
1937 root  20   0  12992  608  444  D     0   0.1  0:00.00  mdadm
2283 root  20   0  12992  852  704  D     0   0.1  0:00.01  mdadm
2287 root  20   0      0    0    0  D     0   0.0  0:00.01  md0_reshape
2288 root  18  -2  12992  820  676  D     0   0.1  0:00.01  mdadm

And all I see in mdstat is:

$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid6 sdb1[1] sdg1[4] sdf1[7] sde1[6] sdd1[0] sdc1[5]
      2929683456 blocks super 1.2 level 6, 1024k chunk, algorithm 2 [7/6] [UU_UUUU]
      [>....................]  reshape =  0.0% (37888/976561152) finish=567604147.2min speed=0K/sec

I've already tried mdadm 2.6.7, 3.1.4 and 3.2 - nothing helps. Did I lose my data again? Any suggestions on how I can make this work? OS is Ubuntu Server 10.04.2. PS. Needless to say, the data is inaccessible - I cannot mount /dev/md0 to save the most valuable data. You can see my disappointment - the very specific thing I was excited about failed twice, taking 5TB of my data with it.
Update: It appears there is some nice info in kern.log: 21:38:48 ...: [ 166.522055] raid5: reshape will continue 21:38:48 ...: [ 166.522085] raid5: device sdb1 operational as raid disk 1 21:38:48 ...: [ 166.522091] raid5: device sdg1 operational as raid disk 4 21:38:48 ...: [ 166.522097] raid5: device sdf1 operational as raid disk 5 21:38:48 ...: [ 166.522102] raid5: device sde1 operational as raid disk 6 21:38:48 ...: [ 166.522107] raid5: device sdd1 operational as raid disk 0 21:38:48 ...: [ 166.522111] raid5: device sdc1 operational as raid disk 3 21:38:48 ...: [ 166.523942] raid5: allocated 7438kB for md0 21:38:48 ...: [ 166.524041] 1: w=1 pa=2 pr=5 m=2 a=2 r=7 op1=0 op2=0 21:38:48 ...: [ 166.524050] 4: w=2 pa=2 pr=5 m=2 a=2 r=7 op1=0 op2=0 21:38:48 ...: [ 166.524056] 5: w=3 pa=2 pr=5 m=2 a=2 r=7 op1=0 op2=0 21:38:48 ...: [ 166.524062] 6: w=4 pa=2 pr=5 m=2 a=2 r=7 op1=0 op2=0 21:38:48 ...: [ 166.524068] 0: w=5 pa=2 pr=5 m=2 a=2 r=7 op1=0 op2=0 21:38:48 ...: [ 166.524073] 3: w=6 pa=2 pr=5 m=2 a=2 r=7 op1=0 op2=0 21:38:48 ...: [ 166.524079] raid5: raid level 6 set md0 active with 6 out of 7 devices, algorithm 2 21:38:48 ...: [ 166.524519] RAID5 conf printout: 21:38:48 ...: [ 166.524523] --- rd:7 wd:6 21:38:48 ...: [ 166.524528] disk 0, o:1, dev:sdd1 21:38:48 ...: [ 166.524532] disk 1, o:1, dev:sdb1 21:38:48 ...: [ 166.524537] disk 3, o:1, dev:sdc1 21:38:48 ...: [ 166.524541] disk 4, o:1, dev:sdg1 21:38:48 ...: [ 166.524545] disk 5, o:1, dev:sdf1 21:38:48 ...: [ 166.524550] disk 6, o:1, dev:sde1 21:38:48 ...: [ 166.524553] ...ok start reshape thread 21:38:48 ...: [ 166.524727] md0: detected capacity change from 0 to 2999995858944 21:38:48 ...: [ 166.524735] md: reshape of RAID array md0 21:38:48 ...: [ 166.524740] md: minimum _guaranteed_ speed: 1000 KB/sec/disk. 21:38:48 ...: [ 166.524745] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for reshape. 21:38:48 ...: [ 166.524756] md: using 128k window, over a total of 976561152 blocks. 21:39:05 ...: [ 166.525013] md0: 21:42:04 ...: [ 362.520063] INFO: task mdadm:1937 blocked for more than 120 seconds. 21:42:04 ...: [ 362.520068] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 21:42:04 ...: [ 362.520073] mdadm D 00000000ffffffff 0 1937 1 0x00000000 21:42:04 ...: [ 362.520083] ffff88002ef4f5d8 0000000000000082 0000000000015bc0 0000000000015bc0 21:42:04 ...: [ 362.520092] ffff88002eb5b198 ffff88002ef4ffd8 0000000000015bc0 ffff88002eb5ade0 21:42:04 ...: [ 362.520100] 0000000000015bc0 ffff88002ef4ffd8 0000000000015bc0 ffff88002eb5b198 21:42:04 ...: [ 362.520107] Call Trace: 21:42:04 ...: [ 362.520133] [<ffffffffa0224892>] get_active_stripe+0x312/0x3f0 [raid456] 21:42:04 ...: [ 362.520148] [<ffffffff81059ae0>] ? default_wake_function+0x0/0x20 21:42:04 ...: [ 362.520159] [<ffffffffa0228413>] make_request+0x243/0x4b0 [raid456] 21:42:04 ...: [ 362.520169] [<ffffffffa0221a90>] ? release_stripe+0x50/0x70 [raid456] 21:42:04 ...: [ 362.520179] [<ffffffff81084790>] ? autoremove_wake_function+0x0/0x40 21:42:04 ...: [ 362.520188] [<ffffffff81414df0>] md_make_request+0xc0/0x130 21:42:04 ...: [ 362.520194] [<ffffffff81414df0>] ? md_make_request+0xc0/0x130 21:42:04 ...: [ 362.520205] [<ffffffff8129f8c1>] generic_make_request+0x1b1/0x4f0 21:42:04 ...: [ 362.520214] [<ffffffff810f6515>] ? mempool_alloc_slab+0x15/0x20 21:42:04 ...: [ 362.520222] [<ffffffff8116c2ec>] ? 
alloc_buffer_head+0x1c/0x60 21:42:04 ...: [ 362.520230] [<ffffffff8129fc80>] submit_bio+0x80/0x110 21:42:04 ...: [ 362.520236] [<ffffffff8116c849>] submit_bh+0xf9/0x140 21:42:04 ...: [ 362.520244] [<ffffffff8116f124>] block_read_full_page+0x274/0x3b0 21:42:04 ...: [ 362.520251] [<ffffffff81172c90>] ? blkdev_get_block+0x0/0x70 21:42:04 ...: [ 362.520258] [<ffffffff8110d875>] ? __inc_zone_page_state+0x35/0x40 21:42:04 ...: [ 362.520265] [<ffffffff810f46d8>] ? add_to_page_cache_locked+0xe8/0x160 21:42:04 ...: [ 362.520272] [<ffffffff81173d78>] blkdev_readpage+0x18/0x20 21:42:04 ...: [ 362.520279] [<ffffffff810f484b>] __read_cache_page+0x7b/0xe0 21:42:04 ...: [ 362.520285] [<ffffffff81173d60>] ? blkdev_readpage+0x0/0x20 21:42:04 ...: [ 362.520290] [<ffffffff81173d60>] ? blkdev_readpage+0x0/0x20 21:42:04 ...: [ 362.520297] [<ffffffff810f57dc>] do_read_cache_page+0x3c/0x120 21:42:04 ...: [ 362.520304] [<ffffffff810f5909>] read_cache_page_async+0x19/0x20 21:42:04 ...: [ 362.520310] [<ffffffff810f591e>] read_cache_page+0xe/0x20 21:42:04 ...: [ 362.520317] [<ffffffff811a6cb0>] read_dev_sector+0x30/0xa0 21:42:04 ...: [ 362.520324] [<ffffffff811a7fcd>] amiga_partition+0x6d/0x460 21:42:04 ...: [ 362.520331] [<ffffffff811a7938>] check_partition+0x138/0x190 21:42:04 ...: [ 362.520338] [<ffffffff811a7a7a>] rescan_partitions+0xea/0x2f0 21:42:04 ...: [ 362.520344] [<ffffffff811744c7>] __blkdev_get+0x267/0x3d0 21:42:04 ...: [ 362.520350] [<ffffffff81174650>] ? blkdev_open+0x0/0xc0 21:42:04 ...: [ 362.520356] [<ffffffff81174640>] blkdev_get+0x10/0x20 21:42:04 ...: [ 362.520362] [<ffffffff811746c1>] blkdev_open+0x71/0xc0 21:42:04 ...: [ 362.520369] [<ffffffff811419f3>] __dentry_open+0x113/0x370 21:42:04 ...: [ 362.520377] [<ffffffff81253f8f>] ? security_inode_permission+0x1f/0x30 21:42:04 ...: [ 362.520385] [<ffffffff8114de3f>] ? inode_permission+0xaf/0xd0 21:42:04 ...: [ 362.520391] [<ffffffff81141d67>] nameidata_to_filp+0x57/0x70 21:42:04 ...: [ 362.520398] [<ffffffff8115207a>] do_filp_open+0x2da/0xba0 21:42:04 ...: [ 362.520406] [<ffffffff811134a8>] ? unmap_vmas+0x178/0x310 21:42:04 ...: [ 362.520414] [<ffffffff8115dbfa>] ? alloc_fd+0x10a/0x150 21:42:04 ...: [ 362.520421] [<ffffffff81141769>] do_sys_open+0x69/0x170 21:42:04 ...: [ 362.520428] [<ffffffff811418b0>] sys_open+0x20/0x30 21:42:04 ...: [ 362.520437] [<ffffffff810121b2>] system_call_fastpath+0x16/0x1b 21:42:04 ...: [ 362.520446] INFO: task mdadm:2283 blocked for more than 120 seconds. 21:42:04 ...: [ 362.520450] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 
21:42:04 ...: [ 362.520454] mdadm D 0000000000000000 0 2283 2212 0x00000000 21:42:04 ...: [ 362.520462] ffff88002cca7d98 0000000000000086 0000000000015bc0 0000000000015bc0 21:42:04 ...: [ 362.520470] ffff88002ededf78 ffff88002cca7fd8 0000000000015bc0 ffff88002ededbc0 21:42:04 ...: [ 362.520478] 0000000000015bc0 ffff88002cca7fd8 0000000000015bc0 ffff88002ededf78 21:42:04 ...: [ 362.520485] Call Trace: 21:42:04 ...: [ 362.520495] [<ffffffff81543a97>] __mutex_lock_slowpath+0xf7/0x180 21:42:04 ...: [ 362.520502] [<ffffffff8154397b>] mutex_lock+0x2b/0x50 21:42:04 ...: [ 362.520508] [<ffffffff8117404d>] __blkdev_put+0x3d/0x190 21:42:04 ...: [ 362.520514] [<ffffffff811741b0>] blkdev_put+0x10/0x20 21:42:04 ...: [ 362.520520] [<ffffffff811741f3>] blkdev_close+0x33/0x60 21:42:04 ...: [ 362.520527] [<ffffffff81145375>] __fput+0xf5/0x210 21:42:04 ...: [ 362.520534] [<ffffffff811454b5>] fput+0x25/0x30 21:42:04 ...: [ 362.520540] [<ffffffff811415ad>] filp_close+0x5d/0x90 21:42:04 ...: [ 362.520546] [<ffffffff81141697>] sys_close+0xb7/0x120 21:42:04 ...: [ 362.520553] [<ffffffff810121b2>] system_call_fastpath+0x16/0x1b 21:42:04 ...: [ 362.520559] INFO: task md0_reshape:2287 blocked for more than 120 seconds. 21:42:04 ...: [ 362.520563] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 21:42:04 ...: [ 362.520567] md0_reshape D ffff88003aee96f0 0 2287 2 0x00000000 21:42:04 ...: [ 362.520575] ffff88003cf05a70 0000000000000046 0000000000015bc0 0000000000015bc0 21:42:04 ...: [ 362.520582] ffff88003aee9aa8 ffff88003cf05fd8 0000000000015bc0 ffff88003aee96f0 21:42:04 ...: [ 362.520590] 0000000000015bc0 ffff88003cf05fd8 0000000000015bc0 ffff88003aee9aa8 21:42:04 ...: [ 362.520597] Call Trace: 21:42:04 ...: [ 362.520608] [<ffffffffa0224892>] get_active_stripe+0x312/0x3f0 [raid456] 21:42:04 ...: [ 362.520616] [<ffffffff81059ae0>] ? default_wake_function+0x0/0x20 21:42:04 ...: [ 362.520626] [<ffffffffa0226f80>] reshape_request+0x4c0/0x9a0 [raid456] 21:42:04 ...: [ 362.520634] [<ffffffff81084790>] ? autoremove_wake_function+0x0/0x40 21:42:04 ...: [ 362.520644] [<ffffffffa022777a>] sync_request+0x31a/0x3a0 [raid456] 21:42:04 ...: [ 362.520651] [<ffffffff81052713>] ? __wake_up+0x53/0x70 21:42:04 ...: [ 362.520658] [<ffffffff814156b1>] md_do_sync+0x621/0xbb0 21:42:04 ...: [ 362.520668] [<ffffffff810387b9>] ? default_spin_lock_flags+0x9/0x10 21:42:04 ...: [ 362.520675] [<ffffffff8141640c>] md_thread+0x5c/0x130 21:42:04 ...: [ 362.520681] [<ffffffff81084790>] ? autoremove_wake_function+0x0/0x40 21:42:04 ...: [ 362.520688] [<ffffffff814163b0>] ? md_thread+0x0/0x130 21:42:04 ...: [ 362.520694] [<ffffffff81084416>] kthread+0x96/0xa0 21:42:04 ...: [ 362.520701] [<ffffffff810131ea>] child_rip+0xa/0x20 21:42:04 ...: [ 362.520707] [<ffffffff81084380>] ? kthread+0x0/0xa0 21:42:04 ...: [ 362.520713] [<ffffffff810131e0>] ? child_rip+0x0/0x20 21:42:04 ...: [ 362.520718] INFO: task mdadm:2288 blocked for more than 120 seconds. 21:42:04 ...: [ 362.520721] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 
21:42:04 ...: [ 362.520725] mdadm D 0000000000000000 0 2288 1 0x00000000 21:42:04 ...: [ 362.520733] ffff88002cca9c18 0000000000000086 0000000000015bc0 0000000000015bc0 21:42:04 ...: [ 362.520741] ffff88003aee83b8 ffff88002cca9fd8 0000000000015bc0 ffff88003aee8000 21:42:04 ...: [ 362.520748] 0000000000015bc0 ffff88002cca9fd8 0000000000015bc0 ffff88003aee83b8 21:42:04 ...: [ 362.520755] Call Trace: 21:42:04 ...: [ 362.520763] [<ffffffff81543a97>] __mutex_lock_slowpath+0xf7/0x180 21:42:04 ...: [ 362.520771] [<ffffffff812a6d50>] ? exact_match+0x0/0x10 21:42:04 ...: [ 362.520777] [<ffffffff8154397b>] mutex_lock+0x2b/0x50 21:42:04 ...: [ 362.520783] [<ffffffff811742c8>] __blkdev_get+0x68/0x3d0 21:42:04 ...: [ 362.520790] [<ffffffff81174650>] ? blkdev_open+0x0/0xc0 21:42:04 ...: [ 362.520795] [<ffffffff81174640>] blkdev_get+0x10/0x20 21:42:04 ...: [ 362.520801] [<ffffffff811746c1>] blkdev_open+0x71/0xc0 21:42:04 ...: [ 362.520808] [<ffffffff811419f3>] __dentry_open+0x113/0x370 21:42:04 ...: [ 362.520815] [<ffffffff81253f8f>] ? security_inode_permission+0x1f/0x30 21:42:04 ...: [ 362.520821] [<ffffffff8114de3f>] ? inode_permission+0xaf/0xd0 21:42:04 ...: [ 362.520828] [<ffffffff81141d67>] nameidata_to_filp+0x57/0x70 21:42:04 ...: [ 362.520834] [<ffffffff8115207a>] do_filp_open+0x2da/0xba0 21:42:04 ...: [ 362.520841] [<ffffffff810ff0e1>] ? lru_cache_add_lru+0x21/0x40 21:42:04 ...: [ 362.520848] [<ffffffff8111109c>] ? do_anonymous_page+0x11c/0x330 21:42:04 ...: [ 362.520855] [<ffffffff81115d5f>] ? handle_mm_fault+0x31f/0x3c0 21:42:04 ...: [ 362.520862] [<ffffffff8115dbfa>] ? alloc_fd+0x10a/0x150 21:42:04 ...: [ 362.520868] [<ffffffff81141769>] do_sys_open+0x69/0x170 21:42:04 ...: [ 362.520874] [<ffffffff811418b0>] sys_open+0x20/0x30 21:42:04 ...: [ 362.520882] [<ffffffff810121b2>] system_call_fastpath+0x16/0x1b 21:44:04 ...: [ 482.520065] INFO: task mdadm:1937 blocked for more than 120 seconds. 21:44:04 ...: [ 482.520071] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 21:44:04 ...: [ 482.520077] mdadm D 00000000ffffffff 0 1937 1 0x00000000 21:44:04 ...: [ 482.520087] ffff88002ef4f5d8 0000000000000082 0000000000015bc0 0000000000015bc0 21:44:04 ...: [ 482.520096] ffff88002eb5b198 ffff88002ef4ffd8 0000000000015bc0 ffff88002eb5ade0 21:44:04 ...: [ 482.520104] 0000000000015bc0 ffff88002ef4ffd8 0000000000015bc0 ffff88002eb5b198 21:44:04 ...: [ 482.520112] Call Trace: 21:44:04 ...: [ 482.520139] [<ffffffffa0224892>] get_active_stripe+0x312/0x3f0 [raid456] 21:44:04 ...: [ 482.520154] [<ffffffff81059ae0>] ? default_wake_function+0x0/0x20 21:44:04 ...: [ 482.520165] [<ffffffffa0228413>] make_request+0x243/0x4b0 [raid456] 21:44:04 ...: [ 482.520175] [<ffffffffa0221a90>] ? release_stripe+0x50/0x70 [raid456] 21:44:04 ...: [ 482.520185] [<ffffffff81084790>] ? autoremove_wake_function+0x0/0x40 21:44:04 ...: [ 482.520194] [<ffffffff81414df0>] md_make_request+0xc0/0x130 21:44:04 ...: [ 482.520201] [<ffffffff81414df0>] ? md_make_request+0xc0/0x130 21:44:04 ...: [ 482.520212] [<ffffffff8129f8c1>] generic_make_request+0x1b1/0x4f0 21:44:04 ...: [ 482.520221] [<ffffffff810f6515>] ? mempool_alloc_slab+0x15/0x20 21:44:04 ...: [ 482.520229] [<ffffffff8116c2ec>] ? alloc_buffer_head+0x1c/0x60 21:44:04 ...: [ 482.520237] [<ffffffff8129fc80>] submit_bio+0x80/0x110 21:44:04 ...: [ 482.520244] [<ffffffff8116c849>] submit_bh+0xf9/0x140 21:44:04 ...: [ 482.520252] [<ffffffff8116f124>] block_read_full_page+0x274/0x3b0 21:44:04 ...: [ 482.520258] [<ffffffff81172c90>] ? 
blkdev_get_block+0x0/0x70 21:44:04 ...: [ 482.520266] [<ffffffff8110d875>] ? __inc_zone_page_state+0x35/0x40 21:44:04 ...: [ 482.520273] [<ffffffff810f46d8>] ? add_to_page_cache_locked+0xe8/0x160 21:44:04 ...: [ 482.520280] [<ffffffff81173d78>] blkdev_readpage+0x18/0x20 21:44:04 ...: [ 482.520286] [<ffffffff810f484b>] __read_cache_page+0x7b/0xe0 21:44:04 ...: [ 482.520293] [<ffffffff81173d60>] ? blkdev_readpage+0x0/0x20 21:44:04 ...: [ 482.520299] [<ffffffff81173d60>] ? blkdev_readpage+0x0/0x20 21:44:04 ...: [ 482.520306] [<ffffffff810f57dc>] do_read_cache_page+0x3c/0x120 21:44:04 ...: [ 482.520313] [<ffffffff810f5909>] read_cache_page_async+0x19/0x20 21:44:04 ...: [ 482.520319] [<ffffffff810f591e>] read_cache_page+0xe/0x20 21:44:04 ...: [ 482.520327] [<ffffffff811a6cb0>] read_dev_sector+0x30/0xa0 21:44:04 ...: [ 482.520334] [<ffffffff811a7fcd>] amiga_partition+0x6d/0x460 21:44:04 ...: [ 482.520341] [<ffffffff811a7938>] check_partition+0x138/0x190 21:44:04 ...: [ 482.520348] [<ffffffff811a7a7a>] rescan_partitions+0xea/0x2f0 21:44:04 ...: [ 482.520355] [<ffffffff811744c7>] __blkdev_get+0x267/0x3d0 21:44:04 ...: [ 482.520361] [<ffffffff81174650>] ? blkdev_open+0x0/0xc0 21:44:04 ...: [ 482.520367] [<ffffffff81174640>] blkdev_get+0x10/0x20 21:44:04 ...: [ 482.520373] [<ffffffff811746c1>] blkdev_open+0x71/0xc0 21:44:04 ...: [ 482.520380] [<ffffffff811419f3>] __dentry_open+0x113/0x370 21:44:04 ...: [ 482.520388] [<ffffffff81253f8f>] ? security_inode_permission+0x1f/0x30 21:44:04 ...: [ 482.520396] [<ffffffff8114de3f>] ? inode_permission+0xaf/0xd0 21:44:04 ...: [ 482.520403] [<ffffffff81141d67>] nameidata_to_filp+0x57/0x70 21:44:04 ...: [ 482.520410] [<ffffffff8115207a>] do_filp_open+0x2da/0xba0 21:44:04 ...: [ 482.520417] [<ffffffff811134a8>] ? unmap_vmas+0x178/0x310 21:44:04 ...: [ 482.520426] [<ffffffff8115dbfa>] ? alloc_fd+0x10a/0x150 21:44:04 ...: [ 482.520432] [<ffffffff81141769>] do_sys_open+0x69/0x170 21:44:04 ...: [ 482.520438] [<ffffffff811418b0>] sys_open+0x20/0x30 21:44:04 ...: [ 482.520447] [<ffffffff810121b2>] system_call_fastpath+0x16/0x1b 21:44:04 ...: [ 482.520458] INFO: task mdadm:2283 blocked for more than 120 seconds. 21:44:04 ...: [ 482.520462] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 21:44:04 ...: [ 482.520467] mdadm D 0000000000000000 0 2283 2212 0x00000000 21:44:04 ...: [ 482.520475] ffff88002cca7d98 0000000000000086 0000000000015bc0 0000000000015bc0 21:44:04 ...: [ 482.520483] ffff88002ededf78 ffff88002cca7fd8 0000000000015bc0 ffff88002ededbc0 21:44:04 ...: [ 482.520490] 0000000000015bc0 ffff88002cca7fd8 0000000000015bc0 ffff88002ededf78 21:44:04 ...: [ 482.520498] Call Trace: 21:44:04 ...: [ 482.520508] [<ffffffff81543a97>] __mutex_lock_slowpath+0xf7/0x180 21:44:04 ...: [ 482.520515] [<ffffffff8154397b>] mutex_lock+0x2b/0x50 21:44:04 ...: [ 482.520521] [<ffffffff8117404d>] __blkdev_put+0x3d/0x190 21:44:04 ...: [ 482.520527] [<ffffffff811741b0>] blkdev_put+0x10/0x20 21:44:04 ...: [ 482.520533] [<ffffffff811741f3>] blkdev_close+0x33/0x60 21:44:04 ...: [ 482.520541] [<ffffffff81145375>] __fput+0xf5/0x210 21:44:04 ...: [ 482.520547] [<ffffffff811454b5>] fput+0x25/0x30 21:44:04 ...: [ 482.520554] [<ffffffff811415ad>] filp_close+0x5d/0x90 21:44:04 ...: [ 482.520560] [<ffffffff81141697>] sys_close+0xb7/0x120 21:44:04 ...: [ 482.520568] [<ffffffff810121b2>] system_call_fastpath+0x16/0x1b 21:44:04 ...: [ 482.520574] INFO: task md0_reshape:2287 blocked for more than 120 seconds. 
21:44:04 ...: [ 482.520578] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 21:44:04 ...: [ 482.520582] md0_reshape D ffff88003aee96f0 0 2287 2 0x00000000 21:44:04 ...: [ 482.520590] ffff88003cf05a70 0000000000000046 0000000000015bc0 0000000000015bc0 21:44:04 ...: [ 482.520597] ffff88003aee9aa8 ffff88003cf05fd8 0000000000015bc0 ffff88003aee96f0 21:44:04 ...: [ 482.520605] 0000000000015bc0 ffff88003cf05fd8 0000000000015bc0 ffff88003aee9aa8 21:44:04 ...: [ 482.520612] Call Trace: 21:44:04 ...: [ 482.520623] [<ffffffffa0224892>] get_active_stripe+0x312/0x3f0 [raid456] 21:44:04 ...: [ 482.520633] [<ffffffff81059ae0>] ? default_wake_function+0x0/0x20 21:44:04 ...: [ 482.520643] [<ffffffffa0226f80>] reshape_request+0x4c0/0x9a0 [raid456] 21:44:04 ...: [ 482.520651] [<ffffffff81084790>] ? autoremove_wake_function+0x0/0x40 21:44:04 ...: [ 482.520661] [<ffffffffa022777a>] sync_request+0x31a/0x3a0 [raid456] 21:44:04 ...: [ 482.520668] [<ffffffff81052713>] ? __wake_up+0x53/0x70 21:44:04 ...: [ 482.520675] [<ffffffff814156b1>] md_do_sync+0x621/0xbb0 21:44:04 ...: [ 482.520685] [<ffffffff810387b9>] ? default_spin_lock_flags+0x9/0x10 21:44:04 ...: [ 482.520692] [<ffffffff8141640c>] md_thread+0x5c/0x130 21:44:04 ...: [ 482.520699] [<ffffffff81084790>] ? autoremove_wake_function+0x0/0x40 21:44:04 ...: [ 482.520705] [<ffffffff814163b0>] ? md_thread+0x0/0x130 21:44:04 ...: [ 482.520711] [<ffffffff81084416>] kthread+0x96/0xa0 21:44:04 ...: [ 482.520718] [<ffffffff810131ea>] child_rip+0xa/0x20 21:44:04 ...: [ 482.520725] [<ffffffff81084380>] ? kthread+0x0/0xa0 21:44:04 ...: [ 482.520730] [<ffffffff810131e0>] ? child_rip+0x0/0x20 21:44:04 ...: [ 482.520735] INFO: task mdadm:2288 blocked for more than 120 seconds. 21:44:04 ...: [ 482.520739] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 21:44:04 ...: [ 482.520743] mdadm D 0000000000000000 0 2288 1 0x00000000 21:44:04 ...: [ 482.520751] ffff88002cca9c18 0000000000000086 0000000000015bc0 0000000000015bc0 21:44:04 ...: [ 482.520759] ffff88003aee83b8 ffff88002cca9fd8 0000000000015bc0 ffff88003aee8000 21:44:04 ...: [ 482.520767] 0000000000015bc0 ffff88002cca9fd8 0000000000015bc0 ffff88003aee83b8 21:44:04 ...: [ 482.520774] Call Trace: 21:44:04 ...: [ 482.520782] [<ffffffff81543a97>] __mutex_lock_slowpath+0xf7/0x180 21:44:04 ...: [ 482.520790] [<ffffffff812a6d50>] ? exact_match+0x0/0x10 21:44:04 ...: [ 482.520797] [<ffffffff8154397b>] mutex_lock+0x2b/0x50 21:44:04 ...: [ 482.520804] [<ffffffff811742c8>] __blkdev_get+0x68/0x3d0 21:44:04 ...: [ 482.520810] [<ffffffff81174650>] ? blkdev_open+0x0/0xc0 21:44:04 ...: [ 482.520816] [<ffffffff81174640>] blkdev_get+0x10/0x20 21:44:04 ...: [ 482.520822] [<ffffffff811746c1>] blkdev_open+0x71/0xc0 21:44:04 ...: [ 482.520829] [<ffffffff811419f3>] __dentry_open+0x113/0x370 21:44:04 ...: [ 482.520837] [<ffffffff81253f8f>] ? security_inode_permission+0x1f/0x30 21:44:04 ...: [ 482.520843] [<ffffffff8114de3f>] ? inode_permission+0xaf/0xd0 21:44:04 ...: [ 482.520850] [<ffffffff81141d67>] nameidata_to_filp+0x57/0x70 21:44:04 ...: [ 482.520857] [<ffffffff8115207a>] do_filp_open+0x2da/0xba0 21:44:04 ...: [ 482.520864] [<ffffffff810ff0e1>] ? lru_cache_add_lru+0x21/0x40 21:44:04 ...: [ 482.520871] [<ffffffff8111109c>] ? do_anonymous_page+0x11c/0x330 21:44:04 ...: [ 482.520878] [<ffffffff81115d5f>] ? handle_mm_fault+0x31f/0x3c0 21:44:04 ...: [ 482.520885] [<ffffffff8115dbfa>] ? 
alloc_fd+0x10a/0x150 21:44:04 ...: [ 482.520891] [<ffffffff81141769>] do_sys_open+0x69/0x170 21:44:04 ...: [ 482.520897] [<ffffffff811418b0>] sys_open+0x20/0x30 21:44:04 ...: [ 482.520905] [<ffffffff810121b2>] system_call_fastpath+0x16/0x1b 21:46:04 ...: [ 602.520053] INFO: task mdadm:1937 blocked for more than 120 seconds. 21:46:04 ...: [ 602.520059] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 21:46:04 ...: [ 602.520065] mdadm D 00000000ffffffff 0 1937 1 0x00000000 21:46:04 ...: [ 602.520075] ffff88002ef4f5d8 0000000000000082 0000000000015bc0 0000000000015bc0 21:46:04 ...: [ 602.520084] ffff88002eb5b198 ffff88002ef4ffd8 0000000000015bc0 ffff88002eb5ade0 21:46:04 ...: [ 602.520091] 0000000000015bc0 ffff88002ef4ffd8 0000000000015bc0 ffff88002eb5b198 21:46:04 ...: [ 602.520099] Call Trace: 21:46:04 ...: [ 602.520127] [<ffffffffa0224892>] get_active_stripe+0x312/0x3f0 [raid456] 21:46:04 ...: [ 602.520142] [<ffffffff81059ae0>] ? default_wake_function+0x0/0x20 21:46:04 ...: [ 602.520153] [<ffffffffa0228413>] make_request+0x243/0x4b0 [raid456] 21:46:04 ...: [ 602.520162] [<ffffffffa0221a90>] ? release_stripe+0x50/0x70 [raid456] 21:46:04 ...: [ 602.520171] [<ffffffff81084790>] ? autoremove_wake_function+0x0/0x40 21:46:04 ...: [ 602.520180] [<ffffffff81414df0>] md_make_request+0xc0/0x130 21:46:04 ...: [ 602.520187] [<ffffffff81414df0>] ? md_make_request+0xc0/0x130 21:46:04 ...: [ 602.520197] [<ffffffff8129f8c1>] generic_make_request+0x1b1/0x4f0 21:46:04 ...: [ 602.520206] [<ffffffff810f6515>] ? mempool_alloc_slab+0x15/0x20 21:46:04 ...: [ 602.520215] [<ffffffff8116c2ec>] ? alloc_buffer_head+0x1c/0x60 21:46:04 ...: [ 602.520222] [<ffffffff8129fc80>] submit_bio+0x80/0x110 21:46:04 ...: [ 602.520229] [<ffffffff8116c849>] submit_bh+0xf9/0x140 21:46:04 ...: [ 602.520237] [<ffffffff8116f124>] block_read_full_page+0x274/0x3b0 21:46:04 ...: [ 602.520244] [<ffffffff81172c90>] ? blkdev_get_block+0x0/0x70 21:46:04 ...: [ 602.520252] [<ffffffff8110d875>] ? __inc_zone_page_state+0x35/0x40 21:46:04 ...: [ 602.520259] [<ffffffff810f46d8>] ? add_to_page_cache_locked+0xe8/0x160 21:46:04 ...: [ 602.520266] [<ffffffff81173d78>] blkdev_readpage+0x18/0x20 21:46:04 ...: [ 602.520273] [<ffffffff810f484b>] __read_cache_page+0x7b/0xe0 21:46:04 ...: [ 602.520279] [<ffffffff81173d60>] ? blkdev_readpage+0x0/0x20 21:46:04 ...: [ 602.520285] [<ffffffff81173d60>] ? blkdev_readpage+0x0/0x20 21:46:04 ...: [ 602.520292] [<ffffffff810f57dc>] do_read_cache_page+0x3c/0x120 21:46:04 ...: [ 602.520300] [<ffffffff810f5909>] read_cache_page_async+0x19/0x20 21:46:04 ...: [ 602.520306] [<ffffffff810f591e>] read_cache_page+0xe/0x20 21:46:04 ...: [ 602.520314] [<ffffffff811a6cb0>] read_dev_sector+0x30/0xa0 21:46:04 ...: [ 602.520321] [<ffffffff811a7fcd>] amiga_partition+0x6d/0x460 21:46:04 ...: [ 602.520328] [<ffffffff811a7938>] check_partition+0x138/0x190 21:46:04 ...: [ 602.520335] [<ffffffff811a7a7a>] rescan_partitions+0xea/0x2f0 21:46:04 ...: [ 602.520342] [<ffffffff811744c7>] __blkdev_get+0x267/0x3d0 21:46:04 ...: [ 602.520348] [<ffffffff81174650>] ? blkdev_open+0x0/0xc0 21:46:04 ...: [ 602.520354] [<ffffffff81174640>] blkdev_get+0x10/0x20 21:46:04 ...: [ 602.520359] [<ffffffff811746c1>] blkdev_open+0x71/0xc0 21:46:04 ...: [ 602.520367] [<ffffffff811419f3>] __dentry_open+0x113/0x370 21:46:04 ...: [ 602.520375] [<ffffffff81253f8f>] ? security_inode_permission+0x1f/0x30 21:46:04 ...: [ 602.520383] [<ffffffff8114de3f>] ? 
inode_permission+0xaf/0xd0 21:46:04 ...: [ 602.520390] [<ffffffff81141d67>] nameidata_to_filp+0x57/0x70 21:46:04 ...: [ 602.520397] [<ffffffff8115207a>] do_filp_open+0x2da/0xba0 21:46:04 ...: [ 602.520404] [<ffffffff811134a8>] ? unmap_vmas+0x178/0x310 21:46:04 ...: [ 602.520413] [<ffffffff8115dbfa>] ? alloc_fd+0x10a/0x150 21:46:04 ...: [ 602.520419] [<ffffffff81141769>] do_sys_open+0x69/0x170 21:46:04 ...: [ 602.520425] [<ffffffff811418b0>] sys_open+0x20/0x30 21:46:04 ...: [ 602.520434] [<ffffffff810121b2>] system_call_fastpath+0x16/0x1b 21:46:04 ...: [ 602.520443] INFO: task mdadm:2283 blocked for more than 120 seconds. 21:46:04 ...: [ 602.520447] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 21:46:04 ...: [ 602.520451] mdadm D 0000000000000000 0 2283 2212 0x00000000 21:46:04 ...: [ 602.520460] ffff88002cca7d98 0000000000000086 0000000000015bc0 0000000000015bc0 21:46:04 ...: [ 602.520468] ffff88002ededf78 ffff88002cca7fd8 0000000000015bc0 ffff88002ededbc0 21:46:04 ...: [ 602.520475] 0000000000015bc0 ffff88002cca7fd8 0000000000015bc0 ffff88002ededf78 21:46:04 ...: [ 602.520483] Call Trace: 21:46:04 ...: [ 602.520492] [<ffffffff81543a97>] __mutex_lock_slowpath+0xf7/0x180 21:46:04 ...: [ 602.520500] [<ffffffff8154397b>] mutex_lock+0x2b/0x50 21:46:04 ...: [ 602.520506] [<ffffffff8117404d>] __blkdev_put+0x3d/0x190 21:46:04 ...: [ 602.520512] [<ffffffff811741b0>] blkdev_put+0x10/0x20 21:46:04 ...: [ 602.520518] [<ffffffff811741f3>] blkdev_close+0x33/0x60 21:46:04 ...: [ 602.520526] [<ffffffff81145375>] __fput+0xf5/0x210 21:46:04 ...: [ 602.520533] [<ffffffff811454b5>] fput+0x25/0x30 21:46:04 ...: [ 602.520539] [<ffffffff811415ad>] filp_close+0x5d/0x90 21:46:04 ...: [ 602.520545] [<ffffffff81141697>] sys_close+0xb7/0x120 21:46:04 ...: [ 602.520552] [<ffffffff810121b2>] system_call_fastpath+0x16/0x1b

    Read the article

  • Slow NFS and GFS2 performance

    - by Tiago
    Recently I've designed and configured a 4 node cluster for a webapp that does lots of file handling. The cluster have been broken down into 2 main roles, webserver and storage. Each role is replicated to a second server using drbd in active/passive mode. The webserver does a NFS mount of the data directory of the storage server and the latter also has a webserver running to serve files to browser clients. In the storage servers I've created a GFS2 FS to hold the data which is wired to drbd. I've chose GFS2 mainly because the announced performance and also because the volume size which has to be pretty high. Since we entered production I've been facing two problems that I think are deeply connected. First of all, the NFS mount on the webservers keeps hanging for a minute or so and then resumes normal operations. By analyzing the logs I've found out that NFS stops answering for a while and outputs the following log lines: Oct 15 18:15:42 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:44 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:46 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:47 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:47 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:47 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:48 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:48 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:51 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:52 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:52 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:55 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:55 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:58 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK In this case, the hang lasted for 16 seconds but sometimes it takes 1 or 2 minutes to resume normal operations. My first guess was this was happening due to heavy load of the NFS mount and that by increasing RPCNFSDCOUNT to a higher value, this would become stable. 
I've increased it several times and apparently, after a while, the logs started appearing less times. The value is now on 32. After further investigating the issue, I've came across a different hang, despite the NFS messages still appear in the logs. Sometimes, the GFS2 FS simply hangs which causes both the NFS and the storage webserver to serve files. Both stay hang for a while and then they resume normal operations. This hangs leaves no trace on client side (also leaves no NFS ... not responding messages) and, on the storage side, the log system appears to be empty, even though the rsyslogd is running. The nodes connect themselves through a 10Gbps non-dedicated connection but I don't think this is an issue because the GFS2 hang is confirmed but connecting directly to the active storage server. I've been trying to solve this for a while now and I've tried different NFS configuration options, before I've found out the GFS2 FS is also hanging. The NFS mount is exported as such: /srv/data/ <ip_address>(rw,async,no_root_squash,no_all_squash,fsid=25) And the NFS client mounts with: mount -o "async,hard,intr,wsize=8192,rsize=8192" active.storage.vlan:/srv/data /srv/data After some tests, these were the configurations that yielded more performance to the cluster. I am desperate to find a solution for this as the cluster is already in production mode and I need to fix this so that this hangs won't happen in the future and I don't really know for sure what and how I should be benchmarking. What I can tell is that this is happening due to heavy loads as I have tested the cluster earlier and this problems weren't happening at all. Please tell me if you need me to provide configuration details of the cluster, and which do you want me to post. As last resort I can migrate the files to a different FS but I need some solid pointers on whether this will solve this problems as the volume size is extremely large at this point. The servers are being hosted by a third-party enterprise and I don't have physical access to them. Best regards. EDIT 1: The servers are physical servers and their specs are: Webservers: Intel Bi Xeon E5606 2x4 2.13GHz 24GB DDR3 Intel SSD 320 2 x 120GB Raid 1 Storage: Intel i5 3550 3.3GHz 16GB DDR3 12 x 2TB SATA Initially there was a VRack setup between the servers but we've upgraded one of the storage servers to have more RAM and it wasn't inside the VRack. They connect through a shared 10Gbps connection between them. Please note that it is the same connection that is used for public access. They use a single IP (using IP Failover) to connect between them and to allow for a graceful failover. NFS is therefore over a public connection and not under any private network (it was before the upgrade, were the problem still existed). The firewall was configured and tested thoroughly but I disabled it for a while to see if the problem still occurred, and it did. From my knowledge the hosting provider isn't blocking or limiting the connection between either the servers and the public domain (at least under a given bandwidth consumption threshold that hasn't been reached yet). Hope this helps figuring out the problem. 
EDIT 2: Relevant software versions: CentOS 2.6.32-279.9.1.el6.x86_64 nfs-utils-1.2.3-26.el6.x86_64 nfs-utils-lib-1.1.5-4.el6.x86_64 gfs2-utils-3.0.12.1-32.el6_3.1.x86_64 kmod-drbd84-8.4.2-1.el6_3.elrepo.x86_64 drbd84-utils-8.4.2-1.el6.elrepo.x86_64 DRBD configuration on storage servers: #/etc/drbd.d/storage.res resource storage { protocol C; on <server1 fqdn> { device /dev/drbd0; disk /dev/vg_storage/LV_replicated; address <server1 ip>:7788; meta-disk internal; } on <server2 fqdn> { device /dev/drbd0; disk /dev/vg_storage/LV_replicated; address <server2 ip>:7788; meta-disk internal; } } NFS Configuration in storage servers: #/etc/sysconfig/nfs RPCNFSDCOUNT=32 STATD_PORT=10002 STATD_OUTGOING_PORT=10003 MOUNTD_PORT=10004 RQUOTAD_PORT=10005 LOCKD_UDPPORT=30001 LOCKD_TCPPORT=30001 (can there be any conflict in using the same port for both LOCKD_UDPPORT and LOCKD_TCPPORT?) GFS2 configuration: # gfs2_tool gettune <mountpoint> incore_log_blocks = 1024 log_flush_secs = 60 quota_warn_period = 10 quota_quantum = 60 max_readahead = 262144 complain_secs = 10 statfs_slow = 0 quota_simul_sync = 64 statfs_quantum = 30 quota_scale = 1.0000 (1, 1) new_files_jdata = 0 Storage network environment: eth0 Link encap:Ethernet HWaddr <mac address> inet addr:<ip address> Bcast:<bcast address> Mask:<ip mask> inet6 addr: <ip address> Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:957025127 errors:0 dropped:0 overruns:0 frame:0 TX packets:1473338731 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:2630984979622 (2.3 TiB) TX bytes:1648430431523 (1.4 TiB) eth0:0 Link encap:Ethernet HWaddr <mac address> inet addr:<ip failover address> Bcast:<bcast address> Mask:<ip mask> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 The IP addresses are statically assigned with the given network configurations: DEVICE="eth0" BOOTPROTO="static" HWADDR=<mac address> ONBOOT="yes" TYPE="Ethernet" IPADDR=<ip address> NETMASK=<net mask> and DEVICE="eth0:0" BOOTPROTO="static" HWADDR=<mac address> IPADDR=<ip failover> NETMASK=<net mask> ONBOOT="yes" BROADCAST=<bcast address> Hosts file to allow for a graceful NFS failover in conjunction with NFS option fsid=25 set on both storage servers: #/etc/hosts <storage ip failover address> active.storage.vlan <webserver ip failover address> active.service.vlan As you can see, packet errors are down to 0. I've also ran ping for a long time without any packet loss. MTU size is the normal 1500. As there is no VLan by now, this is the MTU used to communicate between servers. The webservers' network environment is similar. One thing I forgot to mention is that the storage servers handle ~200GB of new files each day through the NFS connection, which is a key point for me to think this is some kind of heavy load problem with either NFS or GFS2. If you need further configuration details please tell me. EDIT 3: Earlier today we had a major filesystem crash on the storage server. I couldn't get the details of the crash right away because the server stop responding. After the reboot, I noticed the filesystem was extremely slow, and I was not being able to serve a single file through either NFS or httpd, perhaps due to cache warming or so. Nevertheless, I've been monitoring the server closely and the following error came up in dmesg. The source of the problem is clearly GFS, which is waiting for a lock and ends up starving after a while. INFO: task nfsd:3029 blocked for more than 120 seconds. 
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. nfsd D 0000000000000000 0 3029 2 0x00000080 ffff8803814f79e0 0000000000000046 0000000000000000 ffffffff8109213f ffff880434c5e148 ffff880624508d88 ffff8803814f7960 ffffffffa037253f ffff8803815c1098 ffff8803814f7fd8 000000000000fb88 ffff8803815c1098 Call Trace: [<ffffffff8109213f>] ? wake_up_bit+0x2f/0x40 [<ffffffffa037253f>] ? gfs2_holder_wake+0x1f/0x30 [gfs2] [<ffffffff814ff42e>] __mutex_lock_slowpath+0x13e/0x180 [<ffffffff814ff2cb>] mutex_lock+0x2b/0x50 [<ffffffffa0379f21>] gfs2_log_reserve+0x51/0x190 [gfs2] [<ffffffffa0390da2>] gfs2_trans_begin+0x112/0x1d0 [gfs2] [<ffffffffa0369b05>] ? gfs2_dir_check+0x35/0xe0 [gfs2] [<ffffffffa0377943>] gfs2_createi+0x1a3/0xaa0 [gfs2] [<ffffffff8121aab1>] ? avc_has_perm+0x71/0x90 [<ffffffffa0383d1e>] gfs2_create+0x7e/0x1a0 [gfs2] [<ffffffffa037783f>] ? gfs2_createi+0x9f/0xaa0 [gfs2] [<ffffffff81188cf4>] vfs_create+0xb4/0xe0 [<ffffffffa04217d6>] nfsd_create_v3+0x366/0x4c0 [nfsd] [<ffffffffa0429703>] nfsd3_proc_create+0x123/0x1b0 [nfsd] [<ffffffffa041a43e>] nfsd_dispatch+0xfe/0x240 [nfsd] [<ffffffffa025a5d4>] svc_process_common+0x344/0x640 [sunrpc] [<ffffffff810602a0>] ? default_wake_function+0x0/0x20 [<ffffffffa025ac10>] svc_process+0x110/0x160 [sunrpc] [<ffffffffa041ab62>] nfsd+0xc2/0x160 [nfsd] [<ffffffffa041aaa0>] ? nfsd+0x0/0x160 [nfsd] [<ffffffff81091de6>] kthread+0x96/0xa0 [<ffffffff8100c14a>] child_rip+0xa/0x20 [<ffffffff81091d50>] ? kthread+0x0/0xa0 [<ffffffff8100c140>] ? child_rip+0x0/0x20


  • HTTP crawler in Erlang

    - by ctp
    I'm writing a simple HTTP crawler but I have an issue running the code at the bottom. I'm requesting 50 URLs and get the content of only 20+ of them back. I've generated a few files of 150 kB each to test the crawler. So I think the 20+ responses are limited by the bandwidth? BUT: how do I tell the Erlang snippet not to quit until the last file has been fetched? The test data server is online, so please try the code out; any hints are welcome :)

        -module(crawler).
        -define(BASE_URL, "http://46.4.117.69/").
        -export([start/0, send_reqs/0, do_send_req/1]).

        start() ->
          ibrowse:start(),
          proc_lib:spawn(?MODULE, send_reqs, []).

        to_url(Id) ->
          ?BASE_URL ++ integer_to_list(Id).

        fetch_ids() ->
          lists:seq(1, 50).

        send_reqs() ->
          spawn_workers(fetch_ids()).

        spawn_workers(Ids) ->
          lists:foreach(fun do_spawn/1, Ids).

        do_spawn(Id) ->
          proc_lib:spawn_link(?MODULE, do_send_req, [Id]).

        do_send_req(Id) ->
          io:format("Requesting ID ~p ... ~n", [Id]),
          Result = (catch ibrowse:send_req(to_url(Id), [], get, [], [], 10000)),
          case Result of
            {ok, Status, _H, B} ->
              io:format("OK -- ID: ~2..0w -- Status: ~p -- Content length: ~p~n", [Id, Status, length(B)]);
            Err ->
              io:format("ERROR -- ID: ~p -- Error: ~p~n", [Id, Err])
          end.

    That's the output:

    Requesting ID 1 ... Requesting ID 2 ... Requesting ID 3 ... Requesting ID 4 ... Requesting ID 5 ... Requesting ID 6 ... Requesting ID 7 ... Requesting ID 8 ... Requesting ID 9 ... Requesting ID 10 ... Requesting ID 11 ... Requesting ID 12 ... Requesting ID 13 ... Requesting ID 14 ... Requesting ID 15 ... Requesting ID 16 ... Requesting ID 17 ... Requesting ID 18 ... Requesting ID 19 ... Requesting ID 20 ... Requesting ID 21 ... Requesting ID 22 ... Requesting ID 23 ... Requesting ID 24 ... Requesting ID 25 ... Requesting ID 26 ... Requesting ID 27 ... Requesting ID 28 ... Requesting ID 29 ... Requesting ID 30 ... Requesting ID 31 ... Requesting ID 32 ... Requesting ID 33 ... Requesting ID 34 ... Requesting ID 35 ... Requesting ID 36 ... Requesting ID 37 ... Requesting ID 38 ... Requesting ID 39 ... Requesting ID 40 ... Requesting ID 41 ... Requesting ID 42 ... Requesting ID 43 ... Requesting ID 44 ... Requesting ID 45 ... Requesting ID 46 ... Requesting ID 47 ... Requesting ID 48 ... Requesting ID 49 ... Requesting ID 50 ...
    OK -- ID: 49 -- Status: "200" -- Content length: 150000
    OK -- ID: 47 -- Status: "200" -- Content length: 150000
    OK -- ID: 50 -- Status: "200" -- Content length: 150000
    OK -- ID: 17 -- Status: "200" -- Content length: 150000
    OK -- ID: 48 -- Status: "200" -- Content length: 150000
    OK -- ID: 45 -- Status: "200" -- Content length: 150000
    OK -- ID: 46 -- Status: "200" -- Content length: 150000
    OK -- ID: 10 -- Status: "200" -- Content length: 150000
    OK -- ID: 09 -- Status: "200" -- Content length: 150000
    OK -- ID: 19 -- Status: "200" -- Content length: 150000
    OK -- ID: 13 -- Status: "200" -- Content length: 150000
    OK -- ID: 21 -- Status: "200" -- Content length: 150000
    OK -- ID: 16 -- Status: "200" -- Content length: 150000
    OK -- ID: 27 -- Status: "200" -- Content length: 150000
    OK -- ID: 03 -- Status: "200" -- Content length: 150000
    OK -- ID: 23 -- Status: "200" -- Content length: 150000
    OK -- ID: 29 -- Status: "200" -- Content length: 150000
    OK -- ID: 14 -- Status: "200" -- Content length: 150000
    OK -- ID: 18 -- Status: "200" -- Content length: 150000
    OK -- ID: 01 -- Status: "200" -- Content length: 150000
    OK -- ID: 30 -- Status: "200" -- Content length: 150000
    OK -- ID: 40 -- Status: "200" -- Content length: 150000
    OK -- ID: 05 -- Status: "200" -- Content length: 150000

    Update: thanks stemm for the hint with wait_workers. I've combined your code and mine, but I get the same behaviour :(

        -module(crawler).
        -define(BASE_URL, "http://46.4.117.69/").
        -export([start/0, send_reqs/0, do_send_req/2]).

        start() ->
          ibrowse:start(),
          proc_lib:spawn(?MODULE, send_reqs, []).

        to_url(Id) ->
          ?BASE_URL ++ integer_to_list(Id).

        fetch_ids() ->
          lists:seq(1, 50).

        send_reqs() ->
          spawn_workers(fetch_ids()).

        spawn_workers(Ids) ->
          %% collect a reference to each worker
          Refs = [ do_spawn(Id) || Id <- Ids ],
          %% wait for a response from each worker
          wait_workers(Refs).

        wait_workers(Refs) ->
          lists:foreach(fun receive_by_ref/1, Refs).

        receive_by_ref(Ref) ->
          %% receive a message only from the worker with this specific reference
          receive
            {Ref, done} -> done
          end.

        do_spawn(Id) ->
          Ref = make_ref(),
          proc_lib:spawn_link(?MODULE, do_send_req, [Id, {self(), Ref}]),
          Ref.

        do_send_req(Id, {Pid, Ref}) ->
          io:format("Requesting ID ~p ... ~n", [Id]),
          Result = (catch ibrowse:send_req(to_url(Id), [], get, [], [], 10000)),
          case Result of
            {ok, Status, _H, B} ->
              io:format("OK -- ID: ~2..0w -- Status: ~p -- Content length: ~p~n", [Id, Status, length(B)]),
              %% send a message that the work is done
              Pid ! {Ref, done};
            Err ->
              io:format("ERROR -- ID: ~p -- Error: ~p~n", [Id, Err]),
              %% repeat the request if there was an error while fetching the page,
              do_send_req(Id, {Pid, Ref})
              %% or - if you don't want to repeat the request, put there:
              %% Pid ! {Ref, done}
          end.
    Running the crawler works fine for a handful of files, but then the code doesn't even fetch the entire files (each file is 150000 bytes) - the crawler fetches some files only partially, see the following web server log :(

    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /10 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /1 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /3 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /8 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /39 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /7 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /6 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /2 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /5 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /50 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /9 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /44 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /38 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /47 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /49 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /43 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /37 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /46 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /48 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /36 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /42 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /41 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /45 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /17 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /35 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /16 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /15 HTTP/1.1" 200 17020 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /21 HTTP/1.1" 200 120360 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /40 HTTP/1.1" 200 117600 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /34 HTTP/1.1" 200 60660 "-" "-"

    Any hints are welcome. I have no clue what's going wrong there :(
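
    For what it's worth, a minimal sketch of the same worker pattern with the concurrency capped (the module name crawler_batched, the batch size of 10, the 30-second timeout and the helper names are illustrative, not taken from the post): fetching in small batches and waiting for every worker in a batch keeps the parent alive until the last file is done, and also tests whether the partial downloads come from opening all 50 ibrowse connections at once.

        -module(crawler_batched).
        -define(BASE_URL, "http://46.4.117.69/").
        -define(BATCH_SIZE, 10).
        -export([start/0]).

        start() ->
            ibrowse:start(),
            spawn_batches(lists:seq(1, 50)).

        %% Fetch IDs in batches of ?BATCH_SIZE, waiting for every worker in a
        %% batch before starting the next one, so the caller cannot exit early.
        spawn_batches([]) ->
            ok;
        spawn_batches(Ids) ->
            {Batch, Rest} = split_batch(Ids),
            Refs = [do_spawn(Id) || Id <- Batch],
            [wait_for(Ref) || Ref <- Refs],
            spawn_batches(Rest).

        split_batch(Ids) when length(Ids) =< ?BATCH_SIZE -> {Ids, []};
        split_batch(Ids) -> lists:split(?BATCH_SIZE, Ids).

        do_spawn(Id) ->
            Ref = make_ref(),
            Self = self(),
            spawn_link(fun() -> Self ! {Ref, fetch(Id)} end),
            Ref.

        wait_for(Ref) ->
            receive {Ref, Result} -> Result end.

        fetch(Id) ->
            Url = ?BASE_URL ++ integer_to_list(Id),
            %% catch keeps a failed request from killing the linked parent
            case catch ibrowse:send_req(Url, [], get, [], [], 30000) of
                {ok, Status, _Headers, Body} ->
                    io:format("OK    -- ID: ~2..0w -- Status: ~p -- Length: ~p~n",
                              [Id, Status, length(Body)]),
                    ok;
                Err ->
                    io:format("ERROR -- ID: ~p -- ~p~n", [Id, Err]),
                    {error, Err}
            end.

    If this batched version returns all 50 complete files, the original symptom most likely comes from too many simultaneous connections hitting the 10-second timeout rather than from the waiting logic; if files are still truncated, the server side is the more likely suspect.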


  • Httpd problem, suspect an attack but not sure

    - by Bob
    On one of my servers when I type netstat -n I get a huge output, something like 400 entries for httpd. The bandwidth on the server isn't high, so I'm confused as to what's causing it. I'm suspecting an attack, but not sure. Intermittently, the web server will stop responding. When this happens all other services such as ping, ftp, work just normally. System load is also normal. The only thing that isn't normal I think is the "netstat -n" output. Can you guys take a look and see if there's something I can do? I have APF installed, but not sure what rules I should put into place to mitigate the problem. Btw, I'm running CentOS 5 Linux with Apache 2. root@linux [/backup/stuff/apf-9.7-1]# netstat -n|grep :80 tcp 0 0 120.136.23.56:80 220.181.94.220:48397 TIME_WAIT tcp 0 0 120.136.23.56:80 218.86.49.153:1734 TIME_WAIT tcp 0 0 120.136.23.56:80 67.195.37.99:48316 TIME_WAIT tcp 0 0 120.136.23.56:80 208.80.193.33:54407 TIME_WAIT tcp 0 0 120.136.23.56:80 65.49.2.180:46768 TIME_WAIT tcp 0 0 120.136.23.56:80 120.0.70.180:9414 FIN_WAIT2 tcp 0 0 120.136.23.56:80 221.130.177.101:43386 TIME_WAIT tcp 0 0 120.136.23.92:80 220.181.7.112:51601 TIME_WAIT tcp 0 0 120.136.23.94:80 220.181.94.215:53097 TIME_WAIT tcp 0 0 120.136.23.56:80 202.160.188.236:53203 TIME_WAIT tcp 0 0 120.136.23.56:80 119.119.247.249:62297 TIME_WAIT tcp 0 0 120.136.23.56:80 119.119.247.249:64345 TIME_WAIT tcp 0 0 120.136.23.56:80 67.195.115.105:36600 TIME_WAIT tcp 0 0 120.136.23.56:80 118.77.25.129:1743 TIME_WAIT tcp 0 0 120.136.23.56:80 220.181.94.220:35107 TIME_WAIT tcp 0 0 120.136.23.56:80 119.119.247.249:61801 TIME_WAIT tcp 0 0 120.136.23.56:80 66.249.69.155:57641 TIME_WAIT tcp 0 1009 120.136.23.56:80 114.249.218.24:17204 CLOSING tcp 0 0 120.136.23.93:80 119.235.237.85:45355 TIME_WAIT tcp 0 0 120.136.23.56:80 217.212.224.182:45195 TIME_WAIT tcp 0 0 120.136.23.56:80 220.189.10.170:1556 TIME_WAIT tcp 0 0 120.136.23.56:80 202.160.180.102:35701 TIME_WAIT tcp 0 0 120.136.23.56:80 118.77.25.129:1745 TIME_WAIT tcp 0 0 120.136.23.56:80 118.77.25.129:1749 TIME_WAIT tcp 0 0 120.136.23.56:80 118.77.25.129:1748 TIME_WAIT tcp 0 0 120.136.23.56:80 221.195.76.250:26635 TIME_WAIT tcp 0 0 120.136.23.56:80 67.195.111.239:58417 TIME_WAIT tcp 0 0 120.136.23.56:80 67.218.116.164:53370 TIME_WAIT tcp 0 0 120.136.23.56:80 220.181.94.236:56168 TIME_WAIT tcp 0 0 120.136.23.93:80 120.136.23.93:36947 TIME_WAIT tcp 0 1009 120.136.23.56:80 114.249.218.24:16991 CLOSING tcp 0 305 120.136.23.56:80 59.58.149.147:1881 ESTABLISHED tcp 0 0 120.136.23.56:80 61.186.48.148:1405 ESTABLISHED tcp 0 0 120.136.23.56:80 123.125.66.46:26703 TIME_WAIT tcp 0 0 120.136.23.56:80 222.84.58.254:4814 TIME_WAIT tcp 0 0 120.136.23.56:80 218.86.49.153:1698 TIME_WAIT tcp 0 0 120.136.23.56:80 222.84.58.254:4813 TIME_WAIT tcp 0 0 120.136.23.56:80 222.84.58.254:4810 TIME_WAIT tcp 0 0 120.136.23.56:80 220.181.94.236:60508 TIME_WAIT tcp 0 0 120.136.23.56:80 222.84.58.254:4811 TIME_WAIT tcp 0 0 120.136.23.56:80 220.181.125.71:43991 TIME_WAIT tcp 0 0 120.136.23.56:80 220.181.125.71:52182 TIME_WAIT tcp 0 0 120.136.23.56:80 222.84.58.254:4806 TIME_WAIT tcp 0 0 120.136.23.56:80 220.181.125.71:56024 TIME_WAIT tcp 0 0 120.136.23.56:80 222.84.58.254:4805 TIME_WAIT tcp 0 0 120.136.23.56:80 222.89.251.167:2133 TIME_WAIT tcp 0 0 120.136.23.56:80 67.195.37.99:48340 TIME_WAIT tcp 0 0 120.136.23.56:80 119.119.247.249:63543 TIME_WAIT tcp 0 0 120.136.23.56:80 220.181.94.220:39544 TIME_WAIT tcp 0 0 120.136.23.56:80 220.181.125.71:48066 TIME_WAIT tcp 0 0 120.136.23.56:80 222.84.58.254:4822 TIME_WAIT tcp 0 0 
120.136.23.56:80 67.195.113.253:55817 TIME_WAIT tcp 0 0 120.136.23.56:80 219.141.124.130:11316 FIN_WAIT2 tcp 0 0 120.136.23.56:80 222.84.58.254:4820 TIME_WAIT tcp 0 0 120.136.23.56:80 222.84.58.254:4816 TIME_WAIT tcp 0 0 120.136.23.56:80 202.160.180.140:40743 TIME_WAIT tcp 0 0 120.136.23.56:80 220.181.125.71:60979 TIME_WAIT tcp 0 2332 120.136.23.56:80 221.180.12.66:29255 LAST_ACK tcp 0 0 120.136.23.56:80 117.36.231.149:4078 TIME_WAIT tcp 0 2332 120.136.23.56:80 221.180.12.66:29251 LAST_ACK tcp 0 0 120.136.23.56:80 117.36.231.149:4079 TIME_WAIT tcp 0 2332 120.136.23.56:80 221.180.12.66:29260 LAST_ACK tcp 0 0 120.136.23.56:80 220.181.94.236:51379 TIME_WAIT tcp 0 0 120.136.23.56:80 114.237.16.26:1363 TIME_WAIT tcp 0 2332 120.136.23.56:80 221.180.12.66:29263 LAST_ACK tcp 0 0 120.136.23.56:80 220.181.94.220:63106 TIME_WAIT tcp 0 0 120.136.23.56:80 221.130.177.101:45795 TIME_WAIT tcp 0 0 120.136.23.56:80 111.224.115.203:46315 ESTABLISHED tcp 0 0 120.136.23.56:80 66.249.69.5:35081 ESTABLISHED tcp 0 0 120.136.23.56:80 203.209.252.26:51590 TIME_WAIT tcp 0 2332 120.136.23.56:80 221.180.12.66:29268 LAST_ACK tcp 0 0 120.136.23.80:80 216.7.175.100:54555 TIME_WAIT tcp 0 0 120.136.23.92:80 220.181.7.38:47180 TIME_WAIT tcp 0 0 120.136.23.56:80 119.119.247.249:64467 TIME_WAIT tcp 0 2332 120.136.23.56:80 221.180.12.66:29265 LAST_ACK tcp 0 0 120.136.23.92:80 220.181.7.110:46593 TIME_WAIT tcp 0 2332 120.136.23.56:80 221.180.12.66:29276 LAST_ACK tcp 0 0 120.136.23.56:80 117.36.231.149:4080 TIME_WAIT tcp 0 0 120.136.23.56:80 117.36.231.149:4081 TIME_WAIT tcp 0 0 120.136.23.56:80 67.195.37.99:50215 TIME_WAIT tcp 0 101505 120.136.23.56:80 111.166.41.15:1315 ESTABLISHED tcp 0 2332 120.136.23.56:80 221.180.12.66:29274 LAST_ACK tcp 0 0 120.136.23.56:80 222.84.58.254:4878 TIME_WAIT tcp 0 1 120.136.23.93:80 58.33.226.66:4715 FIN_WAIT1 tcp 0 0 120.136.23.56:80 222.84.58.254:4877 TIME_WAIT tcp 0 1009 120.136.23.56:80 114.249.218.24:17062 CLOSING tcp 0 2332 120.136.23.56:80 221.180.12.66:29280 LAST_ACK tcp 0 0 120.136.23.56:80 222.84.58.254:4874 TIME_WAIT tcp 0 0 120.136.23.93:80 124.115.0.28:59777 TIME_WAIT tcp 0 0 120.136.23.56:80 222.84.58.254:4872 TIME_WAIT tcp 0 0 120.136.23.56:80 222.84.58.254:4870 TIME_WAIT tcp 0 0 120.136.23.56:80 67.195.37.99:50449 TIME_WAIT tcp 0 0 120.136.23.56:80 222.84.58.254:4868 TIME_WAIT tcp 0 0 120.136.23.56:80 221.130.177.107:37579 TIME_WAIT tcp 0 0 120.136.23.56:80 67.195.114.238:34255 TIME_WAIT tcp 0 0 120.136.23.56:80 221.130.177.105:35530 TIME_WAIT tcp 0 0 120.136.23.56:80 220.181.94.220:43960 TIME_WAIT tcp 0 0 120.136.23.56:80 67.195.111.229:41667 TIME_WAIT tcp 0 0 120.136.23.56:80 220.181.94.220:52669 TIME_WAIT tcp 0 0 120.136.23.56:80 67.195.111.239:56779 TIME_WAIT tcp 1 16560 120.136.23.56:80 210.13.118.102:43675 CLOSE_WAIT tcp 0 1009 120.136.23.56:80 114.249.218.24:17084 CLOSING tcp 0 0 120.136.23.56:80 221.130.177.105:33501 TIME_WAIT tcp 0 0 120.136.23.93:80 123.116.230.132:9703 TIME_WAIT tcp 0 0 120.136.23.56:80 67.195.37.99:49414 TIME_WAIT tcp 0 0 120.136.23.56:80 220.168.66.48:3360 ESTABLISHED tcp 0 0 120.136.23.56:80 220.168.66.48:3361 FIN_WAIT2 tcp 0 0 120.136.23.56:80 220.168.66.48:3362 ESTABLISHED tcp 0 0 120.136.23.80:80 66.249.68.183:39813 TIME_WAIT tcp 0 0 120.136.23.56:80 67.195.37.99:51569 TIME_WAIT tcp 0 0 120.136.23.56:80 216.129.119.11:58377 TIME_WAIT tcp 0 0 120.136.23.56:80 67.195.111.229:41914 TIME_WAIT tcp 0 0 120.136.23.56:80 60.213.146.54:33921 TIME_WAIT tcp 0 0 120.136.23.56:80 67.195.37.99:50287 TIME_WAIT tcp 0 0 120.136.23.56:80 61.150.84.6:2094 
TIME_WAIT tcp 0 0 120.136.23.56:80 67.218.116.166:33262 TIME_WAIT tcp 0 0 120.136.23.56:80 221.130.177.101:38064 TIME_WAIT tcp 0 0 120.136.23.56:80 110.75.167.223:39895 TIME_WAIT tcp 0 0 120.136.23.56:80 67.195.37.99:48991 TIME_WAIT tcp 1 16560 120.136.23.56:80 210.13.118.102:61893 CLOSE_WAIT tcp 0 0 120.136.23.93:80 61.152.250.144:42832 TIME_WAIT tcp 0 0 120.136.23.56:80 202.160.180.174:37484 TIME_WAIT tcp 0 0 120.136.23.56:80 119.119.247.249:63403 TIME_WAIT tcp 0 0 120.136.23.56:80 119.119.247.249:62121 TIME_WAIT tcp 0 0 120.136.23.56:80 66.249.69.155:62189 TIME_WAIT tcp 0 0 120.136.23.56:80 202.160.180.80:60303 TIME_WAIT tcp 0 363 120.136.23.56:80 123.89.153.157:39067 ESTABLISHED tcp 0 0 127.0.0.1:80 127.0.0.1:49406 TIME_WAIT tcp 0 0 120.136.23.92:80 66.249.65.226:61423 TIME_WAIT tcp 0 0 120.136.23.56:80 122.136.173.33:19652 TIME_WAIT tcp 0 2332 120.136.23.56:80 221.180.12.66:29243 LAST_ACK tcp 0 0 120.136.23.56:80 122.136.173.33:19653 FIN_WAIT2 tcp 0 0 120.136.23.56:80 122.86.41.132:5061 TIME_WAIT tcp 0 0 120.136.23.56:80 202.160.179.90:51318 TIME_WAIT tcp 0 0 120.136.23.56:80 122.86.41.132:5060 TIME_WAIT tcp 0 0 120.136.23.56:80 220.181.94.235:54333 TIME_WAIT tcp 0 1 120.136.23.56:80 122.86.41.132:5062 LAST_ACK tcp 0 0 120.136.23.56:80 220.181.94.229:42547 ESTABLISHED tcp 0 0 120.136.23.56:80 123.125.66.135:39557 TIME_WAIT tcp 0 0 120.136.23.56:80 122.86.41.132:5057 TIME_WAIT tcp 0 0 120.136.23.56:80 202.127.20.37:17012 ESTABLISHED tcp 0 0 120.136.23.56:80 202.127.20.37:17013 ESTABLISHED tcp 0 0 120.136.23.93:80 222.190.105.186:4641 FIN_WAIT2 tcp 0 0 120.136.23.56:80 122.86.41.132:5059 TIME_WAIT tcp 0 0 120.136.23.56:80 202.127.20.37:17014 ESTABLISHED tcp 0 0 120.136.23.56:80 60.169.49.238:64078 ESTABLISHED tcp 0 0 120.136.23.56:80 122.86.41.132:5058 TIME_WAIT tcp 0 0 120.136.23.56:80 202.127.20.37:17015 ESTABLISHED tcp 0 0 120.136.23.56:80 60.169.49.238:64079 ESTABLISHED tcp 0 0 120.136.23.56:80 202.127.20.37:17016 ESTABLISHED tcp 0 0 120.136.23.56:80 67.195.113.224:53092 TIME_WAIT tcp 0 1 120.136.23.56:80 122.86.41.132:5065 LAST_ACK tcp 0 0 120.136.23.56:80 122.86.41.132:5064 TIME_WAIT tcp 0 0 120.136.23.56:80 122.86.41.132:5067 TIME_WAIT tcp 0 0 120.136.23.56:80 122.86.41.132:5066 TIME_WAIT tcp 0 0 120.136.23.80:80 125.46.48.20:58200 TIME_WAIT tcp 0 27544 120.136.23.56:80 124.160.125.8:8189 LAST_ACK tcp 0 0 120.136.23.56:80 123.125.66.27:30477 TIME_WAIT tcp 0 0 120.136.23.56:80 221.130.177.102:60019 TIME_WAIT tcp 0 0 120.136.23.56:80 60.169.49.238:64080 FIN_WAIT2 tcp 0 0 120.136.23.56:80 220.181.94.229:37673 TIME_WAIT tcp 0 26136 120.136.23.56:80 60.169.49.238:64081 ESTABLISHED tcp 0 0 120.136.23.56:80 202.127.20.37:17002 ESTABLISHED tcp 0 0 120.136.23.56:80 60.169.49.238:64082 ESTABLISHED tcp 0 0 120.136.23.56:80 60.169.49.238:64083 ESTABLISHED tcp 0 0 120.136.23.56:80 60.169.49.238:64084 FIN_WAIT2 tcp 0 0 120.136.23.56:80 60.169.49.238:64085 FIN_WAIT2 tcp 0 0 120.136.23.56:80 219.131.92.53:4084 TIME_WAIT tcp 0 0 120.136.23.56:80 219.131.92.53:4085 TIME_WAIT tcp 0 0 120.136.23.56:80 219.131.92.53:4086 TIME_WAIT tcp 0 0 120.136.23.56:80 220.181.94.235:42269 TIME_WAIT tcp 0 0 120.136.23.56:80 125.238.149.46:56911 TIME_WAIT tcp 0 0 120.136.23.56:80 125.238.149.46:56910 TIME_WAIT tcp 0 0 120.136.23.56:80 219.131.92.53:4081 TIME_WAIT tcp 0 0 120.136.23.56:80 220.181.94.221:34606 TIME_WAIT tcp 0 0 120.136.23.56:80 219.131.92.53:4082 TIME_WAIT tcp 0 0 120.136.23.80:80 125.46.48.20:25451 TIME_WAIT tcp 0 0 120.136.23.56:80 219.131.92.53:4083 TIME_WAIT tcp 0 0 120.136.23.56:80 
221.130.177.100:55875 TIME_WAIT tcp 0 0 120.136.23.56:80 221.130.177.100:51522 TIME_WAIT tcp 0 0 120.136.23.56:80 111.9.9.224:49650 TIME_WAIT tcp 0 0 120.136.23.56:80 219.131.92.53:4088 TIME_WAIT tcp 0 0 120.136.23.56:80 219.131.92.53:4089 TIME_WAIT tcp 0 0 120.136.23.56:80 124.224.63.144:18753 TIME_WAIT tcp 0 0 120.136.23.56:80 124.224.63.144:18752 TIME_WAIT tcp 0 0 120.136.23.56:80 124.224.63.144:18755 TIME_WAIT tcp 0 0 120.136.23.56:80 66.249.69.2:43954 ESTABLISHED tcp 0 0 120.136.23.56:80 124.224.63.144:18754 TIME_WAIT tcp 0 0 120.136.23.56:80 220.181.94.231:48903 TIME_WAIT tcp 0 0 120.136.23.56:80 121.0.29.194:61655 TIME_WAIT tcp 0 0 120.136.23.56:80 125.238.149.46:56915 TIME_WAIT tcp 0 0 120.136.23.56:80 125.238.149.46:56914 TIME_WAIT tcp 0 0 120.136.23.80:80 125.46.48.20:16247 TIME_WAIT tcp 0 0 120.136.23.56:80 125.238.149.46:56913 TIME_WAIT tcp 0 0 120.136.23.56:80 220.181.94.235:59909 TIME_WAIT tcp 0 0 120.136.23.56:80 220.181.94.235:48389 TIME_WAIT tcp 0 0 120.136.23.56:80 125.238.149.46:56912 TIME_WAIT tcp 0 0 120.136.23.93:80 222.190.105.186:4635 TIME_WAIT tcp 0 0 120.136.23.56:80 221.130.177.106:44326 TIME_WAIT tcp 0 0 120.136.23.56:80 222.170.217.26:1812 TIME_WAIT tcp 0 0 120.136.23.56:80 222.170.217.26:1810 TIME_WAIT tcp 0 0 120.136.23.56:80 221.130.177.104:36898 TIME_WAIT tcp 0 0 120.136.23.56:80 220.181.94.235:39033 TIME_WAIT tcp 0 0 120.136.23.56:80 220.181.94.231:58229 TIME_WAIT tcp 0 0 120.136.23.56:80 222.170.217.26:1822 TIME_WAIT tcp 0 0 120.136.23.56:80 222.170.217.26:1820 TIME_WAIT tcp 0 0 120.136.23.56:80 121.206.183.172:2214 FIN_WAIT2 tcp 0 0 120.136.23.56:80 220.181.94.221:54341 TIME_WAIT tcp 0 0 120.136.23.56:80 222.170.217.26:1818 TIME_WAIT tcp 0 0 120.136.23.56:80 124.224.63.144:18751 TIME_WAIT tcp 0 0 120.136.23.56:80 124.224.63.144:18750 TIME_WAIT tcp 0 0 120.136.23.56:80 61.177.143.210:4226 TIME_WAIT tcp 0 0 120.136.23.56:80 116.9.9.250:55700 TIME_WAIT tcp 0 39599 120.136.23.93:80 125.107.166.221:3083 ESTABLISHED tcp 0 0 120.136.23.56:80 120.86.215.180:62554 TIME_WAIT tcp 0 0 120.136.23.56:80 221.130.177.100:48442 TIME_WAIT tcp 0 0 120.136.23.56:80 123.150.182.221:34199 TIME_WAIT tcp 0 69227 120.136.23.93:80 125.107.166.221:3084 ESTABLISHED tcp 0 0 120.136.23.56:80 220.181.94.231:53605 TIME_WAIT tcp 0 0 120.136.23.56:80 123.150.182.221:34196 TIME_WAIT tcp 0 0 120.136.23.56:80 120.86.215.180:62556 TIME_WAIT tcp 0 0 120.136.23.56:80 123.150.182.221:34203 TIME_WAIT tcp 0 0 120.136.23.56:80 221.130.177.104:40252 TIME_WAIT tcp 0 0 120.136.23.56:80 123.150.182.221:34202 TIME_WAIT tcp 0 0 120.136.23.56:80 124.224.63.144:18731 TIME_WAIT tcp 0 0 120.136.23.56:80 123.150.182.221:34201 TIME_WAIT tcp 0 0 120.136.23.56:80 123.150.182.221:34200 TIME_WAIT tcp 0 0 120.136.23.56:80 111.9.9.224:49538 TIME_WAIT tcp 0 0 120.136.23.56:80 202.160.180.57:49229 TIME_WAIT tcp 0 0 120.136.23.56:80 124.224.63.144:18734 TIME_WAIT tcp 0 0 120.136.23.56:80 123.150.182.221:34204 TIME_WAIT tcp 0 0 120.136.23.56:80 111.72.156.95:2517 TIME_WAIT tcp 0 0 120.136.23.56:80 220.181.94.229:59728 TIME_WAIT tcp 0 0 120.136.23.56:80 116.20.61.208:50598 TIME_WAIT tcp 0 0 120.136.23.56:80 122.86.41.132:5031 TIME_WAIT tcp 0 0 120.136.23.56:80 122.86.41.132:5030 TIME_WAIT tcp 0 0 120.136.23.56:80 220.191.255.196:46290 FIN_WAIT2 tcp 0 0 120.136.23.56:80 122.86.41.132:5037 TIME_WAIT tcp 0 1 120.136.23.56:80 122.86.41.132:5036 LAST_ACK tcp 0 0 120.136.23.80:80 115.56.48.140:38058 TIME_WAIT tcp 0 0 120.136.23.56:80 122.86.41.132:5039 TIME_WAIT tcp 0 0 120.136.23.80:80 115.56.48.140:38057 TIME_WAIT tcp 0 0 
120.136.23.56:80 122.86.41.132:5038 TIME_WAIT tcp 0 0 120.136.23.80:80 125.46.48.20:45862 TIME_WAIT tcp 0 0 120.136.23.56:80 122.86.41.132:5033 TIME_WAIT tcp 0 0 120.136.23.56:80 122.86.41.132:5032 TIME_WAIT tcp 0 0 120.136.23.56:80 122.86.41.132:5034 TIME_WAIT tcp 0 0 120.136.23.56:80 111.9.9.224:49582 TIME_WAIT tcp 0 0 120.136.23.56:80 220.181.94.221:38777 TIME_WAIT tcp 0 0 120.136.23.56:80 123.125.66.15:27007 TIME_WAIT tcp 0 0 120.136.23.56:80 67.195.37.98:59848 TIME_WAIT tcp 0 0 120.136.23.56:80 122.86.41.132:5040 TIME_WAIT tcp 0 0 120.136.23.80:80 125.46.48.20:14651 TIME_WAIT tcp 0 0 120.136.23.56:80 220.181.94.221:58495 TIME_WAIT tcp 0 0 120.136.23.56:80 111.72.156.95:2765 TIME_WAIT tcp 0 0 120.136.23.56:80 122.86.41.132:5053 TIME_WAIT tcp 0 0 120.136.23.56:80 120.86.215.180:62578 ESTABLISHED tcp 0 0 120.136.23.56:80 202.160.179.58:36715 TIME_WAIT tcp 0 0 120.136.23.56:80 122.86.41.132:5048 TIME_WAIT tcp 0 0 120.136.23.93:80 61.153.27.172:4889 TIME_WAIT tcp 0 0 120.136.23.56:80 111.72.156.95:1995 TIME_WAIT tcp 0 0 120.136.23.56:80 111.9.9.224:49501 TIME_WAIT tcp 0 12270 120.136.23.56:80 119.12.4.49:49551 ESTABLISHED tcp 0 6988 120.136.23.56:80 119.12.4.49:49550 ESTABLISHED tcp 0 0 120.136.23.56:80 66.249.67.106:60516 TIME_WAIT tcp 0 0 120.136.23.56:80 202.160.179.76:56301 TIME_WAIT tcp 0 0 120.136.23.56:80 202.160.178.41:32907 TIME_WAIT tcp 0 0 120.136.23.93:80 61.153.27.172:24811 TIME_WAIT tcp 0 0 120.136.23.56:80 202.160.180.155:35617 TIME_WAIT tcp 0 0 120.136.23.56:80 220.181.94.229:50081 TIME_WAIT tcp 0 3650 120.136.23.56:80 119.12.4.49:49555 ESTABLISHED tcp 0 0 120.136.23.56:80 116.9.9.250:55632 TIME_WAIT tcp 0 4590 120.136.23.56:80 119.12.4.49:49554 ESTABLISHED tcp 0 823 120.136.23.56:80 119.12.4.49:49553 ESTABLISHED tcp 0 778 120.136.23.56:80 119.12.4.49:49552 ESTABLISHED tcp 0 31944 120.136.23.93:80 222.67.49.170:52229 ESTABLISHED tcp 0 0 120.136.23.93:80 219.219.127.2:44661 TIME_WAIT tcp 0 0 120.136.23.56:80 221.130.177.102:38602 TIME_WAIT tcp 0 0 120.136.23.56:80 61.177.143.210:4208 TIME_WAIT tcp 0 0 120.136.23.56:80 117.23.111.2:3297 TIME_WAIT tcp 0 0 120.136.23.56:80 111.72.156.95:2079 TIME_WAIT tcp 0 0 120.136.23.92:80 220.181.7.49:44133 TIME_WAIT tcp 0 0 120.136.23.80:80 125.46.48.20:38627 TIME_WAIT tcp 0 660 120.136.23.56:80 113.16.37.24:62908 LAST_ACK tcp 0 0 120.136.23.56:80 220.181.94.231:62850 TIME_WAIT tcp 0 0 120.136.23.56:80 220.181.94.235:33423 TIME_WAIT tcp 0 0 120.136.23.56:80 216.129.119.40:53331 TIME_WAIT tcp 0 0 120.136.23.56:80 116.248.65.32:2580 ESTABLISHED tcp 0 0 120.136.23.56:80 61.177.143.210:4199 TIME_WAIT tcp 0 0 120.136.23.93:80 125.107.166.221:3052 TIME_WAIT tcp 0 0 120.136.23.56:80 216.7.175.100:36933 TIME_WAIT tcp 0 1 120.136.23.56:80 183.35.149.94:2414 FIN_WAIT1 tcp 0 26963 120.136.23.56:80 124.160.125.8:8274 LAST_ACK tcp 0 0 120.136.23.93:80 61.153.27.172:16350 TIME_WAIT tcp 0 0 120.136.23.56:80 220.181.94.229:64907 TIME_WAIT tcp 0 0 120.136.23.56:80 219.131.92.53:4116 TIME_WAIT tcp 0 0 120.136.23.56:80 221.130.177.102:32937 TIME_WAIT tcp 0 0 120.136.23.56:80 218.59.137.178:52731 FIN_WAIT2 tcp 0 0 120.136.23.56:80 123.125.66.53:31474 ESTABLISHED tcp 0 8950 120.136.23.56:80 221.194.136.245:21574 ESTABLISHED tcp 0 0 120.136.23.56:80 216.7.175.100:36922 TIME_WAIT tcp 0 0 120.136.23.56:80 216.7.175.100:36923 TIME_WAIT tcp 0 0 120.136.23.56:80 221.130.177.106:41386 TIME_WAIT tcp 0 0 120.136.23.56:80 220.181.94.221:62681 TIME_WAIT tcp 0 0 120.136.23.56:80 111.72.156.95:1639 ESTABLISHED tcp 0 0 120.136.23.56:80 219.131.92.53:4103 TIME_WAIT tcp 0 0 
120.136.23.56:80 220.181.94.231:44007 TIME_WAIT tcp 0 0 120.136.23.93:80 61.153.27.172:15026 TIME_WAIT tcp 0 0 120.136.23.56:80 202.160.180.125:59521 TIME_WAIT tcp 0 660 120.136.23.56:80 113.16.37.24:62921 FIN_WAIT1 tcp 0 0 120.136.23.56:80 220.181.94.229:54767 TIME_WAIT tcp 0 0 120.136.23.56:80 219.131.92.53:4148 ESTABLISHED tcp 0 0 120.136.23.93:80 202.104.103.210:2423 TIME_WAIT tcp 0 0 120.136.23.56:80 219.131.92.53:4149 ESTABLISHED tcp 0 0 120.136.23.56:80 219.131.
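A minimal shell sketch for summarizing a dump like the one above (standard net-tools/coreutils on CentOS 5; the head -20 cut and the connlimit threshold of 50 are arbitrary examples): grouping the connections by remote IP and by TCP state usually shows quickly whether a handful of clients is responsible or whether this is ordinary TIME_WAIT churn from busy crawlers.

    # Connections to port 80 grouped by remote IP, busiest first
    netstat -n | grep ':80 ' | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -rn | head -20

    # Connections to port 80 grouped by TCP state
    # (lots of TIME_WAIT is normal; many ESTABLISHED or SYN_RECV from one IP is more suspicious)
    netstat -n | grep ':80 ' | awk '{print $6}' | sort | uniq -c | sort -rn

    # Possible mitigation, if the kernel's connlimit match is available
    # (uncomment to cap simultaneous connections per source IP):
    # iptables -I INPUT -p tcp --syn --dport 80 -m connlimit --connlimit-above 50 -j DROP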


  • Position:absolute

    - by Andrew
    I have I have a div called logo. I want the logo to be on top of other areas and to overlap into the the preface top of a drupal site, the logo currently sits in the header area. I looked up position absolute and I think that what I need to use but when I use position absolute the logo disappears, I can see it if I use position fixed, relative etc. I thought the logo was being hidden because I was not using a z-index but even with that I cant see the logo. What am I doing wrong? #logo { position: absolute; top: 30px; /* 30 pixels from the top of the page */ left: 80px; /* 80 pixels from the left hand side */ z-index:1099; border: 1px solid red; /* So we can see what is happening */ } Also does anyone know of a really good free online css course? Here is some additional information, namely the CSS and the page.tpl.php: <?php // $Id: page.tpl.php,v 1.1.2.5 2010/04/08 07:02:59 sociotech Exp $ ?><!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" lang="<?php print $language->language; ?>" xml:lang="<?php print $language->language; ?>"> <head> <title><?php print $head_title; ?></title> <?php print $head; ?> <?php print $styles; ?> <?php print $setting_styles; ?> <!--[if IE 8]> <?php print $ie8_styles; ?> <![endif]--> <!--[if IE 7]> <?php print $ie7_styles; ?> <![endif]--> <!--[if lte IE 6]> <?php print $ie6_styles; ?> <![endif]--> <?php print $local_styles; ?> <?php print $scripts; ?> </head> <body id="<?php print $body_id; ?>" class="<?php print $body_classes; ?>"> <div id="page" class="page"> <div id="page-inner" class="page-inner"> <div id="skip"> <a href="#main-content-area"><?php print t('Skip to Main Content Area'); ?></a> </div> <!-- header-top row: width = grid_width --> <?php print theme('grid_row', $header_top, 'header-top', 'full-width', $grid_width); ?> <!-- header-group row: width = grid_width --> <div id="header-group-wrapper" class="header-group-wrapper full-width"> <div id="header-group" class="header-group row <?php print $grid_width; ?>"> <div id="header-group-inner" class="header-group-inner inner clearfix"> <?php print theme('grid_block', theme('links', $secondary_links), 'secondary-menu'); ?> <?php print theme('grid_block', $search_box, 'search-box'); ?> <?php if ($logo || $site_name || $site_slogan): ?> <div id="header-site-info" class="header-site-info block"> <div id="header-site-info-inner" class="header-site-info-inner inner"> <?php if ($logo): ?> <div id="logo"> <a href="<?php print check_url($front_page); ?>" title="<?php print t('Home'); ?>"><img src="<?php print $logo; ?>" alt="<?php print t('Home'); ?>" /></a> </div> <?php endif; ?> <?php if ($site_name || $site_slogan): ?> <div id="site-name-wrapper" class="clearfix"> <?php if ($site_name): ?> <span id="site-name"><a href="<?php print check_url($front_page); ?>" title="<?php print t('Home'); ?>"><?php print $site_name; ?></a></span> <?php endif; ?> <?php if ($site_slogan): ?> <span id="slogan"><?php print $site_slogan; ?></span> <?php endif; ?> </div><!-- /site-name-wrapper --> <?php endif; ?> </div><!-- /header-site-info-inner --> </div><!-- /header-site-info --> <?php endif; ?> <?php print $header; ?> <?php print theme('grid_block', $primary_links_tree, 'primary-menu'); ?> </div><!-- /header-group-inner --> </div><!-- /header-group --> </div><!-- /header-group-wrapper --> <!-- preface-top row: width = grid_width --> <?php print theme('grid_row', $preface_top, 'preface-top', 'full-width', $grid_width); ?> 
<!-- main row: width = grid_width --> <div id="main-wrapper" class="main-wrapper full-width<?php if ($is_front) { print ' front'; } ?>"> <div id="main" class="main row <?php print $grid_width; ?>"> <div id="main-inner" class="main-inner inner clearfix"> <?php print theme('grid_row', $sidebar_first, 'sidebar-first', 'nested', $sidebar_first_width); ?> <!-- main group: width = grid_width - sidebar_first_width --> <div id="main-group" class="main-group row nested <?php print $main_group_width; ?>"> <div id="main-group-inner" class="main-group-inner inner"> <?php print theme('grid_row', $preface_bottom, 'preface-bottom', 'nested'); ?> <div id="main-content" class="main-content row nested"> <div id="main-content-inner" class="main-content-inner inner"> <!-- content group: width = grid_width - (sidebar_first_width + sidebar_last_width) --> <div id="content-group" class="content-group row nested <?php print $content_group_width; ?>"> <div id="content-group-inner" class="content-group-inner inner"> <?php print theme('grid_block', $breadcrumb, 'breadcrumbs'); ?> <?php if ($content_top || $help || $messages): ?> <div id="content-top" class="content-top row nested"> <div id="content-top-inner" class="content-top-inner inner"> <?php print theme('grid_block', $help, 'content-help'); ?> <?php print theme('grid_block', $messages, 'content-messages'); ?> <?php print $content_top; ?> </div><!-- /content-top-inner --> </div><!-- /content-top --> <?php endif; ?> <div id="content-region" class="content-region row nested"> <div id="content-region-inner" class="content-region-inner inner"> <a name="main-content-area" id="main-content-area"></a> <?php print theme('grid_block', $tabs, 'content-tabs'); ?> <div id="content-inner" class="content-inner block"> <div id="content-inner-inner" class="content-inner-inner inner"> <?php if ($title): ?> <h1 class="title"><?php print $title; ?></h1> <?php endif; ?> <?php if ($content): ?> <div id="content-content" class="content-content"> <?php print $content; ?> <?php print $feed_icons; ?> </div><!-- /content-content --> <?php endif; ?> </div><!-- /content-inner-inner --> </div><!-- /content-inner --> </div><!-- /content-region-inner --> </div><!-- /content-region --> <?php print theme('grid_row', $content_bottom, 'content-bottom', 'nested'); ?> </div><!-- /content-group-inner --> </div><!-- /content-group --> <?php print theme('grid_row', $sidebar_last, 'sidebar-last', 'nested', $sidebar_last_width); ?> </div><!-- /main-content-inner --> </div><!-- /main-content --> <?php print theme('grid_row', $postscript_top, 'postscript-top', 'nested'); ?> </div><!-- /main-group-inner --> </div><!-- /main-group --> </div><!-- /main-inner --> </div><!-- /main --> </div><!-- /main-wrapper --> <!-- postscript-bottom row: width = grid_width --> <?php print theme('grid_row', $postscript_bottom, 'postscript-bottom', 'full-width', $grid_width); ?> <!-- footer row: width = grid_width --> <?php print theme('grid_row', $footer, 'footer', 'full-width', $grid_width); ?> <!-- footer-message row: width = grid_width --> <div id="footer-message-wrapper" class="footer-message-wrapper full-width"> <div id="footer-message" class="footer-message row <?php print $grid_width; ?>"> <div id="footer-message-inner" class="footer-message-inner inner clearfix"> <?php print theme('grid_block', $footer_message, 'footer-message-text'); ?> </div><!-- /footer-message-inner --> </div><!-- /footer-message --> </div><!-- /footer-message-wrapper --> </div><!-- /page-inner --> </div><!-- /page --> <?php print $closure; ?> 
</body> </html> CSS /* $Id: style.css,v 1.1.2.11 2010/07/02 22:11:04 sociotech Exp $ */ /* Margin, Padding, Border Resets -------------------------------------------------------------- */ html, body, div, span, p, dl, dt, dd, ul, ol, li, h1, h2, h3, h4, h5, h6, form, fieldset, input, textarea { margin: 0; padding: 0; } img, abbr, acronym { border: 0; } /* HTML Elements -------------------------------------------------------------- */ p { margin: 1em 0; } h1, h2, h3, h4, h5, h6 { margin: 0 0 0.5em 0; } h1 { color: white !important; text-shadow: black !important; } ul, ol, dd { margin-bottom: 1.5em; margin-left: 2em; /* LTR */ } li ul, li ol { margin-bottom: 0; } ul { list-style-type: disc; } ol { list-style-type: decimal; } a { margin: 0; padding: 0; text-decoration: none; } a:link, a:visited { } a:hover, a:focus, a:active { text-decoration: underline; } blockquote { } hr { height: 1px; border: 1px solid gray; } /* tables */ table { border-spacing: 0; width: 100%; } tr.even td, tr.odd td { background-color: #FFFFFF; border: 1px solid #dbdbdb; } caption { text-align: left; } th { margin: 0; padding: 0 10px 0 0; } th.active img { display: inline; } thead th { padding-right: 10px; } td { margin: 0; padding: 3px; } /* Remove grid block styles from Drupal's table ".block" class */ td.block { border: none; float: none; margin: 0; } /* Maintain light background/dark text on dragged table rows */ tr.drag td, tr.drag-previous td { background: #FFFFDD; color: #000; } /* Accessibility /-------------------------------------------------------------- */ /* skip-link to main content, hide offscreen */ #skip a, #skip a:hover, #skip a:visited { height: 1px; left: 0px; overflow: hidden; position: absolute; top: -500px; width: 1px; } /* make skip link visible when selected */ #skip a:active, #skip a:focus { background-color: #fff; color: #000; height: auto; padding: 5px 10px; position: absolute; top: 0; width: auto; z-index: 99; } #skip a:hover { text-decoration: none; } /* Helper Classes /-------------------------------------------------------------- */ .hide { display: none; visibility: hidden; } .left { float: left; } .right { float: right; } .clear { clear: both; } /* clear floats after an element */ /* (also in ie6-fixes.css, ie7-fixes.css) */ .clearfix:after, .clearfix .inner:after { clear: both; content: "."; display: block; font-size: 0; height: 0; line-height: 0; overflow: auto; visibility: hidden; } /* Grid Layout Basics (specifics in 'gridnn_x.css') -------------------------------------------------------------- */ /* center page and full rows: override this for left-aligned page */ .page, .row { margin: 0 auto; } /* fix layout/background display on floated elements */ .row, .nested, .block { overflow: hidden; } /* full-width row wrapper */ div.full-width { width: 100%; } /* float, un-center & expand nested rows */ .nested { float: left; /* LTR */ margin: 0; width: 100%; } /* allow Superfish menus to overflow */ #sidebar-first.nested, #sidebar-last.nested, div.superfish { overflow: visible; } /* sidebar layouts */ .sidebars-both-first .content-group { float: right; /* LTR */ } .sidebars-both-last .sidebar-first { float: right; /* LTR */ } /* Grid Mask Overlay -------------------------------------------------------------- */ #grid-mask-overlay { display: none; left: 0; opacity: 0.75; position: absolute; top: 0; width: 100%; z-index: 997; } #grid-mask-overlay .row { margin: 0 auto; } #grid-mask-overlay .block .inner { background-color: #e3fffc; outline: none; } .grid-mask #grid-mask-overlay { display: 
block; } .grid-mask .block { overflow: visible; } .grid-mask .block .inner { outline: #f00 dashed 1px; } #grid-mask-toggle { background-color: #777; border: 2px outset #fff; color: #fff; cursor: pointer; font-variant: small-caps; font-weight: normal; left: 0; -moz-border-radius: 5px; padding: 0 5px 2px 5px; position: absolute; text-align: center; top: 22px; -webkit-border-radius: 5px; z-index: 998; } #grid-mask-toggle.grid-on { border-style: inset; font-weight: bold; } /* Site Info -------------------------------------------------------------- */ #header-site-info { width: auto; } #site-name-wrapper { float: left; /* LTR */ } #site-name, #slogan { display: block; } #site-name a:link, #site-name a:visited, #site-name a:hover, #site-name a:active { text-decoration: none; } #site-name a { outline: 0; } /* Regions -------------------------------------------------------------- */ /* Header Regions -------------------------------------------------------------- */ #header-group { overflow: visible; } /* Content Regions (Main) -------------------------------------------------------------- */ .node-bottom { margin: 1.5em 0 0 0; } /* Clear floats on regions -------------------------------------------------------------- */ #header-top-wrapper, #header-group-wrapper, #preface-top-wrapper, #main-wrapper, #preface-bottom, #content-top, #content-region, #content-bottom, #postscript-top, #postscript-bottom-wrapper, #footer-wrapper, #footer-message-wrapper { clear: both; } /* Drupal Core /-------------------------------------------------------------- */ /* Lists /-------------------------------------------------------------- */ .item-list ul li { margin: 0; } .block ul, .block ol { margin-left: 2em; /* LTR */ padding: 0; } .content-inner ul, .content-inner ol { margin-bottom: 1.5em; } .content-inner li ul, .content-inner li ol { margin-bottom: 0; } .block ul.links { margin-left: 0; /* LTR */ } /* Menus /-------------------------------------------------------------- */ ul.menu li, ul.links li { margin: 0; padding: 0; } /* Primary Menu /-------------------------------------------------------------- */ /* use ID to override overflow: hidden for .block, dropdowns should always be visible */ #primary-menu { overflow: visible; } /* remove left margin from primary menu list */ #primary-menu.block ul { margin-left: 0; /* LTR */ } /* remove bullets, float left */ .primary-menu ul li { float: left; /* LTR */ list-style: none; position: relative; } /* style links, and unlinked parent items (via Special Menu Items module) */ .primary-menu ul li a, .primary-menu ul li .nolink { display: block; padding: 0.75em 1em; text-decoration: none; } /* Add cursor style for unlinked parent menu items */ .primary-menu ul li .nolink { cursor: default; } /* remove outline */ .primary-menu ul li:hover, .primary-menu ul li.sfHover, .primary-menu ul a:focus, .primary-menu ul a:hover, .primary-menu ul a:active { outline: 0; } /* Secondary Menu /-------------------------------------------------------------- */ .secondary-menu-inner ul.links { margin-left: 0; /* LTR */ } /* Skinr styles /-------------------------------------------------------------- */ /* Skinr selectable helper classes */ .fusion-clear { clear: both; } div.fusion-right { float: right; /* LTR */ } div.fusion-center { float: none; margin-left: auto; margin-right: auto; } .fusion-center-content .inner { text-align: center; } .fusion-center-content .inner ul.menu { display: inline-block; text-align: center; } /* required to override drupal core */ .fusion-center-content 
#user-login-form { text-align: center; } .fusion-right-content .inner { text-align: right; /* LTR */ } /* required to override drupal core */ .fusion-right-content #user-login-form { text-align: right; /* LTR */ } /* Large, bold callout text style */ .fusion-callout .inner { font-weight: bold; } /* Extra padding on block */ .fusion-padding .inner { padding: 30px; } /* Adds 1px border and padding */ .fusion-border .inner { border-width: 1px; border-style: solid; padding: 10px; } /* Single line menu with separators */ .fusion-inline-menu .inner ul.menu { margin-left: 0; /* LTR */ } .fusion-inline-menu .inner ul.menu li { border-right-style: solid; border-right-width: 1px; display: inline; margin: 0; padding: 0; white-space: nowrap; } .fusion-inline-menu .inner ul.menu li a { padding: 0 8px 0 5px; /* LTR */ } .fusion-inline-menu .inner ul li.last { border: none; } /* Hide second level (and beyond) menu items */ .fusion-inline-menu .inner ul li.expanded ul { display: none; } /* Multi-column menu style with bolded top level menu items */ .fusion-multicol-menu .inner ul { margin-left: 0; /* LTR */ text-align: left; /* LTR */ } .fusion-multicol-menu .inner ul li { border-right: none; display: block; font-weight: bold; } .fusion-multicol-menu .inner ul li.last { border-right: none; } .fusion-multicol-menu .inner ul li.last a { padding-right: 0; /* LTR */ } .fusion-multicol-menu .inner ul li.expanded, .fusion-multicol-menu .inner ul li.leaf { float: left; /* LTR */ list-style-image: none; margin-left: 50px; /* LTR */ } .fusion-multicol-menu .inner ul.menu li.first { margin-left: 0; /* LTR */ } .fusion-multicol-menu .inner ul li.expanded li.leaf { float: none; margin-left: 0; /* LTR */ } .fusion-multicol-menu .inner ul li.expanded ul { display: block; margin-left: 0; /* LTR */ } .fusion-multicol-menu .inner ul li.expanded ul li { border: none; margin-left: 0; /* LTR */ text-align: left; /* LTR */ } .fusion-multicol-menu .inner ul.menu li ul.menu li { font-weight: normal; } /* Split list across multiple columns */ .fusion-2-col-list .inner .item-list ul li, .fusion-2-col-list .inner ul.menu li { float: left; /* LTR */ width: 50%; } .fusion-3-col-list .inner .item-list ul li, .fusion-3-col-list .inner ul.menu li { float: left; /* LTR */ width: 33%; } .fusion-2-col-list .inner .item-list ul.pager li, .fusion-3-col-list .inner .item-list ul.pager li { float: none; width: auto; } /* List with bottom border Fixes a common issue when list items have bottom borders and appear to be doubled when nested lists end and begin. 
This removes the extra border-bottom */ .fusion-list-bottom-border .inner ul li { list-style: none; list-style-type: none; list-style-image: none; } .fusion-list-bottom-border .inner ul li, .fusion-list-bottom-border .view-content div.views-row { padding: 0 0 0 10px; /* LTR */ border-bottom-style: solid; border-bottom-width: 1px; line-height: 216.7%; /* 26px */ } .fusion-list-bottom-border .inner ul { margin: 0; } .fusion-list-bottom-border .inner ul li ul { border-bottom-style: solid; border-bottom-width: 1px; } .fusion-list-bottom-border .inner ul li ul li.last { border-bottom-style: solid; border-bottom-width: 1px; margin-bottom: -1px; margin-top: -1px; } #views_slideshow_singleframe_pager_slideshow-page_2 .pager-item { display:block; } #views_slideshow_singleframe_pager_slideshow-page_2 { position:absolute; right:0; top:0; } #header-group-wrapper { background: none; } #page { background-color:#F3F3F3; background-image:url('/sites/all/themes/fusion/fusion_core/images/runswithgradient.jpg'); background-repeat:no-repeat; background-attachment: fixed; width: auto; } #views_slideshow_singleframe_pager_slideshow-page_2 div a img { top:0px; height:60px; width:80px; padding-right:10px; padding-bottom:19px; } #mycontent{ width: 720px; } .product-body { -moz-border-radius: 4px 4px 4px 4px; margin: 0 0 20px; overflow: hidden; padding: 20px; background: none repeat scroll 0 0 #F7F7F7; border: 1px solid #000000; border-style:solid; border-width:thin; color:#000000; } #product-details { background: none repeat scroll 0 0 #F7F7F7 !important; border: 1px solid #000000 !important; color: #8E8E8E; } #logo { position: relative; top: 30px; /* 30 pixels from the top of the page */ left: 80px; /* 80 pixels from the left hand side */ z-index:1099; border: 1px solid red; /* So we can see what is happening */ } #breadcrumbs-inner { background: none; border-color: transparent; border-style: none; } #block-views-new_products-block_1{ height:200px; } /* List with no bullet and extra padding This is a common style for menus, which removes the bullet and adds more vertical padding for a simple list style */ .fusion-list-vertical-spacing .inner ul, .fusion-list-vertical-spacing div.views-row-first { margin-left: 0; margin-top: 10px; } .fusion-list-vertical-spacing .inner ul li, .fusion-list-vertical-spacing div.views-row { line-height: 133.3%; /* 16px/12px */ margin-bottom: 10px; padding: 0; } .fusion-list-vertical-spacing .inner ul li { list-style: none; list-style-image: none; list-style-type: none; } .fusion-list-vertical-spacing .inner ul li ul { margin-left: 10px; /* LTR */ } /* Bold all links */ .fusion-bold-links .inner a { font-weight: bold; } /* Float imagefield images left and add margin */ .fusion-float-imagefield-left .field-type-filefield, .fusion-float-imagefield-left .image-insert, .fusion-float-imagefield-left .imagecache { float: left; /* LTR */ margin: 0 15px 15px 0; /* LTR */ } /* Clear float on new Views item so each row drops to a new line */ .fusion-float-imagefield-left .views-row { clear: left; /* LTR */ } /* Float imagefield images right and add margin */ .fusion-float-imagefield-right .field-type-filefield, .fusion-float-imagefield-right .image-insert .fusion-float-imagefield-right .imagecache { float: right; /* LTR */ margin: 0 0 15px 15px; /* LTR */ } /* Clear float on new Views item so each row drops to a new line */ .fusion-float-imagefield-right .views-row { clear: right; /* LTR */ } /* Superfish: all menus */ .sf-menu li { list-style: none; list-style-image: none; list-style-type: 
none; } /* Superfish: vertical menus */ .superfish-vertical { position: relative; z-index: 9; } ul.sf-vertical { background: #fafafa; margin: 0; width: 100%; } ul.sf-vertical li { border-bottom: 1px solid #ccc; font-weight: bold; line-height: 200%; /* 24px */ padding: 0; width: 100%; } ul.sf-vertical li a:link, ul.sf-vertical li a:visited, ul.sf-vertical li .nolink { margin-left: 10px; padding: 2px; } ul.sf-vertical li a:hover, ul.sf-vertical li a.active { text-decoration: underline; } ul.sf-vertical li ul { background: #fafafa; border-top: 1px solid #ccc; margin-left: 0; width: 150px; } ul.sf-vertical li ul li.last { border-top: 1px solid #ccc; margin-bottom: -1px; margin-top: -1px; } ul.sf-vertical li ul { border-top: none; padding: 4px 0; } ul.sf-vertical li ul li { border-bottom: none; line-height: 150%; /* 24px */ More below but I can't paste that much Thanks for the suggestion I've tried this #header-group { position: relative; z-index: 9; } #logo { position: abosolute; top: 230px; /* 30 pixels from the top of the page */ left: 10px; /* 80 pixels from the left hand side */ z-index: 999; } but it's not working. I've taken a screen shot of the div to show the structure. http://i.stack.imgur.com/ff4DP.png
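A minimal CSS sketch of the usual fix, assuming the goal is to position #logo relative to the header rather than the whole page: an absolutely positioned element is placed relative to its nearest positioned ancestor, so give the container position: relative and keep the child position: absolute. (Note that the last snippet quoted above spells it "abosolute"; browsers silently drop the misspelled declaration, which would explain why that attempt appears to do nothing.)

    /* establish the header as the positioning context */
    #header-group {
      position: relative;
    }

    /* offsets are now measured from #header-group, not from the viewport */
    #logo {
      position: absolute;    /* note the spelling */
      top: 30px;
      left: 80px;
      z-index: 1099;
      border: 1px solid red; /* debugging aid, remove later */
    }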

