Search Results

Search found 8593 results on 344 pages for 'regular expression'.

Page 341/344

  • PERL XPath Parser Help

    - by cognvision
    I want to pull in data using a XML::XPath parser from a XML DB file from the Worldbank site. The problem is that I'm not seeing any results in the output. I must be missing something in the code. Ideally, I would like to extract just the death rate statistics from each country XML DB (year and value). I'm using this as part of my input: http://data.worldbank.org/sites/default/files/countries/en/afghanistan_en.xml use strict; use LWP 5.64; use HTML::ContentExtractor; use XML::XPath; my $agent1 = LWP::UserAgent->new; my $extractor = HTML::ContentExtractor->new(); #Retrieve main Worldbank country site my $mainlink = "http://data.worldbank.org/country/"; my $page = $agent1->get("$mainlink"); my $fulltext = $page->decoded_content(); #Match to just all available countries in Worldbank my $country = ""; my @countryList; if (@countryList = $fulltext =~ m/(http:\/\/data\.worldbank\.org\/country\/.*?")/gi){ foreach $country(@countryList){ #Remove " at the end of link $country=~s/\"//gi; print "\n" . $country; #Retrieve each country profile's XML DB file my $page = $agent1->get("$country"); my $fulltext = $page->decoded_content(); my $XML_DB = ""; my @countryXMLDBList; if (@countryXMLDBList = $fulltext =~ m/(http:\/\/data\.worldbank\.org\/sites\/default\/files\/countries\/en\/.*?\.xml)/gi){ foreach $XML_DB(@countryXMLDBList){ my $page = $agent1->get("$XML_DB"); my $fulltext = $page->decoded_content(); #print $fulltext; #Use XML XPath parser to find elements related to death rate my $xp = XML::XPath->new($fulltext); #my $xp = XML::XPath->new("afghanistan_en.xml"); my $nodeSet = $xp->find("//*"); if (!$nodeSet->isa('XML::XPath::NodeSet') || $nodeSet->size() == 0) { #No match found print "\nMatch not found!"; exit; } else { foreach my $node ($nodeSet->get_nodelist){ print "\n" . $node->find('country')->string_value; print "\n" . $node->find('indicator')->string_value; print "\n" . $node->find('year')->string_value; print "\n" . $node->find('value')->string_value; exit; } } } #Build line graph based on death rate statistics and output some image file format } } } I am also looking into using the xpath expression "following-sibling", but not sure how to use it correctly. For example, I have the following set of XML data where I am only interested in pulling siblings directly after the indicator for just death rate data. <data> <country id="AFG">Afghanistan</country> <indicator id="SP.DYN.CDRT.IN">Death rate, crude (per 1,000 people)</indicator> <year>2006</year> <value>20.3410000</value> </data> - <data> <country id="AFG">Afghanistan</country> <indicator id="SP.DYN.CDRT.IN">Death rate, crude (per 1,000 people)</indicator> <year>2007</year> <value>19.9480000</value> </data> - <data> <country id="AFG">Afghanistan</country> <indicator id="SP.DYN.CDRT.IN">Death rate, crude (per 1,000 people)</indicator> <year>2008</year> <value>19.5720000</value> </data> - <data> <country id="AFG">Afghanistan</country> <indicator id="IC.EXP.DOCS">Documents to export (number)</indicator> <year>2005</year> <value>7.0000000</value> </data> - <data> <country id="AFG">Afghanistan</country> <indicator id="IC.EXP.DOCS">Documents to export (number)</indicator> <year>2006</year> <value>12.0000000</value> </data> - <data> <country id="AFG">Afghanistan</country> <indicator id="IC.EXP.DOCS">Documents to export (number)</indicator> <year>2007</year> <value>12.0000000</value> </data> Any help would be much appreciated!!!
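
    One way to avoid walking every element with //* is to select only the <data> records whose <indicator> id is the crude death rate; the sibling axis is then not even needed, although indicator[@id="SP.DYN.CDRT.IN"]/following-sibling::year would also reach the year nodes. A minimal sketch, assuming the country file has been saved locally as afghanistan_en.xml (as in the commented-out line above):

        use strict;
        use warnings;
        use XML::XPath;

        # Hypothetical local copy of the World Bank country file from the question.
        my $xp = XML::XPath->new(filename => 'afghanistan_en.xml');

        # Only the <data> blocks whose indicator is the crude death rate.
        my $nodes = $xp->find('//data[indicator/@id="SP.DYN.CDRT.IN"]');

        foreach my $node ($nodes->get_nodelist) {
            # Evaluate relative paths against each matched <data> node.
            my $year  = $xp->findvalue('year',  $node);
            my $value = $xp->findvalue('value', $node);
            print "$year: $value\n";
        }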

    Read the article

  • Unable to Calculate Position within Owner-Draw Text

    - by Jonathan Wood
    I'm trying to use Visual Studio 2012 to create a Windows Forms application that can place the caret at the current position within a owner-drawn string. However, I've been unable to find a way to accurately calculate that position. I've done this successfully before in C++. I've now tried numerous methods in C#. Originally, I tried using .NET classes to determine the correct position, but then I tried accessing the Windows API directly. In some cases, I came close, but after some time I still cannot place the caret accurately. I've created a small test program and posted key parts below. I've also posted the entire project here. The exact font used is not important to me; however, my application assumes a mono-spaced font. Any help is appreciated. Form1.cs This is my main form. public partial class Form1 : Form { private string TestString; private int AveCharWidth; private int Position; public Form1() { InitializeComponent(); TestString = "123456789012345678901234567890123456789012345678901234567890"; AveCharWidth = GetFontWidth(); Position = 0; } private void Form1_Load(object sender, EventArgs e) { Font = new Font(FontFamily.GenericMonospace, 12, FontStyle.Regular, GraphicsUnit.Pixel); } protected override void OnGotFocus(EventArgs e) { Windows.CreateCaret(Handle, (IntPtr)0, 2, (int)Font.Height); Windows.ShowCaret(Handle); UpdateCaretPosition(); base.OnGotFocus(e); } protected void UpdateCaretPosition() { Windows.SetCaretPos(Padding.Left + (Position * AveCharWidth), Padding.Top); } protected override void OnLostFocus(EventArgs e) { Windows.HideCaret(Handle); Windows.DestroyCaret(); base.OnLostFocus(e); } protected override void OnPaint(PaintEventArgs e) { e.Graphics.DrawString(TestString, Font, SystemBrushes.WindowText, new PointF(Padding.Left, Padding.Top)); } protected override bool IsInputKey(Keys keyData) { switch (keyData) { case Keys.Right: case Keys.Left: return true; } return base.IsInputKey(keyData); } protected override void OnKeyDown(KeyEventArgs e) { switch (e.KeyCode) { case Keys.Left: Position = Math.Max(Position - 1, 0); UpdateCaretPosition(); break; case Keys.Right: Position = Math.Min(Position + 1, TestString.Length); UpdateCaretPosition(); break; } base.OnKeyDown(e); } protected int GetFontWidth() { int AverageCharWidth = 0; using (var graphics = this.CreateGraphics()) { try { Windows.TEXTMETRIC tm; var hdc = graphics.GetHdc(); IntPtr hFont = this.Font.ToHfont(); IntPtr hOldFont = Windows.SelectObject(hdc, hFont); var a = Windows.GetTextMetrics(hdc, out tm); var b = Windows.SelectObject(hdc, hOldFont); var c = Windows.DeleteObject(hFont); AverageCharWidth = tm.tmAveCharWidth; } catch { } finally { graphics.ReleaseHdc(); } } return AverageCharWidth; } } Windows.cs Here are my Windows API declarations. 
public static class Windows { [Serializable, StructLayout(LayoutKind.Sequential, CharSet = CharSet.Auto)] public struct TEXTMETRIC { public int tmHeight; public int tmAscent; public int tmDescent; public int tmInternalLeading; public int tmExternalLeading; public int tmAveCharWidth; public int tmMaxCharWidth; public int tmWeight; public int tmOverhang; public int tmDigitizedAspectX; public int tmDigitizedAspectY; public short tmFirstChar; public short tmLastChar; public short tmDefaultChar; public short tmBreakChar; public byte tmItalic; public byte tmUnderlined; public byte tmStruckOut; public byte tmPitchAndFamily; public byte tmCharSet; } [DllImport("user32.dll")] public static extern bool CreateCaret(IntPtr hWnd, IntPtr hBitmap, int nWidth, int nHeight); [DllImport("User32.dll")] public static extern bool SetCaretPos(int x, int y); [DllImport("User32.dll")] public static extern bool DestroyCaret(); [DllImport("User32.dll")] public static extern bool ShowCaret(IntPtr hWnd); [DllImport("User32.dll")] public static extern bool HideCaret(IntPtr hWnd); [DllImport("gdi32.dll", CharSet = CharSet.Auto)] public static extern bool GetTextMetrics(IntPtr hdc, out TEXTMETRIC lptm); [DllImport("gdi32.dll")] public static extern IntPtr SelectObject(IntPtr hdc, IntPtr hgdiobj); [DllImport("GDI32.dll")] public static extern bool DeleteObject(IntPtr hObject); }
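
    A likely culprit: the text is drawn with GDI+ (Graphics.DrawString) but measured with GDI (GetTextMetrics), and the two engines pad and round differently even for a monospaced font. A sketch of keeping both sides on the same engine by using TextRenderer for drawing and measuring; TestString, Position, Padding and Windows.SetCaretPos are the members from the question:

        private const TextFormatFlags MeasureFlags =
            TextFormatFlags.NoPadding | TextFormatFlags.NoPrefix;

        protected override void OnPaint(PaintEventArgs e)
        {
            // Draw with the same GDI-based renderer that is used for measuring below.
            TextRenderer.DrawText(e.Graphics, TestString, Font,
                new Point(Padding.Left, Padding.Top), ForeColor, MeasureFlags);
        }

        protected void UpdateCaretPosition()
        {
            // Width of the text to the left of the caret, measured with the same
            // flags used for drawing, so the caret lands exactly between glyphs.
            int x = Position == 0 ? 0 :
                TextRenderer.MeasureText(TestString.Substring(0, Position),
                                         Font, Size.Empty, MeasureFlags).Width;
            Windows.SetCaretPos(Padding.Left + x, Padding.Top);
        }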

    Read the article

  • I can't use Spring filters in servlet-context XML

    - by gotch4
    For some reason both Eclipse and Spring can't find the filter tag (there is even a red mark)... What's wrong? <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:mvc="http://www.springframework.org/schema/mvc" xmlns:context="http://www.springframework.org/schema/context" xmlns:aop="http://www.springframework.org/schema/aop" xmlns:tx="http://www.springframework.org/schema/tx" xmlns:p="http://www.springframework.org/schema/p" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-3.0.xsd http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-3.0.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc-3.0.xsd"> <bean class="org.springframework.dao.annotation.PersistenceExceptionTranslationPostProcessor"></bean> <bean id="myDataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close"> <property name="driverClassName" value="com.mysql.jdbc.Driver" /> <property name="url" value="jdbc:mysql://localhost/jacciseweb" /> <property name="username" value="root" /> <property name="password" value="siussi" /> </bean> <bean id="mySessionFactory" class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean"> <property name="dataSource" ref="myDataSource" /> <property name="annotatedClasses"> <list> <value>it.jsoftware.jacciseweb.beans.Utente </value> <value>it.jsoftware.jacciseweb.beans.Ordine </value> </list> </property> <property name="hibernateProperties"> <props> <prop key="hibernate.dialect"> org.hibernate.dialect.MySQLDialect </prop> <prop key="hibernate.show_sql"> true </prop> <prop key="hibernate.hbm2ddl.auto"> update </prop> <prop key="hibernate.cache.provider_class">org.hibernate.cache.NoCacheProvider</prop> </props> </property> </bean> <filter> <filter-name>hibernateFilter</filter-name> <filter-class> org.springframework.orm.hibernate3.support.OpenSessionInViewFilter </filter-class> <init-param> <param-name>singleSession</param-name> <param-value>true</param-value> </init-param> <init-param> <param-name>sessionFactoryBeanName</param-name> <param-value>mySessionFactory</param-value> </init-param> </filter> <!-- <aop:config> --> <!-- <aop:pointcut id="productServiceMethods" --> <!-- expression="execution(* product.ProductService.*(..))" /> --> <!-- <aop:advisor advice-ref="txAdvice" pointcut-ref="productServiceMethods" /> --> <!-- </aop:config> --> <bean id="acciseHibernateDao" class="it.jsoftware.jacciseweb.model.JAcciseWebManagementDaoHibernate"> <property name="sessionFactory" ref="mySessionFactory" /> </bean> <bean id="transactionManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager"> <property name="sessionFactory" ref="mySessionFactory" /> </bean> <tx:annotation-driven /> <bean id="acciseService" class="it.jsoftware.jacciseweb.model.JAcciseWebManagementServiceImpl"> <property name="dao" ref="acciseHibernateDao" /> </bean> <context:component-scan base-package="it.jsoftware.jacciseweb.controllers"></context:component-scan> <mvc:annotation-driven /> <bean class="org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter" 
p:synchronizeOnSession="true" /> <bean class="org.springframework.web.servlet.handler.BeanNameUrlHandlerMapping" /> <mvc:resources mapping="/resources/**" location="/resources/" /> <!-- non serve, è annotato --> <!-- <bean name="/accise" class="it.jsoftware.jacciseweb.controllers.MainController"> </bean> --> </beans> in particular it says "filter" is invalid content
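
    The schema error is expected: <filter> is a Servlet deployment-descriptor element, not a Spring bean definition, so the beans XSD rejects it. A sketch of the same filter moved into WEB-INF/web.xml instead (the /* mapping is an assumption):

        <filter>
            <filter-name>hibernateFilter</filter-name>
            <filter-class>
                org.springframework.orm.hibernate3.support.OpenSessionInViewFilter
            </filter-class>
            <init-param>
                <param-name>singleSession</param-name>
                <param-value>true</param-value>
            </init-param>
            <init-param>
                <param-name>sessionFactoryBeanName</param-name>
                <param-value>mySessionFactory</param-value>
            </init-param>
        </filter>
        <filter-mapping>
            <filter-name>hibernateFilter</filter-name>
            <url-pattern>/*</url-pattern>
        </filter-mapping>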

    Read the article

  • How to make negate_unary work with any type?

    - by Chan
    Hi, Following this question: How to negate a predicate function using operator ! in C++? I want to create an operator ! can work with any functor that inherited from unary_function. I tried: template<typename T> inline std::unary_negate<T> operator !( const T& pred ) { return std::not1( pred ); } The compiler complained: Error 5 error C2955: 'std::unary_function' : use of class template requires template argument list c:\program files\microsoft visual studio 10.0\vc\include\xfunctional 223 1 Graphic Error 7 error C2451: conditional expression of type 'std::unary_negate<_Fn1>' is illegal c:\program files\microsoft visual studio 10.0\vc\include\ostream 529 1 Graphic Error 3 error C2146: syntax error : missing ',' before identifier 'argument_type' c:\program files\microsoft visual studio 10.0\vc\include\xfunctional 222 1 Graphic Error 4 error C2065: 'argument_type' : undeclared identifier c:\program files\microsoft visual studio 10.0\vc\include\xfunctional 222 1 Graphic Error 2 error C2039: 'argument_type' : is not a member of 'std::basic_ostream<_Elem,_Traits>::sentry' c:\program files\microsoft visual studio 10.0\vc\include\xfunctional 222 1 Graphic Error 6 error C2039: 'argument_type' : is not a member of 'std::basic_ostream<_Elem,_Traits>::sentry' c:\program files\microsoft visual studio 10.0\vc\include\xfunctional 230 1 Graphic Any idea? Update Follow "templatetypedef" solution, I got new error: Error 3 error C2831: 'operator !' cannot have default parameters c:\visual studio 2010 projects\graphic\graphic\main.cpp 39 1 Graphic Error 2 error C2808: unary 'operator !' has too many formal parameters c:\visual studio 2010 projects\graphic\graphic\main.cpp 39 1 Graphic Error 4 error C2675: unary '!' : 'is_prime' does not define this operator or a conversion to a type acceptable to the predefined operator c:\visual studio 2010 projects\graphic\graphic\main.cpp 52 1 Graphic Update 1 Complete code: #include <iostream> #include <functional> #include <utility> #include <cmath> #include <algorithm> #include <iterator> #include <string> #include <boost/assign.hpp> #include <boost/assign/std/vector.hpp> #include <boost/assign/std/map.hpp> #include <boost/assign/std/set.hpp> #include <boost/assign/std/list.hpp> #include <boost/assign/std/stack.hpp> #include <boost/assign/std/deque.hpp> struct is_prime : std::unary_function<int, bool> { bool operator()( int n ) const { if( n < 2 ) return 0; if( n == 2 || n == 3 ) return 1; if( n % 2 == 0 || n % 3 == 0 ) return 0; int upper_bound = std::sqrt( static_cast<double>( n ) ); for( int pf = 5, step = 2; pf <= upper_bound; ) { if( n % pf == 0 ) return 0; pf += step; step = 6 - step; } return 1; } }; /* template<typename T> inline std::unary_negate<T> operator !( const T& pred, typename T::argument_type* dummy = 0 ) { return std::not1<T>( pred ); } */ inline std::unary_negate<is_prime> operator !( const is_prime& pred ) { return std::not1( pred ); } template<typename T> inline void print_con( const T& con, const std::string& ms = "", const std::string& sep = ", " ) { std::cout << ms << '\n'; std::copy( con.begin(), con.end(), std::ostream_iterator<typename T::value_type>( std::cout, sep.c_str() ) ); std::cout << "\n\n"; } int main() { using namespace boost::assign; std::vector<int> nums; nums += 1, 3, 5, 7, 9; nums.erase( remove_if( nums.begin(), nums.end(), !is_prime() ), nums.end() ); print_con( nums, "After remove all primes" ); } Thanks, Chan Nguyen
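
    The unconstrained template lets operator ! match every type in the program (hence the errors pointing into <ostream> and <xfunctional>), and operator ! is not allowed the extra defaulted parameter. A sketch of one C++03-style workaround: detect the nested argument_type with a small hand-written trait (has_argument_type is a made-up name) and constrain the operator with boost::enable_if, which the Boost already in use provides:

        #include <functional>
        #include <boost/utility/enable_if.hpp>

        // Does T look like an adaptable unary predicate (i.e. has ::argument_type)?
        template<typename T>
        struct has_argument_type
        {
            typedef char yes;
            typedef char (&no)[2];
            template<typename U> static yes test( typename U::argument_type* );
            template<typename U> static no  test( ... );
            static const bool value = sizeof( test<T>( 0 ) ) == sizeof( yes );
        };

        // Participates in overload resolution only for such predicates, so
        // !stream, !bool and the rest keep their normal meaning.
        template<typename Pred>
        typename boost::enable_if_c<has_argument_type<Pred>::value,
                                    std::unary_negate<Pred> >::type
        operator !( const Pred& pred )
        {
            return std::not1( pred );
        }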

    Read the article

  • Why SELECT N + 1 with no foreign keys and LINQ?

    - by Daniel Flöijer
    I have a database that unfortunately have no real foreign keys (I plan to add this later, but prefer not to do it right now to make migration easier). I have manually written domain objects that map to the database to set up relationships (following this tutorial http://www.codeproject.com/Articles/43025/A-LINQ-Tutorial-Mapping-Tables-to-Objects), and I've finally gotten the code to run properly. However, I've noticed I now have the SELECT N + 1 problem. Instead of selecting all Product's they're selected one by one with this SQL: SELECT [t0].[id] AS [ProductID], [t0].[Name], [t0].[info] AS [Description] FROM [products] AS [t0] WHERE [t0].[id] = @p0 -- @p0: Input Int (Size = -1; Prec = 0; Scale = 0) [65] Controller: public ViewResult List(string category, int page = 1) { var cat = categoriesRepository.Categories.SelectMany(c => c.LocalizedCategories).Where(lc => lc.CountryID == 1).First(lc => lc.Name == category).Category; var productsToShow = cat.Products; var viewModel = new ProductsListViewModel { Products = productsToShow.Skip((page - 1) * PageSize).Take(PageSize).ToList(), PagingInfo = new PagingInfo { CurrentPage = page, ItemsPerPage = PageSize, TotalItems = productsToShow.Count() }, CurrentCategory = cat }; return View("List", viewModel); } Since I wasn't sure if my LINQ expression was correct I tried to just use this but I still got N+1: var cat = categoriesRepository.Categories.First(); Domain objects: [Table(Name = "products")] public class Product { [Column(Name = "id", IsPrimaryKey = true, IsDbGenerated = true, AutoSync = AutoSync.OnInsert)] public int ProductID { get; set; } [Column] public string Name { get; set; } [Column(Name = "info")] public string Description { get; set; } private EntitySet<ProductCategory> _productCategories = new EntitySet<ProductCategory>(); [System.Data.Linq.Mapping.Association(Storage = "_productCategories", OtherKey = "productId", ThisKey = "ProductID")] private ICollection<ProductCategory> ProductCategories { get { return _productCategories; } set { _productCategories.Assign(value); } } public ICollection<Category> Categories { get { return (from pc in ProductCategories select pc.Category).ToList(); } } } [Table(Name = "products_menu")] class ProductCategory { [Column(IsPrimaryKey = true, Name = "products_id")] private int productId; private EntityRef<Product> _product = new EntityRef<Product>(); [System.Data.Linq.Mapping.Association(Storage = "_product", ThisKey = "productId")] public Product Product { get { return _product.Entity; } set { _product.Entity = value; } } [Column(IsPrimaryKey = true, Name = "products_types_id")] private int categoryId; private EntityRef<Category> _category = new EntityRef<Category>(); [System.Data.Linq.Mapping.Association(Storage = "_category", ThisKey = "categoryId")] public Category Category { get { return _category.Entity; } set { _category.Entity = value; } } } [Table(Name = "products_types")] public class Category { [Column(Name = "id", IsPrimaryKey = true, IsDbGenerated = true, AutoSync = AutoSync.OnInsert)] public int CategoryID { get; set; } private EntitySet<ProductCategory> _productCategories = new EntitySet<ProductCategory>(); [System.Data.Linq.Mapping.Association(Storage = "_productCategories", OtherKey = "categoryId", ThisKey = "CategoryID")] private ICollection<ProductCategory> ProductCategories { get { return _productCategories; } set { _productCategories.Assign(value); } } public ICollection<Product> Products { get { return (from pc in ProductCategories select pc.Product).ToList(); } } private 
EntitySet<LocalizedCategory> _LocalizedCategories = new EntitySet<LocalizedCategory>(); [System.Data.Linq.Mapping.Association(Storage = "_LocalizedCategories", OtherKey = "CategoryID")] public ICollection<LocalizedCategory> LocalizedCategories { get { return _LocalizedCategories; } set { _LocalizedCategories.Assign(value); } } } [Table(Name = "products_types_localized")] public class LocalizedCategory { [Column(Name = "id", IsPrimaryKey = true, IsDbGenerated = true, AutoSync = AutoSync.OnInsert)] public int LocalizedCategoryID { get; set; } [Column(Name = "products_types_id")] private int CategoryID; private EntityRef<Category> _Category = new EntityRef<Category>(); [System.Data.Linq.Mapping.Association(Storage = "_Category", ThisKey = "CategoryID")] public Category Category { get { return _Category.Entity; } set { _Category.Entity = value; } } [Column(Name = "country_id")] public int CountryID { get; set; } [Column] public string Name { get; set; } } I've tried to comment out everything from my View, so nothing there seems to influence this. The ViewModel is as simple as it looks, so shouldn't be anything there. When reading this ( http://www.hookedonlinq.com/LinqToSQL5MinuteOVerview.ashx) I started suspecting it might be because I have no real foreign keys in the database and that I might need to use manual joins in my code. Is that correct? How would I go about it? Should I remove my mapping code from my domain model or is it something that I need to add/change to it? Note: I've stripped parts of the code out that I don't think is relevant to make it cleaner for this question. Please let me know if something is missing.
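
    LINQ to SQL lazy-loads EntityRef/EntitySet associations by default, so every pc.Product access issues its own SELECT; the missing database foreign keys are not the cause as long as the association mappings are in place. A sketch of requesting eager loading instead, where db stands for whatever DataContext the repositories create and the ProductCategories association is assumed to be made public so it can be referenced here:

        // Requires System.Data.Linq.
        var options = new DataLoadOptions();
        // Pull the join rows together with each category...
        options.LoadWith<Category>(c => c.ProductCategories);
        // ...and each referenced product together with its join row.
        options.LoadWith<ProductCategory>(pc => pc.Product);
        // Must be assigned before the first query runs on this context.
        db.LoadOptions = options;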

    Read the article

  • Maven : Is it possible to override the configuration of a plugin already defined for a profile in a parent POM

    - by Guillaume Cernier
    In a POM parent file of my project, I have such a profile defining some configurations useful for this project (so that I can't get rid of this parent POM) : <profile> <id>wls7</id> ... <build> <plugins> <!-- use java 1.4 --> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <configuration> <fork>true</fork> <source>1.4</source> <target>1.4</target> <meminitial>128m</meminitial> <maxmem>1024m</maxmem> <executable>%${jdk14.executable}</executable> </configuration> </plugin> </plugins> </build> ... </profile> But in my project I just would like to override the configuration of the maven-compiler-plugin in order to use jdk5 instead of jdk4 for compiling test-classes. That's why I did this section in the POM of my project : <profiles> <profile> <id>wls7</id> <activation> <property> <name>jdk</name> <value>4</value> </property> </activation> <build> <directory>target-1.4</directory> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <executions> <execution> <id>my-testCompile</id> <phase>test-compile</phase> <goals> <goal>testCompile</goal> </goals> <configuration> <fork>true</fork> <executable>${jdk15.executable}</executable> <compilerVersion>1.5</compilerVersion> <source>1.5</source> <target>1.5</target> <verbose>true</verbose> </configuration> </execution> </executions> </plugin> </plugins> </build> </profile> ... </profiles> and it's not working ... I even tried to override the configuration in regular plugin sections of my POM (I mean, not for a specific profile but for my whole POM). What could be the problem ? To clarify some of my requirements : I don't want to get rid of the parent POM and the profile (wls7) defined inside it (since I need many and many properties, configurations, ...) and that is not the process in my company. A solution based on duplicating the parent POM and/or the profile defined inside it is not a good one. Since if the responsible of the parent POM change something, I would have to report it in mine. It's just an inheritance matter (extend or override a profile, a configuration from an upper-level POM) so I think it should be possible with maven2.
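
    One thing worth trying: the child profile above only configures a brand-new test-compile execution, while the parent's plugin-level configuration still applies untouched. The compiler plugin has separate testSource/testTarget parameters, so overriding at plugin level, and telling Maven not to merge with the inherited block via combine.self="override", may be enough. A sketch reusing the property names from above:

        <profile>
            <id>wls7</id>
            <build>
                <plugins>
                    <plugin>
                        <groupId>org.apache.maven.plugins</groupId>
                        <artifactId>maven-compiler-plugin</artifactId>
                        <configuration combine.self="override">
                            <fork>true</fork>
                            <executable>${jdk15.executable}</executable>
                            <meminitial>128m</meminitial>
                            <maxmem>1024m</maxmem>
                            <!-- main classes stay on 1.4, test classes move to 1.5 -->
                            <source>1.4</source>
                            <target>1.4</target>
                            <testSource>1.5</testSource>
                            <testTarget>1.5</testTarget>
                        </configuration>
                    </plugin>
                </plugins>
            </build>
        </profile>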

    Read the article

  • SQL: Using a CASE Statement to update 1000 rows at once

    - by SoLoGHoST
    Ok, I would like to use a CASE STATEMENT for this, but I am lost with this. Basically, I need to update a ton of rows, but just on the "position" column. I need to update all "position" values from 0 - count(position) for each id_layout_position column per id_layout column. OK, here is a pic of what the table looks like: Now let's say I delete the circled row, this will remove position = 2 and give me: 0, 1, 3, 5, 6, 7, and 4. But I want to add something at the end now and make sure that it has the last possible position, but the positions are already messed up, so I need to reorder them like so before I insert the new row: 0, 1, 2, 3, 4, 5, 6. But it must be ordered by lowest first. So 0 stays at 0, 1 stays at 1, 3 gets changed to 2, the 4 at the end gets changed to a 3, 5 gets changed to 4, 6 gets changed to 5, and 7 gets changed to 6. Hopefully you guys get the picture now. I'm completely lost here. Also, note, this table is tiny compared to how fast it can grow in size, so it needs to be able to do this FAST, thus I was thinking on the CASE STATEMENT for an UPDATE QUERY. Here's what I got for a regular update, but I don't wanna throw this into a foreach loop, as it would take forever to do it. I'm using SMF (Simple Machines Forums), so it might look a little different, but the idea is the same, and CASE statements are supported... $smcFunc['db_query']('', ' UPDATE {db_prefix}dp_positions SET position = {int:position} WHERE id_layout_position = {int:id_layout_position} AND id_layout = {int:id_layout}', array( 'position' => $position++, 'id_layout_position' => (int) $id_layout_position, 'id_layout' => (int) $id_layout, ) ); Anyways, I need to apply some sort of CASE on this so that I can auto-increment by 1 all values that it finds and update to the next possible value. I know I'm doing this wrong, even in this QUERY. But I'm totally lost when it comes to CASES. Here's an example of a CASE being used within SMF, so you can see this and hopefully relate: $conditions = ''; foreach ($postgroups as $id => $min_posts) { $conditions .= ' WHEN posts >= ' . $min_posts . (!empty($lastMin) ? ' AND posts <= ' . $lastMin : '') . ' THEN ' . $id; $lastMin = $min_posts; } // A big fat CASE WHEN... END is faster than a zillion UPDATE's ;). $smcFunc['db_query']('', ' UPDATE {db_prefix}members SET id_post_group = CASE ' . $conditions . ' ELSE 0 END' . ($parameter1 != null ? ' WHERE ' . (is_array($parameter1) ? 'id_member IN ({array_int:members})' : 'id_member = {int:members}') : ''), array( 'members' => $parameter1, ) ); Before I do the update, I actually have a SELECT which throws everything I need into arrays like so: $disabled_sections = array(); $positions = array(); while ($row = $smcFunc['db_fetch_assoc']($request)) { if (!isset($disabled_sections[$row['id_group']][$row['id_layout']])) $disabled_sections[$row['id_group']][$row['id_layout']] = array( 'info' => $module_info[$name], 'id_layout_position' => $row['id_layout_position'] ); // Increment the positions... if (!is_null($row['position'])) { if (!isset($positions[$row['id_layout']][$row['id_layout_position']])) $positions[$row['id_layout']][$row['id_layout_position']] = 1; else $positions[$row['id_layout']][$row['id_layout_position']]++; } else $positions[$row['id_layout']][$row['id_layout_position']] = 0; } Thanks, I know if anyone can help me here it's definitely you guys and gals... 
Anyways, here is my question: How do I use a CASE statement in the first code example, so that I can update all of the rows in the position column from 0 - total # of rows found, that have that id_layout value and that id_layout_position value, and continue this for all different id_layout values in that table? Can I use the arrays above somehow? I'm sure I'll have to use the id_layout and id_layout_position values for this right? But how can I do this? Ok, guy, I get an error, saying "Hacking Attempt" with the following code: // Updating all positions in here. $smcFunc['db_query']('', ' SET @pos = 0; UPDATE {db_prefix}dp_positions SET position=@pos:=@pos+1 ORDER BY id_layout_position, position', array( ) ); Am I doing something wrong? Perhaps SMF has safeguards against this approach?? Perhaps I need to use a CASE STATEMENT instead?
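
    The "Hacking Attempt" error is most likely SMF's query security check rejecting the multi-statement SET @pos query, so the session-variable trick is out. A sketch of the CASE route instead, built the same way as the id_post_group example: one UPDATE per id_layout, numbering the rows in their current order. It assumes a hypothetical array $ordered[$id_layout] holding the id_layout_position values already sorted by their old position (which the SELECT shown above can provide):

        foreach ($ordered as $id_layout => $layout_positions)
        {
            $conditions = '';
            $new_position = 0;
            foreach ($layout_positions as $id_layout_position)
                $conditions .= '
                    WHEN id_layout_position = ' . (int) $id_layout_position . ' THEN ' . $new_position++;

            // One query per layout instead of one per row.
            $smcFunc['db_query']('', '
                UPDATE {db_prefix}dp_positions
                SET position = CASE ' . $conditions . '
                    ELSE position END
                WHERE id_layout = {int:id_layout}',
                array(
                    'id_layout' => $id_layout,
                )
            );
        }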

    Read the article

  • Many-to-one relation exception due to closed session after loading

    - by Nick Thissen
    Hi, I am using NHibernate (version 1.2.1) for the first time so I wrote a simple test application (an ASP.NET project) that uses it. In my database I have two tables: Persons and Categories. Each person gets one category, seems easy enough. | Persons | | Categories | |--------------| |--------------| | Id (PK) | | Id (PK) | | Firstname | | CategoryName | | Lastname | | CreatedTime | | CategoryId | | UpdatedTime | | CreatedTime | | Deleted | | UpdatedTime | | Deleted | The Id, CreatedTime, UpdatedTime and Deleted attributes are a convention I use in all my tables, so I have tried to bring this fact into an additional abstraction layer. I have a project DatabaseFramework which has three important classes: Entity: an abstract class that defines these four properties. All 'entity objects' (in this case Person and Category) must inherit Entity. IEntityManager: a generic interface (type parameter as Entity) that defines methods like Load, Insert, Update, etc. NHibernateEntityManager: an implementation of this interface using NHibernate to do the loading, saving, etc. Now, the Person and Category classes are straightforward, they just define the attributes of the tables of course (keeping in mind that four of them are in the base Entity class). Since the Persons table is related to the Categories table via the CategoryId attribute, the Person class has a Category property that holds the related category. However, in my webpage, I will also need the name of this category (CategoryName), for databinding purposes for example. So I created an additional property CategoryName that returns the CategoryName property of the current Category property, or an empty string if the Category is null: Namespace Database Public Class Person Inherits DatabaseFramework.Entity Public Overridable Property Firstname As String Public Overridable Property Lastname As String Public Overridable Property Category As Category Public Overridable ReadOnly Property CategoryName As String Get Return If(Me.Category Is Nothing, _ String.Empty, _ Me.Category.CategoryName) End Get End Property End Class End Namespace I am mapping the Person class using this mapping file. The many-to-one relation was suggested by Yads in another thread: <id name="Id" column="Id" type="int" unsaved-value="0"> <generator class="identity" /> </id> <property name="CreatedTime" type="DateTime" not-null="true" /> <property name="UpdatedTime" type="DateTime" not-null="true" /> <property name="Deleted" type="Boolean" not-null="true" /> <property name="Firstname" type="String" /> <property name="Lastname" type="String" /> <many-to-one name="Category" column="CategoryId" class="NHibernateWebTest.Database.Category, NHibernateWebTest" /> (I can't get it to show the root node, this forum hides it, I don't know how to escape the html-like tags...) The final important detail is the Load method of the NHibernateEntityManager implementation. (This is in C# as it's in a different project, sorry about that). I simply open a new ISession (ISessionFactory.OpenSession) in the GetSession method and then use that to fill an EntityCollection(Of TEntity) which is just a collection inheriting System.Collections.ObjectModel.Collection(Of T). 
public virtual EntityCollection< TEntity Load() { using (ISession session = this.GetSession()) { var entities = session .CreateCriteria(typeof (TEntity)) .Add(Expression.Eq("Deleted", false)) .List< TEntity (); return new EntityCollection< TEntity (entities); } } (Again, I can't get it to format the code correctly, it hides the generic type parameters, probably because it reads the angled symbols as a HTML tag..? If you know how to let me do that, let me know!) Now, the idea of this Load method is that I get a fully functional collection of Persons, all their properties set to the correct values (including the Category property, and thus, the CategoryName property should return the correct name). However, it seems that is not the case. When I try to data-bind the result of this Load method to a GridView in ASP.NET, it tells me this: Property accessor 'CategoryName' on object 'NHibernateWebTest.Database.Person' threw the following exception:'Could not initialize proxy - the owning Session was closed.' The exception occurs on the DataBind method call here: public virtual void LoadGrid() { if (this.Grid == null) return; this.Grid.DataSource = this.Manager.Load(); this.Grid.DataBind(); } Well, of course the session is closed, I closed it via the using block. Isn't that the correct approach, should I keep the session open? And for how long? Can I close it after the DataBind method has been run? In each case, I'd really like my Load method to just return a functional collection of items. It seems to me that it is now only getting the Category when it is required (eg, when the GridView wants to read the CategoryName, which wants to read the Category property), but at that time the session is closed. Is that reasoning correct? How do I stop this behavior? Or shouldn't I? And what should I do otherwise? Thanks!
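
    That reasoning is correct: the many-to-one is loaded lazily as a proxy, and by the time the grid reads CategoryName the session from the using block has already been disposed. The usual choices are keeping a session open for the whole request or fetching the association eagerly before closing. A sketch of the second, assuming the generic manager is told which association paths to fetch ("Category" matches the Person mapping above); SetFetchMode and FetchMode.Eager are standard NHibernate criteria APIs:

        public virtual EntityCollection<TEntity> Load(params string[] eagerPaths)
        {
            using (ISession session = this.GetSession())
            {
                ICriteria criteria = session
                    .CreateCriteria(typeof(TEntity))
                    .Add(Expression.Eq("Deleted", false));

                // Join-fetch the requested associations while the session is still
                // open, so no uninitialized proxy is left behind for the grid to hit.
                foreach (string path in eagerPaths)
                    criteria.SetFetchMode(path, FetchMode.Eager);

                IList<TEntity> entities = criteria.List<TEntity>();
                return new EntityCollection<TEntity>(entities);
            }
        }

        // e.g. this.Grid.DataSource = this.Manager.Load("Category");

    Marking the many-to-one with lazy="false" in the mapping file is the blunter alternative.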

    Read the article

  • How should I store Dynamically Changing Data into Server Cache?

    - by Scott
    Hey all, EDIT: Purpose of this Website: Its called Utopiapimp.com. It is a third party utility for a game called utopia-game.com. The site currently has over 12k users to it an I run the site. The game is fully text based and will always remain that. Users copy and paste full pages of text from the game and paste the copied information into my site. I run a series of regular expressions against the pasted data and break it down. I then insert anywhere from 5 values to over 30 values into the DB based on that one paste. I then take those values and run queries against them to display the information back in a VERY simple and easy to understand way. The game is team based and each team has 25 users to it. So each team is a group and each row is ONE users information. The users can update all 25 rows or just one row at a time. I require storing things into cache because the site is very slow doing over 1,000 queries almost every minute. So here is the deal. Imagine I have an excel spreadsheet with 100 columns and 5000 rows. Each row has two unique identifiers. One for the row it self and one to group together 25 rows a piece. There are about 10 columns in the row that will almost never change and the other 90 columns will always be changing. We can say some will even change in a matter of seconds depending on how fast the row is updated. Rows can also be added and deleted from the group, but not from the database. The rows are taken from about 4 queries from the database to show the most recent and updated data from the database. So every time something in the database is updated, I would also like the row to be updated. If a row or a group has not been updated in 12 or so hours, it will be taken out of Cache. Once the user calls the group again via the DB queries. They will be placed into Cache. The above is what I would like. That is the wish. In Reality, I still have all the rows, but the way I store them in Cache is currently broken. I store each row in a class and the class is stored in the Server Cache via a HUGE list. When I go to update/Delete/Insert items in the list or rows, most the time it works, but sometimes it throws errors because the cache has changed. I want to be able to lock down the cache like the database throws a lock on a row more or less. I have DateTime stamps to remove things after 12 hours, but this almost always breaks because other users are updating the same 25 rows in the group or just the cache has changed. This is an example of how I add items to Cache, this one shows I only pull the 10 or so columns that very rarely change. This example all removes rows not updated after 12 hours: DateTime dt = DateTime.UtcNow; if (HttpContext.Current.Cache["GetRows"] != null) { List<RowIdentifiers> pis = (List<RowIdentifiers>)HttpContext.Current.Cache["GetRows"]; var ch = (from xx in pis where xx.groupID == groupID where xx.rowID== rowID select xx).ToList(); if (ch.Count() == 0) { var ck = GetInGroupNotCached(rowID, groupID, dt); //Pulling the group from the DB for (int i = 0; i < ck.Count(); i++) pis.Add(ck[i]); pis.RemoveAll((x) => x.updateDateTime < dt.AddHours(-12)); HttpContext.Current.Cache["GetRows"] = pis; return ck; } else return ch; } else { var pis = GetInGroupNotCached(rowID, groupID, dt);//Pulling the group from the DB HttpContext.Current.Cache["GetRows"] = pis; return pis; } On the last point, I remove items from the cache, so the cache doesn't actually get huge. To re-post the question, Whats a better way of doing this? 
Maybe by putting locks on the cache, and if so, how? Can I do better than this? I just want it to stop breaking when removing or adding rows.
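
    The ASP.NET Cache entry itself cannot be locked like a database row, but every reader and writer of the shared list can be funneled through one lock so concurrent requests never change it mid-iteration. A minimal sketch of that pattern around the logic above; RowIdentifiers, GetInGroupNotCached and the field names come from the question (GetInGroupNotCached is assumed to return a List<RowIdentifiers>), and any other code path that touches the list must take the same lock:

        // Requires System.Linq and System.Web.
        private static readonly object CacheLock = new object();

        public List<RowIdentifiers> GetRows(int rowID, int groupID)
        {
            DateTime dt = DateTime.UtcNow;
            lock (CacheLock)   // serialize all read-modify-write work on the cached list
            {
                var pis = HttpContext.Current.Cache["GetRows"] as List<RowIdentifiers>;
                if (pis == null)
                {
                    pis = GetInGroupNotCached(rowID, groupID, dt);
                    HttpContext.Current.Cache["GetRows"] = pis;
                    return pis;
                }

                var cached = pis.Where(x => x.groupID == groupID && x.rowID == rowID).ToList();
                if (cached.Count > 0)
                    return cached;

                pis.AddRange(GetInGroupNotCached(rowID, groupID, dt));
                pis.RemoveAll(x => x.updateDateTime < dt.AddHours(-12));
                HttpContext.Current.Cache["GetRows"] = pis;
                return pis.Where(x => x.groupID == groupID && x.rowID == rowID).ToList();
            }
        }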

    Read the article

  • Does this language feature already exist?

    - by Pindatjuh
    I'm currently developing a new language for programming in a continuous environment (compare it to electrical engineering), and I've got some ideas on a certain language construction. Let me explain the feature by explanation and then by definition: x = a U b; Where x is a variable and a and b are other variables (or static values). This works like a union between a and b; no duplicates and no specific order. with(x) { // regular 'with' usage; using the global interpretation of "x" x = 5; // will replace the original definition of "x = a U b;" } with(x = a) { // this code block is executed when the "x" variable // has the "a" variable assigned. All references in // this code-block to "x" are references to "a". So saying: x = 5; // would only change the variable "a". If the variable "a" // later on changes, x still equals to 5, in this fashion: // 'x = a U b U 5;' // '[currentscope] = 5;' // thus, 'a = 5;' } with(x = b) { // same but with "b" } with(x != a) { // here the "x" variable refers to any variable // but "a"; thus saying x = 5; // is equal to the rewriting of // 'x = a U b U 5;' // 'b = 5;' (since it was the scope of this block) } with(x = (a U b)) { // guaranteed that "x" is 'a U b'; interacting with "x" // will interact with both "a" and "b". x = 5; // makes both "a" and "b" equal to 5; also the "x" variable // is updated to contain: // 'x = a U b U 5;' // '[currentscope] = 5;' // 'a U b = 5;' // and thus: 'a = 5; b = 5;'. } // etc. In the above, all code-blocks are executed, but the "scope" changes in each block how x is interpreted. In the first block, x is guaranteed to be a: thus interacting with x inside that block will interact on a. The second and the third code-block are only equal in this situation (because not a: then there only remains b). The last block guarantees that x is at least a or b. Further more; U is not the "bitwise or operator", but I've called it the "and/or"-operator. Its definition is: "U" = "and" U "or" (On my blog, http://cplang.wordpress.com/2009/12/19/binop-and-or/, there is more (mathematical) background information on this operator. It's loosely based on sets. Using different syntax, changed it in this question.) Update: more examples. print = "Hello world!" U "How are you?"; // this will print // both values, but the // order doesn't matter. // 'userkey' is a variable containing a key. with(userkey = "a") { print = userkey; // will only print "a". } with(userkey = ("shift" U "a")) { // pressed both "shift" and the "a" key. print = userkey; // will "print" shift and "a", even // if the user also pressed "ctrl": // the interpretation of "userkey" is changed, // such that it only contains the matched cases. } with((userkey = "shift") U (userkey = "a")) { // same as if-statement above this one, showing the distributivity. } x = 5 U 6 U 7; y = x + x; // will be: // y = (5 U 6 U 7) + (5 U 6 U 7) // = 10 U 11 U 12 U 13 U 14 somewantedkey = "ctrl" U "alt" U "space" with(userkey = somewantedkey) { // must match all elements of "somewantedkey" // (distributed the Boolean equals operated) // thus only executed when all the defined keys are pressed } with(somewantedkey = userkey) { // matches only one of the provided "somewantedkey" // thus when only "space" is pressed, this block is executed. } Update2: more examples and some more context. 
with(x = (a U b)) { // this } // can be written as with((x = a) U (x = b)) { // this: changing the variable like x = 5; // will be rewritten as: // a = 5 and b = 5 } Some background information: I'm building a language which is "time-independent", like Java is "platform-independant". Everything stated in the language is "as is", and is continuously actively executed. This means; the programmer does not know in which order (unless explicitly stated using constructions) elements are, nor when statements are executed. The language is completely separated from the "time"-concept, i.e. it's continuously executed: with(a < 5) { a++; } // this is a loop-structure; // how and when it's executed isn't known however. with(a) { // everytime the "a" variable changes, this code-block is executed. b = 4; with(b < 3) { // runs only three times. } with(b > 0) { b = b - 1; // runs four times } } Update 3: After pondering on the type of this language feature; it closely resemblances Netbeans Platform's Lookup, where each "with"-statement a synchronized agent is, working on it's specific "filter" of objects. Instead of type-based, this is variable-based (fundamentally quite the same; just a different way of identifiying objects). I greatly thank all of you for providing me with very insightful information and links/hints to great topics I can research. Thanks. I do not know if this construction already exists, so that's my question: does this language feature already exist?

    Read the article

  • JAVA-SQL- Data Migration - ResultSets comparing Failing JUnit test

    - by user1865053
    I CANNOT get this JUnit Test to pass for the life of me. Can somebody point out where this has gone wrong. I am doing a data migration(MSSQL SERVER 2005), but I have the sourceDBUrl and the targetDCUrl the same URL so to narrow it down to syntax errors. So that is what I have, a syntax error. I am comparing the results of a table for the query SELECT programmeapproval, resourceapproval FROM tr_timesheet WHERE timesheetid = ? and the test always fails, but passes for other junit tests I have developed. I created 3 diffemt resultSetsEqual methods and none work. Yet, some other JUnit tests I have developed have PASSED. THE QUERY: SELECT timesheetid, programmeapproval, resourceapproval FROM tr_timesheet Returns three columns timesheetid (PK,int, not null) (populated with a range of numbers 2240 - 2282) programmeapproval (smallint,not null) (populated with the number 1 in every field) resourceapproval (smallint, not null) (populated with a number 1 in every field) When I run the query that is embedded in the code it only returns one row with the programmeapproval and resourceapproval columns and both field populated with the number 1. I have all jdbc drivers correctly installed and tested for connectivity. The JUnit Test is failing at this point according to the IDE. assertTrue(helper.resultSetsEqual2(sourceVal,targetVal)); This is the code: /*THIS IS A JUNIT CLASS****? package a7.unittests.dao; import static org.junit.Assert.assertTrue; import java.sql.Connection; import java.sql.PreparedStatement; import java.sql.ResultSet; import java.sql.Types; import org.junit.Test; import artemispm.tritonalerts.TimesheetAlert; public class UnitTestTimesheetAlert { @Test public void testQUERY_CHECKALERT() throws Exception{ UnitTestHelper helper = new UnitTestHelper(); Connection con = helper.getConnection(helper.sourceDBUrl); Connection conTarget = helper.getConnection(helper.targetDBUrl); PreparedStatement stmt = con.prepareStatement("select programmeapproval, resourceapproval from tr_timesheet where timesheetid = ?"); stmt.setInt(1, 2240); ResultSet sourceVal = stmt.executeQuery(); stmt = conTarget.prepareStatement("select programmeapproval, resourceapproval from tr_timesheet where timesheetid = ?"); stmt.setInt(1,2240); ResultSet targetVal = stmt.executeQuery(); assertTrue(helper.resultSetsEqual2(sourceVal,targetVal)); }} /*END**/ /*THIS IS A REGULAR CLASS**/ package a7.unittests.dao; import java.sql.Connection; import java.sql.DriverManager; import java.sql.ResultSet; import java.sql.ResultSetMetaData; import java.sql.SQLException; public class UnitTestHelper { static String sourceDBUrl = "jdbc:sqlserver://127.0.0.1:1433;databaseName=a7itm;user=a7user;password=a7user"; static String targetDBUrl = "jdbc:sqlserver://127.0.0.1:1433;databaseName=a7itm;user=a7user;password=a7user"; public Connection getConnection(String url)throws Exception{ return DriverManager.getConnection(url); } public boolean resultSetsEqual3 (ResultSet rs1, ResultSet rs2) throws SQLException { int col = 1; //ResultSetMetaData metadata = rs1.getMetaData(); //int count = metadata.getColumnCount(); while (rs1.next() && rs2.next()) { final Object res1 = rs1.getObject(col); final Object res2 = rs2.getObject(col); // Check values if (!res1.equals(res2)) { throw new RuntimeException(String.format("%s and %s aren't equal at common position %d", res1, res2, col)); } // rs1 and rs2 must reach last row in the same iteration if ((rs1.isLast() != rs2.isLast())) { throw new RuntimeException("The two ResultSets contains different number of 
columns!"); } } return true; } public boolean resultSetsEqual (ResultSet source, ResultSet target) throws SQLException{ while(source.next()) { target.next(); ResultSetMetaData metadata = source.getMetaData(); int count = metadata.getColumnCount(); for (int i =1; i<=count; i++) { if(source.getObject(i) != target.getObject(i)) { return false; } } } return true; } public boolean resultSetsEqual2 (ResultSet source, ResultSet target) throws SQLException{ while(source.next()) { target.next(); ResultSetMetaData metadata = source.getMetaData(); int count = metadata.getColumnCount(); for (int i =1; i<=count; i++) { if(source.getObject(i).equals(target.getObject(i))) { return false; } } } return true; } } /END***/ /*PASTED NEW CLASS - THIS IS A JUNIT TEST CLASS*/ package a7.unittests.dao; import static org.junit.Assert.*; import java.sql.Connection; import java.sql.DriverManager; import org.junit.Test; public class TestDatabaseConnection { @Test public void testConnection() throws Exception{ UnitTestHelper helper = new UnitTestHelper(); Connection con = helper.getConnection(helper.sourceDBUrl); Connection conTarget = helper.getConnection(helper.targetDBUrl); assertTrue(con != null && conTarget != null); } } /**END***/
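
    The comparison helpers, not the SQL, are the likely failure point: resultSetsEqual2 returns false as soon as two values are equal (the condition is inverted), and resultSetsEqual compares boxed objects with != (reference identity rather than value equality). A sketch of a corrected drop-in for UnitTestHelper:

        public boolean resultSetsEqual(ResultSet source, ResultSet target) throws SQLException {
            int columnCount = source.getMetaData().getColumnCount();
            while (true) {
                boolean sourceHasRow = source.next();
                boolean targetHasRow = target.next();
                if (sourceHasRow != targetHasRow)
                    return false;                       // different number of rows
                if (!sourceHasRow)
                    return true;                        // both exhausted, every row matched
                for (int i = 1; i <= columnCount; i++) {
                    Object a = source.getObject(i);
                    Object b = target.getObject(i);
                    if (a == null ? b != null : !a.equals(b))
                        return false;                   // value mismatch in column i
                }
            }
        }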

    Read the article

  • Spam Assassin on windows

    - by ebeworld
    I just installed spam assassin and run for its sample ham mail as spamassassin sample-nonspam.txt, but it ended up marking it as a spam. What configuration am i missing to change? Result of the check is: From: Keith Dawson To: [email protected] Subject: **SPAM** TBTF ping for 2001-04-20: Reviving Date: Fri, 20 Apr 2001 16:59:58 -0400 Message-Id: X-Spam-Flag: YES X-Spam-Checker-Version: SpamAssassin 3.2.3 (2007-08-08) on ebeworld-PC X-Spam-Level: **** X-Spam-Status: Yes, score=10.5 required=6.3 tests=DCC_CHECK,DIGEST_MULTIPLE, DNS_FROM_OPENWHOIS,RAZOR2_CF_RANGE_51_100,RAZOR2_CF_RANGE_E4_51_100, RAZOR2_CHECK shortcircuit=no autolearn=no version=3.2.3 MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="----------=_4BF17E8E.BF8E0000" This is a multi-part message in MIME format. ------------=_4BF17E8E.BF8E0000 Content-Type: text/plain; charset=iso-8859-1 Content-Disposition: inline Content-Transfer-Encoding: 8bit This mail is probably spam. The original message has been attached intact in RFC 822 format. Content preview: -----BEGIN PGP SIGNED MESSAGE----- TBTF ping for 2001-04-20: Reviving T a s t y B i t s f r o m t h e T e c h n o l o g y F r o n t [...] Content analysis details: (10.5 points, 6.3 required) 2.4 DNS_FROM_OPENWHOIS RBL: Envelope sender listed in bl.open-whois.org. 1.5 RAZOR2_CF_RANGE_E4_51_100 Razor2 gives engine 4 confidence level above 50% [cf: 58] 2.5 RAZOR2_CHECK Listed in Razor2 (http://razor.sf.net/) 0.5 RAZOR2_CF_RANGE_51_100 Razor2 gives confidence level above 50% [cf: 58] 3.6 DCC_CHECK Listed in DCC (http://rhyolite.com/anti-spam/dcc/) 0.0 DIGEST_MULTIPLE Message hits more than one network digest check ------------=_4BF17E8E.BF8E0000 Content-Type: message/rfc822; x-spam-type=original Content-Description: original message before SpamAssassin Content-Disposition: inline Content-Transfer-Encoding: 8bit Return-Path: Delivered-To: [email protected] Received: from europe.std.com (europe.std.com [199.172.62.20]) by mail.netnoteinc.com (Postfix) with ESMTP id 392E1114061 for ; Fri, 20 Apr 2001 21:34:46 +0000 (Eire) Received: (from daemon@localhost) by europe.std.com (8.9.3/8.9.3) id RAA09630 for tbtf-outgoing; Fri, 20 Apr 2001 17:31:18 -0400 (EDT) Received: from sgi04-e.std.com (sgi04-e.std.com [199.172.62.134]) by europe.std.com (8.9.3/8.9.3) with ESMTP id RAA08749 for ; Fri, 20 Apr 2001 17:24:31 -0400 (EDT) Received: from world.std.com (world-f.std.com [199.172.62.5]) by sgi04-e.std.com (8.9.3/8.9.3) with ESMTP id RAA8278330 for ; Fri, 20 Apr 2001 17:24:31 -0400 (EDT) Received: (from dawson@localhost) by world.std.com (8.9.3/8.9.3) id RAA26781 for [email protected]; Fri, 20 Apr 2001 17:24:31 -0400 (EDT) Received: from sgi04-e.std.com (sgi04-e.std.com [199.172.62.134]) by europe.std.com (8.9.3/8.9.3) with ESMTP id RAA07541 for ; Fri, 20 Apr 2001 17:12:06 -0400 (EDT) Received: from world.std.com (world-f.std.com [199.172.62.5]) by sgi04-e.std.com (8.9.3/8.9.3) with ESMTP id RAA8416421 for ; Fri, 20 Apr 2001 17:12:06 -0400 (EDT) Received: from [208.192.102.193] (ppp0c199.std.com [208.192.102.199]) by world.std.com (8.9.3/8.9.3) with ESMTP id RAA14226 for ; Fri, 20 Apr 2001 17:12:04 -0400 (EDT) Mime-Version: 1.0 Message-Id: Date: Fri, 20 Apr 2001 16:59:58 -0400 To: [email protected] From: Keith Dawson Subject: TBTF ping for 2001-04-20: Reviving Content-Type: text/plain; charset="us-ascii" Sender: [email protected] Precedence: list Reply-To: [email protected] -----BEGIN PGP SIGNED MESSAGE----- TBTF ping for 2001-04-20: Reviving T a s t y B i t s f r o m t h e T e c h n 
o l o g y F r o n t Timely news of the bellwethers in computer and communications technology that will affect electronic commerce -- since 1994 Your Host: Keith Dawson ISSN: 1524-9948 This issue: < http://tbtf.com/archive/2001-04-20.html > To comment on this issue, please use this forum at Quick Topic: < http://www.quicktopic.com/tbtf/H/kQGJR2TXL6H > ________________________________________________________________________ Q u o t e O f T h e M o m e n t Even organizations that promise "privacy for their customers" rarely if ever promise "continued privacy for their former customers..." Once you cancel your account with any business, their promises of keeping the information about their customers private no longer apply... you're not a customer any longer. This is in the large category of business behaviors that individuals would consider immoral and deceptive -- and businesses know are not illegal. -- "_ankh," writing on the XNStalk mailing list ________________________________________________________________________ ..TBTF's long hiatus is drawing to a close Hail subscribers to the TBTF mailing list. Some 2,000 [1] of you have signed up since the last issue [2] was mailed on 2000-07-20. This brief note is the first of several I will send to this list to excise the dead addresses prior to resuming regular publication. While you time the contractions of the newsletter's rebirth, I in- vite you to read the TBTF Log [3] and sign up for its separate free subscription. Send "subscribe" (no quotes) with any subject to [email protected] . I mail out collected Log items on Sun- days. If you need to stay more immediately on top of breaking stories, pick up the TBTF Log's syndication file [4] or read an aggregator that does. Examples are Slashdot's Cheesy Portal [5], Userland [6], and Sitescooper [7]. If your news obsession runs even deeper and you own an SMS-capable cell phone or PDA, sign up on TBTF's WebWire- lessNow portal [8]. A free call will bring you the latest TBTF Log headline, Jargon Scout [9] find, or Siliconium [10]. Two new columnists have bloomed on TBTF since last summer: Ted By- field's roving_reporter [11] and Gary Stock's UnBlinking [12]. Late- ly Byfield has been writing in unmatched depth about ICANN, but the roving_reporter nym's roots are in commentary at the intersection of technology and culture. Stock's UnBlinking latches onto topical sub- jects and pursues them to the ends of the Net. These writers' voices are compelling and utterly distinctive. [1] http://tbtf.com/growth.html [2] http://tbtf.com/archive/2000-07-20.html [3] http://tbtf.com/blog/ [4] http://tbtf.com/tbtf.rdf [5] http://www.slashdot.org/cheesyportal.shtml [6] http://my.userland.com/ [7] http://www.sitescooper.org/ [8] http://tbtf.com/pull-wwn/ [9] http://tbtf.com/jargon-scout.html [10] http://tbtf.com/siliconia.html [11] http://tbtf.com/roving_reporter/ [12] http://tbtf.com/unblinking/ ________________________________________________________________________ S o u r c e s For a complete list of TBTF's email and Web sources, see http://tbtf.com/sources.html . ________________________________________ B e n e f a c t o r s TBTF is free. If you get value from this publication, please visit the TBTF Benefactors page < http://tbtf.com/the-benefactors.html > and consider contributing to its upkeep. ________________________________________________________________________ TBTF home and archive at http://tbtf.com/ . To unsubscribe send the message "unsubscribe" to [email protected]. 
TBTF is Copy- right 1994-2000 by Keith Dawson, <[email protected]>. Commercial use prohibited. For non-commercial purposes please forward, post, and link as you see fit. _______________________________________________ Keith Dawson [email protected] Layer of ash separates morning and evening milk. -----BEGIN PGP SIGNATURE----- Version: PGPfreeware 6.5.2 for non-commercial use http://www.pgp.com iQCVAwUBOuCi3WAMawgf2iXRAQHeAQQA3YSePSQ0XzdHZUVskFDkTfpE9XS4fHQs WaT6a8qLZK9PdNcoz3zggM/Jnjdx6CJqNzxPEtxk9B2DoGll/C/60HWNPN+VujDu Xav65S0P+Px4knaQcCIeCamQJ7uGcsw+CqMpNbxWYaTYmjAfkbKH1EuLC2VRwdmD wQmwrDp70v8= =8hLB -----END PGP SIGNATURE----- ------------=_4BF17E8E.BF8E0000--
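
    Every rule that fired (DCC_CHECK, the RAZOR2_* rules, DNS_FROM_OPENWHOIS) is a network test, and the bundled sample message is old enough to have ended up in those digest/blocklist services, so this is not necessarily a local misconfiguration. Running local-only tests, spamassassin -L sample-nonspam.txt, or disabling the network checks in local.cf should let the sample score as ham. A sketch of the local.cf lines (standard option names, shown as a suggested setup rather than a required one):

        # local.cf -- skip the network-based checks that flagged the sample message
        use_razor2      0
        use_dcc         0
        skip_rbl_checks 1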

    Read the article

  • My computer freezes irregularly

    - by Manhim
    My computer has been freezing at irregular times for three weeks now. Please note that this question changes with each thing that I try (it gets updated with additional details).

    What happens

    - My computer freezes and the video stops. (No graphic glitches, it just stops.) Sound keeps playing for a while (usually 10-30 seconds), then stops playing.
    - Sometimes, randomly, the screen on my G-15 keyboard flickers and I see characters in the wrong places. This usually happens for about 1-2 seconds, a bit before my computer freezes.
    - I have to keep the power button pressed for 4 seconds to shut my computer down. I still hear my hard drives and fans working.
    - Sometimes it works with no problems for a full day; other times it just keeps freezing each time I restart my computer and I have to leave it for the rest of the day.
    - Sometimes my mouse freezes for a fraction of a second (like 0.01 to 0.2 seconds) quite randomly, usually before the computer freezes.
    - No errors are spotted by the "Action Center", unlike when I had problems with my last video card on this system (driver errors).
    - My G-15 LCD screen also freezes. Sometimes the G-15 LCD screen flickers and characters get carried around temporarily under heavy load.
    - Now, most of the time, the BIOS hard disk boot order gets reversed for some reason and I have to set it back and save each time I boot. (Might be unrelated, not sure, but it first started yesterday.) Sometimes the BIOS doesn't detect my 750GB hard drive plugged into SATA1.

    What I did so far

    - I have had similar problems in the past and had changed my hard drive (it was faulty), so I tested my software RAID-0 array; it was faulty, so I replaced it. (I reinstalled Windows 7 as part of this.) I also tested with my secondary hard drive unplugged.
    - My CPU was running at about 100 degrees Celsius; I removed the dust between the fans and the heatsink and it's now between 45-55.
    - I ran a CPU stress test and it didn't freeze during the tests (using Prime95 on all cores).
    - Ran a memory test (using memtest86+) for a single pass and there were no errors.
    - Ran a GPU stress test with ati-tools and FurMark and it didn't freeze during the tests. (No artefacts either.) I had trouble with my graphic card when I got it, but I think that was fixed with a driver update.
    - I checked the voltages in my BIOS setup and they all seemed OK (within ±0.2, I think).
    - I have run the computer without problems with Fedora 15 on an external hard drive (apart from the fact that it couldn't load GNOME 3 and reverted to GNOME 2; I didn't want to install drivers since I use it on multiple computers). I used it to back up my files from the RAID array to my 1TB hard drive for the reinstallation of Windows. (So the crashes only happened on Windows.) [The external hard drive is plugged directly into a SATA port.]
    - I contacted EVGA (my graphic card vendor) and pointed them to this question; I'm looking for an answer.
    - Ran sensors on Fedora 15 and got this output: http://pastebin.com/0BHJnAvu
    - Ran 6 different short CPU stress tests on Fedora 15 (haven't found any complete stress testers for Linux) and it didn't crash.
    - Changed the thermal paste to Arctic Silver 5 for my CPU and stress-tested the CPU; the temperature was 50 idle, 64 at the highest, and slowly went down to 62 during the test.
    - Ran some stress testing with a temporary graphic card and it went OK.
    - Ran a FurMark stress test with my original graphic card and it froze again. The GPU had a temp of 74C, the CPU a temp of 58C and the motherboard a temp of 40C or 45C (not sure which one it is in SpeedFan).
    - Ran a FurMark stress test and a CPU stress test at the same time, results: http://pastebin.com/2t6PLpdJ
    - I have been using my computer without stressing it for about 2 hours now and no crashes yet. I have also disabled the AMD Cool'n'Quiet function in the BIOS for a more regular power supply to the CPU. When I ran FurMark without Cool'n'Quiet my computer didn't freeze, but I got a "Driver Kernel Error" that recovered (and FurMark crashed), all while running a CPU stress test.
    - The computer eventually froze without me being at it, but this time my screen just went to sleep and I couldn't wake it.
    - Using the stability tester in nTune my computer froze again (in the same manner as before).
    - I noticed that SpeedFan gives me a -12V reading of -16.97V and a -5V reading of -8.78V. I wonder if these numbers are reliable, and whether they are good or bad.
    - I swapped my G-15 for another basic USB keyboard (HP) and ran FurMark for about 10 minutes with a CPU stability test running for 30 seconds out of every 60, and my computer hasn't crashed yet. Ran some more extended tests without my G-15 and it froze like it usually does.
    - Removed the nForce hard disk controller. Disabled command queuing in the NVIDIA nForce SATA Controller for both port 0 and port 1 (errors from the logs).
    - Used CPUID HWMonitor, here are the voltages: http://pastebin.com/dfM7p4jV
    - Changed some settings in the motherboard BIOS: disabled PEG Link Mode, changed AI Tuning to Standard, disabled the 1394 controller, disabled HD Audio, disabled the JMicron RAID controller and disabled SATA RAID.

    When it happens

    - When I play video games (mostly)
    - When I play Flash games (second most)
    - When I'm looking at my desktop background (it rarely happens when I have a window open, but it does, sometimes)
    - When my graphic card and my CPU are stressed.
    - Sometimes when only my graphic card is stressed.
    - Sometimes when my CPU is stressed, but it has never happened while stressing only the CPU.

    Specs

    - Windows Seven x64 Home Premium
    - Motherboard: M2N-SLI Deluxe
    - CPU: AMD Phenom 9950 x2 @ 2.6GHz
    - Memory: Kingston 4x2GB Dual Channel (pretty basic memory sticks)
    - Hard drives: was 2x250GB (Western Digital Caviar) in RAID-0 + 1TB (WD Caviar Black); I replaced the RAID array with a 750GB (WD Caviar Black) [yes, I removed the array from the RAID configuration]
    - 750W power supply
    - No overclocking. Ever.
    - There were some power-downs 4-5 weeks ago, but the problem didn't start immediately after. (I wasn't home, so my computer got shut down.)
    - Event logs (warnings, errors and critical errors) for the last 24 hours: http://pastebin.com/Bvvk31T7

    My current to-try list

    - Reinstall the drivers and software one by one and do extensive stress testing between each.
    - Update the BIOS firmware to the most recent stable one.
    - Change my motherboard.

    Status updates (keeping only the last 3)

    - (28/06 04pm) More stress testing and it still passes the tests.
    - (28/06 03pm) Been stress testing for 10 minutes straight now, and 5 minutes with both the CPU and GPU stressed at the same time.
    - (28/06 03pm) Stress-testing right now, so far no problems.

    A little hope

    Tests with FurMark and Prime95:
    - Testing Windows bare-bones: 30 minutes of stress, no freeze.
    - Installing an anti-virus and some software, restarting the computer.
    - Testing with the anti-virus and some software (no drivers installed): 30 minutes of stress, no freeze.
    - Installing audio drivers, restarting the computer.
    - Testing with the audio drivers: 30 minutes of stress, no freeze.
    - Installing the latest graphic drivers from EVGA's website (without 3D Vision since I don't use it), restarting the computer.
    - Testing with the graphic drivers: 30 minutes of stress, no freeze.
    - Configuring Windows to my liking and installing more software.

    In this situation, how can I successfully pinpoint the hardware problem (if it is a hardware problem)? I don't really have the budget to simply replace everything, and I don't have spare hardware to swap in for testing either.
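    One thing that might help pinpoint it, given that the machine ran Fedora without crashing: leave a small sensor logger running while the stress tests repeat, so the last readings before a hard freeze are preserved on disk. Below is a minimal sketch, assuming a Linux environment with the third-party psutil package installed; the log path and sampling interval are arbitrary placeholders.

        import os
        import time
        import psutil  # third-party: pip install psutil (temperature sensors are Linux-only)

        LOG_PATH = "sensor_log.csv"   # placeholder path; write it to a real disk, not tmpfs
        INTERVAL = 5                  # seconds between samples

        def sample():
            """Return current CPU load and a flat dict of temperature readings."""
            cpu = psutil.cpu_percent(interval=None)
            temps = psutil.sensors_temperatures()
            flat = {}
            for chip, entries in temps.items():
                for i, t in enumerate(entries):
                    flat["%s:%s" % (chip, t.label or i)] = t.current
            return cpu, flat

        with open(LOG_PATH, "a") as log:
            while True:
                cpu, temps = sample()
                parts = [time.strftime("%Y-%m-%d %H:%M:%S"), "cpu=%s" % cpu]
                parts += ["%s=%s" % (k, v) for k, v in sorted(temps.items())]
                log.write(",".join(parts) + "\n")
                log.flush()
                os.fsync(log.fileno())   # force the sample to disk so it survives a hard freeze
                time.sleep(INTERVAL)

    If the freezes keep happening while the logged temperatures and load stay sane, that makes a thermal cause less likely and shifts suspicion toward the power supply or motherboard.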

    Read the article

  • Tomcat IIS 7 Integration gives 503 errors on all requests.

    - by Yvan JANSSENS
    Hi, After many attempts to install Tomcat on IIS 7, I finally managed to get it working. At least I think so :-S. I finally got the 500 errors away, by setting the correct permissions. The only thing that doesn't work is ... serving stuff: neither regular stuff (like ASP, HTML files, or directory browsing) or Tomcat things work. Here are my configs: Worker.properties # The workers that your plugins should create and work with # worker.list=worker1 #------ DEFAULT ajp13 WORKER DEFINITION ------------------------------ #--------------------------------------------------------------------- # Defining a worker named ajp13 and of type ajp13 # Note that the name and the type do not have to match. # worker.worker1.port=8009 worker.worker1.host=127.0.0.1 worker.worker1.type=ajp13 URIWorkerMap.properties /|/*=worker1 # Exclude the subdirectory static: !/static|/*=worker1 # Exclude some suffixes: !*.html=worker1 !*.asp=worker1 Server.xml <?xml version='1.0' encoding='utf-8'?> <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <!-- Note: A "Server" is not itself a "Container", so you may not define subcomponents such as "Valves" at this level. Documentation at /docs/config/server.html --> <Server port="8005" shutdown="SHUTDOWN"> <!--APR library loader. Documentation at /docs/apr.html --> <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" /> <!--Initialize Jasper prior to webapps are loaded. Documentation at /docs/jasper-howto.html --> <Listener className="org.apache.catalina.core.JasperListener" /> <!-- Prevent memory leaks due to use of particular java/javax APIs--> <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" /> <!-- JMX Support for the Tomcat server. Documentation at /docs/non-existent.html --> <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" /> <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" /> <!-- Global JNDI resources Documentation at /docs/jndi-resources-howto.html --> <GlobalNamingResources> <!-- Editable user database that can also be used by UserDatabaseRealm to authenticate users --> <Resource name="UserDatabase" auth="Container" type="org.apache.catalina.UserDatabase" description="User database that can be updated and saved" factory="org.apache.catalina.users.MemoryUserDatabaseFactory" pathname="conf/tomcat-users.xml" /> </GlobalNamingResources> <!-- A "Service" is a collection of one or more "Connectors" that share a single "Container" Note: A "Service" is not itself a "Container", so you may not define subcomponents such as "Valves" at this level. 
Documentation at /docs/config/service.html --> <Service name="Catalina"> <!--The connectors can use a shared executor, you can define one or more named thread pools--> <!-- <Executor name="tomcatThreadPool" namePrefix="catalina-exec-" maxThreads="150" minSpareThreads="4"/> --> <!-- A "Connector" represents an endpoint by which requests are received and responses are returned. Documentation at : Java HTTP Connector: /docs/config/http.html (blocking & non-blocking) Java AJP Connector: /docs/config/ajp.html APR (HTTP/AJP) Connector: /docs/apr.html Define a non-SSL HTTP/1.1 Connector on port 8080 --> <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> <!-- A "Connector" using the shared thread pool--> <!-- <Connector executor="tomcatThreadPool" port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> --> <!-- Define a SSL HTTP/1.1 Connector on port 8443 This connector uses the JSSE configuration, when using APR, the connector should be using the OpenSSL style configuration described in the APR documentation --> <!-- <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" /> --> <!-- Define an AJP 1.3 Connector on port 8009 --> <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" /> <!-- An Engine represents the entry point (within Catalina) that processes every request. The Engine implementation for Tomcat stand alone analyzes the HTTP headers included with the request, and passes them on to the appropriate Host (virtual host). Documentation at /docs/config/engine.html --> <!-- You should set jvmRoute to support load-balancing via AJP ie : <Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1"> --> <Engine name="Catalina" defaultHost="localhost"> <!--For clustering, please take a look at documentation at: /docs/cluster-howto.html (simple how to) /docs/config/cluster.html (reference documentation) --> <!-- <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/> --> <!-- The request dumper valve dumps useful debugging information about the request and response data received and sent by Tomcat. Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.valves.RequestDumperValve"/> --> <!-- This Realm uses the UserDatabase configured in the global JNDI resources under the key "UserDatabase". Any edits that are performed against this UserDatabase are immediately available for use by the Realm. --> <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/> <!-- Define the default virtual host Note: XML Schema validation will not work with Xerces 2.2. --> <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true" xmlValidation="false" xmlNamespaceAware="false"> <!-- SingleSignOn valve, share authentication between web applications Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.authenticator.SingleSignOn" /> --> <!-- Access log processes all example. Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs" prefix="localhost_access_log." suffix=".txt" pattern="common" resolveHosts="false"/> --> </Host> </Engine> </Service> </Server> http://localhost:8080 is working, I can view the apps and configure them there... I'm quite new to IIS 7, I used to work with IIS 6. Thanks in advance, Yvan
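    Since Tomcat itself answers on http://localhost:8080 but everything through IIS returns 503, it can help to confirm from the IIS box that both Tomcat connectors are reachable before digging further into the redirector configuration. A small sketch, assuming Python is available on the server and the default ports from the server.xml above; it only checks that the ports accept TCP connections and does not speak the AJP protocol itself.

        import socket
        import urllib.request

        def port_open(host, port, timeout=3):
            """Return True if a plain TCP connection to host:port succeeds."""
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return True
            except OSError:
                return False

        # Tomcat's own HTTP connector (bypasses IIS and the ISAPI redirector entirely)
        try:
            with urllib.request.urlopen("http://127.0.0.1:8080/", timeout=5) as resp:
                print("HTTP connector on 8080:", resp.status)
        except Exception as exc:
            print("HTTP connector on 8080 failed:", exc)

        # AJP connector that worker1 in the worker.properties above points at
        print("AJP connector on 8009 reachable:", port_open("127.0.0.1", 8009))

    If both checks pass, the 503 is more likely produced on the IIS side (a stopped application pool, or the ISAPI filter/handler mapping for the site) than by Tomcat itself, which narrows where to look next.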

    Read the article

  • Troubleshooting latency spikes on ESXi NFS datastores

    - by exo_cw
    I'm experiencing fsync latencies of around five seconds on NFS datastores in ESXi, triggered by certain VMs. I suspect this might be caused by VMs using NCQ/TCQ, as this does not happen with virtual IDE drives. This can be reproduced using fsync-tester (by Ted Ts'o) and ioping. For example using a Grml live system with a 8GB disk: Linux 2.6.33-grml64: root@dynip211 /mnt/sda # ./fsync-tester fsync time: 5.0391 fsync time: 5.0438 fsync time: 5.0300 fsync time: 0.0231 fsync time: 0.0243 fsync time: 5.0382 fsync time: 5.0400 [... goes on like this ...] That is 5 seconds, not milliseconds. This is even creating IO-latencies on a different VM running on the same host and datastore: root@grml /mnt/sda/ioping-0.5 # ./ioping -i 0.3 -p 20 . 4096 bytes from . (reiserfs /dev/sda): request=1 time=7.2 ms 4096 bytes from . (reiserfs /dev/sda): request=2 time=0.9 ms 4096 bytes from . (reiserfs /dev/sda): request=3 time=0.9 ms 4096 bytes from . (reiserfs /dev/sda): request=4 time=0.9 ms 4096 bytes from . (reiserfs /dev/sda): request=5 time=4809.0 ms 4096 bytes from . (reiserfs /dev/sda): request=6 time=1.0 ms 4096 bytes from . (reiserfs /dev/sda): request=7 time=1.2 ms 4096 bytes from . (reiserfs /dev/sda): request=8 time=1.1 ms 4096 bytes from . (reiserfs /dev/sda): request=9 time=1.3 ms 4096 bytes from . (reiserfs /dev/sda): request=10 time=1.2 ms 4096 bytes from . (reiserfs /dev/sda): request=11 time=1.0 ms 4096 bytes from . (reiserfs /dev/sda): request=12 time=4950.0 ms When I move the first VM to local storage it looks perfectly normal: root@dynip211 /mnt/sda # ./fsync-tester fsync time: 0.0191 fsync time: 0.0201 fsync time: 0.0203 fsync time: 0.0206 fsync time: 0.0192 fsync time: 0.0231 fsync time: 0.0201 [... tried that for one hour: no spike ...] Things I've tried that made no difference: Tested several ESXi Builds: 381591, 348481, 260247 Tested on different hardware, different Intel and AMD boxes Tested with different NFS servers, all show the same behavior: OpenIndiana b147 (ZFS sync always or disabled: no difference) OpenIndiana b148 (ZFS sync always or disabled: no difference) Linux 2.6.32 (sync or async: no difference) It makes no difference if the NFS server is on the same machine (as a virtual storage appliance) or on a different host Guest OS tested, showing problems: Windows 7 64 Bit (using CrystalDiskMark, latency spikes happen mostly during preparing phase) Linux 2.6.32 (fsync-tester + ioping) Linux 2.6.38 (fsync-tester + ioping) I could not reproduce this problem on Linux 2.6.18 VMs. Another workaround is to use virtual IDE disks (vs SCSI/SAS), but that is limiting performance and the number of drives per VM. Update 2011-06-30: The latency spikes seem to happen more often if the application writes in multiple small blocks before fsync. For example fsync-tester does this (strace output): pwrite(3, "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"..., 1048576, 0) = 1048576 fsync(3) = 0 ioping does this while preparing the file: [lots of pwrites] pwrite(3, "********************************"..., 4096, 1036288) = 4096 pwrite(3, "********************************"..., 4096, 1040384) = 4096 pwrite(3, "********************************"..., 4096, 1044480) = 4096 fsync(3) = 0 The setup phase of ioping almost always hangs, while fsync-tester sometimes works fine. Is someone capable of updating fsync-tester to write multiple small blocks? My C skills suck ;) Update 2011-07-02: This problem does not occur with iSCSI. I tried this with the OpenIndiana COMSTAR iSCSI server. 
But iSCSI does not give you easy access to the VMDK files so you can move them between hosts with snapshots and rsync. Update 2011-07-06: This is part of a wireshark capture, captured by a third VM on the same vSwitch. This all happens on the same host, no physical network involved. I've started ioping around time 20. There were no packets sent until the five second delay was over: No. Time Source Destination Protocol Info 1082 16.164096 192.168.250.10 192.168.250.20 NFS V3 WRITE Call (Reply In 1085), FH:0x3eb56466 Offset:0 Len:84 FILE_SYNC 1083 16.164112 192.168.250.10 192.168.250.20 NFS V3 WRITE Call (Reply In 1086), FH:0x3eb56f66 Offset:0 Len:84 FILE_SYNC 1084 16.166060 192.168.250.20 192.168.250.10 TCP nfs > iclcnet-locate [ACK] Seq=445 Ack=1057 Win=32806 Len=0 TSV=432016 TSER=769110 1085 16.167678 192.168.250.20 192.168.250.10 NFS V3 WRITE Reply (Call In 1082) Len:84 FILE_SYNC 1086 16.168280 192.168.250.20 192.168.250.10 NFS V3 WRITE Reply (Call In 1083) Len:84 FILE_SYNC 1087 16.168417 192.168.250.10 192.168.250.20 TCP iclcnet-locate > nfs [ACK] Seq=1057 Ack=773 Win=4163 Len=0 TSV=769110 TSER=432016 1088 23.163028 192.168.250.10 192.168.250.20 NFS V3 GETATTR Call (Reply In 1089), FH:0x0bb04963 1089 23.164541 192.168.250.20 192.168.250.10 NFS V3 GETATTR Reply (Call In 1088) Directory mode:0777 uid:0 gid:0 1090 23.274252 192.168.250.10 192.168.250.20 TCP iclcnet-locate > nfs [ACK] Seq=1185 Ack=889 Win=4163 Len=0 TSV=769821 TSER=432716 1091 24.924188 192.168.250.10 192.168.250.20 RPC Continuation 1092 24.924210 192.168.250.10 192.168.250.20 RPC Continuation 1093 24.924216 192.168.250.10 192.168.250.20 RPC Continuation 1094 24.924225 192.168.250.10 192.168.250.20 RPC Continuation 1095 24.924555 192.168.250.20 192.168.250.10 TCP nfs > iclcnet_svinfo [ACK] Seq=6893 Ack=1118613 Win=32625 Len=0 TSV=432892 TSER=769986 1096 24.924626 192.168.250.10 192.168.250.20 RPC Continuation 1097 24.924635 192.168.250.10 192.168.250.20 RPC Continuation 1098 24.924643 192.168.250.10 192.168.250.20 RPC Continuation 1099 24.924649 192.168.250.10 192.168.250.20 RPC Continuation 1100 24.924653 192.168.250.10 192.168.250.20 RPC Continuation 2nd Update 2011-07-06: There seems to be some influence from TCP window sizes. I was not able to reproduce this problem using FreeNAS (based on FreeBSD) as a NFS server. The wireshark captures showed TCP window updates to 29127 bytes in regular intervals. I did not see them with OpenIndiana, which uses larger window sizes by default. I can no longer reproduce this problem if I set the following options in OpenIndiana and restart the NFS server: ndd -set /dev/tcp tcp_recv_hiwat 8192 # default is 128000 ndd -set /dev/tcp tcp_max_buf 1048575 # default is 1048576 But this kills performance: Writing from /dev/zero to a file with dd_rescue goes from 170MB/s to 80MB/s. Update 2011-07-07: I've uploaded this tcpdump capture (can be analyzed with wireshark). In this case 192.168.250.2 is the NFS server (OpenIndiana b148) and 192.168.250.10 is the ESXi host. Things I've tested during this capture: Started "ioping -w 5 -i 0.2 ." at time 30, 5 second hang in setup, completed at time 40. Started "ioping -w 5 -i 0.2 ." at time 60, 5 second hang in setup, completed at time 70. 
Started "fsync-tester" at time 90, with the following output, stopped at time 120: fsync time: 0.0248 fsync time: 5.0197 fsync time: 5.0287 fsync time: 5.0242 fsync time: 5.0225 fsync time: 0.0209 2nd Update 2011-07-07: Tested another NFS server VM, this time NexentaStor 3.0.5 community edition: Shows the same problems. Update 2011-07-31: I can also reproduce this problem on the new ESXi build 4.1.0.433742.
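    Regarding the request for a version of fsync-tester that writes multiple small blocks before syncing: a rough stand-in is straightforward in Python, mimicking ioping's preparation pattern (many small pwrites, then one fsync) and printing the fsync latency. This is only a sketch, assuming Python 3 in the guest; the file name, block size and counts are arbitrary choices.

        import os
        import time

        PATH = "fsync-multiblock.tmp"   # create it on the datastore-backed disk under test
        BLOCK = b"*" * 4096             # 4 KiB blocks, like ioping's preparation phase
        BLOCKS_PER_ROUND = 256          # roughly 1 MiB written in small chunks per round

        fd = os.open(PATH, os.O_CREAT | os.O_WRONLY, 0o644)
        try:
            for _ in range(20):
                offset = 0
                for _ in range(BLOCKS_PER_ROUND):
                    os.pwrite(fd, BLOCK, offset)   # many small writes before the sync
                    offset += len(BLOCK)
                start = time.time()
                os.fsync(fd)                       # the call that stalls for ~5 seconds on NFS here
                print("fsync time: %.4f" % (time.time() - start))
                time.sleep(1)
        finally:
            os.close(fd)
            os.remove(PATH)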

    Read the article

  • ProFTPd server on Ubuntu getting access denied message when successfully authenticated?

    - by exxoid
    I have a Ubuntu box with a ProFTPD 1.3.4a Server, when I try to log in via my FTP Client I cannot do anything as it does not allow me to list directories; I have tried logging in as root and as a regular user and tried accessing different paths within the FTP Server. The error I get in my FTP Client is: Status: Retrieving directory listing... Command: CDUP Response: 250 CDUP command successful Command: PWD Response: 257 "/var" is the current directory Command: PASV Response: 227 Entering Passive Mode (172,16,4,22,237,205). Command: MLSD Response: 550 Access is denied. Error: Failed to retrieve directory listing Any idea? Here is the config of my proftpd: # # /etc/proftpd/proftpd.conf -- This is a basic ProFTPD configuration file. # To really apply changes, reload proftpd after modifications, if # it runs in daemon mode. It is not required in inetd/xinetd mode. # # Includes DSO modules Include /etc/proftpd/modules.conf # Set off to disable IPv6 support which is annoying on IPv4 only boxes. UseIPv6 off # If set on you can experience a longer connection delay in many cases. IdentLookups off ServerName "Drupal Intranet" ServerType standalone ServerIdent on "FTP Server ready" DeferWelcome on # Set the user and group that the server runs as User nobody Group nogroup MultilineRFC2228 on DefaultServer on ShowSymlinks on TimeoutNoTransfer 600 TimeoutStalled 600 TimeoutIdle 1200 DisplayLogin welcome.msg DisplayChdir .message true ListOptions "-l" DenyFilter \*.*/ # Use this to jail all users in their homes # DefaultRoot ~ # Users require a valid shell listed in /etc/shells to login. # Use this directive to release that constrain. # RequireValidShell off # Port 21 is the standard FTP port. Port 21 # In some cases you have to specify passive ports range to by-pass # firewall limitations. Ephemeral ports can be used for that, but # feel free to use a more narrow range. # PassivePorts 49152 65534 # If your host was NATted, this option is useful in order to # allow passive tranfers to work. You have to use your public # address and opening the passive ports used on your firewall as well. # MasqueradeAddress 1.2.3.4 # This is useful for masquerading address with dynamic IPs: # refresh any configured MasqueradeAddress directives every 8 hours <IfModule mod_dynmasq.c> # DynMasqRefresh 28800 </IfModule> # To prevent DoS attacks, set the maximum number of child processes # to 30. If you need to allow more than 30 concurrent connections # at once, simply increase this value. Note that this ONLY works # in standalone mode, in inetd mode you should use an inetd server # that allows you to limit maximum number of processes per service # (such as xinetd) MaxInstances 30 # Set the user and group that the server normally runs at. # Umask 022 is a good standard umask to prevent new files and dirs # (second parm) from being group and world writable. Umask 022 022 # Normally, we want files to be overwriteable. AllowOverwrite on # Uncomment this if you are using NIS or LDAP via NSS to retrieve passwords: # PersistentPasswd off # This is required to use both PAM-based authentication and local passwords AuthPAMConfig proftpd AuthOrder mod_auth_pam.c* mod_auth_unix.c # Be warned: use of this directive impacts CPU average load! # Uncomment this if you like to see progress and transfer rate with ftpwho # in downloads. That is not needed for uploads rates. 
# UseSendFile off TransferLog /var/log/proftpd/xferlog SystemLog /var/log/proftpd/proftpd.log # Logging onto /var/log/lastlog is enabled but set to off by default #UseLastlog on # In order to keep log file dates consistent after chroot, use timezone info # from /etc/localtime. If this is not set, and proftpd is configured to # chroot (e.g. DefaultRoot or <Anonymous>), it will use the non-daylight # savings timezone regardless of whether DST is in effect. #SetEnv TZ :/etc/localtime <IfModule mod_quotatab.c> QuotaEngine off </IfModule> <IfModule mod_ratio.c> Ratios off </IfModule> # Delay engine reduces impact of the so-called Timing Attack described in # http://www.securityfocus.com/bid/11430/discuss # It is on by default. <IfModule mod_delay.c> DelayEngine on </IfModule> <IfModule mod_ctrls.c> ControlsEngine off ControlsMaxClients 2 ControlsLog /var/log/proftpd/controls.log ControlsInterval 5 ControlsSocket /var/run/proftpd/proftpd.sock </IfModule> <IfModule mod_ctrls_admin.c> AdminControlsEngine off </IfModule> # # Alternative authentication frameworks # #Include /etc/proftpd/ldap.conf #Include /etc/proftpd/sql.conf # # This is used for FTPS connections # #Include /etc/proftpd/tls.conf # # Useful to keep VirtualHost/VirtualRoot directives separated # #Include /etc/proftpd/virtuals.con # A basic anonymous configuration, no upload directories. # <Anonymous ~ftp> # User ftp # Group nogroup # # We want clients to be able to login with "anonymous" as well as "ftp" # UserAlias anonymous ftp # # Cosmetic changes, all files belongs to ftp user # DirFakeUser on ftp # DirFakeGroup on ftp # # RequireValidShell off # # # Limit the maximum number of anonymous logins # MaxClients 10 # # # We want 'welcome.msg' displayed at login, and '.message' displayed # # in each newly chdired directory. # DisplayLogin welcome.msg # DisplayChdir .message # # # Limit WRITE everywhere in the anonymous chroot # <Directory *> # <Limit WRITE> # DenyAll # </Limit> # </Directory> # # # Uncomment this if you're brave. # # <Directory incoming> # # # Umask 022 is a good standard umask to prevent new files and dirs # # # (second parm) from being group and world writable. # # Umask 022 022 # # <Limit READ WRITE> # # DenyAll # # </Limit> # # <Limit STOR> # # AllowAll # # </Limit> # # </Directory> # # </Anonymous> # Include other custom configuration files Include /etc/proftpd/conf.d/ UseReverseDNS off <Global> RootLogin on UseFtpUsers on ServerIdent on DefaultChdir /var/www DeleteAbortedStores on LoginPasswordPrompt on AccessGrantMsg "You have been authenticated successfully." </Global> Any idea what could be wrong? Thanks for your help!
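    Since CDUP, PWD and PASV succeed and only MLSD returns 550, it is worth checking whether a plain LIST works where MLSD does not; that distinguishes a problem with the MLSD handling (mod_facts) from a genuine permission problem on the directory. A quick sketch with Python's standard ftplib; the host and credentials below are placeholders.

        from ftplib import FTP, error_perm

        HOST = "172.16.4.22"     # placeholder: the address seen in the PASV reply above
        USER = "someuser"        # placeholder credentials
        PASSWORD = "secret"

        ftp = FTP(HOST, timeout=10)
        ftp.login(USER, PASSWORD)
        ftp.cwd("/var/www")      # DefaultChdir from the <Global> section

        try:
            print("--- LIST ---")
            ftp.retrlines("LIST")            # classic directory listing
        except error_perm as exc:
            print("LIST refused:", exc)

        try:
            print("--- MLSD ---")
            for name, facts in ftp.mlsd():   # the command most modern clients send first
                print(name, facts)
        except error_perm as exc:
            print("MLSD refused:", exc)

        ftp.quit()

    If LIST succeeds while MLSD is refused, the next place to look is the server's MLSD support (mod_facts) or simply telling the client not to use MLSD, rather than filesystem permissions.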

    Read the article

  • Why is Varnish not caching?

    - by Justin
    I am troubleshooting the setup of Varnish 3.x on my Ubuntu server. I'm running Drupal 7 on two sites set up on the box, via named-based vhosts. Before trying to get Varnish to play nice with Drupal I'm trying to just get Varnish to a PNG from cache. Here are the headers I get from a curl -I request of the PNG file: HTTP/1.1 200 OK Server: Apache/2.2.22 (Ubuntu) Last-Modified: Sun, 07 Oct 2012 21:18:59 GMT ETag: "a57c2-3850-4cb7ea73db6c0" Accept-Ranges: bytes Content-Length: 14416 Cache-Control: max-age=1209600 Expires: Thu, 25 Oct 2012 22:55:14 GMT Content-Type: image/png Accept-Ranges: bytes Date: Thu, 11 Oct 2012 22:55:14 GMT X-Varnish: 1766703058 Age: 0 Via: 1.1 varnish Connection: keep-alive X-Varnish-Cache: MISS Here is the Varnish VCL file I'm using (It's a default VCL configuration designed for Drupal): # Default backend definition. Set this to point to your content # server. # backend default { .host = "127.0.0.1"; .port = "8080"; } # Respond to incoming requests. sub vcl_recv { # Use anonymous, cached pages if all backends are down. if (!req.backend.healthy) { unset req.http.Cookie; } # Allow the backend to serve up stale content if it is responding slowly. set req.grace = 6h; # Pipe these paths directly to Apache for streaming. #if (req.url ~ "^/admin/content/backup_migrate/export") { # return (pipe); #} # Do not cache these paths. if (req.url ~ "^/status\.php$" || req.url ~ "^/update\.php$" || req.url ~ "^/admin$" || req.url ~ "^/admin/.*$" || req.url ~ "^/flag/.*$" || req.url ~ "^.*/ajax/.*$" || req.url ~ "^.*/ahah/.*$") { return (pass); } # Do not allow outside access to cron.php or install.php. #if (req.url ~ "^/(cron|install)\.php$" && !client.ip ~ internal) { # Have Varnish throw the error directly. # error 404 "Page not found."; # Use a custom error page that you've defined in Drupal at the path "404". # set req.url = "/404"; #} # Always cache the following file types for all users. This list of extensions # appears twice, once here and again in vcl_fetch so make sure you edit both # and keep them equal. if (req.url ~ "(?i)\.(pdf|asc|dat|txt|doc|xls|ppt|tgz|csv|png|gif|jpeg|jpg|ico|swf|css|js)(\?.*)?$") { unset req.http.Cookie; } # Remove all cookies that Drupal doesn't need to know about. We explicitly # list the ones that Drupal does need, the SESS and NO_CACHE. If, after # running this code we find that either of these two cookies remains, we # will pass as the page cannot be cached. if (req.http.Cookie) { # 1. Append a semi-colon to the front of the cookie string. # 2. Remove all spaces that appear after semi-colons. # 3. Match the cookies we want to keep, adding the space we removed # previously back. (\1) is first matching group in the regsuball. # 4. Remove all other cookies, identifying them by the fact that they have # no space after the preceding semi-colon. # 5. Remove all spaces and semi-colons from the beginning and end of the # cookie string. set req.http.Cookie = ";" + req.http.Cookie; set req.http.Cookie = regsuball(req.http.Cookie, "; +", ";"); set req.http.Cookie = regsuball(req.http.Cookie, ";(SESS[a-z0-9]+|SSESS[a-z0-9]+|NO_CACHE)=", "; \1="); set req.http.Cookie = regsuball(req.http.Cookie, ";[^ ][^;]*", ""); set req.http.Cookie = regsuball(req.http.Cookie, "^[; ]+|[; ]+$", ""); if (req.http.Cookie == "") { # If there are no remaining cookies, remove the cookie header. If there # aren't any cookie headers, Varnish's default behavior will be to cache # the page. 
unset req.http.Cookie; } else { # If there is any cookies left (a session or NO_CACHE cookie), do not # cache the page. Pass it on to Apache directly. return (pass); } } } # Set a header to track a cache HIT/MISS. sub vcl_deliver { if (obj.hits > 0) { set resp.http.X-Varnish-Cache = "HIT"; } else { set resp.http.X-Varnish-Cache = "MISS"; } } # Code determining what to do when serving items from the Apache servers. # beresp == Back-end response from the web server. sub vcl_fetch { # We need this to cache 404s, 301s, 500s. Otherwise, depending on backend but # definitely in Drupal's case these responses are not cacheable by default. if (beresp.status == 404 || beresp.status == 301 || beresp.status == 500) { set beresp.ttl = 10m; } # Don't allow static files to set cookies. # (?i) denotes case insensitive in PCRE (perl compatible regular expressions). # This list of extensions appears twice, once here and again in vcl_recv so # make sure you edit both and keep them equal. if (req.url ~ "(?i)\.(pdf|asc|dat|txt|doc|xls|ppt|tgz|csv|png|gif|jpeg|jpg|ico|swf|css|js)(\?.*)?$") { unset beresp.http.set-cookie; } # Allow items to be stale if needed. set beresp.grace = 6h; } # In the event of an error, show friendlier messages. sub vcl_error { # Redirect to some other URL in the case of a homepage failure. #if (req.url ~ "^/?$") { # set obj.status = 302; # set obj.http.Location = "http://backup.example.com/"; #} # Otherwise redirect to the homepage, which will likely be in the cache. set obj.http.Content-Type = "text/html; charset=utf-8"; synthetic {" <html> <head> <title>Page Unavailable</title> <style> body { background: #303030; text-align: center; color: white; } #page { border: 1px solid #CCC; width: 500px; margin: 100px auto 0; padding: 30px; background: #323232; } a, a:link, a:visited { color: #CCC; } .error { color: #222; } </style> </head> <body onload="setTimeout(function() { window.location = '/' }, 5000)"> <div id="page"> <h1 class="title">Page Unavailable</h1> <p>The page you requested is temporarily unavailable.</p> <p>We're redirecting you to the <a href="/">homepage</a> in 5 seconds.</p> <div class="error">(Error "} + obj.status + " " + obj.response + {")</div> </div> </body> </html> "}; return (deliver); } I'm getting a MISS and age 0 every time. If I'm understanding correctly, this means the file isn't being returned from Varnish's cache. Is there a problem with my Varnish config?
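    One quick way to double-check what Varnish is doing is to request the same PNG twice with GET (note that curl -I sends a HEAD request, which is not quite what a browser does) and compare the cache-related response headers. A small sketch using only the Python standard library; the URL is a placeholder for any static file served through Varnish on this box.

        import urllib.request

        URL = "http://example.com/sites/default/files/test.png"   # placeholder PNG behind Varnish
        HEADERS = ("X-Varnish", "X-Varnish-Cache", "Age", "Cache-Control")

        def fetch(url):
            req = urllib.request.Request(url, method="GET")
            with urllib.request.urlopen(req) as resp:
                return {name: resp.headers.get(name) for name in HEADERS}

        first = fetch(URL)
        second = fetch(URL)
        print("first :", first)
        print("second:", second)

        # On a cache hit the second response should show X-Varnish-Cache: HIT, an Age
        # greater than 0, and two transaction IDs in the X-Varnish header.
        if second.get("X-Varnish-Cache") == "HIT":
            print("Second request came from cache.")
        else:
            print("Still a MISS: the object is not being stored.")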

    Read the article

  • Introducing the Earthquake Locator – A Bing Maps Silverlight Application, part 1

    - by Bobby Diaz
    Update: Live demo and source code now available!  The recent wave of earthquakes (no pun intended) being reported in the news got me wondering about the frequency and severity of earthquakes around the world. Since I’ve been doing a lot of Silverlight development lately, I decided to scratch my curiosity with a nice little Bing Maps application that will show the location and relative strength of recent seismic activity. Here is a list of technologies this application will utilize, so be sure to have everything downloaded and installed if you plan on following along. Silverlight 3 WCF RIA Services Bing Maps Silverlight Control * Managed Extensibility Framework (optional) MVVM Light Toolkit (optional) log4net (optional) * If you are new to Bing Maps or have not signed up for a Developer Account, you will need to visit www.bingmapsportal.com to request a Bing Maps key for your application. Getting Started We start out by creating a new Silverlight Application called EarthquakeLocator and specify that we want to automatically create the Web Application Project with RIA Services enabled. I cleaned up the web app by removing the Default.aspx and EarthquakeLocatorTestPage.html. Then I renamed the EarthquakeLocatorTestPage.aspx to Default.aspx and set it as my start page. I also set the development server to use a specific port, as shown below. RIA Services Next, I created a Services folder in the EarthquakeLocator.Web project and added a new Domain Service Class called EarthquakeService.cs. This is the RIA Services Domain Service that will provide earthquake data for our client application. I am not using LINQ to SQL or Entity Framework, so I will use the <empty domain service class> option. We will be pulling data from an external Atom feed, but this example could just as easily pull data from a database or another web service. This is an important distinction to point out because each scenario I just mentioned could potentially use a different Domain Service base class (i.e. LinqToSqlDomainService<TDataContext>). Now we can start adding Query methods to our EarthquakeService that pull data from the USGS web site. Here is the complete code for our service class: using System; using System.Collections.Generic; using System.IO; using System.Linq; using System.ServiceModel.Syndication; using System.Web.DomainServices; using System.Web.Ria; using System.Xml; using log4net; using EarthquakeLocator.Web.Model;   namespace EarthquakeLocator.Web.Services {     /// <summary>     /// Provides earthquake data to client applications.     /// </summary>     [EnableClientAccess()]     public class EarthquakeService : DomainService     {         private static readonly ILog log = LogManager.GetLogger(typeof(EarthquakeService));           // USGS Data Feeds: http://earthquake.usgs.gov/earthquakes/catalogs/         private const string FeedForPreviousDay =             "http://earthquake.usgs.gov/earthquakes/catalogs/1day-M2.5.xml";         private const string FeedForPreviousWeek =             "http://earthquake.usgs.gov/earthquakes/catalogs/7day-M2.5.xml";           /// <summary>         /// Gets the earthquake data for the previous week.         
/// </summary>         /// <returns>A queryable collection of <see cref="Earthquake"/> objects.</returns>         public IQueryable<Earthquake> GetEarthquakes()         {             var feed = GetFeed(FeedForPreviousWeek);             var list = new List<Earthquake>();               if ( feed != null )             {                 foreach ( var entry in feed.Items )                 {                     var quake = CreateEarthquake(entry);                     if ( quake != null )                     {                         list.Add(quake);                     }                 }             }               return list.AsQueryable();         }           /// <summary>         /// Creates an <see cref="Earthquake"/> object for each entry in the Atom feed.         /// </summary>         /// <param name="entry">The Atom entry.</param>         /// <returns></returns>         private Earthquake CreateEarthquake(SyndicationItem entry)         {             Earthquake quake = null;             string title = entry.Title.Text;             string summary = entry.Summary.Text;             string point = GetElementValue<String>(entry, "point");             string depth = GetElementValue<String>(entry, "elev");             string utcTime = null;             string localTime = null;             string depthDesc = null;             double? magnitude = null;             double? latitude = null;             double? longitude = null;             double? depthKm = null;               if ( !String.IsNullOrEmpty(title) && title.StartsWith("M") )             {                 title = title.Substring(2, title.IndexOf(',')-3).Trim();                 magnitude = TryParse(title);             }             if ( !String.IsNullOrEmpty(point) )             {                 var values = point.Split(' ');                 if ( values.Length == 2 )                 {                     latitude = TryParse(values[0]);                     longitude = TryParse(values[1]);                 }             }             if ( !String.IsNullOrEmpty(depth) )             {                 depthKm = TryParse(depth);                 if ( depthKm != null )                 {                     depthKm = Math.Round((-1 * depthKm.Value) / 100, 2);                 }             }             if ( !String.IsNullOrEmpty(summary) )             {                 summary = summary.Replace("</p>", "");                 var values = summary.Split(                     new string[] { "<p>" },                     StringSplitOptions.RemoveEmptyEntries);                   if ( values.Length == 3 )                 {                     var times = values[1].Split(                         new string[] { "<br>" },                         StringSplitOptions.RemoveEmptyEntries);                       if ( times.Length > 0 )                     {                         utcTime = times[0];                     }                     if ( times.Length > 1 )                     {                         localTime = times[1];                     }                       depthDesc = values[2];                     depthDesc = "Depth: " + depthDesc.Substring(depthDesc.IndexOf(":") + 2);                 }             }               if ( latitude != null && longitude != null )             {                 quake = new Earthquake()                 {                     Id = entry.Id,                     Title = entry.Title.Text,                     Summary = entry.Summary.Text,                     Date = entry.LastUpdatedTime.DateTime,                     Url = 
entry.Links.Select(l => Path.Combine(l.BaseUri.OriginalString,                         l.Uri.OriginalString)).FirstOrDefault(),                     Age = entry.Categories.Where(c => c.Label == "Age")                         .Select(c => c.Name).FirstOrDefault(),                     Magnitude = magnitude.GetValueOrDefault(),                     Latitude = latitude.GetValueOrDefault(),                     Longitude = longitude.GetValueOrDefault(),                     DepthInKm = depthKm.GetValueOrDefault(),                     DepthDesc = depthDesc,                     UtcTime = utcTime,                     LocalTime = localTime                 };             }               return quake;         }           private T GetElementValue<T>(SyndicationItem entry, String name)         {             var el = entry.ElementExtensions.Where(e => e.OuterName == name).FirstOrDefault();             T value = default(T);               if ( el != null )             {                 value = el.GetObject<T>();             }               return value;         }           private double? TryParse(String value)         {             double d;             if ( Double.TryParse(value, out d) )             {                 return d;             }             return null;         }           /// <summary>         /// Gets the feed at the specified URL.         /// </summary>         /// <param name="url">The URL.</param>         /// <returns>A <see cref="SyndicationFeed"/> object.</returns>         public static SyndicationFeed GetFeed(String url)         {             SyndicationFeed feed = null;               try             {                 log.Debug("Loading RSS feed: " + url);                   using ( var reader = XmlReader.Create(url) )                 {                     feed = SyndicationFeed.Load(reader);                 }             }             catch ( Exception ex )             {                 log.Error("Error occurred while loading RSS feed: " + url, ex);             }               return feed;         }     } }   The only method that will be generated in the client side proxy class, EarthquakeContext, will be the GetEarthquakes() method. The reason being that it is the only public instance method and it returns an IQueryable<Earthquake> collection that can be consumed by the client application. GetEarthquakes() calls the static GetFeed(String) method, which utilizes the built in SyndicationFeed API to load the external data feed. You will need to add a reference to the System.ServiceModel.Web library in order to take advantage of the RSS/Atom reader. The API will also allow you to create your own feeds to serve up in your applications. Model I have also created a Model folder and added a new class, Earthquake.cs. The Earthquake object will hold the various properties returned from the Atom feed. Here is a sample of the code for that class. Notice the [Key] attribute on the Id property, which is required by RIA Services to uniquely identify the entity. using System; using System.Collections.Generic; using System.Linq; using System.Runtime.Serialization; using System.ComponentModel.DataAnnotations;   namespace EarthquakeLocator.Web.Model {     /// <summary>     /// Represents an earthquake occurrence and related information.     /// </summary>     [DataContract]     public class Earthquake     {         /// <summary>         /// Gets or sets the id.         
/// </summary>         /// <value>The id.</value>         [Key]         [DataMember]         public string Id { get; set; }           /// <summary>         /// Gets or sets the title.         /// </summary>         /// <value>The title.</value>         [DataMember]         public string Title { get; set; }           /// <summary>         /// Gets or sets the summary.         /// </summary>         /// <value>The summary.</value>         [DataMember]         public string Summary { get; set; }           // additional properties omitted     } }   View Model The recent trend to use the MVVM pattern for WPF and Silverlight provides a great way to separate the data and behavior logic out of the user interface layer of your client applications. I have chosen to use the MVVM Light Toolkit for the Earthquake Locator, but there are other options out there if you prefer another library. That said, I went ahead and created a ViewModel folder in the Silverlight project and added a EarthquakeViewModel class that derives from ViewModelBase. Here is the code: using System; using System.Collections.ObjectModel; using System.ComponentModel.Composition; using System.ComponentModel.Composition.Hosting; using Microsoft.Maps.MapControl; using GalaSoft.MvvmLight; using EarthquakeLocator.Web.Model; using EarthquakeLocator.Web.Services;   namespace EarthquakeLocator.ViewModel {     /// <summary>     /// Provides data for views displaying earthquake information.     /// </summary>     public class EarthquakeViewModel : ViewModelBase     {         [Import]         public EarthquakeContext Context;           /// <summary>         /// Initializes a new instance of the <see cref="EarthquakeViewModel"/> class.         /// </summary>         public EarthquakeViewModel()         {             var catalog = new AssemblyCatalog(GetType().Assembly);             var container = new CompositionContainer(catalog);             container.ComposeParts(this);             Initialize();         }           /// <summary>         /// Initializes a new instance of the <see cref="EarthquakeViewModel"/> class.         /// </summary>         /// <param name="context">The context.</param>         public EarthquakeViewModel(EarthquakeContext context)         {             Context = context;             Initialize();         }           private void Initialize()         {             MapCenter = new Location(20, -170);             ZoomLevel = 2;         }           #region Private Methods           private void OnAutoLoadDataChanged()         {             LoadEarthquakes();         }           private void LoadEarthquakes()         {             var query = Context.GetEarthquakesQuery();             Context.Earthquakes.Clear();               Context.Load(query, (op) =>             {                 if ( !op.HasError )                 {                     foreach ( var item in op.Entities )                     {                         Earthquakes.Add(item);                     }                 }             }, null);         }           #endregion Private Methods           #region Properties           private bool autoLoadData;         /// <summary>         /// Gets or sets a value indicating whether to auto load data.         
/// </summary>         /// <value><c>true</c> if auto loading data; otherwise, <c>false</c>.</value>         public bool AutoLoadData         {             get { return autoLoadData; }             set             {                 if ( autoLoadData != value )                 {                     autoLoadData = value;                     RaisePropertyChanged("AutoLoadData");                     OnAutoLoadDataChanged();                 }             }         }           private ObservableCollection<Earthquake> earthquakes;         /// <summary>         /// Gets the collection of earthquakes to display.         /// </summary>         /// <value>The collection of earthquakes.</value>         public ObservableCollection<Earthquake> Earthquakes         {             get             {                 if ( earthquakes == null )                 {                     earthquakes = new ObservableCollection<Earthquake>();                 }                   return earthquakes;             }         }           private Location mapCenter;         /// <summary>         /// Gets or sets the map center.         /// </summary>         /// <value>The map center.</value>         public Location MapCenter         {             get { return mapCenter; }             set             {                 if ( mapCenter != value )                 {                     mapCenter = value;                     RaisePropertyChanged("MapCenter");                 }             }         }           private double zoomLevel;         /// <summary>         /// Gets or sets the zoom level.         /// </summary>         /// <value>The zoom level.</value>         public double ZoomLevel         {             get { return zoomLevel; }             set             {                 if ( zoomLevel != value )                 {                     zoomLevel = value;                     RaisePropertyChanged("ZoomLevel");                 }             }         }           #endregion Properties     } }   The EarthquakeViewModel class contains all of the properties that will be bound to by the various controls in our views. Be sure to read through the LoadEarthquakes() method, which handles calling the GetEarthquakes() method in our EarthquakeService via the EarthquakeContext proxy, and also transfers the loaded entities into the view model’s Earthquakes collection. Another thing to notice is what’s going on in the default constructor. I chose to use the Managed Extensibility Framework (MEF) for my composition needs, but you can use any dependency injection library or none at all. To allow the EarthquakeContext class to be discoverable by MEF, I added the following partial class so that I could supply the appropriate [Export] attribute: using System; using System.ComponentModel.Composition;   namespace EarthquakeLocator.Web.Services {     /// <summary>     /// The client side proxy for the EarthquakeService class.     /// </summary>     [Export]     public partial class EarthquakeContext     {     } }   One last piece I wanted to point out before moving on to the user interface, I added a client side partial class for the Earthquake entity that contains helper properties that we will bind to later: using System;   namespace EarthquakeLocator.Web.Model {     /// <summary>     /// Represents an earthquake occurrence and related information.     /// </summary>     public partial class Earthquake     {         /// <summary>         /// Gets the location based on the current Latitude/Longitude.         
/// </summary>         /// <value>The location.</value>         public string Location         {             get { return String.Format("{0},{1}", Latitude, Longitude); }         }           /// <summary>         /// Gets the size based on the Magnitude.         /// </summary>         /// <value>The size.</value>         public double Size         {             get { return (Magnitude * 3); }         }     } }   View Now the fun part! Usually, I would create a Views folder to place all of my View controls in, but I took the easy way out and added the following XAML code to the default MainPage.xaml file. Be sure to add the bing prefix associating the Microsoft.Maps.MapControl namespace after adding the assembly reference to your project. The MVVM Light Toolkit project templates come with a ViewModelLocator class that you can use via a static resource, but I am instantiating the EarthquakeViewModel directly in my user control. I am setting the AutoLoadData property to true as a way to trigger the LoadEarthquakes() method call. The MapItemsControl found within the <bing:Map> control binds its ItemsSource property to the Earthquakes collection of the view model, and since it is an ObservableCollection<T>, we get the automatic two way data binding via the INotifyCollectionChanged interface. <UserControl x:Class="EarthquakeLocator.MainPage"     xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"     xmlns:d="http://schemas.microsoft.com/expression/blend/2008"     xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"     xmlns:bing="clr-namespace:Microsoft.Maps.MapControl;assembly=Microsoft.Maps.MapControl"     xmlns:vm="clr-namespace:EarthquakeLocator.ViewModel"     mc:Ignorable="d" d:DesignWidth="640" d:DesignHeight="480" >     <UserControl.Resources>         <DataTemplate x:Key="EarthquakeTemplate">             <Ellipse Fill="Red" Stroke="Black" StrokeThickness="1"                      Width="{Binding Size}" Height="{Binding Size}"                      bing:MapLayer.Position="{Binding Location}"                      bing:MapLayer.PositionOrigin="Center">                 <ToolTipService.ToolTip>                     <StackPanel>                         <TextBlock Text="{Binding Title}" FontSize="14" FontWeight="Bold" />                         <TextBlock Text="{Binding UtcTime}" />                         <TextBlock Text="{Binding LocalTime}" />                         <TextBlock Text="{Binding DepthDesc}" />                     </StackPanel>                 </ToolTipService.ToolTip>             </Ellipse>         </DataTemplate>     </UserControl.Resources>       <UserControl.DataContext>         <vm:EarthquakeViewModel AutoLoadData="True" />     </UserControl.DataContext>       <Grid x:Name="LayoutRoot">           <bing:Map x:Name="map" CredentialsProvider="--Your-Bing-Maps-Key--"                   Center="{Binding MapCenter, Mode=TwoWay}"                   ZoomLevel="{Binding ZoomLevel, Mode=TwoWay}">             <bing:MapItemsControl ItemsSource="{Binding Earthquakes}"                                   ItemTemplate="{StaticResource EarthquakeTemplate}" />         </bing:Map>       </Grid> </UserControl>   The EarthquakeTemplate defines the Ellipse that will represent each earthquake, the Width and Height that are determined by the Magnitude, the Position on the map, and also the tooltip that will appear when we mouse over each data point. 
Running the application will give us the following result (shown with a tooltip example): That concludes this portion of our show but I plan on implementing additional functionality in later blog posts. Be sure to come back soon to see the next installments in this series. Enjoy!   Additional Resources USGS Earthquake Data Feeds Brad Abrams shows how RIA Services and MVVM can work together
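    As an aside, the parsing work done in CreateEarthquake (magnitude from the entry title, coordinates from the point element, depth from elev) can be sanity-checked outside Silverlight with a few lines of Python. This is only a rough sketch that matches elements by local name rather than by exact namespace, and the feed URL quoted earlier in the post may no longer be live.

        import xml.etree.ElementTree as ET
        from urllib.request import urlopen

        FEED_URL = "http://earthquake.usgs.gov/earthquakes/catalogs/7day-M2.5.xml"  # from the post

        def local(tag):
            """Strip any XML namespace prefix from a tag."""
            return tag.rsplit("}", 1)[-1]

        tree = ET.parse(urlopen(FEED_URL))
        for entry in tree.getroot():
            if local(entry.tag) != "entry":
                continue
            fields = {}
            for child in entry:
                fields.setdefault(local(child.tag), (child.text or "").strip())

            title = fields.get("title", "")
            magnitude = None
            if title.startswith("M") and "," in title:
                # Titles look like "M 2.9, Puerto Rico region"
                magnitude = float(title[1:title.index(",")].strip())

            latitude = longitude = None
            if fields.get("point"):
                latitude, longitude = (float(v) for v in fields["point"].split())

            print(magnitude, latitude, longitude, fields.get("elev"))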

    Read the article

  • Improving Partitioned Table Join Performance

    - by Paul White
    The query optimizer does not always choose an optimal strategy when joining partitioned tables. This post looks at an example, showing how a manual rewrite of the query can almost double performance, while reducing the memory grant to almost nothing. Test Data The two tables in this example use a common partitioning partition scheme. The partition function uses 41 equal-size partitions: CREATE PARTITION FUNCTION PFT (integer) AS RANGE RIGHT FOR VALUES ( 125000, 250000, 375000, 500000, 625000, 750000, 875000, 1000000, 1125000, 1250000, 1375000, 1500000, 1625000, 1750000, 1875000, 2000000, 2125000, 2250000, 2375000, 2500000, 2625000, 2750000, 2875000, 3000000, 3125000, 3250000, 3375000, 3500000, 3625000, 3750000, 3875000, 4000000, 4125000, 4250000, 4375000, 4500000, 4625000, 4750000, 4875000, 5000000 ); GO CREATE PARTITION SCHEME PST AS PARTITION PFT ALL TO ([PRIMARY]); There two tables are: CREATE TABLE dbo.T1 ( TID integer NOT NULL IDENTITY(0,1), Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T1 PRIMARY KEY CLUSTERED (TID) ON PST (TID) );   CREATE TABLE dbo.T2 ( TID integer NOT NULL, Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T2 PRIMARY KEY CLUSTERED (TID, Column1) ON PST (TID) ); The next script loads 5 million rows into T1 with a pseudo-random value between 1 and 5 for Column1. The table is partitioned on the IDENTITY column TID: INSERT dbo.T1 WITH (TABLOCKX) (Column1) SELECT (ABS(CHECKSUM(NEWID())) % 5) + 1 FROM dbo.Numbers AS N WHERE n BETWEEN 1 AND 5000000; In case you don’t already have an auxiliary table of numbers lying around, here’s a script to create one with 10 million rows: CREATE TABLE dbo.Numbers (n bigint PRIMARY KEY);   WITH L0 AS(SELECT 1 AS c UNION ALL SELECT 1), L1 AS(SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B), L2 AS(SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B), L3 AS(SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B), L4 AS(SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B), L5 AS(SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B), Nums AS(SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n FROM L5) INSERT dbo.Numbers WITH (TABLOCKX) SELECT TOP (10000000) n FROM Nums ORDER BY n OPTION (MAXDOP 1); Table T1 contains data like this: Next we load data into table T2. The relationship between the two tables is that table 2 contains ‘n’ rows for each row in table 1, where ‘n’ is determined by the value in Column1 of table T1. There is nothing particularly special about the data or distribution, by the way. INSERT dbo.T2 WITH (TABLOCKX) (TID, Column1) SELECT T.TID, N.n FROM dbo.T1 AS T JOIN dbo.Numbers AS N ON N.n >= 1 AND N.n <= T.Column1; Table T2 ends up containing about 15 million rows: The primary key for table T2 is a combination of TID and Column1. The data is partitioned according to the value in column TID alone. Partition Distribution The following query shows the number of rows in each partition of table T1: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T1 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are 40 partitions containing 125,000 rows (40 * 125k = 5m rows). The rightmost partition remains empty. The next query shows the distribution for table 2: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T2 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are roughly 375,000 rows in each partition (the rightmost partition is also empty): Ok, that’s the test data done. 
Test Query and Execution Plan The task is to count the rows resulting from joining tables 1 and 2 on the TID column: SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; The optimizer chooses a plan using parallel hash join, and partial aggregation: The Plan Explorer plan tree view shows accurate cardinality estimates and an even distribution of rows across threads (click to enlarge the image): With a warm data cache, the STATISTICS IO output shows that no physical I/O was needed, and all 41 partitions were touched: Running the query without actual execution plan or STATISTICS IO information for maximum performance, the query returns in around 2600ms. Execution Plan Analysis The first step toward improving on the execution plan produced by the query optimizer is to understand how it works, at least in outline. The two parallel Clustered Index Scans use multiple threads to read rows from tables T1 and T2. Parallel scan uses a demand-based scheme where threads are given page(s) to scan from the table as needed. This arrangement has certain important advantages, but does result in an unpredictable distribution of rows amongst threads. The point is that multiple threads cooperate to scan the whole table, but it is impossible to predict which rows end up on which threads. For correct results from the parallel hash join, the execution plan has to ensure that rows from T1 and T2 that might join are processed on the same thread. For example, if a row from T1 with join key value ‘1234’ is placed in thread 5’s hash table, the execution plan must guarantee that any rows from T2 that also have join key value ‘1234’ probe thread 5’s hash table for matches. The way this guarantee is enforced in this parallel hash join plan is by repartitioning rows to threads after each parallel scan. The two repartitioning exchanges route rows to threads using a hash function over the hash join keys. The two repartitioning exchanges use the same hash function so rows from T1 and T2 with the same join key must end up on the same hash join thread. Expensive Exchanges This business of repartitioning rows between threads can be very expensive, especially if a large number of rows is involved. The execution plan selected by the optimizer moves 5 million rows through one repartitioning exchange and around 15 million across the other. As a first step toward removing these exchanges, consider the execution plan selected by the optimizer if we join just one partition from each table, disallowing parallelism: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = 1 AND $PARTITION.PFT(T2.TID) = 1 OPTION (MAXDOP 1); The optimizer has chosen a (one-to-many) merge join instead of a hash join. The single-partition query completes in around 100ms. If everything scaled linearly, we would expect that extending this strategy to all 40 populated partitions would result in an execution time around 4000ms. Using parallelism could reduce that further, perhaps to be competitive with the parallel hash join chosen by the optimizer. This raises a question. If the most efficient way to join one partition from each of the tables is to use a merge join, why does the optimizer not choose a merge join for the full query? 
Forcing a Merge Join Let's force the optimizer to use a merge join on the test query using a hint: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN); This is the execution plan selected by the optimizer: This plan results in the same number of logical reads reported previously, but instead of 2600ms the query takes 5000ms. The natural explanation for this drop in performance is that the merge join plan is only using a single thread, whereas the parallel hash join plan could use multiple threads. Parallel Merge Join We can get a parallel merge join plan using the same query hint as before, and adding trace flag 8649: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN, QUERYTRACEON 8649); The execution plan is: This looks promising. It uses a similar strategy to distribute work across threads as seen for the parallel hash join. In practice though, performance is disappointing. On a typical run, the parallel merge plan runs for around 8400ms; slower than the single-threaded merge join plan (5000ms) and much worse than the 2600ms for the parallel hash join. We seem to be going backwards! The logical reads for the parallel merge are still exactly the same as before, with no physical IOs. The cardinality estimates and thread distribution are also still very good (click to enlarge): A big clue to the reason for the poor performance is shown in the wait statistics (captured by Plan Explorer Pro): CXPACKET waits require careful interpretation, and are most often benign, but in this case excessive waiting occurs at the repartitioning exchanges. Unlike the parallel hash join, the repartitioning exchanges in this plan are order-preserving 'merging' exchanges (because merge join requires ordered inputs): Parallelism works best when threads can just grab any available unit of work and get on with processing it. Preserving order introduces inter-thread dependencies that can easily lead to significant waits occurring. In extreme cases, these dependencies can result in an intra-query deadlock, though the details of that will have to wait for another time. The potential for waits and deadlocks leads the query optimizer to cost parallel merge join relatively highly, especially as the degree of parallelism (DOP) increases. This high costing resulted in the optimizer choosing a serial merge join rather than parallel in this case. The test results certainly confirm its reasoning. Collocated Joins In SQL Server 2008 and later, the optimizer has another available strategy when joining tables that share a common partition scheme. This strategy is a collocated join, also known as a per-partition join. It can be applied in both serial and parallel execution plans, though it is limited to 2-way joins in the current optimizer. Whether the optimizer chooses a collocated join or not depends on cost estimation. The primary benefits of a collocated join are that it eliminates an exchange and requires less memory, as we will see next. Costing and Plan Selection The query optimizer did consider a collocated join for our original query, but it was rejected on cost grounds. The parallel hash join with repartitioning exchanges appeared to be a cheaper option. There is no query hint to force a collocated join, so we have to mess with the costing framework to produce one for our test query. 
Pretending that IOs cost 50 times more than usual is enough to convince the optimizer to use collocated join with our test query: -- Pretend IOs are 50x cost temporarily DBCC SETIOWEIGHT(50); -- Co-located hash join SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (RECOMPILE); -- Reset IO costing DBCC SETIOWEIGHT(1); Collocated Join Plan The estimated execution plan for the collocated join is: The Constant Scan contains one row for each partition of the shared partitioning scheme, from 1 to 41. The hash repartitioning exchanges seen previously are replaced by a single Distribute Streams exchange using Demand partitioning. Demand partitioning means that the next partition id is given to the next parallel thread that asks for one. My test machine has eight logical processors, and all are available for SQL Server to use. As a result, there are eight threads in the single parallel branch in this plan, each processing one partition from each table at a time. Once a thread finishes processing a partition, it grabs a new partition number from the Distribute Streams exchange…and so on until all partitions have been processed. It is important to understand that the parallel scans in this plan are different from the parallel hash join plan. Although the scans have the same parallelism icon, tables T1 and T2 are not being co-operatively scanned by multiple threads in the same way. Each thread reads a single partition of T1 and performs a hash match join with the same partition from table T2. The properties of the two Clustered Index Scans show a Seek Predicate (unusual for a scan!) limiting the rows to a single partition: The crucial point is that the join between T1 and T2 is on TID, and TID is the partitioning column for both tables. A thread that processes partition 'n' is guaranteed to see all rows that can possibly join on TID for that partition. In addition, no other thread will see rows from that partition, so this removes the need for repartitioning exchanges. CPU and Memory Efficiency Improvements The collocated join has removed two expensive repartitioning exchanges and added a single exchange processing 41 rows (one for each partition id). Remember, the parallel hash join plan exchanges had to process 5 million and 15 million rows. The amount of processor time spent on exchanges will be much lower in the collocated join plan. In addition, the collocated join plan has a maximum of 8 threads processing single partitions at any one time. The 41 partitions will all be processed eventually, but a new partition is not started until a thread asks for it. Threads can reuse hash table memory for the new partition. The parallel hash join plan also had 8 hash tables, but with all 5,000,000 build rows loaded at the same time. The collocated plan needs memory for only 8 * 125,000 = 1,000,000 rows at any one time. Collocated Hash Join Performance The collocated join plan has disappointing performance in this case. The query runs for around 25,300ms despite the same IO statistics as usual. This is much the worst result so far, so what went wrong? It turns out that cardinality estimation for the single partition scans of table T1 is slightly low. The properties of the Clustered Index Scan of T1 (graphic immediately above) show the estimation was for 121,951 rows. 
This is a small shortfall compared with the 125,000 rows actually encountered, but it was enough to cause the hash join to spill to physical tempdb: A level 1 spill doesn’t sound too bad, until you realize that the spill to tempdb probably occurs for each of the 41 partitions. As a side note, the cardinality estimation error is a little surprising because the system tables accurately show there are 125,000 rows in every partition of T1. Unfortunately, the optimizer uses regular column and index statistics to derive cardinality estimates here rather than system table information (e.g. sys.partitions). Collocated Merge Join We will never know how well the collocated parallel hash join plan might have worked without the cardinality estimation error (and the resulting 41 spills to tempdb) but we do know: Merge join does not require a memory grant; and Merge join was the optimizer’s preferred join option for a single partition join Putting this all together, what we would really like to see is the same collocated join strategy, but using merge join instead of hash join. Unfortunately, the current query optimizer cannot produce a collocated merge join; it only knows how to do collocated hash join. So where does this leave us? CROSS APPLY sys.partitions We can try to write our own collocated join query. We can use sys.partitions to find the partition numbers, and CROSS APPLY to get a count per partition, with a final step to sum the partial counts. The following query implements this idea: SELECT row_count = SUM(Subtotals.cnt) FROM ( -- Partition numbers SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1 ) AS P CROSS APPLY ( -- Count per collocated join SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals; The estimated plan is: The cardinality estimates aren’t all that good here, especially the estimate for the scan of the system table underlying the sys.partitions view. Nevertheless, the plan shape is heading toward where we would like to be. Each partition number from the system table results in a per-partition scan of T1 and T2, a one-to-many Merge Join, and a Stream Aggregate to compute the partial counts. The final Stream Aggregate just sums the partial counts. Execution time for this query is around 3,500ms, with the same IO statistics as always. This compares favourably with 5,000ms for the serial plan produced by the optimizer with the OPTION (MERGE JOIN) hint. This is another case of the sum of the parts being less than the whole – summing 41 partial counts from 41 single-partition merge joins is faster than a single merge join and count over all partitions. Even so, this single-threaded collocated merge join is not as quick as the original parallel hash join plan, which executed in 2,600ms. On the positive side, our collocated merge join uses only one logical processor and requires no memory grant. The parallel hash join plan used 16 threads and reserved 569 MB of memory:   Using a Temporary Table Our collocated merge join plan should benefit from parallelism. The reason parallelism is not being used is that the query references a system table. 
We can work around that by writing the partition numbers to a temporary table (or table variable): SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME(); CREATE TABLE #P ( partition_number integer PRIMARY KEY); INSERT #P (partition_number) SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1; SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals; DROP TABLE #P; SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; Using the temporary table adds a few logical reads, but the overall execution time is still around 3500ms, indistinguishable from the same query without the temporary table. The problem is that the query optimizer still doesn't choose a parallel plan for this query, though the removal of the system table reference means that it could if it chose to: In fact the optimizer did enter the parallel plan phase of query optimization (running search 1 for a second time): Unfortunately, the parallel plan found seemed to be more expensive than the serial plan. This is a crazy result, caused by the optimizer's cost model not reducing operator CPU costs on the inner side of a nested loops join. Don't get me started on that, we'll be here all night. In this plan, everything expensive happens on the inner side of a nested loops join. Without a CPU cost reduction to compensate for the added cost of exchange operators, candidate parallel plans always look more expensive to the optimizer than the equivalent serial plan. Parallel Collocated Merge Join We can produce the desired parallel plan using trace flag 8649 again: SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: One difference between this plan and the collocated hash join plan is that a Repartition Streams exchange operator is used instead of Distribute Streams. The effect is similar, though not quite identical. The Repartition uses round-robin partitioning, meaning the next partition id is pushed to the next thread in sequence. The Distribute Streams exchange seen earlier used Demand partitioning, meaning the next partition id is pulled across the exchange by the next thread that is ready for more work. There are subtle performance implications for each partitioning option, but going into that would again take us too far off the main point of this post. Performance The important thing is the performance of this parallel collocated merge join – just 1350ms on a typical run. The list below shows all the alternatives from this post (all timings include creation, population, and deletion of the temporary table where appropriate) from quickest to slowest: Collocated parallel merge join: 1350ms Parallel hash join: 2600ms Collocated serial merge join: 3500ms Serial merge join: 5000ms Parallel merge join: 8400ms Collocated parallel hash join: 25,300ms (hash spill per partition) The parallel collocated merge join requires no memory grant (aside from a paltry 1.2MB used for exchange buffers). 
This plan uses 16 threads at DOP 8; but 8 of those are (rather pointlessly) allocated to the parallel scan of the temporary table. These are minor concerns, but it turns out there is a way to address them if it bothers you. Parallel Collocated Merge Join with Demand Partitioning This final tweak replaces the temporary table with a hard-coded list of partition ids (dynamic SQL could be used to generate this query from sys.partitions): SELECT row_count = SUM(Subtotals.cnt) FROM ( VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10), (11),(12),(13),(14),(15),(16),(17),(18),(19),(20), (21),(22),(23),(24),(25),(26),(27),(28),(29),(30), (31),(32),(33),(34),(35),(36),(37),(38),(39),(40),(41) ) AS P (partition_number) CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: The parallel collocated hash join plan is reproduced below for comparison: The manual rewrite has another advantage that has not been mentioned so far: the partial counts (per partition) can be computed earlier than the partial counts (per thread) in the optimizer’s collocated join plan. The earlier aggregation is performed by the extra Stream Aggregate under the nested loops join. The performance of the parallel collocated merge join is unchanged at around 1350ms. Final Words It is a shame that the current query optimizer does not consider a collocated merge join (Connect item closed as Won’t Fix). The example used in this post showed an improvement in execution time from 2600ms to 1350ms using a modestly-sized data set and limited parallelism. In addition, the memory requirement for the query was almost completely eliminated  – down from 569MB to 1.2MB. The problem with the parallel hash join selected by the optimizer is that it attempts to process the full data set all at once (albeit using eight threads). It requires a large memory grant to hold all 5 million rows from table T1 across the eight hash tables, and does not take advantage of the divide-and-conquer opportunity offered by the common partitioning. The great thing about the collocated join strategies is that each parallel thread works on a single partition from both tables, reading rows, performing the join, and computing a per-partition subtotal, before moving on to a new partition. From a thread’s point of view… If you have trouble visualizing what is happening from just looking at the parallel collocated merge join execution plan, let’s look at it again, but from the point of view of just one thread operating between the two Parallelism (exchange) operators. Our thread picks up a single partition id from the Distribute Streams exchange, and starts a merge join using ordered rows from partition 1 of table T1 and partition 1 of table T2. By definition, this is all happening on a single thread. As rows join, they are added to a (per-partition) count in the Stream Aggregate immediately above the Merge Join. Eventually, either T1 (partition 1) or T2 (partition 1) runs out of rows and the merge join stops. The per-partition count from the aggregate passes on through the Nested Loops join to another Stream Aggregate, which is maintaining a per-thread subtotal. Our same thread now picks up a new partition id from the exchange (say it gets id 9 this time). 
The count in the per-partition aggregate is reset to zero, and the processing of partition 9 of both tables proceeds just as it did for partition 1, and on the same thread. Each thread picks up a single partition id and processes all the data for that partition, completely independently from other threads working on other partitions. One thread might eventually process partitions (1, 9, 17, 25, 33, 41) while another is concurrently processing partitions (2, 10, 18, 26, 34) and so on for the other six threads at DOP 8. The point is that all 8 threads can execute independently and concurrently, continuing to process new partitions until the wider job (of which the thread has no knowledge!) is done. This divide-and-conquer technique can be much more efficient than simply splitting the entire workload across eight threads all at once. Related Reading Understanding and Using Parallelism in SQL Server Parallel Execution Plans Suck © 2013 Paul White – All Rights Reserved Twitter: @SQL_Kiwi

    Read the article

  • A Taxonomy of Numerical Methods v1

    - by JoshReuben
    Numerical Analysis – When, What, (but not how) Once you understand the Math & know C++, Numerical Methods are basically blocks of iterative & conditional math code. I found the real trick was seeing the forest for the trees – knowing which method to use for which situation. It's pretty easy to get lost in the details – so I've tried to organize these methods in a way that I can quickly look this up. I've included links to detailed explanations and to C++ code examples. I've tried to classify Numerical methods in the following broad categories: Solving Systems of Linear Equations Solving Non-Linear Equations Iteratively Interpolation Curve Fitting Optimization Numerical Differentiation & Integration Solving ODEs Boundary Problems Solving EigenValue problems Enjoy – I did! Solving Systems of Linear Equations Overview Solve sets of algebraic equations with x unknowns The set is commonly in matrix form Gauss-Jordan Elimination http://en.wikipedia.org/wiki/Gauss%E2%80%93Jordan_elimination C++: http://www.codekeep.net/snippets/623f1923-e03c-4636-8c92-c9dc7aa0d3c0.aspx Produces solution of the equations & the coefficient matrix Efficient, stable 2 steps: · Forward Elimination – matrix decomposition: reduce set to triangular form (0s below the diagonal) or row echelon form. If degenerate, then there is no unique solution · Backward Elimination – write the original matrix as the product of its inverse matrix & its reduced row-echelon matrix → reduce the set to row canonical form & use back-substitution to find the solution to the set Elementary ops for matrix decomposition: · Row multiplication · Row switching · Add multiples of rows to other rows Use pivoting to ensure rows are ordered for achieving triangular form LU Decomposition http://en.wikipedia.org/wiki/LU_decomposition C++: http://ganeshtiwaridotcomdotnp.blogspot.co.il/2009/12/c-c-code-lu-decomposition-for-solving.html Represent the matrix as a product of lower & upper triangular matrices A modified version of GJ Elimination Advantage – can easily apply forward & backward elimination to solve triangular matrices Techniques: · Doolittle Method – sets the L matrix diagonal to unity · Crout Method - sets the U matrix diagonal to unity Note: both the L & U matrices share the same unity diagonal & can be stored compactly in the same matrix Gauss-Seidel Iteration http://en.wikipedia.org/wiki/Gauss%E2%80%93Seidel_method C++: http://www.nr.com/forum/showthread.php?t=722 Solve each equation in the linear set for its own unknown & then iterate, substituting the most recently updated values as soon as they become available (the method is inherently iterative). 
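To make that concrete, here is a minimal C++ sketch of a Gauss-Seidel solver. This is my own illustration rather than the linked code; the function name, tolerance and the small diagonally dominant test system are assumptions for the example.

#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// Minimal Gauss-Seidel solver for Ax = b (assumes A is diagonally dominant,
// which guarantees convergence). Returns the number of sweeps performed.
int gaussSeidel(const std::vector<std::vector<double>>& A,
                const std::vector<double>& b,
                std::vector<double>& x,          // in: initial guess, out: solution
                double tol = 1e-10, int maxSweeps = 1000)
{
    const size_t n = A.size();
    for (int sweep = 1; sweep <= maxSweeps; ++sweep) {
        double maxChange = 0.0;
        for (size_t i = 0; i < n; ++i) {
            double sum = b[i];
            for (size_t j = 0; j < n; ++j)
                if (j != i) sum -= A[i][j] * x[j];   // uses freshly updated x[j] where available
            double newXi = sum / A[i][i];
            maxChange = std::max(maxChange, std::fabs(newXi - x[i]));
            x[i] = newXi;
        }
        if (maxChange < tol) return sweep;           // converged
    }
    return maxSweeps;
}

int main() {
    std::vector<std::vector<double>> A = {{4, 1, 1}, {1, 5, 2}, {0, 2, 6}};
    std::vector<double> b = {6, 8, 8};
    std::vector<double> x(3, 0.0);                   // initial guess: all zeros
    int sweeps = gaussSeidel(A, b, x);
    std::printf("converged in %d sweeps: x = (%f, %f, %f)\n", sweeps, x[0], x[1], x[2]);
}

Convergence is only guaranteed for matrices that are diagonally dominant or symmetric positive definite, which is why the test system above is chosen that way.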
Gauss-Seidel is an optimization of Gauss-Jacobi: roughly 1.5 times faster, needing about a quarter of the iterations to achieve the same tolerance Solving Non-Linear Equations Iteratively find roots of polynomials – there may be 0, 1 or up to n real solutions for an n-order polynomial use iterative techniques Iterative methods · used when there are no known analytical techniques · Requires set functions to be continuous & differentiable · Requires an initial seed value – choice is critical to convergence → conduct multiple runs with different starting points & then select best result · Systematic - iterate until diminishing returns, tolerance or max iteration conditions are met · bracketing techniques will always yield convergent solutions, non-bracketing methods may fail to converge Incremental method – if a nonlinear function has opposite signs at the 2 ends of a small interval x1 & x2, then there is likely to be a solution in that interval – solutions are detected by evaluating a function over interval steps, for a change in sign, adjusting the step size dynamically. Limitations – can miss closely spaced solutions in large intervals, cannot detect degenerate (coinciding) solutions, limited to functions that cross the x-axis, gives false positives for singularities Fixed point method http://en.wikipedia.org/wiki/Fixed-point_iteration C++: http://books.google.co.il/books?id=weYj75E_t6MC&pg=PA79&lpg=PA79&dq=fixed+point+method++c%2B%2B&source=bl&ots=LQ-5P_taoC&sig=lENUUIYBK53tZtTwNfHLy5PEWDk&hl=en&sa=X&ei=wezDUPW1J5DptQaMsIHQCw&redir_esc=y#v=onepage&q=fixed%20point%20method%20%20c%2B%2B&f=false Algebraically rearrange the equation to isolate a variable, then apply the incremental method Bisection method http://en.wikipedia.org/wiki/Bisection_method C++: http://numericalcomputing.wordpress.com/category/algorithms/ Bracketed - Select an initial interval, keep bisecting it at its midpoint into sub-intervals and then apply the incremental method on smaller & smaller intervals – zoom in Adv: unaffected by function gradient → reliable Disadv: slow convergence False Position Method http://en.wikipedia.org/wiki/False_position_method C++: http://www.dreamincode.net/forums/topic/126100-bisection-and-false-position-methods/ Bracketed - Select an initial interval, & use the relative value of the function at the interval end points to select next sub-intervals (estimate how far between the end points the solution might be & subdivide based on this) Newton-Raphson method http://en.wikipedia.org/wiki/Newton's_method C++: http://www-users.cselabs.umn.edu/classes/Summer-2012/csci1113/index.php?page=./newt3 Also known as Newton's method Convenient, efficient Not bracketed – only a single initial guess is required to start iteration – requires an analytical expression for the first derivative of the function as input. Evaluates the function & its derivative at each step. Can be extended to the Newton MultiRoot method for solving multiple roots Can be easily applied to an n-coupled set of non-linear equations – conduct a Taylor Series expansion of a function, dropping terms of order n, rewrite as a Jacobian matrix of PDs & convert to simultaneous linear equations !!! Secant Method http://en.wikipedia.org/wiki/Secant_method C++: http://forum.vcoderz.com/showthread.php?p=205230 Unlike N-R, can estimate first derivative from an initial interval (does not require root to be bracketed) instead of inputting it Since derivative is approximated, may converge slower. Is fast in practice as it does not have to evaluate the derivative at each step. 
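As an illustration of the single-guess iteration described above, here is a minimal Newton-Raphson sketch in C++. It is not the linked implementation; the function, derivative, starting guess and tolerance are illustrative.

#include <cmath>
#include <cstdio>
#include <functional>

// Minimal Newton-Raphson root finder: needs the function and its analytical
// first derivative, plus a single starting guess (no bracketing required).
double newtonRaphson(const std::function<double(double)>& f,
                     const std::function<double(double)>& fprime,
                     double x0, double tol = 1e-12, int maxIter = 50)
{
    double x = x0;
    for (int i = 0; i < maxIter; ++i) {
        double fx = f(x);
        double dfx = fprime(x);
        if (std::fabs(dfx) < 1e-300) break;      // derivative ~0: stop rather than divide
        double xNext = x - fx / dfx;             // the Newton step
        if (std::fabs(xNext - x) < tol) return xNext;
        x = xNext;
    }
    return x;                                    // best estimate after maxIter steps
}

int main() {
    // Find the root of f(x) = x^2 - 2 (i.e. sqrt(2)) starting from x0 = 1.
    double root = newtonRaphson([](double x) { return x * x - 2.0; },
                                [](double x) { return 2.0 * x; },
                                1.0);
    std::printf("root ~ %.12f\n", root);
}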
The Secant Method's implementation is similar to that of the False Position method. Birge-Vieta Method http://mat.iitm.ac.in/home/sryedida/public_html/caimna/transcendental/polynomial%20methods/bv%20method.html C++: http://books.google.co.il/books?id=cL1boM2uyQwC&pg=SA3-PA51&lpg=SA3-PA51&dq=Birge-Vieta+Method+c%2B%2B&source=bl&ots=QZmnDTK3rC&sig=BPNcHHbpR_DKVoZXrLi4nVXD-gg&hl=en&sa=X&ei=R-_DUK2iNIjzsgbE5ID4Dg&redir_esc=y#v=onepage&q=Birge-Vieta%20Method%20c%2B%2B&f=false combines Horner's method of polynomial evaluation (transforming into lesser degree polynomials that are more computationally efficient to process) with Newton-Raphson to provide a computational speed-up Interpolation Overview Construct new data points that fit as closely as possible within the range of a discrete set of known points (that were obtained via sampling or experimentation) Use Taylor Series Expansion of a function f(x) around a specific value for x Linear Interpolation http://en.wikipedia.org/wiki/Linear_interpolation C++: http://www.hamaluik.com/?p=289 Straight line between 2 points → concatenate interpolants between each pair of data points Bilinear Interpolation http://en.wikipedia.org/wiki/Bilinear_interpolation C++: http://supercomputingblog.com/graphics/coding-bilinear-interpolation/2/ Extension of the linear function for interpolating functions of 2 variables – perform linear interpolation first in 1 direction, then in another. Used in image processing – e.g. texture mapping filter. Uses 4 vertices to interpolate a value within a unit cell. Lagrange Interpolation http://en.wikipedia.org/wiki/Lagrange_polynomial C++: http://www.codecogs.com/code/maths/approximation/interpolation/lagrange.php For polynomials Requires recomputation for all terms for each distinct x value – can only be applied for a small number of nodes. Numerically unstable Barycentric Interpolation http://epubs.siam.org/doi/pdf/10.1137/S0036144502417715 C++: http://www.gamedev.net/topic/621445-barycentric-coordinates-c-code-check/ Rearrange the terms in the equation of the Lagrange interpolation by defining weight functions that are independent of the interpolated value of x Newton Divided Difference Interpolation http://en.wikipedia.org/wiki/Newton_polynomial C++: http://jee-appy.blogspot.co.il/2011/12/newton-divided-difference-interpolation.html Hermite Divided Differences: Interpolation polynomial approximation for a given set of data points in Newton form - divided differences are used to approximately calculate the various differences. 
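Here is a minimal C++ sketch of Lagrange interpolation as described above (my own illustration rather than the linked code; the sample points are arbitrary). Note how every term is recomputed for each query point, which is exactly why the method only suits a small number of nodes.

#include <cstdio>
#include <vector>

// Minimal Lagrange interpolation: evaluates the unique polynomial of degree
// n-1 passing through n points (xs[i], ys[i]) at the query point x.
double lagrange(const std::vector<double>& xs, const std::vector<double>& ys, double x)
{
    const size_t n = xs.size();
    double result = 0.0;
    for (size_t i = 0; i < n; ++i) {
        double basis = 1.0;                          // L_i(x), the i-th basis polynomial
        for (size_t j = 0; j < n; ++j)
            if (j != i) basis *= (x - xs[j]) / (xs[i] - xs[j]);
        result += ys[i] * basis;
    }
    return result;
}

int main() {
    // Three samples of y = x^2; interpolating at x = 2.5 should give 6.25 exactly.
    std::vector<double> xs = {1.0, 2.0, 3.0};
    std::vector<double> ys = {1.0, 4.0, 9.0};
    std::printf("p(2.5) = %f\n", lagrange(xs, ys, 2.5));
}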
Continuing with Newton divided differences: for a given set of 3 data points, fit a quadratic interpolant through the data. Bracketed functions allow Newton divided differences to be calculated recursively (difference table). Cubic Spline Interpolation http://en.wikipedia.org/wiki/Spline_interpolation C++: https://www.marcusbannerman.co.uk/index.php/home/latestarticles/42-articles/96-cubic-spline-class.html Spline is a piecewise polynomial Provides smoothness – for interpolations with significantly varying data Use weighted coefficients to bend the function so that it is smooth & its 1st & 2nd derivatives are continuous through the edge points of each interval Curve Fitting A generalization of interpolation whereby the given data points may contain noise → the curve does not necessarily pass through all the points Least Squares Fit http://en.wikipedia.org/wiki/Least_squares C++: http://www.ccas.ru/mmes/educat/lab04k/02/least-squares.c Residual – difference between observed value & expected value Model function is often chosen as a linear combination of the specified functions Determines: A) The model instance in which the sum of squared residuals has the least value B) param values for which model best fits data Straight Line Fit Linear correlation between independent variable and dependent variable Linear Regression http://en.wikipedia.org/wiki/Linear_regression C++: http://www.oocities.org/david_swaim/cpp/linregc.htm Special case of statistically exact extrapolation Leverage least squares Given a basis function, the sum of the residuals is determined and the corresponding gradient equation is expressed as a set of normal linear equations in matrix form that can be solved (e.g. using LU Decomposition) Can be weighted - Drop the assumption that all errors have the same significance → confidence of accuracy is different for each data point. Fit the function closer to points with higher weights Polynomial Fit - use a polynomial basis function Moving Average http://en.wikipedia.org/wiki/Moving_average C++: http://www.codeproject.com/Articles/17860/A-Simple-Moving-Average-Algorithm Used for smoothing (cancel fluctuations to highlight longer-term trends & cycles), time series data analysis, signal processing filters Replace each data point with average of neighbors. Can be simple (SMA), weighted (WMA), exponential (EMA). Lags behind latest data points – extra weight can be given to more recent data points. Weights can decrease arithmetically or exponentially according to distance from point. Parameters: smoothing factor, period, weight basis Optimization Overview Given function with multiple variables, find Min (or max by minimizing –f(x)) Iterative approach Efficient, but not necessarily reliable Conditions: noisy data, constraints, non-linear models Detection via the sign of the first derivative - the derivative at saddle points will be 0. Local minima: Bisection method – similar to the method for finding a root of a non-linear equation; start with an interval that contains a minimum Golden Search method http://en.wikipedia.org/wiki/Golden_section_search C++: http://www.codecogs.com/code/maths/optimization/golden.php Bisect intervals according to golden ratio 0.618.. 
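A minimal C++ sketch of the golden section search just described (not the linked code; the bracket, tolerance and test function are illustrative assumptions):

#include <cmath>
#include <cstdio>
#include <functional>

// Minimal golden-section search for the minimum of a unimodal function on [a, b].
// Each iteration shrinks the bracket by the golden ratio and reuses one interior
// point, so only one new function evaluation is needed per iteration.
double goldenSection(const std::function<double(double)>& f,
                     double a, double b, double tol = 1e-8)
{
    const double invPhi = (std::sqrt(5.0) - 1.0) / 2.0;   // ~0.618
    double x1 = b - invPhi * (b - a);
    double x2 = a + invPhi * (b - a);
    double f1 = f(x1), f2 = f(x2);
    while (b - a > tol) {
        if (f1 < f2) {            // minimum lies in [a, x2]
            b = x2; x2 = x1; f2 = f1;
            x1 = b - invPhi * (b - a); f1 = f(x1);
        } else {                  // minimum lies in [x1, b]
            a = x1; x1 = x2; f1 = f2;
            x2 = a + invPhi * (b - a); f2 = f(x2);
        }
    }
    return 0.5 * (a + b);
}

int main() {
    // Minimize (x - 2)^2 + 1 on [0, 5]; the minimum is at x = 2.
    auto f = [](double x) { return (x - 2.0) * (x - 2.0) + 1.0; };
    std::printf("xmin ~ %.8f\n", goldenSection(f, 0.0, 5.0));
}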
The golden section method achieves this reduction by requiring only a single new function evaluation per iteration instead of 2 Newton-Raphson Method Brent method http://en.wikipedia.org/wiki/Brent's_method C++: http://people.sc.fsu.edu/~jburkardt/cpp_src/brent/brent.cpp Based on quadratic or parabolic interpolation – if the function is smooth & parabolic near to the minimum, then a parabola fitted through any 3 points should approximate the minima – fails when the 3 points are collinear, in which case the denominator is 0 Simplex Method http://en.wikipedia.org/wiki/Simplex_algorithm C++: http://www.codeguru.com/cpp/article.php/c17505/Simplex-Optimization-Algorithm-and-Implemetation-in-C-Programming.htm Find the global minima of any multi-variable function Direct search – no derivatives required At each step it maintains a non-degenerate simplex – a convex hull of n+1 vertices. Obtains the minimum for a function with n variables by evaluating the function at n+1 points, iteratively replacing the point of worst result with the point of best result, shrinking the multidimensional simplex around the best point. Point replacement involves expanding & contracting the simplex near the worst value point to determine a better replacement point Oscillation can be avoided by choosing the 2nd worst result Restart if it gets stuck Parameters: contraction & expansion factors Simulated Annealing http://en.wikipedia.org/wiki/Simulated_annealing C++: http://code.google.com/p/cppsimulatedannealing/ Analogy to heating & cooling metal to strengthen its structure Stochastic method – apply random permutation search for global minima - Avoid entrapment in local minima by occasionally accepting uphill moves Heating schedule - Annealing schedule params: temperature, iterations at each temp, temperature delta Cooling schedule – can be linear, step-wise or exponential Differential Evolution http://en.wikipedia.org/wiki/Differential_evolution C++: http://www.amichel.com/de/doc/html/ More advanced stochastic methods analogous to biological processes: Genetic algorithms, evolution strategies Parallel direct search method against multiple discrete or continuous variables Initial population of variable vectors chosen randomly – if the weighted difference vector of 2 vectors yields a lower objective function value then it replaces the comparison vector Many params: #parents, #variables, step size, crossover constant etc Convergence is slow – many more function evaluations than simulated annealing Numerical Differentiation Overview 2 approaches to finite difference methods: · A) approximate function via polynomial interpolation then differentiate · B) Taylor series approximation – additionally provides error estimate Finite Difference methods http://en.wikipedia.org/wiki/Finite_difference_method C++: http://www.wpi.edu/Pubs/ETD/Available/etd-051807-164436/unrestricted/EAMPADU.pdf Find differences between high order derivative values - Approximate differential equations by finite differences at evenly spaced data points Based on forward & backward Taylor series expansion of f(x) about x plus or minus multiples of delta h. Forward / backward difference - the sum of the series contains even derivatives and the difference of the series contains odd derivatives – coupled equations that can be solved. 
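For illustration, here is a minimal C++ sketch of the simplest finite-difference derivative formulas (my own example, not from the linked thesis; the step size and test function are arbitrary):

#include <cmath>
#include <cstdio>
#include <functional>

// Minimal finite-difference derivative approximations. The simple forward
// difference is O(h) accurate; the central difference is O(h^2) (the extended
// 5-point central stencil reaches O(h^4)).
double forwardDiff(const std::function<double(double)>& f, double x, double h)
{ return (f(x + h) - f(x)) / h; }

double centralDiff(const std::function<double(double)>& f, double x, double h)
{ return (f(x + h) - f(x - h)) / (2.0 * h); }

int main() {
    auto f = [](double x) { return std::sin(x); };   // true derivative: cos(x)
    double x = 1.0, h = 1e-4;
    std::printf("forward: %.10f\n", forwardDiff(f, x, h));
    std::printf("central: %.10f\n", centralDiff(f, x, h));
    std::printf("exact:   %.10f\n", std::cos(x));
}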
These difference formulas provide an approximation of the derivative to within O(h^2) accuracy. There is also the central difference & the extended central difference, which reaches O(h^4) accuracy Richardson Extrapolation http://en.wikipedia.org/wiki/Richardson_extrapolation C++: http://mathscoding.blogspot.co.il/2012/02/introduction-richardson-extrapolation.html A sequence acceleration method applied to finite differences Fast convergence, high accuracy O(h^4) Derivatives via Interpolation Cannot apply Finite Difference method to discrete data points at uneven intervals – so need to approximate the derivative of f(x) using the derivative of the interpolant via 3 point Lagrange Interpolation Note: the higher the order of the derivative, the lower the approximation precision Numerical Integration Estimate finite & infinite integrals of functions More accurate procedure than numerical differentiation Use when it is not possible to obtain an integral of a function analytically or when the function is not given, only the data points are Newton Cotes Methods http://en.wikipedia.org/wiki/Newton%E2%80%93Cotes_formulas C++: http://www.siafoo.net/snippet/324 For equally spaced data points Computationally easy – based on local interpolation of n rectangular strip areas that is piecewise fitted to a polynomial to get the sum total area Evaluate the integrand at n+1 evenly spaced points – approximate definite integral by Sum Weights are derived from Lagrange Basis polynomials Leverage the Trapezoidal Rule for 2-point formulas, the Simpson 1/3 Rule for 3-point formulas, and the Simpson 3/8 Rule for 4-point formulas. For 5-point formulas use Bode's Rule. Higher orders obtain more accurate results Trapezoidal Rule uses simple area, Simpson's Rule replaces the integrand f(x) with a quadratic polynomial p(x) that uses the same values as f(x) for its end points, but adds a midpoint Romberg Integration http://en.wikipedia.org/wiki/Romberg's_method C++: http://code.google.com/p/romberg-integration/downloads/detail?name=romberg.cpp&can=2&q= Combines trapezoidal rule with Richardson Extrapolation Evaluates the integrand at equally spaced points The integrand must have continuous derivatives Each R(n,m) extrapolation uses a higher order integrand polynomial replacement rule (zeroth starts with trapezoidal) → a lower triangular matrix set of equation coefficients where the bottom right term has the most accurate approximation. The process continues until the difference between 2 successive diagonal terms becomes sufficiently small. Gaussian Quadrature http://en.wikipedia.org/wiki/Gaussian_quadrature C++: http://www.alglib.net/integration/gaussianquadratures.php Data points are chosen to yield best possible accuracy – requires fewer evaluations Ability to handle singularities, functions that are difficult to evaluate The integrand can include a weighting function determined by a set of orthogonal polynomials. 
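Here is a minimal C++ sketch of two composite Newton-Cotes rules, the trapezoidal rule and Simpson's 1/3 rule (my own illustration, not the linked snippet; the integrand and strip count are arbitrary):

#include <cmath>
#include <cstdio>
#include <functional>

// Minimal composite Newton-Cotes rules over n equal strips on [a, b]:
// trapezoidal (2-point, exact for straight lines) and Simpson 1/3
// (3-point, exact up to cubics; n must be even).
double trapezoid(const std::function<double(double)>& f, double a, double b, int n)
{
    double h = (b - a) / n, sum = 0.5 * (f(a) + f(b));
    for (int i = 1; i < n; ++i) sum += f(a + i * h);
    return sum * h;
}

double simpson(const std::function<double(double)>& f, double a, double b, int n)
{
    double h = (b - a) / n, sum = f(a) + f(b);
    for (int i = 1; i < n; ++i) sum += f(a + i * h) * (i % 2 ? 4.0 : 2.0);
    return sum * h / 3.0;
}

int main() {
    auto f = [](double x) { return std::exp(-x * x); };
    std::printf("trapezoid: %.10f\n", trapezoid(f, 0.0, 1.0, 100));
    std::printf("simpson:   %.10f\n", simpson(f, 0.0, 1.0, 100));   // ~0.7468241328
}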
For Gaussian quadrature, points & weights are selected so that the rule yields the exact integral if f(x) is a polynomial of degree <= 2n+1 Techniques (basically different weighting functions): · Gauss-Legendre Integration w(x)=1 · Gauss-Laguerre Integration w(x)=e^-x · Gauss-Hermite Integration w(x)=e^-x^2 · Gauss-Chebyshev Integration w(x)= 1 / Sqrt(1-x^2) Solving ODEs Use when high order differential equations cannot be solved analytically Evaluated under boundary conditions RK for systems – a high order differential equation can always be transformed into a coupled first order system of equations Euler method http://en.wikipedia.org/wiki/Euler_method C++: http://rosettacode.org/wiki/Euler_method First order Runge–Kutta method. Simple recursive method – given an initial value, calculate derivative deltas. Unstable & not very accurate (O(h) error) – not used in practice A first-order method - the local error (truncation error per step) is proportional to the square of the step size, and the global error (error at a given time) is proportional to the step size In evolving solution between data points xn & xn+1, only evaluates derivatives at beginning of interval xn → asymmetric at boundaries Higher order Runge Kutta http://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods C++: http://www.dreamincode.net/code/snippet1441.htm 2nd & 4th order RK - Introduces parameterized midpoints for more symmetric solutions → accuracy at higher computational cost Adaptive RK – RK-Fehlberg – estimate the truncation error at each integration step & automatically adjust the step size to keep error within prescribed limits. At each step 2 approximations are compared – if in disagreement to a specific accuracy, the step size is reduced Boundary Value Problems Where the conditions on the differential equation are specified at 2 different values of the independent variable x → more difficult, because you cannot just start from an initial value – there may not be enough starting conditions available at the end points to produce a unique solution An n-order equation will require n boundary conditions – need to determine the missing n-1 conditions which cause the given conditions at the other boundary to be satisfied Shooting Method http://en.wikipedia.org/wiki/Shooting_method C++: http://ganeshtiwaridotcomdotnp.blogspot.co.il/2009/12/c-c-code-shooting-method-for-solving.html Iteratively guess the missing values for one end & integrate, then inspect the discrepancy with the boundary values of the other end to adjust the estimate Given the starting boundary values u1 & u2 which contain the root u, solve for u using the false position method (solving the differential equation as an initial value problem via 4th order RK), then use u to solve the differential equations. Finite Difference Method For linear & non-linear systems Higher order derivatives require more computational steps – some combinations for boundary conditions may not work though Improve the accuracy by increasing the number of mesh points Solving EigenValue Problems An eigenvalue can substitute for a matrix when multiplying its eigenvector (Av = λv) → convert the matrix multiplication into a polynomial (the characteristic equation). EigenValue: for a given set of equations in matrix form, determine the solution eigenvalues & eigenvectors. Similar Matrices - have the same eigenvalues. 
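Stepping back to the ODE solvers for a moment, here is a minimal C++ sketch of the classic fixed-step 4th-order Runge-Kutta method (my own illustration, not the linked snippet; the test equation dy/dx = y and the step size are arbitrary assumptions):

#include <cstdio>
#include <functional>

// Minimal classic 4th-order Runge-Kutta integrator for dy/dx = f(x, y),
// advancing from x0 to xEnd in fixed steps of size h (no adaptive step control).
double rk4(const std::function<double(double, double)>& f,
           double x0, double y0, double xEnd, double h)
{
    double x = x0, y = y0;
    while (x < xEnd - 1e-12) {
        double k1 = f(x, y);
        double k2 = f(x + 0.5 * h, y + 0.5 * h * k1);
        double k3 = f(x + 0.5 * h, y + 0.5 * h * k2);
        double k4 = f(x + h, y + h * k3);
        y += (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4);   // weighted slope average
        x += h;
    }
    return y;
}

int main() {
    // dy/dx = y with y(0) = 1 has the exact solution y = e^x, so y(1) ~ 2.718281828.
    auto f = [](double /*x*/, double y) { return y; };
    std::printf("y(1) ~ %.9f\n", rk4(f, 0.0, 1.0, 1.0, 0.01));
}

With that detour done, the eigenvalue methods continue below.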
Use orthogonal similarity transforms to reduce a matrix to diagonal form, from which eigenvalue(s) & eigenvectors can be computed iteratively Jacobi method http://en.wikipedia.org/wiki/Jacobi_method C++: http://people.sc.fsu.edu/~jburkardt/classes/acs2_2008/openmp/jacobi/jacobi.html Robust but Computationally intense – use for small matrices < 10x10 Power Iteration http://en.wikipedia.org/wiki/Power_iteration For any given real symmetric matrix, generate the largest single eigenvalue & its eigenvector Simplest method – does not compute matrix decomposition → suitable for large, sparse matrices Inverse Iteration Variation of power iteration method – generates the smallest eigenvalue from the inverse matrix Rayleigh Method http://en.wikipedia.org/wiki/Rayleigh's_method_of_dimensional_analysis Variation of power iteration method Rayleigh Quotient Method Variation of inverse iteration method Matrix Tri-diagonalization Method Use the Householder algorithm to reduce an NxN symmetric matrix to a tridiagonal real symmetric matrix via N-2 orthogonal transforms What's Next Outside of Numerical Methods there are lots of different types of algorithms that I've learned over the decades: Data Mining – (I covered this briefly in a previous post: http://geekswithblogs.net/JoshReuben/archive/2007/12/31/ssas-dm-algorithms.aspx ) Search & Sort Routing Problem Solving Logical Theorem Proving Planning Probabilistic Reasoning Machine Learning Solvers (eg MIP) Bioinformatics (Sequence Alignment, Protein Folding) Quant Finance (I read Wilmott's books – interesting) Sooner or later, I'll cover the above topics as well.

    Read the article

  • Toorcon 15 (2013)

    - by danx
    The Toorcon gang (senior staff): h1kari (founder), nfiltr8, and Geo Introduction to Toorcon 15 (2013) A Tale of One Software Bypass of MS Windows 8 Secure Boot Breaching SSL, One Byte at a Time Running at 99%: Surviving an Application DoS Security Response in the Age of Mass Customized Attacks x86 Rewriting: Defeating RoP and other Shinanighans Clowntown Express: interesting bugs and running a bug bounty program Active Fingerprinting of Encrypted VPNs Making Attacks Go Backwards Mask Your Checksums—The Gorry Details Adventures with weird machines thirty years after "Reflections on Trusting Trust" Introduction to Toorcon 15 (2013) Toorcon 15 is the 15th annual security conference held in San Diego. I've attended about a third of them and blogged about previous conferences I attended here starting in 2003. As always, I've only summarized the talks I attended and interested me enough to write about them. Be aware that I may have misrepresented the speaker's remarks and that they are not my remarks or opinion, or those of my employer, so don't quote me or them. Those seeking further details may contact the speakers directly or use The Google. For some talks, I have a URL for further information. A Tale of One Software Bypass of MS Windows 8 Secure Boot Andrew Furtak and Oleksandr Bazhaniuk Yuri Bulygin, Oleksandr ("Alex") Bazhaniuk, and (not present) Andrew Furtak Yuri and Alex talked about UEFI and Bootkits and bypassing MS Windows 8 Secure Boot, with vendor recommendations. They previously gave this talk at the BlackHat 2013 conference. MS Windows 8 Secure Boot Overview UEFI (Unified Extensible Firmware Interface) is the interface between hardware and OS. UEFI is processor and architecture independent. Malware can replace the bootloader (bootx64.efi, bootmgfw.efi). Once replaced, it can modify the kernel. Trivial to replace the bootloader. Today there are many legacy bootkits—UEFI replaces most of them. MS Windows 8 Secure Boot verifies everything you load, either through signatures or hashes. UEFI firmware relies on secure update (with signed update). You would think Secure Boot would rely on ROM (such as is used for phones), but you can't do that for PCs—PCs use writable memory with signatures. DXE core verifies the UEFI boot loader(s) OS Loader (winload.efi, winresume.efi) verifies the OS kernel A chain of trust is established with a root key (Platform Key, PK), which is a cert belonging to the platform vendor. Key Exchange Keys (KEKs) verify an "authorized" database (db), and "forbidden" database (dbx). X.509 certs with SHA-1/SHA-256 hashes. Keys are stored in non-volatile (NV) flash-based NVRAM. Boot Services (BS) allow adding/deleting keys (can't be accessed once OS starts—which uses Run-Time (RT)). Root cert uses RSA-2048 public keys and PKCS#7 format signatures. SecureBoot — enable/disable image signature checks SetupMode — update keys, self-signed keys, and secure boot variables CustomMode — allows updating keys Secure Boot policy settings are: always execute, never execute, allow execute on security violation, defer execute on security violation, deny execute on security violation, query user on security violation Attacking MS Windows 8 Secure Boot Secure Boot does NOT protect from physical access. Can disable from console. Each BIOS vendor implements Secure Boot differently. There are several platform and BIOS vendors. It becomes a "zoo" of implementations—which can be taken advantage of. Secure Boot is secure only when all vendors implement it correctly. 
Allow only UEFI firmware signed updates protect UEFI firmware from direct modification in flash memory protect FW update components program SPI controller securely protect secure boot policy settings in nvram protect runtime api disable compatibility support module which allows unsigned legacy Can corrupt the Platform Key (PK) EFI root certificate variable in SPI flash. If PK is not found, FW enters setup mode with secure boot turned off. Can also exploit TPM in a similar manner. One is not supposed to be able to directly modify the PK in SPI flash from the OS though. But they found a bug that they can exploit from User Mode (undisclosed) and demoed the exploit. It loaded and ran their own bootkit. The exploit requires a reboot. Multiple vendors are vulnerable. They will disclose this exploit to vendors in the future. Recommendations: allow only signed updates protect UEFI fw in ROM protect EFI variable store in ROM Breaching SSL, One Byte at a Time Yoel Gluck and Angelo Prado Angelo Prado and Yoel Gluck, Salesforce.com CRIME is software that performs a "compression oracle attack." This is possible because the SSL protocol doesn't hide length, and because SSL compresses the header. CRIME makes requests with every possible character and measures the ciphertext length. Look for the plaintext which compresses the most and recover the cookie one byte-at-a-time. SSL Compression uses LZ77 to reduce redundancy. Huffman coding replaces common byte sequences with shorter codes. US CERT thinks the SSL compression problem is fixed, but it isn't. They convinced CERT that it wasn't fixed and they issued a CVE. BREACH, breachattack.com BREACH exploits compression of the SSL response body (Accept-Encoding, Content-Encoding). It takes advantage of the fact that the response body is still compressed at the HTTP level even when TLS-level compression is disabled. BREACH uses gzip and needs fairly "stable" pages that are static for ~30 seconds. It needs attacker-supplied content (say from a web form or added to a URL parameter). BREACH listens to a session's requests and responses, then inserts extra requests and responses. Eventually, BREACH guesses a session's secret key. Can use compression to guess contents one byte at-a-time. For example, "Supersecret SupersecreX" (a wrong guess) compresses by 10 bytes, and "Supersecret Supersecret" (a correct guess) compresses by 11 bytes, so it can find each character by guessing every character. To start the guess, BREACH needs at least three known initial characters in the response sequence. Compression length then "leaks" information. Some roadblocks include no winners (all guesses wrong) or too many winners (multiple possibilities that compress the same). The solutions include: lookahead (guess 2 or 3 characters at-a-time instead of 1 character), which is expensive; rollback to the last known conflict; check the compression ratio; and brute-force the first 3 "bootstrap" characters, if needed (expensive). Block ciphers hide the exact plain text length; the solution is to align the response in advance to the block size. Mitigations – length: use variable padding; secrets: dynamic CSRF tokens per request; secret: change over time; separate the secret to input-less servlets. Future work: either understand DEFLATE/GZIP or HTTPS extensions. Running at 99%: Surviving an Application DoS Ryan Huber Ryan Huber, Risk I/O Ryan first discussed various ways to do a denial of service (DoS) attack against web services. One usual method is to find a slow web page and do several wgets. Or download large files. 
Apache is not well suited to handling a large number of connections, but one can put something in front of it Can use Apache alternatives, such as nginx How to identify malicious hosts short, sudden web requests user-agent is obvious (curl, python) same url requested repeatedly no web page referer (not normal) hidden links. hide a link and see if a bot gets it restricted access if not your geo IP (unless the website is global) missing common headers in request regular timing first seen IP at beginning of attack count requests per host (usually a very large number) Use of captcha can mitigate attacks, but you'll lose a lot of genuine users. Bouncer, goo.gl/c2vyEc and www.github.com/rawdigits/Bouncer Bouncer is software written by Ryan in netflow. Bouncer has a small, unobtrusive footprint and detects DoS attempts. It closes blacklisted sockets immediately (not nice about it, no proper connection close). Aggregator collects requests and controls your web proxies. Need NTP on the front end web servers for clean data for use by bouncer. Bouncer is also useful for a popularity storm ("Slashdotting") and scraper storms. Future features: gzip collection data, documentation, consumer library, multitask, logging destroyed connections. Takeaways: DoS mitigation is easier with a complete picture Bouncer designed to make it easier to detect and defend DoS—not a complete cure Security Response in the Age of Mass Customized Attacks Peleus Uhley and Karthik Raman Peleus Uhley and Karthik Raman, Adobe ASSET, blogs.adobe.com/asset/ Peleus and Karthik talked about response to mass-customized exploits. Attackers behave much like a business. "Mass customization" refers to a concept discussed in the book Future Perfect by Stan Davis of Harvard Business School. Mass customization is differentiating a product for an individual customer, but at a mass production price. For example, the same individual with a debit card receives basically the same customized ATM experience around the world. Or designing your own PC from commodity parts. Exploit kits are another example of mass customization. The kits support multiple browsers and plugins, and allow new modules. Exploit kits are cheap and customizable. Organized gangs use exploit kits. A group at Berkeley looked at 77,000 malicious websites (Grier et al., "Manufacturing Compromise: The Emergence of Exploit-as-a-Service", 2012). They found 10,000 distinct binaries among them, but derived from only a dozen or so exploit kits. Characteristics of Mass Malware: potent, resilient, relatively low cost Technical characteristics: multiple OS, multiple payloads, multiple scenarios, multiple languages, obfuscation Response time for 0-day exploits has gone down from ~40 days 5 years ago to about 10 days now. So the drive with malware is towards mass customized exploits, to avoid detection. There's plenty of evidence that exploit development has Project Manager bureaucracy. From the malware, they infer edicts to: support all versions of reader support all versions of windows support all versions of flash support all browsers write large, complex, difficult-to-maintain code (8750 lines of JavaScript, for example) Exploits have "loose coupling" of multiple versions of software (Adobe), OS, and browser. This allows specific attacks against specific versions of multiple pieces of software. Also allows exploits of more obscure software/OS/browsers and obscure versions. Gave examples of exploits that exploited 2, 3, 6, or 14 separate bugs. 
However, these complete exploits are more likely to be buggy or fragile in themselves and easier to defeat. Future research includes normalizing malware and Javascript. Conclusion: The coming trend is that mass-malware with mass zero-day attacks will result in mass customization of attacks. x86 Rewriting: Defeating RoP and other Shinanighans Richard Wartell Richard Wartell The attack vector we are addressing here is: First some malware causes a buffer overflow. The malware has no program access, but has input access and overflows a buffer to place code on the stack. Later the stack became non-executable. The workaround malware used was to write a bogus return address to the stack, jumping to the malware. Later came ASLR (Address Space Layout Randomization) to randomize memory layout and make addresses non-deterministic. The workaround malware used was to jump to existing code segments in the program that can be used in bad ways. "RoP" is Return-oriented Programming attacks. RoP attacks use your own code and write return address on stack to (existing) exploitable code found in program ("gadgets"). Pinkie Pie was paid $60K last year for a RoP attack. One solution is using anti-RoP compilers that compile source code with NO return instructions. ASLR randomizes where code is loaded, but does not randomize the "gadgets" within it. IPR/ILR ("Instruction Location Randomization") randomizes each instruction with a virtual machine. Richard's goal was to randomize a binary with no source code access. He created "STIR" (Self-Transforming Instruction Relocation). STIR disassembles binary and operates on "basic blocks" of code. The STIR disassembler is conservative in what to disassemble. Each basic block is moved to a random location in memory. Next, STIR writes new code sections with copies of "basic blocks" of code in randomized locations. The old code is copied and rewritten with jumps to the new code; the original code sections in the file are marked non-executable. STIR has better entropy than ASLR in location of code. Makes brute force attacks much harder. STIR runs on MS Windows (PE) and Linux (ELF). It eliminated 99.96% or more of "gadgets" (i.e., moved the address). Overhead usually 5-10% on MS Windows, about 1.5-4% on Linux (but some code actually runs faster!). The unique thing about STIR is it requires no source access and the modified binary fully works! Current work is to rewrite code to enforce security policies. For example, don't create a *.{exe,msi,bat} file. Or don't connect to the network after reading from the disk. Clowntown Express: interesting bugs and running a bug bounty program Collin Greene Collin Greene, Facebook Collin talked about Facebook's bug bounty program. Background at FB: FB has good security frameworks, such as security teams, external audits, and cc'ing on diffs. But there are lots of "deep, dark, forgotten" parts of legacy FB code. Collin gave several examples of bountied bugs. Some bounty submissions were on software purchased from a third-party (but bounty claimers don't know and don't care). We use security questions, as does everyone else, but they are basically insecure (often easily discoverable). Collin didn't expect many bugs from the bounty program, but they ended up getting 20+ good bugs in the first 24 hours and good submissions continue to come in. Bug bounties bring people in with different perspectives, and are paid only for success. Bug bounty is a better use of a fixed amount of time and money versus just code review or static code analysis. The Bounty program started in July 2011 and has paid out $1.5 million to date. 
14% of the submissions have been high priority problems that needed to be fixed immediately. The best bugs come from a small % of submitters (as with everything else)—the top paid submitters are paid 6 figures a year. Spammers like to backstab competitors. The youngest submitter was 13. Some submitters have been hired. Bug bounties also let them see bugs that were missed by tools or reviews, allowing improvement in the process. Bug bounties might not work for traditional software companies where the product has a release cycle or is not on the Internet. Active Fingerprinting of Encrypted VPNs Anna Shubina Anna Shubina, Dartmouth Institute for Security, Technology, and Society (I missed the start of her talk because another track went overtime. But I have the DVD of the talk, so I'll expand later) IPsec leaves fingerprints. Using netcat, one can easily visually distinguish various crypto chaining modes just from packet timing on a chart (example, DES-CBC versus AES-CBC) One can tell a lot about VPNs just from ping roundtrips (such as what router is used) Delayed packets are not informative about a network, especially if far away from the network More exploration is needed of how TCP works in real life with respect to timing Making Attacks Go Backwards Fuzzynop FuzzyNop, Mandiant This talk is not about threat attribution (finding who), product solutions, politics, or sales pitches. But who are making these malware threats? It's not a single person or group—they have diverse skill levels. There are a lot of fat-fingered fumblers out there. Always look for low-hanging fruit first: "hiding" malware in the temp, recycle, or root directories creation of unnamed scheduled tasks obvious names of files and syscalls ("ClearEventLog") uncleared event logs. Clearing the event log in itself, and the time of clearing, is a red flag and a good first clue to look for on a suspect system Reverse engineering is hard. Disassembler use takes practice and skill. A popular tool is IDA Pro, but it takes multiple interactive iterations to get a clean disassembly. Key loggers are used a lot in targeted attacks. They are typically custom code or built into a backdoor. A big tip-off is that non-printable characters need to be printed out (such as "[Ctrl]" "[RightShift]") or time stamp printf strings. Look for these in files. Presence is not proof they are used. Absence is not proof they are not used. Java exploits. Can parse a jar file with idxparser.py and decompile the Java file. Java is typically used to target tech companies. Backdoors are the main persistence mechanism (provided externally) for malware. Also malware typically needs command and control. Application of Artificial Intelligence in Ad-Hoc Static Code Analysis John Ashaman John Ashaman, Security Innovation Initially John tried to analyze open source files with open source static analysis tools, but these showed thousands of false positives. Also tried using grep, but this fails to find anything even mildly complex. So next John decided to write his own tool. His approach was to first generate a call graph then analyze the graph. However, the problem is that making a call graph is really hard. For example, one problem is "evil" coding techniques, such as passing function pointers. First the tool generated an Abstract Syntax Tree (AST) with the nodes created from method declarations and edges created from method use. Then the tool generated a control flow graph with the goal to find a path through the AST (a maze) from source to sink. 
The algorithm is to look at adjacent nodes to see if any are "scary" (a vulnerability), using heuristics for search order. The tool, called "Scat" (Static Code Analysis Tool), currently looks for C# vulnerabilities and some simple PHP. Later, he plans to add more PHP, then JSP and Java. For more information see his posts on the Security Innovation blog and NRefactory on GitHub. Mask Your Checksums—The Gory Details Eric (XlogicX) Davisson Sometimes in emailing or posting TCP/IP packets to analyze problems, you may want to mask the IP address. But to do this correctly, you need to mask the checksum too, or you'll leak information about the IP. Problem reports found on stackoverflow.com, sans.org, and pastebin.org are usually not masked, but a few companies do care. If only the IP is masked, the IP may be guessed from the checksum (that is, it leaks data). Other parts of the packet may leak more data about the IP. TCP and IP checksums both refer to the same data, so one can get more bits of information out of using both checksums than just one checksum. Also, one can usually determine the OS from the TTL field and ports in a packet header. If we get hundreds of possible results (16 possibilities for each masked nibble that is unknown), one can do other things to narrow the results, such as look at packet contents for domain or geo information. With hundreds of results, one can import them as CSV into a spreadsheet, correlate with geo data, and see where each possibility is located. Eric then demoed a real email report with a masked IP packet attached. He was able to find the exact IP address, given the geo and university of the sender. The point is: if you're going to mask a packet, do it right. Eric wouldn't usually bother, but do it correctly if at all, to not create a false impression of security. (A minimal checksum sketch follows these notes.) Adventures with weird machines thirty years after "Reflections on Trusting Trust" Sergey Bratus, Dartmouth College (and Julian Bangert and Rebecca Shapiro, not present) "Reflections on Trusting Trust" refers to Ken Thompson's classic 1984 paper. "You can't trust code that you did not totally create yourself." There are invisible links in the chain of trust, such as "well-installed microcode bugs" or bugs in the compiler, and other planted bugs. Thompson showed how a compiler can introduce and propagate bugs in unmodified source. But suppose there are no bugs and you trust the author, can you trust the code? Hell no! There are too many factors—it's Babylonian in nature. Why not? Well: input is not well-defined/recognized (the code's assumptions about "checked" input will be violated, a bug/vulnerability). For example, HTML is recursive, but regex checking is not recursive. Input can be well-formed but so complex there's no telling what it does. For example, ELF file parsing is complex and has multiple ways of parsing. Input is seen differently by different pieces of the program or toolchain. Any input is a program: input executes on input handlers (drives state changes & transitions), and only a well-defined execution model can be trusted (regex/DFA, PDA, CFG). An input handler either is a "recognizer" for the inputs as a well-defined language (see langsec.org) or it's a "virtual machine" for inputs to drive into pwn-age. ELF ABI (UNIX/Linux executable file format) case study. Problems can arise from these steps (without planting bugs): compiler, linker, loader, ld.so/rtld, relocator, DWARF (debugger info), exceptions. The problem is you can't really automatically analyze code (it's the "halting problem" and undecidable). 
The only solution is to freeze code and sign it. But you can't freeze everything! You can't freeze ASLR or loading—you must have tables and metadata. Any sufficiently complex input data is the same as VM byte code. For example, ELF relocation entries + dynamic symbols == a Turing-complete machine (TM). @bxsays created a Turing machine in Linux from relocation data (not code) in an ELF file. For more information, see Rebecca "bx" Shapiro's presentation from last year's Toorcon, "Programming Weird Machines with ELF Metadata". @bxsays did the same thing with Mach-O bytecode. Or DWARF exception handling data: .eh_frame + glibc == a Turing machine. X86 MMU (IDT, GDT, TSS): used address translation to create a Turing machine. The page handler reads and writes memory (on page fault) using a page table, which can be used as Turing machine byte code. There is an example on GitHub using this TM that will fly a glider across the screen. Next Sergey talked about "Parser Differentials": having one input format but two parsers creates confusion and opportunity for exploitation. For example, CSRs are parsed during creation by the cert requestor and again by another parser at the CA. Another example is ELF—several parsers in the OS tool chain, which are all different. One can have two different Program Headers (PHDRs) because ld.so parses multiple PHDRs. The second PHDR can completely transform the executable. This is described in a paper in the first issue of the International Journal of PoC. Conclusions: trusting computers is not only about bugs! Bugs are part of the problem, but by no means all of it. Complex data formats mean bugs. There is no "chain of trust" in Babylon! (That is, with parser differentials.) We need to squeeze complexity out of data until data stops being "code equivalent". Further information: see langsec.org and USENIX WOOT 2013 (Workshop on Offensive Technologies) for "weird machines" papers and videos.
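
Relating back to the "Mask Your Checksums" notes above: the IP and TCP checksums use the same one's-complement algorithm (RFC 1071), so if a report masks the address but leaves the checksum intact, every guess for a masked nibble either reproduces the published checksum or it does not. The following is a minimal sketch of that checksum in C#; it is my own illustration, not the speaker's tooling.

static class InternetChecksum
{
    // RFC 1071 Internet checksum: sum the 16-bit big-endian words,
    // fold any carries back in, then take the one's complement.
    public static ushort Compute(byte[] data)
    {
        uint sum = 0;
        for (int i = 0; i + 1 < data.Length; i += 2)
            sum += (uint)((data[i] << 8) | data[i + 1]);
        if (data.Length % 2 == 1)
            sum += (uint)(data[data.Length - 1] << 8);   // pad a trailing odd byte
        while ((sum >> 16) != 0)
            sum = (sum & 0xFFFF) + (sum >> 16);          // fold carry bits
        return (ushort)~sum;
    }
}

To test a candidate address, copy the header, zero the checksum field, write the candidate IP bytes in, recompute, and compare against the value in the posted packet. Each masked nibble contributes only 16 possibilities, which is why the "hundreds of results" mentioned in the talk are easy to enumerate and then narrow with geo or domain hints.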

    Read the article

  • Extending NerdDinner: Adding Geolocated Flair

    - by Jon Galloway
NerdDinner is a website with the audacious goal of “Organizing the world’s nerds and helping them eat in packs.” Because nerds aren’t likely to socialize with others unless a website tells them to do it. Scott Hanselman showed off a lot of the cool features we’ve added to NerdDinner lately during his popular talk at MIX10, Beyond File | New Company: From Cheesy Sample to Social Platform. Did you miss it? Go ahead and watch it, I’ll wait. One of the features we wanted to add was flair. You know about flair, right? It’s a way to let folks who like your site show it off in their own site. For example, here’s my StackOverflow flair: Great! So how could we add some of this flair stuff to NerdDinner? What do we want to show? If we’re going to encourage our users to give up a bit of their beautiful website to show off a bit of ours, we need to think about what they’ll want to show. For instance, my StackOverflow flair is all about me, not StackOverflow. So how will this apply to NerdDinner? Since NerdDinner is all about organizing local dinners, in order for the flair to be useful it needs to make sense for the person viewing the web page. If someone from Egypt visits my blog, they should see information about NerdDinners in Egypt. That’s geolocation – localizing site content based on where the browser’s sitting, and it makes sense for flair as well as entire websites. So we’ll set up a simple little callout that prompts them to host a dinner in their area: Hopefully our flair works and there is a dinner near your viewers, so they’ll see another view which lists upcoming dinners near them: The Geolocation Part Generally website geolocation is done by mapping the requestor’s IP address to a geographic area. It’s not an exact science, but I’ve always found it to be pretty accurate. There are (at least) three ways to handle it: You pay somebody like MaxMind for a database (with regular updates) that sits on your server, and you use their API to do lookups. I used this on a pretty big project a few years ago and it worked well. You use the HTML 5 Geolocation API or Google Gears or some other browser-based solution. I think those are cool (I use Google Gears a lot), but they’re both in flux right now and I don’t think either has a wide enough install base yet to rely on them. You might want to, but I’ve heard you do all kinds of crazy stuff, and sometimes it gets you in trouble. I don’t mean talk out of line, but we all laugh behind your back a bit. But, hey, it’s up to you. It’s your flair or whatever. There are some free webservices out there that will take an IP address and give you location information. Easy, and works for everyone. That’s what we’re doing. I looked at a few different services and settled on IPInfoDB. It’s free, has a great API, and even returns JSON, which is handy for Javascript use. The IP query is pretty simple. 
We hit a URL like this: http://ipinfodb.com/ip_query.php?ip=74.125.45.100&timezone=false … and we get an XML response back like this… <?xml version="1.0" encoding="UTF-8"?> <Response> <Ip>74.125.45.100</Ip> <Status>OK</Status> <CountryCode>US</CountryCode> <CountryName>United States</CountryName> <RegionCode>06</RegionCode> <RegionName>California</RegionName> <City>Mountain View</City> <ZipPostalCode>94043</ZipPostalCode> <Latitude>37.4192</Latitude> <Longitude>-122.057</Longitude> </Response> So we’ll build some data transfer classes to hold the location information, like this: public class LocationInfo { public string Country { get; set; } public string RegionName { get; set; } public string City { get; set; } public string ZipPostalCode { get; set; } public LatLong Position { get; set; } } public class LatLong { public float Lat { get; set; } public float Long { get; set; } } And now hitting the service is pretty simple: public static LocationInfo HostIpToPlaceName(string ip) { string url = "http://ipinfodb.com/ip_query.php?ip={0}&timezone=false"; url = String.Format(url, ip); var result = XDocument.Load(url); var location = (from x in result.Descendants("Response") select new LocationInfo { City = (string)x.Element("City"), RegionName = (string)x.Element("RegionName"), Country = (string)x.Element("CountryName"), ZipPostalCode = (string)x.Element("ZipPostalCode"), Position = new LatLong { Lat = (float)x.Element("Latitude"), Long = (float)x.Element("Longitude") } }).First(); return location; } Getting The User’s IP Okay, but first we need the end user’s IP, and you’d think it would be as simple as reading the value from HttpContext: HttpContext.Current.Request.UserHostAddress But you’d be wrong. Sorry. UserHostAddress just wraps HttpContext.Current.Request.ServerVariables["REMOTE_ADDR"], but that doesn’t get you the IP for users behind a proxy. That’s in another header, “HTTP_X_FORWARDED_FOR”. So you can either hit a wrapper and then check a header, or just check two headers. I went for uniformity: string SourceIP = string.IsNullOrEmpty(Request.ServerVariables["HTTP_X_FORWARDED_FOR"]) ? Request.ServerVariables["REMOTE_ADDR"] : Request.ServerVariables["HTTP_X_FORWARDED_FOR"]; We’re almost set to wrap this up, but first let’s talk about our views. Yes, views, because we’ll have two. Selecting the View We wanted to make it easy for people to include the flair in their sites, so we looked around at how other people were doing this. The StackOverflow folks have a pretty good flair system, which allows you to include the flair in your site as either an IFRAME reference or a Javascript include. We’ll do both. We have a ServicesController to handle use of the site information outside of NerdDinner.com, so this fits in pretty well there. We’ll be displaying the same information for both HTML and Javascript flair, so we can use one Flair controller action which will return a different view depending on the requested format. Here’s our general flow for our controller action: Get the user’s IP Translate it to a location Grab the top three upcoming dinners that are near that location Select the view based on the format (defaulted to “html”) Return a FlairViewModel which contains the list of dinners and the location information public ActionResult Flair(string format = "html") { string SourceIP = string.IsNullOrEmpty( Request.ServerVariables["HTTP_X_FORWARDED_FOR"]) ? 
Request.ServerVariables["REMOTE_ADDR"] : Request.ServerVariables["HTTP_X_FORWARDED_FOR"]; var location = GeolocationService.HostIpToPlaceName(SourceIP); var dinners = dinnerRepository. FindByLocation(location.Position.Lat, location.Position.Long). OrderByDescending(p => p.EventDate).Take(3); // Select the view we'll return. // Using a switch because we'll add in JSON and other formats later. string view; switch (format.ToLower()) { case "javascript": view = "JavascriptFlair"; break; default: view = "Flair"; break; } return View( view, new FlairViewModel { Dinners = dinners.ToList(), LocationName = string.IsNullOrEmpty(location.City) ? "you" : String.Format("{0}, {1}", location.City, location.RegionName) } ); } Note: I’m not in love with the logic here, but it seems like overkill to extract the switch statement away when we’ll probably just have two or three views. What do you think? The HTML View The HTML version of the view is pretty simple – the only thing of any real interest here is the use of an extension method to truncate strings that are would cause the titles to wrap. public static string Truncate(this string s, int maxLength) { if (string.IsNullOrEmpty(s) || maxLength <= 0) return string.Empty; else if (s.Length > maxLength) return s.Substring(0, maxLength) + "..."; else return s; }   So here’s how the HTML view ends up looking: <%@ Page Title="" Language="C#" Inherits="System.Web.Mvc.ViewPage<FlairViewModel>" %> <%@ Import Namespace="NerdDinner.Helpers" %> <%@ Import Namespace="NerdDinner.Models" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Nerd Dinner</title> <link href="/Content/Flair.css" rel="stylesheet" type="text/css" /> </head> <body> <div id="nd-wrapper"> <h2 id="nd-header">NerdDinner.com</h2> <div id="nd-outer"> <% if (Model.Dinners.Count == 0) { %> <div id="nd-bummer"> Looks like there's no Nerd Dinners near <%:Model.LocationName %> in the near future. Why not <a target="_blank" href="http://www.nerddinner.com/Dinners/Create">host one</a>?</div> <% } else { %> <h3> Dinners Near You</h3> <ul> <% foreach (var item in Model.Dinners) { %> <li> <%: Html.ActionLink(String.Format("{0} with {1} on {2}", item.Title.Truncate(20), item.HostedBy, item.EventDate.ToShortDateString()), "Details", "Dinners", new { id = item.DinnerID }, new { target = "_blank" })%></li> <% } %> </ul> <% } %> <div id="nd-footer"> More dinners and fun at <a target="_blank" href="http://nrddnr.com">http://nrddnr.com</a></div> </div> </div> </body> </html> You’d include this in a page using an IFRAME, like this: <IFRAME height=230 marginHeight=0 src="http://nerddinner.com/services/flair" frameBorder=0 width=160 marginWidth=0 scrolling=no></IFRAME> The Javascript view The Javascript flair is written so you can include it in a webpage with a simple script include, like this: <script type="text/javascript" src="http://nerddinner.com/services/flair?format=javascript"></script> The goal of this view is very similar to the HTML embed view, with a few exceptions: We’re creating a script element and adding it to the head of the document, which will then document.write out the content. Note that you have to consider if your users will actually have a <head> element in their documents, but for website flair use cases I think that’s a safe bet. Since the content is being added to the existing page rather than shown in an IFRAME, all links need to be absolute. 
That means we can’t use Html.ActionLink, since it generates relative routes. We need to escape everything since it’s being written out as strings. We need to set the content type to application/x-javascript. The easiest way to do that is to use the <%@ Page ContentType%> directive. <%@ Page Language="C#" Inherits="System.Web.Mvc.ViewPage<NerdDinner.Models.FlairViewModel>" ContentType="application/x-javascript" %> <%@ Import Namespace="NerdDinner.Helpers" %> <%@ Import Namespace="NerdDinner.Models" %> document.write('<script>var link = document.createElement(\"link\");link.href = \"http://nerddinner.com/content/Flair.css\";link.rel = \"stylesheet\";link.type = \"text/css\";var head = document.getElementsByTagName(\"head\")[0];head.appendChild(link);</script>'); document.write('<div id=\"nd-wrapper\"><h2 id=\"nd-header\">NerdDinner.com</h2><div id=\"nd-outer\">'); <% if (Model.Dinners.Count == 0) { %> document.write('<div id=\"nd-bummer\">Looks like there\'s no Nerd Dinners near <%:Model.LocationName %> in the near future. Why not <a target=\"_blank\" href=\"http://www.nerddinner.com/Dinners/Create\">host one</a>?</div>'); <% } else { %> document.write('<h3> Dinners Near You</h3><ul>'); <% foreach (var item in Model.Dinners) { %> document.write('<li><a target=\"_blank\" href=\"http://nrddnr.com/<%: item.DinnerID %>\"><%: item.Title.Truncate(20) %> with <%: item.HostedBy %> on <%: item.EventDate.ToShortDateString() %></a></li>'); <% } %> document.write('</ul>'); <% } %> document.write('<div id=\"nd-footer\"> More dinners and fun at <a target=\"_blank\" href=\"http://nrddnr.com\">http://nrddnr.com</a></div></div></div>'); Getting IP’s for Testing There are a variety of online services that will translate a location to an IP, which were handy for testing these out. I found http://www.itouchmap.com/latlong.html to be most useful, but I’m open to suggestions if you know of something better. Next steps I think the next step here is to minimize load – you know, in case people start actually using this flair. There are two places to think about – the NerdDinner.com servers, and the services we’re using for Geolocation. I usually think about caching as a first attack on server load, but that’s less helpful here since every user will have a different IP. Instead, I’d look at taking advantage of Asynchronous Controller Actions, a cool new feature in ASP.NET MVC 2. Async Actions let you call a potentially long-running webservice without tying up a thread on the server while waiting for the response. There’s some good info on that in the MSDN documentation, and Dino Esposito wrote a great article on Asynchronous ASP.NET Pages in the April 2010 issue of MSDN Magazine. But let’s think of the children, shall we? What about ipinfodb.com? Well, they don’t have specific daily limits, but they do throttle you if you put a lot of traffic on them. From their FAQ: We do not have a specific daily limit but queries that are at a rate faster than 2 per second will be put in "queue". If you stay below 2 queries/second everything will be normal. If you go over the limit, you will still get an answer for all queries but they will be slowed down to about 1 per second. This should not affect most users but for high volume websites, you can either use our IP database on your server or we can whitelist your IP for 5$/month (simply use the donate form and leave a comment with your server IP). 
Good programming practices such as not querying our API for all page views (you can store the data in a cookie or a database) will also help not reaching the limit. So the first step there is to save the geolocalization information in a time-limited cookie, which will allow us to look up the local dinners immediately without having to hit the geolocation service.
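
As a rough sketch of that closing suggestion, the controller could check a cookie before calling the geolocation service. The cookie name "NerdDinnerLocation", the pipe-delimited value format, and the seven-day lifetime are my own assumptions, not code from the NerdDinner project, and the method assumes System.Globalization and System.Web are imported in the controller.

private LocationInfo GetLocationWithCookieCache(string sourceIp)
{
    // Reuse a previously stored location if the visitor already has the cookie.
    var cookie = Request.Cookies["NerdDinnerLocation"];
    if (cookie != null && !string.IsNullOrEmpty(cookie.Value))
    {
        var parts = cookie.Value.Split('|');
        if (parts.Length == 4)
            return new LocationInfo
            {
                City = parts[0],
                RegionName = parts[1],
                Position = new LatLong
                {
                    Lat = float.Parse(parts[2], CultureInfo.InvariantCulture),
                    Long = float.Parse(parts[3], CultureInfo.InvariantCulture)
                }
            };
    }

    // Otherwise hit the geolocation service once and cache the result.
    var location = GeolocationService.HostIpToPlaceName(sourceIp);
    var value = string.Join("|", new[]
    {
        location.City,
        location.RegionName,
        location.Position.Lat.ToString(CultureInfo.InvariantCulture),
        location.Position.Long.ToString(CultureInfo.InvariantCulture)
    });
    Response.Cookies.Add(new HttpCookie("NerdDinnerLocation", value)
    {
        Expires = DateTime.Now.AddDays(7)   // time-limited, as the post suggests
    });
    return location;
}

Only the fields the flair actually needs are cached here; repeat visitors within the cookie lifetime never touch ipinfodb.com at all, which keeps the request rate well under the 2-queries-per-second threshold mentioned in their FAQ.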

    Read the article

  • Tips on Migrating from AquaLogic .NET Accelerator to WebCenter WSRP Producer for .NET

    - by user647124
This year I embarked on a journey to migrate a group of ASP.NET web applications developed to integrate with WebLogic Portal 9.2 via the AquaLogic® Interaction .NET Application Accelerator 1.0 to instead use the Oracle WebCenter WSRP Producer for .NET and integrated with WebLogic Portal 10.3.4. It has been a very winding path and this blog entry is intended to share both the lessons learned and relevant approaches that led to those learnings. Like most journeys of discovery, it was not a direct path, and there are notes to let you know when it is practical to skip a section if you are in a hurry to get from here to there. For the Curious From the perspective of necessity, this section would be better at the end. If it were there, though, it would probably be read by far fewer people, including those that are actually interested in these types of sections. Those in a hurry may skip past and be none the worse for it in dealing with the hands-on bits of performing a migration from .NET Accelerator to WSRP Producer. For others who want to talk about why they did what they did after they did it, or just want to know for themselves, enjoy. A Brief (and edited) History of the WSRP for .NET Technologies (as Relevant to this Post) Note: This section is for those who are curious about why the migration path is not as simple as many other Oracle technologies. You can skip this section in its entirety and still be just as competent in performing a migration as if you had read it. The currently deployed architecture that was to be migrated and upgraded achieved initial integration between .NET and J2EE over the WSRP protocol through the use of the AquaLogic Interaction .NET Application Accelerator. The .NET Accelerator allowed the applications that were written in ASP.NET and deployed on a Microsoft Internet Information Server (IIS) to interact with a WebLogic Portal application deployed on a WebLogic (J2EE application) Server (both version 9.2, the state of the art at the time of its creation). At the time this architectural decision for the application was made, both the AquaLogic and WebLogic brands were owned by BEA Systems. The AquaLogic brand included products acquired by BEA through the acquisition of Plumtree, whose flagship product was a portal platform available in both J2EE and .NET versions. As part of this dual technology support an adaptor was created to facilitate the use of WSRP as a communication protocol where customers wished to integrate components from both versions of the Plumtree portal. The adapter evolved over several product generations to include a broad array of both standard and proprietary WSRP integration capabilities. Later, BEA Systems was acquired by Oracle. Over the course of several years Oracle has acquired a large number of portal applications and has taken the strategic direction to migrate users of these myriad (and formerly competitive) products to the Oracle WebCenter technology stack. As part of Oracle’s strategic technology roadmap, older portal products are being scheduled for end of life, including the portal products that were part of the BEA acquisition. The .NET Accelerator has been modified over a very long period of time with features driven by users of that product and developed under three different vendors (each a direct competitor in the same solution space prior to merger). 
The Oracle WebCenter WSRP Producer for .NET was introduced much more recently with the key objective to specifically address the needs of the WebCenter customers developing solutions accessible through both J2EE and .NET platforms utilizing the WSRP specifications. The Oracle Product Development Team also provides these insights on the drivers for developing the WSRP Producer: ***************************************** Support for ASP.NET AJAX. Controls using the ASP.NET AJAX script manager do not function properly in the Application Accelerator for .NET. Support 2 way SSL in WLP. This was not possible with the proxy/bridge set up in the existing Application Accelerator for .NET. Allow developers to code portlets (Web Parts) using the .NET framework rather than a proprietary framework. Developers had to use the Application Accelerator for .NET plug-ins to Visual Studio to manage preferences and profile data. This is now replaced with the .NET Framework Personalization (for preferences) and Profile providers. The WSRP Producer for .NET was created as a new way of developing .NET portlets. It was never designed to be an upgrade path for the Application Accelerator for .NET. .NET developers would create new .NET portlets with the WSRP Producer for .NET and leave any existing .NET portlets running in the Application Accelerator for .NET. ***************************************** The advantage to creating a new solution for WSRP is a product that is far easier for Oracle to maintain and support, which in turn improves quality, reliability and maintainability for their customers. No changes to J2EE applications consuming the WSRP portlets previously rendered by the .NET Accelerator are required to migrate from the AquaLogic WSRP solution. For some customers using the .NET Accelerator the challenge is adapting their current .NET applications to work with the WSRP Producer (or any other WSRP adapter, as they are proprietary by nature). Part of this adaptation is the need to deploy the .NET applications as children of the WSRP Producer web application, which serves as the root. Differences between .NET Accelerator and WSRP Producer Note: This section is for those who are curious about why the migration is not as pluggable as something such as changing security providers in WebLogic Server. You can skip this section in its entirety and still be just as competent in performing a migration as if you had read it. The basic terminology used to describe the participating applications in a WSRP environment is the same when applied to either the .NET Accelerator or the WSRP Producer: Producer and Consumer. In both cases the .NET application serves as what is referred to in a WSRP environment as the Producer. The difference lies in how the two adapters create the WSRP translation of the .NET application. The .NET Accelerator, as the name implies, is meant to serve as a quick way of adding WSRP capability to a .NET application. As such, at a high level, the .NET Accelerator behaves as a proxy for requests between the .NET application and the WSRP Consumer. A WSRP request is sent from the consumer to the .NET Accelerator, the .NET Accelerator transforms this request into an ASP.NET request, receives the response, then transforms the response into a WSRP response. The .NET Accelerator is deployed as a stand-alone application on IIS. The WSRP Producer is deployed as a parent application on IIS and all ASP.NET modules that will be made available over WSRP are deployed as children of the WSRP Producer application. 
In this manner, the WSRP Producer acts more as a Request Filter than a proxy in the WSRP transactions between Producer and Consumer. Highly Recommended Enabling Logging Note: You can skip this section now, but you will most likely want to come back to it later, so why not just read it now? Logging is very helpful in tracking down the causes of any anomalies during testing of migrated portlets. To enable the WSRP Producer logging, update the Application_Start method in the Global.asax.cs for your .NET application by adding log4net.Config.XmlConfigurator.Configure(); IIS logs will usually (in a standard configuration) be in a sub folder under C:\WINDOWS\system32\LogFiles\W3SVC. WSRP Producer logs will be found at C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault\Logs\WSRPProducer.log InputTrace.webinfo and OutputTrace.webinfo are located under C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault and can be useful in debugging issues related to markup transformations. Things You Must Do Merge Web.Config Note: If you have been skipping all the sections that you can, now is the time to stop and pay attention J Because the existing .NET application will become a sub-application to the WSRP Producer, you will want to merge required settings from the existing Web.Config to the one in the WSRP Producer. Use the WSRP Producer Master Page The Master Page installed for the WSRP Producer provides common, hiddenform fields and JavaScripts to facilitate portlet instance management and display configuration when the child page is being rendered over WSRP. You add the Master Page by including it in the <@ Page declaration with MasterPageFile="~/portlets/Resources/MasterPages/WSRP.Master" . You then replace: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" > <HTML> <HEAD> With <asp:Content ID="ContentHead1" ContentPlaceHolderID="wsrphead" Runat="Server"> And </HEAD> <body> <form id="theForm" method="post" runat="server"> With </asp:Content> <asp:Content ID="ContentBody1" ContentPlaceHolderID="Main" Runat="Server"> And finally </form> </body> </HTML> With </asp:Content> In the event you already use Master Pages, adapt your existing Master Pages to be sub masters. See Nested ASP.NET Master Pages for a detailed reference of how to do this. It Happened to Me, It Might Happen to You…Or Not Watch for Use of Session or Request in OnInit In the event the .NET application being modified has pages developed to assume the user has been authenticated in an earlier page request there may be direct or indirect references in the OnInit method to request or session objects that may not have been created yet. This will vary from application to application, so the recommended approach is to test first. If there is an issue with a page running as a WSRP portlet then check for potential references in the OnInit method (including references by methods called within OnInit) to session or request objects. If there are, the simplest solution is to create a new method and then call that method once the necessary object(s) is fully available. I find doing this at the start of the Page_Load method to be the simplest solution. Case Sensitivity .NET languages are not case sensitive, but Java is. This means it is possible to have many variations of SRC= and src= or .JPG and .jpg. The preferred solution is to make these mark up instances all lower case in your .NET application. This will allow the default Rewriter rules in wsrp-producer.xml to work as is. 
If this is not practical, then make duplicates of any rules where an issue is occurring due to upper or mixed case usage in the .NET application markup and match the case in use with the duplicate rule. For example: <RewriterRule> <LookFor>(href=\"([^\"]+)</LookFor> <ChangeToAbsolute>true</ChangeToAbsolute> <ApplyTo>.axd,.css</ApplyTo> <MakeResource>true</MakeResource> </RewriterRule> May need to be duplicated as: <RewriterRule> <LookFor>(HREF=\"([^\"]+)</LookFor> <ChangeToAbsolute>true</ChangeToAbsolute> <ApplyTo>.axd,.css</ApplyTo> <MakeResource>true</MakeResource> </RewriterRule> While it is possible to write a regular expression that will handle mixed case usage, it would be long and strenous to test and maintain, so the recommendation is to use duplicate rules. Is it Still Relative? Some .NET applications base relative paths with a fixed root location. With the introduction of the WSRP Producer, the root has moved up one level. References to ~/ will need to be updated to ~/portlets and many ../ paths will need another ../ in front. I Can See You But I Can’t Find You This issue was first discovered while debugging modules with code that referenced the form on a page from the code-behind by name and/or id. The initial error presented itself as run-time error that was difficult to interpret over WSRP but seemed clear when run as straight ASP.NET as it indicated that the object with the form name did not exist. Since the form name was no longer valid after implementing the WSRP Master Page, the likely fix seemed to simply update the references in the code. However, as the WSRP Master Page is external to the code, a compile time error resulted: Error      155         The name 'form1' does not exist in the current context                C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault\portlets\legacywebsite\module\Screens \Reporting.aspx.cs                51           52           legacywebsite.module Much hair-pulling research later it was discovered that it was the use of the FindControl method causing the issue. FindControl doesn’t work quite as expected once a Master Page has been introduced as the controls become embedded in controls, require a recursion to find them that is not part of the FindControl method. In code where the page form is referenced by name, there are two steps to the solution. First, the form needs to be referenced in code generically with Page.Form. For example, this: ToggleControl ctrl = new ToggleControl(frmManualEntry, FunctionLibrary.ParseArrayLst(userObj.Roles)); Becomes this: ToggleControl ctrl = new ToggleControl(Page.Form, FunctionLibrary.ParseArrayLst(userObj.Roles)); Generally the form id is referenced in most ASP.NET applications as a path to a control on the form. To reach the control once a MasterPage has been added requires an additional method to recurse through the controls collections within the form and find the control ID. The following method (found at Rick Strahl's Web Log) corrects this very nicely: public static Control FindControlRecursive(Control Root, string Id) { if (Root.ID == Id) return Root; foreach (Control Ctl in Root.Controls) { Control FoundCtl = FindControlRecursive(Ctl, Id); if (FoundCtl != null) return FoundCtl; } return null; } Where the form name is not referenced, simply using the FindControlRecursive method in place of FindControl will be all that is necessary. 
Following the second part of the example referenced earlier, the method called with Page.Form changes its value extraction code block from this: Label lblErrMsg = (Label)frmRef.FindControl("lblBRMsg" To this: Label lblErrMsg = (Label) FunctionLibrary.FindControlRecursive(frmRef, "lblBRMsg" The Master That Won’t Step Aside In most migrations it is preferable to make as few changes as possible. In one case I ran across an existing Master Page that would not function as a sub-Master Page. While it would probably have been educational to trace down why, the expedient process of updating it to take the place of the WSRP Master Page is the route I took. The changes are highlighted below: … <asp:ContentPlaceHolder ID="wsrphead" runat="server"></asp:ContentPlaceHolder> </head> <body leftMargin="0" topMargin="0"> <form id="TheForm" runat="server"> <input type="hidden" name="key" id="key" value="" /> <input type="hidden" name="formactionurl" id="formactionurl" value="" /> <input type="hidden" name="handle" id="handle" value="" /> <asp:ScriptManager ID="ScriptManager1" runat="server" EnablePartialRendering="true" > </asp:ScriptManager> This approach did not work for all existing Master Pages, but fortunately all of the other existing Master Pages I have run across worked fine as a sub-Master to the WSRP Master Page. Moving On In Enterprise Portals, even after you get everything working, the work is not finished. Next you need to get it where everyone will work with it. Migration Planning Providing that the server where IIS is running is adequately sized, it is possible to run both the .NET Accelerator and the WSRP Producer on the same server during the upgrade process. The upgrade can be performed incrementally, i.e., one portlet at a time, if server administration processes support it. Those processes would include the ability to manage a second producer in the consuming portal and to change over individual portlet instances from one provider to the other. If processes or requirements demand that all portlets be cut over at the same time, it needs to be determined if this cut over should include a new producer, updating all of the portlets in the consumer, or if the WSRP Producer portlet configuration must maintain the naming conventions used by the .NET Accelerator and simply change the WSRP end point configured in the consumer. In some enterprises it may even be necessary to maintain the same WSDL end point, at which point the IIS configuration will be where the updates occur. The downside to such a requirement is that it makes rolling back very difficult, should the need arise. Location, Location, Location Not everyone wants the web application to have the descriptively obvious wsrpdefault location, or needs to create a second WSRP site on the same server. The instructions below are from the product team and, while targeted towards making a second site, will work for creating a site with a different name and then remove the old site. You can also change just the name in IIS. Manually Creating a WSRP Producer Site Instructions (NOTE: all executables used are the same ones used by the installer and “wsrpdev” will be the name of the new instance): 1. Copy C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault to C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev. 2. Bring up a command window as an administrator 3. 
Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\IISAppAccelSiteCreator.exe install WSRPProducers wsrpdev "C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev" 8678 2.0.50727 4. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\PermManage.exe add FileSystem C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev "NETWORK SERVICE" 3 1 5. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\PermManage.exe add FileSystem C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev EVERYONE 1 1 6. Open up C:\Oracle\Middleware\WSRPProducerForDotNet\wsdl\1.0\WSRPService.wsdl and replace wsrpdefault with wsrpdev 7. Open up C:\Oracle\Middleware\WSRPProducerForDotNet\wsdl\2.0\WSRPService.wsdl and replace wsrpdefault with wsrpdev Tests: 1. Bring up a browser on the host itself and go to http://localhost:8678/wsrpdev/wsdl/1.0/WSRPService.wsdl and make sure that the URLs in the XML returned include the wsrpdev changes you made in step 6. 2. Bring up a browser on the host itself and see if the default sample comes up: http://localhost:8678/wsrpdev/portlets/ASPNET_AJAX_sample/default.aspx 3. Register the producer in WLP and test the portlet. Changing the Port used by WSRP Producer The pre-configured port for the WSRP Producer is 8678. You can change this port by updating both the IIS configuration and C:\Oracle\Middleware\WSRPProducerForDotNet\[WSRP_APP_NAME]\wsdl\1.0\WSRPService.wsdl. Do You Need to Migrate? Oracle Premier Support ended in November of 2010 for AquaLogic Interaction .NET Application Accelerator 1.x and Extended Support ends in November 2012 (see http://www.oracle.com/us/support/lifetime-support/lifetime-support-software-342730.html for other related dates). This means that integration with products released after November of 2010 is not supported. If having such support is the policy within your enterprise, you do indeed need to migrate. If changes in your enterprise cause your current solution with the .NET Accelerator to no longer function properly, you may need to migrate. Migration is a choice, and if the goals of your enterprise are to take full advantage of newer technologies then migration is certainly one activity you should be planning for.
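
Looking back at the "Watch for Use of Session or Request in OnInit" note earlier in this entry, the following is a minimal sketch of the workaround described there. The page name, the "User" session key, and the redirect target are hypothetical placeholders, not code from the migrated applications; the point is only that logic which assumed Session was already populated in OnInit moves into its own method, called at the start of Page_Load.

public partial class ReportScreen : System.Web.UI.Page
{
    protected override void OnInit(EventArgs e)
    {
        base.OnInit(e);
        // Previously this read Session["User"] here, which may not exist yet
        // when the page is rendered through the WSRP Producer.
    }

    protected void Page_Load(object sender, EventArgs e)
    {
        InitializeFromSession();   // Session and Request are reliably available here
        // ... the rest of the original Page_Load logic ...
    }

    private void InitializeFromSession()
    {
        var user = Session["User"];            // placeholder key for the example
        if (user == null)
        {
            Response.Redirect("~/Login.aspx"); // placeholder handling
        }
    }
}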

    Read the article

  • Use Extension method to write cleaner code

    - by Fredrik N
This blog post will show you, step by step, how to refactor some code to be more readable (at least that is what I think). Patrik Löwnedahl gave me some of the ideas when we were talking about making code much cleaner. The following is a simple application that will have a list of movies (Normal and Transfer). The task of the application is to calculate the total sum for each kind of movie and also display the price of each movie. class Program { enum MovieType { Normal, Transfer } static void Main(string[] args) { var movies = GetMovies(); int totalPriceOfNormalMovie = 0; int totalPriceOfTransferMovie = 0; foreach (var movie in movies) { if (movie == MovieType.Normal) { totalPriceOfNormalMovie += 2; Console.WriteLine("$2"); } else if (movie == MovieType.Transfer) { totalPriceOfTransferMovie += 3; Console.WriteLine("$3"); } } } private static IEnumerable<MovieType> GetMovies() { return new List<MovieType>() { MovieType.Normal, MovieType.Transfer, MovieType.Normal }; } } In the code above I’m using an enum, a good way to add types (isn’t it ;)). I also use one foreach loop to calculate the price; the loop has a condition statement to check what kind of movie was added to the list of movies. I want to reuse the foreach only to increase performance and let it do two things (isn’t that smart of me?! ;)). First of all I can admit, I’m not a big fan of enums. Enums often result in ugly condition statements and can be hard to maintain (if a new type is added we need to check all the code in our app to see if we use the enum somewhere else). I don’t often care about pre-optimizations when it comes to writing code (of course I have performance in mind). I would rather use two foreach loops and let each do one thing instead of two. So based on what I don’t like, and Martin Fowler’s Refactoring catalog, I’m going to refactor this code into what I will call more elegant and cleaner code. First of all I’m going to use Split Loop to make sure each foreach does one thing, not two; this will result in two foreach loops. (Don’t care about performance here; if it results in bad performance you can refactor later, but computers are so fast today that iterating through a list is not often so time consuming.) Note: the foreach actually does four things; I will come to this later. 
var movies = GetMovies(); int totalPriceOfNormalMovie = 0; int totalPriceOfTransferMovie = 0; foreach (var movie in movies) { if (movie == MovieType.Normal) { totalPriceOfNormalMovie += 2; Console.WriteLine("$2"); } } foreach (var movie in movies) { if (movie == MovieType.Transfer) { totalPriceOfTransferMovie += 3; Console.WriteLine("$3"); } } To remove the condition statement we can use the Where extension method, which is added to IEnumerable<T> and located in the System.Linq namespace: foreach (var movie in movies.Where( m => m == MovieType.Normal)) { totalPriceOfNormalMovie += 2; Console.WriteLine("$2"); } foreach (var movie in movies.Where( m => m == MovieType.Transfer)) { totalPriceOfTransferMovie += 3; Console.WriteLine("$3"); } The above code will still do two things: calculate the total price, and display the price of the movie. I will not take care of that at the moment; instead I will focus on the enum and try to remove it. One way to remove an enum is by using Replace Conditional with Polymorphism. So I will create two classes, one base class called Movie, and one called MovieTransfer. 
The Movie class will have a property called Price; the Movie itself will now hold the price: public class Movie { public virtual int Price { get { return 2; } } } public class MovieTransfer : Movie { public override int Price { get { return 3; } } } The following code has no enum and will use the new Movie classes instead: class Program { static void Main(string[] args) { var movies = GetMovies(); int totalPriceOfNormalMovie = 0; int totalPriceOfTransferMovie = 0; foreach (var movie in movies.Where( m => m is Movie)) { totalPriceOfNormalMovie += movie.Price; Console.WriteLine(movie.Price); } foreach (var movie in movies.Where( m => m is MovieTransfer)) { totalPriceOfTransferMovie += movie.Price; Console.WriteLine(movie.Price); } } private static IEnumerable<Movie> GetMovies() { return new List<Movie>() { new Movie(), new MovieTransfer(), new Movie() }; } } If you take a look at the foreach now, you can see it still actually does two things: calculate the price and display the price. 
We can do some more refactoring here by using the Sum extension method to calculate the total price of the movies: static void Main(string[] args) { var movies = GetMovies(); int totalPriceOfNormalMovie = movies.Where(m => m is Movie) .Sum(m => m.Price); int totalPriceOfTransferMovie = movies.Where(m => m is MovieTransfer) .Sum(m => m.Price); foreach (var movie in movies.Where( m => m is Movie)) Console.WriteLine(movie.Price); foreach (var movie in movies.Where( m => m is MovieTransfer)) Console.WriteLine(movie.Price); } Now that the Movie object holds the price, there is no need to use two separate foreach loops to display the price of the movies in the list, so we can use only one instead: foreach (var movie in movies) Console.WriteLine(movie.Price); If we want to increase the Maintainability Index we can use Extract Method to move the Sum of the prices into two separate methods. 
The name of each method will explain what we are doing: static void Main(string[] args) { var movies = GetMovies(); int totalPriceOfMovie = TotalPriceOfMovie(movies); int totalPriceOfTransferMovie = TotalPriceOfMovieTransfer(movies); foreach (var movie in movies) Console.WriteLine(movie.Price); } private static int TotalPriceOfMovieTransfer(IEnumerable<Movie> movies) { return movies.Where(m => m is MovieTransfer) .Sum(m => m.Price); } private static int TotalPriceOfMovie(IEnumerable<Movie> movies) { return movies.Where(m => m is Movie) .Sum(m => m.Price); } Now to the last thing: I love the ForEach method on List<T>, but IEnumerable<T> doesn’t have it, so I created my own ForEach extension. Here is the code of the ForEach extension method (Contract.Requires comes from the System.Diagnostics.Contracts namespace): public static class LoopExtensions { public static void ForEach<T>(this IEnumerable<T> values, Action<T> action) { Contract.Requires(values != null); Contract.Requires(action != null); foreach (var v in values) action(v); } } I will now replace the foreach by using this ForEach method: static void Main(string[] args) { var movies = GetMovies(); int totalPriceOfMovie = TotalPriceOfMovie(movies); int totalPriceOfTransferMovie = TotalPriceOfMovieTransfer(movies); movies.ForEach(m => Console.WriteLine(m.Price)); } The ForEach on the movies will now display the price of the movie, but maybe we want to display the name of the movie etc., so we can use Extract Method by moving the lambda expression into a method instead, and let the method explain what we are displaying: movies.ForEach(DisplayMovieInfo); private static void DisplayMovieInfo(Movie movie) { Console.WriteLine(movie.Price); } 
Now the refactoring is done! Here is the complete code: class Program { static void Main(string[] args) { var movies = GetMovies(); int totalPriceOfMovie = TotalPriceOfMovie(movies); int totalPriceOfTransferMovie = TotalPriceOfMovieTransfer(movies); movies.ForEach(DisplayMovieInfo); } private static void DisplayMovieInfo(Movie movie) { Console.WriteLine(movie.Price); } private static int TotalPriceOfMovieTransfer(IEnumerable<Movie> movies) { return movies.Where(m => m is MovieTransfer) .Sum(m => m.Price); } private static int TotalPriceOfMovie(IEnumerable<Movie> movies) { return movies.Where(m => m is Movie) .Sum(m => m.Price); } private static IEnumerable<Movie> GetMovies() { return new List<Movie>() { new Movie(), new MovieTransfer(), new Movie() }; } } public class Movie { public virtual int Price { get { return 2; } } } public class MovieTransfer : Movie { public override int Price { get { return 3; } } } public static class LoopExtensions { public static void ForEach<T>(this IEnumerable<T> values, Action<T> action) { Contract.Requires(values != null); Contract.Requires(action != null); foreach (var v in values) action(v); } } I think the new code is much cleaner than the first one, and I love the ForEach extension on IEnumerable<T>; I can use it for different kinds of things, for example: movies.Where(m => m is Movie) .ForEach(DoSomething); By using the Where and ForEach extension methods, some if statements can be removed, which will make the code much cleaner. But beauty is in the eye of the beholder. 
What would you have done differently? What do you think would make the first example in the blog post look much cleaner than my result? Comments are welcome! If you want to know when I publish a new blog post, you can follow me on Twitter: http://www.twitter.com/fredrikn
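
One small observation to go with the author's closing question, offered as a suggestion rather than a correction: because MovieTransfer derives from Movie, the filter m => m is Movie also matches the transfer movies, so TotalPriceOfMovie sums every movie in the list rather than only the "normal" ones, whereas the original enum version kept the two totals separate. If that original behavior is wanted, an exact type check, or OfType<T> for the derived class, keeps the sums apart:

private static int TotalPriceOfMovie(IEnumerable<Movie> movies)
{
    // GetType() == typeof(Movie) matches only base Movie instances,
    // not subclasses such as MovieTransfer.
    return movies.Where(m => m.GetType() == typeof(Movie))
                 .Sum(m => m.Price);
}

private static int TotalPriceOfMovieTransfer(IEnumerable<Movie> movies)
{
    // OfType<T> filters by runtime type, so only MovieTransfer instances are summed.
    return movies.OfType<MovieTransfer>()
                 .Sum(m => m.Price);
}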

    Read the article

  • ASP.NET Frameworks and Raw Throughput Performance

    - by Rick Strahl
    A few days ago I had a curious thought: With all these different technologies that the ASP.NET stack has to offer, what's the most efficient technology overall to return data for a server request? When I started this it was mere curiosity rather than a real practical need or result. Different tools are used for different problems and so performance differences are to be expected. But still I was curious to see how the various technologies performed relative to each just for raw throughput of the request getting to the endpoint and back out to the client with as little processing in the actual endpoint logic as possible (aka Hello World!). I want to clarify that this is merely an informal test for my own curiosity and I'm sharing the results and process here because I thought it was interesting. It's been a long while since I've done any sort of perf testing on ASP.NET, mainly because I've not had extremely heavy load requirements and because overall ASP.NET performs very well even for fairly high loads so that often it's not that critical to test load performance. This post is not meant to make a point  or even come to a conclusion which tech is better, but just to act as a reference to help understand some of the differences in perf and give a starting point to play around with this yourself. I've included the code for this simple project, so you can play with it and maybe add a few additional tests for different things if you like. Source Code on GitHub I looked at this data for these technologies: ASP.NET Web API ASP.NET MVC WebForms ASP.NET WebPages ASMX AJAX Services  (couldn't get AJAX/JSON to run on IIS8 ) WCF Rest Raw ASP.NET HttpHandlers It's quite a mixed bag, of course and the technologies target different types of development. What started out as mere curiosity turned into a bit of a head scratcher as the results were sometimes surprising. What I describe here is more to satisfy my curiosity more than anything and I thought it interesting enough to discuss on the blog :-) First test: Raw Throughput The first thing I did is test raw throughput for the various technologies. This is the least practical test of course since you're unlikely to ever create the equivalent of a 'Hello World' request in a real life application. The idea here is to measure how much time a 'NOP' request takes to return data to the client. So for this request I create the simplest Hello World request that I could come up for each tech. Http Handler The first is the lowest level approach which is an HTTP handler. public class Handler : IHttpHandler { public void ProcessRequest(HttpContext context) { context.Response.ContentType = "text/plain"; context.Response.Write("Hello World. Time is: " + DateTime.Now.ToString()); } public bool IsReusable { get { return true; } } } WebForms Next I added a couple of ASPX pages - one using CodeBehind and one using only a markup page. The CodeBehind page simple does this in CodeBehind without any markup in the ASPX page: public partial class HelloWorld_CodeBehind : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { Response.Write("Hello World. Time is: " + DateTime.Now.ToString() ); Response.End(); } } while the Markup page only contains some static output via an expression:<%@ Page Language="C#" AutoEventWireup="false" CodeBehind="HelloWorld_Markup.aspx.cs" Inherits="AspNetFrameworksPerformance.HelloWorld_Markup" %> Hello World. Time is <%= DateTime.Now %> ASP.NET WebPages WebPages is the freestanding Razor implementation of ASP.NET. 
ASP.NET AJAX (ASMX Services)

I also wanted to test ASP.NET AJAX services because, even predating WebAPI, it's probably still the most widely used AJAX technology on the ASP.NET stack today. Unfortunately I was completely unable to get this running on my Windows 8 machine. Visual Studio 2012 removed the option for adding ASP.NET AJAX services, and when I tried to manually add the service and configure the script handler references it simply did not work - I always got a SOAP response for GET and POST operations. No matter what I tried I always ended up getting XML results even when explicitly adding the ScriptHandler. So I didn't test this (but the code is there - you might be able to test it on a Windows 7 box).
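For reference, the script-service pattern I was trying to exercise typically looks something like the sketch below. The class and method names here are illustrative rather than copied from the sample project, and it assumes the ScriptHandlerFactory is registered in web.config as ASP.NET AJAX normally requires:

using System;
using System.Web.Script.Services;
using System.Web.Services;

[WebService(Namespace = "http://tempuri.org/")]
[WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
[ScriptService]   // without this attribute the ASMX endpoint falls back to SOAP/XML
public class AjaxService : WebService
{
    [WebMethod]
    [ScriptMethod(ResponseFormat = ResponseFormat.Json, UseHttpGet = true)]
    public string HelloWorld()
    {
        return "Hello World. Time is: " + DateTime.Now.ToString();
    }
}

In theory a GET against AjaxService.asmx/HelloWorld should then come back as JSON - which is exactly the behavior I couldn't coax out of IIS 8.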
ASP.NET MVC

Next up is probably the most popular ASP.NET technology at the moment: MVC. Here's the small controller:

public class MvcPerformanceController : Controller
{
    public ActionResult Index()
    {
        return View();
    }

    public ActionResult HelloWorldCode()
    {
        return new ContentResult()
        {
            Content = "Hello World. Time is: " + DateTime.Now.ToString()
        };
    }
}

ASP.NET WebAPI

Next up is WebAPI, which looks kind of similar to MVC. Except here I have to use a StringContent result to return the response:

public class WebApiPerformanceController : ApiController
{
    [HttpGet]
    public HttpResponseMessage HelloWorldCode()
    {
        return new HttpResponseMessage()
        {
            Content = new StringContent("Hello World. Time is: " + DateTime.Now.ToString(),
                                        Encoding.UTF8, "text/plain")
        };
    }
}

Testing

Take a minute to think about each of the technologies… and take a guess which you think is most efficient in raw throughput. The fastest should be pretty obvious, but the others - maybe not so much.

The testing I did is pretty informal since it was mainly to satisfy my curiosity - here's how I did it: I used Apache Bench (ab.exe) from a full Apache HTTP installation to run and log the test results of hitting the server. ab.exe is a small executable that lets you hit a URL repeatedly and provides counter information about the number of requests, requests per second and so on. ab.exe and the batch file are located in the \LoadTests folder of the project. An ab.exe command line looks like this:

ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorld

which hits the specified URL 100,000 times with a load factor of 20 concurrent requests and produces a summary report of request counts, timings and failures. It's a great way to get a quick and dirty performance summary. Run it a few times to make sure there's not a large amount of variance. You might also want to do an IISRESET to clear the web server, and make sure you do a short test run to warm up the server first - otherwise your first run is likely to be skewed downwards. ab.exe also allows you to specify headers, provide POST data and many other things if you want to get a little more fancy. Here all tests are GET requests to keep it simple.

I ran each test with:

- 100,000 iterations
- a load factor of 20 concurrent connections
- an IISRESET before starting
- a short warm-up run for API and MVC to make sure startup cost is mitigated

Here is the batch file I used for the test:

IISRESET

REM make sure you add
REM C:\Program Files (x86)\Apache Software Foundation\Apache2.2\bin
REM to your path so ab.exe can be found

REM Warm up
ab.exe -n100 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldJson
ab.exe -n100 -c20 http://localhost/aspnetperf/api/HelloWorldJson
ab.exe -n100 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorld

ab.exe -n100000 -c20 http://localhost/aspnetperf/handler.ashx > handler.txt
ab.exe -n100000 -c20 http://localhost/aspnetperf/HelloWorld_CodeBehind.aspx > AspxCodeBehind.txt
ab.exe -n100000 -c20 http://localhost/aspnetperf/HelloWorld_Markup.aspx > AspxMarkup.txt
ab.exe -n100000 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorld > Wcf.txt
ab.exe -n100000 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldCode > Mvc.txt
ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorld > WebApi.txt

I ran each of these tests 3 times with the machine otherwise idle. I did see a bit of variance when running many tests, so the Requests/second values used here are the medians of those runs. Part of this has to do with the fact that I ran the tests on my local machine - results would probably be more consistent running the load test from a separate machine across the network.
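To avoid collating those numbers by hand, a small helper along these lines can scrape the "Requests per second" line out of the saved ab.exe output files and compute the median per technology. This is purely an illustration - the file naming convention (one .txt file per run, prefixed by the technology name) is my own assumption, not something the sample project ships with:

using System;
using System.Globalization;
using System.IO;
using System.Linq;
using System.Text.RegularExpressions;

class AbResultSummary
{
    // Matches the "Requests per second:    6455.65 [#/sec] (mean)" line in ab.exe output
    static readonly Regex RpsLine = new Regex(@"Requests per second:\s+([\d.]+)");

    static void Main()
    {
        // Group result files by technology name, e.g. WebApi.run1.txt, WebApi.run2.txt, ...
        var groups = Directory.GetFiles(@".\LoadTests", "*.txt")
                              .GroupBy(f => Path.GetFileName(f).Split('.')[0]);

        foreach (var tech in groups)
        {
            var values = tech.Select(file => RpsLine.Match(File.ReadAllText(file)))
                             .Where(m => m.Success)
                             .Select(m => double.Parse(m.Groups[1].Value, CultureInfo.InvariantCulture))
                             .OrderBy(v => v)
                             .ToList();

            if (values.Count == 0)
                continue;

            // Report the median of the recorded runs for this technology
            Console.WriteLine("{0,-20} {1:n1} req/sec (median of {2} runs)",
                              tech.Key, values[values.Count / 2], values.Count);
        }
    }
}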
I ran these tests locally on my laptop, a Dell XPS with a quad-core Sandy Bridge i7-2720QM @ 2.20GHz and a fast SSD drive, running Windows 8. CPU load during the tests ran to about 70% max across all 4 cores (IOW, it wasn't overloading the machine). Ideally you'd run these tests from a separate machine hitting this one. If I remember correctly, IIS 7 and 8 on client OSs don't throttle, so the performance here shouldn't be artificially limited.

Results

Ok, let's cut straight to the chase. Below are the results from the tests. It's not surprising that the handler was fastest. But it was a bit surprising to me that the next fastest was WebForms, and especially WebForms with a markup page over a CodeBehind page. WebPages also fared fairly well. MVC and WebAPI are a little slower, and the slowest by far is WCF REST (which again I find surprising).

As mentioned at the start, the raw throughput tests are not overly practical, as they don't test scripting performance for the HTML generation engines or the serialization performance of the data engines. All they really do is give you an idea of the raw throughput for the technology - from the time of the request to reaching the endpoint and returning minimal text data back to the client - which indicates full round-trip performance. But it's still interesting to see that WebForms performs better in throughput than MVC, WebAPI or WebPages. It'd be interesting to try this with a few pages that actually have some parsing logic in them, but that's beyond the scope of this throughput test.

But what's also amazing about this test is the sheer amount of traffic that a laptop computer is handling. Even the slowest tech managed 5700 requests a second, which is one hell of a lot of requests if you extrapolate that out over a 24 hour period. Remember these are not static pages, but dynamic requests that are being served.

Another Test - JSON Data Service Results

For the second test I used a JSON result from several of the technologies. I didn't bother running WebForms and WebPages through this test since it doesn't make a ton of sense to return data from them (OTOH, returning plain text from the APIs didn't make a ton of sense either :-) In these tests I have a small Person class that gets serialized and then returned to the client. The Person class looks like this:

public class Person
{
    public Person()
    {
        Id = 10;
        Name = "Rick";
        Entered = DateTime.Now;
    }

    public int Id { get; set; }
    public string Name { get; set; }
    public DateTime Entered { get; set; }
}

Here are the updated handler classes that use Person:

Handler

public class Handler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        var action = context.Request.QueryString["action"];
        if (action == "json")
            JsonRequest(context);
        else
            TextRequest(context);
    }

    public void TextRequest(HttpContext context)
    {
        context.Response.ContentType = "text/plain";
        context.Response.Write("Hello World. Time is: " + DateTime.Now.ToString());
    }

    public void JsonRequest(HttpContext context)
    {
        var json = JsonConvert.SerializeObject(new Person(), Formatting.None);
        context.Response.ContentType = "application/json";
        context.Response.Write(json);
    }

    public bool IsReusable
    {
        get { return true; }
    }
}

This code adds a little logic to check for an action query string and route the request to an optional JSON result method. To generate JSON, I'm using the same JSON.NET serializer (JsonConvert.SerializeObject) used in Web API to create the JSON response.

WCF REST

[ServiceContract(Namespace = "")]
[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
public class WcfService
{
    [OperationContract]
    [WebGet]
    public Stream HelloWorld()
    {
        var data = Encoding.Unicode.GetBytes("Hello World " + DateTime.Now.ToString());
        var ms = new MemoryStream(data);

        // Add your operation implementation here
        return ms;
    }

    [OperationContract]
    [WebGet(ResponseFormat = WebMessageFormat.Json, BodyStyle = WebMessageBodyStyle.WrappedRequest)]
    public Person HelloWorldJson()
    {
        // Add your operation implementation here
        return new Person();
    }
}

For WCF REST, all I have to do is add a method with the Person result type.

ASP.NET MVC

public class MvcPerformanceController : Controller
{
    //
    // GET: /MvcPerformance/
    public ActionResult Index()
    {
        return View();
    }

    public ActionResult HelloWorldCode()
    {
        return new ContentResult()
        {
            Content = "Hello World. Time is: " + DateTime.Now.ToString()
        };
    }

    public JsonResult HelloWorldJson()
    {
        return Json(new Person(), JsonRequestBehavior.AllowGet);
    }
}

For MVC all I have to do for a JSON response is return a JsonResult, which ASP.NET MVC serializes internally with the JavaScriptSerializer.
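That does mean the serializers aren't identical across the tests - the handler and Web API go through JSON.NET while MVC's JsonResult uses the JavaScriptSerializer. If you wanted to level that particular playing field, one common approach (not something this sample project does) is a tiny custom ActionResult that serializes with JSON.NET instead; the JsonNetResult name and the usage shown are hypothetical:

using System.Text;
using System.Web.Mvc;
using Newtonsoft.Json;

// Hypothetical JSON.NET-backed result, for an apples-to-apples serializer comparison
public class JsonNetResult : ActionResult
{
    private readonly object _data;

    public JsonNetResult(object data)
    {
        _data = data;
    }

    public override void ExecuteResult(ControllerContext context)
    {
        var response = context.HttpContext.Response;
        response.ContentType = "application/json";
        response.ContentEncoding = Encoding.UTF8;
        response.Write(JsonConvert.SerializeObject(_data, Formatting.None));
    }
}

// Usage inside the controller:
// public ActionResult HelloWorldJsonNet()
// {
//     return new JsonNetResult(new Person());
// }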
ASP.NET WebAPI

public class WebApiPerformanceController : ApiController
{
    [HttpGet]
    public HttpResponseMessage HelloWorldCode()
    {
        return new HttpResponseMessage()
        {
            Content = new StringContent("Hello World. Time is: " + DateTime.Now.ToString(),
                                        Encoding.UTF8, "text/plain")
        };
    }

    [HttpGet]
    public Person HelloWorldJson()
    {
        return new Person();
    }

    [HttpGet]
    public HttpResponseMessage HelloWorldJson2()
    {
        var response = new HttpResponseMessage(HttpStatusCode.OK);
        response.Content = new ObjectContent<Person>(new Person(),
                               GlobalConfiguration.Configuration.Formatters.JsonFormatter);
        return response;
    }
}

Testing and Results

To run these data requests I used the following ab.exe commands:

REM JSON RESPONSES
ab.exe -n100000 -c20 http://localhost/aspnetperf/Handler.ashx?action=json > HandlerJson.txt
ab.exe -n100000 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldJson > MvcJson.txt
ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorldJson > WebApiJson.txt
ab.exe -n100000 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorldJson > WcfJson.txt

The results from this test run are a bit interesting in that the WebAPI test improved performance significantly over returning plain string content. The performance for each technology drops a little bit, except for WebAPI which is up quite a bit! From this test it appears that WebAPI actually performs significantly better returning a JSON response rather than a plain string response.

Snag with Apache Benchmark and 'Length Failures'

I ran into a little snag with Apache Benchmark, which was reporting failures for my Web API requests when serializing. As the numbers show, performance with JSON results improved significantly, from 5580 to 6530 requests a second or so, which is about a 15% improvement (while all the others slowed down by 3-8%). However, I was skeptical at first because the WebAPI test reports showed a bunch of errors on about 10% of the requests - the Failed Request count in the report was non-zero. What the hey? Is WebAPI failing on roughly 10% of requests when sending JSON?

Turns out: no, it's not! But it took some sleuthing to figure out why it reports these failures. At first I thought that Web API was failing, so to make sure I re-ran the test with Fiddler attached, running the ab.exe test through the proxy with the -X switch:

ab.exe -n100 -c10 -X localhost:8888 http://localhost/aspnetperf/api/HelloWorldJson

which showed that indeed all requests were returning proper HTTP 200 results with full content. However, ab.exe was still reporting the errors. After some closer inspection it turned out that the varying size of the serialized dates altered the response length of the dynamic output. For example, these two results:

{"Id":10,"Name":"Rick","Entered":"2012-09-04T10:57:24.841926-10:00"}
{"Id":10,"Name":"Rick","Entered":"2012-09-04T10:57:24.8519262-10:00"}

differ in the length of the fractional seconds, which results in 68 and 69 bytes respectively. The same URL produces different result lengths, and that's what ab.exe reports as a failure. I didn't notice it at first, but the same thing happens when running the ASHX handler with the JSON.NET result, since it uses the same serializer and the milliseconds vary there too. Moral: you can typically ignore Length failures in Apache Benchmark, and when in doubt check the actual output with Fiddler. Note that the other failure values are accurate though.
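You can reproduce the length variance without ab.exe at all. Here's a throwaway sketch - just an illustration, not part of the test project - that serializes the same shape of object a few times with JSON.NET and prints the character count, which drifts as the timestamp's fractional-second digits change:

using System;
using System.Threading;
using Newtonsoft.Json;

class LengthVarianceDemo
{
    static void Main()
    {
        for (int i = 0; i < 5; i++)
        {
            // Same object shape every time; only the timestamp changes,
            // so the serialized length can differ by a character or two.
            var json = JsonConvert.SerializeObject(
                new { Id = 10, Name = "Rick", Entered = DateTime.Now },
                Formatting.None);

            Console.WriteLine("{0} chars: {1}", json.Length, json);
            Thread.Sleep(7);   // let the clock tick between iterations
        }
    }
}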
Another Interesting Side Note: Perf Drops over Time

As I was running these tests repeatedly I found that performance steadily dropped from a startup peak to a 10-15% lower stable level. IOW, with Web API I'd start out at around 6500 req/sec and in subsequent runs it kept dropping until it would stabilize somewhere around 5900 req/sec, occasionally jumping lower. For these tests this is why I did the IISRESET and warm-up for the individual tests.

This is a little puzzling. Looking at Process Monitor while the tests are running, memory very quickly levels out, as do handles and threads, on the first test run. On subsequent runs everything stays stable, but performance starts going downwards. This applies to all the technologies - handlers, WebForms, MVC, Web API - and I'm curious to see whether others who test this see similar results. Doing an IISRESET then resets everything and performance starts off at peak again…

Summary

As I stated at the outset, these were informal tests to satiate my curiosity, not to prove that any technology is better or even faster than another. While there clearly are differences in performance, the differences (other than WCF REST, which was by far the slowest, and the raw handler, which was by far the fastest) are relatively minor, so there is no need to feel that any one technology is a runaway standout in raw performance. Choosing a technology is about more than pure performance - it's also about suitability for the job and ease of implementation. The strengths of each technology make up for any minor performance difference we see in these tests.

However, to me it's important to get an occasional reality check and compare where new technologies are heading. Oftentimes old stuff that's been optimized and designed for a time of less horsepower can utterly blow the doors off newer tech, and simple checks like this let you compare. Luckily we're seeing that much of the new stuff performs well even in V1.0, which is great.

To me it was very interesting to see Web API perform relatively badly with plain string content, which originally led me to think that Web API might not be properly optimized just yet. For those that caught my tweets late last week regarding WebAPI's slow responses: that was with string content, which is in fact considerably slower. Luckily, where it counts - with serialized JSON and XML - WebAPI actually performs better. But I do wonder what would make generic string content slower than serialized JSON output? This stresses another point: don't take a single test as the final gospel, and don't extrapolate out from a single set of tests. Certainly Twitter can make you feel like a fool when you post something immediately that hasn't been fleshed out a little more <blush>. Egg on my face. As a result I ended up screwing around with this for a few hours today to compare different scenarios. Well worth the time…

I hope you found this useful - if not for the results, maybe for the process of quickly testing a few requests for performance and charting out a comparison. Now onwards with more serious stuff…

Resources

Source Code on GitHub
Apache HTTP Server Project (ab.exe is part of the binary distribution)

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in ASP.NET, Web Api

