Search Results

Search found 7891 results on 316 pages for 'multi layer perceptron'.


  • Entity Framework Code-First, OData & Windows Phone Client

    - by Jon Galloway
    Entity Framework Code-First is the coolest thing since sliced bread, Windows Phone is the hottest thing since Tickle-Me-Elmo, and OData is just too great to ignore. As part of the Full Stack project, we wanted to put them together, which turns out to be pretty easy… once you know how.

    EF Code-First CTP5 is available now and there should be very few breaking changes in the release edition, which is due early in 2011. Note: EF Code-First evolved rapidly, and many of the existing documents and blog posts written against earlier versions may now be obsolete or at least misleading.

    Code-First?

    With traditional Entity Framework you start with a database, and from that you generate "entities" – classes that bridge between the relational database and your object-oriented program. With Code-First (Magic-Unicorn) (see Hanselman's write up and this later write up by Scott Guthrie) the Entity Framework looks at the classes you created and says "if I had created these classes, the database would have to have looked like this…" and creates the database for you! By deriving your entity collections from DbSet and exposing them via a class that derives from DbContext, you "turn on" database backing for your POCOs with a minimum of code and no hidden designer or configuration files. POCO == Plain Old CLR Objects.

    Your entity objects can be used throughout your applications - in web applications, console applications, Silverlight and Windows Phone applications, etc. In our case, we'll want to read and update data from a Windows Phone client application, so we'll expose the entities through a DataService and hook the Windows Phone client application to that data via proxies. Piece of pie. Easy as cake.

    The Demo Architecture

    To see this at work, we'll create an ASP.NET MVC application which will act as the host for our Data Service. We'll create an incredibly simple data layer using EF Code-First on top of SQL CE 4, and we'll expose the data in a WCF Data Service using the OData protocol. Our Windows Phone 7 client will instantiate the data context via a URI and load the data asynchronously.

    Setting up the Server project with MVC 3, EF Code First, and SQL CE 4

    Create a new application of type ASP.NET MVC 3 and name it DeadSimpleServer. We need to add the latest SQL CE 4 and Entity Framework Code First CTPs to our project. Fortunately, NuGet makes that really easy. Open the Package Manager Console (View / Other Windows / Package Manager Console) and type in "Install-Package EFCodeFirst.SqlServerCompact" at the PM> command prompt. Since NuGet handles dependencies for you, you'll see that it installs everything you need to use Entity Framework Code First in your project.

        PM> install-package EFCodeFirst.SqlServerCompact
        'SQLCE (= 4.0.8435.1)' not installed. Attempting to retrieve dependency from source... Done
        'EFCodeFirst (= 0.8)' not installed. Attempting to retrieve dependency from source... Done
        'WebActivator (= 1.0.0.0)' not installed. Attempting to retrieve dependency from source... Done
        You are downloading SQLCE from Microsoft, the license agreement to which is available at http://173.203.67.148/licenses/SQLCE/EULA_ENU.rtf. Check the package for additional dependencies, which may come with their own license agreement(s). Your use of the package and dependencies constitutes your acceptance of their license agreements. If you do not accept the license agreement(s), then delete the relevant components from your device.
        Successfully installed 'SQLCE 4.0.8435.1'
        You are downloading EFCodeFirst from Microsoft, the license agreement to which is available at http://go.microsoft.com/fwlink/?LinkID=206497. Check the package for additional dependencies, which may come with their own license agreement(s). Your use of the package and dependencies constitutes your acceptance of their license agreements. If you do not accept the license agreement(s), then delete the relevant components from your device.
        Successfully installed 'EFCodeFirst 0.8'
        Successfully installed 'WebActivator 1.0.0.0'
        You are downloading EFCodeFirst.SqlServerCompact from Microsoft, the license agreement to which is available at http://173.203.67.148/licenses/SQLCE/EULA_ENU.rtf. Check the package for additional dependencies, which may come with their own license agreement(s). Your use of the package and dependencies constitutes your acceptance of their license agreements. If you do not accept the license agreement(s), then delete the relevant components from your device.
        Successfully installed 'EFCodeFirst.SqlServerCompact 0.8'
        Successfully added 'SQLCE 4.0.8435.1' to EfCodeFirst-CTP5
        Successfully added 'EFCodeFirst 0.8' to EfCodeFirst-CTP5
        Successfully added 'WebActivator 1.0.0.0' to EfCodeFirst-CTP5
        Successfully added 'EFCodeFirst.SqlServerCompact 0.8' to EfCodeFirst-CTP5

    Note: We're using SQL CE 4 with Entity Framework here because they work really well together in a development scenario, but you can of course use Entity Framework Code First with the other databases supported by Entity Framework.

    Creating the Model using EF Code First

    Now we can create our model class. Right-click the Models folder and select Add / Class. Name the class Person.cs and add the following code:

        using System.Data.Entity;

        namespace DeadSimpleServer.Models
        {
            public class Person
            {
                public int ID { get; set; }
                public string Name { get; set; }
            }

            public class PersonContext : DbContext
            {
                public DbSet<Person> People { get; set; }
            }
        }

    Notice that the entity class Person has no special interfaces or base class. There's nothing special needed to make it work - it's just a POCO. The context we'll use to access the entities in the application is called PersonContext, but you could name it anything you wanted. The important thing is that it inherits from DbContext and contains one or more DbSet properties which hold our entity collections.

    Adding Seed Data

    We need some test data to expose from our service. The simplest way to get that into our database is to modify the CreateCeDatabaseIfNotExists class in AppStart_SQLCEEntityFramework.cs by adding some seed data to the Seed method:

        protected virtual void Seed( TContext context )
        {
            var personContext = context as PersonContext;
            personContext.People.Add( new Person { ID = 1, Name = "George Washington" } );
            personContext.People.Add( new Person { ID = 2, Name = "John Adams" } );
            personContext.People.Add( new Person { ID = 3, Name = "Thomas Jefferson" } );
            personContext.SaveChanges();
        }

    The CreateCeDatabaseIfNotExists class name is pretty self-explanatory - when our DbContext is accessed and the database isn't found, a new one will be created and populated with the data in the Seed method.

    There's one more step to make that work - we need to uncomment a line in the Start method at the top of the AppStart_SQLCEEntityFramework class and set the context name, as shown here:

        public static class AppStart_SQLCEEntityFramework
        {
            public static void Start()
            {
                DbDatabase.DefaultConnectionFactory = new SqlCeConnectionFactory("System.Data.SqlServerCe.4.0");

                // Sets the default database initialization code for working with Sql Server Compact databases
                // Uncomment this line and replace CONTEXT_NAME with the name of your DbContext if you are
                // using your DbContext to create and manage your database
                DbDatabase.SetInitializer(new CreateCeDatabaseIfNotExists<PersonContext>());
            }
        }

    Now our database and Entity Framework are set up, so we can expose data via WCF Data Services. Note: This is a bare-bones implementation with no administration screens. If you'd like to see how those are added, check out The Full Stack screencast series.
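    Before wiring up the service, it's worth noting that nothing stops you from using the context directly on the server. The following is a minimal sketch (not part of the original walkthrough) of querying the seeded data with LINQ, assuming the Person and PersonContext classes above and the default connection factory registered in AppStart_SQLCEEntityFramework:

        using System;
        using System.Linq;
        using DeadSimpleServer.Models;

        public class ContextSmokeTest
        {
            public static void Run()
            {
                using ( var db = new PersonContext() )
                {
                    // Pulls the seeded rows back out of the SQL CE database,
                    // ordered by name, using ordinary LINQ to Entities.
                    var presidents = db.People
                                       .OrderBy( p => p.Name )
                                       .ToList();

                    foreach ( var person in presidents )
                        Console.WriteLine( "{0}: {1}", person.ID, person.Name );
                }
            }
        }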
    Creating the OData Service using WCF Data Services

    Add a new WCF Data Service to the project (right-click the project / Add New Item / Web / WCF Data Service). We'll be exposing all the data as read/write. Remember to reconfigure the service to control and minimize access as appropriate for your own application.

    Open the code-behind for your service. In our case, the service was called PersonTestDataService.svc, so the code-behind class file is PersonTestDataService.svc.cs.

        using System.Data.Services;
        using System.Data.Services.Common;
        using System.ServiceModel;
        using DeadSimpleServer.Models;

        namespace DeadSimpleServer
        {
            [ServiceBehavior( IncludeExceptionDetailInFaults = true )]
            public class PersonTestDataService : DataService<PersonContext>
            {
                // This method is called only once to initialize service-wide policies.
                public static void InitializeService( DataServiceConfiguration config )
                {
                    config.SetEntitySetAccessRule( "*", EntitySetRights.All );
                    config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
                    config.UseVerboseErrors = true;
                }
            }
        }

    We're enabling a few additional settings to make it easier to debug if you run into trouble. The ServiceBehavior attribute is set to include exception details in faults, and we're using verbose errors. You can remove both of these when your service is working, as your public production service shouldn't be revealing exception information.
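    The wide-open access rule above is fine for a demo, but, as noted, you'll want to reconfigure it before going public. A possible tightened-down version (an illustration, not from the original article) that exposes the People set as read-only and drops the debugging aids might look like this:

        using System.Data.Services;
        using System.Data.Services.Common;
        using DeadSimpleServer.Models;

        namespace DeadSimpleServer
        {
            public class PersonTestDataService : DataService<PersonContext>
            {
                public static void InitializeService( DataServiceConfiguration config )
                {
                    // Expose only the People set, and only for reading.
                    config.SetEntitySetAccessRule( "People", EntitySetRights.AllRead );
                    config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
                    // UseVerboseErrors and IncludeExceptionDetailInFaults are deliberately
                    // left off here, as the article recommends for production.
                }
            }
        }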
    You can view the output of the service by running the application and browsing to http://localhost:[portnumber]/PersonTestDataService.svc/:

        <service xml:base="http://localhost:49786/PersonTestDataService.svc/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:app="http://www.w3.org/2007/app" xmlns="http://www.w3.org/2007/app">
          <workspace>
            <atom:title>Default</atom:title>
            <collection href="People">
              <atom:title>People</atom:title>
            </collection>
          </workspace>
        </service>

    This indicates that the service exposes one collection, which is accessible by browsing to http://localhost:[portnumber]/PersonTestDataService.svc/People

        <?xml version="1.0" encoding="iso-8859-1" standalone="yes"?>
        <feed xml:base="http://localhost:49786/PersonTestDataService.svc/" xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices" xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata" xmlns="http://www.w3.org/2005/Atom">
          <title type="text">People</title>
          <id>http://localhost:49786/PersonTestDataService.svc/People</id>
          <updated>2010-12-29T01:01:50Z</updated>
          <link rel="self" title="People" href="People" />
          <entry>
            <id>http://localhost:49786/PersonTestDataService.svc/People(1)</id>
            <title type="text"></title>
            <updated>2010-12-29T01:01:50Z</updated>
            <author>
              <name />
            </author>
            <link rel="edit" title="Person" href="People(1)" />
            <category term="DeadSimpleServer.Models.Person" scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" />
            <content type="application/xml">
              <m:properties>
                <d:ID m:type="Edm.Int32">1</d:ID>
                <d:Name>George Washington</d:Name>
              </m:properties>
            </content>
          </entry>
          <entry>
            ...
          </entry>
        </feed>

    Let's recap what we've done so far. But enough with services and XML - let's get this into our Windows Phone client application.

    Creating the DataServiceContext for the Client

    Use the latest DataSvcUtil.exe from http://odata.codeplex.com. As of today, that's in this download: http://odata.codeplex.com/releases/view/54698

    You need to run it with a few options:

    /uri - This will point to the service URI. In this case, it's http://localhost:59342/PersonTestDataService.svc . Pick up the port number from your running server (e.g., the server formerly known as Cassini).
    /out - This is the DataServiceContext class that will be generated. You can name it whatever you'd like.
    /Version - should be set to 2.0
    /DataServiceCollection - Include this flag to generate collections derived from the DataServiceCollection base, which brings in all the ObservableCollection goodness that handles your INotifyPropertyChanged events for you.

    Here's the console session from when we ran it:

        <ListBox x:Name="MainListBox" Margin="0,0,-12,0" ItemsSource="{Binding}" SelectionChanged="MainListBox_SelectionChanged">

    Next, to keep things simple, change the Binding on the two TextBlocks within the DataTemplate to Name and ID:

        <ListBox x:Name="MainListBox" Margin="0,0,-12,0" ItemsSource="{Binding}" SelectionChanged="MainListBox_SelectionChanged">
          <ListBox.ItemTemplate>
            <DataTemplate>
              <StackPanel Margin="0,0,0,17" Width="432">
                <TextBlock Text="{Binding Name}" TextWrapping="Wrap" Style="{StaticResource PhoneTextExtraLargeStyle}" />
                <TextBlock Text="{Binding ID}" TextWrapping="Wrap" Margin="12,-6,12,0" Style="{StaticResource PhoneTextSubtleStyle}" />
              </StackPanel>
            </DataTemplate>
          </ListBox.ItemTemplate>
        </ListBox>

    Getting The Context

    In the code-behind you'll first declare a member variable to hold the context from the Entity Framework. This is named using convention over configuration. The entity type is Person and the context is of type PersonContext. You initialize it by providing the URI, in this case using the URL obtained from the Cassini web server:

        PersonContext context = new PersonContext( new Uri( "http://localhost:49786/PersonTestDataService.svc/" ) );

    Create a second member variable of type DataServiceCollection<Person>, but do not initialize it:

        DataServiceCollection<Person> people;

    In the constructor you'll initialize the DataServiceCollection using the PersonContext:

        public MainPage()
        {
            InitializeComponent();
            people = new DataServiceCollection<Person>( context );

    Finally, you'll load the people collection using the LoadAsync method, passing in the fully specified URI for the People collection in the web service:

        people.LoadAsync( new Uri( "http://localhost:49786/PersonTestDataService.svc/People" ) );

    Note that this method runs asynchronously, and when it is finished the people collection has been populated. Since we didn't need or want to override any of the behavior, we don't implement LoadCompleted. You can use the LoadCompleted event if you need to do any other UI updates, but you don't need to. The final code is shown below:

        using System;
        using System.Data.Services.Client;
        using System.Windows;
        using System.Windows.Controls;
        using DeadSimpleServer.Models;
        using Microsoft.Phone.Controls;

        namespace WindowsPhoneODataTest
        {
            public partial class MainPage : PhoneApplicationPage
            {
                PersonContext context = new PersonContext( new Uri( "http://localhost:49786/PersonTestDataService.svc/" ) );
                DataServiceCollection<Person> people;

                // Constructor
                public MainPage()
                {
                    InitializeComponent();

                    // Set the data context of the listbox control to the sample data
                    // DataContext = App.ViewModel;
                    people = new DataServiceCollection<Person>( context );
                    people.LoadAsync( new Uri( "http://localhost:49786/PersonTestDataService.svc/People" ) );
                    DataContext = people;
                    this.Loaded += new RoutedEventHandler( MainPage_Loaded );
                }

                // Handle selection changed on ListBox
                private void MainListBox_SelectionChanged( object sender, SelectionChangedEventArgs e )
                {
                    // If selected index is -1 (no selection) do nothing
                    if ( MainListBox.SelectedIndex == -1 )
                        return;

                    // Navigate to the new page
                    NavigationService.Navigate( new Uri( "/DetailsPage.xaml?selectedItem=" + MainListBox.SelectedIndex, UriKind.Relative ) );

                    // Reset selected index to -1 (no selection)
                    MainListBox.SelectedIndex = -1;
                }

                // Load data for the ViewModel Items
                private void MainPage_Loaded( object sender, RoutedEventArgs e )
                {
                    if ( !App.ViewModel.IsDataLoaded )
                    {
                        App.ViewModel.LoadData();
                    }
                }
            }
        }

    With people populated we can set it as the DataContext and run the application; you'll find that the Name and ID are displayed in the list on the MainPage. Here's how the pieces in the client fit together: Complete source code available here
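    One small, optional addition worth knowing about (not shown in the article): DataServiceCollection exposes a LoadCompleted event, and hooking it is a convenient place to surface load failures such as a wrong service URI or a phone with no connectivity. A hedged sketch of what that might look like inside the constructor shown above:

        // Inside the MainPage constructor, after creating the collection:
        people = new DataServiceCollection<Person>( context );
        people.LoadCompleted += ( sender, e ) =>
        {
            if ( e.Error != null )
            {
                // Surface the failure instead of silently showing an empty list.
                MessageBox.Show( "Could not load people: " + e.Error.Message );
            }
        };
        people.LoadAsync( new Uri( "http://localhost:49786/PersonTestDataService.svc/People" ) );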

    Read the article

  • A Taxonomy of Numerical Methods v1

    - by JoshReuben
    Numerical Analysis – When, What, (but not how)

    Once you understand the math & know C++, numerical methods are basically blocks of iterative & conditional math code. I found the real trick was seeing the forest for the trees – knowing which method to use for which situation. It's pretty easy to get lost in the details – so I've tried to organize these methods in a way that I can quickly look them up. I've included links to detailed explanations and to C++ code examples. I've tried to classify numerical methods into the following broad categories:
    · Solving Systems of Linear Equations
    · Solving Non-Linear Equations Iteratively
    · Interpolation
    · Curve Fitting
    · Optimization
    · Numerical Differentiation & Integration
    · Solving ODEs
    · Boundary Value Problems
    · Solving Eigenvalue Problems
    Enjoy – I did!

    Solving Systems of Linear Equations

    Overview: solve sets of algebraic equations with x unknowns. The set is commonly in matrix form.

    Gauss-Jordan Elimination
    http://en.wikipedia.org/wiki/Gauss%E2%80%93Jordan_elimination
    C++: http://www.codekeep.net/snippets/623f1923-e03c-4636-8c92-c9dc7aa0d3c0.aspx
    Produces the solution of the equations & the coefficient matrix. Efficient, stable. 2 steps:
    · Forward Elimination – matrix decomposition: reduce the set to triangular form (0s below the diagonal), i.e. row echelon form. If degenerate, then there is no solution.
    · Backward Elimination – write the original matrix as the product of its inverse matrix & its reduced row-echelon matrix -> reduce the set to row canonical form & use back-substitution to find the solution to the set.
    Elementary ops for matrix decomposition:
    · Row multiplication
    · Row switching
    · Add multiples of rows to other rows
    Use pivoting to ensure rows are ordered for achieving triangular form.

    LU Decomposition
    http://en.wikipedia.org/wiki/LU_decomposition
    C++: http://ganeshtiwaridotcomdotnp.blogspot.co.il/2009/12/c-c-code-lu-decomposition-for-solving.html
    Represent the matrix as a product of lower & upper triangular matrices. A modified version of GJ Elimination. Advantage – can easily apply forward & backward elimination to solve triangular matrices. Techniques:
    · Doolittle Method – sets the L matrix diagonal to unity
    · Crout Method – sets the U matrix diagonal to unity
    Note: both the L & U matrices share the same unity diagonal & can be stored compactly in the same matrix.

    Gauss-Seidel Iteration
    http://en.wikipedia.org/wiki/Gauss%E2%80%93Seidel_method
    C++: http://www.nr.com/forum/showthread.php?t=722
    Solve each equation in the set for its own unknown and iterate, always plugging in the most recently updated values (as the update formulas contain sums, it is implemented iteratively). An optimization of Gauss-Jacobi: about 1.5 times faster, and requires roughly a quarter of the iterations to achieve the same tolerance.
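    The C++ links above show full implementations; as an illustration of how small Gauss-Seidel really is, here is a minimal C# sketch (no pivoting, no convergence guarantees; it assumes a diagonally dominant matrix):

        using System;

        static class GaussSeidel
        {
            // Iteratively solves A * x = b; A is assumed to be diagonally dominant.
            public static double[] Solve( double[,] A, double[] b, int maxIterations = 100, double tolerance = 1e-10 )
            {
                int n = b.Length;
                var x = new double[n];

                for ( int iter = 0; iter < maxIterations; iter++ )
                {
                    double maxChange = 0.0;
                    for ( int i = 0; i < n; i++ )
                    {
                        // Use the newest available values of x (this is what distinguishes
                        // Gauss-Seidel from Gauss-Jacobi, which only uses last iteration's values).
                        double sum = b[i];
                        for ( int j = 0; j < n; j++ )
                            if ( j != i ) sum -= A[i, j] * x[j];

                        double newXi = sum / A[i, i];
                        maxChange = Math.Max( maxChange, Math.Abs( newXi - x[i] ) );
                        x[i] = newXi;
                    }
                    if ( maxChange < tolerance ) break;   // converged
                }
                return x;
            }
        }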
    Solving Non-Linear Equations Iteratively

    Find roots of polynomials – there may be 0, 1 or n solutions for an n-order polynomial. Use iterative techniques. Iterative methods:
    · used when there are no known analytical techniques
    · require the functions to be continuous & differentiable
    · require an initial seed value – the choice is critical to convergence -> conduct multiple runs with different starting points & then select the best result
    · systematic – iterate until diminishing returns, tolerance or max-iteration conditions are met
    · bracketing techniques will always yield convergent solutions; non-bracketing methods may fail to converge

    Incremental method
    If a nonlinear function has opposite signs at the 2 ends of a small interval x1 & x2, then there is likely to be a solution in that interval – solutions are detected by evaluating the function over interval steps, looking for a change in sign, and adjusting the step size dynamically. Limitations – can miss closely spaced solutions in large intervals, cannot detect degenerate (coinciding) solutions, is limited to functions that cross the x-axis, and gives false positives for singularities.

    Fixed point method
    http://en.wikipedia.org/wiki/Fixed-point_iteration
    C++: http://books.google.co.il/books?id=weYj75E_t6MC&pg=PA79&lpg=PA79&dq=fixed+point+method++c%2B%2B&source=bl&ots=LQ-5P_taoC&sig=lENUUIYBK53tZtTwNfHLy5PEWDk&hl=en&sa=X&ei=wezDUPW1J5DptQaMsIHQCw&redir_esc=y#v=onepage&q=fixed%20point%20method%20%20c%2B%2B&f=false
    Algebraically rearrange the equation to isolate a variable, then apply the incremental method.

    Bisection method
    http://en.wikipedia.org/wiki/Bisection_method
    C++: http://numericalcomputing.wordpress.com/category/algorithms/
    Bracketed – select an initial interval, keep bisecting it at its midpoint into sub-intervals and then apply the incremental method on smaller & smaller intervals – zoom in. Advantage: unaffected by function gradient -> reliable. Disadvantage: slow convergence.

    False Position Method
    http://en.wikipedia.org/wiki/False_position_method
    C++: http://www.dreamincode.net/forums/topic/126100-bisection-and-false-position-methods/
    Bracketed – select an initial interval & use the relative value of the function at the interval end points to select the next sub-intervals (estimate how far between the end points the solution might be & subdivide based on this).

    Newton-Raphson method
    http://en.wikipedia.org/wiki/Newton's_method
    C++: http://www-users.cselabs.umn.edu/classes/Summer-2012/csci1113/index.php?page=./newt3
    Also known as Newton's method. Convenient, efficient. Not bracketed – only a single initial guess is required to start the iteration – but it requires an analytical expression for the first derivative of the function as input. Evaluates the function & its derivative at each step. Can be extended to the Newton MultiRoot method for solving multiple roots. Can be easily applied to a set of n coupled non-linear equations – conduct a Taylor series expansion of the function, drop terms of order n, rewrite as a Jacobian matrix of partial derivatives & convert to simultaneous linear equations.

    Secant Method
    http://en.wikipedia.org/wiki/Secant_method
    C++: http://forum.vcoderz.com/showthread.php?p=205230
    Unlike Newton-Raphson, it estimates the first derivative from an initial interval instead of requiring it as input (but does not require the root to be bracketed). Since the derivative is approximated, it may converge more slowly; it is fast in practice as it does not have to evaluate the derivative at each step. The implementation is similar to the False Position method.
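    To make the bracketing idea concrete, here is a minimal C# sketch of the bisection method described above; it assumes f is continuous and that f(a) and f(b) have opposite signs:

        using System;

        static class Bisection
        {
            // Finds a root of f in [a, b]; f(a) and f(b) must bracket the root.
            public static double FindRoot( Func<double, double> f, double a, double b, double tolerance = 1e-10 )
            {
                double fa = f( a );
                if ( fa * f( b ) > 0 )
                    throw new ArgumentException( "f(a) and f(b) must have opposite signs." );

                while ( ( b - a ) / 2 > tolerance )
                {
                    double mid = ( a + b ) / 2;
                    double fmid = f( mid );
                    if ( fa * fmid <= 0 )
                        b = mid;                 // the root lies in the left half
                    else
                    {
                        a = mid;                 // the root lies in the right half
                        fa = fmid;
                    }
                }
                return ( a + b ) / 2;
            }
        }

        // Example: Bisection.FindRoot( x => x * x - 2, 0, 2 ) returns roughly 1.4142136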
    Birge-Vieta Method
    http://mat.iitm.ac.in/home/sryedida/public_html/caimna/transcendental/polynomial%20methods/bv%20method.html
    C++: http://books.google.co.il/books?id=cL1boM2uyQwC&pg=SA3-PA51&lpg=SA3-PA51&dq=Birge-Vieta+Method+c%2B%2B&source=bl&ots=QZmnDTK3rC&sig=BPNcHHbpR_DKVoZXrLi4nVXD-gg&hl=en&sa=X&ei=R-_DUK2iNIjzsgbE5ID4Dg&redir_esc=y#v=onepage&q=Birge-Vieta%20Method%20c%2B%2B&f=false
    Combines Horner's method of polynomial evaluation (transforming into lesser-degree polynomials that are more computationally efficient to process) with Newton-Raphson to provide a computational speed-up.

    Interpolation

    Overview: construct new data points for as close as possible a fit within the range of a discrete set of known points (that were obtained via sampling or experimentation). Uses a Taylor series expansion of a function f(x) around a specific value of x.

    Linear Interpolation
    http://en.wikipedia.org/wiki/Linear_interpolation
    C++: http://www.hamaluik.com/?p=289
    Straight line between 2 points -> concatenate interpolants between each pair of data points.

    Bilinear Interpolation
    http://en.wikipedia.org/wiki/Bilinear_interpolation
    C++: http://supercomputingblog.com/graphics/coding-bilinear-interpolation/2/
    Extension of the linear function for interpolating functions of 2 variables – perform linear interpolation first in one direction, then in the other. Used in image processing – e.g. texture mapping filters. Uses 4 vertices to interpolate a value within a unit cell.

    Lagrange Interpolation
    http://en.wikipedia.org/wiki/Lagrange_polynomial
    C++: http://www.codecogs.com/code/maths/approximation/interpolation/lagrange.php
    For polynomials. Requires recomputation of all terms for each distinct x value – can only be applied to a small number of nodes. Numerically unstable.

    Barycentric Interpolation
    http://epubs.siam.org/doi/pdf/10.1137/S0036144502417715
    C++: http://www.gamedev.net/topic/621445-barycentric-coordinates-c-code-check/
    Rearranges the terms in the equation of the Lagrange interpolation by defining weight functions that are independent of the interpolated value of x.

    Newton Divided Difference Interpolation
    http://en.wikipedia.org/wiki/Newton_polynomial
    C++: http://jee-appy.blogspot.co.il/2011/12/newton-divided-difference-interpolation.html
    Hermite divided differences: an interpolation polynomial approximation for a given set of data points in the Newton form – divided differences are used to approximately calculate the various differences. For a given set of 3 data points, fit a quadratic interpolant through the data. Bracketed functions allow Newton divided differences to be calculated recursively via a difference table.

    Cubic Spline Interpolation
    http://en.wikipedia.org/wiki/Spline_interpolation
    C++: https://www.marcusbannerman.co.uk/index.php/home/latestarticles/42-articles/96-cubic-spline-class.html
    A spline is a piecewise polynomial. Provides smoothness – for interpolations with significantly varying data. Uses weighted coefficients to bend the function so that it is smooth & its 1st & 2nd derivatives are continuous through the edge points of each interval.
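    Of the schemes above, plain Lagrange interpolation is the shortest to write down, and it also shows the full recomputation per x value that makes it impractical for many nodes. A minimal C# sketch:

        using System;

        static class Lagrange
        {
            // Evaluates the Lagrange interpolating polynomial through (xs[i], ys[i]) at x.
            // Every basis polynomial is recomputed for each new x, which is why this is
            // only practical for a small number of nodes.
            public static double Interpolate( double[] xs, double[] ys, double x )
            {
                double result = 0.0;
                for ( int i = 0; i < xs.Length; i++ )
                {
                    double basis = 1.0;
                    for ( int j = 0; j < xs.Length; j++ )
                        if ( j != i )
                            basis *= ( x - xs[j] ) / ( xs[i] - xs[j] );
                    result += ys[i] * basis;
                }
                return result;
            }
        }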
    Curve Fitting

    A generalization of interpolation whereby the given data points may contain noise -> the curve does not necessarily pass through all the points.

    Least Squares Fit
    http://en.wikipedia.org/wiki/Least_squares
    C++: http://www.ccas.ru/mmes/educat/lab04k/02/least-squares.c
    Residual – the difference between an observed value & the expected value. The model function is often chosen as a linear combination of specified functions. Determines: A) the model instance for which the sum of squared residuals has the least value; B) the parameter values for which the model best fits the data.

    Straight Line Fit
    Linear correlation between an independent variable and a dependent variable.

    Linear Regression
    http://en.wikipedia.org/wiki/Linear_regression
    C++: http://www.oocities.org/david_swaim/cpp/linregc.htm
    A special case of statistically exact extrapolation. Leverages least squares. Given a basis function, the sum of the residuals is determined and the corresponding gradient equation is expressed as a set of normal linear equations in matrix form that can be solved (e.g. using LU Decomposition). Can be weighted – drop the assumption that all errors have the same significance -> the confidence of accuracy is different for each data point; fit the function closer to points with higher weights.

    Polynomial Fit – use a polynomial basis function.

    Moving Average
    http://en.wikipedia.org/wiki/Moving_average
    C++: http://www.codeproject.com/Articles/17860/A-Simple-Moving-Average-Algorithm
    Used for smoothing (cancelling fluctuations to highlight longer-term trends & cycles), time-series data analysis, and signal-processing filters. Replace each data point with the average of its neighbors. Can be simple (SMA), weighted (WMA) or exponential (EMA). Lags behind the latest data points – extra weight can be given to more recent data points. Weights can decrease arithmetically or exponentially according to distance from the point. Parameters: smoothing factor, period, weight basis.
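    For the unweighted straight-line case described above, the normal equations collapse to two closed-form expressions. A minimal C# sketch of a least squares line fit:

        using System;

        static class LeastSquares
        {
            // Fits y = slope * x + intercept by minimizing the sum of squared residuals.
            public static void FitLine( double[] x, double[] y, out double slope, out double intercept )
            {
                int n = x.Length;
                double sumX = 0, sumY = 0, sumXY = 0, sumXX = 0;
                for ( int i = 0; i < n; i++ )
                {
                    sumX  += x[i];
                    sumY  += y[i];
                    sumXY += x[i] * y[i];
                    sumXX += x[i] * x[i];
                }
                slope     = ( n * sumXY - sumX * sumY ) / ( n * sumXX - sumX * sumX );
                intercept = ( sumY - slope * sumX ) / n;
            }
        }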
    Optimization

    Overview: given a function with multiple variables, find a minimum (or a maximum, by minimizing –f(x)). Iterative approach. Efficient, but not necessarily reliable. Complications: noisy data, constraints, non-linear models. Detection via the sign of the first derivative – the derivative at saddle points will be 0.

    Local minima

    Bisection method
    Similar to the method for finding a root of a non-linear equation. Start with an interval that contains a minimum.

    Golden Search method
    http://en.wikipedia.org/wiki/Golden_section_search
    C++: http://www.codecogs.com/code/maths/optimization/golden.php
    Bisect intervals according to the golden ratio 0.618... Achieves the reduction by evaluating a single new function value per step instead of 2.

    Newton-Raphson Method

    Brent method
    http://en.wikipedia.org/wiki/Brent's_method
    C++: http://people.sc.fsu.edu/~jburkardt/cpp_src/brent/brent.cpp
    Based on quadratic or parabolic interpolation – if the function is smooth & parabolic near the minimum, then a parabola fitted through any 3 points should approximate the minimum. Fails when the 3 points are collinear, in which case the denominator is 0.

    Simplex Method
    http://en.wikipedia.org/wiki/Simplex_algorithm
    C++: http://www.codeguru.com/cpp/article.php/c17505/Simplex-Optimization-Algorithm-and-Implemetation-in-C-Programming.htm
    Finds the global minimum of any multi-variable function. Direct search – no derivatives required. At each step it maintains a non-degenerate simplex – a convex hull of n+1 vertices. Obtains the minimum for a function with n variables by evaluating the function at the n+1 points, iteratively replacing the point with the worst result with a better one, shrinking the multidimensional simplex around the best point. Point replacement involves expanding & contracting the simplex near the worst-value point to determine a better replacement point. Oscillation can be avoided by choosing the 2nd-worst result. Restart if it gets stuck. Parameters: contraction & expansion factors.

    Simulated Annealing
    http://en.wikipedia.org/wiki/Simulated_annealing
    C++: http://code.google.com/p/cppsimulatedannealing/
    Analogy to heating & cooling metal to strengthen its structure. Stochastic method – applies a random permutation search for the global minimum, avoiding entrapment in local minima via hill climbing. Heating schedule – annealing schedule parameters: temperature, iterations at each temperature, temperature delta. Cooling schedule – can be linear, step-wise or exponential.

    Differential Evolution
    http://en.wikipedia.org/wiki/Differential_evolution
    C++: http://www.amichel.com/de/doc/html/
    More advanced stochastic methods analogous to biological processes: genetic algorithms, evolution strategies. A parallel direct search method over multiple discrete or continuous variables. An initial population of variable vectors is chosen randomly – if the weighted difference vector of 2 vectors yields a lower objective function value, it replaces the comparison vector. Many parameters: number of parents, number of variables, step size, crossover constant, etc. Convergence is slow – many more function evaluations than simulated annealing.
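    Of the one-dimensional searches above, golden section search is particularly compact; a minimal C# sketch for a unimodal function on [a, b]:

        using System;

        static class GoldenSection
        {
            static readonly double InvPhi = ( Math.Sqrt( 5 ) - 1 ) / 2;   // about 0.618

            // Finds the minimum of a unimodal function f on [a, b].
            public static double Minimize( Func<double, double> f, double a, double b, double tolerance = 1e-8 )
            {
                double c = b - InvPhi * ( b - a );
                double d = a + InvPhi * ( b - a );
                double fc = f( c ), fd = f( d );

                while ( b - a > tolerance )
                {
                    if ( fc < fd )
                    {
                        // Minimum lies in [a, d]; reuse c as the new d (one new evaluation per step).
                        b = d; d = c; fd = fc;
                        c = b - InvPhi * ( b - a );
                        fc = f( c );
                    }
                    else
                    {
                        // Minimum lies in [c, b]; reuse d as the new c.
                        a = c; c = d; fc = fd;
                        d = a + InvPhi * ( b - a );
                        fd = f( d );
                    }
                }
                return ( a + b ) / 2;
            }
        }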
    Numerical Differentiation

    Overview: 2 approaches to finite difference methods:
    · A) approximate the function via polynomial interpolation, then differentiate
    · B) Taylor series approximation – additionally provides an error estimate

    Finite Difference methods
    http://en.wikipedia.org/wiki/Finite_difference_method
    C++: http://www.wpi.edu/Pubs/ETD/Available/etd-051807-164436/unrestricted/EAMPADU.pdf
    Find differences between high-order derivative values – approximate differential equations by finite differences at evenly spaced data points, based on the forward & backward Taylor series expansions of f(x) about x plus or minus multiples of delta h. Forward / backward difference – the sum of the series contains the even derivatives and the difference of the series contains the odd derivatives – coupled equations that can be solved. These provide an approximation of the derivative to within O(h^2) accuracy; there are also the central difference & extended central difference, which have O(h^4) accuracy.

    Richardson Extrapolation
    http://en.wikipedia.org/wiki/Richardson_extrapolation
    C++: http://mathscoding.blogspot.co.il/2012/02/introduction-richardson-extrapolation.html
    A sequence acceleration method applied to finite differences. Fast convergence, high accuracy O(h^4).

    Derivatives via Interpolation
    Cannot apply the Finite Difference method to discrete data points at uneven intervals – so approximate the derivative of f(x) using the derivative of the interpolant via 3-point Lagrange interpolation. Note: the higher the order of the derivative, the lower the approximation precision.

    Numerical Integration

    Estimate finite & infinite integrals of functions. A more accurate procedure than numerical differentiation. Use when it is not possible to obtain the integral of a function analytically, or when the function is not given and only the data points are.

    Newton-Cotes Methods
    http://en.wikipedia.org/wiki/Newton%E2%80%93Cotes_formulas
    C++: http://www.siafoo.net/snippet/324
    For equally spaced data points. Computationally easy – based on local interpolation of n rectangular strip areas that is piecewise fitted to a polynomial to get the total area. Evaluate the integrand at n+1 evenly spaced points – approximate the definite integral by a sum whose weights are derived from the Lagrange basis polynomials. The Trapezoidal Rule is the 2-point formula, the Simpson 1/3 Rule the 3-point formula, the Simpson 3/8 Rule the 4-point formula, and Boole's (Bode's) Rule the 5-point formula. Higher orders obtain more accurate results. The Trapezoidal Rule uses a simple area; Simpson's Rule replaces the integrand f(x) with a quadratic polynomial p(x) that uses the same values as f(x) at its end points, but adds a midpoint.

    Romberg Integration
    http://en.wikipedia.org/wiki/Romberg's_method
    C++: http://code.google.com/p/romberg-integration/downloads/detail?name=romberg.cpp&can=2&q=
    Combines the trapezoidal rule with Richardson Extrapolation. Evaluates the integrand at equally spaced points. The integrand must have continuous derivatives. Each R(n,m) extrapolation uses a higher-order integrand polynomial replacement rule (the zeroth starts with the trapezoidal rule) -> a lower triangular matrix of equation coefficients where the bottom-right term has the most accurate approximation. The process continues until the difference between 2 successive diagonal terms becomes sufficiently small.

    Gaussian Quadrature
    http://en.wikipedia.org/wiki/Gaussian_quadrature
    C++: http://www.alglib.net/integration/gaussianquadratures.php
    Data points are chosen to yield the best possible accuracy – requires fewer evaluations. Able to handle singularities and functions that are difficult to evaluate. The integrand can include a weighting function determined by a set of orthogonal polynomials. Points & weights are selected so that the integrand yields the exact integral if f(x) is a polynomial of degree <= 2n+1. Techniques (basically different weighting functions):
    · Gauss-Legendre Integration w(x)=1
    · Gauss-Laguerre Integration w(x)=e^-x
    · Gauss-Hermite Integration w(x)=e^-x^2
    · Gauss-Chebyshev Integration w(x)= 1 / Sqrt(1-x^2)
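    The two simplest Newton-Cotes rules mentioned above (composite trapezoidal and Simpson 1/3) fit in a few lines; a minimal C# sketch:

        using System;

        static class Quadrature
        {
            // Composite trapezoidal rule over [a, b] using n equal strips.
            public static double Trapezoid( Func<double, double> f, double a, double b, int n )
            {
                double h = ( b - a ) / n;
                double sum = 0.5 * ( f( a ) + f( b ) );
                for ( int i = 1; i < n; i++ )
                    sum += f( a + i * h );
                return h * sum;
            }

            // Composite Simpson 1/3 rule over [a, b]; n must be even.
            public static double Simpson( Func<double, double> f, double a, double b, int n )
            {
                if ( n % 2 != 0 ) throw new ArgumentException( "n must be even" );
                double h = ( b - a ) / n;
                double sum = f( a ) + f( b );
                for ( int i = 1; i < n; i++ )
                    sum += f( a + i * h ) * ( i % 2 == 0 ? 2 : 4 );   // 4,2,4,2,...,4 pattern
                return sum * h / 3;
            }
        }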
Points & weights are selected so that the integrand yields the exact integral if f(x) is a polynomial of degree <= 2n+1 Techniques (basically different weighting functions): · Gauss-Legendre Integration w(x)=1 · Gauss-Laguerre Integration w(x)=e^-x · Gauss-Hermite Integration w(x)=e^-x^2 · Gauss-Chebyshev Integration w(x)= 1 / Sqrt(1-x^2) Solving ODEs Use when high order differential equations cannot be solved analytically Evaluated under boundary conditions RK for systems – a high order differential equation can always be transformed into a coupled first order system of equations Euler method http://en.wikipedia.org/wiki/Euler_method C++: http://rosettacode.org/wiki/Euler_method First order Runge–Kutta method. Simple recursive method – given an initial value, calculate derivative deltas. Unstable & not very accurate (O(h) error) – not used in practice A first-order method - the local error (truncation error per step) is proportional to the square of the step size, and the global error (error at a given time) is proportional to the step size In evolving solution between data points xn & xn+1, only evaluates derivatives at beginning of interval xn à asymmetric at boundaries Higher order Runge Kutta http://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods C++: http://www.dreamincode.net/code/snippet1441.htm 2nd & 4th order RK - Introduces parameterized midpoints for more symmetric solutions à accuracy at higher computational cost Adaptive RK – RK-Fehlberg – estimate the truncation at each integration step & automatically adjust the step size to keep error within prescribed limits. At each step 2 approximations are compared – if in disagreement to a specific accuracy, the step size is reduced Boundary Value Problems Where solution of differential equations are located at 2 different values of the independent variable x à more difficult, because cannot just start at point of initial value – there may not be enough starting conditions available at the end points to produce a unique solution An n-order equation will require n boundary conditions – need to determine the missing n-1 conditions which cause the given conditions at the other boundary to be satisfied Shooting Method http://en.wikipedia.org/wiki/Shooting_method C++: http://ganeshtiwaridotcomdotnp.blogspot.co.il/2009/12/c-c-code-shooting-method-for-solving.html Iteratively guess the missing values for one end & integrate, then inspect the discrepancy with the boundary values of the other end to adjust the estimate Given the starting boundary values u1 & u2 which contain the root u, solve u given the false position method (solving the differential equation as an initial value problem via 4th order RK), then use u to solve the differential equations. Finite Difference Method For linear & non-linear systems Higher order derivatives require more computational steps – some combinations for boundary conditions may not work though Improve the accuracy by increasing the number of mesh points Solving EigenValue Problems An eigenvalue can substitute a matrix when doing matrix multiplication à convert matrix multiplication into a polynomial EigenValue For a given set of equations in matrix form, determine what are the solution eigenvalue & eigenvectors Similar Matrices - have same eigenvalues. 
    Solving Eigenvalue Problems

    An eigenvalue can substitute for a matrix when doing matrix multiplication -> convert matrix multiplication into a polynomial. Eigenvalue: for a given set of equations in matrix form, determine the solution eigenvalues & eigenvectors. Similar matrices have the same eigenvalues. Use orthogonal similarity transforms to reduce a matrix to diagonal form, from which the eigenvalues & eigenvectors can be computed iteratively.

    Jacobi method
    http://en.wikipedia.org/wiki/Jacobi_method
    C++: http://people.sc.fsu.edu/~jburkardt/classes/acs2_2008/openmp/jacobi/jacobi.html
    Robust but computationally intense – use for small matrices < 10x10.

    Power Iteration
    http://en.wikipedia.org/wiki/Power_iteration
    For any given real symmetric matrix, generates the largest single eigenvalue & its eigenvector. The simplest method – does not compute a matrix decomposition -> suitable for large, sparse matrices.

    Inverse Iteration
    A variation of the power iteration method – generates the smallest eigenvalue from the inverse matrix.

    Rayleigh Method
    http://en.wikipedia.org/wiki/Rayleigh's_method_of_dimensional_analysis
    A variation of the power iteration method.

    Rayleigh Quotient Method
    A variation of the inverse iteration method.

    Matrix Tri-diagonalization Method
    Use the Householder algorithm to reduce an NxN symmetric matrix to a tridiagonal real symmetric matrix via N-2 orthogonal transforms.

    What's Next

    Outside of numerical methods there are lots of different types of algorithms that I've learned over the decades:
    · Data Mining (I covered this briefly in a previous post: http://geekswithblogs.net/JoshReuben/archive/2007/12/31/ssas-dm-algorithms.aspx )
    · Search & Sort
    · Routing
    · Problem Solving
    · Logical Theorem Proving
    · Planning
    · Probabilistic Reasoning
    · Machine Learning
    · Solvers (e.g. MIP)
    · Bioinformatics (Sequence Alignment, Protein Folding)
    · Quant Finance (I read Wilmott's books – interesting)
    Sooner or later, I'll cover the above topics as well.

    Read the article

  • NTFS Corruption: Files created in Linux corrupted when Windows Boots

    - by Logan Mayfield
    I'm getting some file loss and corruption on my Win7/Ubuntu 12.04 dual-boot setup. I have a large shared NTFS partition. I have my Windows Docs/Music/etc. directories on that partition and have the comparable directories in Linux set up as symlinks. I'm using ntfs-3g on the Linux side of things to manage the NTFS partition. The shared partition is on a logical partition along with my Linux /home, / and /swap partitions. The NTFS partition is mounted at boot time via fstab with the following options: ntfs-3g users,nls=utf8,locale=en_US.UTF-8,exec,rw

    The problem seems to be confined to newly created and recently edited files. I have not seen data loss or corruption when creating/editing files in Windows and then moving over to Ubuntu. I've been using the sync command aggressively in Ubuntu to try to ensure everything is getting written to the HDD. I do not use hibernate in Windows, so I know it's not the usual missing-files-due-to-hibernation problem. I'm not seeing any mount-related issues in dmesg.

    Most recently I had a set of files related to a LaTeX document go bad. Some of them show up in Ubuntu but I am unable to delete them. In the GUI file browser they are given thumbnails associated with files I created on my last boot of Windows. To be more specific: I created a few png files in Windows. The files corrupted by that Windows boot are associated with running PdfLatex on a file and are not image files. However, two of the corrupted files show up with the thumbnail image of one of the previously mentioned png files. The png files are not in the same directory as the LaTeX files, but they are both within the Documents folder tree. I've had success with using NTFS for shared data in the past and am hoping there's some quirk here I'm missing and it's not just bad luck. On one hand this appears to be some kind of Windows problem, as data loss occurs when I boot to Windows after having worked in Ubuntu for a while. However, I'm assuming it's more on the Ubuntu end as it requires the special NTFS drivers.

    Edit for more info: This is a Lenovo ThinkPad L430, purchased new in the last month, so it's a fairly fresh install. Many of the files on the shared partition were copied over from a previous NTFS-formatted shared partition on another HDD. As requested, here's a sample chkdsk log. Some of the files it's mentioning were files that got deleted off the partition while in Ubuntu. Others were created/edited but not deleted.

    Checking file system on D: Volume dismounted. All opened handles to this volume are now invalid. Volume label is Files. CHKDSK is verifying files (stage 1 of 3)... Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x789f47 for possibly 0x21 clusters. Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x42 is already in use. Deleting corrupt attribute record (128, "") from file record segment 66. 86496 file records processed. File verification completed. 385 large file records processed. 0 bad file records processed. 0 EA records processed. 0 reparse records processed. CHKDSK is verifying indexes (stage 2 of 3)... Deleted invalid filename Screenshot from 2012-09-09 09:51:27.png (72) in directory 46. The NTFS file name attribute in file 0x48 is incorrect. 53 00 63 00 72 00 65 00 65 00 6e 00 73 00 68 00 S.c.r.e.e.n.s.h. 6f 00 74 00 20 00 66 00 72 00 6f 00 6d 00 20 00 o.t. .f.r.o.m. . 32 00 30 00 31 00 32 00 2d 00 30 00 39 00 2d 00 2.0.1.2.-.0.9.-. 30 00 39 00 20 00 30 00 39 00 3a 00 35 00 31 00 0.9. .0.9.:.5.1. 
3a 00 32 00 37 00 2e 00 70 00 6e 00 67 00 0d 00 :.2.7...p.n.g... 00 00 00 00 00 00 90 94 49 1f 5e 00 00 80 d4 00 ......I.^.... File 72 has been orphaned since all its filenames were invalid Windows will recover the file in the orphan recovery phase. Correcting minor file name errors in file 72. Index entry found.000 of index $I30 in file 0x5 points to unused file 0x11. Deleting index entry found.000 in index $I30 of file 5. Index entry found.001 of index $I30 in file 0x5 points to unused file 0x16. Deleting index entry found.001 in index $I30 of file 5. Index entry found.002 of index $I30 in file 0x5 points to unused file 0x15. Deleting index entry found.002 in index $I30 of file 5. Index entry DOWNLO~1 of index $I30 in file 0x28 points to unused file 0x2b6. Deleting index entry DOWNLO~1 in index $I30 of file 40. Unable to locate the file name attribute of index entry Screenshot from 2012-09-09 09:51:27.png of index $I30 with parent 0x2e in file 0x48. Deleting index entry Screenshot from 2012-09-09 09:51:27.png in index $I30 of file 46. An index entry of index $I30 in file 0x32 points to file 0x151e8 which is beyond the MFT. Deleting index entry latexsheet.tex in index $I30 of file 50. An index entry of index $I30 in file 0x58bc points to file 0x151eb which is beyond the MFT. Deleting index entry D8CZ82PK in index $I30 of file 22716. An index entry of index $I30 in file 0x58bc points to file 0x151f7 which is beyond the MFT. Deleting index entry EGA4QEAX in index $I30 of file 22716. An index entry of index $I30 in file 0x58bc points to file 0x151e9 which is beyond the MFT. Deleting index entry NGTB469M in index $I30 of file 22716. An index entry of index $I30 in file 0x58bc points to file 0x151fb which is beyond the MFT. Deleting index entry WU5RKXAB in index $I30 of file 22716. Index entry comp220-lab3.synctex.gz of index $I30 in file 0xda69 points to unused file 0xd098. Deleting index entry comp220-lab3.synctex.gz in index $I30 of file 55913. Unable to locate the file name attribute of index entry comp220-numberGrammars.aux of index $I30 with parent 0xda69 in file 0xa276. Deleting index entry comp220-numberGrammars.aux in index $I30 of file 55913. The file reference 0x500000000cd43 of index entry comp220-numberGrammars.out of index $I30 with parent 0xda69 is not the same as 0x600000000cd43. Deleting index entry comp220-numberGrammars.out in index $I30 of file 55913. The file reference 0x500000000cd45 of index entry comp220-numberGrammars.pdf of index $I30 with parent 0xda69 is not the same as 0xc00000000cd45. Deleting index entry comp220-numberGrammars.pdf in index $I30 of file 55913. An index entry of index $I30 in file 0xda69 points to file 0x15290 which is beyond the MFT. Deleting index entry gram.aux in index $I30 of file 55913. An index entry of index $I30 in file 0xda69 points to file 0x15291 which is beyond the MFT. Deleting index entry gram.out in index $I30 of file 55913. An index entry of index $I30 in file 0xda69 points to file 0x15292 which is beyond the MFT. Deleting index entry gram.pdf in index $I30 of file 55913. Unable to locate the file name attribute of index entry comp230-quiz1.synctex.gz of index $I30 with parent 0xda6f in file 0xd183. Deleting index entry comp230-quiz1.synctex.gz in index $I30 of file 55919. An index entry of index $I30 in file 0xf3cc points to file 0x15283 which is beyond the MFT. Deleting index entry require-transform.rkt in index $I30 of file 62412. An index entry of index $I30 in file 0xf3cc points to file 0x15284 which is beyond the MFT. 
Deleting index entry set.rkt in index $I30 of file 62412. An index entry of index $I30 in file 0xf497 points to file 0x15280 which is beyond the MFT. Deleting index entry logger.rkt in index $I30 of file 62615. An index entry of index $I30 in file 0xf497 points to file 0x15281 which is beyond the MFT. Deleting index entry misc.rkt in index $I30 of file 62615. An index entry of index $I30 in file 0xf497 points to file 0x15282 which is beyond the MFT. Deleting index entry more-scheme.rkt in index $I30 of file 62615. An index entry of index $I30 in file 0xf5bf points to file 0x15285 which is beyond the MFT. Deleting index entry core-layout.rkt in index $I30 of file 62911. An index entry of index $I30 in file 0xf5e0 points to file 0x15286 which is beyond the MFT. Deleting index entry ref.scrbl in index $I30 of file 62944. An index entry of index $I30 in file 0xf6f0 points to file 0x15287 which is beyond the MFT. Deleting index entry base-render.rkt in index $I30 of file 63216. An index entry of index $I30 in file 0xf6f0 points to file 0x15288 which is beyond the MFT. Deleting index entry html-properties.rkt in index $I30 of file 63216. An index entry of index $I30 in file 0xf6f0 points to file 0x15289 which is beyond the MFT. Deleting index entry html-render.rkt in index $I30 of file 63216. An index entry of index $I30 in file 0xf6f0 points to file 0x1528b which is beyond the MFT. Deleting index entry latex-prefix.rkt in index $I30 of file 63216. An index entry of index $I30 in file 0xf6f0 points to file 0x1528c which is beyond the MFT. Deleting index entry latex-render.rkt in index $I30 of file 63216. An index entry of index $I30 in file 0xf6f0 points to file 0x1528e which is beyond the MFT. Deleting index entry scribble.tex in index $I30 of file 63216. An index entry of index $I30 in file 0xf717 points to file 0x1528a which is beyond the MFT. Deleting index entry lang.rkt in index $I30 of file 63255. An index entry of index $I30 in file 0xf721 points to file 0x1528d which is beyond the MFT. Deleting index entry lang.rkt in index $I30 of file 63265. An index entry of index $I30 in file 0xf764 points to file 0x1528f which is beyond the MFT. Deleting index entry lang.rkt in index $I30 of file 63332. An index entry of index $I30 in file 0x14261 points to file 0x15270 which is beyond the MFT. Deleting index entry fddff3ae9ae2221207f144821d475c08ec3d05 in index $I30 of file 82529. An index entry of index $I30 in file 0x14621 points to file 0x15268 which is beyond the MFT. Deleting index entry FETCH_HEAD in index $I30 of file 83489. An index entry of index $I30 in file 0x14650 points to file 0x15272 which is beyond the MFT. Deleting index entry 86 in index $I30 of file 83536. An index entry of index $I30 in file 0x14651 points to file 0x15266 which is beyond the MFT. Deleting index entry pack-7f54ce9f8218d2cd8d6815b8c07461b50584027f.idx in index $I30 of file 83537. An index entry of index $I30 in file 0x14651 points to file 0x15265 which is beyond the MFT. Deleting index entry pack-7f54ce9f8218d2cd8d6815b8c07461b50584027f.pack in index $I30 of file 83537. An index entry of index $I30 in file 0x146f1 points to file 0x15275 which is beyond the MFT. Deleting index entry master in index $I30 of file 83697. An index entry of index $I30 in file 0x146f6 points to file 0x15276 which is beyond the MFT. Deleting index entry remotes in index $I30 of file 83702. An index entry of index $I30 in file 0x1477d points to file 0x15278 which is beyond the MFT. Deleting index entry pad.rkt in index $I30 of file 83837. 
An index entry of index $I30 in file 0x14797 points to file 0x1527c which is beyond the MFT. Deleting index entry pad1.rkt in index $I30 of file 83863. An index entry of index $I30 in file 0x14810 points to file 0x1527d which is beyond the MFT. Deleting index entry cm.rkt in index $I30 of file 83984. An index entry of index $I30 in file 0x14926 points to file 0x1527e which is beyond the MFT. Deleting index entry multi-file-search.rkt in index $I30 of file 84262. An index entry of index $I30 in file 0x149ef points to file 0x1527f which is beyond the MFT. Deleting index entry com.rkt in index $I30 of file 84463. An index entry of index $I30 in file 0x14b47 points to file 0x15202 which is beyond the MFT. Deleting index entry COMMIT_EDITMSG in index $I30 of file 84807. An index entry of index $I30 in file 0x14b47 points to file 0x15279 which is beyond the MFT. Deleting index entry index in index $I30 of file 84807. An index entry of index $I30 in file 0x14b4c points to file 0x15274 which is beyond the MFT. Deleting index entry master in index $I30 of file 84812. An index entry of index $I30 in file 0x14b61 points to file 0x1520b which is beyond the MFT. Deleting index entry 02 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x1525a which is beyond the MFT. Deleting index entry 28 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x15208 which is beyond the MFT. Deleting index entry 29 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x1521f which is beyond the MFT. Deleting index entry 2c in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x15261 which is beyond the MFT. Deleting index entry 2e in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x151f0 which is beyond the MFT. Deleting index entry 45 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x1523e which is beyond the MFT. Deleting index entry 47 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x151e5 which is beyond the MFT. Deleting index entry 49 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x15214 which is beyond the MFT. Deleting index entry 58 in index $I30 of file 84833. Index entry 6e of index $I30 in file 0x14b61 points to unused file 0xd182. Deleting index entry 6e in index $I30 of file 84833. Unable to locate the file name attribute of index entry a0 of index $I30 with parent 0x14b61 in file 0xd29c. Deleting index entry a0 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x1521b which is beyond the MFT. Deleting index entry cd in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x15249 which is beyond the MFT. Deleting index entry d6 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x15242 which is beyond the MFT. Deleting index entry df in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x15227 which is beyond the MFT. Deleting index entry ea in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x1522e which is beyond the MFT. Deleting index entry f3 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x151f2 which is beyond the MFT. Deleting index entry ff in index $I30 of file 84833. 
An index entry of index $I30 in file 0x14b62 points to file 0x15254 which is beyond the MFT. Deleting index entry 1ed39b36ad4bd48c91d22cbafd7390f1ea38da in index $I30 of file 84834. An index entry of index $I30 in file 0x14b75 points to file 0x15224 which is beyond the MFT. Deleting index entry 96260247010fe9811fea773c08c5f3a314df3f in index $I30 of file 84853. An index entry of index $I30 in file 0x14b79 points to file 0x15219 which is beyond the MFT. Deleting index entry 8f689724ca23528dd4f4ab8b475ace6edcb8f5 in index $I30 of file 84857. An index entry of index $I30 in file 0x14b7c points to file 0x15223 which is beyond the MFT. Deleting index entry 1df17cf850656be42c947cba6295d29c248d94 in index $I30 of file 84860. An index entry of index $I30 in file 0x14b7c points to file 0x15217 which is beyond the MFT. Deleting index entry 31db8a3c72a3e44769bbd8db58d36f8298242c in index $I30 of file 84860. An index entry of index $I30 in file 0x14b7c points to file 0x15267 which is beyond the MFT. Deleting index entry 8e1254d755ff1882d61c07011272bac3612f57 in index $I30 of file 84860. An index entry of index $I30 in file 0x14b82 points to file 0x15246 which is beyond the MFT. Deleting index entry f959bfaf9643c1b9e78d5ecf8f669133efdbf3 in index $I30 of file 84866. An index entry of index $I30 in file 0x14b88 points to file 0x151fe which is beyond the MFT. Deleting index entry 7e9aa15b1196b2c60116afa4ffa613397f2185 in index $I30 of file 84872. An index entry of index $I30 in file 0x14b8a points to file 0x151ea which is beyond the MFT. Deleting index entry 73cb0cd248e494bb508f41b55d862e84cdd6e0 in index $I30 of file 84874. An index entry of index $I30 in file 0x14b8e points to file 0x15264 which is beyond the MFT. Deleting index entry bd555d9f0383cc14c317120149e9376a8094c4 in index $I30 of file 84878. An index entry of index $I30 in file 0x14b96 points to file 0x15212 which is beyond the MFT. Deleting index entry 630dba40562d991bc6cbb6fed4ba638542e9c5 in index $I30 of file 84886. An index entry of index $I30 in file 0x14b99 points to file 0x151ec which is beyond the MFT. Deleting index entry 478be31ca8e538769246e22bba3330d81dc3c8 in index $I30 of file 84889. An index entry of index $I30 in file 0x14b99 points to file 0x15258 which is beyond the MFT. Deleting index entry 66c60c0a0f3253bc9a5112697e4cbb0dfc0c78 in index $I30 of file 84889. An index entry of index $I30 in file 0x14b9c points to file 0x15238 which is beyond the MFT. Deleting index entry 1c7ceeddc2953496f9ffbfc0b6fb28846e3fe3 in index $I30 of file 84892. An index entry of index $I30 in file 0x14b9c points to file 0x15247 which is beyond the MFT. Deleting index entry ae6e32ffc49d897d8f8aeced970a90d3653533 in index $I30 of file 84892. An index entry of index $I30 in file 0x14ba0 points to file 0x15233 which is beyond the MFT. Deleting index entry f71c7d874e45179a32e138b49bf007e5bbf514 in index $I30 of file 84896. Index entry 2e04fefbd794f050d45e7a717d009e39204431 of index $I30 in file 0x14ba7 points to unused file 0xd097. Deleting index entry 2e04fefbd794f050d45e7a717d009e39204431 in index $I30 of file 84903. An index entry of index $I30 in file 0x14baa points to file 0x15241 which is beyond the MFT. Deleting index entry 0dda7dec1c635cd646dfef308e403c2843d5dc in index $I30 of file 84906. An index entry of index $I30 in file 0x14baa points to file 0x151fc which is beyond the MFT. Deleting index entry 98151e654dd546edcfdec630bc82d90619ac8e in index $I30 of file 84906. 
An index entry of index $I30 in file 0x14bb1 points to file 0x151e9 which is beyond the MFT. Deleting index entry 1997c5be62ffeebc99253cced7608415e38e4e in index $I30 of file 84913. An index entry of index $I30 in file 0x14bb1 points to file 0x1521d which is beyond the MFT. Deleting index entry 6bf3aedefd3ac62d9c49cad72d05e8c0ad242c in index $I30 of file 84913. An index entry of index $I30 in file 0x14bb1 points to file 0x151f4 which is beyond the MFT. Deleting index entry 907b755afdca14c00be0010962d0861af29264 in index $I30 of file 84913. An index entry of index $I30 in file 0x14bb3 points to file 0x15218 which is beyond the MFT. Deleting index entry

    Read the article

  • How to find and fix performance problems in ORM powered applications

    - by FransBouma
    Once in a while we get requests about how to fix performance problems with our framework. As it comes down to following the same steps and looking into the same things every single time, I decided to write a blog post about it instead, so more people can learn from it and solve performance problems in their O/R mapper powered applications. In some parts it's focused on LLBLGen Pro, but it's also usable for other O/R mapping frameworks, as the vast majority of performance problems in O/R mapper powered applications are not specific to a certain O/R mapper framework. Too often, the developer looks at the wrong part of the application, tries to fix what isn't a problem in that part, and gets frustrated that 'things are so slow with <insert your favorite framework X here>'. I've been in the O/R mapper business for a long time now (almost 10 years, full time) and as it's a small world, we O/R mapper developers know almost all the tricks to pull off by now: we all know what to do to make task ABC faster and what compromises (because there are almost always compromises) to deal with if we decide to make ABC faster that way. Some O/R mapper frameworks are faster at X, others at Y, but you can be sure the difference is mainly the result of a compromise some developers are willing to deal with and others aren't. That's why the O/R mapper frameworks on the market today are different in many ways, even though they all fetch and save entities from and to a database. I'm not suggesting there's no room for improvement in today's O/R mapper frameworks (there always is), but it's no longer a matter of 'the slowness of the application is caused by the O/R mapper'. Perhaps query generation can be optimized a bit here, row materialization a bit there, but it mainly comes down to milliseconds. Still worth it if you're a framework developer, but it's not much compared to the time spent inside databases and in user code: if a complete fetch takes 40ms or 50ms (from call to entity object collection), a 10ms difference won't be noticed by your application. That's why it's very important to find the real locations of the problems, so developers can fix them properly and don't get frustrated because their quest for a fast, well-performing application failed. Performance tuning basics and rules Finding and fixing performance problems in any application is a strict procedure with four prescribed steps: isolate, analyze, interpret and fix, in that order. It's key that you don't skip a step or make assumptions: these steps help you find the reason for a problem which seems to be there, and how to fix it or leave it as-is. Skipping a step, or assuming things will be bad/slow without doing analysis, leads down the path of premature optimization and won't actually solve your problems, only create new ones. The most important rule of finding and fixing performance problems in software is that you have to understand what 'performance problem' actually means. Most developers will say "when a piece of software / code is slow, you have a performance problem". But is that actually the case? If I write a Linq query which will aggregate, group and sort 5 million rows from several tables to produce a resultset of 10 rows, it might take more than a couple of milliseconds before that resultset is ready to be consumed by other logic. 
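As a rough illustration, such a query might look like the sketch below. The entity sets and field names here are hypothetical; the only assumption is a Linq-enabled O/R mapper exposing IQueryable<T> sources, such as the metaData object used later in this post.

    // Hypothetical: top 10 countries by order value, aggregated over millions of rows on the database server.
    var top10Countries = (from o in metaData.Orders
                          join c in metaData.Customers on o.CustomerId equals c.CustomerId
                          group o by c.Country into g
                          orderby g.Sum(x => x.TotalAmount) descending
                          select new { Country = g.Key, Total = g.Sum(x => x.TotalAmount) })
                         .Take(10)
                         .ToList(); // 10 rows out, millions of rows read and aggregated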
If I look solely at the Linq query and the code consuming the resultset of the 10 rows, and then look at the time it takes to complete the whole procedure, it will appear to me to be slow: all that time taken to produce and consume 10 rows? But if you look closer, if you analyze and interpret the situation, you'll see it does a tremendous amount of work, and in that light it might even be extremely fast. With every performance problem you encounter, always realize that what you're trying to solve is perhaps not a technical problem at all, but a perception problem. The second most important rule you have to understand is based on the old saying "Penny wise, Pound Foolish": the part which takes e.g. 5% of the total time T for a given task isn't worth optimizing if you have another part which takes a much larger share of the total time T for that same given task. Optimizing parts which are relatively insignificant for the total time taken is not going to bring you better results overall, even if you totally optimize that part away. This is the core reason why analysis of the complete set of application parts which participate in a given task is key to being successful in solving performance problems: No analysis -> no problem -> no solution. One warning up front: hunting for performance will always include making compromises. Fast software can be made maintainable, but if you want to squeeze as much performance as possible out of your software, you will inevitably be faced with the dilemma of compromising one or more from the group {readability, maintainability, features} for the extra performance you think you'll gain. It's then up to you to decide whether it's worth it. In almost all cases it's not. The reason for this is simple: the vast majority of performance problems can be solved by implementing the proper algorithms, the ones with proven Big O characteristics, so you know the performance you'll get and you know the algorithm will work. The time taken by the code implementing the algorithm is inevitable: you already implemented the best algorithm. You might find some optimizations on the technical level, but in general these are minor. Let's look at the four steps to see how they guide us through the quest to find and fix performance problems. Isolate The first thing you need to do is to isolate the areas in your application which are assumed to be slow. For example, if your application is a web application and a given page is taking several seconds or even minutes to load, it's a good candidate to check out. It's important to start with the isolate step because it allows you to focus on a single code path per area with a clear beginning and end and ignore the rest. The rest of the steps are taken per identified problematic area. Keep in mind that isolation focuses on tasks in an application, not code snippets. A task is something that's started in your application by either another task, the user, or another program, and has a beginning and an end. You can see a task as a piece of functionality offered by your application.  Analyze Once you've determined the problem areas, you have to perform analysis on the code paths of each area, to see where the performance problems occur and which areas are not the problem. 
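Before breaking out a full profiler, a crude timing around the isolated task can already confirm you picked the right area to analyze. A minimal sketch, where the repository call and names are hypothetical stand-ins for whatever starts the task in your application:

    var sw = System.Diagnostics.Stopwatch.StartNew();
    // Hypothetical: the isolated task, from its beginning to its end.
    var orders = orderRepository.GetOrdersForCustomer(customerId);
    sw.Stop();
    System.Diagnostics.Debug.WriteLine("Fetching orders took " + sw.ElapsedMilliseconds + " ms");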
This analysis is a multi-layered effort: an application which uses an O/R mapper typically consists of multiple parts: there's likely some kind of interface (web, webservice, windows etc.), a part which controls the interface and business logic, the O/R mapper part and the RDBMS, all connected with either a network or inter-process connections provided by the OS or other means. Each of these parts, including the connectivity plumbing, eats up a part of the total time it takes to complete a task, e.g. load a webpage with all orders of a given customer X. To understand which parts participate in the task / area we're investigating and how much they contribute to the total time taken to complete the task, analysis of each participating part is essential. Start with the code you wrote which starts the task; analyze the code and track the path it follows through your application. Check what the code does along the way and verify whether it's correct. Analyze whether you have implemented the right algorithms in your code for this particular area. Remember we're looking at one area at a time, which means we're ignoring all other code paths and following just the code path of the current problematic area, from beginning to end and back. Don't dig in and start optimizing at the code level just yet. We're just analyzing. If your analysis reveals big architectural stupidity, it's perhaps a good idea to rethink the architecture at this point. For the rest, we're analyzing, which means we collect data about what could be wrong, for each participating part of the complete application. Reviewing the code you wrote is a good way to get a deeper understanding of what is going on for a given task, but ultimately it lacks precision and an overview of what really happens: humans aren't good code interpreters; computers are. We therefore need to utilize tools to get a deeper understanding of which parts contribute how much time to the total task, triggered by which other parts, and, for example, how many times they are called. There are two different kinds of tools which are necessary: .NET profilers and O/R mapper / RDBMS profilers. .NET profiling .NET profilers (e.g. dotTrace by JetBrains or ANTS by Red Gate Software) show exactly which pieces of code are called, how many times they're called, and the time it took to run that piece of code, at the method level and sometimes even at the line level. The .NET profilers are essential tools for understanding whether the time taken to complete a given task / area in your application is consumed by .NET code, where exactly in your code, the path to that code, and how many times that code was called by other code, and thus they reveal where hotspots are located: the areas where a solution can be found. Importantly, they also reveal which areas can be left alone: remember our penny wise, pound foolish saying: if a profiler reveals that a group of methods is fast, or doesn't contribute much to the total time taken for a given task, ignore those methods. Even if the code in them is complex and looks like a candidate for optimization: you can work all day on that and it won't matter.  As we're focusing on a single area of the application, it's best to start profiling right before you actually activate the task/area. Most .NET profilers support this by starting the application without starting the profiling procedure just yet. 
You navigate to the particular part which is slow, start profiling in the profiler, perform the actions in your application which are considered slow, and afterwards take a snapshot in the profiler. The snapshot contains the data collected by the profiler during the slow action, so most data is produced by code in the area to investigate. This is important, because it allows you to stay focused on a single area. O/R mapper and RDBMS profiling .NET profilers give you good insight into the .NET side of things, but not into the RDBMS side of the application. As this article is about O/R mapper powered applications, we're also looking at databases, and the software making it possible to consume the database in your application: the O/R mapper. To understand how much the O/R mapper and the database contribute to the total time taken for task T, we need different tools. There are two kinds of tools focusing on O/R mapper and database performance profiling: O/R mapper profilers and RDBMS profilers. For O/R mapper profilers, you can look at LLBLGen Prof by Hibernating Rhinos or the Linq to Sql/LLBLGen Pro profiler by Huagati. Hibernating Rhinos also has profilers for other O/R mappers, like NHibernate (NHProf) and Entity Framework (EFProf), which work the same as LLBLGen Prof. For RDBMS profilers, you have to check whether the RDBMS vendor offers one. For example, for SQL Server the profiler is shipped with SQL Server, for Oracle it's built into the RDBMS, and there are also 3rd party tools. Which tool you're using isn't really important; what's important is that you get insight into which queries are executed during the task / area we're currently focused on and how long they took. Here, the O/R mapper profilers have an advantage as they collect the time it took to execute the query from the application's perspective, so they also include the time it took to transport data across the network. This is important because a query which returns a massive resultset, or a resultset with large blob/clob/ntext/image fields, takes more time to get transported across the network than a small resultset, and a database profiler doesn't take this into account most of the time. Another tool to use in this case, which is more low level and which not all O/R mappers support (though LLBLGen Pro and NHibernate do), is tracing: most O/R mappers offer some form of tracing or logging system which you can use to collect the SQL generated and executed, and often also other activity behind the scenes. While tracing can produce a tremendous amount of data in some cases, it also gives insight into what's going on. Interpret After we've completed the analysis step it's time to look at the data we've collected. We've done code reviews to see whether we've done anything stupid, which parts actually take part in the task, and whether the proper algorithms have been implemented. We've done .NET profiling to see which parts are choke points and how much time they contribute to the total time taken to complete the task we're investigating. We've performed O/R mapper profiling and RDBMS profiling to see which queries were executed during the task, how many queries were generated and executed, and how long they took to complete, including network transportation. All this data reveals two things: which parts are big contributors to the total time taken and which parts are irrelevant. Both aspects are very important. The parts which are irrelevant (i.e. 
don't contribute significantly to the total time taken) can be ignored from now on; we won't look at them. The parts which contribute a lot to the total time taken are important to look at. We now first have to look at the .NET profiler results, to see whether the time taken is consumed in our own code, in .NET framework code, in the O/R mapper itself or somewhere else. For example, if most of the time is consumed by DbCommand.ExecuteReader, the time it took to complete the task depends on how long it takes to fetch the data from the database. If there was just 1 query executed, according to tracing or the O/R mapper / RDBMS profilers, check whether that query is optimal, uses indexes or has to deal with a lot of data. Interpret means that you follow the path from beginning to end through the data collected and determine where, along the path, the most time is contributed. It also means that you have to check whether this was expected or is totally unexpected. My previous example of the 10 row resultset of a query which groups millions of rows will likely reveal that a long time is spent inside the database and almost no time is spent in the .NET code, meaning the RDBMS part contributes the most to the total time taken; the rest is, compared to that time, irrelevant. Considering the vastness of the source data set, it's expected this will take some time. However, does it need tweaking? Perhaps all possible tweaks are already in place. In the interpret step you then have to decide whether further action in this area is necessary, based on what the analysis results show: if the results were unexpected and there is room for improvement in the area which contributes the most to the total time taken, action should be taken. If not, you can only accept the situation and move on. In all cases, document your decision together with the analysis you've done. If you decide that the perceived performance problem is actually expected due to the nature of the task performed, it's essential that in the future, when someone else looks at the application and starts asking questions, you can answer them properly, and new analysis is only necessary if the situation has changed. Fix After interpreting the analysis results you've concluded that some areas need adjustment. This is the fix step: you're actively correcting the performance problem with proper action targeted at the real cause. In many cases related to O/R mapper powered applications it means you'll use different features of the O/R mapper to achieve the same goal, or apply optimizations at the RDBMS level. It could also mean you apply caching inside your application (compromising memory consumption for performance) to avoid unnecessarily re-querying data and re-consuming the results. After applying a change, it's key that you re-do the analysis and interpretation steps: compare the results and expectations with what you had before, to see whether your actions had any effect or whether they moved the problem to a different part of the application. Don't fall into the trap of doing partial analysis: do the full analysis again: .NET profiling and O/R mapper / RDBMS profiling. It might very well be that the changes you've made make one part faster but another part significantly slower, in such a way that the overall problem hasn't changed at all. 
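As an illustration of the caching compromise mentioned above, here's a minimal sketch; the Country type, the FetchCountriesFromDatabase() method and the cache key are hypothetical, it requires using System.Collections.Concurrent and System.Collections.Generic, and it deliberately ignores invalidation:

    // A minimal read-through cache for rarely-changing lookup data: it trades memory
    // consumption (and possible staleness) for fewer identical queries.
    public class CountryLookupCache
    {
        private static readonly ConcurrentDictionary<string, List<Country>> _cache =
            new ConcurrentDictionary<string, List<Country>>();

        public List<Country> GetCountries()
        {
            // The value factory only runs when the key isn't cached yet.
            return _cache.GetOrAdd("countries", key => FetchCountriesFromDatabase());
        }

        private List<Country> FetchCountriesFromDatabase()
        {
            // Hypothetical: the real query through the O/R mapper goes here.
            throw new NotImplementedException();
        }
    }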
Performance tuning is dealing with compromises and making choices: to use one feature over the other, to accept a higher memory footprint, to go away from the strict-OO path and execute queries directly onto the RDBMS; these are choices and compromises which will cross your path if you want to fix performance problems with respect to O/R mappers or data-access and databases in general. In most cases it's not a big issue: alternatives are often good choices too and the compromises aren't that hard to deal with. What is important is that you document why you made a choice, a compromise: which analysis data and which interpretation led you to the choice made. This is key for good maintainability in the years to come. Most common performance problems with O/R mappers Below is an incomplete list of common performance problems related to data-access / O/R mapper / RDBMS code. It will help you with fixing the hotspots you found in the interpretation step. SELECT N+1: (Lazy-loading specific). Performance bottlenecks triggered by lazy loading. Consider a list of Orders bound to a grid. You have a field in Order mapped onto a related field, Customer.CompanyName. Showing this column in the grid will make the grid (indirectly) fetch the Customer row for each row. This means that for the single list you'll get not 1 query (for the orders) but 1 + (the number of orders shown) queries. To solve this: use eager loading with a prefetch path to fetch the customers with the orders. SELECT N+1 is easy to spot with an O/R mapper profiler or RDBMS profiler: if you see a lot of identical queries executed at once, you have this problem. Prefetch paths using many path nodes, or sorting, or limiting. Eager loading problem. Prefetch paths can help with performance, but as 1 query is executed per node, it can be that the amount of data fetched in a child node is bigger than you think. Also consider that data in every node is merged on the client within the parent. This is fast, but it can also take some time if you fetch massive amounts of entities. If you keep fetches small, you can use tuning parameters like the ParameterizedPrefetchPathThreshold setting to get more optimal queries. Deep inheritance hierarchies of type Target Per Entity/Type. If you use inheritance of type Target per Entity / Type (each type in the inheritance hierarchy is mapped onto its own table/view), fetches will join subtype and supertype tables in many cases, which can lead to a lot of performance problems if the hierarchy has many types. With this problem, keep inheritance to a minimum if possible, or switch to a hierarchy of type Target Per Hierarchy, which means all entities in the inheritance hierarchy are mapped onto the same table/view. Of course this has its own set of drawbacks, but it's a compromise you might want to take. Fetching massive amounts of data by fetching large lists of entities. LLBLGen Pro supports paging (and limiting the # of rows returned), which is often key to working through large sets of data. Use paging on the RDBMS if possible (so a query is executed which returns only the rows in the page requested). When using paging in a web application, be sure to switch on server-side paging on the datasource control used. In this case, paging on the grid alone is not enough: this can lead to fetching a lot of data which is then loaded into the grid and paged there. Note that analyzing queries for paging could lead to the false assumption that paging doesn't occur, e.g. 
when the query contains a field of type ntext/image/clob/blob and DISTINCT can't be applied while it should have been (e.g. due to a join): the datareader will do DISTINCT filtering on the client. This is a little slower, but it does perform the paging on the datareader, so it won't fetch all rows even if the query suggests it does. Fetching massive amounts of data because blob/clob/ntext/image fields aren't excluded. LLBLGen Pro supports field exclusion for queries. You can exclude fields (also in prefetch paths) per query to avoid fetching all fields of an entity, e.g. when you don't need them for the logic consuming the resultset. Excluding fields can greatly reduce the amount of time spent on data transport across the network. Use this optimization if you see that there's a big difference between the query execution time on the RDBMS and the time reported by the .NET profiler for the ExecuteReader method call. Doing client-side aggregates/scalar calculations by consuming a lot of data. If possible, try to formulate a scalar query or group by query using the projection system or GetScalar functionality of LLBLGen Pro, so the work is done on the RDBMS server. It's far more efficient to process data on the RDBMS server than to first load it all in memory and then traverse the data in-memory to calculate a value. Using .ToList() constructs inside Linq queries. It might be that you use .ToList() somewhere in a Linq query, which makes the query run partially in-memory. Example: var q = from c in metaData.Customers.ToList() where c.Country=="Norway" select c; This will actually fetch all customers in-memory and do the filtering in-memory, as the Linq query is defined on an IEnumerable<T> and not on an IQueryable<T> (a corrected version is sketched after the conclusion below). Linq is nice, but it can often be a bit unclear where some parts of a Linq query might run. Fetching all entities to delete into memory first. To delete a set of entities it's rather inefficient to first fetch them all into memory and then delete them one by one. It's more efficient to execute a DELETE FROM ... WHERE query on the database directly to delete the entities in one go. LLBLGen Pro supports this feature, and so do some other O/R mappers. It's not always possible to do this operation in the context of an O/R mapper however: if an O/R mapper relies on a cache, these kinds of operations are likely not supported, because they make it impossible to track whether an entity is actually removed from the DB and thus can be removed from the cache. Fetching all entities to update with an expression into memory first. Similar to the previous point: it is more efficient to update a set of entities directly with a single UPDATE query using an expression, instead of fetching the entities into memory first, updating them in a loop and afterwards saving them. It might however be a compromise you don't want to take, as it works around the idea of having an object graph in memory which is manipulated, and instead makes the code fully aware that there's an RDBMS somewhere. Conclusion Performance tuning is almost always about compromises and making choices. It's also about knowing where to look and how the systems in play behave and should behave. The four steps I provided should help you stay focused on the real problem and lead you towards the solution. Knowing how to optimally use the systems participating in your own code (.NET framework, O/R mapper, RDBMS, network/services) is key to success, as is knowing what's going on inside the application you built. 
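As promised above, a corrected version of the .ToList() example: keeping the query on the IQueryable<T> source lets the where clause be translated to SQL, so only the Norwegian customers are fetched. A minimal sketch using the same metaData object as the example:

    var q = from c in metaData.Customers   // no .ToList() here, so the query stays an IQueryable<T>
            where c.Country == "Norway"
            select c;
    // The filtered query only hits the database when q is enumerated (or when .ToList() is called on q itself).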
I hope you'll find this guide useful in tracking down performance problems and dealing with them effectively. 

    Read the article

  • Red Gate Coder interviews: Robin Hellen

    - by Michael Williamson
    Robin Hellen is a test engineer here at Red Gate, and is also the latest coder I’ve interviewed. We chatted about debugging code, the roles of software engineers and testers, and why Vala is currently his favourite programming language. How did you get started with programming?It started when I was about six. My dad’s a professional programmer, and he gave me and my sister one of his old computers and taught us a bit about programming. It was an old Amiga 500 with a variant of BASIC. I don’t think I ever successfully completed anything! It was just faffing around. I didn’t really get anywhere with it.But then presumably you did get somewhere with it at some point.At some point. The PC emerged as the dominant platform, and I learnt a bit of Visual Basic. I didn’t really do much, just a couple of quick hacky things. A bit of demo animation. Took me a long time to get anywhere with programming, really.When did you feel like you did start to get somewhere?I think it was when I started doing things for someone else, which was my sister’s final year of university project. She called up my dad two days before she was due to submit, saying “We need something to display a graph!”. Dad says, “I’m too busy, go talk to your brother”. So I hacked up this ugly piece of code, sent it off and they won a prize for that project. Apparently, the graph, the bit that I wrote, was the reason they won a prize! That was when I first felt that I’d actually done something that was worthwhile. That was my first real bit of code, and the ugliest code I’ve ever written. It’s basically an array of pre-drawn line elements that I shifted round the screen to draw a very spikey graph.When did you decide that programming might actually be something that you wanted to do as a career?It’s not really a decision I took, I always wanted to do something with computers. And I had to take a gap year for uni, so I was looking for twelve month internships. I applied to Red Gate, and they gave me a job as a tester. And that’s where I really started having to write code well. To a better standard that I had been up to that point.How did you find coming to Red Gate and working with other coders?I thought it was really nice. I learnt so much just from other people around. I think one of the things that’s really great is that people are just willing to help you learn. Instead of “Don’t you know that, you’re so stupid”, it’s “You can just do it this way”.If you could go back to the very start of that internship, is there something that you would tell yourself?Write shorter code. I have a tendency to write massive, many-thousand line files that I break out of right at the end. And then half-way through a project I’m doing something, I think “Where did I write that bit that does that thing?”, and it’s almost impossible to find. I wrote some horrendous code when I started. Just that principle, just keep things short. Even if looks a bit crazy to be jumping around all over the place all of the time, it’s actually a lot more understandable.And how do you hold yourself to that?Generally, if a function’s going off my screen, it’s probably too long. That’s what I tell myself, and within the team here we have code reviews, so the guys I’m with at the moment are pretty good at pulling me up on, “Doesn’t that look like it’s getting a bit long?”. It’s more just the subjective standard of readability than anything.So you’re an advocate of code review?Yes, definitely. Both to spot errors that you might have made, and to improve your knowledge. 
The person you’re reviewing will say “Oh, you could have done it that way”. That’s how we learn, by talking to others, and also just sharing knowledge of how your project works around the team, or even outside the team. Definitely a very firm advocate of code reviews.Do you think there’s more we could do with them?I don’t know. We’re struggling with how to add them as part of the process without it becoming too cumbersome. We’ve experimented with a few different ways, and we’ve not found anything that just works.To get more into the nitty gritty: how do you like to debug code?The first thing is to do it in my head. I’ll actually think what piece of code is likely to have caused that error, and take a quick look at it, just to see if there’s anything glaringly obvious there. The next thing I’ll probably do is throw in print statements, or throw some exceptions from various points, just to check: is it going through the code path I expect it to? A last resort is to actually debug code using a debugger.Why is the debugger the last resort?Probably because of the environments I learnt programming in. VB and early BASIC didn’t have much of a debugger, the only way to find out what your program was doing was to add print statements. Also, because a lot of the stuff I tend to work with is non-interactive, if it’s something that takes a long time to run, I can throw in the print statements, set a run off, go and do something else, and look at it again later, rather than trying to remember what happened at that point when I was debugging through it. So it also gives me the record of what happens. I hate just sitting there pressing F5, F5, continually. If you’re having to find out what your code is doing at each line, you’ve probably got a very wrong mental model of what your code’s doing, and you can find that out just as easily by inspecting a couple of values through the print statements.If I were on some codebase that you were also working on, what should I do to make it as easy as possible to understand?I’d say short and well-named methods. The one thing I like to do when I’m looking at code is to find out where a value comes from, and the more layers of indirection there are, particularly DI [dependency injection] frameworks, the harder it is to find out where something’s come from. I really hate that. I want to know if the value come from the user here or is a constant here, and if I can’t find that out, that makes code very hard to understand for me.As a tester, where do you think the split should lie between software engineers and testers?I think the split is less on areas of the code you write and more what you’re designing and creating. The developers put a structure on the code, while my major role is to say which tests we should have, whether we should test that, or it’s not worth testing that because it’s a tiny function in code that nobody’s ever actually going to see. So it’s not a split in the code, it’s a split in what you’re thinking about. Saying what code we should write, but alternatively what code we should take out.In your experience, do the software engineers tend to do much testing themselves?They tend to control the lowest layer of tests. And, depending on how the balance of people is in the team, they might write some of the higher levels of test. Or that might go to the testers. 
I’m the only tester on my team with three other developers, so they’ll be writing quite a lot of the actual test code, with input from me as to whether we should test that functionality, whereas on other teams, where it’s been more equal numbers, the testers have written pretty much all of the high level tests, just because that’s the best use of resource.If you could shuffle resources around however you liked, do you think that the developers should be writing those high-level tests?I think they should be writing them occasionally. It helps when they have an understanding of how testing code works and possibly what assumptions we’ve made in tests, and they can say “actually, it doesn’t work like that under the hood so you’ve missed this whole area”. It’s one of those agile things that everyone on the team should be at least comfortable doing the various jobs. So if the developers can write test code then I think that’s a very good thing.So you think testers should be able to write production code?Yes, although given most testers skills at coding, I wouldn’t advise it too much! I have written a few things, and I did make a few changes that have actually gone into our production code base. They’re not necessarily running every time but they are there. I think having that mix of skill sets is really useful. In some ways we’re using our own product to test itself, so being able to make those changes where it’s not working saves me a round-trip through the developers. It can be really annoying if the developers have no time to make a change, and I can’t touch the code.If the software engineers are consistently writing tests at all levels, what role do you think the role of a tester is?I think on a team like that, those distinctions aren’t quite so useful. There’ll be two cases. There’s either the case where the developers think they’ve written good tests, but you still need someone with a test engineer mind-set to go through the tests and validate that it’s a useful set, or the correct set for that code. Or they won’t actually be pure developers, they’ll have that mix of test ability in there.I think having slightly more distinct roles is useful. When it starts to blur, then you lose that view of the tests as a whole. The tester job is not to create tests, it’s to validate the quality of the product, and you don’t do that just by writing tests. There’s more things you’ve got to keep in your mind. And I think when you blur the roles, you start to lose that end of the tester.So because you’re working on those features, you lose that holistic view of the whole system?Yeah, and anyone who’s worked on the feature shouldn’t be testing it. You always need to have it tested it by someone who didn’t write it. Otherwise you’re a bit too close and you assume “yes, people will only use it that way”, but the tester will come along and go “how do people use this? How would our most idiotic user use this?”. I might not test that because it might be completely irrelevant. But it’s coming in and trying to have a different set of assumptions.Are you a believer that it should all be automated if possible?Not entirely. So an automated test is always better than a manual test for the long-term, but there’s still nothing that beats a human sitting in front of the application and thinking “What could I do at this point?”. The automated test is very good but they follow that strict path, and they never check anything off the path. 
The human tester will look at things that they weren’t expecting, whereas the automated test can only ever go “Is that value correct?” in many respects, and it won’t notice that on the other side of the screen you’re showing something completely wrong. And that value might have been checked independently, but you always find a few odd interactions when you’re going through something manually, and you always need to go through something manually to start with anyway, otherwise you won’t know where the important bits to write your automation are.When you’re doing that manual testing, do you think it’s important to do that across the entire product, or just the bits that you’ve touched recently?I think it’s important to do it mostly on the bits you’ve touched, but you can’t ignore the rest of the product. Unless you’re dealing with a very, very self-contained bit, you’re almost always encounter other bits of the product along the way. Most testers I know, even if they are looking at just one path, they’ll keep open and move around a bit anyway, just because they want to find something that’s broken. If we find that your path is right, we’ll go out and hunt something else.How do you think this fits into the idea of continuously deploying, so long as the tests pass?With deploying a website it’s a bit different because you can always pull it back. If you’re deploying an application to customers, when you’ve released it, it’s out there, you can’t pull it back. Someone’s going to keep it, no matter how hard you try there will be a few installations that stay around. So I’d always have at least a human element on that path. With websites, you could probably automate straight out, or at least straight out to an internal environment or a single server in a cloud of fifty that will serve some people. But I don’t think you should release to everyone just on automated tests passing.You’ve already mentioned using BASIC and C# — are there any other languages that you’ve used?I’ve used a few. That’s something that has changed more recently, I’ve become familiar with more languages. Before I started at Red Gate I learnt a bit of C. Then last year, I taught myself Python which I actually really enjoyed using. I’ve also come across another language called Vala, which is sort of a C#-like language. It’s basically a pre-processor for C, but it has very nice syntax. I think that’s currently my favourite language.Any particular reason for trying Vala?I have a completely Linux environment at home, and I’ve been looking for a nice language, and C# just doesn’t cut it because I won’t touch Mono. So, I was looking for something like C# but that was useable in an open source environment, and Vala’s what I found. C#’s got a few features that Vala doesn’t, and Vala’s got a few features where I think “It would be awesome if C# had that”.What are some of the features that it’s missing?Extension methods. And I think that’s the only one that really bugs me. I like to use them when I’m writing C# because it makes some things really easy, especially with libraries that you can’t touch the internals of. It doesn’t have method overloading, which is sometimes annoying.Where it does win over C#?Everything is non-nullable by default, you never have to check that something’s unexpectedly null.Also, Vala has code contracts. 
This is starting to come in C# 4, but the way it works in Vala is that you specify requirements in short phrases as part of your function signature and they stick to the signature, so that when you inherit it, it has exactly the same code contract as the base one, or when you inherit from an interface, you have to match the signature exactly. Just using those makes you think a bit more about how you’re writing your method, it’s not an afterthought when you’ve got contracts from base classes given to you, you can’t change it. Which I think is a lot nicer than the way C# handles it. When are those actually checked?They’re checked both at compile and run-time. The compile-time checking isn’t very strong yet, it’s quite a new feature in the compiler, and because it compiles down to C, you can write C code and interface with your methods, so you can bypass that compile-time check anyway. So there’s an extra runtime check, and if you violate one of the contracts at runtime, it’s game over for your program, there’s no exception to catch, it’s just goodbye!One thing I dislike about C# is the exceptions. You write a bit of code and fifty exceptions could come from any point in your ten lines, and you can’t mentally model how those exceptions are going to come out, and you can’t even predict them based on the functions you’re calling, because if you’ve accidentally got a derived class there instead of a base class, that can throw a completely different set of exceptions. So I’ve got no way of mentally modelling those, whereas in Vala they’re checked like Java, so you know only these exceptions can come out. You know in advance the error conditions.I think Raymond Chen on Old New Thing says “the only thing you know when you throw an exception is that you’re in an invalid state somewhere in your program, so just kill it and be done with it!”You said you’ve also learnt bits of Python. How did you find that compared to Vala and C#?Very different because of the dynamic typing. I’ve been writing a website for my own use. I’m quite into photography, so I take photos off my camera, post-process them, dump them in a file, and I get a webpage with all my thumbnails. So sort of like Picassa, but written by myself because I wanted something to learn Python with. There are some things that are really nice, I just found it really difficult to cope with the fact that I’m not quite sure what this object type that I’m passed is, I might not ever be sure, so it can randomly blow up on me. But once I train myself to ignore that and just say “well, I’m fairly sure it’s going to be something that looks like this, so I’ll use it like this”, then it’s quite nice.Any particular features that you’ve appreciated?I don’t like any particular feature, it’s just very straightforward to work with. It’s very quick to write something in, particularly as you don’t have to worry that you’ve changed something that affects a different part of the program. If you have, then that part blows up, but I can get this part working right now.If you were doing a big project, would you be willing to do it in Python rather than C# or Vala?I think I might be willing to try something bigger or long term with Python. We’re currently doing an ASP.NET MVC project on C#, and I don’t like the amount of reflection. There’s a lot of magic that pulls values out, and it’s all done under the scenes. 
It’s almost managed to put a dynamic type system on top of C#, which in many ways destroys the language to me, whereas if you’re already in a dynamic language, having things done dynamically is much more natural. In many ways, you get the worst of both worlds. I think for web projects, I would go with Python again, whereas for anything desktop, command-line or GUI-based, I’d probably go for C# or Vala, depending on what environment I’m in.It’s the fact that you can gain from the strong typing in ways that you can’t so much on the web app. Or, in a web app, you have to use dynamic typing at some point, or you have to write a hell of a lot of boilerplate, and I’d rather use the dynamic typing than write the boilerplate.What do you think separates great programmers from everyone else?Probably design choices. Choosing to write it a piece of code one way or another. For any given program you ask me to write, I could probably do it five thousand ways. A programmer who is capable will see four or five of them, and choose one of the better ones. The excellent programmer will see the largest proportion and manage to pick the best one very quickly without having to think too much about it. I think that’s probably what separates, is the speed at which they can see what’s the best path to write the program in. More Red Gater Coder interviews

    Read the article

  • Syncing Data with a Server using Silverlight and HTTP Polling Duplex

    - by dwahlin
    Many applications have the need to stay in-sync with data provided by a service. Although web applications typically rely on standard polling techniques to check if data has changed, Silverlight provides several interesting options for keeping an application in-sync that rely on server “push” technologies. A few years back I wrote several blog posts covering different “push” technologies available in Silverlight that rely on sockets or HTTP Polling Duplex. We recently had a project that looked like it could benefit from pushing data from a server to one or more clients so I thought I’d revisit the subject and provide some updates to the original code posted. If you’ve worked with AJAX before in Web applications then you know that until browsers fully support web sockets or other duplex (bi-directional communication) technologies that it’s difficult to keep applications in-sync with a server without relying on polling. The problem with polling is that you have to check for changes on the server on a timed-basis which can often be wasteful and take up unnecessary resources. With server “push” technologies, data can be pushed from the server to the client as it changes. Once the data is received, the client can update the user interface as appropriate. Using “push” technologies allows the client to listen for changes from the data but stay 100% focused on client activities as opposed to worrying about polling and asking the server if anything has changed. Silverlight provides several options for pushing data from a server to a client including sockets, TCP bindings and HTTP Polling Duplex.  Each has its own strengths and weaknesses as far as performance and setup work with HTTP Polling Duplex arguably being the easiest to setup and get going.  In this article I’ll demonstrate how HTTP Polling Duplex can be used in Silverlight 4 applications to push data and show how you can create a WCF server that provides an HTTP Polling Duplex binding that a Silverlight client can consume.   What is HTTP Polling Duplex? Technologies that allow data to be pushed from a server to a client rely on duplex functionality. Duplex (or bi-directional) communication allows data to be passed in both directions.  A client can call a service and the server can call the client. HTTP Polling Duplex (as its name implies) allows a server to communicate with a client without forcing the client to constantly poll the server. It has the benefit of being able to run on port 80 making setup a breeze compared to the other options which require specific ports to be used and cross-domain policy files to be exposed on port 943 (as with sockets and TCP bindings). Having said that, if you’re looking for the best speed possible then sockets and TCP bindings are the way to go. But, they’re not the only game in town when it comes to duplex communication. The first time I heard about HTTP Polling Duplex (initially available in Silverlight 2) I wasn’t exactly sure how it was any better than standard polling used in AJAX applications. I read the Silverlight SDK, looked at various resources and generally found the following definition unhelpful as far as understanding the actual benefits that HTTP Polling Duplex provided: "The Silverlight client periodically polls the service on the network layer, and checks for any new messages that the service wants to send on the callback channel. The service queues all messages sent on the client callback channel and delivers them to the client when the client polls the service." 
Although the previous definition explained the overall process, it sounded as if standard polling was used. Fortunately, Microsoft’s Scott Guthrie provided me with a more clear definition several years back that explains the benefits provided by HTTP Polling Duplex quite well (used with his permission): "The [HTTP Polling Duplex] duplex support does use polling in the background to implement notifications – although the way it does it is different than manual polling. It initiates a network request, and then the request is effectively “put to sleep” waiting for the server to respond (it doesn’t come back immediately). The server then keeps the connection open but not active until it has something to send back (or the connection times out after 90 seconds – at which point the duplex client will connect again and wait). This way you are avoiding hitting the server repeatedly – but still get an immediate response when there is data to send." After hearing Scott’s definition the light bulb went on and it all made sense. A client makes a request to a server to check for changes, but instead of the request returning immediately, it parks itself on the server and waits for data. It’s kind of like waiting to pick up a pizza at the store. Instead of calling the store over and over to check the status, you sit in the store and wait until the pizza (the request data) is ready. Once it’s ready you take it back home (to the client). This technique provides a lot of efficiency gains over standard polling techniques even though it does use some polling of its own as a request is initially made from a client to a server. So how do you implement HTTP Polling Duplex in your Silverlight applications? Let’s take a look at the process by starting with the server. Creating an HTTP Polling Duplex WCF Service Creating a WCF service that exposes an HTTP Polling Duplex binding is straightforward as far as coding goes. Add some one way operations into an interface, create a client callback interface and you’re ready to go. The most challenging part comes into play when configuring the service to properly support the necessary binding and that’s more of a cut and paste operation once you know the configuration code to use. To create an HTTP Polling Duplex service you’ll need to expose server-side and client-side interfaces and reference the System.ServiceModel.PollingDuplex assembly (located at C:\Program Files (x86)\Microsoft SDKs\Silverlight\v4.0\Libraries\Server on my machine) in the server project. For the demo application I upgraded a basketball simulation service to support the latest polling duplex assemblies. The service simulates a simple basketball game using a Game class and pushes information about the game such as score, fouls, shots and more to the client as the game changes over time. Before jumping too far into the game push service, it’s important to discuss two interfaces used by the service to communicate in a bi-directional manner. The first is called IGameStreamService and defines the methods/operations that the client can call on the server (see Listing 1). The second is IGameStreamClient which defines the callback methods that a server can use to communicate with a client (see Listing 2).   [ServiceContract(Namespace = "Silverlight", CallbackContract = typeof(IGameStreamClient))] public interface IGameStreamService { [OperationContract(IsOneWay = true)] void GetTeamData(); } Listing 1. The IGameStreamService interface defines server operations that can be called on the server.   
[ServiceContract] public interface IGameStreamClient { [OperationContract(IsOneWay = true)] void ReceiveTeamData(List<Team> teamData); [OperationContract(IsOneWay = true, AsyncPattern=true)] IAsyncResult BeginReceiveGameData(GameData gameData, AsyncCallback callback, object state); void EndReceiveGameData(IAsyncResult result); } Listing 2. The IGameStreamClient interface defines client operations that a server can call.   The IGameStreamService interface is decorated with the standard ServiceContract attribute but also contains a value for the CallbackContract property.  This property is used to define the interface that the client will expose (IGameStreamClient in this example) and use to receive data pushed from the service. Notice that each OperationContract attribute in both interfaces sets the IsOneWay property to true. This means that the operation can be called and passed data as appropriate; however, no data will be passed back. Instead, data will be pushed back to the client as it's available.  Looking through the IGameStreamService interface you can see that the client can request team data, whereas the IGameStreamClient interface allows team and game data to be received by the client. One interesting point about the IGameStreamClient interface is the inclusion of the AsyncPattern property on the BeginReceiveGameData operation. I initially created this operation as a standard one-way operation and it worked most of the time. However, as I disconnected clients and reconnected new ones, game data wasn't being passed properly. After researching the problem more I realized that because the service could take up to 7 seconds to return game data, things were getting hung up. By setting the AsyncPattern property to true on the BeginReceiveGameData operation and providing a corresponding EndReceiveGameData operation, I was able to get around this problem and get everything running properly. I'll provide more details on the implementation of these two methods later in this post. Once the interfaces were created I moved on to the game service class. The first order of business was to create a class that implemented the IGameStreamService interface. Since the service can be used by multiple clients wanting game data, I added the ServiceBehavior attribute to the class definition so that I could set its InstanceContextMode to InstanceContextMode.Single (in effect creating a singleton service object). Listing 3 shows the game service class as well as its fields and constructor.   [ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple, InstanceContextMode = InstanceContextMode.Single)] public class GameStreamService : IGameStreamService { object _Key = new object(); Game _Game = null; Timer _Timer = null; Random _Random = null; Dictionary<string, IGameStreamClient> _ClientCallbacks = new Dictionary<string, IGameStreamClient>(); static AsyncCallback _ReceiveGameDataCompleted = new AsyncCallback(ReceiveGameDataCompleted); public GameStreamService() { _Game = new Game(); _Timer = new Timer { Enabled = false, Interval = 2000, AutoReset = true }; _Timer.Elapsed += new ElapsedEventHandler(_Timer_Elapsed); _Timer.Start(); _Random = new Random(); }} Listing 3. The GameStreamService class implements the IGameStreamService interface, which defines a callback contract that allows the service class to push data back to the client. 
By implementing the IGameStreamService interface, GameStreamService must supply a GetTeamData() method which is responsible for supplying information about the teams that are playing as well as individual players.  GetTeamData() also acts as a client subscription method that tracks clients wanting to receive game data.  Listing 4 shows the GetTeamData() method. public void GetTeamData() { //Get client callback channel var context = OperationContext.Current; var sessionID = context.SessionId; var currClient = context.GetCallbackChannel<IGameStreamClient>(); context.Channel.Faulted += Disconnect; context.Channel.Closed += Disconnect; IGameStreamClient client; if (!_ClientCallbacks.TryGetValue(sessionID, out client)) { lock (_Key) { _ClientCallbacks[sessionID] = currClient; } } currClient.ReceiveTeamData(_Game.GetTeamData()); //Start timer which when fired sends updated score information to client if (!_Timer.Enabled) { _Timer.Enabled = true; } } Listing 4. The GetTeamData() method subscribes a given client to the game service and pushes team data back to it. The key line of code in the GetTeamData() method is the call to GetCallbackChannel<IGameStreamClient>().  This method is responsible for accessing the calling client's callback channel. The callback channel is defined by the IGameStreamClient interface shown earlier in Listing 2 and used by the server to communicate with the client. Before passing team data back to the client, GetTeamData() grabs the client's session ID and checks if it already exists in the _ClientCallbacks dictionary object used to track clients wanting callbacks from the server. If the client doesn't exist it adds it into the collection. It then pushes team data from the Game class back to the client by calling ReceiveTeamData().  Since the service simulates a basketball game, a timer is then started (if it's not already enabled) which is used to randomly send data to the client. When the timer fires, game data is pushed down to the client. Listing 5 shows the _Timer_Elapsed() method that is called when the timer fires as well as the SendGameData() method used to send data to the client. void _Timer_Elapsed(object sender, ElapsedEventArgs e) { int interval = _Random.Next(3000, 7000); lock (_Key) { _Timer.Interval = interval; _Timer.Enabled = false; } SendGameData(_Game.GetGameData()); } private void SendGameData(GameData gameData) { var cbs = _ClientCallbacks.Where(cb => ((IContextChannel)cb.Value).State == CommunicationState.Opened); for (int i = 0; i < cbs.Count(); i++) { var cb = cbs.ElementAt(i).Value; try { cb.BeginReceiveGameData(gameData, _ReceiveGameDataCompleted, cb); } catch (TimeoutException texp) { //Log timeout error } catch (CommunicationException cexp) { //Log communication error } } lock (_Key) _Timer.Enabled = true; } private static void ReceiveGameDataCompleted(IAsyncResult result) { try { ((IGameStreamClient)(result.AsyncState)).EndReceiveGameData(result); } catch (CommunicationException) { // empty } catch (TimeoutException) { // empty } } Listing 5. _Timer_Elapsed() is used to simulate time in a basketball game. When _Timer_Elapsed() fires, the SendGameData() method is called, which iterates through the clients wanting to be notified of changes. As each client is identified, its BeginReceiveGameData() method is called, which ultimately pushes game data down to the client. Recall that this method was defined in the client callback interface named IGameStreamClient shown earlier in Listing 2. 
Notice that BeginReceiveGameData() accepts _ReceiveGameDataCompleted as its second parameter (an AsyncCallback delegate defined in the service class) and passes the client callback as the third parameter. The initial version of the sample application had a standard ReceiveGameData() method in the client callback interface. However, sometimes the client callbacks would work properly and sometimes they wouldn’t which was a little baffling at first glance. After some investigation I realized that I needed to implement an asynchronous pattern for client callbacks to work properly since 3 – 7 second delays are occurring as a result of the timer. Once I added the BeginReceiveGameData() and ReceiveGameDataCompleted() methods everything worked properly since each call was handled in an asynchronous manner. The final task that had to be completed to get the server working properly with HTTP Polling Duplex was adding configuration code into web.config. In the interest of brevity I won’t post all of the code here since the sample application includes everything you need. However, Listing 6 shows the key configuration code to handle creating a custom binding named pollingDuplexBinding and associate it with the service’s endpoint.   <bindings> <customBinding> <binding name="pollingDuplexBinding"> <binaryMessageEncoding /> <pollingDuplex maxPendingSessions="2147483647" maxPendingMessagesPerSession="2147483647" inactivityTimeout="02:00:00" serverPollTimeout="00:05:00"/> <httpTransport /> </binding> </customBinding> </bindings> <services> <service name="GameService.GameStreamService" behaviorConfiguration="GameStreamServiceBehavior"> <endpoint address="" binding="customBinding" bindingConfiguration="pollingDuplexBinding" contract="GameService.IGameStreamService"/> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" /> </service> </services>   Listing 6. Configuring an HTTP Polling Duplex binding in web.config and associating an endpoint with it. Calling the Service and Receiving “Pushed” Data Calling the service and handling data that is pushed from the server is a simple and straightforward process in Silverlight. Since the service is configured with a MEX endpoint and exposes a WSDL file, you can right-click on the Silverlight project and select the standard Add Service Reference item. After the web service proxy is created you may notice that the ServiceReferences.ClientConfig file only contains an empty configuration element instead of the normal configuration elements created when creating a standard WCF proxy. You can certainly update the file if you want to read from it at runtime but for the sample application I fed the service URI directly to the service proxy as shown next: var address = new EndpointAddress("http://localhost.:5661/GameStreamService.svc"); var binding = new PollingDuplexHttpBinding(); _Proxy = new GameStreamServiceClient(binding, address); _Proxy.ReceiveTeamDataReceived += _Proxy_ReceiveTeamDataReceived; _Proxy.ReceiveGameDataReceived += _Proxy_ReceiveGameDataReceived; _Proxy.GetTeamDataAsync(); This code creates the proxy and passes the endpoint address and binding to use to its constructor. It then wires the different receive events to callback methods and calls GetTeamDataAsync().  Calling GetTeamDataAsync() causes the server to store the client in the server-side dictionary collection mentioned earlier so that it can receive data that is pushed.  
As the server-side timer fires and game data is pushed to the client, the user interface is updated as shown in Listing 7. Listing 8 shows the _Proxy_ReceiveGameDataReceived() method responsible for handling the data and calling UpdateGameData() to process it.

Listing 7. The Silverlight interface. Game data is pushed from the server to the client using HTTP Polling Duplex.

void _Proxy_ReceiveGameDataReceived(object sender, ReceiveGameDataReceivedEventArgs e)
{
    UpdateGameData(e.gameData);
}

private void UpdateGameData(GameData gameData)
{
    //Update Score
    this.tbTeam1Score.Text = gameData.Team1Score.ToString();
    this.tbTeam2Score.Text = gameData.Team2Score.ToString();

    //Update ball visibility
    if (gameData.Action != ActionsEnum.Foul)
    {
        if (tbTeam1.Text == gameData.TeamOnOffense)
        {
            AnimateBall(this.BB1, this.BB2);
        }
        else //Team 2
        {
            AnimateBall(this.BB2, this.BB1);
        }
    }

    if (this.lbActions.Items.Count > 9) this.lbActions.Items.Clear();
    this.lbActions.Items.Add(gameData.LastAction);
    if (this.lbActions.Visibility == Visibility.Collapsed) this.lbActions.Visibility = Visibility.Visible;
}

private void AnimateBall(Image onBall, Image offBall)
{
    this.FadeIn.Stop();
    Storyboard.SetTarget(this.FadeInAnimation, onBall);
    Storyboard.SetTarget(this.FadeOutAnimation, offBall);
    this.FadeIn.Begin();
}

Listing 8. As the server pushes game data, the client's _Proxy_ReceiveGameDataReceived() method is called to process the data.

In a real-life application I'd go with a ViewModel class to handle retrieving team data, set up data bindings and handle data that is pushed from the server. However, for the sample application I wanted to focus on HTTP Polling Duplex and keep things as simple as possible.

Summary

Silverlight supports three options when duplex communication is required in an application: TCP bindings, sockets and HTTP Polling Duplex. In this post you've seen how HTTP Polling Duplex interfaces can be created and implemented on the server as well as how they can be consumed by a Silverlight client. HTTP Polling Duplex provides a nice way to "push" data from a server while still allowing the data to flow over port 80 or another port of your choice.

Sample Application Download

    Read the article

  • XNA Screen Manager problem with transitions

    - by NexAddo
I'm having issues using the game state management example in the game I am developing. I have no issues with my first three screens transitioning between one another. I have a main menu screen, a splash screen and a high score screen that cycle: mainMenuScreen->splashScreen->highScoreScreen->mainMenuScreen. The screens change every 15 seconds.

Transition times:

public MainMenuScreen()
{
    TransitionOnTime = TimeSpan.FromSeconds(0.5);
    TransitionOffTime = TimeSpan.FromSeconds(0.0);
    currentCreditAmount = Global.CurrentCredits;
}

public SplashScreen()
{
    TransitionOnTime = TimeSpan.FromSeconds(0.5);
    TransitionOffTime = TimeSpan.FromSeconds(0.5);
}

public HighScoreScreen()
{
    TransitionOnTime = TimeSpan.FromSeconds(0.5);
    TransitionOffTime = TimeSpan.FromSeconds(0.5);
}

public GamePlayScreen()
{
    TransitionOnTime = TimeSpan.FromSeconds(0.5);
    TransitionOffTime = TimeSpan.FromSeconds(0.5);
}

When a user inserts credits they can play the game after pressing start:

mainMenuScreen->splashScreen->highScoreScreen->(loops forever)
      ||  ||  ||
  ===========Credits In=============
      ||
    Start
      ||
      \/
  LoadingScreen
      ||
    Start
      ||
      \/
  GamePlayScreen

During each of these transitions between screens, the same code is used, which exits (removes) all current active screens while respecting transitions, then adds the new screen to the screen manager:

foreach (GameScreen screen in ScreenManager.GetScreens())
    screen.ExitScreen();

//AddScreen takes a new screen to manage and the controlling player
ScreenManager.AddScreen(new NameOfScreenHere(), null);

Each screen is removed from the ScreenManager with ExitScreen(), and using this function each screen transition is respected. The problem I am having is with my GamePlayScreen. When the current game is finished and the transition is complete for the GamePlayScreen, it should be removed and the next screens should be added to the ScreenManager.

GamePlayScreen code snippet:

private void FinishCurrentGame()
{
    AudioManager.StopSounds();
    this.UnloadContent();
    if (Global.SaveDevice.IsReady)
        Stats.Save();
    if (HighScoreScreen.IsInHighscores(timeLimit))
    {
        foreach (GameScreen screen in ScreenManager.GetScreens())
            screen.ExitScreen();
        Global.TimeRemaining = timeLimit;
        ScreenManager.AddScreen(new BackgroundScreen(), null);
        ScreenManager.AddScreen(new MessageBoxScreen("Enter your Initials", true), null);
    }
    else
    {
        foreach (GameScreen screen in ScreenManager.GetScreens())
            screen.ExitScreen();
        ScreenManager.AddScreen(new BackgroundScreen(), null);
        ScreenManager.AddScreen(new MainMenuScreen(), null);
    }
}

The problem is that when isExiting is set to true by screen.ExitScreen() for the GamePlayScreen, the transition never completes and the screen is never removed from the ScreenManager. Every other screen that I add and remove with the same technique fully transitions On/Off and is removed at the appropriate time from the ScreenManager, but not my GamePlayScreen. Has anyone that has used the GameStateManagement example experienced this issue, or can someone see the mistake I am making?

EDIT This is what I tracked down. When the game is done, I call

foreach (GameScreen screen in ScreenManager.GetScreens())
    screen.ExitScreen();

to start the transition-off process for the gameplay screen. At this point there is only 1 screen on the ScreenManager stack. The gameplay screen gets isExiting set to true and starts to transition off.
Right after the above call to ExitScreen() I add a background screen and menu screen to the screenManager: ScreenManager.AddScreen(new background(), null); ScreenManager.AddScreen(new Menu(), null); The count of the ScreenManager is now 3. What I noticed while stepping through the updates for GameScreen and ScreenManager, the gameplay screen never gets to the point where the transistion process finishes so the ScreenManager can remove it from the stack. This anomaly does not happen to any of my other screens when I switch between them. Screen Manager Code #region File Description //----------------------------------------------------------------------------- // ScreenManager.cs // // Microsoft XNA Community Game Platform // Copyright (C) Microsoft Corporation. All rights reserved. //----------------------------------------------------------------------------- #endregion #define DEMO #region Using Statements using System; using System.Diagnostics; using System.Collections.Generic; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Content; using Microsoft.Xna.Framework.Graphics; using PerformanceUtility.GameDebugTools; #endregion namespace GameStateManagement { /// <summary> /// The screen manager is a component which manages one or more GameScreen /// instances. It maintains a stack of screens, calls their Update and Draw /// methods at the appropriate times, and automatically routes input to the /// topmost active screen. /// </summary> public class ScreenManager : DrawableGameComponent { #region Fields List<GameScreen> screens = new List<GameScreen>(); List<GameScreen> screensToUpdate = new List<GameScreen>(); InputState input = new InputState(); SpriteBatch spriteBatch; SpriteFont font; Texture2D blankTexture; bool isInitialized; bool getOut; bool traceEnabled; #if DEBUG DebugSystem debugSystem; Stopwatch stopwatch = new Stopwatch(); bool debugTextEnabled; #endif #endregion #region Properties /// <summary> /// A default SpriteBatch shared by all the screens. This saves /// each screen having to bother creating their own local instance. /// </summary> public SpriteBatch SpriteBatch { get { return spriteBatch; } } /// <summary> /// A default font shared by all the screens. This saves /// each screen having to bother loading their own local copy. /// </summary> public SpriteFont Font { get { return font; } } public Rectangle ScreenRectangle { get { return new Rectangle(0, 0, GraphicsDevice.Viewport.Width, GraphicsDevice.Viewport.Height); } } /// <summary> /// If true, the manager prints out a list of all the screens /// each time it is updated. This can be useful for making sure /// everything is being added and removed at the right times. /// </summary> public bool TraceEnabled { get { return traceEnabled; } set { traceEnabled = value; } } #if DEBUG public bool DebugTextEnabled { get { return debugTextEnabled; } set { debugTextEnabled = value; } } public DebugSystem DebugSystem { get { return debugSystem; } } #endif #endregion #region Initialization /// <summary> /// Constructs a new screen manager component. /// </summary> public ScreenManager(Game game) : base(game) { // we must set EnabledGestures before we can query for them, but // we don't assume the game wants to read them. //TouchPanel.EnabledGestures = GestureType.None; } /// <summary> /// Initializes the screen manager component. 
/// </summary> public override void Initialize() { base.Initialize(); #if DEBUG debugSystem = DebugSystem.Initialize(Game, "Fonts/MenuFont"); #endif isInitialized = true; } /// <summary> /// Load your graphics content. /// </summary> protected override void LoadContent() { // Load content belonging to the screen manager. ContentManager content = Game.Content; spriteBatch = new SpriteBatch(GraphicsDevice); font = content.Load<SpriteFont>(@"Fonts\menufont"); blankTexture = content.Load<Texture2D>(@"Textures\Backgrounds\blank"); // Tell each of the screens to load their content. foreach (GameScreen screen in screens) { screen.LoadContent(); } } /// <summary> /// Unload your graphics content. /// </summary> protected override void UnloadContent() { // Tell each of the screens to unload their content. foreach (GameScreen screen in screens) { screen.UnloadContent(); } } #endregion #region Update and Draw /// <summary> /// Allows each screen to run logic. /// </summary> public override void Update(GameTime gameTime) { #if DEBUG debugSystem.TimeRuler.StartFrame(); debugSystem.TimeRuler.BeginMark("Update", Color.Blue); if (debugTextEnabled && getOut == false) { debugSystem.FpsCounter.Visible = true; debugSystem.TimeRuler.Visible = true; debugSystem.TimeRuler.ShowLog = true; getOut = true; } else if (debugTextEnabled == false) { getOut = false; debugSystem.FpsCounter.Visible = false; debugSystem.TimeRuler.Visible = false; debugSystem.TimeRuler.ShowLog = false; } #endif // Read the keyboard and gamepad. input.Update(); // Make a copy of the master screen list, to avoid confusion if // the process of updating one screen adds or removes others. screensToUpdate.Clear(); foreach (GameScreen screen in screens) screensToUpdate.Add(screen); bool otherScreenHasFocus = !Game.IsActive; bool coveredByOtherScreen = false; // Loop as long as there are screens waiting to be updated. while (screensToUpdate.Count > 0) { // Pop the topmost screen off the waiting list. GameScreen screen = screensToUpdate[screensToUpdate.Count - 1]; screensToUpdate.RemoveAt(screensToUpdate.Count - 1); // Update the screen. screen.Update(gameTime, otherScreenHasFocus, coveredByOtherScreen); if (screen.ScreenState == ScreenState.TransitionOn || screen.ScreenState == ScreenState.Active) { // If this is the first active screen we came across, // give it a chance to handle input. if (!otherScreenHasFocus) { screen.HandleInput(input); otherScreenHasFocus = true; } // If this is an active non-popup, inform any subsequent // screens that they are covered by it. if (!screen.IsPopup) coveredByOtherScreen = true; } } // Print debug trace? if (traceEnabled) TraceScreens(); #if DEBUG debugSystem.TimeRuler.EndMark("Update"); #endif } /// <summary> /// Prints a list of all the screens, for debugging. /// </summary> void TraceScreens() { List<string> screenNames = new List<string>(); foreach (GameScreen screen in screens) screenNames.Add(screen.GetType().Name); Debug.WriteLine(string.Join(", ", screenNames.ToArray())); } /// <summary> /// Tells each screen to draw itself. 
/// </summary> public override void Draw(GameTime gameTime) { #if DEBUG debugSystem.TimeRuler.StartFrame(); debugSystem.TimeRuler.BeginMark("Draw", Color.Yellow); #endif foreach (GameScreen screen in screens) { if (screen.ScreenState == ScreenState.Hidden) continue; screen.Draw(gameTime); } #if DEBUG debugSystem.TimeRuler.EndMark("Draw"); #endif #if DEMO SpriteBatch.Begin(); SpriteBatch.DrawString(font, "DEMO - NOT FOR RESALE", new Vector2(20, 80), Color.White); SpriteBatch.End(); #endif } #endregion #region Public Methods /// <summary> /// Adds a new screen to the screen manager. /// </summary> public void AddScreen(GameScreen screen, PlayerIndex? controllingPlayer) { screen.ControllingPlayer = controllingPlayer; screen.ScreenManager = this; screen.IsExiting = false; // If we have a graphics device, tell the screen to load content. if (isInitialized) { screen.LoadContent(); } screens.Add(screen); } /// <summary> /// Removes a screen from the screen manager. You should normally /// use GameScreen.ExitScreen instead of calling this directly, so /// the screen can gradually transition off rather than just being /// instantly removed. /// </summary> public void RemoveScreen(GameScreen screen) { // If we have a graphics device, tell the screen to unload content. if (isInitialized) { screen.UnloadContent(); } screens.Remove(screen); screensToUpdate.Remove(screen); } /// <summary> /// Expose an array holding all the screens. We return a copy rather /// than the real master list, because screens should only ever be added /// or removed using the AddScreen and RemoveScreen methods. /// </summary> public GameScreen[] GetScreens() { return screens.ToArray(); } /// <summary> /// Helper draws a translucent black fullscreen sprite, used for fading /// screens in and out, and for darkening the background behind popups. /// </summary> public void FadeBackBufferToBlack(float alpha) { Viewport viewport = GraphicsDevice.Viewport; spriteBatch.Begin(); spriteBatch.Draw(blankTexture, new Rectangle(0, 0, viewport.Width, viewport.Height), Color.Black * alpha); spriteBatch.End(); } #endregion } } Game Screen Parent of GamePlayScreen #region File Description //----------------------------------------------------------------------------- // GameScreen.cs // // Microsoft XNA Community Game Platform // Copyright (C) Microsoft Corporation. All rights reserved. //----------------------------------------------------------------------------- #endregion #region Using Statements using System; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Input; //using Microsoft.Xna.Framework.Input.Touch; using System.IO; #endregion namespace GameStateManagement { /// <summary> /// Enum describes the screen transition state. /// </summary> public enum ScreenState { TransitionOn, Active, TransitionOff, Hidden, } /// <summary> /// A screen is a single layer that has update and draw logic, and which /// can be combined with other layers to build up a complex menu system. /// For instance the main menu, the options menu, the "are you sure you /// want to quit" message box, and the main game itself are all implemented /// as screens. /// </summary> public abstract class GameScreen { #region Properties /// <summary> /// Normally when one screen is brought up over the top of another, /// the first screen will transition off to make room for the new /// one. This property indicates whether the screen is only a small /// popup, in which case screens underneath it do not need to bother /// transitioning off. 
/// </summary> public bool IsPopup { get { return isPopup; } protected set { isPopup = value; } } bool isPopup = false; /// <summary> /// Indicates how long the screen takes to /// transition on when it is activated. /// </summary> public TimeSpan TransitionOnTime { get { return transitionOnTime; } protected set { transitionOnTime = value; } } TimeSpan transitionOnTime = TimeSpan.Zero; /// <summary> /// Indicates how long the screen takes to /// transition off when it is deactivated. /// </summary> public TimeSpan TransitionOffTime { get { return transitionOffTime; } protected set { transitionOffTime = value; } } TimeSpan transitionOffTime = TimeSpan.Zero; /// <summary> /// Gets the current position of the screen transition, ranging /// from zero (fully active, no transition) to one (transitioned /// fully off to nothing). /// </summary> public float TransitionPosition { get { return transitionPosition; } protected set { transitionPosition = value; } } float transitionPosition = 1; /// <summary> /// Gets the current alpha of the screen transition, ranging /// from 1 (fully active, no transition) to 0 (transitioned /// fully off to nothing). /// </summary> public float TransitionAlpha { get { return 1f - TransitionPosition; } } /// <summary> /// Gets the current screen transition state. /// </summary> public ScreenState ScreenState { get { return screenState; } protected set { screenState = value; } } ScreenState screenState = ScreenState.TransitionOn; /// <summary> /// There are two possible reasons why a screen might be transitioning /// off. It could be temporarily going away to make room for another /// screen that is on top of it, or it could be going away for good. /// This property indicates whether the screen is exiting for real: /// if set, the screen will automatically remove itself as soon as the /// transition finishes. /// </summary> public bool IsExiting { get { return isExiting; } protected internal set { isExiting = value; } } bool isExiting = false; /// <summary> /// Checks whether this screen is active and can respond to user input. /// </summary> public bool IsActive { get { return !otherScreenHasFocus && (screenState == ScreenState.TransitionOn || screenState == ScreenState.Active); } } bool otherScreenHasFocus; /// <summary> /// Gets the manager that this screen belongs to. /// </summary> public ScreenManager ScreenManager { get { return screenManager; } internal set { screenManager = value; } } ScreenManager screenManager; public KeyboardState KeyboardState { get {return Keyboard.GetState();} } /// <summary> /// Gets the index of the player who is currently controlling this screen, /// or null if it is accepting input from any player. This is used to lock /// the game to a specific player profile. The main menu responds to input /// from any connected gamepad, but whichever player makes a selection from /// this menu is given control over all subsequent screens, so other gamepads /// are inactive until the controlling player returns to the main menu. /// </summary> public PlayerIndex? ControllingPlayer { get { return controllingPlayer; } internal set { controllingPlayer = value; } } PlayerIndex? controllingPlayer; /// <summary> /// Gets whether or not this screen is serializable. If this is true, /// the screen will be recorded into the screen manager's state and /// its Serialize and Deserialize methods will be called as appropriate. /// If this is false, the screen will be ignored during serialization. /// By default, all screens are assumed to be serializable. 
/// </summary> public bool IsSerializable { get { return isSerializable; } protected set { isSerializable = value; } } bool isSerializable = true; #endregion #region Initialization /// <summary> /// Load graphics content for the screen. /// </summary> public virtual void LoadContent() { } /// <summary> /// Unload content for the screen. /// </summary> public virtual void UnloadContent() { } #endregion #region Update and Draw /// <summary> /// Allows the screen to run logic, such as updating the transition position. /// Unlike HandleInput, this method is called regardless of whether the screen /// is active, hidden, or in the middle of a transition. /// </summary> public virtual void Update(GameTime gameTime, bool otherScreenHasFocus, bool coveredByOtherScreen) { this.otherScreenHasFocus = otherScreenHasFocus; if (isExiting) { // If the screen is going away to die, it should transition off. screenState = ScreenState.TransitionOff; if (!UpdateTransition(gameTime, transitionOffTime, 1)) { // When the transition finishes, remove the screen. ScreenManager.RemoveScreen(this); } } else if (coveredByOtherScreen) { // If the screen is covered by another, it should transition off. if (UpdateTransition(gameTime, transitionOffTime, 1)) { // Still busy transitioning. screenState = ScreenState.TransitionOff; } else { // Transition finished! screenState = ScreenState.Hidden; } } else { // Otherwise the screen should transition on and become active. if (UpdateTransition(gameTime, transitionOnTime, -1)) { // Still busy transitioning. screenState = ScreenState.TransitionOn; } else { // Transition finished! screenState = ScreenState.Active; } } } /// <summary> /// Helper for updating the screen transition position. /// </summary> bool UpdateTransition(GameTime gameTime, TimeSpan time, int direction) { // How much should we move by? float transitionDelta; if (time == TimeSpan.Zero) transitionDelta = 1; else transitionDelta = (float)(gameTime.ElapsedGameTime.TotalMilliseconds / time.TotalMilliseconds); // Update the transition position. transitionPosition += transitionDelta * direction; // Did we reach the end of the transition? if (((direction < 0) && (transitionPosition <= 0)) || ((direction > 0) && (transitionPosition >= 1))) { transitionPosition = MathHelper.Clamp(transitionPosition, 0, 1); return false; } // Otherwise we are still busy transitioning. return true; } /// <summary> /// Allows the screen to handle user input. Unlike Update, this method /// is only called when the screen is active, and not when some other /// screen has taken the focus. /// </summary> public virtual void HandleInput(InputState input) { } public KeyboardState currentKeyState; public KeyboardState lastKeyState; public bool IsKeyHit(Keys key) { if (currentKeyState.IsKeyDown(key) && lastKeyState.IsKeyUp(key)) return true; return false; } /// <summary> /// This is called when the screen should draw itself. /// </summary> public virtual void Draw(GameTime gameTime) { } #endregion #region Public Methods /// <summary> /// Tells the screen to serialize its state into the given stream. /// </summary> public virtual void Serialize(Stream stream) { } /// <summary> /// Tells the screen to deserialize its state from the given stream. /// </summary> public virtual void Deserialize(Stream stream) { } /// <summary> /// Tells the screen to go away. Unlike ScreenManager.RemoveScreen, which /// instantly kills the screen, this method respects the transition timings /// and will give the screen a chance to gradually transition off. 
/// </summary> public void ExitScreen() { if (TransitionOffTime == TimeSpan.Zero) { // If the screen has a zero transition time, remove it immediately. ScreenManager.RemoveScreen(this); } else { // Otherwise flag that it should transition off and then exit. isExiting = true; } } #endregion #region Helper Methods /// <summary> /// A helper method which loads assets using the screen manager's /// associated game content loader. /// </summary> /// <typeparam name="T">Type of asset.</typeparam> /// <param name="assetName">Asset name, relative to the loader root /// directory, and not including the .xnb extension.</param> /// <returns></returns> public T Load<T>(string assetName) { return ScreenManager.Game.Content.Load<T>(assetName); } #endregion } }
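One low-effort way to watch where the transition stalls is to switch on the manager's built-in tracing and temporarily log the exiting screen's transition state each frame; a rough sketch using the members already shown above (the screenManager variable name stands in for whatever the game calls its ScreenManager instance):

// In the game's initialization code: TraceEnabled already exists on ScreenManager
// and prints the live screen list on every update.
screenManager.TraceEnabled = true;

// Temporary line to drop into GameScreen.Update, inside the "if (isExiting)" branch,
// to see whether the transition position is actually moving toward 1 each frame.
System.Diagnostics.Debug.WriteLine(
    GetType().Name + " exiting: position=" + TransitionPosition + ", state=" + ScreenState);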

    Read the article

  • Skinny controller in ASP.NET MVC 4

    - by thangchung
The Rails community has always inspired a lot of great ideas, and I really love it for that. One of these ideas is "fat models and skinny controllers". I have spent a lot of time on ASP.NET MVC, and I have made some mistakes, because I made my controllers far too fat. That makes a controller really dirty and very hard to maintain in the future, and it seriously violates the SRP and KISS principles. But how can we achieve skinny controllers in ASP.NET MVC? That question became really clear to me after I read Manning's "ASP.NET MVC 4 in Action". The idea is simple: we can move the work into custom ActionResult classes and implement the logic and data persistence inside them. I had read about this on Jimmy Bogard's blog in the last two years, but at that time I never gave it much consideration. That's enough talking for now. I just published a sample on ASP.NET MVC 4, implemented on Visual Studio 2012 RC, here. I used the EF framework for the persistence layer, and also used two free templates from the internet to build the UI for this sample. In this sample, I implement a simple magazine website that manages articles, categories and news. It is not finished yet, but that's no problem, because I just want to show you how we can make the controller skinny. And I want to hear more of your ideas. First, I abstract the base ActionResult class like this:

public abstract class MyActionResult : ActionResult, IEnsureNotNull
{
    public abstract void EnsureAllInjectInstanceNotNull();
}

public abstract class ActionResultBase<TController> : MyActionResult where TController : Controller
{
    protected readonly Expression<Func<TController, ActionResult>> ViewNameExpression;
    protected readonly IExConfigurationManager ConfigurationManager;

    protected ActionResultBase(Expression<Func<TController, ActionResult>> expr)
        : this(DependencyResolver.Current.GetService<IExConfigurationManager>(), expr)
    {
    }

    protected ActionResultBase(
        IExConfigurationManager configurationManager,
        Expression<Func<TController, ActionResult>> expr)
    {
        Guard.ArgumentNotNull(expr, "ViewNameExpression");
        Guard.ArgumentNotNull(configurationManager, "ConfigurationManager");
        ViewNameExpression = expr;
        ConfigurationManager = configurationManager;
    }

    protected ViewResult GetViewResult<TViewModel>(TViewModel viewModel)
    {
        var m = (MethodCallExpression)ViewNameExpression.Body;
        if (m.Method.ReturnType != typeof(ActionResult))
        {
            throw new ArgumentException("ControllerAction method '" + m.Method.Name + "' does not return type ActionResult");
        }

        var result = new ViewResult
        {
            ViewName = m.Method.Name
        };
        result.ViewData.Model = viewModel;
        return result;
    }

    public override void ExecuteResult(ControllerContext context)
    {
        EnsureAllInjectInstanceNotNull();
    }
}

I also have an interface for validating all injected objects. This interface makes sure that all the objects I inject using the Autofac container are not null.
The implementation of this as below public interface IEnsureNotNull     {         void EnsureAllInjectInstanceNotNull();     } Afterwards, I am just simple implementing the HomePageViewModelActionResult class like this public class HomePageViewModelActionResult<TController> : ActionResultBase<TController> where TController : Controller     {         #region variables & ctors         private readonly ICategoryRepository _categoryRepository;         private readonly IItemRepository _itemRepository;         private readonly int _numOfPage;         public HomePageViewModelActionResult(Expression<Func<TController, ActionResult>> viewNameExpression)             : this(viewNameExpression,                    DependencyResolver.Current.GetService<ICategoryRepository>(),                    DependencyResolver.Current.GetService<IItemRepository>())         {         }         public HomePageViewModelActionResult(             Expression<Func<TController, ActionResult>> viewNameExpression,             ICategoryRepository categoryRepository,             IItemRepository itemRepository)             : base(viewNameExpression)         {             _categoryRepository = categoryRepository;             _itemRepository = itemRepository;             _numOfPage = ConfigurationManager.GetAppConfigBy("NumOfPage").ToInteger();         }         #endregion         #region implementation         public override void ExecuteResult(ControllerContext context)         {             base.ExecuteResult(context);             var cats = _categoryRepository.GetCategories();             var mainViewModel = new HomePageViewModel();             var headerViewModel = new HeaderViewModel();             var footerViewModel = new FooterViewModel();             var mainPageViewModel = new MainPageViewModel();             headerViewModel.SiteTitle = "Magazine Website";             if (cats != null && cats.Any())             {                 headerViewModel.Categories = cats.ToList();                 footerViewModel.Categories = cats.ToList();             }             mainPageViewModel.LeftColumn = BindingDataForMainPageLeftColumnViewModel();             mainPageViewModel.RightColumn = BindingDataForMainPageRightColumnViewModel();             mainViewModel.Header = headerViewModel;             mainViewModel.DashBoard = new DashboardViewModel();             mainViewModel.Footer = footerViewModel;             mainViewModel.MainPage = mainPageViewModel;             GetViewResult(mainViewModel).ExecuteResult(context);         }         public override void EnsureAllInjectInstanceNotNull()         {             Guard.ArgumentNotNull(_categoryRepository, "CategoryRepository");             Guard.ArgumentNotNull(_itemRepository, "ItemRepository");             Guard.ArgumentMustMoreThanZero(_numOfPage, "NumOfPage");         }         #endregion         #region private functions         private MainPageRightColumnViewModel BindingDataForMainPageRightColumnViewModel()         {             var mainPageRightCol = new MainPageRightColumnViewModel();             mainPageRightCol.LatestNews = _itemRepository.GetNewestItem(_numOfPage).ToList();             mainPageRightCol.MostViews = _itemRepository.GetMostViews(_numOfPage).ToList();             return mainPageRightCol;         }         private MainPageLeftColumnViewModel BindingDataForMainPageLeftColumnViewModel()         {             var mainPageLeftCol = new MainPageLeftColumnViewModel();             var items = _itemRepository.GetNewestItem(_numOfPage);             if (items != null && 
items.Any())
            {
                var firstItem = items.First();
                if (firstItem == null)
                    throw new NoNullAllowedException("First Item".ToNotNullErrorMessage());
                if (firstItem.ItemContent == null)
                    throw new NoNullAllowedException("First ItemContent".ToNotNullErrorMessage());
                mainPageLeftCol.FirstItem = firstItem;
                if (items.Count() > 1)
                {
                    mainPageLeftCol.RemainItems = items.Where(x => x.ItemContent != null && x.Id != mainPageLeftCol.FirstItem.Id).ToList();
                }
            }
            return mainPageLeftCol;
        }
        #endregion
    }

As a final step, I go into HomeController and add a few lines of code like this:

[Authorize]
public class HomeController : BaseController
{
    [AllowAnonymous]
    public ActionResult Index()
    {
        return new HomePageViewModelActionResult<HomeController>(x => x.Index());
    }

    [AllowAnonymous]
    public ActionResult Details(int id)
    {
        return new DetailsViewModelActionResult<HomeController>(x => x.Details(id), id);
    }

    [AllowAnonymous]
    public ActionResult Category(int id)
    {
        return new CategoryViewModelActionResult<HomeController>(x => x.Category(id), id);
    }
}

As you can see, the code in the controller is really skinny, and all the logic has moved to the custom ActionResult classes. Some people have said that this just moves the code out of the controller and puts it in another class, so it is still hard to maintain; it looks like it just moves the complicated code from one place to another. But if you think about it in detail, the split is useful: any logic that is related to the HttpContext or similar concerns can stay in the controller, while the other logic (such as business rules and data persistence) is delegated to the custom ActionResult classes. Tell me more about what you think; I am really willing to hear it all from you. All the source code can be found here.
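The repositories above are resolved through DependencyResolver.Current, which assumes a container was registered at application start-up; the post mentions Autofac. A hedged sketch of what that wiring might look like with Autofac's ASP.NET MVC integration (the concrete repository type names are hypothetical placeholders, not taken from the sample):

using System.Web.Mvc;
using Autofac;
using Autofac.Integration.Mvc;

public static class ContainerConfig
{
    // Called once at application start-up (e.g. from Application_Start in Global.asax).
    public static void Register()
    {
        var builder = new ContainerBuilder();

        // MagazineCategoryRepository / MagazineItemRepository are placeholder names for
        // whatever EF-backed implementations the sample project actually contains.
        builder.RegisterType<MagazineCategoryRepository>().As<ICategoryRepository>().InstancePerHttpRequest();
        builder.RegisterType<MagazineItemRepository>().As<IItemRepository>().InstancePerHttpRequest();
        // Other services resolved via DependencyResolver (e.g. IExConfigurationManager)
        // would be registered the same way.

        builder.RegisterControllers(typeof(HomeController).Assembly);

        var container = builder.Build();
        DependencyResolver.SetResolver(new AutofacDependencyResolver(container));
    }
}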

    Read the article

  • Scaling-out Your Services by Message Bus based WCF Transport Extension &ndash; Part 1 &ndash; Background

    - by Shaun
Cloud computing gives us more flexibility with computing resources: we can provision and deploy an application or service with multiple instances over multiple machines. As the number of service instances grows, how to balance the incoming messages and workload becomes a new challenge. Currently there are two approaches we can use to pass incoming messages to the service instances; I would like to call them dispatcher mode and pulling mode.

Dispatcher Mode

The dispatcher mode introduces a role which takes the responsibility of finding the best service instance to process the request. The image below describes the shape of this mode. There are four clients communicating with the service through the underlying transportation. For example, if we are using HTTP, the clients might be connecting to the same service URL. On the server side there's a dispatcher listening on this URL which tries to retrieve all messages. When a message comes in, the dispatcher will find a proper service instance to process it. There are three mechanisms for finding the instance:

Round-robin: The dispatcher will always send the message to the next instance. For example, if the dispatcher sent the last message to instance 2, then the next message will be sent to instance 3, regardless of whether instance 3 is busy or not at that moment.

Random: The dispatcher will pick a service instance randomly and, just like round-robin mode, it disregards whether the instance is busy or not.

Sticky: The dispatcher will send all related messages to the same service instance. This approach is typically used when the service methods are stateful or session-based.

But as you can see, none of these approaches is really load balanced. The clients will send messages at any time, and each message might take a different processing duration on the server side. This means that in some cases some of the service instances are very busy while others are almost idle. For example, if we were using round-robin mode, it could happen that most of the simple task messages were passed to instance 1 while the complex ones were sent to instance 3, even though instance 1 should be idle.

This brings some problems into our architecture. The first one is that the response to the clients might take longer than it should. As shown in the figure above, messages 6 and 9 could be processed by instance 1 or instance 2, but in reality they were dispatched to the busy instance 3 because of the dispatcher and its round-robin mode. Secondly, if many requests come from the clients in a very short period, service instances might be filled with tons of pending tasks and some instances might crash. Third, if we are using a cloud platform to host our service instances, for example Windows Azure, the computing resource is billed by the service deployment period instead of the actual CPU usage. This means that if any service instance is idle it is wasting our money! Last, the dispatcher would be the bottleneck of our system since all incoming messages must be routed by the dispatcher. If we are using HTTP or TCP as the transport, the dispatcher would be a network load balancer. If we want more capacity, we have to scale up, or buy a hardware load balancer which is very expensive, as well as scaling out the service instances.

Pulling Mode

Pulling mode doesn't need a dispatcher to route the messages. All service instances listen to the same transport and try to retrieve the next proper message to process whenever they are idle.
Since there is no dispatcher in pulling mode, it requires some features from the transportation.

The transportation must support multiple client connections and server listeners. HTTP and TCP don't allow multiple clients to listen on the same address and port, so they cannot be used in pulling mode directly.

All messages in the transportation must be FIFO, which means the older message must be received before the newer one.

Message selection would be a plus. This means both service and client can specify some selection criteria and receive only certain kinds of messages. This feature is not mandatory but would be very useful when implementing the request-reply and duplex WCF channel modes. Otherwise we must keep an in-memory dictionary to store the reply messages. I will explain more about this in the following articles.

A message bus, or message queue, would be the best candidate as the transportation when using pulling mode. First, it allows multiple applications to listen on the same queue, and it's FIFO. Some message buses also support message selection, such as TIBCO EMS and RabbitMQ. Others provide an in-memory dictionary which can store the reply messages, for example Redis.

The principle of pulling mode is to let the service instances be self-managed. This means each instance will try to retrieve the next pending incoming message once it has finished the current task. This gives us more benefits and can solve the problems we met in dispatcher mode. The incoming message will be received by the best instance to process it, which means the load will be very balanced. It will not happen that some instances are busy while others are idle, since an idle one will retrieve more tasks to keep itself busy. Since all instances try their best to stay busy we can use fewer instances than in dispatcher mode, which is more cost effective. Since there's no dispatcher in the system, there is no bottleneck. When we introduce more service instances, in dispatcher mode we have to change something to let the dispatcher know about the new instances; but in pulling mode, since all service instances are self-managed, no extra change is needed at all. If there are many incoming messages, the message bus can queue them in the transportation, so the service instances will not crash.

These are the benefits of using pulling mode, but it introduces some problems as well. Process tracking and debugging become more difficult: since the service instances are self-managed, we cannot know which instance will process a given message, so we need more information to support debugging and tracking. Real-time response may not be supported: each service instance will process the next message only after the current one is done, so if we have real-time requirements this may not be a good solution.

Comparing the pros and cons above, pulling mode is the better solution for a distributed system architecture, because what we need most is scalability, cost effectiveness and self-management.

WCF and WCF Transport Extensibility

Windows Communication Foundation (WCF) is a framework for building service-oriented applications. In the .NET world WCF is the best way to implement a service. In this series I'm going to demonstrate how to implement pulling mode on top of a message bus by extending WCF. I don't want to go deep into every related field of WCF, but I will highlight its transport extensibility.
When we implemented an RPC foundation there are many aspects we need to deal with, for example the message encoding, encryption, authentication and message sending and receiving. In WCF, each aspect is represented by a channel. A message will be passed through all necessary channels and finally send to the underlying transportation. And on the other side the message will be received from the transport and though the same channels until the business logic. This mode is called “Channel Stack” in WCF, and the last channel in the channel stack must always be a transport channel, which takes the responsible for sending and receiving the messages. As we are going to implement the WCF over message bus and implement the pulling mode scaling-out solution, we need to create our own transport channel so that the client and service can exchange messages over our bus. Before we deep into the transport channel, let’s have a look on the message exchange patterns that WCF defines. Message exchange pattern (MEP) defines how client and service exchange the messages over the transportation. WCF defines 3 basic MEPs which are datagram, Request-Reply and Duplex. Datagram: Also known as one-way, or fire-forgot mode. The message sent from the client to the service, and no need any reply from the service. The client doesn’t care about the message result at all. Request-Reply: Very common used pattern. The client send the request message to the service and wait until the reply message comes from the service. Duplex: The client sent message to the service, when the service processing the message it can callback to the client. When callback the service would be like a client while the client would be like a service. In WCF, each MEP represent some channels associated. MEP Channels Datagram IInputChannel, IOutputChannel Request-Reply IRequestChannel, IReplyChannel Duplex IDuplexChannel And the channels are created by ChannelListener on the server side, and ChannelFactory on the client side. The ChannelListener and ChannelFactory are created by the TransportBindingElement. The TransportBindingElement is created by the Binding, which can be defined as a new binding or from a custom binding. For more information about the transport channel mode, please refer to the MSDN document. The figure below shows the transport channel objects when using the request-reply MEP. And this is the datagram MEP. And this is the duplex MEP. After investigated the WCF transport architecture, channel mode and MEP, we finally identified what we should do to extend our message bus based transport layer. They are: Binding: (Optional) Defines the channel elements in the channel stack and added our transport binding element at the bottom of the stack. But we can use the build-in CustomBinding as well. TransportBindingElement: Defines which MEP is supported in our transport and create the related ChannelListener and ChannelFactory. This also defines the scheme of the endpoint if using this transport. ChannelListener: Create the server side channel based on the MEP it’s. We can have one ChannelListener to create channels for all supported MEPs, or we can have ChannelListener for each MEP. In this series I will use the second approach. ChannelFactory: Create the client side channel based on the MEP it’s. We can have one ChannelFactory to create channels for all supported MEPs, or we can have ChannelFactory for each MEP. In this series I will use the second approach. 
Channels: Based on the MEPs we want to support, we need to implement the channels accordingly. For example, if we want our transport support Request-Reply mode we should implement IRequestChannel and IReplyChannel. In this series I will implement all 3 MEPs listed above one by one. Scaffold: In order to make our transport extension works we also need to implement some scaffold stuff. For example we need some classes to send and receive message though out message bus. We also need some codes to read and write the WCF message, etc.. These are not necessary but would be very useful in our example.   Message Bus There is only one thing remained before we can begin to implement our scaling-out support WCF transport, which is the message bus. As I mentioned above, the message bus must have some features to fulfill all the WCF MEPs. In my company we will be using TIBCO EMS, which is an enterprise message bus product. And I have said before we can use any message bus production if it’s satisfied with our requests. Here I would like to introduce an interface to separate the message bus from the WCF. This allows us to implement the bus operations by any kinds bus we are going to use. The interface would be like this. 1: public interface IBus : IDisposable 2: { 3: string SendRequest(string message, bool fromClient, string from, string to = null); 4:  5: void SendReply(string message, bool fromClient, string replyTo); 6:  7: BusMessage Receive(bool fromClient, string replyTo); 8: } There are only three methods for the bus interface. Let me explain one by one. The SendRequest method takes the responsible for sending the request message into the bus. The parameters description are: message: The WCF message content. fromClient: Indicates if this message was came from the client. from: The channel ID that this message was sent from. The channel ID will be generated when any kinds of channel was created, which will be explained in the following articles. to: The channel ID that this message should be received. In Request-Reply and Duplex MEP this is necessary since the reply message must be received by the channel which sent the related request message. The SendReply method takes the responsible for sending the reply message. It’s very similar as the previous one but no “from” parameter. This is because it’s no need to reply a reply message again in any MEPs. The Receive method takes the responsible for waiting for a incoming message, includes the request message and specified reply message. It returned a BusMessage object, which contains some information about the channel information. The code of the BusMessage class is 1: public class BusMessage 2: { 3: public string MessageID { get; private set; } 4: public string From { get; private set; } 5: public string ReplyTo { get; private set; } 6: public string Content { get; private set; } 7:  8: public BusMessage(string messageId, string fromChannelId, string replyToChannelId, string content) 9: { 10: MessageID = messageId; 11: From = fromChannelId; 12: ReplyTo = replyToChannelId; 13: Content = content; 14: } 15: } Now let’s implement a message bus based on the IBus interface. Since I don’t want you to buy and install the TIBCO EMS or any other message bus products, I will implement an in process memory bus. This bus is only for test and sample purpose. It can only be used if the service and client are in the same process. Very straightforward. 
1: public class InProcMessageBus : IBus 2: { 3: private readonly ConcurrentDictionary<Guid, InProcMessageEntity> _queue; 4: private readonly object _lock; 5:  6: public InProcMessageBus() 7: { 8: _queue = new ConcurrentDictionary<Guid, InProcMessageEntity>(); 9: _lock = new object(); 10: } 11:  12: public string SendRequest(string message, bool fromClient, string from, string to = null) 13: { 14: var entity = new InProcMessageEntity(message, fromClient, from, to); 15: _queue.TryAdd(entity.ID, entity); 16: return entity.ID.ToString(); 17: } 18:  19: public void SendReply(string message, bool fromClient, string replyTo) 20: { 21: var entity = new InProcMessageEntity(message, fromClient, null, replyTo); 22: _queue.TryAdd(entity.ID, entity); 23: } 24:  25: public BusMessage Receive(bool fromClient, string replyTo) 26: { 27: InProcMessageEntity e = null; 28: while (true) 29: { 30: lock (_lock) 31: { 32: var entity = _queue 33: .Where(kvp => kvp.Value.FromClient == fromClient && (kvp.Value.To == replyTo || string.IsNullOrWhiteSpace(kvp.Value.To))) 34: .FirstOrDefault(); 35: if (entity.Key != Guid.Empty && entity.Value != null) 36: { 37: _queue.TryRemove(entity.Key, out e); 38: } 39: } 40: if (e == null) 41: { 42: Thread.Sleep(100); 43: } 44: else 45: { 46: return new BusMessage(e.ID.ToString(), e.From, e.To, e.Content); 47: } 48: } 49: } 50:  51: public void Dispose() 52: { 53: } 54: } The InProcMessageBus stores the messages in the objects of InProcMessageEntity, which can take some extra information beside the WCF message itself. 1: public class InProcMessageEntity 2: { 3: public Guid ID { get; set; } 4: public string Content { get; set; } 5: public bool FromClient { get; set; } 6: public string From { get; set; } 7: public string To { get; set; } 8:  9: public InProcMessageEntity() 10: : this(string.Empty, false, string.Empty, string.Empty) 11: { 12: } 13:  14: public InProcMessageEntity(string content, bool fromClient, string from, string to) 15: { 16: ID = Guid.NewGuid(); 17: Content = content; 18: FromClient = fromClient; 19: From = from; 20: To = to; 21: } 22: }   Summary OK, now I have all necessary stuff ready. The next step would be implementing our WCF message bus transport extension. In this post I described two scaling-out approaches on the service side especially if we are using the cloud platform: dispatcher mode and pulling mode. And I compared the Pros and Cons of them. Then I introduced the WCF channel stack, channel mode and the transport extension part, and identified what we should do to create our own WCF transport extension, to let our WCF services using pulling mode based on a message bus. And finally I provided some classes that need to be used in the future posts that working against an in process memory message bus, for the demonstration purpose only. In the next post I will begin to implement the transport extension step by step.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
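To make the pulling idea concrete, here is a minimal usage sketch of the InProcMessageBus above, assuming the IBus, BusMessage and InProcMessageBus types compile as listed and are in scope: one task plays the service and blocks in Receive until a client message arrives, while the main thread plays the client, sends a request tagged with its channel ID and then pulls the reply. This is only an illustration of the bus contract, not part of the original sample.

using System;
using System.Threading.Tasks;

class BusDemo
{
    static void Main()
    {
        IBus bus = new InProcMessageBus();

        // "Service" side: pull the next client message off the bus, then reply to its sender.
        var service = Task.Factory.StartNew(() =>
        {
            BusMessage request = bus.Receive(fromClient: true, replyTo: null);
            Console.WriteLine("Service got: " + request.Content);
            bus.SendReply("pong", fromClient: false, replyTo: request.From);
        });

        // "Client" side: send a request carrying our channel ID, then wait for the matching reply.
        const string clientChannelId = "client-1";
        bus.SendRequest("ping", fromClient: true, from: clientChannelId);
        BusMessage reply = bus.Receive(fromClient: false, replyTo: clientChannelId);
        Console.WriteLine("Client got: " + reply.Content);

        service.Wait();
        bus.Dispose();
    }
}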

    Read the article

  • Internet doesn't work by default

    - by Adam Martinez
    After upgrading to Precise, I am required to run 'sudo dhclient eth0' in a terminal in order to get the internet to work. Everything worked perfectly fine on Oneiric, so It's really puzzling me. I'm thinking it could possibly be something with the kernel, but who knows. Output of dmesg: [ 0.247891] system 00:01: [io 0x0290-0x030f] has been reserved [ 0.247896] system 00:01: [io 0x0290-0x0297] has been reserved [ 0.247901] system 00:01: [io 0x0880-0x088f] has been reserved [ 0.247908] system 00:01: Plug and Play ACPI device, IDs PNP0c02 (active) [ 0.247931] pnp 00:02: [dma 4] [ 0.247935] pnp 00:02: [io 0x0000-0x000f] [ 0.247939] pnp 00:02: [io 0x0080-0x0090] [ 0.247943] pnp 00:02: [io 0x0094-0x009f] [ 0.247947] pnp 00:02: [io 0x00c0-0x00df] [ 0.248033] pnp 00:02: Plug and Play ACPI device, IDs PNP0200 (active) [ 0.248125] pnp 00:03: [io 0x0070-0x0073] [ 0.248187] pnp 00:03: Plug and Play ACPI device, IDs PNP0b00 (active) [ 0.248205] pnp 00:04: [io 0x0061] [ 0.248260] pnp 00:04: Plug and Play ACPI device, IDs PNP0800 (active) [ 0.248277] pnp 00:05: [io 0x00f0-0x00ff] [ 0.248292] pnp 00:05: [irq 13] [ 0.248348] pnp 00:05: Plug and Play ACPI device, IDs PNP0c04 (active) [ 0.248583] pnp 00:06: [io 0x03f0-0x03f5] [ 0.248588] pnp 00:06: [io 0x03f7] [ 0.248597] pnp 00:06: [irq 6] [ 0.248601] pnp 00:06: [dma 2] [ 0.248690] pnp 00:06: Plug and Play ACPI device, IDs PNP0700 (active) [ 0.248998] pnp 00:07: [io 0x03f8-0x03ff] [ 0.249008] pnp 00:07: [irq 4] [ 0.249122] pnp 00:07: Plug and Play ACPI device, IDs PNP0501 (active) [ 0.249479] pnp 00:08: [io 0x0400-0x04bf] [ 0.249584] system 00:08: [io 0x0400-0x04bf] has been reserved [ 0.249591] system 00:08: Plug and Play ACPI device, IDs PNP0c02 (active) [ 0.249628] pnp 00:09: [mem 0xffb80000-0xffbfffff] [ 0.249690] pnp 00:09: Plug and Play ACPI device, IDs INT0800 (active) [ 0.250049] pnp 00:0a: [mem 0xe0000000-0xefffffff] [ 0.250167] system 00:0a: [mem 0xe0000000-0xefffffff] has been reserved [ 0.250173] system 00:0a: Plug and Play ACPI device, IDs PNP0c02 (active) [ 0.250302] pnp 00:0b: [mem 0x000f0000-0x000fffff] [ 0.250307] pnp 00:0b: [mem 0x7ff00000-0x7fffffff] [ 0.250311] pnp 00:0b: [mem 0xfed00000-0xfed000ff] [ 0.250316] pnp 00:0b: [mem 0x0000046e-0x0000056d] [ 0.250320] pnp 00:0b: [mem 0x7fee0000-0x7fefffff] [ 0.250324] pnp 00:0b: [mem 0x00000000-0x0009ffff] [ 0.250328] pnp 00:0b: [mem 0x00100000-0x7fedffff] [ 0.250332] pnp 00:0b: [mem 0xfec00000-0xfec00fff] [ 0.250336] pnp 00:0b: [mem 0xfed14000-0xfed1dfff] [ 0.250341] pnp 00:0b: [mem 0xfed20000-0xfed9ffff] [ 0.250345] pnp 00:0b: [mem 0xfee00000-0xfee00fff] [ 0.250349] pnp 00:0b: [mem 0xffb00000-0xffb7ffff] [ 0.250353] pnp 00:0b: [mem 0xfff00000-0xffffffff] [ 0.250357] pnp 00:0b: [mem 0x000e0000-0x000effff] [ 0.250409] pnp 00:0b: disabling [mem 0x0000046e-0x0000056d] because it overlaps 0000:01:00.0 BAR 6 [mem 0x00000000-0x0007ffff pref] [ 0.250419] pnp 00:0b: disabling [mem 0x0000046e-0x0000056d disabled] because it overlaps 0000:03:00.0 BAR 6 [mem 0x00000000-0x0000ffff pref] [ 0.250430] pnp 00:0b: disabling [mem 0x0000046e-0x0000056d disabled] because it overlaps 0000:04:00.0 BAR 6 [mem 0x00000000-0x0001ffff pref] [ 0.250524] system 00:0b: [mem 0x000f0000-0x000fffff] could not be reserved [ 0.250530] system 00:0b: [mem 0x7ff00000-0x7fffffff] has been reserved [ 0.250536] system 00:0b: [mem 0xfed00000-0xfed000ff] has been reserved [ 0.250541] system 00:0b: [mem 0x7fee0000-0x7fefffff] could not be reserved [ 0.250547] system 00:0b: [mem 0x00000000-0x0009ffff] could not be reserved [ 
0.250552] system 00:0b: [mem 0x00100000-0x7fedffff] could not be reserved [ 0.250558] system 00:0b: [mem 0xfec00000-0xfec00fff] could not be reserved [ 0.250563] system 00:0b: [mem 0xfed14000-0xfed1dfff] has been reserved [ 0.250568] system 00:0b: [mem 0xfed20000-0xfed9ffff] has been reserved [ 0.250574] system 00:0b: [mem 0xfee00000-0xfee00fff] has been reserved [ 0.250579] system 00:0b: [mem 0xffb00000-0xffb7ffff] has been reserved [ 0.250585] system 00:0b: [mem 0xfff00000-0xffffffff] has been reserved [ 0.250590] system 00:0b: [mem 0x000e0000-0x000effff] has been reserved [ 0.250596] system 00:0b: Plug and Play ACPI device, IDs PNP0c01 (active) [ 0.250614] pnp: PnP ACPI: found 12 devices [ 0.250617] ACPI: ACPI bus type pnp unregistered [ 0.250624] PnPBIOS: Disabled by ACPI PNP [ 0.288725] PCI: max bus depth: 1 pci_try_num: 2 [ 0.288786] pci 0000:01:00.0: BAR 6: assigned [mem 0xfb000000-0xfb07ffff pref] [ 0.288792] pci 0000:00:01.0: PCI bridge to [bus 01-01] [ 0.288797] pci 0000:00:01.0: bridge window [io 0xa000-0xafff] [ 0.288804] pci 0000:00:01.0: bridge window [mem 0xf8000000-0xfbffffff] [ 0.288811] pci 0000:00:01.0: bridge window [mem 0xd0000000-0xdfffffff 64bit pref] [ 0.288820] pci 0000:00:1c.0: PCI bridge to [bus 02-02] [ 0.288825] pci 0000:00:1c.0: bridge window [io 0x9000-0x9fff] [ 0.288833] pci 0000:00:1c.0: bridge window [mem 0xfdb00000-0xfdbfffff] [ 0.288840] pci 0000:00:1c.0: bridge window [mem 0xfd800000-0xfd8fffff 64bit pref] [ 0.288851] pci 0000:03:00.0: BAR 6: assigned [mem 0xfde00000-0xfde0ffff pref] [ 0.288856] pci 0000:00:1c.4: PCI bridge to [bus 03-03] [ 0.288861] pci 0000:00:1c.4: bridge window [io 0xd000-0xdfff] [ 0.288869] pci 0000:00:1c.4: bridge window [mem 0xfd700000-0xfd7fffff] [ 0.288876] pci 0000:00:1c.4: bridge window [mem 0xfde00000-0xfdefffff 64bit pref] [ 0.288887] pci 0000:04:00.0: BAR 6: assigned [mem 0xfdc00000-0xfdc1ffff pref] [ 0.288891] pci 0000:00:1c.5: PCI bridge to [bus 04-04] [ 0.288897] pci 0000:00:1c.5: bridge window [io 0xb000-0xbfff] [ 0.288904] pci 0000:00:1c.5: bridge window [mem 0xfdd00000-0xfddfffff] [ 0.288911] pci 0000:00:1c.5: bridge window [mem 0xfdc00000-0xfdcfffff 64bit pref] [ 0.288920] pci 0000:00:1e.0: PCI bridge to [bus 05-05] [ 0.288926] pci 0000:00:1e.0: bridge window [io 0xc000-0xcfff] [ 0.288933] pci 0000:00:1e.0: bridge window [mem 0xfda00000-0xfdafffff] [ 0.288940] pci 0000:00:1e.0: bridge window [mem 0xfd900000-0xfd9fffff 64bit pref] [ 0.288971] pci 0000:00:01.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 0.288979] pci 0000:00:01.0: setting latency timer to 64 [ 0.288991] pci 0000:00:1c.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 0.288998] pci 0000:00:1c.0: setting latency timer to 64 [ 0.289008] pci 0000:00:1c.4: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 0.289014] pci 0000:00:1c.4: setting latency timer to 64 [ 0.289030] pci 0000:00:1c.5: PCI INT B -> GSI 17 (level, low) -> IRQ 17 [ 0.289037] pci 0000:00:1c.5: setting latency timer to 64 [ 0.289047] pci 0000:00:1e.0: setting latency timer to 64 [ 0.289054] pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7] [ 0.289058] pci_bus 0000:00: resource 5 [io 0x0d00-0xffff] [ 0.289063] pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff] [ 0.289067] pci_bus 0000:00: resource 7 [mem 0x000c0000-0x000dffff] [ 0.289072] pci_bus 0000:00: resource 8 [mem 0x7ff00000-0xfebfffff] [ 0.289077] pci_bus 0000:01: resource 0 [io 0xa000-0xafff] [ 0.289081] pci_bus 0000:01: resource 1 [mem 0xf8000000-0xfbffffff] [ 0.289086] pci_bus 0000:01: resource 2 [mem 0xd0000000-0xdfffffff 64bit 
pref] [ 0.289092] pci_bus 0000:02: resource 0 [io 0x9000-0x9fff] [ 0.289096] pci_bus 0000:02: resource 1 [mem 0xfdb00000-0xfdbfffff] [ 0.289101] pci_bus 0000:02: resource 2 [mem 0xfd800000-0xfd8fffff 64bit pref] [ 0.289106] pci_bus 0000:03: resource 0 [io 0xd000-0xdfff] [ 0.289110] pci_bus 0000:03: resource 1 [mem 0xfd700000-0xfd7fffff] [ 0.289115] pci_bus 0000:03: resource 2 [mem 0xfde00000-0xfdefffff 64bit pref] [ 0.289120] pci_bus 0000:04: resource 0 [io 0xb000-0xbfff] [ 0.289124] pci_bus 0000:04: resource 1 [mem 0xfdd00000-0xfddfffff] [ 0.289129] pci_bus 0000:04: resource 2 [mem 0xfdc00000-0xfdcfffff 64bit pref] [ 0.289134] pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] [ 0.289138] pci_bus 0000:05: resource 1 [mem 0xfda00000-0xfdafffff] [ 0.289143] pci_bus 0000:05: resource 2 [mem 0xfd900000-0xfd9fffff 64bit pref] [ 0.289148] pci_bus 0000:05: resource 4 [io 0x0000-0x0cf7] [ 0.289152] pci_bus 0000:05: resource 5 [io 0x0d00-0xffff] [ 0.289157] pci_bus 0000:05: resource 6 [mem 0x000a0000-0x000bffff] [ 0.289161] pci_bus 0000:05: resource 7 [mem 0x000c0000-0x000dffff] [ 0.289166] pci_bus 0000:05: resource 8 [mem 0x7ff00000-0xfebfffff] [ 0.289233] NET: Registered protocol family 2 [ 0.289360] IP route cache hash table entries: 32768 (order: 5, 131072 bytes) [ 0.289754] TCP established hash table entries: 131072 (order: 8, 1048576 bytes) [ 0.290351] TCP bind hash table entries: 65536 (order: 7, 524288 bytes) [ 0.290670] TCP: Hash tables configured (established 131072 bind 65536) [ 0.290674] TCP reno registered [ 0.290680] UDP hash table entries: 512 (order: 2, 16384 bytes) [ 0.290703] UDP-Lite hash table entries: 512 (order: 2, 16384 bytes) [ 0.290868] NET: Registered protocol family 1 [ 0.290911] pci 0000:00:1a.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 0.290932] pci 0000:00:1a.0: PCI INT A disabled [ 0.290956] pci 0000:00:1a.1: PCI INT B -> GSI 21 (level, low) -> IRQ 21 [ 0.290975] pci 0000:00:1a.1: PCI INT B disabled [ 0.290992] pci 0000:00:1a.2: PCI INT D -> GSI 19 (level, low) -> IRQ 19 [ 0.291012] pci 0000:00:1a.2: PCI INT D disabled [ 0.291031] pci 0000:00:1a.7: PCI INT C -> GSI 18 (level, low) -> IRQ 18 [ 0.291068] pci 0000:00:1a.7: PCI INT C disabled [ 0.291104] pci 0000:00:1d.0: PCI INT A -> GSI 23 (level, low) -> IRQ 23 [ 0.291123] pci 0000:00:1d.0: PCI INT A disabled [ 0.291135] pci 0000:00:1d.1: PCI INT B -> GSI 19 (level, low) -> IRQ 19 [ 0.291155] pci 0000:00:1d.1: PCI INT B disabled [ 0.291166] pci 0000:00:1d.2: PCI INT C -> GSI 18 (level, low) -> IRQ 18 [ 0.291185] pci 0000:00:1d.2: PCI INT C disabled [ 0.291198] pci 0000:00:1d.7: PCI INT A -> GSI 23 (level, low) -> IRQ 23 [ 0.291219] pci 0000:00:1d.7: PCI INT A disabled [ 0.291258] pci 0000:01:00.0: Boot video device [ 0.291273] PCI: CLS 4 bytes, default 64 [ 0.291857] audit: initializing netlink socket (disabled) [ 0.291876] type=2000 audit(1336753420.284:1): initialized [ 0.337724] highmem bounce pool size: 64 pages [ 0.337734] HugeTLB registered 2 MB page size, pre-allocated 0 pages [ 0.349241] VFS: Disk quotas dquot_6.5.2 [ 0.349365] Dquot-cache hash table entries: 1024 (order 0, 4096 bytes) [ 0.350418] fuse init (API version 7.17) [ 0.350611] msgmni has been set to 1685 [ 0.351179] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 253) [ 0.351229] io scheduler noop registered [ 0.351233] io scheduler deadline registered [ 0.351247] io scheduler cfq registered (default) [ 0.351450] pcieport 0000:00:01.0: setting latency timer to 64 [ 0.351502] pcieport 0000:00:01.0: irq 40 for MSI/MSI-X [ 0.351585] 
pcieport 0000:00:1c.0: setting latency timer to 64 [ 0.351639] pcieport 0000:00:1c.0: irq 41 for MSI/MSI-X [ 0.351728] pcieport 0000:00:1c.4: setting latency timer to 64 [ 0.351779] pcieport 0000:00:1c.4: irq 42 for MSI/MSI-X [ 0.351875] pcieport 0000:00:1c.5: setting latency timer to 64 [ 0.351927] pcieport 0000:00:1c.5: irq 43 for MSI/MSI-X [ 0.352094] pci_hotplug: PCI Hot Plug PCI Core version: 0.5 [ 0.352143] pciehp: PCI Express Hot Plug Controller Driver version: 0.4 [ 0.352311] intel_idle: MWAIT substates: 0x22220 [ 0.352315] intel_idle: does not run on family 6 model 23 [ 0.352446] input: Power Button as /devices/LNXSYSTM:00/device:00/PNP0C0C:00/input/input0 [ 0.352455] ACPI: Power Button [PWRB] [ 0.352556] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input1 [ 0.352562] ACPI: Power Button [PWRF] [ 0.352650] ACPI: Fan [FAN] (on) [ 0.355667] thermal LNXTHERM:00: registered as thermal_zone0 [ 0.355673] ACPI: Thermal Zone [THRM] (26 C) [ 0.355750] ERST: Table is not found! [ 0.355753] GHES: HEST is not enabled! [ 0.355898] Serial: 8250/16550 driver, 32 ports, IRQ sharing enabled [ 0.376332] serial8250: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A [ 0.376582] isapnp: Scanning for PnP cards... [ 0.709133] Freeing initrd memory: 13792k freed [ 0.729743] isapnp: No Plug & Play device found [ 0.816786] 00:07: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A [ 0.832385] Linux agpgart interface v0.103 [ 0.835605] brd: module loaded [ 0.837138] loop: module loaded [ 0.837452] ata_piix 0000:00:1f.2: version 2.13 [ 0.837473] ata_piix 0000:00:1f.2: PCI INT A -> GSI 19 (level, low) -> IRQ 19 [ 0.837480] ata_piix 0000:00:1f.2: MAP [ P0 P2 P1 P3 ] [ 0.837546] ata_piix 0000:00:1f.2: setting latency timer to 64 [ 0.838099] scsi0 : ata_piix [ 0.838253] scsi1 : ata_piix [ 0.839183] ata1: SATA max UDMA/133 cmd 0xf900 ctl 0xf800 bmdma 0xf500 irq 19 [ 0.839192] ata2: SATA max UDMA/133 cmd 0xf700 ctl 0xf600 bmdma 0xf508 irq 19 [ 0.839239] ata_piix 0000:00:1f.5: PCI INT A -> GSI 19 (level, low) -> IRQ 19 [ 0.839246] ata_piix 0000:00:1f.5: MAP [ P0 -- P1 -- ] [ 0.839300] ata_piix 0000:00:1f.5: setting latency timer to 64 [ 0.839708] scsi2 : ata_piix [ 0.839841] scsi3 : ata_piix [ 0.840301] ata3: SATA max UDMA/133 cmd 0xf200 ctl 0xf100 bmdma 0xee00 irq 19 [ 0.840308] ata4: SATA max UDMA/133 cmd 0xf000 ctl 0xef00 bmdma 0xee08 irq 19 [ 0.840429] pata_acpi 0000:03:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 0.840467] pata_acpi 0000:03:00.0: setting latency timer to 64 [ 0.840488] pata_acpi 0000:03:00.0: PCI INT A disabled [ 0.841159] Fixed MDIO Bus: probed [ 0.841205] tun: Universal TUN/TAP device driver, 1.6 [ 0.841210] tun: (C) 1999-2004 Max Krasnyansky <[email protected]> [ 0.841322] PPP generic driver version 2.4.2 [ 0.841515] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver [ 0.841542] ehci_hcd 0000:00:1a.7: PCI INT C -> GSI 18 (level, low) -> IRQ 18 [ 0.841567] ehci_hcd 0000:00:1a.7: setting latency timer to 64 [ 0.841573] ehci_hcd 0000:00:1a.7: EHCI Host Controller [ 0.841658] ehci_hcd 0000:00:1a.7: new USB bus registered, assigned bus number 1 [ 0.845582] ehci_hcd 0000:00:1a.7: cache line size of 4 is not supported [ 0.845610] ehci_hcd 0000:00:1a.7: irq 18, io mem 0xfdfff000 [ 0.860022] ehci_hcd 0000:00:1a.7: USB 2.0 started, EHCI 1.00 [ 0.860264] hub 1-0:1.0: USB hub found [ 0.860272] hub 1-0:1.0: 6 ports detected [ 0.860404] ehci_hcd 0000:00:1d.7: PCI INT A -> GSI 23 (level, low) -> IRQ 23 [ 0.860424] ehci_hcd 0000:00:1d.7: setting latency timer to 64 [ 0.860430] ehci_hcd 
0000:00:1d.7: EHCI Host Controller [ 0.860512] ehci_hcd 0000:00:1d.7: new USB bus registered, assigned bus number 2 [ 0.864413] ehci_hcd 0000:00:1d.7: cache line size of 4 is not supported [ 0.864438] ehci_hcd 0000:00:1d.7: irq 23, io mem 0xfdffe000 [ 0.880021] ehci_hcd 0000:00:1d.7: USB 2.0 started, EHCI 1.00 [ 0.880227] hub 2-0:1.0: USB hub found [ 0.880234] hub 2-0:1.0: 6 ports detected [ 0.880369] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver [ 0.880396] uhci_hcd: USB Universal Host Controller Interface driver [ 0.880431] uhci_hcd 0000:00:1a.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 0.880443] uhci_hcd 0000:00:1a.0: setting latency timer to 64 [ 0.880449] uhci_hcd 0000:00:1a.0: UHCI Host Controller [ 0.880529] uhci_hcd 0000:00:1a.0: new USB bus registered, assigned bus number 3 [ 0.880574] uhci_hcd 0000:00:1a.0: irq 16, io base 0x0000ff00 [ 0.880803] hub 3-0:1.0: USB hub found [ 0.880811] hub 3-0:1.0: 2 ports detected [ 0.880929] uhci_hcd 0000:00:1a.1: PCI INT B -> GSI 21 (level, low) -> IRQ 21 [ 0.880940] uhci_hcd 0000:00:1a.1: setting latency timer to 64 [ 0.880946] uhci_hcd 0000:00:1a.1: UHCI Host Controller [ 0.881039] uhci_hcd 0000:00:1a.1: new USB bus registered, assigned bus number 4 [ 0.881081] uhci_hcd 0000:00:1a.1: irq 21, io base 0x0000fe00 [ 0.881302] hub 4-0:1.0: USB hub found [ 0.881310] hub 4-0:1.0: 2 ports detected [ 0.881427] uhci_hcd 0000:00:1a.2: PCI INT D -> GSI 19 (level, low) -> IRQ 19 [ 0.881438] uhci_hcd 0000:00:1a.2: setting latency timer to 64 [ 0.881443] uhci_hcd 0000:00:1a.2: UHCI Host Controller [ 0.881523] uhci_hcd 0000:00:1a.2: new USB bus registered, assigned bus number 5 [ 0.881551] uhci_hcd 0000:00:1a.2: irq 19, io base 0x0000fd00 [ 0.881774] hub 5-0:1.0: USB hub found [ 0.881781] hub 5-0:1.0: 2 ports detected [ 0.881899] uhci_hcd 0000:00:1d.0: PCI INT A -> GSI 23 (level, low) -> IRQ 23 [ 0.881910] uhci_hcd 0000:00:1d.0: setting latency timer to 64 [ 0.881915] uhci_hcd 0000:00:1d.0: UHCI Host Controller [ 0.881993] uhci_hcd 0000:00:1d.0: new USB bus registered, assigned bus number 6 [ 0.882021] uhci_hcd 0000:00:1d.0: irq 23, io base 0x0000fc00 [ 0.882244] hub 6-0:1.0: USB hub found [ 0.882252] hub 6-0:1.0: 2 ports detected [ 0.882370] uhci_hcd 0000:00:1d.1: PCI INT B -> GSI 19 (level, low) -> IRQ 19 [ 0.882381] uhci_hcd 0000:00:1d.1: setting latency timer to 64 [ 0.882386] uhci_hcd 0000:00:1d.1: UHCI Host Controller [ 0.882467] uhci_hcd 0000:00:1d.1: new USB bus registered, assigned bus number 7 [ 0.882495] uhci_hcd 0000:00:1d.1: irq 19, io base 0x0000fb00 [ 0.882735] hub 7-0:1.0: USB hub found [ 0.882742] hub 7-0:1.0: 2 ports detected [ 0.882858] uhci_hcd 0000:00:1d.2: PCI INT C -> GSI 18 (level, low) -> IRQ 18 [ 0.882869] uhci_hcd 0000:00:1d.2: setting latency timer to 64 [ 0.882875] uhci_hcd 0000:00:1d.2: UHCI Host Controller [ 0.882954] uhci_hcd 0000:00:1d.2: new USB bus registered, assigned bus number 8 [ 0.882982] uhci_hcd 0000:00:1d.2: irq 18, io base 0x0000fa00 [ 0.883205] hub 8-0:1.0: USB hub found [ 0.883213] hub 8-0:1.0: 2 ports detected [ 0.883435] usbcore: registered new interface driver libusual [ 0.883535] i8042: PNP: No PS/2 controller found. Probing ports directly. 
[ 0.883926] serio: i8042 KBD port at 0x60,0x64 irq 1 [ 0.883936] serio: i8042 AUX port at 0x60,0x64 irq 12 [ 0.884187] mousedev: PS/2 mouse device common for all mice [ 0.884433] rtc_cmos 00:03: RTC can wake from S4 [ 0.884582] rtc_cmos 00:03: rtc core: registered rtc_cmos as rtc0 [ 0.884612] rtc0: alarms up to one month, 242 bytes nvram, hpet irqs [ 0.884719] device-mapper: uevent: version 1.0.3 [ 0.884854] device-mapper: ioctl: 4.22.0-ioctl (2011-10-19) initialised: [email protected] [ 0.884917] EISA: Probing bus 0 at eisa.0 [ 0.884921] EISA: Cannot allocate resource for mainboard [ 0.884925] Cannot allocate resource for EISA slot 1 [ 0.884929] Cannot allocate resource for EISA slot 2 [ 0.884932] Cannot allocate resource for EISA slot 3 [ 0.884936] Cannot allocate resource for EISA slot 4 [ 0.884940] Cannot allocate resource for EISA slot 5 [ 0.884943] Cannot allocate resource for EISA slot 6 [ 0.884947] Cannot allocate resource for EISA slot 7 [ 0.884950] Cannot allocate resource for EISA slot 8 [ 0.884954] EISA: Detected 0 cards. [ 0.884969] cpufreq-nforce2: No nForce2 chipset. [ 0.884973] cpuidle: using governor ladder [ 0.884976] cpuidle: using governor menu [ 0.884980] EFI Variables Facility v0.08 2004-May-17 [ 0.885476] TCP cubic registered [ 0.885708] NET: Registered protocol family 10 [ 0.886771] NET: Registered protocol family 17 [ 0.886799] Registering the dns_resolver key type [ 0.886837] Using IPI No-Shortcut mode [ 0.887028] PM: Hibernation image not present or could not be loaded. [ 0.887047] registered taskstats version 1 [ 0.902579] Magic number: 12:339:388 [ 0.902592] usb usb6: hash matches [ 0.902687] rtc_cmos 00:03: setting system clock to 2012-05-11 16:23:41 UTC (1336753421) [ 0.903185] BIOS EDD facility v0.16 2004-Jun-25, 0 devices found [ 0.903189] EDD information not available. [ 1.170710] ata3: SATA link down (SStatus 0 SControl 300) [ 1.181439] ata4: SATA link down (SStatus 0 SControl 300) [ 1.288020] Refined TSC clocksource calibration: 2499.999 MHz. 
[ 1.288028] Switching to clocksource tsc [ 1.292016] usb 1-5: new high-speed USB device number 3 using ehci_hcd [ 1.486745] ata2.00: SATA link down (SStatus 0 SControl 300) [ 1.486762] ata2.01: SATA link down (SStatus 0 SControl 300) [ 1.640115] ata1.00: SATA link up 1.5 Gbps (SStatus 113 SControl 300) [ 1.640130] ata1.01: SATA link down (SStatus 0 SControl 300) [ 1.648342] ata1.00: ATA-7: Maxtor 7Y250M0, YAR511W0, max UDMA/133 [ 1.648348] ata1.00: 490234752 sectors, multi 0: LBA48 [ 1.664325] ata1.00: configured for UDMA/133 [ 1.664531] scsi 0:0:0:0: Direct-Access ATA Maxtor 7Y250M0 YAR5 PQ: 0 ANSI: 5 [ 1.664745] sd 0:0:0:0: [sda] 490234752 512-byte logical blocks: (251 GB/233 GiB) [ 1.664809] sd 0:0:0:0: Attached scsi generic sg0 type 0 [ 1.664838] sd 0:0:0:0: [sda] Write Protect is off [ 1.664843] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 [ 1.664884] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA [ 1.691699] sda: sda1 sda2 sda3 sda4 [ 1.692348] sd 0:0:0:0: [sda] Attached SCSI disk [ 1.692461] Freeing unused kernel memory: 740k freed [ 1.692820] Write protecting the kernel text: 5828k [ 1.692851] Write protecting the kernel read-only data: 2376k [ 1.692854] NX-protecting the kernel data: 4412k [ 1.723980] udevd[92]: starting version 175 [ 1.865339] Floppy drive(s): fd0 is 1.44M [ 1.865429] pata_jmicron 0000:03:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 1.865478] pata_jmicron 0000:03:00.0: setting latency timer to 64 [ 1.867875] sky2: driver version 1.30 [ 1.867926] sky2 0000:04:00.0: PCI INT A -> GSI 17 (level, low) -> IRQ 17 [ 1.867942] sky2 0000:04:00.0: setting latency timer to 64 [ 1.867979] sky2 0000:04:00.0: Yukon-2 EC chip revision 2 [ 1.868111] sky2 0000:04:00.0: irq 44 for MSI/MSI-X [ 1.868174] scsi4 : pata_jmicron [ 1.869802] sky2 0000:04:00.0: eth0: addr 00:01:29:a4:16:0a [ 1.869828] scsi5 : pata_jmicron [ 1.869943] ata5: PATA max UDMA/100 cmd 0xdf00 ctl 0xde00 bmdma 0xdb00 irq 16 [ 1.869949] ata6: PATA max UDMA/100 cmd 0xdd00 ctl 0xdc00 bmdma 0xdb08 irq 16 [ 1.880053] usb 4-1: new full-speed USB device number 2 using uhci_hcd [ 1.884052] FDC 0 is a post-1991 82077 [ 2.032611] ata5.00: ATAPI: _NEC DVD+/-RW ND-3450A, 103C, max UDMA/33 [ 2.048585] ata5.00: configured for UDMA/33 [ 2.049777] scsi 4:0:0:0: CD-ROM _NEC DVD+-RW ND-3450A 103C PQ: 0 ANSI: 5 [ 2.051048] sr0: scsi3-mmc drive: 48x/48x writer cd/rw xa/form2 cdda tray [ 2.051054] cdrom: Uniform CD-ROM driver Revision: 3.20 [ 2.051283] sr 4:0:0:0: Attached scsi CD-ROM sr0 [ 2.051483] sr 4:0:0:0: Attached scsi generic sg1 type 5 [ 2.079838] usbcore: registered new interface driver usbhid [ 2.079844] usbhid: USB HID core driver [ 2.236660] EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts: (null) [ 12.150230] ADDRCONF(NETDEV_UP): eth0: link is not ready [ 12.177342] udevd[333]: starting version 175 [ 12.195524] Adding 417684k swap on /dev/sda2. Priority:-1 extents:1 across:417684k [ 12.278032] lp: driver loaded but no devices found [ 12.516456] logitech-djreceiver 0003:046D:C52B.0003: hiddev0,hidraw0: USB HID v1.11 Device [Logitech USB Receiver] on usb-0000:00:1a.1-1/input2 [ 12.520297] input: Logitech Unifying Device. Wireless PID:1024 as /devices/pci0000:00/0000:00:1a.1/usb4/4-1/4-1:1.2/0003:046D:C52B.0003/input/input2 [ 12.520753] logitech-djdevice 0003:046D:C52B.0004: input,hidraw1: USB HID v1.11 Mouse [Logitech Unifying Device. Wireless PID:1024] on usb-0000:00:1a.1-1:1 [ 12.523286] input: Logitech Unifying Device. 
Wireless PID:2011 as /devices/pci0000:00/0000:00:1a.1/usb4/4-1/4-1:1.2/0003:046D:C52B.0003/input/input3 [ 12.524439] logitech-djdevice 0003:046D:C52B.0005: input,hidraw2: USB HID v1.11 Keyboard [Logitech Unifying Device. Wireless PID:2011] on usb-0000:00:1a.1-1:2 [ 12.545746] type=1400 audit(1336771433.137:2): apparmor="STATUS" operation="profile_load" name="/sbin/dhclient" pid=502 comm="apparmor_parser" [ 12.546574] type=1400 audit(1336771433.137:3): apparmor="STATUS" operation="profile_load" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=502 comm="apparmor_parser" [ 12.547034] type=1400 audit(1336771433.137:4): apparmor="STATUS" operation="profile_load" name="/usr/lib/connman/scripts/dhclient-script" pid=502 comm="apparmor_parser" [ 12.626869] Linux video capture interface: v2.00 [ 12.649104] uvcvideo: Found UVC 1.00 device <unnamed> (046d:081a) [ 12.668665] input: UVC Camera (046d:081a) as /devices/pci0000:00/0000:00:1a.7/usb1/1-5/1-5:1.0/input/input4 [ 12.668909] usbcore: registered new interface driver uvcvideo [ 12.668914] USB Video Class driver (1.1.1) [ 12.697645] snd_hda_intel 0000:00:1b.0: PCI INT A -> GSI 22 (level, low) -> IRQ 22 [ 12.697721] snd_hda_intel 0000:00:1b.0: irq 45 for MSI/MSI-X [ 12.697760] snd_hda_intel 0000:00:1b.0: setting latency timer to 64 [ 12.706772] nvidia: module license 'NVIDIA' taints kernel. [ 12.706778] Disabling lock debugging due to kernel taint [ 12.735428] EXT4-fs (sda1): re-mounted. Opts: errors=remount-ro [ 13.350252] nvidia 0000:01:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 13.350267] nvidia 0000:01:00.0: setting latency timer to 64 [ 13.350275] vgaarb: device changed decodes: PCI:0000:01:00.0,olddecodes=io+mem,decodes=none:owns=io+mem [ 13.351464] NVRM: loading NVIDIA UNIX x86 Kernel Module 295.40 Thu Apr 5 21:28:09 PDT 2012 [ 13.356785] hda_codec: ALC889A: BIOS auto-probing. 
[ 13.357267] init: failsafe main process (658) killed by TERM signal [ 13.372756] input: HDA Intel Line as /devices/pci0000:00/0000:00:1b.0/sound/card0/input5 [ 13.373173] input: HDA Intel Front Mic as /devices/pci0000:00/0000:00:1b.0/sound/card0/input6 [ 13.373568] input: HDA Intel Rear Mic as /devices/pci0000:00/0000:00:1b.0/sound/card0/input7 [ 13.373954] input: HDA Intel Front Headphone as /devices/pci0000:00/0000:00:1b.0/sound/card0/input8 [ 13.374339] input: HDA Intel Line-Out Side as /devices/pci0000:00/0000:00:1b.0/sound/card0/input9 [ 13.374715] input: HDA Intel Line-Out CLFE as /devices/pci0000:00/0000:00:1b.0/sound/card0/input10 [ 13.375109] input: HDA Intel Line-Out Surround as /devices/pci0000:00/0000:00:1b.0/sound/card0/input11 [ 13.375724] input: HDA Intel Line-Out Front as /devices/pci0000:00/0000:00:1b.0/sound/card0/input12 [ 13.475252] type=1400 audit(1336771434.065:5): apparmor="STATUS" operation="profile_replace" name="/sbin/dhclient" pid=735 comm="apparmor_parser" [ 13.477026] type=1400 audit(1336771434.069:6): apparmor="STATUS" operation="profile_replace" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=735 comm="apparmor_parser" [ 13.477695] type=1400 audit(1336771434.069:7): apparmor="STATUS" operation="profile_replace" name="/usr/lib/connman/scripts/dhclient-script" pid=735 comm="apparmor_parser" [ 13.479048] type=1400 audit(1336771434.069:8): apparmor="STATUS" operation="profile_load" name="/usr/lib/lightdm/lightdm/lightdm-guest-session-wrapper" pid=734 comm="apparmor_parser" [ 13.488994] type=1400 audit(1336771434.081:9): apparmor="STATUS" operation="profile_load" name="/usr/lib/telepathy/mission-control-5" pid=738 comm="apparmor_parser" [ 13.489972] type=1400 audit(1336771434.081:10): apparmor="STATUS" operation="profile_load" name="/usr/lib/telepathy/telepathy-*" pid=738 comm="apparmor_parser" [ 13.

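    Since running 'sudo dhclient eth0' by hand fixes it, the interface is simply never being configured at boot. A minimal sketch of the classic ifupdown fix, assuming the NIC really is eth0 and that NetworkManager is not supposed to manage it (both assumptions taken from the question, not verified):

        # /etc/network/interfaces -- minimal sketch; replace eth0 with the real interface name
        auto eth0
        iface eth0 inet dhcp

    After saving, 'sudo ifdown eth0 && sudo ifup eth0' (or a reboot) should pull an address via DHCP. If the desktop is meant to use NetworkManager instead, the same symptom can appear when eth0 is listed in /etc/network/interfaces and is therefore ignored by NetworkManager, so it is worth deciding which of the two should own the interface.
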
    Read the article

  • Virtual host is not working in Ubuntu 14 VPS using XAMPP 1.8.3

    - by viral4ever
    I am using XAMPP as server in ubuntu 14.04 VPS of digitalocean. I tried to setup virtual hosts. But it is not working and I am getting 403 error of access denied. I changed files too. My files with changes are /opt/lampp/etc/httpd.conf # # This is the main Apache HTTP server configuration file. It contains the # configuration directives that give the server its instructions. # See <URL:http://httpd.apache.org/docs/trunk/> for detailed information. # In particular, see # <URL:http://httpd.apache.org/docs/trunk/mod/directives.html> # for a discussion of each configuration directive. # # Do NOT simply read the instructions in here without understanding # what they do. They're here only as hints or reminders. If you are unsure # consult the online docs. You have been warned. # # Configuration and logfile names: If the filenames you specify for many # of the server's control files begin with "/" (or "drive:/" for Win32), the # server will use that explicit path. If the filenames do *not* begin # with "/", the value of ServerRoot is prepended -- so 'log/access_log' # with ServerRoot set to '/www' will be interpreted by the # server as '/www/log/access_log', where as '/log/access_log' will be # interpreted as '/log/access_log'. # # ServerRoot: The top of the directory tree under which the server's # configuration, error, and log files are kept. # # Do not add a slash at the end of the directory path. If you point # ServerRoot at a non-local disk, be sure to specify a local disk on the # Mutex directive, if file-based mutexes are used. If you wish to share the # same ServerRoot for multiple httpd daemons, you will need to change at # least PidFile. # ServerRoot "/opt/lampp" # # Mutex: Allows you to set the mutex mechanism and mutex file directory # for individual mutexes, or change the global defaults # # Uncomment and change the directory if mutexes are file-based and the default # mutex file directory is not on a local disk or is not appropriate for some # other reason. # # Mutex default:logs # # Listen: Allows you to bind Apache to specific IP addresses and/or # ports, instead of the default. See also the <VirtualHost> # directive. # # Change this to Listen on specific IP addresses as shown below to # prevent Apache from glomming onto all bound IP addresses. # #Listen 12.34.56.78:80 Listen 80 # # Dynamic Shared Object (DSO) Support # # To be able to use the functionality of a module which was built as a DSO you # have to place corresponding `LoadModule' lines at this location so the # directives contained in it are actually available _before_ they are used. # Statically compiled modules (those listed by `httpd -l') do not need # to be loaded here. 
# # Example: # LoadModule foo_module modules/mod_foo.so # LoadModule authn_file_module modules/mod_authn_file.so LoadModule authn_dbm_module modules/mod_authn_dbm.so LoadModule authn_anon_module modules/mod_authn_anon.so LoadModule authn_dbd_module modules/mod_authn_dbd.so LoadModule authn_socache_module modules/mod_authn_socache.so LoadModule authn_core_module modules/mod_authn_core.so LoadModule authz_host_module modules/mod_authz_host.so LoadModule authz_groupfile_module modules/mod_authz_groupfile.so LoadModule authz_user_module modules/mod_authz_user.so LoadModule authz_dbm_module modules/mod_authz_dbm.so LoadModule authz_owner_module modules/mod_authz_owner.so LoadModule authz_dbd_module modules/mod_authz_dbd.so LoadModule authz_core_module modules/mod_authz_core.so LoadModule authnz_ldap_module modules/mod_authnz_ldap.so LoadModule access_compat_module modules/mod_access_compat.so LoadModule auth_basic_module modules/mod_auth_basic.so LoadModule auth_form_module modules/mod_auth_form.so LoadModule auth_digest_module modules/mod_auth_digest.so LoadModule allowmethods_module modules/mod_allowmethods.so LoadModule file_cache_module modules/mod_file_cache.so LoadModule cache_module modules/mod_cache.so LoadModule cache_disk_module modules/mod_cache_disk.so LoadModule socache_shmcb_module modules/mod_socache_shmcb.so LoadModule socache_dbm_module modules/mod_socache_dbm.so LoadModule socache_memcache_module modules/mod_socache_memcache.so LoadModule dbd_module modules/mod_dbd.so LoadModule bucketeer_module modules/mod_bucketeer.so LoadModule dumpio_module modules/mod_dumpio.so LoadModule echo_module modules/mod_echo.so LoadModule case_filter_module modules/mod_case_filter.so LoadModule case_filter_in_module modules/mod_case_filter_in.so LoadModule buffer_module modules/mod_buffer.so LoadModule ratelimit_module modules/mod_ratelimit.so LoadModule reqtimeout_module modules/mod_reqtimeout.so LoadModule ext_filter_module modules/mod_ext_filter.so LoadModule request_module modules/mod_request.so LoadModule include_module modules/mod_include.so LoadModule filter_module modules/mod_filter.so LoadModule substitute_module modules/mod_substitute.so LoadModule sed_module modules/mod_sed.so LoadModule charset_lite_module modules/mod_charset_lite.so LoadModule deflate_module modules/mod_deflate.so LoadModule mime_module modules/mod_mime.so LoadModule ldap_module modules/mod_ldap.so LoadModule log_config_module modules/mod_log_config.so LoadModule log_debug_module modules/mod_log_debug.so LoadModule logio_module modules/mod_logio.so LoadModule env_module modules/mod_env.so LoadModule mime_magic_module modules/mod_mime_magic.so LoadModule cern_meta_module modules/mod_cern_meta.so LoadModule expires_module modules/mod_expires.so LoadModule headers_module modules/mod_headers.so LoadModule usertrack_module modules/mod_usertrack.so LoadModule unique_id_module modules/mod_unique_id.so LoadModule setenvif_module modules/mod_setenvif.so LoadModule version_module modules/mod_version.so LoadModule remoteip_module modules/mod_remoteip.so LoadModule proxy_module modules/mod_proxy.so LoadModule proxy_connect_module modules/mod_proxy_connect.so LoadModule proxy_ftp_module modules/mod_proxy_ftp.so LoadModule proxy_http_module modules/mod_proxy_http.so LoadModule proxy_fcgi_module modules/mod_proxy_fcgi.so LoadModule proxy_scgi_module modules/mod_proxy_scgi.so LoadModule proxy_ajp_module modules/mod_proxy_ajp.so LoadModule proxy_balancer_module modules/mod_proxy_balancer.so LoadModule proxy_express_module 
modules/mod_proxy_express.so LoadModule session_module modules/mod_session.so LoadModule session_cookie_module modules/mod_session_cookie.so LoadModule session_dbd_module modules/mod_session_dbd.so LoadModule slotmem_shm_module modules/mod_slotmem_shm.so LoadModule ssl_module modules/mod_ssl.so LoadModule lbmethod_byrequests_module modules/mod_lbmethod_byrequests.so LoadModule lbmethod_bytraffic_module modules/mod_lbmethod_bytraffic.so LoadModule lbmethod_bybusyness_module modules/mod_lbmethod_bybusyness.so LoadModule lbmethod_heartbeat_module modules/mod_lbmethod_heartbeat.so LoadModule unixd_module modules/mod_unixd.so LoadModule dav_module modules/mod_dav.so LoadModule status_module modules/mod_status.so LoadModule autoindex_module modules/mod_autoindex.so LoadModule info_module modules/mod_info.so LoadModule suexec_module modules/mod_suexec.so LoadModule cgi_module modules/mod_cgi.so LoadModule cgid_module modules/mod_cgid.so LoadModule dav_fs_module modules/mod_dav_fs.so LoadModule vhost_alias_module modules/mod_vhost_alias.so LoadModule negotiation_module modules/mod_negotiation.so LoadModule dir_module modules/mod_dir.so LoadModule actions_module modules/mod_actions.so LoadModule speling_module modules/mod_speling.so LoadModule userdir_module modules/mod_userdir.so LoadModule alias_module modules/mod_alias.so LoadModule rewrite_module modules/mod_rewrite.so <IfDefine JUSTTOMAKEAPXSHAPPY> LoadModule php4_module modules/libphp4.so LoadModule php5_module modules/libphp5.so </IfDefine> <IfModule unixd_module> # # If you wish httpd to run as a different user or group, you must run # httpd as root initially and it will switch. # # User/Group: The name (or #number) of the user/group to run httpd as. # It is usually good practice to create a dedicated user and group for # running httpd, as with most system services. # User root Group www </IfModule> # 'Main' server configuration # # The directives in this section set up the values used by the 'main' # server, which responds to any requests that aren't handled by a # <VirtualHost> definition. These values also provide defaults for # any <VirtualHost> containers you may define later in the file. # # All of these directives may appear inside <VirtualHost> containers, # in which case these default settings will be overridden for the # virtual host being defined. # # # ServerAdmin: Your address, where problems with the server should be # e-mailed. This address appears on some server-generated pages, such # as error documents. e.g. [email protected] # ServerAdmin [email protected] # # ServerName gives the name and port that the server uses to identify itself. # This can often be determined automatically, but we recommend you specify # it explicitly to prevent problems during startup. # # If your host doesn't have a registered DNS name, enter its IP address here. # #ServerName www.example.com:@@Port@@ # XAMPP ServerName localhost # # Deny access to the entirety of your server's filesystem. You must # explicitly permit access to web content directories in other # <Directory> blocks below. # <Directory /> AllowOverride none Require all denied </Directory> # # Note that from this point forward you must specifically allow # particular features to be enabled - so if something's not working as # you might expect, make sure that you have specifically enabled it # below. # # # DocumentRoot: The directory out of which you will serve your # documents. 
By default, all requests are taken from this directory, but # symbolic links and aliases may be used to point to other locations. # DocumentRoot "/opt/lampp/htdocs" <Directory "/opt/lampp/htdocs"> # # Possible values for the Options directive are "None", "All", # or any combination of: # Indexes Includes FollowSymLinks SymLinksifOwnerMatch ExecCGI MultiViews # # Note that "MultiViews" must be named *explicitly* --- "Options All" # doesn't give it to you. # # The Options directive is both complicated and important. Please see # http://httpd.apache.org/docs/trunk/mod/core.html#options # for more information. # #Options Indexes FollowSymLinks # XAMPP Options Indexes FollowSymLinks ExecCGI Includes # # AllowOverride controls what directives may be placed in .htaccess files. # It can be "All", "None", or any combination of the keywords: # Options FileInfo AuthConfig Limit # #AllowOverride None # since XAMPP 1.4: AllowOverride All # # Controls who can get stuff from this server. # Require all granted </Directory> # # DirectoryIndex: sets the file that Apache will serve if a directory # is requested. # <IfModule dir_module> #DirectoryIndex index.html # XAMPP DirectoryIndex index.html index.html.var index.php index.php3 index.php4 </IfModule> # # The following lines prevent .htaccess and .htpasswd files from being # viewed by Web clients. # <Files ".ht*"> Require all denied </Files> # # ErrorLog: The location of the error log file. # If you do not specify an ErrorLog directive within a <VirtualHost> # container, error messages relating to that virtual host will be # logged here. If you *do* define an error logfile for a <VirtualHost> # container, that host's errors will be logged there and not here. # ErrorLog "logs/error_log" # # LogLevel: Control the number of messages logged to the error_log. # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. # LogLevel warn <IfModule log_config_module> # # The following directives define some format nicknames for use with # a CustomLog directive (see below). # LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %b" common <IfModule logio_module> # You need to enable mod_logio.c to use %I and %O LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio </IfModule> # # The location and format of the access logfile (Common Logfile Format). # If you do not define any access logfiles within a <VirtualHost> # container, they will be logged here. Contrariwise, if you *do* # define per-<VirtualHost> access logfiles, transactions will be # logged therein and *not* in this file. # CustomLog "logs/access_log" common # # If you prefer a logfile with access, agent, and referer information # (Combined Logfile Format) you can use the following directive. # #CustomLog "logs/access_log" combined </IfModule> <IfModule alias_module> # # Redirect: Allows you to tell clients about documents that used to # exist in your server's namespace, but do not anymore. The client # will make a new request for the document at its new location. # Example: # Redirect permanent /foo http://www.example.com/bar # # Alias: Maps web paths into filesystem paths and is used to # access content that does not live under the DocumentRoot. # Example: # Alias /webpath /full/filesystem/path # # If you include a trailing / on /webpath then the server will # require it to be present in the URL. 
You will also likely # need to provide a <Directory> section to allow access to # the filesystem path. # # ScriptAlias: This controls which directories contain server scripts. # ScriptAliases are essentially the same as Aliases, except that # documents in the target directory are treated as applications and # run by the server when requested rather than as documents sent to the # client. The same rules about trailing "/" apply to ScriptAlias # directives as to Alias. # ScriptAlias /cgi-bin/ "/opt/lampp/cgi-bin/" </IfModule> <IfModule cgid_module> # # ScriptSock: On threaded servers, designate the path to the UNIX # socket used to communicate with the CGI daemon of mod_cgid. # #Scriptsock logs/cgisock </IfModule> # # "/opt/lampp/cgi-bin" should be changed to whatever your ScriptAliased # CGI directory exists, if you have that configured. # <Directory "/opt/lampp/cgi-bin"> AllowOverride None Options None Require all granted </Directory> <IfModule mime_module> # # TypesConfig points to the file containing the list of mappings from # filename extension to MIME-type. # TypesConfig etc/mime.types # # AddType allows you to add to or override the MIME configuration # file specified in TypesConfig for specific file types. # #AddType application/x-gzip .tgz # # AddEncoding allows you to have certain browsers uncompress # information on the fly. Note: Not all browsers support this. # #AddEncoding x-compress .Z #AddEncoding x-gzip .gz .tgz # # If the AddEncoding directives above are commented-out, then you # probably should define those extensions to indicate media types: # AddType application/x-compress .Z AddType application/x-gzip .gz .tgz # # AddHandler allows you to map certain file extensions to "handlers": # actions unrelated to filetype. These can be either built into the server # or added with the Action directive (see below) # # To use CGI scripts outside of ScriptAliased directories: # (You will also need to add "ExecCGI" to the "Options" directive.) # #AddHandler cgi-script .cgi # XAMPP, since LAMPP 0.9.8: AddHandler cgi-script .cgi .pl # For type maps (negotiated resources): #AddHandler type-map var # # Filters allow you to process content before it is sent to the client. # # To parse .shtml files for server-side includes (SSI): # (You will also need to add "Includes" to the "Options" directive.) # # XAMPP AddType text/html .shtml AddOutputFilter INCLUDES .shtml </IfModule> # # The mod_mime_magic module allows the server to use various hints from the # contents of the file itself to determine its type. The MIMEMagicFile # directive tells the module where the hint definitions are located. # #MIMEMagicFile etc/magic # # Customizable error responses come in three flavors: # 1) plain text 2) local redirects 3) external redirects # # Some examples: #ErrorDocument 500 "The server made a boo boo." #ErrorDocument 404 /missing.html #ErrorDocument 404 "/cgi-bin/missing_handler.pl" #ErrorDocument 402 http://www.example.com/subscription_info.html # # # MaxRanges: Maximum number of Ranges in a request before # returning the entire resource, or one of the special # values 'default', 'none' or 'unlimited'. # Default setting is to accept 200 Ranges. #MaxRanges unlimited # # EnableMMAP and EnableSendfile: On systems that support it, # memory-mapping or the sendfile syscall may be used to deliver # files. This usually improves server performance, but must # be turned off when serving from networked-mounted # filesystems or if support for these functions is otherwise # broken on your system. 
# Defaults: EnableMMAP On, EnableSendfile Off # EnableMMAP off EnableSendfile off # Supplemental configuration # # The configuration files in the etc/extra/ directory can be # included to add extra features or to modify the default configuration of # the server, or you may simply copy their contents here and change as # necessary. # Server-pool management (MPM specific) #Include etc/extra/httpd-mpm.conf # Multi-language error messages Include etc/extra/httpd-multilang-errordoc.conf # Fancy directory listings Include etc/extra/httpd-autoindex.conf # Language settings #Include etc/extra/httpd-languages.conf # User home directories #Include etc/extra/httpd-userdir.conf # Real-time info on requests and configuration #Include etc/extra/httpd-info.conf # Virtual hosts Include etc/extra/httpd-vhosts.conf # Local access to the Apache HTTP Server Manual #Include etc/extra/httpd-manual.conf # Distributed authoring and versioning (WebDAV) #Include etc/extra/httpd-dav.conf # Various default settings Include etc/extra/httpd-default.conf # Configure mod_proxy_html to understand HTML4/XHTML1 <IfModule proxy_html_module> Include etc/extra/proxy-html.conf </IfModule> # Secure (SSL/TLS) connections <IfModule ssl_module> # XAMPP <IfDefine SSL> Include etc/extra/httpd-ssl.conf </IfDefine> </IfModule> # # Note: The following must must be present to support # starting without SSL on platforms with no /dev/random equivalent # but a statically compiled-in mod_ssl. # <IfModule ssl_module> SSLRandomSeed startup builtin SSLRandomSeed connect builtin </IfModule> # XAMPP Include etc/extra/httpd-xampp.conf Include "/opt/lampp/apache2/conf/httpd.conf" I used command shown in this example. I used below lines to change and add group Add group "groupadd www" Add user to group "usermod -aG www root" Change htdocs group "chgrp -R www /opt/lampp/htdocs" Change sitedir group "chgrp -R www /opt/lampp/htdocs/mysite" Change htdocs chmod "chmod 2775 /opt/lampp/htdocs" Change sitedir chmod "chmod 2775 /opt/lampp/htdocs/mysite" And then I changed my vhosts.conf file # Virtual Hosts # # Required modules: mod_log_config # If you want to maintain multiple domains/hostnames on your # machine you can setup VirtualHost containers for them. Most configurations # use only name-based virtual hosts so the server doesn't need to worry about # IP addresses. This is indicated by the asterisks in the directives below. # # Please see the documentation at # <URL:http://httpd.apache.org/docs/2.4/vhosts/> # for further details before you try to setup virtual hosts. # # You may use the command line option '-S' to verify your virtual host # configuration. # # VirtualHost example: # Almost any Apache directive may go into a VirtualHost container. # The first VirtualHost section is used for all requests that do not # match a ServerName or ServerAlias in any <VirtualHost> block. 
#
<VirtualHost *:80>
    ServerAdmin [email protected]
    DocumentRoot "/opt/lampp/docs/dummy-host.example.com"
    ServerName dummy-host.example.com
    ServerAlias www.dummy-host.example.com
    ErrorLog "logs/dummy-host.example.com-error_log"
    CustomLog "logs/dummy-host.example.com-access_log" common
</VirtualHost>

<VirtualHost *:80>
    ServerAdmin [email protected]
    DocumentRoot "/opt/lampp/docs/dummy-host2.example.com"
    ServerName dummy-host2.example.com
    ErrorLog "logs/dummy-host2.example.com-error_log"
    CustomLog "logs/dummy-host2.example.com-access_log" common
</VirtualHost>

NameVirtualHost *
<VirtualHost *>
    ServerAdmin [email protected]
    DocumentRoot "/opt/lampp/htdocs/mysite"
    ServerName mysite.com
    ServerAlias mysite.com
    ErrorLog "/opt/lampp/htdocs/mysite/errorlogs"
    CustomLog "/opt/lampp/htdocs/mysite/customlog" common
    <Directory "/opt/lampp/htdocs/mysite">
        Options Indexes FollowSymLinks Includes ExecCGI
        AllowOverride All
        Order Allow,Deny
        Allow from all
        Require all granted
    </Directory>
</VirtualHost>

But it is still not working: I get a 403 error on both my IP and my domain, although I can access phpMyAdmin. If anyone can help me, please help me.

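    A 403 in this layout is almost always either a <Directory> section that never grants access or plain filesystem permissions, rather than the vhost matching itself. A minimal sketch of a single name-based virtual host for the Apache 2.4 bundled with XAMPP, reusing the paths from the question (mysite.com and /opt/lampp/htdocs/mysite are placeholders, not verified values):

        # /opt/lampp/etc/extra/httpd-vhosts.conf -- minimal sketch for Apache 2.4
        <VirtualHost *:80>
            ServerName mysite.com
            ServerAlias www.mysite.com
            DocumentRoot "/opt/lampp/htdocs/mysite"
            ErrorLog "logs/mysite.com-error_log"
            CustomLog "logs/mysite.com-access_log" common
            <Directory "/opt/lampp/htdocs/mysite">
                Options Indexes FollowSymLinks
                AllowOverride All
                Require all granted
            </Directory>
        </VirtualHost>

    On Apache 2.4 the NameVirtualHost directive and the 2.2-style Order/Allow lines are no longer needed, and mixing them with Require is a frequent source of surprises. Every directory from / down to the DocumentRoot also has to be searchable (execute bit) by the user or group the server runs as. Assuming the standard XAMPP layout, '/opt/lampp/bin/httpd -S' prints the parsed vhost table, and the entry written to the error log right after a failed request normally states which rule denied access.
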
    Read the article

  • CodePlex Daily Summary for Saturday, October 29, 2011

    CodePlex Daily Summary for Saturday, October 29, 2011Popular Releasespatterns & practices: Enterprise Library Contrib: Enterprise Library Contrib - 5.0 (Oct 2011): This release of Enterprise Library Contrib is based on the Microsoft patterns & practices Enterprise Library 5.0 core and contains the following: Common extensionsTypeConfigurationElement<T> - A Polymorphic Configuration Element without having to be part of a PolymorphicConfigurationElementCollection. AnonymousConfigurationElement - A Configuration element that can be uniquely identified without having to define its name explicitly. Data Access Application Block extensionsMySql Provider - ...Network Monitor Open Source Parsers: Network Monitor Parsers 3.4.2748: The Network Monitor Parsers packages contain parsers for more than 400 network protocols, including RFC based public protocols and protocols for Microsoft products defined in the Microsoft Open Specifications for Windows and SQL Server. NetworkMonitor_Parsers.msi is the base parser package which defines parsers for commonly used public protocols and protocols for Microsoft Windows. In this release, NetowrkMonitor_Parsers.msi continues to improve quality and fix bugs. It has included the fo...Duckworth Lewis Professional Edition Calculator: DLcalc 3.0: DLcalc 3.0 can perform Duckworth/Lewis Professional Edition calculations 100% accurately. It also produces over-by-over and ball-by-ball PAR score tables.Folder Bookmarks: Folder Bookmarks 2.2.0.1: In this version: Custom Icons - now you can change the icons of the bookmarks. By default, whenever an image is added, the icon is automatically changed to a thumbnail of the picture. This can be turned off in the settings (Options... > Settings) Ability to remove items from the 'Recent' category Bugfixes - 'Choose' button in 'Edit Bookmark' now works Another bug fix: another problem in the 'Edit Bookmark' windowMedia Companion: MC 3.420b Weekly: Ensure .NET 4.0 Full Framework is installed. (Available from http://www.microsoft.com/download/en/details.aspx?id=17718) Ensure the NFO ID fix is applied when transitioning from versions prior to 3.416b. (Details here) Movies Fixed: Fanart and poster scraping issues TV Shows (Re)Added: Rebuild single show Fixed: Issue when shows are moved from original location Ability to handle " for actor nicknames Crash when episode name contains "<" (does not scrape yet) Clears fanart when switch...patterns & practices - Unity: Unity 3.0 for .NET4.5 Preview: The Unity 3.0.1026.0 Preview enables Unity to work on .NET 4.5 with both the WinRT and desktop profiles. The major changes include: Unity projects updated to target .NET 4.5. Dynamic build plans modified to use compiled lambda expressions instead of Reflection.Emit Converting reflection to use the new TypeInfo for reflection. Projects updated to work with the Microsoft Visual Studio 2011 Preview Notes/Known Issues: The Microsoft.Practices.Unity.UnityServiceLocator class cannot be use...Managed Extensibility Framework: MEF 2 Preview 4: Detailed information on this release is available on the BCL team blog.Image Converter: Image Converter 0.3: New Features: - English and German support Technical Improvements: - Microsoft All Rules using Code Analysis Planned Features for future release: 1. Unit testing 2. Command line interface 3. Automatic UpdatesAcDown????? - Anime&Comic Downloader: AcDown????? v3.6: ?? 
● AcDown??????????、??????,??????????????????????,???????Acfun、Bilibili、???、???、???、Tucao.cc、SF???、?????80????,???????????、?????????。 ● AcDown???????????????????????????,???,???????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ????32??64? Windows XP/Vista/7 ????????????? ??:????????Windows XP???,?????????.NET Framework 2.0???(x86)?.NET Framework 2.0???(x64),?????"?????????"??? ??????????????,??????????: ??"AcDown?????"????????? ?? v3.6?? ??“????”...DotNetNuke® Events: 05.02.01: This release fixes any know bugs from any previous version. Events 05.02.01 will work for any DNN version 5.5.0 and up. Full details on the changes can be found at http://dnnevents.codeplex.com/workitem/list/basic Please review and rate this release... (stars are welcome)BUG FIXESAdded validation around category cookie RSS feed was missing an explicit close of the file when writing. Fixed. Added extra security into detail view .ICS Files did not include correct line folding. Fixed Cha...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.33: Add JSParser.ParseExpression method to parse JavaScript expressions rather than source-elements. Add -strict switch (CodeSettings.StrictMode) to force input code to ECMA5 Strict-mode (extra error-checking, "use strict" at top). Fixed bug when MinifyCode setting was set to false but RemoveUnneededCode was left it's default value of true.Path Copy Copy: 8.0: New version that mostly adds lots of requested features: 11340 11339 11338 11337 This version also features a more elaborate Settings UI that has several tabs. I tried to add some notes to better explain the use and purpose of the various options. The Path Copy Copy documentation is also on the way, both to explain how to develop custom plugins and to explain how to pre-configure options if you're a network admin. Stay tuned.MVC Controls Toolkit: Mvc Controls Toolkit 1.5.0: Added: The new Client Blocks feaure of Views A new "move" js method for the TreeViews The NewHtmlCreated js event to the DataGrid Improved the ChoiceList structure that now allows also the selection list of a dropdown to be chosen with a lambda expression Improved the AcceptViewHintAttribute controller filter. Now a client can specify not only the name of a View or Partial View it prefers, but also to receive just the rough data in Json format. Fixed: Issue with partial thrust Cl...Free SharePoint Master Pages: Buried Alive (Halloween) Theme: Release Notes *Created for Halloween, you will find theme file, custom css file and images. *Created by Al Roome @AlstarRoome Features: Custom styling for web part Custom background *Screenshot https://s3.amazonaws.com/kkhipple/post/sharepoint-showcase-halloween.pngDevForce Application Framework: DevForce AF 2.0.3 RTW: PrerequisitesWPF 4.0 Silverlight 4.0 DevForce 2010 6.1.3.1 Download ContentsDebug and Release Assemblies API Documentation Source code License.txt Requirements.txt Release HighlightsNew: EventAggregator event forwarding New: EntityManagerInterceptor<T> to intercept EntityManger events New: IHarnessAware to allow for ViewModel setup when executed inside of the Development Harness New: Improved design time stability New: Support for add-in development New: CoroutineFns.To...NicAudio: NicAudio 2.0.5: Minor change to accept special DTS stereo modes (LtRt, AB,...)NDepend TFS 2010 integration: version 0.5.0 beta 1: Only the activity and the VS plugin are avalaible right now. They basically work. Data types that are logged into tfs reports are subject to change. 
This is no big deal since data is not yet sent into the warehouse.Windows Azure Toolkit for Windows Phone: Windows Azure Toolkit for Windows Phone v1.3.1: Upgraded Windows Azure projects to Windows Azure Tools for Microsoft Visual Studio 2010 1.5 – September 2011 Upgraded the tools tools to support the Windows Phone Developer Tools RTW Update SQL Azure only scenarios to use ASP.NET Universal Providers (through the System.Web.Providers v1.0.1 NuGet package) Changed Shared Access Signature service interface to support more operations Refactored Blobs API to have a similar interface and usage to that provided by the Windows Azure SDK Stor...DotNetNuke® FAQ: 05.00.00: FAQ (Frequently Asked Questions) 05.00.00 will work for any DNN version 5.6.1 and up. It is the first version which is rewritten in C#. The scope of this update is to fix all known issues and improve user interface. Please review and rate this release... (stars are welcome)BUG FIXESManage Categories button text was not localized Edit/Add FAQ Entry: button text was not localized ENHANCEMENTSAdded an option to select the control for category display: Listbox with checkboxes (flat category ...SiteMap Editor for Microsoft Dynamics CRM 2011: SiteMap Editor (1.0.921.340): Added CodePlex and PayPal links New iconNew ProjectsAsynk: Asynk is a framework/application that allows existing applications to easily be extended with an offloaded asynchronous worker layer. Asynk is developed using C#.Blob Tower Defense: 3D tower defense game for Windows Phone 7. School project for Brno University of Technology, computer graphics class.Booz: Booz is... An extended version of the boo shell (booish2 to be precise). Offers additional commands like cd, md, ls etc. I hope this shell can be used to take the position of/surpass the native windows shell in the near future.CIMS: a sanction infomation system for sencience and technology of hustCrystalDot - Icon Collection / Pack (LGPL): .Net / Mono freundliche Varainte der Crystal-Icons von Everaldo Icon collection / pack for .NET and Mono designed by Everaldo - KDE style http://www.everaldo.com/crystal/dotetes: dotetes adalah teka teki silang tool dikembangkan dengan bahasa c#Emoe': This Project is a Windows Phone 7.1 application.Equation Inversion: Visual Studion 2008 Add-in for equation inversions.Exploring VMR Features on WEC7: This is the sample application helps you to do alpha blending the bitmap on camera streaming in Windows Embedded Compact 7 using Directshow video Renderer (VMR). It is a VS2008 based smart device project developed on C++. I have explained the sample application in the following blog link. http://www.e-consystems.com/blog/windowsce/?p=759 EzValidation: Custom validation extensions for ASP.NET MVC 3. Includes server and client side model based validation attributes for: -- Equal To -- Not Equal To -- Greater Than -- Greater Than or Equal To -- Less Than -- Less Than or Equal To Supports validating against: -- Another Model Field -- A Specific Value -- Current Date/Yesterday/Tomorrow (for Dates and Strings) Download & Install via NuGet "package-install ezvalidation"Flu.net: Flu.net is a tool that helps you creating your own fluent syntax for .NET Framework applications in a declarative fashion. It is aimed for infrastructures and other open-source projects use.For Chess Endgames: King vs. King Opposition Calculator: You must input the locations of 2 kings on a chessboard, and whose turn it is to move. 
The calculator will display which king has the opposition, and how it can be used or maintained.GameTrakXNA: This project aims to create a simple library to use the unique GameTrak controller within XNA and Flash.Google Speech Recognition Example: Google Speech Recognition contains a working example of application that uses google speech recognition API. App contains all necessary dlls to record, decode and send your voice request to google service and recieve a text representation of what you've said. It's developed in C#Interval Mandelbrot Explorer: Explore the Mandelbrot set using interval arithmetic.ISD training tasks: ISD training examples and tasksiTunesControlBar: The iTunesControlBar helps user control their iTunes Application while it is minimized. iTunesControlBar resides at the top of the screen, invisible when not used, and allows playback and volume control, library searches and media information without the need to bring up iTunes.iTurtle: A bunch of Powerscripts to automate server management in AD environment.M26WC - Mono 2.6 Wizard Control: Wizard which runs under Mono2.6 A fork of: http://aerowizard.codeplex.com/Microsoft Help Viewer 2: Help Viewer 2 is the help runtime for both Visual Studio 11 help and Windows 8 help. The code in this project will help you use and understand the HV2 runtime API.MONTRASEC: Monitoring Trafficking in human beings and Sexual Exploitation of Children: benchmarking for member state and EU reporting, turning the SIAMSECT templates into a user-friendly interface and reporting tool. MTF.NET Runtime: Managed Task Framework .NET Runtime The MTF.NET runtime software and resulting assemblies are required to run applications built using the Managed Task Framework.NET Professional (Visual Studio 2010 extension) software design editor. The MTF.NET team are committed to continuously improving the core MTF.NET runtime and ensuring it is always available free and fully transparent. Pandoras Box: A greenfield inversion of control project utilising the power and flexibility of expressions and preferring convention over configuration.Pass the Puzzle: Pass the Puzzle is a frantic word-guessing party game. The game displays a few letters, and the players must come up with words containing those letters. But beware: if the timer goes off, you lose! It is based on the folk party game Pass the Parcel and is written in C#.PerCiGal: Percigal is a project for the development of applications for managing your personal media library. It consists in - a windows application to use at home to catalog movies, TV series, cast and books, with the support of the Internet for information retrieval; - a web interface for viewing and cataloging everywhere your media; - an application for smartphones. Project Flying Carpet: Este jogo é um projeto para a cadeira Projeto de Jogos: Motores Jogos do curso de Jogos Digitais da Unisinos.proxy browser: sed leo Latin's Butterfly....Python Multiple Dispatch: Multiple dispatch (AKA multimethods) for Python 3 via a metaclass and type annotations.reDune: ?????????? ???? ? ????? «????????? ? ???????? ???????». ???????? ?? Dune2000 ?? Westwood ? Electronic Arts.Rereadable: Keep page from internet for read it latter.ServStop: ServStop is a .NET application that makes it easy to stop several system services at once. Now you don't have to change startup types or stop them one at a time. It has a simple list-based interface with the ability to save and load lists of user services to stop. 
Written in C#.SharePoint 2010 Audience Membership Workflow Activity (Full Trust): A simple SharePoint 2010 workflow activity / workflow condition to check whether the user initiating the workflow is a member of a specified audience. Farm-level .wsp solution, written in C#. Once installed, the workflow activity can be used in SharePoint Designer 2010 declarative workflows.SQL Server® to Firebird DB converter: Converts Microsoft SQL Server® database into Firebird database including entire structure and datastegitest: test projectSystem.Threading.Joins: The Joins project provides asynchronous concurrency semantics based on join calculus and modeled after the Microsoft Research C? (C Omega) project.TestAndroidGame: try dev a TestAndroidGametetribricks: block game Topographic Explorer: A project to import, convert, explore, manipulate, and save topographical maps. Looking to use C# and WPF.Trading: Under construction!!!Trombone: Trombone makes it easier for Windows Mobile Professional users to automate status reply through SMS. It's developed in Visual C# 2008.Tulsa SharePoint Interest Group: Repository for source code for the Tulsa SharePoint Interest Group's web site. The Tulsa SharePoint Interest Group is using the Community Kit for SharePoint. This project will house any modifications that are specific to our user group.World of Tanks RU tiny stats collection utilty.: Tiny utility to load players stats for World of Tanks RU server. Results saved to comma separated file.WS-Discovery Proxy: Attempt at creating general purpose WS-Discovery Proxy.Yamaha Tu?n Tr?c: This application is used to manage information for Yamaha Tu?n Tr?c

    Read the article

  • FreeBSD 8.1 unstable network connection

    - by frankcheong
    I have three FreeBSD 8.1 running on three different hardware and therefore consist of different network adapter as well (bce, bge and igb). I found that the network connection is kind of unstable which I have tried to scp some 10MB file and found that I cannot always get the files completed successfully. I have further checked with my network admin and he claim that the problem is being caused by the network driver which cannot support the load whereby he tried to ping using huge packet size (around 15k) and my server will drop packet consistently at a regular interval. I found that this statement may not be valid since the three server is using three different network drive and it would be quite impossible that the same problem is being caused by three different network adapter and thus different network driver. Since then I have tried to tune up the performance by playing around with the /etc/sysctl.conf figures with no luck. kern.ipc.somaxconn=1024 kern.ipc.shmall=3276800 kern.ipc.shmmax=1638400000 # Security net.inet.ip.redirect=0 net.inet.ip.sourceroute=0 net.inet.ip.accept_sourceroute=0 net.inet.icmp.maskrepl=0 net.inet.icmp.log_redirect=0 net.inet.icmp.drop_redirect=1 net.inet.tcp.drop_synfin=1 # Security net.inet.udp.blackhole=1 net.inet.tcp.blackhole=2 # Required by pf net.inet.ip.forwarding=1 #Network Performance Tuning kern.ipc.maxsockbuf=16777216 net.inet.tcp.rfc1323=1 net.inet.tcp.sendbuf_max=16777216 net.inet.tcp.recvbuf_max=16777216 # Setting specifically for 1 or even 10Gbps network net.local.stream.sendspace=262144 net.local.stream.recvspace=262144 net.inet.tcp.local_slowstart_flightsize=10 net.inet.tcp.nolocaltimewait=1 net.inet.tcp.mssdflt=1460 net.inet.tcp.sendbuf_auto=1 net.inet.tcp.sendbuf_inc=16384 net.inet.tcp.recvbuf_auto=1 net.inet.tcp.recvbuf_inc=524288 net.inet.tcp.sendspace=262144 net.inet.tcp.recvspace=262144 net.inet.udp.recvspace=262144 kern.ipc.maxsockbuf=16777216 kern.ipc.nmbclusters=32768 net.inet.tcp.delayed_ack=1 net.inet.tcp.delacktime=100 net.inet.tcp.slowstart_flightsize=179 net.inet.tcp.inflight.enable=1 net.inet.tcp.inflight.min=6144 # Reduce the cache size of slow start connection net.inet.tcp.hostcache.expire=1 Our network admin also claim that they see quite a lot of network up and down from their cisco switch log while I cannot find any up down message inside the dmesg. Have further checked the netstat -s but dont have concrete idea. tcp: 133695291 packets sent 39408539 data packets (3358837321 bytes) 61868 data packets (89472844 bytes) retransmitted 24 data packets unnecessarily retransmitted 0 resends initiated by MTU discovery 50756141 ack-only packets (2148 delayed) 0 URG only packets 0 window probe packets 4372385 window update packets 39781869 control packets 134898031 packets received 72339403 acks (for 3357601899 bytes) 190712 duplicate acks 0 acks for unsent data 59339201 packets (3647021974 bytes) received in-sequence 114 completely duplicate packets (135202 bytes) 27 old duplicate packets 0 packets with some dup. 
data (0 bytes duped) 42090 out-of-order packets (60817889 bytes) 0 packets (0 bytes) of data after window 0 window probes 3953896 window update packets 64181 packets received after close 0 discarded for bad checksums 0 discarded for bad header offset fields 0 discarded because packet too short 45192 discarded due to memory problems 19945391 connection requests 1323420 connection accepts 0 bad connection attempts 0 listen queue overflows 0 ignored RSTs in the windows 21133581 connections established (including accepts) 21268724 connections closed (including 32737 drops) 207874 connections updated cached RTT on close 207874 connections updated cached RTT variance on close 132439 connections updated cached ssthresh on close 42392 embryonic connections dropped 72339338 segments updated rtt (of 69477829 attempts) 390871 retransmit timeouts 0 connections dropped by rexmit timeout 0 persist timeouts 0 connections dropped by persist timeout 0 Connections (fin_wait_2) dropped because of timeout 13990 keepalive timeouts 2 keepalive probes sent 13988 connections dropped by keepalive 173044 correct ACK header predictions 36947371 correct data packet header predictions 1323420 syncache entries added 0 retransmitted 0 dupsyn 0 dropped 1323420 completed 0 bucket overflow 0 cache overflow 0 reset 0 stale 0 aborted 0 badack 0 unreach 0 zone failures 1323420 cookies sent 0 cookies received 1864 SACK recovery episodes 18005 segment rexmits in SACK recovery episodes 26066896 byte rexmits in SACK recovery episodes 147327 SACK options (SACK blocks) received 87473 SACK options (SACK blocks) sent 0 SACK scoreboard overflow 0 packets with ECN CE bit set 0 packets with ECN ECT(0) bit set 0 packets with ECN ECT(1) bit set 0 successful ECN handshakes 0 times ECN reduced the congestion window udp: 5141258 datagrams received 0 with incomplete header 0 with bad data length field 0 with bad checksum 1 with no checksum 0 dropped due to no socket 129616 broadcast/multicast datagrams undelivered 0 dropped due to full socket buffers 0 not for hashed pcb 5011642 delivered 5016050 datagrams output 0 times multicast source filter matched sctp: 0 input packets 0 datagrams 0 packets that had data 0 input SACK chunks 0 input DATA chunks 0 duplicate DATA chunks 0 input HB chunks 0 HB-ACK chunks 0 input ECNE chunks 0 input AUTH chunks 0 chunks missing AUTH 0 invalid HMAC ids received 0 invalid secret ids received 0 auth failed 0 fast path receives all one chunk 0 fast path multi-part data 0 output packets 0 output SACKs 0 output DATA chunks 0 retransmitted DATA chunks 0 fast retransmitted DATA chunks 0 FR's that happened more than once to same chunk 0 intput HB chunks 0 output ECNE chunks 0 output AUTH chunks 0 ip_output error counter Packet drop statistics: 0 from middle box 0 from end host 0 with data 0 non-data, non-endhost 0 non-endhost, bandwidth rep only 0 not enough for chunk header 0 not enough data to confirm 0 where process_chunk_drop said break 0 failed to find TSN 0 attempt reverse TSN lookup 0 e-host confirms zero-rwnd 0 midbox confirms no space 0 data did not match TSN 0 TSN's marked for Fast Retran Timeouts: 0 iterator timers fired 0 T3 data time outs 0 window probe (T3) timers fired 0 INIT timers fired 0 sack timers fired 0 shutdown timers fired 0 heartbeat timers fired 0 a cookie timeout fired 0 an endpoint changed its cookiesecret 0 PMTU timers fired 0 shutdown ack timers fired 0 shutdown guard timers fired 0 stream reset timers fired 0 early FR timers fired 0 an asconf timer fired 0 auto close timer fired 0 asoc 
free timers expired 0 inp free timers expired 0 packet shorter than header 0 checksum error 0 no endpoint for port 0 bad v-tag 0 bad SID 0 no memory 0 number of multiple FR in a RTT window 0 RFC813 allowed sending 0 RFC813 does not allow sending 0 times max burst prohibited sending 0 look ahead tells us no memory in interface 0 numbers of window probes sent 0 times an output error to clamp down on next user send 0 times sctp_senderrors were caused from a user 0 number of in data drops due to chunk limit reached 0 number of in data drops due to rwnd limit reached 0 times a ECN reduced the cwnd 0 used express lookup via vtag 0 collision in express lookup 0 times the sender ran dry of user data on primary 0 same for above 0 sacks the slow way 0 window update only sacks sent 0 sends with sinfo_flags !=0 0 unordered sends 0 sends with EOF flag set 0 sends with ABORT flag set 0 times protocol drain called 0 times we did a protocol drain 0 times recv was called with peek 0 cached chunks used 0 cached stream oq's used 0 unread messages abandonded by close 0 send burst avoidance, already max burst inflight to net 0 send cwnd full avoidance, already max burst inflight to net 0 number of map array over-runs via fwd-tsn's ip: 137814085 total packets received 0 bad header checksums 0 with size smaller than minimum 0 with data size < data length 0 with ip length > max ip packet size 0 with header length < data size 0 with data length < header length 0 with bad options 0 with incorrect version number 1200 fragments received 0 fragments dropped (dup or out of space) 0 fragments dropped after timeout 300 packets reassembled ok 137813009 packets for this host 530 packets for unknown/unsupported protocol 0 packets forwarded (0 packets fast forwarded) 61 packets not forwardable 0 packets received for unknown multicast group 0 redirects sent 137234598 packets sent from this host 0 packets sent with fabricated ip header 685307 output packets dropped due to no bufs, etc. 
52 output packets discarded due to no route 300 output datagrams fragmented 1200 fragments created 0 datagrams that can't be fragmented 0 tunneling packets that can't find gif 0 datagrams with bad address in header icmp: 0 calls to icmp_error 0 errors not generated in response to an icmp message Output histogram: echo reply: 305 0 messages with bad code fields 0 messages less than the minimum length 0 messages with bad checksum 0 messages with bad length 0 multicast echo requests ignored 0 multicast timestamp requests ignored Input histogram: destination unreachable: 530 echo: 305 305 message responses generated 0 invalid return addresses 0 no return routes ICMP address mask responses are disabled igmp: 0 messages received 0 messages received with too few bytes 0 messages received with wrong TTL 0 messages received with bad checksum 0 V1/V2 membership queries received 0 V3 membership queries received 0 membership queries received with invalid field(s) 0 general queries received 0 group queries received 0 group-source queries received 0 group-source queries dropped 0 membership reports received 0 membership reports received with invalid field(s) 0 membership reports received for groups to which we belong 0 V3 reports received without Router Alert 0 membership reports sent arp: 376748 ARP requests sent 3207 ARP replies sent 245245 ARP requests received 80845 ARP replies received 326090 ARP packets received 267712 total packets dropped due to no ARP entry 108876 ARP entrys timed out 0 Duplicate IPs seen ip6: 2226633 total packets received 0 with size smaller than minimum 0 with data size < data length 0 with bad options 0 with incorrect version number 0 fragments received 0 fragments dropped (dup or out of space) 0 fragments dropped after timeout 0 fragments that exceeded limit 0 packets reassembled ok 2226633 packets for this host 0 packets forwarded 0 packets not forwardable 0 redirects sent 2226633 packets sent from this host 0 packets sent with fabricated ip header 0 output packets dropped due to no bufs, etc. 
8 output packets discarded due to no route 0 output datagrams fragmented 0 fragments created 0 datagrams that can't be fragmented 0 packets that violated scope rules 0 multicast packets which we don't join Input histogram: UDP: 2226633 Mbuf statistics: 962679 one mbuf 1263954 one ext mbuf 0 two or more ext mbuf 0 packets whose headers are not continuous 0 tunneling packets that can't find gif 0 packets discarded because of too many headers 0 failures of source address selection Source addresses selection rule applied: icmp6: 0 calls to icmp6_error 0 errors not generated in response to an icmp6 message 0 errors not generated because of rate limitation 0 messages with bad code fields 0 messages < minimum length 0 bad checksums 0 messages with bad length Histogram of error messages to be generated: 0 no route 0 administratively prohibited 0 beyond scope 0 address unreachable 0 port unreachable 0 packet too big 0 time exceed transit 0 time exceed reassembly 0 erroneous header field 0 unrecognized next header 0 unrecognized option 0 redirect 0 unknown 0 message responses generated 0 messages with too many ND options 0 messages with bad ND options 0 bad neighbor solicitation messages 0 bad neighbor advertisement messages 0 bad router solicitation messages 0 bad router advertisement messages 0 bad redirect messages 0 path MTU changes rip6: 0 messages received 0 checksum calculations on inbound 0 messages with bad checksum 0 messages dropped due to no socket 0 multicast messages dropped due to no socket 0 messages dropped due to full socket buffers 0 delivered 0 datagrams output netstat -m 516/5124/5640 mbufs in use (current/cache/total) 512/1634/2146/32768 mbuf clusters in use (current/cache/total/max) 512/1536 mbuf+clusters out of packet secondary zone in use (current/cache) 0/1303/1303/12800 4k (page size) jumbo clusters in use (current/cache/total/max) 0/0/0/6400 9k jumbo clusters in use (current/cache/total/max) 0/0/0/3200 16k jumbo clusters in use (current/cache/total/max) 1153K/9761K/10914K bytes allocated to network (current/cache/total) 0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters) 0/0/0 requests for jumbo clusters denied (4k/9k/16k) 0/8/6656 sfbufs in use (current/peak/max) 0 requests for sfbufs denied 0 requests for sfbufs delayed 0 requests for I/O initiated by sendfile 0 calls to protocol drain routines Anyone got an idea what might be the possible cause?

    Read the article

  • Cannot open root device xvda1 or unknown-block(0,0)

    - by svoop
    I'm putting together a Dom0 and three DomU (all Gentoo) with kernel 3.5.7 and Xen 4.1.1. Each Dom has it's own md (md0 for Dom0, md1 for Dom1 etc). Dom0 works fine so far, however, I'm stuck trying to create DomUs. It appears the xvda1 device on DomU is not created or accessible: Parsing config file dom1 domainbuilder: detail: xc_dom_allocate: cmdline="root=/dev/xvda1 console=hvc0 root=/dev/xvda1 ro 3", features="(null)" domainbuilder: detail: xc_dom_kernel_mem: called domainbuilder: detail: xc_dom_boot_xen_init: ver 4.1, caps xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64 domainbuilder: detail: xc_dom_parse_image: called domainbuilder: detail: xc_dom_find_loader: trying multiboot-binary loader ... domainbuilder: detail: loader probe failed domainbuilder: detail: xc_dom_find_loader: trying Linux bzImage loader ... domainbuilder: detail: xc_dom_malloc : 10530 kB domainbuilder: detail: xc_dom_do_gunzip: unzip ok, 0x2f7a4f -> 0xa48888 domainbuilder: detail: loader probe OK xc: detail: elf_parse_binary: phdr: paddr=0x1000000 memsz=0x558000 xc: detail: elf_parse_binary: phdr: paddr=0x1558000 memsz=0x690e8 xc: detail: elf_parse_binary: phdr: paddr=0x15c2000 memsz=0x127c0 xc: detail: elf_parse_binary: phdr: paddr=0x15d5000 memsz=0x533000 xc: detail: elf_parse_binary: memory: 0x1000000 -> 0x1b08000 xc: detail: elf_xen_parse_note: GUEST_OS = "linux" xc: detail: elf_xen_parse_note: GUEST_VERSION = "2.6" xc: detail: elf_xen_parse_note: XEN_VERSION = "xen-3.0" xc: detail: elf_xen_parse_note: VIRT_BASE = 0xffffffff80000000 xc: detail: elf_xen_parse_note: ENTRY = 0xffffffff815d5210 xc: detail: elf_xen_parse_note: HYPERCALL_PAGE = 0xffffffff81001000 xc: detail: elf_xen_parse_note: FEATURES = "!writable_page_tables|pae_pgdir_above_4gb" xc: detail: elf_xen_parse_note: PAE_MODE = "yes" xc: detail: elf_xen_parse_note: LOADER = "generic" xc: detail: elf_xen_parse_note: unknown xen elf note (0xd) xc: detail: elf_xen_parse_note: SUSPEND_CANCEL = 0x1 xc: detail: elf_xen_parse_note: HV_START_LOW = 0xffff800000000000 xc: detail: elf_xen_parse_note: PADDR_OFFSET = 0x0 xc: detail: elf_xen_addr_calc_check: addresses: xc: detail: virt_base = 0xffffffff80000000 xc: detail: elf_paddr_offset = 0x0 xc: detail: virt_offset = 0xffffffff80000000 xc: detail: virt_kstart = 0xffffffff81000000 xc: detail: virt_kend = 0xffffffff81b08000 xc: detail: virt_entry = 0xffffffff815d5210 xc: detail: p2m_base = 0xffffffffffffffff domainbuilder: detail: xc_dom_parse_elf_kernel: xen-3.0-x86_64: 0xffffffff81000000 -> 0xffffffff81b08000 domainbuilder: detail: xc_dom_mem_init: mem 5000 MB, pages 0x138800 pages, 4k each domainbuilder: detail: xc_dom_mem_init: 0x138800 pages domainbuilder: detail: xc_dom_boot_mem_init: called domainbuilder: detail: x86_compat: guest xen-3.0-x86_64, address size 64 domainbuilder: detail: xc_dom_malloc : 10000 kB domainbuilder: detail: xc_dom_build_image: called domainbuilder: detail: xc_dom_alloc_segment: kernel : 0xffffffff81000000 -> 0xffffffff81b08000 (pfn 0x1000 + 0xb08 pages) domainbuilder: detail: xc_dom_pfn_to_ptr: domU mapping: pfn 0x1000+0xb08 at 0x7fdec9b85000 xc: detail: elf_load_binary: phdr 0 at 0x0x7fdec9b85000 -> 0x0x7fdeca0dd000 xc: detail: elf_load_binary: phdr 1 at 0x0x7fdeca0dd000 -> 0x0x7fdeca1460e8 xc: detail: elf_load_binary: phdr 2 at 0x0x7fdeca147000 -> 0x0x7fdeca1597c0 xc: detail: elf_load_binary: phdr 3 at 0x0x7fdeca15a000 -> 0x0x7fdeca1cd000 domainbuilder: detail: xc_dom_alloc_segment: phys2mach : 0xffffffff81b08000 -> 0xffffffff824cc000 (pfn 0x1b08 + 
0x9c4 pages) domainbuilder: detail: xc_dom_pfn_to_ptr: domU mapping: pfn 0x1b08+0x9c4 at 0x7fdec91c1000 domainbuilder: detail: xc_dom_alloc_page : start info : 0xffffffff824cc000 (pfn 0x24cc) domainbuilder: detail: xc_dom_alloc_page : xenstore : 0xffffffff824cd000 (pfn 0x24cd) domainbuilder: detail: xc_dom_alloc_page : console : 0xffffffff824ce000 (pfn 0x24ce) domainbuilder: detail: nr_page_tables: 0x0000ffffffffffff/48: 0xffff000000000000 -> 0xffffffffffffffff, 1 table(s) domainbuilder: detail: nr_page_tables: 0x0000007fffffffff/39: 0xffffff8000000000 -> 0xffffffffffffffff, 1 table(s) domainbuilder: detail: nr_page_tables: 0x000000003fffffff/30: 0xffffffff80000000 -> 0xffffffffbfffffff, 1 table(s) domainbuilder: detail: nr_page_tables: 0x00000000001fffff/21: 0xffffffff80000000 -> 0xffffffff827fffff, 20 table(s) domainbuilder: detail: xc_dom_alloc_segment: page tables : 0xffffffff824cf000 -> 0xffffffff824e6000 (pfn 0x24cf + 0x17 pages) domainbuilder: detail: xc_dom_pfn_to_ptr: domU mapping: pfn 0x24cf+0x17 at 0x7fdece676000 domainbuilder: detail: xc_dom_alloc_page : boot stack : 0xffffffff824e6000 (pfn 0x24e6) domainbuilder: detail: xc_dom_build_image : virt_alloc_end : 0xffffffff824e7000 domainbuilder: detail: xc_dom_build_image : virt_pgtab_end : 0xffffffff82800000 domainbuilder: detail: xc_dom_boot_image: called domainbuilder: detail: arch_setup_bootearly: doing nothing domainbuilder: detail: xc_dom_compat_check: supported guest type: xen-3.0-x86_64 <= matches domainbuilder: detail: xc_dom_compat_check: supported guest type: xen-3.0-x86_32p domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_32 domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_32p domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_64 domainbuilder: detail: xc_dom_update_guest_p2m: dst 64bit, pages 0x138800 domainbuilder: detail: clear_page: pfn 0x24ce, mfn 0x37ddee domainbuilder: detail: clear_page: pfn 0x24cd, mfn 0x37ddef domainbuilder: detail: xc_dom_pfn_to_ptr: domU mapping: pfn 0x24cc+0x1 at 0x7fdece675000 domainbuilder: detail: start_info_x86_64: called domainbuilder: detail: setup_hypercall_page: vaddr=0xffffffff81001000 pfn=0x1001 domainbuilder: detail: domain builder memory footprint domainbuilder: detail: allocated domainbuilder: detail: malloc : 20658 kB domainbuilder: detail: anon mmap : 0 bytes domainbuilder: detail: mapped domainbuilder: detail: file mmap : 0 bytes domainbuilder: detail: domU mmap : 21392 kB domainbuilder: detail: arch_setup_bootlate: shared_info: pfn 0x0, mfn 0xbaa6f domainbuilder: detail: shared_info_x86_64: called domainbuilder: detail: vcpu_x86_64: called domainbuilder: detail: vcpu_x86_64: cr3: pfn 0x24cf mfn 0x37dded domainbuilder: detail: launch_vm: called, ctxt=0x7fff224e4ea0 domainbuilder: detail: xc_dom_release: called Daemon running with PID 4639 [ 0.000000] Initializing cgroup subsys cpuset [ 0.000000] Initializing cgroup subsys cpu [ 0.000000] Linux version 3.5.7-gentoo (root@majordomo) (gcc version 4.5.4 (Gentoo 4.5.4 p1.0, pie-0.4.7) ) #1 SMP Tue Nov 20 10:49:51 CET 2012 [ 0.000000] Command line: root=/dev/xvda1 console=hvc0 root=/dev/xvda1 ro 3 [ 0.000000] ACPI in unprivileged domain disabled [ 0.000000] e820: BIOS-provided physical RAM map: [ 0.000000] Xen: [mem 0x0000000000000000-0x000000000009ffff] usable [ 0.000000] Xen: [mem 0x00000000000a0000-0x00000000000fffff] reserved [ 0.000000] Xen: [mem 0x0000000000100000-0x0000000138ffffff] usable [ 0.000000] NX (Execute Disable) protection: active 
[ 0.000000] MPS support code is not built-in. [ 0.000000] Using acpi=off or acpi=noirq or pci=noacpi may have problem [ 0.000000] DMI not present or invalid. [ 0.000000] No AGP bridge found [ 0.000000] e820: last_pfn = 0x139000 max_arch_pfn = 0x400000000 [ 0.000000] e820: last_pfn = 0x100000 max_arch_pfn = 0x400000000 [ 0.000000] init_memory_mapping: [mem 0x00000000-0xffffffff] [ 0.000000] init_memory_mapping: [mem 0x100000000-0x138ffffff] [ 0.000000] NUMA turned off [ 0.000000] Faking a node at [mem 0x0000000000000000-0x0000000138ffffff] [ 0.000000] Initmem setup node 0 [mem 0x00000000-0x138ffffff] [ 0.000000] NODE_DATA [mem 0x1387fc000-0x1387fffff] [ 0.000000] Zone ranges: [ 0.000000] DMA [mem 0x00010000-0x00ffffff] [ 0.000000] DMA32 [mem 0x01000000-0xffffffff] [ 0.000000] Normal [mem 0x100000000-0x138ffffff] [ 0.000000] Movable zone start for each node [ 0.000000] Early memory node ranges [ 0.000000] node 0: [mem 0x00010000-0x0009ffff] [ 0.000000] node 0: [mem 0x00100000-0x138ffffff] [ 0.000000] SMP: Allowing 1 CPUs, 0 hotplug CPUs [ 0.000000] No local APIC present [ 0.000000] APIC: disable apic facility [ 0.000000] APIC: switched to apic NOOP [ 0.000000] e820: cannot find a gap in the 32bit address range [ 0.000000] e820: PCI devices with unassigned 32bit BARs may break! [ 0.000000] e820: [mem 0x139100000-0x1394fffff] available for PCI devices [ 0.000000] Booting paravirtualized kernel on Xen [ 0.000000] Xen version: 4.1.1 (preserve-AD) [ 0.000000] setup_percpu: NR_CPUS:64 nr_cpumask_bits:64 nr_cpu_ids:1 nr_node_ids:1 [ 0.000000] PERCPU: Embedded 26 pages/cpu @ffff880138400000 s75712 r8192 d22592 u2097152 [ 0.000000] Built 1 zonelists in Node order, mobility grouping on. Total pages: 1259871 [ 0.000000] Policy zone: Normal [ 0.000000] Kernel command line: root=/dev/xvda1 console=hvc0 root=/dev/xvda1 ro 3 [ 0.000000] PID hash table entries: 4096 (order: 3, 32768 bytes) [ 0.000000] __ex_table already sorted, skipping sort [ 0.000000] Checking aperture... [ 0.000000] No AGP bridge found [ 0.000000] Memory: 4943980k/5128192k available (3937k kernel code, 448k absent, 183764k reserved, 1951k data, 524k init) [ 0.000000] SLUB: Genslabs=15, HWalign=64, Order=0-3, MinObjects=0, CPUs=1, Nodes=1 [ 0.000000] Hierarchical RCU implementation. [ 0.000000] NR_IRQS:4352 nr_irqs:256 16 [ 0.000000] Console: colour dummy device 80x25 [ 0.000000] console [tty0] enabled [ 0.000000] console [hvc0] enabled [ 0.000000] installing Xen timer for CPU 0 [ 0.000000] Detected 3411.602 MHz processor. [ 0.000999] Calibrating delay loop (skipped), value calculated using timer frequency.. 6823.20 BogoMIPS (lpj=3411602) [ 0.000999] pid_max: default: 32768 minimum: 301 [ 0.000999] Security Framework initialized [ 0.001355] Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes) [ 0.002974] Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes) [ 0.003441] Mount-cache hash table entries: 256 [ 0.003595] Initializing cgroup subsys cpuacct [ 0.003599] Initializing cgroup subsys freezer [ 0.003637] ENERGY_PERF_BIAS: Set to 'normal', was 'performance' [ 0.003637] ENERGY_PERF_BIAS: View and update with x86_energy_perf_policy(8) [ 0.003643] CPU: Physical Processor ID: 0 [ 0.003645] CPU: Processor Core ID: 0 [ 0.003702] SMP alternatives: switching to UP code [ 0.011791] Freeing SMP alternatives: 12k freed [ 0.011835] Performance Events: unsupported p6 CPU model 42 no PMU driver, software events only. [ 0.011886] Brought up 1 CPUs [ 0.011998] Grant tables using version 2 layout. 
[ 0.012009] Grant table initialized [ 0.012034] NET: Registered protocol family 16 [ 0.012328] PCI: setting up Xen PCI frontend stub [ 0.015089] bio: create slab <bio-0> at 0 [ 0.015158] ACPI: Interpreter disabled. [ 0.015180] xen/balloon: Initialising balloon driver. [ 0.015180] xen-balloon: Initialising balloon driver. [ 0.015180] vgaarb: loaded [ 0.016126] SCSI subsystem initialized [ 0.016314] PCI: System does not support PCI [ 0.016320] PCI: System does not support PCI [ 0.016435] NetLabel: Initializing [ 0.016438] NetLabel: domain hash size = 128 [ 0.016440] NetLabel: protocols = UNLABELED CIPSOv4 [ 0.016447] NetLabel: unlabeled traffic allowed by default [ 0.016475] Switching to clocksource xen [ 0.017434] pnp: PnP ACPI: disabled [ 0.017501] NET: Registered protocol family 2 [ 0.017864] IP route cache hash table entries: 262144 (order: 9, 2097152 bytes) [ 0.019322] TCP established hash table entries: 524288 (order: 11, 8388608 bytes) [ 0.020376] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes) [ 0.020497] TCP: Hash tables configured (established 524288 bind 65536) [ 0.020500] TCP: reno registered [ 0.020525] UDP hash table entries: 4096 (order: 5, 131072 bytes) [ 0.020564] UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes) [ 0.020624] NET: Registered protocol family 1 [ 0.020658] PCI-DMA: Using software bounce buffering for IO (SWIOTLB) [ 0.020662] software IO TLB [mem 0xfb632000-0xff631fff] (64MB) mapped at [ffff8800fb632000-ffff8800ff631fff] [ 0.020750] platform rtc_cmos: registered platform RTC device (no PNP device found) [ 0.021378] HugeTLB registered 2 MB page size, pre-allocated 0 pages [ 0.023378] msgmni has been set to 9656 [ 0.023544] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 253) [ 0.023549] io scheduler noop registered [ 0.023551] io scheduler deadline registered [ 0.023580] io scheduler cfq registered (default) [ 0.023650] pci_hotplug: PCI Hot Plug PCI Core version: 0.5 [ 0.023845] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled [ 0.024082] Non-volatile memory driver v1.3 [ 0.024085] Linux agpgart interface v0.103 [ 0.024207] Event-channel device installed. [ 0.024265] [drm] Initialized drm 1.1.0 20060810 [ 0.024268] [drm:i915_init] *ERROR* drm/i915 can't work without intel_agp module! [ 0.025145] brd: module loaded [ 0.025565] loop: module loaded [ 0.045646] Initialising Xen virtual ethernet driver. [ 0.198264] i8042: PNP: No PS/2 controller found. Probing ports directly. 
[ 0.199096] i8042: No controller found [ 0.199139] mousedev: PS/2 mouse device common for all mice [ 0.259303] rtc_cmos rtc_cmos: rtc core: registered rtc_cmos as rtc0 [ 0.259353] rtc_cmos: probe of rtc_cmos failed with error -38 [ 0.259440] md: raid1 personality registered for level 1 [ 0.259542] nf_conntrack version 0.5.0 (16384 buckets, 65536 max) [ 0.259732] ip_tables: (C) 2000-2006 Netfilter Core Team [ 0.259747] TCP: cubic registered [ 0.259886] NET: Registered protocol family 10 [ 0.260031] ip6_tables: (C) 2000-2006 Netfilter Core Team [ 0.260070] sit: IPv6 over IPv4 tunneling driver [ 0.260194] NET: Registered protocol family 17 [ 0.260213] Bridge firewalling registered [ 5.360075] XENBUS: Waiting for devices to initialise: 25s...20s...15s...10s...5s...0s...235s...230s...225s...220s...215s...210s...205s...200s...195s...190s...185s...180s...175s...170s...165s...160s...155s...150s...145s...140s...135s...130s...125s...120s...115s...110s...105s...100s...95s...90s...85s...80s...75s...70s...65s...60s...55s...50s...45s...40s...35s...30s...25s...20s...15s...10s...5s...0s... [ 270.360180] XENBUS: Timeout connecting to device: device/vbd/51713 (local state 3, remote state 1) [ 270.360273] md: Waiting for all devices to be available before autodetect [ 270.360277] md: If you don't use raid, use raid=noautodetect [ 270.360388] md: Autodetecting RAID arrays. [ 270.360392] md: Scanned 0 and added 0 devices. [ 270.360394] md: autorun ... [ 270.360395] md: ... autorun DONE. [ 270.360431] VFS: Cannot open root device "xvda1" or unknown-block(0,0): error -6 [ 270.360435] Please append a correct "root=" boot option; here are the available partitions: [ 270.360440] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0) [ 270.360444] Pid: 1, comm: swapper/0 Not tainted 3.5.7-gentoo #1 [ 270.360446] Call Trace: [ 270.360454] [<ffffffff813d2205>] ? panic+0xbe/0x1c5 [ 270.360459] [<ffffffff813d2358>] ? printk+0x4c/0x51 [ 270.360464] [<ffffffff815d5fb7>] ? mount_block_root+0x24f/0x26d [ 270.360469] [<ffffffff815d62b6>] ? prepare_namespace+0x168/0x192 [ 270.360474] [<ffffffff815d5ca7>] ? kernel_init+0x1b0/0x1c2 [ 270.360477] [<ffffffff815d5500>] ? loglevel+0x34/0x34 [ 270.360482] [<ffffffff813d5a64>] ? kernel_thread_helper+0x4/0x10 [ 270.360486] [<ffffffff813d4038>] ? retint_restore_args+0x5/0x6 [ 270.360490] [<ffffffff813d5a60>] ? 
gs_change+0x13/0x13 The config: name = "dom1" bootloader = "/usr/bin/pygrub" root = "/dev/xvda1 ro" extra = "3" # runlevel memory = 5000 disk = [ 'phy:/dev/md1,xvda1,w' ] # vif = [ 'ip=..., vifname=veth1' ] # none for now Here are some details on the Dom0 kernel (grepping for "xen"): CONFIG_XEN=y CONFIG_XEN_DOM0=y CONFIG_XEN_PRIVILEGED_GUEST=y CONFIG_XEN_PVHVM=y CONFIG_XEN_MAX_DOMAIN_MEMORY=500 CONFIG_XEN_SAVE_RESTORE=y CONFIG_PCI_XEN=y CONFIG_XEN_PCIDEV_FRONTEND=y # CONFIG_XEN_BLKDEV_FRONTEND is not set CONFIG_XEN_BLKDEV_BACKEND=y # CONFIG_XEN_NETDEV_FRONTEND is not set CONFIG_XEN_NETDEV_BACKEND=y CONFIG_INPUT_XEN_KBDDEV_FRONTEND=y CONFIG_HVC_XEN=y CONFIG_HVC_XEN_FRONTEND=y # CONFIG_XEN_WDT is not set # CONFIG_XEN_FBDEV_FRONTEND is not set # Xen driver support CONFIG_XEN_BALLOON=y # CONFIG_XEN_SELFBALLOONING is not set CONFIG_XEN_SCRUB_PAGES=y CONFIG_XEN_DEV_EVTCHN=y CONFIG_XEN_BACKEND=y CONFIG_XENFS=y CONFIG_XEN_COMPAT_XENFS=y CONFIG_XEN_SYS_HYPERVISOR=y CONFIG_XEN_XENBUS_FRONTEND=y CONFIG_XEN_GNTDEV=m CONFIG_XEN_GRANT_DEV_ALLOC=m CONFIG_SWIOTLB_XEN=y CONFIG_XEN_TMEM=y CONFIG_XEN_PCIDEV_BACKEND=m CONFIG_XEN_PRIVCMD=y CONFIG_XEN_ACPI_PROCESSOR=m And the DomU kernel (grepping for "xen"): CONFIG_XEN=y CONFIG_XEN_DOM0=y CONFIG_XEN_PRIVILEGED_GUEST=y CONFIG_XEN_PVHVM=y CONFIG_XEN_MAX_DOMAIN_MEMORY=500 CONFIG_XEN_SAVE_RESTORE=y CONFIG_PCI_XEN=y CONFIG_XEN_PCIDEV_FRONTEND=y CONFIG_XEN_BLKDEV_FRONTEND=y CONFIG_XEN_NETDEV_FRONTEND=y CONFIG_INPUT_XEN_KBDDEV_FRONTEND=y CONFIG_HVC_XEN=y CONFIG_HVC_XEN_FRONTEND=y # CONFIG_XEN_WDT is not set # CONFIG_XEN_FBDEV_FRONTEND is not set # Xen driver support CONFIG_XEN_BALLOON=y # CONFIG_XEN_SELFBALLOONING is not set CONFIG_XEN_SCRUB_PAGES=y CONFIG_XEN_DEV_EVTCHN=y # CONFIG_XEN_BACKEND is not set CONFIG_XENFS=y CONFIG_XEN_COMPAT_XENFS=y CONFIG_XEN_SYS_HYPERVISOR=y CONFIG_XEN_XENBUS_FRONTEND=y CONFIG_XEN_GNTDEV=m CONFIG_XEN_GRANT_DEV_ALLOC=m CONFIG_SWIOTLB_XEN=y CONFIG_XEN_TMEM=y CONFIG_XEN_PRIVCMD=y CONFIG_XEN_ACPI_PROCESSOR=m Any ideas what I'm doing wrong here? Thanks a lot!

    Read the article

  • MonoTouch App Crashes When Returning From MFMailComposeViewController

    - by Richard Khan
    My MonoTouch Version Info: Release ID: 20401003 Git revision: 2f1746af36f421d262dcd2b0542ce86b12158f02 Build date: 2010-12-23 23:13:38+0000 The MFMailComposeViewController is displayed and works correctly as a dialog using the following code: if (MFMailComposeViewController.CanSendMail) { MFMailComposeViewController mail; mail = new MFMailComposeViewController (); mail.SetSubject ("Subject Test"); mail.SetMessageBody ("Body Test", false); mail.Finished += HandleMailFinished; this.navigationController.PresentModalViewController (mail, true); } else { new UIAlertView ("Mail Failed", "Mail Failed", null, "OK", null).Show (); } However, once the user selects Cancel | Delete Draft or Cancel | Save Draft or Send, the App throws a run-time error like the following: Stacktrace: at (wrapper managed-to-native) MonoTouch.UIKit.UIApplication.UIApplicationMain (int,string[],intptr,intptr) <0x00004 at (wrapper managed-to-native) MonoTouch.UIKit.UIApplication.UIApplicationMain (int,string[],intptr,intptr) <0x00004 at MonoTouch.UIKit.UIApplication.Main (string[],string,string) [0x00038] in /Users/plasma/Source/iphone/monotouch/UIKit/UIApplication.cs:26 at MonoTouch.UIKit.UIApplication.Main (string[]) [0x00000] in /Users/plasma/Source/iphone/monotouch/UIKit/UIApplication.cs:31 at MailDialog.Application.Main (string[]) [0x00000] in /Users/rrkhan/Projects/Sandbox/MailDialog/Main.cs:15 at (wrapper runtime-invoke) .runtime_invoke_void_object (object,intptr,intptr,intptr) Native stacktrace: 0 MailDialog 0x000be66f mono_handle_native_sigsegv + 343 1 MailDialog 0x0000e43e mono_sigsegv_signal_handler + 313 2 libSystem.B.dylib 0x903e946b _sigtramp + 43 3 ??? 0xffffffff 0x0 + 4294967295 4 MessageUI 0x01a9f6b7 -[MFMailComposeController _close] + 284 5 UIKit 0x01f682f1 -[UIActionSheet(Private) _buttonClicked:] + 258 6 UIKit 0x01be1a6e -[UIApplication sendAction:to:from:forEvent:] + 119 7 UIKit 0x01c701b5 -[UIControl sendAction:to:forEvent:] + 67 8 UIKit 0x01c72647 -[UIControl(Internal) _sendActionsForEvents:withEvent:] + 527 9 UIKit 0x01c711f4 -[UIControl touchesEnded:withEvent:] + 458 10 UIKit 0x01c060d1 -[UIWindow _sendTouchesForEvent:] + 567 11 UIKit 0x01be737a -[UIApplication sendEvent:] + 447 12 UIKit 0x01bec732 _UIApplicationHandleEvent + 7576 13 GraphicsServices 0x03eb7a36 PurpleEventCallback + 1550 14 CoreFoundation 0x00df9064 CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE1_PERFORM_FUNCTION + 52 15 CoreFoundation 0x00d596f7 __CFRunLoopDoSource1 + 215 16 CoreFoundation 0x00d56983 __CFRunLoopRun + 979 17 CoreFoundation 0x00d56240 CFRunLoopRunSpecific + 208 18 CoreFoundation 0x00d56161 CFRunLoopRunInMode + 97 19 GraphicsServices 0x03eb6268 GSEventRunModal + 217 20 GraphicsServices 0x03eb632d GSEventRun + 115 21 UIKit 0x01bf042e UIApplicationMain + 1160 22 ??? 0x0a1e4bd9 0x0 + 169757657 23 ??? 0x0a1e4b12 0x0 + 169757458 24 ??? 0x0a1e4515 0x0 + 169755925 25 ??? 0x0a1e4451 0x0 + 169755729 26 ??? 0x0a1e44ac 0x0 + 169755820 27 MailDialog 0x0000e202 mono_jit_runtime_invoke + 1360 28 MailDialog 0x001c92af mono_runtime_invoke + 137 29 MailDialog 0x001caf6b mono_runtime_exec_main + 714 30 MailDialog 0x001ca891 mono_runtime_run_main + 812 31 MailDialog 0x00094fe8 mono_jit_exec + 200 32 MailDialog 0x0027cf05 main + 3494 33 MailDialog 0x00002ca1 _start + 208 34 MailDialog 0x00002bd0 start + 40 Debug info from gdb: warning: Could not find object file "/var/folders/Ny/NyElTwhDGD8kZMqIEeLGXE+++TI/-Tmp-//cc6F1tBs.o" - no debug information available for "template.m". 
warning: .o file "/Developer/MonoTouch/SDKs/MonoTouch.iphonesimulator4.2.sdk/usr/lib/libmonotouch.a(zlib-helper.x86.42.o)" more recent than executable timestamp in "/Users/rrkhan/Library/Application Support/iPhone Simulator/4.2/Applications/52AF1D24-AADA-48ED-B373-ED08E89E4985/MailDialog.app/MailDialog" warning: Could not open OSO file /Developer/MonoTouch/SDKs/MonoTouch.iphonesimulator4.2.sdk/usr/lib/libmonotouch.a(zlib-helper.x86.42.o) to scan for pubtypes for objfile /Users/rrkhan/Library/Application Support/iPhone Simulator/4.2/Applications/52AF1D24-AADA-48ED-B373-ED08E89E4985/MailDialog.app/MailDialog warning: .o file "/Developer/MonoTouch/SDKs/MonoTouch.iphonesimulator4.2.sdk/usr/lib/libmonotouch.a(monotouch-glue.x86.42.o)" more recent than executable timestamp in "/Users/rrkhan/Library/Application Support/iPhone Simulator/4.2/Applications/52AF1D24-AADA-48ED-B373-ED08E89E4985/MailDialog.app/MailDialog" warning: Could not open OSO file /Developer/MonoTouch/SDKs/MonoTouch.iphonesimulator4.2.sdk/usr/lib/libmonotouch.a(monotouch-glue.x86.42.o) to scan for pubtypes for objfile /Users/rrkhan/Library/Application Support/iPhone Simulator/4.2/Applications/52AF1D24-AADA-48ED-B373-ED08E89E4985/MailDialog.app/MailDialog warning: .o file "/Developer/MonoTouch/SDKs/MonoTouch.iphonesimulator4.2.sdk/usr/lib/libmonotouch.a(gc.x86.42.o)" more recent than executable timestamp in "/Users/rrkhan/Library/Application Support/iPhone Simulator/4.2/Applications/52AF1D24-AADA-48ED-B373-ED08E89E4985/MailDialog.app/MailDialog" warning: Could not open OSO file /Developer/MonoTouch/SDKs/MonoTouch.iphonesimulator4.2.sdk/usr/lib/libmonotouch.a(gc.x86.42.o) to scan for pubtypes for objfile /Users/rrkhan/Library/Application Support/iPhone Simulator/4.2/Applications/52AF1D24-AADA-48ED-B373-ED08E89E4985/MailDialog.app/MailDialog Error connecting stdout and stderr (127.0.0.1:10001) warning: .o file "/Developer/MonoTouch/SDKs/MonoTouch.iphonesimulator4.2.sdk/usr/lib/libmonotouch.a(monotouch-glue.x86.42.o)" more recent than executable timestamp in "/Users/rrkhan/Library/Application Support/iPhone Simulator/4.2/Applications/52AF1D24-AADA-48ED-B373-ED08E89E4985/MailDialog.app/MailDialog" warning: Couldn't open object file '/Developer/MonoTouch/SDKs/MonoTouch.iphonesimulator4.2.sdk/usr/lib/libmonotouch.a(monotouch-glue.x86.42.o)' Attaching to process 9992. Reading symbols for shared libraries . done Reading symbols for shared libraries ....................................................................................................................... 
done 0x9038e459 in read$UNIX2003 () 8 0x903a8a12 in __workq_kernreturn () 7 "WebThread" 0x903830fa in mach_msg_trap () 6 0x903b10a6 in __semwait_signal () 5 0x90383136 in semaphore_wait_trap () 4 0x903830fa in mach_msg_trap () 3 0x903a8a12 in __workq_kernreturn () 2 "com.apple.libdispatch-manager" 0x903a9982 in kevent () * 1 "com.apple.main-thread" 0x9038e459 in read$UNIX2003 () Thread 8 (process 9992): 0 0x903a8a12 in __workq_kernreturn () 1 0x903a8fa8 in _pthread_wqthread () 2 0x903a8bc6 in start_wqthread () Thread 7 (process 9992): 0 0x903830fa in mach_msg_trap () 1 0x90383867 in mach_msg () 2 0x00df94a6 in __CFRunLoopServiceMachPort () 3 0x00d56874 in __CFRunLoopRun () 4 0x00d56240 in CFRunLoopRunSpecific () 5 0x00d56161 in CFRunLoopRunInMode () 6 0x04f7c423 in RunWebThread () 7 0x903b085d in _pthread_start () 8 0x903b06e2 in thread_start () Thread 6 (process 9992): 0 0x903b10a6 in __semwait_signal () 1 0x903dcee5 in nanosleep$UNIX2003 () 2 0x903dce23 in usleep$UNIX2003 () 3 0x0027714c in monotouch_pump_gc () 4 0x903b085d in _pthread_start () 5 0x903b06e2 in thread_start () Thread 5 (process 9992): 0 0x90383136 in semaphore_wait_trap () 1 0x0015ae1d in finalizer_thread (unused=0x0) at ../../../../mono/metadata/gc.c:1026 2 0x002034a3 in start_wrapper (data=0x7b16ba0) at ../../../../mono/metadata/threads.c:661 3 0x002448e2 in thread_start_routine (args=0x8037e34) at ../../../../mono/io-layer/wthreads.c:286 4 0x00274357 in GC_start_routine (arg=0x6ff7f60) at ../../../libgc/pthread_support.c:1390 5 0x903b085d in _pthread_start () 6 0x903b06e2 in thread_start () Thread 4 (process 9992): 0 0x903830fa in mach_msg_trap () 1 0x90383867 in mach_msg () 2 0x0011cc46 in mach_exception_thread (arg=0x0) at ../../../../mono/mini/mini-darwin.c:138 3 0x903b085d in _pthread_start () 4 0x903b06e2 in thread_start () Thread 3 (process 9992): 0 0x903a8a12 in __workq_kernreturn () 1 0x903a8fa8 in _pthread_wqthread () 2 0x903a8bc6 in start_wqthread () Thread 2 (process 9992): 0 0x903a9982 in kevent () 1 0x903aa09c in _dispatch_mgr_invoke () 2 0x903a9559 in _dispatch_queue_invoke () 3 0x903a92fe in _dispatch_worker_thread2 () 4 0x903a8d81 in _pthread_wqthread () 5 0x903a8bc6 in start_wqthread () Thread 1 (process 9992): 0 0x9038e459 in read$UNIX2003 () 1 0x000be81f in mono_handle_native_sigsegv (signal=11, ctx=0xbfffd238) at ../../../../mono/mini/mini-exceptions.c:1826 2 0x0000e43e in mono_sigsegv_signal_handler (_dummy=10, info=0xbfffd1f8, context=0xbfffd238) at ../../../../mono/mini/mini.c:4846 3 4 0x028d6a63 in objc_msgSend () 5 0x01ad469f in func.24012 () 6 0x01a9f6b7 in -[MFMailComposeController _close] () 7 0x01f682f1 in -[UIActionSheet(Private) _buttonClicked:] () 8 0x01be1a6e in -[UIApplication sendAction:to:from:forEvent:] () 9 0x01c701b5 in -[UIControl sendAction:to:forEvent:] () 10 0x01c72647 in -[UIControl(Internal) _sendActionsForEvents:withEvent:] () 11 0x01c711f4 in -[UIControl touchesEnded:withEvent:] () 12 0x01c060d1 in -[UIWindow _sendTouchesForEvent:] () 13 0x01be737a in -[UIApplication sendEvent:] () 14 0x01bec732 in _UIApplicationHandleEvent () 15 0x03eb7a36 in PurpleEventCallback () 16 0x00df9064 in CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE1_PERFORM_FUNCTION () 17 0x00d596f7 in __CFRunLoopDoSource1 () 18 0x00d56983 in __CFRunLoopRun () 19 0x00d56240 in CFRunLoopRunSpecific () 20 0x00d56161 in CFRunLoopRunInMode () 21 0x03eb6268 in GSEventRunModal () 22 0x03eb632d in GSEventRun () 23 0x01bf042e in UIApplicationMain () 24 0x0a1e4bd9 in ?? () 25 0x0a1e4b12 in ?? () 26 0x0a1e4515 in ?? 
() 27 0x0a1e4451 in ?? () 28 0x0a1e44ac in ?? () 29 0x0000e202 in mono_jit_runtime_invoke (method=0xa806e6c, obj=0x0, params=0xbfffedbc, exc=0x0) at ../../../../mono/mini/mini.c:4733 30 0x001c92af in mono_runtime_invoke (method=0xa806e6c, obj=0x0, params=0xbfffedbc, exc=0x0) at ../../../../mono/metadata/object.c:2615 31 0x001caf6b in mono_runtime_exec_main (method=0xa806e6c, args=0xa6a34e0, exc=0x0) at ../../../../mono/metadata/object.c:3581 32 0x001ca891 in mono_runtime_run_main (method=0xa806e6c, argc=0, argv=0xbfffeef4, exc=0x0) at ../../../../mono/metadata/object.c:3355 33 0x00094fe8 in mono_jit_exec (domain=0x6f8fe58, assembly=0xa200730, argc=1, argv=0xbfffeef0) at ../../../../mono/mini/driver.c:1094 34 0x0027cf05 in main () ================================================================= Got a SIGSEGV while executing native code. This usually indicates a fatal error in the mono runtime or one of the native libraries used by your application. Unhandled Exception: System.NullReferenceException: Object reference not set to an instance of an object at (wrapper managed-to-native) MonoTouch.UIKit.UIApplication:UIApplicationMain (int,string[],intptr,intptr) at MonoTouch.UIKit.UIApplication.Main (System.String[] args, System.String principalClassName, System.String delegateClassName) [0x00038] in /Users/plasma/Source/iphone/monotouch/UIKit/UIApplication.cs:26 at MonoTouch.UIKit.UIApplication.Main (System.String[] args) [0x00000] in /Users/plasma/Source/iphone/monotouch/UIKit/UIApplication.cs:31 at MailDialog.Application.Main (System.String[] args) [0x00000] in /Users/rrkhan/Projects/Sandbox/MailDialog/Main.cs:15 I have a very simple sample project illustrating the problem. I can sent you if required.
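
    A likely culprit, given that the crash fires inside -[MFMailComposeController _close] right after the user taps Send or a Cancel option, is that the MFMailComposeViewController only lives in a local variable, so its managed wrapper can be garbage-collected while the native dialog is still on screen; by the time the Finished callback comes back there is nothing left on the managed side and the runtime reports the SIGSEGV/NullReferenceException. The sketch below is hedged: ComposeScreen and ComposeMail are placeholder names, only the stock MonoTouch API of that era is used, and the point is simply to keep the controller in a field for the lifetime of the dialog and dismiss it in the Finished handler.

        using MonoTouch.UIKit;
        using MonoTouch.MessageUI;

        public class ComposeScreen : UIViewController
        {
            // A field (not a local) so the managed wrapper stays alive while the dialog is shown.
            MFMailComposeViewController mail;

            void ComposeMail ()
            {
                if (!MFMailComposeViewController.CanSendMail) {
                    new UIAlertView ("Mail Failed", "Mail Failed", null, "OK", null).Show ();
                    return;
                }

                mail = new MFMailComposeViewController ();
                mail.SetSubject ("Subject Test");
                mail.SetMessageBody ("Body Test", false);
                mail.Finished += (sender, e) => {
                    // Dismiss the dialog first, then release the reference.
                    e.Controller.DismissModalViewControllerAnimated (true);
                    mail = null;
                };
                PresentModalViewController (mail, true);
            }
        }

    Whether this is the actual cause would need the sample project to confirm, but keeping event-publishing controllers (mail composers, image pickers and the like) in fields rather than locals is the usual first check for this class of MonoTouch crash.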

    Read the article

  • With a jQuery modal dialog, how do I stop the form values from persisting?

    - by stormist
    (Citing source at: http://jqueryui.com/demos/dialog/#modal-form) As an example, this works great but each time the form is subsequently opened the user entered values remain. How can I stop this behavior? (the form will be used multiple times on the same page. <style type="text/css"> body { font-size: 62.5%; } label, input { display:block; } input.text { margin-bottom:12px; width:95%; padding: .4em; } fieldset { padding:0; border:0; margin-top:25px; } h1 { font-size: 1.2em; margin: .6em 0; } div#users-contain { width: 350px; margin: 20px 0; } div#users-contain table { margin: 1em 0; border-collapse: collapse; width: 100%; } div#users-contain table td, div#users-contain table th { border: 1px solid #eee; padding: .6em 10px; text-align: left; } .ui-dialog .ui-state-error { padding: .3em; } .validateTips { border: 1px solid transparent; padding: 0.3em; } </style> <script type="text/javascript"> $(function() { // a workaround for a flaw in the demo system (http://dev.jqueryui.com/ticket/4375), ignore! $("#dialog").dialog("destroy"); var name = $("#name"), email = $("#email"), password = $("#password"), allFields = $([]).add(name).add(email).add(password), tips = $(".validateTips"); function updateTips(t) { tips .text(t) .addClass('ui-state-highlight'); setTimeout(function() { tips.removeClass('ui-state-highlight', 1500); }, 500); } function checkLength(o,n,min,max) { if ( o.val().length > max || o.val().length < min ) { o.addClass('ui-state-error'); updateTips("Length of " + n + " must be between "+min+" and "+max+"."); return false; } else { return true; } } function checkRegexp(o,regexp,n) { if ( !( regexp.test( o.val() ) ) ) { o.addClass('ui-state-error'); updateTips(n); return false; } else { return true; } } $("#dialog-form").dialog({ autoOpen: false, height: 300, width: 350, modal: true, buttons: { 'Create an account': function() { var bValid = true; allFields.removeClass('ui-state-error'); bValid = bValid && checkLength(name,"username",3,16); bValid = bValid && checkLength(email,"email",6,80); bValid = bValid && checkLength(password,"password",5,16); bValid = bValid && checkRegexp(name,/^[a-z]([0-9a-z_])+$/i,"Username may consist of a-z, 0-9, underscores, begin with a letter."); // From jquery.validate.js (by joern), contributed by Scott Gonzalez: http://projects.scottsplayground.com/email_address_validation/ bValid = bValid && checkRegexp(email,/^((([a-z]|\d|[!#\$%&'\*\+\-\/=\?\^_`{\|}~]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])+(\.([a-z]|\d|[!#\$%&'\*\+\-\/=\?\^_`{\|}~]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])+)*)|((\x22)((((\x20|\x09)*(\x0d\x0a))?(\x20|\x09)+)?(([\x01-\x08\x0b\x0c\x0e-\x1f\x7f]|\x21|[\x23-\x5b]|[\x5d-\x7e]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])|(\\([\x01-\x09\x0b\x0c\x0d-\x7f]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF]))))*(((\x20|\x09)*(\x0d\x0a))?(\x20|\x09)+)?(\x22)))@((([a-z]|\d|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])|(([a-z]|\d|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])([a-z]|\d|-|\.|_|~|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])*([a-z]|\d|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])))\.)+(([a-z]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])|(([a-z]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])([a-z]|\d|-|\.|_|~|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])*([a-z]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])))\.?$/i,"eg. 
[email protected]"); bValid = bValid && checkRegexp(password,/^([0-9a-zA-Z])+$/,"Password field only allow : a-z 0-9"); if (bValid) { $('#users tbody').append('<tr>' + '<td>' + name.val() + '</td>' + '<td>' + email.val() + '</td>' + '<td>' + password.val() + '</td>' + '</tr>'); $(this).dialog('close'); } }, Cancel: function() { $(this).dialog('close'); } }, close: function() { allFields.val('').removeClass('ui-state-error'); } }); $('#create-user') .button() .click(function() { $('#dialog-form').dialog('open'); }); }); </script> <div class="demo"> <div id="dialog-form" title="Create new user"> <p class="validateTips">All form fields are required.</p> <form> <fieldset> <label for="name">Name</label> <input type="text" name="name" id="name" class="text ui-widget-content ui-corner-all" /> <label for="email">Email</label> <input type="text" name="email" id="email" value="" class="text ui-widget-content ui-corner-all" /> <label for="password">Password</label> <input type="password" name="password" id="password" value="" class="text ui-widget-content ui-corner-all" /> </fieldset> </form> </div> <div id="users-contain" class="ui-widget"> <h1>Existing Users:</h1> <table id="users" class="ui-widget ui-widget-content"> <thead> <tr class="ui-widget-header "> <th>Name</th> <th>Email</th> <th>Password</th> </tr> </thead> <tbody> <tr> <td>John Doe</td> <td>[email protected]</td> <td>johndoe1</td> </tr> </tbody> </table> </div> <button id="create-user">Create new user</button> </div><!-- End demo --> <div class="demo-description"> <p>Use a modal dialog to require that the user enter data during a multi-step process. Embed form markup in the content area, set the <code>modal</code> option to true, and specify primary and secondary user actions with the <code>buttons</code> option.</p> </div><!-- End demo-description -->

    Read the article

  • Storage model for various user settings and attributes in a database?

    - by dvd
    I'm currently trying to upgrade a user management system for one web application. This web application is used to provide remote access to various networking equipment for educational purposes. All equipment is assigned to various pods, which users can book for periods of time. The current system is very simple - just 2 user types: administrators and students. All their security and other attributes are mostly hardcoded. I want to change this to something like the following model: user <-- (1..n)profile <--- (1..n) attributes. I.e. user can be assigned several profiles and each profile can have multiple attributes. At runtime all profiles and attributes are merged into single active profile. Some examples of attributes i'm planning to implement: EXPIRATION_DATE - single value, value type: date, specifies when user account will expire; ACCESS_POD - single value, value type: ref to object of Pod class, specifies which pod the user is allowed to book, user profile can have multiple such attributes with different values; TIME_QUOTA - single value, value type: integer, specifies maximum length of time for which student can reserve equipment. CREDIT_CHARGING - multi valued, specifies how much credits will be assigned to user over period of time. (Reservation of devices will cost credits, which will regenerate over time); Security permissions and user preferences can end up as profile or user attributes too: i.e CAN_CREATE_USERS, CAN_POST_NEWS, CAN_EDIT_DEVICES, FONT_SIZE, etc.. This way i could have, for example: students of course A will have profiles STUDENT (with basic attributes) and PROFILE A (wich grants acces to pod A). Students of course B will have profiles: STUDENT, PROFILE B(wich grants to pod B and have increased time quotas). I'm using Spring and Hibernate frameworks for this application and MySQL for database. For this web application i would like to stay within boundaries of these tools. The problem is, that i can't figure out how to best represent all these attributes in database. I also want to create some kind of unified way of retrieveing these attributes and their values. Here is the model i've come up with. Base classes. public abstract class Attribute{ private Long id; Attribute() {} abstract public String getName(); public Long getId() {return id; } void setId(Long id) {this.id = id;} } public abstract class SimpleAttribute extends Attribute{ public abstract Serializable getValue(); abstract void setValue(Serializable s); @Override public boolean equals(Object obj) { ... } @Override public int hashCode() { ... } } Simple attributes can have only one value of any type (including object and enum). Here are more specific attributes: public abstract class IntAttribute extends SimpleAttribute { private Integer value; public Integer getValue() { return value; } void setValue(Integer value) { this.value = value;} void setValue(Serializable s) { setValue((Integer)s); } } public class MaxOrdersAttribute extends IntAttribute { public String getName() { return "Maximum outstanding orders"; } } public final class CreditRateAttribute extends IntAttribute { public String getName() { return "Credit Regeneration Rate"; } } All attributes stored stored using Hibenate variant "table per class hierarchy". 
Mapping: <class name="ru.mirea.rea.model.abac.Attribute" table="ATTRIBUTES" abstract="true" > <id name="id" column="id"> <generator class="increment" /> </id> <discriminator column="attributeType" type="string"/> <subclass name="ru.mirea.rea.model.abac.SimpleAttribute" abstract="true"> <subclass name="ru.mirea.rea.model.abac.IntAttribute" abstract="true" > <property name="value" column="intVal" type="integer"/> <subclass name="ru.mirea.rea.model.abac.CreditRateAttribute" discriminator-value="CreditRate" /> <subclass name="ru.mirea.rea.model.abac.MaxOrdersAttribute" discriminator-value="MaxOrders" /> </subclass> <subclass name="ru.mirea.rea.model.abac.DateAttribute" abstract="true" > <property name="value" column="dateVal" type="timestamp"/> <subclass name="ru.mirea.rea.model.abac.ExpirationDateAttribute" discriminator-value="ExpirationDate" /> </subclass> <subclass name="ru.mirea.rea.model.abac.PodAttribute" abstract="true" > <many-to-one name="value" column="podVal" class="ru.mirea.rea.model.pods.Pod"/> <subclass name="ru.mirea.rea.model.abac.PodAccessAttribute" discriminator-value="PodAccess" lazy="false"/> </subclass> <subclass name="ru.mirea.rea.model.abac.SecurityPermissionAttribute" discriminator-value="SecurityPermission" lazy="false"> <property name="value" column="spVal" type="ru.mirea.rea.db.hibernate.customTypes.SecurityPermissionType"/> </subclass> </subclass> </class> SecurityPermissionAttribute uses enumeration of various permissions as it's value. Several types of attributes imlement GrantedAuthority interface and can be used with Spring Security for authentication and authorization. Attributes can be created like this: public final class AttributeManager { public <T extends SimpleAttribute> T createSimpleAttribute(Class<T> c, Serializable value) { Session session = HibernateUtil.getCurrentSession(); T att = null; ... att = c.newInstance(); att.setValue(value); session.save(att); session.flush(); ... return att; } public <T extends SimpleAttribute> List<T> findSimpleAttributes(Class<T> c) { List<T> result = new ArrayList<T>(); Session session = HibernateUtil.getCurrentSession(); List<T> temp = session.createCriteria(c).list(); result.addAll(temp); return result; } } And retrieved through User Profiles to which they are assigned. I do not expect that there would be very large amount of rows in the ATTRIBUTES table, but are there any serious drawbacks of such design?

    Read the article

  • problem with tinymce textarea in dynamically added jquery tabs

    - by kranthi
    I have an aspx page(Default1.aspx),in which i have a static jquery tab and anchor tag upon clicking the anchor tag(Add Tab) I am adding new tab dynamically,which gets its contents loaded from another aspx page(Default2.aspx).This second page contains some text inside a tag,a textarea with 'tinymce' class which is placed inside a div with 'style="display:none" ' and this textarea gets displayed only upon clicking the edit button on that page. The HTML of Default1.aspx page looks like this. <head runat="server"> <title>Untitled Page</title> <script src="js/jquery-1.3.2.min.js" type="text/javascript"></script> <script src="js/jquery-ui-1.7.2.custom.min.js" type="text/javascript"></script> <link href="css/custom-theme/jquery-ui-1.7.2.custom.css" rel="stylesheet" type="text/css" /> <link href="css/widgets.css" rel="stylesheet" type="text/css" /> <link href="css/print.css" rel="stylesheet" type="text/css" /> <link href="css/reset.css" rel="stylesheet" type="text/css" /> <script type="text/javascript" src="js/tiny_mce/jquery.tinymce.js"></script> <script type="text/javascript"> $(function() { //DECLARE FUNCTION: removetab var removetab = function(tabselector, index) { $(".removetab").click(function(){ $(tabselector).tabs('remove',index); }); }; //create tabs $("#tabs").tabs({ add: function(event, ui) { //select newely opened tab $(this).tabs('select',ui.index); //load function to close tab removetab($(this), ui.index); }, show: function(event, ui) { if($.fn.tinymce) { $('textarea.tinymce').tinymce({ // Location of TinyMCE script script_url : 'js/tiny_mce/tiny_mce.js', // General options theme : "advanced", plugins : "safari,style,layer,table,advhr,advimage,advlink,inlinepopups,insertdatetime,preview,media,searchreplace,print,contextmenu,paste,directionality,fullscreen,noneditable,visualchars,nonbreaking,xhtmlxtras,template", // Theme options theme_advanced_buttons1 : "bold,italic,underline,strikethrough,|,bullist,numlist,|,justifyleft,justifycenter,justifyright,justifyfull,styleselect,formatselect,fontselect,fontsizeselect", theme_advanced_buttons2 : "outdent,indent,blockquote,|,undo,redo,|,link,unlink,anchor,image,cleanup,help,code,|,insertdate,inserttime,preview,|,forecolor,backcolor", theme_advanced_buttons3 : "sub,sup,|,ltr,rtl,|,fullscreen", theme_advanced_toolbar_location : "top", theme_advanced_toolbar_align : "left" /*theme_advanced_statusbar_location : "bottom",*/ /*theme_advanced_resizing : true,*/ }); } //load function to close selected tabs removetab($(this), ui.index); } }); //load new tab $(".addtab").click(function(){ var href=$(this).attr("href"); var title=$(this).attr("title"); $("#tabs").tabs( 'add' , href , title+' <span class="removetab ui-icon ui-icon-circle-close" style="float:right; margin: -2px -10px 0px 3px; cursor:pointer;"></span>'); return false; }); }); function showEditFields(){ $('.edit').css('display','inline'); } </script> </head> <body> <form id="form1" runat="server"> <div> <a class="addtab" title="Tab Label" href="HTMLPage.htm">Add Tab</a> <div id="tabs"> <ul> <li><a href="#tabs-1">Default Tab</a></li> </ul> <div id="tabs-1"> <p>Etiam aliquet massa et lorem. Mauris dapibus lacus auctor risus. Aenean tempor ullamcorper leo. Vivamus sed magna quis ligula eleifend adipiscing. Duis orci. Aliquam Proin elit arcu, rutrum commodo, vehicula tempus, commodo a, risus. Curabitur nec arcu. Donec sollicitudin mi sit amet mauris. Nam elementum quam ullamcorper ante.sodales tortor vitae ipsum. Aliquam nulla. Duis aliquam molestie erat. 
Ut et mauris vel pede varius sollicitudin. Sed ut dolor nec orci tincidunt interdum. Phasellus ipsum. Nunc tristique tempus lectus.</p> </div> </div> </div> </form> </body> and the HTML of Default2.aspx looks like this. <head> </head> <body> <form id="form1" runat="server"> <div class="demo"> <p>Proin elit arcu, rutrum commodo, vehicula tempus, commodo a, risus. Curabitur nec arcu. Donec sollicitudin mi sit amet mauris. Nam elementum quam ullamcorper ante. Etiam aliquet massa et lorem. Mauris dapibus lacus auctor risus. Aenean tempor ullamcorper leo. Vivamus sed magna quis ligula eleifend adipiscing. Duis orci. Aliquam sodales tortor vitae ipsum. Aliquam nulla. Duis aliquam molestie erat. Ut et mauris vel pede varius sollicitudin. Sed ut dolor nec orci tincidunt interdum. Phasellus ipsum. Nunc tristique tempus lectus. <div class="edit" style="display:none"> <textarea style="height:80px; width:100%" class="tinymce" name="" rows="8" runat="server" id="txtans">answer text goes here </textarea> </div> <input id="Button1" type="button" value="edit" onclick="showEditFields();" /> </p> </form> </body> so when I click on the "edit" button available on Default2.aspx ,the textarea with tinymce should appear and I can add as many tabs as I want from Default1.aspx by clicking on Add Tab(anchor) which loads multiple tabs with content from Default2.aspx.After adding these multiple tabs ,if I check to see whether all the textareas are with tinymce,I noticed that only the 1st tab contains textarea with tinymce and in all the other tabs tinymce doesnt show up ,simply the normal text area appears. Could someone please help me with this? Thanks.

    Read the article

  • Where is this System.MissingMethodException occurring? How can I tell?

    - by Jeremy Holovacs
    I am a newbie to ASP.NET MVC (v2), and I am trying to use a strongly-typed view tied to a model object that contains two optional multi-select listbox objects. Upon clicking the submit button, these objects may have 0 or more values selected for them. My model class looks like this: using System; using System.Web.Mvc; using System.Collections.Generic; namespace ModelClasses.Messages { public class ComposeMessage { public bool is_html { get; set; } public bool is_urgent { get; set; } public string message_subject { get; set; } public string message_text { get; set; } public string action { get; set; } public MultiSelectList recipients { get; set; } public MultiSelectList recipient_roles { get; set; } public ComposeMessage() { this.is_html = false; this.is_urgent = false; this.recipients = new MultiSelectList(new Dictionary<int, string>(), "Key", "Value"); this.recipient_roles = new MultiSelectList(new Dictionary<int, string>(), "Key", "Value"); } } } My view looks like this: <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<ModelClasses.Messages.ComposeMessage>" %> <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">Compose A Message </asp:Content> <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server"> <h2> Compose A New Message:</h2> <br /> <span id="navigation_top"> <%= Html.ActionLink("\\Home", "Index", "Home") %><%= Html.ActionLink("\\Messages", "Home") %></span> <% using (Html.BeginForm()) { %> <fieldset> <legend>Message Headers</legend> <label for="message_subject"> Subject:</label> <%= Html.TextBox("message_subject")%> <%= Html.ValidationMessage("message_subject")%> <label for="selected_recipients"> Recipient Users:</label> <%= Html.ListBox("recipients") %> <%= Html.ValidationMessage("selected_recipients")%> <label for="selected_recipient_roles"> Recipient Roles:</label> <%= Html.ListBox("recipient_roles") %> <%= Html.ValidationMessage("selected_recipient_roles")%> <label for="is_urgent"> Urgent?</label> <%= Html.CheckBox("is_urgent") %> <%= Html.ValidationMessage("is_urgent")%> </fieldset> <fieldset> <legend>Message Text</legend> <%= Html.TextArea("message_text") %> <%= Html.ValidationMessage("message_text")%> </fieldset> <input type="reset" name="reset" id="reset" value="Reset" /> <input type="submit" name="action" id="send_message" value="Send" /> <% } %> <span id="navigation_bottom"> <%= Html.ActionLink("\\Home", "Index", "Home") %><%= Html.ActionLink("\\Messages", "Home") %></span> </asp:Content> <asp:Content ID="Content3" ContentPlaceHolderID="Scripts" runat="server"> </asp:Content> I have a parameterless ActionResult in my MessagesController like this: [Authorize] public ActionResult ComposeMessage() { ModelClasses.Messages.ComposeMessage FormData = new ModelClasses.Messages.ComposeMessage(); Common C = (Common)Session["Common"]; FormData.recipients = new MultiSelectList(C.AvailableUsers, "Key", "Value"); FormData.recipient_roles = new MultiSelectList(C.AvailableRoles, "Key", "Value"); return View(FormData); } ...and my model-based controller looks like this: [Authorize, AcceptVerbs(HttpVerbs.Post)] public ActionResult ComposeMessage(DCASS3.Classes.Messages.ComposeMessage FormData) { DCASSUser CurrentUser = (DCASSUser)Session["CurrentUser"]; Common C = (Common)Session["Common"]; //... (business logic) return View(FormData); } Problem is, I can access the page fine before a submit. 
When I actually make selections and press the submit button, however, I get: Server Error in '/' Application. No parameterless constructor defined for this object. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.MissingMethodException: No parameterless constructor defined for this object. Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [MissingMethodException: No parameterless constructor defined for this object.] System.RuntimeTypeHandle.CreateInstance(RuntimeType type, Boolean publicOnly, Boolean noCheck, Boolean& canBeCached, RuntimeMethodHandle& ctor, Boolean& bNeedSecurityCheck) +0 System.RuntimeType.CreateInstanceSlow(Boolean publicOnly, Boolean fillCache) +86 System.RuntimeType.CreateInstanceImpl(Boolean publicOnly, Boolean skipVisibilityChecks, Boolean fillCache) +230 System.Activator.CreateInstance(Type type, Boolean nonPublic) +67 System.Activator.CreateInstance(Type type) +6 System.Web.Mvc.DefaultModelBinder.CreateModel(ControllerContext controllerContext, ModelBindingContext bindingContext, Type modelType) +307 System.Web.Mvc.DefaultModelBinder.BindSimpleModel(ControllerContext controllerContext, ModelBindingContext bindingContext, ValueProviderResult valueProviderResult) +495 System.Web.Mvc.DefaultModelBinder.BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext) +473 System.Web.Mvc.DefaultModelBinder.GetPropertyValue(ControllerContext controllerContext, ModelBindingContext bindingContext, PropertyDescriptor propertyDescriptor, IModelBinder propertyBinder) +45 System.Web.Mvc.DefaultModelBinder.BindProperty(ControllerContext controllerContext, ModelBindingContext bindingContext, PropertyDescriptor propertyDescriptor) +642 System.Web.Mvc.DefaultModelBinder.BindProperties(ControllerContext controllerContext, ModelBindingContext bindingContext) +144 System.Web.Mvc.DefaultModelBinder.BindComplexElementalModel(ControllerContext controllerContext, ModelBindingContext bindingContext, Object model) +95 System.Web.Mvc.DefaultModelBinder.BindComplexModel(ControllerContext controllerContext, ModelBindingContext bindingContext) +2386 System.Web.Mvc.DefaultModelBinder.BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext) +539 System.Web.Mvc.ControllerActionInvoker.GetParameterValue(ControllerContext controllerContext, ParameterDescriptor parameterDescriptor) +447 System.Web.Mvc.ControllerActionInvoker.GetParameterValues(ControllerContext controllerContext, ActionDescriptor actionDescriptor) +173 System.Web.Mvc.ControllerActionInvoker.InvokeAction(ControllerContext controllerContext, String actionName) +801 System.Web.Mvc.Controller.ExecuteCore() +151 System.Web.Mvc.ControllerBase.Execute(RequestContext requestContext) +105 System.Web.Mvc.ControllerBase.System.Web.Mvc.IController.Execute(RequestContext requestContext) +36 System.Web.Mvc.<c_DisplayClass8.b_4() +65 System.Web.Mvc.Async.<c_DisplayClass1.b_0() +44 System.Web.Mvc.Async.<c__DisplayClass81.<BeginSynchronous>b__7(IAsyncResult _) +42 System.Web.Mvc.Async.WrappedAsyncResult1.End() +140 System.Web.Mvc.Async.AsyncResultWrapper.End(IAsyncResult asyncResult, Object tag) +54 
System.Web.Mvc.Async.AsyncResultWrapper.End(IAsyncResult asyncResult, Object tag) +40 System.Web.Mvc.MvcHandler.EndProcessRequest(IAsyncResult asyncResult) +52 System.Web.Mvc.MvcHandler.System.Web.IHttpAsyncHandler.EndProcessRequest(IAsyncResult result) +36 System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +8677678 System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +155 Version Information: Microsoft .NET Framework Version:2.0.50727.3603; ASP.NET Version:2.0.50727.3082 This error shows up before I can trap it. I have no idea where it's choking, or what it's choking on. I don't see any part of this model that cannot be created with a parameterless constructor, and I can't find out where it's dying... Help is appreciated, thanks. -Jeremy
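
    The usual cause of this particular MissingMethodException is the model binder, not the application code: on post-back the DefaultModelBinder rebuilds ComposeMessage and then tries to construct its two MultiSelectList properties from the posted "recipients"/"recipient_roles" values, and MultiSelectList has no parameterless constructor. A minimal sketch of one common fix, assuming the view model can be reshaped (property names are kept from the question): bind the posted selections to plain arrays and keep the select-list data off the posted model.

        namespace ModelClasses.Messages
        {
            // Hypothetical reshaped view model: every member has a default
            // constructor, so the DefaultModelBinder can build it on post-back.
            public class ComposeMessage
            {
                public bool is_html { get; set; }
                public bool is_urgent { get; set; }
                public string message_subject { get; set; }
                public string message_text { get; set; }

                // The posted <select multiple> values bind to simple collections;
                // the property names must match the Html.ListBox(...) names in the view.
                public string[] recipients { get; set; }
                public string[] recipient_roles { get; set; }
            }
        }

    The available choices then live somewhere that is not model-bound (ViewData, for example) and are passed to the helper explicitly, e.g. Html.ListBox("recipients", (MultiSelectList)ViewData["recipients"]), so nothing on the posted model requires the missing constructor.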

    Read the article

  • Passing XML markers to Google Map

    - by djmadscribbler
    I've been creating a V3 Google map based on this example from Mike Williams http://www.geocodezip.com/v3_MW_example_map3.html I've run into a bit of a problem though. If I have no parameters in my URL then I get the error "id is undefined idmarkers [id.toLowerCase()] = marker;" in Firebug and only one marker will show up. If I have a parameter (?id=105 for example) then all the sidebar links say 105 (or whatever the parameter in the URL was) instead of their respective label as listed in the XML file and a random infowindow will be opened instead of the window for the id in the URL. Here is my javascript: var map = null; var lastmarker = null; // ========== Read paramaters that have been passed in ========== // Before we go looking for the passed parameters, set some defaults // in case there are no parameters var id; var index = -1; // these set the initial center, zoom and maptype for the map // if it is not specified in the query string var lat = 42.194741; var lng = -121.700301; var zoom = 18; var maptype = google.maps.MapTypeId.HYBRID; function MapTypeId2UrlValue(maptype) { var urlValue = 'm'; switch (maptype) { case google.maps.MapTypeId.HYBRID: urlValue = 'h'; break; case google.maps.MapTypeId.SATELLITE: urlValue = 'k'; break; case google.maps.MapTypeId.TERRAIN: urlValue = 't'; break; default: case google.maps.MapTypeId.ROADMAP: urlValue = 'm'; break; } return urlValue; } // If there are any parameters at eh end of the URL, they will be in location.search // looking something like "?marker=3" // skip the first character, we are not interested in the "?" var query = location.search.substring(1); // split the rest at each "&" character to give a list of "argname=value" pairs var pairs = query.split("&"); for (var i = 0; i < pairs.length; i++) { // break each pair at the first "=" to obtain the argname and value var pos = pairs[i].indexOf("="); var argname = pairs[i].substring(0, pos).toLowerCase(); var value = pairs[i].substring(pos + 1).toLowerCase(); // process each possible argname - use unescape() if theres any chance of spaces if (argname == "id") { id = unescape(value); } if (argname == "marker") { index = parseFloat(value); } if (argname == "lat") { lat = parseFloat(value); } if (argname == "lng") { lng = parseFloat(value); } if (argname == "zoom") { zoom = parseInt(value); } if (argname == "type") { // from the v3 documentation 8/24/2010 // HYBRID This map type displays a transparent layer of major streets on satellite images. // ROADMAP This map type displays a normal street map. // SATELLITE This map type displays satellite images. // TERRAIN This map type displays maps with physical features such as terrain and vegetation. 
if (value == "m") { maptype = google.maps.MapTypeId.ROADMAP; } if (value == "k") { maptype = google.maps.MapTypeId.SATELLITE; } if (value == "h") { maptype = google.maps.MapTypeId.HYBRID; } if (value == "t") { maptype = google.maps.MapTypeId.TERRAIN; } } } // this variable will collect the html which will eventually be placed in the side_bar var side_bar_html = ""; // arrays to hold copies of the markers and html used by the side_bar // because the function closure trick doesnt work there var gmarkers = []; var idmarkers = []; // global "map" variable var map = null; // A function to create the marker and set up the event window function function createMarker(point, icon, label, html) { var contentString = html; var marker = new google.maps.Marker({ position: point, map: map, title: label, icon: icon, zIndex: Math.round(point.lat() * -100000) << 5 }); marker.id = id; marker.index = gmarkers.length; google.maps.event.addListener(marker, 'click', function () { lastmarker = new Object; lastmarker.id = marker.id; lastmarker.index = marker.index; infowindow.setContent(contentString); infowindow.open(map, marker); }); // save the info we need to use later for the side_bar gmarkers.push(marker); idmarkers[id.toLowerCase()] = marker; // add a line to the side_bar html side_bar_html += '<a href="javascript:myclick(' + (gmarkers.length - 1) + ')">' + id + '<\/a><br>'; } // This function picks up the click and opens the corresponding info window function myclick(i) { google.maps.event.trigger(gmarkers[i], "click"); } function makeLink() { var mapinfo = "lat=" + map.getCenter().lat().toFixed(6) + "&lng=" + map.getCenter().lng().toFixed(6) + "&zoom=" + map.getZoom() + "&type=" + MapTypeId2UrlValue(map.getMapTypeId()); if (lastmarker) { var a = "/about/map/default.aspx?id=" + lastmarker.id + "&" + mapinfo; var b = "/about/map/default.aspx?marker=" + lastmarker.index + "&" + mapinfo; } else { var a = "/about/map/default.aspx?" 
+ mapinfo; var b = a; } document.getElementById("idlink").innerHTML = '<a href="' + a + '" id=url target=_new>- Link directly to this page by id</a> (id in xml file also entry &quot;name&quot; in sidebar menu)'; document.getElementById("indexlink").innerHTML = '<a href="' + b + '" id=url target=_new>- Link directly to this page by index</a> (position in gmarkers array)'; } function initialize() { // create the map var myOptions = { zoom: zoom, center: new google.maps.LatLng(lat, lng), mapTypeId: maptype, mapTypeControlOptions: { style: google.maps.MapTypeControlStyle.DROPDOWN_MENU }, navigationControl: true, mapTypeId: google.maps.MapTypeId.HYBRID }; map = new google.maps.Map(document.getElementById("map_canvas"), myOptions); var stylesarray = [ { featureType: "poi", elementType: "labels", stylers: [ { visibility: "off" } ] }, { featureType: "landscape.man_made", elementType: "labels", stylers: [ { visibility: "off" } ] } ]; var options = map.setOptions({ styles: stylesarray }); // Make the link the first time when the page opens makeLink(); // Make the link again whenever the map changes google.maps.event.addListener(map, 'maptypeid_changed', makeLink); google.maps.event.addListener(map, 'center_changed', makeLink); google.maps.event.addListener(map, 'bounds_changed', makeLink); google.maps.event.addListener(map, 'zoom_changed', makeLink); google.maps.event.addListener(map, 'click', function () { lastmarker = null; makeLink(); infowindow.close(); }); // Read the data from example.xml downloadUrl("example.xml", function (doc) { var xmlDoc = xmlParse(doc); var markers = xmlDoc.documentElement.getElementsByTagName("marker"); for (var i = 0; i < markers.length; i++) { // obtain the attribues of each marker var lat = parseFloat(markers[i].getAttribute("lat")); var lng = parseFloat(markers[i].getAttribute("lng")); var point = new google.maps.LatLng(lat, lng); var html = markers[i].getAttribute("html"); var label = markers[i].getAttribute("label"); var icon = markers[i].getAttribute("icon"); // create the marker var marker = createMarker(point, icon, label, html); } // put the assembled side_bar_html contents into the side_bar div document.getElementById("side_bar").innerHTML = side_bar_html; // ========= If a parameter was passed, open the info window ========== if (id) { if (idmarkers[id]) { google.maps.event.trigger(idmarkers[id], "click"); } else { alert("id " + id + " does not match any marker"); } } if (index > -1) { if (index < gmarkers.length) { google.maps.event.trigger(gmarkers[index], "click"); } else { alert("marker " + index + " does not exist"); } } }); } var infowindow = new google.maps.InfoWindow( { size: new google.maps.Size(150, 50) }); google.maps.event.addDomListener(window, "load", initialize); And here is an example of my XML formatting <marker lat="42.196175" lng="-121.699224" html="This is the information about 104" iconimage="/about/map/images/104.png" label="104" />
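
    For what it is worth, both symptoms described above (every sidebar entry showing the URL parameter, and "id is undefined" when no parameter is passed) point at createMarker using the global id parsed from the query string instead of the per-marker label read from the XML. A minimal sketch of the likely fix, keeping the rest of the function as posted:

        function createMarker(point, icon, label, html) {
          var marker = new google.maps.Marker({
            position: point,
            map: map,
            title: label,
            icon: icon,
            zIndex: Math.round(point.lat() * -100000) << 5
          });
          marker.id = label;                        // was: marker.id = id (the URL parameter)
          marker.index = gmarkers.length;
          google.maps.event.addListener(marker, 'click', function () {
            lastmarker = { id: marker.id, index: marker.index };
            infowindow.setContent(html);
            infowindow.open(map, marker);
          });
          gmarkers.push(marker);
          idmarkers[label.toLowerCase()] = marker;  // was: idmarkers[id.toLowerCase()]
          side_bar_html += '<a href="javascript:myclick(' + (gmarkers.length - 1) + ')">'
                         + label + '<\/a><br>';     // sidebar text comes from the XML label
        }

    With each marker keyed on its own label, the ?id= lookup at the bottom of initialize() still works, and the page no longer throws when it is opened without any parameters.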

    Read the article

  • Dojo - How to position tooltip close to text?

    - by user244394
    Like the title says i want to be able to display the tooltip close to the text, currently it is displayed far away in the cell. Tobe noted the tooltip positions correctly for large text, only fails for small text. In DOJO How can i position the tooltip close to the text? I have this bit of code snippet that display the tooltip in the grid cells. Screenshot attached, html <div class="some_app claro"></div> ... com.c.widget.EnhancedGrid = function ( aParent, options ) { var grid, options; this.theParentApp = aParent; dojo.require("dojox.grid.EnhancedGrid"); dojo.require("dojox.grid.enhanced.plugins.Menu"); dojo.require("dojox.grid.enhanced.plugins.Selector"); dojo.require("dojox.grid.enhanced.plugins.Pagination"); dojo.require("dojo.store.Memory"); dojo.require("dojo.data.ObjectStore"); dojo.require("dojo._base.xhr"); dojo.require("dojo.domReady!"); dojo.require("dojo.date.locale"); dojo.require("dojo._base.connect"); dojo.require("dojox.data.JsonRestStore"); dojo.require("dojo.data.ItemFileReadStore"); dojo.require("dijit.Menu"); dojo.require("dijit.MenuItem"); dojo.require('dijit.MenuSeparator'); dojo.require('dijit.CheckedMenuItem'); dojo.require('dijit.Tooltip'); dojo.require('dojo/query'); dojo.require("dojox.data.QueryReadStore"); // main initialization function this.init = function( options ) { var me = this; // default options var defaultOptions = { widgetName: ' Enhancedgrid', render: true, // immediately render the grid draggable: true, // disables column dragging containerNode: false, // the Node to hold the Grid (optional) mashupUrl: false, // the URL of the mashup (required) rowsPerPage: 20, //Default number of items per page columns: false, // columns (required) width: "100%", // width of grid height: "100%", // height of grid rowClass: function (rowData) {}, onClick: function () {}, headerMenu: false, // adding a menu pop-up for the header. selectedRegionMenu: false, // adding a menu pop-up for the rows. menusObject: false, //object to start-up the menus using the plug-in. sortInfo: false, // The column default sort infiniteScrolling: false //If true, the enhanced grid will have an infinite scrolling. }; // merge user provided options me.options = jQuery.extend( {}, defaultOptions, options ); // check we have minimum required options if ( ! me.options.mashupUrl ){ throw ("You must supply a mashupUrl"); } if ( ! me.options.columns ){ throw ("You must supply columns"); } // make the column for formatting based on its data type. me.preProcessColumns(); // create the Contextual Menu me.createMenu(); // create the grid object and return me.createGrid(); }; // Loading the data to the grid. 
this.loadData = function () { var me = this; if (!me.options.infiniteScrolling) { var xhrArgs = { url: me.options.mashupUrl, handleAs: "json", load: function( data ){ var store = new dojo.data.ItemFileReadStore({ data : {items : eval( "data."+me.options.dataRoot)}}); store.fetch({ onComplete : function(items, request) { if (me.grid.selection !== null) { me.grid.selection.clear(); } me.grid.setStore(store); }, onError : function(error) { me.onError(error); } }); }, error: function (error) { me.onError(error); } }; dojo.xhrGet(xhrArgs); } else { dojo.declare('NotificationQueryReadStore', dojox.data.QueryReadStore, { // // hacked -- override to map to proper data structure // from mashup // _xhrFetchHandler : function(data, request, fetchHandler, errorHandler) { // // TODO: need to have error handling here when // data has "error" data structure // // // remap data object before process by super method // var dataRoot = eval ("data."+me.options.dataRoot); var dataTotal = eval ("data."+me.options.dataTotal); data = { numRows : dataTotal, items : dataRoot }; // call to super method to process mapped data and // set rowcount // for proper display this.inherited(arguments); } }); var queryStore = new NotificationQueryReadStore({ url : me.options.mashupUrl, urlPreventCache: true, requestMethod : "get", onError: function (error) { me.onError(error); } }); me.grid.setStore(queryStore); } }; this.preProcessColumns = function () { var me = this; var options = me.options; for (i=0;i<this.options.columns.length;i++) { if (this.options.columns[i].formatter==null) { switch (this.options.columns[i].datatype) { case "string": this.options.columns[i].formatter = me.formatString; break; case "date": this.options.columns[i].formatter = me.formatDate; var todayDate = new Date(); var gmtTime = c.util.Date.parseDate(todayDate.toString()).toString(); var gmtval = gmtTime.substring(gmtTime.indexOf('GMT'),(gmtTime.indexOf('(')-1)); this.options.columns[i].name = this.options.columns[i].name + " ("+gmtval+")"; } } if (this.options.columns[i].sortDefault) { me.options.sortInfo = i+1; } } }; // create GRID object using supplied options this.createGrid = function () { var me = this; var options = me.options; // create a new grid this.grid = new dojox.grid.EnhancedGrid ({ width: options.width, height: options.height, query: { id: "*" }, keepSelection: true, formatterScope: this, structure: options.columns, columnReordering: options.draggable, rowsPerPage: options.rowsPerPage, //sortInfo: options.sortInfo, plugins : { menus: options.menusObject, selector: {"row":"multi", "cell": "disabled" }, }, //Allow the user to decide if a column is sortable by setting sortable = true / false canSort: function(col) { if (options.columns[Math.abs(col)-1].sortable) return true; else return false; }, //Change the row colors depending on severity column. 
onStyleRow: function (row) { var grid = me.grid; var item = grid.getItem(row.index); if (item && options.rowClass(item)) { row.customClasses += " " +options.rowClass(item); if (grid.selection.selectedIndex == row.index) { row.customClasses += " dojoxGridRowSelected"; } grid.focus.styleRow(row); grid.edit.styleRow(row); } }, onCellMouseOver: function (e){ // var pos = dojo.position(this, true); // alert(pos); console.log( e.rowIndex +" cell node :"+ e.cellNode.innerHTML); // var pos = dojo.position(this, true); console.log( " pos :"+ e.pos); if (e.cellNode.innerHTML!="") { dijit.showTooltip(e.cellNode.innerHTML, e.cellNode); } }, onCellMouseOut: function (e){ dijit.hideTooltip(e.cellNode); }, onHeaderCellMouseOver: function (e){ if (e.cellNode.innerHTML!="") { dijit.showTooltip(e.cellNode.innerHTML, e.cellNode); } }, onHeaderCellMouseOut: function (e){ dijit.hideTooltip(e.cellNode); }, }); // ADDED CODE FOR TOOLTIP var gridTooltip = new Tooltip({ connectId: "grid1", selector: "td", position: ["above"], getContent: function(matchedNode){ var childNode = matchedNode.childNodes[0]; if(childNode.nodeType == 1 && childNode.className == "user") { this.position = ["after"]; this.open(childNode); return false; } if(matchedNode.className && matchedNode.className == "user") { this.position = ["after"]; } else { this.position = ["above"]; } return matchedNode.textContent; } }); ... //Construct the grid this.buildGrid = function(){ var datagrid = new com.emc.widget.EnhancedGrid(this,{ Url: "/dge/api/-resultFormat=json&id="+encodeURIComponent(idUrl), dataRoot: "Root.ATrail", height: '100%', columns: [ { name: 'Time', field: 'Time', width: '20%', datatype: 'date', sortable: true, searchable: true, hidden: false}, { name: 'Type', field: 'Type', width: '20%', datatype: 'string', sortable: true, searchable: true, hidden: false}, { name: 'User ID', field: 'UserID', width: '20%', datatype: 'string', sortable: true, searchable: true, hidden: false } ] }); this.grid = datagrid; };
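
    One way to get the tooltip to sit next to short text, sketched on the assumption that the cell markup may be rewritten on hover (the cellText class name is made up for the sketch): wrap the cell contents in a span sized to the text and hand that span to dijit.showTooltip as the around-node, instead of the whole td.

        onCellMouseOver: function (e) {
          if (e.cellNode.innerHTML !== "") {
            // wrap once, so the anchor node is only as wide as the text itself
            if (!dojo.query("span.cellText", e.cellNode).length) {
              e.cellNode.innerHTML =
                '<span class="cellText">' + e.cellNode.innerHTML + '</span>';
            }
            var anchor = dojo.query("span.cellText", e.cellNode)[0];
            dijit.showTooltip(anchor.innerHTML, anchor, ["above", "after"]);
          }
        },
        onCellMouseOut: function (e) {
          var anchor = dojo.query("span.cellText", e.cellNode)[0];
          dijit.hideTooltip(anchor || e.cellNode);
        },

    The same anchor node works for onHeaderCellMouseOver, and with it in place the separate Tooltip instance connected to "grid1" is probably no longer needed.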

    Read the article

  • how to debug "deep" crashes in Android?

    - by eerok512
    Hi All, I've been trying to debug an android crash that is occurring without a Java Stack Trace... Java Stack Trace bugs are very easy for me to fix... but this bug I'm getting seems to be crashing inside the "NDK" or whatever it is the deep internals of Android are called... I've made no modifications to the NDK btw... I just dunno what else to call that layer hehe. Anyway I'm mainly looking for advice on deep-debug methods, rather than help with this specific problem... because I doubt I can post all the source code involved... so really I just need to know how to set breakpoints at the deep layers or whatever other methods there are to trace deep-crashes to their source... so I will briefly describe the bug and then post a LogCat. I have an app with 7 Activities Activity_INTRO Activity_EULA Activity_MAIN Activity_Contact Activity_News Activity_Library Activity_More INTRO is the initiating one... it fades in some company logos... after displaying them for a set time it jumps to the EULA activity... after the user accepts the EULA, it jumps to MAIN... MAIN then creates a TabHost and populates it with the 4 remaining activities now heres the thing... when I click on say, the More tab of the TabHost, the app pauses for a few seconds and then hard-crashes... no java stack trace, but an actual ASM level trace with the registers and IP and stack... the same thing occurs no matter which tab I select, Contact, News, Library, More... all of them crash with the same hard-crash if however I set the manifest to start the app at Activity_MAIN, bypassing the INTRO and EULA, then these crashes do not occur... so something is lingering from those opening activities that is somehow hosing the TabHost'ed Activities... and I'm wondering what the hell that could be... because I'm using finish() on those activites when they need to jump... in fact here is how I'm doing it let me know if you see any bugs: when jumping from INTRO to EULA I do: //Display the EULA Intent newIntent = new Intent (avi, Activity_EULA.class); startActivity (newIntent); finish(); and EULA to MAIN: Intent newIntent = new Intent (this, Activity_Main.class); startActivity (newIntent); finish(); anyway, here is the hard crash log... please let me know if there is some way I can reverse engineer either /system/lib/libcutils.so or /system/lib/libandroid_runtime.so, because I think the crash is happening in one of them... i think its happening in the libandroid_runtime in fact.... 
anyway on to the log: 12-25 00:56:07.322: INFO/DEBUG(551): *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** 12-25 00:56:07.332: INFO/DEBUG(551): Build fingerprint: 'generic/sdk/generic/:1.5/CUPCAKE/150240:eng/test-keys' 12-25 00:56:07.362: INFO/DEBUG(551): pid: 722, tid: 723 >>> com.killerapps.chokes <<< 12-25 00:56:07.362: INFO/DEBUG(551): signal 11 (SIGSEGV), fault addr 00000004 12-25 00:56:07.362: INFO/DEBUG(551): r0 00000004 r1 40021800 r2 00000004 r3 ad3296c5 12-25 00:56:07.372: INFO/DEBUG(551): r4 00000000 r5 00000000 r6 ad342da5 r7 41039fb8 12-25 00:56:07.372: INFO/DEBUG(551): r8 100ffcb0 r9 41039fb0 10 41e014a0 fp 00001071 12-25 00:56:07.382: INFO/DEBUG(551): ip ad35b874 sp 100ffc98 lr ad3296cf pc afb045a8 cpsr 00000010 12-25 00:56:07.552: INFO/DEBUG(551): #00 pc 000045a8 /system/lib/libcutils.so 12-25 00:56:07.572: INFO/DEBUG(551): #01 lr ad3296cf /system/lib/libandroid_runtime.so 12-25 00:56:07.582: INFO/DEBUG(551): stack: 12-25 00:56:07.582: INFO/DEBUG(551): 100ffc58 00000000 12-25 00:56:07.592: INFO/DEBUG(551): 100ffc5c 001c5278 [heap] 12-25 00:56:07.602: INFO/DEBUG(551): 100ffc60 000000da 12-25 00:56:07.602: INFO/DEBUG(551): 100ffc64 0016c778 [heap] 12-25 00:56:07.602: INFO/DEBUG(551): 100ffc68 100ffcc8 12-25 00:56:07.602: INFO/DEBUG(551): 100ffc6c 001c5278 [heap] 12-25 00:56:07.612: INFO/DEBUG(551): 100ffc70 427d1ac0 12-25 00:56:07.612: INFO/DEBUG(551): 100ffc74 000000c1 12-25 00:56:07.612: INFO/DEBUG(551): 100ffc78 40021800 12-25 00:56:07.612: INFO/DEBUG(551): 100ffc7c 000000c2 12-25 00:56:07.612: INFO/DEBUG(551): 100ffc80 00000000 12-25 00:56:07.612: INFO/DEBUG(551): 100ffc84 00000000 12-25 00:56:07.622: INFO/DEBUG(551): 100ffc88 00000000 12-25 00:56:07.622: INFO/DEBUG(551): 100ffc8c 00000000 12-25 00:56:07.622: INFO/DEBUG(551): 100ffc90 df002777 12-25 00:56:07.632: INFO/DEBUG(551): 100ffc94 e3a070ad 12-25 00:56:07.632: INFO/DEBUG(551): #00 100ffc98 00000000 12-25 00:56:07.632: INFO/DEBUG(551): 100ffc9c ad3296cf /system/lib/libandroid_runtime.so 12-25 00:56:07.632: INFO/DEBUG(551): 100ffca0 100ffcd0 12-25 00:56:07.642: INFO/DEBUG(551): 100ffca4 ad342db5 /system/lib/libandroid_runtime.so 12-25 00:56:07.642: INFO/DEBUG(551): 100ffca8 410a79d0 12-25 00:56:07.642: INFO/DEBUG(551): 100ffcac ad00e3b8 /system/lib/libdvm.so 12-25 00:56:07.652: INFO/DEBUG(551): 100ffcb0 410a79d0 12-25 00:56:07.652: INFO/DEBUG(551): 100ffcb4 0016bac0 [heap] 12-25 00:56:07.662: INFO/DEBUG(551): 100ffcb8 ad342da5 /system/lib/libandroid_runtime.so 12-25 00:56:07.662: INFO/DEBUG(551): 100ffcbc 40021800 12-25 00:56:07.662: INFO/DEBUG(551): 100ffcc0 410a79d0 12-25 00:56:07.662: INFO/DEBUG(551): 100ffcc4 afe39dd0 12-25 00:56:07.662: INFO/DEBUG(551): 100ffcc8 100ffcd0 12-25 00:56:07.662: INFO/DEBUG(551): 100ffccc ad040a8d /system/lib/libdvm.so 12-25 00:56:07.672: INFO/DEBUG(551): 100ffcd0 41039fb0 12-25 00:56:07.672: INFO/DEBUG(551): 100ffcd4 420000f8 12-25 00:56:07.672: INFO/DEBUG(551): 100ffcd8 ad342da5 /system/lib/libandroid_runtime.so 12-25 00:56:07.672: INFO/DEBUG(551): 100ffcdc 100ffd48 12-25 00:56:07.852: DEBUG/dalvikvm(722): GC freed 367 objects / 15144 bytes in 210ms 12-25 00:56:08.081: DEBUG/InetAddress(722): www.akillerapp.com: 74.86.47.202 (family 2, proto 6) 12-25 00:56:08.242: DEBUG/dalvikvm(722): GC freed 62 objects / 2328 bytes in 122ms 12-25 00:56:08.771: DEBUG/dalvikvm(722): GC freed 245 objects / 11744 bytes in 179ms 12-25 00:56:09.131: INFO/ActivityManager(577): Process com.killerapps.chokes (pid 722) has died. 
12-25 00:56:09.171: INFO/WindowManager(577): WIN DEATH: Window{43719320 com.killerapps.chokes/com.killerapps.chokes.Activity_Main paused=false} 12-25 00:56:09.251: INFO/DEBUG(551): debuggerd committing suicide to free the zombie! 12-25 00:56:09.291: DEBUG/Zygote(553): Process 722 terminated by signal (11) 12-25 00:56:09.311: INFO/DEBUG(781): debuggerd: Jun 30 2009 17:00:51 12-25 00:56:09.331: WARN/InputManagerService(577): Got RemoteException sending setActive(false) notification to pid 722 uid 10020
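
    On the general question of how to dig into a native SIGSEGV like the tombstone above: the usual route is to symbolize the pc/lr addresses against unstripped libraries. A rough sketch, where the paths are only placeholders for your own NDK install and build output:

        # pipe the device log through ndk-stack so the raw frames get symbol names
        adb logcat | $ANDROID_NDK/ndk-stack -sym obj/local/armeabi

        # or resolve a single frame by hand against an unstripped .so
        $ANDROID_NDK/toolchains/<toolchain>/prebuilt/<host>/bin/arm-linux-androideabi-addr2line \
            -f -C -e obj/local/armeabi/libyourlib.so 0x000045a8

    For a pure-Java app like the one described, though, the frames in libcutils and libandroid_runtime are framework code running on the app's behalf, so it is usually quicker to bisect the Java side (here, whatever state the INTRO and EULA activities leave behind before finish()) than to chase the native trace itself.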

    Read the article

  • Floats will not align, stay staggered, can't find a solution?

    - by Sarah Proper
    What I am trying to do is build a multi column layout. The main two sections are divided 2/3 to 1/3 and inside the 2/3 column is divided 2/3 1/3 as well. My problem is that my floats will not align nicely with each other, choosing instead to stagger like stairs. I have tried declaring the widths smaller, floating them individually, including in the float sections display:block,inline, or inline-block and nothing seems to be working. I am getting really frustrated and would appreciate any help! Thanks! <div class="wrapper"> <div class="width50" style="float:left;"> <h1>Our Mission:</h1> <p> Bacon ipsum dolor sit amet swine spare ribs pork meatloaf pancetta filet mignon. Rump frankfurter pork belly prosciutto beef boudin andouille pig pork chop meatball ham drumstick filet mignon. Strip steak flank shank pig, tongue tri-tip jowl leberkas sirloin brisket t-bone. Ground round spare ribs salami capicola filet mignon. Capicola turkey t-bone corned beef sausage ham hock. Corned beef capicola leberkas pork chop, swine pastrami drumstick. Frankfurter fatback bacon jowl short loin, jerky pancetta bresaola corned beef shoulder drumstick ball tip tri-tip.</p> <div class="width50 float-left"> <img src="@Url.StaticContent(Links.Content.images.map_homepage_png)" alt="Map" /> </div> <div class="width33 float-right"> <img src="@Url.StaticContent(Links.Content.images.address_line_text_png)" alt="addressline" /> <br /> <h3>address</h3> <b>405 Empire Boulevard<br /> Rochester, NY 14609 </b> </div> </div> <div class="width33" style="float:right;"> <h1>Events</h1> <ul class="events"> <li> <h2>Fall Volunteer Festival</h2> <p> <b>october 6<br /> 10 am to 3pm </b> </p> <p> come to our town location for some fun activities for family and friends! </p> </li> <li> <h2>Fall Volunteer Festival</h2> <p> <b>october 6<br /> 10 am to 3pm </b> </p> <p> come to our town location for some fun activities for family and friends! </p> </li> <li> <h2>Fall Volunteer Festival</h2> <p> <b>october 6<br /> 10 am to 3pm </b> </p> <p> come to our town location for some fun activities for family and friends! 
</p> </li> </ul> </div> </div> </div> and the css: .clearfix:before, .clearfix:after, .grid-block:before, .grid-block:after, .deepest:before, .deepest:after { content: ""; display: table; } .clearfix:after, .grid-block:after, .deepest:after { clear: both; } .grid-box { float: left; } /* Grid Units */ .width16 { width: 16.666%; } .width20 { width: 20%; } .width25 { width: 25%; } .width33 { width: 39.333%; } .width40 { width: 40%; } .width50 { width: 50%; } .width60 { width: 60%; } .width66 { width: 66.666%; } .width75 { width: 75%; } .width80 { width: 80%; } .width100 { width: 100%; } .width16, .width20, .width25, .width33, .width40, .width50, .width60, .width66, .width75, .width80, .width100 { -moz-box-sizing: border-box; -webkit-box-sizing: border-box; box-sizing: border-box; padding: 5px 10px 5px 10px; } /* Create new Block Formatting Contexts */ .bfc-o { overflow: hidden; } .bfc-f { -moz-box-sizing: border-box; -webkit-box-sizing: border-box; box-sizing: border-box; width: 100%; float: left; } /* Align Boxes */ .float-left { float: left; } .float-right { float: right; } /* Grid Gutter */ .grid-gutter.grid-block { margin: 0 -15px; } .grid-gutter > .grid-box > * { margin: 0 15px; } .grid-gutter > .grid-box > * > :first-child { margin-top: 0; } .grid-gutter > .grid-box > * > :last-child { margin-bottom: 0; } /* Layout Defaults --------------------------------------------------------------------------------------- -------------*/ /* Center Page */ .wrapper { -moz-box-sizing: border-box; -webkit-box-sizing: border-box; box-sizing: border-box; margin: auto; } /* Header */ #header { position: relative; padding-top: 10px; } #toolbar .float-left .module, #toolbar .float-left > time { margin: 0 15px 0 0; float: left; } #toolbar .float-right .module { margin: 0 0 0 15px; float: right; } #headerbar .module { max-width: 300px; margin-right: 0; float: right; } #logo, #logo > img, #menu { float: left; } #search { float: right; } #banner { position: absolute; top: 0; right: -200px; } /* Footer */ #footer { position: relative; text-align: center; } /* Absolute */ #absolute { position: absolute; z-index: 15; width: 100%; }
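
    A few things in the posted CSS are the usual suspects for a staircase of floats, sketched here on the assumption that .width33 was meant to be a one-third column: the 39.333% width, the fact that .wrapper never receives the clearfix rules (they are written for .clearfix and .grid-block), and images that can grow wider than the column floating them.

        .width33 { width: 33.333%; }                      /* was 39.333% */

        .wrapper:after { content: ""; display: table; clear: both; }

        .width50 img,
        .width33 img { max-width: 100%; height: auto; }   /* keep images inside their columns */

    With the widths adding up to less than 100% of their container and the images constrained, the left/right pairs should sit level instead of stepping down.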

    Read the article

  • Add UIView and UILabel to UICollectionViewCell. Then Segue based on clicked cell index

    - by JetSet
    I am new to collection views in Objective-C. Can anyone tell me why I can't see my UILabel embedded in the transparent UIView and the best way to resolve. I want to also segue from the cell to several various UIViewControllers based on the selected index cell. I am using GitHub project https://github.com/mayuur/MJParallaxCollectionView Overall, in MJRootViewController.m I wanted to add a UIView with a transparency and a UILabel with details of the cell from a array. MJCollectionViewCell.h // MJCollectionViewCell.h // RCCPeakableImageSample // // Created by Mayur on 4/1/14. // Copyright (c) 2014 RCCBox. All rights reserved. // #import <UIKit/UIKit.h> #define IMAGE_HEIGHT 200 #define IMAGE_OFFSET_SPEED 25 @interface MJCollectionViewCell : UICollectionViewCell /* image used in the cell which will be having the parallax effect */ @property (nonatomic, strong, readwrite) UIImage *image; /* Image will always animate according to the imageOffset provided. Higher the value means higher offset for the image */ @property (nonatomic, assign, readwrite) CGPoint imageOffset; //@property (nonatomic,readwrite) UILabel *textLabel; @property (weak, nonatomic) IBOutlet UILabel *textLabel; @property (nonatomic,readwrite) NSString *text; @property(nonatomic,readwrite) CGFloat x,y,width,height; @property (nonatomic,readwrite) NSInteger lineSpacing; @property (nonatomic, strong) IBOutlet UIView* overlayView; @end MJCollectionViewCell.m // // MJCollectionViewCell.m // RCCPeakableImageSample // // Created by Mayur on 4/1/14. // Copyright (c) 2014 RCCBox. All rights reserved. // #import "MJCollectionViewCell.h" @interface MJCollectionViewCell() @property (nonatomic, strong, readwrite) UIImageView *MJImageView; @end @implementation MJCollectionViewCell - (instancetype)initWithFrame:(CGRect)frame { self = [super initWithFrame:frame]; if (self) [self setupImageView]; return self; } - (id)initWithCoder:(NSCoder *)aDecoder { self = [super initWithCoder:aDecoder]; if (self) [self setupImageView]; return self; } /* // Only override drawRect: if you perform custom drawing. // An empty implementation adversely affects performance during animation. 
- (void)drawRect:(CGRect)rect { // Drawing code } */ #pragma mark - Setup Method - (void)setupImageView { // Clip subviews self.clipsToBounds = YES; // Add image subview self.MJImageView = [[UIImageView alloc] initWithFrame:CGRectMake(self.bounds.origin.x, self.bounds.origin.y, self.bounds.size.width, IMAGE_HEIGHT)]; self.MJImageView.backgroundColor = [UIColor redColor]; self.MJImageView.contentMode = UIViewContentModeScaleAspectFill; self.MJImageView.clipsToBounds = NO; [self addSubview:self.MJImageView]; } # pragma mark - Setters - (void)setImage:(UIImage *)image { // Store image self.MJImageView.image = image; // Update padding [self setImageOffset:self.imageOffset]; } - (void)setImageOffset:(CGPoint)imageOffset { // Store padding value _imageOffset = imageOffset; // Grow image view CGRect frame = self.MJImageView.bounds; CGRect offsetFrame = CGRectOffset(frame, _imageOffset.x, _imageOffset.y); self.MJImageView.frame = offsetFrame; } - (void)setText:(NSString *)text{ _text=text; if (!self.textLabel) { CGFloat realH=self.height*2/3-self.lineSpacing; CGFloat latoA=realH/3; // self.textLabel=[[UILabel alloc] initWithFrame:CGRectMake(10,latoA/2, self.width-20, realH)]; self.textLabel.layer.anchorPoint=CGPointMake(.5, .5); self.textLabel.font=[UIFont fontWithName:@"HelveticaNeue-ultralight" size:38]; self.textLabel.numberOfLines=3; self.textLabel.textColor=[UIColor whiteColor]; self.textLabel.shadowColor=[UIColor blackColor]; self.textLabel.shadowOffset=CGSizeMake(1, 1); self.textLabel.transform=CGAffineTransformMakeRotation(-(asin(latoA/(sqrt(self.width*self.width+latoA*latoA))))); [self addSubview:self.textLabel]; } self.textLabel.text=text; } @end MJViewController.h // // MJViewController.h // ParallaxImages // // Created by Mayur on 4/1/14. // Copyright (c) 2014 sky. All rights reserved. // #import <UIKit/UIKit.h> @interface MJRootViewController : UIViewController{ NSInteger choosed; } @end MJViewController.m // // MJViewController.m // ParallaxImages // // Created by Mayur on 4/1/14. // Copyright (c) 2014 sky. All rights reserved. // #import "MJRootViewController.h" #import "MJCollectionViewCell.h" @interface MJRootViewController () <UICollectionViewDataSource, UICollectionViewDelegate, UIScrollViewDelegate> @property (weak, nonatomic) IBOutlet UICollectionView *parallaxCollectionView; @property (nonatomic, strong) NSMutableArray* images; @end @implementation MJRootViewController - (void)viewDidLoad { [super viewDidLoad]; // Do any additional setup after loading the view, typically from a nib. //self.navigationController.navigationBarHidden=YES; // Fill image array with images NSUInteger index; for (index = 0; index < 14; ++index) { // Setup image name NSString *name = [NSString stringWithFormat:@"image%03ld.jpg", (unsigned long)index]; if(!self.images) self.images = [NSMutableArray arrayWithCapacity:0]; [self.images addObject:name]; } [self.parallaxCollectionView reloadData]; } - (void)didReceiveMemoryWarning { [super didReceiveMemoryWarning]; // Dispose of any resources that can be recreated. 
} #pragma mark - UICollectionViewDatasource Methods - (NSInteger)collectionView:(UICollectionView *)collectionView numberOfItemsInSection:(NSInteger)section { return self.images.count; } - (UICollectionViewCell *)collectionView:(UICollectionView *)collectionView cellForItemAtIndexPath:(NSIndexPath *)indexPath { MJCollectionViewCell* cell = [collectionView dequeueReusableCellWithReuseIdentifier:@"MJCell" forIndexPath:indexPath]; //get image name and assign NSString* imageName = [self.images objectAtIndex:indexPath.item]; cell.image = [UIImage imageNamed:imageName]; //set offset accordingly CGFloat yOffset = ((self.parallaxCollectionView.contentOffset.y - cell.frame.origin.y) / IMAGE_HEIGHT) * IMAGE_OFFSET_SPEED; cell.imageOffset = CGPointMake(0.0f, yOffset); NSString *text; NSInteger index=choosed>=0 ? choosed : indexPath.row%5; switch (index) { case 0: text=@"I am the home cell..."; break; case 1: text=@"I am next..."; break; case 2: text=@"Cell 3..."; break; case 3: text=@"Cell 4..."; break; case 4: text=@"The last cell"; break; default: break; } cell.text=text; cell.overlayView.backgroundColor = [UIColor colorWithWhite:0.0f alpha:0.4f]; //cell.textLabel.text = @"Label showing"; cell.textLabel.font = [UIFont boldSystemFontOfSize:22.0f]; cell.textLabel.textColor = [UIColor whiteColor]; //This is another attempt to display the label by using tags. //UILabel* label = (UILabel*)[cell viewWithTag:1]; //label.text = @"Label works"; return cell; } #pragma mark - UIScrollViewdelegate methods - (void)scrollViewDidScroll:(UIScrollView *)scrollView { for(MJCollectionViewCell *view in self.parallaxCollectionView.visibleCells) { CGFloat yOffset = ((self.parallaxCollectionView.contentOffset.y - view.frame.origin.y) / IMAGE_HEIGHT) * IMAGE_OFFSET_SPEED; view.imageOffset = CGPointMake(0.0f, yOffset); } } @end
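
    Whichever way the cell is registered, the symptoms say textLabel and overlayView are still nil at run time: they are declared as outlets, but nothing in the posted code connects or creates them (the programmatic creation in setText: is commented out), so every font and colour assignment in cellForItemAtIndexPath: silently does nothing. A minimal sketch, assuming the two properties are changed from weak IBOutlet to strong and created in setupImageView (the frames are guesses):

        // at the end of setupImageView in MJCollectionViewCell.m
        // (textLabel and overlayView must be strong for these assignments to be retained)
        self.overlayView = [[UIView alloc] initWithFrame:
            CGRectMake(0, 0, self.bounds.size.width, IMAGE_HEIGHT)];
        self.overlayView.backgroundColor = [UIColor colorWithWhite:0.0f alpha:0.4f];
        [self.contentView addSubview:self.overlayView];

        self.textLabel = [[UILabel alloc] initWithFrame:
            CGRectMake(10, IMAGE_HEIGHT - 40, self.bounds.size.width - 20, 30)];
        self.textLabel.textColor = [UIColor whiteColor];
        [self.contentView addSubview:self.textLabel];

    For the second half of the question, one common pattern is to implement collectionView:didSelectItemAtIndexPath: in MJRootViewController and call performSegueWithIdentifier:sender: there, switching on indexPath.item to choose which storyboard segue to fire.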

    Read the article

  • Drupal Ctools Form Wizard in a Block

    - by Iamjon
    Hi everyone I created a custom module that has a Ctools multi step form. It's basically a copy of http://www.nicklewis.org/using-chaos-tools-form-wizard-build-multistep-forms-drupal-6. The form works. I can see it if I got to the url i made for it. For the life of me I can't get the multistep form to show up in a block. Any clues? /** * Implementation of hook_block() * */ function mycrazymodule_block($op='list', $delta=0, $edit=array()) { switch ($op) { case 'list': $blocks[0]['info'] = t('SFT Getting Started'); $blocks[1]['info'] = t('SFT Contact US'); $blocks[2]['info'] = t('SFT News Letter'); return $blocks; case 'view': switch ($delta){ case '0': $block['subject'] = t('SFT Getting Started Subject'); $block['content'] = mycrazymodule_wizard(); break; case '1': $block['subject'] = t('SFT Contact US Subject'); $block['content'] = t('SFT Contact US content'); break; case '2': $block['subject'] = t('SFT News Letter Subject'); $block['content'] = t('SFT News Letter cONTENT'); break; } return $block; } } /** * Implementation of hook_menu(). */ function mycrazymodule_menu() { $items['hellocowboy'] = array( 'title' = 'Two Step Form', 'page callback' = 'mycrazymodule_wizard', 'access arguments' = array('access content') ); return $items; } /** * menu callback for the multistep form * step is whatever arg one is -- and will refer to the keys listed in * $form_info['order'], and $form_info['forms'] arrays */ function mycrazymodule_wizard() { $step = arg(1); // required includes for wizard $form_state = array(); ctools_include('wizard'); ctools_include('object-cache'); // The array that will hold the two forms and their options $form_info = array( 'id' = 'getting_started', 'path' = "hellocowboy/%step", 'show trail' = FALSE, 'show back' = FALSE, 'show cancel' = false, 'show return' =false, 'next text' = 'Submit', 'next callback' = 'getting_started_add_subtask_next', 'finish callback' = 'getting_started_add_subtask_finish', 'return callback' = 'getting_started_add_subtask_finish', 'order' = array( 'basic' = t('Step 1: Basic Info'), 'lecture' = t('Step 2: Choose Lecture'), ), 'forms' = array( 'basic' = array( 'form id' = 'basic_info_form' ), 'lecture' = array( 'form id' = 'choose_lecture_form' ), ), ); $form_state = array( 'cache name' = NULL, ); // no matter the step, you will load your values from the callback page $getstart = getting_started_get_page_cache(NULL); if (!$getstart) { // set form to first step -- we have no data $step = current(array_keys($form_info['order'])); $getstart = new stdClass(); //create cache ctools_object_cache_set('getting_started', $form_state['cache name'], $getstart); //print_r($getstart); } //THIS IS WHERE WILL STORE ALL FORM DATA $form_state['getting_started_obj'] = $getstart; // and this is the witchcraft that makes it work $output = ctools_wizard_multistep_form($form_info, $step, $form_state); return $output; } function basic_info_form(&$form, &$form_state){ $getstart = &$form_state['getting_started_obj']; $form['firstname'] = array( '#weight' = '0', '#type' = 'textfield', '#title' = t('firstname'), '#size' = 60, '#maxlength' = 255, '#required' = TRUE, ); $form['lastname'] = array( '#weight' = '1', '#type' = 'textfield', '#title' = t('lastname'), '#required' = TRUE, '#size' = 60, '#maxlength' = 255, ); $form['phone'] = array( '#weight' = '2', '#type' = 'textfield', '#title' = t('phone'), '#required' = TRUE, '#size' = 60, '#maxlength' = 255, ); $form['email'] = array( '#weight' = '3', '#type' = 'textfield', '#title' = t('email'), '#required' = TRUE, '#size' = 60, 
'#maxlength' = 255, ); $form['newsletter'] = array( '#weight' = '4', '#type' = 'checkbox', '#title' = t('I would like to receive the newsletter'), '#required' = TRUE, '#return_value' = 1, '#default_value' = 1, ); $form_state['no buttons'] = TRUE; } function basic_info_form_validate(&$form, &$form_state){ $email = $form_state['values']['email']; $phone = $form_state['values']['phone']; if(valid_email_address($email) != TRUE){ form_set_error('Dude you have an error', t('Where is your email?')); } //if (strlen($phone) 0 && !ereg('^[0-9]{1,3}-[0-9]{3}-[0-9]{3,4}-[0-9]{3,4}$', $phone)) { //form_set_error('Dude the phone', t('Phone number must be in format xxx-xxx-nnnn-nnnn.')); //} } function basic_info_form_submit(&$form, &$form_state){ //Grab the variables $firstname =check_plain ($form_state['values']['firstname']); $lastname = check_plain ($form_state['values']['lastname']); $email = check_plain ($form_state['values']['email']); $phone = check_plain ($form_state['values']['phone']); $newsletter = $form_state['values']['newsletter']; //Send the form and Grab the lead id $leadid = send_first_form($lastname, $firstname, $email,$phone, $newsletter); //Put into form $form_state['getting_started_obj']-firstname = $firstname; $form_state['getting_started_obj']-lastname = $lastname; $form_state['getting_started_obj']-email = $email; $form_state['getting_started_obj']-phone = $phone; $form_state['getting_started_obj']-newsletter = $newsletter; $form_state['getting_started_obj']-leadid = $leadid; } function choose_lecture_form(&$form, &$form_state){ $one = 'event 1' $two = 'event 2' $three = 'event 3' $getstart = &$form_state['getting_started_obj']; $form['lecture'] = array( '#weight' = '5', '#default_value' = 'two', '#options' = array( 'one' = $one, 'two' = $two, 'three' = $three, ), '#type' = 'radios', '#title' = t('Select Workshop'), '#required' = TRUE, ); $form['attendees'] = array( '#weight' = '6', '#default_value' = 'one', '#options' = array( 'one' = t('I will be arriving alone'), 'two' =t('I will be arriving with a guest'), ), '#type' = 'radios', '#title' = t('Attendees'), '#required' = TRUE, ); $form_state['no buttons'] = TRUE; } /** * Same idea as previous steps submit * */ function choose_lecture_form_submit(&$form, &$form_state) { $workshop = $form_state['values']['lecture']; $leadid = $form_state['getting_started_obj']-leadid; $attendees = $form_state['values']['attendees']; $form_state['getting_started_obj']-lecture = $workshop; $form_state['getting_started_obj']-attendees = $attendees; send_second_form($workshop, $attendees, $leadid); } /*----PART 3 CTOOLS CALLBACKS -- these usually don't have to be very unique ---------------------- */ /** * Callback generated when the add page process is finished. * this is where you'd normally save. */ function getting_started_add_subtask_finish(&$form_state) { dpm($form_state); $getstart = &$form_state['getting_started_obj']; drupal_set_message('mycrazymodule '.$getstart-name.' 
successfully deployed' ); //Get id // Clear the cache ctools_object_cache_clear('getting_started', $form_state['cache name']); $form_state['redirect'] = 'hellocowboy'; } /** * Callback for the proceed step * */ function getting_started_add_subtask_next(&$form_state) { dpm($form_state); $getstart = &$form_state['getting_started_obj']; $cache = ctools_object_cache_set('getting_started', $form_state['cache name'], $getstart); } /*----PART 4 CTOOLS FORM STORAGE HANDLERS -- these usually don't have to be very unique ---------------------- */ /** * Remove an item from the object cache. */ function getting_started_clear_page_cache($name) { ctools_object_cache_clear('getting_started', $name); } /** * Get the cached changes to a given task handler. */ function getting_started_get_page_cache($name) { $cache = ctools_object_cache_get('getting_started', $name); return $cache; } //Salesforce Functions function send_first_form($lastname, $firstname,$email,$phone, $newsletter){ $send = array("LastName" = $lastname , "FirstName" = $firstname, "Email" = $email ,"Phone" = $phone , "Newsletter__c" =$newsletter ); $sf = salesforce_api_connect(); $response = $sf-client-create(array($send), 'Lead'); dpm($response); return $response-id; } function send_second_form($workshop, $attendees, $leadid){ $send = array("Id" = $leadid , "Number_Of_Pepole__c" = "2" ); $sf = salesforce_api_connect(); $response = $sf-client-update(array($send), 'Lead'); dpm($response, 'the final response'); return $response-id; }
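
    One detail that often keeps a CTools wizard from rendering inside a block: the callback takes its step from arg(1), and on any page other than hellocowboy/* that argument is an unrelated path segment (or empty), so ctools_wizard_multistep_form() is handed a step that is not a key of $form_info['order'] and produces no output. A sketch of the usual adjustment, assuming the rest of the callback stays as posted:

        /**
         * Accept the step as an argument and fall back to the first step,
         * so hook_block() can simply call mycrazymodule_wizard().
         */
        function mycrazymodule_wizard($step = NULL) {
          ctools_include('wizard');
          ctools_include('object-cache');

          // when run as the page callback, the step still comes from the URL
          if (empty($step)) {
            $step = arg(1);
          }

          // ... build $form_info and $form_state exactly as before ...

          if (empty($step) || !isset($form_info['order'][$step])) {
            $step = current(array_keys($form_info['order']));  // falls back to 'basic'
          }

          return ctools_wizard_multistep_form($form_info, $step, $form_state);
        }

    Note as well that the posted snippet has lost its PHP arrows in transit: every bare '=' inside the $items, $form_info and $form arrays should read '=>', or the module will not even parse.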

    Read the article
