Search Results

Search found 26297 results on 1052 pages for 'unit test'.


  • SET game odds simulation (MATLAB)

    - by yuk
    Here is an interesting problem for your weekend. :) I recently found the great card game SET. Briefly, there are 81 cards with four features: symbol (oval, squiggle or diamond), color (red, purple or green), number (one, two or three) and shading (solid, striped or open). The task is to find (from 12 selected cards) a SET of 3 cards, in which each of the four features is either all the same on each card or all different on each card (no 2+1 combination). In my free time I've decided to code it in MATLAB to find a solution and to estimate the odds of having a set in randomly selected cards. Here is the code:

        %% initialization
        K = 12;  % cards to draw
        NF = 4;  % number of features (usually 3 or 4)
        setallcards = unique(nchoosek(repmat(1:3,1,NF),NF),'rows'); % all cards: rows - cards, columns - features
        setallcomb = nchoosek(1:K,3); % index of all combinations of K cards by 3

        %% test
        tic
        NIter = 1e2;   % number of test iterations
        setexists = 0; % test results holder
        % C = progress('init'); % if you have progress function from FileExchange
        for d = 1:NIter
            % C = progress(C,d/NIter);
            % cards for current test
            setdrawncardidx = randi(size(setallcards,1),K,1);
            setdrawncards = setallcards(setdrawncardidx,:);
            % find all sets in current test iteration
            for setcombidx = 1:size(setallcomb,1)
                setcomb = setdrawncards(setallcomb(setcombidx,:),:);
                if all(arrayfun(@(x) numel(unique(setcomb(:,x))), 1:NF)~=2) % test one combination
                    setexists = setexists + 1;
                    break % to find only the first set
                end
            end
        end
        fprintf('Set:NoSet = %g:%g = %g:1\n', setexists, NIter-setexists, setexists/(NIter-setexists))
        toc

    100-1000 iterations are fast, but be careful with more: one million iterations takes about 15 hours on my home computer. Anyway, with 12 cards and 4 features I've got odds of around 13:1 of having a set. This is actually a problem. The instruction book says this number should be 33:1, and it was recently confirmed by Peter Norvig. He provides the Python code, but I didn't test it. So can you find an error?
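    For anyone who wants to cross-check the simulation outside MATLAB, here is a rough C# sketch of the same experiment. It is only a sketch, and it makes one deliberate change worth examining: it deals 12 distinct cards by shuffling the full deck, whereas the randi line above samples with replacement and so can draw the same card twice. It also uses the mod-3 trick (for values in {0,1,2}, a feature is all-same or all-different across 3 cards exactly when the sum is divisible by 3), which is equivalent to the "not exactly 2 distinct values" test above.

        using System;
        using System.Linq;

        class SetOdds
        {
            // a triple is a SET if, for every feature, the three values are
            // all same or all different; for values in {0,1,2} that is exactly
            // "sum divisible by 3"
            static bool AnySet(int[][] cards, int k, int nf)
            {
                for (int a = 0; a < k - 2; a++)
                    for (int b = a + 1; b < k - 1; b++)
                        for (int c = b + 1; c < k; c++)
                        {
                            bool ok = true;
                            for (int f = 0; f < nf && ok; f++)
                                ok = (cards[a][f] + cards[b][f] + cards[c][f]) % 3 == 0;
                            if (ok) return true;
                        }
                return false;
            }

            static void Main()
            {
                const int K = 12, NF = 4, NIter = 100000;
                var rng = new Random();
                // all 3^4 = 81 cards, one base-3 digit per feature
                int[][] deck = Enumerable.Range(0, 81)
                    .Select(n => new[] { n % 3, n / 3 % 3, n / 9 % 3, n / 27 % 3 })
                    .ToArray();

                int setExists = 0;
                for (int iter = 0; iter < NIter; iter++)
                {
                    // partial Fisher-Yates shuffle: the first K entries become
                    // a draw of K distinct cards (no replacement)
                    for (int i = 0; i < K; i++)
                    {
                        int j = i + rng.Next(deck.Length - i);
                        var tmp = deck[i]; deck[i] = deck[j]; deck[j] = tmp;
                    }
                    if (AnySet(deck, K, NF)) setExists++;
                }
                Console.WriteLine("Set:NoSet = {0}:{1}", setExists, NIter - setExists);
            }
        }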

    Read the article

  • Smoke testing a .NET web application

    - by pdr
    I cannot believe I'm the first person to go through this thought process, so I'm wondering if anyone can help me out with it.

    Current situation: developers write a web site, operations deploy it. Once deployed, a developer smoke tests it, to make sure the deployment went smoothly. To me this feels wrong; it essentially means it takes two people to deploy an application, and in our case those two people are on opposite sides of the planet, so timezones come into play and cause havoc. But the fact remains that developers know what the minimum set of tests is, and that may change over time (particularly for the web service portion of our app). Operations, with all due respect to them (and they would say this themselves), are button-pushers who need a set of instructions to follow.

    The manual solution is that we document the test cases and operations follow that document each time they deploy. That sounds painful, plus they may be deploying different versions to different environments (specifically UAT and Production) and may need a different set of instructions for each. On top of this, one of our near-future plans is to have an automated daily deploy environment, so then we'll have to instruct a computer how to deploy a given version of our app. I would dearly like to add to that instructions for how to smoke test the app.

    Now developers are better at documenting instructions for computers than they are for people, so the obvious solution seems to be to use a combination of NUnit (I know these aren't unit tests per se, but it is a built-for-purpose test runner) and either the WatiN or Selenium APIs to run through the obvious browser steps, call the web service, and explain to the operations guys how to run those tests. I can do that; I have mostly done it already, as sketched below.

    But wouldn't it be nice if I could make that process simpler still? At this point, the operations guys and the computer are going to have to know which set of tests relates to which version of the app and tell the NUnit runner which base URL it should point to (say, www.example.com = v3.2 or test.example.com = v3.3). Wouldn't it be nicer if the test runner itself had a way of giving it a base URL and letting it download, say, a zip file, unpack it, and edit a configuration file automatically before running any test fixtures it found in there? Is there an open source app that would do that? Is there a need for one? Is there a solution using something other than NUnit, maybe FitNesse?

    For the record, I'm looking at .NET-based tools first because most of the developers are primarily .NET developers, but we're not married to it. If such a tool exists using other languages to write the tests, we'll happily adapt, as long as there is a test runner that works on Windows.
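    To make that concrete, here is a minimal sketch of the kind of fixture I mean, assuming NUnit plus Selenium's C# WebDriver bindings (the SMOKE_BASE_URL environment variable and the names in it are my own invention, not an existing convention). The point is that the same fixture can be aimed at UAT or Production by whoever, or whatever, runs it:

        using System;
        using NUnit.Framework;
        using OpenQA.Selenium;
        using OpenQA.Selenium.Firefox;

        [TestFixture]
        public class SmokeTests
        {
            private string baseUrl;
            private IWebDriver driver;

            [SetUp]
            public void SetUp()
            {
                // the caller (operations, or an automated deploy) picks the target
                baseUrl = Environment.GetEnvironmentVariable("SMOKE_BASE_URL")
                          ?? "http://test.example.com";
                driver = new FirefoxDriver();
            }

            [TearDown]
            public void TearDown()
            {
                driver.Quit();
            }

            [Test]
            public void HomePageLoads()
            {
                driver.Navigate().GoToUrl(baseUrl);
                // sanity check that the right app answered, not an error page
                Assert.That(driver.Title, Is.Not.Empty);
            }
        }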

    Read the article

  • Difference in DocumentBuilder.parse when using JRE 1.5 and JDK 1.6

    - by dhiller
    Recently, at last, we switched our projects to Java 1.6. When executing the tests I found out that under 1.6 a SAXParseException is not thrown which had been thrown under 1.5. Below is my test code to demonstrate the problem.

        import java.io.StringReader;

        import javax.xml.parsers.DocumentBuilder;
        import javax.xml.parsers.DocumentBuilderFactory;
        import javax.xml.transform.stream.StreamSource;
        import javax.xml.validation.SchemaFactory;

        import org.junit.Test;
        import org.xml.sax.InputSource;
        import org.xml.sax.SAXParseException;

        /**
         * Test class to demonstrate the difference between JDK 1.5 and JDK 1.6.
         *
         * Seen on Linux:
         *
         * <pre>
         * # java version "1.6.0_18"
         * Java(TM) SE Runtime Environment (build 1.6.0_18-b07)
         * Java HotSpot(TM) Server VM (build 16.0-b13, mixed mode)
         * </pre>
         *
         * Seen on OSX:
         *
         * <pre>
         * java version "1.6.0_17"
         * Java(TM) SE Runtime Environment (build 1.6.0_17-b04-248-10M3025)
         * Java HotSpot(TM) 64-Bit Server VM (build 14.3-b01-101, mixed mode)
         * </pre>
         *
         * @author dhiller (creator)
         * @author $Author$ (last editor)
         * @version $Revision$
         * @since 12.03.2010 11:32:31
         */
        public class TestXMLValidation {

            /**
             * Tests the schema validation of an XML against a simple schema.
             *
             * @throws Exception
             *             if an error occurs
             * @throws junit.framework.AssertionFailedError
             *             if a unit test check fails
             */
            @Test(expected = SAXParseException.class)
            public void testValidate() throws Exception {
                final StreamSource schema = new StreamSource(new StringReader(
                    "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"
                    + "<xs:schema xmlns:xs=\"http://www.w3.org/2001/XMLSchema\" "
                    + "elementFormDefault=\"qualified\" xmlns:xsd=\"undefined\">"
                    + "<xs:element name=\"Test\"/>"
                    + "</xs:schema>"));
                final String xml = "<Test42/>";
                final DocumentBuilderFactory newFactory = DocumentBuilderFactory.newInstance();
                newFactory.setSchema(SchemaFactory.newInstance(
                    "http://www.w3.org/2001/XMLSchema").newSchema(schema));
                final DocumentBuilder documentBuilder = newFactory.newDocumentBuilder();
                documentBuilder.parse(new InputSource(new StringReader(xml)));
            }
        }

    When using a 1.5 JVM the test passes; on 1.6 it fails with "Expected exception SAXParseException". The Javadoc of the DocumentBuilderFactory.setSchema(Schema) method says:

    "When errors are found by the validator, the parser is responsible to report them to the user-specified ErrorHandler (or if the error handler is not set, ignore them or throw them), just like any other errors found by the parser itself. In other words, if the user-specified ErrorHandler is set, it must receive those errors, and if not, they must be treated according to the implementation specific default error handling rules."

    The Javadoc of the DocumentBuilder.parse(InputSource) method says:

    By the way: I tried setting an error handler via setErrorHandler, but there still is no exception.

    Now my question: what has changed in 1.6 that prevents the schema validation from throwing a SAXParseException? Is it related to the schema or to the XML that I tried to parse?

    Read the article

  • Problem adding Viewport2DVisual3D from Code

    - by Jeff
    I'm trying to add a Viewport2DVisual3D to a Viewport3D in code, but the visual isn't showing up. Any help understanding why not would be appreciated. The following is the code for the main window. Is it sufficient to just add the Viewport2DVisual3D to the children of the Viewport3D in order for it to be rendered?

        public partial class Window1 : System.Windows.Window
        {
            public Window1()
            {
                InitializeComponent();
                this.Loaded += new RoutedEventHandler(temp);
            }

            public void temp(object sender, RoutedEventArgs e)
            {
                Viewport2DVisual3D test = new Viewport2DVisual3D();
                MeshGeometry3D testGeometry = new MeshGeometry3D();
                Vector3D CameraLookDirection = Main_Target_CameraOR20.LookDirection;

                // Calculate the Positions based on the Camera
                Point3DCollection myPoint3DCollection = new Point3DCollection();
                myPoint3DCollection.Add(new Point3D(-1, 1, 0));
                myPoint3DCollection.Add(new Point3D(-1, -1, 0));
                myPoint3DCollection.Add(new Point3D(1, -1, 0));
                myPoint3DCollection.Add(new Point3D(1, 1, 0));
                testGeometry.Positions = myPoint3DCollection;

                PointCollection myPointCollection = new PointCollection();
                myPointCollection.Add(new Point(0, 0));
                myPointCollection.Add(new Point(0, 1));
                myPointCollection.Add(new Point(1, 1));
                myPointCollection.Add(new Point(1, 0));
                testGeometry.TextureCoordinates = myPointCollection;

                Int32Collection triangleIndicesCollection = new Int32Collection();
                triangleIndicesCollection.Add(0);
                triangleIndicesCollection.Add(1);
                triangleIndicesCollection.Add(2);
                triangleIndicesCollection.Add(2);
                triangleIndicesCollection.Add(3);
                triangleIndicesCollection.Add(0);
                testGeometry.TriangleIndices = triangleIndicesCollection;

                DiffuseMaterial myDiffuseMaterial = new DiffuseMaterial(Brushes.White);
                Viewport2DVisual3D.SetIsVisualHostMaterial(myDiffuseMaterial, true);

                Transform3DGroup myTransform3DGroup = new Transform3DGroup();
                ScaleTransform3D myScaleTransform3D = new ScaleTransform3D();
                myScaleTransform3D.ScaleX = 2;
                myScaleTransform3D.ScaleY = 2;
                myScaleTransform3D.ScaleZ = 2;
                TranslateTransform3D myTranslateTransform3D = new TranslateTransform3D();
                myTranslateTransform3D.OffsetX = -27;
                myTranslateTransform3D.OffsetY = 13;
                myTranslateTransform3D.OffsetZ = 6;
                RotateTransform3D rotateTransform = new RotateTransform3D()
                {
                    Rotation = new AxisAngleRotation3D { Angle = -50, Axis = new Vector3D(0, 1, 0) }
                };
                myTransform3DGroup.Children.Add(myTranslateTransform3D);
                myTransform3DGroup.Children.Add(myScaleTransform3D);
                myTransform3DGroup.Children.Add(rotateTransform);
                test.Transform = myTransform3DGroup;

                Button myButton = new Button();
                myButton.Content = "Test Button";
                test.Material = myDiffuseMaterial;
                test.Geometry = testGeometry;
                test.Visual = myButton;
                ZAM3DViewport3D.Children.Add(test);
            }
        }
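    Two things worth ruling out (assumptions on my part, since the XAML is not shown): a DiffuseMaterial renders black when the scene has no light, and the children of a Transform3DGroup compose in the order they were added, so translating before scaling means the -27/13/6 offset is itself scaled and rotated, possibly pushing the visual out of the camera's view. A sketch of both checks:

        // make sure the scene has a light; a DiffuseMaterial renders black without one
        ModelVisual3D lightVisual = new ModelVisual3D();
        lightVisual.Content = new DirectionalLight(Colors.White, new Vector3D(0, 0, -1));
        ZAM3DViewport3D.Children.Add(lightVisual);

        // transforms apply in the order they are added; scale/rotate first and
        // translate last, so the offset is applied unmodified
        myTransform3DGroup.Children.Clear();
        myTransform3DGroup.Children.Add(myScaleTransform3D);
        myTransform3DGroup.Children.Add(rotateTransform);
        myTransform3DGroup.Children.Add(myTranslateTransform3D);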

    Read the article

  • Help ---- SQL Script

    - by Vinoj Nambiar
    Given this input table:

        Store No  Store Name  Region  Division  Q10 (response)  Q21 (response)
        2345      ABC         North   Test      1               5
        2345      ABC         North   Test      6               3
        2345      ABC         North   Test      4               6

    1) Engaged (%) = responses greater than 4.5:
       3 (total responses greater than 4.5) / 6 (total count) * 100 = 50%

    2) Not engaged (%) = responses less than 2:
       1 (total responses less than 2) / 6 (total count) * 100 = 16.66%

    I should be able to get a table like this:

        Store No  Store Name  Region  Division  Engaged (%)  Disengaged (%)
        2345      ABC         North   Test      50           16.66

    Read the article

  • How to define an extern, C struct returning function in C++ using MSVC?

    - by DK
    The following source file will not compile with the MSVC compiler (v15.00.30729.01):

        /* stest.c */
        #ifdef __cplusplus
        extern "C" {
        #endif

        struct Test;
        extern struct Test make_Test(int x);

        struct Test { int x; };

        extern struct Test make_Test(int x)
        {
            struct Test r;
            r.x = x;
            return r;
        }

        #ifdef __cplusplus
        }
        #endif

    Compiling with cl /c /Tpstest.c produces the following error:

        stest.c(8) : error C2526: 'make_Test' : C linkage function cannot return C++ class 'Test'
        stest.c(6) : see declaration of 'Test'

    Compiling without /Tp (which tells cl to treat the file as C++) works fine. The file also compiles fine in DigitalMars C and GCC (from MinGW) in both C and C++ modes. I also used -ansi -pedantic -Wall with GCC and it had no complaints.

    For reasons I will go into below, we need to compile this file as C++ for MSVC (not for the others), but with functions being compiled as C. In essence, we want a normal C compiler... except for about six lines. Is there a switch or attribute or something I can add that will allow this to work?

    The code in question (though not the above; that's just a reduced example) is being produced by a code generator. As part of this, we need to be able to generate floating point NaNs and infinities as constants (long story), meaning we have to compile with MSVC in C++ mode in order to actually do this. We only found one solution that works, and it only works in C++ mode. We're wrapping the code in extern "C" {...} because we want to control the mangling and calling convention so that we can interface with existing C code... also because I trust C++ compilers about as far as I could throw a smallish department store.

    I also tried wrapping just the reinterpret_cast line in extern "C++" {...}, but of course that doesn't work. Pity.

    There is a potential solution I found which requires reordering the declarations such that the full struct definition comes before the function forward declaration, but this is very inconvenient due to the way the codegen is performed, so I'd really like to avoid having to go down that road if I can.

    Read the article

  • File Output using Gforth

    - by sheepez
    As a first project I have been writing a short program to render the Mandelbrot fractal. I have got to the point of trying to output my results to a file (e.g. .bmp or .ppm) and got stuck. I have not really found any examples of exactly what I am trying to do, but I have found two examples of code to copy from one file to another. The examples in the Gforth documentation (Section 3.27) did not work for me (WinXP); in fact they seemed to open and create files but not write to files properly. This is the Gforth documentation example that copies the contents of one file to another:

        0 Value fd-in
        0 Value fd-out
        : open-input ( addr u -- )  r/o open-file throw to fd-in ;
        : open-output ( addr u -- )  w/o create-file throw to fd-out ;

        s" foo.in" open-input
        s" foo.out" open-output

        : copy-file ( -- )
          begin
            line-buffer max-line fd-in read-line throw
          while
            line-buffer swap fd-out write-line throw
          repeat ;

    I found this example (http://rosettacode.org/wiki/File_IO#Forth) which does work. The main problem is that I can't isolate the part that writes to a file and have it still work. The main confusion is that >r doesn't seem to consume TOS as I might expect.

        : copy-file2 ( a1 n1 a2 n2 -- )
          r/o open-file throw >r
          w/o create-file throw r>
          begin
            pad maxstring 2 pick read-file throw ?dup
          while
            pad swap 3 pick write-file throw
          repeat
          close-file throw
          close-file throw ;

        \ Invoke it like this:
        s" output.txt" s" input.txt" copy-file2

    I would be very grateful if someone could explain exactly how the open-file, create-file, read-file and write-file words actually work, as my investigation keeps resulting in somewhat bizarre stacks. Any clues as to why the Gforth examples do not work might help too. In summary, I want to output from Gforth to a file and so far have been thwarted. Can anyone offer any help?

    Thank you Vijay, I think that I understand the example that you gave. However when I try to use something like this (which I think is similar):

        0 value test-file
        : write-test
          s" testfile.out" w/o create-file throw to test-file
          s" test text" test-file write-line ;

    I get ok but nothing is put into the file; have I made a mistake?

    It seems that the problem was due to not flushing the relevant buffers or explicitly closing the file. Adding something like test-file flush-file throw or test-file close-file throw between write-line and ; makes it work. Thanks again Vijay for helping.

    Read the article

  • Problem updating through LINQtoSQL in MVC application using StructureMap, Repository Pattern and UoW

    - by matt
    I have an ASP.NET MVC application using LINQ to SQL for data access. I am trying to use the Repository and Unit of Work patterns, with a service layer consuming the repositories and unit of work. I am experiencing a problem when attempting to perform updates on a particular repository. My application architecture is as follows.

    My service class:

        public class MyService
        {
            private IRepositoryA _RepositoryA;
            private IRepositoryB _RepositoryB;
            private IUnitOfWork _unitOfWork;

            public MyService(IRepositoryA ARepositoryA, IRepositoryB ARepositoryB, IUnitOfWork AUnitOfWork)
            {
                _unitOfWork = AUnitOfWork;
                _RepositoryA = ARepositoryA;
                _RepositoryB = ARepositoryB;
            }

            public void PerformActionOnObject(Guid AID)
            {
                MyObject obj = _RepositoryA.GetRecords()
                                           .WithID(AID);
                obj.SomeProperty = "Changed to new value";
                _RepositoryA.UpdateRecord(obj);
                _unitOfWork.Save();
            }
        }

    Repository interface:

        public interface IRepositoryA
        {
            IQueryable<MyObject> GetRecords();
            bool UpdateRecord(MyObject obj);
        }

    Repository LINQ to SQL implementation:

        public class LINQtoSQLRepositoryA : IRepositoryA
        {
            private MyDataContext _DBContext;

            public LINQtoSQLRepositoryA(IUnitOfWork AUnitOfWork)
            {
                _DBContext = AUnitOfWork as MyDataContext;
            }

            public IQueryable<MyObject> GetRecords()
            {
                return from records in _DBContext.MyTable
                       select new MyObject
                       {
                           ID = records.ID,
                           SomeProperty = records.SomeProperty
                       };
            }

            public bool UpdateRecord(MyObject AObj)
            {
                MyTableRecord record = (from u in _DBContext.MyTable
                                        where u.ID == AObj.ID
                                        select u).SingleOrDefault();
                if (record == null)
                {
                    return false;
                }
                record.SomeProperty = AObj.SomeProperty;
                return true;
            }
        }

    Unit of work interface:

        public interface IUnitOfWork
        {
            void Save();
        }

    Unit of work implemented in a data context extension:

        public partial class MyDataContext : DataContext, IUnitOfWork
        {
            public void Save()
            {
                SubmitChanges();
            }
        }

    StructureMap registry:

        public class DataServiceRegistry : Registry
        {
            public DataServiceRegistry()
            {
                // Unit of work
                For<IUnitOfWork>()
                    .HttpContextScoped()
                    .TheDefault.Is.ConstructedBy(() => new MyDataContext());

                // RepositoryA
                For<IRepositoryA>()
                    .Singleton()
                    .Use<LINQtoSQLRepositoryA>();

                // RepositoryB
                For<IRepositoryB>()
                    .Singleton()
                    .Use<LINQtoSQLRepositoryB>();
            }
        }

    My problem is that when I call PerformActionOnObject on my service object, the update never fires any SQL. I think this is because the data context in the UnitOfWork object is different from the one in RepositoryA where the data is changed. So when the service calls Save() on its IUnitOfWork, the underlying data context does not have any updated data, so no update SQL is fired.

    Is there something I've done wrong in the StructureMap registry setup? Or is there a more fundamental problem with the design? Many thanks.
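    For what it's worth, a sketch of the registry change I would try first (an assumption based on the symptoms, not a verified fix): because the repositories are registered as singletons, they capture whichever data context existed when they were first built, while the unit of work is rebuilt per request, so Save() runs on a context that never saw the changes. Scoping the repositories to the request keeps both on the same context:

        public class DataServiceRegistry : Registry
        {
            public DataServiceRegistry()
            {
                // one data context per HTTP request
                For<IUnitOfWork>()
                    .HttpContextScoped()
                    .TheDefault.Is.ConstructedBy(() => new MyDataContext());

                // repositories scoped to the same request, so they share the
                // request's data context instead of holding on to a stale one
                For<IRepositoryA>()
                    .HttpContextScoped()
                    .Use<LINQtoSQLRepositoryA>();

                For<IRepositoryB>()
                    .HttpContextScoped()
                    .Use<LINQtoSQLRepositoryB>();
            }
        }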

    Read the article

  • Developing web apps using ASP.NET MVC 3, Razor and EF Code First - Part 1

    - by shiju
    In this post, I will demonstrate web application development using ASP.NET MVC 3, Razor and EF Code First. This post will also cover Dependency Injection using Unity 2.0 and a generic Repository and Unit of Work for EF Code First. The following frameworks will be used for this step-by-step tutorial:

    - ASP.NET MVC 3
    - EF Code First CTP 5
    - Unity 2.0

    Define the Domain Model

    Let's create the domain model for our simple web application.

    Category class

        public class Category
        {
            public int CategoryId { get; set; }

            [Required(ErrorMessage = "Name Required")]
            [StringLength(25, ErrorMessage = "Must be less than 25 characters")]
            public string Name { get; set; }

            public string Description { get; set; }
            public virtual ICollection<Expense> Expenses { get; set; }
        }

    Expense class

        public class Expense
        {
            public int ExpenseId { get; set; }
            public string Transaction { get; set; }
            public DateTime Date { get; set; }
            public double Amount { get; set; }
            public int CategoryId { get; set; }
            public virtual Category Category { get; set; }
        }

    We have two domain entities: Category and Expense. A single category contains a list of expense transactions, and every expense transaction should have a Category. In this post, we will be focusing on CRUD operations for the Category entity and will work on the Expense entity with a View Model object in a later post. The source code for this application will be refactored over time.

    The above entities are very simple POCO (Plain Old CLR Object) classes, and the Category entity is decorated with validation attributes from the System.ComponentModel.DataAnnotations namespace. Now we want to use these entities to define model objects for Entity Framework 4. Using the Code First approach of Entity Framework, we can first define the entities by simply writing POCO classes without any coupling to any API or database library. This approach lets you focus on the domain model, which enables Domain-Driven Development for applications. EF Code First support is currently enabled through a separate API that runs on top of Entity Framework 4; EF Code First had reached CTP 5 when I wrote this article.

    Creating a Context Class for Entity Framework

    We have created our domain model, so let's create a class for working with EF Code First. For this, you have to download EF Code First CTP 5 and add a reference to the assembly EntityFramework.dll. You can also use NuGet to download and add the reference to EF Code First.

        public class MyFinanceContext : DbContext
        {
            public MyFinanceContext() : base("MyFinance") { }

            public DbSet<Category> Categories { get; set; }
            public DbSet<Expense> Expenses { get; set; }
        }

    The above class MyFinanceContext is derived from DbContext, which can connect your model classes to a database. The MyFinanceContext class maps our Category and Expense classes to the database tables Categories and Expenses using DbSet<TEntity>, where TEntity is any POCO class. When we run the application for the first time, it will automatically create the database. EF Code First looks for a connection string in web.config or app.config that has the same name as the DbContext class. If it does not find any connection string matching that convention, it will automatically create a database in the local SQL Express instance by default, and the name of the database will be the same as the DbContext class. You can also define the name of the database in the constructor of the DbContext class.
    Unlike NHibernate, we don't have to use any XML-based mapping files or a fluent interface for mapping between our model and the database. The model classes in Code First work on the basis of conventions, and we can also use a fluent API to refine the model. The convention for the primary key is 'Id' or '<class name>Id'. If primary key properties are detected with type 'int', 'long' or 'short', they will automatically be registered as identity columns in the database by default. Primary key detection is not case sensitive. We can define our model classes with validation attributes from the System.ComponentModel.DataAnnotations namespace, and validation rules are automatically enforced when a model object is updated or saved.

    Generic Repository for EF Code First

    We have created the model classes and the DbContext class. Now we have to create a generic repository for data persistence with EF Code First. If you don't know about the repository pattern, check out Martin Fowler's article on Repository.

    Let's create a generic repository to work with DbContext and DbSet:

        public interface IRepository<T> where T : class
        {
            void Add(T entity);
            void Delete(T entity);
            T GetById(long Id);
            IEnumerable<T> All();
        }

    RepositoryBase - generic repository class

        public abstract class RepositoryBase<T> where T : class
        {
            private MyFinanceContext database;
            private readonly IDbSet<T> dbset;

            protected RepositoryBase(IDatabaseFactory databaseFactory)
            {
                DatabaseFactory = databaseFactory;
                dbset = Database.Set<T>();
            }

            protected IDatabaseFactory DatabaseFactory
            {
                get; private set;
            }

            protected MyFinanceContext Database
            {
                get { return database ?? (database = DatabaseFactory.Get()); }
            }

            public virtual void Add(T entity)
            {
                dbset.Add(entity);
            }

            public virtual void Delete(T entity)
            {
                dbset.Remove(entity);
            }

            public virtual T GetById(long id)
            {
                return dbset.Find(id);
            }

            public virtual IEnumerable<T> All()
            {
                return dbset.ToList();
            }
        }

    DatabaseFactory class

        public class DatabaseFactory : Disposable, IDatabaseFactory
        {
            private MyFinanceContext database;

            public MyFinanceContext Get()
            {
                return database ?? (database = new MyFinanceContext());
            }

            protected override void DisposeCore()
            {
                if (database != null)
                    database.Dispose();
            }
        }

    Unit of Work

    If you are new to the Unit of Work pattern, check out Fowler's article on Unit of Work. According to Martin Fowler, the Unit of Work pattern "maintains a list of objects affected by a business transaction and coordinates the writing out of changes and the resolution of concurrency problems." Let's create a class for handling the Unit of Work:

        public interface IUnitOfWork
        {
            void Commit();
        }

    UnitOfWork class

        public class UnitOfWork : IUnitOfWork
        {
            private readonly IDatabaseFactory databaseFactory;
            private MyFinanceContext dataContext;

            public UnitOfWork(IDatabaseFactory databaseFactory)
            {
                this.databaseFactory = databaseFactory;
            }

            protected MyFinanceContext DataContext
            {
                get { return dataContext ?? (dataContext = databaseFactory.Get()); }
            }

            public void Commit()
            {
                DataContext.Commit();
            }
        }

    The Commit method of the UnitOfWork calls the Commit method of the MyFinanceContext class, which executes the SaveChanges method of the DbContext class.
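    Note that the MyFinanceContext class as listed earlier does not show a Commit method, so the context presumably also carries one that wraps SaveChanges, as the paragraph above describes. A minimal sketch of that assumption:

        public class MyFinanceContext : DbContext
        {
            public MyFinanceContext() : base("MyFinance") { }

            public DbSet<Category> Categories { get; set; }
            public DbSet<Expense> Expenses { get; set; }

            // assumed: Commit wraps SaveChanges so that callers depend on the
            // unit of work abstraction rather than on the EF-specific API
            public void Commit()
            {
                SaveChanges();
            }
        }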
    Repository class for Category

    In this post, we are focusing on persistence for the Category entity and will work on the other entities in a later post. Let's create a repository for handling CRUD operations for Category by deriving from the generic RepositoryBase<T>:

        public class CategoryRepository : RepositoryBase<Category>, ICategoryRepository
        {
            public CategoryRepository(IDatabaseFactory databaseFactory)
                : base(databaseFactory)
            {
            }
        }

        public interface ICategoryRepository : IRepository<Category>
        {
        }

    If we need methods beyond the generic repository for Category, we can define them in CategoryRepository.

    Dependency Injection using Unity 2.0

    If you are new to Inversion of Control / Dependency Injection or Unity, please have a look at my articles at http://weblogs.asp.net/shijuvarghese/archive/tags/IoC/default.aspx. I want to create a custom lifetime manager for Unity that stores the container in the current HttpContext.

        public class HttpContextLifetimeManager<T> : LifetimeManager, IDisposable
        {
            public override object GetValue()
            {
                return HttpContext.Current.Items[typeof(T).AssemblyQualifiedName];
            }

            public override void RemoveValue()
            {
                HttpContext.Current.Items.Remove(typeof(T).AssemblyQualifiedName);
            }

            public override void SetValue(object newValue)
            {
                HttpContext.Current.Items[typeof(T).AssemblyQualifiedName] = newValue;
            }

            public void Dispose()
            {
                RemoveValue();
            }
        }

    Let's create a controller factory for Unity in the ASP.NET MVC 3 application:

        public class UnityControllerFactory : DefaultControllerFactory
        {
            IUnityContainer container;

            public UnityControllerFactory(IUnityContainer container)
            {
                this.container = container;
            }

            protected override IController GetControllerInstance(RequestContext reqContext, Type controllerType)
            {
                IController controller;
                if (controllerType == null)
                    throw new HttpException(404,
                        String.Format("The controller for path '{0}' could not be found " +
                                      "or it does not implement IController.",
                                      reqContext.HttpContext.Request.Path));

                if (!typeof(IController).IsAssignableFrom(controllerType))
                    throw new ArgumentException(
                        string.Format("Type requested is not a controller: {0}",
                                      controllerType.Name),
                        "controllerType");
                try
                {
                    controller = container.Resolve(controllerType) as IController;
                }
                catch (Exception ex)
                {
                    throw new InvalidOperationException(
                        String.Format("Error resolving controller {0}", controllerType.Name), ex);
                }
                return controller;
            }
        }

    Configure contract and concrete types in Unity

    Let's configure our contract and concrete types in Unity for resolving our dependencies.
        private void ConfigureUnity()
        {
            // Create UnityContainer
            IUnityContainer container = new UnityContainer()
                .RegisterType<IDatabaseFactory, DatabaseFactory>(new HttpContextLifetimeManager<IDatabaseFactory>())
                .RegisterType<IUnitOfWork, UnitOfWork>(new HttpContextLifetimeManager<IUnitOfWork>())
                .RegisterType<ICategoryRepository, CategoryRepository>(new HttpContextLifetimeManager<ICategoryRepository>());

            // Set container for the controller factory
            ControllerBuilder.Current.SetControllerFactory(
                new UnityControllerFactory(container));
        }

    In the ConfigureUnity method above, we register our types with the Unity container using the custom lifetime manager HttpContextLifetimeManager. Let's call ConfigureUnity in Global.asax.cs to set the controller factory for Unity and configure the types:

        protected void Application_Start()
        {
            AreaRegistration.RegisterAllAreas();
            RegisterGlobalFilters(GlobalFilters.Filters);
            RegisterRoutes(RouteTable.Routes);
            ConfigureUnity();
        }

    Developing the web application using ASP.NET MVC 3

    We have created the domain model for our web application, and we have also created repositories and configured dependencies with the Unity container. Now we have to create the controller classes and views for doing CRUD operations against the Category entity.

    Category controller

        public class CategoryController : Controller
        {
            private readonly ICategoryRepository categoryRepository;
            private readonly IUnitOfWork unitOfWork;

            public CategoryController(ICategoryRepository categoryRepository, IUnitOfWork unitOfWork)
            {
                this.categoryRepository = categoryRepository;
                this.unitOfWork = unitOfWork;
            }

            public ActionResult Index()
            {
                var categories = categoryRepository.All();
                return View(categories);
            }

            [HttpGet]
            public ActionResult Edit(int id)
            {
                var category = categoryRepository.GetById(id);
                return View(category);
            }

            [HttpPost]
            public ActionResult Edit(int id, FormCollection collection)
            {
                var category = categoryRepository.GetById(id);
                if (TryUpdateModel(category))
                {
                    unitOfWork.Commit();
                    return RedirectToAction("Index");
                }
                else return View(category);
            }

            [HttpGet]
            public ActionResult Create()
            {
                var category = new Category();
                return View(category);
            }

            [HttpPost]
            public ActionResult Create(Category category)
            {
                if (!ModelState.IsValid)
                {
                    return View("Create", category);
                }
                categoryRepository.Add(category);
                unitOfWork.Commit();
                return RedirectToAction("Index");
            }

            [HttpPost]
            public ActionResult Delete(int id)
            {
                var category = categoryRepository.GetById(id);
                categoryRepository.Delete(category);
                unitOfWork.Commit();
                var categories = categoryRepository.All();
                return PartialView("CategoryList", categories);
            }
        }

    Creating Views in Razor

    Now we are going to create views in Razor for our ASP.NET MVC 3 application. Let's create a partial view CategoryList.cshtml for listing category information and providing links for the Edit and Delete operations.
    CategoryList.cshtml

        @using MyFinance.Helpers;
        @using MyFinance.Domain;
        @model IEnumerable<Category>

        <table>
            <tr>
                <th>Actions</th>
                <th>Name</th>
                <th>Description</th>
            </tr>
            @foreach (var item in Model)
            {
                <tr>
                    <td>
                        @Html.ActionLink("Edit", "Edit", new { id = item.CategoryId })
                        @Ajax.ActionLink("Delete", "Delete", new { id = item.CategoryId },
                            new AjaxOptions { Confirm = "Delete Expense?", HttpMethod = "Post", UpdateTargetId = "divCategoryList" })
                    </td>
                    <td>
                        @item.Name
                    </td>
                    <td>
                        @item.Description
                    </td>
                </tr>
            }
        </table>
        <p>
            @Html.ActionLink("Create New", "Create")
        </p>

    The Delete link provides Ajax functionality using Ajax.ActionLink. This issues an Ajax request to the Delete action method in the CategoryController class. The Delete action method returns the partial view CategoryList after deleting the record. We use the CategoryList view both for the Ajax functionality and within the Index view for displaying the list of category information.

    Let's create the Index view using the partial view CategoryList.

    Index.cshtml

        @model IEnumerable<MyFinance.Domain.Category>
        @{
            ViewBag.Title = "Index";
        }

        <h2>Category List</h2>

        <script src="@Url.Content("~/Scripts/jquery.unobtrusive-ajax.min.js")" type="text/javascript"></script>

        <div id="divCategoryList">
            @Html.Partial("CategoryList", Model)
        </div>

    We can render partial views using the Html.Partial helper method.

    Now we are going to create the view pages for the insert and update functionality for Category. Both view pages share a common user interface for entering the category information, so I want to create an editor template for the Category information. We have to create the editor template with the same name as the entity so that we can refer to it on view pages using @Html.EditorFor(model => model). So let's create a template named Category.

    Let's create the view page for inserting Category information:

        @model MyFinance.Domain.Category
        @{
            ViewBag.Title = "Save";
        }

        <h2>Create</h2>

        <script src="@Url.Content("~/Scripts/jquery.validate.min.js")" type="text/javascript"></script>
        <script src="@Url.Content("~/Scripts/jquery.validate.unobtrusive.min.js")" type="text/javascript"></script>

        @using (Html.BeginForm())
        {
            @Html.ValidationSummary(true)
            <fieldset>
                <legend>Category</legend>
                @Html.EditorFor(model => model)
                <p>
                    <input type="submit" value="Create" />
                </p>
            </fieldset>
        }

        <div>
            @Html.ActionLink("Back to List", "Index")
        </div>

    ViewStart file

    In Razor views, we can add a file named _viewstart.cshtml in the Views directory, and it will be shared among all the views within the Views directory. The code below in _viewstart.cshtml sets the Layout page for every view in the Views folder:

        @{
            Layout = "~/Views/Shared/_Layout.cshtml";
        }

    Source Code

    You can download the source code from http://efmvc.codeplex.com/. The source will be refactored over time.

    Summary

    In this post, we created a simple web application using ASP.NET MVC 3 and EF Code First. We discussed technologies and practices such as ASP.NET MVC 3, Razor, EF Code First, Unity 2, the generic Repository and the Unit of Work. In later posts, I will modify the application and discuss more things.
    Stay tuned to my blog for more posts on step-by-step application building.

    Read the article

  • Beware Sneaky Reads with Unique Indexes

    - by Paul White NZ
    A few days ago, Sandra Mueller (twitter | blog) asked a question using twitter's #sqlhelp hash tag: "Might SQL Server retrieve (out-of-row) LOB data from a table, even if the column isn't referenced in the query?"

    Leaving aside trivial cases (like selecting a computed column that does reference the LOB data), one might be tempted to say that no, SQL Server does not read data you haven't asked for. In general, that's quite correct; however there are cases where SQL Server might sneakily retrieve a LOB column...

    Example Table

    Here's a T-SQL script to create that table and populate it with 1,000 rows:

        CREATE TABLE dbo.LOBtest
        (
            pk INTEGER IDENTITY NOT NULL,
            some_value INTEGER NULL,
            lob_data VARCHAR(MAX) NULL,
            another_column CHAR(5) NULL,
            CONSTRAINT [PK dbo.LOBtest pk]
                PRIMARY KEY CLUSTERED (pk ASC)
        );
        GO
        DECLARE @Data VARCHAR(MAX);
        SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540);

        WITH Numbers (n) AS
        (
            SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0))
            FROM master.sys.columns C1, master.sys.columns C2
        )
        INSERT LOBtest WITH (TABLOCKX)
        (
            some_value,
            lob_data
        )
        SELECT TOP (1000)
            N.n,
            @Data
        FROM Numbers N
        WHERE N.n <= 1000;

    Test 1: A Simple Update

    Let's run a query to subtract one from every value in the some_value column:

        UPDATE dbo.LOBtest WITH (TABLOCKX)
        SET some_value = some_value - 1;

    As you might expect, modifying this integer column in 1,000 rows doesn't take very long, or use many resources. The STATISTICS IO and TIME output shows a total of 9 logical reads, and 25ms elapsed time. The query plan is also very simple. Looking at the Clustered Index Scan, we can see that SQL Server only retrieves the pk and some_value columns during the scan.

    The pk column is needed by the Clustered Index Update operator to uniquely identify the row that is being changed. The some_value column is used by the Compute Scalar to calculate the new value. (In case you are wondering what the Top operator is for, it is used to enforce SET ROWCOUNT.)

    Test 2: Simple Update with an Index

    Now let's create a nonclustered index keyed on the some_value column, with lob_data as an included column:

        CREATE NONCLUSTERED INDEX [IX dbo.LOBtest some_value (lob_data)]
        ON dbo.LOBtest (some_value)
        INCLUDE
        (
            lob_data
        )
        WITH
        (
            FILLFACTOR = 100,
            MAXDOP = 1,
            SORT_IN_TEMPDB = ON
        );

    This is not a useful index for our simple update query; imagine that someone else created it for a different purpose. Let's run our update query again:

        UPDATE dbo.LOBtest WITH (TABLOCKX)
        SET some_value = some_value - 1;

    We find that it now requires 4,014 logical reads and the elapsed query time has increased to around 100ms. The extra logical reads (4 per row) are an expected consequence of maintaining the nonclustered index.

    The query plan is very similar to before. The Clustered Index Update operator picks up the extra work of maintaining the nonclustered index. The new Compute Scalar operators detect whether the value in the some_value column has actually been changed by the update. SQL Server may be able to skip maintaining the nonclustered index if the value hasn't changed (see my previous post on non-updating updates for details). Our simple query does change the value of some_value in every row, so this optimization doesn't add any value in this specific case.

    The output list of columns from the Clustered Index Scan hasn't changed from the one shown previously: SQL Server still just reads the pk and some_value columns. Cool.
    Overall then, adding the nonclustered index hasn't had any startling effects, and the LOB column data still isn't being read from the table. Let's see what happens if we make the nonclustered index unique.

    Test 3: Simple Update with a Unique Index

    Here's the script to create a new unique index, and drop the old one:

        CREATE UNIQUE NONCLUSTERED INDEX [UQ dbo.LOBtest some_value (lob_data)]
        ON dbo.LOBtest (some_value)
        INCLUDE
        (
            lob_data
        )
        WITH
        (
            FILLFACTOR = 100,
            MAXDOP = 1,
            SORT_IN_TEMPDB = ON
        );
        GO
        DROP INDEX [IX dbo.LOBtest some_value (lob_data)]
        ON dbo.LOBtest;

    Remember that SQL Server only enforces uniqueness on index keys (the some_value column). The lob_data column is simply stored at the leaf level of the nonclustered index. With that in mind, we might expect this change to make very little difference. Let's see:

        UPDATE dbo.LOBtest WITH (TABLOCKX)
        SET some_value = some_value - 1;

    Whoa! Now look at the elapsed time and logical reads:

        Scan count 1, logical reads 2016, physical reads 0, read-ahead reads 0,
        lob logical reads 36015, lob physical reads 0, lob read-ahead reads 15992.

        CPU time = 172 ms, elapsed time = 16172 ms.

    Even with all the data and index pages in memory, the query took over 16 seconds to update just 1,000 rows, performing over 52,000 LOB logical reads (nearly 16,000 of those using read-ahead). Why on earth is SQL Server reading LOB data in a query that only updates a single integer column?

    The Query Plan

    The query plan for test 3 looks a bit more complex than before. In fact, the bottom level is exactly the same as we saw with the non-unique index. The top level has heaps of new stuff though, which I'll come to in a moment.

    You might be expecting to find that the Clustered Index Scan is now reading the lob_data column (for some reason). After all, we need to explain where all the LOB logical reads are coming from. Sadly, when we look at the properties of the Clustered Index Scan, we see exactly the same as before: SQL Server is still only reading the pk and some_value columns. So what's doing the LOB reads?

    Updates that Sneakily Read Data

    We have to go as far as the Clustered Index Update operator before we see LOB data in the output list. [Expr1020] is a bit flag added by an earlier Compute Scalar. It is set true if the some_value column has not been changed (part of the non-updating updates optimization I mentioned earlier).

    The Clustered Index Update operator adds two new columns: the lob_data column, and some_value_OLD. The some_value_OLD column, as the name suggests, is the pre-update value of the some_value column. At this point, the clustered index has already been updated with the new value, but we haven't touched the nonclustered index yet.

    An interesting observation here is that the Clustered Index Update operator can read a column into the data flow as part of its update operation. SQL Server could have read the LOB data as part of the initial Clustered Index Scan, but that would mean carrying the data through all the operations that occur prior to the Clustered Index Update. The server knows it will have to go back to the clustered index row to update it, so it delays reading the LOB data until then. Sneaky!

    Why the LOB Data Is Needed

    This is all very interesting (I hope), but why is SQL Server reading the LOB data? For that matter, why does it need to pass the pre-update value of the some_value column out of the Clustered Index Update? The answer relates to the top row of the query plan for test 3.
    I'll reproduce it here for convenience: notice that this is a wide (per-index) update plan. SQL Server used a narrow (per-row) update plan in test 2, where the Clustered Index Update took care of maintaining the nonclustered index too. I'll talk more about this difference shortly.

    The Split/Sort/Collapse combination is an optimization which aims to make per-index update plans more efficient. It does this by breaking each update into a delete/insert pair, reordering the operations, removing any redundant operations, and finally applying the net effect of all the changes to the nonclustered index.

    Imagine we had a unique index which currently holds three rows with the values 1, 2, and 3. If we run a query that adds 1 to each row value, we would end up with values 2, 3, and 4. The net effect of all the changes is the same as if we simply deleted the value 1, and added a new value 4.

    By applying net changes, SQL Server can also avoid false unique-key violations. If we tried to immediately update the value 1 to a 2, it would conflict with the existing value 2 (which would soon be updated to 3, of course) and the query would fail. You might argue that SQL Server could avoid the uniqueness violation by starting with the highest value (3) and working down. That's fine, but it's not possible to generalize this logic to work with every possible update query.

    SQL Server has to use a wide update plan if it sees any risk of false uniqueness violations. It's worth noting that the logic SQL Server uses to detect whether these violations are possible has definite limits. As a result, you will often receive a wide update plan, even when you can see that no violations are possible.

    Another benefit of this optimization is that it includes a sort on the index key as part of its work. Processing the index changes in index key order promotes sequential I/O against the nonclustered index.

    A side-effect of all this is that the net changes might include one or more inserts. In order to insert a new row in the index, SQL Server obviously needs all the columns: the key column and the included LOB column. This is the reason SQL Server reads the LOB data as part of the Clustered Index Update.

    In addition, the some_value_OLD column is required by the Split operator (it turns updates into delete/insert pairs). In order to generate the correct index key delete operation, it needs the old key value.

    The irony is that in this case the Split/Sort/Collapse optimization is anything but. Reading all that LOB data is extremely expensive, so it is sad that the current version of SQL Server has no way to avoid it.

    Finally, for completeness, I should mention that the Filter operator is there to filter out the non-updating updates.

    Beating the Set-Based Update with a Cursor

    One situation where SQL Server can see that false unique-key violations aren't possible is where it can guarantee that only one row is being updated.
    Armed with this knowledge, we can write a cursor (or the WHILE-loop equivalent) that updates one row at a time, and so avoids reading the LOB data:

        SET NOCOUNT ON;
        SET STATISTICS XML, IO, TIME OFF;

        DECLARE @PK INTEGER,
                @StartTime DATETIME;
        SET @StartTime = GETUTCDATE();

        DECLARE curUpdate CURSOR
            LOCAL
            FORWARD_ONLY
            KEYSET
            SCROLL_LOCKS
        FOR
            SELECT L.pk
            FROM LOBtest L
            ORDER BY L.pk ASC;

        OPEN curUpdate;

        WHILE (1 = 1)
        BEGIN
            FETCH NEXT FROM curUpdate INTO @PK;

            IF @@FETCH_STATUS = -1 BREAK;
            IF @@FETCH_STATUS = -2 CONTINUE;

            UPDATE dbo.LOBtest
            SET some_value = some_value - 1
            WHERE CURRENT OF curUpdate;
        END;

        CLOSE curUpdate;
        DEALLOCATE curUpdate;

        SELECT DATEDIFF(MILLISECOND, @StartTime, GETUTCDATE());

    That completes the update in 1,280 milliseconds (remember test 3 took over 16 seconds!). I used the WHERE CURRENT OF syntax there and a KEYSET cursor, just for the fun of it. One could just as well use a WHERE clause that specified the primary key value instead.

    Clustered Indexes

    A clustered index is the ultimate index with included columns: all non-key columns are included columns in a clustered index. Let's re-create the test table and data with an updatable primary key, and without any nonclustered indexes:

        IF OBJECT_ID(N'dbo.LOBtest', N'U') IS NOT NULL
            DROP TABLE dbo.LOBtest;
        GO
        CREATE TABLE dbo.LOBtest
        (
            pk INTEGER NOT NULL,
            some_value INTEGER NULL,
            lob_data VARCHAR(MAX) NULL,
            another_column CHAR(5) NULL,
            CONSTRAINT [PK dbo.LOBtest pk]
                PRIMARY KEY CLUSTERED (pk ASC)
        );
        GO
        DECLARE @Data VARCHAR(MAX);
        SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540);

        WITH Numbers (n) AS
        (
            SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0))
            FROM master.sys.columns C1, master.sys.columns C2
        )
        INSERT LOBtest WITH (TABLOCKX)
        (
            pk,
            some_value,
            lob_data
        )
        SELECT TOP (1000)
            N.n,
            N.n,
            @Data
        FROM Numbers N
        WHERE N.n <= 1000;

    Now here's a query to modify the cluster keys:

        UPDATE dbo.LOBtest
        SET pk = pk + 1;

    In the query plan, the Split/Sort/Collapse optimization is present, and we also gain an Eager Table Spool, for Halloween protection. In addition, SQL Server now has no choice but to read the LOB data in the Clustered Index Scan. The performance is not great, as you might expect (even though there is no nonclustered index to maintain):

        Table 'LOBtest'.
        Scan count 1, logical reads 2011, physical reads 0, read-ahead reads 0,
        lob logical reads 36015, lob physical reads 0, lob read-ahead reads 15992.

        Table 'Worktable'.
        Scan count 1, logical reads 2040, physical reads 0, read-ahead reads 0,
        lob logical reads 34000, lob physical reads 0, lob read-ahead reads 8000.

        SQL Server Execution Times:
        CPU time = 483 ms, elapsed time = 17884 ms.

    Notice how the LOB data is read twice: once from the Clustered Index Scan, and again from the work table in tempdb used by the Eager Spool.

    If you try the same test with a non-unique clustered index (rather than a primary key), you'll get a much more efficient plan that just passes the cluster key (including uniqueifier) around, with no LOB data or other non-key columns. A unique nonclustered index (on a heap) works well too. Both those queries complete in a few tens of milliseconds, with no LOB reads, and just a few thousand logical reads. (In fact the heap is rather more efficient.)

    There are lots more fun combinations to try that I don't have space for here.

    Final Thoughts

    The behaviour shown in this post is not limited to LOB data by any means.
    If the conditions are met, any unique index that has included columns can produce similar behaviour – something to bear in mind when adding large INCLUDE columns to achieve covering queries, perhaps.

    Paul White
    Email: [email protected]
    Twitter: @PaulWhiteNZ

    Read the article

  • (0xC03A0014) Failed to add device 'Microsoft Virtual Hard Disk'

    - by maniargaurav
    We had a Windows Server 2008 SP2 server. It crashed due to a motherboard problem. After we got a new motherboard, we installed Windows Server 2008 R2. Now when we try to attach the old VHD file, we get the following error:

        Failed to add device 'Microsoft Virtual Hard Disk'.
        Cannot open attachment 'D:\Test\test.vhd'.
        Error: 'A virtual disk support provider for the specified file was not found.'

        'TestVM': Cannot open attachment 'D:\Test\test.vhd'.
        Error: 'A virtual disk support provider for the specified file was not found.' (0xC03A0014).
        (Virtual machine ID 5626AAB2-C21C-48FF-8B70-40671CBC573B)

    Read the article

  • April 14th Links: ASP.NET, ASP.NET MVC, ASP.NET Web API and Visual Studio

    - by ScottGu
    Here is the latest in my link-listing blog series:

    ASP.NET

    - Easily overlooked features in VS 11 Express for Web: Good post by Scott Hanselman that highlights a bunch of easily overlooked improvements coming to VS 11 (and specifically the free express editions) for web development: unit testing, browser chooser/launcher, IIS Express, CSS Color Picker, Image Preview in Solution Explorer and more.
    - Get Started with ASP.NET 4.5 Web Forms: Good 5-part tutorial that walks through building an application using ASP.NET Web Forms and highlights some of the nice improvements coming with ASP.NET 4.5.
    - What is New in Razor V2 and What Else is New in Razor V2: Great posts by Andrew Nurse, a dev on the ASP.NET team, about some of the new improvements coming with ASP.NET Razor v2.
    - ASP.NET MVC 4 AllowAnonymous Attribute: Nice post from David Hayden that talks about the new [AllowAnonymous] filter introduced with ASP.NET MVC 4.
    - Introduction to the ASP.NET Web API: Great tutorial by Stephen Walther that covers how to use the new ASP.NET Web API support built into ASP.NET 4.5 and ASP.NET MVC 4.
    - Comprehensive List of ASP.NET Web API Tutorials and Articles: Tugberk Ugurlu links to a huge collection of articles, tutorials, and samples about the new ASP.NET Web API capability.
    - Async Mashups using ASP.NET Web API: Nice post by Henrik on how you can use the new async language support coming with .NET 4.5 to easily and efficiently make asynchronous network requests that do not block threads within ASP.NET.

    ASP.NET and Front-End Web Development

    - Visual Studio 11 and Front End Web Development - JavaScript/HTML5/CSS3: Nice post by Scott Hanselman that highlights some of the great improvements coming with VS 11 (including the free express edition) for front-end web development.
    - HTML5 Drag/Drop and Async Multi-file Upload with ASP.NET Web API: Great post by Filip W. that demonstrates how to implement an async file drag/drop uploader using HTML5 and ASP.NET Web API.
    - Device Emulator Guide for Mobile Development with ASP.NET: Good post from Rachel Appel that covers how to use various device emulators with ASP.NET and VS to develop cross-platform mobile sites.
    - Fixing these jQuery: A Guide to Debugging: Great presentation by Adam Sontag on debugging with JavaScript and jQuery. Some really good tips, tricks and gotchas that can save a lot of time.

    ASP.NET and Open Source

    - Getting Started with ASP.NET Web Stack Source on CodePlex: Fantastic post by Henrik (an architect on the ASP.NET team) that provides step-by-step instructions on how to work with the ASP.NET source code we recently open sourced.
    - Contributing to ASP.NET Web Stack Source on CodePlex: Follow-on to the post above (also by Henrik) that walks through how you can submit a code contribution to the ASP.NET MVC, Web API and Razor projects.
    - Overview of the WebApiContrib project: Nice post by Pedro Reys on the new open source WebApiContrib project that has been started to deliver cool extensions and libraries for use with ASP.NET Web API.

    Entity Framework

    - Entity Framework 5 Performance Improvements and Performance Considerations for EF5: Good articles that describe some of the big performance wins coming with EF5 (which will ship with both .NET 4.5 and ASP.NET MVC 4). Automatic compilation of LINQ queries will yield some significant performance wins (up to 600% faster).
    - ASP.NET MVC 4 and EF Database Migrations: Good post by David Hayden that covers the new database migrations support within EF 4.3, which allows you to easily update your database schema during development - without losing any of the data within it.

    Visual Studio

    - What's New in Visual Studio 11 Unit Testing: Nice post by Peter Provost (from the VS team) that talks about some of the great improvements coming to VS 11 for unit testing - including built-in VS tooling support for a broad set of unit test frameworks (including NUnit, XUnit, Jasmine, QUnit and more).

    Hope this helps,

    Scott

    Read the article

  • SCVMM 2012 R2 - Installing Virtual Switch Fails with Error 2916

    - by Brian M.
    So I've been attempting to teach myself SCVMM 2012 and Hyper-V Server 2012 R2, and I seem to have hit a snag. I've connected my Hyper-V host to SCVMM 2012 successfully, and created a logical network, logical switch, and uplink port profile (which I essentially blew through with the default settings). However, when I attempt to create a virtual switch on my Hyper-V host, I run into an issue. The job will use the logical network settings I created to configure the virtual switch, but when it tries to apply them to the host, it stalls and eventually fails with the following error:

        Error (2916)
        VMM is unable to complete the request. The connection to the agent vmhost1.test.loc was lost.
        WinRM: URL: [h**p://vmhost1.test.loc:5985], Verb: [GET],
        Resource: [h**p://schemas.microsoft.com/wbem/wsman/1/wmi/root/virtualization/v2/Msvm_ConcreteJob?InstanceID=2F401A71-14A2-4636-9B3E-10C0EE942D33]
        Unknown error (0x80338126)

        Recommended Action
        Ensure that the Windows Remote Management (WinRM) service and the VMM agent are installed and
        running and that a firewall is not blocking HTTP/HTTPS traffic. Ensure that VMM server is able to
        communicate with econ-hyperv2.econ.loc over WinRM by successfully running the following command:

            winrm id -r:vmhost1.test.loc

        This problem can also be caused by a Windows Management Instrumentation (WMI) service crash. If the
        server is running Windows Server 2008 R2, ensure that KB 982293 (h**p://support.microsoft.com/kb/982293)
        is installed on it. If the error persists, restart vmhost1.test.loc and then try the operation again.
        Refer to h**p://support.microsoft.com/kb/2742275 for more details.

    I restarted the server, and upon booting am greeted with a message stating "No active network adapters found." I load up PowerShell and run "Get-NetAdapter -IncludeHidden" to see what's going on, and get the following:

        Name                      InterfaceDescription                    ifIndex Status
        ----                      --------------------                    ------- ------
        Local Area Connection* 5  WAN Miniport (PPPOE)                          6 Di...
        Ethernet                  Microsoft Hyper-V Network Switch Def...      10
        Local Area Connection* 1  WAN Miniport (L2TP)                           2 Di...
        Local Area Connection* 8  WAN Miniport (Network Monitor)                9 Up
        Local Area Connection* 4  WAN Miniport (PPTP)                           5 Di...
        Ethernet 2                Broadcom NetXtreme Gigabit Ethernet          13 Up
        Local Area Connection* 7  WAN Miniport (IPv6)                           8 Up
        Local Area Connection* 9  Microsoft Kernel Debug Network Adapter       11 No...
        Local Area Connection* 3  WAN Miniport (IKEv2)                          4 Di...
        Local Area Connection* 2  WAN Miniport (SSTP)                           3 Di...
        vSwitch (TEST Test Swi... Hyper-V Virtual Switch Extension Ada...      17 Up
        Local Area Connection* 6  WAN Miniport (IP)                             7 Up

    Now the machine is no longer visible on the network, and I don't have the slightest idea what went wrong, and more importantly how to undo the damage I caused in order to get back to where I was (save for re-installing Hyper-V Server, but I really would rather know what's going on and how to fix it)! Does anybody have any ideas? Much appreciated!

    Read the article

  • Testing radius server from Mac OS X client

    - by Calvin Froedge
I have a RADIUS server set up on a server running Ubuntu 11.04. I have configured my switch to use the authentication server's IP (192.168.1.2) for RADIUS / 802.1x authentication, and I created a connection to test connecting from my Mac OS X client. Here is my RADIUS client configuration:

client 192.168.1.0/16 {
    secret = testing123
}

I can successfully authenticate using both 127.0.0.1 (localhost) and 192.168.1.2 (the IP of eth1), so I know RADIUS is getting those requests. I set up a connection to test from my MacBook, and my requests are timing out: http://screencast.com/t/tMhRLS3H7

Is there a better way to test the RADIUS connection from my MacBook? Thanks!

UPDATE: I was able to successfully test from the Mac OS X client using RadPerf, which is available as a cross-platform command-line tool.
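For what it's worth, a sketch of a command-line test from the Mac itself, assuming the FreeRADIUS client utilities are available (for example via Homebrew) and using placeholder credentials testuser/testpass:

# Install the FreeRADIUS utilities, which include radtest
brew install freeradius-server
# radtest <user> <password> <server[:port]> <nas-port-number> <secret>
radtest testuser testpass 192.168.1.2 0 testing123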

    Read the article

  • opendkim reporting read private key failed: no start line

    - by Bob
I've set up keys with the following:

opendkim-genkey -t -s mail -d test.com

The private key (pertinent to this question) contains:

-----BEGIN RSA PRIVATE KEY-----
(some stuff here)
-----END RSA PRIVATE KEY-----

My opendkim.conf contains:

Domain test.com
KeyFile /etc/opendkim/dkim.private
Selector mail

I am then testing with:

opendkim-testkey -d test.com -s mail -k /etc/mail/dkim.public

Yet I receive the following error:

opendkim-testkey: PEM_read_bio_PrivateKey() failed
error:0906D06C:PEM routines:PEM_read_bio:no start line

The files exist, are owned by the opendkim user and group, and have adequate permissions. Does anyone have any advice?
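One thing worth double-checking, offered as a guess rather than a confirmed diagnosis: opendkim-testkey reads the private key through its -k flag, so pointing it at the public-key file would produce exactly this "no start line" PEM error. A sketch using the KeyFile path from the opendkim.conf above:

opendkim-testkey -d test.com -s mail -k /etc/opendkim/dkim.private -vvv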

    Read the article

  • Microsoft launches IE9 preview – No support for XP

    - by samsudeen
Microsoft launched the developer preview of Internet Explorer 9 (IE9) at the MIX 10 web conference yesterday. This release is aimed at getting feedback from website designers, developers, and the wider community to make IE9 better than its previous versions. Microsoft will update the developer preview every eight weeks; the next update is expected in mid-March. So what is new and interesting about IE9?

Chakra

Chakra, the new scripting engine of IE9, runs JavaScript much faster than IE8 and other browsers, improving performance significantly. According to Microsoft, Chakra compiles JavaScript in the background on a separate thread, in parallel with the main engine, which is a completely new approach compared to current browser technologies.

Standards

Microsoft is determined to make (surprisingly!!!) IE9 compliant with web standards, with accelerated support for HTML5 video and support for newer web technologies such as CSS3 and SVG2.

ACID3 Test

IE9 scores 55/100 in the latest ACID3 test, which is much better than IE8's score of 22/100 but nowhere near its rivals Chrome, Opera, and Safari, which all score 100/100.

I am a little disappointed at not being able to download the developer preview on my XP machine. Still, the early comments look very positive for IE9. If you want to explore IE9, check out the Microsoft test drive site at Microsoft IE9 Test-drive. You can also download the IE9 developer preview at Download Preview.

Join us on Facebook to read all our stories right inside your Facebook news feed.

    Read the article

  • Setting Up GLFW3 in Visual Studio

    - by sm81095
I decided a couple of days ago that I was going to start trying to develop games in C++ with OpenGL, instead of C# MonoGame like I have been doing for a while. I was looking around for libraries to make OpenGL a little easier to use, and settled on GLEW and GLFW. GLEW was a super easy copy/paste, but GLFW3 was not. After looking around for a while and fighting with CMake, I got the glfw3.lib file created, added the additional include directories and library directories, and linked my program to the glfw3.lib file I just created. The problem is, I get these linker errors when I try to run or build my program:

Error 1 error LNK2019: unresolved external symbol _glfwInit referenced in function _main C:\Codex Interactive\Projects\OGLTest\OGLTest\test.obj OGLTest
Error 2 error LNK2019: unresolved external symbol _glfwTerminate referenced in function _main C:\Codex Interactive\Projects\OGLTest\OGLTest\test.obj OGLTest
Error 3 error LNK2019: unresolved external symbol _glfwSetErrorCallback referenced in function _main C:\Codex Interactive\Projects\OGLTest\OGLTest\test.obj OGLTest

and 10 other LNK2019 errors, all referring to some glfw method, as well as:

Error 14 error LNK1120: 13 unresolved externals C:\Codex Interactive\Projects\OGLTest\Debug\OGLTest.exe 1 1 OGLTest

at the very bottom of the error list. I've looked up most of these errors on their own, and the solutions I find either do nothing to solve the problem, or are people commenting on how dumb people are for not being able to solve this linker problem. Any assistance in solving these errors would be greatly appreciated. Info: I built GLFW3 with CMake for Visual Studio 11, both 32-bit and 64-bit, and both threw the same errors. The only extra libraries I linked were opengl32.lib, glu32.lib, and glfw3.lib. Here is the test code (from GLFW3's latest tutorial): Code

    Read the article

  • iis7 .net webservice 404 error

    - by user11457
I have a web service at /test/Service1.asmx, in the same folder as a page, /test/test.aspx. The page works fine, but I get the message below for the service in the same location. I know the file is there and the URL is correct, and I have added the script module and managed handler as well. If anyone knows what I'm missing here, I'd appreciate it.

Server Error in '/' Application.
The resource cannot be found.
Description: HTTP 404. The resource you are looking for (or one of its dependencies) could have been removed, had its name changed, or is temporarily unavailable. Please review the following URL and make sure that it is spelled correctly.
Requested URL: /test/Service1.asmx
Version Information: Microsoft .NET Framework Version:2.0.50727.4200; ASP.NET Version:2.0.50727.4016
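As a hedged sketch of one thing to verify (assuming the application pool runs in IIS7 integrated mode): check that web.config maps *.asmx to the standard .NET 2.0 web-service handler factory. The handler name AsmxHandler below is arbitrary.

<system.webServer>
  <handlers>
    <!-- Route *.asmx requests to the ASP.NET web service handler -->
    <add name="AsmxHandler" path="*.asmx" verb="*" preCondition="integratedMode"
         type="System.Web.Services.Protocols.WebServiceHandlerFactory, System.Web.Services, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
  </handlers>
</system.webServer>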

    Read the article

  • How to create an NFS proxy by using kernel server & client?

    - by Martin C. Martin
I have a file server that exports over NFS. On an Ubuntu machine I mount that export, then try to re-export it as an NFS volume. When I go to export it, I get the message:

exportfs: /test/nfs-mount-point does not support NFS export

How can I get this to work, or at least get more information about what the problem is? Exact steps:

Ubuntu 12.04

mount -t nfs myfileserver.com:/server-dir /test/nfs-mount-point
[Works fine, I can read & write files]

/etc/exports contains:
/test/nfs-mount-point *(rw,no_subtree_check)

sudo /etc/init.d/nfs-kernel-server restart
Stopping NFS kernel daemon [ OK ]
Unexporting directories for NFS kernel daemon... [ OK ]
Exporting directories for NFS kernel daemon...
exportfs: /test/nfs-mount-point does not support NFS export
[ OK ]
Starting NFS kernel daemon [ OK ]
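A sketch of the usual workaround, hedged because re-exporting an NFS mount through the kernel server is only partially supported: the kernel refuses to export filesystems it cannot derive a UUID for, and an explicit fsid in /etc/exports works around that check.

# /etc/exports: fsid=1 is an arbitrary non-zero id, unique per exported tree
/test/nfs-mount-point *(rw,no_subtree_check,fsid=1)

# then re-read the export table without a full restart
sudo exportfs -ra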

    Read the article

  • Using VLC as RTSP server

    - by StackedCrooked
I'm trying to figure out how to use the server capabilities of VLC; more specifically, how to export an SDP file when streaming over RTP. Chapter 4 of the documentation, in the section related to RTP streaming, gives examples for server and client:

vlc -vvv input_stream --sout '#rtp{dst=192.168.0.12,port=1234,sdp=rtsp://server.example.org:8080/test.sdp}'
vlc rtsp://server.example.org:8080/test.sdp

It's not very clear to me how to make this actually work. I have tried these two commands for server and client, using two cmd instances:

vlc -I rc screen:// --sout=#rtp{dst=127.0.0.1,port=4444,sdp=rtsp://localhost:8080/test.sdp}
vlc -I rc rtsp://localhost:8080/test.sdp

Invoking the second command causes the first one to crash, and the second command shows the error message "could not connect to localhost:8080".
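One hedged guess, given that the commands are typed into Windows cmd: unlike a Unix shell, cmd does not strip single quotes, so the --sout chain from the documentation example may reach VLC mangled. A sketch of the same two commands with double quotes, values kept from the question:

vlc -I rc screen:// --sout="#rtp{dst=127.0.0.1,port=4444,sdp=rtsp://localhost:8080/test.sdp}"
vlc -I rc rtsp://localhost:8080/test.sdp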

    Read the article

  • Apache is not interpreting .PHP files

    - by Ala ABUDEEB
I recently downloaded openSUSE 11.4 from the site to use as a server. For that purpose I downloaded the server edition, which has Apache 2.2.17 and PHP5 installed by default. OK, so far so good.

I started Apache successfully and put a test.php file in the DocumentRoot directory. test.php contains only:

<?php phpinfo() ?>

Then, using my browser, I typed http://localhost/test.php, and here was the problem: the browser didn't display what phpinfo() should display; instead it asked me whether I want to open or save test.php... which is driving me crazy. I googled a lot but found no solution.

This is /etc/apache2/conf.d/php5.conf:

<IfModule mod_php5.c>
    AddHandler application/x-httpd-php .php4
    AddHandler application/x-httpd-php .php5
    AddHandler application/x-httpd-php .php
    AddHandler application/x-httpd-php-source .php4s
    AddHandler application/x-httpd-php-source .php5s
    AddHandler application/x-httpd-php-source .phps
    DirectoryIndex index.php4
    DirectoryIndex index.php5
    DirectoryIndex index.php
</IfModule>
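A sketch of the usual openSUSE checks, assuming the stock apache2 packaging where the loaded module list lives in /etc/sysconfig/apache2; the conf.d file above only takes effect if mod_php5 is actually loaded:

# confirm mod_php5 is among the configured modules
grep APACHE_MODULES /etc/sysconfig/apache2
# enable it if missing, then restart Apache
sudo a2enmod php5
sudo rcapache2 restart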

    Read the article

  • Migrating a virtual domain controller for DR exercise

    - by Dips
Hello gurus, I have a question. I have a virtual domain controller that I have to migrate to another virtual server in a different location. It is a DR exercise, and the test will be deemed successful if the users that authenticate against the production DC can do so against the backup DC. I don't know much about this, and thus don't know why it was assigned to me, so any assistance will be greatly appreciated. What I had in mind was:

1) Taking a snapshot of the production server and then restoring it on the other server. But I was told that this is not the suggested way of doing it, and I was not told why. Is that right? If a snapshot is to be taken, what is the best way to do it? Any ideas on where I can get the documentation for this?

2) Another way would be to build the test DC from the ground up, match it to the specs of the production DC, and then perform the DR test. Is this a better option? What would be needed to perform such an activity? Where can I find documentation on that?

I apologise for the length of this query. As I said, I am quite a novice and hope to get a better resolution. Any assistance will be greatly appreciated. Regards,

    Read the article

  • Now Available: Visual Studio 2010 Release Candidate Virtual Machines with Sample Data and Hands-on-L

    - by John Alexander
From a message from Brian Keller:

“Back in December we posted a set of virtual machines pre-configured with Visual Studio 2010 Beta 2, Visual Studio Team Foundation Server 2010 Beta 2, and 7 hands-on-labs. I am pleased to announce that today we have shipped an updated virtual machine using the Visual Studio 2010 Release Candidate bits, a brand new sample application, and 9 hands-on-labs. This VM is customer-ready and includes everything you need to learn and/or deliver demonstrations of many of my favorite application lifecycle management (ALM) capabilities in Visual Studio 2010. This VM is available in the virtualization platform of your choice (Hyper-V, Virtual PC 2007 SP1, and Windows [7] Virtual PC). Hyper-V is highly recommended because of the performance benefits and snapshotting capabilities.

Tailspin Toys

The sample application we are using in this virtual machine is a simple ASP.NET MVC 2 storefront called Tailspin Toys. Tailspin Toys sells model airplanes and relies on the application lifecycle management capabilities of Visual Studio 2010 to help them build, test, and maintain their storefront. Major kudos go to Dan Massey for building out this great application for us.

Hands-on-Labs / Demo Scripts

The 9 hands-on-labs / demo scripts which accompany this virtual machine cover several of the core capabilities of conducting application lifecycle management with Visual Studio 2010. Each document can be used by an individual in a hands-on-lab capacity, to learn how to perform a given set of tasks, or used by a presenter to deliver a demonstration or classroom-style training. Unlike the beta 2 release, 100% of these labs target Tailspin Toys to help ensure a consistent storytelling experience.

Software quality:
  • Authoring and Running Manual Tests using Microsoft Test Manager 2010
  • Introduction to Test Case Management with Microsoft Test Manager 2010
  • Introduction to Coded UI Tests with Visual Studio 2010 Ultimate
  • Debugging with IntelliTrace using Visual Studio 2010 Ultimate

Software architecture:
  • Code Discovery using the architecture tools in Visual Studio 2010 Ultimate
  • Understanding Class Coupling with Visual Studio 2010 Ultimate
  • Using the Architecture Explorer in Visual Studio 2010 Ultimate to Analyze Your Code

Software Configuration Management:
  • Planning your Projects with Team Foundation Server 2010
  • Branching and Merging Visualization with Team Foundation Server 2010”

Check out Brian's post for more info, including download instructions…

    Read the article

  • Debian Squeeze and exim4: cannot send mail

    - by Fernando Campos
Hello guys, I got this error after installing and configuring the exim4-daemon-light and mailutils packages on Debian Squeeze. This setup is meant to send automatic messages from websites, like email confirmations and such. Configuration after package install:

dpkg-reconfigure exim4-config

You'll be presented with a welcome screen, followed by a screen asking what type of mail delivery you'd like to support. Choose the option for "internet site" and select "Ok" to continue. After the remaining configuration screens you can test mail with:

echo "test message" | mail -s "test message" [email protected]

Here is the response:

root@server:/etc# echo "test message" | mail -s "test message" [email protected]
2011-03-02 20:34:59 1PuxRT-0001Aj-T9 Cannot open main log file "/var/log/exim4/mainlog": Permission denied: euid=101 egid=103
2011-03-02 20:34:59 1PuxRT-0001Aj-T9 <= root@debian U=root P=local S=331
2011-03-02 20:34:59 1PuxRT-0001Aj-T9 Cannot open main log file "/var/log/exim4/mainlog": Permission denied: euid=101 egid=103
exim: could not open panic log - aborting: see message(s) above
Can't send mail: sendmail process failed with error code 1

There is no /var/log/exim4 directory on my server. I tried to create it, but it didn't work. Please, can someone help me?

Best regards, Fernando
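A hedged sketch of one likely fix, assuming the stock Debian packaging where the daemon runs as the Debian-exim user (euid 101 in the log usually maps to it on a default install):

sudo mkdir -p /var/log/exim4
sudo chown Debian-exim:adm /var/log/exim4   # group adm is the Debian default for logs
sudo chmod 750 /var/log/exim4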

    Read the article

  • Testing Routes in ASP.NET MVC with MvcContrib

    - by Guilherme Cardoso
I've decided to write about unit testing in the next few weeks. If we decide to develop with the Test-Driven Development pattern, it's important not to forget the routes. This article shows how to test them. In the SetUp method, I'm importing my routes from the RegisterRoutes method in the Global.asax of the Project.Web project created by default. I'm using ShouldMapTo() from MvcContrib: http://mvccontrib.codeplex.com/ The controller is specified in the ShouldMapTo() signature, and we use lambda expressions for the action and the parameters that are passed to that controller.

using System.Web.Routing;
using MvcContrib.TestHelper;
using NUnit.Framework;

[SetUp]
public void Setup()
{
    Project.Web.MvcApplication.RegisterRoutes(RouteTable.Routes);
}

[Test]
public void Should_Route_HomeController()
{
    "~/Home".ShouldMapTo<HomeController>(action => action.Index());
}

[Test]
public void Should_Route_EventsController()
{
    "~/Events".ShouldMapTo<EventsController>(action => action.Index());

    // In this example, 44 is the Id for my Event and "Concert-DevaMatri-22-January-" is its title slug
    "~/Events/View/44/Concert-DevaMatri-22-January-"
        .ShouldMapTo<EventsController>(action => action.Read(44, "Concert-DevaMatri-22-January-"));
}

[TearDown]
public void teardown()
{
    RouteTable.Routes.Clear();
}

    Read the article

< Previous Page | 177 178 179 180 181 182 183 184 185 186 187 188  | Next Page >