Search Results

Search found 11841 results on 474 pages for 'virtual hosts'.


  • NHibernate: discriminator without common base class?

    - by joniba
    Is it possible to map two classes to the same property without them sharing a common base class? For example, a situation like this: class Rule { public virtual int SequenceNumber { get; set; } public virtual ICondition Condition { get; set; } } interface ICondition { } class ExpressionCondition : ICondition { public virtual string Expression { get; set; } } class ThresholdCondition : ICondition { public virtual int Threshold { get; set; } } I also cannot add some empty abstract class that both conditions inherit from because the two ICondition implementations exist in different domains that are not allowed to reference each other. (Please no responses telling me that this situation should not occur in the first place - I'm aware of it and it doesn't help me.)
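    One technique worth noting for this situation is NHibernate's <any> mapping, which stores a type discriminator plus a row id so the association does not need a shared base class. Below is a hedged sketch assuming Fluent NHibernate is in use; the column names, meta values and the use of SequenceNumber as the identifier are illustrative assumptions, not part of the original question.

    using FluentNHibernate.Mapping;

    // A hedged sketch: ReferencesAny maps Rule.Condition to either ICondition
    // implementation by persisting a discriminator column and an id column.
    public class RuleMap : ClassMap<Rule>
    {
        public RuleMap()
        {
            Id(x => x.SequenceNumber);                  // assumed identifier for the example
            ReferencesAny(x => x.Condition)
                .EntityTypeColumn("ConditionType")      // hypothetical discriminator column
                .EntityIdentifierColumn("ConditionId")  // hypothetical id column
                .IdentityType<int>()
                .AddMetaValue<ExpressionCondition>("EXPR")
                .AddMetaValue<ThresholdCondition>("THRESHOLD");
        }
    }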

    Read the article

  • Creating a custom module for Orchard

    - by Moran Monovich
    I created a custom module using this guide from Orchard documentation, but for some reason I can't see the fields in the content type when I want to create a new one. this is my model: public class CustomerPartRecord : ContentPartRecord { public virtual string FirstName { get; set; } public virtual string LastName { get; set; } public virtual int PhoneNumber { get; set; } public virtual string Address { get; set; } public virtual string Profession { get; set; } public virtual string ProDescription { get; set; } public virtual int Hours { get; set; } } public class CustomerPart : ContentPart<CustomerPartRecord> { [Required(ErrorMessage="you must enter your first name")] [StringLength(200)] public string FirstName { get { return Record.FirstName; } set { Record.FirstName = value; } } [Required(ErrorMessage = "you must enter your last name")] [StringLength(200)] public string LastName { get { return Record.LastName; } set { Record.LastName = value; } } [Required(ErrorMessage = "you must enter your phone number")] [DataType(DataType.PhoneNumber)] public int PhoneNumber { get { return Record.PhoneNumber; } set { Record.PhoneNumber = value; } } [StringLength(200)] public string Address { get { return Record.Address; } set { Record.Address = value; } } [Required(ErrorMessage = "you must enter your profession")] [StringLength(200)] public string Profession { get { return Record.Profession; } set { Record.Profession = value; } } [StringLength(500)] public string ProDescription { get { return Record.ProDescription; } set { Record.ProDescription = value; } } [Required(ErrorMessage = "you must enter your hours")] public int Hours { get { return Record.Hours; } set { Record.Hours = value; } } } this is the Handler: class CustomerHandler : ContentHandler { public CustomerHandler(IRepository<CustomerPartRecord> repository) { Filters.Add(StorageFilter.For(repository)); } } the Driver: class CustomerDriver : ContentPartDriver<CustomerPart> { protected override DriverResult Display(CustomerPart part, string displayType, dynamic shapeHelper) { return ContentShape("Parts_Customer", () => shapeHelper.Parts_BankCustomer( FirstName: part.FirstName, LastName: part.LastName, PhoneNumber: part.PhoneNumber, Address: part.Address, Profession: part.Profession, ProDescription: part.ProDescription, Hours: part.Hours)); } //GET protected override DriverResult Editor(CustomerPart part, dynamic shapeHelper) { return ContentShape("Parts_Customer", () => shapeHelper.EditorTemplate( TemplateName:"Parts/Customer", Model: part, Prefix: Prefix)); } //POST protected override DriverResult Editor(CustomerPart part, IUpdateModel updater, dynamic shapeHelper) { updater.TryUpdateModel(part, Prefix, null, null); return Editor(part, shapeHelper); } the migration: public class Migrations : DataMigrationImpl { public int Create() { // Creating table CustomerPartRecord SchemaBuilder.CreateTable("CustomerPartRecord", table => table .ContentPartRecord() .Column("FirstName", DbType.String) .Column("LastName", DbType.String) .Column("PhoneNumber", DbType.Int32) .Column("Address", DbType.String) .Column("Profession", DbType.String) .Column("ProDescription", DbType.String) .Column("Hours", DbType.Int32) ); return 1; } public int UpdateFrom1() { ContentDefinitionManager.AlterPartDefinition("CustomerPart", builder => builder.Attachable()); return 2; } public int UpdateFrom2() { ContentDefinitionManager.AlterTypeDefinition("Customer", cfg => cfg .WithPart("CommonPart") .WithPart("RoutePart") .WithPart("BodyPart") .WithPart("CustomerPart") 
.WithPart("CommentsPart") .WithPart("TagsPart") .WithPart("LocalizationPart") .Creatable() .Indexed()); return 3; } } Can someone please tell me if I am missing something?

    Read the article

  • VS2010 final only links the project on "rebuild all", not on "build changed"

    - by Sam
    I've just migrated a solution containing C++ and C# projects from VS2008 to VS2010 and ran into a strange problem. When I select "rebuild all", everything compiles and links as I would expect it to. Then I change some C++ source file (just add a space) and build the project, and I get several thousand linking errors like these: GDlgPackerListe.obj : error LNK2028: unresolved token (0A0000C7) ""public: bool __thiscall LList::Add(class LBString const &)" (?Add@LList@@$$FQAE_NABVLBString@@@Z)", referenced in function ""public: virtual void __thiscall LRcPackerListe::HookRunReport(class LFortschritt &)" (?HookRunReport@LRcPackerListe@@$$FUAEXAAVLFortschritt@@@Z)". Db_Lieferschein2.obj : error LNK2020: unresolved token (0A0000E6) "public: bool __thiscall LList::Add(class LBString const &)" (?Add@LList@@$$FQAE_NABVLBString@@@Z). bmed.obj : error LNK2028: unresolved token (0A00014D) ""public: bool __thiscall LList::Add(class LBString const &)" (?Add@LList@@$$FQAE_NABVLBString@@@Z)", referenced in function ""public: virtual long __thiscall MENUKB::Methode(long,long)" (?Methode@MENUKB@@$$FUAEJJJ@Z)". GDlgPackerListe.obj : error LNK2028: unresolved token (0A0000C9) ""public: void __thiscall LList::Sort(void)" (?Sort@LList@@$$FQAEXXZ)", referenced in function ""public: virtual void __thiscall LRcPackerListe::HookRunReport(class LFortschritt &)" (?HookRunReport@LRcPackerListe@@$$FUAEXAAVLFortschritt@@@Z)". Dlg_Gutschrift.obj : error LNK2020: unresolved token (0A000128) "public: virtual __thiscall LBaseType::~LBaseType(void)" (??1LBaseType@@$$FUAE@XZ). Module_Damals.lib(svSuchAltLink.obj) : error LNK2001: unresolved external symbol ""public: __thiscall SView::SView(void)" (??0SView@@QAE@XZ)". Module_Damals.lib(svShowEMF.obj) : error LNK2001: unresolved external symbol ""public: virtual void __thiscall SView::HookValueChanged(unsigned __int64)" (?HookValueChanged@SView@@UAEX_K@Z)". When I hit "rebuild all" it recompiles and links without any errors or even warnings and produces a working exe. I'm using Visual Studio 2010 final (German edition). What's going on here? Or, more importantly: how do I get the linker to work correctly?

    Read the article

  • How to query across many-to-many association in NHibernate?

    - by Splash
    I have two entities, Post and Tag. The Post entity has a collection of Tags which represents a many-to-many join between the two (that is, each post can have any number of tags and each tag can be associated with any number of posts). I am trying to retrieve all Posts which have a given tag. However, I seem to be unable to get this query right. I essentially want something which means the same as the following pseudo-HQL: from Posts p where p.Tags contains (from Tags t where t.Name = :tagName) order by p.DateTime The only thing I've found which even approaches this is a post by Ayende. However, his approach requires the entity on the other side (in my case, Tag) to have a collection showing the other end of the many-to-many. I don't have this and don't really wish to have it. I find it hard to believe this can't be done. What am I missing? My entities & mappings look like this (simplified): public class Post { public virtual int Id { get; set; } public virtual string Title { get; set; } private IList<Tag> tags = new List<Tag>(); public virtual IEnumerable<Tag> Tags { get { return tags; } } public virtual void AddTag(Tag tag) { this.tags.Add(tag); } } public class PostMap : ClassMap<Post> { public PostMap() { Id(x => x.Id).GeneratedBy.HiLo("99"); Map(x => x.Title); HasManyToMany(x => x.Tags); } } // ---- public class Tag { public virtual int Id { get; set; } public virtual string Name { get; set; } } public class TagMap : ClassMap<Tag> { public TagMap () { Id(x => x.Id).GeneratedBy.HiLo("99"); Map(x => x.Name).Unique(); } }
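    For reference, HQL can join the Tags collection directly from the Post side, so no inverse collection on Tag is required. Below is a hedged sketch; it assumes an open ISession named session and that Post also exposes the DateTime property used in the pseudo-HQL above.

    // Hedged sketch: join the many-to-many collection in HQL and filter on Tag.Name.
    string tagName = "nhibernate";   // example value

    var posts = session
        .CreateQuery(
            "select p from Post p " +
            "join p.Tags t " +
            "where t.Name = :tagName " +
            "order by p.DateTime")
        .SetParameter("tagName", tagName)
        .List<Post>();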

    Read the article

  • NHibernate - Saving simple parent-child relationship generates unnecessary selects with assigned id

    - by Alice
    Entities: public class Parent { virtual public long Id { get; set; } virtual public string Description { get; set; } virtual public ICollection<Child> Children { get; set; } } public class Child { virtual public long Id { get; set; } virtual public string Description { get; set; } virtual public Parent Parent { get; set; } } Mappings: public class ParentMap : ClassMap<Parent> { public ParentMap() { Id(x => x.Id).GeneratedBy.Assigned(); Map(x => x.Description); HasMany(x => x.Children) .AsSet() .Inverse() .Cascade.AllDeleteOrphan(); } } public class ChildMap : ClassMap<Child> { public ChildMap() { Id(x => x.Id).GeneratedBy.Assigned(); Map(x => x.Description); References(x => x.Parent) .Not.Nullable() .Cascade.All(); } } and using (var session = sessionFactory.OpenSession()) using (var transaction = session.BeginTransaction()) { var parent = new Parent { Id = 1 }; parent.Children = new HashSet<Child>(); var child1 = new Child { Id = 2, Parent = parent }; var child2 = new Child { Id = 3, Parent = parent }; parent.Children.Add(child1); parent.Children.Add(child2); session.Save(parent); transaction.Commit(); } This code generates the following SQL: NHibernate: SELECT child_.Id, child_.Description as Descript2_0_, child_.Parent_id as Parent3_0_ FROM [Child] child_ WHERE child_.Id=@p0;@p0 = 2 [Type: Int64 (0)] NHibernate: SELECT child_.Id, child_.Description as Descript2_0_, child_.Parent_id as Parent3_0_ FROM [Child] child_ WHERE child_.Id=@p0;@p0 = 3 [Type: Int64 (0)] NHibernate: INSERT INTO [Parent] (Description, Id) VALUES (@p0, @p1);@p0 = NULL [Type: String (4000)], @p1 = 1 [Type: Int64 (0)] NHibernate: INSERT INTO [Child] (Description, Parent_id, Id) VALUES (@p0, @p1, @p2);@p0 = NULL [Type: String (4000)], @p1 = 1 [Type: Int64 (0)], @p2 = 2 [Type: Int64 (0)] NHibernate: INSERT INTO [Child] (Description, Parent_id, Id) VALUES (@p0, @p1, @p2);@p0 = NULL [Type: String (4000)], @p1 = 1 [Type: Int64 (0)], @p2 = 3 [Type: Int64 (0)] Why are these two SELECTs generated, and how can I remove them?
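    For context: with assigned identifiers NHibernate cannot tell from the id alone whether a cascaded child is transient, so it issues a SELECT per child before deciding between INSERT and UPDATE. One common way around this is to map a version column, so transience can be decided from the unsaved version value instead. A hedged sketch follows; the Version property is an illustrative addition, not part of the original entities.

    using FluentNHibernate.Mapping;

    // Hedged sketch: a version column lets NHibernate treat version == 0 as "new",
    // so the extra SELECT per child is no longer needed.
    public class Child
    {
        virtual public long Id { get; set; }
        virtual public int Version { get; set; }   // hypothetical version column
        virtual public string Description { get; set; }
        virtual public Parent Parent { get; set; }
    }

    public class ChildMap : ClassMap<Child>
    {
        public ChildMap()
        {
            Id(x => x.Id).GeneratedBy.Assigned();
            Version(x => x.Version);               // unsaved-value of 0 marks a new row
            Map(x => x.Description);
            References(x => x.Parent).Not.Nullable();
        }
    }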

    Read the article

  • How to mimic polymorphism in classes with template methods (c++)?

    - by davide
    In the problem I am facing I need something which works more or less like a polymorphic class, but which would allow for virtual template methods. The point is, I would like to create an array of subproblems, each one being solved by a different technique implemented in a different class but holding the same interface, then pass a set of parameters (which are functions/functors - this is where templates jump in) to all the subproblems and get back a solution. If the parameters were, e.g., ints, this would be something like: struct subproblem { ... virtual void solve(double* solution, double parameter) = 0; }; struct subproblem0 : public subproblem { ... virtual void solve(double* solution, double parameter) {...} }; struct subproblem1 : public subproblem { ... virtual void solve(double* solution, double parameter) {...} }; int main() { subproblem* problem[2]; problem[0] = new subproblem0(); problem[1] = new subproblem1(); double argument0(0), argument1(1), sol0[2], sol1[2]; for (unsigned int i(0); i < 2; ++i) { problem[i]->solve( &(sol0[i]), argument0 ); problem[i]->solve( &(sol1[i]), argument1 ); } return 0; } But the problem is, I need the arguments to be something like Arg<T1,T2> argument0(f1,f2) and thus the solve method to be something along the lines of template<class T1, class T2> void solve(double* solution, Arg<T1,T2> parameter), which obviously can't be declared virtual (so it can't be called through a pointer to the base class)... Now I'm pretty stuck and don't know how to proceed...

    Read the article

  • When mocking a class with Moq, how can I CallBase for just specific methods?

    - by Daryn
    I really appreciate Moq's Loose mocking behaviour that returns default values when no expectations are set. It's convenient and saves me code, and it also acts as a safety measure: dependencies won't get unintentionally called during the unit test (as long as they are virtual). However, I'm confused about how to keep these benefits when the method under test happens to be virtual. In this case I do want to call the real code for that one method, while still having the rest of the class loosely mocked. All I have found in my searching is that I could set mock.CallBase = true to ensure that the method gets called. However, that affects the whole class. I don't want to do that because it puts me in a dilemma about all the other properties and methods in the class that hide call dependencies: if CallBase is true then I have to either Setup stubs for all of the properties and methods that hide dependencies -- Even though my test doesn't think it needs to care about those dependencies, or Hope that I don't forget to Setup any stubs (and that no new dependencies get added to the code in the future) -- Risk unit tests hitting a real dependency. Q: With Moq, is there any way to test a virtual method, when I mocked the class to stub just a few dependencies? I.e. Without resorting to CallBase=true and having to stub all of the dependencies? Example code to illustrate (uses MSTest, InternalsVisibleTo DynamicProxyGenAssembly2) In the following example, TestNonVirtualMethod passes, but TestVirtualMethod fails - returns null. public class Foo { public string NonVirtualMethod() { return GetDependencyA(); } public virtual string VirtualMethod() { return GetDependencyA();} internal virtual string GetDependencyA() { return "! Hit REAL Dependency A !"; } // [... Possibly many other dependencies ...] internal virtual string GetDependencyN() { return "! Hit REAL Dependency N !"; } } [TestClass] public class UnitTest1 { [TestMethod] public void TestNonVirtualMethod() { var mockFoo = new Mock<Foo>(); mockFoo.Setup(m => m.GetDependencyA()).Returns(expectedResultString); string result = mockFoo.Object.NonVirtualMethod(); Assert.AreEqual(expectedResultString, result); } [TestMethod] public void TestVirtualMethod() // Fails { var mockFoo = new Mock<Foo>(); mockFoo.Setup(m => m.GetDependencyA()).Returns(expectedResultString); // (I don't want to setup GetDependencyB ... GetDependencyN here) string result = mockFoo.Object.VirtualMethod(); Assert.AreEqual(expectedResultString, result); } string expectedResultString = "Hit mock dependency A - OK"; }
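    One option, assuming a Moq version that supports per-setup CallBase(): keep the mock loose and opt just the method under test back into its real implementation. A hedged sketch, meant to sit alongside the tests in UnitTest1 above.

    // A hedged sketch: the mock stays loose (CallBase remains false), and only
    // VirtualMethod is configured to call the real base implementation.
    [TestMethod]
    public void TestVirtualMethod_PerSetupCallBase()
    {
        var mockFoo = new Mock<Foo>();
        mockFoo.Setup(m => m.GetDependencyA()).Returns(expectedResultString);
        mockFoo.Setup(m => m.VirtualMethod()).CallBase();   // opt just this method back in

        string result = mockFoo.Object.VirtualMethod();

        Assert.AreEqual(expectedResultString, result);
    }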

    Read the article

  • How to map it? HasOne x References

    - by Felipe
    Hi everyone, I need to create a one-to-one mapping and I have some doubts. I have these classes: public class DocumentType { public virtual int Id { get; set; } /* other properties for DocumentType */ public virtual DocumentConfiguration Configuration { get; set; } public DocumentType() { } } public class DocumentConfiguration { public virtual int Id { get; set; } /* some other properties for configuration */ public virtual DocumentType Type { get; set; } public DocumentConfiguration() { } } A DocumentType object has only one DocumentConfiguration; it is not inheritance, just a unique one-to-one association used to separate properties. What should my mappings look like in this case? Should I use References or HasOne? Could someone give an example? When I load a DocumentType object I'd like to automatically load the Configuration property. Thanks a lot guys! Cheers
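    One possible shape for this, shown as a hedged Fluent NHibernate sketch, is a primary-key one-to-one: HasOne on both sides, with DocumentConfiguration reusing the DocumentType id as its own key. Fetch.Join makes Configuration load together with the DocumentType. This is an assumption about the schema, not the only way to map it.

    using FluentNHibernate.Mapping;

    // Hedged sketch of a shared-primary-key one-to-one mapping.
    public class DocumentTypeMap : ClassMap<DocumentType>
    {
        public DocumentTypeMap()
        {
            Id(x => x.Id);
            HasOne(x => x.Configuration)
                .Cascade.All()
                .Fetch.Join();      // eager-load Configuration with the DocumentType
        }
    }

    public class DocumentConfigurationMap : ClassMap<DocumentConfiguration>
    {
        public DocumentConfigurationMap()
        {
            // the configuration row takes its id from the associated DocumentType
            Id(x => x.Id).GeneratedBy.Foreign("Type");
            HasOne(x => x.Type).Constrained();
        }
    }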

    Read the article

  • Special folders & functionality in Windows [closed]

    - by lloydsparkes
    I am using the new Virtual PC beta on Windows 7 and noticed that the virtual machines have a special directory: once you are in it, the VMs are shown with custom headings ("Machine status", "Memory", etc.) and custom toolbar buttons ("Create virtual machine"). How can I do the same thing with my own folders? I can't seem to find any documentation for this.

    Read the article

  • How can I know when QProcess wants to read input?

    - by mpcabd
    I'm implementing a compiler for my Compilers class, using Qt & C++. After I have generated the machine code from the source code, I execute the virtual machine that will run it. I'm facing a problem here: I'm using the readyRead() signal to get output from the virtual machine, but how can I know that the virtual machine wants to read data from the user? I want to show the user an input dialog each time the machine asks for input.

    Read the article

  • Determine an object's class returned by a factory method (Error: function does not take 1 arguments)

    - by tzippy
    I have a factory method that either returns an object of baseclass or one of derivedclass (a class derived from baseclass). The derived class has a method virtual void foo(int x) that takes one argument; baseclass, however, has virtual void foo() without an argument. In my code, the factory method returns a pointer, bar, that definitely points to an object of class derivedclass. However, since this is only known at runtime, I get a compiler error saying that foo() does not take an argument. Can I cast this pointer to a pointer of type derivedclass? std::auto_ptr<baseclass> bar = classfactory::CreateBar(); //returns object of class derivedclass bar->foo(5); class baseclass { public: virtual void foo(); }; class derivedclass : public baseclass { public: virtual void foo(int x); };

    Read the article

  • c# .net MVC4 Model to represent table or form

    - by Matthew Chambers
    Hello, I am a little confused with regard to models in MVC 4 and thought someone may be able to point me in the right direction. This would be most appreciated. For example, if I have a table that has the following fields: [StringLength(100, ErrorMessage = "The {0} must be at least {2} characters long.", MinimumLength = 5)] public string UserName { get; set; } [Required(ErrorMessage="Email Address is Required")] [StringLength(15, ErrorMessage = "Email Address must be between {0} and {1} in size",MinimumLength = 5 )] [DataType(DataType.EmailAddress)] [Display(Name="Email")] public string Email { get; set; } [MaxLength(25)] [Display(Name="Mobile Telephone Number")] public string Mobile {get;set;} [MaxLength(500)] [Display(Name="Headline")] public string Headline {get;set;} [Required] [StringLength(200)] [Display(Name = "First Name")] public string FirstName {get;set;} [Required] [StringLength(200)] [Display(Name="Surname")] public string Surname { get; set;} public virtual int? DayOfBirthId { get; set; } public virtual DayOfBirth DayOfBirth { get; set; } public virtual int? MonthOfBirthId { get; set; } public virtual MonthOfBirth MonthOfBirth { get; set; } public virtual int? YearOfBirthId { get; set; } public virtual YearOfBirth YearOfBirth{get;set;} This is my user profile table in the database. However, I would like a form that the user registers to the site with. When they first register I do not need all the details such as telephone; all I really need is their username, email address and password. Do I create another model for this? Or do I have one model and, in the controller, set the fields that are not required on registration to null or an empty string? I also have validation, so would this fire for data that has not been entered on the form? My question is ultimately: should all forms represent models, and should the database be redesigned to meet this requirement? Or should the controller set the values that are not required? Or should there be another model created that represents the form and maps to this table? I am a little confused on this and clarification from anyone would be most appreciated.
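    A common pattern for this situation is a separate view model per form, carrying only the fields that screen needs, which the controller then maps onto the profile entity. A hedged sketch follows; the names are illustrative and not taken from the question. The table-backed model keeps its own annotations for the later profile-edit form, so neither form is forced to mirror the whole table.

    using System.ComponentModel.DataAnnotations;

    // Hedged sketch: a dedicated view model for the registration form only.
    public class RegisterViewModel
    {
        [Required]
        [StringLength(100, MinimumLength = 5)]
        public string UserName { get; set; }

        [Required(ErrorMessage = "Email Address is Required")]
        [DataType(DataType.EmailAddress)]
        public string Email { get; set; }

        [Required]
        [DataType(DataType.Password)]
        public string Password { get; set; }
    }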

    Read the article

  • Using RIA DomainServices with ASP.NET and MVC 2

    - by Bobby Diaz
    Recently, I started working on a new ASP.NET MVC 2 project and I wanted to reuse the data access (LINQ to SQL) and business logic methods (WCF RIA Services) that had been developed for a previous project that used Silverlight for the front-end.  I figured that I would be able to instantiate the various DomainService classes from within my controller’s action methods, because after all, the code for those services didn’t look very complicated.  WRONG!  I didn’t realize at first that some of the functionality is handled automatically by the framework when the domain services are hosted as WCF services.  After some initial searching, I came across an invaluable post by Joe McBride, which described how to get RIA Service .svc files to work in an MVC 2 Web Application, and another by Brad Abrams.  Unfortunately, Brad’s solution was for an earlier preview release of RIA Services and no longer works with the version that I am running (PDC Preview). I have not tried the RC version of WCF RIA Services, so I am not sure if any of the issues I am having have been resolved, but I wanted to come up with a way to reuse the shared libraries so I wouldn’t have to write a non-RIA version that basically did the same thing.  The classes I came up with work with the scenarios I have encountered so far, but I wanted to go ahead and post the code in case someone else is having the same trouble I had.  Hopefully this will save you a few headaches! 1. Querying When I first tried to use a DomainService class to perform a query inside one of my controller’s action methods, I got an error stating that “This DomainService has not been initialized.”  To solve this issue, I created an extension method for all DomainServices that creates the required DomainServiceContext and passes it to the service’s Initialize() method.  Here is the code for the extension method; notice that I am creating a sort of mock HttpContext for those cases when the service is running outside of IIS, such as during unit testing!     public static class ServiceExtensions     {         /// <summary>         /// Initializes the domain service by creating a new <see cref="DomainServiceContext"/>         /// and calling the base DomainService.Initialize(DomainServiceContext) method.         /// </summary>         /// <typeparam name="TService">The type of the service.</typeparam>         /// <param name="service">The service.</param>         /// <returns></returns>         public static TService Initialize<TService>(this TService service)             where TService : DomainService         {             var context = CreateDomainServiceContext();             service.Initialize(context);             return service;         }           private static DomainServiceContext CreateDomainServiceContext()         {             var provider = new ServiceProvider(new HttpContextWrapper(GetHttpContext()));             return new DomainServiceContext(provider, DomainOperationType.Query);         }           private static HttpContext GetHttpContext()         {             var context = HttpContext.Current;   #if DEBUG             // create a mock HttpContext to use during unit testing...             
if ( context == null )             {                 var writer = new StringWriter();                 var request = new SimpleWorkerRequest("/", "/",                     String.Empty, String.Empty, writer);                   context = new HttpContext(request)                 {                     User = new GenericPrincipal(new GenericIdentity("debug"), null)                 };             } #endif               return context;         }     }   With that in place, I can use it almost as normally as my first attempt, except with a call to Initialize():     public ActionResult Index()     {         var service = new NorthwindService().Initialize();         var customers = service.GetCustomers();           return View(customers);     } 2. Insert / Update / Delete Once I got the records showing up, I was trying to insert new records or update existing data when I ran into the next issue.  I say issue because I wasn’t getting any kind of error, which made it a little difficult to track down.  But once I realized that that the DataContext.SubmitChanges() method gets called automatically at the end of each domain service submit operation, I could start working on a way to mimic the behavior of a hosted domain service.  What I came up with, was a base class called LinqToSqlRepository<T> that basically sits between your implementation and the default LinqToSqlDomainService<T> class.     [EnableClientAccess()]     public class NorthwindService : LinqToSqlRepository<NorthwindDataContext>     {         public IQueryable<Customer> GetCustomers()         {             return this.DataContext.Customers;         }           public void InsertCustomer(Customer customer)         {             this.DataContext.Customers.InsertOnSubmit(customer);         }           public void UpdateCustomer(Customer currentCustomer)         {             this.DataContext.Customers.TryAttach(currentCustomer,                 this.ChangeSet.GetOriginal(currentCustomer));         }           public void DeleteCustomer(Customer customer)         {             this.DataContext.Customers.TryAttach(customer);             this.DataContext.Customers.DeleteOnSubmit(customer);         }     } Notice the new base class name (just change LinqToSqlDomainService to LinqToSqlRepository).  I also added a couple of DataContext (for Table<T>) extension methods called TryAttach that will check to see if the supplied entity is already attached before attempting to attach it, which would cause an error! 3. LinqToSqlRepository<T> Below is the code for the LinqToSqlRepository class.  The comments are pretty self explanatory, but be aware of the [IgnoreOperation] attributes on the generic repository methods, which ensures that they will be ignored by the code generator and not available in the Silverlight client application.     /// <summary>     /// Provides generic repository methods on top of the standard     /// <see cref="LinqToSqlDomainService&lt;TContext&gt;"/> functionality.     /// </summary>     /// <typeparam name="TContext">The type of the context.</typeparam>     public abstract class LinqToSqlRepository<TContext> : LinqToSqlDomainService<TContext>         where TContext : System.Data.Linq.DataContext, new()     {         /// <summary>         /// Retrieves an instance of an entity using it's unique identifier.         
/// </summary>         /// <typeparam name="TEntity">The type of the entity.</typeparam>         /// <param name="keyValues">The key values.</param>         /// <returns></returns>         [IgnoreOperation]         public virtual TEntity GetById<TEntity>(params object[] keyValues) where TEntity : class         {             var table = this.DataContext.GetTable<TEntity>();             var mapping = this.DataContext.Mapping.GetTable(typeof(TEntity));               var keys = mapping.RowType.IdentityMembers                 .Select((m, i) => m.Name + " = @" + i)                 .ToArray();               return table.Where(String.Join(" && ", keys), keyValues).FirstOrDefault();         }           /// <summary>         /// Creates a new query that can be executed to retrieve a collection         /// of entities from the <see cref="DataContext"/>.         /// </summary>         /// <typeparam name="TEntity">The type of the entity.</typeparam>         /// <returns></returns>         [IgnoreOperation]         public virtual IQueryable<TEntity> GetEntityQuery<TEntity>() where TEntity : class         {             return this.DataContext.GetTable<TEntity>();         }           /// <summary>         /// Inserts the specified entity.         /// </summary>         /// <typeparam name="TEntity">The type of the entity.</typeparam>         /// <param name="entity">The entity.</param>         /// <returns></returns>         [IgnoreOperation]         public virtual bool Insert<TEntity>(TEntity entity) where TEntity : class         {             //var table = this.DataContext.GetTable<TEntity>();             //table.InsertOnSubmit(entity);               return this.Submit(entity, null, DomainOperation.Insert);         }           /// <summary>         /// Updates the specified entity.         /// </summary>         /// <typeparam name="TEntity">The type of the entity.</typeparam>         /// <param name="entity">The entity.</param>         /// <returns></returns>         [IgnoreOperation]         public virtual bool Update<TEntity>(TEntity entity) where TEntity : class         {             return this.Update(entity, null);         }           /// <summary>         /// Updates the specified entity.         /// </summary>         /// <typeparam name="TEntity">The type of the entity.</typeparam>         /// <param name="entity">The entity.</param>         /// <param name="original">The original.</param>         /// <returns></returns>         [IgnoreOperation]         public virtual bool Update<TEntity>(TEntity entity, TEntity original)             where TEntity : class         {             if ( original == null )             {                 original = GetOriginal(entity);             }               var table = this.DataContext.GetTable<TEntity>();             table.TryAttach(entity, original);               return this.Submit(entity, original, DomainOperation.Update);         }           /// <summary>         /// Deletes the specified entity.         
/// </summary>         /// <typeparam name="TEntity">The type of the entity.</typeparam>         /// <param name="entity">The entity.</param>         /// <returns></returns>         [IgnoreOperation]         public virtual bool Delete<TEntity>(TEntity entity) where TEntity : class         {             //var table = this.DataContext.GetTable<TEntity>();             //table.TryAttach(entity);             //table.DeleteOnSubmit(entity);               return this.Submit(entity, null, DomainOperation.Delete);         }           protected virtual bool Submit(Object entity, Object original, DomainOperation operation)         {             var entry = new ChangeSetEntry(0, entity, original, operation);             var changes = new ChangeSet(new ChangeSetEntry[] { entry });             return base.Submit(changes);         }           private TEntity GetOriginal<TEntity>(TEntity entity) where TEntity : class         {             var context = CreateDataContext();             var table = context.GetTable<TEntity>();             return table.FirstOrDefault(e => e == entity);         }     } 4. Conclusion So there you have it, a fully functional Repository implementation for your RIA Domain Services that can be consumed by your ASP.NET and MVC applications.  I have uploaded the source code along with unit tests and a sample web application that queries the Customers table from inside a Controller, as well as a Silverlight usage example. As always, I welcome any comments or suggestions on the approach I have taken.  If there is enough interest, I plan on contacting Colin Blair or maybe even the man himself, Brad Abrams, to see if this is something worthy of inclusion in the WCF RIA Services Contrib project.  What do you think? Enjoy!
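    The TryAttach helpers mentioned above are not included in the post; here is a hedged sketch of what they might look like for LINQ to SQL, using Table<TEntity>.GetOriginalEntityState to detect whether the context is already tracking the entity before attaching it.

    using System.Data.Linq;

    // Hedged sketch: attach only when the entity is not already tracked,
    // which avoids the "entity already attached" exception.
    public static class TableExtensions
    {
        public static void TryAttach<TEntity>(this Table<TEntity> table, TEntity entity)
            where TEntity : class
        {
            // GetOriginalEntityState returns null for entities the context is not tracking
            if (table.GetOriginalEntityState(entity) == null)
            {
                table.Attach(entity);
            }
        }

        public static void TryAttach<TEntity>(this Table<TEntity> table,
            TEntity entity, TEntity original)
            where TEntity : class
        {
            if (table.GetOriginalEntityState(entity) == null)
            {
                table.Attach(entity, original);
            }
        }
    }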

    Read the article

  • CodePlex Daily Summary for Sunday, June 13, 2010

    CodePlex Daily Summary for Sunday, June 13, 2010New ProjectsCurve Drawer: A Java project to explore the possibilities of drawing curves and knots.File Manager Redux: .NET version of the original File Manager.Hierachical Gantt Chart In SharePoint 2010: This solution makes it easier for shedule management. We will provide a wsp including a list definition and a custom gantt control. The list defi...Light Box Control for Asp.Net: Lightbox control for asp.net is used to display the thumbnail images. on clicking the thumbnail images the original images is displayed in the ligh...Linquify: Linquify is a Visual Studio 2008/2010 Addin and C# .NET business class / DTO generator for LINQ to SQL and the Entity Framework. It supports rapid ...Microsoft Dynamics CRM Query - T4 Template: A T4 Template that generates code that leverages LINQ to SQL and the Microsoft Dynamics CRM API to give a CRM data access solution. There is also ...Open Sound Control Library: A .NET Library for the Open Sound Control Protocol. This library makes it easy to use devices which communicate via OSC.Questionable Content Screensaver: A screensaver for the questionable content comic. It is written in C#, and uses the windows presentation foundation. See the comic at http://ww...Reflect: Reflect is an open source .NET reflection tool used for viewing metadata of .NET assemblies.runescape 602 client tools and server: runescape 602 client tools and serverSharpCrack: Hash cracker written in managed code.SilverCAT project: This is my Windows Azure study project. So far I did not find any value to share it to the public. If I find it out one day, I will add hereSilverStackAPI: My entry for the Stack Exchange API contest. A silverlight library and demo app.social bookmark control for asp.net: social bookmark control for asp.net, This control is used to bookmark the current asp.net page into popular social networking sites like facebook, ...SSIS Event Log Source: An SSIS 2005 Data Source component for loading Windows 2003/XP event logs (*.evt) into SQL Server 2005 for analysisUnOfficial ActiveWorlds Wrapper.Net: UnOfficial ActiveWorlds Wrapper .Net makes it easier for programmers to make active worlds bots. You'll no longer have to make it by yourself. It'...Using Named Pipe and self-elevation feature of Vista in a console application.: NPipeWithElevatedProc, make it easier for console application users, running programs with administrator privileges. The processing messages are al...Virtual Keyboard control for asp.net: Virtual Keyboard control for asp.net, This control is used to get the secured inputs through virtual keyboards.Visual Studio 2010 PowerShell Code Generator: Brings rich PowerShell functionalities into VS Templating. You can access the file system, the registry, and many other PowerShell features. You ca...WatchersNET.UrlShorty: This Module allows users to shorten a long URL and share it, this is a similiar service to web services like bit.ly, tinyurl.com and others. It als...New ReleasesBD File Hash: BD File Hash 1.0.5: The first public release of BD File Hash.Cassandraemon: Cassandraemon 0.6.0: Cassandraemon is LINQ Provider for Apache Cassandra. This is first release of Cassandraemon. Features You can Query by LINQ Support Regist, Del...Community Forums NNTP bridge: Community Forums NNTP Bridge V36: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. 
This release has ad...Curve Drawer: Alpha 1: Basic functionality is available to draw curves and clear them.CycleMania Starter Kit EAP - ASP.NET 4 Problem - Design - Solution: Cyclemania 09.32: see Source Code tab for recent change historyDEWD: DEWD for Umbraco v1.0 (beta-2): Beta release of the package. Functional feature set and fairly stable. Since the last release: Default values (support for dynamic values such as t...Fiddler TreeView Panel Extension: FiddlerTreeViewPanel 0.71: Added support for double-click to expand/collapse all child nodes. Keep selected node when losing focus from the TreeView. Please refer to http://...HKGolden Express: HKGoldenExpress (Build 201006130200): New features: User can reply to message with quoting others' message. Bug fix: Incorrect format of dynamically generated Sitemap XML. Improveme...Liekhus ADO.NET Entity Data Model XAF Extensions: Version 1.1.2: Latest patches and changes.Light Box Control for Asp.Net: Light Box Control for asp.net: Lightbox control for asp.net is used to display the thumbnail images. on clicking the thumbnail images the original images is displayed in the ligh...Lightweight Fluent Workflow: Objectflow 1.1.0.0: This release has support for multi-threaded operations. As this required significant changes to the fluent interface I have introduced breaking ch...Linquify: Linquify 1.6: Linquify 1.6 Includes: - Support for Entity Framework foreign keys - TransactionsLiteFx: LiteFx Alpha: Versão alpha do LiteFx.Microsoft Dynamics CRM Query - T4 Template: MS CRM Query T4 Template Version 0.5 Beta: Initial ReleaseNHibernate Membership Provider: NHibernate Membership Provider 0.9c: This is an updated source package with updated unit tests and some minor refactoring.NLog - Advanced .NET Logging: Nightly Build 2010.06.12.001: Changes since the last build:2010-06-12 10:42:41 Jarek Kowalski Added Width, Height, AutoScroll and MaxLines parameters to RichTextBoxTarget. 2010...Radical: Radical 1.0.1 (Vacuum): First drop with support for Windows Phone 7SharpCrack: SharpCrack v0.8: First release of SharpCrack. It does not support brute-force mode.social bookmark control for asp.net: social bookmark control for asp.net: social bookmark control for asp.net, This control is used to bookmark the current asp.net page into popular social networking sites like facebook, ...StardustExtensions: Simple hello: This is a very simple hello world script. Is just a basic script, is not packaged and works on IronPythonTiledLib: TiledLib 1.5: This release introduces breaking changes from 1.2. If you upgrade to this version from 1.2, you may have compiler errors and/or runtime differences...UDC indexes parser: UDC Parser RC1: Обновлена библиотека токенов, добавлены xml-doc комментарии, обновлен и исправлен код, обновлена логика лексера, обновлена грамматика парсера. Доба...UnOfficial ActiveWorlds Wrapper.Net: UnOfficial ActiveWorlds Wrapper.Net V0.5.85.1: NewLogin Structure. LaserBeam. 
ChangedOld Functions Changes Function Names Old New WorldInstanceSet SetWorldInstance WorldInstanceGet GetWo...UrzaGatherer: UrzaGatherer v2.0.2a: Inegration of VS Installer.VCC: Latest build, v2.1.30612.0: Automatic drop of latest buildVirtual Keyboard control for asp.net: virtual keyboard control: Virtual Keyboard control for asp.net, This control is used to get the secured inputs through virtual keyboards.Visual Studio 2010 PowerShell Code Generator: PSCodeGenerator: How to install PowerShell Code GeneratorDownload the zip Unzip Run .\Install-PSCodeGenerator.ps1 at the PowerShell console prompt Copies the te...VsTortoise - a TortoiseSVN add-in for Microsoft Visual Studio: VsTortoise Build 25 Beta: Build 25 (beta) New: Added support for Filter items (virtual folders) in Solution Explorer. New: Added "Get Lock..." to Solution Explorer context...WatchersNET.UrlShorty: WatchersNET.UrlShorty 01.00.00: First BETA Release Please Read the Readme or the Online Documentation for Install Instructions.Yet Another GPS: Release Beta 2.1: Release Beta 2.1: - Fix KML Template with Google Map Mobile Version - Add Signal Strength indecator - Add Time indecator - Fix Sound Language Prob...Most Popular ProjectsWBFS ManagerRawrAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)patterns & practices – Enterprise LibraryPHPExcelMicrosoft SQL Server Community & SamplesASP.NETMost Active Projectspatterns & practices – Enterprise LibraryjQuery Library for SharePoint Web ServicesNB_Store - Free DotNetNuke Ecommerce Catalog ModuleRhyduino - Arduino and Managed CodeBlogEngine.NETCommunity Forums NNTP bridgeCassandraemonMediaCoder.NETAndrew's XNA HelpersMicrosoft Silverlight Media Framework

    Read the article

  • Controlling fan speed on ASUS K43SV

    - by user181677
    My ASUS K43SV laptop gets very hot. Is it possible to control the fan speed with fancontrol? When I run $sudo pwmconfig it displays this message: /usr/sbin/pwmconfig: There are no fan-capable sensor modules installed When I run $sensors, here is the output: acpitz-virtual-0 Adapter: Virtual device temp1: +61.0°C (crit = +103.0°C) coretemp-isa-0000 Adapter: ISA adapter Physical id 0: +62.0°C (high = +86.0°C, crit = +100.0°C) Core 0: +62.0°C (high = +86.0°C, crit = +100.0°C) Core 1: +61.0°C (high = +86.0°C, crit = +100.0°C)

    Read the article

  • Is Your ASP.NET Development Server Not Working?

    - by Paulo Morgado
    Since Visual Studio 2005, Visual Studio comes with a development web server: the ASP.NET Development Server. I’ve been using this web server for simple test projects since then with Visual Studio 2005 and Visual Studio 2008 on Windows XP Professional on my work laptop, and on Windows XP Professional, Windows Vista 64bit Ultimate and Windows 7 64bit Ultimate on my home desktop, without any problems (apart from the known custom identity problem, that is). When I received my new work laptop, I installed Windows Vista 64bit Enterprise and Visual Studio 2008 and, to my surprise, the ASP.NET Development Server wasn’t working. I started looking for differences between the laptop environment and the desktop environment and the most notable differences were: System Laptop Desktop SKU Windows Vista 64bit Enterprise Windows Vista 64bit Ultimate Joined to a Domain Yes No Anti-Virus McAfee ESET After asserting that no domain policies were being applied to my laptop and domain user and nothing was being logged by the anti-virus, my suspicions turned to the fact that the laptop was running an Enterprise SKU and the desktop was running an Ultimate SKU. After having problems with other applications I was sure that the problem was the Enterprise SKU, but never found a solution to the problem. Because I wasn’t doing any web development at the time, I left it alone. After upgrading to Windows 7, the problem persisted but, because I wasn’t doing any web development at the time, once again, I left it alone. Now that I installed Visual Studio 2010 I had to solve this. After searching around forums and blogs that either didn’t offer an answer or offered very complicated workarounds that, sometimes, involved messing with the registry, I came to the conclusion that the solution is, in fact, very simple. When Windows Vista is installed, the hosts file, according to this, contains this definition: 127.0.0.1 localhost ::1 localhost This was not what I had in my laptop’s hosts file. What I had was this: #127.0.0.1 localhost #::1 localhost I might have changed it myself, but from the number of people that I found complaining about this problem on Windows Vista, this was probably the way it was. The installation of Windows 7 leaves the hosts file like this: #127.0.0.1 localhost #::1 localhost And although the ASP.NET Development Server works fine on Windows 7 64bit Ultimate, on Windows 7 64bit Enterprise it needs to be changed to this: 127.0.0.1 localhost ::1 localhost And I suspect it’s the same with Windows Vista 64bit Enterprise.

    Read the article

  • Does just-ping determine a website's accessibility and/or speed?

    - by Yves
    While looking for a webhost I wanted one that had good connectivity around the world, and ran their (shared hosting) test IPs on just-ping.com. This is a part of a sample result: München, Germany: Packets lost (10%) 24.8 24.9 25.1 178.xx.xx.xxx Cologne, Germany: Okay 5.6 5.7 5.8 178.xx.xx.xxx New York, U.S.A.: Packets lost (30%) 80.3 80.4 80.7 178.xx.xx.xxx Stockholm, Sweden: Packets lost (100%) 178.xx.xx.xxx Santa Clara, U.S.A.: Packets lost (30%) 158.1 158.4 158.7 178.xx.xx.xxx Vancouver, Canada: Packets lost (70%) 189.4 189.5 189.5 178.xx.xx.xxx London, United Kingdom: Packets lost (100%) Am I correct in thinking that hosts with several "Packets lost" messages from different locations have less stable or slower connections than hosts with all "Okays"?

    Read the article

  • BizTalk host throttling – Singleton pattern and High database size

    - by S.E.R.
    Originally posted on: http://geekswithblogs.net/SERivas/archive/2013/06/30/biztalk-host-throttling-ndash-singleton-pattern-and-high-database-size.aspx I have worked for some days around the singleton pattern (for those unfamiliar with it, read this post by Victor Fehlberg) and have come across a few very interesting posts, among which one dealt with performance issues (here, also by Victor Fehlberg). Simply put: if you have an orchestration which implements the singleton pattern, then performance will continuously decrease as the orchestration receives and consumes messages, and that behavior is more obvious when the orchestration never ends (i.e. it keeps looping and never terminates or completes). As I experienced the same kind of problem (actually I was alerted by SCOM, which told me that the host was being throttled because of High database size), I thought it would be a good idea to dig a little bit and see what happens deep inside BizTalk and thus understand the reasons for this behavior. NOTE: in this article, I will focus on this High database size throttling condition. I will try and work on the other conditions in some not too distant future… Test conditions The singleton orchestration For the purpose of this study, I have created the following orchestration, which is a very basic implementation of a singleton that piles up incoming messages, then does something else when a certain timeout has been reached without receiving another message: Throttling settings I have two distinct hosts: one that hosts the receive port (basic FILE port): Ports_ReceiveHost, and one that hosts the orchestration: ProcessingHost. In order to emphasize the throttling mechanism, I have modified the throttling settings for each of these hosts as follows (all other parameters are set to the default value): [Throttling thresholds] Message count in database: 500 (default value: 50000) Evolution of performance counters when submitting messages Since we are investigating the High database size throttling condition, here are the performance counters that we should take a look at (all of them are in the BizTalk:Message Agent performance object): Database size, High database size, Message delivery throttling state, Message publishing throttling state, Message delivery delay (ms), Message publishing delay (ms), Message delivery throttling state duration, Message publishing throttling state duration. (If you are not used to Perfmon, I strongly recommend that you start using it right now: it is a wonderful tool that allows you to open the hood and see what is going on inside BizTalk – and other systems.) Database size It is quite obvious that we will start by watching the database size and high database size counters, just to see when the first reaches the configured threshold (500) and when the second rings the alarm. NOTE: During this test I submitted 600 messages, one message at a time every 10ms, to see the evolution of the counters we have previously selected. It might not show very well on this screenshot, but here is what happened: From 15:46:50 to 15:47:50, the database size for the Ports_ReceiveHost host (blue line) kept growing until it reached a maximum of 504. At 15:47:50, the high database size alert fires. At first I was surprised by this result: why is it the database size of the receiving host that keeps growing since it is the processing host that piles up messages? Actually, it makes total sense. This counter measures the size of the database queue that is being filled by the host, not consumed.
Therefore, the high database size alert is raised on the host that fills the queue: Ports_ReceiveHost. More information is available on the Public MPWiki page. Now, looking at the Message publishing throttling state for the receiving host (green line), we can see that a throttling condition has been reached at 15:47:50: We can also see that the Message publishing delay (ms) (blue line) has begun growing slowly from this point. All of this explains why performance keeps decreasing when a singleton keeps processing new messages: the database size grows, and once it has exceeded the Message count in database threshold, the host is throttled and the publishing delay keeps increasing. Digging further So, what happens to the database queue then? Is it flushed some day or does it keep growing and growing indefinitely? The real question being: will the host be throttled forever because of this singleton? To answer this question, I set the Message count in database threshold to 20 (this value is very low in order not to wait for too long, otherwise I certainly would have fallen asleep in front of my screen) and I submitted 30 messages. The test was started at 18:26. At 18:56 (i.e. exactly 30 min later) the throttling was stopped and the database size was divided by 2. 30 min later again, the database size had dropped to almost zero: I guess I’ll have to find some documentation and do some more testing before I sort this out! My guess is that some maintenance job is at work here, though I cannot tell which one. Digging even further If we take a look at the Message delivery throttling state counter for the processing host, we can see that this host was also throttled during the submission of the 600 documents: The value for the counter was 1, meaning that the Message delivery incoming rate for the host instance exceeds the Message delivery outgoing rate * the specified Rate overdrive factor (percent) value. We will see this another day… :) A last word Let’s end this article with a warning: DO NOT CHANGE THE THROTTLING SETTINGS LIGHTLY! The temptation can be great to just bypass throttling by setting very high values for each parameter (or zero in some cases, which simply disables throttling). Nevertheless, always keep in mind that this mechanism is here for a very good reason: to prevent your BizTalk infrastructure from exploding! So whatever you do with those settings, do a lot of testing and benchmarking!

    Read the article

  • How to configure a zone cluster on Solaris Cluster 4.0

    - by JuergenS
    This is a short overview on how to configure a zone cluster on Solaris Cluster 4.0. This is a little bit different as in Solaris Cluster 3.2/3.3 because Solaris Cluster 4.0 is only running on Solaris 11. The name of the zone cluster must be unique throughout the global Solaris Cluster and must be configured on a global Solaris Cluster. Please read all the requirements for zone cluster in Solaris Cluster Software Installation Guide for SC4.0. For Solaris Cluster 3.2/3.3 please refer to my previous blog Configuration steps to create a zone cluster in Solaris Cluster 3.2/3.3. A. Configure the zone cluster into the already running global clusterCheck if zone cluster can be created # cluster show-netprops to change number of zone clusters use # cluster set-netprops -p num_zoneclusters=12 Note: 12 zone clusters is the default, values can be customized! Create config file (zc1config) for zone cluster setup e.g: Configure zone cluster # clzc configure -f zc1config zc1 Note: If not using the config file the configuration can also be done manually # clzc configure zc1 Check zone configuration # clzc export zc1 Verify zone cluster # clzc verify zc1 Note: The following message is a notice and comes up on several clzc commands Waiting for zone verify commands to complete on all the nodes of the zone cluster "zc1"... Install the zone cluster # clzc install zc1 Note: Monitor the consoles of the global zone to see how the install proceed! (The output is different on the nodes) It's very important that all global cluster nodes have installed the same set of ha-cluster packages! Boot the zone cluster # clzc boot zc1 Login into non-global-zones of zone cluster zc1 on all nodes and finish Solaris installation. # zlogin -C zc1 Check status of zone cluster # clzc status zc1 Login into non-global-zones of zone cluster zc1 and configure the shell environment for root (for PATH: /usr/cluster/bin, for MANPATH: /usr/cluster/man) # zlogin -C zc1 If using additional name service configure /etc/nsswitch.conf of zone cluster non-global zones. hosts: cluster files netmasks: cluster files Configure /etc/inet/hosts of the zone cluster zones Enter all the logical hosts of non-global zones B. Add resource groups and resources to zone cluster Create a resource group in zone cluster # clrg create -n <zone-hostname-node1>,<zone-hostname-node2> app-rg Note1: Use command # cluster status for zone cluster resource group overview. Note2: You can also run all commands for zone cluster in global cluster by adding the option -Z to the command. e.g: # clrg create -Z zc1 -n <zone-hostname-node1>,<zone-hostname-node2> app-rg Set up the logical host resource for zone cluster In the global zone do: # clzc configure zc1 clzc:zc1 add net clzc:zc1:net set address=<zone-logicalhost-ip> clzc:zc1:net end clzc:zc1 commit clzc:zc1 exit Note: Check that logical host is in /etc/hosts file In zone cluster do: # clrslh create -g app-rg -h <zone-logicalhost> <zone-logicalhost>-rs Set up storage resource for zone cluster Register HAStoragePlus # clrt register SUNW.HAStoragePlus Example1) ZFS storage pool In the global zone do: Configure zpool eg: # zpool create <zdata> mirror cXtXdX cXtXdX and # clzc configure zc1 clzc:zc1 add dataset clzc:zc1:dataset set name=zdata clzc:zc1:dataset end clzc:zc1 verify clzc:zc1 commit clzc:zc1 exit Check setup with # clzc show -v zc1 In the zone cluster do: # clrs create -g app-rg -t SUNW.HAStoragePlus -p zpools=zdata app-hasp-rs Example2) HA filesystem In the global zone do: Configure SVM diskset and SVM devices. 
and # clzc configure zc1 clzc:zc1 add fs clzc:zc1:fs set dir=/data clzc:zc1:fs set special=/dev/md/datads/dsk/d0 clzc:zc1:fs set raw=/dev/md/datads/rdsk/d0 clzc:zc1:fs set type=ufs clzc:zc1:fs add options [logging] clzc:zc1:fs end clzc:zc1 verify clzc:zc1 commit clzc:zc1 exit Check setup with # clzc show -v zc1 In the zone cluster do: # clrs create -g app-rg -t SUNW.HAStoragePlus -p FilesystemMountPoints=/data app-hasp-rs Example3) Global filesystem as loopback file system In the global zone configure global filesystem and it to /etc/vfstab on all global nodes e.g.: /dev/md/datads/dsk/d0 /dev/md/datads/dsk/d0 /global/fs ufs 2 yes global,logging and # clzc configure zc1 clzc:zc1 add fs clzc:zc1:fs set dir=/zone/fs (zc-lofs-mountpoint) clzc:zc1:fs set special=/global/fs (globalcluster-mountpoint) clzc:zc1:fs set type=lofs clzc:zc1:fs end clzc:zc1 verify clzc:zc1 commit clzc:zc1 exit Check setup with # clzc show -v zc1 In the zone cluster do: (Create scalable rg if not already done) # clrg create -p desired_primaries=2 -p maximum_primaries=2 app-scal-rg # clrs create -g app-scal-rg -t SUNW.HAStoragePlus -p FilesystemMountPoints=/zone/fs hasp-rs More details of adding storage available in the Installation Guide for zone cluster Switch resource group and resources online in the zone cluster # clrg online -eM app-rg # clrg online -eM app-scal-rg Test: Switch of the resource group in the zone cluster # clrg switch -n zonehost2 app-rg # clrg switch -n zonehost2 app-scal-rg Add supported dataservice to zone cluster Documentation for SC4.0 is available here Example output: Appendix: To delete a zone cluster do: # clrg delete -Z zc1 -F + Note: Zone cluster uninstall can only be done if all resource groups are removed in the zone cluster. The command 'clrg delete -F +' can be used in zone cluster to delete the resource groups recursively. # clzc halt zc1 # clzc uninstall zc1 Note: If clzc command is not successful to uninstall the zone, then run 'zoneadm -z zc1 uninstall -F' on the nodes where zc1 is configured # clzc delete zc1

    Read the article

  • How can Google publish Dalvik as Java-language compatible since Java is a trademark?

    - by Bruno Chagas
    According to this thread, "Java and JVM license": you can write a compiler that implements the Java Language Specification or write a JVM that implements the Java Virtual Machine specification, but when you officially want to call it "Java", you have to prove it is compatible by passing the tests of the TCK (technology compatibility kit) and pay for a license from Oracle. So, how can Google (or any other Java implementation, for that matter) claim that Dalvik is a Java virtual machine?

    Read the article

  • The Proper Use of the VM Role in Windows Azure

    - by BuckWoody
    At the Professional Developer’s Conference (PDC) in 2010 we announced an addition to the Computational Roles in Windows Azure, called the VM Role. This new feature allows a great deal of control over the applications you write, but some have confused it with our full infrastructure offering in Windows Hyper-V. There is a proper architecture pattern for both of them. Virtualization Virtualization is the process of taking all of the hardware of a physical computer and replicating it in software alone. This means that a single computer can “host” or run several “virtual” computers. These virtual computers can run anywhere - including at a vendor’s location. Some companies refer to this as Cloud Computing since the hardware is operated and maintained elsewhere. IaaS The more detailed definition of this type of computing is called Infrastructure as a Service (Iaas) since it removes the need for you to maintain hardware at your organization. The operating system, drivers, and all the other software required to run an application are still under your control and your responsibility to license, patch, and scale. Microsoft has an offering in this space called Hyper-V, that runs on the Windows operating system. Combined with a hardware hosting vendor and the System Center software to create and deploy Virtual Machines (a process referred to as provisioning), you can create a Cloud environment with full control over all aspects of the machine, including multiple operating systems if you like. Hosting machines and provisioning them at your own buildings is sometimes called a Private Cloud, and hosting them somewhere else is often called a Public Cloud. State-ful and Stateless Programming This paradigm does not create a new, scalable way of computing. It simply moves the hardware away. The reason is that when you limit the Cloud efforts to a Virtual Machine, you are in effect limiting the computing resources to what that single system can provide. This is because much of the software developed in this environment maintains “state” - and that requires a little explanation. “State-ful programming” means that all parts of the computing environment stay connected to each other throughout a compute cycle. The system expects the memory, CPU, storage and network to remain in the same state from the beginning of the process to the end. You can think of this as a telephone conversation - you expect that the other person picks up the phone, listens to you, and talks back all in a single unit of time. In “Stateless” computing the system is designed to allow the different parts of the code to run independently of each other. You can think of this like an e-mail exchange. You compose an e-mail from your system (it has the state when you’re doing that) and then you walk away for a bit to make some coffee. A few minutes later you click the “send” button (the network has the state) and you go to a meeting. The server receives the message and stores it on a mail program’s database (the mail server has the state now) and continues working on other mail. Finally, the other party logs on to their mail client and reads the mail (the other user has the state) and responds to it and so on. These events might be separated by milliseconds or even days, but the system continues to operate. The entire process doesn’t maintain the state, each component does. This is the exact concept behind coding for Windows Azure. 
    The stateless programming model allows amazing rates of scale, since the message (think of the e-mail) can be broken apart by multiple programs and worked on in parallel (like when the e-mail goes to hundreds of users), and only the order of re-assembling the work is important to consider. For the exact same reason, if the system makes copies of those running programs, as Windows Azure does, you have built-in redundancy and recovery. It’s just built into the design.

    The Difference Between Infrastructure Designs and Platform Designs
    When you simply take a physical server running software and virtualize it, either privately or publicly, you haven’t done anything to allow the code to scale or have recovery. That all has to be handled by adding more code and more Virtual Machines that have a slight lag in maintaining the running state of the system. Add more machines and you get more lag, so the scale is limited. This is the primary limitation with IaaS. It’s also not as easy to deploy these VMs, and more importantly, you’re often charged on a longer basis to remove them. Your agility in IaaS is more limited.

    Windows Azure is a Platform - meaning that you get objects you can code against. The code you write runs on multiple nodes with multiple copies, and it all works because of the magic of Stateless programming. You don’t worry, or even care, about what is running underneath. It could be Windows (and it is in fact a type of Windows Server), Linux, or anything else - but that isn’t what you want to manage, monitor, maintain or license. You don’t want to deploy an operating system - you want to deploy an application. You want your code to run, and you don’t care how it does that. Another benefit to PaaS is that you can ask for hundreds or thousands of new nodes of computing power - there’s no provisioning, it just happens. And you can stop using them quicker - and the base code for your application does not have to change to make this happen.

    Windows Azure Roles and Their Use
    If you need your code to have a user interface, in Visual Studio you add a Web Role to your project, and if the code needs to do work that doesn’t involve a user interface you can add a Worker Role. They are just containers that act a certain way; I’ll provide more detail on those later, and a minimal sketch appears after the recap below. (Note: that’s a general description, so it’s not entirely accurate, but it’s accurate enough for this discussion.)

    So now we’re back to that VM Role. Because of the name, some have mistakenly thought that you can take a Virtual Machine running, say, Linux, and deploy it to Windows Azure using this Role. But you can’t - that’s not what it is designed for at all. If you do need that kind of deployment, you should look into Hyper-V and System Center to create the Private or Public Infrastructure as a Service.

    What the VM Role is actually designed to do is to allow you to have a great deal of control over the system where your code will run. Let’s take an example. You’ve heard about Windows Azure and Platform programming, and you’re convinced it’s the right way to code. But you have a lot of things you’ve written in another way at your company, and re-writing all of your code to take advantage of Windows Azure will take a long time. Or perhaps you have a certain version of Apache Web Server that you need for your code to work. In both cases, you think you can code (or already have coded) the software to be “Stateless”; you just need more control over the place where the code runs. That’s the place where a VM Role makes sense.
    Recap
    Virtualizing servers alone has limitations of scale, availability and recovery. Microsoft’s offering in this area is Hyper-V and System Center, not the VM Role. The VM Role is still used for running Stateless code, just like the Web and Worker Roles, with the exception that it allows you more control over the environment where that code runs.
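    For reference, here is a rough sketch of the Worker Role container mentioned above, along the lines of what the Visual Studio template generates; the exact base class members vary by SDK version, so treat this as an assumption rather than a definitive template.

        using System.Threading;
        using Microsoft.WindowsAzure.ServiceRuntime;

        public class WorkerRole : RoleEntryPoint
        {
            // Run loops for the life of the role instance, doing stateless units of work.
            public override void Run()
            {
                while (true)
                {
                    // Pick up a self-contained piece of work here (for example, a queue message).
                    Thread.Sleep(10000);
                }
            }

            // OnStart performs one-time setup for this instance; true signals a successful start.
            public override bool OnStart()
            {
                return base.OnStart();
            }
        }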

    Read the article

  • Don’t just P2V that server for Testing!

    - by Jonathan Kehayias
    If you use virtualization in your company, at some point in time you might be tempted to perform a Physical-To-Virtual conversion, also known as P2V.  The ability to create a complete working copy of a physical server as a virtual machine is really useful for migrating to a virtualized datacenter, but it can also wreak havoc in your environment if you use it to generate a copy of a server for testing and the only change you make is the name of the server.  The Problem: Consider that you...(read more)

    Read the article

  • How to find Microsoft.SharePoint.ApplicationPages.dll and some other assemblies

    - by KunaalKapoor
    You may be wondering where to find Microsoft.SharePoint.ApplicationPages.dll if you are creating a new SharePoint application page. But don’t worry, it resides in the _app_bin folder of your SharePoint site’s virtual directory. Assuming your IIS inetpub is on C:, the exact path of Microsoft.SharePoint.ApplicationPages.dll is:

    C:\Inetpub\wwwroot\wss\VirtualDirectories\<Your Virtual Server>\_app_bin\Microsoft.SharePoint.ApplicationPages.dll

    Here is the full list of assemblies in the _app_bin folder:

    Microsoft.Office.DocumentManagement.Pages.dll
    Microsoft.Office.officialfileSoap.dll
    Microsoft.Office.Policy.Pages.dll
    Microsoft.Office.SlideLibrarySoap.dll
    Microsoft.Office.Workflow.Pages.dll
    Microsoft.Office.WorkflowSoap.dll
    Microsoft.SharePoint.ApplicationPages.dll
    STSSOAP.DLL
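    If you just want to confirm the assembly really is in that folder, a small stand-alone console check along these lines will do; the virtual server folder name ("80" below) is only a placeholder - substitute your own.

        using System;
        using System.IO;
        using System.Reflection;

        class FindApplicationPages
        {
            static void Main()
            {
                // "80" is a placeholder for your own virtual server folder name.
                string path = @"C:\Inetpub\wwwroot\wss\VirtualDirectories\80\_app_bin\Microsoft.SharePoint.ApplicationPages.dll";

                if (File.Exists(path))
                {
                    // Reflection-only load inspects the file without executing any of its code.
                    Assembly asm = Assembly.ReflectionOnlyLoadFrom(path);
                    Console.WriteLine(asm.FullName);
                }
                else
                {
                    Console.WriteLine("Not found - check the virtual directory folder name.");
                }
            }
        }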

    Read the article

  • Why doesn't Apache start from xampp control panel after changes to vhosts config?

    - by Grafica
    I'm running xampp on my local server, and want to host multiple sites, so I changed the httpd-vhosts.conf file. Will somebody let me know if there is something wrong with my code? Apache was running while I had only one site in the config, but after I added another site, I stopped apache, and I'm not able to restart it.

    #
    # Virtual Hosts
    #
    # If you want to maintain multiple domains/hostnames on your
    # machine you can setup VirtualHost containers for them. Most configurations
    # use only name-based virtual hosts so the server doesn't need to worry about
    # IP addresses. This is indicated by the asterisks in the directives below.
    #
    # Please see the documentation at
    # <URL:http://httpd.apache.org/docs/2.2/vhosts/>
    # for further details before you try to setup virtual hosts.
    #
    # You may use the command line option '-S' to verify your virtual host
    # configuration.
    #
    # Use name-based virtual hosting.
    #
    ##NameVirtualHost *:80
    #
    # VirtualHost example:
    # Almost any Apache directive may go into a VirtualHost container.
    # The first VirtualHost section is used for all requests that do not
    # match a ServerName or ServerAlias in any <VirtualHost> block.
    #
    ##<VirtualHost *:80>
    ##ServerAdmin [email protected]
    ##DocumentRoot "C:/xampp/htdocs/dummy-host.localhost"
    ##ServerName dummy-host.localhost
    ##ServerAlias www.dummy-host.localhost
    ##ErrorLog "logs/dummy-host.localhost-error.log"
    ##CustomLog "logs/dummy-host.localhost-access.log" combined
    ##</VirtualHost>

    ##<VirtualHost *:80>
    ##ServerAdmin [email protected]
    ##DocumentRoot "C:/xampp/htdocs/dummy-host2.localhost"
    ##ServerName dummy-host2.localhost
    ##ServerAlias www.dummy-host2.localhost
    ##ErrorLog "logs/dummy-host2.localhost-error.log"
    ##CustomLog "logs/dummy-host2.localhost-access.log" combined
    ##</VirtualHost>

    NameVirtualHost *

    <VirtualHost *>
        DocumentRoot "C:\xampp\htdocs"
        ServerName localhost
    </VirtualHost>

    <VirtualHost *>
        DocumentRoot "C:\xampp\htdocs"
        ServerName evamagnus.com
        <Directory "C:\xampp\htdocs\">
            Order allow,deny
            Allow from all
        </Directory>
    </VirtualHost>

    <VirtualHost *>
        DocumentRoot "C:\xampp\htdocs2\"
        ServerName mygrafica.com
        <Directory "C:\xampp\htdocs2\">
            Order allow,deny
            Allow from all
        </Directory>
    </VirtualHost>

    Here is what it says in the control panel:

    2:17:37 PM [apache] Starting apache service...
    2:17:38 PM [apache] Status change detected: running
    2:17:39 PM [apache] Status change detected: stopped

    Thanks in advance.

    Read the article

  • Run simple bash script to start applications at login

    - by ganjan
    I want to run a simple bash script automatically when I log in. For example:

    #!/bin/bash
    echo "start spotify"
    gnome-terminal -e spotify --title spotify

    When I run this command, one gnome-terminal shows up and Spotify starts. I also want the gnome-terminal to pop up "hidden" on a different virtual desktop (one of the other four virtual desktops you can choose from the taskbar). I tried to add this to /home/me/.bash_login or something, but that didn't work.

    Read the article

< Previous Page | 134 135 136 137 138 139 140 141 142 143 144 145  | Next Page >