Search Results

Search found 63114 results on 2525 pages for 'set identity insert'.


  • Use IIS Application Initialization for keeping ASP.NET Apps alive

    - by Rick Strahl
    I've been working quite a bit with Windows Services in the recent months, and well, it turns out that Windows Services are quite a bear to debug, deploy, update and maintain. The process of getting services set up,  debugged and updated is a major chore that has to be extensively documented and or automated specifically. On most projects when a service is built, people end up scrambling for the right 'process' to use for administration. Web app deployment and maintenance on the other hand are common and well understood today, as we are constantly dealing with Web apps. There's plenty of infrastructure and tooling built into Web Tools like Visual Studio to facilitate the process. By comparison Windows Services or anything self-hosted for that matter seems convoluted.In fact, in a recent blog post I mentioned that on a recent project I'd been using self-hosting for SignalR inside of a Windows service, because the application is in fact a 'service' that also needs to send out lots of messages via SignalR. But the reality is that it could just as well be an IIS application with a service component that runs in the background. Either way you look at it, it's either a Windows Service with a built in Web Server, or an IIS application running a Service application, neither of which follows the standard Service or Web App template.Personally I much prefer Web applications. Running inside of IIS I get all the benefits of the IIS platform including service lifetime management (crash and restart), controlled shutdowns, the whole security infrastructure including easy certificate support, hot-swapping of code and the the ability to publish directly to IIS from within Visual Studio with ease.Because of these benefits we set out to move from the self hosted service into an ASP.NET Web app instead.The Missing Link for ASP.NET as a Service: Auto-LoadingI've had moments in the past where I wanted to run a 'service like' application in ASP.NET because when you think about it, it's so much easier to control a Web application remotely. Services are locked into start/stop operations, but if you host inside of a Web app you can write your own ticket and control it from anywhere. In fact nearly 10 years ago I built a background scheduling application that ran inside of ASP.NET and it worked great and it's still running doing its job today.The tricky part for running an app as a service inside of IIS then and now, is how to get IIS and ASP.NET launched so your 'service' stays alive even after an Application Pool reset. 7 years ago I faked it by using a web monitor (my own West Wind Web Monitor app) I was running anyway to monitor my various web sites for uptime, and having the monitor ping my 'service' every 20 seconds to effectively keep ASP.NET alive or fire it back up after a reload. I used a simple scheduler class that also includes some logic for 'self-reloading'. Hacky for sure, but it worked reliably.Luckily today it's much easier and more integrated to get IIS to launch ASP.NET as soon as an Application Pool is started by using the Application Initialization Module. The Application Initialization Module basically allows you to turn on Preloading on the Application Pool and the Site/IIS App, which essentially fires a request through the IIS pipeline as soon as the Application Pool has been launched. This means that effectively your ASP.NET app becomes active immediately, Application_Start is fired making sure your app stays up and running at all times. 
All the other features like Application Pool recycling and auto-shutdown after idle time still work, but IIS will then always immediately re-launch the application.Getting started with Application InitializationAs of IIS 8 Application Initialization is part of the IIS feature set. For IIS 7 and 7.5 there's a separate download available via Web Platform Installer. Using IIS 8 Application Initialization is an optional install component in Windows or the Windows Server Role Manager: This is an optional component so make sure you explicitly select it.IIS Configuration for Application InitializationInitialization needs to be applied on the Application Pool as well as the IIS Application level. As of IIS 8 these settings can be made through the IIS Administration console.Start with the Application Pool:Here you need to set both the Start Automatically which is always set, and the StartMode which should be set to AlwaysRunning. Both have to be set - the Start Automatically flag is set true by default and controls the starting of the application pool itself while Always Running flag is required in order to launch the application. Without the latter flag set the site settings have no effect.Now on the Site/Application level you can specify whether the site should pre load: Set the Preload Enabled flag to true.At this point ASP.NET apps should auto-load. This is all that's needed to pre-load the site if all you want is to get your site launched automatically.If you want a little more control over the load process you can add a few more settings to your web.config file that allow you to show a static page while the App is starting up. This can be useful if startup is really slow, so rather than displaying blank screen while the user is fiddling their thumbs you can display a static HTML page instead: <system.webServer> <applicationInitialization remapManagedRequestsTo="Startup.htm" skipManagedModules="true"> <add initializationPage="ping.ashx" /> </applicationInitialization> </system.webServer>This allows you to specify a page to execute in a dry run. IIS basically fakes request and pushes it directly into the IIS pipeline without hitting the network. You specify a page and IIS will fake a request to that page in this case ping.ashx which just returns a simple OK string - ie. a fast pipeline request. This request is run immediately after Application Pool restart, and while this request is running and your app is warming up, IIS can display an alternate static page - Startup.htm above. So instead of showing users an empty loading page when clicking a link on your site you can optionally show some sort of static status page that says, "we'll be right back".  I'm not sure if that's such a brilliant idea since this can be pretty disruptive in some cases. Personally I think I prefer letting people wait, but at least get the response they were supposed to get back rather than a random page. But it's there if you need it.Note that the web.config stuff is optional. If you don't provide it IIS hits the default site link (/) and even if there's no matching request at the end of that request it'll still fire the request through the IIS pipeline. 
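The post does not show the ping.ashx handler it mentions; a minimal generic handler along those lines - a sketch, not the author's actual code - could be as simple as:

    <%@ WebHandler Language="C#" Class="Ping" %>
    using System.Web;

    // Minimal warm-up endpoint: returns "OK" so the initialization request has a cheap ASP.NET target
    public class Ping : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            context.Response.ContentType = "text/plain";
            context.Response.Write("OK");
        }

        public bool IsReusable
        {
            get { return true; }
        }
    }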
Ideally though you want to make sure that an ASP.NET endpoint is hit either with your default page, or by specify the initializationPage to ensure ASP.NET actually gets hit since it's possible for IIS fire unmanaged requests only for static pages (depending how your pipeline is configured).What about AppDomain Restarts?In addition to full Worker Process recycles at the IIS level, ASP.NET also has to deal with AppDomain shutdowns which can occur for a variety of reasons:Files are updated in the BIN folderWeb Deploy to your siteweb.config is changedHard application crashThese operations don't cause the worker process to restart, but they do cause ASP.NET to unload the current AppDomain and start up a new one. Because the features above only apply to Application Pool restarts, AppDomain restarts could also cause your 'ASP.NET service' to stop processing in the background.In order to keep the app running on AppDomain recycles, you can resort to a simple ping in the Application_End event:protected void Application_End() { var client = new WebClient(); var url = App.AdminConfiguration.MonitorHostUrl + "ping.aspx"; client.DownloadString(url); Trace.WriteLine("Application Shut Down Ping: " + url); }which fires any ASP.NET url to the current site at the very end of the pipeline shutdown which in turn ensures that the site immediately starts back up.Manual Configuration in ApplicationHost.configThe above UI corresponds to the following ApplicationHost.config settings. If you're using IIS 7, there's no UI for these flags so you'll have to manually edit them.When you install the Application Initialization component into IIS it should auto-configure the module into ApplicationHost.config. Unfortunately for me, with Mr. Murphy in his best form for me, the module registration did not occur and I had to manually add it.<globalModules> <add name="ApplicationInitializationModule" image="%windir%\System32\inetsrv\warmup.dll" /> </globalModules>Most likely you won't need ever need to add this, but if things are not working it's worth to check if the module is actually registered.Next you need to configure the ApplicationPool and the Web site. The following are the two relevant entries in ApplicationHost.config.<system.applicationHost> <applicationPools> <add name="West Wind West Wind Web Connection" autoStart="true" startMode="AlwaysRunning" managedRuntimeVersion="v4.0" managedPipelineMode="Integrated"> <processModel identityType="LocalSystem" setProfileEnvironment="true" /> </add> </applicationPools> <sites> <site name="Default Web Site" id="1"> <application path="/MPress.Workflow.WebQueueMessageManager" applicationPool="West Wind West Wind Web Connection" preloadEnabled="true"> <virtualDirectory path="/" physicalPath="C:\Clients\…" /> </application> </site> </sites> </system.applicationHost>On the Application Pool make sure to set the autoStart and startMode flags to true and AlwaysRunning respectively. On the site make sure to set the preloadEnabled flag to true.And that's all you should need. You can still set the web.config settings described above as well.ASP.NET as a Service?In the particular application I'm working on currently, we have a queue manager that runs as standalone service that polls a database queue and picks out jobs and processes them on several threads. The service can spin up any number of threads and keep these threads alive in the background while IIS is running doing its own thing. These threads are newly created threads, so they sit completely outside of the IIS thread pool. 
In order for this service to work all it needs is a long running reference that keeps it alive for the life time of the application.In this particular app there are two components that run in the background on their own threads: A scheduler that runs various scheduled tasks and handles things like picking up emails to send out outside of IIS's scope and the QueueManager. Here's what this looks like in global.asax:public class Global : System.Web.HttpApplication { private static ApplicationScheduler scheduler; private static ServiceLauncher launcher; protected void Application_Start(object sender, EventArgs e) { // Pings the service and ensures it stays alive scheduler = new ApplicationScheduler() { CheckFrequency = 600000 }; scheduler.Start(); launcher = new ServiceLauncher(); launcher.Start(); // register so shutdown is controlled HostingEnvironment.RegisterObject(launcher); }}By keeping these objects around as static instances that are set only once on startup, they survive the lifetime of the application. The code in these classes is essentially unchanged from the Windows Service code except that I could remove the various overrides required for the Windows Service interface (OnStart,OnStop,OnResume etc.). Otherwise the behavior and operation is very similar.In this application ASP.NET serves two purposes: It acts as the host for SignalR and provides the administration interface which allows remote management of the 'service'. I can start and stop the service remotely by shutting down the ApplicationScheduler very easily. I can also very easily feed stats from the queue out directly via a couple of Web requests or (as we do now) through the SignalR service.Registering a Background Object with ASP.NETNotice also the use of the HostingEnvironment.RegisterObject(). This function registers an object with ASP.NET to let it know that it's a background task that should be notified if the AppDomain shuts down. RegisterObject() requires an interface with a Stop() method that's fired and allows your code to respond to a shutdown request. Here's what the IRegisteredObject::Stop() method looks like on the launcher:public void Stop(bool immediate = false) { LogManager.Current.LogInfo("QueueManager Controller Stopped."); Controller.StopProcessing(); Controller.Dispose(); Thread.Sleep(1500); // give background threads some time HostingEnvironment.UnregisterObject(this); }Implementing IRegisterObject should help with reliability on AppDomain shutdowns. Thanks to Justin Van Patten for pointing this out to me on Twitter.RegisterObject() is not required but I would highly recommend implementing it on whatever object controls your background processing to all clean shutdowns when the AppDomain shuts down.Testing it outI'm still in the testing phase with this particular service to see if there are any side effects. But so far it doesn't look like it. With about 50 lines of code I was able to replace the Windows service startup to Web start up - everything else just worked as is. An honorable mention goes to SignalR 2.0's oWin hosting, because with the new oWin based hosting no code changes at all were required, merely a couple of configuration file settings and an assembly directive needed, to point at the SignalR startup class. Sweet!It also seems like SignalR is noticeably faster running inside of IIS compared to self-host. 
Startup feels faster because of the preload.Starting and Stopping the 'Service'Because the application is running as a Web Server, it's easy to have a Web interface for starting and stopping the services running inside of the service. For our queue manager the SignalR service and front monitoring app has a play and stop button for toggling the queue.If you want more administrative control and have it work more like a Windows Service you can also stop the application pool explicitly from the command line which would be equivalent to stopping and restarting a service.To start and stop from the command line you can use the IIS appCmd tool. To stop:> %windir%\system32\inetsrv\appcmd stop apppool /apppool.name:"Weblog"and to start> %windir%\system32\inetsrv\appcmd start apppool /apppool.name:"Weblog"Note that when you explicitly force the AppPool to stop running either in the UI (on the ApplicationPools page use Start/Stop) or via command line tools, the application pool will not auto-restart immediately. You have to manually start it back up.What's not to like?There are certainly a lot of benefits to running a background service in IIS, but… ASP.NET applications do have more overhead in terms of memory footprint and startup time is a little slower, but generally for server applications this is not a big deal. If the application is stable the service should fire up and stay running indefinitely. A lot of times this kind of service interface can simply be attached to an existing Web application, or if scalability requires be offloaded to its own Web server.Easier to work withBut the ultimate benefit here is that it's much easier to work with a Web app as opposed to a service. While developing I can simply turn off the auto-launch features and launch the service on demand through IIS simply by hitting a page on the site. If I want to shut down an IISRESET -stop will shut down the service easily enough. I can then attach a debugger anywhere I want and this works like any other ASP.NET application. Yes you end up on a background thread for debugging but Visual Studio handles that just fine and if you stay on a single thread this is no different than debugging any other code.SummaryUsing ASP.NET to run background service operations is probably not a super common scenario, but it probably should be something that is considered carefully when building services. Many applications have service like features and with the auto-start functionality of the Application Initialization module, it's easy to build this functionality into ASP.NET. Especially when combined with the notification features of SignalR it becomes very, very easy to create rich services that can also communicate their status easily to the outside world.Whether it's existing applications that need some background processing for scheduling related tasks, or whether you just create a separate site altogether just to host your service it's easy to do and you can leverage the same tool chain you're already using for other Web projects. 
If you have lots of service projects it's worth considering… give it some thought…
© Rick Strahl, West Wind Technologies, 2005-2013
Posted in ASP.NET  SignalR  IIS

    Read the article

  • ASP.NET MVC 3 Hosting :: How to Deploy Web Apps Using ASP.NET MVC 3, Razor and EF Code First - Part I

    - by mbridge
    First, you can download the source code from http://efmvc.codeplex.com. The following frameworks will be used for this step-by-step tutorial.

Define Domain Model
Let's create the domain model for our simple web application. We have two domain entities - Category and Expense. A single category contains a list of expense transactions and every expense transaction should have a Category. In this post, we will be focusing on CRUD operations for the entity Category and will be working on the Expense entity with a View Model object in a later post. The source code for this application will be refactored over time.

Category Class
public class Category
{
    public int CategoryId { get; set; }
    [Required(ErrorMessage = "Name Required")]
    [StringLength(25, ErrorMessage = "Must be less than 25 characters")]
    public string Name { get; set; }
    public string Description { get; set; }
    public virtual ICollection<Expense> Expenses { get; set; }
}

Expense Class
public class Expense
{
    public int ExpenseId { get; set; }
    public string Transaction { get; set; }
    public DateTime Date { get; set; }
    public double Amount { get; set; }
    public int CategoryId { get; set; }
    public virtual Category Category { get; set; }
}

The above entities are very simple POCO (Plain Old CLR Object) classes, and the entity Category is decorated with validation attributes from the System.ComponentModel.DataAnnotations namespace. Now we want to use these entities to define model objects for Entity Framework 4. Using the Code First approach of Entity Framework, we can first define the entities by simply writing POCO classes without any coupling to any API or database library. This approach lets you focus on the domain model, which enables Domain-Driven Development for applications. EF Code First support currently ships as a separate API that runs on top of Entity Framework 4; EF Code First had reached CTP 5 at the time of writing.

Creating Context Class for Entity Framework
We have created our domain model, so let's create a class to work with Entity Framework Code First. For this, you have to download EF Code First CTP 5 and add a reference to the assembly EntityFramework.dll. You can also use NuGet to download and add a reference to EF Code First.

public class MyFinanceContext : DbContext
{
    public MyFinanceContext() : base("MyFinance") { }
    public DbSet<Category> Categories { get; set; }
    public DbSet<Expense> Expenses { get; set; }
}

The above class MyFinanceContext is derived from DbContext, which can connect your model classes to a database. The MyFinanceContext class maps our Category and Expense classes to the database tables Categories and Expenses using DbSet<TEntity>, where TEntity is any POCO class. When we run the application for the first time, it will automatically create the database. EF Code First looks for a connection string in web.config or app.config that has the same name as the DbContext class. If it does not find a connection string matching that convention, it will automatically create the database in the local SQL Express instance by default, and the database will have the same name as the DbContext class. You can also define the name of the database in the constructor of the DbContext class. Unlike NHibernate, we don't have to use any XML based mapping files or a fluent interface for mapping between our model and database.
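As a quick illustration of the automatic database creation described above, any first use of the context triggers it - a sketch with made-up sample data, not part of the original post:

    // First use of MyFinanceContext creates the 'MyFinance' database by convention
    using (var db = new MyFinanceContext())
    {
        db.Categories.Add(new Category { Name = "Travel", Description = "Trips and fuel" });
        db.SaveChanges();   // creates the database on first run, then inserts the row
    }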
The model classes in Code First work on the basis of conventions, and we can also use a fluent API to refine our model. The convention for the primary key is 'Id' or '<class name>Id'. If primary key properties are detected with type 'int', 'long' or 'short', they will automatically be registered as identity columns in the database by default. Primary key detection is not case sensitive. We can define our model classes with validation attributes from the System.ComponentModel.DataAnnotations namespace, and the validation rules are enforced automatically when a model object is updated or saved.

Generic Repository for EF Code First
We have created our model classes and DbContext class. Now we have to create a generic repository pattern for data persistence with EF Code First. If you don't know about the repository pattern, check out Martin Fowler's article on Repository. Let's create a generic repository to work with DbContext and DbSet generics.

public interface IRepository<T> where T : class
{
    void Add(T entity);
    void Delete(T entity);
    T GetById(long Id);
    IEnumerable<T> All();
}

RepositoryBase – Generic Repository class (the class declaration is omitted in this excerpt; a fuller sketch follows below)
protected MyFinanceContext Database
{
    get { return database ?? (database = DatabaseFactory.Get()); }
}
public virtual void Add(T entity)
{
    dbset.Add(entity);
}
public virtual void Delete(T entity)
{
    dbset.Remove(entity);
}
public virtual T GetById(long id)
{
    return dbset.Find(id);
}
public virtual IEnumerable<T> All()
{
    return dbset.ToList();
}
}

DatabaseFactory class
public class DatabaseFactory : Disposable, IDatabaseFactory
{
    private MyFinanceContext database;
    public MyFinanceContext Get()
    {
        return database ?? (database = new MyFinanceContext());
    }
    protected override void DisposeCore()
    {
        if (database != null)
            database.Dispose();
    }
}

Unit of Work
If you are new to the Unit of Work pattern, check out Fowler's article on Unit of Work. According to Martin Fowler, the Unit of Work pattern "maintains a list of objects affected by a business transaction and coordinates the writing out of changes and the resolution of concurrency problems." Let's create a class for handling the Unit of Work.

public interface IUnitOfWork
{
    void Commit();
}

UnitOfWork class
public class UnitOfWork : IUnitOfWork
{
    private readonly IDatabaseFactory databaseFactory;
    private MyFinanceContext dataContext;

    public UnitOfWork(IDatabaseFactory databaseFactory)
    {
        this.databaseFactory = databaseFactory;
    }

    protected MyFinanceContext DataContext
    {
        get { return dataContext ?? (dataContext = databaseFactory.Get()); }
    }

    public void Commit()
    {
        DataContext.Commit();
    }
}

The Commit method of the UnitOfWork calls the Commit method of the MyFinanceContext class, which in turn executes the SaveChanges method of the DbContext class.

Repository class for Category
In this post, we are focusing on persistence for the Category entity and will work on the other entities in a later post. Let's create a repository for handling CRUD operations for Category by deriving from the generic RepositoryBase<T>.
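Before showing that repository, one aside: the RepositoryBase excerpt above omits the class declaration, the backing fields and the constructor. A minimal version consistent with IRepository<T> and the DatabaseFactory shown above might look like the following - an illustrative sketch, not the author's original code (the IDatabaseFactory shape is assumed from its usage):

    // Assumed from usage: DatabaseFactory implements this and hands out the context
    public interface IDatabaseFactory
    {
        MyFinanceContext Get();
    }

    // Sketch of the full base class; requires System.Collections.Generic,
    // System.Linq and System.Data.Entity
    public abstract class RepositoryBase<T> : IRepository<T> where T : class
    {
        private MyFinanceContext database;
        private readonly DbSet<T> dbset;

        protected RepositoryBase(IDatabaseFactory databaseFactory)
        {
            DatabaseFactory = databaseFactory;
            dbset = Database.Set<T>();
        }

        protected IDatabaseFactory DatabaseFactory { get; private set; }

        protected MyFinanceContext Database
        {
            get { return database ?? (database = DatabaseFactory.Get()); }
        }

        public virtual void Add(T entity) { dbset.Add(entity); }
        public virtual void Delete(T entity) { dbset.Remove(entity); }
        public virtual T GetById(long id) { return dbset.Find(id); }
        public virtual IEnumerable<T> All() { return dbset.ToList(); }
    }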
public class CategoryRepository: RepositoryBase<Category>, ICategoryRepository     {     public CategoryRepository(IDatabaseFactory databaseFactory)         : base(databaseFactory)         {         }                } public interface ICategoryRepository : IRepository<Category> { } If we need additional methods than generic repository for the Category, we can define in the CategoryRepository. Dependency Injection using Unity 2.0 If you are new to Inversion of Control/ Dependency Injection or Unity, please have a look on my articles at http://weblogs.asp.net/shijuvarghese/archive/tags/IoC/default.aspx. I want to create a custom lifetime manager for Unity to store container in the current HttpContext. public class HttpContextLifetimeManager<T> : LifetimeManager, IDisposable {     public override object GetValue()     {         return HttpContext.Current.Items[typeof(T).AssemblyQualifiedName];     }     public override void RemoveValue()     {         HttpContext.Current.Items.Remove(typeof(T).AssemblyQualifiedName);     }     public override void SetValue(object newValue)     {         HttpContext.Current.Items[typeof(T).AssemblyQualifiedName] = newValue;     }     public void Dispose()     {         RemoveValue();     } } Let’s create controller factory for Unity in the ASP.NET MVC 3 application.                 404, String.Format(                     "The controller for path '{0}' could not be found" +     "or it does not implement IController.",                 reqContext.HttpContext.Request.Path));       if (!typeof(IController).IsAssignableFrom(controllerType))         throw new ArgumentException(                 string.Format(                     "Type requested is not a controller: {0}",                     controllerType.Name),                     "controllerType");     try     {         controller= container.Resolve(controllerType) as IController;     }     catch (Exception ex)     {         throw new InvalidOperationException(String.Format(                                 "Error resolving controller {0}",                                 controllerType.Name), ex);     }     return controller; }   } Configure contract and concrete types in Unity Let’s configure our contract and concrete types in Unity for resolving our dependencies. private void ConfigureUnity() {     //Create UnityContainer               IUnityContainer container = new UnityContainer()                 .RegisterType<IDatabaseFactory, DatabaseFactory>(new HttpContextLifetimeManager<IDatabaseFactory>())     .RegisterType<IUnitOfWork, UnitOfWork>(new HttpContextLifetimeManager<IUnitOfWork>())     .RegisterType<ICategoryRepository, CategoryRepository>(new HttpContextLifetimeManager<ICategoryRepository>());                 //Set container for Controller Factory                ControllerBuilder.Current.SetControllerFactory(             new UnityControllerFactory(container)); } In the above ConfigureUnity method, we are registering our types onto Unity container with custom lifetime manager HttpContextLifetimeManager. Let’s call ConfigureUnity method in the Global.asax.cs for set controller factory for Unity and configuring the types with Unity. protected void Application_Start() {     AreaRegistration.RegisterAllAreas();     RegisterGlobalFilters(GlobalFilters.Filters);     RegisterRoutes(RouteTable.Routes);     ConfigureUnity(); } Developing web application using ASP.NET MVC 3 We have created our domain model for our web application and also have created repositories and configured dependencies with Unity container. 
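One note on the Unity wiring before moving on: the beginning of the UnityControllerFactory class is cut off in the excerpt above. The usual shape of this controller-factory pattern, matching the surviving fragment, is roughly the following - an illustrative sketch, not necessarily the author's exact code (it assumes the System.Web.Mvc, System.Web.Routing and Microsoft.Practices.Unity namespaces):

    public class UnityControllerFactory : DefaultControllerFactory
    {
        private readonly IUnityContainer container;

        public UnityControllerFactory(IUnityContainer container)
        {
            this.container = container;
        }

        protected override IController GetControllerInstance(RequestContext reqContext, Type controllerType)
        {
            IController controller;
            if (controllerType == null)
                throw new HttpException(
                    404, String.Format(
                        "The controller for path '{0}' could not be found " +
                        "or it does not implement IController.",
                        reqContext.HttpContext.Request.Path));

            if (!typeof(IController).IsAssignableFrom(controllerType))
                throw new ArgumentException(
                    string.Format("Type requested is not a controller: {0}", controllerType.Name),
                    "controllerType");
            try
            {
                // Resolve the controller (and its constructor dependencies) from the container
                controller = container.Resolve(controllerType) as IController;
            }
            catch (Exception ex)
            {
                throw new InvalidOperationException(
                    String.Format("Error resolving controller {0}", controllerType.Name), ex);
            }
            return controller;
        }
    }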
Now we have to create controller classes and views for doing CRUD operations against the Category entity. Let’s create controller class for Category Category Controller public class CategoryController : Controller {     private readonly ICategoryRepository categoryRepository;     private readonly IUnitOfWork unitOfWork;           public CategoryController(ICategoryRepository categoryRepository, IUnitOfWork unitOfWork)     {         this.categoryRepository = categoryRepository;         this.unitOfWork = unitOfWork;     }       public ActionResult Index()     {         var categories = categoryRepository.All();         return View(categories);     }     [HttpGet]     public ActionResult Edit(int id)     {         var category = categoryRepository.GetById(id);         return View(category);     }       [HttpPost]     public ActionResult Edit(int id, FormCollection collection)     {         var category = categoryRepository.GetById(id);         if (TryUpdateModel(category))         {             unitOfWork.Commit();             return RedirectToAction("Index");         }         else return View(category);                 }       [HttpGet]     public ActionResult Create()     {         var category = new Category();         return View(category);     }           [HttpPost]     public ActionResult Create(Category category)     {         if (!ModelState.IsValid)         {             return View("Create", category);         }                     categoryRepository.Add(category);         unitOfWork.Commit();         return RedirectToAction("Index");     }       [HttpPost]     public ActionResult Delete(int  id)     {         var category = categoryRepository.GetById(id);         categoryRepository.Delete(category);         unitOfWork.Commit();         var categories = categoryRepository.All();         return PartialView("CategoryList", categories);       }        } Creating Views in Razor Now we are going to create views in Razor for our ASP.NET MVC 3 application.  Let’s create a partial view CategoryList.cshtml for listing category information and providing link for Edit and Delete operations. CategoryList.cshtml @using MyFinance.Helpers; @using MyFinance.Domain; @model IEnumerable<Category>      <table>         <tr>         <th>Actions</th>         <th>Name</th>          <th>Description</th>         </tr>     @foreach (var item in Model) {             <tr>             <td>                 @Html.ActionLink("Edit", "Edit",new { id = item.CategoryId })                 @Ajax.ActionLink("Delete", "Delete", new { id = item.CategoryId }, new AjaxOptions { Confirm = "Delete Expense?", HttpMethod = "Post", UpdateTargetId = "divCategoryList" })                           </td>             <td>                 @item.Name             </td>             <td>                 @item.Description             </td>         </tr>         }       </table>     <p>         @Html.ActionLink("Create New", "Create")     </p> The delete link is providing Ajax functionality using the Ajax.ActionLink. This will call an Ajax request for Delete action method in the CategoryCotroller class. In the Delete action method, it will return Partial View CategoryList after deleting the record. We are using CategoryList view for the Ajax functionality and also for Index view using for displaying list of category information. 
Let's create the Index view using the partial view CategoryList.

Index.cshtml
@model IEnumerable<MyFinance.Domain.Category>
@{
    ViewBag.Title = "Index";
}
<h2>Category List</h2>
<script src="@Url.Content("~/Scripts/jquery.unobtrusive-ajax.min.js")" type="text/javascript"></script>
<div id="divCategoryList">
    @Html.Partial("CategoryList", Model)
</div>

We can call partial views using the Html.Partial helper method. Now we are going to create the view pages for the insert and update functionality for the Category. Both view pages share a common user interface for entering the category information, so I want to create an EditorTemplate for the Category information. We have to create the EditorTemplate with the same name as the entity object so that we can refer to it on view pages using @Html.EditorFor(model => model). So let's create a template named Category.

Category.cshtml
@model MyFinance.Domain.Category
<div class="editor-label">
    @Html.LabelFor(model => model.Name)
</div>
<div class="editor-field">
    @Html.EditorFor(model => model.Name)
    @Html.ValidationMessageFor(model => model.Name)
</div>
<div class="editor-label">
    @Html.LabelFor(model => model.Description)
</div>
<div class="editor-field">
    @Html.EditorFor(model => model.Description)
    @Html.ValidationMessageFor(model => model.Description)
</div>

Let's create the view page for inserting Category information.

@model MyFinance.Domain.Category
@{
    ViewBag.Title = "Save";
}
<h2>Create</h2>
<script src="@Url.Content("~/Scripts/jquery.validate.min.js")" type="text/javascript"></script>
<script src="@Url.Content("~/Scripts/jquery.validate.unobtrusive.min.js")" type="text/javascript"></script>
@using (Html.BeginForm()) {
    @Html.ValidationSummary(true)
    <fieldset>
        <legend>Category</legend>
        @Html.EditorFor(model => model)
        <p>
            <input type="submit" value="Create" />
        </p>
    </fieldset>
}
<div>
    @Html.ActionLink("Back to List", "Index")
</div>

ViewStart file
In Razor views, we can add a file named _ViewStart.cshtml in the Views directory, and it will be shared among all the views within the Views directory. The code below in _ViewStart.cshtml sets the Layout page for every view in the Views folder.

@{
    Layout = "~/Views/Shared/_Layout.cshtml";
}

Tomorrow, we will continue with the second part of this article. :)

    Read the article

  • Software index broken

    - by Arvind Gangwar
    When I was installing MTS Mblaz crossplatformui.deb for MTS data connect, its installed partial and shows error, and So I tried to uninstall "crossplatformui" but every time it showed following error. installArchives() failed: perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 205769 files and directories currently installed.) Removing crossplatformui ... ztemtvcdromd: no process found dpkg: error processing crossplatformui (--remove): subprocess installed post-removal script returned error exit status 1 No apport report written because MaxReports is reached already Errors were encountered while processing: crossplatformui Setting up firmware-b43-installer (4.150.10.5-5) ... --2012-06-01 14:11:21-- http://mirror2.openwrt.org/sources/broadcom-wl-4.150.10.5.tar.bz2 Resolving mirror2.openwrt.org... 46.4.11.11 Connecting to mirror2.openwrt.org|46.4.11.11|:80... failed: Connection refused. dpkg: error processing firmware-b43-installer (--configure): subprocess installed post-installation script returned error exit status 4

    Read the article

  • Using Deployment Manager

    - by Jess Nickson
    One of the teams at Red Gate has been working very hard on a new product: Deployment Manager. Deployment Manager is a free tool that lets you deploy updates to .NET apps, services and databases through a central dashboard. Deployment Manager has been out for a while, but I must admit that even though I work in the same building, until now I hadn’t even looked at it. My job at Red Gate is to develop and maintain some of our community sites, which involves carrying out regular deployments. One of the projects I have to deploy on a fairly regular basis requires me to send my changes to our build server, TeamCity. The output is a Zip file of the build. I then have to go and find this file, copy it across to the staging machine, extract it, and copy some of the sub-folders to other places. In order to keep track of what builds are running, I need to rename the folders accordingly. However, even after all that, I still need to go and update the site and its applications in IIS to point at these new builds. Oh, and then, I have to repeat the process when I deploy on production. Did I mention the multiple configuration files that then need updating as well? Manually? The whole process can take well over half an hour. I’m ready to try out a new process. Deployment Manager is designed to massively simplify the deployment processes from what could be lots of manual copying of files, managing of configuration files, and database upgrades down to a few clicks. It’s a big promise, but I decided to try out this new tool on one of the smaller ASP.NET sites at Red Gate, Format SQL (the result of a Red Gate Down Tools week). I wanted to add some new functionality, but given it was a new site with no set way of doing things, I was reluctant to have to manually copy files around servers. I decided to use this opportunity as a chance to set the site up on Deployment Manager and check out its functionality. What follows is a guide on how to get set up with Deployment Manager, a brief overview of its features, and what I thought of the experience. To follow along with the instructions that follow, you’ll first need to download Deployment Manager from Red Gate. It has a free ‘Starter Edition’ which allows you to create up to 5 projects and agents (machines you deploy to), so it’s really easy to get up and running with a fully-featured version. The Initial Set Up After installing the product and setting it up using the administration tool it provides, I launched Deployment Manager by going to the URL and port I had set it to run on. This loads up the main dashboard. The dashboard does a good job of guiding me through the process of getting started, beginning with a prompt to create some environments. 1. Setting up Environments The dashboard informed me that I needed to add new ‘Environments’, which are essentially ways of grouping the machines you want to deploy to. The environments that get added will show up on the main dashboard. I set up two such environments for this project: ‘staging’ and ‘live’.   2. Add Target Machines Once I had created the environments, I was ready to add ‘target machine’s to them, which are the actual machines that the deployment will occur on.   To enable me to deploy to a new machine, I needed to download and install an Agent on it. The ‘Add target machine’ form on the ‘Environments’ page helpfully provides a link for downloading an Agent.   
Once the agent has been installed, it is just a case of copying the server key to the agent, and the agent key to the server, to link them up.   3. Run Health Check If, after adding your new target machine, the ‘Status’ flags an error, it is possible that the Agent and Server keys have not been entered correctly on both Deployment Manager and the Agent service.     You can ‘Check Health’, which will give you more information on any issues. It is probably worth running this regardless of what status the ‘Environments’ dashboard is claiming, just to be on the safe side.     4. Add Projects Going back to the main Dashboard tab at this point, I found that it was telling me that I needed to set up a new project.   I clicked the ‘project’ link to get started, gave my new project a name and clicked ‘Create’. I was then redirected to the ‘Steps’ page for the project under the Projects tab.   5. Package Steps The ‘Steps’ page was fairly empty when it first loaded.   Adding a ‘step’ allowed me to specify what packages I wanted to grab for the deployment. This part requires a NuGet package feed to be set up, which is where Deployment Manager will look for the packages. At Red Gate, we already have one set up, so I just needed to tell Deployment Manager about it. Don’t worry; there is a nice guide included on how to go about doing all of this on the ‘Package Feeds’ page in ‘Settings’, if you need any help with setting these bits up.    At Red Gate we use a build server, TeamCity, which is capable of publishing built projects to the NuGet feed we use. This makes the workflow for Format SQL relatively simple: when I commit a change to the project, the build server is configured to grab those changes, build the project, and spit out a new NuGet package to the Red Gate NuGet package feed. My ‘package step’, therefore, is set up to look for this package on our feed. The final part of package step was simply specifying which machines from what environments I wanted to be able to deploy the project to.     Format SQL Now the main Dashboard showed my new project and environment in a rather empty looking grid. Clicking on my project presented me with a nice little message telling me that I am now ready to create my first release!   Create a release Next I clicked on the ‘Create release’ button in the Projects tab. If your feeds and package step(s) were set up correctly, then Deployment Manager will automatically grab the latest version of the NuGet package that you want to deploy. As you can see here, it was able to pick up the latest build for Format SQL and all I needed to do was enter a version number and description of the release.   As you can see underneath ‘Version number’, it keeps track of what version the previous release was given. Clicking ‘Create’ created the release and redirected me to a summary of it where I could check the details before deploying.   I clicked ‘Deploy this release’ and chose the environment I wanted to deploy to and…that’s it. Deployment Manager went off and deployed it for me.   Once I clicked ‘Deploy release’, Deployment Manager started to automatically update and provide continuing feedback about the process. If any errors do arise, then I can expand the results to see where it went wrong. That’s it, I’m done! Keep in mind, if you hit errors with the deployment itself then it is possible to view the log output to try and determine where these occurred. You can keep expanding the logs to narrow down the problem. 
The screenshot below is not from my Format SQL deployment, but I thought I’d post one to demonstrate the logging output available. Features One of the best bits of Deployment Manager for me is the ability to very, very easily deploy the same release to multiple machines. Deploying this same release to production was just a case of selecting the deployment and choosing the ‘live’ environment as the place to deploy to. Following on from this is the fact that, as Deployment Manager keeps track of all of your releases, it is extremely easy to roll back to a previous release if anything goes pear-shaped! You can view all your previous releases and select one to re-deploy. I needed this feature more than once when differences in my production and staging machines lead to some odd behavior.     Another option is to use the TeamCity integration available. This enables you to set Deployment Manager up so that it will automatically create releases and deploy these to an environment directly from TeamCity, meaning that you can always see the latest version up and running without having to do anything. Machine Specific Deployments ‘What about custom configuration files?’ I hear you shout. Certainly, it was one of my concerns. Our setup on the staging machine is not in line with that on production. What this means is that, should we deploy the same configuration to both, one of them is going to break. Thankfully, it turns out that Deployment Manager can deal with this. Given I had environments ‘staging’ and ‘live’, and that staging used the project’s web.config file, while production (‘live’) required the config file to undergo some transformations, I simply added a web.live.config file in the project, so that it would be included as part of the NuGet package. In this file, I wrote the XML document transformations I needed and Deployment Manager took care of the rest. Another option is to set up ‘variables’ for your project, which allow you to specify key-value pairs for your configuration file, and which environment to apply them to. You’ll find Variables as a full left-hand submenu within the ‘Projects’ tab. These features will definitely be of interest if you have a large number of environments! There are still many other features that I didn’t get a chance to play around with like running PowerShell scripts for more personalised deployments. Maybe next time! Also, let’s not forget that my use case in this article is a very simple one – deploying a single package. I don’t believe that all projects will be equally as simple, but I already appreciate how much easier Deployment Manager could make my life. I look forward to the possibility of moving our other sites over to Deployment Manager in the near future.   Conclusion In this article I have described the steps involved in setting up and configuring an instance of Deployment Manager, creating a new automated deployment process, and using this to actually carry out a deployment. I’ve tried to mention some of the features I found particularly useful, such as error logging, easy release management allowing you to deploy the same release multiple times, and configuration file transformations. If I had to point out one issue, then it would be that the releases are immutable, which from a development point of view makes sense. However, this causes confusion where I have to create a new release to deploy to a newly set up environment – I cannot simply deploy an old release onto a new environment, the whole release needs to be recreated. 
I really liked how easy it was to get going with the product. Setting up Format SQL and making a first deployment took very little time. Especially when you compare it to how long it takes me to manually deploy the other site, as I described earlier. I liked how it let me know what I needed to do next, with little messages flagging up that I needed to ‘create environments’ or ‘add some deployment steps’ before I could continue. I found the dashboard incredibly convenient. As the number of projects and environments increase, it might become awkward to try and search them and find out what state they are in. Instead, the dashboard handily keeps track of the latest deployments of each project and lets you know what version is running on each of the environments, and when that deployment occurred. Finally, do you remember my complaint about having to rename folders so that I could keep track of what build they came from? This is yet another thing that Deployment Manager takes care of for you. Each release is put into its own directory, which takes the name of whatever version number that release has, though these can be customised if necessary. If you’d like to take a look at Deployment Manager for yourself, then you can download it here.
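A small footnote on the configuration transforms mentioned in the "Machine Specific Deployments" section above: a web.live.config file uses the standard XML Document Transform (XDT) syntax. A minimal, hypothetical example (the connection string name and values are made up, not taken from the Format SQL project) could look like this:

    <?xml version="1.0"?>
    <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
      <connectionStrings>
        <!-- On the 'live' environment, replace the staging connection string attributes -->
        <add name="FormatSqlDb"
             connectionString="Server=LiveSqlServer;Database=FormatSql;Integrated Security=SSPI"
             xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
      </connectionStrings>
    </configuration>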

    Read the article

  • Entity Framework 4.3.1 Code based Migrations and Connector/Net 6.6

    - by GABMARTINEZ
     Code-based migrations is a new feature as part of the Connector/Net support for Entity Framework 4.3.1. In this tutorial we'll see how we can use it so we can keep track of the changes done to our database creating a new application using the code first approach. If you don't have a clear idea about how code first works we highly recommend you to check this subject before going further with this tutorial. Creating our Model and Database with Code First  From VS 2010  1. Create a new console application 2.  Add the latest Entity Framework official package using Package Manager Console (Tools Menu, then Library Package Manager -> Package Manager Console). In the Package Manager Console we have to type  Install-Package EntityFramework This will add the latest version of this library.  We will also need to make some changes to your config file. A <configSections> was added which contains the version you have from EntityFramework.  An <entityFramework> section was also added where you can set up some initialization. This section is optional and by default is generated to use SQL Express. Since we don't need it for now (we'll see more about it below) let's leave this section empty as shown below. 3. Create a new Model with a simple entity. 4. Enable Migrations to generate the our Configuration class. In the Package Manager Console we have to type  Enable-Migrations; This will make some changes in our application. It will create a new folder called Migrations where all the migrations representing the changes we do to our model.  It will also create a Configuration class that we'll be using to initialize our SQL Generator and some other values like if we want to enable Automatic Migrations.  You can see that it already has the name of our DbContext. You can also create you Configuration class manually. 5. Specify our Model Provider. We need to specify in our Class Configuration that we'll be using MySQLClient since this is not part of the generated code. Also please make sure you have added the MySql.Data and the MySql.Data.Entity references to your project. using MySql.Data.Entity;   // Add the MySQL.Data.Entity namespace public Configuration() { this.AutomaticMigrationsEnabled = false; SetSqlGenerator("MySql.Data.MySqlClient", new MySql.Data.Entity.MySqlMigrationSqlGenerator());    // This will add our MySQLClient as SQL Generator } 6. Add our Data Provider and set up our connection string <connectionStrings> <add name="PersonalContext" connectionString="server=localhost;User Id=root;database=Personal;" providerName="MySql.Data.MySqlClient" /> </connectionStrings> <system.data> <DbProviderFactories> <remove invariant="MySql.Data.MySqlClient" /> <add name="MySQL Data Provider" invariant="MySql.Data.MySqlClient" description=".Net Framework Data Provider for MySQL" type="MySql.Data.MySqlClient.MySqlClientFactory, MySql.Data, Version=6.6.2.0, Culture=neutral, PublicKeyToken=c5687fc88969c44d" /> </DbProviderFactories> </system.data> * The version recommended to use of Connector/Net is 6.6.2 or earlier. At this point we can create our database and then start working with Migrations. So let's do some data access so our database get's created. You can run your application and you'll get your database Personal as specified in our config file. Add our first migration Migrations are a great resource as we can have a record for all the changes done and will generate the MySQL statements required to apply these changes to the database. 
Let's add a new property to our Person class:

public string Email { get; set; }

If you try to run your application it will throw an exception saying: The model backing the 'PersonelContext' context has changed since the database was created. Consider using Code First Migrations to update the database (http://go.microsoft.com/fwlink/?LinkId=238269). So, as suggested, let's add our first migration for this change. In the Package Manager Console type:

Add-Migration AddEmailColumn

Now we have the corresponding class, which generates the necessary operations to update our database:

namespace MigrationsFromScratch.Migrations
{
    using System.Data.Entity.Migrations;
    public partial class AddEmailColumn : DbMigration
    {
        public override void Up()
        {
            AddColumn("People", "Email", c => c.String(unicode: false));
        }
        public override void Down()
        {
            DropColumn("People", "Email");
        }
    }
}

In the Package Manager Console type:

Update-Database

Now you can check your database to see that all changes were successfully applied. Next, let's add a second change and generate our second migration:

public class Person
{
    [Key]
    public int PersonId { get; set; }
    public string Name { get; set; }
    public string Address { get; set; }
    public string Email { get; set; }
    public List<Skill> Skills { get; set; }
}
public class Skill
{
    [Key]
    public int SkillId { get; set; }
    public string Description { get; set; }
}
public class PersonelContext : DbContext
{
    public DbSet<Person> Persons { get; set; }
    public DbSet<Skill> Skills { get; set; }
}

If you would like to customize any part of this code you can do that at this step. You can see there is the Up method, which can update your database, and the Down method, which can revert the changes. If you customize any code you should make sure to customize it in both methods. Now let's apply this change:

Update-Database -verbose

I added the verbose flag so you can see all the generated SQL statements to be run.

Downgrading changes
So far we have always upgraded to the latest migration, but there may be times when you want to downgrade to a specific migration. Let's say we want to return to the state we had before our last migration. We can use the -TargetMigration option to specify the migration we'd like to return to. You can also use the -verbose flag here. If you would like to go back to the initial state you can do:

Update-Database -TargetMigration:$InitialDatabase

or, equivalently:

Update-Database -TargetMigration:0

By default, Migrations will not allow a migration that would result in data loss. One case where you can get this message is, for example, a DropColumn operation. You can override this by setting AutomaticMigrationDataLossAllowed to true in the configuration class. You can also set a database initializer if you want these migrations to be applied automatically, so you don't have to go through creating a migration and updating the database for every change. Let's see how.

Database Initialization by Code
We can specify an initialization strategy by using Database.SetInitializer (http://msdn.microsoft.com/en-us/library/gg679461(v=vs.103)). One of the strategies that I found very useful at the development stage (I mean not for production) is MigrateDatabaseToLatestVersion. This strategy will apply all the necessary migrations each time there is a change in our model that needs to be reflected in the database; this also implies that we have to enable the AutomaticMigrationsEnabled flag in our Configuration class.
public Configuration()         {             AutomaticMigrationsEnabled = true;             AutomaticMigrationDataLossAllowed = true;             SetSqlGenerator("MySql.Data.MySqlClient", new MySql.Data.Entity.MySqlMigrationSqlGenerator());    // This will add our MySQLClient as SQL Generator          } In the new EntityFramework section of your Config file we can set this at a context level basis.  The syntax is as follows: <contexts> <context type="Custom DbContext name, Assembly name"> <databaseInitializer type="System.Data.Entity.MigrateDatabaseToLatestVersion`2[[ Custom DbContext name, Assembly name],  [Configuration class name, Assembly name]],  EntityFramework" /> </context> </contexts> In our example this would be: The syntax is kind of odd but very convenient. This way all changes will always be applied when we do any data access in our application. There are a lot of new things to explore in EF 4.3.1 and Migrations so we'll continue writing some more posts about it. Please let us know if you have any questions or comments, also please check our forums here where we keep answering questions in general for the community.  Hope you found this information useful. Happy MySQL/.Net Coding! 
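As a footnote to the "Database Initialization by Code" section above: the same MigrateDatabaseToLatestVersion strategy can also be wired up in code rather than in the config file. A one-line sketch using the PersonelContext and Configuration classes from this post (typically placed at application startup, before the context is first used; requires the System.Data.Entity namespace):

    Database.SetInitializer(
        new MigrateDatabaseToLatestVersion<PersonelContext, Configuration>());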

    Read the article

  • MobaXTerm - SSH Key authentication

    - by Chip Sprague
    I have a key that I converted and works fine with Putty. I have tried these formats: ssh -p 1111 -i id_rsa [email protected] ssh -i id_rsa -p 1111 [email protected] The key is in the same folder as the MobaXTerm executable. Thanks! EDIT: [chip.client] $ ssh -p 1111 -i id_rsa [email protected] -v Warning: Identity file id_rsa not accessible: No such file or directory. OpenSSH_5.6p1, OpenSSL 0.9.8r 8 Feb 2011 debug1: Reading configuration data /etc/ssh_config debug1: Connecting to 192.168.0.9 [192.168.0.100] port 1111. debug1: Connection established. debug1: identity file /home/chip/.ssh/id_rsa type -1 debug1: identity file /home/chip/.ssh/id_rsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3p1 Debian-3ubuntu7 debug1: match: OpenSSH_5.3p1 Debian-3ubuntu7 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.6 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 [email protected] debug1: kex: client->server aes128-ctr hmac-md5 [email protected] debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: checking without port identifier Warning: Permanently added '[192.168.0.100]:1111' (RSA) to the list of known hosts. debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey debug1: Next authentication method: publickey debug1: Trying private key: /home/chip/.ssh/id_rsa debug1: No more authentication methods to try. Permission denied (publickey). [01/09/2011 - 09:15.38] ~

    Read the article

  • How do I configure permissions for a cluster share using Powershell on 2008?

    - by Andrew J. Brehm
    I have a cluster resource of type "file share" but when I try to configure the "security" parameter I get the following error (excerpt): Set-ClusterParameter : Parameter 'security' does not exist on the cluster object Using cluster.exe I get a better result, namely the usual nothing when the command worked. But when I check in Failover Cluster Manager the permissions have not changed. In Server 2003 the cluster.exe method worked. Any ideas? Update: Entire command and error. PS C:\> $resource=get-clusterresource testshare PS C:\> $resource Name State Group ResourceType ---- ----- ----- ------------ testshare Offline Test File Share PS C:\> $resource|set-clusterparameter security "domain\account,grant,f" Set-ClusterParameter : Parameter 'security' does not exist on the cluster object 'testshare'. If you are trying to upda te an existing parameter, please make sure the parameter name is specified correctly. You can check for the current par ameters by passing the .NET object received from the appropriate Get-Cluster* cmdlet to "| Get-ClusterParameter". If yo u are trying to update a common property on the cluster object, you should set the property directly on the .NET object received by the appropriate Get-Cluster* cmdlet. You can check for the current common properties by passing the .NET o bject received from the appropriate Get-Cluster* cmdlet to "| fl *". If you are trying to create a new unknown paramete r, please use -Create with this Set-ClusterParameter cmdlet. At line:1 char:31 + $resource|set-clusterparameter <<<< security "domain\account,grant,f" + CategoryInfo : NotSpecified: (:) [Set-ClusterParameter], ClusterCmdletException + FullyQualifiedErrorId : Set-ClusterParameter,Microsoft.FailoverClusters.PowerShell.SetClusterParameterCommand

    Read the article

  • foswiki: hide some topic info when editing in WYSWYG mode.

    - by Mica
    I have a FOSWiki installation with a bunch of Topic templates already defined. the problem is, when a user selects the topic, they are presented with a bunch of extra information that they should not edit, and should not even see really. Is there a way to hide this content in the WYSWYG editor? Example: The topic template looks like this: <!-- * Foswiki.GenPDFAddOn Settings * Set GENPDFADDON_TITLE = <font size="7"><center>Foo</center></font> * Set GENPDFADDON_HEADFOOTFONT = helvetica * Set GENPDFADDON_FORMAT = pdf14 * Set GENPDFADDON_PERMISSIONS = print,no-copy * Set GENPDFADDON_ORIENTATION = portrait * Set GENPDFADDON_PAGESIZE = letter * Set GENPDFADDON_TOCLEVELS = 0 * Set GENPDFADDON_HEADERSHIFT = 0 --> <!-- PDFSTART --> <!-- HEADER LEFT "Foo:Bar" --> <!-- HEADER RIGHT "%BASETOPIC%" --> <!-- HEADER CENTER " " --> <!-- FOOTER RIGHT "Doc Rev %REVINFO{"r$rev - $date " web="%WEB%" topic="%BASETOPIC%"}%" --> <!-- FOOTER LEFT "F-xxx Rev A" --> <!-- FOOTER CENTER "Page $PAGE(1)" --> Header 1 foo etc. etc. etc <!-- pdfstop --> And when the user selects the topic template, they get all that in the WYSWYG editor. I would like to hide all that so when the user selects the topic template, they get Header 1 foo etc etc etc Without any of the other mark-up.

    Read the article

  • Robocopy silently missing files

    - by John Hunt
    I'm using Robocopy to sync data from our server's hard disk to an external disk as a backup. It's a pretty simple solution but pretty much the best/easiest one we could come up with - we use two external disks and rotate them offsite. Anyway, here's the script (with the comments taken out) that I'm using to do it. It works very well, it's quick and almost 100% complete - however it's acting pretty strange with a few files (note company name has been changed in paths to protect the innocent): @ECHO OFF set DATESTAMP=%DATE:~10,4%/%DATE:~4,2%/%DATE:~7,2% %TIME:~0,2%:%TIME:~3,2%:%TIME:~6,2% SET prefix="E:\backup_log-" SET source_dir="M:\Company Names Data\Working Folder\_ADMIN_BACKUP_FILES\COMPA AANY Business Folder_Backup_040407\COMPANY_sales order register\BACKUP CLIENT FOLDERS & CURRENT JOBS pre 270404\CLIENT SALES ORDER REGISTER" SET dest_dir="E:\dest" SET log_fname=%prefix%%date:~-4,4%%date:~-10,2%%date:~-7,2%.log SET what_to_copy=/COPY:DAT /MIR SET options=/R:0 /W:0 /LOG+:%log_fname% /NFL /NDL ROBOCOPY %source_dir% %dest_dir% %what_to_copy% %options% set DATESTAMP=%DATE:~10,4%/%DATE:~4,2%/%DATE:~7,2% %TIME:~0,2%:%TIME:~3,2%:%TIME:~6,2% cscript msg.vbs "Backup completed at %DATESTAMP% - Logs can be found on the E: drive." :END Normally the source would just be M:\Comapany name data\ but I altered the script a bit to test the problem. The following files in the source are not copied to the dest: Someclient\SONICP~1.DOC Someclient\SONICP~2.DOC Someclient\SONICP~3.DOC However, files in the same directory named: TIMESH~1.XLS TIMESH~2.XLS are copied. I'm able to open the files that aren't copied with no trouble at all, and they certainly weren't opened when I ran robocopy so it's not a locking issue. Robocopy is running as administrator so it's not a permissions issue. There's no trace these files were even attempted to be copied as there are no errors being output in the log or in my command prompt. Does anyone have any suggestions as to what this might be? Busted hard disk? Cheers, John.

    Read the article

  • MySQL Error: FUNCTION LEVENSHTEIN already exists

    - by kgrote
    I've got an ExpressionEngine database and I exported a couple of tables from it, then dropped those tables. When I try to re-import the tables in PHPMyAdmin, I get this error: SQL query: -- -- Database: `my_db` -- DELIMITER $$ -- -- Functions -- CREATE DEFINER=`my_username`@`%` FUNCTION `LEVENSHTEIN`(s1 VARCHAR(255), s2 VARCHAR(255)) RETURNS int(11) DETERMINISTIC BEGIN DECLARE s1_len, s2_len, i, j, c, c_temp, cost INT; DECLARE s1_char CHAR; DECLARE cv0, cv1 VARBINARY(256); SET s1_len = CHAR_LENGTH(s1), s2_len = CHAR_LENGTH(s2), cv1 = 0x00, j = 1, i = 1, c = 0; IF s1 = s2 THEN RETURN 0; ELSEIF s1_len = 0 THEN RETURN s2_len; ELSEIF s2_len = 0 THEN RETURN s1_len; ELSE WHILE j <= s2_len DO SET cv1 = CONCAT(cv1, UNHEX(HEX(j))), j = j + 1; END WHILE; WHILE i <= s1_len DO SET s1_char = SUBSTRING(s1, i, 1), c = i, cv0 = UNHEX(HEX(i)), j = 1; WHILE j <= s2_len DO SET c = c + 1; IF s1_char = SUBSTRING(s2, j, 1) THEN SET cost = 0; ELSE SET cost = 1; END IF; SET c_temp = CONV(HEX(SUBSTRING(cv1, j, 1)), 16, 10) + cost; IF c > c_temp THEN SET c = [...] MySQL said: Documentation #1304 - FUNCTION LEVENSHTEIN already exists I get this error even if I drop all tables from the DB and try to import anything. The only way I can get the error to go away is to totally delete the database and re-create it. What's causing that error and how can I stop it from happening?

    Read the article

  • Cisco ASA Site-to-Site VPN Dropping

    - by ScottAdair
    I have three sites, Toronto (1.1.1.1), Mississauga (2.2.2.2) and San Francisco (3.3.3.3). All three sites have ASA 5520. All the sites are connected together with two site-to-site VPN links between each other location. My issue is that the tunnel between Toronto and San Francisco is very unstable, dropping every 40 min to 60 mins. The tunnel between Toronto and Mississauga (which is configured in the same manner) is fine with no drops. I also noticed that my pings with drop but the ASA thinks that the tunnel is still up and running. Here is the configuration of the tunnel. Toronto (1.1.1.1) crypto map Outside_map 1 match address Outside_cryptomap crypto map Outside_map 1 set peer 3.3.3.3 crypto map Outside_map 1 set ikev1 transform-set ESP-AES-256-MD5 ESP-AES-256-SHA crypto map Outside_map 1 set ikev2 ipsec-proposal AES256 group-policy GroupPolicy_3.3.3.3 internal group-policy GroupPolicy_3.3.3.3 attributes vpn-idle-timeout none vpn-tunnel-protocol ikev1 ikev2 tunnel-group 3.3.3.3 type ipsec-l2l tunnel-group 3.3.3.3 general-attributes default-group-policy GroupPolicy_3.3.3.3 tunnel-group 3.3.3.3 ipsec-attributes ikev1 pre-shared-key ***** isakmp keepalive disable ikev2 remote-authentication pre-shared-key ***** ikev2 local-authentication pre-shared-key ***** San Francisco (3.3.3.3) crypto map Outside_map0 2 match address Outside_cryptomap_1 crypto map Outside_map0 2 set peer 1.1.1.1 crypto map Outside_map0 2 set ikev1 transform-set ESP-AES-256-MD5 ESP-AES-256-SHA crypto map Outside_map0 2 set ikev2 ipsec-proposal AES256 group-policy GroupPolicy_1.1.1.1 internal group-policy GroupPolicy_1.1.1.1 attributes vpn-idle-timeout none vpn-tunnel-protocol ikev1 ikev2 tunnel-group 1.1.1.1 type ipsec-l2l tunnel-group 1.1.1.1 general-attributes default-group-policy GroupPolicy_1.1.1.1 tunnel-group 1.1.1.1 ipsec-attributes ikev1 pre-shared-key ***** isakmp keepalive disable ikev2 remote-authentication pre-shared-key ***** ikev2 local-authentication pre-shared-key ***** I'm at a loss. Any ideas?

    Read the article

  • MySQL unstable behavior with external stroge

    - by onurozcelik
In our project we are using a MySQL 5.0.90 (InnoDB engine) server with external storage. We store the MySQL data files on the external storage. When the external storage goes down for some reason we see unstable behaviour, so we ran some tests. In Windows Server 2008: We disconnected the external storage physically. The MySQL service stopped and we could not reach the server. Then we brought the storage unit back and could not start the service until we reconfigured MySQL. We took the storage unit offline from the operating system. After 3-4 minutes and some insert attempts (some of which succeeded) the MySQL service stopped and we could not reach the server. Then we brought the storage unit online and could start the service (though not automatically). In Windows Server 2003: We took the storage unit offline. After 3-4 minutes and some insert attempts (some of which succeeded) the MySQL service stopped and we could not reach the server. Then we brought the storage unit online but could not start the service until we reinstalled MySQL. Before reinstalling we tried to reconfigure, but it did not work. We disconnected the external storage physically. The MySQL service stopped and we could not reach the server. After that we reconnected the storage unit and could start the service (not automatically). We expect the service to autostart once the storage unit is back online/open, but these tests show unstable behaviour. Are there any solutions to this?

    Read the article

  • mysql INNODB inserts very slow

    - by 133794m3r
    The database's schema is as follows. CREATE TABLE `items` ( `id` mediumint( 8 ) unsigned NOT NULL AUTO_INCREMENT , `name` varchar( 45 ) NOT NULL , `main_type` tinyint( 4 ) NOT NULL , `rarity` tinyint( 4 ) NOT NULL , `stack_size` smallint( 6 ) NOT NULL , `sub_type` tinyint( 4 ) NOT NULL , `cost` mediumint( 8 ) unsigned NOT NULL , `ilvl` smallint( 6 ) unsigned NOT NULL DEFAULT '0', `flavor_text` varchar( 250 ) NOT NULL , `rlvl` tinyint( 3 ) unsigned NOT NULL , `final` tinyint( 4 ) NOT NULL DEFAULT '0', PRIMARY KEY ( `id` ) ) ENGINE = InnoDB DEFAULT CHARSET = ascii; Now, doing an insert on this table takes 0.22 seconds. I don't know why it's taking so long to do a single row insert. Reads are really really fast something like 0.005 seconds. With using the example configuration from here dev mysql innodb it averages ~0.002 to ~0.005 seconds. Why it takes more than 100x more time to do a single insert makes no sense to me. My computer is as follows. OS:Debian Sid x86-x64, Mysql 5.1, RAM:4GB ddr2, cpu 2.0Ghz dual core, HDD 7200RPM 32MB cache 640GB. Why it's taking almost 100x as much time for a SELECT * FROM items; vs INSERT INTO items ...; will never make any sense to me. It's still a small table at only 70 rows, and took that long even when it had 0 rows.

    Read the article

  • IIS 7.5 Warning : Cannot verify access to the path

    - by Mostafa
I'm a newbie with IIS 7.5. Before this I used to run ASP.NET websites under IIS 5, which was easy. I'm trying to run a very simple ASP.NET website (just created a new website from VS 2010 targeting .NET 3.5) in IIS 7.5.7600 on Windows 7 Ultimate 64-bit. While adding the application, during Test Settings I receive one warning that says: The server is configured to use pass-through authentication with a built-in account to access the specified physical path. However, IIS Manager cannot verify whether the built-in account has access. Make sure that the application pool identity has Read access to the physical path. If this server is joined to a domain, and the application pool identity is NetworkService or LocalSystem, verify that \$ has Read access to the physical path. Then test these settings again. But I don't know how to make sure the application pool identity has Read access to the physical path. I'm wondering if there is any step-by-step article or something that shows me the walkthrough for running a simple ASP.NET website on IIS 7.5? I appreciate any help.

    Read the article

  • How do I clear out the ssh-agent entries (on Mac OS X )?

    - by cwd
    I'm running Mac OS X, and it appears that after SSHing to several machines, using identity files, my 'ssh-agent' builds up a lot of identity / keys and then sometimes offers too many to a remote machine, causing them to kick me off before connecting: Received disconnect from 10.12.10.16: 2: Too many authentication failures for cwd It's pretty obvious what's happening, and this page talks about it in more detail: SSH servers only allow you to attempt to authenticate a certain number of times. Each failed password attempt, each failed pubkey/identity that is offered, etc, take up one of these attempts. If you have a lot of SSH keys in your agent, you may find that an SSH server may kick you out before allowing you to attempt password authentication at all. If this is the case, there are a few different workarounds. Rebooting clears the agent and then everything works OK again. I can also add this line to my .ssh/config file to force it to use password authentication: PreferredAuthentications keyboard-interactive,password Anyhow, I saw the note on the page I referenced talking about deleting keys from the agent, but I'm not sure if that applies on a Mac since they appear to be cleared after reboot anyhow. Is there a simple way to clear out all keys in the 'ssh-agent' (the same thing that happens at reboot)?

    Read the article

  • Is it possible to add/register an MIB for the Windows built-in SNMP service?

    - by michielvoo
    I need to build monitoring into an existing .NET application. I will use SNMP to send the application's status to the Windows SNMP service. I have used a .NET library to create the SNMP SET request according to the MIB that I have been provided with, and with the correct community. My code now sends multiple 'variables' in a SET request, for example: Id: ".1.3.6.1.4.1.43607.1.1.1.1.1" (ObjectIdentifier) Data: 42 (Integer32) On my machine I have enabled the SNMP service, configured a community with READ/WRITE permissions, and added localhost to the list of hosts to accept requests from. When I send the SET request I get a response, but it has error status 17 which, according to MSDN means SNMP_ERRORSTATUS_NOTWRITABLE. The response also has error index set to 8, which is the number of variables I send. If I send 7 variables, the error index is set to 7. I think the problem is that the Windows SNMP service is preconfigured to only accept SET requests for a fixed set of MIBs. How can I get the Windows SNMP service to 'accept' my custom MIB SET request? Edit: I downloaded and installed the Windows Server 2003 Resource Kit and tried to 'compile' the MIB file with mibcc.exe ("SNMP MIB Compiler") but I have not been able to compile any MIB files (even the most basic ones like SNMPv2-SMI.mib).

    Read the article

  • Losing Windows Authentication intermittently

    - by Mark Robinson
    I'm running a website on IIS6 / Server 2003 which uses Integrated Windows Authentication on a local intranet. I can browse to the site but get intermittent "Object null" errors when calling the following C# code which is called on every request: .... GetUserIdFromPrincipal(User) .... public static string GetUserIdFromPrincipal(IPrincipal principal) { return principal.Identity is WindowsIdentity ? (principal.Identity as WindowsIdentity).User.Value : principal.Identity.Name; } (Apologies for the code sample but was torn between SF and SO but think this is more to do with server config). So as the error is intermittent clearly Windows Auth is working on some level but after navigating around the site for several clicks I get the null reference error meaning IPrincipal is null (I thought this should never be null in ASP.NET). The error only happens on this newly built VM. The code is fine on other machines. Does IIS request the Windows Auth details on each request? What would cause such an intermittent problem? Any help or suggestions would be much appreciated.
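For reference, a minimal defensive rewrite of that helper (using only the System.Security.Principal types already in the question) that avoids the null reference and makes the failing case visible; it is a sketch only and does not address the underlying IIS/authentication configuration:

    using System.Security.Principal;

    public static string GetUserIdFromPrincipal(IPrincipal principal)
    {
        // Guard against the intermittent case where ASP.NET supplies no principal,
        // or an identity whose SID has not been resolved, instead of throwing.
        if (principal == null || principal.Identity == null)
            return string.Empty;

        var windowsIdentity = principal.Identity as WindowsIdentity;
        if (windowsIdentity != null && windowsIdentity.User != null)
            return windowsIdentity.User.Value;

        return principal.Identity.Name;
    }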

    Read the article

  • Powershell script to delete secondary SMTP addresses of Exchange 2010 Mail Contacts

    - by Zero Subnet
    I have a few thousand Exchange 2010 Mail Contacts who get erroneously assigned internal SMTP addresses by the default recipient policy. I'm trying to use the following command to delete these addresses (keeping the primary SMTP) and disabling the automatic update from recipient policy so the SMTP addresses don't get recreated again. Get-MailContact -OrganizationalUnit "domain.local/OU" -Filter {EmailAddresses -like *@domain.local -and name -notlike "ExchangeUM*"} -ResultSize unlimited -IgnoreDefaultScope | foreach {$contact = $_; $email = $contact.emailaddresses; $email | foreach {if ($_.smtpaddress -like *@domain.local) {$address = $_.smtpaddress; write-host "Removing address" $address "from Contact" $contact.name; Set-Mailcontact -Identity $contact.identity -EmailAddresses @{Remove=$address}; $contact | set-mailcontact -emailaddresspolicyenabled $false} }} I'm getting the following error though: You must provide a value expression on the right-hand side of the '-like' operator. At line:1 char:312 + Get-MailContact -OrganizationalUnit "domain.local/testou" -Filter {EmailAddresses -like "@domain.local" -and name -notlike "ExchangeUM"} -ResultSize unlimited -IgnoreDefaultScope | foreach {$contact = $; $ email = $contact.emailaddresses; $email | foreach {if ($.smtpaddress -like <<<< *@domain.local) {$address = $_.smt paddress; write-host "Removing address" $address "from Contact" $contact.name; Set-Mailcontact -Identity $contact.ident ity -EmailAddresses @{Remove=$address}; $contact }} + CategoryInfo : ParserError: (:) [], ParentContainsErrorRecordException + FullyQualifiedErrorId : ExpectedValueExpression Any help as to how to fix this?

    Read the article

  • How do I connect to SSH without the password to be requested every time ? - Already follow some answers here but it doesn't work

    - by MEM
    MAC OS X Lion 10.7.3 1) On host, I've created an authorized_keys file inside .ssh folder, by doing: touch authorized_keys 2) I've copy my public ssh key into host .ssh folder by doing: scp ~/.ssh/mykey.pub [email protected]:/home/userhost/.ssh/mykey.pub 3) I've place it's contents inside authorized files by doing: cat mykey.pub >> authorized_keys 4) Then I've removed the mykey.pub file: rm mykey.pub 5) On my terminal, locally, inside my ~/.ssh folder I made: ssh-add mykey (notice that it is without the pub extension); 6) I've closed and opened again the terminal. When I first connect to this host, it has being added to the *known_hosts* file inside ~/.ssh; I've pico known_hosts and the hash is there. Still, every time I connect by doing: ssh [email protected] it requests a password ! What am I missing here ? UPDATE: I've done EVEN TWO MORE THINGS here: 7) Set your key to be the default identity - if it doesn't exist, create; touch ~/.ssh/config and place inside the following line: IdentityFile ~/.ssh/yourkeyname *id_rsa is normally your default key. You should switched to your key. This tells that the outgoing ssh connections should use this as a default identity.* 8) Add a bash process to your ssh-agent: ssh-agent bash ssh-add ~/.ssh/yourkeyname Lisinge answer helped but it's not definitive. If we restart our machine, the password gets prompted again!!! How can we debug this? What can we do here? How can we check where is this process failing ? UPDATE 2: If I use: ssh -v -i <keyfile> [email protected] I get among other things: OpenSSH_5.6p1, OpenSSL 0.9.8r 8 Feb 2011 Warning: Identity file yourkeyname not accessible: No such file or directory. This message refers to what? The identify file is not accessible on the localhost, or it's not accessible on the remote host ? Please advice

    Read the article

  • Multiple copies off the same printer on Windows 7 from PrintUIEntry

    - by Kev
    I currently have a number of bat files which work perfectly fine on Windows XP which install the same printer multiple times with a number of finisher options set - e.g. after running the bat file below I would end up with four printers in the printer drop down called Sharp Kits Printer - A4 Single Sided Sharp Kits Printer - A4 Single Sided Stapled Sharp Kits Printer - A4 Duplex Stapled Sharp Kits Printer - A4 Duplex which all have there options configured in the relevant way. I have amended on Windows 7 to point to correct INF file and printer name in the INF files - a single printer installs fine. However when I run the complete batch file only the first printer in it is installed - occassionally the later ones flash up in the GUI but then vanish when you press F5 and are still missing after a reboot. SET QUEUENAME=http://192.168.7.123:631/printers/Sharp700 SET PPD=J:\DRIVERS\Printers\MX700-Win7-64\SJ1JWENG.INF SET PPDENTRYNAME=SHARP MX-M700U PPD J: cd "\DRIVERS\Printers\MX700-Win7-64" SET NICENAME="Sharp Kits Printer - A4 Single Sided" SET PREFS="J:\SCRIPTS\Printers-Win7-64bit\Sharp_SINGLE_SETTINGS.dat" %SYSTEMDRIVE%\WINDOWS\system32\rundll32.exe %SYSTEMDRIVE%\WINDOWS\system32\printui.dll,PrintUIEntry /w /b %NICENAME% /x /n "part of the n switch" /f "%PPD%" /if /r "%QUEUENAME%" /m "%PPDENTRYNAME%" rem restore settings go here... SET NICENAME="Sharp Kits Printer - A4 Duplex" SET PREFS="J:\SCRIPTS\Printers\Sharp_DUPLEX_SETTINGS.dat" %SYSTEMDRIVE%\WINDOWS\system32\rundll32.exe %SYSTEMDRIVE%\WINDOWS\system32\printui.dll,PrintUIEntry /w /b %NICENAME% /x /n "part of the n switch" /f "%PPD%" /if /r "%QUEUENAME%" /m "%PPDENTRYNAME%" rem restore settings go here... I have tried adding the "/u" paramater to the end, I have changed the "/n" paramater to be different (e.g. n1, n2,n3 etc) - both of these result in the same. I have also tried to change the port (/r) to have "_1" (etc) on the end like the GUI would but this errors as the port doesn't exist. Is it possible to do this on Windows 7, and if so how?

    Read the article

  • standard deviation on excel

    - by user270692
So I have 3 data sets, which all came from the first data set, built in Excel. They all derive from random integers generated with =RANDBETWEEN(0,100) in A2:A102, which is 100 integers. For data set B I had to enter =SUM(A2+5), press Enter and drag it down to get the integers for cells B2:B102, and for C I had to use multiplication to get it: I did =PRODUCT(A2:A102*5). So everything was taken from data set A. I then applied the formulas needed for the sample standard deviation and the mean (average). For data sets A and B the standard deviations were the same but the mean was larger in data set B, of course, because I added 5 to each cell in set A. My question is: why wouldn't the standard deviation be the same for data set C also, if I'm using the data from set A? And how do I calculate the sample standard deviation by hand, so I can explain why the standard deviation doesn't change but the mean does? I don't know what numbers to include in the formula for the sample standard deviation.
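For reference, the hand formula involved and the two properties being observed here: the sample standard deviation of values $x_1,\dots,x_n$ with mean $\bar{x}$ is

    s_x = \sqrt{ \frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})^2 }

Adding a constant shifts every value and the mean by the same amount, so the deviations $x_i - \bar{x}$ (and therefore $s$) are unchanged, while multiplying by a constant scales every deviation by that constant:

    y_i = x_i + 5 \;\Rightarrow\; y_i - \bar{y} = x_i - \bar{x} \;\Rightarrow\; s_y = s_x
    z_i = 5\,x_i \;\Rightarrow\; z_i - \bar{z} = 5\,(x_i - \bar{x}) \;\Rightarrow\; s_z = 5\,s_x

which is why data set B has the same standard deviation as A while data set C's is five times larger.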

    Read the article

  • Attempting to set a view's property results in an error: Request for Member … not a structure or …

    - by Mark McDonald
    I've declared a property in a view (created by interface builder, if it matters) and am trying to set the value from the view's controller – like so: self.view.url = someURL; That gives this error: Request for Member 'url' in something not a structure or union I have included the header for the view in the controller's .m file, but I'm probably just doing something wrong, but I don't know what – any ideas? The view code: @interface PDFView : UIView { NSURL *url; } @property (nonatomic, retain) NSURL *url; @end @implementation PDFView @synthesize url;

    Read the article

  • Pivotcache problem using ado recordset into excel

    - by bbenton
    I'm having a problem with runtime error 1004 at the last line. I'm bringing in an access query into excel 2007. I know the recordset is ok as I can see the fields and data. Im not sure about the picotcache was created in the set ptCache line. I see the application, but the index is 0. Code is below... Private Sub cmdPivotTables_Click() Dim rs As ADODB.Recordset Dim i As Integer Dim appExcel As Excel.Application Dim wkbTo As Excel.Workbook Dim wksTo As Excel.Worksheet Dim str As String Dim strSQL As String Dim rng As Excel.Range Dim rs As DAO.Recordset Dim db As DAO.Database Dim ptCache As Excel.PivotCache Set db = CurrentDb() 'to handle case where excel is not open On Error GoTo errhandler: Set appExcel = GetObject(, "Excel.Application") 'returns to default excel error handling On Error GoTo 0 appExcel.Visible = True str = FilePathReports & "Reports SCU\SCCUExcelReports.xlsx" 'tests if the workbook is open (using workbookopen functiion) If WorkbookIsOpen("SCCUExcelReports.xlsx", appExcel) Then Set wkbTo = appExcel.Workbooks("SCCUExcelReports.xlsx") wkbTo.Save 'To ensure correct Ratios&Charts is used wkbTo.Close End If Set wkbTo = GetObject(str) wkbTo.Application.Visible = True wkbTo.Parent.Windows("SCCUExcelReports.xlsx").Visible = True Set rs = New ADODB.Recordset strSQL = "SELECT viewBalanceSheetType.AccountTypeCode AS Type, viewBalanceSheetType.AccountGroupName AS AccountGroup, " _ & "viewBalanceSheetType.AccountSubGroupName As SubGroup, qryAmountIncludingAdjustment.BranchCode AS Branch, " _ & "viewBalanceSheetType.AccountNumber, viewBalanceSheetType.AccountName, " _ & "qryAmountIncludingAdjustment.Amount, qryAmountIncludingAdjustment.MonthEndDate " _ & "FROM viewBalanceSheetType INNER JOIN qryAmountIncludingAdjustment ON " _ & "viewBalanceSheetType.AccountID = qryAmountIncludingAdjustment.AccountID " _ & "WHERE (qryAmountIncludingAdjustment.MonthEndDate = GetCurrent()) " _ & "ORDER BY viewBalanceSheetType.AccountTypeSortOrder, viewBalanceSheetType.AccountGroupSortOrder, " _ & "viewBalanceSheetType.AccountNumber;" rs.Open strSQL, CurrentProject.Connection, adOpenDynamic, adLockOptimistic ' Set rs = db.OpenRecordset("qryExcelReportsTrialBalancePT", dbOpenForwardOnly) **'**********problem here Set ptCache = wkbTo.PivotCaches.Create(SourceType:=XlPivotTableSourceType.xlExternal) Set wkbTo.PivotCaches("ptCache").Recordset = rs**

    Read the article

  • ASP.NET MVC 2.0 Validation and ErrorMessages

    - by Raj Aththanayake
    I need to set the ErrorMessage property of the DataAnnotation's validation attribute in MVC 2.0. For example I should be able to pass an ID instead of the actual error message for the Model property, for example... [StringLength(2, ErrorMessage = "EmailContentID")] [DataType(DataType.EmailAddress)] public string Email { get; set; } Then use this ID ("EmailContentID") to retrieve some content(error message) from a another service e.g database. Then the error error message is displayed to the user instead of the ID. In order to do this I need to set the DataAnnotation validation attribute’s ErrorMessage property. It seems like a stright forward task by just overriding the DataAnnotationsModelValidatorProvider‘s protected override IEnumerable GetValidators(ModelMetadata metadata, ControllerContext context, IEnumerable attributes) However it is complicated now.... A. MVC DatannotationsModelValidator’s ErrorMessage property is readonly. So I cannot set anything here B. System.ComponentModel.DataAnnotationErrorMessage property(get and set) which is already set in MVC DatannotationsModelValidator so I cannot set it again. If I try to set it I get “The property cannot set more than once…” error message. public class CustomDataAnnotationProvider : DataAnnotationsModelValidatorProvider { protected override IEnumerable<ModelValidator> GetValidators(ModelMetadata metadata, ControllerContext context, IEnumerable<Attribute> attributes) { IEnumerable<ModelValidator> validators = base.GetValidators(metadata, context, attributes); foreach (ValidationAttribute validator in validators.OfType<ValidationAttribute>()) { messageId = validator.ErrorMessage; validator.ErrorMessage = "Error string from DB And" + messageId ; } //...... } } Can anyone please give me the right direction on this? Thanks in advance.
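One commonly sketched alternative to mutating the shared attribute instance (which, as noted, throws once ErrorMessage has already been set) is to resolve the text at format time inside a derived attribute. The MessageStore lookup below is a hypothetical stand-in for the database-backed content service mentioned in the question:

    using System.ComponentModel.DataAnnotations;

    // Hypothetical lookup service; replace with the real database/content service.
    public static class MessageStore
    {
        public static string Get(string id)
        {
            return null; // stub: return the stored message template for the given ID, or null
        }
    }

    public class StringLengthWithLookupAttribute : StringLengthAttribute
    {
        private readonly string _messageId;

        public StringLengthWithLookupAttribute(int maximumLength, string messageId)
            : base(maximumLength)
        {
            _messageId = messageId;
        }

        public override string FormatErrorMessage(string name)
        {
            // Look the template up by ID (e.g. "EmailContentID"); fall back to the default text.
            var template = MessageStore.Get(_messageId);
            return template != null
                ? string.Format(template, name, MaximumLength)
                : base.FormatErrorMessage(name);
        }
    }

Used as [StringLengthWithLookup(2, "EmailContentID")] on the Email property, this keeps each attribute instance immutable while still sourcing the message text from elsewhere.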

    Read the article

  • Pasting formatted Excel range into Outlook message

    - by Steph
    Hi everyone, I am using Office 2007 and I would like to use VBA to paste a range of formatted Excel cells into an Outlook message and then mail the message. In the following code (that I lifted from various sources), it runs without error and then sends an empty message... the paste does not work. Can anyone see the problem and better yet, help with a solution? Thanks, -Steph Sub SendMessage(SubjectText As String, Importance As OlImportance) Dim objOutlook As Outlook.Application Dim objOutlookMsg As Outlook.MailItem Dim objOutlookRecip As Outlook.Recipient Dim objOutlookAttach As Outlook.Attachment Dim iAddr As Integer, Col As Integer, SendLink As Boolean 'Dim Doc As Word.Document, wdRn As Word.Range Dim Doc As Object, wdRn As Object ' Create the Outlook session. Set objOutlook = CreateObject("Outlook.Application") ' Create the message. Set objOutlookMsg = objOutlook.CreateItem(olMailItem) Set Doc = objOutlookMsg.GetInspector.WordEditor 'Set Doc = objOutlookMsg.ActiveInspector.WordEditor Set wdRn = Doc.Range wdRn.Paste Set objOutlookRecip = objOutlookMsg.Recipients.Add("[email protected]") objOutlookRecip.Type = 1 objOutlookMsg.Subject = SubjectText objOutlookMsg.Importance = Importance With objOutlookMsg For Each objOutlookRecip In .Recipients objOutlookRecip.Resolve ' Set the Subject, Body, and Importance of the message. '.Subject = "Coverage Requests" 'objDrafts.GetFromClipboard Next .Send End With Set objOutlookMsg = Nothing Set objOutlook = Nothing End Sub

    Read the article
