Search Results

Search found 32453 results on 1299 pages for 'acts as list'.


  • Using jQuery to POST Form Data to an ASP.NET ASMX AJAX Web Service

    - by Rick Strahl
    The other day I got a question about how to call an ASP.NET ASMX Web Service or PageMethods with the POST data from a Web Form (or any HTML form for that matter). The idea is that you should be able to call an endpoint URL, send it regular urlencoded POST data and then use Request.Form[] to retrieve the posted data as needed. My first reaction was that you can’t do it, because ASP.NET ASMX AJAX services (as well as Page Methods and WCF REST AJAX Services) require that the content POSTed to the server is posted as JSON and sent with an application/json or application/x-javascript content type. IOW, you can’t directly call an ASP.NET AJAX service with regular urlencoded data. Note that there are other ways to accomplish this. You can use ASP.NET MVC and a custom route, an HTTP Handler or separate ASPX page, or even a WCF REST service that’s configured to use non-JSON inputs. However if you want to use an ASP.NET AJAX service (or Page Methods) with a little bit of setup work it’s actually quite easy to capture all the form variables on the client and ship them up to the server. The basic steps needed to make this happen are: Capture form variables into an array on the client with jQuery’s .serializeArray() function Use $.ajax() or my ServiceProxy class to make an AJAX call to the server to send this array On the server create a custom type that matches the .serializeArray() name/value structure Create extension methods on NameValue[] to easily extract form variables Create a [WebMethod] that accepts this name/value type as an array (NameValue[]) This seems like a lot of work but realize that steps 3 and 4 are a one time setup step that can be reused in your entire site or multiple applications. Let’s look at a short example that looks like this as a base form of fields to ship to the server: The HTML for this form looks something like this: <div id="divMessage" class="errordisplay" style="display: none"> </div> <div> <div class="label">Name:</div> <div><asp:TextBox runat="server" ID="txtName" /></div> </div> <div> <div class="label">Company:</div> <div><asp:TextBox runat="server" ID="txtCompany"/></div> </div> <div> <div class="label" ></div> <div> <asp:DropDownList runat="server" ID="lstAttending"> <asp:ListItem Text="Attending" Value="Attending"/> <asp:ListItem Text="Not Attending" Value="NotAttending" /> <asp:ListItem Text="Maybe Attending" Value="MaybeAttending" /> <asp:ListItem Text="Not Sure Yet" Value="NotSureYet" /> </asp:DropDownList> </div> </div> <div> <div class="label">Special Needs:<br /> <small>(check all that apply)</small></div> <div> <asp:ListBox runat="server" ID="lstSpecialNeeds" SelectionMode="Multiple"> <asp:ListItem Text="Vegitarian" Value="Vegitarian" /> <asp:ListItem Text="Vegan" Value="Vegan" /> <asp:ListItem Text="Kosher" Value="Kosher" /> <asp:ListItem Text="Special Access" Value="SpecialAccess" /> <asp:ListItem Text="No Binder" Value="NoBinder" /> </asp:ListBox> </div> </div> <div> <div class="label"></div> <div> <asp:CheckBox ID="chkAdditionalGuests" Text="Additional Guests" runat="server" /> </div> </div> <hr /> <input type="button" id="btnSubmit" value="Send Registration" /> The form includes a few different kinds of form fields including a multi-selection listbox to demonstrate retrieving multiple values. Setting up the Server Side [WebMethod] The [WebMethod] on the server we’re going to call is going to be very simple and just capture the content of these values and echo then back as a formatted HTML string. 
Obviously this is overly simplistic but it serves to demonstrate the simple point of capturing the POST data on the server in an AJAX callback. public class PageMethodsService : System.Web.Services.WebService { [WebMethod] public string SendRegistration(NameValue[] formVars) { StringBuilder sb = new StringBuilder(); sb.AppendFormat("Thank you {0}, <br/><br/>", HttpUtility.HtmlEncode(formVars.Form("txtName"))); sb.AppendLine("You've entered the following: <hr/>"); foreach (NameValue nv in formVars) { // strip out ASP.NET form vars like _ViewState/_EventValidation if (!nv.name.StartsWith("__")) { if (nv.name.StartsWith("txt") || nv.name.StartsWith("lst") || nv.name.StartsWith("chk")) sb.Append(nv.name.Substring(3)); else sb.Append(nv.name); sb.AppendLine(": " + HttpUtility.HtmlEncode(nv.value) + "<br/>"); } } sb.AppendLine("<hr/>"); string[] needs = formVars.FormMultiple("lstSpecialNeeds"); if (needs == null) sb.AppendLine("No Special Needs"); else { sb.AppendLine("Special Needs: <br/>"); foreach (string need in needs) { sb.AppendLine("&nbsp;&nbsp;" + need + "<br/>"); } } return sb.ToString(); } } The key feature of this method is that it receives a custom type called NameValue[] which is an array of NameValue objects that map the structure that the jQuery .serializeArray() function generates. There are two custom types involved in this: The actual NameValue type and a NameValueExtensions class that defines a couple of extension methods for the NameValue[] array type to allow for single (.Form()) and multiple (.FormMultiple()) value retrieval by name. The NameValue class is as simple as this and simply maps the structure of the array elements of .serializeArray(): public class NameValue { public string name { get; set; } public string value { get; set; } } The extension method class defines the .Form() and .FormMultiple() methods to allow easy retrieval of form variables from the returned array: /// <summary> /// Simple NameValue class that maps name and value /// properties that can be used with jQuery's /// $.serializeArray() function and JSON requests /// </summary> public static class NameValueExtensionMethods { /// <summary> /// Retrieves a single form variable from the list of /// form variables stored /// </summary> /// <param name="formVars"></param> /// <param name="name">formvar to retrieve</param> /// <returns>value or string.Empty if not found</returns> public static string Form(this NameValue[] formVars, string name) { var matches = formVars.Where(nv => nv.name.ToLower() == name.ToLower()).FirstOrDefault(); if (matches != null) return matches.value; return string.Empty; } /// <summary> /// Retrieves multiple selection form variables from the list of /// form variables stored. 
/// </summary> /// <param name="formVars"></param> /// <param name="name">The name of the form var to retrieve</param> /// <returns>values as string[] or null if no match is found</returns> public static string[] FormMultiple(this NameValue[] formVars, string name) { var matches = formVars.Where(nv => nv.name.ToLower() == name.ToLower()).Select(nv => nv.value).ToArray(); if (matches.Length == 0) return null; return matches; } } Using these extension methods it’s easy to retrieve individual values from the array: string name = formVars.Form("txtName"); or multiple values: string[] needs = formVars.FormMultiple("lstSpecialNeeds"); if (needs != null) { // do something with matches } Using these functions in the SendRegistration method it’s easy to retrieve a few form variables directly (txtName and the multiple selections of lstSpecialNeeds) or to iterate over the whole list of values. Of course this is an overly simple example – in typical app you’d probably want to validate the input data and save it to the database and then return some sort of confirmation or possibly an updated data list back to the client. Since this is a full AJAX service callback realize that you don’t have to return simple string values – you can return any of the supported result types (which are most serializable types) including complex hierarchical objects and arrays that make sense to your client code. POSTing Form Variables from the Client to the AJAX Service To call the AJAX service method on the client is straight forward and requires only use of little native jQuery plus JSON serialization functionality. To start add jQuery and the json2.js library to your page: <script src="Scripts/jquery.min.js" type="text/javascript"></script> <script src="Scripts/json2.js" type="text/javascript"></script> json2.js can be found here (be sure to remove the first line from the file): http://www.json.org/json2.js It’s required to handle JSON serialization for those browsers that don’t support it natively. With those script references in the document let’s hookup the button click handler and call the service: $(document).ready(function () { $("#btnSubmit").click(sendRegistration); }); function sendRegistration() { var arForm = $("#form1").serializeArray(); $.ajax({ url: "PageMethodsService.asmx/SendRegistration", type: "POST", contentType: "application/json", data: JSON.stringify({ formVars: arForm }), dataType: "json", success: function (result) { var jEl = $("#divMessage"); jEl.html(result.d).fadeIn(1000); setTimeout(function () { jEl.fadeOut(1000) }, 5000); }, error: function (xhr, status) { alert("An error occurred: " + status); } }); } The key feature in this code is the $("#form1").serializeArray();  call which serializes all the form fields of form1 into an array. Each form var is represented as an object with a name/value property. This array is then serialized into JSON with: JSON.stringify({ formVars: arForm }) The format for the parameter list in AJAX service calls is an object with one property for each parameter of the method. In this case its a single parameter called formVars and we’re assigning the array of form variables to it. The URL to call on the server is the name of the Service (or ASPX Page for Page Methods) plus the name of the method to call. On return the success callback receives the result from the AJAX callback which in this case is the formatted string which is simply assigned to an element in the form and displayed. 
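To make the wire format concrete, the request body that $("#form1").serializeArray() plus JSON.stringify() produce for the sample form looks roughly like the following. The values are made up for illustration and the exact field names depend on how ASP.NET renders the controls; note that hidden fields such as __VIEWSTATE are serialized as well, which is why the server code skips names starting with "__":

    {
      "formVars": [
        { "name": "__VIEWSTATE", "value": "..." },
        { "name": "txtName", "value": "Rick" },
        { "name": "txtCompany", "value": "West Wind" },
        { "name": "lstAttending", "value": "Attending" },
        { "name": "lstSpecialNeeds", "value": "Vegan" },
        { "name": "lstSpecialNeeds", "value": "Kosher" },
        { "name": "chkAdditionalGuests", "value": "on" }
      ]
    }

Multi-selection fields such as lstSpecialNeeds simply appear once per selected value, which is what the FormMultiple() extension method relies on.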
Remember the result type is whatever the method returns – it doesn’t have to be a string. Note that ASP.NET AJAX and WCF REST return JSON data as a wrapped object so the result has a ‘d’ property that holds the actual response: jEl.html(result.d).fadeIn(1000); Slightly simpler: Using ServiceProxy.js If you want things slightly cleaner you can use the ServiceProxy.js class I’ve mentioned here before. The ServiceProxy class handles a few things for calling ASP.NET and WCF services more cleanly: Automatic JSON encoding Automatic fix up of ‘d’ wrapper property Automatic Date conversion on the client Simplified error handling Reusable and abstracted To add the service proxy add: <script src="Scripts/ServiceProxy.js" type="text/javascript"></script> and then change the code to this slightly simpler version: <script type="text/javascript"> proxy = new ServiceProxy("PageMethodsService.asmx/"); $(document).ready(function () { $("#btnSubmit").click(sendRegistration); }); function sendRegistration() { var arForm = $("#form1").serializeArray(); proxy.invoke("SendRegistration", { formVars: arForm }, function (result) { var jEl = $("#divMessage"); jEl.html(result).fadeIn(1000); setTimeout(function () { jEl.fadeOut(1000) }, 5000); }, function (error) { alert(error.message); } ); } The code is not very different but it makes the call as simple as specifying the method to call, the parameters to pass and the actions to take on success and error. No more remembering which content type and data types to use and manually serializing to JSON. This code also removes the “d” property processing in the response and provides more consistent error handling in that the call always returns an error object regardless of a server error or a communication error unlike the native $.ajax() call. Either approach works and both are pretty easy. The ServiceProxy really pays off if you use lots of service calls and especially if you need to deal with date values returned from the server  on the client. Summary Making Web Service calls and getting POST data to the server is not always the best option – ASP.NET and WCF AJAX services are meant to work with data in objects. However, in some situations it’s simply easier to POST all the captured form data to the server instead of mapping all properties from the input fields to some sort of message object first. For this approach the above POST mechanism is useful as it puts the parsing of the data on the server and leaves the client code lean and mean. It’s even easy to build a custom model binder on the server that can map the array values to properties on an object generically with some relatively simple Reflection code and without having to manually map form vars to properties and do string conversions. Keep in mind though that other approaches also abound. ASP.NET MVC makes it pretty easy to create custom routes to data and the built in model binder makes it very easy to deal with inbound form POST data in its original urlencoded format. The West Wind West Wind Web Toolkit also includes functionality for AJAX callbacks using plain POST values. All that’s needed is a Method parameter to query/form value to specify the method to be called on the server. After that the content type is completely optional and up to the consumer. It’d be nice if the ASP.NET AJAX Service and WCF AJAX Services weren’t so tightly bound to the content type so that you could more easily create open access service endpoints that can take advantage of urlencoded data that is everywhere in existing pages. 
It would make it much easier to create basic REST endpoints without complicated service configuration. Ah, one can dream! In the meantime I hope this article has given you some ideas on how you can transfer POST data from the client to the server using JSON – it might be useful in other scenarios beyond ASP.NET AJAX services as well.
Additional Resources
ServiceProxy.js: A small JavaScript library that wraps $.ajax() to call ASP.NET AJAX and WCF AJAX services. Includes date-parsing extensions to the JSON object, a global dataFilter for processing dates on all jQuery JSON requests, cleanup for the .NET wrapped message format, and consistent error handling.
Making jQuery Calls to WCF/ASMX with a ServiceProxy Client: More information on calling ASMX and WCF AJAX services with jQuery and some more background on ServiceProxy.js. Note that the implementation has changed slightly since that article was written.
ww.jquery.js: The West Wind Web Toolkit also includes ServiceProxy.js in the West Wind jQuery extension library. This version is slightly different and includes embedded JSON encoding/decoding based on json2.js.
© Rick Strahl, West Wind Technologies, 2005-2010. Posted in jQuery, ASP.NET, AJAX


  • Developing web apps using ASP.NET MVC 3, Razor and EF Code First - Part 1

    - by shiju
    In this post, I will demonstrate web application development using ASP. NET MVC 3, Razor and EF code First. This post will also cover Dependency Injection using Unity 2.0 and generic Repository and Unit of Work for EF Code First. The following frameworks will be used for this step by step tutorial. ASP.NET MVC 3 EF Code First CTP 5 Unity 2.0 Define Domain Model Let’s create domain model for our simple web application Category class public class Category {     public int CategoryId { get; set; }     [Required(ErrorMessage = "Name Required")]     [StringLength(25, ErrorMessage = "Must be less than 25 characters")]     public string Name { get; set;}     public string Description { get; set; }     public virtual ICollection<Expense> Expenses { get; set; } }   Expense class public class Expense {             public int ExpenseId { get; set; }            public string  Transaction { get; set; }     public DateTime Date { get; set; }     public double Amount { get; set; }     public int CategoryId { get; set; }     public virtual Category Category { get; set; } } We have two domain entities - Category and Expense. A single category contains a list of expense transactions and every expense transaction should have a Category. In this post, we will be focusing on CRUD operations for the entity Category and will be working on the Expense entity with a View Model object in the later post. And the source code for this application will be refactored over time. The above entities are very simple POCO (Plain Old CLR Object) classes and the entity Category is decorated with validation attributes in the System.ComponentModel.DataAnnotations namespace. Now we want to use these entities for defining model objects for the Entity Framework 4. Using the Code First approach of Entity Framework, we can first define the entities by simply writing POCO classes without any coupling with any API or database library. This approach lets you focus on domain model which will enable Domain-Driven Development for applications. EF code first support is currently enabled with a separate API that is runs on top of the Entity Framework 4. EF Code First is reached CTP 5 when I am writing this article. Creating Context Class for Entity Framework We have created our domain model and let’s create a class in order to working with Entity Framework Code First. For this, you have to download EF Code First CTP 5 and add reference to the assembly EntitFramework.dll. You can also use NuGet to download add reference to EEF Code First.    public class MyFinanceContext : DbContext {     public MyFinanceContext() : base("MyFinance") { }     public DbSet<Category> Categories { get; set; }     public DbSet<Expense> Expenses { get; set; }         }   The above class MyFinanceContext is derived from DbContext that can connect your model classes to a database. The MyFinanceContext class is mapping our Category and Expense class into database tables Categories and Expenses using DbSet<TEntity> where TEntity is any POCO class. When we are running the application at first time, it will automatically create the database. EF code-first look for a connection string in web.config or app.config that has the same name as the dbcontext class. If it is not find any connection string with the convention, it will automatically create database in local SQL Express database by default and the name of the database will be same name as the dbcontext class. You can also define the name of database in constructor of the the dbcontext class. 
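For example, if you rely on the connection string convention described above, the web.config entry that EF Code First picks up might look like the following. The name matches the "MyFinance" value passed to the base constructor; the server and security settings shown here are only an assumption for illustration:

    <connectionStrings>
      <add name="MyFinance"
           connectionString="Data Source=.\SQLEXPRESS;Integrated Security=SSPI;Initial Catalog=MyFinance"
           providerName="System.Data.SqlClient" />
    </connectionStrings>
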
Unlike NHibernate, we don’t have to use any XML based mapping files or Fluent interface for mapping between our model and database. The model classes of Code First are working on the basis of conventions and we can also use a fluent API to refine our model. The convention for primary key is ‘Id’ or ‘<class name>Id’.  If primary key properties are detected with type ‘int’, ‘long’ or ‘short’, they will automatically registered as identity columns in the database by default. Primary key detection is not case sensitive. We can define our model classes with validation attributes in the System.ComponentModel.DataAnnotations namespace and it automatically enforces validation rules when a model object is updated or saved. Generic Repository for EF Code First We have created model classes and dbcontext class. Now we have to create generic repository pattern for data persistence with EF code first. If you don’t know about the repository pattern, checkout Martin Fowler’s article on Repository Let’s create a generic repository to working with DbContext and DbSet generics. public interface IRepository<T> where T : class     {         void Add(T entity);         void Delete(T entity);         T GetById(long Id);         IEnumerable<T> All();     }   RepositoryBasse – Generic Repository class public abstract class RepositoryBase<T> where T : class { private MyFinanceContext database; private readonly IDbSet<T> dbset; protected RepositoryBase(IDatabaseFactory databaseFactory) {     DatabaseFactory = databaseFactory;     dbset = Database.Set<T>(); }   protected IDatabaseFactory DatabaseFactory {     get; private set; }   protected MyFinanceContext Database {     get { return database ?? (database = DatabaseFactory.Get()); } } public virtual void Add(T entity) {     dbset.Add(entity);            }        public virtual void Delete(T entity) {     dbset.Remove(entity); }   public virtual T GetById(long id) {     return dbset.Find(id); }   public virtual IEnumerable<T> All() {     return dbset.ToList(); } }   DatabaseFactory class public class DatabaseFactory : Disposable, IDatabaseFactory {     private MyFinanceContext database;     public MyFinanceContext Get()     {         return database ?? (database = new MyFinanceContext());     }     protected override void DisposeCore()     {         if (database != null)             database.Dispose();     } } Unit of Work If you are new to Unit of Work pattern, checkout Fowler’s article on Unit of Work . According to Martin Fowler, the Unit of Work pattern "maintains a list of objects affected by a business transaction and coordinates the writing out of changes and the resolution of concurrency problems." Let’s create a class for handling Unit of Work   public interface IUnitOfWork {     void Commit(); }   UniOfWork class public class UnitOfWork : IUnitOfWork {     private readonly IDatabaseFactory databaseFactory;     private MyFinanceContext dataContext;       public UnitOfWork(IDatabaseFactory databaseFactory)     {         this.databaseFactory = databaseFactory;     }       protected MyFinanceContext DataContext     {         get { return dataContext ?? (dataContext = databaseFactory.Get()); }     }       public void Commit()     {         DataContext.Commit();     } }   The Commit method of the UnitOfWork will call the commit method of MyFinanceContext class and it will execute the SaveChanges method of DbContext class.   
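Two small pieces referenced above are not shown in the post: the IDatabaseFactory and Disposable types used by RepositoryBase and DatabaseFactory, and the Commit method on MyFinanceContext that UnitOfWork calls. A minimal sketch of what they presumably look like is below; treat it as an assumption rather than the author's exact code:

    public interface IDatabaseFactory : IDisposable
    {
        MyFinanceContext Get();
    }

    public abstract class Disposable : IDisposable
    {
        private bool disposed;

        public void Dispose()
        {
            if (!disposed)
            {
                DisposeCore();   // subclasses such as DatabaseFactory release their resources here
                disposed = true;
            }
            GC.SuppressFinalize(this);
        }

        protected virtual void DisposeCore() { }
    }

    // Added to MyFinanceContext so that UnitOfWork.Commit() has something to call;
    // it simply delegates to DbContext.SaveChanges().
    public void Commit()
    {
        SaveChanges();
    }
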
Repository class for Category In this post, we will be focusing on the persistence against Category entity and will working on other entities in later post. Let’s create a repository for handling CRUD operations for Category using derive from a generic Repository RepositoryBase<T>.   public class CategoryRepository: RepositoryBase<Category>, ICategoryRepository     {     public CategoryRepository(IDatabaseFactory databaseFactory)         : base(databaseFactory)         {         }                } public interface ICategoryRepository : IRepository<Category> { } If we need additional methods than generic repository for the Category, we can define in the CategoryRepository. Dependency Injection using Unity 2.0 If you are new to Inversion of Control/ Dependency Injection or Unity, please have a look on my articles at http://weblogs.asp.net/shijuvarghese/archive/tags/IoC/default.aspx. I want to create a custom lifetime manager for Unity to store container in the current HttpContext.   public class HttpContextLifetimeManager<T> : LifetimeManager, IDisposable {     public override object GetValue()     {         return HttpContext.Current.Items[typeof(T).AssemblyQualifiedName];     }     public override void RemoveValue()     {         HttpContext.Current.Items.Remove(typeof(T).AssemblyQualifiedName);     }     public override void SetValue(object newValue)     {         HttpContext.Current.Items[typeof(T).AssemblyQualifiedName] = newValue;     }     public void Dispose()     {         RemoveValue();     } }   Let’s create controller factory for Unity in the ASP.NET MVC 3 application. public class UnityControllerFactory : DefaultControllerFactory { IUnityContainer container; public UnityControllerFactory(IUnityContainer container) {     this.container = container; } protected override IController GetControllerInstance(RequestContext reqContext, Type controllerType) {     IController controller;     if (controllerType == null)         throw new HttpException(                 404, String.Format(                     "The controller for path '{0}' could not be found" +     "or it does not implement IController.",                 reqContext.HttpContext.Request.Path));       if (!typeof(IController).IsAssignableFrom(controllerType))         throw new ArgumentException(                 string.Format(                     "Type requested is not a controller: {0}",                     controllerType.Name),                     "controllerType");     try     {         controller= container.Resolve(controllerType) as IController;     }     catch (Exception ex)     {         throw new InvalidOperationException(String.Format(                                 "Error resolving controller {0}",                                 controllerType.Name), ex);     }     return controller; }   }   Configure contract and concrete types in Unity Let’s configure our contract and concrete types in Unity for resolving our dependencies.   
private void ConfigureUnity() {     //Create UnityContainer               IUnityContainer container = new UnityContainer()                 .RegisterType<IDatabaseFactory, DatabaseFactory>(new HttpContextLifetimeManager<IDatabaseFactory>())     .RegisterType<IUnitOfWork, UnitOfWork>(new HttpContextLifetimeManager<IUnitOfWork>())     .RegisterType<ICategoryRepository, CategoryRepository>(new HttpContextLifetimeManager<ICategoryRepository>());                 //Set container for Controller Factory                ControllerBuilder.Current.SetControllerFactory(             new UnityControllerFactory(container)); }   In the above ConfigureUnity method, we are registering our types onto Unity container with custom lifetime manager HttpContextLifetimeManager. Let’s call ConfigureUnity method in the Global.asax.cs for set controller factory for Unity and configuring the types with Unity.   protected void Application_Start() {     AreaRegistration.RegisterAllAreas();     RegisterGlobalFilters(GlobalFilters.Filters);     RegisterRoutes(RouteTable.Routes);     ConfigureUnity(); }   Developing web application using ASP.NET MVC 3 We have created our domain model for our web application and also have created repositories and configured dependencies with Unity container. Now we have to create controller classes and views for doing CRUD operations against the Category entity. Let’s create controller class for Category Category Controller   public class CategoryController : Controller {     private readonly ICategoryRepository categoryRepository;     private readonly IUnitOfWork unitOfWork;           public CategoryController(ICategoryRepository categoryRepository, IUnitOfWork unitOfWork)     {         this.categoryRepository = categoryRepository;         this.unitOfWork = unitOfWork;     }       public ActionResult Index()     {         var categories = categoryRepository.All();         return View(categories);     }     [HttpGet]     public ActionResult Edit(int id)     {         var category = categoryRepository.GetById(id);         return View(category);     }       [HttpPost]     public ActionResult Edit(int id, FormCollection collection)     {         var category = categoryRepository.GetById(id);         if (TryUpdateModel(category))         {             unitOfWork.Commit();             return RedirectToAction("Index");         }         else return View(category);                 }       [HttpGet]     public ActionResult Create()     {         var category = new Category();         return View(category);     }           [HttpPost]     public ActionResult Create(Category category)     {         if (!ModelState.IsValid)         {             return View("Create", category);         }                     categoryRepository.Add(category);         unitOfWork.Commit();         return RedirectToAction("Index");     }       [HttpPost]     public ActionResult Delete(int  id)     {         var category = categoryRepository.GetById(id);         categoryRepository.Delete(category);         unitOfWork.Commit();         var categories = categoryRepository.All();         return PartialView("CategoryList", categories);       }        }   Creating Views in Razor Now we are going to create views in Razor for our ASP.NET MVC 3 application.  Let’s create a partial view CategoryList.cshtml for listing category information and providing link for Edit and Delete operations. 
CategoryList.cshtml @using MyFinance.Helpers; @using MyFinance.Domain; @model IEnumerable<Category>      <table>         <tr>         <th>Actions</th>         <th>Name</th>          <th>Description</th>         </tr>     @foreach (var item in Model) {             <tr>             <td>                 @Html.ActionLink("Edit", "Edit",new { id = item.CategoryId })                 @Ajax.ActionLink("Delete", "Delete", new { id = item.CategoryId }, new AjaxOptions { Confirm = "Delete Expense?", HttpMethod = "Post", UpdateTargetId = "divCategoryList" })                           </td>             <td>                 @item.Name             </td>             <td>                 @item.Description             </td>         </tr>          }       </table>     <p>         @Html.ActionLink("Create New", "Create")     </p> The delete link is providing Ajax functionality using the Ajax.ActionLink. This will call an Ajax request for Delete action method in the CategoryCotroller class. In the Delete action method, it will return Partial View CategoryList after deleting the record. We are using CategoryList view for the Ajax functionality and also for Index view using for displaying list of category information. Let’s create Index view using partial view CategoryList  Index.chtml @model IEnumerable<MyFinance.Domain.Category> @{     ViewBag.Title = "Index"; }    <h2>Category List</h2>    <script src="@Url.Content("~/Scripts/jquery.unobtrusive-ajax.min.js")" type="text/javascript"></script>    <div id="divCategoryList">               @Html.Partial("CategoryList", Model) </div>   We can call the partial views using Html.Partial helper method. Now we are going to create View pages for insert and update functionality for the Category. Both view pages are sharing common user interface for entering the category information. So I want to create an EditorTemplate for the Category information. We have to create the EditorTemplate with the same name of entity object so that we can refer it on view pages using @Html.EditorFor(model => model) . So let’s create template with name Category. Let’s create view page for insert Category information   @model MyFinance.Domain.Category   @{     ViewBag.Title = "Save"; }   <h2>Create</h2>   <script src="@Url.Content("~/Scripts/jquery.validate.min.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/jquery.validate.unobtrusive.min.js")" type="text/javascript"></script>   @using (Html.BeginForm()) {     @Html.ValidationSummary(true)     <fieldset>         <legend>Category</legend>                @Html.EditorFor(model => model)               <p>             <input type="submit" value="Create" />         </p>     </fieldset> }   <div>     @Html.ActionLink("Back to List", "Index") </div> ViewStart file In Razor views, we can add a file named _viewstart.cshtml in the views directory  and this will be shared among the all views with in the Views directory. The below code in the _viewstart.cshtml, sets the Layout page for every Views in the Views folder.      @{     Layout = "~/Views/Shared/_Layout.cshtml"; }   Source Code You can download the source code from http://efmvc.codeplex.com/ . The source will be refactored on over time.   Summary In this post, we have created a simple web application using ASP.NET MVC 3 and EF Code First. We have discussed on technologies and practices such as ASP.NET MVC 3, Razor, EF Code First, Unity 2, generic Repository and Unit of Work. In my later posts, I will modify the application and will be discussed on more things. 
    Stay tuned to my blog for more posts on step-by-step application building.


  • Using the ASP.NET Cache to cache data in a Model or Business Object layer, without a dependency on System.Web in the layer - Part One.

    - by Rhames
    ASP.NET applications can make use of the System.Web.Caching.Cache object to cache data and prevent repeated expensive calls to a database or other store. However, ideally an application should make use of caching at the point where data is retrieved from the database, which typically is inside a Business Objects or Model layer. One of the key features of using a UI pattern such as Model-View-Presenter (MVP) or Model-View-Controller (MVC) is that the Model and Presenter (or Controller) layers are developed without any knowledge of the UI layer. Introducing a dependency on System.Web into the Model layer would break this independence of the Model from the View. This article gives a solution to this problem, using dependency injection to inject the caching implementation into the Model layer at runtime. This allows caching to be used within the Model layer, without any knowledge of the actual caching mechanism that will be used. Create a sample application to use the caching solution Create a test SQL Server database This solution uses a SQL Server database with the same Sales data used in my previous post on calculating running totals. The advantage of using this data is that it gives nice slow queries that will exaggerate the effect of using caching! To create the data, first create a new SQL database called CacheSample. Next run the following script to create the Sale table and populate it: USE CacheSample GO   CREATE TABLE Sale(DayCount smallint, Sales money) CREATE CLUSTERED INDEX ndx_DayCount ON Sale(DayCount) go INSERT Sale VALUES (1,120) INSERT Sale VALUES (2,60) INSERT Sale VALUES (3,125) INSERT Sale VALUES (4,40)   DECLARE @DayCount smallint, @Sales money SET @DayCount = 5 SET @Sales = 10   WHILE @DayCount < 5000  BEGIN  INSERT Sale VALUES (@DayCount,@Sales)  SET @DayCount = @DayCount + 1  SET @Sales = @Sales + 15  END Next create a stored procedure to calculate the running total, and return a specified number of rows from the Sale table, using the following script: USE [CacheSample] GO   SET ANSI_NULLS ON GO   SET QUOTED_IDENTIFIER ON GO   -- ============================================= -- Author:        Robin -- Create date: -- Description:   -- ============================================= CREATE PROCEDURE [dbo].[spGetRunningTotals]       -- Add the parameters for the stored procedure here       @HighestDayCount smallint = null AS BEGIN       -- SET NOCOUNT ON added to prevent extra result sets from       -- interfering with SELECT statements.       
SET NOCOUNT ON;         IF @HighestDayCount IS NULL             SELECT @HighestDayCount = MAX(DayCount) FROM dbo.Sale                   DECLARE @SaleTbl TABLE (DayCount smallint, Sales money, RunningTotal money)         DECLARE @DayCount smallint,                   @Sales money,                   @RunningTotal money         SET @RunningTotal = 0       SET @DayCount = 0         DECLARE rt_cursor CURSOR       FOR       SELECT DayCount, Sales       FROM Sale       ORDER BY DayCount         OPEN rt_cursor         FETCH NEXT FROM rt_cursor INTO @DayCount,@Sales         WHILE @@FETCH_STATUS = 0 AND @DayCount <= @HighestDayCount        BEGIN        SET @RunningTotal = @RunningTotal + @Sales        INSERT @SaleTbl VALUES (@DayCount,@Sales,@RunningTotal)        FETCH NEXT FROM rt_cursor INTO @DayCount,@Sales        END         CLOSE rt_cursor       DEALLOCATE rt_cursor         SELECT DayCount, Sales, RunningTotal       FROM @SaleTbl   END   GO   Create the Sample ASP.NET application In Visual Studio create a new solution and add a class library project called CacheSample.BusinessObjects and an ASP.NET web application called CacheSample.UI. The CacheSample.BusinessObjects project will contain a single class to represent a Sale data item, with all the code to retrieve the sales from the database included in it for simplicity (normally I would at least have a separate Repository or other object that is responsible for retrieving data, and probably a data access layer as well, but for this sample I want to keep it simple). The C# code for the Sale class is shown below: using System; using System.Collections.Generic; using System.Data; using System.Data.SqlClient;   namespace CacheSample.BusinessObjects {     public class Sale     {         public Int16 DayCount { get; set; }         public decimal Sales { get; set; }         public decimal RunningTotal { get; set; }           public static IEnumerable<Sale> GetSales(int? 
highestDayCount)         {             List<Sale> sales = new List<Sale>();               SqlParameter highestDayCountParameter = new SqlParameter("@HighestDayCount", SqlDbType.SmallInt);             if (highestDayCount.HasValue)                 highestDayCountParameter.Value = highestDayCount;             else                 highestDayCountParameter.Value = DBNull.Value;               string connectionStr = System.Configuration.ConfigurationManager .ConnectionStrings["CacheSample"].ConnectionString;               using(SqlConnection sqlConn = new SqlConnection(connectionStr))             using (SqlCommand sqlCmd = sqlConn.CreateCommand())             {                 sqlCmd.CommandText = "spGetRunningTotals";                 sqlCmd.CommandType = CommandType.StoredProcedure;                 sqlCmd.Parameters.Add(highestDayCountParameter);                   sqlConn.Open();                   using (SqlDataReader dr = sqlCmd.ExecuteReader())                 {                     while (dr.Read())                     {                         Sale newSale = new Sale();                         newSale.DayCount = dr.GetInt16(0);                         newSale.Sales = dr.GetDecimal(1);                         newSale.RunningTotal = dr.GetDecimal(2);                           sales.Add(newSale);                     }                 }             }               return sales;         }     } }   The static GetSale() method makes a call to the spGetRunningTotals stored procedure and then reads each row from the returned SqlDataReader into an instance of the Sale class, it then returns a List of the Sale objects, as IEnnumerable<Sale>. A reference to System.Configuration needs to be added to the CacheSample.BusinessObjects project so that the connection string can be read from the web.config file. In the CacheSample.UI ASP.NET project, create a single web page called ShowSales.aspx, and make this the default start up page. This page will contain a single button to call the GetSales() method and a label to display the results. 
The html mark up and the C# code behind are shown below: ShowSales.aspx <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="ShowSales.aspx.cs" Inherits="CacheSample.UI.ShowSales" %>   <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">   <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server">     <title>Cache Sample - Show All Sales</title> </head> <body>     <form id="form1" runat="server">     <div>         <asp:Button ID="btnTest1" runat="server" onclick="btnTest1_Click"             Text="Get All Sales" />         &nbsp;&nbsp;&nbsp;         <asp:Label ID="lblResults" runat="server"></asp:Label>         </div>     </form> </body> </html>   ShowSales.aspx.cs using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.UI; using System.Web.UI.WebControls;   using CacheSample.BusinessObjects;   namespace CacheSample.UI {     public partial class ShowSales : System.Web.UI.Page     {         protected void Page_Load(object sender, EventArgs e)         {         }           protected void btnTest1_Click(object sender, EventArgs e)         {             System.Diagnostics.Stopwatch stopWatch = new System.Diagnostics.Stopwatch();             stopWatch.Start();               var sales = Sale.GetSales(null);               var lastSales = sales.Last();               stopWatch.Stop();               lblResults.Text = string.Format( "Count of Sales: {0}, Last DayCount: {1}, Total Sales: {2}. Query took {3} ms", sales.Count(), lastSales.DayCount, lastSales.RunningTotal, stopWatch.ElapsedMilliseconds);         }       } }   Finally we need to add a connection string to the CacheSample SQL Server database, called CacheSample, to the web.config file: <?xmlversion="1.0"?>   <configuration>    <connectionStrings>     <addname="CacheSample"          connectionString="data source=.\SQLEXPRESS;Integrated Security=SSPI;Initial Catalog=CacheSample"          providerName="System.Data.SqlClient" />  </connectionStrings>    <system.web>     <compilationdebug="true"targetFramework="4.0" />  </system.web>   </configuration>   Run the application and click the button a few times to see how long each call to the database takes. On my system, each query takes about 450ms. Next I shall look at a solution to use the ASP.NET caching to cache the data returned by the query, so that subsequent requests to the GetSales() method are much faster. Adding Data Caching Support I am going to create my caching support in a separate project called CacheSample.Caching, so the next step is to add a class library to the solution. We shall be using the application configuration to define the implementation of our caching system, so we need a reference to System.Configuration adding to the project. ICacheProvider<T> Interface The first step in adding caching to our application is to define an interface, called ICacheProvider, in the CacheSample.Caching project, with methods to retrieve any data from the cache or to retrieve the data from the data source if it is not present in the cache. Dependency Injection will then be used to inject an implementation of this interface at runtime, allowing the users of the interface (i.e. the CacheSample.BusinessObjects project) to be completely unaware of how the caching is actually implemented. 
As data of any type maybe retrieved from the data source, it makes sense to use generics in the interface, with a generic type parameter defining the data type associated with a particular instance of the cache interface implementation. The C# code for the ICacheProvider interface is shown below: using System; using System.Collections.Generic;   namespace CacheSample.Caching {     public interface ICacheProvider     {     }       public interface ICacheProvider<T> : ICacheProvider     {         T Fetch(string key, Func<T> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry);           IEnumerable<T> Fetch(string key, Func<IEnumerable<T>> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry);     } }   The empty non-generic interface will be used as a type in a Dictionary generic collection later to store instances of the ICacheProvider<T> implementation for reuse, I prefer to use a base interface when doing this, as I think the alternative of using object makes for less clear code. The ICacheProvider<T> interface defines two overloaded Fetch methods, the difference between these is that one will return a single instance of the type T and the other will return an IEnumerable<T>, providing support for easy caching of collections of data items. Both methods will take a key parameter, which will uniquely identify the cached data, a delegate of type Func<T> or Func<IEnumerable<T>> which will provide the code to retrieve the data from the store if it is not present in the cache, and absolute or relative expiry policies to define when a cached item should expire. Note that at present there is no support for cache dependencies, but I shall be showing a method of adding this in part two of this article. CacheProviderFactory Class We need a mechanism of creating instances of our ICacheProvider<T> interface, using Dependency Injection to get the implementation of the interface. To do this we shall create a CacheProviderFactory static class in the CacheSample.Caching project. This factory will provide a generic static method called GetCacheProvider<T>(), which shall return instances of ICacheProvider<T>. We can then call this factory method with the relevant data type (for example the Sale class in the CacheSample.BusinessObject project) to get a instance of ICacheProvider for that type (e.g. call CacheProviderFactory.GetCacheProvider<Sale>() to get the ICacheProvider<Sale> implementation). The C# code for the CacheProviderFactory is shown below: using System; using System.Collections.Generic;   using CacheSample.Caching.Configuration;   namespace CacheSample.Caching {     public static class CacheProviderFactory     {         private static Dictionary<Type, ICacheProvider> cacheProviders = new Dictionary<Type, ICacheProvider>();         private static object syncRoot = new object();           ///<summary>         /// Factory method to create or retrieve an implementation of the  /// ICacheProvider interface for type <typeparamref name="T"/>.         
///</summary>         ///<typeparam name="T">  /// The type that this cache provider instance will work with  ///</typeparam>         ///<returns>An instance of the implementation of ICacheProvider for type  ///<typeparamref name="T"/>, as specified by the application  /// configuration</returns>         public static ICacheProvider<T> GetCacheProvider<T>()         {             ICacheProvider<T> cacheProvider = null;             // Get the Type reference for the type parameter T             Type typeOfT = typeof(T);               // Lock the access to the cacheProviders dictionary             // so multiple threads can work with it             lock (syncRoot)             {                 // First check if an instance of the ICacheProvider implementation  // already exists in the cacheProviders dictionary for the type T                 if (cacheProviders.ContainsKey(typeOfT))                     cacheProvider = (ICacheProvider<T>)cacheProviders[typeOfT];                 else                 {                     // There is not already an instance of the ICacheProvider in       // cacheProviders for the type T                     // so we need to create one                       // Get the Type reference for the application's implementation of       // ICacheProvider from the configuration                     Type cacheProviderType = Type.GetType(CacheProviderConfigurationSection.Current. CacheProviderType);                     if (cacheProviderType != null)                     {                         // Now get a Type reference for the Cache Provider with the                         // type T generic parameter                         Type typeOfCacheProviderTypeForT = cacheProviderType.MakeGenericType(new Type[] { typeOfT });                         if (typeOfCacheProviderTypeForT != null)                         {                             // Create the instance of the Cache Provider and add it to // the cacheProviders dictionary for future use                             cacheProvider = (ICacheProvider<T>)Activator. CreateInstance(typeOfCacheProviderTypeForT);                             cacheProviders.Add(typeOfT, cacheProvider);                         }                     }                 }             }               return cacheProvider;                 }     } }   As this code uses Activator.CreateInstance() to create instances of the ICacheProvider<T> implementation, which is a slow process, the factory class maintains a Dictionary of the previously created instances so that a cache provider needs to be created only once for each type. The type of the implementation of ICacheProvider<T> is read from a custom configuration section in the application configuration file, via the CacheProviderConfigurationSection class, which is described below. CacheProviderConfigurationSection Class The implementation of ICacheProvider<T> will be specified in a custom configuration section in the application’s configuration. To handle this create a folder in the CacheSample.Caching project called Configuration, and add a class called CacheProviderConfigurationSection to this folder. This class will extend the System.Configuration.ConfigurationSection class, and will contain a single string property called CacheProviderType. 
The C# code for this class is shown below: using System; using System.Configuration;   namespace CacheSample.Caching.Configuration {     internal class CacheProviderConfigurationSection : ConfigurationSection     {         public static CacheProviderConfigurationSection Current         {             get             {                 return (CacheProviderConfigurationSection) ConfigurationManager.GetSection("cacheProvider");             }         }           [ConfigurationProperty("type", IsRequired=true)]         public string CacheProviderType         {             get             {                 return (string)this["type"];             }         }     } }   Adding Data Caching to the Sales Class We now have enough code in place to add caching to the GetSales() method in the CacheSample.BusinessObjects.Sale class, even though we do not yet have an implementation of the ICacheProvider<T> interface. We need to add a reference to the CacheSample.Caching project to CacheSample.BusinessObjects so that we can use the ICacheProvider<T> interface within the GetSales() method. Once the reference is added, we can first create a unique string key based on the method name and the parameter value, so that the same cache key is used for repeated calls to the method with the same parameter values. Then we get an instance of the cache provider for the Sales type, using the CacheProviderFactory, and pass the existing code to retrieve the data from the database as the retrievalMethod delegate in a call to the Cache Provider Fetch() method. The C# code for the modified GetSales() method is shown below: public static IEnumerable<Sale> GetSales(int? highestDayCount) {     string cacheKey = string.Format("CacheSample.BusinessObjects.GetSalesWithCache({0})", highestDayCount);       return CacheSample.Caching.CacheProviderFactory. GetCacheProvider<Sale>().Fetch(cacheKey,         delegate()         {             List<Sale> sales = new List<Sale>();               SqlParameter highestDayCountParameter = new SqlParameter("@HighestDayCount", SqlDbType.SmallInt);             if (highestDayCount.HasValue)                 highestDayCountParameter.Value = highestDayCount;             else                 highestDayCountParameter.Value = DBNull.Value;               string connectionStr = System.Configuration.ConfigurationManager. ConnectionStrings["CacheSample"].ConnectionString;               using (SqlConnection sqlConn = new SqlConnection(connectionStr))             using (SqlCommand sqlCmd = sqlConn.CreateCommand())             {                 sqlCmd.CommandText = "spGetRunningTotals";                 sqlCmd.CommandType = CommandType.StoredProcedure;                 sqlCmd.Parameters.Add(highestDayCountParameter);                   sqlConn.Open();                   using (SqlDataReader dr = sqlCmd.ExecuteReader())                 {                     while (dr.Read())                     {                         Sale newSale = new Sale();                         newSale.DayCount = dr.GetInt16(0);                         newSale.Sales = dr.GetDecimal(1);                         newSale.RunningTotal = dr.GetDecimal(2);                           sales.Add(newSale);                     }                 }             }               return sales;         },         null,         new TimeSpan(0, 10, 0)); }     This example passes the code to retrieve the Sales data from the database to the Cache Provider as an anonymous method, however it could also be written as a lambda. 
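For comparison, the lambda version mentioned above would just replace the delegate() { ... } block with () => { ... }. In the sketch below the ADO.NET code is collapsed into a hypothetical LoadSalesFromDatabase helper purely to keep the example short:

    return CacheSample.Caching.CacheProviderFactory.GetCacheProvider<Sale>().Fetch(
        cacheKey,
        () =>
        {
            // same SqlConnection/SqlCommand code as in the anonymous method above,
            // pulled into a hypothetical private helper for brevity
            return LoadSalesFromDatabase(highestDayCount);
        },
        null,                     // no absolute expiry
        new TimeSpan(0, 10, 0));  // expire 10 minutes after the last request
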
The main advantage of using an anonymous function (method or lambda) is that the code inside the anonymous function can access the parameters passed to the GetSales() method. Finally the absolute expiry is set to null, and the relative expiry set to 10 minutes, to indicate that the cache entry should be removed 10 minutes after the last request for the data. As the ICacheProvider<T> has a Fetch() method that returns IEnumerable<T>, we can simply return the results of the Fetch() method to the caller of the GetSales() method. This should be all that is needed for the GetSales() method to now retrieve data from a cache after the first time the data has be retrieved from the database. Implementing a ASP.NET Cache Provider The final step is to actually implement the ICacheProvider<T> interface, and add the implementation details to the web.config file for the dependency injection. The cache provider implementation needs to have access to System.Web. Therefore it could be placed in the CacheSample.UI project, or in its own project that has a reference to System.Web. Implementing the Cache Provider in a separate project is my favoured approach. Create a new project inside the solution called CacheSample.CacheProvider, and add references to System.Web and CacheSample.Caching to this project. Add a class to the project called AspNetCacheProvider. Make the class a generic class by adding the generic parameter <T> and indicate that the class implements ICacheProvider<T>. The C# code for the AspNetCacheProvider class is shown below: using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.Caching;   using CacheSample.Caching;   namespace CacheSample.CacheProvider {     public class AspNetCacheProvider<T> : ICacheProvider<T>     {         #region ICacheProvider<T> Members           public T Fetch(string key, Func<T> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry)         {             return FetchAndCache<T>(key, retrieveData, absoluteExpiry, relativeExpiry);         }           public IEnumerable<T> Fetch(string key, Func<IEnumerable<T>> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry)         {             return FetchAndCache<IEnumerable<T>>(key, retrieveData, absoluteExpiry, relativeExpiry);         }           #endregion           #region Helper Methods           private U FetchAndCache<U>(string key, Func<U> retrieveData, DateTime? absoluteExpiry, TimeSpan? 
relativeExpiry)         {             U value;             if (!TryGetValue<U>(key, out value))             {                 value = retrieveData();                 if (!absoluteExpiry.HasValue)                     absoluteExpiry = Cache.NoAbsoluteExpiration;                   if (!relativeExpiry.HasValue)                     relativeExpiry = Cache.NoSlidingExpiration;                   HttpContext.Current.Cache.Insert(key, value, null, absoluteExpiry.Value, relativeExpiry.Value);             }             return value;         }           private bool TryGetValue<U>(string key, out U value)         {             object cachedValue = HttpContext.Current.Cache.Get(key);             if (cachedValue == null)             {                 value = default(U);                 return false;             }             else             {                 try                 {                     value = (U)cachedValue;                     return true;                 }                 catch                 {                     value = default(U);                     return false;                 }             }         }           #endregion       } }   The two interface Fetch() methods call a private method called FetchAndCache(). This method first checks for a element in the HttpContext.Current.Cache with the specified cache key, and if so tries to cast this to the specified type (either T or IEnumerable<T>). If the cached element is found, the FetchAndCache() method simply returns it. If it is not found in the cache, the method calls the retrievalMethod delegate to get the data from the data source, and then adds this to the HttpContext.Current.Cache. The final step is to add the AspNetCacheProvider class to the relevant custom configuration section in the CacheSample.UI.Web.Config file. To do this there needs to be a <configSections> element added as the first element in <configuration>. This will match a custom section called <cacheProvider> with the CacheProviderConfigurationSection. Then we add a <cacheProvider> element, with a type property set to the fully qualified assembly name of the AspNetCacheProvider class, as shown below: <?xmlversion="1.0"?>   <configuration>  <configSections>     <sectionname="cacheProvider" type="CacheSample.Base.Configuration.CacheProviderConfigurationSection, CacheSample.Base" />  </configSections>    <connectionStrings>     <addname="CacheSample"          connectionString="data source=.\SQLEXPRESS;Integrated Security=SSPI;Initial Catalog=CacheSample"          providerName="System.Data.SqlClient" />  </connectionStrings>    <cacheProvidertype="CacheSample.CacheProvider.AspNetCacheProvider`1, CacheSample.CacheProvider, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null">  </cacheProvider>    <system.web>     <compilationdebug="true"targetFramework="4.0" />  </system.web>   </configuration>   One point to note is that the fully qualified assembly name of the AspNetCacheProvider class includes the notation `1 after the class name, which indicates that it is a generic class with a single generic type parameter. The CacheSample.UI project needs to have references added to CacheSample.Caching and CacheSample.CacheProvider so that the actual application is aware of the relevant cache provider implementation. Conclusion After implementing this solution, you should have a working cache provider mechanism, that will allow the middle and data access layers to implement caching support when retrieving data, without any knowledge of the actually caching implementation. 
If the UI is not ASP.NET based (for example, WinForms or WPF), the implementation of ICacheProvider<T> would be written around whatever caching technology is available. It could even be a standalone caching system that takes full responsibility for adding and removing items from a global store. The next part of this article will show how this caching mechanism may be extended to provide support for cache dependencies, such as System.Web.Caching.SqlCacheDependency. Another possible extension would be to cache the cache provider implementations themselves instead of storing them in a static Dictionary in the CacheProviderFactory. This would prevent a build-up of seldom-used cache providers in application memory, as they could be removed from the cache when not used often enough; in reality, though, there are unlikely to be vast numbers of cache provider instances, as most applications do not have a massive number of business object or model types.
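As a concrete illustration of the point above about non-ASP.NET hosts, the same interface can be backed by System.Runtime.Caching.MemoryCache (available from .NET 4 onwards). The sketch below is not part of the original article: the class and namespace names are invented, it assumes the ICacheProvider<T> signatures shown above, and it prefers the sliding expiry when both expiry values are supplied, because MemoryCache rejects a policy that sets both.

using System;
using System.Collections.Generic;
using System.Runtime.Caching;   // requires a reference to the System.Runtime.Caching assembly

using CacheSample.Caching;

namespace CacheSample.MemoryCacheProvider
{
    // Illustrative only: a cache provider for WinForms/WPF/console hosts.
    public class MemoryCacheProvider<T> : ICacheProvider<T>
    {
        public T Fetch(string key, Func<T> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry)
        {
            return FetchAndCache<T>(key, retrieveData, absoluteExpiry, relativeExpiry);
        }

        public IEnumerable<T> Fetch(string key, Func<IEnumerable<T>> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry)
        {
            return FetchAndCache<IEnumerable<T>>(key, retrieveData, absoluteExpiry, relativeExpiry);
        }

        private U FetchAndCache<U>(string key, Func<U> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry)
        {
            object cached = MemoryCache.Default.Get(key);
            if (cached is U)
                return (U)cached;

            U value = retrieveData();

            // MemoryCache throws if a policy sets both expiry kinds, so pick one.
            var policy = new CacheItemPolicy();
            if (relativeExpiry.HasValue)
                policy.SlidingExpiration = relativeExpiry.Value;
            else if (absoluteExpiry.HasValue)
                policy.AbsoluteExpiration = absoluteExpiry.Value;   // DateTime converts implicitly to DateTimeOffset

            if (value != null)   // MemoryCache cannot store null values
                MemoryCache.Default.Set(key, value, policy);

            return value;
        }
    }
}

Registering this type in the <cacheProvider> configuration element (with its own `1 assembly-qualified name) would then be the only change needed to move the middle tier off the ASP.NET cache.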

    Read the article

  • Header Guard Issues - Getting Swallowed Alive

    - by gjnave
    I'm totally at wit's end: I can't figure out how my dependency issues. I've read countless posts and blogs and reworked my code so many times that I can't even remember what almost worked and what didnt. I continually get not only redefinition errors, but class not defined errors. I rework the header guards and remove some errors simply to find others. I somehow got everything down to one error but then even that got broke while trying to fix it. Would you please help me figure out the problem? card.cpp #include <iostream> #include <cctype> #include "card.h" using namespace std; // ====DECL====== Card::Card() { abilities = 0; flavorText = 0; keywords = 0; artifact = 0; classType = new char[strlen("Card") + 1]; classType = "Card"; } Card::~Card (){ delete name; delete abilities; delete flavorText; artifact = NULL; } // ------------ Card::Card(const Card & to_copy) { name = new char[strlen(to_copy.name) +1]; // creating dynamic array strcpy(to_copy.name, name); type = to_copy.type; color = to_copy.color; manaCost = to_copy.manaCost; abilities = new char[strlen(to_copy.abilities) +1]; strcpy(abilities, to_copy.abilities); flavorText = new char[strlen(to_copy.flavorText) +1]; strcpy(flavorText, to_copy.flavorText); keywords = new char[strlen(to_copy.keywords) +1]; strcpy(keywords, to_copy.keywords); inPlay = to_copy.inPlay; tapped = to_copy.tapped; enchanted = to_copy.enchanted; cursed = to_copy.cursed; if (to_copy.type != ARTIFACT) artifact = to_copy.artifact; } // ====DECL===== int Card::equipArtifact(Artifact* to_equip){ artifact = to_equip; } Artifact * Card::unequipArtifact(Card * unequip_from){ Artifact * to_remove = artifact; artifact = NULL; return to_remove; // put card in hand or in graveyard } int Card::enchant( Card * to_enchant){ to_enchant->enchanted = true; cout << "enchanted" << endl; } int Card::disenchant( Card * to_disenchant){ to_disenchant->enchanted = false; cout << "Enchantment Removed" << endl; } // ========DECL===== Spell::Spell() { currPower = basePower; currToughness = baseToughness; classType = new char[strlen("Spell") + 1]; classType = "Spell"; } Spell::~Spell(){} // --------------- Spell::Spell(const Spell & to_copy){ currPower = to_copy.currPower; basePower = to_copy.basePower; currToughness = to_copy.currToughness; baseToughness = to_copy.baseToughness; } // ========= int Spell::attack( Spell *& blocker ){ blocker->currToughness -= currPower; currToughness -= blocker->currToughness; } //========== int Spell::counter (Spell *& to_counter){ cout << to_counter->name << " was countered by " << name << endl; } // ============ int Spell::heal (Spell *& to_heal, int amountOfHealth){ to_heal->currToughness += amountOfHealth; } // ------- Creature::Creature(){ summoningSick = true; } // =====DECL====== Land::Land(){ color = NON; classType = new char[strlen("Land") + 1]; classType = "Land"; } // ------ int Land::generateMana(int mana){ // ... 
// } card.h #ifndef CARD_H #define CARD_H #include <cctype> #include <iostream> #include "conception.h" class Artifact; class Spell; class Card : public Conception { public: Card(); Card(const Card &); ~Card(); protected: char* name; enum CardType { INSTANT, CREATURE, LAND, ENCHANTMENT, ARTIFACT, PLANESWALKER}; enum CardColor { WHITE, BLUE, BLACK, RED, GREEN, NON }; CardType type; CardColor color; int manaCost; char* abilities; char* flavorText; char* keywords; bool inPlay; bool tapped; bool cursed; bool enchanted; Artifact* artifact; virtual int enchant( Card * ); virtual int disenchant (Card * ); virtual int equipArtifact( Artifact* ); virtual Artifact* unequipArtifact(Card * ); }; // ------------ class Spell: public Card { public: Spell(); ~Spell(); Spell(const Spell &); protected: virtual int heal( Spell *&, int ); virtual int attack( Spell *& ); virtual int counter( Spell*& ); int currToughness; int baseToughness; int currPower; int basePower; }; class Land: public Card { public: Land(); ~Land(); protected: virtual int generateMana(int); }; class Forest: public Land { public: Forest(); ~Forest(); protected: int generateMana(); }; class Creature: public Spell { public: Creature(); ~Creature(); protected: bool summoningSick; }; class Sorcery: public Spell { public: Sorcery(); ~Sorcery(); protected: }; #endif conception.h -- this is an "uber class" from which everything derives class Conception{ public: Conception(); ~Conception(); protected: char* classType; }; conception.cpp Conception::Conception{ Conception(){ classType = new char[11]; char = "Conception"; } game.cpp -- this is an incomplete class as of this code #include <iostream> #include <cctype> #include "game.h" #include "player.h" Battlefield::Battlefield(){ card = 0; } Battlefield::~Battlefield(){ delete card; } Battlefield::Battlefield(const Battlefield & to_copy){ } // =========== /* class Game(){ public: Game(); ~Game(); protected: Player** player; // for multiple players Battlefield* root; // for battlefield getPlayerMove(); // ask player what to do addToBattlefield(); removeFromBattlefield(); sendAttack(); } */ #endif game.h #ifndef GAME_H #define GAME_H #include "list.h" class CardList(); class Battlefield : CardList{ public: Battlefield(); ~Battlefield(); protected: Card* card; // make an array }; class Game : Conception{ public: Game(); ~Game(); protected: Player** player; // for multiple players Battlefield* root; // for battlefield getPlayerMove(); // ask player what to do addToBattlefield(); removeFromBattlefield(); sendAttack(); Battlefield* field; }; list.cpp #include <iostream> #include <cctype> #include "list.h" // ========== LinkedList::LinkedList(){ root = new Node; classType = new char[strlen("LinkedList") + 1]; classType = "LinkedList"; }; LinkedList::~LinkedList(){ delete root; } LinkedList::LinkedList(const LinkedList & obj) { // code to copy } // --------- // ========= int LinkedList::delete_all(Node* root){ if (root = 0) return 0; delete_all(root->next); root = 0; } int LinkedList::add( Conception*& is){ if (root == 0){ root = new Node; root->next = 0; } else { Node * curr = root; root = new Node; root->next=curr; root->it = is; } } int LinkedList::remove(Node * root, Node * prev, Conception* is){ if (root = 0) return -1; if (root->it == is){ root->next = root->next; return 0; } remove(root->next, root, is); return 0; } Conception* LinkedList::find(Node*& root, const Conception* is, Conception* holder = NULL) { if (root==0) return NULL; if (root->it == is){ return root-> it; } holder = find(root->next, 
is); return holder; } Node* LinkedList::goForward(Node * root){ if (root==0) return root; if (root->next == 0) return root; else return root->next; } // ============ Node* LinkedList::goBackward(Node * root){ root = root->prev; } list.h #ifndef LIST_H #define LIST_H #include <iostream> #include "conception.h" class Node : public Conception { public: Node() : next(0), prev(0), it(0) { it = 0; classType = new char[strlen("Node") + 1]; classType = "Node"; }; ~Node(){ delete it; delete next; delete prev; } Node* next; Node* prev; Conception* it; // generic object }; // ---------------------- class LinkedList : public Conception { public: LinkedList(); ~LinkedList(); LinkedList(const LinkedList&); friend bool operator== (Conception& thing_1, Conception& thing_2 ); protected: virtual int delete_all(Node*); virtual int add( Conception*& ); // virtual Conception* find(Node *&, const Conception*, Conception* ); // virtual int remove( Node *, Node *, Conception* ); // removes question with keyword int display_all(node*& ); virtual Node* goForward(Node *); virtual Node* goBackward(Node *); Node* root; // write copy constrcutor }; // ============= class CircularLinkedList : public LinkedList { public: // CircularLinkedList(); // ~CircularLinkedList(); // CircularLinkedList(const CircularLinkedList &); }; class DoubleLinkedList : public LinkedList { public: // DoubleLinkedList(); // ~DoubleLinkedList(); // DoubleLinkedList(const DoubleLinkedList &); protected: }; // END OF LIST Hierarchy #endif player.cpp #include <iostream> #include "player.h" #include "list.h" using namespace std; Library::Library(){ root = 0; } Library::~Library(){ delete card; } // ====DECL========= Player::~Player(){ delete fname; delete lname; delete deck; } Wizard::~Wizard(){ delete mana; delete rootL; delete rootH; } // =====Player====== void Player::changeName(const char[] first, const char[] last){ char* backup1 = new char[strlen(fname) + 1]; strcpy(backup1, fname); char* backup2 = new char[strlen(lname) + 1]; strcpy(backup1, lname); if (first != NULL){ fname = new char[strlen(first) +1]; strcpy(fname, first); } if (last != NULL){ lname = new char[strlen(last) +1]; strcpy(lname, last); } return 0; } // ========== void Player::seeStats(Stats*& to_put){ to_put->wins = stats->wins; to_put->losses = stats->losses; to_put->winRatio = stats->winRatio; } // ---------- void Player::displayDeck(const LinkedList* deck){ } // ================ void CardList::findCard(Node* root, int id, NodeCard*& is){ if (root == NULL) return; if (root->it.id == id){ copyCard(root->it, is); return; } else findCard(root->next, id, is); } // -------- void CardList::deleteAll(Node* root){ if (root == NULL) return; deleteAll(root->next); root->next = NULL; } // --------- void CardList::removeCard(Node* root, int id){ if (root == NULL) return; if (root->id = id){ root->prev->next = root->next; // the prev link of root, looks back to next of prev node, and sets to where root next is pointing } return; } // --------- void CardList::addCard(Card* to_add){ if (!root){ root = new Node; root->next = NULL; root->prev = NULL; root->it = &to_add; return; } else { Node* original = root; root = new Node; root->next = original; root->prev = NULL; original->prev = root; } } // ----------- void CardList::displayAll(Node*& root){ if (root == NULL) return; cout << "Card Name: " << root->it.cardName; cout << " || Type: " << root->it.type << endl; cout << " --------------- " << endl; if (root->classType == "Spell"){ cout << "Base Power: " << root->it.basePower; cout << " || 
Current Power: " << root->it.currPower << endl; cout << "Base Toughness: " << root->it.baseToughness; cout << " || Current Toughness: " << root->it.currToughness << endl; } cout << "Card Type: " << root->it.currPower; cout << " || Card Color: " << root->it.color << endl; cout << "Mana Cost" << root->it.manaCost << endl; cout << "Keywords: " << root->it.keywords << endl; cout << "Flavor Text: " << root->it.flavorText << endl; cout << " ----- Class Type: " << root->it.classType << " || ID: " << root->it.id << " ----- " << endl; cout << " ******************************************" << endl; cout << endl; // ------- void CardList::copyCard(const Card& to_get, Card& put_to){ put_to.type = to_get.type; put_to.color = to_get.color; put_to.manaCost = to_get.manaCost; put_to.inPlay = to_get.inPlay; put_to.tapped = to_get.tapped; put_to.class = to_get.class; put_to.id = to_get.id; put_to.enchanted = to_get.enchanted; put_to.artifact = to_get.artifact; put_to.class = to_get.class; put.to.abilities = new char[strlen(to_get.abilities) +1]; strcpy(put_to.abilities, to_get.abilities); put.to.keywords = new char[strlen(to_get.keywords) +1]; strcpy(put_to.keywords, to_get.keywords); put.to.flavorText = new char[strlen(to_get.flavorText) +1]; strcpy(put_to.flavorText, to_get.flavorText); if (to_get.class = "Spell"){ put_to.baseToughness = to_get.baseToughness; put_to.basePower = to_get.basePower; put_to.currToughness = to_get.currToughness; put_to.currPower = to_get.currPower; } } // ---------- player.h #ifndef player.h #define player.h #include "list.h" // ============ class CardList() : public LinkedList(){ public: CardList(); ~CardList(); protected: virtual void findCard(Card&); virtual void addCard(Card* ); virtual void removeCard(Node* root, int id); virtual void deleteAll(); virtual void displayAll(); virtual void copyCard(const Conception*, Node*&); Node* root; } // --------- class Library() : public CardList(){ public: Library(); ~Library(); protected: Card* card; int numCards; findCard(Card&); // get Card and fill empty template } // ----------- class Deck() : public CardList(){ public: Deck(); ~Deck(); protected: enum deckColor { WHITE, BLUE, BLACK, RED, GREEN, MIXED }; char* deckName; } // =============== class Mana(int amount) : public Conception { public: Mana() : displayTotal(0), classType(0) { displayTotal = 0; classType = new char[strlen("Mana") + 1]; classType = "Mana"; }; protected: int accrued; void add(); void remove(); int displayTotal(); } inline Mana::add(){ accrued += 1; } inline Mana::remove(){ accrued -= 1; } inline Mana::displayTotal(){ return accrued; } // ================ class Stats() : public Conception { public: friend class Player; friend class Game; Stats() : wins(0), losses(0), winRatio(0) { wins = 0; losses = 0; if ( (wins + losses != 0) winRatio = wins / (wins + losses); else winRatio = 0; classType = new char[strlen("Stats") + 1]; classType = "Stats"; } protected: int wins; int losses; float winRatio; void int getStats(Stats*& ); } // ================== class Player() : public Conception{ public: Player() : wins(0), losses(0), winRatio(0) { fname = NULL; lname = NULL; stats = NULL; CardList = NULL; classType = new char[strlen("Player") + 1]; classType = "Player"; }; ~Player(); Player(const Player & obj); protected: // member variables char* fname; char* lname; Stats stats; // holds previous game statistics CardList* deck[]; // hold multiple decks that player might use - put ll in this private: // member functions void changeName(const char[], const char[]); void 
shuffleDeck(int); void seeStats(Stats*& ); void displayDeck(int); chooseDeck(); } // -------------------- class Wizard(Card) : public Player(){ public: Wizard() : { mana = NULL; rootL = NULL; rootH = NULL}; ~Wizard(); protected: playCard(const Card &); removeCard(Card &); attackWithCard(Card &); enchantWithCard(Card &); disenchantWithCard(Card &); healWithCard(Card &); equipWithCard(Card &); Mana* mana[]; Library* rootL; // Library Library* rootH; // Hand } #endif

    Read the article

  • fatal: git-http-push-failed (return code 22)

    - by Mariusz
    Hello, that's me again. After having a problem with establishing a connection to github.com, I now have a problem with the next step: pushing. I should mention that I am a novice with Git and this whole distributed version control world. I have done git init, then git add *.h and git add *.cpp, but currently git status does not print anything in the "# On branch master" section. Previously it was correctly printing the whole list of added files; now this list is gone. Next, I executed:
    git remote add origin https://github.com/mgeeky/disasm.git
    and an error occurred after:
    git push origin master
    Username:
    Password:
    error: Cannot access URL https://github.com/mgeeky/disasm.git/, return code 22
    fatal: git-http-push failed
    What should I do now? I've tried:
    git push origin
    Username:
    Password:
    No refs in common and none specified; doing nothing.
    Perhaps you should specify a branch such as 'master'.
    Everything up-to-date
    But it seems to be okay.
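    A couple of general notes that may help here (these are generic git commands, not specific to this project; the username is guessed from the repository URL): return code 22 comes from curl and usually means github.com answered the push with an HTTP error, which points at the credentials or the URL rather than the local files, and very old git builds also lack the smart-HTTP support that pushing to GitHub over https requires. Also, git add only stages files; there has to be at least one commit before anything can be pushed. Some hedged things to try:
      # confirm what the remote points at and that there is a commit to push
      git remote -v
      git status
      git log

      # git add only stages; create the commit that git push will send
      git commit -m "initial import"

      # re-add the remote with the username embedded, then push the branch explicitly
      git remote rm origin
      git remote add origin https://[email protected]/mgeeky/disasm.git
      git push origin master

      # if the git build is too old for smart HTTP, SSH is the usual fallback
      git remote rm origin
      git remote add origin [email protected]:mgeeky/disasm.git
      git push origin master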

    Read the article

  • WMI Rights required to read root\MicrosoftIISv2 in IIS7 with IIS6 compatibility mode

    - by JoeBilly
    I need to manage my IIS7 (Windows Server 2008) remotely with a WMI IIS6 API. So I added the IIS6 WMI Compatibility and IIS6 Metabase Compatibility roles to access the root\MicrosoftIIsv2 namespace. I have a domain account which is not administrator on the remote machine ; with this right, everything is ok. I configured these rights for my domain account to access the root\MicrosoftIIsv2 WMI namespace remotely ; note that these rights work perfectly on a IIS6 and Windows Server 2003 : DCOM : Account in Distributed COM Users Remote & local access to DCOM WMI : Root\CIMV2 (I need access here too) Execute methods, Enable Account, Remote Enable Root\Default (I need access here too) Execute methods, Enable Account, Remote Enable Root\MicrosoftIISv2 Execute methods, Enable Account, Provider Write, Remote Enable IIS Metabase (Metabase Explorer) : LM Full Control (W3SVC inherits these permissions) I tried to give some access on C:\Windows\System32\inetsrv too ; don't know if needed. My issue is : I can't list the IIS WebSites (\root\MicrosoftIISv2:IIsWebServerSetting.Name="W3SVC/*"). I don't get an 'access denied' but nothing is returned. My API and powershell tests can connect and execute queries in the root\MicrosoftIISv2 namespace I can read the IIsComputer class ex: Get-WmiObject IIsComputer -namespace "ROOT\MicrosoftIISv2" -authentication PacketPrivacy | SELECT * I can't read the IIsWebServerSetting, IIsWebServer ... to list the WebSites : the query returns an empty collection ex: Get-WmiObject IIsWebServerSetting -namespace "ROOT\MicrosoftIISv2" -authentication PacketPrivacy | SELECT ServerComment All queries work perfectly if the account is administrator as already said I am using PacketPrivacy authentication FI: I got a Warning Event 5605 with the Administrator right or not, that does not seem to have an impact : The root\MicrosoftIISv2 namespace is marked with the RequiresEncryption flag. Access to this namespace might be denied if the script or application does not have the appropriate authentication level. Change the authentication level to Pkt_Privacy and run the script or application again Ok, I have some more informations, when I use IIS 6 Metabase Explorer with my administrator account I can see the rights are correctly inherited for my non-administrator account. But when I try to connect using my non-administrator account, I can list the LM node, but get an "access denied, failed to get a key's data" when I try to browse the child nodes. I'll check further. I tried to Trace the WMI Activity, and everything seems OK ; this tends to confirm that the problem lies in IIS Rights.

    Read the article

  • How to add NT Virtual Machine\Virtual Machines to GPO

    - by Nicola Cassolato
    I have a Windows 2012 Server with Hyper-V enabled and a few virtual machines. My current configuration has a few accounts in the "Log on as a service" list in the domain policies, and sometimes this prevents my virtual machines from starting (I get this error: 'Error 0x80070569 ('VM_NAME' failed to start worker process: Logon Failure: The user has not been granted the requested logon type at this computer.')). As described in this KB, I would like to add NT Virtual Machine\Virtual Machines to my "Log on as a service" list to resolve the problem. My problem is that when I try to add that user to my domain policy I get an error message: "The following account could not be validated". My domain controller obviously doesn't know about that user, since it's not a Hyper-V-enabled server. How can I add that account to my domain policies?

    Read the article

  • OpenNebula: [HostPoolInfo] User couldn't be authenticated, aborting call

    - by ulf
    I installed OpenNebula 3.2.1 following the guide found under http://opennebula.org/documentation:rel3.2:ignc on a Debian 6.0.4 machine. Everything seemed fine until I tried to execute the command onevm list. Then I always get this:
    oneadmin@opennebula-master:~$ onevm list
    [VirtualMachinePoolInfo] User couldn't be authenticated, aborting call.
    The file one_auth exists. I even gave the oneadmin user a password, although it doesn't seem to be required according to the guide. I copied the password hash from /etc/shadow to the one_auth file. Still no success. Any ideas are appreciated.
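    One detail that stands out: one_auth is expected to hold the plain "username:password" pair that oned authenticates against, not a hash copied from /etc/shadow, so a hashed entry would produce exactly this authentication failure. A minimal check, assuming the oneadmin password is known (placeholder shown below):
      # run as the oneadmin user
      cat ~/.one/one_auth        # should read  oneadmin:<plain password>, not a shadow hash
      echo 'oneadmin:YourPasswordHere' > ~/.one/one_auth   # the password used when oned was first set up
      chmod 600 ~/.one/one_auth
      onevm list
    If the password in place at first startup is unknown, resetting it (for example with oneuser passwd) and putting the new value into one_auth is the usual way out.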

    Read the article

  • Mac OS X Server Configure DHCP Options 66 and 67

    - by Paul Adams
    I need to configure Mountain Lion (10.8.2) OS X Server BOOTP to provide DHCP options 66 and 67 to provide PXE booting for PCs on my network. I have tried following the bootpd MAN pages, but they are not specific enough. I have also read conflicting information on the net, but nothing definitive for Mountain Lion DHCP. From bootpd man page: bootpd has a built-in type conversion table for many more options, mostly those specified in RFC 2132, and will try to convert from whatever type the option appears in the property list to the binary, packet format. For example, if bootpd knows that the type of the option is an IP address or list of IP addresses, it converts from the string form of the IP address to the binary, network byte order numeric value. If the type of the option is a numeric value, it converts from string, integer, or boolean, to the proper sized, network byte-order numeric value. Regardless of whether bootpd knows the type of the option or not, you can always specify the DHCP option using the data property list type <key>dhcp_option_128</key> <data> AAqV1Tzo </data> My TFTP server is 172.16.152.20 and the bootfile is pxelinux.0 I have edited /etc/bootpd.plist and added the following to the subnet dict: <key>dhcp_option_66</key> <data> LW4gLWUgrBCYFAo= </data> <key>dhcp_option_67</key> <data> LW4gLWUgcHhlbGludXguMAo= </data> According to the man page, the data elements are supposed to be Base64 encoded, but no matter what I try, I cannot get PXE clients to boot. I have tried encoding 172.16.152.20 using various methods: echo "172.16.152.20" | openssl enc -base64 returns MTcyLjE2LjE1Mi4yMAo= DHCP Option Code Utility (http://mac.softpedia.com/get/Internet-Utilities/DHCP-Option-Code-Utility.shtml) generating a string from 172.16.152.20 yields: LW4gLWUgMTcyLjE2LjE1Mi4yMAo= (used in the above example) DHCP Option Code Utility generating an IP Addresss from 172.16.152.20 yields: LW4gLWUgrBCYFAo= Encoding pxelinux.0 with the above methods likewise yields different encodings. I have tried using all three methods of encoding the data elements, but nothing seems to work i.e. my PXE boot clients do not get directed to my TFTP server. Can anyone help? Regards, Paul Adams.
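    One detail worth checking in the property list above: those base64 strings decode to more than the intended payload. 'MTcyLjE2LjE1Mi4yMAo=' is '172.16.152.20' plus a trailing newline (echo adds one), and the strings produced by the utility begin with the literal characters '-n -e ' ('LW4gLWUg...'), so bootpd would be handing clients stray bytes either way. Options 66 and 67 are defined as plain strings (TFTP server name and bootfile name), so one hedged approach is to encode exactly those bytes and nothing else, for example:
      # encode the option values with no trailing newline and no stray prefix
      printf '172.16.152.20' | base64     # -> MTcyLjE2LjE1Mi4yMA==
      printf 'pxelinux.0'    | base64     # -> cHhlbGludXguMA==

      # decode what is currently in /etc/bootpd.plist to see the extra bytes
      echo 'LW4gLWUgcHhlbGludXguMAo=' | base64 -D
    Pasting the new values into the dhcp_option_66 / dhcp_option_67 <data> elements and reloading the DHCP service would then hand clients clean strings.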

    Read the article

  • SOLVED: Bootloader isn't executable booting Xen PV guest with virt-manager

    - by user2284355
    I am going insane over an error I keep encountering while trying to install a Debian Wheezy PV guest on an Ubuntu Server Precise machine with the default Xen build and libvirt. The steps I take with virt-manager are the following:
    1. Net install via: http://ftp.es.debian.org/debian/dists/stable/main/installer-amd64/
    2. The install process is flawless, done via VNC through virt-manager.
    3. When the VM starts I get the following error: Error starting domain: POST operation failed: xend_post: error from xen daemon: (xend.err "Bootloader isn't executable")
    Most answers I have found on Google say that I need to edit the VM's .cfg file and correct the path to pygrub, but virt-manager does not seem to create this file (I have searched the entire drive with "find"). Another detail is that virsh list --all shows no VMs (not even dom0), while the command xm list shows all of them. Any help is much appreciated.
    EDIT: Connected remotely via virsh with virsh -c xen+ssh://user@ip dumpxml vmname, found the line /usr/bin/pygrub, then ran ln -s /usr/lib/xen-4.1/bin/pygrub /usr/bin/pygrub and now it works. If anyone can think of a better solution, give me a shout. Cheers

    Read the article

  • Can't install kernel-uek-headers for currently running kernel

    - by haydenc2
    I have just created a VM in VMWare and installed a minimal install of Oracle Enterprise Linux 6.3. # cat /etc/oracle-release Oracle Linux Server release 6.3 It is running with the UEK kernel. # uname -r 2.6.39-200.24.1.el6uek.x86_64 When I try and install VMWare Tools, I get the following error. Searching for a valid kernel header path... The path "" is not a valid path to the 2.6.39-200.24.1.el6uek.x86_64 kernel headers. Would you like to change it? [yes] I have version 2.6.39 of the UEK installed, but the kernel-uek-headers are only 2.6.32. # yum list kernel-uek Installed Packages kernel-uek.x86_64 2.6.39-200.24.1.el6uek @anaconda-UEK2/6.3 kernel-uek.x86_64 2.6.39-200.29.3.el6uek @ol6_UEK_latest # yum list kernel-uek-headers Installed Packages kernel-uek-headers.x86_64 2.6.32-300.32.2.el6uek @ol6_latest And it appears that the headers for 2.6.39 aren't there. # yum list kernel-uek-headers --showduplicates Installed Packages kernel-uek-headers.x86_64 2.6.32-300.32.2.el6uek @ol6_latest Available Packages kernel-uek-headers.x86_64 2.6.32-100.28.5.el6 ol6_latest kernel-uek-headers.x86_64 2.6.32-100.28.9.el6 ol6_latest kernel-uek-headers.x86_64 2.6.32-100.28.11.el6 ol6_latest kernel-uek-headers.x86_64 2.6.32-100.28.15.el6 ol6_latest kernel-uek-headers.x86_64 2.6.32-100.28.17.el6 ol6_latest kernel-uek-headers.x86_64 2.6.32-100.34.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-100.35.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-100.36.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-100.37.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-200.16.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-200.19.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-200.20.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-200.23.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-300.3.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-300.4.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-300.7.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-300.11.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-300.20.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-300.21.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-300.24.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-300.25.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-300.27.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-300.29.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-300.29.2.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-300.32.1.el6uek ol6_latest kernel-uek-headers.x86_64 2.6.32-300.32.2.el6uek ol6_latest The kernel for 2.6.32 is there. 
# yum list kernel-uek --showduplicates Installed Packages kernel-uek.x86_64 2.6.39-200.24.1.el6uek @anaconda-UEK2/6.3 kernel-uek.x86_64 2.6.39-200.29.3.el6uek @ol6_UEK_latest Available Packages kernel-uek.x86_64 2.6.32-100.28.5.el6 ol6_latest kernel-uek.x86_64 2.6.32-100.28.9.el6 ol6_latest kernel-uek.x86_64 2.6.32-100.28.11.el6 ol6_latest kernel-uek.x86_64 2.6.32-100.28.15.el6 ol6_latest kernel-uek.x86_64 2.6.32-100.28.17.el6 ol6_latest kernel-uek.x86_64 2.6.32-100.34.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-100.35.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-100.36.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-100.37.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-200.16.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-200.19.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-200.20.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-200.23.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-300.3.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-300.4.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-300.7.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-300.11.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-300.20.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-300.21.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-300.24.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-300.25.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-300.27.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-300.29.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-300.29.2.el6uek ol6_latest kernel-uek.x86_64 2.6.32-300.32.1.el6uek ol6_latest kernel-uek.x86_64 2.6.32-300.32.2.el6uek ol6_latest kernel-uek.x86_64 2.6.39-100.5.1.el6uek ol6_UEK_latest kernel-uek.x86_64 2.6.39-100.6.1.el6uek ol6_UEK_latest kernel-uek.x86_64 2.6.39-100.7.1.el6uek ol6_UEK_latest kernel-uek.x86_64 2.6.39-100.10.1.el6uek ol6_UEK_latest kernel-uek.x86_64 2.6.39-200.24.1.el6uek ol6_UEK_latest kernel-uek.x86_64 2.6.39-200.29.1.el6uek ol6_UEK_latest kernel-uek.x86_64 2.6.39-200.29.2.el6uek ol6_UEK_latest kernel-uek.x86_64 2.6.39-200.29.3.el6uek ol6_UEK_latest Should I downgrade the kernel to 2.6.32 so I can install VMWare tools? Is there another way to get the kernel-uek-headers package for the 2.6.39 UEK?
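    For what it's worth, building third-party modules (VMware Tools included) against a UEK kernel usually wants the matching kernel-uek-devel package rather than kernel-uek-headers: the devel package installs the build tree under /usr/src/kernels for the exact running version. A hedged sketch (package availability depends on which public-yum/ULN channels are enabled):
      # install the devel package that matches the running kernel
      yum install kernel-uek-devel-$(uname -r)

      # the directory the VMware Tools installer is asking about should then exist
      ls -d /usr/src/kernels/$(uname -r)
    Pointing vmware-install.pl at that path (or its include/ subdirectory, if it insists on headers) is typically enough, without downgrading to the 2.6.32 kernel.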

    Read the article

  • Can't clone gitosis-admin.git local from my snow leopard server running gitosis

    - by joggink
    I've installed gitosis as described here: http://gist.github.com/264304 One of the things that I had to adjust was to give my git user permissions to use ssh (which I did trough server admin - access - ssh and added the 'git' user). When I ssh from my local machine (mac osx) as the git user, I get this response: PTY allocation request failed on channel 0 bash: gitosis-serve: command not found Connection to 10.0.0.108 closed. Which I think is normal, because in Pro GIT the author says you should get something like this if you try ssh'ing to the server using your git user: PTY allocation request failed on channel 0 fatal: unrecognized command 'gitosis-serve schacon@quaternion' Connection to gitserver closed. So far so good, I think? Now when I try to clone my gitosis-admin.git repository using this command: $git clone [email protected]:gitosis-admin.git I get this: Initialized empty Git repository in /Users/joggink/gitosis-admin/.git/ bash: gitosis-serve: command not found fatal: The remote end hung up unexpectedly So after doing some searching, I found an answer here on serverfault claiming I should use ssh:// as protocol for my git clone (which I thought is the default git protocol?) However, when I try: $git clone ssh://[email protected]:gitosis-admin.git This is the response: The authenticity of host ' (::1)' can't be established. RSA key fingerprint is 80:4d:77:c7:78:cb:c9:42:e3:82:06:7c:fe:c0:08:ce. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added '' (RSA) to the list of known hosts. Password: Password: Password: Permission denied (publickey,keyboard-interactive). fatal: The remote end hung up unexpectedly When I type in my own password (because my git user has no password), I get following error: The authenticity of host ' (::1)' can't be established. RSA key fingerprint is 80:4d:77:c7:78:cb:c9:42:e3:82:06:7c:fe:c0:08:ce. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added '' (RSA) to the list of known hosts. Password: bash: git-upload-pack: command not found fatal: The remote end hung up unexpectedly I added my upload-pack location as followed: $git clone -u /usr/local/git/bin/git-upload-pack ssh://[email protected]:gitosis-admin.git I get the error that gitosis-admin.git isn't a git repo... Initialized empty Git repository in /Users/joggink/gitosis-admin/.git/ The authenticity of host ' (::1)' can't be established. RSA key fingerprint is 80:4d:77:c7:78:cb:c9:42:e3:82:06:7c:fe:c0:08:ce. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added '' (RSA) to the list of known hosts. Password: fatal: '[email protected]:gitosis-admin.git' does not appear to be a git repository fatal: The remote end hung up unexpectedly I've been searching for a solution for almost a week now, and every topic I've found on the internet gives no result...

    Read the article

  • Windows 7 and wallpaper folders

    - by Andi
    I am trying to change my desktop wallpaper and I cannot choose a different folder from the one I chose at first. When I first installed Windows 7, I changed the wallpaper by right-clicking on the desktop and choosing Customize (the label may not be exact because I use the Italian version), then Change Wallpaper at the bottom, and finally I browsed for a folder with some images in it and chose the wallpaper. When I do the exact same thing now, the folder I select in the final step is not actually selected: its pictures are not shown and it doesn't appear in the drop-down list. I can set a specific picture as the wallpaper by selecting it through Explorer, and sometimes (not always) the folder then shows in the drop-down list. Is there any possible solution to this?

    Read the article

  • Munin-cron fails "Nothing to do", possibly a munin.conf problem?

    - by geerlingguy
    I have been working on this for a few hours now, and haven't yet been able to get munin to output the html files/generated graphs of resource usage on my CentOS 5.3 server. Here are some things I run as the munin user, and the results: /usr/share/munin/munin-update --nofork --debug (above works fine, takes ~2.4 seconds to complete) munin-run cpu (And other options/plugins (besides 'cpu'), all work fine and give desired output) munin-cron Fails with: [FATAL] There is nothing to do here, since there are no nodes with any plugins. Please refer to http://munin-monitoring.org/wiki/FAQ_no_graphs at /usr/share/munin/munin-html line 38 I am wondering if, perhaps, the settings in my munin.conf file might be causing a problem. Here's the contents of that file (below): bdir /var/lib/munin/ htmldir /home/archdev/public_html/monitoring logdir /var/log/munin rundir /var/run/munin/ tmpldir /etc/munin/templates [archstl.archstl.org] address 127.0.0.1 use_node_name yes Also, when I run the telnet localhost 4949 command, and list the node's plugins, it returns the default munin list... something seems to be wrong with the munin html creation process. :(

    Read the article

  • how to install newer GCC version in CentOS 5.7?

    - by gkdsp
    Using CentOS 5.7, how do I install GCC version 4.6? I just installed version 4.4 using # yum install gcc44 but that version still doesn't support variable length arrays from C99 standard. I don't see a newer version than 4.4 when I type: [root@host2 /etc]# yum list gcc\* Excluding Packages in global exclude list Finished Installed Packages gcc.x86_64 4.1.2-51.el5 installed gcc-c++.x86_64 4.1.2-51.el5 installed gcc-gfortran.x86_64 4.1.2-51.el5 installed gcc44.x86_64 4.4.4-13.el5 installed Available Packages gcc-gnat.x86_64 4.1.2-51.el5 system-base gcc-java.x86_64 4.1.2-51.el5 system-base gcc-objc.x86_64 4.1.2-51.el5 system-base gcc-objc++.x86_64 4.1.2-51.el5 system-base gcc44-c++.x86_64 4.4.4-13.el5 system-base gcc44-gfortran.x86_64 4.4.4-13.el5 system-base I wonder if the newer versions of GCC are not available to CentOS because they're deemed not yet reliable/stable enough (?) Can I download gcc-4.5.3.tar.gz from here: http://fileboar.com/gcc/releases/gcc-4.5.3/ but then how to install?
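    If a newer compiler really is required on CentOS 5, the usual fallback is to build GCC from the source tarball into its own prefix and leave the system gcc untouched. A rough sketch, assuming the 4.5.3 tarball linked above and that the GMP, MPFR and MPC development libraries (GCC's prerequisites) are already installed; the same recipe applies to a 4.6.x tarball:
      tar xzf gcc-4.5.3.tar.gz
      mkdir gcc-build && cd gcc-build        # GCC prefers building outside the source tree
      ../gcc-4.5.3/configure --prefix=/opt/gcc-4.5.3 --enable-languages=c,c++ --disable-multilib
      make -j2
      make install                           # as root
      /opt/gcc-4.5.3/bin/gcc --version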

    Read the article

  • Is there a utility that displays hotkeys for the current application?

    - by Alwin
    To speed up my computer use I'm trying to use as many hotkeys as possible, but because I work in many applications it is really hard to remember all those shortcuts. I'm looking for a program that detects which application I'm currently using and displays a list of the hotkeys available in it. Example: I'm writing a document in Word, and the utility shows a list of hotkeys I could use in Word. With such a program I could learn the shortcut keys much faster. Does such a utility exist?

    Read the article

  • What's required to configure Ubuntu to use a specific DNS server?

    - by ks78
    I've set up two Amazon EC2 instances, both running Ubuntu Server. One is configured as a DNS server running bind9, which will be used to allow EC2 instances to communicate with each other by hostname rather than IP, since their private IPs may change. I think I have the DNS server set up correctly. I want to use the second EC2 instance to test the DNS server. Using Webmin, I've added the DNS server's private IP to the client's DNS Servers list and added the domain to the Search Domains list. I did have to edit /etc/dhcp3/dhclient.conf to make my changes stick. After a reboot, I expected I'd be able to ping or nslookup the DNS server from the test client, but the client can't seem to find the server. Is there something I'm missing? What's required to configure an Ubuntu client to use a specific DNS server? I just want to make sure I'm not missing something before I assume the server's the problem.
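    For reference, the dhclient side of this usually comes down to a couple of lines in /etc/dhcp3/dhclient.conf so that every lease renewal keeps resolv.conf pointing at the private DNS server; the address and domain below are placeholders, not values from the question:
      prepend domain-name-servers 10.0.0.10;        # private IP of the bind9 instance (placeholder)
      supersede domain-name "internal.example";     # zone served by that instance (placeholder)
    After releasing and renewing the lease (or rebooting), /etc/resolv.conf should list that nameserver first; testing with nslookup somehost.internal.example 10.0.0.10 queries the server directly, and it is also worth confirming that the EC2 security groups allow DNS traffic (UDP and TCP port 53) between the two instances, since a blocked port looks exactly like "can't find the server".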

    Read the article

  • NLB 2 Windows Server 2003 Servers

    - by Paul Hinett
    I need to configure Windows NLB on 2 dedicated servers I have. My main machine has been running for some time, with several domain names pointing to the server's primary IP address. Both servers have 2 NICs installed, and both have several secondary public IP addresses available if needed. What IP address would I use for the cluster IP, and does this IP need to be added to the IP address list of both public NICs? What IP addresses do I use for each host's dedicated IP? Please help, this is driving me nuts... I've taken down the server twice by accident today! Thank you in advance!

    Read the article

  • Is there a taskbar for OS X?

    - by Paul Biggar
    I'd like to permanently see a clickable list of the windows I have open, in the same way that the taskbar allows in Windows. Can I do this on a Mac? Some details:
    - I have many virtual desktops (Spaces), so often a single application has windows on many of them.
    - I often have multiple windows of each application, such as the terminal or browser, on the same virtual desktop.
    - I have multiple monitors, if it matters.
    Edit: When I say 'permanently see a clickable list of windows I have open' I mean that I want to see every window I have open, and I'd like to be able to click on each one to switch to that window. I'm not looking for the newer behaviour where windows are clustered by application.

    Read the article

  • Why does limiting my virtual memory to 512MB with ulimit -v crash the JVM?

    - by Narinder Kumar
    I am trying to enforce maximum memory a program can consume on a Unix system. I thought ulimit -v should do the trick. Here is a sample Java program I have written for testing : import java.util.*; import java.io.*; public class EatMem { public static void main(String[] args) throws IOException, InterruptedException { System.out.println("Starting up..."); System.out.println("Allocating 128 MB of Memory"); List<byte[]> list = new LinkedList<byte[]>(); list.add(new byte[134217728]); //128 MB System.out.println("Done...."); } } By default, my ulimit settings are (output of ulimit -a) : core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 31398 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 1024 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 31398 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited When I execute my java program (java EatMem), it executes without any problems. Now I try to limit max memory available to any program launched in the current shell to 512MB by launching the following command : ulimit -v 524288 ulimit -a output shows the limit to be set correctly (I suppose): core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 31398 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 1024 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 31398 virtual memory (kbytes, -v) 524288 file locks (-x) unlimited If I now try to execute my java program, it gives me the following error: Error occurred during initialization of VM Could not reserve enough space for object heap Could not create the Java virtual machine. Ideally it should not happen as my Java program is only taking around 128MB of memory which is well within my specified ulimit parameters. If I change the arguments to my Java program as below: java -Xmx256m EatMem The program again works fine. While trying to give more memory than limited by ulimit like : java -Xmx800m EatMem results in expected error. Why the program fails to execute in the first case after setting ulimit ? I have tried the above test on Ubuntu 11.10 and 12.0.4 with Java 1.6 and Java 7
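    For context on why this fails even though the program only touches 128 MB: ulimit -v caps the process's total virtual address space, and a 64-bit JVM reserves its whole maximum heap (typically a quarter of physical RAM by default) plus code cache, thread stacks and mapped libraries up front, so the reservation can easily exceed 512 MB before main() ever runs; passing -Xmx shrinks that up-front reservation, which is why -Xmx256m works under the same limit. Two hedged checks:
      # see what maximum heap the JVM would pick by default on this machine
      java -XX:+PrintFlagsFinal -version | grep -i maxheapsize

      # keep the 512 MB cap but size the reservation to fit inside it
      ulimit -v 524288
      java -Xmx256m EatMem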

    Read the article

  • NIC models for HVM guests in Xen 4.0

    - by Jed Daniels
    Is there a list of NIC models that I should use with various HVM guests in Xen 4.0? Is there even a list of possible choices for the model=field? I'm planning on running both Windows and FreeBSD guests on the particular Xen box that I'm configuring now, and ideally I'd like a setting that can work on both of them and get me gigabit speeds without a lot of extra hassle loading special drivers (similar to VMware's e1000 NIC option). Any suggestions? The vif line I'm currently using in my config is: vif = [ 'type=ioemu,bridge=eth0,model=ne2k_pci,script=vif-ovs' ] This works in my Windows XP SP3 VM, but is only recognized as a 10Mbit interface. Changing to model=e1000 or model=i82557b resulted in Windows not being able to find a driver for the NIC.

    Read the article

  • Can one have multiple name servers that don't all belong to the same TLD/provider?

    - by Simon
    In light of the GoDaddy outage, we updated the name server list for our domain to include an additional name server provider. The list looks something like this:
    ns61.domaincontrol.com
    ns54.domaincontrol.com
    ns1.dreamhost.com
    ns2.dreamhost.com
    Both GoDaddy and DreamHost have zone entries to handle the A and MX records. The idea is that if one provider goes down, the other acts as a fallback. However, when I tested my config with http://www.intodns.com/, I got a warning about the SOA serials not agreeing. Have I misunderstood some fundamentals of name-server config? What can I do to prevent future problems?

    Read the article

  • Generating a record of the full(-ish) package management state

    - by intuited
    I'm about to make some system changes and I'd like to have a record of my current happy system state. Is there a convenient way to create a record of this? I'd like to keep track of info like:
    - currently installed packages and their versions
    - which packages are pinned at what version
    - which source (as in /etc/apt/sources.list) they were installed from
    - whether they were installed directly or automatically installed as a dependency of a different package
    - "unknown unknowns": i.e. stuff that I don't know that I should be keeping track of, but which may be important when trying to figure out why something doesn't work
    In short, I'd like to keep as much of the aptitude database as possible. What's the best way to do this? It would be nice if the resulting records were easily readable, though this is not really essential. It would be extra nice if they were readily versionable through an SCM tool like git. There is a superuser question that partially answers this, but it only provides the list of currently installed packages.
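    A low-tech starting point that covers most of the items above, and that drops straight into git, is to dump the dpkg/apt state to plain text; the snippet below is only a sketch, and the aptitude search patterns assume aptitude is installed:
      mkdir -p pkg-state && cd pkg-state
      dpkg -l                           > packages-with-versions.txt
      dpkg --get-selections             > selections.txt          # includes any packages on "hold"
      aptitude search '~i!~M' -F '%p'   > manually-installed.txt
      aptitude search '~i~M'  -F '%p'   > auto-installed.txt
      cp /etc/apt/sources.list .
      cp -r /etc/apt/sources.list.d . 2>/dev/null || true
      cp -r /etc/apt/preferences* .   2>/dev/null || true         # APT pinning, if present
      git init && git add . && git commit -m 'package state snapshot'
    This doesn't capture the "unknown unknowns", but restoring from it (dpkg --set-selections plus aptitude or apt-get) gets close to the recorded state.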

    Read the article

< Previous Page | 405 406 407 408 409 410 411 412 413 414 415 416  | Next Page >