Search Results

Search found 16127 results on 646 pages for 'physical model'.


  • Managed game frameworks with Model loading support [on hold]

    - by codymanix
    I am looking for an alternative to XNA with support for model loading. Does anybody know when, or if, a MonoGame release is planned that includes its own content pipeline and works without XNA being installed? If not, what are the alternatives in managed game development? As far as I know, SlimDX and SharpDX on their own provide no functionality for loading models. Are there any open source libraries that can do that?

    Read the article

  • Going Beyond the Relational Model with Data

    SQL is a powerful tool for querying data, and for aggregating it. However, you can't easily use it to draw inferences, to make predictions, or to tease out subtle correlations. To provide ever more sophisticated inferences to businesses, the race is on to combine the power of the relational model with advanced statistical packages. Both IBM and PostgreSQL are ready with solutions. And SQL Server? Hmm...

    Read the article

  • How can I share an entity framework model across website users

    - by richardmoss
    Hello, Currently my website is based around MVC and the Entity Framework running against a SQL Server 2005 database. So far it has all been running very smoothly, and I really enjoy MVC and its slimmer, more concise code (and no huge viewstates or soul-destroying postbacks ;)).

    Recently I was working on upgrading the site to use a simple forum system, and this is where I started running into problems. When I was testing the site using two different browsers, if I created or replied to a post in one browser, the other browser couldn't see the post. At the moment, each visitor to the site gets their own copy of the entity model, which I store in their session data. Obviously this is the problem, as updates to one model aren't getting carried to the other.

    As a test, I tried storing a single copy of the model which all visitors would access by assigning the model to a static variable. This worked, and both browsers could see each other's modifications. However, it had its side effects. For example, if I fired up both browsers at the same time and the model was initialized, one browser would crash and the other would work fine, despite me using a locking object so in theory one of them should have been delayed until the model was ready (of course I could have implemented this wrong ;)). Also, originally this site did use one model for all visitors, and when it was live it frequently shut down - killing the IIS application pool while it did. Now I'm not sure if this was related, but I don't really want to reintroduce whatever bug caused that shutdown.

    So, my question is a simple one really - what is the best way of either using the same model for all website users so they all see updates, or, if they do have separate copies (which I imagine will have a performance impact in time), how can the models detect changes in the database and update themselves accordingly? Thanks in advance for any advice! Regards; Richard Moss
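
    A common middle ground between one context per session and one global static context is one context per HTTP request: every page load gets a fresh context and therefore sees the latest committed data, while nothing is shared across threads. The sketch below is a minimal illustration of that pattern, assuming an Entity Framework ObjectContext subclass named SiteEntities (a hypothetical stand-in for the generated context class):

        // Minimal sketch of a context-per-request helper (names are illustrative).
        using System.Web;

        public static class ContextPerRequest
        {
            private const string Key = "__siteEntities";

            // Returns the context for the current request, creating it on first use.
            public static SiteEntities Current
            {
                get
                {
                    var items = HttpContext.Current.Items;
                    if (items[Key] == null)
                    {
                        items[Key] = new SiteEntities();
                    }
                    return (SiteEntities)items[Key];
                }
            }

            // Call from Application_EndRequest in Global.asax to dispose the context.
            public static void DisposeCurrent()
            {
                var context = HttpContext.Current.Items[Key] as SiteEntities;
                if (context != null)
                {
                    context.Dispose();
                }
            }
        }

    Wiring DisposeCurrent into Application_EndRequest keeps the context's lifetime bounded to a single request, which avoids both the stale-data problem of per-session contexts and the concurrency problems of a shared static one.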

    Read the article

  • Portable way of determining if a printer is physical or virtual

    - by Mud
    I need direct-to-printer functionality for my website, with the ability to distinguish a physical printer from a virtual printer (file). Coupons.com has this functionality via a native binary which must be installed by the user; I'd prefer to avoid that. SmartSource.com does it via a Java applet. Does anybody know how this is done? I dug through the Java APIs a bit, and don't see anything that would let you determine physical vs virtual, except looking at the name (which seems prone to misidentification). It would be nice to be able to do it in Java, because I already know how to write Java applets. Failing that, is there a way to do this in Flash or Silverlight? Thanks in advance. EDIT: Well-deserved bounty awarded to Jason Sperske who worked out an elegant solution. Thanks to those of you who shared ideas, as well as those who actually investigated SmartSource.com's solution (like Adrian).

    Read the article

  • Virtual and Physical Memory / OutOfMemoryException

    - by user417518
    Hi, I am working on a 64-bit .NET Windows Service application that essentially loads up a bunch of data for processing. While performing data volume testing, we were able to overwhelm the process and it threw an OutOfMemoryException (I do not have any performance statistics on the process from when it failed). I have a hard time believing that the process requested a chunk of memory that would have exceeded the allowable address space for the process, since it's running on a 64-bit machine. I do know that the process is running on a machine that is consistently in the neighborhood of 80%-90% physical memory usage. My question is: can the CLR throw an OutOfMemoryException if the machine is critically low on available physical memory, even though the process wouldn't exceed its allowable amount of virtual memory? Thanks for your help!
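
    One way to make a large load fail gracefully up front rather than partway through is System.Runtime.MemoryFailPoint, which probes whether a requested amount of memory is likely to be available before the work starts. A minimal sketch, assuming the size of an upcoming data batch can be estimated (the 512 MB figure below is purely illustrative):

        using System;
        using System.Runtime;

        public static class BatchLoader
        {
            public static void LoadBatch()
            {
                try
                {
                    // Ask the CLR whether roughly 512 MB can be satisfied before starting.
                    using (new MemoryFailPoint(512))
                    {
                        // ... load and process the data batch here ...
                    }
                }
                catch (InsufficientMemoryException)
                {
                    // Not enough memory is likely to be available; defer or shrink the batch.
                }
            }
        }

    This doesn't answer whether the CLR threw because of physical-memory pressure, but it turns an unpredictable mid-operation OutOfMemoryException into a checked condition the service can react to.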

    Read the article

  • convert from physical path to virtual path

    - by user710502
    I have this function that gets the fileData as a byte array and a file path. The error occurs when it tries to set the fileInfo in the code below. It says 'Physical Path given, Virtual Path expected'.

        public override void WriteBinaryStorage(byte[] fileData, string filePath)
        {
            try
            {
                // Create directory if it does not exist.
                System.IO.FileInfo fileInfo =
                    new System.IO.FileInfo(System.Web.HttpContext.Current.Server.MapPath(filePath)); // the error is caught at this line

                if (!fileInfo.Directory.Exists)
                {
                    fileInfo.Directory.Create();
                }

                // Write the binary content.
                System.IO.File.WriteAllBytes(System.Web.HttpContext.Current.Server.MapPath(filePath), fileData);
            }
            catch (Exception)
            {
                throw;
            }
        }

    When debugging, filePath is "E:\\WEBS\\webapp\\default\\images\\mains\\myimage.jpg", and the error message is: 'E:/WEBS/webapp/default/images/mains/myimage.jpg' is a physical path, but a virtual path was expected. What triggers this is the following call:

        properties.ResizeImage(imageName, Configurations.ConfigSettings.MaxImageSize, Server.MapPath(Configurations.EnvironmentConfig.LargeImagePath));
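
    The exception suggests that filePath is already a physical path by the time it reaches Server.MapPath, because the caller has already applied Server.MapPath to the image directory. One defensive option, sketched below as an assumption about the intended behavior rather than a confirmed fix, is to map the path only when it is not already rooted:

        // Hypothetical helper: map virtual paths, pass physical paths through unchanged.
        private static string ResolvePath(string filePath)
        {
            return System.IO.Path.IsPathRooted(filePath)
                ? filePath
                : System.Web.HttpContext.Current.Server.MapPath(filePath);
        }

    Alternatively, the call site could stop pre-mapping the path and pass the virtual path (e.g. Configurations.EnvironmentConfig.LargeImagePath without the Server.MapPath wrapper) straight through, so that WriteBinaryStorage remains the single place where mapping happens.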

    Read the article

  • asp.net mvc route clashing with physical path in IIS7

    - by Andrew Bullock
    I'm messing about with controller organisation and I've hit a problem. If I have the following physical structure: /Home/HomeController.cs, /Home/Index.aspx, /Home/About.aspx, and I request the URI /Home/Index, I get a 403 Directory Listing Denied :( (I'm using a custom IControllerFactory and IViewEngine to look in this non-default path.) Why is this happening? (I know the 403 is because it's hitting the /Home folder, but why is it hitting the folder?) Why doesn't the UrlRoutingModule rewrite the route and let the controller pick up the request? Application_BeginRequest fires, but then it seems to pass control back to IIS to try and serve from the filesystem. Is it the UrlRoutingModule that defaults to a physical path, if one exists, before rewriting? Is there a way to make this work? N.B. Please don't suggest relocating my controllers etc. I know this is an obvious option, but that isn't the question ;) Using IIS7 in Integrated Mode. Thanks
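
    By default, ASP.NET routing does check whether a request matches a physical file or directory and, if it does, hands the request back to IIS instead of running the route. The sketch below shows the RouteExistingFiles switch that turns that behavior off; the route registration itself is only illustrative of a default MVC setup:

        using System.Web.Mvc;
        using System.Web.Routing;

        public static void RegisterRoutes(RouteCollection routes)
        {
            // Make routing run even when the URL matches a physical file or folder.
            routes.RouteExistingFiles = true;

            routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

            routes.MapRoute(
                "Default",                                             // route name
                "{controller}/{action}/{id}",                          // URL pattern
                new { controller = "Home", action = "Index", id = "" } // defaults
            );
        }

    With RouteExistingFiles enabled you would typically add IgnoreRoute entries for content that should still be served directly from disk (scripts, images, and so on).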

    Read the article

  • NHibernate: Dynamically swapping a single domain model between multiple physical data models

    - by Nigel
    Hi, In this article Ayende describes how to map a single domain model to multiple physical data models. Is it possible to extend this principle such that the mapping can be chosen dynamically? So, for example, imagine we had an entity that could be written to the same physical schema in three ways depending on its current status, and let's assume that regardless of status each entity has a unique identifier. One solution would be to represent the entity in its different states with three separate classes: one for each mapping. The entity could then be loaded and, in order to change its state, mapped to a class representing one of its other states and saved back to the schema, making use of a different mapping. I was wondering if it is at all possible to have the same entity represented by one class that holds a status flag (kind of like a discriminator), where any save to the schema chooses the appropriate mapping based on the value of the status flag. Hopefully that made sense! Many thanks.
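
    NHibernate does allow one class to be mapped more than once by giving each mapping its own entity-name, and the session API accepts that name explicitly, so a status flag can select the mapping at save time. A rough sketch under the assumption that three mappings named "OrderDraft", "OrderActive" and "OrderArchived" have been defined for the same Order class in the .hbm.xml files (all of these names and types are illustrative, not from the question):

        using NHibernate;

        public enum OrderStatus { Draft, Active, Archived }

        public class Order
        {
            public virtual int Id { get; set; }
            public virtual OrderStatus Status { get; set; }
            public virtual string Description { get; set; }
        }

        public class OrderRepository
        {
            private readonly ISession _session;

            public OrderRepository(ISession session)
            {
                _session = session;
            }

            // Pick the mapping (entity-name) from the entity's status flag and save with it.
            public void Save(Order order)
            {
                string entityName;
                switch (order.Status)
                {
                    case OrderStatus.Draft:  entityName = "OrderDraft";    break;
                    case OrderStatus.Active: entityName = "OrderActive";   break;
                    default:                 entityName = "OrderArchived"; break;
                }

                _session.Save(entityName, order);
            }
        }

    Loads and queries can specify the entity name in the same way, though how well this fits depends on whether the differently-mapped states really do share one identifier space as the question assumes.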

    Read the article

  • Backbone.js Model change events in nested collections not firing as expected

    - by Pallavi Kaushik
    I'm trying to use backbone.js in my first "real" application and I need some help debugging why certain model change events are not firing as I would expect.

    I have a web service at /employees/{username}/tasks which returns a JSON array of task objects, with each task object nesting a JSON array of subtask objects. For example,

        [{
          "id":45002,
          "name":"Open Dining Room",
          "subtasks":[
            {"id":1,"status":"YELLOW","name":"Clean all tables"},
            {"id":2,"status":"RED","name":"Clean main floor"},
            {"id":3,"status":"RED","name":"Stock condiments"},
            {"id":4,"status":"YELLOW","name":"Check / replenish trays"}
          ]
        },{
          "id":47003,
          "name":"Open Registers",
          "subtasks":[
            {"id":1,"status":"YELLOW","name":"Turn on all terminals"},
            {"id":2,"status":"YELLOW","name":"Balance out cash trays"},
            {"id":3,"status":"YELLOW","name":"Check in promo codes"},
            {"id":4,"status":"YELLOW","name":"Check register promo placards"}
          ]
        }]

    Another web service allows me to change the status of a specific subtask in a specific task, and looks like this: /tasks/45002/subtasks/1/status/red [aside - I intend to change this to an HTTP POST-based service, but the current implementation is easier for debugging]

    I have the following classes in my JS app. Subtask model and subtask collection:

        var Subtask = Backbone.Model.extend({});
        var SubtaskCollection = Backbone.Collection.extend({ model: Subtask });

    Task model with a nested instance of a subtask collection:

        var Task = Backbone.Model.extend({
          initialize: function() {
            // each Task has a reference to a collection of Subtasks
            this.subtasks = new SubtaskCollection(this.get("subtasks"));
            // status of each Task is based on the status of its Subtasks
            this.update_status();
          },
          ...
        });
        var TaskCollection = Backbone.Collection.extend({ model: Task });

    Task view that renders the item and listens for change events on the model:

        var TaskView = Backbone.View.extend({
          tagName: "li",
          template: $("#TaskTemplate").template(),
          initialize: function() {
            _.bindAll(this, "on_change", "render");
            this.model.bind("change", this.on_change);
          },
          ...
          on_change: function(e) {
            alert("task model changed!");
          }
        });

    When the app launches, I instantiate a TaskCollection (using the data from the first web service listed above), bind a listener for change events to the TaskCollection, and set up a recurring setTimeout to fetch() the TaskCollection instance.

        ...
        TASKS = new TaskCollection();
        TASKS.url = ".../employees/" + username + "/tasks"
        TASKS.fetch({
          success: function() {
            APP.renderViews();
          }
        });
        TASKS.bind("change", function() {
          alert("collection changed!");
          APP.renderViews();
        });
        // Poll every 5 seconds to keep the models up-to-date.
        setInterval(function() {
          TASKS.fetch();
        }, 5000);
        ...

    Everything renders as expected the first time. But at this point, I would expect either (or both) a Collection change event or a Model change event to get fired if I change a subtask's status using my second web service, but this does not happen. Funnily, I did get change events to fire if I added one additional level of nesting, with the web service returning a single object that has the Tasks Collection embedded, for example:

        "employee":"pkaushik",
        "tasks":[{"id":45002,"subtasks":[{"id":1.....

    But this seems klugey... and I'm afraid I haven't architected my app right. I'll include more code if it helps, but this question is already rather verbose. Thoughts?

    Read the article

  • Unit Testing (xUnit) an ASP.NET Mvc Controller with a custom input model?

    - by Danny Douglass
    I'm having a hard time finding information on what I expect to be a pretty straightforward scenario. I'm trying to unit test an action on my ASP.NET MVC 2 controller that utilizes a custom input model with DataAnnotations. My testing framework is xUnit, as mentioned in the title.

    Here is my custom input model:

        public class EnterPasswordInputModel
        {
            [Required(ErrorMessage = "")]
            public string Username { get; set; }

            [Required(ErrorMessage = "Password is a required field.")]
            public string Password { get; set; }
        }

    And here is my controller (took out some logic to simplify for this example):

        [HttpPost]
        public ActionResult EnterPassword(EnterPasswordInputModel enterPasswordInput)
        {
            if (!ModelState.IsValid)
                return View();

            // do some logic to validate input
            // if valid - next view on successful validation
            return View("NextViewName");
            // else - add and display error on current view
            return View();
        }

    And here is my xUnit fact (also simplified):

        [Fact]
        public void EnterPassword_WithValidInput_ReturnsNextView()
        {
            // Arrange
            var controller = CreateLoginController(userService.Object);

            // Act
            var result = controller.EnterPassword(
                new EnterPasswordInputModel { Username = username, Password = password }) as ViewResult;

            // Assert
            Assert.Equal("NextViewName", result.ViewName);
        }

    When I run my test I get the following error on my test fact when trying to retrieve the controller result (Act section): System.NullReferenceException: Object reference not set to an instance of an object. Thanks in advance for any help you can offer!
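
    One thing worth knowing when unit testing MVC 2 actions is that DataAnnotations validation does not run automatically outside the MVC pipeline, so ModelState contains only what the test puts there. Below is a hedged sketch of how the invalid-input case can be simulated by populating ModelState by hand; the CreateLoginController helper, userService mock and username/password fields are assumed from the question:

        [Fact]
        public void EnterPassword_WithMissingPassword_ReturnsDefaultView()
        {
            // Arrange
            var controller = CreateLoginController(userService.Object);

            // Simulate what the model binder would do for a missing required field.
            controller.ModelState.AddModelError("Password", "Password is a required field.");

            // Act
            var result = controller.EnterPassword(
                new EnterPasswordInputModel { Username = username, Password = null }) as ViewResult;

            // Assert - the default view has an empty ViewName.
            Assert.NotNull(result);
            Assert.Equal(string.Empty, result.ViewName);
        }

    Asserting the result is not null before touching ViewName also narrows down the NullReferenceException in the original test: if the NRE comes from the Assert line, the cast to ViewResult produced null; if it comes from inside the action, the usual suspects are dependencies (such as the mocked userService) that the simplified arrangement doesn't fully set up.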

    Read the article

  • Hierarchical object model with property inheritance and event bubbling?

    - by Winston Fassett
    I'm writing a document-based client application and I need a DOM- or WPF-like, but non-visual, model that:

    - Is a tree composed of elements
    - Can accept an unlimited number of custom properties that get/set any CLR type, including collections
    - Can inherit their values from their parent
    - Can inherit their default values from an ancestor
    - Can be derived/calculated from other properties, ancestors, or descendants
    - Supports event bubbling / tunneling

    There will be a core set of properties but other plugins may add their own or even create custom documents. It must support full inspection by the owning document in order to persist the tree and attributes in an XML format.

    I realize that's a tall order but I was really hoping there would be something out there to help me get started. Unfortunately WPF DependencyObjects are too closed, proprietary, and coupled to WPF to be of any use as a document model. My needs also have a strong resemblance to the HTML DOM, but I haven't been able to find any clean DOM implementations that could be decoupled from HTML or ported to .NET. My current platform is .NET/C#, but if anyone knows of anything that might be useful for inspiration or embedding, regardless of the platform, I'd love to know.
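
    For what it's worth, the value-inheritance part of such a model often reduces to a walk up the parent chain. A very reduced sketch (all names hypothetical; no eventing, defaults, or derived values) of an element whose property lookup falls back to its ancestors:

        using System.Collections.Generic;

        // Hypothetical element type: local values win, otherwise the lookup walks up the tree.
        public class Element
        {
            private readonly Dictionary<string, object> _localValues = new Dictionary<string, object>();

            public Element Parent { get; private set; }
            public List<Element> Children { get; private set; }

            public Element(Element parent)
            {
                Parent = parent;
                Children = new List<Element>();
                if (parent != null) parent.Children.Add(this);
            }

            public void SetValue(string property, object value)
            {
                _localValues[property] = value;
            }

            public object GetValue(string property)
            {
                for (var element = this; element != null; element = element.Parent)
                {
                    object value;
                    if (element._localValues.TryGetValue(property, out value))
                        return value;
                }
                return null; // nothing set anywhere up the chain
            }
        }

    Event bubbling and derived values would layer on top of the same parent/child links, which is the part that tends to make a home-grown model grow quickly.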

    Read the article

  • Zend Framework - where do calls to my methods go? Controller or Model?

    - by Joel
    Hi guys, I'm confused about exactly what I should have in my controller and what in my model. Specifically, I have this in the action method:

        public function upcomingshowsAction()
        {
            $gcal = $this->_validateCalendarConnection();
            $uncleanedFeedArray = $this->_getCalendarFeed($gcal);
            $finishedFeedArray = $this->_cleanFeed($uncleanedFeedArray);
            $this->view->googleArray = $finishedFeedArray;
        }

    And then (incorrectly, I know) I have the methods themselves still at the bottom of my controller. So what I'm wondering is: for those methods called in upcomingshowsAction, should all the actual methods just be in one model, and then I'd have something like this:

        public function upcomingshowsAction()
        {
            $finishedFeedArray = new Application_Model_calendarModelPage();
            $this->view->googleArray = $finishedFeedArray;
        }

    And then something like this in the model:

        class Application_Model_CalendarModelPage
        {
            $gcal = $this->_validateCalendarConnection();
            $uncleanedFeedArray = $this->_getCalendarFeed($gcal);
            $finishedFeedArray = $this->_cleanFeed($uncleanedFeedArray);

            public functions
            {
                ...
                ...
                ...
            }
        }

    Am I on the right track here? Thanks!

    Read the article

  • A basic T4 template for generating Model Metadata in ASP.NET MVC2

    - by rajbk
    I have been learning about T4 templates recently by looking at the awesome ADO.NET POCO entity generator. By using the POCO entity generator template as a base, I created a T4 template which generates metadata classes for a given Entity Data Model. This speeds coding by reducing the amount of typing required when creating a view-specific model and its metadata.

    To use this template, download the template provided at the bottom and set two values in the template file. The first one should point to the EDM you wish to generate metadata for. The second is used to suffix the namespace and classes that get generated.

        string inputFile = @"Northwind.edmx";
        string suffix = "AutoMetadata";

    Add the template to your MVC 2 Visual Studio 2010 project. Once you add it, a number of classes will get added to your project based on the number of entities you have. One of these classes is shown below. Note that the DisplayName, Required and StringLength attributes have been added by the T4 template.

        //------------------------------------------------------------------------------
        // <auto-generated>
        //     This code was generated from a template.
        //
        //     Changes to this file may cause incorrect behavior and will be lost if
        //     the code is regenerated.
        // </auto-generated>
        //------------------------------------------------------------------------------

        using System;
        using System.ComponentModel;
        using System.ComponentModel.DataAnnotations;

        namespace NorthwindSales.ModelsAutoMetadata
        {
            public partial class CustomerAutoMetadata
            {
                [DisplayName("Customer ID")]
                [Required]
                [StringLength(5)]
                public string CustomerID { get; set; }

                [DisplayName("Company Name")]
                [Required]
                [StringLength(40)]
                public string CompanyName { get; set; }

                [DisplayName("Contact Name")]
                [StringLength(30)]
                public string ContactName { get; set; }

                [DisplayName("Contact Title")]
                [StringLength(30)]
                public string ContactTitle { get; set; }

                [DisplayName("Address")]
                [StringLength(60)]
                public string Address { get; set; }

                [DisplayName("City")]
                [StringLength(15)]
                public string City { get; set; }

                [DisplayName("Region")]
                [StringLength(15)]
                public string Region { get; set; }

                [DisplayName("Postal Code")]
                [StringLength(10)]
                public string PostalCode { get; set; }

                [DisplayName("Country")]
                [StringLength(15)]
                public string Country { get; set; }

                [DisplayName("Phone")]
                [StringLength(24)]
                public string Phone { get; set; }

                [DisplayName("Fax")]
                [StringLength(24)]
                public string Fax { get; set; }
            }
        }

    The generated class can be used from your project by creating a partial class with the entity name and setting the MetadataType attribute:

        namespace MyProject.Models
        {
            [MetadataType(typeof(CustomerAutoMetadata))]
            public partial class Customer
            {
            }
        }

    You can also copy the code in the generated metadata class and create your own ViewModel class. Note that the template is super basic and does not take into account complex properties. I have tested it with the Northwind database. This is a work in progress; feel free to modify the template to suit your requirements.

    Standard disclaimer follows: Use At Your Own Risk. Works on my machine running VS 2010 RTM/ASP.NET MVC 2. AutoMetaData.zip

    Mr. Incredible: Of course I have a secret identity. I don't know a single superhero who doesn't. Who wants the pressure of being super all the time?

    Read the article

  • ARTS Reference Model for Retail

    - by Sanjeev Sharma
    Consider a hypothetical scenario where you have been tasked to set up retail operations for an electronic goods, daily consumables, or luxury brand etc. It is very likely you will be faced with the following questions:

    1. What are the essential business capabilities that you must have in place?
    2. What are the essential business activities under-pinning each of the business capabilities identified in Step 1?
    3. What is the set of steps that you need to perform to execute each of the business activities identified in Step 2?

    Answers to the above will drive your investments in software and hardware to enable the core retail operations. More importantly, the choices you make in responding to the above questions will have several implications in the short run and in the long run. In the short term, you will incur the time and cost of defining your technology requirements, procuring the software/hardware components and getting them up and running. In the long term, as you grow in operations organically or through M&A, partnerships and franchiser business models, you will invariably need to make more technology investments to manage the greater complexity (scale and scope) of business operations.

    "As new software applications, such as time & attendance, labor scheduling, and POS transactions, just to mention a few, are introduced into the store environment, it takes a disproportionate amount of time and effort to integrate them with existing store applications. These integration projects can add up to 50 percent to the time needed to implement a new software application and contribute significantly to the cost of the overall project, particularly if a systems integrator is called in. This has been the reality that all retailers have had to live with over the last two decades. The effect of the environment has not only been to increase costs, but also to limit retailers' ability to implement change and the speed with which they can do so." (excerpt taken from here)

    Now, one would think a lot of retailers would have already gone through the pain of finding answers to these questions, so why re-invent the wheel? Precisely so: a major effort began almost 17 years ago in the retail industry to make it less expensive and less difficult to deploy new technology in stores and at the retail enterprise level. This effort is called the Association for Retail Technology Standards (ARTS). Without standards such as those defined by ARTS, you would very likely end up experiencing the following:

    - Increased Time and Cost due to resource wastage arising from re-inventing the wheel, i.e. re-creating vanilla processes from scratch, and incurring otherwise avoidable mistakes and errors by ignoring the experience of others
    - Sub-optimal Process Efficiency due to a narrow, isolated view of processes, thereby ignoring process inter-dependencies, i.e. optimizing parts but not the whole, and resulting in lack of transparency and inter-departmental finger-pointing

    Embracing ARTS standards as a blueprint for establishing, managing or streamlining your retail operations can benefit you in the following ways:

    - Improved Time-to-Market from parity with industry best-practice processes, e.g. ARTS, thus avoiding "reinventing the wheel" for common retail processes and focusing more on customizing processes for differentiation, and lowering integration complexity and risk with a standardized vocabulary for exchange between internal and external (i.e. partner) systems
    - Lower Operating Costs by embracing the ARTS enterprise-wide process reference model for developing and streamlining retail operations holistically instead of a narrow, silo-ed view, and procuring IT systems in compliance with ARTS, thus avoiding IT budget marginalization

    While parity with industry standards such as the ARTS business process model does not by itself create a differentiation, it does provide a higher starting point for bridging the strategy-execution gap in setting up and improving retail operations.

    Read the article

  • Moving physical Windows 7 to Hyper-V on Windows 2008 R2

    - by ekamtaj
    Hey guys, I have Windows 7 on a PC, but I want to install Windows 2008 R2 on the computer. I also want to keep Windows 7 around as a VM. Can I use disk2vhd? http://technet.microsoft.com/en-us/sysinternals/ee656415.aspx Can I create a full Windows backup and restore it in Hyper-V? Please let me know what will work best and if you have any other suggestions.

    Read the article

  • Monitoring physical RAM errors on Linux

    - by user40157
    I would like to monitor the RAM of two Linux systems (Ubuntu and Red Hat). I realize I can run memtest86 from boot to diagnose bad RAM, but are there any solutions for monitoring RAM while the system is still running? I'm thinking of a daemon that writes to and reads back from random unused memory. Has anybody seen something like this before?

    Read the article

  • Latency between IIS and SQL on same physical, two VMs

    - by Jerad Rose
    I have a single server (2x4 core CPUs, 32GB ram), that is a Windows Server 2012 Hyper V host, and it hosts two guest VMs (also Windows Server 2012 instances). One of them is a web server, the other is a SQL server. When hitting a page that loops over 50 records, there is noticeable latency. I capture/report the timings of each iteration on the loop, and each iteration is about 20-30 milliseconds. Of course, this amounts to over a second of latency for the whole loop. I thought maybe SQL needed to be tuned, but running profiler on it, the queries are showing almost 0 duration, so it seems the bottleneck is in transit between the two VMs. I have both VMs configured to use the actual NIC (vs. using a VNIC), so maybe that's part of my problem. Also, this is a classic ASP site, so it's using the SQL OLE DB provider, and I'm wondering if that is part of the problem. This is a new server setup, from an existing Windows 2003/IIS6 server setup where both web and DB run on the same server instance (no virtualization). On that setup, there is no such latency when looping over the cursor like this. But there are so many variables, I'm not sure where to start ruling things out.

    Read the article

  • How to know which block device maps to which physical drive

    - by Karolis T.
    I have a server with software RAID 1 and two hot-swap SATA disks. One hard drive started showing errors, and I'm thinking about removing and replacing it; the only problem is that I have no idea which of the two physical drives corresponds to which device, and I can't shut the server down to find out. I have /dev/sda and /dev/sdb; /dev/sda is the failing one. I thought about doing something along the lines of: mdadm --manage /dev/md0 --remove /dev/sda1, then somehow stopping/suspending the drive using tuning software and trying to listen for which of the two stopped, but that's not going to work in a noisy server environment. Drive panels have no LEDs. Thanks for any ideas!

    Read the article

  • Cannot access virtual machine via ping from the physical host machine

    - by Kenni
    I'm installing a FreeBSD server on VirtualBox. I set up the IP address (192.168.10.5) for the virtual server to run a mail server, and the host computer (Windows 7) has 192.168.10.184. The two machines cannot communicate or connect to each other: I cannot ping from the virtual machine to the host and vice versa. The host machine connects to a LAN, and I want the mail server to run from a virtual machine. I think it's a problem with the network configuration of the virtual machine.

    Read the article

  • How would I add a second physical hard drive to proxmox

    - by Cygnus X
    I installed Proxmox on a single 250GB hard drive and I would like to add a second, identical hard drive to put more VMs on. I already tried once and didn't get very far. I added it and formatted it as ext4, but when I went to use the disk, it said only 8GB was available. That's not quite right. So I did some searching and found that I had to set the partition type to 8e for a Linux LVM. After I did this, it said I had to restart, so I did... and it wouldn't boot!!! What did I do wrong? And how do I do it right? (I know I could throw in a RAID card and do a RAID 0, but I'd rather not.)

    Read the article

  • Getting FC11 to run under VMware Server, converted from physical machine

    - by Kristian
    I have a FC11 installation that I have converted to a VMware disk image to run om my VMware Server. I converted it with qemu-img, as the VMware Converter software apparently only converts Linux hosts to VMware Infrastructure servers. The disk image boots fine (grub is loaded and boots the kernel) but it seems like the disk is not found by the kernel, and the boot process stalls. Hotplugging USB devices work (the kernel prints debug information) and I'm able to press keys (Ctrl-Alt-Delete for instance). The VMware guest OS is set to RedHat Enterprise Linux 5 (32 bit), and I have tried both the LSI Logic, LSI Logic SAS and VMware Accelerated SCSI SCSI controllers, to no avail. I'm able to boot an installer disk and get into rescue mode and mount the filesystem, so my question is, what do I need to do to the guest kernel / initrd image to make it recognize the virtual disk?

    Read the article

  • 32 cores (each a physical core) at 2.2 GHz or 12 cores (6 physical cores) at 3.0 GHz?

    - by Tejaswi Rana
    I am working on a multithreaded application (a Forex trading app built on C#) and had the client upgrade from the 12-core 3.0 GHz machine (Intel) to a 32-core 2.2 GHz machine (AMD). The PassMark benchmark results were significantly higher when using multiple cores doing integer, floating-point and other calculations, while for single-core calculations it was a bit slower than the pack (other machines with a similar config to the 12-core one). It also comes with 64 GB RAM (4 times as much as the other one) and a much faster SSD. After configuring and running the application on that machine, not only did it not perform as well, it was significantly slower. We're talking about 30 seconds - 1 minute slower on an app that usually completes processing within 5-20 seconds. The application uses max degree of parallelism (TPL), which I've tried setting to the number of cores and also half of that. I've also tried running single threaded and without setting any limits on parallel threading. While it may be that the hardware has some issues, I am wondering if the CPU processing speed is the issue. I can overclock to 3.0 GHz, but is that even a good idea?

    Server info - AMD: http://www.passmark.com/forum/showthread.php?4013-AMD-Dual-6272-performance-is-60-lower-than-benchmarks (seems that benchmark was wrong to start with - officially). The Intel machine is an i7 3930k. OS (same in both): Windows 7 Professional 64-bit.
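
    For reference, this is roughly how a TPL degree-of-parallelism cap is usually applied; the loop body and the core-count choice below are placeholders, not a claim about what the trading app actually does:

        using System;
        using System.Threading.Tasks;

        class ParallelismExample
        {
            static void Main()
            {
                var options = new ParallelOptions
                {
                    // Cap the number of concurrent iterations, e.g. at half the logical core count.
                    MaxDegreeOfParallelism = Environment.ProcessorCount / 2
                };

                Parallel.For(0, 1000, options, i =>
                {
                    // ... per-item work goes here ...
                });
            }
        }

    Worth noting when comparing caps across the two machines: on the Bulldozer-family Opterons, Environment.ProcessorCount reports 32, but each pair of integer cores shares a single floating-point unit, so a floating-point-heavy workload capped at the logical count behaves quite differently from one capped nearer the module count.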

    Read the article

  • Baseline / Benchmark Physical and virtual server performance

    - by EyeonTech
    I am setting up a new server and there are some options. I want to perform some benchmarks, and I need your help in determining the best tools and, if possible, pre-configured benchmarks designed for SQL servers on Windows Server 2008/2012.

    Step 1. Run a performance monitor on the current live SQL server (a Windows Server 2008 virtual machine running on ESXi).

    New server hardware rundown:

    - Intel® Server System R1304BTLSHBN - 1U Rack, LGA1155 http://ark.intel.com/products/53559/Intel-Server-System-R1304BTLSHBN
    - Intel Xeon E3-1270V2
    - 2x Intel SSD 330 Series 240GB 2.5in SATA 6Gb/s 25nm
    - 1x WD 2TB WD2002FAEX 2TB 64M SATA3 CAVIAR BLACK
    - 4x 8GB 1333MHz DDR3 ECC CL9 DIMM

    There are several options for configurations and I want to benchmark some of them and share the results.

    Option 1. Configure 2x SSDs as RAID 0. Install Windows Server 2008 directly to the 2TB WD Caviar HDD. Store database files on the RAID 0 volume. Benchmark the OS directly on the hardware as an SQL Server. Store SQL backup databases on the 2TB WD Caviar HDD.

    Option 2. Configure 2x SSDs as RAID 0. Install Windows Server 2012 directly to the 2TB WD Caviar HDD. Install Hyper-V. Install the SQL Server (Server 2008) as a virtual machine. Store the virtual hard disks on the SSDs.

    Option 3. Configure 2x SSDs as RAID 0. Install VMware ESXi on a partition of the 2TB WD Caviar HDD. Install the SQL Server (Server 2008) as a virtual machine. Store the virtual hard disks on the SSDs.

    I have a few tools in mind from http://technet.microsoft.com/en-us/library/cc768530(v=bts.10).aspx. Any tools with pre-configured tests would be fantastic, specifically if there are pre-configured perfmon sets available. Any opinions on the setup to gain the best results are welcome. Thanks in advance.

    Read the article
