Search Results

Search found 16547 results on 662 pages for 'physical design'.

Page 160 of 662

  • Linked List Design

    - by Jim Scott
    The other day in a local .NET group I attend, the following question came up: "Is it a valid interview question to ask about linked lists when hiring someone for a .NET development position?" Not having a computer science degree and being a self-taught developer, my response was that I did not feel it was appropriate, since in five years of developing with .NET I had never been exposed to linked lists and did not hear any compelling use case for one. However, the person commented that it is a very common interview question, so I decided that when I left I would do some research on linked lists and see what I might be missing. I have read a number of posts on Stack Overflow and done various Google searches, and decided the best way to learn about them was to write my own .NET classes to see how they work from the inside out.

    Here is my class structure.

    Singly linked list:

        // Constructor
        public SingleLinkedList(object value)

        // Public properties
        public bool IsTail
        public bool IsHead
        public object Value
        public int Index
        public int Count

        // Private fields not exposed through a property
        private SingleNode firstNode;
        private SingleNode lastNode;
        private SingleNode currentNode;

        // Methods
        public void MoveToFirst()
        public void MoveToLast()
        public void Next()
        public void MoveTo(int index)
        public void Add(object value)
        public void InsertAt(int index, object value)
        public void Remove(object value)
        public void RemoveAt(int index)

    Questions I have:

    - What methods would you typically expect in a linked list?
    - What is the typical behaviour when adding new records? For example, if I have 4 nodes and I am currently positioned on the second node and call Add(), should the value be added after or before the current node? Or should it be added to the end of the list?
    - Some of the designs I have seen expose the Node object outside the LinkedList class. In my design you simply add, get, and remove values and know nothing about any node object. Should the head and tail be placeholder objects that are only used to mark the head/tail of the list?
    - I require my linked list to be instantiated with a value, which creates the first node of the list and is essentially both the head and the tail. Would you change that?
    - What should the rules be for removing nodes? Should someone be able to remove all nodes?

    Here is my doubly linked list:

        // Constructor
        public DoubleLinkedList(object value)

        // Properties
        public bool IsHead
        public bool IsTail
        public object Value
        public int Index
        public int Count

        // Private field not exposed via a property
        private DoubleNode currentNode;

        // Methods
        public void AddFirst(object value)
        public void AddLast(object value)
        public void AddBefore(object existingValue, object value)
        public void AddAfter(object existingValue, object value)
        public void Add(int index, object value)
        public void Add(object value)
        public void Remove(int index)
        public void Next()
        public void Previous()
        public void MoveTo(int index)
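
    For comparison, the framework's built-in System.Collections.Generic.LinkedList<T> answers several of these questions by exposing its nodes (LinkedListNode<T>) and by making every insert explicit about position. The sketch below only exercises that existing API:

        using System;
        using System.Collections.Generic;

        class LinkedListDemo
        {
            static void Main()
            {
                // The BCL list is doubly linked and node-based: you position yourself
                // with a node reference rather than an integer index.
                var list = new LinkedList<string>();
                LinkedListNode<string> head = list.AddFirst("head");
                list.AddLast("tail");

                // Inserts are always relative to an existing node, so there is no
                // ambiguity about where a plain Add() should go.
                list.AddAfter(head, "second");
                list.AddBefore(list.Last, "before-tail");

                // Traversal walks the node links; removal works by value or by node.
                for (var node = list.First; node != null; node = node.Next)
                    Console.WriteLine(node.Value);

                list.Remove("second");          // removes the first match by value
                list.Remove(list.Last);         // removes a specific node
                Console.WriteLine(list.Count);  // the list may legally become empty
            }
        }

    For what it's worth, the BCL list treats a plain Add (through ICollection<T>) as AddLast, keeps no notion of a current position or integer Index, and happily lets the list become empty.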


  • How to design this ?

    - by Akku
    How can I make this entire process a single event and draw the chart on a single click? See http://code.google.com/apis/visualization/documentation/dev/dsl_get_started.html. I am new to servlets, so please guide me.

    When a user clicks the "go" button with some input, the data goes to a servlet, say Test3. The servlet processes the data from the user and generates/feeds the data table dynamically. Then I call the HTML page to draw the chart as shown in the tutorial link above. The problem is that when I call the servlet it gives me a long JSON string in the browser, as shown in the tutorials:

        google.visualization.Query.setResponse({version:'0.6',status:'ok',sig:'1333639331',table:{cols:[{............................

    Then, when I manually call the HTML page to draw the chart, I can see the chart. But when I call the HTML page directly via the request dispatcher from the servlet, I don't get the result. This is my code and output; I need a suggestion as to how I should approach calling the chart.

        public class Test3 extends HttpServlet implements DataTableGenerator {

            protected void processRequest(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {
                DataSourceHelper.executeDataSourceServletFlow(request, response, this, isRestrictedAccessMode());
                RequestDispatcher rd;
                // Calls the HTML page which draws the chart from the data added by the servlet
                rd = request.getRequestDispatcher("new.html");
                rd.include(request, response); // forward(request, response);
            }

            @Override
            public Capabilities getCapabilities() {
                return Capabilities.NONE;
            }

            protected boolean isRestrictedAccessMode() {
                return false;
            }

            @Override
            public DataTable generateDataTable(Query query, HttpServletRequest request) {
                // Create a data table.
                DataTable data = new DataTable();
                ArrayList<ColumnDescription> cd = new ArrayList<ColumnDescription>();
                cd.add(new ColumnDescription("name", ValueType.TEXT, "Animal name"));
                cd.add.........

    I get the following result, along with the unprocessed HTML page:

        google.visualization.Query.setResponse({version:'0.6',statu.....
        <html>
        <head>
        <title>Getting Started Example</title>
        ....
        Entire html page as it is on the Browser.

    What I need is this: when a user clicks the go button, the servlet should process the data and call the HTML page to draw the chart, without the JSON string appearing in the browser (all in one user click). What should my approach be, and how should I design this? There are no errors in the code: when I run the servlet I get the JSON string in the browser, and when I then run the HTML page manually I get the chart drawn. So how can I do (servlet processing + HTML page drawing the chart as the final result) in one go, without the long JSON string appearing in the browser? There is no problem with the HTML code.


  • Advice for Architecture Design Logic for software application

    - by Prasad
    Hi, I have a framework of basic to complex objects/classes (C++), running to around 500 of them. With some rules and regulations, all these objects can communicate with each other and hence can cover most of the common queries in the domain.

    My dream: I want to provide these objects as icons/glyphs (as I learnt recently) on a workspace. All these objects can be dragged and dropped onto the workspace. They have to communicate only through their methods (interfaces), plus a few iterative and conditional statements. All these objects are finally arranged to execute a protocol/workflow/dataflow/process. After drawing the flow, the user clicks the Execute/Run button. All the user interaction should be multi-touch enabled. The best way to show my dream is Jeff Han's multitouch video; imagine Jeff playing with my objects instead of Google Maps. It should be like playing a jigsaw puzzle.

    Objective: how can I achieve the following while working towards this final product?
    - The development should be flexible enough to provide for web services.
    - The development should enable easy web application development.
    - The development should enable a client-server architecture.
    - It should also enable a mouse-based drag/drop desktop application, like Adobe programs, etc.
    I mean to say: I want to economize on investment.

    Now I list my design efforts so far:
    - Created an editor (VB) where the user writes (manually) the object/class code.
    - On Run/Execute, the code is copied into a main() function and passed to an interpreter.
    - The output is caught and shown in the console.
    The interpreter can be separated out to become a server, and the editor can become the client. This needs a lot of standard client-server architecture work, but somehow I am not comfortable with the tightness of this system. Without an interpreter, is there a much faster and better embeddable solution to this, other than writing a special compiler for these objects? A friend suggested that Axis C++ might help me; is that the way to go?

    Here are my questions (please consider that I am a self-taught programmer and this is not my domain):
    - From the stage of C++ objects to a multi-touch product, how can I make sure I also develop the parallel product/service models? What architectural aspects should I consider?
    - What technologies are best suited for this?
    - If I am thinking of moving to cloud computing, how difficult, redundant, or unnecessary will my efforts be?
    - How many months would it take to get the first beta?

    I take the liberty to ask: if any of the experts here are interested in this project, please email me: [email protected] Thank you for any help. Looking forward.


  • Design time error - multiple controls with the same Id

    - by ilivewithian
    I'm using VS 2008. I have a very simple page with a bunch of uniquely named controls. When I try to view it in design mode I get the following error:

        Error Rendering Control - Label12
        An unhandled exception has occurred.
        Multiple controls with the same ID 'Label1' were found. FindControl requires that controls have unique IDs.

    I've checked the HTML and the designer file and I can only see one control called Label1. What might be causing this? Also, here is the aspx markup I'm having trouble with:

        <%@ Page Language="vb" AutoEventWireup="false" CodeBehind="CoachingAppearanceReport.aspx.vb" Inherits="AcademyPro.CoachingAppearanceReport" %>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head runat="server">
            <title></title>
        </head>
        <body>
            <form id="form1" runat="server">
                <asp:UpdatePanel ID="UpdatePanel1" runat="server">
                    <ContentTemplate>
                        <div id="appearanceDetail" class="Left CriteriaContainer">
                            <asp:Label ID="Label1" runat="server" Text="Appearance Type" AssociatedControlID="ddlAppearanceType" />
                            <asp:DropDownList ID="ddlAppearanceType" runat="server" CssClass="AppType"
                                OnDataBound="ddlAppearanceType_DataBound" DataSourceID="odsAppearanceType"
                                DataTextField="AppearanceType" DataValueField="AppearanceTypeCode">
                            </asp:DropDownList>
                            <asp:RequiredFieldValidator ID="rfvAppearanceType" runat="server" ControlToValidate="ddlAppearanceType"
                                InitialValue="" Text="*" ErrorMessage="The appearance type must be selected" />
                            <asp:Label ID="lblAppearanceType" runat="server" />
                            <br />
                            <div class="SubSettings">
                                <asp:Label ID="Label12" runat="server" Text="Subbed for" AssociatedControlID="ddlSubbedFor" />
                                <asp:DropDownList ID="ddlSubbedFor" runat="server" OnDataBound="ddlSubbedFor_DataBound"
                                    DataSourceID="odsPlayersInAgeGroup" DataTextField="PlayerName" DataValueField="PlayerID">
                                </asp:DropDownList>
                                <asp:Label ID="lblSubbedFor" runat="server" />
                                <br />
                                <asp:Label ID="Label13" runat="server" Text="Mins" AssociatedControlID="txtSubMins" />
                                <asp:TextBox ID="txtSubMins" runat="server" MaxLength="3" CssClass="TinyWidth" />
                                <asp:Label ID="lblSubMins" runat="server" />
                            </div>
                        </div>
                    </ContentTemplate>
                </asp:UpdatePanel>
            </form>
        </body>
        </html>


  • Parallel Task Library WaitAny Design

    - by colithium
    I've just begun to explore the PTL and have a design question.

    My scenario: I have a list of URLs that each refer to an image. I want each image to be downloaded in parallel. As soon as at least one image is downloaded, I want to execute a method that does something with the downloaded image. That method should NOT be parallelized; it should be serial. I think the following will work but I'm not sure if this is the right way to do it. Because I have separate classes for collecting the images and for doing "something" with the collected images, I end up passing around an array of Tasks, which seems wrong since it exposes the inner workings of how images are retrieved. But I don't know a way around it. In reality there is more to both of these methods, but that's not important here. Just know that they really shouldn't be lumped into one large method that both retrieves and does something with the image.

        Task<Image>[] downloadTasks = collector.RetrieveImages(listOfURLs);
        for (int i = 0; i < listOfURLs.Count; i++)
        {
            // Wait for any of the remaining downloads to complete
            int completedIndex = Task<Image>.WaitAny(downloadTasks);
            Image completedImage = downloadTasks[completedIndex].Result;

            // Now do something with the image (this "something" must happen serially)
        }

        ///////////////////////////////////////////////////

        public Task<Image>[] RetrieveImages(List<string> urls)
        {
            Task<Image>[] tasks = new Task<Image>[urls.Count];
            int index = 0;
            foreach (string url in urls)
            {
                string lambdaVar = url; // Required... Bleh

                tasks[index] = Task<Image>.Factory.StartNew(() =>
                {
                    using (WebClient client = new WebClient())
                    {
                        // TODO: Replace with live image locations
                        string fileName = String.Format("{0}.png", i);
                        client.DownloadFile(lambdaVar, Path.Combine(Application.StartupPath, fileName));
                    }
                    return Image.FromFile(Path.Combine(Application.StartupPath, fileName));
                }, TaskCreationOptions.LongRunning | TaskCreationOptions.AttachedToParent);

                index++;
            }
            return tasks;
        }
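
    For illustration, one way to avoid handing the Task<Image>[] back to the caller is to keep the tasks inside the collector and surface each completed image through a callback that is serialized with a lock. The sketch below is only one possible shape; the ImageCollector name, the download delegate, and the onImageReady callback are hypothetical stand-ins, not part of the original code:

        using System;
        using System.Collections.Generic;
        using System.Drawing;
        using System.Threading.Tasks;

        public class ImageCollector
        {
            // Downloads every URL in parallel, but hands each finished image to
            // onImageReady one at a time, without exposing the Task array to the caller.
            public Task RetrieveImages(IEnumerable<string> urls,
                                       Func<string, Image> download,   // hypothetical: however one URL becomes an Image
                                       Action<Image> onImageReady)     // the serial "do something" step
            {
                object gate = new object();
                var downloads = new List<Task>();

                foreach (string url in urls)
                {
                    string captured = url;                              // avoid capturing the loop variable
                    Task t = Task.Factory
                        .StartNew(() => download(captured))
                        .ContinueWith(antecedent =>
                        {
                            if (antecedent.IsFaulted) return;           // a failed download is simply skipped here
                            lock (gate)                                 // only one image is processed at a time
                            {
                                onImageReady(antecedent.Result);
                            }
                        });
                    downloads.Add(t);
                }

                // The caller only ever sees one aggregate task to wait on (or ignore).
                return Task.Factory.ContinueWhenAll(downloads.ToArray(), _ => { });
            }
        }

    The lock keeps the processing serial but not necessarily in arrival order; if strict ordering matters, a queue drained by a single consumer task would be the next refinement.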


  • Help to argue why to develop software on a physical computer rather than via a remote desktop

    - by s5804
    Remote desktops are great and many times a blessing and cost effective (instead of leasing expensive cables). I am not arguing against remote desktops; just that if one has the alternative to use either a remote desktop or a physical computer, I would choose the latter. Also note that I am not arguing for or against remote work practices, but in my case I am required to be physically present in the office when developing software.

    Background: I work in a company whose main business is not to develop software. Therefore the company IT policies are mainly focused on security and on efficiently deploying and maintaining thousands of computers for users. Further, the typical employee runs typical Office applications, like a word processor. Because safety/stability is such a big priority, every non-production system/application shall be deployed into a physically different network, called the test network. Software development of course also belongs in the test network. To access the test network the company has created a standard policy, which dictates that access to the test network shall go only via a remote desktop client. In practice, from one's production computer one opens a remote desktop client to a virtual computer located in the test network. On the virtual computer's remote desktop one can access, run, and install all development tools, like the Eclipse IDE. The other solution is to have a dedicated physical computer which is physically connected only to the test network. Both solutions are available in the company.

    I have tested both approaches and found running Eclipse IDE and SQL Developer in the remote desktop client to be sluggish (keystrokes are delayed), commands like Alt-Tab take me out of the remote client, and so on. Further, screen resolution and colours are different, just to mention a few issues. So there is nothing technically wrong with the remote client; it is just not optimal, and frankly de-motivating. Now, with the new policies put in place, the plan is to remove the physical computers connected to the test network. I am looking for help to argue why software developers should have a dedicated physical software development computer, to be productive and cost effective. Remember that we are physically in the office. Further, note that we are talking about approx. 50 computers out of 2000 employees, so the extra budget is relatively small; this is more about policy than cost.

    Please note that there are lots of similar setups in other companies that work great thanks to perfectly tuned systems. However, in my case it is sluggish, and it would cost more money to troubleshoot the performance and fine-tune it than to keep a few physical computers. As a business case we have argued that productivity will go down by 25%, though my feeling is that the reality is probably closer to 50%. This business case isn't really accepted, and I find it very difficult to defend it to managers who have never used a rich IDE in their lives, never mind developed software. Further, the test network and remote client have no guaranteed service level; they are down a few hours per month with the lowest priority on the fix list. Help is appreciated.


  • Database design advice needed.

    - by user346271
    Hi all, I'm a lone developer for a telecoms company, and am after some database design advice from anyone with a bit of time to answer.

    I am inserting ~2 million rows each day into one table; these tables then get archived and compressed on a monthly basis. Each monthly table contains ~15,000,000 rows, and this is increasing month on month. For every insert above, I combine the data from rows which belong together and create another "correlated" table. This table is currently not being archived, as I need to make sure I never miss an update to the correlated table (hope that makes sense), although in general this information should remain fairly static after a couple of days of processing. All of the above is working perfectly.

    However, my company now wishes to perform some stats against this data, and these tables are getting too large to provide the results in what would be deemed a reasonable time, even with the appropriate indexes set. So I guess after all of the above my question is quite simple: should I write a script which groups the data from my correlated table into smaller tables, or should I store the query result sets in something like memcache? I'm already using MySQL's query cache, but because I have limited control over how long the data is stored for, it's not working ideally.

    The main advantages I can see of using something like memcache:
    - No blocking on my correlated table after the query has been cached.
    - Greater flexibility in sharing the collected data between the backend collector and the front-end processor (i.e. custom reports could be written in the backend and their results stored in the cache under a key which then gets shared with anyone who wants to see the data of that report).
    - Redundancy and scalability if we start sharing this data with a large number of customers.

    The main disadvantages I can see of using something like memcache:
    - Data is not persistent if the machine is rebooted or the cache is flushed.

    The main advantages of using MySQL:
    - Persistent data.
    - Fewer code changes (although adding something like memcache is trivial anyway).

    The main disadvantages of using MySQL:
    - I have to define table templates every time I want to store a new set of grouped data.
    - I have to write a program which loops through the correlated data and fills these new tables.
    - It will potentially still grow slower as the data continues to be filled.

    Apologies for quite a long question. It's helped me to write down these thoughts here anyway, and any advice/help/experience with dealing with this sort of problem would be greatly appreciated. Many thanks. Alan
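
    For illustration, the memcache option usually takes the shape of a cache-aside lookup keyed by the report parameters: check the cache, on a miss run the expensive grouping query once, then store the result with an expiry so every consumer shares it. In this sketch the ICache interface, ReportService, and ReportRow types are hypothetical stand-ins for whatever client library and schema are actually in use:

        using System;
        using System.Collections.Generic;

        // Hypothetical cache abstraction standing in for a memcache client.
        public interface ICache
        {
            bool TryGet<T>(string key, out T value);
            void Set<T>(string key, T value, TimeSpan timeToLive);
        }

        public class ReportRow { /* shape of one aggregated row */ }

        public class ReportService
        {
            private readonly ICache cache;
            private readonly Func<DateTime, DateTime, IList<ReportRow>> runGroupingQuery; // the slow SQL

            public ReportService(ICache cache, Func<DateTime, DateTime, IList<ReportRow>> runGroupingQuery)
            {
                this.cache = cache;
                this.runGroupingQuery = runGroupingQuery;
            }

            public IList<ReportRow> GetDailyTotals(DateTime from, DateTime to)
            {
                // The key encodes the report and its parameters so any consumer can share the result.
                string key = string.Format("daily-totals:{0:yyyyMMdd}:{1:yyyyMMdd}", from, to);

                IList<ReportRow> rows;
                if (cache.TryGet(key, out rows))
                    return rows;                                  // cache hit: no work on the big table

                rows = runGroupingQuery(from, to);                // cache miss: hit MySQL once
                cache.Set(key, rows, TimeSpan.FromHours(1));      // expiry is under your control, unlike the query cache
                return rows;
            }
        }

    The same key scheme also covers the "custom reports written in the backend" idea: whichever process computes the report stores it under an agreed key, and every front-end consumer just reads it.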


  • Architectural Design for a Data-Driven Silverlight WP7 app

    - by Rosarch
    I have a Silverlight Windows Phone 7 app that pulls data from a public API. I find myself doing much of the same thing over and over again:

    - In the UI, set a loading message or loading progress bar in place of the content.
    - Get the content, which may already be in memory, cached in isolated file storage, or require an HTTP request.
    - If the content cannot be acquired (no network connection, etc.), display an error message.
    - If the content is acquired, display it in the UI.
    - Keep the content in main memory for subsequent queries.

    The content that is displayed to the user can be taken directly from a data source, such as an ObservableCollection, or it may be a query on a data source. I would like to factor this repetitive process out into a framework where, ideally, only the following needs to be specified:

    - Where to display the content in the UI
    - The UI elements to show while loading, on failure, and on success
    - The URI of the HTTP request
    - How to parse the HTTP response into the data structure that will be kept in memory
    - The location of the file in isolated storage, if it exists
    - How to parse the file contents into the data structure that will be kept in memory

    It may sound like a lot, but two strings, three FrameworkElements, and two methods is less than the overhead that I currently have. Also, this needs to work however the data is maintained in memory, and it needs to work for direct collections and for queries on those collections. My questions are: has something like this already been implemented? Are my thoughts about the topic fundamentally wrong in some way?

    Here is a design I'm thinking of. There are two components, a View and a Model. The View is given the FrameworkElements for loading, failure, and success, and a reference to the corresponding Model. The View is a UserControl that is placed somewhere in the UI. The Model is a class that is given the URI for the data, a method for parsing the data, and optionally a filename and how to parse the file. It is responsible for retrieving the data and notifying the View whenever the current status (loading/fail/success) changes. If the data downloaded from the network is different from the cache, the network data takes precedence. When the app closes or is tombstoned, the Model writes the data to the cache. How does that sound?
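
    For illustration, here is a minimal sketch of the Model half of that design, assuming a simple status enum, delegates for the two parse steps, and injected delegates for the cache read and the download. Every name in it is a placeholder, not an existing framework:

        using System;

        public enum LoadStatus { Loading, Failed, Succeeded }

        // One model instance per remote resource: it owns fetching, caching and status.
        public class DataModel<T>
        {
            private readonly Uri requestUri;
            private readonly Func<string, T> parseHttpResponse;
            private readonly string cacheFileName;               // null if nothing is cached
            private readonly Func<string, T> parseCachedFile;

            public T Data { get; private set; }
            public LoadStatus Status { get; private set; }
            public event Action<LoadStatus> StatusChanged;       // the View swaps UI elements on this

            public DataModel(Uri requestUri, Func<string, T> parseHttpResponse,
                             string cacheFileName, Func<string, T> parseCachedFile)
            {
                this.requestUri = requestUri;
                this.parseHttpResponse = parseHttpResponse;
                this.cacheFileName = cacheFileName;
                this.parseCachedFile = parseCachedFile;
            }

            // readCache and download are injected so the sketch stays free of
            // IsolatedStorageFile / WebClient details.
            public void Load(Func<string, string> readCache, Func<Uri, string> download)
            {
                SetStatus(LoadStatus.Loading);
                try
                {
                    if (cacheFileName != null)
                    {
                        string cached = readCache(cacheFileName);
                        if (cached != null) Data = parseCachedFile(cached);
                    }
                    Data = parseHttpResponse(download(requestUri)); // network wins over cache
                    SetStatus(LoadStatus.Succeeded);
                }
                catch (Exception)
                {
                    // Keep cached data if we got any; otherwise report failure.
                    SetStatus(Data != null ? LoadStatus.Succeeded : LoadStatus.Failed);
                }
            }

            private void SetStatus(LoadStatus status)
            {
                Status = status;
                var handler = StatusChanged;
                if (handler != null) handler(status);
            }
        }

    The companion View would then be a UserControl holding the three FrameworkElements and toggling their visibility from the StatusChanged event.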


  • FluentNHibernate Unit Of Work / Repository Design Pattern Questions

    - by Echiban
    Hi all, I think I am at an impasse here. I have an application I built from scratch using FluentNHibernate (ORM) and SQLite (file DB). I have decided to implement the Unit of Work and Repository design patterns. I am at a point where I need to think about the end game, which will start as a WPF Windows app (using MVVM) and eventually add web services / ASP.NET as a UI. Now, I have already created domain objects (entities) for the ORM, and I don't know how I should use them outside of the ORM. My questions about that:

    - Should I use the ORM entity objects directly as Models in MVVM? If yes, do I put business logic (such as "certain values must be positive and be greater than another property") in those entity objects? It is certainly the simpler approach, and the one I am leaning towards right now. However, will there be gotchas that would trash this plan?
    - If the answer above is no, do I then create a new set of classes to implement the business logic and use those as Models in MVVM? How would I deal with the transition between model objects and entity objects? I guess a type converter implementation would work well here.

    Now, I followed this well-written article to implement the Unit of Work pattern. However, because I am using FluentNHibernate instead of plain NHibernate, I had to bastardize the implementation of UnitOfWorkFactory. Here's my implementation:

        using System;
        using FluentNHibernate.Cfg;
        using FluentNHibernate.Cfg.Db;
        using NHibernate;
        using NHibernate.Cfg;
        using NHibernate.Tool.hbm2ddl;

        namespace ELau.BlindsManagement.Business
        {
            public class UnitOfWorkFactory : IUnitOfWorkFactory
            {
                private static readonly string DbFilename;
                private static Configuration _configuration;
                private static ISession _currentSession;
                private ISessionFactory _sessionFactory;

                static UnitOfWorkFactory()
                {
                    // arbitrary default filename
                    DbFilename = "defaultBlindsDb.db3";
                }

                internal UnitOfWorkFactory()
                {
                }

                #region IUnitOfWorkFactory Members

                public ISession CurrentSession
                {
                    get
                    {
                        if (_currentSession == null)
                        {
                            throw new InvalidOperationException(ExceptionStringTable.Generic_NotInUnitOfWork);
                        }
                        return _currentSession;
                    }
                    set { _currentSession = value; }
                }

                public ISessionFactory SessionFactory
                {
                    get
                    {
                        if (_sessionFactory == null)
                        {
                            _sessionFactory = BuildSessionFactory();
                        }
                        return _sessionFactory;
                    }
                }

                public Configuration Configuration
                {
                    get
                    {
                        if (_configuration == null)
                        {
                            Fluently.Configure().ExposeConfiguration(c => _configuration = c);
                        }
                        return _configuration;
                    }
                }

                public IUnitOfWork Create()
                {
                    ISession session = CreateSession();
                    session.FlushMode = FlushMode.Commit;
                    _currentSession = session;
                    return new UnitOfWorkImplementor(this, session);
                }

                public void DisposeUnitOfWork(UnitOfWorkImplementor adapter)
                {
                    CurrentSession = null;
                    UnitOfWork.DisposeUnitOfWork(adapter);
                }

                #endregion

                public ISession CreateSession()
                {
                    return SessionFactory.OpenSession();
                }

                public IStatelessSession CreateStatelessSession()
                {
                    return SessionFactory.OpenStatelessSession();
                }

                private static ISessionFactory BuildSessionFactory()
                {
                    ISessionFactory result = Fluently.Configure()
                        .Database(
                            SQLiteConfiguration.Standard
                                .UsingFile(DbFilename)
                        )
                        .Mappings(m => m.FluentMappings.AddFromAssemblyOf<UnitOfWorkFactory>())
                        .ExposeConfiguration(BuildSchema)
                        .BuildSessionFactory();
                    return result;
                }

                private static void BuildSchema(Configuration config)
                {
                    // this NHibernate tool takes a configuration (with mapping info in)
                    // and exports a database schema from it
                    _configuration = config;
                    new SchemaExport(_configuration).Create(false, true);
                }
            }
        }

    I know that this implementation is flawed, because a few tests pass when run individually, but when all tests are run together it fails for some unknown reason. Whoever wants to help me out with this one, given its complexity, please contact me by private message. I am willing to send some $$$ by PayPal to someone who can address the issue and provide a solid explanation. I am new to ORMs, so any assistance is appreciated.
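
    One detail worth flagging, as a guess rather than a diagnosis: _currentSession and _configuration are static, so every UnitOfWorkFactory instance, and therefore every test, shares them; tests that pass in isolation but fail when run together are a classic symptom of that kind of shared state. Below is a minimal sketch of an instance-scoped variant with the statics removed; it assumes the surrounding unit-of-work types from the question and is not a drop-in replacement:

        using System;
        using NHibernate;

        // Sketch only: same responsibilities, but no static fields, so two factories
        // (or two test fixtures) can never see each other's session.
        public class InstanceScopedUnitOfWorkFactory
        {
            private readonly ISessionFactory sessionFactory;
            private ISession currentSession;

            public InstanceScopedUnitOfWorkFactory(ISessionFactory sessionFactory)
            {
                this.sessionFactory = sessionFactory;   // built once by the caller, passed in
            }

            public ISession CurrentSession
            {
                get
                {
                    if (currentSession == null)
                        throw new InvalidOperationException("Not inside a unit of work.");
                    return currentSession;
                }
            }

            public IDisposable Create()
            {
                currentSession = sessionFactory.OpenSession();
                currentSession.FlushMode = FlushMode.Commit;
                return currentSession;                   // dispose it when the unit of work ends
            }
        }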


  • Core Data Model Design Question - Changing "Live" Objects also Changes Saved Objects

    - by mwt
    I'm working on my first Core Data project (on iPhone) and am really liking it. Core Data is cool stuff. I am, however, running into a design difficulty that I'm not sure how to solve, although I imagine it's a fairly common situation. It concerns the data model. For the sake of clarity, I'll use an imaginary football game app as an example to illustrate my question.

    Say that there are managed objects (NSManagedObject subclasses) called Downs and Plays. Plays function like templates to be used by Downs. The user creates Plays (for example, Bootleg, Button Hook, Slant Route, Sweep, etc.) and fills in the various properties. Plays have a to-many relationship with Downs. For each Down, the user decides which Play to use. When the Down is executed, it uses the Play as its template. After each down is run, it is stored in history. The program remembers all the Downs ever played. So far, so good. This is all working fine.

    The question I have concerns what happens when the user wants to change the details of a Play. Let's say it originally involved a pass to the left, but the user now wants it to be a pass to the right. Making that change, however, not only affects all future executions of that Play, but also changes the details of the Plays stored in history. The record of Downs gets "polluted", in effect, because the Play template has been changed. I have been rolling around several possible fixes to this situation, but I imagine the geniuses of SO know much more about how to handle this than I do. Still, the potential fixes I've come up with are:

    1) "Versioning" of Plays. Each change to a Play template actually creates a new, separate Play object with the same name (as far as the user can tell). Underneath the hood, however, it is actually a different Play. This would work, AFAICT, but seems like it could potentially lead to a wild proliferation of Play objects, especially if the user keeps switching back and forth between several versions of the same Play (creating object after object each time the user switches). Yes, the app could check for pre-existing, identical Plays, but... it just seems like a mess.

    2) Have Downs, upon saving, record the details of the Play they used, but not as a Play object. This just seems ridiculous, given that the Play object is there to hold just those details.

    3) Recognize that Play objects are actually fulfilling two functions: one to be a template for a Down, and the other to record which template was used. These two functions have a different relationship with a Down. The first (template) has a to-many relationship, but the second (record) has a one-to-one relationship. This would mean creating a second entity, something like "Play-Template", which would retain the to-many relationship with Downs. Play objects would be reconfigured to have a one-to-one relationship with Downs. A Down would use a Play-Template object for execution, but use the new kind of Play object to store which template was used. It is this change from a to-many relationship to a one-to-one relationship that represents the crux of the problem.

    Even writing this question out has helped me get clearer. I think something like solution 3 is the answer. However, if anyone has a better idea, or even just confirmation that I'm on the right track, that would be helpful. (Remember, I'm not really making a football game; it's just faster/easier to use a metaphor everyone understands.) Thanks.


  • Object validator - is this good design?

    - by neo2862
    I'm working on a project where the API methods I write have to return different "views" of domain objects, like this:

        namespace View.Product
        {
            public class SearchResult : View
            {
                public string Name { get; set; }
                public decimal Price { get; set; }
            }

            public class Profile : View
            {
                public string Name { get; set; }
                public decimal Price { get; set; }

                [UseValidationRuleset("FreeText")]
                public string Description { get; set; }

                [SuppressValidation]
                public string Comment { get; set; }
            }
        }

    These are also the arguments of setter methods in the API, which have to be validated before storing them in the DB. I wrote an object validator that lets the user define validation rulesets in an XML file and checks whether an object conforms to those rules:

        [Validatable]
        public class View
        {
            [SuppressValidation]
            public ValidationError[] ValidationErrors
            {
                get { return Validator.Validate(this); }
            }
        }

        public static class Validator
        {
            private static Dictionary<string, Ruleset> Rulesets;

            static Validator()
            {
                // read rulesets from xml
            }

            public static ValidationError[] Validate(object obj)
            {
                // check if obj is decorated with ValidatableAttribute
                // if not, return an empty array (successful validation)
                // iterate over the properties of obj
                // - if the property is decorated with SuppressValidationAttribute, continue
                // - if it is decorated with UseValidationRulesetAttribute, use the ruleset
                //   specified to call Validate(object value, string rulesetName, string fieldName)
                // - otherwise, get the name of the property using reflection and
                //   use that as the ruleset name
            }

            private static List<ValidationError> Validate(object obj, string fieldName, string rulesetName)
            {
                // check if the ruleset exists, if not, throw exception
                // call the ruleset's Validate method and return the results
            }
        }

        public class Ruleset
        {
            public Type Type { get; set; }
            public Rule[] Rules { get; set; }

            public List<ValidationError> Validate(object property, string propertyName)
            {
                // check if property is of type Type
                // if not, throw exception
                // iterate over the Rules and call their Validate methods
                // return a list of their return values
            }
        }

        public abstract class Rule
        {
            public Type Type { get; protected set; }
            public abstract ValidationError Validate(object value, string propertyName);
        }

        public class StringRegexRule : Rule
        {
            public string Regex { get; set; }

            public StringRegexRule()
            {
                Type = typeof(string);
            }

            public override ValidationError Validate(object value, string propertyName)
            {
                // see if Regex matches value and return
                // null or a ValidationError
            }
        }

    Phew... thanks for reading all of this. I've already implemented it and it works nicely, and I'm planning to extend it to validate the contents of IEnumerable fields and other fields that are Validatable. What I'm particularly concerned about is that if no ruleset is specified, the validator tries to use the name of the property as the ruleset name. (If you don't want that behavior, you can use [SuppressValidation].) This makes the code much less cluttered (no need to use [UseValidationRuleset("something")] on every single property), but it somehow doesn't feel right. I can't decide whether it's awful or awesome. What do you think? Any suggestions on the other parts of this design are welcome too. I'm not very experienced and I'm grateful for any help. Also, is "Validatable" a good name? To me it sounds pretty weird, but I'm not a native English speaker.
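
    To make the naming convention concrete, here is a hypothetical caller for the Profile view above. It assumes the XML file happens to define rulesets named "Name", "Price" and "FreeText"; nothing in it is part of the posted code:

        using System;

        class ValidatorUsageSketch
        {
            static void Main()
            {
                var profile = new View.Product.Profile
                {
                    Name = "Widget",            // checked against a "Name" ruleset, found by property name
                    Price = -1m,                // checked against a "Price" ruleset, found by property name
                    Description = "free text",  // checked against the explicitly named "FreeText" ruleset
                    Comment = "anything goes"   // [SuppressValidation]: never checked
                };

                foreach (ValidationError error in profile.ValidationErrors)
                {
                    Console.WriteLine(error);   // inspect or log each failure
                }
            }
        }

    Written out like this, the implicit name-to-ruleset lookup reads reasonably; the main risk is that renaming a property silently changes which ruleset applies, which argues for failing loudly (as the private Validate overload already does) whenever no matching ruleset exists.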


  • Tips and Suggestions IP Address Re-Addressing?

    - by RSXAdmin
    Hello serverfault Universe,

    My ever-evolving and expanding local area network currently uses class C (192.168.x.x) addressing. The network consists of multiple subnets depending on site/location:

    - 192.168.1.x is site HQ
    - 192.168.5.x is a secondary site
    - 192.168.10.x and so on and so forth

    Long story short: I have inherited this network design from the previous admin, who has left the company. The company started off with a dozen people and now has just over 300 full-time/part-time employees. We do not yet have client VPN access, but we do have site-to-site VPN set up. My question is: in preparation for outside client access to my network via a Cisco ASA, I would like to re-address the HQ site, because I understand 192.168.1.x or 192.168.0.x are not very good choices for a company subnet; they may conflict with a home user's LAN when connecting to my LAN, I believe? From your experience, does anyone have suggestions and tips on how I can proceed with re-addressing my subnets? If I had designed this network I would have gone with 10.0.0.0 (mask 255.255.255.0), so I am leaning towards changing it to fit. Thank you.


  • Designing a software based load balancer

    - by Kishore pandey
    Hello to all Server Fault users. I am new to this website but have constantly been using the mother site, Stack Overflow.

    To begin with, I would like to design a load balancer for the organization I am working for. As I am very new to the whole idea of load balancing and networks, I am finding it very difficult to start my project. I did a lot of research on existing load balancers and found some (HAProxy, NGINX) that could solve my problems, but I am still in a dilemma as to whether they can answer the following requirements of mine:

    - The client and server in my architecture are distributed.
    - The load balancer should take care of the firewall.
    - The LB server should balance the load among all servers present in the WWW cloud.
    - The LB server should have some sort of configuration file with which it is possible to configure the servers.
    - Heartbeat: a way to check whether any server is down; if a server is down, the request should be passed to some other server.
    - Various load-balancing algorithms for the incoming requests.
    - Easy error handling.
    - It should be fairly possible to prioritize the incoming requests.

    Is there any already available load balancer solution on the market that could satisfy these requirements? If not, is there any base code available with which I could develop my own load balancer? If not, where should I start from scratch? I am practically new to everything. Any help from a load balancer expert is very much appreciated. Thanks a ton in advance. Cheers and regards, Kishore
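
    For illustration, here is a minimal C# sketch of the two pieces most of those requirements reduce to: a pool that tracks which backends are alive via a heartbeat, and a strategy that picks the next live backend per request. Everything in it is hypothetical scaffolding, not code taken from HAProxy or NGINX:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Net.Sockets;

        public class Backend
        {
            public string Host;
            public int Port;
            public volatile bool IsAlive = true;
        }

        public class BackendPool
        {
            private readonly List<Backend> backends;
            private int next; // round-robin cursor

            public BackendPool(IEnumerable<Backend> servers) { backends = servers.ToList(); }

            // Heartbeat: a plain TCP connect per server; anything that fails is taken out of rotation.
            public void CheckHealth(TimeSpan timeout)
            {
                foreach (var b in backends)
                {
                    try
                    {
                        using (var client = new TcpClient())
                        {
                            var connect = client.ConnectAsync(b.Host, b.Port);
                            b.IsAlive = connect.Wait(timeout);   // false if the connect did not finish in time
                        }
                    }
                    catch (Exception) { b.IsAlive = false; }
                }
            }

            // Round-robin over live servers; other algorithms (least connections, weighted,
            // priority-aware) would replace just this method.
            public Backend PickNext()
            {
                var alive = backends.Where(b => b.IsAlive).ToList();
                if (alive.Count == 0) throw new InvalidOperationException("no healthy backends");
                next = (next + 1) % alive.Count;
                return alive[next];
            }
        }

    A production balancer would also need the proxying itself (accepting the client connection and streaming bytes to the chosen backend), which is exactly the part HAProxy and NGINX already do well, so building on one of them and writing only configuration is usually the cheaper path.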


  • .NET interview, code structure and the design

    - by j_lewis
    I was given the .NET question below in an interview. I don't know why I got low marks; unfortunately I did not get any feedback.

    Question: The file hockey.csv contains the results from the Hockey Premier League. The columns 'For' and 'Against' contain the total number of goals scored for and against each team in that season (so Alabama scored 79 goals against opponents, and had 36 goals scored against them). Write a program to print the name of the team with the smallest difference in 'for' and 'against' goals. The structure of hockey.csv looks like this (it is a valid CSV file; I just copied the values here to give an idea):

        Team        For  Against
        Alabama     79   36
        Washinton   67   30
        Indiana     87   45
        Newcastle   74   52
        Florida     53   37
        New York    46   47
        Sunderland  29   51
        Lova        41   64
        Nevada      33   63
        Boston      30   64
        Nevada      33   63
        Boston      30   64

    Solution:

        class Program
        {
            static void Main(string[] args)
            {
                string path = @"C:\Users\<valid csv path>";
                var resultEvaluator = new ResultEvaluator(string.Format(@"{0}\{1}", path, "hockey.csv"));
                var team = resultEvaluator.GetTeamSmallestDifferenceForAgainst();
                Console.WriteLine(string.Format(
                    "Smallest difference in ‘For’ and ‘Against’ goals > TEAM: {0}, GOALS DIF: {1}",
                    team.Name, team.Difference));
                Console.ReadLine();
            }
        }

        public interface IResultEvaluator
        {
            Team GetTeamSmallestDifferenceForAgainst();
        }

        public class ResultEvaluator : IResultEvaluator
        {
            private static DataTable leagueDataTable;
            private readonly string filePath;
            private readonly ICsvExtractor csvExtractor;

            public ResultEvaluator(string filePath)
            {
                this.filePath = filePath;
                csvExtractor = new CsvExtractor();
            }

            private DataTable LeagueDataTable
            {
                get
                {
                    if (leagueDataTable == null)
                    {
                        leagueDataTable = csvExtractor.GetDataTable(filePath);
                    }
                    return leagueDataTable;
                }
            }

            public Team GetTeamSmallestDifferenceForAgainst()
            {
                var teams = GetTeams();
                var lowestTeam = teams.OrderBy(p => p.Difference).First();
                return lowestTeam;
            }

            private IEnumerable<Team> GetTeams()
            {
                IList<Team> list = new List<Team>();
                foreach (DataRow row in LeagueDataTable.Rows)
                {
                    var name = row["Team"].ToString();
                    var @for = int.Parse(row["For"].ToString());
                    var against = int.Parse(row["Against"].ToString());
                    var team = new Team(name, against, @for);
                    list.Add(team);
                }
                return list;
            }
        }

        public interface ICsvExtractor
        {
            DataTable GetDataTable(string csvFilePath);
        }

        public class CsvExtractor : ICsvExtractor
        {
            public DataTable GetDataTable(string csvFilePath)
            {
                var lines = File.ReadAllLines(csvFilePath);
                string[] fields;
                fields = lines[0].Split(new[] { ',' });
                int columns = fields.GetLength(0);
                var dt = new DataTable();

                // always assume 1st row is the column name.
                for (int i = 0; i < columns; i++)
                {
                    dt.Columns.Add(fields[i].ToLower(), typeof(string));
                }

                DataRow row;
                for (int i = 1; i < lines.GetLength(0); i++)
                {
                    fields = lines[i].Split(new char[] { ',' });
                    row = dt.NewRow();
                    for (int f = 0; f < columns; f++)
                        row[f] = fields[f];
                    dt.Rows.Add(row);
                }
                return dt;
            }
        }

        public class Team
        {
            public Team(string name, int against, int @for)
            {
                Name = name;
                Against = against;
                For = @for;
            }

            public string Name { get; private set; }
            public int Against { get; private set; }
            public int For { get; private set; }

            public int Difference
            {
                get { return (For - Against); }
            }
        }

    Output:

        Smallest difference in ‘For’ and ‘Against’ goals > TEAM: Boston, GOALS DIF: -34

    Can someone please review my code and see whether there is anything obviously wrong here? They were only interested in the structure/design of the code and whether the program produces the correct result (i.e. the lowest difference). Much appreciated. (P.S. Please correct me if the ".net-interview" tag is not the right tag to use.)
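
    One thing a reviewer might have flagged, offered here as a guess at the marking rather than a known answer: OrderBy(p => p.Difference).First() returns the most negative goal difference (Boston at -34), whereas "smallest difference" arguably means the smallest gap between For and Against, which would be New York (46 vs 47). A compact sketch of that reading, using LINQ directly over the file and assuming the same hockey.csv layout:

        using System;
        using System.IO;
        using System.Linq;

        class SmallestGoalDifference
        {
            static void Main()
            {
                var closest = File.ReadAllLines("hockey.csv")
                    .Skip(1)                                  // skip the header row
                    .Select(line => line.Split(','))
                    .Select(f => new
                    {
                        Team = f[0],
                        Difference = int.Parse(f[1]) - int.Parse(f[2])
                    })
                    .OrderBy(t => Math.Abs(t.Difference))     // smallest gap, regardless of sign
                    .First();

                Console.WriteLine("{0} ({1})", closest.Team, closest.Difference);
            }
        }

    Whether the interviewer wanted the signed or the absolute reading is ambiguous from the question text, but stating the interpretation explicitly would have been a cheap way to cover both.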


  • why would you create two different subnets on the same physical network?

    - by xirtyllo
    I'm working at a messy location. One of the strange (for me) things is that on the same physical network there are two different subnets. Specifically, some computers have 10.0.0.0/24 addresses and others have 172.16.0.0/24 addresses. There is only one DHCP server, which hands out IPs in the 10.0.0.0/24 range, and there are two internet gateways, one with IP 172.16.0.1 and one with IP 10.0.0.1. To give an example, I can easily swap one PC from one subnet to the other just by changing its IP and gateway settings. I am trying to work out why they built the network this way, and what the possible advantages and/or drawbacks of having two different subnets on the same physical network might be. Any thoughts?


  • Any reason not to disable the Windows pagefile given enough physical RAM?

    - by Evgeny
    The question of disabling the Windows pagefile has already been discussed quite a bit, for example here and here and here. People continue to upvote answers that say "you should not disable your pagefile even if you have plenty of RAM", but I have yet to see any concrete, verifiable reasons given for this advice. As far as I can see, if you never need to read from the pagefile (because you have enough RAM) then performance could only be worse with it enabled, due to Windows pre-emptively writing to it. At best, performance would be the same. I can't see how it could possibly be improved by writing data you never need to read.

    So my question is: assuming that I have enough physical RAM for everything I do, is there any reason I should not disable the pagefile? Let's say the version of Windows is Windows XP x64 SP2 or Windows Server 2003 x64 SP2 (same thing). If it's different for Windows Server 2008 x64, I'd be interested to hear an answer for that as well. I'm looking for specific, objective reasons from good sources, not just opinions. Something like "here are the benchmarks done with and without a pagefile and the results were better with a pagefile, even with enough RAM" or "according to this MS KB article, problem X occurs if you disable the pagefile".

    So far the only reasons I've seen mentioned are:

    - Even if you think you have enough RAM, you might run out. OK, but for the purposes of this question, let's just take it as a given that I have enough. Maybe I only ever read my email and I have 16GB RAM. Or 128GB. Or 1TB. Or whatever, but it's enough for 100% of what I do, 100% of the time. Another way to think of it: if I have x MB physical RAM and y MB pagefile and I never run out of RAM in that configuration, would I not be better off, performance-wise, with x+y MB physical RAM and no pagefile?
    - Windows is "used to" having a paging file and it might not function as reliably (from "Understanding the Impact of RAM on Overall System Performance"). That's rather vague and I find it hard to believe, given that MS has provided the option to disable the pagefile.
    - Windows knows what it's doing better than you. No, it doesn't know that I won't run more programs or load more data, but I do.


  • Hyper-V virtual machine unable to get IP address from DHCP server running on same physical box

    - by Bronumski
    We have a Windows Server 2008 R2 box with two network cards, running AD, DHCP, DNS and Hyper-V. The first NIC is set up with a static IP address, and DHCP, WDS and DNS are bound to it. The second NIC is configured in Hyper-V to be used only by Hyper-V, and has been automatically configured so that only the virtual switch is enabled on the adapter. DHCP and DNS work fine for all physical machines on the network. They also work for virtual machines running on another physical box. Virtual machines that are bound to the virtual switch network adapter, however, are unable to get an IP address. If a virtual machine is given a static IP address with the correct subnet, gateway and DNS, everything works. Has anyone else got this working?


  • Can KVM CPU assignment count differ from physical hosts CPU count?

    - by javano
    I have read this question. I already knew that I could, for example, have a quad-core machine with four guests, each having two vCPUs. As they won't all require 100% CPU usage all the time, the scheduler handles this for me. My question is about how this relates to a fail-over or migration situation. If host1 has two dual-core CPUs, and I assign guest1 four vCPUs (so it accesses all four physical cores), what will happen if I try to migrate it to host2, which only has one dual-core CPU? Can qemu-kvm emulate more vCPUs than there are physical cores? Or would I have to shut down the virtual machine, change the CPU assignment, migrate it, and then boot it back up (so no live migration)? Many thanks.


  • How to make a DHCP server on a virtual machine serve other virtual machines (on different physical machines)?

    - by Tony
    I'm building a virtual cluster with VirtualBox and openSUSE. I have 10 physical machines and need several VMs on each. The virtual machines are supposed to be in a "private" network, but still have internet access. I was asked to set up a virtual head node working as the DHCP server. I installed a DHCP server on the virtual head node and it seems to work. In VirtualBox I gave the head node two network adapters, one bridged adapter and one internal network adapter. One VM on the same physical machine has its NIC set to the internal network adapter. That VM can get an IP address (so DHCP works) but can't access the internet. What should I do? Specifically, which network adapter types should I choose for the head node and the work nodes in VirtualBox, and what should I configure inside the virtual machines?


  • Is it possible to define a virtual directory in IIS and make the files relative to the physical dir

    - by Mikey John
    Is it possible to define a virtual directory in IIS and somehow make the files in that directory relative to the physical directory and not to the virtual directory? For instance, on my server I have the following folders: D:\WebSite\Css\myTheme.css and D:\WebSite\Images\image1.jpg. I created a virtual directory on IIS, resources.mysite. Inside my website I reference the stylesheet like this: resources.mysite/myTheme.css. But inside myTheme.css I reference pictures via ../Images/images1.jpg. So the problem is that image1.jpg is not found, because it is resolved relative to the virtual folder on IIS and not the physical folder. Can I solve this problem without modifying the style sheet?


  • What hardware is at physical address 0x80000000 on powerpc New World Macintosh?

    - by tinkerer
    The Open Firmware device tree gives no clue as to what device might decode at physical addresses 0x80000000 to 0x8008200 on a G4 New World Macintosh. The MMU has three adjacent Virtual=Real translations for that block. They are the only address translations reported between the top of physical DRAM at 0x20000000 and the start of the PCI bridges at 0xf0000000. (A possible clue is that frame-buffer-addr is reported as 0x9c008000 by Open Firmware, and that is not in the reported translation table either.) I believe the architecture has been around since about 1999.


  • Does HyperV allow binding physical NIC on virtual machine with promiscuous mode?

    - by MadBoy
    I have Hyper-V installed on Windows Server 2008 R2 Enterprise, with some virtual servers running, and I wanted to use ISA or ntop to monitor traffic. I've added an additional physical NIC to the server and wanted to use this NIC for traffic monitoring (I've enabled port mirroring on the switch). On the physical machine that runs Hyper-V I can see a lot of traffic arriving at the NIC, so port mirroring works fine. However, in the virtual machine, even though I've assigned that NIC to this one server only, it sees 0 packets. In VMware Workstation it worked without problems and I could see the mirrored traffic in the VM. Should this be possible, or is Hyper-V crippled here?


  • Can IP v4 and IP v6 share a single physical Ethernet?

    - by sleske
    I keep reading about the transition from IP v4 to IP v6, and the possible advantages and problems. One thing that keeps popping up is "dual-stack" networking, meaning (I believe) a host can speak both IPv4 and IPv6. I don't quite understand how this works, however. Can a host actually transmit using IPv4 and IPv6 at the same time over the same physical Ethernet (like e.g. HTTP and FTP can be used simultaneously)? Or is the physical network strictly IPv4 or IPv6, with the "other" protocol sent via tunneling?


  • Is virtual machine slower than the underlying physical machine?

    - by Michal Illich
    This question is quite general, but most specifically I'm interested in knowing whether a virtual machine running Ubuntu Enterprise Cloud will be any slower than the same physical machine without any virtualization. How much (1%, 5%, 10%)? Has anyone measured the performance difference of a web server or DB server (virtual vs. physical)? If it depends on configuration, let's imagine two quad-core processors, 12 GB of memory and a bunch of SSD disks, running a 64-bit Ubuntu enterprise server. On top of that, just one virtual machine allowed to use all the resources available.


  • How to know if a server is physical or virtual? [duplicate]

    - by tachomi
    This question already has an answer here: "Is there any way to know if your supposedly fully dedicated server is really a virtually resource-shared machine?" [duplicate] (5 answers)

    How can I know whether my provider is giving me a dedicated physical server or virtual servers? The provider is in another country, so I don't have the chance to go and see the servers myself; all of them run Linux and are managed over SSH. All the hiring has been done over the internet or by phone. Are there any commands or directories to analyze, or any other way to get this information? They tell me, of course, that the servers are physical, but when they do support work or hardware/software upgrades, that kind of thing doesn't take long. For example, when I ask for a new server with specific requirements, in the end they give me more infrastructure than I asked for. Of course, no one gives extra things without an extra price. I hope I have given enough hints.

