Search Results

Search found 10115 results on 405 pages for 'coding practices'.

Page 120/405 | < Previous Page | 116 117 118 119 120 121 122 123 124 125 126 127  | Next Page >

  • What lessons can you learn from software maintenance?

    - by Vasil Remeniuk
    Hello everyone. In a perfect world, all software developers would work with cutting-edge technologies, creating systems from scratch. In real life, almost all of us have to maintain software from time to time (the unlucky ones do it on a regular basis). Personally, I spent the first 2 years of my career fixing bugs at a company that no longer exists (it was taken over by Oracle). Probably the biggest lesson I learned in that time: despite the pressure, always try to get as much information about the domain as possible (even if it's irrelevant to fixing a specific bug or adding a feature). Abstract domain knowledge doesn't lose value as fast as knowledge about trendy frameworks or methodologies. What lessons have you learned from maintenance?

    Read the article

  • How to determine which inheriting class is using an abstract class's methods.

    - by Kin
    In my console application I have an abstract factory class "Listener" which contains code for listening and accepting connections, and spawning client classes. This class is inherited by two more classes (WorldListener and MasterListener) that contain more protocol-specific overrides and functions. I also have a helper class (ConsoleWrapper) which encapsulates and extends System.Console, containing methods for writing info to the console about what is happening to instances of WorldListener and MasterListener. I need a way to determine, in the abstract Listener class, which inheriting class is calling its methods. Any help with this problem would be greatly appreciated! I am stumped :X Simplified example of what I am trying to do:

        abstract class Listener
        {
            public void DoSomething()
            {
                if (inheriting class == WorldListener)
                    ConsoleWrapper.WorldWrite("Did something!");
                if (inheriting class == MasterListener)
                    ConsoleWrapper.MasterWrite("Did something!");
            }
        }

        public static class ConsoleWrapper
        {
            public static void WorldWrite(string input)
            {
                System.Console.WriteLine("[World] {0}", input);
            }
        }

        public class WorldListener : Listener
        {
            public void DoSomethingSpecific()
            {
                ConsoleWrapper.WorldWrite("I did something specific!");
            }
        }

        public static void Main()
        {
            new WorldListener();
            new MasterListener();
        }

    Expected output:

        [World] Did something!
        [World] I did something specific!
        [Master] Did something!
        [Master] I did something specific!
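
    One minimal sketch of an answer, reusing the class names from the question: branch on the runtime type of this inside the base class (a cleaner alternative is a protected abstract Write method that each subclass overrides).

        using System;

        public static class ConsoleWrapper
        {
            public static void WorldWrite(string input)  { Console.WriteLine("[World] {0}", input); }
            public static void MasterWrite(string input) { Console.WriteLine("[Master] {0}", input); }
        }

        public abstract class Listener
        {
            public void DoSomething()
            {
                // Check which concrete subclass 'this' actually is at runtime.
                if (this is WorldListener)
                    ConsoleWrapper.WorldWrite("Did something!");
                else if (this is MasterListener)
                    ConsoleWrapper.MasterWrite("Did something!");
            }
        }

        public class WorldListener : Listener { }
        public class MasterListener : Listener { }

        public static class Program
        {
            public static void Main()
            {
                new WorldListener().DoSomething();   // [World] Did something!
                new MasterListener().DoSomething();  // [Master] Did something!
            }
        }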

    Read the article

  • How to merge ecommerce transaction data between two databases

    - by yamspog
    We currently run an ecommerce solution for a leisure and travel company. Every time we have a release, we must bring the ecommerce site down while we update the database schema and the data access code. We are using a custom-built ORM where each data entity is responsible for its own CRUD operations. This is accomplished by dynamically generating the SQL based on attributes in the data entity. For example, the data entity for an address would be:

        [tableName="address"]
        public class address : dataEntity
        {
            [column="address1"]
            public string address1;

            [column="city"]
            public string city;
        }

    So, if we add a new column to the database, we must update the schema of the database and also update the data entity. As you can expect, the business people are not too happy about this outage, as it puts a crimp in their cash flow. The operations people are not happy either, as they have to deal with a high-pressure window when the database and applications are upgraded. The programmers are upset as they are constantly getting in trouble for the legacy system they inherited. Do any of you smart people out there have some suggestions?
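
    As context for the attribute-driven generation described above, a minimal sketch of how such a mapper might build SQL via reflection; the attribute and class names here are hypothetical, since the question's actual ORM types aren't shown.

        using System;
        using System.Linq;
        using System.Reflection;

        [AttributeUsage(AttributeTargets.Class)]
        public class TableNameAttribute : Attribute
        {
            public string Name { get; private set; }
            public TableNameAttribute(string name) { Name = name; }
        }

        [AttributeUsage(AttributeTargets.Field)]
        public class ColumnAttribute : Attribute
        {
            public string Name { get; private set; }
            public ColumnAttribute(string name) { Name = name; }
        }

        [TableName("address")]
        public class Address
        {
            [Column("address1")] public string Address1;
            [Column("city")]     public string City;
        }

        public static class SqlGenerator
        {
            // Builds a parameterised INSERT from the attributes, so adding a column
            // only requires touching the entity and the schema, not hand-written SQL.
            public static string BuildInsert(Type entityType)
            {
                var table = entityType.GetCustomAttribute<TableNameAttribute>().Name;
                var columns = entityType.GetFields()
                    .Select(f => f.GetCustomAttribute<ColumnAttribute>())
                    .Where(a => a != null)
                    .Select(a => a.Name)
                    .ToList();
                var parameters = columns.Select(c => "@" + c);
                return string.Format("INSERT INTO {0} ({1}) VALUES ({2})",
                    table, string.Join(", ", columns), string.Join(", ", parameters));
            }
        }

        // Example: SqlGenerator.BuildInsert(typeof(Address))
        // => "INSERT INTO address (address1, city) VALUES (@address1, @city)"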

    Read the article

  • How to set default values for all wrong or null parameters of a method?

    - by Roman
    At the moment I have this code (and I don't like it):

        private RenderedImage getChartImage(GanttChartModel model, String title,
                                            Integer width, Integer height,
                                            String xAxisLabel, String yAxisLabel,
                                            Boolean showLegend) {
            if (title == null) { title = ""; }
            if (xAxisLabel == null) { xAxisLabel = ""; }
            if (yAxisLabel == null) { yAxisLabel = ""; }
            if (showLegend == null) { showLegend = true; }
            if (width == null) { width = DEFAULT_WIDTH; }
            if (height == null) { height = DEFAULT_HEIGHT; }
            ...
        }

    How can I improve it? I have some thoughts about introducing an object which will contain all these parameters as fields; then, maybe, it'll be possible to apply the builder pattern. But I still don't have a clear vision of how to implement that, and I'm not sure it's worth doing. Any other ideas?

    Read the article

  • What is the difference between using IDisposable vs a destructor in C#?

    - by j0rd4n
    When would I implement IDisposable on a class as opposed to a destructor? I read this article, but I'm still missing the point. My assumption is that if I implement IDisposable on an object, I can explicitly 'destruct' it as opposed to waiting for the garbage collector to do it. Is this correct? Does that mean I should always explicitly call Dispose on an object? What are some common examples of this?
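
    For reference, a sketch of the conventional dispose pattern in C#: Dispose (usually via a using block) gives deterministic cleanup, while the finalizer, written with destructor syntax, is only a garbage-collector-driven safety net.

        using System;

        public class ResourceHolder : IDisposable
        {
            private bool _disposed;

            // Deterministic cleanup: called explicitly or by a 'using' block.
            public void Dispose()
            {
                Dispose(true);
                GC.SuppressFinalize(this); // the finalizer no longer needs to run
            }

            // Finalizer (destructor syntax): a safety net the GC runs
            // only if Dispose was never called.
            ~ResourceHolder()
            {
                Dispose(false);
            }

            protected virtual void Dispose(bool disposing)
            {
                if (_disposed) return;
                if (disposing)
                {
                    // Release managed resources here (other IDisposable fields).
                }
                // Release unmanaged resources (handles, native memory) here.
                _disposed = true;
            }
        }

        // Typical usage: Dispose runs as soon as the block exits.
        // using (var holder = new ResourceHolder()) { /* work */ }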

    Read the article

  • De-normalization for the sake of reports - Good or Bad?

    - by Travis
    What are the pros/cons of de-normalizing an enterprise application database because it will make writing reports easier? Pro - designing reports in SSRS will probably be "easier" since no joins will be necessary. Con - developing/maintaining the app to handle de-normalized data will become more difficult due to duplication of data and synchronization. Others?

    Read the article

  • Passing HttpFileCollectionBase to the Business Layer - Bad?

    - by Terry_Brown
    Hopefully there's an easy solution to this one. I have an MVC2 project which allows uploads of files on certain forms. I'm trying to keep my controllers lean and handle this sort of processing within the business layer. That said, HttpFileCollectionBase is obviously in the System.Web assembly. Ideally I want to call something like:

        UserService.SaveEvidenceFiles(MyUser user, HttpFileCollectionBase files);

    or something similar, and have my business layer handle the logic of how and where these things are saved. But it feels a little icky to have my models layer hold a reference to System.Web, in terms of separation of concerns etc. So we have (that I'm aware of) a few options:

    1. the web project handling this, and my controllers getting fatter;
    2. mapping the HttpFileCollectionBase to something my business layer likes;
    3. passing the collection through, and accepting that I reference System.Web from my business project.

    Would love some feedback here on best-practice approaches to this sort of thing, even if not specifically within the context of the above.
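
    A sketch of option 2 above, mapping the web-specific collection to a plain DTO at the controller boundary; UploadedFile and UploadMapper are hypothetical names, not part of the question.

        using System.Collections.Generic;
        using System.IO;
        using System.Linq;
        using System.Web; // referenced only in the web/controller layer

        // Plain DTO the business layer can accept without referencing System.Web.
        public class UploadedFile
        {
            public string FileName { get; set; }
            public string ContentType { get; set; }
            public Stream Content { get; set; }
        }

        public static class UploadMapper
        {
            // Controller-side translation from the ASP.NET-specific collection to DTOs.
            public static IList<UploadedFile> ToUploadedFiles(HttpFileCollectionBase files)
            {
                return files.AllKeys
                    .Select(key => files[key])
                    .Where(f => f != null && f.ContentLength > 0)
                    .Select(f => new UploadedFile
                    {
                        FileName = Path.GetFileName(f.FileName),
                        ContentType = f.ContentType,
                        Content = f.InputStream
                    })
                    .ToList();
            }
        }

        // In the controller action:
        //     userService.SaveEvidenceFiles(user, UploadMapper.ToUploadedFiles(Request.Files));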

    Read the article

  • Win7: Right place to install a program that may be 'shared' with other computers

    - by robsoft
    We have an app that currently installs itself into 'program files\our app', and it puts the internal data files into the common Application Data folder. This means the program is available to any user on that particular PC. Now we want to make a multi-user version of this program, multiple PCs accessing the program at the same time across the network. In the bad old days, under XP, we'd just have the user who installed the app 'share' the app directory and off we'd go. In principle, is this still the 'right' way to do it under Vista/Windows 7? We'd like to do this 'properly' and be as compliant as possible! Is there a recommended 'Microsoft' approach for doing this, or is it largely down to whatever we can get away with and subsequently support (hah!). I've tried researching this on the MS websites but not found anything too helpful at all - it'd be really useful to have a 'if you're trying to install this kind of thing, put it here' type guide for developers!

    Read the article

  • Where to draw the line between efficiency and practicality

    - by dclowd9901
    I understand very well the need for websites' front ends to be coded and compressed as much as possible, however, I feel like I have more lax standards than others when it comes to practical applications. For instance, while I understand why some would, I don't see anything wrong with putting selectors in the <html> or <body> tags on a website with an expected small visitation rate. I would only do this for a cheap website for a small client, because I can't really justify the cost of time otherwise. So, that said, do you think it's okay to draw a line? Where do you draw yours?

    Read the article

  • Best practice -- Content Tracking Remote Data (cURL, file_get_contents, cron, et al.)?

    - by user322787
    I am attempting to build a script that will log data that changes every second. The initial thought was "just run a PHP file that does a cURL every second from cron", but I have a very strong feeling that this isn't the right way to go about it. Here are my specifications:

    - There are currently 10 sites I need to gather data from and log to a database; this number will invariably increase over time, so the solution needs to be scalable.
    - Each site spits out data to a URL every second, but only keeps 10 lines on the page, and it can sometimes spit out up to 10 lines each time, so I need to pick up that data every second to ensure I get all of it.
    - As I will also be writing this data to my own DB, there's going to be I/O every second of every day for a considerably long time.

    Barring magic, what is the most efficient way to achieve this? It might help to know that the data I am getting every second is very small, under 500 bytes.

    Read the article

  • How do I structure my SQL database (tables, schemas, users, stored procedures, etc.) to prepare it fo

    - by AlexRednic
    I think the title is self explanatory. What I'm looking for is material so I can further my knowledge. I've never developed a full application before so building one from scratch is a bit overwhelming for me. And the first bump in the road is the database. Websites, articles, books, elaborate answers, anything will do as long as they keep me on the right track. Thanks

    Read the article

  • C# Async call garbage collection

    - by Troy
    Hello. I am working on a Silverlight/WCF application and of course have numerous async calls throughout the Silverlight program. I was wondering what the best way is to handle the creation of the client classes and the event subscriptions. Specifically, if I subscribe to an event in a method, does the subscription fall out of scope after the method returns?

        internal class MyClass
        {
            public void OnMyButtonClicked()
            {
                var wcfClient = new WcfClient();
                wcfClient.SomeMethodFinished += OnMethodCompleted;
                wcfClient.SomeMethodAsync();
            }

            private void OnMethodCompleted(object sender, EventArgs args)
            {
                // Do something with the result.
                // After this method, does the subscription to the event
                // fall out of scope for garbage collection?
            }
        }

    Will I run into problems if I call the function again and create another subscription? Thanks in advance to anyone who responds.
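
    One common pattern, sketched below with the names from the question (WcfClient here is only a stand-in for the generated proxy): detach the handler inside the callback so each click's proxy and subscription can be collected once the call completes.

        using System;

        // Stand-in for the generated Silverlight proxy from the question.
        public class WcfClient
        {
            public event EventHandler SomeMethodFinished;

            public void SomeMethodAsync()
            {
                // The real proxy raises this event when the WCF call completes.
                var handler = SomeMethodFinished;
                if (handler != null) handler(this, EventArgs.Empty);
            }
        }

        internal class MyClass
        {
            public void OnMyButtonClicked()
            {
                var wcfClient = new WcfClient();
                wcfClient.SomeMethodFinished += OnMethodCompleted;
                wcfClient.SomeMethodAsync();
            }

            private void OnMethodCompleted(object sender, EventArgs args)
            {
                // Detach so the proxy no longer references this instance; once nothing
                // else references the proxy, it and the subscription can be collected.
                var client = (WcfClient)sender;
                client.SomeMethodFinished -= OnMethodCompleted;

                // ...use the result here...
            }
        }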

    Read the article

  • Robust Large File Transfer with WCF

    - by Sharov
    I want to transfer big files (1 GB) over unreliable transport channels. When the connection is interrupted, I don't want to restart the file transfer from the beginning. I can partially store the file in a temp table and record the last read position, so when the connection is re-established I can request that the upload continue from that position. Is there any best practice for this kind of thing? I'm currently using a chunking channel.
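
    A rough sketch of that resume handshake expressed as a WCF service contract; the interface and operation names below are invented for illustration.

        using System.ServiceModel;

        [ServiceContract]
        public interface IResumableUploadService
        {
            // The client asks how many bytes the server already holds,
            // then resumes sending chunks from that offset.
            [OperationContract]
            long GetReceivedLength(string transferId);

            [OperationContract]
            void AppendChunk(string transferId, long offset, byte[] chunk);

            [OperationContract]
            void CompleteTransfer(string transferId);
        }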

    Read the article

  • Reading ResultSet from multiple threads

    - by superdario
    Hello, In the database I have a definition table that is read from the application once upon starting. This definition table rarely changes, so it makes sense to read it once and restart the application every time it changes. However, after the table is read (put into a ResultSet), it will be read by multiple handlers running in their own threads. How do you suggest accomplishing this? My idea was to populate a CachedRowSet, and then create a copy of this set (through the createCopy() method) for each handler every time a new request comes in. Do you think this is wise? Does this offer good performance? Thanks.

    Read the article

  • Alternatives to checking against the system time

    - by vikp
    Hi, I have an application whose license should expire after some period of time. I can check the time in the application against the system time, but the system time can be changed by the administrator, so in my opinion checking against the system time is not a good idea. What alternatives do I have? Thank you

    Read the article

  • What makes an effective UI for displaying versioning of structured hierarchical data

    - by Fadrian Sudaman
    Traditional version control systems display versioning information by grouping projects, folders, and files, with a tree view on the left and a details view on the right; you then click each item to look at its revision history. Assuming that I have all the historical versioning information for a project available from an object-oriented model perspective (e.g. classes, methods, parameters and so on), what do you think would be the most effective way to present such information in a UI, so that you can easily navigate and access both the snapshot view of the project and the historical versioning information? Put yourself in the position of using a tool like this every day in your job, the way you currently use SVN, SS, Perforce or any other VCS: what would contribute to the usability, productivity and effectiveness of the tool? I personally find the classical way of displaying folders and files described above very restrictive and less effective for displaying deeply nested logical models. Assuming that this is a greenfield project and not restricted to a specific technology, how do you think I should best approach this? I am looking for ideas and input here to add value to my research project. Feel free to make any suggestions that you think are valuable. Thanks again to anyone who shares their thoughts.

    Read the article

  • Good code architecture for this problem?

    - by RCIX
    I am developing a space shooter game with customizable ships. You can increase the strength of any number of properties of the ship via a pair of radar charts*. Internally, I represent each ship as a subclassed SpaceObject class, which holds a ShipInfo that describes various properties of that ship. I want to develop a relatively simple API that lets me feed in a block of relative strengths (from the minimum to the maximum of what the radar chart allows) for all of the ship properties (some of which are simplifications of the underlying actual set of properties) and get back a ShipInfo class I can give to a PlayerShip class (that is the object that is instantiated to be a player ship). I can develop the code to do the transformations between simplified and actual properties myself, but I would like some recommendations as to what sort of architecture to provide to minimize the pain of interacting with this translator code (i.e. no methods with 5+ arguments or suchlike nonsense). Does anyone have any ideas? *=not actually implemented yet, but that's the plan.
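
    One possible shape for that translator API, as a rough sketch only (ShipInfo is the class named in the question, stubbed here; the builder and property names are invented): collect the simplified radar-chart values one at a time and expand them into a ShipInfo in a single Build() step, so no method ever needs a long argument list.

        using System.Collections.Generic;

        // ShipInfo is the class named in the question; stubbed here only so the sketch compiles.
        public class ShipInfo
        {
            public Dictionary<string, float> ActualProperties = new Dictionary<string, float>();
        }

        // Hypothetical translator: gathers simplified radar-chart strengths one at a time,
        // then expands them into a ShipInfo in a single Build() call.
        public class ShipInfoBuilder
        {
            private readonly Dictionary<string, float> _strengths = new Dictionary<string, float>();

            // relativeStrength is the 0..1 value read off the radar chart.
            public ShipInfoBuilder With(string simplifiedProperty, float relativeStrength)
            {
                _strengths[simplifiedProperty] = relativeStrength;
                return this;
            }

            public ShipInfo Build()
            {
                var info = new ShipInfo();
                foreach (var pair in _strengths)
                {
                    // Placeholder: a real implementation would expand one simplified
                    // property into several underlying ship properties.
                    info.ActualProperties[pair.Key] = pair.Value;
                }
                return info;
            }
        }

        // Usage:
        //     var info = new ShipInfoBuilder().With("Speed", 0.8f).With("Armor", 0.3f).Build();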

    Read the article

  • Should Factories Persist Entities?

    - by mxmissile
    Should factories persist entities they build? Or is that the job of the caller? Pseudo example incoming:

        public class OrderFactory
        {
            public Order Build()
            {
                var order = new Order();
                ...
                return order;
            }
        }

        public class OrderController : Controller
        {
            public OrderController(IRepository repository)
            {
                this.repository = repository;
            }

            public ActionResult MyAction()
            {
                var order = factory.Build();
                repository.Insert(order);
                ...
            }
        }

    or

        public class OrderFactory
        {
            public OrderFactory(IRepository repository)
            {
                this.repository = repository;
            }

            public Order Build()
            {
                var order = new Order();
                ...
                repository.Insert(order);
                return order;
            }
        }

        public class OrderController : Controller
        {
            public ActionResult MyAction()
            {
                var order = factory.Build();
                ...
            }
        }

    Is there a recommended practice here?

    Read the article

  • Configuration and Model-View

    - by HH
    I am using the Model-View pattern on a small application I'm writing. Here's the scenario: The model maintains a list of directories from where it can extract the data that it needs. The View has a Configuration or a Setting dialog where the user can modify this list of directories (the dialog has a JList displaying the list in addition to add and remove buttons). I need some advice from the community: The View needs to communicate these changes to the model. I thought first of adding to the model these methods: addDirectory() and removeDirectory(). But I am trying to limit the number of methods (or channels) that the View can use to communicate with and manipulate the model. Is there any good practice for this? Thank you.

    Read the article

  • Best practice to include log4Net external config file in ASP.NET

    - by Martin Buberl
    I have seen at least two ways to include an external log4net config file in an ASP.NET web application:

    1. Having the following attribute in your AssemblyInfo.cs file:

        [assembly: log4net.Config.XmlConfigurator(ConfigFile = "Log.config", Watch = true)]

    2. Calling the XmlConfigurator in Global.asax.cs:

        protected void Application_Start()
        {
            XmlConfigurator.Configure(new FileInfo("Log.config"));
        }

    What would be the best practice to do it?

    Read the article

  • HTTP POST with URL query parameters -- good idea or not?

    - by Steven Huwig
    I'm designing an API to go over HTTP and I am wondering if using the HTTP POST command, but with URL query parameters only and no request body, is a good way to go. Considerations: "Good Web design" requires non-idempotent actions to be sent via POST. This is a non-idempotent action. It is easier to develop and debug this app when the request parameters are present in the URL. The API is not intended for widespread use. It seems like making a POST request with no body will take a bit more work, e.g. a Content-Length: 0 header must be explicitly added. It also seems to me that a POST with no body is a bit counter to most developer's and HTTP frameworks' expectations. Are there any more pitfalls or advantages to sending parameters on a POST request via the URL query rather than the request body? Edit: The reason this is under consideration is that the operations are not idempotent and have side effects other than retrieval. See the HTTP spec: In particular, the convention has been established that the GET and HEAD methods SHOULD NOT have the significance of taking an action other than retrieval. These methods ought to be considered "safe". This allows user agents to represent other methods, such as POST, PUT and DELETE, in a special way, so that the user is made aware of the fact that a possibly unsafe action is being requested. ... Methods can also have the property of "idempotence" in that (aside from error or expiration issues) the side-effects of N 0 identical requests is the same as for a single request. The methods GET, HEAD, PUT and DELETE share this property. Also, the methods OPTIONS and TRACE SHOULD NOT have side effects, and so are inherently idempotent.

    Read the article

  • How to not over-use jQuery?

    - by Fedyashev Nikita
    Typical jQuery over-use:

        $('button').click(function() {
            alert('Button clicked: ' + $(this).attr('id'));
        });

    Which can be simplified to:

        $('button').click(function() {
            alert('Button clicked: ' + this.id);
        });

    Which is way faster. Can you give me any more examples of similar jQuery over-use?

    Read the article

  • What is the procedure for debugging a production-only error?

    - by Lord Torgamus
    Let me say upfront that I'm so ignorant on this topic that I don't even know whether this question has objective answers or not. If it ends up being "not," I'll delete or vote to close the post. Here's the scenario: I just wrote a little web service. It works on my machine. It works on my team lead's machine. It works, as far as I can tell, on every machine except for the production server. The exception that the production server spits out upon failure originates from a third-party JAR file, and is skimpy on information. I search the web for hours, but don't come up with anything useful. So what's the procedure for tracking down an issue that occurs only on production machines? Is there a standard methodology, or perhaps category/family of tools, for this? The error that inspired this question has already been fixed, but that was due more to good fortune than a solid approach to debugging. I'm asking this question for future reference. Some related questions: Test accounts and products in a production system Running test on Production Code/Server

    Read the article

  • What is the recommended way of parsing an XML feed with multiple namespaces with ActionScript 3.0?

    - by dafko
    I have seen the following methods to be used in several online examples, but haven't found any documentation on the recommended way of parsing an XML feed.

    Method 1:

        protected function xmlResponseHandler(event:ResultEvent):void
        {
            var atom:Namespace = new Namespace("http://www.w3.org/2005/Atom");
            var microsoftData:Namespace = new Namespace("http://schemas.microsoft.com/ado/2007/08/dataservices");
            var microsoftMetadata:Namespace = new Namespace("http://schemas.microsoft.com/ado/2007/08/dataservices/metadata");

            var ac:ArrayCollection = new ArrayCollection();
            var keyValuePairs:KeyValuePair;
            var propertyList:XMLList = (event.result as XML)..atom::entry.atom::content.microsoftMetadata::properties;

            for each (var properties:XML in propertyList)
            {
                keyValuePairs = new KeyValuePair(properties.microsoftData::FieldLocation, properties.microsoftData::Locationid);
                ac.addItem(keyValuePairs);
            }
            cb.dataProvider = ac;
        }

    Method 2:

        protected function xmlResponseHandler(event:ResultEvent):void
        {
            namespace atom = "http://www.w3.org/2005/Atom";
            namespace d = "http://schemas.microsoft.com/ado/2007/08/dataservices";
            namespace m = "http://schemas.microsoft.com/ado/2007/08/dataservices/metadata";
            use namespace d;
            use namespace m;
            use namespace atom;

            var ac:ArrayCollection = new ArrayCollection();
            var keyValuePairs:KeyValuePair;
            var propertyList:XMLList = (event.result as XML)..entry.content.properties;

            for each (var properties:XML in propertyList)
            {
                keyValuePairs = new KeyValuePair(properties.FieldLocation, properties.Locationid);
                ac.addItem(keyValuePairs);
            }
            cb.dataProvider = ac;
        }

    Sample XML feed:

        <?xml version="1.0" encoding="iso-8859-1" standalone="yes"?>
        <feed xml:base="http://www.test.com/Test/my.svc/"
              xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices"
              xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata"
              xmlns="http://www.w3.org/2005/Atom">
          <title type="text">Test_Locations</title>
          <id>http://www.test.com/test/my.svc/Test_Locations</id>
          <updated>2010-04-27T20:41:23Z</updated>
          <link rel="self" title="Test_Locations" href="Test_Locations" />
          <entry>
            <id>1</id>
            <title type="text"></title>
            <updated>2010-04-27T20:41:23Z</updated>
            <author>
              <name />
            </author>
            <link rel="edit" title="Test_Locations" href="http://www.test.com/id=1" />
            <category term="MySQLModel.Test_Locations" scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" />
            <content type="application/xml">
              <m:properties>
                <d:FieldLocation>Test Location</d:FieldLocation>
                <d:Locationid>test0129</d:Locationid>
              </m:properties>
            </content>
          </entry>
          <entry>
            <id>2</id>
            <title type="text"></title>
            <updated>2010-04-27T20:41:23Z</updated>
            <author>
              <name />
            </author>
            <link rel="edit" title="Test_Locations" href="http://www.test.com/id=2" />
            <category term="MySQLModel.Test_Locations" scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" />
            <content type="application/xml">
              <m:properties>
                <d:FieldLocation>Yet Another Test Location</d:FieldLocation>
                <d:Locationid>test25</d:Locationid>
              </m:properties>
            </content>
          </entry>
        </feed>

    Read the article
