Search Results

Search found 30894 results on 1236 pages for 'best practice'.


  • Best practices for fixed-width processing in .NET

    - by jmgant
    I'm working on a .NET web service that will be processing a text file with a relatively long, multilevel record format. Each record in the file represents a different entity, and each record contains multiple sub-types. (The same record format is currently processed by a COBOL job, if that gives you a better picture of what we're looking at.) I've created a class structure (a DATA DIVISION, if you will) to hold the input data. My question is: what best practices have you found for processing large, complex fixed-width files in .NET? My general approach will be to read the entire line into a string and then parse the data from the string into the classes I've created. But I'm not sure whether I'll get better results working with the characters in the string as an array or with the string itself. I guess that's the specific question, string vs. char[], but I would appreciate any other pointers anyone has. Thanks.
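
    For illustration, one common approach is to declare each field's offset and length exactly once and slice each line with Substring, rather than walking a char[] by hand. A minimal C# sketch, where CustomerRecord and its field offsets are hypothetical stand-ins for the real record layout:

        // Hypothetical record type; the real classes would mirror the
        // COBOL copybook's DATA DIVISION.
        class CustomerRecord
        {
            public string Id;
            public string Name;
            public decimal Balance;
        }

        static class FixedWidthParser
        {
            // Offsets and lengths below are illustrative assumptions.
            public static CustomerRecord Parse(string line)
            {
                return new CustomerRecord
                {
                    Id      = line.Substring(0, 8).Trim(),
                    Name    = line.Substring(8, 30).Trim(),
                    Balance = decimal.Parse(line.Substring(38, 12).Trim())
                };
            }
        }

    Keeping the offsets in one place means a layout change touches a single class, and Substring avoids the extra copy that converting each line to a char[] would incur.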

    Read the article

  • SSRS 2008 Installation Guidelines/Best Practices?

    - by Brad Bowman
    I have seen recommendations for installing SSRS 2005 stating that you should separate SSRS from the database engine that hosts the data sources for your reports, i.e. that you should not install them on the same server. Is there any documentation for SSRS 2008 that provides installation guidelines or best practices? I am assuming that the same holds true for SSRS 2008 as it did for SSRS 2005, but I have not seen it specifically stated anywhere. We have a project that will utilize SSRS 2008, and there is a difference of opinion on where to install it: on its own dedicated server, or on the same SQL Server as the report data sources. If there is a link to any documentation that could be provided, it would be gratefully appreciated; all I am finding besides what is in BOL refers to 2005. Thanks in advance!

    Read the article

  • Developing ASP.NET MVC UI Extensions - best approach

    - by user252160
    What are the best practices for developing rich UI extensions for ASP.NET MVC (I mean asynchronous/partial loading, slick effects, skinning, etc.)? I saw that Telerik has an MVC suite of extensions, but I haven't tried them yet, so I cannot comment on them. My biggest concern at the moment is how to structure the code of my extensions so that the C#, ASP markup, and jQuery remain separate from each other, yet encapsulated in a way that makes the extension easy to distribute and reuse. I know that the user control approach had many flaws, yet it sort of allowed application developers to just reference a control, set some parameters, and get it going in a few minutes. I'd like to achieve the same kind of portability/reusability while still keeping the code easy to extend and build upon. Some ideas that come to mind as topics for discussion: template helpers vs. extension-method helpers (or a possible integration of the two); efficient use of jQuery; naming conventions; asynchronous action loading; and whatever else comes to your mind.
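
    As one possible packaging, an HtmlHelper extension method keeps the C# in a class library while markup and script files are distributed alongside it. A minimal C# sketch (assuming ASP.NET MVC 2's MvcHtmlString; the Pager widget and its parameters are hypothetical):

        using System.Text;
        using System.Web.Mvc;

        public static class PagerExtensions
        {
            // Hypothetical pager widget: emits one numbered link per page.
            public static MvcHtmlString Pager(this HtmlHelper html,
                                              int pageCount, string urlFormat)
            {
                var sb = new StringBuilder("<div class=\"pager\">");
                for (int i = 1; i <= pageCount; i++)
                    sb.AppendFormat("<a href=\"{0}\">{1}</a> ",
                                    string.Format(urlFormat, i), i);
                sb.Append("</div>");
                return MvcHtmlString.Create(sb.ToString());
            }
        }

    A view would then call something like <%= Html.Pager(10, "/products/page/{0}") %>, which is close to the reference-a-control-and-go experience user controls gave.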

    Read the article

  • Best Practices for Managed SalesForce App Development?

    - by Fiid
    We're developing applications for AppExchange and are trying to figure out the best way to do development and release management. There are several issues around this: 1) Package prefixes. We are developing code in unmanaged mode and releasing as managed, so we have to add all the package prefixes into the code. Is there a way to do this dynamically at runtime? Right now we're using an Ant script, which stops us benefiting from the Force.com IDE plugin. 2) Resource files. We are doing some Ajax-y stuff, and as a result have a few different resource files we upload, some of which are multiple-file resources (zip files). Has anyone automated the building of these resources using Ant, and does that work well? Our environment seems very fragile and works for some developers and not others; have other people had this problem? How did you resolve it?

    Read the article

  • Best practices for organizing .NET P/Invoke code to Win32 APIs

    - by Paul Sasik
    I am refactoring a large and complicated code base in .NET that makes heavy use of P/Invoke to Win32 APIs. The structure of the project is not the greatest, and I am finding DllImport statements all over the place, very often duplicated for the same function, and also declared in a variety of ways: the import directives and methods are sometimes declared as public, sometimes private, sometimes as static and sometimes as instance methods. My worry is that refactoring may have unintended consequences, but that might be unavoidable. Are there documented best practices I can follow that can help me out? My instinct is to organize a static/shared Win32 P/Invoke API class that lists all of these methods and associated constants in one file... (The code base is made up of over 20 projects with a lot of Windows message passing and cross-thread calls. It's also a VB.NET project upgraded from VB6, if that makes a difference.)
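
    There is a documented convention along these lines: FxCop/code-analysis rule CA1060 suggests moving all DllImport declarations into a single NativeMethods-style class. A minimal C# sketch (the VB.NET equivalent would be a Friend NotInheritable Class with Shared methods):

        using System;
        using System.Runtime.InteropServices;

        // One non-instantiable home for every P/Invoke signature and its
        // associated constants.
        internal static class NativeMethods
        {
            internal const int WM_USER = 0x0400;

            [DllImport("user32.dll", CharSet = CharSet.Auto)]
            internal static extern IntPtr SendMessage(
                IntPtr hWnd, int msg, IntPtr wParam, IntPtr lParam);

            [DllImport("kernel32.dll", SetLastError = true)]
            [return: MarshalAs(UnmanagedType.Bool)]
            internal static extern bool CloseHandle(IntPtr handle);
        }

    Duplicated imports elsewhere can then be deleted one project at a time, which keeps the refactoring incremental.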

    Read the article

  • Best practices book for CRUD apps

    - by Kevin L.
    We will soon be designing a new tool to calculate commissions across multiple business units. This new compensation scheme is pretty clever and well thought out, but the implementation will involve complexity that makes the Hubble look like a toaster. A significant portion of the programming industry involves CRUD apps: updating insurance data, calculating commissions (Joel included)... even storing questions and answers for a programmer Q&A site. We as programmers have Code Complete for low-level formatting/style and Design Patterns for high-level architecture (to name just a few). Where's the comparable book that teaches best practices for CRUD?

    Read the article

  • Best Practices Question on using an ObjectDataSource in ASP.NET

    - by Lill Lansey
    ASP.NET, C#, VS 2008, SQL Server 2005. I am filling a DataTable in the data access layer with data from a SQL Server stored procedure. Best practices question: is it OK to pass the DataTable to the business layer and use the DataTable from the business layer as an ObjectDataSource in the presentation layer, or should I transfer the data in the DataTable into a List and use the List for the ObjectDataSource in the presentation layer? If I should transfer the data to a List, should that be done in the data access layer or the business layer? Does it make a difference if the data needs to be edited before being displayed?
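
    For illustration, translating in the business layer keeps the presentation layer bound to typed objects. A minimal C# sketch, where Product and the column names are hypothetical:

        using System.Collections.Generic;
        using System.Data;

        // Hypothetical domain type mirroring the stored procedure's result set.
        public class Product
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public static class ProductMapper
        {
            // The business layer turns the DAL's DataTable into a typed list,
            // so the ObjectDataSource binds to domain objects, not raw rows.
            public static List<Product> ToList(DataTable table)
            {
                var result = new List<Product>();
                foreach (DataRow row in table.Rows)
                {
                    result.Add(new Product
                    {
                        Id = (int)row["id"],
                        Name = (string)row["name"]
                    });
                }
                return result;
            }
        }

    If the data must be edited before display, a typed class also gives that edit logic a natural home in the business layer.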

    Read the article

  • ASP.NET MVC Best Implementation Practices

    - by RSolberg
    I've recently been asked to completely rewrite and redesign a web site, and the owner of the company has stressed that he wants the site to be made with the latest and greatest technology available, but to avoid additional costs. As of right now, I'm torn between a CMS implementation and writing a new implementation with MVC. The site is mainly brochureware, but will need to allow visitors to submit some data through forms. There are quite a few lists and content features that are dynamic and should be treated as such. Since ASP.NET MVC is new, I don't want to bastardize the implementation if I go that way... Any recommendations on best implementation practices for an MVC website? Also, has anyone had their MVC implementation hosted anywhere that they would recommend?

    Read the article

  • Best practices for file system dependencies in unit/integration tests

    - by Olvagor
    I just started writing tests for a lot of code. There's a bunch of classes with dependencies on the file system, that is, they read CSV files, read/write configuration files, and so on. Currently the test files are stored in the test directory of the project (it's a Maven 2 project), but for several reasons this directory doesn't always exist, so the tests fail. Do you know best practices for coping with file system dependencies in unit/integration tests? Edit: I'm not searching for an answer to the specific problem I described above. That was just an example. I'd prefer general recommendations on how to handle dependencies on the file system, databases, etc.
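
    One general recommendation is to hide the file system behind an interface so tests can substitute an in-memory fake. A minimal sketch (in C#, though the same shape works in Java; all names are illustrative):

        using System.Collections.Generic;
        using System.IO;

        // Production code depends on this instead of touching files directly.
        public interface IFileSource
        {
            string ReadAllText(string path);
        }

        public class DiskFileSource : IFileSource
        {
            public string ReadAllText(string path) { return File.ReadAllText(path); }
        }

        // In-memory fake for tests: no real files, so a missing test
        // directory can no longer make the tests fail.
        public class FakeFileSource : IFileSource
        {
            private readonly Dictionary<string, string> _files =
                new Dictionary<string, string>();

            public void Add(string path, string contents) { _files[path] = contents; }
            public string ReadAllText(string path) { return _files[path]; }
        }

    Integration tests that genuinely need the disk can still use DiskFileSource against a temporary directory they create and delete themselves.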

    Read the article

  • Best Practices for fault tolerance and reliability in scheduled tasks or services

    - by user177883
    I have been working on many applications which run as Windows services or scheduled tasks. Now I want to make sure that these applications will be fault-tolerant and reliable. For example, I have a service that runs every hour. If the service crashes while it is running, I'd like the application to run again for the same period, to avoid data loss. Moreover, I'd like the program to report the error with details. My goal is to avoid data loss and to not fall behind in running the program. I have built a class library that a user can import into a project. The library is supposed to keep information about the running instance of the program, i.e. the program reads and writes information about the running interval, running status, etc. This data is stored in a database. I was curious whether there are some best practices to make scheduled tasks/Windows services fault-tolerant and reliable.
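
    One common pattern is to journal each run in the database before doing any work and mark it complete only at the end; on startup, any interval not marked complete is processed again. A minimal C# sketch, where IJobRunStore and "MyJob" are hypothetical names over the kind of status table the class library already writes to:

        using System;

        // Hypothetical persistence wrapper over the run-status table.
        public interface IJobRunStore
        {
            DateTime? GetLastCompletedIntervalEnd(string jobName);
            void MarkStarted(string jobName, DateTime intervalStart);
            void MarkCompleted(string jobName, DateTime intervalStart);
        }

        public class HourlyJobRunner
        {
            private readonly IJobRunStore _store;

            public HourlyJobRunner(IJobRunStore store) { _store = store; }

            // Resumes from the last *completed* interval, so a crash mid-run
            // causes that hour to be processed again rather than skipped.
            public void Run(Action<DateTime> processInterval)
            {
                DateTime next = _store.GetLastCompletedIntervalEnd("MyJob")
                                ?? DateTime.UtcNow.AddHours(-1);
                while (next <= DateTime.UtcNow.AddHours(-1))
                {
                    _store.MarkStarted("MyJob", next);
                    processInterval(next); // must be idempotent or transactional
                    _store.MarkCompleted("MyJob", next);
                    next = next.AddHours(1);
                }
            }
        }

    The re-run is only safe if processing an interval twice has no ill effect, which is why the work itself should be idempotent or wrapped in a transaction.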

    Read the article

  • Best practices on what data to collect in in-app web analytics

    - by Anton Gogolev
    Hi! In our SaaSy webapp we need to collect Google Analytics-like data (like what pages were visited, how many 404s there were, etc.). I wonder if there are any best practices on what pieces of information should be collected (like IP, user agent, etc.) and how these logs should be stored. The requirements for what statistics we're going to display are not yet fixed, but I want to have a starting point. Can you help me out with this? Thanks.
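
    As a starting point for the schema, here is a minimal C# sketch of a per-request record; every field is an illustrative assumption to trim or extend once the reporting requirements firm up:

        using System;

        // Sketch of one logged page view; not a prescribed schema.
        public class PageViewEvent
        {
            public DateTime TimestampUtc { get; set; }
            public string Url { get; set; }
            public string Referrer { get; set; }
            public int HttpStatus { get; set; }    // makes 404 counts trivial
            public string IpAddress { get; set; }
            public string UserAgent { get; set; }
            public string SessionId { get; set; }
            public string AccountId { get; set; }  // per-tenant reporting in SaaS
        }

    Logging raw events like this and aggregating later is more flexible than storing pre-computed statistics, at the cost of storage.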

    Read the article

  • Google App Engine and Git best practices

    - by systempuntoout
    I'm developing a small pet project on Google App Engine, and I would like to keep the code under source control using GitHub; this will allow a friend of mine to check out and modify the sources. I just have a directory with all the sources (call it PetProject), and the Google App Engine development server points to that directory. Is it correct to create a repo directly from the PetProject directory, or is it preferable to create a second directory mirroring the development PetProject directory? In the latter case, any time my friend releases something new, I need to pull from Git, copying the modified files to the development PetProject directory. If I decide to keep the repo inside the development directory, is skipping .git in the GAE yaml enough? What are the best practices here?

    Read the article

  • MySQL indexes - what are the best practices?

    - by Haroldo
    I've been using indexes on my MySQL databases for a while now but never properly learnt about them. Generally I put an index on any fields that I will be searching or selecting using a WHERE clause, but sometimes it doesn't seem so black and white. What are the best practices for MySQL indexes? Example situations/dilemmas: If a table has six columns and all of them are searchable, should I index all of them or none of them? What are the negative performance impacts of indexing? If I have a VARCHAR 2500 column which is searchable from parts of my site, should I index it?

    Read the article

  • What is the best practice, from the point of view of well-experienced developers?

    - by Damien MIRAS
    My manager pushes me to use his self-defined best practices. All of these practices are based on his own assumptions. I disagree with them, and I would like to have some feedback from well-experienced people about these practices. I would prefer answers from people involved in the maintenance of huge products, and people who have maintained a product for years. Developers with 15+ years of experience are preferred, because my manager has that much experience himself. I have 7 years of experience. Here are the practices he wants me to use: Never extend classes; use composition and interfaces instead, because extended classes are unmaintainable and difficult to debug. What I think about that: extend when needed, respect the Liskov Substitution Principle and you'll never be stuck with a problem, but prefer composition and decoration. I don't know any serious project which has banned inheritance; sometimes it's impossible not to use it, e.g. in a UI framework. Design patterns are just unusable. In PHP, for simple use cases (for example, a user needs a web interface to view a database table), his "best practice" is: copy some random PHP code which we own, paste and modify it, put HTML and PHP code in the same file, and never use classes in PHP, because it doesn't work well for small jobs (in fact it doesn't work well at all, and there is no good tool to work with). Copy & paste PHP code is good practice for maintenance, because scripts are independent: if you have a bug somewhere, you can fix it without side effects. What I think about that: NEVER EVER copy code; if you do it because you have five minutes to deliver something, do some refactoring after that. Copy & paste code is a beginner's error; if the code has a bug, you'll have that bug everywhere you pasted it, and it's a nightmare to maintain. If you respect the Open/Closed Principle you'll rarely get side effects; use unit tests if you are afraid of that. For small jobs in PHP, at least write the HTML separately from the PHP code and reuse it any time you need it. Classes in PHP are mature (not as mature as in other languages like Python or Java, but they are usable). There are tools that work very well with classes in PHP, like Zend Studio. The use of classes depends not on the language you use but on the programming paradigm you have chosen. I'm an OOP developer, I use PHP5; why should I have to shoot myself in the foot? When you find a simple bug in the code, and you can fix it simply, if you are not working on the code where you found it, never fix it, even if it takes 5 seconds. He tells me his "best practices" are driven by the fact that he has a lot of experience maintaining software in production (14 years), and that he now knows what works and what doesn't, that what the community says is a fad, and that the people advocating principles such as never copying and pasting code are not involved in maintaining applications. What I think about that: if you find a bug, fix it if you can do so quickly; inform the people who've touched that code before; check that you have not introduced a new bug; and ideally add a unit test for it. I currently work on a web commerce project which serves 15k unique users per day. The code base has been maintained this way since 2005 and still has to be maintained. Ideally, include in your answer a short description of your position and experience, in terms of years effectively maintaining an application which has been in production for real.

    Read the article

  • Notification Email Best Practices--From Server Setup to Programming

    - by Andrew Wagner
    All, I'm in the process of building a SaaS tool that allows network admins to generate notification emails to the end-users of our platform (among many, many other things). I'm running into a bit of an "out of my expertise" wall, as I know there are a lot of variables involved in configuring an application that can: run in a distributed way via load balancing and still leverage a single mail server for sending notification emails; process unsubscribe requests; and avoid any ISP blacklisting in the process. If anyone has the time and has done this before, I'd love it if you could walk me through the A-Z of best practices, both from a configuration perspective and an execution perspective, for generating these emails (anything from necessary DNS settings to ideal SMTP setup and configuration). Currently, our application generates email via Google Apps using the PHPMailer class. While this works well, it doesn't queue messages (a potential for timeout problems if any of our clients amass a very large list of end-users), and Google limits the number of generated email messages to 500/day. I know this is a lofty question, but any guidance you could provide would be smashing and a big help as we work through this hurdle in our beta development stage. Thanks!

    Read the article

  • UDDI Best Practices

    - by Andrew Cripps
    My organisation is getting into the SOA world (a bit late, but that's what it's like here!) and we're looking into the ESB Toolkit 2.0 (we already have BizTalk Server 2009). We're keen on implementing UDDI (specifically, the UDDI Services v3.0 that ships with BTS 2009), but we're low on actual UDDI experience. We want to manage the ever-burgeoning number of web services we have across all our environments. What are the best practices for implementing UDDI? For example: Would you implement a single highly-available, resilient UDDI server that hosts all services and bindings, including test environment versions? Or would you implement separate UDDI repositories for test and production environments? I'm aware of the OASIS Technical Note v2.0 on WSDL and UDDI, but does anyone actually implement it, i.e. the abstract parts of the WSDL as tModels and the implementation parts of the WSDL as bindings? Would you go to the effort of capturing non-web-service endpoints in UDDI, or just use it for WSDL? What are the "gotchas"?

    Read the article

  • Best practices for class-mapping with SoapClient

    - by Foofy
    Using SoapClient's class mapping feature, and it's pretty sweet. Unfortunately the SOAP service we're using has a bunch of read-only properties on some of the objects and will throw faults if those properties are passed back as anything but null. I need to filter out the properties before they're used in the SOAP call and am looking for advice on the best way to do it. So far the options are: 1) Stick to a convention where I use getter and setter functions to manipulate the properties, and use property overloading to filter property access, since only SoapClient would be reading the properties directly. E.g. developers would access properties like this: $obj->getAccountNumber(); SoapClient would access properties like this: $obj->accountNumber. I don't like this because the properties are still exposed, and things could go wrong if developers don't stick to the convention. 2) Have a wrapper for SoapClient that sets a public flag the mapped objects can check to see if a property is being accessed by SoapClient. I already have a wrapper that assigns a reference to itself to all the mapped objects:

        class SoapClientWrapper
        {
            public function __soapCall($method, $args)
            {
                // Flag that property reads are now on behalf of SoapClient.
                $this->setSoapMode(true);
                $result = $this->_soapClient->__soapCall($method, $args);
                $this->setSoapMode(false);
                return $result;
            }
        }

        class Invoice
        {
            function __get($val)
            {
                // Hide read-only properties from SoapClient, expose them otherwise.
                if ($this->_soapClient->getSoapMode()) {
                    return null;
                } else {
                    return $this->$val;
                }
            }
        }

    This works, but it doesn't feel right and seems a bit clunky. 3) Do the mapping manually, and don't use SoapClient's mapping features. I'd just have a function on all the mapped objects that returns the safe-to-send properties. Also, nobody would have access to properties they shouldn't, since I could enforce getters and setters. A lot more work, though.

    Read the article

  • git branch naming best practices

    - by skiphoppy
    I've been using a local git repository interacting with my group's CVS repository for several months now. I've made an almost neurotic number of branches, most of which have thankfully merged back into my trunk. But naming is starting to become an issue. If I have a task easily named with a simple label, but I accomplish it in three stages which each include their own branch and merge situation, then I can repeat the branch name each time, but that makes the history a little confusing. If I get more specific in the names, with a separate description for each stage, then the branch names start to get long and unwieldy. I did learn looking through old threads here that I could start naming branches with a / in the name, i.e. topic/task, or something like that. I may start doing that and see if it helps keep things better organized. What are some best practices for naming git branches? Edit: Nobody has actually suggested any naming conventions. I do delete branches when I'm done with them. I just happen to have several around because management is constantly adjusting my priorities. :) As an example of why I might need more than one branch on a task, suppose I need to commit the first discrete milestone in the task to the group's CVS repository. At that point, due to my imperfect interaction with CVS, I would perform that commit and then kill that branch. (I've seen too much weirdness interacting with CVS if I try to continue to use the same branch at that point.)

    Read the article

  • Resizing video best practices (frame size)

    - by undefined
    I have read the following, which is from "Best Practices for Encoding Video with the VP6 Codec" on the Adobe website here: http://www.adobe.com/devnet/flash/articles/encoding_video_print.html. It is talking about common video sizes (320x240, 640x480): "Although these ratios are standard, and should be used to avoid distorting the video, the size of the encoded video is not set in stone. The original web video sizes used heights and widths that were evenly divisible by 16. This was mandatory for many early codecs. Although this is not necessary for modern codecs, you should stick to even heights and widths." What do they mean by "even heights and widths"? I am thinking about encoding my video at 400x300 to make it slightly bigger; this is still 4:3 format, but should I just stick at 320x240 and resize it on the screen? Clearly there are benefits to the smaller size in terms of storage and delivery costs. In some places on my site I want to show the video at 400x300, but in others I want it to play full screen, so this is why I am wondering if a larger original size (400x300) will give better results when blown up. Any thoughts?

    Read the article

  • Data Access Layer, Best Practices

    - by labratmatt
    I'm looking for input on the best way to refactor the data access layer (DAL) in my PHP-based web app. I follow an MVC pattern: PHP/HTML/CSS/etc. views on the front end, PHP controllers/services in the middle, and a PHP DAL sitting on top of a relational database in the model. Pretty standard stuff. Things are working fine, but my DAL is getting large (code smell?) and becoming a bit unwieldy. My DAL contains almost all of the logic to interface with my database and is full of functions that look like this:

        function getUser($user_id) {
            global $pdo; // PDO connection, created elsewhere in the DAL
            $statement = $pdo->prepare(
                "select id, name from users where user_id = :user_id");
            $statement->execute(array(':user_id' => $user_id));
            // Fetch the results as an associative array.
            return $statement->fetchAll(PDO::FETCH_ASSOC);
        }

    Notes: the logic in my controllers only interacts with the model through functions like the above in the DAL. I am not using a framework (I'm of the opinion that PHP is a templating language and there's no need to inject complexity via a framework). I generally use PHP as a procedural language and tend to shy away from its OOP approach (I enjoy OOP development but prefer to keep that complexity out of PHP). What approaches have you taken when your DAL has reached this point? Do I have a fundamental design problem? Do I simply need to chop my DAL into a number of smaller, logically divided files? Thanks.

    Read the article

  • Best practices for managing updating a database with a complex set of changes

    - by Sarge
    I am writing an application where I have some publicly available information in a database which I want the users to be able to edit. The information is not textual like a wiki, but is similar in concept because the edits bring the public information increasingly closer to the truth. The changes will affect multiple tables, and the update needs to be automatically checked before affecting the public tables. I'm working on the design, and I'm wondering if there are any best practices that might help with some particular issues: I want to provide undo capability. I want to show the user the combined result of all their changes. When the user says they're done, I need to check the underlying public data to make sure it hasn't been changed by somebody else. My current plan is to have the user work in a set of tables set up as a private working area. Once they're ready, they can kick off a process to check everything and update the public tables. Undo can be recorded using the Command pattern, saving to a table. Are there any techniques I might have missed, or useful papers or patterns? Thanks in advance!
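
    For reference, a minimal C# sketch of the Command-pattern undo mentioned above; IEditCommand and EditHistory are hypothetical names:

        using System.Collections.Generic;

        // Each edit knows how to apply and reverse itself.
        public interface IEditCommand
        {
            void Execute();
            void Undo();
        }

        public class EditHistory
        {
            private readonly Stack<IEditCommand> _done = new Stack<IEditCommand>();

            public void Apply(IEditCommand command)
            {
                command.Execute();
                _done.Push(command);  // persisting this stack to a table
                                      // gives undo across sessions
            }

            public void UndoLast()
            {
                if (_done.Count > 0)
                    _done.Pop().Undo();
            }
        }

    Since the edits live in a private working area, the optimistic-concurrency check against the public tables (comparing a version number or timestamp captured when editing began) can run in the same final process that applies the commands.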

    Read the article

  • Best strategy for synching data in iPhone app

    - by iamj4de
    I am working on a regular iPhone app which pulls data from a server (XML, JSON, etc.), and I'm wondering what is the best way to implement data synching. The criteria are speed (less network data exchange), robustness (data recovery in case an update fails), offline access, and flexibility (adaptable when the structure of the database changes slightly, like a new column). I know it varies from app to app, but can you share some of your strategy/experience? For me, I'm thinking of something like this: 1) Store the Last Modified Date on the iPhone. 2) Upon launching, send a message like getNewData.php?lastModifiedDate=... 3) The server will process it and send back only the data modified since last time. 4) This data is formatted as follows:

        <+><data id="..."></data></+>  // add this to SQLite/Core Data
        <-><data id="..."></data></->  // remove this
        <%><data id="..."><attribute>newValue</attribute></data></%>  // new modified value

    I don't want to make <+>, <->, <%>, etc. for each attribute as well, because it would be too complicated, so probably when I receive a <%> field, I would just remove the data with the specified id and then add it again (assuming the id here is not some automatically auto-incremented field). 5) Once everything is downloaded and updated, I will update the Last Modified Date field. The main problem with this strategy: if the network goes down while I am updating something, the Last Modified Date is not yet updated, so next time I relaunch the app I will have to go through the same thing again. Not to mention potentially inconsistent data. If I use a temporary table for the update and make the whole thing atomic, it would work, but then again, if the update is too long (lots of data changes), the user has to wait a long time until new data is available. Should I use a Last-Modified-Date for each data field and update the data gradually?

    Read the article

  • Best practice when working with web services that return objects?

    - by Raj
    Hello, I'm currently working with web services that return objects such as a list of files, e.g. a File array. I wanted to know whether it's best practice to bind this type of object directly to my front-end code, for example a Repeater/ListView, or whether to first parse it into my own list of "file" classes, e.g. customFiles[]. If the web service changes, then it will break my front-end code; however, if I create my own CustomFile class, then I would only need to change my code in one place to fix the issue. But it just seems like a lot of extra work to create the same classes from a web service. I wanted to know what the best practice is for this type of work. Thanks, Raj.
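
    For illustration, the mapping layer being weighed here is small. A minimal C# sketch, where ServiceFile stands in for the generated proxy type and CustomFile is the question's own class:

        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical proxy type generated from the web service.
        public class ServiceFile
        {
            public string Name;
            public long SizeBytes;
        }

        // Your own type: the UI binds to this, so a service change only
        // breaks the mapping below, not every page.
        public class CustomFile
        {
            public string Name { get; set; }
            public long SizeBytes { get; set; }

            public static CustomFile From(ServiceFile f)
            {
                return new CustomFile { Name = f.Name, SizeBytes = f.SizeBytes };
            }
        }

        public static class FileMapping
        {
            public static List<CustomFile> ToCustomFiles(ServiceFile[] files)
            {
                return files.Select(CustomFile.From).ToList();
            }
        }

    The trade-off is exactly as stated in the question: one extra class and one mapping function, in exchange for a single place to fix when the service contract changes.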

    Read the article

  • Best implementation of Java Queue?

    - by Georges Oates Larsen
    I am working (in Java) on a recursive image-processing algorithm that traverses the pixels of the image, outward from a center point. Unfortunately, that causes stack overflows, so I have decided to switch to a queue-based algorithm. Now, this is all fine and dandy, but considering the fact that the queue will be analyzing thousands of pixels in a very short amount of time, while constantly popping and pushing, without maintaining a predictable size (it could be anywhere between length 100 and 20,000), the queue implementation needs significantly fast popping and pushing abilities. A linked list seems attractive due to its ability to push elements onto itself without rearranging anything else in the list, but in order for it to be fast enough, it would need easy access to both its head and its tail (or the second-to-last node if it were not doubly linked). Sadly, I cannot find any information about the underlying implementation of linked lists in Java, so it's hard to say if a linked list is really the way to go... This brings me to my question: what would be the best implementation of the Queue interface in Java for what I intend to do? (I do not wish to edit or even access anything other than the head and tail of the queue; I do not wish to do any sort of rearranging. On the flip side, I do intend to do a lot of pushing and popping, and the queue will be changing size quite a bit, so preallocating would be inefficient.)

    Read the article

  • BizTalk 2009 - BizTalk Server Best Practices Analyser

    - by StuartBrierley
    The BizTalk Server Best Practices Analyser allows you to carry out a configuration-level verification of your BizTalk installation, evaluating the deployed configuration but not modifying or tuning anything that it finds. The Best Practices Analyser uses "reading and reporting" to gather data from different sources, such as Windows Management Instrumentation (WMI) classes, SQL Server databases, and registry entries. When I first ran the analyser I got a number of errors; any errors you get should be acted upon to resolve them, after which you should run the scan again to see if anything else is reported that needs attention. As you can see in the image above, the initial issue that jumped out at me was that the SQL Server Agent was not started. The reason for this was absent-mindedness: this run was against my development PC, and I don't have SQL/BizTalk actively running unless I am using them. Starting the agent service and running the scan again gave me the following results: This resolved most of the issues for me, but the next major issue to look at was that there was no tracking host running. You can also see that I was still getting an error with two of the SQL jobs. The problem here was that I had not yet configured those two jobs. Configuring the backup and purge jobs and then starting the tracking host before running the scan again gave: This had cleared all the critical issues, but I did still have a number of warnings. For example, on this report I was warned that the BizTalk MessageBox is hosted on the BizTalk Server. While this is known to be less than ideal, it is as expected on my development environment, where I have installed Visual Studio, SQL and BizTalk on my laptop, and I was happy to ignore this and other similar warnings. In your case, you should take a look at any warnings you receive and decide what you want to do about each of them in turn.

    Read the article
