Search Results

Search found 40339 results on 1614 pages for 'best settings'.

Page 16 of 1614

  • Best practice for Google app engine and Git.

    - by systempuntoout
    I'm developing a small pet project on Google App Engine and I would like to keep the code under source control on GitHub; this will let a friend of mine check out and modify the sources. I just have a directory with all the sources (call it PetProject), and the Google App Engine development server points to that directory. Is it correct to create a repo directly from the PetProject directory, or is it preferable to create a second directory that mirrors the development PetProject directory? In that case, any time my friend releases something new, I would need to pull from Git and then copy the modified files into the development PetProject directory. If I decide to keep the repo inside the development directory, is skipping .git in app.yaml enough? What's the best practice here?
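
    A hedged sketch of the skip_files approach mentioned above (the patterns shown are illustrative, not the full default list): App Engine's app.yaml accepts a list of regular expressions for files that should never be uploaded, so a repo kept inside the deployed directory stays local.

        # app.yaml (sketch): keep the .git directory out of every upload.
        skip_files:
        - ^(.*/)?\.git(/.*)?$
        - ^(.*/)?.*\.py[co]$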

    Read the article

  • html clickable layout area. best practice

    - by Andrew Florko
    I am bad at HTML layout but I have to produce it :) I want to make a big button on a page that is implemented as a div with child tags (maybe a bad idea). I can handle the click event on the boundary div with JavaScript, but that requires JavaScript to be enabled. I can wrap the boundary div with an anchor tag, but that doesn't work in IE. Please suggest the best way to implement this. This is what I have now:
        <a href="...">
          <table>
            <td> ... </td>
            <td> ...
              <table> ... </table>
            </td>
          </table>
        </a>
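
    A minimal sketch of a JavaScript-free alternative (the class names and target URL are made up): a single anchor styled as a block element is clickable over its whole area, and unlike an <a> wrapped around a <table> it stays valid markup that IE can handle.

        <a href="target.html" class="big-button">
          <span class="title">Do the thing</span>
          <span class="subtitle">Short description under the title</span>
        </a>

    And the CSS that goes with it:

        a.big-button      { display: block; width: 300px; padding: 20px;
                            border: 1px solid #999; text-decoration: none; }
        a.big-button span { display: block; }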

    Read the article

  • Best Practice: Software Versioning

    - by Sebi
    I couldn't find a similar question here on SO, but if you find one please link it. Is there any guideline or standard best practice for how to version software you develop in your spare time for fun, but that will nevertheless be used by some people? I think it's necessary to version such software so that you know which version one is talking about (e.g. for bug fixing, support, and so on). But where do I start the versioning? 0.0.0? Or 0.0? And how do I increment the numbers? Major release.minor change? And shouldn't any commit to a version control system be another version? Or is that only for versions which are used in a productive manner?

    Read the article

  • Make a Method of the Business Layer secure. best practice / best pattern [.net/c#]

    - by gsharp
    Hi. We are using ASP.NET with a lot of AJAX "Page Method" calls. The web services defined in the page invoke methods from our business layer. To prevent hackers from calling the page methods, we want to implement some security in the business layer. We are struggling with two different issues. First one:
        public List<Employees> GetAllEmployees() { // do stuff }
    This method should only be callable by authorized users with the role "HR". Second one:
        public Order GetMyOrder(int orderId) { // do stuff }
    This method should only be callable by the owner of the order. I know it's easy to implement the security for each method like:
        public List<Employees> GetAllEmployees() { // check if the user is in role HR }
    or
        public Order GetMyOrder(int orderId) { // check if order.Owner == user }
    What I'm looking for is some pattern/best practice for implementing this kind of security in a generic way (without coding the if/then/else every time). I hope you get what I mean :-) Thanks for your help.
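
    A hedged sketch of one declarative option for the role check (it assumes Thread.CurrentPrincipal carries the user's roles for the request, which ASP.NET does when role management is configured; the service class name and the Employees stub are made up): .NET's PrincipalPermission attribute demands a role before the method body runs, so the check is no longer hand-written inside every method.

        using System.Collections.Generic;
        using System.Security.Permissions;

        // Stub for the question's entity type, only so the sketch compiles on its own.
        public class Employees { }

        public class EmployeeService
        {
            // Throws a SecurityException unless the current principal is in the "HR" role.
            [PrincipalPermission(SecurityAction.Demand, Role = "HR")]
            public List<Employees> GetAllEmployees()
            {
                // do stuff
                return new List<Employees>();
            }
        }

    The per-owner check on GetMyOrder needs the order loaded first, so it usually stays a small guard helper (or an interceptor, if a DI container with AOP support is in play) rather than an attribute.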

    Read the article

  • Best ways to reuse Java methods

    - by carillonator
    I'm learning Java and OOP, and have been doing the problems at Project Euler for practice (awesome site btw). I find myself doing many of the same things over and over, like checking if an integer is prime / generating primes, generating the Fibonacci series, and checking if a number is a palindrome. What is the best way to store and call these methods? Should I write a utility class and then import it? If so, do I import a .class file or the .java source? I'm working from a plain text editor and the Mac terminal. Thanks!
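
    A minimal sketch of the utility-class approach (package, class and method names are made up; trial division is enough for Project Euler sized inputs):

        package euler.util;

        public final class MathUtils {

            private MathUtils() { }   // no instances needed, only static helpers

            // Checks primality by trial division up to sqrt(n).
            public static boolean isPrime(long n) {
                if (n < 2) return false;
                for (long d = 2; d * d <= n; d++) {
                    if (n % d == 0) return false;
                }
                return true;
            }

            // A number is a palindrome if its decimal string reads the same reversed.
            public static boolean isPalindrome(long n) {
                String s = Long.toString(n);
                return new StringBuilder(s).reverse().toString().equals(s);
            }
        }

    In each new problem you import the class by name (import euler.util.MathUtils;) rather than a file; compiling with something like javac -cp path/to/compiled/classes Problem7.java lets the compiler find the already-built .class file through the classpath, so the .java source is only needed when the utility class itself changes.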

    Read the article

  • best-practice on for loop's condition

    - by guest
    What is considered best practice in this case?
        for (i = 0; i < array.length(); ++i)
    or
        for (i = array.length(); i > 0; --i)
    Assume I don't care about the iteration direction, only about covering the bare length of the array. Also, I don't plan to alter the array's size in the loop body. So, will array.length() become constant during compilation? If not, then the second approach would be the one to go for.
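
    A hedged middle ground (a sketch in Java-style syntax, since the question's language isn't stated): read the length into a local once, which keeps the forward loop and avoids re-evaluating the length on every pass even when the compiler cannot hoist it.

        int[] array = {3, 1, 4, 1, 5};
        for (int i = 0, n = array.length; i < n; ++i) {
            System.out.println(array[i]);
        }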

    Read the article

  • MySQL indexes - what are the best practices?

    - by Haroldo
    I've been using indexes on my MySQL databases for a while now but never properly learnt about them. Generally I put an index on any field that I will be searching or selecting using a WHERE clause, but sometimes it doesn't seem so black and white. What are the best practices for MySQL indexes? Example situations/dilemmas: If a table has six columns and all of them are searchable, should I index all of them or none of them? What are the negative performance impacts of indexing? If I have a VARCHAR(2500) column which is searchable from parts of my site, should I index it?
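
    A minimal sketch of the usual starting point (table and column names are made up): index the columns that actually appear in WHERE clauses, use a prefix index on long VARCHAR columns so the index stays small, and let EXPLAIN confirm the query can use it.

        CREATE INDEX idx_users_email    ON users (email);
        CREATE INDEX idx_articles_title ON articles (title(20));    -- prefix index: first 20 chars only

        -- EXPLAIN shows whether the optimizer picks the index for a given query.
        EXPLAIN SELECT id FROM articles WHERE title LIKE 'mysql%';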

    Read the article

  • Notification Email Best Practices--From Server Setup to Programming

    - by Andrew Wagner
    All, I'm in the process of building a SaaS tool that allows network admins to generate notification emails to the end-users of our platform (among many, many other things). I'm running into a bit of an "out of my expertise" wall, as I know there are a lot of variables involved in configuring an application that can run in a distributed way via load balancing and still leverage a single mail server for sending notification emails, process unsubscribe requests, and avoid any ISP blacklisting in the process. If anyone has the time and has done this before, I'd love it if you could walk me through the A-Z of best practices, both from a configuration perspective and an execution perspective, for generating these emails (anything from necessary DNS settings to ideal SMTP setup and configuration). Currently, our application generates email via Google Apps using the PHPMailer class. While this works well, it doesn't queue messages (potential for timeout problems if any of our clients amass a very large list of end-users), and Google limits the number of generated email messages to 500/day. I know this is a lofty question, but any guidance you could provide would be smashing and a big help as we work through this hurdle in our beta development stage. Thanks!
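
    A hedged sketch of the queueing piece only (table and column names are made up): instead of sending during the web request, store the message in a database queue and let a cron worker drain it in small batches with PHPMailer, which avoids request timeouts and lets you throttle to stay under provider limits.

        <?php
        // Insert the notification into a pending queue; a separate worker sends it later.
        function queueNotification(PDO $db, $to, $subject, $body)
        {
            $stmt = $db->prepare(
                "INSERT INTO mail_queue (recipient, subject, body, status, created_at)
                 VALUES (:recipient, :subject, :body, 'pending', NOW())"
            );
            $stmt->execute(array(
                ':recipient' => $to,
                ':subject'   => $subject,
                ':body'      => $body,
            ));
        }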

    Read the article

  • UDDI Best Practices

    - by Andrew Cripps
    My organisation is getting into the SOA world (a bit late, but that's what it's like here!) and we're looking into the ESB Toolkit 2.0 (we already have BizTalk Server 2009). We're keen on implementing UDDI (specifically, the UDDI Services v3.0 that ships with BTS 2009), but we're low on actual UDDI experience. We want to manage the ever-burgeoning number of web services we have across all our environments. What are the best practices for implementing UDDI? For example: Would you implement a single highly-available, resilient UDDI server that hosts all services and bindings, including test environment versions? Or would you implement separate UDDI repositories for test and production environments? I'm aware of the OASIS Technical Note v2.0 on WSDL and UDDI, but does anyone actually implement that? I.e. the abstract parts of the WSDL as tModels, the implementation parts of the WSDL as bindings? Would you go to the effort of capturing non-web-service endpoints in UDDI, or just use it for WSDL? What are the "gotchas"?

    Read the article

  • Nested Routes and Parameters for Rails URLs (Best Practice)

    - by viatropos
    Hey there, I have a decent understanding of RESTful URLs and all the theory behind not nesting URLs, but I'm still not quite sure how this looks in an enterprise application, like something like Amazon, StackOverflow, or Google... Google has URLs like this:
        http://code.google.com/apis/ajax/
        http://code.google.com/apis/maps/documentation/staticmaps/
        https://www.google.com/calendar/render?tab=mc
    Amazon like this:
        http://www.amazon.com/books-used-books-textbooks/b/ref=sa_menu_bo0?ie=UTF8&node=283155&pf_rd_p=328655101&pf_rd_s=left-nav-1&pf_rd_t=101&pf_rd_i=507846&pf_rd_m=ATVPDKIKX0DER&pf_rd_r=1PK4ZKN4YWJJ9B86ANC9
        http://www.amazon.com/Ruby-Programming-Language-David-Flanagan/dp/0596516177/ref=sr_1_1?ie=UTF8&s=books&qid=1258755625&sr=1-1
    And StackOverflow like this:
        http://stackoverflow.com/users/169992/viatropos
        http://stackoverflow.com/questions/tagged/html
        http://stackoverflow.com/questions/tagged?tagnames=html&sort=newest&pagesize=15
    So my question is, what is best practice in terms of creating URLs for systems like these? When do you start storing parameters in the URL, and when don't you? These big companies don't seem to be following the rules so hotly debated in the Ruby community (that you should almost never nest URLs, for example), so I'm wondering how you go about implementing your own URLs in larger-scale projects, because it seems like the idea of not nesting URLs breaks down at anything larger than a blog. Any tips?

    Read the article

  • Database structure and source control - best practice

    - by Paddy
    Background: I came from several years working at a company where all the database objects were stored in source control, one file per object. We had a list of all the objects that was maintained when new items were added (to allow us to run scripts in order and handle dependencies) and a VB script that ran to create one big script to run against the database. All the tables were 'create if not exists' and all the SPs etc. were drop-and-recreate. Up to the present: I am now working in a place where the database is the master and there is no source control for DB objects, but we do use Red Gate's tools for updating our production database (SQL Compare), which is very handy and requires little work. Question: How do you handle your DB objects? I like to have them under source control (and, as we're using Git, I'd like to be able to handle merge conflicts in the scripts, rather than in the DB), but I'm going to be pressed to get past the ease of using SQL Compare to update the database. I don't really want us updating scripts in Git and then using SQL Compare to update the production database from our dev DB, as I'd rather have 'one version of the truth', but I don't really want to get into writing a custom bit of software to bundle the whole lot of scripts together. I think Visual Studio Database Edition may do something similar to this, but I'm not sure if we will have the budget for it. I'm sure this has been asked to death, but I can't find anything that seems to quite have the answer I'm looking for. Similar to this, but not quite the same: http://stackoverflow.com/questions/340614/what-are-the-best-practices-for-database-scripts-under-code-control
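
    A hedged sketch of the re-runnable, one-file-per-object style described above (SQL Server syntax, made-up object names): tables are created only if missing, while procedures are dropped and recreated, so every file in source control can be executed repeatedly and always reflects the current version.

        IF OBJECT_ID('dbo.Customers', 'U') IS NULL
            CREATE TABLE dbo.Customers (
                Id   INT IDENTITY(1,1) PRIMARY KEY,
                Name NVARCHAR(100) NOT NULL
            );
        GO

        IF OBJECT_ID('dbo.GetCustomer', 'P') IS NOT NULL
            DROP PROCEDURE dbo.GetCustomer;
        GO

        CREATE PROCEDURE dbo.GetCustomer @Id INT
        AS
            SELECT Id, Name FROM dbo.Customers WHERE Id = @Id;
        GO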

    Read the article

  • Best practices for class-mapping with SoapClient

    - by Foofy
    Using SoapClient's class-mapping feature and it's pretty sweet. Unfortunately the SOAP service we're using has a bunch of read-only properties on some of the objects and will throw faults if the properties are passed back as anything but null. I need to filter out the properties before they're used in the SOAP call and am looking for advice on the best way to do it. So far the options are: 1) Stick to a convention where I use getter and setter functions to manipulate the properties, and use property overloading to filter method access, since only SoapClient would be doing that. E.g. developers would access properties like this: $obj->getAccountNumber(), while SoapClient would access properties like this: $obj->accountNumber. I don't like this because the properties are still exposed and things could go wrong if developers don't stick to the convention. 2) Have a wrapper for SoapClient that sets a public property the mapped objects can check to see if the property is being accessed by SoapClient. I already have a wrapper that assigns a reference to itself to all the mapped objects:
        class SoapClientWrapper {
            public function __soapCall($method, $args) {
                $this->setSoapMode(true);
                $this->_soapClient->__soapCall($method, $args);
                $this->setSoapMode(false);
            }
        }

        class Invoice {
            function __get($val) {
                if ($this->_soapClient->getSoapMode()) {
                    return null;
                } else {
                    return $this->$val;
                }
            }
        }
    This works but it doesn't feel right and seems a bit clunky. 3) Do the mapping manually, and don't use SoapClient's mapping features. I'd just have a function on all the mapped objects that returns the safe-to-send properties. Also, nobody would have access to properties they shouldn't, since I could enforce getters and setters. A lot more work, though.
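
    A minimal sketch of option 3, the "map it yourself" route the question ends with (the property list and the toSoapData method name are made up): each mapped class declares which of its properties are writable, so read-only fields never reach the service as anything but null.

        <?php
        class Invoice
        {
            private static $writable = array('accountNumber', 'amount');

            public $accountNumber;
            public $amount;
            public $createdDate;   // read-only on the service side

            // Builds the array that is actually handed to the SOAP call.
            public function toSoapData()
            {
                $data = array();
                foreach (self::$writable as $name) {
                    $data[$name] = $this->$name;
                }
                return $data;
            }
        }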

    Read the article

  • git branch naming best practices

    - by skiphoppy
    I've been using a local Git repository interacting with my group's CVS repository for several months now. I've made an almost neurotic number of branches, most of which have thankfully merged back into my trunk. But naming is starting to become an issue. If I have a task easily named with a simple label, but I accomplish it in three stages which each include their own branch-and-merge situation, then I can repeat the branch name each time, but that makes the history a little confusing. If I get more specific in the names, with a separate description for each stage, then the branch names start to get long and unwieldy. I did learn looking through old threads here that I could start naming branches with a / in the name, i.e. topic/task, or something like that. I may start doing that and see if it helps keep things better organized. What are some best practices for naming Git branches? Edit: Nobody has actually suggested any naming conventions. I do delete branches when I'm done with them. I just happen to have several around due to management constantly adjusting my priorities. :) As an example of why I might need more than one branch for a task, suppose I need to commit the first discrete milestone of the task to the group's CVS repository. At that point, due to my imperfect interaction with CVS, I would perform that commit and then kill that branch. (I've seen too much weirdness interacting with CVS if I try to continue using the same branch at that point.)
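
    A hedged sketch of one common slash-grouped convention (the branch names are only examples): group branches by purpose, and append a short stage or issue tag when one task needs several branches.

        git checkout -b topic/cvs-import-stage1
        git checkout -b bugfix/1234-login-redirect

        # Delete a stage branch as soon as it has been merged or committed to CVS.
        git branch -d topic/cvs-import-stage1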

    Read the article

  • REST API Best practice: How to accept as input a list of parameter values

    - by whatupwilly
    Hi All, We are launching a new REST API and I wanted some community input on best practices around how we should format input parameters. Right now, our API is very JSON-centric (it only returns JSON). The debate of whether we want/need to return XML is a separate issue. As our API output is JSON-centric, we have been going down a path where our inputs are a bit JSON-centric too, and I've been thinking that may be convenient for some but weird in general. For example, to get a few product details where multiple products can be pulled at once, we currently have:
        http://our.api.com/Product?id=["101404","7267261"]
    Should we simplify this as:
        http://our.api.com/Product?id=101404,7267261
    Or is having JSON input handy? More of a pain? We may want to accept both styles, but does that flexibility actually cause more confusion and headaches (maintainability, documentation, etc.)? A more complex case is when we want to offer more complex inputs. For example, if we want to allow multiple filters on search:
        http://our.api.com/Search?term=pumas&filters={"productType":["Clothing","Bags"],"color":["Black","Red"]}
    We don't necessarily want to put the filter types (e.g. productType and color) as request names like this:
        http://our.api.com/Search?term=pumas&productType=["Clothing","Bags"]&color=["Black","Red"]
    because we wanted to group all filter input together. In the end, does this really matter? It may be that there are so many JSON utils out there that the input type just doesn't matter that much. I know our JavaScript clients making AJAX calls to the API may appreciate the JSON inputs to make their life easier. Thanks, Will

    Read the article

  • Resizing video best practices (frame size)

    - by undefined
    I have read the following, which is from Best Practices for Encoding Video with the VP6 Codec on the Adobe website here - http://www.adobe.com/devnet/flash/articles/encoding_video_print.html. It is talking about common video ratios (320x240, 640x480): "Although these ratios are standard, and should be used to avoid distorting the video, the size of the encoded video is not set in stone. The original web video sizes used heights and widths that were evenly divisible by 16. This was mandatory for many early codecs. Although this is not necessary for modern codecs, you should stick to even heights and widths." What do they mean by 'even heights and widths'? I am thinking about encoding my video at 400x300 to make it slightly bigger; this is still 4:3 format, but should I just stick at 320x240 and resize it on the screen? Clearly there are benefits to this in terms of storage size and delivery costs. In some places on my site I want to show the video at 400x300, but in others I want it to play full screen, so this is why I am wondering if a larger original size (400x300) will give better results when blown up. Any thoughts?

    Read the article

  • Data Access Layer, Best Practices

    - by labratmatt
    I'm looking for input on the best way to refactor the data access layer (DAL) in my PHP-based web app. I follow an MVC pattern: PHP/HTML/CSS/etc. views on the front end, PHP controllers/services in the middle, and a PHP DAL sitting on top of a relational database in the model. Pretty standard stuff. Things are working fine, but my DAL is getting large (code smell?) and becoming a bit unwieldy. My DAL contains almost all of the logic to interface with my database and is full of functions that look like this:
        function getUser($user_id) {
            $statement = "select id, name from users where user_id = :user_id";
            // PDO builds the statement and fetches the results as an array
            return $array_of_results_generated_by_PDO_fetch_method;
        }
    Notes: The logic in my controllers only interacts with the model using functions like the above in the DAL. I am not using a framework (I'm of the opinion that PHP is a templating language and there's no need to inject complexity via a framework). I generally use PHP as a procedural language and tend to shy away from its OOP approach (I enjoy OOP development but prefer to keep that complexity out of PHP). What approaches have you taken when your DAL has reached this point? Do I have a fundamental design problem? Do I simply need to chop my DAL into a number of smaller files (logically divide it)? Thanks.
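
    A minimal procedural sketch of one way to shrink the repetition (the helper and table names are made up): a single private helper does the PDO boilerplate, so each public DAL function reduces to its SQL and its parameters, and the DAL can then be split into one file per entity (user_dal.php, order_dal.php, ...) without much ceremony.

        <?php
        // Shared helper: prepare, bind, execute, fetch.
        function dalFetchAll(PDO $db, $sql, array $params = array())
        {
            $statement = $db->prepare($sql);
            $statement->execute($params);
            return $statement->fetchAll(PDO::FETCH_ASSOC);
        }

        function getUser(PDO $db, $user_id)
        {
            $rows = dalFetchAll($db,
                "SELECT id, name FROM users WHERE id = :user_id",
                array(':user_id' => $user_id));
            return $rows ? $rows[0] : null;
        }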

    Read the article

  • Why aren't .NET "application settings" stored in the registry?

    - by Thomas
    Some time back in the nineties, Microsoft introduced the Windows Registry. Applications could store settings in different hives. There were hives for application-wide and user-specific scopes, and these were placed in appropriate locations, so that roaming profiles worked correctly. In .NET 2.0 and up, we have this thing called Application Settings. Applications can use them to store settings in XML files, app.exe.config and user.config. These are for application-wide and user-specific scopes, and these are placed in appropriate locations, so that roaming profiles work correctly. Sound familiar? What is the reason that these Application Settings are backed by XML files, instead of simply using the registry? Isn't this exactly what the registry was intended for? The only reason I can think of is that the registry is Windows-specific, and .NET tries to be platform-independent. Was this a (or the) reason, or are there other considerations that I'm overlooking?

    Read the article

  • Best Practice - Removing item from generic collection in C#

    - by Matt Davis
    I'm using C# in Visual Studio 2008 with .NET 3.5. I have a generic dictionary that maps types of events to a generic list of subscribers. A subscriber can be subscribed to more than one event.
        private static Dictionary<EventType, List<ISubscriber>> _subscriptions;
    To remove a subscriber from the subscription list, I can use either of these two options. Option 1:
        ISubscriber subscriber; // defined elsewhere
        foreach (EventType eventType in _subscriptions.Keys) {
            if (_subscriptions[eventType].Contains(subscriber)) {
                _subscriptions[eventType].Remove(subscriber);
            }
        }
    Option 2:
        ISubscriber subscriber; // defined elsewhere
        foreach (EventType eventType in _subscriptions.Keys) {
            _subscriptions[eventType].Remove(subscriber);
        }
    I have two questions. First, notice that Option 1 checks for existence before removing the item, while Option 2 uses a brute-force removal, since Remove() does not throw an exception. Of these two, which is the preferred, "best-practice" way to do this? Second, is there another, "cleaner", more elegant way to do this, perhaps with a lambda expression or a LINQ extension? I'm still getting acclimated to these two features. Thanks. EDIT: Just to clarify, I realize that the choice between Options 1 and 2 is a choice of speed (Option 2) versus maintainability (Option 1). In this particular case, I'm not necessarily trying to optimize the code, although that is certainly a worthy consideration. What I'm trying to understand is whether there is a generally well-established practice for doing this. If not, which option would you use in your own code?
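
    A hedged sketch of the lambda/LINQ-flavoured variant the second question asks about (it reuses the _subscriptions and subscriber declarations from the question): Remove() already returns false when the item is absent, and iterating the Values collection avoids indexing back into the dictionary by key; removing from a list does not modify the dictionary itself, so the enumeration stays valid.

        foreach (List<ISubscriber> subscribers in _subscriptions.Values)
        {
            subscribers.Remove(subscriber);
        }

        // The same thing as a one-liner (requires "using System.Linq;" for ToList()):
        _subscriptions.Values.ToList().ForEach(list => list.Remove(subscriber));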

    Read the article

  • Best practices for managing updating a database with a complex set of changes

    - by Sarge
    I am writing an application where I have some publicly available information in a database which I want the users to be able to edit. The information is not textual like a wiki, but is similar in concept because the edits bring the public information increasingly closer to the truth. The changes will affect multiple tables and the update needs to be automatically checked before affecting the public tables. I'm working on the design and I'm wondering if there are any best practices that might help with some particular issues: I want to provide undo capability; I want to show the user the combined result of all their changes; and when the user says they're done, I need to check the underlying public data to make sure it hasn't been changed by somebody else. My current plan is to have the user work in a set of tables set up as a private working area. Once they're ready they can kick off a process to check everything and update the public tables. Undo can be recorded using the Command pattern, saving the commands to a table. Are there any techniques I might have missed, or useful papers or patterns? Thanks in advance!

    Read the article

  • Best Practice With JFrame Constructors?

    - by David Barry
    In both my Java classes and the books we used in them, laying out a GUI in code heavily involved the constructor of the JFrame. The standard technique in the books seems to be to initialize all components and add them to the JFrame in the constructor, and to add anonymous event handlers where needed, and this is what has been advocated in my class. This seems pretty easy to understand and easy to work with when creating a very simple GUI, but seems to quickly get ugly and cumbersome when making anything other than a very simple GUI. Here is a small code sample of what I'm describing:
        public class FooFrame extends JFrame {
            JLabel inputLabel;
            JTextField inputField;
            JButton fooBtn;
            JPanel fooPanel;

            public FooFrame() {
                super("Foo");
                fooPanel = new JPanel();
                fooPanel.setLayout(new FlowLayout());
                inputLabel = new JLabel("Input stuff");
                fooPanel.add(inputLabel);
                inputField = new JTextField(20);
                fooPanel.add(inputField);
                fooBtn = new JButton("Do Foo");
                fooBtn.addActionListener(new ActionListener() {
                    public void actionPerformed(ActionEvent e) {
                        //handle event
                    }
                });
                fooPanel.add(fooBtn);
                add(fooPanel, BorderLayout.CENTER);
            }
        }
    Is this type of use of the constructor the best way to code a Swing application in Java? If so, what techniques can I use to make sure this type of constructor is organized and maintainable? If not, what is the recommended way to approach putting together a JFrame in Java?
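
    A minimal sketch of one common way to keep such a constructor readable (the helper and handler names are made up): the constructor only wires the pieces together, each helper builds one part of the UI, and named handler methods replace long anonymous-class bodies.

        import java.awt.BorderLayout;
        import java.awt.FlowLayout;
        import java.awt.event.ActionEvent;
        import java.awt.event.ActionListener;
        import javax.swing.*;

        public class FooFrame extends JFrame {

            private JTextField inputField;

            public FooFrame() {
                super("Foo");
                add(buildInputPanel(), BorderLayout.CENTER);
                pack();
            }

            // One helper per logical piece of the UI keeps the constructor short.
            private JPanel buildInputPanel() {
                JPanel panel = new JPanel(new FlowLayout());
                panel.add(new JLabel("Input stuff"));
                inputField = new JTextField(20);
                panel.add(inputField);

                JButton fooBtn = new JButton("Do Foo");
                fooBtn.addActionListener(new ActionListener() {
                    public void actionPerformed(ActionEvent e) {
                        handleFoo();
                    }
                });
                panel.add(fooBtn);
                return panel;
            }

            private void handleFoo() {
                // handle event
            }
        }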

    Read the article

  • Best practice - When to evaluate conditionals of function execution

    - by Tesserex
    If I have a function that is called from a few places, and it requires some condition to be met for anything it does to execute, where should that condition be checked? In my case, it's for drawing: if the mouse button is held down, then execute the drawing logic (this is being done in the mouse-movement handler, for when you drag). Option one says put the check in the function so that it's guaranteed to happen. Abstracted, if you will:
        public function Foo() {
            DoThing();
        }

        private function DoThing() {
            if (!condition) return;
            // do stuff
        }
    The problem I have with this is that when reading the code of Foo, which may be far away from DoThing, it looks like a bug. The first thought is that the condition isn't being checked. Option two, then, is to check before calling:
        public function Foo() {
            if (condition) DoThing();
        }
    This reads better, but now you have to worry about checking from everywhere you call it. Option three is to rename the function to be more descriptive:
        public function Foo() {
            DoThingOnlyIfCondition();
        }

        private function DoThingOnlyIfCondition() {
            if (!condition) return;
            // do stuff
        }
    Is this the "correct" solution? Or is this going a bit too far? I feel like if everything were like this, function names would start to duplicate their code. About this being subjective: of course it is, and there may not be a right answer, but I think it's still perfectly at home here. Getting advice from better programmers than I am is the second-best way to learn. Subjective questions are exactly the kind of thing Google can't answer.

    Read the article

  • Best strategy for synching data in iPhone app

    - by iamj4de
    I am working on a regular iPhone app which pulls data from a server (XML, JSON, etc.), and I'm wondering what the best way is to implement data syncing. Criteria are speed (less network data exchange), robustness (data recovery in case an update fails), offline access and flexibility (adaptable when the structure of the database changes slightly, like a new column). I know it varies from app to app, but can you guys share some of your strategy/experience? For me, I'm thinking of something like this: 1) Store the last modified date on the iPhone. 2) Upon launching, send a message like getNewData.php?lastModifiedDate=... 3) The server processes it and sends back only the data modified since last time. 4) This data is formatted as:
        <+><data id="..."></data></+>                                 // add this to SQLite/Core Data
        <-><data id="..."></data></->                                 // remove this
        <%><data id="..."><attribute>newValue</attribute></data></%>  // new modified value
    I don't want to make <+>, <->, <%>... for each attribute as well, because it would be too complicated, so probably when I receive a <%> field I would just remove the data with the specified id and then add it again (assuming the id here is not some automatically auto-incremented field). 5) Once everything is downloaded and updated, I will update the last modified date field. The main problem with this strategy is: if the network goes down while I am updating something, the last modified date is not yet updated, so next time I relaunch the app I will have to go through the same thing again. Not to mention potential inconsistent data. If I use a temporary table for the update and make the whole thing atomic, it would work, but then again, if the update is too long (lots of data changes), the user has to wait a long time until new data is available. Should I use a last-modified date for each piece of data and update the data gradually?
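
    A hedged sketch of the per-record variant mentioned at the end (SQLite-style SQL, made-up table and column names): stamping every row lets the client resume after a dropped connection by asking only for rows newer than the newest row it has already stored, instead of relying on a single global last-modified date.

        CREATE TABLE items (
            id            INTEGER PRIMARY KEY,
            payload       TEXT,
            is_deleted    INTEGER NOT NULL DEFAULT 0,   -- soft delete so removals also sync
            last_modified INTEGER NOT NULL              -- server-side unix timestamp
        );

        -- Server side: everything changed since the client's high-water mark.
        SELECT id, payload, is_deleted
        FROM items
        WHERE last_modified > :client_high_water_mark;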

    Read the article

  • Best practice - logging events (general) and changes (database)

    - by b0x0rz
    I need help with logging all activities on a site as well as database changes. Requirements: it should be stored in the database, and it should be easily searchable by initiator (user name / session id), event (activity type) and event parameters. I can think of a database design, but it either involves a lot of tables (one per event) so I can log each of the parameters of an event in a separate field, OR it involves one table with generic fields (7 numeric and 7 text) where everything is logged in one table, with an event type field determining which parameter got written where (and hoping that I don't need more than 7 fields of a certain type, or 8 or 9 or whatever number I choose). Example entries (the usual things):
        [username] login failed @datetime
        [username] login successful @datetime
        [username] changed password @datetime, estimated security of password [low/ok/high/perfect]
        [username] clicked result [result number] [result id] after searching for [search string] and got [number of results] @datetime
        [username] changed profile name from [old name] to [new name] @datetime
        [username] verified name with [credit card type] credit card @datetime
        database table [table name] purged of old entries @datetime via automated process
        etc...
    So, has anyone dealt with this before? Any best practices / links you can share? I've seen it done with the generic solution mentioned above, but somehow that goes against what I learned from database design. As you can see, the sheer number of events that need to be trackable (each user will be able to see this info) is giving me headaches, BUT I do LOVE the one-table-per-event solution more than the generic one. Any thoughts? Edit: also, is there maybe an authoritative list of such (likely) events somewhere? Stack Overflow says: the question you're asking appears subjective and is likely to be closed. My answer: it probably is subjective, but it is directly related to an issue I have with designing my database / writing my code, so I'd welcome any help. Also, I tried narrowing the ideas down to two, so hopefully one of these will prevail, unless there already is an established solution for these kinds of things.
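
    A hedged sketch of a third option between "one table per event" and "one table with seven generic columns" (MySQL syntax, made-up names): a narrow event table plus a key/value parameter table, searchable by user, event type and parameter value, without a hard cap on how many parameters an event may have.

        CREATE TABLE event_log (
            id          BIGINT NOT NULL AUTO_INCREMENT,
            occurred_at DATETIME    NOT NULL,
            username    VARCHAR(64),
            session_id  VARCHAR(64),
            event_type  VARCHAR(64) NOT NULL,
            PRIMARY KEY (id),
            INDEX idx_user (username),
            INDEX idx_type_time (event_type, occurred_at)
        );

        CREATE TABLE event_log_param (
            event_id BIGINT      NOT NULL,    -- references event_log.id
            name     VARCHAR(64) NOT NULL,
            value    VARCHAR(255),
            INDEX idx_event (event_id),
            INDEX idx_name_value (name, value)
        );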

    Read the article
