Search Results

Search found 5915 results on 237 pages for 'practices'.


  • Yii: Multi-language website - best practices.

    - by michal
    Hi, I find Yii a great framework, and the example website created with yiic shell is a good starting point... however, it doesn't cover the topic of multi-language websites, unfortunately. The docs cover translating short messages, but not storing multilingual content. I'm about to start working on a website which needs to be in at least two languages, and I'm wondering what the best way is to store the content for that. The problem is that the content is mixed extensively with common elements (like embedded video files), and I need to avoid duplicating those common parts. So far I have used an array of arrays containing texts (usually no more than 1-2 short paragraphs); the view file then just rendered the text from the array. Now I'd like to avoid keeping it in arrays (which requires some care with double quotation marks and is inconvenient in general). So, what is the best way to store those short paragraphs? Should I keep them in the DB as (id | msg_id | language | content) and then select them by msg_id & language? That still requires me to create msg_ids and embed them in the view file... Is there any recommended paradigm for this that Yii has a solution for? Thanks, m.

    Read the article

  • Git branching / rebasing good practices

    - by Pawel Krupinski
    I have the following scenario, with 3 branches:

    - Master
    - MyBranch, branched off Master for the purpose of developing a new feature of the system
    - MyBranchLocal, branched off MyBranch as my local copy of the branch

    MyBranch is being rebased against and pushed to by other developers (who are working on the same feature as I am). As the owner of MyBranch I want to keep it in sync with Master by rebasing. I also need to merge the changes I make on MyBranchLocal into MyBranch. What is a good way to do that? A couple of possible scenarios I have tried so far:

    I.   1. Commit change to MyBranchLocal  2. Rebase MyBranch against Master  3. Rebase MyBranchLocal against MyBranch  4. Merge MyBranch with MyBranchLocal
    II.  1. Commit change to MyBranchLocal  2. Merge MyBranch with MyBranchLocal  3. Rebase MyBranch against Master  4. Rebase MyBranchLocal against MyBranch
    III. 1. Commit change to MyBranchLocal  2. Rebase MyBranch against Master  3. Merge MyBranch with MyBranchLocal  4. Rebase MyBranchLocal against MyBranch

    I already know that scenario III seems to mess up the commit history a lot, potentially duplicating commits. What is your experience? Which scenarios do you recommend?

    Read the article

  • Best practices for defining and initializing variables in web.xml and then accessing them from Java

    - by DutrowLLC
    I would like to define and initialize some variables in web.xml and then access the values of these variables inside my Java application. The reason I want to do this is that I would like to be able to change the values of these variables without having to recompile the code. What is the best practice for doing this? Most of the variables are just strings, maybe some numbers as well. Does the class that accesses the variables have to be a servlet? Thanks! Chris

    Read the article

  • Best Practices for Sanitizing SQL inputs Using JavaScript?

    - by Greg Bulmash
    So, with HTML5 giving us local SQL databases on the client side, if you want to write a select or insert, you no longer have the ability to sanitize third party input by saying $buddski = mysql_real_escape_string($tuddski) because the PHP parser and MySQL bridge are far away. It's a whole new world of SQLite where you compose your queries and parse your results with JavaScript. But while you may not have your whole site's database go down, the user who gets his/her database corrupted or wiped due to a malicious injection attack is going to be rather upset. So, what's the best way, in pure JavaScript, to escape/sanitize your inputs so they will not wreak havoc with your user's built-in database? Scriptlets? specifications? Anyone?

    Read the article

  • Partials vs for loop — best practices

    - by Mike
    In coding up your view templates you can render a partial and pass it a collection of objects to be rendered once per object, or you can use a "for blank in @blank" loop. How do you decide when to do which? It seems that if you use a partial for every iterable object you will end up having to modify tons of separate files to make changes to what is potentially one view, whereas with the loop you can see everything right there in one file.
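
    For concreteness, a minimal sketch of the two options (ERB, with invented model and partial names):

      <%# Option 1: collection partial -- app/views/products/index.html.erb %>
      <%= render :partial => 'product', :collection => @products %>

      <%# app/views/products/_product.html.erb, rendered once per object; the local is named after the partial %>
      <li><%= h product.name %></li>

      <%# Option 2: inline loop, everything visible in one file %>
      <% @products.each do |product| %>
        <li><%= h product.name %></li>
      <% end %>

    A common rule of thumb is to reach for the collection partial once the per-item markup is reused from more than one view or grows past a few lines; otherwise the inline loop keeps everything readable in one place.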

    Read the article

  • H.264 / FLV best practices for HTML

    - by Steve Murch
    I run a website with about 700 videos (And no, it's not porn -- get your mind out of the gutter :-) ). The videos are currently in FLV format. We use the JWPlayer to render those videos. IIS6 hosted. Everything works just fine. As I understand it, H.264 (not FLV and likely not OGG) is the emerging preferred HTML5 video standard. Today, the iPad really only respects H.264 or YouTube. Presumably, soon many more important browsers will follow Apple's lead and respect only the HTML5 <video> tag. OK, so I think I can figure out how to convert my existing videos into the proper H.264 format. There are various tools available, including ffmpeg.exe. I haven't tried it yet, but I don't think that's going to be a problem after fiddling with the codec settings. My question is more about the container itself -- that is, planning a graceful transition for all users. What's the best-practice recommendation for rendering these videos? If I just use the HTML5 <video> tag, then presumably any browser that doesn't yet support HTML5 won't see the videos. And if I render them in Flash format via the JWPlayer or some other player, then they won't be playable on the iPad. Do I have to do ugly UserAgent detection here to figure out what to render? I know the JWPlayer supports H.264 media, but isn't the player itself a Flash component and therefore not playable on the iPad? Sorry if I'm not being clear, but I'm scratching my head over a graceful transition plan that will work for current browsers, the iPad and the upcoming HTML5 wave. I'm not a video expert, so any advice would be most welcome, thanks.

    Read the article

  • App.Config Best Practices?

    - by abmv
    Normally you have an application configuration file in your application, and your application is expected to read from it. Is it good to check at start-up whether this file exists, and raise an error and not proceed at all (the worst-case scenario)? Or should you leave it to the unhandled exception handler to deal with it and shut down the application? (WPF/WinForms etc.)

    Read the article

  • Best practices for dimensioning control panels in WPF

    - by vizcaynot
    Hello: I defined a Window in WPF; inside it I put a StackPanel, and inside that panel I put a TabControl and some Button controls. When the program is running, I would like the StackPanel and all the controls inside it to be resized automatically and proportionally to the window whenever I resize the window with the mouse. How can I achieve this? Thanks!!

    Read the article

  • Best practices: Sending email on behalf of users

    - by Ben Doom
    The company I work for provides testing services for the healthcare industry. As part of our services, we need to send email to our clients' employees. Typically, these are temp, part-time, or contract employees, and so have private email addresses (e.g. Hotmail, Gmail, Yahoo!, etc.). Up to now, we've been sending from an internal address, but this means that replies come back to us when employees aren't paying attention or don't know to send queries to our clients. I'd like to change this, so that the person who requested that the email be sent is the person who receives the replies. We've used Reply-To: in the past, but it seemed to cause additional mail to be trapped by spam filters. I've been reading about Sender: and on-behalf-of: headers, and was wondering what the current best practice is for a scenario where we need to send email such that replies go to a domain we don't control.
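
    For illustration only, here is how the From:/Reply-To:/Sender: combination discussed above could be wired up if the sending application happened to be Rails (ActionMailer; the class, method and addresses are all invented, and your stack may differ):

      class TestResultMailer < ActionMailer::Base
        # expects a matching view, e.g. app/views/test_result_mailer/result_notification.text.erb
        def result_notification(employee, requester)
          # Sender: stays on a domain we control, so SPF/reputation checks still line up
          headers['Sender'] = 'notifications@our-testing-domain.example'
          mail(:to       => employee.email,
               :from     => requester.email,      # the client contact who should receive replies
               :reply_to => requester.email,
               :subject  => 'Your assessment is ready')
        end
      end

    Whichever header combination you settle on, deliverability is the real constraint here, so it is worth test-sending to the major webmail providers (the Hotmail/Gmail/Yahoo! addresses mentioned above) before rolling the change out.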

    Read the article

  • ODP.NET Code Example Critique or best practices

    - by andrewWinn
    I currently have a DataAccess Layer in VB.NET. I am not too happy with my implementation of both my ExecuteQuery (as DataSet) and ExecuteNonQuery functions. Does anyone have any code that I could see? My code just doesn't look clean. Any thoughts or critiques on it would be appreciated also.

      Using odpConn As OracleConnection = New OracleConnection(_myConnString)
          odpConn.Open()
          If _beginTransaction Then
              txn = odpConn.BeginTransaction(IsolationLevel.Serializable)
          End If
          Try
              Using odpCmd As OracleCommand = odpConn.CreateCommand()
                  odpCmd.CommandType = CommandType.Text
                  odpCmd.CommandText = sSql
                  For i = 0 To parameters.Parameters.Count - 1
                      Dim prm As New OracleParameter
                      prm = DirectCast(parameters.Parameters(i), ICloneable).Clone
                      odpCmd.Parameters.Add(prm)
                  Next
                  If (odpConn.State = ConnectionState.Closed) Then
                      odpConn.Open()
                  End If
                  iToReturn = odpCmd.ExecuteNonQuery()
                  If _beginTransaction Then
                      txn.Commit()
                  End If
              End Using
          Catch
              txn.Rollback()
          End Try
      End Using

    Read the article

  • Best practices for Java logging from multiple threads?

    - by Jason S
    I want to have a diagnostic log that is produced by several tasks managing data. These tasks may be in multiple threads. Each task needs to write an element (possibly with subelements) to the log; get in and get out quickly. If this were a single-task situation I'd use XMLStreamWriter as it seems like the best match for simplicity/functionality without having to hold a ballooning XML document in memory. But it's not a single-task situation, and I'm not sure how to best make sure this is "threadsafe", where "threadsafe" in this application means that each log element should be written to the log correctly and serially (one after the other and not interleaved in any way). Any suggestions? I have a vague intuition that the way to go is to use a queue of log elements (with each one able to be produced quickly: my application is busy doing real work that's performance-sensitive), and have a separate thread which handles the log elements and sends them to a file so the logging doesn't interrupt the producers. The logging doesn't necessarily have to be XML, but I do want it to be structured and machine-readable. edit: I put "threadsafe" in quotes. Log4j seems to be the obvious choice (new to me but old to the community), why reinvent the wheel...

    Read the article

  • Deploying a Rails App to Multiple Servers using Capistrano - Best Practices

    - by Louise
    I have a Rails application that I need to deploy to 3 servers - machine1.com, machine2.com and machine3.com. I want to be able to deploy it to all machines at once and to each machine individually. Can someone help me out with a skeleton Capistrano config file / recipe? Should it all be in deploy.rb, or should I break it out into machine1.rb, etc.? I thought I was on the right track getting Capistrano to take in command-line arguments, but it choked when I tried to set the roles within the namespaces. I'd pass in 'hosts=1,2,3' as an argument and set the :app/:web/:db roles to "machine#{host}.com" after splitting on the comma and going into an each do |host| ... end block. Anyway, other than creating 4 different deploy.rb files and renaming them before running cap deploy each time, I'm stumped. I'd like to be able to do the following: cap deploy:machine1:latest_version_from_svn and cap deploy:all_machines:latest_version_from_svn. I just don't know if it should all be in deploy.rb, split up with namespaces, or if it should be broken into multiple deploy*.rb files.
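
    One pattern from that era is the capistrano-ext "multistage" extension, which splits things exactly along the lines asked about: shared settings stay in deploy.rb and each deployment target gets its own small file under config/deploy/. A rough sketch (Capistrano 2-style; the application and repository values are placeholders):

      # config/deploy.rb -- shared settings
      set :stages,        %w(machine1 machine2 machine3 all_machines)
      set :default_stage, 'machine1'
      require 'capistrano/ext/multistage'

      set :application, 'myapp'
      set :repository,  'svn://svn.example.com/myapp/trunk'

      # config/deploy/machine1.rb -- one small file per target
      role :app, 'machine1.com'
      role :web, 'machine1.com'
      role :db,  'machine1.com', :primary => true

      # config/deploy/all_machines.rb -- every box at once
      %w(machine1.com machine2.com machine3.com).each do |host|
        role :app, host
        role :web, host
      end
      role :db, 'machine1.com', :primary => true

    Deploys then become "cap machine1 deploy" for a single box or "cap all_machines deploy" for everything, with no command-line argument parsing and no renaming of deploy.rb files.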

    Read the article

  • Best Practices / Patterns for Enterprise Protection/Remediation of SSNs (Social Security Numbers)

    - by Erik Neu
    I am interested in hearing about enterprise solutions for SSN handling. (I looked pretty hard for any pre-existing post on SO, including reviewing the terrific SO automated "Related Questions" list, and did not find anything, so hopefully this is not a repeat.) First, I think it is important to enumerate the reasons systems/databases use SSNs (note: these are reasons for the de facto current state; I understand that many of them are not good reasons):

    1. Required for interaction with external entities. This is the most valid case, where external entities your system interfaces with require an SSN. These would typically be government, tax and financial entities.
    2. SSN is used to ensure system-wide uniqueness.
    3. SSN has become the default foreign key used internally within the enterprise, to perform cross-system joins.
    4. SSN is used for user authentication (e.g., log-on).

    The enterprise solution that seems optimal to me is to create a single SSN repository that is accessed by all applications needing to look up SSN info. This repository substitutes a globally unique, random 9-digit number (ASN) for the true SSN. I see many benefits to this approach. First of all, it is obviously highly backwards-compatible: all your systems "just" have to go through a major, synchronized, one-time data-cleansing exercise, where they replace the real SSN with the alternate ASN. Also, it is centralized, so it minimizes the scope for inspection and compliance. (Obviously, as a negative, it also creates a single point of failure.) This approach would solve issues 2 and 3 without ever requiring lookups to get the real SSN. For issue 1, authorized systems could provide an ASN and be returned the real SSN. This would of course be done over secure connections, and the requesting systems would never persist the full SSN. Also, if the requesting system only needs the last 4 digits of the SSN, then that is all that would ever be passed. Issue 4 could be handled the same way as issue 1, though obviously the best thing would be to move away from having users supply an SSN for log-on. There are a couple of papers on this: UC Berkeley: http://bit.ly/bdZPjQ and Oracle Vault: bit.ly/cikbi1
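
    As a concrete illustration of the repository idea (not the poster's implementation; a minimal in-memory Ruby sketch with invented names, where a real system would back this with an encrypted, access-audited store):

      require 'securerandom'

      # In-memory stand-in for the central repository: real use would be an encrypted,
      # audited table along the lines of (asn | ssn_encrypted).
      class SsnVault
        def initialize
          @asn_by_ssn = {}   # ssn => asn
          @ssn_by_asn = {}   # asn => ssn
        end

        # One-time data-cleansing pass: every system swaps its stored SSN for the ASN.
        def asn_for(ssn)
          @asn_by_ssn[ssn] ||= begin
            asn = loop do
              candidate = format('%09d', SecureRandom.random_number(1_000_000_000))
              break candidate unless @ssn_by_asn.key?(candidate)
            end
            @ssn_by_asn[asn] = ssn
            asn
          end
        end

        # Only systems with a genuine external-entity need (issue 1 above) get this far,
        # over a secure channel, and they never persist what comes back.
        def ssn_for(asn, last4_only: false)
          ssn = @ssn_by_asn.fetch(asn)
          last4_only ? ssn[-4, 4] : ssn
        end
      end

      vault = SsnVault.new
      asn   = vault.asn_for('123-45-6789')   # what every internal system stores and joins on
      vault.ssn_for(asn, last4_only: true)   # => "6789"

    Keeping the two lookup directions as separate entry points is what makes the compliance story manageable: ASNs can flow freely for joins and uniqueness, while the reverse lookup stays a single, auditable choke point.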

    Read the article

  • export and import utf8 data in mysql: best practices

    - by ChrisRamakers
    We're often faced with the need to send a data file to one of our clients containing data from the database that he/she needs to translate. Most of the time this export is CSV or XLS. Usually we create a CSV dump with phpMyAdmin and get an XLS file in return with the translated data. The problem is that the data is usually UTF-8, and when the file comes back as XLS, each and every time we load the data into MySQL again we end up with UTF-8 problems: characters not being displayed properly, etc. We've already double-checked everything in MySQL, from my.cnf to column character sets, and everything is set correctly to UTF-8. My question is not how to fix the encoding issue, since that's been solved, but how we would best proceed in the future when handling this situation. What export format should we hand over? How should we import (just MySQL LOAD DATA INFILE, or our own processing scripts)? What is the general consensus on how to handle this situation? We would like to continue using Excel if possible, since that's the format almost everybody expects, including our clients' translation agencies. Our clients' ease of use is the most important factor here, without overloading us with major issues each time. The best of both worlds :)
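
    If you do stay with CSV for the Excel round trip, the usual trick is to be explicit about the encoding at both ends. A small illustrative Ruby sketch (not from the question; the file layout and method names are assumed, and rows is an array of arrays):

      require 'csv'

      # Export: UTF-8 CSV with a leading BOM so Excel detects the encoding correctly.
      def export_for_translation(rows, path)
        File.open(path, 'w:UTF-8') do |f|
          f.write("\uFEFF")                      # UTF-8 byte-order mark
          rows.each { |row| f.write(row.to_csv) }
        end
      end

      # Import: read the returned file back as UTF-8, stripping any BOM the client kept.
      def import_translations(path)
        CSV.read(path, :encoding => 'bom|utf-8')   # or 'windows-1252:utf-8' if the client saved an ANSI CSV
      end

    The BOM is what stops Excel from guessing a legacy encoding when the client opens the file, and converting whatever comes back to clean UTF-8 before it goes anywhere near LOAD DATA INFILE keeps MySQL out of the guessing game.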

    Read the article

  • Any best practices with feedback colours?

    - by alex
    I have a few that I think are correct. These are background colours for messages: ERROR: red; INFO: blue; SUCCESS: green; NOT IMPORTANT INFO: yellow. Have I got the blue and yellow the wrong way around? Are there any hex values that are a de facto standard for these? I'm asking in the context of web development, but I think the answers will be platform-agnostic. Here is an interesting thought (I'm sure I've read about it in an article): what colours would the errors be on Target's website, considering all their branding is red?

    Read the article

  • Cache Auth Tokens (or Caching HTTP headers in General) - Best Practices

    - by viatropos
    I'm using the Ruby GData Library to access Google Docs, and I recently got a GData::Client::CaptchaError because I was re-logging in with every request. Reading this post, the recommendation is not to log in with every request but to cache the authentication token. How do I go about doing that correctly? Google says the token expires every 24 hours, and it doesn't seem like I should store it in the session, so what should I do? I'm using Ruby on Rails with all this. Thanks so much.
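
    One common approach (a sketch only; the cache key, credential constants and expiry window are assumptions) is to keep the token in the Rails low-level cache rather than the session, refreshing it just inside Google's 24-hour window:

      def google_auth_token
        Rails.cache.fetch('gdata_docs_auth_token', :expires_in => 23.hours) do
          client = GData::Client::DocList.new
          client.clientlogin(GOOGLE_DOCS_USER, GOOGLE_DOCS_PASSWORD)
          client.auth_handler.token   # accessor name per the gdata gem docs of the time; verify for your version
        end
      end

    On each request you then reuse that token with a fresh client object rather than calling clientlogin again; the exact wiring depends on the gdata gem version, but the point is that the expensive (and CAPTCHA-prone) login only happens when the cached entry has expired.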

    Read the article

  • ASP.NET MVC: MetaTags; setting methodology, best practices

    - by MVCDummy09
    When I created a default MVC application in VS2K10, the master view (Site.Master) had a ContentPlaceHolder for the <title> tag. Is there a better way to set metatags like title and description than using a ContentPlaceHolder in the master and setting that ContentPlaceHolder's value in each view? How do you configure your views' metatags in a large-scale site with dozens and dozens of pages?

    Read the article

  • ASP.NET MVC static-asset aids/practices

    - by shannon
    I want to keep assets that are only used by one view in a view-specific folder, so my Search.aspx properly finds images/*.jpg and helps me maintain my convention:

    ~/Areas/Candidate/Views/Job/Search.aspx -> ~/Assets/Candidate/Job/Search/images/*.jpg

    Perhaps with the ability to easily reference controller- or area-common assets manually or automatically:

    ~/Assets/Candidate/Job/images/*.jpg
    ~/Assets/Candidate/images/*.jpg

    If you wonder why I'm doing this, then speak up; I'm probably missing something. But here's why: I don't want stale static assets sitting in my ASP.NET MVC projects, which I expect to be an automatic outcome of the ~/Assets/Images folder: i.e. as a shared asset loses its last reference count, who knows to delete it, especially with it being so difficult to trace content-link validity in MVC projects? How do you, personally, do this? I can imagine, for example:

    - implementing HtmlHelper extension methods for URL generation;
    - extending ViewPage and ViewMasterPage with URL-generation methods;
    - implementing an inbound request filter that searches related folders for static assets.

    And are there good libraries out there for this? For example, something that also automatically appends timestamps to .JS and .CSS files, writes the <script>/<link> tags for me, and maybe even lets me inject includes into the head section from outside head code?

    Read the article

  • Good practices for initialising properties?

    - by Rubans
    Hi, I have a class property that is a list of strings, List<string>. Sometimes this property is null, and sometimes it has been set but the list is empty, so Count is 0. Elsewhere in my code I need to check whether this property has been set, so currently my code checks both for null and for a count of 0, which seems messy:

      if (objectA.folders != null)
      {
          if (objectA.folders.Count == 0)
          {
              // do something
          }
      }

    Any recommendation on how this should be handled? Maybe I should always initialise the property so that it's never null? Apologies if this is a silly question.

    Read the article

  • Best practices for using memcached in Rails?

    - by Matt
    Hello everybody, as database transactions in our app are getting more and more time-consuming, we have started to use memcached to reduce the number of queries passed to MySQL. All in all, it works fine and really saves a lot of time. But as caching "silently appeared" as a workaround to give the app more juice, a lot of our models now contain code like this:

      def self.all_cached
        Rails.cache.fetch('object_name') { find(:all, :include => [associations]) }
      end

    This is getting to be more and more of a pain, as filling and flushing the cache happens in several classes across the application. Now, I was wondering if there is a better way to abstract the memcached logic to make it more powerful and easy to use across all the models that need it? I was thinking about having some kind of memcached module which is included in all the models that need it. But before playing around, I thought: let's ask the experts first :-) Thanks Matt
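
    One way to pull that logic out of the individual models (a sketch only; the module, cache key and class names are invented, and the key deliberately ignores find options to keep it short):

      # lib/cacheable.rb
      module Cacheable
        def self.included(base)
          base.extend ClassMethods
          # expire the cached list whenever a record of this class changes
          base.after_save    { |record| record.class.expire_all_cached }
          base.after_destroy { |record| record.class.expire_all_cached }
        end

        module ClassMethods
          def all_cached
            Rails.cache.fetch(all_cache_key) { find(:all) }
          end

          def expire_all_cached
            Rails.cache.delete(all_cache_key)
          end

          def all_cache_key
            "#{name.underscore}/all"
          end
        end
      end

      class Article < ActiveRecord::Base
        include Cacheable
      end

      # Article.all_cached hits memcached; Article.create!(...) expires the entry.

    The fetch/delete pair keeps the fill-and-flush logic in one place, so individual models only declare that they are cacheable; anything fancier (per-query keys, :include options, versioned keys) can then grow inside the module without touching the models again.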

    Read the article

  • Commenting practices?

    - by Tarmon
    Hey Everyone, As a student in computer engineering I have been pressured to type up very detailed comments for everything I do. I can see this being very useful for group projects or in the workplace, but when you work on your own projects do you spend as much time commenting? As a personal project I am working on grows more and more complicated, I sometimes feel as though I should be commenting more, but I also feel as though it's a waste of time since I will probably be the only one working on it. Is it worth the time and the cluttered code? Thoughts?

    Read the article

  • Rails - asynchronous tasks, forked processes, best practices

    - by LisaPatton
    I'm using an Observer on my classes. When one of the records is created or updated I need to notify another service (via a URL call). What is the best way to do this without slowing down my class? Would using a gem like delayed_job be overkill? In my Observer's after_update() / after_create() I just want to launch a thread that calls the URL...
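
    For comparison, a sketch of the delayed_job route mentioned above (the job class, observer class and ping URL are all invented; the observer stays fast because it only enqueues):

      require 'net/http'

      # A tiny job object: delayed_job calls #perform later in a worker process.
      class NotifyRemoteServiceJob < Struct.new(:record_class, :record_id)
        def perform
          Net::HTTP.get(URI.parse("http://other-service.example.com/ping?source=#{record_class}&id=#{record_id}"))
        end
      end

      # Remember to register the observer, e.g. config.active_record.observers = :widget_observer
      class WidgetObserver < ActiveRecord::Observer
        def after_create(record)
          enqueue(record)
        end

        def after_update(record)
          enqueue(record)
        end

        private

        def enqueue(record)
          Delayed::Job.enqueue(NotifyRemoteServiceJob.new(record.class.name, record.id))
        end
      end

    A bare Thread.new { Net::HTTP.get(...) } inside after_update is the lighter-weight option and also keeps the request cycle fast, but the notification is lost if the process dies or is restarted before the thread finishes; that durability is the main argument that delayed_job is not overkill here.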

    Read the article
