Search Results

Search found 40339 results on 1614 pages for 'best settings'.

Page 222/1614 | < Previous Page | 218 219 220 221 222 223 224 225 226 227 228 229  | Next Page >

  • Best way to track the stages of a form across different controllers - $_GET or routing

    - by chrisj
    Hi, I am in a bit of a dilemma about how best to handle the following situation. I have a long registration process on a site, where there are around 10 form sections to fill in. Some of these forms relate specifically to the user and their own personal data, while most of them relate to the user's pets - my current setup handles user-specific forms in a User_Controller (e.g. via methods like user/profile, user/household etc), and similarly the pet-related forms are handled in a Pet_Controller (e.g. pet/health). Whether or not all of these methods should be combined into a single Registration_Controller, I'm not sure - I'm open to any advice on that. Anyway, my main issue is that I want to generate a progress bar which shows how far along in the registration process each user is. As the URLs in each form section can potentially map to different controllers, I'm trying to find a clean way to extract which stage a person is at in the overall process. I could just use the query string to pass a stage parameter with each request, e.g. user/profile?stage=1. Another way to do it potentially is to use routing - e.g. the URLs for each section of the form could be set up to be registration/stage/1, registration/stage/2 - then I could just map these URLs to the appropriate controller/method behind the scenes. If this makes any sense at all, does anyone have any advice for me?

    Read the article

  • Ruby and jQuery -- $(document).ajaxSend() not modifying the params as expected

    - by Jason
    I cannot get jQuery's ajaxSend (http://api.jquery.com/ajaxSend/) to properly modify the parameters. I have:

        $(document).ajaxSend(function(event, request, settings) {
          settings.data = $.deparam(settings.data);
          settings.data['random'] = new Date().getTime();
          settings.data['_method'] = 'get';
          settings.data = $.param(settings.data);
          $.log(settings);
        });

        $(document).ready(function() {
          //...snip...
          $.ajaxSetup({
            data : { remote : 1, authenticity_token : encodeURIComponent(AUTH_TOKEN) }
          });
        });

    The idea here is that we always want four params sent across: remote and auth_token always get set properly. However, random and _method (both needed for IE issues) do not get set. Logging settings inside ajaxSend shows me that settings.data is set to "remote=1&authenticity_token=6GA9R_snip_253D&random=1270584905846&_method=get", but when it gets sent across the wire, I only have the following: authenticity_token = 6GA9R_snip_253D and remote = 1. Why in the world is this not working?

    Read the article

  • Best Method For High Data Availability for SQL Server

    - by omatase
    I have a web service that runs 24/7. Periodically it needs to refresh its database with data from another web service. There is a lot of data. It's tens of thousands of rows. (no, I don't mean this is a lot of data for SQL Server, just trying to point out that I expect it to take some time to come down the pipe from the other web service) The data refresh can take between 5 and 10 minutes. The actual data update portion of that is between 1 and 2 minutes. This means the service would be down for all intents and purposes when consumers would be requesting this type of data. I would like to implement a system where data is always available. The only thing that comes to mind is some type of system where I maintain two separate databases. I populate the inactive one, swapping it to active before populating the other one. I'm not sure I know the best way to do this. My current ideas just revolve around two sets of the schema in a single database (using views to access the active set) or two databases each with the same schema. The application would rotate between the two databases. Any suggestions from someone who has done something like this before?
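
    A hedged sketch of the "populate the inactive copy, then flip" approach the poster describes is below; the control table, schema names and method names are illustrative, not from the original post:

        using System;
        using System.Data.SqlClient;

        // Sketch of the "two copies, flip a pointer" refresh. Assumes a one-row control
        // table dbo.ActiveCopy(CopyName) and two identical schemas, CopyA and CopyB;
        // readers resolve the active schema (directly or via views) from that table.
        public class CatalogRefresher
        {
            private readonly string _connectionString;

            public CatalogRefresher(string connectionString)
            {
                _connectionString = connectionString;
            }

            public void Refresh(Action<string> loadDataIntoSchema)
            {
                using (var conn = new SqlConnection(_connectionString))
                {
                    conn.Open();

                    // 1. Find the copy that is currently inactive.
                    string active;
                    using (var cmd = new SqlCommand("SELECT TOP 1 CopyName FROM dbo.ActiveCopy", conn))
                        active = (string)cmd.ExecuteScalar();
                    string inactive = active == "CopyA" ? "CopyB" : "CopyA";

                    // 2. Load the fresh data into the inactive schema (the slow part);
                    //    readers keep hitting the active copy the whole time.
                    loadDataIntoSchema(inactive);

                    // 3. Flip the pointer in one short transaction.
                    using (var tx = conn.BeginTransaction())
                    using (var cmd = new SqlCommand("UPDATE dbo.ActiveCopy SET CopyName = @copy", conn, tx))
                    {
                        cmd.Parameters.AddWithValue("@copy", inactive);
                        cmd.ExecuteNonQuery();
                        tx.Commit();
                    }
                }
            }
        }

    Readers then either query through views that select from the active schema, or the data layer reads the pointer before building its queries; either way the expensive load never blocks them.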

    Read the article

  • How do I implement a "sliding out of / into" effect on a settings menu similar to that in Angry Birds?

    - by VictorB
    I'm trying to implement a settings menu component similar to that in Angry Birds - a button control that makes an options menu slide out of it and back into it when clicked on. I use scene2d.ui to build the UI components: a Button in a Table to implement the button control, a Table to implement the options menu, and a Stack to lay these out one on top of the other. At the moment I have the following behavior: when the user hits the button control for the first time, the alpha of the table component is set to 1; when the user hits it a second time, the alpha is set to 0; and so on. Any ideas how I can get the slide-out/slide-in effect on user clicks with libgdx, similar to what Angry Birds provides? Maybe using the TweenEngine, actions, interpolations, or combinations of these? Thanks in advance.

    Read the article

  • When are predicates appropriate and what is the best pattern for usage

    - by Maxim Gershkovich
    When are predicates appropriate and what is the best pattern for usage? What are the advantages of predicates? It seems to me that in most cases where a predicate can be employed, a tight loop would accomplish the same functionality. I don’t see a reusability argument, given you will probably only implement a predicate in one method, right? They look and feel nice, but besides that it seems you would only employ them when you need a quick hack on the collection classes. UPDATE: But why would you be rewriting the tight loop again and again? In my mind/code, when it comes to collections I always end up with something like:

        Class Person
        End Class

        Class PersonList
            Inherits List(Of Person)
            Function FindByName(Name) As Person
                ' tight loop....
            End Function
        End Class

    @Ani: By that same logic I could implement the methods as:

        Class PersonList
            Inherits List(Of Person)
            Function FindByName(Name) As PersonList
            End Function
            Function FindByAge(Age) As PersonList
            End Function
            Function FindBySocialSecurityNumber(SocialSecurityNumber) As PersonList
            End Function
        End Class

    and call it as:

        Dim res As PersonList = MyList.FindByName("Max").FindByAge(25).FindBySocialSecurityNumber(1234)

    and the result, along with the amount of code and its reusability, is largely the same, no? I am not arguing, just trying to understand.
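
    For illustration only, a small C# sketch (the post is VB.NET, but the idea carries over) of the reusability argument: one generic filter method plus ad-hoc predicates can stand in for the whole FindByName/FindByAge/FindBySocialSecurityNumber family. The Person members come from the post; everything else is illustrative:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Illustrative only: one reusable filter method plus ad-hoc predicates,
        // instead of a FindByX method per column.
        public class Person
        {
            public string Name { get; set; }
            public int Age { get; set; }
            public string SocialSecurityNumber { get; set; }
        }

        public class PersonList : List<Person>
        {
            // The single "tight loop"; callers supply the condition.
            public PersonList Filter(Func<Person, bool> predicate)
            {
                var result = new PersonList();
                result.AddRange(this.Where(predicate));
                return result;
            }
        }

        public static class Demo
        {
            public static void Main()
            {
                var people = new PersonList
                {
                    new Person { Name = "Max", Age = 25, SocialSecurityNumber = "1234" }
                };

                // Equivalent of FindByName("Max").FindByAge(25).FindBySocialSecurityNumber("1234"),
                // without writing or maintaining three separate methods.
                PersonList res = people.Filter(p =>
                    p.Name == "Max" && p.Age == 25 && p.SocialSecurityNumber == "1234");

                Console.WriteLine(res.Count);
            }
        }

    The difference only shows as the number of columns and combinations grows: each new criterion is a one-line lambda at the call site rather than another method on the collection class.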

    Read the article

  • What is the best way to store static data in C# that will never change

    - by Luke101
    I have a class in an ASP.NET C# application that stores data that never changes. I really don't want to put this data in the database - I would like it to stay in the application. Here is my way to store data in the application:

        public class PostVoteTypeFunctions
        {
            private List<PostVoteType> postVotes = new List<PostVoteType>();

            public PostVoteTypeFunctions()
            {
                PostVoteType upvote = new PostVoteType();
                upvote.ID = 0;
                upvote.Name = "UpVote";
                upvote.PointValue = PostVotePointValue.UpVote;
                postVotes.Add(upvote);

                PostVoteType downvote = new PostVoteType();
                downvote.ID = 1;
                downvote.Name = "DownVote";
                downvote.PointValue = PostVotePointValue.DownVote;
                postVotes.Add(downvote);

                PostVoteType selectanswer = new PostVoteType();
                selectanswer.ID = 2;
                selectanswer.Name = "SelectAnswer";
                selectanswer.PointValue = PostVotePointValue.SelectAnswer;
                postVotes.Add(selectanswer);

                PostVoteType favorite = new PostVoteType();
                favorite.ID = 3;
                favorite.Name = "Favorite";
                favorite.PointValue = PostVotePointValue.Favorite;
                postVotes.Add(favorite);

                PostVoteType offensive = new PostVoteType();
                offensive.ID = 4;
                offensive.Name = "Offensive";
                offensive.PointValue = PostVotePointValue.Offensive;
                postVotes.Add(offensive);

                PostVoteType spam = new PostVoteType();
                spam.ID = 0;
                spam.Name = "Spam";
                spam.PointValue = PostVotePointValue.Spam;
                postVotes.Add(spam);
            }
        }

    When the constructor is called, the code above is run. I have some functions that can query the data above too. But is this the best way to store information in ASP.NET? If not, what would you recommend?
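
    For comparison, a hedged sketch of the same lookup data held as static, application-wide state, built once rather than once per PostVoteTypeFunctions instance. It assumes .NET 4.5+ for IReadOnlyList, assumes PostVoteType exposes settable ID/Name/PointValue members (as the original constructor implies), and gives Spam the ID 5 on the assumption that the 0 in the original is a copy-paste slip:

        using System.Collections.Generic;

        // Sketch: the same lookup data as application-wide static state.
        // PostVoteType and PostVotePointValue are the poster's types; Spam is given
        // ID 5 on the assumption that the duplicate 0 in the original is a typo.
        public static class PostVoteTypes
        {
            public static readonly IReadOnlyList<PostVoteType> All = new List<PostVoteType>
            {
                new PostVoteType { ID = 0, Name = "UpVote",       PointValue = PostVotePointValue.UpVote },
                new PostVoteType { ID = 1, Name = "DownVote",     PointValue = PostVotePointValue.DownVote },
                new PostVoteType { ID = 2, Name = "SelectAnswer", PointValue = PostVotePointValue.SelectAnswer },
                new PostVoteType { ID = 3, Name = "Favorite",     PointValue = PostVotePointValue.Favorite },
                new PostVoteType { ID = 4, Name = "Offensive",    PointValue = PostVotePointValue.Offensive },
                new PostVoteType { ID = 5, Name = "Spam",         PointValue = PostVotePointValue.Spam }
            };
        }

    Since the list never changes, a static readonly field (or an enum plus a small lookup) avoids rebuilding it on every construction and makes the read-only intent explicit.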

    Read the article

  • Which method of 'clearfix' is best?

    - by Pickledegg
    I have the age-old problem of a div wrapping a two-column layout. My sidebar is floated, so my container div fails to wrap the content & sidebar:

        <div id="container">
          <div id="content"> </div>
          <div id="sidebar"> </div>
        </div>

    There seem to be numerous methods of fixing the clear bug in FF: <br clear="all"/>, overflow:auto, overflow:hidden, etc. But in my situation, the only one that seems to work correctly is the <br clear="all"/> solution, which is a little bit scruffy. overflow:auto gives me nasty scrollbars, and overflow:hidden must surely have side effects. Also, IE7 apparently is not supposed to suffer from this problem due to its incorrect behaviour, but again, in my situation it's suffering the same as FF. What's the most reliable/best-practice method currently available to us?

    Read the article

  • Internal Java code best practice for dealing with invalid REST API parameters

    - by user326389
    My colleague wrote the following Stack Overflow question: other stack overflow question on this topic. The question seems to have been misinterpreted and I want to find out the answer, so I'm starting this new question... hopefully a little clearer. Basically, we have a REST API. Users of our API call our methods with parameters. But sometimes users call them with the wrong parameters!! Maybe a mistake in their code, maybe they're just trying to play with us, maybe they're trying to see how we respond, who knows! We respond with HTTP status error codes and maybe a detailed description of the invalid parameter in the XML response. All is well. But internally we deal with these invalid parameters by throwing exceptions. For example, if someone looks up a Person object by giving us their profile id, but the profile id doesn't exist... we throw a PersonInvalidException when looking them up. Then we catch this exception in our API controller and send back an HTTP 400 status error code. Our question is... is this the best practice, throwing exceptions internally for this kind of user error? These exceptions never get propagated back to the user; this is a REST API. They only make our code cleaner. Otherwise we could have a validation method in each of our API controllers to make sure the parameters all make sense, but that seems inefficient. We have to look up things in our database potentially twice. Or we could return nulls and check for them, but that sucks... What are your thoughts?

    Read the article

  • Best way to sign data in web form with user certificate

    - by salgiza
    We have a C# web app where users will connect using a digital certificate stored in their browsers. From the examples that we have seen, verifying their identity will be easy once we enable SSL, as we can access the fields in the certificate, using Request.ClientCertificate, to check the user's name. We have also been asked, however, to sign the data sent by the user (a few simple fields and a binary file) so that we can prove, without doubt, which user entered each record in our database. Our first thought was creating a small text signature including the fields (and, if possible, the MD5 of the file) and encrypting it with the private key of the certificate, but... as far as I know we can't access the private key of the certificate to sign the data, and I don't know if there is any way to sign the fields in the browser, or whether we have no option other than using a Java applet. And if it's the latter, how would we do it? (Is there any open-source applet we can use? Would it be better to create one ourselves?) Of course, it would be better if there were some way to "sign" the fields received on the server, using the data that we can access from the user's certificate. But if not, any info on the best way to solve the problem would be appreciated.
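
    As a server-side fallback only (it is not a true client-side signature, which would need code running in the browser with access to the private key), here is a hedged sketch of recording a hash of the received fields and file together with the authenticated certificate's details; the method and table names are illustrative:

        using System;
        using System.Security.Cryptography;
        using System.Text;
        using System.Web;

        // Server-side sketch only: this proves what the server received and which
        // client certificate authenticated the request; it is NOT a signature made
        // with the user's private key.
        public static class SubmissionAudit
        {
            public static void Record(HttpRequest request, string fields, byte[] uploadedFile)
            {
                HttpClientCertificate cert = request.ClientCertificate;
                if (!cert.IsPresent || !cert.IsValid)
                    throw new InvalidOperationException("No valid client certificate.");

                using (var sha = SHA256.Create())
                {
                    // Hash the form fields and the uploaded file together.
                    byte[] fieldBytes = Encoding.UTF8.GetBytes(fields);
                    byte[] payload = new byte[fieldBytes.Length + uploadedFile.Length];
                    Buffer.BlockCopy(fieldBytes, 0, payload, 0, fieldBytes.Length);
                    Buffer.BlockCopy(uploadedFile, 0, payload, fieldBytes.Length, uploadedFile.Length);
                    string digest = Convert.ToBase64String(sha.ComputeHash(payload));

                    // Store the digest plus certificate subject/serial with the record.
                    // SaveAuditRow is a placeholder for the application's data access.
                    SaveAuditRow(cert.Subject, cert.SerialNumber, digest, DateTime.UtcNow);
                }
            }

            static void SaveAuditRow(string subject, string serial, string digest, DateTime when)
            {
                /* persistence omitted in this sketch */
            }
        }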

    Read the article

  • Best database design for city, ZIP & state tables

    - by ryan a
    My application will need to reference addresses. Street info will be stored with my main objects, but the rest needs to be stored separately to reduce redundancy. How should I store/retrieve ZIPs, cities and states? Here are some of my ideas.

    Single-table solution (can't do relationships):

        [locations] locationID, locationParent (FK for locationID - 0 for state entries), locationName (city, state), locationZIP

    Two tables (with relationships, FK constraints, ref integrity):

        [state] stateID, stateName
        [city]  cityID, stateID (FK for state.stateID), cityName, zipCode

    Three tables:

        [state] stateID, stateName
        [city]  cityID, stateID (FK for state.stateID), cityName
        [zip]   zipID, cityID (FK for city.cityID), zipName

    Then I read into ZIP codes and how they are assigned. They aren't specifically related to cities. Some cities have more than one ZIP (OK, will still work), but some ZIPs are in more than one city (oh snap) and some other ZIPs (very few) are in more than one state! Also, some ZIPs are not even in the same state as the address they belong to at all. It seems ZIPs are made for carrier route identification, and some remote places are best served by post offices in neighboring cities or states. Does anybody know of a good (not perfect) solution that takes this into consideration to minimize discrepancies as the database grows?

    Read the article

  • Best practice to hide/encrypt email adress in webpage

    - by Sebi
    I couldn't find a similar question, so here it is: What's the best way to hide or encrypt an email link in a website, so that a crawler can't read it but the user can still click it? I don't want to confuse the users by typing the email like this: john (at) mail.com or in similar ways (and I think this kind of link can be read by crawlers anyway?). I also tried things like this: <script>// <![CDATA[eval(unescape('%76%61%72%20%73%3D%27%61%6D%6C%69%6F%74%72%3A%62%61%40%65%64%61%6E%6F%6C%2E%69%27%3B%76%61%72%20%72%3D%27%27%3B%66%6F%72%28%76%61%72%20%69%3D%30%3B%69%3C%73%2E%6C%65%6E%67%74%68%3B%69%2B%2B%2C%69%2B%2B%29%7B%72%3D%72%2B%73%2E%73%75%62%73%74%72%69%6E%67%28%69%2B%31%2C%69%2B%32%29%2B%73%2E%73%75%62%73%74%72%69%6E%67%28%69%2C%69%2B%31%29%7D%64%6F%63%75%6D%65%6E%74%2E%77%72%69%74%65%28%27%3C%61%20%68%72%65%66%3D%22%27%2B%72%2B%27%22%3E%4F%62%65%72%70%61%72%6C%65%69%74%65%72%3C%2F%61%3E%27%29%3B'))]]></script> but I heard this can also be read by crawlers, and it isn't really good practice. Are there any common approaches?

    Read the article

  • Changed Plesk root name - what DNS settings get modified?

    - by NRGdallas
    We recently changed our Plesk server's main URL from siteold.com to sitenew.com. Many websites had their NS set to ns1.siteold.com - does Plesk automatically update that to ns1.sitenew.com? Should I change the GoDaddy settings? Attempting to change them gives "Nameserver Not Registered" - is this simply the delay required? Lastly, when adding a new domain to Plesk, would one simply need to adjust the nameserver for that site in GoDaddy to ns1.sitenew.com or ns1.newdomain.com? (Does Plesk have a centralized name server, or does each site acquire its own?)

    Read the article

  • Where is the best place to count the lazy-load property using JPA

    - by Ke
    Let's say we have a "Question" and an "Answer" entity:

        @Entity
        public class Question extends IdEntity {
            @Lob
            private String content;

            @Transient
            private int answerTotal;

            @OneToMany(fetch = FetchType.LAZY)
            private List<Answer> answers = new ArrayList<Answer>();
            ......

    I need to tell how many answers a question has every time Question is queried, so I need to do a count:

        String count = "select count(o) from Answer o WHERE o.question=:q";

    My question is: where is the best place to do the count? (I run a lot of queries on the Question entity - by date, by tag, by category, by asker, etc. - so it is obviously not a good solution to add a count operation to each query.) My first attempt was to implement a @PostLoad listener, so that every time a Question entity is loaded I do the count. However, an EntityManager cannot be injected into a listener, so that way does not work. Any hint?

    Read the article

  • Best solution for __autoload

    - by tpk
    As our PHP5 OO application grew (in both size and traffic), we decided to revisit the __autoload() strategy. We always name the file by the class definition it contains, so class Customer would be contained within Customer.php. We used to list the directories in which a file can potentially exist, until the right .php file was found. This is quite inefficient, because you're potentially going through a number of directories which you don't need to, and doing so on every request (thus making loads of stat() calls). Solutions that come to mind:
    - Use a naming convention that dictates the directory name (similar to PEAR). Disadvantages: doesn't scale too well, resulting in horrible class names.
    - Come up with some kind of pre-built array of the locations (Propel does this for its __autoload). Disadvantage: requires a rebuild before any deploy of new code.
    - Build the array "on the fly" and cache it. This seems to be the best solution, as it allows for any class names and directory structure you want, and is fully flexible in that new files just get added to the list. The concerns are where to store it and what to do about deleted/moved files. For storage we chose APC, as it doesn't have the disk I/O overhead. With regard to file deletes, it doesn't matter, as you probably don't want to require them anywhere anyway. As to moves... that's unresolved (we ignore it, as historically it didn't happen very often for us).
    Any other solutions?

    Read the article

  • Best ways to construct Dynamic Search Conditions for Sql

    - by CoolBeans
    I have always wondered what's the best way to achieve this task. In most web-based applications you have to provide search options on many different criteria, and based on which criteria are chosen you modify your SQL behind the scenes. Generally, this is how I tend to go about it: have a base SQL template, and in the base template have conditions like WHERE [#PRE_COND1] AND [#PRE_COND2], and so on and so forth. So an example SQL template might look something like:

        SELECT NAME, AGE FROM PERSONS [,#TABLE2] [,#TABLE3]
        WHERE [#PRE_COND1] AND [#PRE_COND2]
        ORDER BY [#ORD_COND1], [#ORD_COND2]

    At run time, after figuring out all the search criteria the user has entered, I replace the [#PRE_COND1]s and [#ORD_COND1]s with the appropriate SQL and then execute the query. I personally do not like this brute-force method, but I have never come across a better approach either. How do you generally accomplish such tasks, given you are using either native JDBC or Spring JDBC? It is almost like I need C-macro-like functionality in Java to do this.

    Read the article

  • Best Practices For Secure APIs?

    - by Ferrett Steinmetz
    Let's say I have a website that has a lot of information on our products. I'd like some of our customers (including us!) to be able to look up our products through various methods, including: 1) pulling data from AJAX calls that return data in cool, JavaScripty ways; 2) creating iPhone applications that use that data; 3) having other web applications use that data for their own ends. Normally, I'd just create an API and be done with it. However, this data is in fact mildly confidential - which is to say that we don't want our competitors to be able to look up all our products every morning and then automatically set their prices to undercut us. And we also want to be able to look at who might be abusing the system, so if someone's making ten million complex calls to our API a day and bogging down our server, we can cut them off. My next logical step would then be to create a developers' key to restrict access - which would work fine for web apps, but not so much for any AJAX calls. (As I see it, they'd need to provide the key in the JavaScript, which is in plaintext and easily seen, and hence there's actually no security at all. Particularly if we'd be using our own developers' keys on our site to make these AJAX calls.) So my question: after looking around at OAuth and OpenID for some time, I'm not sure there is a solution that would handle all three of the above. Is there some sort of canonical "best practice" for developers' keys, or can OAuth and OpenID handle AJAX calls easily in some fashion I have yet to grok, or am I missing something entirely?

    Read the article

  • Best workflow with Git & GitHub

    - by Tom Schlick
    Hey guys, I'm looking for some advice on how to properly structure the workflow for my team with Git & GitHub. We are recent SVN converts and it's kind of confusing how we should best set up our day-to-day workflow. Here is a little background: I'm comfortable with the command line and my team is pretty new to it, but can follow and use commands. We are all working on the same project with 3 environments (development, staging, and production). We are a mix of developers & designers, so some use the Git GUI and some the command line. Our setup in SVN went something like this: we had a branch for development, staging and production. When people were confident with code they would commit and then merge it into staging. The server would update itself, and on release day (weekly) we would do a diff and push the changes to the production server. Now I have set up those branches and got the process with the server running, but it's the actual workflow that is confusing the hell out of me. It seems like overkill that every time someone makes a change to a file they would create a new branch, commit, merge, and delete that branch... from what I have read they would be able to do it on a specific commit (using the hash) - do I have that right? Is this an acceptable way to go about things with Git? Any advice would be greatly appreciated.

    Read the article

  • Why isn't GSettings enough to reset my Nautilus settings?

    - by berdario
    I'm trying to narrow down a bug (to eventually report it, or give up if it turns out that resetting a simple setting is enough to get rid of it). I noticed that with a brand-new user the bug doesn't pop up, so I tried to reset my user's Nautilus config: I renamed .config/nautilus, I renamed .nautilus2 (strange name... why the 2?), and I did a gsettings reset-recursively org.gnome.nautilus. Strangely enough, it didn't work: I had previously set my home directory as the desktop folder, and now the key has correctly been reset:

        $ gsettings get org.gnome.nautilus.preferences desktop-is-home-dir
        false

    and yet on my desktop I see the contents of my home folder (meaning that the settings have not been reset). I know that with the transition to GTK3 there have been lots of changes (before, I should have used GConf... but now there's dconf and GSettings), so there are quite a few questions about Nautilus... but these unfortunately now seem to be outdated.

    Read the article

  • Best way of creating lite and extended version of Git project

    - by Saif Bechan
    I have made a little framework for PHP. In this project I have the basic functionality that I use for most of my projects. I have also inserted some sample data so I do not forget how it all works. I have put the framework under version control using Git. Everything works fine now and I want to build on this further. This is my first Git project, so I do not know which method I should use. The first thing I want to do is create two more versions of the project. As I explained before, the version I have now has some sample data inside it. So the first version I want to create is a stripped-down version, with the sample data removed; I can use this version to start any new project. The second version I want to create is an extended version: the lite version combined with the sample data, plus some more extensions on top. So in the end I have three versions of the same project - small, medium and large. Now, what is the best way of doing this? Should I create three repositories, or can I use just one repository for all the versions?

    Read the article

  • Programming logic best practice - redundant checks

    - by eldblz
    I'm creating a large PHP project and I have a small question about how to proceed. Assume we have a class Books; in this class I have the method ReturnInfo:

        function ReturnInfo($id) {
            if( is_numeric($id) ) {
                $query = "SELECT * FROM books WHERE id='" . $id . "' LIMIT 1;";
                if( $row = $this->DBDrive->ExecuteQuery($query, $FetchResults=TRUE) ) {
                    return $row;
                } else {
                    return FALSE;
                }
            } else {
                throw new Exception('Books - ReturnInfo - id not valid.');
            }
        }

    Then I have another method, PrintInfo:

        function PrintInfo($id) {
            print_r( $this->ReturnInfo($id) );
        }

    Obviously the code samples are just examples and not actual production code. In the second method, should I check (again) whether $id is numeric? Or can I skip it, because it is already taken care of in the first method and an exception will be thrown if it's not valid? Until now I have always written code with redundant checks (no matter whether something is already checked elsewhere, I check it here too). Is there a best practice? Is it just common sense? Thank you in advance for your kind replies.

    Read the article

  • Best practice on structuring asynchronous mailers (using Sidekiq)

    - by gbdev
    Just wondering what's the best way to go about structuring asynchronous mailers in my Rails app (using Sidekiq)? I have one ActionMailer class with multiple methods/emails...

    notifier.rb:

        class Notifier < ActionMailer::Base
          default from: "\"Company Name\" <[email protected]>"
          default_url_options[:host] = Rails.env.production? ? 'domain.com' : 'localhost:5000'

          def welcome_email(user)
            @user = user
            mail to: @user.email, subject: "Thanks for signing up!"
          end

          ...

          def password_reset(user)
            @user = user
            @edit_password_reset_url = edit_password_reset_url(user.perishable_token)
            mail to: @user.email, subject: "Password Reset"
          end
        end

    Then, for example, the password_reset mail is sent in my User model by doing...

    user.rb:

        def deliver_password_reset_instructions!
          reset_perishable_token!
          NotifierWorker.perform_async(self)
        end

    notifier_worker.rb:

        class NotifierWorker
          include Sidekiq::Worker
          sidekiq_options queue: "mail"

          def perform(user)
            Notifier.password_reset(user).deliver
          end
        end

    So I guess I'm wondering a couple of things here... Is it possible to define many "perform" actions in one single worker? By doing so I could keep things simple (one notifier/mail worker), as I have it, and send many different emails through it. Or should I create many workers, one for each mailer (e.g. WelcomeEmailWorker, PasswordResetWorker, etc.), and just assign them all to the same "mail" queue with Sidekiq? I know it works as it is, but should I break out each of those mail methods (welcome_email, password_reset, etc.) into individual mailer classes, or is it OK to have them all under one class like Notifier? Really appreciate any advice here. Thanks!

    Read the article

  • Best way to keep a .net client app updated with status of another application

    - by rwmnau
    I have a Windows service that's running all the time, and takes some action every 15 minutes. I also have a client WinForms app that displays some information about what the service is doing. I'd like the forms application to keep itself updated with a recent status, but I'm not sure if polling every second is a good move performance-wise. When it starts, my Windows Service opens a WCF named pipe to receive queries (from my client form) Every second, a timer on the winform sends a query to the pipe, and then displays the results. If the pipe isn't there, the form displays that the service isn't running. Is that the best way to do this? If my service opens the pipe when it starts, will it always stay open (until I close it or my service stops)? In addition to polling the service, maybe there's some way for the service to notify any watching applications of certain events, like starting and stopping processing? That way, I could poll less, since I'd presumably know about big events already, and would only be polling for progress. Anything that I'm missing?
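
    One hedged option for the "notify watching applications of big events" part is a WCF duplex contract over the same named pipe, so the service pushes start/stop notifications and the form only polls occasionally for detail. A sketch of the contract shapes (all names illustrative):

        using System.ServiceModel;

        // Duplex sketch: the Windows service pushes "big" events (processing
        // started/stopped) to any subscribed WinForms client, while the client
        // can still poll GetCurrentStatus() occasionally for fine-grained progress.
        [ServiceContract(CallbackContract = typeof(IStatusCallback))]
        public interface IStatusService
        {
            [OperationContract]
            void Subscribe();           // client registers its callback channel

            [OperationContract]
            string GetCurrentStatus();  // still available for occasional polling
        }

        public interface IStatusCallback
        {
            [OperationContract(IsOneWay = true)]
            void ProcessingStarted();

            [OperationContract(IsOneWay = true)]
            void ProcessingStopped(string summary);
        }

        // Client side: DuplexChannelFactory pairs the proxy with a callback instance.
        // var factory = new DuplexChannelFactory<IStatusService>(
        //     new InstanceContext(new StatusCallbackHandler()),
        //     new NetNamedPipeBinding(),
        //     new EndpointAddress("net.pipe://localhost/GameStatus"));
        // factory.CreateChannel().Subscribe();

    Inside the service, Subscribe() would capture OperationContext.Current.GetCallbackChannel<IStatusCallback>() and keep the list of subscribers; NetNamedPipeBinding supports duplex, so the existing pipe endpoint can stay.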

    Read the article

  • C#/.NET: Reload configuration settings from an external config file at run-time

    - by user569850
    I'm writing a game server in C#.Net and would like to reload or refresh settings from a config file while the server is running. Ideally I would like to save the settings in an XML file, have the ability to edit the file while the game server is running and then send the server the command to reload the settings from the file. I know I can use a database to do this as well, but the game server is fairly small and I think it would be more practical to just save settings in a flat-file. I will have file-level access to the machine the server will run on. What should I use?
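
    A minimal sketch of one common approach - an XML file deserialized into a settings class, reloaded when a FileSystemWatcher sees the file change (or when an explicit admin command arrives). The class, property and file names are illustrative, and a full path to the settings file is assumed:

        using System;
        using System.IO;
        using System.Xml.Serialization;

        // Sketch: settings live in an XML file, are deserialized into a settings class,
        // and a FileSystemWatcher triggers a reload when the file changes.
        public class GameSettings
        {
            public int MaxPlayers { get; set; }
            public int TickIntervalMinutes { get; set; }
        }

        public class SettingsProvider : IDisposable
        {
            private readonly string _path;
            private readonly FileSystemWatcher _watcher;

            public GameSettings Current { get; private set; }

            public SettingsProvider(string path)
            {
                _path = path;
                Reload();
                _watcher = new FileSystemWatcher(Path.GetDirectoryName(path), Path.GetFileName(path));
                // Real code should debounce and retry here: Changed can fire while the
                // editor still holds the file open.
                _watcher.Changed += (sender, args) => Reload();
                _watcher.EnableRaisingEvents = true;
            }

            // Can also be called directly from an admin "reload settings" command.
            public void Reload()
            {
                var serializer = new XmlSerializer(typeof(GameSettings));
                using (var stream = File.OpenRead(_path))
                {
                    Current = (GameSettings)serializer.Deserialize(stream);
                }
            }

            public void Dispose()
            {
                _watcher.Dispose();
            }
        }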

    Read the article

  • Determining Best Table Structure for MySQL Performance

    - by Joe Majewski
    I'm working on a browser-based RPG for one of my websites, and right now I'm trying to determine the best way to organize my SQL tables for performance and maintenance. Here's my question: does the number of columns in an SQL table affect the speed at which it can be queried? I am not a newbie when it comes to PHP or MySQL. I used to develop things with the common goal of getting them to work, but I've recently advanced to the stage where a functional program is not good enough unless it's fast and reliable. Anyway, right now I have a members table that has around 15 columns. It contains information such as the player's username, password, email, logins, page views, etcetera. It doesn't contain any information on the player's progress in the game, however. If I added columns for things such as army size, gold, turns, and whatnot, then it could easily rise to around 40 or 50 total columns. Oh, and my database structure IS normalized. Will a table with 50 columns that gets constantly queried be a bad idea? Should I split it into two tables: one for the user's general information and one for the user's game statistics? I know I could check the query time myself, but I haven't actually created the tables yet and I think I'd be better off with some professional advice on this important decision for my game. Thank you for your time! :)

    Read the article
