Search Results

Search found 5915 results on 237 pages for 'practices'.

Page 69/237

  • Efficient retrieval for a voting system in PHP and MySQL

    - by Adnan
    Hello, I have a system where registered users can vote a comment on a picture up or down, something very similar to SO's voting system. I store the votes in a table with columns such as: vote_id | vote_comment_id | vote_user_id | vote_date | vote_type. Now I have a few questions concerning the speed and efficiency of the following. PROB: Once a user opens the picture page with comments, if that user has already voted a comment up or down I need to show it as "you voted up" or "you voted down" next to the comment (on SO the vote image is highlighted). MY POSSIBLE SOL: Right now, when I open a picture page, for each comment I loop through I also loop through my table of votes and check whether the user has voted, then show the status (I compare vote_user_id with the user's session). How efficient is this? Does anyone have a better approach to tackling this kind of problem?
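
    One common alternative to the per-comment loop (a minimal sketch of my own, not taken from the article; the query, type names and vote values are assumptions, and C# is used only for concreteness since the question is about PHP) is to fetch all of the current user's votes for this picture's comments in a single query and build an in-memory lookup keyed by comment ID:

        // Hypothetical sketch: one query for the current user's votes on this picture,
        // then a constant-time lookup while rendering each comment.
        using System.Collections.Generic;

        public class VoteLookup
        {
            private readonly Dictionary<int, string> _votesByCommentId = new Dictionary<int, string>();

            // voteRows would come from something like:
            //   SELECT vote_comment_id, vote_type FROM votes
            //   WHERE vote_user_id = @userId AND vote_comment_id IN (<ids of comments on this page>)
            public VoteLookup(IEnumerable<(int CommentId, string VoteType)> voteRows)
            {
                foreach (var row in voteRows)
                    _votesByCommentId[row.CommentId] = row.VoteType;
            }

            // Returns "up", "down", or null while rendering a comment.
            public string VoteFor(int commentId) =>
                _votesByCommentId.TryGetValue(commentId, out var voteType) ? voteType : null;
        }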

    Read the article

  • Best Practice for Uploading Many (2000+) Images to A Server

    - by bob
    Hello, I have a general question about this. When you have a gallery, sometimes people need to upload thousands of images at once. Most likely, it would be done through a .zip file. What is the best way to go about uploading this sort of thing to a server? Many times, servers have timeouts etc. that need to be accounted for. I am wondering what kinds of things I should be looking out for and what is the best way to handle a large number of images being uploaded. I'm guessing that you would allow a user to upload a zip file (assuming the timeout does not affect you), and this zip file is uploaded to a specific directory; let's assume in this case a directory is created for each user in the system. You would then unzip the archive on the server and scan the user's folder for any directories containing .jpg, .png or .gif files (etc.) and then import them into a table accordingly, I'm guessing labeled by folder name. What kind of server-side troubles could I run into? I'm aware that there may be many issues. Even general ideas would be good so I can then research further. Thanks! Also, I would be programming in Ruby on Rails, but I think this question applies across any language.
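
    The workflow described above (upload the zip, extract it server-side, scan for image files, import them) can be sketched roughly as follows. This is only a concept sketch under my own assumptions, in C# rather than Rails, with hypothetical paths and extensions; extraction would normally be pushed to a background job to dodge request timeouts:

        using System;
        using System.IO;
        using System.IO.Compression;
        using System.Linq;

        public static class GalleryImport
        {
            private static readonly string[] ImageExtensions = { ".jpg", ".jpeg", ".png", ".gif" };

            public static void ImportZip(string uploadedZipPath, string userFolder)
            {
                // Extract the uploaded archive into the per-user directory.
                ZipFile.ExtractToDirectory(uploadedZipPath, userFolder);

                // Walk the extracted tree and keep only recognised image files.
                var imageFiles = Directory
                    .EnumerateFiles(userFolder, "*", SearchOption.AllDirectories)
                    .Where(f => ImageExtensions.Contains(Path.GetExtension(f).ToLowerInvariant()));

                foreach (var file in imageFiles)
                {
                    // Here the file would be recorded in the database, e.g. labelled by folder name.
                    Console.WriteLine($"{Path.GetFileName(Path.GetDirectoryName(file))}: {file}");
                }
            }
        }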

    Read the article

  • Rails: find by day of week with timestamp

    - by Sleepycat
    I need to grab the records for the same day of the week over each of the preceding X weeks. There must be a better way to do it than this:

        Transaction.find_by_sql "select * from transactions where EXTRACT(DOW from date) = 1 and organisation_id = 4 order by date desc limit 7"

    It gets me what I need, but it is Postgres-specific and not very "Rails-y". date is a timestamp column. Anyone got suggestions?
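
    One database-agnostic variation (my own sketch, not from the article, shown in C# purely for concreteness) is to compute the matching dates in application code and then query by those dates (one per-day range when the column is a timestamp) instead of relying on a DB-specific EXTRACT(DOW ...):

        using System;
        using System.Collections.Generic;

        public static class SameWeekday
        {
            // Returns the dates that fall on the same weekday as 'reference'
            // for each of the preceding 'weeks' weeks.
            public static List<DateTime> PrecedingOccurrences(DateTime reference, int weeks)
            {
                var dates = new List<DateTime>();
                for (int i = 1; i <= weeks; i++)
                    dates.Add(reference.Date.AddDays(-7 * i));
                return dates; // each date becomes a [date, date + 1 day) range in the WHERE clause
            }
        }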

    Read the article

  • Looking for exercises to learn SQL, using the Northwind database

    - by MedicineMan
    I am trying to become more familiar with SQL by writing queries against the Northwind database. I am looking for some exercises that would help me to learn SQL and the features of SQL Server. It is important that the exercises have solutions, and in complicated cases it would be great if there were an explanation of the query. Thanks for the answers so far, but I still have not found what I am looking for: is there any free resource, available online without registration, where I can find a list of such exercises?

    Read the article

  • Should I store generated code in source control?

    - by Ron Harlev
    This is a debate I'm taking part in. I would like to get more opinions and points of view. We have some classes that are generated at build time to handle DB operations (in this specific case with SubSonic, but I don't think that is very important to the question). The generation is set up as a pre-build step in Visual Studio, so every time a developer (or the official build process) runs a build, these classes are generated and then compiled into the project. Now some people are claiming that having these classes saved in source control could cause confusion, in case the code you get doesn't match what would have been generated in your own environment. I would like to have a way to trace back the history of the code, even if it is usually treated as a black box. Any arguments or counter-arguments? UPDATE: I asked this question because I really believed there was one definitive answer. Looking at all the responses, I can say with a high level of certainty that there is no such answer. The decision should be made based on more than one parameter. Reading the answers below should provide a very good guideline to the types of questions you should be asking yourself when having to decide on this issue. I won't select an accepted answer at this point, for the reasons mentioned above.

    Read the article

  • HTML & JavaScript: How to store data referring to HTML elements

    - by Dan
    Hello, I'm working on a web application that uses AJAX to communicate with the server. My specific situation is the following: I have a list of users laid out on the HTML page. For each of these users I can do the following: change their 'status' or 'remove' them from the account. What's a good practice for storing information in the page about the following: the user ID, and the current status of the user? P.S.: I'm using jQuery.

    Read the article

  • Automatically create tags based on a string

    - by Gautam
    Hello, I am new to Ruby on Rails. Let's say I have this text: "Ashley Cole and Cheryl Cole Split". Is there a way to automatically tag the above text with Ashley Cole, Cheryl Cole and ChelseaFC (Ashley Cole plays football (soccer) for that club)? Please help. Also, which is the best tagging gem available? Looking forward to your help. Thanks, Gautam

    Read the article

  • Should I Prefer a Closed or Open List<> System?

    - by Tyler Murry
    Hey guys, I've got a class in my project that stores a List<> of elements. I'm trying to figure out whether I should allow the user to add to that List directly (e.g. calling the native add/remove methods) or lock it down by declaring the List private and only allowing a handful of methods I choose to actually alter the List. It's a framework, so I'm trying to design it as robustly as possible, but I also want to keep it as simple and error-free as possible. What's the best practice in this situation? Thanks, Tyler
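
    For reference, the "closed" option usually looks something like the following minimal sketch (my own illustration with hypothetical names, not code from the article): the list stays private, mutation goes through methods the class controls, and callers only get a read-only view.

        using System.Collections.Generic;
        using System.Collections.ObjectModel;

        public class ElementCollection<T>
        {
            private readonly List<T> _items = new List<T>();

            // Read-only view for consumers of the framework.
            public ReadOnlyCollection<T> Items => _items.AsReadOnly();

            public void Add(T item)
            {
                // Validation and invariants can live here instead of being the caller's problem.
                _items.Add(item);
            }

            public bool Remove(T item) => _items.Remove(item);
        }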

    Read the article

  • Unit Testing in ASP.NET MVC: Minimising the number of asserts per test

    - by Neil Barnwell
    I'm trying out TDD on a greenfield hobby app in ASP.NET MVC, and have started to get test methods such as the following:

        [Test]
        public void Index_GetRequest_ShouldReturnPopulatedIndexViewModel()
        {
            var controller = new EmployeeController();
            controller.EmployeeService = GetPrePopulatedEmployeeService();

            var actionResult = (ViewResult)controller.Index();
            var employeeIndexViewModel = (EmployeeIndexViewModel)actionResult.ViewData.Model;
            EmployeeDetailsViewModel employeeViewModel = employeeIndexViewModel.Items[0];

            Assert.AreEqual(1, employeeViewModel.ID);
            Assert.AreEqual("Neil Barnwell", employeeViewModel.Name);
            Assert.AreEqual("ABC123", employeeViewModel.PayrollNumber);
        }

    Now I'm aware that ideally tests will only have one Assert.xxx() call, but does that mean I should refactor the above into separate tests with names such as Index_GetRequest_ShouldReturnPopulatedIndexViewModelWithCorrectID, Index_GetRequest_ShouldReturnPopulatedIndexViewModelWithCorrectName and Index_GetRequest_ShouldReturnPopulatedIndexViewModelWithCorrectPayrollNumber, where the majority of the test is duplicated code (which is therefore being tested more than once and violates the "keep tests fast" advice)? That seems to be taking it to the extreme to me, so if I'm right, what is the real-world meaning of the "one assert per test" advice?
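
    One common reading of the advice is "one logical assertion per test" rather than literally one Assert call. A sketch of how the duplication concern is often handled (this is my own illustration built on the question's types, with a hypothetical helper name, not an answer from the article): pull the arrange/act code into a shared helper so each extra test method stays short.

        private EmployeeDetailsViewModel GetFirstEmployeeFromIndex()
        {
            var controller = new EmployeeController();
            controller.EmployeeService = GetPrePopulatedEmployeeService();

            var actionResult = (ViewResult)controller.Index();
            var model = (EmployeeIndexViewModel)actionResult.ViewData.Model;
            return model.Items[0];
        }

        [Test]
        public void Index_GetRequest_FirstEmployeeHasExpectedId()
        {
            Assert.AreEqual(1, GetFirstEmployeeFromIndex().ID);
        }

        [Test]
        public void Index_GetRequest_FirstEmployeeHasExpectedName()
        {
            Assert.AreEqual("Neil Barnwell", GetFirstEmployeeFromIndex().Name);
        }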

    Read the article

  • How do I remove a folder from Windows Distributed File System?

    - by digiguru
    We recently moved to a web farm and set up DFS, only to find that a beta application was creating files like there was no tomorrow. 1.2 million files were replicated across the farm. Since then we have prevented the application from creating new files, but every time we try to remove the files, replication replaces them on each server. The process of replacing them actually causes the servers to run slowly and in some cases stall. Is there any way we can stop replication at a folder level?

    Read the article

  • What's the use of Ant's extension-point if/unless attributes?

    - by Robert Menteer
    When you define an extension-point in an Ant build file, you can make it conditional by using the if or unless attribute. On a target, the if/unless attributes prevent its tasks from being run. But an extension-point doesn't have any tasks to conditionally run, so what does the condition do? My thought (which proved to be incorrect in Ant 1.8.0) was that it would prevent any targets that extend the extension-point from being run. Here is an example build script showing the problem:

        <project name = "ext-test" default = "main">
            <property name = "do.it" value = "false" />
            <extension-point name = "init"/>
            <extension-point name = "doit" depends = "init" if = "${do.it}" />
            <target name = "extend-init" extensionOf = "init">
                <echo message = "Doing extend-init." />
            </target>
            <target name = "extend-doit" extensionOf = "doit">
                <echo message = "Do It! (${do.it})" />
            </target>
            <target name = "main" depends = "doit">
                <echo message = "Doing main." />
            </target>
        </project>

    Using the command:

        ant -v

    Results in:

        Apache Ant version 1.8.0 compiled on February 1 2010
        Trying the default build file: build.xml
        Buildfile: /Users/bob/build.xml
        Detected Java version: 1.6 in: /System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Home
        Detected OS: Mac OS X
        parsing buildfile /Users/bob/build.xml with URI = file:/Users/bob/build.xml
        Project base dir set to: /Users/bob
        parsing buildfile jar:file:/Users/bob/Documents/Development/3P-Tools/apache-ant-1.8.0/lib/ant.jar!/org/apache/tools/ant/antlib.xml with URI = jar:file:/Users/bob/Documents/Development/3P-Tools/apache-ant-1.8.0/lib/ant.jar!/org/apache/tools/ant/antlib.xml from a zip file
        Build sequence for target(s) `main' is [extend-init, init, extend-doit, doit, main]
        Complete build sequence is [extend-init, init, extend-doit, doit, main, ]
        extend-init:
             [echo] Doing extend-init.
        init:
        extend-doit:
             [echo] Do It! (false)
        doit:
        Skipped because property 'false' not set.
        main:
             [echo] Doing main.
        BUILD SUCCESSFUL
        Total time: 0 seconds

    You will notice that the target extend-doit is executed but the extension-point itself is skipped. Since an extension-point doesn't have any tasks, what exactly has been skipped? Any targets that depend on the extension-point still get executed, since a skipped target is a successful target. So what is the value of the if/unless attributes on an extension-point?

    Read the article

  • Reordering arguments using recursion (pro, cons, alternatives)

    - by polygenelubricants
    I find that I often make a recursive call just to reorder arguments. For example, here's my solution for endOther from codingbat.com: Given two strings, return true if either of the strings appears at the very end of the other string, ignoring upper/lower case differences (in other words, the computation should not be "case sensitive"). Note: str.toLowerCase() returns the lowercase version of a string.

        public boolean endOther(String a, String b) {
            return a.length() < b.length()
                ? endOther(b, a)
                : a.toLowerCase().endsWith(b.toLowerCase());
        }

    I'm very comfortable with recursion, but I can certainly understand why some would object to it. There are two obvious alternatives to this recursion technique:

    Swap a and b traditionally:

        public boolean endOther(String a, String b) {
            if (a.length() < b.length()) {
                String t = a;
                a = b;
                b = t;
            }
            return a.toLowerCase().endsWith(b.toLowerCase());
        }

    Not convenient in a language like Java that doesn't pass by reference; lots of code just to do a simple operation; an extra if statement breaks the "flow".

    Repeat code:

        public boolean endOther(String a, String b) {
            return (a.length() < b.length())
                ? b.toLowerCase().endsWith(a.toLowerCase())
                : a.toLowerCase().endsWith(b.toLowerCase());
        }

    Explicit symmetry may be a nice thing (or not?); a bad idea unless the repeated code is very simple ...though in this case you can get rid of the ternary and just || the two expressions.

    So my questions are: Is there a name for these 3 techniques? (Are there more?) Is there a name for what they achieve? (e.g. "parameter normalization", perhaps?) Are there official recommendations on which technique to use (and when)? What are other pros/cons that I may have missed?
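
    As a side note (my own sketch, not from the article): in a language with tuple or deconstruction assignment, the "swap the arguments at entry" alternative loses most of its bulk. For example, in C# the same normalization is a single statement:

        // Normalize the arguments once at entry so 'a' is always the longer string.
        public static bool EndOther(string a, string b)
        {
            if (a.Length < b.Length)
                (a, b) = (b, a);

            return a.ToLower().EndsWith(b.ToLower());
        }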

    Read the article

  • GAE modeling relationship options

    - by Sway
    Hi there, I need to model the following situation and I can't seem to find a consistent example of how to do it "correctly" for the Google App Engine. Suppose I've got a simple situation like the following:

        [Company] 1 ----- M [Store]

    A company has one to many stores. Each store has an address made up of an address line 1, city, state, country, postcode etc. OK. Let's say we need to create, say, an "Audit". An Audit is for a company and can be across one to many stores. So something like:

        [Audit] 1 ------ 1 [Company] 1 ------ M [Store]

    Now we need to query all of the "audits" based on the store "addresses" in order to send the "auditors" to the right locations. There seem to be numerous articles like this one: http://code.google.com/appengine/articles/modeling.html which give examples of creating a "ContactCompany" model class. However, they also say that you should use this kind of relationship only when you "really need to" and with "care" for performance. I've also read - frequently - that you should denormalize as much as possible, thereby moving all of the "query-able" data into the Audit class. So what would you suggest as the best way to solve this? I've seen that there is an Expando class but I'm not sure if that is the "best" option for this. Any help or thoughts on this would be totally appreciated. Thanks in advance, Matt

    Read the article

  • What damage is done by document.write()?

    - by Simon Gibbs
    What bad things happen at the moment document.write() is invoked? I've heard bits and pieces about document.write having an adverse impact on the DOM or on the use of JavaScript libraries. I have an issue in front of me that I suspect is related, but have not been able to find a concise summary of what damage the method does.

    Read the article

  • Best practice when using WebMethods and session

    - by Abdel Olakara
    Hi all, I want to reduce postbacks on one of my application pages and use AJAX instead. I used a WebMethod to do so. I have a static WebMethod that needs to access and modify session variables, and on the client side I am calling this method using jQuery. I tried accessing the session as follows:

        [WebMethod]
        public static void TestWebMethod()
        {
            if (HttpContext.Current.Session["pitems"] != null)
            {
                log.Debug("Using the existing list");
                Product prod = (Product)HttpContext.Current.Session["pitems"];
                List<Configs> confs = cart.GetConfigs();
                foreach (Configs citem in confs)
                {
                    log.Info(citem.Description);
                }
            }
            log.Info("Inside the method!");
        }

    The values are displayed correctly and it seems to work, but I would like to know whether this practice is allowed, since the method is a static method, and how it will behave if multiple people access the application. I would also like to know how developers do this kind of task in ASP.NET if this is not the right method. Thanks in advance for your suggestions and ideas, Abdel Olakara
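
    Two points worth checking here (these are my own notes, not from the article): ASP.NET page methods generally need to opt in to session state via the WebMethod attribute's EnableSession property, and the session object is still per-user, so the method being static does not by itself mean state is shared between users. A minimal sketch under those assumptions:

        // Sketch only: EnableSession = true opts the page method in to session state.
        [System.Web.Services.WebMethod(EnableSession = true)]
        public static void TestWebMethod()
        {
            // Session is per-user; "static" just means no Page instance is created for the call.
            var prod = HttpContext.Current.Session["pitems"] as Product;
            if (prod != null)
            {
                log.Debug("Using the existing product from session");
            }
        }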

    Read the article

  • Including libraries in a project: best practice

    - by mridang
    Hi guys, I'm writing a Python open-source app. My app uses some open-source Python libraries. These libraries in turn use other open-source libraries. I intend to release my code on SourceForge or Google Code, but do I need to include the sources of the other libraries? Is this a good practice? Or should I simply write this information into a README file informing the user about the other required libraries? I've placed all these libraries into a libs subfolder in my source directory. When checking my code into SVN, should I use something called svn:externals to link to the other sources? Is there a way to dynamically update my libraries to the latest version, or is this something I have to do manually when I release a new version? My sincerest apologies if my question sounds vague, but I'm pretty lost in this matter and I don't know what to Google for. Thanks all.

    Read the article

  • Use of @keyword in C# -- bad idea?

    - by Robert Fraser
    In my naming convention, I use _name for private member variables. I noticed that if I auto-generate a constructor with ReSharper and the member is a keyword, it will generate an escaped keyword. For example:

        class IntrinsicFunctionCall
        {
            private Parameter[] _params;

            public IntrinsicFunctionCall(Parameter[] @params)
            {
                _params = @params;
            }
        }

    Is this generally considered bad practice or is it OK? It happens quite frequently with @params and @interface.
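
    For comparison (my own sketch, not from the article), the escape disappears if the parameter simply gets a non-keyword name, which is the usual alternative to @-escaping:

        class IntrinsicFunctionCall
        {
            private Parameter[] _params;

            // A non-keyword parameter name avoids the @ escape entirely.
            public IntrinsicFunctionCall(Parameter[] parameters)
            {
                _params = parameters;
            }
        }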

    Read the article

  • Centering a percent-based div

    - by Sarfraz
    Hello, Recently a client asked that his site be percent-based rather than pixel-based. The percentage was to be set to 80%. As you guys know, it is very easy to center the container if it is pixel-based, but how do you center a percent-based main container?

        #container {
            width: 80%;
            margin: 0px auto;
        }

    That does not center the container :(

    Read the article

  • A format for storing personal contacts in a database

    - by Gart
    I'm thinking of the best way to store personal contacts in a database for a business application. The traditional and straightforward approach would be to create a table with columns for each element, i.e. Name, Telephone Number, Job Title, Address, etc. However, there are known industry standards for this kind of data, like for example vCard, or hCard, or vCard-RDF/XML, or even the Windows Contacts XML Schema. Utilizing a standard format would offer some benefits, like interoperability with other systems. But how can I decide which method to use?

    The requirements are mainly to store the data. Search and ordering queries are highly unlikely but possible. The volume of the data is 100,000 records at maximum. My database engine supports native XML columns. I have been thinking of using some XML-based format to store the personal contacts. Then it will be possible to utilize XML indexes on this data, if searching and ordering is needed. Is this a good approach? Which contacts format and schema would you recommend for this?

    Edited after first answers: Here is why I think the straightforward approach is bad. This is due to the nature of this kind of data - it is not that simple. Personal contact data is not well-structured; it may be called semi-structured. Each contact may have different data fields, maybe even fields which I cannot anticipate. In my opinion, each piece of this data should be treated as important information, i.e. no piece of data should be discarded just because there was no relevant column in the database. If we took it further, assuming that no data may be lost, then we could create a big text column named Comment or Description or Other and put there everything which cannot be fitted well into table columns. But then again - the data would lose structure - and this might be bad.

    If we wanted structured data then - according to database design principles - the data should be decomposed into entities, and relations should be established between the entities. But this adds complexity - there are just too many entities, and lots of design decisions have to be made, like "How do we store Address? Personal Name? Phone number? How do we encode home phone numbers and mobile phone numbers? How about other contact info?" The relations between entities are complex and multiple, and each relation is a table in the database. Each relation needs to be documented in the design papers. That is a lot of work to do. But it is possible to avoid the complexity entirely - just document that the data is stored according to such and such standard schema, period. Then anybody reading that document should easily understand what it is all about.

    Finally, this is all about using an industry standard. The standard is, hopefully, designed by some clever people who anticipated and described the structure of personal contact information much better than I ever could. Why should we all reinvent the wheel? It's much easier to use a standard schema. The problem is, there are just too many standards - it's not easy to decide which one to use!
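
    The "semi-structured contact in a native XML column" idea described above could look roughly like the following sketch (entirely my own illustration; the Contact shape, field names and serialization choice are assumptions, not a recommended schema): known fields are strongly typed, while anything unexpected survives as extra key/value pairs instead of being discarded.

        using System.Collections.Generic;
        using System.IO;
        using System.Xml.Serialization;

        public class Contact
        {
            public string Name { get; set; }
            public string Telephone { get; set; }
            public string JobTitle { get; set; }

            // Fields that were not anticipated still get stored, with their structure intact.
            public List<ContactField> OtherFields { get; set; } = new List<ContactField>();
        }

        public class ContactField
        {
            public string Key { get; set; }
            public string Value { get; set; }
        }

        public static class ContactXml
        {
            public static string Serialize(Contact contact)
            {
                var serializer = new XmlSerializer(typeof(Contact));
                using (var writer = new StringWriter())
                {
                    serializer.Serialize(writer, contact);
                    return writer.ToString(); // the XML goes into the native XML column
                }
            }
        }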

    Read the article

  • Is there added overhead to looking up a column in a DataTable by name rather than by index?

    - by Ben McCormack
    In a DataTable object, is there added overhead to looking up a column value by name, thisRow("ColumnA"), rather than by the column index, thisRow(0)? In which scenarios might this be an issue? I work on a team that has lots of experience writing VB6 code, and I noticed that they didn't do column lookups by name for DataTable objects or data grids. Even in .NET code, we use a set of integer constants to reference column names in these types of objects. I asked our team lead why this was so, and he mentioned that in VB6 there was a lot of overhead in looking up data by column name rather than by index. Is this still true for .NET? Example code (in VB.NET, but the same applies to C#):

        Public Sub TestADOData()
            Dim dt As New DataTable

            'Set up the columns in the DataTable
            dt.Columns.Add(New DataColumn("ID", GetType(Integer)))
            dt.Columns.Add(New DataColumn("Name", GetType(String)))
            dt.Columns.Add(New DataColumn("Description", GetType(String)))

            'Add some data to the data table
            dt.Rows.Add(1, "Fred", "Pitcher")
            dt.Rows.Add(3, "Hank", "Center Field")

            'Method 1: By column name
            For Each r As DataRow In dt.Rows
                Console.WriteLine( _
                    "{0,-2} {1,-10} {2,-30}", r("ID"), r("Name"), r("Description"))
            Next
            Console.WriteLine()

            'Method 2: By column index
            For Each r As DataRow In dt.Rows
                Console.WriteLine("{0,-2} {1,-10} {2,-30}", r(0), r(1), r(2))
            Next
        End Sub

    Is there a case where method 2 provides a performance advantage over method 1?
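
    A common middle ground (my own sketch, in C# rather than the question's VB.NET, with hypothetical names) is to resolve the column once outside the loop and then index each row by the cached DataColumn, so the readable name appears only once per table rather than once per row:

        using System;
        using System.Data;

        public static class DataTableLookup
        {
            public static void PrintNames(DataTable dt)
            {
                DataColumn nameColumn = dt.Columns["Name"]; // the name lookup happens once

                foreach (DataRow row in dt.Rows)
                {
                    Console.WriteLine(row[nameColumn]);     // per-row access uses the column object
                }
            }
        }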

    Read the article

  • When should I be cautious about using data binding in .NET?

    - by Ben McCormack
    I just started working on a small team of .NET programmers about a month ago and recently got into a discussion with our team lead about why we don't use data binding at all in our code. Every time we work with a data grid, we iterate through a data table and populate the grid row by row; the code usually looks something like this:

        Dim dt As DataTable = FuncLib.GetData("spGetTheData ...")
        Dim i As Integer
        For i = 0 To dt.Rows.Length - 1
            '(not sure why we do not use a for each here)
            gridRow = grid.Rows.Add()
            gridRow(constantProductID).Value = dt("ProductID").Value
            gridRow(constantProductDesc).Value = dt("ProductDescription").Value
        Next
        '(I am probably missing something in the code, but that is basically it)

    Our team lead was saying that he got burned using data binding when working with Sheridan grid controls, VB6, and ADO recordsets back in the nineties. He's not sure what the exact problem was, but he remembers that binding didn't work as expected and caused him some major problems. Since then, they haven't trusted data binding and load the data for all their controls by hand. The reason the conversation even came up was that I found data binding to be very simple and really liked separating the data presentation (in this case, the data grid) from the in-memory data source (in this case, the data table). "Loading" the data row by row into the grid seemed to break this distinction. I also observed that with the advent of XAML in WPF and Silverlight, data binding seems like a must-have in order to cleanly wire up a designer's XAML code with your data. When should I be cautious of using data binding in .NET?
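
    For comparison with the row-by-row loop above, the data-bound version is typically just an assignment (a sketch under my own assumptions - it presumes a WinForms DataGridView; a WebForms GridView would also need a DataBind() call):

        using System.Data;
        using System.Windows.Forms;

        public static class GridBinding
        {
            public static void Bind(DataGridView grid, DataTable dt)
            {
                // The grid renders straight from the in-memory table; no per-row copy loop.
                grid.DataSource = dt;
            }
        }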

    Read the article
