Search Results

Search found 720 results on 29 pages for 'conventions'.

Page 3 of 29

  • Conventions for search result scoring

    - by DeaconDesperado
    I assume this type of question is more on-topic here than on regular SO. I have been working on a search feature for my team's web application and have had a lot of success building a multithreaded, "divide and conquer" processing system to work through a large amount of fulltext.

    Our problem domain is pretty specific. Users of the app generate posts, and as a general rule, posts that are more recent are considered to be of greater relevance. Some of the data we are trying to extract from search is very specific (users' feelings about particular items or things), and we are using Python NLTK to do named-entity extraction to find likely interesting query terms. Essentially we look for descriptive adjective-noun pairs and generate a general picture of a user's expressed sentiment as a list of tokens. This search is intended as an internal tool for our team to draw out a local picture of sentiments like "soggy pizza." There's some machine learning in there too, to do entity resolution from terms like "soggy" to all manner of adjectives expressing nastiness.

    My problem is that I am at a loss for how to go about scoring these results. The text being searched is split up into tokens in a list, so my initial approach would be to normalize a float score between 0.0 and 1.0 based on how far into the list the terms appear and how often they are repeated (a later mention of a term being worth less, an earlier one more; greater frequency, greater score; etc.). A certain amount of weight could be given to the timestamp as well, though I am not certain how to calculate this.

    I am curious whether anyone has had to solve a similar problem grading search relevance across appreciable metrics (frequency, term location/collocation, recency), and whether there are any guidelines for how to weight each. I should mention as well that the final fallback procedure in the search is to pipe the query to Sphinx, which has its own scoring practices. Sphinx operates as the last resort in case our application-specific processing can't find any eligible candidates.
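    A minimal sketch of how such a blended score might look (the weight values, half-life, and component shapes below are illustrative assumptions, not tested recommendations):

        import math
        import time

        def score_token(positions, doc_length, timestamp, now=None,
                        w_pos=0.5, w_freq=0.3, w_recency=0.2,
                        half_life_days=30.0):
            """Blend position, frequency, and recency into a 0.0-1.0 score."""
            if not positions:
                return 0.0
            now = now or time.time()
            # Earlier mentions are worth more: each occurrence decays with
            # how far into the token list it appears.
            pos_score = sum(1.0 - p / doc_length for p in positions) / len(positions)
            # More repetitions raise the score, with diminishing returns.
            freq_score = 1.0 - 1.0 / (1.0 + math.log1p(len(positions)))
            # Recency as exponential decay on the post's age.
            age_days = max(0.0, (now - timestamp) / 86400.0)
            recency_score = 0.5 ** (age_days / half_life_days)
            return w_pos * pos_score + w_freq * freq_score + w_recency * recency_score

    Since each component is normalized to 0.0-1.0 and the weights sum to 1, the blended result stays in 0.0-1.0, so tuning reduces to adjusting three weights and a half-life against known-good queries.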

  • Patterns and conventions to document changes while developing

    - by Talysson
    Let's say I'm developing a second version of an API, and there are some changes in method names and so on from the previous version. What's a good way to document these changes? I mean, is it better to document while changing (but maybe there will be more changes before the release, so I think it could be more work than necessary), or to write down the topics and document them once all the changes are done?
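    One common middle ground is a changelog kept alongside the code with an "Unreleased" heading that is simply renamed at release time: recording each change as you make it then costs one line, and changes that are later reverted just delete that line. A sketch in the popular "Keep a Changelog" layout (the method names are invented for illustration):

        ## [Unreleased]
        ### Changed
        - Renamed Client.fetch() to Client.get() for consistency.
        ### Removed
        - Dropped the deprecated connect(host, port) overload.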

  • Conventions for the behavior of double or triple "click to select text" features?

    - by John Sullivan
    Almost any mature program that involves text implements "double click to select the word" and, in some cases, "triple click to select additional stuff like an entire line" as a feature. I find these features useful, but they are often inconsistent between programs. For example, some programs' double clicks do not select the ending space after a word, but most do. Some recognize the - character as the end of a word; others do not. SO likes to select the entire paragraph as I write this post when I triple-click it, VS Web Developer 2005 has no triple-click support, and UltraEdit-32 will select one line upon triple-clicking. We could come up with innumerable inconsistencies in how double- and triple-click pattern matching is implemented across programs. I am concerned about how to implement this behavior in my program if nobody else has settled on a convention for how the pattern matching should work. My question is: does a convention (conventions? maybe an MS or Linux convention?) exist that dictates how these features are supposed to behave for the end user? What, if any, are they?
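    In the absence of a firm convention, any one of the behaviors described above can be pinned down in a few lines. A Python sketch (the word pattern and the trailing-space rule are choices, not a standard):

        import re

        WORD = re.compile(r"[A-Za-z0-9_]+")   # '-' deliberately ends a word here

        def double_click_select(text, i):
            """Return the selection produced by a double-click at index i."""
            for m in WORD.finditer(text):
                if m.start() <= i < m.end():
                    end = m.end()
                    # Like most programs: also grab one trailing space.
                    if end < len(text) and text[end] == " ":
                        end += 1
                    return text[m.start():end]
            return ""

        print(repr(double_click_select("select this word", 7)))   # 'this '
        print(repr(double_click_select("well-known", 1)))         # 'well'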

  • f# naming conventions.. WTF??!!

    - by Peter Goras
    let w t f = have I missed something? Do all value names in F# have to be a single char, preferably x? And do all method names have to be abbreviated to a cryptic four chars? We've had it rammed down our throats for years about descriptive variable/method names in other languages, but now this doesn't apply to F#? Or is it some coding 'style' bollox? Learning from code examples is hard enough with type inference. Why make it harder?

  • How do you deal with naming conventions for rails partials?

    - by DJTripleThreat
    For example, I might have a partial something like:

        <div>
          <%= f.label :some_field %><br/>
          <%= f.text_field :some_field %>
        </div>

    which works for edit AND new actions. I also will have one like:

        <div>
          <%=h some_field %>
        </div>

    for the show action. So you would think that all your partials go under one directory like shared or something. The problem I see with this is that both of these would cause a conflict, since they are essentially the same partial but for different actions, so what I do is:

        <!-- for edit and new actions -->
        <%= render "shared_edit/some_partial" ... %>

        <!-- for show action -->
        <%= render "shared_show/some_partial" ... %>

    How do you handle this? Is it a good idea, or even possible, to combine all of these actions into one partial and render different parts by determining what the current action is?

  • What are some Maven project naming conventions for web application module?

    - by Jared Pearson
    When creating a project with the webapp archetype in Maven, they subtly advise not putting any Java source in the webapp project by not including the "src/main/java" folder. What do you name your Maven projects?

      • project-webapp for the project that contains the JSP, CSS, images, etc.
      • project for the project that contains domain-specific entities
      • ? for the project that contains the web application files like Servlets, Listeners, etc.

    My first inclination would be to use "webapp" for the project containing the web application files (Servlets/Listeners); however, the archetype uses "webapp" to convey the JSP/CSS/images project, and that would cause confusion for other developers.
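    One layout consistent with the archetype's hint (the module names here are illustrative, not a Maven standard) keeps the war module free of Java source by pushing the servlet code into its own jar:

        project/                parent POM (packaging: pom)
            project-core/       domain-specific entities (jar)
            project-web/        Servlets, Listeners, filters (jar)
            project-webapp/     JSP, CSS, images (war, depends on project-web)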

  • Are there any Objective-C / Cocoa naming conventions for properties that are acronyms?

    - by t6d
    I have an object that models an online resource. Therefore, my object has to store the URL of the resource it belongs to. How would I name the getter and setter?

        MyObject *myObject = [[MyObject alloc] init];

        // Set values
        [myObject setURL:url];
        myObject.URL = url;

        // Get values
        [myObject URL];
        myObject.URL;

    or would the following be better:

        MyObject *myObject = [[MyObject alloc] init];

        // Set values
        [myObject setUrl:url];
        myObject.url = url;

        // Get values
        [myObject url];
        myObject.url;

    The former example would of course require defining the property in the following way:

        @property (retain, getter = URL, setter = setURL:) NSURL *url;

  • Case convention: why the variation between languages?

    - by Jason
    Coming from a Java background, I'm very used to camelCase. When writing C, using the underscore wasn't a big adjustment, since it was only used sparingly when writing simple Unix apps. In the meantime, I stuck with camelCase as my style, as did most of the class. However, now that I'm teaching myself C# in preparation for my upcoming Usability Design class in the fall, the PascalCase convention of the language is really tripping me up, and I'm having to rely on IntelliSense a great deal to make sure the correct API method is being used. To be honest, switching to the PascalCase layout hasn't quite sunk into muscle memory yet, and that is frustrating from my point of view. Since C# and Java are considered to be brother languages, as both are descended from C++, why the variation in their naming conventions? Was it a personal decision by the creators based on their comfort level, or was it just to play mind games with newcomers to the language?

  • Assignments in mock return values

    - by zerkms
    (I will show examples using PHP and PHPUnit, but this may be applied to any programming language.) The case: let's say we have a method A::foo that delegates some work to class M and returns the value as-is. Which of these solutions would you choose?

        $mock = $this->getMock('M');
        $mock->expects($this->once())
             ->method('bar')
             ->will($this->returnValue('baz'));

        $obj = new A($mock);
        $this->assertEquals('baz', $obj->foo());

    or

        $mock = $this->getMock('M');
        $mock->expects($this->once())
             ->method('bar')
             ->will($this->returnValue($result = 'baz'));

        $obj = new A($mock);
        $this->assertEquals($result, $obj->foo());

    or

        $result = 'baz';

        $mock = $this->getMock('M');
        $mock->expects($this->once())
             ->method('bar')
             ->will($this->returnValue($result));

        $obj = new A($mock);
        $this->assertEquals($result, $obj->foo());

    Personally I always follow the 2nd solution, but just 10 minutes ago I had a conversation with a couple of developers who said that it is "too tricky" and chose the 3rd or 1st. So what would you usually do? And do you have any conventions to follow in such cases?

  • Naming conventions for private members of .NET types

    - by Joan Venge
    Normally when I have a private field inside a class or a struct, I use camelCasing, so it's obvious that it's private when you see its name. But in some of my colleagues' C# code, I see that they mostly use an m_ prefix, or sometimes just _, as if there were some sort of convention. Don't the .NET naming conventions prevent you from using underscores in member names? And when you mention the MS naming conventions or whatnot, they tell you their way is the best, but don't explain the reasoning behind it.

  • Jira conventions and best practices

    - by Amby
    I have been using Jira for six months but haven't been through any documentation about the various options available and how to use them for maximum output. There must be some conventions that help in better tracking of issues: for instance, logging work, linking issues, and creating sub-tasks. It would help if you could share some of the features (and the conventions) that you follow while using Jira. It may vary from team to team, but there must be some generic rules which can be followed. Any feedback would be of help. Thanks.

  • How to force myself to follow naming and other conventions

    - by The King
    Hi all, I believe I program well; at least, my code produces results. But I feel I have a drawback: I hardly follow any naming conventions, neither for variables, nor for methods, nor for classes, nor for tables, columns, or SPs. Further to this, I hardly comment anything while programming. I always think, "Let me first see the results, and then I will come back and correct the variable names and other things later" (thanks to Visual Studio's refactoring support here), but that "later" never comes. So I need tips to force myself to adopt the practice of following naming conventions and commenting. Thanks for your time.

  • What do we call "non-programmers"? (Like "muggle" in HP) [closed]

    - by OscarRyz
    Sometimes I want to refer to people without coding powers as Muggles, but it doesn't quite feel right. Gamers have "n00b" (but still, a n00b has some notion of gaming). I mean: for all those for whom Windows is the only OS in the world ("what's an OS?", they would ask); for project managers who can't distinguish between Excel and a database; for those who exclaim "Wooow!" when you show them the Ctrl+right-click to see a webpage's source code. What would be a good word to describe these "persons without coding ability"?

    Background: I didn't mean to be disrespectful toward ordinary people. It's just that sometimes it drives me nuts seeing coworkers struggling to explain some concept to these "people". For instance, recently we were asked what an "ear" was (in Java). My coworker was struggling with how to explain what it was, and how it differs from .war, .jar, etc., talking about EJBs, application servers, deployment, and so on, and our "people"1 were like o_O. I realized a better way to explain it was "Think of it as an installer for the application, similar to install.exe," and he understood immediately. This is no one's fault; sometimes our "people" come from a different background, that's it. It is our responsibility to talk at a level they can understand; some coworkers don't get that and try very hard to explain programming concepts (like the source code in the browser). But I get the point: we don't need to be disrespectful. ... But I'm considering calling them PEBKACs.

    1 As suggested

  • The curious case(s) of the Microsoft product naming department

    - by AaronBertrand
    A long time ago, in a galaxy far, far away... Okay, it was here on earth, a little over 5 years ago. With SQL Server 2005, Microsoft introduced a very useful feature called the DAC. DAC stands for "dedicated administrator connection"... you can read about it here, but essentially, it allows you a single connection into the server with priority resource allocation - so you can actually get in and kill a rogue process that is otherwise taking over the server. On its own this was a fine acronym choice... (read more)

  • Pure virtual or abstract, what's in a name?

    - by Steven Jeuris
    While discussing a question about virtual functions on Stack Overflow, I wondered whether there was any official naming for pure (abstract) and non-pure virtual functions. I have always relied on Wikipedia for my information, which states that "pure" and "non-pure" virtual functions are the general terms. Unfortunately, the article doesn't back this up with an origin or references. To quote Jon Skeet's answer to my reply that pure and non-pure are the general terms used: "@Steven: Hmm... possibly, but I've only ever seen it in the context of C++ before. I suspect anyone talking about them is likely to have a C++ background :)" Did the terms originate in C++, or were they first defined or implemented in an earlier language, and are they the 'official' scientific terms?

  • How to name a bug?

    - by Pieter
    Bugs usually receive a descriptive name: "that X-Y synchronization issue", "that crash after actions A, B and D but not C", "yesterday's update problem". Even the JIRA issue tracker has a field "Summary" instead of "Name". In discussing "big" bugs, I actually use JIRA IDs to prevent confusion. There are a few restrictions to take into account:

      • When reporting a bug, only the consequence of the bug is known. The root cause might never even be found.
      • Several reported bugs might turn out to be duplicates, or might be completely different consequences of the same bug.
      • In large projects, bugs will come at you by the dozens every month.

    Now, how would you name a bug? Name them like hurricanes, perhaps?

  • Is there an established convention for separating Windows file names in a string?

    - by Heinzi
    I have a function which needs to output a string containing a list of file paths. I can choose the separation character, but I cannot change the data type (e.g. I cannot return a List<string> or something like that). Wanting to use some well-established convention, my first intuition was to use the semicolon, similar to what Windows's PATH and Java's CLASSPATH (on Windows) environment variables do:

        C:\somedir\somefile.txt;C:\someotherdir\someotherfile.txt

    However, I was surprised to notice that ; is a valid character in an NTFS file name. So, is the established best practice to just ignore this fact (i.e. "no sane person should use ; in a file name, and if they do, it's their own fault"), or is there some other established character for separating Windows paths or files? (The pipe (|) might be a good choice, but I have not seen it used anywhere yet for this purpose.)
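    For what it's worth, | is one of the characters Windows forbids in file names (along with < > : " / \ ? *), so unlike ; it can never collide with path content. A minimal sketch of that approach in Python (the function names are my own):

        SEPARATOR = "|"   # illegal in Windows file names, so collision-free

        def join_paths(paths):
            return SEPARATOR.join(paths)

        def split_paths(joined):
            return [p for p in joined.split(SEPARATOR) if p]

        packed = join_paths([r"C:\somedir\somefile.txt",
                             r"C:\someotherdir\someotherfile.txt"])
        print(split_paths(packed))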

  • "My stuff" vs. "Your stuff" in UI texts

    - by John Isaacks
    When referring to a user's stuff, should you use "My" or "Your"? For example: My Cart | My Account | My Wishlist, or: Your Cart | Your Account | Your Wishlist. I found this article that argues for the use of "your". It says Flickr does this; it also says MySpace and My Yahoo are wrong. I also noticed today that Amazon uses the term "Your". However, I have heard they are the masters at testing variations and finding the best one, so what you see on their site might be the best variation, or simply something they are currently testing. I personally like the way "my" looks better, but that's just my opinion. What do you think? Which will have the better impact on customers? Does it really even matter?

  • Naming a class that decides to retrieve things from cache or a service + architecture evaluation

    - by Thomas Stock
    Hi, I'm a junior developer and I'm working on a pet project that I want to learn as much as possible from. I have the following scenario: there's a WCF service that I use to retrieve and update data, let's say Cars. So it's called CarWCFService and has GetCars(), SaveCar(), etc. It implements the interface ICarService. This isn't the actual WCF service but more like a wrapper around it. Upon retrieving data from the service, I want to store it in local memory, as a cache. I have made a class for this called CarCacheService, which also implements the interface ICarService. (I will explain later why it implements ICarService.) I don't want client code to be calling these implementations. Instead, I want to create a third implementation of ICarService that tries to read from the CarCacheService before calling the WCFCarService, stores retrieved data in the CarCacheService, and so on. Three questions:

      • How do I name this third class? I was thinking about something as simple as CarService. This does not really say what the service does exactly, though.
      • Is the naming for the other classes good? Would this naming and architecture be obvious to future programmers? This is my biggest concern.
      • Does this architecture make sense?

    The reason that I implement ICarService on the CarCacheService is mainly that it allows me to fake the WCFService while debugging. I can store dummy data in a CarCacheService instance and pass it to the CarService, together with an(other) empty CarCacheService. If I made CarCacheService and WCFService public, I could let client code decide whether to drop the caching and just work directly on the WCFService.
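    The "third class" described above is essentially a read-through cache wrapping another ICarService, so a name that says so, such as CachedCarService, is one option. A language-neutral sketch of the arrangement in Python (any name beyond those in the question is a suggestion):

        from abc import ABC, abstractmethod

        class ICarService(ABC):
            @abstractmethod
            def get_cars(self): ...

        class WcfCarService(ICarService):
            """Wrapper around the real WCF service (stubbed out here)."""
            def get_cars(self):
                return ["car fetched over WCF"]

        class CarCacheService(ICarService):
            """In-memory cache; also usable alone as a fake while debugging."""
            def __init__(self):
                self._cars = None
            def get_cars(self):
                return self._cars
            def store(self, cars):
                self._cars = cars

        class CachedCarService(ICarService):
            """Reads from the cache first, falling back to the backend."""
            def __init__(self, cache, backend):
                self._cache, self._backend = cache, backend
            def get_cars(self):
                cars = self._cache.get_cars()
                if cars is None:
                    cars = self._backend.get_cars()
                    self._cache.store(cars)
                return cars

    Typing the backend argument against ICarService is what enables the debugging trick from the question: a pre-filled CarCacheService can stand in for the WCF wrapper.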

  • Scientific evidence that supports using long variable names instead of abbreviations?

    - by Sebastian Dietz
    Is there any scientific evidence that the human brain can read and understand fully written variable names better/faster than abbreviated ones? Like PersistenceManager persistenceManager; in contrast to PersistenceManager pm; I have the impression that I get a better grasp of code that does not use abbreviations, even if the abbreviations would have been commonly used throughout the codebase. Can this individual feeling be backed up by any studies?

  • [YYYY].[MM].[DD].[hh][mm] vs. [major].[minor].[revision] [closed]

    - by ef2011
    Possible Duplicate: What “version naming convention” do you use? I am currently debating between the traditional versioning convention [major].[minor].[revision] and my own, almost whimsical, [YYYY].[MM].[DD].[hh][mm] for a new project I am starting. I understand that [major].[minor].[revision] is probably the most popular versioning method on the planet and it is indeed pretty straightforward and reasonable, except that determining which changes merit the label "major", "minor" or even "revision" could be... subjective. A versioning system based on a timestamp is purely non-subjective and guarantees uniqueness. Which one would you choose for your project and why?
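    If the timestamp scheme wins, it is also trivial to automate; a sketch in Python (the function name is invented):

        from datetime import datetime

        def timestamp_version(now=None):
            """[YYYY].[MM].[DD].[hh][mm], unique down to the minute."""
            now = now or datetime.now()
            return now.strftime("%Y.%m.%d.%H%M")

        print(timestamp_version(datetime(2011, 6, 14, 9, 30)))   # 2011.06.14.0930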

  • Should I use title case in URLs?

    - by Amadiere
    We are currently deciding on a consistent naming convention across a site with multiple web applications. Historically, I've been an advocate of "lowercase all the letters!" when creating URLs:

        http://example.com/mysystem/account/view/1551

    However, within the last year or two, specifically since I began using ASP.NET MVC and had more dealings with REST-based URLs, I've become a fan of capitalizing the first letter of each section/word within the URL, as it makes it easier to read (imho):

        http://example.com/MySystem/Account/View/1551

    We're not in a situation where people need to read or be able to understand the URLs, so that's not a driver per se. The main thing we are after is a consistent approach that is rational and makes sense. Are there any standards that declare it good to do one way or another, or issues that we may run into on (at least realistically modern) setups that would favor one preference over another? What is the general consensus in this debate currently?

  • Are very short or abbreviated method/function names that don't use full words bad practice or a matter of style?

    - by Alb
    Is there nowadays any case for brevity over clarity in method names? Tonight I came across the Python function repr(), which seems like a bad name to me. It's not an English word. It is apparently an abbreviation of 'representation', and even if you can deduce that, it still doesn't tell you what the function does. A good method name is subjective to a certain degree, but I had assumed that modern best practices agreed that names should be at least full words and descriptive enough to reveal enough about the method that you could easily find it when looking for it. Method names made from words help your code read like English. repr() seems to have no advantage as a name other than being short, and IDE auto-complete makes that a non-issue. An additional reason given in an answer is that Python names are brief so that you can do many things on one line. Surely the better way is to just extract the many things into their own functions, and repeat until lines are not too long. Are these names just a hangover from the Unix way of doing things? Commands with names like ls, rm, ps and du (if you can even call those names) were hard to find and hard to remember. I know that the everyday usage of commands such as these differs from methods in code, so whether those are bad names is a separate matter.

  • What is the advantage to using a factor of 1024 instead of 1000 for disk size units?

    - by Joe Z.
    When considering the disk space of a storage medium, the computer or operating system will normally represent it in terms of powers of 1024: a kilobyte is 1,024 bytes, a megabyte is 1,048,576 bytes, a gigabyte is 1,073,741,824 bytes, and so on. But I don't see any practical reason why this convention was adopted. Usually when disk size is represented in kilo-, mega-, or gigabytes, it has to be converted into decimal first. In places where a power-of-two byte count actually matters (like the block size on a file system), the size is given in bytes anyway (e.g. 4096 bytes).

    Was it just a little aesthetic novelty that computer makers decided to adopt but storage medium vendors decided to disregard? Whenever you buy a hard drive, there's always a disclaimer nowadays that says "one gigabyte means one billion bytes". Using the binary definition of "gigabyte" would artificially inflate the byte count of a device, making drive makers either pack 1.1 terabytes into a drive in order to have it show up as "1 TB", or simply pack 1 terabyte in and have it show up as "931 GB" (and most of them do the latter).

    Some people have decided to use units like "KiB" or "MiB" in favour of "KB" and "MB" in order to distinguish the two. But is there any merit to the binary prefixes in the first place? There's probably a bit of old history I'm not aware of on this topic, and if there is, I'm looking for somebody to explain it. (Apologies if this is in the wrong place. I felt that a question on best practice might belong here, but I have faith that it will be migrated to the right place if it's incorrect.)
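    The two figures quoted above fall straight out of the 1000-vs-1024 mismatch; a quick check in Python:

        # A decimal "1 TB" drive (10**12 bytes) expressed in binary gigabytes:
        print(10**12 / 2**30)   # ~931.32, the "931 GB" the OS reports
        # Bytes a vendor would need for the OS to report a full binary "1 TB":
        print(2**40 / 10**12)   # ~1.0995, i.e. "pack 1.1 terabytes in"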

  • Test descriptions/names: say what the test is, or what it means when it fails?

    - by xenoterracide
    The API docs for Test::More::ok show:

        ok($got eq $expected, $test_name);

    Right now, in one of my apps, I have $test_name describe what the test is testing; for example, in one of my tests I have set it to 'filename exists'. What I realized after a recent bug report is that the only time I ever see this message is when the test is failing, and if the test is failing, that means the file doesn't exist. In your opinion, should these $test_name values say what the test means when it succeeds, what it means when it fails, or something else? Please explain why.
