Search Results

Search found 2703 results on 109 pages for 'just curious'.


  • Japanese Multiplication simulation - is a program actually capable of improving calculation speed?

    - by jt0dd
    On SuperUser, I asked a (possibly silly) question about processors using mathematical shortcuts, and I'd like to look at the possibility of applying that concept in software. I'd like to write a simulation of Japanese Multiplication to get benchmarks on large calculations using the shortcut vs. traditional CPU multiplication. I'm curious as to whether it makes sense to try this. My question: I'd like to know whether or not a software math shortcut, as described above, is actually a shortcut at all. This is a question of programming concept: by utilizing a simulation of Japanese Multiplication, is a program actually capable of improving calculation speed? Or am I doomed from the start? The answer to this question isn't required to determine whether or not the experiment will succeed, but rather whether or not it's logically possible for such a thing to occur in any program, using this concept as an example. My theory is that since addition is computed faster than multiplication, a simulation of Japanese multiplication may actually allow a program to multiply (large) numbers faster than the CPU's arithmetic unit can. I think this would be a very interesting finding, if it proves to be true. If, in the multiplication of numbers of any immense size, the shortcut were to calculate the result in fewer instructions (or faster) than traditional ALU multiplication, I would consider the experiment a success.
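    The "shortcut" in the line-counting method boils down to digit-by-digit products summed along diagonals, so one way to test the premise is to race exactly that against the built-in multiplication. A minimal benchmark sketch (mine, not from the question), with BigInteger.multiply as the baseline:

        import java.math.BigInteger;
        import java.util.Random;

        public class LatticeBenchmark {
            // Japanese/lattice multiplication: each "intersection count" is a digit
            // product, and the diagonal sums (with carries) form the result.
            static int[] latticeMultiply(int[] a, int[] b) {   // least-significant digit first
                int[] result = new int[a.length + b.length];
                for (int i = 0; i < a.length; i++)
                    for (int j = 0; j < b.length; j++)
                        result[i + j] += a[i] * b[j];          // intersections per diagonal
                for (int k = 0; k < result.length - 1; k++) {  // resolve carries
                    result[k + 1] += result[k] / 10;
                    result[k] %= 10;
                }
                return result;
            }

            static int[] randomDigits(int n, Random rng) {
                int[] d = new int[n];
                for (int i = 0; i < n; i++) d[i] = rng.nextInt(10);
                return d;
            }

            static String digitsToString(int[] d) {
                StringBuilder sb = new StringBuilder();
                for (int i = d.length - 1; i >= 0; i--) sb.append(d[i]);
                return sb.toString();
            }

            public static void main(String[] args) {
                Random rng = new Random(42);
                int[] a = randomDigits(2000, rng), b = randomDigits(2000, rng);

                long t0 = System.nanoTime();
                latticeMultiply(a, b);
                long latticeNs = System.nanoTime() - t0;

                BigInteger ba = new BigInteger(digitsToString(a));
                BigInteger bb = new BigInteger(digitsToString(b));
                long t1 = System.nanoTime();
                ba.multiply(bb);                               // the baseline to beat
                long builtinNs = System.nanoTime() - t1;

                System.out.printf("lattice: %d us, BigInteger: %d us%n",
                        latticeNs / 1000, builtinNs / 1000);
            }
        }

    Note that modern BigInteger implementations already switch to Karatsuba/Toom-Cook for large operands, so the hand-rolled lattice version is expected to lose; the interesting measurement is how the gap scales with digit count.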

    Read the article

  • Hobbyist programmer releasing software with a donate button

    - by espais
    I'd like to start this with a disclaimer that I realize a full, clear-cut answer should be sought out from a lawyer; I am more curious about what other users of this community have done. Say that I had a small program that I had developed for fun, which I wished to release to the public. I'll drop it out there with one of the various open-source licenses, and probably put it up on SourceForge or Git in case anybody should ever want to fork/maintain/check out the code. Also say that I wanted to accept donations for the project, with absolutely zero expectation that people will send any money. However, if somebody donated in order to buy me a beer or a pizza for the work that they liked, I would accept gladly. The question, then, is what are the general requirements for accepting donations? Can it go into a personal account with no questions asked as a "gift," or do I need to set up an LLC to avoid any taxation issues? (US citizen here.) Again, yes, this should be discussed with a lawyer, but I also know that many projects I see have the ability to donate, and I assume the community probably has a decent amount of experience in this regard.

    Read the article

  • Interesting 3d zooming technique

    - by stark
    Is it possible to zoom to a certain point on screen by modifying the field of view and rotating the camera so as to keep that point/object in the same place on screen while zooming? Changing the camera position is not allowed. I projected the 3D position of the object onto the screen and remembered it. Then on each frame I calculate the direction to it in camera space and construct a rotation matrix to align this direction to the Z axis (in camera space). After this, I calculate the direction from the camera to the object in world space, transform this vector with the matrix I obtained earlier, and then use the final vector as the camera's new direction. It's actually "kinda working"; the problem is that the result is more or less off from the camera's rotation before the zoom started, depending on the area you are trying to zoom in on (larger error near the edges/corners). It looks acceptable, but I'm not settling for only this. Any suggestions/resources for doing this technique perfectly? If some of you want to explain the math in detail, be my guests, I can understand these things well. Thanks. Edit: I'll check often for responses, I'm really curious about this :D
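    One way to frame the compensation (a sketch of mine, not from the question): for a symmetric perspective projection, a point at normalized screen coordinate x sits at an angle atan(x * tan(fov/2)) from the view axis, so when the FOV changes you can rotate the camera by the difference of those angles to keep x fixed. Treating the horizontal and vertical axes independently like this is only approximate off-axis, which matches where the question reports the largest error; the sign conventions below are assumptions.

        // Extra yaw/pitch (radians) to apply when the FOV changes, so a target at
        // normalized screen coords (ndcX, ndcY) in [-1, 1] keeps its projected position.
        public final class FovZoom {

            public static double[] compensatingRotation(double ndcX, double ndcY,
                                                        double oldFovY, double newFovY,
                                                        double aspect) {
                double oldTanY = Math.tan(oldFovY * 0.5), newTanY = Math.tan(newFovY * 0.5);
                double oldTanX = oldTanY * aspect,        newTanX = newTanY * aspect;

                // angle of the target from the view axis, before and after the FOV change
                double yawOld   = Math.atan(ndcX * oldTanX);
                double yawNew   = Math.atan(ndcX * newTanX);
                double pitchOld = Math.atan(ndcY * oldTanY);
                double pitchNew = Math.atan(ndcY * newTanY);

                // rotate by the difference so the projected position is (approximately) unchanged
                return new double[] { yawOld - yawNew, pitchOld - pitchNew };
            }

            public static void main(String[] args) {
                // zoom from 60 to 30 degrees onto a point in the upper-right quarter of the screen
                double[] d = compensatingRotation(0.5, 0.5,
                        Math.toRadians(60), Math.toRadians(30), 16.0 / 9.0);
                System.out.printf("yaw %+.3f rad, pitch %+.3f rad%n", d[0], d[1]);
            }
        }

    An exact fix would compute the camera-space direction that projects to (ndcX, ndcY) under the new FOV and build a single rotation aligning the target with that direction, instead of splitting the correction into independent yaw and pitch.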

    Read the article

  • Free space in tmpfs partition not adding up

    - by Dean Herbert
    I have had my /tmp/ partition filling up recently when it should not be anywhere near full. On further investigation, I found that the partition was listing far less free space than it should have. I am guessing a remount will fix this, but I am very curious as to why this has happened and where this space has gone.
    du output:
        root@odoroki:/tmp# du --summarize -h
        3.3M    .
    df output:
        root@odoroki:/tmp# df -h /tmp
        Filesystem      Size  Used Avail Use% Mounted on
        tmpfs           3.9G  3.3G  653M  84% /tmp
    Update: after deleting some files it has happened again.
    du output:
        root@odoroki:/tmp# du -h --summarize
        11M     .
    df output:
        root@odoroki:/tmp# df -h /tmp
        Filesystem      Size  Used Avail Use% Mounted on
        tmpfs           3.9G  3.9G     0 100% /tmp
    I have a feeling this started after a recent apt-get upgrade, but it still seems like strange behaviour. I did a quick scan over lsof output and couldn't see any open/stuck file handles. Unfortunately, due to the seriousness of the issue I had to reboot the server, after which usage seems to match correctly.

    Read the article

  • Should components have sub-components in a component-based system like Artemis?

    - by Daniel Ingraham
    I am designing a game using Artemis, although this is more of a philosophical question about component-based design in general. Let's say I have non-primitive data which applies to a given component (an "animal" component may have qualities such as "teeth" or "diet"). There are three ways to approach this in data-driven design, as I see it:
    1) Generate classes for these qualities using "traditional" OOP. I imagine this has negative implications for performance, as systems must then be made aware of these qualities in order to process them. It also seems counter to the overall philosophy of data-driven design.
    2) Include these qualities as sub-components. This seems off, in that we are now confusing the role of components with that of entities. Moreover, out of the box, Artemis isn't capable of mapping these sub-components onto their parent components.
    3) Add "teeth", "diet", etc. as components to the overall entity alongside "animal". While this feels odd hierarchically, it may simply be a peculiarity of component-based systems.
    I suspect 3 is the correct way to think about things (sketched below), but I was curious about other ideas.
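    A minimal sketch of option 3 in plain Java (deliberately not the Artemis API; class and field names here are illustrative only): "teeth" and "diet" are flat components stored on the entity next to "animal", and anything interested in feeding behaviour asks for Diet directly.

        import java.util.HashMap;
        import java.util.Map;

        class Component {}

        class Animal extends Component { String species; }
        class Teeth  extends Component { int count; }
        class Diet   extends Component { String primaryFood; }

        class Entity {
            private final Map<Class<? extends Component>, Component> components = new HashMap<>();

            <T extends Component> Entity add(T c) { components.put(c.getClass(), c); return this; }

            @SuppressWarnings("unchecked")
            <T extends Component> T get(Class<T> type) { return (T) components.get(type); }
        }

        class Demo {
            public static void main(String[] args) {
                Entity wolf = new Entity().add(new Animal()).add(new Teeth()).add(new Diet());

                // A feeding system can depend on Diet alone, without knowing that
                // an Animal component also happens to exist on the same entity.
                Diet diet = wolf.get(Diet.class);
                System.out.println("has diet component: " + (diet != null));
            }
        }

    The usual argument for keeping the hierarchy flat is exactly this: systems declare interest in precisely the components they process, and never have to reach inside a parent component.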

    Read the article

  • Tumblr custom domain not redirecting properly

    - by Manic
    I decided to host my blog at Tumblr, using their custom domain setup (http://blog.smokingfishgames.com/ instead of http://smokingfishgames.tumblr.com). However, it's been 72 hours and I'm still getting spotty redirection. It works some of the time--I go and see the page and blog, and it's all fine. However, it occasionally just stops working and redirects back to my web host, which is a directory with nothing but a single file called BUGGER.html (which I stuck in to make sure that it was my web host and not some Tumblr empty directory). Clearing the Chrome DNS cache makes the problem go away--for a while. After a few minutes, or an hour, or however long, I'll start seeing BUGGER.html again. I clear the cache, and poof, the blog shows up. The thing that's curious to me is that when I clear the cache and get BUGGER.html again (which happens occasionally), I can look at my Chrome DNS cache and see:
        assets.tumblr.com            UNSPECIFIED
        blog.smokingfishgames.com    UNSPECIFIED
        www.tumblr.com               UNSPECIFIED
    IP addresses and expiration times omitted for brevity's sake--if they're important I'm sure I can replicate the issue. This implies, to me anyway, that my browser is reaching Tumblr but getting bounced back to my web host. Any reason why this would be happening, or is this a normal symptom of DNS propagation? If it is a problem, should I be bothering Tumblr or my host with it, or is this something I can fix myself?

    Read the article

  • What is the target of Unity?

    - by burli
    Unity was first developed for netbooks, but the netbook market is shrinking. Unity is not specialized for tablet PCs like Android 3, though it may work well with some specialized apps for those devices. Unity is still nice for notebooks with small displays, but there is no big advantage on the desktop compared with other desktop environments like GNOME 2/3 or KDE. So what's the point? My first suggestion was a hybrid between a tablet PC and a desktop, for example for a manager. He can plug the tablet into a docking station in his office and work at a normal desktop, which is not possible with iOS or Android. If he is in a meeting he can use it as a tablet to make notes, for example, or if he is somewhere else outside the office or the company. Same for normal users: they can dock the tablet and use it like a normal desktop PC, or they can lie on the couch and browse the web, read a book or chat with friends. So, that's my suggestion. But what is the real plan for Unity, or Ubuntu in general? I'm curious ;)

    Read the article

  • First Shard for SQL Azure and SQL Server

    - by Herve Roggero
    That's it! It's ready to go and be tested, abused and improved! It requires .NET 4.0 and uses some cool technologies, like caching (the new System.Runtime.Caching) and the Task Parallel Library (System.Threading.Tasks). With this library you can:
    - Define a shard of 1, 2 or 100 SQL databases (a mix of SQL Server and SQL Azure)
    - Read from the shard in parallel or sequentially, and cache resultsets
    - Update or delete a record from the shard
    - Insert records quickly into the shard with a round-robin load
    - Reset the cache
    You can download the source code and a sample application here: http://enzosqlshard.codeplex.com/
    Note about the breadcrumbs: I had to add a connection GUID in order for the library to know which database a record came from. The GUID is currently calculated on the fly in the library using some of the parameters of the connection string. The GUID is also dynamically added to the result set so the client can pass it back to the library. I am curious to get your feedback on this approach.
    ** Correction from my previous post: this is a library for a Horizontal Partition Shard (HPS): tables are split across databases horizontally, so in essence the tables need to have the same schema across the databases.
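    For readers unfamiliar with the round-robin part, here is a concept sketch in Java (emphatically not the Enzo SQL Shard API, which is .NET): successive inserts are simply spread evenly across the shard's member databases.

        import java.util.List;
        import java.util.concurrent.atomic.AtomicLong;

        // Concept only: pick the next member database for each insert in turn.
        class RoundRobinShard<C> {
            private final List<C> members;            // one handle per member database
            private final AtomicLong counter = new AtomicLong();

            RoundRobinShard(List<C> members) { this.members = members; }

            C nextForInsert() {
                int index = (int) (counter.getAndIncrement() % members.size());
                return members.get(index);
            }
        }

        class ShardDemo {
            public static void main(String[] args) {
                RoundRobinShard<String> shard =
                        new RoundRobinShard<>(List.of("azure-db-1", "azure-db-2", "onprem-db-3"));
                for (int i = 0; i < 6; i++) {
                    System.out.println("record " + i + " -> " + shard.nextForInsert());
                }
            }
        }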

    Read the article

  • Why is math taught "backwards"? [closed]

    - by Yorirou
    A friend of mine showed me a pretty practical Java example. It was a riddle. I got excited and quickly solved the problem. Afterwards, he showed me the mathematical explanation of my solution (he proved why it is correct), and it was completely clear to me. This seems like a natural approach to me: solve problems, then generalize. It is very familiar to me; I do it all the time when I am programming: I write a function, and when I have to write a similar function, I generalize the problem, grab the generic parts, refactor them into a function, and solve the original problems as specializations of the general function. At the university (or at least where I study), things work backwards. The professors show just the highest possible level of the solutions ("cryptic" mathematical formulas). My problem is that this is too abstract for me. There is no connection to my previous knowledge (== reality, in my sense), so even if I can understand it, I can't really learn it properly. Others learn these formulas word for word and get good grades, since they can write exactly the same thing on the test, but this is not an option for me. I am a curious person; I can learn interesting things, but I can't learn mere text. My brain is for storing thoughts, not strings. There are proofs for the theories, but they are also really hard to understand for the same reason, and in most cases they are omitted. What is the reason for this? I don't understand why it is a good idea to show only the really high level of abstraction and leave out the practical connections (or some important ideas / practical motivations).

    Read the article

  • Is the Alternate Ubuntu installer still required for LVM or Software RAID setup?

    - by jimp
    Over the past 5 years, I have been setting up Ubuntu servers using the Alternate installer. I need to provision a new server today, and I'm curious whether the Alternate CD is still the only way to set up LVM/RAID at installation time. In my limited experience with Red Hat Enterprise Linux, I noticed its single installer configures LVM automatically. Has Ubuntu's installer, at least the standard "Server" installer, added support for LVM/RAID, or is the Alternate installer still required for that kind of server setup? http://mirror.anl.gov/pub/ubuntu-iso/DVDs/ubuntu/12.04.1/release/ says: "Alternate install CD: The alternate install CD allows you to perform certain specialist installations of Ubuntu. It provides for the following situations: setting up automated deployments; upgrading from older installations without network access; LVM and/or RAID partitioning; installs on systems with less than about 384MiB of RAM (although note that low-memory systems may not be able to run a full desktop environment reasonably)." LVM has always been fundamental for our server needs, so I'm surprised if it is still not considered a server-worthy feature.

    Read the article

  • Count function on tree structure (non-binary)

    - by Spevy
    I am implementing a tree data structure in C#, based largely on Dan Vanderboom's generic implementation. I am now considering my approach to handling a Count property, which Dan does not implement. The obvious and easy way would be to use a recursive call which traverses the tree, happily adding up nodes (or iteratively traversing the tree with a queue and counting nodes, if you prefer). It just seems expensive. (I also may want to lazy-load some of my nodes down the road.) I could maintain a count at the root node: all children would traverse up to and/or hold a reference to the root, and update an internally settable count property on changes. This would push the iteration problem to whenever I want to break off a branch or clear all children below a given node. That is generally less expensive, and puts the heavy lifting in what I think will be less frequently called functions. It also seems a little brute force, and that usually means exception cases I haven't thought of yet (or bugs, if you prefer). Does anyone have an example of an implementation which keeps a count for an unbalanced and/or non-binary tree structure, rather than counting on the fly? Don't worry about the lazy load, or the language; I am sure I can adjust the example to fit my specific needs. EDIT: I am curious about an example, rather than instructions or discussion. I know this is not technically difficult...
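    A sketch of the cached-count approach (in Java rather than C#, and not tied to Dan Vanderboom's classes): every node keeps its subtree size, each add walks up the parent chain adjusting the totals by one, and the only subtree-sized update happens when a branch is detached.

        import java.util.ArrayList;
        import java.util.List;

        class TreeNode<T> {
            private final T value;
            private TreeNode<T> parent;
            private final List<TreeNode<T>> children = new ArrayList<>();
            private int subtreeCount = 1;             // this node plus all descendants

            TreeNode(T value) { this.value = value; }

            T value() { return value; }

            int count() { return subtreeCount; }      // O(1) at any node, including the root

            TreeNode<T> addChild(T childValue) {
                TreeNode<T> child = new TreeNode<>(childValue);
                child.parent = this;
                children.add(child);
                propagate(+1);                        // bubble the +1 up to the root
                return child;
            }

            /** Detaches this node (and its whole subtree) from its parent. */
            void detach() {
                if (parent == null) return;
                parent.children.remove(this);
                parent.propagate(-subtreeCount);      // the one place a subtree-sized update happens
                parent = null;
            }

            private void propagate(int delta) {
                for (TreeNode<T> n = this; n != null; n = n.parent) {
                    n.subtreeCount += delta;
                }
            }
        }

    Lazy loading fits the same shape: materializing children is just a batch of addChild calls, so the cached counts stay consistent without a separate traversal.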

    Read the article

  • What is the "default" software license?

    - by Tesserex
    If I release some code and binaries but don't include any license at all with them, what are the legal terms that apply by default (in the US, where I am)? I know that I automatically have copyright without doing anything, but what restrictions are there on it? If I upload my code to GitHub and announce it as a free download / contribute at will, are people then allowed to modify and close-source my work? I haven't said that they cannot, as the GPL would, but I don't feel it would be acceptable by default to steal my work either. So what can and cannot people do with code that is freely available but has absolutely no licensing terms attached? By the way, I know it would be a good idea for me to pick a license and apply it to my code soon, but I'm still curious about this. Edit: Thanks! So it looks like the consensus is that it starts out very restricted, and then my actions imply any further rights. If I just put software on my website with no security, it would be an infringement to download it. If I post a link to that download on a forum, that would implicitly give permission to use it for free, but not to distribute it or its derivatives (though you can modify it for your own use). If I put it on GitHub, then it is conveyed as FOSS. Again, this is probably not codified exactly in law, but it may be enough to be defensible in court. It's still a good idea to post a complete license to be safe.

    Read the article

  • Should Developers Perform All Tasks or Should They Specialize?

    - by Bob Horn
    Disclaimer: The intent of this question isn't to discern what is better for the individual developer, but for the system as a whole. I've worked in environments where small teams managed certain areas. For example, there would be a small team for each of these functions: UI, framework code, business/application logic, database. I've also worked on teams where the developers were responsible for all of these areas and more (QA, analyst, etc.). My current environment promotes agile development (specifically Scrum) and everyone has their hands in every area mentioned above. While there are pros and cons to each approach, I'd be curious to know if there are more pros and cons than I list below, and also what the general feeling is about which approach is better.
    Devs do it all. Pros: 1. Developers may be more well-rounded. 2. Developers know more of the system. Cons: 1. Everyone has their hands in all areas, increasing the probability of creating less-than-optimal results in that area. 2. It can take longer to do something with which you are unfamiliar (jack of all trades, master of none).
    Devs specialize. Pros: 1. Developers can create policies and procedures for their area of expertise and more easily enforce them. 2. Developers have more of a chance to become deeply knowledgeable about their specific area and make it the best it can be. 3. Other developers don't cross boundaries and degrade another area. Cons: 1. As one colleague put it: "Why would you want to pigeonhole yourself like that?" (Meaning some developers won't get a chance to work in certain areas.)
    It's easy to say how wonderful agile is and that we should all do everything, but I'm somewhat of a fan of having areas of expertise. Without that expertise, I've seen code degrade, database schemas become difficult to manage, UI code get hacked together, etc. Let's face it, some people make careers out of doing just UI work, or just database work. It's not that easy to just fill in and do as good a job as an expert in that area.

    Read the article

  • How do you maintain focus when a particular aspect of programming takes 10+ seconds to complete?

    - by Jer
    I have a very difficult time focusing on what I'm doing (programming-wise) when something (compilation, startup time, etc.) takes more than just a few seconds. Anecdotally it seems that threshold is about 10 seconds (and I recall reading about a study that said the same thing, though I can't find it now). So what typically happens is I make a change and then run the program to test it. That takes about 30 seconds, so I start reading something else, and before I know it 20 minutes have passed, and then it takes (if I'm lucky!) another 10+ minutes to deal with the context switch back into programming. It's not an exaggeration to say that some things that should take me minutes literally take hours to complete. I'm very curious about what other programmers do to combat this tendency (or whether I'm unique and they don't have it). Suggestions of any type at all are welcome - anything from "sit on your hands after hitting the compile button", to mental tricks, to "if it takes 30 seconds to start up something to test a change, then something's wrong with your development process!"

    Read the article

  • How to collaborate on features using github

    - by Robert Dailey
    GitHub encourages one fork per user, so that each user can work independently on a feature and then request that the feature be accepted into the main repository via a pull request. However, what if two developers need to collaborate on that feature? What is the ideal workflow for this? I could see a number of options:
    1. Both developers fork the original repository. Each developer pulls/pushes changes between the other's repository. This seems like a lot of work (tiny micro-operations) and also creates a delay between changes, which increases the window for conflicts.
    2. Developer 1 forks from the main repository, and Developer 2 forks from Developer 1. Much the same as #1, but hopefully it simplifies Developer 2's life a little?
    3. Developer 1 gives Developer 2 permissions on his own fork, so they both work out of the same shared repository. Not sure if this is ideal.
    I'm also curious where branches come into this. Obviously there would be a branch for the feature itself, but that branch can't exist in a single place; it would have to exist on multiple forks and be kept synchronized. Basically I'm really confused about this workflow and would like an approach for how it can best be accomplished.

    Read the article

  • Storing Tiled Level Data in J2ME game

    - by Alex
    I'm developing a J2ME game which uses tiled backgrounds for the levels. My question is how to store this tile information in my game. At the moment it is stored as an array, with each number representing a different tile from the tile sheet. This works well enough; however, I don't like the fact that it is hard-coded into the game because (at least in my opinion) it makes it harder to edit the levels or design new ones. I was also thinking that it would be difficult if you wanted to add a "level pack". I'm not sure how this would be achieved, though; it's not something I was planning on doing, I'm just curious. I was wondering if there is a way I could store level data in some external file and then load it into the game. The problem is I don't know what the limitations are for J2ME regarding file I/O: can it read in any file the way desktop Java can? I am aware of the RMS, but from my experience I don't think this would work (unless I am mistaken). Also, would loading the data in this way be too big a performance hit? Or is there another way I can achieve what I am trying to do? As I said, the way I have it at the moment works fine, and if this is the only viable option then it will suffice.
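    One common route (a sketch of mine, not from the question) is to bundle each level as a binary resource inside the MIDlet JAR and read it with getResourceAsStream, which CLDC/MIDP supports; arbitrary filesystem access would need JSR-75, which not every handset has. The file layout below (width, height, then one byte per tile) is an assumption for illustration.

        import java.io.DataInputStream;
        import java.io.IOException;
        import java.io.InputStream;

        public class LevelLoader {

            // e.g. byte[][] level = LevelLoader.loadLevel("/levels/level1.dat");
            public static byte[][] loadLevel(String resourcePath) throws IOException {
                InputStream raw = LevelLoader.class.getResourceAsStream(resourcePath);
                if (raw == null) {
                    throw new IOException("Missing level resource: " + resourcePath);
                }
                DataInputStream in = new DataInputStream(raw);
                try {
                    int width = in.readUnsignedShort();
                    int height = in.readUnsignedShort();
                    byte[][] tiles = new byte[height][width];
                    for (int row = 0; row < height; row++) {
                        in.readFully(tiles[row]);      // one tile index per byte
                    }
                    return tiles;
                } finally {
                    in.close();
                }
            }
        }

    A "level pack" then amounts to shipping more of these resource files (or a separate JAR), while RMS stays reserved for save data; reading a few kilobytes per level this way should not be a meaningful performance hit.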

    Read the article

  • Is there a name for the Builder Pattern where the Builder is implemented via interfaces so certain parameters are required?

    - by Zipper
    We implemented the builder pattern for most of our domain to make it easier to understand what is actually being passed to a constructor, and for the normal advantages that a builder gives. The one twist was that we exposed the builder through interfaces so we could chain required and optional functions and make sure that the correct parameters were passed. I was curious if there is an existing name for a pattern like this. Example below:

        public class Foo {
            private int someThing;
            private int someThing2;
            private DateTime someThing3;

            private Foo(Builder builder) {
                this.someThing = builder.someThing;
                this.someThing2 = builder.someThing2;
                this.someThing3 = builder.someThing3;
            }

            public static RequiredSomething getBuilder() {
                return new Builder();
            }

            public interface RequiredSomething {
                RequiredDateTime withSomething(int value);
            }

            public interface RequiredDateTime {
                OptionalParameters withDateTime(DateTime value);
            }

            public interface OptionalParameters {
                OptionalParameters withSomething2(int value);
                Foo build();
            }

            public static class Builder implements RequiredSomething, RequiredDateTime, OptionalParameters {
                private int someThing;
                private int someThing2;
                private DateTime someThing3;

                public RequiredDateTime withSomething(int value) { someThing = value; return this; }
                public OptionalParameters withDateTime(DateTime value) { someThing3 = value; return this; }
                public OptionalParameters withSomething2(int value) { someThing2 = value; return this; }
                public Foo build() { return new Foo(this); }
            }
        }

    Example of how it's called:

        Foo foo = Foo.getBuilder().withSomething(1).withDateTime(DateTime.now()).build();
        Foo foo2 = Foo.getBuilder().withSomething(1).withDateTime(DateTime.now()).withSomething2(3).build();

    Read the article

  • Teach Your Kid to Code Coming to Philly.NET

    - by Steve Michelotti
    Originally posted on: http://geekswithblogs.net/michelotti/archive/2014/05/20/teach-your-kid-to-code-coming-to-philly.net.aspx
    Tomorrow night (Wednesday, May 21) my son and I will be at Philly.NET presenting Teach Your Kid to Code. Bring your kid out to Philly.NET with you for a fun evening! After our first talk, I’ll then be giving an introduction to TypeScript. Of any presentation I’ve ever given, this is my favorite: Have you ever wanted a way to teach your kid to code? For that matter, have you ever wanted to simply be able to explain to your kid what you do for a living? Putting things in a context that a kid can understand is not as easy as it sounds. If you are someone curious about these concepts, this is a “can’t miss” presentation that will be co-presented by Justin Michelotti (6th grader) and his father. Bring your kid with you to Philly.NET for this fun and educational session. We will show tools you may not have been aware of like SmallBasic and Kodu – we’ll even throw in a little Visual Studio and JavaScript. Concepts such as variables, conditionals, loops, and functions will be covered while we introduce object oriented concepts without any of the confusing words. Kids are not required for entry!

    Read the article

  • Hash Algorithm Randomness Visualization

    - by clstroud
    I'm curious if anyone here has any idea how the images shown in this response were generated: Which hashing algorithm is best for uniqueness and speed? Ian posted a very well-received response, but I can't seem to understand how he went about making the images. I hate to make a new question dedicated to this, but I can't find any means to ask him more directly. On the other hand, perhaps someone has an alternative perspective. The best I can personally come up with would be something almost like a bar graph, which would illustrate how evenly the buckets of the hash table are being filled. I have a working Cocoa program that does this, but it can't generate anything like what he showed there. So the question is twofold, I suppose: A) How does one truly interpret the data he shows? Is it more than "less whitespace = better"? B) How does one generate such an image based on some set of inputs, a hash, and an index? Perhaps I'm misunderstanding entirely, but I really would like to know more about this particular visualization technique. Or maybe I'm mis-applying this to hash tables rather than just hashes in general, but in that case I don't know how it would be "bounded" for the image.
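    One plausible way to produce that kind of picture (an assumption about the method, not a reconstruction of Ian's actual code): hash a large set of sequential keys, use some of the bits of each hash as an (x, y) cell in a fixed-size grid, and darken the cell each time it is hit. A good hash scatters the points uniformly; visible stripes, clusters or empty bands suggest bias.

        import java.awt.image.BufferedImage;
        import java.io.File;
        import java.io.IOException;
        import javax.imageio.ImageIO;

        public class HashScatter {
            private static final int SIZE = 512;              // 512 x 512 grid

            // Toy hash under test; swap in FNV-1a, murmur, String.hashCode, etc.
            static int hash(int key) {
                int h = key * 0x9E3779B1;                     // multiplicative mixing constant
                return h ^ (h >>> 16);
            }

            public static void main(String[] args) throws IOException {
                int[][] hits = new int[SIZE][SIZE];
                for (int key = 0; key < 1_000_000; key++) {
                    int h = hash(key);
                    int x = h & (SIZE - 1);                   // low 9 bits
                    int y = (h >>> 9) & (SIZE - 1);           // next 9 bits
                    hits[y][x]++;
                }

                BufferedImage img = new BufferedImage(SIZE, SIZE, BufferedImage.TYPE_INT_RGB);
                for (int y = 0; y < SIZE; y++) {
                    for (int x = 0; x < SIZE; x++) {
                        int shade = Math.max(0, 255 - hits[y][x] * 32);   // more hits = darker
                        img.setRGB(x, y, (shade << 16) | (shade << 8) | shade);
                    }
                }
                ImageIO.write(img, "png", new File("hash-scatter.png"));
            }
        }

    This also suggests an answer to how the image is "bounded": the grid size fixes the number of buckets, and the hash is simply masked or folded down into that range, exactly as a hash table would do.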

    Read the article

  • creative & complex vs simple and readable

    - by Shirish11
    Which is the better option? It's not always the case that when you write something creative your code ends up looking ugly, but at times it does get a bit ugly. For example:

        if ((object1(0) == object2(0)) &&
            (object1(1) == object2(1)) &&
            (object1(2) == object2(2)) &&
            (object1(3) == object2(3))) {
            retval = true;
        } else {
            retval = false;
        }

    is simple and readable, while

        bool retValue = (object1(0) == object2(0)) &&
                        (object1(1) == object2(1)) &&
                        (object1(2) == object2(2)) &&
                        (object1(3) == object2(3));

    does the same thing more compactly, but having something like this will make some newbies scratch their heads. So which do I go for? Including simple code everywhere might sometimes hamper my performance. What I could think of was commenting wherever necessary, but at times you get too curious about what is actually happening. Any suggestions are welcome.

    Read the article

  • Question on methods in Object Oriented Programming

    - by mal
    I'm learning Java at the moment (my first language), and as a project I'm looking at developing a simple puzzle game. My question relates to the methods within a class. I have my Block-type class; it has its many attributes, set methods, get methods, and just plain methods. There are quite a few. Then I have my main board class. At the moment it does most of the logic, positioning of sprites, collision detection, and then draws the sprites, etc. As I am learning to program as much as I'm learning to program games, I'm curious to know how much code is typically acceptable within a given method. Is there such a thing as having too many methods? All my draw functionality happens in one method; should I break this into a few "sub" methods? My thinking is that if I find at a later stage that the for loop I'm using to cycle through the array of sprites searching for collisions in the spriteCollision() method is inefficient, I can code a new method and just replace the old method calls with the new one, leaving the old code intact. Is it bad practice to have a method that contains one if statement, and place the call for that method in the for loop? I'm very much in the early stages of coding/designing and I need all the help I can get! I find it a little intimidating when people talk about throwing together a prototype in a day too! Can't wait until I'm that good!
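    For what it's worth, here is a small sketch of the kind of decomposition being described (class and method names are made up, not from the question): the board's update work is split into short, named methods, and the per-sprite collision test lives in its own method even though it is only one if statement.

        import java.util.ArrayList;
        import java.util.List;

        class Sprite {
            int x, y, width, height;

            boolean intersects(Sprite other) {
                return x < other.x + other.width && other.x < x + width
                    && y < other.y + other.height && other.y < y + height;
            }
        }

        class Board {
            private final List<Sprite> sprites = new ArrayList<Sprite>();
            private final Sprite player = new Sprite();

            void update() {
                checkCollisions();           // each step reads like a sentence
                // moveSprites(), spawnBlocks(), ... would follow the same pattern
            }

            private void checkCollisions() {
                for (Sprite s : sprites) {
                    handleCollision(s);      // the "one if statement" method, called from the loop
                }
            }

            private void handleCollision(Sprite s) {
                if (player.intersects(s)) {
                    // respond to the hit; swapping in a smarter collision strategy later
                    // only means changing checkCollisions(), not every caller
                }
            }
        }

    The win is less about method count and more about each method doing one nameable thing; a single-if helper called from a loop is a common and readable idiom.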

    Read the article

  • Including Microsoft.XNA.Framework.Input.Touch in a project?

    - by steven_desu
    So after running through tutorials by both Microsoft and www.xnadevelopment.com, I feel very confident in my ability to get to work on my first game using the XNA Framework. I've manipulated sprites, added audio, changed game states, and even went a step further to apply the knowledge I had and figure out how to make animations and basic 2-dimensional physics (including impulses, force, acceleration, and speed calculations). But then, shortly into the project, I hit a curious bump that I've been unable to figure out. Wanting to implement menus, pause screens, and several different aspects of play (a "pre-level" prep screen, the level itself, and a screen after the level to review how well you did), I took a look at Microsoft's Game State Management sample. I understood the concept, although it was admittedly quite a lot to take in. Not wanting to recreate the entire thing from scratch (after all, what purpose would that serve?), I tried copying and pasting the sample code into my own ScreenManager class (as well as InputState and GameScreen classes) to try and borrow their ingenuity. When I did this, however, my project stopped compiling. I was getting the following error: The type or namespace name 'Touch' does not exist in the namespace 'Microsoft.Xna.Framework.Input' (are you missing an assembly reference?) Having read through their sample code already, I realized that this namespace and every function and class within it could be safely ripped from the code without losing functionality. It's a namespace simply for integrating with touchscreen devices (presumably Windows Phone 7, but maybe also tablets). But then I began to wonder: how come Microsoft's sample compiled but mine didn't? I copied their code exactly, so there must be a setting somewhere that I need to change in Visual Studio in order to correct this. I tried creating a new project as a Windows Phone 7 game rather than a Windows game; however, that only forced it to compile to a Windows Phone emulator and denied me the ability to change the resolution and other features which I clearly had the power to do in the sample code. So my question is simple - how do I properly use the namespace Microsoft.Xna.Framework.Input.Touch?

    Read the article

  • Conventions for search result scoring

    - by DeaconDesperado
    I assume this type of question is more on-topic here than on regular SO. I have been working on a search feature for my team's web application and have had a lot of success building a multithreaded, "divide and conquer" processing system to work through a large amount of fulltext. Our problem domain is pretty specific. Users of the app generate posts, and as a general rule, posts that are more recent are considered to be of greater relevance. Some of the data we are trying to extract from search is very specific (users' feelings about particular items or things), and we are using Python nltk to do named-entity extraction to find interesting likely query terms. Essentially we look for descriptive adjective-noun pairs and generate a general picture of a user's expressed sentiment as a list of tokens. This search is intended as an internal tool for our team to draw out a local picture of sentiments like "soggy pizza." There's some machine learning in there too, to do entity resolution from terms like "soggy" to all manner of adjectives expressing nastiness. My problem is that I am at a loss for how to go about scoring these results. The text being searched is split up into tokens in a list, so my initial approach would be a float score normalized between 0.0 and 1.0, generated from how far into the list the terms appear and how often they are repeated (a later mention of the term being worth less, an earlier one more, and greater frequency meaning a greater score). A certain amount of weight could be given to the timestamp as well, though I am not certain how to calculate this. I am curious if anyone has had to solve a similar problem of grading search relevance between appreciable metrics (frequency, term location/colocation, recency) and whether there are any guidelines for how to weight each. I should mention as well that the final fallback procedure in the search is to pipe the query to Sphinx, which has its own scoring practices. Sphinx operates as the last resort in case our application-specific processing can't find any eligible candidates.
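    To make the shape of such a blend concrete, here is a sketch (mine; the weights and the exponential recency decay are illustrative assumptions, not values from the question or from Sphinx) that keeps each component in [0, 1] and mixes them with weights that sum to 1, so the final score stays bounded:

        import java.time.Duration;
        import java.time.Instant;
        import java.util.List;

        public class RelevanceScorer {
            private static final double W_POSITION  = 0.4;   // earlier mentions count more
            private static final double W_FREQUENCY = 0.3;   // repeated mentions count more
            private static final double W_RECENCY   = 0.3;   // newer posts count more
            private static final double HALF_LIFE_DAYS = 7.0;

            /** positions: zero-based indexes of the matched term within the token list. */
            public static double score(List<Integer> positions, int tokenCount, Instant postedAt) {
                if (positions.isEmpty() || tokenCount == 0) return 0.0;

                // 1.0 for a match on the first token, approaching 0.0 near the end
                double positionScore = 1.0 - (positions.get(0) / (double) tokenCount);

                // diminishing returns on repeats: 1 hit = 0.5, 3 hits = 0.75, ...
                double frequencyScore = positions.size() / (positions.size() + 1.0);

                // exponential decay by age, with a one-week half-life
                double ageDays = Duration.between(postedAt, Instant.now()).toHours() / 24.0;
                double recencyScore = Math.pow(0.5, ageDays / HALF_LIFE_DAYS);

                return W_POSITION  * positionScore
                     + W_FREQUENCY * frequencyScore
                     + W_RECENCY   * recencyScore;
            }
        }

    The half-life framing is a convenient way to express "recent posts matter more" without the timestamp ever dominating: a week-old post keeps half its recency credit, a four-week-old post about a sixteenth, and the other two components are unaffected.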

    Read the article

  • When can I publish a software tool written at work?

    - by AlexMA
    I'm working on a software problem at work that is fairly generic, but I can't find a library I like to solve it, so I'm considering writing one myself (at least a bare-bones version). I'll be writing some if not all of the 1.0 version at work, since I need it for the project. If it turns out well I might want to bring the work home and polish it up just for fun, and maybe release it as an open-source project. However, I'm concerned that if I wrote the 1.0 version at work I may not be allowed to do this, in a legal sense. Obviously I could ask my boss (who probably won't care), but I'm curious how other programmers have dealt with this issue and where the law stands here. My one-sentence question is: when is it okay (legally/ethically) to open-source a software tool originally written by you, for work, at work? What if you have expanded the original source significantly during off-hours? Follow-up: Suppose I write the whole thing at home on my own time and then simply use it at work; does that change things drastically? Follow-up 2: Note that I'm not trying to rip off my employer (I understand that they're paying me to build products that they own); I'm just wondering if there's a fair way of doing this for all involved... It would be nice if some nonprofit down the road could use my code and save some time. Also, there's another issue at stake: if I write a library for a very simple, generic thing (like HTML tables in JavaScript), does that mean I can never again do so on my own time without putting myself at legal risk (even if it were a whole new fresh rewrite or a segment of a larger project)? Am I surrendering my right to write code for this sort of project for the rest of my life (without this company's permission), since the code at work might still be somewhere in my brain influencing me? This seems related to software patents, as a side note.

    Read the article

  • Generating HTML Help files based on XML documentation

    - by geekrutherford
    Since discovering the XML commenting features built into .NET years ago, I have been using them to make my code more readable and simpler for other developers to understand exactly what the code is doing. Entering /// preceding a line of code causes Visual Studio to insert "summary" tags. It also results in additional tags being generated if you are commenting a method with parameters and a return type. I already knew that IntelliSense would pick up these comments and display them when coding and selecting properties, methods, etc. from a class. I also knew that you could set Visual Studio to generate an XML file containing said comments. Only recently did I begin to wonder if I could generate some kind of readable help files based on the comments I so diligently added. After searching the web I came across NDoc, an open-source project which creates documentation for you based on the XML files generated by Visual Studio. Unfortunately, NDoc has become stale and is no longer supported (the last release was back in 2005). Fortunately there is a little-known tool from Microsoft themselves called "Sandcastle Help File Builder". This nifty little tool gives you a graphical interface that allows you to specify multiple DLL and XML files from which to generate an MSDN-like HTML help file for your own projects! You can check it out here: http://shfb.codeplex.com/ If you are curious how to set Visual Studio to generate the above-referenced XML documentation files, simply go to your project's property page and edit as shown below (my paths are specific; you can leave yours at the default values):

    Read the article
