Search Results

Search found 279 results on 12 pages for 'predict'.

Page 1 of 12

  • Designing interfaces: predict methods needed, discipline yourself and deal with code that comes to mind

    - by fireeyedboy
    Was: Design by contract: predict methods needed, discipline yourself and deal with code that comes to mind I like the idea of designing by contract a lot (at least, as far as I understand the principle). I believe it means you define interfaces first before you start implementing actual code, right? However, from my limited experience (3 OOP years now) I usually can't resist the urge to start coding pretty early, for several reasons: because my limited experience has shown me I am unable to predict what methods I will be needing in the interface, so I might as well start coding right away; or because I am simply too impatient to write out the whole interfaces first; or, when I do try it, I still wind up implementing bits of code already, because I fear I might forget this or that important bit of code that springs to mind when I am designing the interfaces. As you see, especially with the last two points, this leads to a very disorderly way of doing things. Tasks get mixed up. I should draw a clear line between designing interfaces and actual coding. If you, unlike me, are a good/disciplined planner, as intended above, how do you: ...know the majority of methods you will be needing up front so well? Especially if it's components that implement stuff you are not familiar with yet. ...resist the urge to start coding right away? ...deal with code that comes to mind when you are designing the interfaces? UPDATE: Thank you for the answers so far. Valuable insights! And... I stand corrected; it seems I misinterpreted the idea of Design By Contract. For clarity, what I actually meant was: "coming up with interface methods before implementing the actual components". An additional thing that came up in my mind is related to point 1): b) How do you know the majority of components you will be needing? How do you flesh out these things before you start actually coding? For argument's sake, let's say I'm a novice with the MVC pattern and I wanted to implement such a component/architecture. A naive approach would be to think of: a front controller, some abstract action controller, some abstract view... and be done with it, so to speak. But, being more familiar with the MVC pattern, I know now that it makes sense to also have: a request object, a router, a dispatcher, a response object, view helpers, etc., etc. If you map this idea to some completely new component you want to develop, with which you have no experience yet, how do you come up with this sort of additional component without actually coding the thing and stumbling upon the ideas that way? How would you know up front how fine-grained some components should be? Is this a matter of disciplining yourself to think it out thoroughly? Or is it a matter of being good at thinking in abstractions?

    Read the article

  • Predict Stock Market Values

    - by mrlinx
    I'm building a web semantic project that gathers the maximum amount of historic data about a certain company and tries to predict its future market stock values. For data I have the historic stock values (not normalized), news (0 to 1 polarity) and subjective content (also a 0 to 1 polarity). What is the best AI system to train and use for this kind of objective? Is a simple NN with back-propagation training the best I can hope for? Update: everyone is concerned about the quality of this system. Although I'm pretty sure the system will be about as good as a random prediction (or even worse), this is a school project around artificial intelligence and web semantics. Therefore I'm only concerned with picking the best kind of training method for the data I have (NN, RBF, SVM, Bayes, neuro-fuzzy, etc). It's not about making money.
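
    For a school project like this, a back-propagation network is a reasonable baseline. Below is a minimal, hypothetical sketch (not from the question) using scikit-learn's MLPRegressor; the feature layout (previous close, news polarity, subjective polarity) and the synthetic data are assumptions made purely for illustration.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.preprocessing import StandardScaler

        # Hypothetical data: one row per day -> [previous close, news polarity, subjective polarity]
        rng = np.random.default_rng(0)
        X = rng.random((200, 3))
        y = X[:, 0] * 1.01 + 0.05 * X[:, 1] + rng.normal(0, 0.01, 200)  # synthetic "next-day price"

        # Scale inputs; networks trained with back-propagation are sensitive to feature scale.
        scaler = StandardScaler()
        X_scaled = scaler.fit_transform(X)

        # A small feed-forward network trained with back-propagation.
        model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
        model.fit(X_scaled[:150], y[:150])

        # Evaluate on the held-out tail rather than on the training data.
        print("R^2 on held-out data:", model.score(X_scaled[150:], y[150:]))

    The same data layout would work for the other model families mentioned in the question (SVM, Bayesian regression, RBF networks) by swapping the estimator.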

    Read the article

  • Using polyfit to predict where the object falls?

    - by ZaZu
    Hi there, I have information about an object being thrown in a parabolic pattern. There are 30 images in total, taken at specific intervals from the start position till the end. Now I have managed to extract the x,y coordinates of the object in all 30 images. I think that using polyfit (or maybe polyval?) may help me predict where the object will fall after the first 15 images. I just want to know: how can polyfit be used with the 30 x,y coordinates I have? (I have a loop that extracts each image from a mat file one row at a time until 30, and then plots that image, so should I use polyfit in the same loop, before or after the plot?) Any ideas? Thanks!
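
    NumPy's polyfit/polyval mirror the MATLAB functions named in the question, so a minimal sketch of the idea (fit a degree-2 polynomial to the first 15 points, then extrapolate) might look like the following; the coordinates here are synthetic and the parabolic assumption is taken from the question. One common approach is to collect all coordinates first and call polyfit once after the extraction loop, rather than inside it.

        import numpy as np

        # Hypothetical coordinates: x, y positions of the object in the 30 frames.
        x = np.linspace(0, 29, 30)
        y = -0.05 * (x - 14) ** 2 + 10 + np.random.normal(0, 0.05, 30)  # synthetic parabola

        # Fit a degree-2 polynomial to the first 15 observed points only.
        coeffs = np.polyfit(x[:15], y[:15], 2)

        # Evaluate the fitted polynomial at the later frames to predict where the object goes.
        y_predicted = np.polyval(coeffs, x[15:])

        # Predicted landing point: the larger root of the fitted parabola (where y == 0).
        roots = np.roots(coeffs)
        print("predicted landing x:", roots.max())
        print("predicted y for frames 16-30:", y_predicted)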

    Read the article

  • Are there any well-known algorithms or computer models that computer scientists use to predict FIFA World Cup winners?

    - by Khnle
    Occasionally I read news articles that mention some computer models that computer scientists use to predict winners of sporting events or the odds for betting, and I think there must be a mathematical model behind them. I never bothered to think twice even though I am a "pseudo computer scientist" myself. With the 2010 FIFA World Cup just underway, and since I am also a "pseudo football/soccer player" myself, I just started to wonder about these calculation algorithms. For example, I know one factor is determining the strength of opponents, so that a win against a strong opponent can count more than a win against a weak opponent. But this turns into a circular loop: how does one determine the strength of a team in the first place, before that team can be considered strong or weak? If it's based on historical data then there's no way that could be accurate, because those players of the past are no longer on the field, so their impact is gone (except maybe if they become coaches, like Maradona). Anyway, long question short: if you happen to be working in this field or have some knowledge, please shed some light.
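
    One widely used way out of the circularity described above is an Elo-style rating (the question does not mention it; it is offered here only as an illustration): every team starts at the same rating and ratings are updated iteratively after each result, so strength emerges from the results themselves rather than being assumed up front. A minimal sketch with illustrative constants:

        def expected_score(rating_a, rating_b):
            # Probability that team A beats team B under the Elo model.
            return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

        def update(rating_a, rating_b, score_a, k=32):
            # score_a is 1 for a win, 0.5 for a draw, 0 for a loss.
            expected = expected_score(rating_a, rating_b)
            return rating_a + k * (score_a - expected)

        # Every team starts equal; strength is learned from results.
        ratings = {"Brazil": 1500.0, "Spain": 1500.0, "Greece": 1500.0}

        # Hypothetical results: (winner, loser)
        for winner, loser in [("Spain", "Greece"), ("Brazil", "Greece"), ("Spain", "Brazil")]:
            new_w = update(ratings[winner], ratings[loser], 1.0)
            new_l = update(ratings[loser], ratings[winner], 0.0)
            ratings[winner], ratings[loser] = new_w, new_l

        print(ratings)  # beating a strong opponent moves a rating more than beating a weak one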

    Read the article

  • predict location of single-line text from a UITextView

    - by William Jockusch
    Is this possible? Specifically, I have a UITextView, and the text is short enough that it will fit on a single line. I want to predict where it will appear, so that (for example) if I wanted to, I could set up a UILabel that rendered the text in exactly the same location. Once I get that figured out, what I really want to do is pick contentInset and/or contentOffset so that the text of the UITextView and the left-justified UILabel will render in the same location. But I figure the above will let me do that. EDIT: In response to a comment, the fundamental problem I am trying to get around is that UITextField does not let you set the location of the cursor. It appears a lot of people have tried to get around this without success. I need to be able to move the cursor -- inserting/deleting text there is not enough. Control cursor position in UITextField Insert string at cursor position of UITextField Moving the cursor to the beginning of UITextField iOS -- dealing with the inability to set the cursor position in a UITextField

    Read the article

  • Design by contract: predict methods needed, discipline yourself and deal with code that comes to mind

    - by fireeyedboy
    I like the idea of designing by contract a lot (at least, as far as I understand the principle). I believe it means you define interfaces first before you start implementing actual code, right? However, from my limited experience (3 OOP years now) I usually can't resist the urge to start coding pretty early, for several reasons: because my limited experience has shown me I am unable to predict what methods I will be needing in the interface, so I might as well start coding right away; or because I am simply too impatient to write out the whole interfaces first; or, when I do try it, I still wind up implementing bits of code already, because I fear I might forget this or that important bit of code that springs to mind when I am designing the interfaces. As you see, especially with the last two points, this leads to a very disorderly way of doing things. Tasks get mixed up. I should draw a clear line between designing interfaces and actual coding. If you, unlike me, are a good/disciplined planner, as intended above, how do you: ...know the majority of methods you will be needing up front so well? Especially if it's components that implement stuff you are not familiar with yet. ...resist the urge to start coding right away? ...deal with code that comes to mind when you are designing the interfaces?

    Read the article

  • Is it possible to predict future using machine learning and/or AI?

    - by Shekhar
    Recently I have started reading about machine learning. From a 3000-feet view, machine learning seems a really great thing, but as of now I have found that machine learning is limited to only 3 types of algorithms, namely classification, clustering and recommendations. I would like to know if my assumption about the types of machine learning algorithms is correct or not, and what is the most extreme thing we can do using machine learning and/or AI? Is it possible to predict the future (the same way we predict weather) using AI and/or machine learning?

    Read the article

  • Using Artificial Intelligence (AI) to predict Stock Prices

    - by akaphenom
    Given a set of data very similar to the Motley Fool CAPS system, where individual users enter BUY and SELL recommendations on various equities. What I would like to do is show each recommendation and somehow rate (1-5) whether it was a good predictor (i.e. correlation coefficient = 1) of the future stock price (or EPS or whatever) or a horrible predictor (i.e. correlation coefficient = -1) or somewhere in between. Each recommendation is tagged to a particular user, so that can be tracked over time. I can also track market direction (bullish/bearish) based off of something like the S&P 500 price. The components I think would make sense in the model would be: user, direction (long/short), market direction, sector of stock. The thought is that some users are better in bull markets than bear (and vice versa), and some are better at shorts than longs, and then a combination of the above. I can automatically tag the market direction and sector (based off the market at the time and the equity being recommended). The thought is that I could present a series of screens and allow me to rank each individual recommendation by displaying available data: absolute, market and sector outperformance for a specific time period out. I would follow a detailed list for ranking the stocks so that the ranking is as objective as possible. My assumption is that a single user is right no more than 57% of the time, but who knows. I could load the system and say "Let's rank the recommendation as a predictor of stock value 90 days forward", and that would represent a very explicit set of rankings. NOW here is the crux: I want to create some sort of machine learning algorithm that can identify patterns over a series of time so that as recommendations stream into the application we maintain a ranking of that stock (i.e. similar to a correlation coefficient) as to the likelihood that the recommendation (in addition to the past series of recommendations) will affect the price. Now here is the super crux. I have never taken an AI class / read an AI book, never mind one specific to machine learning. So I am looking for guidance: a sample or description of a similar system I could adapt, a place to look for info, or any general help. Or even push me in the right direction to get started... My hope is to implement this with F# and be able to impress my friends with a new skill set in F# with an implementation of machine learning, and potentially something (application/source) I can include in a tech portfolio or blog space. Thank you for any advice in advance.
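
    The simplest scoring step described above (a per-user correlation coefficient between recommendation direction and the subsequent 90-day return) can be computed directly; here is a minimal, hypothetical sketch in Python with made-up numbers, just to show the shape of the calculation the question alludes to.

        import numpy as np

        # Hypothetical history for one user: recommendation direction (+1 = BUY, -1 = SELL)
        # and the equity's return over the following 90 days.
        directions = np.array([+1, -1, +1, +1, -1, +1])
        forward_returns = np.array([0.12, -0.04, 0.03, -0.08, 0.01, 0.09])

        # Correlation coefficient in [-1, 1]: 1 = perfect predictor, -1 = perfectly wrong.
        score = np.corrcoef(directions, forward_returns)[0, 1]
        print("user skill score:", round(score, 3))

        # Scoring users separately per (market direction, sector) bucket would capture the
        # "better in bull markets" and "better at shorts" effects described in the question.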

    Read the article

  • how to predict which section has to be put in a critical section in threading

    - by Lalit Dhake
    Hi, I am using a console application and I have used multithreading in it. I just want to know which section I have to put inside a critical section. My code is:

        public class SendBusReachSMS
        {
            public void SchedularEntryPoint()
            {
                try
                {
                    List<ActiveBusAndItsPathInfo> ActiveBusAndItsPathInfoList = BusinessLayer.GetActiveBusAndItsPathInfoList();
                    if (ActiveBusAndItsPathInfoList != null)
                    {
                        //SMSThreadEntryPoint smsentrypoint = new SMSThreadEntryPoint();
                        while (true)
                        {
                            foreach (ActiveBusAndItsPathInfo ActiveBusAndItsPathInfoObj in ActiveBusAndItsPathInfoList)
                            {
                                if (ActiveBusAndItsPathInfoObj.isSMSThreadActive == false)
                                {
                                    DateTime CurrentTime = System.DateTime.Now;
                                    DateTime Bustime = Convert.ToDateTime(ActiveBusAndItsPathInfoObj.busObj.Timing);
                                    TimeSpan tsa = Bustime - CurrentTime;
                                    if (tsa.TotalMinutes > 0 && tsa.TotalMinutes < 5)
                                    {
                                        ThreadStart starter = delegate { SMSThreadEntryPointFunction(ActiveBusAndItsPathInfoObj); };
                                        Thread t = new Thread(starter);
                                        t.Start();
                                        t.Join();
                                    }
                                }
                            }
                        }
                    }
                }
                catch (Exception ex)
                {
                    Console.WriteLine("===========================================");
                    Console.WriteLine(ex.Message);
                    Console.WriteLine(ex.InnerException);
                    Console.WriteLine("===========================================");
                }
            }

            public void SMSThreadEntryPointFunction(ActiveBusAndItsPathInfo objActiveBusAndItsPathInfo)
            {
                try
                {
                    //mutThrd.WaitOne();
                    String consoleString = "Thread for " + objActiveBusAndItsPathInfo.busObj.Number + "\t" + " on path " + "\t" + objActiveBusAndItsPathInfo.pathObj.PathId;
                    Console.WriteLine(consoleString);
                    TrackingInfo trackingObj = new TrackingInfo();
                    string strTempBusTime = objActiveBusAndItsPathInfo.busObj.Timing;
                    while (true)
                    {
                        trackingObj = BusinessLayer.get_TrackingInfoForSendingSMS(objActiveBusAndItsPathInfo.busObj.Number);
                        if (trackingObj.latitude != 0.0 && trackingObj.longitude != 0.0)
                        {
                            //calculate distance
                            double distanceOfCurrentToDestination = 4.45;
                            TimeSpan CurrentTime = System.DateTime.Now.TimeOfDay;
                            TimeSpan timeLimit = objActiveBusAndItsPathInfo.sessionInTime - CurrentTime;
                            if ((distanceOfCurrentToDestination <= 5) && (timeLimit.TotalMinutes <= 5))
                            {
                                Console.WriteLine("Message sent to bus number's parents: " + objActiveBusAndItsPathInfo.busObj.Number);
                                break;
                            }
                        }
                    }
                    // mutThrd.ReleaseMutex();
                }
                catch (Exception ex)
                {
                    //throw;
                    Console.WriteLine("===========================================");
                    Console.WriteLine(ex.Message);
                    Console.WriteLine(ex.InnerException);
                    Console.WriteLine("===========================================");
                }
            }
        }

    Please help me with multithreading; it is a new topic for me in .NET.

    Read the article

  • How to predict result set row count?

    - by Saurabh Kumar
    I have an application where I create a big SQL query dynamically for SQL server 2008. This query is based on various search criteria which the user might give such as search by lastname, firstname, ssn etc. The requirement is that if the user gives a condition due to which the formed query might return a lot of rows(configurable for max N rows), then the application must send back a message instead to the user saying that he needs to refine his search query as the existing query will return too many rows. I would not want to bring back say, 5000 rows to the client and then discard that data just to show the user an error. What is an efficient way to tackle this issue?

    Read the article

  • .NET Neural Network or AI for Future Predictions

    - by Ian
    Hi All. I am looking for some kind of intelligent (I was thinking AI or neural network) library to which I can feed a list of historical data and which will predict the next sequence of outputs. As an example, I would like to feed the library the figures 1,2,3,4,5 and, based on this, it should predict that the next sequence is 6,7,8,9,10, etc. The inputs will be a lot more complex and contain much more information. This will be used in a C# application. If you have any recommendations or warnings, that would be great. Thanks
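
    The question targets a .NET library, but the sliding-window idea such a library would apply can be sketched in a few lines of Python: turn the history into (previous k values -> next value) training pairs, fit a regressor, then feed each prediction back in. This is only an illustration of the technique, not a library recommendation; a plain linear model recovers the 1..5 -> 6..10 example exactly.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        history = [1, 2, 3, 4, 5]
        window = 2

        # Turn the sequence into supervised pairs: the previous `window` values -> next value.
        X = [history[i:i + window] for i in range(len(history) - window)]
        y = [history[i + window] for i in range(len(history) - window)]

        model = LinearRegression().fit(X, y)

        # Predict the next 5 values by feeding each prediction back in.
        recent = history[-window:]
        predictions = []
        for _ in range(5):
            nxt = float(model.predict([recent])[0])
            predictions.append(round(nxt, 3))
            recent = recent[1:] + [nxt]

        print(predictions)  # ~[6, 7, 8, 9, 10] for this simple linear sequence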

    Read the article

  • How can you predict the time it will take for two processes in two different machines in a cluster to communicate?

    - by Dokkat
    I am trying to develop a computing application which needs a lot of memory (500 GB). Buying a single machine for that is overly expensive. I can, though, buy ~100 small instances on Digital Ocean or similar, divide the memory into blocks and use TCP to emulate shared memory between the instances. Now, my question is: how can I measure/predict the time it will take for two processes on two different machines like that to share information, in comparison to IPC and shared memory? Are there rules of thumb? I don't want exact values, but knowing more or less how much faster one is would be very helpful in visualising the feasibility of this approach.
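
    The most direct way to get a number is to measure a round trip between two of the instances. Below is a minimal sketch of such a ping-pong measurement; the host, port and payload size are arbitrary assumptions, and the echo server runs in a local thread here only so the snippet is self-contained (for a real measurement, run the server part on the remote instance and point HOST at it). Local RAM access is on the order of 0.1 µs, so the printed figure shows the orders-of-magnitude gap versus shared memory.

        import socket, threading, time

        HOST, PORT, N = "127.0.0.1", 50007, 1000  # replace HOST with the other machine's address

        def echo_server():
            with socket.socket() as srv:
                srv.bind((HOST, PORT))
                srv.listen(1)
                conn, _ = srv.accept()
                with conn:
                    while data := conn.recv(4096):
                        conn.sendall(data)

        # Local stand-in for the remote instance, just to make the sketch runnable.
        threading.Thread(target=echo_server, daemon=True).start()
        time.sleep(0.2)

        with socket.socket() as cli:
            cli.connect((HOST, PORT))
            payload = b"x" * 4096  # one "block" of the emulated shared memory
            start = time.perf_counter()
            for _ in range(N):
                cli.sendall(payload)
                received = 0
                while received < len(payload):
                    received += len(cli.recv(4096))
            elapsed = time.perf_counter() - start

        print(f"avg round-trip for 4 KiB: {elapsed / N * 1e6:.1f} microseconds")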

    Read the article

  • Using PhysX, how can I predict where I will need to generate procedural terrain collision shapes?

    - by Sion Sheevok
    In this situation, I have terrain height values I generate procedurally. For rendering, I use the camera's position to generate an appropriately sized height map. For collision, however, I need to have height fields generated in areas where objects may intersect. My current potential solution, which may be naive, is to iterate over all "awake" physics actors, use their bounds/extents and velocities to generate spheres in which they may reside after a physics update, then generate height values for ranges encompassing clustered groups of actors. Much of that data is likely already calculated by PhysX, however. Is there some API, maybe a set of queries, even callbacks from the spatial system, that I could use to predict where terrain height values will be needed?

    Read the article

  • How to calculate/predict width after a browser's zoom?

    - by aaron b11
    Specifically, how do I predict/calculate the effect any browser's zoom will have, for example, on width:950px? Are there any tools I can use to determine the new widths? Edit: If I have a 950px div that is visually rendered at 875px in, say, Chrome, I could say Chrome reduces fixed widths to approx. 92.1% after one ctrl-minus (950 * 0.921 ≈ 875).

    Read the article

  • The best way to predict performance without actually porting the code?

    - by ardiyu07
    I believe there are people with the same experience as me, where one must give an (estimated) performance report on porting a program from sequential to parallel on some designated multicore hardware, with very little time given. For instance, if a 10K LoC sequential program was given and executes on an Intel i7-3770K (not vectorized) in 100 ms, how long would it take to run if one parallelized the code for a Tesla C2075 with NVIDIA CUDA, given that all kinds of parallelizing optimization techniques were applied? (But you're only given 2-4 days to report the performance; assume that you don't know the algorithm at all. Or perhaps it'd be safer if we just assume that it's impossible to finish the job.) Therefore, I'm wondering what would most likely be the fastest way to give such a performance report. Is it safe to calculate solely from the hardware's capability, such as peak GFLOPS and memory bandwidth? Is there a mathematical way to calculate it? If there is, please prove your method with the corresponding problem description and the algorithm, and also the target hardware's specifications. Or perhaps there already exists such a tool to (roughly) estimate code porting? (Please don't answer: "kill yourself is the fastest way.")
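
    One quick, admittedly rough, way to bound the answer is a roofline-style estimate: a kernel can run no faster than its arithmetic demand divided by peak FLOPS, nor faster than its memory traffic divided by peak bandwidth. A minimal sketch follows; the peak figures are approximate spec-sheet numbers for the Tesla C2075, and the FLOP and byte counts are placeholder assumptions that would have to come from inspecting the actual code.

        # Roofline-style lower bound on kernel time: max(compute-bound time, memory-bound time).
        # All workload figures below are illustrative assumptions, not measurements.

        peak_flops = 1.0e12      # Tesla C2075 single-precision peak, roughly 1 TFLOPS
        peak_bandwidth = 144e9   # Tesla C2075 memory bandwidth, roughly 144 GB/s

        flops_needed = 2e9       # estimated floating-point operations in the hot loop
        bytes_moved = 4e9        # estimated bytes read + written by the hot loop

        compute_time = flops_needed / peak_flops
        memory_time = bytes_moved / peak_bandwidth

        lower_bound = max(compute_time, memory_time)
        print(f"best-case kernel time: {lower_bound * 1e3:.2f} ms")
        # Real code rarely reaches peak; dividing by an efficiency factor (e.g. 0.1-0.5) is common.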

    Read the article

  • How can I view the source code for a particular `predict` function?

    - by merlin2011
    Based on the documentation, predict is a polymorphic function in R and a different function is actually called depending on what is passed as the first argument. However, the documentation does not give any information about the names of the functions that predict actually invokes for any particular class. Normally, one could type the name of a function to get its source, but this does not work with predict. If I want to view the source code for the predict function when invoked on objects of the type glmnet, what is the easiest way?

    Read the article

  • Can IF be used to start a MySQL query?

    - by Littledot
    Hi there, I have a query that looks like this: mysql_query("IF EXISTS(SELECT * FROM predict WHERE uid=$i AND bid=$j) THEN UPDATE predict SET predict_tfidf=$predict_tfidf WHERE uid=$i AND bid=$j ELSE INSERT INTO predict (uid, bid, predict_tfidf) VALUES('$i','$j','$predict_tfidf') END IF") or die(mysql_error()); But it dies and MySQL tells me to check the syntax near IF EXISTS(....). Can we not use an IF statement to start a MySQL query? Thank you in advance.

    Read the article

  • Combined Likelihood Models

    - by Lukas Vermeer
    In a series of posts on this blog we have already described a flexible approach to recording events, a technique to create analytical models for reporting, a method that uses the same principles to generate extremely powerful facet based predictions and a waterfall strategy that can be used to blend multiple (possibly facet based) models for increased accuracy. This latest, and also last, addition to this sequence of increasing modeling complexity will illustrate an advanced approach to amalgamate models, taking us to a whole new level of predictive modeling and analytical insights; combination models predicting likelihoods using multiple child models. The method described here is far from trivial. We therefore would not recommend you apply these techniques in an initial implementation of Oracle Real-Time Decisions. In most cases, basic RTD models or the approaches described before will provide more than enough predictive accuracy and analytical insight. The following is intended as an example of how more advanced models could be constructed if implementation results warrant the increased implementation and design effort. Keep implemented statistics simple!

    Combining likelihoods

    Because facet based predictions are based on metadata attributes of the choices selected, it is possible to generate such predictions for more than one attribute of a choice. We can predict the likelihood of acceptance for a particular product based on the product category (e.g. ‘toys’), as well as based on the color of the product (e.g. ‘pink’). Of course, these two predictions may be completely different (the customer may well prefer toys, but dislike pink products) and we will have to somehow combine these two separate predictions to determine an overall likelihood of acceptance for the choice. Perhaps the simplest way to combine multiple predicted likelihoods into one is to calculate the average (or perhaps maximum or minimum) likelihood. However, this would completely forgo the fact that some facets may have a far more pronounced effect on the overall likelihood than others (e.g. customers may consider the product category more important than its color). We could opt for calculating some sort of weighted average, but this would require us to specify up front the relative importance of the different facets involved. This approach would also be unresponsive to changing consumer behavior in these preferences (e.g. product price bracket may become more important to consumers as a result of economic shifts). Preferably, we would want Oracle Real-Time Decisions to learn, act upon and tell us about, the correlations between the different facet models and the overall likelihood of acceptance. This additional level of predictive modeling, where a single supermodel (no pun intended) combines the output of several (facet based) models into a single prediction, is what we call a combined likelihood model.

    Facet Based Scores

    As an example, we have implemented three different facet based models (as described earlier) in a simple RTD inline service. These models will allow us to generate predictions for likelihood of acceptance for each product based on three different metadata fields: Category, Price Bracket and Product Color. We will use an Analytical Scores entity to store these different scores so we can easily pass them between different functions.
    A simple function, creatively named Compute Analytical Scores, will compute for each choice the different facet scores and return an Analytical Scores entity that is stored on the choice itself. For each score, a choice attribute referring to this entity is also added to be returned to the client to facilitate testing.

    One Offer To Predict Them All

    In order to combine the different facet based predictions into one single likelihood for each product, we will need a supermodel which can predict the likelihood of acceptance, based on the outcomes of the facet models. This model will not need to consider any of the attributes of the session, because they are already represented in the outcomes of the underlying facet models. For the same reason, the supermodel will not need to learn separately for each product, because the specific combination of facets for this product are also already represented in the output of the underlying models. In other words, instead of learning how session attributes influence acceptance of a particular product, we will learn how the outcomes of facet based models for a particular product influence acceptance at a higher level. We will therefore be using a single All Offers choice to represent all offers in our combined likelihood predictions. This choice has no attribute values configured, no scores and not a single eligibility rule; nor is it ever intended to be returned to a client. The All Offers choice is to be used exclusively by the Combined Likelihood Acceptance model to predict the likelihood of acceptance for all choices; based solely on the output of the facet based models defined earlier.

    The Switcheroo

    In Oracle Real-Time Decisions, models can only learn based on attributes stored on the session. Therefore, just before generating a combined prediction for a given choice, we will temporarily copy the facet based scores—stored on the choice earlier as an Analytical Scores entity—to the session. The code for the Predict Combined Likelihood Event function is outlined below.

        // set session attribute to contain facet based scores.
        // (this is the only input for the combined model)
        session().setAnalyticalScores(choice.getAnalyticalScores());

        // predict likelihood of acceptance for All Offers choice.
        CombinedLikelihoodChoice c = CombinedLikelihood.getChoice("AllOffers");
        Double la = CombinedLikelihoodAcceptance.getChoiceEventLikelihoods(c, "Accepted");

        // clear session attribute of facet based scores.
        session().setAnalyticalScores(null);

        // return likelihood.
        return la;

    This sleight of hand will allow the Combined Likelihood Acceptance model to predict the likelihood of acceptance for the All Offers choice using these choice specific scores. After the prediction is made, we will clear the Analytical Scores session attribute to ensure it does not pollute any of the other (facet) models. To guarantee our combined likelihood model will learn based on the facet based scores—and is not distracted by the other session attributes—we will configure the model to exclude any other inputs, save for the instance of the Analytical Scores session attribute, on the model attributes tab.

    Recording Events

    In order for the combined likelihood model to learn correctly, we must ensure that the Analytical Scores session attribute is set correctly at the moment RTD records any events related to a particular choice. We apply essentially the same switching technique as before in a Record Combined Likelihood Event function.
        // set session attribute to contain facet based scores
        // (this is the only input for the combined model).
        session().setAnalyticalScores(choice.getAnalyticalScores());

        // record input event against All Offers choice.
        CombinedLikelihood.getChoice("AllOffers").recordEvent(event);

        // force learn at this moment using the Internal Learn entry point.
        Application.getPredictor().learn(InternalLearn.modelArray, session(), session(), Application.currentTimeMillis());

        // clear session attribute of facet based scores.
        session().setAnalyticalScores(null);

    In this example, Internal Learn is a special informant configured as the learn location for the combined likelihood model. The informant itself has no particular configuration and does nothing in itself; it is used only to force the model to learn at the exact instant we have set the Analytical Scores session attribute to the correct values.

    Reporting Results

    After running a few thousand (artificially skewed) simulated sessions on our ILS, the Decision Center reporting shows some interesting results. In this case, these results reflect perfectly the bias we ourselves had introduced in our tests. In practice, we would obviously use a wider range of customer attributes and expect to see some more unexpected outcomes. The facetted model for categories has clearly picked up on the fact that our simulated youngsters have little interest in purchasing the one red-hot vehicle our ILS had on offer. Also, it would seem that customer age is an excellent predictor for the acceptance of pink products. Looking at the key drivers for the All Offers choice we can see the relative importance of the different facets to the prediction of overall likelihood. The comparative importance of the category facet for overall prediction might, in part, be explained by the clear preference of younger customers for toys over other product types; as evident from the report on the predictiveness of customer age for offer category acceptance.

    Conclusion

    Oracle Real-Time Decisions' flexible decisioning framework allows for the construction of exceptionally elaborate prediction models that facilitate powerful targeting, but nonetheless provide insightful reporting. Although few customers will have a direct need for such a sophisticated solution architecture, it is encouraging to see that this lies within the realm of the possible with RTD; and this with limited configuration and customization required. There are obviously numerous other ways in which the predictive and reporting capabilities of Oracle Real-Time Decisions can be expanded upon to tailor to individual customers' needs. We will not be able to elaborate on them all on this blog; and finding the right approach for any given problem is often more difficult than implementing the solution. Nevertheless, we hope that these last few posts have given you enough of an understanding of the power of the RTD framework and its models; so that you can take some of these ideas and improve upon your own strategy. As always, if you have any questions about the above—or any Oracle Real-Time Decisions design challenges you might face—please do not hesitate to contact us; via the comments below, social media or directly at Oracle. We are completely multi-channel and would be more than glad to help. :-)

    Read the article

  • What is the Future of Search Engine Optimisation?

    Those who are into Internet marketing would like to know what the future holds for them, but frankly, it is very difficult to predict this accurately. Forget about the future of SEO; it is very difficult to even predict the future of the Internet and computers in general. For instance, if 40 years ago anyone had predicted that a computer would be sitting on a table in almost every home in the country, everyone would have thought that he or she was crazy.

    Read the article

  • Algorithm for a lucky game [on hold]

    - by Ronnie
    Assume we have the following Keno (lottery-type) game: from 80 numbers (1 to 80), 20 are drawn. The players choose 1, 2, 3, ... or 12 numbers to play (12 categories). If they choose, for example, 4, then they win if they correctly predict a certain number of hits (2, 3 or 4) from the 4 they have played, and lose if they predict only 1 or 0 numbers. They win X times their money according to some predefined factor depending on how many numbers they predict in each category; the same applies to the other categories. E.g. 11 out of 11 pays 250,000 times your money and 12 out of 12 pays 1,000,000 times your money, so the company would want to avoid winnings that high. A draw is made by the company every 5 minutes, and in each draw around 120,000 (let's say) different predictions (Keno tickets) are played. Let's assume 12,000 are played in category 10, 12,000 in category 11 and also 12,000 in category 12. I'm wondering if there is an algorithm that allows the company providing the game, in the 5 minutes between drawings, to find a 20-number set that avoids any "12 out of 12", "11 out of 11", "11 out of 12", "10 out of 11" and "10 out of 10" winning ticket. That is, is there an algorithm, running in less than approximately 1 minute on today's hardware, to find a 20-number set such that none of the 12,000 12-, 11- and 10-number sets the players played (in categories 10, 11 and 12) wins "12 out of 12", "11 out of 11", "11 out of 12", "10 out of 11" or "10 out of 10"? Or, even better, the generalization of the problem: what is the best algorithm (from the perspective of minimal time) to find a Y-number set from the numbers 1 to Z (e.g. Y=20, Z=80) so that none of the X sets of K numbers being played (in category K) contains more than K-m numbers from the Y set? (Note that for Y=K and m=1 there is a practical algorithm.)
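
    To give a feel for the arithmetic involved: checking one candidate 20-number draw against all 36,000 tickets in categories 10-12 is just 36,000 set intersections, which takes well under a second, so random restarts over candidate draws is one simple (hypothetical) strategy within the 5-minute window. A minimal sketch with synthetic tickets:

        import random

        NUMBERS = range(1, 81)
        BIG_WIN_THRESHOLD = {10: 10, 11: 10, 12: 11}  # matches per category that must be avoided

        def violates(draw_set, tickets):
            # A ticket "wins big" if it matches at least the threshold for its category (its size).
            for ticket in tickets:
                if len(draw_set & ticket) >= BIG_WIN_THRESHOLD[len(ticket)]:
                    return True
            return False

        # Hypothetical played tickets: 12,000 each in categories 10, 11 and 12.
        tickets = [frozenset(random.sample(NUMBERS, size))
                   for size in (10, 11, 12) for _ in range(12000)]

        # Random restarts: try candidate 20-number draws until one avoids all big wins.
        for attempt in range(1, 10001):
            draw = frozenset(random.sample(NUMBERS, 20))
            if not violates(draw, tickets):
                print(f"found a safe draw after {attempt} attempt(s):", sorted(draw))
                break
        else:
            print("no safe draw found; a smarter local search would be needed")

    With realistic (non-adversarial) tickets a random draw almost never produces a near-complete match, so the first few attempts usually succeed; the interesting case is the generalized problem where the thresholds are tighter.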

    Read the article

  • How is the impact of a requirement change on existing code determined?

    - by MainMa
    Hi, how do companies working on large projects evaluate the impact of a single modification on existing code? Since my question is probably not very clear, here's an example: let's take a sample business application which deals with tasks. In the database, each task has a state, 0 being "Pending", ... 5 - "Finished". A new requirement adds a new state, between the 2nd and 3rd ones. It means that: A constraint on the values 1 - 5 in the database must be changed, Business layer and code contracts must be changed to add a new state, Data access layer must be changed to take into account that, for example, the state StateReady is now 6 instead of 5, etc. The application must implement a new state visually, add new controls for it, new localized strings for tool-tips, etc. When an application was written recently by one developer, it's more or less easy to predict every change that has to be made. On the other hand, when an application was written over years by many people, no single person can anticipate every change immediately, without any investigation. So since this situation (such changes in requirements) is very frequent, I imagine there are already some clever techniques and ways to predict the impact. Are there any? Do you know any books which deal with this subject? Note: my question is not related to the How do you deal with changing requirements? question. In fact, I'm not interested in evaluating the cost of a change, but rather the way to predict the parts of an application which will be affected by the change. What those changes will be and how difficult they are really doesn't matter for my question.

    Read the article

  • What machine learning algorithms can be used in this scenario?

    - by ExceptionHandler
    My data consists of objects as follows. Obj1 - Color - shape - size - price - ranking. So I want to be able to predict what combination of color/shape/size/price is a good combination to get a high ranking. Or even a combination could work, for e.g.: in order to get a good ranking, the algorithm predicts the best performance for this color and this shape. Something like that. What are the advisable algorithms for such a prediction? Also, maybe if you can briefly explain how I can approach the model building, I would really appreciate it. Say, for e.g., my data looks like: Blue pentagon small $50.00 #5; Red square large $30.00 #3. So what is a useful prediction model that I should look at? What algorithm should I try if, say, the highest weight is on price, followed by color and then size? What if I wanted to predict in combinations, like "a red small shape is less likely to rank high compared to a pink small shape"? (In essence, trying to combine more than one nominal-value column to make the prediction.)
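
    A minimal sketch of one common approach, using made-up rows shaped like the question's example: one-hot encode the nominal columns so that combinations of values can be learned, fit a tree-ensemble regressor to predict the ranking, and read off which encoded attributes carry the most weight. The column names and data are assumptions for illustration only.

        import pandas as pd
        from sklearn.ensemble import RandomForestRegressor

        # Made-up rows in the shape the question describes: color, shape, size, price -> ranking.
        data = pd.DataFrame([
            {"color": "blue", "shape": "pentagon", "size": "small", "price": 50.0, "ranking": 5},
            {"color": "red",  "shape": "square",   "size": "large", "price": 30.0, "ranking": 3},
            {"color": "pink", "shape": "circle",   "size": "small", "price": 20.0, "ranking": 1},
            {"color": "red",  "shape": "circle",   "size": "small", "price": 25.0, "ranking": 4},
        ])

        # One-hot encode the nominal columns; the numeric price column passes through unchanged.
        X = pd.get_dummies(data[["color", "shape", "size", "price"]])
        y = data["ranking"]

        model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

        # Relative weight of each encoded attribute (e.g. price vs. color=red vs. size=small).
        for name, importance in sorted(zip(X.columns, model.feature_importances_),
                                       key=lambda p: -p[1]):
            print(f"{name:15s} {importance:.2f}")

        # Predict the ranking of a new, unseen combination.
        new = pd.get_dummies(pd.DataFrame([{"color": "pink", "shape": "square",
                                            "size": "small", "price": 30.0}]))
        new = new.reindex(columns=X.columns, fill_value=0)
        print("predicted ranking:", model.predict(new)[0])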

    Read the article

  • E 2.0 Value Metaphors

    - by Tom Tonkin
    I guess I have been doing this too long. I can easily see the value of Enterprise 2.0 technology for an organization, but find it a challenge at times to convey that same value to others. I also know that I'm not the only one that has that issue. Others who have that same passion also suffer from being, perhaps, too close to the market. I was having this same discussion with a few colleagues when one of them suggested that metaphors might be a good vehicle to communicate the value to those that are not as familiar. One such metaphor was discussed. Apparently, back in the early 50's, there was a great Air Force aviator and military strategist by the name of John Boyd. Without going into a ton of detail (you can search him on the internet), what made Colonel Boyd great was that he never lost a dog fight. As a matter of fact, they called him 'Forty-Second Boyd' since he claimed to be able to beat anyone in any type of aircraft in less than forty seconds, even if his aircraft was inferior to his opponent's. His approach was unique. He observed over time that there was a pattern in how aviators engaged in a dogfight. He called this method OODA. It describes how a person or, in our case, an organization, would react to an event. OODA is an acronym for Observation, Orientation, Decision and Action. Again, there is a lot more on the internet about this. A pilot would go through this loop several times during a dogfight and Boyd would try to predict this loop and interrupt it by changing the landscape of the actual dogfight. This would give Boyd an advantage, letting him predict what his opponent would do and then counterattack. Boyd went on to say that many companies have a similar reaction loop and that by understanding that loop, organizations would be able to adjust better to market conditions, predict what the competition is doing and reposition themselves to gain competitive advantages. So, our metaphor would be that Enterprise 2.0 provides companies greater visibility of their business by connecting to employees, customers and partners in a collaborative fashion. This, in turn, helps them navigate through the tough times and provides lines of sight to more innovative ideas. Innovation is the last tool for companies to achieve competitive advantage (maybe a discussion for another post). Perhaps this is more wordy than some other metaphor, but it does allow for an interesting dialogue to start and maybe even a framework to fulfill the promise of E 2.0. So, I'm sure there are many more metaphors for the value that E 2.0 brings to organizations. Do you have one to share? Please comment below and thanks for stopping by.

    Read the article

  • Predicting advantages of database denormalization

    - by Janus Troelsen
    I was always taught to strive for the highest Normal Form of database normalization, and we were taught Bernstein's Synthesis algorithm to achieve 3NF. This is all very well, and it feels nice to normalize your database, knowing that fields can be modified while retaining consistency. However, performance may suffer. That's why I am wondering whether there is any way to predict the speedup/slowdown when denormalizing. That way, you can build your list of FDs featuring 3NF and then denormalize as little as possible. I imagine that denormalizing too much would waste space and time, because e.g. giant blobs are duplicated, or because it becomes harder to maintain consistency, since you have to update multiple fields using a transaction. Summary: given a 3NF FD set and a set of queries, how do I predict the speedup/slowdown of denormalization? Links to papers appreciated too.

    Read the article
