Search Results

Search found 279 results on 12 pages for 'predict'.

Page 3 of 12

  • Microsoft's new technical computing initiative

    - by Randy Walker
    I made a mental note of this earlier in the year. Microsoft literally buys computers by the truckload. From what I understand, it's a typical practice amongst large software vendors. You plug a few wires in, you test it, and you instantly have mega tera tera flops (don't hold me to that number). Microsoft has been trying to plug away at their cloud services (named Azure), which, for the layman, means Microsoft runs your software on their computers, and as demand increases you can allocate more computing power on the fly. With this in mind, it doesn't surprise me that I was recently sent an executive email concerning Microsoft's new technical computing initiative. I find it to be a great marketing idea with actual substance and real work behind it. From the programmer's academic perspective, in college we dreamed about this type of processing power. This has decades of computer science theory behind it. A copy of the email I received follows. (Note that I almost deleted this email, thinking it was spam due to its length.) We don't often think about how complex life really is. Take the relatively simple task of commuting to and from work: it is, in fact, a complicated interplay of variables such as weather, train delays, accidents, traffic patterns, road construction, etc. You can, however, take steps to shorten your commute - using a good, predictive understanding of a few of these variables. In fact, you probably are already taking these inputs and instinctively building a predictive model that you act on daily to get to your destination more quickly. Now, when we apply the same method to very complex tasks, this modeling approach becomes much more challenging. Recent world events clearly demonstrated our inability to process vast amounts of information and variables that would have helped to more accurately predict the behavior of global financial markets or the occurrence and impact of a volcano eruption in Iceland. To make sense of issues like these, researchers, engineers and analysts create computer models of the almost infinite number of possible interactions in complex systems. But, they need increasingly more sophisticated computer models to better understand how the world behaves and to make fact-based predictions about the future. And, to do this, it requires a tremendous amount of computing power to process and examine the massive data deluge from cameras, digital sensors and precision instruments of all kinds. This is the key to creating more accurate and realistic models that expose the hidden meaning of data, which gives us the kind of insight we need to solve a myriad of challenges. We have made great strides in our ability to build these kinds of computer models, and yet they are still too difficult, expensive and time consuming to manage. Today, even the most complicated data-rich simulations cannot fully capture all of the intricacies and dependencies of the systems they are trying to model. That is why, across the scientific and engineering world, it is so hard to say with any certainty when or where the next volcano will erupt and what flight patterns it might affect, or to more accurately predict something like a global flu pandemic. So far, we just cannot collect, correlate and compute enough data to create an accurate forecast of the real world. But this is about to change. Innovations in technology are transforming our ability to measure, monitor and model how the world behaves.
The implication for scientific research is profound, and it will transform the way we tackle global challenges like health care and climate change. It will also have a huge impact on engineering and business, delivering breakthroughs that could lead to the creation of new products, new businesses and even new industries. Because you are a subscriber to executive e-mails from Microsoft, I want you to be the first to know about a new effort focused specifically on empowering millions of the world's smartest problem solvers. Today, I am happy to introduce Microsoft's Technical Computing initiative. Our goal is to unleash the power of pervasive, accurate, real-time modeling to help people and organizations achieve their objectives and realize their potential. We are bringing together some of the brightest minds in the technical computing community across industry, academia and science at www.modelingtheworld.com to discuss trends, challenges and shared opportunities. New advances provide the foundation for tools and applications that will make technical computing more affordable and accessible where mathematical and computational principles are applied to solve practical problems. One day soon, complicated tasks like building a sophisticated computer model that would typically take a team of advanced software programmers months to build and days to run, will be accomplished in a single afternoon by a scientist, engineer or analyst working at the PC on their desktop. And as technology continues to advance, these models will become more complete and accurate in the way they represent the world. This will speed our ability to test new ideas, improve processes and advance our understanding of systems. Our technical computing initiative reflects the best of Microsoft's heritage. Ever since Bill Gates articulated the then far-fetched vision of "a computer on every desktop" in the early 1980's, Microsoft has been at the forefront of expanding the power and reach of computing to benefit the world. As someone who worked closely with Bill for many years at Microsoft, I am happy to share with you that the passion behind that vision is fully alive at Microsoft and is carried out in the creation of our new Technical Computing group. Enabling more people to make better predictions We have seen the impact of making greater computing power more available firsthand through our investments in high performance computing (HPC) over the past five years. Scientists, engineers and analysts in organizations of all sizes and sectors are finding that using distributed computational power creates societal impact, fuels scientific breakthroughs and delivers competitive advantages. For example, we have seen remarkable results from some of our current customers: Malaria strikes 300,000 to 500,000 people around the world each year. To help in the effort to eradicate malaria worldwide, scientists at Intellectual Ventures use software that simulates how the disease spreads and would respond to prevention and control methods, such as vaccines and the use of bed nets. Technical computing allows researchers to model more detailed parameters for more accurate results and receive those results in less than an hour, rather than waiting a full day. Aerospace engineering firm, a.i. solutions, Inc., needed a more powerful computing platform to keep up with the increasingly complex computational needs of its customers: NASA, the Department of Defense and other government agencies planning space flights. 
To meet that need, it adopted technical computing. Now, a.i. solutions can produce detailed predictions and analysis of the flight dynamics of a given spacecraft, from optimal launch times and orbit determination to attitude control and navigation, up to eight times faster. This enables them to avoid mistakes in any areas that can cause a space mission to fail and potentially result in the loss of life and millions of dollars. Western & Southern Financial Group faced the challenge of running ever larger and more complex actuarial models as its number of policyholders and products grew and regulatory requirements changed. The company chose an actuarial solution that runs on technical computing technology. The solution is easy for the company's IT staff to manage and adjust to meet business needs. The new solution helps the company reduce modeling time by up to 99 percent - letting the team fine-tune its models for more accurate product pricing and financial projections. Our Technical Computing direction: Collaborating closely with partners across industry and academia, we must now extend the reach of technical computing even further to help predictive modelers and data explorers make faster, more accurate predictions. As we build the Technical Computing initiative, we will invest in three core areas: Technical computing to the cloud: Microsoft will play a leading role in bringing technical computing power to scientists, engineers and analysts through the cloud. Existing high-performance computing users will benefit from the ability to augment their on-premises systems with cloud resources that enable 'just-in-time' processing. This platform will help ensure processing resources are available whenever they are needed - reliably, consistently and quickly. Simplify parallel development: Today, computers are shipping with more processing power than ever, including multiple cores, but most modern software only uses a small amount of the available processing power. Parallel programs are extremely difficult to write, test and troubleshoot. However, a consistent model for parallel programming can help more developers unlock the tremendous power in today's modern computers and enable a new generation of technical computing. We are delivering new tools to automate and simplify writing software through parallel processing from the desktop... to the cluster... to the cloud. Develop powerful new technical computing tools and applications: We know scientists, engineers and analysts are pushing common tools (i.e., spreadsheets and databases) to the limits with complex, data-intensive models. They need easy access to more computing power and simplified tools to increase the speed of their work. We are building a platform to do this. Our development efforts will yield new, easy-to-use tools and applications that automate data acquisition, modeling, simulation, visualization, workflow and collaboration. This will allow them to spend more time on their work and less time wrestling with complicated technology. Thinking bigger: There is so much left to be discovered and so many questions yet to be answered in the fascinating world around us. We believe the technical computing community will show us that we have not seen anything yet. Imagine just some of the breakthroughs this community could make possible: Better predictions to help improve the understanding of pandemics, contagion and global health trends. 
Climate change models that predict environmental, economic and human impact, accessible in real-time during key discussions and debates. More accurate prediction of natural disasters and their impact to develop more effective emergency response plans. With an ambitious charter in hand, this new team is ready to build on our progress to-date and execute Microsoft's technical computing vision over the months and years ahead. We will steadily invest in the right technologies, tools and talent, and work to bring together the technical computing community. I invite you to visit www.modelingtheworld.com today. We welcome your ideas and feedback. I look forward to making this journey with you and others who want to answer the world's biggest questions, discover solutions to problems that seem impossible and uncover a host of new opportunities to change the world we live in for the better. Bob

    Read the article

  • Big Data – Basics of Big Data Analytics – Day 18 of 21

    - by Pinal Dave
    In yesterday's blog post we learned the importance of the various components in the Big Data story. In this article we will look at the various analytics tasks we try to achieve with Big Data, along with a list of the important tools in the Big Data story. When you have plenty of data around you, what is the first thing that comes to your mind? "What does all this data mean?" Exactly – the same thought comes to my mind as well. I have always wanted to know what all the data means and what meaningful information I can get out of it. Most Big Data projects are built to retrieve the various intelligence all this data contains within it. Let us take the example of Facebook. When I look at my friends list on Facebook, I always want to ask many questions, such as: On which date do most of my friends have a birthday? What is the favorite film of most of my friends, so I can talk about it and engage them? What is my friends' most liked place to travel? Which cuisine do my friends in India and the USA dislike most, so that when they travel, I do not take them there? There are many more questions I can think of. This illustrates how important it is to analyze Big Data. Here are a few of the kinds of analysis you can use with Big Data. Slicing and Dicing: This means breaking down your data into smaller sets and understanding them one set at a time. This also helps to present information in a variety of user-digestible ways. For example, if you have data related to movies, you can slice and dice the data in various ways, such as by actor, movie length, etc. Real Time Monitoring: This is crucial in social media when events are happening and you want to measure the impact while the event is happening. For example, if you are using Twitter during a football match, you can watch what fans are saying about the match as it happens. Anomaly Prediction and Modeling: If the business is running normally, all is well, but if there are signs of trouble, everyone wants to know about them early. Big Data analysis of various patterns can be very helpful in predicting the future. It may not always be accurate, but certain hints and signals can be very helpful. For example, lots of data can help conclude that heavy rain increases the sale of umbrellas. Text and Unstructured Data Analysis: Unstructured data is now becoming the norm and is a big part of the Big Data revolution. It is very important that we Extract, Transform and Load the unstructured data to make meaningful data out of it. For example, by analyzing lots of images, one can predict that people like to wear certain colors in certain months. Big Data Analytics Solutions: There are many different Big Data analytics solutions on the market. It is impossible to list all of them, so I will list a few here. Tableau – This has to be one of the most popular visualization tools in the big data market. SAS – A high performance analytics and infrastructure company. IBM and Oracle – They have a range of tools for Big Data analysis. Tomorrow: In tomorrow's blog post we will discuss a very important component of the Big Data ecosystem – the Data Scientist. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Cutting Subscriber Churn with Media Intelligence

    - by Oracle M&E
    There's lots of talk in media and entertainment companies about using "big data". But it's often hard to see through the hype and understand how big data brings benefits in the real world. How about being able to predict with 92% accuracy which subscribers intend to cancel their subscription - and put in place a renewal strategy to dramatically reduce that churn? That's what Belgian media company De Persgroep has achieved with Oracle's Media Intelligence solution. "One of the areas in which we're able to achieve beautiful results using big data is the churn prediction," De Persgroep's CIO Luc Verbist explains in a new Oracle video. "Based on all the data that we collect on websites and all your behavior, payment behavior and so on, we're able to make a prediction model, which, with an accuracy of 92 percent, is able to predict that you probably won't renew your newspaper, anymore. So our approach to renewal is completely different to the people in that segment than towards the other people. And this has brought us a lot of value and a lot of customers who didn't stop their newspaper where else they would have done so." De Persgroep is using Oracle's Big Data Appliance, along with software from Oracle partner NGDATA, to build up a detailed "DNA profile" of each individual customer, based on every interaction, in real time. This means that any change in behavior - a drop in content consumption, a late subscription payment, a negative social media comment - is captured. Applying advanced data modeling techniques automatically converts those raw interactions into data with real business meaning - like that customer's risk of churning. The very same data profile - comprising hundreds of individual dimensions - can simultaneously drive targeted marketing campaigns - informing audiences about new content that's most relevant and encouraging them to subscribe. It can power content recommendations and personalization right in the content sites and apps. And it can link directly into digital advertising networks via platforms like Oracle's BlueKai data management platform (DMP), to drive increased advertising CPMs. Using Oracle's Media Intelligence solution enables this across De Persgroep's business - comprising eight newspapers and 25 magazines published in Belgium and The Netherlands, and digital properties including websites with 6m daily unique visitors, along with TV and radio stations. "The company strategy is in fact a customer-centric strategy, so we want to get a 360-view about our customers, about our prospects. And the big data project helped us to achieve that goal," says Verbist. Using Oracle's Big Data Appliance to underpin the solution created huge savings. "The selection of the Big Data Appliance was quite easy. It was very quick to install, very easy to install, as well. And it was far cheaper than building our own Hadoop cluster. So it was in fact a no-brainer," Verbist explains. 
Applying the Media Intelligence approach has yielded incredible results for De Persgroep, including: improved products, with a new understanding of how readers are consuming print and digital content across the day; improved customer segmentation, driving a 6X improvement in customer prospecting and acquisition when contacting a specific segment; and having the project up and running in three months. That has led to competitive benefits for De Persgroep, as Luc Verbist explains: "One of the results we saw since we started using big data is that we're able to increase the gap between we as the market leader, and the second [by] more than 20 percent."

    Read the article

  • Is there a design pattern that expresses objects (and their operations) in various states?

    - by darren
    Hi, I have a design question about the evolution of an object (and its state) after some sequence of methods completes. I'm having trouble articulating what I mean, so I may need to clean up the question based on feedback. Consider an object called Classifier. It has the following methods: void initialise() void populateTrainingSet(TrainingSet t) void populateTestingSet(TestingSet t) void train() void test() Result predict(Instance i) My problem is that these methods need to be called in a certain order. Further, some methods are invalid until a previous method is called, and some methods are invalid after a method has been called. For example, it would be invalid to call predict() before test() was called, and it would be invalid to call train() after test() was called. My approach so far has been to maintain a private enum that represents the current state of the object: private static enum STATE{ NEW, TRAINED, TESTED, READY}; But this seems a bit kludgy. Is there a design pattern for such a problem type? Maybe something related to the Template Method pattern.
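
    One pattern that fits this directly is the State pattern: the object delegates each call to a state object, and each state only implements the transitions that are legal from it. The sketch below is a minimal, hypothetical C# rendering of the Classifier example (names like NewState and TrainedState are mine, not from the question):

      using System;

      public interface IClassifierState
      {
          IClassifierState Train(Classifier c);
          IClassifierState Test(Classifier c);
          Result Predict(Classifier c, Instance i);
      }

      public sealed class Classifier
      {
          private IClassifierState state = new NewState();

          public void Train() { state = state.Train(this); }
          public void Test()  { state = state.Test(this); }
          public Result Predict(Instance i) { return state.Predict(this, i); }
      }

      // Each state only implements the transitions that are legal from it.
      internal sealed class NewState : IClassifierState
      {
          public IClassifierState Train(Classifier c) { /* run training here */ return new TrainedState(); }
          public IClassifierState Test(Classifier c)  { throw new InvalidOperationException("call train() before test()"); }
          public Result Predict(Classifier c, Instance i) { throw new InvalidOperationException("not trained yet"); }
      }

      internal sealed class TrainedState : IClassifierState
      {
          public IClassifierState Train(Classifier c) { throw new InvalidOperationException("already trained"); }
          public IClassifierState Test(Classifier c)  { /* run evaluation here */ return new TestedState(); }
          public Result Predict(Classifier c, Instance i) { throw new InvalidOperationException("call test() before predict()"); }
      }

      internal sealed class TestedState : IClassifierState
      {
          public IClassifierState Train(Classifier c) { throw new InvalidOperationException("cannot train after testing"); }
          public IClassifierState Test(Classifier c)  { throw new InvalidOperationException("already tested"); }
          public Result Predict(Classifier c, Instance i) { /* run prediction here */ return new Result(); }
      }

      // Placeholder types so the sketch stands on its own.
      public sealed class Instance { }
      public sealed class Result { }

    An alternative that catches ordering mistakes at compile time rather than at run time is to split the pipeline into separate types, so that train() returns a TrainedClassifier and only that type exposes test().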

    Read the article

  • Autoscaling EC2 with NFS mounts

    - by Jamie Taylor
    I'm trying to set up a shared filesystem on EC2, and I've read tutorials such as this: http://blog.ronaldmccollam.com/2012/07/configuring-nfs-on-ubuntu-in-amazon-ec2.html In step 2 it talks about configuring the exports; for this I need an IP range, but when I'm auto-scaling I can't predict what the IPs will be before it scales. Is there any other way of doing this while still staying secure? Thanks. Edit: Just tried s3fs; it didn't seem to work properly.
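
    One common approach (an assumption on my part, not something from the question) is to export to the subnet or VPC CIDR block rather than to individual instance addresses, since every instance the auto-scaling group launches will receive an address inside that range. A sketch of the server side, with example paths and an example CIDR:

      # /etc/exports on the NFS server - the path and the 10.0.0.0/16 range are examples only
      /srv/shared  10.0.0.0/16(rw,sync,no_subtree_check)

      # reload the export table after editing
      sudo exportfs -ra

    Security groups can then restrict the NFS ports to members of the auto-scaling group's own security group, so the wide CIDR in exports is not the only line of defence.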

    Read the article

  • DVD-Player "Simulation"

    - by SjoerdV
    This may sound like a strange question, but I was wondering if there is software available which can emulate the behaviour of standalone DVD players. I'm currently debugging a DVD we're creating, and I can't afford to go hopping to my house every time to check. The reason I'm asking is that the problems appear only on 'some' DVD players, which I cannot predict. Another option, maybe: is there software that can check a VIDEO_TS folder or ISO file for errors?

    Read the article

  • Oracle Communications Data Model

    - by jean-pierre.dijcks
    I've mentioned OCDM in previous posts but found the following (see end of the post) podcast on the topic and figured it is worthwhile to spread the news some more. ORetailDM and OCommunicationsDM are the two data models currently available from Oracle. Both are intended to capture: Business best practices and industry knowledge Pre-built advanced analytics intended to predict future events before they happen (like the Churn model shown below) Oracle technology best practices to ensure optimal performance of the model All of this typically comes with a reduced time to implementation, or as the marketing slogan goes, reduced time to value. Here are the links: Podcast on OCDM OTN pages for OCDM and ORDM

    Read the article

  • Adaptive interface with OpenGL and machine learning in C#

    - by Afnan
    For my semester project I have to build some kind of adaptive interface design. My language is C# and I have to use OpenTK (a wrapper for OpenGL). My idea is to show two points and some obstacles, and my subject (user) would drag an object from the start point to the final point while avoiding the obstacles. The user can also place obstacles randomly. My software should be able to learn some paths by doing test runs, and after learning it should be able to predict the shortest path. I do not know how stupid this idea sounds, but it is just an idea. I need help with ideas for possible small adaptive-interface projects, or, if my idea is OK, can you tell me what I should use to implement it? I mean, along with OpenGL for the graphics, what can I use for the machine learning?
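
    As a very small starting point (my own sketch, not from the question; names like PathLearner are hypothetical), the "learning" can begin as nothing more than recording every successful drag and replaying the shortest recording for the same start/goal pair; a real machine-learning step such as Q-learning over a grid can replace it later.

      using System.Collections.Generic;
      using System.Linq;
      using System.Numerics;   // Vector2; OpenTK's own vector type would work the same way

      public sealed class PathLearner
      {
          // Recorded successful runs, keyed by (start, goal).
          private readonly Dictionary<(Vector2, Vector2), List<List<Vector2>>> runs =
              new Dictionary<(Vector2, Vector2), List<List<Vector2>>>();

          public void RecordRun(Vector2 start, Vector2 goal, List<Vector2> path)
          {
              if (!runs.TryGetValue((start, goal), out var list))
                  runs[(start, goal)] = list = new List<List<Vector2>>();
              list.Add(path);
          }

          // "Predict" by returning the shortest path seen so far for this start/goal pair.
          public List<Vector2> PredictShortest(Vector2 start, Vector2 goal)
          {
              return runs.TryGetValue((start, goal), out var list)
                  ? list.OrderBy(TotalLength).First()
                  : null;
          }

          private static float TotalLength(List<Vector2> path)
          {
              float total = 0f;
              for (int i = 1; i < path.Count; i++)
                  total += Vector2.Distance(path[i - 1], path[i]);
              return total;
          }
      }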

    Read the article

  • Adaptive Interface with OpenGL and Machine Learning in C#

    - by Afnan
    For my semester project I have to build some kind of adaptive interface design. My language is C# and I have to use OpenTK (a wrapper for OpenGL). My idea is to show two points and some obstacles, and my subject (the user) would drag an object from one place to the final place while avoiding the obstacles, and he can place obstacles randomly. My software should learn some paths by doing test runs, and after learning, the program should be able to predict the shortest path. I do not know how stupid this idea sounds, but it is just an idea. I need help with ideas for possible small adaptive-interface projects, or, if my idea is OK, can you tell me what I should use to implement it? I mean, along with OpenGL for the graphics, what can I use for machine learning that helps me? Thanks

    Read the article

  • When there's no TCO, when to worry about blowing the stack?

    - by Cedric Martin
    Every single time there's a discussion about a new programming language targeting the JVM, there are inevitably people saying things like: "The JVM doesn't support tail-call optimization, so I predict lots of exploding stacks" There are thousands of variations on that theme. Now I know that some languages, like Clojure for example, have a special recur construct that you can use. What I don't understand is: how serious is the lack of tail-call optimization? When should I worry about it? My main source of confusion probably comes from the fact that Java is one of the most successful languages ever, and quite a few of the JVM languages seem to be doing fairly well. How is that possible if the lack of TCO is really of any concern?
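
    A concrete way to see what is at stake (my own C# sketch; the same shape applies to Java or any JVM language, since neither runtime guarantees tail-call elimination here): the recursive version consumes one stack frame per element, while the loop is the rewrite a TCO-capable compiler would produce automatically.

      using System;

      static class TcoDemo
      {
          // Tail-recursive in shape, but the frame is not guaranteed to be reused,
          // so a large n will typically throw StackOverflowException.
          static long SumRecursive(long n, long acc)
          {
              return n == 0 ? acc : SumRecursive(n - 1, acc + n);
          }

          // The mechanical rewrite that tail-call optimization would do for you:
          // same logic, constant stack space.
          static long SumIterative(long n)
          {
              long acc = 0;
              while (n > 0) { acc += n; n--; }
              return acc;
          }

          static void Main()
          {
              Console.WriteLine(SumIterative(10_000_000));    // fine
              Console.WriteLine(SumRecursive(10_000_000, 0)); // likely blows the stack
          }
      }

    In practice the worry is limited to algorithms whose recursion depth grows linearly with input size (deep list traversals, event loops written recursively); tree algorithms with logarithmic depth rarely approach the default stack limit, which is one reason JVM languages get by with explicit constructs like recur or trampolines.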

    Read the article

  • How can one manage thousands of IF...THEN...ELSE rules?

    - by David
    I am considering building an application which, at its core, would consist of thousands of if...then...else statements. The purpose of the application is to be able to predict how cows move around in any landscape. They are affected by things like the sun, wind, food sources, sudden events, etc. How can such an application be managed? I imagine that after a few hundred IF statements, it would be as good as unpredictable how the program would react, and debugging what led to a certain reaction would mean traversing the whole IF-statement tree every time. I have read a bit about rules engines, but I do not see how they would get around this complexity.
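
    What a rules engine mainly buys you is that each rule becomes a named, inspectable piece of data rather than a branch buried in a tree, so you can log exactly which rules fired for a given state. A minimal hand-rolled sketch of that idea (all names and thresholds below are made up for illustration):

      using System;
      using System.Collections.Generic;

      public sealed class CowState
      {
          public double Temperature;     // environment inputs
          public double WindSpeed;
          public double Hunger;          // 0..1
          public double HeadingDegrees;  // current movement direction
      }

      // A rule is just a name, a condition and an effect.
      public sealed class Rule
      {
          public string Name;
          public Func<CowState, bool> When;
          public Action<CowState> Then;
      }

      public static class HerdEngine
      {
          public static readonly List<Rule> Rules = new List<Rule>
          {
              new Rule { Name = "seek shade when hot", When = s => s.Temperature > 30, Then = s => s.HeadingDegrees = 180 },
              new Rule { Name = "graze when hungry",   When = s => s.Hunger > 0.7,     Then = s => s.HeadingDegrees = 90 },
              new Rule { Name = "avoid strong wind",   When = s => s.WindSpeed > 40,   Then = s => s.HeadingDegrees += 180 },
          };

          // Evaluate every rule and log which ones fired - this answers the
          // "what led to this reaction?" debugging question directly.
          public static void Step(CowState s)
          {
              foreach (var rule in Rules)
              {
                  if (rule.When(s))
                  {
                      Console.WriteLine("fired: " + rule.Name);
                      rule.Then(s);
                  }
              }
          }
      }

    Real rules engines add the parts that get painful by hand: conflict resolution when several rules fire, efficient matching over many rules (e.g. the Rete algorithm), and loading rules from files so domain experts can edit them without recompiling.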

    Read the article

  • Acceptable GC frequency for a SlimDX/Windows/.NET game?

    - by Rei Miyasaka
    I understand that the Windows GC is much better than the Xbox/WP7 GC, being that it's generational and multithreaded -- so I don't need to worry quite as much about avoiding memory allocation. SlimDX even has some unavoidable functions that generate some amount of garbage (specifically, MapSubresource creates DataBoxes), yet people don't seem to be too upset about it. I'd like to use some functional paradigms to write my code too, which also means creating objects like closures and monads. I know premature optimization isn't a good thing, but are there rules of thumb or metrics that I can follow to know whether I need to cut down on allocations? Is, say, one gen 0 GC per frame too much? One thing that has me stumped is object promotions. Gen 0 GCs will supposedly finish within a millisecond or two, but if I'm understanding correctly, it's the gen 1 and 2 promotions that start to hurt. I'm not too sure how I can predict/prevent these.
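
    One practical way to answer the "is one gen 0 GC per frame too much?" question for a specific game is to measure it rather than guess: GC.CollectionCount(gen) is cumulative per generation, so sampling it once per frame shows exactly how often each generation collects and whether gen 1 and gen 2 are creeping up. A small sketch (a hypothetical helper, not part of SlimDX):

      using System;

      public sealed class GcMeter
      {
          private readonly int[] lastCount = new int[3];
          private long frame;

          // Call once per frame, e.g. at the end of the render loop.
          public void Sample()
          {
              frame++;
              for (int gen = 0; gen <= 2; gen++)
              {
                  int count = GC.CollectionCount(gen);   // cumulative since process start
                  if (count != lastCount[gen])
                  {
                      Console.WriteLine($"frame {frame}: gen {gen} collections so far = {count}");
                      lastCount[gen] = count;
                  }
              }
          }
      }

    If gen 0 ticks every frame but gen 1 and gen 2 barely move, the allocations are dying young and are cheap; sustained growth in gen 2 is the signal that objects are surviving frames and promotions are starting to hurt.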

    Read the article

  • Community to discuss project ideas

    - by Auxiliary
    Although I already predict the downvotes, this question has been stuck in my throat for a while now. I think this has happened to many of us. Sometimes we find a great idea for a project and obviously think it is THE GREATEST idea ever, but then one of the following things happens: the project is a small one, so you might actually give it a try and see how it goes; or the project is a big one, even a risk, and you just need a good programmers' community that you could discuss your idea with, see what they say, and even get some help to make it happen. And there's always the possibility of others stealing your idea, which is really bad. So could anyone suggest an online community, place, or even method for talking about ideas and ways of developing them? And do you think it's a good thing to tell others about your idea?

    Read the article

  • Which game logic should run when doing prediction for PNP state updates

    - by spaceOwl
    We are writing a multiplayer game, where each game client (player) is responsible for sending state updates regarding its "owned" objects to other players. Each message that arrives to other (remote) clients is processed as such: Figure out when the message was sent. Create a diff between NOW and that time. Run game specific logic to bring the received state to "current" time. I am wondering which sort of logic should execute as part of step #3 ? Our game is composed of a physical update (position, speed, acceleration, etc) and many other components that can update an object's state and occur regularly (locally). There's a trade off here - Getting the new state quickly or remaining "faithful" to the true state representation and executing the whole thing to predict the "true" state when receiving state updates from remote clients. Which one is recommended to be used? and why?
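
    A minimal sketch (hypothetical types, not from the question) of step #3 restricted to the physical component: advance the received state by the send-to-now delay with plain dead reckoning, and let the other components catch up on the next authoritative update.

      using System;
      using System.Numerics;

      public struct ObjectState
      {
          public Vector3 Position;
          public Vector3 Velocity;
          public Vector3 Acceleration;
      }

      public static class RemotePrediction
      {
          // Bring a remote object's state up to "now" using only the cheap,
          // deterministic physics part of the game logic.
          public static ObjectState AdvanceTo(ObjectState received, DateTime sentUtc, DateTime nowUtc)
          {
              float dt = (float)(nowUtc - sentUtc).TotalSeconds;
              received.Position += received.Velocity * dt + received.Acceleration * (0.5f * dt * dt);
              received.Velocity += received.Acceleration * dt;
              return received;
          }
      }

    The usual compromise in the trade-off described above is exactly this split: dead-reckon the physical state so remote objects look continuous, and accept that non-physical components (abilities, inventory, scripted behaviour) are only as fresh as the last authoritative message.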

    Read the article

  • How do Expires headers and cache manifest rules work together?

    - by Robert K
    I find the W3C's official Offline Web Applications specification to be rather vague about how the cache manifest interacts with headers such as ETag, Expires, or Pragma on cached assets. I know that the manifest should be checked with each request so that the browser knows when to check the other assets for updates. But because the specification doesn't define how the cache manifest interacts with normal cache instructions, I can't predict precisely how the browser will react. Will assets with a future expiration date be refreshed (no matter the cache headers) when the cache manifest is updated? Or, will those assets obey the normal caching rules? Which caching mechanism, HTTP cache versus cache manifest, will take precedence, and when?
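
    In the implementations of that era the general behaviour (worth verifying against each target browser, since the spec left room) was that the application cache wins: once an asset is listed in the manifest and cached, its own Expires/ETag headers are not consulted for normal page loads, and the asset is refetched only when the manifest file itself changes byte-for-byte. The manifest is the one file whose HTTP caching headers really matter, because a far-future Expires on the manifest delays every update. A small example manifest (paths are illustrative):

      CACHE MANIFEST
      # v2 2014-03-01 - changing this comment changes the manifest bytes,
      # which is what triggers the browser to refetch the CACHE entries below

      CACHE:
      /css/site.css
      /js/app.js
      /img/logo.png

      NETWORK:
      *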

    Read the article

  • Microsoft Surface - my take

    - by Sahil Malik
    SharePoint 2010 Training: more information. Okay, so the news has sunk in. Microsoft talked about two tablets, one that runs WinRT, the other that runs full Win8 Pro. I thought I'd compare the two and put on my clairvoyance hat to predict where this will go. In fairness, I think you can compare the WinRT Surface to the iPad, and the Win8 Pro Surface to the MacBook Air. So here is a bang-by-bang comparison: weight - WinRT Surface 676 grams, iPad 652 grams, verdict Equal; thickness - WinRT Surface 9.3mm, iPad 9.4mm, verdict Equal. Read full article ....

    Read the article

  • Figuring out which object is closer to a certain point?

    - by user1157885
    I'm trying to create fog of war. I have the visual effect created, but I'm not sure how to deal with hiding other players when they're within the fog of war. Right now, what I'm trying to do is: if another player is hiding behind a wall, don't render that player. I was thinking of doing it by sending a ray in the direction of each player, creating a list of all the obstacles that ray collides with, and then trying to figure out whether an obstacle is closer than the player. But then I realized I'm not really sure how to figure out if the obstacle is in fact closer or not, because I have to account for all the dimensions, so I'm kind of stuck. First of all, is this approach the correct way to go about it, and secondly, how would I calculate whether the obstacle is in fact closer, taking into account X, Y and Z? Thanks
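
    For the "is the obstacle closer than the player?" check, comparing squared distances from the viewer handles all three axes at once and avoids the square root. A small sketch with hypothetical names:

      using System.Numerics;

      public static class Visibility
      {
          // True if the ray from the viewer toward the player hits an obstacle
          // before it reaches the player.
          public static bool IsOccluded(Vector3 viewer, Vector3 player, Vector3 obstacleHitPoint)
          {
              float playerDistSq   = (player - viewer).LengthSquared();
              float obstacleDistSq = (obstacleHitPoint - viewer).LengthSquared();
              return obstacleDistSq < playerDistSq;
          }
      }

    Most physics or raycast APIs also return the hit distance (or a 0..1 fraction along the ray) directly, in which case the occlusion test is just a comparison of that scalar against the distance to the player, with no per-axis work at all.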

    Read the article

  • Prediction happening on (sending) client side

    - by Daniel
    This seems like a simple enough concept, but I haven't seen it implemented anywhere yet. Assuming that the server just forwards and verifies data... I'm using mouse-based movement, so it's not too difficult to predict the location of the player 150ms ahead of when the event is sent. I'm thinking this is more accurate than extrapolating from old data on the receiving clients' side. The question I have is: why can I not find any examples of this? Is there something fundamentally wrong with it, given that I cannot find anyone implementing it or talking about implementing it?

    Read the article

  • Number Game Algorithm

    - by 7Aces
    Problem Link - http://www.iarcs.org.in/inoi/2011/zco2011/zco2011-1b.php The task is to find the maximum score you can get in the game. Such problems, based on games, where you have to simulate, predict the result, or obtain the maximum possible score always seem to puzzle me. I can do it with recursion by considering two cases - first number picked or last number picked, each of which again branches into two states similarly, and so on... which finally can yield the max possible result. But it's a very time-inefficient approach, since time increases exponentially, due to the large test cases. What is the most pragmatic approach to the problem, and to such problems in general?
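
    The usual pragmatic cure for the exponential blow-up described above is memoization: the "take the first or take the last number" recursion only ever visits O(n^2) distinct segments, so caching the answer per segment makes it polynomial. The sketch below assumes the classic two-player variant (each player removes a number from either end and wants to maximize their own total); the exact scoring in the ZCO problem may differ, so treat this as the shape of the solution rather than the official one.

      using System;

      public static class NumberGame
      {
          private static int[] a;
          private static long[] prefix;   // prefix[k] = a[0] + ... + a[k-1]
          private static long[,] memo;
          private static bool[,] solved;

          // Best total the player to move can collect from the full array,
          // assuming both players play optimally.
          public static long Best(int[] numbers)
          {
              a = numbers;
              int n = a.Length;
              prefix = new long[n + 1];
              for (int k = 0; k < n; k++) prefix[k + 1] = prefix[k] + a[k];
              memo = new long[n, n];
              solved = new bool[n, n];
              return Solve(0, n - 1);
          }

          private static long Solve(int i, int j)
          {
              if (i > j) return 0;
              if (solved[i, j]) return memo[i, j];

              // My score = number I take + (rest of the segment - opponent's best on the rest).
              long takeFirst = a[i] + (prefix[j + 1] - prefix[i + 1]) - Solve(i + 1, j);
              long takeLast  = a[j] + (prefix[j]     - prefix[i])     - Solve(i, j - 1);

              solved[i, j] = true;
              return memo[i, j] = Math.Max(takeFirst, takeLast);
          }
      }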

    Read the article

  • Extracting, Transforming, and Loading (ETL) Process

    The process of extracting, transforming, and loading data into a data warehouse is called the Extract, Transform, Load (ETL) process. This process can be used to obtain, analyze, and clean data from various data sources so that it can be stored in a uniform manner within a data warehouse. This data can then be used by various business intelligence processes to provide an organization with a more in-depth analysis of the current state of the company and where it is heading. A standard ETL process used by a health care system might include importing all of its patients' names, diagnoses and prescriptions into a unified data warehouse, so that trends can be spotted in regard to outbreaks like the flu, and so that potential illnesses a patient might be affected by can be predicted based on other patients with similar symptoms.
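
    A minimal sketch of those three stages for the health-care example (all type and field names below are hypothetical, not from any particular ETL product):

      using System.Collections.Generic;

      public sealed class RawPatient       { public string Name; public string Diagnosis; public string Prescription; }
      public sealed class WarehousePatient { public string NormalizedName; public string DiagnosisCode; public string Prescription; }

      public static class PatientEtl
      {
          // Transform: clean and standardize so every source ends up in one uniform shape.
          public static WarehousePatient Transform(RawPatient p)
          {
              return new WarehousePatient
              {
                  NormalizedName = p.Name.Trim().ToUpperInvariant(),
                  DiagnosisCode  = MapDiagnosisToCode(p.Diagnosis),
                  Prescription   = p.Prescription.Trim(),
              };
          }

          // Load: write the cleaned rows into the warehouse (stubbed as a collection here);
          // the rows themselves would come from the Extract step against each source system.
          public static void Load(IEnumerable<RawPatient> extractedRows, ICollection<WarehousePatient> warehouse)
          {
              foreach (var row in extractedRows)
                  warehouse.Add(Transform(row));
          }

          private static string MapDiagnosisToCode(string diagnosis)
          {
              switch (diagnosis.Trim().ToLowerInvariant())
              {
                  case "influenza":
                  case "flu":
                      return "FLU";        // illustrative code only
                  default:
                      return "UNKNOWN";
              }
          }
      }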

    Read the article

  • Microsoft Releasing Windows 8 in Late October

    The one thing Microsoft did not give was the exact date in October that this latest operating system would become available. But that may be difficult to predict. Brandon LeBlanc, Microsoft communications manager, stated only that Windows 8 is on track for a release to manufacturers (RTM) in August. The company, on average, produces a new version of Windows every three years; the last one to come out was Windows 7, back in October of 2009. The operating system will enjoy a widespread release, coming out in 109 languages across 231 markets throughout the world. It will be used not only in PCs...

    Read the article

  • Collaborative Filtering Techniques

    - by user95261
    Good day! I need help implementing collaborative filtering techniques for predicting the psychopathy of Twitter users. I have two data sets, a training set and a test set. Training set users already have psychopathy scores; I need a collaborative filtering technique to predict the scores of the test set users - something such as item/user-based CF, Bayesian belief nets, clustering, latent semantic analysis, etc. Please help me. :( I am very confused about how to implement any of these. Thank you!
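
    As one concrete starting point (my own sketch, not from the question), user-based CF reduces to: represent each user as a numeric feature vector (for instance, counts of word categories in their tweets), find the training users most similar to a test user, and predict the test user's score as a similarity-weighted average of the neighbours' known scores.

      using System;
      using System.Collections.Generic;
      using System.Linq;

      public static class UserBasedCf
      {
          public static double PredictScore(
              double[] testUser,
              IReadOnlyList<(double[] Features, double Score)> trainingUsers,
              int k = 10)
          {
              // Take the k training users most similar to the test user.
              var neighbours = trainingUsers
                  .Select(u => (u.Score, Sim: CosineSimilarity(testUser, u.Features)))
                  .OrderByDescending(x => x.Sim)
                  .Take(k)
                  .ToList();

              double weightSum = neighbours.Sum(n => Math.Abs(n.Sim));
              if (weightSum == 0) return trainingUsers.Average(u => u.Score); // fall back to the mean

              return neighbours.Sum(n => n.Sim * n.Score) / weightSum;
          }

          private static double CosineSimilarity(double[] a, double[] b)
          {
              double dot = 0, na = 0, nb = 0;
              for (int i = 0; i < a.Length; i++)
              {
                  dot += a[i] * b[i];
                  na  += a[i] * a[i];
                  nb  += b[i] * b[i];
              }
              return (na == 0 || nb == 0) ? 0 : dot / (Math.Sqrt(na) * Math.Sqrt(nb));
          }
      }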

    Read the article

  • code cowboy on the team

    - by MK01
    How do you deal with a team member who is senior to you and always jumps on other people's projects and completes them overnight or over the weekend? She seems to work 80-hour weeks whether there is an emergency or not, and it is somewhat difficult to predict which part of your todo list she is going to strike next. Sometimes days of your work are wasted because on Monday morning you find a check-in completing the project you've spent most of the previous week working on. To people asking about the quality: usually it is quite good, but there is also a lot of refactoring of code involved, including code 'owned' by other team members, without regard for the test coverage, with the obvious results.

    Read the article

  • How Microsoft copies results from users for Bing

    - by anirudha
    A few days ago I read about the problem Google raised with another search engine, Bing, which comes from Microsoft. Google claims that Bing is a copycat, and Microsoft claims that it is not. I don't know much more about the matter, but I can predict something that may be true if Google is right in what they say. If Microsoft really copies searches from users, then they do it through the Live services used by everyone on the internet. See the snapshot I have; in this snapshot they say: "Help improve search results, internet safety and Microsoft software by allowing Microsoft to collect and retain information about your system, the searches you do and the websites you visit. Microsoft will not use this information to personally identify or contact you. Learn more." So why do they need this research on what users search for on the internet? Even if they do not contact users, there is no reason for it other than Bing. Well, this is an example of how they could possibly use it for Bing.

    Read the article
