Search Results

Search found 439 results on 18 pages for 'accuracy'.

Page 6/18 | < Previous Page | 2 3 4 5 6 7 8 9 10 11 12 13  | Next Page >

  • To sample or not to sample...

    - by [email protected]
    Ideally, we would know the exact answer to every question. How many people support presidential candidate A vs. B? How many people suffer from H1N1 in a given state? Does this batch of manufactured widgets have any defective parts? Knowing exact answers is expensive in terms of time and money and, in most cases, is impractical if not impossible. Consider asking every person in a region for their candidate preference, testing every person with flu symptoms for H1N1 (assuming every person reported when they had flu symptoms), or destructively testing widgets to determine if they are "good" (leaving no product to sell). Knowing exact answers, fortunately, isn't necessary or even useful in many situations. Understanding the direction of a trend or statistically significant results may be sufficient to answer the underlying question: who is likely to win the election, have we likely reached a critical threshold for flu, or is this batch of widgets good enough to ship? Statistics help us to answer these questions with a certain degree of confidence. All of this focuses on how we collect data.

    In data mining, we focus on the use of data, that is, data that has already been collected. In some cases, we may have all the data (all purchases made by all customers); in others, the data may have been collected using sampling (voters, their demographics and candidate choice). Building data mining models on all of your data can be expensive in terms of time and hardware resources. Consider a company with 40 million customers. Do we need to mine all 40 million customers to get useful data mining models? The quality of models built on all data may be no better than the quality of models built on a relatively small sample. Determining how much data is a reasonable amount involves experimentation. When starting the model building process on large datasets, it is often more efficient to begin with a small sample, perhaps 1,000 - 10,000 cases (records) depending on the algorithm, source data, and hardware. This allows you to see quickly what issues might arise with the choice of algorithm, algorithm settings, data quality, and the need for further data preparation, instead of waiting for a model to build on a large dataset only to find that the results don't meet expectations. Once you are satisfied with the results on the initial sample, you can take a larger sample to see if model quality improves and to get a sense of how the algorithm scales to the particular dataset. If model accuracy or quality continues to improve, consider increasing the sample size.

    Sampling in data mining is also used to produce a held-aside or test dataset for assessing classification and regression model accuracy. Here, we reserve some of the build data (data that includes known target values) to be used for an honest estimate of model error, using data the model has not seen before. This sampling transformation is often called a split because the build data is split into two randomly selected sets, often with 60% of the records being used for model building and 40% for testing. Sampling must be performed with care, as it can adversely affect model quality and usability. Even a truly random sample doesn't guarantee that all values are represented in a given attribute. This is particularly troublesome when the attribute with omitted values is the target. A predictive model that has not seen any examples for a particular target value can never predict that target value!
For other attributes, values may consist of a single value (a constant attribute) or all unique values (an identifier attribute), each of which may be excluded during mining. Values from categorical predictor attributes that didn't appear in the training data are not used when testing or scoring datasets. In subsequent posts, we'll talk about three sampling techniques using Oracle Database: simple random sampling without replacement, stratified sampling, and simple random sampling with replacement.
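
    To make the 60/40 split concrete, here is a minimal sketch of the idea - an assumption: plain Python rather than the Oracle Database sampling techniques the post goes on to describe - including a check for the pitfall above, where a target value never makes it into the build set:

        # Randomly split build data 60/40 and verify every target value is
        # represented in the 60% used for model building.
        import random

        def split_build_test(records, build_fraction=0.6, seed=42):
            rng = random.Random(seed)
            shuffled = list(records)
            rng.shuffle(shuffled)
            cut = int(len(shuffled) * build_fraction)
            return shuffled[:cut], shuffled[cut:]

        # Hypothetical records: (features, target) pairs with a rare target value.
        records = [((i, i % 7), "churn" if i % 100 == 0 else "stay") for i in range(1000)]
        build, test = split_build_test(records)

        missing = {t for _, t in records} - {t for _, t in build}
        if missing:
            print("Warning: target values never seen during model building:", missing)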

    Read the article

  • Cutting Subscriber Churn with Media Intelligence

    - by Oracle M&E
    There's lots of talk in media and entertainment companies about using "big data". But it's often hard to see through the hype and understand how big data brings benefits in the real world. How about being able to predict with 92% accuracy which subscribers intend to cancel their subscription - and put in place a renewal strategy to dramatically reduce that churn? That's what Belgian media company De Persgroep has achieved with Oracle's Media Intelligence solution. "One of the areas in which we're able to achieve beautiful results using big data is the churn prediction," De Persgroep's CIO Luc Verbist explains in a new Oracle video. "Based on all the data that we collect on websites and all your behavior, payment behavior and so on, we're able to make a prediction model, which, with an accuracy of 92 percent, is able to predict that you probably won't renew your newspaper anymore. So our approach to renewal is completely different to the people in that segment than towards the other people. And this has brought us a lot of value and a lot of customers who didn't stop their newspaper, whereas otherwise they would have done so."

    De Persgroep is using Oracle's Big Data Appliance, along with software from Oracle partner NGDATA, to build up a detailed "DNA profile" of each individual customer, based on every interaction, in real time. This means that any change in behavior - a drop in content consumption, a late subscription payment, a negative social media comment - is captured. Applying advanced data modeling techniques automatically converts those raw interactions into data with real business meaning - like that customer's risk of churning. The very same data profile - comprising hundreds of individual dimensions - can simultaneously drive targeted marketing campaigns, informing audiences about the new content that's most relevant and encouraging them to subscribe. It can power content recommendations and personalization right in the content sites and apps. And it can link directly into digital advertising networks via platforms like Oracle's BlueKai data management platform (DMP), to drive increased advertising CPMs.

    Oracle's Media Intelligence solution enables this across De Persgroep's business - comprising eight newspapers and 25 magazines published in Belgium and The Netherlands, and digital properties including websites with 6m daily unique visitors, along with TV and radio stations. "The company strategy is in fact a customer-centric strategy, so we want to get a 360-view about our customers, about our prospects. And the big data project helped us to achieve that goal," says Verbist. Using Oracle's Big Data Appliance to underpin the solution also created huge savings. "The selection of the Big Data Appliance was quite easy. It was very quick to install, very easy to install, as well. And it was far cheaper than building our own Hadoop cluster. So it was in fact a no-brainer," Verbist explains.

    Applying the Media Intelligence approach has yielded incredible results for De Persgroep, including:
    - Improved products, with a new understanding of how readers are consuming print and digital content across the day
    - Improved customer segmentation, driving a 6X improvement in customer prospecting and acquisition when contacting a specific segment
    - Having the project up and running in three months
    And that has led to competitive benefits for De Persgroep, as Luc Verbist explains: "one of the results we saw since we started using big data is that we're able to increase the gap between we as the market leader, and the second [by] more than 20 percent."

    Read the article

  • Removing Barriers to Create Effective Data Models

    After years of creating and maintaining data models, I have started to notice common barriers that decrease the accuracy and usefulness of models. In my opinion, the main causes of these barriers are a lack of knowledge and a lack of communication within a company. The lack of knowledge in regards to data models or data modeling can take several forms.

    Company culture knowledge. Whether documented or undocumented, the existing business rules of a company affect how data is modeled. For example, if a company allows only one assigned person to manipulate a given customer's record, then a data model that includes an associative table joining Customers and Employees would be unneeded (and wrong), because such a many-to-many relationship would allow multiple employees to handle the same customer.

    Technical knowledge. Depending on their proficiency in modeling data, a data modeler can inadvertently cause issues and/or complications in a design without even noticing. It is important that companies share data modeling responsibilities so that models are developed from multiple perspectives of a system, the company, and the original problem. In addition, the tools that a company selects to create data models can also affect the accuracy of a model if the designers are not familiar with the tools or the tools are too complex for them to use.

    Existing system knowledge. In order for a data modeler to model data for an existing system so that new changes can be applied to it, they need to know at least the basic concepts of that system so that they can work within it. This promotes reusability of data and prevents duplication of data.

    Project knowledge. This should be pretty obvious, but it is very hard to create an accurate data model without knowing what data needs to be modeled. I have always found it strange that I have been asked to start modeling data before a client has formalized any requirements. Usually when this happens I have to make several iterations of a model, and the client still does not know exactly what they want. Additional issues can also arise when certain stakeholders of a project are not consulted prior to the design, or after the project is over, because that causes misunderstandings and confusion for the end users, and the project may not even solve the original problem it was intended to solve.

    One common thread between these types of knowledge is that all of them can be addressed through good communication. For example, if a modeler is new to a company, they should ask longer-serving employees about any business-specific rules, documented or undocumented, that must be applied to projects in general. Furthermore, if a modeler is not really familiar with a specific data modeling tool, they need to speak up and ask for help from other employees or their manager. This will not only help the modeler on the current project, but also on future projects they do for the company. Additionally, if a project is not clearly defined before a data modeler is assigned to it, it is their responsibility to communicate with the other stakeholders and clarify any part of the project that is unclear, so that the resulting data model is accurately aligned with the project.

    Read the article

  • Where Have All the Ugly Forms Gone? Users and ADF Took Care Of It

    - by ultan o'broin
    Sometimes I hear that our application demos are a bit too "cutesy" and that we never talk about user roles that have lots of data entry as a requirement. Some (no names) consider those old clunker forms, with the myriad rows of fields, to be super-productive for data clerks. We do have such roles covered in Oracle Fusion Applications for sure. But consider what is really the issue here: productivity. Check out how the Oracle Fusion Financials Applications User Experience team went about designing for productivity when receiving and entering invoice data, for example. See how Fusion Financials caters so well for input and control of data? Central to all this is knowing the users and how they work: what tasks do they need to perform, and when. Read more about Fusion Financials productivity in the white paper, Get It Done Fast, Get It Done Right: The Oracle Fusion Financials User Experience.

    Now and then, I see forms that weren't designed for end user activity at all. Instead, they were designed by developers or by the IT department around the database schema. Forms with literally dozens of fields on the same page, sometimes. Forms that give the impression there was only one task involved, when there may have been several. At times, completing one of these huge forms accurately became so tedious that, under pressure, it made more sense for the user to complete it as quickly as possible and then let somebody else check it for accuracy and fill in the gaps from data emailed along in spreadsheet form. Data accuracy is critical in our business. Not good. Not efficient. Not productive.

    So here are a few basics on forms design for data entry-type user roles. A great place for developers to start exploring what is possible with forms layout is the Rich Client User Experience (RCUX) guidance on Form Layout, using ADF components.

    User-Centered Forms Design Considerations. The starting point - something you must always keep in mind with your own design - is to design for the end user. Find a representative end user, and keep that user engaged throughout the design, deployment, and test process. Consider these points when user testing those forms:
    - Are there automated or technical solutions to entering the data that avoid manual input in the first place? For example, imports, uploads, OCR, whatever. Some day we will be able to tell Siri to do it, but leave that for now.
    - Design your form to reflect the task involved (i.e., the business process) and not the database schema.
    - On the form, group like fields together, logically.
    - Eliminate duplicate data entry, or prepopulate from previous data entry.
    - Allow users to complete fields in the order they wish (i.e., no interdependency).
    - Allow for tabbing between fields (keyboard is faster than mouse), so know how the browser supports this (see that RCUX guideline).
    - Allow for final validation at the page level, not at field-level entry. Way better for heads-down users. For example, ADF messages allow you to see a list of all validation errors on a page on a final submit or navigation action, and to easily navigate to the point of error.
    - Better still, be error tolerant. Allow users to enter data in formats they are comfortable with. Bind any relevant user preference setting to the input format allowed (for example, the locale date format). Explore what data entry conversion can do for you automatically too (see the ADF converter demos; convenience patterns can also be written).
    - Only ask for data input when it's needed. Get rid of, or hide, optional fields.
    - Cut down on the number of mandatory fields, and mark the mandatory ones clearly (use a *).
    - Clearly label the fields in plain language.
    I am sure you may have a few more tips on forms design for data entry users. Remember the user before heading for the comments.

    Read the article

  • Code Golf: Leibniz formula for Pi

    - by Greg Beech
    I recently posted one of my favourite interview whiteboard coding questions in "What's your most controversial programming opinion", which is to write a function that computes Pi using the Leibniz formula. It can be approached in a number of different ways, and the exit condition takes a bit of thought, so I thought it might make an interesting code golf question. Shortest code wins! Given that Pi can be estimated using the function 4 * (1 - 1/3 + 1/5 - 1/7 + ...) with more terms giving greater accuracy, write a function that calculates Pi to within 0.00001. Edit: 3 Jan 2008. As suggested in the comments, I changed the exit condition to be within 0.00001, as that's what I really meant (an accuracy of 5 decimal places is much harder due to rounding, so I wouldn't want to ask that in an interview, whereas within 0.00001 is an exit condition that is easier to understand and implement). Also, to answer the comments: I guess my intention was that the solution should compute the number of iterations, or check when it had done enough, but there's nothing to prevent you from pre-computing the number of iterations and using that number. I really asked the question out of interest to see what people would come up with.
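
    The question is language-agnostic; as a point of reference (a straightforward sketch in Python, not a golfed entry), the alternating-series error bound gives a natural exit condition - stop once 4/k, the magnitude of the next term of the 4*(1 - 1/3 + ...) series, drops below the tolerance:

        # Sum Leibniz terms until the next term of the 4*(...) series is < 0.00001,
        # which bounds the remaining error of an alternating series.
        def leibniz_pi(eps=1e-5):
            total, k, sign = 0.0, 1.0, 1.0   # k runs over the odd denominators 1, 3, 5, ...
            while 4.0 / k >= eps:
                total += sign / k
                k += 2.0
                sign = -sign
            return 4.0 * total

        print(leibniz_pi())   # prints a value within 0.00001 of pi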

    Read the article

  • Milliseconds in DateTime.Now on .NET Compact Framework always zero? [SOLVED]

    - by Marcel
    Hi all, I want to have a time stamp for logs on a Windows Mobile project. The accuracy must be within a hundred milliseconds at least. However, my call to DateTime.Now returns a DateTime object with the Millisecond property set to zero, and the Ticks property is rounded accordingly. How do I get better time accuracy? Remember that my code runs on the Compact Framework, version 3.5. I use an HTC Touch Pro 2 device. Based on the answer from MusiGenesis I have created the following class, which solved this problem:

        /// <summary>
        /// A more precise implementation of some DateTime properties on mobile devices.
        /// </summary>
        /// <devdoc>Tested on a HTC Touch Pro2.</devdoc>
        public static class DateTimePrecisely
        {
            /// <summary>
            /// Remembers the start time when this model was created.
            /// </summary>
            private static DateTime _start = DateTime.Now;

            /// <summary>
            /// Remembers the system uptime ticks when this model was created. This
            /// serves as a more precise time provider than DateTime.Now can.
            /// </summary>
            private static int _startTick = Environment.TickCount;

            /// <summary>
            /// Gets a DateTime object that is set exactly to the current date and time
            /// on this computer, expressed as the local time.
            /// </summary>
            public static DateTime Now
            {
                get { return _start.AddMilliseconds(Environment.TickCount - _startTick); }
            }
        }

    Read the article

  • Geolocation: Firefox more accurate than iPhone Safari?

    - by johnz
    I just tested Geolocation on Firefox 3.6 and iPhone Safari (OS 3.1.3), and the result is interesting: Firefox is more accurate than Safari. Anyone got an idea how to make the iPhone Safari result more accurate? This is the code used for testing:

        navigator.geolocation.getCurrentPosition(handler, {enableHighAccuracy: true});

        function handler(location) {
            var message = document.getElementById("message");
            message.innerHTML = "<img src='http://maps.google.com/staticmap?sensor=true&center=" + location.coords.latitude + "," + location.coords.longitude + "&size=300x300&maptype=street&zoom=16&key=ABQIAAAAZrVtlT2df2pkfI_RZB_6WBRWTAkRKJS7h1XjKaOTqACHuw1n0BT5cATkkKFnZNGHmrwUw9IilQK0Eg' />";
            message.innerHTML += "<p>Longitude: " + location.coords.longitude + "</p>";
            message.innerHTML += "<p>Latitude: " + location.coords.latitude + "</p>";
            message.innerHTML += "<p>Accuracy: " + location.coords.accuracy + "</p>";
            // call the function with my current lat/lon
            getPlaceFromFlickr(location.coords.latitude, location.coords.longitude, 'output');
        }

    Test from here.

    Read the article

  • SVM Classification - minimum number of input sets for each class

    - by Amol Joshi
    I'm trying to build an app to detect images which are advertisements on webpages. Once I detect those, I'll not allow them to be displayed on the client side. From the help that I got here on Stack Overflow, I thought SVM was the best approach for my aim. So I have coded an SVM and an SMO myself. The dataset which I got from the UCI data repository has 3,280 instances (link to dataset: http://archive.ics.uci.edu/ml/datasets/Internet+Advertisements ), where around 400 of them are from the class representing advertisement images and the rest represent non-advertisement images. Right now I'm taking the first 2,800 input sets and training the SVM, but after looking at the accuracy rate I realised that most of those 2,800 input sets are from the non-advertisement image class, so I'm getting very good accuracy for that class. So what can I do here? About how many input sets should I give the SVM to train on, and how many of them for each class? Thanks. Cheers. (I basically made a new question because the context was different from my previous question: http://stackoverflow.com/questions/1991113/optimization-of-neural-network-input-data )
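
    One way to deal with the class imbalance described above - a minimal sketch, assuming scikit-learn is available and that the 3,280 instances have been loaded into X (features) and y (labels); the load_ad_dataset helper is hypothetical - is to split the data with stratification so both classes keep their proportions, and to weight the rare class more heavily:

        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC
        from sklearn.metrics import classification_report

        X, y = load_ad_dataset()   # hypothetical loader for the UCI Internet Advertisements data

        # Stratified split: the ~400 ad and ~2880 non-ad instances stay in proportion
        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.2, stratify=y, random_state=0)

        # class_weight='balanced' penalizes mistakes on the rare "ad" class more heavily
        clf = SVC(kernel='rbf', C=1.0, gamma='scale', class_weight='balanced')
        clf.fit(X_train, y_train)

        # Per-class precision/recall is more informative than overall accuracy here
        print(classification_report(y_test, clf.predict(X_test)))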

    Read the article

  • Accurate least-squares fit algorithm needed

    - by ggkmath
    I've experimented with the two ways of implementing a least-squares fit (LSF) algorithm shown here. The first code is simply the textbook approach, as described by Wolfram's page on LSF. The second code re-arranges the equation to minimize machine errors. Both codes produce similar results for my data. I compared these results with Matlab's p = polyfit(x,y,1) function, using correlation coefficients to measure the "goodness" of fit and compare each of the 3 routines. I observed that while all 3 methods produced good results, at least for my data, Matlab's routine had the best fit (the other 2 routines had similar results to each other). Matlab's p = polyfit(x,y,1) function uses a Vandermonde matrix V (an n x 2 matrix) and QR factorization to solve the least-squares problem. In Matlab code, it looks like:

        V = [x1,1; x2,1; x3,1; ... xn,1]   % this line is pseudo-code
        [Q,R] = qr(V,0);
        p = R\(Q'*y);                      % performs same as p = V\y

    I'm not a mathematician, so I don't understand why it would be more accurate. Although the difference is slight, in my case I need to obtain the slope from the LSF and multiply it by a large number, so any improvement in accuracy shows up in my results. For reasons I can't get into, I cannot use Matlab's routine in my work. So, I'm wondering if anyone has a more accurate equation-based approach recommendation I could use that is an improvement over the above two approaches, in terms of rounding errors/machine accuracy/etc. Any comments appreciated! Thanks in advance.
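
    For what it's worth, the same Vandermonde-plus-QR approach is easy to reproduce outside Matlab. A minimal NumPy sketch (an assumption: this mirrors the method described above, it is not polyfit's actual source) looks like this:

        import numpy as np

        def lsf_qr(x, y):
            """Fit y ~ p[0]*x + p[1] via QR factorization of the Vandermonde matrix."""
            V = np.column_stack([x, np.ones_like(x)])   # n x 2 Vandermonde matrix
            Q, R = np.linalg.qr(V)                      # reduced QR: V = Q R
            return np.linalg.solve(R, Q.T @ y)          # solve R p = Q' y

        x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
        y = np.array([2.1, 4.0, 6.2, 7.9, 10.1])
        print(lsf_qr(x, y))   # [slope, intercept]

    The QR route avoids explicitly forming V'V (the normal equations), which roughly squares the condition number of the problem and is where much of the rounding error in the textbook approach comes from.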

    Read the article

  • Calculating Nearest Match to Mean/Stddev Pair With LibSVM

    - by Chris S
    I'm new to SVMs, and I'm trying to use the Python interface to libsvm to classify a sample containing a mean and stddev. However, I'm getting nonsensical results. Is this task inappropriate for SVMs, or is there an error in my use of libsvm? Below is the simple Python script I'm using to test:

        #!/usr/bin/env python
        # Simple classifier test.
        # Adapted from the svm_test.py file included in the standard libsvm distribution.
        from collections import defaultdict
        from svm import *

        # Define our sparse data formatted training and testing sets.
        labels = [1, 2, 3, 4]
        train = [
            # key: 0=mean, 1=stddev
            {0: 2.5, 1: 3.5},
            {0: 5, 1: 1.2},
            {0: 7, 1: 3.3},
            {0: 10.3, 1: 0.3},
        ]
        problem = svm_problem(labels, train)
        test = [
            ({0: 3, 1: 3.11}, 1),
            ({0: 7.3, 1: 3.1}, 3),
            ({0: 7, 1: 3.3}, 3),
            ({0: 9.8, 1: 0.5}, 4),
        ]

        # Test classifiers.
        kernels = [LINEAR, POLY, RBF]
        kname = ['linear', 'polynomial', 'rbf']
        correct = defaultdict(int)
        for kn, kt in zip(kname, kernels):
            print kt
            param = svm_parameter(kernel_type=kt, C=10, probability=1)
            model = svm_model(problem, param)
            for test_sample, correct_label in test:
                pred_label, pred_probability = model.predict_probability(test_sample)
                correct[kn] += pred_label == correct_label

        # Show results.
        print '-'*80
        print 'Accuracy:'
        for kn, correct_count in correct.iteritems():
            print '\t', kn, '%.6f (%i of %i)' % (correct_count/float(len(test)), correct_count, len(test))

    The domain seems fairly simple. I'd expect that if it's trained to know a mean of 2.5 means label 1, then when it sees a mean of 2.4 it should return label 1 as the most likely classification. However, each kernel has an accuracy of 0%. Why is this? On a side note, is there a way to hide all the verbose training output dumped by libsvm in the terminal? I've searched libsvm's docs and code, but I can't find any way to turn this off.

    Read the article

  • More than one location provider at same time

    - by Rabarama
    I have some problems with location systems. I have a service that implements LocationListener. I want to get the best location using the network when possible, and GPS if the network is not accurate enough (reported accuracy greater than 300 m). The problem is this: I need a location (accurate if possible, inaccurate otherwise) every 5 minutes. I start with:

        LocationManager lm = (LocationManager) getApplicationContext().getSystemService(LOCATION_SERVICE);
        Criteria criteria = new Criteria();
        criteria.setAccuracy(Criteria.ACCURACY_COARSE);
        criteria.setAltitudeRequired(false);
        criteria.setBearingRequired(false);
        String provider = lm.getBestProvider(criteria, true);
        if (provider != null) {
            lm.requestLocationUpdates(provider, 5*60*1000, 0, this);
        }

    In onLocationChanged I listen for locations, and when I get a location with accuracy worse than 300 m, I want to switch to the GPS location system. If I remove all updates and then request GPS updates, like this:

        lm.removeUpdates((android.location.LocationListener) this);
        Criteria criteria = new Criteria();
        criteria.setAccuracy(Criteria.ACCURACY_FINE);
        criteria.setAltitudeRequired(false);
        criteria.setBearingRequired(false);
        String provider = lm.getBestProvider(criteria, true);
        if (provider != null) {
            lm.requestLocationUpdates(provider, 5*60*1000, 0, this);
        }

    the system just sits waiting for a GPS update, and if I'm in a closed room it can stay without location updates for hours, ignoring the update interval. Is there a way to tell the location provider to switch to the network if GPS is not giving a location within "x" seconds? Or how can I tell when GPS is not getting a fix? Or, if I request location updates from 2 providers at the same time (network and GPS), can that be a problem? Any suggestions?

    Read the article

  • Matlab-Bisection-Newton-Secant , finding roots?

    - by i z
    Hello, and thanks in advance for your possible help! Here's my problem: I have 2 functions,

        f1(x) = 14.*x*exp(x-2) - 12.*exp(x-2) - 7.*x.^3 + 20.*x.^2 - 26.*x + 12
        f2(x) = 54.*x.^6 + 45.*x.^5 - 102.*x.^4 - 69.*x.^3 + 35.*x.^2 + 16.*x - 4

    Make the graph for those 2, the first one in [0,3] and the second one in [-2,2]. Find the 3 roots with an accuracy of 6 decimal digits using a) bisection, b) Newton, c) secant. For each root, find the number of iterations that were made. For Newton-Raphson, find which roots show quadratic convergence and which don't. What do the roots without quadratic convergence (in Newton's method) have in common? Why? Excuse me if I ask silly things, but I'm asked to do this with no Matlab courses and I'm trying to learn it myself. There are many issues I have with this exercise. Questions: 1. I only see 2 roots in the graph for the f1 function and 4-5 (?) roots for the f2 function, not 3 roots as the exercise says. Here are the 2 graphs: http://postimage.org/image/cltihi9kh/ http://postimage.org/image/gsn4sg97f/ Am I wrong? Do both have only 3 roots in [0,3] and [-2,2]? 2. Concerning Newton's method, how am I supposed to check which roots have quadratic convergence and which don't? 3. Accuracy means a tolerance of e = 10^(-6), right?
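
    As a starting point for part (a), here is a minimal bisection sketch - written in Python rather than Matlab purely for illustration, with a bracket chosen by inspecting the plot of f1 - that stops at the 10^(-6) tolerance and counts iterations; Newton and secant follow the same pattern with a different update step:

        import math

        def f1(x):
            return 14*x*math.exp(x - 2) - 12*math.exp(x - 2) - 7*x**3 + 20*x**2 - 26*x + 12

        def bisect(f, a, b, tol=1e-6):
            """Bisection: halve [a, b] until its half-width is below tol."""
            if f(a) * f(b) > 0:
                raise ValueError("f(a) and f(b) must have opposite signs")
            iterations = 0
            while (b - a) / 2 > tol:
                m = (a + b) / 2
                iterations += 1
                if f(a) * f(m) <= 0:
                    b = m
                else:
                    a = m
            return (a + b) / 2, iterations

        root, n = bisect(f1, 0.0, 1.0)   # f1 changes sign on [0, 1]
        print(root, n)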

    Read the article

  • how to 'scale' these three tables?

    - by iddqd
    I have the following tables:

        Players:        id, playerName
        Weapons:        id, type, otherData
        Weapons2Player: id, playersID_reference, weaponsID_reference

    That was nice and simple. Now I need to SELECT items from the Weapons table according to some of their characteristics, which I previously just packed into the otherData column (since they were only needed on the client side). The problem is that the types have varying characteristics - but also a lot of similar data. So I'm trying to decide between the following possibilities, all of which have their pros and cons.

    Solution A: Kill the Weapons table and create a new table for each weapon type:

        Weapons_Swords: id, bladeType, damage, otherData
        Weapons_Guns:   id, accuracy, damage, ammoType, otherData

    But how will I link these to the Players? Create Weapons_Swords2Players, Weapons_Guns2Players, and so on for each weapon type? (This will result in a lot more JOINs when loading a player with all his weapons, and it's also more complicated to insert a new player.) Or add another column to Weapons2Players called WeaponsTypeTable, then do sub-selects against the correct Weapons sub-table? (Seems easier, but not really right, and only a slightly easier insert, I guess.)

    Solution B: Keep the Weapons table and add all the fields I need to it. The problem is that there will then be NULL fields, since not all weapon types use all fields (can't be right):

        Weapons: id, type, accuracy, damage, ammoType, bladeType, otherData

    This seems to be pretty basic stuff, but I just can't decide what's best. Or is there a correct Solution C? Many thanks.

    Read the article

  • New TabItem Content ActualHeight crashes Xaml Window

    - by Jack Navarro
    I am able to create new TabItems, with content, dynamically in a new window by streaming the XAML with XamlReader:

        NewWindow newWindow = new NewWindow();
        newWindow.Show();
        TabControl myTabCntrol = newWindow.FindName("GBtabControl") as TabControl;
        StringReader stringReader = new StringReader(XamlGrid);
        XmlReader xmlReader = XmlReader.Create(stringReader);
        TabItem myTabItem = new TabItem();
        myTabItem.Header = qDealName;
        myTabItem.Content = (UIElement)XamlReader.Load(xmlReader);
        myTabCntrol.Items.Add(myTabItem);

    This works fine: it displays a new grid wrapped in a ScrollViewer. The problem is accessing the TabItem content from the new window:

        TabItem ti = GBtabControl.SelectedItem as TabItem;
        string scrollvwnm = "scrollViewer" + ti.Header.ToString();
        MessageBox.Show(ti.ActualHeight.ToString());         // returns 21.5

        ScrollViewer scrlvwr = this.FindName(scrollvwnm) as ScrollViewer;
        MessageBox.Show(scrollvwnm);                         // displays the name, double-checked for accuracy
        MessageBox.Show(scrlvwr.ActualHeight.ToString());    // crashes

        ScrollViewer scrlvwr = ti.FindName(scrollvwnm) as ScrollViewer;
        MessageBox.Show(scrollvwnm);                         // displays the name, double-checked for accuracy
        MessageBox.Show(scrlvwr.ActualHeight.ToString());    // also crashes

    Is there a method to refresh the UI in XAML so the new window is able to access the newly loaded tab item content? Thanks

    Read the article

  • A couple of questions about NHibernate's GuidCombGenerator

    - by Eyvind
    The following code can be found in the NHibernate.Id.GuidCombGenerator class. The algorithm creates sequential (comb) GUIDs by combining a "random" GUID with a DateTime. I have a couple of questions related to the lines that I have marked with *1) and *2) below:

        private Guid GenerateComb()
        {
            byte[] guidArray = Guid.NewGuid().ToByteArray();

            // *1)
            DateTime baseDate = new DateTime(1900, 1, 1);
            DateTime now = DateTime.Now;

            // Get the days and milliseconds which will be used to build the byte string
            TimeSpan days = new TimeSpan(now.Ticks - baseDate.Ticks);
            TimeSpan msecs = now.TimeOfDay;

            // *2)
            // Convert to a byte array
            // Note that SQL Server is accurate to 1/300th of a millisecond so we divide by 3.333333
            byte[] daysArray = BitConverter.GetBytes(days.Days);
            byte[] msecsArray = BitConverter.GetBytes((long)(msecs.TotalMilliseconds / 3.333333));

            // Reverse the bytes to match SQL Server's ordering
            Array.Reverse(daysArray);
            Array.Reverse(msecsArray);

            // Copy the bytes into the guid
            Array.Copy(daysArray, daysArray.Length - 2, guidArray, guidArray.Length - 6, 2);
            Array.Copy(msecsArray, msecsArray.Length - 4, guidArray, guidArray.Length - 4, 4);

            return new Guid(guidArray);
        }

    First of all, for *1), wouldn't it be better to have a more recent date as the baseDate, e.g. 2000-01-01, so as to make room for more values in the future? Regarding *2), why would we care about the accuracy of DateTimes in SQL Server when we are only interested in the bytes of the datetime anyway, and never intend to store the value in an SQL Server datetime field? Wouldn't it be better to use all the accuracy available from DateTime.Now?
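
    On question *1), note that only the low two bytes of the day count end up in the GUID (see the first Array.Copy call), so the day component wraps 65,536 days after the base date whatever that date is; a quick back-of-the-envelope check (a plain Python sketch, independent of NHibernate) shows what a later baseDate would buy:

        from datetime import date, timedelta

        # Only 2 bytes of the day count are copied into the GUID, so the stored
        # value repeats 2^16 days after the chosen base date.
        for base in (date(1900, 1, 1), date(2000, 1, 1)):
            print(base, "wraps on", base + timedelta(days=2**16))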

    Read the article

  • MySQL Join/Comparison on a DATETIME column (<5.6.4 and > 5.6.4)

    - by Simon
    Suppose I have two tables like so:

        Events:   ID (PK int autoinc), Time (datetime), Caption (varchar)
        Position: ID (PK int autoinc), Time (datetime), Easting (float), Northing (float)

    Is it safe to, for example, list all the events and their position if I am using the Time field as my joining criterion? I.e.:

        SELECT E.*, P.* FROM Events E JOIN Position P ON E.Time = P.Time

    Or even just comparing a datetime value (taking into consideration that the parameterized value may contain a fractional-seconds part, which MySQL has always accepted), e.g.:

        SELECT E.* FROM Events E WHERE E.Time = @Time

    I understand MySQL (before version 5.6.4) only stores datetime fields WITHOUT fractional seconds, so I would assume these queries work fine. However, as of version 5.6.4, MySQL can store fractional seconds in datetime fields. Assuming datetime values are inserted using functions such as NOW(), the fractional seconds are truncated (< 5.6.4), which I would assume allows the above query to work; with version 5.6.4 and later, this could potentially NOT work. I am, and only ever will be, interested in second accuracy. If anyone could answer the following questions, it would be greatly appreciated:
    1. In general, how does MySQL compare datetime fields against one another (consider the above join)?
    2. Is the above query fine, and does it make use of indexes on the Time fields?
    3. (MySQL < 5.6.4) Is there any way to exclude milliseconds, i.e. when inserting and in conditional joins/selects etc.?
    4. (MySQL 5.6.4 and later) Will the join query above work?
    EDIT: I know I can cast the datetimes (thanks to those who answered), but I'm trying to tackle the root of the problem here (the fact that the storage type/definition has been changed), and I DO NOT want to use functions in my queries. That negates all my work of optimizing queries and applying indexes, not to mention having to rewrite all my queries.
    EDIT 2: Can anyone out there suggest a reason NOT to join on a DATETIME field using second accuracy?

    Read the article

  • Scan all domain workstations for specific registry key/environmental variable

    - by Trevor
    I'm looking for scripts or software that can scan workstations on a domain for a particular environmental variable (for interest, it was used to store the SOE build version) and generate a report. Accuracy is key, I don't want any workstations skipped or missed. And considering workstations will need to be powered on for anything to remotely read from the registry (and there's no guarantee they will be), that means something that can sit and run continuously for a while, updating its own records as it goes. Does anyone know of such a beast?

    Read the article

  • measuring microseconds a process runs in Linux

    - by John Kube
    I'm looking to get the number of microseconds that a process takes to execute. Does anybody know how to do this on a Linux system? (I would settle for milliseconds, if that's as good as I can get.) NOTE: I don't think the time command will work to the accuracy I'm looking for...
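
    If sub-millisecond numbers are genuinely needed, one option - a minimal sketch, assuming Python 3 on Linux; /bin/true stands in for the real command - is to launch the process yourself and read the kernel's per-process resource usage, which is reported with microsecond granularity (though the real accuracy depends on the kernel's clock tick):

        import os, subprocess, time

        start = time.perf_counter()
        proc = subprocess.Popen(["/bin/true"])        # replace with the command to measure
        _, status, rusage = os.wait4(proc.pid, 0)     # reaps the child and returns its rusage
        wall_us = (time.perf_counter() - start) * 1e6

        print("wall clock: %.0f us" % wall_us)
        print("user CPU:   %.0f us" % (rusage.ru_utime * 1e6))
        print("system CPU: %.0f us" % (rusage.ru_stime * 1e6))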

    Read the article

  • Blackberry device GPS hardware specs [closed]

    - by colemanm
    I'm looking to find out detailed specifications for the built-in GPS hardware in the Blackberry Bold and Curve devices (9000 and 8350). RIM's documentation includes just a rudimentary description of the specs, but I'm looking for things like the actual detailed hardware/chipset info so we can research the accuracy needs for some upcoming projects we have. Knowing simply "A-GPS support" isn't really good enough... Does anyone know of any resources for finding advanced specs for built-in Blackberry hardware?

    Read the article

  • What is 'FizzBuzz' for system administrators?

    - by docgnome
    FizzBuzz is a simple test of programming ability, often used by employers to weed out people who can't actually program. Is there an equivalent test for system administrators and general IT guys? Clarification: I'm looking for things that can be tested in an interview setting with some accuracy. Obviously, this isn't going to clearly determine the right person, just as FizzBuzz doesn't for programmers. I'm just looking to weed out people who think they can work as a system administrator/IT person because they can surf the web.

    Read the article

  • What program will monitor the chords I'm playing through my MIDI keyboard live?

    - by Jasper
    Leaving Cubase 5 out of the picture, when I use Reason 4.0 to compose something, I need a program running that will monitor the chords I'm pressing WHILE I'm pressing them. See, I'm a new keyboardist, and a thing like this will improve my accuracy by 100%. At one point, I'll stop needing such programs even, but for now, I'd REALLY like to have something of that sort. Free or commercial suggestions.

    Read the article

  • monitoring services, CPU, memory remotely on a Windows server machine

    - by ToastMan
    I'm looking for a tool that is able to (remotely) monitor CPU and memory on a Windows server and, most importantly, show which service/process is using them. Or is it possible to monitor a specific running service? We've got a server that freezes on a regular basis and we're trying to find the culprit without using a local debugger. It would be great if the monitoring software came with an agent that we can install on the remote clients for maximum accuracy. Any suggestions are very much appreciated.

    Read the article

  • Time sync in data center

    - by ak
    We currently have a setting to sync time when the spread is more than 5 minutes, but it's getting to the point where some applications don't accept that. What is the best practice out there for syncing time on all Windows and Unix boxes against a time server or domain controller? The Windows time service is not made for high accuracy (better than about 10 seconds). What are the alternatives?

    Read the article

  • CRM@Oracle Series: Forecasting

    - by tony.berk
    What do you trust more: the weather forecast or your sales forecast? I hope the answer is your sales forecast! Either way, would your sales forecast be more accurate if sales management had visibility into what the sales reps are forecasting and what has changed since the last forecast? What if management could adjust forecasts for accuracy based on analytic tools? Today's slidecast discusses sales forecasting and how Oracle implemented forecasting in our global implementation of Siebel CRM, including the steps involved in rolling up the forecast. CRM@Oracle - Forecasting. Click here to learn more about Oracle CRM products and here to learn about other customers using Oracle CRM. Are you enjoying the CRM@Oracle series? If there is a particular CRM area or function that you'd like to hear how Oracle implemented internally, let us know and we'll get it on our list.

    Read the article

  • Welch's Juices-up Its Inventory Management with Oracle Supply Chain

    - by [email protected]
    Supply & Demand Chain Executive recently published a great success story about Welch's, which worked with Take Supply Chain and G.SI to implement Oracle Process Manufacturing. The company says it's been able to improve operational control, inventory accuracy, visibility and order fulfillment by automating its processes across three production/warehousing locations nationwide. Improving warehouse and inventory management operations creates efficiencies across a high-velocity, nationwide supply chain. Welch's production facilities were collecting more information than ever before on the flow of materials and inventory, but the company needed an effective and accurate method to organize and manage these data. Article found at: http://www.sdcexec.com/publication/article.jsp?pubId=1&id=12256&pageNum=2

    Read the article
