Search Results

Search found 1106 results on 45 pages for 'accurate'.

Page 7 of 45

  • Features of Visual Studio 2010

    Visualize your workspace with new multiple monitor support, powerful Web development, new SharePoint support with tons of templates and Web parts, and more accurate targeting of any version of the .NET Framework. Get set to unleash your creativity.

    Read the article

  • Easy QueryBuilder - A User-Friendly Ad-Hoc Advanced Search Solution

    Constructing an easy and powerful QueryBuilder interface is increasingly important for complex data grid filtering and accurate reporting services. In this article, I'll discuss how to build a query search engine using ASP.NET AJAX and dynamic SQL. The main goal is to provide an interactive interface that allows users to select query attributes, operators, attribute values, and T-SQL operators, so that the data context query list can be easily composed and a search engine invoked.
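
    The article's implementation is ASP.NET, but the core pattern is language-neutral: validate each user-chosen attribute and operator against a whitelist, then bind the values as parameters rather than concatenating them into the SQL string. A minimal sketch in Java/JDBC (the table and column names are hypothetical):

        import java.sql.*;
        import java.util.*;

        public class QueryBuilder {
            // Whitelists prevent user input from being spliced into SQL as-is.
            private static final Set<String> COLUMNS = Set.of("name", "price", "category");
            private static final Set<String> OPERATORS = Set.of("=", "<>", "<", ">", "LIKE");

            public static PreparedStatement build(Connection con, List<String[]> criteria)
                    throws SQLException {
                StringBuilder sql = new StringBuilder("SELECT * FROM products WHERE 1=1");
                List<String> values = new ArrayList<>();
                for (String[] c : criteria) {            // c = {column, operator, value}
                    if (!COLUMNS.contains(c[0]) || !OPERATORS.contains(c[1]))
                        throw new IllegalArgumentException("Rejected: " + c[0] + " " + c[1]);
                    sql.append(" AND ").append(c[0]).append(' ').append(c[1]).append(" ?");
                    values.add(c[2]);
                }
                PreparedStatement ps = con.prepareStatement(sql.toString());
                for (int i = 0; i < values.size(); i++)
                    ps.setString(i + 1, values.get(i));  // values bound, never concatenated
                return ps;
            }
        }

    The whitelist check is what keeps an ad-hoc query builder from turning into a SQL injection hole: only the values travel as parameters, and only pre-approved identifiers and operators ever reach the SQL text.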

    Read the article

  • Engage Customers & Empower Employees

    - by Michelle Kimihira
    Oracle WebCenter and Oracle Identity Management help people collaborate more efficiently with social tools that optimize connections between people, information, and applications, while ensuring timely, relevant, and accurate information is always available. See this demo with Andy Kershaw, Senior Director of Product Management, Oracle WebCenter. Additional information: product information on Oracle.com (Oracle Fusion Middleware); follow us on Twitter, Facebook, and YouTube; subscribe to our regular Fusion Middleware newsletter.

    Read the article

  • Help with MVC design pattern?

    - by user3681240
    I am trying to build a Java program for user login, but I am not sure whether my MVC design is accurate. I have the following classes: LoginControl, a servlet; LoginBean, a data-holder Java class with private variables plus getters and setters; LoginDAO, a concrete Java class where I run my SQL queries and do the rest of the logical work; a Connection class, just to connect to the database; a JSP view to display the results; and an HTML page used for the form. Is this how you design a Java program based on the MVC design pattern? Please provide some suggestions.
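
    For comparison, here is a minimal sketch of how those pieces usually interact in servlet-style MVC. The LoginBean and LoginDAO bodies below are stand-ins for the asker's own classes, and the JSP paths are hypothetical:

        import javax.servlet.ServletException;
        import javax.servlet.http.*;
        import java.io.IOException;

        // Minimal stand-ins for the classes described in the question.
        class LoginBean {                                  // Model: pure data holder
            private String username, password;
            public String getUsername() { return username; }
            public void setUsername(String u) { username = u; }
            public String getPassword() { return password; }
            public void setPassword(String p) { password = p; }
        }

        class LoginDAO {                                   // Model: all SQL lives here
            boolean validate(LoginBean user) { /* run SELECT against the DB */ return false; }
        }

        public class LoginControl extends HttpServlet {    // Controller
            private final LoginDAO dao = new LoginDAO();

            @Override
            protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                LoginBean user = new LoginBean();
                user.setUsername(req.getParameter("username"));
                user.setPassword(req.getParameter("password"));

                req.setAttribute("user", user);
                String view = dao.validate(user) ? "/welcome.jsp" : "/login.jsp";
                req.getRequestDispatcher(view).forward(req, resp);  // View renders
            }
        }

    The usual critique of the layout in the question is only that the servlet should never touch SQL and the JSP should never touch the DAO; as long as control flows servlet, then DAO, then bean, then JSP, the design matches the pattern.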

    Read the article

  • CloudCruiser Chargeback in the cloud

    - by llaszews
    Another company that does chargeback has just been pointed out to me: CloudCruiser. There is an interesting quote on this company's web site: "Accurate and transparent chargeback is a key requirement in this age of cloud computing. By 2015, we forecast more than 50% of the Global 2000 will charge back most IT costs using service-based pricing, up from less than 10% today. New integrated tools will be needed to implement IT service-based chargeback." - Jay Pultz, Vice President and Distinguished Analyst, Gartner

    Read the article

  • Fusion Product Hub for Supply Chain Management

    Oracle Fusion Product Hub is a key component of Oracle's Supply Chain and Master Data Management strategy. Using a revolutionary approach to product master data management processes, Product Hub delivers: 1) a unified and accurate product definition that is harmonized within and across the enterprise value chain; 2) flexible and robust data governance workflows and policies to govern product master data; and 3) a product dashboard and embedded analytics to enable informed, quick decisions.

    Read the article

  • How would I detect if two 2D arrays of any shape collided?

    - by user2104648
    Say there are two or more movable objects of any shape in a 2D plane. Each object has its own 2D boolean array acting as a bounds mask, ranging from 10 to 100 pixels on a side. The program reads each pixel from an image that represents the object and sets the corresponding array entry to true (the pixel's alpha is 1 or more) or false (the pixel's alpha is less than 1). Each time one of these objects moves, what would be the most accurate way to test whether it hit another object in Java, using as few APIs/libraries as possible?
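
    One commonly used approach, sketched here as a starting point rather than the definitive answer: reject cheaply with an axis-aligned bounding-box test, then AND the two boolean masks over just the overlapping rectangle. Plain Java, no libraries; the field names are illustrative:

        public class Sprite {
            int x, y;            // top-left position in world pixels
            boolean[][] mask;    // mask[row][col] == true where the pixel is opaque

            // Pixel-perfect test: cheap AABB rejection first, then compare masks
            // only over the rectangle where the two sprites actually overlap.
            static boolean collides(Sprite a, Sprite b) {
                int left   = Math.max(a.x, b.x);
                int top    = Math.max(a.y, b.y);
                int right  = Math.min(a.x + a.mask[0].length, b.x + b.mask[0].length);
                int bottom = Math.min(a.y + a.mask.length,    b.y + b.mask.length);
                if (left >= right || top >= bottom) return false;  // boxes don't touch

                for (int wy = top; wy < bottom; wy++)
                    for (int wx = left; wx < right; wx++)
                        if (a.mask[wy - a.y][wx - a.x] && b.mask[wy - b.y][wx - b.x])
                            return true;                 // two opaque pixels overlap
                return false;
            }
        }

    Since each mask is at most 100x100 pixels, the worst-case inner loop stays small, and the bounding-box test skips it entirely for most pairs.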

    Read the article

  • Video Search Engine Optimization - Advanced Techniques For Best SEO Services

    Everyone is aware that a site configured well for search engines is the one that attracts the most internet traffic; people usually access such sites first. Yet hardly anyone has thought about optimizing videos the way sites are optimized. As video content becomes more and more popular among the masses, accurate video search engine optimization is becoming essential.

    Read the article

  • Estimates, constraints and design [closed]

    - by user65964
    For your next two software projects (assuming you're getting programming assignments; otherwise, consider a program to find the min and max of a set of rational numbers), estimate how much effort they would take before doing them, then keep track of the actual time spent. How accurate were your estimates? State the requirements, constraints, design, estimate (your original estimate and the actual time it took), and implementation (conventions used, implement/test path followed).

    Read the article

  • Improving the Extended Financial Close and Reporting Process

    Coming out of the recession, many organizations need to build or re-build trust with key stakeholders by delivering more timely and accurate financial and operating results. In this podcast, hear about new capabilities Oracle is delivering through its Enterprise Performance Management products to help organizations coordinate and improve the extended financial close and reporting process, from closing the sub-ledgers to regulatory filings.

    Read the article

  • Does WordPress have any SEO benefits on sites coded from scratch? [closed]

    - by Glycan
    Possible Duplicate: Do WordPress websites get indexed quicker by search engines than a regular website? I have heard SEO experts suggest moving sites to WordPress for SEO optimization, and googling offers some perspectives that confirm this (like this, or, somewhat more dubiously, this). Is this accurate? Some other places say that it doesn't matter, except that WordPress configures all the meta tags correctly. In that case, what exactly must be configured?

    Read the article

  • Feature: Lead with Intelligence

    Business efficiency depends on business decisions, and business decisions depend on current, accurate information and powerful analysis. See how Oracle data warehousing, business intelligence, and enterprise performance management solutions deliver the information, analysis, and efficiencies to propel your business ahead of the competition.

    Read the article

  • Google Stats, how to get More info?

    - by Ant's
    I created a blog very recently, and I'm watching my traffic and audience using the Google Stats feature built into Google Blogger. I have a few questions about Google Stats: 1) Is the number of visitors shown by Stats rough, or accurate? 2) How do I find out whether my site was visited by people or by search engines? 3) Is Google Stats best for beginners like me, or is there another tool? Correct me if I am wrong.

    Read the article

  • SEO Training - Easy Ways to Be a SE Optimizer?

    SEO is one of the most famous methods of web promotion, and for this purpose three usual SEO training methods are organized to produce the desired trainees. Some of these courses are free, but those do not give properly accurate training; it is better to spend some money on thorough training. SEO training teaches a search engine optimizer how to gain benefits from search engines.

    Read the article

  • How to find collision detection side between two objects?

    - by user2362369
    I am using Box2D and I have two objects: one is a bouncy ball and the other is a block. I'd like to find which side of the block is hit in a collision, so I can make the ball bounce only when it hits the top. I tried many things, like fixture data, detecting positions, and using the manifold, but did not get accurate results. I also tried to calculate the distance between the two objects, but that all went wrong.
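
    One approach that is often suggested, sketched against the JBox2D flavor of the API: read the contact normal from the world manifold inside a ContactListener and test whether it points out of the block's top. Verify the exact signatures against your Box2D port; which fixture is A or B, and whether your y axis grows up or down, are assumptions here:

        import org.jbox2d.callbacks.*;
        import org.jbox2d.collision.Manifold;
        import org.jbox2d.collision.WorldManifold;
        import org.jbox2d.common.Vec2;
        import org.jbox2d.dynamics.contacts.Contact;

        public class TopHitListener implements ContactListener {
            @Override
            public void beginContact(Contact contact) {
                WorldManifold wm = new WorldManifold();
                contact.getWorldManifold(wm);      // fills wm with world-space data
                Vec2 n = wm.normal;                // points from fixtureA toward fixtureB

                // Assuming the block is fixtureA, the ball is fixtureB, and y grows
                // upward: a mostly-upward normal means the ball hit the block's top.
                // Flip the sign if your setup differs.
                boolean hitTop = n.y > 0.7f;       // roughly a 45-degree tolerance
                if (hitTop) {
                    // mark the ball for a bounce in your game logic
                }
            }
            @Override public void endContact(Contact contact) {}
            @Override public void preSolve(Contact contact, Manifold oldManifold) {}
            @Override public void postSolve(Contact contact, ContactImpulse impulse) {}
        }

    The manifold normal is more reliable than comparing positions, because it stays correct for corner hits and fast-moving bodies where a naive "ball is above block" check lies.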

    Read the article

  • SEO Tips For Yahoo!

    At first, Yahoo! was the most popular search engine, until Google entered the scene. Yahoo! said goodbye to its throne in the face of Google's amazingly accurate results.

    Read the article

  • Find a non-case-sensitive text string within a range of cells

    - by Iszi
    I've got a bit of a problem to solve in Excel, and I'm not quite sure how to go about doing it. I've done a few searches online and haven't really found any formulas that seem useful. Here's the situation (simplified a bit for the purpose of this question): I have data in columns A-E. I need to match the data in cells A and B against the data in C-E, and return TRUE or FALSE to column F. Return TRUE if the string in A is found within any string in C-E, OR the string in B is found within any string in C-E; otherwise, return FALSE. The strings must be exact matches for whole or partial strings within the range, but the matching function must be case-insensitive. I've taken a screenshot of an example sheet for reference. I'm fairly sure I'll need to use IF on the outermost layer of the formula, probably followed by OR, and for the arguments to OR I'm expecting there will be some use of IFERROR involved. But what I'm at a loss for is the function I could most efficiently use to handle the text string searches. VLOOKUP is very limited in this regard, I think: it may be workable for whole-string against whole-string comparisons, but I'm fairly certain it won't return accurate results for partial string matches. FIND and SEARCH appear limited to single-target searches, and FIND is also case-sensitive. I suppose I could use UPPER or LOWER to force case-insensitivity in the search, but I still need something that can do accurate partial matching across a specified range of cells. Is there any function, or combination of functions, that could work here? Ideally, I want to do this with a straight Excel formula. I'm not at all familiar with VBScript or similar tools, nor do I have time to learn them for this project.
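
    For what it's worth, one combination that matches these requirements, offered as a sketch to verify rather than a definitive answer: SEARCH is already case-insensitive (unlike FIND), and wrapping it in COUNT lets it scan a whole range, since COUNT simply ignores the #VALUE! errors SEARCH returns for non-matches. With the layout described, F2 might be (array-entered with Ctrl+Shift+Enter in older Excel):

        =OR(COUNT(SEARCH(A2,C2:E2))>0, COUNT(SEARCH(B2,C2:E2))>0)

    COUNT returns how many cells in C2:E2 contain the string, so the OR becomes TRUE as soon as either A2 or B2 matches anywhere in the range; guard against empty A2/B2 if they can occur, since SEARCH("") matches everything.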

    Read the article

  • Apache Bench reports different result with same page

    - by Aspis
    I'm running into a little problem baselining an Apache2/fcgi/php-fpm server I am setting up. 1) If I run ab -n 15000 http://mysite.com/index.php, Apache Bench returns Time per request: 41 ms, but Document Length: 0 bytes and HTML transferred: 0 bytes, with a Transfer rate of 7.9 KB/s. 2) If I run ab -n 15000 http://mysite.com/, Apache Bench returns Time per request: 83 ms along with the accurate document length and HTML transferred totals. The APC cache status reports identical hit counts from both tests, and Apache Bench reports no errors in either case. Overall, there are no errors on any test sites and all logs are clean, etc. DocumentRoot is set to index.php, so I would expect both of these test runs to produce similar results. My two questions are: 1) why the discrepancy? and 2) which is the correct result? I've seen plenty of results like test 1 posted without question, but frankly, from my own experience and that of others, accurate testing is hard to come by, even without goofy issues like this.

    Read the article

  • Performance Optimization – It Is Faster When You Can Measure It

    - by Alois Kraus
    Performance optimization in bigger systems is hard because the measured numbers can vary greatly depending on the measurement method of your choice. To measure the execution timing of specific methods in your application, you usually use one of the following:

    Time measurement methods and their potential pitfalls:

      Stopwatch: The most accurate method on recent processors. Internally it uses the RDTSC instruction. Since the counter is processor-specific, you can get greatly different values when your thread is scheduled to another core or the core goes into a power-saving mode. But things do change, luckily. Intel's Designer's vol3b, section 16.11.1, "Invariant TSC": "The time stamp counter in newer processors may support an enhancement, referred to as invariant TSC. Processor's support for invariant TSC is indicated by CPUID.80000007H:EDX[8]. The invariant TSC will run at a constant rate in all ACPI P-, C-, and T-states. This is the architectural behavior moving forward. On processors with invariant TSC support, the OS may use the TSC for wall clock timer services (instead of ACPI or HPET timers). TSC reads are much more efficient and do not incur the overhead associated with a ring transition or access to a platform resource."

      DateTime.Now: Good, but it has only a resolution of 16 ms, which may not be enough if you want more accuracy.

    Reporting methods and their potential pitfalls:

      Console.WriteLine: OK if not called too often.

      Debug.Print: Are you really measuring performance with debug builds? Shame on you.

      Trace.WriteLine: Better, but you need to plug in a good output listener, like a trace file. Be aware, though, that the first time you call this method it will read your app.config and deserialize your system.diagnostics section, which also takes time.

    In general it is a good idea to use a tracing library that measures the timing for you; then you only need to decorate some methods with tracing, and you can later verify whether something has changed for the better or worse. In my previous article I compared measuring performance with quantum mechanics, and the analogy works surprisingly well. When you measure a quantum system there is a lower limit to how accurately you can measure something: the Heisenberg uncertainty relation tells us that you cannot measure both the impulse and the location of a particle at the same time with infinite accuracy. For programmers, the two variables are execution time and memory allocations. If you try to measure the timings of all methods in your application, you need to store them somewhere, and the fastest storage besides the CPU cache is memory; but if your timing values consume all available memory, there is no memory left for the actual application to run. On the other hand, if you try to record all memory allocations of your application, you also need to store that data somewhere, which costs you memory and execution time. These constraints are always there, and regardless of how good the marketing of performance and memory profiler vendors is, any measurement will disturb the system in a non-predictable way. Commercial tool vendors will tell you they calculate this overhead and subtract it from the measured values to give you the most accurate numbers, but in reality that is not entirely true. After falling into the trap of trusting the profiler timings several times, I have gotten into the habit of:

      1. Measure with a profiler to get an idea where potential bottlenecks are.
      2. Measure again, tracing only the specific methods, to check if each method is really worth optimizing.
      3. Optimize it.
      4. Measure again.
      5. Be surprised that your optimization has made things worse.
      6. Think harder.
      7. Implement something that really works.
      8. Measure again.
      9. Finished! Or look for the next bottleneck.

    Recently I looked into issues with serialization performance. DataContractSerializer was being used, and I was not sure that XML is really the most optimal wire format. After looking around I found protobuf-net, which uses Google's Protocol Buffers format, a compact binary serialization format. What is good for Google should be good for us. A small sample app to check out performance was a matter of minutes:

        using ProtoBuf;
        using System;
        using System.Diagnostics;
        using System.IO;
        using System.Reflection;
        using System.Runtime.Serialization;

        [DataContract, Serializable]
        class Data
        {
            [DataMember(Order = 1)] public int IntValue { get; set; }
            [DataMember(Order = 2)] public string StringValue { get; set; }
            [DataMember(Order = 3)] public bool IsActivated { get; set; }
            [DataMember(Order = 4)] public BindingFlags Flags { get; set; }
        }

        class Program
        {
            static MemoryStream _Stream = new MemoryStream();
            static MemoryStream Stream
            {
                get
                {
                    _Stream.Position = 0;
                    _Stream.SetLength(0);
                    return _Stream;
                }
            }

            static void Main(string[] args)
            {
                DataContractSerializer ser = new DataContractSerializer(typeof(Data));
                Data data = new Data
                {
                    IntValue = 100,
                    IsActivated = true,
                    StringValue = "Hi this is a small string value to check if serialization does work as expected"
                };
                var sw = Stopwatch.StartNew();
                int Runs = 1000 * 1000;
                for (int i = 0; i < Runs; i++)
                {
                    //ser.WriteObject(Stream, data);
                    Serializer.Serialize<Data>(Stream, data);
                }
                sw.Stop();
                Console.WriteLine("Did take {0:N0}ms for {1:N0} objects",
                    sw.Elapsed.TotalMilliseconds, Runs);
                Console.ReadLine();
            }
        }

    The results are indeed promising:

        Serializer      Time in ms    N objects
        protobuf-net           807     1000000
        DataContract          4402     1000000

    Nearly a factor of 5 faster, and a much more compact wire format. Let's use it! After switching over to protobuf-net, the transferred wire data dropped by a factor of two (good), but the performance worsened by nearly a factor of two. How is that possible? We measured it! Protobuf-net is much faster! As it turns out, protobuf-net is faster, but it has a cost: the first time a type is serialized or deserialized, it runs some very smart code generation, which does not come for free. Let's try to measure that by setting the Runs value of our performance test app not to one million but to 1:

        Serializer      Time in ms    N objects
        protobuf-net            85           1
        DataContract            24           1

    The code-gen overhead is significant and can take up to 200 ms for more complex types. The break-even point, where the code-gen cost is amortized by the faster serialization performance, is (assuming small objects) somewhere between 20,000 and 40,000 serialized objects. As it turned out, my specific scenario involved about 100 types and 1,000 serializations in total. That explains why the good old DataContractSerializer is not so easy to take out of business. The final approach I ended up with was to reduce the number of types and to serialize primitive types via BinaryWriter directly, which turned out to be a pretty good alternative. It sounded good until I measured again and found that my optimizations so far did not help much. Looking more deeply at the profiling data, I found that one of the 1,000 calls took 50% of the time. So how do I find out which call it was?

    Normal profilers fall short at this discipline. A (totally undeservedly) relatively unknown profiler is SpeedTrace, which, unlike normal profilers, creates traces of your applications by instrumenting your IL code at runtime. This way you can look at the full call stack of the one slow serializer call to find out whether that stack was something special. Unfortunately, the call stack showed nothing special. But luckily I have my own tracing as well, and I could see that the slow serializer call happened during the serialization of a bool value. When, after much analysis, you encounter something unreasonable that you cannot explain, the chances are good that your thread was suspended by the garbage collector. Whether there is a problem with excessive GCs remains to be investigated, but so far the serialization performance seems to be mostly OK. When you profile a complex system with many interconnected processes, you can never be sure that the timings you just measured are accurate at all: some process might be hitting the disc, slowing things down for all other processes for some seconds. There is a big difference between warm and cold startup. If you restart all processes, you can basically forget the first run, because the OS disc cache, the JIT, and GCs make the measured timings very flexible. When you are in need of a random number generator, you should measure cold startup times of a sufficiently complex system: after the first run you can try again and get different, much lower numbers. Now try at least two more times to get some feeling for how stable the numbers are. Oh, and try to do the same thing the next day; it might be that the bottleneck you found yesterday is gone today. Thanks to the GC and other random stuff, it can become pretty hard to find anything worth optimizing once no big bottlenecks (except bloatloads of code) are left. When I have found a spot worth optimizing, I make the code changes and measure again to check whether something has changed. If it has gotten slower, and I am certain that my change should have made it faster, I can blame the GC again. The thing is, if you optimize stuff and allocate fewer objects, the GC pauses shift to some other location; if you are unlucky, they make your faster code slower, because you now see GCs at times where none occurred before. This is where things get really tricky. A safe escape hatch is to create a repro of the slow code in an isolated application, so you can change things quickly and reliably; then the normal profilers also start working again. As Vance Morrison points out, it is much more complex to profile a system against the wall clock than to optimize for CPU time: for wall-clock analysis you need to understand how your system works and which threads (if you have not one but perhaps 20) cause a visible delay to the end user, and which threads can wait a long time without affecting the user experience at all. Next time: commercial profiler shootout.
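
    The warm-versus-cold effect the article keeps running into is easy to demonstrate in isolation. A minimal sketch in Java (the workload is a stand-in; any one-time cost such as JIT compilation or serializer code-gen shows up the same way) that reports the first call separately from the steady-state average:

        public class WarmColdBench {
            // Reports the first call separately from the steady-state average, since
            // the first call often pays one-time costs (JIT, code-gen, cache warm-up).
            static void bench(String name, Runnable work, int warmRuns) {
                long first = time(work, 1);
                long warm = time(work, warmRuns);
                System.out.printf("%s: first call %,d ns, warm average %,d ns%n",
                        name, first, warm / warmRuns);
            }

            static long time(Runnable work, int runs) {
                long start = System.nanoTime();
                for (int i = 0; i < runs; i++) work.run();
                return System.nanoTime() - start;
            }

            public static void main(String[] args) {
                StringBuilder sb = new StringBuilder();
                bench("append", () -> { sb.setLength(0); sb.append("hello ").append(42); },
                        1_000_000);
            }
        }

    Run it twice in the same process and the "first call" number collapses, which is exactly the break-even effect described above for protobuf-net's code generation.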

    Read the article

  • When will EBS 12.2 be released?

    - by Steven Chan (Oracle Development)
    The most frequently asked question at OpenWorld this year was, "When will EBS 12.2 be released?" Sadly, Oracle's communication policies prohibit us from speculating about release dates for unreleased software. We are not permitted to give estimates, rough timelines, guesses, or anything else that remotely resembles specific guidance on release dates. You can monitor My Oracle Support and this blog for updates on EBS 12.2. I'll post them here as soon as they're available. I'm embedding an old favourite from 2007 in its entirety here, since it applies equally to new releases as well as certifications.

    "Loose Lips Sink Ships" (March 20, 2007)

    If I were to sort emails in my inbox into groups, the biggest -- by far -- would be the one for emails that start with, "When will _____ be certified with the E-Business Suite?" I answer these dutifully but know that my replies can sometimes be maddening, for two reasons: technical uncertainty, and Oracle's rules for such communications.

    On the Spiral Model of Certifications

    Technology stack certifications tend to be highly iterative in nature. As a result, statements about certification dates tend to be accurate only when made in hindsight. Laypeople are horrified to hear this, but it's the ugly truth: uncertainty is simply inherent to the process. I've become inured to it over the years, but it might come as a surprise to you that it can take many cycles to get fully-released software to work together. Take this scenario:

      1. We test a particular combination of Component A and B.
      2. If we encounter a problem, say, with Component A, we log a bug.
      3. We receive a new version of Component A.
      4. The process iterates again.

    The reality is this: until a certification is completed and released, there's no accurate way of telling how many iterations are yet to come. This is true regardless of the number of iterations that have already been completed.

    Our Lips Are Sealed

    Generally, people understand that things are subject to change, so the second reason I can't say anything specific is actually much more important than the first. "Loose lips might sink ships" was coined in World War II in an effort to remind people that careless talk can have serious consequences. Curiously, this applies to Oracle's communications about upcoming features, configurations, and releases, too. As a publicly traded company, we have very strict policies that prohibit us from linking specific releases to specific dates. If you've ever listened to an earnings call with analysts, you'll often hear them asking, "Can you add a little more color to that statement?" For certifications, color is usually the only thing that I have. Sometimes I can provide a bit more information about the technical nature of the certification in question, such as expected footprints or version levels. I can occasionally share technical issues that we've found, too, to convey the degree of risk or complexity involved in the certification. Aside from that, there's little additional information about specific dates, date ranges, or even speculation about dates that I can provide... that is, without having one of those uncomfortable conversations with Oracle Legal.

    So, as much as it pains me to do so, when it comes to dates, I'm always forced to conclude with a generic reply that blandly states one of the following:

      - We're working on that certification right now.
      - That certification is in the pipeline but hasn't been started yet.
      - We don't have plans for that certification.

    Don't Shoot the Messenger

    Thankfully, I've developed a thick skin over the years -- which is a good thing, considering the colorful and energetic responses I've received after answering these questions. However, on behalf of my Oracle colleagues who are faced with these questions every day in the field, I urge you to remember that they're required to follow these same corporate rules about date disclosures. It never hurts to ask, but don't be too disappointed if we can't provide you with a detailed answer. The Go-Go's had it right, after all.

    Related Articles: Webcast Replay Available: Technical Preview of EBS 12.2 Online Patching

    Read the article

  • What are best practices for collecting, maintaining and ensuring accuracy of a huge data set?

    - by Kyle West
    I am posing this question looking for practical advice on how to design a system. Sites like amazon.com and Pandora have and maintain huge data sets to run their core business. For example, Amazon (and every other major e-commerce site) has millions of products for sale, images of those products, pricing, specifications, etc. Ignoring the data coming in from third-party sellers and the user-generated content, all that "stuff" had to come from somewhere and is maintained by someone. It's also incredibly detailed and accurate. How? How do they do it? Is there just an army of data-entry clerks, or have they devised systems to handle the grunt work? My company is in a similar situation: we maintain a huge (tens of millions of records) catalog of automotive parts and the cars they fit. We've been at it for a while now and have come up with a number of programs and processes to keep our catalog growing and accurate; however, it seems that to grow the catalog to x items we need to grow the team to y. I need to figure out some ways to increase the efficiency of the data team, and hopefully I can learn from the work of others. Any suggestions are appreciated; better still would be links to content I could spend some serious time reading. THANKS! Kyle

    Read the article

  • More than one location provider at same time

    - by Rabarama
    I have some problems with location systems. I have a service that implements LocationListener. I want to get the best location using the network when possible, and GPS if the network is not accurate enough (accuracy worse than 300 m). The problem is this: I need a location (accurate if possible, inaccurate otherwise) every 5 minutes. I start with:

        LocationManager lm = (LocationManager) getApplicationContext()
                .getSystemService(LOCATION_SERVICE);
        Criteria criteria = new Criteria();
        criteria.setAccuracy(Criteria.ACCURACY_COARSE);
        criteria.setAltitudeRequired(false);
        criteria.setBearingRequired(false);
        String provider = lm.getBestProvider(criteria, true);
        if (provider != null) {
            lm.requestLocationUpdates(provider, 5 * 60 * 1000, 0, this);
        }

    In onLocationChanged I listen for locations, and when I get a location with an accuracy value greater than 300 m, I want to switch to the GPS location system. I remove all updates and then request GPS updates, like this:

        lm.removeUpdates((android.location.LocationListener) this);
        Criteria criteria = new Criteria();
        criteria.setAccuracy(Criteria.ACCURACY_FINE);
        criteria.setAltitudeRequired(false);
        criteria.setBearingRequired(false);
        String provider = lm.getBestProvider(criteria, true);
        if (provider != null) {
            lm.requestLocationUpdates(provider, 5 * 60 * 1000, 0, this);
        }

    But then the system stalls waiting for a GPS fix, and if I'm in a closed room it can go without location updates for hours, ignoring the requested update interval. Is there a way to tell the location provider to switch to the network if GPS does not deliver a location within x seconds? Or how can I tell when GPS is not localizing? Or would requesting location updates from two providers at the same time (network and GPS) be a problem? Any suggestions?
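
    One answer to the "switch to network if GPS is silent for x seconds" question is a Handler-armed timeout, sketched below with standard android.location calls; the 30-second timeout and the wiring into your service are assumptions to adapt:

        import android.location.Location;
        import android.location.LocationListener;
        import android.location.LocationManager;
        import android.os.Bundle;
        import android.os.Handler;

        public class FallbackLocator implements LocationListener {
            private static final long GPS_TIMEOUT_MS = 30 * 1000;  // assumed: 30 s wait
            private final LocationManager lm;
            private final Handler handler = new Handler();

            // If GPS stays silent past the timeout, drop back to the network provider.
            private final Runnable fallback = new Runnable() {
                public void run() {
                    lm.removeUpdates(FallbackLocator.this);
                    lm.requestLocationUpdates(LocationManager.NETWORK_PROVIDER,
                            5 * 60 * 1000, 0, FallbackLocator.this);
                }
            };

            public FallbackLocator(LocationManager lm) { this.lm = lm; }

            public void useGps() {
                lm.requestLocationUpdates(LocationManager.GPS_PROVIDER,
                        5 * 60 * 1000, 0, this);
                handler.postDelayed(fallback, GPS_TIMEOUT_MS);     // arm the escape hatch
            }

            @Override public void onLocationChanged(Location loc) {
                handler.removeCallbacks(fallback);                 // GPS answered in time
                // use loc here
            }
            @Override public void onStatusChanged(String p, int s, Bundle e) {}
            @Override public void onProviderEnabled(String p) {}
            @Override public void onProviderDisabled(String p) {}
        }

    Listening on both providers at once is also legal (two requestLocationUpdates calls with the same listener); the timeout pattern just avoids the extra battery cost of keeping GPS active when it cannot get a fix.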

    Read the article

  • How to get levels for Fry Graph readability formula?

    - by Vic
    Hi, I'm working on an application (C#) that applies some readability formulas to a text, like Gunning-Fog, Precise SMOG, and Flesch-Kincaid. Now I need to implement the Fry-based grade formula in my program. I understand the formula's logic: you take three 100-word samples and calculate the average sentences per 100 words and syllables per 100 words, and then you use a graph to plot the values. Here is a more detailed explanation of how this formula works. I already have the averages, but I have no idea how I can tell my program to "go check the graph, plot the values, and give me a level." I don't have to show the graph to the user; I only have to show him the level. I was thinking that maybe I could keep all the values in memory, divided into levels. For example: level 1 covers values whose sentence average is between 10.0 and 25+ and whose syllable average is between 108 and 132; level 2 covers values whose sentence average is between 7.7 and 10.0, and so on. But the problem is that so far, the only place I have found the values that define a level is the graph itself, and they aren't very accurate, so if I took the values from the graph, my level estimations would be too imprecise and the Fry-based grade would not be accurate. So, maybe some of you know of a place where I can find exact values for the different levels of the Fry-based grade, or maybe you can help me think of a way to work around this. Thanks
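
    The in-memory banding idea from the question can be sketched directly. Java is used here to match the other examples on this page (the translation to C# is mechanical), and the threshold numbers are placeholders taken from the question itself, not authoritative Fry graph values:

        public class FryGrade {
            // One band per grade level: {minSentences, maxSentences, minSyllables, maxSyllables}.
            // The rows reuse the rough numbers from the question; replace them with
            // values digitized from the actual Fry graph before relying on the output.
            private static final double[][] BANDS = {
                {10.0, 25.0, 108, 132},   // level 1 (illustrative only)
                { 7.7, 10.0, 108, 132},   // level 2 (illustrative only)
            };

            // Returns the 1-based grade level, or -1 if the point falls outside all
            // bands (the Fry graph has invalid corner regions, so -1 is meaningful).
            static int level(double avgSentences, double avgSyllables) {
                for (int i = 0; i < BANDS.length; i++) {
                    double[] b = BANDS[i];
                    if (avgSentences >= b[0] && avgSentences <= b[1]
                            && avgSyllables >= b[2] && avgSyllables <= b[3])
                        return i + 1;
                }
                return -1;
            }
        }

    Since the published graph is the only source of the boundaries, one practical workaround is to digitize it once (trace the level curves into a table at, say, every 2 syllables) and ship that table with the application.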

    Read the article
