Search Results

Search found 7473 results on 299 pages for 'usage statistics'.

Page 105/299 | < Previous Page | 101 102 103 104 105 106 107 108 109 110 111 112  | Next Page >

  • Save match details to SQLite or XML?

    - by trizz
    I'm making a (conceptual) system to simulate any kind of sports match (like soccer, basketball, etc.) with actions (for example pass, pass, out, pass, score) so it will read like a real report. I'm saving the main statistics (play time, number of actions, etc.) to a MySQL database, but the report itself can contain more than 1000 actions per match. To avoid millions of records in my database, I'm thinking of saving the detailed report in a SQLite database or an XML file. For every match played, a file will be created. When a user requests the match details, I read the file for the details. What is the best choice for this purpose: SQLite or XML?
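    For comparison, here is a minimal sketch of the SQLite-per-match approach using Python's built-in sqlite3 module. The file name, table layout and column names are illustrative assumptions rather than anything from the question; the idea is simply one small file per match holding the full action log, while aggregate statistics stay in MySQL.

    ```python
    import sqlite3

    def save_match_report(match_id, actions):
        """Write the detailed action log for one match to its own SQLite file.

        `actions` is a list of (second, player_id, action) tuples -- an assumed
        shape, purely for illustration.
        """
        con = sqlite3.connect("match_%d.db" % match_id)   # one file per match
        con.execute("""CREATE TABLE IF NOT EXISTS action (
                           seq     INTEGER PRIMARY KEY,
                           second  INTEGER,
                           player  INTEGER,
                           action  TEXT)""")
        con.executemany(
            "INSERT INTO action (second, player, action) VALUES (?, ?, ?)",
            actions)
        con.commit()
        con.close()

    def load_match_report(match_id):
        """Read the full report back when a user requests the match details."""
        con = sqlite3.connect("match_%d.db" % match_id)
        rows = con.execute(
            "SELECT second, player, action FROM action ORDER BY seq").fetchall()
        con.close()
        return rows

    # Example usage:
    # save_match_report(42, [(3, 7, "pass"), (5, 9, "pass"), (8, 9, "score")])
    # print(load_match_report(42))
    ```

    One practical difference from XML: a per-match SQLite file can be queried directly (for example, counting actions per player) without parsing the whole document into memory.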

    Read the article

  • Why does flushing DNS often fail to work?

    - by Sharen Eayrs
    C:\Windows\system32>ipconfig /flushdns Windows IP Configuration Successfully flushed the DNS Resolver Cache. C:\Windows\system32>ping beautyadmired.com Pinging beautyadmired.com [xxx.45.62.2] with 32 bytes of data: Reply from xxx.45.62.2: bytes=32 time=253ms TTL=49 Reply from xxx.45.62.2: bytes=32 time=249ms TTL=49 Reply from xxx.45.62.2: bytes=32 time=242ms TTL=49 Reply from xxx.45.62.2: bytes=32 time=258ms TTL=49 Ping statistics for xxx.45.62.2: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 242ms, Maximum = 258ms, Average = 250ms My site should point to xx.73.42.27. I changed the name server three hours ago, but it still points to xxx.45.62.2. What actually happens after we change the name server anyway? What are we waiting for? I have already flushed the DNS, so why does it still point to the wrong IP? Also, most other people, who do not have my DNS cache, still end up at the wrong IP.
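    Flushing your own resolver cache only clears your machine; upstream resolvers keep the old record until its TTL expires. One way to watch propagation is to query a few resolvers directly and compare their answers. The sketch below is illustrative only: it assumes the third-party dnspython package (version 2.x) and uses example.com as a stand-in for the real domain.

    ```python
    import dns.resolver  # third-party package: dnspython (assumed installed)

    DOMAIN = "example.com"  # stand-in for the real domain

    def lookup(nameserver):
        """Return the A records the given resolver currently hands out."""
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [nameserver]
        return [rdata.address for rdata in resolver.resolve(DOMAIN, "A")]

    # Compare a couple of public resolvers; cached (stale) answers can differ
    # from each other until their TTLs expire, no matter how often you flush
    # your own machine's cache.
    for ns in ("8.8.8.8", "1.1.1.1"):
        print(ns, lookup(ns))
    ```

    If the domain's authoritative servers already return the new address but public resolvers still return the old one, you are simply waiting for cached records to time out, which ipconfig /flushdns on your own machine cannot speed up.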

    Read the article

  • Smart Grid Gateway and New Meter Data Management released

    - by Anthony Shorten
    Two products have just been released and are available from edelivery.oracle.com: Smart Grid Gateway 2.0.0, a new product to integrate with Smart Grid networks, and Meter Data Management 2.0.1, a new version of the Meter Data Management product. These are the first products to use the brand-new version of the Oracle Utilities Application Framework (V4.1). The new framework builds on FW2.2 and FW4.0.2 to add exciting new features (this is just a subset): support for Database Vault; enhancements to Business Object maintenance; a Batch Statistics portal for benchmarking; custom template user exit support; file permissions now consistent with other Oracle products; use of Universal Connection Pool for all database pool access; and the ability to manage the batch data cache. Over the next few weeks I will be publishing articles and updates to existing whitepapers to highlight all the new features.

    Read the article

  • Cardinality Estimation Bug with Lookups in SQL Server 2008 onward

    - by Paul White
    Cost-based optimization stands or falls on the quality of cardinality estimates (expected row counts).  If the optimizer has incorrect information to start with, it is quite unlikely to produce good quality execution plans except by chance.  There are many ways we can provide good starting information to the optimizer, and even more ways for cardinality estimation to go wrong.  Good database people know this, and work hard to write optimizer-friendly queries with a schema and metadata (e.g. statistics) that reduce the chances of poor cardinality estimation producing a sub-optimal plan.  Today, I am going to look at a case where poor cardinality estimation is Microsoft’s fault, and not yours. SQL Server 2005 SELECT th.ProductID, th.TransactionID, th.TransactionDate FROM Production.TransactionHistory AS th WHERE th.ProductID = 1 AND th.TransactionDate BETWEEN '20030901' AND '20031231'; The query plan on SQL Server 2005 is as follows (if you are using a more recent version of AdventureWorks, you will need to change the year on the date range from 2003 to 2007): There is an Index Seek on ProductID = 1, followed by a Key Lookup to find the Transaction Date for each row, and finally a Filter to restrict the results to only those rows where Transaction Date falls in the range specified.  The cardinality estimate of 45 rows at the Index Seek is exactly correct.  The table is not very large, there are up-to-date statistics associated with the index, so this is as expected. The estimate for the Key Lookup is also exactly right.  Each lookup into the Clustered Index to find the Transaction Date is guaranteed to return exactly one row.  The plan shows that the Key Lookup is expected to be executed 45 times.  The estimate for the Inner Join output is also correct – 45 rows from the seek joining to one row each time, gives 45 rows as output. The Filter estimate is also very good: the optimizer estimates 16.9951 rows will match the specified range of transaction dates.  Eleven rows are produced by this query, but that small difference is quite normal and certainly nothing to worry about here.  All good so far. SQL Server 2008 onward The same query executed against an identical copy of AdventureWorks on SQL Server 2008 produces a different execution plan: The optimizer has pushed the Filter conditions seen in the 2005 plan down to the Key Lookup.  This is a good optimization – it makes sense to filter rows out as early as possible.  Unfortunately, it has made a bit of a mess of the cardinality estimates. The post-Filter estimate of 16.9951 rows seen in the 2005 plan has moved with the predicate on Transaction Date.  Instead of estimating one row, the plan now suggests that 16.9951 rows will be produced by each clustered index lookup – clearly not right!  This misinformation also confuses SQL Sentry Plan Explorer: Plan Explorer shows 765 rows expected from the Key Lookup (it multiplies a rounded estimate of 17 rows by 45 expected executions to give 765 rows total). 
Workarounds One workaround is to provide a covering non-clustered index (avoiding the lookup avoids the problem of course): CREATE INDEX nc1 ON Production.TransactionHistory (ProductID) INCLUDE (TransactionDate); With the Transaction Date filter applied as a residual predicate in the same operator as the seek, the estimate is again as expected: We could also force the use of the ultimate covering index (the clustered one): SELECT th.ProductID, th.TransactionID, th.TransactionDate FROM Production.TransactionHistory AS th WITH (INDEX(1)) WHERE th.ProductID = 1 AND th.TransactionDate BETWEEN '20030901' AND '20031231'; Summary Providing a covering non-clustered index for all possible queries is not always practical, and scanning the clustered index will rarely be optimal.  Nevertheless, these are the best workarounds we have today. In the meantime, watch out for poor cardinality estimates when a predicate is applied as part of a lookup. The worst thing is that the estimate after the lookup join in the 2008+ plans is wrong.  It’s not hopelessly wrong in this particular case (45 versus 16.9951 is not the end of the world) but it easily can be much worse, and there’s not much you can do about it.  Any decisions made by the optimizer after such a lookup could be based on very wrong information – which can only be bad news. If you think this situation should be improved, please vote for this Connect item. © 2012 Paul White – All Rights Reserved twitter: @SQL_Kiwi email: [email protected]

    Read the article

  • NEW CERTIFICATION: Oracle Certified Expert, Oracle Database 11g Release 2 SQL Tuning

    - by Brandye Barrington
    Oracle Certification announces the release of the new Oracle Certified Expert, Oracle Database 11g Release 2 SQL Tuning certification. This certification is designed for Developers, Database Administrators and SQL developers who are proficient at tuning SQL statements for efficiency. This certification covers topics on core elements such as: identifying and tuning inefficient SQL statements, using automatic SQL tuning, managing optimizer statistics on database objects, implementing partitioning and analyzing queries. Beta testing for the Oracle Database 11g Release 2: SQL Tuning exam (1Z1-117) is now underway, so the exam is available at the greatly discounted rate of $50 USD. Visit pearsonvue.com/oracle and register for exam 1Z1-117. You can get all preparation details on the Oracle Certification website, including exam objectives, number of questions, time allotments, and pricing. QUICK LINKS: Certification Track: Oracle Certified Expert, Oracle Database 11g Release 2 SQL Tuning Certification Exam: Oracle Database 11g Release 2: SQL Tuning (1Z0-117) Certification Website: About Beta Exams Register Now: Pearson VUE

    Read the article

  • What good Social Networking Site solutions are there? [closed]

    - by ZetsubouWebmaster
    What good and free social networking site solutions are there? I tried many options, but most of them are either too complicated, too simple, or just do not work... I tried Dolphin, DZOIC-Handshakes, elgg, Oxwall, SocialEngine, and some plugins for WP and other CMSs. I don't need much, just: groups, chats, forums, profiles, PM, photos, pages, comments, search, statistics. Most of these are included in pretty much every CMS out there, but not all. So, what good solutions are there? Also, I don't mind paying some money (I guess no more than $200), but I'd prefer a free open-source engine. Of course it should be PHP+MySQL based.

    Read the article

  • Google I/O 2010 - Appstats - instrumentation for App Engine

    Google I/O 2010 - Appstats - RPC instrumentation and optimizations for App Engine (App Engine 201), presented by Guido van Rossum. Appstats is a pure userland library (for Python and Java) that inserts instrumentation hooks into the App Engine runtime at the interface between the runtime and services like the datastore. The collected statistics can be browsed in a rich UI which allows drilling down to various levels of detail. The talk also discusses common optimizations to address typical findings. For all I/O 2010 sessions, please go to code.google.com. From: GoogleDevelopers. Time: 59:31.
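    For reference, enabling Appstats in a Python App Engine app was a small configuration change. The sketch below assumes the classic Python App Engine SDK (where the appstats package ships under google.appengine.ext); it is a typical setup, not a transcript of the talk.

    ```python
    # appengine_config.py -- picked up automatically by the App Engine runtime.
    from google.appengine.ext.appstats import recording

    def webapp_add_wsgi_middleware(app):
        # Wrap every WSGI application in the Appstats recorder so that RPCs
        # (datastore, memcache, urlfetch, ...) are timed and recorded for
        # later browsing in the Appstats UI that the SDK provides.
        return recording.appstats_wsgi_middleware(app)
    ```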

    Read the article

  • Oracle Database 12c is here!

    - by Maria Colgan
    Oracle Database 12c was officially released today and is now available for download. Along with the software release comes a whole new set of collateral that explains in detail all of the new features and functionality you will find in this release. The Optimizer page on Oracle.com has all the juicy details about what you can expect from the Optimizer in Oracle Database 12c. There you will find the following three new white papers: What to expect from the Oracle Optimizer in Oracle Database 12c; SQL Plan Management with Oracle Database 12c; Understanding Optimizer Statistics with Oracle Database 12c. Over the coming months we will also present an in-depth series of blog posts on all of the cool new Optimizer features in 12c, so stay tuned for that and happy reading! +Maria Colgan

    Read the article

  • How often do CPUs make calculation errors?

    - by veryfoolish
    In Dijkstra's Notes on Structured Programming he talks a lot about the provability of computer programs as abstract entities. As a corollary, he remarks how testing isn't enough. E.g., he points out the fact that it would be impossible to test a multiplication function f(x,y) = x*y for any large values of x and y across the entire ranges of x and y. My question concerns his misc. remarks on "lousy hardware". I know the essay was written in the 1970s when computer hardware was less reliable, but computers still aren't perfect, so they must make calculation mistakes sometimes. Does anybody know how often this happens or if there are any statistics on this?

    Read the article

  • MDW Reports–New Source Code ZIP File Available

    - by billramo
    In my MDW Reports series, I attached V1 of the RDL files in my post - May the source be with you! MDW Report Series Part 6–The Final Edition. Since that post, Rachna Agarwal from MSIT in India updated the RDL files, which are ready to go in a single ZIP. The reports assume that they will be uploaded to the Report Manager’s root folder and use a shared data source named MDW. The reports also integrate with the new Query Hash Statistics reports. You can download them from my SQLBlog.com download site.

    Read the article

  • Developing a live video-streaming website

    - by cawecoy
    I'm a computer science student and know a little about some technologies to start developing my website, like PHP, Ruby on Rails and Python, plus MySQL and PostgreSQL for the database. I need to know which are the best (secure, stable, low-priced, etc.) to get started, based on my business information: My website will be a live video-streaming one, similar to livestream.com. We need to provide a secure service for our customers. They need to have a page to create and configure their own live-streaming videos, get statistics, etc. We work with Wowza Media Server running on an Apache server. In addition, I would like to know some good practices for this kind of website development, as I am new to this. Thanks in advance!

    Read the article

  • Day 3 of Oracle OpenWorld 2012 October 2nd

    - by Maria Colgan
    Hopefully you enjoyed yesterday, the first full day of technical sessions at Oracle OpenWorld, and are ready for more today! Today we give our first technical session, Oracle Optimizer: Harnessing the Power of Optimizer Hints (Session CON8455), at 1:15pm in Moscone South - room 103. In this session we will discuss in detail how Optimizer hints are interpreted, when they should be used, why they appear to be ignored, and what you can do if you have inherited a hint-ridden application. The Optimizer team will also be at the Oracle Database Demogrounds all day. The demogrounds open at 9:45 am and run until 6 pm, so stop by and find out what's new with the Optimizer and the statistics that feed it. Don't forget to pick up your Optimizer bumper sticker while you are there! +Maria Colgan

    Read the article

  • How to design a leaderboard?

    - by PeterK
    This sounds like an easy thing, but consider the following: there are many players; some have played many games and some have just started; and there are different types of statistics. On what information should the actual ranking be based? I am planning to display the board in a UITableView, so there is limited space available per player. However, I am not bound to the UITableView if there is a better solution. This is a quiz game, and the information I am currently capturing per player is: total games played; games played per game type (the current version has only one game type); questions answered; correct answers. Maybe I should include additional information. I have been thinking about having a leaderboard property page where the player can decide on what basis the leaderboard should display information, but I would like to avoid that complexity. However, if it is needed I will do it. Any advice on how to design the presentation of this would be highly appreciated.
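    One common way to rank players who have played very different numbers of games is to smooth the accuracy, so a newcomer with 3/3 correct does not outrank a veteran with 950/1000. The snippet below is only a sketch of that idea; the field names and the prior of 20 "virtual" questions are arbitrary assumptions, not anything from the question.

    ```python
    PRIOR_QUESTIONS = 20   # pretend every player starts with 20 average answers
    PRIOR_ACCURACY = 0.5   # assumed average accuracy for the prior

    def leaderboard_score(questions_answered, correct_answers):
        """Smoothed accuracy: stable for new players, converges to the
        true accuracy as more questions are answered."""
        return (correct_answers + PRIOR_ACCURACY * PRIOR_QUESTIONS) / \
               (questions_answered + PRIOR_QUESTIONS)

    players = [
        {"name": "newbie",  "answered": 3,    "correct": 3},
        {"name": "veteran", "answered": 1000, "correct": 950},
    ]
    players.sort(key=lambda p: leaderboard_score(p["answered"], p["correct"]),
                 reverse=True)
    for rank, p in enumerate(players, start=1):
        score = leaderboard_score(p["answered"], p["correct"])
        print(rank, p["name"], round(score, 3))
    ```

    Each UITableView row then only needs the rank, the name and the single score, with the raw counts shown on a per-player detail page if wanted.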

    Read the article

  • What are some recommended video lectures for a non-CS student to prepare for the GRE CS subject test?

    - by aristos
    Well, the title kinda explains all there is to explain. I'm a non-CS student preparing to apply to PhD programs in applied mathematics. For my senior thesis, though, I've been reading lots of machine learning and pattern recognition literature and enjoying it a lot. I've taken lots of courses with statistics and stochastics content, which I think would help me if I get accepted to a program with an ML focus, but there are only two CS courses (introduction to programming) on my transcript, and therefore I decided to take the CS subject test to increase my chances. Which courses do you think would be most essential for a good result on the CS subject test? I'm thinking of watching video lectures for them, so do you have any recommendations?

    Read the article

  • Is there a way to see granular per-visit data in Google Analytics?

    - by jakub.g
    I've started using Google Analytics very recently and I'm a bit lost in the sea of options (I had been using Sitemeter for some time before). I've clicked through the service a lot but couldn't find what I'm accustomed to. I can see a multitude of aggregated statistics in GA, like charts of browser share, lists of country share, lists of the most visited URLs within the site, and so on, but I would actually like to analyze each of the visits themselves. Something like: User X, France, Chrome, 7 pageviews between 18:01 and 18:15, entered on a.htm and exited on b.htm; User Y, UK, Firefox, 1 pageview at 18:20, entered on c.htm. Is there an easy way to see the reports in this way (perhaps by clicking a link to a separate page to see that particular session's stats)? How do I navigate there if so?

    Read the article

  • Over a million COBOL programmers in the world?

    - by Lucas McCoy
    I think I heard on a previous StackOverflow podcast that COBOL was used as the programming language for traffic lights (or something like that), so this got me interested. I did a quick Google search and found this little article: Today, Cobol is everywhere, yet largely unheard of by millions of people who interact with it daily when using the ATM, stopping at traffic lights or buying a product online. The statistics on Cobol attest to its huge influence on the business world: There are over 220 billion lines of Cobol in existence, a figure which equates to about 80 per cent of the world’s actively used code. There are over a million Cobol programmers in the world. There are 200 times as many Cobol transactions taking place each day as there are Google searches. I didn't really trust the source, seeing as it's on some random PHPBB forum. So how accurate are these figures? Are there really 220 billion lines of COBOL? I assume a few people/companies still use COBOL, but how many?

    Read the article

  • Are RSS feeds used? [closed]

    - by acidzombie24
    I was thinking about implementing RSS feeds on my site. The one thing that came to mind is: are RSS feeds ever used?! I don't use them, nor do I know anyone who does. I know that to use them you need either an RSS feed reader or built-in support in the browser. I like my browser clean, and I know many people don't know how to use or configure their browser. So I deduced that RSS is not used by 99.9999% of people (that's four decimal places, which is really saying something). Twitter, on the other hand, or even email may be used, but I have doubts about RSS feeds. Can anyone give me statistics or change my mind on implementing it? I suspect it would be simple, but I don't think I should bother if no one will ever use it. I don't even use SE/SO RSS feeds.

    Read the article

  • Server for online browser game

    - by Tim Rogers
    I am going to be making an online single-player browser game. The online element is needed so that a player can log in and store the state of their game. This will include things like what buildings have been made and where they have been positioned, as well as the user's personal statistics and achievements. At this point in time, I am expecting all of the game logic to be performed client side. So far, I am thinking I will use Flash for creating the client side of the game. I am also creating a MySQL database to store all the users' information. My question is how to connect the two. Presumably I will need some sort of server application which will listen for incoming requests from any clients, perform the SQL query and then return the data. Does anyone have any recommendations of what technology/language to use?
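    As a rough illustration of the "server application" part, here is a minimal HTTP endpoint that accepts the saved game state as JSON and writes it to MySQL. Flask and the mysql-connector-python driver are assumptions for this sketch (any web framework and MySQL driver would do), and the table and credentials are invented; the Flash client would simply POST to /save and GET from /load.

    ```python
    from flask import Flask, request, jsonify   # assumed: pip install flask
    import mysql.connector                      # assumed: pip install mysql-connector-python
    import json

    app = Flask(__name__)

    def db():
        # Assumed credentials and a table: game_state(player_id INT PRIMARY KEY, state TEXT)
        return mysql.connector.connect(host="localhost", user="game",
                                       password="secret", database="game")

    @app.route("/save/<int:player_id>", methods=["POST"])
    def save_state(player_id):
        state = json.dumps(request.get_json())   # buildings, positions, stats...
        con = db()
        cur = con.cursor()
        cur.execute("REPLACE INTO game_state (player_id, state) VALUES (%s, %s)",
                    (player_id, state))
        con.commit()
        con.close()
        return jsonify(ok=True)

    @app.route("/load/<int:player_id>", methods=["GET"])
    def load_state(player_id):
        con = db()
        cur = con.cursor()
        cur.execute("SELECT state FROM game_state WHERE player_id = %s", (player_id,))
        row = cur.fetchone()
        con.close()
        if row is None:
            return "{}", 404
        return row[0], 200, {"Content-Type": "application/json"}

    if __name__ == "__main__":
        app.run()
    ```

    Whatever stack you choose, the shape is the same: a small authenticated API that serializes the client-side game state and persists it per player.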

    Read the article

  • Data management in unexpected places

    - by Ashok_Ora
    When you think of network switches, routers, firewall appliances, etc., it may not be obvious that at the heart of these kinds of solutions is an engine that can manage huge amounts of data at very high throughput with low latencies and high availability. Consider a network router that is processing tens (or hundreds) of thousands of network packets per second. So what really happens inside a router? Packets are streaming in at the rate of tens of thousands per second. Each packet has multiple attributes, for example, a destination, associated SLAs etc. For each packet, the router has to determine the address of the next “hop” to the destination; it has to determine how to prioritize this packet. If it’s a high priority packet, then it has to be sent on its way before lower priority packets. As a consequence of prioritizing high priority packets, lower priority data packets may need to be temporarily stored (held back), but addressed fairly. If there are security or privacy requirements associated with the data packet, those have to be enforced. You probably need to keep track of statistics related to the packets processed (someone’s sure to ask). You have to do all this (and more) while preserving high availability i.e. if one of the processors in the router goes down, you have to have a way to continue processing without interruption (the customer won’t be happy with a “choppy” VoIP conversation, right?). And all this has to be achieved without ANY intervention from a human operator – the router is most likely to be in a remote location – it must JUST CONTINUE TO WORK CORRECTLY, even when bad things happen. How is this implemented? As soon as a packet arrives, it is interpreted by the receiving software. The software decodes the packet headers in order to determine the destination, kind of packet (e.g. voice vs. data), SLAs associated with the “owner” of the packet etc. It looks up the internal database of “rules” of how to process this packet and handles the packet accordingly. The software might choose to hold on to the packet safely for some period of time, if it’s a low priority packet. Ah – this sounds very much like a database problem. For each packet, you have to minimally · Look up the most efficient next “hop” towards the destination. The “most efficient” next hop can change, depending on latency, availability etc. · Look up the SLA and determine the priority of this packet (e.g. voice calls get priority over data ftp) · Look up security information associated with this data packet. It may be necessary to retrieve the context for this network packet since a network packet is a small “slice” of a session. The context for the “header” packet needs to be stored in the router, in order to make this work. · If the priority of the packet is low, then “store” the packet temporarily in the router until it is time to forward the packet to the next hop. · Update various statistics about the packet. In most cases, you have to do all this in the context of a single transaction. For example, you want to look up the forwarding address and perform the “send” in a single transaction so that the forwarding address doesn’t change while you’re sending the packet. So, how do you do all this? Berkeley DB is a proven, reliable, high performance, highly available embeddable database, designed for exactly these kinds of usage scenarios.
    Berkeley DB is a robust, reliable, proven solution that is currently being used in these scenarios. First and foremost, Berkeley DB (or BDB for short) is very, very fast. It can process tens or hundreds of thousands of transactions per second. It can be used as a pure in-memory database, or as a disk-persistent database. BDB provides high availability – if one board in the router fails, the system can automatically fail over to another board – no manual intervention required. BDB is self-administering – there’s no need for manual intervention in order to maintain a BDB application. No need to send a technician to a remote site in the middle of nowhere on a freezing winter day to perform maintenance operations. BDB has been used in over 200 million deployments worldwide over the past two decades for mission-critical applications such as the one described here. You have a choice of spending valuable resources to implement similar functionality, or you could simply embed BDB in your application and off you go! I know what I’d do – choose BDB, so I can focus on my business problem. What will you do?
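    For a flavour of what embedding BDB looks like from application code, here is a tiny key-value sketch using the bsddb3 Python binding. The binding, the flow-id key and the record layout are all assumptions for illustration; a router vendor would more likely use the C/C++ or Java API, but the put/get pattern is the same.

    ```python
    from bsddb3 import db   # assumed: the bsddb3 binding for Berkeley DB is installed
    import json

    # Open (or create) a hash-organized BDB database file.
    packets = db.DB()
    packets.open("packet_context.db", None, db.DB_HASH, db.DB_CREATE)

    # Store the context for a session "header" packet, keyed by an invented
    # flow id, so later packets of the same flow can be prioritized consistently.
    flow_id = b"flow:10.0.0.1:5060->10.0.0.2:5060"
    context = {"priority": "voice", "next_hop": "10.0.1.1", "sla": "gold"}
    packets.put(flow_id, json.dumps(context).encode("utf-8"))

    # A later packet of the same flow looks the context up again.
    stored = packets.get(flow_id)
    if stored is not None:
        print(json.loads(stored.decode("utf-8"))["next_hop"])

    packets.close()
    ```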

    Read the article

  • OWB – OWBLand on SourceForge

    - by David Allan
    There are a bunch of interesting utilities, either experts or OMB scripts, that are hosted on SourceForge by some keen OWB users (see the home here). One of the main initiatives has been an Excel to OWB ‘one click ETL’ utility, which looks to have had a fair amount of code added; there is an example, but it's a bit light on documentation, though it does look like it covers quite a lot. One of the nice things about SourceForge is that you can peek into the statistics and see what kind of activity has gone on; since last August there have been a bunch of downloads, with a big peak last November… Another utility generates OMB from a mapping definition - there's a bunch of useful stuff there: http://sourceforge.net/projects/owbland/files/

    Read the article

  • Getting to math applications gradually

    - by den-javamaniac
    I'm currently getting a formal degree related to computation; in particular, my current focus is numerical programming, scientific computing and machine learning. I'd love to apply that knowledge in game dev and expand it with statistics, probability theory, and graph theory (probably even linear algebra). The question is: which areas of gamedev are filled with this kind of math, is it possible to advance in them without being part of a group of people, and how do I get into it gradually? P.S.: I've got experience with commercial Java dev and am getting my hands on C/C++ at the moment; however, I'm open to going ahead and trying Unity3D etc.

    Read the article

  • How to get average network load instead of instantaneous readings

    - by Adam Ryczkowski
    Hello, I use conky to see network load statistics, sampling every 8 seconds in order to get a somewhat smoother history chart. Unfortunately, the values I get are not averages over this 8-second period; they are sampled from a much smaller time span, so the charts are just as choppy as if they were sampled every second or less. Is there any way to get conky (or at least System Monitor) to display system properties averaged over a specified amount of time, just like the Windows Task Manager does? I would like to have conky display hard drive usage from iostat, but there will be little use in it if conky reports instantaneous values not averaged over time.
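    If conky itself cannot do the averaging, a small helper script can: sample the kernel counters at the start and end of the interval and divide the byte delta by the elapsed time. The sketch below reads /proc/net/dev (Linux) and is only illustrative; conky could invoke something like it through an exec-style variable. The interface name and interval are assumptions.

    ```python
    #!/usr/bin/env python3
    import sys
    import time

    def read_bytes(iface):
        """Return (rx_bytes, tx_bytes) counters for one interface from /proc/net/dev."""
        with open("/proc/net/dev") as f:
            for line in f:
                if line.strip().startswith(iface + ":"):
                    fields = line.split(":", 1)[1].split()
                    return int(fields[0]), int(fields[8])   # rx bytes, tx bytes
        raise ValueError("interface not found: " + iface)

    def average_rate(iface, interval=8.0):
        """Average RX/TX rate in KiB/s over `interval` seconds (not an instant sample)."""
        rx0, tx0 = read_bytes(iface)
        t0 = time.time()
        time.sleep(interval)
        rx1, tx1 = read_bytes(iface)
        elapsed = time.time() - t0
        return (rx1 - rx0) / elapsed / 1024.0, (tx1 - tx0) / elapsed / 1024.0

    if __name__ == "__main__":
        iface = sys.argv[1] if len(sys.argv) > 1 else "eth0"
        down, up = average_rate(iface)
        print("down %.1f KiB/s  up %.1f KiB/s" % (down, up))
    ```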

    Read the article

  • Social media exchange strategies

    - by Wladimir Ivanov
    Recently I've stumbled upon some Facebook/Twitter/G+ and other social site tools which offer like-for-like exchanges. As I know from personal experimentation, following certain people/pages on Twitter also gains you followers. What's your opinion on this type of social media exchange? (I know the fans/followers you get are only a number which won't help much with growing your site.) Which of these sites are proven to boost some statistics? Are there other, better exchange tactics? Thanks in advance.

    Read the article

  • What is the increase in developer productivity while using Hibernate?

    - by Tarun Kohli
    I was curious to find out the percentage increase in developer productivity from using Hibernate. We use both Hibernate and NHibernate extensively and find them to be extremely elegant frameworks, but we haven't undertaken any study to find out the time savings from using them. IMHO, one could get a good 30 to 40% jump in developer productivity, as one doesn't have to write the basic CRUD operations or bother about caching. But are there any formal case studies which prove that point? I would really appreciate it if someone could direct me to a published white paper with some statistics about the productivity gains.

    Read the article

  • SQL Server 2012 RTM Available!

    - by Davide Mauri
    SQL Server 2012 is available for download! http://www.microsoft.com/sqlserver/en/us/default.aspx The Evaluation version is available here: http://www.microsoft.com/download/en/details.aspx?id=29066 and along with the SQL Server 2012 RTM there’s also the Feature Pack available: http://www.microsoft.com/download/en/details.aspx?id=29065 The Feature Pack is rich in useful and interesting stuff: some items are required by specific features, like the Semantic Language Statistics Database, while others are very good (I would say necessary) downloads if you use certain technologies, like MDS or Data Mining. By the way, the updated Excel add-in for Data Mining has also been released and is available in the Feature Pack. As if this were not enough, the SQL Server Data Tools IDE has also been released as RTM: http://msdn.microsoft.com/en-us/data/hh297027 Remember that SQL Server Data Tools is completely free and can be used with SQL Server 2005 and later. Happy downloading!

    Read the article
