Search Results

Search found 13341 results on 534 pages for 'obiee performance tuning'.

Page 188 of 534

  • How the OS Makes the Database Scream

    - by rickramsey
    Few things are as satisfying as a screaming burnout. When Oracle Database engineers team up with Solaris engineers, they do a lot of them. Here are a few of the reasons why.

    Article: How the OS Makes the Database Fast - Oracle Solaris. For applications that rely on Oracle Database, a high-performance operating system translates into faster transactions, better scalability to support more users, and the ability to support larger-capacity databases. When deployed in virtualized environments, multiple Oracle Database servers can be consolidated on the same physical server. Ginny Henningsen describes what Oracle Solaris does to make the Oracle database run faster.

    Interview: Why Is the OS Still Relevant? In a world of increasing virtualization and growing interest in cloud services, why is the OS still relevant? Michael Palmeter, senior director of Oracle Solaris, explains why it's not only relevant but essential for data centers that care about performance.

    Interview: An Engineer's Perspective: Why the OS Is Still Relevant. Sysadmins are handling hundreds or perhaps thousands of VMs. What is it about Solaris that makes it such a good platform for managing those VMs? Liane Praza, senior engineer in the Solaris core engineering group, provides an engineer's perspective.

    Interview in the Lab: How to Get the Performance Promised by Oracle's T5 SPARC Chips. If you want your applications to run on the new SPARC T5/M5 chips, how do you make sure they use all that new performance? Don Kretsch, Senior Director of Engineering, explains.

    Interview: Why Oracle Database Engineering Uses Oracle Solaris Studio. The design priorities for Oracle Solaris Studio are performance, observability, and productivity. Why this is good for ISVs and developers, and why it's so important to the Oracle database engineering team. Taped in October 2012.

    - Rick
    Follow me on: Blog | Facebook | Twitter | YouTube | The Great Peruvian Novel

    Read the article

  • New Exadata public references

    - by Javier Puerta
    The following customers are now public references for Exadata. Show your customers how other companies in their industries are leveraging Exadata to achieve their business objectives.

    MIGROS BANK - Financial Services - Switzerland - Oracle Exadata Database Machine + OBIEE 11g
    Migros Bank AG Makes Systems More Available and Improves Operational Insight and Analytics with a Scalable, Integrated Data Warehouse
    Success Story (English) | Success Story (German)

    TECH ACCESS - Professional Services - United Arab Emirates - Oracle Exadata Database Machine
    Tech Access Drives Compelling Proof-of-Concept Evaluations for Hardware Sales in Region's Largest Solutions Center
    Success Story

    BALUBAID GROUP - Wholesale Distribution - Saudi Arabia - Oracle Exadata Database Machine + OBIEE 11g
    Balubaid Group of Companies Reduces Help-Desk Complaints by 75%, Improves Business Continuity and System Response
    Success Story

    ETISALAT - Communications - Nigeria - Oracle Exadata Database Machine
    Etisalat Accelerates Data Retrieval and Analysis by 99 Percent with Oracle Communications Data Model Running on Oracle Exadata Database Machine
    Oracle Press Release

    Read the article

  • New Whitepaper - Exalogic Virtualization Architecture

    - by Javier Puerta
    One of the key enhancements in the current generation of Oracle Exalogic systems, and the focus of this whitepaper, is Oracle's incorporation of virtualized InfiniBand I/O interconnects using Single Root I/O Virtualization (SR-IOV) technology. SR-IOV lets the system share the internal InfiniBand network and storage fabric among as many as 63 virtual machines per physical server node with near-native performance, simultaneously allowing both high performance and high workload consolidation. Download it here: An Oracle White Paper, November 2012: Oracle Exalogic Elastic Cloud: Advanced I/O Virtualization Architecture for Consolidating High-Performance Workloads

    Read the article

  • Career change: from non-functional testing to test automation

    - by centennial
    I started my career as a core Java developer six years ago and stayed a developer for 6-7 months, then moved to performance testing (actually I was pushed into it for a short-term need, and later I started liking it). I have done all sorts of non-functional testing (performance, load, stress, soak, compatibility, failover, etc.) with many performance test tools across many industries. I have been contracting all these years, which means I kept moving to new projects every 3-6 months. Now my personal situation has changed (married man now), so I am looking for something long term. Performance testing generally comes at the end of the development life cycle, hence the very short contracts, so I was wondering: if I move into the functional/test automation side, could I earn myself contracts of a decent length? I have had some exposure to QTP, and I am sure I can learn other tools quickly, as I am quite good at programming and testing concepts. In short, I want to move into functional test automation to get long-term contracts without giving up my love of programming. Any thoughts, please?

    Read the article

  • Enablement 2.0 Get Specialized

    - by mseika
    Oracle PartnerNetwork's Specialized program is releasing new certifications on our latest products, and partners are invited to be the first candidates.

    Oracle Taleo Enterprise Cloud Service 2013 Specialization - Now Active!
    This specialization recognizes partner organizations that are proficient in positioning, selling and implementing Taleo's Enterprise Talent Management solutions. Taleo's Talent Management Cloud helps organizations attract, develop, motivate and retain human capital to improve performance and drive growth. Oracle's Taleo Enterprise Cloud Service 2013 Specialization encompasses the following products: Oracle Taleo Performance Management Cloud Service and Oracle Taleo Recruiting Cloud Service. Topics covered in this Specialization include selling and positioning Taleo's Talent Management Cloud, plus functional and technical positioning. Implementation tracks are included for Oracle Taleo Performance Management Cloud Service and Oracle Taleo Recruiting Cloud Service. Oracle partners who achieve this Specialization are differentiated in the marketplace through proven expertise in Oracle Taleo Enterprise Cloud Service.

    New Certified Implementation Specialist Exam in Production!
    Oracle Taleo Recruiting Cloud Service 2013 Certified Implementation Specialist (1Z0-474)
    All beta exam participants will receive their exam scores at the beginning of July 2013. Successful candidates will receive their certificates starting mid-July 2013. Take the exam now at a nearby Pearson VUE testing center!

    Contact Us
    Please direct any inquiries to the Oracle Partner Enablement team at [email protected].

    Read the article

  • How DBAs Can Tune Distributed IBM DB2 Applications

    Many critical business applications now execute in an environment separate from that of the enterprise database server. The database administrator often finds monitoring and performance tuning of these "distributed" applications to be especially difficult. This article looks at common performance issues of distributed applications and presents advice to assist the IBM DB2 database administrator in mitigating performance problems.

    Read the article

  • MySQL December Webinars

    - by Bertrand Matthelié
    We'll be running 3 webinars next week and hope many of you will be able to join us:

    MySQL Replication: Simplifying Scaling and HA with GTIDs
    Wednesday, December 12, at 15.00 Central European Time
    Join the MySQL replication developers for a deep dive into the design and implementation of Global Transaction Identifiers (GTIDs) and how they enable users to simplify MySQL scaling and HA. GTIDs are one of the most significant new replication capabilities in MySQL 5.6, making it simple to track and compare replication progress between the master and slave servers. Register Now

    MySQL 5.6: Building the Next Generation of Web/Cloud/SaaS/Embedded Applications and Services
    Thursday, December 13, at 9.00 am Pacific Time
    As the world's most popular web database, MySQL has quickly become the leading cloud database, with most providers offering MySQL-based services. Indeed, built to deliver web-based applications and to scale out, MySQL's architecture and features make the database a great fit to deliver cloud-based applications. In this webinar we will focus on the improvements in MySQL 5.6 performance, scalability, and availability designed to enable DBA and developer agility in building the next generation of web-based applications. Register Now

    Getting the Best MySQL Performance in Your Products: Part IV, Partitioning
    Friday, December 14, at 9.00 am Pacific Time
    We're adding Partitioning to our extremely popular "Getting the Best MySQL Performance in Your Products" webinar series. Partitioning can greatly increase the performance of your queries, especially when doing full table scans over large tables. Partitioning is also an excellent way to manage very large tables. It's one of the best ways to build higher performance into your product's embedded or bundled MySQL, and particularly for hardware-constrained appliances and devices. Register Now

    We have live Q&A during all webinars, so you'll get the opportunity to ask your questions!

    Read the article

  • Adding RAM at a different speed: will the performance loss outweigh the benefit of more memory?

    - by user1676874
    I got a new laptop with 4 GB of RAM, expandable to 8 GB. It has one 4 GB DDR3 PC3-12800 stick running at 1600 MHz. I can't seem to find an identical one locally; the closest I've found is a 4 GB DDR3 PC3-10600 stick at 1333 MHz. I know both sticks will run at the slower speed, so even though I would have more RAM available, the memory itself would be slower. My question: is the performance loss big enough to make the upgrade not worth the hassle?

    Read the article

  • We are moving from one server to two due to performance issues. How do we measure the value of this change?

    - by MikeN
    We are moving from one server to two due to performance issues. We are moving our MySQL DB to its own dedicated server and will keep the original machine as the front-end machine running nginx/Apache (Django backend). How do we measure the value of this change? Is it possible our whole site could actually get slower, since the MySQL queries will now go over a secure remote connection instead of staying local?

    Read the article

  • calculate AUC (GAM) in R [migrated]

    - by ahmad
    I used the following script to calculate AUC in R:

    library(mgcv)
    library(ROCR)
    library(AUC)
    data1 = read.table("d:\\2005.txt", header = T)
    GAM <- gam(tuna ~ s(chla) + s(sst) + s(ssha), family = binomial, data = data1)
    gampred <- predict(GAM, type = "response")
    rp <- prediction(gampred, data1$tuna)
    auc <- performance(rp, "auc")@y.values[[1]]
    auc
    roc <- performance(rp, "tpr", "fpr")
    plot(roc)

    But when I run the script, the result is:

    > rp <- prediction(gampred, data1$tuna)
    Error in prediction(gampred, data1$tuna) : Format of predictions is invalid.
    > auc <- performance(rp, "auc")@y.values[[1]]
    Error in performance(rp, "auc") : object 'rp' not found
    > auc
    (R prints the source of the auc() function from the AUC package, ending with <environment: namespace:AUC>)
    > roc <- performance(rp, "tpr", "fpr")
    Error in performance(rp, "tpr", "fpr") : object 'rp' not found
    > plot(roc)
    Error in levels(labels) : argument "labels" is missing, with no default

    Can anybody help me solve this problem? Thank you in advance.

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 19 (sys.dm_exec_query_stats)

    - by Tamarick Hill
    The sys.dm_exec_query_stats DMV is one of the most useful DMVs out there when it comes to performance tuning. If you have been keeping up with this blog series this month, you know that I started out on Day 1 reviewing many of the DMVs within the 'exec' namespace. I'm not sure how I missed this one considering how valuable it is, but hey, they say it's better late than never, right? On Day 7 and Day 8 we reviewed sys.dm_exec_procedure_stats and sys.dm_exec_trigger_stats respectively. The sys.dm_exec_query_stats DMV is very similar to those two. As a matter of fact, this DMV will return all of the information you saw in the other two DMVs, but in addition you can see stats for all queries that have cached execution plans on your server. You can even see stats for statements that are run ad hoc, as long as they are still cached in the buffer pool. To better illustrate this DMV, let's have a quick look at it:

    SELECT * FROM sys.dm_exec_query_stats

    As you can see, there is a lot of information returned from this DMV. I won't go into detail about each and every one of these columns, but I will touch on a few of them briefly. The first column is the 'sql_handle', which, if you remember from Day 4 of our blog series, you can use to extract the actual SQL text that was executed. The next columns, statement_start_offset and statement_end_offset, provide you a way of extracting the exact SQL statement that was executed as part of a batch. The plan_handle column is used to extract the execution plan that was used, which we talked about during Day 5 of this blog series. Later in the result set, you have columns to identify how many times a particular statement was executed, how much CPU time it used, how many reads/writes it performed, the duration, how many rows were returned, etc. These columns provide you with a solid avenue to begin your performance optimization.

    The last column I will touch on is the query_plan_hash column. A lot of times when you have dynamic SQL running on your server, you have similar statements with different parameter values being passed in. Many times these types of statements will get similar execution plans, and a binary hash value can be generated based on those similar plans. This query plan hash can be used to find the cost of all queries that have similar execution plans, and then you can tune based on that plan to improve the performance of all of the individual queries. This is a very powerful way of identifying and tuning ad-hoc statements that run on your server.

    As I stated earlier, the sys.dm_exec_query_stats DMV is a very powerful and recommended DMV for performance tuning. You are able to quickly identify statements that are running on your server and analyze their impact on system resources. Using this DMV to track down the biggest performance killers on your server will allow you to make the biggest gains once you focus your tuning efforts on those top offenders. For more information about this DMV, please see the Books Online link below:

    http://msdn.microsoft.com/en-us/library/ms189741.aspx

    Follow me on Twitter @PrimeTimeDBA
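    As a follow-up illustration (an added sketch, not part of the original post): joining this DMV to sys.dm_exec_sql_text and applying the statement offsets recovers the exact statement text for, say, the top CPU consumers. The TOP (10) and ORDER BY choices below are illustrative only:

    SELECT TOP (10)
        qs.execution_count,
        qs.total_worker_time,
        qs.total_logical_reads,
        -- use the offsets to carve the single statement out of its batch
        SUBSTRING(st.text,
                  (qs.statement_start_offset / 2) + 1,
                  ((CASE qs.statement_end_offset
                        WHEN -1 THEN DATALENGTH(st.text)
                        ELSE qs.statement_end_offset
                    END - qs.statement_start_offset) / 2) + 1) AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_worker_time DESC;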

    Read the article

  • SQL SERVER – Discard Results After Query Execution – SSMS

    - by pinaldave
    The first thing I do any day is turn on the computer. Today, as soon as I turned it on, I saw a chat message from a friend. He was a bit confused and wanted me to help him. As usual, I am keeping the relevant conversation in focus and documenting it as a chat. Let us call him Ajit.

    Ajit: Pinal, every time I run a query there is no result displayed in SSMS, but when I run the query in my application it works and returns an appropriate result.
    Pinal: Have you tried with different parameters?
    Ajit: Same thing. However, it works from another computer when I connect to the same server with the same query parameters.
    Pinal: What? That is new, and I believe it is something to do with SSMS and not with the server. Send me a screenshot, please.
    Ajit: I believe so. Let me send you a screenshot.
    Pinal: (looking at the screenshot) Oh man, there is no result tab at all.
    Ajit: That is the problem. It does not have the tab which displays the result. This works just fine from another computer.
    Pinal: Have you read Nakul's blog post - SSMS - Query result options - Discard result after query executes - which talks about a setting that can discard the query results after execution?
    (After a while)
    Ajit: It seems that on the computer where I am running the query, my SSMS has the option enabled that discards results. I fixed it by following Nakul's blog post.
    Pinal: Great!

    Quite often I get the question of what the importance of this feature is. Let us first see how to turn this feature on or off in SQL Server Management Studio 2012. In SSMS 2012 go to Tools >> Options >> Query Results >> SQL Server >> Results to Grid >> Discard Results After Query Execution. When enabled, this option discards results after the execution. The advantage is that it improves performance by using less memory. The real question, however, is why someone would enable or disable the option. What are the cases when someone wants to run a query but does not care about the result? On the face of it, running a query and not caring about the result makes no sense at all. Yet I can see quite a few reasons for using this option.

    I often enable this option during performance tuning exercises. When I am working with execution plans and do not need the results to verify every run, or when I am tuning indexes and checking their effect on the execution plan, I do not need the results. In those situations I keep this option on and discard the results. It always helps me big time, as in most performance tuning exercises I am dealing with a huge amount of data, and dealing with this data can be expensive.

    Nakul has already done the experiment, but I am going to repeat it using the AdventureWorks database. Run the following T-SQL script with and without the option to discard the results:

    USE AdventureWorks2012
    GO
    SELECT * FROM Sales.SalesOrderDetail
    GO 10

    (Screenshots in the original post compare the timings after enabling and after disabling Discard Results After Query Execution.)

    Well, this is indeed a good option when someone is debugging the execution plan or does not want the result to be displayed. Please note that this option does not reduce IO or CPU usage for SQL Server. It just discards the results after execution, and it is a good help for debugging on a development server.
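    A quick way to verify Pinal's point that the option does not reduce server-side IO or CPU (this is an added sketch, not part of the original post) is to compare SET STATISTICS output with the option enabled and then disabled; the logical reads and CPU time reported in the Messages tab should be identical either way, with only the client-side result handling changing:

    USE AdventureWorks2012
    GO
    -- Server-side cost counters; these are reported in the Messages tab
    -- whether or not the result grid is discarded.
    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;
    GO
    SELECT * FROM Sales.SalesOrderDetail;
    GO
    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;
    GO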
Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL Server 2005: Improving performance for thousands of insert requests; logout-login time = 120 ms

    - by Rad
    Can somebody shed some light on how SQL Server 2005 deals with the many requests issued by a client using ADO.NET 2.0? Below is the shortened output of SQL Trace. I can see that connection pooling is working (I believe there is only one connection being pooled). What is not clear to me is why we have so many sp_reset_connection calls, i.e. a series of Audit Login, SQL:BatchStarting, RPC:Starting and Audit Logout events for each iteration of the for loop below. I can see that there is constant switching between the tempdb and master databases, which leads me to conclude that we lose the context when the next connection is created by fetching it from the pool based on the ConnectionString argument. I can see that every 15 ms I get 100-200 login/logout pairs per second (reported at the same time by Profiler), and then after another 15 ms I again get a series of 100-200 logins/logouts per second. I need clarification on how this might affect much more complex insert queries in a production environment.

    I use Enterprise Library 2006, the code is compiled with VS 2005, and it is a console application that parses a flat file with tens of thousands of rows, grouping parent-child rows. It runs on an application server and calls two stored procedures on a remote SQL Server 2005: one inserts a parent record and retrieves the identity value, which is then used to call the second stored procedure 1, 2 or multiple times (sometimes several thousand) to insert child records. The child table has close to 10 million records with 5-10 indexes, some of them covering non-clustered ones. There is a pretty complex insert trigger that copies each inserted detail record to an archive table. All in all I only get 7 inserts per second, which means it can take 2-4 hours for 50 thousand records.

    When I run Profiler on the test server (which is almost equivalent to the production server), I can see about 120 ms between the Audit Logout and Audit Login trace entries, which would almost give me the chance to insert about 8 records. So my question is whether there is some way to improve the inserting of records, since the company loads 100 thousand records, does daily planning, and has an SLA to fulfill client requests arriving as flat-file orders, where some big files of 10 thousand rows have to be processed (imported) quickly. Importing 60 thousand records should take 30 minutes instead of 4 hours.

    I was thinking of using the BatchSize property of DataAdapter to send multiple stored procedure calls, SQL bulk inserts to batch multiple inserts from a DataReader or DataTable, or SSIS fast load. But I don't know how to properly analyze re-indexing and statistics population, and maybe those have to take some time to finish. What is worse is that the company uses the biggest table for reporting and other online processing, and the indexes cannot be dropped. I manage transactions manually by setting a field to a value and then doing a transactional update, changing that value to a new value that other applications use to identify committed rows.

    Please advise how to approach this problem. For now I am trying to use staging tables with minimal logging, in a separate database and with no indexes, and I will try to do batched (massive) parent-child inserts. I believe the production DB has the simple recovery model, but it could be full recovery. If the DB user used by my .NET console application has the bulkadmin role, does that mean its bulk inserts are minimally logged? I understand that when a table has a clustered index and many non-clustered indexes, inserts are still logged for each row.

    Connection pooling is working, but with many logins/logouts. Why?

    for (int i = 1; i <= 10000; i++)
    {
        using (SqlConnection conn = new SqlConnection("server=(local);database=master;integrated security=sspi;"))
        {
            conn.Open();
            using (SqlCommand cmd = conn.CreateCommand())
            {
                cmd.CommandText = "use tempdb";
                cmd.ExecuteNonQuery();
            }
        }
    }

    SQL Server Profiler trace:

    Audit Login  master  2010-01-13 23:18:45.337  1 - Nonpooled
    SQL:BatchStarting  use tempdb  master  2010-01-13 23:18:45.337
    RPC:Starting  exec sp_reset_conn  tempdb  2010-01-13 23:18:45.337
    Audit Logout  tempdb  2010-01-13 23:18:45.337  2 - Pooled
    Audit Login  -- network protocol  master  2010-01-13 23:18:45.383  2 - Pooled
    SQL:BatchStarting  use tempdb  master  2010-01-13 23:18:45.383
    RPC:Starting  exec sp_reset_conn  tempdb  2010-01-13 23:18:45.383
    Audit Logout  tempdb  2010-01-13 23:18:45.383  2 - Pooled
    Audit Login  -- network protocol  master  2010-01-13 23:18:45.383  2 - Pooled
    SQL:BatchStarting  use tempdb  master  2010-01-13 23:18:45.383
    RPC:Starting  exec sp_reset_conn  tempdb  2010-01-13 23:18:45.383
    Audit Logout  tempdb  2010-01-13 23:18:45.383  2 - Pooled
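    Since the poster is already considering staging tables, one set-based shape for the parent-child load is worth sketching: insert all parents in one statement, capture the generated identities with the OUTPUT clause (available in SQL Server 2005), and then insert all children with a single join back on a natural key. The schema below is hypothetical (invented table and column names, not the actual tables), and it assumes the existing archive trigger can handle multi-row inserts:

    -- Hypothetical schema: dbo.Parent(ParentId int IDENTITY, OrderRef, OrderDate),
    -- dbo.Child(ParentId, LineNo, Quantity), and staging tables sharing a
    -- natural key (OrderRef) between parent and child rows from the flat file.
    DECLARE @new_parents TABLE (OrderRef varchar(50), ParentId int);

    -- One set-based insert for all parent rows, capturing identities as we go.
    INSERT INTO dbo.Parent (OrderRef, OrderDate)
    OUTPUT inserted.OrderRef, inserted.ParentId INTO @new_parents
    SELECT OrderRef, OrderDate
    FROM staging.ParentRows;

    -- One set-based insert for all child rows, joined back on the natural key.
    INSERT INTO dbo.Child (ParentId, LineNo, Quantity)
    SELECT p.ParentId, c.LineNo, c.Quantity
    FROM staging.ChildRows AS c
    JOIN @new_parents AS p ON p.OrderRef = c.OrderRef;

    This replaces tens of thousands of round trips and per-row identity lookups with two statements, which is usually where the biggest gain is before worrying about minimal logging.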

    Read the article

  • How do the performance characteristics of jQuery selectors differ from those of CSS selectors?

    - by Moss
    I came across Google's Page Speed add-on for Firebug yesterday. Its page about using efficient CSS selectors says not to use overqualified selectors, i.e. to use #foo instead of div#foo. I thought the latter would be faster, but Google is saying otherwise, and who am I to go against that? That got me wondering whether the same applies to jQuery selectors. A page I found linked from SO says I should use $("div#foo"), which is what I was doing all along, since I thought things would speed up by limiting the selector to match div elements only. But is it really better than writing $("#foo"), as Google says for CSS selectors? Or do CSS and jQuery element matching work in different ways, meaning I should stick with $("div#foo")?

    Read the article

  • VTD-XML Parsing Performance (speed critical factor). Requesting Feedback/Comments

    - by andreas
    Hello, I am about to use VTD-XML (found at http://vtd-xml.sourceforge.net/), but I am interested in real-world usage feedback from anyone who has used the library. There are benchmarks at that URL, but if someone has used VTD-XML and has comments for it, I would like to hear them. Speed is a critical factor in the application, and comments from developers after real-world usage are what I am looking for. Regards,

    Read the article

  • Why does my performance increase when touching the screen?

    - by Smills
    For some reason my FPS jumps up considerably when I move my mouse around on the screen (on the emulator) while holding the left mouse button. Normally my game is very laggy, but if I touch the screen (and as long as I keep moving the mouse while touching) it runs perfectly smoothly. I have tried sleeping for 20 ms in onTouchEvent, but it doesn't appear to make any difference. Here is the code I use in my onTouchEvent:

    // events when touching the screen
    public boolean onTouchEvent(MotionEvent event) {
        int eventaction = event.getAction();
        touchX = event.getX();
        touchY = event.getY();
        switch (eventaction) {
            case MotionEvent.ACTION_DOWN: {
                touch = true;
            } break;
            case MotionEvent.ACTION_MOVE: {
            } break;
            case MotionEvent.ACTION_UP: {
                touch = false;
            } break;
        }
        /*try {
            AscentThread.sleep(20);
        } catch (InterruptedException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }*/
        return true;
    }

    In the logcat log below, FPS is the current fps (an average of the last 20 frames) and Touch is whether or not the screen is being touched (from onTouchEvent). What on earth is going on? Has anyone else had this odd behaviour before?

    Logcat log:

    12-21 19:43:26.154: INFO/myActivity(786): FPS: 31.686569159606414 Touch: false
    12-21 19:43:27.624: INFO/myActivity(786): FPS: 19.46310293212206 Touch: false
    12-21 19:43:29.104: INFO/myActivity(786): FPS: 18.801202175690467 Touch: false
    12-21 19:43:30.514: INFO/myActivity(786): FPS: 21.118295877408478 Touch: false
    12-21 19:43:31.985: INFO/myActivity(786): FPS: 19.117397812958878 Touch: false
    12-21 19:43:33.534: INFO/myActivity(786): FPS: 15.572571858239263 Touch: false
    12-21 19:43:34.934: INFO/myActivity(786): FPS: 20.584119901503506 Touch: false
    12-21 19:43:36.404: INFO/myActivity(786): FPS: 18.888025905454207 Touch: false
    12-21 19:43:37.814: INFO/myActivity(786): FPS: 22.35722329083629 Touch: false
    12-21 19:43:39.353: INFO/myActivity(786): FPS: 15.73604859775362 Touch: false
    12-21 19:43:40.763: INFO/myActivity(786): FPS: 20.912449882754633 Touch: false
    12-21 19:43:42.233: INFO/myActivity(786): FPS: 18.785278388997718 Touch: false
    12-21 19:43:43.634: INFO/myActivity(786): FPS: 20.1357397209596 Touch: false
    12-21 19:43:45.043: INFO/myActivity(786): FPS: 21.961138432007957 Touch: false
    12-21 19:43:46.453: INFO/myActivity(786): FPS: 22.167196852834273 Touch: false
    12-21 19:43:47.854: INFO/myActivity(786): FPS: 22.207318228024274 Touch: false
    12-21 19:43:49.264: INFO/myActivity(786): FPS: 22.36980559230175 Touch: false
    12-21 19:43:50.604: INFO/myActivity(786): FPS: 23.587638823252547 Touch: false
    12-21 19:43:52.073: INFO/myActivity(786): FPS: 19.233902040593076 Touch: false
    12-21 19:43:53.624: INFO/myActivity(786): FPS: 15.542190150440987 Touch: false
    12-21 19:43:55.034: INFO/myActivity(786): FPS: 20.82290063974675 Touch: false
    12-21 19:43:56.436: INFO/myActivity(786): FPS: 21.975282007207717 Touch: false
    12-21 19:43:57.914: INFO/myActivity(786): FPS: 18.786927284103687 Touch: false
    12-21 19:43:59.393: INFO/myActivity(786): FPS: 18.96879004217992 Touch: false
    12-21 19:44:00.625: INFO/myActivity(786): FPS: 28.367566618064878 Touch: false
    12-21 19:44:02.113: INFO/myActivity(786): FPS: 19.04441528684418 Touch: false
    12-21 19:44:03.585: INFO/myActivity(786): FPS: 18.807837511809065 Touch: false
    12-21 19:44:04.993: INFO/myActivity(786): FPS: 21.134330284993418 Touch: false
    12-21 19:44:06.275: INFO/myActivity(786): FPS: 27.209688764079907 Touch: false
    12-21 19:44:07.753: INFO/myActivity(786): FPS: 19.055894653261653 Touch: false
    12-21 19:44:09.163: INFO/myActivity(786): FPS: 22.05422794901088 Touch: false
    12-21 19:44:10.644: INFO/myActivity(786): FPS: 18.6956805300596 Touch: false
    12-21 19:44:12.124: INFO/myActivity(786): FPS: 17.434180581311054 Touch: false
    12-21 19:44:13.594: INFO/myActivity(786): FPS: 18.71932038510891 Touch: false
    12-21 19:44:14.504: INFO/myActivity(786): FPS: 40.94571503868066 Touch: true
    12-21 19:44:14.924: INFO/myActivity(786): FPS: 57.061200121138576 Touch: true
    12-21 19:44:15.364: INFO/myActivity(786): FPS: 62.54377946377936 Touch: true
    12-21 19:44:15.764: INFO/myActivity(786): FPS: 64.05005071818726 Touch: true
    12-21 19:44:16.384: INFO/myActivity(786): FPS: 50.912951172948155 Touch: true
    12-21 19:44:16.874: INFO/myActivity(786): FPS: 55.31242053078078 Touch: true
    12-21 19:44:17.364: INFO/myActivity(786): FPS: 59.31625410615102 Touch: true
    12-21 19:44:18.413: INFO/myActivity(786): FPS: 36.63504170925923 Touch: false
    12-21 19:44:19.885: INFO/myActivity(786): FPS: 18.099130467755923 Touch: false
    12-21 19:44:21.363: INFO/myActivity(786): FPS: 18.458978222946566 Touch: false
    12-21 19:44:22.683: INFO/myActivity(786): FPS: 25.582179409330823 Touch: true
    12-21 19:44:23.044: INFO/myActivity(786): FPS: 60.99865521942455 Touch: true
    12-21 19:44:23.403: INFO/myActivity(786): FPS: 74.17873975470984 Touch: true
    12-21 19:44:23.763: INFO/myActivity(786): FPS: 64.25663040460714 Touch: true
    12-21 19:44:24.113: INFO/myActivity(786): FPS: 62.47483457826921 Touch: true
    12-21 19:44:24.473: INFO/myActivity(786): FPS: 65.27969529547072 Touch: true
    12-21 19:44:24.825: INFO/myActivity(786): FPS: 67.84743115273311 Touch: true
    12-21 19:44:25.173: INFO/myActivity(786): FPS: 73.50854551357706 Touch: true
    12-21 19:44:25.523: INFO/myActivity(786): FPS: 70.46432534585368 Touch: true
    12-21 19:44:25.873: INFO/myActivity(786): FPS: 69.04076953445896 Touch: true

    Read the article

  • Silverlight performance with a large number of objects in an org chart

    - by KC
    Hello, I'm working on an org chart project (SL 3), and I'm seeing the UI thread hang when the chart is building around 2,000 nodes; when it renders, it takes about a minute and then FPS drops to a crawl. Here is the code flow: Page.xaml.cs calls a WCF service that returns a list of AD users. Then we use LINQ to build a collection of nodes to bind to OrgChart.cs. OrgChart.cs is a canvas that displays a collection of nodes and connecting lines. Node.cs is a canvas that has user data and can contain child nodes. NodeContent.xaml is a user control that has borders so I can set the background, text blocks to display the user's data, events that handle selected and expanded nodes, and storyboards that resize the nodes when they are selected or expanded. After hours of debugging, I noticed the performance hit seems to happen in the InitializeComponent() section where it loads the XAML: System.Windows.Application.LoadComponent(this, new System.Uri("/Silverlight.Custom;component/NodeContent.xaml", System.UriKind.Relative)); So I guess I have two questions. Can threading help with the UI thread hanging while drawing the nodes? And how can I avoid the hit when calling this user control? Any advice or direction anyone can lend would be greatly appreciated. Thanks, KC

    Read the article

  • LINQ query performance: comparing compiled vs. non-compiled queries

    - by AG.
    Hello guys, I was wondering: if I extract the common where clause into a shared expression, would it make my queries much faster, given that I have, say, something like 10 LINQ queries on a collection with the exact same first part of the where clause? I have put together a small example to explain a bit more:

    public class Person
    {
        public string First { get; set; }
        public string Last { get; set; }
        public int Age { get; set; }
        public string Born { get; set; }
        public string Living { get; set; }
    }

    public sealed class PersonDetails : List<Person> { }

    PersonDetails d = new PersonDetails();
    d.Add(new Person() { Age = 29, Born = "Timbuk Tu", First = "Joe", Last = "Bloggs", Living = "London" });
    d.Add(new Person() { Age = 29, Born = "Timbuk Tu", First = "Foo", Last = "Bar", Living = "NewYork" });

    Expression<Func<Person, bool>> exp = (a) => a.Age == 29;
    Func<Person, bool> commonQuery = exp.Compile();

    var lx = from y in d
             where commonQuery.Invoke(y) && y.Living == "London"
             select y;

    var bx = from y in d
             where y.Age == 29 && y.Living == "NewYork"
             select y;

    Console.WriteLine("All Details {0}, {1}, {2}, {3}, {4}", lx.Single().Age, lx.Single().First, lx.Single().Last, lx.Single().Living, lx.Single().Born);
    Console.WriteLine("All Details {0}, {1}, {2}, {3}, {4}", bx.Single().Age, bx.Single().First, bx.Single().Last, bx.Single().Living, bx.Single().Born);

    So can some of the gurus here give me some advice on whether it would be good practice to write queries in the style of lx or in the style of bx? Any input would be highly appreciated. Thanks, AG

    Read the article
