Search Results

Search found 14282 results on 572 pages for 'performance counter'.

Page 63/572

  • Is there a downside of running too many Symfony applications for 1 website?

    - by gentrobot
    Recently I got access to a Symfony 1.2 project which is for just one website, but with too many applications. In the past, I have developed websites with no more than 2 or 3 applications. The cross-application links are achieved by passing the full URL to the 'href' attribute. Since the site is still working absolutely fine, my question is: will having too many front controllers (approximately 25-30) hamper the performance of the website? Should I just try to create cross-application links, or put additional effort into combining similar applications (I guess almost all of the site's frontend part) into one application with different modules?

    Read the article

  • Tell the kernel to strongly cache a particular directory

    - by silviot
    This question is a rephrasing of Optimizing EXT4 performance. I have a directory that contains build files, most very small, but totaling 5.6G. I usually access the same subset of files (some thousands, for some tens of megabytes) over and over again. The subset changes daily (different projects, different versions of libraries). What takes longer when I use it seems to be disk seeks. For example, if I do a du twice, the second run takes as much time as the first, and disk activity is similar. Ideally I'd like to tell the kernel to allocate X MB to the metadata and Y to the data in the folder, like the options for the NFS cache. Is it possible in some way, other than mounting NFS from localhost and caching it to a ramdisk?
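
    One workaround, separate from any real kernel tuning knob: pre-read the working set so its data and metadata already sit in the page cache and dentry cache before the build starts. The sketch below is a minimal, hedged Python example of that idea; the directory path is a placeholder, and a tool like vmtouch does the same job more thoroughly.

        # Minimal sketch: walk the build directory and read every file once so
        # the kernel caches its contents and metadata. The path is hypothetical.
        import os

        def warm_cache(root):
            for dirpath, _dirnames, filenames in os.walk(root):
                for name in filenames:
                    path = os.path.join(dirpath, name)
                    try:
                        with open(path, "rb") as f:
                            while f.read(1 << 20):   # read in 1 MiB chunks
                                pass
                    except OSError:
                        pass                          # skip unreadable files

        warm_cache("/path/to/build/dir")

    Whether the kernel keeps this data cached still depends on memory pressure; nothing here pins pages the way the question asks for.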

    Read the article

  • Can I recreate main user account and delete old?

    - by nazar_art
    Something happened to the performance of my main (super user) account. When I try to load the home folder it takes a really long time, compared to earlier. I couldn't figure out why this happens and what is wrong. It started after I copied a lot of content from an external USB disk. But if I log in through another user account everything works perfectly, without this trouble, fast and cool. I want to create a new user account, copy all necessary content to it, and delete the old account. Can I recreate the main user account and delete the old one?

    Read the article

  • Which one is better to get started? [closed]

    - by vanangamudi
    Which open-source game engine is better to get started with? I read several threads over several forums and found that it is better to write your own game engine specific to your application. But I need to know the requirements of a game engine, other than graphics, physics and AI... Many people suggested Unity, but I need an open-source version so that I can have a look at the implementation... so I googled rigorously and found some game engines unknown (at least to me): Unvanquished, Cube, Spring, Pyrogenesis, Torque3D, CrystalSpace, Panda3D, Delta3D, Irrlicht, OpenArena, AlienArena (please list others if I missed anything...). FYI: my present focus is on FPS/TPS. Can you tell me which one is better at performance, if possible? Torque3D claims to be the best open-source engine - is that true, and if so to what extent?

    Read the article

  • What in /home would benefit from being on an SSD?

    - by N.N.
    In Is a 40GB SSD practical to use for ' / ' Jorge describes how he symlinks directories in his /home that would benefit from being on an SSD. The directories he names are ~/.cache, ~/.config and ~/.gconf. I know how to make the symlinks. What I am asking is whether this is a good list of directories in /home that benefit from being on an SSD. I figure that good items on such a list are files that are read often. The reason for asking this is that I cannot fit all of /home on the SSD, but I still want to get as much performance out of the SSD as possible.
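
    For reference, the mechanics behind the symlink approach are just move-and-symlink. Here is a minimal sketch, assuming a hypothetical SSD mount point; the directory list simply mirrors the three named above and is not a recommendation in itself.

        # Move selected dot-directories onto an SSD mount and leave symlinks
        # behind so applications keep using the original ~/... paths.
        import os, shutil

        HOME = os.path.expanduser("~")
        SSD_ROOT = "/mnt/ssd/home-fast"        # hypothetical SSD mount point
        os.makedirs(SSD_ROOT, exist_ok=True)

        for name in (".cache", ".config", ".gconf"):
            src = os.path.join(HOME, name)
            dst = os.path.join(SSD_ROOT, name)
            if os.path.isdir(src) and not os.path.islink(src):
                shutil.move(src, dst)          # copies across filesystems
                os.symlink(dst, src)           # e.g. ~/.cache -> /mnt/ssd/...

    Doing this while a desktop session is actively using those directories can cause trouble, so it is safer from a console login.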

    Read the article

  • RAID 0 performance gains?

    - by NickAldwin
    I'm building a new computer over the summer. I'm fairly competent in computer hardware, and am thus building the computer from scratch. I have everything planned out, but I was wondering about RAID. I asked which RAID I should use earlier, but now that it's pretty clear that RAID 1 isn't really that great, I think I'll go with cloud-backup instead of disk-redundancy. However, I still face a choice: use two 1TB drives as two 1TB drives, or combine them into a RAID 0 striped array. Is there any performance gain at all? I know that if one drive dies, everything is gone, so is the performance gain worth it? I'm building a pretty advanced computer, with SLI video cards and a fast CPU, so I'm thinking RAID 0 would give me some good hard drive performance. From your experience, is RAID 0 viable?

    Read the article

  • Unhappy with performance of GBit Ethernet to Fiber converter

    - by Aaron Digulla
    I just bought a TP-Link MC200CM GBit Ethernet (1000-T) to fiber (1000-SX) media converter. The device works, but I'm unhappy with the performance: when connecting my computer over 1000-T (twisted pair, Cat 6, 18 meters) with my server, I get a throughput of about 610 MBit/s. If I replace the cable with two media converters, I'm left with about 310-315 MBit/s (i.e. half the performance). My setup is like this: Computer <- GBit switch <- long cable <- GBit switch <- server, versus Computer <- GBit switch <- MC200CM <- 30m fiber <- MC200CM <- GBit switch <- server. Is there a way to improve the performance? Will another MC be faster? Or is that about as much as I can expect with the additional 2 converters?
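
    One way to tell whether the converter path itself is the bottleneck is to measure raw TCP throughput end to end with no application traffic involved; iperf does exactly this, and the sketch below is a rough Python stand-in under the assumption that no benchmarking tool is installed (host, port and transfer size are placeholders).

        # Rough raw-TCP throughput check. Start receive() on the server,
        # then call send("server-hostname") on the client.
        import socket, time

        def receive(port=5001):
            srv = socket.socket()
            srv.bind(("", port))
            srv.listen(1)
            conn, _ = srv.accept()
            total, start = 0, time.time()
            while True:
                chunk = conn.recv(1 << 16)
                if not chunk:
                    break
                total += len(chunk)
            secs = time.time() - start
            print("%.0f MBit/s" % (total * 8 / secs / 1e6))
            conn.close()
            srv.close()

        def send(host, port=5001, megabytes=1000):
            s = socket.socket()
            s.connect((host, port))
            payload = b"\0" * (1 << 20)        # 1 MiB per send
            for _ in range(megabytes):
                s.sendall(payload)
            s.close()

    If the raw number over the fiber path also sits around 310 MBit/s, the limit is in the converter path rather than in the application on either end.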

    Read the article

  • Linear Performance Scalability with HP San Solutions

    - by Berzemus
    Hi all, I need a SAN solution with linear scalability in size as well as in performance. From what I know, with a Modular Smart Array solution such as the P2000/MSA-class solutions from HP, even with a dual-controller initial node, I can only increase its size, as added nodes come controller-less, so overall performance tends to decrease. On the other hand, in the P4000 (LeftHand) family of solutions each node has its own controller, so when a node is added, storage capacity as well as performance increases. Am I right in all that I say, and is the P4000 the only solution, or have I forgotten something?

    Read the article

  • SQL Server cluster performance baseline

    - by Dwight T
    Currently I'm tasked with getting a good performance baseline on a SQL 2005 cluster. The main db on the server is for SharePoint, but I would like to add other dbs to the cluster. I do have access to Quest's Performance Analysis tool to help. What are the key factors to look at to see if the cluster can handle additional dbs? Do you look at different performance indicators for a cluster vs. a standalone SQL Server? One db will be a low-usage transactional db, and another a read-only db that is used for sales data. Thanks, Dwight

    Read the article

  • Is there a relation between MS SQL Server client licenses and performance

    - by ramdaz
    I have a customer who has a .NET application running on MS SQL Server 2008, supplied by our company as a part of Microsoft Small Business Server. He started off with around 5 users, and hence we had not sold any extra licenses. Today there are 40 users, and there's performance degradation. An MS consultant said that to improve performance you need to buy extra licenses. Is there a relationship? I am planning to force the customer to buy extra licenses on legal grounds anyway. But will there be any appreciable performance difference too? Advice welcome.

    Read the article

  • Terminal Server 2003 Performance Troubleshooting

    - by MikeM
    Let me get your thoughts on terminal server performance problems. The server hosts on average 25 users who, after running some numbers, use on average 600 MB of memory with their main applications running (web browser, Adobe Reader, IP phone client). All users are on the same LAN as the server. We constantly experience slow response and short session lockups. Combined CPU usage is on average 10%. What appears strange to me is that the system shows 29 GB of physical memory with 25 GB of it free. The page file usage is about 50%, averaging 9 GB used. Some server specs: OS: Server 2003 32-bit Enterprise with the /PAE flag; RAM: 32 GB; CPU: 2x quad core @ 2.27 GHz; HD: RAID5 1.2GB. Basic troubleshooting using Performance Monitor leads me to believe that the performance problems are caused by the 32-bit OS limitation in addressing the full 32 GB of physical memory, even though the /PAE flag is used. Can anyone suggest troubleshooting steps that can lead to a more conclusive answer? Thanks

    Read the article

  • Justification of Amazon EC2 Performance

    - by Adroidist
    I have a .jar file that represents a server which receives an image in bytes over TCP (of size at most 500 kb) and writes it to file. It then applies a Sobel filter to this image and sends it over a TCP socket to the client side. I ran it on my laptop and it was very fast. But when I put it on an Amazon EC2 m1.large instance, I found it is very slow - around 10 times slower. It might be inefficiency in the code's algorithm, but in fact my code does nothing but receive the image (like any byte file), run the Sobel algorithm, and send it back. I have the following questions: 1- Is this normal performance for an Amazon EC2 server? I have read the following links: link1 and link2. 2- Even if the code is not that efficient, the server is ultimately handling a very low load (just one client), so does the "inefficient" code justify such performance? 3- My laptop is dual core only... Why would the Amazon EC2 server have worse performance than my laptop? How is this explained? Excuse me for my ignorance.
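
    One way to narrow this down is to time the Sobel step on its own on both machines, separate from network and disk I/O. The sketch below is not the poster's Java code; it is a hedged Python stand-in (SciPy and the synthetic image are assumptions) that isolates the compute cost.

        # Time only the Sobel computation on a synthetic image; compare the
        # printed time on the laptop and on the EC2 instance.
        import time
        import numpy as np
        from scipy import ndimage

        img = np.random.rand(512, 512)     # stand-in for the received image

        start = time.perf_counter()
        edges = np.hypot(ndimage.sobel(img, axis=0), ndimage.sobel(img, axis=1))
        elapsed = time.perf_counter() - start
        print("Sobel on %s image took %.4f s" % (img.shape, elapsed))

    If the printed time is similar on both machines, the slowdown is more likely in network or disk latency than in the Sobel computation itself.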

    Read the article

  • Performance Drop Lingers after Load [closed]

    - by Charles
    Possible Duplicate: How do you do Load Testing and Capacity Planning for Databases I'm noticing a drop in performance after subsequent load tests. Although our CPU and RAM numbers look fine, performance seems to degrade over time as sustained load is applied to the system. If we allow more time between the load tests, the performance gets back to about 1,000 ms, but if we apply load every 3 minutes or so, it starts to degrade to the point where it takes 12,000 ms. None of the application servers are showing lingering Apache processes, and the number of database connections cools down to about 3 (from a sustained 20). Is there anything else I should be looking out for here?

    Read the article

  • TouchXML to read in twitter feed for iphone app

    - by Fiona
    Hello there, So I've managed to get the feed from twitter and am attempting to parse it... I only require the following fields from the feed: name, description, time_zone and created_at I am successfully pulling out name and description.. however time_zone and created_at always are nil... The following is the code... Anyone see why this might not be working? -(void) friends_timeline_callback:(NSData *)data{ NSString *string = [[NSString alloc] initWithData:data encoding:NSASCIIStringEncoding]; NSLog(@"Data from twitter: %@", string); NSMutableArray *res = [[NSMutableArray alloc] init]; CXMLDocument *doc = [[[CXMLDocument alloc] initWithData:data options:0 error:nil] autorelease]; NSArray *nodes = nil; //! searching for item nodes nodes = [doc nodesForXPath:@"/statuses/status/user" error:nil]; for (CXMLElement *node in nodes) { int counter; Contact *contact = [[Contact alloc] init]; for (counter = 0; counter < [node childCount]; counter++) { //pulling out name and description only for the minute!!! if ([[[node childAtIndex:counter] name] isEqual:@"name"]){ contact.name = [[node childAtIndex:counter] stringValue]; }else if ([[[node childAtIndex:counter] name] isEqual:@"description"]) { // common procedure: dictionary with keys/values from XML node if ([[node childAtIndex:counter] stringValue] == NULL){ contact.nextAction = @"No description"; }else{ contact.nextAction = [[node childAtIndex:counter] stringValue]; } }else if ([[[node childAtIndex:counter] name] isEqual:@"created_at"]){ contact.date == [[node childAtIndex:counter] stringValue]; }else if([[[node childAtIndex:counter] name] isEqual:@"time_zone"]){ contact.status == [[node childAtIndex:counter] stringValue]; [res addObject:contact]; [contact release]; } } } self.contactsArray = res; [res release]; [self.tableView reloadData]; } Thanks in advance for your help!! Fiona

    Read the article

  • MySQL – Scalability on Amazon RDS: Scale out to multiple RDS instances

    - by Pinal Dave
    Today, I’d like to discuss getting better MySQL scalability on Amazon RDS. The question of the day: “What can you do when a MySQL database needs to scale write-intensive workloads beyond the capabilities of the largest available machine on Amazon RDS?” Let’s take a look. In a typical EC2/RDS set-up, users connect to app servers from their mobile devices and tablets, computers, browsers, etc.  Then app servers connect to an RDS instance (web/cloud services) and in some cases they might leverage some read-only replicas.   Figure 1. A typical RDS instance is a single-instance database, with read replicas.  This is not very good at handling high write-based throughput. As your application becomes more popular you can expect an increasing number of users, more transactions, and more accumulated data.  User interactions can become more challenging as the application adds more sophisticated capabilities. The result of all this positive activity: your MySQL database will inevitably begin to experience scalability pressures. What can you do? Broadly speaking, there are four options available to improve MySQL scalability on RDS. 1. Larger RDS Instances – If you’re not already using the maximum available RDS instance, you can always scale up – to larger hardware.  Bigger CPUs, more compute power, more memory et cetera. But the largest available RDS instance is still limited.  And they get expensive. “High-Memory Quadruple Extra Large DB Instance”: 68 GB of memory 26 ECUs (8 virtual cores with 3.25 ECUs each) 64-bit platform High I/O Capacity Provisioned IOPS Optimized: 1000Mbps 2. Provisioned IOPs – You can get provisioned IOPs and higher throughput on the I/O level. However, there is a hard limit with a maximum instance size and maximum number of provisioned IOPs you can buy from Amazon and you simply cannot scale beyond these hardware specifications. 3. Leverage Read Replicas – If your application permits, you can leverage read replicas to offload some reads from the master databases. But there are a limited number of replicas you can utilize and Amazon generally requires some modifications to your existing application. And read-replicas don’t help with write-intensive applications. 4. Multiple Database Instances – Amazon offers a fourth option: “You can implement partitioning,thereby spreading your data across multiple database Instances” (Link) However, Amazon does not offer any guidance or facilities to help you with this. “Multiple database instances” is not an RDS feature.  And Amazon doesn’t explain how to implement this idea. In fact, when asked, this is the response on an Amazon forum: Q: Is there any documents that describe the partition DB across multiple RDS? I need to use DB with more 1TB but exist a limitation during the create process, but I read in the any FAQ that you need to partition database, but I don’t find any documents that describe it. A: “DB partitioning/sharding is not an official feature of Amazon RDS or MySQL, but a technique to scale out database by using multiple database instances. The appropriate way to split data depends on the characteristics of the application or data set. Therefore, there is no concrete and specific guidance.” So now what? The answer is to scale out with ScaleBase. Amazon RDS with ScaleBase: What you get – MySQL Scalability! ScaleBase is specifically designed to scale out a single MySQL RDS instance into multiple MySQL instances. Critically, this is accomplished with no changes to your application code.  
    Your application continues to “see” one database. ScaleBase does all the work of managing and enforcing an optimized data distribution policy to create multiple MySQL instances. With ScaleBase, data distribution, transactions, concurrency control, and two-phase commit are all 100% transparent and 100% ACID-compliant, so applications, services and tooling continue to interact with your distributed RDS as if it were a single MySQL instance. The result: now you can cost-effectively leverage multiple MySQL RDS instances to scale out write-intensive workloads to an unlimited number of users, transactions, and data. Amazon RDS with ScaleBase: What you keep – Everything! And how does this change your Amazon environment? 1. Keep your application, unchanged – There is no change to your application development life-cycle at all. You still use your existing development tools, frameworks and libraries. Application quality assurance and testing cycles stay the same. And, critically, you stay with an ACID-compliant MySQL environment. 2. Keep your RDS value-added services – The value-added services that you rely on are all still available. Amazon will continue to handle database maintenance and updates for you. You can still leverage High Availability via Multi-AZ. And, if it benefits your application throughput, you can still use read replicas. 3. Keep your RDS administration – Finally, the RDS monitoring and provisioning tools you rely on still work as they did before. With your one large MySQL instance now split into multiple instances, you can actually use less expensive, smaller available RDS hardware and continue to see better database performance. Conclusion Amazon RDS is a tremendous service, but it doesn’t offer solutions to scale beyond a single MySQL instance. Larger RDS instances get more expensive. And when you max out on the available hardware, you’re stuck. Amazon recommends scaling out your single instance into multiple instances for transaction-intensive apps, but offers no services or guidance to help you. This is where ScaleBase comes in to save the day. It gives you a simple and effective way to create multiple MySQL RDS instances, while removing all the complexities typically caused by “DIY” sharding, with no changes to your applications. With ScaleBase you continue to leverage the AWS/RDS ecosystem: commodity hardware and value-added services like read replicas, Multi-AZ, maintenance/updates and administration with monitoring tools and provisioning. SCALEBASE ON AMAZON If you’re curious to try ScaleBase on Amazon, it can be found here – Download NOW. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
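
    To make the “multiple database instances” idea concrete, here is a minimal, generic sketch of the DIY hash-based routing that sharding across several RDS instances implies. It is illustrative only: it is not ScaleBase's mechanism, and the hostnames are hypothetical.

        # Route each key to a fixed shard by hashing it; every query for that
        # key then goes to the same MySQL instance. Hostnames are placeholders.
        import zlib

        SHARDS = [
            "shard0.example.rds.amazonaws.com",
            "shard1.example.rds.amazonaws.com",
            "shard2.example.rds.amazonaws.com",
        ]

        def shard_for(key: str) -> str:
            return SHARDS[zlib.crc32(key.encode("utf-8")) % len(SHARDS)]

        print(shard_for("user:42"))   # always the same host for this key

    Everything the post lists as the hard part of DIY sharding (cross-shard transactions, two-phase commit, rebalancing) is exactly what this toy router does not handle.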

    Read the article

  • SQL Server IO handling mechanism can be severely affected by high CPU usage

    - by sqlworkshops
    Are you using an SSD or a SAN / NAS based storage solution and sporadically observe SQL Server experiencing high IO wait times, or does your DAS / HDD from time to time become very slow according to SQL Server statistics? Read on… I need your help to upvote my connect item – https://connect.microsoft.com/SQLServer/feedback/details/744650/sql-server-io-handling-mechanism-can-be-severely-affected-by-high-cpu-usage. Instead of taking a few seconds, queries could take minutes/hours to complete when the CPU is busy. In SQL Server, when a query / request needs to read data that is not in the data cache, or when the request has to write to disk, like transaction log records, the request / task will queue up the IO operation and wait for it to complete (task in suspended state; this wait time is the resource wait time). When the IO operation is complete, the task will be queued to run on the CPU. If the CPU is busy executing other tasks, this task will wait (task in runnable state) until other tasks in the queue either complete or get suspended due to waits or exhaust their quantum of 4ms (this is the signal wait time, which along with resource wait time will increase the overall wait time). When the CPU becomes free, the task will finally be run on the CPU (task in running state). The signal wait time can be up to 4ms per runnable task; this is by design. So if a CPU has 5 runnable tasks in the queue, then this query, after the resource becomes available, might wait up to a maximum of 5 X 4ms = 20ms in the runnable state (normally less, as other tasks might not use the full quantum). In case the CPU usage is high, let’s say many CPU intensive queries are running on the instance, there is a possibility that the IO operations that are completed at the hardware and operating system level are not yet processed by SQL Server, keeping the task in the resource wait state for longer than necessary. In the case of an SSD, the IO operation might even complete in less than a millisecond, but it might take SQL Server 100s of milliseconds, for instance, to process the completed IO operation. For example, let’s say you have a user inserting 500 rows in individual transactions. When the transaction log is on an SSD or a battery-backed controller that has write cache enabled, all of these inserts will complete in 100 to 200ms. With a CPU intensive parallel query executing across all CPU cores, the same inserts might take minutes to complete. WRITELOG wait time will be very high in this case (both under sys.dm_io_virtual_file_stats and sys.dm_os_wait_stats). In addition you will notice a large number of WRITELOG waits, since log records are written by the LOG WRITER, and hence very high signal_wait_time_ms, leading to more query delays. However, the Performance Monitor counter PhysicalDisk: Avg. Disk sec/Write will report very low latency times. Such delayed IO handling also occurs for read operations, with artificially very high PAGEIOLATCH_SH wait time (with the number of PAGEIOLATCH_SH waits remaining the same). This problem will manifest more and more as customers start using SSD based storage for SQL Server, since they drive the CPU usage to the limits with faster IOs. We have a few workarounds for specific scenarios, but we think Microsoft should resolve this issue at the product level.
    We have a connect item open – https://connect.microsoft.com/SQLServer/feedback/details/744650/sql-server-io-handling-mechanism-can-be-severely-affected-by-high-cpu-usage - (with example scripts) to reproduce this behavior; please upvote the item so the issue will be addressed by the SQL Server product team soon. Thanks for your help and best regards, Ramesh Meyyappan. Home: www.sqlworkshops.com LinkedIn: http://at.linkedin.com/in/rmeyyappan

    Read the article

  • SQL SERVER – Four Posts on Removing the Bookmark Lookup – Key Lookup

    - by pinaldave
    In recent times I have observed that not many people have a proper understanding of what a bookmark lookup or key lookup is. The increasing number of questions tells me that this is something developers are encountering every single day but have no idea how to deal with it. I have previously written three articles on this subject. I want to point all of you looking for further information to those posts. SQL SERVER – Query Optimization – Remove Bookmark Lookup – Remove RID Lookup – Remove Key Lookup SQL SERVER – Query Optimization – Remove Bookmark Lookup – Remove RID Lookup – Remove Key Lookup – Part 2 SQL SERVER – Query Optimization – Remove Bookmark Lookup – Remove RID Lookup – Remove Key Lookup – Part 3 SQL SERVER – Interesting Observation – Execution Plan and Results of Aggregate Concatenation Queries In one of my recent classes we had an in-depth conversation about the alternatives to creating covering indexes to remove the bookmark lookup. I really want to open this question to all of you and see what the community thinks about the same. Is there any other way than creating a covering index or included index to remove this expensive key lookup? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Backup and Restore, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLAuthority News, SQLServer, T SQL, Technology

    Read the article

  • SQL SERVER – Subquery or Join – Various Options – SQL Server Engine Knows the Best – Part 2

    - by pinaldave
    This blog post is part 2 of the earlier article SQL SERVER – Subquery or Join – Various Options – SQL Server Engine knows the Best by Paulo R. Pereira. Paulo has left an excellent comment on the earlier article, once again proving the point that SQL Server Engine is smart enough to figure out the best plan itself and uses the same for the query. Let us go over his comment as he posted it. “I think IN or EXISTS is the best choice, because there is a little difference between ‘Merge Join’ of query with JOIN (Inner Join) and the others options (Left Semi Join), and JOIN can give more results than IN or EXISTS if the relationship is 1:0..N and not 1:0..1. And if I try use NOT IN and NOT EXISTS the query plan is different from LEFT JOIN too (Left Anti Semi Join vs. Left Outer Join + Filter). So, I found a case where EXISTS has a different query plan than IN or ANY/SOME:” USE AdventureWorks GO -- use of SOME SELECT * FROM HumanResources.Employee E WHERE E.EmployeeID = SOME ( SELECT EA.EmployeeID FROM HumanResources.EmployeeAddress EA UNION ALL SELECT EA.EmployeeID FROM HumanResources.EmployeeDepartmentHistory EA ) -- use of IN SELECT * FROM HumanResources.Employee E WHERE E.EmployeeID IN ( SELECT EA.EmployeeID FROM HumanResources.EmployeeAddress EA UNION ALL SELECT EA.EmployeeID FROM HumanResources.EmployeeDepartmentHistory EA ) -- use of EXISTS SELECT * FROM HumanResources.Employee E WHERE EXISTS ( SELECT EA.EmployeeID FROM HumanResources.EmployeeAddress EA UNION ALL SELECT EA.EmployeeID FROM HumanResources.EmployeeDepartmentHistory EA ) When we look into the execution plans of the queries listed above, indeed we do get different plans, and the SQL Server Engine creates the best (least cost) plan for each query. Thanks, Paulo, for your wonderful contribution. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, Readers Contribution, SQL, SQL Authority, SQL Joins, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – Fastest Way to Restore the Database

    - by pinaldave
    A few days ago, I received the following email: “Pinal, We are in an emergency situation. We have a large database of around 80+ GB and its backup is of 50+ GB in size. We need to restore this database ASAP and use it; however, restoring the database takes forever. Do you think a compressed backup would solve our problem? Any other ideas you got?” First of all, the asker has already answered his own question. Yes; I have seen that if you are using a compressed backup, it takes less time to restore a database. I have previously blogged about the same subject. Here are the links to those blog posts: SQL SERVER – Data and Page Compressions – Data Storage and IO Improvement SQL SERVER – 2008 – Introduction to Row Compression SQL SERVER – 2008 – Introduction to New Feature of Backup Compression However, if your database is so large that it still takes a few minutes to restore even though you use any of the features listed above, then it will really take some time to restore the database. If there is urgency and there is no time you can spare for restoring the database, then you can use the wonderful tool developed by Idera called virtual database. This tool restores a database in just a few seconds, so it will readily be available for use. I have written in depth about my experience with this tool in the article here: SQL SERVER – Retrieve and Explore Database Backup without Restoring Database – Idera virtual database. Let me know your experience in this scenario. Have you ever needed your database backup restored very quickly? What did you do in that scenario? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, Readers Question, SQL, SQL Authority, SQL Backup and Restore, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • OBIEE 11.1.1 - Disable Wrap Data Types in WebLogic Server 10.3.x

    - by Ahmed Awan
    By default, JDBC data type objects are wrapped with a WebLogic wrapper. This allows features like debugging output and connection usage tracking to be done by the server. The wrapping can be turned off by setting this value to false. This improves performance, in some cases significantly, and allows the application to use the native driver objects directly. Tip: How to Disable Wrapping in WLS Administration Console You can use the Administration Console to disable data type wrapping for the following JDBC data sources in the bifoundation_domain domain: Data Source Name: bip_datasource, mds-owsm, EPMSystemRegistry. To disable wrapping for each JDBC data source (as listed above): 1. If you have not already done so, in the Change Center of the Administration Console, click Lock & Edit. 2. In the Domain Structure tree, expand Services, then select Data Sources. 3. On the Summary of Data Sources page, click the data source name, for example “mds-owsm”. 4. Select the Configuration: Connection Pool tab. 5. Scroll down and click Advanced to show the advanced connection pool options. 6. In Wrap Data Types, deselect the checkbox to disable wrapping. 7. Click Save. 8. To activate these changes, in the Change Center of the Administration Console, click Activate Changes. Important Note: This change does not take effect immediately; it requires the server to be restarted.

    Read the article

  • SQL SERVER – Find Most Expensive Queries Using DMV

    - by pinaldave
    The title of this post says all I need to express for this quick blog post. In a recent query tuning consultation project, I was asked if I could share the script I use to figure out which are the most expensive queries running on SQL Server. This script is very basic and very simple; there are many different versions available online. This basic script does do the job I expect it to do – find the most expensive queries on a SQL Server box. SELECT TOP 10 SUBSTRING(qt.TEXT, (qs.statement_start_offset/2)+1, ((CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(qt.TEXT) ELSE qs.statement_end_offset END - qs.statement_start_offset)/2)+1), qs.execution_count, qs.total_logical_reads, qs.last_logical_reads, qs.total_logical_writes, qs.last_logical_writes, qs.total_worker_time, qs.last_worker_time, qs.total_elapsed_time/1000000 total_elapsed_time_in_S, qs.last_elapsed_time/1000000 last_elapsed_time_in_S, qs.last_execution_time, qp.query_plan FROM sys.dm_exec_query_stats qs CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp ORDER BY qs.total_logical_reads DESC -- logical reads -- ORDER BY qs.total_logical_writes DESC -- logical writes -- ORDER BY qs.total_worker_time DESC -- CPU time You can change the ORDER BY clause to order the results by different parameters. I invite my readers to share their scripts. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLAuthority News, SQLServer, T SQL, Technology Tagged: SQL DMV

    Read the article

  • SQLAuthority News – 18 Seconds of Fame – My PASS Experience

    - by pinaldave
    Happy Holidays to All of YOU! Life is full of little happy surprises. I think Christmas and Santa are based on that. I just received a very interesting email earlier today; I had no idea about it. Earlier this year, I visited Seattle to attend SQLPASS – read the complete summary over here: SQLAuthority News – SQLPASS Nov 8-11, 2010-Seattle – An Alternative Look at Experience. While I was walking down, someone stopped me and asked if they could talk to me for 15 seconds. I said yes, and they shot a quick movie with a mobile. The conversation was very quick and I had forgotten about it. Today I received an email from one of the blog readers about it being on YouTube. Honestly, I did not know if this was ever going to be on YouTube. I am surprised and thrilled. Watch my 18 seconds of fame movie. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: About Me, Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, T SQL, Technology

    Read the article

  • Starting a Java activity in Unity3d Android

    - by Matthew Pavlinsky
    I wrote a small Java activity extension of UnityPlayerActivity similar to what is described in the Unity docs. It has a method for displaying a song picking interface using an ACTION_GET_CONTENT intent. I start this activity using startActivityForResult(), and it absolutely kills the performance of my Unity game when it is finished; it drops to about 0.1 FPS afterwards. I've removed the onActivityResult function and even tried starting the activity from inside an onKeyDown event in Java to make sure my method of starting the activity from Unity was not the problem. Here's the code in a basic sense: package com.company.product; import com.unity3d.player.UnityPlayerActivity; import com.unity3d.player.UnityPlayer; import android.os.Bundle; import android.util.Log; import android.content.Intent; public class SongPickerActivity extends UnityPlayerActivity { private Intent myIntent; final static int PICK_SONG = 1; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); Log.i("SongPickerActivity", "OnCreate"); myIntent = new Intent(Intent.ACTION_GET_CONTENT); myIntent.setType("audio/*"); } public void Pick() { Log.i("SongPickerActivity", "Pick"); startActivityForResult(myIntent, PICK_SONG); } @Override protected void onActivityResult(int requestCode, int resultCode, Intent data) { super.onActivityResult(requestCode, resultCode, data); } } This is causing me a bit more of a headache than it should, and I would be thankful for any sort of advice. Does anyone have any experience with using custom activities in Unity Android, or any insight on why this is happening or how to resolve this?

    Read the article

  • Python — Time complexity of built-in functions versus manually-built functions in finite fields

    - by stackuser
    Generally, I'm wondering about the advantages versus disadvantages of using the built-in arithmetic functions versus rolling your own in Python. Specifically, I'm taking in GF(2) finite field polynomials in string format, converting them to base 2 values, performing arithmetic, then outputting them back as polynomials in string format. So a small example of this is in multiplication: Rolling my own: def multiply(a,b): bitsa = reversed("{0:b}".format(a)) g = [(b<<i)*int(bit) for i,bit in enumerate(bitsa)] return reduce(lambda x,y: x+y,g) Versus the built-in: def multiply(a,b): # a,b are GF(2) polynomials in binary form .... return a*b #returns product of 2 polynomials in gf2 Currently, operations like multiplicative inverse (with, for example, 20-bit exponents) take a long time to run in my program as it's using all of Python's built-in mathematical operations like // floor division and % modulus, etc. as opposed to making my own division, remainder, etc. I'm wondering how much of a gain in efficiency and performance I can get by building these manually (as shown above). I realize the gains are dependent on how well the manual versions are built; that's not the question. I'd like to find out 'basically' how much advantage there is over the built-ins. So for instance, if multiplication (as in the example above) is well-suited for base 10 (decimal) arithmetic but has to jump through more hoops to change bases to binary and then even more hoops in operating (so it's lower efficiency), that's what I'm wondering. Like, I'm wondering if it's possible to bring the time down significantly by building them myself in ways that maybe some professionals here have already come across.
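
    For contrast with the asker's list-comprehension version, here is a minimal sketch of carry-less GF(2)[x] multiplication. The key point is that coefficient addition in GF(2) is XOR, so partial products are combined with ^ rather than +; this is an illustrative rewrite, not the asker's code.

        # Carry-less polynomial multiplication over GF(2): shift-and-XOR.
        def gf2_multiply(a, b):
            result = 0
            while a:
                if a & 1:           # current coefficient of a is 1
                    result ^= b     # GF(2) addition is XOR
                a >>= 1
                b <<= 1
            return result

        # (x + 1) * (x + 1) = x^2 + 1 over GF(2), i.e. 0b11 * 0b11 = 0b101
        assert gf2_multiply(0b11, 0b11) == 0b101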

    Read the article

  • SQL SERVER – Plan Cache and Data Cache in Memory

    - by pinaldave
    I get the following question almost all the time when I go for consultations or training. I often end up providing the scripts to my clients and attendees. Instead of writing a new blog post, today in this single blog post I am going to cover both scripts and link to the original blog posts where I have discussed them. Plan Cache in Memory USE AdventureWorks GO SELECT [text], cp.size_in_bytes, plan_handle FROM sys.dm_exec_cached_plans AS cp CROSS APPLY sys.dm_exec_sql_text(plan_handle) WHERE cp.cacheobjtype = N'Compiled Plan' ORDER BY cp.size_in_bytes DESC GO Further explanation of this script is over here: SQL SERVER – Plan Cache – Retrieve and Remove – A Simple Script Data Cache in Memory USE AdventureWorks GO SELECT COUNT(*) AS cached_pages_count, name AS BaseTableName, IndexName, IndexTypeDesc FROM sys.dm_os_buffer_descriptors AS bd INNER JOIN ( SELECT s_obj.name, s_obj.index_id, s_obj.allocation_unit_id, s_obj.OBJECT_ID, i.name IndexName, i.type_desc IndexTypeDesc FROM ( SELECT OBJECT_NAME(OBJECT_ID) AS name, index_id ,allocation_unit_id, OBJECT_ID FROM sys.allocation_units AS au INNER JOIN sys.partitions AS p ON au.container_id = p.hobt_id AND (au.TYPE = 1 OR au.TYPE = 3) UNION ALL SELECT OBJECT_NAME(OBJECT_ID) AS name, index_id, allocation_unit_id, OBJECT_ID FROM sys.allocation_units AS au INNER JOIN sys.partitions AS p ON au.container_id = p.partition_id AND au.TYPE = 2 ) AS s_obj LEFT JOIN sys.indexes i ON i.index_id = s_obj.index_id AND i.OBJECT_ID = s_obj.OBJECT_ID ) AS obj ON bd.allocation_unit_id = obj.allocation_unit_id WHERE database_id = DB_ID() GROUP BY name, index_id, IndexName, IndexTypeDesc ORDER BY cached_pages_count DESC; GO Further explanation of this script is over here: SQL SERVER – Get Query Plan Along with Query Text and Execution Count Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL Tagged: SQL Memory

    Read the article
