Search Results

Search found 14475 results on 579 pages for 'veritas storage manager'.

Page 59/579 | < Previous Page | 55 56 57 58 59 60 61 62 63 64 65 66  | Next Page >

  • josso newbie setup problems - can't use tomcat's manager page

    - by opensas
    I'm trying to set up JOSSO on an Apache Tomcat server running on Windows. I installed Apache Tomcat/6.0.26 from the zip file to c:\tomcat, then installed JOSSO following the documentation at http://www.josso.org/confluence/display/JOSSO1/Quick+Start. I started Tomcat with c:\tomcat\bin\startup.bat and noticed the following warnings:

    WARNING: [SetPropertiesRule]{Server/Service/Engine/Realm} Setting property 'debug' to '1' did not find a matching property.
    21/03/2010 15:55:03 org.apache.tomcat.util.digester.SetPropertiesRule begin
    WARNING: [SetPropertiesRule]{Server/Service/Engine/Host/Valve} Setting property 'appName' to 'josso' did not find a matching property.
    ...
    WARNING: Unable to find required classes (javax.activation.DataHandler and javax.mail.internet.MimeMultipart). Attachment support is disabled.
    ...
    WARNING: Bean with key 'josso:type=SSOAuditManager' has been registered as an MBean but has no exposed attributes or operations
    ...

    After that everything seems to work fine. The problem is I can no longer access http://localhost:8080/manager/html using the user tomcat/tomcat configured in \conf\tomcat-users.xml (before installing JOSSO it worked). I tried tomcat/tomcatpwd as defined in \lib\josso-credentials.xml, and even added the tomcat user and the manager role to \lib\josso-users.xml, with no luck... Is anybody having the same problem? How can I access Tomcat's manager page? Thanks a lot. Regards, sas.

    This is my config:

    C:\tomcat\bin>catalina version
    Using CATALINA_BASE: "C:\tomcat"
    Using CATALINA_HOME: "C:\tomcat"
    Using CATALINA_TMPDIR: "C:\tomcat\temp"
    Using JRE_HOME: "c:\java"
    Using CLASSPATH: "C:\tomcat\bin\bootstrap.jar"
    Server version: Apache Tomcat/6.0.26
    Server built: March 9 2010 1805
    Server number: 6.0.26.0
    OS Name: Windows XP
    OS Version: 5.1
    Architecture: x86
    JVM Version: 1.5.0_22-b03
    JVM Vendor: Sun Microsystems Inc

    PS: moreover, when shutting down I get a couple of errors like this:

    SEVERE: A web application appears to have started a thread named [JOSSOAssertionMonitor] but has failed to stop it. This is very likely to create a memory leak.
    21/03/2010 15:57:06 org.apache.catalina.loader.WebappClassLoader clearReferencesThreads

    and then Tomcat's shutdown freezes at

    21/03/2010 15:57:07 org.apache.coyote.ajp.AjpAprProtocol destroy
    INFO: Stopping Coyote AJP/1.3 on ajp-8009

    PS: sorry for this lengthy question...

    Read the article

  • Windows-mobile app won't run after being closed by Task Manager

    - by pithyless
    I've inherited some windows-mobile code that I've been bringing up-to-date. I've come across a weird bug, and I was hoping that even though a bit vague, maybe it will spark someone's memory: Running the app (which is basically a glorified Forms app with P/Invoke gps code), I switch to the task manager, and close the app via End Task. Seems to exit fine (no errors and disappears from Task Manager). Unfortunately, the app refuses to start a second time until I reboot the phone or reinstall the CAB. What's worse: this bug is reproducible on a HTC Diamond, but works fine (ie. can run again after EndTask) on an HTC HD2. The only thing I can think of is some kind of timing race between a Dispose() and the Task Manager. Any ideas? I'm also thinking of a workaround - I do have a working "Exit Application" routine that correctly cleans up the app; can I catch the EndTask event in the c# code in order to complete a proper cleanup? Maybe I'm just missing the pain point... all ideas welcome :)
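
    One direction worth trying, as a sketch rather than a verified fix: if the Diamond's task manager ends the app by sending WM_CLOSE to the main window (rather than killing the process outright), the form's Closing override fires, and the existing "Exit Application" routine can be reused there. CleanUpGps() below is a hypothetical placeholder for that routine.

    using System;
    using System.ComponentModel;
    using System.Windows.Forms;

    // Hypothetical main form; CleanUpGps() stands in for the existing
    // "Exit Application" cleanup routine mentioned above.
    public class MainForm : Form
    {
        protected override void OnClosing(CancelEventArgs e)
        {
            // Runs when the form closes normally and also when a task manager
            // ends the app by sending WM_CLOSE to the main window.
            CleanUpGps();
            base.OnClosing(e);
        }

        private void CleanUpGps()
        {
            // Dispose P/Invoked GPS handles, stop worker threads, release any
            // named mutex used for single-instance checks, etc.
        }
    }

    If the Diamond's task manager instead kills the process, no managed event fires at all, which would point the investigation at state left behind outside the process (a named mutex, an open GPS driver handle) rather than at Dispose() timing.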

    Read the article

  • JPA - Performance with using multiple entity manager

    - by Nguyen Tuan Linh
    My situation is: the code is not mine. I have two kinds of database: one is Dad, one is Son. In Dad, I have a table that stores JNDI names. I look up Dad using JNDI, create an entity manager, and retrieve this table. From these retrieved JNDI names, I create multiple entity managers against multiple Son databases. The problem is: the Son databases have thousands of entities. It takes each Son database around 10 minutes to load all entities; if there are 4 Son databases, that is 40 minutes. My question: is there any way to load all entities once and use them for all the entity managers? Please look at the code below. For each Son JNDI:

    Map<String, String> puSonProperties = new HashMap<String, String>();
    puSonProperties.put("javax.persistence.jtaDataSource", sonJndi);
    EntityManagerFactory emf = Persistence.createEntityManagerFactory("PUSon", puSonProperties);

    PUSon - all of them use the same persistence unit.

    log.info("Verify entity manager for son: {0} - {1}", sonCode, emSon.find(Son_configuration.class, 0) != null ? "ok" : "failed!");

    This last line is the actual code where the loading of all entities begins. 10 minutes.

    Read the article

  • Integrating HP Systems Insight Manager into an existing environment

    - by ewwhite
    I'm working with an environment that spans multiple data centers/sites and consists primarily of HP ProLiant servers (G5-G7) running Linux. The mix is 30% RHEL/CentOS, the rest are Gentoo :(. I also have a few dozen virtual machines running back-office and Windows servers on VMWare ESX hosts. I run OpenNMS to pull SNMP data from the various server nodes and networking devices. While OpenNMS works wonderfully for up/down, thresholds and notifications, its native handling of traps is a little rough and the graphs are not particularly pretty. I use Orca/RRD graphs for performance trending and nice graphs.

    I'm tasked with inventorying the environment and wanted to come up with a clean way to organize server information. Since my environment is mostly HP, I've been playing with HP Systems Insight Manager as a way to extract server data and to deploy HP health/monitoring packages and firmware. The Gentoo systems eventually have to be converted to CentOS, so getting a quick assessment of what hardware is where would be great. Although I've read through a few hundred pages of HP manuals, I'm having a difficult time understanding how to get HP SIM to do what I want. My main problems are:

    - I have about 40 subnets to deal with; 98% connected with private lines to facilities across the globe. I don't want to initiate an HP SIM discovery only to pull back every piece of intermediate networking hardware and equipment from all of the locations. I'd like this to focus on the servers.
    - I have OpenNMS configured to accept traps. I don't want HP SIM to duplicate that effort. It seems like the built-in software deployment tool wants to overwrite the trapsink parameters for the systems it encounters during discovery.
    - I have about 10 administrative username/password combinations in use across this infrastructure. Is there a more efficient way to get HP SIM to do the discovery, or to break discovery into manageable chunks?
    - In terms of general workflow, do people typically install the HP Management Agents during the initial OS deployment (e.g. kickstart post script) or afterwards from HP SIM?
    - Is HP SIM too thick/fat to be an inventory tool? I can't tell if it's meant to be used standalone or alongside other monitoring products.
    - Since the majority of the systems I'm trying to track are those running Gentoo (in order to plan the move to CentOS), is there any way for HP SIM to extract system model information from them (like dmidecode)?
    - I have systems here where I may have an SSH key established, but not direct user or login access. Is there a way for me to import an SSH private/public key pair into HP SIM to reach out to the servers that can't accept standard credentials?
    - There are a handful of sites where I have inconsistent access or have a double-NAT situation. I may be able to poke a server, but it may not be able to find its way back to the management system. Is there a workaround for this?
    - The certificate configuration for HP SIM seems complicated. What is the preferred setup for trust between systems?

    I'd also appreciate any notes or recommendations on using this product. Or if there's a better way to do this, I'd like to know.

    Read the article

  • 404 redirect with cloud storage

    - by Jeremy DeGroot
    I'm hoping to reach someone with some experience using a service like Amazon's S3 with this question. On my site we have a dedicated image server. And on this server, we have an automatic 404 redirect through Apache so that, if a user tries to access an image that doesn't exist, they'll see a snazzy "Image Not Available" image. We're looking to move the hosting of these images to a cloud storage solution (S3 or Rackspace's CloudFiles), and I'm wondering if anyone's had any success replicating this behavior on a cloud storage service and if so how they did it.
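
    One provider-neutral way to get this behavior, sketched below as an ASP.NET handler kept on the site's own domain: probe the cloud object with an HTTP HEAD and redirect to either the object or the placeholder image. The bucket URL, query-string parameter and placeholder path are made-up examples, and the extra HEAD per request is a trade-off (its results could be cached).

    using System;
    using System.Net;
    using System.Web;

    // Sketch of a thin "image router" handler; it decides between the cloud
    // object and the local fallback image instead of relying on the storage
    // service to serve a custom 404 page.
    public class ImageHandler : IHttpHandler
    {
        private const string BucketUrl = "http://images.example.com.s3.amazonaws.com/";
        private const string FallbackImage = "/static/image-not-available.png";

        public bool IsReusable { get { return true; } }

        public void ProcessRequest(HttpContext context)
        {
            string key = context.Request.QueryString["key"];
            string target = BucketUrl + HttpUtility.UrlEncode(key);

            var probe = (HttpWebRequest)WebRequest.Create(target);
            probe.Method = "HEAD"; // cheap existence check, no body transferred
            try
            {
                using (probe.GetResponse()) { }
                context.Response.Redirect(target, false); // object exists
            }
            catch (WebException)
            {
                context.Response.Redirect(FallbackImage, false); // 404/403 etc.
            }
        }
    }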

    Read the article

  • MySQL index cardinality - performance vs storage efficiency

    - by Sean
    Say you have a MySQL 5.0 MyISAM table with 100 million rows, with one index (other than the primary key) on two integer columns. From my admittedly poor understanding of B-tree structure, I believe that a lower cardinality means the storage efficiency of the index is better, because there are fewer parent nodes. Whereas a higher cardinality means less efficient storage, but faster read performance, because it has to navigate through fewer branches to get to whatever data it is looking for to narrow down the rows for the query. (Note - by "low" vs "high", I don't mean e.g. 1 million vs 99 million for a 100 million row table. I mean more like 90 million vs 95 million.) Is my understanding correct? Related question - how does cardinality affect write performance?

    Read the article

  • WPF: isolated storage file path too long

    - by user342961
    Hi, I'm deploying my WPF app with ClickOnce. When developing locally in Visual Studio, I store files in isolated storage by calling IsolatedStorageFile.GetUserStoreForDomain(). This works just fine and the generated path is C:\Users\Frederik\AppData\Local\IsolatedStorage\phqduaro.crw\hux3pljr.cnx\StrongName.kkulk3wafjkvclxpwvxmpvslqqwckuh0\Publisher.ui0lr4tpq53mz2v2c0uqx21xze0w22gq\Files\FilerefData\-581750116 (189 chars). But when I deploy my app with ClickOnce, the generated path becomes too long, resulting in a DirectoryNotFoundException when creating the isolated storage directory. The generated path with ClickOnce is: C:\Users\Frederik\AppData\Local\Apps\2.0\Data\OQ0LNXJT.R5V\8539ABHC.ODN\exqu..tion_e07264ceafd7486e_0001.0000_b8f01b38216164a0\Data\StrongName.wy0cojdd3mpvq45404l3gxdklugoanvi\Publisher.ui0lr4tpq53mz2v2c0uqx21xze0w22gq\Files\FilerefData\-581750116 (247 chars). When I browse the folders, all but the last directory of the path exist. When I then try to create a folder at this location, Windows tells me I can't create a directory because the resulting path name will be too long. How can I shorten the path generated by isolated storage?
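
    The ClickOnce-generated prefix itself cannot be shortened, so one common workaround, sketched below with hypothetical helper names, is to keep the application-controlled suffix as short as possible: flatten the Files\FilerefData\... hierarchy into single short file names at the store root, for example by hashing the logical path.

    using System;
    using System.IO;
    using System.IO.IsolatedStorage;
    using System.Security.Cryptography;
    using System.Text;

    // Sketch: map a long logical path (e.g. "FilerefData/-581750116") to one
    // short, stable file name directly under the isolated storage root.
    static class ShortIsoStore
    {
        private static string ShortName(string logicalPath)
        {
            using (var sha1 = SHA1.Create())
            {
                byte[] hash = sha1.ComputeHash(Encoding.UTF8.GetBytes(logicalPath));
                // 12 hex characters keeps the on-disk name short but stable per logical path
                return BitConverter.ToString(hash, 0, 6).Replace("-", "") + ".dat";
            }
        }

        public static void Write(string logicalPath, byte[] data)
        {
            using (var store = IsolatedStorageFile.GetUserStoreForDomain())
            using (var stream = new IsolatedStorageFileStream(ShortName(logicalPath), FileMode.Create, store))
            {
                stream.Write(data, 0, data.Length);
            }
        }
    }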

    Read the article

  • Zend_Auth / Zend_Session error and storing objects in Auth Storage

    - by Martin
    Hi All, I have been having a bit of a problem with Zend_Auth and keep getting an error within my Acl. Within my Login Controller I set up my Zend_Auth storage as follows:

    $auth = Zend_Auth::getInstance();
    $result = $auth->authenticate($adapter);
    if ($result->isValid()) {
        $userId = $adapter->getResultRowObject(array('user_id'), null)->user_id;
        $user = new User_Model_User;
        $users = new User_Model_UserMapper;
        $users->find($userId, $user);
        $auth->getStorage()->write($user);
    }

    This seems to work well and I am able to use the object stored in the Zend_Auth storage within View Helpers without any problems. The problem is when I try to use this within my Acl (a snippet from my Acl is below): as soon as it gets to the if($auth->hasIdentity()) { line I get the exception detailed further down. The $user->getUserLevel() call is a method within the User Model that allows me to convert the user_level_id stored in the database to a meaningful name. I am assuming that the autoloader sees these kinds of methods and tries to load all the classes that would be required. Looking at the exception, it appears to be struggling to find the class as it is stored in a module; I have the autoloader namespace set up in my application.ini. Could anyone help with resolving this?

    class App_Controller_Plugin_Acl extends Zend_Controller_Plugin_Abstract
    {
        protected $_roleName;

        public function __construct()
        {
            $auth = Zend_Auth::getInstance();
            if($auth->hasIdentity()) {
                $user = $auth->getIdentity();
                $this->_roleName = strtolower($user->getUserLevel());
            } else {
                $this->_roleName = 'guest';
            }
        }
    }

    Fatal error: Uncaught exception 'Zend_Session_Exception' with message 'Zend_Session::start() - \Web\library\Zend\Loader.php(Line:146): Error #2 include_once(): Failed opening 'Menu\Model\UserLevel.php' for inclusion (include_path='\Web\application/../library;\Web\library;.;C:\php5\pear') Array' in \Web\library\Zend\Session.php:493
    Stack trace:
    #0 \Web\library\Zend\Session\Namespace.php(143): Zend_Session::start(true)
    #1 \Web\library\Zend\Auth\Storage\Session.php(87): Zend_Session_Namespace->__construct('Zend_Auth')
    #2 \Web\library\Zend\Auth.php(91): Zend_Auth_Storage_Session->__construct()
    #3 \Web\library\Zend\A in \Web\library\Zend\Session.php on line 493

    Thanks, Martin

    Read the article

  • Convert ASP.NET membership system to secure password storage

    - by wrburgess
    I have a potential client that set up their website and membership system in ASP.NET 3.5. When their developer set up the system, it seems he turned off the security/hashing aspect of password storage and everything is stored in the clear. Is there a process to reinstall/change the secure password storage of ASP.NET membership without changing all of the passwords in the database? The client is worried that they'll lose their customers if they all have to go through a massive password change. I've always installed with security on by default, thus I don't know the effect of a switchover. Is there a way to convert the entire system to a secure password system without major effects on the users?
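
    Since the clear-text passwords are still readable, one migration path that keeps every user's existing password is to temporarily declare two membership providers in web.config and copy each account across, then point the site at the hashed provider once logins have been verified. The sketch below assumes two SqlMembershipProvider entries whose names ("ClearProvider", "HashedProvider") are placeholders: the first with passwordFormat="Clear" and enablePasswordRetrieval="true" over the existing data, the second with passwordFormat="Hashed" and a fresh applicationName.

    using System.Web.Security;

    // One-off migration sketch; provider names are placeholders for whatever
    // is declared in web.config.
    public static class MembershipMigration
    {
        public static void CopyUsersToHashedProvider()
        {
            MembershipProvider clear = Membership.Providers["ClearProvider"];
            MembershipProvider hashed = Membership.Providers["HashedProvider"];

            int total;
            foreach (MembershipUser user in clear.GetAllUsers(0, int.MaxValue, out total))
            {
                // Readable only because the current format is "Clear".
                string password = clear.GetPassword(user.UserName, null);

                MembershipCreateStatus status;
                hashed.CreateUser(user.UserName, password, user.Email,
                                  null, null, user.IsApproved, null, out status);
                // Inspect 'status' and log failures (duplicate names, passwords
                // rejected by the new provider's strength rules, ...).
            }
        }
    }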

    Read the article

  • Silverlight Isolated Storage and loading big files

    - by Thomas Joulin
    In a Windows Phone 7 application, I would like to query a big XML file (a list of cities) stored using isolated storage. If I do it this way, will the whole file be loaded into memory (5 MB)? If so, what other solution do I have? Edit: more details. I want to use AutoCompleteBox (http://www.jeff.wilcox.name/2008/10/introducing-autocompletebox/), but instead of using a web service (this is fixed data, no need to be online), I want to query a file/database/isolated storage... I have a fixed list of cities. I said in the comments it's 40k, but it finally seems closer to 1k rows.
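
    With roughly 1k rows, one workable pattern is to stream just the city names out of the XML with XmlReader, so only the resulting strings are held in memory rather than a full DOM, and bind the list to AutoCompleteBox.ItemsSource. A sketch, where the file name and the <City Name="..."/> shape are assumptions:

    using System.Collections.Generic;
    using System.IO;
    using System.IO.IsolatedStorage;
    using System.Xml;

    // Sketch: assumes a file "cities.xml" already sits in isolated storage and
    // contains elements like <City Name="Paris" />. The XML is streamed, never
    // fully loaded; only the name strings are kept.
    public static class CityStore
    {
        public static List<string> LoadCityNames()
        {
            var names = new List<string>();
            using (IsolatedStorageFile store = IsolatedStorageFile.GetUserStoreForApplication())
            using (IsolatedStorageFileStream stream = store.OpenFile("cities.xml", FileMode.Open, FileAccess.Read))
            using (XmlReader reader = XmlReader.Create(stream))
            {
                while (reader.ReadToFollowing("City"))
                {
                    names.Add(reader.GetAttribute("Name"));
                }
            }
            return names; // e.g. autoCompleteBox.ItemsSource = CityStore.LoadCityNames();
        }
    }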

    Read the article

  • Moving webshop storage to NoSQL solution

    - by mare
    If you had a webshop solution based on a SQL Server relational DB, what would be the reasons, if any, to move to NoSQL storage? Does it even make sense to migrate datastores that rely heavily on relations to NoSQL? If starting from scratch, would you choose a NoSQL solution over a relational one for a webshop project, which will, after a while, again end up with a bunch of tables like Articles, Classifications, TaxRates, Pricelists etc. and a magnitude of relations between them? What's the support like in .NET (4.0) for MongoDB, or MongoDB's support for .NET 4.0? Can I count on rich code generation tools similar to the EF wizard, L2SQL wizard etc. for MongoDB? Because from what I have read so far, NoSQL stores are mostly suited for document storage and simpler object models. Your answer to this question will help me make the right infrastructure design decisions.

    Read the article

  • MySQL storage engine dilemma

    - by burntblark
    There are two MySQL features that I want to use in my application: FULL-TEXT SEARCH and TRANSACTIONS. The dilemma is that I cannot get both features in one storage engine. Either I use MyISAM (which has full-text search) or I use InnoDB (which supports transactions); I can't have both. My question is: is there any way I can have both features in my application before I am forced to make a choice between the two storage engines?

    Read the article

  • Rapid Repository – Silverlight Development

    - by SeanMcAlinden
    Hi All, One of the questions I was recently asked was whether the Rapid Repository would work for normal Silverlight development as well as for the Windows 7 Phone. I can confirm that the current code in the trunk will definitely work for both the Windows 7 Phone and normal Silverlight development. I haven’t tested V.1.0 for compatibility but V2.0 which will be released fairly soon will work absolutely fine.   Kind Regards, Sean McAlinden.

    Read the article

  • Windows 7 Phone Database Rapid Repository – V2.0 Beta Released

    - by SeanMcAlinden
    Hi All, A V2.0 beta has been released for the Windows 7 Phone database Rapid Repository; it can be downloaded at the following: http://rapidrepository.codeplex.com/ Along with the new View feature, which greatly enhances querying and performance, various bugs have been fixed, including a more serious bug with the caching that caused the GetAll() method to sometimes return inconsistent results (I’m a little bit embarrassed by this bug). If you are currently using V1.0 in development, I would recommend swapping in the beta immediately. A full release will be available very shortly; I just need a few more days of testing and some input from other users/testers.   *Breaking Changes* The only real change is that the RapidContext has moved under the main RapidRepository namespace. Various internal methods have actually been made ‘internal’ and replaced with a more friendly API (I imagine not many users will notice this change). Hope you like it. Kind Regards, Sean McAlinden

    Read the article

  • Windows 7 Phone Database – Querying with Views and Filters

    - by SeanMcAlinden
    I’ve just added a feature to Rapid Repository to greatly improve how the Windows 7 Phone Database is queried for performance (this is in the trunk, not in release V1.0). The main concept behind it is to create a View Model class which has only the minimum data you need for a page. This View Model is then stored and retrieved rather than the whole list of entities. Another feature of the views is that they can be pre-filtered to improve performance even further when querying. You can download the source from the Microsoft Codeplex site http://rapidrepository.codeplex.com/.

    Setting up a view

    Let's say you have an entity that stores lots of data about a game result, for example:

    GameScore entity

    public class GameScore : IRapidEntity
    {
        public Guid Id { get; set; }
        public string GamerId { get; set; }
        public string Name { get; set; }
        public Double Score { get; set; }
        public Byte[] ThumbnailAvatar { get; set; }
        public DateTime DateAdded { get; set; }
    }

    On your page you want to display a list of scores, but you only want to display the score and the date added, so you create a View Model for displaying just those properties.

    GameScoreView

    public class GameScoreView : IRapidView
    {
        public Guid Id { get; set; }
        public Double Score { get; set; }
        public DateTime DateAdded { get; set; }
    }

    Now you have the view model, the first thing to do is set up the view at application start up. This is done using the following syntax.

    View Setup

    public MainPage()
    {
        RapidRepository<GameScore>.AddView<GameScoreView>(x => new GameScoreView { DateAdded = x.DateAdded, Score = x.Score });
    }

    As you can see, using a little bit of lambda syntax, you put in the code for constructing a single view; this is used internally for mapping an entity to a view. *Note* you do not need to map the Id property, this is done automatically; a view model id will always be the same as its corresponding entity.

    Adding Filters

    One of the cool features of the view is that you can add filters to limit the amount of data stored in the view, which will dramatically improve performance. You can add multiple filters using the fluent syntax if required. In this example, let's say that you will only ever show the scores for the last 10 days; you could add a filter like the following:

    Add single filter

    public MainPage()
    {
        RapidRepository<GameScore>.AddView<GameScoreView>(x => new GameScoreView { DateAdded = x.DateAdded, Score = x.Score })
            .AddFilter(x => x.DateAdded > DateTime.Now.AddDays(-10));
    }

    If you wanted to further limit the data, you could also say only scores above 100:

    Add multiple filters

    public MainPage()
    {
        RapidRepository<GameScore>.AddView<GameScoreView>(x => new GameScoreView { DateAdded = x.DateAdded, Score = x.Score })
            .AddFilter(x => x.DateAdded > DateTime.Now.AddDays(-10))
            .AddFilter(x => x.Score > 100);
    }

    Querying the view model

    So the important part is how to query the data. This is done using the repository; there is a method called Query which accepts the type of view as a generic parameter (you can have multiple View Model types per entity type). You can either use the result of the query method directly or perform further querying on the result if required.

    Querying the View

    public void DisplayScores()
    {
        RapidRepository<GameScore> repository = new RapidRepository<GameScore>();
        List<GameScoreView> scores = repository.Query<GameScoreView>();

        // display logic
    }

    Further Filtering

    public void TodaysScores()
    {
        RapidRepository<GameScore> repository = new RapidRepository<GameScore>();
        List<GameScoreView> todaysScores = repository.Query<GameScoreView>().Where(x => x.DateAdded > DateTime.Now.AddDays(-1)).ToList();

        // display logic
    }

    Retrieving the actual entity

    Retrieving the actual entity can be done easily by using the GetById method on the repository. Say, for example, you allow the user to click on a specific score to get further information: you can take the Id populated in the returned View Model (GameScoreView) and use it directly on the repository to retrieve the full entity.

    Get Full Entity

    public void GetFullEntity(Guid gameScoreViewId)
    {
        RapidRepository<GameScore> repository = new RapidRepository<GameScore>();
        GameScore fullEntity = repository.GetById(gameScoreViewId);

        // display logic
    }

    Synchronising The View

    If you are upgrading from Rapid Repository V1.0 and are likely to have data in the repository already, you will need to perform a synchronisation to ensure the views and entities are fully in sync. You can either do this as a one-off during the application upgrade or, if you are a little more cautious, you could run this at each application start up.

    Synchronise the view

    public void MyUpgradeTasks()
    {
        RapidRepository<GameScore>.SynchroniseView<GameScoreView>();
    }

    It’s worth noting that in normal operation the view keeps itself in sync with the entities, so this is only really required if you are upgrading from V1.0 to V2.0 when it gets released shortly.

    Summary

    I really hope you like this feature; it will be great for performance and I believe it supports good practice by promoting the use of View Models for specific pages. I’m hoping to produce a beta for this over the next few days; I just want to add some more tests and hopefully iron out any bugs. I would really appreciate any thoughts on this feature and would really love to know of any bugs you find. You can download the source from the following: http://rapidrepository.codeplex.com/ Kind Regards, Sean McAlinden.

    Read the article

  • Networking is disabled after installing Maverick

    - by Zifre
    I recently installed Ubuntu 10.10 (Maverick Meerkat). Everything was working fine. Then I just started up the computer again, and the networking doesn't work. The network manager applet says "Networking disabled". The button is disabled, so I can't enable it. This question seems to be basically the same issue I have: the managed option was set to false, but changing it to true does not fix the problem. Is there any other way to fix this problem?

    Read the article

  • SQLAuthority News – SQL Server Technical Article – The Data Loading Performance Guide

    - by pinaldave
    The white paper describes load strategies for achieving high-speed data modifications of a Microsoft SQL Server database. “Bulk Load Methods” and “Other Minimally Logged and Metadata Operations” provide an overview of two key and interrelated concepts for high-speed data loading: bulk loading and metadata operations. After this background, the white paper describes how these methods can be [...]

    Read the article

  • Trace Flag 610 – When should you use it?

    - by simonsabin
    Thanks to Marcel van der Holst for providing this great information on the use of Trace Flag 610. This trace flag can be used to get minimal logging into a B-tree (i.e. a clustered table or an index on a heap) that already has data. It is a trace flag because in testing they found some scenarios where it didn't perform as well. Marcel explains why below. “TF610 can be used to get minimal logging in a non-empty B-Tree. The idea is that when you insert a large amount of data, you don't want to...(read more)

    Read the article

  • Troubleshooting High-CPU Utilization for SQL Server

    - by Susantha Bathige
    The objective of this FAQ is to outline the basic steps in troubleshooting high CPU utilization on a server hosting a SQL Server instance. The first and most common step if you suspect high CPU utilization (or are alerted for it) is to log in to the physical server and check the Windows Task Manager. The Performance tab will show the high utilization as shown below:

    Next, we need to determine which process is responsible for the high CPU consumption. The Processes tab of the Task Manager will show this information. Note that to see all processes you should select Show processes from all users. In this case, SQL Server (sqlserver.exe) is consuming 99% of the CPU (a normal benchmark for max CPU utilization is about 50-60%).

    Next we examine the scheduler data. The scheduler is a component of SQLOS which evenly distributes load amongst CPUs. The query below returns the important columns for CPU troubleshooting. Note – if your server is under severe stress and you are unable to log in to SSMS, you can use another machine's SSMS to log in to the server through DAC – Dedicated Administrator Connection (see http://msdn.microsoft.com/en-us/library/ms189595.aspx for details on using DAC).

    SELECT scheduler_id
        ,cpu_id
        ,status
        ,runnable_tasks_count
        ,active_workers_count
        ,current_tasks_count
        ,load_factor
        ,yield_count
    FROM sys.dm_os_schedulers
    WHERE scheduler_id < 1048576

    See below for the BOL definitions of the above columns.

    scheduler_id – ID of the scheduler. All schedulers that are used to run regular queries have ID numbers less than 1048576. Those schedulers that have IDs greater than or equal to 1048576 are used internally by SQL Server, such as the dedicated administrator connection scheduler.
    cpu_id – ID of the CPU with which this scheduler is associated.
    status – Indicates the status of the scheduler.
    runnable_tasks_count – Number of workers, with tasks assigned to them, that are waiting to be scheduled on the runnable queue.
    active_workers_count – Number of workers that are active. An active worker is never preemptive, must have an associated task, and is either running, runnable, or suspended.
    current_tasks_count – Number of current tasks that are associated with this scheduler.
    load_factor – Internal value that indicates the perceived load on this scheduler.
    yield_count – Internal value that is used to indicate progress on this scheduler.

    Now to interpret the above data. There are four schedulers and each is assigned to a different CPU. All the CPUs are ready to accept user queries as they are all ONLINE. There are 294 active tasks in the output as per the current_tasks_count column. This count indicates how many activities are currently associated with the schedulers; when a task is complete, this number is decremented. 294 is quite a high figure and indicates all four schedulers are extremely busy. When a task is enqueued, the load_factor value is incremented. This value is used to determine whether a new task should be put on this scheduler or another scheduler; the new task will be allocated to the less loaded scheduler by SQLOS. The very high value of this column indicates all the schedulers have a high load. There are 268 runnable tasks, which means all these tasks have been assigned a worker and are waiting to be scheduled on the runnable queue.

    The next step is to identify which queries are demanding a lot of CPU time. The query below is useful for this purpose (note, in its current form, it only shows the top 10 records).

    SELECT TOP 10 st.text
        ,st.dbid
        ,st.objectid
        ,qs.total_worker_time
        ,qs.last_worker_time
        ,qp.query_plan
    FROM sys.dm_exec_query_stats qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
    CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
    ORDER BY qs.total_worker_time DESC

    This query uses total_worker_time as the measure of CPU load and orders by total_worker_time descending to show the most expensive queries and their plans at the top. Note the BOL definitions for the important columns:

    total_worker_time - Total amount of CPU time, in microseconds, that was consumed by executions of this plan since it was compiled.
    last_worker_time - CPU time, in microseconds, that was consumed the last time the plan was executed.

    I re-ran the same query after a few seconds and got the output below. After a few seconds the SP dbo.TestProc1 is shown in fourth place and once again the last_worker_time is the highest. This means the procedure TestProc1 consumes CPU time continuously each time it executes.

    In this case, the primary cause of the high CPU utilization was a stored procedure. You can view the execution plan by clicking on the query_plan column to investigate why it is causing a high CPU load. I have used SQL Server 2008 (SP1) to test all the queries used in this article.

    Read the article

  • Access and Manage Your Ubuntu One Account in Chrome and Iron

    - by Asian Angel
    Do you have an Ubuntu One account that you access across different operating systems? Whether you are using Ubuntu, a different flavor of Linux, Windows, or Mac the Ubuntu One web app makes it easy to access and manage your Ubuntu One account in just moments. The Ubuntu One web app will definitely be useful if you find yourself away from your favorite Ubuntu computer but need to get important files uploaded to your account. Ubuntu One [Chrome Web Store]

    Read the article

  • Network config with pppoe and Ubuntu 13.10

    - by Pavel
    I have an internet connection that uses PPPoE. In Windows I do not assign an IP address for my network, and I am able to connect using a username and password. I installed Ubuntu 13.10 today, ran pppoeconf, set up an IP address/network mask and changed the MAC address in the options, and I was able to connect to the Internet. After I restarted the computer, I got a message saying that the wired connection is not managed, and the Internet was not working. I went to the network manager config file and changed the option to true, but I still can't connect to the Internet. I am pretty new to Linux. How can I get my Internet working? Thanks

    Read the article

  • Back Up to Tape the Way You Shop For Groceries

    - by rickramsey
    Imagine if this was how you shopped for groceries: from the end of the aisle, sprint to the point where you reach the ketchup. Pull a bottle from the shelf and yell at the top of your lungs, “Got it!” Sprint back to the end of the aisle. Start again and sprint down the same aisle to the mustard, pull a bottle from the shelf and again yell for the whole store to hear, “Got it!” Sprint back to the end of the aisle. Repeat this procedure for every item you need in the aisle. Proceed to the next aisle and follow the same steps for the list of items you need from that aisle. Sounds ridiculous, doesn’t it? Not only is it horribly inefficient, it’s exhausting and can lead to wear-out failures on your grocery cart, or worse, yourself.

    This is essentially how NetApp and some other applications write NDMP backups to tape. In the analogy, the ketchup and mustard are the files to be written, yelling “Got it!” is the equivalent of a sync mark at the end of a file, and the sprint back to the end of an aisle is the process most commonly called a “backhitch”, where the drive has to back up on a tape to start writing again. Writing to tape in this way results in very slow tape drive performance and imposes unnecessary wear on the tape drive and the media, especially when writing small files.

    The good news is not all tape drives behave this way when writing small files. Unlike midrange LTO drives, Oracle’s StorageTek T10000D tape drive is designed to handle this scenario efficiently. The difference between the two drive types is that the T10000D drive gives you the ability to write files in a NetApp NDMP backup environment the way you would normally shop for groceries. With grocery shopping, you essentially stream through aisles picking up items as you go, and then after checking out, yell, “Got it!”, though you might do that last step silently. The T10000D has a feature called the Tape Application Accelerator, which prevents the drive from having to stop after each file is written to notify NetApp or another application that the write was successful. When enabled in the T10000D tape drive, the Tape Application Accelerator causes the tape drive to respond to tape mark and file sync commands differently than when disabled: a tape mark received by the tape drive is treated as a buffered tape mark, and a file sync received by the tape drive is treated as a no-op command. Since buffered tape marks and no-op commands do not cause the tape drive to empty the contents of its buffer to tape and backhitch, the data is written to tape in significantly less time.

    Oracle has emulated NetApp environments with a number of different file sizes and found the following when comparing the T10000D with the Tape Application Accelerator enabled versus LTO6 tape drives. Notice how the T10000D is not only monumentally faster, but also remarkably consistent? In addition, the writing of the 50 GB of files is done without a single backhitch. The LTO6 drive, meanwhile, will perform as many as 3,800 backhitches! At the end of writing the entire set of files, the T10000D tape drive reports back to the application, in this case NetApp, that the write was successful via a tape mark.

    So if the Tape Application Accelerator dramatically improves performance and reliability, why wouldn’t you always have it enabled? The reason is that tape drive buffers are meant to be just temporary data repositories, so in the event of a power loss there could be data loss in certain environments for the files that resided in the buffer.
    Fortunately, there are best practices, depending on your environment, to keep this from happening. I highly recommend reading Maximizing Tape Performance with StorageTek T10000 Tape Drives (pdf) to decide which best practice is right for you. The white paper also digs deeper into the benefits of the Tape Application Accelerator. The white paper is free, and after downloading it you can decide for yourself whether you want to yell “Got it!” out loud or just silently to yourself.

    Customer Advisory Panel

    One final link: Oracle has started up a Customer Advisory Panel program to collect feedback from customers on their current experiences with Oracle products, as well as desires for future product development. If you would like to participate in the program, go to this link at oracle.com.

    photo taken on Idaho's Sacajewea Historic Biway by Rick Ramsey - Brian Zents

    Read the article

< Previous Page | 55 56 57 58 59 60 61 62 63 64 65 66  | Next Page >