Search Results

Search found 70199 results on 2808 pages for 'file monitoring'.


  • Developer Dashboard in SharePoint 2010

    - by jcortez
    Introducing the Developer Dashboard
    As a SharePoint developer (or IT Professional), how many times have you had the pleasure of figuring out why a particular page on your site is taking too long to render? I'm sure one of the techniques you have employed in troubleshooting is the process of elimination - removing individual web parts from the page hoping to identify which web part is misbehaving. One of the new features of SharePoint 2010 is the Developer Dashboard. This dashboard provides tracing and performance information that can be useful when you are trying to troubleshoot pages that load too slowly. The Developer Dashboard is turned off by default, and I'll go over three different ways to display it. Here is a screenshot of what the Developer Dashboard looks like when displayed at the bottom of the page:
    On the left side you can see the different events that fired during the page processing pipeline and how long these events took. This is where you will see individual web parts being processed and how long each took to complete (obviously the kind of processing depends on what the web part does). On the right side you will see the different database calls issued through the SharePoint Object Model to process the page. Each of these database queries is actually a hyperlink, and clicking on it displays a pop-up window that shows the actual SQL Query Text, the Call Stack that triggered the database call, and the IO statistics of that query.
    Enabling the Developer Dashboard
    Option 1: Managed Code
    The Developer Dashboard is a farm-wide setting, and the managed-code approach won't work if it is used within a web part hosted on any non-Central Admin site. The SPDeveloperDashboardLevel enum has three possible values: On, Off, and OnDemand. Setting it to On will always display the Developer Dashboard at the bottom of the page. Setting it to Off will hide the Developer Dashboard. Setting it to OnDemand will add an icon at the top right corner of the page (see screenshot below) where a Site Collection Admin can toggle the display of the Developer Dashboard for a particular site collection. In my opinion, OnDemand is the best setting when troubleshooting a page or during development, since a Site Collection Admin can turn it on or off for a particular site collection only. The first cool thing about this is that the Site Collection Admin who turned it on will be the only one to see the Developer Dashboard output. Everyday users won't see the Developer Dashboard output even if it was turned on by a Site Collection Admin. If you need more flexibility on who gets to see the Developer Dashboard output, you can set SPDeveloperDashboardSettings.RequiredPermissions to control which group of users will have permission to see the output.
    Option 2: Using stsadm
    Using stsadm, you can run the following command to configure the Developer Dashboard: STSADM -o setproperty -pn developer-dashboard -pv OnDemand. To successfully execute this command, be sure that you are running as a Farm Admin.
    Option 3: Using PowerShell
    For all scripts in SharePoint 2010, I prefer writing them as PowerShell scripts. Though the stsadm command is less verbose, the PowerShell equivalent is pretty straightforward and uses the SharePoint Object Model. You can of course parameterize the value that gets assigned to the DisplayLevel property so you can turn it On, Off or OnDemand depending on the parameter.
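    As a rough sketch (not the article's original listing), a script along those lines might look like this; it assumes the SharePoint 2010 PowerShell snap-in is available and that you run it as a farm administrator:

        # Sketch: set the farm-wide Developer Dashboard display level.
        # $level is illustrative - use "On", "Off" or "OnDemand".
        Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
        $level = "OnDemand"
        $settings = [Microsoft.SharePoint.Administration.SPWebService]::ContentService.DeveloperDashboardSettings
        $settings.DisplayLevel = [Microsoft.SharePoint.Administration.SPDeveloperDashboardLevel]$level
        $settings.Update()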
    Events and the Developer Dashboard
    Now, don't assume that all the code inside your web part or page will show up in the Developer Dashboard complete with all the great troubleshooting information. Only a finite set of events is monitored by default (for a web part it will be the events in the base web part class). Let's say you have a click event that could take some time - for example, a web service call - and you want to include troubleshooting information for this event in the Developer Dashboard. Enter SPMonitoredScope, which is also a new feature in SharePoint 2010. In SharePoint 2010, everything is executed within a "Monitored Scope", and each scope has a set of "Monitors" that measure and count calls and timings, which appear in the Developer Dashboard. Below is an example of how to get your custom code included in the Developer Dashboard by wrapping it inside a new monitored scope: the new scope, "My long web service call", shows up in the Developer Dashboard along with the time it took to complete processing. In my opinion, wrapping your custom code in an SPMonitoredScope is a SharePoint development best practice, since it gives you visibility into, and a better understanding of, the performance of your components.
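    A rough sketch of that wrapping (the long-running web service call is a hypothetical placeholder, not code from the article):

        // Wrapping custom web part code in an SPMonitoredScope so the scope's name
        // and timing appear in the Developer Dashboard.
        using System;
        using System.Web.UI.WebControls.WebParts;
        using Microsoft.SharePoint.Utilities;

        namespace DashboardDemo
        {
            public class MonitoredWebPart : WebPart
            {
                protected override void CreateChildControls()
                {
                    using (new SPMonitoredScope("My long web service call"))
                    {
                        CallSlowWebService();
                    }
                }

                private void CallSlowWebService()
                {
                    // Placeholder for the slow web service call described in the post.
                }
            }
        }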

    Read the article

  • Ask How-To Geek: How Can I Monitor My Bandwidth Usage?

    - by Jason Fitzpatrick
    If you’re lucky, you enjoy wide-open internet access without restriction (or restrictions so high you would have to work all month to hit them). If you’re not so lucky, you’ve got an ISP with heavy caps. Today we help out a reader working under such a cap.

    Read the article

  • How Can I Track the Modifications a Program’s Installer Makes?

    - by Jason Fitzpatrick
    What exactly are those installation apps doing as the progress bar whizzes by? If you want to keep a close eye on things, you’ll need the right tools. Today’s Question & Answer session comes to us courtesy of SuperUser, a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    Read the article

  • Multiple file upload with asp.net 4.5 and Visual Studio 2012

    - by Jalpesh P. Vadgama
    This post will be part of the Visual Studio 2012 feature series. In earlier versions of ASP.NET there was no way to upload multiple files at the same time; we had to use a third-party control or create a custom control for that. But with ASP.NET 4.5 it's now possible to upload multiple files with the file upload control. In ASP.NET 4.5 Microsoft has enhanced the file upload control to support the HTML5 multiple attribute. There is a property called 'AllowMultiple' to support that attribute, and with it you can easily upload multiple files. So what are we waiting for? It's time to create an example. On the default.aspx page I have written the following:
    <asp:FileUpload ID="multipleFile" runat="server" AllowMultiple="true" />
    <asp:Button ID="uploadFile" runat="server" Text="Upload files" onclick="uploadFile_Click"/>
    Here you can see that I have given the file upload control the id multipleFile and I have set AllowMultiple to true. I have also added one button for uploading the files. For this example I am going to upload the files into an Images folder. As you can see, I have also attached an event handler for the button's click event, so it's time to write the server-side code. The following code is for the server side.
    protected void uploadFile_Click(object sender, EventArgs e)
    {
        if (multipleFile.HasFiles)
        {
            foreach (HttpPostedFile uploadedFile in multipleFile.PostedFiles)
            {
                uploadedFile.SaveAs(System.IO.Path.Combine(Server.MapPath("~/Images/"), uploadedFile.FileName));
                Response.Write("File uploaded successfully");
            }
        }
    }
    Here in the above code you can see that I have checked whether the file upload control has multiple files, and then I have saved them in the Images folder of the web application. Once you run the application in the browser it will look like the following. I have selected two files. Once I have selected them and clicked on the upload file button, it gives a message like the following. As you can see, it has successfully uploaded the files, and you can see them in Windows Explorer like the following. As you can see, it's very easy to upload multiple files in ASP.NET 4.5. Stay tuned for more. Till then, happy programming.
    P.S.: This feature is only supported in browsers that support HTML5 multiple file upload. In other browsers it will work like the normal file upload control in ASP.NET.

    Read the article

  • Cannot start tor with vidalia, failed to bind listening port because of tor-socks running

    - by ganjan
    I get these errors trying to run tor with vidalia:
    Apr 19 21:55:15.371 [Notice] Tor v0.2.1.30. This is experimental software. Do not rely on it for strong anonymity. (Running on Linux i686)
    Apr 19 21:55:15.372 [Notice] Initialized libevent version 1.4.13-stable using method epoll. Good.
    Apr 19 21:55:15.373 [Notice] Opening Socks listener on 127.0.0.1:9050
    Apr 19 21:55:15.373 [Warning] Could not bind to 127.0.0.1:9050: Address already in use. Is Tor already running?
    Apr 19 21:55:15.373 [Warning] Failed to parse/validate config: Failed to bind one of the listener ports.
    Apr 19 21:55:15.373 [Error] Reading config failed--see warnings above.
    I don't think tor is running. Here is an nmap scan of my localhost:
    Starting Nmap 5.21 ( http://nmap.org ) at 2011-04-19 21:59 CEST
    Nmap scan report for localhost (127.0.0.1)
    Host is up (0.0000050s latency).
    Hostname localhost resolves to 2 IPs. Only scanned 127.0.0.1
    rDNS record for 127.0.0.1: localhost.localdomain
    Not shown: 989 closed ports
    PORT STATE SERVICE
    22/tcp open ssh
    53/tcp open domain
    80/tcp open http
    139/tcp open netbios-ssn
    445/tcp open microsoft-ds
    631/tcp open ipp
    3128/tcp open squid-http
    3306/tcp open mysql
    9000/tcp open cslistener
    9050/tcp open tor-socks
    10000/tcp open snet-sensor-mgmt
    I see tor-socks is running here, which is probably the cause of the problem. How do I stop this from starting up? I want to use vidalia so I can monitor what's going on.

    Read the article

  • Write TSQL, win a Kindle.

    - by Fatherjack
    So recently Red Gate launched sqlmonitormetrics.red-gate.com and showed the world how to embed your own scripts harmoniously in a third party tool to get the details that you want about your SQL Server performance. The site has a way to submit your own metrics and take a copy of the ones that other people have submitted, to build a library of code to keep track of key metrics of your servers' performance. There have been several submissions already, but they have now launched a competition to provide an incentive for you to get creative and show us what you can do with a bit of T-SQL and the SQL Monitor framework*.
    What's it worth? Well, if you are one of the 3 winners then you get to choose either a Kindle Fire or $199.
    How do you win? Simply write the T-SQL for a SQL Monitor custom metric and the relevant description and introduction for it, submit it via sqlmonitormetrics.red-gate.com before 14th Sept 2012, and then sit back and wait while the judges review your code and your aims in writing the metric.
    Who are the judges and how will they judge the metrics? There are two judges for this competition, Steve Jones (Microsoft SQL Server MVP, co-founder of SQLServerCentral.com, author, blogger etc) and Jonathan Allen (um, yeah, Steve has done all the good stuff, I'm here by good fortune). We will be looking to rate the metrics on each of 3 criteria:
    - how the metric can help with performance tuning SQL Server;
    - how having the metric running enables DBAs to meet best practice;
    - how interesting/original the idea for the metric is.
    Our combined decision will be final etc etc **
    What happens to my metric? Any metrics submitted to the competition will be automatically entered into the site library and become available for sharing once the competition is over. You'll get full credit for metrics you submit regardless of the competition results. You can enter as many metrics as you like.
    How long does it take? Honestly? Once you have the T-SQL sorted, then so long as you can type your name and your email address you are done: http://sqlmonitormetrics.red-gate.com/share-a-metric/
    What can I monitor? If you really, really want a Kindle or $199 (and let's face it, who doesn't?) and are momentarily stuck for inspiration, take a look at these example custom metrics that have been written by Stuart Ainsworth, Fabiano Amorim, TJay Belt, Louis Davidson, Grant Fritchey, Brad McGehee and me to start the library off. There are some great pieces of T-SQL in those metrics, gathering important stats about how SQL Server is performing.
    * - framework may not be the best word here but I was under pressure and couldn't think of a better one. If you prefer, try 'engine' or 'application'? I don't know, pick something that makes sense to you.
    ** - for the full (legal) version of the rules check the details on sqlmonitormetrics.red-gate.com or send us an email if you want any point clarified.
    Disclaimer - Jonathan is a Friend of Red Gate and as such, whenever they are discussed, will have a generally positive disposition towards Red Gate tools. Other tools are often available and you should always try others before you come back and buy the Red Gate ones. All code in this blog is provided "as is" and no guarantee, warranty or accuracy is applicable or inferred; run the code on a test server and be sure to understand it before you run it on a server that means a lot to you or your manager.
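    To give a flavour of what a submission involves: a custom metric essentially boils down to a T-SQL query that returns a single numeric value for SQL Monitor to sample on a schedule (that, plus the description and introduction mentioned above). The example below is just an illustrative sketch, not one of the library metrics - it counts requests that are currently blocked:

        -- Illustrative custom-metric query: number of currently blocked requests.
        -- Returns one numeric value, which is what a metric collection expects.
        SELECT COUNT(*)
        FROM sys.dm_exec_requests AS r
        WHERE r.blocking_session_id <> 0;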

    Read the article

  • Tuning Red Gate: #3 of Lots

    - by Grant Fritchey
    I'm drilling down into the metrics about SQL Server itself available to me in the Analysis tab of SQL Monitor to see what's up with our two problematic servers. In the previous post I'd noticed that rg-sql01 had quite a few CPU spikes, so one of the first things I want to check there is how much CPU is getting used by SQL Server itself. It's possible we're looking at some other process using up all the CPU. Nope, it's SQL Server. I compared this to the rg-sql02 server: you can see that there is a more consistently low set of CPU counters there. I clearly need to look at rg-sql01 and capture more specific data around the queries running on it to identify which ones are causing these CPU spikes.
    I always like to look at the Batch Requests/sec on a server, not because it's an indication of a problem, but because it gives you some idea of the load. Just how much is this server getting hit? Here are rg-sql01 and rg-sql02: of the two, clearly rg-sql01 has a lot of activity. Remember though, that's all this is a measure of: activity. It doesn't suggest anything other than what it says, the number of requests coming in. But it's the kind of thing you want to know in order to understand how the system is used. Are you seeing a correlation between the number of requests and the CPU usage, or a reverse correlation - the number of requests drops as the CPU spikes? See, it's useful.
    Some of the details you can look at are Compilations/sec, Compilations/Batch and Recompilations/sec. These give you some idea of how the cache is getting used within the system. None of these showed anything interesting on either server.
    One metric that I like (even though I know it can be controversial) is the Page Life Expectancy. On the average server I expect to see a series of mountains as the PLE climbs then drops due to a data load or something along those lines. That's not the case here: those spikes back in January suggest that the servers weren't really being used much. The PLE on rg-sql01 seems to be somewhat consistent, growing to 3 hours or so then dropping, but the rg-sql02 PLE looks like it might be all over the map. Instead of continuing to look at this high-level data-gathering view, I'm going to drill down on rg-sql02 and see what it's done for the last week: and now we begin to see where we might have an issue. Memory on this system is getting flushed every half hour or so.
    I'm going to check another metric, scans: whoa! I'm going back to the system real quick to look at some disk information again for rg-sql02. Here is the average disk queue length on the server, and the transfers. Right, I think I have a guess as to what's up here. We're seeing memory get flushed constantly and we're seeing lots of scans. The disks are queuing, especially that F drive, and there are lots of requests that correspond to the scans and the memory flushes. In short, we've got queries that are scanning the data, a lot, so we either have bad queries or bad indexes.
    I'm going back to the server overview for rg-sql02 to check the Top 10 expensive queries. I'm modifying it to show me the last 3 days and the totals, so I'm not looking at some maintenance routine that ran 10 minutes ago and is skewing the results: OK, I need to look into these queries that are getting executed this much. They're generating a lot of reads, but which queries are generating the most reads? Ow, all still going against the same database. This is where I'm going to temporarily leave SQL Monitor.
    What I want to do is connect up to the server, validate that the Warehouse database is using the F:\ drive (which I'll put money down it is) and then start seeing what's up with these queries.
    Part 1 of the Series | Part 2 of the Series
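    As an aside, the Page Life Expectancy counter charted above can also be read straight from the DMVs if you want to spot-check it outside SQL Monitor; a quick sketch, not from the post (on NUMA hardware the per-node Buffer Node counters are worth checking too):

        -- Current Page Life Expectancy, in seconds, from the buffer manager counters.
        SELECT [object_name], counter_name, cntr_value AS ple_seconds
        FROM sys.dm_os_performance_counters
        WHERE counter_name = N'Page life expectancy'
          AND [object_name] LIKE N'%Buffer Manager%';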

    Read the article

  • Oracle Data Integrator 11.1.1.5 Complex Files as Sources and Targets

    - by Alex Kotopoulis
    Overview
    ODI 11.1.1.5 adds the new Complex File technology for use with file sources and targets. The goal is to read or write file structures that are too complex to be parsed using the existing ODI File technology. This includes:
    - Different record types in one list that use different parsing rules
    - Hierarchical lists, for example customers with nested orders
    - Parsing instructions in the file data, such as delimiter types, field lengths, type identifiers
    - Complex headers such as multiple header lines or parseable information in header
    - Skipping of lines
    - Conditional or choice fields
    Similar to the ODI File and XML File technologies, the complex file parsing is done through a JDBC driver that exposes the flat file as relational table structures. Complex files are mapped to one or more table structures, as opposed to the (simple) file technology, which always has a one-to-one relationship between file and table. The resulting set of tables follows the same concept as the ODI XML driver: table rows have additional PK-FK relationships to express hierarchy as well as order values to maintain the file order in the resulting table.
    The parsing instruction format used for complex files is the nXSD (native XSD) format that is already in use with Oracle BPEL. This format extends the XML Schema standard by adding additional parsing instructions to each element. Using nXSD parsing technology, the native file is converted into an internal XML format. It is important to understand that the XML is streamed to improve performance; there is no size limitation of the native file based on memory size, the XML data is never fully materialized. The internal XML is then converted to relational schema using the same mapping rules as the ODI XML driver.
    How to Create an nXSD file
    Complex file models depend on the nXSD schema for the given file. This nXSD file has to be created using a text editor or the Native Format Builder Wizard that is part of Oracle BPEL. BPEL is included in the ODI Suite, but not in standalone ODI Enterprise Edition. The nXSD format extends the standard XSD format through nxsd attributes. nXSD is a valid XML Schema, since the XSD standard allows extra attributes with their own namespaces.
    The following is a sample nXSD schema:
    <?xml version="1.0"?>
    <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd"
                elementFormDefault="qualified"
                xmlns:tns="http://xmlns.oracle.com/pcbpel/demoSchema/csv"
                targetNamespace="http://xmlns.oracle.com/pcbpel/demoSchema/csv"
                attributeFormDefault="unqualified"
                nxsd:encoding="US-ASCII" nxsd:stream="chars" nxsd:version="NXSD">
      <xsd:element name="Root">
        <xsd:complexType>
          <xsd:sequence>
            <xsd:element name="Header">
              <xsd:complexType>
                <xsd:sequence>
                  <xsd:element name="Branch" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy=","/>
                  <xsd:element name="ListDate" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="${eol}"/>
                </xsd:sequence>
              </xsd:complexType>
            </xsd:element>
            <xsd:element name="Customer" maxOccurs="unbounded">
              <xsd:complexType>
                <xsd:sequence>
                  <xsd:element name="Name" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy=","/>
                  <xsd:element name="Street" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy=","/>
                  <xsd:element name="City" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="${eol}"/>
                </xsd:sequence>
              </xsd:complexType>
            </xsd:element>
          </xsd:sequence>
        </xsd:complexType>
      </xsd:element>
    </xsd:schema>
    The nXSD schema annotates elements to describe their position and delimiters within the flat text file. The schema above uses almost exclusively the nxsd:terminatedBy instruction to look for the next terminator characters. There are various constructs in nXSD to parse fixed-length fields, look ahead in the document for string occurrences, perform conditional logic, use variables to remember state, and many more. nXSD files can either be written manually using an XML Schema editor or created using the Native Format Builder Wizard. Both the Native Format Builder Wizard and the nXSD language are described in the Application Server Adapter Users Guide. The way to start the Native Format Builder in BPEL is to create a new File Adapter; in step 8 of the Adapter Configuration Wizard a new Schema for Native Format can be created:
    The Native Format Builder guides you through a number of steps to generate the nXSD based on a sample native file. If the format is complex, it is often a good idea to "approximate" it with a similar simple format and then add the complex components manually. The resulting *.xsd file can be copied and used as the format for ODI; other BPEL constructs such as the file adapter definition are not relevant for ODI. Using this technique it is also possible to parse the same file format in SOA Suite and ODI, for example using SOA for small real-time messages and ODI for large batches. The nXSD schema in this example describes a file with a header row containing data and 3 string fields per row delimited by commas, for example:
    Redwood City Downtown Branch, 06/01/2011
    Ebeneezer Scrooge, Sandy Lane, Atherton
    Tiny Tim, Winton Terrace, Menlo Park
    The ODI Complex File JDBC driver exposes the file structure through a set of relational tables with PK-FK relationships.
    The tables for this example are:
    Table ROOT (1 row):
    - ROOTPK: Primary key for the root element
    - SNPSFILENAME: Name of the file
    - SNPSFILEPATH: Path of the file
    - SNPSLOADDATE: Date of load
    Table HEADER (1 row):
    - ROOTFK: Foreign key to the ROOT record
    - ROWORDER: Order of the row in the native document
    - BRANCH: Data
    - BRANCHORDER: Order of Branch within the row
    - LISTDATE: Data
    - LISTDATEORDER: Order of ListDate within the row
    Table ADDRESS (2 rows):
    - ROOTFK: Foreign key to the ROOT record
    - ROWORDER: Order of the row in the native document
    - NAME: Data
    - NAMEORDER: Order of Name within the row
    - STREET: Data
    - STREETORDER: Order of Street within the row
    - CITY: Data
    - CITYORDER: Order of City within the row
    Every table has PK and/or FK fields to reflect the document hierarchy through relationships. In this example this is trivial, since the HEADER and all CUSTOMER records point back to the PK of ROOT; deeper nested documents require this to identify parent elements. All tables also have a ROWORDER field to define the order of rows, as well as order fields for each column, in case the order of columns varies in the original document and needs to be maintained. If order is not relevant, these fields can be ignored.
    How to Create a Complex File Data Server in ODI
    After creating the nXSD file and a test data file, and storing them on a local file system accessible to ODI, you can go to the ODI Topology Navigator to create a Data Server and Physical Schema under the Complex File technology. This technology follows the conventions of other ODI technologies and is very similar to the XML technology. The parsing settings such as the source native file, the nXSD schema file, the root element, as well as the external database can be set in the JDBC URL:
    The use of an external database defined by dbprops is optional, but is strongly recommended for production use. Ideally, the staging database should be used for this. Also, when using a complex file exclusively for read purposes, it is recommended to use the ro=true property to ensure the file is not unnecessarily synchronized back from the database when the connection is closed. A data file is always required to be present at the filename path during design time. Without this file, operations like testing the connection, reading the model data, or reverse-engineering the model will fail. All properties of the Complex File JDBC Driver are documented in the Oracle Fusion Middleware Connectivity and Knowledge Modules Guide for Oracle Data Integrator, in Appendix C: Oracle Data Integrator Driver for Complex Files Reference. David Allan has created a great viewlet, Complex File Processing - 0 to 60, which shows the creation of a Complex File data server as well as a model based on this server.
    How to Create Models Based on a Complex File Schema
    Once the physical schema and logical schema have been created, the Complex File can be used to create a Model as if it were based on a database. When reverse-engineering the Model, data stores (tables) for each XSD element of complex type will be created. Use of complex files as sources is straightforward; when using them as targets it has to be made sure that all dependent tables have matching PK-FK pairs; the same applies to the XML driver as well.
    Debugging and Error Handling
    There are different ways to test an nXSD file. The Native Format Builder Wizard can be used even if the nXSD wasn't created in it; it will show issues related to the schema and/or test data. In ODI, the nXSD will be parsed and run against the existing test data file when testing a connection in the Dataserver.
    If either the nXSD has an error or the data is non-compliant with the schema, an error will be displayed. Sample error message:
    Error while reading native data. [Line=1, Col=5] Not enough data available in the input, when trying to read data of length "19" for "element with name D1" from the specified position, using "style" as "fixedLength" and "length" as "". Ensure that there is enough data from the specified position in the input.
    Complex File FAQ
    Is the size of the native file limited by available memory? No, since the native data is streamed through the driver, only the available space in the staging database limits the size of the data. There are limits on individual field sizes, though; a single large object field needs to fit in memory.
    Should I always use the complex file driver instead of the file driver in ODI now? No, use the file technology for all simple file parsing tasks, for example any fixed-length or delimited files that just have one row format and can be mapped into a simple table. Because of its narrow assumptions the ODI file driver is easy to configure within ODI and can stream file data without writing it into a database. The complex file driver should be used whenever the use case cannot be handled through the file driver.
    Are we generating XML out of flat files before we write it into a database? We don't materialize any XML as part of parsing a flat file, either in memory or on disk. The data produced by the XML parser is streamed in Java objects that just use the XSD-derived nXSD schema as their type system. We use the nXSD schema because it is the standard for describing complex flat file metadata in Oracle Fusion Middleware, and it enables users to share schemas across products.
    Is the nXSD file interchangeable with SOA Suite? Yes, ODI can use the same nXSD files as SOA Suite, allowing mixed use cases with the same data format.
    Can I start the Native Format Builder from the ODI Studio? No, the Native Format Builder has to be started from a JDeveloper with BPEL instance. You can get BPEL as part of the SOA Suite bundle. Users without SOA Suite can manually develop nXSD files using XSD editors.
    When is the database data written back to the native file? Data is synchronized using the SYNCHRONIZE and CREATE FILE commands, and when the JDBC connection is closed. It is recommended to set the ro or read_only property to true when a file is exclusively used for reading, so that no unnecessary write-backs occur.
    Is the nXSD metadata part of the ODI Master or Work Repository? No, the data server definition in the master repository only contains the JDBC URL with file paths; the nXSD files have to be accessible on the file systems where the JDBC driver is executed during production, either by copying or by using a network file system.
    Where can I find sample nXSD files? The Application Server Adapter Users Guide contains nXSD samples for various different use cases.
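    To make the data server definition above concrete, the JDBC URL takes roughly this shape; the property names here are illustrative assumptions modelled on the ODI XML-driver conventions (file, schema, root element, read-only), so check Appendix C of the driver guide mentioned above for the authoritative syntax:

        jdbc:snps:complexfile?f=/data/customers.csv&d=/data/customers.xsd&re=Root&ro=true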

    Read the article

  • Tuning Red Gate: #4 of Some

    - by Grant Fritchey
    First time connecting to these servers directly (keys to the kingdom, bwa-ha-ha-ha. oh, excuse me), so I'm going to take a look at the server properties, just to see if there are any issues there. Max memory is set, cool, first possible silly mistake cleared. In fact, these look to be nicely set up. Oh, I'd like to see the ANSI Standards set by default, but it's not a big deal. The default location for database data is the F:\ drive, where I saw all the activity last time. Cool, the people maintaining the servers in our company listen: the parallelism threshold is set to 35 and optimize for ad hoc is enabled. No shocks, no surprises. The basic setup is appropriate.
    On to the problem database. Nothing wrong in the properties. The database is in SIMPLE recovery, but I think it's a reporting system, so no worries there. Again, I'd prefer to see the ANSI settings for connections, but that's the worst thing I can see. Time to look at the queries, tables, indexes and statistics, because all the information I've collected over the last several days suggests that we're not looking at a systemic problem (except possibly not enough memory), but at the traditional tuning issues.
    I just want to note that I started looking at the system, not the queries. So should you when tuning your environment. I know, from the data collected through SQL Monitor, what my top poor-performing queries are, and the most frequently called, etc. I'm starting with the most frequently called. I'm going to get the execution plan for this thing out of the cache (although, with the cache dumping constantly, I might not get it). And it's not there. Called 1.3 million times over the last 3 days, but it's not in cache. Wow. OK. I'll see what's in cache for this database:
    SELECT  deqs.creation_time,
            deqs.execution_count,
            deqs.max_logical_reads,
            deqs.max_elapsed_time,
            deqs.total_logical_reads,
            deqs.total_elapsed_time,
            deqp.query_plan,
            SUBSTRING(dest.text, (deqs.statement_start_offset / 2) + 1,
                      (deqs.statement_end_offset - deqs.statement_start_offset) / 2 + 1) AS QueryStatement
    FROM    sys.dm_exec_query_stats AS deqs
            CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest
            CROSS APPLY sys.dm_exec_query_plan(deqs.plan_handle) AS deqp
    WHERE   dest.dbid = DB_ID('Warehouse')
            AND deqs.statement_end_offset > 0
            AND deqs.statement_start_offset > 0
    ORDER BY deqs.max_logical_reads DESC;
    And looking at the most expensive operation, we have our first bad boy: multiple table scans against very large sets of data and a sort operation. A sort operation? It's an insert. Oh, I see, the table is a heap, so it's doing an insert, then sorting the data and then inserting into the primary key. First question: why isn't this a clustered index? Let's look at some more of the queries. The next one is deceiving. Here's the query plan: you're thinking to yourself, what's the big deal? Well, what if I told you that this thing had 8036318 reads? I know, you're looking at skinny little pipes. Know why? Table variable. Estimated number of rows = 1. Actual number of rows: well, I'm betting several more than one, considering it's read 8 MILLION pages off the disk in a single execution. We have a serious and real tuning candidate. Oh, and I missed this: it's loading the table variable from a user-defined function. Let me check, let me check. YES! A multi-statement table-valued user-defined function. And another tuning opportunity.
    This one's a beauty, seriously. Did I also mention that they're doing a hash against all the columns in the physical table? I'm sure that won't lead to scans of a 500,000-row table, no, not at all. OK, I lied. Of course it does. At least it's on the top part of the loop, which means the scan is only executed once. I just did a cursory check on the next several poor performers: all calling the UDF. I think I found a big tuning opportunity. At this point, I'm typing up internal emails for the company. Someone just had their baby called ugly. In addition to a series of suggested changes that we need to implement, I'm also apologizing for being such an unkind monster as to question whether that third eye & those flippers belong on such an otherwise lovely child.

    Read the article

  • Tuning Red Gate: #2 of Many

    - by Grant Fritchey
    In the last installment, I used the SQL Monitor tool to get a snapshot view of the current state of the servers at Red Gate that are giving us trouble. That snapshot suggested some areas where I should focus some time, primarily in which queries were being called most frequently or were running the longest. But you don't want to just run off & start tuning queries. Remember, the foundation for query tuning is the server itself. So, I want to be sure I'm not looking at some major hardware or configuration issues that I need to address first. Rather than look at the current status of the server, I'm going to look at historical data.
    Clicking on the Analysis tab of SQL Monitor I get a whole list of counters that I can look at. More importantly, I can look at them over a period of time. Even more importantly, I can compare past periods with current periods to see if we're looking at a progressive issue or not. There are counters here that will give me an indication of load, and there are counters here that will tell me specifics about that load. First, I want to just look at the load to understand where the pain points might be. Trying to drill down before you have detailed information is just bad planning.
    First thing I'm going to check is the CPU, just to see what's up there. I have two servers I'm interested in, so I'll show you both. Looking at the last 30 days for both servers, well, let's just say that the first server is about what I would expect. It has an average baseline behavior with occasional, regular peaks. This looks like a system with a fairly steady & predictable load that probably has a nightly batch process that spikes the processor. In short, normal stuff. The points there where the CPU drops radically might be worth investigating further, because something changed the processing on this system a lot. But the other server? It's all over the place. There's no steady CPU behavior at all. It spikes high for long periods of time. It's up, it's down. I'm really going to have to spend time looking at CPU issues on this server to try to figure out what's up. It might be other processes being shared on the server, it might be something else. Either way, I'm going to have to spend time evaluating this CPU, especially those peaks about a week ago.
    Looking at the Pages/sec, again just a measure of load, I see that there are some peaks on the rg-sql02 server, but overall it looks like a fairly standard load. Plus, the peaks are only up to 550 pages/sec. Remember, this isn't a performance measure, but just a load measurement; still, from this I don't think we're looking at major memory issues, though I may want to correlate these counters with the CPU counters. Again, the other server looks like there's stuff going on. The load is not at all consistent. In fact there was a point earlier in the year that looks pretty severe. Plus the spikes here are twice the size of the other system. We've got a lot more load going on here and I will probably need to drill down on memory usage on this server.
    Taking a look at the disk transfers/sec, the load on both systems seems to roughly correspond to the other load indicators. Notice that drop right in the middle of the graph for rg-sql02. I wonder if the office was closed over that period or a system was down for maintenance.
    If I saw spikes in memory or disk that corresponded to the drop in CPU, you could assume something was using those other resources and causing the drop; but when everything goes down, it just means that the system isn't getting used. The disk on the rg-sql01 system isn't spiking exactly the same way as the memory & CPU, so there's a good chance (chance, mind you) that any performance issues might not be disk related. However, notice that huge jump at the beginning of the month. Several disks were used more than they were for the rest of the month. That's the load on the server. What about the load on SQL Server itself? Next time.
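    If you want to cross-check the CPU history that SQL Monitor is charting here without the tool, a commonly used DMV query (a sketch, not something from the post) pulls roughly the last few hours of processor use from the scheduler monitor ring buffer, split between SQL Server and everything else on the box:

        -- Recent CPU utilization as recorded by SQL Server (approx. the last 4 hours).
        SELECT  DATEADD(ms, rb.[timestamp] - si.ms_ticks, GETDATE()) AS sample_time,
                rb.record.value('(./Record/SchedulerMonitorEvent/SystemHealth/SQLProcessUtilization)[1]', 'int') AS sql_cpu_pct,
                rb.record.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int') AS idle_cpu_pct
        FROM    (SELECT [timestamp], CONVERT(xml, record) AS record
                 FROM sys.dm_os_ring_buffers
                 WHERE ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR'
                   AND record LIKE N'%<SystemHealth>%') AS rb
        CROSS JOIN sys.dm_os_sys_info AS si
        ORDER BY sample_time DESC;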

    Read the article

  • Manage and Monitor Identity Ranges in SQL Server Transactional Replication

    - by Yaniv Etrogi
    Problem
    When using transactional replication to replicate data in a one-way topology from a publisher to read-only subscriber(s), there is no need to manage identity ranges. However, when using transactional replication to replicate data in a two-way replication topology - between two or more servers - there is a need to manage identity ranges in order to prevent a situation where an INSERT command fails on a PRIMARY KEY violation error, because the replicated row being inserted has a value for the identity column which already exists at the destination database.
    Solution
    There are two ways to address this situation:
    - Assign a range of identity values to each server.
    - Work with parallel identity values.
    The first method requires some maintenance while the second method does not, and so the scripts provided with this article are very useful for anyone using the first method. I will explore this in more detail later in the article.
    In the first solution, set server1 to work in the range of 1 to 1,000,000,000 and server2 to work in the range of 1,000,000,001 to 2,000,000,000. The ranges are set and defined using the DBCC CHECKIDENT command, and when the ranges in this example are well maintained you meet the goal of preventing the INSERT commands from failing due to a PRIMARY KEY violation. The first insert at server1 will get the identity value of 1, the second insert will get the value of 2 and so on, while on server2 the first insert will get the identity value of 1000000001, the second insert 1000000002 and so on, thus avoiding a conflict. Be aware that when a row is inserted, the identity value (seed) is generated as part of the insert command at each server and the inserted row is replicated. The replicated row includes the identity column's value, so the data remains consistent across all servers, but you will be able to tell on what server the original insert took place due to the range that the identity value belongs to.
    In the second solution you do not manage ranges but enforce a situation in which identity values can never overlap, by setting the first identity value (seed) and the increment property one time only, during the CREATE TABLE command of each table. So a table on server1 looks like this:
    CREATE TABLE T1 (
      c1 int NOT NULL IDENTITY(1, 5) PRIMARY KEY CLUSTERED
     ,c2 int NOT NULL
    );
    And a table on server2 looks like this:
    CREATE TABLE T1 (
      c1 int NOT NULL IDENTITY(2, 5) PRIMARY KEY CLUSTERED
     ,c2 int NOT NULL
    );
    When these two tables are inserted into, the resulting identity values look like this:
    Server1: 1, 6, 11, 16, 21, 26…
    Server2: 2, 7, 12, 17, 22, 27…
    This assures no identity value conflicts while leaving room for 3 additional servers to participate in this same environment. You can go up to 9 servers using this method by setting an increment value of 9 instead of the 5 I used in this example. Continues…
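    As a rough illustration of the first method, the range assignment itself comes down to reseeding each server's copy of the table inside its allotted range (the table name here is hypothetical and the reseed values are the example ranges from above; note that the next generated value is the reseed value plus the increment):

        -- On server1 (range 1 to 1,000,000,000):
        DBCC CHECKIDENT ('dbo.SomeReplicatedTable', RESEED, 1);
        -- On server2 (range 1,000,000,001 to 2,000,000,000):
        DBCC CHECKIDENT ('dbo.SomeReplicatedTable', RESEED, 1000000001);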

    Read the article

  • SCOM, 90 Days In, II. Noise.

    - by merrillaldrich
    Once you get past the basic architecture of a SCOM implementation, and build the servers, and so on, the first real problem is … well, noise. Suddenly (depending on how you deploy) the system will reach out, like marching army ants or some very clever cybernetic spider, and find, and then proceed to yell at you about, every single problem on every server you didn’t know you had. That, of course, is the point. Still, a tool like this is not useful if it doesn’t surface the real problems from the...(read more)

    Read the article

  • How do I install Cacti 8.7i with PIA 3.1 (Plug In Architecture) for 12.04 Server LTS?

    - by Pininy
    I am planning to upgrade my current 9.04 server to 12.04 LTS server. I've successfully backed up and restored my Cacti from version 8.7d to the 8.7i that comes with the distribution. However, the plugin patch installation for PIA 3.1 is not working on Ubuntu. Can you assist? Or is there a way to include Cacti 8.8a, which is a stable version that comes with PIA 3.1 preinstalled? regards, thomas

    Read the article

  • A Warning to Those Using sys.dm_exec_query_stats

    - by Adam Machanic
    The sys.dm_exec_query_stats view is one of my favorite DMVs. It has replaced a large chunk of what I used to use SQL Trace for--pulling metrics about what queries are running and how often--and it makes this kind of data collection painless and automatic. What's not to love? But use cases for the view are a topic for another post. Today I want to quickly point out an inconsistency. If you're using this view heavily, as I am, you should know that in some cases your queries will not get a row. One...(read more)

    Read the article

  • System monitor network speed monitor not working for LAN but works for my Wi-fi

    - by Pavak
    I'm on Ubuntu 13.10. I generally use wi-fi to connect to the internet, but yesterday my wi-fi router developed a problem and it is now away for warranty service, so temporarily I'm using the LAN. System Monitor displayed the network speed correctly when I was on wi-fi, but now it's not showing any network speed at all. I checked the preferences option but couldn't find a way to fix it. I also checked "ksysguard" (KDE's system monitor) and conky; none of them is working. How can I solve this? I'm attaching a screenshot to clarify the problem.

    Read the article

  • Check/Monitor Amount of SQL Queries per Hour

    - by deathlock
    My website is hosted on shared hosting and I'd like to know how many SQL queries it runs per hour. I navigated through cPanel and found nothing that checks or monitors the number of SQL queries per hour. I asked my host and they said it is not possible to do manually. However, I found this http://forum.powweb.com/archive/index.php/t-49937.html and another one on Stack Overflow: http://stackoverflow.com/questions/9842094/sql-how-can-i-get-the-number-of-executed-queries-per-database-or-hour-or And since these exist, I assume that it is actually possible. The problem is I can't execute that in my phpMyAdmin. Can someone here guide me through the process?
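    For what it's worth, the linked threads essentially boil down to sampling MySQL's cumulative status counters and taking the difference over time (this assumes the shared host lets you run the statement, for example from phpMyAdmin's SQL tab):

        -- Total statements executed since the server started; run it now and again
        -- an hour later, then subtract the two values to get queries per hour.
        SHOW GLOBAL STATUS LIKE 'Questions';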

    Read the article

  • Ask HTG: LAN-to-LAN Messaging in Windows 7, Multi-Monitor Full Screen Video, and Alternative File Copiers

    - by Jason Fitzpatrick
    Once a week we round up some of the answers we’ve sent out to reader questions and share them with everyone. This week we’re looking at inter-LAN messaging with Windows 7, multi-monitor full screen video, and alternative Windows file copiers.

    Read the article

  • Quickly compute added and removed lines

    - by Philippe Marschall
    I'm trying to compare two text files. I want to compute how many lines were added and removed - basically what git diff --stat does. Bonus points for not having to store the entire file contents in memory. The approach I currently have in mind is:
    - read each line of the old file
    - compute a hash (probably MD5 or SHA-1) for each line
    - store the hashes in a set
    - do the same for each line in the new file
    - every hash from the old file set that's missing in the new file set was removed
    - every hash from the new file set that's missing in the old file set was added
    I'll probably want to exclude empty and all-whitespace lines. There is a small issue with duplicated lines. This can either be solved by additionally storing how often a hash appears, or by comparing the number of lines in the old and new files and adjusting either the added or removed count so that the numbers add up. Do you see room for improvement or a better approach?
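    One way the approach sketched above could look (a rough illustration in C#, not from the question; it keeps per-line hash counts rather than a plain set so duplicated lines are handled, and it never holds the full file contents in memory):

        // Count added and removed lines between two files by comparing per-line hashes.
        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Security.Cryptography;
        using System.Text;

        static class LineDiffStats
        {
            // Stream the file line by line and count how often each line hash occurs.
            static Dictionary<string, int> HashCounts(string path)
            {
                var counts = new Dictionary<string, int>();
                using (var sha1 = SHA1.Create())
                foreach (var line in File.ReadLines(path))
                {
                    if (string.IsNullOrWhiteSpace(line)) continue;   // skip empty/whitespace lines
                    var hash = Convert.ToBase64String(
                        sha1.ComputeHash(Encoding.UTF8.GetBytes(line)));
                    counts[hash] = counts.TryGetValue(hash, out var n) ? n + 1 : 1;
                }
                return counts;
            }

            public static (int added, int removed) Compare(string oldPath, string newPath)
            {
                var oldCounts = HashCounts(oldPath);
                var newCounts = HashCounts(newPath);
                int added = 0, removed = 0;

                foreach (var kv in newCounts)
                {
                    oldCounts.TryGetValue(kv.Key, out var oldN);
                    if (kv.Value > oldN) added += kv.Value - oldN;       // only in the new file
                }
                foreach (var kv in oldCounts)
                {
                    newCounts.TryGetValue(kv.Key, out var newN);
                    if (kv.Value > newN) removed += kv.Value - newN;     // only in the old file
                }
                return (added, removed);
            }

            public static void Main(string[] args)
            {
                var (added, removed) = Compare(args[0], args[1]);
                Console.WriteLine($"+{added} lines added, -{removed} lines removed");
            }
        }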

    Read the article

  • How to monitor outgoing server activity to detect malware?

    - by ted.strauss
    I have a website that has previously been a victim of malware. I restored the site from an old backup and have made every effort to lock down the server. I have no way to be absolutely certain that the backup I used is clean, and I'm worried that this malware may reappear. I would like to use a tool to monitor outgoing port activity to detect signs of malware activity. Unfortunately I'm using a server host that does not give me shell access, so I need a tool that can be installed via FTP and used via the browser. My site is Joomla :( so a Joomla extension with this capability would work, but I haven't found one yet. Any suggestions? Many thanks.

    Read the article

  • SQLSat65, Great Perf Counters Poster from Quest

    - by merrillaldrich
    I was fortunate to be able to attend the Vancouver BC SQLSaturday this past weekend, and it was excellent! Great sessions, good facility, well attended. Nice work, and a huge thank you to the volunteers that made that happen. One side perk: I got a copy of this terrific performance counters poster from Quest, which you can download as a PDF for free. Very handy, especially as a teaching tool. I'm using it for my SCOM MP work. Check it out....(read more)

    Read the article

  • Visualize Disaster

    - by merrillaldrich
    Or, How Mirroring Off-Site Saved my #Bacon My company does most things right. Our management is very supportive, listens and generally funds the technology that makes sense for the best interest of the organization. We have good redundancy, HA and disaster recovery in place that fit our objectives. Still, as they say, bad things can happen to good people. This weekend we did have an outage despite our best efforts, and that’s the reason for this post. It went pretty well for my team, all things considered,...(read more)

    Read the article

  • Microsoft SQL Server 2008 R2 Administration Cookbook - Book and eBook expected June 2011. Pre-order now!

    - by ssqa.net
    Over 85 practical recipes for administering a high-performance SQL Server 2008 R2 system. Book and eBook expected June 2011. Pre-order now! Multi-format orders get free access on PacktLib. This practical cookbook will show you the advanced administration techniques for managing and administering a scalable and high-performance SQL Server 2008 R2 system. It contains over 85 practical, task-based, and immediately useable recipes covering a wide range of advanced administration techniques for administering...(read more)

    Read the article

  • High Availability for IaaS, PaaS and SaaS in the Cloud

    - by BuckWoody
    Outages, natural disasters and unforeseen events have proved that even in a distributed architecture, you need to plan for High Availability (HA). In this entry I'll explain a few considerations for HA within Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). In a separate post I'll talk more about Disaster Recovery (DR), since each paradigm has a different way to handle that.
    Planning for HA in IaaS
    IaaS involves Virtual Machines - so in effect, an HA strategy here takes on many of the same characteristics as it would on-premises. The primary difference is that the vendor controls the hardware, so you need to verify what they do for things like local redundancy and so on from the hardware perspective. As far as what you can control and plan for, the primary factors fall into three areas: multiple instances, geographical dispersion and task-switching.
    In almost every cloud vendor I've studied, to ensure your application will be protected by any level of HA, you need to have at least two of the Instances (VMs) running. This makes sense, but you might assume that the vendor just takes care of that for you - they don't. If a single VM goes down (for whatever reason) then access to it is lost. Depending on multiple factors, you might be able to recover the data, but you should assume that you can't. You should keep a sync to another location (perhaps the vendor's storage system in another geographic datacenter or to a local location) to ensure you can continue to serve your clients.
    You'll also need to host the same VMs in another geographical location. Everything from a vendor outage to a network path problem could prevent your users from reaching the system, so you need to have multiple locations to handle this. This means that you'll have to figure out how to manage state between the geos. If the system goes down in the middle of a transaction, you need to figure out what part of the process the system was in, and then re-create or transfer that state to the second set of systems. If you didn't write the software yourself, this is non-trivial. You'll also need a manual or automatic process to detect the failure and re-route the traffic to your secondary location. You could flip a DNS entry (if your application can tolerate that) or invoke another process to alias the first system to the second, such as load-balancing and so on. There are many options, but all of them involve coding the state into the application layer. If you've simply moved a stateful application to VMs, you may not be able to easily implement an HA solution.
    Planning for HA in PaaS
    Implementing HA in PaaS is a bit simpler, since it's built on the concept of stateless application deployment. Once again, you need at least two copies of each element in the solution (web roles, worker roles, etc.) to remain available in a single datacenter. Also, you need to deploy the application again in a separate geo, but the advantage here is that you could work out a "shared storage" model such that state is auto-balanced across the world. In fact, you don't have to maintain a "DR" site; the alternate location can be live and serving clients, and only take on extra load if the other site is not available. In Windows Azure, you can use the Traffic Manager service to route the requests as a type of auto-balancer. Even with these benefits, I recommend a second backup of storage in another geographic location. Storage is inexpensive, and that second copy can be used for not only HA but DR.
    Planning for HA in SaaS
    In Software-as-a-Service (such as Office 365, or Hadoop in Windows Azure) you have far less control over the HA solution, although you still maintain the responsibility to ensure you have it. Since each SaaS is different, check with the vendor on the solution for HA - and make sure you understand what they do and what you are responsible for. They may have no HA for that solution, or pin it to a particular geo, or perhaps they have massive HA built in with automatic load balancing (which is often the case).
    All of these options (with the exception of SaaS) involve higher costs for the design. Do not sacrifice reliability for cost - that will always cost you more in the end. Build in the redundancy and HA at the very outset of the project - if you try to tack it on later in the process the business will push back and potentially not implement HA.
    References: http://www.bing.com/search?q=windows+azure+High+Availability (each type of implementation is different, so I'm routing you to a search on the topic - look for the "Patterns and Practices" results for the area in Azure you're interested in)

    Read the article

  • Keeping rackspace vserver alive

    - by mit
    It appears to me that Rackspace somehow freezes cloud VMs after some idle time. This means the first request to a PHP page takes much longer to respond than the subsequent requests. In some cases this is fine, in other cases it's not acceptable. I am currently querying the machine with wget from a different host to keep it "alive", but I wonder what frequency would be necessary. Does anyone know the time period after which they send a VM to "sleep"? I would guess it is a few minutes. EDIT: There is absolutely no caching involved on the PHP site. It just recently moved from another vhost and there was never such latency on the first request.
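    If the workaround is to keep pinging the site, a scheduled fetch is the usual way to do it; a sketch of a cron entry on the "different host" (the 5-minute interval and the URL are placeholders - tune the interval to whatever idle timeout you observe):

        # Fetch the page every 5 minutes and discard the output.
        */5 * * * * wget -q -O /dev/null http://www.example.com/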

    Read the article
