Search Results

Search found 18854 results on 755 pages for 'mr null'.


  • SQL SERVER – 2008 – Missing Index Script – Download

    - by pinaldave
    Download Missing Index Script with Unused Index Script Performance tuning is quite interesting, and indexes play a vital role in it. A proper index can improve performance and a bad index can hamper it. Here is the script from my script bank which I use to identify missing indexes on any database. Please note that you should not create all the missing indexes this script suggests. This is just for guidance. You should not create more than 5-10 indexes per table. Additionally, this script sometimes does not give accurate information, so use your common sense. Anyway, the script is a good starting point. You should pay attention to Avg_Estimated_Impact when you are going to create an index. The index creation script is also provided in the last column. Download Missing Index Script with Unused Index Script -- Missing Index Script -- Original Author: Pinal Dave (C) 2011 SELECT TOP 25 dm_mid.database_id AS DatabaseID, dm_migs.avg_user_impact*(dm_migs.user_seeks+dm_migs.user_scans) Avg_Estimated_Impact, dm_migs.last_user_seek AS Last_User_Seek, OBJECT_NAME(dm_mid.OBJECT_ID,dm_mid.database_id) AS [TableName], 'CREATE INDEX [IX_' + OBJECT_NAME(dm_mid.OBJECT_ID,dm_mid.database_id) + '_' + REPLACE(REPLACE(REPLACE(ISNULL(dm_mid.equality_columns,''),', ','_'),'[',''),']','') + CASE WHEN dm_mid.equality_columns IS NOT NULL AND dm_mid.inequality_columns IS NOT NULL THEN '_' ELSE '' END + REPLACE(REPLACE(REPLACE(ISNULL(dm_mid.inequality_columns,''),', ','_'),'[',''),']','') + ']' + ' ON ' + dm_mid.statement + ' (' + ISNULL (dm_mid.equality_columns,'') + CASE WHEN dm_mid.equality_columns IS NOT NULL AND dm_mid.inequality_columns IS NOT NULL THEN ',' ELSE '' END + ISNULL (dm_mid.inequality_columns, '') + ')' + ISNULL (' INCLUDE (' + dm_mid.included_columns + ')', '') AS Create_Statement FROM sys.dm_db_missing_index_groups dm_mig INNER JOIN sys.dm_db_missing_index_group_stats dm_migs ON dm_migs.group_handle = dm_mig.index_group_handle INNER JOIN sys.dm_db_missing_index_details dm_mid ON dm_mig.index_handle = dm_mid.index_handle WHERE dm_mid.database_ID = DB_ID() ORDER BY Avg_Estimated_Impact DESC GO Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Download, SQL Index, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
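    The post also refers to a companion Unused Index Script, which is not reproduced in this excerpt. As a rough, hypothetical sketch of the same DMV-based idea (this is not Pinal's downloadable script), the query below lists nonclustered indexes alongside their usage counters from sys.dm_db_index_usage_stats; indexes showing no seeks, scans or lookups but plenty of updates are candidates for review before anything is dropped.

        -- Unused index sketch (illustration only, not the downloadable script)
        -- Usage counters reset at instance restart, so check uptime before trusting them.
        SELECT OBJECT_NAME(i.object_id) AS TableName,
               i.name AS IndexName,
               ius.user_seeks, ius.user_scans, ius.user_lookups, ius.user_updates
        FROM sys.indexes AS i
        LEFT JOIN sys.dm_db_index_usage_stats AS ius
               ON ius.object_id = i.object_id
              AND ius.index_id = i.index_id
              AND ius.database_id = DB_ID()
        WHERE i.type_desc = 'NONCLUSTERED'
          AND i.is_primary_key = 0
          AND i.is_unique_constraint = 0
          AND OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
        ORDER BY ius.user_updates DESC;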

    Read the article

  • How can I disable Lumension Sanctuary?

    - by Mr. Flibble
    I'm an administrator on my computer but I can't work out how to uninstall/disable Lumension Sanctuary. There is no Uninstall button in the uninstall programs list. I've disabled the service (Sanctuary command and control) but it's had no effect. Any ideas? I'm on Windows XP BTW.

    Read the article

  • How to change cpufreq settings in Kubuntu

    - by Mr Woody
    I have been using Kubuntu, and I would like to change the cpufreq settings. My understanding is that there is no applet for that, and I would have to do it with a script. So I run a command like this: sudo cpufreq-set -g userspace -c 0 -d 800Mhz -u 1200Mhz and when I type cpufreq-info, I get cpufrequtils 007: cpufreq-info (C) Dominik Brodowski 2004-2009 Report errors and bugs to [email protected], please. analyzing CPU 0: driver: acpi-cpufreq CPUs which run at the same hardware frequency: 0 1 CPUs which need to have their frequency coordinated by software: 0 maximum transition latency: 10.0 us. hardware limits: 800 MHz - 2.50 GHz available frequency steps: 2.50 GHz, 2.50 GHz, 2.00 GHz, 1.60 GHz, 1.20 GHz, 800 MHz available cpufreq governors: conservative, ondemand, userspace, powersave, performance current policy: frequency should be within 800 MHz and 1.20 GHz. The governor "userspace" may decide which speed to use within this range. current CPU frequency is 1.20 GHz. cpufreq stats: 2.50 GHz:70.06%, 2.50 GHz:0.97%, 2.00 GHz:4.85%, 1.60 GHz:0.35%, 1.20 GHz:2.89%, 800 MHz:20.88% (193873) analyzing CPU 1: driver: acpi-cpufreq CPUs which run at the same hardware frequency: 0 1 CPUs which need to have their frequency coordinated by software: 1 maximum transition latency: 10.0 us. hardware limits: 800 MHz - 2.50 GHz available frequency steps: 2.50 GHz, 2.50 GHz, 2.00 GHz, 1.60 GHz, 1.20 GHz, 800 MHz available cpufreq governors: conservative, ondemand, userspace, powersave, performance current policy: frequency should be within 2.00 GHz and 2.00 GHz. The governor "performance" may decide which speed to use within this range. current CPU frequency is 2.00 GHz. cpufreq stats: 2.50 GHz:83.43%, 2.50 GHz:1.03%, 2.00 GHz:4.28%, 1.60 GHz:0.01%, 1.20 GHz:1.74%, 800 MHz:9.50% (3208) which shows that everything worked well (on cpu 0). The problem is that if I run cpufreq-info again after few minutes I get cpufrequtils 007: cpufreq-info (C) Dominik Brodowski 2004-2009 Report errors and bugs to [email protected], please. analyzing CPU 0: driver: acpi-cpufreq CPUs which run at the same hardware frequency: 0 1 CPUs which need to have their frequency coordinated by software: 0 maximum transition latency: 10.0 us. hardware limits: 800 MHz - 2.50 GHz available frequency steps: 2.50 GHz, 2.50 GHz, 2.00 GHz, 1.60 GHz, 1.20 GHz, 800 MHz available cpufreq governors: conservative, ondemand, userspace, powersave, performance current policy: frequency should be within 800 MHz and 800 MHz. The governor "performance" may decide which speed to use within this range. current CPU frequency is 800 MHz. cpufreq stats: 2.50 GHz:69.73%, 2.50 GHz:0.97%, 2.00 GHz:4.83%, 1.60 GHz:0.35%, 1.20 GHz:2.92%, 800 MHz:21.20% (193880) analyzing CPU 1: driver: acpi-cpufreq CPUs which run at the same hardware frequency: 0 1 CPUs which need to have their frequency coordinated by software: 1 maximum transition latency: 10.0 us. hardware limits: 800 MHz - 2.50 GHz available frequency steps: 2.50 GHz, 2.50 GHz, 2.00 GHz, 1.60 GHz, 1.20 GHz, 800 MHz available cpufreq governors: conservative, ondemand, userspace, powersave, performance current policy: frequency should be within 800 MHz and 800 MHz. The governor "performance" may decide which speed to use within this range. current CPU frequency is 800 MHz. cpufreq stats: 2.50 GHz:82.94%, 2.50 GHz:1.03%, 2.00 GHz:4.33%, 1.60 GHz:0.01%, 1.20 GHz:1.73%, 800 MHz:9.96% (3215) so it looks like some other process changed the settings. Does anyone know how to fix this? 
I also tried many different settings, but I get similar behavior.

    Read the article

  • Automating SSRS 2008 R2 report snapshots and running reports with the most recent data

    - by Mr Shoubs
    I would like to automate a report snapshot, but there is only an option to take a snapshot in the Report History tab. All the resources I've found suggest I need to go to processing options and select "Render this report from a snapshot". But I don't want to do that - when I go to a report, I want to get the most recent data. However, daily at midnight I'd like to take a snapshot and store it in the history, in case I want to compare the reports as of midnight for the last few weeks. Or am I doing this wrong, and do I have to create a subscription instead? Note: this is for an auditing database and it has way too much data to query a range of more than 1 day - reports are restricted as such. (1 day has over 1 million rows on its own.)

    Read the article

  • SQL Spatial: Getting “nearest” calculations working properly

    - by Rob Farley
    If you’ve ever done spatial work with SQL Server, I hope you’ve come across the ‘nearest’ problem. You have five thousand stores around the world, and you want to identify the one that’s closest to a particular place. Maybe you want the store closest to the LobsterPot office in Adelaide, at -34.925806, 138.605073. Or our new US office, at 42.524929, -87.858244. Or maybe both! You know how to do this. You don’t want to use an aggregate MIN or MAX, because you want the whole row, telling you which store it is. You want to use TOP, and if you want to find the closest store for multiple locations, you use APPLY. Let’s do this (but I’m going to use addresses in AdventureWorks2012, as I don’t have a list of stores). Oh, and before I do, let’s make sure we have a spatial index in place. I’m going to use the default options. CREATE SPATIAL INDEX spin_Address ON Person.Address(SpatialLocation); And my actual query: WITH MyLocations AS (SELECT * FROM (VALUES ('LobsterPot Adelaide', geography::Point(-34.925806, 138.605073, 4326)),                        ('LobsterPot USA', geography::Point(42.524929, -87.858244, 4326))                ) t (Name, Geo)) SELECT l.Name, a.AddressLine1, a.City, s.Name AS [State], c.Name AS Country FROM MyLocations AS l CROSS APPLY (     SELECT TOP (1) *     FROM Person.Address AS ad     ORDER BY l.Geo.STDistance(ad.SpatialLocation)     ) AS a JOIN Person.StateProvince AS s     ON s.StateProvinceID = a.StateProvinceID JOIN Person.CountryRegion AS c     ON c.CountryRegionCode = s.CountryRegionCode ; Great! This is definitely working. I know both those City locations, even if the AddressLine1s don’t quite ring a bell. I’m sure I’ll be able to find them next time I’m in the area. But of course what I’m concerned about from a querying perspective is what’s happened behind the scenes – the execution plan. This isn’t pretty. It’s not using my index. It’s sucking every row out of the Address table TWICE (which sucks), and then it’s sorting them by the distance to find the smallest one. It’s not pretty, and it takes a while. Mind you, I do like the fact that it saw an indexed view it could use for the State and Country details – that’s pretty neat. But yeah – users of my nifty website aren’t going to like how long that query takes. The frustrating thing is that I know that I can use the index to find locations that are within a particular distance of my locations quite easily, and Microsoft recommends this for solving the ‘nearest’ problem, as described at http://msdn.microsoft.com/en-au/library/ff929109.aspx. Now, in the first example on this page, it says that the query there will use the spatial index. But when I run it on my machine, it does nothing of the sort. I’m not particularly impressed. But what we see here is that parallelism has kicked in. In my scenario, it’s split the data up into 4 threads, but it’s still slow, and not using my index. It’s disappointing. But I can persuade it with hints! If I tell it to FORCESEEK, or use my index, or even turn off the parallelism with MAXDOP 1, then I get the index being used, and it’s a thing of beauty! Part of the plan is here: It’s massive, and it’s ugly, and it uses a TVF… but it’s quick. The way it works is to hook into the GeodeticTessellation function, which is essentially finds where the point is, and works out through the spatial index cells that surround it. This then provides a framework to be able to see into the spatial index for the items we want. 
You can read more about it at http://msdn.microsoft.com/en-us/library/bb895265.aspx#tessellation – including a bunch of pretty diagrams. One of those times when we have a much more complex-looking plan, but just because of the good that’s going on. This tessellation stuff was introduced in SQL Server 2012. But my query isn’t using it. When I try to use the FORCESEEK hint on the Person.Address table, I get the friendly error: Msg 8622, Level 16, State 1, Line 1 Query processor could not produce a query plan because of the hints defined in this query. Resubmit the query without specifying any hints and without using SET FORCEPLAN. And I’m almost tempted to just give up and move back to the old method of checking increasingly large circles around my location. After all, I can even leverage multiple OUTER APPLY clauses just like I did in my recent Lookup post. WITH MyLocations AS (SELECT * FROM (VALUES ('LobsterPot Adelaide', geography::Point(-34.925806, 138.605073, 4326)),                        ('LobsterPot USA', geography::Point(42.524929, -87.858244, 4326))                ) t (Name, Geo)) SELECT     l.Name,     COALESCE(a1.AddressLine1,a2.AddressLine1,a3.AddressLine1),     COALESCE(a1.City,a2.City,a3.City),     s.Name AS [State],     c.Name AS Country FROM MyLocations AS l OUTER APPLY (     SELECT TOP (1) *     FROM Person.Address AS ad     WHERE l.Geo.STDistance(ad.SpatialLocation) < 1000     ORDER BY l.Geo.STDistance(ad.SpatialLocation)     ) AS a1 OUTER APPLY (     SELECT TOP (1) *     FROM Person.Address AS ad     WHERE l.Geo.STDistance(ad.SpatialLocation) < 5000     AND a1.AddressID IS NULL     ORDER BY l.Geo.STDistance(ad.SpatialLocation)     ) AS a2 OUTER APPLY (     SELECT TOP (1) *     FROM Person.Address AS ad     WHERE l.Geo.STDistance(ad.SpatialLocation) < 20000     AND a2.AddressID IS NULL     ORDER BY l.Geo.STDistance(ad.SpatialLocation)     ) AS a3 JOIN Person.StateProvince AS s     ON s.StateProvinceID = COALESCE(a1.StateProvinceID,a2.StateProvinceID,a3.StateProvinceID) JOIN Person.CountryRegion AS c     ON c.CountryRegionCode = s.CountryRegionCode ; But this isn’t friendly-looking at all, and I’d use the method recommended by Isaac Kunen, who uses a table of numbers for the expanding circles. It feels old-school though, when I’m dealing with SQL 2012 (and later) versions. So why isn’t my query doing what it’s supposed to? Remember the query... WITH MyLocations AS (SELECT * FROM (VALUES ('LobsterPot Adelaide', geography::Point(-34.925806, 138.605073, 4326)),                        ('LobsterPot USA', geography::Point(42.524929, -87.858244, 4326))                ) t (Name, Geo)) SELECT l.Name, a.AddressLine1, a.City, s.Name AS [State], c.Name AS Country FROM MyLocations AS l CROSS APPLY (     SELECT TOP (1) *     FROM Person.Address AS ad     ORDER BY l.Geo.STDistance(ad.SpatialLocation)     ) AS a JOIN Person.StateProvince AS s     ON s.StateProvinceID = a.StateProvinceID JOIN Person.CountryRegion AS c     ON c.CountryRegionCode = s.CountryRegionCode ; Well, I just wasn’t reading http://msdn.microsoft.com/en-us/library/ff929109.aspx properly. The following requirements must be met for a Nearest Neighbor query to use a spatial index: A spatial index must be present on one of the spatial columns and the STDistance() method must use that column in the WHERE and ORDER BY clauses. The TOP clause cannot contain a PERCENT statement. The WHERE clause must contain a STDistance() method. 
If there are multiple predicates in the WHERE clause then the predicate containing STDistance() method must be connected by an AND conjunction to the other predicates. The STDistance() method cannot be in an optional part of the WHERE clause. The first expression in the ORDER BY clause must use the STDistance() method. Sort order for the first STDistance() expression in the ORDER BY clause must be ASC. All the rows for which STDistance returns NULL must be filtered out. Let’s start from the top. 1. Needs a spatial index on one of the columns that’s in the STDistance call. Yup, got the index. 2. No ‘PERCENT’. Yeah, I don’t have that. 3. The WHERE clause needs to use STDistance(). Ok, but I’m not filtering, so that should be fine. 4. Yeah, I don’t have multiple predicates. 5. The first expression in the ORDER BY is my distance, that’s fine. 6. Sort order is ASC, because otherwise we’d be starting with the ones that are furthest away, and that’s tricky. 7. All the rows for which STDistance returns NULL must be filtered out. But I don’t have any NULL values, so that shouldn’t affect me either. ...but something’s wrong. I do actually need to satisfy #3. And I do need to make sure #7 is being handled properly, because there are some situations (eg, differing SRIDs) where STDistance can return NULL. It says so at http://msdn.microsoft.com/en-us/library/bb933808.aspx – “STDistance() always returns null if the spatial reference IDs (SRIDs) of the geography instances do not match.” So if I simply make sure that I’m filtering out the rows that return NULL… …then it’s blindingly fast, I get the right results, and I’ve got the complex-but-brilliant plan that I wanted. It just wasn’t overly intuitive, despite being documented. @rob_farley
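    The excerpt above stops short of showing the corrected query. As a sketch of what the described fix might look like, the only change to the original query is the added IS NOT NULL predicate, which satisfies both requirement 3 and requirement 7 from the list:

        WITH MyLocations AS
            (SELECT * FROM (VALUES ('LobsterPot Adelaide', geography::Point(-34.925806, 138.605073, 4326)),
                                   ('LobsterPot USA', geography::Point(42.524929, -87.858244, 4326))
                           ) t (Name, Geo))
        SELECT l.Name, a.AddressLine1, a.City, s.Name AS [State], c.Name AS Country
        FROM MyLocations AS l
        CROSS APPLY (
            SELECT TOP (1) *
            FROM Person.Address AS ad
            -- filtering out NULL distances lets the optimizer use the spatial index
            WHERE l.Geo.STDistance(ad.SpatialLocation) IS NOT NULL
            ORDER BY l.Geo.STDistance(ad.SpatialLocation)
            ) AS a
        JOIN Person.StateProvince AS s
            ON s.StateProvinceID = a.StateProvinceID
        JOIN Person.CountryRegion AS c
            ON c.CountryRegionCode = s.CountryRegionCode;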

    Read the article

  • C#/.NET Little Wonders: The EventHandler and EventHandler<TEventArgs> delegates

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here. In the last two weeks, we examined the Action family of delegates (and delegates in general), and the Func family of delegates and how they can be used to support generic, reusable algorithms and classes. So this week, we are going to look at a handy pair of delegates that can be used to eliminate the need for defining custom delegates when creating events: the EventHandler and EventHandler<TEventArgs> delegates. Events and delegates Before we begin, let’s quickly consider events in .NET.  According to the MSDN: An event in C# is a way for a class to provide notifications to clients of that class when some interesting thing happens to an object. So, basically, you can create an event in a type so that users of that type can subscribe to notifications of things of interest.  How is this different than some of the delegate programming that we talked about in the last two weeks?  Well, you can think of an event as a special access modifier on a delegate.  Some differences between the two are: Events are a special access case of delegates They behave much like delegates instances inside the type they are declared in, but outside of that type they can only be (un)subscribed to. Events can specify add/remove behavior explicitly If you want to do additional work when someone subscribes or unsubscribes to an event, you can specify the add and remove actions explicitly. Events have access modifiers, but these only specify the access level of those who can (un)subscribe A public event, for example, means anyone can (un)subscribe, but it does not mean that anyone can raise (invoke) the event directly.  Events can only be raised by the type that contains them In contrast, if a delegate is visible, it can be invoked outside of the object (not even in a sub-class!). Events tend to be for notifications only, and should be treated as optional Semantically speaking, events typically don’t perform work on the the class directly, but tend to just notify subscribers when something of note occurs. My basic rule-of-thumb is that if you are just wanting to notify any listeners (who may or may not care) that something has happened, use an event.  However, if you want the caller to provide some function to perform to direct the class about how it should perform work, make it a delegate. Declaring events using custom delegates To declare an event in a type, we simply use the event keyword and specify its delegate type.  For example, let’s say you wanted to create a new TimeOfDayTimer that triggers at a given time of the day (as opposed to on an interval).  We could write something like this: 1: public delegate void TimeOfDayHandler(object source, ElapsedEventArgs e); 2:  3: // A timer that will fire at time of day each day. 4: public class TimeOfDayTimer : IDisposable 5: { 6: // Event that is triggered at time of day. 7: public event TimeOfDayHandler Elapsed; 8:  9: // ... 10: } The first thing to note is that the event is a delegate type, which tells us what types of methods may subscribe to it.  
The second thing to note is the signature of the event handler delegate, according to the MSDN: The standard signature of an event handler delegate defines a method that does not return a value, whose first parameter is of type Object and refers to the instance that raises the event, and whose second parameter is derived from type EventArgs and holds the event data. If the event does not generate event data, the second parameter is simply an instance of EventArgs. Otherwise, the second parameter is a custom type derived from EventArgs and supplies any fields or properties needed to hold the event data. So, in a nutshell, the event handler delegates should return void and take two parameters: An object reference to the object that raised the event. An EventArgs (or a subclass of EventArgs) reference to event specific information. Even if your event has no additional information to provide, you are still expected to provide an EventArgs instance.  In this case, feel free to pass the EventArgs.Empty singleton instead of creating new instances of EventArgs (to avoid generating unneeded memory garbage). The EventHandler delegate Because many events have no additional information to pass, and thus do not require custom EventArgs, the signature of the delegates for subscribing to these events is typically: 1: // always takes an object and an EventArgs reference 2: public delegate void EventHandler(object sender, EventArgs e) It would be insane to recreate this delegate for every class that had a basic event with no additional event data, so there already exists a delegate for you called EventHandler that has this very definition!  Feel free to use it to define any events which supply no additional event information: 1: public class Cache 2: { 3: // event that is raised whenever the cache performs a cleanup 4: public event EventHandler OnCleanup; 5:  6: // ... 7: } This will handle any event with the standard EventArgs (no additional information).  But what of events that do need to supply additional information?  Does that mean we’re out of luck for subclasses of EventArgs?  That’s where the generic for of EventHandler comes into play… The generic EventHandler<TEventArgs> delegate Starting with the introduction of generics in .NET 2.0, we have a generic delegate called EventHandler<TEventArgs>.  Its signature is as follows: 1: public delegate void EventHandler<TEventArgs>(object sender, TEventArgs e) 2: where TEventArgs : EventArgs This is similar to EventHandler except it has been made generic to support the more general case.  Thus, it will work for any delegate where the first argument is an object (the sender) and the second argument is a class derived from EventArgs (the event data). For example, let’s say we wanted to create a message receiver, and we wanted it to have a few events such as OnConnected that will tell us when a connection is established (probably with no additional information) and OnMessageReceived that will tell us when a new message arrives (probably with a string for the new message text). 
So for OnMessageReceived, our MessageReceivedEventArgs might look like this: 1: public sealed class MessageReceivedEventArgs : EventArgs 2: { 3: public string Message { get; set; } 4: } And since OnConnected needs no event argument type defined, our class might look something like this: 1: public class MessageReceiver 2: { 3: // event that is called when the receiver connects with sender 4: public event EventHandler OnConnected; 5:  6: // event that is called when a new message is received. 7: public event EventHandler<MessageReceivedEventArgs> OnMessageReceived; 8:  9: // ... 10: } Notice, nowhere did we have to define a delegate to fit our event definition, the EventHandler and generic EventHandler<TEventArgs> delegates fit almost anything we’d need to do with events. Sidebar: Thread-safety and raising an event When the time comes to raise an event, we should always check to make sure there are subscribers, and then only raise the event if anyone is subscribed.  This is important because if no one is subscribed to the event, then the instance will be null and we will get a NullReferenceException if we attempt to raise the event. 1: // This protects against NullReferenceException... or does it? 2: if (OnMessageReceived != null) 3: { 4: OnMessageReceived(this, new MessageReceivedEventArgs(aMessage)); 5: } The above code seems to handle the null reference if no one is subscribed, but there’s a problem if this is being used in multi-threaded environments.  For example, assume we have thread A which is about to raise the event, and it checks and clears the null check and is about to raise the event.  However, before it can do that thread B unsubscribes to the event, which sets the delegate to null.  Now, when thread A attempts to raise the event, this causes the NullReferenceException that we were hoping to avoid! To counter this, the simplest best-practice method is to copy the event (just a multicast delegate) to a temporary local variable just before we raise it.  Since we are inside the class where this event is being raised, we can copy it to a local variable like this, and it will protect us from multi-threading since multicast delegates are immutable and assignments are atomic: 1: // always make copy of the event multi-cast delegate before checking 2: // for null to avoid race-condition between the null-check and raising it. 3: var handler = OnMessageReceived; 4: 5: if (handler != null) 6: { 7: handler(this, new MessageReceivedEventArgs(aMessage)); 8: } The very slight trade-off is that it’s possible a class may get an event after it unsubscribes in a multi-threaded environment, but this is a small risk and classes should be prepared for this possibility anyway.  For a more detailed discussion on this, check out this excellent Eric Lippert blog post on Events and Races. Summary Generic delegates give us a lot of power to make generic algorithms and classes, and the EventHandler delegate family gives us the flexibility to create events easily, without needing to redefine delegates over and over.  Use them whenever you need to define events with or without specialized EventArgs.   Tweet Technorati Tags: .NET, C#, CSharp, Little Wonders, Generics, Delegates, EventHandler

    Read the article

  • How do I prevent Pidgin from loading XMPP chat room history on join?

    - by Mr. Jefferson
    In Pidgin, when I join a chat room, it loads the chat room history. iChat on the Mac has a preference in the Accounts section to set a variable amount of history to load, or disable loading history entirely. How do I do the same thing in Pidgin? Is there a preference somewhere that I've missed? The object is to have the chat room start fresh each day, so I'd also be fine with disabling chat room history entirely on the server if that's possible. But I didn't see that option either when I looked in Server Admin on the server. I found this list of XMPP room types, and it looks like creating a Temporary Room might be the best way to do this, but I don't want to have to create the room manually every morning. Right now I've got Pidgin set to auto-join the room when I log in; I want it to do that without loading history. EDIT: The XMPP multi-user chat spec referenced above also contains a section on managing history. And I got this to work by pulling up the XMPP Console plugin in Pidgin, copying the <presence /> stanza it sent when I joined the room, closing the room, pasting the stanza into the console, adding the <history /> element and sending it. When I opened the room again, I had no history. But it all came back the next time! So: how do I get Pidgin to send the <history /> stanza by default?

    Read the article

  • Hello Operator, My Switch Is Bored

    - by Paul White
    This is a post for T-SQL Tuesday #43 hosted by my good friend Rob Farley. The topic this month is Plan Operators. I haven’t taken part in T-SQL Tuesday before, but I do like to write about execution plans, so this seemed like a good time to start. This post is in two parts. The first part is primarily an excuse to use a pretty bad play on words in the title of this blog post (if you’re too young to know what a telephone operator or a switchboard is, I hate you). The second part of the post looks at an invisible query plan operator (so to speak). 1. My Switch Is Bored Allow me to present the rare and interesting execution plan operator, Switch: Books Online has this to say about Switch: Following that description, I had a go at producing a Fast Forward Cursor plan that used the TOP operator, but had no luck. That may be due to my lack of skill with cursors, I’m not too sure. The only application of Switch in SQL Server 2012 that I am familiar with requires a local partitioned view: CREATE TABLE dbo.T1 (c1 int NOT NULL CHECK (c1 BETWEEN 00 AND 24)); CREATE TABLE dbo.T2 (c1 int NOT NULL CHECK (c1 BETWEEN 25 AND 49)); CREATE TABLE dbo.T3 (c1 int NOT NULL CHECK (c1 BETWEEN 50 AND 74)); CREATE TABLE dbo.T4 (c1 int NOT NULL CHECK (c1 BETWEEN 75 AND 99)); GO CREATE VIEW V1 AS SELECT c1 FROM dbo.T1 UNION ALL SELECT c1 FROM dbo.T2 UNION ALL SELECT c1 FROM dbo.T3 UNION ALL SELECT c1 FROM dbo.T4; Not only that, but it needs an updatable local partitioned view. We’ll need some primary keys to meet that requirement: ALTER TABLE dbo.T1 ADD CONSTRAINT PK_T1 PRIMARY KEY (c1);   ALTER TABLE dbo.T2 ADD CONSTRAINT PK_T2 PRIMARY KEY (c1);   ALTER TABLE dbo.T3 ADD CONSTRAINT PK_T3 PRIMARY KEY (c1);   ALTER TABLE dbo.T4 ADD CONSTRAINT PK_T4 PRIMARY KEY (c1); We also need an INSERT statement that references the view. Even more specifically, to see a Switch operator, we need to perform a single-row insert (multi-row inserts use a different plan shape): INSERT dbo.V1 (c1) VALUES (1); And now…the execution plan: The Constant Scan manufactures a single row with no columns. The Compute Scalar works out which partition of the view the new value should go in. The Assert checks that the computed partition number is not null (if it is, an error is returned). The Nested Loops Join executes exactly once, with the partition id as an outer reference (correlated parameter). The Switch operator checks the value of the parameter and executes the corresponding input only. If the partition id is 0, the uppermost Clustered Index Insert is executed, adding a row to table T1. If the partition id is 1, the next lower Clustered Index Insert is executed, adding a row to table T2…and so on. In case you were wondering, here’s a query and execution plan for a multi-row insert to the view: INSERT dbo.V1 (c1) VALUES (1), (2); Yuck! An Eager Table Spool and four Filters! I prefer the Switch plan. My guess is that almost all the old strategies that used a Switch operator have been replaced over time, using things like a regular Concatenation Union All combined with Start-Up Filters on its inputs. Other new (relative to the Switch operator) features like table partitioning have specific execution plan support that doesn’t need the Switch operator either. This feels like a bit of a shame, but perhaps it is just nostalgia on my part, it’s hard to know. Please do let me know if you encounter a query that can still use the Switch operator in 2012 – it must be very bored if this is the only possible modern usage! 2. 
Invisible Plan Operators The second part of this post uses an example based on a question Dave Ballantyne asked using the SQL Sentry Plan Explorer plan upload facility. If you haven’t tried that yet, make sure you’re on the latest version of the (free) Plan Explorer software, and then click the Post to SQLPerformance.com button. That will create a site question with the query plan attached (which can be anonymized if the plan contains sensitive information). Aaron Bertrand and I keep a close eye on questions there, so if you have ever wanted to ask a query plan question of either of us, that’s a good way to do it. The problem The issue I want to talk about revolves around a query issued against a calendar table. The script below creates a simplified version and adds 100 years of per-day information to it: USE tempdb; GO CREATE TABLE dbo.Calendar ( dt date NOT NULL, isWeekday bit NOT NULL, theYear smallint NOT NULL,   CONSTRAINT PK__dbo_Calendar_dt PRIMARY KEY CLUSTERED (dt) ); GO -- Monday is the first day of the week for me SET DATEFIRST 1;   -- Add 100 years of data INSERT dbo.Calendar WITH (TABLOCKX) (dt, isWeekday, theYear) SELECT CA.dt, isWeekday = CASE WHEN DATEPART(WEEKDAY, CA.dt) IN (6, 7) THEN 0 ELSE 1 END, theYear = YEAR(CA.dt) FROM Sandpit.dbo.Numbers AS N CROSS APPLY ( VALUES (DATEADD(DAY, N.n - 1, CONVERT(date, '01 Jan 2000', 113))) ) AS CA (dt) WHERE N.n BETWEEN 1 AND 36525; The following query counts the number of weekend days in 2013: SELECT Days = COUNT_BIG(*) FROM dbo.Calendar AS C WHERE theYear = 2013 AND isWeekday = 0; It returns the correct result (104) using the following execution plan: The query optimizer has managed to estimate the number of rows returned from the table exactly, based purely on the default statistics created separately on the two columns referenced in the query’s WHERE clause. (Well, almost exactly, the unrounded estimate is 104.289 rows.) There is already an invisible operator in this query plan – a Filter operator used to apply the WHERE clause predicates. We can see it by re-running the query with the enormously useful (but undocumented) trace flag 9130 enabled: Now we can see the full picture. The whole table is scanned, returning all 36,525 rows, before the Filter narrows that down to just the 104 we want. Without the trace flag, the Filter is incorporated in the Clustered Index Scan as a residual predicate. It is a little bit more efficient than using a separate operator, but residual predicates are still something you will want to avoid where possible. The estimates are still spot on though: Anyway, looking to improve the performance of this query, Dave added the following filtered index to the Calendar table: CREATE NONCLUSTERED INDEX Weekends ON dbo.Calendar(theYear) WHERE isWeekday = 0; The original query now produces a much more efficient plan: Unfortunately, the estimated number of rows produced by the seek is now wrong (365 instead of 104): What’s going on? The estimate was spot on before we added the index! Explanation You might want to grab a coffee for this bit. Using another trace flag or two (8606 and 8612) we can see that the cardinality estimates were exactly right initially: The highlighted information shows the initial cardinality estimates for the base table (36,525 rows), the result of applying the two relational selects in our WHERE clause (104 rows), and after performing the COUNT_BIG(*) group by aggregate (1 row). 
All of these are correct, but that was before cost-based optimization got involved :) Cost-based optimization When cost-based optimization starts up, the logical tree above is copied into a structure (the ‘memo’) that has one group per logical operation (roughly speaking). The logical read of the base table (LogOp_Get) ends up in group 7; the two predicates (LogOp_Select) end up in group 8 (with the details of the selections in subgroups 0-6). These two groups still have the correct cardinalities as trace flag 8608 output (initial memo contents) shows: During cost-based optimization, a rule called SelToIdxStrategy runs on group 8. It’s job is to match logical selections to indexable expressions (SARGs). It successfully matches the selections (theYear = 2013, is Weekday = 0) to the filtered index, and writes a new alternative into the memo structure. The new alternative is entered into group 8 as option 1 (option 0 was the original LogOp_Select): The new alternative is to do nothing (PhyOp_NOP = no operation), but to instead follow the new logical instructions listed below the NOP. The LogOp_GetIdx (full read of an index) goes into group 21, and the LogOp_SelectIdx (selection on an index) is placed in group 22, operating on the result of group 21. The definition of the comparison ‘the Year = 2013’ (ScaOp_Comp downwards) was already present in the memo starting at group 2, so no new memo groups are created for that. New Cardinality Estimates The new memo groups require two new cardinality estimates to be derived. First, LogOp_Idx (full read of the index) gets a predicted cardinality of 10,436. This number comes from the filtered index statistics: DBCC SHOW_STATISTICS (Calendar, Weekends) WITH STAT_HEADER; The second new cardinality derivation is for the LogOp_SelectIdx applying the predicate (theYear = 2013). To get a number for this, the cardinality estimator uses statistics for the column ‘theYear’, producing an estimate of 365 rows (there are 365 days in 2013!): DBCC SHOW_STATISTICS (Calendar, theYear) WITH HISTOGRAM; This is where the mistake happens. Cardinality estimation should have used the filtered index statistics here, to get an estimate of 104 rows: DBCC SHOW_STATISTICS (Calendar, Weekends) WITH HISTOGRAM; Unfortunately, the logic has lost sight of the link between the read of the filtered index (LogOp_GetIdx) in group 22, and the selection on that index (LogOp_SelectIdx) that it is deriving a cardinality estimate for, in group 21. The correct cardinality estimate (104 rows) is still present in the memo, attached to group 8, but that group now has a PhyOp_NOP implementation. Skipping over the rest of cost-based optimization (in a belated attempt at brevity) we can see the optimizer’s final output using trace flag 8607: This output shows the (incorrect, but understandable) 365 row estimate for the index range operation, and the correct 104 estimate still attached to its PhyOp_NOP. This tree still has to go through a few post-optimizer rewrites and ‘copy out’ from the memo structure into a tree suitable for the execution engine. One step in this process removes PhyOp_NOP, discarding its 104-row cardinality estimate as it does so. To finish this section on a more positive note, consider what happens if we add an OVER clause to the query aggregate. 
This isn’t intended to be a ‘fix’ of any sort, I just want to show you that the 104 estimate can survive and be used if later cardinality estimation needs it: SELECT Days = COUNT_BIG(*) OVER () FROM dbo.Calendar AS C WHERE theYear = 2013 AND isWeekday = 0; The estimated execution plan is: Note the 365 estimate at the Index Seek, but the 104 lives again at the Segment! We can imagine the lost predicate ‘isWeekday = 0’ as sitting between the seek and the segment in an invisible Filter operator that drops the estimate from 365 to 104. Even though the NOP group is removed after optimization (so we don’t see it in the execution plan) bear in mind that all cost-based choices were made with the 104-row memo group present, so although things look a bit odd, it shouldn’t affect the optimizer’s plan selection. I should also mention that we can work around the estimation issue by including the index’s filtering columns in the index key: CREATE NONCLUSTERED INDEX Weekends ON dbo.Calendar(theYear, isWeekday) WHERE isWeekday = 0 WITH (DROP_EXISTING = ON); There are some downsides to doing this, including that changes to the isWeekday column may now require Halloween Protection, but that is unlikely to be a big problem for a static calendar table ;)  With the updated index in place, the original query produces an execution plan with the correct cardinality estimation showing at the Index Seek: That’s all for today, remember to let me know about any Switch plans you come across on a modern instance of SQL Server! Finally, here are some other posts of mine that cover other plan operators: Segment and Sequence Project Common Subexpression Spools Why Plan Operators Run Backwards Row Goals and the Top Operator Hash Match Flow Distinct Top N Sort Index Spools and Page Splits Singleton and Range Seeks Bitmaps Hash Join Performance Compute Scalar © 2013 Paul White – All Rights Reserved Twitter: @SQL_Kiwi
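    The trace flag mentioned above (9130) is not shown being enabled in this excerpt. One way to turn it on for a single query - assuming sysadmin rights, which the QUERYTRACEON hint requires, and bearing in mind the flag is undocumented and for investigation only - is a query-level hint along these lines:

        -- re-run the counting query with the residual predicate surfaced
        -- as an explicit Filter operator in the plan
        SELECT Days = COUNT_BIG(*)
        FROM dbo.Calendar AS C
        WHERE theYear = 2013
        AND isWeekday = 0
        OPTION (QUERYTRACEON 9130);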

    Read the article

  • Airport Extreme and Windows PC via regular LAN (not WIFI)

    - by Mr AJL
    So I got an Airport Extreme, and everything works beautifully on the Mac, but my Windows 7 PC, which is connected via a regular Ethernet cable, can't see any network. The Win7 PC says there's no cable connected. Any ideas? Is there some kind of setting you have to enable with the Airport utility? I looked everywhere but can't find anything. Thanks!

    Read the article

  • Use synergy with Physical KVM

    - by Mr. Man
    I am using synergy on a Linux Mint computer as the server with a Mac as the client. I also have a physical KVM switch. The problem I have is that when ever I switch the physical KVM to my Mac, synergy stops working as in the keyboard and mouse don't work with the Mac. Thanks in advance! EDIT: here are some logs: From the Mint machine: INFO: synergys.cpp,1042: Synergy server 1.3.1 on Linux 2.6.31-14-generic #48-Ubuntu SMP Fri Oct 16 14:04:26 UTC 2009 i686 DEBUG: synergys.cpp,1051: opening configuration synergy.conf DEBUG: synergys.cpp,1062: configuration read successfully DEBUG: CXWindowsScreen.cpp,847: XOpenDisplay(:0.0) DEBUG: CXWindowsScreenSaver.cpp,339: xscreensaver window: 0x00000000 DEBUG: CXWindowsScreen.cpp,117: screen shape: 0,0 1024x768 DEBUG: CXWindowsScreen.cpp,118: window is 0x03800004 DEBUG: CScreen.cpp,38: opened display DEBUG: CXWindowsScreen.cpp,679: registered hotkey F12 (id=efc9 mask=0000) as id=1 NOTE: synergys.cpp,500: started server INFO: CServer.cpp,1141: screen ubuntu shape changed NOTE: CClientListener.cpp,127: accepted client connection DEBUG: CClientProxy1_0.cpp,404: received client marks-mac.local info shape=-1024,0 2304x800 NOTE: CServer.cpp,278: client mac has connected INFO: CServer.cpp,447: switch from ubuntu to mac at -1024,393 INFO: CScreen.cpp,116: leaving screen DEBUG: CXWindowsClipboard.cDEBUG: CXWindowsClipboard.cDEBUG: CXWindowsClipboard.cDEBUG: CXWindowsClipboard.cDEBUG: CXWindowsClipboard.cDEBUG: CXWinavDEBUG: CXWindowsClipboard.cDEBUG: CXWindowsClipboard.cDEBUG302)DEBUG: CXWindowsClipboard.cDEBUG: CXWindowsClipboard.cDE47DEBUG: CXWindowsClipboard.cDEBUG: CXWindowsrset=utf-8 (633), text/plain (462) DEBUG: CXWindowsClipboard.cpp,555: added fDEBUG: CXWindowsClipboard.cpp,555: added f DEBUG: CXWindCXWDEBUG: CXWindowsClipboard.cpp,555: added fDEBUG:SerDEBUG: CXWindowsClipboard.cpp,555: ed DEBUG: CXWindowsClipboard.cpp,555: added fDEBUG: CXWindowsClipboard.cpp,555owsClDEBUG: CXWindowsClipboard.cpp,555: 1DEBUG: CXWindowsClipboard.cpp,555: added fDEBUG: getDEBUG: CXWindowsClipboard.cpp,555: added f DEBUG: CXW8_STDEBUG: CXWindowsClipboard.cpp,555: added fDEBUG: CXWindowsClipboard.cpp,555: added fD textDEBUG: CXWindowsClipboard.cpp,555: added fDEBU DEBUG: CXWindowsClipboard.cpp,555: added fDEBUG: CXWindowsClipinDEBUG: CXWindowsClipboard.cpp,555:oardDEBUG: CXWindowsClipboard.cpp,555: added fDEBUG: CXWindowsClipboard.cpp,555: added fDEBUG: CXWindCXWDEBUG: CXWindowsClipboard.cpp,555: added fDEBUG:SerDEBUG: CXWindowsClipboard.cpp,555: ed DEBUG: CXWindowsClipboard.cpp,555: added fDEBUG: CXWindowsClipboard.cpp,555owsClDEBUG: CXWindowsClipboard.cpp,555: 1DEBUG: CXWindowsClipboard.cpp, s From the Mac: connecting to '192.168.3.5': 192.168.3.5:24800 connected to server entering screen leaving screen entering screen leaving screen stopped client

    Read the article

  • Make my git user and apache user have read/write/delete access

    - by Mr A
    I am having permission problems on my server. I use the user developer to pull my git repository on the server. Then Apache uses its own apache user to write and execute code. I always have problems when the app wants to write something in a directory (i.e. log files and cache ...), or if I execute a cron job that runs with my developer rights and wants to add something to folders that are written by apache. My question is: how do I give my developer user the same write/delete access as my apache user and avoid permission conflicts between the two? I am not fluent with Linux commands, so it would help if you could provide links or simply examples of doing so. Thanks.

    Read the article

  • VSFTPD does not allow upload with virtual users

    - by Mr. Squig
    I am attempting to setup VSFTPD with virtual users on a server running Ubuntu 12.04. I have configured the server to allow for virtual users to login, but I am having trouble getting it to allow uploads. My vsftpd.conf is as follows: listen=YES anonymous_enable=NO local_enable=YES write_enable=YES local_umask=022 anon_upload_enable=YES dirmessage_enable=YES use_localtime=YES xferlog_enable=YES connect_from_port_20=YES chroot_local_user=YES virtual_use_local_privs=YES guest_enable=YES guest_username=virtual user_sub_token=$USER local_root=/var/www/$USER hide_ids=YES secure_chroot_dir=/var/run/vsftpd/empty pam_service_name=vsftpd rsa_cert_file=/etc/ssl/private/vsftpd.pem /etc/pam.d/vsftpd contains: auth required pam_pwdfile.so pwdfile /etc/vsftpd.passwd crypt=hash account required pam_permit.so crypt=hash I have two virtual users set up, one of which has the same name as a local user. They each have a directory in /var/www/ owned by 'virtual'. As I understand it, when a virtual user logs in this way they will appear to the system as the user virtual. Using this configuration user can log on, but cannot upload files. The error given in /var/log/vsftpd.log is: Tue Nov 20 19:49:00 2012 [pid 2] CONNECT: Client "96.233.116.53" Tue Nov 20 19:49:07 2012 [pid 1] [zac] OK LOGIN: Client "96.233.116.53" Tue Nov 20 19:49:11 2012 [pid 2] CONNECT: Client "96.233.116.53" Tue Nov 20 19:49:11 2012 [pid 1] [zac] OK LOGIN: Client "96.233.116.53" Tue Nov 20 19:49:11 2012 [pid 3] [zac] FAIL CHMOD: Client "96.233.116.53", "/test.ppm 644" I have tried changing the permissions of these directories in all sorts of ways, but nothing seem to work. I have a feeling that it is something simple related to permissions. Any ideas?

    Read the article

  • DNS and DHCP not agreeing on an IP address

    - by Mr. Jefferson
    I'm having a problem where our Windows Server 2003 domain controller assigns my Windows 7 computer one IP address (x.x.x.75) via DHCP, but reports another (x.x.x.84) via DNS. This causes some interesting behavior on the network. If I change my adapter settings to get IP and DNS addresses from DHCP, I can access the internet, but no one on our network can access my computer. If I change my IP manually to what DNS says it is, I lose my internet access, but everyone can get to my computer again. I know that we have some old, invalid reverse DNS pointers hanging around (a reverse lookup on an IP address often gives more than one result, usually not including the one that is correct), so that could be contributing, but my problem is recent, and the invalid reverse pointers have been around a long time. What's going on, and how do I fix it?

    Read the article

  • IIS rewrite rule to check for a querystring and add it if it's not there

    - by M.R.
    I'm trying to make an IIS URL rewrite rule that appends a URL parameter to the URL. The URL parameter is hssc. So, any URL that is processed through the server needs that parameter. Keeping in mind that some URLs will already have their own params, other URLs won't, some are root URLs, etc., sometimes it will need to add ?hssc=1 and sometimes &hssc=1 - so, if I have a URL such as: http://www.blah.com should become http://www.blah.com/?hssc=1 http://www.blah.com/index.html should become http://www.blah.com/index.html?hssc=1 http://www.blah.com/?q=5 should become http://www.blah.com/?q=5&hssc=1 http://www.blah.com/index.html?q=5 should become http://www.blah.com/index.html?q=5&hssc=1 http://www.blah.com/index.html?q=5&hssc=1 should be left alone. I also want the URL not to be hidden (as in a back-end rewrite behind the scenes). I need the parameter to appear in the URL, so when users copy the URL or bookmark it, the parameter is there. I've set the condition to match \&hssc|\?hssc - now I just need a way to rewrite the URL so the parameter appears and the part of the original URL that is already there is kept.

    Read the article

  • Windows 7 can't connect via ethernet to Airport Extreme

    - by Mr AJL
    I have an Airport Extreme router, and everything works beautifully on the Mac, but my Windows 7 computer, which is connected via a regular Ethernet cable, can't see any network. The Windows computer says there's no cable connected. Any ideas? Is there some kind of setting you have to enable with the Airport utility? I looked everywhere but can't find anything. Or do I need to install the Airport utility on the PC just to connect to the router? (Haven't tried, because the PC has no CD drive.)

    Read the article

  • Restrict VPN client traffic to certain domains/IP

    - by mr-euro
    Hi Is there any way to restrict a VPN client to only route certain traffic via the VPN and the rest via their local gateway? For example: traffic to a certain IP or domain gets routed across the VPN and all other requests do not. Let me know if you need more details. Thank you.

    Read the article

  • Linux Ubuntu: Updating the GRUB menu

    - by Mr X
    So apparently there are 2 versions of Linux on my laptop (I have a dual-boot system with Linux and Windows 8, but Linux is the master OS). One of them uses kernel version 3.11.0, whose headers + source code are incompatible with my wireless driver. My laptop is a Toshiba Satellite with a Realtek RTL8188CE wireless LAN. So the newer kernel version is still the first option on the menu, and to get to Linux I use the "Previous versions of Linux" entry, which is running kernel version 3.2.0-55. What can I do to update the GRUB menu so that the Linux version with the 3.2.0-55 kernel appears as the primary option? Do I need to get rid of/uninstall the newer kernel version? How can I do this without screwing up Linux entirely?

    Read the article

  • How does the Linux update manager work?

    - by Mr.Student
    I want to know how the update manager for Linux works. For instance, how does my Linux distro check to see if there are any updates available for download, and which servers does it download these updates from? If I am dealing with 3rd-party software that is not a part of the main distro, how do those programs interact with my update manager to notify me that they have updates available? Lastly, what would be some good literature on the subject?

    Read the article

  • LINQ: Enhancing Distinct With The PredicateEqualityComparer

    - by Paulo Morgado
    Today I was writing a LINQ query and I needed to select distinct values based on a comparison criteria. Fortunately, LINQ’s Distinct method allows an equality comparer to be supplied, but, unfortunately, sometimes, this means having to write custom equality comparer. Because I was going to need more than one equality comparer for this set of tools I was building, I decided to build a generic equality comparer that would just take a custom predicate. Something like this: public class PredicateEqualityComparer<T> : EqualityComparer<T> { private Func<T, T, bool> predicate; public PredicateEqualityComparer(Func<T, T, bool> predicate) : base() { this.predicate = predicate; } public override bool Equals(T x, T y) { if (x != null) { return ((y != null) && this.predicate(x, y)); } if (y != null) { return false; } return true; } public override int GetHashCode(T obj) { if (obj == null) { return 0; } return obj.GetHashCode(); } } Now I can write code like this: .Distinct(new PredicateEqualityComparer<Item>((x, y) => x.Field == y.Field)) But I felt that I’d lost all conciseness and expressiveness of LINQ and it doesn’t support anonymous types. So I came up with another Distinct extension method: public static IEnumerable<TSource> Distinct<TSource>(this IEnumerable<TSource> source, Func<TSource, TSource, bool> predicate) { return source.Distinct(new PredicateEqualityComparer<TSource>(predicate)); } And the query is now written like this: .Distinct((x, y) => x.Field == y.Field) Looks a lot better, doesn’t it?

    Read the article

  • Use synergy with KVM

    - by Mr. Man
    I am using synergy on a Linux Mint server with a Mac as the client. I also have a physical KVM switch. The problem I have is that whenever I switch the physical KVM to my Mac, synergy stops working, as in the keyboard and mouse don't work with the Mac. Thanks in advance!

    Read the article

  • HP DL580 G5 Hyper-V networking problem

    - by mr-perfect
    Hi, I have installed the Hyper-V role on a DL580 G5 cluster. The host operating system is Server 2008 x64, patched to the maximum. Everything has gone fine, but if I install a guest operating system (actually a Windows Server 2008 x64), I can't reach the network from it. It sends many packets but doesn't receive any, so I can't ping the external network either. I have installed the latest drivers on the server and uninstalled the network configuration utility from the physical server, but no luck. I added a legacy adapter to the guest bound to the same physical adapter on the host machine, but that didn't help. Any ideas welcome...

    Read the article
