Search Results

Search found 6099 results on 244 pages for 'mass effect'.

Page 65/244

  • Removing HTML entities while preserving line breaks with JSoup

    - by shrodes
    I have been using JSoup to parse lyrics, and it has been great until now, but I have run into a problem. I can use Node.html() to return the full HTML of the desired node, which retains line breaks, like this: Gl&oacute;andi augu, silfurn&aacute;tt <br />Bl&oacute;&eth; alv&ouml;ru, starir &aacute; <br />&Oacute;&eth;ur hundur er &iacute; v&iacute;gam&oacute;&eth;, &iacute; maga... m&eacute;r <br /> <br />Kolni&eth;ur gref, kvik sem dreg h&eacute;r <br />Kolni&eth;ur svart, hvergi bjart n&eacute; However, as you can see, this has the unfortunate side effect of retaining HTML entities and tags. If I use Node.text() instead, I get a better-looking result, free of tags and entities: Glóandi augu, silfurnátt Blóð alvöru, starir á Óður hundur er í vígamóð, í maga... mér Kolniður gref, kvik sem dreg hér Kolniður svart, This has the opposite unfortunate side effect of removing the line breaks and compressing everything onto a single line. Simply replacing <br /> in the node before calling Node.text() yields the same result; the method itself seems to compress the text onto a single line, ignoring newlines. Is it possible to have the best of both worlds and have tags and entities replaced correctly while preserving the line breaks, or is there another way of decoding entities and removing tags without having to replace them manually?
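    One approach that is often suggested for exactly this situation (a sketch, assuming jsoup 1.x and not tested against this specific input) is to mark every <br /> with a plain-text placeholder before calling text(), since text() decodes the entities but collapses all whitespace, and then turn the placeholder back into real newlines afterwards:

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

public class LyricsText {
    // Decode entities and strip tags with text(), but keep <br /> as line breaks.
    public static String textWithLineBreaks(String html) {
        Document doc = Jsoup.parse(html);
        // Append a literal "\n" marker to every <br>; text() keeps it as plain text.
        doc.select("br").append("\\n");
        String flat = doc.text();   // entities decoded, tags stripped, whitespace collapsed
        return flat.replace("\\n ", "\n").replace("\\n", "\n");
    }

    public static void main(String[] args) {
        String html = "Gl&oacute;andi augu, silfurn&aacute;tt <br />"
                    + "Bl&oacute;&eth; alv&ouml;ru, starir &aacute; <br />";
        System.out.println(textWithLineBreaks(html));
    }
}
```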

    Read the article

  • Packet fragmentation when sending data via SSLStream

    - by Ive
    When using an SSLStream to send a 'large' chunk of data (1 meg) to an (already authenticated) client, the packet fragmentation / disassembly I'm seeing is FAR greater than when using a normal NetworkStream. Using an async read on the client (i.e. BeginRead()), the ReadCallback is repeatedly called with exactly the same size chunk of data up until the final packet (the remainder of the data). With the data I'm sending (it's a zip file), the segments happen to be 16363 bytes long. Note: my receive buffer is much bigger than this, and changing its size has no effect. I understand that SSL encrypts data in chunks no bigger than 18 KB, but since SSL sits on top of TCP, I wouldn't think that the number of SSL chunks would have any relevance to the TCP packet fragmentation? Essentially, the data is taking about 20 times longer to be fully read by the client than with a standard NetworkStream (both on localhost!). What am I missing? EDIT: I'm beginning to suspect that the receive (or send) buffer size of an SSLStream is limited. Even if I use synchronous reads (i.e. SSLStream.Read()), no more data ever becomes available, regardless of how long I wait before attempting to read. This would be the same behavior as if I were to limit the receive buffer to 16363 bytes. Setting the underlying NetworkStream's SendBufferSize (on the server) and ReceiveBufferSize (on the client) has no effect.
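    For what it's worth, a plausible explanation (sketch below, C#, assuming the total length is known up front, e.g. sent as a length prefix) is that each SslStream.Read returns at most one decrypted TLS record, roughly 16 KB, so no single read will ever hand back the whole payload regardless of buffer sizes; the usual remedy is simply to loop until the expected number of bytes has arrived:

```csharp
using System.IO;
using System.Net.Security;

static class SslStreamHelper
{
    // Keep reading until 'expectedLength' bytes have arrived; each Read call
    // can return at most one decrypted record (~16 KB), never the whole 1 MB.
    public static byte[] ReadExactly(SslStream stream, int expectedLength)
    {
        var buffer = new byte[expectedLength];
        int total = 0;
        while (total < expectedLength)
        {
            int read = stream.Read(buffer, total, expectedLength - total);
            if (read == 0)
                throw new EndOfStreamException("Connection closed before all data arrived.");
            total += read;
        }
        return buffer;
    }
}
```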

    Read the article

  • Determining the popularity of a video with ratings and views

    - by user295825
    I am about to embark on a new project - a video website. Users will be able to register, and vote on videos by clicking "like" or "dislike", or something to that effect. In any event, it will be a 2-option voting system, not a 5-star system. Every X number of days, I will be generating a "chart" of the most popular videos. So my question is: how should I determine the popularity of a given video? If I went the route of tallying up the videos with the most views, this could have the effect of exceptionally bad videos making it to the top of the charts (just because they're so bad). If I go the route of a scoring system based on the number of "like" and "dislike" votes (e.g. 100 like votes and 50 dislike votes equals a score of 2), videos with few views could appear at the top of the charts. So, what I need to do is a combination of the two. Barring, of course, spammy views and votes. What are your thoughts on the subject?
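    As a starting point, one simple heuristic (purely illustrative; the smoothing constant and the weighting are assumptions that would need tuning against real data) is to combine a smoothed like ratio with the logarithm of the view count, so that neither raw views nor a handful of votes can dominate the chart on their own:

```javascript
// Combined popularity score: smoothed like ratio weighted by log of views.
function popularityScore(views, likes, dislikes) {
    var totalVotes = likes + dislikes;
    // Smoothing damps the effect of very small vote counts (3 likes, 0 dislikes
    // should not beat 100 likes, 50 dislikes).
    var smoothedRatio = (likes + 5) / (totalVotes + 10);
    // Log of views stops a heavily-viewed but disliked video from winning on views alone.
    return smoothedRatio * Math.log(views + 1);
}

// Example:
console.log(popularityScore(2000, 100, 50)); // well-viewed, mostly liked
console.log(popularityScore(40, 3, 0));      // barely viewed, few votes
```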

    Read the article

  • C++ Pointer member function with templates assignment with a member function of another class

    - by Agusti
    Hi, I have this class: class IShaderParam{ public: std::string name_value; }; template<class TParam> class TShaderParam:public IShaderParam{ public: void (TShaderParam::*send_to_shader)( const TParam&,const std::string&); TShaderParam():send_to_shader(NULL){} TParam value; void up_to_shader(); }; typedef TShaderParam<float> FloatShaderParam; typedef TShaderParam<D3DXVECTOR3> Vec3ShaderParam; In another class, I have a vector of IShaderParam* and functions that I want to send to "send_to_shader". I'm trying to assign the reference of these functions like this: Vec3ShaderParam *_param = new Vec3ShaderParam; _param->send_to_shader = &TShader::setVector3; This is the function: void TShader::setVector3(const D3DXVECTOR3 &vec, const std::string &name){ //... } And this is the class with the IShaderParam*: class TShader{ std::vector<IShaderParam*> params; public: Shader effect; std::string technique_name; TShader(std::string& afilename):effect(NULL){}; ~TShader(); void setVector3(const D3DXVECTOR3 &vec, const std::string &name); }; When I compile the project with Visual Studio C++ Express 2008 I receive this error: Error 2 error C2440: '=' : cannot convert from 'void (__thiscall TShader::* )(const D3DXVECTOR3 &,const std::string &)' to 'void (__thiscall TShaderParam::* )(const TParam &,const std::string &)' c:\users\isagoras\documents\mcv\afoc\shader.cpp 127 Can I do the assignment? No? I don't know how :-S Yes, I know that I can achieve the same objective with other techniques, but I want to know how I can do this.
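    The mismatch is that send_to_shader is declared as a pointer to a member of TShaderParam, while &TShader::setVector3 is a pointer to a member of TShader; the two pointer-to-member types are unrelated, so no conversion exists. A minimal, self-contained sketch of one way around it (the owner pointer and the stand-in types below are assumptions, not the asker's real code) is to declare the member pointer against TShader and call it through the shader instance the parameter belongs to:

```cpp
#include <iostream>
#include <string>

// Minimal stand-ins for the types in the question.
struct D3DXVECTOR3 { float x, y, z; };

class TShader {
public:
    void setVector3(const D3DXVECTOR3& vec, const std::string& name) {
        std::cout << "set " << name << " = ("
                  << vec.x << ", " << vec.y << ", " << vec.z << ")\n";
    }
};

class IShaderParam {
public:
    std::string name_value;
    virtual ~IShaderParam() {}
};

template <class TParam>
class TShaderParam : public IShaderParam {
public:
    // Key change: the pointer names TShader as the class, matching &TShader::setVector3.
    void (TShader::*send_to_shader)(const TParam&, const std::string&);
    TShader* owner;   // the shader instance to invoke it on (assumed addition)
    TParam value;

    TShaderParam() : send_to_shader(0), owner(0) {}

    void up_to_shader() {
        if (owner && send_to_shader)
            (owner->*send_to_shader)(value, name_value);
    }
};

typedef TShaderParam<D3DXVECTOR3> Vec3ShaderParam;

int main() {
    TShader shader;
    Vec3ShaderParam p;
    p.name_value = "light_dir";
    p.value.x = 1; p.value.y = 2; p.value.z = 3;
    p.owner = &shader;
    p.send_to_shader = &TShader::setVector3;   // now the pointer-to-member types match
    p.up_to_shader();
}
```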

    Read the article

  • Parent and siblings inherits a child list items styles

    - by elvista
    I have a simple menu <ul id="menu"> <li class="leaf"><a href="#">Menu Item 1</a></li> <li class="leaf"><a href="#">Menu Item 2</a></li> <li class="expanded"><a href="#">Menu Item 3</a> <ul> <li class="leaf"><a href="#">Menu Item a</a></li> <li class="leaf"><a href="#">Menu Item b</a></li> <li class="leaf"><a href="#">Menu Item c</a></li> </ul> </li> <li class="leaf"><a href="#">Menu Item 4</a></li> </ul> and ul#menu li:hover {font-weight:bold;} The problem I am facing is that when I hover over a ul li li, the parent as well as all its siblings get the hover effect. I only want the list item I hovered over to get the effect. I tried ul#menu li.leaf:hover {..}, ul#menu li.expanded:hover {..}, but even in that case, when I hover over li.expanded, its children inherit the style. It is important for me to style the list items, not the a elements (the style is more complicated than the one I posted). How do I fix this?
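    Two things are going on here (the rules below are a sketch against the markup above): font-weight is inherited, so a bold parent li drags every nested item along with it, and :hover applies to every ancestor under the pointer, so hovering a nested li also matches the parent. The first can be fixed by resetting the style on nested items; the second cannot be avoided while styling the li itself, because CSS has no parent selector, so if the parent must stay unstyled the rule has to target the hovered element's direct child link instead:

```css
/* Hovering li.expanded no longer restyles its nested items... */
ul#menu li:hover      { font-weight: bold; }
ul#menu li:hover li   { font-weight: normal; }
/* ...but the nested item actually under the pointer still gets the effect. */
ul#menu li li:hover   { font-weight: bold; }

/* Alternative, if the parent li must not change while a nested item is hovered:
   move the effect onto the hovered li's direct child. */
ul#menu li:hover > a  { font-weight: bold; }
```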

    Read the article

  • jQuery: Moving window (or FIFO) type DIV?

    - by Legend
    I have been trying to get this effect for a couple of hours now and I must admit I am failing at it. I am trying to construct a DIV that accepts a particular number of items (say 5); when the 6th item is added, the first item that was added should be removed (first-in-first-out). The transition should have some kind of fadeIn and fadeOut feel. Here's what I managed to write till now: ... //Create a ul element with id 'ulele' and add it to a div ... //Do an ajax call and when an element arrives Hash = ComputeHash(message) if(!$("#" + Hash).exists()) { var element = $("<li></li>").html(message).attr('id', Hash).prependTo("#ulele"); $("#" + Hash).hide().delay(10000 - 1000 * messageNumber).show("slow"); _this.prune("#ulele"); } ... prune: function(divid) { $("#" + divid).children().each( function(i, elemLi) { if(i >= maxMessages) $(this).delay(10000).hide("slow").delay(10000).remove(); } ); } I've tried a couple of variations, but the final effect I am getting is not that of a FIFO. The elements disappear instantaneously despite the delay and hide("slow") calls. Does anyone have a more straightforward approach?
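    A rough sketch of a more direct FIFO (the selector and helper names are assumptions): prepend the new item hidden and fade it in, then fade out and remove anything beyond the limit, doing the remove() inside the fadeOut callback so it only happens after that element's own animation has finished (.remove() is not a queued method, so chaining it after .delay() removes the element immediately, which is why items vanish instantly):

```javascript
var maxMessages = 5;

// Add a message and keep at most maxMessages items, first-in-first-out.
function addMessage(listSelector, id, message) {
    var $list = $(listSelector);
    if ($("#" + id).length) return;              // already displayed

    $("<li></li>")
        .attr("id", id)
        .html(message)
        .hide()
        .prependTo($list)
        .fadeIn("slow");

    // Everything past the first maxMessages children is the oldest items.
    $list.children("li").slice(maxMessages).fadeOut("slow", function () {
        $(this).remove();                        // runs only after this item's fade completes
    });
}

// Usage: addMessage("#ulele", ComputeHash(message), message);
```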

    Read the article

  • jQuery slideDown + CSS Floats

    - by danilo
    I'm using an HTML table with several rows. Every second row - containing details about the preceding row - is hidden using CSS. When clicking the first row, the second row gets shown using jQuery show(). This is quite nice, but I would prefer the slideDown effect. The problem is, inside the details row there are two floating DIVs, one floating on the left and one on the right. Now if I slideDown the previously hidden row, the contained DIVs behave strangely and "jump around". See this animated gif to understand what I mean: http://ich-wars-nicht.ch/tmp/lunapic_127365879362365_.gif The markup: <tr class="row-vm"> <td>...</td> ... </tr> <tr class="row-details"> <td colspan="8"> <div class="vmdetail-left"> ... </div> <div class="vmdetail-right"> ... </div> </td> </tr> The CSS: .table-vmlist tr.row-details { display: none; } div.vmdetail-left { float: left; width: 50%; } div.vmdetail-right { float: right; width: 50%; } And the jQuery code: if ($(this).next().css('display') == 'none') { // Show details //$(this).next().show(); $(this).next().slideDown(); } else { // Hide details //$(this).next().hide(); $(this).next().slideUp(); } Is there a way to fix this behavior and implement a nice slideDown effect?
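    A common workaround (a sketch; the details-wrap class is an assumption) is not to animate the <tr> at all: table rows don't behave like normal blocks during a height animation, which is what makes the floated DIVs jump. Instead, wrap both floated DIVs in a single hidden div inside the td, show the row instantly, and slide the wrapper:

```javascript
// Markup change (assumed): inside the details <td>, wrap vmdetail-left and
// vmdetail-right in <div class="details-wrap" style="display:none">...</div>,
// and keep hiding the <tr> itself via the existing CSS rule.
var $row  = $(this).next('.row-details');
var $wrap = $row.find('.details-wrap');

if ($row.is(':visible')) {
    // Slide the wrapper up first, then hide the row once the animation is done.
    $wrap.slideUp(function () { $row.hide(); });
} else {
    $row.show();          // the row appears instantly (no height animation on the tr)
    $wrap.slideDown();    // only the wrapper animates, so the floats stay put
}
```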

    Read the article

  • jQuery load() problem with html that contain jQuery plugin

    - by Victorgalaxy
    FYI, here is my code: [index.html] <script type="text/javascript" src="js/script.js"></script> [script.js] $(document).ready(function() { $('#buttonEphone').click(function() { $('#apDiv2').load("ePhone.html, #content"); }); }); "ePhone.html" contains a lightbox effect (making use of the code below). [ePhone.html] <script type="text/javascript" src="js/prototype.lite.js"></script> <script type="text/javascript" src="js/moo.fx.js"></script> <script type="text/javascript" src="js/litebox-1.0.js"></script> The Litebox plugin also requires adding onload="initLightbox()" within the BODY tag of ePhone.html. With the above code, I can load ePhone.html's content (the #content div) into apDiv2 of my index.html. However, the lightbox effect no longer works. I've also tried loading the whole html instead of only #content: $('#apDiv2').load('ePhone.html'); but it still doesn't work. Please help, thx
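    Two details are worth checking here (a sketch, untested): the URL-plus-selector form of load() uses a space, not a comma ("ePhone.html #content"), and when a selector is supplied jQuery strips <script> elements from the loaded fragment, so nothing in ePhone.html ever runs onload="initLightbox()". The usual pattern is to include the lightbox scripts in index.html itself and re-initialise the plugin in load()'s callback, once the new content is actually in the page:

```javascript
// In index.html, with prototype.lite.js, moo.fx.js and litebox-1.0.js already included:
$(document).ready(function () {
    $('#buttonEphone').click(function () {
        // Space (not comma) between URL and selector; re-init litebox after insertion.
        $('#apDiv2').load('ePhone.html #content', function () {
            initLightbox();   // the call litebox normally makes from <body onload>
        });
    });
});
```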

    Read the article

  • CakePHP update multiple elements in one div

    - by sw3n
    The idea is that I have 3 ajax links, and each link is coupled to one single element (a form). The element will be loaded in one div. Each time you click a different link, the element that was requested before must be removed and the new one must take its place. So it has to update the DIV. The problem is that it requests the element, but it does not remove the other one. So when you click on the first link, it shows the first element; click on the second link, and it puts the second element beneath it; and exactly the same happens with the third one. If you click all 3 links, it shows 3 forms right under each other. Some Code: Controller: function view($id) { // content could come from a database, xml, etc. $content = array ( array($this->render(null, 'ajax', '/elements/ga')), array($this->render(null, 'ajax', '/elements/ex')), array($this->render(null, 'ajax', '/elements/both')) ); $this->set('google', $content[$id][0]); // use ajax layout $this->render('/pages/form', 'ajax'); } View code: echo $ajax->link( 'Google', array( 'controller' => 'analytics', 'action' => 'view', 0 ), array( 'update' => 'dynamic1')); echo ' | '; echo $ajax->link( 'Exact', array( 'controller' => 'analytics', 'action' => 'view', 1 ), array( 'update' => 'dynamic1', 'loading' => 'Effect.BlindDownUp(\'dynamic1\')')); echo ' | '; echo $ajax->link( 'Beide', array( 'controller' => 'analytics', 'action' => 'view', 2 ), array( 'update' => 'dynamic1', 'loading' => 'Effect.BlindDown(\'dynamic1\')' )); echo $ajax->div('dynamic1'); echo $google; echo $ajax->divEnd('dynamic1');

    Read the article

  • Make floating element "maximally wide"

    - by bobobobo
    I have some floating elements on a page. What I want is for the div that is floated left to be "maximally wide", so that it is as wide as it possibly can be without causing the red div ("I go at the right") to spill over onto the next line. An example is here: The width:100%; doesn't produce the desired effect! I don't want the green element ("I want to be as wide as possible") to go "under" the red element. It's very important that they both stay separate, i.e. I think they must both be floated! <div class="container"> <div class="a1">i go at the right</div> <div class="a2">i want to be as wide as possible,</div> <div class="clear"></div> </div> <style> div { border: solid 2px #000; background-color: #eee; margin: 8px; padding: 8px; } div.a1 { float:right; background-color: #a00; border: solid 2px #f00; margin: 12px; padding: 6px; } div.a2 { float: left; /*width: 100%;*/ /*this doesn't produce desired effect!*/ background-color: #0b0; border: solid 2px #0f0; margin: 12px; padding: 14px; } .clear { border: none; padding: 0 ; margin: 0; clear:both; } </style>
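    Contrary to the hunch above, the usual trick is that the "maximally wide" element is not floated at all (a sketch against the markup above): keep a1 floated right, and give a2 overflow: hidden so it establishes its own block formatting context. It then fills exactly the remaining width beside the float without wrapping underneath it, and the two never overlap; this relies on the right-floated div coming first in the markup, which it already does.

```css
div.a1 {
    float: right;
    background-color: #a00;
    border: solid 2px #f00;
    margin: 12px;
    padding: 6px;
}
div.a2 {
    overflow: hidden;   /* the key line: no float, no explicit width */
    background-color: #0b0;
    border: solid 2px #0f0;
    margin: 12px;
    padding: 14px;
}
```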

    Read the article

  • Implementing a non-public assignment operator with a public named method?

    - by Casey
    It is supposed to copy an AnimatedSprite. I'm having second thoughts that it has the unfortunate side effect of changing the *this object. How would I implement this feature without the side effect? EDIT: Based on new answers, the question should really be: How do I implement a non-public assignment operator with a public named method without side effects? (Changed title as such). public: AnimatedSprite& AnimatedSprite::Clone(const AnimatedSprite& animatedSprite) { return (*this = animatedSprite); } protected: AnimatedSprite& AnimatedSprite::operator=(const AnimatedSprite& rhs) { if(this == &rhs) return *this; destroy_bitmap(this->_frameImage); this->_frameImage = create_bitmap(rhs._frameImage->w, rhs._frameImage->h); clear_bitmap(this->_frameImage); this->_frameDimensions = rhs._frameDimensions; this->CalcCenterFrame(); this->_frameRate = rhs._frameRate; if(rhs._animation != nullptr) { delete this->_animation; this->_animation = new a2de::AnimationHandler(*rhs._animation); } else { delete this->_animation; this->_animation = nullptr; } return *this; }
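    One way to keep operator= non-public and still offer a public named method without the side effect is to turn Clone around: instead of copying another sprite into *this, make it a const factory that returns a new copy of the current object, built on the copy constructor. A minimal sketch of the pattern (the resource-managing members are omitted; the real class would need a correct copy constructor anyway):

```cpp
#include <iostream>

// Cut-down stand-in for AnimatedSprite, just to show the shape of the API.
class AnimatedSprite
{
public:
    explicit AnimatedSprite(int frameRate) : _frameRate(frameRate) {}

    // Public named copy: returns a brand-new object and never modifies *this.
    AnimatedSprite* Clone() const
    {
        return new AnimatedSprite(*this);   // relies on the copy constructor
    }

    int FrameRate() const { return _frameRate; }

protected:
    AnimatedSprite& operator=(const AnimatedSprite&);  // still non-public, now unused

private:
    int _frameRate;
};

int main()
{
    AnimatedSprite original(30);
    AnimatedSprite* copy = original.Clone();
    std::cout << copy->FrameRate() << "\n";   // 30; 'original' was left untouched
    delete copy;
}
```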

    Read the article

  • jQuery Drill down selector ?

    - by Etienne
    Hi, I have made this piece of code that makes a rollover effect on any image you add the class 'simplehover' to: //rollover effect on links $(".simplehover").mouseover(function () { $(this).attr("src", $(this).attr("src").replace("-off.", "-on.")); }).mouseout(function () { $(this).attr("src", $(this).attr("src").replace("-on.", "-off.")); }); <img src="slogan.gif" class="simplehover" /> I'm trying to get it to work if you add the class 'simplehover' to the surrounding anchor element instead (especially useful for the ASP.net asp:HyperLink tag, which automatically generates the IMG tag inside the hyperlink): <a href="#" class="simplehover"> <img src="slogan.gif" /> </a> If I change my selector to $(".simplehover img"), it will work; however, it no longer works in scenario #1. I tried this with no luck: $(".simplehover").mouseover(function () { if ($(this).has('img')) { $(this) = $(this).children(); } ... Can anyone figure this out?
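    A sketch that handles both cases (assigning to $(this) isn't valid JavaScript, which is why the attempt above fails): inside the handler, resolve to the image first, whether that's the element itself or the first <img> inside it, and swap the src on that:

```javascript
// Works whether .simplehover sits on the <img> itself or on a wrapping <a>.
$(".simplehover").hover(
    function () { swapSrc(this, "-off.", "-on."); },
    function () { swapSrc(this, "-on.", "-off."); }
);

function swapSrc(el, from, to) {
    // The element itself if it is an image, otherwise the first image inside it.
    var $img = $(el).is("img") ? $(el) : $(el).find("img").first();
    $img.attr("src", $img.attr("src").replace(from, to));
}
```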

    Read the article

  • jquery .on() toggle div

    - by lollo
    I'm developing a blog with dynamic content loaded via ajax requests, and I'm trying to hide/show the "post comment" form of an article with a button: <div id="post"> ... <ol id="update" class="timeline"> <!-- comments here --> </ol> <button class="mybutton Bhome">Commenta</button><!-- hide/show button --> <div class="mycomment"><!-- div to hide/show --> <form action="#" method="post"> Nome: <input type="text" size="12" id="name" /><br /> Email: <input type="text" size="12" id="email" /><br /> <textarea rows="10" cols="50" class="mytext" id="commentArea"></textarea><br /> <input type="submit" class="Bhome" value=" Post " /> </form> </div><!-- mycomment --> </div><!-- post --> This jQuery code works, but it affects all the posts... $("div").on("click", ".mybutton", function(e){ $(".mycomment").slideToggle("slow"); e.stopPropagation(); }); I want the clicked hide/show button to affect only the related article, but if I have more than one article on the page, any button with the .mybutton class hides or shows the comment forms of all the articles.
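    A sketch of the usual fix: instead of selecting every .mycomment on the page, toggle only the one that belongs to the clicked button - in the markup above it is the button's next sibling. (With several posts on one page, id="post" should also become a class, since ids must be unique.)

```javascript
// Toggle only the comment form next to the button that was clicked.
$(document).on("click", ".mybutton", function (e) {
    $(this).next(".mycomment").slideToggle("slow");
    e.stopPropagation();
});
```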

    Read the article

  • Else statement crashes when i enter a letter for a cin << int value

    - by TimothyTech
    Alright, I have a question. I veered away from using strings for selection, so now I use an integer. When the user enters a number, the game progresses. If they enter a wrong character it SHOULD hit the else statement; however, if I enter a letter or other character, the system goes into an endless loop and then crashes. Is there a way to reach the else statement even if the user's input doesn't match the variable's type? // action variable; int c_action: if (c_action == 1){ // enemy attack and user attack with added effect buffer. /////////////////////////////////////////////////////// u_attack = userAttack(userAtk, weapons); enemyHP = enemyHP - u_attack; cout << " charging at the enemy you do " << u_attack << "damage" << endl; e_attack = enemyAttack(enemyAtk); userHP = userHP - e_attack; cout << "however he lashes back causing you to have " << userHP << "health left " << endl << endl << endl << endl; //end of ATTACK ACTION }else{ cout << "invalid actions" << endl; goto ACTIONS; }
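    What actually happens is that cin >> c_action fails when a letter is typed: the stream enters a fail state, the bad characters stay in the buffer, and every subsequent extraction fails instantly, which is the endless loop. A minimal sketch of the usual fix is to test the extraction itself, then clear() the stream and discard the offending input before asking again:

```cpp
#include <iostream>
#include <limits>

int main()
{
    int c_action = 0;
    std::cout << "Choose an action: ";

    if (!(std::cin >> c_action))
    {
        // Extraction failed (e.g. a letter was typed): recover the stream.
        std::cin.clear();                                                    // clear the fail bit
        std::cin.ignore(std::numeric_limits<std::streamsize>::max(), '\n');  // throw away the bad input
        std::cout << "invalid action" << std::endl;
    }
    else if (c_action == 1)
    {
        std::cout << "attack!" << std::endl;   // game logic would go here
    }
    else
    {
        std::cout << "invalid action" << std::endl;
    }
    return 0;
}
```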

    Read the article

  • Need help making a div appear on the bottom of the screen while the rest of the divs scroll

    - by user1896600
    It's hard to describe my specific problem without just showing you the HTML code. The HTML source can be seen easily from clicking "View Source" while seeing the page http://techdot.us/projects/jeopardy/view.php. The CSS is located here: http://techdot.us/projects/jeopardy/style/gameStyle.css. My main goal is to have the main content table rows/columns appear on the majority of the screen (everything except 69px, to be exact). So, the bottom 69px contains an informational panel that is supposed to stay on the bottom of the screen, even when the user scrolls down the page. Scrolling is supposed to, in theory, trigger the majority of the content to go down the page normally, except the bottom bar which stays static. I have achieved this effect on the website. However, there is a big problem. On smaller screens (as you can simulate by resizing the window), some of the main table gets cut off. It seems that my CSS solution is a botch, and, in effect, does not accomplish my main goal. The bottom bar should not cut off part of the table from the main content div (gameTable), but the main content div should display all of its content in a scrollable fashion. My CSS at the moment works as long as the viewer's screen is a certain pixel height. This is definitely not permanent. Thank you SO much for the help. I really appreciate it and totally understand that I'm being a total pain by just throwing down tons of CSS and HTML code to edit.
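    A sketch of the usual layout for this (the #infoPanel id is an assumption; #gameTable is the main content div mentioned above): pin the bottom bar with position: fixed so it never scrolls, and give the scrolling content enough bottom padding that its last rows can always scroll clear of the bar instead of being cut off on short screens:

```css
#infoPanel {
    position: fixed;   /* stays glued to the viewport while the page scrolls */
    bottom: 0;
    left: 0;
    right: 0;
    height: 69px;
}
#gameTable {
    padding-bottom: 69px;   /* reserve room so nothing hides behind the fixed bar */
}
```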

    Read the article

  • Hiding instantiated templates in shared library created with g++

    - by jchl
    I have a file that contains the following: #include <map> class A {}; void doSomething() { std::map<int, A> m; } When compiled into a shared library with g++, the library contains dynamic symbols for all the methods of std::map<int, A>. Since A is private to this file, there is no possibility that std::map will be instantiated in any other shared library with the same parameters, so I'd like to make the template instantiation hidden (for some of the reasons described in this document). I thought I should be able to do this by adding an explicit instantiation of the template class and marking it as hidden, like so: #include <map> class A {}; template class __attribute__((visibility ("hidden"))) std::map<int, A>; void doSomething() { std::map<int, A> m; } However, this has no effect: the symbols are still all exported. I even tried compiling with -fvisibility=hidden, but this also has no effect on the visibility of the methods of std::map<int, A> (although it does hide doSomething). The document I linked to above describes the use of export maps to restrict visibility, but that seems very tedious. Is there a way to do what I want in g++ (other than using export maps)? If so, what is it? If not, is there a good reason why these symbols must always be exported, or is this just an omission in g++?

    Read the article

  • jquery animate boxshadow

    - by mstef
    http://jsfiddle.net/mstefanko/w5aAn/877/ Below, I'm achieving the effect I wanted, but with a pending issue. Since I'm using a separate span and positioning it absolutely on top of a box with relative position, I cannot access the inputs until after the animation is finished. I'm guessing the only way to alleviate this would be to do something similar by just animating the border of the outer box? But nothing I was doing to animate box-shadow:inset was working. HTML <div id="wow"> <span id="pulse"></span> <input id="form-input"/> <input id="form-input"/> </div> CSS #wow { width: 500px; height: 200px; display: inline-block; position: relative; border: 1px solid black; } #pulse { width: 100%; height: 100%; box-shadow:inset 0 0 20px #6c95c3; -moz-box-shadow:inset 0 0 10px #6c95c3; position: absolute; z-index: 20000; } JS $('#pulse').stop().animate({"opacity": 0}, "fast"); $('#pulse').effect("pulsate", { times:4 }, 500, function() { $(this).remove(); });
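    A sketch of a smaller fix that keeps the overlay approach: let mouse events fall straight through the #pulse span to the inputs underneath it, so the form stays usable while the animation runs. (pointer-events on ordinary HTML elements is not honoured by old versions of IE.)

```css
#pulse {
    pointer-events: none;   /* clicks and focus go through to the inputs below */
}
```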

    Read the article

  • Help with 'simple' jQuery question

    - by user318573
    I am trying to get a simple effect working in jQuery, and only have a few hours of experience with it, so have a lot to learn. I managed to get this code working: <script type="text/javascript" > $(document).ready(function() { lastBlock = $("#a1"); maxWidth = 415; minWidth = 75; $("ul li a").hover(function() { $(lastBlock).animate({ width: minWidth + "px" }, { queue: false, duration: 600 }); $(this).animate({ width: maxWidth + "px" }, { queue: false, duration: 600 }); lastBlock = this; }); }); </script> This gives me exactly what I want: a 6-pane horizontal accordion effect. Each pane, however, has a 75x75 image in the upper left, which is always visible no matter which pane is active (and it is this image that, when hovered over, causes the pane to open). What I want is for the image on the selected 'pane' to drop its top margin down 10px, and then go back to 0px when a new one is selected, i.e. so the selected pane's image is always 10px lower than the other 5 images. I suspect this should be easy, but I'm not quite grasping the syntax yet. Thanks.
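    A sketch of one way to do it (assuming the image sits inside each pane's a element): animate the image's margin-top alongside the width, resetting the previously active pane's image and pushing the new one down, with the same queue: false options so everything moves together:

```javascript
$("ul li a").hover(function () {
    // Shrink the old pane and lift its image back up.
    $(lastBlock).animate({ width: minWidth + "px" }, { queue: false, duration: 600 });
    $(lastBlock).find("img").animate({ marginTop: "0px" }, { queue: false, duration: 600 });

    // Expand the hovered pane and drop its image 10px.
    $(this).animate({ width: maxWidth + "px" }, { queue: false, duration: 600 });
    $(this).find("img").animate({ marginTop: "10px" }, { queue: false, duration: 600 });

    lastBlock = this;
});
```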

    Read the article

  • Fade in and out a div contains video embedHTML in jquery

    - by Ahmy
    I have a div and three links. When a link is clicked, the div's innerHTML is set to a video embed, and I need to apply some effects when the current video is changed by clicking a link. Example: <a id="Link1" onClick="ChangeVideo(1)">Link1</a> <a id="Link2" onClick="ChangeVideo(2)">Link2</a> <a id="Link3" onClick="ChangeVideo(3)">Link3</a> <div id="CurrVideo"></div> <script type="text/javascript" language="javascript"> //The ArrEmbed is an array that is set on the load of the page:: var ArrEmbed = new Array(); function ChangeVideo(index) { $('#CurrVideo').fadeTo(1000,0.25,function(){ document.getElementById("CurrVideo").innerHTML = ArrEmbed[index-1]; }); $('#CurrVideo').fadeTo(500,1.25); } </script> The effect is not noticeable, and the fading (diminishing opacity) is not visible even if the duration is increased (e.g. 'slow'). When I replace the embed HTML with an image, the same code works and the fading effect is visible, but with the video it is not, so how can I solve this problem? Thanks in advance for any help. Note: the embed HTML differs, as it may come from YouTube, any other server, or my JWPlayer.
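    Two things are worth trying (a sketch, not verified against the actual embeds): chain the fade-in inside the fade-out's callback with a valid target opacity (opacity is capped at 1, so 1.25 does nothing useful), and keep in mind that windowed Flash embeds often ignore CSS opacity entirely unless the embed uses wmode="transparent" or "opaque", which would explain why the same code visibly fades an image but not the video.

```javascript
function ChangeVideo(index) {
    // Fade out, swap the embed only once the fade has finished, then fade back in.
    $('#CurrVideo').fadeTo(1000, 0, function () {
        $(this).html(ArrEmbed[index - 1]);
        $(this).fadeTo(1000, 1);   // 1, not 1.25: opacity cannot exceed 1
    });
}
```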

    Read the article

  • Force the use of interface instead of concrete implementation in declaration (.NET)

    - by gammelgul
    In C++, you can do the following: class base_class { public: virtual void do_something() = 0; }; class derived_class : public base_class { private: virtual void do_something() { std::cout << "do_something() called"; } }; The derived_class overrides the method do_something() and makes it private. The effect is, that the only way to call this method is like this: base_class *object = new derived_class(); object->do_something(); If you declare the object as of type derived_class, you can't call the method because it's private: derived_class *object = new derived_class(); object->do_something(); // --> error C2248: '::derived_class::do_something' : cannot access private member declared in class '::derived_class' I think this is quite nice, because if you create an abstract class that is used as an interface, you can make sure that nobody accidentally declares a field as the concrete type, but always uses the interface class. Since in C# / .NET in general, you aren't allowed to narrow the access from public to private when overriding a method, is there a way to achieve a similar effect here?
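    In C#, the closest built-in equivalent is explicit interface implementation: the method is not part of the class's public surface at all, so it can only be reached through a reference typed as the interface. A sketch (type names chosen to mirror the C++ example above):

```csharp
using System;

public interface IBaseClass
{
    void DoSomething();
}

public class DerivedClass : IBaseClass
{
    // Explicit implementation: invisible on DerivedClass itself,
    // reachable only through an IBaseClass reference.
    void IBaseClass.DoSomething()
    {
        Console.WriteLine("do_something() called");
    }
}

public static class Program
{
    public static void Main()
    {
        IBaseClass viaInterface = new DerivedClass();
        viaInterface.DoSomething();     // compiles and runs

        DerivedClass concrete = new DerivedClass();
        // concrete.DoSomething();      // compile error: no accessible DoSomething
    }
}
```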

    Read the article

  • CSS: Why an input width:100% doesn't expand in an absolute box?

    - by Alessandro Vernet
    I have 2 inputs: they both have a width: 100%, and the second one is an absolute box: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en"> <head> <style type="text/css"> #box1 { position: absolute } #box1 { background: #666 } input { width: 100% } </style> </head> <body> <form> <input type="text"> <div id="box1"> <input type="text"> </div> </form> </body> </html> On standard-compliant browsers, the width: 100% seems to have no effect on the input inside the absolutely positioned box, but it does on the input which is not inside that absolutely absolute box. On IE7, both inputs take the whole width of the page. Two questions come to mind: Why does the width: 100% have no effect with standard-compliant browsers? I have to say that the way IE7 renders this feels more intuitive to me. How can I get IE7 to render things like the other browsers, if I can't remove the width: 100% and can't set a width on the absolutely positioned box?
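    For what it's worth, the standards behaviour comes from the absolutely positioned box being shrink-to-fit: with no width and no left/right offsets, #box1's width is derived from its content, so the input's width: 100% has nothing fixed to resolve against and it falls back to its default size. One hedged sketch for making the two behave alike without a pixel width is to stretch the box with offsets instead, so 100% on the input refers to the full containing block (whether offsets count as "setting a width" for your constraint is a judgment call):

```css
#box1 {
    position: absolute;
    left: 0;
    right: 0;      /* together with left: 0, the box spans its containing block */
    background: #666;
}
```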

    Read the article

  • Beware Sneaky Reads with Unique Indexes

    - by Paul White NZ
    A few days ago, Sandra Mueller (twitter | blog) asked a question using twitter’s #sqlhelp hash tag: “Might SQL Server retrieve (out-of-row) LOB data from a table, even if the column isn’t referenced in the query?” Leaving aside trivial cases (like selecting a computed column that does reference the LOB data), one might be tempted to say that no, SQL Server does not read data you haven’t asked for.  In general, that’s quite correct; however there are cases where SQL Server might sneakily retrieve a LOB column… Example Table Here’s a T-SQL script to create that table and populate it with 1,000 rows: CREATE TABLE dbo.LOBtest ( pk INTEGER IDENTITY NOT NULL, some_value INTEGER NULL, lob_data VARCHAR(MAX) NULL, another_column CHAR(5) NULL, CONSTRAINT [PK dbo.LOBtest pk] PRIMARY KEY CLUSTERED (pk ASC) ); GO DECLARE @Data VARCHAR(MAX); SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540);   WITH Numbers (n) AS ( SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0)) FROM master.sys.columns C1, master.sys.columns C2 ) INSERT LOBtest WITH (TABLOCKX) ( some_value, lob_data ) SELECT TOP (1000) N.n, @Data FROM Numbers N WHERE N.n <= 1000; Test 1: A Simple Update Let’s run a query to subtract one from every value in the some_value column: UPDATE dbo.LOBtest WITH (TABLOCKX) SET some_value = some_value - 1; As you might expect, modifying this integer column in 1,000 rows doesn’t take very long, or use many resources.  The STATITICS IO and TIME output shows a total of 9 logical reads, and 25ms elapsed time.  The query plan is also very simple: Looking at the Clustered Index Scan, we can see that SQL Server only retrieves the pk and some_value columns during the scan: The pk column is needed by the Clustered Index Update operator to uniquely identify the row that is being changed.  The some_value column is used by the Compute Scalar to calculate the new value.  (In case you are wondering what the Top operator is for, it is used to enforce SET ROWCOUNT). Test 2: Simple Update with an Index Now let’s create a nonclustered index keyed on the some_value column, with lob_data as an included column: CREATE NONCLUSTERED INDEX [IX dbo.LOBtest some_value (lob_data)] ON dbo.LOBtest (some_value) INCLUDE ( lob_data ) WITH ( FILLFACTOR = 100, MAXDOP = 1, SORT_IN_TEMPDB = ON ); This is not a useful index for our simple update query; imagine that someone else created it for a different purpose.  Let’s run our update query again: UPDATE dbo.LOBtest WITH (TABLOCKX) SET some_value = some_value - 1; We find that it now requires 4,014 logical reads and the elapsed query time has increased to around 100ms.  The extra logical reads (4 per row) are an expected consequence of maintaining the nonclustered index. The query plan is very similar to before (click to enlarge): The Clustered Index Update operator picks up the extra work of maintaining the nonclustered index. The new Compute Scalar operators detect whether the value in the some_value column has actually been changed by the update.  SQL Server may be able to skip maintaining the nonclustered index if the value hasn’t changed (see my previous post on non-updating updates for details).  Our simple query does change the value of some_data in every row, so this optimization doesn’t add any value in this specific case. The output list of columns from the Clustered Index Scan hasn’t changed from the one shown previously: SQL Server still just reads the pk and some_data columns.  Cool. 
Overall then, adding the nonclustered index hasn’t had any startling effects, and the LOB column data still isn’t being read from the table.  Let’s see what happens if we make the nonclustered index unique. Test 3: Simple Update with a Unique Index Here’s the script to create a new unique index, and drop the old one: CREATE UNIQUE NONCLUSTERED INDEX [UQ dbo.LOBtest some_value (lob_data)] ON dbo.LOBtest (some_value) INCLUDE ( lob_data ) WITH ( FILLFACTOR = 100, MAXDOP = 1, SORT_IN_TEMPDB = ON ); GO DROP INDEX [IX dbo.LOBtest some_value (lob_data)] ON dbo.LOBtest; Remember that SQL Server only enforces uniqueness on index keys (the some_data column).  The lob_data column is simply stored at the leaf-level of the non-clustered index.  With that in mind, we might expect this change to make very little difference.  Let’s see: UPDATE dbo.LOBtest WITH (TABLOCKX) SET some_value = some_value - 1; Whoa!  Now look at the elapsed time and logical reads: Scan count 1, logical reads 2016, physical reads 0, read-ahead reads 0, lob logical reads 36015, lob physical reads 0, lob read-ahead reads 15992.   CPU time = 172 ms, elapsed time = 16172 ms. Even with all the data and index pages in memory, the query took over 16 seconds to update just 1,000 rows, performing over 52,000 LOB logical reads (nearly 16,000 of those using read-ahead). Why on earth is SQL Server reading LOB data in a query that only updates a single integer column? The Query Plan The query plan for test 3 looks a bit more complex than before: In fact, the bottom level is exactly the same as we saw with the non-unique index.  The top level has heaps of new stuff though, which I’ll come to in a moment. You might be expecting to find that the Clustered Index Scan is now reading the lob_data column (for some reason).  After all, we need to explain where all the LOB logical reads are coming from.  Sadly, when we look at the properties of the Clustered Index Scan, we see exactly the same as before: SQL Server is still only reading the pk and some_value columns – so what’s doing the LOB reads? Updates that Sneakily Read Data We have to go as far as the Clustered Index Update operator before we see LOB data in the output list: [Expr1020] is a bit flag added by an earlier Compute Scalar.  It is set true if the some_value column has not been changed (part of the non-updating updates optimization I mentioned earlier). The Clustered Index Update operator adds two new columns: the lob_data column, and some_value_OLD.  The some_value_OLD column, as the name suggests, is the pre-update value of the some_value column.  At this point, the clustered index has already been updated with the new value, but we haven’t touched the nonclustered index yet. An interesting observation here is that the Clustered Index Update operator can read a column into the data flow as part of its update operation.  SQL Server could have read the LOB data as part of the initial Clustered Index Scan, but that would mean carrying the data through all the operations that occur prior to the Clustered Index Update.  The server knows it will have to go back to the clustered index row to update it, so it delays reading the LOB data until then.  Sneaky! Why the LOB Data Is Needed This is all very interesting (I hope), but why is SQL Server reading the LOB data?  For that matter, why does it need to pass the pre-update value of the some_value column out of the Clustered Index Update? The answer relates to the top row of the query plan for test 3.  
I’ll reproduce it here for convenience: Notice that this is a wide (per-index) update plan.  SQL Server used a narrow (per-row) update plan in test 2, where the Clustered Index Update took care of maintaining the nonclustered index too.  I’ll talk more about this difference shortly. The Split/Sort/Collapse combination is an optimization, which aims to make per-index update plans more efficient.  It does this by breaking each update into a delete/insert pair, reordering the operations, removing any redundant operations, and finally applying the net effect of all the changes to the nonclustered index. Imagine we had a unique index which currently holds three rows with the values 1, 2, and 3.  If we run a query that adds 1 to each row value, we would end up with values 2, 3, and 4.  The net effect of all the changes is the same as if we simply deleted the value 1, and added a new value 4. By applying net changes, SQL Server can also avoid false unique-key violations.  If we tried to immediately update the value 1 to a 2, it would conflict with the existing value 2 (which would soon be updated to 3 of course) and the query would fail.  You might argue that SQL Server could avoid the uniqueness violation by starting with the highest value (3) and working down.  That’s fine, but it’s not possible to generalize this logic to work with every possible update query. SQL Server has to use a wide update plan if it sees any risk of false uniqueness violations.  It’s worth noting that the logic SQL Server uses to detect whether these violations are possible has definite limits.  As a result, you will often receive a wide update plan, even when you can see that no violations are possible. Another benefit of this optimization is that it includes a sort on the index key as part of its work.  Processing the index changes in index key order promotes sequential I/O against the nonclustered index. A side-effect of all this is that the net changes might include one or more inserts.  In order to insert a new row in the index, SQL Server obviously needs all the columns – the key column and the included LOB column.  This is the reason SQL Server reads the LOB data as part of the Clustered Index Update. In addition, the some_value_OLD column is required by the Split operator (it turns updates into delete/insert pairs).  In order to generate the correct index key delete operation, it needs the old key value. The irony is that in this case the Split/Sort/Collapse optimization is anything but.  Reading all that LOB data is extremely expensive, so it is sad that the current version of SQL Server has no way to avoid it. Finally, for completeness, I should mention that the Filter operator is there to filter out the non-updating updates. Beating the Set-Based Update with a Cursor One situation where SQL Server can see that false unique-key violations aren’t possible is where it can guarantee that only one row is being updated.  
Armed with this knowledge, we can write a cursor (or the WHILE-loop equivalent) that updates one row at a time, and so avoids reading the LOB data: SET NOCOUNT ON; SET STATISTICS XML, IO, TIME OFF;   DECLARE @PK INTEGER, @StartTime DATETIME; SET @StartTime = GETUTCDATE();   DECLARE curUpdate CURSOR LOCAL FORWARD_ONLY KEYSET SCROLL_LOCKS FOR SELECT L.pk FROM LOBtest L ORDER BY L.pk ASC;   OPEN curUpdate;   WHILE (1 = 1) BEGIN FETCH NEXT FROM curUpdate INTO @PK;   IF @@FETCH_STATUS = -1 BREAK; IF @@FETCH_STATUS = -2 CONTINUE;   UPDATE dbo.LOBtest SET some_value = some_value - 1 WHERE CURRENT OF curUpdate; END;   CLOSE curUpdate; DEALLOCATE curUpdate;   SELECT DATEDIFF(MILLISECOND, @StartTime, GETUTCDATE()); That completes the update in 1280 milliseconds (remember test 3 took over 16 seconds!) I used the WHERE CURRENT OF syntax there and a KEYSET cursor, just for the fun of it.  One could just as well use a WHERE clause that specified the primary key value instead. Clustered Indexes A clustered index is the ultimate index with included columns: all non-key columns are included columns in a clustered index.  Let’s re-create the test table and data with an updatable primary key, and without any non-clustered indexes: IF OBJECT_ID(N'dbo.LOBtest', N'U') IS NOT NULL DROP TABLE dbo.LOBtest; GO CREATE TABLE dbo.LOBtest ( pk INTEGER NOT NULL, some_value INTEGER NULL, lob_data VARCHAR(MAX) NULL, another_column CHAR(5) NULL, CONSTRAINT [PK dbo.LOBtest pk] PRIMARY KEY CLUSTERED (pk ASC) ); GO DECLARE @Data VARCHAR(MAX); SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540);   WITH Numbers (n) AS ( SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0)) FROM master.sys.columns C1, master.sys.columns C2 ) INSERT LOBtest WITH (TABLOCKX) ( pk, some_value, lob_data ) SELECT TOP (1000) N.n, N.n, @Data FROM Numbers N WHERE N.n <= 1000; Now here’s a query to modify the cluster keys: UPDATE dbo.LOBtest SET pk = pk + 1; The query plan is: As you can see, the Split/Sort/Collapse optimization is present, and we also gain an Eager Table Spool, for Halloween protection.  In addition, SQL Server now has no choice but to read the LOB data in the Clustered Index Scan: The performance is not great, as you might expect (even though there is no non-clustered index to maintain): Table 'LOBtest'. Scan count 1, logical reads 2011, physical reads 0, read-ahead reads 0, lob logical reads 36015, lob physical reads 0, lob read-ahead reads 15992.   Table 'Worktable'. Scan count 1, logical reads 2040, physical reads 0, read-ahead reads 0, lob logical reads 34000, lob physical reads 0, lob read-ahead reads 8000.   SQL Server Execution Times: CPU time = 483 ms, elapsed time = 17884 ms. Notice how the LOB data is read twice: once from the Clustered Index Scan, and again from the work table in tempdb used by the Eager Spool. If you try the same test with a non-unique clustered index (rather than a primary key), you’ll get a much more efficient plan that just passes the cluster key (including uniqueifier) around (no LOB data or other non-key columns): A unique non-clustered index (on a heap) works well too: Both those queries complete in a few tens of milliseconds, with no LOB reads, and just a few thousand logical reads.  (In fact the heap is rather more efficient). There are lots more fun combinations to try that I don’t have space for here. Final Thoughts The behaviour shown in this post is not limited to LOB data by any means.  
If the conditions are met, any unique index that has included columns can produce similar behaviour – something to bear in mind when adding large INCLUDE columns to achieve covering queries, perhaps. Paul White Email: [email protected] Twitter: @PaulWhiteNZ

    Read the article

  • Unable to connect to Samba printer

    - by user127236
    I have a headless Ubuntu 12.04 server for files and printers. It shares files via Samba just fine. However, the HP PSC-750xi connected to the server via USB is not accessible from my Ubuntu 12.04 laptop. I can browse for it in the Printing control panel, but any attempt to authenticate my ID to the printer with my user credentials results in the error "This print share is not accessible". I have included the Samba smb.conf file below. Any help appreciated. Thanks... JGB # # Sample configuration file for the Samba suite for Debian GNU/Linux. # # # This is the main Samba configuration file. You should read the # smb.conf(5) manual page in order to understand the options listed # here. Samba has a huge number of configurable options most of which # are not shown in this example # # Some options that are often worth tuning have been included as # commented-out examples in this file. # - When such options are commented with ";", the proposed setting # differs from the default Samba behaviour # - When commented with "#", the proposed setting is the default # behaviour of Samba but the option is considered important # enough to be mentioned here # # NOTE: Whenever you modify this file you should run the command # "testparm" to check that you have not made any basic syntactic # errors. # A well-established practice is to name the original file # "smb.conf.master" and create the "real" config file with # testparm -s smb.conf.master >smb.conf # This minimizes the size of the really used smb.conf file # which, according to the Samba Team, impacts performance # However, use this with caution if your smb.conf file contains nested # "include" statements. See Debian bug #483187 for a case # where using a master file is not a good idea. # #======================= Global Settings ======================= [global] log file = /var/log/samba/log.%m passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* . obey pam restrictions = yes map to guest = bad user encrypt passwords = true passwd program = /usr/bin/passwd %u passdb backend = tdbsam dns proxy = no writeable = yes server string = %h server (Samba, Ubuntu) unix password sync = yes workgroup = WORKGROUP syslog = 0 panic action = /usr/share/samba/panic-action %d usershare allow guests = yes max log size = 1000 pam password change = yes ## Browsing/Identification ### # Change this to the workgroup/NT-domain name your Samba server will part of # server string is the equivalent of the NT Description field # Windows Internet Name Serving Support Section: # WINS Support - Tells the NMBD component of Samba to enable its WINS Server # wins support = no # WINS Server - Tells the NMBD components of Samba to be a WINS Client # Note: Samba can be either a WINS Server, or a WINS Client, but NOT both ; wins server = w.x.y.z # This will prevent nmbd to search for NetBIOS names through DNS. # What naming service and in what order should we use to resolve host names # to IP addresses ; name resolve order = lmhosts host wins bcast #### Networking #### # The specific set of interfaces / networks to bind to # This can be either the interface name or an IP address/netmask; # interface names are normally preferred ; interfaces = 127.0.0.0/8 eth0 # Only bind to the named interfaces and/or networks; you must use the # 'interfaces' option above to use this. # It is recommended that you enable this feature if your Samba machine is # not protected by a firewall or is a firewall itself. 
However, this # option cannot handle dynamic or non-broadcast interfaces correctly. ; bind interfaces only = yes #### Debugging/Accounting #### # This tells Samba to use a separate log file for each machine # that connects # Cap the size of the individual log files (in KiB). # If you want Samba to only log through syslog then set the following # parameter to 'yes'. # syslog only = no # We want Samba to log a minimum amount of information to syslog. Everything # should go to /var/log/samba/log.{smbd,nmbd} instead. If you want to log # through syslog you should set the following parameter to something higher. # Do something sensible when Samba crashes: mail the admin a backtrace ####### Authentication ####### # "security = user" is always a good idea. This will require a Unix account # in this server for every user accessing the server. See # /usr/share/doc/samba-doc/htmldocs/Samba3-HOWTO/ServerType.html # in the samba-doc package for details. # security = user # You may wish to use password encryption. See the section on # 'encrypt passwords' in the smb.conf(5) manpage before enabling. # If you are using encrypted passwords, Samba will need to know what # password database type you are using. # This boolean parameter controls whether Samba attempts to sync the Unix # password with the SMB password when the encrypted SMB password in the # passdb is changed. # For Unix password sync to work on a Debian GNU/Linux system, the following # parameters must be set (thanks to Ian Kahan <<[email protected]> for # sending the correct chat script for the passwd program in Debian Sarge). # This boolean controls whether PAM will be used for password changes # when requested by an SMB client instead of the program listed in # 'passwd program'. The default is 'no'. # This option controls how unsuccessful authentication attempts are mapped # to anonymous connections ########## Domains ########### # Is this machine able to authenticate users. Both PDC and BDC # must have this setting enabled. If you are the BDC you must # change the 'domain master' setting to no # ; domain logons = yes # # The following setting only takes effect if 'domain logons' is set # It specifies the location of the user's profile directory # from the client point of view) # The following required a [profiles] share to be setup on the # samba server (see below) ; logon path = \\%N\profiles\%U # Another common choice is storing the profile in the user's home directory # (this is Samba's default) # logon path = \\%N\%U\profile # The following setting only takes effect if 'domain logons' is set # It specifies the location of a user's home directory (from the client # point of view) ; logon drive = H: # logon home = \\%N\%U # The following setting only takes effect if 'domain logons' is set # It specifies the script to run during logon. The script must be stored # in the [netlogon] share # NOTE: Must be store in 'DOS' file format convention ; logon script = logon.cmd # This allows Unix users to be created on the domain controller via the SAMR # RPC pipe. The example command creates a user account with a disabled Unix # password; please adapt to your needs ; add user script = /usr/sbin/adduser --quiet --disabled-password --gecos "" %u # This allows machine accounts to be created on the domain controller via the # SAMR RPC pipe. 
# The following assumes a "machines" group exists on the system ; add machine script = /usr/sbin/useradd -g machines -c "%u machine account" -d /var/lib/samba -s /bin/false %u # This allows Unix groups to be created on the domain controller via the SAMR # RPC pipe. ; add group script = /usr/sbin/addgroup --force-badname %g ########## Printing ########## # If you want to automatically load your printer list rather # than setting them up individually then you'll need this # load printers = yes # lpr(ng) printing. You may wish to override the location of the # printcap file ; printing = bsd ; printcap name = /etc/printcap # CUPS printing. See also the cupsaddsmb(8) manpage in the # cupsys-client package. ; printing = cups ; printcap name = cups ############ Misc ############ # Using the following line enables you to customise your configuration # on a per machine basis. The %m gets replaced with the netbios name # of the machine that is connecting ; include = /home/samba/etc/smb.conf.%m # Most people will find that this option gives better performance. # See smb.conf(5) and /usr/share/doc/samba-doc/htmldocs/Samba3-HOWTO/speed.html # for details # You may want to add the following on a Linux system: # SO_RCVBUF=8192 SO_SNDBUF=8192 # socket options = TCP_NODELAY # The following parameter is useful only if you have the linpopup package # installed. The samba maintainer and the linpopup maintainer are # working to ease installation and configuration of linpopup and samba. ; message command = /bin/sh -c '/usr/bin/linpopup "%f" "%m" %s; rm %s' & # Domain Master specifies Samba to be the Domain Master Browser. If this # machine will be configured as a BDC (a secondary logon server), you # must set this to 'no'; otherwise, the default behavior is recommended. # domain master = auto # Some defaults for winbind (make sure you're not using the ranges # for something else.) ; idmap uid = 10000-20000 ; idmap gid = 10000-20000 ; template shell = /bin/bash # The following was the default behaviour in sarge, # but samba upstream reverted the default because it might induce # performance issues in large organizations. # See Debian bug #368251 for some of the consequences of *not* # having this setting and smb.conf(5) for details. ; winbind enum groups = yes ; winbind enum users = yes # Setup usershare options to enable non-root users to share folders # with the net usershare command. # Maximum number of usershare. 0 (default) means that usershare is disabled. ; usershare max shares = 100 # Allow users who've been granted usershare privileges to create # public shares, not just authenticated ones #======================= Share Definitions ======================= # Un-comment the following (and tweak the other settings below to suit) # to enable the default home directory shares. This will share each # user's home director as \\server\username ;[homes] ; comment = Home Directories ; browseable = no # By default, the home directories are exported read-only. Change the # next parameter to 'no' if you want to be able to write to them. ; read only = yes # File creation mask is set to 0700 for security reasons. If you want to # create files with group=rw permissions, set next parameter to 0775. ; create mask = 0700 # Directory creation mask is set to 0700 for security reasons. If you want to # create dirs. with group=rw permissions, set next parameter to 0775. ; directory mask = 0700 # By default, \\server\username shares can be connected to by anyone # with access to the samba server. 
Un-comment the following parameter # to make sure that only "username" can connect to \\server\username # The following parameter makes sure that only "username" can connect # # This might need tweaking when using external authentication schemes ; valid users = %S # Un-comment the following and create the netlogon directory for Domain Logons # (you need to configure Samba to act as a domain controller too.) ;[netlogon] ; comment = Network Logon Service ; path = /home/samba/netlogon ; guest ok = yes ; read only = yes # Un-comment the following and create the profiles directory to store # users profiles (see the "logon path" option above) # (you need to configure Samba to act as a domain controller too.) # The path below should be writable by all users so that their # profile directory may be created the first time they log on ;[profiles] ; comment = Users profiles ; path = /home/samba/profiles ; guest ok = no ; browseable = no ; create mask = 0600 ; directory mask = 0700 [printers] comment = All Printers browseable = no path = /var/spool/samba printable = yes guest ok = no read only = yes create mask = 0700 # Windows clients look for this share name as a source of downloadable # printer drivers [print$] comment = Printer Drivers browseable = yes writeable = no path = /var/lib/samba/printers # Uncomment to allow remote administration of Windows print drivers. # You may need to replace 'lpadmin' with the name of the group your # admin users are members of. # Please note that you also need to set appropriate Unix permissions # to the drivers directory for these users to have write rights in it ; write list = root, @lpadmin # A sample share for sharing your CD-ROM with others. ;[cdrom] ; comment = Samba server's CD-ROM ; read only = yes ; locking = no ; path = /cdrom ; guest ok = yes # The next two parameters show how to auto-mount a CD-ROM when the # cdrom share is accesed. For this to work /etc/fstab must contain # an entry like this: # # /dev/scd0 /cdrom iso9660 defaults,noauto,ro,user 0 0 # # The CD-ROM gets unmounted automatically after the connection to the # # If you don't want to use auto-mounting/unmounting make sure the CD # is mounted on /cdrom # ; preexec = /bin/mount /cdrom ; postexec = /bin/umount /cdrom [mediafiles] path = /media/multimedia/
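    Hard to be certain from the config alone, but two things commonly produce exactly this "print share is not accessible" symptom: the connecting user has a Unix account on the server but no entry in Samba's own password database, and the [printers] share requires authentication (guest ok = no). A sketch of the first thing worth trying on the server (the username is a placeholder); if anonymous printing is acceptable, flipping guest ok to yes under [printers] is the other obvious knob:

```bash
# Samba keeps its own password database; a Unix account alone is not enough.
sudo smbpasswd -a yourusername          # create/enable the Samba password entry
sudo service smbd restart               # pick up the change (Ubuntu 12.04)
sudo service nmbd restart
```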

    Read the article

  • MERGE Bug with Filtered Indexes

    - by Paul White
    A MERGE statement can fail, and incorrectly report a unique key violation when: The target table uses a unique filtered index; and No key column of the filtered index is updated; and A column from the filtering condition is updated; and Transient key violations are possible Example Tables Say we have two tables, one that is the target of a MERGE statement, and another that contains updates to be applied to the target.  The target table contains three columns, an integer primary key, a single character alternate key, and a status code column.  A filtered unique index exists on the alternate key, but is only enforced where the status code is ‘a’: CREATE TABLE #Target ( pk integer NOT NULL, ak character(1) NOT NULL, status_code character(1) NOT NULL,   PRIMARY KEY (pk) );   CREATE UNIQUE INDEX uq1 ON #Target (ak) INCLUDE (status_code) WHERE status_code = 'a'; The changes table contains just an integer primary key (to identify the target row to change) and the new status code: CREATE TABLE #Changes ( pk integer NOT NULL, status_code character(1) NOT NULL,   PRIMARY KEY (pk) ); Sample Data The sample data for the example is: INSERT #Target (pk, ak, status_code) VALUES (1, 'A', 'a'), (2, 'B', 'a'), (3, 'C', 'a'), (4, 'A', 'd');   INSERT #Changes (pk, status_code) VALUES (1, 'd'), (4, 'a');          Target                     Changes +-----------------------+    +------------------+ ¦ pk ¦ ak ¦ status_code ¦    ¦ pk ¦ status_code ¦ ¦----+----+-------------¦    ¦----+-------------¦ ¦  1 ¦ A  ¦ a           ¦    ¦  1 ¦ d           ¦ ¦  2 ¦ B  ¦ a           ¦    ¦  4 ¦ a           ¦ ¦  3 ¦ C  ¦ a           ¦    +------------------+ ¦  4 ¦ A  ¦ d           ¦ +-----------------------+ The target table’s alternate key (ak) column is unique, for rows where status_code = ‘a’.  Applying the changes to the target will change row 1 from status ‘a’ to status ‘d’, and row 4 from status ‘d’ to status ‘a’.  The result of applying all the changes will still satisfy the filtered unique index, because the ‘A’ in row 1 will be deleted from the index and the ‘A’ in row 4 will be added. Merge Test One Let’s now execute a MERGE statement to apply the changes: MERGE #Target AS t USING #Changes AS c ON c.pk = t.pk WHEN MATCHED AND c.status_code <> t.status_code THEN UPDATE SET status_code = c.status_code; The MERGE changes the two target rows as expected.  The updated target table now contains: +-----------------------+ ¦ pk ¦ ak ¦ status_code ¦ ¦----+----+-------------¦ ¦  1 ¦ A  ¦ d           ¦ <—changed from ‘a’ ¦  2 ¦ B  ¦ a           ¦ ¦  3 ¦ C  ¦ a           ¦ ¦  4 ¦ A  ¦ a           ¦ <—changed from ‘d’ +-----------------------+ Merge Test Two Now let’s repopulate the changes table to reverse the updates we just performed: TRUNCATE TABLE #Changes;   INSERT #Changes (pk, status_code) VALUES (1, 'a'), (4, 'd'); This will change row 1 back to status ‘a’ and row 4 back to status ‘d’.  
As a reminder, the current state of the tables is:          Target                        Changes +-----------------------+    +------------------+ ¦ pk ¦ ak ¦ status_code ¦    ¦ pk ¦ status_code ¦ ¦----+----+-------------¦    ¦----+-------------¦ ¦  1 ¦ A  ¦ d           ¦    ¦  1 ¦ a           ¦ ¦  2 ¦ B  ¦ a           ¦    ¦  4 ¦ d           ¦ ¦  3 ¦ C  ¦ a           ¦    +------------------+ ¦  4 ¦ A  ¦ a           ¦ +-----------------------+ We execute the same MERGE statement: MERGE #Target AS t USING #Changes AS c ON c.pk = t.pk WHEN MATCHED AND c.status_code <> t.status_code THEN UPDATE SET status_code = c.status_code; However this time we receive the following message: Msg 2601, Level 14, State 1, Line 1 Cannot insert duplicate key row in object 'dbo.#Target' with unique index 'uq1'. The duplicate key value is (A). The statement has been terminated. Applying the changes using UPDATE Let’s now rewrite the MERGE to use UPDATE instead: UPDATE t SET status_code = c.status_code FROM #Target AS t JOIN #Changes AS c ON t.pk = c.pk WHERE c.status_code <> t.status_code; This query succeeds where the MERGE failed.  The two rows are updated as expected: +-----------------------+ ¦ pk ¦ ak ¦ status_code ¦ ¦----+----+-------------¦ ¦  1 ¦ A  ¦ a           ¦ <—changed back to ‘a’ ¦  2 ¦ B  ¦ a           ¦ ¦  3 ¦ C  ¦ a           ¦ ¦  4 ¦ A  ¦ d           ¦ <—changed back to ‘d’ +-----------------------+ What went wrong with the MERGE? In this test, the MERGE query execution happens to apply the changes in the order of the ‘pk’ column. In test one, this was not a problem: row 1 is removed from the unique filtered index by changing status_code from ‘a’ to ‘d’ before row 4 is added.  At no point does the table contain two rows where ak = ‘A’ and status_code = ‘a’. In test two, however, the first change was to change row 1 from status ‘d’ to status ‘a’.  This change means there would be two rows in the filtered unique index where ak = ‘A’ (both row 1 and row 4 meet the index filtering criteria ‘status_code = a’). The storage engine does not allow the query processor to violate a unique key (unless IGNORE_DUP_KEY is ON, but that is a different story, and doesn’t apply to MERGE in any case).  This strict rule applies regardless of the fact that if all changes were applied, there would be no unique key violation (row 4 would eventually be changed from ‘a’ to ‘d’, removing it from the filtered unique index, and resolving the key violation). Why it went wrong The query optimizer usually detects when this sort of temporary uniqueness violation could occur, and builds a plan that avoids the issue.  I wrote about this a couple of years ago in my post Beware Sneaky Reads with Unique Indexes (you can read more about the details on pages 495-497 of Microsoft SQL Server 2008 Internals or in Craig Freedman’s blog post on maintaining unique indexes).  To summarize though, the optimizer introduces Split, Filter, Sort, and Collapse operators into the query plan to: Split each row update into delete followed by an inserts Filter out rows that would not change the index (due to the filter on the index, or a non-updating update) Sort the resulting stream by index key, with deletes before inserts Collapse delete/insert pairs on the same index key back into an update The effect of all this is that only net changes are applied to an index (as one or more insert, update, and/or delete operations).  
In this case, the net effect is a single update of the filtered unique index: changing the row for ak = ‘A’ from pk = 4 to pk = 1.  In case that is less than 100% clear, let’s look at the operation in test two again:          Target                     Changes                   Result +-----------------------+    +------------------+    +-----------------------+ ¦ pk ¦ ak ¦ status_code ¦    ¦ pk ¦ status_code ¦    ¦ pk ¦ ak ¦ status_code ¦ ¦----+----+-------------¦    ¦----+-------------¦    ¦----+----+-------------¦ ¦  1 ¦ A  ¦ d           ¦    ¦  1 ¦ d           ¦    ¦  1 ¦ A  ¦ a           ¦ ¦  2 ¦ B  ¦ a           ¦    ¦  4 ¦ a           ¦    ¦  2 ¦ B  ¦ a           ¦ ¦  3 ¦ C  ¦ a           ¦    +------------------+    ¦  3 ¦ C  ¦ a           ¦ ¦  4 ¦ A  ¦ a           ¦                            ¦  4 ¦ A  ¦ d           ¦ +-----------------------+                            +-----------------------+ From the filtered index’s point of view (filtered for status_code = ‘a’ and shown in nonclustered index key order) the overall effect of the query is:   Before           After +---------+    +---------+ ¦ pk ¦ ak ¦    ¦ pk ¦ ak ¦ ¦----+----¦    ¦----+----¦ ¦  4 ¦ A  ¦    ¦  1 ¦ A  ¦ ¦  2 ¦ B  ¦    ¦  2 ¦ B  ¦ ¦  3 ¦ C  ¦    ¦  3 ¦ C  ¦ +---------+    +---------+ The single net change there is a change of pk from 4 to 1 for the nonclustered index entry ak = ‘A’.  This is the magic performed by the split, sort, and collapse.  Notice in particular how the original changes to the index key (on the ‘ak’ column) have been transformed into an update of a non-key column (pk is included in the nonclustered index).  By not updating any nonclustered index keys, we are guaranteed to avoid transient key violations. The Execution Plans The estimated MERGE execution plan that produces the incorrect key-violation error looks like this (click to enlarge in a new window): The successful UPDATE execution plan is (click to enlarge in a new window): The MERGE execution plan is a narrow (per-row) update.  The single Clustered Index Merge operator maintains both the clustered index and the filtered nonclustered index.  The UPDATE plan is a wide (per-index) update.  The clustered index is maintained first, then the Split, Filter, Sort, Collapse sequence is applied before the nonclustered index is separately maintained. There is always a wide update plan for any query that modifies the database. The narrow form is a performance optimization where the number of rows is expected to be relatively small, and is not available for all operations.  One of the operations that should disallow a narrow plan is maintaining a unique index where intermediate key violations could occur. Workarounds The MERGE can be made to work (producing a wide update plan with split, sort, and collapse) by: Adding all columns referenced in the filtered index’s WHERE clause to the index key (INCLUDE is not sufficient); or Executing the query with trace flag 8790 set e.g. OPTION (QUERYTRACEON 8790). Undocumented trace flag 8790 forces a wide update plan for any data-changing query (remember that a wide update plan is always possible).  Either change will produce a successfully-executing wide update plan for the MERGE that failed previously. Conclusion The optimizer fails to spot the possibility of transient unique key violations with MERGE under the conditions listed at the start of this post.  
It incorrectly chooses a narrow plan for the MERGE, which cannot provide the protection of a split/sort/collapse sequence for the nonclustered index maintenance. The MERGE plan may fail at execution time depending on the order in which rows are processed, and the distribution of data in the database.  Worse, a previously solid MERGE query may suddenly start to fail unpredictably if a filtered unique index is added to the merge target table at any point. Connect bug filed here Tests performed on SQL Server 2012 SP1 CUI (build 11.0.3321) x64 Developer Edition © 2012 Paul White – All Rights Reserved Twitter: @SQL_Kiwi Email: [email protected]

    Read the article

  • PowerShell: can't modify environment variables

    - by IttayD
    I have an environment variable set via "system properties - advanced - Environment Variables". I modified the variable's value. In cmd, I see the new value. In PowerShell, the value is still the old value. Trying to set it with [Environment]::SetEnvironmentVariable doesn't have any effect.
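    A sketch of the usual explanation (MY_VAR is a placeholder): a PowerShell session keeps a copy of the environment it was started with, so a change made in System Properties only shows up in consoles started afterwards, and [Environment]::SetEnvironmentVariable writes to the registry or to the current process depending on the scope argument - it never reaches back into an already-running session's $env: drive unless you update that too:

```powershell
# Persist the value for future processes (use 'Machine' for system-wide, needs elevation):
[Environment]::SetEnvironmentVariable('MY_VAR', 'new value', 'User')

# Update the current session as well, since it still holds its startup copy:
$env:MY_VAR = 'new value'

# Or pull the persisted value back into this session:
$env:MY_VAR = [Environment]::GetEnvironmentVariable('MY_VAR', 'User')
```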

    Read the article
