Search Results

Search found 3205 results on 129 pages for 'unexpected shutdown'.


  • Parallelism in .NET – Part 3, Imperative Data Parallelism: Early Termination

    - by Reed
    Although simple data parallelism allows us to easily parallelize many of our iteration statements, there are cases that it does not handle well. In my previous discussion, I focused on data parallelism with no shared state, where every element is processed in exactly the same way. Unfortunately, there are many common cases where this does not happen. If we are dealing with a loop that requires early termination, extra care is required when parallelizing. Often, while processing in a loop, once a certain condition is met, it is no longer necessary to continue processing. This may be a matter of finding a specific element within the collection, or reaching some error case. The important distinction here is that it is often impossible to know, until runtime, which set of elements needs to be processed.
    In my initial discussion of data parallelism, I mentioned that this technique is a candidate when you can decompose the problem based on the data involved, and you wish to apply a single operation concurrently on all of the elements of a collection. This covers many of the potential cases, but sometimes, after processing some of the elements, we need to stop processing. As an example, let's go back to our previous Parallel.ForEach example with contacting a customer. However, this time, we'll change the requirements slightly. In this case, we'll add an extra condition – if the store is unable to email the customer, we will exit gracefully. The thinking here, of course, is that if the store is currently unable to email, the next time this operation runs it will handle the same situation, so we can just skip our processing entirely. The original, serial case, with this extra condition, might look something like the following:

    foreach(var customer in customers)
    {
        // Run some process that takes some time...
        DateTime lastContact = theStore.GetLastContact(customer);
        TimeSpan timeSinceContact = DateTime.Now - lastContact;

        // If it's been more than two weeks, send an email, and update...
        if (timeSinceContact.Days > 14)
        {
            // Exit gracefully if we fail to email, since this
            // entire process can be repeated later without issue.
            if (theStore.EmailCustomer(customer) == false)
                break;

            customer.LastEmailContact = DateTime.Now;
        }
    }

    Here, we're processing our loop, but at any point, if we fail to send our email successfully, we just abandon this process and assume that it will get handled correctly the next time our routine is run. If we try to parallelize this using Parallel.ForEach, as we did previously, we'll run into an error almost immediately: the break statement we're using is only valid when enclosed within an iteration statement, such as foreach. When we switch to Parallel.ForEach, we're no longer within an iteration statement – we're a delegate running in a method. This needs to be handled slightly differently when parallelized.
    Instead of using the break statement, we need to utilize a new class in the Task Parallel Library: ParallelLoopState. The ParallelLoopState class is intended to give concurrently running loop bodies a way to interact with each other, and it provides us with a way to break out of a loop. In order to use this, we use a different overload of Parallel.ForEach which takes an IEnumerable<T> and an Action<T, ParallelLoopState> instead of an Action<T>. Using this, we can parallelize the above operation by doing:

    Parallel.ForEach(customers, (customer, parallelLoopState) =>
    {
        // Run some process that takes some time...
        DateTime lastContact = theStore.GetLastContact(customer);
        TimeSpan timeSinceContact = DateTime.Now - lastContact;

        // If it's been more than two weeks, send an email, and update...
        if (timeSinceContact.Days > 14)
        {
            // Exit gracefully if we fail to email, since this
            // entire process can be repeated later without issue.
            if (theStore.EmailCustomer(customer) == false)
                parallelLoopState.Break();
            else
                customer.LastEmailContact = DateTime.Now;
        }
    });

    There are a couple of important points here. First, we didn't actually instantiate the ParallelLoopState instance. It was provided directly to us via the Parallel class. All we needed to do was change our lambda expression to reflect that we want to use the loop state, and the Parallel class creates an instance for our use. We also needed to change our logic slightly when we call Break(). Since Break() doesn't stop the program flow within our block, we needed to add an else case so that we only set the property on the customer when we succeeded. This same technique can be used to break out of a Parallel.For loop.
    That being said, there is a huge difference between using ParallelLoopState to cause early termination and using break in a standard iteration statement. When dealing with a loop serially, break will immediately terminate the processing within the closest enclosing loop statement. Calling ParallelLoopState.Break(), however, has a very different behavior. The issue is that, now, we're no longer processing one element at a time. If we break in one of our threads, there are other threads that will likely still be executing. This leads to an important observation about termination of parallel code: Early termination in parallel routines is not immediate. Code will continue to run after you request a termination. This may seem problematic at first, but it is something you just need to keep in mind while designing your routine. ParallelLoopState.Break() should be thought of as a request. We are telling the runtime that no elements that were in the collection past the element we're currently processing need to be processed, and leaving it up to the runtime to decide how to handle this as gracefully as possible. Although this may seem problematic at first, it is a good thing. If the runtime tried to immediately stop processing, many of our elements would be partially processed. It would be like putting a return statement at a random location in our loop body – which could have horrific consequences for our code's maintainability. In order to understand and effectively write parallel routines, we, as developers, need a subtle but profound shift in our thinking. We can no longer think in terms of sequential processes, but rather need to think in terms of requests to the system that may be handled differently than we'd first expect.
    This is more natural to developers who have dealt with asynchronous models previously, but it is an important distinction when moving to concurrent programming models. As an example, I'll discuss the Break() method. ParallelLoopState.Break() functions in a way that may be unexpected at first. When you call Break() from a loop body, the runtime will continue to process all elements of the collection that were found prior to the element that was being processed when the Break() method was called. This is done to keep the behavior of the Break() method as close to the behavior of the break statement as possible. We can see the behavior in this simple code:

    var collection = Enumerable.Range(0, 20);
    var pResult = Parallel.ForEach(collection, (element, state) =>
    {
        if (element > 10)
        {
            Console.WriteLine("Breaking on {0}", element);
            state.Break();
        }
        Console.WriteLine(element);
    });

    If we run this, we get a result that may seem unexpected at first:

    0
    2
    1
    5
    6
    3
    4
    10
    Breaking on 11
    11
    Breaking on 12
    12
    9
    Breaking on 13
    13
    7
    8
    Breaking on 15
    15

    What is occurring here is that we loop until we find the first element where the element is greater than 10. In this case, this was found, the first time, when one of our threads reached element 11. It requested that the loop stop by calling Break() at this point. However, the loop continued processing until all of the elements less than 11 were completed, then terminated. This means that it will guarantee that elements 9, 7, and 8 are completed before it stops processing. You can see that our other threads that were running each tried to break as well, but since Break() was called on the element with a value of 11, it decides which elements (0-10) must be processed.
    If this behavior is not desirable, there is another option. Instead of calling ParallelLoopState.Break(), you can call ParallelLoopState.Stop(). The Stop() method requests that the runtime terminate as soon as possible, without guaranteeing that any other elements are processed. Stop() will not stop the processing within an element, so elements already being processed will continue to be processed. It will prevent new elements, even ones found earlier in the collection, from being processed. Also, when Stop() is called, the ParallelLoopState's IsStopped property will return true. This lets longer running processes poll for this value, and return after performing any necessary cleanup (see the short sketch at the end of this entry).
    The basic rule of thumb for choosing between Break() and Stop() is the following: Use ParallelLoopState.Stop() when possible, since it terminates more quickly. This is particularly useful in situations where you are searching for an element or a condition in the collection. Once you've found it, you do not need to do any other processing, so Stop() is more appropriate. Use ParallelLoopState.Break() if you need to more closely match the behavior of the C# break statement.
    Both methods behave differently than our C# break statement. Unfortunately, when parallelizing a routine, more thought and care needs to be put into every aspect of your routine than you may otherwise expect. This is due to my second observation: Parallelizing a routine will almost always change its behavior. This sounds crazy at first, but it's a concept that's so simple it's easy to forget. We're purposely telling the system to process more than one thing at the same time, which means that the sequence in which things get processed is no longer deterministic.
It is easy to change the behavior of your routine in very subtle ways by introducing parallelism.  Often, the changes are not avoidable, even if they don’t have any adverse side effects.  This leads to my final observation for this post: Parallelization is something that should be handled with care and forethought, added by design, and not just introduced casually.
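    To make the Stop() and IsStopped behavior described above concrete, here is a minimal C# sketch; it is not code from the original article, and the collection, the search value, and the class name are made up purely for illustration:

    using System;
    using System.Linq;
    using System.Threading.Tasks;

    class StopExample
    {
        static void Main()
        {
            var collection = Enumerable.Range(0, 1000);

            Parallel.ForEach(collection, (element, state) =>
            {
                // A longer-running body can poll IsStopped and bail out early
                // once another iteration has already requested termination.
                if (state.IsStopped)
                    return;

                if (element == 42) // the (hypothetical) element we were searching for
                {
                    Console.WriteLine("Found {0}, stopping.", element);
                    state.Stop(); // request termination; no ordering guarantees
                    return;
                }

                Console.WriteLine(element);
            });
        }
    }

    Because Stop() is only a request, the WriteLine calls already in flight on other threads may still appear after "Found 42, stopping." is printed, which is exactly the non-deterministic behavior the post describes.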

    Read the article

  • In HLSL pixel shader , why is SV_POSITION different to other semantics?

    - by tina nyaa
    In my HLSL pixel shader, SV_POSITION seems to have different values to any other semantic I use. I don't understand why this is. Can you please explain it? For example, I am using a triangle with the following coordinates: (0.0f, 0.5f) (0.5f, -0.5f) (-0.5f, -0.5f) The w and z values are 0 and 1, respectively. This is the pixel shader. struct VS_IN { float4 pos : POSITION; }; struct PS_IN { float4 pos : SV_POSITION; float4 k : LOLIMASEMANTIC; }; PS_IN VS( VS_IN input ) { PS_IN output = (PS_IN)0; output.pos = input.pos; output.k = input.pos; return output; } float4 PS( PS_IN input ) : SV_Target { // screenshot 1 return input.pos; // screenshot 2 return input.k; } technique10 Render { pass P0 { SetGeometryShader( 0 ); SetVertexShader( CompileShader( vs_4_0, VS() ) ); SetPixelShader( CompileShader( ps_4_0, PS() ) ); } } Screenshot 1: http://i.stack.imgur.com/rutGU.png Screenshot 2: http://i.stack.imgur.com/NStug.png (Sorry, I'm not allowed to post images until I have a lot of 'reputation') When I use the first statement (result is first screenshot), the one that uses the SV_POSITION semantic, the result is completely unexpected and is yellow, whereas using any other semantic will produce the expected result. Why is this?

    Read the article

  • AutoVue Integrates with Primavera P6

    - by celine.beck
    Oracle's Primavera P6 Enterprise Project Portfolio Management is an integrated project portfolio management (PPM) application that helps select the right strategic mix of projects, balance resource capacity, manage project risk and complete projects on time and within budget. AutoVue 19.3 and later versions (release 20.0) now integrate out of the box with the Web version of Oracle Primavera P6 release 7. The integration between the two products, which was announced during Oracle Open World 2009, provides project teams with ready access to any project documents directly from within the context of P6 in support for project scope definition and project planning and execution. You can learn more about the integration between AutoVue and Primavera P6 by: Listening to the Oracle Appcast entitled Enhance Primavera Project Document Collaboration with AutoVue Enterprise Visualization Watching an Oracle Webcast about how to improve project success with document visualization and collaboration Watching a recorded demo of the integrated solution Teams involved in complex projects like construction or plant shutdown activities are highly interdependent: the decisions of one affecting the actions of many others. This coupled with increasing project complexity, a vast array of players and heavy engineering and document-intensive workflows makes it more challenging to complete jobs on time and within budget. Organizations need complete visibility into project information, as well as robust project planning, risk analysis and resource balancing capabilities similar to those featured in Primavera P6 ; they also need to make sure that all project stakeholders, even those who neither understand engineering drawings nor are interested in engineering details that go beyond their specific needs, have ready access to technically advanced project information. This is exactly what the integration between AutoVue and Primavera delivers: ready access to any project information attached to Primavera projects, tasks or activities via AutoVue. There is no need for users to waste time searching for project-related documents or disrupting engineers for printouts, users have all the context they need to make sound decisions right from within Primavera P6 with a single click of a button. We are very excited about this new integration. If you are using Primavera and / or Primavera tied with AutoVue, we would be interested in getting your feedback on this integration! Please do not hesitate to post your comments / reactions on the blog!

    Read the article

  • sudo: /usr/lib/sudo/sudoers.so must be owned by uid 0

    - by 7UR7L3
    Whenever I try to do anything at all that requires my password it returns this: u7ur7l3@ubuntu:~$ sudo sudo: /usr/lib/sudo/sudoers.so must be owned by uid 0 sudo: fatal error, unable to load plugins u7ur7l3@ubuntu:~$ So I can't install anything from the Software Center / package manager or run any commands in terminal that require my password. I can log in, but that's pretty much it. I accidentally changed the permissions of some files, then changed some more trying to fix it :/. Now I'm completely lost as to what to do. This is what happened when I tried to get sudo working again using pkexec: u7ur7l3@ubuntu:~$ pkexec chown root /usr/lib/sudo/sudoers.so Error getting authority: Error initializing authority: Error calling StartServiceByName for org.freedesktop.PolicyKit1: GDBus.Error:org.freedesktop.DBus.Error.Spawn.ExecFailed: Failed to execute program /usr/lib/dbus-1.0/dbus-daemon-launch-helper: Success u7ur7l3@ubuntu:~$ sudo ls sudo: /usr/lib/sudo/sudoers.so must be owned by uid 0 sudo: fatal error, unable to load plugins And to change permissions I was using Root Actions as a dolphin service/ plugin thing, so history doesn't show me the permission changes. I just realized that sounds don't work at all anymore. When I go into Phonon my default settings and playback devices aren't even there. Also I don't have the option to shutdown, I can only log out or leave.

    Read the article

  • SQLAuthority News – Best SQLAuthority Posts of May

    - by pinaldave
    The month of May is always interesting and full of enthusiasm. Lots of good articles were shared and there was plenty of enthusiastic communication on technology. This month we had the 140 Character Cartoon Challenge Winner. We also had an interesting conversation on what kind of lock WITH NOLOCK takes on objects. A quick tutorial on how to import CSV files into a database using SSIS started a few other related questions. I also had a fun time with community activities. I attended MVP Open Day. Vijay Raj also took awesome photos of my daughter – Shaivi. I have gained my faith back in Social Media and have created my Facebook Page; if you like SQLAuthority.com I request you to Like the Facebook page as well. I am very active on Twitter (@pinaldave) and answer lots of technical questions if I am online at the time. During this month I also learned a couple of old things by accident: 1) Restart and Shutdown Remote Computer 2) SSMS has a web browser. If you have made it till here – I suggest you take part in a very interesting conversation here – Why SELECT * throws an error but SELECT COUNT(*) does not? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Why / how does XNA's right-handed coordinate system affect anything if you can specify near/far Z values?

    - by vargonian
    I am told repeatedly that XNA Game Studio uses a right-handed coordinate system, and I understand the difference between a right-handed and left-handed coordinate system. But given that you can use a method like Matrix.CreateOrthographicOffCenter to create your own custom projection matrix, specifying the left, right, top, bottom, zNear and zFar values, when does XNA's coordinate system come into play? For example, I'm told that in a right-handed coordinate system, increasingly negative Z values go "into" the screen. But I can easily create my projection matrix like this: Matrix.CreateOrthographicOffCenter(left, right, bottom, top, 0.1f, 10000f); I've now specified a lower value for the near Z than the far Z, which, as I understand it, means that positive Z now goes into the screen. I can similarly tweak the values of left/right/top/bottom to achieve similar results. If specifying a lower zNear than zFar value doesn't affect the Z direction of the coordinate system, what does it do? And when is the right-handed coordinate system enforced? The reason I ask is that I'm trying to implement a 2.5D camera that supports zooming and rotation, and I've spent two full days encountering one unexpected result after another.
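    Editor's note: a minimal C# sketch may help illustrate where the handedness actually shows up; it is not from the original question, and the class name and vertex values are hypothetical. The zNear/zFar arguments to the projection builders are distances in front of the camera rather than signed Z coordinates, so passing near < far does not flip handedness; the right-handed convention appears in the view transform, where the camera looks toward Vector3.Forward = (0, 0, -1):

    using System;
    using Microsoft.Xna.Framework;

    public static class HandednessDemo
    {
        public static void Run()
        {
            // View: camera at the origin, looking toward Vector3.Forward (0, 0, -1).
            // For this camera the view matrix is effectively the identity.
            Matrix view = Matrix.CreateLookAt(Vector3.Zero, Vector3.Forward, Vector3.Up);

            // Projection: zNear < zFar as usual; these are distances along the
            // view direction, not Z coordinates, so they do not change handedness.
            Matrix projection = Matrix.CreateOrthographicOffCenter(-1f, 1f, -1f, 1f, 0.1f, 10000f);

            Vector3 inFront = new Vector3(0f, 0f, -5f); // negative Z: "into" the screen
            Vector3 behind  = new Vector3(0f, 0f,  5f); // positive Z: behind the camera

            // After the view transform, visible points have negative Z in view space.
            Console.WriteLine(Vector3.Transform(inFront, view)); // stays at z = -5 (visible)
            Console.WriteLine(Vector3.Transform(behind, view));  // stays at z = +5 (culled)
        }
    }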

    Read the article

  • Java Spotlight Episode 105: Mark Reinhold on the Future of Java

    - by Roger Brinkley
    Our yearly interview with Mark Reinhold, Chief Java Architect, Java Platform Group on the future of Java. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link:  Java Spotlight Podcast in iTunes. Show Notes News Two Java Update Releases New Java SE 6 software updates from Apple for OS X 10.8, 10.7 and 10.6 are now live and available to all customers via the Mac App Store / Software Update. The JavaFX Community Site on Java.net JSR 360: Connected Limited Device Configuration 8 JSR 361: Java ME Embedded Profile 2012 JCP EC Election Ballot open Meet the EC Candidates Recording and Materials Events Oct 22-23, Freescale Technology Forum - Japan, Tokyo, Japan Oct 23-25, EclipseCon Europe, Ludwigsburg, Germany Oct 30-Nov 1, Arm TechCon, Santa Clara, United States of America Oct 31, JFall, Hart van Holland, Netherlands Nov 2-3, JMaghreb, Rabat, Morocco Nov 5-9, Øredev Developer Conference, Malmö, Sweden Nov 13-17, Devoxx, Antwerp, Belgium Nov 20-22, DOAG 2012, Nuremberg, Germany Dec 3-5, jDays, Göteborg, Sweden Dec 4-6, JavaOne Latin America, Sao Paolo, Brazil Feature InterviewMark Reinhold is Chief Architect of the Java Platform Group at Oracle, where he works on the Java Platform, Standard Edition, and OpenJDK. His past contributions to the platform include character-stream readers and writers, reference objects, shutdown hooks, the NIO high-performance I/O APIs, library generification, and service loaders. Mark was the lead engineer for the 1.2 and 5.0 releases and the specification lead for Java SE 6. He is currently leading the Jigsaw and JDK 7 Projects in the OpenJDK Community. Mark holds a Ph.D. in Computer Science from the Massachusetts Institute of Technology. In this interview he discusses the future of Java Platform with regards to Jigsaw, Lambda, and Nashorn components as well as the OpenJDK community. What’s Cool QotD: Ubuntu 12.10 Release Notes on OpenJDK 7 New Lambda binary drop Development forest for Compact Profiles (JEP 161)

    Read the article

  • Read-only file system

    - by John
    The title might not be as descriptive as I would like it to be, but I couldn't come up with a better one. My server's file system went into read-only mode, and I don't understand why it did so or how to solve it. I can SSH into the server, and when trying to start apache2, for example, I get the following:

    username@srv1:~$ sudo service apache2 start
    [sudo] password for username:
    sudo: unable to open /var/lib/sudo/username/1: Read-only file system
     * Starting web server apache2
    (30)Read-only file system: apache2: could not open error log file /var/log/apache2/error.log.
    Unable to open logs
    Action 'start' failed.
    The Apache error log may have more information.

    When I try restarting the server I get:

    username@srv1:~$ sudo shutdown -r now
    [sudo] password for username:
    sudo: unable to open /var/lib/sudo/username/1: Read-only file system

    Once I restart it manually, it just starts up without any warning or message saying something is wrong. I hope somebody could point me in the right direction to resolve this issue. Thanks in advance!

    Read the article

  • TFS 2010 Server Name Change

    - by PearlFactory
    So I thought I would change the name of my machine so that the other devs can find the TFS server easily. TFS 2005 had the cool cmd line util tfsadminutil... alas, it is now gone. Here are the steps to complete:
    Edit the web.config, which on a default install is usually located at C:\Program Files\Microsoft Team Foundation Server 2010\Application Tier\Web Services\web.config: <add key="applicationDatabase" value="Data Source=JUSTIN\SQLI01;Initial Catalog=Tfs_Configuration;Integrated Security=True;" />
    The next step is to edit previous Solutions/Projects: 1) Open the solution file, i.e. ProductApp.sln 2) Edit the SccTeamFoundationServer URL under the Global section, i.e. change this to the new name.
    If the DB server is on the same machine, you will need to go in and remove the existing db user account assigned to the TFS DBs: remove the old [%machine_name%] value, i.e. the Tuned_Dev_PC_12\Justin user, from the above DBs. Now add the new Justin\Justin user account associated with the new machine name to the TFS & Reporting dbs... dbo or the TFSADMIN & TFSEXEC roles will either do in this case (or add both). Now either reapply the user or add the new account (remove the old account, i.e. Tuned_Dev_PC_12\justin). If DB permissions are set up correctly you will get a screen that looks like this. If it pauses or gets stuck, you need to look back at adding the correct DB perms to the JUSTIN\Justin user account.
    Also, if your project is still complaining about the old TFS name: 1) Team\Connect to new Team Foundation Server 2) Add\Remove TFS 3) Add the new TFS name. Once you have connected to the new TFS server, reload your project from TFS.. this way it removes a lot of the bugs that hang around in the local project\solution. This is similar to a VSS 2005 and older fix. Cheers (eta about 60-90 mins, so weigh up the need vs payoff.) Shutdown, restart.

    Read the article

  • Disaster Recovery Plan&ndash;Rebuild System Disk (Dell Server 2900 with PERC RAID controller)

    - by Jim Lahman
    Goal: Since the system disk is a RAID 1 mirrored set, we can rebuild the shadow set by replacing one of the good sets with a blank disk Steps Shutdown and power down server Remove the disk from bay 9, which is part of the system shadow set. Put this disk on the shelf Insert blank/old disk into the empty bay     Label the new disk before inserting it into the empty bay       Power up server During the booting process, the following message appears: “Some configured disks have been removed from your system…”       Press ‘C’ to Load Configuration utility             Press 'Y' to confirm to load the foreign configuration       In this example, the system shadow set is Disk Group 2.  (Before proceeding, confirm this is the disk group in your case).  Expanding the physical disks shows a disk in bay 8 and a missing disk in bay 9.  This is correct.   Now, we have to include the new inserted disk in this group       RAID controller reporting bay 9 is empty       There may be times when the new disk is seen as a foreign disk.  In this case, do the following:     Foreign disk is reported in bay 9 CTRL-N (Next Page) to Foreign Mgt All the disk groups will be displayed.  Typically, the disk group containing the foreign disk will be grey.  To remove the foreign disk Highlight Controller Press F2 Select Foreign Select Clear (do NOT import the configuration!)       Clear the foreign configuration Now the disk can be brought into the system shadow set disk group as a hot spare   To include the newly inserted disk into the system shadowset disk group, it must be brought in as a hot spare Highlight Disk Group 2 (VD Management) Hit F2 Select 'Manage Ded. HS'     Manage dedicated hot swap Select the disk in bay 9 (Hit space bar to select) Tab to 'OK'.  Hit the return key     Select hot spare to bring into RAID 1 mirror set   Rebuild automatically commences     Rebuild in process   Restart now or restart after rebuild is completed

    Read the article

  • T-SQL Tuesday #53-Matt's Making Me Do This!

    - by Most Valuable Yak (Rob Volk)
    Hello everyone! It's that time again, time for T-SQL Tuesday, the wonderful blog series started by Adam Machanic (b|t). This month we are hosted by Matt Velic (b|t) who asks the question, "Why So Serious?", in celebration of April Fool's Day. He asks the contributors for their dirty tricks. And for some reason that escapes me, he and Jeff Verheul (b|t) seem to think I might be able to write about those. Shocked, I am! Nah, not really. They're absolutely right, this one is gonna be fun! I took some inspiration from Matt's suggestions, namely Resource Governor and Login Triggers.  I've done some interesting login trigger stuff for a presentation, but nothing yet with Resource Governor. Best way to learn it! One of my oldest pet peeves is abuse of the sa login. Don't get me wrong, I use it too, but typically only as SQL Agent job owner. It's been a while since I've been stuck with it, but back when I started using SQL Server, EVERY application needed sa to function. It was hard-coded and couldn't be changed. (welllllll, that is if you didn't use a hex editor on the EXE file, but who would do such a thing?) My standard warning applies: don't run anything on this page in production. In fact, back up whatever server you're testing this on, including the master database. Snapshotting a VM is a good idea. Also make sure you have other sysadmin level logins on that server. So here's a standard template for a logon trigger to address those pesky sa users: CREATE TRIGGER SA_LOGIN_PRIORITY ON ALL SERVER WITH ENCRYPTION, EXECUTE AS N'sa' AFTER LOGON AS IF ORIGINAL_LOGIN()<>N'sa' OR APP_NAME() LIKE N'SQL Agent%' RETURN; -- interesting stuff goes here GO   What can you do for "interesting stuff"? Books Online limits itself to merely rolling back the logon, which will throw an error (and alert the person that the logon trigger fired).  That's a good use for logon triggers, but really not tricky enough for this blog.  Some of my suggestions are below: WAITFOR DELAY '23:59:59';   Or: EXEC sp_MSforeach_db 'EXEC sp_detach_db ''?'';'   Or: EXEC msdb.dbo.sp_add_job @job_name=N'`', @enabled=1, @start_step_id=1, @notify_level_eventlog=0, @delete_level=3; EXEC msdb.dbo.sp_add_jobserver @job_name=N'`', @server_name=@@SERVERNAME; EXEC msdb.dbo.sp_add_jobstep @job_name=N'`', @step_id=1, @step_name=N'`', @command=N'SHUTDOWN;'; EXEC msdb.dbo.sp_start_job @job_name=N'`';   Really, I don't want to spoil your own exploration, try it yourself!  The thing I really like about these is it lets me promote the idea that "sa is SLOW, sa is BUGGY, don't use sa!".  Before we get into Resource Governor, make sure to drop or disable that logon trigger. They don't work well in combination. (Had to redo all the following code when SSMS locked up) Resource Governor is a feature that lets you control how many resources a single session can consume. The main goal is to limit the damage from a runaway query. But we're not here to read about its main goal or normal usage! I'm trying to make people stop using sa BECAUSE IT'S SLOW! 
Here's how RG can do that: USE master; GO CREATE FUNCTION dbo.SA_LOGIN_PRIORITY() RETURNS sysname WITH SCHEMABINDING, ENCRYPTION AS BEGIN RETURN CASE WHEN ORIGINAL_LOGIN()=N'sa' AND APP_NAME() NOT LIKE N'SQL Agent%' THEN N'SA_LOGIN_PRIORITY' ELSE N'default' END END GO CREATE RESOURCE POOL SA_LOGIN_PRIORITY WITH ( MIN_CPU_PERCENT = 0 ,MAX_CPU_PERCENT = 1 ,CAP_CPU_PERCENT = 1 ,AFFINITY SCHEDULER = (0) ,MIN_MEMORY_PERCENT = 0 ,MAX_MEMORY_PERCENT = 1 -- ,MIN_IOPS_PER_VOLUME = 1 ,MAX_IOPS_PER_VOLUME = 1 -- uncomment for SQL Server 2014 ); CREATE WORKLOAD GROUP SA_LOGIN_PRIORITY WITH ( IMPORTANCE = LOW ,REQUEST_MAX_MEMORY_GRANT_PERCENT = 1 ,REQUEST_MAX_CPU_TIME_SEC = 1 ,REQUEST_MEMORY_GRANT_TIMEOUT_SEC = 1 ,MAX_DOP = 1 ,GROUP_MAX_REQUESTS = 1 ) USING SA_LOGIN_PRIORITY; ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION=dbo.SA_LOGIN_PRIORITY); ALTER RESOURCE GOVERNOR RECONFIGURE;   From top to bottom: Create a classifier function to determine which pool the session should go to. More info on classifier functions. Create the pool and provide a generous helping of resources for the sa login. Create the workload group and further prioritize those resources for the sa login. Apply the classifier function and reconfigure RG to use it. I have to say this one is a bit sneakier than the logon trigger, least of all you don't get any error messages.  I heartily recommend testing it in Management Studio, and click around the UI a lot, there's some fun behavior there. And DEFINITELY try it on SQL 2014 with the IO settings included!  You'll notice I made allowances for SQL Agent jobs owned by sa, they'll go into the default workload group.  You can add your own overrides to the classifier function if needed. Some interesting ideas I didn't have time for but expect you to get to before me: Set up different pools/workgroups with different settings and randomize which one the classifier chooses Do the same but base it on time of day (Books Online example covers this)... Or, which workstation it connects from. This can be modified for certain special people in your office who either don't listen, or are attracted (and attractive) to you. And if things go wrong you can always use the following from another sysadmin or Dedicated Admin connection: ALTER RESOURCE GOVERNOR DISABLE;   That will let you go in and either fix (or drop) the pools, workgroups and classifier function. So now that you know these types of things are possible, and if you are tired of your team using sa when they shouldn't, I expect you'll enjoy playing with these quite a bit! Unfortunately, the aforementioned Dedicated Admin Connection kinda poops on the party here.  Books Online for both topics will tell you that the DAC will not fire either feature. So if you have a crafty user who does their research, they can still sneak in with sa and do their bidding without being hampered. Of course, you can still detect their login via various methods, like a server trace, SQL Server Audit, extended events, and enabling "Audit Successful Logins" on the server.  These all have their downsides: traces take resources, extended events and SQL Audit can't fire off actions, and enabling successful logins will bloat your error log very quickly.  SQL Audit is also limited unless you have Enterprise Edition, and Resource Governor is Enterprise-only.  And WORST OF ALL, these features are all available and visible through the SSMS UI, so even a doofus developer or manager could find them. Fortunately there are Event Notifications! 
Event notifications are becoming one of my favorite features of SQL Server (keep an eye out for more blogs from me about them). They are practically unknown and heinously underutilized.  They are also a great gateway drug to using Service Broker, another great but underutilized feature. Hopefully this will get you to start using them, or at least your enemies in the office will once they read this, and then you'll have to learn them in order to fix things. So here's the setup: USE msdb; GO CREATE PROCEDURE dbo.SA_LOGIN_PRIORITY_act WITH ENCRYPTION AS DECLARE @x XML, @message nvarchar(max); RECEIVE @x=CAST(message_body AS XML) FROM SA_LOGIN_PRIORITY_q; IF @x.value('(//LoginName)[1]','sysname')=N'sa' AND @x.value('(//ApplicationName)[1]','sysname') NOT LIKE N'SQL Agent%' BEGIN -- interesting activation procedure stuff goes here END GO CREATE QUEUE SA_LOGIN_PRIORITY_q WITH STATUS=ON, RETENTION=OFF, ACTIVATION (PROCEDURE_NAME=dbo.SA_LOGIN_PRIORITY_act, MAX_QUEUE_READERS=1, EXECUTE AS OWNER); CREATE SERVICE SA_LOGIN_PRIORITY_s ON QUEUE SA_LOGIN_PRIORITY_q([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]); CREATE EVENT NOTIFICATION SA_LOGIN_PRIORITY_en ON SERVER WITH FAN_IN FOR AUDIT_LOGIN TO SERVICE N'SA_LOGIN_PRIORITY_s', N'current database' GO   From top to bottom: Create activation procedure for event notification queue. Create queue to accept messages from event notification, and activate the procedure to process those messages when received. Create service to send messages to that queue. Create event notification on AUDIT_LOGIN events that fire the service. I placed this in msdb as it is an available system database and already has Service Broker enabled by default. You should change this to another database if you can guarantee it won't get dropped. So what to put in place for "interesting activation procedure code"?  Hmmm, so far I haven't addressed Matt's suggestion of writing a lengthy script to send an annoying message: SET @[email protected]('(//HostName)[1]','sysname') + N' tried to log in to server ' + @x.value('(//ServerName)[1]','sysname') + N' as SA at ' + @x.value('(//StartTime)[1]','sysname') + N' using the ' + @x.value('(//ApplicationName)[1]','sysname') + N' program. That''s why you''re getting this message and the attached pornography which' + N' is bloating your inbox and violating company policy, among other things. If you know' + N' this person you can go to their desk and hit them, or use the following SQL to end their session: KILL ' + @x.value('(//SPID)[1]','sysname') + N'; Hopefully they''re in the middle of a huge query that they need to finish right away.' EXEC msdb.dbo.sp_send_dbmail @recipients=N'[email protected]', @subject=N'SA Login Alert', @query_result_width=32767, @body=@message, @query=N'EXEC sp_readerrorlog;', @attach_query_result_as_file=1, @query_attachment_filename=N'UtterlyGrossPorn_SeriouslyDontOpenIt.jpg' I'm not sure I'd call that a lengthy script, but the attachment should get pretty big, and I'm sure the email admins will love storing multiple copies of it.  The nice thing is that this also fires on Dedicated Admin connections! You can even identify DAC connections from the event data returned, I leave that as an exercise for you. You can use that info to change the action taken by the activation procedure, and since it's a stored procedure, it can pretty much do anything! Except KILL the SPID, or SHUTDOWN the server directly.  I'm still working on those.

    Read the article

  • Mouse Over YouTube Previews YouTube Videos in Chrome

    - by ETC
    If you’re an avid YouTube video watcher, Mouse Over YouTube is a free Chrome extension that pops up a preview of any video you mouse over. Install the extension, put your mouse cursor over any YouTube video thumbnail, and a preview pops up in the upper right corner of your Chrome browser window. The only request we’d direct at the developer is either the ability to adjust the mouse-over delay or to simply extend the delay. As it is now, the video preview starts almost instantly, which can make a whole page of YouTube thumbnails feel like a minefield of unexpected videos. Hit up the link below to grab a free copy. Mouse Over YouTube [Google Chrome Extensions via Addictive Tips]

    Read the article

  • Multiple stores for the same niche

    - by pandronic
    I started developing a new niche of products in my country about 3 years ago. That's when I opened my first store. Everything went fine, until a year ago, when someone I thought was a friend secretly stole my idea and made his own competing store. I was pretty upset when I caught him and decided to make it as difficult as possible for him, so I made another 4 stores, trying to get him as low as possible in the search results. The new sites have similar products (although not 100% identical), slightly different titles, images and prices. They look different and are built on different e-commerce platforms. They are all hosted on the same server, have roughly the same backlinks, use the same Google account for Analytics, have the same support phone numbers etc etc. I wasn't thinking that I'm doing something fishy, so I didn't try to hide anything. Trouble is that those sites, after doing fine for a few months, dropped like bricks in search results, almost to the point that they can't be found at all. At the moment, the only site that ranks relatively well is the original one and a couple of secondary pages with no importance from one of the other sites. How did this happen? Does Google have something against this practice? Did they take action by themselves when they realized that I was trying to monopolize this niche, or did my competitor report me for some kind of webspam? And more importantly, what do I do now? Do I shutdown all but my original site and 301 redirect users to it from the others? Can I report my competitor for engaging in the same practice? (He fought back and now he has 3-4 sites, some of which still rank kind of OKish, also he has no idea about web development, SEO or marketing, he just crudely copies what I do and is slowly but surely starting to do better than me).

    Read the article

  • Click Once Deployment Process and Issue Resolution

    - by Geordie
    Introduction We are adopting Click Once as a deployment standard for thick .Net application clients. The latest version of this tool has matured it to a point where it can be used in an enterprise environment. This guide will identify how to use Click Once deployment and promote code through the dev, test and production environments.
    Why Use Click Once over SCCM If we already use SCCM, why add Click Once to the deployment options? The advantage of Click Once is its ability to update the code in a single location and have the update flow automatically down to the user community. There have been challenges in the past with getting configuration updates to download, but these can now be overcome. With SCCM you can do the same thing, but it then needs to be packaged and pushed out to users. Each time a new user is added to an application, time needs to be spent by an administrator to push out any required application packages. With Click Once the user would go to a web link and the application and prerequisites will automatically get installed.
    New Deployment Steps Overview The deployment in an enterprise environment includes several steps as the solution moves through the development life cycle before being released into production. To mitigate risk during the release phase, it is important to ensure the solution is not deployed directly into production from the development tools. Although this is the easiest path, it can introduce untested code into production and lead to unexpected results.
    1. Deploy the client application to a development web server using Visual Studio 2008 Click Once deployment tools. Once potential production versions of the solution are being generated, ensure the production install URL is specified when deploying code from Visual Studio. (For details see ‘Deploying Click Once Code from Visual Studio’)
    2. xCopy the code to the test server. Run the MageUI tool to update the URLs, signing and version numbers to match the test server. (For details see ‘Moving Click Once Code to a new Server without using Visual Studio’)
    3. xCopy the code to the production server. Run the MageUI tool to update the URLs, signing and version numbers to match the production server. The certificate used to sign the code should be provided by a certificate authority that will be trusted by the client machines. Finally, make sure the setup.exe contains the production install URL. If not, redeploy the solution from Visual Studio to the dev environment specifying the production install URL, then xcopy the setup.exe file from dev to production. (For details see ‘Moving Click Once Code to a new Server without using Visual Studio’)
    Detailed Deployment Steps Deploying Click Once Code From Visual Studio Open Visual Studio and create a new WinForms or WPF project. In the solution explorer, right-click on the project and select ‘Publish’ in the context menu. The ‘Publish Wizard’ will start. Enter the development deployment path. This could be a local directory or web site. When first publishing the solution, set this to a development web site and Visual Studio will create a site with an install.htm page. Click Next. Select whether the application will be available both online and offline. Then click Finish. Once the initial deployment is completed, republish the solution, this time mapping to the directory that holds the code that was just published. This time the Publish Wizard contains an additional option.
    The setup.exe file that is created has the install URL hardcoded in it. It is this screen that allows you to specify the URL to use. At some point a setup.exe file must be generated for production. Enter the production URL and deploy the solution to the dev folder. This file can then be saved for later use in deployment to production. During development this URL should be pointing to the development site to avoid accidentally installing the production application. Visual Studio will publish the application to the desired location; in the process it will create an anonymous ‘pfx’ certificate to sign the deployment configuration files. A production certificate should be acquired in preparation for deployment to production.
    Directory structure created by Visual Studio. Application files created by Visual Studio. Development web site (install.htm) created by Visual Studio.
    Migrating Click Once Code to a new Server without using Visual Studio To migrate the Click Once application code to a new server, a tool called MageUI is needed to modify the .application and .manifest files. The MageUI tool is usually located in the ‘C:\Program Files\Microsoft SDKs\Windows\v6.0A\Bin’ folder, or it can be downloaded from the web. When deploying to a new environment, copy all files in the project folder to the new server – in this case the ‘ClickOnceSample’ folder and contents. The old application versions can be deleted, in this case ‘ClickOnceSample_1_0_0_0’ and ‘ClickOnceSample_1_0_0_1’. Open IIS Manager and create a virtual directory that points to the project folder. Also make the publish.htm the default web page.
    Run the MageUI tool and then open the .application file in the root project folder (in this case in the ‘ClickOnceSample’ folder). Click on Deployment Options in the left-hand list, update the URL to the new server URL, and save the changes. When MageUI tries to save the file it will prompt for the file to be signed. This step cannot be bypassed if you want the Click Once deployment to work from a web site. The easiest solution to this for test is to use the auto-generated certificate that Visual Studio created for the project. This certificate can be found with the project source code. To save time, go to File > Preferences and configure the ‘Use default signing certificate’ fields. Future deployments will only require application files to be transferred to the new server. The only difference is that in the .application file the ‘Version’ must be updated to match the new version and the ‘Application Reference’ has to be updated to point to the new .manifest file.
    Updating the Configuration File of a Click Once Deployment Package without using Visual Studio When an update to the configuration file is required, modifying the ClickOnceSample.exe.config.deploy file will not result in current users getting the new configuration. We do not want to go back to Visual Studio and generate a new version, as this might introduce unexpected code changes. A new version of the application can be created by copying the folder (in this case ClickOnceSample_1_0_0_2) and pasting it into the Application Files directory. Rename the directory ‘ClickOnceSample_1_0_0_3’. In the new folder, open the configuration file in Notepad and make the configuration changes. Run MageUI and open the manifest file in the newly copied directory (ClickOnceSample_1_0_0_3). Edit the manifest version to reflect the newly copied files (in this case 1.0.0.3). Then save the file.
    Open the .application file in the root folder. Again update the version to 1.0.0.3. Since the file has not changed, the Deployment Options/Start Location URL should still be correct. The Application Reference needs to be updated to point to the new version's .manifest file. Save the file. The next time a user runs the application, the new version of the configuration file will be downloaded. It is worth noting that there are two different types of configuration parameter: application and user. With Click Once deployment the difference is significant. When an application is downloaded, the configuration file is also brought down to the client machine. The developer may have written code to update the user parameters in the application. As a result, each time a new version of the application is downloaded the user parameters are at risk of being overwritten. With Click Once deployment the system knows if the user parameters are still the default values. If they are, they will be overwritten with the new default values in the configuration file. If they have been updated by the user, they will not be overwritten. (See the settings sketch after this entry.)
    Settings configuration view in Visual Studio.
    Production Deployment When deploying the code to production it is prudent to disable the development and test deployment sites. This will allow errors such as an incorrect URL to be quickly identified in the initial testing after deployment. If the sites are active there is no way to know if the application was downloaded from the production deployment and not redirected to test or dev.
    Troubleshooting Clicking the install button on the install.htm page fails. Error: URLDownloadToCacheFile failed with HRESULT '-2146697210' Error: An error occurred trying to download <file> This is due to the setup.exe file pointing to the wrong location. ‘The setup.exe file that is created has the install URL hardcoded in it. It is this screen that allows you to specify the URL to use. At some point a setup.exe file must be generated for production. Enter the production URL and deploy the solution to the dev folder. This file can then be saved for later use in deployment to production. During development this URL should be pointing to the development site to avoid accidentally installing the production application.’
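    Editor's note: since the application/user distinction above determines what survives a Click Once update, here is a minimal C# sketch of how the two scopes are declared with the standard .NET ApplicationSettingsBase mechanism. The class name, setting names and defaults are hypothetical and for illustration only; it is not code from the original guide:

    using System.Configuration;

    // Hypothetical settings class showing the two configuration scopes discussed above.
    public class SampleSettings : ApplicationSettingsBase
    {
        // Application-scoped: read-only at runtime, always taken from the deployed config.
        [ApplicationScopedSetting]
        [DefaultSettingValue("https://example.com/service")]
        public string ServiceUrl
        {
            get { return (string)this["ServiceUrl"]; }
        }

        // User-scoped: writable and stored per user; values the user has changed
        // are the ones that are not overwritten when a new version is downloaded.
        [UserScopedSetting]
        [DefaultSettingValue("True")]
        public bool ShowToolbar
        {
            get { return (bool)this["ShowToolbar"]; }
            set { this["ShowToolbar"] = value; }
        }
    }

    // Usage: var settings = new SampleSettings();
    //        settings.ShowToolbar = false;
    //        settings.Save(); // persists the user-scoped value for this user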

    Read the article

  • Create a Shortcut To Group Policy Editor in Windows 7

    - by Mysticgeek
    If you’re a system administrator and find yourself making changes in Group Policy Editor, you might want to make a shortcut to it. Here we look at creating a shortcut, pinning it to the Taskbar, and adding it to Control Panel. Note: Local Group Policy Editor is not available in Home versions of Windows 7. Typing gpedit.msc into the search box in the Start menu to access Group Policy Editor can get old fast. To create a shortcut, right-click on the desktop and select New \ Shortcut. Next type or copy the following path into the location field and click Next. c:\windows\system32\gpedit.msc Then give your shortcut a name…something like Group Policy, or whatever you want it to be and click Finish. Now you have your Group Policy shortcut… If you want it on the Taskbar just drag it there to pin it. And that’s all there is to it! If you want to change the icon, you can use one of the following guides… Customize Icons in Windows 7 Change a File Type Icon in Windows 7 Add Group Policy to Control Panel If you’re using non Home versions of XP, Vista, or Windows 7, check out The Geek’s article on how to Add Group Policy Editor to Control Panel.

    Read the article

  • Dell Docking Station Doesn’t Detect USB Mouse and Keyboard

    - by Ben Griswold
    I’ve found myself in this situation with multiple Dell docking stations and multiple Dell laptops running various Windows operating systems. I don’t know why the docking station stops recognizing my USB mouse and keyboard – it just does. It’s black magic. The last time around I just started plugging the mouse and keyboard into the docked laptop directly and went about my business (as if I wasn’t completely missing out on a couple of the core benefits of using a docking station.) I guess that’s what happens when you forget how you got yourself out of the mess the last time around. I had been in this half-assed state for a couple of weeks, but a coworker fortunately got themselves in and out of the same pickle this morning. Procrastinate long enough and the solution will just come to you, right? Here’s how to get yourself out of this mess: Undock your computer. Unplug your docking station. Count to an arbitrary number greater than 12. (Not sure this is really required, but…) Plug your docking station back in. Redock your machine. I put my machine to sleep before taking the aforementioned actions. My coworker completely shut down his laptop instead. The steps worked on both of our Win 7 machines this morning and, who knows, it might just work for you too.

    Read the article

  • vsftpd: chroot_local_user causes GNU/TLS-error

    - by akrosikam
    Distro: Ubuntu 12.04.2 Server 32-bit Server client: vsftpd 2.3.5 (from default "main" repository) Problem: Since upgrading from Ubuntu 10.04 to Ubuntu 12.04 (nothing changed on client-side), vsftp has refused to make chroot-jails with the "chroot_local_user" directive on FTP(e/i)S-connections. Here's my vsftpd.conf: anonymous_enable=NO local_enable=YES write_enable=YES local_umask=022 dirmessage_enable=YES xferlog_enable=YES xferlog_std_format=YES ftpd_banner=How are you gentlemen. listen=YES pam_service_name=vsftpd userlist_enable=YES userlist_deny=NO tcp_wrappers=YES connect_from_port_20=YES ftp_data_port=20 listen_port=21 pasv_enable=YES pasv_promiscuous=NO pasv_min_port=4242 pasv_max_port=4252 pasv_addr_resolve=YES pasv_address=your.domain.com ssl_enable=YES allow_anon_ssl=NO force_local_logins_ssl=YES force_local_data_ssl=YES ssl_tlsv1=YES ssl_sslv2=NO ssl_sslv3=NO rsa_cert_file=/home/maw/ssl_ftp_test/vsftpd.pem rsa_private_key_file=/home/maw/ssl_ftp_test/vsftpd.pem debug_ssl=YES log_ftp_protocol=YES ssl_ciphers=HIGH chroot_local_user=NO How to reproduce: Have a working SSL/TLS-secured vsftpd-configuration (I suggest similar to the one above) ready. Try to connect with an FTP user client and upload some files. With my setup, the above listed config works well at this point. Edit /etc/vsftpd.conf and set chroot_local_user= to YES. Make sure that chroot_list_enable= and/or chroot_list_file= are not set. Comment them out if they are. Save and exit. Run sudo restart vsftpd (or sudo service vsftpd restart if you like) in a terminal. Try to connect with an FTP user client. You should see a message more or less like this: GnuTLS error -15: An unexpected TLS packet was received. This is an issue for me, as I do not want FTP-sessions to be able to list files outside the user's home folder. I have checked with several client-side apps, and I get the same results with every one of them. Filezilla is not so good regarding cipher methods nowadays, but as I am able to make an FTP(e)s-connection over TLS (as long as chroot'ing is disabled and ssl_ciphers is set to HIGH) I have a feeling ciphers are not the issue this time, and that I won't find the answer by tweaking configs on the client side. My vsftpd.log stays empty, even though debug_ssl and log_ftp_protocol are enabled, so no info there either.

    Read the article

  • How to configure SoapUI with client certificate authentication

    - by gvdmaaden
    SoapUI is one of the best free tools around for testing web services. Some time ago I was trying to send a SOAP message to an SSL web service that was set up for client certificate authentication. I pretty soon got stuck at the “javax.net.ssl.SSLException: HelloRequest followed by an unexpected handshake message” error, but after reading several posts on the internet I solved that issue. It’s not really that complicated after all, but since I could not find a decent place on the internet that explains this scenario in a proper way, here’s a list of steps that you need to do to make it work. Note: the following steps are based on a Windows environment.
    Step one: Export your certificate (the one that you want to use as the client certificate) using the export wizard, with the private key and with all certificates in the certification path. Give it a password (anything you want) and export it as a PFX file to a location somewhere on disk.
    Step two: Install the newest version of SoapUI (currently it is 3.6.1). Open the file C:\Program Files\eviware\soapUI-3.6.1\bin\soapUI-3.6.1.vmoptions and add this line at the bottom: -Dsun.security.ssl.allowUnsafeRenegotiation=true This is needed because of a Java security feature in their newest frameworks (for further reading about this issue, see http://www.soapui.org/forum/viewtopic.php?t=4089 and http://java.sun.com/javase/javaseforbusiness/docs/TLSReadme.html).
    Open SoapUI and go to Preferences > SSL Settings and configure your certificate in the keystore (use the same password as in step one). That should be it. Just create a new project and import the WSDL from the client-authenticated SSL web service. And now you should be able to send SOAP messages with client certificate authentication. The above steps worked for me, but please drop a note if it does not work for you.

    Read the article

  • Output = MAXDOP 1

    - by Dave Ballantyne
    It is widely known that data modifications on table variables do not support parallelism; Peter Larsson has a good example of that here. Whilst tracking down a performance issue, I saw that using the OUTPUT clause also causes parallelism not to be used. By way of example, first let's create two tables with a simple parent and child (one to one) relationship, and then populate them with 1,000,000 rows.
    Drop table Parent
    Drop table Child
    go
    create table Parent(id integer identity Primary Key, data1 char(255))
    Create Table Child(id integer Primary Key)
    go
    insert into Parent(data1)
    Select top 1000000 NULL from sys.columns a cross join sys.columns b
    insert into Child
    Select id from Parent
    go
    If we then execute
    update Parent
    set data1 = ''
    from Parent
    join Child on Parent.Id = Child.Id
    where Parent.Id % 100 = 1
    and Child.id % 100 = 1
    we should see an execution plan that uses parallelism. However, if the OUTPUT clause is now used
    update Parent
    set data1 = ''
    output inserted.id
    from Parent
    join Child on Parent.Id = Child.Id
    where Parent.Id % 100 = 1
    and Child.id % 100 = 1
    the execution plan shows that parallelism was not used. Make of that what you will, but I thought that this was a pretty unexpected outcome. Update: Laurence Hoff has mailed me to note that when the OUTPUT results are captured to a temporary table using the INTO clause, then parallelism is used. Naturally, if you use a table variable then there is still no parallelism.
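    For completeness, here is a hypothetical sketch (not from the original post) of the OUTPUT ... INTO variant mentioned in the update, reusing the Parent/Child tables above; the temporary table name is made up:
        -- Capturing the OUTPUT rows into a temporary table rather than
        -- returning them to the client; per the update above, this variant
        -- is reported to keep parallelism available (a table variable would not).
        create table #updated (id integer not null)

        update Parent
        set data1 = ''
        output inserted.id into #updated(id)
        from Parent
        join Child on Parent.Id = Child.Id
        where Parent.Id % 100 = 1
        and Child.id % 100 = 1

        drop table #updated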

    Read the article

  • steam won't open after install

    - by Dan Cooper
    I've looked all over the place for a solution but no one seems to be getting the same error codes as me. When I try to run Steam through terminal I get the following error:
    Running Steam on ubuntu 13.04 64-bit
    STEAM_RUNTIME is enabled automatically
    Installing breakpad exception handler for appid(steam)/version(1367621987_client)
    Installing breakpad exception handler for appid(steam)/version(1367621987_client)
    unlinked 0 orphaned pipes
    Gtk-Message: Failed to load module "overlay-scrollbar"
    Installing breakpad exception handler for appid(steam)/version(1367621987_client)
    [1013/104817:WARNING:proxy_service.cc(646)] PAC support disabled because there is no system implementation
    /home/buildbot/buildslave_steam/steam_rel_client_ubuntu12_linux/build/src/steamUI/../common/steam/client_api.cpp (281) : Assertion Failed: ClientAPI_InitGlobalInstance: InternalAPI_Init_Internal failed.
    Assert( Assertion Failed: ClientAPI_InitGlobalInstance: InternalAPI_Init_Internal failed. ):/home/buildbot/buildslave_steam/steam_rel_client_ubuntu12_linux/build/src/steamUI/../common/steam/client_api.cpp:281
    Installing breakpad exception handler for appid(steam)/version(1367621987_client)
    Uploading dump (out-of-process) [proxy ''] /tmp/dumps/assert_20131013104817_1.dmp
    /home/buildbot/buildslave_steam/steam_rel_client_ubuntu12_linux/build/src/steamUI/SteamStartup.cpp (627) : Assertion Failed: ! "There was a problem with your Steam installation.\n" "Please reinstall steam.\n"
    unlinked 2 orphaned pipes
    CAsyncIOManager: 0 threads terminating. 0 reads, 0 writes, 0 deferrals.
    CAsyncIOManager: 75 single object sleeps, 0 multi object sleeps
    CAsyncIOManager: 0 single object alertable sleeps, 1 multi object alertable sleeps
    [2013-10-13 10:48:16] Startup - updater built May 3 2013 15:08:27
    [2013-10-13 10:48:16] Verifying installation...
    [2013-10-13 10:48:16] Verification complete
    Shutting down. . .
    [2013-10-13 10:48:17] Shutdown
    Finished uploading minidump (out-of-process): success = yes
    response: CrashID=bp-d172a742-b7dd-419c-b235-d60c32131013
    I've tried sudo apt-get purge and terminal tries to tell me I don't have Steam installed. I've tried reinstalling with software center but that doesn't help either.

    Read the article

  • Out of space despite lots of free space remaining

    - by Kristian Thomsen
    When upgrading Ubuntu from 11.10 to 12.04 I discovered an unexpected problem. The upgrade was stopped because there wasn't enough free space for the installation. I managed to free some space and do the upgrade, but now a prompt appears after logging in saying I'm out of space. This prompt asks me if I want to examine the problem, and the Disk Usage Analyser is opened. At the top it says: Total filesystem capacity: 47.0 GB (used: 13.5 GB, available: 33.4 GB)
    Folder -- Usage -- Size
    / -- 100% -- 12.5 GB
    usr -- 44.8% -- 5.6 GB
    home -- 30.3% -- 3.8 GB
    lib -- 13.0% -- 1.6 GB
    var -- 9.1% -- 1.1 GB
    boot -- 2.5% -- 309.5 MB
    and a lot of small contributors like etc, opt, sbin, bin, etc. I do not really understand this problem, since the analyser at the top says that I have 33.4 GB left in this filesystem. What can I do to make Ubuntu use the remaining space? Running df -i in the terminal gives:
    Filesystem Inodes IUsed IFree IUse% Mounted on
    /dev/sda7 610800 576874 33926 95% /
    udev 213451 563 212888 1% /dev
    tmpfs 218524 486 218038 1% /run
    none 218524 3 218521 1% /run/lock
    none 218524 7 218517 1% /run/shm
    /dev/sda8 2264752 16371 2248381 1% /home
    What does this mean?

    Read the article

  • [MINI HOW-TO] Remove the Search Helper Extension from Firefox

    - by Asian Angel
    If you found a surprise new extension added to Firefox after the June patch from Microsoft, then you are likely to be rather unhappy right now. Join us as we show you how to remove the Search Helper extension from your browser.
    An Unexpected Addition to Your Extensions
    You may be wondering what the mysterious new extension that showed up is for. Its purpose is to help the Bing Toolbar integrate better with your browser. Unless you have the Bing Toolbar installed, you really do not need this cluttering up your browser. So how do you get rid of it?
    Removing the Extension
    In order to remove the extension you will need to navigate to the following location:
    C:\Program Files\Microsoft\Search Enhancement Pack\Search Helper
    Once there, delete the “firefoxextension” folder… that is all there is to it. If you want to remove the search helper add-on for Internet Explorer as well, delete the “SEPsearchhelperie.dll” file while you are here.
    Note: You may need to have administrator rights in order to delete the folder.
    No More Search Helper Extension!
    If you are unhappy about this update being snuck into your system, following these instructions will remove it.
    Microsoft Support Page About Update KB982217
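    Purely as an illustration (not from the original article), the manual deletion above could also be scripted; this hypothetical Python snippet assumes it is run from an elevated prompt and that the path matches your installation:
        # Hypothetical scripted equivalent of the manual step above.
        # Requires administrator rights; double-check the path before running.
        import shutil
        shutil.rmtree(r"C:\Program Files\Microsoft\Search Enhancement Pack\Search Helper\firefoxextension")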

    Read the article

  • Ubuntu 12.04 LTS loops the login screen unless you login as Guest

    - by Mário Silva
    I am running VMware Player with Ubuntu 12.04 LTS Precise Pangolin as a guest on my Windows 7 host. Sometimes I get the shutdown blue screen error in Windows; this time it happened while I was running the Player. When I restarted everything, Ubuntu gave me the login loop (not so unfamiliar in this forum) on the administrator login. I log in and there's this black screen where I can only read: "piix4...smbus:0.0.0.07.3 Host Smbus controller not enabled". When I go to the prompt in root mode it fails to update and only upgrades, especially some plugins (I think graphics plugins), which also appear in an error message after quitting the prompt, but they are successfully installed. They are not the error message. After that I have been working with the failsafe-mode recovery panel. When I try to update via root I get errors like this:
    W: failed to fetch http://extras.ubuntu.com/ubuntu/dists/precise/release.gpg  could not resolve 'extras.ubuntu.com'
    There are 8 more like this referring to areas like:
    - archive.canonical.com
    - ppa.launchpad.net
    - security.ubuntu.com
    - us.archive.ubuntu.com
    - release.gpg, precise-updates/release.gpg, precise_backport/release.gpg
    Final message: some index files failed to download... they have been ignored, or old files are used.
    The black screens most of the time pass by too fast for me to pick up any information. But in general I think I have done everything I was able to in the recovery panel, including updating network and graphics packages, recovering filesystem packages and the basic stuff (I am a beginner regarding Linux) in the root prompt. Now I am stuck at this screen with graphics options:
    - Run in low-graphics mode just for one session
    - Reconfigure graphics
    - Troubleshoot the error
    - Exit to console login
    I am trying to choose to reconfigure graphics, but the mouse disappears in the virtual machine screen, and sometimes when the options change it's only the first and last option. But this happens out of the blue, without messages. This particular option menu is in the regular GUI style against a black screen in terminal style. Really strange. Thanks in advance; all help is welcome and appreciated.

    Read the article

  • Can't add to panel nor delete panel

    - by david
    Hello everybody! I cannot add any applet to any (top or bottom) panel, cannot delete any panel, nor create a new panel. When I right-click on the panel the only options available are: Properties, Help or About Panels. [I cannot post an image because of spam prevention, so I'll do my best.] This is what I see when I right-click (bold means clickable):
    Add to panel
    Properties
    Delete this panel
    New panel
    Help
    About Panels
    Trying to solve this I did what is usually suggested:
    gconftool-2 --recursive-unset /apps/panel  # might be optional
    rm -rf ~/.gconf/apps/panel
    pkill gnome-panel
    but I only got a nice empty panel (no Applications/Places/System, no clock, no shutdown button...) to which I couldn't add any applet, so I decided to take the default profiles in .gconf and .gconfd from a live CD and overwrite mine. Now we are back to the beginning. I also have tried to lock the panel completely (with both gconf-editor and pessulus) and later unlock it completely, but it didn't work. Here is the system information:
    $ lsb_release
    Distributor ID: Ubuntu
    Description: Ubuntu 10.04.2 LTS
    Release: 10.04
    Codename: lucid
    Thank you very much.

    Read the article

  • Getting Started with MySQL Cluster, Hands-on Lab, Next Saturday, MySQL Connect

    - by user13819847
    Hi! I'm speaking at MySQL Connect next Saturday, Sep. 29. My session is a hands-on lab (HOL) on MySQL Cluster. If you are interested in getting familiar with MySQL Cluster, this is definitely a session for you. I will start by briefly introducing MySQL Cluster and its architecture. Then I will guide you through the steps needed to install a local MySQL Cluster, connect to it (using the command line), monitor its logs, and safely shut it down. We will then have a chance to see the most common commands used in MySQL Cluster administration (e.g. cluster backup) as well as the most common operations (e.g. online data node add). Cluster's users and customers have the flexibility to choose whether they prefer an SQL or NoSQL approach to connect to MySQL Cluster, so during the last part of the HOL we will see how to connect to MySQL Cluster using the NoSQL NDB API. If there is enough time at the end, we will also compile and execute some simple Java programs that use Connector/J to connect to the SQL nodes of our Cluster. I hope this HOL will be of interest to you! Below are some details if you decide to attend:
    When: Saturday Sep. 29, 4 pm
    Where: Hilton San Francisco - Plaza Room A
    If you are interested in other MySQL Cluster sessions, you will find the info you need in this post. The full program of the MySQL Connect conference is here, and if you are not registered yet, remember that you can still save US $300 over the on-site fee. Register now! See you at MySQL Connect!
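    For readers who want a feel for the Connector/J part before the lab, here is a minimal, hypothetical Java sketch (not from the session materials) that connects to one of the Cluster's SQL nodes over plain JDBC; the host, schema, credentials and table name are placeholders, and the Connector/J jar must be on the classpath:
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class ClusterJdbcDemo {
            public static void main(String[] args) throws Exception {
                // An SQL node looks like any MySQL server to JDBC clients;
                // only the table's storage engine (NDB) is Cluster-specific.
                String url = "jdbc:mysql://sqlnode1:3306/test";
                try (Connection conn = DriverManager.getConnection(url, "user", "password");
                     Statement stmt = conn.createStatement();
                     ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM my_ndb_table")) {
                    if (rs.next()) {
                        System.out.println("Rows: " + rs.getLong(1));
                    }
                }
            }
        }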

    Read the article

< Previous Page | 67 68 69 70 71 72 73 74 75 76 77 78  | Next Page >