Search Results

Search found 5419 results on 217 pages for 'warning'.


  • Heaps of Trouble?

    - by Paul White NZ
    If you’re not already a regular reader of Brad Schulz’s blog, you’re missing out on some great material.  In his latest entry, he is tasked with optimizing a query run against tables that have no indexes at all.  The problem is, predictably, that performance is not very good.  The catch is that we are not allowed to create any indexes (or even new statistics) as part of our optimization efforts. In this post, I’m going to look at the problem from a slightly different angle, and present an alternative solution to the one Brad found.  Inevitably, there’s going to be some overlap between our entries, and while you don’t necessarily need to read Brad’s post before this one, I do strongly recommend that you read it at some stage; he covers some important points that I won’t cover again here. The Example We’ll use data from the AdventureWorks database, copied to temporary unindexed tables.  A script to create these structures is shown below: CREATE TABLE #Custs ( CustomerID INTEGER NOT NULL, TerritoryID INTEGER NULL, CustomerType NCHAR(1) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL, ); GO CREATE TABLE #Prods ( ProductMainID INTEGER NOT NULL, ProductSubID INTEGER NOT NULL, ProductSubSubID INTEGER NOT NULL, Name NVARCHAR(50) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL, ); GO CREATE TABLE #OrdHeader ( SalesOrderID INTEGER NOT NULL, OrderDate DATETIME NOT NULL, SalesOrderNumber NVARCHAR(25) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL, CustomerID INTEGER NOT NULL, ); GO CREATE TABLE #OrdDetail ( SalesOrderID INTEGER NOT NULL, OrderQty SMALLINT NOT NULL, LineTotal NUMERIC(38,6) NOT NULL, ProductMainID INTEGER NOT NULL, ProductSubID INTEGER NOT NULL, ProductSubSubID INTEGER NOT NULL, ); GO INSERT #Custs ( CustomerID, TerritoryID, CustomerType ) SELECT C.CustomerID, C.TerritoryID, C.CustomerType FROM AdventureWorks.Sales.Customer C WITH (TABLOCK); GO INSERT #Prods ( ProductMainID, ProductSubID, ProductSubSubID, Name ) SELECT P.ProductID, P.ProductID, P.ProductID, P.Name FROM AdventureWorks.Production.Product P WITH (TABLOCK); GO INSERT #OrdHeader ( SalesOrderID, OrderDate, SalesOrderNumber, CustomerID ) SELECT H.SalesOrderID, H.OrderDate, H.SalesOrderNumber, H.CustomerID FROM AdventureWorks.Sales.SalesOrderHeader H WITH (TABLOCK); GO INSERT #OrdDetail ( SalesOrderID, OrderQty, LineTotal, ProductMainID, ProductSubID, ProductSubSubID ) SELECT D.SalesOrderID, D.OrderQty, D.LineTotal, D.ProductID, D.ProductID, D.ProductID FROM AdventureWorks.Sales.SalesOrderDetail D WITH (TABLOCK); The query itself is a simple join of the four tables: SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID FROM #Prods P JOIN #OrdDetail D ON P.ProductMainID = D.ProductMainID AND P.ProductSubID = D.ProductSubID AND P.ProductSubSubID = D.ProductSubSubID JOIN #OrdHeader H ON D.SalesOrderID = H.SalesOrderID JOIN #Custs C ON H.CustomerID = C.CustomerID ORDER BY P.ProductMainID ASC OPTION (RECOMPILE, MAXDOP 1); Remember that these tables have no indexes at all, and only the single-column sampled statistics SQL Server automatically creates (assuming default settings).  The estimated query plan produced for the test query looks like this (click to enlarge): The Problem The problem here is one of cardinality estimation – the number of rows SQL Server expects to find at each step of the plan.  The lack of indexes and useful statistical information means that SQL Server does not have the information it needs to make a good estimate.  
Every join in the plan shown above estimates that it will produce just a single row as output.  Brad covers the factors that lead to the low estimates in his post. In reality, the join between the #Prods and #OrdDetail tables will produce 121,317 rows.  It should not surprise you that this has rather dire consequences for the remainder of the query plan.  In particular, it makes a nonsense of the optimizer’s decision to use Nested Loops to join to the two remaining tables.  Instead of scanning the #OrdHeader and #Custs tables once (as it expected), it has to perform 121,317 full scans of each.  The query takes somewhere in the region of twenty minutes to run to completion on my development machine. A Solution At this point, you may be thinking the same thing I was: if we really are stuck with no indexes, the best we can do is to use hash joins everywhere. We can force the exclusive use of hash joins in several ways, the two most common being join and query hints.  A join hint means writing the query using the INNER HASH JOIN syntax; using a query hint involves adding OPTION (HASH JOIN) at the bottom of the query.  The difference is that using join hints also forces the order of the join, whereas the query hint gives the optimizer freedom to reorder the joins at its discretion. Adding the OPTION (HASH JOIN) hint results in this estimated plan: That produces the correct output in around seven seconds, which is quite an improvement!  As a purely practical matter, and given the rigid rules of the environment we find ourselves in, we might leave things there.  (We can improve the hashing solution a bit – I’ll come back to that later on). Faster Nested Loops It might surprise you to hear that we can beat the performance of the hash join solution shown above using nested loops joins exclusively, and without breaking the rules we have been set. The key to this part is to realize that a condition like (A = B) can be expressed as (A <= B) AND (A >= B).  Armed with this tremendous new insight, we can rewrite the join predicates like so: SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID FROM #OrdDetail D JOIN #OrdHeader H ON D.SalesOrderID >= H.SalesOrderID AND D.SalesOrderID <= H.SalesOrderID JOIN #Custs C ON H.CustomerID >= C.CustomerID AND H.CustomerID <= C.CustomerID JOIN #Prods P ON P.ProductMainID >= D.ProductMainID AND P.ProductMainID <= D.ProductMainID AND P.ProductSubID = D.ProductSubID AND P.ProductSubSubID = D.ProductSubSubID ORDER BY D.ProductMainID OPTION (RECOMPILE, LOOP JOIN, MAXDOP 1, FORCE ORDER); I’ve also added LOOP JOIN and FORCE ORDER query hints to ensure that only nested loops joins are used, and that the tables are joined in the order they appear.  The new estimated execution plan is: This new query runs in under 2 seconds. Why Is It Faster? The main reason for the improvement is the appearance of the eager Index Spools, which are also known as index-on-the-fly spools.  If you read my Inside The Optimiser series you might be interested to know that the rule responsible is called JoinToIndexOnTheFly. An eager index spool consumes all rows from the table it sits above, and builds a index suitable for the join to seek on.  Taking the index spool above the #Custs table as an example, it reads all the CustomerID and TerritoryID values with a single scan of the table, and builds an index keyed on CustomerID.  The term ‘eager’ means that the spool consumes all of its input rows when it starts up.  
The index is built in a work table in tempdb, has no associated statistics, and only exists until the query finishes executing. The result is that each unindexed table is only scanned once, and just for the columns necessary to build the temporary index.  From that point on, every execution of the inner side of the join is answered by a seek on the temporary index – not the base table. A second optimization is that the sort on ProductMainID (required by the ORDER BY clause) is performed early, on just the rows coming from the #OrdDetail table.  The optimizer has a good estimate for the number of rows it needs to sort at that stage – it is just the cardinality of the table itself.  The accuracy of the estimate there is important because it helps determine the memory grant given to the sort operation.  Nested loops join preserves the order of rows on its outer input, so sorting early is safe.  (Hash joins do not preserve order in this way, of course). The extra lazy spool on the #Prods branch is a further optimization that avoids executing the seek on the temporary index if the value being joined (the ‘outer reference’) hasn’t changed from the last row received on the outer input.  It takes advantage of the fact that rows are still sorted on ProductMainID, so if duplicates exist, they will arrive at the join operator one after the other. The optimizer is quite conservative about introducing index spools into a plan, because creating and dropping a temporary index is a relatively expensive operation.  It’s presence in a plan is often an indication that a useful index is missing. I want to stress that I rewrote the query in this way primarily as an educational exercise – I can’t imagine having to do something so horrible to a production system. Improving the Hash Join I promised I would return to the solution that uses hash joins.  You might be puzzled that SQL Server can create three new indexes (and perform all those nested loops iterations) faster than it can perform three hash joins.  The answer, again, is down to the poor information available to the optimizer.  Let’s look at the hash join plan again: Two of the hash joins have single-row estimates on their build inputs.  SQL Server fixes the amount of memory available for the hash table based on this cardinality estimate, so at run time the hash join very quickly runs out of memory. This results in the join spilling hash buckets to disk, and any rows from the probe input that hash to the spilled buckets also get written to disk.  The join process then continues, and may again run out of memory.  This is a recursive process, which may eventually result in SQL Server resorting to a bailout join algorithm, which is guaranteed to complete eventually, but may be very slow.  The data sizes in the example tables are not large enough to force a hash bailout, but it does result in multiple levels of hash recursion.  You can see this for yourself by tracing the Hash Warning event using the Profiler tool. The final sort in the plan also suffers from a similar problem: it receives very little memory and has to perform multiple sort passes, saving intermediate runs to disk (the Sort Warnings Profiler event can be used to confirm this).  Notice also that because hash joins don’t preserve sort order, the sort cannot be pushed down the plan toward the #OrdDetail table, as in the nested loops plan. Ok, so now we understand the problems, what can we do to fix it?  
We can address the hash spilling by forcing a different order for the joins: SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID FROM #Prods P JOIN #Custs C JOIN #OrdHeader H ON H.CustomerID = C.CustomerID JOIN #OrdDetail D ON D.SalesOrderID = H.SalesOrderID ON P.ProductMainID = D.ProductMainID AND P.ProductSubID = D.ProductSubID AND P.ProductSubSubID = D.ProductSubSubID ORDER BY D.ProductMainID OPTION (MAXDOP 1, HASH JOIN, FORCE ORDER); With this plan, each of the inputs to the hash joins has a good estimate, and no hash recursion occurs.  The final sort still suffers from the one-row estimate problem, and we get a single-pass sort warning as it writes rows to disk.  Even so, the query runs to completion in three or four seconds.  That’s around half the time of the previous hashing solution, but still not as fast as the nested loops trickery. Final Thoughts SQL Server’s optimizer makes cost-based decisions, so it is vital to provide it with accurate information.  We can’t really blame the performance problems highlighted here on anything other than the decision to use completely unindexed tables, and not to allow the creation of additional statistics. I should probably stress that the nested loops solution shown above is not one I would normally contemplate in the real world.  It’s there primarily for its educational and entertainment value.  I might perhaps use it to demonstrate to the sceptical that SQL Server itself is crying out for an index. Be sure to read Brad’s original post for more details.  My grateful thanks to him for granting permission to reuse some of his material. Paul White Email: [email protected] Twitter: @PaulWhiteNZ
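    As a footnote to the index-spool discussion above: the temporary indexes the eager spools build correspond roughly to the permanent index definitions below. They are shown only to make the "a useful index is missing" point concrete; the index names are invented here, and actually creating them would of course break the no-new-indexes rule of the exercise.

      CREATE NONCLUSTERED INDEX ix_OrdHeader_SalesOrderID
          ON #OrdHeader (SalesOrderID) INCLUDE (OrderDate, SalesOrderNumber, CustomerID);
      CREATE NONCLUSTERED INDEX ix_Custs_CustomerID
          ON #Custs (CustomerID) INCLUDE (TerritoryID);
      CREATE NONCLUSTERED INDEX ix_Prods_ProductMainID
          ON #Prods (ProductMainID, ProductSubID, ProductSubSubID) INCLUDE (Name);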

    Read the article

  • System can't sleep, can't shutdown (might be networking related)

    - by Kevin Q
    I suspect it's networking related, since I could only shut down my system if I ran /etc/init.d/networking stop first. If I tried to put the system to sleep, it disconnected the network, then reconnected it automatically, and never went to sleep. I uninstalled Network Manager; after that, /etc/init.d/networking stop no longer helped shut down the system. When I boot into the OS, there were some warnings about "Unknown job: S20acpid, S20network-interface, S20network-manager". I was advised to run update-rc.d -f to remove those scripts, which I am not certain is the right thing to do. The messages are gone but the problem remains. When I tried to restart, I used to get this message. It doesn't seem to be the modem manager, since I later removed it. At one point, the frozen shutdown/restart could be resumed by pressing Ctrl+Alt+Delete, and it said something like "networking is stopped externally". If I log in in single-user mode and shut down or reboot, it shows this screen. I also tried GRUB_CMDLINE_LINUX_DEFAULT="acpi=force". This is Ubuntu 12.04 running on a ThinkPad T520; it is not a new installation, and no such problems occurred before.
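    For reference, the acpi=force option mentioned above only takes effect if it is written into the GRUB defaults and the GRUB configuration is regenerated before rebooting. A minimal sketch, assuming the stock /etc/default/grub that ships with Ubuntu 12.04:

      sudo nano /etc/default/grub
      # e.g. GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi=force"
      sudo update-grub   # rewrites /boot/grub/grub.cfg with the new kernel options
      sudo reboot        # the option is only read at boot time

    After the reboot, cat /proc/cmdline shows whether the option actually reached the kernel.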

    Read the article

  • BizTalk 2009 - The Scope of the Table Looping Functoid

    - by StuartBrierley
    When mapping in BizTalk you will find there are times when you need to map from flat and dispersed elements in your source schema to a repeated record with child elements in your destination schema.  Below is an example of how you can make use of the Table Looping Functoid to bring together these flat elements and create your repeated group.  Although this example is purposely simple, I have previously encountered this issue on a much more complex scale when mapping the response from a credit scoring agency, where all the applicant details were supplied in separate parts of a very flat schema. Consider the source and destination schemas as follows:   Although the Table Looping Functoid states that the first input must be a scoping element linked from a repeating group, you can actually also make use of a constant value.  In this case I know that the source schema always contains two people, so I set this to two. Then you need to set the number of columns in your table, in this case 2 (name and sex), and link all the required fields from the source schema. Following this you can configure the table. You can then add the Table Extractor functoids and complete the map. If you now validate this map you will see that BizTalk will warn you about the scoping link for the Table Looping Functoid, but this can be safely ignored: C:\Code\Developer Folders\Stuart Brierley\Test Mapping\TableLooping.btm: warning btm1071: A first input of the Table-Looping functoid must be a link from a Source Tree Node which acts as the scoping parameter. Testing the map will produce the following output:

    Read the article

  • virtualbox, MAAS: help needed

    - by Roberto Attias
    OK, I made some progress regarding the original question (still below). I found that /etc/maas/dhcpd.conf contained option domain-name-servers 10.0.3.15, and changed it to 192.168.0.11. After restarting the daemon, I now see "node" getting the right DNS; unfortunately this doesn't fix the main problem, which I believe is the reference to 169.254.169.254. It does introduce a new question: while the remaining information from /etc/maas/dhcpd.conf is present in the MAAS GUI, there is no field to enter the DNS address. Why? Anyway, my original problem still stands... Any idea? The original question follows. In VirtualBox, I have:
    master VM: Ubuntu 12.04.3 server
      eth0: Internal Network, IP 192.168.0.11
      eth1: NAT, IP 10.0.3.15
      eth2: Host-only, IP 192.168.56.102
      running the MAAS region and cluster controller, with DHCP and DNS enabled
    node VM:
      eth0: Internal Network
    The node VM boots via PXE. DHCP succeeds and the boot process starts, but during boot I see some issues. One of them is "disk drive not ready yet or not present" for / and /tmp. I've googled this issue, and some people say it happens when the physical disk is an SSD, which is my case. Anyway, the system seems to recover from this eventually. Immediately after, it starts printing a lot of messages of the form:
    2013-10-01 16:52:37,142 - url_helper.py[WARNING]: Calling 'http://169.254.168.254/2009-04-04/meta-data/instance-id failed [x/y]: url error [[Errno 113] No route to host]
    That IP address is clearly bogus; I'm not sure where it came from. Before that point, I had seen the following network configuration:
      address: 192.168.0.100
      broadcast: 192.168.0.255
      netmask: 255.255.255.0
      gateway: 192.168.0.1
      dns0: 10.0.3.15
      dns1: 0.0.0.0
    Not sure if related, but the DNS doesn't seem right, as the node doesn't have an interface to reach 10.0.3.15. If that's the problem, what should I change to have the DNS point to 192.168.0.11? Thanks, Roberto
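    For anyone hitting the same thing, the DNS change described above amounts to a one-line edit inside the subnet block of /etc/maas/dhcpd.conf, followed by a restart of the MAAS DHCP daemon as the poster did. A rough sketch of the relevant fragment (the surrounding subnet declaration is assumed, not copied from the poster's file):

      subnet 192.168.0.0 netmask 255.255.255.0 {
          option routers 192.168.0.1;              # gateway observed on the node
          option domain-name-servers 192.168.0.11; # was 10.0.3.15, unreachable from the node
          ...
      }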

    Read the article

  • Pro BizTalk 2009

    - by Sean Feldman
    I have finished reading the Pro BizTalk 2009 book from Apress. This is a great book if you've never dealt with BizTalk in the past and want a quick "on-ramp". Although the book is very concerned with the right way of building traditional BizTalk applications, it also dedicates a chapter to the ESB Toolkit and does a good job of analyzing it. The fact that the authors were concerned with subjects such as coupling, hard-coding, automation, etc. makes it very interesting. One warning: if you are expecting a book that will guide you through step-by-step examples, forget it. This book doesn't do that, nor would it be possible, due to the many errata found in it. I really was disappointed by the number of typos, inaccuracies, and technical mistakes. These kinds of things turn readers away, especially readers who have no prior experience with BizTalk. But this is the only bad thing about it. As for the rest, great content. Next week I am taking a deep-dive course on BizTalk 2009. I really feel that this book has helped me a lot to get my feet wet. My next target will be the ESB Toolkit. To get there, I will follow what the book's authors wrote: "you have to understand how BizTalk as an engine works".

    Read the article

  • How to Enable Firefox’s Built-in PDF Reader

    - by Taylor Gibb
    Firefox 15 includes an all-new PDF reader built into the browser–for those of you wondering, that means you can finally disable the Adobe PDF Plugin and uninstall it once and for all. Note: obviously, if you need access to more advanced PDF features, you'll still need the Adobe plugin. For most of us, however, the built-in viewer is fine, or you could download PDF files and read them in the offline Adobe Reader. Enabling Firefox's Built-in PDF Reader Open Firefox and navigate to about:config. This will bring up a sarcastic warning telling you that you might void your warranty; just click the "I'll be careful, I promise!" button to move on. Now you will need to search for: browser.preferences.inContent When you find it, right-click on it and select Toggle from the context menu. Next you will need to enable the actual PDF reader feature, which you can do by searching for: pdfjs.disabled and toggling it in the same way. That's all there is to it; you can even drag PDF files from your local machine onto the Firefox window to view them!
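    If you would rather script the change than click through about:config, the same preferences can be set in a user.js file in your Firefox profile folder. This is standard Firefox behaviour rather than anything the article describes, so treat it as a sketch:

      // user.js in the Firefox profile directory, read once at startup
      user_pref("pdfjs.disabled", false);                // turn the built-in PDF viewer on
      user_pref("browser.preferences.inContent", true);  // the other preference the article toggles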

    Read the article

  • Trust: A New Line of Business

    - by ruth.donohue
    What do you think are the key factors in building and maintaining your company's reputation... Innovation? Price? Surprisingly, according to the recent 2010 Edelman Trust Barometer, survey respondents in the US valued transparency of business practices as well as company trustworthiness as the two most important factors influencing corporate reputation. What is trust? It's the confidence in a company's ability to do what is right for all its stakeholders -- shareholders, customers, employees, and the broader society at large -- and not just shareholders. Trust is an increasingly important component to maintaining your company's reputation and brand, and Western countries have seen an increase in global trust. Global businesses headquartered in the United States in particular have seen an 18 point boost to 54 percent in global trust. Whether this uptick represents the start of a new trend or a mere blip in the barometer remains to be seen. The Edelman report notes that the increase is "tenuous" as people expect companies to return to "business as usual" after the economy rebounds. This warning underscores the need for companies to continue engaging in open and frequent communications and business practices with its stakeholders across multiple channels and view trust as a "new line of business" to cultivate.

    Read the article

  • How do I create a popup banner before login with Lightdm?

    - by Rich Loring
    When Ubuntu was using gnome I was able to create a popup banner like the banner below before the login screen using zenity in the /etc/gdm/Init/Default. The line of code would be like this: if [ -f "/usr/bin/zenity" ]; then /usr/bin/zenity --info --text="`cat /etc/issue`" --no-wrap; else xmessage -file /etc/issue -button ok -geometry 540X480; fi How can I accomplish this with Unity? NOTICE TO USERS This is a Federal computer system (and/or it is directly connected to a BNL local network system) and is the property of the United States Government. It is for authorized use only. Users (authorized or unauthorized) have no explicit or implicit expectation of privacy. Any or all uses of this system and all files on this system may be intercepted, monitored, recorded, copied, audited, inspected, and disclosed to authorized site, Department of Energy, and law enforcement personnel, as well as authorized officials of other agencies, both domestic and foreign. By using this system, the user consents to such interception, monitoring, recording, copying, auditing, inspection, and disclosure at the discretion of authorized site or Department of Energy personnel. Unauthorized or improper use of this system may result in administrative disciplinary action and civil and criminal penalties. By continuing to use this system you indicate your awareness of and consent to these terms and conditions of use. LOG OFF IMMEDIATELY if you do not agree to the conditions stated in this warning.
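    One possible LightDM equivalent is to hang the same zenity command off LightDM's greeter-setup-script hook (a standard lightdm.conf key that runs a command as root around the time the greeter starts). This is only a sketch: the helper script path and name below are invented, and the hook's exact timing should be verified on 12.04 before relying on it for a compliance banner.

      # /etc/lightdm/lightdm.conf  (12.04 uses the [SeatDefaults] section)
      [SeatDefaults]
      greeter-setup-script=/usr/local/sbin/login-banner

      # /usr/local/sbin/login-banner  (mark it executable with chmod +x)
      #!/bin/sh
      if [ -f /usr/bin/zenity ]; then
          /usr/bin/zenity --info --text="$(cat /etc/issue)" --no-wrap
      else
          xmessage -file /etc/issue -button ok -geometry 540x480
      fi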

    Read the article

  • What happens to C# 4 optional parameters when compiling against 3.5?

    - by Bertrand Le Roy
    Here’s a method declaration that uses optional parameters: public Path Copy( Path destination, bool overwrite = false, bool recursive = false) Something you may not know is that Visual Studio 2010 will let you compile this against .NET 3.5, with no error or warning. You may be wondering (as I was) how it does that. Well, it takes the easy and rather obvious way of not trying to be too smart and just ignores the optional parameters. So if you’re compiling against 3.5 from Visual Studio 2010, the above code is equivalent to: public Path Copy( Path destination, bool overwrite, bool recursive) The parameters are not optional (no such thing in C# 3), and no overload gets magically created for you. If you’re building a library that is going to have both 3.5 and 4.0 versions, and you want 3.5 users to have reasonable overloads of your methods, you’ll have to provide those yourself, which means that providing a version with optional parameters for the benefit of 4.0 users is not going to provide that much value, except for the ability to provide named parameters out of order. I guess that’s not so bad… Providing all of the following overloads will compile against both 3.5 and 4.0: public Path Copy(Path destination)public Path Copy(Path destination, bool overwrite)public Path Copy( Path destination, bool overwrite = false, bool recursive = false)

    Read the article

  • Resolving "PLS-00201: identifier 'DBMS_SYSTEM.XXXX' must be declared" Error

    - by Giri Mandalika
    Here is a failure sample:
    SQL> set serveroutput on
    SQL> alter package APPS.FND_TRACE compile body;
    Warning: Package Body altered with compilation errors.
    SQL> show errors
    Errors for PACKAGE BODY APPS.FND_TRACE:
    LINE/COL ERROR
    -------- -----------------------------------------------------------------
    235/6    PL/SQL: Statement ignored
    235/6    PLS-00201: identifier 'DBMS_SYSTEM.SET_EV' must be declared
    ..
    By default, the DBMS_SYSTEM package is accessible only from the SYS schema, and there is no public synonym created for this package. So the solution is to create the public synonym and grant the "execute" privilege on the DBMS_SYSTEM package to all database users or to a specific user. e.g.,
    SQL> CREATE PUBLIC SYNONYM dbms_system FOR dbms_system;
    Synonym created.
    SQL> GRANT EXECUTE ON dbms_system TO APPS;
    Grant succeeded.
    - OR -
    SQL> GRANT EXECUTE ON dbms_system TO PUBLIC;
    Grant succeeded.
    SQL> alter package APPS.FND_TRACE compile body;
    Package body altered.
    Note that merely granting the execute privilege is not enough -- creating the public synonym is just as important to resolving this issue.

    Read the article

  • Can't exec "locale": No such file or directory

    - by Alex
    I am new to Linux. I was trying to install Wine, and after I followed instructions from a YouTube video I got to the point where I needed to install Wine from the Ubuntu Software Center. The problem is that the Ubuntu Software Center doesn't work anymore; it asks me to repair it, and when I push the Repair button it gives me this error:
    installArchives() failed: Can't exec "locale": No such file or directory at /usr/share/perl5/Debconf/Encoding.pm line 16.
    Use of uninitialized value $Debconf::Encoding::charmap in scalar chomp at /usr/share/perl5/Debconf/Encoding.pm line 17.
    Preconfiguring packages ...
    Can't exec "locale": No such file or directory at /usr/share/perl5/Debconf/Encoding.pm line 16.
    Use of uninitialized value $Debconf::Encoding::charmap in scalar chomp at /usr/share/perl5/Debconf/Encoding.pm line 17.
    Preconfiguring packages ...
    Can't exec "locale": No such file or directory at /usr/share/perl5/Debconf/Encoding.pm line 16.
    Use of uninitialized value $Debconf::Encoding::charmap in scalar chomp at /usr/share/perl5/Debconf/Encoding.pm line 17.
    Preconfiguring packages ...
    dpkg: warning: 'ldconfig' not found in PATH or not executable.
    dpkg: error: 1 expected program not found in PATH or not executable.
    Note: root's PATH should usually contain /usr/local/sbin, /usr/sbin and /sbin.
    Error in function: SystemError: E:Sub-process /usr/bin/dpkg returned an error code (2)
    Please help me. Thank you :D
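    The dpkg note in that output points at the likely cause: the programs dpkg needs (locale, ldconfig) are not reachable on root's PATH. A minimal diagnostic and retry sketch from a terminal; the explicit PATH value is the usual Ubuntu default and is an assumption about the fix, not something confirmed in the question:

      # check what root's PATH looks like and whether the two programs resolve
      sudo sh -c 'echo $PATH; command -v locale ldconfig'
      # retry the interrupted package configuration with a known-good PATH
      sudo env PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin \
          dpkg --configure -a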

    Read the article

  • Read-only file system

    - by John
    The title might not be as descriptive as I would like it to be, but I couldn't come up with a better one. My server's file system went read-only, and I don't understand why it did so or how to solve it. I can SSH into the server, and when trying to start apache2, for example, I get the following:
    username@srv1:~$ sudo service apache2 start
    [sudo] password for username:
    sudo: unable to open /var/lib/sudo/username/1: Read-only file system
    * Starting web server apache2
    (30)Read-only file system: apache2: could not open error log file /var/log/apache2/error.log.
    Unable to open logs
    Action 'start' failed. The Apache error log may have more information.
    When I try restarting the server I get:
    username@srv1:~$ sudo shutdown -r now
    [sudo] password for username:
    sudo: unable to open /var/lib/sudo/username/1: Read-only file system
    Once I restart it manually, it just starts up without any warning or message saying something is wrong. I hope somebody can point me in the right direction to resolve this issue. Thanks in advance!
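    On Ubuntu the root filesystem is normally mounted with errors=remount-ro, so a read-only root usually means the kernel hit an I/O or filesystem error and protected the disk. A sketch of the usual triage, commands only; whether remounting read-write is safe depends entirely on what dmesg shows:

      # find the event that triggered the read-only remount
      dmesg | grep -iE 'remount|read-only|i/o error|ext[34]'
      # if there are no disk errors, a remount may get things going temporarily
      sudo mount -o remount,rw /
      # otherwise force a filesystem check of / on the next boot
      sudo touch /forcefsck
      sudo reboot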

    Read the article

  • SQL Server 2012 Memory Manager KB articles

    - by SQLOS Team
    Since the release of SQL Server 2012 with a redesigned memory manager, a steady stream of KB articles has been produced by CSS to provide guidance on the new or changed options, as well as on fixes that have been published.
    How has memory sizing changed in SQL 2012?
    2663912 Memory configuration and sizing considerations in SQL Server 2012 - http://support.microsoft.com/default.aspx?scid=kb;EN-US;2663912
    Setting "locked pages" to avoid SQL Server memory pages getting swapped has been simplified, particularly for Standard Edition; the details can be found here:
    2659143 How to enable the "locked pages" feature in SQL Server 2012 - http://support.microsoft.com/default.aspx?scid=kb;EN-US;2659143
    Note the following deprecation (particularly relevant for 32-bit installations):
    2644592 The "AWE enabled" SQL Server feature is deprecated - http://support.microsoft.com/default.aspx?scid=kb;EN-US;2644592
    Note the following fixes available:
    2708594 FIX: Locked page allocations are enabled without any warning after you upgrade to SQL Server 2012 - http://support.microsoft.com/kb/2708594/EN-US
    2688697 FIX: Out-of-memory error when you run an instance of SQL Server 2012 on a computer that uses NUMA - http://support.microsoft.com/kb/2688697/EN-US
    Originally posted at http://blogs.msdn.com/b/sqlosteam/

    Read the article

  • shared hosting: ssl domain receives all server's ssl requests, google gets it wrong

    - by pixeline
    This website is hosted on a shared server. I've recently had my hosting provider secure our website using SSL (https://domain.com instead of http://domain.com). Ever since then, all HTTPS requests sent to the server are redirected to my website: https://otherdomain.com leads to a warning and then, if you continue, to my website. OK, my fault, I should have known SSL means one IP; otherwise, this kind of thing can happen. But... Google search results for my website's target keywords are now displaying these websites above my own, even though they have nothing even remotely related to the target keywords! Already done:
    - provided the canonical URL in the HTML page;
    - told the server manager about the problem; he tells me it's normal but that he'll look for a solution. That was one week ago, no answer since.
    I have no idea why Google is providing these HTTPS URLs: I thought someone would have to submit them, or have them inside an HTML page, in order for Google to actually index dummy HTTPS domains, but I see no reason why someone would do that. Any suggestion on how to solve this situation? Go-live is in one week and SEO looks really bad because of that.

    Read the article

  • E.T. II – Extinction [Fake Movie Sequel Video]

    - by Asian Angel
    It has been years since E.T. returned home and Elliot is all grown up now. But things are about to get a lot more interesting when E.T. returns to help save the Earth. The only problem is that the invaders are the people from E.T.'s own planet. Forget cute and cuddly…extinction is the name of the game in this well-edited fake sequel trailer from YouTube user Robert Blankenheim. Warning: If you have fond, warm memories of the original E.T. movie, then you may want to skip this video. These extra-terrestrials are literally portrayed as pure, bloodthirsty evil. ET Sequel: "ET-X" (Extended Trailer) [via Geeks are Sexy]

    Read the article

  • Nautilus crashes after Ubuntu Tweak Package Cleaner [fixed]

    - by Ka7anax
    A few days ago I started having some problems with Nautilus. Basically, when I try to get into a folder it crashes. It doesn't happen all the time, but about 85% of the time it does... Sometimes, after the crash, all my desktop icons are also gone. The only thing that I think causes this is Ubuntu Tweak - I'm not sure, but the issues started after I ran the Package Cleaner from Ubuntu Tweak... Any ideas?
    ------- EDIT 2 - IMPORTANT !!! ----------
    It seems I fixed this problem by doing the following:
    1) I uninstalled this nautilus script - http://mundogeek.net/nautilus-scripts/#nautilus-send-gmail
    2) I installed nautilus-elementary
    So far it's back to normal... If anything bad happens again I will come back!
    -------- EDIT 1 ----------
    The first time, after running the command (nautilus --quit; nautilus --no-desktop) three times, the whole system crashed (except the mouse - I could still move the mouse). After a restart I ran it and got this:
    Initializing nautilus-gdu extension
    Initializing nautilus-dropbox 0.6.7
    (nautilus:2966): GConf-CRITICAL **: gconf_value_free: assertion `value != NULL' failed
    (nautilus:2966): GConf-CRITICAL **: gconf_value_free: assertion `value != NULL' failed
    Nautilus-Share-Message: Called "net usershare info" but it failed: 'net usershare' returned error 255: net usershare: cannot open usershare directory /var/lib/samba/usershares. Error No such file or directory
    Please ask your system administrator to enable user sharing.
    and then this:
    cristi@cris-laptop:~$ nautilus --quit; nautilus --no-desktop
    (nautilus:3810): Unique-DBus-WARNING **: Error while sending message: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.

    Read the article

  • Gparted resize of an extended partition fails with error "can't have overlapping partitions".

    - by Marcus
    I just decided to install Ubuntu 12.04 alongside Windows 7 on my Dell laptop. However, I didn't do this manually; instead I used the "Install Ubuntu alongside Windows 7" option during the installation. Now the partition that Ubuntu runs in has very little space and I am getting warning messages. I'm trying to use GParted 0.12.1-5 (via a live CD) to give Windows less space and give Ubuntu more. I've managed to remove 100GB from the Windows partition, so I now have some unallocated space between Windows and Ubuntu. This is what it looks like inside Ubuntu (not using the live CD, since it won't let me mount a USB to save a screenshot): So first I take sda4 (extended?) and resize it to the left so it takes up all the unallocated space. Then I resize sda5 (ext4) as well so it takes up all the new space. However, when I hit Apply, it fails on the first action (resizing sda4) with the error message "can't have overlapping partitions". Any ideas as to why this happens? I also tried resizing sda4 by just a few MB so that it definitely didn't overlap anything, but I still got the same error message. To clarify, I am using GParted from the live CD; I just took the screenshot from Ubuntu. I couldn't attach the details file containing the error information from GParted because I can't mount a USB drive when I'm running from the live CD. I tried following the guide on the GParted website but it says "Invalid argument" or something like that. If the GParted details are needed, I may need some hints on how to solve the USB issue as well. :)
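    When GParted refuses a resize with "can't have overlapping partitions", comparing the extended partition's boundaries with its logical partitions in exact sectors usually reveals the conflict. A purely diagnostic sketch that could be run from the live CD terminal (it changes nothing on disk):

      # print the partition table in sectors so start/end boundaries line up exactly
      sudo parted /dev/sda unit s print
      # sda4 (the extended container) must start at or before sda5's start sector
      # and end at or after sda5's end sector; anything else is reported as an overlap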

    Read the article

  • RAID5 over LVM on Ubuntu Server 12.04.3

    - by April Ethereal
    I'm trying to create a RAID5 software array using LVM. I use VirtualBox, as I'm only learning how LVM works. So I've created 4 virtual SCSI drives and then did the following:
    pvcreate /dev/sd[b-e]
    vgcreate /dev/sd[b-e] raid5_vg
    lvcreate --type raid5 -i 3 -L 1G -n raid_lv raid5_vg
    However, I get an error after the last command:
    WARNING: Unrecognised segment type raid5
    Using default stripesize 64.00 KiB
    Rounding size (256 extents) up to stripe boundary size (258 extents)
    Cannot update volume group raid5_vg with unknown segments in it!
    So it looks like raid5 is not a valid segment type. "lvm segtypes" also doesn't contain a 'raid5' entry:
    root@ubuntu-lvm:~# lvm segtypes
    striped
    zero
    error
    free
    snapshot
    mirror
    So my question is: how could I create a RAID5 logical volume using LVM only? It seems that it is possible; I saw a few references (not for Ubuntu, unfortunately) for RedHat and Gentoo systems. I don't want to use mdadm for now, until I find out that it is mandatory. Some info about my system is below (I use Ubuntu Server 12.04.3, i686):
    root@ubuntu-lvm:~# uname -a
    Linux ubuntu-lvm 3.8.0-29-generic #42~precise1-Ubuntu SMP Wed Aug 14 15:31:16 UTC 2013 i686 i686 i386 GNU/Linux
    root@ubuntu-lvm:~# dpkg -l | grep lvm
    ii lvm2 2.02.66-4ubuntu7.3 The Linux Logical Volume Manager
    Thanks.
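    For context: the raid1/raid5/raid6 segment types only appear when lvm2 is new enough to include dm-raid support and the dm-raid kernel module is available; the lvm2 2.02.66 build shown above predates that, which matches the "Unrecognised segment type" warning. A hedged sketch of how one might check, plus the classic mdadm-underneath-LVM fallback the poster wants to avoid (the exact lvm2 version threshold is an assumption):

      # is the kernel-side RAID target available, and does this lvm2 know any raid segtypes?
      sudo modprobe dm-raid && sudo dmsetup targets | grep -i raid
      sudo lvm segtypes | grep -i raid     # prints nothing on lvm2 2.02.66
      # fallback: build the RAID5 set with mdadm and layer LVM on top of it
      sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]
      sudo pvcreate /dev/md0
      sudo vgcreate raid5_vg /dev/md0
      sudo lvcreate -L 1G -n raid_lv raid5_vg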

    Read the article

  • Sign of a Good Game

    - by Matt Christian
    (Warning: This post contains spoilers about SILENT HILL 2.  If you haven't played this game, you are dumb) What is one sign of a great game? One of the signs I realized recently is when a game continues to stun and surprise you years and years after you've played and beaten it.  As a major Silent Hill fan, I recently was reminded of Silent Hill 2 and even though see it as one of my favorite Silent Hill games, there are still things I'm learning about it that are neat little additions that add to the atmosphere (atmosphere also makes a great game!). For instance, when you start the game you are given a letter by your wife who has been deceased for years and years.  You are directed to Silent Hill and start treking through hell all by your lonesome (with the exception of a few psychos).  As you continue through the game, pieces of the letter begin to fade and disappear until eventually it is completely non-existent, thus implying the letter was never real and the letter was a delusion you created. Another example is the game's use of imagery the player knows about but might not notice at first.  For me, the most apparent of these was the dress you find near the start when you find the flashlight, which is the same dress you see Mary (your wife) wearing in the flashback sequences.  However, one thing I didn't know was that several deceased bodies you encounter laying around Silent Hill are actually the body of the main character (James) which invokes an idea you've seen that body before but can't pinpoint where... It's amazing to see a game go to such unique lengths to provide a psychological horror game.  Sure, all the dead bodies could be randomly modelled and the dress could be any ol' dress, but just the idea of your brain knowing something deep down but you can't pinpoint it is a really unique idea.  In my opinion, it ties less into subconscious and more into natural tendencies, it taps into the fear hidden inside us all.

    Read the article

  • Juju Zookeeper & Provisioning Agent Not Deployed

    - by Keith Tobin
    I am using juju with the openstack provider, i expected that when i bootstrap that zookeeper and provisioning agent would get deployed on the bootstrap vm in openstack. This dose not seem to be the case. the bootstrap vm gets deployed but it seems that nothing gets deployed to the VM. See logs below, I may be missing something, also how is it possible to log on the bootstrap vm. Could I manual deploy, if so what do I need to do. Juju Bootstrap commend root@cinder01:/home/cinder# juju -v bootstrap 2012-10-12 03:21:20,976 DEBUG Initializing juju bootstrap runtime 2012-10-12 03:21:20,982 WARNING Verification of xxxxS certificates is disabled for this environment. Set 'ssl-hostname-verification' to ensure secure communication. 2012-10-12 03:21:20,982 DEBUG openstack: using auth-mode 'userpass' with xxxx:xxxxxx.10:35357/v2.0/ 2012-10-12 03:21:21,064 DEBUG openstack: authenticated til u'2012-10-13T08:21:13Z' 2012-10-12 03:21:21,064 DEBUG openstack: GET 'xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/flavors' 2012-10-12 03:21:21,091 DEBUG openstack: 200 '{"flavors": [{"id": "3", "links": [{"href": "xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/flavors/3", "rel": "self"}, {"href": "xxxx:xxxxxx.15:8774/d5f52673953f49e595279e89ddde979d/flavors/3", "rel": "bookmark"}], "name": "m1.medium"}, {"id": "4", "links": [{"href": "xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/flavors/4", "rel": "self"}, {"href": "xxxx:xxxxxx.15:8774/d5f52673953f49e595279e89ddde979d/flavors/4", "rel": "bookmark"}], "name": "m1.large"}, {"id": "1", "links": [{"href": "xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/flavors/1", "rel": "self"}, {"href": "xxxx:xxxxxx.15:8774/d5f52673953f49e595279e89ddde979d/flavors/1", "rel": "bookmark"}], "name": "m1.tiny"}, {"id": "5", "links": [{"href": "xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/flavors/5", "rel": "self"}, {"href": "xxxx:xxxxxx.15:8774/d5f52673953f49e595279e89ddde979d/flavors/5", "rel": "bookmark"}], "name": "m1.xlarge"}, {"id": "2", "links": [{"href": "xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/flavors/2", "rel": "self"}, {"href": "xxxx:xxxxxx.15:8774/d5f52673953f49e595279e89ddde979d/flavors/2", "rel": "bookmark"}], "name": "m1.small"}]}' 2012-10-12 03:21:21,091 INFO Bootstrapping environment 'openstack' (origin: ppa type: openstack)... 2012-10-12 03:21:21,091 DEBUG access object-store @ xxxx:xx10.49.113.11:8080/v1/AUTH_d5f52673953f49e595279e89ddde979d/juju-hpc-az1-cb/provider-state 2012-10-12 03:21:21,092 DEBUG openstack: GET 'xxxx:xx10.49.113.11:8080/v1/AUTH_d5f52673953f49e595279e89ddde979d/juju-hpc-az1-cb/provider-state' 2012-10-12 03:21:21,165 DEBUG openstack: 200 '{}\n' 2012-10-12 03:21:21,165 DEBUG Verifying writable storage 2012-10-12 03:21:21,165 DEBUG access object-store @ xxxx:xx10.49.113.11:8080/v1/AUTH_d5f52673953f49e595279e89ddde979d/juju-hpc-az1-cb/bootstrap-verify 2012-10-12 03:21:21,166 DEBUG openstack: PUT 'xxxx:xx10.49.113.11:8080/v1/AUTH_d5f52673953f49e595279e89ddde979d/juju-hpc-az1-cb/bootstrap-verify' 2012-10-12 03:21:21,251 DEBUG openstack: 201 '201 Created\n\n\n\n ' 2012-10-12 03:21:21,251 DEBUG Launching juju bootstrap instance. 
2012-10-12 03:21:21,271 DEBUG access object-store @ xxxx:xx10.49.113.11:8080/v1/AUTH_d5f52673953f49e595279e89ddde979d/juju-hpc-az1-cb/juju_master_id 2012-10-12 03:21:21,273 DEBUG access compute @ xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/os-security-groups 2012-10-12 03:21:21,273 DEBUG openstack: GET 'xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/os-security-groups' 2012-10-12 03:21:21,321 DEBUG openstack: 200 '{"security_groups": [{"rules": [{"from_port": -1, "group": {}, "ip_protocol": "icmp", "to_port": -1, "parent_group_id": 1, "ip_range": {"cidr": "0.0.0.0/0"}, "id": 7}, {"from_port": 22, "group": {}, "ip_protocol": "tcp", "to_port": 22, "parent_group_id": 1, "ip_range": {"cidr": "0.0.0.0/0"}, "id": 38}], "tenant_id": "d5f52673953f49e595279e89ddde979d", "id": 1, "name": "default", "description": "default"}]}' 2012-10-12 03:21:21,322 DEBUG Creating juju security group juju-openstack 2012-10-12 03:21:21,322 DEBUG openstack: POST 'xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/os-security-groups' 2012-10-12 03:21:21,401 DEBUG openstack: 200 '{"security_group": {"rules": [], "tenant_id": "d5f52673953f49e595279e89ddde979d", "id": 48, "name": "juju-openstack", "description": "juju group for openstack"}}' 2012-10-12 03:21:21,401 DEBUG openstack: POST 'xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/os-security-group-rules' 2012-10-12 03:21:21,504 DEBUG openstack: 200 '{"security_group_rule": {"from_port": 22, "group": {}, "ip_protocol": "tcp", "to_port": 22, "parent_group_id": 48, "ip_range": {"cidr": "0.0.0.0/0"}, "id": 54}}' 2012-10-12 03:21:21,504 DEBUG openstack: POST 'xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/os-security-group-rules' 2012-10-12 03:21:21,647 DEBUG openstack: 200 '{"security_group_rule": {"from_port": 1, "group": {"tenant_id": "d5f52673953f49e595279e89ddde979d", "name": "juju-openstack"}, "ip_protocol": "tcp", "to_port": 65535, "parent_group_id": 48, "ip_range": {}, "id": 55}}' 2012-10-12 03:21:21,647 DEBUG openstack: POST 'xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/os-security-group-rules' 2012-10-12 03:21:21,791 DEBUG openstack: 200 '{"security_group_rule": {"from_port": 1, "group": {"tenant_id": "d5f52673953f49e595279e89ddde979d", "name": "juju-openstack"}, "ip_protocol": "udp", "to_port": 65535, "parent_group_id": 48, "ip_range": {}, "id": 56}}' 2012-10-12 03:21:21,792 DEBUG Creating machine security group juju-openstack-0 2012-10-12 03:21:21,792 DEBUG openstack: POST 'xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/os-security-groups' 2012-10-12 03:21:21,871 DEBUG openstack: 200 '{"security_group": {"rules": [], "tenant_id": "d5f52673953f49e595279e89ddde979d", "id": 49, "name": "juju-openstack-0", "description": "juju group for openstack machine 0"}}' 2012-10-12 03:21:21,871 DEBUG access compute @ xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/flavors/detail 2012-10-12 03:21:21,871 DEBUG openstack: GET 'xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/flavors/detail' 2012-10-12 03:21:21,906 DEBUG openstack: 200 '{"flavors": [{"vcpus": 2, "disk": 10, "name": "m1.medium", "links": [{"href": "xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/flavors/3", "rel": "self"}, {"href": "xxxx:xxxxxx.15:8774/d5f52673953f49e595279e89ddde979d/flavors/3", "rel": "bookmark"}], "rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 40, "ram": 4096, "id": "3", "swap": ""}, {"vcpus": 4, "disk": 10, "name": "m1.large", "links": [{"href": 
"xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/flavors/4", "rel": "self"}, {"href": "xxxx:xxxxxx.15:8774/d5f52673953f49e595279e89ddde979d/flavors/4", "rel": "bookmark"}], "rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 80, "ram": 8192, "id": "4", "swap": ""}, {"vcpus": 1, "disk": 0, "name": "m1.tiny", "links": [{"href": "xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/flavors/1", "rel": "self"}, {"href": "xxxx:xxxxxx.15:8774/d5f52673953f49e595279e89ddde979d/flavors/1", "rel": "bookmark"}], "rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 0, "ram": 512, "id": "1", "swap": ""}, {"vcpus": 8, "disk": 10, "name": "m1.xlarge", "links": [{"href": "xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/flavors/5", "rel": "self"}, {"href": "xxxx:xxxxxx.15:8774/d5f52673953f49e595279e89ddde979d/flavors/5", "rel": "bookmark"}], "rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 160, "ram": 16384, "id": "5", "swap": ""}, {"vcpus": 1, "disk": 10, "name": "m1.small", "links": [{"href": "xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/flavors/2", "rel": "self"}, {"href": "xxxx:xxxxxx.15:8774/d5f52673953f49e595279e89ddde979d/flavors/2", "rel": "bookmark"}], "rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 20, "ram": 2048, "id": "2", "swap": ""}]}' 2012-10-12 03:21:21,907 DEBUG access compute @ xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/servers 2012-10-12 03:21:21,907 DEBUG openstack: POST 'xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/servers' 2012-10-12 03:21:22,284 DEBUG openstack: 202 '{"server": {"OS-DCF:diskConfig": "MANUAL", "id": "a598b402-8678-4447-baeb-59255409a023", "links": [{"href": "xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/servers/a598b402-8678-4447-baeb-59255409a023", "rel": "self"}, {"href": "xxxx:xxxxxx.15:8774/d5f52673953f49e595279e89ddde979d/servers/a598b402-8678-4447-baeb-59255409a023", "rel": "bookmark"}], "adminPass": "SuFp48cZzdo4"}}' 2012-10-12 03:21:22,284 DEBUG access object-store @ xxxx:xx10.49.113.11:8080/v1/AUTH_d5f52673953f49e595279e89ddde979d/juju-hpc-az1-cb/juju_master_id 2012-10-12 03:21:22,285 DEBUG openstack: PUT 'xxxx:xx10.49.113.11:8080/v1/AUTH_d5f52673953f49e595279e89ddde979d/juju-hpc-az1-cb/juju_master_id' 2012-10-12 03:21:22,375 DEBUG openstack: 201 '201 Created\n\n\n\n ' 2012-10-12 03:21:27,379 DEBUG Waited for 5 seconds for networking on server u'a598b402-8678-4447-baeb-59255409a023' 2012-10-12 03:21:27,380 DEBUG access compute @ xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/servers/a598b402-8678-4447-baeb-59255409a023 2012-10-12 03:21:27,380 DEBUG openstack: GET 'xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/servers/a598b402-8678-4447-baeb-59255409a023' 2012-10-12 03:21:27,556 DEBUG openstack: 200 '{"server": {"OS-EXT-STS:task_state": "networking", "addresses": {"private": [{"version": 4, "addr": "10.0.0.8"}]}, "links": [{"href": "xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/servers/a598b402-8678-4447-baeb-59255409a023", "rel": "self"}, {"href": "xxxx:xxxxxx.15:8774/d5f52673953f49e595279e89ddde979d/servers/a598b402-8678-4447-baeb-59255409a023", "rel": "bookmark"}], "image": {"id": "5bf60467-0136-4471-9818-e13ade75a0a1", "links": [{"href": "xxxx:xxxxxx.15:8774/d5f52673953f49e595279e89ddde979d/images/5bf60467-0136-4471-9818-e13ade75a0a1", "rel": "bookmark"}]}, "OS-EXT-STS:vm_state": "building", "OS-EXT-SRV-ATTR:instance_name": "instance-00000060", "flavor": {"id": "1", "links": [{"href": "xxxx:xxxxxx.15:8774/d5f52673953f49e595279e89ddde979d/flavors/1", 
"rel": "bookmark"}]}, "id": "a598b402-8678-4447-baeb-59255409a023", "user_id": "01610f73d0fb4922aefff09f2627e50c", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "progress": 0, "OS-EXT-STS:power_state": 0, "config_drive": "", "status": "BUILD", "updated": "2012-10-12T08:21:23Z", "hostId": "1cdb25708fb8e464d83a69fe4a024dcd5a80baf24a82ec28f9d9f866", "OS-EXT-SRV-ATTR:host": "nova01", "key_name": "", "OS-EXT-SRV-ATTR:hypervisor_hostname": null, "name": "juju openstack instance 0", "created": "2012-10-12T08:21:22Z", "tenant_id": "d5f52673953f49e595279e89ddde979d", "metadata": {}}}' 00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 2012-10-12 03:21:27,557 DEBUG access compute @ xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/os-floating-ips 2012-10-12 03:21:27,557 DEBUG openstack: GET 'xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/os-floating-ips' 2012-10-12 03:21:27,815 DEBUG openstack: 200 '{"floating_ips": [{"instance_id": "a0e0df11-91c0-4801-95b3-62d910d729e9", "ip": "xxxx.35", "fixed_ip": "10.0.0.5", "id": 447, "pool": "nova"}, {"instance_id": "b84f1a42-7192-415e-8650-ebb1aa56e97f", "ip": "xxxx.36", "fixed_ip": "10.0.0.6", "id": 448, "pool": "nova"}, {"instance_id": null, "ip": "xxxx.37", "fixed_ip": null, "id": 449, "pool": "nova"}, {"instance_id": null, "ip": "xxxx.38", "fixed_ip": null, "id": 450, "pool": "nova"}, {"instance_id": null, "ip": "xxxx.39", "fixed_ip": null, "id": 451, "pool": "nova"}, {"instance_id": null, "ip": "xxxx.40", "fixed_ip": null, "id": 452, "pool": "nova"}, {"instance_id": null, "ip": "xxxx.41", "fixed_ip": null, "id": 453, "pool": "nova"}]}' 2012-10-12 03:21:27,815 DEBUG access compute @ xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/servers/a598b402-8678-4447-baeb-59255409a023/action 2012-10-12 03:21:27,816 DEBUG openstack: POST 'xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/servers/a598b402-8678-4447-baeb-59255409a023/action' 2012-10-12 03:21:28,356 DEBUG openstack: 202 '' 2012-10-12 03:21:28,356 DEBUG access object-store @ xxxx:xx10.49.113.11:8080/v1/AUTH_d5f52673953f49e595279e89ddde979d/juju-hpc-az1-cb/provider-state 2012-10-12 03:21:28,357 DEBUG openstack: PUT 'xxxx:xx10.49.113.11:8080/v1/AUTH_d5f52673953f49e595279e89ddde979d/juju-hpc-az1-cb/provider-state' 2012-10-12 03:21:28,446 DEBUG openstack: 201 '201 Created\n\n\n\n ' 2012-10-12 03:21:28,446 INFO 'bootstrap' command finished successfully Juju Status Command root@cinder01:/home/cinder# juju -v status 2012-10-12 03:23:28,314 DEBUG Initializing juju status runtime 2012-10-12 03:23:28,320 WARNING Verification of xxxxS certificates is disabled for this environment. Set 'ssl-hostname-verification' to ensure secure communication. 2012-10-12 03:23:28,320 DEBUG openstack: using auth-mode 'userpass' with xxxx:xxxxxx.10:35357/v2.0/ 2012-10-12 03:23:28,320 INFO Connecting to environment... 
2012-10-12 03:23:28,403 DEBUG openstack: authenticated til u'2012-10-13T08:23:20Z' 2012-10-12 03:23:28,403 DEBUG access object-store @ xxxx:xx10.49.113.11:8080/v1/AUTH_d5f52673953f49e595279e89ddde979d/juju-hpc-az1-cb/provider-state 2012-10-12 03:23:28,403 DEBUG openstack: GET 'xxxx:xx10.49.113.11:8080/v1/AUTH_d5f52673953f49e595279e89ddde979d/juju-hpc-az1-cb/provider-state' 2012-10-12 03:23:35,480 DEBUG openstack: 200 'zookeeper-instances: [a598b402-8678-4447-baeb-59255409a023]\n' 2012-10-12 03:23:35,480 DEBUG access compute @ xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/servers/a598b402-8678-4447-baeb-59255409a023 2012-10-12 03:23:35,480 DEBUG openstack: GET 'xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/servers/a598b402-8678-4447-baeb-59255409a023' 2012-10-12 03:23:35,662 DEBUG openstack: 200 '{"server": {"OS-EXT-STS:task_state": null, "addresses": {"private": [{"version": 4, "addr": "10.0.0.8"}, {"version": 4, "addr": "xxxx.37"}]}, "links": [{"href": "xxxx:xxxxxx.15:8774/v1.1/d5f52673953f49e595279e89ddde979d/servers/a598b402-8678-4447-baeb-59255409a023", "rel": "self"}, {"href": "xxxx:xxxxxx.15:8774/d5f52673953f49e595279e89ddde979d/servers/a598b402-8678-4447-baeb-59255409a023", "rel": "bookmark"}], "image": {"id": "5bf60467-0136-4471-9818-e13ade75a0a1", "links": [{"href": "xxxx:xxxxxx.15:8774/d5f52673953f49e595279e89ddde979d/images/5bf60467-0136-4471-9818-e13ade75a0a1", "rel": "bookmark"}]}, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-ATTR:instance_name": "instance-00000060", "flavor": {"id": "1", "links": [{"href": "xxxx:xxxxxx.15:8774/d5f52673953f49e595279e89ddde979d/flavors/1", "rel": "bookmark"}]}, "id": "a598b402-8678-4447-baeb-59255409a023", "user_id": "01610f73d0fb4922aefff09f2627e50c", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "progress": 0, "OS-EXT-STS:power_state": 1, "config_drive": "", "status": "ACTIVE", "updated": "2012-10-12T08:21:40Z", "hostId": "1cdb25708fb8e464d83a69fe4a024dcd5a80baf24a82ec28f9d9f866", "OS-EXT-SRV-ATTR:host": "nova01", "key_name": "", "OS-EXT-SRV-ATTR:hypervisor_hostname": null, "name": "juju openstack instance 0", "created": "2012-10-12T08:21:22Z", "tenant_id": "d5f52673953f49e595279e89ddde979d", "metadata": {}}}' 2012-10-12 03:23:35,663 DEBUG Connecting to environment using xxxx.37... 2012-10-12 03:23:35,663 DEBUG Spawning SSH process with remote_user="ubuntu" remote_host="xxxx.37" remote_port="2181" local_port="45859". 
2012-10-12 03:23:36,173:4355(0x7fd581973700):ZOO_INFO@log_env@658: Client environment:zookeeper.version=zookeeper C client 3.3.5 2012-10-12 03:23:36,173:4355(0x7fd581973700):ZOO_INFO@log_env@662: Client environment:host.name=cinder01 2012-10-12 03:23:36,174:4355(0x7fd581973700):ZOO_INFO@log_env@669: Client environment:os.name=Linux 2012-10-12 03:23:36,174:4355(0x7fd581973700):ZOO_INFO@log_env@670: Client environment:os.arch=3.2.0-23-generic 2012-10-12 03:23:36,174:4355(0x7fd581973700):ZOO_INFO@log_env@671: Client environment:os.version=#36-Ubuntu SMP Tue Apr 10 20:39:51 UTC 2012 2012-10-12 03:23:36,174:4355(0x7fd581973700):ZOO_INFO@log_env@679: Client environment:user.name=cinder 2012-10-12 03:23:36,174:4355(0x7fd581973700):ZOO_INFO@log_env@687: Client environment:user.home=/root 2012-10-12 03:23:36,175:4355(0x7fd581973700):ZOO_INFO@log_env@699: Client environment:user.dir=/home/cinder 2012-10-12 03:23:36,175:4355(0x7fd581973700):ZOO_INFO@zookeeper_init@727: Initiating client connection, host=localhost:45859 sessionTimeout=10000 watcher=0x7fd57f9146b0 sessionId=0 sessionPasswd= context=0x2c1dab0 flags=0 2012-10-12 03:23:36,175:4355(0x7fd577fff700):ZOO_ERROR@handle_socket_error_msg@1579: Socket [127.0.0.1:45859] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2012-10-12 03:23:39,512:4355(0x7fd577fff700):ZOO_ERROR@handle_socket_error_msg@1579: Socket [127.0.0.1:45859] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2012-10-12 03:23:42,848:4355(0x7fd577fff700):ZOO_ERROR@handle_socket_error_msg@1579: Socket [127.0.0.1:45859] zk retcode=-4, errno=111(Connection refused): server refused to accept the client ^Croot@cinder01:/home/cinder#
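    Regarding "how is it possible to log on the bootstrap vm": with the OpenStack provider, juju injects the configured SSH key for the ubuntu user on the instance it boots, so the floating IP shown in the status output can be used directly. The log locations below are typical for pyjuju-era bootstrap nodes and are given as an assumption, not taken from the output above:

      # log in to the bootstrap node (xxxx.37 is the floating IP from the status output)
      ssh ubuntu@xxxx.37            # or, if the environment is reachable: juju ssh 0
      # check whether cloud-init ever finished setting up zookeeper and the agents
      sudo tail -n 50 /var/log/cloud-init.log
      ls /var/log/juju/ 2>/dev/null && sudo tail -n 50 /var/log/juju/*.log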

    Read the article

  • Fix: Azure Disabled over 49 cents? Beware of using a Java Virtual Machine on Microsoft Azure

    - by Ken Cox [MVP]
    I love my MSDN Azure account. I can spin up a demo/dev app or VM in seconds. In fact, it is so easy to create a virtual machine that Azure shut down my whole account! Last night I spun up a Java Virtual Machine to play with some Android stuff. My mistake was that I didn’t read the Virtual Machine pricing warning: “I have a MSDN Azure Benefit subscription. Can I use my monthly Azure credits to purchase Oracle software?” “No, Azure credits in our MSDN offers are not applicable to Oracle software. In order to purchase Oracle software in the MSDN Azure Benefit subscription, customers need to turn off their {0} spending limit and pay at the regular pay-as-you-go rate. Otherwise, Oracle usage will hit the {1} spending limit and the subscription will be immediately disabled.”  Immediately disabled? Yup. Everything connected to the subscription was shut off, deallocated, rendered useless - even the free Web sites and the free Sendgrid email service.  The fix? I had to remove the spending limit from my account so I could pay $0.49 (49 cents) for the JVM usage. I still had $134.10 in credits remaining for regular usage with 6 days left in the billing month.  Now the restoration/clean-up begins… figuring out how to get the web sites and services back online.  To me, the preferable way would be for Azure to warn me when setting up a JVM that I had no way of paying for the service. In the alternative, shut down just the offending services – the ones that can’t be covered by the regular credits. What a mess.

    Read the article

  • Booby Traps and Locked-in Kids: An Interview with a Safecracker

    - by Jason Fitzpatrick
    While most of our articles focus on security of the digital sort, this interview with a professional safecracker is an interesting look at the physical side of securing your goods. As part of their Interviews with People Who Have Interesting or Unusual Jobs series over at McSweeney’s, they interviewed Ken Doyle, a locksmithing and safecracking veteran with 30 years of industry experience. The interview is an entertaining and interesting read. One of the more unusual aspects of safecracking he highlights: Q: Do you ever look inside? A: I NEVER look. It’s none of my business. Involving yourself in people’s private affairs can lead to being subpoenaed in a lawsuit or criminal trial. Besides, I’d prefer not knowing about a client’s drug stash, personal porn, or belly button lint collection. When I’m done I gather my tools and walk to the truck to write my invoice. Sometimes I’m out of the room before they open it. I don’t want to be nearby if there is a booby trap. Q: Why would there be a booby trap? A: The safe owner intentionally uses trip mechanisms, explosives or tear gas devices to “deter” unauthorized entry into his safe. It’s pretty stupid because I have yet to see any signs warning a would-be culprit about the danger.

    Read the article

  • How do I set up live audio streams to a DLNA compliant device?

    - by Takkat
    Is there a way to stream the live output of the soundcard from our 12.04.1 LTS amd64 desktop to a DLNA-compliant external device in our network? Selecting media content in shared directories using Rygel, miniDLNA, and uShare is always fine - but so far we completely failed to get a live audio stream to a client via DLNA. Pulseaudio claims to have a DLNA/UPnP media server that together with Rygel is supposed to do just this. But we were unable to get it running. We followed the steps outlined in live.gnome.org, this answer here, and also in another similar guide. As soon as we select the local audio device, or our GST-Launch stream in the DLNA client, Rygel displays the following message and the client states it reached the end of the playlist:
    (rygel:7380): Rygel-WARNING **: rygel-http-request.vala:97: Invalid seek request
    This is how we configured GST-Launch in rygel.conf:
    [GstLaunch]
    enabled=true
    launch-items=mypulseaudiosink
    mypulseaudiosink-title=Audio on @HOSTNAME@
    mypulseaudiosink-mime=audio/x-wav
    mypulseaudiosink-launch=pulsesrc device=<device> ! wavpackenc
    For <device> we tried with the default sink name, this name appended with .monitor, and in addition with upnp-sink and upnp.monitor that was created when we selected DLNA media server from paprefs. We also tried to encode using lamemp3enc with no luck. These are our pulseaudio modules: http://paste.ubuntu.com/1202913/ These are our sinks: http://paste.ubuntu.com/1202916/ Did we miss any other additional configuration needed to get this running? Are there any other alternatives for sending the audio of our soundcard as live stream to a DLNA client?
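
    One way to narrow down the <device> value is to list the sources PulseAudio actually exposes; the .monitor source of the output sink is normally what a pulsesrc-based live stream has to capture from. The following is only a minimal sketch, not taken from the original question, assuming pactl from pulseaudio-utils is installed:

    import subprocess

    # Minimal sketch, assuming `pactl` (pulseaudio-utils) is installed.
    # `pactl list short sources` prints tab-separated fields; the source
    # names ending in ".monitor" are the loopbacks of the sinks and are
    # the usual candidates for the <device> placeholder in rygel.conf.
    out = subprocess.check_output(["pactl", "list", "short", "sources"],
                                  universal_newlines=True)
    for line in out.splitlines():
        fields = line.split("\t")
        if len(fields) > 1 and fields[1].endswith(".monitor"):
            print("monitor source:", fields[1])

    Whichever name this prints is a reasonable first value to substitute for <device> in the pulsesrc line above.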

    Read the article

  • Problem running Qreator on Xubuntu-14.04

    - by Seyed Mohammad
    I installed Qreator using apt-get on Xubuntu-14.04:
    $ sudo apt-get install qreator
    But the application fails to start! When I try to run it via Terminal, the following error messages are printed and the program aborts:
    $ qreator
    ** (qreator:3859): WARNING **: Couldn't connect to accessibility bus: Failed to connect to socket /tmp/dbus-Gh2FPHrMr2: Connection refused
    No handlers could be found for logger "qreator_lib"
    Traceback (most recent call last):
      File "/usr/bin/qreator", line 47, in <module>
        qreator.main()
      File "/usr/lib/python2.7/dist-packages/qreator/__init__.py", line 63, in main
        window = QreatorWindow.QreatorWindow()
      File "/usr/lib/python2.7/dist-packages/qreator_lib/Window.py", line 48, in __new__
        new_object.finish_initializing(builder)
      File "/usr/lib/python2.7/dist-packages/qreator/QreatorWindow.py", line 79, in finish_initializing
        self.init_qr_types()
      File "/usr/lib/python2.7/dist-packages/qreator/QreatorWindow.py", line 135, in init_qr_types
        self.qr_types = [d(self.update_qr_code) for d in QRCodeType.dataformats]
      File "/usr/lib/python2.7/dist-packages/qreator/qrcodes/QRCodeType.py", line 71, in __init__
        self.create_widget()  # pylint: disable=E1101
      File "/usr/lib/python2.7/dist-packages/qreator/qrcodes/QRCodeLocation.py", line 29, in create_widget
        self.widget = QRCodeLocationGtk(self.qr_code_update_func)
      File "/usr/lib/python2.7/dist-packages/qreator/qrcodes/QRCodeLocationGtk.py", line 49, in __init__
        latitude, longitude = get_current_location()
      File "/usr/lib/python2.7/dist-packages/qreator/qrcodes/QRCodeLocationGtk.py", line 109, in get_current_location
        '/org/freedesktop/Geoclue/Providers/Hostip')
      File "/usr/lib/python2.7/dist-packages/dbus/bus.py", line 241, in get_object
        follow_name_owner_changes=follow_name_owner_changes)
      File "/usr/lib/python2.7/dist-packages/dbus/proxies.py", line 248, in __init__
        self._named_service = conn.activate_name_owner(bus_name)
      File "/usr/lib/python2.7/dist-packages/dbus/bus.py", line 180, in activate_name_owner
        self.start_service_by_name(bus_name)
      File "/usr/lib/python2.7/dist-packages/dbus/bus.py", line 278, in start_service_by_name
        'su', (bus_name, flags)))
      File "/usr/lib/python2.7/dist-packages/dbus/connection.py", line 651, in call_blocking
        message, timeout)
    dbus.exceptions.DBusException: org.freedesktop.DBus.Error.ServiceUnknown: The name org.freedesktop.Geoclue.Providers.Hostip was not provided by any .service files
    How can I fix this?
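
    The last frame of the traceback shows the actual failure: Qreator asks D-Bus for the org.freedesktop.Geoclue.Providers.Hostip service and no installed .service file provides it. The snippet below is a minimal sketch, not from the question, that repeats just that lookup so you can verify whether installing a Geoclue Hostip provider (or otherwise satisfying the dependency) makes the name resolvable; the bus name and object path are copied from the traceback, and the session bus is an assumption based on how Qreator appears to call it:

    import dbus

    # Minimal sketch: repeat the D-Bus lookup that fails in the traceback.
    # The bus name and object path are copied from the error above; the
    # session bus is an assumption based on how Qreator appears to use it.
    bus = dbus.SessionBus()
    try:
        bus.get_object('org.freedesktop.Geoclue.Providers.Hostip',
                       '/org/freedesktop/Geoclue/Providers/Hostip')
        print("Geoclue Hostip provider is available")
    except dbus.exceptions.DBusException as exc:
        print("Provider lookup failed:", exc.get_dbus_name())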

    Read the article

  • emca fails with "Database instance is unavailable" though available

    - by Giri Mandalika
    The following example shows the symptoms of failure and the exact error message.
    $ emca -repos create
    ...
    Password for SYSMAN user:
    Do you wish to continue? [yes(Y)/no(N)]: Y
    Nov 19, 2012 10:33:42 AM oracle.sysman.emcp.DatabaseChecks \
        checkDbAvailabilityImpl
    WARNING: ORA-01034: ORACLE not available
    Nov 19, 2012 10:33:42 AM oracle.sysman.emcp.DatabaseChecks \
        throwDBUnavailableException
    SEVERE: Database instance is unavailable. Fix the ORA error thrown and run EM Configuration Assistant again. Some of the possible reasons may be : 1) Database may not be up. 2) Database is started setting environment variable ORACLE_HOME with trailing '/'. Reset ORACLE_HOME and bounce the database. For eg. Database is started setting environment variable ORACLE_HOME=/scratch/db/ . Reset ORACLE_HOME=/scratch/db and bounce the database.
    Fix: Ensure that ORACLE_HOME points to the right location in the $ORACLE_HOME/bin/emca file. If the Oracle home was copied over from another location rather than installed from scratch, the ORACLE_HOME value recorded in several Enterprise Manager (EM) specific scripts and files is likely wrong. This usually happens when the directory structure on the target machine is not identical to the structure on the original/source machine where the Oracle RDBMS was properly installed using the installer, including the top-level directory location.
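
    As a hypothetical sanity check (not part of the original post), the comparison the fix describes can be scripted: read the ORACLE_HOME value recorded in $ORACLE_HOME/bin/emca and compare it with the one exported in the environment. The ORACLE_HOME= pattern below is an assumption about how the wrapper stores the value; adjust it to the actual file contents:

    import os
    import re

    # Hypothetical sketch: the post says a copied installation can leave a
    # stale ORACLE_HOME inside $ORACLE_HOME/bin/emca. The regex is an
    # assumption about how that value appears in the wrapper script.
    env_home = os.environ.get("ORACLE_HOME", "").rstrip("/")
    emca_file = os.path.join(env_home, "bin", "emca")

    with open(emca_file) as fh:
        for line in fh:
            match = re.match(r"\s*(?:export\s+)?ORACLE_HOME\s*=\s*([^\s;]+)", line)
            if match:
                emca_home = match.group(1).strip('"').rstrip("/")
                print("emca script uses ORACLE_HOME =", emca_home)
                if emca_home != env_home:
                    print("Mismatch with environment ORACLE_HOME =", env_home)
                break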

    Read the article
