Search Results

Search found 972 results on 39 pages for 'grant crofton'.

Page 12/39

  • Likelihood of a hard drive PCB replacement working?

    - by Grant Limberg
    I have a 1.5TB Seagate 7200.11 that died. The platter still spins up initially when the drive is attached to a machine, but then the drive clicks and spins down, and the cycle repeats. I've found a few sites that sell replacement PCBs; however, I don't know whether the PCB is the issue or something else. Given the symptoms above, is it at all likely that a PCB replacement would help? If not, I won't waste my money on one. Note: I put the drive in an external eSATA controller, tried hooking it up to a Linux box here at work, and got some error messages in the logs. I can post them if anyone thinks they will help in determining whether a PCB swap would fix the problem I'm running into.
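
    A hedged note for anyone in the same spot: before buying a PCB it is worth capturing whatever the drive will still report, and ruling out the well-known 7200.11 firmware (BSY state) problem, which shows similar symptoms and is fixed without new hardware. The commands below are only a sketch; the device name is an assumption and will differ on your machine.

        # Identify the device the kernel assigned and pull its SMART data (smartmontools)
        dmesg | grep -i 'sd[a-z]'        # detection attempts and link errors from the drive
        smartctl -H /dev/sdb             # overall SMART health verdict, if the drive responds
        smartctl -a /dev/sdb             # full attribute list and the drive's own error log

    Broadly, a drive that clicks and spins down is more often a head or firmware problem than a board problem, so treat a PCB swap as a long shot rather than a likely fix.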

    Read the article

  • useful JMX metrics for monitoring WebSphere Application Server (and apps inside it)?

    - by Justin Grant
    When managing custom Java applications hosted inside WebSphere Application Server, what JMX metrics do you find most useful for monitoring performance, monitoring availability, and troubleshooting problems? And how do you prefer to slice and visualize those metrics (e.g. chart by top 10 hosts, graph by app, etc.). The more details I can get, the better, as I need to specify a standard set of reports which IT can offer to owners of applications hosted by IT, which those owners can customize but many won't bother. So I'll need to come up with a bunch of generally-applicable reports which most groups can use out-of-the-box. Obviously there's no one perfect answer to this question, so I'll accept the answer with the most comprehensive details and I'll be generous about upvoting any other useful answer. My question is WebSphere-specific, but I realize that most JMX metrics are equally applicable across any container, so feel free to give an answer for JBoss, Tomcat, WebLogic, etc.
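
    As a starting point for the "which metrics" part: the standard JVM platform MXBeans (heap, GC, threads, loaded classes) exist in every container and are usually the first things worth charting; WebSphere's own PMI counters (thread pools, connection pools, servlet response times) sit on top of that but are reached through WebSphere's connectors rather than the plain RMI URL used below. This sketch is illustrative only: the host, port, and service URL are assumptions.

        import java.lang.management.ManagementFactory;
        import java.lang.management.MemoryMXBean;
        import java.lang.management.ThreadMXBean;
        import javax.management.MBeanServerConnection;
        import javax.management.remote.JMXConnector;
        import javax.management.remote.JMXConnectorFactory;
        import javax.management.remote.JMXServiceURL;

        public class JvmMetricsProbe {
            public static void main(String[] args) throws Exception {
                // Placeholder RMI connector address -- adjust to however your JVM exposes JMX
                JMXServiceURL url = new JMXServiceURL(
                        "service:jmx:rmi:///jndi/rmi://appserver.example.com:9999/jmxrmi");
                JMXConnector connector = JMXConnectorFactory.connect(url);
                try {
                    MBeanServerConnection mbsc = connector.getMBeanServerConnection();

                    // Standard platform MXBeans: present in every JVM, regardless of container
                    MemoryMXBean memory = ManagementFactory.newPlatformMXBeanProxy(
                            mbsc, ManagementFactory.MEMORY_MXBEAN_NAME, MemoryMXBean.class);
                    ThreadMXBean threads = ManagementFactory.newPlatformMXBeanProxy(
                            mbsc, ManagementFactory.THREAD_MXBEAN_NAME, ThreadMXBean.class);

                    System.out.printf("heap used=%,d bytes (max %,d), live threads=%d%n",
                            memory.getHeapMemoryUsage().getUsed(),
                            memory.getHeapMemoryUsage().getMax(),
                            threads.getThreadCount());
                } finally {
                    connector.close();
                }
            }
        }

    Charting those per application server, alongside container-level pool usage, tends to answer the "is it the app or the box" question that most report consumers actually have.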

    Read the article

  • My server's been hacked EMERGENCY

    - by Grant unwin
    I'm on my way into work at 9.30 p.m. on a Sunday because our server has been compromised somehow and was resulting in a DoS attack on our provider. The server's access to the Internet has been shut down, which means over 500-600 of our clients' sites are now down. This could be an FTP hack or some weakness in code somewhere; I'm not sure till I get there. How can I track this down quickly? We're in for a whole lot of litigation if I don't get the server back up ASAP. Any help is appreciated. UPDATE Thanks to everyone for your help. Luckily I WASN'T the only person responsible for this server, just the nearest. We managed to resolve this problem, although the fix may not apply to many others in a different situation. I'll detail what we did. We unplugged the server from the net. It was performing (attempting to perform) a denial-of-service attack on another server in Indonesia, and the guilty party was also based there. We first tried to identify where on the server this was coming from; considering we have over 500 sites on the server, we expected to be searching for some time. However, with SSH access still available, we ran a command to find all files edited or created around the time the attacks started. Luckily, the offending file was created over the winter holidays, which meant that not many other files were created on the server at that time. We were then able to identify the offending file, which was inside the uploaded-images folder within a ZenCart website. After a short cigarette break we concluded that, due to the file's location, it must have been uploaded via a file upload facility that was inadequately secured. After some googling, we found that there was a security vulnerability in the ZenCart admin panel that allowed files to be uploaded as the picture for a record company (a section that was never really even used). Posting this form simply uploaded any file: it did not check the extension of the file, and didn't even check whether the user was logged in. This meant that any file could be uploaded, including a PHP file for the attack. We secured the vulnerability with ZenCart on the infected site and removed the offending files. The job was done, and I was home by 2 a.m. The moral: always apply security patches for ZenCart, or any other CMS for that matter, because when security updates are released the whole world is made aware of the vulnerability; always do backups, and back up your backups; and employ or arrange for someone who will be there in times like these, to prevent anyone from having to rely on a panicky post on Server Fault. Happy servering!
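
    For anyone following the same approach, the "find everything created or changed in the attack window" step can be done with a single GNU find invocation along these lines; the web root, dates, and extension below are placeholders, not the values from this incident.

        # PHP files under the web root created or modified inside the suspected window,
        # newest first; adjust the path and dates to your own layout and timeline
        find /var/www -type f -name '*.php' \
             -newermt '2011-12-20' ! -newermt '2012-01-05' \
             -printf '%TY-%Tm-%Td %TH:%TM  %p\n' | sort -r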

    Read the article

  • Wireless WAN (WWAN) on a Lenovo T500 - built-in or do I need a WWAN modem?

    - by Justin Grant
    I use a Lenovo ThinkPad 2055-3AU at work and I want to get a Wireless WAN data plan with a local mobile telecom provider. I've read conflicting reports online about whether my system is "WWAN-ready" or not. How can I find out which wireless WAN providers (if any) my system can support without buying a separate modem? I looked through Device Manager for anything resembling a WWAN device and didn't see anything, but I also wiped the machine when I bought it and clean-installed Windows 7 with only out-of-the-box Windows and Windows Update drivers, so it's possible that the device is there but the drivers aren't installed. FWIW, the support page at http://www-307.ibm.com/pc/support/site.wss/quickPath.do?quickPathEntry=20553AU does not specifically list anything about Wireless WAN.
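
    Two quick checks from an elevated command prompt can settle the "is the hardware even there" question on Windows 7. Both tools are standard, though the search terms on the second line are just heuristics:

        rem Lists any Mobile Broadband (WWAN) device Windows 7 has recognised
        netsh mbn show interfaces

        rem Scans Plug and Play device names for anything that looks like a WWAN module
        wmic path win32_pnpentity get name | findstr /i "WWAN broadband Gobi HSPA"

    If both come back empty even after installing the Lenovo chipset and System Update drivers, the machine most likely shipped without the optional WWAN card, and a USB or ExpressCard modem from the carrier is the fallback.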

    Read the article

  • Computer will not boot - disk read error - cannot boot from HD or DVD

    - by Grant Palin
    This is a three-year-old system: an HP a1640n. There have been no issues with it in the past. I added a video card two years ago and more memory one year ago, both without issues, and there haven't been any recent hardware changes. I did install Windows 7 in October, but there were no issues with that either. I used the computer fine two nights ago and turned it off. Yesterday I tried to turn it on and got the error: "A Disk Read Error Occurred. Press CTRL ALT DEL to restart". So I restart, see the initial start screen (HP), and enter the BIOS. The hard drive and DVD drive appear to be listed, but their names are gibberish text. I tried putting a Windows disc in the DVD drive and continuing with the boot, but the disc did not get recognized, even though the BIOS was set to check optical media before the hard drive. Back to the error screen. If the computer would boot from a CD or DVD, I would just figure the hard drive needed replacing, but both being problematic worries me. Is this a matter of replacing both the hard drive and the DVD drive, or might it be an indication of a bigger problem? Thanks for any advice.

    Read the article

  • Create an image of a CentOS system

    - by Grant unwin
    I provide a server (and site) to a client via Rackspace Cloud Hosting, and my client now wants to host the entire thing within his own account. Since it's not possible to just transfer ownership, I need to somehow create an image of the machine via SSH which I can then use on a new server. Is this possible, and does anyone know of a way of doing it? Note that I am talking about virtualised machines here, but I only have access to the virtualised partition and not the system as a whole.
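
    Where a provider-level image isn't available, one common low-tech approach is to stream a filesystem archive out over SSH and unpack it onto the new server before fixing up the bootloader and network settings. This is only a sketch with placeholder host names and an illustrative exclude list; it assumes root SSH access and that services such as databases are stopped or dumped separately for consistency.

        # Stream a compressed archive of the old server's filesystem over SSH
        ssh root@old-server "tar -czpf - \
            --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/tmp \
            --exclude=/run --exclude=/mnt --exclude=/media --exclude=/lost+found \
            /" > centos-image.tar.gz

        # On the new server: unpack over a minimal install of the same CentOS release,
        # recreate the excluded directories, then review /etc/fstab, the network
        # configuration, and the bootloader before rebooting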

    Read the article

  • Tool to track bandwidth by domain name?

    - by Grant Limberg
    I'm running an Ubuntu 10.04 server that hosts several domain names. All domains point to the same IP address and use the same network interface. I'm really only concerned with the main domain names, such as my-domain1.com and my-domain2.com; subdomains such as www.my-domain1.com should be counted in the totals for my-domain1.com. Is there a tool out there that can be configured to track bandwidth usage on a per-domain-name basis? Edit: I'm not looking for only web usage; I'm looking for all traffic.

    Read the article

  • My server's been hacked EMERGENCY

    - by Grant unwin
    I'm on my way into work at 9.30 p.m. on a Sunday because our server has been compromised somehow and was resulting in a DoS attack on our provider. The server's access to the Internet has been shut down, which means over 500-600 of our clients' sites are now down. This could be an FTP hack or some weakness in code somewhere; I'm not sure till I get there. Does anyone have any tips on how I can track this down quickly? We're in for a whole lot of litigation if I don't get the server back up ASAP. Any help appreciated.

    Read the article

  • mysqld stopped working..can't restart...need help?

    - by grant tailor
    I was just checking some things and noticed that mysqld was not running in the Parallels Power Panel control panel, yet my websites on the server, which all use MySQL databases, were still working fine, which was really strange. So I tried to restart mysqld but got errors and couldn't restart it, and now all my websites are offline with an error connecting to the database. I logged in as root and tried /etc/init.d/mysqld start and got this error: ERROR! Manager of pid-file quit without updating file. What do I do next? Please help!
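
    Before anything else it's worth finding out why mysqld is refusing to start; the usual suspects are a full disk, a stale pid file, or wrong ownership of the data directory. The commands below are only a sketch, and the log and data paths are typical defaults that may differ on your box (check /etc/my.cnf for the real ones):

        grep -iE 'log-error|datadir|pid-file' /etc/my.cnf    # where MySQL actually logs and stores data
        tail -n 100 /var/log/mysqld.log                      # the error log usually says why startup aborted
        df -h                                                # a full partition is a very common cause
        ls -l /var/lib/mysql/ /var/run/mysqld/ 2>/dev/null   # stale pid files and ownership problems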

    Read the article

  • Autossh startup on Ubuntu 10.04 - fails after powering off

    - by grant
    I'm using upstart to keep a reverse SSH tunnel alive using autossh, similar to Using Upstart to Manage AutoSSH Reverse Tunnel. This works fine, except that after a manual power-down I can no longer connect to the machine through the "central server" using the tunnel. I receive "ssh_exchange_identification: Connection closed by remote host". The autossh process is running on the client, and I can connect again after restarting networking. I'm trying to figure out why this fails consistently after a manual shutdown. Is it possible that I need to do some cleanup on startup that would allow the tunnel to work in this situation, or are there some other debugging/troubleshooting steps I can take to determine the problem? Machine A is the client machine, using autossh. This machine sits behind a firewall and uses the following command in upstart to create an SSH tunnel: /usr/bin/autossh -fN -i /keyfile -o StrictHostKeyChecking=no -R 20098:localhost:22 user@centralserver Machine B, which we'll call the "central server", sits in the cloud and is the host; it is "centralserver" in the command above. When Machine A is hard powered off and back on, I cannot connect to it by SSHing from my machine (C) to Machine B in the cloud and then using the following command to get to Machine A: ssh -p 20098 user@localhost Again, after a reboot of the client (A) this works fine; it is only after a hard power-down that the problem occurs. There are autossh processes running on the client machine (A) after powering down and back up, but they just don't seem to be doing their job.
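
    One pattern that tends to survive power loss is to let upstart itself do the supervising (respawn), run autossh in the foreground without its own monitor port, and rely on SSH keepalives so a dead connection is noticed and rebuilt. The job below is a sketch only; the file name, interface, and timings are assumptions to adapt, not a drop-in fix for this particular setup.

        # /etc/init/autossh-tunnel.conf  (illustrative)
        description "reverse SSH tunnel to centralserver"
        start on (net-device-up IFACE=eth0)
        stop on runlevel [016]
        respawn

        # -M 0 disables autossh's monitor port; ServerAliveInterval/CountMax make the ssh
        # client notice a dead link so it exits and gets restarted. No -f: upstart expects
        # the process to stay in the foreground.
        exec /usr/bin/autossh -M 0 -N -i /keyfile \
            -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
            -o StrictHostKeyChecking=no \
            -R 20098:localhost:22 user@centralserver

    On the server side it is also worth checking, after a hard power-off, whether an old sshd is still holding the forwarded port on Machine B; a stale listener there can produce exactly this kind of refusal until it times out.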

    Read the article

  • Automatically make user local administrator on their computer through GPO?

    - by Grant
    In our AD 2003 domain each user gets local admin permissions on their own computer; everyone else can log in with their domain account as a normal user. Right now this means going to the desktop and manually adding the user as a local administrator. Is there any way to automate this process through logon scripts or GPOs? I have found ways to use a GPO to make everyone who logs in to a computer a local admin, but I really only want to give it to the primary user (or in some cases users) of the computer. I've also seen methods that require adding a group for each computer, but I really don't want to clutter AD like that. I do have a list mapping each user to each computer name. If it matters, the desktops are a mix of XP and Windows 7.
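
    Where Group Policy Preferences (Local Users and Groups) isn't an option, one workable compromise is a GPO computer startup script that looks the computer up in that user-to-computer list and adds only the mapped account to the local Administrators group. The batch sketch below is illustrative only: the share path, file name, and one-pair-per-line "computername,username" format are assumptions.

        @echo off
        rem Startup script (runs as SYSTEM): grant the computer's primary user local admin
        rem Mapping file format assumed: COMPUTERNAME,username  (one pair per line)
        for /f "tokens=1,2 delims=," %%A in (\\dc01\netlogon\primary-users.csv) do (
            if /i "%%A"=="%COMPUTERNAME%" net localgroup Administrators "MYDOMAIN\%%B" /add
        )

    net localgroup simply reports an error if the account is already a member, which is harmless here, so the script can stay linked to the desktops' OU indefinitely.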

    Read the article

  • Word 2007 cell formatting

    - by Michelle Grant
    I have created a form template in Word 2007 which includes various fields. Some of the fields are meant to show a time (e.g. 15:47:32). I've set the text field properties to "Date" with the format HH:mm:ss. The trouble arises when the form is then completed: if I input 15.47.32 it correctly converts to 15:47:32, but if I input 12.12.31 it reverts to 00:00:00. This also happens if I input 12:12:32. Please help, as I've looked at this all afternoon and it's driving me insane.

    Read the article

  • How come Core i7 (desktop) dominates Xeon (server)?

    - by grant tailor
    I have been using these performance benchmark results to select which CPUs to use on my web server, and to my surprise it looks like Core i7 CPUs dominate the list, pushing the Xeon CPUs into the bush. Why is this? Why does Intel make the Core i7 perform better than the Xeon? Are desktop CPUs supposed to perform better than server-grade Xeon CPUs? I really don't get this and would like to know what you think, or why this is so. Also, I am thinking about getting a new web server and deciding between the i7-2600 and the Xeon E3-1245. The i7-2600 is higher up in the performance benchmark, but I gather the Xeon E3-1245 is server grade. What do you guys think? Should I go for the i7-2600, or is the Xeon E3-1245 a server-grade CPU for a reason?

    Read the article

  • How come i7 (desktop) dominates Xeon (server)?

    - by grant tailor
    I have been using these performance benchmark results (http://www.cpubenchmark.net/high_end_cpus.html) to select which CPUs to use on my web server, and to my surprise it looks like i7 CPUs dominate the list, pushing the Xeon CPUs into the bush. Why is this? Why does Intel make the i7 perform better than the Xeon? Are desktop CPUs supposed to perform better than server-grade Xeon CPUs? I really don't get this and would like to know what you think, or why this is so. Also, I am thinking about getting a new web server and deciding between the i7-2600 and the Xeon E3-1245. The i7-2600 is higher up in the performance benchmark, but I gather the Xeon E3-1245 is server grade. So what do you guys think? Should I go for the i7-2600, or is the Xeon E3-1245 a server-grade CPU for a reason?

    Read the article

  • Oracle Forms Migration to ADF - Webinar from ORACLE Partner PITSS

    - by Thomas Leopold
    Tuesday, February 22, 2011, 5:00 PM - 6:00 PM CET. Free webinar: Re-Engineering Legacy Oracle Forms - Migration from Forms to ADF, A Case Study. Join Oracle's Grant Ronald and PITSS to see a software architecture comparison of Oracle Forms and ADF and a live step-by-step presentation on how to achieve a successful migration. Learn about various migration options, challenges and best practices to protect your current investment in Oracle Forms. "PL/SQL is without match for what it does: manipulating data in the database. If you blindly migrate all your PL/SQL to Java you will, in all probability, end up with less maintainable and less efficient code. Instead you should consider which code is best left as PL/SQL..." - Grant Ronald, "Migrating Oracle Forms to Fusion: Myth or Magic Bullet?" Re-engineering existing business logic is mandatory for your legacy Forms application to take advantage of new software architectures like ADF. The PITSS.CON solution combines a deep understanding of Oracle Forms and Reports application architecture with powerful re-engineering capabilities that allow the developer community to protect the investment in existing Forms applications and to concentrate on fine-tuning and customization of the modernized functionality rather than manually recreating every module and piece of business logic from the bottom up. Registration: https://www2.gotomeeting.com/register/971702250 PITSS GmbH, Königsdorferstrasse 25, D-82515 Wolfratshausen. Do not forget to check out these free webinars in March! Thursday, March 3, 2011: Upgrade and Modernize Your Application to Forms 11g (Registration/Information). Tuesday, March 15, 2011: Shaping the Future for your Oracle Forms Application: Forms 11g, ADF, APEX (Registration/Information). Tuesday, March 29, 2011: Oracle Forms Modernization to APEX (Registration/Information). Registration is limited, so sign up today! Presented by: Grant Ronald, Senior Group Product Manager, Oracle; Magdalena Serban, Product Manager, PITSS. Contact us: PITSS in Americas, +1 248.740.0935, [email protected], www.pitssamerica.com; PITSS in Europe, +49 (0) 717287 5200, [email protected], www.pitss.com. White Paper: From Oracle Forms to Oracle ADF and JEE. © Copyright 2010 PITSS GmbH, Wolfratshausen, Stuttgart, München; Managing Directors: Dipl.-Ing. Andreas Gaede, Michael Kilimann, Dipl.-Ing. Dirk Fleischmann. Commercial Register: HRB 125471 at District Court Munich. All rights reserved. Any duplication or further treatment in any medium, in parts or as a whole, requires a written agreement. If you do not want to receive invitations for events, meetings and seminars from us, then please click here.

    Read the article

  • Application Performance Episode 2: Announcing the Judges!

    - by Michaela Murray
    The story so far… We’re writing a new book for ASP.NET developers, and we want you to be a part of it! If you work with ASP.NET applications, and have top tips, hard-won lessons, or sage advice for avoiding, finding, and fixing performance problems, we want to hear from you! And if your app uses SQL Server, even better – interaction with the database is critical to application performance, so we’re looking for database top tips too. There’s a Microsoft Surface apiece for the person who comes up with the best tip for SQL Server and the best tip for .NET. Of course, if your suggestion is selected for the book, you’ll get full credit, by name, Twitter handle, GitHub repository, or whatever you like. To get involved, just email your nuggets of performance wisdom to [email protected] – there are examples of what we’re looking for and full competition details at Application Performance: The Best of the Web. Enter the judges… As mentioned in my last blogpost, we have a mystery panel of celebrity judges lined up to select the prize-winning performance pointers. We’re now ready to reveal their secret identities! Judging your ASP.NET  tips will be: Jean-Phillippe Gouigoux, MCTS/MCPD Enterprise Architect and MVP Connected System Developer. He’s a board member at French software company MGDIS, and teaches algorithms, security, software tests, and ALM at the Université de Bretagne Sud. Jean-Philippe also lectures at IT conferences and writes articles for programming magazines. His book Practical Performance Profiling is published by Simple-Talk. Nik Molnar,  a New Yorker, ASP Insider, and co-founder of Glimpse, an open source ASP.NET diagnostics and debugging tool. Originally from Florida, Nik specializes in web development, building scalable, client-centric solutions. In his spare time, Nik can be found cooking up a storm in the kitchen, hanging with his wife, speaking at conferences, and working on other open source projects. Mitchel Sellers, Microsoft C# and DotNetNuke MVP. Mitchel is an experienced software architect, business leader, public speaker, and educator. He works with companies across the globe, as CEO of IowaComputerGurus Inc. Mitchel writes technical articles for online and print publications and is the author of Professional DotNetNuke Module Programming. He frequently answers questions on StackOverflow and MSDN and is an active participant in the .NET and DotNetNuke communities. Clive Tong, Software Engineer at Red Gate. In previous roles, Clive spent a lot of time working with Common LISP and enthusing about functional languages, and he’s worked with managed languages since before his first real job (which was a long time ago). Long convinced of the productivity benefits of managed languages, Clive is very interested in getting good runtime performance to keep managed languages practical for real-world development. And our trio of SQL Server specialists, ready to select your top suggestion, are (drumroll): Rodney Landrum, a SQL Server MVP who writes regularly about Integration Services, Analysis Services, and Reporting Services. He’s authored SQL Server Tacklebox, three Reporting Services books, and contributes regularly to SQLServerCentral, SQL Server Magazine, and Simple–Talk. His day job involves overseeing a large SQL Server infrastructure in Orlando. Grant Fritchey, Product Evangelist at Red Gate and SQL Server MVP. In an IT career spanning more than 20 years, Grant has written VB, VB.NET, C#, and Java. He’s been working with SQL Server since version 6.0. 
Grant volunteers with the Editorial Committee at PASS and has written books for Apress and Simple-Talk. Jonathan Allen, leader and founder of the PASS SQL South West user group. He’s been working with SQL Server since 1999 and enjoys performance tuning, development, and using SQL Server for business solutions. He’s spoken at SQLBits and SQL in the City, as well as local user groups across the UK. He’s also a moderator at ask.sqlservercentral.com.

    Read the article

  • AdventureWorks2012 now available for all on SQL Azure

    - by jamiet
    Three days ago I tweeted this: Idea. MSFT could host read-only copies of all the [AdventureWorks] DBs up on #sqlazure for the SQL community to use. RT if agree #sqlfamily — Jamie Thomson (@jamiet) March 24, 2012 Evidently I wasn't the only one that thought this was a good idea because, as you can see from the screenshot, that tweet has, so far, been retweeted more than fifty times. Clearly there is a desire to see the AdventureWorks databases made available for the community to noodle around on so I am pleased to announce that as of today you can do just that - [AdventureWorks2012] now resides on SQL Azure and is available for anyone, absolutely anyone, to connect to and use* for their own means. *By use I mean "issue some SELECT statements". You don't have permission to issue INSERTs, UPDATEs, DELETEs or EXECUTEs I'm afraid - if you want to do that then you can get the bits and host it yourself. This database is free for you to use but SQL Azure is of course not free so before I give you the credentials please lend me your eyes for a short while longer. AdventureWorks on Azure is being provided for the SQL Server community to use and so I am hoping that that same community will rally around to support this effort by making a voluntary donation to support the upkeep which, going on current pricing, is going to be $119.88 per year. If you would like to contribute to keep AdventureWorks on Azure up and running for that full year please donate via PayPal to [email protected]: Any amount, no matter how small, will help. If those 50+ people that retweeted me beforehand all contributed $2 then that would just about be enough to keep this up for a year. If the community contributes more than we need then there are a number of additional things that could be done: host additional databases (Northwind anyone??), host in more datacentres (this first one is in Western Europe), or make a charitable donation. That last one, a charitable donation, is something I would really like to do. The SQL Community have proved before that they can make a significant contribution to charitable organisations through purchasing the SQL Server MVP Deep Dives book and I harbour hopes that AdventureWorks on Azure can continue in that vein. So please, if you think AdventureWorks on Azure is something that is worth supporting please make a contribution. OK, with the prickly subject of begging for cash out of the way let me share the details that you need to connect to [AdventureWorks2012] on SQL Azure: Server: mhknbn2kdz.database.windows.net; Database: AdventureWorks2012; User: sqlfamily; Password: sqlf@m1ly. That user sqlfamily has all the permissions required to enable you to query away to your heart's content. Here is the code that I used to set it up: CREATE USER sqlfamily FOR LOGIN sqlfamily; CREATE ROLE sqlfamilyrole; EXEC sp_addrolemember 'sqlfamilyrole','sqlfamily'; GRANT VIEW DEFINITION ON Database::AdventureWorks2012 TO sqlfamilyrole; GRANT VIEW DATABASE STATE ON Database::AdventureWorks2012 TO sqlfamilyrole; GRANT SHOWPLAN TO sqlfamilyrole; EXEC sp_addrolemember 'db_datareader','sqlfamilyrole'; You can connect to the database using SQL Server Management Studio (instructions to do that are provided at Walkthrough: Connecting to SQL Azure via the SSMS) or you can use the web interface at https://mhknbn2kdz.database.windows.net: Lastly, just for a bit of fun I created a table up there called [dbo].[SqlFamily] into which you can leave a small calling card. 
Simply execute the following SQL statement (changing the values of course): INSERT [dbo].[SqlFamily]([Name],[Message],[TwitterHandle],[BlogURI])VALUES ('Your name here','Some Message','your twitter handle (optional)','Blog URI (optional)'); [Id] is an IDENTITY field and there is a default constraint on [DT] hence there is no need to supply a value for those. Note that you only have INSERT permissions, not UPDATE or DELETE so make sure you get it right first time! Any offensive or distasteful remarks will of course be deleted :) Thank you for reading this far and have fun using AdventureWorks on Azure. I hope it proves to be useful for some of you. @jamiet AdventureWorks on Azure - Provided by the SQL Server community, for the SQL Server community!
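
    If you just want to confirm the credentials work before doing anything serious, any read-only query against the sample schema will do; the one below is merely an illustration using two of the standard AdventureWorks2012 tables.

        -- Runs under the sqlfamily login, which is limited to db_datareader
        SELECT TOP (10)
               p.FirstName,
               p.LastName,
               ea.EmailAddress
        FROM   Person.Person AS p
        JOIN   Person.EmailAddress AS ea
               ON ea.BusinessEntityID = p.BusinessEntityID
        ORDER BY p.LastName, p.FirstName;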

    Read the article

  • Java: What are the various available security settings for applets

    - by bguiz
    I have an applet that throws this exception when trying to communicate with the server (running on localhost). This problem is limited to applets only - a POJO client is able to communicate with the exact same server without any problem. Exception in thread "AWT-EventQueue-1" java.security.AccessControlException: access denied (java.net.SocketPermission 127.0.0.1:9999 connect,resolve) at java.security.AccessControlContext.checkPermission(AccessControlContext.java:323) My applet.policy file's contents are: grant { permission java.security.AllPermission; }; My question is: where else do I need to modify my security settings to grant an applet more permissions? Thank you. EDIT: Further investigation has led me to find that this problem only occurs on some machines but not others, so it could be a machine-level (global) setting that is causing this, rather than an application-specific setting such as the one in the applet.policy file. EDIT: Another SO question: Socket connection to originating server of an unsigned Java applet. This seems to describe the exact same problem, and Tom Hawtin - tackline's answer provides the reason why (a security patch was released that disallows applets from connecting to localhost). Bearing this in mind, how do I grant the applet the security settings such that it can indeed run on my machine? Also, why does it run as-is on other machines but not mine?
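
    For reference, a policy entry narrower than AllPermission would look something like the sketch below. The codeBase URL is a placeholder, and the entry only has an effect if it lives in a policy file the plug-in actually reads (for example the user's .java.policy, or a file named in the Java runtime/deployment settings), which is exactly the kind of per-machine difference described in the question.

        grant codeBase "file:/path/to/your/applet/-" {
            // allow sockets back to the local test server only, instead of AllPermission
            permission java.net.SocketPermission "127.0.0.1:9999", "connect,resolve";
        };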

    Read the article

  • restart of web dev server every time I load ASP.NET MVC application

    - by kjm
    Hi, I continuously get this problem (stack trace below) when I start my ASP.NET MVC application; I have to restart the web dev server and then it goes away. It appears to happen when I make modifications to my jQuery and then try to restart the application. protected void Application_Start() { InitialiseIocContainer(); RegisterViewEngine(ViewEngines.Engines); RegisterRoutes(RouteTable.Routes); SetupLogging(); } It appears to get caught on Application_Start in Global.asax. I've done lots of searching on Google but no luck. IT'S DRIVING ME BONKERS!!!! Can anyone help, please? Server Error in '/' Application. -------------------------------------------------------------------------------- Loading this assembly would produce a different grant set from other instances. (Exception from HRESULT: 0x80131401) Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.IO.FileLoadException: Loading this assembly would produce a different grant set from other instances. (Exception from HRESULT: 0x80131401) Source Error: Line 30: RegisterRoutes(RouteTable.Routes); Line 31: SetupLogging(); Line 32: } Line 33: Line 34: private void SetupLogging() Source File: C:\UserData\SourceControl\LLNP4\Trunk\Web\Global.asax.cs Line: 32 Stack Trace: [FileLoadException: Loading this assembly would produce a different grant set from other instances. (Exception from HRESULT: 0x80131401)] LLNP4.MvcApplication.Application_Start() in C:\UserData\SourceControl\LLNP4\Trunk\Web\Global.asax.cs:32

    Read the article

  • MySQL Access denied error

    - by dancingbush
    I am trying to install MySQL on Mac OS X 10.8 and set up a user account. NOTE: I am an absolute beginner when it comes to using the command line in a Terminal window. I used these instructions to install: http://www.macminivault.com/mysql-mountain-lion/ I set my own password for all users here: GRANT ALL ON *.* TO 'root'@'localhost' IDENTIFIED BY 'mypass' WITH GRANT OPTION; quit Every time I try to execute mysql as the root user on the command line I get this: Ciarans-MacBook-Pro:~ callanmooneys$ mysql -u root ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: NO) I read around on the net and tried various things, including this to change the password: mysqladmin -u root -pyourcurrentmysqlrootpassword password yournewmysqlrootpassword, it returns -> -> USE mysql -> If I simply type 'mysql' and launch the MySQL monitor, then try to create a user account: mysql> USE mysql ERROR 1044 (42000): Access denied for user ''@'localhost' to database 'mysql' mysql> I also tried answers from the forum: "access is denied for user 'root'@localhost mysql error 1045" (it returned '[email protected] command not found') and "MySQL - ERROR 1045 - Access denied": Ciarans-MacBook-Pro:~ callanmooneys$ mysqld_safe --skip-grant-tables 131105 21:44:41 mysqld_safe Logging to '/usr/local/mysql/data/Ciarans-MacBook-Pro.local.err'. 131105 21:44:41 mysqld_safe Starting mysqld daemon with databases from /usr/local/mysql/data /usr/local/mysql/bin/mysqld_safe: line 129: /usr/local/mysql/data/Ciarans-MacBook-Pro.local.err: Permission denied /usr/local/mysql/bin/mysqld_safe: line 166: /usr/local/mysql/data/Ciarans-MacBook-Pro.local.err: Permission denied 131105 21:44:41 mysqld_safe mysqld from pid file /usr/local/mysql/data/Ciarans-MacBook-Pro.local.pid ended /usr/local/mysql/bin/mysqld_safe: line 129: /usr/local/mysql/data/Ciarans-MacBook-Pro.local.err: Permission denied Ciarans-MacBook-Pro:~ callanmooneys$ mysql -u root ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2) Ciarans-MacBook-Pro:~ callanmooneys$ Feedback appreciated.
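
    One common recovery path here, offered only as a sketch: stop the server, start it with the grant tables disabled, and reset the root password. The paths below are the defaults for the official MySQL package on OS X and may differ on your install; the sudo matters, since the "Permission denied" lines in the output above are what you get when mysqld_safe runs as an ordinary user. (On MySQL 5.7+ the mysql.user table no longer has a Password column, but the Mountain Lion-era packages are 5.5/5.6.)

        # stop any half-running instance, then start without the grant tables
        sudo /usr/local/mysql/support-files/mysql.server stop
        sudo mysqld_safe --skip-grant-tables &

        # in another Terminal window, connect (no password needed) and reset root
        mysql -u root
        #   then at the mysql> prompt:
        #   UPDATE mysql.user SET Password = PASSWORD('NewPassword') WHERE User = 'root';
        #   FLUSH PRIVILEGES;
        #   EXIT;

        # restart MySQL normally and log in with the new password
        sudo /usr/local/mysql/support-files/mysql.server restart
        mysql -u root -p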

    Read the article

  • JAAS and WebLogic 10.3: Granting specific codebase permissions to a JAR bundled within an EAR

    - by Jason
    Here's my scenario: I have a JAR within the APP-INF/lib of my EAR, to be deployed within WebLogic 10g Release 3, against which I wish to grant specific permissions. e.g., grant codebase "file:/c:/somedir/my.jar" { permission java.net.SocketPermission "*:-","accept,connect,listen,resolve"; permission java.net.SocketPermission "localhost:-","accept,connect,listen,resolve"; permission java.net.SocketPermission "127.0.0.1:-","accept,connect,listen,resolve"; permission java.net.SocketPermission "230.0.0.1:-","accept,connect,listen,resolve"; permission java.util.PropertyPermission "*", "read,write"; permission java.lang.RuntimePermission "*"; permission java.io.FilePermission "<<ALL FILES>>","read,write,delete"; permission javax.security.auth.AuthPermission "*"; permission java.security.SecurityPermission "*"; }; Questions: Where is the best place to define this grant - in the java.policy of the JRE, the WL server's weblogic.policy, or within an XML file packaged within the EAR? How do I define the codebase URL to the JAR? The examples I have seen have an explicit reference to the JAR on the file system; however, I am deploying the JAR packaged up within an EAR. Thanks!

    Read the article

  • Mixed Mode C++ DLL function call failure when app launched from network share. Called from unmanage

    - by Steve
    Mixed-mode DLL called from native C application fails to load: An unhandled exception of type 'System.IO.FileLoadException' occurred in Unknown Module. Additional information: Could not load file or assembly 'XXSharePoint, Version=0.0.0.0, Culture=neutral, PublicKeyToken=e0fbc95fd73fff47' or one of its dependencies. Failed to grant minimum permission requests. (Exception from HRESULT: 0x80131417) My environment is: a native C application calling a mixed-mode C++ DLL, which then loads a C# DLL. This works correctly when loaded from a local drive, but when launched from a network drive it fails with the above messages. The call to LoadLibrary succeeds, as does the GetProcAddress; the load error happens when I call the function. I have digitally signed the C application, and I've performed "strong name" signing on the two DLLs. The PublicKeyToken in the message above does match the named DLL. I have also issued the CASPOL commands on my client to grant FullTrust to that strong-name key token. When that failed to work, I tried the CASPOL command to grant FullTrust to the URL of the network drive (including the path to my application's directory); no change in results. I tried removing all dependencies, so that there was just the initial mixed-mode DLL... I replaced the bodies of all the functions with just a return of a "success" integer value. Results unchanged. Only when I changed it from Mixed Mode to Win32, and changed Configuration Properties / General / Common Language Runtime Support from "Common Language Runtime Support" to "No Common Language Runtime Support", did calling the DLL produce the expected result (it just returned the "success" integer return value).
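
    For comparison, the machine-level caspol commands usually quoted for trusting a UNC share under the .NET 2.0/3.5 CAS model look roughly like this; the share path and group name are placeholders, and on 64-bit machines the same command has to be issued from both the 32-bit and 64-bit Framework directories.

        rem add a machine-level code group granting FullTrust to everything on the share
        caspol -machine -addgroup 1. -url "file://\\fileserver\apps\*" FullTrust -name "AppShare"

        rem confirm the new group is present
        caspol -machine -listgroups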

    Read the article

  • Problem prompting user for extended permissions using showPermissionDialog in FB page tab

    - by snipe
    I have an FBML app that will use the tab as a promo tab before the full app goes live. The purpose of the promo tab is to allow users to opt in to email notifications (using the FB API sendNotifications call), so I need to prompt them to allow the app and grant extended permissions on that promo tab. The tab code is: <?php require_once 'config.php'; ?> <form id="form1"> <h1> <a href="#" clickrewriteform="form1" clickrewriteurl="http://www.mydomain.com/fanpageajax/result.php" clickrewriteid="allowapp">Step 1. Allow the Application</a> </h1> <div id="allowapp"></div> </form> <h1><a onclick="Facebook.showPermissionDialog('email');return false;"> Step 2. Grant extended permissions (intab)</a></h1> The result.php page just tags the API to ensure the allow prompt will show up. The problem is with Step 2. Once the user has allowed the app and they click on Step 2, nothing happens. If they click on it twice, THEN the extended permissions dialog box pops up, but it asks them to grant extended permissions TWICE. OR... if the user clicks on Step 1, allows the app, and then reloads the fan page tab, they only have to click on the Step 2 link once and the permissions dialog shows up. Anyone have any ideas? I have been beating myself in the head over this for hours.

    Read the article

  • Improving Partitioned Table Join Performance

    - by Paul White
    The query optimizer does not always choose an optimal strategy when joining partitioned tables. This post looks at an example, showing how a manual rewrite of the query can almost double performance, while reducing the memory grant to almost nothing. Test Data The two tables in this example use a common partitioning partition scheme. The partition function uses 41 equal-size partitions: CREATE PARTITION FUNCTION PFT (integer) AS RANGE RIGHT FOR VALUES ( 125000, 250000, 375000, 500000, 625000, 750000, 875000, 1000000, 1125000, 1250000, 1375000, 1500000, 1625000, 1750000, 1875000, 2000000, 2125000, 2250000, 2375000, 2500000, 2625000, 2750000, 2875000, 3000000, 3125000, 3250000, 3375000, 3500000, 3625000, 3750000, 3875000, 4000000, 4125000, 4250000, 4375000, 4500000, 4625000, 4750000, 4875000, 5000000 ); GO CREATE PARTITION SCHEME PST AS PARTITION PFT ALL TO ([PRIMARY]); There two tables are: CREATE TABLE dbo.T1 ( TID integer NOT NULL IDENTITY(0,1), Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T1 PRIMARY KEY CLUSTERED (TID) ON PST (TID) );   CREATE TABLE dbo.T2 ( TID integer NOT NULL, Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T2 PRIMARY KEY CLUSTERED (TID, Column1) ON PST (TID) ); The next script loads 5 million rows into T1 with a pseudo-random value between 1 and 5 for Column1. The table is partitioned on the IDENTITY column TID: INSERT dbo.T1 WITH (TABLOCKX) (Column1) SELECT (ABS(CHECKSUM(NEWID())) % 5) + 1 FROM dbo.Numbers AS N WHERE n BETWEEN 1 AND 5000000; In case you don’t already have an auxiliary table of numbers lying around, here’s a script to create one with 10 million rows: CREATE TABLE dbo.Numbers (n bigint PRIMARY KEY);   WITH L0 AS(SELECT 1 AS c UNION ALL SELECT 1), L1 AS(SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B), L2 AS(SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B), L3 AS(SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B), L4 AS(SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B), L5 AS(SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B), Nums AS(SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n FROM L5) INSERT dbo.Numbers WITH (TABLOCKX) SELECT TOP (10000000) n FROM Nums ORDER BY n OPTION (MAXDOP 1); Table T1 contains data like this: Next we load data into table T2. The relationship between the two tables is that table 2 contains ‘n’ rows for each row in table 1, where ‘n’ is determined by the value in Column1 of table T1. There is nothing particularly special about the data or distribution, by the way. INSERT dbo.T2 WITH (TABLOCKX) (TID, Column1) SELECT T.TID, N.n FROM dbo.T1 AS T JOIN dbo.Numbers AS N ON N.n >= 1 AND N.n <= T.Column1; Table T2 ends up containing about 15 million rows: The primary key for table T2 is a combination of TID and Column1. The data is partitioned according to the value in column TID alone. Partition Distribution The following query shows the number of rows in each partition of table T1: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T1 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are 40 partitions containing 125,000 rows (40 * 125k = 5m rows). The rightmost partition remains empty. The next query shows the distribution for table 2: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T2 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are roughly 375,000 rows in each partition (the rightmost partition is also empty): Ok, that’s the test data done. 
Test Query and Execution Plan The task is to count the rows resulting from joining tables 1 and 2 on the TID column: SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; The optimizer chooses a plan using parallel hash join, and partial aggregation: The Plan Explorer plan tree view shows accurate cardinality estimates and an even distribution of rows across threads (click to enlarge the image): With a warm data cache, the STATISTICS IO output shows that no physical I/O was needed, and all 41 partitions were touched: Running the query without actual execution plan or STATISTICS IO information for maximum performance, the query returns in around 2600ms. Execution Plan Analysis The first step toward improving on the execution plan produced by the query optimizer is to understand how it works, at least in outline. The two parallel Clustered Index Scans use multiple threads to read rows from tables T1 and T2. Parallel scan uses a demand-based scheme where threads are given page(s) to scan from the table as needed. This arrangement has certain important advantages, but does result in an unpredictable distribution of rows amongst threads. The point is that multiple threads cooperate to scan the whole table, but it is impossible to predict which rows end up on which threads. For correct results from the parallel hash join, the execution plan has to ensure that rows from T1 and T2 that might join are processed on the same thread. For example, if a row from T1 with join key value ‘1234’ is placed in thread 5’s hash table, the execution plan must guarantee that any rows from T2 that also have join key value ‘1234’ probe thread 5’s hash table for matches. The way this guarantee is enforced in this parallel hash join plan is by repartitioning rows to threads after each parallel scan. The two repartitioning exchanges route rows to threads using a hash function over the hash join keys. The two repartitioning exchanges use the same hash function so rows from T1 and T2 with the same join key must end up on the same hash join thread. Expensive Exchanges This business of repartitioning rows between threads can be very expensive, especially if a large number of rows is involved. The execution plan selected by the optimizer moves 5 million rows through one repartitioning exchange and around 15 million across the other. As a first step toward removing these exchanges, consider the execution plan selected by the optimizer if we join just one partition from each table, disallowing parallelism: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = 1 AND $PARTITION.PFT(T2.TID) = 1 OPTION (MAXDOP 1); The optimizer has chosen a (one-to-many) merge join instead of a hash join. The single-partition query completes in around 100ms. If everything scaled linearly, we would expect that extending this strategy to all 40 populated partitions would result in an execution time around 4000ms. Using parallelism could reduce that further, perhaps to be competitive with the parallel hash join chosen by the optimizer. This raises a question. If the most efficient way to join one partition from each of the tables is to use a merge join, why does the optimizer not choose a merge join for the full query? 
Forcing a Merge Join Let’s force the optimizer to use a merge join on the test query using a hint: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN); This is the execution plan selected by the optimizer: This plan results in the same number of logical reads reported previously, but instead of 2600ms the query takes 5000ms. The natural explanation for this drop in performance is that the merge join plan is only using a single thread, whereas the parallel hash join plan could use multiple threads. Parallel Merge Join We can get a parallel merge join plan using the same query hint as before, and adding trace flag 8649: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN, QUERYTRACEON 8649); The execution plan is: This looks promising. It uses a similar strategy to distribute work across threads as seen for the parallel hash join. In practice though, performance is disappointing. On a typical run, the parallel merge plan runs for around 8400ms; slower than the single-threaded merge join plan (5000ms) and much worse than the 2600ms for the parallel hash join. We seem to be going backwards! The logical reads for the parallel merge are still exactly the same as before, with no physical IOs. The cardinality estimates and thread distribution are also still very good (click to enlarge): A big clue to the reason for the poor performance is shown in the wait statistics (captured by Plan Explorer Pro): CXPACKET waits require careful interpretation, and are most often benign, but in this case excessive waiting occurs at the repartitioning exchanges. Unlike the parallel hash join, the repartitioning exchanges in this plan are order-preserving ‘merging’ exchanges (because merge join requires ordered inputs): Parallelism works best when threads can just grab any available unit of work and get on with processing it. Preserving order introduces inter-thread dependencies that can easily lead to significant waits occurring. In extreme cases, these dependencies can result in an intra-query deadlock, though the details of that will have to wait for another time to explore in detail. The potential for waits and deadlocks leads the query optimizer to cost parallel merge join relatively highly, especially as the degree of parallelism (DOP) increases. This high costing resulted in the optimizer choosing a serial merge join rather than parallel in this case. The test results certainly confirm its reasoning. Collocated Joins In SQL Server 2008 and later, the optimizer has another available strategy when joining tables that share a common partition scheme. This strategy is a collocated join, also known as as a per-partition join. It can be applied in both serial and parallel execution plans, though it is limited to 2-way joins in the current optimizer. Whether the optimizer chooses a collocated join or not depends on cost estimation. The primary benefits of a collocated join are that it eliminates an exchange and requires less memory, as we will see next. Costing and Plan Selection The query optimizer did consider a collocated join for our original query, but it was rejected on cost grounds. The parallel hash join with repartitioning exchanges appeared to be a cheaper option. There is no query hint to force a collocated join, so we have to mess with the costing framework to produce one for our test query. 
Pretending that IOs cost 50 times more than usual is enough to convince the optimizer to use collocated join with our test query: -- Pretend IOs are 50x cost temporarily DBCC SETIOWEIGHT(50);   -- Co-located hash join SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (RECOMPILE);   -- Reset IO costing DBCC SETIOWEIGHT(1); Collocated Join Plan The estimated execution plan for the collocated join is: The Constant Scan contains one row for each partition of the shared partitioning scheme, from 1 to 41. The hash repartitioning exchanges seen previously are replaced by a single Distribute Streams exchange using Demand partitioning. Demand partitioning means that the next partition id is given to the next parallel thread that asks for one. My test machine has eight logical processors, and all are available for SQL Server to use. As a result, there are eight threads in the single parallel branch in this plan, each processing one partition from each table at a time. Once a thread finishes processing a partition, it grabs a new partition number from the Distribute Streams exchange…and so on until all partitions have been processed. It is important to understand that the parallel scans in this plan are different from the parallel hash join plan. Although the scans have the same parallelism icon, tables T1 and T2 are not being co-operatively scanned by multiple threads in the same way. Each thread reads a single partition of T1 and performs a hash match join with the same partition from table T2. The properties of the two Clustered Index Scans show a Seek Predicate (unusual for a scan!) limiting the rows to a single partition: The crucial point is that the join between T1 and T2 is on TID, and TID is the partitioning column for both tables. A thread that processes partition ‘n’ is guaranteed to see all rows that can possibly join on TID for that partition. In addition, no other thread will see rows from that partition, so this removes the need for repartitioning exchanges. CPU and Memory Efficiency Improvements The collocated join has removed two expensive repartitioning exchanges and added a single exchange processing 41 rows (one for each partition id). Remember, the parallel hash join plan exchanges had to process 5 million and 15 million rows. The amount of processor time spent on exchanges will be much lower in the collocated join plan. In addition, the collocated join plan has a maximum of 8 threads processing single partitions at any one time. The 41 partitions will all be processed eventually, but a new partition is not started until a thread asks for it. Threads can reuse hash table memory for the new partition. The parallel hash join plan also had 8 hash tables, but with all 5,000,000 build rows loaded at the same time. The collocated plan needs memory for only 8 * 125,000 = 1,000,000 rows at any one time. Collocated Hash Join Performance The collated join plan has disappointing performance in this case. The query runs for around 25,300ms despite the same IO statistics as usual. This is much the worst result so far, so what went wrong? It turns out that cardinality estimation for the single partition scans of table T1 is slightly low. The properties of the Clustered Index Scan of T1 (graphic immediately above) show the estimation was for 121,951 rows. 
This is a small shortfall compared with the 125,000 rows actually encountered, but it was enough to cause the hash join to spill to physical tempdb: A level 1 spill doesn’t sound too bad, until you realize that the spill to tempdb probably occurs for each of the 41 partitions. As a side note, the cardinality estimation error is a little surprising because the system tables accurately show there are 125,000 rows in every partition of T1. Unfortunately, the optimizer uses regular column and index statistics to derive cardinality estimates here rather than system table information (e.g. sys.partitions). Collocated Merge Join We will never know how well the collocated parallel hash join plan might have worked without the cardinality estimation error (and the resulting 41 spills to tempdb) but we do know: Merge join does not require a memory grant; and Merge join was the optimizer’s preferred join option for a single partition join Putting this all together, what we would really like to see is the same collocated join strategy, but using merge join instead of hash join. Unfortunately, the current query optimizer cannot produce a collocated merge join; it only knows how to do collocated hash join. So where does this leave us? CROSS APPLY sys.partitions We can try to write our own collocated join query. We can use sys.partitions to find the partition numbers, and CROSS APPLY to get a count per partition, with a final step to sum the partial counts. The following query implements this idea: SELECT row_count = SUM(Subtotals.cnt) FROM ( -- Partition numbers SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1 ) AS P CROSS APPLY ( -- Count per collocated join SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals; The estimated plan is: The cardinality estimates aren’t all that good here, especially the estimate for the scan of the system table underlying the sys.partitions view. Nevertheless, the plan shape is heading toward where we would like to be. Each partition number from the system table results in a per-partition scan of T1 and T2, a one-to-many Merge Join, and a Stream Aggregate to compute the partial counts. The final Stream Aggregate just sums the partial counts. Execution time for this query is around 3,500ms, with the same IO statistics as always. This compares favourably with 5,000ms for the serial plan produced by the optimizer with the OPTION (MERGE JOIN) hint. This is another case of the sum of the parts being less than the whole – summing 41 partial counts from 41 single-partition merge joins is faster than a single merge join and count over all partitions. Even so, this single-threaded collocated merge join is not as quick as the original parallel hash join plan, which executed in 2,600ms. On the positive side, our collocated merge join uses only one logical processor and requires no memory grant. The parallel hash join plan used 16 threads and reserved 569 MB of memory:   Using a Temporary Table Our collocated merge join plan should benefit from parallelism. The reason parallelism is not being used is that the query references a system table. 
We can work around that by writing the partition numbers to a temporary table (or table variable): SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   CREATE TABLE #P ( partition_number integer PRIMARY KEY);   INSERT #P (partition_number) SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1;   SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals;   DROP TABLE #P;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; Using the temporary table adds a few logical reads, but the overall execution time is still around 3500ms, indistinguishable from the same query without the temporary table. The problem is that the query optimizer still doesn’t choose a parallel plan for this query, though the removal of the system table reference means that it could if it chose to: In fact the optimizer did enter the parallel plan phase of query optimization (running search 1 for a second time): Unfortunately, the parallel plan found seemed to be more expensive than the serial plan. This is a crazy result, caused by the optimizer’s cost model not reducing operator CPU costs on the inner side of a nested loops join. Don’t get me started on that, we’ll be here all night. In this plan, everything expensive happens on the inner side of a nested loops join. Without a CPU cost reduction to compensate for the added cost of exchange operators, candidate parallel plans always look more expensive to the optimizer than the equivalent serial plan. Parallel Collocated Merge Join We can produce the desired parallel plan using trace flag 8649 again: SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: One difference between this plan and the collocated hash join plan is that a Repartition Streams exchange operator is used instead of Distribute Streams. The effect is similar, though not quite identical. The Repartition uses round-robin partitioning, meaning the next partition id is pushed to the next thread in sequence. The Distribute Streams exchange seen earlier used Demand partitioning, meaning the next partition id is pulled across the exchange by the next thread that is ready for more work. There are subtle performance implications for each partitioning option, but going into that would again take us too far off the main point of this post. Performance The important thing is the performance of this parallel collocated merge join – just 1350ms on a typical run. The list below shows all the alternatives from this post (all timings include creation, population, and deletion of the temporary table where appropriate) from quickest to slowest: Collocated parallel merge join: 1350ms Parallel hash join: 2600ms Collocated serial merge join: 3500ms Serial merge join: 5000ms Parallel merge join: 8400ms Collated parallel hash join: 25,300ms (hash spill per partition) The parallel collocated merge join requires no memory grant (aside from a paltry 1.2MB used for exchange buffers). 
This plan uses 16 threads at DOP 8; but 8 of those are (rather pointlessly) allocated to the parallel scan of the temporary table. These are minor concerns, but it turns out there is a way to address them if it bothers you. Parallel Collocated Merge Join with Demand Partitioning This final tweak replaces the temporary table with a hard-coded list of partition ids (dynamic SQL could be used to generate this query from sys.partitions): SELECT row_count = SUM(Subtotals.cnt) FROM ( VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10), (11),(12),(13),(14),(15),(16),(17),(18),(19),(20), (21),(22),(23),(24),(25),(26),(27),(28),(29),(30), (31),(32),(33),(34),(35),(36),(37),(38),(39),(40),(41) ) AS P (partition_number) CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: The parallel collocated hash join plan is reproduced below for comparison: The manual rewrite has another advantage that has not been mentioned so far: the partial counts (per partition) can be computed earlier than the partial counts (per thread) in the optimizer’s collocated join plan. The earlier aggregation is performed by the extra Stream Aggregate under the nested loops join. The performance of the parallel collocated merge join is unchanged at around 1350ms. Final Words It is a shame that the current query optimizer does not consider a collocated merge join (Connect item closed as Won’t Fix). The example used in this post showed an improvement in execution time from 2600ms to 1350ms using a modestly-sized data set and limited parallelism. In addition, the memory requirement for the query was almost completely eliminated  – down from 569MB to 1.2MB. The problem with the parallel hash join selected by the optimizer is that it attempts to process the full data set all at once (albeit using eight threads). It requires a large memory grant to hold all 5 million rows from table T1 across the eight hash tables, and does not take advantage of the divide-and-conquer opportunity offered by the common partitioning. The great thing about the collocated join strategies is that each parallel thread works on a single partition from both tables, reading rows, performing the join, and computing a per-partition subtotal, before moving on to a new partition. From a thread’s point of view… If you have trouble visualizing what is happening from just looking at the parallel collocated merge join execution plan, let’s look at it again, but from the point of view of just one thread operating between the two Parallelism (exchange) operators. Our thread picks up a single partition id from the Distribute Streams exchange, and starts a merge join using ordered rows from partition 1 of table T1 and partition 1 of table T2. By definition, this is all happening on a single thread. As rows join, they are added to a (per-partition) count in the Stream Aggregate immediately above the Merge Join. Eventually, either T1 (partition 1) or T2 (partition 1) runs out of rows and the merge join stops. The per-partition count from the aggregate passes on through the Nested Loops join to another Stream Aggregate, which is maintaining a per-thread subtotal. Our same thread now picks up a new partition id from the exchange (say it gets id 9 this time). 
The count in the per-partition aggregate is reset to zero, and the processing of partition 9 of both tables proceeds just as it did for partition 1, and on the same thread. Each thread picks up a single partition id and processes all the data for that partition, completely independently from other threads working on other partitions. One thread might eventually process partitions (1, 9, 17, 25, 33, 41) while another is concurrently processing partitions (2, 10, 18, 26, 34) and so on for the other six threads at DOP 8. The point is that all 8 threads can execute independently and concurrently, continuing to process new partitions until the wider job (of which the thread has no knowledge!) is done. This divide-and-conquer technique can be much more efficient than simply splitting the entire workload across eight threads all at once. Related Reading Understanding and Using Parallelism in SQL Server Parallel Execution Plans Suck © 2013 Paul White – All Rights Reserved Twitter: @SQL_Kiwi

    Read the article

  • Problem with icacls on Windows 2003: "Acl length is incorrect"

    - by Andrew J. Brehm
    I am confused by the output of icacls on Windows 2003. Everything appears to work on Windows 2008. I am trying to change permissions on a directory: icacls . /grant mydomain\someuser:(OI)(CI)(F) This results in the following error: .: Acl length is incorrect. .: An internal error occurred. Successfully processed 0 files; Failed processing 1 files The same command used on a file named "file" works: icacls file /grant mydomain\someuser:(OI)(CI)(F) Result is: processed file: file Successfully processed 1 files; Failed processing 0 files What's going on?

    Read the article
