Search Results

Search found 5623 results on 225 pages for 'prevent deletion'.


  • How to Implement Single Sign-On between Websites

    - by hmloo
    Introduction

    Single sign-on (SSO) is a way to control access to multiple related but independent systems: a user only needs to log in once to gain access to all of the other systems. Many commercial products provide single sign-on solutions, and there are also open source options such as OpenSSO and CAS. Both of these use centralized authentication and provide a more robust authentication mechanism. But if each system has its own authentication mechanism, how do we provide a seamless transition between them? Here I will show you one way.

    How it Works

    The method we'll use is based on a secret key shared between the sites. The origin site builds a hashed authentication token from the shared secret and some other parameters, then redirects the user to the target site.

    Variable    | Status   | Description
    ssoEncode   | required | hash(ssoSharedSecret + "," + ssoTime + "," + ssoUserName)
    ssoTime     | required | timestamp in the format yyyyMMddHHmmss, used to prevent replay attacks
    ssoUserName | required | unique username; required when a user is logged in

    Note: the variables are sent via POST for security reasons.

    Building a Single Sign-On Solution

    The origin site must:
    1. Create the URL for the request.
    2. Generate the required authentication parameters.
    3. Redirect to the target site.

    using System;
    using System.Web.Security;
    using System.Text;

    public partial class _Default : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            string postbackUrl = "http://www.targetsite.com/sso.aspx";
            string ssoTime = DateTime.Now.ToString("yyyyMMddHHmmss");
            string ssoUserName = User.Identity.Name;
            string ssoSharedSecret = "58ag;ai76"; // get this from config or similar
            string ssoHash = FormsAuthentication.HashPasswordForStoringInConfigFile(
                string.Format("{0},{1},{2}", ssoSharedSecret, ssoTime, ssoUserName), "md5");
            string value = string.Format("{0}:{1},{2}", ssoHash, ssoTime, ssoUserName);

            Response.Clear();
            StringBuilder sb = new StringBuilder();
            sb.Append("<html>");
            sb.AppendFormat(@"<body onload='document.forms[""form""].submit()'>");
            sb.AppendFormat("<form name='form' action='{0}' method='post'>", postbackUrl);
            sb.AppendFormat("<input type='hidden' name='t' value='{0}'>", value);
            sb.Append("</form>");
            sb.Append("</body>");
            sb.Append("</html>");
            Response.Write(sb.ToString());
            Response.End();
        }
    }

    The target site must:
    1. Get the authentication parameters.
    2. Validate the parameters against the shared secret.
    3. If the user is valid, authenticate them and redirect to the target page.
    4. If the user is invalid, show an error and return.
    using System;
    using System.Web.Security;
    using System.Text;

    public partial class _Default : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)
            {
                if (User.Identity.IsAuthenticated)
                {
                    Response.Redirect("~/Default.aspx");
                }
            }

            if (Request.Params.Get("t") != null)
            {
                string ticket = Request.Params.Get("t");
                char[] delimiters = new char[] { ':', ',' };
                string[] ssoVariable = ticket.Split(delimiters, StringSplitOptions.None);
                string ssoHash = ssoVariable[0];
                string ssoTime = ssoVariable[1];
                string ssoUserName = ssoVariable[2];

                DateTime appTime = DateTime.MinValue;
                int offsetTime = 60; // get this from config or similar
                try
                {
                    appTime = DateTime.ParseExact(ssoTime, "yyyyMMddHHmmss", null);
                }
                catch
                {
                    // show error
                    return;
                }
                // Reject tokens outside the allowed time window (replay protection).
                if (Math.Abs(appTime.Subtract(DateTime.Now).TotalSeconds) > offsetTime)
                {
                    // show error
                    return;
                }

                bool isValid = false;
                string ssoSharedSecret = "58ag;ai76"; // get this from config or similar
                string hash = FormsAuthentication.HashPasswordForStoringInConfigFile(
                    string.Format("{0},{1},{2}", ssoSharedSecret, ssoTime, ssoUserName), "md5");
                if (string.Compare(ssoHash, hash, true) == 0)
                {
                    isValid = true;
                }

                if (isValid)
                {
                    // do authentication
                }
                else
                {
                    // show error
                    return;
                }
            }
            else
            {
                // show error
            }
        }
    }

    Summary

    This is a very simple and basic SSO solution. Its main advantage is its simplicity: each site only needs a single added page to perform SSO authentication, and the existing system infrastructure does not need to be modified.
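    As a side note, the MD5 digest used above (via HashPasswordForStoringInConfigFile) is weak by today's standards; a keyed hash such as HMAC-SHA256 makes the token tamper-resistant without relying on concatenating the secret into the hashed string. Below is a minimal sketch of building the same "hash:time,user" token in that style. The helper name BuildSsoToken and the key-handling details are illustrative assumptions, not part of the original article.

    using System;
    using System.Security.Cryptography;
    using System.Text;

    public static class SsoToken
    {
        // Builds "hash:time,user", mirroring the article's token layout,
        // but with HMAC-SHA256 keyed by the shared secret instead of an
        // MD5 digest of "secret,time,user".
        public static string BuildSsoToken(string sharedSecret, string userName, DateTime nowUtc)
        {
            string ssoTime = nowUtc.ToString("yyyyMMddHHmmss");
            string payload = string.Format("{0},{1}", ssoTime, userName);
            using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(sharedSecret)))
            {
                byte[] digest = hmac.ComputeHash(Encoding.UTF8.GetBytes(payload));
                // Base64 never contains ':' or ',', so the target site's
                // Split(':', ',') parsing still works unchanged.
                return string.Format("{0}:{1}", Convert.ToBase64String(digest), payload);
            }
        }
    }

    The target site would recompute the HMAC over the received "time,user" payload and compare it to the first token component, keeping the same time-window check as above.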

    Read the article

  • Common reasons for the ‘Sys is undefined’ error in ASP.NET Ajax applications

      In this blog I will try to summarize the most common reasons for getting the famous 'Sys is undefined' error when running an Ajax-enabled web site or application (there are almost one million results on Google for that phrase).

      Where does it come from? In the source of every Ajax web page you will see code like this:

      <script type="text/javascript">
      //<![CDATA[
      Sys.WebForms.PageRequestManager._initialize('ScriptManager1', document.getElementById('form1'));
      Sys.WebForms.PageRequestManager.getInstance()._updateControls([], [], [], 90);
      //]]>
      </script>

      This is the initialization script of the ScriptManager. So, if for some reason the Sys namespace is not available when this code executes, you get the 'Sys is undefined' error. Here are the most common reasons and solutions for the problem:

      1. The error occurs when you have added a control from RadControls for ASP.NET AJAX, but your application is not configured to use ASP.NET AJAX. For example, in VS 2005 you created a new Blank Site instead of a new Ajax-Enabled Web Site and the 'Sys is undefined' message pops up. To fix it, follow the steps described in the Configuring ASP.NET Ajax article (check the topic called "Adding ASP.NET AJAX Configuration Elements to an Existing Web Site"), or simply create an Ajax-Enabled Web Site. You can also check my other blog post on the matter: Visual Studio 2008: Where is the new ASP.NET Ajax-Enabled Web Site template?

      2. Authentication: if the website denies access to all pages to unauthorized users, then access to the Telerik.Web.UI.WebResource.axd handler (the default handler of RadScriptManager) is also unauthorized. This causes the handler to serve the content of the login page instead of the combined scripts, hence the error. To solve it, add a <location> section to the application configuration file to allow access to Telerik.Web.UI.WebResource.axd for all users, like so:

      <configuration>
        ...
        <location path="Telerik.Web.UI.WebResource.axd">
          <system.web>
            <authorization>
              <allow users="*"/>
            </authorization>
          </system.web>
        </location>
        ...
      </configuration>

      Note that access to the standard ScriptResource.axd and WebResource.axd is automatically allowed for all users (authenticated and unauthenticated), so if you use the ScriptManager instead of RadScriptManager you will not face this problem. The authentication problem also does not manifest when you disable script combining or use the CDN. Adding the above configuration section will make it work with RadScriptManager's combined script.

      3. The IE6 browser fails to load the compressed script. The problem does not appear in any other browser. There is a well-known bug in older versions of IE6 which causes them to lose the first 2,048 bytes of data sent back from a web server that uses HTTP compression. The latest versions of RadScriptManager do not compress the output at all if the client is IE6, but in previous versions you need to manually disable output compression to prevent the error. So, if you get the 'Sys is undefined' error in IE6, update to the latest version of RadControls or simply disable output compression.

      4. Requests to the *.axd files return error code 404 - Not Found. This can be fixed easily: check in the IIS management console that the .axd extension (the default HTTP handler extension) is allowed, and also check that the "Verify if file exists" checkbox for that mapping is unchecked (click the Edit button for the mapping to check).
      More information can be found in our troubleshooting article and in the ASP.NET QA team blog post.

      5. The virtual directory in IIS is not marked as a Web Application. Converting it to a Web Application should fix the problem.

      6. Check for the <xhtmlConformance mode="Legacy"/> option in your web.config and remove it. It would be rather rare to become a victim of this exact case, but still keep it in mind. Scott Guthrie describes it in more detail.

      In the above points I mentioned the terms web resources, javascript output and compressed script several times. If you want to find out more about these, please see the Web Resources Demystified series by my friend and colleague Atanas Korchev.

      I hope that one of the above solutions will help you get rid of the 'Sys is undefined' error.
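      For reference, point 1 boils down to registering the ASP.NET AJAX script handler in web.config so that the Sys.* client framework can be served at all. Below is a minimal sketch of the classic IIS6-style registration; the exact Version and PublicKeyToken of the System.Web.Extensions assembly depend on the ASP.NET AJAX version installed on your machine, so treat those attribute values as placeholders rather than copy-paste configuration.

      <configuration>
        <system.web>
          <httpHandlers>
            <!-- Serves the Sys.* client framework scripts; without this handler
                 the ScriptManager initialization fails with 'Sys is undefined'. -->
            <add verb="GET,HEAD" path="ScriptResource.axd"
                 type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
                 validate="false"/>
          </httpHandlers>
        </system.web>
      </configuration>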

    Read the article

  • Taking the Plunge - or Dipping Your Toe - into the Fluffy IAM Cloud by Paul Dhanjal (Simeio Solutions)

    - by Greg Jensen
    In our last three posts, we’ve examined the revolution that’s occurring today in identity and access management (IAM). We looked at the business drivers behind the growth of cloud-based IAM, the shortcomings of the old, last-century IAM models, and the new opportunities that federation, identity hubs and other new cloud capabilities can provide by changing the way you interact with everyone who does business with you. In this, our final post in the series, we’ll cover the key things you, the enterprise architect, should keep in mind when considering moving IAM to the cloud.

    Invariably, what starts the consideration process is a burning business need: a compliance requirement, security vulnerability or belt-tightening edict. Many on the business side view IAM as the “silver bullet” – and for good reason. You can almost always devise a solution using some aspect of IAM. The most critical question to ask first when using IAM to address the business need is, simply: is my solution complete?

    Typically, “business” is not focused on the big picture. Understandably, they’re focused instead on the need at hand: Can we be HIPAA compliant in 6 months? Can we tighten our new hire, employee transfer and termination processes? What can we do to prevent another password breach? Can we reduce our service center costs by the end of next quarter? The business may not be focused on the complete set of services offered by IAM but rather on a single aspect or two. But it is the job – indeed the duty – of the enterprise architect to ensure that all aspects are being met. It’s like remodeling a house but failing to consider the impact on the foundation, the furnace, or the zoning or setback requirements. While the homeowners may not be thinking of such things, the architect, of course, must.

    At Simeio Solutions, the way we ensure that all aspects are being taken into account – to expose any gaps or weaknesses – is to assess our client’s IAM capabilities against a five-step maturity model ranging from “ad hoc” to “optimized.” The model we use is similar to the Capability Maturity Model Integration (CMMI) developed by the Software Engineering Institute (SEI) at Carnegie Mellon University. It’s based upon some simple criteria, which can provide a visual representation of how well our clients fare when evaluated against four core categories:

    - Program Governance
    - Access Management (e.g., Single Sign-On)
    - Identity and Access Governance (e.g., Identity Intelligence)
    - Enterprise Security (e.g., DLP and SIEM)

    Often our clients believe they have a solution with all the bases covered, but the model exposes the gaps or weaknesses. The gaps are ideal opportunities for the cloud to enter into the conversation. The complete process is straightforward:

    1. Look at the big picture, not just the immediate need – what is our roadmap and how does this solution fit?
    2. Determine where you stand with respect to the four core areas – what are the gaps?
    3. Decide how to cover the gaps – what role can the cloud play?

    Returning to our home remodeling analogy: at some point, if gaps or weaknesses are discovered when evaluating the complete impact of the proposed remodel – if the existing foundation wouldn’t support the new addition, for example – the owners need to decide if it’s time to move to a new house instead of trying to remodel the old one. However, with IAM it’s not an either-or proposition – i.e., either move to the cloud or fix the existing infrastructure.
    It’s possible to use new cloud technologies just to cover the gaps. Many of our clients start their migration to the cloud this way, dipping a toe in instead of taking the plunge all at once. Because our cloud services offering is based on the Oracle Identity and Access Management Suite, we can offer a tremendous amount of flexibility in this regard. The Oracle platform is not a collection of point solutions, but rather a complete, integrated, best-of-breed suite. Yet it’s not an all-or-nothing proposition. You can choose just the features and capabilities you need using a pay-as-you-go model, incrementally turning services on and off as needed. Better still, all the other capabilities are there, at the ready, whenever you need them.

    Spooling up these cloud-only services takes just a fraction of the time it would take a typical organization to deploy internally. SLAs in the cloud may be higher than on premises, too. And by using a suite of software that’s complete and integrated, you can dramatically lower cost and complexity. If your in-house solution cannot be migrated to the cloud, you might consider using hardware appliances such as Simeio’s Cloud Interceptor to extend your enterprise out into the network. You might also consider using Expert Managed Services.

    Cost is usually the key factor – not just development costs but also operational sustainment costs. Talent or resourcing issues often come into play when thinking about sustaining a program. Expert Managed Services such as those we offer at Simeio can address those concerns head on.

    In a cloud offering, identity and access services lend themselves to the new paradigms described in my previous posts. Most importantly, this allows us all to focus on what we're meant to do: provide value, lower costs and increase security for our respective organizations. It’s that magic “silver bullet” that business knew you had all along. If you’d like to talk more, you can find us at simeiosolutions.com.

    Read the article

  • How to label a cuboid using OpenGL?

    - by usha
    Hi. This is how my 3D cuboid looks; I have attached the complete code. I want to label this cuboid with different names on each side. How is that possible using OpenGL on Android? Please help me out.

    public class MyGLRenderer implements Renderer {
        Context context;
        Cuboid rect;
        private float mCubeRotation;
        // private static float angleCube = 0;     // Rotational angle in degree for cube (NEW)
        // private static float speedCube = -1.5f; // Rotational speed for cube (NEW)

        public MyGLRenderer(Context context) {
            rect = new Cuboid();
            this.context = context;
        }

        public void onDrawFrame(GL10 gl) {
            gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
            gl.glLoadIdentity();                // Reset the model-view matrix
            gl.glTranslatef(0.2f, 0.0f, -8.0f); // Translate right and into the screen
            gl.glScalef(0.8f, 0.8f, 0.8f);      // Scale down (NEW)
            gl.glRotatef(mCubeRotation, 1.0f, 1.0f, 1.0f);
            // gl.glRotatef(angleCube, 1.0f, 1.0f, 1.0f); // rotate about the axis (1,1,1) (NEW)
            rect.draw(gl);
            mCubeRotation -= 0.15f;
            // angleCube += speedCube;
        }

        public void onSurfaceChanged(GL10 gl, int width, int height) {
            if (height == 0) height = 1;        // To prevent divide by zero
            float aspect = (float) width / height;
            // Set the viewport (display area) to cover the entire window
            gl.glViewport(0, 0, width, height);
            // Setup perspective projection, with aspect ratio matching the viewport
            gl.glMatrixMode(GL10.GL_PROJECTION); // Select projection matrix
            gl.glLoadIdentity();                 // Reset projection matrix
            GLU.gluPerspective(gl, 45, aspect, 0.1f, 100.f); // Use perspective projection
            gl.glMatrixMode(GL10.GL_MODELVIEW);  // Select model-view matrix
            gl.glLoadIdentity();                 // Reset
        }

        public void onSurfaceCreated(GL10 gl, EGLConfig config) {
            gl.glClearColor(0.0f, 0.0f, 0.0f, 1.0f); // Set color's clear-value to black
            gl.glClearDepthf(1.0f);                  // Set depth's clear-value to farthest
            gl.glEnable(GL10.GL_DEPTH_TEST);         // Enables depth-buffer for hidden surface removal
            gl.glDepthFunc(GL10.GL_LEQUAL);          // The type of depth testing to do
            gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT, GL10.GL_NICEST); // nice perspective view
            gl.glShadeModel(GL10.GL_SMOOTH);         // Enable smooth shading of color
            gl.glDisable(GL10.GL_DITHER);            // Disable dithering for better performance
        }
    }

    public class Cuboid {
        private FloatBuffer mVertexBuffer;
        private FloatBuffer mColorBuffer;
        private ByteBuffer mIndexBuffer;

        private float vertices[] = { // width, height, depth
            -2.5f, -1.0f, -1.0f,
             1.0f, -1.0f, -1.0f,
             1.0f,  1.0f, -1.0f,
            -2.5f,  1.0f, -1.0f,
            -2.5f, -1.0f,  1.0f,
             1.0f, -1.0f,  1.0f,
             1.0f,  1.0f,  1.0f,
            -2.5f,  1.0f,  1.0f
        };

        private float colors[] = { // R, G, B, A color
            0.0f, 1.0f, 0.0f, 1.0f,
            0.0f, 1.0f, 0.0f, 1.0f,
            1.0f, 0.5f, 0.0f, 1.0f,
            1.0f, 0.5f, 0.0f, 1.0f,
            1.0f, 0.0f, 0.0f, 1.0f,
            1.0f, 0.0f, 0.0f, 1.0f,
            0.0f, 0.0f, 1.0f, 1.0f,
            1.0f, 0.0f, 1.0f, 1.0f
        };

        private byte indices[] = { // vertices 0..7, two triangles per face
            0, 4, 5, 0, 5, 1,
            1, 5, 6, 1, 6, 2,
            2, 6, 7, 2, 7, 3,
            3, 7, 4, 3, 4, 0,
            4, 7, 6, 4, 6, 5,
            3, 0, 1, 3, 1, 2
        };

        public Cuboid() {
            ByteBuffer byteBuf = ByteBuffer.allocateDirect(vertices.length * 4);
            byteBuf.order(ByteOrder.nativeOrder());
            mVertexBuffer = byteBuf.asFloatBuffer();
            mVertexBuffer.put(vertices);
            mVertexBuffer.position(0);

            byteBuf = ByteBuffer.allocateDirect(colors.length * 4);
            byteBuf.order(ByteOrder.nativeOrder());
            mColorBuffer = byteBuf.asFloatBuffer();
            mColorBuffer.put(colors);
            mColorBuffer.position(0);

            mIndexBuffer = ByteBuffer.allocateDirect(indices.length);
            mIndexBuffer.put(indices);
            mIndexBuffer.position(0);
        }

        public void draw(GL10 gl) {
            gl.glFrontFace(GL10.GL_CW);
            gl.glVertexPointer(3, GL10.GL_FLOAT, 0, mVertexBuffer);
            gl.glColorPointer(4, GL10.GL_FLOAT, 0, mColorBuffer);
            gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
            gl.glEnableClientState(GL10.GL_COLOR_ARRAY);
            gl.glDrawElements(GL10.GL_TRIANGLES, 36, GL10.GL_UNSIGNED_BYTE, mIndexBuffer);
            gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
            gl.glDisableClientState(GL10.GL_COLOR_ARRAY);
        }
    }

    public class Draw3drect extends Activity {
        private GLSurfaceView glView; // Use GLSurfaceView

        // Called when the activity is started, to initialize the view
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            glView = new GLSurfaceView(this);           // Allocate a GLSurfaceView
            glView.setRenderer(new MyGLRenderer(this)); // Use a custom renderer
            this.setContentView(glView);                // This activity sets to GLSurfaceView
        }

        // Called when the activity is going into the background
        @Override
        protected void onPause() {
            super.onPause();
            glView.onPause();
        }

        // Called after onPause()
        @Override
        protected void onResume() {
            super.onResume();
            glView.onResume();
        }
    }

    Read the article

  • CodePlex Daily Summary for Saturday, November 03, 2012

    Popular Releases

    - ZXMAK2: Version 2.6.8.2: fix save to SZX snapshot for +3 model; fix +3 ULA timings
    - Launchbar: Launchbar 4.2.2.0: This release is the first step in cleaning up the code and using all the latest features of .NET 4.5. Changes: 4.2.2 (2012-11-02) improved handling of left clicks; 4.1.0 (2012-10-17) removed tray icon, assembly renamed and signed with strong name. Note: when you upgrade, Launchbar will start with the default settings. You can import your previous settings by following these steps: run Launchbar and just save the settings without configuring anything, shut down Launchbar, go to the folder %LOCA...
    - CommonLibrary.NET: CommonLibrary.NET 0.9.8.8: Release notes for FluentScript are located at http://fluentscript.codeplex.com/wikipage?title=Release%20Notes&referringTitle=Documentation. FluentScript - 0.9.8.8 - final release. Application: FluentScript; Version: 0.9.8.8; Build: 0.9.8.8; Changeset: 77368 (CommonLibrary.NET); Release date: November 2nd, 2012; Binaries: CommonLibrary.dll; Namespace: ComLib.Lang; Project site: http://fluentscript.codeplex.com/; Download: http://commonlibrarynet.codeplex.com/releases/view/90426; Source code: http://common...
    - Mouse Jiggler: MouseJiggle-1.3: This adds the much-requested minimize-to-tray feature to Mouse Jiggler.
    - Umbraco CMS: Umbraco 4.10.0 Release Candidate: This is a Release Candidate, which means that if we do not find any major issues in the next week, we will release this version as the final release of 4.10.0 on November 9th, 2012. The documentation for the MVC bits still lives in the GitHub version of the docs for now and will be updated on our.umbraco.org with the final release of 4.10.0. Browse the documentation here: https://github.com/umbraco/Umbraco4Docs/tree/4.8.0/Documentation/Reference/Mvc. If you want to do only MVC then make sur...
    - Skype Auto Recorder: SkypeAutoRecorder 1.3.4: New icon and images. Reworked settings window. Implemented high-quality sound encoding. Implemented the possibility to produce stereo records. Added buttons with system-wide hot keys for manually starting and canceling recording. Added buttons for opening the folder with records. Added Help button. Fixed an issue where recording continued after a call ended. Fixed an issue where recording doesn't start. Fixed several bugs and improved stability. Major refactoring and optimization...
    - Access 2010 Application Platform - Build Your Own Database: Application Platform - 0.0.2: Release 0.0.2: created two new users, one belonging to the Administrators group and the other to the Public group (user: admin, pass: admin; user: guest, pass: guest). Initial release: this is the first version of the database. At the moment it is all contained in one file to make development easier, but the obvious idea would be to split it into front and back end for a production version of the tool. The features it contains at the moment are the "core" features.
    - Python Tools for Visual Studio: Python Tools for Visual Studio 1.5: We're pleased to announce the release of Python Tools for Visual Studio 1.5 RTM. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features including CPython/IronPython, edit/IntelliSense/debug/profile, cloud, HPC, and IPython support. For a quick overview of the general IDE experience, please watch this video. There are a number of exciting improvements in this release comp...
    - BCF.Net: BCF.Net: BCF.Net-20121024 source code
    - AssaultCube Reloaded: 2.5.5: Linux has Ubuntu 11.10 32-bit precompiled binaries and Ubuntu 10.10 64-bit precompiled binaries, but you can compile your own as it also contains the source. If you are using Mac or other operating systems, please wait while we try to package for those OSes. Try to compile it; if it fails, download a virtual machine. The server pack is ready for both Windows and Linux, but you might need to compile your own for Linux (source included). Changelog: fixed potential bot bugs: map change, OpenAL...
    - Edi: Edi 1.0 with DarkExpression: Added DarkExpression theme (dialogs and message boxes are not completely themed yet)
    - DirectX Tool Kit: October 30, 2012 (add WP8 support): Added project files for Windows Phone 8
    - MCEBuddy 2.x: MCEBuddy 2.3.6: Changelog for 2.3.6 (32-bit and 64-bit): 1. Fixed a bug causing multichannel audio conversion failure; AAC does not support 6-channel audio, so MCEBuddy now checks for it and forces the output to 2 channels if the AAC codec is specified. 2. Fixed a bug in Original Broadcast Date and Time: it is reported in the UTC timezone in WTV metadata, while TVDB and MovieDB dates are reported in the network timezone. It is assumed the video is recorded and converted on the same machine, i.e. local timezone...
    - MVVM Light Toolkit: MVVM Light Toolkit V4.1 for Visual Studio 2012: This version only supports Visual Studio 2012 (and all Express editions too). If you use Visual Studio 2010, please stay tuned; we will publish an update in a few days with support for VS10. V4.1 supports: Windows Phone 8, Windows 8 (Windows RT), Silverlight 5, Silverlight 4, WPF 4.5, WPF 4, WPF 3.5. And the following development environments: Visual Studio 2012 (Pro, Premium, Ultimate), Visual Studio 2012 Express for Windows 8, Visual Studio 2012 Express for Windows Phone 8, Visual...
    - Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.73: Fix issue in Discussion #401101 (unreferenced var in a for-in statement was getting removed). Add the grouping operator to the parsed output so that unminified parsed code is closer to the original (unneeded parens are still stripped later, if minifying). More cleaning of references as they are minified out of the code.
    - RiP-Ripper & PG-Ripper: PG-Ripper 1.4.03: Changes: NEW: added support for the phun.org forum; FIXED: Kitty-Kats new forum URL
    - Liberty: v3.4.0.1 Release 28th October 2012: Change log: Fixed (H4) the save verification screen showing incorrect mission and difficulty information for some saves; (H4) hopefully fixed the issue where progress did not save between missions and saves would not revert correctly; (H3) fixed crashes that occurred when trying to load player information. Proper exception dialogs will now show in place of crashes.
    - Player Framework by Microsoft: Player Framework for Windows 8 (Preview 7): This release is compatible with the version of the Smooth Streaming SDK released today (10/26). Release 1 of the player framework is expected to be available next week. Improvements and fixes - IMPORTANT: see the list of breaking changes from preview 6. Support for the latest Smooth Streaming SDK. XAML only: support for moving any of the UI elements outside the MediaPlayer (e.g. into the app bar); note: equivalent changes to the JS version are due in the coming week. Support for localizing all text used in t...
    - Send multiple SMS via Way2SMS C#: SMS 1.1: Added support for 160by2
    - Quick Launch: Quick Launch 1.0: A lightweight and fast way to manage and launch thousands of tools and applications. Press Win+Q and start to search and run. http://www.codeplex.com/Download?ProjectName=quicklaunch&DownloadId=523536

    New Projects

    - appnitiren: This project's main objective is to deepen my knowledge of developing apps for Windows 8 and Windows Phone, and of the Buddhist philosophy of Nichiren Daishonin.
    - Aspect - TypeScript AOP Framework: A simple AOP framework for TypeScript.
    - AWF's PowerPoint Tab: A collection of little PowerPoint utilities, including "resize all shapes".
    - Base64 Encoder-Decoder: (Base64 encoder/decoder using C#)
    - BombaJob-WF: Official Windows desktop client for BombaJob.bg
    - Bungie Test: !
    - CCAudio: A project to create an audio recording/mixing and processing framework in C#/C++ with a view to creating a digital audio workstation.
    - EvaluationSyarem: for yuanguang
    - exporttfschangesets: Small utility that runs against tfpt.exe (TFS Power Tools) and exports a given range of changesets.
    - fjfdszjtfs: fjfdszjtfs
    - Kladionica: A small web application specialized for the F1 betting pool on the Krstarica forum. (Translated from Serbian.)
    - ListViewItem Float Test: WPF testing ground for development of a "floating" ListViewItem (preeettty)
    - Macconnell: This is my test project
    - NWebsec Demo site: This is the NWebsec demo website project.
    - Ocean Flattened: Summary of this project
    - Quagmire: QUAGMIRE aims to make distributed data processing and visualisation simple and flexible.
    - Scheduled SQL Jobs for DotNetNuke by IowaComputerGurus Inc.: Keeping a DotNetNuke site database clean can be a real nightmare for those hosting sites on shared hosting providers.
    - Secretary Tool: Tool for secretaries of congregations of Jehovah's Witnesses.
    - Secure My Install for DotNetNuke by IowaComputerGurus: This module is a single-use per DotNetNuke installation utility module that will make a number of security enhancing modifications to your "out of the box" DotN...
    - ServStop: ServStop is a .NET application that makes it easy to stop several system services at once. Now you don't have to change startup types or stop them one at a time. It has a simple list-based interface with the ability to save and load lists of user services to stop. Written in C#.
    - Simple File List for DotNetNuke by IowaComputerGurus Inc.: The Simple File List module is a free DotNetNuke module that will list all files within a specific folder under the specific portal folder within DNN for users.
    - SOLUS3 Config Discovery: An open-source community developed tool for generating a simple text file detailing your local SIMS server settings to simplify new SOLUS3 configurations.
    - Sort Project: This project holds a class that extends objects of type IEnumerable by enabling them to use sorting algorithms other than the built-in one provided by Microsoft.
    - SQL Azure Data Protector: This tool leverages the options currently available on the Azure platform to provide an integrated and automated solution for backing up SQL Azure databases.
    - Throttling Suite for ASP.NET Applications: The Throttling Suite provides throttling control capabilities to ASP.NET applications. It is highly customizable, including a "log-only" mode.
    - TidyBackups: Allows for automated deletion of Microsoft SQL backup files based on file type (.bak) and creation age, as well as compression using standard ZIP technology.
    - TrainingData: TrainingData is a set of libraries for reading and modifying training data files for Garmin and Polar heart rate monitors.
    - Virtual eCommerce: (description garbled by encoding in the original; it mentions ASP.NET Web Pages with the Razor engine, from Microsoft)
    - WebNext: My personal web page / blog with CMS on ASP.NET MVC 3
    - Windows 8 Mouse Unsticky: Makes Windows 8 top right/left hot corners unsticky.
    - WPForms: WPForms is a simple framework for form-driven Silverlight Windows Phone applications. Using the framework allows developers to easily create and display forms.

    Read the article

  • High Availability for IaaS, PaaS and SaaS in the Cloud

    - by BuckWoody
    Outages, natural disasters and unforeseen events have proved that even in a distributed architecture, you need to plan for High Availability (HA). In this entry I'll explain a few considerations for HA within Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). In a separate post I'll talk more about Disaster Recovery (DR), since each paradigm has a different way to handle that.

    Planning for HA in IaaS

    IaaS involves Virtual Machines, so in effect an HA strategy here takes on many of the same characteristics as it would on-premises. The primary difference is that the vendor controls the hardware, so you need to verify what they do for things like local redundancy from the hardware perspective. As far as what you can control and plan for, the primary factors fall into three areas: multiple instances, geographical dispersion and task-switching.

    In almost every cloud vendor I've studied, to ensure your application will be protected by any level of HA, you need to have at least two of the instances (VMs) running. This makes sense, but you might assume that the vendor just takes care of that for you - they don't. If a single VM goes down (for whatever reason) then access to it is lost. Depending on multiple factors, you might be able to recover the data, but you should assume that you can't. You should keep a sync to another location (perhaps the vendor's storage system in another geographic datacenter, or a local location) to ensure you can continue to serve your clients.

    You'll also need to host the same VMs in another geographical location. Everything from a vendor outage to a network path problem could prevent your users from reaching the system, so you need to have multiple locations to handle this. This means that you'll have to figure out how to manage state between the geos. If the system goes down in the middle of a transaction, you need to figure out what part of the process the system was in, and then re-create or transfer that state to the second set of systems. If you didn't write the software yourself, this is non-trivial. You'll also need a manual or automatic process to detect the failure and re-route the traffic to your secondary location. You could flip a DNS entry (if your application can tolerate that) or invoke another process to alias the first system to the second, such as load balancing and so on (a sketch of this detect-and-reroute idea follows at the end of this entry). There are many options, but all of them involve coding the state into the application layer. If you've simply moved a stateful application to VMs, you may not be able to easily implement an HA solution.

    Planning for HA in PaaS

    Implementing HA in PaaS is a bit simpler, since it's built on the concept of stateless application deployment. Once again, you need at least two copies of each element in the solution (web roles, worker roles, etc.) to remain available in a single datacenter. Also, you need to deploy the application again in a separate geo, but the advantage here is that you could work out a "shared storage" model such that state is auto-balanced across the world. In fact, you don't have to maintain a "DR" site; the alternate location can be live and serving clients, and only take on extra load if the other site is not available. In Windows Azure, you can use the Traffic Manager service to route the requests as a type of auto-balancer. Even with these benefits, I recommend a second backup of storage in another geographic location.
    Storage is inexpensive, and that second copy can be used not only for HA but also for DR.

    Planning for HA in SaaS

    In Software-as-a-Service (such as Office 365, or Hadoop in Windows Azure) you have far less control over the HA solution, although you still have the responsibility to ensure you have it. Since each SaaS is different, check with the vendor on the solution for HA, and make sure you understand what they do and what you are responsible for. They may have no HA for that solution, or pin it to a particular geo, or perhaps they have massive HA built in with automatic load balancing (which is often the case).

    All of these options (with the exception of SaaS) involve higher costs for the design. Do not sacrifice reliability for cost - that will always cost you more in the end. Build in the redundancy and HA at the very outset of the project; if you try to tack it on later in the process, the business will push back and potentially not implement HA.

    References: http://www.bing.com/search?q=windows+azure+High+Availability (each type of implementation is different, so I'm routing you to a search on the topic - look for the "Patterns and Practices" results for the area in Azure you're interested in)
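    To make the IaaS detect-and-reroute discussion concrete, here is a minimal C# sketch of the kind of failure-detection loop described above. The endpoint URLs and the UpdateDnsAlias helper are hypothetical placeholders; a real implementation would call your DNS provider's or traffic manager's API, and would add hysteresis so a brief blip does not cause a flip-flop.

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class FailoverMonitor
    {
        // Hypothetical endpoints: primary and secondary geo deployments.
        const string Primary   = "https://primary.example.com/health";
        const string Secondary = "https://secondary.example.com/health";

        static async Task Main()
        {
            using var http = new HttpClient { Timeout = TimeSpan.FromSeconds(5) };
            while (true)
            {
                bool primaryUp;
                try
                {
                    var response = await http.GetAsync(Primary);
                    primaryUp = response.IsSuccessStatusCode;
                }
                catch (HttpRequestException) { primaryUp = false; }
                catch (TaskCanceledException) { primaryUp = false; } // timeout

                // Route traffic to whichever geo is healthy.
                UpdateDnsAlias("www.example.com", primaryUp ? Primary : Secondary);

                await Task.Delay(TimeSpan.FromSeconds(30));
            }
        }

        // Placeholder: in production this would be a DNS or traffic-manager API call.
        static void UpdateDnsAlias(string alias, string target) =>
            Console.WriteLine($"{DateTime.UtcNow:o} point {alias} -> {target}");
    }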

    Read the article

  • Application Composer Series: Where and When to use Groovy

    - by Richard Bingham
    This brief post is really intended as more of a reference than an article. The table below highlights two things: firstly, where you can add your own custom logic via Groovy code (the "Groovy" flag on each feature), and secondly, when you might use each particular feature (the most common use case). Obviously this applies only where Application Composer exists, namely Fusion CRM and Oracle Sales Cloud, and is based on current (Release 8) functionality.

    - Field Triggers (Groovy: Y) - React to run-time data changes. Only fired when the field is changed and upon submit.

    - Object Triggers (Groovy: Y) - Extend the standard processing logic for an object, based on record creation, updates and deletes. There is a split between these firing events, with some related to UI/ADF actions and others originating in the database.
      UI trigger points:
      - After Create - fires when a new object record is created. Commonly used to set default values for fields.
      - Before Modify - fires when the end user tries to modify a field value. Could be used for generic warnings or extra security logic.
      - Before Invalidate - fires on the parent object when one of its child object records is created, updated, or deleted. For building in relationship logic.
      - Before Remove - fires when an attempt is made to delete an object record. Can be used to create conditions that prevent deletes.
      Database trigger points:
      - Before Insert in Database - fires before a new object is inserted into the database. Can be used to ensure a dependent record exists or to check for duplicates.
      - After Insert in Database - fires after a new object is inserted into the database. Could be used to create a complementary record.
      - Before Update in Database - fires before an existing object is modified in the database. Could be used to check dependent record values.
      - After Update in Database - fires after an existing object is modified in the database. Could be used to update a complementary record.
      - Before Delete in Database - fires before an existing object is deleted from the database. Could be used to check dependent record values.
      - After Delete in Database - fires after an existing object is deleted from the database. Could be used to remove dependent records.
      - After Commit in Database - fires after the change pending for the current object (insert, update, delete) is made permanent in the current transaction. Could be used when committed data that has passed all validation is required.
      - After Changes Posted to Database - fires after all changes have been posted to the database, but before they are permanently committed. Could be used to make additional changes that will be saved as part of the current transaction.

    - Field Validation (Groovy: Y) - Displays a user-entered error message based on Groovy logic validating the field value. The message is shown only when the validation logic returns false, and the logic is triggered only when tabbing out of the field on the user interface.

    - Object Validation (Groovy: Y) - Commonly used where validation is needed across multiple related fields on the object. Triggered on the submit UI action.

    - Object Workflows (Groovy: Y) - All object workflows are fired upon either record creation or update, along with the option of adding a custom Groovy firing condition. The available workflow actions are:
      - Field Updates (Groovy: Y) - change another field when a specified one changes. Intended as an easy way to set different run-time values (e.g. pick values for LOVs); the value field permits Groovy logic entry.
      - E-Mail Notification (Groovy: N) - sends an email notification to specified users/roles. Templates support run-time value tokens and rich text.
      - Task Creation (Groovy: N) - for adding standard tasks for use in the worklist functionality.
      - Outbound Message (Groovy: N) - creates and sends an XML payload of the related object SDO to a specified endpoint.
      - Business Process Flow (Groovy: N) - intended for approval using the seeded process; however, it can also trigger custom BPMN flows.

    - Global Functions (Groovy: Y) - Utility functions that can be called from any Groovy code in Application Composer (across applications).

    - Object Functions (Groovy: Y) - Utility functions that are local to the parent object. Usually triggered from within 'Buttons and Actions' definitions in Application Composer, although they can be called from other code for that object (e.g. from a trigger).

    - Add Custom Fields - When adding custom fields there are a few places you can include Groovy logic (all Groovy: Y):
      - Default Value - to add logic for setting the default value when new records are entered.
      - Conditionally Updateable - to add logic that sets the field to read-only or not.
      - Conditionally Required - to add logic that sets the field to required or not.
      - Formula Field - used to provide a new aggregate field that is entirely based on Groovy logic and other field values.

    - Simplified UI Layouts - Advanced Expressions (Groovy: Y) - Used for creating dynamic layouts for simplified UI pages where fields and regions show/hide based on run-time context values and logic. Also includes support for the depends-on feature as a trigger.

    Related References: This Blog: Application Composer Series; Extending Sales Guide: Using Groovy Scripts; Groovy Scripting Reference Guide

    Read the article

  • How to break out of if statement

    - by TheBroodian
    I'm not sure if the title is exactly an accurate representation of what I'm actually trying to ask, but that was the best I could think of. I am experiencing an issue with my character class. I have developed a system so that he can perform chain attacks, and something that was important to me was that 1) button presses during the process of an attack wouldn't interrupt the character, and 2) at the same time, button presses should be stored so that the player can smoothly queue up chain attacks in the middle of one, so that gameplay doesn't feel rigid or unresponsive.

    This all begins when the player presses the punch button. Upon pressing the punch button, the game checks the state of the dpad at the moment of the button press, and then translates the resulting combined buttons into an int which I use as an enumerator relating to a punch method for the character. The enumerator is placed into a List so that the next time the character's Update() method is called, it will execute the next punch in the list. It only executes the next punch if my character is flagged with acceptInput as true. All attacks flag acceptInput as false, to prevent the interruption of attacks, and then at the end of an attack, acceptInput is set back to true. While accepting input, all other actions are polled for, i.e. jumping, running, etc.

    At runtime, if I attack and then queue up another attack behind it (by pressing forward+punch), I can see the second attack visibly execute, which should flag acceptInput as false, yet it gets interrupted and my character will stop punching and start running if I am still holding down the dpad. Included is some code for context. This is the input region for my character:

    //Placed this outside the if (acceptInput) tree because I want it
    //to be taken into account whether we are accepting input or not.
    //This will queue up attacks, which will only be executed if we are accepting input.
    //This creates a desired effect that helps control the character in a
    //smoother fashion for the player.
    if (Input.justPressed(buttonManager.Punch))
    {
        int dpadPressed = Input.DpadState(0);
        if (attackBuffer.Count() < 1)
        {
            attackBuffer.Add(CheckPunch(dpadPressed));
        }
        else
        {
            attackBuffer.Clear();
            attackBuffer.Add(CheckPunch(dpadPressed));
        }
    }

    if (acceptInput)
    {
        if (attackBuffer.Count() > 0)
        {
            ExecutePunch(attackBuffer[0]);
            attackBuffer.RemoveAt(0);
        }

        //If D-Pad left is being held down.
        if (Input.DpadDirectionHeld(0, buttonManager.Left))
        {
            flipped = false;
            if (onGround)
            {
                newAnimation = "run";
            }
            velocity = new Vector2(velocity.X - acceleration, velocity.Y);
            if (walking == true && velocity.X <= -walkSpeed)
            {
                velocity.X = -walkSpeed;
            }
            else if (walking == false && velocity.X <= -maxSpeed)
            {
                velocity.X = -maxSpeed;
            }
        }

        //If D-Pad right is being held down.
        if (Input.DpadDirectionHeld(0, buttonManager.Right))
        {
            flipped = true;
            if (onGround)
            {
                newAnimation = "run";
            }
            velocity = new Vector2(velocity.X + acceleration, velocity.Y);
            if (walking == true && velocity.X >= walkSpeed)
            {
                velocity.X = walkSpeed;
            }
            else if (walking == false && velocity.X >= maxSpeed)
            {
                velocity.X = maxSpeed;
            }
        }

        //If jump/accept button is pressed.
        if (Input.justPressed(buttonManager.JumpAccept))
        {
            if (onGround)
            {
                Jump();
            }
        }

        //If toggle element next button is pressed.
        if (Input.justPressed(buttonManager.ToggleElementNext))
        {
            if (elements.Count != 0)
            {
                elementInUse++;
                if (elementInUse >= elements.Count)
                {
                    elementInUse = 0;
                }
            }
        }

        //If toggle element last button is pressed.
        if (Input.justPressed(buttonManager.ToggleElementLast))
        {
            if (elements.Count != 0)
            {
                elementInUse--;
                if (elementInUse < 0)
                {
                    elementInUse = Convert.ToSByte(elements.Count() - 1);
                }
            }
        }

        //If character is in the process of jumping.
        if (jumping == true)
        {
            if (Input.heldDown(buttonManager.JumpAccept))
            {
                velocity.Y -= fallSpeed.Y;
                maxJumpTime -= elapsed;
            }
            if (Input.justReleased(buttonManager.JumpAccept) || maxJumpTime <= 0)
            {
                jumping = false;
                maxJumpTime = 0;
            }
        }

        //Won't execute abilities if input isn't being accepted.
        foreach (PlayerAbility ability in playerAbilities)
        {
            if (buffer.Matches(ability))
            {
                if (onGround)
                {
                    ability.Activate();
                }
                if (!onGround && ability.UsableInAir)
                {
                    ability.Activate();
                }
                else if (!onGround && !ability.UsableInAir)
                {
                    buffer.Clear();
                }
            }
        }
    }

    When ExecutePunch(int) is called, it will invoke one of the following methods:

    private void NeutralPunch1() //0
    {
        acceptInput = false;
        busy = true;
        newAnimation = "punch1";
        numberOfAttacks++;
        timeSinceLastAttack = 0;
    }

    private void ForwardPunch2(bool toLeft) //true == 7, false == 4
    {
        forwardPunch2Timer = 0f;
        acceptInput = false;
        busy = true;
        newAnimation = "punch2begin";
        numberOfAttacks++;
        timeSinceLastAttack = 0;
        if (toLeft)
        {
            velocity.X -= 800;
        }
        if (!toLeft)
        {
            velocity.X += 800;
        }
    }

    I assume the attack is being interrupted due to the fact that ExecutePunch() is in the same if statement as running, but I haven't been able to find a suitable way to stop this happening. Thank you ahead of time for reading this; I apologize for it having become so long-winded.
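    A common fix for this class of problem is to gate movement on a separate "busy" flag rather than placing it under the same branch that consumes the attack buffer, so a punch that sets acceptInput = false in the current frame cannot be followed by movement code in that same frame. The sketch below shows one way to restructure the update loop; names such as busy, attackBuffer and ExecutePunch mirror the question's code, but the surrounding types are simplified stand-ins rather than the asker's actual classes.

    using System.Collections.Generic;

    class PlayerInputExample
    {
        readonly Queue<int> attackBuffer = new Queue<int>();
        bool acceptInput = true;
        bool busy;

        // Called once per frame from Update().
        public void HandleInput(bool punchPressed, int dpadState, bool dpadHeld)
        {
            // Always record presses so chains can be queued mid-attack.
            if (punchPressed)
            {
                attackBuffer.Clear();            // keep at most one queued attack
                attackBuffer.Enqueue(dpadState);
            }

            // Consume a queued attack first; if one fires, it marks the
            // character busy and movement is skipped this frame.
            if (acceptInput && attackBuffer.Count > 0)
            {
                ExecutePunch(attackBuffer.Dequeue());
            }

            // Movement is gated on busy, not just acceptInput, so the punch
            // that started this frame cannot be interrupted by a held D-pad.
            if (!busy && dpadHeld)
            {
                Run();
            }
        }

        void ExecutePunch(int direction)
        {
            acceptInput = false;
            busy = true;
            // ...start the punch animation; an animation-finished callback
            // would set busy = false and acceptInput = true again.
        }

        void Run() { /* movement code */ }
    }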

    Read the article

  • Pirates, Treasure Chests and Architectural Mapping

    Pirate 1: Why do pirates create treasure maps?
    Pirate 2: I do not know.
    Pirate 1: So they can find their gold.

    Yes, that was a bad joke, but it does illustrate a point. Pirates are known for drawing treasure maps to their most prized possessions. These documents detail the decisions pirates made in order to hide and find their chests of gold. The map allows them to retrace the steps they took originally to hide their treasure so that they may return to it. As software engineers, programmers, and architects, we need to treat software implementations much like our treasure chest. Why is software like a treasure chest?

    - It cost money, time, and resources to develop (usually)
    - It can make or save money, time, and resources (hopefully)

    If we operate under the assumption that software is like a treasure chest, then wouldn't it make sense to document the steps, rationale, concerns, and decisions about how it was designed? Pirates are notorious for documenting where they hide their treasure. Shouldn't we, as creators of software, do the same? Documenting our design decisions and the rationale behind them helps others understand and maintain implemented systems. This can only be done if the design decisions are correctly mapped to their corresponding implementation. That mapping allows architectural decisions to be traced from the conceptual model, through the architectural design, and finally to the implementation. Mapping gives software professionals a method to trace the reasons why specific areas of code were developed versus other options. Just like the pirates, we need to be able to trace our steps from the start of a project to its implementation, so that we will understand why specific choices were made.

    Traceability of a software implementation that actually maps back to its originating design decisions is invaluable for ensuring that architectural drift and erosion do not take place. Drift and erosion are prevented by allowing others to understand the rationale for why an implementation was built in a specific manner or methodology.

    The process of mapping distinct design concerns/decisions to the location of their implementation is called traceability. In this context, traceability is defined as a method for connecting distinctive software artifacts. This process allows architectural design models and decisions to be directly connected with their physical implementation. Mapping architectural design concerns to a software implementation can be very complex; however, most design decisions can be placed in a few generalized categories.

    Commonly Mapped Design Decisions
    - Design Rationale
    - Components and Connectors
    - Interfaces
    - Behaviors/Properties

    Design rationale is one of the hardest categories to map directly to an implementation. Typically this rationale is mapped or documented in code via comments. These comments consist of general design decisions and reasoning, because they do not directly refer to a specific part of an application; they typically focus more on the higher-level concerns.

    Components and connectors can be directly mapped to architectural concerns. Typically, concerns subdivide an application into distinct functional areas. These functional areas can then map directly back to their originating concerns.

    Interfaces can be mapped back to design concerns in one of two ways. Interfaces that pertain to specific function definitions can be directly mapped back to their originating concern(s). However, more complicated interfaces require additional analysis to ensure that the proper mappings are created.

    Depending on the complexity, some behaviors/properties can be translated directly into a generic implementation structure that is ready for business logic. In addition, some behaviors can be translated directly into an actual implementation, depending on the complexity and the architectural tools used.

    Mapping design concerns to an implementation is a lot of work to maintain, but it is doable. In order to ensure that concerns are mapped correctly, and that an implementation correctly reflects its design concerns, one of two standard approaches is usually used.

    All Changes Come From Architecture
    By forcing all application changes to come through the architectural model prior to implementation, the existing mappings can be used to locate where in the implementation the changes need to occur.

    Allow Changes From Implementation Or Architecture
    By allowing changes to come from the implementation and/or the architecture, the other side must be kept in sync. This methodology is more complex compared to the previous approach. One reason to justify the added complexity is that this approach tends to detect and prevent architectural drift and erosion. Additionally, this approach is usually maintained via software because of the complexity.

    Reference: Taylor, R. N., Medvidovic, N., & Dashofy, E. M. (2009). Software Architecture: Foundations, Theory, and Practice. Hoboken, NJ: John Wiley & Sons.
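    One lightweight way to realize the "components and connectors" and "design rationale" mappings in code is a custom attribute that tags implementation elements with the identifier of the design decision they realize, so traceability tooling (or a simple reflection scan) can map the implementation back to the design model. This is an illustrative sketch only; the attribute name and the decision IDs are invented for the example, not a standard library.

    using System;

    // Links a piece of code back to the architectural decision it implements.
    [AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = true)]
    sealed class DesignDecisionAttribute : Attribute
    {
        public string Id { get; }
        public string Rationale { get; }

        public DesignDecisionAttribute(string id, string rationale)
        {
            Id = id;
            Rationale = rationale;
        }
    }

    [DesignDecision("AD-017", "Order intake isolated behind a queue connector to absorb burst load")]
    class OrderIntakeService
    {
        [DesignDecision("AD-021", "Idempotent handler so the connector may redeliver messages")]
        public void Handle(string message) { /* ... */ }
    }

    A build step or audit tool could then enumerate these attributes and diff them against the architectural model, flagging code with no originating decision (drift) and decisions with no implementing code (erosion).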

    Read the article

  • SOA, Governance, and Drugs

    Why is IT governance important in service-oriented architecture (SOA)? IT governance provides a framework for making appropriate decisions based on company guidelines and accepted standards. This framework also outlines each stakeholder's responsibilities and authority when making important architectural or design decisions. Furthermore, this framework of governance defines parameters and constraints that are used to give context and perspective when making decisions. The use of governance as it applies to SOA ensures that specific design principles and patterns are used when developing and maintaining services. When governance is consistently applied to systems, the following benefits are achieved, according to Anne Thomas Manes (2010):

    - Governance makes sure that services conform to standard interface patterns and common data modeling practices, and promotes the incorporation of existing system functionality by building on top of other available services across a system.
    - Governance defines development standards based on proven design principles and patterns that promote reuse and composition.
    - Governance provides developers a set of proven design principles, standards and practices that promote the reduction of system-wide component dependencies. By following these guidelines, individual components will be easier to maintain.

    For me personally, I am a fan of IT governance, and feel that it is a valuable part of any corporate IT department. However, how it is implemented can really affect the value of using IT governance. Companies need to find a way to ensure that governance does not become extreme in its policies and procedures. I know that I, personally, would really dislike working under a completely totalitarian or laissez-faire version of governance. Developers need to be able to be creative in their designs, and too much governance can really impede the design process and prevent the most optimal design from being developed. On the other hand, with no governance enforced, no standards will be followed and accepted design patterns will be ignored. I have personally had to spend a lot of time working in this exact scenario, and I have found that the concept of code reuse and composition is almost nonexistent. Because of this, too much time and money is wasted on redeveloping existing aspects of an application that already exist within the system as a whole.

    I think moving forward we will see a staggered form of IT governance, regardless of whether it is for SOA or IT in general. Depending on the size of a company and the size of its IT department, I can see IT governance as a layered approach, in which the top layer is defined by enterprise architects who focus on abstract concepts pertaining to high-level design, general guidelines, acceptable best practices, and recommended design patterns. The next layer is defined by solution architects or department managers who further expand on the abstracted guidelines defined by the enterprise architects. This layer contains further definitions as to when various design patterns, coding standards, and best practices are to be applied, based on the context of the solutions being developed by the department. The final layer is defined by the system designer or a solutions architect assigned to a project, in that they define which design patterns will be used in a solution and its naming conventions, as well as outline how the system will function, based on the best practices defined by the previous layers.

    This layered approach allows IT departments to be flexible, in that system designers have creative leeway in designing solutions to meet the needs of the business, but they must operate within the confines of the abstracted IT governance guidelines. A real-world example of this can be seen in the United States as it pertains to governance of the people, in that the US government defines rules and regulations in the abstract, and the state governments take these guidelines and apply them based on the will of the people in each individual state. Furthermore, the county or city governments are the ones that actually enforce these rules, based on how they are interpreted by the local community. To further define my example: the United States government determines that marijuana is illegal. Each individual state has the option to interpret this regulation as it wishes, in that the state of Florida determines that all uses of the drug are illegal, but the state of California legally allows the use of marijuana for medicinal purposes only. Based on these accepted practices, each local government enforces these rules, in that a police officer will arrest anyone in the state of Florida for having this drug on them if they walk down the street, but in California, if a person has a medical prescription for the drug, they will not get arrested.

    Reference: Thomas Manes, Anne (2010). Understanding SOA Governance. http://www.soamag.com/I40/0610-2.php

    Read the article

  • Determining the angle to fire a shot when the target and shooter move, and the bullet starts with the shooter's velocity added in

    - by Azaral
    I saw this question: Predicting enemy position in order to have an object lead its target and followed the link in the answer to stack overflow. In the stack overflow page I used the 2nd answer, the one that is a large mathematical derivation. My situation is a little different though. My first question though is will the answer provided in the stack overflow page even work to begin with, assuming the original circumstances of moving target and stationary shooter. My situation is a little different than that situation. My target moves, the shooter moves, and the bullets from the shooter start off with the velocities in x and y added to the bullets' x and y velocities. If you are sliding to the right, the bullets will remain in front of you as you move so as long as your velocity remains constant. What I'm trying to do is to get the enemy to be able to determine where they need to shoot in order to hit the player. Unless the player and enemy is stationary, the velocity from the ship adding to the velocity of the bullets will cause a miss. I'd rather like to prevent that. I used the formula in the stack overflow answer and did what I thought were the appropriate adjustments. I've been banging at this for the last four hours and I just can't make it click. It is probably something really simple and boneheaded that I am missing (that seems to be a lot of my problems lately). Here is the solution presented from the stack overflow answer: It boils down to solving a quadratic equation of the form: a * sqr(x) + b * x + c == 0 Note that by sqr I mean square, as opposed to square root. Use the following values: a := sqr(target.velocityX) + sqr(target.velocityY) - sqr(projectile_speed) b := 2 * (target.velocityX * (target.startX - cannon.X) + target.velocityY * (target.startY - cannon.Y)) c := sqr(target.startX - cannon.X) + sqr(target.startY - cannon.Y) Now we can look at the discriminant to determine if we have a possible solution. disc := sqr(b) - 4 * a * c If the discriminant is less than 0, forget about hitting your target -- your projectile can never get there in time. Otherwise, look at two candidate solutions: t1 := (-b + sqrt(disc)) / (2 * a) t2 := (-b - sqrt(disc)) / (2 * a) Note that if disc == 0 then t1 and t2 are equal. If there are no other considerations such as intervening obstacles, simply choose the smaller positive value. (Negative t values would require firing backward in time to use!) Substitute the chosen t value back into the target's position equations to get the coordinates of the leading point you should be aiming at: aim.X := t * target.velocityX + target.startX aim.Y := t * target.velocityY + target.startY Here is my code, after being corrected by Sam Hocevar (thank you again for your help!). It still doesn't work. For some reason it never enters the section of code inside the if(disc = 0) (obviously because it is always less than zero but...). However, if I plug the numbers from my game log on the enemy and player positions and velocities it outputs a valid firing solution. I have looked at the code side by side a couple of times now and I can't find any differences. There has got to be something simple I'm missing here. If someone else could look at this code and determine what is going on here I'd appreciate it. I know it's not going through that section because if it were, shouldShoot would become true and the enemy would be blasting away at the player. 
This section calls the function in question, CalculateShootHeading():

if(shouldMove)
{
    UseEngines();
}
x += xVelocity;
y += yVelocity;
CalculateShootHeading();
if(shouldShoot)
{
    ShootWeapons();
}
UpdateWeapons();

This is CalculateShootHeading(). This is inside the enemy class so x and y are the enemy's x and y and the same with velocity. One output from my game log gives Player X = 2108, Player Y = -180.956, Player X velocity = 10.9949, Player Y Velocity = -6.26017, Enemy X = 1988.31, Enemy Y = -339.051, Enemy X velocity = 1.81666, Enemy Y velocity = -9.67762, 0 enemy projectiles. The output from the console tester is Bullet position = 2210.49, -239.313 and Player Position = 2210.49, -239.313. This doesn't make any sense. The only thing that could be different is the code or the input into my function in the game and I've checked that and I don't think that it is wrong as it's updated before this and never changed.

float const bulletSpeed = 30.f;
float const dx = playerX - x;
float const dy = playerY - y;
float const vx = playerXVelocity - xVelocity;
float const vy = playerYVelocity - yVelocity;
float const a = vx * vx + vy * vy - bulletSpeed * bulletSpeed;
float const b = 2.f * (vx * dx + vy * dy);
float const c = dx * dx + dy * dy;
float const disc = b * b - 4.f * a * c;
shouldShoot = false;

if (disc >= 0.f)
{
    float t0 = (-b - std::sqrt(disc)) / (2.f * a);
    float t1 = (-b + std::sqrt(disc)) / (2.f * a);
    if (t0 < 0.f || (t1 < t0 && t1 >= 0.f))
    {
        t0 = t1;
    }
    if (t0 >= 0.f)
    {
        float shootx = vx + dx / t0;
        float shooty = vy + dy / t0;
        heading = std::atan2(shooty, shootx) * RAD2DEGREE;
    }
    shouldShoot = true;
}
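As a sanity check outside the game loop, here is a minimal, self-contained C++ harness (an illustrative sketch, not the asker's project code) that plugs the logged values into the same relative-velocity formulation; RAD2DEGREE is assumed to be 180/pi, and all other names are local to this sketch:

#include <cmath>
#include <cstdio>

int main()
{
    float const RAD2DEGREE = 180.0f / 3.14159265f;
    float const bulletSpeed = 30.f;

    // Values taken from the game log quoted above.
    float const playerX = 2108.f,     playerY = -180.956f;   // target
    float const playerVX = 10.9949f,  playerVY = -6.26017f;
    float const x = 1988.31f,         y = -339.051f;         // shooter
    float const xVelocity = 1.81666f, yVelocity = -9.67762f;

    // Relative position and velocity of the target as seen by the shooter.
    float const dx = playerX - x;
    float const dy = playerY - y;
    float const vx = playerVX - xVelocity;
    float const vy = playerVY - yVelocity;

    float const a = vx * vx + vy * vy - bulletSpeed * bulletSpeed;
    float const b = 2.f * (vx * dx + vy * dy);
    float const c = dx * dx + dy * dy;
    float const disc = b * b - 4.f * a * c;
    std::printf("disc = %g\n", disc);

    if (disc >= 0.f)
    {
        // Prefer the smaller non-negative root.
        float t0 = (-b - std::sqrt(disc)) / (2.f * a);
        float t1 = (-b + std::sqrt(disc)) / (2.f * a);
        if (t0 < 0.f || (t1 < t0 && t1 >= 0.f)) t0 = t1;
        if (t0 >= 0.f)
        {
            float const shootx = vx + dx / t0;
            float const shooty = vy + dy / t0;
            std::printf("t = %g, heading = %g degrees\n",
                        t0, std::atan2(shooty, shootx) * RAD2DEGREE);
        }
    }
    return 0;
}

With these inputs the discriminant comes out positive and a valid intercept time is printed, which matches the asker's console-tester result and suggests the math is fine and the problem lies in the values the game feeds into the function.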

    Read the article

  • What's New in Oracle VM VirtualBox 4.2?

    - by Fat Bloke
    A year is a long time in the IT industry. Since the last VirtualBox feature release, which was a little over a year ago, we've seen new releases of cool new operating systems, such as Windows 8, ChromeOS, and Mountain Lion; a myriad of new Linux releases, from big enterprise-class distributions like Oracle Linux 6.3 to accessible desktop distros like Ubuntu 12.04 and Fedora 17; and the spec of a typical PC or laptop double in power. All of these events have influenced our new VirtualBox version, which we're releasing today. Here's how...

Powerful hosts

One of the trends we've seen is that as the average host platform becomes more powerful, our users are consistently running more and more VMs. Some of our users have large libraries of VMs of various vintages, whilst others have groups of VMs that are run together as an assembly of the various tiers in a multi-tiered software solution, for example, a database tier, middleware tier, and front-ends. So we're pleased to unveil a more powerful VirtualBox Manager to address the needs of these users:

VM Groups

Groups allow you to organize your VM library in a sensible way, e.g. by platform type, by project, by version, by whatever. To create groups you can drag one VM onto another, or select one or more VMs and choose Machine...Group from the menu bar. You can expand and collapse groups to save screen real estate, and you can Enter and Leave a group (think iPad navigation here) by using the right and left arrow keys when groups are selected. But groups are more than passive folders, because you can now also perform operations on groups, rather than on all the individual VMs. So if you have a multi-tiered solution, you can start the whole stack up with just one click.

Autostart

Many VirtualBox users run dedicated services in their VMs, for example, running a wiki. With these types of VM workloads, you really want the VM to start up when the host machine boots up. So with 4.2 we've introduced a cross-platform auto-start mechanism to allow you to treat VMs as host services.

Headless VM Launching

With VMs such as web servers, wikis, and other types of server-class workloads, the console of the VM is pretty much redundant. For some time now VirtualBox has offered a separate launch mechanism for these VMs, namely the command-line VBoxHeadless or VBoxManage startvm ... --type headless commands. But with 4.2 we also allow you to launch headless VMs from the Manager. Simply hold down Shift when launching the VM from the Manager. It's that easy. But how do you stop a headless VM? Well, with 4.2 we allow you to Close the VM from the Manager. (BTW it's best to use the ACPI Shutdown method, which allows the guest VM to close down gracefully.)

Easy VM Creation

For our expert users, the New VM Wizard was a little tiresome, so now there's a faster 2-click VM creation mode. Just hide the description when creating a new VM.

Powerful VMs

As the hosts have become more powerful, so have the guests running inside them. Here are some of the 4.2 features to accommodate them:

Virtual Network Interface Cards

With 4.2, it's now possible to create VMs with up to 36 NICs when using the ICH9 chipset emulation. But with great power comes great responsibility (didn't Obi-Wan say something similar?), and so we have also introduced bandwidth limiting to prevent a rogue VM stealing the whole pipe.

VLAN tagging

Some of our users leverage VLANs extensively, so we've enhanced the E1000 NICs to support this.
Processor Performance

If you are running a CPU which supports Nested Paging (aka EPT in the Intel world), such as most of the Core i5 and i7 CPUs, or are running an AMD Bulldozer or later, you should see some performance improvements from our work with these processors. And while we're talking processors, we've added support for some of the more modern VIA CPUs too.

Powerful Automation

Because VirtualBox runs atop a full-blown operating system, it makes sense to leverage the capabilities of the host to run scripts that can drive the guest VMs. Guest Automation was introduced in a prior release, but with 4.2 we've revamped the APIs to allow a richer and more powerful set of operations to be executed by the guest. Check out the IGuest APIs in the VirtualBox Programming Guide and Reference (SDK).

Powerful Platforms

All the hardcore engineering that has gone into 4.2 has been done for a purpose, and that is to deliver a fast and powerful engine that can run almost any x86 OS because of the integrity of the virtualization. So we're pleased to add support for these platforms:

Mac OS X "Mountain Lion"
Windows 8
Windows Server 2012
Ubuntu 12.04 ("Precise Pangolin")
Fedora 17
Oracle Linux 6.3

We don't have time to go into the myriad of smaller improvements, such as support for burning audio CDs from a guest, bi-directional clipboard control, drag-and-drop of files into Linux guests, etc., so we'll leave those as an exercise for the user as soon as you've downloaded the release from the Oracle or community site and taken a peek at the User Guide. So all in all, a pretty solid release, one that we hope you'll enjoy discovering. - FB

    Read the article

  • C++ strongly typed typedef

    - by Kian
    I've been trying to think of a way of declaring strongly typed typedefs, to catch a certain class of bugs in the compilation stage. It's often the case that I'll typedef an int into several types of ids, or a vector to position or velocity:

typedef int EntityID;
typedef int ModelID;
typedef Vector3 Position;
typedef Vector3 Velocity;

This can make the intent of code more clear, but after a long night of coding one might make silly mistakes like comparing different kinds of ids, or adding a position to a velocity perhaps.

EntityID eID;
ModelID mID;
if ( eID == mID ) // <- Compiler sees nothing wrong
{ /*bug*/ }

Position p;
Velocity v;
Position newP = p + v; // bug, meant p + v*s but compiler sees nothing wrong

Unfortunately, suggestions I've found for strongly typed typedefs include using boost, which at least for me isn't a possibility (I do have C++11 at least). So after a bit of thinking, I came upon this idea, and wanted to run it by someone. First, you declare the base type as a template. The template parameter isn't used for anything in the definition, however:

template < typename T >
class IDType
{
    unsigned int m_id;
public:
    IDType( unsigned int const& i_id ): m_id {i_id} {};
    friend bool operator==<T>( IDType<T> const& i_lhs, IDType<T> const& i_rhs );
};

Friend functions actually need to be forward declared before the class definition, which requires a forward declaration of the template class. We then define all the members for the base type, just remembering that it's a template class. Finally, when we want to use it, we typedef it as:

class EntityT;
typedef IDType<EntityT> EntityID;
class ModelT;
typedef IDType<ModelT> ModelID;

The types are now entirely separate. Functions that take an EntityID will throw a compiler error if you try to feed them a ModelID instead, for example. Aside from having to declare the base types as templates, with the issues that entails, it's also fairly compact. I was hoping someone had comments or critiques about this idea. One issue that came to mind while writing this, in the case of positions and velocities for example, is that I can't convert between types as freely as before. Where before, multiplying a vector by a scalar would give another vector, so I could do:

typedef float Time;
typedef Vector3 Position;
typedef Vector3 Velocity;

Time t = 1.0f;
Position p = { 0.0f };
Velocity v = { 1.0f, 0.0f, 0.0f };
Position newP = p + v*t;

With my strongly typed typedef I'd have to tell the compiler that multiplying a Velocity by a Time results in a Position:

class TimeT;
typedef Float<TimeT> Time;
class PositionT;
typedef Vector3<PositionT> Position;
class VelocityT;
typedef Vector3<VelocityT> Velocity;

Time t = 1.0f;
Position p = { 0.0f };
Velocity v = { 1.0f, 0.0f, 0.0f };
Position newP = p + v*t; // Compiler error

To solve this, I think I'd have to specialize every conversion explicitly, which can be kind of a bother. On the other hand, this limitation can help prevent other kinds of errors (say, multiplying a Velocity by a Distance, perhaps, which wouldn't make sense in this domain). So I'm torn, and wondering if people have any opinions on my original issue, or my approach to solving it.
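To make the trade-off at the end concrete, here is a minimal sketch of what specializing one allowed conversion looks like. It is illustrative only: Vec3 and Scalar below are stand-ins for the question's tagged Vector3 and Float templates, and the tag types carry no data.

#include <cstdio>

template <typename Tag>
struct Vec3 { float x, y, z; };      // tagged vector quantity

template <typename Tag>
struct Scalar { float value; };      // tagged scalar quantity

struct PositionT; struct VelocityT; struct DisplacementT; struct TimeT;
using Position     = Vec3<PositionT>;
using Velocity     = Vec3<VelocityT>;
using Displacement = Vec3<DisplacementT>;
using Time         = Scalar<TimeT>;

// Each physically meaningful mixed-type operation is opted in explicitly.
Displacement operator*(Velocity v, Time t)
{
    return { v.x * t.value, v.y * t.value, v.z * t.value };
}

Position operator+(Position p, Displacement d)
{
    return { p.x + d.x, p.y + d.y, p.z + d.z };
}

int main()
{
    Position p{ 0.f, 0.f, 0.f };
    Velocity v{ 1.f, 0.f, 0.f };
    Time     t{ 1.f };

    Position newP = p + v * t;   // compiles: the rule was spelled out
    // Position bad = p + v;     // compile error: no such operator
    std::printf("%g %g %g\n", newP.x, newP.y, newP.z);
}

Every operator that has to be written this way is indeed a bother, but each one also encodes a rule of the domain, which is exactly what rules out accidents like multiplying a Velocity by a Distance.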

    Read the article

  • mdadm: Win7-install created a boot partition on one of my RAID6 drives. How to rebuild?

    - by EXIT_FAILURE
    My problem happened when I attempted to install Windows 7 on its own SSD. The Linux OS I used, which has knowledge of the software RAID system, is on an SSD that I disconnected prior to the install. This was so that Windows (or I) wouldn't inadvertently mess it up. However, and in retrospect foolishly, I left the RAID disks connected, thinking that Windows wouldn't be so ridiculous as to mess with an HDD that it sees as just unallocated space. Boy was I wrong! After copying over the installation files to the SSD (as expected and desired), it also created an ntfs partition on one of the RAID disks. Both unexpected and totally undesired! I changed out the SSDs again, and booted up in Linux. mdadm didn't seem to have any problem assembling the array as before, but if I tried to mount the array, I got the error message:

mount: wrong fs type, bad option, bad superblock on /dev/md0, missing codepage or helper program, or other error
In some cases useful info is found in syslog - try dmesg | tail or so

dmesg:

EXT4-fs (md0): ext4_check_descriptors: Block bitmap for group 0 not in group (block 1318081259)!
EXT4-fs (md0): group descriptors corrupted!

I then used qparted to delete the newly created ntfs partition on /dev/sdd so that it matched the other three, /dev/sd{b,c,e}, and requested a resync of my array with:

echo repair > /sys/block/md0/md/sync_action

This took around 4 hours, and upon completion, dmesg reports:

md: md0: requested-resync done.

A bit brief after a 4-hour task, though I'm unsure as to where other log files exist (I also seem to have messed up my sendmail configuration). In any case: no change reported according to mdadm, everything checks out. mdadm -D /dev/md0 still reports:

Version : 1.2
Creation Time : Wed May 23 22:18:45 2012
Raid Level : raid6
Array Size : 3907026848 (3726.03 GiB 4000.80 GB)
Used Dev Size : 1953513424 (1863.02 GiB 2000.40 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Mon May 26 12:41:58 2014
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 4K
Name : okamilinkun:0
UUID : 0c97ebf3:098864d8:126f44e3:e4337102
Events : 423

Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 32 1 active sync /dev/sdc
2 8 48 2 active sync /dev/sdd
3 8 64 3 active sync /dev/sde

Trying to mount it still reports:

mount: wrong fs type, bad option, bad superblock on /dev/md0, missing codepage or helper program, or other error
In some cases useful info is found in syslog - try dmesg | tail or so

and dmesg:

EXT4-fs (md0): ext4_check_descriptors: Block bitmap for group 0 not in group (block 1318081259)!
EXT4-fs (md0): group descriptors corrupted!

I'm a bit unsure where to proceed from here, and trying stuff "to see if it works" is a bit too risky for me. This is what I suggest I should attempt to do: tell mdadm that /dev/sdd (the one that Windows wrote into) isn't reliable anymore, pretend it is newly re-introduced to the array, and reconstruct its content based on the other three drives. I also could be totally wrong in my assumptions, and the creation of the ntfs partition on /dev/sdd and its subsequent deletion may have changed something that cannot be fixed this way. My question: help, what should I do? If I should do what I suggested, how do I do that?
From reading documentation, etc., I would think maybe:

mdadm --manage /dev/md0 --set-faulty /dev/sdd
mdadm --manage /dev/md0 --remove /dev/sdd
mdadm --manage /dev/md0 --re-add /dev/sdd

However, the documentation examples suggest /dev/sdd1, which seems strange to me, as there is no partition there as far as Linux is concerned, just unallocated space. Maybe these commands won't work without one. Maybe it makes sense to mirror the partition table of one of the other RAID devices that weren't touched, before --re-add. Something like:

sfdisk -d /dev/sdb | sfdisk /dev/sdd

Bonus question: Why would the Windows 7 installation do something so st...potentially dangerous?

Update

I went ahead and marked /dev/sdd as faulty, and removed it (not physically) from the array:

# mdadm --manage /dev/md0 --set-faulty /dev/sdd
# mdadm --manage /dev/md0 --remove /dev/sdd

However, attempting to --re-add was disallowed:

# mdadm --manage /dev/md0 --re-add /dev/sdd
mdadm: --re-add for /dev/sdd to /dev/md0 is not possible

--add, however, was fine:

# mdadm --manage /dev/md0 --add /dev/sdd

mdadm -D /dev/md0 now reports the state as clean, degraded, recovering, and /dev/sdd as spare rebuilding. /proc/mdstat shows the recovery progress:

md0 : active raid6 sdd[4] sdc[1] sde[3] sdb[0]
3907026848 blocks super 1.2 level 6, 4k chunk, algorithm 2 [4/3] [UU_U]
[>....................] recovery = 2.1% (42887780/1953513424) finish=348.7min speed=91297K/sec

nmon also shows expected output:

¦sdb 0% 87.3 0.0| > |¦
¦sdc 71% 109.1 0.0|RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR > |¦
¦sdd 40% 0.0 87.3|WWWWWWWWWWWWWWWWWWWW > |¦
¦sde 0% 87.3 0.0|> ||

It looks good so far. Crossing my fingers for another five+ hours :)

Update 2

The recovery of /dev/sdd finished, with dmesg output:

[44972.599552] md: md0: recovery done.
[44972.682811] RAID conf printout:
[44972.682815] --- level:6 rd:4 wd:4
[44972.682817] disk 0, o:1, dev:sdb
[44972.682819] disk 1, o:1, dev:sdc
[44972.682820] disk 2, o:1, dev:sdd
[44972.682821] disk 3, o:1, dev:sde

Attempting mount /dev/md0 reports:

mount: wrong fs type, bad option, bad superblock on /dev/md0, missing codepage or helper program, or other error
In some cases useful info is found in syslog - try dmesg | tail or so

And on dmesg:

[44984.159908] EXT4-fs (md0): ext4_check_descriptors: Block bitmap for group 0 not in group (block 1318081259)!
[44984.159912] EXT4-fs (md0): group descriptors corrupted!

I'm not sure what to do now. Suggestions?
Output of dumpe2fs /dev/md0:

dumpe2fs 1.42.8 (20-Jun-2013)
Filesystem volume name: Atlas
Last mounted on: /mnt/atlas
Filesystem UUID: e7bfb6a4-c907-4aa0-9b55-9528817bfd70
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags: signed_directory_hash
Default mount options: user_xattr acl
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 244195328
Block count: 976756712
Reserved block count: 48837835
Free blocks: 92000180
Free inodes: 243414877
First block: 0
Block size: 4096
Fragment size: 4096
Reserved GDT blocks: 791
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 8192
Inode blocks per group: 512
RAID stripe width: 2
Flex block group size: 16
Filesystem created: Thu May 24 07:22:41 2012
Last mount time: Sun May 25 23:44:38 2014
Last write time: Sun May 25 23:46:42 2014
Mount count: 341
Maximum mount count: -1
Last checked: Thu May 24 07:22:41 2012
Check interval: 0 (<none>)
Lifetime writes: 4357 GB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 28
Desired extra isize: 28
Journal inode: 8
Default directory hash: half_md4
Directory Hash Seed: e177a374-0b90-4eaa-b78f-d734aae13051
Journal backup: inode blocks
dumpe2fs: Corrupt extent header while reading journal super block

    Read the article

  • Improving Partitioned Table Join Performance

    - by Paul White
    The query optimizer does not always choose an optimal strategy when joining partitioned tables. This post looks at an example, showing how a manual rewrite of the query can almost double performance, while reducing the memory grant to almost nothing.

Test Data

The two tables in this example use a common partition scheme. The partition function uses 41 equal-size partitions:

CREATE PARTITION FUNCTION PFT (integer)
AS RANGE RIGHT
FOR VALUES
(
    125000, 250000, 375000, 500000,
    625000, 750000, 875000, 1000000,
    1125000, 1250000, 1375000, 1500000,
    1625000, 1750000, 1875000, 2000000,
    2125000, 2250000, 2375000, 2500000,
    2625000, 2750000, 2875000, 3000000,
    3125000, 3250000, 3375000, 3500000,
    3625000, 3750000, 3875000, 4000000,
    4125000, 4250000, 4375000, 4500000,
    4625000, 4750000, 4875000, 5000000
);
GO
CREATE PARTITION SCHEME PST
AS PARTITION PFT
ALL TO ([PRIMARY]);

The two tables are:

CREATE TABLE dbo.T1
(
    TID integer NOT NULL IDENTITY(0,1),
    Column1 integer NOT NULL,
    Padding binary(100) NOT NULL DEFAULT 0x,
    CONSTRAINT PK_T1 PRIMARY KEY CLUSTERED (TID) ON PST (TID)
);

CREATE TABLE dbo.T2
(
    TID integer NOT NULL,
    Column1 integer NOT NULL,
    Padding binary(100) NOT NULL DEFAULT 0x,
    CONSTRAINT PK_T2 PRIMARY KEY CLUSTERED (TID, Column1) ON PST (TID)
);

The next script loads 5 million rows into T1 with a pseudo-random value between 1 and 5 for Column1. The table is partitioned on the IDENTITY column TID:

INSERT dbo.T1 WITH (TABLOCKX) (Column1)
SELECT (ABS(CHECKSUM(NEWID())) % 5) + 1
FROM dbo.Numbers AS N
WHERE n BETWEEN 1 AND 5000000;

In case you don't already have an auxiliary table of numbers lying around, here's a script to create one with 10 million rows:

CREATE TABLE dbo.Numbers (n bigint PRIMARY KEY);

WITH
    L0 AS(SELECT 1 AS c UNION ALL SELECT 1),
    L1 AS(SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B),
    L2 AS(SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B),
    L3 AS(SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B),
    L4 AS(SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B),
    L5 AS(SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B),
    Nums AS(SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n FROM L5)
INSERT dbo.Numbers WITH (TABLOCKX)
SELECT TOP (10000000) n
FROM Nums
ORDER BY n
OPTION (MAXDOP 1);

Next we load data into table T2. The relationship between the two tables is that table 2 contains 'n' rows for each row in table 1, where 'n' is determined by the value in Column1 of table T1. There is nothing particularly special about the data or distribution, by the way.

INSERT dbo.T2 WITH (TABLOCKX) (TID, Column1)
SELECT T.TID, N.n
FROM dbo.T1 AS T
JOIN dbo.Numbers AS N
    ON N.n >= 1 AND N.n <= T.Column1;

Table T2 ends up containing about 15 million rows. The primary key for table T2 is a combination of TID and Column1. The data is partitioned according to the value in column TID alone.

Partition Distribution

The following query shows the number of rows in each partition of table T1:

SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*)
FROM dbo.T1 AS T
CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P)
GROUP BY CA1.P
ORDER BY CA1.P;

There are 40 partitions containing 125,000 rows (40 * 125k = 5m rows). The rightmost partition remains empty. The next query shows the distribution for table 2:

SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*)
FROM dbo.T2 AS T
CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P)
GROUP BY CA1.P
ORDER BY CA1.P;

There are roughly 375,000 rows in each partition (the rightmost partition is also empty). OK, that's the test data done.
Test Query and Execution Plan

The task is to count the rows resulting from joining tables 1 and 2 on the TID column:

SET STATISTICS IO ON;
DECLARE @s datetime2 = SYSUTCDATETIME();

SELECT COUNT_BIG(*)
FROM dbo.T1 AS T1
JOIN dbo.T2 AS T2
    ON T2.TID = T1.TID;

SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME());
SET STATISTICS IO OFF;

The optimizer chooses a plan using parallel hash join and partial aggregation. The Plan Explorer plan tree view shows accurate cardinality estimates and an even distribution of rows across threads. With a warm data cache, the STATISTICS IO output shows that no physical I/O was needed, and all 41 partitions were touched. Running the query without actual execution plan or STATISTICS IO information for maximum performance, the query returns in around 2600ms.

Execution Plan Analysis

The first step toward improving on the execution plan produced by the query optimizer is to understand how it works, at least in outline. The two parallel Clustered Index Scans use multiple threads to read rows from tables T1 and T2. Parallel scan uses a demand-based scheme where threads are given page(s) to scan from the table as needed. This arrangement has certain important advantages, but does result in an unpredictable distribution of rows amongst threads. The point is that multiple threads cooperate to scan the whole table, but it is impossible to predict which rows end up on which threads.

For correct results from the parallel hash join, the execution plan has to ensure that rows from T1 and T2 that might join are processed on the same thread. For example, if a row from T1 with join key value '1234' is placed in thread 5's hash table, the execution plan must guarantee that any rows from T2 that also have join key value '1234' probe thread 5's hash table for matches. The way this guarantee is enforced in this parallel hash join plan is by repartitioning rows to threads after each parallel scan. The two repartitioning exchanges route rows to threads using a hash function over the hash join keys. The two repartitioning exchanges use the same hash function, so rows from T1 and T2 with the same join key must end up on the same hash join thread.

Expensive Exchanges

This business of repartitioning rows between threads can be very expensive, especially if a large number of rows is involved. The execution plan selected by the optimizer moves 5 million rows through one repartitioning exchange and around 15 million across the other. As a first step toward removing these exchanges, consider the execution plan selected by the optimizer if we join just one partition from each table, disallowing parallelism:

SELECT COUNT_BIG(*)
FROM dbo.T1 AS T1
JOIN dbo.T2 AS T2
    ON T2.TID = T1.TID
WHERE $PARTITION.PFT(T1.TID) = 1
AND $PARTITION.PFT(T2.TID) = 1
OPTION (MAXDOP 1);

The optimizer has chosen a (one-to-many) merge join instead of a hash join. The single-partition query completes in around 100ms. If everything scaled linearly, we would expect that extending this strategy to all 40 populated partitions would result in an execution time around 4000ms. Using parallelism could reduce that further, perhaps to be competitive with the parallel hash join chosen by the optimizer. This raises a question. If the most efficient way to join one partition from each of the tables is to use a merge join, why does the optimizer not choose a merge join for the full query?
Forcing a Merge Join

Let's force the optimizer to use a merge join on the test query using a hint:

SELECT COUNT_BIG(*)
FROM dbo.T1 AS T1
JOIN dbo.T2 AS T2
    ON T2.TID = T1.TID
OPTION (MERGE JOIN);

This plan results in the same number of logical reads reported previously, but instead of 2600ms the query takes 5000ms. The natural explanation for this drop in performance is that the merge join plan is only using a single thread, whereas the parallel hash join plan could use multiple threads.

Parallel Merge Join

We can get a parallel merge join plan using the same query hint as before, and adding trace flag 8649:

SELECT COUNT_BIG(*)
FROM dbo.T1 AS T1
JOIN dbo.T2 AS T2
    ON T2.TID = T1.TID
OPTION (MERGE JOIN, QUERYTRACEON 8649);

This looks promising. It uses a similar strategy to distribute work across threads as seen for the parallel hash join. In practice though, performance is disappointing. On a typical run, the parallel merge plan runs for around 8400ms; slower than the single-threaded merge join plan (5000ms) and much worse than the 2600ms for the parallel hash join. We seem to be going backwards! The logical reads for the parallel merge are still exactly the same as before, with no physical IOs. The cardinality estimates and thread distribution are also still very good.

A big clue to the reason for the poor performance is shown in the wait statistics (captured by Plan Explorer Pro). CXPACKET waits require careful interpretation, and are most often benign, but in this case excessive waiting occurs at the repartitioning exchanges. Unlike the parallel hash join, the repartitioning exchanges in this plan are order-preserving 'merging' exchanges (because merge join requires ordered inputs). Parallelism works best when threads can just grab any available unit of work and get on with processing it. Preserving order introduces inter-thread dependencies that can easily lead to significant waits occurring. In extreme cases, these dependencies can result in an intra-query deadlock, though the details of that will have to wait for another time to explore in detail. The potential for waits and deadlocks leads the query optimizer to cost parallel merge join relatively highly, especially as the degree of parallelism (DOP) increases. This high costing resulted in the optimizer choosing a serial merge join rather than parallel in this case. The test results certainly confirm its reasoning.

Collocated Joins

In SQL Server 2008 and later, the optimizer has another available strategy when joining tables that share a common partition scheme. This strategy is a collocated join, also known as a per-partition join. It can be applied in both serial and parallel execution plans, though it is limited to 2-way joins in the current optimizer. Whether the optimizer chooses a collocated join or not depends on cost estimation. The primary benefits of a collocated join are that it eliminates an exchange and requires less memory, as we will see next.

Costing and Plan Selection

The query optimizer did consider a collocated join for our original query, but it was rejected on cost grounds. The parallel hash join with repartitioning exchanges appeared to be a cheaper option. There is no query hint to force a collocated join, so we have to mess with the costing framework to produce one for our test query.
Pretending that IOs cost 50 times more than usual is enough to convince the optimizer to use a collocated join with our test query:

-- Pretend IOs are 50x cost temporarily
DBCC SETIOWEIGHT(50);

-- Co-located hash join
SELECT COUNT_BIG(*)
FROM dbo.T1 AS T1
JOIN dbo.T2 AS T2
    ON T2.TID = T1.TID
OPTION (RECOMPILE);

-- Reset IO costing
DBCC SETIOWEIGHT(1);

Collocated Join Plan

In the estimated execution plan for the collocated join, the Constant Scan contains one row for each partition of the shared partitioning scheme, from 1 to 41. The hash repartitioning exchanges seen previously are replaced by a single Distribute Streams exchange using Demand partitioning. Demand partitioning means that the next partition id is given to the next parallel thread that asks for one. My test machine has eight logical processors, and all are available for SQL Server to use. As a result, there are eight threads in the single parallel branch in this plan, each processing one partition from each table at a time. Once a thread finishes processing a partition, it grabs a new partition number from the Distribute Streams exchange…and so on until all partitions have been processed.

It is important to understand that the parallel scans in this plan are different from the parallel hash join plan. Although the scans have the same parallelism icon, tables T1 and T2 are not being co-operatively scanned by multiple threads in the same way. Each thread reads a single partition of T1 and performs a hash match join with the same partition from table T2. The properties of the two Clustered Index Scans show a Seek Predicate (unusual for a scan!) limiting the rows to a single partition. The crucial point is that the join between T1 and T2 is on TID, and TID is the partitioning column for both tables. A thread that processes partition 'n' is guaranteed to see all rows that can possibly join on TID for that partition. In addition, no other thread will see rows from that partition, so this removes the need for repartitioning exchanges.

CPU and Memory Efficiency Improvements

The collocated join has removed two expensive repartitioning exchanges and added a single exchange processing 41 rows (one for each partition id). Remember, the parallel hash join plan exchanges had to process 5 million and 15 million rows. The amount of processor time spent on exchanges will be much lower in the collocated join plan. In addition, the collocated join plan has a maximum of 8 threads processing single partitions at any one time. The 41 partitions will all be processed eventually, but a new partition is not started until a thread asks for it. Threads can reuse hash table memory for the new partition. The parallel hash join plan also had 8 hash tables, but with all 5,000,000 build rows loaded at the same time. The collocated plan needs memory for only 8 * 125,000 = 1,000,000 rows at any one time.

Collocated Hash Join Performance

The collocated join plan has disappointing performance in this case. The query runs for around 25,300ms despite the same IO statistics as usual. This is much the worst result so far, so what went wrong? It turns out that cardinality estimation for the single partition scans of table T1 is slightly low. The properties of the Clustered Index Scan of T1 show the estimation was for 121,951 rows.
This is a small shortfall compared with the 125,000 rows actually encountered, but it was enough to cause the hash join to spill to physical tempdb. A level 1 spill doesn't sound too bad, until you realize that the spill to tempdb probably occurs for each of the 41 partitions. As a side note, the cardinality estimation error is a little surprising, because the system tables accurately show there are 125,000 rows in every partition of T1. Unfortunately, the optimizer uses regular column and index statistics to derive cardinality estimates here rather than system table information (e.g. sys.partitions).

Collocated Merge Join

We will never know how well the collocated parallel hash join plan might have worked without the cardinality estimation error (and the resulting 41 spills to tempdb), but we do know two things: merge join does not require a memory grant, and merge join was the optimizer's preferred join option for a single-partition join. Putting this all together, what we would really like to see is the same collocated join strategy, but using merge join instead of hash join. Unfortunately, the current query optimizer cannot produce a collocated merge join; it only knows how to do collocated hash join. So where does this leave us?

CROSS APPLY sys.partitions

We can try to write our own collocated join query. We can use sys.partitions to find the partition numbers, and CROSS APPLY to get a count per partition, with a final step to sum the partial counts. The following query implements this idea:

SELECT row_count = SUM(Subtotals.cnt)
FROM
(
    -- Partition numbers
    SELECT p.partition_number
    FROM sys.partitions AS p
    WHERE p.[object_id] = OBJECT_ID(N'T1', N'U')
    AND p.index_id = 1
) AS P
CROSS APPLY
(
    -- Count per collocated join
    SELECT cnt = COUNT_BIG(*)
    FROM dbo.T1 AS T1
    JOIN dbo.T2 AS T2
        ON T2.TID = T1.TID
    WHERE $PARTITION.PFT(T1.TID) = p.partition_number
    AND $PARTITION.PFT(T2.TID) = p.partition_number
) AS SubTotals;

The cardinality estimates aren't all that good here, especially the estimate for the scan of the system table underlying the sys.partitions view. Nevertheless, the plan shape is heading toward where we would like to be. Each partition number from the system table results in a per-partition scan of T1 and T2, a one-to-many Merge Join, and a Stream Aggregate to compute the partial counts. The final Stream Aggregate just sums the partial counts. Execution time for this query is around 3,500ms, with the same IO statistics as always. This compares favourably with 5,000ms for the serial plan produced by the optimizer with the OPTION (MERGE JOIN) hint. This is another case of the sum of the parts being less than the whole – summing 41 partial counts from 41 single-partition merge joins is faster than a single merge join and count over all partitions. Even so, this single-threaded collocated merge join is not as quick as the original parallel hash join plan, which executed in 2,600ms. On the positive side, our collocated merge join uses only one logical processor and requires no memory grant. The parallel hash join plan used 16 threads and reserved 569 MB of memory.

Using a Temporary Table

Our collocated merge join plan should benefit from parallelism. The reason parallelism is not being used is that the query references a system table.
We can work around that by writing the partition numbers to a temporary table (or table variable):

SET STATISTICS IO ON;
DECLARE @s datetime2 = SYSUTCDATETIME();

CREATE TABLE #P ( partition_number integer PRIMARY KEY);

INSERT #P (partition_number)
SELECT p.partition_number
FROM sys.partitions AS p
WHERE p.[object_id] = OBJECT_ID(N'T1', N'U')
AND p.index_id = 1;

SELECT row_count = SUM(Subtotals.cnt)
FROM #P AS p
CROSS APPLY
(
    SELECT cnt = COUNT_BIG(*)
    FROM dbo.T1 AS T1
    JOIN dbo.T2 AS T2
        ON T2.TID = T1.TID
    WHERE $PARTITION.PFT(T1.TID) = p.partition_number
    AND $PARTITION.PFT(T2.TID) = p.partition_number
) AS SubTotals;

DROP TABLE #P;

SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME());
SET STATISTICS IO OFF;

Using the temporary table adds a few logical reads, but the overall execution time is still around 3500ms, indistinguishable from the same query without the temporary table. The problem is that the query optimizer still doesn't choose a parallel plan for this query, though the removal of the system table reference means that it could if it chose to. In fact the optimizer did enter the parallel plan phase of query optimization (running search 1 for a second time). Unfortunately, the parallel plan found seemed to be more expensive than the serial plan. This is a crazy result, caused by the optimizer's cost model not reducing operator CPU costs on the inner side of a nested loops join. Don't get me started on that, we'll be here all night. In this plan, everything expensive happens on the inner side of a nested loops join. Without a CPU cost reduction to compensate for the added cost of exchange operators, candidate parallel plans always look more expensive to the optimizer than the equivalent serial plan.

Parallel Collocated Merge Join

We can produce the desired parallel plan using trace flag 8649 again:

SELECT row_count = SUM(Subtotals.cnt)
FROM #P AS p
CROSS APPLY
(
    SELECT cnt = COUNT_BIG(*)
    FROM dbo.T1 AS T1
    JOIN dbo.T2 AS T2
        ON T2.TID = T1.TID
    WHERE $PARTITION.PFT(T1.TID) = p.partition_number
    AND $PARTITION.PFT(T2.TID) = p.partition_number
) AS SubTotals
OPTION (QUERYTRACEON 8649);

One difference between this plan and the collocated hash join plan is that a Repartition Streams exchange operator is used instead of Distribute Streams. The effect is similar, though not quite identical. The Repartition uses round-robin partitioning, meaning the next partition id is pushed to the next thread in sequence. The Distribute Streams exchange seen earlier used Demand partitioning, meaning the next partition id is pulled across the exchange by the next thread that is ready for more work. There are subtle performance implications for each partitioning option, but going into that would again take us too far off the main point of this post.

Performance

The important thing is the performance of this parallel collocated merge join – just 1350ms on a typical run. The list below shows all the alternatives from this post (all timings include creation, population, and deletion of the temporary table where appropriate) from quickest to slowest:

Collocated parallel merge join: 1350ms
Parallel hash join: 2600ms
Collocated serial merge join: 3500ms
Serial merge join: 5000ms
Parallel merge join: 8400ms
Collocated parallel hash join: 25,300ms (hash spill per partition)

The parallel collocated merge join requires no memory grant (aside from a paltry 1.2MB used for exchange buffers).
This plan uses 16 threads at DOP 8; but 8 of those are (rather pointlessly) allocated to the parallel scan of the temporary table. These are minor concerns, but it turns out there is a way to address them if it bothers you.

Parallel Collocated Merge Join with Demand Partitioning

This final tweak replaces the temporary table with a hard-coded list of partition ids (dynamic SQL could be used to generate this query from sys.partitions):

SELECT row_count = SUM(Subtotals.cnt)
FROM
(
    VALUES
        (1),(2),(3),(4),(5),(6),(7),(8),(9),(10),
        (11),(12),(13),(14),(15),(16),(17),(18),(19),(20),
        (21),(22),(23),(24),(25),(26),(27),(28),(29),(30),
        (31),(32),(33),(34),(35),(36),(37),(38),(39),(40),(41)
) AS P (partition_number)
CROSS APPLY
(
    SELECT cnt = COUNT_BIG(*)
    FROM dbo.T1 AS T1
    JOIN dbo.T2 AS T2
        ON T2.TID = T1.TID
    WHERE $PARTITION.PFT(T1.TID) = p.partition_number
    AND $PARTITION.PFT(T2.TID) = p.partition_number
) AS SubTotals
OPTION (QUERYTRACEON 8649);

The manual rewrite has another advantage that has not been mentioned so far: the partial counts (per partition) can be computed earlier than the partial counts (per thread) in the optimizer's collocated join plan. The earlier aggregation is performed by the extra Stream Aggregate under the nested loops join. The performance of the parallel collocated merge join is unchanged at around 1350ms.

Final Words

It is a shame that the current query optimizer does not consider a collocated merge join (Connect item closed as Won't Fix). The example used in this post showed an improvement in execution time from 2600ms to 1350ms using a modestly-sized data set and limited parallelism. In addition, the memory requirement for the query was almost completely eliminated – down from 569MB to 1.2MB. The problem with the parallel hash join selected by the optimizer is that it attempts to process the full data set all at once (albeit using eight threads). It requires a large memory grant to hold all 5 million rows from table T1 across the eight hash tables, and does not take advantage of the divide-and-conquer opportunity offered by the common partitioning. The great thing about the collocated join strategies is that each parallel thread works on a single partition from both tables, reading rows, performing the join, and computing a per-partition subtotal, before moving on to a new partition.

From a thread's point of view…

If you have trouble visualizing what is happening from just looking at the parallel collocated merge join execution plan, let's look at it again, but from the point of view of just one thread operating between the two Parallelism (exchange) operators. Our thread picks up a single partition id from the Distribute Streams exchange, and starts a merge join using ordered rows from partition 1 of table T1 and partition 1 of table T2. By definition, this is all happening on a single thread. As rows join, they are added to a (per-partition) count in the Stream Aggregate immediately above the Merge Join. Eventually, either T1 (partition 1) or T2 (partition 1) runs out of rows and the merge join stops. The per-partition count from the aggregate passes on through the Nested Loops join to another Stream Aggregate, which is maintaining a per-thread subtotal. Our same thread now picks up a new partition id from the exchange (say it gets id 9 this time).
The count in the per-partition aggregate is reset to zero, and the processing of partition 9 of both tables proceeds just as it did for partition 1, and on the same thread. Each thread picks up a single partition id and processes all the data for that partition, completely independently from other threads working on other partitions. One thread might eventually process partitions (1, 9, 17, 25, 33, 41) while another is concurrently processing partitions (2, 10, 18, 26, 34) and so on for the other six threads at DOP 8. The point is that all 8 threads can execute independently and concurrently, continuing to process new partitions until the wider job (of which the thread has no knowledge!) is done. This divide-and-conquer technique can be much more efficient than simply splitting the entire workload across eight threads all at once.

Related Reading

Understanding and Using Parallelism in SQL Server
Parallel Execution Plans Suck

© 2013 Paul White – All Rights Reserved
Twitter: @SQL_Kiwi

    Read the article

  • Nagios creating lots of zombie processes

    - by pradeepchhetri
    In my monitoring box, I have lots of zombie processes created by Nagios, though they also get removed quickly. I am using active checks to perform monitoring of my servers. I collected the defunct processes using the following command:

$ top -d 0.25 -b -n 20 > topout.txt

This collected the output of top, with a 0.25s delay, 20 times. I then grepped topout.txt for the defunct processes:

$ cat topout.txt | grep defunct

I get the following output:

8957 nagios 20 0 0 0 0 Z 6.0 0.0 0:00.02 nagios <defunct>
8951 nagios 20 0 0 0 0 Z 3.0 0.0 0:00.01 nagios <defunct>
8954 nagios 20 0 0 0 0 Z 3.0 0.0 0:00.01 nagios <defunct>
8945 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
8946 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
8980 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
9000 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.00 nagios <defunct>
9024 nagios 20 0 0 0 0 Z 7.0 0.0 0:00.02 nagios <defunct>
9025 nagios 20 0 0 0 0 Z 3.5 0.0 0:00.01 nagios <defunct>
9040 nagios 20 0 0 0 0 Z 3.1 0.0 0:00.01 nagios <defunct>
9086 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
9087 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
9123 nagios 20 0 0 0 0 Z 6.1 0.0 0:00.02 nagios <defunct>
9126 nagios 20 0 0 0 0 Z 3.0 0.0 0:00.01 nagios <defunct>
9131 nagios 20 0 0 0 0 Z 3.0 0.0 0:00.01 nagios <defunct>
9091 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.05 nagios <defunct>
9111 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
9119 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
9118 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
9151 nagios 20 0 0 0 0 Z 2.9 0.0 0:00.02 nagios <defunct>
9153 nagios 20 0 0 0 0 Z 2.9 0.0 0:00.02 nagios <defunct>
9150 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
9164 nagios 20 0 0 0 0 Z 3.5 0.0 0:00.02 nagios <defunct>
9171 nagios 20 0 0 0 0 Z 3.5 0.0 0:00.02 nagios <defunct>
9154 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
9156 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
9163 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
9167 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
9178 nagios 20 0 0 0 0 Z 3.8 0.0 0:00.02 nagios <defunct>
9174 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
9179 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
9182 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>

Can somebody help me find out the reason for these zombie processes, and how I can prevent them?
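Some background that may help with the diagnosis: every Nagios active check is a fork()ed child process, and between the moment the plugin exits and the moment the Nagios parent reaps it with waitpid(), the child shows up as <defunct>. A short-lived zombie per check is therefore normal, and the entries above (all with near-zero accumulated CPU time, disappearing quickly) are consistent with that. The minimal C++ sketch below (illustrative only, not Nagios source) reproduces the effect:

#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>
#include <cstdlib>

int main()
{
    pid_t child = fork();
    if (child == 0)
    {
        // Child: behave like a short-lived check plugin and exit at once.
        std::exit(0);
    }

    // Parent: until waitpid() runs, the exited child is a zombie.
    // Running `ps -o pid,stat,comm -p <child-pid>` during this sleep
    // shows 'Z' in the STAT column, just like the top output above.
    std::printf("child pid = %d, sleeping before reaping...\n", (int)child);
    sleep(10);

    // Reaping the child removes the zombie entry from the process table.
    int status = 0;
    waitpid(child, &status, 0);
    return 0;
}

Zombies are only a real problem if they accumulate without ever being reaped; a transient handful that comes and goes with each check cycle, as in the listing above, is expected behaviour rather than something to fix.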

    Read the article

  • Can't get MySQL to install

    - by James Marthenal
    I'd like to think I know what I'm doing in a Unix shell, but maybe not. I made a mistake in a configuration file for MySQL, so I decided to just uninstall it and then reinstall it, so I did:

sudo apt-get --purge remove mysql-server mysql-server-5.0 mysql-client

The files were deleted, so I then tried to install it, but it didn't ask me for a root password or anything else, so I uninstalled it using the above command again and then did:

sudo rm -rf /etc/mysql
sudo rm /etc/init.d/mysql
sudo rm -rf /var/lib/mysql*

I then restarted the computer and installed it again:

sudo apt-get install mysql-server mysql-client

It asked for a root password, and everything looked like it would work, until I saw this:

$ sudo apt-get install mysql-server mysql-client
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed: mysql-server-5.0
Suggested packages: tinyca
The following NEW packages will be installed: mysql-client mysql-server mysql-server-5.0
0 upgraded, 3 newly installed, 0 to remove and 1 not upgraded.
Need to get 0B/27.4MB of archives.
After this operation, 86.7MB of additional disk space will be used.
Do you want to continue [Y/n]? y
WARNING: The following packages cannot be authenticated! mysql-server-5.0 mysql-client mysql-server
Authentication warning overridden.
Preconfiguring packages ...
Can't exec "/tmp/mysql-server-5.0.config.28101": Permission denied at /usr/share/perl/5.10/IPC/Open3.pm line 168.
open2: exec of /tmp/mysql-server-5.0.config.28101 configure failed at /usr/share/perl5/Debconf/ConfModule.pm line 59
mysql-server-5.0 failed to preconfigure, with exit status 255
Selecting previously deselected package mysql-server-5.0.
(Reading database ... 160284 files and directories currently installed.)
Unpacking mysql-server-5.0 (from .../mysql-server-5.0_5.0.51a-24+lenny5_amd64.deb) ...
Selecting previously deselected package mysql-client.
Unpacking mysql-client (from .../mysql-client_5.0.51a-24+lenny5_all.deb) ...
Selecting previously deselected package mysql-server.
Unpacking mysql-server (from .../mysql-server_5.0.51a-24+lenny5_all.deb) ...
Processing triggers for man-db ...
Setting up mysql-server-5.0 (5.0.51a-24+lenny5) ...
Stopping MySQL database server: mysqld.
/var/lib/dpkg/info/mysql-server-5.0.postinst: line 144: /etc/mysql/conf.d/old_passwords.cnf: No such file or directory
dpkg: error processing mysql-server-5.0 (--configure): subprocess post-installation script returned error exit status 1
Setting up mysql-client (5.0.51a-24+lenny5) ...
dpkg: dependency problems prevent configuration of mysql-server: mysql-server depends on mysql-server-5.0; however: Package mysql-server-5.0 is not configured yet.
dpkg: error processing mysql-server (--configure): dependency problems - leaving unconfigured
Errors were encountered while processing: mysql-server-5.0 mysql-server
E: Sub-process /usr/bin/dpkg returned an error code (1)

Now I can't seem to figure out what to do. I just want to get a clean MySQL installation at this point. I'm running the latest stable release of Debian. All help is appreciated—thanks!
Edit: I looked at this similar question, which suggests that I uninstall mysql-common, but when I try to do so I see: The following packages will be REMOVED: apache2 apache2-mpm-prefork apache2-utils apache2.2-common git-svn libapache2-mod-php5 libapache2-mod-python libapache2-svn libaprutil1 libdbd-mysql-perl libdbd-mysql-rubygem libmysql-ruby libmysql-ruby1.8 libmysql-rubygem libmysqlclient15-dev libmysqlclient15off librdf-perl librdf0 libserf-0-0 libsvn-perl libsvn1 mysql-client-5.0 mysql-common mytop ndn-apache22-php5 ndn-apache22-svn ndn-interpreters ndn-lighttpd ndn-netsaint-plugins ndn-perl-modules ndn-php5-cgi ndn-php5-xcache ndn-php53 ndn-php53-suhosin ndn-rubygems php5 php5-mcrypt php5-mysql proftpd proftpd-mod-mysql python-django python-mysqldb python-subversion python-svn subversion subversion-tools trac zendoptimizer 0 upgraded, 0 newly installed, 48 to remove and 1 not upgraded. Eeek! Any suggestions?

    Read the article

  • Linux fsck.ext3 says "Device or resource busy" although I did not mount the disk.

    - by matnagel
    I am running an Ubuntu 8.04 server instance with an 8GB virtual disk on VMware 1.0.9. For disk maintenance I made a copy of the virtual disk (by making a copy of the 2 vmdk files of sda on the stopped VM on the host) and added it to the original VM. Now this VM has its original virtual disk sda plus a 1:1 copy (sdd). (There are 2 additional disks, sdb and sdc, which I ignore.) I would expect sdd not to be mounted when I start the VM. So I try to do an fsck on sdd from the running VM, but fsck reports that the device is busy:

$ sudo fsck.ext3 -b 8193 /dev/sdd
e2fsck 1.40.8 (13-Mar-2008)
fsck.ext3: Device or resource busy while trying to open /dev/sdd
Filesystem mounted or opened exclusively by another program?

The "mount" command does not tell me sdd is mounted:

$ sudo mount
/dev/sda1 on / type ext3 (rw,relatime,errors=remount-ro)
proc on /proc type proc (rw,noexec,nosuid,nodev)
/sys on /sys type sysfs (rw,noexec,nosuid,nodev)
varrun on /var/run type tmpfs (rw,noexec,nosuid,nodev,mode=0755)
varlock on /var/lock type tmpfs (rw,noexec,nosuid,nodev,mode=1777)
udev on /dev type tmpfs (rw,mode=0755)
devshm on /dev/shm type tmpfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sdc1 on /mnt/r1 type ext3 (rw,relatime,errors=remount-ro)
/dev/sdb1 on /mnt/k1 type ext3 (rw,relatime,errors=remount-ro)
securityfs on /sys/kernel/security type securityfs (rw)

When I ignore the warning and continue the fsck, it reports many errors. How do I get this under control? Is there a better way to figure out whether sdd is mounted? Or how is it "busy"? How do I unmount it then? How do I prevent Ubuntu from automatically mounting it? Or is there something else I am missing? Also, from /var/log/syslog I cannot see that it is mounted; this is the last part of the startup sequence:

kernel: [ 14.229494] ACPI: Power Button (FF) [PWRF]
kernel: [ 14.230326] ACPI: AC Adapter [ACAD] (on-line)
kernel: [ 14.460136] input: PC Speaker as /devices/platform/pcspkr/input/input3
kernel: [ 14.639366] udev: renamed network interface eth0 to eth1
kernel: [ 14.670187] eth1: link up
kernel: [ 16.329607] input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/
kernel: [ 16.367540] parport_pc 00:08: reported by Plug and Play ACPI
kernel: [ 16.367670] parport0: PC-style at 0x378, irq 7 [PCSPP,TRISTATE]
kernel: [ 19.425637] NET: Registered protocol family 10
kernel: [ 19.437550] lo: Disabled Privacy Extensions
kernel: [ 24.328857] loop: module loaded
kernel: [ 24.449293] lp0: using parport0 (interrupt-driven).
kernel: [ 26.075499] EXT3 FS on sda1, internal journal
kernel: [ 28.380299] kjournald starting. Commit interval 5 seconds
kernel: [ 28.381706] EXT3 FS on sdc1, internal journal
kernel: [ 28.381747] EXT3-fs: mounted filesystem with ordered data mode.
kernel: [ 28.444867] kjournald starting. Commit interval 5 seconds
kernel: [ 28.445436] EXT3 FS on sdb1, internal journal
kernel: [ 28.445444] EXT3-fs: mounted filesystem with ordered data mode.
kernel: [ 31.309766] eth1: no IPv6 routers present
kernel: [ 35.054268] ip_tables: (C) 2000-2006 Netfilter Core Team
mysqld_safe[4367]: started
mysqld[4370]: 100124 14:40:21 InnoDB: Started; log sequence number 0 10130914
mysqld[4370]: 100124 14:40:21 [Note] /usr/sbin/mysqld: ready for connections.
mysqld[4370]: Version: '5.0.51a-3ubuntu5.4' socket: '/var/run/mysqld/mysqld.sock' port: 3
/etc/mysql/debian-start[4417]: Upgrading MySQL tables if necessary.
/etc/mysql/debian-start[4422]: Looking for 'mysql' in: /usr/bin/mysql
/etc/mysql/debian-start[4422]: Looking for 'mysqlcheck' in: /usr/bin/mysqlcheck
/etc/mysql/debian-start[4422]: This installation of MySQL is already upgraded to 5.0.51a, u
/etc/mysql/debian-start[4436]: Checking for insecure root accounts.
/etc/mysql/debian-start[4444]: Checking for crashed MySQL tables.

    Read the article

  • Can't install MySQL

    - by James Marthenal
    I have a Debian machine that I previously installed MySQL on. In an attempt to delete it, I stupidly deleted the directories/files /etc/mysql/, /etc/init.d/mysql, /usr/lib/mysql/, /var/lib/mysql/. I then later did sudo apt-get purge mysql-server mysql-server-5.0. Now, when I try to install mysql-server, I get:

$ sudo apt-get install mysql-server
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed: mysql-server-5.0
The following NEW packages will be installed: mysql-server mysql-server-5.0
0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
Need to get 0B/27.4MB of archives.
After this operation, 86.6MB of additional disk space will be used.
Do you want to continue [Y/n]? y
WARNING: The following packages cannot be authenticated! mysql-server-5.0 mysql-server
Authentication warning overridden.
Preconfiguring packages ...
Can't exec "/tmp/mysql-server-5.0.config.122781": Permission denied at /usr/share/perl/5.10/IPC/Open3.pm line 168.
open2: exec of /tmp/mysql-server-5.0.config.122781 configure failed at /usr/share/perl5/Debconf/ConfModule.pm line 59
mysql-server-5.0 failed to preconfigure, with exit status 255
Selecting previously deselected package mysql-server-5.0.
(Reading database ... 158138 files and directories currently installed.)
Unpacking mysql-server-5.0 (from .../mysql-server-5.0_5.0.51a-24+lenny5_amd64.deb) ...
Selecting previously deselected package mysql-server.
Unpacking mysql-server (from .../mysql-server_5.0.51a-24+lenny5_all.deb) ...
Processing triggers for man-db ...
Setting up mysql-server-5.0 (5.0.51a-24+lenny5) ...
Stopping MySQL database server: mysqld.
110206 19:31:13 [ERROR] /usr/sbin/mysqld: Can't find file: './mysql/user.frm' (errno: 13)
110206 19:31:13 [ERROR] /usr/sbin/mysqld: Can't find file: './mysql/user.frm' (errno: 13)
ERROR: 1017 Can't find file: './mysql/user.frm' (errno: 13)
110206 19:31:13 [ERROR] Aborting
110206 19:31:13 [Note] /usr/sbin/mysqld: Shutdown complete
/etc/init.d/mysql: WARNING: /etc/mysql/my.cnf cannot be read. See README.Debian.gz (warning).
Starting MySQL database server: mysqld . . . . . . . . . . . . . . failed!
invoke-rc.d: initscript mysql, action "start" failed.
dpkg: error processing mysql-server-5.0 (--configure): subprocess post-installation script returned error exit status 1
dpkg: dependency problems prevent configuration of mysql-server: mysql-server depends on mysql-server-5.0; however: Package mysql-server-5.0 is not configured yet.
dpkg: error processing mysql-server (--configure): dependency problems - leaving unconfigured
Errors were encountered while processing: mysql-server-5.0 mysql-server
E: Sub-process /usr/bin/dpkg returned an error code (1)

I have tried to search for a solution via Google and have found lots of suggestions for this problem, but ultimately it seems the problem is that by deleting the files manually, I messed up the mysql-common package. I have tried sudo apt-get install --reinstall mysql-common followed by installing mysql-server, but it does the exact same thing. I previously had MySQL working great; I just want to get it back to that state. Thanks so much for your help.

    Read the article

  • SSL certificates and types for securing your websites and applications

    - by Mit Naik
    I need to share some information regarding SSL certificates and their types: which SSL certificates are widely used, etc. There are several SSL certificates available in the market today in order to secure your domains, multiple subdomains, your applications, and code too. A few of the details are mentioned below.

Cheap SSL certificates available today include the standard RapidSSL certificate, Thawte SSL123, etc., which are basic-level certificates. Most of these cheap SSL certificates are domain-validated only and don't provide the greatest trust for your customers. This means you shouldn't use cheap SSL certificates on e-commerce stores or other public-facing sites that require people to trust the site.

EV certificates: I found the GeoTrust True BusinessID with EV certificate, which is one of the cheapest available in the market today; you can also find Thawte and VeriSign EV versions of certificates. EV is designed to prevent phishing attacks better than normal SSL certificates. What makes an EV certificate so special? An SSL certificate provider has to do some extensive validation to give you one, including: verifying that your organization is legally registered and active, verifying the address and phone number of your organization, verifying that your organization has the exclusive right to use the domain specified in the EV certificate, verifying that the person ordering the certificate has been authorized by the organization, and verifying that your organization is not on any government blacklists.

SSL WILDCARD CERTIFICATES: SSL wildcard certificates are big money-savers. An SSL wildcard certificate allows you to secure an unlimited number of first-level sub-domains on a single domain name. For example, if you need to secure the following websites:

* www.yourdomain.com
* secure.yourdomain.com
* product.yourdomain.com
* info.yourdomain.com
* download.yourdomain.com
* anything.yourdomain.com

and all of these websites are hosted across one or more servers, you can purchase and install one wildcard certificate issued to *.yourdomain.com to secure all these sites.

SAN CERTIFICATES are interesting certificates and are helpful if you want to secure multiple domains by generating a single CSR; you can install the same certificate on your additional sites without generating new CSRs for all the additional domains.

CODE SIGNING CERTIFICATES: A code signing certificate is a file containing a digital signature that can be used to sign executables and scripts in order to verify your identity and ensure that your code has not been tampered with since it was signed. This helps your users determine whether your software can be trusted. A code signing certificate allows you to sign code using a private and public key system similar to how an SSL certificate secures a website. When you request a code signing certificate, a public/private key pair is generated. The certificate authority will then issue a code signing certificate that contains the public key. A certificate for code signing needs to be signed by a trusted certificate authority so that the operating system knows that your identity has been validated. You could still use the code signing certificate to sign and distribute malicious software, but you will be held legally accountable for it. You can sign many different types of code.
The most common types include Windows applications such as .exe, .cab, .dll, .ocx, and .xpi files (using an Authenticode certificate), Apple applications (using an Apple code signing certificate), Microsoft Office VBA objects and macros (using a VBA code signing certificate), .jar files (using a Java code signing certificate), .air or .airi files (using an Adobe AIR certificate), and Windows Vista drivers and other kernel-mode software (using a Vista code certificate). In reality, a code signing certificate can sign almost all types of code as long as you convert the certificate to the correct format first. Also I found the below URL which provides you good suggestion regarding purchasing best SSL certificates for securing your site, as per the Financial institution, Bank, Hosting providers, ISP, Retail Merchants etc. Please vote and provide comments or any additional suggestions regarding SSL certificates.
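
    To make the CSR step concrete: the sketch below generates a key pair and a certificate signing request for a wildcard certificate with OpenSSL. The domain and subject fields are placeholders; a SAN certificate would instead list its extra names in a subjectAltName extension in an OpenSSL config file rather than on the command line.

    # Generate a 2048-bit RSA key and a CSR for *.yourdomain.com;
    # the CA signs the CSR, the private key never leaves your server
    openssl req -new -newkey rsa:2048 -nodes \
        -keyout yourdomain.key -out yourdomain.csr \
        -subj "/C=US/ST=State/L=City/O=Your Company/CN=*.yourdomain.com"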

    Read the article

  • GMail suspects confirmation email of stealing personal information

    - by Dennis Gorelik
    When a user registers on my web site, the web site sends the user an email confirmation link.

    Subject: Please confirm your email address
    Body: Please open this link in your browser to confirm your email address: http://www.postjobfree.com/a/c301718062444f96ba0e358ea833c9b3 This link will expire on: 6/9/2012 8:04:07 PM EST.

    If my web site sends that email to GMail (either @gmail.com or another domain that's handled by Google Apps) and that user has never emailed us before -- then GMail not only puts the email in the spam folder, but also adds a prominent red warning:

    Be careful with this message. Similar messages were used to steal people's personal information. Unless you trust the sender, don't click links or reply with personal information. Learn more

    That warning really scares many of my users, so they are afraid to open that link and confirm their email. What can I do about it? Ideally I would like that message to end up in the user's inbox, not the spam folder. But at the very least, how do I prevent that scary message? The IP address of my mail server is not blacklisted: http://www.mxtoolbox.com/SuperTool.aspx?action=blacklist%3a208.43.198.72 I use SPF and a DKIM signature. Below is the email that ended up in the spam folder with that scary red message.

    Delivered-To: [email protected]
    Received: by 10.112.84.98 with SMTP id x2csp36568lby; Fri, 8 Jun 2012 17:04:15 -0700 (PDT)
    Received: by 10.60.25.6 with SMTP id y6mr9110318oef.42.1339200255375; Fri, 08 Jun 2012 17:04:15 -0700 (PDT)
    Return-Path:
    Received: from smtp.postjobfree.com (smtp.postjobfree.com. [208.43.198.72]) by mx.google.com with ESMTP id v8si6058193oev.44.2012.06.08.17.04.14; Fri, 08 Jun 2012 17:04:15 -0700 (PDT)
    Received-SPF: pass (google.com: domain of [email protected] designates 208.43.198.72 as permitted sender) client-ip=208.43.198.72;
    Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 208.43.198.72 as permitted sender) [email protected]; dkim=pass [email protected]
    DomainKey-Signature: a=rsa-sha1; c=nofws; q=dns; d=postjobfree.com; s=postjobfree.com;
      h=received:message-id:mime-version:from:to:date:subject:content-type;
      b=TCip/3hP1WWViWB1cdAzMFPjyi/aUKXQbuSTVpEO7qr8x3WdMFhJCqZciA69S0HB4
       Koatk2cQQ3fOilr4ledCgZYemLSJgwa/ZRhObnqgPHAglkBy8/RAwkrwaE0GjLKup
       0XI6G2wPlh+ReR+inkMwhCPHFInmvrh4evlBx/VlA=
    DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=postjobfree.com; s=postjobfree.com;
      h=content-type:subject:date:to:from:mime-version:message-id;
      bh=N59EIgRECIlAnd41LY4HY/OFI+v1p7t5M9yP+3FsKXY=;
      b=J3/BdZmpjzP4I6GA4ntmi4REu5PpOcmyzEL+6i7y7LaTR8tuc2h7fdW4HaMPlB7za
       Lj4NJPed61ErumO66eG4urd1UfyaRDtszWeuIbcIUqzwYpnMZ8ytaj8DPcWPE3JYj
       oKhcYyiVbgiFjLujib3/2k2PqDIrNutRH9Ln7puz4=
    Received: from sv3035 (sv3035 [208.43.198.72]) by smtp.postjobfree.com with SMTP; Fri, 8 Jun 2012 20:04:07 -0400
    Message-ID:
    MIME-Version: 1.0
    From: "PostJobFree Notification"
    To: [email protected]
    Date: 8 Jun 2012 20:04:07 -0400
    Subject: Please confirm your email address
    Content-Type: multipart/alternative; boundary=--boundary_107_ffa6a9ea-01dc-40f5-a50c-4c3b3d113f08

    ----boundary_107_ffa6a9ea-01dc-40f5-a50c-4c3b3d113f08
    Content-Type: text/plain; charset=us-ascii
    Content-Transfer-Encoding: quoted-printable

    Please open this link in your browser to confirm your email addre=
    ss: =0D=0Ahttp://www.postjobfree.com/a/c301718062444f96ba0e358ea8=
    33c9b3 =0D=0AThis link will expire on: 6/9/2012 8:04:07 PM EST. =0D=0A

    ----boundary_107_ffa6a9ea-01dc-40f5-a50c-4c3b3d113f08
    Content-Type: text/html; charset=utf-8
    Content-Transfer-Encoding: base64

    PGh0bWw+PGhlYWQ+PG1ldGEgaHR0cC1lcXVpdj1Db250ZW50LVR5cGUgY29udGVu
    dD0idGV4dC9odG1sOyBjaGFyc2V0PXV0Zi04Ij48L2hlYWQ+DQo8Ym9keT48ZGl2
    Pg0KUGxlYXNlIG9wZW4gdGhpcyBsaW5rIGluIHlvdXIgYnJvd3NlciB0byBjb25m
    aXJtIHlvdXIgZW1haWwgYWRkcmVzczo8YnIgLz48YSBocmVmPSJodHRwOi8vd3d3
    LnBvc3Rqb2JmcmVlLmNvbS9hL2MzMDE3MTgwNjI0NDRmOTZiYTBlMzU4ZWE4MzNj
    OWIzIj5odHRwOi8vd3d3LnBvc3Rqb2JmcmVlLmNvbS9hL2MzMDE3MTgwNjI0NDRm
    OTZiYTBlMzU4ZWE4MzNjOWIzPC9hPjxiciAvPlRoaXMgbGluayB3aWxsIGV4cGly
    ZSBvbjogNi85LzIwMTIgODowNDowNyBQTSBFU1QuPGJyIC8+DQo8L2Rpdj48L2Jv
    ZHk+PC9odG1sPg==
    ----boundary_107_ffa6a9ea-01dc-40f5-a50c-4c3b3d113f08--
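
    For reference, SPF and DKIM of the kind that show as "pass" in the headers above are published as DNS TXT records. A sketch of what the records behind these headers might look like (the public key and the DMARC reporting address are illustrative placeholders, not the real postjobfree.com records):

    ; SPF: authorize the sending IP, soft-fail everything else
    postjobfree.com.  IN TXT  "v=spf1 ip4:208.43.198.72 ~all"

    ; DKIM: public key for the selector s=postjobfree.com used in the signature
    postjobfree.com._domainkey.postjobfree.com.  IN TXT  "v=DKIM1; k=rsa; p=<public-key>"

    ; DMARC: tell receivers what to do on failure and where to send reports
    _dmarc.postjobfree.com.  IN TXT  "v=DMARC1; p=none; rua=mailto:postmaster@postjobfree.com"

    Since SPF and DKIM already pass here, the red warning is more likely driven by content heuristics (a bare link plus a request to click it) and sender reputation than by authentication; a DMARC record is one additional signal Google takes into account.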

    Read the article

  • Permissions restoring from Time Machine - Finder copy vs "cp" copy

    - by Ben Challenor
    Note: this question was starting to sprawl, so I rewrote it. I have a folder that I'm trying to restore from a Time Machine backup. Using cp -R works fine, but certain folders cannot be restored with either the Time Machine UI or Finder. Other users have reported similar errors, and the cp -R workaround was suggested (e.g. Restoring from Time Machine - Permissions Error). But I wanted to understand:

    1. Why cp -R works when the Finder and the Time Machine UI do not.
    2. Whether I could prevent the errors by changing file permissions before the backup.

    There do indeed seem to be some permissions that Finder works with and some that it does not. I've narrowed the errors down to folders with the user ben (that's me) and the group wheel. Here's a simplified reproduction. I have four folders with the owner/group combinations I've seen so far:

    ben ~/Desktop/test $ ls -lea
    total 16
    drwxr-xr-x   7 ben  staff   238 27 Nov 14:31 .
    drwx------+ 17 ben  staff   578 27 Nov 14:29 ..
     0: group:everyone deny delete
    -rw-r--r--@  1 ben  staff  6148 27 Nov 14:31 .DS_Store
    drwxr-xr-x   3 ben  staff   102 27 Nov 14:30 ben-staff
    drwxr-xr-x   3 ben  wheel   102 27 Nov 14:30 ben-wheel
    drwxr-xr-x   3 root admin   102 27 Nov 14:31 root-admin
    drwxr-xr-x   3 root wheel   102 27 Nov 14:31 root-wheel

    Each contains a single file called file with the same owner/group:

    ben ~/Desktop/test $ cd ben-staff
    ben ~/Desktop/test/ben-staff $ ls -lea
    total 0
    drwxr-xr-x  3 ben staff 102 27 Nov 14:30 .
    drwxr-xr-x  7 ben staff 238 27 Nov 14:31 ..
    -rw-r--r--  1 ben staff   0 27 Nov 14:30 file

    In the backup, they look like this:

    ben /Volumes/Deimos/Backups.backupdb/Ben’s MacBook Air/Latest/Macintosh HD/Users/ben/Desktop/test $ ls -leA
    total 16
    -rw-r--r--@ 1 ben  staff 6148 27 Nov 14:34 .DS_Store
     0: group:everyone deny write,delete,append,writeattr,writeextattr,chown
    drwxr-xr-x@ 3 ben  staff  102 27 Nov 14:51 ben-staff
     0: group:everyone deny add_file,delete,add_subdirectory,delete_child,writeattr,writeextattr,chown
    drwxr-xr-x@ 3 ben  wheel  102 27 Nov 14:51 ben-wheel
     0: group:everyone deny add_file,delete,add_subdirectory,delete_child,writeattr,writeextattr,chown
    drwxr-xr-x@ 3 root admin  102 27 Nov 14:52 root-admin
     0: group:everyone deny add_file,delete,add_subdirectory,delete_child,writeattr,writeextattr,chown
    drwxr-xr-x@ 3 root wheel  102 27 Nov 14:52 root-wheel
     0: group:everyone deny add_file,delete,add_subdirectory,delete_child,writeattr,writeextattr,chown

    Of these, ben-staff can be restored with Finder without errors. root-wheel and root-admin ask for my password and then restore without errors. But ben-wheel does not prompt for my password and gives the error: The operation can’t be completed because you don’t have permission to access “file”.

    Interestingly, I can restore the file from this folder by dragging it directly to my local drive (instead of dragging its parent folder), but when I do so its permissions are changed to ben/staff. Here are the permissions after the restore for the three folders that worked correctly, and the file from ben-wheel that was changed to ben/staff:

    ben ~/Desktop/test-restore $ ls -leA
    total 16
    -rw-r--r--@ 1 ben  staff 6148 27 Nov 14:46 .DS_Store
    drwxr-xr-x  3 ben  staff  102 27 Nov 14:30 ben-staff
    -rw-r--r--  1 ben  staff    0 27 Nov 14:30 file
    drwxr-xr-x  3 root admin  102 27 Nov 14:31 root-admin
    drwxr-xr-x  3 root wheel  102 27 Nov 14:31 root-wheel

    Can anyone explain this behaviour? Why do Finder and the Time Machine UI break with the ben / wheel permissions? And why does cp -R work (even without sudo)?
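
    For anyone experimenting with the same reproduction, these are the standard Mac OS X commands for inspecting and normalizing the ownership and ACLs involved -- a sketch against the test layout above, not a claimed fix:

    # Show POSIX permissions plus ACL entries (-e) for the restored copies
    ls -leA ~/Desktop/test-restore

    # Change the group from wheel back to staff on the suspect folder
    sudo chown -R ben:staff ~/Desktop/test/ben-wheel

    # Strip all ACL entries (-N), such as the "group:everyone deny ..."
    # entries Time Machine adds to backed-up items
    sudo chmod -RN ~/Desktop/test-restore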

    Read the article

  • Wireless AAA for a small, bandwidth-limited hotel

    - by Anthony Hiscox
    We (the tech I work with and myself) live in a remote northern town where Internet access is somewhat of a luxury and bandwidth is quite limited. Here, overage charges ranging from a few hundred to a few thousand dollars a month are not uncommon. I myself incur regular monthly charges just through my regular Internet usage at home (I am allowed 10G for $60 CAD!). As part of my work, I have found myself involved with several hotels that are feeling this. I know that I can come up with something to solve this problem, but I am relatively new to system administration and I don't want my dreams to overcome reality. So, I pass these ideas on to you, those with much more experience than I, in hopes you will share some of your thoughts and concerns. The system must:

    * Be cost effective. Yes, the charges are high here, but the trust in technology is the lowest I've ever seen.
    * Be capable of helping clients reduce their usage (squid).
    * Allow a limited (throughput and total usage) amount of free Internet, as this is often franchise policy.
    * Allow a user to track their bandwidth usage.
    * Allow (optional) higher speed and/or usage for an additional charge. This fee can be collected at the front desk on checkout and should not require the use of PayPal or a credit card.
    * Cope with franchise authentication policies. Unfortunately some franchises have ridiculous policies that require the use of a third-party remote service to authenticate guests to your network. This means WPA is out, and it also means that I do not authenticate before Internet usage; that will be their job. However, I do require the ABILITY to perform authentication for Internet access if a hotel does not have this policy. I will still have to track bandwidth (under a guest account by default) and provide the same limiting; however, the guest will often require completely 'unlimited' access -- in terms of existence, not throughput.
    * Provide firewalling capabilities for hotels that have nothing, with Office and Guest network segregation (some of these guys are running their office on the guest network, with no encryption, and a simple TOS to get on!).
    * Prevent guests from connecting to other guests, but provide a means to allow this on request. E.g. each guest connects to a page and approves the other guest; this writes an iptables rule (with python-netfilter) and allows two rooms to play a game, for instance (see the sketch after this question).

    My thoughts on how to implement this: one decent box (we'll call it a router now) with a lot of RAM and 3 NICs:

    1. Internet
    2. Office
    3. Guests (APs + in-room Ethernet)

    Router firewall rules: guests can talk to the router only, through which they are routed to where they need to go, including Internet services. Office can be used to bridge Office to Internet if an existing solution is not in place; otherwise, it simply serves a network-accessible web (webmin + python-webmin?) interface.

    Router software: OpenVZ provides virtualization for a few services I don't really trust: Squid, FreeRADIUS and Apache. The only service directly accessible to guests is Apache. Apache has mod_wsgi and Django, because I can write quickly using Django and my needs are low. It also potentially has the FreeRADIUS mod, but there seem to be some caveats with this. Firewall rules are handled on the router with iptables. Webmin (or maybe a custom Django app) provides abstracted control over any features that the staff may need to access. Python -- if you haven't guessed, it's the language I feel most comfortable in, and I use it for almost everything.

    And finally: has this been done, is it an overly massive project not worth taking on for one guy, and/or are there some tools I'm missing that could be making my life easier? For the record, I am fairly good with Python but not very familiar with many other languages (I can struggle through PHP; it's a cosmetic issue there). I am also an avid Linux user, comfortable with config files and the command line. Thank you for your time; I look forward to reading your responses.

    Edit: My apologies if this is not a Q&A in the sense that some were expecting; I'm just looking for ideas and to make sure I'm not trying to do something that's been done. I'm looking at pfSense now as a possible start for what I need.
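
    The guest-to-guest pairing idea is small enough to sketch. The example below shells out to the iptables binary rather than using python-netfilter directly, and the interface name, the addressing, and a default-deny FORWARD policy on the guest segment are all assumptions:

    import subprocess

    GUEST_IFACE = "eth2"  # assumed guest-facing NIC

    def allow_pair(ip_a, ip_b):
        """Insert FORWARD rules so two guest rooms can reach each other.

        Guest-to-guest traffic is assumed to be dropped by default, so
        one ACCEPT rule per direction is enough to open the pair up.
        """
        for src, dst in ((ip_a, ip_b), (ip_b, ip_a)):
            subprocess.check_call([
                "iptables", "-I", "FORWARD",
                "-i", GUEST_IFACE, "-o", GUEST_IFACE,
                "-s", src, "-d", dst,
                "-j", "ACCEPT",
            ])

    def revoke_pair(ip_a, ip_b):
        """Delete those rules again, e.g. on checkout."""
        for src, dst in ((ip_a, ip_b), (ip_b, ip_a)):
            subprocess.check_call([
                "iptables", "-D", "FORWARD",
                "-i", GUEST_IFACE, "-o", GUEST_IFACE,
                "-s", src, "-d", dst,
                "-j", "ACCEPT",
            ])

    # Example: room 12 approves room 14, both on the guest subnet
    # allow_pair("10.0.0.112", "10.0.0.114")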

    Read the article

  • proftpd, dynamic IP, and filezilla: port troubles

    - by Yami
    The basic setup: two computers, one running proftpd, one attempting to connect via FileZilla. Both Linux (Xubuntu on the server, Kubuntu on the client). Both are at the moment behind a router on a residential (read: dynamic IP) connection; the client is a laptop I plan to take away from the home network, so I'll need this to work externally. I have my router set up to forward specific ports to each machine and, where possible, have plugged those numbers into proftpd (via gadmin, double-checking the config file) and FileZilla.

    Attempting to connect via active mode using the internal IP works:

    Status: Connecting to 192.168.1.139:8085...
    Status: Connection established, waiting for welcome message...
    Response: 220 Crossroads FTP
    Command: USER <redacted>
    Response: 331 Password required for <redacted>
    Command: PASS *******
    Response: 230 Anonymous access granted, restrictions apply
    Command: OPTS UTF8 ON
    Response: 200 UTF8 set to on
    Status: Connected
    Status: Retrieving directory listing...
    Command: PWD
    Response: 257 "/" is the current directory
    Command: TYPE I
    Response: 200 Type set to I
    Command: PORT 192,168,1,52,153,140
    Response: 200 PORT command successful
    Command: LIST
    Response: 150 Opening ASCII mode data connection for file list
    Response: 226 Transfer complete
    Status: Directory listing successful

    Attempting to connect via the domain name, however, leads to issues: in active mode, the PORT is the last command to be received according to the server's logs, and in passive mode, it's the PASV command. This leads me to believe I'm being redirected to a bad port?

    Active sample:

    Status: Resolving address of <url>
    Status: Connecting to <ip:port>
    Status: Connection established, waiting for welcome message...
    Response: 220 Crossroads FTP
    Command: USER <redacted>
    Response: 331 Password required for <redacted>
    Command: PASS *******
    Response: 230 Anonymous access granted, restrictions apply
    Command: OPTS UTF8 ON
    Response: 200 UTF8 set to on
    Status: Connected
    Status: Retrieving directory listing...
    Command: PWD
    Response: 257 "/" is the current directory
    Command: TYPE I
    Response: 200 Type set to I
    Command: PORT 174,111,127,27,153,139
    Response: 200 PORT command successful
    Command: LIST
    Error: Connection timed out
    Error: Failed to retrieve directory listing

    Passive sample:

    Status: Resolving address of ftp.bonsaiwebdesigns.com
    Status: Connecting to 174.111.127.27:8085...
    Status: Connection established, waiting for welcome message...
    Response: 220 Crossroads FTP
    Command: USER yamikuronue
    Response: 331 Password required for yamikuronue
    Command: PASS *******
    Response: 230 Anonymous access granted, restrictions apply
    Command: OPTS UTF8 ON
    Response: 200 UTF8 set to on
    Status: Connected
    Status: Retrieving directory listing...
    Command: PWD
    Response: 257 "/" is the current directory
    Command: TYPE I
    Response: 200 Type set to I
    Command: PASV
    Response: 227 Entering Passive Mode (64,95,64,197,101,88).
    Command: LIST
    Error: Connection timed out
    Error: Failed to retrieve directory listing

    In both cases, the log file ends at "PORT" or "PASV" -- there's no record of ever receiving a "LIST" command. Just above that I can see the attempt to connect actively via the internal IP, which does indeed include a LIST command. My config file includes "PassivePorts 20001-26999", which are the port forwards I set up for the ftp server, and "Port 8085", which is also forwarded to the same machine. I also have a MasqueradeAddress set up to prevent the server from reporting its internal IP, which was an earlier issue I had.

    I think what I'm asking is: is there another setting someplace I have to change to get this setup to work?
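
    For comparison, a minimal proftpd.conf fragment for this kind of NAT setup might look like the sketch below (the hostname is a placeholder). Two details worth double-checking against the running config: proftpd's PassivePorts directive takes two space-separated numbers rather than a hyphenated range, and the passive reply above advertises 64.95.64.197 instead of 174.111.127.27, which is what you would see if MasqueradeAddress resolves to a stale or wrong address.

    # Control connection (forwarded on the router)
    Port 8085

    # Passive-mode data ports (also forwarded on the router);
    # note the syntax is "min max", not "min-max"
    PassivePorts 20001 26999

    # External address to advertise in PASV replies. With a dynamic IP,
    # a dynamic-DNS hostname is the usual choice; proftpd resolves it at
    # startup, so the server needs a restart after the IP changes.
    MasqueradeAddress ftp.yourdomain.example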

    Read the article

  • Users suddenly missing write permissions to the root drive C within an Active Directory domain

    - by Kevin
    I'm managing an Active Directory single-domain environment on some Windows Server 2008, Windows Server 2008 R2 and Windows Server 2012 machines. A few weeks ago I ran into a strange issue: some users (not all!) report that they can no longer save, copy or write files to the root of drive C:, neither on their clients (Vista, Windows 7) nor via a remote desktop connection on a Windows Server 2008 machine. Even running programs that require direct write permissions to the root drive without administrator permissions has failed since then. The affected users have local administrator permissions.

    The question I'm facing now is: what caused this change of system behavior? Why did this happen? I haven't found out yet.

    What was the last thing I did before it happened? The last action was the rollout of a GPO containing network drive mappings for the users, depending on their security group membership. All network drives are located on a Linux server with Samba enabled. We did not change any UAC settings, and they have always been activated. However, I can't imagine that rolling out this GPO caused the problem.

    Has anybody faced an issue like this? Just in case: I know that, for a specific reason, a user without administrative privileges has been prevented from writing to the root drive since Windows Vista and the introduction of UAC. I don't think those users should be able to write to drive C:, but I'm trying to figure out why this is happening when a few weeks ago it was still working. I also know that a user who is a member of the local Administrators group does not execute anything with administrator permissions by default unless he or she explicitly runs a program with those permissions.

    What have I done so far?

    * I checked the permissions of the affected programs and the affected clients/servers. Didn't find anything special.
    * I checked ALL of our GPOs for any restrictions that could prevent the affected users from writing to the root drive. Did not find any such settings.
    * I checked the UAC settings of the affected users and compared them to users who can still write to the root drive. Everything is similar.
    * I googled through the Internet and tried to find someone who had a similar problem. Did not find one.

    Has anybody an idea? Thank you very much.

    Edit: The GPO that was rolled out does the following (please excuse if the settings are not named exactly like this; I translated them into English):

    Windows Settings -- Network Drive Mappings -- Drive N: -- General:
    Action: Replace

    Properties:
    Letter: N
    Location: \\path-to-drive\drivename
    Re-establish connection: deactivated
    Label as: Name_of_the_Share
    Use first available option: deactivated

    Windows Settings -- Network Drive Mappings -- Drive N: -- Public: Options:
    On error don't process any further elements for this extension: no
    Run as the logged in user: no
    Remove element if it is not applied anymore: no
    Only apply once: no

    Security group:
    Attribute -- Value
    bool -- AND
    not -- 0
    name -- domain\groupname
    sid -- sid-of-the-group
    userContext -- 1
    primaryGroup -- 0
    localGroup -- 0

    Security group:
    Attribute -- Value
    bool -- OR
    not -- 0
    name -- domain\another-groupname
    sid -- sid-of-the-group
    userContext -- 1
    primaryGroup -- 0
    localGroup -- 0

    Edit: The error message an affected user gets says the following: Due to an unexpected error you can't copy the file. Error code 0x80070522: The client is missing a required permission.

    The command icacls C: shows the following:

    NT-AUTORITY\SYSTEM:(OI)(CI)(F)
    PRE-DEFINED\Administrators:(OI)(CI)(F)
    computername\username:(OI)(CI)(F)

    A colleague just told me that the primary domain controller (PDC) also changed from Windows Server 2008 to Windows Server 2012. That may also be a reason. Any suggestions?
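
    Error code 0x80070522 is ERROR_PRIVILEGE_NOT_HELD, which is what a non-elevated process typically receives when writing to the root of the system drive while UAC filters the Administrators token. A few standard commands that help compare an affected session with a working one -- a diagnostic sketch, not a fix:

    REM Show the current token; in a non-elevated session UAC lists the
    REM Administrators membership as "Group used for deny only"
    whoami /groups
    whoami /priv

    REM Dump the root-drive ACL to compare affected vs. unaffected machines
    icacls C:\

    REM Verify which GPOs actually applied to this user and machine
    gpresult /h gpreport.html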

    Read the article
