Search Results

Search found 8177 results on 328 pages for 'lost focus'.


  • XNA Screen Manager problem with transitions

    - by NexAddo
    I'm having issues using the GameStateManagement sample in the game I am developing. I have no issues with my first three screens transitioning between one another. I have a main menu screen, a splash screen and a high score screen that cycle:

        mainMenuScreen -> splashScreen -> highScoreScreen -> mainMenuScreen

    The screens change every 15 seconds. Transition times:

        public MainMenuScreen()
        {
            TransitionOnTime = TimeSpan.FromSeconds(0.5);
            TransitionOffTime = TimeSpan.FromSeconds(0.0);
            currentCreditAmount = Global.CurrentCredits;
        }

        public SplashScreen()
        {
            TransitionOnTime = TimeSpan.FromSeconds(0.5);
            TransitionOffTime = TimeSpan.FromSeconds(0.5);
        }

        public HighScoreScreen()
        {
            TransitionOnTime = TimeSpan.FromSeconds(0.5);
            TransitionOffTime = TimeSpan.FromSeconds(0.5);
        }

        public GamePlayScreen()
        {
            TransitionOnTime = TimeSpan.FromSeconds(0.5);
            TransitionOffTime = TimeSpan.FromSeconds(0.5);
        }

    When a user inserts credits they can play the game after pressing start:

        mainMenuScreen -> splashScreen -> highScoreScreen -> (loops forever)
                      |
                 Credits In
                      |
                    Start
                      |
                      v
                LoadingScreen
                      |
                      v
                GamePlayScreen

    During each of these transitions between screens, the same code is used, which exits (removes) all current active screens while respecting transitions, then adds the new screen to the screen manager:

        foreach (GameScreen screen in ScreenManager.GetScreens())
            screen.ExitScreen();

        // AddScreen takes a new screen to manage and the controlling player
        ScreenManager.AddScreen(new NameOfScreenHere(), null);

    Each screen is removed from the ScreenManager with ExitScreen(), and using this function, each screen transition is respected. The problem I am having is with my GamePlayScreen. When the current game is finished and the transition is complete for the GamePlayScreen, it should be removed and the next screens should be added to the ScreenManager.

    GamePlayScreen code snippet:

        private void FinishCurrentGame()
        {
            AudioManager.StopSounds();
            this.UnloadContent();

            if (Global.SaveDevice.IsReady)
                Stats.Save();

            if (HighScoreScreen.IsInHighscores(timeLimit))
            {
                foreach (GameScreen screen in ScreenManager.GetScreens())
                    screen.ExitScreen();

                Global.TimeRemaining = timeLimit;
                ScreenManager.AddScreen(new BackgroundScreen(), null);
                ScreenManager.AddScreen(new MessageBoxScreen("Enter your Initials", true), null);
            }
            else
            {
                foreach (GameScreen screen in ScreenManager.GetScreens())
                    screen.ExitScreen();

                ScreenManager.AddScreen(new BackgroundScreen(), null);
                ScreenManager.AddScreen(new MainMenuScreen(), null);
            }
        }

    The problem is that when IsExiting is set to true by screen.ExitScreen() for the GamePlayScreen, the transition never completes, so the screen is never removed from the ScreenManager. Every other screen that I add and remove with the same technique fully transitions on/off and is removed at the appropriate time from the ScreenManager, but not my GamePlayScreen. Has anyone that has used the GameStateManagement sample experienced this issue, or can someone see the mistake I am making?

    EDIT

    This is what I tracked down. When the game is done, I call

        foreach (GameScreen screen in ScreenManager.GetScreens())
            screen.ExitScreen();

    to start the transition-off process for the gameplay screen. At this point there is only 1 screen on the ScreenManager stack. The gameplay screen gets IsExiting set to true and starts to transition off.
    Right after the above call to ExitScreen() I add a background screen and menu screen to the ScreenManager:

        ScreenManager.AddScreen(new background(), null);
        ScreenManager.AddScreen(new Menu(), null);

    The count of the ScreenManager is now 3. What I noticed while stepping through the updates for GameScreen and ScreenManager is that the gameplay screen never gets to the point where the transition process finishes, so the ScreenManager can never remove it from the stack. This anomaly does not happen to any of my other screens when I switch between them.

    Screen Manager code:

        #region File Description
        //-----------------------------------------------------------------------------
        // ScreenManager.cs
        //
        // Microsoft XNA Community Game Platform
        // Copyright (C) Microsoft Corporation. All rights reserved.
        //-----------------------------------------------------------------------------
        #endregion

        #define DEMO

        #region Using Statements
        using System;
        using System.Diagnostics;
        using System.Collections.Generic;
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Content;
        using Microsoft.Xna.Framework.Graphics;
        using PerformanceUtility.GameDebugTools;
        #endregion

        namespace GameStateManagement
        {
            /// <summary>
            /// The screen manager is a component which manages one or more GameScreen
            /// instances. It maintains a stack of screens, calls their Update and Draw
            /// methods at the appropriate times, and automatically routes input to the
            /// topmost active screen.
            /// </summary>
            public class ScreenManager : DrawableGameComponent
            {
                #region Fields

                List<GameScreen> screens = new List<GameScreen>();
                List<GameScreen> screensToUpdate = new List<GameScreen>();

                InputState input = new InputState();

                SpriteBatch spriteBatch;
                SpriteFont font;
                Texture2D blankTexture;

                bool isInitialized;
                bool getOut;
                bool traceEnabled;

        #if DEBUG
                DebugSystem debugSystem;
                Stopwatch stopwatch = new Stopwatch();
                bool debugTextEnabled;
        #endif

                #endregion

                #region Properties

                /// <summary>
                /// A default SpriteBatch shared by all the screens. This saves
                /// each screen having to bother creating their own local instance.
                /// </summary>
                public SpriteBatch SpriteBatch
                {
                    get { return spriteBatch; }
                }

                /// <summary>
                /// A default font shared by all the screens. This saves
                /// each screen having to bother loading their own local copy.
                /// </summary>
                public SpriteFont Font
                {
                    get { return font; }
                }

                public Rectangle ScreenRectangle
                {
                    get
                    {
                        return new Rectangle(0, 0,
                                             GraphicsDevice.Viewport.Width,
                                             GraphicsDevice.Viewport.Height);
                    }
                }

                /// <summary>
                /// If true, the manager prints out a list of all the screens
                /// each time it is updated. This can be useful for making sure
                /// everything is being added and removed at the right times.
                /// </summary>
                public bool TraceEnabled
                {
                    get { return traceEnabled; }
                    set { traceEnabled = value; }
                }

        #if DEBUG
                public bool DebugTextEnabled
                {
                    get { return debugTextEnabled; }
                    set { debugTextEnabled = value; }
                }

                public DebugSystem DebugSystem
                {
                    get { return debugSystem; }
                }
        #endif

                #endregion

                #region Initialization

                /// <summary>
                /// Constructs a new screen manager component.
                /// </summary>
                public ScreenManager(Game game)
                    : base(game)
                {
                    // we must set EnabledGestures before we can query for them, but
                    // we don't assume the game wants to read them.
                    //TouchPanel.EnabledGestures = GestureType.None;
                }

                /// <summary>
                /// Initializes the screen manager component.
                /// </summary>
                public override void Initialize()
                {
                    base.Initialize();

        #if DEBUG
                    debugSystem = DebugSystem.Initialize(Game, "Fonts/MenuFont");
        #endif

                    isInitialized = true;
                }

                /// <summary>
                /// Load your graphics content.
                /// </summary>
                protected override void LoadContent()
                {
                    // Load content belonging to the screen manager.
                    ContentManager content = Game.Content;

                    spriteBatch = new SpriteBatch(GraphicsDevice);
                    font = content.Load<SpriteFont>(@"Fonts\menufont");
                    blankTexture = content.Load<Texture2D>(@"Textures\Backgrounds\blank");

                    // Tell each of the screens to load their content.
                    foreach (GameScreen screen in screens)
                    {
                        screen.LoadContent();
                    }
                }

                /// <summary>
                /// Unload your graphics content.
                /// </summary>
                protected override void UnloadContent()
                {
                    // Tell each of the screens to unload their content.
                    foreach (GameScreen screen in screens)
                    {
                        screen.UnloadContent();
                    }
                }

                #endregion

                #region Update and Draw

                /// <summary>
                /// Allows each screen to run logic.
                /// </summary>
                public override void Update(GameTime gameTime)
                {
        #if DEBUG
                    debugSystem.TimeRuler.StartFrame();
                    debugSystem.TimeRuler.BeginMark("Update", Color.Blue);

                    if (debugTextEnabled && getOut == false)
                    {
                        debugSystem.FpsCounter.Visible = true;
                        debugSystem.TimeRuler.Visible = true;
                        debugSystem.TimeRuler.ShowLog = true;
                        getOut = true;
                    }
                    else if (debugTextEnabled == false)
                    {
                        getOut = false;
                        debugSystem.FpsCounter.Visible = false;
                        debugSystem.TimeRuler.Visible = false;
                        debugSystem.TimeRuler.ShowLog = false;
                    }
        #endif

                    // Read the keyboard and gamepad.
                    input.Update();

                    // Make a copy of the master screen list, to avoid confusion if
                    // the process of updating one screen adds or removes others.
                    screensToUpdate.Clear();

                    foreach (GameScreen screen in screens)
                        screensToUpdate.Add(screen);

                    bool otherScreenHasFocus = !Game.IsActive;
                    bool coveredByOtherScreen = false;

                    // Loop as long as there are screens waiting to be updated.
                    while (screensToUpdate.Count > 0)
                    {
                        // Pop the topmost screen off the waiting list.
                        GameScreen screen = screensToUpdate[screensToUpdate.Count - 1];

                        screensToUpdate.RemoveAt(screensToUpdate.Count - 1);

                        // Update the screen.
                        screen.Update(gameTime, otherScreenHasFocus, coveredByOtherScreen);

                        if (screen.ScreenState == ScreenState.TransitionOn ||
                            screen.ScreenState == ScreenState.Active)
                        {
                            // If this is the first active screen we came across,
                            // give it a chance to handle input.
                            if (!otherScreenHasFocus)
                            {
                                screen.HandleInput(input);
                                otherScreenHasFocus = true;
                            }

                            // If this is an active non-popup, inform any subsequent
                            // screens that they are covered by it.
                            if (!screen.IsPopup)
                                coveredByOtherScreen = true;
                        }
                    }

                    // Print debug trace?
                    if (traceEnabled)
                        TraceScreens();

        #if DEBUG
                    debugSystem.TimeRuler.EndMark("Update");
        #endif
                }

                /// <summary>
                /// Prints a list of all the screens, for debugging.
                /// </summary>
                void TraceScreens()
                {
                    List<string> screenNames = new List<string>();

                    foreach (GameScreen screen in screens)
                        screenNames.Add(screen.GetType().Name);

                    Debug.WriteLine(string.Join(", ", screenNames.ToArray()));
                }

                /// <summary>
                /// Tells each screen to draw itself.
                /// </summary>
                public override void Draw(GameTime gameTime)
                {
        #if DEBUG
                    debugSystem.TimeRuler.StartFrame();
                    debugSystem.TimeRuler.BeginMark("Draw", Color.Yellow);
        #endif

                    foreach (GameScreen screen in screens)
                    {
                        if (screen.ScreenState == ScreenState.Hidden)
                            continue;

                        screen.Draw(gameTime);
                    }

        #if DEBUG
                    debugSystem.TimeRuler.EndMark("Draw");
        #endif

        #if DEMO
                    SpriteBatch.Begin();
                    SpriteBatch.DrawString(font, "DEMO - NOT FOR RESALE",
                                           new Vector2(20, 80), Color.White);
                    SpriteBatch.End();
        #endif
                }

                #endregion

                #region Public Methods

                /// <summary>
                /// Adds a new screen to the screen manager.
                /// </summary>
                public void AddScreen(GameScreen screen, PlayerIndex? controllingPlayer)
                {
                    screen.ControllingPlayer = controllingPlayer;
                    screen.ScreenManager = this;
                    screen.IsExiting = false;

                    // If we have a graphics device, tell the screen to load content.
                    if (isInitialized)
                    {
                        screen.LoadContent();
                    }

                    screens.Add(screen);
                }

                /// <summary>
                /// Removes a screen from the screen manager. You should normally
                /// use GameScreen.ExitScreen instead of calling this directly, so
                /// the screen can gradually transition off rather than just being
                /// instantly removed.
                /// </summary>
                public void RemoveScreen(GameScreen screen)
                {
                    // If we have a graphics device, tell the screen to unload content.
                    if (isInitialized)
                    {
                        screen.UnloadContent();
                    }

                    screens.Remove(screen);
                    screensToUpdate.Remove(screen);
                }

                /// <summary>
                /// Expose an array holding all the screens. We return a copy rather
                /// than the real master list, because screens should only ever be added
                /// or removed using the AddScreen and RemoveScreen methods.
                /// </summary>
                public GameScreen[] GetScreens()
                {
                    return screens.ToArray();
                }

                /// <summary>
                /// Helper draws a translucent black fullscreen sprite, used for fading
                /// screens in and out, and for darkening the background behind popups.
                /// </summary>
                public void FadeBackBufferToBlack(float alpha)
                {
                    Viewport viewport = GraphicsDevice.Viewport;

                    spriteBatch.Begin();

                    spriteBatch.Draw(blankTexture,
                                     new Rectangle(0, 0, viewport.Width, viewport.Height),
                                     Color.Black * alpha);

                    spriteBatch.End();
                }

                #endregion
            }
        }

    Game Screen (parent of GamePlayScreen):

        #region File Description
        //-----------------------------------------------------------------------------
        // GameScreen.cs
        //
        // Microsoft XNA Community Game Platform
        // Copyright (C) Microsoft Corporation. All rights reserved.
        //-----------------------------------------------------------------------------
        #endregion

        #region Using Statements
        using System;
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Input;
        //using Microsoft.Xna.Framework.Input.Touch;
        using System.IO;
        #endregion

        namespace GameStateManagement
        {
            /// <summary>
            /// Enum describes the screen transition state.
            /// </summary>
            public enum ScreenState
            {
                TransitionOn,
                Active,
                TransitionOff,
                Hidden,
            }

            /// <summary>
            /// A screen is a single layer that has update and draw logic, and which
            /// can be combined with other layers to build up a complex menu system.
            /// For instance the main menu, the options menu, the "are you sure you
            /// want to quit" message box, and the main game itself are all implemented
            /// as screens.
            /// </summary>
            public abstract class GameScreen
            {
                #region Properties

                /// <summary>
                /// Normally when one screen is brought up over the top of another,
                /// the first screen will transition off to make room for the new
                /// one. This property indicates whether the screen is only a small
                /// popup, in which case screens underneath it do not need to bother
                /// transitioning off.
                /// </summary>
                public bool IsPopup
                {
                    get { return isPopup; }
                    protected set { isPopup = value; }
                }

                bool isPopup = false;

                /// <summary>
                /// Indicates how long the screen takes to
                /// transition on when it is activated.
                /// </summary>
                public TimeSpan TransitionOnTime
                {
                    get { return transitionOnTime; }
                    protected set { transitionOnTime = value; }
                }

                TimeSpan transitionOnTime = TimeSpan.Zero;

                /// <summary>
                /// Indicates how long the screen takes to
                /// transition off when it is deactivated.
                /// </summary>
                public TimeSpan TransitionOffTime
                {
                    get { return transitionOffTime; }
                    protected set { transitionOffTime = value; }
                }

                TimeSpan transitionOffTime = TimeSpan.Zero;

                /// <summary>
                /// Gets the current position of the screen transition, ranging
                /// from zero (fully active, no transition) to one (transitioned
                /// fully off to nothing).
                /// </summary>
                public float TransitionPosition
                {
                    get { return transitionPosition; }
                    protected set { transitionPosition = value; }
                }

                float transitionPosition = 1;

                /// <summary>
                /// Gets the current alpha of the screen transition, ranging
                /// from 1 (fully active, no transition) to 0 (transitioned
                /// fully off to nothing).
                /// </summary>
                public float TransitionAlpha
                {
                    get { return 1f - TransitionPosition; }
                }

                /// <summary>
                /// Gets the current screen transition state.
                /// </summary>
                public ScreenState ScreenState
                {
                    get { return screenState; }
                    protected set { screenState = value; }
                }

                ScreenState screenState = ScreenState.TransitionOn;

                /// <summary>
                /// There are two possible reasons why a screen might be transitioning
                /// off. It could be temporarily going away to make room for another
                /// screen that is on top of it, or it could be going away for good.
                /// This property indicates whether the screen is exiting for real:
                /// if set, the screen will automatically remove itself as soon as the
                /// transition finishes.
                /// </summary>
                public bool IsExiting
                {
                    get { return isExiting; }
                    protected internal set { isExiting = value; }
                }

                bool isExiting = false;

                /// <summary>
                /// Checks whether this screen is active and can respond to user input.
                /// </summary>
                public bool IsActive
                {
                    get
                    {
                        return !otherScreenHasFocus &&
                               (screenState == ScreenState.TransitionOn ||
                                screenState == ScreenState.Active);
                    }
                }

                bool otherScreenHasFocus;

                /// <summary>
                /// Gets the manager that this screen belongs to.
                /// </summary>
                public ScreenManager ScreenManager
                {
                    get { return screenManager; }
                    internal set { screenManager = value; }
                }

                ScreenManager screenManager;

                public KeyboardState KeyboardState
                {
                    get { return Keyboard.GetState(); }
                }

                /// <summary>
                /// Gets the index of the player who is currently controlling this screen,
                /// or null if it is accepting input from any player. This is used to lock
                /// the game to a specific player profile. The main menu responds to input
                /// from any connected gamepad, but whichever player makes a selection from
                /// this menu is given control over all subsequent screens, so other gamepads
                /// are inactive until the controlling player returns to the main menu.
                /// </summary>
                public PlayerIndex? ControllingPlayer
                {
                    get { return controllingPlayer; }
                    internal set { controllingPlayer = value; }
                }

                PlayerIndex? controllingPlayer;

                /// <summary>
                /// Gets whether or not this screen is serializable. If this is true,
                /// the screen will be recorded into the screen manager's state and
                /// its Serialize and Deserialize methods will be called as appropriate.
                /// If this is false, the screen will be ignored during serialization.
                /// By default, all screens are assumed to be serializable.
                /// </summary>
                public bool IsSerializable
                {
                    get { return isSerializable; }
                    protected set { isSerializable = value; }
                }

                bool isSerializable = true;

                #endregion

                #region Initialization

                /// <summary>
                /// Load graphics content for the screen.
                /// </summary>
                public virtual void LoadContent() { }

                /// <summary>
                /// Unload content for the screen.
                /// </summary>
                public virtual void UnloadContent() { }

                #endregion

                #region Update and Draw

                /// <summary>
                /// Allows the screen to run logic, such as updating the transition position.
                /// Unlike HandleInput, this method is called regardless of whether the screen
                /// is active, hidden, or in the middle of a transition.
                /// </summary>
                public virtual void Update(GameTime gameTime, bool otherScreenHasFocus,
                                           bool coveredByOtherScreen)
                {
                    this.otherScreenHasFocus = otherScreenHasFocus;

                    if (isExiting)
                    {
                        // If the screen is going away to die, it should transition off.
                        screenState = ScreenState.TransitionOff;

                        if (!UpdateTransition(gameTime, transitionOffTime, 1))
                        {
                            // When the transition finishes, remove the screen.
                            ScreenManager.RemoveScreen(this);
                        }
                    }
                    else if (coveredByOtherScreen)
                    {
                        // If the screen is covered by another, it should transition off.
                        if (UpdateTransition(gameTime, transitionOffTime, 1))
                        {
                            // Still busy transitioning.
                            screenState = ScreenState.TransitionOff;
                        }
                        else
                        {
                            // Transition finished!
                            screenState = ScreenState.Hidden;
                        }
                    }
                    else
                    {
                        // Otherwise the screen should transition on and become active.
                        if (UpdateTransition(gameTime, transitionOnTime, -1))
                        {
                            // Still busy transitioning.
                            screenState = ScreenState.TransitionOn;
                        }
                        else
                        {
                            // Transition finished!
                            screenState = ScreenState.Active;
                        }
                    }
                }

                /// <summary>
                /// Helper for updating the screen transition position.
                /// </summary>
                bool UpdateTransition(GameTime gameTime, TimeSpan time, int direction)
                {
                    // How much should we move by?
                    float transitionDelta;

                    if (time == TimeSpan.Zero)
                        transitionDelta = 1;
                    else
                        transitionDelta = (float)(gameTime.ElapsedGameTime.TotalMilliseconds /
                                                  time.TotalMilliseconds);

                    // Update the transition position.
                    transitionPosition += transitionDelta * direction;

                    // Did we reach the end of the transition?
                    if (((direction < 0) && (transitionPosition <= 0)) ||
                        ((direction > 0) && (transitionPosition >= 1)))
                    {
                        transitionPosition = MathHelper.Clamp(transitionPosition, 0, 1);
                        return false;
                    }

                    // Otherwise we are still busy transitioning.
                    return true;
                }

                /// <summary>
                /// Allows the screen to handle user input. Unlike Update, this method
                /// is only called when the screen is active, and not when some other
                /// screen has taken the focus.
                /// </summary>
                public virtual void HandleInput(InputState input) { }

                public KeyboardState currentKeyState;
                public KeyboardState lastKeyState;

                public bool IsKeyHit(Keys key)
                {
                    if (currentKeyState.IsKeyDown(key) && lastKeyState.IsKeyUp(key))
                        return true;

                    return false;
                }

                /// <summary>
                /// This is called when the screen should draw itself.
                /// </summary>
                public virtual void Draw(GameTime gameTime) { }

                #endregion

                #region Public Methods

                /// <summary>
                /// Tells the screen to serialize its state into the given stream.
                /// </summary>
                public virtual void Serialize(Stream stream) { }

                /// <summary>
                /// Tells the screen to deserialize its state from the given stream.
                /// </summary>
                public virtual void Deserialize(Stream stream) { }

                /// <summary>
                /// Tells the screen to go away. Unlike ScreenManager.RemoveScreen, which
                /// instantly kills the screen, this method respects the transition timings
                /// and will give the screen a chance to gradually transition off.
                /// </summary>
                public void ExitScreen()
                {
                    if (TransitionOffTime == TimeSpan.Zero)
                    {
                        // If the screen has a zero transition time, remove it immediately.
                        ScreenManager.RemoveScreen(this);
                    }
                    else
                    {
                        // Otherwise flag that it should transition off and then exit.
                        isExiting = true;
                    }
                }

                #endregion

                #region Helper Methods

                /// <summary>
                /// A helper method which loads assets using the screen manager's
                /// associated game content loader.
                /// </summary>
                /// <typeparam name="T">Type of asset.</typeparam>
                /// <param name="assetName">Asset name, relative to the loader root
                /// directory, and not including the .xnb extension.</param>
                /// <returns></returns>
                public T Load<T>(string assetName)
                {
                    return ScreenManager.Game.Content.Load<T>(assetName);
                }

                #endregion
            }
        }
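
    (A diagnostic sketch, not part of the sample: since base.Update is the only code that advances the exit transition and eventually calls ScreenManager.RemoveScreen, tracing the transition state from the screen itself shows quickly whether that code is ever reached. One common culprit in projects built on this sample is a GamePlayScreen.Update override that returns early, or never calls base.Update, once the game is finished.)

        // Diagnostic override for GamePlayScreen (a sketch, not sample code).
        // If TransitionPosition never reaches 1 while IsExiting is true, the
        // base class never gets the frames it needs to finish the transition.
        public override void Update(GameTime gameTime, bool otherScreenHasFocus,
                                    bool coveredByOtherScreen)
        {
            // base.Update advances TransitionPosition and calls
            // ScreenManager.RemoveScreen(this) when the transition completes.
            base.Update(gameTime, otherScreenHasFocus, coveredByOtherScreen);

            System.Diagnostics.Debug.WriteLine(string.Format(
                "GamePlayScreen state={0} pos={1} exiting={2}",
                ScreenState, TransitionPosition, IsExiting));
        }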


  • ANTS CLR and Memory Profiler In Depth Review (Part 1 of 2 – CLR Profiler)

    - by ToStringTheory
    One of the things that people might not know about me is my obsession with making my code as efficient as possible.  Many people might not realize how much of an undertaking this is, but it is surely a task as monumental as climbing Mount Everest, except this time it is a challenge for the mind…  In trying to make code efficient, there are many different factors that play a part – size of the project or solution, tiers, language used, experience and training of the programmer, technologies used, maintainability of the code – the list can go on for quite some time. I spend quite a bit of time when developing trying to determine the best way to implement a feature to achieve the efficiency that I look for.  One product that I have recently come to learn about – Red Gate ANTS Performance (CLR) and Memory profiler – gives me tools to accomplish that job more efficiently as well.  In this review, I am going to cover some of the features of the ANTS profiler set by compiling some hideous example code to test against.

    Notice

    As a member of the Geeks With Blogs Influencers program, one of the perks is the ability to review products in exchange for a free license to the program.  I have not let this affect my opinions of the product in any way, and neither Red Gate nor Geeks With Blogs has tried to influence my opinion regarding this product in any way.

    Introduction

    The ANTS Profiler pack provided by Red Gate was something that I had not heard of before receiving an email regarding an offer to review it for a license.  Since I look to make my code efficient, it was a no-brainer for me to try it out!  One thing that I have to say took me by surprise is that upon downloading and installing the program, you fill out a form with your usual contact information.  Sure enough, within 2 hours I received an email from a sales representative at Red Gate asking if she could help me achieve the most out of my trial time so it wouldn't go to waste.  After replying to her and explaining that I was looking to review its feature set, she put me in contact with someone who set up a demo session to give me a quick rundown of its features via an online meeting.  After having dealt with a massive ordeal with one of my utility companies and their complete lack of customer service, Red Gate's friendly and helpful representatives were a breath of fresh air, and something I was thankful for.

    ANTS CLR Profiler

    The ANTS CLR profiler is the thing I want to focus on the most in this post, so I am going to dive right in now.  Install was simple and took no time at all.  It installed both the CLR and Memory profilers, but also Visual Studio extensions to facilitate the usage of the profilers (click any images for full-size versions): the Visual Studio menu options (under the ANTS menu), and the window that starting the CLR Performance Profiler from the start menu yields.  If you follow the instructions after launching the program from the start menu (click File > New Profiling Session to start a new project), you are given the New Session dialog, with plenty of options for profiling.  Lots of options.  One thing I noticed is that the buttons in the lower right were half-covered by the panel of the application.  If I had to guess, I would imagine that this is caused by my DPI setting of 125%.  This is a problem I have seen in other applications as well that don't scale well to different DPI settings.
    The profiler options give you the ability to profile:

    - .NET executable
    - ASP.NET web application (hosted in IIS)
    - ASP.NET web application (hosted in IIS Express)
    - ASP.NET web application (hosted in Cassini Web Development Server)
    - SharePoint web application (hosted in IIS)
    - Silverlight 4+ application
    - Windows Service
    - COM+ server
    - XBAP (local XAML browser application)
    - Attach to an already running .NET 4 process

    Choosing each option presents a varying set of other variables/options that one can set, including application arguments, operating path, whether to record I/O performance, which performance counters to record (43 counters in all!), etc…  All in all, they give you the ability to profile many different .NET project types, and make it simple to do so.  In most cases of my using this application, I would be using the built-in Visual Studio extensions, as they automatically start a new profiling project in ANTS with the options set up, and start your program; however, Red Gate has made it easy enough to profile outside of Visual Studio as well. On the flip side of this, as someone who lives most of their work life in Visual Studio, one thing I do wish is that instead of opening an entirely separate application/GUI to perform profiling after launching, they would provide a Visual Studio panel with the information and integrate more of the profiling project information into Visual Studio.  So, now that we have an idea of what options the profiler gives us, it's time to test its abilities and features.

    Horrendous Example Code – Prime Number Generator

    One of my interests besides development is physics and math – what I went to college for.  I have especially always been interested in prime numbers, as they are something of a mystery…  So, to test the abilities of the profiler, I decided I would go ahead and write a small program, website, and library to generate prime numbers in the quantity that you ask for.  I am going to start off with some terrible code, and show how I would see the profiler being used as a development tool.

    First off, the IPrimes interface (all code is downloadable at the end of the post):

        interface IPrimes
        {
            IEnumerable<int> GetPrimes(int retrieve);
        }

    Simple enough, right?  Anything that implements the interface will (hopefully) provide an IEnumerable of int, with the quantity specified in the parameter argument.  Next, I am going to implement this interface in the most basic way:

        public class DumbPrimes : IPrimes
        {
            public IEnumerable<int> GetPrimes(int retrieve)
            {
                // store a list of primes already found
                var _foundPrimes = new List<int>() { 2, 3 };

                // if I ask for 1 or 2 primes, return what was asked for
                if (retrieve <= _foundPrimes.Count())
                    return _foundPrimes.Take(retrieve);

                // the next number to look at
                int _analyzing = 4;

                // since I already determined I don't have enough,
                // execute at least once, and until the quantity is sufficed
                do
                {
                    // assume prime until otherwise determined
                    bool isPrime = true;

                    // start dividing at 2;
                    // divide until the number is reached, or determined not prime
                    for (int i = 2; i < _analyzing && isPrime; i++)
                    {
                        // if (i) goes into _analyzing without a remainder,
                        // _analyzing is NOT prime
                        if (_analyzing % i == 0)
                            isPrime = false;
                    }

                    // if it is prime, add to found list
                    if (isPrime)
                        _foundPrimes.Add(_analyzing);

                    // increment number to analyze next
                    _analyzing++;
                } while (_foundPrimes.Count() < retrieve);

                return _foundPrimes;
            }
        }

    This is the simplest way to get primes, in my opinion.
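
    (A minimal console harness, sketched here for illustration and not part of the article's downloadable solution; it assumes the IPrimes and DumbPrimes types above are compiled into the same project. It gives a plain wall-clock baseline to compare against the profiler-attached timings discussed next:)

        using System;
        using System.Diagnostics;

        class Benchmark
        {
            static void Main()
            {
                IPrimes primes = new DumbPrimes();

                // GetPrimes builds its result list eagerly, so timing the
                // enumeration captures all of the work.
                Stopwatch sw = Stopwatch.StartNew();
                int count = 0;
                foreach (int p in primes.GetPrimes(15000))
                    count++;
                sw.Stop();

                Console.WriteLine("{0} primes in {1:0.00} s",
                                  count, sw.Elapsed.TotalSeconds);
            }
        }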
    It checks each number by the straight definition of a prime: is it divisible by anything besides 1 and itself?  I have included this code in a base class library for my solution, as I am going to use it to demonstrate a couple of features of ANTS.  This class library is consumed by a simple non-MVVM WPF application, and a simple MVC4 website.  I will not post the WPF code here inline, as it is simply an ObservableCollection<int>, a label, two textboxes, and a button.

    Starting a new Profiling Session

    So, in Visual Studio, I have just completed my first stint developing the GUI and the DumbPrimes IPrimes class, so now I want to check my code's efficiency by profiling it.  All I have to do is build the solution (I'm surprised initiating a profiling session doesn't do this, but I suppose I can understand it), and then click the ANTS menu, followed by Profile Performance.  I am then greeted by the profiler starting up and already monitoring my program live: you are provided with a realtime graph at the top, and a pane at the bottom giving you information on how to proceed.  I am going to start by asking my program to show me the first 15000 primes.

    After the program finally began responding again (I did all the work on the main UI thread – how bad!), I stopped the profiler, which did kill the process of my program too.  One important thing to note is that the profiler by default wants to give you a lot of detail about the operation – line hit counts, time per line, percent time per line, etc…  The important thing to remember is that this itself takes a lot of time.  When running my program without the profiler attached, it can generate the 15000 primes in 5.18 seconds, compared to 74.5 seconds with it – almost a 1500 percent increase.  While this may seem like a lot, remember that there is a trade-off.  It may be WAY more inefficient; however, I am able to drill down and make improvements to specific problem areas, and then decrease execution time all around.

    Analyzing the Profiling Session

    After clicking 'Stop Profiling', the process running my application stopped, and the entire execution time was automatically selected by ANTS, with the results shown below.  Now there are a number of interesting things going on here; I am going to cover each in a section of its own:

    Real Time Performance Counter Bar (top of screen)

    At the top of the screen is the real-time performance bar.  As your application is running, this will constantly update with the status of the currently selected performance counters.  A couple of cool things to note are the fact that you can drag a selection around specific time periods to drill down the detail views in the lower 2 panels to information pertaining to only that period.  After selecting a time period, you can bookmark a section and name it, so that it is easy to find later, or after being reloaded at a later time.  You can also zoom in, zoom out, or fit the graph to the space provided – useful for drilling down.

    It may be hard to see, but at the top of the processor time graph, below the time ticks but above the red usage graph, there is a green bar.  This bar shows at what times a method that is selected in the 'Call tree' panel is called.  Very cool to be able to click on a method and see at what times it made an impact.

    As I said before, ANTS provides 43 different performance counters you can hook into.
    Clicking the arrow next to the Performance tab at the top will allow you to change between different counters, if you have them selected:

    Method Call Tree, ADO.NET Database Calls, File IO – Detail Panel

    Red Gate really hit the mark here, I think.  When you select a section of the run with the graph, the call tree populates to fill a hierarchical tree of method calls, with information regarding each of the methods.  By default, methods are hidden where the source is not provided (framework-type code); however, Red Gate has integrated Reflector into ANTS, so even if you don't have source for something, you can select a method and get the source if you want.  Methods are also hidden where the impact is seen as insignificant – methods that are only executed for 1% of the time of the overall calling method's time; in other words, working on making them better is not where your efforts should be focused.  Smart!

    Source Panel – Detail Panel

    The source panel is where you can see line-level information on your code, showing the code for the currently selected method from the Method Call Tree.  If the code is not available, Reflector takes care of it and shows the code anyway!

    As you can notice, there does seem to be a problem with how ANTS determines what line a call is actually completed on.  I have suspicions that this may be due to some of the inline code optimizations that the CLR applies upon compilation of the assembly.  In a method with comments, the problem is much more severe: as you can see here, apparently the most offending code in my base library was a comment – *gasp*!  Removing the comments does help quite a bit; however, I hope that Red Gate works on their counter algorithm soon to improve the logic on positioning for statistics.  I did a small test just to demonstrate that the lines are correct without comments.  For me, it isn't a deal breaker, as I can usually determine the correct placements by looking at the application code in the region and determining what makes sense, but it is something that would probably build up some irritation with time.

    Feature – Suggest Methods for Optimization

    A neat feature, to really help those in need of a pointer, is the menu option under Tools to automatically suggest methods to optimize/improve.  Nice feature – clicking it filters the call tree and stars methods that it thinks are good candidates for optimization.  I do wish that they would have made it more visible for those of us who aren't great on sight.

    Process Integration

    I do think that this could have a place in my process.  After experimenting with the profiler, I think it would be a great benefit to do some development and testing, and then, after all the bugs are worked out, use the profiler to check on things and make sure nothing seems like it is hogging more than its fair share.  For example, with this program, I would have developed it, run it, tested it – it works, but slowly.  After looking at the profiler and seeing the massive amount of time spent in 1 method, I might go ahead and try to re-implement IPrimes (I actually would probably rewrite the offending code, but so that I can distribute both sets of code easily, I'm just going to make another implementation of IPrimes).
    Using two pieces of knowledge about prime numbers can make this method MUCH more efficient – every prime greater than 3 falls into one of the two buckets 6k±1, and a number is prime if it is not divisible by any of the primes before it:

        public class SmartPrimes : IPrimes
        {
            public IEnumerable<int> GetPrimes(int retrieve)
            {
                // store a list of primes already found
                var _foundPrimes = new List<int>() { 2, 3 };

                // if I ask for 1 or 2 primes, return what was asked for
                if (retrieve <= _foundPrimes.Count())
                    return _foundPrimes.Take(retrieve);

                // the next value of k to look at
                int _k = 1;

                // since I already determined I don't have enough,
                // execute at least once, and until the quantity is sufficed
                do
                {
                    // assume prime until otherwise determined
                    bool isPrime = true;
                    int potentialPrime;

                    // analyze 6k-1
                    // assign the value to potential
                    potentialPrime = 6 * _k - 1;

                    // if any earlier prime divides this, it is NOT a prime number
                    // using PLINQ for a quick boost
                    isPrime = !_foundPrimes.AsParallel()
                                           .Any(prime => potentialPrime % prime == 0);

                    // if it is prime, add to found list
                    if (isPrime)
                        _foundPrimes.Add(potentialPrime);

                    if (_foundPrimes.Count() == retrieve)
                        break;

                    // analyze 6k+1
                    // assign the value to potential
                    potentialPrime = 6 * _k + 1;

                    // if any earlier prime divides this, it is NOT a prime number
                    // using PLINQ for a quick boost
                    isPrime = !_foundPrimes.AsParallel()
                                           .Any(prime => potentialPrime % prime == 0);

                    // if it is prime, add to found list
                    if (isPrime)
                        _foundPrimes.Add(potentialPrime);

                    // increment k to analyze next
                    _k++;
                } while (_foundPrimes.Count() < retrieve);

                return _foundPrimes;
            }
        }

    Now there are definitely more things I can do to make this more efficient, but for the scope of this example, I think this is fine (but still hideous)!  Profiling this now yields a happy surprise: 27 seconds to generate the 15000 primes with the profiler attached, and only 1.43 seconds without.

    One important thing I wanted to call out, though, was the performance graph: notice anything odd?  The %Processor time is above 100%.  This is because there is now more than 1 core in the operation.  A better label for the chart in my mind would have been %Core time, but to each their own.  Another odd thing I noticed was that the profiler seemed to be spot on this time with the line details in source for my DumbPrimes class, even with comments.  Odd.

    Profiling Web Applications

    The last thing that I wanted to cover, which means a lot to me as a web developer, is the great amount of work that Red Gate put into the profiler for profiling web applications.  In my solution, I have a simple MVC4 application set up with 1 page and a single input form that will output prime values as my WPF app did.  Launching the profiler from Visual Studio as before, nothing is really different in the profiler window; however, I did receive a UAC prompt for a Red Gate helper app to integrate with the web server without notification.  After requesting 500, 1000, 2000, and 5000 primes and looking at the profiler session, things are slightly different from before: there are 4 spikes of activity in the processor time graph, but there is also something new in the call tree.  That's right – ANTS will actually group method calls by GET/POST operations, so it is easier to find out what action/page is giving the largest problems…  Pretty cool in my mind!

    Overview

    Overall, I think that the Red Gate ANTS CLR Profiler has a lot to offer; however, I think it also has a long way to go.
    3 Biggest Pros:

    - Ability to easily drill down from the time graph, to method calls, to source code
    - Wide variety of counters to choose from when profiling your application
    - Excellent integration/grouping of methods being called from web applications by request – BRILLIANT!

    3 Biggest Cons:

    - Issue regarding line details in source view
    - Nitpick – Processor time vs. Core time
    - Nitpick – Lack of full integration with Visual Studio

    Ratings

    Ease of Use (7/10) – I marked down here because of the problems with the line-level details and the extra work that entails, and the lack of better integration with Visual Studio.

    Effectiveness (10/10) – I believe that the profiler does EXACTLY what it purports to do.  Especially with its large variety of performance counters – a definite plus!

    Features (9/10) – Besides the real-time performance monitoring and the drill-downs that I've shown here, ANTS also has great integration with ADO.NET, with the ability to show database queries run by your application in the profiler.  This, with the line-level details, the web request grouping, Reflector integration, and various options to customize your profiling session, makes for a great set of features!

    Customer Service (10/10) – My entire experience with Red Gate personnel has been nothing but good.  Their people are friendly, helpful, and happy!

    UI / UX (8/10) – The interface is very easy to get around, and all of the options are easy to find.  With a little bit of poking around, you'll be optimizing Hello World in no time flat!

    Overall (8/10) – Overall, I am happy with the Performance Profiler and its features, as well as with the service I received when working with the Red Gate personnel.  I WOULD recommend you try the application and see if it would fit into your process, BUT remember there are still some kinks in it that will hopefully be worked out.

    My next post will definitely be shorter (hopefully), but thank you for reading up to here, or skipping ahead!  Please, if you do try the product, drop me a message and let me know what you think!  I would love to hear any opinions you may have on the product.

    Code

    Feel free to download the code I used above – download via DropBox


  • Automatic Standby Recreation for Data Guard

    - by pablo.boixeda(at)oracle.com
    Hi,

    Unfortunately, sometimes a standby instance needs to be recreated. This can happen for many reasons, such as lost archive logs, lost standby data files, or a failover, among others. This is why we wanted to have one script to recreate standby instances in an easy way.

    This script recreates the standby considering some prereqs:

    - Database version should be at least 11gR1
    - Dummy instance started on the standby node (seeking to improve this so it won't be needed)
    - Broker configuration hasn't been removed
    - In our case we have two TNSNAMES files, one for the standby creation (using the SID) and the other one for production using service names (including the broker service name)
    - Some environment variables set up by the environment db script (like ORACLE_HOME, PATH...)
    - The directory tree should not have been modified on the standby host

    We are currently using it on our 11gR2 Data Guard tests. Any improvements will be welcome!

        #!/bin/ksh
        ###    NAME / VERSION
        ###       recrea_dg.sh   v.1.00
        ###
        ###    DESCRIPTION
        ###       Recreation of the standby
        ###
        ###    RETURNS
        ###       0 Standby created successfully
        ###       1 Failure
        ###
        ###    NOTES
        ###       This shell script MUST NOT be modified.
        ###       All required variables and constants are taken from the environment.
        ###
        ###    MODIFIED BY:       DATE:         COMMENTS:
        ###    ---------------    ----------    -------------------------------------
        ###      Oracle           15/02/2011    Created.
        ###

        ###
        ### Load environment
        ###
        V_ADMIN_DIR=`dirname $0`
        . ${V_ADMIN_DIR}/entorno_bd.sh 1>>/dev/null
        if [ $? -ne 0 ]
        then
          echo "Error Loading the environment."
          exit 1
        fi

        V_RET=0
        V_DATE=`/bin/date`
        V_DATE_F=`/bin/date +%Y%m%d_%H%M%S`
        V_LOGFILE=${V_TRAZAS}/recrea_dg_${V_DATE_F}.log
        exec 4>&1
        tee ${V_LOGFILE} >&4 |&
        exec 1>&p 2>&1

        ###
        ### Variables for recreating the Data Guard
        ###
        V_DB_BR=`echo ${V_DB_NAME}|tr '[:lower:]' '[:upper:]'`
        if [ "${ORACLE_SID}" = "${V_DB_NAME}01" ]
        then
                V_LOCAL_BR=${V_DB_BR}'01'
                V_REMOTE_BR=${V_DB_BR}'02'
        else
                V_LOCAL_BR=${V_DB_BR}'02'
                V_REMOTE_BR=${V_DB_BR}'01'
        fi

        echo " Getting local instance ROLE ${ORACLE_SID} ..."
        sqlplus -s /nolog 1>>/dev/null 2>&1 <<-!
        whenever sqlerror exit 1
        connect / as sysdba
        variable salida number
        declare
          v_database_role v\$database.database_role%type;
        begin
          select database_role into v_database_role from v\$database;
          :salida := case v_database_role
               when 'PRIMARY' then 2
               when 'PHYSICAL STANDBY' then 3
               else 4
             end;
        end;
        /
        exit :salida
        !
        case $? in
        1) echo " ERROR: Cannot get instance ROLE ." | tee -a ${V_LOGFILE} 2>&1
           V_RET=1 ;;
        2) echo " Local Instance with PRIMARY role." | tee -a ${V_LOGFILE} 2>&1
           V_DB_ROLE_LCL=PRIMARY ;;
        3) echo " Local Instance with PHYSICAL STANDBY role." | tee -a ${V_LOGFILE} 2>&1
           V_DB_ROLE_LCL=STANDBY ;;
        *) echo " ERROR: UNKNOWN ROLE." | tee -a ${V_LOGFILE} 2>&1
           V_RET=1 ;;
        esac

        if [ "${V_DB_ROLE_LCL}" = "PRIMARY" ]
        then
                echo "####################################################################" | tee -a ${V_LOGFILE} 2>&1
                echo "${V_DATE} - Recreating STANDBY Instance." | tee -a ${V_LOGFILE} 2>&1
                echo "" | tee -a ${V_LOGFILE} 2>&1
                echo "DATAFILES, CONTROL FILES, REDO LOGS and ARCHIVE LOGS in standby instance ${V_REMOTE_BR} will be removed" | tee -a ${V_LOGFILE} 2>&1
                echo "" | tee -a ${V_LOGFILE} 2>&1
                V_PRIMARY=${V_LOCAL_BR}
                V_STANDBY=${V_REMOTE_BR}
        fi

        if [ "${V_DB_ROLE_LCL}" = "STANDBY" ]
        then
                echo "####################################################################" | tee -a ${V_LOGFILE} 2>&1
                echo "${V_DATE} - Recreating STANDBY Instance." | tee -a ${V_LOGFILE} 2>&1
                echo "" | tee -a ${V_LOGFILE} 2>&1
                echo "DATAFILES, CONTROL FILES, REDO LOGS and ARCHIVE LOGS in standby instance ${V_LOCAL_BR} will be removed" | tee -a ${V_LOGFILE} 2>&1
                echo "" | tee -a ${V_LOGFILE} 2>&1
                V_PRIMARY=${V_REMOTE_BR}
                V_STANDBY=${V_LOCAL_BR}
        fi

        # Load the host variables
        PRY_HOST=`sqlplus /nolog << EOF | grep KEEP | sed 's/KEEP//;s/[   ]//g'
        connect sys/${V_DB_PWD}@${V_PRIMARY} as sysdba
        select 'KEEP',host_name from v\$instance;
        EOF`
        SBY_HOST=`sqlplus /nolog << EOF | grep KEEP | sed 's/KEEP//;s/[   ]//g'
        connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba
        select 'KEEP',host_name from v\$instance;
        EOF`
        echo "The primary HOST is: ${PRY_HOST}" | tee -a ${V_LOGFILE} 2>&1
        echo "The standby HOST is: ${SBY_HOST}" | tee -a ${V_LOGFILE} 2>&1
        echo "" | tee -a ${V_LOGFILE} 2>&1

        ##
        ## Stop the STANDBY instance
        ##
        V_DATE=`/bin/date`
        echo "${V_DATE} - Shutting down Standby instance" | tee -a ${V_LOGFILE} 2>&1
        echo "" | tee -a ${V_LOGFILE} 2>&1
        echo "********************************************************************************" | tee -a ${V_LOGFILE} 2>&1

        SBY_STATUS=`sqlplus /nolog << EOF | grep KEEP | sed 's/KEEP//;s/[   ]//g'
        connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba
        select 'KEEP',status from v\$instance;
        EOF`
        if [ ${SBY_STATUS} = 'STARTED' ] || [ ${SBY_STATUS} = 'MOUNTED' ] || [ ${SBY_STATUS} = 'OPEN' ]
        then
                echo "${V_DATE} - Standby instance shutdown in progress..." | tee -a ${V_LOGFILE} 2>&1
                echo "" | tee -a ${V_LOGFILE} 2>&1
                echo "********************************************************************************" | tee -a ${V_LOGFILE} 2>&1
                sqlplus -s /nolog 1>>/dev/null 2>&1 <<-!
                whenever sqlerror exit 1
                connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba
                shutdown abort
                !
        fi

        V_DATE=`/bin/date`
        echo ""
        echo "${V_DATE} - Standby instance stopped" | tee -a ${V_LOGFILE} 2>&1
        echo "" | tee -a ${V_LOGFILE} 2>&1
        echo "********************************************************************************" | tee -a ${V_LOGFILE} 2>&1

        ##
        ## Remove the database files
        ##
        V_SBY_SID=`echo ${V_STANDBY}|tr '[:upper:]' '[:lower:]'`
        V_PRY_SID=`echo ${V_PRIMARY}|tr '[:upper:]' '[:lower:]'`
        ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/data/*.dbf
        ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/arch/*.arc
        ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/ctl/*.ctl
        ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/redo/*.ctl
        ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/redo/*.rdo

        ##
        ## Startup nomount stby instance
        ##
        V_DATE=`/bin/date`
        echo "" | tee -a ${V_LOGFILE} 2>&1
        echo "${V_DATE} - Starting DUMMY Standby Instance " | tee -a ${V_LOGFILE} 2>&1
        echo "" | tee -a ${V_LOGFILE} 2>&1
        echo "********************************************************************************" | tee -a ${V_LOGFILE} 2>&1
        ssh ${SBY_HOST} touch /home/oracle/init_dg.ora
        ssh ${SBY_HOST} 'echo "DB_NAME='${V_DB_NAME}'">>/home/oracle/init_dg.ora'
        ssh ${SBY_HOST} touch /home/oracle/start_dummy.sh
        ssh ${SBY_HOST} 'echo "ORACLE_HOME=/opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2 ">>/home/oracle/start_dummy.sh'
        ssh ${SBY_HOST} 'echo "export ORACLE_HOME">>/home/oracle/start_dummy.sh'
        ssh ${SBY_HOST} 'echo "PATH=\$ORACLE_HOME/bin:\$PATH">>/home/oracle/start_dummy.sh'
        ssh ${SBY_HOST} 'echo "export PATH">>/home/oracle/start_dummy.sh'
        ssh ${SBY_HOST} 'echo "ORACLE_SID='${V_SBY_SID}'">>/home/oracle/start_dummy.sh'
        ssh ${SBY_HOST} 'echo "export ORACLE_SID">>/home/oracle/start_dummy.sh'
        ssh ${SBY_HOST} 'echo "sqlplus -s /nolog <<-!" >>/home/oracle/start_dummy.sh'
        ssh ${SBY_HOST} 'echo "      whenever sqlerror exit 1 ">>/home/oracle/start_dummy.sh'
        ssh ${SBY_HOST} 'echo "      connect / as sysdba ">>/home/oracle/start_dummy.sh'
        ssh ${SBY_HOST} 'echo "      startup nomount pfile='\''/home/oracle/init_dg.ora'\''">>/home/oracle/start_dummy.sh'
        ssh ${SBY_HOST} 'echo "! ">>/home/oracle/start_dummy.sh'
        ssh ${SBY_HOST} 'chmod 744 /home/oracle/start_dummy.sh'
        ssh ${SBY_HOST} 'sh /home/oracle/start_dummy.sh'
        ssh ${SBY_HOST} 'rm /home/oracle/start_dummy.sh'
        ssh ${SBY_HOST} 'rm /home/oracle/init_dg.ora'

        ##
        ## TNSNAMES change, specific for RMAN duplicate
        ##
        V_DATE=`/bin/date`
        echo "" | tee -a ${V_LOGFILE} 2>&1
        echo "${V_DATE} - Setting up TNSNAMES in PRIMARY host " | tee -a ${V_LOGFILE} 2>&1
        echo "" | tee -a ${V_LOGFILE} 2>&1
        echo "********************************************************************************" | tee -a ${V_LOGFILE} 2>&1
        ssh ${PRY_HOST} 'cp /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora.inst  /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora'

        V_DATE=`/bin/date`
        echo "" | tee -a ${V_LOGFILE} 2>&1
        echo "${V_DATE} - Starting STANDBY creation with RMAN.. " | tee -a ${V_LOGFILE} 2>&1
        echo "" | tee -a ${V_LOGFILE} 2>&1
        echo "********************************************************************************" | tee -a ${V_LOGFILE} 2>&1
        rman<<-! >>${V_LOGFILE}
        connect target sys/${V_DB_PWD}@${V_PRIMARY}
        connect auxiliary sys/${V_DB_PWD}@${V_STANDBY}
        run {
        allocate channel prmy1 type disk;
        allocate channel prmy2 type disk;
        allocate channel prmy3 type disk;
        allocate channel prmy4 type disk;
        allocate auxiliary channel stby type disk;
        duplicate target database for standby from active database
        dorecover
        spfile
        parameter_value_convert '${V_PRY_SID}','${V_SBY_SID}'
        set control_files='/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/ctl/control01.ctl','/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/redo/control02.ctl'
        set db_file_name_convert='/opt/oracle/db/db${V_DB_NAME}/${V_PRY_SID}/','/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/'
        set log_file_name_convert='/opt/oracle/db/db${V_DB_NAME}/${V_PRY_SID}/','/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/'
        set 'db_unique_name'='${V_SBY_SID}'
        set log_archive_config='DG_CONFIG=(${V_PRIMARY},${V_STANDBY})'
        set fal_client='${V_STANDBY}'
        set fal_server='${V_PRIMARY}'
        set log_archive_dest_1='LOCATION=/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/arch DB_UNIQUE_NAME=${V_SBY_SID} MANDATORY VALID_FOR=(ALL_LOGFILES,ALL_ROLES)'
        set log_archive_dest_2='SERVICE="${V_PRIMARY}"','SYNC AFFIRM DB_UNIQUE_NAME=${V_PRY_SID} DELAY=0 MAX_FAILURE=0 REOPEN=300 REGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)'
        nofilenamecheck
        ;
        }
        !
        # capture RMAN's exit status before any other command overwrites $?
        V_RMAN_RET=$?
        V_DATE=`/bin/date`
        if [ ${V_RMAN_RET} -ne 0 ]
        then
                echo ""
                echo "${V_DATE} - Error creating STANDBY instance"
                echo ""
                echo "********************************************************************************"
        else
                echo ""
                echo "${V_DATE} - STANDBY instance created SUCCESSFULLY "
                echo ""
                echo "********************************************************************************"
        fi

        sqlplus -s /nolog 1>>/dev/null 2>&1 <<-!
                whenever sqlerror exit 1
                connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba
                alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=${SBY_HOST})(PORT=1544))' scope=both;
                alter system set service_names='${V_DB_NAME}.eu.roca.net,${V_SBY_SID}.eu.roca.net,${V_SBY_SID}_DGMGRL.eu.roca.net' scope=both;
                alter database recover managed standby database using current logfile disconnect from session;
                alter system set dg_broker_start=true scope=both;
        !

        ##
        ## TNSNAMES change, back to production mode
        ##
        V_DATE=`/bin/date`
        echo " " | tee -a ${V_LOGFILE} 2>&1
        echo "${V_DATE} - Restoring TNSNAMES in PRIMARY " | tee -a ${V_LOGFILE} 2>&1
        echo "" | tee -a ${V_LOGFILE} 2>&1
        echo "********************************************************************************" | tee -a ${V_LOGFILE} 2>&1
        ssh ${PRY_HOST} 'cp /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora.prod  /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora'

        echo "" | tee -a ${V_LOGFILE} 2>&1
        echo "${V_DATE} -  Waiting for media recovery before checking the DATA GUARD Broker" | tee -a ${V_LOGFILE} 2>&1
        echo "" | tee -a ${V_LOGFILE} 2>&1
        echo "********************************************************************************" | tee -a ${V_LOGFILE} 2>&1
        sleep 200
        dgmgrl <<-! | grep SUCCESS 1>/dev/null 2>&1
            connect ${V_DB_USR}/${V_DB_PWD}@${V_STANDBY}
            show configuration verbose;
        !
        if [ $? -ne 0 ] ; then
                echo "       ERROR: The Broker status is not SUCCESS" | tee -a ${V_LOGFILE} 2>&1
                V_RET=1
        else
                echo "      DATA GUARD OK " | tee -a ${V_LOGFILE} 2>&1
                V_RET=0
        fi
        exit ${V_RET}

    Hope it helps.


  • ANTS Memory Profiler 7.0 Review

    - by Michael B. McLaughlin
    (This is my first review as a part of the GeeksWithBlogs.net Influencers program. It’s a program in which I (and the others who have been selected for it) get the opportunity to check out new products and services and write reviews about them. We don’t get paid for this, but we do generally get to keep a copy of the software or retain an account for some period of time on the service that we review. In this case I received a copy of Red Gate Software’s ANTS Memory Profiler 7.0, which was released in January. I don’t have any upgrade rights, nor is my review guided, restrained, influenced, or otherwise controlled by Red Gate or anyone else. But I do get to keep the software license. I will always be clear about what I received whenever I do a review – I leave it up to you to decide whether you believe I can be objective. I believe I can be. If I used something and really didn’t like it, keeping a copy of it wouldn’t be worth anything to me. In that case, though, I would simply uninstall/deactivate/whatever the software or service and tell the company what I didn’t like about it so they could (hopefully) make it better in the future. I don’t think it’d be polite to write up a terrible review, nor do I think it would be a particularly good use of my time. There are people who get paid for a living to review things, so I leave it to them to tell you what they think is bad and why. I’ll only spend my time telling you about things I think are good.)

    Overview of Common .NET Memory Problems

    When coming to the land of managed memory from the wilds of unmanaged code, it’s easy to say to one’s self, “Wow! Now I never have to worry about memory problems again!” But this simply isn’t true. Managed code environments, such as .NET, make many, many things easier. You will never have to worry about memory corruption due to a bad pointer, for example (unless you’re working with unsafe code, of course). But managed code has its own set of memory concerns.

    For example, failing to unsubscribe from events when you are done with them leaves the publisher of an event with a reference to the subscriber. If you eliminate all of your own references to the subscriber, then that memory is effectively lost, since the GC won’t delete it because of the publishing object’s reference. When the publishing object itself becomes subject to garbage collection, you’ll finally get that memory back, but that could take a very long time depending on the life of the publisher.

    Another common source of resource leaks is failing to properly release unmanaged resources. When writing a class that contains members that hold unmanaged resources (e.g. any of the Stream-derived classes, IsolatedStorageFile, most classes ending in “Reader” or “Writer”), you should always implement IDisposable, making sure to use a properly written Dispose method. And when you are using an instance of a class that implements IDisposable, you should always make sure to use a ‘using’ statement in order to ensure that the object’s unmanaged resources are disposed of properly. (A ‘using’ statement is a nicer, cleaner-looking, and easier-to-use version of a try-finally block; the compiler actually translates it as though it were a try-finally block. Note that Code Analysis warning 2202 (CA2202) will often be triggered by nested using blocks. A properly written Dispose method ensures that it only runs once, such that calling Dispose multiple times should not be a problem. Nonetheless, CA2202 exists, and if you want to avoid triggering it then you should write your code such that only the innermost IDisposable object uses a ‘using’ statement, with any outer code making use of appropriate try-finally blocks instead.)
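
    (A minimal sketch illustrating both leak patterns just described; the Publisher and Subscriber classes are invented for illustration and are not from the ANTS samples. The subscriber unhooks its event handler and releases its disposable member in Dispose, so a ‘using’ block or an explicit Dispose call is enough to keep the publisher from pinning it in memory:)

        using System;
        using System.IO;

        class Publisher
        {
            public event EventHandler SomethingHappened;

            public void Raise()
            {
                EventHandler handler = SomethingHappened;
                if (handler != null)
                    handler(this, EventArgs.Empty);
            }
        }

        class Subscriber : IDisposable
        {
            private readonly Publisher publisher;
            private readonly MemoryStream buffer =
                new MemoryStream(new byte[10 * 1024 * 1024]); // a 10 MB payload
            private bool disposed;

            public Subscriber(Publisher publisher)
            {
                this.publisher = publisher;

                // The publisher's delegate list now references this instance,
                // so the publisher keeps it (and its buffer) alive.
                publisher.SomethingHappened += OnSomethingHappened;
            }

            private void OnSomethingHappened(object sender, EventArgs e) { /* ... */ }

            public void Dispose()
            {
                if (disposed)
                    return;            // safe to call more than once

                publisher.SomethingHappened -= OnSomethingHappened;
                buffer.Dispose();      // release the IDisposable member too
                disposed = true;
            }
        }

    (Forget the ‘-=’ line and a long-lived publisher keeps every dead subscriber reachable from a GC root – exactly the kind of reference chain that the Instance Retention Graph described below makes visible.)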
Then, of course, there are situations where you are operating in a memory-constrained environment, or else you want to limit or even eliminate allocations within a certain part of your program (e.g. within the main game loop of an XNA game) in order to avoid having the GC run. On the Xbox 360 and Windows Phone 7, for example, the GC runs for every 1 MB of heap allocations you make; the added time of a GC collection can cause a game to drop frames or run slowly, making it look bad. Eliminating allocations (or else minimizing them and calling an explicit Collect at an appropriate time) is a common way of avoiding this (the other way is to simplify your heap so that the GC's latency is low enough not to cause performance issues).
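As a concrete illustration of that allocation-free style, here is a minimal C# sketch of the pre-allocation pattern; the ParticleSystem and Particle types are hypothetical, and the idea is simply that the steady-state Update path allocates nothing:

using System.Collections.Generic;

// Everything the loop needs is allocated once, up front, so that
// steady-state Update calls perform no heap allocations at all.
class ParticleSystem
{
    private readonly Particle[] particles;  // pooled, never reallocated
    private readonly List<int> liveIndices; // reused each frame

    public ParticleSystem(int capacity)
    {
        particles = new Particle[capacity];
        for (int i = 0; i < capacity; i++)
            particles[i] = new Particle();
        liveIndices = new List<int>(capacity);
    }

    public void Update(float elapsedSeconds)
    {
        liveIndices.Clear(); // Clear() keeps the capacity; no allocation
        for (int i = 0; i < particles.Length; i++)
        {
            particles[i].Age += elapsedSeconds; // mutate in place instead of
            if (particles[i].Age < 5f)          // creating replacement objects
                liveIndices.Add(i);
        }
    }
}

class Particle
{
    public float Age;
}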
ANTS Memory Profiler 7.0

When the opportunity to review Red Gate's recently released ANTS Memory Profiler 7.0 arose, I jumped at it. In order to review it, I was given a free copy (which does not include upgrade rights for future versions) which I am allowed to keep. For those of you who are familiar with ANTS Memory Profiler, you can find a list of new features and enhancements here. If you are an experienced .NET developer who is familiar with .NET memory management issues, ANTS Memory Profiler is great. More importantly still, if you are new to .NET development or you have no experience or limited experience with memory profiling, ANTS Memory Profiler is awesome. From the very beginning, it guides you through the process of memory profiling. If you're experienced and just want to dive in, however, it doesn't get in your way. The help items are well designed and located right next to the UI controls so that they are easy to find without being intrusive. When you first launch it, it presents you with a "Getting Started" screen that contains links to "Memory profiling video tutorials", "Strategies for memory profiling", and the "ANTS Memory Profiler forum". I'm normally the kind of person who looks at a screen like that only to find the "Don't show this again" checkbox. Since I was doing a review, though, I decided I should examine them. I was pleasantly surprised. The overview video clocks in at three minutes and fifty seconds. It begins by showing you how to get started profiling an application. It explains that profiling is done by taking memory snapshots periodically while your program is running and then comparing them. ANTS Memory Profiler (I'm just going to call it "ANTS MP" from here) analyzes these snapshots in the background while your application is running. It briefly mentions a new feature in version 7, an API that gives you the ability to trigger snapshots from within your application's source code (more about this below). You can also, and this is the more common way you would do it, take a memory snapshot at any time from within the ANTS MP window by clicking the "Take Memory Snapshot" button in the upper right corner. The overview video goes on to demonstrate a basic profiling session on an application that pulls information from a database and displays it. It shows how to switch which snapshots you are comparing, explains the different sections of the Summary view and what they are showing, and proceeds to show you how to investigate memory problems using the "Instance Categorizer" to track the path from an object (or set of objects) to the GC's root in order to find what things along the path are holding a reference to it/them. For a set of objects, you can then click on it and get the "Instance List" view. This displays all of the individual objects (including their individual sizes, values, etc.) of that type which share the same path to the GC root. You can then click on one of the objects to generate an "Instance Retention Graph" view. This lets you track directly up the reference chain for that individual object. In the overview video, it turned out that there was an event handler holding on to a reference, thereby keeping in memory a large number of strings that should have been freed. Lastly the video shows the "Class List" view, which lets you dig in deeply to find problems that might not have been clear when following the previous workflow. Once you have at least one memory snapshot you can begin analyzing. The main interface is in the "Analysis" tab. You can also switch to the "Session Overview" tab, which gives you several bar charts highlighting basic memory data about the snapshots you've taken. If you hover over the individual bars (and the individual colors in bars that have more than one), you will see a detailed text description of what the bar is representing visually. The Session Overview is good for a quick summary of memory usage and information about the different heaps. You are going to spend most of your time in the Analysis tab, but it's good to remember that the Session Overview is there to give you some quick feedback on basic memory usage stats. As described above in the summary of the overview video, there is a certain natural workflow to the Analysis tab. You'll spin up your application and take some snapshots at various times, such as before and after clicking a button to open a window, or before and after closing a window. Taking these snapshots lets you examine what is happening with memory. You would normally expect that a lot of memory would be freed up when closing a window or exiting a document. By taking snapshots before and after performing an action like that, you can see whether or not the memory is really being freed. If you already know an area that's giving you trouble, you can run your application just like normal until just before getting to that part, and then take a few strategic snapshots that should help you pin down the problem. Something the overview didn't go into is how to use the "Filters" section at the bottom of ANTS MP together with the Class List view in order to narrow things down. The video tutorials page has a nice three-minute intro video called "How to use the filters". It's a nice introduction and covers some of the basics. I'm going to cover a bit more because I think filters are a really neat, really helpful feature. Large programs can bring up thousands of classes. Even simple programs can instantiate far more classes than you might realize. In a basic .NET 4 WPF application, for example (and when I say basic, I mean just MainWindow.xaml with a button added to it), the unfiltered Class List view will have in excess of 1000 classes (my simple test app had anywhere from 1066 to 1148 classes depending on which snapshot I was using as the "Current" snapshot).
This is amazing in some ways, as it shows you in stark detail just how immensely powerful the WPF framework is. But hunting through 1100 classes isn't productive, no matter how cool it is that there are that many classes instantiated and doing all sorts of awesome things. Let's say you wanted to examine just the classes your application contains source code for (in my simple example, that would be MainWindow and App). Under "Basic Filters", click on "Classes with source" under "Show only…". Voilà. Down from 1070 classes in the snapshot I was using as "Current" to 2 classes. If you then click on a class's name, it will show you (to the right of the class name) two little icon buttons. Hover over them and you will see that you can click one to view the Instance Categorizer for the class and the other to view the Instance List for the class. You can also show classes based on which heap they live on. If you chose both a Baseline snapshot and a Current snapshot, then you can use the "Comparing snapshots" filters to show only: "New objects"; "Surviving objects"; "Survivors in growing classes"; or "Zombie objects" (if you aren't sure what one of these means, you can click the helpful "?" in a green circle icon to bring up a popup that explains them and provides context). Remember that your selection(s) under the "Show only…" heading will still apply, so you should update those selections to make sure you are seeing the view you want. There are also links under the "What is my memory problem?" heading that can help you diagnose the problems you are seeing, including one for "I don't know which kind I have" for situations where you know generally that your application has some problem but aren't sure what kind of problem the behavior you have been seeing (OutOfMemoryExceptions, continually growing memory usage, larger memory use than expected at certain points in the program) indicates. The Basic Filters are not the only filters there are. "Filter by Object Type" gives you the ability to filter by: "Objects that are disposable"; "Objects that are/are not disposed"; "Objects that are/are not GC roots" (GC roots are things like static variables); and "Objects that implement _______". "Objects that implement" is particularly neat. Once you check the box, you can then add one or more classes and interfaces that an object must implement in order to survive the filtering. Lastly there is "Filter by Reference", which gives you the option to pare down the list based on whether an object is "Kept in memory exclusively by" a particular item, a class/interface, or a namespace; whether an object is "Referenced by" one or more of those choices; and whether an object is "Never referenced by" one or more of those choices. Remember that filtering is cumulative, so anything you had set in one of the filter sections remains in effect unless and until you go back and change it. There's quite a bit more to ANTS MP (it's a very full-featured product), but I think I touched on all of the most significant pieces. You can use it to debug: a .NET executable; an ASP.NET web application (running on IIS); an ASP.NET web application (running on Visual Studio's built-in web development server); a Silverlight 4 browser application; a Windows service; a COM+ server; and even something called an XBAP (local XAML browser application). You can also attach to a .NET 4 process to profile an application that's already running.
The startup screen also has a large number of "Charting Options" that let you adjust which statistics ANTS MP should collect. The default selection is a good, minimal set. It's worth your time to browse through the charting options to examine other statistics that may also help you diagnose a particular problem. The more statistics ANTS MP collects, the longer collection takes, so just turning everything on is probably a bad idea. But the option to selectively add in additional performance counters from the extensive list could be a very helpful thing for your memory profiling, as it lets you see additional data that might provide clues about a particular problem that has been bothering you. ANTS MP integrates very nicely with all versions of Visual Studio that support plugins (i.e. all of the non-Express versions). Just note that if you choose "Profile Memory" from the "ANTS" menu, it will launch profiling for whichever project you have set as the Startup project. One quick tip from my experience so far using ANTS MP: if you want to properly understand the memory usage of an application you've written, first create an "empty" version of the type of project you are going to profile (a WPF application, an XNA game, etc.) and do a quick profiling session on that, so that you know the baseline memory usage of the framework itself. By "empty" I mean just create a new project of that type in Visual Studio, then compile it and run it with profiling; don't do anything special or add in anything (except perhaps for any external libraries you're planning to use). The first thing I tried ANTS MP out on was a demo XNA project of an editor that I've been working on for quite some time that involves a custom extension to XNA's content pipeline. The first time I ran it and saw the unmanaged memory usage, I was convinced I had some horrible bug that was creating extra copies of texture data (the demo project didn't have a lot of texture data, so when I saw a lot of unmanaged memory I instantly figured I was doing something wrong). Then I thought to run an empty project through, and when I saw that the amount of unmanaged memory was virtually identical, it dawned on me that the CLR itself sits in unmanaged memory and that (thankfully) there was nothing wrong with my code! Quite a relief. Earlier, when discussing the overview video, I mentioned the API that lets you take snapshots from within your application. I gave it a quick trial and it's very easy to integrate and use, and it's a really nice addition (especially for projects where you want to know what allocations, if any, there are in a specific, complicated section of code). The only concern I had was that if I hadn't watched the overview video I might never have known it existed. Even then it took me five minutes of hunting around Red Gate's website before I found the "Taking snapshots from your code" article that explains what DLL you need to add as a reference and what method of what class you should call in order to take an automatic snapshot (including the helpful warning to wrap the call in a try-catch block since, under certain circumstances, such as calling it more than 5 times in 30 seconds, it can raise an exception).
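In practice the integration amounts to a few lines. The sketch below shows the shape of it, but note that the namespace and method names here are placeholders I made up for illustration, not the documented API; the real assembly, class, and method names are the ones given in the "Taking snapshots from your code" article.

using System;
// Placeholder namespace standing in for the Red Gate snapshot assembly
// that the "Taking snapshots from your code" article tells you to reference.
using RedGate.MemoryProfiler;

class DocumentImporter
{
    public void ImportLargeDocument(string path)
    {
        // ... the complicated section whose allocations we want to examine ...

        try
        {
            // Placeholder call: triggers a snapshot in the attached
            // ANTS MP session at exactly this point in the code.
            MemoryProfiler.TakeSnapshot();
        }
        catch (Exception)
        {
            // The API can throw under certain circumstances (e.g. more than
            // 5 snapshot requests in 30 seconds), hence the try-catch block
            // the article advises.
        }
    }
}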
The difficulty in discovering, and then finding information about, the automatic snapshot API was one thing I thought could use improvement. Another thing I think would make the product even better would be local copies of the webpages it links to. Although I'm generally always connected to the internet, I imagine there are more than a few developers who aren't, or who are behind very restrictive firewalls. For them (and for me, too, if my internet connection happens to be down), it would be nice to have those documents installed locally, or to have the option to download an additional "documentation" package that would add local copies. Another thing that I wish were easier to manage is the Filters area. Finding and setting individual filters is very easy, as is understanding what those filters do. And breaking the area up into three sections (basic, by object, and by reference) makes sense. But I could easily see myself running a long profiling session, forgetting that I had set some filter a long while earlier in a different filter section, and then spending quite a bit of time trying to figure out why a problem that was clearly visible in the data wasn't showing up in, e.g., the Instance List, before remembering to check all the filters for the one setting that was culling a few things from view. Some sort of indicator icon next to the filter section names, appearing when you have at least one filter set in that area, would be a nice visual clue to remind me that "oh yeah, I told it to only show objects on the Gen 2 heap! That's why I'm not seeing those instances of the SuperMagic class!" Something that would be nice (but that Red Gate cannot really do anything about) would be if this could be used in Windows Phone 7 development. If Microsoft and Red Gate could work together to make this happen (even if just on the WP7 emulator), that would be amazing. Especially given the memory constraints that apps and games running on mobile devices need to work within, a good memory profiler would be a phenomenally helpful tool. If anyone at Microsoft reads this, it'd be really great if you could make something like that happen. Perhaps even a (subsidized) custom version just for WP7 development. (For XNA games, of course, you can create a Windows version of the game and use ANTS MP on the Windows version in order to get a better picture of your memory situation. For Silverlight on WP7, though, there's quite a bit of educated guesswork and WeakReference creation followed by forced collections in order to find the source of a memory problem.)
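That manual WeakReference technique looks roughly like the following minimal sketch; the Document type is hypothetical. One caveat: in a debug build the JIT can keep a local alive until the end of its method, so a check like this is only reliable in a release build, with the suspect object created in a separate method as shown here.

using System;

class Document { }

class LeakCheck
{
    static WeakReference CreateAndRelease()
    {
        // Create the suspect object in a separate method so that no local
        // in the caller keeps it alive after this method returns.
        var document = new Document();
        // ... exercise the code suspected of retaining the document ...
        return new WeakReference(document);
    }

    static void Main()
    {
        WeakReference weakRef = CreateAndRelease();

        // Force a full collection so anything unreachable is reclaimed.
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();

        // If some event subscription or static cache still references the
        // document, the weak reference will report it as alive.
        Console.WriteLine(weakRef.IsAlive
            ? "Still reachable: something is holding a reference."
            : "Collected: no leak on this path.");
    }
}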
The only other thing I found myself wanting was a "Back" button. Between my Windows Phone 7, Zune, and other things, I've grown very used to having a "back stack" that lets me just navigate back to where I came from. The ANTS MP interface is surprisingly easy to use given how much it lets you do, and once you use it for any amount of time, you learn all of the different areas such that you know where to go. And it does remember the state of the areas you were previously in, of course. So if you go to, e.g., the Instance Retention Graph from the Class List and then return to the Class List, it will remember which class you had selected and all that other state information. Still, a "Back" button would be a welcome addition to a future release.

Bottom Line

ANTS Memory Profiler is not an inexpensive tool. But my time is valuable. I can easily see ANTS MP saving me enough time tracking down memory problems to justify it on a cost basis. More important to me, though, is knowing what is happening memory-wise in my programs, and having the confidence that my code doesn't have any hidden time bombs that will cause it to run out of memory if I leave it running for longer than I do when I spin it up real quickly for debugging or just to see how a new feature looks and feels. That's a good feeling, and one that I like having and want to continue to have. I got the current version for free in order to review it. Having done so, I've now added it to my must-have tools and will gladly lay out the money for the next version when it comes out. It has a 14-day free trial, so if you aren't sure whether it's right for you, or if you think it seems interesting but aren't really sure it's worth shelling out the money for, give it a try.

    Read the article

  • Much Ado About Nothing: Stub Objects

    - by user9154181
The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed; the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object. You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris. This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead.

The Long Road To Stubs

This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled:

4631488 lib/Makefile is too patient: .WAITs should be reduced

This CR encapsulates a number of chronic issues with Solaris builds: We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware. Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel. To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: It's really hard to get right. It's really easy to get it wrong and never know it, because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.
As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand-written rules, supplemented with a number of dmake .WAIT directives used to partition the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other. From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach: To analyze an object, you have to build it first. This is a classic chicken-and-egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available. The analysis will take time, and remember that we're constantly trying to make builds faster, not slower. By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand-written rules described above. The hand-written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand-written approach. Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years, until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon-to-be-inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach: a new idea to cut the Gordian Knot. In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game-changing series of realizations: The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime. If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object. In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of the object's publicly available functions and data. Could we build a stub object using the mapfile for the real object? It ought to be very fast to build stub objects, as there are no input objects to process. Unlike the real object, stub objects would not actually require any dependencies, and so all of the stubs for the entire system could be built in parallel. When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.
We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out. Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following: Present the same set of global symbols, with the same ELF versioning, as the real object. Functions are simple: it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment. Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose. For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object. If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object. We imagined the stub library feature working as follows: A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any objects or shared libraries on the command line are ignored. The extra information needed (function or data, size, and bss details) would be added to the mapfile. When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match. In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like:

# DATA(i386) __iob 0x3c0
# DATA(amd64,sparcv9) __iob 0xa00
# DATA(sparc) __iob 0x140
__iob;

A further problem then became clear: If we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan: A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code. Another perl script, used after both objects have been built, compares the real and stub objects using data from elfdump, and validates that they present the same linking interface. By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.
Ultimately, though, the result was unsatisfactory as a basis for a real product. There were so many issues: The use of stylized comments was fine for a prototype, but not close to professional enough for a shipping product. The idea of having to document and support it was a large concern. The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so will raise barriers to converting existing code. A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work. A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try to run a program that loads one. At that point, we needed to apply this prototype to building Solaris. As you might imagine, the task of modifying all the makefiles in the core Solaris code base in order to do this is a massive one, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long-term things to think about, and moved on to other work. It would sit there for a couple of years. Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not, have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had stubs in mind as I moved forward. The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010:

6916788 ld version 2 mapfile syntax
PSARC/2009/688 Human readable and extensible ld mapfile syntax

In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010:

6916796 OSnet mapfiles should use version 2 link-editor syntax

That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human-readable syntax are obvious. However, among the justifications listed in CR 6916796 was this: "We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax."
I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second:

% ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \
    -ztext -zdefs -Bdirect ...

real        0.019708910
user        0.010101680
sys         0.008528431

In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished. Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects, however, was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter: in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss; sometimes all you can do is try it and see what happens. And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... ...and so, I backed away, put it down for a few months and did other work... ...until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it. Without stubs, the following gives a simplified high-level view of how Solaris is built: An initially empty directory known as the proto, and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution. A top-level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area. Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built. Subsequent passes run lint, and do packaging. Given this structure, the additions to use stub objects are: A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away.
A new target called stub is added to library Makefiles. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them. A new target called stubinstall is added to the Makefiles, which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the existing plumbing used by the install rule. The setup rule runs stubinstall over the entire lib subtree as part of its initialization. All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization. There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free; no one was likely to notice or care about the cost of building them. After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do. I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would make merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.
And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub-library-enabled version! This is why it is important to always measure, and not just assume. One can tell from first principles, based on all those removed dependency rules in the library makefiles, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down: Could we have disabled dmake's parallel feature? No; a quick check showed things being built in parallel. It was suggested that we might be I/O bound, and so the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timings between the stub and non-stub cases were just too suspiciously identical. Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up. Eventually, a more plausible and obvious reason emerged: We build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust. And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with

6993877 ld should produce stub objects
PSARC/2010/397 ELF Stub Objects

followed by the work to convert the ON consolidation in snv_161 (February 2011) with

7009826 OSnet should use stub objects
4631488 lib/Makefile is too patient: .WAITs should be reduced

This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions were discovered and addressed over the next few weeks, and things have been quiet since then.

Conclusions and Looking Forward

Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.

    Read the article

  • Using Stub Objects

    - by user9154181
Having told the long and winding tale of where stub objects came from and how we use them to build Solaris, I'd like to focus now on the nuts and bolts of building and using them. The following new features were added to the Solaris link-editor (ld) to support the production and use of stub objects:

-z stub
This new command line option informs ld that it is to build a stub object rather than a normal object. In this mode, it accepts the same command line arguments as usual, but will quietly ignore any objects and sharable object dependencies.

STUB_OBJECT Mapfile Directive
In order to build a stub version of an object, its mapfile must specify the STUB_OBJECT directive. When producing a non-stub object, the presence of STUB_OBJECT causes the link-editor to perform extra validation to ensure that the stub and non-stub objects will be compatible.

ASSERT Mapfile Directive
All data symbols exported from the object must have an ASSERT symbol directive in the mapfile that declares them as data and supplies the size, binding, bss attributes, and symbol aliasing details. When building the stub object, the information in these ASSERT directives is used to create the data symbols. When building the real object, these ASSERT directives will ensure that the real object matches the linking interface presented by the stub.

Although ASSERT was added to the link-editor in order to support stub objects, it is a general purpose feature that can be used independently of them. For instance, you might choose to use an ASSERT directive if you have a symbol that must have a specific address in order for the object to operate properly, and you want to automatically ensure that this will always be the case. The material presented here is derived from a document I originally wrote during the development effort, which had the dual goals of providing supplemental material for the stub object PSARC case, and of serving as a set of edits that were eventually applied to the Oracle Solaris Linker and Libraries Manual (LLM). The Solaris 11 LLM contains this information in a more polished form.

Stub Objects

A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be used at runtime. However, an application can be linked against a stub object, which supplies the name of the real object to be used at runtime; the application then uses the real object when it runs. When building a stub object, the link-editor ignores any object or library files specified on the command line, and these files need not exist in order to build a stub. Since the compilation step can be omitted, and because the link-editor has relatively little work to do, stub objects can be built very quickly. Stub objects can be used to solve a variety of build problems:

Speed
Modern machines, using a version of make with the ability to parallelize operations, are capable of compiling and linking many objects simultaneously, and doing so offers significant speedups. However, it is typical that a given object will depend on other objects, and that there will be a core set of objects that nearly everything else depends on. It is necessary to impose an ordering that builds each object before any other object that requires it. This ordering creates bottlenecks that reduce the amount of parallelization that is possible and limit the overall speed at which the code can be built.
Complexity/Correctness
In a large body of code, there can be a large number of dependencies between the various objects. The makefiles or other build descriptions for these objects can become very complex and difficult to understand or maintain. The dependencies can change as the system evolves. This can cause a given set of makefiles to become slightly incorrect over time, leading to race conditions and mysterious rare build failures.

Dependency Cycles
It might be desirable to organize code as cooperating shared objects, each of which draws on the resources provided by the other. Such cycles cannot be supported in an environment where objects must be built before the objects that use them, even though the runtime linker is fully capable of loading and using such objects if they could be built.

Stub shared objects offer an alternative method for building code that sidesteps the above issues. Stub objects can be quickly built for all the shared objects produced by the build. Then, all the real shared objects and executables can be built in parallel, in any order, using the stub objects to stand in for the real objects at link-time. Afterwards, the executables and real shared objects are kept, and the stub shared objects are discarded. Stub objects are built from a mapfile, which must satisfy the following requirements: The mapfile must specify the STUB_OBJECT directive. This directive informs the link-editor that the object can be built as a stub object, and as such causes the link-editor to perform validation and sanity checking intended to guarantee that an object and its stub will always provide identical linking interfaces. All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile. The mapfile must use symbol scope reduction ('*') to remove any symbols not explicitly listed from the external interface. All global data exported from the object must have an ASSERT symbol attribute in the mapfile to specify the symbol type, size, and bss attributes. In the case where there are multiple symbols that reference the same data, the ASSERT for one of these symbols must specify the TYPE and SIZE attributes, while the others must use the ALIAS attribute to reference this primary symbol. Given such a mapfile, the stub and real versions of the shared object can be built using the same command line for each, adding the '-z stub' option to the link for the stub object, and omitting the option from the link for the real object. To demonstrate these ideas, the following code implements a shared object named idx5, which exports data from a 5 element array of integers, with each element initialized to contain its zero-based array index. This data is available as a global array, via an alternative alias data symbol with weak binding, and via a functional interface.

% cat idx5.c
int _idx5[5] = { 0, 1, 2, 3, 4 };
#pragma weak idx5 = _idx5

int
idx5_func(int index)
{
        if ((index < 0) || (index > 4))
                return (-1);
        return (_idx5[index]);
}

A mapfile is required to describe the interface provided by this shared object.

% cat mapfile
$mapfile_version 2
STUB_OBJECT;
SYMBOL_SCOPE {
        _idx5 {
                ASSERT { TYPE=data; SIZE=4[5] };
        };
        idx5 {
                ASSERT { BINDING=weak; ALIAS=_idx5 };
        };
        idx5_func;
    local:
        *;
};

The following main program is used to print all the index values available from the idx5 shared object.
% cat main.c
#include <stdio.h>

extern int _idx5[5], idx5[5], idx5_func(int);

int
main(int argc, char **argv)
{
        int i;

        for (i = 0; i < 5; i++)
                (void) printf("[%d] %d %d %d\n",
                    i, _idx5[i], idx5[i], idx5_func(i));
        return (0);
}

The following commands create a stub version of this shared object in a subdirectory named stublib. elfdump is used to verify that the resulting object is a stub. The command used to build the stub differs from that of the real object only in the addition of the -z stub option, and the use of a different output file name. This demonstrates the ease with which stub generation can be added to an existing makefile.

% cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o stublib/libidx5.so.1 -zstub
% ln -s libidx5.so.1 stublib/libidx5.so
% elfdump -d stublib/libidx5.so | grep STUB
      [11]  FLAGS_1           0x4000000           [ STUB ]

The main program can now be built, using the stub object to stand in for the real shared object, and setting a runpath that will find the real object at runtime. However, as we have not yet built the real object, this program cannot yet be run. Attempts to cause the system to load the stub object are rejected, as the runtime linker knows that stub objects lack the actual code and data found in the real object, and cannot execute.

% cc main.c -L stublib -R '$ORIGIN/lib' -lidx5 -lc
% ./a.out
ld.so.1: a.out: fatal: libidx5.so.1: open failed: No such file or directory
Killed
% LD_PRELOAD=stublib/libidx5.so.1 ./a.out
ld.so.1: a.out: fatal: stublib/libidx5.so.1: stub shared object cannot be used at runtime
Killed

We build the real object using the same command as we used to build the stub, omitting the -z stub option, and writing the results to a different file.

% cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o lib/libidx5.so.1

Once the real object has been built in the lib subdirectory, the program can be run.

% ./a.out
[0] 0 0 0
[1] 1 1 1
[2] 2 2 2
[3] 3 3 3
[4] 4 4 4

Mapfile Changes

The version 2 mapfile syntax was extended in a number of places to accommodate stub objects.

Conditional Input
The version 2 mapfile syntax has the ability to conditionalize mapfile input using the $if control directive. As you might imagine, these directives are used frequently with ASSERT directives for data, because a given data symbol will frequently have a different size in 32 and 64-bit code, or on differing hardware such as x86 versus sparc. The link-editor maintains an internal table of names that can be used in the logical expressions evaluated by $if and $elif. At startup, this table is initialized with items that describe the class of object (_ELF32 or _ELF64) and the type of the target machine (_sparc or _x86). We found that there were a small number of cases in the Solaris code base in which we needed to know what kind of object we were producing, so we added the following new predefined items in order to address that need:

Name        Meaning
......
_ET_DYN     shared object
_ET_EXEC    executable object
_ET_REL     relocatable object
......

STUB_OBJECT Directive
The new STUB_OBJECT directive informs the link-editor that the object described by the mapfile can be built as a stub object.

STUB_OBJECT;

A stub shared object is built entirely from the information in the mapfiles supplied on the command line. When the -z stub option is specified to build a stub object, the presence of the STUB_OBJECT directive in a mapfile is required, and the link-editor uses the information in symbol ASSERT attributes to create global symbols that match those of the real object.
When the real object is built, the presence of STUB_OBJECT causes the link-editor to verify that the mapfiles accurately describe the real object interface, and that a stub object built from them will provide the same linking interface as the real object it represents. All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile. The mapfile must use symbol scope reduction ('*') to remove any symbols not explicitly listed from the external interface. All global data in the object is required to have an ASSERT attribute that specifies the symbol type and size. If the ASSERT BINDING attribute is not present, the link-editor provides a default assertion that the symbol must be GLOBAL. If the ASSERT SH_ATTR attribute is not present, or does not specify that the section is one of BITS or NOBITS, the link-editor provides a default assertion that the associated section is BITS. All data symbols that describe the same address and size are required to have ASSERT ALIAS attributes specified in the mapfile. If aliased symbols are discovered that do not have an ASSERT ALIAS specified, the link fails and no object is produced. These rules ensure that the mapfiles contain a description of the real shared object's linking interface that is sufficient to produce a stub object with a completely compatible linking interface.

SYMBOL_SCOPE/SYMBOL_VERSION ASSERT Attribute
The SYMBOL_SCOPE and SYMBOL_VERSION mapfile directives were extended with a symbol attribute named ASSERT. The syntax for the ASSERT attribute is as follows:

ASSERT {
        ALIAS = symbol_name;
        BINDING = symbol_binding;
        TYPE = symbol_type;
        SH_ATTR = section_attributes;

        SIZE = size_value;
        SIZE = size_value[count];
};

The ASSERT attribute is used to specify the expected characteristics of the symbol. The link-editor compares the symbol characteristics that result from the link to those given by ASSERT attributes. If the real and asserted attributes do not agree, a fatal error is issued and the output object is not created. In normal use, the link-editor evaluates ASSERT attributes when present, but does not require them or provide default values for them. The presence of the STUB_OBJECT directive in a mapfile alters the interpretation of ASSERT to require them under some circumstances, and to supply default assertions if explicit ones are not present. See the definition of the STUB_OBJECT directive for the details. When the -z stub command line option is specified to build a stub object, the information provided by ASSERT attributes is used to define the attributes of the global symbols provided by the object. ASSERT accepts the following:

ALIAS
Name of a previously defined symbol that this symbol is an alias for. An alias symbol has the same type, value, and size as the main symbol. The ALIAS attribute is mutually exclusive with the TYPE, SIZE, and SH_ATTR attributes, and cannot be used with them. When ALIAS is specified, the type, size, and section attributes are obtained from the alias symbol.

BINDING
Specifies an ELF symbol binding, which can be any of the STB_ constants defined in <sys/elf.h>, with the STB_ prefix removed (e.g. GLOBAL, WEAK).

TYPE
Specifies an ELF symbol type, which can be any of the STT_ constants defined in <sys/elf.h>, with the STT_ prefix removed (e.g. OBJECT, COMMON, FUNC). In addition, for compatibility with other mapfile usage, FUNCTION and DATA can be specified, for STT_FUNC and STT_OBJECT, respectively.
TYPE is mutually exclusive with ALIAS, and cannot be used in conjunction with it.

SH_ATTR
Specifies attributes of the section associated with the symbol. The section_attributes that can be specified are given in the following table:

Section Attribute    Meaning
BITS                 Section is not of type SHT_NOBITS
NOBITS               Section is of type SHT_NOBITS

SH_ATTR is mutually exclusive with ALIAS, and cannot be used in conjunction with it.

SIZE
Specifies the expected symbol size. SIZE is mutually exclusive with ALIAS, and cannot be used in conjunction with it. The syntax for the size_value argument is as described in the discussion of the SIZE attribute below.

SIZE
The SIZE symbol attribute existed before support for stub objects was introduced. It is used to set the size attribute of a given symbol. This attribute results in the creation of a symbol definition. Prior to the introduction of the ASSERT SIZE attribute, the value of a SIZE attribute was always numeric. While attempting to apply ASSERT SIZE to the objects in the Solaris ON consolidation, I found that many data symbols have a size based on the natural machine word size for the class of object being produced. Variables declared as long, or as a pointer, will be 4 bytes in size in a 32-bit object, and 8 bytes in a 64-bit object. Initially, I employed the conditional $if directive to handle these cases as follows:

$if _ELF32
foo     { ASSERT { TYPE=data; SIZE=4 } };
bar     { ASSERT { TYPE=data; SIZE=20 } };
$elif _ELF64
foo     { ASSERT { TYPE=data; SIZE=8 } };
bar     { ASSERT { TYPE=data; SIZE=40 } };
$else
$error UNKNOWN ELFCLASS
$endif

I found that this situation occurs frequently enough that the conditionals are cumbersome. To simplify this case, I introduced the idea of the addrsize symbolic name, and of a repeat count, which together make it simple to specify machine-word scalar or array symbols. Both the SIZE and ASSERT SIZE attributes support this syntax: The size_value argument can be a numeric value, or it can be the symbolic name addrsize. addrsize represents the size of a machine word capable of holding a memory address. The link-editor substitutes the value 4 for addrsize when building 32-bit objects, and the value 8 when building 64-bit objects. addrsize is useful for representing the size of pointer variables and C variables of type long, as it automatically adjusts for 32 and 64-bit objects without requiring the use of conditional input. The size_value argument can be optionally suffixed with a count value, enclosed in square brackets. If count is present, size_value and count are multiplied together to obtain the final size value. Using this feature, the example above can be written more naturally as:

foo     { ASSERT { TYPE=data; SIZE=addrsize } };
bar     { ASSERT { TYPE=data; SIZE=addrsize[5] } };

Exported Global Data Is Still A Bad Idea

As you can see, the additional plumbing added to the Solaris link-editor to support stub objects is minimal. Furthermore, about 90% of that plumbing is dedicated to handling global data. We have long advised against global data exported from shared objects. There are many ways in which global data does not fit well with dynamic linking. Stub objects simply provide one more reason to avoid this practice. It is always better to export all data via a functional interface. You should always hide your data, and make it available to your users via a function that they can call to acquire the address of the data item.
However, if you do have to support global data in a stub, perhaps because you are working with an already existing object, it is still easily done, as shown above. Oracle does not like us to discuss hypothetical new features that don't exist in shipping product, so I'll end this section with a speculation. It might be possible to do more in this area to ease the difficulty of dealing with objects that have global data that the users of the library don't need. Perhaps someday...

Conclusions

It is easy to create stub objects for most objects. If your library only exports function symbols, all you have to do to build a faithful stub object is to add STUB_OBJECT; to your mapfile, and then use the same link command you're currently using, with the addition of the -z stub option, as sketched below. Happy Stubbing!
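For concreteness, here is a minimal sketch of that recipe. The library and symbol names are invented for this example, and it assumes that cc passes -z stub through to ld, as with the other -z options used in these articles:

% cat mapfile
$mapfile_version 2
STUB_OBJECT;
SYMBOL_SCOPE {
        global:
                hello;
        local:
                *;
};
% cc hello.c -o libhello.so.1 -Kpic -G -M mapfile                  # the real object
% cc hello.c -o stublib/libhello.so.1 -Kpic -G -M mapfile -z stub  # the stub

Other objects can then be linked against stublib/libhello.so.1 during the build, while the real libhello.so.1 is the one that ships and runs.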

    Read the article

  • What is hogging my connection?

    - by SF.
    At times it seems like dozens, if not hundreds of root-owned HTTP connections spring up. This is not much of a problem on LAN or WLAN as each of them seems to transfer very little, but if I use GPRS link, my ping times go into minutes (seriously, 80000ms is not infrequent!) and all connections grind to a halt waiting till these end. This usually lasts some 15 minutes and ends about when I start troubleshooting it for real. I've managed to capture a fragment of Nethogs output NetHogs version 0.8.0 PID USER PROGRAM DEV SENT RECEIVED ? root 37.209.147.180:59854-141.101.114.59:80 0.013 0.000 KB/sec ? root 37.209.147.180:59853-141.101.114.59:80 0.000 0.000 KB/sec ? root 37.209.147.180:52804-173.194.70.95:80 0.000 0.000 KB/sec 1954 bw /home/bw/.dropbox-dist/dropbox ppp0 0.000 0.000 KB/sec ? root 37.209.147.180:59851-141.101.114.59:80 0.000 0.000 KB/sec ? root 37.209.147.180:59850-141.101.114.59:80 0.000 0.000 KB/sec ? root 37.209.147.180:52801-173.194.70.95:80 0.000 0.000 KB/sec 13301 bw /usr/lib/firefox/firefox ppp0 0.000 0.000 KB/sec ? root unknown TCP 0.000 0.000 KB/sec Unfortunately, it doesn't display the owning process of these. Does anyone recognize these addresses or is able to suggest how to troubleshoot it further or disable it? Is it some automatic update or something like that? EDIT: per request; netstat -n, for obvious reason that normal netstat won't ever launch as all DNS requests are hogged just the same. netstat -n Active Internet connections (w/o servers) Proto Recv-Q Send-Q Local Address Foreign Address State tcp 0 1 93.154.166.62:51314 198.252.206.16:80 FIN_WAIT1 tcp 0 1 37.209.147.180:44098 198.252.206.16:80 FIN_WAIT1 tcp 0 1 37.209.147.180:59855 141.101.114.59:80 FIN_WAIT1 tcp 1 0 192.168.43.224:38237 213.189.45.39:443 CLOSE_WAIT tcp 1 0 93.154.146.186:35167 75.101.152.29:80 CLOSE_WAIT tcp 1 0 192.168.43.224:32939 199.15.160.100:80 CLOSE_WAIT tcp 1 0 192.168.43.224:55619 63.245.217.207:443 CLOSE_WAIT tcp 1 0 93.154.146.186:60210 75.101.152.29:443 CLOSE_WAIT tcp 1 0 192.168.43.224:32944 199.15.160.100:80 CLOSE_WAIT tcp 0 1 37.209.147.180:52804 173.194.70.95:80 FIN_WAIT1 tcp 1 0 93.154.146.186:46606 23.21.151.181:80 CLOSE_WAIT tcp 1 0 93.154.146.186:52619 107.22.246.76:80 CLOSE_WAIT tcp 415 0 93.154.146.186:36156 82.112.106.104:80 CLOSE_WAIT tcp 1 0 93.154.146.186:50352 107.22.246.76:443 CLOSE_WAIT tcp 1 0 192.168.43.224:55000 213.189.45.44:443 CLOSE_WAIT tcp 0 1 37.209.147.180:59853 141.101.114.59:80 FIN_WAIT1 tcp 1 0 192.168.43.224:32937 199.15.160.100:80 CLOSE_WAIT tcp 1 0 192.168.43.224:56055 93.184.221.40:80 CLOSE_WAIT tcp 415 0 93.154.146.186:36155 82.112.106.104:80 CLOSE_WAIT tcp 0 1 37.209.147.180:44097 198.252.206.16:80 FIN_WAIT1 tcp 1 0 93.154.146.186:35166 75.101.152.29:80 CLOSE_WAIT tcp 1 0 192.168.43.224:32943 199.15.160.100:80 CLOSE_WAIT tcp 1 0 93.154.146.186:46607 23.21.151.181:80 CLOSE_WAIT tcp 1 0 93.154.146.186:36422 23.21.151.181:443 CLOSE_WAIT tcp 1 0 192.168.43.224:36081 93.184.220.148:80 CLOSE_WAIT tcp 1 0 192.168.43.224:44462 213.189.45.29:443 CLOSE_WAIT tcp 1 0 192.168.43.224:32938 199.15.160.100:80 CLOSE_WAIT tcp 1 0 93.154.146.186:36419 23.21.151.181:443 CLOSE_WAIT tcp 0 497 93.154.166.62:51313 198.252.206.16:80 FIN_WAIT1 tcp 0 1 37.209.147.180:59851 141.101.114.59:80 FIN_WAIT1 tcp 0 1 37.209.147.180:44095 198.252.206.16:80 FIN_WAIT1 tcp 1 0 93.154.146.186:46611 23.21.151.181:80 CLOSE_WAIT tcp 1 0 192.168.43.224:38236 213.189.45.39:443 CLOSE_WAIT tcp 0 171 37.209.147.180:45341 173.194.113.146:443 ESTABLISHED tcp 0 1 37.209.147.180:52801 
173.194.70.95:80 FIN_WAIT1 tcp 1 0 192.168.43.224:36080 93.184.220.148:80 CLOSE_WAIT tcp 0 1 37.209.147.180:59856 141.101.114.59:80 FIN_WAIT1 tcp 0 1 37.209.147.180:44096 198.252.206.16:80 FIN_WAIT1 tcp 0 1 93.154.166.62:57471 108.160.162.49:80 FIN_WAIT1 tcp 0 1 37.209.147.180:59854 141.101.114.59:80 FIN_WAIT1 tcp 0 171 37.209.147.180:45340 173.194.113.146:443 ESTABLISHED tcp 0 168 37.209.147.180:45334 173.194.113.146:443 FIN_WAIT1 tcp 1 0 93.154.146.186:46609 23.21.151.181:80 CLOSE_WAIT tcp 0 1248 93.154.166.62:58270 64.251.23.59:443 FIN_WAIT1 tcp 0 1 37.209.147.180:59850 141.101.114.59:80 FIN_WAIT1 tcp 1 0 93.154.146.186:35181 75.101.152.29:80 CLOSE_WAIT tcp 232 0 93.154.172.168:46384 198.252.206.25:80 ESTABLISHED tcp 1 0 93.154.146.186:52618 107.22.246.76:80 CLOSE_WAIT tcp 1 0 93.154.172.168:36298 173.194.69.95:443 CLOSE_WAIT tcp 1 0 93.154.146.186:60209 75.101.152.29:443 CLOSE_WAIT tcp 0 168 37.209.147.180:45335 173.194.113.146:443 FIN_WAIT1 tcp 415 0 93.154.146.186:36157 82.112.106.104:80 CLOSE_WAIT tcp 1 0 192.168.43.224:36082 93.184.220.148:80 CLOSE_WAIT tcp 1 0 192.168.43.224:32942 199.15.160.100:80 CLOSE_WAIT tcp 1 0 93.154.146.186:50350 107.22.246.76:443 CLOSE_WAIT tcp 1 0 192.168.43.224:32941 199.15.160.100:80 CLOSE_WAIT tcp 0 534 37.209.147.180:44089 198.252.206.16:80 FIN_WAIT1 tcp 1 0 93.154.146.186:46608 23.21.151.181:80 CLOSE_WAIT tcp 1 0 93.154.146.186:46612 23.21.151.181:80 CLOSE_WAIT udp 0 0 37.209.147.180:49057 193.41.112.14:53 ESTABLISHED udp 0 0 37.209.147.180:51631 193.41.112.18:53 ESTABLISHED udp 0 0 37.209.147.180:34827 193.41.112.18:53 ESTABLISHED udp 0 0 37.209.147.180:35908 193.41.112.14:53 ESTABLISHED udp 0 0 37.209.147.180:44106 193.41.112.14:53 ESTABLISHED udp 0 0 37.209.147.180:42184 193.41.112.14:53 ESTABLISHED udp 0 0 37.209.147.180:54485 193.41.112.14:53 ESTABLISHED udp 0 0 37.209.147.180:42216 193.41.112.18:53 ESTABLISHED udp 0 0 37.209.147.180:51961 193.41.112.14:53 ESTABLISHED udp 0 0 37.209.147.180:48412 193.41.112.14:53 ESTABLISHED The interesting lines from ping got lost, but the summary over past few hours is: --- 8.8.8.8 ping statistics --- 107459 packets transmitted, 104376 received, +22 duplicates, 2% packet loss, time 195427362ms rtt min/avg/max/mdev = 24.822/528.132/90538.257/2519.263 ms, pipe 90 EDIT: Per request: Happened again, reboot didn't help but cleaned up all "hanging" processes. 
Currently netstat shows: bw@pony:/var/log$ netstat -n -t Active Internet connections (w/o servers) Proto Recv-Q Send-Q Local Address Foreign Address State tcp 0 0 93.154.188.68:42767 74.125.239.143:443 TIME_WAIT tcp 0 0 93.154.188.68:50270 173.194.69.189:443 ESTABLISHED tcp 0 0 93.154.188.68:45250 190.93.244.58:80 TIME_WAIT tcp 0 0 93.154.188.68:53488 173.194.32.198:80 ESTABLISHED tcp 0 0 93.154.188.68:53490 173.194.32.198:80 ESTABLISHED tcp 0 159 93.154.188.68:42741 74.125.239.143:443 LAST_ACK tcp 0 0 93.154.188.68:45808 198.252.206.25:80 ESTABLISHED tcp 0 0 93.154.188.68:52449 173.194.32.199:443 ESTABLISHED tcp 0 0 93.154.188.68:52600 173.194.32.199:443 TIME_WAIT tcp 0 0 93.154.188.68:50300 173.194.69.189:443 TIME_WAIT tcp 0 0 93.154.188.68:45253 190.93.244.58:80 TIME_WAIT tcp 0 0 93.154.188.68:46252 173.194.32.204:443 ESTABLISHED tcp 0 0 93.154.188.68:45246 190.93.244.58:80 ESTABLISHED tcp 0 0 93.154.188.68:47064 173.194.113.143:443 ESTABLISHED tcp 0 0 93.154.188.68:34484 173.194.69.95:443 ESTABLISHED tcp 0 0 93.154.188.68:45252 190.93.244.58:80 TIME_WAIT tcp 0 0 93.154.188.68:54290 173.194.32.202:443 ESTABLISHED tcp 0 0 93.154.188.68:47063 173.194.113.143:443 ESTABLISHED tcp 0 0 93.154.188.68:53469 173.194.32.198:80 TIME_WAIT tcp 0 0 93.154.188.68:45242 190.93.244.58:80 TIME_WAIT tcp 0 0 93.154.188.68:53468 173.194.32.198:80 ESTABLISHED tcp 0 0 93.154.188.68:50299 173.194.69.189:443 TIME_WAIT tcp 0 0 93.154.188.68:42764 74.125.239.143:443 TIME_WAIT tcp 0 0 93.154.188.68:45256 190.93.244.58:80 TIME_WAIT tcp 0 0 93.154.188.68:58047 108.160.162.105:80 ESTABLISHED tcp 0 0 93.154.188.68:45249 190.93.244.58:80 TIME_WAIT tcp 0 0 93.154.188.68:50297 173.194.69.189:443 TIME_WAIT tcp 0 0 93.154.188.68:53470 173.194.32.198:80 ESTABLISHED tcp 0 0 93.154.188.68:34100 68.232.35.121:443 ESTABLISHED tcp 0 0 93.154.188.68:42758 74.125.239.143:443 ESTABLISHED tcp 0 0 93.154.188.68:42765 74.125.239.143:443 TIME_WAIT tcp 0 0 93.154.188.68:39000 173.194.69.95:80 TIME_WAIT tcp 0 0 93.154.188.68:50296 173.194.69.189:443 TIME_WAIT tcp 0 0 93.154.188.68:53467 173.194.32.198:80 ESTABLISHED tcp 0 0 93.154.188.68:42766 74.125.239.143:443 TIME_WAIT tcp 0 0 93.154.188.68:45251 190.93.244.58:80 TIME_WAIT tcp 0 0 93.154.188.68:45248 190.93.244.58:80 TIME_WAIT tcp 0 0 93.154.188.68:45247 190.93.244.58:80 ESTABLISHED tcp 0 159 93.154.188.68:50254 173.194.69.189:443 LAST_ACK tcp 0 0 93.154.188.68:34483 173.194.69.95:443 ESTABLISHED Output of ps: USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 1 0.8 0.0 3628 2092 ? Ss 16:52 0:03 /sbin/init root 2 0.0 0.0 0 0 ? S 16:52 0:00 [kthreadd] root 3 0.1 0.0 0 0 ? S 16:52 0:00 [ksoftirqd/0] root 4 0.1 0.0 0 0 ? S 16:52 0:00 [kworker/0:0] root 6 0.0 0.0 0 0 ? S 16:52 0:00 [migration/0] root 7 0.0 0.0 0 0 ? S 16:52 0:00 [watchdog/0] root 8 0.0 0.0 0 0 ? S 16:52 0:00 [migration/1] root 10 0.1 0.0 0 0 ? S 16:52 0:00 [ksoftirqd/1] root 11 0.0 0.0 0 0 ? S 16:52 0:00 [watchdog/1] root 12 0.0 0.0 0 0 ? S 16:52 0:00 [migration/2] root 14 0.1 0.0 0 0 ? S 16:52 0:00 [ksoftirqd/2] root 15 0.0 0.0 0 0 ? S 16:52 0:00 [watchdog/2] root 16 0.0 0.0 0 0 ? S 16:52 0:00 [migration/3] root 17 0.0 0.0 0 0 ? S 16:52 0:00 [kworker/3:0] root 18 0.1 0.0 0 0 ? S 16:52 0:00 [ksoftirqd/3] root 19 0.0 0.0 0 0 ? S 16:52 0:00 [watchdog/3] root 20 0.0 0.0 0 0 ? S< 16:52 0:00 [cpuset] root 21 0.0 0.0 0 0 ? S< 16:52 0:00 [khelper] root 22 0.0 0.0 0 0 ? S 16:52 0:00 [kdevtmpfs] root 23 0.0 0.0 0 0 ? S< 16:52 0:00 [netns] root 24 0.0 0.0 0 0 ? S 16:52 0:00 [sync_supers] root 25 0.0 0.0 0 0 ? 
S 16:52 0:00 [bdi-default] root 26 0.0 0.0 0 0 ? S< 16:52 0:00 [kintegrityd] root 27 0.0 0.0 0 0 ? S< 16:52 0:00 [kblockd] root 28 0.0 0.0 0 0 ? S< 16:52 0:00 [ata_sff] root 29 0.0 0.0 0 0 ? S 16:52 0:00 [khubd] root 30 0.0 0.0 0 0 ? S< 16:52 0:00 [md] root 42 0.0 0.0 0 0 ? S 16:52 0:00 [khungtaskd] root 43 0.0 0.0 0 0 ? S 16:52 0:00 [kswapd0] root 44 0.0 0.0 0 0 ? SN 16:52 0:00 [ksmd] root 45 0.0 0.0 0 0 ? SN 16:52 0:00 [khugepaged] root 46 0.0 0.0 0 0 ? S 16:52 0:00 [fsnotify_mark] root 47 0.0 0.0 0 0 ? S 16:52 0:00 [ecryptfs-kthrea] root 48 0.0 0.0 0 0 ? S< 16:52 0:00 [crypto] root 59 0.0 0.0 0 0 ? S< 16:52 0:00 [kthrotld] root 70 0.1 0.0 0 0 ? S 16:52 0:00 [kworker/2:1] root 71 0.0 0.0 0 0 ? S 16:52 0:00 [scsi_eh_0] root 72 0.0 0.0 0 0 ? S 16:52 0:00 [scsi_eh_1] root 73 0.0 0.0 0 0 ? S 16:52 0:00 [scsi_eh_2] root 74 0.0 0.0 0 0 ? S 16:52 0:00 [scsi_eh_3] root 75 0.0 0.0 0 0 ? S 16:52 0:00 [kworker/u:2] root 76 0.0 0.0 0 0 ? S 16:52 0:00 [kworker/u:3] root 79 0.0 0.0 0 0 ? S 16:52 0:00 [kworker/1:1] root 99 0.0 0.0 0 0 ? S< 16:52 0:00 [deferwq] root 100 0.0 0.0 0 0 ? S< 16:52 0:00 [charger_manager] root 101 0.0 0.0 0 0 ? S< 16:52 0:00 [devfreq_wq] root 102 0.1 0.0 0 0 ? S 16:52 0:00 [kworker/2:2] root 106 0.0 0.0 0 0 ? S 16:52 0:00 [scsi_eh_4] root 107 0.0 0.0 0 0 ? S 16:52 0:00 [usb-storage] root 108 0.0 0.0 0 0 ? S 16:52 0:00 [scsi_eh_5] root 109 0.0 0.0 0 0 ? S 16:52 0:00 [usb-storage] root 271 0.1 0.0 0 0 ? S 16:52 0:00 [kworker/1:2] root 316 0.0 0.0 0 0 ? S 16:52 0:00 [jbd2/sda1-8] root 317 0.0 0.0 0 0 ? S< 16:52 0:00 [ext4-dio-unwrit] root 440 0.1 0.0 2820 608 ? S 16:52 0:00 upstart-udev-bridge --daemon root 478 0.0 0.0 3460 1648 ? Ss 16:52 0:00 /sbin/udevd --daemon root 632 0.0 0.0 3348 1336 ? S 16:52 0:00 /sbin/udevd --daemon root 633 0.0 0.0 3348 1204 ? S 16:52 0:00 /sbin/udevd --daemon root 782 0.0 0.0 2816 596 ? S 16:52 0:00 upstart-socket-bridge --daemon root 822 0.0 0.0 6684 2400 ? Ss 16:52 0:00 /usr/sbin/sshd -D 102 834 0.2 0.0 4064 1864 ? Ss 16:52 0:01 dbus-daemon --system --fork root 857 0.0 0.1 7420 3380 ? Ss 16:52 0:00 /usr/sbin/modem-manager root 858 0.0 0.0 4784 1636 ? Ss 16:52 0:00 /usr/sbin/bluetoothd syslog 860 0.0 0.0 31068 1496 ? Sl 16:52 0:00 rsyslogd -c5 root 869 0.1 0.1 24280 5564 ? Ssl 16:52 0:00 NetworkManager avahi 883 0.0 0.0 3448 1488 ? S 16:52 0:00 avahi-daemon: running [pony.local] avahi 884 0.0 0.0 3448 436 ? S 16:52 0:00 avahi-daemon: chroot helper root 885 0.0 0.0 0 0 ? S< 16:52 0:00 [kpsmoused] root 892 0.0 0.1 25696 4140 ? Sl 16:52 0:00 /usr/lib/policykit-1/polkitd --no-debug root 923 0.0 0.0 0 0 ? S 16:52 0:00 [scsi_eh_6] root 959 0.0 0.0 0 0 ? S< 16:52 0:00 [krfcommd] root 970 0.0 0.1 7536 3120 ? Ss 16:52 0:00 /usr/sbin/cupsd -F colord 976 0.1 0.3 55080 10396 ? Sl 16:52 0:00 /usr/lib/i386-linux-gnu/colord/colord root 979 0.0 0.0 4632 872 tty4 Ss+ 16:52 0:00 /sbin/getty -8 38400 tty4 root 987 0.0 0.0 4632 884 tty5 Ss+ 16:52 0:00 /sbin/getty -8 38400 tty5 root 994 0.0 0.0 4632 884 tty2 Ss+ 16:52 0:00 /sbin/getty -8 38400 tty2 root 995 0.0 0.0 4632 868 tty3 Ss+ 16:52 0:00 /sbin/getty -8 38400 tty3 root 998 0.0 0.0 4632 876 tty6 Ss+ 16:52 0:00 /sbin/getty -8 38400 tty6 root 1022 0.0 0.0 2176 680 ? Ss 16:52 0:00 acpid -c /etc/acpi/events -s /var/run/acpid.socket root 1029 0.0 0.0 3632 664 ? Ss 16:52 0:00 /usr/sbin/irqbalance daemon 1030 0.0 0.0 2476 120 ? Ss 16:52 0:00 atd root 1031 0.0 0.0 2620 880 ? Ss 16:52 0:00 cron root 1061 0.1 0.0 0 0 ? S 16:52 0:00 [kworker/3:2] root 1064 0.0 1.0 34116 31072 ? 
SLsl 16:52 0:00 lightdm root 1076 13.4 1.2 118688 37920 tty7 Ssl+ 16:52 0:55 /usr/bin/X :0 -core -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswit root 1085 0.0 0.0 0 0 ? S 16:52 0:00 [rts_pstor] root 1087 0.0 0.0 0 0 ? S 16:52 0:00 [rtsx-polling] root 1095 0.0 0.0 0 0 ? S< 16:52 0:00 [cfg80211] root 1127 0.0 0.0 0 0 ? S 16:52 0:00 [flush-8:0] root 1130 0.0 0.0 6136 1824 ? Ss 16:52 0:00 /sbin/wpa_supplicant -B -P /run/sendsigs.omit.d/wpasupplicant.pid -u -s -O /va root 1137 0.0 0.1 24604 3164 ? Sl 16:52 0:00 /usr/lib/accountsservice/accounts-daemon root 1140 0.0 0.0 0 0 ? S< 16:52 0:00 [hd-audio0] root 1188 0.0 0.1 34308 3420 ? Sl 16:52 0:00 /usr/sbin/console-kit-daemon --no-daemon root 1425 0.0 0.0 4632 872 tty1 Ss+ 16:52 0:00 /sbin/getty -8 38400 tty1 root 1443 0.1 0.1 29460 4664 ? Sl 16:52 0:00 /usr/lib/upower/upowerd root 1579 0.0 0.1 16540 3272 ? Sl 16:53 0:00 lightdm --session-child 12 19 bw 1623 0.0 0.0 2232 644 ? Ss 16:53 0:00 /bin/sh /usr/bin/startkde bw 1672 0.0 0.0 4092 204 ? Ss 16:53 0:00 /usr/bin/ssh-agent /usr/bin/gpg-agent --daemon --sh --write-env-file=/home/bw/ bw 1673 0.0 0.0 5492 384 ? Ss 16:53 0:00 /usr/bin/gpg-agent --daemon --sh --write-env-file=/home/bw/.gnupg/gpg-agent-in bw 1676 0.0 0.0 3848 792 ? S 16:53 0:00 /usr/bin/dbus-launch --exit-with-session /usr/bin/startkde bw 1677 0.5 0.0 5384 2180 ? Ss 16:53 0:02 //bin/dbus-daemon --fork --print-pid 5 --print-address 7 --session root 1704 0.3 0.1 25348 3600 ? Sl 16:53 0:01 /usr/lib/udisks/udisks-daemon root 1705 0.0 0.0 6620 728 ? S 16:53 0:00 udisks-daemon: not polling any devices bw 1736 0.0 0.0 2008 64 ? S 16:53 0:00 /usr/lib/kde4/libexec/start_kdeinit +kcminit_startup bw 1737 0.0 0.5 115200 15588 ? Ss 16:53 0:00 kdeinit4: kdeinit4 Running... bw 1738 0.1 0.2 116756 8728 ? S 16:53 0:00 kdeinit4: klauncher [kdeinit] --fd=9 bw 1740 0.6 1.0 340524 31264 ? Sl 16:53 0:02 kdeinit4: kded4 [kdeinit] bw 1742 0.0 0.0 8944 2144 ? S 16:53 0:00 /usr/lib/i386-linux-gnu/gconf/gconfd-2 bw 1746 0.2 0.4 92028 14688 ? S 16:53 0:00 /usr/bin/kglobalaccel bw 1748 0.0 0.4 90804 13500 ? S 16:53 0:00 /usr/bin/kwalletd bw 1752 0.1 0.5 103764 15152 ? S 16:53 0:00 /usr/bin/kactivitymanagerd bw 1758 0.0 0.0 2144 280 ? S 16:53 0:00 kwrapper4 ksmserver bw 1759 0.1 0.5 150016 16088 ? Sl 16:53 0:00 kdeinit4: ksmserver [kdeinit] bw 1763 2.2 1.0 178492 32100 ? Sl 16:53 0:08 kwin bw 1772 0.2 0.5 106292 16340 ? Sl 16:53 0:00 /usr/bin/knotify4 bw 1777 0.9 1.1 246120 32912 ? Sl 16:53 0:03 /usr/bin/krunner bw 1778 6.3 2.7 389884 80216 ? Sl 16:53 0:23 /usr/bin/plasma-desktop bw 1785 0.0 0.0 2844 1208 ? S 16:53 0:00 ksysguardd bw 1789 0.1 0.4 82036 14176 ? S 16:53 0:00 /usr/bin/kuiserver bw 1805 0.3 0.1 61560 5612 ? Sl 16:53 0:01 /usr/bin/akonadi_control root 1806 0.0 0.0 0 0 ? S 16:53 0:00 [kworker/0:2] bw 1808 0.1 0.2 211852 8460 ? Sl 16:53 0:00 akonadiserver bw 1810 0.4 0.8 244116 25360 ? Sl 16:53 0:01 /usr/sbin/mysqld --defaults-file=/home/bw/.local/share/akonadi/mysql.conf --da bw 1874 0.0 0.0 35284 2956 ? Sl 16:53 0:00 /usr/bin/xsettings-kde bw 1876 0.0 0.3 68776 9488 ? Sl 16:53 0:00 /usr/bin/nepomukserver bw 1884 0.4 0.9 173876 29240 ? SNl 16:53 0:01 /usr/bin/nepomukservicestub nepomukstorage bw 1902 6.1 2.1 451512 63924 ? Sl 16:53 0:21 /home/bw/.dropbox-dist/dropbox bw 1906 3.8 1.0 142368 32376 ? Rl 16:53 0:13 /usr/bin/yakuake bw 1933 0.0 0.1 54636 4680 ? Sl 16:53 0:00 /usr/bin/zeitgeist-datahub bw 1943 0.5 1.5 164836 46836 ? Sl 16:53 0:01 python /usr/bin/printer-applet bw 1945 0.1 0.1 99636 5048 ? 
S<l 16:53 0:00 /usr/bin/pulseaudio --start --log-target=syslog rtkit 1947 0.0 0.0 21336 1248 ? SNl 16:53 0:00 /usr/lib/rtkit/rtkit-daemon bw 1958 0.0 0.1 44204 3792 ? Sl 16:53 0:00 /usr/bin/zeitgeist-daemon bw 1972 0.0 0.0 27008 2684 ? Sl 16:53 0:00 /usr/lib/gvfs/gvfsd bw 1974 0.1 0.5 90480 16660 ? Sl 16:53 0:00 /usr/bin/akonadi_agent_launcher akonadi_akonotes_resource akonadi_akonotes_res bw 1984 0.1 0.5 90472 16636 ? Sl 16:53 0:00 /usr/bin/akonadi_agent_launcher akonadi_akonotes_resource akonadi_akonotes_res bw 1985 0.3 0.9 148800 28304 ? S 16:53 0:01 /usr/bin/akonadi_archivemail_agent --identifier akonadi_archivemail_agent bw 1992 0.1 0.5 90020 16148 ? Sl 16:53 0:00 /usr/bin/akonadi_agent_launcher akonadi_contacts_resource akonadi_contacts_res bw 1993 0.1 0.5 90132 16452 ? Sl 16:53 0:00 /usr/bin/akonadi_agent_launcher akonadi_contacts_resource akonadi_contacts_res bw 1994 0.1 0.5 90564 16332 ? Sl 16:53 0:00 /usr/bin/akonadi_agent_launcher akonadi_ical_resource akonadi_ical_resource_0 bw 1995 0.1 0.5 90676 16732 ? Sl 16:53 0:00 /usr/bin/akonadi_agent_launcher akonadi_ical_resource akonadi_ical_resource_1 bw 1996 0.1 0.5 90468 16800 ? Sl 16:53 0:00 /usr/bin/akonadi_agent_launcher akonadi_maildir_resource akonadi_maildir_resou bw 1999 0.2 0.6 99324 19276 ? S 16:53 0:00 /usr/bin/akonadi_maildispatcher_agent --identifier akonadi_maildispatcher_agen bw 2006 0.3 0.9 148808 28332 ? S 16:53 0:01 /usr/bin/akonadi_mailfilter_agent --identifier akonadi_mailfilter_agent bw 2017 0.0 0.1 50256 4716 ? Sl 16:53 0:00 /usr/lib/zeitgeist/zeitgeist-fts bw 2024 0.2 0.6 103632 18376 ? Sl 16:53 0:00 /usr/bin/akonadi_nepomuk_feeder --identifier akonadi_nepomuk_feeder bw 2043 0.0 0.0 4484 280 ? S 16:53 0:00 /bin/cat bw 2101 0.2 0.7 113600 22396 ? Sl 16:53 0:00 /usr/lib/kde4/libexec/polkit-kde-authentication-agent-1 bw 2105 0.2 0.7 114196 22072 ? Sl 16:53 0:00 /usr/bin/nepomukcontroller bw 2156 0.3 1.0 333188 31244 ? Sl 16:54 0:01 /usr/bin/kmix bw 2167 0.0 0.0 6548 2724 pts/2 Ss 16:54 0:00 /bin/bash bw 2177 0.2 0.7 113496 22960 ? Sl 16:54 0:00 /usr/bin/klipper bw 2394 3.5 1.2 52932 35596 ? SNl 16:54 0:11 /usr/bin/virtuoso-t +foreground +configfile /tmp/virtuoso_hX1884.ini +wait root 2460 0.0 0.0 6184 1876 pts/2 S 16:54 0:00 sudo -s root 2500 0.0 0.0 6528 2700 pts/2 S 16:54 0:00 /bin/bash root 2599 0.0 0.0 5444 1280 pts/2 S+ 16:54 0:00 /bin/bash bin/aero root 2606 0.1 0.0 9836 2500 pts/2 S+ 16:54 0:00 wvdial aero2 root 2619 0.0 0.0 3504 1280 pts/2 S 16:54 0:00 /usr/sbin/pppd 57600 modem crtscts defaultroute usehostname -detach user aero bw 2653 0.0 0.0 6600 2880 pts/3 Ss 16:54 0:00 /bin/bash bw 2676 0.4 0.8 130296 24016 ? SNl 16:54 0:01 /usr/bin/nepomukservicestub nepomukfilewatch bw 2679 0.1 0.7 101636 22252 ? SNl 16:54 0:00 /usr/bin/nepomukservicestub nepomukqueryservice bw 2681 0.2 0.8 109836 24280 ? SNl 16:54 0:00 /usr/bin/nepomukservicestub nepomukbackupsync bw 3833 46.0 9.7 829272 288012 ? Rl 16:55 1:46 /usr/lib/firefox/firefox bw 3903 0.0 0.0 35128 2804 ? Sl 16:55 0:00 /usr/lib/at-spi2-core/at-spi-bus-launcher bw 4708 0.1 0.0 6564 2736 pts/4 Ss 16:56 0:00 /bin/bash root 5210 0.0 0.0 0 0 ? S 16:57 0:00 [kworker/u:0] root 6140 0.2 0.0 0 0 ? S 16:58 0:00 [kworker/0:1] root 6371 0.5 0.0 6184 1868 pts/4 S+ 16:59 0:00 sudo nethogs ppp0 root 6411 17.7 0.2 8616 6144 pts/4 S+ 16:59 0:05 nethogs ppp0 bw 6787 0.0 0.0 5464 1220 pts/3 R+ 16:59 0:00 ps auxw
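Not part of the original post, but a natural next troubleshooting step is to ask the kernel for the owning PID of each socket directly. The -p flags below need root, which matters here since the mystery connections are root-owned:

# netstat can print the PID/program name per socket
sudo netstat -tnp

# ss can do the same, filtered to the port 80 traffic seen above
sudo ss -tnp '( dport = :80 )'

# lsof approaches it from the file-descriptor side
sudo lsof -nPi TCP:80

If the process column is still blank, the socket no longer has an owner: the many FIN_WAIT1 entries above are exactly that, connections whose process has already closed them, which is why NetHogs can only show them as "?".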

    Read the article

  • Sniffing out SQL Code Smells: Inconsistent use of Symbolic names and Datatypes

    - by Phil Factor
It is an awkward feeling. You've just delivered a database application that seems to be working fine in production, and you just run a few checks on it. You discover that there is a potential bug that, out of sheer good chance, hasn't kicked in to produce an error; but it lurks, like a smoking bomb. Worse, maybe you find that the bug has started its evil work of corrupting the data, but in ways that nobody has, so far, detected. You investigate, and find the damage. You are somehow going to have to repair it. Yes, it still very occasionally happens to me. It is not a nice feeling, and I do anything I can to prevent it happening. That's why I'm interested in SQL code smells. SQL code smells aren't necessarily bad practices, but just show you where to focus your attention when checking an application. Sometimes with databases the bugs can be subtle.

SQL is rather like HTML: the language does its best to try to carry out your wishes, rather than to be picky about your bugs. Most of the time, this is a great benefit, but not always. One particular place where this can be detrimental is where you have implicit conversion between different data types. Most of the time it is completely harmless, but we're concerned about the occasional time it isn't. Let's give an example: string truncation. Let's give another, even more frightening one: rounding errors on assignment to a number of different precision. Each requires a blog post to explain in detail and I'm not now going to try. Just remember that it is not always a good idea to assign data to variables, parameters or even columns when they aren't the same datatype, especially if you are relying on implicit conversion to work its magic. For details of the problem and the consequences, see here: SR0014: Data loss might occur when casting from {Type1} to {Type2}. For any experienced database developer, this is a more frightening read than a vampire story.

This is why one of the SQL code smells that makes me edgy, in my own or other people's code, is to see parameters, variables and columns that have the same names and different datatypes. Whereas quite a lot of this is perfectly normal and natural, you need to check in case one of two things has gone wrong: either sloppy naming, or mixed datatypes. Sure, it is hard to remember whether you decided that the length of a log entry was 80 or 100 characters, or what the precision of a number was. That is why a little check like the one I'm going to show you is excellent for tidying up your code before you check it back into source control!

1/ Checking Parameters only

If you were just going to check parameters, you might just do this. It simply groups all the parameters, either input or output, of all the routines (e.g. stored procedures or functions) by their name and checks to see, in the HAVING clause, whether their data types are all the same. If not, it lists all the examples and their origin (the routine). Even this little check can occasionally be scarily revealing.

;WITH userParameter AS (
  SELECT c.NAME AS ParameterName,
    OBJECT_SCHEMA_NAME(c.object_ID) + '.' + OBJECT_NAME(c.object_ID) AS ObjectName,
    t.name + ' '
    + CASE --we may have to put in the length
        WHEN t.name IN ('char', 'varchar', 'nchar', 'nvarchar')
          THEN '(' + CASE WHEN c.max_length = -1 THEN 'MAX'
                       ELSE CONVERT(VARCHAR(4),
                              CASE WHEN t.name IN ('nchar', 'nvarchar')
                                THEN c.max_length / 2 ELSE c.max_length
                              END)
                     END + ')'
        WHEN t.name IN ('decimal', 'numeric')
          THEN '(' + CONVERT(VARCHAR(4), c.precision)
               + ',' + CONVERT(VARCHAR(4), c.Scale) + ')'
        ELSE ''
      END --we've done with putting in the length
    + CASE WHEN XML_collection_ID <> 0
        THEN --deal with object schema names
          '(' + CASE WHEN is_XML_Document = 1 THEN 'DOCUMENT ' ELSE 'CONTENT ' END
          + COALESCE(
              (SELECT QUOTENAME(ss.name) + '.' + QUOTENAME(sc.name)
               FROM sys.xml_schema_collections sc
               INNER JOIN Sys.Schemas ss ON sc.schema_ID = ss.schema_ID
               WHERE sc.xml_collection_ID = c.XML_collection_ID), 'NULL') + ')'
        ELSE ''
      END AS [DataType]
  FROM sys.parameters c
  INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID
  WHERE OBJECT_SCHEMA_NAME(c.object_ID) <> 'sys'
    AND parameter_id > 0)
SELECT CONVERT(CHAR(80), objectName + '.' + ParameterName), DataType
FROM UserParameter
WHERE ParameterName IN
  (SELECT ParameterName FROM UserParameter
   GROUP BY ParameterName
   HAVING MIN(Datatype) <> MAX(DataType))
ORDER BY ParameterName

So, in a very small example here, we have a @ClosingDelimiter variable that is only CHAR(1) when, by the looks of it, it should be up to ten characters long; or, even worse, a function that should be a CHAR(1) and seems to let in a string of ten characters. Worth investigating. Then we have a @Comment variable that can't decide whether it is a VARCHAR(2000) or a VARCHAR(MAX).

2/ Columns and Parameters

Actually, once we've cleared up the mess we've made of our parameter-naming in the database we're inspecting, we're going to be more interested in listing both columns and parameters. We can do this by modifying the routine to list columns as well as parameters. Because of the slight complexity of creating the string version of the datatypes, we will create a fake table of both columns and parameters so that they can both be processed the same way. After all, we want the datatypes to match. Unfortunately, parameters do not expose all the attributes we are interested in, such as whether they are nullable (oh yes, subtle bugs happen if this isn't consistent for a datatype). We'll have to leave them out for this check. Voila! A slight modification of the first routine:

;WITH userObject AS (
  SELECT Name AS DataName, --the actual name of the parameter or column ('@' removed)
    --and the qualified object name of the routine
    OBJECT_SCHEMA_NAME(ObjectID) + '.' + OBJECT_NAME(ObjectID) AS ObjectName,
    --now the harder bit: the definition of the datatype.
    TypeName + ' '
    + CASE --we may have to put in the length, e.g. CHAR (10)
        WHEN TypeName IN ('char', 'varchar', 'nchar', 'nvarchar')
          THEN '(' + CASE WHEN MaxLength = -1 THEN 'MAX'
                       ELSE CONVERT(VARCHAR(4),
                              CASE WHEN TypeName IN ('nchar', 'nvarchar')
                                THEN MaxLength / 2 ELSE MaxLength
                              END)
                     END + ')'
        WHEN TypeName IN ('decimal', 'numeric') --a BCD number!
          THEN '(' + CONVERT(VARCHAR(4), Precision)
               + ',' + CONVERT(VARCHAR(4), Scale) + ')'
        ELSE ''
      END --we've done with putting in the length
    + CASE WHEN XML_collection_ID <> 0 --tush tush. XML
        THEN --deal with object schema names
          '(' + CASE WHEN is_XML_Document = 1 THEN 'DOCUMENT ' ELSE 'CONTENT ' END
          + COALESCE(
              (SELECT TOP 1 QUOTENAME(ss.name) + '.' + QUOTENAME(sc.Name)
               FROM sys.xml_schema_collections sc
               INNER JOIN Sys.Schemas ss ON sc.schema_ID = ss.schema_ID
               WHERE sc.xml_collection_ID = XML_collection_ID), 'NULL') + ')'
        ELSE ''
      END AS [DataType],
    DataObjectType
  FROM
    (SELECT t.name AS TypeName, REPLACE(c.name, '@', '') AS Name,
            c.max_length AS MaxLength, c.precision AS [Precision],
            c.scale AS [Scale], c.[Object_id] AS ObjectID, XML_collection_ID,
            is_XML_Document, 'P' AS DataobjectType
     FROM sys.parameters c
     INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID
     AND parameter_id > 0
     UNION ALL
     SELECT t.name AS TypeName, c.name AS Name, c.max_length AS MaxLength,
            c.precision AS [Precision], c.scale AS [Scale],
            c.[Object_id] AS ObjectID, XML_collection_ID, is_XML_Document,
            'C' AS DataobjectType
     FROM sys.columns c
     INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID
     WHERE OBJECT_SCHEMA_NAME(c.object_ID) <> 'sys'
    ) f)
SELECT CONVERT(CHAR(80), objectName + '.'
         + CASE WHEN DataobjectType = 'P' THEN '@' ELSE '' END + DataName), DataType
FROM UserObject
WHERE DataName IN
  (SELECT DataName FROM UserObject
   GROUP BY DataName
   HAVING MIN(Datatype) <> MAX(DataType))
ORDER BY DataName

Hmm. I can tell you I found quite a few minor issues with the various databases I tested this on, and found some potential bugs that really leap out at you from the results. Here is the start of the result for AdventureWorks. Yes, AccountNumber is, for some reason, a VARCHAR(10) in the Customer table. Hmm, odd. Why is a city fifty characters long in that view? The idea of the description of a colour being 256 characters long seems over-ambitious. Go down the list and you'll spot other mistakes. There are no bugs, but just mess.

We started out with a listing to examine parameters, then we mixed parameters and columns. Our last listing is for a slightly more in-depth look at table columns. You'll notice that we've deliberately removed the indication of whether a column is persisted, or is an identity column, because that gives us false positives for our code smells. If you just want to browse your metadata for other reasons (and it can be quite a help in some circumstances) then uncomment them!

;WITH userColumns AS (
  SELECT c.NAME AS columnName,
    OBJECT_SCHEMA_NAME(c.object_ID) + '.' + OBJECT_NAME(c.object_ID) AS ObjectName,
    REPLACE(t.name + ' '
      + CASE WHEN is_computed = 1 THEN ' AS ' + --do DDL for a computed column
             (SELECT definition FROM sys.computed_columns cc
              WHERE cc.object_id = c.object_id AND cc.column_ID = c.column_ID)
             --we may have to put in the length
             WHEN t.Name IN ('char', 'varchar', 'nchar', 'nvarchar')
               THEN '(' + CASE WHEN c.Max_Length = -1 THEN 'MAX'
                            ELSE CONVERT(VARCHAR(4),
                                   CASE WHEN t.Name IN ('nchar', 'nvarchar')
                                     THEN c.Max_Length / 2 ELSE c.Max_Length
                                   END)
                          END + ')'
             WHEN t.name IN ('decimal', 'numeric')
               THEN '(' + CONVERT(VARCHAR(4), c.precision)
                    + ',' + CONVERT(VARCHAR(4), c.Scale) + ')'
             ELSE ''
        END
      + CASE WHEN c.is_rowguidcol = 1 THEN ' ROWGUIDCOL' ELSE '' END
      + CASE WHEN XML_collection_ID <> 0
          THEN --deal with object schema names
            '(' + CASE WHEN is_XML_Document = 1 THEN 'DOCUMENT ' ELSE 'CONTENT ' END
            + COALESCE(
                (SELECT QUOTENAME(ss.name) + '.' + QUOTENAME(sc.name)
                 FROM sys.xml_schema_collections sc
                 INNER JOIN Sys.Schemas ss ON sc.schema_ID = ss.schema_ID
                 WHERE sc.xml_collection_ID = c.XML_collection_ID), 'NULL') + ')'
          ELSE ''
        END
      + CASE WHEN is_identity = 1
          THEN CASE WHEN OBJECTPROPERTY(object_id, 'IsUserTable') = 1
                     AND COLUMNPROPERTY(object_id, c.name, 'IsIDNotForRepl') = 0
                     AND OBJECTPROPERTY(object_id, 'IsMSShipped') = 0
                 THEN '' ELSE ' NOT FOR REPLICATION ' END
          ELSE ''
        END
      + CASE WHEN c.is_nullable = 0 THEN ' NOT NULL' ELSE ' NULL' END
      + CASE WHEN c.default_object_id <> 0
          THEN ' DEFAULT ' + object_Definition(c.default_object_id)
          ELSE ''
        END
      + CASE WHEN c.collation_name IS NULL THEN ''
             WHEN c.collation_name <> (SELECT collation_name FROM sys.databases
                                       WHERE name = DB_NAME()) COLLATE Latin1_General_CI_AS
               THEN COALESCE(' COLLATE ' + c.collation_name, '')
             ELSE ''
        END, '  ', ' ') AS [DataType]
  FROM sys.columns c
  INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID
  WHERE OBJECT_SCHEMA_NAME(c.object_ID) <> 'sys')
SELECT CONVERT(CHAR(80), objectName + '.' + columnName), DataType
FROM UserColumns
WHERE columnName IN
  (SELECT columnName FROM UserColumns
   GROUP BY columnName
   HAVING MIN(Datatype) <> MAX(DataType))
ORDER BY columnName

If you take a look down the results against AdventureWorks, you'll see once again that there are things to investigate: mostly, in the illustration, discrepancies between null and non-null datatypes. So, I hear you ask, what about temporary variables within routines? If ever there was a source of elusive bugs, you'll find it there. Sadly, these temporary variables are not stored in the metadata, so we'll have to find a more subtle way of flushing these out, and that will, I'm afraid, have to wait!
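As a footnote, here is a minimal illustration (not from the original article) of the two implicit-conversion hazards mentioned at the start, silent truncation and silent rounding:

DECLARE @log VARCHAR(10), @price DECIMAL(5,2);

-- Assignment to a shorter string variable truncates silently
SET @log = 'a message longer than ten characters';
SELECT @log;    -- 'a message '

-- Assignment to a narrower precision rounds silently
SET @price = 123.456;
SELECT @price;  -- 123.46

Neither statement raises an error, which is exactly why mismatched datatypes that share a name are worth hunting down.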

    Read the article

  • Nagging As A Strategy For Better Linking: -z guidance

    - by user9154181
The link-editor (ld) in Solaris 11 has a new feature that we call guidance, which is intended to help you build better objects. The basic idea behind guidance is that if (and only if) you request it, the link-editor will issue messages suggesting better options and other changes you might make to your ld command to get better results. You can choose to take the advice, or you can disable specific types of guidance while acting on others. In some ways, this works like an experienced friend leaning over your shoulder and giving you advice — you're free to take it or leave it as you see fit, but you get nudged to do a better job than you might have otherwise. We use guidance to build the core Solaris OS, and it has proven to be useful, both in improving our objects, and in making sure that regressions don't creep back in later. In this article, I'm going to describe the evolution in thinking and design that led to the implementation of the -z guidance option, as well as give a brief description of how it works. The guidance feature issues non-fatal warnings. However, experience shows that once developers get used to ignoring warnings, it is inevitable that real problems will be lost in the noise and ignored or missed. This is why we have a zero-tolerance policy against build noise in the core Solaris OS. In order to get maximum benefit from -z guidance while maintaining this policy, I added the -z fatal-warnings option at the same time. Much of the material presented here is adapted from the ARC case PSARC 2010/312, Link-editor guidance.

The History Of Unfortunate Link-Editor Defaults

The Solaris link-editor is one of the oldest Unix commands. It stands to reason that this would be true — in order to write an operating system, you need the ability to compile and link code. The original link-editor (ld) had defaults that made sense at the time. As new features were needed, command line option switches were added to let the user use them, while maintaining backward compatibility for those who didn't. Backward compatibility is always a concern in system design, but is particularly important in the case of the tool chain (compilers, linker, and related tools), since it is a basic building block for the entire system. Over the years, applications have grown in size and complexity. Important concepts like dynamic linking that didn't exist in the original Unix system were invented. Object file formats changed. In the case of System V Release 4 Unix derivatives like Solaris, the ELF (Executable and Linking Format) was adopted. Since then, the ELF system has evolved to provide tools needed to manage today's larger and more complex environments. Features such as lazy loading and direct bindings have been added. In an ideal world, many of these options would be defaults, with rarely used options that allow the user to turn them off. However, the reality is exactly the reverse: for backward compatibility, these features are all options that must be explicitly turned on by the user. This has led to a situation in which most applications do not take advantage of the many improvements that have been made in linking over the last 20 years. If their code seems to link and run without issue, what motivation does a developer have to read a complex manpage, absorb the information provided, choose the features that matter for their application, and apply them? Experience shows that only the most motivated and diligent programmers will make that effort.
We know that most programs would be improved if we could just get you to use the various whizzy features that we provide, but the defaults conspire against us. We have long wanted to do something to make it easier for our users to use the linkers more effectively. There have been many conversations over the years regarding this issue, and how to address it. They always break down along the following lines:

Change ld Defaults

Since the world would be a better place if the newer ld features were the defaults, why not change things to make it so? This idea is simple, elegant, and impossible. Doing so would break a large number of existing applications, including those of ISVs, big customers, and a plethora of existing open source packages. In each case, the owner of that code may choose to follow our lead and fix their code, or they may view it as an invitation to reconsider their commitment to our platform. Backward compatibility, and our installed base of working software, is one of our greatest assets, and not something to be lightly put at risk. Breaking backward compatibility at this level of the system is likely to do more harm than good. But, it sure is tempting.

New Link-Editor

One might create a new linker command, not called 'ld', leaving the old command as it is. The new one could use the same code as ld, but would offer only modern options, with the proper defaults for features such as direct binding. The resulting link-editor would be a pleasure to use. However, the approach is doomed to niche status. There is a vast pile of existing code in the world built around the existing ld command, reaching back to the 1970s. ld use is embedded in large and unknown numbers of makefiles, and ld is used by name by compilers that execute it. A Unix link-editor that is not named ld will not find a majority audience no matter how good it might be. Finally, a new linker command will eventually cease to be new, and will accumulate its own burden of backward compatibility issues.

An Option To Make ld Do The Right Things Automatically

This line of reasoning is best summarized by a CR filed in 2005, entitled "6239804 make it easier for ld(1) to do what's best". The idea is to have a '-z best' option that unchains ld from its backward compatibility commitment, and allows it to turn on the "best" set of features, as determined by the authors of ld. The specific set of features enabled by -z best would be subject to change over time, as requirements change. This idea is more realistic than the other two, but was never implemented because it has some important issues that we could never answer to our satisfaction: The -z best proposal assumes that the user can turn it on, and trust it to select good options without the user needing to be aware of the options being applied. This is a fallacy. Features such as direct bindings require the user to do some analysis to ensure that the resulting program will still operate properly. A user who is willing to do the work to verify that what -z best does will be OK for their application is capable of turning on those features directly, and therefore gains little added benefit from -z best. The intent is that when a user opts into -z best, they understand that -z best is subject to sometimes incompatible evolution. Experience teaches us that this won't work. People will use this feature, the meaning of -z best will change, code that used to build will fail, and then there will be complaints and demands to retract the change.
When (not if) this occurs, we will of course defend our actions, and point at the disclaimer. We'll win some of those debates, and lose others. Ultimately, we'll end up with -z best2 (-z better), or other compromises, and our goal of simplifying the world will have failed. The -z best idea rolls up a set of features that may or may not be related to each other into a unit that must be taken wholesale, or not at all. It could be that only a subset of what it does is compatible with a given application, in which case the user is expected to abandon -z best and instead set the options that apply to their application directly. In doing so, they lose one of the benefits of -z best, that if you use it, future versions of ld may choose a different set of options, and automatically improve the object through the act of rebuilding it. I drew two conclusions from the above history: For a link-editor, backward compatibility is vital. If a given command line linked your application 10 years ago, you have every reason to expect that it will link today, assuming that the libraries you're linking against are still available and compatible with their previous interfaces. For an application of any size or complexity, there is no substitute for the work involved in examining the code and determining which linker options apply and which do not. These options are largely orthogonal to each other, and it can be reasonable not to use any or all of them, depending on the situation, even in modern applications. It is a mistake to tie them together. The idea for -z guidance came from consideration of these points. By decoupling the advice from the act of taking the advice, we can retain the good aspects of -z best while avoiding its pitfalls: -z guidance gives advice, but the decision to take that advice remains with the user who must evaluate its merit and make a decision to take it or not. As such, we are free to change the specific guidance given in future releases of ld, without breaking existing applications. The only fallout from this will be some new warnings in the build output, which can be ignored or dealt with at the user's convenience. It does not couple the various features given into a single "take it or leave it" option, meaning that there will never be a need to offer "-zguidance2", or other such variants as things change over time. Guidance has the potential to be our final word on this subject. The user is given the flexibility to disable specific categories of guidance without losing the benefit of others, including those that might be added to future versions of the system. Although -z fatal-warnings stands on its own as a useful feature, it is of particular interest in combination with -z guidance. Used together, the guidance turns from advice to hard requirement: The user must either make the suggested change, or explicitly reject the advice by specifying a guidance exception token, in order to get a build. This is valuable in environments with high coding standards. ld Command Line Options The guidance effort resulted in new link-editor options for guidance and for turning warnings into fatal errors. Before I reproduce that text here, I'd like to highlight the strategic decisions embedded in the guidance feature: In order to get guidance, you have to opt in. We hope you will opt in, and believe you'll get better objects if you do, but our default mode of operation will continue as it always has, with full backward compatibility, and without judgement. 
Guidance suggestions always offer specific advice, not vague generalizations. You can disable some guidance without turning off the entire feature. When you get guidance warnings, you can choose to take the advice, or you can specify a keyword to disable guidance for just that category. This allows you to get guidance for things that are useful to you, without being bothered about things that you've already considered and dismissed. As the world changes, we will add new guidance to steer you in the right direction. All such new guidance will come with a keyword that lets you turn it off. In order to facilitate building your code on different versions of Solaris, we quietly ignore any guidance keywords we don't recognize, assuming that they are intended for newer versions of the link-editor. If you want to see what guidance tokens ld does and does not recognize on your system, you can use the ld debugging feature as follows:

% ld -Dargs -z guidance=foo,nodefs
debug:
debug: Solaris Linkers: 5.11-1.2275
debug:
debug: arg[1] option=-D: option-argument: args
debug: arg[2] option=-z: option-argument: guidance=foo,nodefs
debug: warning: unrecognized -z guidance item: foo

The -z fatal-warnings option is straightforward, and generally useful in environments with strict coding standards. Note that the GNU ld already had this feature, and we accept their option names as synonyms:

-z fatal-warnings | nofatal-warnings
--fatal-warnings | --no-fatal-warnings

The -z fatal-warnings and the --fatal-warnings options cause the link-editor to treat warnings as fatal errors. The -z nofatal-warnings and the --no-fatal-warnings options cause the link-editor to treat warnings as non-fatal. This is the default behavior. The -z guidance option is defined as follows:

-z guidance[=item1,item2,...]

Provide guidance messages to suggest ld options that can improve the quality of the resulting object, or which are otherwise considered to be beneficial. The specific guidance offered is subject to change over time as the system evolves. Obsolete guidance offered by older versions of ld may be dropped in new versions. Similarly, new guidance may be added to new versions of ld. Guidance therefore always represents current best practices. It is possible to enable guidance, while preventing specific guidance messages, by providing a list of item tokens, representing the classes of guidance to be suppressed. In this way, unwanted advice can be suppressed without losing the benefit of other guidance. Unrecognized item tokens are quietly ignored by ld, allowing a given ld command line to be executed on a variety of older or newer versions of Solaris. The guidance offered by the current version of ld, and the item tokens used to disable these messages, are as follows.

Specify Required Dependencies
Dynamic executables and shared objects should explicitly define all of the dependencies they require. Guidance recommends the use of the -z defs option, should any symbol references remain unsatisfied when building dynamic objects. This guidance can be disabled with -z guidance=nodefs.

Do Not Specify Non-Required Dependencies
Dynamic executables and shared objects should not define any dependencies that do not satisfy the symbol references made by the dynamic object. Guidance recommends that unused dependencies be removed. This guidance can be disabled with -z guidance=nounused.

Lazy Loading
Dependencies should be identified for lazy loading.
Guidance recommends the use of the -z lazyload option should any dependency be processed before either a -z lazyload or -z nolazyload option is encountered. This guidance can be disabled with -z guidance=nolazyload.

Direct Bindings
Dependencies should be referenced with direct bindings. Guidance recommends the use of the -B direct or -z direct options should any dependency be processed before either of these options, or the -z nodirect option, is encountered. This guidance can be disabled with -z guidance=nodirect.

Pure Text Segment
Dynamic objects should not contain relocations to non-writable, allocable sections. Guidance recommends compiling objects with Position Independent Code (PIC) should any relocations against the text segment remain, and neither the -z textwarn nor the -z textoff option is encountered. This guidance can be disabled with -z guidance=notext.

Mapfile Syntax
All mapfiles should use the version 2 mapfile syntax. Guidance recommends the use of the version 2 syntax should any mapfiles be encountered that use the version 1 syntax. This guidance can be disabled with -z guidance=nomapfile.

Library Search Path
Inappropriate dependencies that are encountered by ld are quietly ignored. For example, a 32-bit dependency that is encountered when generating a 64-bit object is ignored. These dependencies can result from incorrect search path settings, such as supplying an incorrect -L option. Although benign, this dependency processing is wasteful, and might hide a build problem that should be solved. Guidance recommends the removal of any inappropriate dependencies. This guidance can be disabled with -z guidance=nolibpath.

In addition, -z guidance=noall can be used to entirely disable the guidance feature. See Chapter 7, Link-Editor Quick Reference, in the Linker and Libraries Guide for more information on guidance and advice for building better objects.

Example

The following example demonstrates how the guidance feature is intended to work. We will build a shared object that has a variety of shortcomings: it does not specify all its dependencies, specifies dependencies it does not use, does not use direct bindings, uses a version 1 mapfile, and contains relocations to the readonly allocable text (not PIC). This scenario is sadly very common — many shared objects have one or more of these issues.

% cat hello.c
#include <stdio.h>
#include <unistd.h>

void
hello(void)
{
        printf("hello user %d\n", getpid());
}

% cat mapfile.v1
# This version 1 mapfile will trigger a guidance message

% cc hello.c -o hello.so -G -M mapfile.v1 -lelf

As you can see, the operation completes without error, resulting in a usable object.
However, turning on guidance reveals a number of things that could be better:

% cc hello.c -o hello.so -G -M mapfile.v1 -lelf -zguidance
ld: guidance: version 2 mapfile syntax recommended: mapfile.v1
ld: guidance: -z lazyload option recommended before first dependency
ld: guidance: -B direct or -z direct option recommended before first dependency
Undefined                       first referenced
 symbol                             in file
getpid                              hello.o  (symbol belongs to implicit dependency /lib/libc.so.1)
printf                              hello.o  (symbol belongs to implicit dependency /lib/libc.so.1)
ld: warning: symbol referencing errors
ld: guidance: -z defs option recommended for shared objects
ld: guidance: removal of unused dependency recommended: libelf.so.1
warning: Text relocation remains        referenced
    against symbol                  offset      in file
.rodata1 (section)                  0xa         hello.o
getpid                              0x4         hello.o
printf                              0xf         hello.o
ld: guidance: position independent (PIC) code recommended for shared objects
ld: guidance: see ld(1) -z guidance for more information

Given the explicit advice in the above guidance messages, it is relatively easy to modify the example to do the right things:

% cat mapfile.v2
# This version 2 mapfile will not trigger a guidance message
$mapfile_version 2

% cc hello.c -o hello.so -Kpic -G -Bdirect -M mapfile.v2 -lc -zguidance

There are situations in which the guidance does not fit the object being built. For instance, suppose you want to build an object without direct bindings:

% cc -Kpic hello.c -o hello.so -G -M mapfile.v2 -lc -zguidance
ld: guidance: -B direct or -z direct option recommended before first dependency
ld: guidance: see ld(1) -z guidance for more information

It is easy to disable that specific guidance warning without losing the overall benefit of allowing the remainder of the guidance feature to operate:

% cc -Kpic hello.c -o hello.so -G -M mapfile.v2 -lc -zguidance=nodirect

Conclusions

The linking guidelines enforced by the ld guidance feature correspond rather directly to our standards for building the core Solaris OS. I'm sure that comes as no surprise. It only makes sense that we would want to build our own product as well as we know how. Solaris is usually the first significant test for any new linker feature. We now enable guidance by default for all builds, and the effect has been very positive. Guidance helps us find suboptimal objects more quickly. Programmers get concrete advice for what to change instead of vague generalities. Even in the cases where we override the guidance, the makefile rules to do so serve as documentation of the fact. Deciding to use guidance is likely to cause some up-front work for most code, as it forces you to consider using new features such as direct bindings. Such investigation is worthwhile, but does not come for free. However, the guidance suggestions offer a structured and straightforward way to tackle modernizing your objects and, once that work is done, to keep them that way. The investment is often worth it, and will repay you in terms of better performance and fewer problems. I hope that you find guidance to be as useful as we have.
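To make the earlier point about strict environments concrete, here is a sketch of combining guidance with fatal warnings, reusing the example above (and assuming, as elsewhere in this article, that cc passes the -z options through to ld):

% cc hello.c -o hello.so -G -M mapfile.v1 -lelf -zguidance -zfatal-warnings

With both options, the same link that produced only warnings above now fails outright: every leftover guidance message becomes a hard error, so the build only succeeds once the advice is taken or the relevant guidance category is disabled (e.g. with -zguidance=nodirect).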

    Read the article

Windows 7 intermittently drops wired internet/LAN connection.

    - by CraigTP
    In a nutshell, my Windows 7 Ultimate PC intermittently drops its internet connection. Why?

Background: My PC is wired to my ADSL modem/router, which is directly connected to the phone line. I also have wireless connectivity turned on within the router for a laptop to connect wirelessly. Every few hours or so, when using my PC, I find I cannot access the internet and pages will not load. Eventually, Windows 7 will update the network icon in the task-tray to show the exclamation mark symbol on the network icon. Opening up the Network And Sharing Centre will show the red cross between "Multiple Networks" and "The Internet". Here's a picture of the "Network And Sharing Centre" (grabbed when everything was working!).

As you can see, I'm running Sun's VirtualBox on this machine and that creates a network connection for itself. This doesn't seem to affect the intermittent dropping (i.e. the intermittent drops occur whether the VirtualBox connection is in use or not). When the connection does drop, I cannot access any internet pages, nor can I access the router's web admin page at http://192.168.1.1/, so I'm assuming I've lost all local LAN access too. It's definitely not the router (or the internet connection itself), as my laptop, using the wireless connection (and running Vista Home Premium), continues to be able to access the internet (and the router's web admin pages) just fine.

Every time this happens, I can immediately restore all internet and LAN access by opening the Network Adapter page, disabling the "Local Area Connection" and then re-enabling it. Give it a few seconds and everything is fine again. I assume this is because, beneath the GUI, it's effectively doing an "ipconfig /release" then "ipconfig /renew". Why does this happen in the first place, though? I've googled for this and seen quite a few other people (even on MSDN/Technet forums) experiencing the same or almost the same problem, but with no clear resolution. Suggestions of turning off IPv6 on the LAN adapter, and ensuring there's no power management "sleeping" the network adapter, have been tried but do not cure the problem. There does not seem to be any particular sequence of events that causes it to happen either. I've had it go twice in 20 minutes when just randomly browsing the web with no other traffic, and I've also had it go once and then not go again for 2-3 hours with the same sort of usage.

Can anyone tell me why this is happening and how to make it stop?

EDIT: Additional information based upon the answer provided so far: Firstly, I forgot to mention that this is Windows 7 64-bit, if that makes any difference at all. I mentioned that I don't think the VirtualBox network adapter is causing this problem in any way, and I also have VirtualBox installed on two other machines, one running Vista Home Premium and the other running XP. Neither of these machines experiences the same network connectivity issues as the Windows 7 machine. The IP assignment for the Windows 7 machine is the same both before and after the "drop". I have a DHCP server on the router issuing IP addresses, however my Windows 7 machine uses a static address. Here's the output from "ipconfig":

    Ethernet adapter Local Area Connection:
       Connection-specific DNS Suffix  . :
       Description . . . . . . . . . . . : Realtek PCIe GBE Family Controller
       DHCP Enabled. . . . . . . . . . . : No
       Autoconfiguration Enabled . . . . : Yes
       IPv4 Address. . . . . . . . . . . : 192.168.1.2(Preferred)
       Subnet Mask . . . . . . . . . . . : 255.255.255.0
       Default Gateway . . . . . . . . . : 192.168.1.1
       DNS Servers . . . . . . . . . . . : 192.168.1.1
       NetBIOS over Tcpip. . . . . . . . : Enabled

Within the system's event logs, the only event that relates to the connection dropping is a "DNS Client Event", and this is generated after the connection has dropped; it details that DNS information can't be found for whatever website I may be trying to access, just as the connection drops:

    Log Name:      System
    Source:        Microsoft-Windows-DNS-Client
    Event ID:      1014
    Task Category: None
    Level:         Warning
    Keywords:
    User:          NETWORK SERVICE
    Description:   Name resolution for the name weather.service.msn.com timed out
                   after none of the configured DNS servers responded.

The network adapter chipset is Realtek PCIe GBE Family Controller and I have confirmed that this is the correct chipset for the motherboard (Asus M4A77TD PRO); in fact, Windows Update installed an updated driver for this on 12/Jan/2010. The details of the update say that it's a Realtek software update from December 2009. Incidentally, I was still having the same intermittent problems prior to this update. It seems to have made no difference at all.

EDIT 2 (1 Feb 2010): In my quest to solve this problem, I have discovered some more interesting information. On another forum, someone suggested that I should try running Windows in "Safe Mode With Networking" and see if the problem continues to occur. This was a fantastic suggestion and I don't know why I didn't think of it sooner myself. So, I proceeded to run in Safe Mode with Networking for a number of hours, and amazingly, the "drops" didn't occur once. It was a positive discovery; however, due to the intermittent nature of the original problem, I wasn't completely convinced that the problem was cured. One thing I did note is that the fan on my GFX card was running a lot louder than normal. This is due to the fact that I have an ASUS ENGTS250 graphics card (http://www.asus.com/product.aspx?P_ID=B6imcoax3MRY42f3) which had a known problem with a noisy fan until a BIOS update fixed the issue. (See the "Manufacturer Response" here: http://www.newegg.com/Product/Product.aspx?Item=N82E16814121334 for details.) Well, running in Safe Mode had the fan running (incorrectly) at full speed (as it did before the BIOS update), but with an (apparently) stable network connection. Obviously some driver was not loaded for the GFX card when in Safe Mode, so this got me thinking about the GFX card (since the very noisy fan was quite obvious when running in Safe Mode). I rebooted into normal mode and found that Nvidia had a very up-to-date new driver for my GFX card (only about 1 week old), so I downloaded the appropriate driver and installed it. After installation and a reboot, I was able to use my PC for an entire day with NO NETWORK DROPS!!! This was on Saturday. However, on the Sunday, I also had my PC on for pretty much the entire day and experienced 2 network drops. No other changes have been made to my PC in this time.

So, the story seems to be that updating my graphics card drivers seems to have improved (if not completely fixed) the issue; however, I'm still searching for a proper fix for this problem. Hopefully, this information may help anyone who may have additional ideas as to why this problem is occurring in the first place. (And why would new GFX card drivers have anything to do with the network?) I appreciate everyone's feedback so far. However, I'll have to ask once more if anyone has any further ideas of how to fix this particular problem? Thanks in advance.
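
A stopgap while the root cause is hunted down: since the manual fix is always the same disable/re-enable cycle, it can be scripted. Below is a minimal sketch (an editorial illustration, not part of the original post) that shells out to netsh to bounce the adapter. The interface name "Local Area Connection" is taken from the post above; the program must be run from an elevated prompt.

    // bounce_nic.cpp - automates the "disable then re-enable the NIC" workaround.
    // Build with: cl /EHsc bounce_nic.cpp   (run the result from an elevated prompt)
    #include <cstdlib>
    #include <iostream>
    #include <string>

    int main() {
        // Adapter name exactly as it appears in the Network Connections folder.
        const std::string nic = "Local Area Connection";
        const std::string disable =
            "netsh interface set interface name=\"" + nic + "\" admin=disabled";
        const std::string enable =
            "netsh interface set interface name=\"" + nic + "\" admin=enabled";

        // netsh toggles the adapter the same way the Network Connections GUI does;
        // a non-zero exit status usually means the command needs elevation.
        if (std::system(disable.c_str()) != 0) {
            std::cerr << "Could not disable the adapter (elevated prompt?)\n";
            return 1;
        }
        if (std::system(enable.c_str()) != 0) {
            std::cerr << "Could not re-enable the adapter\n";
            return 1;
        }
        std::cout << "Adapter bounced; give it a few seconds to reconnect.\n";
        return 0;
    }

This only automates the workaround already described; it does nothing to prevent the drops themselves.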

    Read the article

  • Can't join OS X Mavericks to AD Domain

    - by watkipet
    I'm attempting to join an OS X Mavericks (10.9) client to a Windows Server 2008 Active Directory domain; however, the bind fails with this error in the OS X client's system.log:

    Oct 24 15:03:15 host.domain.com com.apple.preferences.users.remoteservice[5547]: -[ODCAddServerSheetController handleOtherActionError: gotError: Error Domain=com.apple.OpenDirectory Code=5202 "Authentication server encountered an error while attempting the requested operation." UserInfo=0x7f9e6cb3e180 {NSLocalizedDescription=Authentication server encountered an error while attempting the requested operation., NSLocalizedFailureReason=Authentication server encountered an error while attempting the requested operation.}, Authentication server encountered an error while attempting the requested operation.

I've joined (bound) Ubuntu Linux clients to the same domain with net ads join in the past with no problems (using the same administrative user). I don't have access to any server logs. There is a GUI error (from Directory Utility) on the OS X client, and a GUI error (from Users & Groups) in System Preferences on the OS X client.

Update: After some Wiresharking, I've got some more info:

    OS X Client -> KDC (over UDP): AS_REQ (no padata)
    OS X Client <- KDC (over UDP): KRB5KDC_ERR_PREAUTH_REQUIRED
    OS X Client -> KDC (over UDP): AS_REQ (this time with PA-ENC-TIMESTAMP in padata)
    OS X Client <- KDC (over UDP): KRB5KDC_ERR_RESPONSE_TOO_BIG
    OS X Client -> KDC (over TCP): AS_REQ (also with PA-ENC-TIMESTAMP in padata)
    OS X Client <- KDC (over TCP): KDC_ERR_ETYPE_NOSUPP

...and that's it. This is what I think is going on:

1. The OS X client sends a Kerberos request.
2. The KDC says, "You need to pre-authenticate. Try again."
3. The OS X client tries to pre-authenticate (all this so far is over UDP).
4. The KDC says its reply is too big to fit in a UDP datagram (KRB5KDC_ERR_RESPONSE_TOO_BIG).
5. The OS X client switches to TCP and tries again.
6. Over TCP, the KDC says, "You're using an encryption type I don't support."

Note that in its padata records, the OS X client is always using "aes256-cts-hmac-sha1-96" as its encryption type. However, in its KDC_REQ_BODY record it lists the aes256-cts-hmac-sha1-96, aes128-cts-hmac-sha1-96, des3-cbc-sha1, and rc4-hmac encryption types. When the KDC comes back with KDC_ERR_ETYPE_NOSUPP, it uses rc4-hmac as its encryption type in its padata record. I know next to nothing about Kerberos, but it seems to me that the OS X client should go ahead and try the rc4-hmac encryption type. However, it does nothing after this.

Update 2: Here is the debug log from Directory Services on the OS X client. Sorry--it's long.
2013-10-25 14:19:13.219128 PDT - 10544.20463 - ODNodeCustomCall request, NodeID: 52A65FAE-4B24-455D-86EC-2199A780D234, Code: 80 2013-10-25 14:19:13.220409 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - client requested OU - 'CN=Computers,DC=domain,DC=com' 2013-10-25 14:19:13.220427 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - Binding using '[email protected]' for kerberos ID 2013-10-25 14:19:13.220571 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - new kerberos credential cache 'MEMORY:0x7fa713635470' for '[email protected]' 2013-10-25 14:19:13.220623 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_get_init_creds: loop 1 2013-10-25 14:19:13.220639 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - KDC send 0 patypes 2013-10-25 14:19:13.220653 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - fast disabled, not doing any fast wrapping 2013-10-25 14:19:13.220699 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - Trying to find service kdc for realm DOMAIN.COM flags 0 2013-10-25 14:19:13.221275 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - submissing new requests to new host 2013-10-25 14:19:13.221326 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - connecting to host: udp 192.168.0.1:kerberos (192.168.0.1) tid: 00000001 2013-10-25 14:19:13.221373 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - writing packet: udp 192.168.0.1:kerberos (192.168.0.1) tid: 00000001 2013-10-25 14:19:13.222588 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - reading packet: udp 192.168.0.1:kerberos (192.168.0.1) tid: 00000001 2013-10-25 14:19:13.222617 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - host completed: udp 192.168.0.1:kerberos (192.168.0.1) tid: 00000001 2013-10-25 14:19:13.222665 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_sendto_context DOMAIN.COM done: 0 hosts 1 packets 1 wc: 0.001960 nr: 0.000000 kh: 0.000560 tid: 00000001 2013-10-25 14:19:13.222705 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_get_init_creds: loop 2 2013-10-25 14:19:13.222737 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_get_init_creds: processing input 2013-10-25 14:19:13.222752 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_get_init_creds: got an KRB-ERROR from KDC 2013-10-25 14:19:13.222775 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_get_init_creds: KRB-ERROR -1765328359/Additional pre-authentication required 2013-10-25 14:19:13.222791 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - KDC send 4 patypes 2013-10-25 14:19:13.222800 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - KDC send PA-DATA type: 19 2013-10-25 14:19:13.222808 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - KDC send PA-DATA type: 2 2013-10-25 14:19:13.222816 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - KDC send PA-DATA type: 16 2013-10-25 
14:19:13.222825 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - KDC send PA-DATA type: 15 2013-10-25 14:19:13.222840 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_get_init_creds: using ENC-TS with enctype 18 2013-10-25 14:19:13.222850 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_get_init_creds: using default_s2k_func 2013-10-25 14:19:13.227443 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - fast disabled, not doing any fast wrapping 2013-10-25 14:19:13.227502 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - Trying to find service kdc for realm DOMAIN.COM flags 0 2013-10-25 14:19:13.228233 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - submissing new requests to new host 2013-10-25 14:19:13.228320 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - connecting to host: udp 192.168.0.1:kerberos (192.168.0.1) tid: 00010001 2013-10-25 14:19:13.228374 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - writing packet: udp 192.168.0.1:kerberos (192.168.0.1) tid: 00010001 2013-10-25 14:19:13.229930 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - reading packet: udp 192.168.0.1:kerberos (192.168.0.1) tid: 00010001 2013-10-25 14:19:13.229957 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - host completed: udp 192.168.0.1:kerberos (192.168.0.1) tid: 00010001 2013-10-25 14:19:13.229975 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_sendto trying over again (reset): 0 2013-10-25 14:19:13.230023 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - Trying to find service kdc for realm DOMAIN.COM flags 2 2013-10-25 14:19:13.230664 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - submissing new requests to new host 2013-10-25 14:19:13.230726 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - connecting to host: tcp 192.168.0.1:kerberos (192.168.0.1) tid: 00010002 2013-10-25 14:19:13.230818 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - connecting to 11: tcp 192.168.0.1:kerberos (192.168.0.1) tid: 00010002 2013-10-25 14:19:13.231101 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - writing packet: tcp 192.168.0.1:kerberos (192.168.0.1) tid: 00010002 2013-10-25 14:19:13.232743 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - reading packet: tcp 192.168.0.1:kerberos (192.168.0.1) tid: 00010002 2013-10-25 14:19:13.232777 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - host completed: tcp 192.168.0.1:kerberos (192.168.0.1) tid: 00010002 2013-10-25 14:19:13.232798 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_sendto_context DOMAIN.COM done: 0 hosts 2 packets 2 wc: 0.005316 nr: 0.000000 kh: 0.001339 tid: 00010002 2013-10-25 14:19:13.232856 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_get_init_creds: loop 3 2013-10-25 14:19:13.232868 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - 
krb5_credential - krb5_get_init_creds: processing input 2013-10-25 14:19:13.232900 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_get_init_creds: using keyproc 2013-10-25 14:19:13.232910 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_get_init_creds: using default_s2k_func 2013-10-25 14:19:13.236487 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_get_init_creds: extracting ticket 2013-10-25 14:19:13.236557 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_get_init_creds: wc: 0.015944 2013-10-25 14:19:13.237022 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - Trying to find service kdc for realm DOMAIN.COM flags 2 2013-10-25 14:19:13.237444 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - submissing new requests to new host 2013-10-25 14:19:13.237482 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - connecting to host: tcp 192.168.0.1:kerberos (192.168.0.1) tid: 00020001 2013-10-25 14:19:13.237551 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - connecting to 11: tcp 192.168.0.1:kerberos (192.168.0.1) tid: 00020001 2013-10-25 14:19:13.237900 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - writing packet: tcp 192.168.0.1:kerberos (192.168.0.1) tid: 00020001 2013-10-25 14:19:13.238616 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - reading packet: tcp 192.168.0.1:kerberos (192.168.0.1) tid: 00020001 2013-10-25 14:19:13.238645 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - host completed: tcp 192.168.0.1:kerberos (192.168.0.1) tid: 00020001 2013-10-25 14:19:13.238674 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_sendto_context DOMAIN.COM done: 0 hosts 1 packets 1 wc: 0.001656 nr: 0.000000 kh: 0.000409 tid: 00020001 2013-10-25 14:19:13.238839 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - Trying to find service kdc for realm DOMAIN.COM flags 2 2013-10-25 14:19:13.239302 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - submissing new requests to new host 2013-10-25 14:19:13.239360 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - connecting to host: tcp 192.168.0.1:kerberos (192.168.0.1) tid: 00030001 2013-10-25 14:19:13.239429 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - connecting to 11: tcp 192.168.0.1:kerberos (192.168.0.1) tid: 00030001 2013-10-25 14:19:13.239683 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - writing packet: tcp 192.168.0.1:kerberos (192.168.0.1) tid: 00030001 2013-10-25 14:19:13.240350 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - reading packet: tcp 192.168.0.1:kerberos (192.168.0.1) tid: 00030001 2013-10-25 14:19:13.240387 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - host completed: tcp 192.168.0.1:kerberos (192.168.0.1) tid: 00030001 2013-10-25 14:19:13.240415 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_sendto_context DOMAIN.COM done: 0 hosts 1 
packets 1 wc: 0.001578 nr: 0.000000 kh: 0.000445 tid: 00030001 2013-10-25 14:19:13.240514 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_get_credentials_with_flags: DOMAIN.COM wc: 0.003615 2013-10-25 14:19:13.240537 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - valid credentials for [email protected] 2013-10-25 14:19:13.240541 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - switching to cache 'MEMORY:0x7fa713635470' 2013-10-25 14:19:13.240545 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - switching GSS to cache 'MEMORY:0x7fa713635470 2013-10-25 14:19:13.240555 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - Bind Step 5 - Bind/Join computer to domain - 'domain.com' 2013-10-25 14:19:13.241345 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - resolving 'server.domain.com' 2013-10-25 14:19:13.241646 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - added socket 12 for host 'server.domain.com:389' address '192.168.0.2' to kqueue list 2013-10-25 14:19:13.241930 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - Setting kerberos server for 'Kerberos:DOMAIN.COM' to 'server.domain.com' 2013-10-25 14:19:13.241962 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - switching to cache 'MEMORY:0x7fa713635470' 2013-10-25 14:19:13.241969 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - switching GSS to cache 'MEMORY:0x7fa713635470 2013-10-25 14:19:13.242231 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - GSSAPI allow Confidentiality 2013-10-25 14:19:13.242234 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - setting realm 'DOMAIN.COM' for node '/Active Directory/domain.com' 2013-10-25 14:19:13.242239 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - GSSAPI allow Integrity (signing) 2013-10-25 14:19:13.242274 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - GSSAPI using hostname 'server.domain.com' 2013-10-25 14:19:13.242282 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - GSSAPI using initiator credential '[email protected]' 2013-10-25 14:19:13.250771 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - Authenticate to LDAP using Kerberos credential - 0 2013-10-25 14:19:13.250784 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - verified connectivity to '192.168.0.2' with socket 12 2013-10-25 14:19:13.251513 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - locating site using domain domain.com using CLDAP 2013-10-25 14:19:13.252145 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - using site of 'DOMAINGROUP' from CLDAP 2013-10-25 14:19:13.253626 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - resolving 'server2.domain.com' 2013-10-25 14:19:13.253933 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - added socket 13 for host 'server2.domain.com:389' address '192.168.0.1' to kqueue list 2013-10-25 14:19:13.254428 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - Setting kerberos server for 'Kerberos:DOMAIN.COM' to 'server2.domain.com' 2013-10-25 14:19:13.254462 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - switching to cache 'MEMORY:0x7fa713635470' 2013-10-25 14:19:13.254468 PDT - 10544.20463, Node: /Active 
Directory, Module: ActiveDirectory - switching GSS to cache 'MEMORY:0x7fa713635470 2013-10-25 14:19:13.254617 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - setting realm 'DOMAIN.COM' for node '/Active Directory/domain.com' 2013-10-25 14:19:13.254661 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - GSSAPI allow Confidentiality 2013-10-25 14:19:13.254670 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - GSSAPI allow Integrity (signing) 2013-10-25 14:19:13.254689 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - GSSAPI using hostname 'server2.domain.com' 2013-10-25 14:19:13.254695 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - GSSAPI using initiator credential '[email protected]' 2013-10-25 14:19:13.262092 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - Authenticate to LDAP using Kerberos credential - 0 2013-10-25 14:19:13.262108 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - verified connectivity to '192.168.0.1' with socket 13 2013-10-25 14:19:13.262982 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - Computer account either already exists or DC is already Read/Write 2013-10-25 14:19:13.264968 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - Adding record 'cn=spike,CN=Computers,DC=domain,DC=com' in 'domain.com'

The failure point seems to be "Computer account either already exists or DC is already Read/Write"; however, I can search for 'spike' on the Active Directory server using Active Directory Explorer and it's not there. If I do the same search for the Linux and Windows PCs I added previously, I can find them.

    Read the article

  • Cisco SR520w FE - WAN Port Stops Working

    - by Mike Hanley
    I have set up a Cisco SR520W and everything appears to be working. After about 1-2 days, it looks like the WAN port stops forwarding traffic to the Internet gateway IP of the device. If I unplug and then plug in the network cable connecting the WAN port of the SR520W to my Comcast Cable Modem, traffic starts flowing again. Also, if I restart the SR520W, the traffic will flow again. Any ideas?

Here is the running config:

Current configuration : 10559 bytes ! version 12.4 no service pad no service timestamps debug uptime service timestamps log datetime msec no service password-encryption ! hostname hostname.mydomain.com ! boot-start-marker boot-end-marker ! logging message-counter syslog no logging rate-limit enable secret 5 <removed> ! aaa new-model ! ! aaa authentication login default local aaa authorization exec default local ! ! aaa session-id common clock timezone PST -8 clock summer-time PDT recurring ! crypto pki trustpoint TP-self-signed-334750407 enrollment selfsigned subject-name cn=IOS-Self-Signed-Certificate-334750407 revocation-check none rsakeypair TP-self-signed-334750407 ! ! crypto pki certificate chain TP-self-signed-334750407 certificate self-signed 01 <removed> quit dot11 syslog ! dot11 ssid <removed> vlan 75 authentication open authentication key-management wpa guest-mode wpa-psk ascii 0 <removed> ! ip source-route ! ! ip dhcp excluded-address 172.16.0.1 172.16.0.10 ! ip dhcp pool inside import all network 172.16.0.0 255.240.0.0 default-router 172.16.0.1 dns-server 10.0.0.15 10.0.0.12 domain-name mydomain.com ! ! ip cef ip domain name mydomain.com ip name-server 68.87.76.178 ip name-server 66.240.48.9 ip port-map user-ezvpn-remote port udp 10000 ip ips notify SDEE ip ips name sdm_ips_rule ! ip ips signature-category category all retired true category ios_ips basic retired false ! ip inspect log drop-pkt no ipv6 cef ! multilink bundle-name authenticated parameter-map type inspect z1-z2-pmap audit-trail on password encryption aes ! ! username admin privilege 15 secret 5 <removed> ! crypto key pubkey-chain rsa named-key realm-cisco.pub key-string <removed> quit ! ! ! ! ! ! crypto ipsec client ezvpn EZVPN_REMOTE_CONNECTION_1 connect auto group EZVPN_GROUP_1 key <removed> mode client peer 64.1.208.90 virtual-interface 1 username admin password <removed> xauth userid mode local ! ! archive log config logging enable logging size 600 hidekeys ! ! !
class-map type inspect match-any SDM_AH match access-group name SDM_AH class-map type inspect match-any SDM-Voice-permit match protocol sip class-map type inspect match-any SDM_ESP match access-group name SDM_ESP class-map type inspect match-any SDM_EASY_VPN_REMOTE_TRAFFIC match protocol isakmp match protocol ipsec-msft match class-map SDM_AH match class-map SDM_ESP match protocol user-ezvpn-remote class-map type inspect match-all SDM_EASY_VPN_REMOTE_PT match class-map SDM_EASY_VPN_REMOTE_TRAFFIC match access-group 101 class-map type inspect match-any Easy_VPN_Remote_VT match access-group 102 class-map type inspect match-any sdm-cls-icmp-access match protocol icmp match protocol tcp match protocol udp class-map type inspect match-any sdm-cls-insp-traffic match protocol cuseeme match protocol dns match protocol ftp match protocol h323 match protocol https match protocol icmp match protocol imap match protocol pop3 match protocol netshow match protocol shell match protocol realmedia match protocol rtsp match protocol smtp extended match protocol sql-net match protocol streamworks match protocol tftp match protocol vdolive match protocol tcp match protocol udp class-map type inspect match-any L4-inspect-class match protocol icmp class-map type inspect match-all sdm-invalid-src match access-group 100 class-map type inspect match-all dhcp_out_self match access-group name dhcp-resp-permit class-map type inspect match-all dhcp_self_out match access-group name dhcp-req-permit class-map type inspect match-all sdm-protocol-http match protocol http ! ! policy-map type inspect sdm-permit-icmpreply class type inspect dhcp_self_out pass class type inspect sdm-cls-icmp-access inspect class class-default pass policy-map type inspect sdm-permit_VT class type inspect Easy_VPN_Remote_VT pass class class-default drop policy-map type inspect sdm-inspect class type inspect SDM-Voice-permit pass class type inspect sdm-cls-insp-traffic inspect class type inspect sdm-invalid-src drop log class type inspect sdm-protocol-http inspect z1-z2-pmap class class-default pass policy-map type inspect sdm-inspect-voip-in class type inspect SDM-Voice-permit pass class class-default drop policy-map type inspect sdm-permit class type inspect SDM_EASY_VPN_REMOTE_PT pass class type inspect dhcp_out_self pass class class-default drop ! zone security ezvpn-zone zone security out-zone zone security in-zone zone-pair security sdm-zp-in-ezvpn1 source in-zone destination ezvpn-zone service-policy type inspect sdm-permit_VT zone-pair security sdm-zp-out-ezpn1 source out-zone destination ezvpn-zone service-policy type inspect sdm-permit_VT zone-pair security sdm-zp-ezvpn-out1 source ezvpn-zone destination out-zone service-policy type inspect sdm-permit_VT zone-pair security sdm-zp-self-out source self destination out-zone service-policy type inspect sdm-permit-icmpreply zone-pair security sdm-zp-out-in source out-zone destination in-zone service-policy type inspect sdm-inspect-voip-in zone-pair security sdm-zp-ezvpn-in1 source ezvpn-zone destination in-zone service-policy type inspect sdm-permit_VT zone-pair security sdm-zp-out-self source out-zone destination self service-policy type inspect sdm-permit zone-pair security sdm-zp-in-out source in-zone destination out-zone service-policy type inspect sdm-inspect ! bridge irb ! ! interface FastEthernet0 switchport access vlan 75 ! interface FastEthernet1 switchport access vlan 75 ! interface FastEthernet2 switchport access vlan 75 ! interface FastEthernet3 switchport access vlan 75 ! 
interface FastEthernet4 description $FW_OUTSIDE$ ip address 75.149.48.76 255.255.255.240 ip nat outside ip ips sdm_ips_rule out ip virtual-reassembly zone-member security out-zone duplex auto speed auto crypto ipsec client ezvpn EZVPN_REMOTE_CONNECTION_1 ! interface Virtual-Template1 type tunnel no ip address ip virtual-reassembly zone-member security ezvpn-zone tunnel mode ipsec ipv4 ! interface Dot11Radio0 no ip address ! encryption vlan 75 mode ciphers aes-ccm ! ssid <removed> ! speed basic-1.0 basic-2.0 basic-5.5 6.0 9.0 basic-11.0 12.0 18.0 24.0 36.0 48.0 54.0 station-role root ! interface Dot11Radio0.75 encapsulation dot1Q 75 native ip virtual-reassembly bridge-group 75 bridge-group 75 subscriber-loop-control bridge-group 75 spanning-disabled bridge-group 75 block-unknown-source no bridge-group 75 source-learning no bridge-group 75 unicast-flooding ! interface Vlan1 no ip address ip virtual-reassembly bridge-group 1 ! interface Vlan75 no ip address ip virtual-reassembly bridge-group 75 bridge-group 75 spanning-disabled ! interface BVI1 no ip address ip nat inside ip virtual-reassembly ! interface BVI75 description $FW_INSIDE$ ip address 172.16.0.1 255.240.0.0 ip nat inside ip ips sdm_ips_rule in ip virtual-reassembly zone-member security in-zone crypto ipsec client ezvpn EZVPN_REMOTE_CONNECTION_1 inside ! ip forward-protocol nd ip route 0.0.0.0 0.0.0.0 75.149.48.78 2 ! ip http server ip http authentication local ip http secure-server ip http timeout-policy idle 60 life 86400 requests 10000 ip nat inside source list 1 interface FastEthernet4 overload ! ip access-list extended SDM_AH remark SDM_ACL Category=1 permit ahp any any ip access-list extended SDM_ESP remark SDM_ACL Category=1 permit esp any any ip access-list extended dhcp-req-permit remark SDM_ACL Category=1 permit udp any eq bootpc any eq bootps ip access-list extended dhcp-resp-permit remark SDM_ACL Category=1 permit udp any eq bootps any eq bootpc ! access-list 1 remark SDM_ACL Category=2 access-list 1 permit 172.16.0.0 0.15.255.255 access-list 100 remark SDM_ACL Category=128 access-list 100 permit ip host 255.255.255.255 any access-list 100 permit ip 127.0.0.0 0.255.255.255 any access-list 100 permit ip 75.149.48.64 0.0.0.15 any access-list 101 remark SDM_ACL Category=128 access-list 101 permit ip host 64.1.208.90 any access-list 102 remark SDM_ACL Category=1 access-list 102 permit ip any any ! ! ! ! snmp-server community <removed> RO ! control-plane ! bridge 1 protocol ieee bridge 1 route ip bridge 75 route ip banner login ^CSR520 Base Config - MFG 1.0 ^C ! line con 0 no modem enable line aux 0 line vty 0 4 transport input telnet ssh ! scheduler max-task-time 5000 end I also ran some diagnostics when the WAN port stopped working: 1. 
show interface fa4

    FastEthernet4 is up, line protocol is up
      Hardware is PQUICC_FEC, address is 0026.99c5.b434 (bia 0026.99c5.b434)
      Description: $FW_OUTSIDE$
      Internet address is 75.149.48.76/28
      MTU 1500 bytes, BW 100000 Kbit/sec, DLY 100 usec,
         reliability 255/255, txload 1/255, rxload 1/255
      Encapsulation ARPA, loopback not set
      Keepalive set (10 sec)
      Full-duplex, 100Mb/s, 100BaseTX/FX
      ARP type: ARPA, ARP Timeout 04:00:00
      Last input 01:08:15, output 00:00:00, output hang never
      Last clearing of "show interface" counters never
      Input queue: 0/75/23/0 (size/max/drops/flushes); Total output drops: 0
      Queueing strategy: fifo
      Output queue: 0/40 (size/max)
      5 minute input rate 0 bits/sec, 0 packets/sec
      5 minute output rate 1000 bits/sec, 0 packets/sec
      336446 packets input, 455403158 bytes
      Received 23 broadcasts, 0 runts, 0 giants, 37 throttles
      41 input errors, 0 CRC, 0 frame, 0 overrun, 41 ignored
      0 watchdog
      0 input packets with dribble condition detected
      172529 packets output, 23580132 bytes, 0 underruns
      0 output errors, 0 collisions, 2 interface resets
      0 unknown protocol drops
      0 babbles, 0 late collision, 0 deferred
      0 lost carrier, 0 no carrier
      0 output buffer failures, 0 output buffers swapped out

2. show ip route

    Gateway of last resort is 75.149.48.78 to network 0.0.0.0
    C    192.168.75.0/24 is directly connected, BVI75
         64.0.0.0/32 is subnetted, 1 subnets
    S       64.1.208.90 [1/0] via 75.149.48.78
    S    192.168.10.0/24 is directly connected, BVI75
         75.0.0.0/28 is subnetted, 1 subnets
    C       75.149.48.64 is directly connected, FastEthernet4
    S*   0.0.0.0/0 [2/0] via 75.149.48.78

3. show ip arp

    Protocol  Address         Age (min)  Hardware Addr   Type  Interface
    Internet  75.149.48.65           69  001e.2a39.7b08  ARPA  FastEthernet4
    Internet  75.149.48.76            -  0026.99c5.b434  ARPA  FastEthernet4
    Internet  75.149.48.78           93  0022.2d6c.ae36  ARPA  FastEthernet4
    Internet  192.168.75.1            -  0027.0d58.f5f0  ARPA  BVI75
    Internet  192.168.75.12          50  7c6d.62c7.8c0a  ARPA  BVI75
    Internet  192.168.75.13           0  001b.6301.1227  ARPA  BVI75

4. sh ip cef

    Prefix              Next Hop      Interface
    0.0.0.0/0           75.149.48.78  FastEthernet4
    0.0.0.0/8           drop
    0.0.0.0/32          receive
    64.1.208.90/32      75.149.48.78  FastEthernet4
    75.149.48.64/28     attached      FastEthernet4
    75.149.48.64/32     receive       FastEthernet4
    75.149.48.65/32     attached      FastEthernet4
    75.149.48.76/32     receive       FastEthernet4
    75.149.48.78/32     attached      FastEthernet4
    75.149.48.79/32     receive       FastEthernet4
    127.0.0.0/8         drop
    192.168.10.0/24     attached      BVI75
    192.168.75.0/24     attached      BVI75
    192.168.75.0/32     receive       BVI75
    192.168.75.1/32     receive       BVI75
    192.168.75.12/32    attached      BVI75
    192.168.75.13/32    attached      BVI75
    192.168.75.255/32   receive       BVI75
    224.0.0.0/4         drop
    224.0.0.0/24        receive
    240.0.0.0/4         drop
    255.255.255.255/32  receive

Thanks in advance, -Mike

    Read the article

  • Can ping IP address and nslookup hostname but cannot ping hostname

    - by jao
    On a Windows 2003 server I can nslookup www.google.com, which returns:

    Server:  localhost
    Address:  127.0.0.1

    Non-authoritative answer:
    Name:    www.l.google.com
    Addresses:  74.125.79.104, 74.125.79.147, 74.125.79.99
    Aliases:  www.google.com

I can then ping 74.125.79.104:

    Pinging 74.125.79.104 with 32 bytes of data:
    Reply from 74.125.79.104: bytes=32 time=16ms TTL=54
    Reply from 74.125.79.104: bytes=32 time=32ms TTL=54
    Reply from 74.125.79.104: bytes=32 time=15ms TTL=54
    Reply from 74.125.79.104: bytes=32 time=15ms TTL=54

    Ping statistics for 74.125.79.104:
        Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
    Approximate round trip times in milli-seconds:
        Minimum = 15ms, Maximum = 32ms, Average = 19ms

But I cannot ping www.google.com:

    Ping request could not find host www.google.com. Please check the name and try again.

(This one is different from the other question in that this one has a TLD; it is not a local domain.)

Update: I am running a DNS server at localhost (127.0.0.1). Even when I change it to use, for example, OpenDNS, it can still nslookup a hostname and ping an IP address, but not ping a hostname. So what is wrong?

Update 2: Here is the ipconfig /all result:

    Windows IP Configuration
    Host Name . . . . . . . . . . . . : SERVER
    Primary Dns Suffix  . . . . . . . : NETWORK.local
    Node Type . . . . . . . . . . . . : Unknown
    IP Routing Enabled. . . . . . . . : No
    WINS Proxy Enabled. . . . . . . . : No
    DNS Suffix Search List. . . . . . : NETWORK.local

    Ethernet adapter Local Area Connection:
    Connection-specific DNS Suffix  . :
    Description . . . . . . . . . . . : Broadcom NetXtreme Gigabit Ethernet #2
    Physical Address. . . . . . . . . : 00-0F-1F-56-3B-AA
    DHCP Enabled. . . . . . . . . . . : No
    IP Address. . . . . . . . . . . . : 192.168.7.2
    Subnet Mask . . . . . . . . . . . : 255.255.255.0
    Default Gateway . . . . . . . . . : 192.168.7.1
    DNS Servers . . . . . . . . . . . : 127.0.0.1

Update 3: Thanks everyone for their help and suggestions. I appreciate that. Ipconfig /flushdns returns:

    Successfully flushed the DNS resolver cache

Ipconfig /displaydns returns:

    2.7.168.192.in-addr.arpa
    ----------------------------------------
    Record Name . . . . . : 2.7.168.192.in-addr.arpa.
    Record Type . . . . . : 12
    Time To Live  . . . . : 0
    Data Length . . . . . : 4
    Section . . . . . . . : Answer
    PTR Record  . . . . . : webserver.mydomainname.com

    1.0.0.127.in-addr.arpa
    ----------------------------------------
    Record Name . . . . . : 1.0.0.127.in-addr.arpa.
    Record Type . . . . . : 12
    Time To Live  . . . . : 0
    Data Length . . . . . : 4
    Section . . . . . . . : Answer
    PTR Record  . . . . . : localhost

Update 4: Wireshark shows the following:

    3  11.540542  208.67.220.220  192.168.7.2    DNS   Standard query response A 74.125.79.99 A 74.125.79.104 A 74.125.79.147
    6  42.056794  192.168.7.2     192.168.7.255  NBNS  Name query NB WWW.GOOGLE.COM<00>

which is weird: when I ping, it sends a packet to 192.168.7.255 instead of asking the DNS server for an address.
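
A note on the symptom, with a small diagnostic sketch (editorial, not from the original post): nslookup talks to the DNS server directly, while ping resolves through the Windows resolver, which can fall back to NetBIOS broadcasts; that fallback is exactly the NBNS query to 192.168.7.255 captured above. A few lines against the Winsock getaddrinfo API exercise the same resolver path ping uses, which helps show whether the local resolver configuration, rather than the DNS server, is at fault:

    // resolve.cpp - exercises the Windows system resolver (the path ping uses),
    // unlike nslookup, which queries the DNS server directly.
    // Build with: cl resolve.cpp ws2_32.lib
    #include <winsock2.h>
    #include <ws2tcpip.h>
    #include <cstdio>

    int main(int argc, char* argv[]) {
        if (argc < 2) { std::printf("usage: resolve <hostname>\n"); return 1; }
        WSADATA wsa;
        if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0) return 1;

        addrinfo hints = {};            // zero-initialised: any family, any socktype
        addrinfo* res = nullptr;
        int rc = getaddrinfo(argv[1], nullptr, &hints, &res);
        if (rc != 0) {
            // Failure here, while nslookup succeeds, points at the local
            // resolver configuration rather than at the DNS server.
            std::printf("getaddrinfo failed with error %d\n", rc);
        } else {
            for (addrinfo* p = res; p != nullptr; p = p->ai_next) {
                if (p->ai_family == AF_INET) {
                    sockaddr_in* a = (sockaddr_in*)p->ai_addr;
                    std::printf("%s\n", inet_ntoa(a->sin_addr));  // IPv4 only, for brevity
                }
            }
            freeaddrinfo(res);
        }
        WSACleanup();
        return 0;
    }

Running it as "resolve www.google.com" on the affected server should fail the same way ping does if the resolver itself is misconfigured.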

    Read the article

  • org-sort multi: date/time (?d ?t) | priority (?p) | title (?a)

    - by lawlist
    Is anyone aware of an org-sort function / modification that can refile / organize a group of TODO entries so that it sorts them by three criteria: first by due date, second by priority, and third by the title of the task?

EDIT: I believe that org-sort by deadline (?d) has a bug: it cannot properly handle undated tasks. I am working on a workaround (i.e., moving the undated todo to a different heading before the deadline (?d) sort occurs), but perhaps the best thing to do would be to try and fix the original sorting function. Development of the workaround can be found in this thread (i.e., moving the undated tasks to a different heading in one fell swoop): How to automate org-refile for multiple todo

EDIT: Apparently, the following code (ancient history) that I found on the internet was eventually modified and included as a part of org-sort-entries. Unfortunately, undated todos are not properly sorted when sorting by deadline -- i.e., they are mixed in with the dated todos.

;; multiple sort (defun org-sort-multi (&rest sort-types) "Multiple sorts on a certain level of an outline tree, or plain list items. SORT-TYPES is a list where each entry is either a character or a cons pair (BOOL . CHAR), where BOOL is whether or not to sort case-sensitively, and CHAR is one of the characters defined in `org-sort-entries-or-items'. Entries are applied in back to front order. Example: To sort first by TODO status, then by priority, then by date, then alphabetically (case-sensitive) use the following call: (org-sort-multi '(?d ?p ?t (t . ?a)))" (interactive) (dolist (x (nreverse sort-types)) (when (char-valid-p x) (setq x (cons nil x))) (condition-case nil (org-sort-entries (car x) (cdr x)) (error nil)))) ;; sort current level (defun lawlist-sort (&rest sort-types) "Sort the current org level. SORT-TYPES is a list where each entry is either a character or a cons pair (BOOL . CHAR), where BOOL is whether or not to sort case-sensitively, and CHAR is one of the characters defined in `org-sort-entries-or-items'. Entries are applied in back to front order. Defaults to \"?o ?p\" which is sorted by TODO status, then by priority" (interactive) (when (equal mode-name "Org") (let ((sort-types (or sort-types (if (or (org-entry-get nil "TODO") (org-entry-get nil "PRIORITY")) '(?d ?t ?p) ;; date, time, priority '((nil .
?a)))))) (save-excursion (outline-up-heading 1) (let ((start (point)) end) (while (and (not (bobp)) (not (eobp)) (<= (point) start)) (condition-case nil (outline-forward-same-level 1) (error (outline-up-heading 1)))) (unless (> (point) start) (goto-char (point-max))) (setq end (point)) (goto-char start) (apply 'org-sort-multi sort-types) (goto-char end) (when (eobp) (forward-line -1)) (when (looking-at "^\\s-*$") ;; (delete-line) ) (goto-char start) ;; (dotimes (x ) (org-cycle)) ))))) EDIT: Here is a more modern version of multi-sort, which is likely based upon further development of the above-code: (defun org-sort-all () (interactive) (save-excursion (goto-char (point-min)) (while (re-search-forward "^\* " nil t) (goto-char (match-beginning 0)) (condition-case err (progn (org-sort-entries t ?a) (org-sort-entries t ?p) (org-sort-entries t ?o) (forward-line)) (error nil))) (goto-char (point-min)) (while (re-search-forward "\* PROJECT " nil t) (goto-char (line-beginning-position)) (ignore-errors (org-sort-entries t ?a) (org-sort-entries t ?p) (org-sort-entries t ?o)) (forward-line)))) EDIT: The best option will be to fix sorting of deadlines (?d) so that undated todo are moved to the bottom of the outline, instead of mixed in with the dated todo. Here is an excerpt from the current org.el included within Emacs Trunk (as of July 1, 2013): (defun org-sort (with-case) "Call `org-sort-entries', `org-table-sort-lines' or `org-sort-list'. Optional argument WITH-CASE means sort case-sensitively." (interactive "P") (cond ((org-at-table-p) (org-call-with-arg 'org-table-sort-lines with-case)) ((org-at-item-p) (org-call-with-arg 'org-sort-list with-case)) (t (org-call-with-arg 'org-sort-entries with-case)))) (defun org-sort-remove-invisible (s) (remove-text-properties 0 (length s) org-rm-props s) (while (string-match org-bracket-link-regexp s) (setq s (replace-match (if (match-end 2) (match-string 3 s) (match-string 1 s)) t t s))) s) (defvar org-priority-regexp) ; defined later in the file (defvar org-after-sorting-entries-or-items-hook nil "Hook that is run after a bunch of entries or items have been sorted. When children are sorted, the cursor is in the parent line when this hook gets called. When a region or a plain list is sorted, the cursor will be in the first entry of the sorted region/list.") (defun org-sort-entries (&optional with-case sorting-type getkey-func compare-func property) "Sort entries on a certain level of an outline tree. If there is an active region, the entries in the region are sorted. Else, if the cursor is before the first entry, sort the top-level items. Else, the children of the entry at point are sorted. Sorting can be alphabetically, numerically, by date/time as given by a time stamp, by a property or by priority. The command prompts for the sorting type unless it has been given to the function through the SORTING-TYPE argument, which needs to be a character, \(?n ?N ?a ?A ?t ?T ?s ?S ?d ?D ?p ?P ?o ?O ?r ?R ?f ?F). Here is the precise meaning of each character: n Numerically, by converting the beginning of the entry/item to a number. a Alphabetically, ignoring the TODO keyword and the priority, if any. o By order of TODO keywords. t By date/time, either the first active time stamp in the entry, or, if none exist, by the first inactive one. s By the scheduled date/time. d By deadline date/time. c By creation time, which is assumed to be the first inactive time stamp at the beginning of a line. p By priority according to the cookie. r By the value of a property. 
Capital letters will reverse the sort order. If the SORTING-TYPE is ?f or ?F, then GETKEY-FUNC specifies a function to be called with point at the beginning of the record. It must return either a string or a number that should serve as the sorting key for that record. Comparing entries ignores case by default. However, with an optional argument WITH-CASE, the sorting considers case as well." (interactive "P") (let ((case-func (if with-case 'identity 'downcase)) (cmstr ;; The clock marker is lost when using `sort-subr', let's ;; store the clocking string. (when (equal (marker-buffer org-clock-marker) (current-buffer)) (save-excursion (goto-char org-clock-marker) (looking-back "^.*") (match-string-no-properties 0)))) start beg end stars re re2 txt what tmp) ;; Find beginning and end of region to sort (cond ((org-region-active-p) ;; we will sort the region (setq end (region-end) what "region") (goto-char (region-beginning)) (if (not (org-at-heading-p)) (outline-next-heading)) (setq start (point))) ((or (org-at-heading-p) (condition-case nil (progn (org-back-to-heading) t) (error nil))) ;; we will sort the children of the current headline (org-back-to-heading) (setq start (point) end (progn (org-end-of-subtree t t) (or (bolp) (insert "\n")) (org-back-over-empty-lines) (point)) what "children") (goto-char start) (show-subtree) (outline-next-heading)) (t ;; we will sort the top-level entries in this file (goto-char (point-min)) (or (org-at-heading-p) (outline-next-heading)) (setq start (point)) (goto-char (point-max)) (beginning-of-line 1) (when (looking-at ".*?\\S-") ;; File ends in a non-white line (end-of-line 1) (insert "\n")) (setq end (point-max)) (setq what "top-level") (goto-char start) (show-all))) (setq beg (point)) (if (>= beg end) (error "Nothing to sort")) (looking-at "\\(\\*+\\)") (setq stars (match-string 1) re (concat "^" (regexp-quote stars) " +") re2 (concat "^" (regexp-quote (substring stars 0 -1)) "[ \t\n]") txt (buffer-substring beg end)) (if (not (equal (substring txt -1) "\n")) (setq txt (concat txt "\n"))) (if (and (not (equal stars "*")) (string-match re2 txt)) (error "Region to sort contains a level above the first entry")) (unless sorting-type (message "Sort %s: [a]lpha [n]umeric [p]riority p[r]operty todo[o]rder [f]unc [t]ime [s]cheduled [d]eadline [c]reated A/N/P/R/O/F/T/S/D/C means reversed:" what) (setq sorting-type (read-char-exclusive)) (and (= (downcase sorting-type) ?f) (setq getkey-func (org-icompleting-read "Sort using function: " obarray 'fboundp t nil nil)) (setq getkey-func (intern getkey-func))) (and (= (downcase sorting-type) ?r) (setq property (org-icompleting-read "Property: " (mapcar 'list (org-buffer-property-keys t)) nil t)))) (message "Sorting entries...") (save-restriction (narrow-to-region start end) (let ((dcst (downcase sorting-type)) (case-fold-search nil) (now (current-time))) (sort-subr (/= dcst sorting-type) ;; This function moves to the beginning character of the "record" to ;; be sorted. (lambda nil (if (re-search-forward re nil t) (goto-char (match-beginning 0)) (goto-char (point-max)))) ;; This function moves to the last character of the "record" being ;; sorted. (lambda nil (save-match-data (condition-case nil (outline-forward-same-level 1) (error (goto-char (point-max)))))) ;; This function returns the value that gets sorted against. 
(lambda nil (cond ((= dcst ?n) (if (looking-at org-complex-heading-regexp) (string-to-number (match-string 4)) nil)) ((= dcst ?a) (if (looking-at org-complex-heading-regexp) (funcall case-func (match-string 4)) nil)) ((= dcst ?t) (let ((end (save-excursion (outline-next-heading) (point)))) (if (or (re-search-forward org-ts-regexp end t) (re-search-forward org-ts-regexp-both end t)) (org-time-string-to-seconds (match-string 0)) (org-float-time now)))) ((= dcst ?c) (let ((end (save-excursion (outline-next-heading) (point)))) (if (re-search-forward (concat "^[ \t]*\\[" org-ts-regexp1 "\\]") end t) (org-time-string-to-seconds (match-string 0)) (org-float-time now)))) ((= dcst ?s) (let ((end (save-excursion (outline-next-heading) (point)))) (if (re-search-forward org-scheduled-time-regexp end t) (org-time-string-to-seconds (match-string 1)) (org-float-time now)))) ((= dcst ?d) (let ((end (save-excursion (outline-next-heading) (point)))) (if (re-search-forward org-deadline-time-regexp end t) (org-time-string-to-seconds (match-string 1)) (org-float-time now)))) ((= dcst ?p) (if (re-search-forward org-priority-regexp (point-at-eol) t) (string-to-char (match-string 2)) org-default-priority)) ((= dcst ?r) (or (org-entry-get nil property) "")) ((= dcst ?o) (if (looking-at org-complex-heading-regexp) (- 9999 (length (member (match-string 2) org-todo-keywords-1))))) ((= dcst ?f) (if getkey-func (progn (setq tmp (funcall getkey-func)) (if (stringp tmp) (setq tmp (funcall case-func tmp))) tmp) (error "Invalid key function `%s'" getkey-func))) (t (error "Invalid sorting type `%c'" sorting-type)))) nil (cond ((= dcst ?a) 'string<) ((= dcst ?f) compare-func) ((member dcst '(?p ?t ?s ?d ?c)) '<))))) (run-hooks 'org-after-sorting-entries-or-items-hook) ;; Reset the clock marker if needed (when cmstr (save-excursion (goto-char start) (search-forward cmstr nil t) (move-marker org-clock-marker (point)))) (message "Sorting entries...done"))) (defun org-do-sort (table what &optional with-case sorting-type) "Sort TABLE of WHAT according to SORTING-TYPE. The user will be prompted for the SORTING-TYPE if the call to this function does not specify it. WHAT is only for the prompt, to indicate what is being sorted. The sorting key will be extracted from the car of the elements of the table. If WITH-CASE is non-nil, the sorting will be case-sensitive." (unless sorting-type (message "Sort %s: [a]lphabetic, [n]umeric, [t]ime. A/N/T means reversed:" what) (setq sorting-type (read-char-exclusive))) (let ((dcst (downcase sorting-type)) extractfun comparefun) ;; Define the appropriate functions (cond ((= dcst ?n) (setq extractfun 'string-to-number comparefun (if (= dcst sorting-type) '< '>))) ((= dcst ?a) (setq extractfun (if with-case (lambda(x) (org-sort-remove-invisible x)) (lambda(x) (downcase (org-sort-remove-invisible x)))) comparefun (if (= dcst sorting-type) 'string< (lambda (a b) (and (not (string< a b)) (not (string= a b))))))) ((= dcst ?t) (setq extractfun (lambda (x) (if (or (string-match org-ts-regexp x) (string-match org-ts-regexp-both x)) (org-float-time (org-time-string-to-time (match-string 0 x))) 0)) comparefun (if (= dcst sorting-type) '< '>))) (t (error "Invalid sorting type `%c'" sorting-type))) (sort (mapcar (lambda (x) (cons (funcall extractfun (car x)) (cdr x))) table) (lambda (a b) (funcall comparefun (car a) (car b))))))
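
As an aside on the desired ordering itself: the three-key comparison is easier to see outside org's internals. The sketch below is an editorial illustration in C++ (hypothetical names throughout, not a patch to org.el); it sorts by deadline with undated entries forced to the bottom, then by priority, then by title. Treating a missing deadline as infinitely far in the future is precisely the behaviour the ?d sort key would need to adopt.

    // multi_key_sort.cpp - the requested TODO ordering: deadline (undated last),
    // then priority, then title.  Requires C++17 for std::optional.
    #include <algorithm>
    #include <iostream>
    #include <limits>
    #include <optional>
    #include <string>
    #include <tuple>
    #include <vector>

    struct Todo {
        std::optional<long> deadline;  // seconds since epoch; nullopt = undated
        char priority;                 // 'A' sorts before 'B' before 'C'
        std::string title;
    };

    int main() {
        std::vector<Todo> todos = {
            {std::nullopt, 'B', "undated chore"},
            {1700000000L,  'A', "urgent report"},
            {1700000000L,  'B', "same-day errand"},
            {1600000000L,  'C', "old low-priority task"},
        };

        std::sort(todos.begin(), todos.end(), [](const Todo& a, const Todo& b) {
            // Undated entries compare as "infinitely far in the future",
            // which pushes them below every dated entry.
            const long da = a.deadline.value_or(std::numeric_limits<long>::max());
            const long db = b.deadline.value_or(std::numeric_limits<long>::max());
            return std::tie(da, a.priority, a.title) <
                   std::tie(db, b.priority, b.title);
        });

        // Prints: old low-priority task, urgent report, same-day errand, undated chore.
        for (const auto& t : todos) std::cout << t.title << '\n';
    }

The same idea, a single comparator over a (deadline-or-infinity, priority, title) key, avoids the back-to-front chain of separate stable sorts that org-sort-multi relies on.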

    Read the article

  • OS X won't see Windows 7 in network (and vice versa)

    - by meds
    I've enabled SMB sharing in OS X Lion and have added folders to share. It says 'Windows Sharing: On' with a green circle next to it (from the Sharing window), and that to access the volume I will need to go to \\192.168.0.17. It also says that the OS X machine should be visible as 'macbook' in the network. Both my Windows 7 and OS X machines are connected to the same network, yet when I try to go to \\192.168.0.17, or from the Mac try to go to my Windows system (smb://192.168.0.6), the two OSs don't see each other. Any ideas why?

Attempting to ping the Mac from Windows results in this output in the command prompt:

    Pinging 192.168.0.17 with 32 bytes of data:
    Reply from 192.168.0.6: Destination host unreachable.
    Request timed out.
    Request timed out.
    Request timed out.

    Ping statistics for 192.168.0.17:
        Packets: Sent = 4, Received = 1, Lost = 3 (75% loss),

ipconfig in Windows is:

    Wireless LAN adapter Wireless Network Connection:
    Connection-specific DNS Suffix  . :
    Link-local IPv6 Address . . . . . : fe80::8918:efd1:b05c:890f%21
    IPv4 Address. . . . . . . . . . . : 192.168.0.6
    Subnet Mask . . . . . . . . . . . : 255.255.255.0
    Default Gateway . . . . . . . . . : 192.168.0.1

    Ethernet adapter Bluetooth Network Connection:
    Media State . . . . . . . . . . . : Media disconnected
    Connection-specific DNS Suffix  . :

    Ethernet adapter VMware Network Adapter VMnet1:
    Connection-specific DNS Suffix  . :
    Link-local IPv6 Address . . . . . : fe80::98ab:63fc:3c07:d837%13
    IPv4 Address. . . . . . . . . . . : 192.168.74.1
    Subnet Mask . . . . . . . . . . . : 255.255.255.0
    Default Gateway . . . . . . . . . :

    Ethernet adapter VMware Network Adapter VMnet8:
    Connection-specific DNS Suffix  . :
    Link-local IPv6 Address . . . . . : fe80::80ff:c575:7b50:3a10%14
    IPv4 Address. . . . . . . . . . . : 192.168.21.1
    Subnet Mask . . . . . . . . . . . : 255.255.255.0
    Default Gateway . . . . . . . . . :

    Tunnel adapter isatap.{2E97D0AE-9E18-4072-AC23-1979BA0DCB79}:
    Media State . . . . . . . . . . . : Media disconnected
    Connection-specific DNS Suffix  . :

    Tunnel adapter isatap.{E260CE43-E9A7-4DE0-A88E-4EAFF68ACDDB}:
    Media State . . . . . . . . . . . : Media disconnected
    Connection-specific DNS Suffix  . :

    Tunnel adapter isatap.{A5130812-59CE-4DDF-9C35-9433BCED9831}:
    Media State . . . . . . . . . . . : Media disconnected
    Connection-specific DNS Suffix  . :

    Tunnel adapter Teredo Tunneling Pseudo-Interface:
    Media State . . . . . . . . . . . : Media disconnected
    Connection-specific DNS Suffix  . :

    Tunnel adapter isatap.{134BCAE7-CFFF-4A98-8DA0-3708806AABEB}:
    Media State . . . . . . . . . . . : Media disconnected
    Connection-specific DNS Suffix  . :

    Tunnel adapter isatap.{8D9E3B8F-161C-4ACE-B211-3EDD694416B2}:
    Media State . . . . . . . . . . . : Media disconnected
    Connection-specific DNS Suffix  . :

in OS X:

    lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 16384
        options=3<RXCSUM,TXCSUM>
        inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1
        inet 127.0.0.1 netmask 0xff000000
        inet6 ::1 prefixlen 128
    gif0: flags=8010<POINTOPOINT,MULTICAST> mtu 1280
    stf0: flags=0<> mtu 1280
    en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
        options=2b<RXCSUM,TXCSUM,VLAN_HWTAGGING,TSO4>
        ether c8:2a:14:01:24:c1
        media: autoselect (none)
        status: inactive
    en1: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
        ether e0:f8:47:0c:fe:04
        inet6 fe80::e2f8:47ff:fe0c:fe04%en1 prefixlen 64 scopeid 0x5
        inet 192.168.0.17 netmask 0xffffff00 broadcast 192.168.0.255
        media: autoselect
        status: active
    p2p0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 2304
        ether 02:f8:47:0c:fe:04
        media: autoselect
        status: inactive
    fw0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 4078
        lladdr 70:cd:60:ff:fe:d8:f1:32
        media: autoselect <full-duplex>
        status: inactive

    Read the article

  • I need help on my C++ assignment using MS Visual C++

    - by krayzwytie
    Ok, so I don't want you to do my homework for me, but I'm a little lost with this final assignment and need all the help I can get. Learning about programming is tough enough, but doing it online is next to impossible for me... Now, to get to the program, I am going to paste what I have so far. This includes mostly //comments and what I have written so far. If you can help me figure out where all the errors are and how to complete the assignment, I will really appreciate it. Like I said, I don't want you to do my homework for me (it's my final), but any constructive criticism is welcome. This is my final assignment for this class and it is due tomorrow (Sunday before midnight, Arizona time). This is the assignment: Examine the following situation: o Your company, Datamax, Inc., is in the process of automating its payroll systems. Your manager has asked you to create a program that calculates overtime pay for all employees. Your program must take into account the employee’s salary, total hours worked, and hours worked more than 40 in a week, and then provide an output that is useful and easily understood by company management. • Compile your program utilizing the following background information and the code outline in Appendix D (included in the code section). • Submit your project as an attachment including the code and the output. Company Background: o Three employees: Mark, John, and Mary o The end user needs to be prompted for three specific pieces of input—name, hours worked, and hourly wage. o Calculate overtime if input is greater than 40 hours per week. o Provide six test plans to verify the logic within the program. o Plan 1 must display the proper information for employee #1 with overtime pay. o Plan 2 must display the proper information for employee #1 with no overtime pay. o Plans 3-6 are duplicates of plan 1 and 2 but for the other employees. Program Requirements: o Define a base class to use for the entire program. o The class holds the function calls and the variables related to the overtime pay calculations. o Define one object per employee. Note there will be three employees. o Your program must take the objects created and implement calculations based on total salaries, total hours, and the total number of overtime hours. See the Employee Summary Data section of the sample output. Logic Steps to Complete Your Program: o Define your base class. o Define your objects from your base class. o Prompt for user input, updating your object classes for all three users. o Implement your overtime pay calculations. o Display overtime or regular time pay calculations. See the sample output below. o Implement object calculations by summarizing your employee objects and display the summary information in the example below. And this is the code: // Final_Project.cpp : Defines the entry point for the console application. // #include "stdafx.h" #include <iostream> #include <string> #include <iomanip> using namespace std; // //CLASS DECLARATION SECTION // class CEmployee { public: void ImplementCalculations(string EmployeeName, double hours, double wage); void DisplayEmployInformation(void); void Addsomethingup (CEmployee, CEmployee, CEmployee); string EmployeeName ; int hours ; int overtime_hours ; int iTotal_hours ; int iTotal_OvertimeHours ; float wage ; float basepay ; float overtime_pay ; float overtime_extra ; float iTotal_salaries ; float iIndividualSalary ; }; int main() { system("cls"); cout << "Welcome to the Employee Pay Center"; /* Use this section to define your objects. 
You will have one object per employee. You have only three employees. The format is your class name and your object name. */ std::cout << "Please enter Employee's Name: "; std::cin >> EmployeeName; std::cout << "Please enter Total Hours for (EmployeeName): "; std::cin >> hours; std::cout << "Please enter Base Pay for(EmployeeName): "; std::cin >> basepay; /* Here you will prompt for the first employee’s information. Prompt the employee name, hours worked, and the hourly wage. For each piece of information, you will update the appropriate class member defined above. Example of Prompts Enter the employee name = Enter the hours worked = Enter his or her hourly wage = */ /* Here you will prompt for the second employee’s information. Prompt the employee name, hours worked, and the hourly wage. For each piece of information, you will update the appropriate class member defined above. Enter the employee name = Enter the hours worked = Enter his or her hourly wage = */ /* Here you will prompt for the third employee’s information. Prompt the employee name, hours worked, and the hourly wage. For each piece of information, you will update the appropriate class member defined above. Enter the employee name = Enter the hours worked = Enter his or her hourly wage = */ /* Here you will implement a function call to implement the employ calcuations for each object defined above. You will do this for each of the three employees or objects. The format for this step is the following: [(object name.function name(objectname.name, objectname.hours, objectname.wage)] ; */ /* This section you will send all three objects to a function that will add up the the following information: - Total Employee Salaries - Total Employee Hours - Total Overtime Hours The format for this function is the following: - Define a new object. - Implement function call [objectname.functionname(object name 1, object name 2, object name 3)] /* } //End of Main Function void CEmployee::ImplementCalculations (string EmployeeName, double hours, double wage){ //Initialize overtime variables overtime_hours=0; overtime_pay=0; overtime_extra=0; if (hours > 40) { /* This section is for the basic calculations for calculating overtime pay. - base pay = 40 hours times the hourly wage - overtime hours = hours worked – 40 - overtime pay = hourly wage * 1.5 - overtime extra pay over 40 = overtime hours * overtime pay - salary = overtime money over 40 hours + your base pay */ /* Implement function call to output the employee information. Function is defined below. */ } // if (hours > 40) else { /* Here you are going to calculate the hours less than 40 hours. - Your base pay is = your hours worked times your wage - Salary = your base pay */ /* Implement function call to output the employee information. Function is defined below. */ } // End of the else } //End of Primary Function void CEmployee::DisplayEmployInformation(); { // This function displays all the employee output information. /* This is your cout statements to display the employee information: Employee Name ............. = Base Pay .................. = Hours in Overtime ......... = Overtime Pay Amount........ = Total Pay ................. = */ } // END OF Display Employee Information void CEmployee::Addsomethingup (CEmployee Employ1, CEmployee Employ2) { // Adds two objects of class Employee passed as // function arguments and saves them as the calling object's data member values. /* Add the total hours for objects 1, 2, and 3. Add the salaries for each object. Add the total overtime hours. 
*/ /* Then display the information below. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%% EMPLOYEE SUMMARY DATA%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%% Total Employee Salaries ..... = 576.43 %%%% Total Employee Hours ........ = 108 %%%% Total Overtime Hours......... = 5 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% */ } // End of function
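For reference, the overtime arithmetic that the assignment comments describe reduces to a few lines. Below is a minimal sketch of just that calculation, deliberately in Java rather than the assignment's C++ so it illustrates the math without being a paste-in answer; all names are illustrative:

// Sketch of the overtime rule from the assignment comments (Java, illustrative only).
public class OvertimeSketch {
    static double pay(double hours, double wage) {
        if (hours > 40) {
            double basePay = 40 * wage;         // base pay = 40 hours * hourly wage
            double overtimeHours = hours - 40;  // hours worked beyond 40
            double overtimeRate = wage * 1.5;   // time-and-a-half rate
            return basePay + overtimeHours * overtimeRate;
        }
        return hours * wage;                    // no overtime: salary = base pay
    }

    public static void main(String[] args) {
        System.out.println(pay(45, 10.0)); // 40*10 + 5*15 = 475.0
    }
}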

    Read the article

  • Tomcat6 can't connect to MySQL (The driver has not received any packets from the server)

    - by Tobias Wiesenthal
    Hi all, i'm running an Apache Tomcat 6.0.20 / MySQL 5.1.37-lubuntu / sun-java6-jdk /sun-java6-jre / sun-java6-bin on my local machine using Ubuntu 9.10 as OS. I'm trying to get a simple DB-query example running for 2 days now, but i still get this Exception: org.apache.jasper.JasperException: javax.servlet.ServletException: javax.servlet.jsp.JspException: Unable to get connection, DataSource invalid: "org.apache.commons.dbcp.SQLNestedException: Cannot create PoolableConnectionFactory (Communications link failure The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.)" org.apache.jasper.servlet.JspServletWrapper.handleJspException(JspServletWrapper.java:522) org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:398) org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:342) org.apache.jasper.servlet.JspServlet.service(JspServlet.java:267) javax.servlet.http.HttpServlet.service(HttpServlet.java:717) root cause javax.servlet.ServletException: javax.servlet.jsp.JspException: Unable to get connection, DataSource invalid: "org.apache.commons.dbcp.SQLNestedException: Cannot create PoolableConnectionFactory (Communications link failure The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.)" org.apache.jasper.runtime.PageContextImpl.doHandlePageException(PageContextImpl.java:862) org.apache.jasper.runtime.PageContextImpl.handlePageException(PageContextImpl.java:791) org.apache.jsp.index_jsp._jspService(index_jsp.java:104) org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70) javax.servlet.http.HttpServlet.service(HttpServlet.java:717) org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:374) org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:342) org.apache.jasper.servlet.JspServlet.service(JspServlet.java:267) javax.servlet.http.HttpServlet.service(HttpServlet.java:717) root cause javax.servlet.jsp.JspException: Unable to get connection, DataSource invalid: "org.apache.commons.dbcp.SQLNestedException: Cannot create PoolableConnectionFactory (Communications link failure The last packet sent successfully to the server was 0 milliseconds ago. 
The driver has not received any packets from the server.)" org.apache.taglibs.standard.tag.common.sql.QueryTagSupport.getConnection(QueryTagSupport.java:285) org.apache.taglibs.standard.tag.common.sql.QueryTagSupport.doStartTag(QueryTagSupport.java:168) org.apache.jsp.index_jsp._jspx_meth_sql_005fquery_005f0(index_jsp.java:274) org.apache.jsp.index_jsp._jspx_meth_c_005fotherwise_005f0(index_jsp.java:216) org.apache.jsp.index_jsp._jspx_meth_c_005fchoose_005f0(index_jsp.java:130) org.apache.jsp.index_jsp._jspService(index_jsp.java:93) org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70) javax.servlet.http.HttpServlet.service(HttpServlet.java:717) org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:374) org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:342) org.apache.jasper.servlet.JspServlet.service(JspServlet.java:267) javax.servlet.http.HttpServlet.service(HttpServlet.java:717) my web.xml looks like this : <?xml version="1.0" encoding="utf-8"?> <web-app xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd" version="2.5"> <resource-ref> <description>DB Connection</description> <res-ref-name>jdbc/testDB</res-ref-name> <res-type>javax.sql.DataSource</res-type> <res-auth>Container</res-auth> </resource-ref> </web-app> the context.xml looks like this : <?xml version="1.0" encoding="UTF-8"?> <Context path="/my1stApp" docBase="/var/www/jsp/my1stApp" debug="5" reloadable="true" crossContext="true"> <Resource name="jdbc/testDB" auth="Container" type="javax.sql.DataSource" maxActive="5" maxIdle="5" maxWait="10000" username="user" password="password" driverClassName="com.mysql.jdbc.Driver" url="jdbc:mysql://localhost:3306/some"/> </Context> and the jsp file looks like this: <%@ page contentType="text/html" %> <%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %> <%@ taglib prefix="fn" uri="http://java.sun.com/jsp/jstl/functions" %> <%@ taglib prefix="fmt" uri="http://java.sun.com/jsp/jstl/fmt" %> <%@ taglib prefix="sql" uri="http://java.sun.com/jsp/jstl/sql" %> <html> <head> <title>DroneLootTool</title> </head> <body bgcolor="white"> <sql:query var="res" dataSource="jdbc/testDB"> select name, othername from mytable </sql:query> <h2>Results</h2> <c:forEach var="row" items="${res.rows}"> Name ${row.name}<br/> MoreName ${row.othername}<br/><br/> </c:forEach> </body> </html> read lots of forum entries / tried lots of different settings (always changed back to original settings when it didnt' work) set TOMCAT6_SECURITY=no in /etc/default/tomcat6 because TOMCAT6_SECURITY=yes was causing trouble too the skip-networking flag is not set for the DB (BIND 127.0.0.1 is set) firewall is swiched off (sudo ufw disable) MySQL works (tested several times with user used in this skript) telnet localhost 3306 says Trying ::1... Trying 127.0.0.1... Connected to localhost. Escape character is '^]'. Connection closed by foreign host. The TestConnection.java produced the following output: me@my-laptop:~/Desktop$ java -classpath '/usr/share/java/mysql.jar:./' TestConnection com.mysql.jdbc.Driver jdbc:mysql://localhost:3306/testDB myuser mypassword com.mysql.jdbc.CommunicationsException: Communications link failure Last packet sent to the server was 0 ms ago. 
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1070) at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2103) at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:718) at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:298) at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:282) at java.sql.DriverManager.getConnection(DriverManager.java:582) at java.sql.DriverManager.getConnection(DriverManager.java:185) at TestConnection.checkConnection(TestConnection.java:40) at TestConnection.main(TestConnection.java:21) Caused by: com.mysql.jdbc.CommunicationsException: Communications link failure Last packet sent to the server was 0 ms ago. at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1070) at com.mysql.jdbc.MysqlIO.readPacket(MysqlIO.java:666) at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1069) at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2031) ... 7 more Caused by: java.io.EOFException: Can not read response from server. Expected to read 4 bytes, read 0 bytes before connection was unexpectedly lost. at com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:2431) at com.mysql.jdbc.MysqlIO.readPacket(MysqlIO.java:590) ... 9 more Connection failed. i don't know if there is a difference between the way the java driver connects to the DB and the Perl DBI module does, but this PERL skript works #!/usr/bin/perl -w use CGI; use DBI; use strict; print CGI::header(); my $dbh = DBI->connect("dbi:mysql:some:localhost", "user", "password"); my $sSql = "SELECT * from mytable"; my $ppl = $dbh->selectall_arrayref( $sSql ); foreach my $pl (@$ppl) { my @array = @$pl; print @array; } $dbh->disconnect; enabled --log-warnings on the mysql, but i didn't get any new warnings. When i was searching the logs for warnings i found this messages when i restart the tomcat, don't know if it helps to find the problem : Feb 2 19:50:37 tobias-laptop jsvc.exec[3129]: 02.02.2010 19:50:37 org.apache.catalina.startup.HostConfig checkResources#012INFO: Undeploying context [/myapp] Feb 2 19:50:37 tobias-laptop jsvc.exec[3129]: 02.02.2010 19:50:37 org.apache.catalina.loader.WebappClassLoader clearReferencesThreads#012SCHWERWIEGEND: A web application appears to have started a thread named [MySQL Statement Cancellation Timer] but has failed to stop it. This is very likely to create a memory leak. Feb 2 19:50:37 tobias-laptop jsvc.exec[3129]: 02.02.2010 19:50:37 org.apache.catalina.startup.HostConfig deployDescriptor#012INFO: Deploying configuration descriptor myapp.xml
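A narrower test than the JSP stack is a bare JDBC connection attempt, which takes Tomcat, DBCP, and JSTL out of the picture entirely. A minimal sketch, assuming mysql.jar is on the classpath, and connecting by IP (127.0.0.1) to rule out localhost resolving to the IPv6 loopback ::1, which the telnet output shows being tried first:

import java.sql.Connection;
import java.sql.DriverManager;

public class MySqlPing {
    public static void main(String[] args) throws Exception {
        // Register the MySQL driver explicitly.
        Class.forName("com.mysql.jdbc.Driver");
        // Same database and credentials as context.xml, but by IP address.
        Connection con = DriverManager.getConnection(
                "jdbc:mysql://127.0.0.1:3306/some", "user", "password");
        System.out.println("Connected: " + !con.isClosed());
        con.close();
    }
}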

    Read the article

  • ControlTemplate Exception: "XamlParseException: Cannot find a Resource with the Name/Key"

    - by akaphenom
    If I move the application resources to a UserControl's resources, everything runs groovy. I still don't understand why. I noticed that my application object MyApp did more than inherit from Application: it loaded XAML for the main template and connected all the plumbing. So I decided to create a user control to remove the template from the application (thinking there may be an event-order issue not allowing my resource to be found). namespace Module1 type Template() as this = //inherit UriUserControl("/FSSilverlightApp;component/template.xaml", "") inherit UriUserControl("/FSSilverlightApp;component/templateSimple.xaml", "") do Application.LoadComponent(this, base.uri) let siteTemplate : Grid = (this.Content :?> FrameworkElement) ? siteTemplate let nav : Frame = siteTemplate ? contentFrame let pages : UriUserControl array = [| new Module1.Page1() :> UriUserControl ; new Module1.Page2() :> UriUserControl ; new Module1.Page3() :> UriUserControl ; new Module1.Page4() :> UriUserControl ; new Module1.Page5() :> UriUserControl ; |] do nav.Navigate((pages.[0] :> INamedUriProvider).Uri) |> ignore type MyApp() as this = inherit Application() do Application.LoadComponent(this, new System.Uri("/FSSilverlightApp;component/App.xaml", System.UriKind.Relative)) do System.Windows.Browser.HtmlPage.Plugin.Focus() this.RootVisual <- new Template() ; // test code to check for the existence of the ControlTemplate - it exists let a = Application.Current.Resources.MergedDictionaries let b = a.[0] let c = b.Count let d : ControlTemplate = downcast b.["TransitioningFrame"] () "/FSSilverlightApp;component/templateSimple.xaml" <UserControl x:Class="Module1.Template" xmlns:navigation="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Navigation" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" > <Grid HorizontalAlignment="Center" Background="White" Name="siteTemplate"> <StackPanel Grid.Row="3" Grid.Column="2" Name="mainPanel"> <!--Template="{StaticResource TransitioningFrame}"--> <navigation:Frame Name="contentFrame" HorizontalAlignment="Stretch" VerticalAlignment="Stretch" Template="{StaticResource TransitioningFrame}"/> </StackPanel> </Grid> </UserControl> "/FSSilverlightApp;component/App.xaml" <Application xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" x:Class="Module1.MyApp"> <Application.Resources> <ResourceDictionary> <ResourceDictionary.MergedDictionaries> <ResourceDictionary Source="/FSSilverlightApp;component/TransitioningFrame.xaml" /> </ResourceDictionary.MergedDictionaries> </ResourceDictionary> </Application.Resources> </Application> "/FSSilverlightApp;component/TransitioningFrame.xaml" <ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:navigation="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Navigation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"> <ControlTemplate x:Key="TransitioningFrame" TargetType="navigation:Frame"> <Border Background="Olive" BorderBrush="{TemplateBinding BorderBrush}" BorderThickness="5" HorizontalAlignment="{TemplateBinding HorizontalContentAlignment}" VerticalAlignment="{TemplateBinding VerticalContentAlignment}"> <ContentPresenter Cursor="{TemplateBinding Cursor}" HorizontalAlignment="{TemplateBinding HorizontalContentAlignment}" Margin="{TemplateBinding Padding}" VerticalAlignment="{TemplateBinding VerticalContentAlignment}"
Content="{TemplateBinding Content}"/> </Border> </ControlTemplate> </ResourceDictionary> Unfortunately that did not work. If I remove the template attribute from the navigationFrame element, the app loads and directs the content area to the first page in the pages array. Referencing that resource continues to throw a resource no found error. Original Post I have the following app.xaml (using Silverlight 3) <Application xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" x:Class="Module1.MyApp"> <Application.Resources> <ResourceDictionary> <ResourceDictionary.MergedDictionaries> <ResourceDictionary Source="/FSSilverlightApp;component/TransitioningFrame.xaml" /> </ResourceDictionary.MergedDictionaries> </ResourceDictionary> </Application.Resources> </Application> and content template: <ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:navigation="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Navigation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"> <ControlTemplate x:Key="TransitioningFrame" TargetType="navigation:Frame"> <Border Background="{TemplateBinding Background}" BorderBrush="{TemplateBinding BorderBrush}" BorderThickness="{TemplateBinding BorderThickness}" HorizontalAlignment="{TemplateBinding HorizontalContentAlignment}" VerticalAlignment="{TemplateBinding VerticalContentAlignment}"> <ContentPresenter Cursor="{TemplateBinding Cursor}" HorizontalAlignment="{TemplateBinding HorizontalContentAlignment}" Margin="{TemplateBinding Padding}" VerticalAlignment="{TemplateBinding VerticalContentAlignment}" Content="{TemplateBinding Content}"/> </Border> </ControlTemplate> </ResourceDictionary> The debugger says it the contentTemplate is loaded correctly by adding some minimal code: type MyApp() as this = inherit Application() do Application.LoadComponent(this, new System.Uri("/FSSilverlightApp;component/App.xaml", System.UriKind.Relative)) let cc = new ContentControl() let mainGrid : Grid = loadXaml("MainWindow.xaml") do this.Startup.Add(this.startup) let t = Application.Current.Resources.MergedDictionaries let t1 = t.[0] let t2 = t1.Count let t3: ControlTemplate = t1.["TransitioningFrame"] With this line in my main.xaml <navigation:Frame Name="contentFrame" HorizontalAlignment="Stretch" VerticalAlignment="Stretch" Template="{StaticResource TransitioningFrame}"/> Yields this exception Webpage error details User Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.2; .NET4.0C; .NET4.0E) Timestamp: Mon, 24 May 2010 23:10:15 UTC Message: Unhandled Error in Silverlight Application Code: 4004 Category: ManagedRuntimeError Message: System.Windows.Markup.XamlParseException: Cannot find a Resource with the Name/Key TransitioningFrame [Line: 86 Position: 115] at MS.Internal.XcpImports.CreateFromXaml(String xamlString, Boolean createNamescope, Boolean requireDefaultNamespace, Boolean allowEventHandlers, Boolean expandTemplatesDuringParse) at MS.Internal.XcpImports.CreateFromXaml(String xamlString, Boolean createNamescope, Boolean requireDefaultNamespace, Boolean allowEventHandlers) at System.Windows.Markup.XamlReader.Load(String xaml) at Globals.loadXaml[T](String xamlPath) at Module1.MyApp..ctor() Line: 54 Char: 13 Code: 0 URI: file:///C:/fsharp/FSSilverlightDemo/FSSilverlightApp/bin/Debug/SilverlightApplication2TestPage.html

    Read the article

  • CSS IE Hover Effect - Overlapping Elements, Display:Block, and Crashes

    - by Emtucifor
    In a fairly simple page, I have some text appear on hover over some links, like a tooltip. To start with here's my test page I'm working with: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd"> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1"> <title>Tooltip Test Page</title> <style type="text/css"> html, body, form, table, tr, td, div, p, h1, h2, h3, h4, h5 { border:0; margin:0; padding:0; } body { margin:10px; } html, body, table { font-family:Arial, Helvetica, sans-serif; font-size:12px; } h1 { font-weight:bold; font-size:16px; } table {border-collapse:collapse;} td {padding:0 8px 0 0;} a.tooltip { z-index:24; text-decoration:none; cursor:default; position:relative; color:#000; display:block; width:100px; } a.tooltip span {display:none;} a.tooltip:hover, a.tooltip:active { z-index:25; color:; background:; /* the color and background actually don't matter for their values, it's just that these have to change for IE to apply this style properly. Leaving out the color or the background makes this fail in different ways. */ } a.tooltip:hover span, a.tooltip:active span { display:block; position:absolute; color:black; background-color:#FFFFCC; border:1px solid black; padding:1px; text-align:left; top:0; left:0; margin-top:-1px; } td span.s5 {color:#ff0000} td span.s6 {color:#0000ff} </style> <script type="text/javascript"> function labelSubmit(label) { document.getElementById('o').value=label; document.BackAt.submit(); } </script> </head> <body> <h1>tooltip Test Page</h1> <table> <tbody> <tr> <td><span class="s6">&#x25a0;</span> Name 3</td> <td class="status"><a class="tooltip" href="" onclick="return false;">Status 6<span>Some very long tooltip text to demonstrate the problem by overlapping the cells below.</span></a></td> </tr> <tr> <td><span class="s6">&#x25a0;</span> Name 1</td> <td class="status"><a class="tooltip" href="" onclick="return false;">Status 6</a></td> </tr> <tr> <td><span class="s6">&#x25a0;</span> Name 2</td> <td class="status"><a class="tooltip" href="" onclick="return false;">Status 6<span>Some tooltip text</span></a></td> </tr> <tr> <td><span class="s6">&#x25a0;</span> Name 4</td> <td class="status"><a class="tooltip" href="" onclick="return false;">Status 6</a></td> </tr> <tr> <td><span class="s5">&#x25a0;</span> Name 5</td> <td class="status"><a class="tooltip" href="" onclick="return false;">Status 5<span>More Notes</span></a></td> </tr> <tr> <td><span class="s6">&#x25a0;</span> Name 6</td> <td class="status"><a class="tooltip" href="" onclick="return false;">Status 6<span>Yet more notes</span></a></td> </tr> </tbody> </table> </body> </html> The problem I'm experiencing is that text from other values shows through the tooltip text. Hover over the first row, second column to see the effect. There are a couple of things I'm trying to accomplish: Make the activation area for the hover wider, so hovering over some space to the right of "Status 6" calls up the tooltip (say, 100-150px total width of target). At first, when I was adding "display:block" to a.tooltip, IE was terminating on hover. I resolved that by removing width:14em from a.tooltip:hover. Styling the width of the hover event + display.block on the a element does BAD things. Change the width of the tooltip without changing the width of the column/parent element (so the tooltip can be wider than itso it takes up less vertical space). 
Options for making the tooltips change width with its contents up to a max width, at which point the lines wrap would be awesome, but probably impossible in IE. As soon as I put a width in place on a.tooltip, the portion of the tooltip that is above other rows than the hover source let text show through from those cells. Remove the width and you'll see that the text doesn't show through any more. The hover effect applies to the entire tooltip, so if the tooltip covers 3 rows, while moving the mouse downward, the next 2 rows won't activate because the cursor hasn't left the tooltip. Can the hover effect apply only to the initial element hovered over and not the tooltip itself so moving the mouse downward will show each tooltip in each row? It would be nice if the links could never be activated (they can't take the focus). I don't know if that's possible. Too bad IE doesn't support hover on any elements but links. Note: soon IE6 will be abandoned in favor of IE8. If it makes a big difference, then IE8 can be the target browser instead. Thanks for your help.

    Read the article

  • WPF TreeView collapse/expand option disappeared

    - by Muhammad Adnan
    I used following code <!-- TreeView --> <Style x:Key="{x:Type TreeView}" TargetType="TreeView"> <Setter Property="OverridesDefaultStyle" Value="True" /> <Setter Property="SnapsToDevicePixels" Value="True" /> <Setter Property="ScrollViewer.HorizontalScrollBarVisibility" Value="Auto"/> <Setter Property="ScrollViewer.VerticalScrollBarVisibility" Value="Auto"/> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="TreeView"> <Border Name="Border" CornerRadius="1" Background="{StaticResource WindowBackgroundBrush}" BorderBrush="{StaticResource SolidBorderBrush}" BorderThickness="1"> <ScrollViewer Focusable="False" CanContentScroll="False" Padding="4"> <ItemsPresenter/> </ScrollViewer> </Border> </ControlTemplate> </Setter.Value> </Setter> </Style> <!-- TreeViewItem --> <Style x:Key="TreeViewItemFocusVisual"> <Setter Property="Control.Template"> <Setter.Value> <ControlTemplate> <Border> <Rectangle Margin="0,0,0,0" StrokeThickness="5" Stroke="Black" StrokeDashArray="1 2" Opacity="0"/> </Border> </ControlTemplate> </Setter.Value> </Setter> </Style> <Style x:Key="{x:Type TreeViewItem}" TargetType="{x:Type TreeViewItem}"> <Setter Property="Background" Value="Transparent"/> <Setter Property="HorizontalContentAlignment" Value="{Binding Path=HorizontalContentAlignment, RelativeSource={RelativeSource AncestorType={x:Type ItemsControl}}}"/> <Setter Property="VerticalContentAlignment" Value="{Binding Path=VerticalContentAlignment, RelativeSource={RelativeSource AncestorType={x:Type ItemsControl}}}"/> <Setter Property="Padding" Value="1,0,0,0"/> <Setter Property="Foreground" Value="{DynamicResource {x:Static SystemColors.ControlTextBrushKey}}"/> <Setter Property="FocusVisualStyle" Value="{StaticResource TreeViewItemFocusVisual}"/> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="{x:Type TreeViewItem}"> <Grid> <Grid.ColumnDefinitions> <ColumnDefinition Width="Auto"/> </Grid.ColumnDefinitions> <Grid.RowDefinitions> <RowDefinition Height="Auto"/> <RowDefinition/> </Grid.RowDefinitions> <Border Name="Bd" Grid.Column="0" Background="{TemplateBinding Background}" BorderBrush="{TemplateBinding BorderBrush}" BorderThickness="{TemplateBinding BorderThickness}" Padding="{TemplateBinding Padding}"> <ContentPresenter x:Name="PART_Header" ContentSource="Header" HorizontalAlignment="{TemplateBinding HorizontalContentAlignment}"/> </Border> <ItemsPresenter x:Name="ItemsHost" Visibility="Visible" Grid.Row="1" Grid.Column="0"/> </Grid> <ControlTemplate.Triggers> <MultiTrigger> <MultiTrigger.Conditions> <Condition Property="HasHeader" Value="false"/> <Condition Property="Width" Value="Auto"/> </MultiTrigger.Conditions> <Setter TargetName="PART_Header" Property="MinWidth" Value="75"/> </MultiTrigger> <MultiTrigger> <MultiTrigger.Conditions> <Condition Property="HasHeader" Value="false"/> <Condition Property="Height" Value="Auto"/> </MultiTrigger.Conditions> <Setter TargetName="PART_Header" Property="MinHeight" Value="19"/> </MultiTrigger> <Trigger Property="IsSelected" Value="true"> <Setter TargetName="Bd" Property="Background" Value="{DynamicResource {x:Static SystemColors.HighlightBrushKey}}"/> <Setter Property="Foreground" Value="{DynamicResource {x:Static SystemColors.HighlightTextBrushKey}}"/> </Trigger> <MultiTrigger> <MultiTrigger.Conditions> <Condition Property="IsSelected" Value="true"/> <Condition Property="IsSelectionActive" Value="false"/> </MultiTrigger.Conditions> <Setter TargetName="Bd" Property="Background" Value="{DynamicResource {x:Static 
SystemColors.ControlBrushKey}}"/> <Setter Property="Foreground" Value="{DynamicResource {x:Static SystemColors.ControlTextBrushKey}}"/> </MultiTrigger> <Trigger Property="IsEnabled" Value="false"> <Setter Property="Foreground" Value="{DynamicResource {x:Static SystemColors.GrayTextBrushKey}}"/> </Trigger> </ControlTemplate.Triggers> </ControlTemplate> </Setter.Value> </Setter> </Style> </Window.Resources> <Grid> <TreeView> <TreeViewItem Header="Node 1"> <TreeViewItem Header="Node 1.1"> <TreeViewItem Header="Node 1.1.1" /> </TreeViewItem> </TreeViewItem> <TreeViewItem Header="Node 2"> <TreeViewItem Header="Node 2.1"> <TreeViewItem Header="Node 2.1.1" /> </TreeViewItem> <TreeViewItem Header="Node 2.2"> <TreeViewItem Header="Node 2.2.1" /> </TreeViewItem> </TreeViewItem> </TreeView> </Grid> The code was taken from this URL: http://stackoverflow.com/questions/1526925/can-i-have-a-treeview-without-the-tree-structure Now I have a problem: I have lost the collapse/expand toggles. How can I get them back while keeping this template?

    Read the article

  • Playing a media file in an Android media player application

    - by Mangesh
    Hi. I am new to android development. I just started with creating my own media player application by looking at the code samples given in Android SDK. While I am trying to play a local media file (m.3gp), I am getting IOException error :: error(1,-4). Please can somebody help me in this regard. Here is my code. package com.mediaPlayer; import java.io.IOException; import android.app.Activity; import android.app.AlertDialog; import android.content.DialogInterface; import android.os.Bundle; import android.view.View; import android.view.View.OnClickListener; import android.widget.Button; import android.media.MediaPlayer; import android.media.MediaPlayer.OnBufferingUpdateListener; import android.media.MediaPlayer.OnCompletionListener; import android.media.MediaPlayer.OnPreparedListener; import android.media.MediaPlayer.OnVideoSizeChangedListener; import android.view.SurfaceHolder; import android.util.Log; public class MediaPlayer1 extends Activity implements OnBufferingUpdateListener, OnCompletionListener,OnPreparedListener, OnVideoSizeChangedListener,SurfaceHolder.Callback { private static final String TAG = "MediaPlayerByMangesh"; // Widgets in the application private Button btnPlay; private Button btnPause; private Button btnStop; private MediaPlayer mMediaPlayer; private String path = "m.3gp"; private SurfaceHolder holder; private int mVideoWidth; private int mVideoHeight; private boolean mIsVideoSizeKnown = false; private boolean mIsVideoReadyToBePlayed = false; // For the id of radio button selected private int radioCheckedId = -1; /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { Log.d(TAG, "Entered OnCreate:"); super.onCreate(savedInstanceState); setContentView(R.layout.main); Log.d(TAG, "Creatinging Buttons:"); btnPlay = (Button) findViewById(R.id.btnPlay); btnPause = (Button) findViewById(R.id.btnPause); // On app load, the Pause button is disabled btnPause.setEnabled(false); btnStop = (Button) findViewById(R.id.btnStop); btnStop.setEnabled(false); /* * Attach a OnCheckedChangeListener to the radio group to monitor radio * buttons selected by user */ Log.d(TAG, "Watching for Click"); /* Attach listener to the Calculate and Reset buttons */ btnPlay.setOnClickListener(mClickListener); btnPause.setOnClickListener(mClickListener); btnStop.setOnClickListener(mClickListener); } /* * ClickListener for the Calculate and Reset buttons. Depending on the * button clicked, the corresponding method is called. */ private OnClickListener mClickListener = new OnClickListener() { @Override public void onClick(View v) { switch (v.getId()) { case R.id.btnPlay: Log.d(TAG, "Clicked Play Button"); Log.d(TAG, "Calling Play Function"); Play(); break; case R.id.btnPause: Pause(); break; case R.id.btnStop: Stop(); break; } } }; /** * Play the Video. 
*/ private void Play() { // Create a new media player and set the listeners mMediaPlayer = new MediaPlayer(); Log.d(TAG, "Entered Play function:"); try { mMediaPlayer.setDataSource(path); } catch(IOException ie) { Log.d(TAG, "IO Exception:" + path); } mMediaPlayer.setDisplay(holder); try { mMediaPlayer.prepare(); } catch(IOException ie) { Log.d(TAG, "IO Exception:" + path); } mMediaPlayer.setOnBufferingUpdateListener(this); mMediaPlayer.setOnCompletionListener(this); mMediaPlayer.setOnPreparedListener(this); //mMediaPlayer.setOnVideoSizeChangedListener(this); //mMediaPlayer.setAudioStreamType(AudioManager.STREAM_MUSIC); } public void onBufferingUpdate(MediaPlayer arg0, int percent) { Log.d(TAG, "onBufferingUpdate percent:" + percent); } public void onCompletion(MediaPlayer arg0) { Log.d(TAG, "onCompletion called"); } public void onVideoSizeChanged(MediaPlayer mp, int width, int height) { Log.v(TAG, "onVideoSizeChanged called"); if (width == 0 || height == 0) { Log.e(TAG, "invalid video width(" + width + ") or height(" + height + ")"); return; } mIsVideoSizeKnown = true; mVideoWidth = width; mVideoHeight = height; if (mIsVideoReadyToBePlayed && mIsVideoSizeKnown) { startVideoPlayback(); } } public void onPrepared(MediaPlayer mediaplayer) { Log.d(TAG, "onPrepared called"); mIsVideoReadyToBePlayed = true; if (mIsVideoReadyToBePlayed && mIsVideoSizeKnown) { startVideoPlayback(); } } public void surfaceChanged(SurfaceHolder surfaceholder, int i, int j, int k) { Log.d(TAG, "surfaceChanged called"); } public void surfaceDestroyed(SurfaceHolder surfaceholder) { Log.d(TAG, "surfaceDestroyed called"); } public void surfaceCreated(SurfaceHolder holder) { Log.d(TAG, "surfaceCreated called"); Play(); } private void startVideoPlayback() { Log.v(TAG, "startVideoPlayback"); holder.setFixedSize(176, 144); mMediaPlayer.start(); } /** * Pause the Video */ private void Pause() { ; /* * If all fields are populated with valid values, then proceed to * calculate the tips */ } /** * Stop the Video. */ private void Stop() { ; /* * If all fields are populated with valid values, then proceed to * calculate the tips */ } /** * Shows the error message in an alert dialog * * @param errorMessage * String the error message to show * @param fieldId * the Id of the field which caused the error. This is required * so that the focus can be set on that field once the dialog is * dismissed. */ private void showErrorAlert(String errorMessage, final int fieldId) { new AlertDialog.Builder(this).setTitle("Error") .setMessage(errorMessage).setNeutralButton("Close", new DialogInterface.OnClickListener() { @Override public void onClick(DialogInterface dialog, int which) { findViewById(fieldId).requestFocus(); } }).show(); } } Thanks, Mangesh Kumar K.
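One likely culprit, offered as a guess from the code above: error(1,-4) during prepare() generally means the data source could not be opened at all, and the code sets path = "m.3gp", a bare relative path that resolves to nothing useful on the device. A minimal sketch of two data sources that do resolve (assuming the clip is bundled as res/raw/m.3gp, or has been pushed to the SD card):

// Option 1: play a clip bundled in the APK under res/raw/m.3gp.
// MediaPlayer.create() prepares the player internally.
MediaPlayer mp = MediaPlayer.create(this, R.raw.m);
mp.start();

// Option 2: play from an absolute path on external storage.
MediaPlayer mp2 = new MediaPlayer();
mp2.setDataSource("/sdcard/m.3gp"); // assumes the file was pushed here
mp2.prepare();
mp2.start();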

    Read the article

  • Bindable RichTextBox still hanging in memory {WPF, Caliburn.Micro}

    - by Paul
    Hi, I am using the Caliburn.Micro framework in WPF. I need a bindable RichTextBox with a Document property. I found many ways to make a RichTextBox bindable, but I have one problem. From a parent window I open a child window, and the child window contains a bindable RichTextBox user control. After I close the child window, a memory profiler shows that the view class with the BindableRichTextBox control and its view model class are still hanging in memory - this causes memory leaks. If I use the RichTextBox from the .NET Framework or the RichTextBox from the Extended WPF Toolkit, this memory leak does not occur. I cannot identify the problem in the bindable RichTextBox class. Here is the class for the bindable RichTextBox (the base class can be the .NET or the Extended Toolkit one): /// <summary> /// Represents a bindable rich editing control which operates on System.Windows.Documents.FlowDocument /// objects. /// </summary> public class BindableRichTextBox : RichTextBox { /// <summary> /// Identifies the <see cref="Document"/> dependency property. /// </summary> public static readonly DependencyProperty DocumentProperty = DependencyProperty.Register("Document", typeof(FlowDocument), typeof(BindableRichTextBox)); /// <summary> /// Initializes a new instance of the <see cref="BindableRichTextBox"/> class. /// </summary> public BindableRichTextBox() : base() { } /// <summary> /// Initializes a new instance of the <see cref="BindableRichTextBox"/> class. /// </summary> /// <param name="document">A <see cref="T:System.Windows.Documents.FlowDocument"></see> to be added as the initial contents of the new <see cref="T:System.Windows.Controls.BindableRichTextBox"></see>.</param> public BindableRichTextBox(FlowDocument document) : base(document) { } /// <summary> /// Raises the <see cref="E:System.Windows.FrameworkElement.Initialized"></see> event. This method is invoked whenever <see cref="P:System.Windows.FrameworkElement.IsInitialized"></see> is set to true internally. /// </summary> /// <param name="e">The <see cref="T:System.Windows.RoutedEventArgs"></see> that contains the event data.</param> protected override void OnInitialized(EventArgs e) { // Hook up to get notified when DocumentProperty changes. DependencyPropertyDescriptor descriptor = DependencyPropertyDescriptor.FromProperty(DocumentProperty, typeof(BindableRichTextBox)); descriptor.AddValueChanged(this, delegate { // If the underlying value of the dependency property changes, // update the underlying document, also. base.Document = (FlowDocument)GetValue(DocumentProperty); }); // By default, we support updates to the source when focus is lost (or, if the LostFocus // trigger is specified explicitly. We don't support the PropertyChanged trigger right now. this.LostFocus += new RoutedEventHandler(BindableRichTextBox_LostFocus); base.OnInitialized(e); } /// <summary> /// Handles the LostFocus event of the BindableRichTextBox control. /// </summary> /// <param name="sender">The source of the event.</param> /// <param name="e">The <see cref="System.Windows.RoutedEventArgs"/> instance containing the event data.</param> void BindableRichTextBox_LostFocus(object sender, RoutedEventArgs e) { // If we have a binding that is set for LostFocus or Default (which we are specifying as default) // then update the source.
Binding binding = BindingOperations.GetBinding(this, DocumentProperty); if (binding.UpdateSourceTrigger == UpdateSourceTrigger.Default || binding.UpdateSourceTrigger == UpdateSourceTrigger.LostFocus) { BindingOperations.GetBindingExpression(this, DocumentProperty).UpdateSource(); } } /// <summary> /// Gets or sets the <see cref="T:System.Windows.Documents.FlowDocument"></see> that represents the contents of the <see cref="T:System.Windows.Controls.BindableRichTextBox"></see>. /// </summary> /// <value></value> /// <returns>A <see cref="T:System.Windows.Documents.FlowDocument"></see> object that represents the contents of the <see cref="T:System.Windows.Controls.BindableRichTextBox"></see>. By default, this property is set to an empty <see cref="T:System.Windows.Documents.FlowDocument"></see>. Specifically, the empty <see cref="T:System.Windows.Documents.FlowDocument"></see> contains a single <see cref="T:System.Windows.Documents.Paragraph"></see>, which contains a single <see cref="T:System.Windows.Documents.Run"></see> which contains no text.</returns> /// <exception cref="T:System.ArgumentException">Raised if an attempt is made to set this property to a <see cref="T:System.Windows.Documents.FlowDocument"></see> that represents the contents of another <see cref="T:System.Windows.Controls.RichTextBox"></see>.</exception> /// <exception cref="T:System.ArgumentNullException">Raised if an attempt is made to set this property to null.</exception> /// <exception cref="T:System.InvalidOperationException">Raised if this property is set while a change block has been activated.</exception> public new FlowDocument Document { get { return (FlowDocument)GetValue(DocumentProperty); } set { SetValue(DocumentProperty, value); } } } Thanks for any help and advice. Quick example: Child window with the .NET RichTextBox: <Window x:Class="WpfApplication2.Window1" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="Window1" Height="300" Width="300"> <Grid> <RichTextBox Background="Green" VerticalScrollBarVisibility="Auto" HorizontalScrollBarVisibility="Auto" FontSize="13" Margin="4,4,4,4" Grid.Row="0"/> </Grid> </Window> I open this window from the parent window: var w = new Window1(); w.Show(); Then I close this window and check with the memory profiler: no object of Window1 or its RichTextBox remains in memory. That is OK. But then I try the bindable RichTextBox. Child window 2: <Window x:Class="WpfApplication2.Window2" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:Controls="clr-namespace:WpfApplication2.Controls" Title="Window2" Height="300" Width="300"> <Grid> <Controls:BindableRichTextBox Background="Red" VerticalScrollBarVisibility="Auto" HorizontalScrollBarVisibility="Auto" FontSize="13" Margin="4,4,4,4" Grid.Row="0" /> </Grid> </Window> I open child window 2 and close it, and objects of this child window, including the BindableRichTextBox object, are still alive in memory.

    Read the article

  • Need guidance on a Google Map application that has to show 250,000 polylines.

    - by lucian.jp
    I am looking for advice for an application I am developing that uses Google Map. Summary: A user has a list of criteria for searching a street segment that fulfills the criteria. The street segments will be colored with 3 colors for showing those below average, average and over average. Then the user clicks on the street segment to see an information window showing the properties of that specific segment hiding those not selected until he/she closes the window and other polyline becomes visible again. This looks quite like the Monopoly City Streets game Hasbro made some month ago the difference being I do not use Flash, I can’t use Open Street Map because it doesn’t list street segment (if it does the IDs won’t be the same anyway) and I do not have to show Google sketch building over. Information: I have a database of street segments with IDs, polyline points and centroid. The database has 6,000,000 street segment records in it. To narrow the generated data a bit we focus on city. The largest city we must show has 250,000 street segments. This means 250,000 line segment polyline to show. Our longest polyline uses 9600 characters which is stored in two 8000 varchar columns in SQL Server 2008. We need to use the API v3 because it is faster than the API v2 and the application will be ported to iPhone. For now it's an ASP.NET 3.5 with SQl Server 2008 application. Performance is a priority. Problems: Most of the demo projects that do this are made with API v2. So besides tutorial on the Google API v3 reference page I have nothing to compare performance or technology use to achieve my goal. There is no available .NET wrapper for the API v3 yet. Generating a 250,000 line segment polyline creates a heavy file which takes time to transfer and parse. (I have found a demo of one polyline of 390,000 points. I think the encoder would be far less efficient with more polylines with less points since there will be less rounding.) Since streets segments are shown based on criteria, polylines must be dynamically created and cache can't be used. Some thoughts: KML/KMZ: Pros: Since it is a standard we can easily load Bing maps, Yahoo! maps, Google maps, Google Earth, with the same KML file. The data generation would be the same. Cons: LineString in KML cannot be encoded polyline like the Google map API can handle. So it would probably be bigger and slower to display. Zipping the file at the size it will take more processing time and require the client side to uncompress the data and I am not quite sure with 250,000 data how an iPhone would handle this and how a server would handle 40 users browsing at the same time. JavaScript file: Pros: JavaScript file can have encoded polyline and would significantly reduce the file to transfer. Cons: Have to create my own stripped version of API v3 to add overlays, create polyline, etc. It is more complex than just create a KML file and point to the source. GeoRSS: This option isn't adapted for my needs I think, but I could be wrong. MapServer: I saw some post suggesting using MapServer to generate overlays. Not quite sure for the connection with our database and the performance it would give. Plus it requires a plugin for generating KML. It seems to me that it wouldn't allow me to do better than creating my own KML or JavaScript file. Maintenance would be simpler without. 
Monopoly City Streets: The game is now over, but for those who know what I am talking about Monopoly City Streets was showing at max zoom level only the streets that the centroid was inside the Bounds of the window. Moving the map was sending request to the server for the new streets to show. While I think this was ingenious, I have no idea how to implement something similar. The only thing I thought about was to compare if the long was inside the bound of map area X and same with Y. While this could improve performance significantly at high zoom level, this would give nothing when showing a whole city. Clustering: While cluster is awesome for marker, it seems we cannot cluster polylines. I would have liked something like MarkerClusterer for polylines and be able to cluster by my 3 polyline colors. This will probably stay as a “would have been freaking awesome but forget it”. Arrow: I will have in a future version to show a direction for the polyline and will have to show an arrow at the centroid. Loading an image or marker will only double my data so creating a custom overlay will probably be my only option. I have found that demo for something similar I would like to achieve. Unfortunately, the demo is very slow, but I only wish to show 1 arrow per polyline and not multiple like the demo. This functionality will depend on the format of data since I don't think KML support custom overlays. Criteria: While the application is done with ASP.NET 3.5, the port to the iPhone won't use the web to show the application and be limited in screen size for selecting the criteria. This is why I was more orienting on a service or page generating the file based on criteria passed in parameters. The service would than generate the file I need to display the polylines on the map. I could also create an aspx page that does this. The aspx page is more documented than the service way. There should be a reason. Questions: Should I create a web service to returns the street segments file or create an aspx page that return the file? Should I create a JavaScript file with encoded polyline or a KML with longitude/latitude based on the fact that maximum longitude/latitude polyline have 9600 characters and I have to render maximum 250,000 line segment polyline. Or should I go with a MapServer that generate the overlay? Will I be able to display simple arrow on the polyline on the next version. In case of KML generation is it faster to create the file with XDocument, XmlDocument, XmlWriter and this manually or just serialize the street segment in the stream? This is more a brainstorming Stack Overflow question than an actual code problem. Any answer helping narrow the possibilities is as good as someone having all the knowledge to point me out a better choice.

    Read the article

  • What is the fastest cyclic synchronization in Java (ExecutorService vs. CyclicBarrier vs. X)?

    - by Alex Dunlop
    Which Java synchronization construct is likely to provide the best performance for a concurrent, iterative processing scenario with a fixed number of threads like the one outlined below? After experimenting on my own for a while (using ExecutorService and CyclicBarrier) and being somewhat surprised by the results, I would be grateful for some expert advice and maybe some new ideas. Existing questions here do not seem to focus primarily on performance, hence this new one. Thanks in advance! The core of the app is a simple iterative data processing algorithm, parallelized to the spread the computational load across 8 cores on a Mac Pro, running OS X 10.6 and Java 1.6.0_07. The data to be processed is split into 8 blocks and each block is fed to a Runnable to be executed by one of a fixed number of threads. Parallelizing the algorithm was fairly straightforward, and it functionally works as desired, but its performance is not yet what I think it could be. The app seems to spend a lot of time in system calls synchronizing, so after some profiling I wonder whether I selected the most appropriate synchronization mechanism(s). A key requirement of the algorithm is that it needs to proceed in stages, so the threads need to sync up at the end of each stage. The main thread prepares the work (very low overhead), passes it to the threads, lets them work on it, then proceeds when all threads are done, rearranges the work (again very low overhead) and repeats the cycle. The machine is dedicated to this task, Garbage Collection is minimized by using per-thread pools of pre-allocated items, and the number of threads can be fixed (no incoming requests or the like, just one thread per CPU core). V1 - ExecutorService My first implementation used an ExecutorService with 8 worker threads. The program creates 8 tasks holding the work and then lets them work on it, roughly like this: // create one thread per CPU executorService = Executors.newFixedThreadPool( 8 ); ... // now process data in cycles while( ...) { // package data into 8 work items ... // create one Callable task per work item ... // submit the Callables to the worker threads executorService.invokeAll( taskList ); } This works well functionally (it does what it should), and for very large work items indeed all 8 CPUs become highly loaded, as much as the processing algorithm would be expected to allow (some work items will finish faster than others, then idle). However, as the work items become smaller (and this is not really under the program's control), the user CPU load shrinks dramatically: blocksize | system | user | cycles/sec 256k 1.8% 85% 1.30 64k 2.5% 77% 5.6 16k 4% 64% 22.5 4096 8% 56% 86 1024 13% 38% 227 256 17% 19% 420 64 19% 17% 948 16 19% 13% 1626 Legend: - block size = size of the work item (= computational steps) - system = system load, as shown in OS X Activity Monitor (red bar) - user = user load, as shown in OS X Activity Monitor (green bar) - cycles/sec = iterations through the main while loop, more is better The primary area of concern here is the high percentage of time spent in the system, which appears to be driven by thread synchronization calls. As expected, for smaller work items, ExecutorService.invokeAll() will require relatively more effort to sync up the threads versus the amount of work being performed in each thread. 
But since ExecutorService is more generic than it would need to be for this use case (it can queue tasks for threads if there are more tasks than cores), I though maybe there would be a leaner synchronization construct. V2 - CyclicBarrier The next implementation used a CyclicBarrier to sync up the threads before receiving work and after completing it, roughly as follows: main() { // create the barrier barrier = new CyclicBarrier( 8 + 1 ); // create Runable for thread, tell it about the barrier Runnable task = new WorkerThreadRunnable( barrier ); // start the threads for( int i = 0; i < 8; i++ ) { // create one thread per core new Thread( task ).start(); } while( ... ) { // tell threads about the work ... // N threads + this will call await(), then system proceeds barrier.await(); // ... now worker threads work on the work... // wait for worker threads to finish barrier.await(); } } class WorkerThreadRunnable implements Runnable { CyclicBarrier barrier; WorkerThreadRunnable( CyclicBarrier barrier ) { this.barrier = barrier; } public void run() { while( true ) { // wait for work barrier.await(); // do the work ... // wait for everyone else to finish barrier.await(); } } } Again, this works well functionally (it does what it should), and for very large work items indeed all 8 CPUs become highly loaded, as before. However, as the work items become smaller, the load still shrinks dramatically: blocksize | system | user | cycles/sec 256k 1.9% 85% 1.30 64k 2.7% 78% 6.1 16k 5.5% 52% 25 4096 9% 29% 64 1024 11% 15% 117 256 12% 8% 169 64 12% 6.5% 285 16 12% 6% 377 For large work items, synchronization is negligible and the performance is identical to V1. But unexpectedly, the results of the (highly specialized) CyclicBarrier seem MUCH WORSE than those for the (generic) ExecutorService: throughput (cycles/sec) is only about 1/4th of V1. A preliminary conclusion would be that even though this seems to be the advertised ideal use case for CyclicBarrier, it performs much worse than the generic ExecutorService. V3 - Wait/Notify + CyclicBarrier It seemed worth a try to replace the first cyclic barrier await() with a simple wait/notify mechanism: main() { // create the barrier // create Runable for thread, tell it about the barrier // start the threads while( ... ) { // tell threads about the work // for each: workerThreadRunnable.setWorkItem( ... ); // ... now worker threads work on the work... // wait for worker threads to finish barrier.await(); } } class WorkerThreadRunnable implements Runnable { CyclicBarrier barrier; @NotNull volatile private Callable<Integer> workItem; WorkerThreadRunnable( CyclicBarrier barrier ) { this.barrier = barrier; this.workItem = NO_WORK; } final protected void setWorkItem( @NotNull final Callable<Integer> callable ) { synchronized( this ) { workItem = callable; notify(); } } public void run() { while( true ) { // wait for work while( true ) { synchronized( this ) { if( workItem != NO_WORK ) break; try { wait(); } catch( InterruptedException e ) { e.printStackTrace(); } } } // do the work ... // wait for everyone else to finish barrier.await(); } } } Again, this works well functionally (it does what it should). blocksize | system | user | cycles/sec 256k 1.9% 85% 1.30 64k 2.4% 80% 6.3 16k 4.6% 60% 30.1 4096 8.6% 41% 98.5 1024 12% 23% 202 256 14% 11.6% 299 64 14% 10.0% 518 16 14.8% 8.7% 679 The throughput for small work items is still much worse than that of the ExecutorService, but about 2x that of the CyclicBarrier. Eliminating one CyclicBarrier eliminates half of the gap. 
V4 - Busy wait instead of wait/notify Since this app is the primary one running on the system and the cores idle anyway if they're not busy with a work item, why not try a busy wait for work items in each thread, even if that spins the CPU needlessly? The worker thread code changes as follows: class WorkerThreadRunnable implements Runnable { // as before final protected void setWorkItem( @NotNull final Callable<Integer> callable ) { workItem = callable; } public void run() { while( true ) { // busy-wait for work while( true ) { if( workItem != NO_WORK ) break; } // do the work ... // wait for everyone else to finish barrier.await(); } } } This also works well functionally (it does what it should). blocksize | system | user | cycles/sec 256k 1.9% 85% 1.30 64k 2.2% 81% 6.3 16k 4.2% 62% 33 4096 7.5% 40% 107 1024 10.4% 23% 210 256 12.0% 12.0% 310 64 11.9% 10.2% 550 16 12.2% 8.6% 741 For small work items, this increases throughput by a further 10% over the CyclicBarrier + wait/notify variant, which is not insignificant. But it is still much lower in throughput than V1 with the ExecutorService. V5 - ? So what is the best synchronization mechanism for such a (presumably not uncommon) problem? I am wary of writing my own sync mechanism to completely replace ExecutorService (assuming that it is too generic and there has to be something that can still be taken out to make it more efficient). It is not my area of expertise and I'm concerned that I'd spend a lot of time debugging it (since I'm not even sure my wait/notify and busy wait variants are correct) for uncertain gain. Any advice would be greatly appreciated.
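One observation on that correctness doubt: as posted, V3 and V4 never reset workItem back to NO_WORK after a cycle, so on the next loop iteration the stale item still passes the wait test and could be re-run before the main thread publishes fresh work. A hedged sketch of a consuming version of run(), inferred from the posted code (the barrier, workItem, and NO_WORK members are assumed unchanged from the class above):

public void run() {
    while( true ) {
        Callable<Integer> item;
        synchronized( this ) {
            // wait for work, then consume it so a stale item cannot be re-run
            while( workItem == NO_WORK ) {
                try { wait(); } catch( InterruptedException e ) { return; }
            }
            item = workItem;
            workItem = NO_WORK;
        }
        // do the work with 'item' ...
        // wait for everyone else to finish
        try { barrier.await(); } catch( Exception e ) { return; }
    }
}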

    Read the article

  • Solving embarrassingly parallel problems using Python multiprocessing

    - by gotgenes
    How does one use multiprocessing to tackle embarrassingly parallel problems? Embarrassingly parallel problems typically consist of three basic parts:

    1. Read input data (from a file, database, TCP connection, etc.).
    2. Run calculations on the input data, where each calculation is independent of any other calculation.
    3. Write results of calculations (to a file, database, TCP connection, etc.).

    We can parallelize the program in two dimensions:

    - Part 2 can run on multiple cores, since each calculation is independent; order of processing doesn't matter.
    - Each part can run independently. Part 1 can place data on an input queue, part 2 can pull data off the input queue and put results onto an output queue, and part 3 can pull results off the output queue and write them out.

    This seems like a most basic pattern in concurrent programming, but I am still lost in trying to solve it, so let's write a canonical example to illustrate how this is done using multiprocessing. Here is the example problem: given a CSV file with rows of integers as input, compute their sums. Separate the problem into three parts, which can all run in parallel:

    1. Process the input file into raw data (lists/iterables of integers)
    2. Calculate the sums of the data, in parallel
    3. Output the sums

    Below is a traditional, single-process-bound Python program which solves these three tasks:

    #!/usr/bin/env python
    # -*- coding: UTF-8 -*-
    # basicsums.py
    """A program that reads integer values from a CSV file and writes out
    their sums to another CSV file.
    """

    import csv
    import optparse
    import sys

    def make_cli_parser():
        """Make the command line interface parser."""
        usage = "\n\n".join(["python %prog INPUT_CSV OUTPUT_CSV",
                __doc__,
                """
    ARGUMENTS:
        INPUT_CSV: an input CSV file with rows of numbers
        OUTPUT_CSV: an output file that will contain the sums\
    """])
        cli_parser = optparse.OptionParser(usage)
        return cli_parser

    def parse_input_csv(csvfile):
        """Parses the input CSV and yields tuples with the index of the
        row as the first element, and the integers of the row as the
        second element. The index is zero-index based.

        :Parameters:
        - `csvfile`: a `csv.reader` instance

        """
        for i, row in enumerate(csvfile):
            row = [int(entry) for entry in row]
            yield i, row

    def sum_rows(rows):
        """Yields a tuple with the index of each input list of integers
        as the first element, and the sum of the list of integers as the
        second element. The index is zero-index based.

        :Parameters:
        - `rows`: an iterable of tuples, with the index of the original
          row as the first element, and a list of integers as the second
          element

        """
        for i, row in rows:
            yield i, sum(row)

    def write_results(csvfile, results):
        """Writes a series of results to an outfile, where the first
        column is the index of the original row of data, and the second
        column is the result of the calculation. The index is zero-index
        based.

        :Parameters:
        - `csvfile`: a `csv.writer` instance to which to write results
        - `results`: an iterable of tuples, with the index (zero-based)
          of the original row as the first element, and the calculated
          result from that row as the second element

        """
        for result_row in results:
            csvfile.writerow(result_row)

    def main(argv):
        cli_parser = make_cli_parser()
        opts, args = cli_parser.parse_args(argv)
        if len(args) != 2:
            cli_parser.error("Please provide an input file and output file.")
        infile = open(args[0])
        in_csvfile = csv.reader(infile)
        outfile = open(args[1], 'w')
        out_csvfile = csv.writer(outfile)
        # gets an iterable of rows that's not yet evaluated
        input_rows = parse_input_csv(in_csvfile)
        # sends the rows iterable to sum_rows() for a results iterable,
        # but still not evaluated
        result_rows = sum_rows(input_rows)
        # finally evaluation takes place as a chain in write_results()
        write_results(out_csvfile, result_rows)
        infile.close()
        outfile.close()

    if __name__ == '__main__':
        main(sys.argv[1:])

    Let's take this program and rewrite it to use multiprocessing to parallelize the three parts outlined above. Below is a skeleton of this new, parallelized program, which needs to be fleshed out to address the parts in the comments:

    #!/usr/bin/env python
    # -*- coding: UTF-8 -*-
    # multiproc_sums.py
    """A program that reads integer values from a CSV file and writes out
    their sums to another CSV file, using multiple processes if desired.
    """

    import csv
    import multiprocessing
    import optparse
    import sys

    NUM_PROCS = multiprocessing.cpu_count()

    def make_cli_parser():
        """Make the command line interface parser."""
        usage = "\n\n".join(["python %prog INPUT_CSV OUTPUT_CSV",
                __doc__,
                """
    ARGUMENTS:
        INPUT_CSV: an input CSV file with rows of numbers
        OUTPUT_CSV: an output file that will contain the sums\
    """])
        cli_parser = optparse.OptionParser(usage)
        cli_parser.add_option('-n', '--numprocs', type='int',
                default=NUM_PROCS,
                help="Number of processes to launch [DEFAULT: %default]")
        return cli_parser

    def main(argv):
        cli_parser = make_cli_parser()
        opts, args = cli_parser.parse_args(argv)
        if len(args) != 2:
            cli_parser.error("Please provide an input file and output file.")
        infile = open(args[0])
        in_csvfile = csv.reader(infile)
        outfile = open(args[1], 'w')
        out_csvfile = csv.writer(outfile)

        # Parse the input file and add the parsed data to a queue for
        # processing, possibly chunking to decrease communication
        # between processes.

        # Process the parsed data as soon as any (chunks) appear on the
        # queue, using as many processes as allotted by the user
        # (opts.numprocs); place results on a queue for output.
        #
        # Terminate processes when the parser stops putting data in the
        # input queue.

        # Write the results to disk as soon as they appear on the output
        # queue.

        # Ensure all child processes have terminated.

        # Clean up files.
        infile.close()
        outfile.close()

    if __name__ == '__main__':
        main(sys.argv[1:])

    These pieces of code, as well as another piece of code that can generate example CSV files for testing purposes, can be found on GitHub. I would appreciate any insight here as to how you concurrency gurus would approach this problem. Here are some questions I had when thinking about this problem. Bonus points for addressing any/all:

    - Should I have child processes for reading in the data and placing it into the queue, or can the main process do this without blocking until all input is read?
    - Likewise, should I have a child process for writing the results out from the processed queue, or can the main process do this without having to wait for all the results?
    - Should I use a processes pool for the sum operations? If yes, what method do I call on the pool to get it to start processing the results coming into the input queue, without blocking the input and output processes too? apply_async()? map_async()? imap()? imap_unordered()? (See the sketch after this list.)
    - Suppose we didn't need to siphon off the input and output queues as data entered them, but could wait until all input was parsed and all results were calculated (e.g., because we know all the input and output will fit in system memory). Should we change the algorithm in any way (e.g., not run any processes concurrently with I/O)?
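
    To make the pool-method question concrete, here is a minimal sketch of one possible approach (not a canonical answer, and separate from the queue-based skeleton above): the main process parses rows lazily, a multiprocessing.Pool sums them, and imap_unordered() streams results back for writing as they complete. The worker name sum_indexed_row and the chunksize value are illustrative assumptions, not part of the original question.

    #!/usr/bin/env python
    # A hedged sketch, not a definitive solution: one way to overlap
    # parsing, summing, and writing using multiprocessing.Pool.
    # sum_indexed_row and chunksize=64 are illustrative assumptions.
    import csv
    import multiprocessing
    import sys

    def sum_indexed_row(indexed_row):
        """Worker: take (index, [ints]) and return (index, sum)."""
        i, row = indexed_row
        return i, sum(row)

    def main(argv):
        infile = open(argv[0])
        outfile = open(argv[1], 'w')
        reader = csv.reader(infile)
        writer = csv.writer(outfile)
        pool = multiprocessing.Pool()  # defaults to cpu_count() workers
        # Lazy generator: rows are parsed in the main process and
        # streamed to the workers, so parsing and summing overlap.
        indexed_rows = ((i, [int(entry) for entry in row])
                        for i, row in enumerate(reader))
        # imap_unordered() yields each result as soon as a worker
        # finishes it, so writing overlaps with summing; chunksize
        # batches rows to reduce inter-process communication.
        for result_row in pool.imap_unordered(sum_indexed_row,
                                              indexed_rows, chunksize=64):
            writer.writerow(result_row)
        pool.close()
        pool.join()
        infile.close()
        outfile.close()

    if __name__ == '__main__':
        main(sys.argv[1:])

    Because every result carries its original row index, the out-of-order output of imap_unordered() still satisfies the stated output format. Under that assumption, the main process can plausibly do both the reading and the writing itself; dedicated reader and writer processes, as in the skeleton's comments, matter more when parsing or output formatting is itself expensive.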

