Search Results

Search found 1804 results on 73 pages for 'koenig lookup'.


  • Using Durandal to Create Single Page Apps

    - by Stephen.Walther
    A few days ago, I gave a talk on building Single Page Apps on the Microsoft Stack. In that talk, I recommended that people use Knockout, Sammy, and RequireJS to build their presentation layer and use the ASP.NET Web API to expose data from their server. After I gave the talk, several people contacted me and suggested that I investigate a new open-source JavaScript library named Durandal. Durandal stitches together Knockout, Sammy, and RequireJS to make it easier to use these technologies together.

    In this blog entry, I want to provide a brief walkthrough of using Durandal to create a simple Single Page App. I am going to demonstrate how you can create a simple Movies App which contains (virtual) pages for viewing a list of movies, adding new movies, and viewing movie details. The goal of this blog entry is to give you a sense of what it is like to build apps with Durandal.

    Installing Durandal

    First things first. How do you get Durandal? The GitHub project for Durandal is located here: https://github.com/BlueSpire/Durandal The Wiki, located at the GitHub project, contains all of the current documentation for Durandal. Currently, the documentation is a little sparse, but it is enough to get you started.

    Instead of downloading the Durandal source from GitHub, a better option for getting started with Durandal is to install one of the Durandal NuGet packages. I built the Movies App described in this blog entry by first creating a new ASP.NET MVC 4 Web Application with the Basic Template. Next, I executed the following command from the Package Manager Console:

    Install-Package Durandal.StarterKit

    The Durandal Starter Kit package has several dependencies, including:

    · jQuery
    · Knockout
    · Sammy
    · Twitter Bootstrap

    The Durandal Starter Kit package includes a sample Durandal application. You can get to the Starter Kit app by navigating to the Durandal controller. Unfortunately, when I first tried to run the Starter Kit app, I got an error because the Starter Kit is hard-coded to use a particular version of jQuery which is already out of date. You can fix this issue by modifying the App_Start\DurandalBundleConfig.cs file so it is jQuery version agnostic like this:

    bundles.Add(
        new ScriptBundle("~/scripts/vendor")
        .Include("~/Scripts/jquery-{version}.js")
        .Include("~/Scripts/knockout-{version}.js")
        .Include("~/Scripts/sammy-{version}.js")
        // .Include("~/Scripts/jquery-1.9.0.min.js")
        // .Include("~/Scripts/knockout-2.2.1.js")
        // .Include("~/Scripts/sammy-0.7.4.min.js")
        .Include("~/Scripts/bootstrap.min.js")
    );

    The recommendation is that you create a Durandal app in a folder off your project root named App. The App folder in the Starter Kit contains the following subfolders and files:

    · durandal – This folder contains the actual Durandal JavaScript library.
    · viewmodels – This folder contains all of your application’s view models.
    · views – This folder contains all of your application’s views.
    · main.js – This file contains all of the JavaScript startup code for your app, including the client-side routing configuration.
    · main-built.js – This file contains an optimized version of your application. You need to build this file by using the RequireJS optimizer (unfortunately, before you can run the optimizer, you must first install NodeJS).
    For the purpose of this blog entry, I wanted to start from scratch when building the Movies app, so I deleted all of these files and folders except for the durandal folder which contains the durandal library.

    Creating the ASP.NET MVC Controller and View

    A Durandal app is built using a single server-side ASP.NET MVC controller and ASP.NET MVC view. A Durandal app is a Single Page App. When you navigate between pages, you are not navigating to new pages on the server. Instead, you are loading new virtual pages into the one-and-only-one server-side view.

    For the Movies app, I created the following ASP.NET MVC Home controller:

    public class HomeController : Controller
    {
        public ActionResult Index()
        {
            return View();
        }
    }

    There is nothing special about the Home controller – it is as basic as it gets. Next, I created the following server-side ASP.NET view. This is the one-and-only server-side view used by the Movies app:

    @{
        Layout = null;
    }
    <!DOCTYPE html>
    <html>
    <head>
        <title>Index</title>
    </head>
    <body>
        <div id="applicationHost">
            Loading app....
        </div>
        @Scripts.Render("~/scripts/vendor")
        <script type="text/javascript" src="~/App/durandal/amd/require.js" data-main="/App/main"></script>
    </body>
    </html>

    Notice that I set the Layout property for the view to the value null. If you neglect to do this, then the default ASP.NET MVC layout will be applied to the view and you will get the <!DOCTYPE> and opening and closing <html> tags twice.

    Next, notice that the view contains a DIV element with the Id applicationHost. This marks the area where virtual pages are loaded. When you navigate from page to page in a Durandal app, HTML page fragments are retrieved from the server and stuck in the applicationHost DIV element. Inside the applicationHost element, you can place any content which you want to display when a Durandal app is starting up. For example, you can create a fancy splash screen. I opted for simply displaying the text “Loading app…”.

    Next, notice the view above includes a call to the Scripts.Render() helper. This helper renders out all of the JavaScript files required by the Durandal library, such as jQuery and Knockout. Remember to fix the App_Start\DurandalBundleConfig.cs as described above, or Durandal will attempt to load an old version of jQuery, throw a JavaScript exception, and stop working.

    Your application JavaScript code is not included in the scripts rendered by the Scripts.Render helper. Your application code is loaded dynamically by RequireJS with the help of the following SCRIPT element located at the bottom of the view:

    <script type="text/javascript" src="~/App/durandal/amd/require.js" data-main="/App/main"></script>

    The data-main attribute on the SCRIPT element causes RequireJS to load your /app/main.js JavaScript file to kick off your Durandal app.

    Creating the Durandal Main.js File

    The Durandal main.js JavaScript file, located in your App folder, contains all of the code required to configure the behavior of Durandal. Here’s what the main.js file looks like in the case of the Movies app:

    require.config({
        paths: { 'text': 'durandal/amd/text' }
    });

    define(function (require) {
        var app = require('durandal/app'),
            viewLocator = require('durandal/viewLocator'),
            system = require('durandal/system'),
            router = require('durandal/plugins/router');

        //>>excludeStart("build", true);
        system.debug(true);
        //>>excludeEnd("build");

        app.start().then(function () {
            //Replace 'viewmodels' in the moduleId with 'views' to locate the view.
            //Look for partial views in a 'views' folder in the root.
            viewLocator.useConvention();

            //configure routing
            router.useConvention();
            router.mapNav("movies/show");
            router.mapNav("movies/add");
            router.mapNav("movies/details/:id");

            app.adaptToDevice();

            //Show the app by setting the root view model for our application with a transition.
            app.setRoot('viewmodels/shell', 'entrance');
        });
    });

    There are three important things to notice about the main.js file above. First, notice that it contains a section which enables debugging:

    //>>excludeStart("build", true);
    system.debug(true);
    //>>excludeEnd("build");

    This code enables debugging for your Durandal app, which is very useful when things go wrong. When you call system.debug(true), Durandal writes out debugging information to your browser JavaScript console. For example, you can use the debugging information to diagnose issues with your client-side routes. (The funny looking //> symbols around the system.debug() call are RequireJS optimizer pragmas.)

    The main.js file is also the place where you configure your client-side routes. In the case of the Movies app, the main.js file is used to configure routes for three pages: the movies show, add, and details pages.

    //configure routing
    router.useConvention();
    router.mapNav("movies/show");
    router.mapNav("movies/add");
    router.mapNav("movies/details/:id");

    The route for movie details includes a route parameter named id. Later, we will use the id parameter to look up and display the details for the right movie.

    Finally, the main.js file above contains the following line of code:

    //Show the app by setting the root view model for our application with a transition.
    app.setRoot('viewmodels/shell', 'entrance');

    This line of code causes Durandal to load up a JavaScript file named shell.js and an HTML fragment named shell.html. I’ll discuss the shell in the next section.

    Creating the Durandal Shell

    You can think of the Durandal shell as the layout or master page for a Durandal app. The shell is where you put all of the content which you want to remain constant as a user navigates from virtual page to virtual page. For example, the shell is a great place to put your website logo and navigation links.

    The Durandal shell is composed from two parts: a JavaScript file and an HTML file. Here’s what the HTML file looks like for the Movies app:

    <h1>Movies App</h1>
    <div class="container-fluid page-host">
        <!--ko compose: {
            model: router.activeItem, //wiring the router
            afterCompose: router.afterCompose, //wiring the router
            transition:'entrance', //use the 'entrance' transition when switching views
            cacheViews:true //telling composition to keep views in the dom, and reuse them (only a good idea with singleton view models)
        }--><!--/ko-->
    </div>

    And here is what the JavaScript file looks like:

    define(function (require) {
        var router = require('durandal/plugins/router');
        return {
            router: router,
            activate: function () {
                return router.activate('movies/show');
            }
        };
    });

    The JavaScript file contains the view model for the shell. This view model returns the Durandal router so you can access the list of configured routes from your shell.

    Notice that the JavaScript file includes a function named activate(). This function loads the movies/show page as the first page in the Movies app. If you want a different default Durandal page, then pass the name of a different page to the router.activate() method.

    Creating the Movies Show Page

    Durandal pages are created out of a view model and a view. The view model contains all of the data and view logic required for the view.
    The view contains all of the HTML markup for rendering the view model. Let’s start with the movies show page.

    The movies show page displays a list of movies. The view model for the show page looks like this:

    define(function (require) {
        var moviesRepository = require("repositories/moviesRepository");
        return {
            movies: ko.observable(),
            activate: function() {
                this.movies(moviesRepository.listMovies());
            }
        };
    });

    You create a view model by defining a new RequireJS module (see http://requirejs.org). You create a RequireJS module by placing all of your JavaScript code into an anonymous function passed to the RequireJS define() method.

    A RequireJS module has two parts. You retrieve all of the modules which your module requires at the top of your module. The code above depends on another RequireJS module named repositories/moviesRepository (a possible implementation of this repository module is sketched at the end of this post). Next, you return the implementation of your module. The code above returns a JavaScript object which contains a property named movies and a method named activate.

    The activate() method is a magic method which Durandal calls whenever it activates your view model. Your view model is activated whenever you navigate to a page which uses it. In the code above, the activate() method is used to get the list of movies from the movies repository and assign the list to the view model movies property.

    The HTML for the movies show page looks like this:

    <table>
        <thead>
            <tr>
                <th>Title</th><th>Director</th>
            </tr>
        </thead>
        <tbody data-bind="foreach:movies">
            <tr>
                <td data-bind="text:title"></td>
                <td data-bind="text:director"></td>
                <td><a data-bind="attr:{href:'#/movies/details/'+id}">Details</a></td>
            </tr>
        </tbody>
    </table>
    <a href="#/movies/add">Add Movie</a>

    Notice that this is an HTML fragment. This fragment will be stuffed into the page-host DIV element in the shell.html file, which is stuffed, in turn, into the applicationHost DIV element in the server-side MVC view.

    The HTML markup above contains data-bind attributes used by Knockout to display the list of movies (to learn more about Knockout, visit http://knockoutjs.com). The list of movies from the view model is displayed in an HTML table.

    Notice that the page includes a link to a page for adding a new movie. The link uses the following URL which starts with a hash: #/movies/add. Because the link starts with a hash, clicking the link does not cause a request back to the server. Instead, you navigate to the movies/add page virtually.

    Creating the Movies Add Page

    The movies add page also consists of a view model and view. The add page enables you to add a new movie to the movie database. Here’s the view model for the add page:

    define(function (require) {
        var app = require('durandal/app');
        var router = require('durandal/plugins/router');
        var moviesRepository = require("repositories/moviesRepository");

        return {
            movieToAdd: {
                title: ko.observable(),
                director: ko.observable()
            },
            activate: function () {
                this.movieToAdd.title("");
                this.movieToAdd.director("");
                this._movieAdded = false;
            },
            canDeactivate: function () {
                if (this._movieAdded == false) {
                    return app.showMessage('Are you sure you want to leave this page?', 'Navigate', ['Yes', 'No']);
                } else {
                    return true;
                }
            },
            addMovie: function () {
                // Add movie to db
                moviesRepository.addMovie(ko.toJS(this.movieToAdd));
                // flag new movie
                this._movieAdded = true;
                // return to list of movies
                router.navigateTo("#/movies/show");
            }
        };
    });

    The view model contains one property named movieToAdd which is bound to the add movie form. The view model also has the following three methods:

    1. activate() – This method is called by Durandal when you navigate to the add movie page. The activate() method resets the add movie form by clearing out the movie title and director properties.

    2. canDeactivate() – This method is called by Durandal when you attempt to navigate away from the add movie page. If you return false then navigation is cancelled.

    3. addMovie() – This method executes when the add movie form is submitted. This code adds the new movie to the movie repository.

    I really like the Durandal canDeactivate() method. In the code above, I use the canDeactivate() method to show a warning to a user if they navigate away from the add movie page – either by clicking the Cancel button or by hitting the browser back button – before submitting the add movie form.

    The view for the add movie page looks like this:

    <form data-bind="submit:addMovie">
        <fieldset>
            <legend>Add Movie</legend>
            <div>
                <label>
                    Title:
                    <input data-bind="value:movieToAdd.title" required />
                </label>
            </div>
            <div>
                <label>
                    Director:
                    <input data-bind="value:movieToAdd.director" required />
                </label>
            </div>
            <div>
                <input type="submit" value="Add" />
                <a href="#/movies/show">Cancel</a>
            </div>
        </fieldset>
    </form>

    I am using Knockout to bind the movieToAdd property from the view model to the INPUT elements of the HTML form. Notice that the FORM element includes a data-bind attribute which invokes the addMovie() method from the view model when the HTML form is submitted.

    Creating the Movies Details Page

    You navigate to the movies details page by clicking the Details link which appears next to each movie in the movies show page. The Details links pass the movie ids to the details page:

    #/movies/details/0
    #/movies/details/1
    #/movies/details/2

    Here’s what the view model for the movies details page looks like:

    define(function (require) {
        var router = require('durandal/plugins/router');
        var moviesRepository = require("repositories/moviesRepository");

        return {
            movieToShow: {
                title: ko.observable(),
                director: ko.observable()
            },
            activate: function (context) {
                // Grab movie from repository
                var movie = moviesRepository.getMovie(context.id);
                // Add to view model
                this.movieToShow.title(movie.title);
                this.movieToShow.director(movie.director);
            }
        };
    });

    Notice that the view model activate() method accepts a parameter named context. You can take advantage of the context parameter to retrieve route parameters such as the movie Id. In the code above, the context.id property is used to retrieve the correct movie from the movie repository, and the movie is assigned to a property named movieToShow exposed by the view model.

    The movie details view displays the movieToShow property by taking advantage of Knockout bindings:

    <div>
        <h2 data-bind="text:movieToShow.title"></h2>
        directed by <span data-bind="text:movieToShow.director"></span>
    </div>

    Summary

    The goal of this blog entry was to walk through building a simple Single Page App using Durandal and to get a feel for what it is like to use this library. I really like how Durandal stitches together Knockout, Sammy, and RequireJS and establishes patterns for using these libraries to build Single Page Apps. Having a standard pattern which developers on a team can use to build new pages is super valuable. Once you get the hang of it, using Durandal to create new virtual pages is dead simple. Just define a new route, view model, and view and you are done.
    I also appreciate the fact that Durandal did not attempt to re-invent the wheel and that it leverages existing JavaScript libraries such as Knockout, RequireJS, and Sammy. These are powerful libraries, and I have already invested a considerable amount of time in learning how to use them. Durandal makes it easier to use these libraries together without losing any of their power.

    Durandal has some additional interesting features which I have not had a chance to play with yet. For example, you can use the RequireJS optimizer to combine and minify all of a Durandal app’s code. Also, Durandal supports a way to create custom widgets (client-side controls) by composing widgets from a controller and view.

    You can download the code for the Movies app by clicking the following link (this is a Visual Studio 2012 project): Durandal Movie App
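    One piece this walkthrough never shows is the repositories/moviesRepository module that all three view models depend on. As a rough sketch only (the listMovies, getMovie, and addMovie names come from the code above; the in-memory data and function bodies are my assumptions), it could be as simple as:

    // App/repositories/moviesRepository.js - hypothetical in-memory repository.
    // Assumes ko (Knockout) is loaded globally, as in the view models above.
    define(function (require) {
        // seed data; ids line up with the #/movies/details/:id routes
        var movies = [
            { id: 0, title: "Example Movie One", director: "Director One" },
            { id: 1, title: "Example Movie Two", director: "Director Two" }
        ];

        return {
            listMovies: function () {
                return movies;
            },
            getMovie: function (id) {
                // route parameters arrive as strings, so compare loosely
                return ko.utils.arrayFirst(movies, function (movie) {
                    return movie.id == id;
                });
            },
            addMovie: function (movie) {
                movie.id = movies.length;
                movies.push(movie);
            }
        };
    });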


  • C#/.NET Little Wonders: The ConcurrentDictionary

    - by James Michael Hare
    Once again we consider some of the lesser known classes and keywords of C#. In this series of posts, we will discuss how the concurrent collections have been developed to help alleviate these multi-threading concerns. Last week’s post began with a general introduction and discussed the ConcurrentStack<T> and ConcurrentQueue<T>. Today's post discusses the ConcurrentDictionary<TKey,TValue> (originally I had intended to discuss ConcurrentBag this week as well, but ConcurrentDictionary had enough information to create a very full post on its own!). Finally, next week we shall close with a discussion of the ConcurrentBag<T> and BlockingCollection<T>. For more of the "Little Wonders" posts, see the index here.

    Recap

    As you'll recall from the previous post, the original collections were object-based containers that accomplished synchronization through a Synchronized member. While these were convenient because you didn't have to worry about writing your own synchronization logic, they were a bit too finely grained, and if you needed to perform multiple operations under one lock, the automatic synchronization didn't buy much.

    With the advent of .NET 2.0, the original collections were succeeded by the generic collections, which are fully type-safe but eschew automatic synchronization. This cuts both ways, in that you have a lot more control as a developer over when and how fine-grained you want to synchronize, but on the other hand if you just want simple synchronization it creates more work.

    With .NET 4.0, we get the best of both worlds in generic collections. A new breed of collections was born, called the concurrent collections, in the System.Collections.Concurrent namespace. These amazing collections are fine-tuned to have the best overall performance for situations requiring concurrent access. They are not meant to replace the generic collections, but to simply be an alternative to creating your own locking mechanisms.

    Among those concurrent collections were the ConcurrentStack<T> and ConcurrentQueue<T>, which provide classic LIFO and FIFO collections with a concurrent twist. As we saw, some of the traditional methods that required calls to be made in a certain order (like checking for not IsEmpty before calling Pop()) were replaced in favor of an umbrella operation that combined both under one lock (like TryPop()).

    Now, let's take a look at the next in our series of concurrent collections! For some excellent information on the performance of the concurrent collections and how they perform compared to a traditional brute-force locking strategy, see this wonderful whitepaper by the Microsoft Parallel Computing Platform team here.

    ConcurrentDictionary – the fully thread-safe dictionary

    The ConcurrentDictionary<TKey,TValue> is the thread-safe counterpart to the generic Dictionary<TKey,TValue> collection. Obviously, both are designed for quick – O(1) – lookups of data based on a key. If you think of algorithms where you need lightning fast lookups of data and don’t care whether the data is maintained in any particular ordering or not, the unsorted dictionaries are generally the best way to go.

    Note: as a side note, there are sorted implementations of IDictionary, namely SortedDictionary and SortedList, which are stored as an ordered tree and an ordered list respectively. While these are not as fast as the non-sorted dictionaries – they are O(log2 n) – they are a great combination of both speed and ordering, and still greatly outperform a linear search.
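    As a quick aside, the rest of this post exercises the mutating Try…() methods but never shows a plain read. A minimal sketch of the read path (my own example, assuming the usual TryGetValue pattern) looks like this:

    using System;
    using System.Collections.Concurrent;

    public static class ReadExample
    {
        public static void Main()
        {
            var lookup = new ConcurrentDictionary<string, int>();
            lookup["answer"] = 42;

            // TryGetValue works just as it does on Dictionary<TKey,TValue>,
            // but here it is also safe to call concurrently with writers
            // (reads take no lock).
            int value;
            if (lookup.TryGetValue("answer", out value))
                Console.WriteLine("answer = " + value);
            else
                Console.WriteLine("key not present");
        }
    }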
    Now, once again keep in mind that if all you need to do is load a collection once and then allow multi-threaded reading, you do not need any locking. Examples of this tend to be situations where you load a lookup or translation table once at program start, then keep it in memory for read-only reference. In such cases locking is completely non-productive.

    However, most of the time when we need a concurrent dictionary we are interleaving both reads and updates. This is where the ConcurrentDictionary really shines! It achieves its thread-safety with no common lock, to improve efficiency. It actually uses a series of locks to provide concurrent updates, and has lockless reads! This means that the ConcurrentDictionary gets even more efficient the higher the ratio of reads-to-writes you have.

    ConcurrentDictionary and Dictionary differences

    For the most part, the ConcurrentDictionary<TKey,TValue> behaves like its Dictionary<TKey,TValue> counterpart with a few differences. Some notable examples are:

    · Add() does not exist in the concurrent dictionary. This means you must use TryAdd(), AddOrUpdate(), or GetOrAdd(). It also means that you can’t use a collection initializer with the concurrent dictionary (a seeding alternative is sketched after this list).

    · TryAdd() replaced Add() to attempt atomic, safe adds. Because Add() only succeeds if the item doesn’t already exist, we need an atomic operation to check if the item exists and, if not, add it while still under an atomic lock.

    · TryUpdate() was added to attempt atomic, safe updates. If we want to update an item, we must make sure it exists first and that the original value is what we expected it to be. If all these are true, we can update the item under one atomic step.

    · TryRemove() was added to attempt atomic, safe removes. To safely attempt to remove a value we need to see if the key exists first; this checks for existence and removes under an atomic lock.

    · AddOrUpdate() was added to attempt a thread-safe “upsert”. There are many times where you want to insert into a dictionary if the key doesn’t exist, or update the value if it does. This allows you to make a thread-safe add-or-update.

    · GetOrAdd() was added to attempt a thread-safe query/insert. Sometimes, you want to query for whether an item exists in the cache, and if it doesn’t, insert a starting value for it. This allows you to get the value if it exists and insert if not.

    · The Count, Keys, and Values properties take a snapshot of the dictionary. Accessing these properties may interfere with add and update performance and should be used with caution.

    · ToArray() returns a static snapshot of the dictionary. That is, the dictionary is locked, and then copied to an array as an O(n) operation.

    · GetEnumerator() is thread-safe and efficient, but allows dirty reads. Because reads require no locking, you can safely iterate over the contents of the dictionary. The only downside is that, depending on timing, you may get dirty reads.
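    To expand on that first bullet: since Add() and collection initializers are out, one way to seed a ConcurrentDictionary up front (a sketch of my own; the post itself just assigns through the indexer) is the constructor overload that accepts a sequence of key/value pairs:

    using System;
    using System.Collections.Concurrent;
    using System.Collections.Generic;

    public static class SeedExample
    {
        public static void Main()
        {
            // the constructor accepting IEnumerable<KeyValuePair<TKey,TValue>>
            // stands in for the collection initializer we can't use
            var seed = new[]
            {
                new KeyValuePair<string, int>("A", 1),
                new KeyValuePair<string, int>("B", 2),
                new KeyValuePair<string, int>("C", 3)
            };
            var dictionary = new ConcurrentDictionary<string, int>(seed);

            Console.WriteLine(dictionary.Count); // 3 (remember: Count takes a snapshot)
        }
    }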
    Dirty reads during iteration

    The last point on GetEnumerator() bears some explanation. Picture a scenario in which you call GetEnumerator() (or iterate using a foreach, etc.) and then, during that iteration, the dictionary gets updated. This may not sound like a big deal, but it can lead to inconsistent results if used incorrectly. The problem is that items you already iterated over that are updated a split second after don’t show the update, but items that you iterate over that were updated a split second before do show the update. Thus you may get a combination of items that are “stale” because you iterated before the update, and “fresh” because they were updated after GetEnumerator() but before the iteration reached them.

    Let’s illustrate with an example. Let’s say you load up a concurrent dictionary like this:

    // load up a dictionary.
    var dictionary = new ConcurrentDictionary<string, int>();

    dictionary["A"] = 1;
    dictionary["B"] = 2;
    dictionary["C"] = 3;
    dictionary["D"] = 4;
    dictionary["E"] = 5;
    dictionary["F"] = 6;

    Then you have one task (using the wonderful TPL!) to iterate using dirty reads:

    // attempt iteration in a separate thread
    var iterationTask = new Task(() =>
    {
        // iterates using a dirty read
        foreach (var pair in dictionary)
        {
            Console.WriteLine(pair.Key + ":" + pair.Value);
        }
    });

    And one task to attempt updates in a separate thread (probably):

    // attempt updates in a separate thread
    var updateTask = new Task(() =>
    {
        // iterates, and updates the value by one
        foreach (var pair in dictionary)
        {
            dictionary[pair.Key] = pair.Value + 1;
        }
    });

    Now that we’ve done this, we can fire up both tasks and wait for them to complete:

    // start both tasks
    updateTask.Start();
    iterationTask.Start();

    // wait for both to complete.
    Task.WaitAll(updateTask, iterationTask);

    Now, if you didn’t know about the dirty reads, you may have expected to see the iteration before the updates (such as A:1, B:2, C:3, D:4, E:5, F:6). However, because the reads are dirty, we will quite possibly get a combination of some updated, some original. My own run netted this result:

    F:6
    E:6
    D:5
    C:4
    B:3
    A:2

    Note that, of course, iteration is not in order because ConcurrentDictionary, like Dictionary, is unordered. Also note that both E and F show the value 6. This is because the output task reached F before the update, but the updates for the rest of the items occurred before their output (probably because console output is very slow, comparatively).

    If we want to always guarantee that we will get a consistent snapshot to iterate over (that is, at the point we ask for it we see precisely what is in the dictionary and no subsequent updates during iteration), we should iterate over a call to ToArray() instead:

    // attempt iteration in a separate thread
    var iterationTask = new Task(() =>
    {
        // iterates over a snapshot rather than performing dirty reads
        foreach (var pair in dictionary.ToArray())
        {
            Console.WriteLine(pair.Key + ":" + pair.Value);
        }
    });

    The atomic Try…() methods

    As you can imagine, TryAdd() and TryRemove() have few surprises. Both first check the existence of the item to determine if it can be added or removed, based on whether or not the key currently exists in the dictionary:

    // try add attempts an add and returns false if it already exists
    if (dictionary.TryAdd("G", 7))
        Console.WriteLine("G did not exist, now inserted with 7");
    else
        Console.WriteLine("G already existed, insert failed.");

    TryRemove() also has the virtue of returning the value portion of the removed entry matching the given key:

    // attempt to remove the value, if it exists it is removed and the original is returned
    int removedValue;
    if (dictionary.TryRemove("C", out removedValue))
        Console.WriteLine("Removed C and its value was " + removedValue);
    else
        Console.WriteLine("C did not exist, remove failed.");

    Now TryUpdate() is an interesting creature.
    You might think from its name that TryUpdate() first checks for an item’s existence and then updates if the item exists, otherwise returning false. Well, not quite...

    It turns out when you call TryUpdate() on a concurrent dictionary, you pass it not only the new value you want it to have, but also the value you expected it to have before the update. If the item exists in the dictionary, and it has the value you expected, it will update it to the new value atomically and return true. If the item is not in the dictionary or does not have the value you expected, it is not modified and false is returned.

    // attempt to update the value, if it exists and if it has the expected original value
    if (dictionary.TryUpdate("G", 42, 7))
        Console.WriteLine("G existed and was 7, now it's 42.");
    else
        Console.WriteLine("G either didn't exist, or wasn't 7.");

    The composite Add methods

    The ConcurrentDictionary also has composite add methods that can be used to perform updates and gets, with an add if the item does not exist at the time of the update or get.

    The first of these, AddOrUpdate(), allows you to add a new item to the dictionary if it doesn’t exist, or update the existing item if it does. For example, let’s say you are creating a dictionary of counts of stock ticker symbols you’ve subscribed to from a market data feed:

    public sealed class SubscriptionManager
    {
        private readonly ConcurrentDictionary<string, int> _subscriptions = new ConcurrentDictionary<string, int>();

        // adds a new subscription, or increments the count of the existing one.
        public void AddSubscription(string tickerKey)
        {
            // add a new subscription with count of 1, or update existing count by 1 if exists
            var resultCount = _subscriptions.AddOrUpdate(tickerKey, 1, (symbol, count) => count + 1);

            // now check the result to see if we just incremented the count, or inserted first count
            if (resultCount == 1)
            {
                // subscribe to symbol...
            }
        }
    }

    Notice the update value factory Func delegate. If the key does not exist in the dictionary, the add value is used (in this case 1, representing the first subscription for this symbol), but if the key already exists, it passes the key and current value to the update delegate, which computes the new value to be stored in the dictionary. The return result of this operation is the value used (in our case: 1 if added, existing value + 1 if updated).

    Likewise, GetOrAdd() allows you to attempt to retrieve a value from the dictionary, and if the value does not currently exist in the dictionary it will insert a value. This can be handy in cases where perhaps you wish to cache data, and thus you would query the cache to see if the item exists, and if it doesn’t you would put the item into the cache for the first time:

    public sealed class PriceCache
    {
        private readonly ConcurrentDictionary<string, double> _cache = new ConcurrentDictionary<string, double>();

        // gets the cached price, querying and caching it on the first request.
        public double QueryPrice(string tickerKey)
        {
            // check for the price in the cache, if it doesn't exist it will call the delegate to create value.
            return _cache.GetOrAdd(tickerKey, symbol => GetCurrentPrice(symbol));
        }

        private double GetCurrentPrice(string tickerKey)
        {
            // do code to calculate actual true price.
        }
    }

    There are other variations of these two methods which vary whether a value is provided or a factory delegate, but otherwise they work much the same.

    Oddities with the composite Add methods

    The AddOrUpdate() and GetOrAdd() methods are totally thread-safe, on this you may rely, but they are not atomic. It is important to note that the methods that use delegates execute those delegates outside of the lock. This was done intentionally so that a user delegate (of which the ConcurrentDictionary has no control, of course) does not take too long and lock out other threads.

    This is not necessarily an issue, per se, but it is something you must consider in your design. The main thing to consider is that your delegate may get called to generate an item, but that item may not be the one returned! Consider this scenario: thread A calls GetOrAdd and sees that the key does not currently exist, so it calls the delegate. Now thread B also calls GetOrAdd and also sees that the key does not currently exist, and for whatever reason in this race condition its delegate completes first and it adds its new value to the dictionary. Now A is done and goes to get the lock, and sees that the item now exists. In this case, even though it called the delegate to create the item, it will pitch it, because an item arrived between the time it attempted to create one and it attempted to add it.

    Let’s illustrate with a totally contrived example program which has a dictionary of char to int. In this dictionary we want to store a char and its ordinal (that is, A = 1, B = 2, etc.). So for our value generator, we will simply increment the previous value in a thread-safe way (perhaps using Interlocked):

    public static class Program
    {
        private static int _nextNumber = 0;

        // the holder of the char to ordinal
        private static ConcurrentDictionary<char, int> _dictionary
            = new ConcurrentDictionary<char, int>();

        // get the next id value
        public static int NextId
        {
            get { return Interlocked.Increment(ref _nextNumber); }
        }

    Then, we add a method that will perform our insert:

        public static void Inserter()
        {
            for (int i = 0; i < 26; i++)
            {
                _dictionary.GetOrAdd((char)('A' + i), key => NextId);
            }
        }

    Finally, we run our test by starting two tasks to do this work and get the results:

        public static void Main()
        {
            // two tasks attempting to get/insert
            var tasks = new List<Task>
            {
                new Task(Inserter),
                new Task(Inserter)
            };

            tasks.ForEach(t => t.Start());
            Task.WaitAll(tasks.ToArray());

            foreach (var pair in _dictionary.OrderBy(p => p.Key))
            {
                Console.WriteLine(pair.Key + ":" + pair.Value);
            }
        }
    }

    If you run this with only one task, you get the expected A:1, B:2, ..., Z:26. But running this in parallel you will get something a bit more complex. My run netted these results:

    A:1
    B:3
    C:4
    D:5
    E:6
    F:7
    G:8
    H:9
    I:10
    J:11
    K:12
    L:13
    M:14
    N:15
    O:16
    P:17
    Q:18
    R:19
    S:20
    T:21
    U:22
    V:23
    W:24
    X:25
    Y:26
    Z:27

    Notice that B is 3? This is most likely because both threads attempted to call GetOrAdd() at roughly the same time and both saw that B did not exist, thus they both called the generator and one thread got back 2 and the other got back 3. However, only one of those threads can get the lock at a time for the actual insert, and thus the one that generated the 3 won, the 3 was inserted, and the 2 got discarded. This is why, on these methods, your factory delegates should be careful not to have any logic that would be unsafe if the value they generate will be pitched in favor of another item generated at roughly the same time. As such, it is probably a good idea to keep those generators as stateless as possible.
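    If a generator is expensive or has side effects, one common mitigation (not covered above; a standard trick built on .NET 4's Lazy<T>) is to store Lazy<T> values, so that losing generators are never actually executed:

    using System;
    using System.Collections.Concurrent;
    using System.Threading;

    public static class LazyOrdinals
    {
        private static int _nextNumber = 0;

        // store Lazy<int> so the increment runs at most once per key
        private static readonly ConcurrentDictionary<char, Lazy<int>> _dictionary =
            new ConcurrentDictionary<char, Lazy<int>>();

        public static int GetOrAddOrdinal(char key)
        {
            // racing threads may create several Lazy wrappers, but GetOrAdd
            // returns the single winning instance to everyone, and only that
            // instance's factory ever runs when .Value is first read.
            return _dictionary.GetOrAdd(
                key,
                k => new Lazy<int>(() => Interlocked.Increment(ref _nextNumber))
            ).Value;
        }
    }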
    Summary

    The ConcurrentDictionary is a very efficient and thread-safe version of the Dictionary generic collection. It has all the benefits of type-safety that its generic collection counterpart does, and in addition is extremely efficient, especially when there are more reads than writes concurrently.


  • How to create Custom ListForm WebPart

    - by DipeshBhanani
    Most of us who work extensively on SharePoint (including me) don’t like to use the out-of-box list forms (DispForm.aspx, EditForm.aspx, NewForm.aspx) as an interface. These OOB list forms tie developers' hands when it comes to customization. It gives developers a headache to add just one postback event for a dropdown field and to populate other fields in NewForm.aspx or EditForm.aspx. On top of that, clients ask for such customizations all the time. So here I am going to give you a flight through the SharePoint customization world. In this blog, I will explain how to create a custom ListForm WebPart. In my next blogs, I will explain easy deployment of list forms through features and, last, guidance on using SharePoint web controls.

    1. First, create a class library project in Visual Studio and inherit your class from the WebPart class.

    public class CustomListForm : WebPart

    2. Declare the variables and properties which we are going to use throughout the class. You will get to know these once you see them in use.

    #region "Variable Declaration"

    Table spTableCntl;
    FormToolBar formToolBar;
    Literal ltAlertMessage;
    Guid SiteId;
    Guid ListId;
    int ItemId;
    string ListName;

    #endregion

    #region "Properties"

    SPControlMode _ControlMode = SPControlMode.New;
    [Personalizable(PersonalizationScope.Shared),
     WebBrowsable(true),
     WebDisplayName("Control Mode"),
     WebDescription("Set Control Mode"),
     DefaultValue(""),
     Category("Miscellaneous")]
    public SPControlMode ControlMode
    {
        get { return _ControlMode; }
        set { _ControlMode = value; }
    }

    #endregion

    The property “ControlMode” is used to identify the mode of the list form. The property is of type SPControlMode, which is an enum with the values Display, Edit, New, and Invalid. When we add this WebPart to DispForm.aspx, EditForm.aspx, and NewForm.aspx, we will set the WebPart property “ControlMode” to Display, Edit, and New respectively.

    3. Now, we need to override the CreateChildControls method and write code to manually add the SharePoint web controls for each list field, as well as the toolbar controls.

    protected override void CreateChildControls()
    {
        base.CreateChildControls();

        try
        {
            SiteId = SPContext.Current.Site.ID;
            ListId = SPContext.Current.ListId;
            ListName = SPContext.Current.List.Title;

            if (_ControlMode == SPControlMode.Display || _ControlMode == SPControlMode.Edit)
                ItemId = SPContext.Current.ItemId;

            SPSecurity.RunWithElevatedPrivileges(delegate()
            {
                using (SPSite site = new SPSite(SiteId))
                {
                    //creating a new SPSite with credentials of System Account
                    using (SPWeb web = site.OpenWeb())
                    {
                        //<Custom Code for creating form controls>
                    }
                }
            });
        }
        catch (Exception ex)
        {
            ShowError(ex, "CreateChildControls");
        }
    }

    Here we are assuming that we are developing this WebPart to plug into list forms. Hence we will get the List Id and List Name from the current context. We can have an Item Id only in the case of Display and Edit mode. We are putting our code into “RunWithElevatedPrivileges” to elevate privileges to the System Account. A possible sketch of the ShowError() helper used in the catch block follows below.
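    ShowError() is not defined anywhere in this post, so the following is only a minimal sketch of what it might do (the method name and signature come from the call above; everything else is an assumption):

    // Hypothetical helper - renders the failure inline instead of letting the
    // web part blow up the whole page.
    private void ShowError(Exception ex, string methodName)
    {
        this.Controls.Add(new LiteralControl(
            string.Format("<div class='ms-formvalidation'>Error in {0}: {1}</div>",
                          methodName, SPEncode.HtmlEncode(ex.Message))));
    }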
    Now, let’s get down into the main code and expand “//<Custom Code for creating form controls>”. Before initiating any SharePoint control, we need to set the context of the SharePoint web controls explicitly so that they are instantiated with the elevated System Account credentials. The following line does the job.

    //To create SharePoint controls with new web object and System Account credentials
    SPControl.SetContextWeb(Context, web);

    First, let’s add the main table as a container for all controls.

    //Table to render webpart
    Table spTableMain = new Table();
    spTableMain.CellPadding = 0;
    spTableMain.CellSpacing = 0;
    spTableMain.Width = new Unit(100, UnitType.Percentage);
    this.Controls.Add(spTableMain);

    Now we need to add the top toolbar with Save and Cancel buttons.

    // Add Row and Cell for Top ToolBar
    TableRow spRowTopToolBar = new TableRow();
    spTableMain.Rows.Add(spRowTopToolBar);
    TableCell spCellTopToolBar = new TableCell();
    spRowTopToolBar.Cells.Add(spCellTopToolBar);
    spCellTopToolBar.Width = new Unit(100, UnitType.Percentage);

    ToolBar toolBarTop = (ToolBar)Page.LoadControl("/_controltemplates/ToolBar.ascx");
    toolBarTop.CssClass = "ms-formtoolbar";
    toolBarTop.ID = "toolBarTbltop";
    toolBarTop.RightButtons.SeparatorHtml = "<td class=ms-separator> </td>";

    if (_ControlMode != SPControlMode.Display)
    {
        SaveButton btnSave = new SaveButton();
        btnSave.ControlMode = _ControlMode;
        btnSave.ListId = ListId;

        if (_ControlMode == SPControlMode.New)
            btnSave.RenderContext = SPContext.GetContext(web);
        else
        {
            btnSave.RenderContext = SPContext.GetContext(this.Context, ItemId, ListId, web);
            btnSave.ItemContext = SPContext.GetContext(this.Context, ItemId, ListId, web);
            btnSave.ItemId = ItemId;
        }
        toolBarTop.RightButtons.Controls.Add(btnSave);
    }

    GoBackButton goBackButtonTop = new GoBackButton();
    toolBarTop.RightButtons.Controls.Add(goBackButtonTop);
    goBackButtonTop.ControlMode = SPControlMode.Display;

    spCellTopToolBar.Controls.Add(toolBarTop);

    Here we have used “SaveButton” and “GoBackButton”, which are internal SharePoint web controls for the save and cancel functionality. I have set some of the properties of the Save button inside an if-else condition because we will not have an Item Id in New mode. The Item Id property identifies which SharePoint list item needs to be saved. Now add the Form Toolbar to the page, which contains the “Attach File”, “Delete Item”, etc. buttons.
    // Add Row and Cell for FormToolBar
    TableRow spRowFormToolBar = new TableRow();
    spTableMain.Rows.Add(spRowFormToolBar);
    TableCell spCellFormToolBar = new TableCell();
    spRowFormToolBar.Cells.Add(spCellFormToolBar);
    spCellFormToolBar.Width = new Unit(100, UnitType.Percentage);

    FormToolBar formToolBar = new FormToolBar();
    formToolBar.ID = "formToolBar";
    formToolBar.ListId = ListId;
    if (_ControlMode == SPControlMode.New)
        formToolBar.RenderContext = SPContext.GetContext(web);
    else
    {
        formToolBar.RenderContext = SPContext.GetContext(this.Context, ItemId, ListId, web);
        formToolBar.ItemContext = SPContext.GetContext(this.Context, ItemId, ListId, web);
        formToolBar.ItemId = ItemId;
    }
    formToolBar.ControlMode = _ControlMode;
    formToolBar.EnableViewState = true;

    spCellFormToolBar.Controls.Add(formToolBar);

    The ControlMode property takes care of which buttons are displayed on the toolbar, e.g. “Attach File” and “Delete Item” in new/edit forms, and “New Item”, “Edit Item”, “Delete Item”, “Manage Permissions”, etc. in display forms. Now add the main section, which contains the form field controls.

    //Add a Blank Row with height of 5px to render space between ToolBar table and Control table
    TableRow spRowLine1 = new TableRow();
    spTableMain.Rows.Add(spRowLine1);
    TableCell spCellLine1 = new TableCell();
    spRowLine1.Cells.Add(spCellLine1);
    spCellLine1.Height = new Unit(5, UnitType.Pixel);
    spCellLine1.Controls.Add(new LiteralControl("<IMG SRC='/_layouts/images/blank.gif' width=1 height=1 alt=''>"));

    //Add Row and Cell for Form Controls Section
    TableRow spRowCntl = new TableRow();
    spTableMain.Rows.Add(spRowCntl);
    TableCell spCellCntl = new TableCell();

    //Create Form Field controls and add them in Table "spTableCntl"
    CreateFieldControls(web);
    //Add the member variable "spTableCntl" containing all form controls to the page
    spRowCntl.Cells.Add(spCellCntl);
    spCellCntl.Width = new Unit(100, UnitType.Percentage);
    spCellCntl.Controls.Add(spTableCntl);

    TableRow spRowLine2 = new TableRow();
    spTableMain.Rows.Add(spRowLine2);
    TableCell spCellLine2 = new TableCell();
    spRowLine2.Cells.Add(spCellLine2);
    spCellLine2.CssClass = "ms-formline";
    spCellLine2.Controls.Add(new LiteralControl("<IMG SRC='/_layouts/images/blank.gif' width=1 height=1 alt=''>"));

    // Add Blank row with height of 5 pixel
    TableRow spRowLine3 = new TableRow();
    spTableMain.Rows.Add(spRowLine3);
    TableCell spCellLine3 = new TableCell();
    spRowLine3.Cells.Add(spCellLine3);
    spCellLine3.Height = new Unit(5, UnitType.Pixel);
    spCellLine3.Controls.Add(new LiteralControl("<IMG SRC='/_layouts/images/blank.gif' width=1 height=1 alt=''>"));

    You can also add a bottom toolbar to get the same look and feel as the OOB forms; I am not adding it here, as the blog would get much lengthier. At last, you need to add the following lines to allow unsafe updates for the Save and Delete buttons.
    // Allow unsafe update on web for save button and delete button
    if (this.Page.IsPostBack && this.Page.Request["__EventTarget"] != null
        && (this.Page.Request["__EventTarget"].Contains("IOSaveItem")
        || this.Page.Request["__EventTarget"].Contains("IODeleteItem")))
    {
        SPContext.Current.Web.AllowUnsafeUpdates = true;
    }

    So that’s all. We have finished writing the custom code for adding the field controls. But the most important part has been skipped so far. In the code above, I called the function “CreateFieldControls(web);” to add the SharePoint field controls to the page. Let’s see the implementation of that function:

    private void CreateFieldControls(SPWeb pWeb)
    {
        SPList listMain = pWeb.Lists[ListId];
        SPFieldCollection fields = listMain.Fields;

        //Main Table to render all fields
        spTableCntl = new Table();
        spTableCntl.BorderWidth = new Unit(0);
        spTableCntl.CellPadding = 0;
        spTableCntl.CellSpacing = 0;
        spTableCntl.Width = new Unit(100, UnitType.Percentage);
        spTableCntl.CssClass = "ms-formtable";

        SPContext controlContext = SPContext.GetContext(this.Context, ItemId, ListId, pWeb);

        foreach (SPField listField in fields)
        {
            string fieldDisplayName = listField.Title;
            string fieldInternalName = listField.InternalName;

            //Skip if the field is system field or hidden
            if (listField.Hidden || listField.ShowInVersionHistory == false)
                continue;

            //Skip if the control mode is display and field is read-only
            if (_ControlMode != SPControlMode.Display && listField.ReadOnlyField == true)
                continue;

            FieldLabel fieldLabel = new FieldLabel();
            fieldLabel.FieldName = listField.InternalName;
            fieldLabel.ListId = ListId;

            BaseFieldControl fieldControl = listField.FieldRenderingControl;
            fieldControl.ListId = ListId;
            //Assign unique id using Field Internal Name
            fieldControl.ID = string.Format("Field_{0}", fieldInternalName);
            fieldControl.EnableViewState = true;

            //Assign control mode
            fieldLabel.ControlMode = _ControlMode;
            fieldControl.ControlMode = _ControlMode;
            switch (_ControlMode)
            {
                case SPControlMode.New:
                    fieldLabel.RenderContext = SPContext.GetContext(pWeb);
                    fieldControl.RenderContext = SPContext.GetContext(pWeb);
                    break;
                case SPControlMode.Edit:
                case SPControlMode.Display:
                    fieldLabel.RenderContext = controlContext;
                    fieldLabel.ItemContext = controlContext;
                    fieldLabel.ItemId = ItemId;

                    fieldControl.RenderContext = controlContext;
                    fieldControl.ItemContext = controlContext;
                    fieldControl.ItemId = ItemId;
                    break;
            }

            //Add row to display a field row
            TableRow spCntlRow = new TableRow();
            spTableCntl.Rows.Add(spCntlRow);

            //Add the cells for containing field label and control
            TableCell spCellLabel = new TableCell();
            spCellLabel.Width = new Unit(30, UnitType.Percentage);
            spCellLabel.CssClass = "ms-formlabel";
            spCntlRow.Cells.Add(spCellLabel);
            TableCell spCellControl = new TableCell();
            spCellControl.Width = new Unit(70, UnitType.Percentage);
            spCellControl.CssClass = "ms-formbody";
            spCntlRow.Cells.Add(spCellControl);

            //Add the control to the table cells
            spCellLabel.Controls.Add(fieldLabel);
            spCellControl.Controls.Add(fieldControl);

            //Add description if there is any in case of New and Edit Mode
            if (_ControlMode != SPControlMode.Display && listField.Description != string.Empty)
            {
                FieldDescription fieldDesc = new FieldDescription();
                fieldDesc.FieldName = fieldInternalName;
                fieldDesc.ListId = ListId;
                spCellControl.Controls.Add(fieldDesc);
            }

            //Disable Name(Title) in Edit Mode
            if (_ControlMode == SPControlMode.Edit && fieldDisplayName == "Name")
            {
                TextBox txtTitlefield = (TextBox)fieldControl.Controls[0].FindControl("TextField");
                txtTitlefield.Enabled = false;
            }
        }
        fields = null;
    }

    First of all, I declared a list object and fetched the list fields into a field collection object called “fields”. Then I added a table as the container for all controls and assigned it the CSS class "ms-formtable" so that it gets the consistent look and feel of SharePoint. Now it’s time to iterate through all the fields and add them if required. We don’t need to add hidden or system fields, and we also don’t want to display read-only fields in new and edit forms. The following lines do this job:

    //Skip if the field is system field or hidden
    if (listField.Hidden || listField.ShowInVersionHistory == false)
        continue;

    //Skip if the control mode is display and field is read-only
    if (_ControlMode != SPControlMode.Display && listField.ReadOnlyField == true)
        continue;

    Let’s move to the next lines of code.

    FieldLabel fieldLabel = new FieldLabel();
    fieldLabel.FieldName = listField.InternalName;
    fieldLabel.ListId = ListId;

    BaseFieldControl fieldControl = listField.FieldRenderingControl;
    fieldControl.ListId = ListId;
    //Assign unique id using Field Internal Name
    fieldControl.ID = string.Format("Field_{0}", fieldInternalName);
    fieldControl.EnableViewState = true;

    //Assign control mode
    fieldLabel.ControlMode = _ControlMode;
    fieldControl.ControlMode = _ControlMode;

    We have used the “FieldLabel” control for displaying the field title. The advantage of using FieldLabel is that SharePoint automatically adds a red star beside the field label to identify it as a mandatory field, if it is one.

    Here is the most important part to understand: the “BaseFieldControl”. It renders the respective web control according to the type of the field. For example, if the field is a single line of text, it renders a textbox; if it is a lookup, it renders a dropdown. Additionally, the “ControlMode” property tells the control which mode (display/edit/new) it needs to be rendered in. In display mode, it will render a label with the field value. In edit mode, it will render the respective control with the item value, and in new mode it will render the respective control with an empty value. Please note that a dropdown is not always what gets rendered for a Lookup or Choice field. You need to understand which controls are rendered for which list fields; the diagnostic sketch below shows the actual mapping for any list.
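    Rather than memorizing the mapping, you can dump it at runtime. This is a quick diagnostic sketch of my own (the site URL and list name are placeholders); FieldRenderingControl tells you exactly which control class each field produces:

    // Diagnostic: print which BaseFieldControl subclass each field renders as.
    // Run from a console app on the server, referencing Microsoft.SharePoint.dll.
    using (SPSite site = new SPSite("http://server/site"))   // placeholder URL
    using (SPWeb web = site.OpenWeb())
    {
        SPList list = web.Lists["MyList"];                   // placeholder list name
        foreach (SPField field in list.Fields)
        {
            BaseFieldControl control = field.FieldRenderingControl;
            Console.WriteLine("{0,-25} -> {1}",
                field.TypeAsString,
                control == null ? "(none)" : control.GetType().Name);
        }
    }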
    I am also planning to write a separate blog on that, which I hope to publish very soon. Moreover, we need to assign list-field-specific properties like List Id, Field Name, etc. to identify which SharePoint list field is attached to the control.

    switch (_ControlMode)
    {
        case SPControlMode.New:
            fieldLabel.RenderContext = SPContext.GetContext(pWeb);
            fieldControl.RenderContext = SPContext.GetContext(pWeb);
            break;
        case SPControlMode.Edit:
        case SPControlMode.Display:
            fieldLabel.RenderContext = controlContext;
            fieldLabel.ItemContext = controlContext;
            fieldLabel.ItemId = ItemId;

            fieldControl.RenderContext = controlContext;
            fieldControl.ItemContext = controlContext;
            fieldControl.ItemId = ItemId;
            break;
    }

    Here, I have separate code for New mode and Edit/Display mode because we will not have an Item Id to assign in New mode. We also need to set CSS classes on the cells containing the label and the control so that those controls get rendered with the SharePoint theme.

    spCellLabel.CssClass = "ms-formlabel";
    spCellControl.CssClass = "ms-formbody";

    The “FieldDescription” control is used to add the field description, if there is any. Now it’s time to add some more customization:

    //Disable Name(Title) in Edit Mode
    if (_ControlMode == SPControlMode.Edit && fieldDisplayName == "Name")
    {
        TextBox txtTitlefield = (TextBox)fieldControl.Controls[0].FindControl("TextField");
        txtTitlefield.Enabled = false;
    }

    The above code will disable the title field in edit mode. You can add more code here to achieve more customization according to your requirements. Some examples are as follows:

    //Adding post back event on UserField to auto populate some other dependent field
    //in new mode and disable it in edit mode
    if (_ControlMode != SPControlMode.Display && fieldDisplayName == "Manager")
    {
        if (fieldControl.Controls[0].FindControl("UserField") != null)
        {
            PeopleEditor pplEditor = (PeopleEditor)fieldControl.Controls[0].FindControl("UserField");
            if (_ControlMode == SPControlMode.New)
                pplEditor.AutoPostBack = true;
            else
                pplEditor.Enabled = false;
        }
    }

    //Add JavaScript Event on Dropdown field. Don't forget to add the JavaScript function on the page.
    if (_ControlMode == SPControlMode.Edit && fieldDisplayName == "Designation")
    {
        DropDownList ddlCategory = (DropDownList)fieldControl.Controls[0];
        ddlCategory.Attributes.Add("onchange", string.Format("javascript:DropdownChangeEvent('{0}');return false;", ddlCategory.ClientID));
    }

    Following are the screenshots of my custom ListForm WebPart. Let’s play a game: check out your OOB list forms in SharePoint, compare them with these screens, and find the differences.

    DispForm.aspx:

    EditForm.aspx:

    NewForm.aspx:

    Enjoy the SharePoint Soup!!!

    Read the article

  • WebLogic job scheduling

    - by XpiritO
    Hello, overflowers :) I'm trying to implement a WebLogic job scheduling example, to test my cluster's fail-over capabilities for scheduled tasks (to ensure that these tasks are executed in a fail-over scenario). With this in mind, I've been following this example and trying to configure everything accordingly. Here are the steps I've taken so far:

Configured a cluster with 1 admin server (AdminServer) and 2 managed instances (Noddy and Snoopy);
Set up database tables (using Oracle XE): ACTIVE and WEBLOGIC_TIMERS;
Set up a data source to access the DB and associated it with the scheduling tasks under "Settings for cluster" > "Scheduling";
Implemented a job (TimerListener) and a servlet to initialize the job scheduling, as follows:

package timedexecution;

import java.io.IOException;
import java.io.PrintWriter;
import java.io.Serializable;
import java.text.SimpleDateFormat;
import java.util.Date;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import commonj.timers.Timer;
import commonj.timers.TimerListener;
import commonj.timers.TimerManager;

public class TimerServlet extends HttpServlet {

    private static final long serialVersionUID = 1L;

    protected static void logMessage(String message, PrintWriter out) {
        out.write("<p>" + message + "</p>");
        System.out.println(message);
    }

    @Override
    public void service(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        PrintWriter out = response.getWriter();
        out.println("<html>");
        out.println("<head><title>TimerServlet</title></head>");
        out.println("<body>");
        try {
            logMessage("service() entering try block to initialize the timer from JNDI", out);
            InitialContext ic = new InitialContext();
            TimerManager jobScheduler = (TimerManager) ic.lookup("weblogic.JobScheduler");
            logMessage("jobScheduler reference " + jobScheduler, out);
            //execute this job every 30 seconds
            jobScheduler.schedule(new ExampleTimerListener(), 0, 30 * 1000);
            logMessage("Timer scheduled!", out);
            logMessage("service() started the timer", out);
            logMessage("Started the timer - status:", out);
        } catch (NamingException ne) {
            String msg = ne.getMessage();
            logMessage("Timer schedule failed!", out);
            logMessage(msg, out);
        } catch (Throwable t) {
            logMessage("service() error initializing timer manager with JNDI name weblogic.JobScheduler " + t, out);
        }
        out.println("</body></html>");
        out.close();
    }

    private static class ExampleTimerListener implements Serializable, TimerListener {

        private static final long serialVersionUID = 8313912206357147939L;

        public void timerExpired(Timer timer) {
            SimpleDateFormat sdf = new SimpleDateFormat();
            System.out.println("timerExpired() called at " + sdf.format(new Date()));
        }
    }
}

Then I executed the servlet to start the scheduling on the first managed instance (the Noddy server), which returned as expected:

(Servlet execution output)
service() entering try block to initialize the timer from JNDI
jobScheduler reference weblogic.scheduler.TimerServiceImpl@43b4c7
Timer scheduled!
service() started the timer
Started the timer - status:

This resulted in the creation of 2 rows in my DB tables.

WEBLOGIC_TIMERS table state after servlet execution:
"EDIT"; "TIMER_ID"; "LISTENER"; "START_TIME"; "INTERVAL"; "TIMER_MANAGER_NAME"; "DOMAIN_NAME"; "CLUSTER_NAME";
""; "Noddy_1268653040156"; "[datatype]"; "1268653040156"; "30000"; "weblogic.JobScheduler"; "myCluster"; "Cluster"

ACTIVE table state after servlet execution:
"EDIT"; "SERVER"; "INSTANCE"; "DOMAINNAME"; "CLUSTERNAME"; "TIMEOUT";
""; "service.SINGLETON_MASTER"; "6382071947583985002/Noddy"; "QRENcluster"; "Cluster"; "10.03.15"

However, the job is not executed as scheduled. It should print a message to the server's log output (the Noddy.out file) with a timestamp, saying that the timer has expired. It doesn't. My log files state the following:

Admin server log (myCluster.log file):
####<15/Mar/2010 10H45m GMT> <Warning> <Cluster> <test-ad> <Noddy> <[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1268649925727> <BEA-000192> <No currently living server was found that could host TimerMaster. The server will retry in a few seconds.>

Noddy server log (Noddy.out file):
service() entering try block to initialize the timer from JNDI
jobScheduler reference weblogic.scheduler.TimerServiceImpl@43b4c7
Timer scheduled!
service() started the timer
Started the timer - status:
<15/Mar/2010 10H45m GMT> <Warning> <Cluster> <BEA-000192> <No currently living server was found that could host TimerMaster. The server will retry in a few seconds.>

(Noddy.log file):
####<15/Mar/2010 11H24m GMT> <Info> <Common> <test-ad> <Noddy> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1268652270128> <BEA-000628> <Created "1" resources for pool "TxDataSourceOracle", out of which "1" are available and "0" are unavailable.>
####<15/Mar/2010 11H37m GMT> <Info> <Cluster> <test-ad> <Noddy> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <> <1268653040226> <BEA-000182> <Job Scheduler created a job with ID Noddy_1268653040156 for TimerListener with description timedexecution.TimerServlet$ExampleTimerListener@2ce79a>
####<15/Mar/2010 11H39m GMT> <Info> <JDBC> <test-ad> <Noddy> <[ACTIVE] ExecuteThread: '3' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1268653166307> <BEA-001128> <Connection for pool "TxDataSourceOracle" closed.>

Can anyone help me discover what's wrong with my configuration? Thanks in advance for your help!

    Read the article

  • User-defined datatypes CANNOT be returned in a web service in JBoss 5.0.1

    - by user1503117
    I am using JBoss 5.0.1 and JDK 1.6.0 update 31, and I am implementing an EJB as a web service. A method in my web service module returns an array of JavaBean objects (in my example, a BenefitLevel array). When executed in JBoss, it throws the following exception:

08:57:08,552 ERROR [ServiceProxy] Service error
javax.xml.rpc.ServiceException: Cannot create proxy
at org.jboss.ws.core.jaxrpc.client.ServiceImpl.getPort(ServiceImpl.java:359)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.jboss.ws.core.jaxrpc.client.ServiceProxy.invoke(ServiceProxy.java:127)
at $Proxy105.getCarrierWSSEIPort(Unknown Source)
at org.apache.jsp.index_jsp._jspService(index_jsp.java:92)
at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:369)
at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:322)
at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:249)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:96)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:235)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:190)
at org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:92)
at org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.process(SecurityContextEstablishmentValve.java:126)
at org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.invoke(SecurityContextEstablishmentValve.java:70)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at org.jboss.web.tomcat.service.jca.CachedConnectionValve.invoke(CachedConnectionValve.java:158)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:330)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:829)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:601)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.lang.IllegalStateException: Cannot synchronize to any of these methods:
public abstract stubs.BenefitLevel[] stubs.CarrierWSSEI.getActiveBenData() throws java.rmi.RemoteException
OperationMetaData:
qname={urn:CarrierWS/wsdl}getActiveBenData
javaName=getActiveBenData
style=rpc/literal
oneWay=false
soapAction=
ReturnMetaData:
xmlName=result
partName=result
xmlType={urn:CarrierWS/types/arrays/com/test/cas/carrier/plan/info}BenefitLevelArray
javaType=com.benefitpartnersinc.cas.carrier.plan.info.BenefitLevel[]
mode=OUT
inHeader=false
index=-1
at org.jboss.ws.metadata.umdm.OperationMetaData.eagerInitialize(OperationMetaData.java:491)
at org.jboss.ws.metadata.umdm.EndpointMetaData.eagerInitializeOperations(EndpointMetaData.java:557)
at org.jboss.ws.metadata.umdm.EndpointMetaData.initializeInternal(EndpointMetaData.java:541)
at org.jboss.ws.metadata.umdm.EndpointMetaData.setServiceEndpointInterfaceName(EndpointMetaData.java:220)
at org.jboss.ws.core.jaxrpc.client.ServiceImpl.getPort(ServiceImpl.java:345)
... 33 more
08:57:08,567 ERROR [STDERR] javax.xml.rpc.ServiceException: Cannot create proxy
08:57:08,567 ERROR [STDERR] at org.jboss.ws.core.jaxrpc.client.ServiceImpl.getPort(ServiceImpl.java:359)
08:57:08,567 ERROR [STDERR] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
08:57:08,567 ERROR [STDERR] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
08:57:08,567 ERROR [STDERR] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
08:57:08,567 ERROR [STDERR] at java.lang.reflect.Method.invoke(Method.java:597)
08:57:08,567 ERROR [STDERR] at org.jboss.ws.core.jaxrpc.client.ServiceProxy.invoke(ServiceProxy.java:127)
08:57:08,567 ERROR [STDERR] at $Proxy105.getCarrierWSSEIPort(Unknown Source)
08:57:08,567 ERROR [STDERR] at org.apache.jsp.index_jsp._jspService(index_jsp.java:92)
08:57:08,567 ERROR [STDERR] at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70)
08:57:08,567 ERROR [STDERR] at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
08:57:08,567 ERROR [STDERR] at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:369)
08:57:08,567 ERROR [STDERR] at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:322)
08:57:08,567 ERROR [STDERR] at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:249)
08:57:08,567 ERROR [STDERR] at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
08:57:08,567 ERROR [STDERR] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
08:57:08,567 ERROR [STDERR] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
08:57:08,567 ERROR [STDERR] at org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:96)
08:57:08,567 ERROR [STDERR] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
08:57:08,567 ERROR [STDERR] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
08:57:08,567 ERROR [STDERR] at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:235)
08:57:08,567 ERROR [STDERR] at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
08:57:08,567 ERROR [STDERR] at org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:190)
08:57:08,567 ERROR [STDERR] at org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:92)
08:57:08,567 ERROR [STDERR] at org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.process(SecurityContextEstablishmentValve.java:126)
08:57:08,567 ERROR [STDERR] at org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.invoke(SecurityContextEstablishmentValve.java:70)
08:57:08,567 ERROR [STDERR] at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
08:57:08,567 ERROR [STDERR] at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
08:57:08,567 ERROR [STDERR] at org.jboss.web.tomcat.service.jca.CachedConnectionValve.invoke(CachedConnectionValve.java:158)
08:57:08,567 ERROR [STDERR] at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
08:57:08,567 ERROR [STDERR] at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:330)
08:57:08,567 ERROR [STDERR] at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:829)
08:57:08,567 ERROR [STDERR] at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:601)
08:57:08,567 ERROR [STDERR] at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
08:57:08,567 ERROR [STDERR] at java.lang.Thread.run(Thread.java:662)
08:57:08,567 ERROR [STDERR] Caused by: java.lang.IllegalStateException: Cannot synchronize to any of these methods: public abstract stubs.BenefitLevel[] stubs.CarrierWSSEI.getActiveBenData() throws java.rmi.RemoteException OperationMetaData: qname={urn:CarrierWS/wsdl}getActiveBenData javaName=getActiveBenData style=rpc/literal oneWay=false soapAction= ReturnMetaData: xmlName=result partName=result xmlType={urn:CarrierWS/types/arrays/com/test/cas/carrier/plan/info}BenefitLevelArray javaType=com.test.cas.carrier.plan.info.BenefitLevel[] mode=OUT inHeader=false index=-1
08:57:08,567 ERROR [STDERR] at org.jboss.ws.metadata.umdm.OperationMetaData.eagerInitialize(OperationMetaData.java:491)
08:57:08,567 ERROR [STDERR] at org.jboss.ws.metadata.umdm.EndpointMetaData.eagerInitializeOperations(EndpointMetaData.java:557)
08:57:08,567 ERROR [STDERR] at org.jboss.ws.metadata.umdm.EndpointMetaData.initializeInternal(EndpointMetaData.java:541)
08:57:08,567 ERROR [STDERR] at org.jboss.ws.metadata.umdm.EndpointMetaData.setServiceEndpointInterfaceName(EndpointMetaData.java:220)
08:57:08,567 ERROR [STDERR] at org.jboss.ws.core.jaxrpc.client.ServiceImpl.getPort(ServiceImpl.java:345)
08:57:08,567 ERROR [STDERR] ... 33 more

My web client code is as follows:

<%@page import="java.util.Hashtable"%>
<%@page import="javax.naming.*,com.q4.*,javax.xml.rpc.Stub,stubs.CarrierWS,stubs.CarrierWSSEI,stubs.CarrierWSSEI_Impl"%>
<%@page contentType="text/html" pageEncoding="UTF-8"%>
<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>JSP Page</title>
</head>
<body>
<h1>Hello World!</h1>
<%
try {
    InitialContext ic = new InitialContext();
    CarrierWS carrierws = (CarrierWS) ic.lookup("java:comp/env/service/CarrierWS");
    out.println("========================" + carrierws);
    CarrierWSSEI sei = carrierws.getCarrierWSSEIPort();
    out.println("Invoking the service please wait ............." + carrierws.getCarrierWSSEIPort());
    ((Stub) sei)._setProperty(Stub.ENDPOINT_ADDRESS_PROPERTY, "http://localhost:8080/TestWS3WAR/CarrierWS");
    out.println("Invoking the service please wait ............." + sei.getActiveBenData().length);
} catch (Exception e) {
    out.println("Exception occurred : " + e.getMessage());
    e.printStackTrace();
}
%>
</body>
</html>

Please help me figure out where I am going wrong.

    Read the article

  • Authoritative sources about Database vs. Flatfile decision

    - by FastAl
    <tldr>Looking for a reference to a book or other undeniably authoritative source that gives reasons for when you should choose a database vs. when you should choose other storage methods. I have provided an un-authoritative list of reasons about 2/3 of the way down this post.</tldr>

I have a situation at my company where a database is being used where it would be better to use another solution (in this case, an auto-generated piece of source code that contains a static lookup table, searched by binary sort). Normally a database would be an OK solution even though the problem does not require one: none of the elements of ACID are needed, the data is read-only and updated only about every 3-5 years (also requiring other source-code changes), it fits in memory, and it can be keyed into via binary search (a tad faster than a db, but speed is not an issue). The problem is that this code runs on our enterprise server but is shared with several PC platforms (some disconnected, some using a central DB, etc.), and parts of it are managed by multiple programming units, parts by the DBAs, parts even by mathematicians in another department, etc. These hit their own platform's version of their databases (containing their own copy of the static data). What happens is that with every implementation and every little change, something different goes wrong. There are many other issues as well. I can't even use a flatfile, because one mode of running on our enterprise server does not have permission to read files (only databases and, of course, its own literal storage, e.g., an in-source table). Of course, other parts of the system use databases in proper, less obscure manners; there is no problem with those parts.

So why don't we just change it? I don't have the administrative ability to force a change. But I'm affected because sometimes I have to help fix the problems, but mostly because it causes outages and tons of extra IT time by other programmers, and d*mmit that makes me mad! The reason neither management nor the designers of the system can see the problem is that they propose a solution that won't work: increase communication; implement more safeguards and standards; etc. But every time, in a different part of the already-pared-down but still multi-step processes, a few different diligent, hard-working, top-performing IT personnel make a unique subtle error that causes it to fail, sometimes after the last round of testing! And in general these are not single-person failures, but understandable miscommunications. And communication at our company is actually better than most; people just don't think that's the case because they haven't dug into the matter. However, I have it on very good word from somebody with extensive formal study of sociology and psychology that the relatively small amount of less-than-proper database usage in this gigantic cross-platform, multi-source, multi-language project is bureaucratically un-maintainable. Impossible. No chance. At least with human beings in the loop, and it can't be automated. In addition, the management and developers who could change this, though intelligent and capable, don't understand the rigidity of this 'how humans are' issue, and cannot be convinced on the matter.
The reason putting the static data in source code will solve the problem is that, although the solution is less sexy than a database, it would function with no technical drawbacks (see the sketch after the links below); and since the sharing of source code already works very well, you basically erase any database-related effort from this section of the project, along with all the drawbacks of it that are causing problems. OK, that's the background, for the curious. I won't be able to convince management that this is an unfixable sociological problem, and that the real solution is coding around these limits of human nature, just as you would code around a bug in a 3rd-party component that you can't change. So what I have to do is exploit the unsuitableness of the database solution, and do it not using logic, but rather authority. I am aware of many reasons, and posts on this site giving reasons for one over the other; I'm not looking for lists of reasons like these (although you can add a comment if I've missed a doozy):

WHY USE A DATABASE? instead of flatfile/other DB vs. file: if you need...
- Random Read / Transparent search optimization
- Advanced / varied / customizable Searching and sorting capabilities
- Transaction/rollback
- Locks, semaphores
- Concurrency control / Shared users
- Security
- 1-many/m-m is easier
- Easy modification
- Scalability
- Load Balancing
- Random updates / inserts / deletes
- Advanced query
- Administrative control of design, etc.
- SQL / learning curve
- Debugging / Logging
- Centralized / Live Backup capabilities
- Cached queries / dvlp & cache execution plans
- Interleaved update/read
- Referential integrity, avoid redundant/missing/corrupt/out-of-sync data
- Reporting (from an OLAP or OLTP db) / turnkey generation tools

[Disadvantages:]
- Important to get right the first time - professional design - but only b/c it's meant to last
- s/w & h/w cost
- Usu. over a network, speed issue (best vs. best design vs. local = even then a separate process req's marshalling/netwk layers/inter-p comm)
- indices and query processing can stand in the way of simple processing (vs. flatfile)

WHY USE FLATFILE: If you only need...
- Sequential Row processing only
- Limited usage: append only (no reading, no master key/update)
- Only Update the record you're reading (fixed-length recs only)
- Too big to fit into memory
- If Local disk / read-ahead network connection
- Portability / small system
- Email / cut & Paste / store as document by novice - simple format
- Low design learning curve but high cost later

WHY USE IN-MEMORY/TABLE (tables, arrays, etc.): if you need...
- Processing a single db/ff record that was imported
- Known size of data
- Static data if hardcoding the table
- Narrow, unchanging use (e.g., one program or proc) - includes a class that will be shared, but encapsulates its data manipulation
- Extreme speed needed / high transaction frequency
- Random access - but search is dependent on implementation

Following are some other posts about the topic:
http://stackoverflow.com/questions/1499239/database-vs-flat-text-file-what-are-some-technical-reasons-for-choosing-one-over
http://stackoverflow.com/questions/332825/are-flat-file-databases-any-good
http://stackoverflow.com/questions/2356851/database-vs-flat-files
http://stackoverflow.com/questions/514455/databases-vs-plain-text/514530

What I'd like to know is if anybody could recommend a hard, authoritative source containing these reasons. I'm looking for a paper book I can buy, or a reputable website with whitepapers about the issue (e.g., Microsoft, IBM), not counting the user-generated content on those sites.
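For illustration only, here is a minimal sketch (mine, not from the original post) of the kind of auto-generated, in-source static lookup table searched by binary sort that the post describes; the class name, keys, and values are hypothetical:

    using System;

    //Hypothetical auto-generated static lookup table, regenerated every few
    //years when the source data changes. The Keys array must stay sorted so
    //that Array.BinarySearch works correctly.
    public static class RateTable
    {
        private static readonly int[] Keys = { 1001, 1004, 1010, 1025 };
        private static readonly decimal[] Rates = { 0.25m, 0.40m, 0.55m, 0.75m };

        public static bool TryLookup(int key, out decimal rate)
        {
            int i = Array.BinarySearch(Keys, key);
            if (i >= 0)
            {
                rate = Rates[i];
                return true;
            }
            rate = 0m;
            return false;
        }
    }

Because the table ships as source, every platform that compiles the project gets the same data, with no database, file permissions, or synchronization involved.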
This will have a greater chance of eliciting the change that I'm looking for: less wasted programmer time and more reliable programs. Thanks very much for your help. You win a prize for reading such a large post!

    Read the article

  • Spring's EntityManager not persisting

    - by Fernando Camargo
    Well, my project was using EJB and JPA (with Hibernate), but I had to switch to Spring. Everything was working well before that: the EJB used to inject the EntityManager, controlled the transactions, etc. When I switched to Spring I had a lot of problems, because I'm new to Spring. But now that everything is running, I have this problem: the data is never saved to the database. I configured Spring to control the transactions. I have Spring beans used in JSF, which delegate to Spring services that do the hard work. These services have an EntityManager injected and use @Transactional with REQUIRED propagation. The services pass the EntityManager to a DAO that calls entityManager.persist(bean). The selects appear to work well, and the JTA transaction appears to work well too (I saw it in the log), but the entity is never saved! Here is the log:

INFO: [Pronatec] - 04/04/2012 11:30:20 - [DEBUG] org.springframework.orm.jpa.support.OpenEntityManagerInViewFilter: doFilterInternal() (linha 136): Opening JPA EntityManager in OpenEntityManagerInViewFilter
INFO: [Pronatec] - 04/04/2012 11:30:20 - [DEBUG] org.springframework.beans.factory.support.DefaultListableBeanFactory: doGetBean() (linha 245): Returning cached instance of singleton bean 'transactionManager'
INFO: [Pronatec] - 04/04/2012 11:30:20 - [DEBUG] org.springframework.orm.hibernate3.HibernateTransactionManager: getTransaction() (linha 365): Creating new transaction with name [br.org.cni.pronatec.controller.service.MontanteServiceImpl.adicionarValor]: PROPAGATION_REQUIRED,ISOLATION_DEFAULT; ''
INFO: [Pronatec] - 04/04/2012 11:30:20 - [DEBUG] org.springframework.orm.hibernate3.HibernateTransactionManager: doBegin() (linha 493): Opened new Session [org.hibernate.impl.SessionImpl@2b2fe2f0] for Hibernate transaction
INFO: [Pronatec] - 04/04/2012 11:30:20 - [DEBUG] org.springframework.orm.hibernate3.HibernateTransactionManager: doBegin() (linha 504): Preparing JDBC Connection of Hibernate Session [org.hibernate.impl.SessionImpl@2b2fe2f0]
INFO: [Pronatec] - 04/04/2012 11:30:20 - [DEBUG] org.springframework.orm.hibernate3.HibernateTransactionManager: doBegin() (linha 569): Exposing Hibernate transaction as JDBC transaction [com.sun.gjc.spi.jdbc40.ConnectionHolder40@3bcd4840]
INFO: [Pronatec] - 04/04/2012 11:30:20 - [DEBUG] org.springframework.orm.jpa.ExtendedEntityManagerCreator$ExtendedEntityManagerInvocationHandler: doJoinTransaction() (linha 383): Joined JTA transaction
INFO: Hibernate: select hibernate_sequence.nextval from dual
INFO: [Pronatec] - 04/04/2012 11:30:20 - [DEBUG] org.springframework.orm.hibernate3.HibernateTransactionManager: processCommit() (linha 752): Initiating transaction commit
INFO: [Pronatec] - 04/04/2012 11:30:20 - [DEBUG] org.springframework.orm.hibernate3.HibernateTransactionManager: doCommit() (linha 652): Committing Hibernate transaction on Session [org.hibernate.impl.SessionImpl@2b2fe2f0]
INFO: [Pronatec] - 04/04/2012 11:30:20 - [DEBUG] org.springframework.orm.hibernate3.HibernateTransactionManager: doCleanupAfterCompletion() (linha 734): Closing Hibernate Session [org.hibernate.impl.SessionImpl@2b2fe2f0] after transaction
INFO: [Pronatec] - 04/04/2012 11:30:20 - [DEBUG] org.springframework.orm.hibernate3.SessionFactoryUtils: closeSession() (linha 800): Closing Hibernate Session
INFO: [Pronatec] - 04/04/2012 11:30:20 - [DEBUG] org.springframework.orm.jpa.support.OpenEntityManagerInViewFilter: doFilterInternal() (linha 154): Closing JPA EntityManager in OpenEntityManagerInViewFilter
INFO: [Pronatec] - 04/04/2012 11:30:20 - [DEBUG] org.springframework.orm.jpa.EntityManagerFactoryUtils: closeEntityManager() (linha 343): Closing JPA EntityManager

In the log I see it committing the transaction, but I don't see the insert query (Hibernate never prints it). I can also see that Hibernate looks up the next value of the sequence ID, but after that it never actually inserts. Here is the Spring context configuration:

<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
    <property name="persistenceUnitName" value="PronatecPU" />
    <property name="persistenceXmlLocation" value="classpath:META-INF/persistence.xml" />
    <property name="loadTimeWeaver">
        <bean class="org.springframework.instrument.classloading.InstrumentationLoadTimeWeaver"/>
    </property>
    <property name="jpaProperties">
        <props>
            <prop key="hibernate.transaction.factory_class">org.hibernate.transaction.JTATransactionFactory</prop>
        </props>
    </property>
</bean>

<bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
    <property name="transactionManagerName" value="java:/TransactionManager" />
    <property name="userTransactionName" value="UserTransaction" />
    <property name="entityManagerFactory" ref="entityManagerFactory" />
</bean>

<bean class="org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor" />

<tx:annotation-driven transaction-manager="transactionManager" />

Here is my persistence.xml:

<?xml version="1.0" encoding="UTF-8"?>
<persistence version="1.0" xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd">
    <persistence-unit name="PronatecPU" transaction-type="JTA">
        <provider>org.hibernate.ejb.HibernatePersistence</provider>
        <jta-data-source>jdbc/pronatec</jta-data-source>
        <class>br.org.cni.pronatec.model.bean.AgendamentoBuscaSistec</class>
        <class>br.org.cni.pronatec.model.bean.AgendamentoExportacaoZeus</class>
        <class>br.org.cni.pronatec.model.bean.AgendamentoImportacaoZeus</class>
        <class>br.org.cni.pronatec.model.bean.Aluno</class>
        <class>br.org.cni.pronatec.model.bean.Curso</class>
        <class>br.org.cni.pronatec.model.bean.DepartamentoRegional</class>
        <class>br.org.cni.pronatec.model.bean.Dof</class>
        <class>br.org.cni.pronatec.model.bean.Escola</class>
        <class>br.org.cni.pronatec.model.bean.Inconsistencia</class>
        <class>br.org.cni.pronatec.model.bean.Matricula</class>
        <class>br.org.cni.pronatec.model.bean.Montante</class>
        <class>br.org.cni.pronatec.model.bean.ParametrosVingentes</class>
        <class>br.org.cni.pronatec.model.bean.TipoCurso</class>
        <class>br.org.cni.pronatec.model.bean.Turma</class>
        <class>br.org.cni.pronatec.model.bean.UnidadeFederativa</class>
        <class>br.org.cni.pronatec.model.bean.ValorAssistenciaEstudantil</class>
        <class>br.org.cni.pronatec.model.bean.ValorHora</class>
        <exclude-unlisted-classes>true</exclude-unlisted-classes>
        <properties>
            <property name="current_session_context_class" value="thread"/>
            <property name="hibernate.show_sql" value="true"/>
            <property name="hibernate.format_sql" value="true"/>
            <property name="hibernate.dialect" value="org.hibernate.dialect.OracleDialect"/>
            <property name="hibernate.transaction.manager_lookup_class" value="org.hibernate.transaction.SunONETransactionManagerLookup"/>
            <property name="hibernate.hbm2ddl.auto" value="update"/>
        </properties>
    </persistence-unit>
</persistence>

Here is my service, which is injected into the managed bean:

@Service
@Scope("prototype")
@Transactional(propagation = Propagation.REQUIRED)
public class MontanteServiceImpl {

    // more code

    @PersistenceContext(unitName = "PronatecPU", type = PersistenceContextType.EXTENDED)
    private EntityManager entityManager;

    // more code

    // This method is called by another public method that does something first
    private void salvarMontante(Montante montante) {
        montante.setDataTransacao(new Date());
        MontanteDao montanteDao = new MontanteDao(entityManager);
        montanteDao.salvar(montante);
    }

    // more code
}

My MontanteDao inherits from a base DAO, like this:

public class MontanteDao extends BaseDao<Montante> {
    public MontanteDao(EntityManager entityManager) {
        super(entityManager);
    }
}

And the method called in BaseDao is this:

public void salvar(T bean) {
    entityManager.persist(bean);
}

As you can see, it just picks up the injected EntityManager and calls its persist() method. The transaction is controlled by Spring, as shown in the log, but the insert query is never printed to the log and the entity is never saved. Thanks in advance to whoever helps.

    Read the article

  • Sending email to a Gmail account using C++ on Windows: error check

    - by LCD Fire
    I know this has been discussed a lot, but I'm not asking how to do it; I'm asking why it doesn't work, i.e., what I am doing wrong. It says that the email was sent successfully, but I don't see it in my inbox. I want to send an email to a Gmail account, not through it.

#include <iostream>
#include <windows.h>
#include <fstream>
#include <conio.h>
#include <cstdio>
#include <cstring>

#pragma comment(lib, "ws2_32.lib")

// Insist on at least Winsock v1.1
const int VERSION_MAJOR = 1;
const int VERSION_MINOR = 1;

#define CRLF "\r\n"                 // carriage-return/line feed pair

using namespace std;

// Basic error checking for send() and recv() functions
void Check(int iStatus, const char *szFunction)
{
    if((iStatus != SOCKET_ERROR) && (iStatus))
        return;
    cerr << "Error during call to " << szFunction << ": " << iStatus << " - " << GetLastError() << endl;
}

int main(int argc, char *argv[])
{
    int iProtocolPort = 25;
    char szSmtpServerName[64] = "";
    char szToAddr[64] = "";
    char szFromAddr[64] = "";
    char szBuffer[4096] = "";
    char szLine[255] = "";
    char szMsgLine[255] = "";
    SOCKET hServer;
    WSADATA WSData;
    LPHOSTENT lpHostEntry;
    LPSERVENT lpServEntry;
    SOCKADDR_IN SockAddr;

    // Check for four command-line args
    //if(argc != 5)
    //    ShowUsage();

    // Load command-line args
    lstrcpy(szSmtpServerName, "smtp.gmail.com");
    lstrcpy(szToAddr, "[email protected]");
    lstrcpy(szFromAddr, "[email protected]");

    // Create input stream for reading email message file
    ifstream MsgFile("D:\\d.txt");

    // Attempt to initialize WinSock (1.1 or later)
    if(WSAStartup(MAKEWORD(VERSION_MAJOR, VERSION_MINOR), &WSData))
    {
        cout << "Cannot find Winsock v" << VERSION_MAJOR << "." << VERSION_MINOR << " or later!" << endl;
        return 1;
    }

    // Look up email server's IP address.
    lpHostEntry = gethostbyname(szSmtpServerName);
    if(!lpHostEntry)
    {
        cout << "Cannot find SMTP mail server " << szSmtpServerName << endl;
        return 1;
    }

    // Create a TCP/IP socket, no specific protocol
    hServer = socket(PF_INET, SOCK_STREAM, 0);
    if(hServer == INVALID_SOCKET)
    {
        cout << "Cannot open mail server socket" << endl;
        return 1;
    }

    // Get the mail service port
    lpServEntry = getservbyname("mail", 0);

    // Use the SMTP default port if no other port is specified
    if(!lpServEntry)
        iProtocolPort = htons(IPPORT_SMTP);
    else
        iProtocolPort = lpServEntry->s_port;

    // Set up a Socket Address structure
    SockAddr.sin_family = AF_INET;
    SockAddr.sin_port = iProtocolPort;
    SockAddr.sin_addr = *((LPIN_ADDR)*lpHostEntry->h_addr_list);

    // Connect the socket
    if(connect(hServer, (PSOCKADDR)&SockAddr, sizeof(SockAddr)))
    {
        cout << "Error connecting to Server socket" << endl;
        return 1;
    }

    // Receive initial response from SMTP server
    Check(recv(hServer, szBuffer, sizeof(szBuffer), 0), "recv() Reply");

    // Send HELO server.com
    sprintf(szMsgLine, "HELO %s%s", szSmtpServerName, CRLF);
    Check(send(hServer, szMsgLine, strlen(szMsgLine), 0), "send() HELO");
    Check(recv(hServer, szBuffer, sizeof(szBuffer), 0), "recv() HELO");

    // Send MAIL FROM: <[email protected]>
    sprintf(szMsgLine, "MAIL FROM:<%s>%s", szFromAddr, CRLF);
    Check(send(hServer, szMsgLine, strlen(szMsgLine), 0), "send() MAIL FROM");
    Check(recv(hServer, szBuffer, sizeof(szBuffer), 0), "recv() MAIL FROM");

    // Send RCPT TO: <[email protected]>
    sprintf(szMsgLine, "RCPT TO:<%s>%s", szToAddr, CRLF);
    Check(send(hServer, szMsgLine, strlen(szMsgLine), 0), "send() RCPT TO");
    Check(recv(hServer, szBuffer, sizeof(szBuffer), 0), "recv() RCPT TO");

    // Send DATA
    sprintf(szMsgLine, "DATA%s", CRLF);
    Check(send(hServer, szMsgLine, strlen(szMsgLine), 0), "send() DATA");
    Check(recv(hServer, szBuffer, sizeof(szBuffer), 0), "recv() DATA");

    // Start writing the subject; end it with two CRLF pairs, and after that
    // you can write the body of the message
    sprintf(szMsgLine, "Subject: My own subject %s%s", CRLF, CRLF);
    Check(send(hServer, szMsgLine, strlen(szMsgLine), 0), "send() DATA");

    // Send all lines of the message body (using the supplied text file)
    MsgFile.getline(szLine, sizeof(szLine));            // Get first line
    do                                                  // for each line of message text...
    {
        sprintf(szMsgLine, "%s%s", szLine, CRLF);
        Check(send(hServer, szMsgLine, strlen(szMsgLine), 0), "send() message-line");
        MsgFile.getline(szLine, sizeof(szLine));        // get next line.
    } while(!MsgFile.eof());

    // Send blank line and a period
    sprintf(szMsgLine, "%s.%s", CRLF, CRLF);
    Check(send(hServer, szMsgLine, strlen(szMsgLine), 0), "send() end-message");
    Check(recv(hServer, szBuffer, sizeof(szBuffer), 0), "recv() end-message");

    // Send QUIT
    sprintf(szMsgLine, "QUIT%s", CRLF);
    Check(send(hServer, szMsgLine, strlen(szMsgLine), 0), "send() QUIT");
    Check(recv(hServer, szBuffer, sizeof(szBuffer), 0), "recv() QUIT");

    // Report that the message has been sent
    cout << "Sent email message to " << szToAddr << endl;

    // Close server socket and prepare to exit.
    closesocket(hServer);
    WSACleanup();

    _getch();
    return 0;
}

    Read the article

  • Server compromised. Bounce message contains many email addresses the message was not sent to

    - by Tim Duncklee
    This is not a dupe. Please read and understand the issue before marking this as a duplicate of a question that has already been answered.

Several customers are reporting bounce messages like the one below. At first I thought their computers had a virus, but then I received one that was server-generated, so the problem is with the server. I've inspected the logs, and these email addresses do not appear in the logs. The only thing I see that I do not remember seeing in the past is entries like this:

Apr 30 13:34:49 psa86 qmail-queue-handlers[20994]: hook_dir = '/var/qmail//handlers/before-queue'
Apr 30 13:34:49 psa86 qmail-queue-handlers[20994]: recipient[3] = '[email protected]'
Apr 30 13:34:49 psa86 qmail-queue-handlers[20994]: handlers dir = '/var/qmail//handlers/before-queue/recipient/[email protected]'

I've searched here and on the web, and maybe I'm just not entering the right search terms, but I find nothing on this issue. Does anyone know how a hacker would attach additional email addresses to a message at the server and have them not appear in the logs?

CentOS release 5.4, Plesk 8.6, QMail 1.03

Hi. This is the qmail-send program at psa.aaaaaa.com.
I'm afraid I wasn't able to deliver your message to the following addresses.
This is a permanent error; I've given up. Sorry it didn't work out.

<[email protected]>: 82.201.133.227 does not like recipient. Remote host said: 550 #5.1.0 Address rejected. Giving up on 82.201.133.227.
<[email protected]>: 64.18.7.10 does not like recipient. Remote host said: 550 No such user - psmtp Giving up on 64.18.7.10.
<[email protected]>: 173.194.68.27 does not like recipient. Remote host said: 550-5.1.1 The email account that you tried to reach does not exist. Please try 550-5.1.1 double-checking the recipient's email address for typos or 550-5.1.1 unnecessary spaces. Learn more at 550 5.1.1 http://support.google.com/mail/bin/answer.py?answer=6596 w8si1903qag.18 - gsmtp Giving up on 173.194.68.27.
<[email protected]>: 207.115.36.23 does not like recipient. Remote host said: 550 5.2.1 <[email protected]>... Addressee unknown, relay=[174.142.62.210] Giving up on 207.115.36.23.
<[email protected]>: 207.115.37.22 does not like recipient. Remote host said: 550 5.2.1 <[email protected]>... Addressee unknown, relay=[174.142.62.210] Giving up on 207.115.37.22.
<[email protected]>: 207.115.37.20 does not like recipient. Remote host said: 550 5.2.1 <[email protected]>... Addressee unknown, relay=[174.142.62.210] Giving up on 207.115.37.20.
<[email protected]>: 207.115.37.23 does not like recipient. Remote host said: 550 5.2.1 <[email protected]>... Addressee unknown, relay=[174.142.62.210] Giving up on 207.115.37.23.
<[email protected]>: 207.115.36.22 does not like recipient. Remote host said: 550 5.2.1 <[email protected]>... Addressee unknown, relay=[174.142.62.210] Giving up on 207.115.36.22.
<[email protected]>: 74.205.16.140 does not like recipient. Remote host said: 553 sorry, that domain isn't in my list of allowed rcpthosts; no valid cert for gatewaying (#5.7.1) Giving up on 74.205.16.140.
<[email protected]>: 207.115.36.20 does not like recipient. Remote host said: 550 5.2.1 <[email protected]>... Addressee unknown, relay=[174.142.62.210] Giving up on 207.115.36.20.
<[email protected]>: 207.115.37.21 does not like recipient. Remote host said: 550 5.2.1 <[email protected]>... Addressee unknown, relay=[174.142.62.210] Giving up on 207.115.37.21.
<[email protected]>: 192.169.41.23 failed after I sent the message. Remote host said: 554 qq Sorry, no valid recipients (#5.1.3)

--- Below this line is a copy of the message.

Return-Path: <[email protected]>
Received: (qmail 15962 invoked from network); 1 May 2013 06:49:34 -0400
Received: from exprod6mo107.postini.com (64.18.1.18) by psa.aaaaaa.com with (DHE-RSA-AES256-SHA encrypted) SMTP; 1 May 2013 06:49:34 -0400
Received: from aaaaaa.com (exprod6lut001.postini.com [64.18.1.199]) by exprod6mo107.postini.com (Postfix) with SMTP id 47F80B8CA4 for <[email protected]>; Wed, 1 May 2013 03:49:33 -0700 (PDT)
From: "Support" <[email protected]>
To: [email protected]
Subject: Detected Potential Junk Mail
Date: Wed, 1 May 2013 03:49:33 -0700

Dear [email protected], junk mail protection service has detected suspicious email message(s) since your last visit and directed them to your Message Center. You can inspect your suspicious email at: ...

UPDATE: After not seeing this problem for a while, I personally sent a message and immediately got a bounce with several bad addresses that I know I did not send to. These are addresses that are not on my system or on the server. This problem happens with both Mac and Windows clients, and with messages generated by Postini and sent to users on my system. This is NOT backscatter. If it were backscatter, it would not contain the contents of my message.

UPDATE #2: Here is another bounce. This one was sent by me, and the bounce came back immediately.

Hi. This is the qmail-send program at psa.aaaaaa.com.
I'm afraid I wasn't able to deliver your message to the following addresses.
This is a permanent error; I've given up. Sorry it didn't work out.

<[email protected]>: 71.74.56.227 does not like recipient. Remote host said: 550 5.1.1 <[email protected]>... User unknown Giving up on 71.74.56.227.
<[email protected]>: Connected to 208.34.236.3 but sender was rejected. Remote host said: 550 5.7.1 This system is configured to reject mail from 174.142.62.210 [174.142.62.210] (Host blacklisted - Found on Realtime Black List server 'bl.mailspike.net')
<[email protected]>: 66.96.80.22 failed after I sent the message. Remote host said: 552 sorry, mailbox [email protected] is over quota temporarily (#5.1.1)
<[email protected]>: 83.145.109.52 does not like recipient. Remote host said: 550 5.1.1 <[email protected]>: Recipient address rejected: User unknown in virtual mailbox table Giving up on 83.145.109.52.
<[email protected]>: 69.49.101.234 does not like recipient. Remote host said: 550 5.7.1 <[email protected]>... H:M12 [174.142.62.210] Connection refused due to abuse. Please see http://mailspike.org/anubis/lookup.html or contact your E-mail provider. Giving up on 69.49.101.234.
<[email protected]>: 212.55.154.36 does not like recipient. Remote host said: 550-The account has been suspended for inactivity 550 A conta do destinatario encontra-se suspensa por inactividade (#5.2.1) Giving up on 212.55.154.36.
<[email protected]>: 199.168.90.102 failed after I sent the message. Remote host said: 552 Transaction [email protected] failed, remote said "550 No such user" (#5.1.1)
<[email protected]>: 98.136.217.192 failed after I sent the message. Remote host said: 554 delivery error: dd Sorry your message to [email protected] cannot be delivered. This account has been disabled or discontinued [#102]. - mta1210.sbc.mail.gq1.yahoo.com

--- Below this line is a copy of the message.

Return-Path: <[email protected]>
Received: (qmail 2618 invoked from network); 2 Jun 2013 22:32:51 -0400
Received: from 75-138-254-239.dhcp.jcsn.tn.charter.com (HELO ?192.168.0.66?) (75.138.254.239) by psa.aaaaaa.com with SMTP; 2 Jun 2013 22:32:48 -0400
User-Agent: Microsoft-Entourage/12.34.0.120813
Date: Sun, 02 Jun 2013 21:32:39 -0500
Subject: Refinance
From: Tim Duncklee <[email protected]>
To: Scott jones <[email protected]>
Message-ID: <CDD16A79.67344%[email protected]>
Thread-Topic: Reference
Thread-Index: Ac5gAp2QmTs+LRv0SEOy7AJTX2DWzQ==
Mime-version: 1.0
Content-type: multipart/mixed; boundary="B_3453053568_12034440"

> This message is in MIME format. Since your mail reader does not understand this format, some or all of this message may not be legible.

--B_3453053568_12034440
Content-type: multipart/related; boundary="B_3453053568_11982218"

--B_3453053568_11982218
Content-type: multipart/alternative; boundary="B_3453053568_12000660"

--B_3453053568_12000660
Content-type: text/plain; charset="ISO-8859-1"
Content-transfer-encoding: quoted-printable

Scott, ... email body here ...

Here are the relevant log entries:

Jun 2 22:32:50 psa qmail-queue[2616]: mail: all addreses are uncheckable - need to skip scanning (by deny mode)
Jun 2 22:32:50 psa qmail-queue[2616]: scan: the message(drweb.tmp.i2SY0n) sent by [email protected] to [email protected] should be passed without checks, because contains uncheckable addresses
Jun 2 22:32:50 psa qmail-queue-handlers[2617]: Handlers Filter before-queue for qmail started ...
Jun 2 22:32:50 psa qmail-queue-handlers[2617]: [email protected]
Jun 2 22:32:50 psa qmail-queue-handlers[2617]: [email protected]
Jun 2 22:32:50 psa qmail-queue-handlers[2617]: hook_dir = '/var/qmail//handlers/before-queue'
Jun 2 22:32:50 psa qmail-queue-handlers[2617]: recipient[3] = '[email protected]'
Jun 2 22:32:50 psa qmail-queue-handlers[2617]: handlers dir = '/var/qmail//handlers/before-queue/recipient/[email protected]'
Jun 2 22:32:51 psa qmail: 1370226771.060211 starting delivery 57: msg 1540285 to remote [email protected]
Jun 2 22:32:51 psa qmail: 1370226771.060402 status: local 0/10 remote 1/20
Jun 2 22:32:51 psa qmail: 1370226771.060556 new msg 4915232
Jun 2 22:32:51 psa qmail: 1370226771.060671 info msg 4915232: bytes 687899 from <[email protected]> qp 2618 uid 2020
Jun 2 22:32:51 psa qmail-remote-handlers[2619]: Handlers Filter before-remote for qmail started ...
Jun 2 22:32:51 psa qmail-queue-handlers[2617]: starter: submitter[2618] exited normally
Jun 2 22:32:51 psa qmail-remote-handlers[2619]: from=
Jun 2 22:32:51 psa qmail-remote-handlers[2619]: [email protected]
Jun 2 22:32:51 psa qmail: 1370226771.078732 starting delivery 58: msg 4915232 to remote [email protected]
Jun 2 22:32:51 psa qmail: 1370226771.078825 status: local 0/10 remote 2/20
Jun 2 22:32:51 psa qmail-remote-handlers[2621]: Handlers Filter before-remote for qmail started ...
Jun 2 22:32:51 psa qmail-remote-handlers[2621]: [email protected]
Jun 2 22:32:51 psa qmail-remote-handlers[2621]: [email protected]

    Read the article

  • mysqld crashes on any statement

    - by ??iu
    I restarted my slave to change configuration settings: to skip reverse hostname lookup on connecting and to enable the slow query log. I edited /etc/my.cnf, making only these changes, then restarted mysqld with /etc/init.d/mysql restart.

All appeared to be well, but there is a slight problem: although I can connect to mysqld remotely or locally and it connects okay, mysqld crashes whenever you try to issue any kind of statement. The client session looks like:

Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.1.31-1ubuntu2-log

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> show tables;
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id:    1
Current database: mydb
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
ERROR 2003 (HY000): Can't connect to MySQL server on 'xx.xx.xx.xx' (61)
ERROR: Can't connect to the server
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
ERROR 2003 (HY000): Can't connect to MySQL server on 'xx.xx.xx.xx' (61)
ERROR: Can't connect to the server
ERROR 2006 (HY000): MySQL server has gone away
Bus error

The mysqld error log looks like:

101210 16:35:51 InnoDB: Error: (1500) Couldn't read the MAX(job_id) autoinc value from the index (PRIMARY).
101210 16:35:51 InnoDB: Assertion failure in thread 140245598570832 in file handler/ha_innodb.cc line 2595
InnoDB: Failing assertion: error == DB_SUCCESS
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
InnoDB: about forcing recovery.
101210 16:35:51 - mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help diagnose
the problem, but since we have already crashed, something is definitely wrong
and this may fail.

key_buffer_size=16777216
read_buffer_size=131072
max_used_connections=3
max_threads=600
threads_connected=3
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1328077 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

thd: 0x18209220
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x7f8d791580d0 thread_stack 0x20000 /usr/sbin/mysqld(my_print_stacktrace+0x29) [0x8b4f89] /usr/sbin/mysqld(handle_segfault+0x383) [0x5f8f03] /lib/libpthread.so.0 [0x7f902a76a080] /lib/libc.so.6(gsignal+0x35) [0x7f90291f8fb5] /lib/libc.so.6(abort+0x183) [0x7f90291fabc3] /usr/sbin/mysqld(ha_innobase::open(char const*, int, unsigned int)+0x41b) [0x781f4b] /usr/sbin/mysqld(handler::ha_open(st_table*, char const*, int, int)+0x3f) [0x6db00f] /usr/sbin/mysqld(open_table_from_share(THD*, st_table_share*, char const*, unsigned int, unsigned int, unsigned int, st_table*, bool)+0x57a) [0x64760a] /usr/sbin/mysqld [0x63f281] /usr/sbin/mysqld(open_table(THD*, TABLE_LIST*, st_mem_root*, bool*, unsigned int)+0x626) [0x641e16] /usr/sbin/mysqld(open_tables(THD*, TABLE_LIST**, unsigned int*, unsigned int)+0x5db) [0x6429cb] /usr/sbin/mysqld(open_normal_and_derived_tables(THD*, TABLE_LIST*, unsigned int)+0x1e) [0x642b0e] /usr/sbin/mysqld(mysqld_list_fields(THD*, TABLE_LIST*, char const*)+0x22) [0x70b292] /usr/sbin/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0x146d) [0x60dc1d] /usr/sbin/mysqld(do_command(THD*)+0xe8) [0x60dda8] /usr/sbin/mysqld(handle_one_connection+0x226) [0x601426] /lib/libpthread.so.0 [0x7f902a7623ba] /lib/libc.so.6(clone+0x6d) [0x7f90292abfcd] Trying to get some variables. Some pointers may be invalid and cause the dump to abort... thd->query at 0x18213c70 = thd->thread_id=3 thd->killed=NOT_KILLED The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash. 101210 16:35:51 mysqld_safe Number of processes running now: 0 101210 16:35:51 mysqld_safe mysqld restarted InnoDB: The log sequence number in ibdata files does not match InnoDB: the log sequence number in the ib_logfiles! 101210 16:35:54 InnoDB: Database was not shut down normally! InnoDB: Starting crash recovery. InnoDB: Reading tablespace information from the .ibd files... InnoDB: Restoring possible half-written data pages from the doublewrite InnoDB: buffer... 101210 16:35:56 InnoDB: Started; log sequence number 456 143528628 101210 16:35:56 [Warning] 'user' entry 'root@PSDB102' ignored in --skip-name-resolve mode. 101210 16:35:56 [Warning] Neither --relay-log nor --relay-log-index were used; so replication may break when this MySQL server acts as a slave and has his hostname changed!! Please use '--relay-log=mysqld-relay-bin' to avoid this problem. 101210 16:35:56 [Note] Event Scheduler: Loaded 0 events 101210 16:35:56 [Note] /usr/sbin/mysqld: ready for connections. Version: '5.1.31-1ubuntu2-log' socket: '/var/run/mysqld/mysqld.sock' port: 3306 (Ubuntu) 101210 16:36:11 InnoDB: Error: (1500) Couldn't read the MAX(job_id) autoinc value from the index (PRIMARY). 101210 16:36:11 InnoDB: Assertion failure in thread 139955151501648 in file handler/ha_innodb.cc line 2595 InnoDB: Failing assertion: error == DB_SUCCESS InnoDB: We intentionally generate a memory trap. InnoDB: Submit a detailed bug report to http://bugs.mysql.com. InnoDB: If you get repeated assertion failures or crashes, even InnoDB: immediately after the mysqld startup, there may be InnoDB: corruption in the InnoDB tablespace. Please refer to InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html InnoDB: about forcing recovery. 101210 16:36:11 - mysqld got signal 6 ; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. 
This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. key_buffer_size=16777216 read_buffer_size=131072 max_used_connections=1 max_threads=600 threads_connected=1 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1328077 K bytes of memory Hope that's ok; if not, decrease some variables in the equation. thd: 0x18588720 Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... stack_bottom = 0x7f49d916f0d0 thread_stack 0x20000 /usr/sbin/mysqld(my_print_stacktrace+0x29) [0x8b4f89] /usr/sbin/mysqld(handle_segfault+0x383) [0x5f8f03] /lib/libpthread.so.0 [0x7f4c8a73f080] /lib/libc.so.6(gsignal+0x35) [0x7f4c891cdfb5] /lib/libc.so.6(abort+0x183) [0x7f4c891cfbc3] /usr/sbin/mysqld(ha_innobase::open(char const*, int, unsigned int)+0x41b) [0x781f4b] /usr/sbin/mysqld(handler::ha_open(st_table*, char const*, int, int)+0x3f) [0x6db00f] /usr/sbin/mysqld(open_table_from_share(THD*, st_table_share*, char const*, unsigned int, unsigned int, unsigned int, st_table*, bool)+0x57a) [0x64760a] /usr/sbin/mysqld [0x63f281] /usr/sbin/mysqld(open_table(THD*, TABLE_LIST*, st_mem_root*, bool*, unsigned int)+0x626) [0x641e16] /usr/sbin/mysqld(open_tables(THD*, TABLE_LIST**, unsigned int*, unsigned int)+0x5db) [0x6429cb] /usr/sbin/mysqld(open_normal_and_derived_tables(THD*, TABLE_LIST*, unsigned int)+0x1e) [0x642b0e] /usr/sbin/mysqld(mysqld_list_fields(THD*, TABLE_LIST*, char const*)+0x22) [0x70b292] /usr/sbin/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0x146d) [0x60dc1d] /usr/sbin/mysqld(do_command(THD*)+0xe8) [0x60dda8] /usr/sbin/mysqld(handle_one_connection+0x226) [0x601426] /lib/libpthread.so.0 [0x7f4c8a7373ba] /lib/libc.so.6(clone+0x6d) [0x7f4c89280fcd] Trying to get some variables. Some pointers may be invalid and cause the dump to abort... thd->query at 0x18599950 = thd->thread_id=1 thd->killed=NOT_KILLED The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash. 101210 16:36:11 mysqld_safe Number of processes running now: 0 101210 16:36:11 mysqld_safe mysqld restarted The config is [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] innodb_file_per_table innodb_buffer_pool_size=10G innodb_log_buffer_size=4M innodb_flush_log_at_trx_commit=2 innodb_thread_concurrency=8 skip-slave-start server-id=3 # # * IMPORTANT # If you make changes to these settings and your system uses apparmor, you may # also need to also adjust /etc/apparmor.d/usr.sbin.mysqld. # user = mysql pid-file = /var/run/mysqld/mysqld.pid socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /DB2/mysql tmpdir = /tmp skip-external-locking # # Instead of skip-networking the default is now to listen only on # localhost which is more compatible and is not less secure. 
#bind-address = 127.0.0.1 # # * Fine Tuning # key_buffer = 16M max_allowed_packet = 16M thread_stack = 128K thread_cache_size = 8 # This replaces the startup script and checks MyISAM tables if needed # the first time they are touched myisam-recover = BACKUP max_connections = 600 #table_cache = 64 #thread_concurrency = 10 # # * Query Cache Configuration # query_cache_limit = 1M query_cache_size = 32M # skip-federated slow-query-log skip-name-resolve

Update: I followed the instructions at http://dev.mysql.com/doc/refman/5.1/en/forcing-innodb-recovery.html and set innodb_force_recovery = 4. The logs now show a different error, but the behavior is still the same:

101210 19:14:15 mysqld_safe mysqld restarted 101210 19:14:19 InnoDB: Started; log sequence number 456 143528628 InnoDB: !!! innodb_force_recovery is set to 4 !!! 101210 19:14:19 [Warning] 'user' entry 'root@PSDB102' ignored in --skip-name-resolve mode. 101210 19:14:19 [Warning] Neither --relay-log nor --relay-log-index were used; so replication may break when this MySQL server acts as a slave and has his hostname changed!! Please use '--relay-log=mysqld-relay-bin' to avoid this problem. 101210 19:14:19 [Note] Event Scheduler: Loaded 0 events 101210 19:14:19 [Note] /usr/sbin/mysqld: ready for connections. Version: '5.1.31-1ubuntu2-log' socket: '/var/run/mysqld/mysqld.sock' port: 3306 (Ubuntu) 101210 19:14:32 InnoDB: error: space object of table mydb/__twitter_friend, InnoDB: space id 1602 did not exist in memory. Retrying an open. 101210 19:14:32 InnoDB: error: space object of table mydb/access_request, InnoDB: space id 1318 did not exist in memory. Retrying an open. 101210 19:14:32 InnoDB: error: space object of table mydb/activity, InnoDB: space id 1595 did not exist in memory. Retrying an open. 101210 19:14:32 - mysqld got signal 11 ; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. key_buffer_size=16777216 read_buffer_size=131072 max_used_connections=1 max_threads=600 threads_connected=1 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1328077 K bytes of memory Hope that's ok; if not, decrease some variables in the equation. thd: 0x1753c070 Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong...
stack_bottom = 0x7f7a0b5800d0 thread_stack 0x20000 /usr/sbin/mysqld(my_print_stacktrace+0x29) [0x8b4f89] /usr/sbin/mysqld(handle_segfault+0x383) [0x5f8f03] /lib/libpthread.so.0 [0x7f7cbc350080] /usr/sbin/mysqld(ha_innobase::innobase_get_index(unsigned int)+0x46) [0x77c516] /usr/sbin/mysqld(ha_innobase::innobase_initialize_autoinc()+0x40) [0x77c640] /usr/sbin/mysqld(ha_innobase::open(char const*, int, unsigned int)+0x3f3) [0x781f23] /usr/sbin/mysqld(handler::ha_open(st_table*, char const*, int, int)+0x3f) [0x6db00f] /usr/sbin/mysqld(open_table_from_share(THD*, st_table_share*, char const*, unsigned int, unsigned int, unsigned int, st_table*, bool)+0x57a) [0x64760a] /usr/sbin/mysqld [0x63f281] /usr/sbin/mysqld(open_table(THD*, TABLE_LIST*, st_mem_root*, bool*, unsigned int)+0x626) [0x641e16] /usr/sbin/mysqld(open_tables(THD*, TABLE_LIST**, unsigned int*, unsigned int)+0x5db) [0x6429cb] /usr/sbin/mysqld(open_normal_and_derived_tables(THD*, TABLE_LIST*, unsigned int)+0x1e) [0x642b0e] /usr/sbin/mysqld(mysqld_list_fields(THD*, TABLE_LIST*, char const*)+0x22) [0x70b292] /usr/sbin/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0x146d) [0x60dc1d] /usr/sbin/mysqld(do_command(THD*)+0xe8) [0x60dda8] /usr/sbin/mysqld(handle_one_connection+0x226) [0x601426] /lib/libpthread.so.0 [0x7f7cbc3483ba] /lib/libc.so.6(clone+0x6d) [0x7f7cbae91fcd] Trying to get some variables. Some pointers may be invalid and cause the dump to abort... thd->query at 0x1754d690 = thd->thread_id=1 thd->killed=NOT_KILLED The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash.
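The assertion fires while InnoDB opens a table to initialize its auto-increment counter, so the usual way out, per the forcing-recovery page the log itself links to, is to dump whatever is still readable under forced recovery and rebuild. A minimal sketch of that documented path, assuming the affected schema is mydb as in the session above and that the server stays up long enough to dump:

    # /etc/my.cnf, [mysqld] section -- temporary, remove once the dump succeeds
    innodb_force_recovery = 4

    # dump what is still readable, then rebuild the schema
    mysqldump -u root -p mydb > /root/mydb_dump.sql
    mysql -u root -p -e "DROP DATABASE mydb; CREATE DATABASE mydb;"
    # now remove innodb_force_recovery from my.cnf, restart mysqld, and reload:
    mysql -u root -p mydb < /root/mydb_dump.sql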

    Read the article

  • Cisco 891w multiple VLAN configuration

    - by Jessica
I'm having trouble getting my guest network up. I have VLAN 1 that contains all our network resources (servers, desktops, printers, etc.). I have the wireless configured to use VLAN 1 but authenticate with WPA2 Enterprise. The guest network I just wanted to be open, or configured with a simple WPA2 Personal password, on its own VLAN 2. I've looked at tons of documentation and it should be working, but I can't even authenticate on the guest network! I posted this on Cisco's support forum a week ago but no one has really responded. I could really use some help, so if anyone could take a look at the configurations I posted and steer me in the right direction I would be extremely grateful. Thank you!

version 15.0 service timestamps debug datetime msec service timestamps log datetime msec no service password-encryption ! hostname ESI ! boot-start-marker boot-end-marker ! logging buffered 51200 warnings ! aaa new-model ! ! aaa authentication login userauthen local aaa authorization network groupauthor local ! ! ! ! ! aaa session-id common ! ! ! clock timezone EST -5 clock summer-time EDT recurring service-module wlan-ap 0 bootimage autonomous ! crypto pki trustpoint TP-self-signed-3369945891 enrollment selfsigned subject-name cn=IOS-Self-Signed-Certificate-3369945891 revocation-check none rsakeypair TP-self-signed-3369945891 ! ! crypto pki certificate chain TP-self-signed-3369945891 certificate self-signed 01 (cert is here) quit ip source-route ! ! ip dhcp excluded-address 192.168.1.1 ip dhcp excluded-address 192.168.1.5 ip dhcp excluded-address 192.168.1.2 ip dhcp excluded-address 192.168.1.200 192.168.1.210 ip dhcp excluded-address 192.168.1.6 ip dhcp excluded-address 192.168.1.8 ip dhcp excluded-address 192.168.3.1 ! ip dhcp pool ccp-pool import all network 192.168.1.0 255.255.255.0 default-router 192.168.1.1 dns-server 10.171.12.5 10.171.12.37 lease 0 2 ! ip dhcp pool guest import all network 192.168.3.0 255.255.255.0 default-router 192.168.3.1 dns-server 10.171.12.5 10.171.12.37 ! ! ip cef no ip domain lookup no ipv6 cef ! ! multilink bundle-name authenticated license udi pid CISCO891W-AGN-A-K9 sn FTX153085WL ! ! username ESIadmin privilege 15 secret 5 $1$g1..$JSZ0qxljZAgJJIk/anDu51 username user1 password 0 pass ! ! ! class-map type inspect match-any ccp-cls-insp-traffic match protocol cuseeme match protocol dns match protocol ftp match protocol h323 match protocol https match protocol icmp match protocol imap match protocol pop3 match protocol netshow match protocol shell match protocol realmedia match protocol rtsp match protocol smtp match protocol sql-net match protocol streamworks match protocol tftp match protocol vdolive match protocol tcp match protocol udp class-map type inspect match-all ccp-insp-traffic match class-map ccp-cls-insp-traffic class-map type inspect match-any ccp-cls-icmp-access match protocol icmp class-map type inspect match-all ccp-invalid-src match access-group 100 class-map type inspect match-all ccp-icmp-access match class-map ccp-cls-icmp-access class-map type inspect match-all ccp-protocol-http match protocol http ! ! policy-map type inspect ccp-permit-icmpreply class type inspect ccp-icmp-access inspect class class-default pass policy-map type inspect ccp-inspect class type inspect ccp-invalid-src drop log class type inspect ccp-protocol-http inspect class type inspect ccp-insp-traffic inspect class class-default drop policy-map type inspect ccp-permit class class-default drop !
zone security out-zone zone security in-zone zone-pair security ccp-zp-self-out source self destination out-zone service-policy type inspect ccp-permit-icmpreply zone-pair security ccp-zp-in-out source in-zone destination out-zone service-policy type inspect ccp-inspect zone-pair security ccp-zp-out-self source out-zone destination self service-policy type inspect ccp-permit ! ! crypto isakmp policy 1 encr 3des authentication pre-share group 2 ! crypto isakmp client configuration group 3000client key 67Nif8LLmqP_ dns 10.171.12.37 10.171.12.5 pool dynpool acl 101 ! ! crypto ipsec transform-set myset esp-3des esp-sha-hmac ! crypto dynamic-map dynmap 10 set transform-set myset ! ! crypto map clientmap client authentication list userauthen crypto map clientmap isakmp authorization list groupauthor crypto map clientmap client configuration address initiate crypto map clientmap client configuration address respond crypto map clientmap 10 ipsec-isakmp dynamic dynmap ! ! ! ! ! interface FastEthernet0 ! ! interface FastEthernet1 ! ! interface FastEthernet2 ! ! interface FastEthernet3 ! ! interface FastEthernet4 ! ! interface FastEthernet5 ! ! interface FastEthernet6 ! ! interface FastEthernet7 ! ! interface FastEthernet8 ip address dhcp ip nat outside ip virtual-reassembly duplex auto speed auto ! ! interface GigabitEthernet0 description $FW_OUTSIDE$$ES_WAN$ ip address 10...* 255.255.254.0 ip nat outside ip virtual-reassembly zone-member security out-zone duplex auto speed auto crypto map clientmap ! ! interface wlan-ap0 description Service module interface to manage the embedded AP ip unnumbered Vlan1 arp timeout 0 ! ! interface Wlan-GigabitEthernet0 description Internal switch interface connecting to the embedded AP switchport trunk allowed vlan 1-3,1002-1005 switchport mode trunk ! ! interface Vlan1 description $ETH-SW-LAUNCH$$INTF-INFO-FE 1$$FW_INSIDE$ ip address 192.168.1.1 255.255.255.0 ip nat inside ip virtual-reassembly zone-member security in-zone ip tcp adjust-mss 1452 crypto map clientmap ! ! interface Vlan2 description guest ip address 192.168.3.1 255.255.255.0 ip access-group 120 in ip nat inside ip virtual-reassembly zone-member security in-zone ! ! interface Async1 no ip address encapsulation slip ! ! ip local pool dynpool 192.168.1.200 192.168.1.210 ip forward-protocol nd ip http server ip http access-class 23 ip http authentication local ip http secure-server ip http timeout-policy idle 60 life 86400 requests 10000 ! ! ip dns server ip nat inside source list 23 interface GigabitEthernet0 overload ip route 0.0.0.0 0.0.0.0 10.165.0.1 ! access-list 23 permit 192.168.1.0 0.0.0.255 access-list 100 remark CCP_ACL Category=128 access-list 100 permit ip host 255.255.255.255 any access-list 100 permit ip 127.0.0.0 0.255.255.255 any access-list 100 permit ip 10.165.0.0 0.0.1.255 any access-list 110 permit ip 192.168.0.0 0.0.5.255 any access-list 120 remark ESIGuest Restriction no cdp run ! ! ! ! ! ! control-plane ! ! alias exec dot11radio service-module wlan-ap 0 session Access point version 12.4 no service pad service timestamps debug datetime msec service timestamps log datetime msec no service password-encryption ! hostname ESIRouter ! no logging console enable secret 5 $1$yEH5$CxI5.9ypCBa6kXrUnSuvp1 ! aaa new-model ! ! aaa group server radius rad_eap server 192.168.1.5 auth-port 1812 acct-port 1813 ! aaa group server radius rad_acct server 192.168.1.5 auth-port 1812 acct-port 1813 ! 
aaa authentication login eap_methods group rad_eap aaa authentication enable default line enable aaa authorization exec default local aaa authorization commands 15 default local aaa accounting network acct_methods start-stop group rad_acct ! aaa session-id common clock timezone EST -5 clock summer-time EDT recurring ip domain name ESI ! ! dot11 syslog dot11 vlan-name one vlan 1 dot11 vlan-name two vlan 2 ! dot11 ssid one vlan 1 authentication open eap eap_methods authentication network-eap eap_methods authentication key-management wpa version 2 accounting rad_acct ! dot11 ssid two vlan 2 authentication open guest-mode ! dot11 network-map ! ! username ESIadmin privilege 15 secret 5 $1$p02C$WVHr5yKtRtQxuFxPU8NOx. ! ! bridge irb ! ! interface Dot11Radio0 no ip address no ip route-cache ! encryption vlan 1 mode ciphers aes-ccm ! broadcast-key vlan 1 change 30 ! ! ssid one ! ssid two ! antenna gain 0 station-role root ! interface Dot11Radio0.1 encapsulation dot1Q 1 native no ip route-cache bridge-group 1 bridge-group 1 subscriber-loop-control bridge-group 1 block-unknown-source no bridge-group 1 source-learning no bridge-group 1 unicast-flooding bridge-group 1 spanning-disabled ! interface Dot11Radio0.2 encapsulation dot1Q 2 no ip route-cache bridge-group 2 bridge-group 2 subscriber-loop-control bridge-group 2 block-unknown-source no bridge-group 2 source-learning no bridge-group 2 unicast-flooding bridge-group 2 spanning-disabled ! interface Dot11Radio1 no ip address no ip route-cache shutdown ! encryption vlan 1 mode ciphers aes-ccm ! broadcast-key vlan 1 change 30 ! ! ssid one ! antenna gain 0 dfs band 3 block channel dfs station-role root ! interface Dot11Radio1.1 encapsulation dot1Q 1 native no ip route-cache bridge-group 1 bridge-group 1 subscriber-loop-control bridge-group 1 block-unknown-source no bridge-group 1 source-learning no bridge-group 1 unicast-flooding bridge-group 1 spanning-disabled ! interface GigabitEthernet0 description the embedded AP GigabitEthernet 0 is an internal interface connecting AP with the host router no ip address no ip route-cache ! interface GigabitEthernet0.1 encapsulation dot1Q 1 native no ip route-cache bridge-group 1 no bridge-group 1 source-learning bridge-group 1 spanning-disabled ! interface GigabitEthernet0.2 encapsulation dot1Q 2 no ip route-cache bridge-group 2 no bridge-group 2 source-learning bridge-group 2 spanning-disabled ! interface BVI1 ip address 192.168.1.2 255.255.255.0 no ip route-cache ! ip http server no ip http secure-server ip http help-path http://www.cisco.com/warp/public/779/smbiz/prodconfig/help/eag access-list 10 permit 192.168.1.0 0.0.0.255 radius-server host 192.168.1.5 auth-port 1812 acct-port 1813 key ***** bridge 1 route ip
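For comparison, a WPA2 Personal guest SSID on the embedded AP would look roughly like the sketch below. This is only an outline, not the poster's configuration: GuestPass123 is a placeholder passphrase, and with two SSIDs broadcasting you generally also need mbssid on the radio so both can beacon.

    dot11 ssid two
       vlan 2
       authentication open
       authentication key-management wpa version 2
       mbssid guest-mode
       wpa-psk ascii 0 GuestPass123
    !
    interface Dot11Radio0
       encryption vlan 2 mode ciphers aes-ccm
       mbssid

Note that in the config as posted, SSID "two" has no key management and VLAN 2 has no encryption line on the radio, which matches the symptom of clients failing to associate securely on the guest network.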

    Read the article

  • Redmine install not working and displaying directory contents - Ubuntu 10.04

    - by Casey Flynn
I've gone through the steps to set up and install the Redmine project-tracking web app on my VPS with Apache2, but I'm running into a situation where, instead of displaying the Redmine app, I just see the directory contents. Does anyone know what could be the problem? I'm not sure what other files might be of use to diagnose what's going on. Thanks!

# # Based upon the NCSA server configuration files originally by Rob McCool. # # This is the main Apache server configuration file. It contains the # configuration directives that give the server its instructions. # See http://httpd.apache.org/docs/2.2/ for detailed information about # the directives. # # Do NOT simply read the instructions in here without understanding # what they do. They're here only as hints or reminders. If you are unsure # consult the online docs. You have been warned. # # The configuration directives are grouped into three basic sections: # 1. Directives that control the operation of the Apache server process as a # whole (the 'global environment'). # 2. Directives that define the parameters of the 'main' or 'default' server, # which responds to requests that aren't handled by a virtual host. # These directives also provide default values for the settings # of all virtual hosts. # 3. Settings for virtual hosts, which allow Web requests to be sent to # different IP addresses or hostnames and have them handled by the # same Apache server process. # # Configuration and logfile names: If the filenames you specify for many # of the server's control files begin with "/" (or "drive:/" for Win32), the # server will use that explicit path. If the filenames do *not* begin # with "/", the value of ServerRoot is prepended -- so "/var/log/apache2/foo.log" # with ServerRoot set to "" will be interpreted by the # server as "//var/log/apache2/foo.log". # ### Section 1: Global Environment # # The directives in this section affect the overall operation of Apache, # such as the number of concurrent requests it can handle or where it # can find its configuration files. # # # ServerRoot: The top of the directory tree under which the server's # configuration, error, and log files are kept. # # NOTE! If you intend to place this on an NFS (or otherwise network) # mounted filesystem then please read the LockFile documentation (available # at <URL:http://httpd.apache.org/docs-2.1/mod/mpm_common.html#lockfile>); # you will save yourself a lot of trouble. # # Do NOT add a slash at the end of the directory path. # ServerRoot "/etc/apache2" # # The accept serialization lock file MUST BE STORED ON A LOCAL DISK. # #<IfModule !mpm_winnt.c> #<IfModule !mpm_netware.c> LockFile /var/lock/apache2/accept.lock #</IfModule> #</IfModule> # # PidFile: The file in which the server should record its process # identification number when it starts. # This needs to be set in /etc/apache2/envvars # PidFile ${APACHE_PID_FILE} # # Timeout: The number of seconds before receives and sends time out. # Timeout 300 # # KeepAlive: Whether or not to allow persistent connections (more than # one request per connection). Set to "Off" to deactivate. # KeepAlive On # # MaxKeepAliveRequests: The maximum number of requests to allow # during a persistent connection. Set to 0 to allow an unlimited amount. # We recommend you leave this number high, for maximum performance. # MaxKeepAliveRequests 100 # # KeepAliveTimeout: Number of seconds to wait for the next request from the # same client on the same connection.
# KeepAliveTimeout 15 ## ## Server-Pool Size Regulation (MPM specific) ## # prefork MPM # StartServers: number of server processes to start # MinSpareServers: minimum number of server processes which are kept spare # MaxSpareServers: maximum number of server processes which are kept spare # MaxClients: maximum number of server processes allowed to start # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_prefork_module> StartServers 5 MinSpareServers 5 MaxSpareServers 10 MaxClients 150 MaxRequestsPerChild 0 </IfModule> # worker MPM # StartServers: initial number of server processes to start # MaxClients: maximum number of simultaneous client connections # MinSpareThreads: minimum number of worker threads which are kept spare # MaxSpareThreads: maximum number of worker threads which are kept spare # ThreadsPerChild: constant number of worker threads in each server process # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_worker_module> StartServers 2 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxClients 150 MaxRequestsPerChild 0 </IfModule> # event MPM # StartServers: initial number of server processes to start # MaxClients: maximum number of simultaneous client connections # MinSpareThreads: minimum number of worker threads which are kept spare # MaxSpareThreads: maximum number of worker threads which are kept spare # ThreadsPerChild: constant number of worker threads in each server process # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_event_module> StartServers 2 MaxClients 150 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxRequestsPerChild 0 </IfModule> # These need to be set in /etc/apache2/envvars User ${APACHE_RUN_USER} Group ${APACHE_RUN_GROUP} # # AccessFileName: The name of the file to look for in each directory # for additional configuration directives. See also the AllowOverride # directive. # AccessFileName .htaccess # # The following lines prevent .htaccess and .htpasswd files from being # viewed by Web clients. # <Files ~ "^\.ht"> Order allow,deny Deny from all Satisfy all </Files> # # DefaultType is the default MIME type the server will use for a document # if it cannot otherwise determine one, such as from filename extensions. # If your server contains mostly text or HTML documents, "text/plain" is # a good value. If most of your content is binary, such as applications # or images, you may want to use "application/octet-stream" instead to # keep browsers from trying to display binary files as though they are # text. # DefaultType text/plain # # HostnameLookups: Log the names of clients or just their IP addresses # e.g., www.apache.org (on) or 204.62.129.132 (off). # The default is off because it'd be overall better for the net if people # had to knowingly turn this feature on, since enabling it means that # each client request will result in AT LEAST one lookup request to the # nameserver. # HostnameLookups Off # ErrorLog: The location of the error log file. # If you do not specify an ErrorLog directive within a <VirtualHost> # container, error messages relating to that virtual host will be # logged here. If you *do* define an error logfile for a <VirtualHost> # container, that host's errors will be logged there and not here. # ErrorLog /var/log/apache2/error.log # # LogLevel: Control the number of messages logged to the error_log. 
# Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. # LogLevel warn # Include module configuration: Include /etc/apache2/mods-enabled/*.load Include /etc/apache2/mods-enabled/*.conf # Include all the user configurations: Include /etc/apache2/httpd.conf # Include ports listing Include /etc/apache2/ports.conf # # The following directives define some format nicknames for use with # a CustomLog directive (see below). # If you are behind a reverse proxy, you might want to change %h into %{X-Forwarded-For}i # LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %O" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent # # Define an access log for VirtualHosts that don't define their own logfile CustomLog /var/log/apache2/other_vhosts_access.log vhost_combined # Include of directories ignores editors' and dpkg's backup files, # see README.Debian for details. # Include generic snippets of statements Include /etc/apache2/conf.d/ # Include the virtual host configurations: Include /etc/apache2/sites-enabled/ # Enable fastcgi for .fcgi files # (If you're using a distro package for mod_fcgi, something like # this is probably already present) #<IfModule mod_fcgid.c> # AddHandler fastcgi-script .fcgi # FastCgiIpcDir /var/lib/apache2/fastcgi #</IfModule> LoadModule fcgid_module /usr/lib/apache2/modules/mod_fcgid.so LoadModule passenger_module /var/lib/gems/1.8/gems/passenger-3.0.7/ext/apache2/mod_passenger.so PassengerRoot /var/lib/gems/1.8/gems/passenger-3.0.7 PassengerRuby /usr/bin/ruby1.8 ServerName demo and my vhosts file #No DNS server, default ip address v-host #domain: none #public: /home/casey/public_html/app/ <VirtualHost *:80> ServerAdmin webmaster@localhost # ScriptAlias /redmine /home/casey/public_html/app/redmine/dispatch.fcgi DirectoryIndex index.html DocumentRoot /home/casey/public_html/app/public <Directory "/home/casey/trac/htdocs"> Order allow,deny Allow from all </Directory> <Directory /var/www/redmine> RailsBaseURI /redmine PassengerResolveSymlinksInDocumentRoot on </Directory> # <Directory /> # Options FollowSymLinks # AllowOverride None # </Directory> # <Directory /var/www/> # Options Indexes FollowSymLinks MultiViews # AllowOverride None # Order allow,deny # allow from all # </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog /home/casey/public_html/app/log/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel debug CustomLog /home/casey/public_html/app/log/access.log combined # Alias /doc/ "/usr/share/doc/" # <Directory "/usr/share/doc/"> # Options Indexes MultiViews FollowSymLinks # AllowOverride None # Order deny,allow # Deny from all # Allow from 127.0.0.0/255.0.0.0 ::1/128 # </Directory> </VirtualHost>
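One detail worth noting in the vhost above: DocumentRoot points at /home/casey/public_html/app/public, while the Passenger <Directory> block points at /var/www/redmine, so Passenger may never detect the Rails app at all, leaving Apache to serve a plain directory listing. A minimal Passenger vhost for a Redmine checkout would look roughly like the sketch below; the checkout path is an assumption based on the paths above, and ServerName is a placeholder.

    <VirtualHost *:80>
        ServerName redmine.example.com
        # DocumentRoot must be Redmine's public/ directory for Passenger to detect the app
        DocumentRoot /home/casey/public_html/app/redmine/public
        <Directory /home/casey/public_html/app/redmine/public>
            Options -Indexes
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>
    </VirtualHost>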

    Read the article

  • Cisco ASA: How to route PPPoE-assigned subnet?

    - by Martijn Heemels
We've just received a fiber uplink, and I'm trying to configure our Cisco ASA 5505 to properly use it. The provider requires us to connect via PPPoE, and I managed to configure the ASA as a PPPoE client and establish a connection. The ASA is assigned an IP address by PPPoE, and I can ping out from the ASA to the internet, but I should have access to an entire /28 subnet. I can't figure out how to get that subnet configured on the ASA, so that I can route or NAT the available public addresses to various internal hosts. My assigned range is: 188.xx.xx.176/28 The address I get via PPPoE is 188.xx.xx.177/32, which according to our provider is our Default Gateway address. They claim the subnet is correctly routed to us on their side. How does the ASA know which range it is responsible for on the Fiber interface? How do I use the addresses from my range? To clarify my config: the ASA is currently configured to default-route to our ADSL uplink on port Ethernet0/0 (interface vlan2, nicknamed Outside). The fiber is connected to port Ethernet0/2 (interface vlan50, nicknamed Fiber) so I can configure and test it before making it the default route. Once I'm clear on how to set it all up, I'll fully replace the Outside interface with Fiber. My config (rather long):

: Saved : ASA Version 8.3(2)4 ! hostname gw domain-name example.com enable password ****** encrypted passwd ****** encrypted names name 10.10.1.0 Inside-dhcp-network description Desktops and clients that receive their IP via DHCP name 10.10.0.208 svn.example.com description Subversion server name 10.10.0.205 marvin.example.com description LAMP development server name 10.10.0.206 dns.example.com description DNS, DHCP, NTP ! interface Vlan2 description Old ADSL WAN connection nameif outside security-level 0 ip address 192.168.1.2 255.255.255.252 ! interface Vlan10 description LAN vlan 10 Regular LAN traffic nameif inside security-level 100 ip address 10.10.0.254 255.255.0.0 ! interface Vlan11 description LAN vlan 11 Lab/test traffic nameif lab security-level 90 ip address 10.11.0.254 255.255.0.0 ! interface Vlan20 description LAN vlan 20 ISCSI traffic nameif iscsi security-level 100 ip address 10.20.0.254 255.255.0.0 ! interface Vlan30 description LAN vlan 30 DMZ traffic nameif dmz security-level 50 ip address 10.30.0.254 255.255.0.0 ! interface Vlan40 description LAN vlan 40 Guests access to the internet nameif guests security-level 50 ip address 10.40.0.254 255.255.0.0 ! interface Vlan50 description New WAN Corporate Internet over fiber nameif fiber security-level 0 pppoe client vpdn group KPN ip address pppoe ! interface Ethernet0/0 switchport access vlan 2 speed 100 duplex full ! interface Ethernet0/1 switchport trunk allowed vlan 10,11,30,40 switchport trunk native vlan 10 switchport mode trunk ! interface Ethernet0/2 switchport access vlan 50 speed 100 duplex full ! interface Ethernet0/3 shutdown ! interface Ethernet0/4 shutdown ! interface Ethernet0/5 switchport access vlan 20 ! interface Ethernet0/6 shutdown ! interface Ethernet0/7 shutdown !
boot system disk0:/asa832-4-k8.bin ftp mode passive clock timezone CEST 1 clock summer-time CEDT recurring last Sun Mar 2:00 last Sun Oct 3:00 dns domain-lookup inside dns server-group DefaultDNS name-server dns.example.com domain-name example.com same-security-traffic permit inter-interface same-security-traffic permit intra-interface object network inside-net subnet 10.10.0.0 255.255.0.0 object network svn.example.com host 10.10.0.208 object network marvin.example.com host 10.10.0.205 object network lab-net subnet 10.11.0.0 255.255.0.0 object network dmz-net subnet 10.30.0.0 255.255.0.0 object network guests-net subnet 10.40.0.0 255.255.0.0 object network dhcp-subnet subnet 10.10.1.0 255.255.255.0 description DHCP assigned addresses on Vlan 10 object network Inside-vpnpool description Pool of assignable addresses for VPN clients object network vpn-subnet subnet 10.10.3.0 255.255.255.0 description Address pool assignable to VPN clients object network dns.example.com host 10.10.0.206 description DNS, DHCP, NTP object-group service iscsi tcp description iscsi storage traffic port-object eq 3260 access-list outside_access_in remark Allow access from outside to HTTP on svn. access-list outside_access_in extended permit tcp any object svn.example.com eq www access-list Insiders!_splitTunnelAcl standard permit 10.10.0.0 255.255.0.0 access-list iscsi_access_in remark Prevent disruption of iscsi traffic from outside the iscsi vlan. access-list iscsi_access_in extended deny tcp any interface iscsi object-group iscsi log warnings ! snmp-map DenyV1 deny version 1 ! pager lines 24 logging enable logging timestamp logging asdm-buffer-size 512 logging monitor warnings logging buffered warnings logging history critical logging asdm errors logging flash-bufferwrap logging flash-minimum-free 4000 logging flash-maximum-allocation 2000 mtu outside 1500 mtu inside 1500 mtu lab 1500 mtu iscsi 9000 mtu dmz 1500 mtu guests 1500 mtu fiber 1492 ip local pool DHCP_VPN 10.10.3.1-10.10.3.20 mask 255.255.0.0 ip verify reverse-path interface outside no failover icmp unreachable rate-limit 10 burst-size 5 asdm image disk0:/asdm-635.bin asdm history enable arp timeout 14400 nat (inside,outside) source static any any destination static vpn-subnet vpn-subnet ! 
object network inside-net nat (inside,outside) dynamic interface object network svn.example.com nat (inside,outside) static interface service tcp www www object network lab-net nat (lab,outside) dynamic interface object network dmz-net nat (dmz,outside) dynamic interface object network guests-net nat (guests,outside) dynamic interface access-group outside_access_in in interface outside access-group iscsi_access_in in interface iscsi route outside 0.0.0.0 0.0.0.0 192.168.1.1 1 timeout xlate 3:00:00 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02 timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00 timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00 timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute timeout tcp-proxy-reassembly 0:01:00 dynamic-access-policy-record DfltAccessPolicy aaa-server SBS2003 protocol radius aaa-server SBS2003 (inside) host 10.10.0.204 timeout 5 key ***** aaa authentication enable console SBS2003 LOCAL aaa authentication ssh console SBS2003 LOCAL aaa authentication telnet console SBS2003 LOCAL http server enable http 10.10.0.0 255.255.0.0 inside snmp-server host inside 10.10.0.207 community ***** version 2c snmp-server location Server room snmp-server contact [email protected] snmp-server community ***** snmp-server enable traps snmp authentication linkup linkdown coldstart snmp-server enable traps syslog crypto ipsec transform-set TRANS_ESP_AES-256_SHA esp-aes-256 esp-sha-hmac crypto ipsec transform-set TRANS_ESP_AES-256_SHA mode transport crypto ipsec transform-set ESP-AES-256-MD5 esp-aes-256 esp-md5-hmac crypto ipsec transform-set ESP-DES-SHA esp-des esp-sha-hmac crypto ipsec transform-set ESP-DES-MD5 esp-des esp-md5-hmac crypto ipsec transform-set ESP-AES-192-MD5 esp-aes-192 esp-md5-hmac crypto ipsec transform-set ESP-3DES-MD5 esp-3des esp-md5-hmac crypto ipsec transform-set ESP-AES-256-SHA esp-aes-256 esp-sha-hmac crypto ipsec transform-set ESP-AES-128-SHA esp-aes esp-sha-hmac crypto ipsec transform-set ESP-AES-192-SHA esp-aes-192 esp-sha-hmac crypto ipsec transform-set ESP-AES-128-MD5 esp-aes esp-md5-hmac crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac crypto ipsec security-association lifetime seconds 28800 crypto ipsec security-association lifetime kilobytes 4608000 crypto dynamic-map outside_dyn_map 20 set pfs group5 crypto dynamic-map outside_dyn_map 20 set transform-set TRANS_ESP_AES-256_SHA crypto dynamic-map SYSTEM_DEFAULT_CRYPTO_MAP 65535 set transform-set ESP-AES-128-SHA ESP-AES-128-MD5 ESP-AES-192-SHA ESP-AES-192-MD5 ESP-AES-256-SHA ESP-AES-256-MD5 ESP-3DES-SHA ESP-3DES-MD5 ESP-DES-SHA ESP-DES-MD5 crypto map outside_map 65535 ipsec-isakmp dynamic SYSTEM_DEFAULT_CRYPTO_MAP crypto map outside_map interface outside crypto isakmp enable outside crypto isakmp policy 1 authentication pre-share encryption 3des hash sha group 2 lifetime 86400 telnet 10.10.0.0 255.255.0.0 inside telnet timeout 5 ssh scopy enable ssh 10.10.0.0 255.255.0.0 inside ssh timeout 5 ssh version 2 console timeout 30 management-access inside vpdn group KPN request dialout pppoe vpdn group KPN localname INSIDERS vpdn group KPN ppp authentication pap vpdn username INSIDERS password ***** store-local dhcpd address 10.40.1.0-10.40.1.100 guests dhcpd dns 8.8.8.8 8.8.4.4 interface guests dhcpd update dns interface guests dhcpd enable guests ! 
threat-detection basic-threat threat-detection scanning-threat threat-detection statistics host number-of-rate 2 threat-detection statistics port number-of-rate 3 threat-detection statistics protocol number-of-rate 3 threat-detection statistics access-list threat-detection statistics tcp-intercept rate-interval 30 burst-rate 400 average-rate 200 ntp server dns.example.com source inside prefer webvpn group-policy DfltGrpPolicy attributes vpn-tunnel-protocol IPSec l2tp-ipsec group-policy Insiders! internal group-policy Insiders! attributes wins-server value 10.10.0.205 dns-server value 10.10.0.206 vpn-tunnel-protocol IPSec l2tp-ipsec split-tunnel-policy tunnelspecified split-tunnel-network-list value Insiders!_splitTunnelAcl default-domain value example.com username martijn password ****** encrypted privilege 15 username marcel password ****** encrypted privilege 15 tunnel-group DefaultRAGroup ipsec-attributes pre-shared-key ***** tunnel-group Insiders! type remote-access tunnel-group Insiders! general-attributes address-pool DHCP_VPN authentication-server-group SBS2003 LOCAL default-group-policy Insiders! tunnel-group Insiders! ipsec-attributes pre-shared-key ***** ! class-map global-class match default-inspection-traffic class-map type inspect http match-all asdm_medium_security_methods match not request method head match not request method post match not request method get ! ! policy-map type inspect dns preset_dns_map parameters message-length maximum 512 policy-map type inspect http http_inspection_policy parameters protocol-violation action drop-connection policy-map global-policy class global-class inspect dns inspect esmtp inspect ftp inspect h323 h225 inspect h323 ras inspect http inspect icmp inspect icmp error inspect mgcp inspect netbios inspect pptp inspect rtsp inspect snmp DenyV1 ! service-policy global-policy global smtp-server 123.123.123.123 prompt hostname context call-home profile CiscoTAC-1 no active destination address http https://tools.cisco.com/its/service/oddce/services/DDCEService destination address email [email protected] destination transport-method http subscribe-to-alert-group diagnostic subscribe-to-alert-group environment subscribe-to-alert-group inventory periodic monthly subscribe-to-alert-group configuration periodic monthly subscribe-to-alert-group telemetry periodic daily hpm topN enable Cryptochecksum:a76bbcf8b19019771c6d3eeecb95c1ca : end asdm image disk0:/asdm-635.bin asdm location svn.example.com 255.255.255.255 inside asdm location marvin.example.com 255.255.255.255 inside asdm location dns.example.com 255.255.255.255 inside asdm history enable
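Since PPPoE hands the ASA only the negotiated /32 and the provider routes the whole /28 at that session, there is nothing further to "assign" for the block: the spare addresses are normally consumed as static NAT entries on the fiber interface, and the ASA forwards for them itself. A rough sketch in the same 8.3 object-NAT syntax already used above, keeping the masked addressing from the question (188.xx.xx.180 stands for any unused address in the /28 and would need to be typed out in full on a real ASA):

    object network svn-fiber
     host 10.10.0.208
     nat (inside,fiber) static 188.xx.xx.180

An access rule on the fiber interface, like the existing outside_access_in, would still have to permit the inbound connections once traffic is meant to flow that way.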

    Read the article

  • Apache + Codeigniter + New Server + Unexpected Errors

    - by ngl5000
Alright, here is the situation: I used to have my CodeIgniter site at Bluehost, where I did not have root access; I have since moved that site to Rackspace. I have not changed any of the PHP code, yet there has been some unexpected behavior.

Unexpected behavior: http://mysite.com/robots.txt resolves to the robots file on both old and new. http://mysite.com/robots.txt/ on the old Bluehost setup resolves to my CodeIgniter 404 error page; the Rackspace config resolves to: Not Found The requested URL /robots.txt/ was not found on this server. This instance leads me to believe that there could be a problem with my mod_rewrite rules, or lack thereof. The first one produces the error correctly through PHP, while it seems the second scenario lets the server handle this error. The next instance of this problem is even more troubling: 'http://mysite.com/search/term/9 x 1-1%2F2 white/' New site results in: Bad Request Your browser sent a request that this server could not understand. Old site results in: the actual page being loaded and the search term being unencoded. I have to assume this has something to do with the fact that on the new server I went from a root-level .htaccess file to an httpd.conf file plus the default and default-ssl virtual host files. Here they are:

Default file: <VirtualHost *:80> ServerAdmin webmaster@localhost ServerName mysite.com DocumentRoot /var/www <Directory /> Options +FollowSymLinks AllowOverride None </Directory> <Directory /var/www> Options -Indexes +FollowSymLinks -MultiViews AllowOverride None Order allow,deny allow from all RewriteEngine On RewriteBase / # force no www. (also does the IP thing) RewriteCond %{HTTPS} !=on RewriteCond %{HTTP_HOST} !^mysite\.com [NC] RewriteRule ^(.*)$ http://mysite.com/$1 [R=301,L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.+)\.(\d+)\.(js|css|png|jpg|gif)$ $1.$3 [L] # index.php remove any index.php parts RewriteCond %{THE_REQUEST} /index\.(php|html) RewriteRule (.*)index\.(php|html)(.*)$ /$1$3 [r=301,L] # codeigniter direct RewriteCond $0 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico) RewriteRule ^.*$ index.php [L] </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg.
LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> Default-ssl File <IfModule mod_ssl.c> <VirtualHost _default_:443> ServerAdmin webmaster@localhost ServerName mysite.com DocumentRoot /var/www <Directory /> Options +FollowSymLinks AllowOverride None </Directory> <Directory /var/www> Options -Indexes +FollowSymLinks -MultiViews AllowOverride None Order allow,deny allow from all RewriteEngine On RewriteBase / RewriteCond %{SERVER_PORT} !^443 RewriteRule ^ https://mysite.com%{REQUEST_URI} [R=301,L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.+)\.(\d+)\.(js|css|png|jpg|gif)$ $1.$3 [L] # index.php remove any index.php parts RewriteCond %{THE_REQUEST} /index\.(php|html) RewriteRule (.*)index\.(php|html)(.*)$ /$1$3 [r=301,L] # codeigniter direct RewriteCond $0 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico) RewriteRule ^.*$ index.php [L] </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> # SSL Engine Switch: # Enable/Disable SSL for this virtual host. SSLEngine on # Use our self-signed certificate by default SSLCertificateFile /etc/apache2/ssl/certs/www.mysite.com.crt SSLCertificateKeyFile /etc/apache2/ssl/private/www.mysite.com.key # A self-signed (snakeoil) certificate can be created by installing # the ssl-cert package. See # /usr/share/doc/apache2.2-common/README.Debian.gz for more info. # If both key and certificate are stored in the same file, only the # SSLCertificateFile directive is needed. # SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem # SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key # Server Certificate Chain: # Point SSLCertificateChainFile at a file containing the # concatenation of PEM encoded CA certificates which form the # certificate chain for the server certificate. Alternatively # the referenced file can be the same as SSLCertificateFile # when the CA certificates are directly appended to the server # certificate for convinience. #SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt # Certificate Authority (CA): # Set the CA certificate verification path where to find CA # certificates for client authentication or alternatively one # huge file containing all of them (file must be PEM encoded) # Note: Inside SSLCACertificatePath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCACertificatePath /etc/ssl/certs/ #SSLCACertificateFile /etc/apache2/ssl.crt/ca-bundle.crt # Certificate Revocation Lists (CRL): # Set the CA revocation path where to find CA CRLs for client # authentication or alternatively one huge file containing all # of them (file must be PEM encoded) # Note: Inside SSLCARevocationPath you need hash symlinks # to point to the certificate files. 
Use the provided # Makefile to update the hash symlinks after changes. #SSLCARevocationPath /etc/apache2/ssl.crl/ #SSLCARevocationFile /etc/apache2/ssl.crl/ca-bundle.crl # Client Authentication (Type): # Client certificate verification type and depth. Types are # none, optional, require and optional_no_ca. Depth is a # number which specifies how deeply to verify the certificate # issuer chain before deciding the certificate is not valid. #SSLVerifyClient require #SSLVerifyDepth 10 # Access Control: # With SSLRequire you can do per-directory access control based # on arbitrary complex boolean expressions containing server # variable checks and other lookup directives. The syntax is a # mixture between C and Perl. See the mod_ssl documentation # for more details. #<Location /> #SSLRequire ( %{SSL_CIPHER} !~ m/^(EXP|NULL)/ \ # and %{SSL_CLIENT_S_DN_O} eq "Snake Oil, Ltd." \ # and %{SSL_CLIENT_S_DN_OU} in {"Staff", "CA", "Dev"} \ # and %{TIME_WDAY} >= 1 and %{TIME_WDAY} <= 5 \ # and %{TIME_HOUR} >= 8 and %{TIME_HOUR} <= 20 ) \ # or %{REMOTE_ADDR} =~ m/^192\.76\.162\.[0-9]+$/ #</Location> # SSL Engine Options: # Set various options for the SSL engine. # o FakeBasicAuth: # Translate the client X.509 into a Basic Authorisation. This means that # the standard Auth/DBMAuth methods can be used for access control. The # user name is the `one line' version of the client's X.509 certificate. # Note that no password is obtained from the user. Every entry in the user # file needs this password: `xxj31ZMTZzkVA'. # o ExportCertData: # This exports two additional environment variables: SSL_CLIENT_CERT and # SSL_SERVER_CERT. These contain the PEM-encoded certificates of the # server (always existing) and the client (only existing when client # authentication is used). This can be used to import the certificates # into CGI scripts. # o StdEnvVars: # This exports the standard SSL/TLS related `SSL_*' environment variables. # Per default this exportation is switched off for performance reasons, # because the extraction step is an expensive operation and is usually # useless for serving static content. So one usually enables the # exportation for CGI and SSI requests only. # o StrictRequire: # This denies access when "SSLRequireSSL" or "SSLRequire" applied even # under a "Satisfy any" situation, i.e. when it applies access is denied # and no other module can change it. # o OptRenegotiate: # This enables optimized SSL connection renegotiation handling when SSL # directives are used in per-directory context. #SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire <FilesMatch "\.(cgi|shtml|phtml|php)$"> SSLOptions +StdEnvVars </FilesMatch> <Directory /usr/lib/cgi-bin> SSLOptions +StdEnvVars </Directory> # SSL Protocol Adjustments: # The safe and default but still SSL/TLS standard compliant shutdown # approach is that mod_ssl sends the close notify alert but doesn't wait for # the close notify alert from client. When you need a different shutdown # approach you can use one of the following variables: # o ssl-unclean-shutdown: # This forces an unclean shutdown when the connection is closed, i.e. no # SSL close notify alert is send or allowed to received. This violates # the SSL/TLS standard but is needed for some brain-dead browsers. Use # this when you receive I/O errors because of the standard approach where # mod_ssl sends the close notify alert. # o ssl-accurate-shutdown: # This forces an accurate shutdown when the connection is closed, i.e. 
a # SSL close notify alert is send and mod_ssl waits for the close notify # alert of the client. This is 100% SSL/TLS standard compliant, but in # practice often causes hanging connections with brain-dead browsers. Use # this only for browsers where you know that their SSL implementation # works correctly. # Notice: Most problems of broken clients are also related to the HTTP # keep-alive facility, so you usually additionally want to disable # keep-alive for those clients, too. Use variable "nokeepalive" for this. # Similarly, one has to force some clients to use HTTP/1.0 to workaround # their broken HTTP/1.1 implementation. Use variables "downgrade-1.0" and # "force-response-1.0" for this. BrowserMatch "MSIE [2-6]" \ nokeepalive ssl-unclean-shutdown \ downgrade-1.0 force-response-1.0 # MSIE 7 and newer should be able to use keepalive BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown

httpd.conf file: just a lot of stuff from HTML5 Boilerplate; I will post it if need be.

Old .htaccess file: <IfModule mod_rewrite.c> # index.php remove any index.php parts RewriteCond %{THE_REQUEST} /index\.(php|html) RewriteRule (.*)index\.(php|html)(.*)$ /$1$3 [r=301,L] RewriteCond $1 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico) RewriteRule ^(.*)/$ /$1 [r=301,L] # codeigniter direct RewriteCond $1 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico) RewriteRule ^(.*)$ /index.php/$1 [L] </IfModule>

Any help would be hugely appreciated!
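Two things in these configs line up with the symptoms, offered here as a hedged guess rather than a diagnosis. The Bad Request on the %2F search term is likely Apache itself: encoded slashes in the request path are rejected by default (the exact status code varies by version), and the old host evidently allowed them. And the /robots.txt/ difference matches the old .htaccess trailing-slash redirect that was not carried into the new vhosts. Roughly, inside each <VirtualHost>:

    # permit %2F in request paths without decoding it
    # (NoDecode needs Apache >= 2.2.18; older versions only offer On)
    AllowEncodedSlashes NoDecode

    # restore the old trailing-slash redirect ahead of the CodeIgniter rules
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule ^(.*)/$ /$1 [R=301,L]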

    Read the article

  • Can't get simple Apache VHost up and running

    - by TK Kocheran
Unfortunately, I can't seem to get a simple Apache VHost online. I used to simply have one VHost which bound to all: <VirtualHost *:80>, but this isn't appropriate for security anymore. I need to have one VHost for localhost requests (ie my dev server) and one for incoming requests via my domain name. Here's my new VHost: NameVirtualHost domain1.com <VirtualHost domain1.com:80> DocumentRoot /var/www ServerName domain1.com </VirtualHost> <VirtualHost domain2.com:80> DocumentRoot /var/www ServerName domain2.com </VirtualHost> After I restart my server, I see the following errors in my log: [Wed Feb 16 11:26:36 2011] [error] [client ####.###.###.###] File does not exist: /htdocs [Wed Feb 16 11:26:36 2011] [error] [client ####.###.###.###] File does not exist: /htdocs What am I doing wrong?

EDIT As per the answer given below, I have modified my configuration. Here are my configuration files: /etc/apache2/ports.conf: Listen 80 <IfModule mod_ssl.c> # If you add NameVirtualHost *:443 here, you will also have to change # the VirtualHost statement in /etc/apache2/sites-available/default-ssl # to <VirtualHost *:443> # Server Name Indication for SSL named virtual hosts is currently not # supported by MSIE on Windows XP. Listen 443 </IfModule> <IfModule mod_gnutls.c> Listen 443 </IfModule> Here are my actual defined sites: /etc/apache2/sites-enabled/000-localhost: NameVirtualHost 127.0.0.1:80 <VirtualHost 127.0.0.1:80> ServerAdmin ######### DocumentRoot /var/www <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> RewriteEngine On RewriteLog "/var/log/apache2/mod_rewrite.log" RewriteLogLevel 9 <Location /> <Limit GET POST PUT> order allow,deny allow from all deny from 65.34.248.110 deny from 69.122.239.3 deny from 58.218.199.147 deny from 65.34.248.110 </Limit> </Location> </VirtualHost> /etc/apache2/sites-enabled/001-rfkrocktk.dyndns.org: NameVirtualHost rfkrocktk.dyndns.org:80 <VirtualHost rfkrocktk.dyndns.org:80> DocumentRoot /var/www ServerName rfkrocktk.dyndns.org </VirtualHost> And, just for kicks, my main file: /etc/apache2/apache2.conf: # # Based upon the NCSA server configuration files originally by Rob McCool. # # This is the main Apache server configuration file. It contains the # configuration directives that give the server its instructions. # See http://httpd.apache.org/docs/2.2/ for detailed information about # the directives. # # Do NOT simply read the instructions in here without understanding # what they do. They're here only as hints or reminders. If you are unsure # consult the online docs. You have been warned. # # The configuration directives are grouped into three basic sections: # 1. Directives that control the operation of the Apache server process as a # whole (the 'global environment'). # 2.
# 2. Directives that define the parameters of the 'main' or 'default' server,
#    which responds to requests that aren't handled by a virtual host.
#    These directives also provide default values for the settings
#    of all virtual hosts.
# 3. Settings for virtual hosts, which allow Web requests to be sent to
#    different IP addresses or hostnames and have them handled by the
#    same Apache server process.
#
# Configuration and logfile names: If the filenames you specify for many
# of the server's control files begin with "/" (or "drive:/" for Win32), the
# server will use that explicit path. If the filenames do *not* begin
# with "/", the value of ServerRoot is prepended -- so "foo.log"
# with ServerRoot set to "/etc/apache2" will be interpreted by the
# server as "/etc/apache2/foo.log".
#

### Section 1: Global Environment
#
# The directives in this section affect the overall operation of Apache,
# such as the number of concurrent requests it can handle or where it
# can find its configuration files.
#

#
# ServerRoot: The top of the directory tree under which the server's
# configuration, error, and log files are kept.
#
# NOTE! If you intend to place this on an NFS (or otherwise network)
# mounted filesystem then please read the LockFile documentation (available
# at <URL:http://httpd.apache.org/docs-2.1/mod/mpm_common.html#lockfile>);
# you will save yourself a lot of trouble.
#
# Do NOT add a slash at the end of the directory path.
#
ServerRoot "/etc/apache2"

#
# The accept serialization lock file MUST BE STORED ON A LOCAL DISK.
#
#<IfModule !mpm_winnt.c>
#<IfModule !mpm_netware.c>
LockFile /var/lock/apache2/accept.lock
#</IfModule>
#</IfModule>

#
# PidFile: The file in which the server should record its process
# identification number when it starts.
# This needs to be set in /etc/apache2/envvars
#
PidFile ${APACHE_PID_FILE}

#
# Timeout: The number of seconds before receives and sends time out.
#
Timeout 300

#
# KeepAlive: Whether or not to allow persistent connections (more than
# one request per connection). Set to "Off" to deactivate.
#
KeepAlive On

#
# MaxKeepAliveRequests: The maximum number of requests to allow
# during a persistent connection. Set to 0 to allow an unlimited amount.
# We recommend you leave this number high, for maximum performance.
#
MaxKeepAliveRequests 100

#
# KeepAliveTimeout: Number of seconds to wait for the next request from the
# same client on the same connection.
#
KeepAliveTimeout 15

##
## Server-Pool Size Regulation (MPM specific)
##

# prefork MPM
# StartServers: number of server processes to start
# MinSpareServers: minimum number of server processes which are kept spare
# MaxSpareServers: maximum number of server processes which are kept spare
# MaxClients: maximum number of server processes allowed to start
# MaxRequestsPerChild: maximum number of requests a server process serves
<IfModule mpm_prefork_module>
    StartServers          5
    MinSpareServers       5
    MaxSpareServers      10
    MaxClients          150
    MaxRequestsPerChild   0
</IfModule>

# worker MPM
# StartServers: initial number of server processes to start
# MaxClients: maximum number of simultaneous client connections
# MinSpareThreads: minimum number of worker threads which are kept spare
# MaxSpareThreads: maximum number of worker threads which are kept spare
# ThreadsPerChild: constant number of worker threads in each server process
# MaxRequestsPerChild: maximum number of requests a server process serves
<IfModule mpm_worker_module>
    StartServers          2
    MinSpareThreads      25
    MaxSpareThreads      75
    ThreadLimit          64
    ThreadsPerChild      25
    MaxClients          150
    MaxRequestsPerChild   0
</IfModule>

# event MPM
# StartServers: initial number of server processes to start
# MaxClients: maximum number of simultaneous client connections
# MinSpareThreads: minimum number of worker threads which are kept spare
# MaxSpareThreads: maximum number of worker threads which are kept spare
# ThreadsPerChild: constant number of worker threads in each server process
# MaxRequestsPerChild: maximum number of requests a server process serves
<IfModule mpm_event_module>
    StartServers          2
    MaxClients          150
    MinSpareThreads      25
    MaxSpareThreads      75
    ThreadLimit          64
    ThreadsPerChild      25
    MaxRequestsPerChild   0
</IfModule>

# These need to be set in /etc/apache2/envvars
User ${APACHE_RUN_USER}
Group ${APACHE_RUN_GROUP}

#
# AccessFileName: The name of the file to look for in each directory
# for additional configuration directives. See also the AllowOverride
# directive.
#
AccessFileName .htaccess

#
# The following lines prevent .htaccess and .htpasswd files from being
# viewed by Web clients.
#
<Files ~ "^\.ht">
    Order allow,deny
    Deny from all
    Satisfy all
</Files>

#
# DefaultType is the default MIME type the server will use for a document
# if it cannot otherwise determine one, such as from filename extensions.
# If your server contains mostly text or HTML documents, "text/plain" is
# a good value. If most of your content is binary, such as applications
# or images, you may want to use "application/octet-stream" instead to
# keep browsers from trying to display binary files as though they are
# text.
#
DefaultType text/plain

#
# HostnameLookups: Log the names of clients or just their IP addresses
# e.g., www.apache.org (on) or 204.62.129.132 (off).
# The default is off because it'd be overall better for the net if people
# had to knowingly turn this feature on, since enabling it means that
# each client request will result in AT LEAST one lookup request to the
# nameserver.
#
HostnameLookups Off

# ErrorLog: The location of the error log file.
# If you do not specify an ErrorLog directive within a <VirtualHost>
# container, error messages relating to that virtual host will be
# logged here. If you *do* define an error logfile for a <VirtualHost>
# container, that host's errors will be logged there and not here.
#
ErrorLog /var/log/apache2/error.log

#
# LogLevel: Control the number of messages logged to the error_log.
# Possible values include: debug, info, notice, warn, error, crit,
# alert, emerg.
#
LogLevel warn

# Include module configuration:
Include /etc/apache2/mods-enabled/*.load
Include /etc/apache2/mods-enabled/*.conf

# Include all the user configurations:
Include /etc/apache2/httpd.conf

# Include ports listing
Include /etc/apache2/ports.conf

#
# The following directives define some format nicknames for use with
# a CustomLog directive (see below).
# If you are behind a reverse proxy, you might want to change %h into %{X-Forwarded-For}i
#
LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %l %u %t \"%r\" %>s %O" common
LogFormat "%{Referer}i -> %U" referer
LogFormat "%{User-agent}i" agent

#
# Define an access log for VirtualHosts that don't define their own logfile
CustomLog /var/log/apache2/other_vhosts_access.log vhost_combined

# Include of directories ignores editors' and dpkg's backup files,
# see README.Debian for details.

# Include generic snippets of statements
Include /etc/apache2/conf.d/

# Include the virtual host configurations:
Include /etc/apache2/sites-enabled/

What else do I need to do to fix it? Should I be telling Apache to listen on 127.0.0.1:80, or isn't it already listening there?
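One way to answer the Listen question directly is to check where the Listen directive lives and what the running process is actually bound to. On the stock Debian/Ubuntu layout shown above, Listen is normally set in /etc/apache2/ports.conf, which is pulled in by the "Include /etc/apache2/ports.conf" line near the end of the config. A minimal sketch of that check (standard tools only; nothing assumed beyond the paths already shown):

    # list every Listen directive Apache will load
    grep -R "^Listen" /etc/apache2/

    # see which addresses and ports the running apache2 is bound to
    sudo netstat -tlnp | grep apache2

A bare "Listen 80" binds all interfaces, loopback included, so 127.0.0.1:80 would already be covered; you would only write "Listen 127.0.0.1:80" if you wanted to restrict Apache to loopback.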

    Read the article

  • Unstable DNS with bind

    - by yasser abd
We have a CentOS machine called jupiter, on which I have installed bind9. On every other machine the DNS is set to the IP address of jupiter (192.168.2.101), as you can see in the output of the following command on a Windows machine:

>ipconfig /all

Windows IP Configuration
   Host Name . . . . . . . . . . . . : mypcs
   Primary Dns Suffix  . . . . . . . :
   Node Type . . . . . . . . . . . . : Hybrid
   IP Routing Enabled. . . . . . . . : No
   WINS Proxy Enabled. . . . . . . . : No

Ethernet adapter Local Area Connection:
   Connection-specific DNS Suffix  . :
   Description . . . . . . . . . . . : Broadcom NetXtreme 57xx Gigabit Controller
   Physical Address. . . . . . . . . : 00-1A-A0-AC-E4-CC
   DHCP Enabled. . . . . . . . . . . : Yes
   Autoconfiguration Enabled . . . . : Yes
   Link-local IPv6 Address . . . . . : fe80::c16d:3ae4:5907:30c4%8(Preferred)
   IPv4 Address. . . . . . . . . . . : 192.168.2.98(Preferred)
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Lease Obtained. . . . . . . . . . : Thursday, September 20, 2012 10:26:11 AM
   Lease Expires . . . . . . . . . . : Sunday, September 23, 2012 10:26:10 AM
   Default Gateway . . . . . . . . . : 192.168.2.1
   DHCP Server . . . . . . . . . . . : 192.168.2.1
   DHCPv6 IAID . . . . . . . . . . . : 201333408
   DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-16-3A-50-01-00-1A-A0-AC-E4-CC
   DNS Servers . . . . . . . . . . . : 192.168.2.101
                                       192.168.2.1
                                       192.168.2.1
   NetBIOS over Tcpip. . . . . . . . : Enabled

All machines can always nslookup one of the domains (mydomain.com) that is defined on jupiter's DNS server, as you can see in the output of nslookup on the same Windows machine:

>nslookup mydomain.com
Server:  UnKnown
Address:  192.168.2.101

Name:    mydomain.com
Address:  192.168.2.100

The problem is that sometimes mydomain.com cannot be pinged. Here is the output of ping on the same Windows machine:

>ping mydomain.com
Ping request could not find host mydomain.com. Please check the name and try again.

This looks very random and happens once in a while, so the machine can look up the DNS record but cannot ping the host, nor browse the website hosted on mydomain.com, which should resolve to 192.168.2.100. On a Linux machine that has the same DNS settings, the output of the dig command for mydomain.com is as follows:

$ dig mydomain.com

; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6_3.2 <<>> mydomain.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 36090
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 1

;; QUESTION SECTION:
;mydomain.com.                 IN      A

;; ANSWER SECTION:
mydomain.com.          86400   IN      A       192.168.2.100

;; AUTHORITY SECTION:
mydomain.com.          86400   IN      NS      jupiter.

;; ADDITIONAL SECTION:
jupiter.               86400   IN      A       192.168.2.101

;; Query time: 1 msec
;; SERVER: 192.168.2.101#53(192.168.2.101)
;; WHEN: Thu Sep 20 16:32:14 2012
;; MSG SIZE  rcvd: 83

We have never had the same problem on Macs; they always resolve mydomain.com. Here is how I have defined mydomain.com in Bind9's configs on jupiter. Note that the name of the machine at 192.168.2.100 is venus, so I have this file:

/var/named/named.venus:
$TTL 1D
@       IN SOA  jupiter. admin.ourcompany.com. (
                2003052800      ; serial
                86400           ; refresh
                300             ; retry
                604800          ; expire
                3600            ; minimum
                )
@       IN NS   jupiter.
@       IN A    192.168.2.100
*       IN A    192.168.2.100

/var/named/zones/named.venus.zone:
zone "mydomain.com" IN {
        type master;
        file "/var/named/named.venus";
        allow-update { none; };
};

One thing to note is that I have not defined reverse DNS lookups; only the forward lookups are defined in the Bind9 configs. I am not sure whether that is relevant. So my question is: why is this so unstable? What could be the cause?
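    A detail worth noticing in the ipconfig output above: the Windows clients list three DNS servers, jupiter (192.168.2.101) and the router (192.168.2.1, twice). The Windows resolver does not strictly query them in order; if jupiter is ever slow to answer, it can fail over to the router, which knows nothing about mydomain.com, and the name suddenly stops resolving even though jupiter itself is fine. A minimal way to test that theory the next time a ping fails is to query each configured server explicitly (standard nslookup syntax, using only the addresses already shown):

    nslookup mydomain.com 192.168.2.101
    nslookup mydomain.com 192.168.2.1

    If the first lookup succeeds while the second fails, the likely fix is to stop handing out 192.168.2.1 as a DNS server via DHCP, or to make bind on jupiter the only resolver and let it forward queries for external names to the router.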

    Read the article

  • mysqld crashes on any statement

    - by ??iu
I restarted my slave to change configuration settings: to skip reverse hostname lookup on connecting and to enable the slow query log. I edited /etc/my.cnf, making only these changes, then restarted mysqld with /etc/init.d/mysql restart. All appeared to be well, and I can connect to mysqld remotely or locally, but mysqld crashes whenever any kind of statement is issued. The client session looks like:

Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.1.31-1ubuntu2-log

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> show tables;
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id:    1
Current database: mydb

ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
ERROR 2003 (HY000): Can't connect to MySQL server on 'xx.xx.xx.xx' (61)
ERROR: Can't connect to the server
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
ERROR 2003 (HY000): Can't connect to MySQL server on 'xx.xx.xx.xx' (61)
ERROR: Can't connect to the server
ERROR 2006 (HY000): MySQL server has gone away
Bus error

The mysqld error log looks like:

101210 16:35:51  InnoDB: Error: (1500) Couldn't read the MAX(job_id) autoinc value from the index (PRIMARY).
101210 16:35:51  InnoDB: Assertion failure in thread 140245598570832 in file handler/ha_innodb.cc line 2595
InnoDB: Failing assertion: error == DB_SUCCESS
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
InnoDB: about forcing recovery.
101210 16:35:51 - mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help diagnose
the problem, but since we have already crashed, something is definitely wrong
and this may fail.

key_buffer_size=16777216
read_buffer_size=131072
max_used_connections=3
max_threads=600
threads_connected=3
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1328077 K
bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

thd: 0x18209220
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x7f8d791580d0 thread_stack 0x20000
/usr/sbin/mysqld(my_print_stacktrace+0x29) [0x8b4f89]
/usr/sbin/mysqld(handle_segfault+0x383) [0x5f8f03]
/lib/libpthread.so.0 [0x7f902a76a080]
/lib/libc.so.6(gsignal+0x35) [0x7f90291f8fb5]
/lib/libc.so.6(abort+0x183) [0x7f90291fabc3]
/usr/sbin/mysqld(ha_innobase::open(char const*, int, unsigned int)+0x41b) [0x781f4b]
/usr/sbin/mysqld(handler::ha_open(st_table*, char const*, int, int)+0x3f) [0x6db00f]
/usr/sbin/mysqld(open_table_from_share(THD*, st_table_share*, char const*, unsigned int, unsigned int, unsigned int, st_table*, bool)+0x57a) [0x64760a]
/usr/sbin/mysqld [0x63f281]
/usr/sbin/mysqld(open_table(THD*, TABLE_LIST*, st_mem_root*, bool*, unsigned int)+0x626) [0x641e16]
/usr/sbin/mysqld(open_tables(THD*, TABLE_LIST**, unsigned int*, unsigned int)+0x5db) [0x6429cb]
/usr/sbin/mysqld(open_normal_and_derived_tables(THD*, TABLE_LIST*, unsigned int)+0x1e) [0x642b0e]
/usr/sbin/mysqld(mysqld_list_fields(THD*, TABLE_LIST*, char const*)+0x22) [0x70b292]
/usr/sbin/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0x146d) [0x60dc1d]
/usr/sbin/mysqld(do_command(THD*)+0xe8) [0x60dda8]
/usr/sbin/mysqld(handle_one_connection+0x226) [0x601426]
/lib/libpthread.so.0 [0x7f902a7623ba]
/lib/libc.so.6(clone+0x6d) [0x7f90292abfcd]
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort...
thd->query at 0x18213c70 =
thd->thread_id=3
thd->killed=NOT_KILLED
The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
information that should help you find out what is causing the crash.
101210 16:35:51 mysqld_safe Number of processes running now: 0
101210 16:35:51 mysqld_safe mysqld restarted
InnoDB: The log sequence number in ibdata files does not match
InnoDB: the log sequence number in the ib_logfiles!
101210 16:35:54  InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files...
InnoDB: Restoring possible half-written data pages from the doublewrite
InnoDB: buffer...
101210 16:35:56  InnoDB: Started; log sequence number 456 143528628
101210 16:35:56 [Warning] 'user' entry 'root@PSDB102' ignored in --skip-name-resolve mode.
101210 16:35:56 [Warning] Neither --relay-log nor --relay-log-index were used; so
replication may break when this MySQL server acts as a slave and has his hostname
changed!! Please use '--relay-log=mysqld-relay-bin' to avoid this problem.
101210 16:35:56 [Note] Event Scheduler: Loaded 0 events
101210 16:35:56 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.1.31-1ubuntu2-log'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  (Ubuntu)
101210 16:36:11  InnoDB: Error: (1500) Couldn't read the MAX(job_id) autoinc value from the index (PRIMARY).
101210 16:36:11  InnoDB: Assertion failure in thread 139955151501648 in file handler/ha_innodb.cc line 2595
InnoDB: Failing assertion: error == DB_SUCCESS
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
InnoDB: about forcing recovery.
101210 16:36:11 - mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help diagnose
the problem, but since we have already crashed, something is definitely wrong
and this may fail.

key_buffer_size=16777216
read_buffer_size=131072
max_used_connections=1
max_threads=600
threads_connected=1
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1328077 K
bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

thd: 0x18588720
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x7f49d916f0d0 thread_stack 0x20000
/usr/sbin/mysqld(my_print_stacktrace+0x29) [0x8b4f89]
/usr/sbin/mysqld(handle_segfault+0x383) [0x5f8f03]
/lib/libpthread.so.0 [0x7f4c8a73f080]
/lib/libc.so.6(gsignal+0x35) [0x7f4c891cdfb5]
/lib/libc.so.6(abort+0x183) [0x7f4c891cfbc3]
/usr/sbin/mysqld(ha_innobase::open(char const*, int, unsigned int)+0x41b) [0x781f4b]
/usr/sbin/mysqld(handler::ha_open(st_table*, char const*, int, int)+0x3f) [0x6db00f]
/usr/sbin/mysqld(open_table_from_share(THD*, st_table_share*, char const*, unsigned int, unsigned int, unsigned int, st_table*, bool)+0x57a) [0x64760a]
/usr/sbin/mysqld [0x63f281]
/usr/sbin/mysqld(open_table(THD*, TABLE_LIST*, st_mem_root*, bool*, unsigned int)+0x626) [0x641e16]
/usr/sbin/mysqld(open_tables(THD*, TABLE_LIST**, unsigned int*, unsigned int)+0x5db) [0x6429cb]
/usr/sbin/mysqld(open_normal_and_derived_tables(THD*, TABLE_LIST*, unsigned int)+0x1e) [0x642b0e]
/usr/sbin/mysqld(mysqld_list_fields(THD*, TABLE_LIST*, char const*)+0x22) [0x70b292]
/usr/sbin/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0x146d) [0x60dc1d]
/usr/sbin/mysqld(do_command(THD*)+0xe8) [0x60dda8]
/usr/sbin/mysqld(handle_one_connection+0x226) [0x601426]
/lib/libpthread.so.0 [0x7f4c8a7373ba]
/lib/libc.so.6(clone+0x6d) [0x7f4c89280fcd]
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort...
thd->query at 0x18599950 =
thd->thread_id=1
thd->killed=NOT_KILLED
The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
information that should help you find out what is causing the crash.
101210 16:36:11 mysqld_safe Number of processes running now: 0
101210 16:36:11 mysqld_safe mysqld restarted

The config is:

[mysqld_safe]
socket          = /var/run/mysqld/mysqld.sock
nice            = 0

[mysqld]
innodb_file_per_table
innodb_buffer_pool_size=10G
innodb_log_buffer_size=4M
innodb_flush_log_at_trx_commit=2
innodb_thread_concurrency=8
skip-slave-start
server-id=3
#
# * IMPORTANT
#   If you make changes to these settings and your system uses apparmor, you may
#   also need to also adjust /etc/apparmor.d/usr.sbin.mysqld.
#
user            = mysql
pid-file        = /var/run/mysqld/mysqld.pid
socket          = /var/run/mysqld/mysqld.sock
port            = 3306
basedir         = /usr
datadir         = /DB2/mysql
tmpdir          = /tmp
skip-external-locking
#
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
#bind-address           = 127.0.0.1
#
# * Fine Tuning
#
key_buffer              = 16M
max_allowed_packet      = 16M
thread_stack            = 128K
thread_cache_size       = 8
# This replaces the startup script and checks MyISAM tables if needed
# the first time they are touched
myisam-recover          = BACKUP
max_connections         = 600
#table_cache            = 64
#thread_concurrency     = 10
#
# * Query Cache Configuration
#
query_cache_limit       = 1M
query_cache_size        = 32M
#
skip-federated
slow-query-log
skip-name-resolve

Update: I followed the instructions at http://dev.mysql.com/doc/refman/5.1/en/forcing-innodb-recovery.html and set innodb_force_recovery = 4. The logs now show a different error, but the behavior is still the same:

101210 19:14:15 mysqld_safe mysqld restarted
101210 19:14:19  InnoDB: Started; log sequence number 456 143528628
InnoDB: !!! innodb_force_recovery is set to 4 !!!
101210 19:14:19 [Warning] 'user' entry 'root@PSDB102' ignored in --skip-name-resolve mode.
101210 19:14:19 [Warning] Neither --relay-log nor --relay-log-index were used; so
replication may break when this MySQL server acts as a slave and has his hostname
changed!! Please use '--relay-log=mysqld-relay-bin' to avoid this problem.
101210 19:14:19 [Note] Event Scheduler: Loaded 0 events
101210 19:14:19 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.1.31-1ubuntu2-log'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  (Ubuntu)
101210 19:14:32  InnoDB: error: space object of table mydb/__twitter_friend,
InnoDB: space id 1602 did not exist in memory. Retrying an open.
101210 19:14:32  InnoDB: error: space object of table mydb/access_request,
InnoDB: space id 1318 did not exist in memory. Retrying an open.
101210 19:14:32  InnoDB: error: space object of table mydb/activity,
InnoDB: space id 1595 did not exist in memory. Retrying an open.
101210 19:14:32 - mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help diagnose
the problem, but since we have already crashed, something is definitely wrong
and this may fail.

key_buffer_size=16777216
read_buffer_size=131072
max_used_connections=1
max_threads=600
threads_connected=1
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1328077 K
bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

thd: 0x1753c070
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x7f7a0b5800d0 thread_stack 0x20000
/usr/sbin/mysqld(my_print_stacktrace+0x29) [0x8b4f89]
/usr/sbin/mysqld(handle_segfault+0x383) [0x5f8f03]
/lib/libpthread.so.0 [0x7f7cbc350080]
/usr/sbin/mysqld(ha_innobase::innobase_get_index(unsigned int)+0x46) [0x77c516]
/usr/sbin/mysqld(ha_innobase::innobase_initialize_autoinc()+0x40) [0x77c640]
/usr/sbin/mysqld(ha_innobase::open(char const*, int, unsigned int)+0x3f3) [0x781f23]
/usr/sbin/mysqld(handler::ha_open(st_table*, char const*, int, int)+0x3f) [0x6db00f]
/usr/sbin/mysqld(open_table_from_share(THD*, st_table_share*, char const*, unsigned int, unsigned int, unsigned int, st_table*, bool)+0x57a) [0x64760a]
/usr/sbin/mysqld [0x63f281]
/usr/sbin/mysqld(open_table(THD*, TABLE_LIST*, st_mem_root*, bool*, unsigned int)+0x626) [0x641e16]
/usr/sbin/mysqld(open_tables(THD*, TABLE_LIST**, unsigned int*, unsigned int)+0x5db) [0x6429cb]
/usr/sbin/mysqld(open_normal_and_derived_tables(THD*, TABLE_LIST*, unsigned int)+0x1e) [0x642b0e]
/usr/sbin/mysqld(mysqld_list_fields(THD*, TABLE_LIST*, char const*)+0x22) [0x70b292]
/usr/sbin/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0x146d) [0x60dc1d]
/usr/sbin/mysqld(do_command(THD*)+0xe8) [0x60dda8]
/usr/sbin/mysqld(handle_one_connection+0x226) [0x601426]
/lib/libpthread.so.0 [0x7f7cbc3483ba]
/lib/libc.so.6(clone+0x6d) [0x7f7cbae91fcd]
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort...
thd->query at 0x1754d690 =
thd->thread_id=1
thd->killed=NOT_KILLED
The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
information that should help you find out what is causing the crash.
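    The assertion fails inside InnoDB's autoinc initialization as tables are opened, and the later "space object ... did not exist in memory" errors under innodb_force_recovery = 4 point to tablespace corruption rather than a bad config change. In that situation the usual escape route is a logical dump and reload rather than an in-place repair. A rough sketch of that route, assuming the server stays up in force-recovery mode long enough to be dumped (paths come from the my.cnf above; the backup directory name is illustrative):

    # 0. Keep innodb_force_recovery = 4 in /etc/my.cnf while salvaging

    # 1. Dump whatever is still readable; if a corrupt table crashes the dump,
    #    fall back to per-database or per-table dumps and skip the broken ones
    mysqldump --all-databases > /root/salvage.sql

    # 2. Stop mysqld and move the damaged InnoDB files aside
    /etc/init.d/mysql stop
    mkdir -p /root/innodb-damaged
    mv /DB2/mysql/ibdata1 /DB2/mysql/ib_logfile* /DB2/mysql/mydb/*.ibd /root/innodb-damaged/

    # 3. Remove innodb_force_recovery from my.cnf and start, letting InnoDB
    #    create a fresh system tablespace and log files
    /etc/init.d/mysql start

    # 4. Reload the dump
    mysql < /root/salvage.sql

    This mirrors the procedure on the forcing-recovery page already linked in the logs; the dump has to happen before the tablespace files are moved, since the data is unreadable without them.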

    Read the article

  • SQLAuthority News – TechEd India – April 12-14, 2010 Bangalore – An Unforgettable Experience – An Op

    - by pinaldave
TechEd India was one of the largest technology events in India led by Microsoft. It was attended by more than 3,000 technology enthusiasts, making it one of the most well-organized events of the year. Though I have attempted to attend almost all the technology events here, I have not seen any bigger or better event in the Indian subcontinent than this one. There were 21 technical tracks at Tech·Ed India 2010, spanning more than 745 learning opportunities. I was fortunate enough to be a part of this whole event, as both a speaker and a delegate.

TechEd India Speaker Badge and A Token of Lifetime

Hotel Selection

I presented three different sessions at TechEd India and was also part of a panel discussion. (The details of the sessions are given at the end of this blog post.) Due to extensive traveling, I occasionally stay away from my family, so I took my wife Nupur and my daughter Shaivi (8 months old) to the event with me. We stayed at the same hotel where the event was organized, so as to maximize my time bonding with my family while still having time for networking with the technology community. The hotel Lalit Ashok is the largest and most luxurious venue one can find in Bangalore, located in the middle of the city. The hotel was a bit pricey but, looking at all the advantages, I decided to book there.

Hotel Lalit Ashok
Nupur Dave and Shaivi Dave

Arrival Day – DAY 0 – April 11, 2010

I reached the event a day earlier, and that was a wise decision, for I was able to relax a bit and go over my presentation for the next day's course. I am the kind of person who likes to get everything ready ahead of time. I was also able to enjoy a pleasant evening with several Microsoft employees and family friends, and I even checked out the location where I would be presenting the next day. I was fortunate enough to meet Bijoy Singhal from Microsoft, who helped me out with a few logistics issues that occurred the day before; I was not aware that the very next day he was going to be "The Man" of the TechEd 2010 event. Vinod Kumar from Microsoft was very kind and talked to me about my upcoming session; he gave me suggestions so helpful that I was able to incorporate them into my presentation. Finally, I was able to meet Abhishek Kant from Microsoft; his valuable suggestions and unlimited passion have inspired many people like me to work with the Community. Pradipta from Microsoft was also around, extremely busy with logistics; even so, he found some spare time to chat with me and the other Community leaders. I also met Harish Ranganathan and Sachin Rathi, both from Microsoft, and it was fascinating to listen to the two of them talk about SharePoint. I have no words to express my overwhelmed spirit because of all these passionate young guys – Pradipta, Vinod, Bijoy, Harish, Sachin and Abhishek (of course!).

Map of TechEd India 2010 Event

Day 1 – April 12, 2010

From morning until night, this was truly a very busy day for me: two presentations, one panel discussion and, needless to say, a few meetings to attend as well. The day started with a keynote from S. Somasegar, where he announced the launch of Visual Studio 2010. The keynote area was really eye-catching because of the very large, bigger-than-life uniform screen; it was truly a sight to see. The title music of the keynote was very interesting, and it featured Bijoy Singhal as the model. It was fun to talk to him afterwards, when we laughed together at jokes about his modeling assignment.

TechEd India Keynote Opening Featuring Bijoy
TechEd India 2010 Keynote – S. Somasegar

Time: 11:15am – 11:45am
Session 1: True Lies of SQL Server – SQL Myth Buster

Following the excellent keynote, I had my very first session, on the subject of SQL Server myths. At first I was a bit nervous: this was my very first session, right after the keynote, and during my presentation I saw lots of Microsoft product team members in the audience. Well, it went really well and I had a very good discussion with the attendees. I felt that well begun was half done, and my confidence returned. Right after the session, I met a few of my Community friends and had meaningful discussions with them on many subjects.

The abstract of the session is as follows: In this 30-minute demo session, I am going to briefly demonstrate a few SQL Server myths and their resolutions, backing them up with demos. This presentation is a must-attend for all developers and administrators who come to the event. This is going to be a very quick yet fun session.

Pinal Presenting session at TechEd India 2010

Time: 1:00 PM – 2:00 PM
Lunch with Somasegar

After the session I went to see my daughter, and then headed right away to the lunch with S. Somasegar – the keynote speaker and senior vice president of the Developer Division at Microsoft. I really thank Abhishek, who made it possible for us; because of his efforts, all the MVPs had the opportunity to meet such a legendary person and talk with him about Microsoft technology. Though Somasegar holds such a high position at Microsoft, he is very polite and a real gentleman; how I wish everybody in the industry were like him. Believe me, if you spread love and kindness, that is what you will receive back. As soon as lunch was over, I ran to the session hall, as my second presentation was about to start.

Time: 2:30pm – 3:30pm
Session 2: Master Data Services in Microsoft SQL Server 2008 R2

Business Intelligence was a widely discussed subject at TechEd; everybody was interested in it, and I did not excuse myself from this great concept either. I consider myself fortunate to have been presenting on Master Data Services at TechEd. When I initially learned the subject, I was a bit confused about the usage of this tool. Later on, I decided to talk about how we developers and DBAs fail to understand something as simple as this and, even worse, create confusion about the technology. During system design, it is very important to have reference material or master lookup tables, and I made that the center of my talk. The session went very well and I received lots of interesting questions, along with many compliments for treating this subject with real-life scenarios. I really thank Rushabh Mehta (CEO, Solid Quality Mentors India) for his supportive suggestions that helped me prepare the slide deck, as well as the subject.

Pinal Presenting session at TechEd India 2010

The abstract of the session is as follows: SQL Server Master Data Services will ship with SQL Server 2008 R2 and will improve Microsoft's platform appeal. This session provides an in-depth demonstration of MDS features and highlights important usage scenarios. Master Data Services enables a consistent decision-making process by allowing you to create, manage and propagate changes from a single master view of your business entities. Also, MDS – the Master Data hub, which is a vital component – helps ensure the consistency of reporting across systems and delivers faster and more accurate results across the enterprise. We will talk about establishing the basis for a centralized approach to defining, deploying, and managing master data in the enterprise.

Pinal Presenting session at TechEd India 2010

The day was still not over for me. I ran into several friends, and we could not keep our enthusiasm under control amid all the rumors that SQL Server 2008 R2 was about to be launched in tomorrow's keynote. I then ran to my third and final technical event of the day – a panel discussion with some of the top technologists of India.

Time: 5:00pm – 6:00pm
Panel Discussion: Harness the power of Web – SEO and Technical Blogging

Having delivered two technical sessions by this time, I was a bit tired, but no less enthusiastic about talking blogs and technology. We discussed many different topics. I told the audience that the most important aspect of any blog is its content. We discussed in depth the issue of plagiarism and how to avoid it. Another topic was how we technology bloggers can create awareness in the Community of what the right kind of blogging is and which acts are morally and technically wrong. A couple of questions were raised about what kind of liberty a person has in writing blogs. It was generally agreed that a blog is mainly a representation of our ideas and thoughts and should not be governed by external entities; as long as one writes what one really wants to say, without providing incorrect information or practicing plagiarism, a blogger should be free to express himself. The panel discussion was supposed to be over in an hour, but the interest of the participants was so remarkable that it was extended by 30 minutes. Finally, we brought the discussion to a close and agreed to continue the topic next year.

TechEd India Panel Discussion on Web, Technology and SEO

Surprisingly, the day was only beginning after all of this. By this time, I had met almost all the MVPs who had arrived at the event, as well as many Microsoft employees, and there were lots of Community folks present, too. I went to meet several friends from the Community who communicate with me on SQLAuthority.com. I also met Abhishek Baxi and had a good talk with him about Windows Mobile and Twitter. He took a very quick video of me in which I spoke in my mother tongue, Gujarati. Funnily, although I had talked in Gujarati almost all day, during the interview I could not find the right Gujarati words; I think we all think in English when we think about technology, so as to address universality. After meeting them, I headed to the Speakers' Dinner.

Time: 8:00 PM – onwards
Speakers Dinner

The Speakers' Dinner was a wonderful opportunity for all the speakers to get together and relax. We talked about so many different things, from XBOX to Hindi movies, and from SQL to samosas; I cannot express how much fun I had. After a long evening, when I returned to my room and met Shaivi, I instantly felt relaxed. Kids are really gifts from God. It had been a really long but exciting day. So many things happened in just one day: the Visual Studio launch, lunch with Somasegar, 2 technical sessions, 1 panel discussion, a community leaders meeting, the speakers' dinner and, last but not least, playing with my child! A perfect day!

Day 2 – April 13, 2010

The day started with a bang, with an excellent keynote by Kamal Hathi, who launched SQL Server 2008 R2 in India and demonstrated the power of PowerPivot to all of us. 101 million rows in Excel drew lots of applause from the audience.

Kamal Hathi Presenting Keynote at TechEd India 2010

The day was a bit easier for me: no sessions, no planned events, just a few meetings for the second day of the event. I sat in the speakers' lounge for half the day and met many people there. I attended nearly 9 different meetings, on very different subjects. Here is a list of the topics of the Community-related meetings: SQL PASS and its involvement in India and the subcontinent; how to start community blogging; forums and developing an aptitude for technology; Ahmedabad/Gandhinagar User Groups and their development; SharePoint and SQL; a client business meeting; a business meeting about a potential performance tuning project; a business meeting with Solid Quality Mentors (SolidQ); and family friends.

Pinal Dave at TechEd India

The day passed by very quickly with these meetings. In the evening, I headed to the Partners Expo with friends and checked out a few of the booths. I really wanted to talk about some of the products, but because of the freebies the crowds were so large that I decided to just take the partners' contact details. I will now start sending them my queries and, hopefully, will have my questions answered. Nupur and Shaivi also had a meeting to attend, with our family friend Vijay Raj. Vijay is a person who loves technology more than anybody, and I see him growing and learning every day while still remaining 'human'. I believe that anyone else acquiring that much knowledge would turn into a computer or a cyborg; Vijay, however, remains a kind gentleman and a close family friend. Shaivi was really happy to play with Uncle Vijay.

Pinal Dave and Vijay Raj

Renuka Prasad, a Microsoft MVP, impressed me with his passion for and knowledge of SQL. He always gives me credit for his success, which only shows how humble he is: he has far more certifications than I do and has worked with SQL for many more years. He is an excellent photographer as well; most of the photos in this blog post were taken by him. I told him that if he ever wants a part-time job, he could do photography very well.

Pinal Dave and Renuka Prasad

I also met L Srividya from Microsoft, whom I had been looking forward to meeting. She is such a bundle of knowledge that anyone would surely learn a lot from her. I managed to get a few minutes with her, and she enlightened me on SQL Server BI concepts, domain management, SQL Server security and a few other interesting details. I also had a wonderful time talking about SharePoint with fellow Solid Quality Mentor Joy Rathnayake. He is very passionate about SharePoint, but when you talk .NET and SQL with him, he is still overwhelmingly knowledgeable. In fact, while talking to him, I found out that the most recent training he delivered was on SQL Server 2008 R2. I joked with him that it hurts my ego that he is now more popular in SQL training and consulting than I am. I am sure you will all agree that working with good people is a gift from God, and I am fortunate enough to work with the best of the best industry experts. It was a great pleasure to hang out with my Community friends – Ahswin Kini, HimaBindu Vejella, Vasudev G, Suprotim Agrawal, Dhananjay, Vikram Pendse, Mahesh Dhola, Mahesh Mitkari, Manu Zacharia, Shobhan, Hardik Shah, Ashish Mohta, Manan, Subodh Sohani and Sanjay Shetty (of course!). (Please let me know if I met you at the event and forgot to list your name here.)

Time: 8:00 PM – onwards
Community Leaders Dinner

After all those meetings, I headed to the Community Leaders dinner and met almost all the folks I had seen in the morning. The discussion was much the same, but the really good thing was that we were enjoying it, and the food was really good. Nupur was invited to the event, but Shaivi was not. When Nupur tried to enter, she was stopped because Shaivi did not have a pass for the dinner. Nupur explained that Shaivi is only 8 months old, does not eat outside food, and cannot stay by herself at this age, but the doorkeeper insisted that without entry details Shaivi could not go in, though Nupur could. Nupur called me to help her out, and by the time I got outside, the organizer of the event had already reached the door and happily approved Shaivi to join the party. Once at the party, Shaivi had lots of fun meeting so many people.

Shaivi Dave and Abhishek Kant
Dean Guida (Infragistics President and CEO) and Pinal Dave (SQLAuthority.com)

Day 3 – April 14, 2010

Though it was the last day, I was very excited, as I was about to present my favorite session. Query Optimization and Performance Tuning is my domain expertise, and I make my living consulting and training on it. Today's session was on that subject and, as an additional twist, the subject of spatial databases was added. I have always been intrigued by spatial databases and have enjoyed learning about them; however, I had never thought about spatial indexing before it was decided that I would do this session. I really thank Solid Quality Mentor Dr. Greg Low for his assistance in preparing the slide deck and reviewing the content. Furthermore, today was really what I call my 'learning day'. So far I had not attended any session at TechEd, and I felt a bit down about that: everybody spends their valuable time and money to learn something new and exciting at TechEd, and I had not attended a single session, with the last day of the event already here. So I made a plan for the day and attended two technical sessions before my own spatial database session, both delivered by Vinod Kumar. Vinod is a natural storyteller, and there was no doubt that his sessions would be jam-packed; people attend his sessions simply because Vinod is the speaker. He has never disappointed an audience; he is truly a good speaker and knows his stuff very well. I personally do not think anyone in India compares to him on SQL.

Time: 12:30pm-1:30pm
SQL Server Query Optimization, Execution and Debugging Query Performance

I really had a fun time attending this session. Vinod made it very interactive; the entire audience got into the presentation and participated in the event. Vinod presented a small Query Tuning problem that any developer might have encountered, and solved it with the audience's help in such a way that each developer felt he or she had resolved it personally. At one question, I was the only one ready to answer, and Vinod told me in a light tone that I was not allowed to answer it! The audience found it very amusing. There was a huge crowd around Vinod after the session.

Vinod – A master storyteller!

Time: 3:45pm-4:45pm
Data Recovery / consistency with CheckDB

This session was much heavier than the earlier one, and I must say it is the favorite session I have EVER attended in India. At this TechEd I attended only two sessions, but in my career I have attended numerous technical sessions, not only in India but all over the world, and this one took my breath away. One by one, Vinod took different databases and corrupted them in different ways; each database had its own unique way of getting corrupted. Once that was done, Vinod brought out DBCC CHECKDB and demonstrated how it can solve the problem, finally fixing all the databases with this single tool. I have good knowledge of this subject, but let me honestly admit that I learned a lot from this session. I enjoyed and cheered along with the other attendees, and I had the total satisfaction that, just like everyone else, I took advantage of the event and learned something. I am now TECHnically EDucated.

Pinal Dave and Vinod Kumar

After two very interactive and informative SQL sessions from Vinod Kumar, it was my turn to present on spatial databases and indexing. I got nervous once again, but Vinod told me to stay natural and do my presentation. Well, once I got a huge stage with a total of four projectors and a large crowd, I felt better.

Time: 5:00pm-6:00pm
Session 3: Developing with SQL Server Spatial and Deep Dive into Spatial Indexing

Pinal Presenting session at TechEd India 2010

I kicked off this session with Michael J Swart's beautiful spatial image. This session was the last one of the day but, to my surprise, I had more than 200 attendees. Rain was slowly starting outside, and I was worried that the hall would not fill; despite this, there was not a single seat available within the first five minutes of the session. Thanks to all of you for attending my presentation. I demonstrated a map of the world (and of India) and quickly explained what the Geography and Geometry data types in a spatial database are. The session included an interesting story about indexing and a comparison of how different traditional indexes are from spatial indexes.

Pinal Presenting session at TechEd India 2010

Due to the heavy rain during the event, the power went off for about 22 minutes (just an accident – nobody's fault). During those minutes there was no audio, no video and no light, yet I continued to address the mass of 200+ people without any audio device or PowerPoint. I must thank the audience, because not a single person left the session; they all stayed in their places, and some moved closer to hear me properly. The curiosity and eagerness to learn new things was at its peak even though this was the very last session of TechEd; everybody wanted to get the maximum knowledge out of the event. I was touched by the support from the audience, who listened and participated in my session without any kind of technology (no ppt, no mike, no AC, nothing). During those 22 minutes, I completed my theory verbally.

Pinal Presenting session at TechEd India 2010

After a while, we got the projector back online and continued with some exciting demos. Many thanks to the Microsoft people who worked energetically in the background to get backup power for the projector. I had a very interesting demo wherein I overlaid Bangalore and Hyderabad on the India map and found the aerial distance between them. After finding the aerial distance, we browsed online and saw that SQL Server's estimate matches the factual distance between the two cities very closely. There was huge applause from the crowd when they realized that SQL Server takes the curvature of the earth into account and computes precise distances from it (a short sketch of this kind of query follows at the end of this post). While finding the distance, I demonstrated a few examples of the indexes that can be used to speed up such queries, and a few examples showing for which data types each kind of index is most useful. We finished the demos with a bit more on the internals.

Pinal Presenting session at TechEd India 2010

Despite all the issues, I was mostly satisfied with my presentation; I think it was the best session I have ever presented at any conference. There was no help from technology for a while, but I still got lots of appreciation at the end. When we ended the session, the applause from the audience was so loud that, for a moment, the rain was not audible. I was truly moved by the dedication of the technology enthusiasts.

Pinal Dave After Presenting session at TechEd India 2010

The abstract of the session is as follows: Microsoft SQL Server 2008 delivers new spatial data types that enable you to consume, use, and extend location-based data through spatial-enabled applications. Attend this session to learn how to use the spatial functionality in the next version of SQL Server to build and optimize spatial queries. This session outlines the new geography data type to store geodetic spatial data and perform operations on it, the new geometry data type to store planar spatial data and perform operations on it, how to take advantage of the new spatial indexes for high-performance queries, how to use the new spatial results tab to quickly and easily view spatial query results directly from within Management Studio, and how to extend spatial data capabilities by building or integrating location-enabled applications through support for spatial standards and specifications, and much more.

Time: 8:00 PM – onwards
Dinner by Sponsors

After the lively sessions of the day, there was another dinner party, courtesy of one of the sponsors of TechEd. All the MVPs and several Community leaders were present, and I would like to express my gratitude to Abhishek Kant for organizing this wonderful event for us. It was a blast and really relaxing from all angles. We stayed for a long time and talked about our sweet and unforgettable memories of the event.

Pinal Dave and Bijoy Singhal

It was really one wonderful event. After writing this much, I still have no words to express how much I enjoyed TechEd; in truth, I have shared with you only 1% of everything I did at the event. There were so many people I met who are not mentioned here, although I wanted to write their names, too. Anyway, I learned so many things, and even now I am not able to get over all the fun I had at this event.

Pinal Dave at TechEd India 2010

The Next Days – April 15, 2010 – till today

I am still not able to get my mind off the whole experience I had at TechEd India 2010. It was like a whole Microsoft family working together to celebrate a happy occasion. TechEd India – Truly An Unforgettable Experience!

Reference: Pinal Dave (http://blog.SQLAuthority.com)

Filed under: About Me, MVP, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, SQLServer, T SQL, Technology
Tagged: TechEd, TechEdIn
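    For readers curious about the aerial-distance demo from the Day 3 session above, here is a minimal sketch of such a query. The geography type and STDistance() are standard in SQL Server 2008; the coordinates below are approximate values chosen for illustration, not the ones used in the demo:

    -- SRID 4326 = WGS 84; geography::Point takes (latitude, longitude, SRID)
    DECLARE @bangalore geography = geography::Point(12.9716, 77.5946, 4326);
    DECLARE @hyderabad geography = geography::Point(17.3850, 78.4867, 4326);

    -- STDistance() measures along the earth's curved surface and returns meters,
    -- which is why the result tracks the factual aerial distance so closely
    SELECT @bangalore.STDistance(@hyderabad) / 1000.0 AS aerial_distance_km;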

    Read the article

  • SQLAuthority News – TechEd India – April 12-14, 2010 Bangalore – An Unforgettable Experience – An Op

    - by pinaldave
    TechEd India was one of the largest Technology events in India led by Microsoft. This event was attended by more than 3,000 technology enthusiasts, making it one of the most well-organized events of the year. Though I attempted to attend almost all the technology events here, I have not seen any bigger or better event in Indian subcontinents other than this. There are 21 Technical Tracks at Tech·Ed India 2010 that span more than 745 learning opportunities. I was fortunate enough to be a part of this whole event as a speaker and a delegate, as well. TechEd India Speaker Badge and A Token of Lifetime Hotel Selection I presented three different sessions at TechEd India and was also a part of panel discussion. (The details of the sessions are given at the end of this blog post.) Due to extensive traveling, I stay away from my family occasionally. For this reason, I took my wife – Nupur and daughter Shaivi (8 months old) to the event along with me. We stayed at the same hotel where the event was organized so as to maximize my time bonding with my family and to have more time in networking with technology community, at the same time. The hotel Lalit Ashok is the largest and most luxurious venue one can find in Bangalore, located in the middle of the city. The cost of the hotel was a bit pricey, but looking at all the advantages, I had decided to ask for a booking there. Hotel Lalit Ashok Nupur Dave and Shaivi Dave Arrival Day – DAY 0 – April 11, 2010 I reached the event a day earlier, and that was one wise decision for I was able to relax a bit and go over my presentation for the next day’s course. I am a kind of person who likes to get everything ready ahead of time. I was also able to enjoy a pleasant evening with several Microsoft employees and my family friends. I even checked out the location where I would be doing presentations the next day. I was fortunate enough to meet Bijoy Singhal from Microsoft who helped me out with a few of the logistics issues that occured the day before. I was not aware of the fact that the very next day he was going to be “The Man” of the TechEd 2010 event. Vinod Kumar from Microsoft was really very kind as he talked to me regarding my subsequent session. He gave me some suggestions which were really helpful that I was able to incorporate them during my presentation. Finally, I was able to meet Abhishek Kant from Microsoft; his valuable suggestions and unlimited passion have inspired many people like me to work with the Community. Pradipta from Microsoft was also around, being extremely busy with logistics; however, in those busy times, he did find some good spare time to have a chat with me and the other Community leaders. I also met Harish Ranganathan and Sachin Rathi, both from Microsoft. It was so interesting to listen to both of them talking about SharePoint. I just have no words to express my overwhelmed spirit because of all these passionate young guys - Pradipta,Vinod, Bijoy, Harish, Sachin and Ahishek (of course!). Map of TechEd India 2010 Event Day 1 – April 12, 2010 From morning until night time, today was truly a very busy day for me. I had two presentations and one panel discussion for the day. Needless to say, I had a few meetings to attend as well. The day started with a keynote from S. Somaseger where he announced the launch of Visual Studio 2010. The keynote area was really eye-catching because of the very large, bigger-than- life uniform screen. This was truly one to show. 
The title music of the keynote was very interesting and it featured Bijoy Singhal as the model. It was interesting to talk to him afterwards, when we laughed at jokes together about his modeling assignment. TechEd India Keynote Opening Featuring Bijoy TechEd India 2010 Keynote – S. Somasegar Time: 11:15pm – 11:45pm Session 1: True Lies of SQL Server – SQL Myth Buster Following the excellent keynote, I had my very first session on the subject of SQL Server Myth Buster. At first, I was a bit nervous as right after the keynote, for this was my very first session and during my presentation I saw lots of Microsoft Product Team members. Well, it really went well and I had a really good discussion with attendees of the session. I felt that a well begin was half-done and my confidence was regained. Right after the session, I met a few of my Community friends and had meaningful discussions with them on many subjects. The abstract of the session is as follows: In this 30-minute demo session, I am going to briefly demonstrate few SQL Server Myths and their resolutions as I back them up with some demo. This demo presentation is a must-attend for all developers and administrators who would come to the event. This is going to be a very quick yet fun session. Pinal Presenting session at TechEd India 2010 Time: 1:00 PM – 2:00 PM Lunch with Somasegar After the session I went to see my daughter, and then I headed right away to the lunch with S. Somasegar – the keynote speaker and senior vice president of the Developer Division at Microsoft. I really thank to Abhishek who made it possible for us. Because of his efforts, all the MVPs had the opportunity to meet such a legendary person and had to talk with them on Microsoft Technology. Though Somasegar is currently holding such a high position in Microsoft, he is very polite and a real gentleman, and how I wish that everybody in industry is like him. Believe me, if you spread love and kindness, then that is what you will receive back. As soon as lunch time was over, I ran to the session hall as my second presentation was about to start. Time: 2:30pm – 3:30pm Session 2: Master Data Services in Microsoft SQL Server 2008 R2 Business Intelligence is a subject which was widely talked about at TechEd. Everybody was interested in this subject, and I did not excuse myself from this great concept as well. I consider myself fortunate as I was presenting on the subject of Master Data Services at TechEd. When I had initially learned this subject, I had a bit of confusion about the usage of this tool. Later on, I decided that I would tackle about how we all developers and DBAs are not able to understand something so simple such as this, and even worst, creating confusion about the technology. During system designing, it is very important to have a reference material or master lookup tables. Well, I talked about the same subject and presented the session keeping that as my center talk. The session went very well and I received lots of interesting questions. I got many compliments for talking about this subject on the real-life scenario. I really thank Rushabh Mehta (CEO, Solid Quality Mentors India) for his supportive suggestions that helped me prepare the slide deck, as well as the subject. Pinal Presenting session at TechEd India 2010 The abstract of the session is as follows: SQL Server Master Data Services will ship with SQL Server 2008 R2 and will improve Microsoft’s platform appeal. 
This session provides an in-depth demonstration of MDS features and highlights important usage scenarios. Master Data Services enables consistent decision-making process by allowing you to create, manage and propagate changes from a single master view of your business entities. Also, MDS – Master Data-hub which is a vital component, helps ensure the consistency of reporting across systems and deliver faster and more accurate results across the enterprise. We will talk about establishing the basis for a centralized approach to defining, deploying, and managing master data in the enterprise. Pinal Presenting session at TechEd India 2010 The day was still not over for me. I had ran into several friends but we were not able keep our enthusiasm under control about all the rumors saying that SQL Server 2008 R2 was about to be launched tomorrow in the keynote. I then ran to my third and final technical event for the day- a panel discussion with the top technologies of India. Time: 5:00pm – 6:00pm Panel Discussion: Harness the power of Web – SEO and Technical Blogging As I have delivered two technical sessions by this time, I was a bit tired but  not less enthusiastic when I had to talk about Blog and Technology. We discussed many different topics there. I told them that the most important aspect for any blog is its content. We discussed in depth the issues with plagiarism and how to avoid it. Another topic of discussion was how we technology bloggers can create awareness in the Community about what the right kind of blogging is and what morally and technically wrong acts are. A couple of questions were raised about what type of liberty a person can have in terms of writing blogs. Well, it was generically agreed that a blog is mainly a representation of our ideas and thoughts; it should not be governed by external entities. As long as one is writing what they really want to say, but not providing incorrect information or not practicing plagiarism, a blogger should be allowed to express himself. This panel discussion was supposed to be over in an hour, but the interest of the participants was remarkable and so it was extended for 30 minutes more. Finally, we decided to bring to a close the discussion and agreed that we will continue the topic next year. TechEd India Panel Discussion on Web, Technology and SEO Surprisingly, the day was just beginning after doing all of these. By this time, I have almost met all the MVP who arrived at the event, as well as many Microsoft employees. There were lots of Community folks present, too. I decided that I would go to meet several friends from the Community and continue to communicate with me on SQLAuthority.com. I also met Abhishek Baxi and had a good talk with him regarding Win Mobile and Twitter. He also took a very quick video of me wherein I spoke in my mother’s tongue, Gujarati. It was funny that I talked in Gujarati almost all the day, but when I was talking in the interview I could not find the right Gujarati words to speak. I think we all think in English when we think about Technology, so as to address universality. After meeting them, I headed towards the Speakers’ Dinner. Time: 8:00 PM – onwards Speakers Dinner The Speakers’ dinner was indeed a wonderful opportunity for all the speakers to get together and relax. We talked so many different things, from XBOX to Hindi Movies, and from SQL to Samosas. I just could not express how much fun I had. After a long evening, when I returned tmy room and met Shaivi, I just felt instantly relaxed. 
Kids are really gifts from God. Today was a really long but exciting day. So many things happened in just one day: Visual Studio Lanch, lunch with Somasegar, 2 technical sessions, 1 panel discussion, community leaders meeting, speakers dinner and, last but not leas,t playing with my child! A perfect day! Day 2 – April 13, 2010 Today started with a bang with the excellent keynote by Kamal Hathi who launched SQL Server 2008 R2 in India and demonstrated the power of PowerPivot to all of us. 101 Million Rows in Excel brought lots of applause from the audience. Kamal Hathi Presenting Keynote at TechEd India 2010 The day was a bit easier one for me. I had no sessions today and no events planned. I had a few meetings planned for the second day of the event. I sat in the speaker’s lounge for half a day and met many people there. I attended nearly 9 different meetings today. The subjects of the meetings were very different. Here is a list of the topics of the Community-related meetings: SQL PASS and its involvement in India and subcontinents How to start community blogging Forums and developing aptitude towards technology Ahmedabad/Gandhinagar User Groups and their developments SharePoint and SQL Business Meeting – a client meeting Business Meeting – a potential performance tuning project Business Meeting – Solid Quality Mentors (SolidQ) And family friends Pinal Dave at TechEd India The day passed by so quickly during this meeting. In the evening, I headed to Partners Expo with friends and checked out few of the booths. I really wanted to talk about some of the products, but due to the freebies there was so much crowd that I finally decided to just take the contact details of the partner. I will now start sending them with my queries and, hopefully, I will have my questions answered. Nupur and Shaivi had also one meeting to attend; it was with our family friend Vijay Raj. Vijay is also a person who loves Technology and loves it more than anybody. I see him growing and learning every day, but still remaining as a ‘human’. I believe that if someone acquires as much knowledge as him, that person will become either a computer or cyborg. Here, Vijay is still a kind gentleman and is able to stay as our close family friend. Shaivi was really happy to play with Uncle Vijay. Pinal Dave and Vijay Raj Renuka Prasad, a Microsoft MVP, impressed me with his passion and knowledge of SQL. Every time he gives me credit for his success, I believe that he is very humble. He has way more certifications than me and has worked many more years with SQL compared to me. He is an excellent photographer as well. Most of the photos in this blog post have been taken by him. I told him if ever he wants to do a part time job, he can do the photography very well. Pinal Dave and Renuka Prasad I also met L Srividya from Microsoft, whom I was looking forward to meet. She is a bundle of knowledge that everyone would surely learn a lot from her. I was able to get a few minutes from her and well, I felt confident. She enlightened me with SQL Server BI concepts, domain management and SQL Server security and few other interesting details. I also had a wonderful time talking about SharePoint with fellow Solid Quality Mentor Joy Rathnayake. He is very passionate about SharePoint but when you talk .NET and SQL with him, he is still overwhelmingly knowledgeable. In fact, while talking to him, I figured out that the recent training he delivered was on SQL Server 2008 R2. 
I jokingly told him that it hurts my ego that he is now more popular in SQL training and consulting than me. I am sure all of you agree that working with good people is a gift from God. I am fortunate enough to work with the best of the best industry experts. It was a great pleasure to hang out with my Community friends – Ahswin Kini, HimaBindu Vejella, Vasudev G, Suprotim Agrawal, Dhananjay, Vikram Pendse, Mahesh Dhola, Mahesh Mitkari, Manu Zacharia, Shobhan, Hardik Shah, Ashish Mohta, Manan, Subodh Sohani and Sanjay Shetty (of course!). (Please let me know if I met you at the event and forgot to list your name here.)

Time: 8:00 PM – onwards Community Leaders Dinner

After lots of meetings, I headed towards the Community Leaders dinner meeting and met almost all the folks I had met in the morning. The discussion was almost the same, but the really good thing was that we were enjoying it. The food was really good. Nupur was invited to the event, but Shaivi could not come. When Nupur tried to enter the event, she was stopped because Shaivi did not have a pass for the dinner. Nupur explained that Shaivi is only 8 months old, does not eat outside food and cannot stay by herself at this age, but the doorkeeper did not agree and insisted that without the entry details Shaivi could not go in, though Nupur could. Nupur called me on the phone and asked me to help her out. By the time I got outside, the organizer of the event had reached the door and happily approved Shaivi to join the party. Once at the party, Shaivi had lots of fun meeting so many people.

Shaivi Dave and Abhishek Kant

Dean Guida (Infragistics President and CEO) and Pinal Dave (SQLAuthority.com)

Day 3 – April 14, 2010

Though it was the last day, I was very excited as I was about to present my favorite session. Query Optimization and Performance Tuning is my domain expertise, and I make my living by consulting and training on the same. Today's session was on the same subject and, as an additional twist, another subject, Spatial Database, was presented. I was always intrigued by Spatial Database and I have enjoyed learning about it; however, I had never thought about Spatial Indexing before it was decided that I would do this session. I really thank Solid Quality Mentor Dr. Greg Low for his assistance in helping me prepare the slide deck and also review the content. Furthermore, today was really what I call my 'learning day'. So far I had not attended any session at TechEd and I felt a bit down about that. Everybody spends their valuable time and money to learn something new and exciting at TechEd, and I had not attended a single session, it being already the last day of the event. I did have a plan for the day, though, and I attended two technical sessions before my own session on spatial databases. I attended 2 sessions by Vinod Kumar. Vinod is a natural storyteller and there was no doubt that his sessions would be jam-packed. People attend his sessions simply because Vinod is the speaker. He has never once disappointed an audience; he is truly a good speaker. He knows his stuff very well. I personally do not think that, in India, he can be compared to anyone for SQL.

Time: 12:30pm-1:30pm SQL Server Query Optimization, Execution and Debugging Query Performance

I really had a fun time attending this session. Vinod made this session very interactive. The entire audience really got into the presentation and started participating in the event.
Vinod presented a small problem with query tuning, one that any developer would have encountered, and solved it with the audience's help in such a fashion that each developer felt he or she had already resolved it. At one question, I was the only one who was ready to answer, and Vinod told me in a light tone that I was not allowed to answer it! The audience really found it very amusing. There was a huge crowd around Vinod after the session.

Vinod – A master storyteller!

Time: 3:45pm-4:45pm Data Recovery / consistency with CheckDB

This session was much heavier than the earlier one, and I must say it is the most favorite session I have EVER attended in India. At this TechEd I attended only two sessions, but in my career I have attended numerous technical sessions, not only in India but all over the world. This session took my breath away. One by one, Vinod took different databases and started to corrupt them in different ways. Each database had some unique way of getting corrupted. Once that was done, Vinod brought out DBCC CHECKDB and demonstrated how it can solve your problems. He finally fixed all the databases with this single tool. I do have good knowledge of this subject, but let me honestly admit that I learned a lot from this session. I enjoyed and cheered during this session along with the other attendees. I had the total satisfaction that, just like everyone else, I took advantage of the event and learned something. I am now TECHnically EDucated.

Pinal Dave and Vinod Kumar

After two very interactive and informative SQL sessions from Vinod Kumar, the next turn was mine, presenting on Spatial Database and Indexing. I once again got nervous, but Vinod told me to stay natural and do my presentation. Well, once I got a huge stage with a total of four projectors and a large crowd, I felt better.

Time: 5:00pm-6:00pm Session 3: Developing with SQL Server Spatial and Deep Dive into Spatial Indexing

Pinal Presenting session at TechEd India 2010

Pinal Presenting session at TechEd India 2010

I kicked off this session with Michael J Swart's beautiful spatial image. This session was the last one of the day but, to my surprise, I had more than 200 attendees. Slowly, the rain was starting outside and I was worried that the hall would not be full; despite this, there was not a single seat available within the first five minutes of the session. Thanks to all of you for attending my presentation. I demonstrated the map of the world (and of India) and quickly explained what the Geography and Geometry data types in a spatial database are. This session included an interesting story about indexing and a comparison of how different traditional indexes are from spatial indexes.

Pinal Presenting session at TechEd India 2010

Due to the heavy rain during this event, the power went off for about 22 minutes (just an accident – nobody's fault). During these minutes, there was no audio, no video and no light. I continued to address the mass of 200+ people without any audio device or PowerPoint. I must thank the audience because not a single person left the session. They all stayed in their places, and some moved closer to listen to me properly. I noticed that the curiosity and eagerness to learn new things was at its peak even though it was the very last session of TechEd. Everybody wanted to get the maximum knowledge out of this whole event. I was touched by the support from the audience. They listened to and participated in my session even without any kind of technology (no ppt, no mike, no AC, nothing).
During these 22 minutes, I completed my theory verbally.

Pinal Presenting session at TechEd India 2010

After a while, we got the projector back online and we continued with some exciting demos. Many thanks to the Microsoft people who worked energetically in the background to get the backup power for the projector up. I had a very interesting demo wherein I overlaid Bangalore and Hyderabad on the India map and found the aerial distance between them. After finding the aerial distance, we browsed online and found that SQL Server estimates the aerial distance between these two cities very precisely when compared to the factual distance. There was huge applause from the crowd at the fact that SQL Server takes into account the curvature of the earth and finds precise distances based on it. While finding the distance, I demonstrated a few examples of the indexes, where I showed how one can use those indexes to find these distances and how they can improve the performance of similar queries. I also demonstrated a few examples wherein we could see for which data type the index is most useful. We finished the demos with a little more on the internals.

Pinal Presenting session at TechEd India 2010

Despite all the issues, I was mostly satisfied with my presentation. I think it was the best session I have ever presented at any conference. There was no help from technology for a while, but I still got lots of appreciation at the end. When we ended the session, the applause from the audience was so loud that, for a moment, the rain was not audible. I was truly moved by the dedication of the technology enthusiasts.

Pinal Dave After Presenting session at TechEd India 2010

The abstract of the session is as follows: Microsoft SQL Server 2008 delivers new spatial data types that enable you to consume, use, and extend location-based data through spatial-enabled applications. Attend this session to learn how to use the spatial functionality in the next version of SQL Server to build and optimize spatial queries. This session outlines how to use the new geography data type to store geodetic spatial data and perform operations on it, use the new geometry data type to store planar spatial data and perform operations on it, take advantage of new spatial indexes for high performance queries, use the new spatial results tab to quickly and easily view spatial query results directly from within Management Studio, extend spatial data capabilities by building or integrating location-enabled applications through support for spatial standards and specifications, and much more.

Time: 8:00 PM – onwards Dinner by Sponsors

After the lively sessions during the day, there was another dinner party, courtesy of one of the sponsors of TechEd. All the MVPs and several Community leaders were present at the dinner. I would like to express my gratitude to Abhishek Kant for organizing this wonderful event for us. It was a blast and really relaxing in every respect. We all stayed there for a long time and talked about our sweet and unforgettable memories of the event.

Pinal Dave and Bijoy Singhal

It was really one wonderful event. After writing this much, I must say that I have no words to express how much I enjoyed TechEd. However, it is true that I have shared with you only 1% of the total activities I did at the event. There were so many people I met who are not mentioned here, although I wanted to write their names here, too.
Anyway, I have learned so many things and up until now, I am not able to get over all the fun I had in this event. Pinal Dave at TechEd India 2010 The Next Days – April 15, 2010 – till today I am still not able to get my mind out of the whole experience I had at TechEd India 2010. It was like a whole Microsoft Family working together to celebrate a happy occasion. TechEd India – Truly An Unforgettable Experience! Reference : Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, MVP, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, SQLServer, T SQL, Technology Tagged: TechEd, TechEdIn


  • .htaccess not working (mod_rewrite)

    - by Mike Curry
Edit: I am pretty sure my .htaccess file is NOT being executed, and the problem is NOT with my rewrite rules. I have not had any luck getting my .htaccess with mod_rewrite working. Basically all I am trying to do is remove 'www' from "http://www.site.com" and "https://www.site.com". If there is anything I am missing (conf files, etc.), let me know and I will update this. I just can't see what's wrong here... I am using a 1&1 VPS III Virtual private server... anyone ever have this issue? I am using Ubuntu 8.04 Server LTS. Here is my .htaccess file (located @ /var/www/site/trunk/html/):

Options +FollowSymLinks
RewriteEngine on
RewriteCond %{HTTP_HOST} ^www\.(.*) [NC]
RewriteRule (.*) //%1/$1 [L,R=301]

My mod_rewrite is enabled: the auto-regenerated symlink is there in mods-available and /usr/lib/apache2/modules/ contains mod_rewrite.so root@s15348441:/etc/apache2/mods-available# more rewrite.load LoadModule rewrite_module /usr/lib/apache2/modules/mod_rewrite.so root@s15348441:/var/log# apache2ctl -t -D DUMP_MODULES Loaded Modules: core_module (static) log_config_module (static) logio_module (static) mpm_prefork_module (static) http_module (static) so_module (static) alias_module (shared) auth_basic_module (shared) authn_file_module (shared) authz_default_module (shared) authz_groupfile_module (shared) authz_host_module (shared) authz_user_module (shared) autoindex_module (shared) cgi_module (shared) dir_module (shared) env_module (shared) mime_module (shared) negotiation_module (shared) php5_module (shared) rewrite_module (shared) setenvif_module (shared) ssl_module (shared) status_module (shared) Syntax OK My apache config files: apache2.conf # # Based upon the NCSA server configuration files originally by Rob McCool. # # This is the main Apache server configuration file. It contains the # configuration directives that give the server its instructions. # See http://httpd.apache.org/docs/2.2/ for detailed information about # the directives. # # Do NOT simply read the instructions in here without understanding # what they do. They're here only as hints or reminders. If you are unsure # consult the online docs. You have been warned. # # The configuration directives are grouped into three basic sections: # 1. Directives that control the operation of the Apache server process as a # whole (the 'global environment'). # 2. Directives that define the parameters of the 'main' or 'default' server, # which responds to requests that aren't handled by a virtual host. # These directives also provide default values for the settings # of all virtual hosts. # 3. Settings for virtual hosts, which allow Web requests to be sent to # different IP addresses or hostnames and have them handled by the # same Apache server process. # # Configuration and logfile names: If the filenames you specify for many # of the server's control files begin with "/" (or "drive:/" for Win32), the # server will use that explicit path. If the filenames do *not* begin # with "/", the value of ServerRoot is prepended -- so "/var/log/apache2/foo.log" # with ServerRoot set to "" will be interpreted by the # server as "//var/log/apache2/foo.log". # ### Section 1: Global Environment # # The directives in this section affect the overall operation of Apache, # such as the number of concurrent requests it can handle or where it # can find its configuration files. # # # ServerRoot: The top of the directory tree under which the server's # configuration, error, and log files are kept. # # NOTE!
If you intend to place this on an NFS (or otherwise network) # mounted filesystem then please read the LockFile documentation (available # at <URL:http://httpd.apache.org/docs-2.1/mod/mpm_common.html#lockfile>); # you will save yourself a lot of trouble. # # Do NOT add a slash at the end of the directory path. # ServerRoot "/etc/apache2" # # The accept serialization lock file MUST BE STORED ON A LOCAL DISK. # #<IfModule !mpm_winnt.c> #<IfModule !mpm_netware.c> LockFile /var/lock/apache2/accept.lock #</IfModule> #</IfModule> # # PidFile: The file in which the server should record its process # identification number when it starts. # This needs to be set in /etc/apache2/envvars # PidFile ${APACHE_PID_FILE} # # Timeout: The number of seconds before receives and sends time out. # Timeout 300 # # KeepAlive: Whether or not to allow persistent connections (more than # one request per connection). Set to "Off" to deactivate. # KeepAlive On # # MaxKeepAliveRequests: The maximum number of requests to allow # during a persistent connection. Set to 0 to allow an unlimited amount. # We recommend you leave this number high, for maximum performance. # MaxKeepAliveRequests 100 # # KeepAliveTimeout: Number of seconds to wait for the next request from the # same client on the same connection. # KeepAliveTimeout 15 ## ## Server-Pool Size Regulation (MPM specific) ## # prefork MPM # StartServers: number of server processes to start # MinSpareServers: minimum number of server processes which are kept spare # MaxSpareServers: maximum number of server processes which are kept spare # MaxClients: maximum number of server processes allowed to start # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_prefork_module> StartServers 5 MinSpareServers 5 MaxSpareServers 10 MaxClients 150 MaxRequestsPerChild 0 </IfModule> # worker MPM # StartServers: initial number of server processes to start # MaxClients: maximum number of simultaneous client connections # MinSpareThreads: minimum number of worker threads which are kept spare # MaxSpareThreads: maximum number of worker threads which are kept spare # ThreadsPerChild: constant number of worker threads in each server process # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_worker_module> StartServers 2 MaxClients 150 MinSpareThreads 25 MaxSpareThreads 75 ThreadsPerChild 25 MaxRequestsPerChild 0 </IfModule> # These need to be set in /etc/apache2/envvars User ${APACHE_RUN_USER} Group ${APACHE_RUN_GROUP} # # AccessFileName: The name of the file to look for in each directory # for additional configuration directives. See also the AllowOverride # directive. # AccessFileName .htaccess # # The following lines prevent .htaccess and .htpasswd files from being # viewed by Web clients. # <Files ~ "^\.ht"> Order allow,deny Deny from all </Files> # # DefaultType is the default MIME type the server will use for a document # if it cannot otherwise determine one, such as from filename extensions. # If your server contains mostly text or HTML documents, "text/plain" is # a good value. If most of your content is binary, such as applications # or images, you may want to use "application/octet-stream" instead to # keep browsers from trying to display binary files as though they are # text. # DefaultType text/plain # # HostnameLookups: Log the names of clients or just their IP addresses # e.g., www.apache.org (on) or 204.62.129.132 (off). 
# The default is off because it'd be overall better for the net if people # had to knowingly turn this feature on, since enabling it means that # each client request will result in AT LEAST one lookup request to the # nameserver. # HostnameLookups Off # ErrorLog: The location of the error log file. # If you do not specify an ErrorLog directive within a <VirtualHost> # container, error messages relating to that virtual host will be # logged here. If you *do* define an error logfile for a <VirtualHost> # container, that host's errors will be logged there and not here. # ErrorLog /var/log/apache2/error.log # # LogLevel: Control the number of messages logged to the error_log. # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. # LogLevel warn # Include module configuration: Include /etc/apache2/mods-enabled/*.load Include /etc/apache2/mods-enabled/*.conf # Include all the user configurations: Include /etc/apache2/httpd.conf # Include ports listing Include /etc/apache2/ports.conf # # The following directives define some format nicknames for use with # a CustomLog directive (see below). # If you are behind a reverse proxy, you might want to change %h into %{X-Forwarded-For}i # LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %b" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent # # ServerTokens # This directive configures what you return as the Server HTTP response # Header. The default is 'Full' which sends information about the OS-Type # and compiled in modules. # Set to one of: Full | OS | Minor | Minimal | Major | Prod # where Full conveys the most information, and Prod the least. # ServerTokens Full # # Optionally add a line containing the server version and virtual host # name to server-generated pages (internal error documents, FTP directory # listings, mod_status and mod_info output etc., but not CGI generated # documents or custom error documents). # Set to "EMail" to also include a mailto: link to the ServerAdmin. # Set to one of: On | Off | EMail # ServerSignature On # # Customizable error responses come in three flavors: # 1) plain text 2) local redirects 3) external redirects # # Some examples: #ErrorDocument 500 "The server made a boo boo." #ErrorDocument 404 /missing.html #ErrorDocument 404 "/cgi-bin/missing_handler.pl" #ErrorDocument 402 http://www.example.com/subscription_info.html # # # Putting this all together, we can internationalize error responses. # # We use Alias to redirect any /error/HTTP_<error>.html.var response to # our collection of by-error message multi-language collections. We use # includes to substitute the appropriate text. # # You can modify the messages' appearance without changing any of the # default HTTP_<error>.html.var files by adding the line: # # Alias /error/include/ "/your/include/path/" # # which allows you to create your own set of files by starting with the # /usr/share/apache2/error/include/ files and copying them to /your/include/path/, # even on a per-VirtualHost basis. The default include files will display # your Apache version number and your ServerAdmin email address regardless # of the setting of ServerSignature. # # The internationalized error documents require mod_alias, mod_include # and mod_negotiation. To activate them, uncomment the following 30 lines. 
# Alias /error/ "/usr/share/apache2/error/" # # <Directory "/usr/share/apache2/error"> # AllowOverride None # Options IncludesNoExec # AddOutputFilter Includes html # AddHandler type-map var # Order allow,deny # Allow from all # LanguagePriority en cs de es fr it nl sv pt-br ro # ForceLanguagePriority Prefer Fallback # </Directory> # # ErrorDocument 400 /error/HTTP_BAD_REQUEST.html.var # ErrorDocument 401 /error/HTTP_UNAUTHORIZED.html.var # ErrorDocument 403 /error/HTTP_FORBIDDEN.html.var # ErrorDocument 404 /error/HTTP_NOT_FOUND.html.var # ErrorDocument 405 /error/HTTP_METHOD_NOT_ALLOWED.html.var # ErrorDocument 408 /error/HTTP_REQUEST_TIME_OUT.html.var # ErrorDocument 410 /error/HTTP_GONE.html.var # ErrorDocument 411 /error/HTTP_LENGTH_REQUIRED.html.var # ErrorDocument 412 /error/HTTP_PRECONDITION_FAILED.html.var # ErrorDocument 413 /error/HTTP_REQUEST_ENTITY_TOO_LARGE.html.var # ErrorDocument 414 /error/HTTP_REQUEST_URI_TOO_LARGE.html.var # ErrorDocument 415 /error/HTTP_UNSUPPORTED_MEDIA_TYPE.html.var # ErrorDocument 500 /error/HTTP_INTERNAL_SERVER_ERROR.html.var # ErrorDocument 501 /error/HTTP_NOT_IMPLEMENTED.html.var # ErrorDocument 502 /error/HTTP_BAD_GATEWAY.html.var # ErrorDocument 503 /error/HTTP_SERVICE_UNAVAILABLE.html.var # ErrorDocument 506 /error/HTTP_VARIANT_ALSO_VARIES.html.var # Include of directories ignores editors' and dpkg's backup files, # see README.Debian for details. # Include generic snippets of statements Include /etc/apache2/conf.d/ # Include the virtual host configurations: Include /etc/apache2/sites-enabled/ My default config file for www on apache NameVirtualHost *:80 <VirtualHost *:80> ServerAdmin [email protected] #SSLEnable #SSLVerifyClient none #SSLCertificateFile /usr/local/ssl/crt/public.crt #SSLCertificateKeyFile /usr/local/ssl/private/private.key DocumentRoot /var/www/site/trunk/html <Directory /> Options FollowSymLinks AllowOverride all </Directory> <Directory /var/www/site/trunk/html> Options Indexes FollowSymLinks MultiViews AllowOverride all Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/access.log combined ServerSignature On Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> My ssl config file NameVirtualHost *:443 <VirtualHost *:443> ServerAdmin [email protected] #SSLEnable #SSLVerifyClient none #SSLCertificateFile /usr/local/ssl/crt/public.crt #SSLCertificateKeyFile /usr/local/ssl/private/private.key DocumentRoot /var/www/site/trunk/html <Directory /> Options FollowSymLinks AllowOverride all </Directory> <Directory /var/www/site/trunk/html> Options Indexes FollowSymLinks MultiViews AllowOverride all Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. 
LogLevel warn SSLEngine On SSLCertificateFile /usr/local/ssl/crt/public.crt SSLCertificateKeyFile /usr/local/ssl/private/private.key CustomLog /var/log/apache2/access.log combined ServerSignature On Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> My /etc/apache2/httpd.conf is blank. The directory /etc/apache2/conf.d has nothing in it but one file (charset); contents of /etc/apache2/conf.d/charset: # Read the documentation before enabling AddDefaultCharset. # In general, it is only a good idea if you know that all your files # have this encoding. It will override any encoding given in the files # in meta http-equiv or xml encoding tags. #AddDefaultCharset UTF-8 My apache error.log [Wed Jun 03 00:12:31 2009] [error] [client 216.168.43.234] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:) [Wed Jun 03 05:03:51 2009] [error] [client 99.247.237.46] File does not exist: /var/www/site/trunk/html/favicon.ico [Wed Jun 03 05:03:54 2009] [error] [client 99.247.237.46] File does not exist: /var/www/site/trunk/html/favicon.ico [Wed Jun 03 05:13:48 2009] [error] [client 99.247.237.46] File does not exist: /var/www/site/trunk/html/favicon.ico [Wed Jun 03 05:13:51 2009] [error] [client 99.247.237.46] File does not exist: /var/www/site/trunk/html/favicon.ico [Wed Jun 03 05:13:54 2009] [error] [client 99.247.237.46] File does not exist: /var/www/site/trunk/html/favicon.ico [Wed Jun 03 05:13:57 2009] [error] [client 99.247.237.46] File does not exist: /var/www/site/trunk/html/favicon.ico [Wed Jun 03 05:17:28 2009] [error] [client 99.247.237.46] File does not exist: /var/www/site/trunk/html/favicon.ico [Wed Jun 03 05:26:23 2009] [notice] caught SIGWINCH, shutting down gracefully [Wed Jun 03 05:26:34 2009] [notice] Apache/2.2.8 (Ubuntu) PHP/5.2.4-2ubuntu5.6 with Suhosin-Patch mod_ssl/2.2.8 OpenSSL/0.9.8g configured -- resuming normal operations [Wed Jun 03 06:03:41 2009] [notice] caught SIGWINCH, shutting down gracefully [Wed Jun 03 06:03:51 2009] [notice] Apache/2.2.8 (Ubuntu) PHP/5.2.4-2ubuntu5.6 with Suhosin-Patch mod_ssl/2.2.8 OpenSSL/0.9.8g configured -- resuming normal operations [Wed Jun 03 06:25:07 2009] [notice] caught SIGWINCH, shutting down gracefully [Wed Jun 03 06:25:17 2009] [notice] Apache/2.2.8 (Ubuntu) PHP/5.2.4-2ubuntu5.6 with Suhosin-Patch mod_ssl/2.2.8 OpenSSL/0.9.8g configured -- resuming normal operations [Wed Jun 03 12:09:25 2009] [error] [client 61.139.105.163] File does not exist: /var/www/site/trunk/html/fastenv [Wed Jun 03 15:04:42 2009] [notice] Graceful restart requested, doing restart [Wed Jun 03 15:04:43 2009] [notice] Apache/2.2.8 (Ubuntu) PHP/5.2.4-2ubuntu5.6 with Suhosin-Patch mod_ssl/2.2.8 OpenSSL/0.9.8g configured -- resuming normal operations [Wed Jun 03 15:29:51 2009] [error] [client 99.247.237.46] File does not exist: /var/www/site/trunk/html/favicon.ico [Wed Jun 03 15:29:54 2009] [error] [client 99.247.237.46] File does not exist: /var/www/site/trunk/html/favicon.ico [Wed Jun 03 15:30:32 2009] [error] [client 99.247.237.46] File does not exist: /var/www/site/trunk/html/favicon.ico [Wed Jun 03 15:45:54 2009] [notice] caught SIGWINCH, shutting down gracefully [Wed Jun 03 15:46:05 2009] [notice] Apache/2.2.8 (Ubuntu) PHP/5.2.4-2ubuntu5.6 with Suhosin-Patch mod_ssl/2.2.8 OpenSSL/0.9.8g configured -- resuming normal operations
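For reference, a commonly used variant of the www-stripping rule spells out the scheme in the substitution, so that mod_rewrite issues an external redirect rather than treating //%1/$1 as a local path. This is a sketch of that pattern under the poster's setup, not a confirmed fix for it:

    # Strip a leading "www." and redirect permanently to the bare host.
    # %1 is the hostname captured without the "www." prefix.
    Options +FollowSymLinks
    RewriteEngine On
    RewriteCond %{HTTP_HOST} ^www\.(.*)$ [NC]
    RewriteRule ^(.*)$ http://%1/$1 [L,R=301]

If the file is truly never executed, the usual things to check are that AllowOverride permits .htaccess for the directory in question (the vhost above does set AllowOverride all) and that the file is readable by the Apache user.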


  • Why do I get Detached Entity exception when upgrading Spring Boot 1.1.4 to 1.1.5

    - by mmeany
On updating Spring Boot from 1.1.4 to 1.1.5, a simple web application started generating detached entity exceptions. Specifically, a post-authentication interceptor that bumped the number of visits was causing the problem. A quick check of loaded dependencies showed that Spring Data had been updated from 1.6.1 to 1.6.2, and a further check of the change log showed a couple of fixed issues relating to optimistic locking, version fields and JPA. Well, I am using a version field, and it starts out as null, following the recommendation in the specification not to set it. I have produced a very simple test scenario where I get detached entity exceptions if the version field starts as null or zero. If I create an entity with version 1, however, then I do not get these exceptions. Is this expected behaviour or is there still something amiss? Below is the test scenario I have for this condition. In the scenario, the service layer has been annotated @Transactional. Each test case makes multiple calls to the service layer - the tests are working with detached entities, as this is the scenario I am working with in the full-blown application. The test case comprises four tests: Test 1 - versionNullCausesAnExceptionOnUpdate() In this test the version field in the detached object is null. This is how I would usually create the object prior to passing it to the service. This test fails with a Detached Entity exception. I would have expected this test to pass. If there is a flaw in the test then the rest of the scenario is probably moot. Test 2 - versionZeroCausesExceptionOnUpdate() In this test I have set the version to the value Long(0L). This is an edge case test, included because I found a reference to zero values being used for the version field in the Spring Data change log. This test fails with a Detached Entity exception. Of interest simply because the following two tests pass, leaving this as an anomaly. Test 3 - versionOneDoesNotCausesExceptionOnUpdate() In this test the version field is set to the value Long(1L). Not something I would usually do, but considering the notes in the Spring Data change log I decided to give it a go. This test passes. I would not usually set the version field, but this looks like a work-around until I figure out why the first test is failing. Test 4 - versionOneDoesNotCausesExceptionWithMultipleUpdates() Encouraged by the result of test 3, I pushed the scenario a step further and performed multiple updates on the entity that started life with a version of Long(1L). This test passes. Reinforcement that this may be a usable work-around. The entity: package com.mvmlabs.domain; import javax.persistence.Column; import javax.persistence.Entity; import javax.persistence.GeneratedValue; import javax.persistence.GenerationType; import javax.persistence.Id; import javax.persistence.Table; import javax.persistence.Version; @Entity @Table(name="user_details") public class User { @Id @GeneratedValue(strategy=GenerationType.AUTO) private Long id; @Version private Long version; @Column(nullable = false, unique = true) private String username; @Column(nullable = false) private Integer numberOfVisits; public Long getId() { return id; } public void setId(Long id) { this.id = id; } public Long getVersion() { return version; } public void setVersion(Long version) { this.version = version; } public Integer getNumberOfVisits() { return numberOfVisits == null ?
0 : numberOfVisits; } public void setNumberOfVisits(Integer numberOfVisits) { this.numberOfVisits = numberOfVisits; } public String getUsername() { return username; } public void setUsername(String username) { this.username = username; } } The repository: package com.mvmlabs.dao; import org.springframework.data.repository.CrudRepository; import com.mvmlabs.domain.User; public interface UserDao extends CrudRepository<User, Long>{ } The service interface: package com.mvmlabs.service; import com.mvmlabs.domain.User; public interface UserService { User save(User user); User loadUser(Long id); User registerVisit(User user); } The service implementation: package com.mvmlabs.service; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.stereotype.Service; import org.springframework.transaction.annotation.Propagation; import org.springframework.transaction.annotation.Transactional; import org.springframework.transaction.support.TransactionSynchronizationManager; import com.mvmlabs.dao.UserDao; import com.mvmlabs.domain.User; @Service @Transactional(propagation=Propagation.REQUIRED, readOnly=false) public class UserServiceJpaImpl implements UserService { @Autowired private UserDao userDao; @Transactional(readOnly=true) @Override public User loadUser(Long id) { return userDao.findOne(id); } @Override public User registerVisit(User user) { user.setNumberOfVisits(user.getNumberOfVisits() + 1); return userDao.save(user); } @Override public User save(User user) { return userDao.save(user); } } The application class: package com.mvmlabs; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.EnableAutoConfiguration; import org.springframework.context.annotation.ComponentScan; import org.springframework.context.annotation.Configuration; @Configuration @ComponentScan @EnableAutoConfiguration public class Application { public static void main(String[] args) { SpringApplication.run(Application.class, args); } } The POM: <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.mvmlabs</groupId> <artifactId>jpa-issue</artifactId> <version>0.0.1-SNAPSHOT</version> <packaging>jar</packaging> <name>spring-boot-jpa-issue</name> <description>JPA Issue between spring boot 1.1.4 and 1.1.5</description> <parent> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId> <version>1.1.5.RELEASE</version> <relativePath /> <!-- lookup parent from repository --> </parent> <dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-data-jpa</artifactId> </dependency> <dependency> <groupId>org.hsqldb</groupId> <artifactId>hsqldb</artifactId> <scope>runtime</scope> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <scope>test</scope> </dependency> </dependencies> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <start-class>com.mvmlabs.Application</start-class> <java.version>1.7</java.version> </properties> <build> <plugins> <plugin> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-maven-plugin</artifactId> </plugin> </plugins> </build> </project> The application properties: spring.jpa.hibernate.ddl-auto: create 
spring.jpa.hibernate.naming_strategy: org.hibernate.cfg.ImprovedNamingStrategy spring.jpa.database: HSQL spring.jpa.show-sql: true spring.datasource.url=jdbc:hsqldb:file:./target/testdb spring.datasource.username=sa spring.datasource.password= spring.datasource.driverClassName=org.hsqldb.jdbcDriver The test case: package com.mvmlabs; import org.junit.Assert; import org.junit.Test; import org.junit.runner.RunWith; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.boot.test.SpringApplicationConfiguration; import org.springframework.test.context.junit4.SpringJUnit4ClassRunner; import com.mvmlabs.domain.User; import com.mvmlabs.service.UserService; @RunWith(SpringJUnit4ClassRunner.class) @SpringApplicationConfiguration(classes = Application.class) public class ApplicationTests { @Autowired UserService userService; @Test public void versionNullCausesAnExceptionOnUpdate() throws Exception { User user = new User(); user.setUsername("Version Null"); user.setNumberOfVisits(0); user.setVersion(null); user = userService.save(user); user = userService.registerVisit(user); Assert.assertEquals(new Integer(1), user.getNumberOfVisits()); Assert.assertEquals(new Long(1L), user.getVersion()); } @Test public void versionZeroCausesExceptionOnUpdate() throws Exception { User user = new User(); user.setUsername("Version Zero"); user.setNumberOfVisits(0); user.setVersion(0L); user = userService.save(user); user = userService.registerVisit(user); Assert.assertEquals(new Integer(1), user.getNumberOfVisits()); Assert.assertEquals(new Long(1L), user.getVersion()); } @Test public void versionOneDoesNotCausesExceptionOnUpdate() throws Exception { User user = new User(); user.setUsername("Version One"); user.setNumberOfVisits(0); user.setVersion(1L); user = userService.save(user); user = userService.registerVisit(user); Assert.assertEquals(new Integer(1), user.getNumberOfVisits()); Assert.assertEquals(new Long(2L), user.getVersion()); } @Test public void versionOneDoesNotCausesExceptionWithMultipleUpdates() throws Exception { User user = new User(); user.setUsername("Version One Multiple"); user.setNumberOfVisits(0); user.setVersion(1L); user = userService.save(user); user = userService.registerVisit(user); user = userService.registerVisit(user); user = userService.registerVisit(user); Assert.assertEquals(new Integer(3), user.getNumberOfVisits()); Assert.assertEquals(new Long(4L), user.getVersion()); } } The first two tests fail with detached entity exception. The last two tests pass as expected. Now change Spring Boot version to 1.1.4 and rerun, all tests pass. Are my expectations wrong? Edit: This code saved to GitHub at https://github.com/mmeany/spring-boot-detached-entity-issue
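A plausible reading, given the optimistic-locking entries in the Spring Data change log mentioned above, is that CrudRepository.save() chooses between EntityManager.persist() and EntityManager.merge() by asking whether the entity is "new", and that 1.6.2 started consulting the @Version property for that decision when one is declared. The sketch below illustrates the strategy; it is an illustration of the idea, not the library's actual source, and the zero-version behaviour is inferred from the failing test above:

    // Hypothetical version-based "is new" check for the User entity above.
    // If save() sees the entity as new it calls persist(), which fails with a
    // detached entity error when the row already exists; if it sees it as
    // existing it calls merge(), which tolerates detached instances.
    public boolean isNew(User entity) {
        Long version = entity.getVersion();
        // null clearly means "never persisted"; the failing zero-version test
        // suggests this release may have treated 0L the same way.
        return version == null || version.longValue() == 0L;
    }

Under that reading, pre-setting the version to 1L simply forces save() down the merge() path, which matches the two passing tests.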


  • Setting up a pc bluetooth server for android

    - by Del
Alright, I've been reading a lot of topics the past two or three days and none of them seems to have asked this. I am writing a PC-side server for my Android device; this is for exchanging some information and general debugging. Eventually I will be connecting to an SPP device to control a microcontroller. I have managed, using the following (Android to PC), to connect to RFCOMM channel 11 and exchange data between my Android device and my PC. Method m = device.getClass().getMethod("createRfcommSocket", new Class[] { int.class }); tmp = (BluetoothSocket) m.invoke(device, Integer.valueOf(11)); I have attempted the createRfcommSocketToServiceRecord(UUID) method, with absolutely no luck. For the PC side, I have been using the C BlueZ stack for Linux. I have the following code, which registers the service and opens a server socket: int main(int argc, char **argv) { struct sockaddr_rc loc_addr = { 0 }, rem_addr = { 0 }; char buf[1024] = { 0 }; char str[1024] = { 0 }; int s, client, bytes_read; sdp_session_t *session; socklen_t opt = sizeof(rem_addr); session = register_service(); s = socket(AF_BLUETOOTH, SOCK_STREAM, BTPROTO_RFCOMM); loc_addr.rc_family = AF_BLUETOOTH; loc_addr.rc_bdaddr = *BDADDR_ANY; loc_addr.rc_channel = (uint8_t) 11; bind(s, (struct sockaddr *)&loc_addr, sizeof(loc_addr)); listen(s, 1); client = accept(s, (struct sockaddr *)&rem_addr, &opt); ba2str( &rem_addr.rc_bdaddr, buf ); fprintf(stderr, "accepted connection from %s\n", buf); memset(buf, 0, sizeof(buf)); bytes_read = read(client, buf, sizeof(buf)); if( bytes_read > 0 ) { printf("received [%s]\n", buf); } sprintf(str,"to Android."); printf("sent [%s]\n",str); write(client, str, sizeof(str)); close(client); close(s); sdp_close( session ); return 0; } sdp_session_t *register_service() { uint32_t svc_uuid_int[] = { 0x00000000,0x00000000,0x00000000,0x00000000 }; uint8_t rfcomm_channel = 11; const char *service_name = "Remote Host"; const char *service_dsc = "What the remote should be connecting to."; const char *service_prov = "Your mother"; uuid_t root_uuid, l2cap_uuid, rfcomm_uuid, svc_uuid; sdp_list_t *l2cap_list = 0, *rfcomm_list = 0, *root_list = 0, *proto_list = 0, *access_proto_list = 0; sdp_data_t *channel = 0, *psm = 0; sdp_record_t *record = sdp_record_alloc(); // set the general service ID sdp_uuid128_create( &svc_uuid, &svc_uuid_int ); sdp_set_service_id( record, svc_uuid ); // make the service record publicly browsable sdp_uuid16_create(&root_uuid, PUBLIC_BROWSE_GROUP); root_list = sdp_list_append(0, &root_uuid); sdp_set_browse_groups( record, root_list ); // set l2cap information sdp_uuid16_create(&l2cap_uuid, L2CAP_UUID); l2cap_list = sdp_list_append( 0, &l2cap_uuid ); proto_list = sdp_list_append( 0, l2cap_list ); // set rfcomm information sdp_uuid16_create(&rfcomm_uuid, RFCOMM_UUID); channel = sdp_data_alloc(SDP_UINT8, &rfcomm_channel); rfcomm_list = sdp_list_append( 0, &rfcomm_uuid ); sdp_list_append( rfcomm_list, channel ); sdp_list_append( proto_list, rfcomm_list ); // attach protocol information to service record access_proto_list = sdp_list_append( 0, proto_list ); sdp_set_access_protos( record, access_proto_list ); // set the name, provider, and description sdp_set_info_attr(record, service_name, service_prov, service_dsc); int err = 0; sdp_session_t *session = 0; // connect to the local SDP server, register the service record, and // disconnect session = sdp_connect( BDADDR_ANY, BDADDR_LOCAL, SDP_RETRY_IF_BUSY ); err = sdp_record_register(session, record, 0); // cleanup //sdp_data_free( channel );
sdp_list_free( l2cap_list, 0 ); sdp_list_free( rfcomm_list, 0 ); sdp_list_free( root_list, 0 ); sdp_list_free( access_proto_list, 0 ); return session; } And here is another piece of code which, in addition to 'sdptool browse local', can verify that the service record is running on the PC: int main(int argc, char **argv) { uuid_t svc_uuid; uint32_t svc_uuid_int[] = { 0x00000000,0x00000000,0x00000000,0x00000000 }; int err; bdaddr_t target; sdp_list_t *response_list = NULL, *search_list, *attrid_list; sdp_session_t *session = 0; str2ba( "01:23:45:67:89:AB", &target ); // connect to the SDP server running on the remote machine session = sdp_connect( BDADDR_ANY, BDADDR_LOCAL, SDP_RETRY_IF_BUSY ); // specify the UUID of the application we're searching for sdp_uuid128_create( &svc_uuid, &svc_uuid_int ); search_list = sdp_list_append( NULL, &svc_uuid ); // specify that we want a list of all the matching applications' attributes uint32_t range = 0x0000ffff; attrid_list = sdp_list_append( NULL, &range ); // get a list of service records that have UUID 0xabcd err = sdp_service_search_attr_req( session, search_list, \ SDP_ATTR_REQ_RANGE, attrid_list, &response_list); sdp_list_t *r = response_list; // go through each of the service records for (; r; r = r->next ) { sdp_record_t *rec = (sdp_record_t*) r->data; sdp_list_t *proto_list; // get a list of the protocol sequences if( sdp_get_access_protos( rec, &proto_list ) == 0 ) { sdp_list_t *p = proto_list; // go through each protocol sequence for( ; p ; p = p->next ) { sdp_list_t *pds = (sdp_list_t*)p->data; // go through each protocol list of the protocol sequence for( ; pds ; pds = pds->next ) { // check the protocol attributes sdp_data_t *d = (sdp_data_t*)pds->data; int proto = 0; for( ; d; d = d->next ) { switch( d->dtd ) { case SDP_UUID16: case SDP_UUID32: case SDP_UUID128: proto = sdp_uuid_to_proto( &d->val.uuid ); break; case SDP_UINT8: if( proto == RFCOMM_UUID ) { printf("rfcomm channel: %d\n",d->val.int8); } break; } } } sdp_list_free( (sdp_list_t*)p->data, 0 ); } sdp_list_free( proto_list, 0 ); } printf("found service record 0x%x\n", rec->handle); sdp_record_free( rec ); } sdp_close(session); } Output: $ ./search rfcomm channel: 11 found service record 0x10008 sdptool: Service Name: Remote Host Service Description: What the remote should be connecting to.
Service Provider: Your mother Service RecHandle: 0x10008 Protocol Descriptor List: "L2CAP" (0x0100) "RFCOMM" (0x0003) Channel: 11 And for logcat I'm getting this: 07-22 15:57:06.087: ERROR/BTLD(215): ****************search UUID = 0000*********** 07-22 15:57:06.087: INFO//system/bin/btld(209): btapp_dm_GetRemoteServiceChannel() 07-22 15:57:06.087: INFO//system/bin/btld(209): ##### USerial_Ioctl: BT_Wake, 0x8003 #### 07-22 15:57:06.097: INFO/ActivityManager(88): Displayed activity com.example.socktest/.socktest: 79 ms (total 79 ms) 07-22 15:57:06.697: INFO//system/bin/btld(209): ##### USerial_Ioctl: BT_Sleep, 0x8004 #### 07-22 15:57:07.517: WARN/BTLD(215): ccb timer ticks: 2147483648 07-22 15:57:07.517: INFO//system/bin/btld(209): ##### USerial_Ioctl: BT_Wake, 0x8003 #### 07-22 15:57:07.547: WARN/BTLD(215): info:x10 07-22 15:57:07.547: INFO/BTL-IFS(215): send_ctrl_msg: [BTL_IFS CTRL] send BTLIF_DTUN_SIGNAL_EVT (CTRL) 10 pbytes (hdl 14) 07-22 15:57:07.547: DEBUG/DTUN_HCID_BZ4(253): dtun_dm_sig_link_up() 07-22 15:57:07.547: INFO/DTUN_HCID_BZ4(253): dtun_dm_sig_link_up: dummy_handle = 342 07-22 15:57:07.547: DEBUG/ADAPTER(253): adapter_get_device(00:02:72:AB:7C:EE) 07-22 15:57:07.547: ERROR/BluetoothEventLoop.cpp(88): pollData[0] is revented, check next one 07-22 15:57:07.547: ERROR/BluetoothEventLoop.cpp(88): event_filter: Received signal org.bluez.Device:PropertyChanged from /org/bluez/253/hci0/dev_00_02_72_AB_7C_EE 07-22 15:57:07.777: WARN/BTLD(215): process_service_search_attr_rsp 07-22 15:57:07.787: INFO/BTL-IFS(215): send_ctrl_msg: [BTL_IFS CTRL] send BTLIF_DTUN_SIGNAL_EVT (CTRL) 13 pbytes (hdl 14) 07-22 15:57:07.787: INFO/DTUN_HCID_BZ4(253): dtun_dm_sig_rmt_service_channel: success=0, service=00000000 07-22 15:57:07.787: ERROR/DTUN_HCID_BZ4(253): discovery unsuccessful! 
07-22 15:57:08.497: INFO//system/bin/btld(209): ##### USerial_Ioctl: BT_Sleep, 0x8004 #### 07-22 15:57:09.507: INFO//system/bin/btld(209): ##### USerial_Ioctl: BT_Wake, 0x8003 #### 07-22 15:57:09.597: INFO/BTL-IFS(215): send_ctrl_msg: [BTL_IFS CTRL] send BTLIF_DTUN_SIGNAL_EVT (CTRL) 11 pbytes (hdl 14) 07-22 15:57:09.597: DEBUG/DTUN_HCID_BZ4(253): dtun_dm_sig_link_down() 07-22 15:57:09.597: INFO/DTUN_HCID_BZ4(253): dtun_dm_sig_link_down device = 0xf7a0 handle = 342 reason = 22 07-22 15:57:09.597: ERROR/BluetoothEventLoop.cpp(88): pollData[0] is revented, check next one 07-22 15:57:09.597: ERROR/BluetoothEventLoop.cpp(88): event_filter: Received signal org.bluez.Device:PropertyChanged from /org/bluez/253/hci0/dev_00_02_72_AB_7C_EE 07-22 15:57:09.597: DEBUG/BluetoothA2dpService(88): Received intent Intent { act=android.bluetooth.device.action.ACL_DISCONNECTED (has extras) } 07-22 15:57:10.107: INFO//system/bin/btld(209): ##### USerial_Ioctl: BT_Sleep, 0x8004 #### 07-22 15:57:12.107: DEBUG/BluetoothService(88): Cleaning up failed UUID channel lookup: 00:02:72:AB:7C:EE 00000000-0000-0000-0000-000000000000 07-22 15:57:12.107: ERROR/Socket Test(5234): connect() failed 07-22 15:57:12.107: DEBUG/ASOCKWRP(5234): asocket_abort [31,32,33] 07-22 15:57:12.107: INFO/BLZ20_WRAPPER(5234): blz20_wrp_shutdown: s 31, how 2 07-22 15:57:12.107: DEBUG/BLZ20_WRAPPER(5234): blz20_wrp_shutdown: fd (-1:31), bta -1, rc 0, wflags 0x0 07-22 15:57:12.107: INFO/BLZ20_WRAPPER(5234): __close_prot_rfcomm: fd 31 07-22 15:57:12.107: INFO/BLZ20_WRAPPER(5234): __close_prot_rfcomm: bind not completed on this socket 07-22 15:57:12.107: DEBUG/BLZ20_WRAPPER(5234): btlif_signal_event: fd (-1:31), bta -1, rc 0, wflags 0x0 07-22 15:57:12.107: DEBUG/BLZ20_WRAPPER(5234): btlif_signal_event: event BTLIF_BTS_EVT_ABORT matched 07-22 15:57:12.107: DEBUG/BTL_IFC_WRP(5234): wrp_close_s_only: wrp_close_s_only [31] (31:-1) [] 07-22 15:57:12.107: DEBUG/BTL_IFC_WRP(5234): wrp_close_s_only: data socket closed 07-22 15:57:12.107: DEBUG/BTL_IFC_WRP(5234): wsactive_del: delete wsock 31 from active list [ad3e1494] 07-22 15:57:12.107: DEBUG/BTL_IFC_WRP(5234): wrp_close_s_only: wsock fully closed, return to pool 07-22 15:57:12.107: DEBUG/BLZ20_WRAPPER(5234): btsk_free: success 07-22 15:57:12.107: DEBUG/BLZ20_WRAPPER(5234): blz20_wrp_write: wrote 1 bytes out of 1 on fd 33 07-22 15:57:12.107: DEBUG/ASOCKWRP(5234): asocket_destroy 07-22 15:57:12.107: DEBUG/ASOCKWRP(5234): asocket_abort [31,32,33] 07-22 15:57:12.107: INFO/BLZ20_WRAPPER(5234): blz20_wrp_shutdown: s 31, how 2 07-22 15:57:12.107: DEBUG/BLZ20_WRAPPER(5234): blz20_wrp_shutdown: btsk not found, normal close (31) 07-22 15:57:12.107: DEBUG/BLZ20_WRAPPER(5234): blz20_wrp_write: wrote 1 bytes out of 1 on fd 33 07-22 15:57:12.107: INFO/BLZ20_WRAPPER(5234): blz20_wrp_close: s 33 07-22 15:57:12.107: DEBUG/BLZ20_WRAPPER(5234): blz20_wrp_close: btsk not found, normal close (33) 07-22 15:57:12.107: INFO/BLZ20_WRAPPER(5234): blz20_wrp_close: s 32 07-22 15:57:12.107: DEBUG/BLZ20_WRAPPER(5234): blz20_wrp_close: btsk not found, normal close (32) 07-22 15:57:12.107: INFO/BLZ20_WRAPPER(5234): blz20_wrp_close: s 31 07-22 15:57:12.107: DEBUG/BLZ20_WRAPPER(5234): blz20_wrp_close: btsk not found, normal close (31) 07-22 15:57:12.157: DEBUG/Sensors(88): close_akm, fd=151 07-22 15:57:12.167: ERROR/CachedBluetoothDevice(477): onUuidChanged: Time since last connect14970690 07-22 15:57:12.237: DEBUG/Socket Test(5234): -On Stop- Sorry for bombarding you guys with what seems like a difficult question and a lot to read, but 
I've been working on this problem for a while and I've tried a lot of different things to get this working. Let me reiterate: I can get it to work, but not using the service discovery protocol. I've tried several different UUIDs and two different computers, although I only have my HTC Incredible to test with. I've also heard some rumors that the BT stack wasn't working on the HTC Droid, but that isn't the case, at least for PC interaction.
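For comparison, the SDP-based connect on the Android side is normally just a few lines. Below is a minimal sketch, assuming a non-zero 128-bit UUID that is identical to the one the PC-side register_service() publishes (the UUID value here is a hypothetical example, not one from the question):

    import java.io.IOException;
    import java.util.UUID;

    import android.bluetooth.BluetoothAdapter;
    import android.bluetooth.BluetoothDevice;
    import android.bluetooth.BluetoothSocket;

    public class SppClient {
        // Hypothetical service UUID; it must match the UUID in the PC-side
        // SDP record, and an all-zero UUID is unlikely to be found by the search.
        private static final UUID SERVICE_UUID =
                UUID.fromString("8ce255c0-200a-11e0-ac64-0800200c9a66");

        public BluetoothSocket connect(BluetoothDevice device) throws IOException {
            BluetoothSocket socket =
                    device.createRfcommSocketToServiceRecord(SERVICE_UUID);
            // Ongoing discovery is heavyweight and reliably slows or breaks connects.
            BluetoothAdapter.getDefaultAdapter().cancelDiscovery();
            socket.connect(); // performs the SDP lookup, then opens the RFCOMM channel
            return socket;
        }
    }

The logcat line "search UUID = 0000" above suggests the phone really is searching for the all-zero UUID registered by the server, which would make the failed lookup consistent with the UUID choice rather than the stack.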


  • No unique bean of type [javax.persistence.EntityManagerFactory] is defined: expected single bean but found 0

    - by user659580
Can someone tell me what's wrong with my config? I'm overly frustrated and I've been losing my hair on this. Any pointer will be welcome. Thanks. Persistence.xml <?xml version="1.0" encoding="UTF-8"?> <persistence version="2.0" xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd"> <persistence-unit name="myPersistenceUnit" transaction-type="JTA"> <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider> <jta-data-source>jdbc/oracle</jta-data-source> <class>com.myproject.domain.UserAccount</class> <properties> <property name="eclipselink.logging.level" value="FINE"/> <property name="eclipselink.jdbc.batch-writing" value="JDBC" /> <property name="eclipselink.target-database" value="Oracle10g"/> <property name="eclipselink.cache.type.default" value="NONE"/> <!--Integrate EclipseLink with JTA in Glassfish--> <property name="eclipselink.target-server" value="SunAS9"/> <property name="eclipselink.cache.size.default" value="0"/> <property name="eclipselink.cache.shared.default" value="false"/> </properties> </persistence-unit> </persistence> Web.xml <?xml version="1.0" encoding="UTF-8"?> <web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:web="http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd" id="WebApp_ID" version="3.0"> <display-name>MyProject</display-name> <persistence-unit-ref> <persistence-unit-ref-name>persistence/myPersistenceUnit</persistence-unit-ref-name> <persistence-unit-name>myPersistenceUnit</persistence-unit-name> </persistence-unit-ref> <servlet> <servlet-name>mvc-dispatcher</servlet-name> <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class> <load-on-startup>1</load-on-startup> </servlet> <servlet-mapping> <servlet-name>mvc-dispatcher</servlet-name> <url-pattern>*.htm</url-pattern> </servlet-mapping> <context-param> <param-name>contextConfigLocation</param-name> <param-value>/WEB-INF/mvc-dispatcher-servlet.xml</param-value> </context-param> <context-param> <param-name>contextConfigLocation</param-name> <param-value>classpath:applicationContext.xml</param-value> </context-param> <listener> <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class> </listener> </web-app> applicationContext.xml <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:p="http://www.springframework.org/schema/p" xmlns:aop="http://www.springframework.org/schema/aop" xmlns:tx="http://www.springframework.org/schema/tx" xmlns:context="http://www.springframework.org/schema/context" xmlns:mvc="http://www.springframework.org/schema/mvc" xmlns:jee="http://www.springframework.org/schema/jee" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop.xsd http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc-3.0.xsd
http://www.springframework.org/schema/jee http://www.springframework.org/schema/jee/spring-jee.xsd" default-autowire="byName"> <tx:annotation-driven/> <tx:jta-transaction-manager/> <jee:jndi-lookup id="entityManagerFactory" jndi-name="persistence/myPersistenceUnit"/> <bean class="org.springframework.beans.factory.annotation.RequiredAnnotationBeanPostProcessor"/> <bean class="org.springframework.dao.annotation.PersistenceExceptionTranslationPostProcessor"/> <!-- enables interpretation of the @PersistenceUnit/@PersistenceContext annotations providing convenient access to EntityManagerFactory/EntityManager --> <bean class="org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor"/> </beans> mvc-dispatcher-servlet.xml <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:p="http://www.springframework.org/schema/p" xmlns:context="http://www.springframework.org/schema/context" xmlns:mvc="http://www.springframework.org/schema/mvc" xmlns:tx="http://www.springframework.org/schema/tx" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc-3.0.xsd http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx.xsd"> <!-- DispatcherServlet Context: defines this servlet's request-processing infrastructure --> <tx:annotation-driven /> <tx:jta-transaction-manager /> <context:component-scan base-package="com.myProject" /> <context:annotation-config /> <!-- Enables the Spring MVC @Controller programming model --> <mvc:annotation-driven /> <mvc:default-servlet-handler /> <bean class="org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping" /> <bean class="org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter" /> <!-- Location Tiles config --> <bean id="tilesConfigurer" class="org.springframework.web.servlet.view.tiles2.TilesConfigurer"> <property name="definitions"> <list> <value>/WEB-INF/tiles-defs.xml</value> </list> </property> </bean> <!-- Resolves views selected for rendering by Tiles --> <bean id="tilesViewResolver" class="org.springframework.web.servlet.view.UrlBasedViewResolver" p:viewClass="org.springframework.web.servlet.view.tiles2.TilesView" /> <!-- Resolves views selected for rendering by @Controllers to .jsp resources in the /WEB-INF/views directory --> <bean id="viewResolver" class="org.springframework.web.servlet.view.InternalResourceViewResolver"> <property name="prefix"> <value>/WEB-INF/pages/</value> </property> <property name="suffix"> <value>.jsp</value> </property> </bean> <bean id="messageSource" class="org.springframework.context.support.ReloadableResourceBundleMessageSource"> <property name="basename" value="/WEB-INF/messages" /> <property name="cacheSeconds" value="0"/> </bean> <bean id="validator" class="org.springframework.validation.beanvalidation.LocalValidatorFactoryBean" /> </beans> UserAccountDAO.java @Repository public class UserAccountDAO implements IUserAccountDAO { private EntityManager entityManager; @PersistenceContext public void setEntityManager(EntityManager entityManager) { this.entityManager = entityManager; } @Override @Transactional(readOnly = true, propagation = Propagation.REQUIRED) 
public UserAccount checkLogin(String userName, String pwd) { //* Find the user in the DB Query queryUserAccount = entityManager.createQuery("select u from UserAccount u where (u.username = :userName) and (u.password = :pwd)"); ....... } } loginController.java @Controller @SessionAttributes({"userAccount"}) public class LoginLogOutController { private static final Logger logger = LoggerFactory.getLogger(UserAccount.class); @Resource private UserAccountDAO userDAO; @RequestMapping(value="/loginForm.htm", method = RequestMethod.GET) public String showloginForm(Map model) { logger.debug("Get login form"); UserAccount userAccount = new UserAccount(); model.put("userAccount", userAccount); return "loginform"; } ... Error Stack INFO: 13:52:21,657 ERROR ContextLoader:220 - Context initialization failed org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'loginController': Injection of resource dependencies failed; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'userAccountDAO': Injection of persistence dependencies failed; nested exception is org.springframework.beans.factory.NoSuchBeanDefinitionException: No unique bean of type [javax.persistence.EntityManagerFactory] is defined: expected single bean but found 0 at org.springframework.context.annotation.CommonAnnotationBeanPostProcessor.postProcessPropertyValues(CommonAnnotationBeanPostProcessor.java:300) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1074) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:517) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:291) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:288) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:190) at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:580) at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:895) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:425) at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:276) at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:197) at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:47) at org.apache.catalina.core.StandardContext.contextListenerStart(StandardContext.java:4664) at com.sun.enterprise.web.WebModule.contextListenerStart(WebModule.java:535) at org.apache.catalina.core.StandardContext.start(StandardContext.java:5266) at com.sun.enterprise.web.WebModule.start(WebModule.java:499) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:928) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:912) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:694) at 
com.sun.enterprise.web.WebContainer.loadWebModule(WebContainer.java:1947) at com.sun.enterprise.web.WebContainer.loadWebModule(WebContainer.java:1619) at com.sun.enterprise.web.WebApplication.start(WebApplication.java:90) at org.glassfish.internal.data.EngineRef.start(EngineRef.java:126) at org.glassfish.internal.data.ModuleInfo.start(ModuleInfo.java:241) at org.glassfish.internal.data.ApplicationInfo.start(ApplicationInfo.java:236) at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:339) at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:183) at org.glassfish.deployment.admin.DeployCommand.execute(DeployCommand.java:272) at com.sun.enterprise.v3.admin.CommandRunnerImpl$1.execute(CommandRunnerImpl.java:305) at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:320) at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:1176) at com.sun.enterprise.v3.admin.CommandRunnerImpl.access$900(CommandRunnerImpl.java:83) at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1235) at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1224) at com.sun.enterprise.v3.admin.AdminAdapter.doCommand(AdminAdapter.java:365) at com.sun.enterprise.v3.admin.AdminAdapter.service(AdminAdapter.java:204) at com.sun.grizzly.tcp.http11.GrizzlyAdapter.service(GrizzlyAdapter.java:166) at com.sun.enterprise.v3.server.HK2Dispatcher.dispath(HK2Dispatcher.java:100) at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:245) at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:791) at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:693) at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:954) at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:170) at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:135) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:102) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:88) at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:76) at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:53) at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:57) at com.sun.grizzly.ContextTask.run(ContextTask.java:69) at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:330) at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:309) at java.lang.Thread.run(Thread.java:732)
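    The bottom of the stack trace is the real clue: "No unique bean of type [javax.persistence.EntityManagerFactory] is defined: expected single bean but found 0" means the root application context never created the entityManagerFactory bean declared in applicationContext.xml. One plausible cause, offered here as an assumption rather than a confirmed diagnosis, is the duplicated contextConfigLocation context-param in web.xml: a context parameter name can only be bound once, so the ContextLoaderListener may be seeing only the /WEB-INF/mvc-dispatcher-servlet.xml value and never loading classpath:applicationContext.xml at all. A minimal sketch of a consolidated declaration, assuming the DispatcherServlet named mvc-dispatcher should keep picking up /WEB-INF/mvc-dispatcher-servlet.xml through Spring's default naming convention:

<!-- Sketch only: a single contextConfigLocation for the root context. -->
<!-- The DispatcherServlet still loads /WEB-INF/mvc-dispatcher-servlet.xml by convention. -->
<context-param>
    <param-name>contextConfigLocation</param-name>
    <param-value>classpath:applicationContext.xml</param-value>
</context-param>
<listener>
    <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
</listener>

    If both files really must go into the root context, note that contextConfigLocation accepts several locations in a single param-value, separated by whitespace or commas.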

    Read the article

  • How to solve "403 Forbidden" on CentOS6 with SELinux Disabled?

    - by André
    I have a machine on Linode that is driving me crazy. Linode does not have SELinux on CentOS6... I'm trying to configure Apache to serve my website from "/home/websites/public_html/mysite.com/public". Since I don't have SELinux enabled, how can I avoid the "403 Forbidden" I get when trying to access the webpage? Sorry for my English. Best regards.

Update 1, error_log

[Mon Oct 17 14:04:16 2011] [error] [client 127.0.0.1] (13)Permission denied: access to / denied
[Mon Oct 17 14:08:07 2011] [error] [client 127.0.0.1] (13)Permission denied: access to / denied
[Mon Oct 17 14:10:25 2011] [error] [client 127.0.0.1] (13)Permission denied: access to / denied
[Mon Oct 17 14:10:41 2011] [error] [client 127.0.0.1] (13)Permission denied: access to / denied
[Mon Oct 17 14:32:35 2011] [error] [client 127.0.0.1] (13)Permission denied: access to / denied
[Mon Oct 17 14:34:45 2011] [error] [client 58.218.199.227] (13)Permission denied: access to /proxy-1.php denied
[Mon Oct 17 15:32:25 2011] [error] [client 127.0.0.1] (13)Permission denied: access to / denied
[Mon Oct 17 15:37:26 2011] [error] [client 127.0.0.1] (13)Permission denied: access to / denied
[Mon Oct 17 15:37:43 2011] [error] [client 127.0.0.1] (13)Permission denied: access to / denied
[Mon Oct 17 15:38:32 2011] [error] [client 127.0.0.1] (13)Permission denied: access to / denied
[Mon Oct 17 15:42:56 2011] [crit] [client 127.0.0.1] (13)Permission denied: /home/websites/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable
[Mon Oct 17 15:43:12 2011] [crit] [client 127.0.0.1] (13)Permission denied: /home/websites/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable
[Mon Oct 17 15:45:34 2011] [crit] [client 127.0.0.1] (13)Permission denied: /home/websites/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable
[Mon Oct 17 15:51:25 2011] [crit] [client 127.0.0.1] (13)Permission denied: /home/websites/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable

Update 2, /home/websites directory

drwx------  3 websites websites 4096 Oct 17 14:52 .
drwxr-xr-x. 3 root     root     4096 Oct 17 13:42 ..
-rw-------  1 websites websites  372 Oct 17 14:52 .bash_history
-rw-r--r--  1 websites websites   18 May 30 11:46 .bash_logout
-rw-r--r--  1 websites websites  176 May 30 11:46 .bash_profile
-rw-r--r--  1 websites websites  124 May 30 11:46 .bashrc
drwxrwxr-x  3 websites apache   4096 Oct 17 13:45 public_html

Update 3, httpd.conf

### Section 1: Global Environment
ServerTokens OS
ServerRoot "/etc/httpd"
PidFile run/httpd.pid
Timeout 60
KeepAlive Off
MaxKeepAliveRequests 100
KeepAliveTimeout 15
<IfModule prefork.c>
    StartServers       8
    MinSpareServers    5
    MaxSpareServers   20
    ServerLimit      256
    MaxClients       256
    MaxRequestsPerChild 4000
</IfModule>
<IfModule worker.c>
    StartServers         4
    MaxClients         300
    MinSpareThreads     25
    MaxSpareThreads     75
    ThreadsPerChild     25
    MaxRequestsPerChild  0
</IfModule>
#Listen 12.34.56.78:80
Listen 80
LoadModule auth_basic_module modules/mod_auth_basic.so
LoadModule auth_digest_module modules/mod_auth_digest.so
LoadModule authn_file_module modules/mod_authn_file.so
LoadModule authn_alias_module modules/mod_authn_alias.so
LoadModule authn_anon_module modules/mod_authn_anon.so
LoadModule authn_dbm_module modules/mod_authn_dbm.so
LoadModule authn_default_module modules/mod_authn_default.so
LoadModule authz_host_module modules/mod_authz_host.so
LoadModule authz_user_module modules/mod_authz_user.so
LoadModule authz_owner_module modules/mod_authz_owner.so
LoadModule authz_groupfile_module modules/mod_authz_groupfile.so
LoadModule authz_dbm_module modules/mod_authz_dbm.so
LoadModule authz_default_module modules/mod_authz_default.so
LoadModule ldap_module modules/mod_ldap.so
LoadModule authnz_ldap_module modules/mod_authnz_ldap.so
LoadModule include_module modules/mod_include.so
LoadModule log_config_module modules/mod_log_config.so
LoadModule logio_module modules/mod_logio.so
LoadModule env_module modules/mod_env.so
LoadModule ext_filter_module modules/mod_ext_filter.so
LoadModule mime_magic_module modules/mod_mime_magic.so
LoadModule expires_module modules/mod_expires.so
LoadModule deflate_module modules/mod_deflate.so
LoadModule headers_module modules/mod_headers.so
LoadModule usertrack_module modules/mod_usertrack.so
LoadModule setenvif_module modules/mod_setenvif.so
LoadModule mime_module modules/mod_mime.so
LoadModule dav_module modules/mod_dav.so
LoadModule status_module modules/mod_status.so
LoadModule autoindex_module modules/mod_autoindex.so
LoadModule info_module modules/mod_info.so
LoadModule dav_fs_module modules/mod_dav_fs.so
LoadModule vhost_alias_module modules/mod_vhost_alias.so
LoadModule negotiation_module modules/mod_negotiation.so
LoadModule dir_module modules/mod_dir.so
LoadModule actions_module modules/mod_actions.so
LoadModule speling_module modules/mod_speling.so
LoadModule userdir_module modules/mod_userdir.so
LoadModule alias_module modules/mod_alias.so
LoadModule substitute_module modules/mod_substitute.so
LoadModule rewrite_module modules/mod_rewrite.so
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
LoadModule proxy_ftp_module modules/mod_proxy_ftp.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
LoadModule proxy_connect_module modules/mod_proxy_connect.so
LoadModule cache_module modules/mod_cache.so
LoadModule suexec_module modules/mod_suexec.so
LoadModule disk_cache_module modules/mod_disk_cache.so
LoadModule cgi_module modules/mod_cgi.so
LoadModule version_module modules/mod_version.so
Include conf.d/*.conf
#ExtendedStatus On
User apache
Group apache
ServerAdmin root@localhost
#ServerName www.example.com:80
UseCanonicalName Off
DocumentRoot "/var/www/html"
#
# Each directory to which Apache has access can be configured with respect
# to which services and features are allowed and/or disabled in that
# directory (and its subdirectories).
#
# First, we configure the "default" to be a very restrictive set of features.
#
<Directory />
    Options FollowSymLinks
    AllowOverride None
</Directory>
#
# Note that from this point forward you must specifically allow
# particular features to be enabled - so if something's not working as
# you might expect, make sure that you have specifically enabled it below.
#
# This should be changed to whatever you set DocumentRoot to.
#
<Directory "/home/websites/public_html">
    # Possible values for the Options directive are "None", "All",
    # or any combination of:
    #   Indexes Includes FollowSymLinks SymLinksifOwnerMatch ExecCGI MultiViews
    # Note that "MultiViews" must be named *explicitly* --- "Options All"
    # doesn't give it to you.
    # The Options directive is both complicated and important. Please see
    # http://httpd.apache.org/docs/2.2/mod/core.html#options
    # for more information.
    Options Indexes FollowSymLinks
    # AllowOverride controls what directives may be placed in .htaccess files.
    # It can be "All", "None", or any combination of the keywords:
    #   Options FileInfo AuthConfig Limit
    AllowOverride None
    # Controls who can get stuff from this server.
    Order allow,deny
    Allow from all
</Directory>
#
# UserDir: The name of the directory that is appended onto a user's home
# directory if a ~user request is received.
#
# The path to the end user account 'public_html' directory must be
# accessible to the webserver userid. This usually means that ~userid
# must have permissions of 711, ~userid/public_html must have permissions
# of 755, and documents contained therein must be world-readable.
# Otherwise, the client will only receive a "403 Forbidden" message.
#
# See also: http://httpd.apache.org/docs/misc/FAQ.html#forbidden
#
<IfModule mod_userdir.c>
    # UserDir is disabled by default since it can confirm the presence
    # of a username on the system (depending on home directory permissions).
    UserDir disabled
    # To enable requests to /~user/ to serve the user's public_html
    # directory, remove the "UserDir disabled" line above, and uncomment
    # the following line instead:
    #UserDir public_html
</IfModule>
#
# Control access to UserDir directories. The following is an example
# for a site where these directories are restricted to read-only.
#
#<Directory /home/*/public_html>
#    AllowOverride FileInfo AuthConfig Limit
#    Options MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec
#    <Limit GET POST OPTIONS>
#        Order allow,deny
#        Allow from all
#    </Limit>
#    <LimitExcept GET POST OPTIONS>
#        Order deny,allow
#        Deny from all
#    </LimitExcept>
#</Directory>
#
# DirectoryIndex: sets the file that Apache will serve if a directory
# is requested.
#
# The index.html.var file (a type-map) is used to deliver content-
# negotiated documents. The MultiViews Option can be used for the
# same purpose, but it is much slower.
#
DirectoryIndex index.html index.html.var
#
# AccessFileName: The name of the file to look for in each directory
# for additional configuration directives. See also the AllowOverride
# directive.
#
AccessFileName .htaccess
#
# The following lines prevent .htaccess and .htpasswd files from being
# viewed by Web clients.
#
<Files ~ "^\.ht">
    Order allow,deny
    Deny from all
    Satisfy All
</Files>
#
# TypesConfig describes where the mime.types file (or equivalent) is
# to be found.
#
TypesConfig /etc/mime.types
#
# DefaultType is the default MIME type the server will use for a document
# if it cannot otherwise determine one, such as from filename extensions.
# If your server contains mostly text or HTML documents, "text/plain" is
# a good value. If most of your content is binary, such as applications
# or images, you may want to use "application/octet-stream" instead to
# keep browsers from trying to display binary files as though they are text.
#
DefaultType text/plain
#
# The mod_mime_magic module allows the server to use various hints from the
# contents of the file itself to determine its type. The MIMEMagicFile
# directive tells the module where the hint definitions are located.
#
<IfModule mod_mime_magic.c>
#   MIMEMagicFile /usr/share/magic.mime
    MIMEMagicFile conf/magic
</IfModule>
#
# HostnameLookups: Log the names of clients or just their IP addresses
# e.g., www.apache.org (on) or 204.62.129.132 (off).
# The default is off because it'd be overall better for the net if people
# had to knowingly turn this feature on, since enabling it means that
# each client request will result in AT LEAST one lookup request to the
# nameserver.
#
HostnameLookups Off
#EnableMMAP off
#EnableSendfile off
#
# ErrorLog: The location of the error log file.
# If you do not specify an ErrorLog directive within a <VirtualHost>
# container, error messages relating to that virtual host will be
# logged here. If you *do* define an error logfile for a <VirtualHost>
# container, that host's errors will be logged there and not here.
#
ErrorLog logs/error_log
LogLevel warn
#
# The following directives define some format nicknames for use with
# a CustomLog directive (see below).
#
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %l %u %t \"%r\" %>s %b" common
LogFormat "%{Referer}i -> %U" referer
LogFormat "%{User-agent}i" agent
# "combinedio" includes actual counts of actual bytes received (%I) and sent (%O);
# this requires the mod_logio module to be loaded.
#LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio
#
# The location and format of the access logfile (Common Logfile Format).
# If you do not define any access logfiles within a <VirtualHost>
# container, they will be logged here. Contrariwise, if you *do*
# define per-<VirtualHost> access logfiles, transactions will be
# logged therein and *not* in this file.
#
#CustomLog logs/access_log common
#
# If you would like to have separate agent and referer logfiles, uncomment
# the following directives.
#
#CustomLog logs/referer_log referer
#CustomLog logs/agent_log agent
#
# For a single logfile with access, agent, and referer information
# (Combined Logfile Format), use the following directive:
#
CustomLog logs/access_log combined
ServerSignature On
Alias /icons/ "/var/www/icons/"
<Directory "/var/www/icons">
    Options Indexes MultiViews FollowSymLinks
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>
#
# WebDAV module configuration section.
#
<IfModule mod_dav_fs.c>
    # Location of the WebDAV lock database.
    DAVLockDB /var/lib/dav/lockdb
</IfModule>
#
# ScriptAlias: This controls which directories contain server scripts.
# ScriptAliases are essentially the same as Aliases, except that
# documents in the realname directory are treated as applications and
# run by the server when requested rather than as documents sent to the client.
# The same rules about trailing "/" apply to ScriptAlias directives as to Alias.
#
ScriptAlias /cgi-bin/ "/var/www/cgi-bin/"
#
# "/var/www/cgi-bin" should be changed to whatever your ScriptAliased
# CGI directory exists, if you have that configured.
#
<Directory "/var/www/cgi-bin">
    AllowOverride None
    Options None
    Order allow,deny
    Allow from all
</Directory>
IndexOptions FancyIndexing VersionSort NameWidth=* HTMLTable Charset=UTF-8
AddIconByEncoding (CMP,/icons/compressed.gif) x-compress x-gzip
AddIconByType (TXT,/icons/text.gif) text/*
AddIconByType (IMG,/icons/image2.gif) image/*
AddIconByType (SND,/icons/sound2.gif) audio/*
AddIconByType (VID,/icons/movie.gif) video/*
AddIcon /icons/binary.gif .bin .exe
AddIcon /icons/binhex.gif .hqx
AddIcon /icons/tar.gif .tar
AddIcon /icons/world2.gif .wrl .wrl.gz .vrml .vrm .iv
AddIcon /icons/compressed.gif .Z .z .tgz .gz .zip
AddIcon /icons/a.gif .ps .ai .eps
AddIcon /icons/layout.gif .html .shtml .htm .pdf
AddIcon /icons/text.gif .txt
AddIcon /icons/c.gif .c
AddIcon /icons/p.gif .pl .py
AddIcon /icons/f.gif .for
AddIcon /icons/dvi.gif .dvi
AddIcon /icons/uuencoded.gif .uu
AddIcon /icons/script.gif .conf .sh .shar .csh .ksh .tcl
AddIcon /icons/tex.gif .tex
AddIcon /icons/bomb.gif core
AddIcon /icons/back.gif ..
AddIcon /icons/hand.right.gif README
AddIcon /icons/folder.gif ^^DIRECTORY^^
AddIcon /icons/blank.gif ^^BLANKICON^^
#
# DefaultIcon is which icon to show for files which do not have an icon
# explicitly set.
#
DefaultIcon /icons/unknown.gif
#
# AddDescription allows you to place a short description after a file in
# server-generated indexes. These are only displayed for FancyIndexed
# directories.
# Format: AddDescription "description" filename
#
#AddDescription "GZIP compressed document" .gz
#AddDescription "tar archive" .tar
#AddDescription "GZIP compressed tar archive" .tgz
#
# ReadmeName is the name of the README file the server will look for by
# default, and append to directory listings.
#
# HeaderName is the name of a file which should be prepended to
# directory indexes.
ReadmeName README.html
HeaderName HEADER.html
#
# IndexIgnore is a set of filenames which directory indexing should ignore
# and not include in the listing. Shell-style wildcarding is permitted.
#
IndexIgnore .??* *~ *# HEADER* README* RCS CVS *,v *,t
#
# DefaultLanguage and AddLanguage allows you to specify the language of
# a document. You can then use content negotiation to give a browser a
# file in a language the user can understand.
#
# Specify a default language. This means that all data
# going out without a specific language tag (see below) will
# be marked with this one. You probably do NOT want to set
# this unless you are sure it is correct for all cases.
#
# * It is generally better to not mark a page as
# * being a certain language than marking it with the wrong
# * language!
#
# DefaultLanguage nl
#
# Note 1: The suffix does not have to be the same as the language
# keyword --- those with documents in Polish (whose net-standard
# language code is pl) may wish to use "AddLanguage pl .po" to
# avoid the ambiguity with the common suffix for perl scripts.
#
# Note 2: The example entries below illustrate that in some cases
# the two character 'Language' abbreviation is not identical to
# the two character 'Country' code for its country,
# E.g. 'Danmark/dk' versus 'Danish/da'.
#
# Note 3: In the case of 'ltz' we violate the RFC by using a three char
# specifier. There is 'work in progress' to fix this and get
# the reference data for rfc1766 cleaned up.
#
# Catalan (ca) - Croatian (hr) - Czech (cs) - Danish (da) - Dutch (nl)
# English (en) - Esperanto (eo) - Estonian (et) - French (fr) - German (de)
# Greek-Modern (el) - Hebrew (he) - Italian (it) - Japanese (ja)
# Korean (ko) - Luxembourgeois* (ltz) - Norwegian Nynorsk (nn)
# Norwegian (no) - Polish (pl) - Portugese (pt)
# Brazilian Portuguese (pt-BR) - Russian (ru) - Swedish (sv)
# Simplified Chinese (zh-CN) - Spanish (es) - Traditional Chinese (zh-TW)
#
AddLanguage ca .ca
AddLanguage cs .cz .cs
AddLanguage da .dk
AddLanguage de .de
AddLanguage el .el
AddLanguage en .en
AddLanguage eo .eo
AddLanguage es .es
AddLanguage et .et
AddLanguage fr .fr
AddLanguage he .he
AddLanguage hr .hr
AddLanguage it .it
AddLanguage ja .ja
AddLanguage ko .ko
AddLanguage ltz .ltz
AddLanguage nl .nl
AddLanguage nn .nn
AddLanguage no .no
AddLanguage pl .po
AddLanguage pt .pt
AddLanguage pt-BR .pt-br
AddLanguage ru .ru
AddLanguage sv .sv
AddLanguage zh-CN .zh-cn
AddLanguage zh-TW .zh-tw
#
# LanguagePriority allows you to give precedence to some languages
# in case of a tie during content negotiation.
#
# Just list the languages in decreasing order of preference. We have
# more or less alphabetized them here. You probably want to change this.
#
LanguagePriority en ca cs da de el eo es et fr he hr it ja ko ltz nl nn no pl pt pt-BR ru sv zh-CN zh-TW
#
# ForceLanguagePriority allows you to serve a result page rather than
# MULTIPLE CHOICES (Prefer) [in case of a tie] or NOT ACCEPTABLE (Fallback)
# [in case no accepted languages matched the available variants]
#
ForceLanguagePriority Prefer Fallback
#
# Specify a default charset for all content served; this enables
# interpretation of all content as UTF-8 by default. To use the
# default browser choice (ISO-8859-1), or to allow the META tags
# in HTML content to override this choice, comment out this directive:
#
AddDefaultCharset UTF-8
#
# AddType allows you to add to or override the MIME configuration
# file mime.types for specific file types.
#
#AddType application/x-tar .tgz
#
# AddEncoding allows you to have certain browsers uncompress
# information on the fly. Note: Not all browsers support this.
# Despite the name similarity, the following Add* directives have nothing
# to do with the FancyIndexing customization directives above.
#
#AddEncoding x-compress .Z
#AddEncoding x-gzip .gz .tgz
#
# If the AddEncoding directives above are commented-out, then you
# probably should define those extensions to indicate media types:
#
AddType application/x-compress .Z
AddType application/x-gzip .gz .tgz
#
# MIME-types for downloading Certificates and CRLs
#
AddType application/x-x509-ca-cert .crt
AddType application/x-pkcs7-crl .crl
#
# AddHandler allows you to map certain file extensions to "handlers":
# actions unrelated to filetype. These can be either built into the server
# or added with the Action directive (see below)
#
# To use CGI scripts outside of ScriptAliased directories:
# (You will also need to add "ExecCGI" to the "Options" directive.)
#
#AddHandler cgi-script .cgi
#
# For files that include their own HTTP headers:
#
#AddHandler send-as-is asis
#
# For type maps (negotiated resources):
# (This is enabled by default to allow the Apache "It Worked" page
# to be distributed in multiple languages.)
#
AddHandler type-map var
#
# Filters allow you to process content before it is sent to the client.
#
# To parse .shtml files for server-side includes (SSI):
# (You will also need to add "Includes" to the "Options" directive.)
#
AddType text/html .shtml
AddOutputFilter INCLUDES .shtml
#
# Action lets you define media types that will execute a script whenever
# a matching file is called. This eliminates the need for repeated URL
# pathnames for oft-used CGI file processors.
# Format: Action media/type /cgi-script/location
# Format: Action handler-name /cgi-script/location
#
# Customizable error responses come in three flavors:
# 1) plain text 2) local redirects 3) external redirects
#
# Some examples:
#ErrorDocument 500 "The server made a boo boo."
#ErrorDocument 404 /missing.html
#ErrorDocument 404 "/cgi-bin/missing_handler.pl"
#ErrorDocument 402 http://www.example.com/subscription_info.html
#
# Putting this all together, we can internationalize error responses.
#
# We use Alias to redirect any /error/HTTP_<error>.html.var response to
# our collection of by-error message multi-language collections. We use
# includes to substitute the appropriate text.
#
# You can modify the messages' appearance without changing any of the
# default HTTP_<error>.html.var files by adding the line:
#
#   Alias /error/include/ "/your/include/path/"
#
# which allows you to create your own set of files by starting with the
# /var/www/error/include/ files and
# copying them to /your/include/path/, even on a per-VirtualHost basis.
#
Alias /error/ "/var/www/error/"
<IfModule mod_negotiation.c>
<IfModule mod_include.c>
    <Directory "/var/www/error">
        AllowOverride None
        Options IncludesNoExec
        AddOutputFilter Includes html
        AddHandler type-map var
        Order allow,deny
        Allow from all
        LanguagePriority en es de fr
        ForceLanguagePriority Prefer Fallback
    </Directory>
#   ErrorDocument 400 /error/HTTP_BAD_REQUEST.html.var
#   ErrorDocument 401 /error/HTTP_UNAUTHORIZED.html.var
#   ErrorDocument 403 /error/HTTP_FORBIDDEN.html.var
#   ErrorDocument 404 /error/HTTP_NOT_FOUND.html.var
#   ErrorDocument 405 /error/HTTP_METHOD_NOT_ALLOWED.html.var
#   ErrorDocument 408 /error/HTTP_REQUEST_TIME_OUT.html.var
#   ErrorDocument 410 /error/HTTP_GONE.html.var
#   ErrorDocument 411 /error/HTTP_LENGTH_REQUIRED.html.var
#   ErrorDocument 412 /error/HTTP_PRECONDITION_FAILED.html.var
#   ErrorDocument 413 /error/HTTP_REQUEST_ENTITY_TOO_LARGE.html.var
#   ErrorDocument 414 /error/HTTP_REQUEST_URI_TOO_LARGE.html.var
#   ErrorDocument 415 /error/HTTP_UNSUPPORTED_MEDIA_TYPE.html.var
#   ErrorDocument 500 /error/HTTP_INTERNAL_SERVER_ERROR.html.var
#   ErrorDocument 501 /error/HTTP_NOT_IMPLEMENTED.html.var
#   ErrorDocument 502 /error/HTTP_BAD_GATEWAY.html.var
#   ErrorDocument 503 /error/HTTP_SERVICE_UNAVAILABLE.html.var
#   ErrorDocument 506 /error/HTTP_VARIANT_ALSO_VARIES.html.var
</IfModule>
</IfModule>
#
# The following directives modify normal HTTP response behavior to
# handle known problems with browser implementations.
#
BrowserMatch "Mozilla/2" nokeepalive
BrowserMatch "MSIE 4\.0b2;" nokeepalive downgrade-1.0 force-response-1.0
BrowserMatch "RealPlayer 4\.0" force-response-1.0
BrowserMatch "Java/1\.0" force-response-1.0
BrowserMatch "JDK/1\.0" force-response-1.0
#
# The following directive disables redirects on non-GET requests for
# a directory that does not include the trailing slash. This fixes a
# problem with Microsoft WebFolders which does not appropriately handle
# redirects for folders with DAV methods.
# Same deal with Apple's DAV filesystem and Gnome VFS support for DAV.
#
BrowserMatch "Microsoft Data Access Internet Publishing Provider" redirect-carefully
BrowserMatch "MS FrontPage" redirect-carefully
BrowserMatch "^WebDrive" redirect-carefully
BrowserMatch "^WebDAVFS/1.[0123]" redirect-carefully
BrowserMatch "^gnome-vfs/1.0" redirect-carefully
BrowserMatch "^XML Spy" redirect-carefully
BrowserMatch "^Dreamweaver-WebDAV-SCM1" redirect-carefully
#
# Allow server status reports generated by mod_status,
# with the URL of http://servername/server-status
# Change the ".example.com" to match your domain to enable.
#
#<Location /server-status>
#    SetHandler server-status
#    Order deny,allow
#    Deny from all
#    Allow from .example.com
#</Location>
#
# Allow remote server configuration reports, with the URL of
# http://servername/server-info (requires that mod_info.c be loaded).
# Change the ".example.com" to match your domain to enable.
#
#<Location /server-info>
#    SetHandler server-info
#    Order deny,allow
#    Deny from all
#    Allow from .example.com
#</Location>
#
# Proxy Server directives. Uncomment the following lines to
# enable the proxy server:
#
#<IfModule mod_proxy.c>
#ProxyRequests On
#
#<Proxy *>
#    Order deny,allow
#    Deny from all
#    Allow from .example.com
#</Proxy>
#
# Enable/disable the handling of HTTP/1.1 "Via:" headers.
# ("Full" adds the server version; "Block" removes all outgoing Via: headers)
# Set to one of: Off | On | Full | Block
#
#ProxyVia On
#
# To enable a cache of proxied content, uncomment the following lines.
# See http://httpd.apache.org/docs/2.2/mod/mod_cache.html for more details.
#
#<IfModule mod_disk_cache.c>
#    CacheEnable disk /
#    CacheRoot "/var/cache/mod_proxy"
#</IfModule>
#
#</IfModule>
# End of proxy directives.

### Section 3: Virtual Hosts
#
# VirtualHost: If you want to maintain multiple domains/hostnames on your
# machine you can setup VirtualHost containers for them. Most configurations
# use only name-based virtual hosts so the server doesn't need to worry about
# IP addresses. This is indicated by the asterisks in the directives below.
#
# Please see the documentation at
# <URL:http://httpd.apache.org/docs/2.2/vhosts/>
# for further details before you try to setup virtual hosts.
#
# You may use the command line option '-S' to verify your virtual host
# configuration.
#
# Use name-based virtual hosting.
#
NameVirtualHost *:80
#
# NOTE: NameVirtualHost cannot be used without a port specifier
# (e.g. :80) if mod_ssl is being used, due to the nature of the
# SSL protocol.
#
# VirtualHost example:
# Almost any Apache directive may go into a VirtualHost container.
# The first VirtualHost section is used for requests without a known
# server name.
#
#<VirtualHost *:80>
#    ServerAdmin [email protected]
#    DocumentRoot /www/docs/dummy-host.example.com
#    ServerName dummy-host.example.com
#    ErrorLog logs/dummy-host.example.com-error_log
#    CustomLog logs/dummy-host.example.com-access_log common
#</VirtualHost>

# domain: mysite.com
# public: /home/websites/public_html/mysite.com/
<VirtualHost *:80>
    # Admin email, Server Name (domain name) and any aliases
    ServerAdmin [email protected]
    ServerName mysite.com
    ServerAlias www.mysite.com
    # Index file and Document Root (where the public files are located)
    DirectoryIndex index.html
    DocumentRoot /home/websites/public_html/mysite.com/public
    # Custom log file locations
    LogLevel warn
    ErrorLog /home/websites/public_html/mysite.com/log/error.log
    CustomLog /home/websites/public_html/mysite.com/log/access.log combined
</VirtualHost>
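
    With SELinux out of the picture, the error_log itself narrows this down: the [crit] entries show Apache cannot even open /home/websites/.htaccess, and Update 2 shows /home/websites is drwx------, so the apache user (User apache / Group apache in this httpd.conf) lacks the execute permission needed to traverse into the DocumentRoot path. The stock UserDir comment in this very config file describes the usual remedy: 711 on the home directory and 755 on the public directories. A sketch of the permission changes, assuming the layout shown in Update 2 and that the websites account should stay otherwise private:

# Sketch only: give the apache user traverse (x) access along the DocumentRoot path.
chmod 711 /home/websites                    # was drwx------; x alone lets apache descend without listing
# public_html is already drwxrwxr-x with group apache; the directories below it
# (mysite.com, public) are not shown in the question, so make sure they are traversable too:
chmod 755 /home/websites/public_html/mysite.com /home/websites/public_html/mysite.com/public
# Documents served from public/ must be readable by the apache user:
find /home/websites/public_html/mysite.com/public -type f -exec chmod 644 {} \;

    After that, re-requesting the page and re-checking the error_log should confirm whether the 403 was purely a filesystem-permission issue rather than anything in httpd.conf.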

    Read the article
