Search Results

Search found 6433 results on 258 pages for 'trouble shooting'.

Page 255 of 258

  • XDoclet, Maven and Hibernate

    - by Vinothbabu
    I am having some trouble setting up XDoclet and am getting this error: Error resolving version for plugin 'xdoclet:maven2-xdoclet2-plugin' from the repositories <pluginRepositories> <pluginRepository> <id>codehaus-plugins</id> <url>http://dist.codehaus.org/</url> <layout>legacy</layout> <snapshots> <enabled>true</enabled> </snapshots> <releases> <enabled>true</enabled> </releases> </pluginRepository> </pluginRepositories> <plugin> <groupId>xdoclet</groupId> <artifactId>maven2-xdoclet2-plugin</artifactId> <executions> <execution> <id>xdoclet</id> <phase>generate-sources</phase> <goals> <goal>xdoclet</goal> </goals> </execution> </executions> <dependencies> <dependency> <groupId>xdoclet-plugins</groupId> <artifactId>xdoclet-plugin-qtags</artifactId> <version>1.0.4-SNAPSHOT</version> </dependency> <dependency> <groupId>xdoclet-plugins</groupId> <artifactId>xdoclet-taglib-qtags</artifactId> <version>1.0.4-SNAPSHOT</version> </dependency> </dependencies> <goals> <goal>xdoclet</goal> </goals> <configuration> <configs> <config> <components> <component> <classname>org.xdoclet.plugin.qtags.impl.QTagImplPlugin</classname> </component> <component> <classname>org.xdoclet.plugin.qtags.impl.QTagLibraryPlugin</classname> <params> <packagereplace>org.xdoclet.plugin.${xdoclet.plugin.namespace}.qtags</packagereplace> </params> </component> <component> <classname>org.xdoclet.plugin.qtags.doclipse.QTagDoclipsePlugin</classname> <params> <filereplace>qtags.xml</filereplace> <namespace>${xdoclet.plugin.namespace}</namespace> </params> </component> <component> <classname>org.xdoclet.plugin.qtags.confluence.QTagConfluencePlugin</classname> <params> <destdir>${project.build.directory}/tag-doc</destdir> <namespace>${xdoclet.plugin.namespace}</namespace> <filereplace>${xdoclet.plugin.namespace}.confluence</filereplace> </params> </component> </components> <includes>**/*.java</includes> <params> <destdir>${project.build.directory}/generated-resources/xdoclet</destdir> </params> </config> </configs> </configuration> </plugin> Some of my questions: Would you recommend going with XDoclet, or is there an alternative to it? Is it one of the better approaches, given that the hbm files get generated automatically? Any suggestions on how my Java objects should be persisted in the DB? Any good tutorials on XDoclet, Maven, and Hibernate? I am using XDoclet to generate the hbm files automatically by annotating my POJOs.
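
    One widely used alternative to generating hbm files with XDoclet is to map the POJOs directly with Hibernate/JPA annotations, so no mapping files are generated at all. A minimal sketch, assuming a JPA provider such as Hibernate Annotations is on the classpath; the Customer entity and its columns are made up purely for illustration:

    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import javax.persistence.Table;

    // Hypothetical entity mapped with annotations instead of a generated hbm.xml.
    @Entity
    @Table(name = "customer")
    public class Customer {

        @Id
        @GeneratedValue                 // let the provider pick the id strategy
        private Long id;

        @Column(name = "name", nullable = false, length = 100)
        private String name;

        public Long getId() { return id; }
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    With this approach the Maven build needs no code-generation step at all; the annotated classes are simply listed in hibernate.cfg.xml or persistence.xml.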

    Read the article

  • Spring MVC forward to JSP

    - by jerluc
    I currently have my web.xml configured to catch 404s and send them to my Spring controller, which performs a search based on the originally requested URL. The functionality is all there as far as the catch and the search go; however, the trouble begins when I try to return a view. <bean class="org.springframework.web.servlet.view.ContentNegotiatingViewResolver" p:order="1"> <property name="mediaTypes"> <map> <entry key="json" value="application/json" /> <entry key="jsp" value="text/html" /> </map> </property> <property name="defaultContentType" value="application/json" /> <property name="favorPathExtension" value="true" /> <property name="viewResolvers"> <list> <bean class="org.springframework.web.servlet.view.BeanNameViewResolver" /> <bean id="viewResolver" class="org.springframework.web.servlet.view.InternalResourceViewResolver"> <property name="prefix" value="/WEB-INF/jsp/" /> <property name="suffix" value="" /> </bean> </list> </property> <property name="defaultViews"> <list> <bean class="org.springframework.web.servlet.view.json.MappingJacksonJsonView" /> </list> </property> <property name="ignoreAcceptHeader" value="true" /> </bean> This is a snippet from my MVC config file. The problem lies in resolving the view's path against the /WEB-INF/jsp/ directory. Using a logger in my JBoss setup, I can see that when I test this search controller by going to a non-existent page, the following occurs:
    1. The server can't find the request.
    2. The request is sent to the 404 error page (in this case my search controller).
    3. The search controller performs the search.
    4. The search controller returns a view name (for this illustration, we'll assume test.jsp is returned).
    5. Based on the server log, org.springframework.web.servlet.view.JstlView is initialized once my search controller returns the view name (so I assume it is being picked up correctly by the InternalResourceViewResolver).
    6. The server attempts to return content to the browser, resulting in a 404!
    A couple of things confuse me about this: I'm not 100% sure why this isn't resolving when test.jsp clearly exists under the /WEB-INF/jsp/ directory. Even if there were some other problem, why would it result in a 404? Shouldn't a 404 error page that itself results in another 404 theoretically create an infinite loop? Thanks for any help or pointers!
Controller class [incomplete]: @Controller public class SiteMapController { //-------------------------------------------------------------------------------------- @Autowired(required=true) private SearchService search; @Autowired(required=true) private CatalogService catalog; //-------------------------------------------------------------------------------------- @RequestMapping(value = "/sitemap", method = RequestMethod.GET) public String sitemap (HttpServletRequest request, HttpServletResponse response) { String forwardPath = ""; try { long startTime = System.nanoTime() / 1000000; String pathQuery = (String) request.getAttribute("javax.servlet.error.request_uri"); Scanner pathScanner = new Scanner(pathQuery).useDelimiter("\\/"); String context = pathScanner.next(); List<ProductLightDTO> results = new ArrayList<ProductLightDTO>(); StringBuilder query = new StringBuilder(); String currentValue; while (pathScanner.hasNext()) { currentValue = pathScanner.next().toLowerCase(); System.out.println(currentValue); if (query.length() > 0) query.append(" AND "); if (currentValue.contains("-")) { query.append("\""); query.append(currentValue.replace("-", " ")); query.append("\""); } else { query.append(currentValue + "*"); } } //results.addAll(this.doSearch(query.toString())); System.out.println("Request: " + pathQuery); System.out.println("Built Query:" + query.toString()); //System.out.println("Result size: " + results.size()); long totalTime = (System.nanoTime() / 1000000) - startTime; System.out.println("Total TTP: " + totalTime + "ms"); if (results == null || results.size() == 0) { forwardPath = "home.jsp"; } else if (results.size() == 1) { forwardPath = "product.jsp"; } else { forwardPath = "category.jsp"; } } catch (Exception ex) { System.err.println(ex); } System.out.println("Returning view: " + forwardPath); return forwardPath; } }
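
    For reference, the InternalResourceViewResolver above builds the JSP path as prefix + view name + suffix, so returning "test.jsp" with a prefix of /WEB-INF/jsp/ and an empty suffix should resolve to /WEB-INF/jsp/test.jsp. One way to take view-name resolution out of the picture while debugging is to return an explicit forward from a throwaway mapping; a minimal sketch, with the mapping path and JSP name chosen only for illustration:

    import org.springframework.stereotype.Controller;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RequestMethod;

    @Controller
    public class ForwardCheckController {

        // Bypasses the resolver chain: "forward:" dispatches straight to the JSP.
        // If this also comes back as a 404, the problem is in the error-page
        // dispatch itself rather than in how the view name is resolved.
        @RequestMapping(value = "/forward-check", method = RequestMethod.GET)
        public String forwardCheck() {
            return "forward:/WEB-INF/jsp/test.jsp";
        }
    }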

    Read the article

  • Ruby on Rails has_many :through relationship

    - by BennyB
    Hi i'm having a little trouble with a has_many through relationship for my app and was hoping to find some help. So i've got Users & Lectures. Lectures are created by one user but then other users can then "join" the Lectures that have been created. Users have their own profile feed of the Lectures they have created & also have a feed of Lectures friends have created. This question however is not about creating a lecture but rather "Joining" a lecture that has been created already. I've created a "lecturerelationships" model & controller to handle this relationship between Lectures & the Users who have Joined (which i call "actives"). Users also then MUST "Exit" the Lecture (either by clicking "Exit" or navigating to one of the header navigation links). I'm grateful if anyone can work through some of this with me... I've got: Users.rb model Lectures.rb model Users_controller Lectures_controller then the following model lecturerelationship.rb class lecturerelationship < ActiveRecord::Base attr_accessible :active_id, :joinedlecture_id belongs_to :active, :class_name => "User" belongs_to :joinedlecture, :class_name => "Lecture" validates :active_id, :presence => true validates :joinedlecture_id, :presence => true end lecturerelationships_controller.rb class LecturerelationshipsController < ApplicationController before_filter :signed_in_user def create @lecture = Lecture.find(params[:lecturerelationship][:joinedlecture_id]) current_user.join!(@lecture) redirect_to @lecture end def destroy @lecture = Lecturerelationship.find(params[:id]).joinedlecture current_user.exit!(@user) redirect_to @user end end Lectures that have been created (by friends) show up on a users feed in the following file _activity_item.html.erb <li id="<%= activity_item.id %>"> <%= link_to gravatar_for(activity_item.user, :size => 200), activity_item.user %><br clear="all"> <%= render :partial => 'shared/join', :locals => {:activity_item => activity_item} %> <span class="title"><%= link_to activity_item.title, lecture_url(activity_item) %></span><br clear="all"> <span class="user"> Joined by <%= link_to activity_item.user.name, activity_item.user %> </span><br clear="all"> <span class="timestamp"> <%= time_ago_in_words(activity_item.created_at) %> ago. </span> <% if current_user?(activity_item.user) %> <%= link_to "delete", activity_item, :method => :delete, :confirm => "Are you sure?", :title => activity_item.content %> <% end %> </li> Then you see I link to the the 'shared/join' partial above which can be seen in the file below _join.html.erb <%= form_for(current_user.lecturerelationships.build(:joinedlecture_id => activity_item.id)) do |f| %> <div> <%= f.hidden_field :joinedlecture_id %> </div> <%= f.submit "Join", :class => "btn btn-large btn-info" %> <% end %> Some more files that might be needed: config/routes.rb SampleApp::Application.routes.draw do resources :users do member do get :following, :followers, :joined_lectures end end resources :sessions, :only => [:new, :create, :destroy] resources :lectures, :only => [:create, :destroy, :show] resources :relationships, :only => [:create, :destroy] #for users following each other resources :lecturerelationships, :only => [:create, :destroy] #users joining existing lectures So what happens is the lecture comes in my activity_feed with a Join button option at the bottom...which should create a lecturerelationship of an "active" & "joinedlecture" (which obviously are supposed to be coming from the user & lecture classes. 
But the error i get when i click the join button is as follows: ActiveRecord::StatementInvalid in LecturerelationshipsController#create SQLite3::ConstraintException: constraint failed: INSERT INTO "lecturerelationships" ("active_id", "created_at", "joinedlecture_id", "updated_at") VALUES (?, ?, ?, ?) Also i've included my user model (seems the error is referring to it) user.rb class User < ActiveRecord::Base attr_accessible :email, :name, :password, :password_confirmation has_secure_password has_many :lectures, :dependent => :destroy has_many :lecturerelationships, :foreign_key => "active_id", :dependent => :destroy has_many :joined_lectures, :through => :lecturerelationships, :source => :joinedlecture before_save { |user| user.email = email.downcase } before_save :create_remember_token validates :name, :presence => true, :length => { :maximum => 50 } VALID_EMAIL_REGEX = /\A[\w+\-.]+@[a-z\d\-.]+\.[a-z]+\z/i validates :email, :presence => true, :format => { :with => VALID_EMAIL_REGEX }, :uniqueness => { :case_sensitive => false } validates :password, :presence => true, :length => { :minimum => 6 } validates :password_confirmation, :presence => true def activity # This feed is for "My Activity" - basically lectures i've started Lecture.where("user_id = ?", id) end def friendactivity Lecture.from_users_followed_by(self) end # lECTURE TO USER (JOINING) RELATIONSHIPS def joined?(selected_lecture) lecturerelationships.find_by_joinedlecture_id(selected_lecture.id) end def join!(selected_lecture) lecturerelationships.create!(:joinedlecture_id => selected_lecture.id) end def exit!(selected_lecture) lecturerelationships.find_by_joinedlecture_id(selected_lecture.id).destroy end end Thanks for any and all help - i'll be on here for a while so as mentioned i'd GREATLY appreciate someone who may have the time to work through my issues with me...

    Read the article

  • How can I use curl to log in multiple users from one PHP script

    - by kamal
    Here is the scenario: I have configured multiple users with login names aa1, aa2 .. zz99 , all with the same password, now i want to login to a php based server with these login ID's. I have a working script that logs in one user with a username and password, and using curl, browses to a target page: // Assume php , since somehow the php encapsulation quotes were giving me trouble $sHost = $argv[2]; $sStart = $argv[3]; $sReqId = $argv[4]; $sPage = $argv[5]; $sReqLogFile = $argv[6]; $sRespLogFile = $argv[7]; $sUserName = $argv[8]; $sPassword = $argv[9]; $sExecDelay = $argv[10]; //optional args: if($argc 11) { $sCommonSID = $argv[11]; } //$sXhprofLogFile = ""; $sSysStatsLogFile= ""; $sBaseUrl = 'https://'.$sHost.'/'; $nExecTime = 0; $sCookieFileName = 'cookiejar/'.genRandomString().'.txt'; touch($sCookieFileName); // Set the execution delay: $sStart += $sExecDelay; // Get the PHP Session Id: if(isset($sCommonSID)) { $sSID = $sCommonSID; }else{ $sSID = getSID($sHost,$sBaseUrl, $sUserName, $sPassword); } // Sleep for 100us intervals until we reach the stated execution time: do { usleep(100); }while(getFullMicrotime()$sPage, "pageUrl"=$sBaseUrl, "execStart" =$nExecStart, "execEnd"=$nExecEnd, "respTime"=$nExecTime, "xhprofToken"=$sXhpToken, "xhprofLink"=$sXhpLink, "fiveMinLoad"=$nFiveMinLoad); }else{ $nExecStart = 0; $sUrl = "***ERROR***"; $aReturn = null; } writeReqLog($sReqId, $nExecStart, $sSID, $sUrl, $sReqLogFile); return $aReturn; } function getFullMicrotime() { $fMtime = microtime(true); if(strpos($fMtime, ' ') !== false) { list($nUsec, $nSec) = explode(' ', $fMtime); return $nSec + $nUsec; } return $fMtime; } function writeRespLog($nReqId, $sHost, $sPage, $sSID = "***ERROR***", $nExecStart = 0, $nExecEnd = 0, $nRespTime = 0, $sXhpToken = "", $sXhpLink = "", $nFiveMinLoad = 0, $sRespLogFile) { $sMsg = $nReqId; $sMsg .= "\t".$sHost; $sMsg .= "/".$sPage; $sMsg .= "\t".$sSID; $sMsg .= "\t".$nExecStart; $sMsg .= "\t".$nExecEnd; $sMsg .= "\t".$nRespTime; $sMsg .= "\t".$sXhpToken; $sMsg .= "\t".$nFiveMinLoad; error_log($sMsg."\n",3,$sRespLogFile); } function writeReqLog($nReqId, $nExecStart, $sSID, $sUrl, $sReqLogFile) { $sMsg = $nReqId; $sMsg .= "\t".$sUrl; $sMsg .= "\t".$sSID; $sMsg .= "\t".$nExecStart; error_log($sMsg."\n",3,$sReqLogFile); } function parseSIDValue($sText) { $sSID = ""; preg_match('/SID:(.*)/',$sText, $aSID); if (count($aSID)) { $sSID = $aSID[1]; } return $sSID; } function parseFiveMinLoad($sText) { $nLoad = 0; $aMatch = array(); preg_match('/--5-MIN-LOAD:(.*)--/',$sText, $aMatch); if (count($aMatch)) { $nLoad = $aMatch[1]; } return $nLoad; } function curlRequest($sUrl, $sSID="") { global $sCookieFileName; $ch = curl_init(); curl_setopt($ch, CURLOPT_URL, $sUrl); curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, FALSE); curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 2); curl_setopt($ch, CURLOPT_HEADER, 1); curl_setopt($ch, CURLOPT_USERAGENT, "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)"); curl_setopt($ch, CURLOPT_RETURNTRANSFER,1); if($sSID == "") { curl_setopt($ch, CURLOPT_COOKIEJAR, $sCookieFileName); } else { curl_setopt($ch, CURLOPT_COOKIEFILE, $sCookieFileName); } $result =curl_exec ($ch); curl_close ($ch); return $result; } function parseXHProfToken($sPageContent) { //https://ktest.server.net/xhprof/xhprof_html/index.php?run=4d004b280a990&source=mybox $sToken = ""; $sRelLink = ""; $aMatch = array(); $aResp = array(); preg_match('/$sToken, "relLink"=$sRelLink); return $aResp; } function genRandomString() { $length = 10; $characters = '0123456789abcdefghijklmnopqrstuvwxyz'; 
$string = ''; for ($p = 0; $p
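
    Since the PHP script above is cut off, here is just the core idea it depends on, namely giving every simulated user an independent cookie store so the sessions do not collide, sketched in Java rather than PHP/curl; the host, login path, and form field names are assumptions:

    import java.net.CookieManager;
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class MultiLogin {
        public static void main(String[] args) throws Exception {
            String base = "https://example.test";                      // hypothetical server
            for (int i = 1; i <= 99; i++) {
                String user = "aa" + i;                                 // aa1 .. aa99 for brevity
                // One client per user = one independent cookie jar per session.
                HttpClient client = HttpClient.newBuilder()
                        .cookieHandler(new CookieManager())
                        .build();
                String form = "username=" + user + "&password=secret";  // assumed field names
                HttpRequest login = HttpRequest.newBuilder(URI.create(base + "/login.php"))
                        .header("Content-Type", "application/x-www-form-urlencoded")
                        .POST(HttpRequest.BodyPublishers.ofString(form))
                        .build();
                HttpResponse<String> resp = client.send(login, HttpResponse.BodyHandlers.ofString());
                System.out.println(user + " -> HTTP " + resp.statusCode());
                // Further requests through the same client reuse that user's session cookie,
                // which is what the per-run cookie file does in the curl version.
            }
        }
    }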

    Read the article

  • How to detect a crashing tabbed WebBrowser control and handle it?

    - by David Eaton
    I have a desktop application (forms) with a tab control, I assign a tab and a new custom webrowser control. I open up about 10 of these tabs. Each one visits about 100 - 500 different pages. The trouble is that if 1 webbrowser control has a problem it shuts down the entire program. I want to be able to close the offending webbrowser control and open a new one in it's place. Is there any event that I need to subscribe to catch a crashing or unresponsive webbrowser control ? I am using C# on windows 7 (Forms), .NET framework v4 =============================================================== UPDATE: 1 - The Tabbed WebBrowser Example Here is the code I have and How I use the webbrowser control in the most basic way. Create a new forms project and name it SimpleWeb Add a new class and name it myWeb.cs, here is the code to use. using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Windows.Forms; using System.Security.Policy; namespace SimpleWeb { //inhert all of webbrowser class myWeb : WebBrowser { public myWeb() { //no javascript errors this.ScriptErrorsSuppressed = true; //Something we want set? AssignEvents(); } //keep near the top private void AssignEvents() { //assign WebBrowser events to our custom methods Navigated += myWeb_Navigated; DocumentCompleted += myWeb_DocumentCompleted; Navigating += myWeb_Navigating; NewWindow += myWeb_NewWindow; } #region Events //List of events:http://msdn.microsoft.com/en-us/library/system.windows.forms.webbrowser_events%28v=vs.100%29.aspx //Fired when a new windows opens private void myWeb_NewWindow(object sender, System.ComponentModel.CancelEventArgs e) { //cancel all popup windows e.Cancel = true; //beep to let you know canceled new window Console.Beep(9000, 200); } //Fired before page is navigated (not sure if its before or during?) private void myWeb_Navigating(object sender, System.Windows.Forms.WebBrowserNavigatingEventArgs args) { } //Fired after page is navigated (but not loaded) private void myWeb_Navigated(object sender, System.Windows.Forms.WebBrowserNavigatedEventArgs args) { } //Fired after page is loaded (Catch 22 - Iframes could be considered a page, can fire more than once. Ads are good examples) private void myWeb_DocumentCompleted(System.Object sender, System.Windows.Forms.WebBrowserDocumentCompletedEventArgs args) { } #endregion //Answer supplied by mo. (modified)? public void OpenUrl(string url) { try { //this.OpenUrl(url); this.Navigate(url); } catch (Exception ex) { MessageBox.Show("Your App Crashed! Because = " + ex.ToString()); //MyApplication.HandleException(ex); } } //Keep near the bottom private void RemoveEvents() { //Remove Events Navigated -= myWeb_Navigated; DocumentCompleted -= myWeb_DocumentCompleted; Navigating -= myWeb_Navigating; NewWindow -= myWeb_NewWindow; } } } On Form1 drag a standard tabControl and set the dock to fill, you can go into the tab collection and delete the pre-populated tabs if you like. Right Click on Form1 and Select "View Code" and replace it with this code. 
using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Windows.Forms; using mshtml; namespace SimpleWeb { public partial class Form1 : Form { public Form1() { InitializeComponent(); //Load Up 10 Tabs for (int i = 0; i <= 10; i++) { newTab("Test_" + i, "http://wwww.yahoo.com"); } } private void newTab(string Title, String Url) { //Create a new Tab TabPage newTab = new TabPage(); newTab.Name = Title; newTab.Text = Title; //create webbrowser Instance myWeb newWeb = new myWeb(); //Add webbrowser to new tab newTab.Controls.Add(newWeb); newWeb.Dock = DockStyle.Fill; //Add New Tab to Tab Pages tabControl1.TabPages.Add(newTab); newWeb.OpenUrl(Url); } } } Save and Run the project. Using the answer below by mo. , you can surf the first url with no problem, but what about all the urls the user clicks on? How do we check those? I prefer not to add events to every single html element on a page, there has to be a way to run the new urls thru the function OpenUrl before it navigates without having an endless loop. Thanks.

    Read the article

  • Inconsistent results in R with RNetCDF - why?

    - by sarcozona
    I am having trouble extracting data from NetCDF data files using RNetCDF. The data files each have 3 dimensions (longitude, latitude, and a date) and 3 variables (latitude, longitude, and a climate variable). There are four datasets, each with a different climate variable. Here is some of the output from print.nc(p8m.tmax) for clarity. The other datasets are identical except for the specific climate variable. dimensions: month = UNLIMITED ; // (1368 currently) lat = 3105 ; lon = 7025 ; variables: float lat(lat) ; lat:long_name = "latitude" ; lat:standard_name = "latitude" ; lat:units = "degrees_north" ; float lon(lon) ; lon:long_name = "longitude" ; lon:standard_name = "longitude" ; lon:units = "degrees_east" ; short tmax(lon, lat, month) ; tmax:missing_value = -9999 ; tmax:_FillValue = -9999 ; tmax:units = "degree_celsius" ; tmax:scale_factor = 0.01 ; tmax:valid_min = -5000 ; tmax:valid_max = 6000 ; I am getting behavior I don't understand when I use the var.get.nc function from the RNetCDF package. For example, when I attempt to extract 82 values beginning at stval from the maximum temperature data (p8m.tmax <- open.nc(tmaxdataset.nc)) with > var.get.nc(p8m.tmax,'tmax', start=c(lon_val, lat_val, stval),count=c(1,1,82)) (where lon_val and lat_val specify the location in the dataset of the coordinates I'm interested in and stval is stval is set to which(time_vec==200201), which in this case equaled 1285.) I get Error: Invalid argument But after successfully extracting 80 and 81 values > var.get.nc(p8m.tmax,'tmax', start=c(lon_val, lat_val, stval),count=c(1,1,80)) > var.get.nc(p8m.tmax,'tmax', start=c(lon_val, lat_val, stval),count=c(1,1,81)) the command with 82 works: > var.get.nc(p8m.tmax,'tmax', start=c(lon_val, lat_val, stval),count=c(1,1,82)) [1] 444 866 1063 ... [output snipped] The same problem occurs in the identically structured tmin file, but at 36 instead of 82: > var.get.nc(p8m.tmin,'tmin', start=c(lon_val, lat_val, stval),count=c(1,1,36)) produces Error: Invalid argument But after repeating with counts of 30, 31, etc > var.get.nc(p8m.tmin,'tmin', start=c(lon_val, lat_val, stval), count=c(1,1,36)) works. These examples make it seem like the function is failing at the last count, but that actually isn't the case. In the first example, var.get.nc gave Error: Invalid argument after I asked for 84 values. I then narrowed the failure down to the 82nd count by varying the starting point in the dataset and asking for only 1 value at a time. The particular number the problem occurs at also varies. I can close and reopen the dataset and have the problem occur at a different location. In the particular examples above, lon_val and lat_val are 1595 and 1751, respectively, identifying the location in the dataset along the lat and lon dimensions for the latitude and longitude I'm interested in. The 1595th latitude and 1751th longitude are not the problem, however. The problem occurs with all other latitude and longitudes I've tried. Varying the starting location in the dataset along the climate variable dimension (stval) and/or specifying it different (as a number in the command instead of the object stval) also does not fix the problem. This problem doesn't always occur. I can run identical code three times in a row (clearing all objects in between runs) and get a different outcome each time. The first run may choke on the 7th entry I'm trying to get, the second might work fine, and the third run might choke on the 83rd entry. I'm absolutely baffled by such inconsistent behavior. 
    The open.nc function has also started to fail with the same Error: Invalid argument. Like the var.get.nc problems, it occurs inconsistently. Does anyone know what causes the initial failure to extract the variable, and how I might prevent it? Could it have to do with the size of the data files (~60GB each) and/or the fact that I'm accessing them through networked drives? This was also asked here: https://stat.ethz.ch/pipermail/r-help/2011-June/281233.html > sessionInfo() R version 2.13.0 (2011-04-13) Platform: i386-pc-mingw32/i386 (32-bit) locale: [1] LC_COLLATE=English_United States.1252 LC_CTYPE=English_United States.1252 [3] LC_MONETARY=English_United States.1252 LC_NUMERIC=C [5] LC_TIME=English_United States.1252 attached base packages: [1] stats graphics grDevices utils datasets methods base other attached packages: [1] reshape_0.8.4 plyr_1.5.2 RNetCDF_1.5.2-2 loaded via a namespace (and not attached): [1] tools_2.13.0

    Read the article

  • Class structure for the proposed data and its containers?

    - by Prix
    First I would like to wish a happy new year to everyone that may read this :) I am having trouble on how to make a container for some data that I am importing into my application, and I am not sure on how to explain this very well and my english is not really a that good so I hope you can bear with my mistake and help me with some guidance. Currently with a foreach I am importing the follow fields from the data I receive: guid, itemid, name, owner(optional, can be null), zoneid, titleid, subid, heading, x, y, z, typeA, typeB, typeC From the above fields I need to store a Waypoint list of all coords a given item has moved to BUT for each guid I have a new list of waypoints. And from the waypoint list the first entry is also my initial item start location which would be my item initial position (if you notice i have a separate list for it which I was not sure would be better or not) not all items have a waypoint list but all items have the first position. So the first idea I came with to store this data was a list with a class with 2 inner classes with their list: public List<ItemList> myList = new List<ItemList>(); public class ItemList { public int guid { get; set; } public int itemid { get; set; } public string name { get; set; } public int titleid { get; set; } public itemType status { get; set; } public class Waypoint { public float posX { get; set; } public float posY { get; set; } public float posZ { get; set; } } public List<Waypoint> waypoint = new List<Waypoint>(); public class Location { public int zone { get; set; } public int subid { get; set; } public int heading { get; set; } public float posX { get; set; } public float posY { get; set; } public float posZ { get; set; } } public List<Location> position = new List<Location>(); } So here is an example of how I would add a new waypoint to a GUID that exists in the list bool itemExists = myList.Exists(item => item.guid == guid && item.itemid == itemid); if (itemExists) { int lastDistance = 3; ItemList.Waypoint nextWaypoint; ItemList myItem = myList.Find(item => item.guid == guid && item.itemid == itemid); if (myItem.waypoint.Count == 0) { nextWaypoint = new ItemList.Waypoint(); nextWaypoint.posX = PosX; nextWaypoint.posY = PosY; nextWaypoint.posZ = PosZ; } else { ItemList.Waypoint lastWaypoint = myItem.waypoint[myItem.waypoint.Count - 1]; if (lastWaypoint != null) { lastDistance = getDistance(x, y, z, lastWaypoint.posX, lastWaypoint.posY, lastWaypoint.posZ); } if (lastDistance > 2) { nextWaypoint = new ItemList.Waypoint(); nextWaypoint.posX = PosX; nextWaypoint.posY = PosY; nextWaypoint.posZ = PosZ; } } myItem.waypoint.Add(nextWaypoint); } Then to register a new item I would take advantage of the itemExist above so I won't register the same GUID again: ItemList newItem = new ItemList(); newItem.guid = guid; newItem.itemid = itemid; newItem.name = name; newItem.status = status; newItem.titleid = titleid; // Item location ItemList.Location itemLocation = new ItemList.Location(); itemLocation.subid = 0; itemLocation.zone= zone; itemLocation.heading = convertHeading(Heading); itemLocation.posX = PosX; itemLocation.posY = PosY; itemLocation.posZ = PosZ; newItem.position.Add(itemLocation); myList.Add(newItem); Could you help me with advices on how my class structure and lists should look like ? Are there better ways to interate with the lists to get lastWaypoint of a GUID or verify wether an item exist or not ? What else would you advise me in general ? 
    PS: If you have any questions, or if there is something I forgot to post, please let me know and I will update it.
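
    One common way to simplify the container above is to key the items by guid in a dictionary instead of scanning a list with Exists/Find on every update, keep the waypoint list inside the item, and treat the first recorded waypoint as the start position. A rough sketch of that shape, written in Java here for brevity (Dictionary<int, TrackedItem> would be the direct C# counterpart); the class and field names are illustrative:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    class Waypoint {
        final float x, y, z;
        Waypoint(float x, float y, float z) { this.x = x; this.y = y; this.z = z; }
    }

    class TrackedItem {
        final int guid;
        final int itemId;
        final List<Waypoint> waypoints = new ArrayList<>(); // waypoints.get(0) is the start location

        TrackedItem(int guid, int itemId) { this.guid = guid; this.itemId = itemId; }

        Waypoint lastWaypoint() {
            return waypoints.isEmpty() ? null : waypoints.get(waypoints.size() - 1);
        }
    }

    class ItemRegistry {
        // Keyed by guid, so existence checks and lookups are O(1).
        private final Map<Integer, TrackedItem> byGuid = new HashMap<>();

        TrackedItem getOrRegister(int guid, int itemId) {
            return byGuid.computeIfAbsent(guid, g -> new TrackedItem(g, itemId));
        }

        // Adds a waypoint only if the item moved more than minDistance from its last one.
        void recordPosition(int guid, int itemId, float x, float y, float z, float minDistance) {
            TrackedItem item = getOrRegister(guid, itemId);
            Waypoint last = item.lastWaypoint();
            if (last == null || distance(last, x, y, z) > minDistance) {
                item.waypoints.add(new Waypoint(x, y, z));
            }
        }

        private static float distance(Waypoint w, float x, float y, float z) {
            float dx = w.x - x, dy = w.y - y, dz = w.z - z;
            return (float) Math.sqrt(dx * dx + dy * dy + dz * dz);
        }
    }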

    Read the article

  • Setting up a linked list in Java

    - by erp
    I'm working on some basic linked list stuff, like insert, delete, go to the front or end of the list, and basically i understand the concept of all of that stuff once i have the list i guess but im having trouble setting up the list. I was wondering of you guys could tell me if im going in the right direction. (mostly just the setup) this is what i have so far: public class List { private int size; private List linkedList; List head; List cur; List next; /** * Creates an empty list. * @pre * @post */ public List(){ linkedList = new List(); this.head = null; cur = head; } /** * Delete the current element from this list. The element after the deleted element becomes the new current. * If that's not possible, then the element before the deleted element becomes the new current. * If that is also not possible, then you need to recognize what state the list is in and define current accordingly. * Nothing should be done if a delete is not possible. * @pre * @post */ public void delete(){ // delete size--; } /** * Get the value of the current element. If this is not possible, throw an IllegalArgumentException. * @pre the list is not empty * @post * @return value of the current element. */ public char get(){ return getItem(cur); } /** * Go to the last element of the list. If this is not possible, don't change the cursor. * @pre * @post */ public void goLast(){ while (cur.next != null){ cur = cur.next; } } /** * Advance the cursor to the next element. If this is not possible, don't change the cursor. * @pre * @post */ public void goNext(){ if(cur.next != null){ cur = cur.next;} //else do nothing } /** * Retreat the cursor to the previous element. If this is not possible, don't change the cursor. * @pre * @post */ public void goPrev(){ } /** * Go to top of the list. This is the position before the first element. * @pre * @post */ public void goTop(){ } /** * Go to first element of the list. If this is not possible, don't change the cursor. * @pre * @post */ public void goFirst(){ } /** * Insert the given parameter after the current element. The newly inserted element becomes the current element. * @pre * @post * @param newVal : value to insert after the current element. */ public void insert(char newVal){ cur.setItem(newVal); size++; } /** * Determines if this list is empty. Empty means this list has no elements. * @pre * @post * @return true if the list is empty. */ public boolean isEmpty(){ return head == null; } /** * Determines the size of the list. The size of the list is the number of elements in the list. * @pre * @post * @return size which is the number of elements in the list. */ public int size(){ return size; } public class Node { private char item; private Node next; public Node() { } public Node(char item) { this.item = item; } public Node(char item, Node next) { this.item = item; this.next = next; } public char getItem() { return this.item; } public void setItem(char item) { this.item = item; } public Node getNext() { return this.next; } public void setNext(Node next) { this.next = next; } } } I got the node class alright (well i think it works alright), but is it necessary to even have that class? or can i go about it without even using it (just curious). And for example on the method get() in the list class can i not call that getItem() method from the node class because it's getting an error even though i thought that was the whole point for the node class. bottom line i just wanna make sure im setting up the list right. 
    Thanks for any help, guys; I'm new to linked lists, so bear with me!
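
    Two things stand out in the setup itself: the head/cur fields should hold Node references rather than List references (as written, the constructor calls new List() inside List's own constructor, which recurses until the stack overflows), and get() should reach the value through the current node, which is exactly what the Node class is for. A stripped-down sketch of just that setup, with the names changed so it is clearly not the assignment's required API:

    // Minimal singly linked list: the list object only holds Node references,
    // and element access is delegated to the current Node.
    public class CharList {

        private static class Node {
            char item;
            Node next;
            Node(char item) { this.item = item; }
        }

        private Node head;   // first node, or null when the list is empty
        private Node cur;    // cursor used by get/goNext/goFirst
        private int size;

        public boolean isEmpty() { return head == null; }

        public int size() { return size; }

        // Insert at the front and make the new node current (simplest case).
        public void insertFirst(char value) {
            Node n = new Node(value);
            n.next = head;
            head = n;
            cur = n;
            size++;
        }

        // The list asks the node for its item; this is why the Node class exists.
        public char get() {
            if (cur == null) throw new IllegalArgumentException("list is empty");
            return cur.item;
        }

        public void goFirst() { if (head != null) cur = head; }

        public void goNext()  { if (cur != null && cur.next != null) cur = cur.next; }
    }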

    Read the article

  • Loop crashing program having to do with 2D arrays

    - by user450062
    I am creating an encoding program and when I instruct the program to create a 5X5 grid based on the alphabet while skipping over letters that match up to certain pre-defined variables(which are given values by user input during runtime). I have a loop that instructs the loop to keep running until the values that access the array are out of bounds, the loop seems to cause the problem. This code is standardized so there shouldn't be much trouble compiling it in another compiler. Also would it be better to seperate my program into functions? here is the code: #include<iostream> #include<fstream> #include<cstdlib> #include<string> #include<limits> using namespace std; int main(){ while (!cin.fail()) { char type[81]; char filename[20]; char key [5]; char f[2] = "q"; char g[2] = "q"; char h[2] = "q"; char i[2] = "q"; char j[2] = "q"; char k[2] = "q"; char l[2] = "q"; int a = 1; int b = 1; int c = 1; int d = 1; int e = 1; string cipherarraytemplate[5][5]= { {"a","b","c","d","e"}, {"f","g","h","i","j"}, {"k","l","m","n","o"}, {"p","r","s","t","u"}, {"v","w","x","y","z"} }; string cipherarray[5][5]= { {"a","b","c","d","e"}, {"f","g","h","i","j"}, {"k","l","m","n","o"}, {"p","r","s","t","u"}, {"v","w","x","y","z"} }; cout<<"Enter the name of a file you want to create.\n"; cin>>filename; ofstream outFile; outFile.open(filename); outFile<<fixed; outFile.precision(2); outFile.setf(ios_base::showpoint); cin.ignore(std::numeric_limits<int>::max(),'\n'); cout<<"enter your codeword(codeword can have no repeating letters)\n"; cin>>key; while (key[a] != '\0' ){ while(b < 6){ cipherarray[b][c] = key[a]; if ( f == "q" ) { cipherarray[b][c] = f; } if ( f != "q" && g == "q" ) { cipherarray[b][c] = g; } if ( g != "q" && h == "q" ) { cipherarray[b][c] = h; } if ( h != "q" && i == "q" ) { cipherarray[b][c] = i; } if ( i != "q" && j == "q" ) { cipherarray[b][c] = j; } if ( j != "q" && k == "q" ) { cipherarray[b][c] = k; } if ( k != "q" && l == "q" ) { cipherarray[b][c] = l; } a++; b++; } c++; b = 1; } while (c < 6 || b < 6){ if (cipherarraytemplate[d][e] == f || cipherarraytemplate[d][e] == g || cipherarraytemplate[d][e] == h || cipherarraytemplate[d][e] == i || cipherarraytemplate[d][e] == j || cipherarraytemplate[d][e] == k || cipherarraytemplate[d][e] == l){ d++; } else { cipherarray[b][c] = cipherarraytemplate[d][e]; d++; b++; } if (d == 6){ d = 1; e++; } if (b == 6){ c++; b = 1; } } cout<<"now enter some text."<<endl<<"To end this program press Crtl-Z\n"; while(!cin.fail()){ cin.getline(type,81); outFile<<type<<endl; } outFile.close(); } } I know there is going to be some mid-forties guy out there who is going to stumble on to this post, he's have been programming for 20-some years and he's going to look at my code and say: "what is this guy doing".
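
    The crash in the grid-building loops is most likely the indexing: the arrays are declared 5x5, so the valid indices are 0 through 4, but b, c, d and e start at 1 and run while they are less than 6, which walks past the end of both arrays. The grid idea itself (keyword letters first, then the rest of the alphabet, skipping excluded letters and duplicates) can be done with a single 0-based cursor; a small illustrative sketch, written in Java here rather than C++:

    import java.util.LinkedHashSet;
    import java.util.Set;

    public class CipherGrid {

        // Builds a 5x5 grid: keyword letters first, then the remaining alphabet,
        // skipping duplicates and excluded letters. All indices stay in 0..4.
        static char[][] build(String keyword, Set<Character> excluded) {
            String alphabet = "abcdefghijklmnoprstuvwxyz";   // 25 letters, no 'q', as in the template above
            Set<Character> ordered = new LinkedHashSet<>();   // keeps insertion order, drops repeats
            for (char ch : (keyword.toLowerCase() + alphabet).toCharArray()) {
                if (!excluded.contains(ch) && alphabet.indexOf(ch) >= 0) {
                    ordered.add(ch);
                }
            }
            char[][] grid = new char[5][5];                   // trailing cells stay '\0' if letters were excluded
            int pos = 0;
            for (char ch : ordered) {
                if (pos >= 25) break;
                grid[pos / 5][pos % 5] = ch;                  // row = pos / 5, col = pos % 5, both 0..4
                pos++;
            }
            return grid;
        }

        public static void main(String[] args) {
            char[][] grid = build("secret", Set.of());
            for (char[] row : grid) System.out.println(new String(row));
        }
    }

    Splitting the program into functions along those same seams (read the key, build the grid, encode the text) would also make each piece easier to test on its own.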

    Read the article

  • Trying to install MySQL, then lots of brew doctor errors

    - by gdi2290
    I couldn't install mysql I get this brew install mysql Error: You must `brew link cmake' before mysql can be installed so then I type brew ink cmake Linking /usr/local/Cellar/cmake/2.8.8... Error: Could not symlink file: /usr/local/Cellar/cmake/2.8.8/share/doc/cmake /usr/local/share/doc is not writable. You should change its permissions. when I typed brew doctor I get this Error: Some directories in /usr/local/share/locale aren't writable. This can happen if you "sudo make install" software that isn't managed by Homebrew. If a brew tries to add locale information to one of these directories, then the install will fail during the link step. You should probably chown them: /usr/local/share/locale/ar /usr/local/share/locale/ar/LC_MESSAGES /usr/local/share/locale/be /usr/local/share/locale/be/LC_MESSAGES /usr/local/share/locale/bg /usr/local/share/locale/bg/LC_MESSAGES /usr/local/share/locale/bs /usr/local/share/locale/bs/LC_MESSAGES /usr/local/share/locale/ca /usr/local/share/locale/ca/LC_MESSAGES /usr/local/share/locale/cs /usr/local/share/locale/cs/LC_MESSAGES /usr/local/share/locale/da /usr/local/share/locale/da/LC_MESSAGES /usr/local/share/locale/de /usr/local/share/locale/de/LC_MESSAGES /usr/local/share/locale/de_AT /usr/local/share/locale/de_AT/LC_MESSAGES /usr/local/share/locale/de_CH /usr/local/share/locale/de_CH/LC_MESSAGES /usr/local/share/locale/de_DE /usr/local/share/locale/de_DE/LC_MESSAGES /usr/local/share/locale/el /usr/local/share/locale/el/LC_MESSAGES /usr/local/share/locale/en_AU /usr/local/share/locale/en_AU/LC_MESSAGES /usr/local/share/locale/en_CA /usr/local/share/locale/en_CA/LC_MESSAGES /usr/local/share/locale/en_GB /usr/local/share/locale/en_GB/LC_MESSAGES /usr/local/share/locale/eo /usr/local/share/locale/eo/LC_MESSAGES /usr/local/share/locale/es /usr/local/share/locale/es/LC_MESSAGES /usr/local/share/locale/es_ES /usr/local/share/locale/es_ES/LC_MESSAGES /usr/local/share/locale/es_PE /usr/local/share/locale/es_PE/LC_MESSAGES /usr/local/share/locale/et /usr/local/share/locale/et/LC_MESSAGES /usr/local/share/locale/fi /usr/local/share/locale/fi/LC_MESSAGES /usr/local/share/locale/fr /usr/local/share/locale/fr/LC_MESSAGES /usr/local/share/locale/fr_FR /usr/local/share/locale/fr_FR/LC_MESSAGES /usr/local/share/locale/gl /usr/local/share/locale/gl/LC_MESSAGES /usr/local/share/locale/he /usr/local/share/locale/he/LC_MESSAGES /usr/local/share/locale/hi /usr/local/share/locale/hi/LC_MESSAGES /usr/local/share/locale/hr /usr/local/share/locale/hr/LC_MESSAGES /usr/local/share/locale/hu /usr/local/share/locale/hu/LC_MESSAGES /usr/local/share/locale/hu_HU /usr/local/share/locale/hu_HU/LC_MESSAGES /usr/local/share/locale/id /usr/local/share/locale/id/LC_MESSAGES /usr/local/share/locale/it /usr/local/share/locale/it/LC_MESSAGES /usr/local/share/locale/ja /usr/local/share/locale/ja/LC_MESSAGES /usr/local/share/locale/ka /usr/local/share/locale/ka/LC_MESSAGES /usr/local/share/locale/ko /usr/local/share/locale/ko/LC_MESSAGES /usr/local/share/locale/lv /usr/local/share/locale/lv/LC_MESSAGES /usr/local/share/locale/mr /usr/local/share/locale/mr/LC_MESSAGES /usr/local/share/locale/nb /usr/local/share/locale/nb/LC_MESSAGES /usr/local/share/locale/nds /usr/local/share/locale/nds/LC_MESSAGES /usr/local/share/locale/nl /usr/local/share/locale/nl/LC_MESSAGES /usr/local/share/locale/nn /usr/local/share/locale/nn/LC_MESSAGES /usr/local/share/locale/oc /usr/local/share/locale/oc/LC_MESSAGES /usr/local/share/locale/pl /usr/local/share/locale/pl/LC_MESSAGES /usr/local/share/locale/pt 
/usr/local/share/locale/pt/LC_MESSAGES /usr/local/share/locale/pt_BR /usr/local/share/locale/pt_BR/LC_MESSAGES /usr/local/share/locale/pt_PT /usr/local/share/locale/pt_PT/LC_MESSAGES /usr/local/share/locale/ro /usr/local/share/locale/ro/LC_MESSAGES /usr/local/share/locale/ru /usr/local/share/locale/ru/LC_MESSAGES /usr/local/share/locale/sk /usr/local/share/locale/sk/LC_MESSAGES /usr/local/share/locale/sr /usr/local/share/locale/sr/LC_MESSAGES /usr/local/share/locale/sv /usr/local/share/locale/sv/LC_MESSAGES /usr/local/share/locale/ta /usr/local/share/locale/ta/LC_MESSAGES /usr/local/share/locale/te /usr/local/share/locale/te/LC_MESSAGES /usr/local/share/locale/tr /usr/local/share/locale/tr/LC_MESSAGES /usr/local/share/locale/uk /usr/local/share/locale/uk/LC_MESSAGES /usr/local/share/locale/vi /usr/local/share/locale/vi/LC_MESSAGES /usr/local/share/locale/zh_CN /usr/local/share/locale/zh_CN/LC_MESSAGES /usr/local/share/locale/zh_HK /usr/local/share/locale/zh_HK/LC_MESSAGES /usr/local/share/locale/zh_TW /usr/local/share/locale/zh_TW/LC_MESSAGES Error: The /usr/local directory is not writable. Even if this directory was writable when you installed Homebrew, other software may change permissions on this directory. Some versions of the "InstantOn" component of Airfoil are known to do this. You should probably change the ownership and permissions of /usr/local back to your user account. Error: "config" scripts exist outside your system or Homebrew directories. ./configure scripts often look for *-config scripts to determine if software packages are installed, and what additional flags to use when compiling and linking. Having additional scripts in your path can confuse software installed via Homebrew if the config script overrides a system or Homebrew provided script of the same name. We found the following "config" scripts: /opt/sm/pkg/active/bin/curl-config /opt/sm/pkg/active/bin/ncurses5-config /opt/sm/pkg/active/bin/ncursesw5-config /opt/sm/pkg/active/bin/pkg-config /opt/sm/pkg/active/bin/xml2-config /opt/sm/pkg/active/bin/xslt-config Error: gettext was detected in your PREFIX. The gettext provided by Homebrew is "keg-only", meaning it does not get linked into your PREFIX by default. If you brew link gettext then a large number of brews that don't otherwise have a depends_on 'gettext' will pick up gettext anyway during the ./configure step. If you have a non-Homebrew provided gettext, other problems will happen especially if it wasn't compiled with the proper architectures. Error: Unbrewed dylibs were found in /usr/local/lib. If you didn't put them there on purpose they could cause problems when building Homebrew formulae, and may need to be deleted. Unexpected dylibs: /usr/local/lib/libboost_filesystem-mt.dylib /usr/local/lib/libboost_serialization-mt.dylib /usr/local/lib/libboost_system-mt.dylib /usr/local/lib/libencfs.6.dylib /usr/local/lib/libintl.8.dylib /usr/local/lib/libmacfuse_i32.2.dylib /usr/local/lib/libmacfuse_i64.2.dylib /usr/local/lib/libosxfuse_i32.2.dylib /usr/local/lib/libosxfuse_i64.2.dylib /usr/local/lib/librlog.5.0.0.dylib Error: Unbrewed .la files were found in /usr/local/lib. If you didn't put them there on purpose they could cause problems when building Homebrew formulae, and may need to be deleted. Unexpected .la files: /usr/local/lib/libosxfuse_i32.la /usr/local/lib/libosxfuse_i64.la Error: Unbrewed .pc files were found in /usr/local/lib/pkgconfig. 
If you didn't put them there on purpose they could cause problems when building Homebrew formulae, and may need to be deleted. Unexpected .pc files: /usr/local/lib/pkgconfig/osxfuse.pc Error: You have unlinked kegs in your Cellar Leaving kegs unlinked can lead to build-trouble and cause brews that depend on those kegs to fail to run properly once built. cmake Error: Your pkg-config is not checking "/usr/X11/lib/pkgconfig" for packages. Earlier versions of the pkg-config formula did not add this path to the search path, which means that other formula may not be able to find certain dependencies. To resolve this issue, re-brew pkg-config with: brew rm pkg-config && brew install pkg-config Error: You have a non-Homebrew 'pkg-config' in your PATH: /opt/sm/pkg/active/bin/pkg-config ./configure may have problems finding brew-installed packages using this other pkg-config. Error: Your Xcode is configured with an invalid path. You should change it to the correct path. Please note that there is no correct path at this time if you have only installed the Command Line Tools for Xcode. If your Xcode is pre-4.3 or you installed the whole of Xcode 4.3 then one of these is (probably) what you want: sudo xcode-select -switch /Developer sudo xcode-select -switch /Applications/Xcode.app/Contents/Developer DO NOT SET / OR EVERYTHING BREAKS!

    Read the article

  • Compat Wireless Drivers Centrino N-2230

    - by user2699451
    So I am using linux and am having trouble installing the Compat Wireless drivers Hardware: Intel Centrino N-2230 OS: Linux Mint 64bit (kernel 13.08-generic) I followed this link http://www.mathyvanhoef.com/2012/09/compat-wireless-injection-patch-for.html Output: apt-get install linux-headers-$(uname -r) Reading package lists... Done Building dependency tree Reading state information... Done linux-headers-3.8.0-19-generic is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 19 not upgraded. charles-W55xEU compat-wireless-2010-10-16 # cd ~ charles-W55xEU ~ # dir adt-bundle-linux-x86_64-20130917.zip Desktop known_hosts_backup charles-W55xEU ~ # wget http://www.orbit-lab.org/kernel/compat-wireless-3-stable/v3.6/compat-wireless-3.6.2-1-snp.tar.bz2 --2013-10-29 10:28:23-- http://www.orbit-lab.org/kernel/compat-wireless-3-stable/v3.6/compat-wireless-3.6.2-1-snp.tar.bz2 Resolving www.orbit-lab.org (www.orbit-lab.org)... 128.6.192.131 Connecting to www.orbit-lab.org (www.orbit-lab.org)|128.6.192.131|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 4443700 (4,2M) [application/x-bzip2] Saving to: ‘compat-wireless-3.6.2-1-snp.tar.bz2’ 100%[======================================>] 4 443 700 13,5KB/s in 11m 3s 2013-10-29 10:39:27 (6,55 KB/s) - ‘compat-wireless-3.6.2-1-snp.tar.bz2’ saved [4443700/4443700] charles-W55xEU ~ # tar -xf compat-wireless-3.6.2-1-snp.tar.bz2 charles-W55xEU ~ # cd compat-wireless-3.6-rc6-1 bash: cd: compat-wireless-3.6-rc6-1: No such file or directory charles-W55xEU ~ # dir adt-bundle-linux-x86_64-20130917.zip Desktop compat-wireless-3.6.2-1-snp known_hosts_backup compat-wireless-3.6.2-1-snp.tar.bz2 charles-W55xEU ~ # cd compat-wireless-3.6.2-1-snp/ charles-W55xEU compat-wireless-3.6.2-1-snp # dir code-metrics.txt defconfigs linux-next-pending pending-stable compat drivers MAINTAINERS README config.mk enable-older-kernels Makefile scripts COPYRIGHT include net udev crap linux-next-cherry-picks patches charles-W55xEU compat-wireless-3.6.2-1-snp # wget http://patches.aircrack-ng.org/mac80211.compat08082009.wl_frag+ack_v1.patch --2013-10-29 10:40:52-- http://patches.aircrack-ng.org/mac80211.compat08082009.wl_frag+ack_v1.patch Resolving patches.aircrack-ng.org (patches.aircrack-ng.org)... 213.186.33.2, 2001:41d0:1:1b00:213:186:33:2 Connecting to patches.aircrack-ng.org (patches.aircrack-ng.org)|213.186.33.2|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 1049 (1,0K) [text/plain] Saving to: ‘mac80211.compat08082009.wl_frag+ack_v1.patch’ 100%[======================================>] 1 049 --.-K/s in 0s 2013-10-29 10:40:56 (180 MB/s) - ‘mac80211.compat08082009.wl_frag+ack_v1.patch’ saved [1049/1049] charles-W55xEU compat-wireless-3.6.2-1-snp # patch -p1 < mac80211.compat08082009.wl_frag+ack_v1.patch patching file net/mac80211/tx.c Hunk #1 succeeded at 792 (offset 115 lines). charles-W55xEU compat-wireless-3.6.2-1-snp # wget -Ocompatwireless_chan_qos_frag.patch http://pastie.textmate.org/pastes/4882675/download --2013-10-29 10:43:18-- http://pastie.textmate.org/pastes/4882675/download Resolving pastie.textmate.org (pastie.textmate.org)... 178.79.137.125 Connecting to pastie.textmate.org (pastie.textmate.org)|178.79.137.125|:80... connected. HTTP request sent, awaiting response... 301 Moved Permanently Location: http://pastie.org/pastes/4882675/download [following] --2013-10-29 10:43:20-- http://pastie.org/pastes/4882675/download Resolving pastie.org (pastie.org)... 
96.126.119.119 Connecting to pastie.org (pastie.org)|96.126.119.119|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 2036 (2,0K) [application/octet-stream] Saving to: ‘compatwireless_chan_qos_frag.patch’ 100%[======================================>] 2 036 --.-K/s in 0,001s 2013-10-29 10:43:21 (3,35 MB/s) - ‘compatwireless_chan_qos_frag.patch’ saved [2036/2036] charles-W55xEU compat-wireless-3.6.2-1-snp # patch -p1 < compatwireless_chan_qos_frag.patch patching file drivers/net/wireless/rtl818x/rtl8187/dev.c patching file net/mac80211/tx.c Hunk #1 succeeded at 1495 (offset 8 lines). patching file net/wireless/chan.c charles-W55xEU compat-wireless-3.6.2-1-snp # make ./scripts/gen-compat-autoconf.sh /root/compat-wireless-3.6.2-1-snp/.config /root/compat-wireless-3.6.2-1-snp/config.mk > include/linux/compat_autoconf.h make -C /lib/modules/3.8.0-19-generic/build M=/root/compat-wireless-3.6.2-1-snp modules make[1]: Entering directory `/usr/src/linux-headers-3.8.0-19-generic' CC [M] /root/compat-wireless-3.6.2-1-snp/compat/main.o LD [M] /root/compat-wireless-3.6.2-1-snp/compat/compat.o CC [M] /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.o In file included from /root/compat-wireless-3.6.2-1-snp/include/linux/bcma/bcma.h:8:0, from /root/compat-wireless-3.6.2-1-snp/drivers/bcma/bcma_private.h:8, from /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:8: /root/compat-wireless-3.6.2-1-snp/include/linux/bcma/bcma_driver_pci.h:217:23: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘bcma_core_pci_init’ In file included from /root/compat-wireless-3.6.2-1-snp/include/linux/bcma/bcma.h:10:0, from /root/compat-wireless-3.6.2-1-snp/drivers/bcma/bcma_private.h:8, from /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:8: /root/compat-wireless-3.6.2-1-snp/include/linux/bcma/bcma_driver_gmac_cmn.h:95:23: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘bcma_core_gmac_cmn_init’ In file included from /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:8:0: /root/compat-wireless-3.6.2-1-snp/drivers/bcma/bcma_private.h:25:15: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘bcma_bus_register’ /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:152:15: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘bcma_bus_register’ /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:17:21: warning: ‘bcma_bus_next_num’ defined but not used [-Wunused-variable] /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:93:12: warning: ‘bcma_register_cores’ defined but not used [-Wunused-function] make[3]: *** [/root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.o] Error 1 make[2]: *** [/root/compat-wireless-3.6.2-1-snp/drivers/bcma] Error 2 make[1]: *** [_module_/root/compat-wireless-3.6.2-1-snp] Error 2 make[1]: Leaving directory `/usr/src/linux-headers-3.8.0-19-generic' make: *** [modules] Error 2 charles-W55xEU compat-wireless-3.6.2-1-snp # make install Warning: You may or may not need to update your initframfs, you should if any of the modules installed are part of your initramfs. To add support for your distribution to do this automatically send a patch against ./scripts/update-initramfs. If your distribution does not require this send a patch against the '/usr/bin/lsb_release -i -s': LinuxMint tag for your distribution to avoid this warning. 
make -C /lib/modules/3.8.0-19-generic/build M=/root/compat-wireless-3.6.2-1-snp modules make[1]: Entering directory `/usr/src/linux-headers-3.8.0-19-generic' CC [M] /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.o In file included from /root/compat-wireless-3.6.2-1-snp/include/linux/bcma/bcma.h:8:0, from /root/compat-wireless-3.6.2-1-snp/drivers/bcma/bcma_private.h:8, from /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:8: /root/compat-wireless-3.6.2-1-snp/include/linux/bcma/bcma_driver_pci.h:217:23: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘bcma_core_pci_init’ In file included from /root/compat-wireless-3.6.2-1-snp/include/linux/bcma/bcma.h:10:0, from /root/compat-wireless-3.6.2-1-snp/drivers/bcma/bcma_private.h:8, from /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:8: /root/compat-wireless-3.6.2-1-snp/include/linux/bcma/bcma_driver_gmac_cmn.h:95:23: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘bcma_core_gmac_cmn_init’ In file included from /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:8:0: /root/compat-wireless-3.6.2-1-snp/drivers/bcma/bcma_private.h:25:15: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘bcma_bus_register’ /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:152:15: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘bcma_bus_register’ /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:17:21: warning: ‘bcma_bus_next_num’ defined but not used [-Wunused-variable] /root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.c:93:12: warning: ‘bcma_register_cores’ defined but not used [-Wunused-function] make[3]: *** [/root/compat-wireless-3.6.2-1-snp/drivers/bcma/main.o] Error 1 make[2]: *** [/root/compat-wireless-3.6.2-1-snp/drivers/bcma] Error 2 make[1]: *** [_module_/root/compat-wireless-3.6.2-1-snp] Error 2 make[1]: Leaving directory `/usr/src/linux-headers-3.8.0-19-generic' make: *** [modules] Error 2 charles-W55xEU compat-wireless-3.6.2-1-snp # It keeps giving errors, same with other sites, I get the same errors??? I am lost, help needed

    Read the article

  • IIS 7 Authentication: Certain users can't authenticate, while almost all others can.

    - by user35335
    I'm using IIS 7 Digest authentication to control access to a certain directory containing files. Users access the files through a department website from inside our network and outside. I've set NTFS permissions on the directory to allow a certain AD group to view the files. When I click a link to one of those files on the website I get prompted for a username and password. With most users everything works fine, but with a few of them it prompts for a password 3 times and then get: 401 - Unauthorized: Access is denied due to invalid credentials. But other users that are in the group can get in without a problem. If I switch it over to Windows Authentication, then the trouble users can log in fine. That directory is also shared, and users that can't log in through the website are able to browse to the share and view files in it, so I know that the permissions are ok. Here's the portion of the IIS log where I tried to download the file (/assets/files/secure/WWGNL.pdf): 2010-02-19 19:47:20 xxx.xxx.xxx.xxx GET /assets/images/bullet.gif - 80 - 10.5.16.138 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/532.5+(KHTML,+like+Gecko)+Chrome/4.0.249.89+Safari/532.5 200 0 0 218 2010-02-19 19:47:20 xxx.xxx.xxx.xxx GET /assets/images/bgOFF.gif - 80 - 10.5.16.138 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/532.5+(KHTML,+like+Gecko)+Chrome/4.0.249.89+Safari/532.5 200 0 0 218 2010-02-19 19:47:21 xxx.xxx.xxx.xxx GET /assets/files/secure/WWGNL.pdf - 80 - 10.5.16.138 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/532.5+(KHTML,+like+Gecko)+Chrome/4.0.249.89+Safari/532.5 401 2 5 0 2010-02-19 19:47:36 xxx.xxx.xxx.xxx GET /assets/files/secure/WWGNL.pdf - 80 - 10.5.16.138 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/532.5+(KHTML,+like+Gecko)+Chrome/4.0.249.89+Safari/532.5 401 1 2148074252 0 2010-02-19 19:47:43 xxx.xxx.xxx.xxx GET /assets/files/secure/WWGNL.pdf - 80 - 10.5.16.138 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/532.5+(KHTML,+like+Gecko)+Chrome/4.0.249.89+Safari/532.5 401 1 2148074252 15 2010-02-19 19:47:46 xxx.xxx.xxx.xxx GET /manager/media/script/_session.gif 0.19665693119168282 80 - 10.5.16.138 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/532.5+(KHTML,+like+Gecko)+Chrome/4.0.249.89+Safari/532.5 200 0 0 203 2010-02-19 19:47:46 xxx.xxx.xxx.xxx POST /manager/index.php - 80 - 10.5.16.138 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/532.5+(KHTML,+like+Gecko)+Chrome/4.0.249.89+Safari/532.5 200 0 0 296 2010-02-19 19:47:56 xxx.xxx.xxx.xxx GET /assets/files/secure/WWGNL.pdf - 80 - 10.5.16.138 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/532.5+(KHTML,+like+Gecko)+Chrome/4.0.249.89+Safari/532.5 401 1 2148074252 15 2010-02-19 19:47:59 xxx.xxx.xxx.xxx GET /favicon.ico - 80 - 10.5.16.138 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/532.5+(KHTML,+like+Gecko)+Chrome/4.0.249.89+Safari/532.5 404 0 2 0 Here's the Failed Logon attempt in the Security Log: Log Name: Security Source: Microsoft-Windows-Security-Auditing Date: 2/19/2010 11:47:43 AM Event ID: 4625 Task Category: Logon Level: Information Keywords: Audit Failure User: N/A Computer: WEB4.net.domain.org Description: An account failed to log on. Subject: Security ID: NULL SID Account Name: - Account Domain: - Logon ID: 0x0 Logon Type: 3 Account For Which Logon Failed: Security ID: NULL SID Account Name: jim.lastname Account Domain: net.domain.org Failure Information: Failure Reason: Unknown user name or bad password. 
Status: 0xc000006d Sub Status: 0xc000006a Process Information: Caller Process ID: 0x0 Caller Process Name: - Network Information: Workstation Name: - Source Network Address: 10.5.16.138 Source Port: 50065 Detailed Authentication Information: Logon Process: WDIGEST Authentication Package: WDigest Transited Services: - Package Name (NTLM only): - Key Length: 0 This event is generated when a logon request fails. It is generated on the computer where access was attempted. The Subject fields indicate the account on the local system which requested the logon. This is most commonly a service such as the Server service, or a local process such as Winlogon.exe or Services.exe. The Logon Type field indicates the kind of logon that was requested. The most common types are 2 (interactive) and 3 (network). The Process Information fields indicate which account and process on the system requested the logon. The Network Information fields indicate where a remote logon request originated. Workstation name is not always available and may be left blank in some cases. The authentication information fields provide detailed information about this specific logon request. - Transited services indicate which intermediate services have participated in this logon request. - Package name indicates which sub-protocol was used among the NTLM protocols. - Key length indicates the length of the generated session key. This will be 0 if no session key was requested. Event Xml: <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event"> <System> <Provider Name="Microsoft-Windows-Security-Auditing" Guid="{54849625-5478-4994-a5ba-3e3b0328c30d}" /> <EventID>4625</EventID> <Version>0</Version> <Level>0</Level> <Task>12544</Task> <Opcode>0</Opcode> <Keywords>0x8010000000000000</Keywords> <TimeCreated SystemTime="2010-02-19T19:47:43.890Z" /> <EventRecordID>2276316</EventRecordID> <Correlation /> <Execution ProcessID="612" ThreadID="692" /> <Channel>Security</Channel> <Computer>WEB4.net.domain.org</Computer> <Security /> </System> <EventData> <Data Name="SubjectUserSid">S-1-0-0</Data> <Data Name="SubjectUserName">-</Data> <Data Name="SubjectDomainName">-</Data> <Data Name="SubjectLogonId">0x0</Data> <Data Name="TargetUserSid">S-1-0-0</Data> <Data Name="TargetUserName">jim.lastname</Data> <Data Name="TargetDomainName">net.domain.org</Data> <Data Name="Status">0xc000006d</Data> <Data Name="FailureReason">%%2313</Data> <Data Name="SubStatus">0xc000006a</Data> <Data Name="LogonType">3</Data> <Data Name="LogonProcessName">WDIGEST</Data> <Data Name="AuthenticationPackageName">WDigest</Data> <Data Name="WorkstationName">-</Data> <Data Name="TransmittedServices">-</Data> <Data Name="LmPackageName">-</Data> <Data Name="KeyLength">0</Data> <Data Name="ProcessId">0x0</Data> <Data Name="ProcessName">-</Data> <Data Name="IpAddress">10.5.16.138</Data> <Data Name="IpPort">50065</Data> </EventData> </Event>

    Read the article

  • Cannot SSH after resetting firewall on VPS

    - by Thomas Buckley
    I'm having trouble trying to SSH to my Debian 5 VPS with blacknight. It was working fine until I did the following: Logged into 'Parallels Infrastructure Manager' - Container - Firewall - Set to 'Normal Firewall settings'. It told me there was an error with the IPTables and offered the option again with a checkbox to 'reset' firewall settings, I selected this. I can see that that the default rules are been applied ( anything from anyone on any port and allowing anything to happen). Whenever I attempt to SSH I get the following debug info: thomas@localmachine:~/.ssh$ ssh -v thomas@hostname OpenSSH_5.8p1 Debian-7ubuntu1, OpenSSL 1.0.0e 6 Sep 2011 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug1: Connecting to hostname [***********] port 22. debug1: Connection established. debug1: identity file /home/thomas/.ssh/id_rsa type 1 debug1: Checking blacklist file /usr/share/ssh/blacklist.RSA-4096 debug1: Checking blacklist file /etc/ssh/blacklist.RSA-4096 debug1: identity file /home/thomas/.ssh/id_rsa-cert type -1 debug1: identity file /home/thomas/.ssh/id_dsa type -1 debug1: identity file /home/thomas/.ssh/id_dsa-cert type -1 debug1: identity file /home/thomas/.ssh/id_ecdsa type -1 debug1: identity file /home/thomas/.ssh/id_ecdsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1p1 Debian-5 debug1: match: OpenSSH_5.1p1 Debian-5 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.8p1 Debian-7ubuntu1 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Server host key: RSA ************************************* debug1: Host 'hostname' is known and matches the RSA host key. debug1: Found key in /home/thomas/.ssh/known_hosts:2 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey debug1: Next authentication method: publickey debug1: Offering RSA public key: /home/thomas/.ssh/id_rsa debug1: Authentications that can continue: publickey debug1: Trying private key: /home/thomas/.ssh/id_dsa debug1: Trying private key: /home/thomas/.ssh/id_ecdsa debug1: No more authentication methods to try. Permission denied (publickey). I had my public/private RSA keys set up and working fine before I reset the firewall settings. I had also made the following changes to my /etc/ssh/sshd_config file on the VPS: PermitRootLogin no PasswordAuthentication no X11Forwarding no UsePAM no UseDNS no AllowUsers thomas Could it be something to do with the SSH server & client having different versions between my local machine and VPS? Any help appreciated. Output with ssh -vvv thomas@localcomputer:~/.ssh$ ssh -vvv thomas@**************** OpenSSH_5.8p1 Debian-7ubuntu1, OpenSSL 1.0.0e 6 Sep 2011 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to ************ [*************] port 22. debug1: Connection established. 
debug3: Incorrect RSA1 identifier debug3: Could not load "/home/thomas/.ssh/id_rsa" as a RSA1 public key debug2: key_type_from_name: unknown key type '-----BEGIN' debug3: key_read: missing keytype debug2: key_type_from_name: unknown key type 'Proc-Type:' debug3: key_read: missing keytype debug2: key_type_from_name: unknown key type 'DEK-Info:' debug3: key_read: missing keytype debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug2: key_type_from_name: unknown key type '-----END' debug3: key_read: missing keytype debug1: identity file /home/thomas/.ssh/id_rsa type 1 debug1: Checking blacklist file /usr/share/ssh/blacklist.RSA-4096 debug1: Checking blacklist file /etc/ssh/blacklist.RSA-4096 debug1: identity file /home/thomas/.ssh/id_rsa-cert type -1 debug1: identity file /home/thomas/.ssh/id_dsa type -1 debug1: identity file /home/thomas/.ssh/id_dsa-cert type -1 debug1: identity file /home/thomas/.ssh/id_ecdsa type -1 debug1: identity file /home/thomas/.ssh/id_ecdsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1p1 Debian-5 debug1: match: OpenSSH_5.1p1 Debian-5 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.8p1 Debian-7ubuntu1 debug2: fd 3 setting O_NONBLOCK debug3: load_hostkeys: loading entries for host "*****************" from file "/home/thomas/.ssh/known_hosts" debug3: load_hostkeys: found key type RSA in file /home/thomas/.ssh/known_hosts:1 debug3: load_hostkeys: loaded 1 keys debug3: order_hostkeyalgs: prefer hostkeyalgs: [email protected],[email protected],ssh-rsa debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug2: kex_parse_kexinit: 
ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: [email protected],[email protected],ssh-rsa,[email protected],[email protected],[email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-dss debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,[email protected],aes128-ctr,aes192-ctr,aes256-ctr debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,[email protected],aes128-ctr,aes192-ctr,aes256-ctr debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: found hmac-md5 debug1: kex: server->client aes128-ctr hmac-md5 none debug2: mac_setup: found hmac-md5 debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug2: dh_gen_key: priv key bits set: 127/256 debug2: bits set: 498/1024 debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Server host key: RSA *********************************************************** debug3: load_hostkeys: loading entries for host "*********************" from file "/home/thomas/.ssh/known_hosts" debug3: load_hostkeys: found key type RSA in file /home/thomas/.ssh/known_hosts:1 debug3: load_hostkeys: loaded 1 keys debug1: Host '****************' is known and matches the RSA host key. 
debug1: Found key in /home/thomas/.ssh/known_hosts:1 debug2: bits set: 516/1024 debug1: ssh_rsa_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /home/thomas/.ssh/id_rsa (0x7fa7028b6010) debug2: key: /home/thomas/.ssh/id_dsa ((nil)) debug2: key: /home/thomas/.ssh/id_ecdsa ((nil)) debug1: Authentications that can continue: publickey debug3: start over, passed a different list publickey debug3: preferred gssapi-keyex,gssapi-with-mic,publickey,keyboard-interactive,password debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive,password debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Offering RSA public key: /home/thomas/.ssh/id_rsa debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply debug1: Authentications that can continue: publickey debug1: Trying private key: /home/thomas/.ssh/id_dsa debug3: no such identity: /home/thomas/.ssh/id_dsa debug1: Trying private key: /home/thomas/.ssh/id_ecdsa debug3: no such identity: /home/thomas/.ssh/id_ecdsa debug2: we did not send a packet, disable method debug1: No more authentication methods to try. Permission denied (publickey). sshd_config # Package generated configuration file # See the sshd(8) manpage for details # What ports, IPs and protocols we listen for Port 22 # Use these options to restrict which interfaces/protocols sshd will bind to #ListenAddress :: #ListenAddress 0.0.0.0 Protocol 2 # HostKeys for protocol version 2 HostKey /etc/ssh/ssh_host_rsa_key HostKey /etc/ssh/ssh_host_dsa_key #Privilege Separation is turned on for security UsePrivilegeSeparation yes # Lifetime and size of ephemeral version 1 server key KeyRegenerationInterval 3600 ServerKeyBits 768 # Logging SyslogFacility AUTH LogLevel INFO # Authentication: LoginGraceTime 120 PermitRootLogin no StrictModes yes RSAAuthentication yes PubkeyAuthentication yes #AuthorizedKeysFile %h/.ssh/authorized_keys # Don't read the user's ~/.rhosts and ~/.shosts files IgnoreRhosts yes # For this to work you will also need host keys in /etc/ssh_known_hosts RhostsRSAAuthentication no # similar for protocol version 2 HostbasedAuthentication no # Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication #IgnoreUserKnownHosts yes # To enable empty passwords, change to yes (NOT RECOMMENDED) PermitEmptyPasswords no # Change to yes to enable challenge-response passwords (beware issues with # some PAM modules and threads) C hallengeResponseAuthentication no # Change to no to disable tunnelled clear text passwords PasswordAuthentication no # Kerberos options #KerberosAuthentication no #KerberosGetAFSToken no #KerberosOrLocalPasswd yes #KerberosTicketCleanup yes # GSSAPI options #GSSAPIAuthentication no #GSSAPICleanupCredentials yes X11Forwarding no X11DisplayOffset 10 PrintMotd no PrintLastLog yes TCPKeepAlive yes #UseLogin no #MaxStartups 10:30:60 #Banner /etc/issue.net # Allow client to pass locale environment variables AcceptEnv LANG LC_* Subsystem sftp /usr/lib/openssh/sftp-server UsePAM no UseDNS no AllowUsers thomas Thanks
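Since the server is only offering publickey and then rejecting the key, it can help to reproduce the failure outside OpenSSH. The following is a minimal sketch using the JSch library, which is an assumption on my part and not something from the original post; the host name and key path are placeholders.

import com.jcraft.jsch.JSch;
import com.jcraft.jsch.JSchException;
import com.jcraft.jsch.Session;

public class KeyAuthProbe {
    public static void main(String[] args) {
        try {
            JSch jsch = new JSch();
            // Same private key the OpenSSH client is offering.
            jsch.addIdentity(System.getProperty("user.home") + "/.ssh/id_rsa");

            Session session = jsch.getSession("thomas", "vps.example.com", 22);
            // Skip known_hosts handling for this one-off test.
            session.setConfig("StrictHostKeyChecking", "no");
            session.setConfig("PreferredAuthentications", "publickey");
            session.connect(10000); // 10 second timeout

            System.out.println("Public key authentication succeeded.");
            session.disconnect();
        } catch (JSchException e) {
            // "Auth fail" here mirrors the "Permission denied (publickey)" from ssh -v.
            System.out.println("Authentication failed: " + e.getMessage());
        }
    }
}

If this fails the same way, the problem is almost certainly server-side: typically the ownership or permissions of /home/thomas, /home/thomas/.ssh and authorized_keys after the container reset, or the running sshd not actually using the edited sshd_config (worth restarting ssh and checking /var/log/auth.log on the VPS).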

    Read the article

  • Apache2 mod_proxy to remote Tomcat7 - slow response

    - by 12N
Been stuck with this one for a few days. I'll try to provide as much information as possible, but please feel free to ask for extra detail. I have 2 VMs behind a NAT, 192.168.0.100 and 192.168.0.102, both running Ubuntu 11.04 x64. The first one is mapped to the exterior and is our webserver; it has one Apache/2.2.17 install with several vhosts to serve static content, plus mod_jk for load balancing. The second one has a Tomcat 7 install with several J2EE REST webservices but no Apache - requests are expected to be passed directly from the .100 Apache to the .102 Tomcat. My intention is to prepare a clustered Tomcat environment. My problem: requests reach 192.168.0.100 with no trouble whatsoever, but then take about 100 seconds for the data to actually arrive at .102 - by that time Apache has already timed out, yet Tomcat receives and processes the request normally. This happens whether I use mod_jk, mod_proxy, or mod_proxy_ajp. No idea why, since there are no firewalls on either of the machines and both are pingable - more than that, there are active NFS shares working like a charm - and a mod_proxy experiment showed that requests originating directly from .100 are processed normally. Also, to add insult to injury, a similar environment is set up on our office network and everything works perfectly. -_- The only difference? We have no IP translation at the office and do everything by internal addresses - dunno if that's relevant in any way. Some configs:

Apache vhost:

<VirtualHost *:80>
ServerAdmin webmaster@localhost
DocumentRoot /var/www/
ServerName www.example.com
ProxyRequests Off
<Proxy *>
Options Indexes FollowSymLinks MultiViews
AllowOverride all
Order allow,deny
allow from all
</Proxy>
ProxyPass /bork http://192.168.0.102:8080/bork
ProxyPassReverse /bork http://192.168.0.102:8080/bork
LogLevel debug
CustomLog ${APACHE_LOG_DIR}/api_access.log combined
ErrorLog ${APACHE_LOG_DIR}/api_error.log
</VirtualHost>

Tomcat connectors:

<Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" />
<Connector port="8009" enableLookups="false" redirectPort="8443" protocol="AJP/1.3" />

And a debug log from Apache, from a test using mod_proxy_ajp. The behavior is pretty much the same with mod_proxy, at least regarding the delay.
Please note that tomcat eventually receives and processes the request, more or less when the log starts being updated again: [Sun May 06 14:40:33 2012] [debug] proxy_util.c(1506): [client 188.81.234.2] proxy: ajp: found worker ajp://192.168.0.102:8008/bork for ajp://192.168.0.102:8008/bork/SSOIdentityProviderSoap [Sun May 06 14:40:33 2012] [debug] mod_proxy.c(1015): Running scheme ajp handler (attempt 0) [Sun May 06 14:40:33 2012] [debug] mod_proxy_ajp.c(661): proxy: AJP: serving URL ajp://192.168.0.102:8008/bork/SSOIdentityProviderSoap [Sun May 06 14:40:33 2012] [debug] proxy_util.c(2011): proxy: AJP: has acquired connection for (192.168.0.102) [Sun May 06 14:40:33 2012] [debug] proxy_util.c(2067): proxy: connecting ajp://192.168.0.102:8008/bork/SSOIdentityProviderSoap to 192.168.0.102:8008 [Sun May 06 14:40:33 2012] [debug] proxy_util.c(2193): proxy: connected /bork/SSOIdentityProviderSoap to 192.168.0.102:8008 [Sun May 06 14:40:33 2012] [debug] proxy_util.c(2444): proxy: AJP: fam 2 socket created to connect to 192.168.0.102 [Sun May 06 14:40:33 2012] [debug] ajp_header.c(224): Into ajp_marshal_into_msgb [Sun May 06 14:40:33 2012] [debug] ajp_header.c(290): ajp_marshal_into_msgb: Header[0] [Accept-Encoding] = [gzip,deflate] [Sun May 06 14:40:33 2012] [debug] ajp_header.c(290): ajp_marshal_into_msgb: Header[1] [Content-Type] = [text/xml;charset=UTF-8] [Sun May 06 14:40:33 2012] [debug] ajp_header.c(290): ajp_marshal_into_msgb: Header[2] [SOAPAction] = [""] [Sun May 06 14:40:33 2012] [debug] ajp_header.c(290): ajp_marshal_into_msgb: Header[3] [User-Agent] = [Jakarta Commons-HttpClient/3.1] [Sun May 06 14:40:33 2012] [debug] ajp_header.c(290): ajp_marshal_into_msgb: Header[4] [Host] = [www.example.com] [Sun May 06 14:40:33 2012] [debug] ajp_header.c(290): ajp_marshal_into_msgb: Header[5] [Content-Length] = [520] [Sun May 06 14:40:33 2012] [debug] ajp_header.c(450): ajp_marshal_into_msgb: Done [Sun May 06 14:40:33 2012] [debug] mod_proxy_ajp.c(267): proxy: APR_BUCKET_IS_EOS [Sun May 06 14:40:33 2012] [debug] mod_proxy_ajp.c(272): proxy: data to read (max 8186 at 4) [Sun May 06 14:40:33 2012] [debug] mod_proxy_ajp.c(287): proxy: got 520 bytes of data [Sun May 06 14:40:33 2012] [debug] ajp_header.c(687): ajp_read_header: ajp_ilink_received 06 [Sun May 06 14:40:33 2012] [debug] ajp_header.c(697): ajp_parse_type: got 06 [Sun May 06 14:40:37 2012] [debug] proxy_util.c(1818): proxy: grabbed scoreboard slot 2 in child 5916 for worker ajp://192.168.0.100:8008/coding [Sun May 06 14:40:37 2012] [debug] proxy_util.c(1837): proxy: worker ajp://192.168.0.100:8008/coding already initialized [Sun May 06 14:40:37 2012] [debug] proxy_util.c(1934): proxy: initialized single connection worker 2 in child 5916 for (192.168.0.100) [Sun May 06 14:40:37 2012] [debug] proxy_util.c(1818): proxy: grabbed scoreboard slot 3 in child 5916 for worker http://192.168.0.102:8080 [Sun May 06 14:40:37 2012] [debug] proxy_util.c(1837): proxy: worker http://192.168.0.102:8080 already initialized [Sun May 06 14:40:37 2012] [debug] proxy_util.c(1934): proxy: initialized single connection worker 3 in child 5916 for (192.168.0.102) [Sun May 06 14:40:37 2012] [debug] proxy_util.c(1818): proxy: grabbed scoreboard slot 4 in child 5916 for worker ajp://192.168.0.102:8008/bork [Sun May 06 14:40:37 2012] [debug] proxy_util.c(1837): proxy: worker ajp://192.168.0.102:8008/bork already initialized [Sun May 06 14:40:37 2012] [debug] proxy_util.c(1934): proxy: initialized single connection worker 4 in child 5916 for (192.168.0.102) [Sun May 06 
14:40:38 2012] [debug] proxy_util.c(1934): proxy: initialized single connection worker 2 in child 5918 for (192.168.0.100) [Sun May 06 14:40:38 2012] [debug] proxy_util.c(1818): proxy: grabbed scoreboard slot 3 in child 5918 for worker http://192.168.0.102:8080 [Sun May 06 14:40:38 2012] [debug] proxy_util.c(1837): proxy: worker http://192.168.0.102:8080 already initialized [Sun May 06 14:40:38 2012] [debug] proxy_util.c(1934): proxy: initialized single connection worker 3 in child 5918 for (192.168.0.102) [Sun May 06 14:40:38 2012] [debug] proxy_util.c(1818): proxy: grabbed scoreboard slot 4 in child 5918 for worker ajp://192.168.0.102:8008/bork [Sun May 06 14:40:38 2012] [debug] proxy_util.c(1837): proxy: worker ajp://192.168.0.102:8008/bork already initialized [Sun May 06 14:40:38 2012] [debug] proxy_util.c(1934): proxy: initialized single connection worker 4 in child 5918 for (192.168.0.102) [Sun May 06 14:40:38 2012] [debug] proxy_util.c(1818): proxy: grabbed scoreboard slot 2 in child 5917 for worker ajp://192.168.0.100:8008/coding [Sun May 06 14:40:38 2012] [debug] proxy_util.c(1837): proxy: worker ajp://192.168.0.100:8008/coding already initialized [Sun May 06 14:40:38 2012] [debug] proxy_util.c(1934): proxy: initialized single connection worker 2 in child 5917 for (192.168.0.100) [Sun May 06 14:40:38 2012] [debug] proxy_util.c(1818): proxy: grabbed scoreboard slot 3 in child 5917 for worker http://192.168.0.102:8080 [Sun May 06 14:40:38 2012] [debug] proxy_util.c(1837): proxy: worker http://192.168.0.102:8080 already initialized [Sun May 06 14:40:38 2012] [debug] proxy_util.c(1934): proxy: initialized single connection worker 3 in child 5917 for (192.168.0.102) [Sun May 06 14:40:38 2012] [debug] proxy_util.c(1818): proxy: grabbed scoreboard slot 4 in child 5917 for worker ajp://192.168.0.102:8008/bork [Sun May 06 14:40:38 2012] [debug] proxy_util.c(1837): proxy: worker ajp://192.168.0.102:8008/bork already initialized [Sun May 06 14:40:38 2012] [debug] proxy_util.c(1934): proxy: initialized single connection worker 4 in child 5917 for (192.168.0.102) [Sun May 06 14:42:09 2012] [debug] ajp_header.c(687): ajp_read_header: ajp_ilink_received 04 [Sun May 06 14:42:09 2012] [debug] ajp_header.c(697): ajp_parse_type: got 04 [Sun May 06 14:42:09 2012] [debug] ajp_header.c(516): ajp_unmarshal_response: status = 200 [Sun May 06 14:42:09 2012] [debug] ajp_header.c(537): ajp_unmarshal_response: Number of headers is = 1 [Sun May 06 14:42:09 2012] [debug] ajp_header.c(599): ajp_unmarshal_response: Header[0] [Content-Type] = [text/xml;charset=utf-8] [Sun May 06 14:42:09 2012] [debug] ajp_header.c(609): ajp_unmarshal_response: ap_set_content_type done [Sun May 06 14:42:09 2012] [debug] ajp_header.c(687): ajp_read_header: ajp_ilink_received 03 [Sun May 06 14:42:09 2012] [debug] ajp_header.c(697): ajp_parse_type: got 03 [Sun May 06 14:42:09 2012] [debug] ajp_header.c(687): ajp_read_header: ajp_ilink_received 03 [Sun May 06 14:42:09 2012] [debug] ajp_header.c(697): ajp_parse_type: got 03 [Sun May 06 14:42:09 2012] [debug] ajp_header.c(687): ajp_read_header: ajp_ilink_received 05 [Sun May 06 14:42:09 2012] [debug] ajp_header.c(697): ajp_parse_type: got 05 [Sun May 06 14:42:09 2012] [debug] mod_deflate.c(615): [client 188.81.234.2] Zlib: Compressed 447 to 255 : URL /bork/SSOIdentityProviderSoap [Sun May 06 14:42:09 2012] [debug] mod_proxy_ajp.c(570): proxy: got response from (null) (192.168.0.102) [Sun May 06 14:42:09 2012] [debug] proxy_util.c(2029): proxy: AJP: has released connection for 
(192.168.0.102) [Sun May 06 14:42:09 2012] [info] [client 188.81.234.2] Request body read timeout

I was wondering if anyone could provide some advice, perhaps even point out any hideous, horrible configuration error? Thanks in advance!
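The ~100-second gap before Tomcat sees the request, ending in "Request body read timeout", suggests the request body is stalling somewhere between the two hosts (NAT/MTU path or a DNS wait) rather than inside the proxy modules themselves. One way to narrow it down is to time a direct POST from the .100 box to Tomcat on 8080, bypassing Apache entirely. This is a rough, hypothetical probe using only the JDK; the URL and payload are placeholders.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class TomcatLatencyProbe {
    public static void main(String[] args) throws Exception {
        byte[] body = "<soap:Envelope/>".getBytes("UTF-8"); // placeholder payload

        long start = System.nanoTime();
        URL url = new URL("http://192.168.0.102:8080/bork/SSOIdentityProviderSoap");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setConnectTimeout(5000);
        conn.setReadTimeout(120000);
        conn.setDoOutput(true);
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "text/xml;charset=UTF-8");
        conn.setFixedLengthStreamingMode(body.length);

        // Send the body, then time how long the response takes to come back.
        OutputStream out = conn.getOutputStream();
        out.write(body);
        out.close();
        int status = conn.getResponseCode();
        long elapsedMs = (System.nanoTime() - start) / 1000000;
        System.out.println("HTTP " + status + " in " + elapsedMs + " ms");
        conn.disconnect();
    }
}

Comparing the timing of this probe run on .100 against the proxied path should show whether the delay lives in the network between the VMs (in which case MTU on the virtual interfaces and reverse-DNS lookups are the next suspects) or only appears when Apache is in front.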

    Read the article

  • JSF Portlets in Liferay on JBoss

    - by JBirch
I'm currently looking at working with and deploying JSF portlets into Liferay 6.0.5, sitting on JBoss 5.1.0. I ran into a lot of trouble trying to port some JSF-y/Seam-y/EJB-y stuff I had lying around, so I thought I'd start simple and work my way up. I could generate generic portlets using the NetBeans Maven archetype for Liferay portlets absolutely fine, but that's rather irrelevant because I wanted JSF portlets. I took an example JSF portlet from http://www.liferay.com/downloads/liferay-portal/community-plugins/-/software_catalog/products/5546866 and attempted to deploy it into a clean, vanilla installation of Liferay 6.0.5/JBoss 5.1.0, to no avail. The log messages are reproduced at the end of this post. This particular example was actually tested on GlassFish and Tomcat, so it's not particularly helpful considering I'm deploying into JBoss. I tried ripping it apart and removing the JSF implementation contained within, as there is a JSF implementation shipped with JBoss (Mojarra 1.2_12, in this case).

03:16:17,173 INFO [PortletAutoDeployListener] Copying portlets for /usr/local/[REDACTED]/liferay/liferay-portal-6.0.5/deploy/richfaces-sun-jsf1.2-facelets-portlet-1.2.war
Expanding: /usr/local/[REDACTED]/liferay/liferay-portal-6.0.5/deploy/richfaces-sun-jsf1.2-facelets-portlet-1.2.war into /tmp/20110201031617188
Copying 1 file to /tmp/20110201031617188/WEB-INF
Copying 1 file to /tmp/20110201031617188/WEB-INF/classes
Copying 1 file to /tmp/20110201031617188/WEB-INF/classes
Copying 47 files to /usr/local/[REDACTED]/liferay/liferay-portal-6.0.5/jboss-5.1.0/server/default/deploy/richfaces-sun-jsf1.2-facelets-portlet.war
Copying 1 file to /usr/local/[REDACTED]/liferay/liferay-portal-6.0.5/jboss-5.1.0/server/default/deploy/richfaces-sun-jsf1.2-facelets-portlet.war
Deleting directory /tmp/20110201031617188
03:16:20,075 INFO [PortletAutoDeployListener] Portlets for /usr/local/[REDACTED]/liferay/liferay-portal-6.0.5/deploy/richfaces-sun-jsf1.2-facelets-portlet-1.2.war copied successfully. Deployment will start in a few seconds.
03:16:23,632 INFO [TomcatDeployment] deploy, ctxPath=/richfaces-sun-jsf1.2-facelets-portlet
03:16:24,446 INFO [PortletHotDeployListener] Registering portlets for richfaces-sun-jsf1.2-facelets-portlet
03:16:24,492 INFO [faces] Init GenericFacesPortlet for portlet 1
03:16:24,495 INFO [faces] Bridge class name is org.jboss.portletbridge.AjaxPortletBridge
03:16:24,509 INFO [faces] The bridge does not support doHeaders method
03:16:24,510 INFO [faces] GenericFacesPortlet for portlet 1 initialized
03:16:24,555 INFO [PortletHotDeployListener] 1 portlet for richfaces-sun-jsf1.2-facelets-portlet is available for use
03:16:24,627 SEVERE [webapp] Initialization of the JSF runtime either failed or did not occurr. Review the server''s log for details.
java.lang.InstantiationException: org.jboss.portletbridge.context.FacesContextFactoryImpl at java.lang.Class.newInstance0(Class.java:340) at java.lang.Class.newInstance(Class.java:308) at javax.faces.FactoryFinder.getImplGivenPreviousImpl(FactoryFinder.java:537) at javax.faces.FactoryFinder.getImplementationInstance(FactoryFinder.java:394) at javax.faces.FactoryFinder.access$400(FactoryFinder.java:135) at javax.faces.FactoryFinder$FactoryManager.getFactory(FactoryFinder.java:717) at javax.faces.FactoryFinder.getFactory(FactoryFinder.java:239) at javax.faces.webapp.FacesServlet.init(FacesServlet.java:164) at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1048) at org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:950) at org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:4122) at org.apache.catalina.core.StandardContext.start(StandardContext.java:4421) at org.jboss.web.tomcat.service.deployers.TomcatDeployment.performDeployInternal(TomcatDeployment.java:310) at org.jboss.web.tomcat.service.deployers.TomcatDeployment.performDeploy(TomcatDeployment.java:142) at org.jboss.web.deployers.AbstractWarDeployment.start(AbstractWarDeployment.java:461) at org.jboss.web.deployers.WebModule.startModule(WebModule.java:118) at org.jboss.web.deployers.WebModule.start(WebModule.java:97) at sun.reflect.GeneratedMethodAccessor286.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.jboss.mx.interceptor.ReflectedDispatcher.invoke(ReflectedDispatcher.java:157) at org.jboss.mx.server.Invocation.dispatch(Invocation.java:96) at org.jboss.mx.server.Invocation.invoke(Invocation.java:88) at org.jboss.mx.server.AbstractMBeanInvoker.invoke(AbstractMBeanInvoker.java:264) at org.jboss.mx.server.MBeanServerImpl.invoke(MBeanServerImpl.java:668) at org.jboss.system.microcontainer.ServiceProxy.invoke(ServiceProxy.java:206) at $Proxy38.start(Unknown Source) at org.jboss.system.microcontainer.StartStopLifecycleAction.installAction(StartStopLifecycleAction.java:42) at org.jboss.system.microcontainer.StartStopLifecycleAction.installAction(StartStopLifecycleAction.java:37) at org.jboss.dependency.plugins.action.SimpleControllerContextAction.simpleInstallAction(SimpleControllerContextAction.java:62) at org.jboss.dependency.plugins.action.AccessControllerContextAction.install(AccessControllerContextAction.java:71) at org.jboss.dependency.plugins.AbstractControllerContextActions.install(AbstractControllerContextActions.java:51) at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:348) at org.jboss.system.microcontainer.ServiceControllerContext.install(ServiceControllerContext.java:286) at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:1631) at org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:934) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1082) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:984) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:822) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:553) at org.jboss.system.ServiceController.doChange(ServiceController.java:688) at org.jboss.system.ServiceController.start(ServiceController.java:460) at 
org.jboss.system.deployers.ServiceDeployer.start(ServiceDeployer.java:163) at org.jboss.system.deployers.ServiceDeployer.deploy(ServiceDeployer.java:99) at org.jboss.system.deployers.ServiceDeployer.deploy(ServiceDeployer.java:46) at org.jboss.deployers.spi.deployer.helpers.AbstractSimpleRealDeployer.internalDeploy(AbstractSimpleRealDeployer.java:62) at org.jboss.deployers.spi.deployer.helpers.AbstractRealDeployer.deploy(AbstractRealDeployer.java:50) at org.jboss.deployers.plugins.deployers.DeployerWrapper.deploy(DeployerWrapper.java:171) at org.jboss.deployers.plugins.deployers.DeployersImpl.doDeploy(DeployersImpl.java:1439) at org.jboss.deployers.plugins.deployers.DeployersImpl.doInstallParentFirst(DeployersImpl.java:1157) at org.jboss.deployers.plugins.deployers.DeployersImpl.doInstallParentFirst(DeployersImpl.java:1178) at org.jboss.deployers.plugins.deployers.DeployersImpl.install(DeployersImpl.java:1098) at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:348) at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:1631) at org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:934) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1082) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:984) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:822) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:553) at org.jboss.deployers.plugins.deployers.DeployersImpl.process(DeployersImpl.java:781) at org.jboss.deployers.plugins.main.MainDeployerImpl.process(MainDeployerImpl.java:702) at org.jboss.system.server.profileservice.repository.MainDeployerAdapter.process(MainDeployerAdapter.java:117) at org.jboss.system.server.profileservice.hotdeploy.HDScanner.scan(HDScanner.java:362) at org.jboss.system.server.profileservice.hotdeploy.HDScanner.run(HDScanner.java:255) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441) at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:181) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:205) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907) at java.lang.Thread.run(Thread.java:619) 03:16:24,629 INFO [2-facelets-portlet]] Marking servlet FacesServlet as unavailable 03:16:24,630 ERROR [2-facelets-portlet]] Servlet /richfaces-sun-jsf1.2-facelets-portlet threw load() exception javax.servlet.UnavailableException: Initialization of the JSF runtime either failed or did not occurr. Review the server''s log for details. 
at javax.faces.webapp.FacesServlet.init(FacesServlet.java:172) at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1048) at org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:950) at org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:4122) at org.apache.catalina.core.StandardContext.start(StandardContext.java:4421) at org.jboss.web.tomcat.service.deployers.TomcatDeployment.performDeployInternal(TomcatDeployment.java:310) at org.jboss.web.tomcat.service.deployers.TomcatDeployment.performDeploy(TomcatDeployment.java:142) at org.jboss.web.deployers.AbstractWarDeployment.start(AbstractWarDeployment.java:461) at org.jboss.web.deployers.WebModule.startModule(WebModule.java:118) at org.jboss.web.deployers.WebModule.start(WebModule.java:97) at sun.reflect.GeneratedMethodAccessor286.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.jboss.mx.interceptor.ReflectedDispatcher.invoke(ReflectedDispatcher.java:157) at org.jboss.mx.server.Invocation.dispatch(Invocation.java:96) at org.jboss.mx.server.Invocation.invoke(Invocation.java:88) at org.jboss.mx.server.AbstractMBeanInvoker.invoke(AbstractMBeanInvoker.java:264) at org.jboss.mx.server.MBeanServerImpl.invoke(MBeanServerImpl.java:668) at org.jboss.system.microcontainer.ServiceProxy.invoke(ServiceProxy.java:206) at $Proxy38.start(Unknown Source) at org.jboss.system.microcontainer.StartStopLifecycleAction.installAction(StartStopLifecycleAction.java:42) at org.jboss.system.microcontainer.StartStopLifecycleAction.installAction(StartStopLifecycleAction.java:37) at org.jboss.dependency.plugins.action.SimpleControllerContextAction.simpleInstallAction(SimpleControllerContextAction.java:62) at org.jboss.dependency.plugins.action.AccessControllerContextAction.install(AccessControllerContextAction.java:71) at org.jboss.dependency.plugins.AbstractControllerContextActions.install(AbstractControllerContextActions.java:51) at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:348) at org.jboss.system.microcontainer.ServiceControllerContext.install(ServiceControllerContext.java:286) at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:1631) at org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:934) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1082) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:984) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:822) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:553) at org.jboss.system.ServiceController.doChange(ServiceController.java:688) at org.jboss.system.ServiceController.start(ServiceController.java:460) at org.jboss.system.deployers.ServiceDeployer.start(ServiceDeployer.java:163) at org.jboss.system.deployers.ServiceDeployer.deploy(ServiceDeployer.java:99) at org.jboss.system.deployers.ServiceDeployer.deploy(ServiceDeployer.java:46) at org.jboss.deployers.spi.deployer.helpers.AbstractSimpleRealDeployer.internalDeploy(AbstractSimpleRealDeployer.java:62) at org.jboss.deployers.spi.deployer.helpers.AbstractRealDeployer.deploy(AbstractRealDeployer.java:50) at org.jboss.deployers.plugins.deployers.DeployerWrapper.deploy(DeployerWrapper.java:171) at 
org.jboss.deployers.plugins.deployers.DeployersImpl.doDeploy(DeployersImpl.java:1439) at org.jboss.deployers.plugins.deployers.DeployersImpl.doInstallParentFirst(DeployersImpl.java:1157) at org.jboss.deployers.plugins.deployers.DeployersImpl.doInstallParentFirst(DeployersImpl.java:1178) at org.jboss.deployers.plugins.deployers.DeployersImpl.install(DeployersImpl.java:1098) at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:348) at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:1631) at org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:934) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1082) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:984) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:822) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:553) at org.jboss.deployers.plugins.deployers.DeployersImpl.process(DeployersImpl.java:781) at org.jboss.deployers.plugins.main.MainDeployerImpl.process(MainDeployerImpl.java:702) at org.jboss.system.server.profileservice.repository.MainDeployerAdapter.process(MainDeployerAdapter.java:117) at org.jboss.system.server.profileservice.hotdeploy.HDScanner.scan(HDScanner.java:362) at org.jboss.system.server.profileservice.hotdeploy.HDScanner.run(HDScanner.java:255) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441) at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:181) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:205) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907) at java.lang.Thread.run(Thread.java:619)
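The root failure is the InstantiationException on org.jboss.portletbridge.context.FacesContextFactoryImpl: JBoss's bundled Mojarra performs the FactoryFinder lookup but cannot instantiate the bridge's factory, which usually comes down to duplicate or mismatched JSF/PortletBridge jars between the WAR and the server. A small diagnostic, sketched below as a hypothetical addition to the portlet WAR rather than anything from the original post, logs which factory wins and where the competing classes are loaded from.

import javax.faces.FactoryFinder;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

// Hypothetical listener: register it in the portlet's web.xml after the JSF/bridge listeners.
public class JsfClasspathDiagnostic implements ServletContextListener {

    public void contextInitialized(ServletContextEvent sce) {
        dump("javax.faces.context.FacesContext");
        dump("com.sun.faces.context.FacesContextFactoryImpl");
        dump("org.jboss.portletbridge.context.FacesContextFactoryImpl");
        try {
            Object factory = FactoryFinder.getFactory(FactoryFinder.FACES_CONTEXT_FACTORY);
            sce.getServletContext().log("FACES_CONTEXT_FACTORY resolved to " + factory.getClass().getName());
        } catch (Exception e) {
            sce.getServletContext().log("FactoryFinder failed: " + e);
        }
    }

    public void contextDestroyed(ServletContextEvent sce) { }

    // Print which jar (if any) a class is loaded from, without initializing it.
    private void dump(String className) {
        try {
            Class<?> c = Class.forName(className, false, Thread.currentThread().getContextClassLoader());
            Object src = c.getProtectionDomain().getCodeSource();
            System.out.println(className + " -> " + (src != null ? src : "JVM/shared classpath"));
        } catch (Throwable t) {
            System.out.println(className + " -> not visible: " + t);
        }
    }
}

If the bridge factory resolves from a jar that has been stripped out, or two JSF implementations show up side by side, aligning WEB-INF/lib with the PortletBridge version built for JBoss's Mojarra 1.2 line is the usual fix.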

    Read the article

  • Why does SQL 2005 SSIS component install fail?

    - by Ducain
    I am trying to install SSIS on our production SQL 2005 SP2 box. Each time I try, the install/setup screen results in failure, starting with the native client, and moving on down. Screen shots below show what I see: Here is the result of clicking on the status link to the right of the native client after the install failed: === Verbose logging started: 3/28/2012 16:38:08 Build type: SHIP UNICODE 3.01.4000.4042 Calling process: C:\Program Files\Microsoft SQL Server\90\Setup Bootstrap\setup.exe === MSI (c) (DC:00) [16:38:08:875]: Resetting cached policy values MSI (c) (DC:00) [16:38:08:875]: Machine policy value 'Debug' is 0 MSI (c) (DC:00) [16:38:08:875]: ******* RunEngine: ******* Product: {F9B3DD02-B0B3-42E9-8650-030DFF0D133D} ******* Action: ******* CommandLine: ********** MSI (c) (DC:00) [16:38:08:875]: Client-side and UI is none or basic: Running entire install on the server. MSI (c) (DC:00) [16:38:08:875]: Grabbed execution mutex. MSI (c) (DC:00) [16:38:08:875]: Cloaking enabled. MSI (c) (DC:00) [16:38:08:875]: Attempting to enable all disabled priveleges before calling Install on Server MSI (c) (DC:00) [16:38:08:875]: Incrementing counter to disable shutdown. Counter after increment: 0 MSI (s) (90:F0) [16:38:08:875]: Grabbed execution mutex. MSI (s) (90:D4) [16:38:08:875]: Resetting cached policy values MSI (s) (90:D4) [16:38:08:875]: Machine policy value 'Debug' is 0 MSI (s) (90:D4) [16:38:08:875]: ******* RunEngine: ******* Product: {F9B3DD02-B0B3-42E9-8650-030DFF0D133D} ******* Action: ******* CommandLine: ********** MSI (s) (90:D4) [16:38:08:875]: Machine policy value 'DisableUserInstalls' is 0 MSI (s) (90:D4) [16:38:08:890]: Warning: Local cached package 'C:\WINDOWS\Installer\65eb99.msi' is missing. MSI (s) (90:D4) [16:38:08:890]: User policy value 'SearchOrder' is 'nmu' MSI (s) (90:D4) [16:38:08:890]: User policy value 'DisableMedia' is 0 MSI (s) (90:D4) [16:38:08:890]: Machine policy value 'AllowLockdownMedia' is 0 MSI (s) (90:D4) [16:38:08:890]: SOURCEMGMT: Media enabled only if package is safe. MSI (s) (90:D4) [16:38:08:890]: SOURCEMGMT: Looking for sourcelist for product {F9B3DD02-B0B3-42E9-8650-030DFF0D133D} MSI (s) (90:D4) [16:38:08:890]: SOURCEMGMT: Adding {F9B3DD02-B0B3-42E9-8650-030DFF0D133D}; to potential sourcelist list (pcode;disk;relpath). MSI (s) (90:D4) [16:38:08:890]: SOURCEMGMT: Now checking product {F9B3DD02-B0B3-42E9-8650-030DFF0D133D} MSI (s) (90:D4) [16:38:08:890]: SOURCEMGMT: Media is enabled for product. MSI (s) (90:D4) [16:38:08:890]: SOURCEMGMT: Attempting to use LastUsedSource from source list. MSI (s) (90:D4) [16:38:08:890]: SOURCEMGMT: Trying source C:\Program Files\Microsoft SQL Server\90\Setup Bootstrap\Cache\. MSI (s) (90:D4) [16:38:08:890]: SOURCEMGMT: Source is invalid due to invalid package code (product code doesn't match). MSI (s) (90:D4) [16:38:08:890]: Note: 1: 1706 2: -2147483646 3: sqlncli.msi MSI (s) (90:D4) [16:38:08:890]: SOURCEMGMT: Processing net source list. MSI (s) (90:D4) [16:38:08:890]: Note: 1: 1706 2: -2147483647 3: sqlncli.msi MSI (s) (90:D4) [16:38:08:890]: SOURCEMGMT: Processing media source list. MSI (s) (90:D4) [16:38:09:921]: SOURCEMGMT: Trying media source F:\. MSI (s) (90:D4) [16:38:09:921]: Note: 1: 2203 2: F:\sqlncli.msi 3: -2147287038 MSI (s) (90:D4) [16:38:09:921]: SOURCEMGMT: Source is invalid due to missing/inaccessible package. MSI (s) (90:D4) [16:38:09:921]: Note: 1: 1706 2: -2147483647 3: sqlncli.msi MSI (s) (90:D4) [16:38:09:921]: SOURCEMGMT: Processing URL source list. 
MSI (s) (90:D4) [16:38:09:921]: Note: 1: 1402 2: UNKNOWN\URL 3: 2 MSI (s) (90:D4) [16:38:09:921]: Note: 1: 1706 2: -2147483647 3: sqlncli.msi MSI (s) (90:D4) [16:38:09:921]: Note: 1: 1706 2: 3: sqlncli.msi MSI (s) (90:D4) [16:38:09:921]: SOURCEMGMT: Failed to resolve source MSI (s) (90:D4) [16:38:09:921]: MainEngineThread is returning 1612 MSI (c) (DC:00) [16:38:09:921]: Decrementing counter to disable shutdown. If counter >= 0, shutdown will be denied. Counter after decrement: -1 MSI (c) (DC:00) [16:38:09:921]: MainEngineThread is returning 1612 === Verbose logging stopped: 3/28/2012 16:38:09 === Here is the log visible when I click the failed status for MSXML6: === Verbose logging started: 3/28/2012 16:38:12 Build type: SHIP UNICODE 3.01.4000.4042 Calling process: C:\Program Files\Microsoft SQL Server\90\Setup Bootstrap\setup.exe === MSI (c) (DC:58) [16:38:12:250]: Resetting cached policy values MSI (c) (DC:58) [16:38:12:250]: Machine policy value 'Debug' is 0 MSI (c) (DC:58) [16:38:12:250]: ******* RunEngine: ******* Product: {56EA8BC0-3751-4B93-BC9D-6651CC36E5AA} ******* Action: ******* CommandLine: ********** MSI (c) (DC:58) [16:38:12:250]: Client-side and UI is none or basic: Running entire install on the server. MSI (c) (DC:58) [16:38:12:250]: Grabbed execution mutex. MSI (c) (DC:58) [16:38:12:250]: Cloaking enabled. MSI (c) (DC:58) [16:38:12:250]: Attempting to enable all disabled priveleges before calling Install on Server MSI (c) (DC:58) [16:38:12:250]: Incrementing counter to disable shutdown. Counter after increment: 0 MSI (s) (90:58) [16:38:12:265]: Grabbed execution mutex. MSI (s) (90:DC) [16:38:12:265]: Resetting cached policy values MSI (s) (90:DC) [16:38:12:265]: Machine policy value 'Debug' is 0 MSI (s) (90:DC) [16:38:12:265]: ******* RunEngine: ******* Product: {56EA8BC0-3751-4B93-BC9D-6651CC36E5AA} ******* Action: ******* CommandLine: ********** MSI (s) (90:DC) [16:38:12:265]: Machine policy value 'DisableUserInstalls' is 0 MSI (s) (90:DC) [16:38:12:265]: Warning: Local cached package 'C:\WINDOWS\Installer\ce6d56e.msi' is missing. MSI (s) (90:DC) [16:38:12:265]: User policy value 'SearchOrder' is 'nmu' MSI (s) (90:DC) [16:38:12:265]: User policy value 'DisableMedia' is 0 MSI (s) (90:DC) [16:38:12:265]: Machine policy value 'AllowLockdownMedia' is 0 MSI (s) (90:DC) [16:38:12:265]: SOURCEMGMT: Media enabled only if package is safe. MSI (s) (90:DC) [16:38:12:265]: SOURCEMGMT: Looking for sourcelist for product {56EA8BC0-3751-4B93-BC9D-6651CC36E5AA} MSI (s) (90:DC) [16:38:12:265]: SOURCEMGMT: Adding {56EA8BC0-3751-4B93-BC9D-6651CC36E5AA}; to potential sourcelist list (pcode;disk;relpath). MSI (s) (90:DC) [16:38:12:265]: SOURCEMGMT: Now checking product {56EA8BC0-3751-4B93-BC9D-6651CC36E5AA} MSI (s) (90:DC) [16:38:12:265]: SOURCEMGMT: Media is enabled for product. MSI (s) (90:DC) [16:38:12:265]: SOURCEMGMT: Attempting to use LastUsedSource from source list. MSI (s) (90:DC) [16:38:12:265]: SOURCEMGMT: Trying source d:\2a2ac35788eea9066bae01\. MSI (s) (90:DC) [16:38:12:265]: Note: 1: 2203 2: d:\2a2ac35788eea9066bae01\msxml6.msi 3: -2147287037 MSI (s) (90:DC) [16:38:12:265]: SOURCEMGMT: Source is invalid due to missing/inaccessible package. MSI (s) (90:DC) [16:38:12:265]: Note: 1: 1706 2: -2147483647 3: msxml6.msi MSI (s) (90:DC) [16:38:12:265]: SOURCEMGMT: Processing net source list. MSI (s) (90:DC) [16:38:12:265]: Note: 1: 1706 2: -2147483647 3: msxml6.msi MSI (s) (90:DC) [16:38:12:265]: SOURCEMGMT: Processing media source list. 
MSI (s) (90:DC) [16:38:12:296]: SOURCEMGMT: Trying media source F:\. MSI (s) (90:DC) [16:38:12:296]: Note: 1: 2203 2: F:\msxml6.msi 3: -2147287038 MSI (s) (90:DC) [16:38:12:296]: SOURCEMGMT: Source is invalid due to missing/inaccessible package. MSI (s) (90:DC) [16:38:12:296]: Note: 1: 1706 2: -2147483647 3: msxml6.msi MSI (s) (90:DC) [16:38:12:296]: SOURCEMGMT: Processing URL source list. MSI (s) (90:DC) [16:38:12:296]: Note: 1: 1402 2: UNKNOWN\URL 3: 2 MSI (s) (90:DC) [16:38:12:296]: Note: 1: 1706 2: -2147483647 3: msxml6.msi MSI (s) (90:DC) [16:38:12:296]: Note: 1: 1706 2: 3: msxml6.msi MSI (s) (90:DC) [16:38:12:296]: SOURCEMGMT: Failed to resolve source MSI (s) (90:DC) [16:38:12:296]: MainEngineThread is returning 1612 MSI (c) (DC:58) [16:38:12:296]: Decrementing counter to disable shutdown. If counter >= 0, shutdown will be denied. Counter after decrement: -1 MSI (c) (DC:58) [16:38:12:296]: MainEngineThread is returning 1612 === Verbose logging stopped: 3/28/2012 16:38:12 === When I click on the failed status for SSIS, no log file appears at all. To be honest, I'm not even sure where to start on this one - never guessed it would be so much trouble to add a component right from the disk. Any help or pointers whatsoever would be greatly appreciated. If any more details are needed, please ask - I'd be glad to add them.
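Both verbose logs end with "MainEngineThread is returning 1612", which is Windows Installer's "installation source is not available" result: the locally cached packages (65eb99.msi, ce6d56e.msi) are missing and SOURCEMGMT cannot find sqlncli.msi or msxml6.msi on any recorded source path, so the setup needs to be pointed at media that actually contains those MSIs (or the cached copies restored) before the components will install. When logs like these get long, a small filter helps; the class below is only a hypothetical helper for pulling out the source-resolution lines, not part of any Microsoft tooling.

import java.io.BufferedReader;
import java.io.FileReader;

public class MsiLogFilter {
    public static void main(String[] args) throws Exception {
        // Usage: java MsiLogFilter C:\path\to\verbose-msi.log
        BufferedReader in = new BufferedReader(new FileReader(args[0]));
        String line;
        while ((line = in.readLine()) != null) {
            // Keep only the lines that explain where the installer looked and why it gave up.
            if (line.contains("SOURCEMGMT")
                    || line.contains("Local cached package")
                    || line.contains("MainEngineThread is returning")) {
                System.out.println(line);
            }
        }
        in.close();
    }
}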

    Read the article

  • Configuring OpenLDAP and SSL

    - by Stormshadow
    I am having trouble trying to connect to a secure OpenLDAP server which I have set up. On running my LDAP client code java -Djavax.net.debug=ssl LDAPConnector I get the following exception trace (java version 1.6.0_17) trigger seeding of SecureRandom done seeding SecureRandom %% No cached client session *** ClientHello, TLSv1 RandomCookie: GMT: 1256110124 bytes = { 224, 19, 193, 148, 45, 205, 108, 37, 101, 247, 112, 24, 157, 39, 111, 177, 43, 53, 206, 224, 68, 165, 55, 185, 54, 203, 43, 91 } Session ID: {} Cipher Suites: [SSL_RSA_WITH_RC4_128_MD5, SSL_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA, SSL_RSA_W ITH_3DES_EDE_CBC_SHA, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA, SSL_RSA_WITH_DES_CBC_SHA, SSL_DHE_RSA_WITH_DES_CBC_SHA, SSL_DHE_DSS_WITH_DES_CBC_SH A, SSL_RSA_EXPORT_WITH_RC4_40_MD5, SSL_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA] Compression Methods: { 0 } *** Thread-0, WRITE: TLSv1 Handshake, length = 73 Thread-0, WRITE: SSLv2 client hello message, length = 98 Thread-0, received EOFException: error Thread-0, handling exception: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake Thread-0, SEND TLSv1 ALERT: fatal, description = handshake_failure Thread-0, WRITE: TLSv1 Alert, length = 2 Thread-0, called closeSocket() main, handling exception: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake javax.naming.CommunicationException: simple bind failed: ldap.natraj.com:636 [Root exception is javax.net.ssl.SSLHandshakeException: Remote host closed connection during hands hake] at com.sun.jndi.ldap.LdapClient.authenticate(Unknown Source) at com.sun.jndi.ldap.LdapCtx.connect(Unknown Source) at com.sun.jndi.ldap.LdapCtx.<init>(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getUsingURL(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getUsingURLs(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxInstance(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getInitialContext(Unknown Source) at javax.naming.spi.NamingManager.getInitialContext(Unknown Source) at javax.naming.InitialContext.getDefaultInitCtx(Unknown Source) at javax.naming.InitialContext.init(Unknown Source) at javax.naming.InitialContext.<init>(Unknown Source) at javax.naming.directory.InitialDirContext.<init>(Unknown Source) at LDAPConnector.CallSecureLDAPServer(LDAPConnector.java:43) at LDAPConnector.main(LDAPConnector.java:237) Caused by: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(Unknown Source) at com.sun.net.ssl.internal.ssl.SSLSocketImpl.performInitialHandshake(Unknown Source) at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readDataRecord(Unknown Source) at com.sun.net.ssl.internal.ssl.AppInputStream.read(Unknown Source) at java.io.BufferedInputStream.fill(Unknown Source) at java.io.BufferedInputStream.read1(Unknown Source) at java.io.BufferedInputStream.read(Unknown Source) at com.sun.jndi.ldap.Connection.run(Unknown Source) at java.lang.Thread.run(Unknown Source) Caused by: java.io.EOFException: SSL peer shut down incorrectly at com.sun.net.ssl.internal.ssl.InputRecord.read(Unknown Source) ... 
9 more I am able to connect to the same secure LDAP server however if I use another version of java (1.6.0_14) I have created and installed the server certificates in the cacerts of both the JRE's as mentioned in this guide -- OpenLDAP with SSL When I run ldapsearch -x on the server I get # extended LDIF # # LDAPv3 # base <dc=localdomain> (default) with scope subtree # filter: (objectclass=*) # requesting: ALL # # localdomain dn: dc=localdomain objectClass: top objectClass: dcObject objectClass: organization o: localdomain dc: localdomain # admin, localdomain dn: cn=admin,dc=localdomain objectClass: simpleSecurityObject objectClass: organizationalRole cn: admin description: LDAP administrator # search result search: 2 result: 0 Success # numResponses: 3 # numEntries: 2 On running openssl s_client -connect ldap.natraj.com:636 -showcerts , I obtain the self signed certificate. My slapd.conf file is as follows ####################################################################### # Global Directives: # Features to permit #allow bind_v2 # Schema and objectClass definitions include /etc/ldap/schema/core.schema include /etc/ldap/schema/cosine.schema include /etc/ldap/schema/nis.schema include /etc/ldap/schema/inetorgperson.schema # Where the pid file is put. The init.d script # will not stop the server if you change this. pidfile /var/run/slapd/slapd.pid # List of arguments that were passed to the server argsfile /var/run/slapd/slapd.args # Read slapd.conf(5) for possible values loglevel none # Where the dynamically loaded modules are stored modulepath /usr/lib/ldap moduleload back_hdb # The maximum number of entries that is returned for a search operation sizelimit 500 # The tool-threads parameter sets the actual amount of cpu's that is used # for indexing. tool-threads 1 ####################################################################### # Specific Backend Directives for hdb: # Backend specific directives apply to this backend until another # 'backend' directive occurs backend hdb ####################################################################### # Specific Backend Directives for 'other': # Backend specific directives apply to this backend until another # 'backend' directive occurs #backend <other> ####################################################################### # Specific Directives for database #1, of type hdb: # Database specific directives apply to this databasse until another # 'database' directive occurs database hdb # The base of your directory in database #1 suffix "dc=localdomain" # rootdn directive for specifying a superuser on the database. This is needed # for syncrepl. rootdn "cn=admin,dc=localdomain" # Where the database file are physically stored for database #1 directory "/var/lib/ldap" # The dbconfig settings are used to generate a DB_CONFIG file the first # time slapd starts. They do NOT override existing an existing DB_CONFIG # file. You should therefore change these settings in DB_CONFIG directly # or remove DB_CONFIG and restart slapd for changes to take effect. # For the Debian package we use 2MB as default but be sure to update this # value if you have plenty of RAM dbconfig set_cachesize 0 2097152 0 # Sven Hartge reported that he had to set this value incredibly high # to get slapd running at all. See http://bugs.debian.org/303057 for more # information. # Number of objects that can be locked at the same time. 
dbconfig set_lk_max_objects 1500 # Number of locks (both requested and granted) dbconfig set_lk_max_locks 1500 # Number of lockers dbconfig set_lk_max_lockers 1500 # Indexing options for database #1 index objectClass eq # Save the time that the entry gets modified, for database #1 lastmod on # Checkpoint the BerkeleyDB database periodically in case of system # failure and to speed slapd shutdown. checkpoint 512 30 # Where to store the replica logs for database #1 # replogfile /var/lib/ldap/replog # The userPassword by default can be changed # by the entry owning it if they are authenticated. # Others should not be able to see it, except the # admin entry below # These access lines apply to database #1 only access to attrs=userPassword,shadowLastChange by dn="cn=admin,dc=localdomain" write by anonymous auth by self write by * none # Ensure read access to the base for things like # supportedSASLMechanisms. Without this you may # have problems with SASL not knowing what # mechanisms are available and the like. # Note that this is covered by the 'access to *' # ACL below too but if you change that as people # are wont to do you'll still need this if you # want SASL (and possible other things) to work # happily. access to dn.base="" by * read # The admin dn has full write access, everyone else # can read everything. access to * by dn="cn=admin,dc=localdomain" write by * read # For Netscape Roaming support, each user gets a roaming # profile for which they have write access to #access to dn=".*,ou=Roaming,o=morsnet" # by dn="cn=admin,dc=localdomain" write # by dnattr=owner write ####################################################################### # Specific Directives for database #2, of type 'other' (can be hdb too): # Database specific directives apply to this databasse until another # 'database' directive occurs #database <other> # The base of your directory for database #2 #suffix "dc=debian,dc=org" ####################################################################### # SSL: # Uncomment the following lines to enable SSL and use the default # snakeoil certificates. #TLSCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem #TLSCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key TLSCipherSuite TLS_RSA_AES_256_CBC_SHA TLSCACertificateFile /etc/ldap/ssl/server.pem TLSCertificateFile /etc/ldap/ssl/server.pem TLSCertificateKeyFile /etc/ldap/ssl/server.pem My ldap.conf file is # # LDAP Defaults # # See ldap.conf(5) for details # This file should be world readable but not world writable. HOST ldap.natraj.com PORT 636 BASE dc=localdomain URI ldaps://ldap.natraj.com TLS_CACERT /etc/ldap/ssl/server.pem TLS_REQCERT allow #SIZELIMIT 12 #TIMELIMIT 15 #DEREF never Why is it that I can connect to the same server using one version of JRE while I cannot with another ?
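One thing that stands out, given that only one JRE fails: slapd is pinned to a single AES-256 suite (TLSCipherSuite TLS_RSA_AES_256_CBC_SHA), and a JRE running with the default restricted JCE policy cannot offer any AES-256 cipher suite - the ClientHello above only lists AES_128 suites - so the server finds no common cipher and drops the connection, which surfaces as "Remote host closed connection during handshake". The snippet below is a small, self-contained check (the host name is reused from the post, everything else is an assumption) that prints the JRE's AES limit and attempts a bare TLS handshake against port 636.

import javax.crypto.Cipher;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class LdapsCipherCheck {
    public static void main(String[] args) throws Exception {
        // 128 means the restricted JCE policy is in place; AES-256 suites will not be offered.
        System.out.println("Max AES key length: " + Cipher.getMaxAllowedKeyLength("AES"));

        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        for (String suite : factory.getSupportedCipherSuites()) {
            if (suite.contains("AES_256")) {
                System.out.println("AES-256 suite available: " + suite);
            }
        }

        // Bare handshake against the LDAPS port; assumes the server cert is already in cacerts.
        SSLSocket socket = (SSLSocket) factory.createSocket("ldap.natraj.com", 636);
        socket.startHandshake();
        System.out.println("Negotiated: " + socket.getSession().getCipherSuite());
        socket.close();
    }
}

If the first line prints 128, either install the JCE unlimited-strength policy files into the 1.6.0_17 JRE or relax TLSCipherSuite on the server to also allow an AES-128 suite; either change should make the two JREs behave the same.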

    Read the article

  • Cisco ASA: Allowing and Denying VPN Access based on membership to an AD group

    - by milkandtang
    I have a Cisco ASA 5505 connecting to an Active Directory server for VPN authentication. Usually we'd restrict this to a particular OU, but in this case users which need access are spread across multiple OUs. So, I'd like to use a group to specify which users have remote access. I've created the group and added the users, but I'm having trouble figuring out how to deny users which aren't in that group. Right now, if someone connects they get assigned the correct group policy "companynamera" if they are in that group, so the LDAP mapping is working. However, users who are not in that group still authenticate fine, and their group policy becomes the LDAP path of their first group, i.e. CN=Domain Users,CN=Users,DC=example,DC=com, and then are still allowed access. How do I add a filter so that I can map everything that isn't "companynamera" to no access? Config I'm using (with some stuff such as ACLs and mappings removed, since they are just noise here): gateway# show run : Saved : ASA Version 8.2(1) ! hostname gateway domain-name corp.company-name.com enable password gDZcqZ.aUC9ML0jK encrypted passwd gDZcqZ.aUC9ML0jK encrypted names name 192.168.0.2 dc5 description FTP Server name 192.168.0.5 dc2 description Everything server name 192.168.0.6 dc4 description File Server name 192.168.0.7 ts1 description Light Use Terminal Server name 192.168.0.8 ts2 description Heavy Use Terminal Server name 4.4.4.82 primary-frontier name 5.5.5.26 primary-eschelon name 172.21.18.5 dmz1 description Kerio Mail Server and FTP Server name 4.4.4.84 ts-frontier name 4.4.4.85 vpn-frontier name 5.5.5.28 ts-eschelon name 5.5.5.29 vpn-eschelon name 5.5.5.27 email-eschelon name 4.4.4.83 guest-frontier name 4.4.4.86 email-frontier dns-guard ! interface Vlan1 nameif inside security-level 100 ip address 192.168.0.254 255.255.255.0 ! interface Vlan2 description Frontier FiOS nameif outside security-level 0 ip address primary-frontier 255.255.255.0 ! interface Vlan3 description Eschelon T1 nameif backup security-level 0 ip address primary-eschelon 255.255.255.248 ! interface Vlan4 nameif dmz security-level 50 ip address 172.21.18.254 255.255.255.0 ! interface Vlan5 nameif guest security-level 25 ip address 172.21.19.254 255.255.255.0 ! interface Ethernet0/0 switchport access vlan 2 ! interface Ethernet0/1 switchport access vlan 3 ! interface Ethernet0/2 switchport access vlan 4 ! interface Ethernet0/3 switchport access vlan 5 ! interface Ethernet0/4 ! interface Ethernet0/5 ! interface Ethernet0/6 ! interface Ethernet0/7 ! 
ftp mode passive clock timezone PST -8 clock summer-time PDT recurring dns domain-lookup inside dns server-group DefaultDNS name-server dc2 domain-name corp.company-name.com same-security-traffic permit intra-interface access-list companyname_splitTunnelAcl standard permit 192.168.0.0 255.255.255.0 access-list companyname_splitTunnelAcl standard permit 172.21.18.0 255.255.255.0 access-list inside_nat0_outbound extended permit ip any 172.21.20.0 255.255.255.0 access-list inside_nat0_outbound extended permit ip any 172.21.18.0 255.255.255.0 access-list bypassingnat_dmz extended permit ip 172.21.18.0 255.255.255.0 192.168.0.0 255.255.255.0 pager lines 24 logging enable logging buffer-size 12288 logging buffered warnings logging asdm notifications mtu inside 1500 mtu outside 1500 mtu backup 1500 mtu dmz 1500 mtu guest 1500 ip local pool VPNpool 172.21.20.50-172.21.20.59 mask 255.255.255.0 no failover icmp unreachable rate-limit 1 burst-size 1 no asdm history enable arp timeout 14400 global (outside) 1 interface global (outside) 2 email-frontier global (outside) 3 guest-frontier global (backup) 1 interface global (dmz) 1 interface nat (inside) 0 access-list inside_nat0_outbound nat (inside) 2 dc5 255.255.255.255 nat (inside) 1 192.168.0.0 255.255.255.0 nat (dmz) 0 access-list bypassingnat_dmz nat (dmz) 2 dmz1 255.255.255.255 nat (dmz) 1 172.21.18.0 255.255.255.0 access-group outside_access_in in interface outside access-group dmz_access_in in interface dmz route outside 0.0.0.0 0.0.0.0 4.4.4.1 1 track 1 route backup 0.0.0.0 0.0.0.0 5.5.5.25 254 timeout xlate 3:00:00 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02 timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00 timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00 timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute timeout tcp-proxy-reassembly 0:01:00 ldap attribute-map RemoteAccessMap map-name memberOf IETF-Radius-Class map-value memberOf CN=RemoteAccess,CN=Users,DC=corp,DC=company-name,DC=com companynamera dynamic-access-policy-record DfltAccessPolicy aaa-server ActiveDirectory protocol ldap aaa-server ActiveDirectory (inside) host dc2 ldap-base-dn dc=corp,dc=company-name,dc=com ldap-scope subtree ldap-login-password * ldap-login-dn cn=administrator,ou=Admins,dc=corp,dc=company-name,dc=com server-type microsoft aaa-server ADRemoteAccess protocol ldap aaa-server ADRemoteAccess (inside) host dc2 ldap-base-dn dc=corp,dc=company-name,dc=com ldap-scope subtree ldap-login-password * ldap-login-dn cn=administrator,ou=Admins,dc=corp,dc=company-name,dc=com server-type microsoft ldap-attribute-map RemoteAccessMap aaa authentication enable console LOCAL aaa authentication ssh console LOCAL http server enable http 192.168.0.0 255.255.255.0 inside no snmp-server location no snmp-server contact snmp-server enable traps snmp authentication linkup linkdown coldstart sla monitor 123 type echo protocol ipIcmpEcho 4.4.4.1 interface outside num-packets 3 frequency 10 sla monitor schedule 123 life forever start-time now crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac crypto ipsec security-association lifetime seconds 28800 crypto ipsec security-association lifetime kilobytes 4608000 crypto dynamic-map outside_dyn_map 20 set pfs crypto dynamic-map outside_dyn_map 20 set transform-set ESP-3DES-SHA crypto map outside_map 65535 ipsec-isakmp dynamic outside_dyn_map crypto map outside_map interface outside crypto isakmp enable outside crypto isakmp policy 10 authentication 
pre-share encryption 3des hash sha group 2 lifetime 86400 ! track 1 rtr 123 reachability telnet timeout 5 ssh 192.168.0.0 255.255.255.0 inside ssh timeout 5 ssh version 2 console timeout 0 management-access inside dhcpd auto_config outside ! threat-detection basic-threat threat-detection statistics access-list no threat-detection statistics tcp-intercept webvpn group-policy companynamera internal group-policy companynamera attributes wins-server value 192.168.0.5 dns-server value 192.168.0.5 vpn-tunnel-protocol IPSec password-storage enable split-tunnel-policy tunnelspecified split-tunnel-network-list value companyname_splitTunnelAcl default-domain value corp.company-name.com split-dns value corp.company-name.com group-policy companyname internal group-policy companyname attributes wins-server value 192.168.0.5 dns-server value 192.168.0.5 vpn-tunnel-protocol IPSec password-storage enable split-tunnel-policy tunnelspecified split-tunnel-network-list value companyname_splitTunnelAcl default-domain value corp.company-name.com split-dns value corp.company-name.com username admin password IhpSqtN210ZsNaH. encrypted privilege 15 tunnel-group companyname type remote-access tunnel-group companyname general-attributes address-pool VPNpool authentication-server-group ActiveDirectory LOCAL default-group-policy companyname tunnel-group companyname ipsec-attributes pre-shared-key * tunnel-group companynamera type remote-access tunnel-group companynamera general-attributes address-pool VPNpool authentication-server-group ADRemoteAccess LOCAL default-group-policy companynamera tunnel-group companynamera ipsec-attributes pre-shared-key * ! class-map type inspect ftp match-all ftp-inspection-map class-map inspection_default match default-inspection-traffic ! ! policy-map type inspect ftp ftp-inspection-map parameters class ftp-inspection-map policy-map type inspect dns migrated_dns_map_1 parameters message-length maximum 512 policy-map global_policy class inspection_default inspect dns migrated_dns_map_1 inspect ftp inspect h323 h225 inspect h323 ras inspect http inspect ils inspect netbios inspect rsh inspect rtsp inspect skinny inspect sqlnet inspect sunrpc inspect tftp inspect sip inspect xdmcp inspect icmp inspect icmp error inspect esmtp inspect pptp ! service-policy global_policy global prompt hostname context Cryptochecksum:487525494a81c8176046fec475d17efe : end gateway# Thanks so much!

    Read the article

  • What to filter when providing very limited open WiFi to a small conference or meeting?

    - by Tim Farley
    Executive Summary The basic question is: if you have a very limited bandwidth WiFi to provide Internet for a small meeting of only a day or two, how do you set the filters on the router to avoid one or two users monopolizing all the available bandwidth? For folks who don't have the time to read the details below, I am NOT looking for any of these answers: Secure the router and only let a few trusted people use it Tell everyone to turn off unused services & generally police themselves Monitor the traffic with a sniffer and add filters as needed I am aware of all of that. None are appropriate for reasons that will become clear. ALSO NOTE: There is already a question concerning providing adequate WiFi at large (500 attendees) conferences here. This question concerns SMALL meetings of less than 200 people, typically with less than half that using the WiFi. Something that can be handled with a single home or small office router. Background I've used a 3G/4G router device to provide WiFi to small meetings in the past with some success. By small I mean single-room conferences or meetings on the order of a barcamp or Skepticamp or user group meeting. These meetings sometimes have technical attendees there, but not exclusively. Usually less than half to a third of the attendees will actually use the WiFi. Maximum meeting size I'm talking about is 100 to 200 people. I typically use a Cradlepoint MBR-1000 but many other devices exist, especially all-in-one units supplied by 3G and/or 4G vendors like Verizon, Sprint and Clear. These devices take a 3G or 4G internet connection and fan it out to multiple users using WiFi. One key aspect of providing net access this way is the limited bandwidth available over 3G/4G. Even with something like the Cradlepoint which can load-balance multiple radios, you are only going to achieve a few megabits of download speed and maybe a megabit or so of upload speed. That's a best case scenario. Often it is considerably slower. The goal in most of these meeting situations is to allow folks access to services like email, web, social media, chat services and so on. This is so they can live-blog or live-tweet the proceedings, or simply chat online or otherwise stay in touch (with both attendees and non-attendees) while the meeting proceeds. I would like to limit the services provided by the router to just those services that meet those needs. Problems In particular I have noticed a couple of scenarios where particular users end up abusing most of the bandwidth on the router, to the detriment of everyone. These boil into two areas: Intentional use. Folks looking at YouTube videos, downloading podcasts to their iPod, and otherwise using the bandwidth for things that really aren't appropriate in a meeting room where you should be paying attention to the speaker and/or interacting.At one meeting that we were live-streaming (over a separate, dedicated connection) via UStream, I noticed several folks in the room that had the UStream page up so they could interact with the meeting chat - apparently oblivious that they were wasting bandwidth streaming back video of something that was taking place right in front of them. Unintentional use. There are a variety of software utilities that will make extensive use of bandwidth in the background, that folks often have installed on their laptops and smartphones, perhaps without realizing.Examples: Peer to peer downloading programs such as Bittorrent that run in the background Automatic software update services. 
These are legion, as every major software vendor has their own, so one can easily have Microsoft, Apple, Mozilla, Adobe, Google and others all trying to download updates in the background. Security software that downloads new signatures such as anti-virus, anti-malware, etc. Backup software and other software that "syncs" in the background to cloud services. For some numbers on how much network bandwidth gets sucked up by these non-web, non-email type services, check out this recent Wired article. Apparently web, email and chat all together are less than one quarter of the Internet traffic now. If the numbers in that article are correct, by filtering out all the other stuff I should be able to increase the usefulness of the WiFi four-fold. Now, in some situations I've been able to control access using security on the router to limit it to a very small group of people (typically the organizers of the meeting). But that's not always appropriate. At an upcoming meeting I would like to run the WiFi without security and let anyone use it, because it happens at the meeting location the 4G coverage in my town is particularly excellent. In a recent test I got 10 Megabits down at the meeting site. The "tell people to police themselves" solution mentioned at top is not appropriate because of (a) a largely non-technical audience and (b) the unintentional nature of much of the usage as described above. The "run a sniffer and filter as needed" solution is not useful because these meetings typically only last a couple of days, often only one day, and have a very small volunteer staff. I don't have a person to dedicate to network monitoring, and by the time we got the rules tweaked completely the meeting will be over. What I've Got First thing, I figured I would use OpenDNS's domain filtering rules to filter out whole classes of sites. A number of video and peer-to-peer sites can be wiped out using this. (Yes, I am aware that filtering via DNS technically leaves the services accessible - remember, these are largely non-technical users attending a 2 day meeting. It's enough). I figured I would start with these selections in OpenDNS's UI: I figure I will probably also block DNS (port 53) to anything other than the router itself, so that folks can't bypass my DNS configuration. A savvy user could get around this, because I'm not going to put a lot of elaborate filters on the firewall, but I don't care too much. Because these meetings don't last very long, its probably not going to be worth the trouble. This should cover the bulk of the non-web traffic, i.e. peer-to-peer and video if that Wired article is correct. Please advise if you think there are severe limitations to the OpenDNS approach. What I Need Note that OpenDNS focuses on things that are "objectionable" in some context or another. Video, music, radio and peer-to-peer all get covered. I still need to cover a number of perfectly reasonable things that we just want to block because they aren't needed in a meeting. Most of these are utilities that upload or download legit things in the background. 
Specifically, I'd like to know port numbers or DNS names to filter in order to effectively disable the following services: Microsoft automatic updates Apple automatic updates Adobe automatic updates Google automatic updates Other major software update services Major virus/malware/security signature updates Major background backup services Other services that run in the background and can eat lots of bandwidth I also would like any other suggestions you might have that would be applicable. Sorry to be so verbose, but I find it helps to be very, very clear on questions of this nature, and I already have half a solution with the OpenDNS thing.

    Read the article

  • Network throughput issue (ARP-related)

    - by Joel Coel
    The small college where I work is having some very strange network issues. I'm looking for any advice or ideas here. We were fine over the summer, but the trouble began few days after students returned to campus in force for the fall term. Symptoms The main symptom is that internet access will work, but it's very slow... often to the point of timeouts. As an example, a typical result from Speedtest.net will return .4Mbps download, but allow 3 to 8 Mbps upload speed. Lesser symptoms may include severely limited performance transferring data to and from our file server, or even in some cases the inability to log in to the computer (cannot reach the domain controller). The issue crosses multiple vlans, and has effected devices on nearly every vlan we operate. The issue does not impact all machines on the network. An unaffected machine will typically see at least 11Mbps download from speedtest.net, and perhaps much more depending on larger campus traffic patterns at the time. There is one variation on the larger issue. We have one vlan where users were unable to log into nearly all of the machines at all. IT staff would log in using a local administrator account (or in some cases cached credentials), and from there a release/renew or pinging the gateway would allow the machine to work... for a while. Complicating this issue is that this vlan covers our computer labs, which use software called Deep Freeze to completely reset the hard drives after a reboot. It could just the same issue manifesting differently because of stale data on machines that have not permanently altered low-level info for weeks. We were able to solve this, however, by creating a new vlan and moving the labs over to the new vlan wholesale. Instigations Eventually we noticed that the effected machines all had recent dhcp leases. We can predict when a machine will become "slow" by watching when a dhcp lease comes up for renewal. We played with setting the lease time very short for a test vlan, but all that did was remove our ability to predict when the machine would become slow. Machines with static IPs have pretty much always worked normally. Manually releasing/renewing an address will never cause a machine to become slow. In fact, in some cases this process has fixed a machine in that state. Most of the time, though, it doesn't help. We also noticed that mobile machines like laptops are likely to become slow when they cross to new vlans. Wireless on campus is divided up into "zones", where each zone maps to a small set of buildings. Moving to a new building can place you in a zone, thereby causing you to get a new address. A machine resuming from sleep mode is also very likely to be slow. Mitigations Sometimes, but not always, clearing the arp cache on an effected machine will allow it to work normally again. As already mentioned, releasing/renewing a local machine's IP address can fix that machine, but it's not guaranteed. Pinging the default gateway can also sometimes help with a slow machine. What seems to help most to mitigate the issue is clearing the arp cache on our core layer-3 switch. This switch is used for our dhcp system as the default gateway on all vlans, and it handles inter-vlan routing. The model is a 3Com 4900SX. To try to mitigate the issue, we have the cache timeout set on the switch all the way down to the lowest possible time, but it hasn't helped. I also put together a script that runs every few minutes to automatically connect to the switch and reset the cache. 
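(For what it's worth, that script is conceptually no more than the sketch below – written here with the SSH.NET library, and with a placeholder address, credentials and clear-ARP command, since the exact 3Com syntax isn't the point and the 4900SX may only accept telnet rather than SSH.)

```csharp
using System;
using Renci.SshNet; // SSH.NET NuGet package

class ClearCoreSwitchArp
{
    static void Main()
    {
        // Placeholder management address and credentials for the core layer-3 switch
        using (var client = new SshClient("core-switch.example.edu", "admin", "password"))
        {
            client.Connect();

            // Placeholder command - substitute whatever the switch's real clear-ARP syntax is
            var cmd = client.RunCommand("clear arp-cache");
            Console.WriteLine("Exit status {0}: {1}", cmd.ExitStatus, cmd.Result);

            client.Disconnect();
        }
    }
}
```

It runs every 10 minutes from a scheduled task, which is the whole "mitigation" at the moment.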
Unfortunately, this does not always work, and can even cause some machines to end up in the slow state for a short time (though these seem to correct themselves after a few minutes). We currently have a scheduled job that runs every 10 minutes to force the core switch to clear it's ARP cache, but this is far from perfect or desirable. Reproduction We now have a test machine that we can force into the slow state at will. It is connected to a switch with ports set up for each of our vlans. We make the machine slow by connecting to different vlans, and after a new connection or two it will be slow. It's also worth noting in this section that this has happened before at the start of prior terms, but in the past the problem has gone away on it's own after a few days. It solved itself before we had a chance to do much diagnostic work... hence why we've allowed it to drag so long into the term this time 'round; the expectation was this would be a short-lived situation. Other Factors It's worth mentioning that we have had about half a dozen switches just outright fail over the last year. These are mainly 2003/2004-era 3Coms (mostly 4200's) that were all put in at about the same time. They should still be covered under warranty, buy HP has made getting service somewhat difficult. Mostly in power supplies that have failed, but in a couple cases we have used a power supply from a switch with a failed mainboard to bring a switch with a failed power supply back to life. We do have UPS devices on all but three of four switches now, but that was not the case when I started two and a half years ago. Severe budget constraints (we were on the Dept. of Ed's financially challenged institutions list a couple years back) have forced me to look to the likes of Netgear and TrendNet for replacements, but so far these low-end models seem to be holding their own. It's also worth mentioning that the big change on our network this summer was migrating from a single cross-campus wireless SSID to the zoned approach mentioned earlier. I don't think this is the source of the issue, as like I've said: we've seen this before. However, it's possible this is exacerbating the issue, and may be much of the reason it's been so hard to isolate. Diagnosis At first it seemed clear to us, given the timing and persistent nature of the problem, that the source of the issue was an infected (or malicious) student machine doing ARP cache poisoning. However, repeated attempts to isolate the source have failed. Those attempts include numerous wireshark packet traces, and even taking entire buildings offline for brief periods. We have not been able even to find a smoking gun bad ARP entry. My current best guess is an overloaded or failing core switch, but I'm not sure on how to test for this, and the cost of replacing it blindly is steep. Again, any ideas appreciated.

    Read the article

  • Using FiddlerCore to capture HTTP Requests with .NET

    - by Rick Strahl
    Over the last few weeks I’ve been working on my Web load testing utility West Wind WebSurge. One of the key components of a load testing tool is the ability to capture URLs effectively so that you can play them back later under load. One of the options in WebSurge for capturing URLs is to use its built-in capture tool which acts as an HTTP proxy to capture any HTTP and HTTPS traffic from most Windows HTTP clients, including Web Browsers as well as standalone Windows applications and services. To make this happen, I used Eric Lawrence’s awesome FiddlerCore library, which provides most of the functionality of his desktop Fiddler application, all rolled into an easy to use library that you can plug into your own applications. FiddlerCore makes it almost too easy to capture HTTP content! For WebSurge I needed to capture all HTTP traffic in order to capture the full HTTP request – URL, headers and any content posted by the client. The result of what I ended up creating is this semi-generic capture form: In this post I’m going to demonstrate how easy it is to use FiddlerCore to build this HTTP Capture Form.  If you want to jump right in here are the links to get Telerik’s Fiddler Core and the code for the demo provided here. FiddlerCore Download FiddlerCore on NuGet Show me the Code (WebSurge Integration code from GitHub) Download the WinForms Sample Form West Wind Web Surge (example implementation in live app) Note that FiddlerCore is bound by a license for commercial usage – see license.txt in the FiddlerCore distribution for details. Integrating FiddlerCore FiddlerCore is a library that simply plugs into your application. You can download it from the Telerik site and manually add the assemblies to your project, or you can simply install the NuGet package via:       PM> Install-Package FiddlerCore The library consists of the FiddlerCore.dll as well as a couple of support libraries (CertMaker.dll and BCMakeCert.dll) that are used for installing SSL certificates. I’ll have more on SSL captures and certificate installation later in this post. But first let’s see how easy it is to use FiddlerCore to capture HTTP content by looking at how to build the above capture form. Capturing HTTP Content Once the library is installed it’s super easy to hook up Fiddler functionality. Fiddler includes a number of static class methods on the FiddlerApplication object that can be called to hook up callback events as well as actual start monitoring HTTP URLs. In the following code directly lifted from WebSurge, I configure a few filter options on Form level object, from the user inputs shown on the form by assigning it to a capture options object. In the live application these settings are persisted configuration values, but in the demo they are one time values initialized and set on the form. 
Once these options are set, I hook up the AfterSessionComplete event to capture every URL that passes through the proxy after the request is completed and start up the Proxy service:void Start() { if (tbIgnoreResources.Checked) CaptureConfiguration.IgnoreResources = true; else CaptureConfiguration.IgnoreResources = false; string strProcId = txtProcessId.Text; if (strProcId.Contains('-')) strProcId = strProcId.Substring(strProcId.IndexOf('-') + 1).Trim(); strProcId = strProcId.Trim(); int procId = 0; if (!string.IsNullOrEmpty(strProcId)) { if (!int.TryParse(strProcId, out procId)) procId = 0; } CaptureConfiguration.ProcessId = procId; CaptureConfiguration.CaptureDomain = txtCaptureDomain.Text; FiddlerApplication.AfterSessionComplete += FiddlerApplication_AfterSessionComplete; FiddlerApplication.Startup(8888, true, true, true); } The key lines for FiddlerCore are just the last two lines of code that include the event hookup code as well as the Startup() method call. Here I only hook up to the AfterSessionComplete event but there are a number of other events that hook various stages of the HTTP request cycle you can also hook into. Other events include BeforeRequest, BeforeResponse, RequestHeadersAvailable, ResponseHeadersAvailable and so on. In my case I want to capture the request data and I actually have several options to capture this data. AfterSessionComplete is the last event that fires in the request sequence and it’s the most common choice to capture all request and response data. I could have used several other events, but AfterSessionComplete is one place where you can look both at the request and response data, so this will be the most common place to hook into if you’re capturing content. The implementation of AfterSessionComplete is responsible for capturing all HTTP request headers and it looks something like this:private void FiddlerApplication_AfterSessionComplete(Session sess) { // Ignore HTTPS connect requests if (sess.RequestMethod == "CONNECT") return; if (CaptureConfiguration.ProcessId > 0) { if (sess.LocalProcessID != 0 && sess.LocalProcessID != CaptureConfiguration.ProcessId) return; } if (!string.IsNullOrEmpty(CaptureConfiguration.CaptureDomain)) { if (sess.hostname.ToLower() != CaptureConfiguration.CaptureDomain.Trim().ToLower()) return; } if (CaptureConfiguration.IgnoreResources) { string url = sess.fullUrl.ToLower(); var extensions = CaptureConfiguration.ExtensionFilterExclusions; foreach (var ext in extensions) { if (url.Contains(ext)) return; } var filters = CaptureConfiguration.UrlFilterExclusions; foreach (var urlFilter in filters) { if (url.Contains(urlFilter)) return; } } if (sess == null || sess.oRequest == null || sess.oRequest.headers == null) return; string headers = sess.oRequest.headers.ToString(); var reqBody = sess.GetRequestBodyAsString(); // if you wanted to capture the response //string respHeaders = session.oResponse.headers.ToString(); //var respBody = session.GetResponseBodyAsString(); // replace the HTTP line to inject full URL string firstLine = sess.RequestMethod + " " + sess.fullUrl + " " + sess.oRequest.headers.HTTPVersion; int at = headers.IndexOf("\r\n"); if (at < 0) return; headers = firstLine + "\r\n" + headers.Substring(at + 1); string output = headers + "\r\n" + (!string.IsNullOrEmpty(reqBody) ? 
reqBody + "\r\n" : string.Empty) + Separator + "\r\n\r\n"; BeginInvoke(new Action<string>((text) => { txtCapture.AppendText(text); UpdateButtonStatus(); }), output); } The code starts by filtering out some requests based on the CaptureOptions I set before the capture is started. These options/filters are applied when requests actually come in. This is very useful to help narrow down the requests that are captured for playback based on options the user picked. I find it useful to limit requests to a certain domain for captures, as well as filtering out some request types like static resources – images, css, scripts etc. This is of course optional, but I think it’s a common scenario and WebSurge makes good use of this feature. AfterSessionComplete like other FiddlerCore events, provides a Session object parameter which contains all the request and response details. There are oRequest and oResponse objects to hold their respective data. In my case I’m interested in the raw request headers and body only, as you can see in the commented code you can also retrieve the response headers and body. Here the code captures the request headers and body and simply appends the output to the textbox on the screen. Note that the Fiddler events are asynchronous, so in order to display the content in the UI they have to be marshaled back the UI thread with BeginInvoke, which here simply takes the generated headers and appends it to the existing textbox test on the form. As each request is processed, the headers are captured and appended to the bottom of the textbox resulting in a Session HTTP capture in the format that Web Surge internally supports, which is basically raw request headers with a customized 1st HTTP Header line that includes the full URL rather than a server relative URL. When the capture is done the user can either copy the raw HTTP session to the clipboard, or directly save it to file. This raw capture format is the same format WebSurge and also Fiddler use to import/export request data. While this code is application specific, it demonstrates the kind of logic that you can easily apply to the request capture process, which is one of the reasonsof why FiddlerCore is so powerful. You get to choose what content you want to look up as part of your own application logic and you can then decide how to capture or use that data as part of your application. The actual captured data in this case is only a string. The user can edit the data by hand or in the the case of WebSurge, save it to disk and automatically open the captured session as a new load test. Stopping the FiddlerCore Proxy Finally to stop capturing requests you simply disconnect the event handler and call the FiddlerApplication.ShutDown() method:void Stop() { FiddlerApplication.AfterSessionComplete -= FiddlerApplication_AfterSessionComplete; if (FiddlerApplication.IsStarted()) FiddlerApplication.Shutdown(); } As you can see, adding HTTP capture functionality to an application is very straight forward. FiddlerCore offers tons of features I’m not even touching on here – I suspect basic captures are the most common scenario, but a lot of different things can be done with FiddlerCore’s simple API interface. Sky’s the limit! The source code for this sample capture form (WinForms) is provided as part of this article. 
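To make the shape of the API even clearer, here is the same start/capture/stop cycle boiled down to a minimal console sketch. Nothing in it is WebSurge-specific – the port number and the choice to log only the request line and status code are arbitrary for the example:

```csharp
using System;
using Fiddler;

class MinimalCaptureDemo
{
    static void Main()
    {
        // Log the request line of every completed session
        FiddlerApplication.AfterSessionComplete += session =>
        {
            // Skip the HTTPS tunnel setup requests, just like the WebSurge handler does
            if (session.RequestMethod == "CONNECT")
                return;

            Console.WriteLine("{0} {1} ({2})",
                session.RequestMethod, session.fullUrl, session.responseCode);
        };

        // Listen on port 8888, register as system proxy, decrypt SSL, allow remote clients
        FiddlerApplication.Startup(8888, true, true, true);

        Console.WriteLine("Capturing... press Enter to stop.");
        Console.ReadLine();

        if (FiddlerApplication.IsStarted())
            FiddlerApplication.Shutdown();
    }
}
```

Everything else in the WinForms version above is just filtering and UI marshaling layered on top of those few calls.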
Adding Fiddler Certificates with FiddlerCore One of the sticking points in West Wind WebSurge has been that if you wanted to capture HTTPS/SSL traffic, you needed to have the full version of Fiddler and have HTTPS decryption enabled. Essentially you had to use Fiddler to configure HTTPS decryption and the associated installation of the Fiddler local client certificate that is used for local decryption of incoming SSL traffic. While this works just fine, requiring to have Fiddler installed and then using a separate application to configure the SSL functionality isn’t ideal. Fortunately FiddlerCore actually includes the tools to register the Fiddler Certificate directly using FiddlerCore. Why does Fiddler need a Certificate in the first Place? Fiddler and FiddlerCore are essentially HTTP proxies which means they inject themselves into the HTTP conversation by re-routing HTTP traffic to a special HTTP port (8888 by default for Fiddler) and then forward the HTTP data to the original client. Fiddler injects itself as the system proxy in using the WinInet Windows settings  which are the same settings that Internet Explorer uses and that are configured in the Windows and Internet Explorer Internet Settings dialog. Most HTTP clients running on Windows pick up and apply these system level Proxy settings before establishing new HTTP connections and that’s why most clients automatically work once Fiddler – or FiddlerCore/WebSurge are running. For plain HTTP requests this just works – Fiddler intercepts the HTTP requests on the proxy port and then forwards them to the original port (80 for HTTP and 443 for SSL typically but it could be any port). For SSL however, this is not quite as simple – Fiddler can easily act as an HTTPS/SSL client to capture inbound requests from the server, but when it forwards the request to the client it has to also act as an SSL server and provide a certificate that the client trusts. This won’t be the original certificate from the remote site, but rather a custom local certificate that effectively simulates an SSL connection between the proxy and the client. If there is no custom certificate configured for Fiddler the SSL request fails with a certificate validation error. The key for this to work is that a custom certificate has to be installed that the HTTPS client trusts on the local machine. For a much more detailed description of the process you can check out Eric Lawrence’s blog post on Certificates. If you’re using the desktop version of Fiddler you can install a local certificate into the Windows certificate store. Fiddler proper does this from the Options menu: This operation does several things: It installs the Fiddler Root Certificate It sets trust to this Root Certificate A new client certificate is generated for each HTTPS site monitored Certificate Installation with FiddlerCore You can also provide this same functionality using FiddlerCore which includes a CertMaker class. 
Using CertMaker is straight forward to use and it provides an easy way to create some simple helpers that can install and uninstall a Fiddler Root certificate:public static bool InstallCertificate() { if (!CertMaker.rootCertExists()) { if (!CertMaker.createRootCert()) return false; if (!CertMaker.trustRootCert()) return false; } return true; } public static bool UninstallCertificate() { if (CertMaker.rootCertExists()) { if (!CertMaker.removeFiddlerGeneratedCerts(true)) return false; } return true; } InstallCertificate() works by first checking whether the root certificate is already installed and if it isn’t goes ahead and creates a new one. The process of creating the certificate is a two step process – first the actual certificate is created and then it’s moved into the certificate store to become trusted. I’m not sure why you’d ever split these operations up since a cert created without trust isn’t going to be of much value, but there are two distinct steps. When you trigger the trustRootCert() method, a message box will pop up on the desktop that lets you know that you’re about to trust a local private certificate. This is a security feature to ensure that you really want to trust the Fiddler root since you are essentially installing a man in the middle certificate. It’s quite safe to use this generated root certificate, because it’s been specifically generated for your machine and thus is not usable from external sources, the only way to use this certificate in a trusted way is from the local machine. IOW, unless somebody has physical access to your machine, there’s no useful way to hijack this certificate and use it for nefarious purposes (see Eric’s post for more details). Once the Root certificate has been installed, FiddlerCore/Fiddler create new certificates for each site that is connected to with HTTPS. You can end up with quite a few temporary certificates in your certificate store. To uninstall you can either use Fiddler and simply uncheck the Decrypt HTTPS traffic option followed by the remove Fiddler certificates button, or you can use FiddlerCore’s CertMaker.removeFiddlerGeneratedCerts() which removes the root cert and any of the intermediary certificates Fiddler created. Keep in mind that when you uninstall you uninstall the certificate for both FiddlerCore and Fiddler, so use UninstallCertificate() with care and realize that you might affect the Fiddler application’s operation by doing so as well. When to check for an installed Certificate Note that the check to see if the root certificate exists is pretty fast, while the actual process of installing the certificate is a relatively slow operation that even on a fast machine takes a few seconds. Further the trust operation pops up a message box so you probably don’t want to install the certificate repeatedly. Since the check for the root certificate is fast, you can easily put a call to InstallCertificate() in any capture startup code – in which case the certificate installation only triggers when a certificate is in fact not installed. Personally I like to make certificate installation explicit – just like Fiddler does, so in WebSurge I use a small drop down option on the menu to install or uninstall the SSL certificate:   This code calls the InstallCertificate and UnInstallCertificate functions respectively – the experience with this is similar to what you get in Fiddler with the extra dialog box popping up to prompt confirmation for installation of the root certificate. 
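If you prefer the "check at capture startup" route instead of a menu option, it boils down to something like this sketch – the method name here is illustrative, and InstallCertificate() is the helper shown above:

```csharp
void StartWithHttpsDecryption()
{
    // rootCertExists() is cheap, so InstallCertificate() only does real work the first time through
    if (!InstallCertificate())
    {
        MessageBox.Show("Could not create or trust the Fiddler root certificate - " +
                        "HTTPS traffic will not be decrypted.");
        return;
    }

    // Same startup code as before, now with a trusted root certificate in place
    FiddlerApplication.AfterSessionComplete += FiddlerApplication_AfterSessionComplete;
    FiddlerApplication.Startup(8888, true, true, true);
}
```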
Once the cert is installed you can then capture SSL requests. There’s a gotcha however… Gotcha: FiddlerCore Certificates don’t stick by Default When I originally tried to use the Fiddler certificate installation I ran into an odd problem. I was able to install the certificate and immediately after installation was able to capture HTTPS requests. Then I would exit the application and come back in and try the same HTTPS capture again and it would fail due to a missing certificate. CertMaker.rootCertExists() would return false after every restart and if re-installed the certificate a new certificate would get added to the certificate store resulting in a bunch of duplicated root certificates with different keys. What the heck? CertMaker and BcMakeCert create non-sticky CertificatesI turns out that FiddlerCore by default uses different components from what the full version of Fiddler uses. Fiddler uses a Windows utility called MakeCert.exe to create the Fiddler Root certificate. FiddlerCore however installs the CertMaker.dll and BCMakeCert.dll assemblies, which use a different crypto library (Bouncy Castle) for certificate creation than MakeCert.exe which uses the Windows Crypto API. The assemblies provide support for non-windows operation for Fiddler under Mono, as well as support for some non-Windows certificate platforms like iOS and Android for decryption. The bottom line is that the FiddlerCore provided bouncy castle assemblies are not sticky by default as the certificates created with them are not cached as they are in Fiddler proper. To get certificates to ‘stick’ you have to explicitly cache the certificates in Fiddler’s internal preferences. A cache aware version of InstallCertificate looks something like this:public static bool InstallCertificate() { if (!CertMaker.rootCertExists()) { if (!CertMaker.createRootCert()) return false; if (!CertMaker.trustRootCert()) return false; App.Configuration.UrlCapture.Cert = FiddlerApplication.Prefs.GetStringPref("fiddler.certmaker.bc.cert", null); App.Configuration.UrlCapture.Key = FiddlerApplication.Prefs.GetStringPref("fiddler.certmaker.bc.key", null); } return true; } public static bool UninstallCertificate() { if (CertMaker.rootCertExists()) { if (!CertMaker.removeFiddlerGeneratedCerts(true)) return false; } App.Configuration.UrlCapture.Cert = null; App.Configuration.UrlCapture.Key = null; return true; } In this code I store the Fiddler cert and private key in an application configuration settings that’s stored with the application settings (App.Configuration.UrlCapture object). These settings automatically persist when WebSurge is shut down. The values are read out of Fiddler’s internal preferences store which is set after a new certificate has been created. Likewise I clear out the configuration settings when the certificate is uninstalled. In order for these setting to be used you have to also load the configuration settings into the Fiddler preferences *before* a call to rootCertExists() is made. 
I do this in the capture form’s constructor:public FiddlerCapture(StressTestForm form) { InitializeComponent(); CaptureConfiguration = App.Configuration.UrlCapture; MainForm = form; if (!string.IsNullOrEmpty(App.Configuration.UrlCapture.Cert)) { FiddlerApplication.Prefs.SetStringPref("fiddler.certmaker.bc.key", App.Configuration.UrlCapture.Key); FiddlerApplication.Prefs.SetStringPref("fiddler.certmaker.bc.cert", App.Configuration.UrlCapture.Cert); }} This is kind of a drag to do and not documented anywhere that I could find, so hopefully this will save you some grief if you want to work with the stock certificate logic that installs with FiddlerCore. MakeCert provides sticky Certificates and the same functionality as Fiddler But there’s actually an easier way. If you want to skip the above Fiddler preference configuration code in your application you can choose to distribute MakeCert.exe instead of certmaker.dll and bcmakecert.dll. When you use MakeCert.exe, the certificates settings are stored in Windows so they are available without any custom configuration inside of your application. It’s easier to integrate and as long as you run on Windows and you don’t need to support iOS or Android devices is simply easier to deal with. To integrate into your project, you can remove the reference to CertMaker.dll (and the BcMakeCert.dll assembly) from your project. Instead copy MakeCert.exe into your output folder. To make sure MakeCert.exe gets pushed out, include MakeCert.exe in your project and set the Build Action to None, and Copy to Output Directory to Copy if newer. Note that the CertMaker.dll reference in the project has been removed and on disk the files for Certmaker.dll, as well as the BCMakeCert.dll files on disk. Keep in mind that these DLLs are resources of the FiddlerCore NuGet package, so updating the package may end up pushing those files back into your project. Once MakeCert.exe is distributed FiddlerCore checks for it first before using the assemblies so as long as MakeCert.exe exists it’ll be used for certificate creation (at least on Windows). Summary FiddlerCore is a pretty sweet tool, and it’s absolutely awesome that we get to plug in most of the functionality of Fiddler right into our own applications. A few years back I tried to build this sort of functionality myself for an app and ended up giving up because it’s a big job to get HTTP right – especially if you need to support SSL. FiddlerCore now provides that functionality as a turnkey solution that can be plugged into your own apps easily. The only downside is FiddlerCore’s documentation for more advanced features like certificate installation which is pretty sketchy. While for the most part FiddlerCore’s feature set is easy to work with without any documentation, advanced features are often not intuitive to gleam by just using Intellisense or the FiddlerCore help file reference (which is not terribly useful). While Eric Lawrence is very responsive on his forum and on Twitter, there simply isn’t much useful documentation on Fiddler/FiddlerCore available online. If you run into trouble the forum is probably the first place to look and then ask a question if you can’t find the answer. The best documentation you can find is Eric’s Fiddler Book which covers a ton of functionality of Fiddler and FiddlerCore. The book is a great reference to Fiddler’s feature set as well as providing great insights into the HTTP protocol. 
The second half of the book that gets into the innards of HTTP is an excellent read for anybody who wants to know more about some of the more arcane aspects and special behaviors of HTTP – it’s well worth the read. While the book has tons of information in a very readable format, it’s unfortunately not a great reference as it’s hard to find things in the book and because it’s not available online you can’t electronically search for the great content in it. But it’s hard to complain about any of this given the obvious effort and love that’s gone into this awesome product for all of these years. A mighty big thanks to Eric Lawrence for having created this useful tool that so many of us use all the time, and also to Telerik for picking up Fiddler/FiddlerCore and providing Eric the resources to support and improve this wonderful tool full time and keeping it free for all. Kudos! Resources: FiddlerCore Download, FiddlerCore NuGet, Fiddler Capture Sample Form, Fiddler Capture Form in West Wind WebSurge (GitHub), Eric Lawrence’s Fiddler Book. © Rick Strahl, West Wind Technologies, 2005-2014. Posted in .NET, HTTP.

    Read the article

  • Entity Framework Code-First, OData & Windows Phone Client

    - by Jon Galloway
    Entity Framework Code-First is the coolest thing since sliced bread, Windows  Phone is the hottest thing since Tickle-Me-Elmo and OData is just too great to ignore. As part of the Full Stack project, we wanted to put them together, which turns out to be pretty easy… once you know how.   EF Code-First CTP5 is available now and there should be very few breaking changes in the release edition, which is due early in 2011.  Note: EF Code-First evolved rapidly and many of the existing documents and blog posts which were written with earlier versions, may now be obsolete or at least misleading.   Code-First? With traditional Entity Framework you start with a database and from that you generate “entities” – classes that bridge between the relational database and your object oriented program. With Code-First (Magic-Unicorn) (see Hanselman’s write up and this later write up by Scott Guthrie) the Entity Framework looks at classes you created and says “if I had created these classes, the database would have to have looked like this…” and creates the database for you! By deriving your entity collections from DbSet and exposing them via a class that derives from DbContext, you "turn on" database backing for your POCO with a minimum of code and no hidden designer or configuration files. POCO == Plain Old CLR Objects Your entity objects can be used throughout your applications - in web applications, console applications, Silverlight and Windows Phone applications, etc. In our case, we'll want to read and update data from a Windows Phone client application, so we'll expose the entities through a DataService and hook the Windows Phone client application to that data via proxies.  Piece of Pie.  Easy as cake. The Demo Architecture To see this at work, we’ll create an ASP.NET/MVC application which will act as the host for our Data Service.  We’ll create an incredibly simple data layer using EF Code-First on top of SQLCE4 and we’ll expose the data in a WCF Data Service using the oData protocol.  Our Windows Phone 7 client will instantiate  the data context via a URI and load the data asynchronously. Setting up the Server project with MVC 3, EF Code First, and SQL CE 4 Create a new application of type ASP.NET MVC 3 and name it DeadSimpleServer.  We need to add the latest SQLCE4 and Entity Framework Code First CTP's to our project. Fortunately, NuGet makes that really easy. Open the Package Manager Console (View / Other Windows / Package Manager Console) and type in "Install-Package EFCodeFirst.SqlServerCompact" at the PM> command prompt. Since NuGet handles dependencies for you, you'll see that it installs everything you need to use Entity Framework Code First in your project. PM> install-package EFCodeFirst.SqlServerCompact 'SQLCE (= 4.0.8435.1)' not installed. Attempting to retrieve dependency from source... Done 'EFCodeFirst (= 0.8)' not installed. Attempting to retrieve dependency from source... Done 'WebActivator (= 1.0.0.0)' not installed. Attempting to retrieve dependency from source... Done You are downloading SQLCE from Microsoft, the license agreement to which is available at http://173.203.67.148/licenses/SQLCE/EULA_ENU.rtf. Check the package for additional dependencies, which may come with their own license agreement(s). Your use of the package and dependencies constitutes your acceptance of their license agreements. If you do not accept the license agreement(s), then delete the relevant components from your device. 
Successfully installed 'SQLCE 4.0.8435.1' You are downloading EFCodeFirst from Microsoft, the license agreement to which is available at http://go.microsoft.com/fwlink/?LinkID=206497. Check the package for additional dependencies, which may come with their own license agreement(s). Your use of the package and dependencies constitutes your acceptance of their license agreements. If you do not accept the license agreement(s), then delete the relevant components from your device. Successfully installed 'EFCodeFirst 0.8' Successfully installed 'WebActivator 1.0.0.0' You are downloading EFCodeFirst.SqlServerCompact from Microsoft, the license agreement to which is available at http://173.203.67.148/licenses/SQLCE/EULA_ENU.rtf. Check the package for additional dependencies, which may come with their own license agreement(s). Your use of the package and dependencies constitutes your acceptance of their license agreements. If you do not accept the license agreement(s), then delete the relevant components from your device. Successfully installed 'EFCodeFirst.SqlServerCompact 0.8' Successfully added 'SQLCE 4.0.8435.1' to EfCodeFirst-CTP5 Successfully added 'EFCodeFirst 0.8' to EfCodeFirst-CTP5 Successfully added 'WebActivator 1.0.0.0' to EfCodeFirst-CTP5 Successfully added 'EFCodeFirst.SqlServerCompact 0.8' to EfCodeFirst-CTP5 Note: We're using SQLCE 4 with Entity Framework here because they work really well together from a development scenario, but you can of course use Entity Framework Code First with other databases supported by Entity framework. Creating The Model using EF Code First Now we can create our model class. Right-click the Models folder and select Add/Class. Name the Class Person.cs and add the following code: using System.Data.Entity; namespace DeadSimpleServer.Models { public class Person { public int ID { get; set; } public string Name { get; set; } } public class PersonContext : DbContext { public DbSet<Person> People { get; set; } } } Notice that the entity class Person has no special interfaces or base class. There's nothing special needed to make it work - it's just a POCO. The context we'll use to access the entities in the application is called PersonContext, but you could name it anything you wanted. The important thing is that it inherits DbContext and contains one or more DbSet which holds our entity collections. Adding Seed Data We need some testing data to expose from our service. The simplest way to get that into our database is to modify the CreateCeDatabaseIfNotExists class in AppStart_SQLCEEntityFramework.cs by adding some seed data to the Seed method: protected virtual void Seed( TContext context ) { var personContext = context as PersonContext; personContext.People.Add( new Person { ID = 1, Name = "George Washington" } ); personContext.People.Add( new Person { ID = 2, Name = "John Adams" } ); personContext.People.Add( new Person { ID = 3, Name = "Thomas Jefferson" } ); personContext.SaveChanges(); } The CreateCeDatabaseIfNotExists class name is pretty self-explanatory - when our DbContext is accessed and the database isn't found, a new one will be created and populated with the data in the Seed method. 
There's one more step to make that work - we need to uncomment a line in the Start method at the top of of the AppStart_SQLCEEntityFramework class and set the context name, as shown here, public static class AppStart_SQLCEEntityFramework { public static void Start() { DbDatabase.DefaultConnectionFactory = new SqlCeConnectionFactory("System.Data.SqlServerCe.4.0"); // Sets the default database initialization code for working with Sql Server Compact databases // Uncomment this line and replace CONTEXT_NAME with the name of your DbContext if you are // using your DbContext to create and manage your database DbDatabase.SetInitializer(new CreateCeDatabaseIfNotExists<PersonContext>()); } } Now our database and entity framework are set up, so we can expose data via WCF Data Services. Note: This is a bare-bones implementation with no administration screens. If you'd like to see how those are added, check out The Full Stack screencast series. Creating the oData Service using WCF Data Services Add a new WCF Data Service to the project (right-click the project / Add New Item / Web / WCF Data Service). We’ll be exposing all the data as read/write.  Remember to reconfigure to control and minimize access as appropriate for your own application. Open the code behind for your service. In our case, the service was called PersonTestDataService.svc so the code behind class file is PersonTestDataService.svc.cs. using System.Data.Services; using System.Data.Services.Common; using System.ServiceModel; using DeadSimpleServer.Models; namespace DeadSimpleServer { [ServiceBehavior( IncludeExceptionDetailInFaults = true )] public class PersonTestDataService : DataService<PersonContext> { // This method is called only once to initialize service-wide policies. public static void InitializeService( DataServiceConfiguration config ) { config.SetEntitySetAccessRule( "*", EntitySetRights.All ); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; config.UseVerboseErrors = true; } } } We're enabling a few additional settings to make it easier to debug if you run into trouble. The ServiceBehavior attribute is set to include exception details in faults, and we're using verbose errors. You can remove both of these when your service is working, as your public production service shouldn't be revealing exception information. 
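As a rough sketch of what that lock-down might look like – assuming, say, that production only ever needs read access to the People set – InitializeService could be dialed back to something like this:

```csharp
public static void InitializeService(DataServiceConfiguration config)
{
    // Expose only the People entity set, and only for reads
    config.SetEntitySetAccessRule("People", EntitySetRights.AllRead);
    config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;

    // Leave verbose errors off (and drop IncludeExceptionDetailInFaults) once everything works
    config.UseVerboseErrors = false;
}
```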
You can view the output of the service by running the application and browsing to http://localhost:[portnumber]/PersonTestDataService.svc/: <service xml:base="http://localhost:49786/PersonTestDataService.svc/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:app="http://www.w3.org/2007/app" xmlns="http://www.w3.org/2007/app"> <workspace> <atom:title>Default</atom:title> <collection href="People"> <atom:title>People</atom:title> </collection> </workspace> </service> This indicates that the service exposes one collection, which is accessible by browsing to http://localhost:[portnumber]/PersonTestDataService.svc/People <?xml version="1.0" encoding="iso-8859-1" standalone="yes"?> <feed xml:base=http://localhost:49786/PersonTestDataService.svc/ xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices" xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata" xmlns="http://www.w3.org/2005/Atom"> <title type="text">People</title> <id>http://localhost:49786/PersonTestDataService.svc/People</id> <updated>2010-12-29T01:01:50Z</updated> <link rel="self" title="People" href="People" /> <entry> <id>http://localhost:49786/PersonTestDataService.svc/People(1)</id> <title type="text"></title> <updated>2010-12-29T01:01:50Z</updated> <author> <name /> </author> <link rel="edit" title="Person" href="People(1)" /> <category term="DeadSimpleServer.Models.Person" scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" /> <content type="application/xml"> <m:properties> <d:ID m:type="Edm.Int32">1</d:ID> <d:Name>George Washington</d:Name> </m:properties> </content> </entry> <entry> ... </entry> </feed> Let's recap what we've done so far. But enough with services and XML - let's get this into our Windows Phone client application. Creating the DataServiceContext for the Client Use the latest DataSvcUtil.exe from http://odata.codeplex.com. As of today, that's in this download: http://odata.codeplex.com/releases/view/54698 You need to run it with a few options: /uri - This will point to the service URI. In this case, it's http://localhost:59342/PersonTestDataService.svc  Pick up the port number from your running server (e.g., the server formerly known as Cassini). /out - This is the DataServiceContext class that will be generated. You can name it whatever you'd like. /Version - should be set to 2.0 /DataServiceCollection - Include this flag to generate collections derived from the DataServiceCollection base, which brings in all the ObservableCollection goodness that handles your INotifyPropertyChanged events for you. Here's the console session from when we ran it: <ListBox x:Name="MainListBox" Margin="0,0,-12,0" ItemsSource="{Binding}" SelectionChanged="MainListBox_SelectionChanged"> Next, to keep things simple, change the Binding on the two TextBlocks within the DataTemplate to Name and ID, <ListBox x:Name="MainListBox" Margin="0,0,-12,0" ItemsSource="{Binding}" SelectionChanged="MainListBox_SelectionChanged"> <ListBox.ItemTemplate> <DataTemplate> <StackPanel Margin="0,0,0,17" Width="432"> <TextBlock Text="{Binding Name}" TextWrapping="Wrap" Style="{StaticResource PhoneTextExtraLargeStyle}" /> <TextBlock Text="{Binding ID}" TextWrapping="Wrap" Margin="12,-6,12,0" Style="{StaticResource PhoneTextSubtleStyle}" /> </StackPanel> </DataTemplate> </ListBox.ItemTemplate> </ListBox> Getting The Context In the code-behind you’ll first declare a member variable to hold the context from the Entity Framework. This is named using convention over configuration. 
The db type is Person and the context is of type PersonContext. You initialize it by providing the URI, in this case using the URL obtained from the Cassini web server,

PersonContext context = new PersonContext( new Uri( "http://localhost:49786/PersonTestDataService.svc/" ) );

Create a second member variable of type DataServiceCollection<Person> but do not initialize it,

DataServiceCollection<Person> people;

In the constructor you’ll initialize the DataServiceCollection using the PersonContext,

public MainPage()
{
    InitializeComponent();
    people = new DataServiceCollection<Person>( context );

Finally, you’ll load the people collection using the LoadAsync method, passing in the fully specified URI for the People collection in the web service,

people.LoadAsync( new Uri( "http://localhost:49786/PersonTestDataService.svc/People" ) );

Note that this method runs asynchronously and when it is finished the people collection is already populated. Thus, since we didn’t need or want to override any of the behavior, we don’t implement the LoadCompleted event handler. You can use the LoadCompleted event if you need to do any other UI updates, but you don't need to (a short sketch appears at the end of this article).

The final code is as shown below:

using System;
using System.Data.Services.Client;
using System.Windows;
using System.Windows.Controls;
using DeadSimpleServer.Models;
using Microsoft.Phone.Controls;

namespace WindowsPhoneODataTest
{
    public partial class MainPage : PhoneApplicationPage
    {
        PersonContext context = new PersonContext( new Uri( "http://localhost:49786/PersonTestDataService.svc/" ) );
        DataServiceCollection<Person> people;

        // Constructor
        public MainPage()
        {
            InitializeComponent();

            // Set the data context of the listbox control to the sample data
            // DataContext = App.ViewModel;
            people = new DataServiceCollection<Person>( context );
            people.LoadAsync( new Uri( "http://localhost:49786/PersonTestDataService.svc/People" ) );
            DataContext = people;
            this.Loaded += new RoutedEventHandler( MainPage_Loaded );
        }

        // Handle selection changed on ListBox
        private void MainListBox_SelectionChanged( object sender, SelectionChangedEventArgs e )
        {
            // If selected index is -1 (no selection) do nothing
            if ( MainListBox.SelectedIndex == -1 )
                return;

            // Navigate to the new page
            NavigationService.Navigate( new Uri( "/DetailsPage.xaml?selectedItem=" + MainListBox.SelectedIndex, UriKind.Relative ) );

            // Reset selected index to -1 (no selection)
            MainListBox.SelectedIndex = -1;
        }

        // Load data for the ViewModel Items
        private void MainPage_Loaded( object sender, RoutedEventArgs e )
        {
            if ( !App.ViewModel.IsDataLoaded )
            {
                App.ViewModel.LoadData();
            }
        }
    }
}

With people populated we can set it as the DataContext and run the application; you’ll find that the Name and ID are displayed in the list on the MainPage. Here's how the pieces in the client fit together:

Complete source code available here
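As promised above, here is what hooking LoadCompleted might look like if you do want to react when the asynchronous load finishes, for example to hide a progress indicator or surface a failure. This is a sketch to illustrate the event and is not part of the original listing; the MessageBox error handling is purely illustrative. It slots into the MainPage constructor shown above:

// Inside the MainPage constructor, after creating the DataServiceCollection:
people = new DataServiceCollection<Person>( context );

// LoadCompleted fires once LoadAsync has finished (successfully or not).
people.LoadCompleted += ( sender, e ) =>
{
    if ( e.Error != null )
    {
        // Illustrative only - report the failure however your app prefers.
        MessageBox.Show( e.Error.Message );
        return;
    }
    // On success the collection is already populated and, because DataServiceCollection
    // raises change notifications, the bound ListBox updates without any extra work.
};

people.LoadAsync( new Uri( "http://localhost:49786/PersonTestDataService.svc/People" ) );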

    Read the article

  • Improving Partitioned Table Join Performance

    - by Paul White
The query optimizer does not always choose an optimal strategy when joining partitioned tables. This post looks at an example, showing how a manual rewrite of the query can almost double performance, while reducing the memory grant to almost nothing.

Test Data

The two tables in this example use a common partition scheme. The partition function uses 41 equal-size partitions:

CREATE PARTITION FUNCTION PFT (integer)
AS RANGE RIGHT
FOR VALUES
(
    125000, 250000, 375000, 500000,
    625000, 750000, 875000, 1000000,
    1125000, 1250000, 1375000, 1500000,
    1625000, 1750000, 1875000, 2000000,
    2125000, 2250000, 2375000, 2500000,
    2625000, 2750000, 2875000, 3000000,
    3125000, 3250000, 3375000, 3500000,
    3625000, 3750000, 3875000, 4000000,
    4125000, 4250000, 4375000, 4500000,
    4625000, 4750000, 4875000, 5000000
);
GO
CREATE PARTITION SCHEME PST
AS PARTITION PFT
ALL TO ([PRIMARY]);

The two tables are:

CREATE TABLE dbo.T1
(
    TID integer NOT NULL IDENTITY(0,1),
    Column1 integer NOT NULL,
    Padding binary(100) NOT NULL DEFAULT 0x,

    CONSTRAINT PK_T1
        PRIMARY KEY CLUSTERED (TID)
        ON PST (TID)
);

CREATE TABLE dbo.T2
(
    TID integer NOT NULL,
    Column1 integer NOT NULL,
    Padding binary(100) NOT NULL DEFAULT 0x,

    CONSTRAINT PK_T2
        PRIMARY KEY CLUSTERED (TID, Column1)
        ON PST (TID)
);

The next script loads 5 million rows into T1 with a pseudo-random value between 1 and 5 for Column1. The table is partitioned on the IDENTITY column TID:

INSERT dbo.T1 WITH (TABLOCKX)
    (Column1)
SELECT
    (ABS(CHECKSUM(NEWID())) % 5) + 1
FROM dbo.Numbers AS N
WHERE
    n BETWEEN 1 AND 5000000;

In case you don’t already have an auxiliary table of numbers lying around, here’s a script to create one with 10 million rows:

CREATE TABLE dbo.Numbers (n bigint PRIMARY KEY);

WITH
    L0 AS(SELECT 1 AS c UNION ALL SELECT 1),
    L1 AS(SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B),
    L2 AS(SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B),
    L3 AS(SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B),
    L4 AS(SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B),
    L5 AS(SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B),
    Nums AS(SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n FROM L5)
INSERT dbo.Numbers WITH (TABLOCKX)
SELECT TOP (10000000) n
FROM Nums
ORDER BY n
OPTION (MAXDOP 1);

Table T1 contains data like this:

Next we load data into table T2. The relationship between the two tables is that table 2 contains ‘n’ rows for each row in table 1, where ‘n’ is determined by the value in Column1 of table T1. There is nothing particularly special about the data or distribution, by the way.

INSERT dbo.T2 WITH (TABLOCKX)
    (TID, Column1)
SELECT
    T.TID,
    N.n
FROM dbo.T1 AS T
JOIN dbo.Numbers AS N
    ON N.n >= 1
    AND N.n <= T.Column1;

Table T2 ends up containing about 15 million rows:

The primary key for table T2 is a combination of TID and Column1. The data is partitioned according to the value in column TID alone.

Partition Distribution

The following query shows the number of rows in each partition of table T1:

SELECT
    PartitionID = CA1.P,
    NumRows = COUNT_BIG(*)
FROM dbo.T1 AS T
CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P)
GROUP BY CA1.P
ORDER BY CA1.P;

There are 40 partitions containing 125,000 rows (40 * 125k = 5m rows). The rightmost partition remains empty. The next query shows the distribution for table 2:

SELECT
    PartitionID = CA1.P,
    NumRows = COUNT_BIG(*)
FROM dbo.T2 AS T
CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P)
GROUP BY CA1.P
ORDER BY CA1.P;

There are roughly 375,000 rows in each partition (the rightmost partition is also empty):

Ok, that’s the test data done.
Test Query and Execution Plan The task is to count the rows resulting from joining tables 1 and 2 on the TID column: SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; The optimizer chooses a plan using parallel hash join, and partial aggregation: The Plan Explorer plan tree view shows accurate cardinality estimates and an even distribution of rows across threads (click to enlarge the image): With a warm data cache, the STATISTICS IO output shows that no physical I/O was needed, and all 41 partitions were touched: Running the query without actual execution plan or STATISTICS IO information for maximum performance, the query returns in around 2600ms. Execution Plan Analysis The first step toward improving on the execution plan produced by the query optimizer is to understand how it works, at least in outline. The two parallel Clustered Index Scans use multiple threads to read rows from tables T1 and T2. Parallel scan uses a demand-based scheme where threads are given page(s) to scan from the table as needed. This arrangement has certain important advantages, but does result in an unpredictable distribution of rows amongst threads. The point is that multiple threads cooperate to scan the whole table, but it is impossible to predict which rows end up on which threads. For correct results from the parallel hash join, the execution plan has to ensure that rows from T1 and T2 that might join are processed on the same thread. For example, if a row from T1 with join key value ‘1234’ is placed in thread 5’s hash table, the execution plan must guarantee that any rows from T2 that also have join key value ‘1234’ probe thread 5’s hash table for matches. The way this guarantee is enforced in this parallel hash join plan is by repartitioning rows to threads after each parallel scan. The two repartitioning exchanges route rows to threads using a hash function over the hash join keys. The two repartitioning exchanges use the same hash function so rows from T1 and T2 with the same join key must end up on the same hash join thread. Expensive Exchanges This business of repartitioning rows between threads can be very expensive, especially if a large number of rows is involved. The execution plan selected by the optimizer moves 5 million rows through one repartitioning exchange and around 15 million across the other. As a first step toward removing these exchanges, consider the execution plan selected by the optimizer if we join just one partition from each table, disallowing parallelism: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = 1 AND $PARTITION.PFT(T2.TID) = 1 OPTION (MAXDOP 1); The optimizer has chosen a (one-to-many) merge join instead of a hash join. The single-partition query completes in around 100ms. If everything scaled linearly, we would expect that extending this strategy to all 40 populated partitions would result in an execution time around 4000ms. Using parallelism could reduce that further, perhaps to be competitive with the parallel hash join chosen by the optimizer. This raises a question. If the most efficient way to join one partition from each of the tables is to use a merge join, why does the optimizer not choose a merge join for the full query? 
Forcing a Merge Join

Let’s force the optimizer to use a merge join on the test query using a hint:

SELECT COUNT_BIG(*)
FROM dbo.T1 AS T1
JOIN dbo.T2 AS T2
    ON T2.TID = T1.TID
OPTION (MERGE JOIN);

This is the execution plan selected by the optimizer:

This plan results in the same number of logical reads reported previously, but instead of 2600ms the query takes 5000ms. The natural explanation for this drop in performance is that the merge join plan is only using a single thread, whereas the parallel hash join plan could use multiple threads.

Parallel Merge Join

We can get a parallel merge join plan using the same query hint as before, and adding trace flag 8649:

SELECT COUNT_BIG(*)
FROM dbo.T1 AS T1
JOIN dbo.T2 AS T2
    ON T2.TID = T1.TID
OPTION (MERGE JOIN, QUERYTRACEON 8649);

The execution plan is:

This looks promising. It uses a similar strategy to distribute work across threads as seen for the parallel hash join. In practice though, performance is disappointing. On a typical run, the parallel merge plan runs for around 8400ms; slower than the single-threaded merge join plan (5000ms) and much worse than the 2600ms for the parallel hash join. We seem to be going backwards!

The logical reads for the parallel merge are still exactly the same as before, with no physical IOs. The cardinality estimates and thread distribution are also still very good (click to enlarge):

A big clue to the reason for the poor performance is shown in the wait statistics (captured by Plan Explorer Pro):

CXPACKET waits require careful interpretation, and are most often benign, but in this case excessive waiting occurs at the repartitioning exchanges. Unlike the parallel hash join, the repartitioning exchanges in this plan are order-preserving ‘merging’ exchanges (because merge join requires ordered inputs):

Parallelism works best when threads can just grab any available unit of work and get on with processing it. Preserving order introduces inter-thread dependencies that can easily lead to significant waits occurring. In extreme cases, these dependencies can result in an intra-query deadlock, though the details of that will have to wait for another time to explore in detail.

The potential for waits and deadlocks leads the query optimizer to cost parallel merge join relatively highly, especially as the degree of parallelism (DOP) increases. This high costing resulted in the optimizer choosing a serial merge join rather than parallel in this case. The test results certainly confirm its reasoning.

Collocated Joins

In SQL Server 2008 and later, the optimizer has another available strategy when joining tables that share a common partition scheme. This strategy is a collocated join, also known as a per-partition join. It can be applied in both serial and parallel execution plans, though it is limited to 2-way joins in the current optimizer. Whether the optimizer chooses a collocated join or not depends on cost estimation. The primary benefits of a collocated join are that it eliminates an exchange and requires less memory, as we will see next.

Costing and Plan Selection

The query optimizer did consider a collocated join for our original query, but it was rejected on cost grounds. The parallel hash join with repartitioning exchanges appeared to be a cheaper option. There is no query hint to force a collocated join, so we have to mess with the costing framework to produce one for our test query.
Pretending that IOs cost 50 times more than usual is enough to convince the optimizer to use collocated join with our test query:

-- Pretend IOs are 50x cost temporarily
DBCC SETIOWEIGHT(50);

-- Co-located hash join
SELECT COUNT_BIG(*)
FROM dbo.T1 AS T1
JOIN dbo.T2 AS T2
    ON T2.TID = T1.TID
OPTION (RECOMPILE);

-- Reset IO costing
DBCC SETIOWEIGHT(1);

Collocated Join Plan

The estimated execution plan for the collocated join is:

The Constant Scan contains one row for each partition of the shared partitioning scheme, from 1 to 41. The hash repartitioning exchanges seen previously are replaced by a single Distribute Streams exchange using Demand partitioning. Demand partitioning means that the next partition id is given to the next parallel thread that asks for one. My test machine has eight logical processors, and all are available for SQL Server to use. As a result, there are eight threads in the single parallel branch in this plan, each processing one partition from each table at a time. Once a thread finishes processing a partition, it grabs a new partition number from the Distribute Streams exchange…and so on until all partitions have been processed.

It is important to understand that the parallel scans in this plan are different from the parallel hash join plan. Although the scans have the same parallelism icon, tables T1 and T2 are not being co-operatively scanned by multiple threads in the same way. Each thread reads a single partition of T1 and performs a hash match join with the same partition from table T2. The properties of the two Clustered Index Scans show a Seek Predicate (unusual for a scan!) limiting the rows to a single partition:

The crucial point is that the join between T1 and T2 is on TID, and TID is the partitioning column for both tables. A thread that processes partition ‘n’ is guaranteed to see all rows that can possibly join on TID for that partition. In addition, no other thread will see rows from that partition, so this removes the need for repartitioning exchanges.

CPU and Memory Efficiency Improvements

The collocated join has removed two expensive repartitioning exchanges and added a single exchange processing 41 rows (one for each partition id). Remember, the parallel hash join plan exchanges had to process 5 million and 15 million rows. The amount of processor time spent on exchanges will be much lower in the collocated join plan.

In addition, the collocated join plan has a maximum of 8 threads processing single partitions at any one time. The 41 partitions will all be processed eventually, but a new partition is not started until a thread asks for it. Threads can reuse hash table memory for the new partition. The parallel hash join plan also had 8 hash tables, but with all 5,000,000 build rows loaded at the same time. The collocated plan needs memory for only 8 * 125,000 = 1,000,000 rows at any one time.

Collocated Hash Join Performance

The collocated join plan has disappointing performance in this case. The query runs for around 25,300ms despite the same IO statistics as usual. This is much the worst result so far, so what went wrong? It turns out that cardinality estimation for the single partition scans of table T1 is slightly low. The properties of the Clustered Index Scan of T1 (graphic immediately above) show the estimation was for 121,951 rows.
This is a small shortfall compared with the 125,000 rows actually encountered, but it was enough to cause the hash join to spill to physical tempdb: A level 1 spill doesn’t sound too bad, until you realize that the spill to tempdb probably occurs for each of the 41 partitions. As a side note, the cardinality estimation error is a little surprising because the system tables accurately show there are 125,000 rows in every partition of T1. Unfortunately, the optimizer uses regular column and index statistics to derive cardinality estimates here rather than system table information (e.g. sys.partitions). Collocated Merge Join We will never know how well the collocated parallel hash join plan might have worked without the cardinality estimation error (and the resulting 41 spills to tempdb) but we do know: Merge join does not require a memory grant; and Merge join was the optimizer’s preferred join option for a single partition join Putting this all together, what we would really like to see is the same collocated join strategy, but using merge join instead of hash join. Unfortunately, the current query optimizer cannot produce a collocated merge join; it only knows how to do collocated hash join. So where does this leave us? CROSS APPLY sys.partitions We can try to write our own collocated join query. We can use sys.partitions to find the partition numbers, and CROSS APPLY to get a count per partition, with a final step to sum the partial counts. The following query implements this idea: SELECT row_count = SUM(Subtotals.cnt) FROM ( -- Partition numbers SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1 ) AS P CROSS APPLY ( -- Count per collocated join SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals; The estimated plan is: The cardinality estimates aren’t all that good here, especially the estimate for the scan of the system table underlying the sys.partitions view. Nevertheless, the plan shape is heading toward where we would like to be. Each partition number from the system table results in a per-partition scan of T1 and T2, a one-to-many Merge Join, and a Stream Aggregate to compute the partial counts. The final Stream Aggregate just sums the partial counts. Execution time for this query is around 3,500ms, with the same IO statistics as always. This compares favourably with 5,000ms for the serial plan produced by the optimizer with the OPTION (MERGE JOIN) hint. This is another case of the sum of the parts being less than the whole – summing 41 partial counts from 41 single-partition merge joins is faster than a single merge join and count over all partitions. Even so, this single-threaded collocated merge join is not as quick as the original parallel hash join plan, which executed in 2,600ms. On the positive side, our collocated merge join uses only one logical processor and requires no memory grant. The parallel hash join plan used 16 threads and reserved 569 MB of memory:   Using a Temporary Table Our collocated merge join plan should benefit from parallelism. The reason parallelism is not being used is that the query references a system table. 
We can work around that by writing the partition numbers to a temporary table (or table variable):

SET STATISTICS IO ON;
DECLARE @s datetime2 = SYSUTCDATETIME();

CREATE TABLE #P ( partition_number integer PRIMARY KEY);

INSERT #P (partition_number)
SELECT p.partition_number
FROM sys.partitions AS p
WHERE
    p.[object_id] = OBJECT_ID(N'T1', N'U')
    AND p.index_id = 1;

SELECT row_count = SUM(Subtotals.cnt)
FROM #P AS p
CROSS APPLY
(
    SELECT cnt = COUNT_BIG(*)
    FROM dbo.T1 AS T1
    JOIN dbo.T2 AS T2
        ON T2.TID = T1.TID
    WHERE
        $PARTITION.PFT(T1.TID) = p.partition_number
        AND $PARTITION.PFT(T2.TID) = p.partition_number
) AS SubTotals;

DROP TABLE #P;

SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME());
SET STATISTICS IO OFF;

Using the temporary table adds a few logical reads, but the overall execution time is still around 3500ms, indistinguishable from the same query without the temporary table. The problem is that the query optimizer still doesn’t choose a parallel plan for this query, though the removal of the system table reference means that it could if it chose to:

In fact the optimizer did enter the parallel plan phase of query optimization (running search 1 for a second time):

Unfortunately, the parallel plan found seemed to be more expensive than the serial plan. This is a crazy result, caused by the optimizer’s cost model not reducing operator CPU costs on the inner side of a nested loops join. Don’t get me started on that, we’ll be here all night. In this plan, everything expensive happens on the inner side of a nested loops join. Without a CPU cost reduction to compensate for the added cost of exchange operators, candidate parallel plans always look more expensive to the optimizer than the equivalent serial plan.

Parallel Collocated Merge Join

We can produce the desired parallel plan using trace flag 8649 again:

SELECT row_count = SUM(Subtotals.cnt)
FROM #P AS p
CROSS APPLY
(
    SELECT cnt = COUNT_BIG(*)
    FROM dbo.T1 AS T1
    JOIN dbo.T2 AS T2
        ON T2.TID = T1.TID
    WHERE
        $PARTITION.PFT(T1.TID) = p.partition_number
        AND $PARTITION.PFT(T2.TID) = p.partition_number
) AS SubTotals
OPTION (QUERYTRACEON 8649);

The actual execution plan is:

One difference between this plan and the collocated hash join plan is that a Repartition Streams exchange operator is used instead of Distribute Streams. The effect is similar, though not quite identical. The Repartition uses round-robin partitioning, meaning the next partition id is pushed to the next thread in sequence. The Distribute Streams exchange seen earlier used Demand partitioning, meaning the next partition id is pulled across the exchange by the next thread that is ready for more work. There are subtle performance implications for each partitioning option, but going into that would again take us too far off the main point of this post.

Performance

The important thing is the performance of this parallel collocated merge join – just 1350ms on a typical run. The list below shows all the alternatives from this post (all timings include creation, population, and deletion of the temporary table where appropriate) from quickest to slowest:

Collocated parallel merge join: 1350ms
Parallel hash join: 2600ms
Collocated serial merge join: 3500ms
Serial merge join: 5000ms
Parallel merge join: 8400ms
Collocated parallel hash join: 25,300ms (hash spill per partition)

The parallel collocated merge join requires no memory grant (aside from a paltry 1.2MB used for exchange buffers).
This plan uses 16 threads at DOP 8; but 8 of those are (rather pointlessly) allocated to the parallel scan of the temporary table. These are minor concerns, but it turns out there is a way to address them if it bothers you. Parallel Collocated Merge Join with Demand Partitioning This final tweak replaces the temporary table with a hard-coded list of partition ids (dynamic SQL could be used to generate this query from sys.partitions): SELECT row_count = SUM(Subtotals.cnt) FROM ( VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10), (11),(12),(13),(14),(15),(16),(17),(18),(19),(20), (21),(22),(23),(24),(25),(26),(27),(28),(29),(30), (31),(32),(33),(34),(35),(36),(37),(38),(39),(40),(41) ) AS P (partition_number) CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: The parallel collocated hash join plan is reproduced below for comparison: The manual rewrite has another advantage that has not been mentioned so far: the partial counts (per partition) can be computed earlier than the partial counts (per thread) in the optimizer’s collocated join plan. The earlier aggregation is performed by the extra Stream Aggregate under the nested loops join. The performance of the parallel collocated merge join is unchanged at around 1350ms. Final Words It is a shame that the current query optimizer does not consider a collocated merge join (Connect item closed as Won’t Fix). The example used in this post showed an improvement in execution time from 2600ms to 1350ms using a modestly-sized data set and limited parallelism. In addition, the memory requirement for the query was almost completely eliminated  – down from 569MB to 1.2MB. The problem with the parallel hash join selected by the optimizer is that it attempts to process the full data set all at once (albeit using eight threads). It requires a large memory grant to hold all 5 million rows from table T1 across the eight hash tables, and does not take advantage of the divide-and-conquer opportunity offered by the common partitioning. The great thing about the collocated join strategies is that each parallel thread works on a single partition from both tables, reading rows, performing the join, and computing a per-partition subtotal, before moving on to a new partition. From a thread’s point of view… If you have trouble visualizing what is happening from just looking at the parallel collocated merge join execution plan, let’s look at it again, but from the point of view of just one thread operating between the two Parallelism (exchange) operators. Our thread picks up a single partition id from the Distribute Streams exchange, and starts a merge join using ordered rows from partition 1 of table T1 and partition 1 of table T2. By definition, this is all happening on a single thread. As rows join, they are added to a (per-partition) count in the Stream Aggregate immediately above the Merge Join. Eventually, either T1 (partition 1) or T2 (partition 1) runs out of rows and the merge join stops. The per-partition count from the aggregate passes on through the Nested Loops join to another Stream Aggregate, which is maintaining a per-thread subtotal. Our same thread now picks up a new partition id from the exchange (say it gets id 9 this time). 
The count in the per-partition aggregate is reset to zero, and the processing of partition 9 of both tables proceeds just as it did for partition 1, and on the same thread. Each thread picks up a single partition id and processes all the data for that partition, completely independently from other threads working on other partitions. One thread might eventually process partitions (1, 9, 17, 25, 33, 41) while another is concurrently processing partitions (2, 10, 18, 26, 34) and so on for the other six threads at DOP 8. The point is that all 8 threads can execute independently and concurrently, continuing to process new partitions until the wider job (of which the thread has no knowledge!) is done. This divide-and-conquer technique can be much more efficient than simply splitting the entire workload across eight threads all at once. Related Reading Understanding and Using Parallelism in SQL Server Parallel Execution Plans Suck © 2013 Paul White – All Rights Reserved Twitter: @SQL_Kiwi

    Read the article

  • Scrum in 5 Minutes

    - by Stephen.Walther
    The goal of this blog entry is to explain the basic concepts of Scrum in less than five minutes. You learn how Scrum can help a team of developers to successfully complete a complex software project. Product Backlog and the Product Owner Imagine that you are part of a team which needs to create a new website – for example, an e-commerce website. You have an overwhelming amount of work to do. You need to build (or possibly buy) a shopping cart, install an SSL certificate, create a product catalog, create a Facebook page, and at least a hundred other things that you have not thought of yet. According to Scrum, the first thing you should do is create a list. Place the highest priority items at the top of the list and the lower priority items lower in the list. For example, creating the shopping cart and buying the domain name might be high priority items and creating a Facebook page might be a lower priority item. In Scrum, this list is called the Product Backlog. How do you prioritize the items in the Product Backlog? Different stakeholders in the project might have different priorities. Gary, your division VP, thinks that it is crucial that the e-commerce site has a mobile app. Sally, your direct manager, thinks taking advantage of new HTML5 features is much more important. Multiple people are pulling you in different directions. According to Scrum, it is important that you always designate one person, and only one person, as the Product Owner. The Product Owner is the person who decides what items should be added to the Product Backlog and the priority of the items in the Product Backlog. The Product Owner could be the customer who is paying the bills, the project manager who is responsible for delivering the project, or a customer representative. The critical point is that the Product Owner must always be a single person and that single person has absolute authority over the Product Backlog. Sprints and the Sprint Backlog So now the developer team has a prioritized list of items and they can start work. The team starts implementing the first item in the Backlog — the shopping cart — and the team is making good progress. Unfortunately, however, half-way through the work of implementing the shopping cart, the Product Owner changes his mind. The Product Owner decides that it is much more important to create the product catalog before the shopping cart. With some frustration, the team switches their developmental efforts to focus on implementing the product catalog. However, part way through completing this work, once again the Product Owner changes his mind about the highest priority item. Getting work done when priorities are constantly shifting is frustrating for the developer team and it results in lower productivity. At the same time, however, the Product Owner needs to have absolute authority over the priority of the items which need to get done. Scrum solves this conflict with the concept of Sprints. In Scrum, a developer team works in Sprints. At the beginning of a Sprint the developers and the Product Owner agree on the items from the backlog which they will complete during the Sprint. This subset of items from the Product Backlog becomes the Sprint Backlog. During the Sprint, the Product Owner is not allowed to change the items in the Sprint Backlog. In other words, the Product Owner cannot shift priorities on the developer team during the Sprint. Different teams use Sprints of different lengths such as one month Sprints, two-week Sprints, and one week Sprints. 
For high-stress, time critical projects, teams typically choose shorter sprints such as one week sprints. For more mature projects, longer one month sprints might be more appropriate. A team can pick whatever Sprint length makes sense for them just as long as the team is consistent. You should pick a Sprint length and stick with it.

Daily Scrum

During a Sprint, the developer team needs to have meetings to coordinate their work on completing the items in the Sprint Backlog. For example, the team needs to discuss who is working on what and whether any blocking issues have been discovered. Developers hate meetings (well, sane developers hate meetings). Meetings take developers away from their work of actually implementing stuff as opposed to talking about implementing stuff. However, a developer team which never has meetings and never coordinates their work also has problems. For example, Fred might get stuck on a programming problem for days and never reach out for help even though Tom (who sits in the cubicle next to him) has already solved the very same problem. Or, both Ted and Fred might have started working on the same item from the Sprint Backlog at the same time.

In Scrum, these conflicting needs – limiting meetings but enabling team coordination – are resolved with the idea of the Daily Scrum. The Daily Scrum is a meeting for coordinating the work of the developer team which happens once a day. To keep the meeting short, each developer answers only the following three questions:

1. What have you done since yesterday?
2. What do you plan to do today?
3. Any impediments in your way?

During the Daily Scrum, developers are not allowed to talk about issues with their cat, do demos of their latest work, or tell heroic stories of programming problems overcome. The meeting must be kept short — typically about 15 minutes. Issues which come up during the Daily Scrum should be discussed in separate meetings which do not involve the whole developer team.

Stories and Tasks

Items in the Product or Sprint Backlog – such as building a shopping cart or creating a Facebook page – are often referred to as User Stories or Stories. The Stories are created by the Product Owner and should represent some business need. Unlike the Product Owner, the developer team needs to think about how a Story should be implemented. At the beginning of a Sprint, the developer team takes the Stories from the Sprint Backlog and breaks the stories into tasks. For example, the developer team might take the Create a Shopping Cart story and break it into the following tasks:

· Enable users to add and remove items from shopping cart
· Persist the shopping cart to database between visits
· Redirect user to checkout page when Checkout button is clicked

During the Daily Scrum, members of the developer team volunteer to complete the tasks required to implement the next Story in the Sprint Backlog. When a developer talks about what he did yesterday or plans to do tomorrow then the developer should be referring to a task. Stories are owned by the Product Owner and a story is all about business value. In contrast, the tasks are owned by the developer team and a task is all about implementation details. A story might take several days or weeks to complete. A task is something which a developer can complete in less than a day. Some teams get lazy about breaking stories into tasks.
Neglecting to break stories into tasks can lead to “Never Ending Stories” If you don’t break a story into tasks, then you can’t know how much of a story has actually been completed because you don’t have a clear idea about the implementation steps required to complete the story. Scrumboard During the Daily Scrum, the developer team uses a Scrumboard to coordinate their work. A Scrumboard contains a list of the stories for the current Sprint, the tasks associated with each Story, and the state of each task. The developer team uses the Scrumboard so everyone on the team can see, at a glance, what everyone is working on. As a developer works on a task, the task moves from state to state and the state of the task is updated on the Scrumboard. Common task states are ToDo, In Progress, and Done. Some teams include additional task states such as Needs Review or Needs Testing. Some teams use a physical Scrumboard. In that case, you use index cards to represent the stories and the tasks and you tack the index cards onto a physical board. Using a physical Scrumboard has several disadvantages. A physical Scrumboard does not work well with a distributed team – for example, it is hard to share the same physical Scrumboard between Boston and Seattle. Also, generating reports from a physical Scrumboard is more difficult than generating reports from an online Scrumboard. Estimating Stories and Tasks Stakeholders in a project, the people investing in a project, need to have an idea of how a project is progressing and when the project will be completed. For example, if you are investing in creating an e-commerce site, you need to know when the site can be launched. It is not enough to just say that “the project will be done when it is done” because the stakeholders almost certainly have a limited budget to devote to the project. The people investing in the project cannot determine the business value of the project unless they can have an estimate of how long it will take to complete the project. Developers hate to give estimates. The reason that developers hate to give estimates is that the estimates are almost always completely made up. For example, you really don’t know how long it takes to build a shopping cart until you finish building a shopping cart, and at that point, the estimate is no longer useful. The problem is that writing code is much more like Finding a Cure for Cancer than Building a Brick Wall. Building a brick wall is very straightforward. After you learn how to add one brick to a wall, you understand everything that is involved in adding a brick to a wall. There is no additional research required and no surprises. If, on the other hand, I assembled a team of scientists and asked them to find a cure for cancer, and estimate exactly how long it will take, they would have no idea. The problem is that there are too many unknowns. I don’t know how to cure cancer, I need to do a lot of research here, so I cannot even begin to estimate how long it will take. So developers hate to provide estimates, but the Product Owner and other product stakeholders, have a legitimate need for estimates. Scrum resolves this conflict by using the idea of Story Points. Different teams use different units to represent Story Points. For example, some teams use shirt sizes such as Small, Medium, Large, and X-Large. Some teams prefer to use Coffee Cup sizes such as Tall, Short, and Grande. Finally, some teams like to use numbers from the Fibonacci series. 
These alternative units are converted into a Story Point value. Regardless of the type of unit which you use to represent Story Points, the goal is the same. Instead of attempting to estimate a Story in hours (which is doomed to failure), you use a much less fine-grained measure of work. A developer team is much more likely to be able to estimate that a Story is Small or X-Large than the exact number of hours required to complete the story. So you can think of Story Points as a compromise between the needs of the Product Owner and the developer team. When a Sprint starts, the developer team devotes more time to thinking about the Stories in a Sprint and the developer team breaks the Stories into Tasks. In Scrum, you estimate the work required to complete a Story by using Story Points and you estimate the work required to complete a task by using hours. The difference between Stories and Tasks is that you don’t create a task until you are just about ready to start working on a task. A task is something that you should be able to create within a day, so you have a much better chance of providing an accurate estimate of the work required to complete a task than a story. Burndown Charts In Scrum, you use Burndown charts to represent the remaining work on a project. You use Release Burndown charts to represent the overall remaining work for a project and you use Sprint Burndown charts to represent the overall remaining work for a particular Sprint. You create a Release Burndown chart by calculating the remaining number of uncompleted Story Points for the entire Product Backlog every day. The vertical axis represents Story Points and the horizontal axis represents time. A Sprint Burndown chart is similar to a Release Burndown chart, but it focuses on the remaining work for a particular Sprint. There are two different types of Sprint Burndown charts. You can either represent the remaining work in a Sprint with Story Points or with task hours (the following image, taken from Wikipedia, uses hours). When each Product Backlog Story is completed, the Release Burndown chart slopes down. When each Story or task is completed, the Sprint Burndown chart slopes down. Burndown charts typically do not always slope down over time. As new work is added to the Product Backlog, the Release Burndown chart slopes up. If new tasks are discovered during a Sprint, the Sprint Burndown chart will also slope up. The purpose of a Burndown chart is to give you a way to track team progress over time. If, halfway through a Sprint, the Sprint Burndown chart is still climbing a hill then you know that you are in trouble. Team Velocity Stakeholders in a project always want more work done faster. For example, the Product Owner for the e-commerce site wants the website to launch before tomorrow. Developers tend to be overly optimistic. Rarely do developers acknowledge the physical limitations of reality. So Project stakeholders and the developer team often collude to delude themselves about how much work can be done and how quickly. Too many software projects begin in a state of optimism and end in frustration as deadlines zoom by. In Scrum, this problem is overcome by calculating a number called the Team Velocity. The Team Velocity is a measure of the average number of Story Points which a team has completed in previous Sprints. 
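Because Team Velocity is nothing more than simple arithmetic over past Sprints, a tiny code sketch may make the idea concrete. This example is not from the article and its numbers are invented; it just averages the Story Points completed in previous Sprints and compares that with a proposed Sprint commitment:

using System;
using System.Linq;

class TeamVelocityExample
{
    static void Main()
    {
        // Invented history: Story Points completed in the last three Sprints.
        int[] completedPerSprint = { 21, 18, 24 };

        // Team Velocity = average Story Points completed per Sprint (21 here).
        double velocity = completedPerSprint.Average();

        // Invented Story Point estimates for the Stories proposed for the next Sprint.
        int[] proposedStories = { 8, 5, 5, 8 };
        int proposedPoints = proposedStories.Sum();

        Console.WriteLine( "Velocity: {0}, proposed commitment: {1}", velocity, proposedPoints );

        // Committing to more points than the historical velocity suggests overcommitment.
        if ( proposedPoints > velocity )
            Console.WriteLine( "Warning: this Sprint is probably overcommitted." );
    }
}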
Knowing the Team Velocity is important during the Sprint Planning meeting when the Product Owner and the developer team work together to determine the number of stories which can be completed in the next Sprint. If you know the Team Velocity then you can avoid committing to do more work than the team has been able to accomplish in the past, and your team is much more likely to complete all of the work required for the next Sprint.

Scrum Master

There are three roles in Scrum: the Product Owner, the developer team, and the Scrum Master. I’ve already discussed the Product Owner. The Product Owner is the one and only person who maintains the Product Backlog and prioritizes the stories. I’ve also described the role of the developer team. The members of the developer team do the work of implementing the stories by breaking the stories into tasks. The final role, which I have not discussed, is the role of the Scrum Master.

The Scrum Master is responsible for ensuring that the team is following the Scrum process. For example, the Scrum Master is responsible for making sure that there is a Daily Scrum meeting and that everyone answers the standard three questions. The Scrum Master is also responsible for removing (non-technical) impediments which the team might encounter. For example, if the team cannot start work until everyone installs the latest version of Microsoft Visual Studio then the Scrum Master has the responsibility of working with management to get the latest version of Visual Studio as quickly as possible. The Scrum Master can be a member of the developer team. Furthermore, different people can take on the role of the Scrum Master over time. The Scrum Master, however, cannot be the same person as the Product Owner.

Using SonicAgile

SonicAgile (SonicAgile.com) is an online tool which you can use to manage your projects using Scrum. You can use the SonicAgile Product Backlog to create a prioritized list of stories. You can estimate the size of the Stories using different Story Point units such as Shirt Sizes and Coffee Cup sizes. You can use SonicAgile during the Sprint Planning meeting to select the Stories that you want to complete during a particular Sprint. You can configure Sprints to be any length of time. SonicAgile calculates Team Velocity automatically and displays a warning when you add too many stories to a Sprint. In other words, it warns you when it thinks you are overcommitting in a Sprint.

SonicAgile also includes a Scrumboard which displays the list of Stories selected for a Sprint and the tasks associated with each story. You can drag tasks from one task state to another. Finally, SonicAgile enables you to generate Release Burndown and Sprint Burndown charts. You can use these charts to view the progress of your team. To learn more about SonicAgile, visit SonicAgile.com.
Finally, I described the crucial role of the Scrum Master – the person who is responsible for ensuring that the rules of Scrum are being followed. My goal was not to describe all of the concepts of Scrum. This post was intended to be an introductory overview. For a comprehensive explanation of Scrum, I recommend reading Ken Schwaber’s book Agile Project Management with Scrum: http://www.amazon.com/Agile-Project-Management-Microsoft-Professional/dp/073561993X/ref=la_B001H6ODMC_1_1?ie=UTF8&qid=1345224000&sr=1-1

    Read the article

  • What Every Developer Should Know About MSI Components

    - by Alois Kraus
Hopefully nothing. But if you have to do more than simple XCopy deployment and you need to support updates, upgrades and perhaps side by side scenarios there is no way around MSI. You can create Msi files with a Visual Studio Setup project which is severely limited or you can use the Windows Installer Toolset. I cannot talk about WIX with my German colleagues because WIX has a very special meaning. It is funny to always use the long name when I talk about deployment possibilities. Alternatively you can buy commercial tools which help you to author Msi files but I am not sure how good they are. Given enough pain with existing solutions you can also learn the MSI Apis and create your own packaging solution. If I were you I would use either a commercial visual tool when you do easy deployments or use the free Windows Installer Toolset. Once you know the WIX schema you can create well formed wix xml files easily with any editor. Then you can “compile” your Msi package from the wxs files.

Recently I had the “pleasure” to get my hands dirty with C++ (again) and the MSI technology. Installation is a complex topic but after several months of digging into arcane MSI issues I can safely say that there should exist an easier way to install and update files than we have today. I am not alone with this statement as John Robbins (creator of the cool tool Paraffin) states: “.. It's a brittle and scary API in Windows …”. To help other people struggling with installation issues I present you the advice I (and others) found useful and what will happen if you ignore this advice.

What is a MSI file?

A MSI file is basically a database with tables which reference each other to control how your un/installation should work. The basic idea is that you declare via these tables what you want to install and MSI controls the how to get your stuff onto or off your machine. Your “stuff” consists usually of files, registry keys, shortcuts and environment variables. Therefore the most important tables are File, Registry, Environment and Shortcut table which define what will be un/installed. The key to master MSI is that every resource (file, registry key, …) is associated with a MSI component. The actual payload consists of compressed files in the CAB format which can either be embedded into the MSI file or reside beside the MSI file or in a subdirectory below it.

To examine MSI files you need Orca, a free MSI editor provided by MS. There is also another free editor called Super Orca which does support diffs between MSI and it does not lock the MSI files. But since Orca comes with a shell extension I tend to use only Orca because it is so easy to right click on a MSI file and open it with this tool.

How Do I Install It?

Double click it. This does work for fresh installations as well as major upgrades. Updates need to be installed via the command line via

msiexec /i <msi> REINSTALL=ALL REINSTALLMODE=vomus

This tells the installer to reinstall all already installed features (new features will NOT be installed). The reinstallmode letters do force an overwrite of the old cached package in the %WINDIR%\Installer folder. All files, shortcuts and registry keys are redeployed if they are missing or need to be replaced with a newer version. When things did go really wrong and you want to overwrite everything unconditionally use REINSTALLMODE=vamus.

How To Enable MSI Logs?

You can download a MSI from Microsoft which installs some registry keys to enable full MSI logging.
The log files can be found in your %TEMP% folder and are called MSIxxxx.log. Alternatively you can add to your msiexec command line the option

msiexec …. /l*vx <LogFileName>

Personally I find it rather strange that * does not mean full logging. To really get all logs I need to add v and x which is documented in the msiexec help but I still find this behavior unintuitive.

What are MSI components?

The whole MSI logic is bound to the concept of MSI components. Nearly every msi table has a Component column which binds an installable resource to a component. Below are the screenshots of the FeatureComponents and Component table of an example MSI. The Feature table basically defines the feature hierarchy. To find out what belongs to a feature you need to look at the FeatureComponents table where for each feature the components are listed which will be installed when a feature is installed.

The MSI components are defined in the Component table. This table has as first column the component name and as second column the component id which is a GUID. All resources you want to install belong to a MSI component. Therefore nearly all MSI tables have a Component_ column which contains the component name. If you look e.g. at the File table you see that every file belongs to a component which is true for all other tables which install resources. The component table is the glue between all other tables which contain the resources you want to install.

So far so easy. Why is MSI then so complex? Most MSI problems arise from the fact that you did violate a MSI component rule in one or the other way. When you install a feature the reference count for all components belonging to this feature will increase by one. If your component is installed by more than one feature it will get a higher refcount. When you uninstall a feature its refcount will drop by one. Interesting things happen if the component reference count reaches zero: Then all associated resources will be deleted. That looks like a reasonable thing and it is. What makes it complex are the strange component rules you have to follow. Below are some important component rules from the Tao of the Windows Installer …

Rule 16: Follow Component Rules

Components are a very important part of the Installer technology. They are the means whereby the Installer manages the resources that make up your application. The SDK provides the following guidelines for creating components in your package:

Never create two components that install a resource under the same name and target location. If a resource must be duplicated in multiple components, change its name or target location in each component. This rule should be applied across applications, products, product versions, and companies.

Two components must not have the same key path file. This is a consequence of the previous rule. The key path value points to a particular file or folder belonging to the component that the installer uses to detect the component. If two components had the same key path file, the installer would be unable to distinguish which component is installed. Two components however may share a key path folder.

Do not create a version of a component that is incompatible with all previous versions of the component. This rule should be applied across applications, products, product versions, and companies.

Do not create components containing resources that will need to be installed into more than one directory on the user’s system.
The installer installs all of the resources in a component into the same directory. It is not possible to install some resources into subdirectories.

Do not include more than one COM server per component. If a component contains a COM server, this must be the key path for the component.

Do not specify more than one file per component as a target for the Start menu or a Desktop shortcut.

… And these rules do not even talk about component ids, update packages and upgrades which you need to understand as well.

Let’s suppose you install two MSIs (MSI1 and MSI2) which have the same ComponentId but different component names. Both do install the same file. What will happen when you uninstall MSI2? Hm, the file should stay there. But the component names are different. Yes and yes. But MSI does not use the component name as the key for the refcount. Instead the ComponentId column of the Component table, which contains a GUID, is used as the identifier under which the refcount is stored. The components Comp1 and Comp2 are identical from the MSI perspective. After the installation of both MSIs the Component with the Id {100000….} has a refcount of two. After uninstallation of one MSI there is still a refcount of one, which drops to zero just as expected when we uninstall the last msi. Then the file which was the same for both MSIs is deleted.

You should remember that MSI keeps a refcount across MSIs for components with the same component id. MSI does manage components, not the resources you did install. The resources associated with a component are then and only then deleted when the refcount of the component reaches zero.

The dependencies between features, components and resources can be described as relations. m, k are numbers >= 1, n can be 0. Inside a MSI the following relations are valid:

Feature     1 –> n Components
Component   1 –> m Features
Component   1 –> k Resources

These relations express that one feature can install several components and features can share components between them. Every (meaningful) component will install at least one resource which means that its name (primary key to stay in database speak) does occur in some other table in the Component column as a value which installs some resource. Let’s make it clear with an example. We want to install, with the feature MainFeature, some files, a registry key and a shortcut. We can then create components Comp1..3 which are referenced by the resources defined in the corresponding tables.

Feature     | Component | Registry     | File      | Shortcuts
MainFeature | Comp1     | RegistryKey1 |           |
MainFeature | Comp2     |              | File.txt  |
MainFeature | Comp3     |              | File2.txt | Shortcut to File2.txt

It is illegal that the same resource is part of more than one component since this would break the refcount mechanism. Let’s illustrate this:

Feature  | ComponentId | Resource  | Reference Count
Feature1 | {1000-…}    | File1.txt | 1
Feature2 | {2000-…}    | File1.txt | 1

The installation part works well but what happens when you uninstall Feature2? Component {2000-…} gets a refcount of zero, at which point MSI deletes all resources belonging to this component. In this case File1.txt will be deleted. But Feature1 still has another component {1000-…} with a refcount of one, which means that the file was deleted too early. You just have ruined your installation. To fix it you then need to click on the Repair button under Add/Remove Programs to let MSI reinstall any missing registry keys, files or shortcuts.

The vigilant reader might have noticed that there is more in the Component table.
Beside its name and GUID it has also an installation directory, attributes and a KeyPath. The KeyPath is a reference to a file or registry key which is used to detect if the component is already installed (a small diagnostic sketch using the raw MSI API appears at the end of this article). This becomes important when you repair or uninstall a component. To find out if the component is already installed MSI checks if the registry key or file referenced by the KeyPath property does exist. When it does not exist it assumes that it was either already uninstalled (can lead to problems during uninstall) or that it is already installed and all is fine.

Why is this detail so important? Let’s put all files into one component. The KeyPath should be then one of the files of your component to check if it was installed or not. When your installation becomes corrupt because a file was deleted you cannot repair it with the Repair button under Add/Remove Programs because MSI checks the component integrity via the Resource referenced by its KeyPath. As long as you did not delete the KeyPath file MSI thinks all resources with your component are installed and never executes any repair action.

You get even more trouble when you try to remove files during an upgrade (you cannot remove files during an update) from your super component which contains all files. The only way out and therefore best practice is to assign an extra component for every resource you want to install. This ensures painless updatability and repairs and you have much less effort to remove specific files during an upgrade. In effect you get this best practice relation:

Feature     1 –> n Components
Component   1 –> 1 Resources

MSI Component Rules

Rule 1 – One component per resource

Every resource you want to install (file, registry key, value, environment value, shortcut, directory, …) must get its own component which does never change between versions as long as the install location is the same.

Penalty

If you add more than one resource to a component you will break the repair capability of MSI because the KeyPath is used to check if the component needs repair.

MSI     | ComponentId | Files
MSI 1.0 | {1000}      | File1-5
MSI 2.0 | {2000}      | File2-5

You want to remove File1 in version 2.0 of your MSI. Since you want to keep the other files you create a new component and add them there. MSI will delete all files if the component refcount of {1000} drops to zero. The files you want to keep are added to the new component {2000}. Ok, that does work if your upgrade does uninstall the old MSI first. This will cause the refcount of all previously installed components to reach zero which means that all files present in version 1.0 are deleted. But there is a faster way to perform your upgrade by first installing your new MSI and then remove the old one. If you choose this upgrade path then you will lose File1-5 after your upgrade and not only File1 as intended by your new component design.

Rule 2 – Only add, never remove resources from a component

If you did follow rule 1 you will not need Rule 2. You can add in a patch more resources to one component. That is ok. But you can never remove anything from it. There are tricky ways around that but I do not want to encourage bad component design.

Penalty

Let’s assume you have 2 MSI files which install one file under the same component:

MSI1                 | MSI2
{1000} – ComponentId | {1000} – ComponentId
File1.txt            | File2.txt

When you install and uninstall both MSIs you will end up with an installation where either File1 or File2 will be left. Why?
It seems that MSI does not store the resources associated with each component in its internal database. Instead Windows will simply query the MSI that is currently being uninstalled for all resources belonging to this component. Since it will find only one file and not two it will only uninstall one file. That is the main reason why you never can remove resources from a component!

Rule 3 – Never Remove A Component From an Update MSI

This is the same as if you change the GUID of a component by accident for your new update package. The resulting update package will not contain all components from the previously installed package.

Penalty

When you remove a component from a feature MSI will set the feature state during update to Advertised and log a warning message into its log file when you did enable MSI logging.

SELMGR: ComponentId '{2DCEA1BA-3E27-E222-484C-D0D66AEA4F62}' is registered to feature 'xxxxxxx, but is not present in the Component table.  Removal of components from a feature is not supported!
MSI (c) (24:44) [07:53:13:436]: SELMGR: Removal of a component from a feature is not supported

Advertised means that MSI treats all components of this feature as not installed. As a consequence during uninstall nothing will be removed since it is not installed! This is not only bad because uninstall no longer works, but this feature will also not get the required patches. All other features which have followed component versioning rules for update packages will be updated but the one faulty feature will not. This results in very hard to find bugs where an update was only partially successful. Things got better with Windows Installer 4.5 but you cannot rely on nobody using an older installer. It is a good idea to add MSIENFORCEUPGRADECOMPONENTRULES=1 to your update msiexec call, which will abort the installation if you did violate this rule.
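As an aside (not from the article itself): if you ever want to check from code how Windows Installer currently sees one of your components - installed locally, absent, or unknown - the raw MSI API exposes MsiGetComponentPath in msi.dll, which resolves a ComponentId for a given ProductCode to its key path and install state. A minimal C# P/Invoke sketch follows; the GUIDs are placeholders, not real product or component ids:

using System;
using System.Runtime.InteropServices;
using System.Text;

class ComponentStateCheck
{
    // Returns an INSTALLSTATE value: 3 = installed locally, 2 = absent, 4 = runs from source, -1 = unknown.
    [DllImport( "msi.dll", CharSet = CharSet.Unicode )]
    static extern int MsiGetComponentPath( string szProduct, string szComponent, StringBuilder lpPathBuf, ref uint pcchBuf );

    static void Main()
    {
        // Placeholder GUIDs - substitute your own ProductCode and the ComponentId from the Component table.
        string productCode = "{00000000-0000-0000-0000-000000000000}";
        string componentId = "{11111111-1111-1111-1111-111111111111}";

        var keyPath = new StringBuilder( 1024 );
        uint size = (uint)keyPath.Capacity;

        int state = MsiGetComponentPath( productCode, componentId, keyPath, ref size );
        Console.WriteLine( "Install state: {0}, key path: {1}", state, keyPath );
    }
}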

    Read the article
