Search Results

Search found 18272 results on 731 pages for 'ldap query'.


  • Virtual host Alias not routing properly

    - by Jacob
    I apologize if this question has been asked many times in the past. I am not 100% sure of the exact cause of my issue and am out of Google magic right now. Basically I have a virtual host file set up with an Alias record that points to a directory other than the document root. It looks like this:

        <VirtualHost *:80>
            ServerName iBusinessCentral.com
            ServerAdmin [email protected]
            DocumentRoot /var/www/marketingsites/
            ServerAlias iBusinessCentral.com *.iBusinessCentral.com
            Alias /unsub "/var/www/unsub/site_index/"
        </VirtualHost>

    When I navigate to ibusinesscentral.com/unsub/?randomquerystring I am directed to the correct folder. If I remove the query string and navigate to ibusinesscentral.com/unsub/ I am taken to the directory in the document root. The unsub directory is a Zend application and I need to be able to navigate to different URL paths like ibusinesscentral.com/unsub/unenroll?querystring. I have tried using AliasMatch instead of Alias. I have also tried adding a slash after the unsub portion of the Alias record, and have not had any luck to this point. Thanks in advance for any assistance.
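
    Below is a minimal sketch of the kind of configuration that is often suggested for this situation. It assumes the paths from the question and that the Zend application relies on a rewrite rule in its own .htaccess file; defining the Alias both with and without the trailing slash, and explicitly allowing overrides in the aliased directory, are the usual suspects when /unsub/ falls through to the DocumentRoot:

        <VirtualHost *:80>
            ServerName iBusinessCentral.com
            ServerAlias *.iBusinessCentral.com
            DocumentRoot /var/www/marketingsites/

            # Cover both /unsub and /unsub/ so neither form is served from DocumentRoot
            Alias /unsub/ "/var/www/unsub/site_index/"
            Alias /unsub  "/var/www/unsub/site_index/"

            # Let the Zend app's .htaccess (mod_rewrite front controller) take effect
            <Directory "/var/www/unsub/site_index/">
                Options FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    Whether this resolves it depends on which request is actually hitting this virtual host, so it is worth confirming in the access log that /unsub/ requests are not being caught by another vhost first.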

    Read the article

  • Mysql: Working With 192 Trillion Records... (Yes, 192 Trillion)

    - by Sarah
    Here's the question... Considering 192 trillion records, what should my considerations be? My main concern is speed. Here's the table...

        CREATE TABLE `ref` (
          `id` INTEGER(13) NOT NULL AUTO_INCREMENT,
          `rel_id` INTEGER(13) NOT NULL,
          `p1` INTEGER(13) NOT NULL,
          `p2` INTEGER(13) DEFAULT NULL,
          `p3` INTEGER(13) DEFAULT NULL,
          `s` INTEGER(13) NOT NULL,
          `p4` INTEGER(13) DEFAULT NULL,
          `p5` INTEGER(13) DEFAULT NULL,
          `p6` INTEGER(13) DEFAULT NULL,
          PRIMARY KEY (`id`),
          KEY (`s`),
          KEY (`rel_id`),
          KEY (`p3`),
          KEY (`p4`)
        );

    Here's the queries...

        SELECT id, s FROM ref WHERE rel_id="$rel_id" AND p3="$p3" AND p4="$p4"
        SELECT rel_id, p1, p2, p3, p4, p5, p6 FROM ref WHERE id="$id"
        INSERT INTO ref (rel_id, p1, p2, p3, s, p4, p5, p6) VALUES ("$rel_id", "$p1", "$p2", "$p3", "$s", "$p4", "$p5", "$p6")

    Here's some notes... The SELECTs will be done much more frequently than the INSERTs. However, occasionally I want to add a few hundred records at a time. Load-wise, there will be nothing for hours, then maybe a few thousand queries all at once. I don't think I can normalize any more (I need the p values in a combination). The database as a whole is very relational. This will be the largest table by far (the next largest is about 900k rows). UPDATE (08/11/2010): Interestingly, I've been given a second option... Instead of 192 trillion I could store 2.6*10^16 (15 zeros, meaning 26 quadrillion)... But in this second option I would only need to store one bigint(18) as the index in a table. That's it - just the one column. So I would just be checking for the existence of a value, occasionally adding records and never deleting them. That makes me think there must be a better solution than MySQL for simply storing numbers... Given this second option, should I take it or stick with the first... [edit] Just got news of some testing that's been done - 100 million rows with this setup returns the query in 0.0004 seconds. [/edit]
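
    The "second option" in the update lends itself to a very small schema. The sketch below is only an illustration of that idea, not something from the question (the table and column names are made up); it assumes InnoDB, with the bigint value itself as the primary key, so both the existence check and the occasional batch insert resolve from a single index:

        -- Hypothetical names; assumes MySQL with InnoDB
        CREATE TABLE `ref_exists` (
          `val` BIGINT(18) UNSIGNED NOT NULL,
          PRIMARY KEY (`val`)
        ) ENGINE=InnoDB;

        -- Existence check: answered entirely from the primary key
        SELECT 1 FROM ref_exists WHERE val = 1234567890123456 LIMIT 1;

        -- Occasional batch insert; IGNORE skips values that are already present
        INSERT IGNORE INTO ref_exists (val) VALUES (101), (102), (103);

    At 2.6*10^16 rows even this single index is enormous, so it would normally be partitioned or sharded, but the query shape stays this simple.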

    Read the article

  • How many VPS do I need for my website? [duplicate]

    - by michael
    This question already has an answer here: How do you do load testing and capacity planning for web sites? I made a website which aims at simulating a trading market. There is a list of prices and corresponding volumes that people want to purchase, and users can purchase at any price at any time. My website retrieves the prices and volumes from my database every 2 seconds (I have to update the user's browser frequently so they can see the current market). A user's database INSERT query can be sent at any time when they purchase. I use Ajax to post or get data from my database (sometimes nested Ajax calls). So, every 2 seconds, each user will send or retrieve data using more than 20 database queries (in order to show users the current prices and volumes). Also, I may have 200 users at a time. I was not using a VPS before, and I got banned because of using too much CPU resource on my host. Now I've purchased two VPSes from a hosting provider. I have:

        CPU Speed: 2000 MHz
        Memory: 2048 MB
        Disk Space: 20000 MB
        Bandwidth: 2000 GB
        Connection: 40 Mb/s
        Dedicated IPs: 2

    Is this enough for my 200 users? Also, which VPS OS is suitable for me? Thank you.
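
    For scale, a rough figure implied by the numbers in the question (assuming all 200 users are connected at once): 200 users × 20+ queries per 2-second refresh is on the order of 2,000 queries per second, before counting the INSERT traffic from purchases. Whether the listed specs are enough therefore depends far more on how cheap each query is, and whether the 2-second polling can be cached or replaced, than on the raw CPU and RAM numbers.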

    Read the article

  • Debugging cucumber/gem dependencies

    - by mobmad
    How do you debug and fix gem errors like below? Although the below case is very specific, I'm also looking for solution to related problems like "gem already activated [...]", and resources to gem management/debugging. mycomputer:projectfolder username$ cucumber features Using the default profile... WARNING: No DRb server is running. Running features locally: /Users/username/.gem/ruby/1.8/gems/rails-2.3.5/lib/rails/gem_dependency.rb:119:Warning: Gem::Dependency#version_requirements is deprecated and will be removed on or after August 2010. Use #requirement can't activate , already activated ruby-hmac-0.4.0 (Gem::Exception) /Users/username/.gem/ruby/1.8/gems/rails-2.3.5/lib/rails/gem_dependency.rb:101:in `specification' /Users/username/.gem/ruby/1.8/gems/rails-2.3.5/lib/rails/plugin/locator.rb:81:in `plugins' /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `inject' /Users/username/.gem/ruby/1.8/gems/rails-2.3.5/lib/rails/plugin/locator.rb:81:in `each' /Users/username/.gem/ruby/1.8/gems/rails-2.3.5/lib/rails/plugin/locator.rb:81:in `inject' /Users/username/.gem/ruby/1.8/gems/rails-2.3.5/lib/rails/plugin/locator.rb:81:in `plugins' /Users/username/.gem/ruby/1.8/gems/rails-2.3.5/lib/rails/plugin/loader.rb:109:in `locate_plugins' /Users/username/.gem/ruby/1.8/gems/rails-2.3.5/lib/rails/plugin/loader.rb:108:in `map' /Users/username/.gem/ruby/1.8/gems/rails-2.3.5/lib/rails/plugin/loader.rb:108:in `locate_plugins' /Users/username/.gem/ruby/1.8/gems/rails-2.3.5/lib/rails/plugin/loader.rb:32:in `all_plugins' /Users/username/.gem/ruby/1.8/gems/rails-2.3.5/lib/rails/plugin/loader.rb:22:in `plugins' /Users/username/.gem/ruby/1.8/gems/rails-2.3.5/lib/rails/plugin/loader.rb:53:in `add_plugin_load_paths' /Users/username/.gem/ruby/1.8/gems/rails-2.3.5/lib/initializer.rb:294:in `add_plugin_load_paths' /Users/username/.gem/ruby/1.8/gems/rails-2.3.5/lib/initializer.rb:136:in `process' /Users/username/.gem/ruby/1.8/gems/rails-2.3.5/lib/initializer.rb:113:in `send' /Users/username/.gem/ruby/1.8/gems/rails-2.3.5/lib/initializer.rb:113:in `run' /Users/username/Documents/projectfolder.0/sites/projectfolder/config/environment.rb:9 /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `gem_original_require' /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `polyglot_original_require' /Library/Ruby/Gems/1.8/gems/polyglot-0.2.9/lib/polyglot.rb:70:in `require' ./features/support/env.rb:12 /Library/Ruby/Gems/1.8/gems/spork-0.7.5/lib/spork.rb:23:in `prefork' ./features/support/env.rb:9 /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `gem_original_require' /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `polyglot_original_require' /Library/Ruby/Gems/1.8/gems/polyglot-0.2.9/lib/polyglot.rb:70:in `require' /Library/Ruby/Gems/1.8/gems/cucumber-0.4.4/bin/../lib/cucumber/rb_support/rb_language.rb:124:in `load_code_file' /Library/Ruby/Gems/1.8/gems/cucumber-0.4.4/bin/../lib/cucumber/step_mother.rb:84:in `load_code_file' /Library/Ruby/Gems/1.8/gems/cucumber-0.4.4/bin/../lib/cucumber/step_mother.rb:76:in `load_code_files' /Library/Ruby/Gems/1.8/gems/cucumber-0.4.4/bin/../lib/cucumber/step_mother.rb:75:in `each' /Library/Ruby/Gems/1.8/gems/cucumber-0.4.4/bin/../lib/cucumber/step_mother.rb:75:in `load_code_files' /Library/Ruby/Gems/1.8/gems/cucumber-0.4.4/bin/../lib/cucumber/cli/main.rb:47:in `execute!' 
/Library/Ruby/Gems/1.8/gems/cucumber-0.4.4/bin/../lib/cucumber/cli/main.rb:24:in `execute' /Library/Ruby/Gems/1.8/gems/cucumber-0.4.4/bin/cucumber:8 /usr/bin/cucumber:19:in `load' /usr/bin/cucumber:19 And this is the output from gem list actionmailer (2.3.5, 2.2.2, 1.3.6) actionpack (2.3.5, 2.2.2, 1.13.6) actionwebservice (1.2.6) activerecord (2.3.5, 2.2.2, 1.15.6) activeresource (2.3.5, 2.2.2) activesupport (2.3.5, 2.2.2, 1.4.4) acts_as_ferret (0.4.4, 0.4.3) adamwiggins-rest-client (1.0.4) aslakhellesoy-webrat (0.4.4.1) aslakjo-comatose (2.0.5.12) authlogic (2.1.3) authlogic-oid (1.0.4) builder (2.1.2) capistrano (2.5.17, 2.5.2) cgi_multipart_eof_fix (2.5.0) configuration (1.1.0) cucumber (0.4.4) cucumber-rails (0.3.0) daemons (1.0.10) database_cleaner (0.5.0) diff-lcs (1.1.2) dnssd (1.3.1, 0.6.0) fakeweb (1.2.8) fastthread (1.0.7, 1.0.1) fcgi (0.8.8, 0.8.7) ferret (0.11.6) gem_plugin (0.2.3) gemcutter (0.4.1) heroku (1.8.0) highline (1.5.2, 1.5.0) hoe (2.5.0) hpricot (0.8.2, 0.6.164) json (1.2.2) json_pure (1.2.2) launchy (0.3.5) libxml-ruby (1.1.3, 1.1.2) linecache (0.43) log4r (1.1.5) mime-types (1.16) mongrel (1.1.5) mysql (2.8.1) needle (1.3.0) net-scp (1.0.2, 1.0.1) net-sftp (2.0.4, 2.0.1, 1.1.1) net-ssh (2.0.20, 2.0.4, 1.1.4) net-ssh-gateway (1.0.1, 1.0.0) nifty-generators (0.3.2) nokogiri (1.4.1) oauth (0.3.6) oniguruma (1.1.0) plist (3.1.0) polyglot (0.2.9) rack (1.1.0, 1.0.1) rack-test (0.5.3) rails (2.3.5, 2.2.2, 1.2.6) rake (0.8.7, 0.8.3) RedCloth (4.2.2, 4.1.1) rest-client (1.4.0) rspec (1.3.0) rspec-rails (1.3.2) ruby-activeldap (0.8.3.1) ruby-debug-base (0.10.3) ruby-debug-ide (0.4.9) ruby-hmac (0.4.0) ruby-net-ldap (0.0.4) ruby-openid (2.1.7, 2.1.2) ruby-yadis (0.3.4) rubyforge (2.0.4) rubygems-update (1.3.6) rubynode (0.1.5) rubyzip (0.9.4) sanitize (1.2.0) sequel (3.0.0) sinatra (0.9.2) spork (0.7.5) sqlite3-ruby (1.2.5, 1.2.4) taps (0.2.26) term-ansicolor (1.0.4) termios (0.9.4) textpow (0.10.1) thor (0.9.9) treetop (1.4.2) twitter4r (0.3.2, 0.3.1) ultraviolet (0.10.2) webrat (0.7.0) will_paginate (2.3.12) xmpp4r (0.5, 0.4)
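
    A minimal debugging sketch for this class of error, assuming RubyGems 1.3.x and a Rails 2.3.5 app as in the trace above (these are generic RubyGems commands, not something taken from the question):

        # Which versions of the conflicting gem are installed?
        gem list ruby-hmac

        # Which installed gems depend on it, and with what version constraints?
        gem dependency ruby-hmac --reverse-dependencies

        # Remove stale versions, keeping only the newest
        gem cleanup ruby-hmac

        # Then look for competing pins in the app itself:
        # config/environment.rb ("config.gem" lines) and vendored plugins
        grep -rn "ruby-hmac" config/ vendor/

    "can't activate ..., already activated" usually means one consumer (a config.gem line or a vendored plugin) requires a different version of the gem than the one RubyGems has already loaded, so the fix is making every consumer agree on a single installed version.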

    Read the article

  • How to use a 3D map ActionScript class in an MXML file to display a map

    - by nemade-vipin
    hello friends, I have created the application in which I have to use 3D map Action Script class in mxml file to display a map in form. that is in tab navigator last tab. My ActionScript 3D map class is(FlyingDirections):- package src.SBTSCoreObject { import src.SBTSCoreObject.JSONDecoder; import com.google.maps.InfoWindowOptions; import com.google.maps.LatLng; import com.google.maps.LatLngBounds; import com.google.maps.Map3D; import com.google.maps.MapEvent; import com.google.maps.MapOptions; import com.google.maps.MapType; import com.google.maps.MapUtil; import com.google.maps.View; import com.google.maps.controls.NavigationControl; import com.google.maps.geom.Attitude; import com.google.maps.interfaces.IPolyline; import com.google.maps.overlays.Marker; import com.google.maps.overlays.MarkerOptions; import com.google.maps.services.Directions; import com.google.maps.services.DirectionsEvent; import com.google.maps.services.Route; import flash.display.Bitmap; import flash.display.DisplayObject; import flash.display.DisplayObjectContainer; import flash.display.Loader; import flash.display.LoaderInfo; import flash.display.Sprite; import flash.events.Event; import flash.events.IOErrorEvent; import flash.events.MouseEvent; import flash.events.TimerEvent; import flash.filters.DropShadowFilter; import flash.geom.Point; import flash.net.URLLoader; import flash.net.URLRequest; import flash.net.navigateToURL; import flash.text.TextField; import flash.text.TextFieldAutoSize; import flash.text.TextFormat; import flash.utils.Timer; import flash.utils.getTimer; public class FlyingDirections extends Map3D { /** * Panoramio home page. */ private static const PANORAMIO_HOME:String = "http://www.panoramio.com/"; /** * The icon for the car. */ [Embed("assets/car-icon-24px.png")] private static const Car:Class; /** * The Panoramio icon. */ [Embed("assets/iw_panoramio.png")] private static const PanoramioIcon:Class; /** * We animate a zoom in to the start the route before the car starts * to move. This constant sets the time in seconds over which this * zoom occurs. */ private static const LEAD_IN_DURATION:Number = 3; /** * Duration of the trip in seconds. */ private static const TRIP_DURATION:Number = 40; /** * Constants that define the geometry of the Panoramio image markers. */ private static const BORDER_T:Number = 3; private static const BORDER_L:Number = 10; private static const BORDER_R:Number = 10; private static const BORDER_B:Number = 3; private static const GAP_T:Number = 2; private static const GAP_B:Number = 1; private static const IMAGE_SCALE:Number = 1; /** * Trajectory that the camera follows over time. Each element is an object * containing properties used to generate parameter values for flyTo(..). * fraction = 0 corresponds to the start of the trip; fraction = 1 * correspondsto the end of the trip. */ private var FLY_TRAJECTORY:Array = [ { fraction: 0, zoom: 6, attitude: new Attitude(0, 0, 0) }, { fraction: 0.2, zoom: 8.5, attitude: new Attitude(30, 30, 0) }, { fraction: 0.5, zoom: 9, attitude: new Attitude(30, 40, 0) }, { fraction: 1, zoom: 8, attitude: new Attitude(50, 50, 0) }, { fraction: 1.1, zoom: 8, attitude: new Attitude(130, 50, 0) }, { fraction: 1.2, zoom: 8, attitude: new Attitude(220, 50, 0) }, ]; /** * Number of panaramio photos for which we load data. We&apos;ll select a * subset of these approximately evenly spaced along the route. */ private static const NUM_GEOTAGGED_PHOTOS:int = 50; /** * Number of panaramio photos that we actually show. 
*/ private static const NUM_SHOWN_PHOTOS:int = 7; /** * Scaling between real trip time and animation time. */ private static const SCALE_TIME:Number = 0.001; /** * getTimer() value at the instant that we start the trip. If this is 0 then * we have not yet started the car moving. */ private var startTimer:int = 0; /** * The current route. */ private var route:Route; /** * The polyline for the route. */ private var polyline:IPolyline; /** * The car marker. */ private var marker:Marker; /** * The cumulative duration in seconds over each step in the route. * cumulativeStepDuration[0] is 0; cumulativeStepDuration[1] adds the * duration of step 0; cumulativeStepDuration[2] adds the duration * of step 1; etc. */ private var cumulativeStepDuration:/*Number*/Array = []; /** * The cumulative distance in metres over each vertex in the route polyline. * cumulativeVertexDistance[0] is 0; cumulativeVertexDistance[1] adds the * distance to vertex 1; cumulativeVertexDistance[2] adds the distance to * vertex 2; etc. */ private var cumulativeVertexDistance:Array; /** * Array of photos loaded from Panoramio. This array has the same format as * the &apos;photos&apos; property within the JSON returned by the Panoramio API * (see http://www.panoramio.com/api/), with additional properties added to * individual photo elements to hold the loader structures that fetch * the actual images. */ private var photos:Array = []; /** * Array of polyline vertices, where each element is in world coordinates. * Several computations can be faster if we can use world coordinates * instead of LatLng coordinates. */ private var worldPoly:/*Point*/Array; /** * Whether the start button has been pressed. */ private var startButtonPressed:Boolean = false; /** * Saved event from onDirectionsSuccess call. */ private var directionsSuccessEvent:DirectionsEvent = null; /** * Start button. */ private var startButton:Sprite; /** * Alpha value used for the Panoramio image markers. */ private var markerAlpha:Number = 0; /** * Index of the current driving direction step. Used to update the * info window content each time we progress to a new step. */ private var currentStepIndex:int = -1; /** * The fly directions map constructor. * * @constructor */ public function FlyingDirections() { key="ABQIAAAA7QUChpcnvnmXxsjC7s1fCxQGj0PqsCtxKvarsoS-iqLdqZSKfxTd7Xf-2rEc_PC9o8IsJde80Wnj4g"; super(); addEventListener(MapEvent.MAP_PREINITIALIZE, onMapPreinitialize); addEventListener(MapEvent.MAP_READY, onMapReady); } /** * Handles map preintialize. Initializes the map center and zoom level. * * @param event The map event. */ private function onMapPreinitialize(event:MapEvent):void { setInitOptions(new MapOptions({ center: new LatLng(-26.1, 135.1), zoom: 4, viewMode: View.VIEWMODE_PERSPECTIVE, mapType:MapType.PHYSICAL_MAP_TYPE })); } /** * Handles map ready and looks up directions. * * @param event The map event. */ private function onMapReady(event:MapEvent):void { enableScrollWheelZoom(); enableContinuousZoom(); addControl(new NavigationControl()); // The driving animation will be updated on every frame. addEventListener(Event.ENTER_FRAME, enterFrame); addStartButton(); // We start the directions loading now, so that we&apos;re ready to go when // the user hits the start button. 
var directions:Directions = new Directions(); directions.addEventListener( DirectionsEvent.DIRECTIONS_SUCCESS, onDirectionsSuccess); directions.addEventListener( DirectionsEvent.DIRECTIONS_FAILURE, onDirectionsFailure); directions.load("48 Pirrama Rd, Pyrmont, NSW to Byron Bay, NSW"); } /** * Adds a big blue start button. */ private function addStartButton():void { startButton = new Sprite(); startButton.buttonMode = true; startButton.addEventListener(MouseEvent.CLICK, onStartClick); startButton.graphics.beginFill(0x1871ce); startButton.graphics.drawRoundRect(0, 0, 150, 100, 10, 10); startButton.graphics.endFill(); var startField:TextField = new TextField(); startField.autoSize = TextFieldAutoSize.LEFT; startField.defaultTextFormat = new TextFormat("_sans", 20, 0xffffff, true); startField.text = "Start!"; startButton.addChild(startField); startField.x = 0.5 * (startButton.width - startField.width); startField.y = 0.5 * (startButton.height - startField.height); startButton.filters = [ new DropShadowFilter() ]; var container:DisplayObjectContainer = getDisplayObject() as DisplayObjectContainer; container.addChild(startButton); startButton.x = 0.5 * (container.width - startButton.width); startButton.y = 0.5 * (container.height - startButton.height); var panoField:TextField = new TextField(); panoField.autoSize = TextFieldAutoSize.LEFT; panoField.defaultTextFormat = new TextFormat("_sans", 11, 0x000000, true); panoField.text = "Photos provided by Panoramio are under the copyright of their owners."; container.addChild(panoField); panoField.x = container.width - panoField.width - 5; panoField.y = 5; } /** * Handles directions success. Starts flying the route if everything * is ready. * * @param event The directions event. */ private function onDirectionsSuccess(event:DirectionsEvent):void { directionsSuccessEvent = event; flyRouteIfReady(); } /** * Handles click on the start button. Starts flying the route if everything * is ready. */ private function onStartClick(event:MouseEvent):void { startButton.removeEventListener(MouseEvent.CLICK, onStartClick); var container:DisplayObjectContainer = getDisplayObject() as DisplayObjectContainer; container.removeChild(startButton); startButtonPressed = true; flyRouteIfReady(); } /** * If we have loaded the directions and the start button has been pressed * start flying the directions route. */ private function flyRouteIfReady():void { if (!directionsSuccessEvent || !startButtonPressed) { return; } var directions:Directions = directionsSuccessEvent.directions; // Extract the route. route = directions.getRoute(0); // Draws the polyline showing the route. polyline = directions.createPolyline(); addOverlay(directions.createPolyline()); // Creates a car marker that is moved along the route. var car:DisplayObject = new Car(); marker = new Marker(route.startGeocode.point, new MarkerOptions({ icon: car, iconOffset: new Point(-car.width / 2, -car.height) })); addOverlay(marker); transformPolyToWorld(); createCumulativeArrays(); // Load Panoramio data for the region covered by the route. loadPanoramioData(directions.bounds); var duration:Number = route.duration; // Start a timer that will trigger the car moving after the lead in time. var leadInTimer:Timer = new Timer(LEAD_IN_DURATION * 1000, 1); leadInTimer.addEventListener(TimerEvent.TIMER, onLeadInDone); leadInTimer.start(); var flyTime:Number = -LEAD_IN_DURATION; // Set up the camera flight trajectory. 
for each (var flyStep:Object in FLY_TRAJECTORY) { var time:Number = flyStep.fraction * duration; var center:LatLng = latLngAt(time); var scaledTime:Number = time * SCALE_TIME; var zoom:Number = flyStep.zoom; var attitude:Attitude = flyStep.attitude; var elapsed:Number = scaledTime - flyTime; flyTime = scaledTime; flyTo(center, zoom, attitude, elapsed); } } /** * Loads Panoramio data for the route bounds. We load data about more photos * than we need, then select a subset lying along the route. * @param bounds Bounds within which to fetch images. */ private function loadPanoramioData(bounds:LatLngBounds):void { var params:Object = { order: "popularity", set: "full", from: "0", to: NUM_GEOTAGGED_PHOTOS.toString(10), size: "small", minx: bounds.getWest(), miny: bounds.getSouth(), maxx: bounds.getEast(), maxy: bounds.getNorth() }; var loader:URLLoader = new URLLoader(); var request:URLRequest = new URLRequest( "http://www.panoramio.com/map/get_panoramas.php?" + paramsToString(params)); loader.addEventListener(Event.COMPLETE, onPanoramioDataLoaded); loader.addEventListener(IOErrorEvent.IO_ERROR, onPanoramioDataFailed); loader.load(request); } /** * Transforms the route polyline to world coordinates. */ private function transformPolyToWorld():void { var numVertices:int = polyline.getVertexCount(); worldPoly = new Array(numVertices); for (var i:int = 0; i < numVertices; ++i) { var vertex:LatLng = polyline.getVertex(i); worldPoly[i] = fromLatLngToPoint(vertex, 0); } } /** * Returns the time at which the route approaches closest to the * given point. * @param world Point in world coordinates. * @return Route time at which the closest approach occurs. */ private function getTimeOfClosestApproach(world:Point):Number { var minDistSqr:Number = Number.MAX_VALUE; var numVertices:int = worldPoly.length; var x:Number = world.x; var y:Number = world.y; var minVertex:int = 0; for (var i:int = 0; i < numVertices; ++i) { var dx:Number = worldPoly[i].x - x; var dy:Number = worldPoly[i].y - y; var distSqr:Number = dx * dx + dy * dy; if (distSqr < minDistSqr) { minDistSqr = distSqr; minVertex = i; } } return cumulativeVertexDistance[minVertex]; } /** * Returns the array index of the first element that compares greater than * the given value. * @param ordered Ordered array of elements. * @param value Value to use for comparison. * @return Array index of the first element that compares greater than * the given value. */ private function upperBound(ordered:Array, value:Number, first:int=0, last:int=-1):int { if (last < 0) { last = ordered.length; } var count:int = last - first; var index:int; while (count > 0) { var step:int = count >> 1; index = first + step; if (value >= ordered[index]) { first = index + 1; count -= step - 1; } else { count = step; } } return first; } /** * Selects up to a given number of photos approximately evenly spaced along * the route. * @param ordered Array of photos, each of which is an object with * a property &apos;closestTime&apos;. * @param number Number of photos to select. 
*/ private function selectEvenlySpacedPhotos(ordered:Array, number:int):Array { var start:Number = cumulativeVertexDistance[0]; var end:Number = cumulativeVertexDistance[cumulativeVertexDistance.length - 2]; var closestTimes:Array = []; for each (var photo:Object in ordered) { closestTimes.push(photo.closestTime); } var selectedPhotos:Array = []; for (var i:int = 0; i < number; ++i) { var idealTime:Number = start + ((end - start) * (i + 0.5) / number); var index:int = upperBound(closestTimes, idealTime); if (index < 1) { index = 0; } else if (index >= ordered.length) { index = ordered.length - 1; } else { var errorToPrev:Number = Math.abs(idealTime - closestTimes[index - 1]); var errorToNext:Number = Math.abs(idealTime - closestTimes[index]); if (errorToPrev < errorToNext) { --index; } } selectedPhotos.push(ordered[index]); } return selectedPhotos; } /** * Handles completion of loading the Panoramio index data. Selects from the * returned photo indices a subset of those that lie along the route and * initiates load of each of these. * @param event Load completion event. */ private function onPanoramioDataLoaded(event:Event):void { var loader:URLLoader = event.target as URLLoader; var decoder:JSONDecoder = new JSONDecoder(loader.data as String); var allPhotos:Array = decoder.getValue().photos; for each (var photo:Object in allPhotos) { var latLng:LatLng = new LatLng(photo.latitude, photo.longitude); photo.closestTime = getTimeOfClosestApproach(fromLatLngToPoint(latLng, 0)); } allPhotos.sortOn("closestTime", Array.NUMERIC); photos = selectEvenlySpacedPhotos(allPhotos, NUM_SHOWN_PHOTOS); for each (photo in photos) { var photoLoader:Loader = new Loader(); // The images aren&apos;t on panoramio.com: we can&apos;t acquire pixel access // using "new LoaderContext(true)". photoLoader.load( new URLRequest(photo.photo_file_url)); photo.loader = photoLoader; // Save the loader info: we use this to find the original element when // the load completes. photo.loaderInfo = photoLoader.contentLoaderInfo; photoLoader.contentLoaderInfo.addEventListener( Event.COMPLETE, onPhotoLoaded); } } /** * Creates a MouseEvent listener function that will navigate to the given * URL in a new window. * @param url URL to which to navigate. */ private function createOnClickUrlOpener(url:String):Function { return function(event:MouseEvent):void { navigateToURL(new URLRequest(url)); }; } /** * Handles completion of loading an individual Panoramio image. * Adds a custom marker that displays the image. Initially this is made * invisible so that it can be faded in as needed. * @param event Load completion event. */ private function onPhotoLoaded(event:Event):void { var loaderInfo:LoaderInfo = event.target as LoaderInfo; // We need to find which photo element this image corresponds to. for each (var photo:Object in photos) { if (loaderInfo == photo.loaderInfo) { var imageMarker:Sprite = createImageMarker(photo.loader, photo.owner_name, photo.owner_url); var options:MarkerOptions = new MarkerOptions({ icon: imageMarker, hasShadow: true, iconAlignment: MarkerOptions.ALIGN_BOTTOM | MarkerOptions.ALIGN_LEFT }); var latLng:LatLng = new LatLng(photo.latitude, photo.longitude); var marker:Marker = new Marker(latLng, options); photo.marker = marker; addOverlay(marker); // A hack: we add the actual image after the overlay has been added, // which creates the shadow, so that the shadow is valid even if we // don&apos;t have security privileges to generate the shadow from the // image. 
marker.foreground.visible = false; marker.shadow.alpha = 0; var imageHolder:Sprite = new Sprite(); imageHolder.addChild(photo.loader); imageHolder.buttonMode = true; imageHolder.addEventListener( MouseEvent.CLICK, createOnClickUrlOpener(photo.photo_url)); imageMarker.addChild(imageHolder); return; } } trace("An image was loaded which could not be found in the photo array."); } /** * Creates a custom marker showing an image. */ private function createImageMarker(child:DisplayObject, ownerName:String, ownerUrl:String):Sprite { var content:Sprite = new Sprite(); var panoramioIcon:Bitmap = new PanoramioIcon(); var iconHolder:Sprite = new Sprite(); iconHolder.addChild(panoramioIcon); iconHolder.buttonMode = true; iconHolder.addEventListener(MouseEvent.CLICK, onPanoramioIconClick); panoramioIcon.x = BORDER_L; panoramioIcon.y = BORDER_T; content.addChild(iconHolder); // NOTE: we add the image as a child only after we&apos;ve added the marker // to the map. Currently the API requires this if it&apos;s to generate the // shadow for unprivileged content. // Shrink the image, so that it doesn&apos;t obcure too much screen space. // Ideally, we&apos;d subsample, but we don&apos;t have pixel level access. child.scaleX = IMAGE_SCALE; child.scaleY = IMAGE_SCALE; var imageW:Number = child.width; var imageH:Number = child.height; child.x = BORDER_L + 30; child.y = BORDER_T + iconHolder.height + GAP_T; var authorField:TextField = new TextField(); authorField.autoSize = TextFieldAutoSize.LEFT; authorField.defaultTextFormat = new TextFormat("_sans", 12); authorField.text = "author:"; content.addChild(authorField); authorField.x = BORDER_L; authorField.y = BORDER_T + iconHolder.height + GAP_T + imageH + GAP_B; var ownerField:TextField = new TextField(); ownerField.autoSize = TextFieldAutoSize.LEFT; var textFormat:TextFormat = new TextFormat("_sans", 14, 0x0e5f9a); ownerField.defaultTextFormat = textFormat; ownerField.htmlText = "<a href=\"" + ownerUrl + "\" target=\"_blank\">" + ownerName + "</a>"; content.addChild(ownerField); ownerField.x = BORDER_L + authorField.width; ownerField.y = BORDER_T + iconHolder.height + GAP_T + imageH + GAP_B; var totalW:Number = BORDER_L + Math.max(imageW, ownerField.width + authorField.width) + BORDER_R; var totalH:Number = BORDER_T + iconHolder.height + GAP_T + imageH + GAP_B + ownerField.height + BORDER_B; content.graphics.beginFill(0xffffff); content.graphics.drawRoundRect(0, 0, totalW, totalH, 10, 10); content.graphics.endFill(); var marker:Sprite = new Sprite(); marker.addChild(content); content.x = 30; content.y = 0; marker.graphics.lineStyle(); marker.graphics.beginFill(0xff0000); marker.graphics.drawCircle(0, totalH + 30, 3); marker.graphics.endFill(); marker.graphics.lineStyle(2, 0xffffff); marker.graphics.moveTo(30 + 10, totalH - 10); marker.graphics.lineTo(0, totalH + 30); return marker; } /** * Handles click on the Panoramio icon. */ private function onPanoramioIconClick(event:MouseEvent):void { navigateToURL(new URLRequest(PANORAMIO_HOME)); } /** * Handles failure of a Panoramio image load. */ private function onPanoramioDataFailed(event:IOErrorEvent):void { trace("Load of image failed: " + event); } /** * Returns a string containing cgi query parameters. * @param Associative array mapping query parameter key to value. * @return String containing cgi query parameters. 
*/ private static function paramsToString(params:Object):String { var result:String = ""; var separator:String = ""; for (var key:String in params) { result += separator + encodeURIComponent(key) + "=" + encodeURIComponent(params[key]); separator = "&"; } return result; } /** * Called once the lead-in flight is done. Starts the car driving along * the route and starts a timer to begin fade in of the Panoramio * images in 1.5 seconds. */ private function onLeadInDone(event:Event):void { // Set startTimer non-zero so that the car starts to move. startTimer = getTimer(); // Start a timer that will fade in the Panoramio images. var fadeInTimer:Timer = new Timer(1500, 1); fadeInTimer.addEventListener(TimerEvent.TIMER, onFadeInTimer); fadeInTimer.start(); } /** * Handles the fade in timer&apos;s TIMER event. Sets markerAlpha above zero * which causes the frame enter handler to fade in the markers. */ private function onFadeInTimer(event:Event):void { markerAlpha = 0.01; } /** * The end time of the flight. */ private function get endTime():Number { if (!cumulativeStepDuration || cumulativeStepDuration.length == 0) { return startTimer; } return startTimer + cumulativeStepDuration[cumulativeStepDuration.length - 1]; } /** * Creates the cumulative arrays, cumulativeStepDuration and * cumulativeVertexDistance. */ private function createCumulativeArrays():void { cumulativeStepDuration = new Array(route.numSteps + 1); cumulativeVertexDistance = new Array(polyline.getVertexCount() + 1); var polylineTotal:Number = 0; var total:Number = 0; var numVertices:int = polyline.getVertexCount(); for (var stepIndex:int = 0; stepIndex < route.numSteps; ++stepIndex) { cumulativeStepDuration[stepIndex] = total; total += route.getStep(stepIndex).duration; var startVertex:int = stepIndex >= 0 ? route.getStep(stepIndex).polylineIndex : 0; var endVertex:int = stepIndex < (route.numSteps - 1) ? route.getStep(stepIndex + 1).polylineIndex : numVertices; var duration:Number = route.getStep(stepIndex).duration; var stepVertices:int = endVertex - startVertex; var latLng:LatLng = polyline.getVertex(startVertex); for (var vertex:int = startVertex; vertex < endVertex; ++vertex) { cumulativeVertexDistance[vertex] = polylineTotal; if (vertex < numVertices - 1) { var nextLatLng:LatLng = polyline.getVertex(vertex + 1); polylineTotal += nextLatLng.distanceFrom(latLng); } latLng = nextLatLng; } } cumulativeStepDuration[stepIndex] = total; } /** * Opens the info window above the car icon that details the given * step of the driving directions. * @param stepIndex Index of the current step. */ private function openInfoForStep(stepIndex:int):void { // Sets the content of the info window. var content:String; if (stepIndex >= route.numSteps) { content = "<b>" + route.endGeocode.address + "</b>" + "<br /><br />" + route.summaryHtml; } else { content = "<b>" + stepIndex + ".</b> " + route.getStep(stepIndex).descriptionHtml; } marker.openInfoWindow(new InfoWindowOptions({ contentHTML: content })); } /** * Displays the driving directions step appropriate for the given time. * Opens the info window showing the step instructions each time we * progress to a new step. * @param time Time for which to display the step. 
*/ private function displayStepAt(time:Number):void { var stepIndex:int = upperBound(cumulativeStepDuration, time) - 1; var minStepIndex:int = 0; var maxStepIndex:int = route.numSteps - 1; if (stepIndex >= 0 && stepIndex <= maxStepIndex && currentStepIndex != stepIndex) { openInfoForStep(stepIndex); currentStepIndex = stepIndex; } } /** * Returns the LatLng at which the car should be positioned at the given * time. * @param time Time for which LatLng should be found. * @return LatLng. */ private function latLngAt(time:Number):LatLng { var stepIndex:int = upperBound(cumulativeStepDuration, time) - 1; var minStepIndex:int = 0; var maxStepIndex:int = route.numSteps - 1; if (stepIndex < minStepIndex) { return route.startGeocode.point; } else if (stepIndex > maxStepIndex) { return route.endGeocode.point; } var stepStart:Number = cumulativeStepDuration[stepIndex]; var stepEnd:Number = cumulativeStepDuration[stepIndex + 1]; var stepFraction:Number = (time - stepStart) / (stepEnd - stepStart); var startVertex:int = route.getStep(stepIndex).polylineIndex; var endVertex:int = (stepIndex + 1) < route.numSteps ? route.getStep(stepIndex + 1).polylineIndex : polyline.getVertexCount(); var stepVertices:int = endVertex - startVertex; var stepLeng
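
    The class above is a Sprite-based Map3D subclass rather than a Flex component, so one common pattern is to wrap it in a UIComponent inside the last tab of the TabNavigator. The sketch below only illustrates that pattern under those assumptions; the id names and the Flex 3 mx namespace are mine, not from the question:

        <!-- Last tab of the existing TabNavigator -->
        <mx:Canvas label="Directions map" width="100%" height="100%"
                   creationComplete="initMap()">
            <mx:UIComponent id="mapHolder" width="100%" height="100%"/>
        </mx:Canvas>

        <mx:Script>
            <![CDATA[
                import flash.geom.Point;
                import src.SBTSCoreObject.FlyingDirections;

                private var flyMap:FlyingDirections;

                private function initMap():void {
                    // FlyingDirections is a plain display object, so it is added to a
                    // UIComponent wrapper at runtime rather than declared as an MXML tag.
                    flyMap = new FlyingDirections();
                    flyMap.setSize(new Point(mapHolder.width, mapHolder.height));
                    mapHolder.addChild(flyMap);
                }
            ]]>
        </mx:Script>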

    Read the article

  • postfix: Temporary lookup failure

    - by mk_89
    I have followed the tutorials step by step for installing and configuring postfix https://help.ubuntu.com/community/Postfix https://help.ubuntu.com/community/PostfixBasicSetupHowto I am trying to test the services to whether Temporary lookup failure error telnet localhost 25 250 2.1.0 Ok rcpt to: fmaster@localhost 451 4.3.0 <fmaster@localhost>: Temporary lookup failure rcpt to: info@localhost 451 4.3.0 <info@localhost>: Temporary lookup failure I have tried searching the web but I have found no solutions, why am I getting this problem? mail.log Sep 24 01:03:05 bookcdb postfix/smtpd[21055]: NOQUEUE: reject: RCPT from unknown[::1]: 451 4.3.0 <info@localhost>: Temporary lookup failure; from=<root@localhost> to=<info@localhost> proto=ESMTP helo=<localhost> Sep 24 01:03:19 bookcdb postfix/smtpd[21055]: NOQUEUE: reject: RCPT from unknown[::1]: 451 4.3.0 <root@localhost>: Temporary lookup failure; from=<root@localhost> to=<root@localhost> proto=ESMTP helo=<localhost> Sep 24 01:08:19 bookcdb postfix/smtpd[21055]: timeout after RCPT from unknown[::1] Sep 24 01:08:19 bookcdb postfix/smtpd[21055]: disconnect from unknown[::1] Sep 24 01:10:49 bookcdb postfix/anvil[21059]: statistics: max connection rate 1/60s for (smtp:::1) at Sep 24 01:00:49 Sep 24 01:10:49 bookcdb postfix/anvil[21059]: statistics: max connection count 1 for (smtp:::1) at Sep 24 01:00:49 Sep 24 01:10:49 bookcdb postfix/anvil[21059]: statistics: max cache size 1 at Sep 24 01:00:49 Sep 24 01:15:36 bookcdb postfix/smtpd[22175]: error: open database /var/lib/mailman/data/aliases.db: No such file or directory Sep 24 01:15:36 bookcdb postfix/smtpd[22175]: warning: hostname localhost does not resolve to address ::1: No address associated with hostname Sep 24 01:15:36 bookcdb postfix/smtpd[22175]: connect from unknown[::1] Sep 24 01:15:55 bookcdb postfix/trivial-rewrite[22195]: error: open database /etc/postfix/transport.db: No such file or directory Sep 24 01:15:55 bookcdb postfix/trivial-rewrite[22195]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:15:55 bookcdb postfix/trivial-rewrite[22195]: warning: hash:/etc/postfix/transport lookup error for "*" Sep 24 01:15:55 bookcdb postfix/trivial-rewrite[22195]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:15:55 bookcdb postfix/trivial-rewrite[22195]: warning: hash:/etc/postfix/transport lookup error for "*" Sep 24 01:15:55 bookcdb postfix/trivial-rewrite[22195]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:15:55 bookcdb postfix/trivial-rewrite[22195]: warning: hash:/etc/postfix/transport lookup error for "root@localhost" Sep 24 01:15:55 bookcdb postfix/trivial-rewrite[22195]: warning: transport_maps lookup failure Sep 24 01:15:59 bookcdb postfix/trivial-rewrite[22195]: warning: hash:/etc/postfix/transport is unavailable. 
open database /etc/postfix/transport.db: No such file or directory Sep 24 01:15:59 bookcdb postfix/trivial-rewrite[22195]: warning: hash:/etc/postfix/transport lookup error for "fmaster@localhost" Sep 24 01:15:59 bookcdb postfix/trivial-rewrite[22195]: warning: transport_maps lookup failure Sep 24 01:15:59 bookcdb postfix/smtpd[22175]: NOQUEUE: reject: RCPT from unknown[::1]: 451 4.3.0 <fmaster@localhost>: Temporary lookup failure; from=<root@localhost> to=<fmaster@localhost> proto=ESMTP helo=<localhost> Sep 24 01:16:30 postfix/smtpd[22175]: last message repeated 5 times Sep 24 01:16:30 bookcdb postfix/smtpd[22175]: disconnect from unknown[::1] Sep 24 01:19:50 bookcdb postfix/anvil[22177]: statistics: max connection rate 1/60s for (smtp:::1) at Sep 24 01:15:36 Sep 24 01:19:50 bookcdb postfix/anvil[22177]: statistics: max connection count 1 for (smtp:::1) at Sep 24 01:15:36 Sep 24 01:19:50 bookcdb postfix/anvil[22177]: statistics: max cache size 1 at Sep 24 01:15:36 Sep 24 01:20:32 bookcdb postfix/qmgr[21039]: D0C596E0B34: from=<[email protected]>, size=442, nrcpt=1 (queue active) Sep 24 01:20:32 bookcdb postfix/qmgr[21039]: warning: connect to transport private/smtp-amavis: No such file or directory Sep 24 01:20:32 bookcdb postfix/error[22402]: D0C596E0B34: to=<[email protected]>, relay=none, delay=5369, delays=5369/0.01/0/0.09, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 01:24:16 bookcdb postfix/smtpd[22573]: error: open database /var/lib/mailman/data/aliases.db: No such file or directory Sep 24 01:24:16 bookcdb postfix/smtpd[22573]: warning: hostname localhost does not resolve to address ::1: No address associated with hostname Sep 24 01:24:16 bookcdb postfix/smtpd[22573]: connect from unknown[::1] Sep 24 01:24:43 bookcdb postfix/trivial-rewrite[22594]: error: open database /etc/postfix/transport.db: No such file or directory Sep 24 01:24:43 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:24:43 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport lookup error for "*" Sep 24 01:24:43 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:24:43 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport lookup error for "*" Sep 24 01:24:43 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:24:43 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport lookup error for "root@localhost" Sep 24 01:24:43 bookcdb postfix/trivial-rewrite[22594]: warning: transport_maps lookup failure Sep 24 01:25:14 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport is unavailable. 
open database /etc/postfix/transport.db: No such file or directory Sep 24 01:25:14 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport lookup error for "[email protected]" Sep 24 01:25:14 bookcdb postfix/trivial-rewrite[22594]: warning: transport_maps lookup failure Sep 24 01:25:14 bookcdb postfix/smtpd[22573]: NOQUEUE: reject: RCPT from unknown[::1]: 451 4.3.0 <[email protected]>: Temporary lookup failure; from=<root@localhost> to=<[email protected]> proto=ESMTP helo=<localhost> Sep 24 01:25:32 bookcdb postfix/qmgr[21039]: 2E5C36E0A07: from=<[email protected]>, size=438, nrcpt=1 (queue active) Sep 24 01:25:32 bookcdb postfix/qmgr[21039]: warning: connect to transport private/smtp-amavis: No such file or directory Sep 24 01:25:32 bookcdb postfix/qmgr[21039]: 0EA3A6E0ACC: from=<[email protected]>, size=438, nrcpt=1 (queue active) Sep 24 01:25:32 bookcdb postfix/error[22631]: 2E5C36E0A07: to=<[email protected]>, orig_to=<root>, relay=none, delay=30203, delays=30203/0.01/0/0.1, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 01:25:32 bookcdb postfix/error[22632]: 0EA3A6E0ACC: to=<[email protected]>, orig_to=<root>, relay=none, delay=30115, delays=30115/0.01/0/0.11, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 01:25:58 bookcdb postfix/smtpd[22573]: warning: non-SMTP command from unknown[::1]: subject: fdf Sep 24 01:25:58 bookcdb postfix/smtpd[22573]: disconnect from unknown[::1] Sep 24 01:26:01 bookcdb postfix/smtpd[22573]: warning: hostname localhost does not resolve to address ::1: No address associated with hostname Sep 24 01:26:01 bookcdb postfix/smtpd[22573]: connect from unknown[::1] Sep 24 01:26:10 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:26:10 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport lookup error for "root@locahost" Sep 24 01:26:10 bookcdb postfix/trivial-rewrite[22594]: warning: transport_maps lookup failure Sep 24 01:26:37 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:26:37 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport lookup error for "fmaster@localhost" Sep 24 01:26:37 bookcdb postfix/trivial-rewrite[22594]: warning: transport_maps lookup failure Sep 24 01:26:37 bookcdb postfix/smtpd[22573]: NOQUEUE: reject: RCPT from unknown[::1]: 451 4.3.0 <fmaster@localhost>: Temporary lookup failure; from=<root@locahost> to=<fmaster@localhost> proto=SMTP Sep 24 01:26:45 bookcdb postfix/smtpd[22573]: disconnect from unknown[::1] Sep 24 01:30:05 bookcdb postfix/anvil[22575]: statistics: max connection rate 1/60s for (smtp:::1) at Sep 24 01:24:16 Sep 24 01:30:05 bookcdb postfix/anvil[22575]: statistics: max connection count 1 for (smtp:::1) at Sep 24 01:24:16 Sep 24 01:30:05 bookcdb postfix/anvil[22575]: statistics: max cache size 1 at Sep 24 01:24:16 Sep 24 01:34:57 bookcdb dovecot: master: Dovecot v2.0.19 starting up (core dumps disabled) Sep 24 01:35:02 bookcdb amavis[1009]: starting. /usr/sbin/amavisd-new at mail.bookcdb.com amavisd-new-2.6.5 (20110407), Unicode aware Sep 24 01:35:02 bookcdb amavis[1009]: Perl version 5.014002 Sep 24 01:35:05 bookcdb amavis[1155]: Net::Server: Group Not Defined. Defaulting to EGID '114 114' Sep 24 01:35:05 bookcdb amavis[1155]: Net::Server: User Not Defined. 
Defaulting to EUID '108' Sep 24 01:35:05 bookcdb amavis[1155]: Module Amavis::Conf 2.208 Sep 24 01:35:05 bookcdb amavis[1155]: Module Archive::Zip 1.30 Sep 24 01:35:05 bookcdb amavis[1155]: Module BerkeleyDB 0.49 Sep 24 01:35:05 bookcdb amavis[1155]: Module Compress::Zlib 2.033 Sep 24 01:35:05 bookcdb amavis[1155]: Module Convert::TNEF 0.17 Sep 24 01:35:05 bookcdb amavis[1155]: Module Convert::UUlib 1.4 Sep 24 01:35:05 bookcdb amavis[1155]: Module Crypt::OpenSSL::RSA 0.27 Sep 24 01:35:05 bookcdb amavis[1155]: Module DB_File 1.821 Sep 24 01:35:05 bookcdb amavis[1155]: Module Digest::MD5 2.51 Sep 24 01:35:05 bookcdb amavis[1155]: Module Digest::SHA 5.61 Sep 24 01:35:05 bookcdb amavis[1155]: Module IO::Socket::INET6 2.69 Sep 24 01:35:05 bookcdb amavis[1155]: Module MIME::Entity 5.502 Sep 24 01:35:05 bookcdb amavis[1155]: Module MIME::Parser 5.502 Sep 24 01:35:05 bookcdb amavis[1155]: Module MIME::Tools 5.502 Sep 24 01:35:05 bookcdb amavis[1155]: Module Mail::DKIM::Signer 0.39 Sep 24 01:35:05 bookcdb amavis[1155]: Module Mail::DKIM::Verifier 0.39 Sep 24 01:35:05 bookcdb amavis[1155]: Module Mail::Header 2.08 Sep 24 01:35:05 bookcdb amavis[1155]: Module Mail::Internet 2.08 Sep 24 01:35:05 bookcdb amavis[1155]: Module Mail::SPF v2.008 Sep 24 01:35:05 bookcdb amavis[1155]: Module Mail::SpamAssassin 3.003002 Sep 24 01:35:05 bookcdb amavis[1155]: Module Net::DNS 0.66 Sep 24 01:35:05 bookcdb amavis[1155]: Module Net::Server 0.99 Sep 24 01:35:05 bookcdb amavis[1155]: Module NetAddr::IP 4.058 Sep 24 01:35:05 bookcdb amavis[1155]: Module Socket6 0.23 Sep 24 01:35:05 bookcdb amavis[1155]: Module Time::HiRes 1.972101 Sep 24 01:35:05 bookcdb amavis[1155]: Module URI 1.59 Sep 24 01:35:05 bookcdb amavis[1155]: Module Unix::Syslog 1.1 Sep 24 01:35:05 bookcdb amavis[1155]: Amavis::DB code loaded Sep 24 01:35:05 bookcdb amavis[1155]: Amavis::Cache code loaded Sep 24 01:35:05 bookcdb amavis[1155]: SQL base code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: SQL::Log code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: SQL::Quarantine NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: Lookup::SQL code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: Lookup::LDAP code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: AM.PDP-in proto code loaded Sep 24 01:35:05 bookcdb amavis[1155]: SMTP-in proto code loaded Sep 24 01:35:05 bookcdb amavis[1155]: Courier proto code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: SMTP-out proto code loaded Sep 24 01:35:05 bookcdb amavis[1155]: Pipe-out proto code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: BSMTP-out proto code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: Local-out proto code loaded Sep 24 01:35:05 bookcdb amavis[1155]: OS_Fingerprint code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: ANTI-VIRUS code loaded Sep 24 01:35:05 bookcdb amavis[1155]: ANTI-SPAM code loaded Sep 24 01:35:05 bookcdb amavis[1155]: ANTI-SPAM-EXT code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: ANTI-SPAM-C code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: ANTI-SPAM-SA code loaded Sep 24 01:35:05 bookcdb amavis[1155]: Unpackers code loaded Sep 24 01:35:05 bookcdb amavis[1155]: DKIM code loaded Sep 24 01:35:05 bookcdb amavis[1155]: Tools code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: Found $file at /usr/bin/file Sep 24 01:35:05 bookcdb amavis[1155]: No $altermime, not using it Sep 24 01:35:05 bookcdb amavis[1155]: Internal decoder for .mail Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .F Sep 24 01:35:05 bookcdb amavis[1155]: Found decoder for .Z at 
/bin/uncompress Sep 24 01:35:05 bookcdb amavis[1155]: Internal decoder for .gz Sep 24 01:35:05 bookcdb amavis[1155]: Found decoder for .bz2 at /bin/bzip2 -d Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .lzo tried: lzop -d Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .rpm tried: rpm2cpio.pl, rpm2cpio Sep 24 01:35:05 bookcdb amavis[1155]: Found decoder for .cpio at /bin/pax Sep 24 01:35:05 bookcdb amavis[1155]: Found decoder for .tar at /bin/pax Sep 24 01:35:05 bookcdb amavis[1155]: Found decoder for .deb at /usr/bin/ar Sep 24 01:35:05 bookcdb amavis[1155]: Internal decoder for .zip Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .7z tried: 7zr, 7za, 7z Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .rar tried: unrar-free Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .arj tried: arj, unarj Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .arc tried: nomarch, arc Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .zoo tried: zoo Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .lha Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .doc tried: ripole Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .cab tried: cabextract Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .tnef Sep 24 01:35:05 bookcdb amavis[1155]: Internal decoder for .tnef Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .exe tried: unrar-free; arj, unarj Sep 24 01:35:05 bookcdb amavis[1155]: Using primary internal av scanner code for ClamAV-clamd Sep 24 01:35:05 bookcdb amavis[1155]: Found secondary av scanner ClamAV-clamscan at /usr/bin/clamscan Sep 24 01:35:05 bookcdb amavis[1155]: Creating db in /var/lib/amavis/db/; BerkeleyDB 0.49, libdb 5.1 Sep 24 01:35:05 bookcdb postgrey[1219]: Process Backgrounded Sep 24 01:35:05 bookcdb postgrey[1219]: 2012/09/24-01:35:05 postgrey (type Net::Server::Multiplex) starting! pid(1219) Sep 24 01:35:05 bookcdb postgrey[1219]: Using default listen value of 128 Sep 24 01:35:05 bookcdb postgrey[1219]: Binding to TCP port 10023 on host localhost#012 Sep 24 01:35:05 bookcdb postgrey[1219]: Setting gid to "116 116" Sep 24 01:35:05 bookcdb postgrey[1219]: Setting uid to "110" Sep 24 01:35:06 bookcdb spamd[1231]: logger: removing stderr method Sep 24 01:35:08 bookcdb spamd[1233]: spamd: server started on port 783/tcp (running version 3.3.2) Sep 24 01:35:08 bookcdb spamd[1233]: spamd: server pid: 1233 Sep 24 01:35:08 bookcdb spamd[1233]: spamd: server successfully spawned child process, pid 1238 Sep 24 01:35:08 bookcdb spamd[1233]: spamd: server successfully spawned child process, pid 1240 Sep 24 01:35:08 bookcdb spamd[1233]: prefork: child states: SI Sep 24 01:35:08 bookcdb spamd[1233]: prefork: child states: II Sep 24 01:35:15 bookcdb postfix/master[1729]: daemon started -- version 2.9.3, configuration /etc/postfix Sep 24 01:36:08 bookcdb postfix/smtpd[1995]: error: open database /var/lib/mailman/data/aliases.db: No such file or directory Sep 24 01:36:08 bookcdb postfix/smtpd[1995]: warning: hostname localhost does not resolve to address ::1: No address associated with hostname Sep 24 01:36:08 bookcdb postfix/smtpd[1995]: connect from unknown[::1] Sep 24 01:36:51 bookcdb postfix/trivial-rewrite[1999]: error: open database /etc/postfix/transport.db: No such file or directory Sep 24 01:36:51 bookcdb postfix/trivial-rewrite[1999]: warning: hash:/etc/postfix/transport is unavailable. 
open database /etc/postfix/transport.db: No such file or directory Sep 24 01:36:51 bookcdb postfix/trivial-rewrite[1999]: warning: hash:/etc/postfix/transport lookup error for "*" Sep 24 01:36:51 bookcdb postfix/trivial-rewrite[1999]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:36:51 bookcdb postfix/trivial-rewrite[1999]: warning: hash:/etc/postfix/transport lookup error for "*" Sep 24 01:36:51 bookcdb postfix/trivial-rewrite[1999]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:36:51 bookcdb postfix/trivial-rewrite[1999]: warning: hash:/etc/postfix/transport lookup error for "root@localhost" Sep 24 01:36:51 bookcdb postfix/trivial-rewrite[1999]: warning: transport_maps lookup failure Sep 24 01:37:00 bookcdb postfix/trivial-rewrite[1999]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:37:00 bookcdb postfix/trivial-rewrite[1999]: warning: hash:/etc/postfix/transport lookup error for "fmaster@localhost" Sep 24 01:37:00 bookcdb postfix/trivial-rewrite[1999]: warning: transport_maps lookup failure Sep 24 01:37:00 bookcdb postfix/smtpd[1995]: NOQUEUE: reject: RCPT from unknown[::1]: 451 4.3.0 <fmaster@localhost>: Temporary lookup failure; from=<root@localhost> to=<fmaster@localhost> proto=SMTP Sep 24 01:37:28 bookcdb dovecot: imap-login: Login: user=<mkadiri89>, method=PLAIN, rip=::1, lip=::1, mpid=2730, secured Sep 24 01:37:28 bookcdb dovecot: imap(mkadiri89): Disconnected: Logged out bytes=44/697 Sep 24 01:37:29 bookcdb dovecot: imap-login: Login: user=<mkadiri89>, method=PLAIN, rip=::1, lip=::1, mpid=2732, secured Sep 24 01:37:29 bookcdb dovecot: imap(mkadiri89): Disconnected: Logged out bytes=464/1303 Sep 24 01:37:29 bookcdb dovecot: imap-login: Login: user=<mkadiri89>, method=PLAIN, rip=::1, lip=::1, mpid=2734, secured Sep 24 01:37:29 bookcdb dovecot: imap(mkadiri89): Disconnected: Logged out bytes=117/1395 Sep 24 01:37:31 bookcdb dovecot: imap-login: Login: user=<mkadiri89>, method=PLAIN, rip=::1, lip=::1, mpid=2737, secured Sep 24 01:37:31 bookcdb dovecot: imap(mkadiri89): Disconnected: Logged out bytes=117/1395 Sep 24 01:37:49 bookcdb dovecot: imap-login: Login: user=<root>, method=PLAIN, rip=::1, lip=::1, mpid=2739, secured Sep 24 01:37:49 bookcdb dovecot: imap: Error: user root: Invalid settings in userdb: userdb returned 0 as uid Sep 24 01:37:49 bookcdb dovecot: imap: Error: Invalid user settings. Refer to server log for more information. Sep 24 01:37:54 bookcdb dovecot: imap-login: Login: user=<root>, method=PLAIN, rip=::1, lip=::1, mpid=2741, secured Sep 24 01:37:54 bookcdb dovecot: imap: Error: user root: Invalid settings in userdb: userdb returned 0 as uid Sep 24 01:37:54 bookcdb dovecot: imap: Error: Invalid user settings. Refer to server log for more information. 
Sep 24 01:38:04 bookcdb dovecot: imap-login: Login: user=<info>, method=PLAIN, rip=::1, lip=::1, mpid=2743, secured Sep 24 01:38:04 bookcdb dovecot: imap(info): Disconnected: Logged out bytes=44/697 Sep 24 01:38:04 bookcdb dovecot: imap-login: Login: user=<info>, method=PLAIN, rip=::1, lip=::1, mpid=2745, secured Sep 24 01:38:04 bookcdb dovecot: imap(info): Disconnected: Logged out bytes=464/1303 Sep 24 01:38:04 bookcdb dovecot: imap-login: Login: user=<info>, method=PLAIN, rip=::1, lip=::1, mpid=2747, secured Sep 24 01:38:04 bookcdb dovecot: imap(info): Disconnected: Logged out bytes=117/1395 Sep 24 01:38:55 bookcdb postfix/smtpd[1995]: disconnect from unknown[::1] Sep 24 01:38:58 bookcdb postfix/smtpd[1995]: warning: hostname localhost does not resolve to address ::1: No address associated with hostname Sep 24 01:38:58 bookcdb postfix/smtpd[1995]: connect from unknown[::1] Sep 24 01:39:11 bookcdb postfix/trivial-rewrite[1999]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:39:11 bookcdb postfix/trivial-rewrite[1999]: warning: hash:/etc/postfix/transport lookup error for "info@localhost" Sep 24 01:39:11 bookcdb postfix/trivial-rewrite[1999]: warning: transport_maps lookup failure Sep 24 01:39:37 bookcdb postfix/smtpd[1995]: NOQUEUE: reject: RCPT from unknown[::1]: 451 4.3.0 <fmaster@localhost>: Temporary lookup failure; from=<info@localhost> to=<fmaster@localhost> proto=SMTP Sep 24 01:39:47 bookcdb postfix/smtpd[1995]: NOQUEUE: reject: RCPT from unknown[::1]: 451 4.3.0 <fmaster@localhost>: Temporary lookup failure; from=<info@localhost> to=<fmaster@localhost> proto=SMTP Sep 24 01:40:13 bookcdb postfix/smtpd[1995]: NOQUEUE: reject: RCPT from unknown[::1]: 451 4.3.0 <info@localhost>: Temporary lookup failure; from=<info@localhost> to=<info@localhost> proto=SMTP Sep 24 01:43:08 bookcdb postfix/smtpd[1995]: disconnect from unknown[::1] Sep 24 01:46:08 bookcdb postfix/anvil[1998]: statistics: max connection rate 1/60s for (smtp:::1) at Sep 24 01:36:08 Sep 24 01:46:08 bookcdb postfix/anvil[1998]: statistics: max connection count 1 for (smtp:::1) at Sep 24 01:36:08 Sep 24 01:46:08 bookcdb postfix/anvil[1998]: statistics: max cache size 1 at Sep 24 01:36:08 Sep 24 01:48:05 bookcdb dovecot: imap-login: Login: user=<info>, method=PLAIN, rip=::1, lip=::1, mpid=2805, secured Sep 24 01:48:05 bookcdb dovecot: imap(info): Disconnected: Logged out bytes=85/681 Sep 24 01:51:30 bookcdb dovecot: imap-login: Login: user=<info>, method=PLAIN, rip=::1, lip=::1, mpid=2815, secured Sep 24 01:51:30 bookcdb dovecot: imap(info): Disconnected: Logged out bytes=117/1395 Sep 24 02:05:15 bookcdb postfix/qmgr[1745]: 2EA006E0B32: from=<[email protected]>, size=439, nrcpt=1 (queue active) Sep 24 02:05:15 bookcdb postfix/qmgr[1745]: warning: connect to transport private/smtp-amavis: No such file or directory Sep 24 02:05:15 bookcdb postfix/qmgr[1745]: E76996E09B2: from=<[email protected]>, size=439, nrcpt=1 (queue active) Sep 24 02:05:15 bookcdb postfix/error[2842]: 2EA006E0B32: to=<[email protected]>, relay=none, delay=8391, delays=8391/0.05/0/0.09, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 02:05:16 bookcdb postfix/error[2843]: E76996E09B2: to=<[email protected]>, relay=none, delay=8416, delays=8416/0.03/0/0.11, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 02:30:15 bookcdb postfix/qmgr[1745]: D0C596E0B34: from=<[email protected]>, size=442, nrcpt=1 (queue active) Sep 24 02:30:15 bookcdb 
postfix/qmgr[1745]: warning: connect to transport private/smtp-amavis: No such file or directory Sep 24 02:30:15 bookcdb postfix/error[2914]: D0C596E0B34: to=<[email protected]>, relay=none, delay=9551, delays=9551/0.01/0/0.08, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 02:35:15 bookcdb postfix/qmgr[1745]: 2E5C36E0A07: from=<[email protected]>, size=438, nrcpt=1 (queue active) Sep 24 02:35:15 bookcdb postfix/qmgr[1745]: warning: connect to transport private/smtp-amavis: No such file or directory Sep 24 02:35:15 bookcdb postfix/qmgr[1745]: 0EA3A6E0ACC: from=<[email protected]>, size=438, nrcpt=1 (queue active) Sep 24 02:35:15 bookcdb postfix/error[2926]: 2E5C36E0A07: to=<[email protected]>, orig_to=<root>, relay=none, delay=34386, delays=34386/0.03/0/0.1, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 02:35:15 bookcdb postfix/error[2927]: 0EA3A6E0ACC: to=<[email protected]>, orig_to=<root>, relay=none, delay=34299, delays=34298/0.02/0/0.12, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 03:15:15 bookcdb postfix/qmgr[1745]: 2EA006E0B32: from=<[email protected]>, size=439, nrcpt=1 (queue active) Sep 24 03:15:15 bookcdb postfix/qmgr[1745]: warning: connect to transport private/smtp-amavis: No such file or directory Sep 24 03:15:15 bookcdb postfix/qmgr[1745]: E76996E09B2: from=<[email protected]>, size=439, nrcpt=1 (queue active) Sep 24 03:15:15 bookcdb postfix/error[3025]: 2EA006E0B32: to=<[email protected]>, relay=none, delay=12590, delays=12590/0.01/0/0.07, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 03:15:15 bookcdb postfix/error[3026]: E76996E09B2: to=<[email protected]>, relay=none, delay=12616, delays=12616/0.01/0/0.09, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 03:40:15 bookcdb postfix/qmgr[1745]: D0C596E0B34: from=<[email protected]>, size=442, nrcpt=1 (queue active) Sep 24 03:40:15 bookcdb postfix/qmgr[1745]: warning: connect to transport private/smtp-amavis: No such file or directory Sep 24 03:40:15 bookcdb postfix/error[3097]: D0C596E0B34: to=<[email protected]>, relay=none, delay=13752, delays=13752/0.01/0/0.07, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 03:45:15 bookcdb postfix/qmgr[1745]: 2E5C36E0A07: from=<[email protected]>, size=438, nrcpt=1 (queue active) Sep 24 03:45:15 bookcdb postfix/qmgr[1745]: warning: connect to transport private/smtp-amavis: No such file or directory Sep 24 03:45:15 bookcdb postfix/qmgr[1745]: 0EA3A6E0ACC: from=<[email protected]>, size=438, nrcpt=1 (queue active) Sep 24 03:45:15 bookcdb postfix/error[3129]: 2E5C36E0A07: to=<[email protected]>, orig_to=<root>, relay=none, delay=38586, delays=38586/0.01/0/0.09, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 03:45:15 bookcdb postfix/error[3130]: 0EA3A6E0ACC: to=<[email protected]>, orig_to=<root>, relay=none, delay=38498, delays=38498/0.01/0/0.08, dsn=4.3.0, status=deferred (mail transport unavailable) postconf -n alias_database = hash:/etc/aliases alias_maps = hash:/etc/aliases, hash:/var/lib/mailman/data/aliases append_dot_mydomain = no biff = no broken_sasl_auth_clients = yes config_directory = /etc/postfix content_filter = smtp-amavis:[127.0.0.1]:10024 home_mailbox = Maildir/ inet_interfaces = all inet_protocols = all mailbox_command = mailbox_size_limit = 0 mailman_destination_recipient_limit = 1 mydestination = server1.bookcdb.com, bookcdb.com, localhost.bookcdb.com, localho st, bookcdb.com myhostname = server1.bookcdb.com mynetworks = 127.0.0.0/8 myorigin = /etc/mailname 
readme_directory = no
recipient_delimiter = +
relay_domains = lists.bookcdb.com
relayhost =
smtp_tls_note_starttls_offer = yes
smtp_tls_security_level = may
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
smtpd_recipient_restrictions = permit_sasl_authenticated, permit_mynetworks, reject_unauth_destination
smtpd_sasl_auth_enable = yes
smtpd_sasl_local_domain =
smtpd_sasl_security_options = noanonymous
smtpd_tls_CAfile = /etc/ssl/certs/cacert.pem
smtpd_tls_auth_only = no
smtpd_tls_cert_file = /etc/ssl/certs/smtpd.crt
smtpd_tls_key_file = /etc/ssl/private/smtpd.key
smtpd_tls_loglevel = 1
smtpd_tls_received_header = yes
smtpd_tls_security_level = may
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
smtpd_tls_session_cache_timeout = 3600s
smtpd_use_tls = yes
tls_random_source = dev:/dev/urandom
transport_maps = hash:/etc/postfix/transport
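The two recurring warnings in the log point at concrete gaps rather than Dovecot problems: transport_maps references hash:/etc/postfix/transport but no transport.db has been built, and the deferred mail shows that master.cf has no smtp-amavis service even though content_filter points at 127.0.0.1:10024. A minimal sketch of the usual fixes, assuming the standard Ubuntu/amavisd-new layout implied by the log (verify against your own /etc/postfix/master.cf before applying):

    # build the missing transport map (creates /etc/postfix/transport.db)
    sudo postmap hash:/etc/postfix/transport

    # /etc/postfix/master.cf -- add the amavis transport that content_filter expects
    # (trimmed; see the amavisd-new README.postfix for the full set of -o overrides)
    smtp-amavis     unix    -       -       -       -       2       smtp
        -o smtp_data_done_timeout=1200
        -o smtp_send_xforward_command=yes

    127.0.0.1:10025 inet    n       -       -       -       -       smtpd
        -o content_filter=
        -o mynetworks=127.0.0.0/8

    # reload Postfix and flush the deferred queue
    sudo postfix reload
    sudo postqueue -f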

    Read the article

  • Using the West Wind Web Toolkit to set up AJAX and REST Services

    - by Rick Strahl
    I frequently get questions about which option to use for creating AJAX and REST backends for ASP.NET applications. There are many solutions out there to do this actually, but when I have a choice - not surprisingly - I fall back to my own tools in the West Wind West Wind Web Toolkit. I've talked a bunch about the 'in-the-box' solutions in the past so for a change in this post I'll talk about the tools that I use in my own and customer applications to handle AJAX and REST based access to service resources using the West Wind West Wind Web Toolkit. Let me preface this by saying that I like things to be easy. Yes flexible is very important as well but not at the expense of over-complexity. The goal I've had with my tools is make it drop dead easy, with good performance while providing the core features that I'm after, which are: Easy AJAX/JSON Callbacks Ability to return any kind of non JSON content (string, stream, byte[], images) Ability to work with both XML and JSON interchangeably for input/output Access endpoints via POST data, RPC JSON calls, GET QueryString values or Routing interface Easy to use generic JavaScript client to make RPC calls (same syntax, just what you need) Ability to create clean URLS with Routing Ability to use standard ASP.NET HTTP Stack for HTTP semantics It's all about options! In this post I'll demonstrate most of these features (except XML) in a few simple and short samples which you can download. So let's take a look and see how you can build an AJAX callback solution with the West Wind Web Toolkit. Installing the Toolkit Assemblies The easiest and leanest way of using the Toolkit in your Web project is to grab it via NuGet: West Wind Web and AJAX Utilities (Westwind.Web) and drop it into the project by right clicking in your Project and choosing Manage NuGet Packages from anywhere in the Project.   When done you end up with your project looking like this: What just happened? Nuget added two assemblies - Westwind.Web and Westwind.Utilities and the client ww.jquery.js library. It also added a couple of references into web.config: The default namespaces so they can be accessed in pages/views and a ScriptCompressionModule that the toolkit optionally uses to compress script resources served from within the assembly (namely ww.jquery.js and optionally jquery.js). Creating a new Service The West Wind Web Toolkit supports several ways of creating and accessing AJAX services, but for this post I'll stick to the lower level approach that works from any plain HTML page or of course MVC, WebForms, WebPages. There's also a WebForms specific control that makes this even easier but I'll leave that for another post. So, to create a new standalone AJAX/REST service we can create a new HttpHandler in the new project either as a pure class based handler or as a generic .ASHX handler. Both work equally well, but generic handlers don't require any web.config configuration so I'll use that here. In the root of the project add a Generic Handler. I'm going to call this one StockService.ashx. Once the handler has been created, edit the code and remove all of the handler body code. Then change the base class to CallbackHandler and add methods that have a [CallbackMethod] attribute. 
Here's the modified base handler implementation now looks like with an added HelloWorld method: using System; using Westwind.Web; namespace WestWindWebAjax { /// <summary> /// Handler implements CallbackHandler to provide REST/AJAX services /// </summary> public class SampleService : CallbackHandler { [CallbackMethod] public string HelloWorld(string name) { return "Hello " + name + ". Time is: " + DateTime.Now.ToString(); } } } Notice that the class inherits from CallbackHandler and that the HelloWorld service method is marked up with [CallbackMethod]. We're done here. Services Urlbased Syntax Once you compile, the 'service' is live can respond to requests. All CallbackHandlers support input in GET and POST formats, and can return results as JSON or XML. To check our fancy HelloWorld method we can now access the service like this: http://localhost/WestWindWebAjax/StockService.ashx?Method=HelloWorld&name=Rick which produces a default JSON response - in this case a string (wrapped in quotes as it's JSON): (note by default JSON will be downloaded by most browsers not displayed - various options are available to view JSON right in the browser) If I want to return the same data as XML I can tack on a &format=xml at the end of the querystring which produces: <string>Hello Rick. Time is: 11/1/2011 12:11:13 PM</string> Cleaner URLs with Routing Syntax If you want cleaner URLs for each operation you can also configure custom routes on a per URL basis similar to the way that WCF REST does. To do this you need to add a new RouteHandler to your application's startup code in global.asax.cs one for each CallbackHandler based service you create: protected void Application_Start(object sender, EventArgs e) { CallbackHandlerRouteHandler.RegisterRoutes<StockService>(RouteTable.Routes); } With this code in place you can now add RouteUrl properties to any of your service methods. For the HelloWorld method that doesn't make a ton of sense but here is what a routed clean URL might look like in definition: [CallbackMethod(RouteUrl="stocks/HelloWorld/{name}")] public string HelloWorld(string name) { return "Hello " + name + ". Time is: " + DateTime.Now.ToString(); } The same URL I previously used now becomes a bit shorter and more readable with: http://localhost/WestWindWebAjax/HelloWorld/Rick It's an easy way to create cleaner URLs and still get the same functionality. Calling the Service with $.getJSON() Since the result produced is JSON you can now easily consume this data using jQuery's getJSON method. First we need a couple of scripts - jquery.js and ww.jquery.js in the page: <!DOCTYPE html> <html> <head> <link href="Css/Westwind.css" rel="stylesheet" type="text/css" /> <script src="scripts/jquery.min.js" type="text/javascript"></script> <script src="scripts/ww.jquery.min.js" type="text/javascript"></script> </head> <body> Next let's add a small HelloWorld example form (what else) that has a single textbox to type a name, a button and a div tag to receive the result: <fieldset> <legend>Hello World</legend> Please enter a name: <input type="text" name="txtHello" id="txtHello" value="" /> <input type="button" id="btnSayHello" value="Say Hello (POST)" /> <input type="button" id="btnSayHelloGet" value="Say Hello (GET)" /> <div id="divHelloMessage" class="errordisplay" style="display:none;width: 450px;" > </div> </fieldset> Then to call the HelloWorld method a little jQuery is used to hook the document startup and the button click followed by the $.getJSON call to retrieve the data from the server. 
<script type="text/javascript"> $(document).ready(function () { $("#btnSayHelloGet").click(function () { $.getJSON("SampleService.ashx", { Method: "HelloWorld", name: $("#txtHello").val() }, function (result) { $("#divHelloMessage") .text(result) .fadeIn(1000); }); });</script> .getJSON() expects a full URL to the endpoint of our service, which is the ASHX file. We can either provide a full URL (SampleService.ashx?Method=HelloWorld&name=Rick) or we can just provide the base URL and an object that encodes the query string parameters for us using an object map that has a property that matches each parameter for the server method. We can also use the clean URL routing syntax, but using the object parameter encoding actually is safer as the parameters will get properly encoded by jQuery. The result returned is whatever the result on the server method is - in this case a string. The string is applied to the divHelloMessage element and we're done. Obviously this is a trivial example, but it demonstrates the basics of getting a JSON response back to the browser. AJAX Post Syntax - using ajaxCallMethod() The previous example allows you basic control over the data that you send to the server via querystring parameters. This works OK for simple values like short strings, numbers and boolean values, but doesn't really work if you need to pass something more complex like an object or an array back up to the server. To handle traditional RPC type messaging where the idea is to map server side functions and results to a client side invokation, POST operations can be used. The easiest way to use this functionality is to use ww.jquery.js and the ajaxCallMethod() function. ww.jquery wraps jQuery's AJAX functions and knows implicitly how to call a CallbackServer method with parameters and parse the result. Let's look at another simple example that posts a simple value but returns something more interesting. Let's start with the service method: [CallbackMethod(RouteUrl="stocks/{symbol}")] public StockQuote GetStockQuote(string symbol) { Response.Cache.SetExpires(DateTime.UtcNow.Add(new TimeSpan(0, 2, 0))); StockServer server = new StockServer(); var quote = server.GetStockQuote(symbol); if (quote == null) throw new ApplicationException("Invalid Symbol passed."); return quote; } This sample utilizes a small StockServer helper class (included in the sample) that downloads a stock quote from Yahoo's financial site via plain HTTP GET requests and formats it into a StockQuote object. 
Lets create a small HTML block that lets us query for the quote and display it: <fieldset> <legend>Single Stock Quote</legend> Please enter a stock symbol: <input type="text" name="txtSymbol" id="txtSymbol" value="msft" /> <input type="button" id="btnStockQuote" value="Get Quote" /> <div id="divStockDisplay" class="errordisplay" style="display:none; width: 450px;"> <div class="label-left">Company:</div> <div id="stockCompany"></div> <div class="label-left">Last Price:</div> <div id="stockLastPrice"></div> <div class="label-left">Quote Time:</div> <div id="stockQuoteTime"></div> </div> </fieldset> The final result looks something like this:   Let's hook up the button handler to fire the request and fill in the data as shown: $("#btnStockQuote").click(function () { ajaxCallMethod("SampleService.ashx", "GetStockQuote", [$("#txtSymbol").val()], function (quote) { $("#divStockDisplay").show().fadeIn(1000); $("#stockCompany").text(quote.Company + " (" + quote.Symbol + ")"); $("#stockLastPrice").text(quote.LastPrice); $("#stockQuoteTime").text(quote.LastQuoteTime.formatDate("MMM dd, HH:mm EST")); }, onPageError); }); So we point at SampleService.ashx and the GetStockQuote method, passing a single parameter of the input symbol value. Then there are two handlers for success and failure callbacks.  The success handler is the interesting part - it receives the stock quote as a result and assigns its values to various 'holes' in the stock display elements. The data that comes back over the wire is JSON and it looks like this: { "Symbol":"MSFT", "Company":"Microsoft Corpora", "OpenPrice":26.11, "LastPrice":26.01, "NetChange":0.02, "LastQuoteTime":"2011-11-03T02:00:00Z", "LastQuoteTimeString":"Nov. 11, 2011 4:20pm" } which is an object representation of the data. JavaScript can evaluate this JSON string back into an object easily and that's the reslut that gets passed to the success function. The quote data is then applied to existing page content by manually selecting items and applying them. There are other ways to do this more elegantly like using templates, but here we're only interested in seeing how the data is returned. The data in the object is typed - LastPrice is a number and QuoteTime is a date. Note about the date value: JavaScript doesn't have a date literal although the JSON embedded ISO string format used above  ("2011-11-03T02:00:00Z") is becoming fairly standard for JSON serializers. However, JSON parsers don't deserialize dates by default and return them by string. This is why the StockQuote actually returns a string value of LastQuoteTimeString for the same date. ajaxMethodCallback always converts dates properly into 'real' dates and the example above uses the real date value along with a .formatDate() data extension (also in ww.jquery.js) to display the raw date properly. Errors and Exceptions So what happens if your code fails? For example if I pass an invalid stock symbol to the GetStockQuote() method you notice that the code does this: if (quote == null) throw new ApplicationException("Invalid Symbol passed."); CallbackHandler automatically pushes the exception message back to the client so it's easy to pick up the error message. Regardless of what kind of error occurs: Server side, client side, protocol errors - any error will fire the failure handler with an error object parameter. The error is returned to the client via a JSON response in the error callback. 
In the previous examples I called onPageError which is a generic routine in ww.jquery that displays a status message on the bottom of the screen. But of course you can also take over the error handling yourself: $("#btnStockQuote").click(function () { ajaxCallMethod("SampleService.ashx", "GetStockQuote", [$("#txtSymbol").val()], function (quote) { $("#divStockDisplay").fadeIn(1000); $("#stockCompany").text(quote.Company + " (" + quote.Symbol + ")"); $("#stockLastPrice").text(quote.LastPrice); $("#stockQuoteTime").text(quote.LastQuoteTime.formatDate("MMM dd, hh:mmt")); }, function (error, xhr) { $("#divErrorDisplay").text(error.message).fadeIn(1000); }); }); The error object has a isCallbackError, message and  stackTrace properties, the latter of which is only populated when running in Debug mode, and this object is returned for all errors: Client side, transport and server side errors. Regardless of which type of error you get the same object passed (as well as the XHR instance optionally) which makes for a consistent error retrieval mechanism. Specifying HttpVerbs You can also specify HTTP Verbs that are allowed using the AllowedHttpVerbs option on the CallbackMethod attribute: [CallbackMethod(AllowedHttpVerbs=HttpVerbs.GET | HttpVerbs.POST)] public string HelloWorld(string name) { … } If you're building REST style API's this might be useful to force certain request semantics onto the client calling. For the above if call with a non-allowed HttpVerb the request returns a 405 error response along with a JSON (or XML) error object result. The default behavior is to allow all verbs access (HttpVerbs.All). Passing in object Parameters Up to now the parameters I passed were very simple. But what if you need to send something more complex like an object or an array? Let's look at another example now that passes an object from the client to the server. Keeping with the Stock theme here lets add a method called BuyOrder that lets us buy some shares for a stock. Consider the following service method that receives an StockBuyOrder object as a parameter: [CallbackMethod] public string BuyStock(StockBuyOrder buyOrder) { var server = new StockServer(); var quote = server.GetStockQuote(buyOrder.Symbol); if (quote == null) throw new ApplicationException("Invalid or missing stock symbol."); return string.Format("You're buying {0} shares of {1} ({2}) stock at {3} for a total of {4} on {5}.", buyOrder.Quantity, quote.Company, quote.Symbol, quote.LastPrice.ToString("c"), (quote.LastPrice * buyOrder.Quantity).ToString("c"), buyOrder.BuyOn.ToString("MMM d")); } public class StockBuyOrder { public string Symbol { get; set; } public int Quantity { get; set; } public DateTime BuyOn { get; set; } public StockBuyOrder() { BuyOn = DateTime.Now; } } This is a contrived do-nothing example that simply echoes back what was passed in, but it demonstrates how you can pass complex data to a callback method. 
On the client side we now have a very simple form that captures the three values on a form: <fieldset> <legend>Post a Stock Buy Order</legend> Enter a symbol: <input type="text" name="txtBuySymbol" id="txtBuySymbol" value="GLD" />&nbsp;&nbsp; Qty: <input type="text" name="txtBuyQty" id="txtBuyQty" value="10" style="width: 50px" />&nbsp;&nbsp; Buy on: <input type="text" name="txtBuyOn" id="txtBuyOn" value="<%= DateTime.Now.ToString("d") %>" style="width: 70px;" /> <input type="button" id="btnBuyStock" value="Buy Stock" /> <div id="divStockBuyMessage" class="errordisplay" style="display:none"></div> </fieldset> The completed form and demo then looks something like this:   The client side code that picks up the input values and assigns them to object properties and sends the AJAX request looks like this: $("#btnBuyStock").click(function () { // create an object map that matches StockBuyOrder signature var buyOrder = { Symbol: $("#txtBuySymbol").val(), Quantity: $("#txtBuyQty").val() * 1, // number Entered: new Date() } ajaxCallMethod("SampleService.ashx", "BuyStock", [buyOrder], function (result) { $("#divStockBuyMessage").text(result).fadeIn(1000); }, onPageError); }); The code creates an object and attaches the properties that match the server side object passed to the BuyStock method. Each property that you want to update needs to be included and the type must match (ie. string, number, date in this case). Any missing properties will not be set but also not cause any errors. Pass POST data instead of Objects In the last example I collected a bunch of values from form variables and stuffed them into object variables in JavaScript code. While that works, often times this isn't really helping - I end up converting my types on the client and then doing another conversion on the server. If lots of input controls are on a page and you just want to pick up the values on the server via plain POST variables - that can be done too - and it makes sense especially if you're creating and filling the client side object only to push data to the server. Let's add another method to the server that once again lets us buy a stock. But this time let's not accept a parameter but rather send POST data to the server. Here's the server method receiving POST data: [CallbackMethod] public string BuyStockPost() { StockBuyOrder buyOrder = new StockBuyOrder(); buyOrder.Symbol = Request.Form["txtBuySymbol"]; ; int qty; int.TryParse(Request.Form["txtBuyQuantity"], out qty); buyOrder.Quantity = qty; DateTime time; DateTime.TryParse(Request.Form["txtBuyBuyOn"], out time); buyOrder.BuyOn = time; // Or easier way yet //FormVariableBinder.Unbind(buyOrder,null,"txtBuy"); var server = new StockServer(); var quote = server.GetStockQuote(buyOrder.Symbol); if (quote == null) throw new ApplicationException("Invalid or missing stock symbol."); return string.Format("You're buying {0} shares of {1} ({2}) stock at {3} for a total of {4} on {5}.", buyOrder.Quantity, quote.Company, quote.Symbol, quote.LastPrice.ToString("c"), (quote.LastPrice * buyOrder.Quantity).ToString("c"), buyOrder.BuyOn.ToString("MMM d")); } Clearly we've made this server method take more code than it did with the object parameter. We've basically moved the parameter assignment logic from the client to the server. As a result the client code to call this method is now a bit shorter since there's no client side shuffling of values from the controls to an object. 
$("#btnBuyStockPost").click(function () { ajaxCallMethod("SampleService.ashx", "BuyStockPost", [], // Note: No parameters - function (result) { $("#divStockBuyMessage").text(result).fadeIn(1000); }, onPageError, // Force all page Form Variables to be posted { postbackMode: "Post" }); }); The client simply calls the BuyStockQuote method and pushes all the form variables from the page up to the server which parses them instead. The feature that makes this work is one of the options you can pass to the ajaxCallMethod() function: { postbackMode: "Post" }); which directs the function to include form variable POST data when making the service call. Other options include PostNoViewState (for WebForms to strip out WebForms crap vars), PostParametersOnly (default), None. If you pass parameters those are always posted to the server except when None is set. The above code can be simplified a bit by using the FormVariableBinder helper, which can unbind form variables directly into an object: FormVariableBinder.Unbind(buyOrder,null,"txtBuy"); which replaces the manual Request.Form[] reading code. It receives the object to unbind into, a string of properties to skip, and an optional prefix which is stripped off form variables to match property names. The component is similar to the MVC model binder but it's independent of MVC. Returning non-JSON Data CallbackHandler also supports returning non-JSON/XML data via special return types. You can return raw non-JSON encoded strings like this: [CallbackMethod(ReturnAsRawString=true,ContentType="text/plain")] public string HelloWorldNoJSON(string name) { return "Hello " + name + ". Time is: " + DateTime.Now.ToString(); } Calling this method results in just a plain string - no JSON encoding with quotes around the result. This can be useful if your server handling code needs to return a string or HTML result that doesn't fit well for a page or other UI component. Any string output can be returned. You can also return binary data. Stream, byte[] and Bitmap/Image results are automatically streamed back to the client. Notice that you should set the ContentType of the request either on the CallbackMethod attribute or using Response.ContentType. This ensures the Web Server knows how to display your binary response. Using a stream response makes it possible to return any of data. Streamed data can be pretty handy to return bitmap data from a method. The following is a method that returns a stock history graph for a particular stock over a provided number of years: [CallbackMethod(ContentType="image/png",RouteUrl="stocks/history/graph/{symbol}/{years}")] public Stream GetStockHistoryGraph(string symbol, int years = 2,int width = 500, int height=350) { if (width == 0) width = 500; if (height == 0) height = 350; StockServer server = new StockServer(); return server.GetStockHistoryGraph(symbol,"Stock History for " + symbol,width,height,years); } I can now hook this up into the JavaScript code when I get a stock quote. At the end of the process I can assign the URL to the service that returns the image into the src property and so force the image to display. 
Here's the changed code: $("#btnStockQuote").click(function () { var symbol = $("#txtSymbol").val(); ajaxCallMethod("SampleService.ashx", "GetStockQuote", [symbol], function (quote) { $("#divStockDisplay").fadeIn(1000); $("#stockCompany").text(quote.Company + " (" + quote.Symbol + ")"); $("#stockLastPrice").text(quote.LastPrice); $("#stockQuoteTime").text(quote.LastQuoteTime.formatDate("MMM dd, hh:mmt")); // display a stock chart $("#imgStockHistory").attr("src", "stocks/history/graph/" + symbol + "/2"); },onPageError); }); The resulting output then looks like this: The charting code uses the new ASP.NET 4.0 Chart components via code to display a bar chart of the 2 year stock data as part of the StockServer class which you can find in the sample download. The ability to return arbitrary data from a service is useful as you can see - in this case the chart is clearly associated with the service and it's nice that the graph generation can happen off a handler rather than through a page. Images are common resources, but output can also be PDF reports, zip files for downloads etc. which is becoming increasingly more common to be returned from REST endpoints and other applications. Why reinvent? Obviously the examples I've shown here are pretty basic in terms of functionality. But I hope they demonstrate the core features of AJAX callbacks that you need to work through in most applications which is simple: return data, send back data and potentially retrieve data in various formats. While there are other solutions when it comes down to making AJAX callbacks and servicing REST like requests, I like the flexibility my home grown solution provides. Simply put it's still the easiest solution that I've found that addresses my common use cases: AJAX JSON RPC style callbacks Url based access XML and JSON Output from single method endpoint XML and JSON POST support, querystring input, routing parameter mapping UrlEncoded POST data support on callbacks Ability to return stream/raw string data Essentially ability to return ANYTHING from Service and pass anything All these features are available in various solutions but not together in one place. I've been using this code base for over 4 years now in a number of projects both for myself and commercial work and it's served me extremely well. Besides the AJAX functionality CallbackHandler provides, it's also an easy way to create any kind of output endpoint I need to create. Need to create a few simple routines that spit back some data, but don't want to create a Page or View or full blown handler for it? Create a CallbackHandler and add a method or multiple methods and you have your generic endpoints.  It's a quick and easy way to add small code pieces that are pretty efficient as they're running through a pretty small handler implementation. I can have this up and running in a couple of minutes literally without any setup and returning just about any kind of data. Resources Download the Sample NuGet: Westwind Web and AJAX Utilities (Westwind.Web) ajaxCallMethod() Documentation Using the AjaxMethodCallback WebForms Control West Wind Web Toolkit Home Page West Wind Web Toolkit Source Code © Rick Strahl, West Wind Technologies, 2005-2011Posted in ASP.NET  jQuery  AJAX   Tweet (function() { var po = document.createElement('script'); po.type = 'text/javascript'; po.async = true; po.src = 'https://apis.google.com/js/plusone.js'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(po, s); })();

    Read the article

  • SQL Server 2012 - AlwaysOn

    - by Claus Jandausch
I was not just irritated, I was downright shocked - and for a brief moment speechless (which rarely happens). Someone had just asked me "When will Oracle offer something comparable to AlwaysOn - if ever?" Had I landed in the wrong movie? I could not help voicing my displeasure and explaining that the question normally runs the other way around. Granted, there may be debatable points in a comparison between Oracle and SQL Server, points where Oracle does not necessarily come out ahead, but clustering for high availability (HA), disaster recovery (DR) and scalability is certainly not one of them. Afterwards I filed that experience away as a one-off that would never happen again - until, shortly thereafter, I was taught otherwise and heard exactly the same question once more, this time even in an Exadata environment with an Oracle stretch cluster. Once is never, but twice is once too often... True to that old motto, it was clear to me that this could not be left standing. I have no idea how Microsoft's marketing department managed to make people suspect an innovative technology behind the AlwaysOn branding, but it apparently did its job well. Good marketing aside, the obvious question is what is really behind it and how the whole thing compares to Oracle - if at all. Which brings us back to the original question. So much for the background of this blog post; the rest of it is my answer.
"Windows was the God ..." To understand the real difference between Oracle and Microsoft, you first have to know the most important Microsoft dogma. It can be put very simply: "Everything must be based on Windows." The heading of this paragraph is not a phrase I invented, but a quote. It comes from a longer article by Kurt Eichenwald in Vanity Fair from August 2012, titled Microsoft's Lost Decade, and I recommend it to anyone who wants to better understand the "Microsoft machinery" under Steve Ballmer and some of its curiosities. ("YOU TALKING TO ME?" - Microsoft C.E.O. Steve Ballmer during his keynote at the 2012 International Consumer Electronics Show in Las Vegas on January 9.) Some things in that article may appear exaggerated - they are not. Much of it I already knew from my own experience and can only confirm; other parts only really fell into place for me there. The following passages in particular were the aha moment: "Windows was the god—everything had to work with Windows," said Stone... "Every little thing you want to write has to build off of Windows (or other existing products)," one software engineer said. "It can be very confusing, …" I have always pointed out that in a SQL Server failover cluster the Microsoft database itself contributes nothing noteworthy to what happens; it relies entirely on the Windows operating system. That is also why you have to install the Windows Server Enterprise Edition if you want to set up a failover cluster for SQL Server: the cluster services are delivered there, not with SQL Server.
SQL Server is merely one more server product for which Windows can be used in failure scenarios - just like Microsoft Exchange, or Microsoft SharePoint, or any other server product hosted on Windows. Oracle can be run that way too; the keyword here is Oracle Fail Safe. But why would you, when a superior technology such as Oracle Real Application Clusters (RAC) is available at the same time, one that does not require a Windows Enterprise Edition because Oracle ships its own clusterware, and that delivers shorter failover times because the cluster technology is integrated into the database and does not rely on a "third party"? So if a SQL Server failover cluster buys you no technical advantage, and on top of that you take on hidden license costs for the Windows Server Enterprise Edition, why has Microsoft not likewise worked on a new, innovative solution since SQL Server 2000 that can keep up with Oracle RAC? Microsoft has enough developers, and money can hardly be the problem. Just read the two quotes above again and you will understand the reason. There is really no other way to explain why AlwaysOn consists of two different technologies, both of which in turn are based on Windows Server Failover Clustering (WSFC). Clear disadvantages follow from that - more on this later.
To understand AlwaysOn, it helps to briefly recall which HA/DR (high availability/disaster recovery) solutions Microsoft has offered for SQL Server so far.
Replication - based on logical replication and a publisher/subscriber architecture: transactional replication, merge replication, snapshot replication. Microsoft's replication is comparable to Oracle GoldenGate, although GoldenGate is the more comprehensive technology and offers high performance.
Log Shipping - a simple technology, comparable to Oracle managed recovery in Oracle version 7. Its characteristics: transaction log backups are shipped from the primary to the secondary (or secondaries); each secondary applies (restores) them individually; an optional third server instance (monitor server) handles monitoring and alerting; the log restore can be interrupted for a read-only mode on the secondary; automatic failover is not supported.
Database Mirroring - became available with SQL Server 2005; it looked like Oracle Data Guard in Oracle9i but was functionally less comprehensive. An HA/DR pair is a 1:1 relationship protecting the production database (principal DB); all insert, update and delete operations are replayed on the standby database (mirrored DB). Modes: synchronous (high-safety mode) and asynchronous (high-performance mode). Automatic failover is supported in the synchronous high-safety mode and requires a witness server. A minimal T-SQL sketch of these modes follows below.
On the question of continuity - the question is how these technologies fare in the context of SQL Server 2012. Introduced with fanfare at the time, Database Mirroring was the declared tool of choice.
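For orientation, the two mirroring modes map onto a handful of T-SQL statements; a minimal sketch with illustrative names and ports (the mirroring endpoints are assumed to exist on both partners already):

    -- on the mirror server: point the mirror at the principal
    ALTER DATABASE Sales SET PARTNER = 'TCP://principal.contoso.com:5022';
    -- on the principal: complete the pairing
    ALTER DATABASE Sales SET PARTNER = 'TCP://mirror.contoso.com:5022';

    -- high-safety (synchronous) mode, optionally with a witness for automatic failover
    ALTER DATABASE Sales SET PARTNER SAFETY FULL;
    ALTER DATABASE Sales SET WITNESS = 'TCP://witness.contoso.com:5022';

    -- high-performance (asynchronous) mode
    ALTER DATABASE Sales SET PARTNER SAFETY OFF;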
I am not a product manager at Microsoft and can only give my opinion here, but judging by the SQL AlwaysOn team blog, things do not look good for Database Mirroring - at least not in the long run. "Does AlwaysOn Availability Group replace Database Mirroring going forward?" "The short answer is we recommend that you migrate from the mirroring configuration or even mirroring and log shipping configuration to using Availability Group. Database Mirroring will still be available in the Denali release but will be phased out over subsequent releases. Log Shipping will continue to be available in future releases." Which finally brings us to the actual topic: what is a so-called Availability Group, and what exactly is behind the promising-sounding AlwaysOn label?
SQL Server 2012 - AlwaysOn. Two HA features hide behind the "AlwaysOn" branding: on the one hand AlwaysOn Failover Clustering, aka SQL Server Failover Cluster Instances (FCI), on the other the AlwaysOn Availability Groups. Failover Cluster Instances (FCI) roughly correspond to Oracle's stretch cluster concept, sit on top of Windows Server Failover Clustering (WSFC) and provide HA at the instance level. AlwaysOn Availability Groups resemble the idea of consistency groups as found in storage-level replication software such as EMC SRDF, depend on Windows Server Failover Clustering (WSFC) and provide HA at the database level.
Note: do not confuse a SQL Server database with an Oracle database, nor an Oracle instance with a SQL Server instance. The same terms mean different things here, which is not infrequently the reason why Oracle and Microsoft DBAs talk past each other. Think of a SQL Server database more like an Oracle schema; something like the SQL Server Northwind database is comparable to the Oracle Scott schema. If you want to know the exact differences, you will find a detailed description in my book "Oracle10g Release 2 für Windows und .NET", available from Lehmanns, Amazon, etc.
Windows Server Failover Clustering (WSFC). As you can see, both AlwaysOn technologies are in turn based on Windows Server Failover Clustering (WSFC) to provide high availability at the instance level on the one hand and at the database level on the other, so here is a short description of WSFC. WSFC is an infrastructure feature shipped with the Windows operating system to provide HA for server applications such as Microsoft Exchange, SharePoint, SQL Server, etc. Like any other cluster, a WSFC cluster consists of a group of independent servers that work together to increase the availability of an application or service. If a cluster node or service fails, the service previously hosted on that node can be transferred automatically or manually to another node available in the cluster - commonly known as failover. Under SQL Server 2012, both the AlwaysOn Availability Groups and the AlwaysOn Failover Cluster Instances use WSFC as the platform technology to register components as WSFC cluster resources. Related resources are combined into a resource group, which can be made dependent on other WSFC cluster resources (a quick way to inspect this grouping is sketched below).
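The grouping is easy to see on any WSFC node; a small PowerShell sketch, assuming the FailoverClusters module that ships with the failover clustering feature:

    Import-Module FailoverClusters

    # list the resource groups (roles) defined in the cluster
    Get-ClusterGroup

    # show each resource, its state and the group that owns it
    Get-ClusterResource | Format-Table Name, State, OwnerGroup, ResourceType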
The WSFC cluster service can then detect the need to restart the SQL Server instance or trigger an automatic failover to another server node in the WSFC cluster.
Failover Cluster Instances (FCI). A SQL Server Failover Cluster Instance (FCI) is a single SQL Server instance operated in a failover cluster consisting of several Windows Server Failover Clustering (WSFC) nodes, thereby providing HA at the instance level. Using a multi-subnet FCI, remote DR (disaster recovery) can also be supported; another option for remote DR is to run a database hosted under FCI in an availability group - more on that later. FCI and WSFC: basic FCI, used for local high availability of instances, resembles the outdated architecture of a cold (active-passive) cluster. Under SQL Server 2008 this technology was called SQL Server 2008 Failover Clustering and used the Windows Server failover cluster. In SQL Server 2012 Microsoft has folded this base technology into the AlwaysOn label, but it remains the classic active-passive configuration. The sequence in a failover is as follows: as long as no hardware or system error occurs, all dirty pages in the buffer cache are written to disk; all corresponding SQL Server services in the resource group are stopped on the active node; ownership of the resource group is transferred to another node of the FCI; the new owner of the resource group starts its SQL Server services; connection requests from a client application are automatically redirected to the new active node with the same virtual network name (VNN). Depending on when the last checkpoint occurred, the number of dirty pages in the buffer cache still to be written to disk can lead to unpredictably long failover times. To throttle that number, SQL Server 2012 has a new capability called indirect checkpoints, which resembles the Fast-Start MTTR Target feature of the Oracle database, available since Oracle9i.
SQL Server Multi-Subnet Clustering. Conceptually, a SQL Server multi-subnet failover cluster corresponds to an Oracle RAC stretch cluster, but only at first glance. Unlike RAC, in a local SQL Server failover cluster only one node at a time is active for a given database, and for data replication between geographically separated sites Microsoft relies on third-party storage mirroring solutions. The improvement a SQL Server 2012 implementation brings to this scenario is simply that a VLAN (Virtual Local Area Network) configuration is no longer required, as it used to be. The following diagram shows how the process is handled with SQL Server 2012: in site A and site B, HA is ensured by a local active-passive cluster. Particular attention must be paid to configuration and tuning here, otherwise completely unacceptable failover times result. The reason is that client-side downtime no longer depends only on the pure failover time, but additionally on the duration of DNS replication between the DNS servers. (Remember that we are talking about multi-subnet clustering.)
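The client-side behavior is governed by parameters on the cluster's network name resource; a hedged PowerShell sketch, where the resource name and the values are purely illustrative rather than a recommendation:

    # shorten the DNS TTL that clients cache for the SQL network name
    Get-ClusterResource "SQL Network Name (SQLNET)" |
        Set-ClusterParameter HostRecordTTL 60

    # register all IP addresses of the multi-subnet name so clients can try each one
    Get-ClusterResource "SQL Network Name (SQLNET)" |
        Set-ClusterParameter RegisterAllProvidersIP 1

    # the network name resource must be cycled for the new parameters to take effect
    Stop-ClusterResource "SQL Network Name (SQLNET)"
    Start-ClusterResource "SQL Network Name (SQLNET)"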
It also matters how quickly clients query the updated DNS information. Special configurations for node heartbeat, HostRecordTTL (host record time-to-live) and the intersite replication frequency for Active Directory Sites and Services become necessary. Default TTL for Windows Server 2008 R2: 20 minutes, recommended setting: 1 minute. DNS update replication frequency in a Windows environment: 180 minutes, recommended setting: 15 minutes (the minimum value). Looking at these numbers, you have to conclude that even an optimal configuration cannot meet the rigid HA and DR SLAs (service level agreements) of today's business-critical applications, because it implies a client-side failover experience of 16 minutes in total. An excerpt from the SQL Server 2012 online documentation on this point: "Cons: If a cross-subnet failover occurs, the client recovery time could be 15 minutes or longer, depending on your HostRecordTTL setting and the setting of your cross-site DNS/AD replication schedule."
We have now reached the point in our considerations that explains why I used the "Windows was the God ..." quote earlier. The unconditional dependence on Windows increasingly becomes a problem, because it raises the complexity of a Microsoft-based solution instead of reducing it - and complexity is the last thing CIOs want these days. In fairness to SQL Server 2012 and AlwaysOn, such long failover times are not a "must" but a "can". Yet even a "can" can have unforeseeable and costly consequences at the wrong moment. That unpredictability is in turn caused by the many components involved in the implementation and their dependencies, for example three cluster solutions (two from Microsoft, one third-party product). However you turn it, there is no way around this fact, quite independent of the length of a downtime or of failover times. In contrast to AlwaysOn and the stretch cluster variant presented here, a corresponding Oracle implementation avoids this kind of complexity caused by multiple dependencies. The difference is made by database-integrated mechanisms such as Fast Application Notification (FAN) and Fast Connection Failover (FCF). For Oracle MAA (Maximum Availability Architecture) configurations, inter-site failover times in the range of seconds are not unusual. If you follow the link to the Oracle MAA you will also find a number of customer case studies - another important distinction from AlwaysOn, because the Oracle technology has already proven itself many times over in highly critical environments.
Availability Groups. The so-called availability groups are, besides FCI, the other building block of AlwaysOn. Note: before we look at them more closely, recall once more that a SQL Server database does not mean the same thing as an Oracle database, but rather corresponds to an Oracle schema; something like the SQL Server Northwind database is comparable to the Oracle Scott schema. An availability group consists of a set of several user databases that are treated together as a group in the event of a failover (a minimal definition is sketched below).
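A minimal T-SQL sketch of such a group with two synchronous replicas and one asynchronous secondary; server names, endpoints and the database name are illustrative, and the mirroring endpoints plus the HADR feature are assumed to be enabled on every instance:

    -- on the primary instance
    CREATE AVAILABILITY GROUP AgSales
    FOR DATABASE SalesDb
    REPLICA ON
        'NODE1' WITH (ENDPOINT_URL = 'TCP://node1.contoso.com:5022',
                      AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                      FAILOVER_MODE = AUTOMATIC),
        'NODE2' WITH (ENDPOINT_URL = 'TCP://node2.contoso.com:5022',
                      AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                      FAILOVER_MODE = AUTOMATIC),
        'NODE3' WITH (ENDPOINT_URL = 'TCP://node3.contoso.com:5022',
                      AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
                      FAILOVER_MODE = MANUAL);

    -- on each secondary instance: join the group and attach the database
    ALTER AVAILABILITY GROUP AgSales JOIN;
    ALTER DATABASE SalesDb SET HADR AVAILABILITY GROUP = AgSales;

    -- role transitions referenced later in the text (run on the target secondary):
    -- planned manual failover to a synchronized secondary, no data loss
    ALTER AVAILABILITY GROUP AgSales FAILOVER;
    -- forced failover to an asynchronous secondary, possible data loss
    ALTER AVAILABILITY GROUP AgSales FORCE_FAILOVER_ALLOW_DATA_LOSS;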
An availability group supports one set of primary databases (the primary replica) and one to four sets of corresponding secondary databases (secondary replicas). Not every SQL Server database can be placed in an AlwaysOn availability group, however. SQL Server specialist Michael Otey lists the following requirements in his SQL Server Pro article: availability groups must be created with user databases, system databases cannot be used; the databases must be in read-write mode, read-only databases are not supported; the databases in an availability group must be multi-user databases; they must not use the AUTO_CLOSE feature; they must use the full recovery model and a full backup must exist; a given database can belong to only one availability group and must not be configured for database mirroring; Microsoft also recommends that the directory path of a database be identical on the primary and secondary servers. As you can see, availability groups are not suited to covering HA and DR completely; the distinction between the instance level (FCI) and the database level (availability groups) matters a great deal. I was told recently that with availability groups you can do without shared storage and thereby save costs. So far so good... You can of course run an installation purely with availability groups and without FCI, but you should then be aware of everything you have not protected that way, and what that in turn means for disaster recovery (DR) and SLAs (service level agreements). In short, in practice you can hardly avoid combining both AlwaysOn products, with all the complexity that entails.
Availability Groups and WSFC. AlwaysOn depends on Windows Server Failover Clustering (WSFC) to monitor and manage the current roles of the availability replicas in an availability group and to decide how a failover event affects them. The following diagram shows the relationship between availability groups and WSFC. The availability mode is a property of each availability replica, so synchronous and asynchronous can be mixed: in asynchronous-commit mode the primary replica commits transactions without waiting for the secondary; in synchronous-commit mode the primary replica waits for the commit of the secondary replica. The failover types are automatic, manual and forced (with possible data loss): synchronous-commit mode allows planned, manual failover without data loss and automatic failover without data loss; asynchronous-commit mode allows only a forced, manual failover with possible data loss. SQL Server has no separate switchover concept as in Oracle Data Guard; for SQL Server, all role transitions are called failover. In fact, SQL Server does not support a switchover for asynchronous connections; there is only the forced failover with possible data loss. A capability similar to the switchover under Oracle Data Guard simply is not there.
SQL Server FCI with Availability Groups. In addition to the availability groups, a second failover level can be set up by implementing SQL Server FCI (on shared storage) with WSFC.
An availability replica can then be hosted on a standalone instance or on an FCI instance. For clarity: the availability groups themselves do not require shared storage. This combination can be used for local HA at the instance level and DR at the database level through availability groups; the following diagram shows this scenario. Careful, though: this is not an equivalent of Oracle RAC plus Data Guard, even if the picture may suggest it, because all secondary nodes in the FCI are purely passive. There is also a further, serious restriction: SQL Server Failover Cluster Instances (FCI) do not support automatic AlwaysOn failover for availability groups. Every availability replica hosted under FCI can only be configured for manual failover.
Readable secondary replicas. One or more availability replicas in an availability group can be configured for read access while running as a secondary replica. This resembles Oracle Active Data Guard, but there are restrictions. All queries against the secondary database are automatically mapped to the snapshot isolation level, which is a row-versioning scheme; Microsoft attempted to emulate Oracle's MVRC (multi-version read consistency) with it. In fact, SQL Server snapshot isolation is better compared to Oracle Flashback: the snapshot isolation level is a bolted-on feature rather than an inherent part of the database kernel, as it is in Oracle's case. (I will write another blog post on this shortly when I look at the new SQL Server 2012 core licensing.) In practice, the mapping onto the snapshot isolation level creates serious restrictions that you should be aware of before going into production: if an active transaction exists on the primary database at the moment a readable secondary replica is added to the availability group, the row versions on the corresponding secondary database will not be fully available right away. An active transaction on the primary replica must first complete (commit or rollback) and that transaction record must be processed on the secondary replica; until then the isolation level mapping on the secondary database is incomplete and queries are temporarily blocked. Microsoft says: "This is needed to guarantee that row versions are available on the secondary replica before executing the query under snapshot isolation as all isolation levels are implicitly mapped to snapshot isolation." (SQL Storage Engine Blog: AlwaysOn: I just enabled Readable Secondary but my query is blocked?) Fundamentally this means that an active readable replica cannot be added to the availability group without temporarily quiescing the primary replica. Because read operations are mapped to the snapshot isolation transaction level, the cleanup of ghost records on the primary replica can be blocked by transactions on one or more secondary replicas, for example by a long-running query on the secondary replica. This cleanup is also blocked if the connection to the secondary replica drops or the data exchange is interrupted.
Log truncation is also prevented in this state. If the state persists for a longer period, Microsoft recommends removing the secondary replica from the availability group, which is a serious downtime problem. The read-only workload on the secondary replicas can block incoming DDL changes: although the read operations hold no shared locks thanks to row versioning, they do take Sch-S locks (schema stability locks), and DDL changes applied by redo operations can be blocked by them. If DDL is blocked by a competing read workload and the threshold for 'recovery interval' (a SQL Server configuration option) is exceeded, SQL Server raises the event sqlserver.lock_redo_blocked, upon which Microsoft recommends killing the blocking readers - with no regard whatsoever for the availability of the application. None of these restrictions exists with Oracle Active Data Guard.
Backups on secondary replicas. On the secondary replicas, backups (BACKUP DATABASE via Transact-SQL) can only be taken as copy-only backups of a full database, files or filegroups. Creating incremental backups is not supported, which is a serious gap compared to the backup support for physical standbys under Oracle Data Guard. (Note: a possible workaround via snapshots remains a workaround.) A further limitation of this feature compared to Oracle Data Guard is that a backup of a secondary replica cannot run if the replica cannot communicate with the primary replica; in addition, the secondary replica must be synchronized, or currently synchronizing, for the backup to be taken on it.
Comparing Microsoft AlwaysOn with the Oracle MAA. I come back to the question mentioned at the beginning and put to me more than once - "When, if ever, will Oracle offer something comparable to AlwaysOn?" - and my (brief) irritation at it. If you have read this blog post up to this point, you now know my answer. Not every point applies to everyone, and that was never the deeper purpose of my answer; if no multi-subnet setup is involved, for example, all criticism on that score is moot for now, which does not mean it could not become relevant again tomorrow (never say never). Other pitfalls you do not necessarily step into in a test environment, but only in live operation, all the more so if you are unaware of potential problems and run no dedicated tests. And whoever wants to position AlwaysOn successfully will have little interest in pointing out possible weaknesses and the proverbial devil in the detail. That is not an accusation, it is only human. It is also understandable to concentrate first on "what works" and "what runs well" rather than on "what can cause problems" or "what does not work"; who wants to be the killjoy? Speaking for myself, I would rather know about all possible restrictions up front than experience them painfully first-hand after a short honeymoon.
I am convinced that you feel the same way. Below, therefore, is a summary of all the points that I consider absolutely worth mentioning in comparison with the Oracle MAA (Maximum Availability Architecture) if you are considering an evaluation of Microsoft AlwaysOn. 1. AlwaysOn is a complex technology The SQL Server AlwaysOn stack is composed of three different technologies: Windows Server Failover Clustering (WSFC), SQL Server Failover Cluster Instances (FCI), and SQL Server Availability Groups. One cannot call such a solution seamless, as the many limitations documented by Microsoft also indicate. While earlier SQL Server versions were evolving towards their own HA/DR technologies (such as database mirroring), Microsoft now recommends migrating away from them. But why this about-face? It does not lead to a consistent and robust set of HA/DR technologies for business-critical environments.  Does the answer lie in my thesis that "Windows was the God ..." still holds, and that the drawbacks of the overly tight coupling with Windows are not something one wants to see? Decide for yourself ... 2. Failover Cluster Instances - no RAC equivalent The SQL Server and Windows Server clustering technology is still based on the outdated active-passive model and leads to a waste of system resources. Looking at just two nodes, the full added value of an active-active cluster (such as Real Application Clusters), which Oracle developed ten years ago, is not immediately apparent. But once you know the benefits of scaling out by simply adding further cluster nodes, which then all work together as a single logical system, you understand what the motto "pay as you grow" is about. An active-active cluster is also about high availability - and a failover completes faster than in an active-passive model - but that is not all it is about. It should be noted here that Oracle 11g Standard Edition already includes the use of Oracle RAC on up to four sockets at no extra cost. If you want to run it on Windows, you do not need the Windows Server Enterprise Edition, because Oracle 11g ships its own clusterware. You get high availability and scalability and can use the cheaper Windows Server Standard Edition. 3. SQL Server multi-subnet clustering - dependency on third-party storage mirroring  The SQL Server multi-subnet clustering architecture supports building a stretch cluster, but it is still based on the active-passive model. The really problematic part, however, is that protecting the database relies on third-party storage mirroring technology, with no integration between Windows Server Failover Clustering (WSFC) and the underlying mirroring technology. When a failover at the instance level occurs in the cluster, there is no coordination with a possible failover at the level of the storage array. 4. Availability Groups - four, or really only two? A primary replica allows up to four secondary replicas within an availability group, but only two in synchronous commit mode. 
While this is an advantage over the strict 1:1 model of database mirroring, SQL Server 2012 still falls well behind Oracle Data Guard with its up to 30 direct standby targets - and many more possible through cascading targets. This also makes Oracle Active Data Guard suitable for providing reader-farm scalability for internet-based businesses. This is not possible with AlwaysOn availability groups. 5. Availability Groups - no asynchronous switchover  Availability group technology is also positioned as a suitable means for administrative tasks such as upgrades or maintenance work. However, one must be aware of a serious deficit: in asynchronous availability mode, the only option for a role transition is a forced failover with data loss! To avoid losing data during planned maintenance, you have to configure the synchronous availability mode, which in turn has serious implications for WAN deployments. Taking this thought to its conclusion, you arrive at the verdict that availability group technology cannot be used effectively for planned maintenance in such an environment. 6. Automatic failover - not always possible Both SQL Server FCIs and availability groups support automatic failover. But if you want to combine the two, the result will not be automatic failover. Their interaction in a failover situation leads to race conditions, which is why this configuration no longer allows automatic failover to a replica in an availability group. This again confirms the deeper problem of AlwaysOn: a composition of different technologies and the dependency on Windows. 7. Problematic RTO (Recovery Time Objective) Microsoft positions the SQL Server multi-subnet clustering architecture as a viable HA/DR architecture. But considering the issues around DNS replication and the possible long client-side waits of up to 16 minutes, strict RTO (Recovery Time Objective) requirements cannot be met. Unlike Oracle, SQL Server has no database-integrated technologies such as Oracle Fast Application Notification (FAN) or Oracle Fast Connection Failover (FCF). 8. Problematic RPO (Recovery Point Objective) SQL Server allows forced failover, but offers no way to automatically transfer the last bits of data from an old to a new primary replica if the availability mode was asynchronous. Oracle Data Guard, by contrast, provides this support through its flush redo feature. This ensures zero data loss and the best possible RPO even in forced failover situations. 9. Readable secondary replicas with limitations Because readable secondary replicas run under the snapshot isolation transaction level, they come with limitations that affect the primary database. The cleanup of ghost records on the primary database is affected by long-running queries on the readable secondary database. The readable secondary database cannot be added to the availability group while there are active transactions on the primary database. In addition, DDL changes on the primary database can be blocked by queries on the secondary. 
And incremental backups are not supported here.   None of these restrictions exist under Oracle Data Guard.
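To make the two AlwaysOn behaviors discussed above concrete - allowing read access on a secondary replica and the copy-only restriction for backups on it - here is a minimal T-SQL sketch. The availability group name, replica instance name, database name and backup path are assumptions for illustration, not values taken from this article.

-- Sketch: permit read-only connections on a secondary replica (assumed names)
ALTER AVAILABILITY GROUP [AG_Demo]
MODIFY REPLICA ON N'NODE2\SQL1'
WITH (SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));

-- Sketch: the only supported database backup on a secondary replica is copy-only
BACKUP DATABASE [DemoDB]
TO DISK = N'X:\Backup\DemoDB_copyonly.bak'
WITH COPY_ONLY;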

    Read the article

  • Node.js Adventure - Storage Services and Service Runtime

    - by Shaun
    When I described how to host a Node.js application on Windows Azure, one question that might be raised is how to consume the various Windows Azure services, such as storage, service bus, access control, etc. Interacting with Windows Azure services is possible in Node.js through the Windows Azure Node.js SDK, which is a module available in NPM. In this post I would like to describe how to use Windows Azure Storage (a.k.a. WAS) as well as the service runtime.   Consume Windows Azure Storage Let's first have a look at how to consume WAS through Node.js. As we know from the previous post, we can host a Node.js application on Windows Azure Web Sites (a.k.a. WAWS) as well as Windows Azure Cloud Services (a.k.a. WACS). In theory, WAWS is also built on top of WACS worker roles with some more features. Hence in this post I will only demonstrate hosting in a WACS worker role. The Node.js code can also be used to consume WAS when hosted on WAWS. But since there are no roles in WAWS, the code for consuming the service runtime mentioned in the next section cannot be used in a WAWS node application. We can use the solution that I created in my last post. Alternatively, we can create a new Windows Azure project in Visual Studio with a worker role, add "node.exe" and "index.js", install the "express" and "node-sqlserver" modules, and mark all files as "Copy always". In order to use Windows Azure services we need the Windows Azure Node.js SDK, also known as a module named "azure", which can be installed through NPM. Once downloaded and installed, we need to include it in our worker role project and mark it as "Copy always". You can use my "Copy all always" tool mentioned in my last post to update the current worker role project file. You can also find the source code of this tool here. The source code of the Windows Azure SDK for Node.js can be found on its GitHub page. It contains two parts. One is a CLI tool which provides a cross-platform command line package for Mac and Linux to manage WAWS and Windows Azure Virtual Machines (a.k.a. WAVM). The other is a library for managing and consuming the various Windows Azure services, including tables, blobs, queues, service bus and the service runtime. I will not cover all of them but will only demonstrate how to use tables and the service runtime information in this post. You can find the full documentation of this SDK here. Back in Visual Studio, open "index.js" and let's continue our application from the last post, which was working against Windows Azure SQL Database (a.k.a. WASD). The code should look like this. 
1: var express = require("express"); 2: var sql = require("node-sqlserver"); 3:  4: var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;"; 5: var port = 80; 6:  7: var app = express(); 8:  9: app.configure(function () { 10: app.use(express.bodyParser()); 11: }); 12:  13: app.get("/", function (req, res) { 14: sql.open(connectionString, function (err, conn) { 15: if (err) { 16: console.log(err); 17: res.send(500, "Cannot open connection."); 18: } 19: else { 20: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 21: if (err) { 22: console.log(err); 23: res.send(500, "Cannot retrieve records."); 24: } 25: else { 26: res.json(results); 27: } 28: }); 29: } 30: }); 31: }); 32:  33: app.get("/text/:key/:culture", function (req, res) { 34: sql.open(connectionString, function (err, conn) { 35: if (err) { 36: console.log(err); 37: res.send(500, "Cannot open connection."); 38: } 39: else { 40: var key = req.params.key; 41: var culture = req.params.culture; 42: var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'"; 43: conn.queryRaw(command, function (err, results) { 44: if (err) { 45: console.log(err); 46: res.send(500, "Cannot retrieve records."); 47: } 48: else { 49: res.json(results); 50: } 51: }); 52: } 53: }); 54: }); 55:  56: app.get("/sproc/:key/:culture", function (req, res) { 57: sql.open(connectionString, function (err, conn) { 58: if (err) { 59: console.log(err); 60: res.send(500, "Cannot open connection."); 61: } 62: else { 63: var key = req.params.key; 64: var culture = req.params.culture; 65: var command = "EXEC GetItem '" + key + "', '" + culture + "'"; 66: conn.queryRaw(command, function (err, results) { 67: if (err) { 68: console.log(err); 69: res.send(500, "Cannot retrieve records."); 70: } 71: else { 72: res.json(results); 73: } 74: }); 75: } 76: }); 77: }); 78:  79: app.post("/new", function (req, res) { 80: var key = req.body.key; 81: var culture = req.body.culture; 82: var val = req.body.val; 83:  84: sql.open(connectionString, function (err, conn) { 85: if (err) { 86: console.log(err); 87: res.send(500, "Cannot open connection."); 88: } 89: else { 90: var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')"; 91: conn.queryRaw(command, function (err, results) { 92: if (err) { 93: console.log(err); 94: res.send(500, "Cannot retrieve records."); 95: } 96: else { 97: res.send(200, "Inserted Successful"); 98: } 99: }); 100: } 101: }); 102: }); 103:  104: app.listen(port); Now let’s create a new function, copy the records from WASD to table service. 1. Delete the table named “resource”. 2. Create a new table named “resource”. These 2 steps ensures that we have an empty table. 3. Load all records from the “resource” table in WASD. 4. For each records loaded from WASD, insert them into the table one by one. 5. Prompt to user when finished. In order to use table service we need the storage account and key, which can be found from the developer portal. Just select the storage account and click the Manage Keys button. Then create two local variants in our Node.js application for the storage account name and key. Since we need to use WAS we need to import the azure module. Also I created another variant stored the table name. In order to work with table service I need to create the storage client for table service. 
This is very similar as the Windows Azure SDK for .NET. As the code below I created a new variant named “client” and use “createTableService”, specified my storage account name and key. 1: var azure = require("azure"); 2: var storageAccountName = "synctile"; 3: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 4: var tableName = "resource"; 5: var client = azure.createTableService(storageAccountName, storageAccountKey); Now create a new function for URL “/was/init” so that we can trigger it through browser. Then in this function we will firstly load all records from WASD. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: } 18: } 19: }); 20: } 21: }); 22: }); When we succeed loaded all records we can start to transform them into table service. First I need to recreate the table in table service. This can be done by deleting and creating the table through table client I had just created previously. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: } 27: }); 28: }); 29: } 30: } 31: }); 32: } 33: }); 34: }); As you can see, the azure SDK provide its methods in callback pattern. In fact, almost all modules in Node.js use the callback pattern. For example, when I deleted a table I invoked “deleteTable” method, provided the name of the table and a callback function which will be performed when the table had been deleted or failed. Underlying, the azure module will perform the table deletion operation in POSIX async threads pool asynchronously. And once it’s done the callback function will be performed. This is the reason we need to nest the table creation code inside the deletion function. If we perform the table creation code after the deletion code then they will be invoked in parallel. Next, for each records in WASD I created an entity and then insert into the table service. Finally I send the response to the browser. Can you find a bug in the code below? I will describe it later in this post. 
1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: for (var i = 0; i < results.rows.length; i++) { 27: var entity = { 28: "PartitionKey": results.rows[i][1], 29: "RowKey": results.rows[i][0], 30: "Value": results.rows[i][2] 31: }; 32: client.insertEntity(tableName, entity, function (error) { 33: if (error) { 34: error["target"] = "insertEntity"; 35: res.send(500, error); 36: } 37: else { 38: console.log("entity inserted"); 39: } 40: }); 41: } 42: // send the 43: console.log("all done"); 44: res.send(200, "All done!"); 45: } 46: }); 47: }); 48: } 49: } 50: }); 51: } 52: }); 53: }); Now we can publish it to the cloud and have a try. But normally we’d better test it at the local emulator first. In Node.js SDK there are three build-in properties which provides the account name, key and host address for local storage emulator. We can use them to initialize our table service client. We also need to change the SQL connection string to let it use my local database. The code will be changed as below. 1: // windows azure sql database 2: //var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd=eszqu94XZY;Encrypt=yes;Connection Timeout=30;"; 3: // sql server 4: var connectionString = "Driver={SQL Server Native Client 11.0};Server={.};Database={Caspar};Trusted_Connection={Yes};"; 5:  6: var azure = require("azure"); 7: var storageAccountName = "synctile"; 8: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 9: var tableName = "resource"; 10: // windows azure storage 11: //var client = azure.createTableService(storageAccountName, storageAccountKey); 12: // local storage emulator 13: var client = azure.createTableService(azure.ServiceClient.DEVSTORE_STORAGE_ACCOUNT, azure.ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY, azure.ServiceClient.DEVSTORE_TABLE_HOST); Now let’s run the application and navigate to “localhost:12345/was/init” as I hosted it on port 12345. We can find it transformed the data from my local database to local table service. Everything looks fine. But there is a bug in my code. If we have a look on the Node.js command window we will find that it sent response before all records had been inserted, which is not what I expected. The reason is that, as I mentioned before, Node.js perform all IO operations in non-blocking model. When we inserted the records we executed the table service insert method in parallel, and the operation of sending response was also executed in parallel, even though I wrote it at the end of my logic. 
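As a tiny standalone sketch (not part of the original sample), the ordering problem can be reproduced like this; setTimeout stands in for the asynchronous insertEntity call:

// minimal sketch of the ordering issue: the loop only *schedules* the async work,
// so "all done" prints before any of the callbacks run
var items = ["a", "b", "c"];
for (var i = 0; i < items.length; i++) {
    setTimeout(function () { console.log("entity inserted"); }, 10); // stand-in for client.insertEntity(...)
}
console.log("all done"); // printed first, even though it appears last in the code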
The correct logic should be, when all entities had been copied to table service with no error, then I will send response to the browser, otherwise I should send error message to the browser. To do so I need to import another module named “async”, which helps us to coordinate our asynchronous code. Install the module and import it at the beginning of the code. Then we can use its “forEach” method for the asynchronous code of inserting table entities. The first argument of “forEach” is the array that will be performed. The second argument is the operation for each items in the array. And the third argument will be invoked then all items had been performed or any errors occurred. Here we can send our response to browser. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: async.forEach(results.rows, 26: // transform the records 27: function (row, callback) { 28: var entity = { 29: "PartitionKey": row[1], 30: "RowKey": row[0], 31: "Value": row[2] 32: }; 33: client.insertEntity(tableName, entity, function (error) { 34: if (error) { 35: callback(error); 36: } 37: else { 38: console.log("entity inserted."); 39: callback(null); 40: } 41: }); 42: }, 43: // send reponse 44: function (error) { 45: if (error) { 46: error["target"] = "insertEntity"; 47: res.send(500, error); 48: } 49: else { 50: console.log("all done"); 51: res.send(200, "All done!"); 52: } 53: } 54: ); 55: } 56: }); 57: }); 58: } 59: } 60: }); 61: } 62: }); 63: }); Run it locally and now we can find the response was sent after all entities had been inserted. Query entities against table service is simple as well. Just use the “queryEntity” method from the table service client and providing the partition key and row key. We can also provide a complex query criteria as well, for example the code here. In the code below I queried an entity by the partition key and row key, and return the proper localization value in response. 1: app.get("/was/:key/:culture", function (req, res) { 2: var key = req.params.key; 3: var culture = req.params.culture; 4: client.queryEntity(tableName, culture, key, function (error, entity) { 5: if (error) { 6: res.send(500, error); 7: } 8: else { 9: res.json(entity); 10: } 11: }); 12: }); And then tested it on local emulator. Finally if we want to publish this application to the cloud we should change the database connection string and storage account. For more information about how to consume blob and queue service, as well as the service bus please refer to the MSDN page.   Consume Service Runtime As I mentioned above, before we published our application to the cloud we need to change the connection string and account information in our code. 
But if you had played with WACS you should have known that the service runtime provides the ability to retrieve configuration settings, endpoints and local resource information at runtime. Which means we can have these values defined in CSCFG and CSDEF files and then the runtime should be able to retrieve the proper values. For example we can add some role settings though the property window of the role, specify the connection string and storage account for cloud and local. And the can also use the endpoint which defined in role environment to our Node.js application. In Node.js SDK we can get an object from “azure.RoleEnvironment”, which provides the functionalities to retrieve the configuration settings and endpoints, etc.. In the code below I defined the connection string variants and then use the SDK to retrieve and initialize the table client. 1: var connectionString = ""; 2: var storageAccountName = ""; 3: var storageAccountKey = ""; 4: var tableName = ""; 5: var client; 6:  7: azure.RoleEnvironment.getConfigurationSettings(function (error, settings) { 8: if (error) { 9: console.log("ERROR: getConfigurationSettings"); 10: console.log(JSON.stringify(error)); 11: } 12: else { 13: console.log(JSON.stringify(settings)); 14: connectionString = settings["SqlConnectionString"]; 15: storageAccountName = settings["StorageAccountName"]; 16: storageAccountKey = settings["StorageAccountKey"]; 17: tableName = settings["TableName"]; 18:  19: console.log("connectionString = %s", connectionString); 20: console.log("storageAccountName = %s", storageAccountName); 21: console.log("storageAccountKey = %s", storageAccountKey); 22: console.log("tableName = %s", tableName); 23:  24: client = azure.createTableService(storageAccountName, storageAccountKey); 25: } 26: }); In this way we don’t need to amend the code for the configurations between local and cloud environment since the service runtime will take care of it. At the end of the code we will listen the application on the port retrieved from SDK as well. 1: azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) { 2: if (error) { 3: console.log("ERROR: getCurrentRoleInstance"); 4: console.log(JSON.stringify(error)); 5: } 6: else { 7: console.log(JSON.stringify(instance)); 8: if (instance["endpoints"] && instance["endpoints"]["nodejs"]) { 9: var endpoint = instance["endpoints"]["nodejs"]; 10: app.listen(endpoint["port"]); 11: } 12: else { 13: app.listen(8080); 14: } 15: } 16: }); But if we tested the application right now we will find that it cannot retrieve any values from service runtime. This is because by default, the entry point of this role was defined to the worker role class. In windows azure environment the service runtime will open a named pipeline to the entry point instance, so that it can connect to the runtime and retrieve values. But in this case, since the entry point was worker role and the Node.js was opened inside the role, the named pipeline was established between our worker role class and service runtime, so our Node.js application cannot use it. To fix this problem we need to open the CSDEF file under the azure project, add a new element named Runtime. Then add an element named EntryPoint which specify the Node.js command line. So that the Node.js application will have the connection to service runtime, then it’s able to read the configurations. Start the Node.js at local emulator we can find it retrieved the connections, storage account for local. 
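For reference, the Runtime/EntryPoint change to the CSDEF file described above might look roughly like the snippet below. The role name, VM size and exact command line are assumptions based on the "node.exe" and "index.js" files used in this post; treat it as a sketch rather than the post's exact service definition.

<WorkerRole name="WorkerRole1" vmsize="Small">
  <Runtime>
    <EntryPoint>
      <!-- start node.exe as the role entry point so the named pipe to the
           service runtime is established for the Node.js process -->
      <ProgramEntryPoint commandLine="node.exe index.js" setReadyOnProcessStart="true" />
    </EntryPoint>
  </Runtime>
</WorkerRole>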
And if we publish our application to Azure, it works with WASD and the storage service through the cloud configurations.   Summary In this post I demonstrated how to use the Windows Azure SDK for Node.js to interact with the storage service, especially the table service. I also demonstrated how to use the WACS service runtime and how to retrieve the configuration settings and endpoint information. In order to make the service runtime available to my Node.js application I needed to create an entry point element in the CSDEF file and set "node.exe" as the entry point. I used five posts to introduce and demonstrate how to run a Node.js application on the Windows platform, and how to use Windows Azure Web Sites and a Windows Azure Cloud Service worker role to host a Node.js application. I also described how to work with other services provided by the Windows Azure platform through the Windows Azure SDK for Node.js. Node.js is a very young network application platform. But since it is simple and easy to learn and deploy, and it uses a single-threaded non-blocking IO model, Node.js has become more and more popular for web application and web service development, especially for IO-intensive projects. And as Node.js is very good at scaling out, it is all the more useful on a cloud computing platform. Using Node.js on the Windows platform is new, too. The modules for SQL database and the Windows Azure SDK are still under development and enhancement: "node-sqlserver" doesn't support SQL parameters yet, while "azure" does support using a storage connection string to create the storage client. But Microsoft is working on making them easier to use and on adding more features and functionality.   PS, you can download the source code here. You can download the source code of my "Copy all always" tool here.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Creating a thematic map

    - by jsharma
    This post describes how to create a simple thematic map, just a state population layer, with no underlying map tile layer. The map shows states color-coded by total population. The map is interactive with info-windows and can be panned and zoomed. The sample code demonstrates the following: Displaying an interactive vector layer with no background map tile layer (i.e. purpose and use of the Universe object) Using a dynamic (i.e. defined via the javascript client API) color bucket style Dynamically changing a layer's rendering style Specifying which attribute value to use in determining the bucket, and hence style, for a feature (FoI) The result is shown in the screenshot below. The states layer was defined, and stored in the user_sdo_themes view of the mvdemo schema, using MapBuilder. The underlying table is defined as SQL> desc states_32775  Name                                      Null?    Type ----------------------------------------- -------- ----------------------------  STATE                                              VARCHAR2(26)  STATE_ABRV                                         VARCHAR2(2) FIPSST                                             VARCHAR2(2) TOTPOP                                             NUMBER PCTSMPLD                                           NUMBER LANDSQMI                                           NUMBER POPPSQMI                                           NUMBER ... MEDHHINC NUMBER AVGHHINC NUMBER GEOM32775 MDSYS.SDO_GEOMETRY We'll use the TOTPOP column value in the advanced (color bucket) style for rendering the states layers. The predefined theme (US_STATES_BI) is defined as follows. SQL> select styling_rules from user_sdo_themes where name='US_STATES_BI'; STYLING_RULES -------------------------------------------------------------------------------- <?xml version="1.0" standalone="yes"?> <styling_rules highlight_style="C.CB_QUAL_8_CLASS_DARK2_1"> <hidden_info> <field column="STATE" name="Name"/> <field column="POPPSQMI" name="POPPSQMI"/> <field column="TOTPOP" name="TOTPOP"/> </hidden_info> <rule column="TOTPOP"> <features style="states_totpop"> </features> <label column="STATE_ABRV" style="T.BLUE_SERIF_10"> 1 </label> </rule> </styling_rules> SQL> The theme definition specifies that the state, poppsqmi, totpop, state_abrv, and geom columns will be queried from the states_32775 table. The state_abrv value will be used to label the state while the totpop value will be used to determine the color-fill from those defined in the states_totpop advanced style. The states_totpop style, which we will not use in our demo, is defined as shown below. SQL> select definition from user_sdo_styles where name='STATES_TOTPOP'; DEFINITION -------------------------------------------------------------------------------- <?xml version="1.0" ?> <AdvancedStyle> <BucketStyle> <Buckets default_style="C.S02_COUNTRY_AREA"> <RangedBucket seq="0" label="10K - 5M" low="10000" high="5000000" style="C.SEQ6_01" /> <RangedBucket seq="1" label="5M - 12M" low="5000001" high="1.2E7" style="C.SEQ6_02" /> <RangedBucket seq="2" label="12M - 20M" low="1.2000001E7" high="2.0E7" style="C.SEQ6_04" /> <RangedBucket seq="3" label="&gt; 20M" low="2.0000001E7" high="5.0E7" style="C.SEQ6_05" /> </Buckets> </BucketStyle> </AdvancedStyle> SQL> The demo defines additional advanced styles via the OM.style object and methods and uses those instead when rendering the states layer.   Now let's look at relevant snippets of code that defines the map extent and zoom levels (i.e. 
the OM.universe),  loads the states predefined vector layer (OM.layer), and sets up the advanced (color bucket) style. Defining the map extent and zoom levels. function initMap() {   //alert("Initialize map view");     // define the map extent and number of zoom levels.   // The Universe object is similar to the map tile layer configuration   // It defines the map extent, number of zoom levels, and spatial reference system   // well-known ones (like web mercator/google/bing or maps.oracle/elocation are predefined   // The Universe must be defined when there is no underlying map tile layer.   // When there is a map tile layer then that defines the map extent, srid, and zoom levels.      var uni= new OM.universe.Universe(     {         srid : 32775,         bounds : new OM.geometry.Rectangle(                         -3280000, 170000, 2300000, 3200000, 32775),         numberOfZoomLevels: 8     }); The srid specifies the spatial reference system which is Equal-Area Projection (United States). SQL> select cs_name from cs_srs where srid=32775 ; CS_NAME --------------------------------------------------- Equal-Area Projection (United States) The bounds defines the map extent. It is a Rectangle defined using the lower-left and upper-right coordinates and srid. Loading and displaying the states layer This is done in the states() function. The full code is at the end of this post, however here's the snippet which defines the states VectorLayer.     // States is a predefined layer in user_sdo_themes     var  layer2 = new OM.layer.VectorLayer("vLayer2",     {         def:         {             type:OM.layer.VectorLayer.TYPE_PREDEFINED,             dataSource:"mvdemo",             theme:"us_states_bi",             url: baseURL,             loadOnDemand: false         },         boundingTheme:true      }); The first parameter is a layer name, the second is an object literal for a layer config. The config object has two attributes: the first is the layer definition, the second specifies whether the layer is a bounding one (i.e. used to determine the current map zoom and center such that the whole layer is displayed within the map window) or not. The layer config has the following attributes: type - specifies whether is a predefined one, a defined via a SQL query (JDBC), or in a json-format file (DATAPACK) theme - is the predefined theme's name url - is the location of the mapviewer server loadOnDemand - specifies whether to load all the features or just those that lie within the current map window and load additional ones as needed on a pan or zoom The code snippet below dynamically defines an advanced style and then uses it, instead of the 'states_totpop' style, when rendering the states layer. // override predefined rendering style with programmatic one    var theRenderingStyle =      createBucketColorStyle('YlBr5', colorSeries, 'States5', true);   // specify which attribute is used in determining the bucket (i.e. color) to use for the state   // It can be an array because the style could be a chart type (pie/bar)   // which requires multiple attribute columns     // Use the STATE.TOTPOP column (aka attribute) value here    layer2.setRenderingStyle(theRenderingStyle, ["TOTPOP"]); The style itself is defined in the createBucketColorStyle() function. Dynamically defining an advanced style The advanced style used here is a bucket color style, i.e. a color style is associated with each bucket. So first we define the colors and then the buckets.     
numClasses = colorSeries[colorName].classes;    // create Color Styles    for (var i=0; i < numClasses; i++)    {         theStyles[i] = new OM.style.Color(                      {fill: colorSeries[colorName].fill[i],                        stroke:colorSeries[colorName].stroke[i],                       strokeOpacity: useGradient? 0.25 : 1                      });    }; numClasses is the number of buckets. The colorSeries array contains the color fill and stroke definitions and is: var colorSeries = { //multi-hue color scheme #10 YlBl. "YlBl3": {   classes:3,                  fill: [0xEDF8B1, 0x7FCDBB, 0x2C7FB8],                  stroke:[0xB5DF9F, 0x72B8A8, 0x2872A6]   }, "YlBl5": {   classes:5,                  fill:[0xFFFFCC, 0xA1DAB4, 0x41B6C4, 0x2C7FB8, 0x253494],                  stroke:[0xE6E6B8, 0x91BCA2, 0x3AA4B0, 0x2872A6, 0x212F85]   }, //multi-hue color scheme #11 YlBr.  "YlBr3": {classes:3,                  fill:[0xFFF7BC, 0xFEC44F, 0xD95F0E],                  stroke:[0xE6DEA9, 0xE5B047, 0xC5360D]   }, "YlBr5": {classes:5,                  fill:[0xFFFFD4, 0xFED98E, 0xFE9929, 0xD95F0E, 0x993404],                  stroke:[0xE6E6BF, 0xE5C380, 0xE58A25, 0xC35663, 0x8A2F04]     }, etc. Next we create the bucket style.    bucketStyleDef = {       numClasses : colorSeries[colorName].classes, //      classification: 'custom',  //since we are supplying all the buckets //      buckets: theBuckets,       classification: 'logarithmic',  // use a logarithmic scale       styles: theStyles,       gradient:  useGradient? 'linear' : 'off' //      gradient:  useGradient? 'radial' : 'off'     };    theBucketStyle = new OM.style.BucketStyle(bucketStyleDef);    return theBucketStyle; A BucketStyle constructor takes a style definition as input. The style definition specifies the number of buckets (numClasses), a classification scheme (which can be equal-ranged, logarithmic scale, or custom), the styles for each bucket, whether to use a gradient effect, and optionally the buckets (required when using a custom classification scheme). The full source for the demo <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd"> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> <title>Oracle Maps V2 Thematic Map Demo</title> <script src="http://localhost:8080/mapviewer/jslib/v2/oraclemapsv2.js" type="text/javascript"> </script> <script type="text/javascript"> //var $j = jQuery.noConflict(); var baseURL="http://localhost:8080/mapviewer"; // location of mapviewer OM.gv.proxyEnabled =false; // no mvproxy needed OM.gv.setResourcePath(baseURL+"/jslib/v2/images/"); // location of resources for UI elements like nav panel buttons var map = null; // the client mapviewer object var statesLayer = null, stateCountyLayer = null; // The vector layers for states and counties in a state var layerName="States"; // initial map center and zoom var mapCenterLon = -20000; var mapCenterLat = 1750000; var mapZoom = 2; var mpoint = new OM.geometry.Point(mapCenterLon,mapCenterLat,32775); var currentPalette = null, currentStyle=null; // set an onchange listener for the color palette select list // initialize the map // load and display the states layer $(document).ready( function() { $("#demo-htmlselect").change(function() { var theColorScheme = $(this).val(); useSelectedColorScheme(theColorScheme); }); initMap(); states(); } ); /** * color series from ColorBrewer site (http://colorbrewer2.org/). */ var colorSeries = { //multi-hue color scheme #10 YlBl. 
"YlBl3": { classes:3, fill: [0xEDF8B1, 0x7FCDBB, 0x2C7FB8], stroke:[0xB5DF9F, 0x72B8A8, 0x2872A6] }, "YlBl5": { classes:5, fill:[0xFFFFCC, 0xA1DAB4, 0x41B6C4, 0x2C7FB8, 0x253494], stroke:[0xE6E6B8, 0x91BCA2, 0x3AA4B0, 0x2872A6, 0x212F85] }, //multi-hue color scheme #11 YlBr. "YlBr3": {classes:3, fill:[0xFFF7BC, 0xFEC44F, 0xD95F0E], stroke:[0xE6DEA9, 0xE5B047, 0xC5360D] }, "YlBr5": {classes:5, fill:[0xFFFFD4, 0xFED98E, 0xFE9929, 0xD95F0E, 0x993404], stroke:[0xE6E6BF, 0xE5C380, 0xE58A25, 0xC35663, 0x8A2F04] }, // single-hue color schemes (blues, greens, greys, oranges, reds, purples) "Purples5": {classes:5, fill:[0xf2f0f7, 0xcbc9e2, 0x9e9ac8, 0x756bb1, 0x54278f], stroke:[0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3] }, "Blues5": {classes:5, fill:[0xEFF3FF, 0xbdd7e7, 0x68aed6, 0x3182bd, 0x18519C], stroke:[0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3] }, "Greens5": {classes:5, fill:[0xedf8e9, 0xbae4b3, 0x74c476, 0x31a354, 0x116d2c], stroke:[0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3] }, "Greys5": {classes:5, fill:[0xf7f7f7, 0xcccccc, 0x969696, 0x636363, 0x454545], stroke:[0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3] }, "Oranges5": {classes:5, fill:[0xfeedde, 0xfdb385, 0xfd8d3c, 0xe6550d, 0xa63603], stroke:[0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3] }, "Reds5": {classes:5, fill:[0xfee5d9, 0xfcae91, 0xfb6a4a, 0xde2d26, 0xa50f15], stroke:[0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3] } }; function createBucketColorStyle( colorName, colorSeries, rangeName, useGradient) { var theBucketStyle; var bucketStyleDef; var theStyles = []; var theColors = []; var aBucket, aStyle, aColor, aRange; var numClasses ; numClasses = colorSeries[colorName].classes; // create Color Styles for (var i=0; i < numClasses; i++) { theStyles[i] = new OM.style.Color( {fill: colorSeries[colorName].fill[i], stroke:colorSeries[colorName].stroke[i], strokeOpacity: useGradient? 0.25 : 1 }); }; bucketStyleDef = { numClasses : colorSeries[colorName].classes, // classification: 'custom', //since we are supplying all the buckets // buckets: theBuckets, classification: 'logarithmic', // use a logarithmic scale styles: theStyles, gradient: useGradient? 'linear' : 'off' // gradient: useGradient? 'radial' : 'off' }; theBucketStyle = new OM.style.BucketStyle(bucketStyleDef); return theBucketStyle; } function initMap() { //alert("Initialize map view"); // define the map extent and number of zoom levels. // The Universe object is similar to the map tile layer configuration // It defines the map extent, number of zoom levels, and spatial reference system // well-known ones (like web mercator/google/bing or maps.oracle/elocation are predefined // The Universe must be defined when there is no underlying map tile layer. // When there is a map tile layer then that defines the map extent, srid, and zoom levels. 
var uni= new OM.universe.Universe( { srid : 32775, bounds : new OM.geometry.Rectangle( -3280000, 170000, 2300000, 3200000, 32775), numberOfZoomLevels: 8 }); map = new OM.Map( document.getElementById('map'), { mapviewerURL: baseURL, universe:uni }) ; var navigationPanelBar = new OM.control.NavigationPanelBar(); map.addMapDecoration(navigationPanelBar); } // end initMap function states() { //alert("Load and display states"); layerName = "States"; if(statesLayer) { // states were already visible but the style may have changed // so set the style to the currently selected one var theData = $('#demo-htmlselect').val(); setStyle(theData); } else { // States is a predefined layer in user_sdo_themes var layer2 = new OM.layer.VectorLayer("vLayer2", { def: { type:OM.layer.VectorLayer.TYPE_PREDEFINED, dataSource:"mvdemo", theme:"us_states_bi", url: baseURL, loadOnDemand: false }, boundingTheme:true }); // add drop shadow effect and hover style var shadowFilter = new OM.visualfilter.DropShadow({opacity:0.5, color:"#000000", offset:6, radius:10}); var hoverStyle = new OM.style.Color( {stroke:"#838383", strokeThickness:2}); layer2.setHoverStyle(hoverStyle); layer2.setHoverVisualFilter(shadowFilter); layer2.enableFeatureHover(true); layer2.enableFeatureSelection(false); layer2.setLabelsVisible(true); // override predefined rendering style with programmatic one var theRenderingStyle = createBucketColorStyle('YlBr5', colorSeries, 'States5', true); // specify which attribute is used in determining the bucket (i.e. color) to use for the state // It can be an array because the style could be a chart type (pie/bar) // which requires multiple attribute columns // Use the STATE.TOTPOP column (aka attribute) value here layer2.setRenderingStyle(theRenderingStyle, ["TOTPOP"]); currentPalette = "YlBr5"; var stLayerIdx = map.addLayer(layer2); //alert('State Layer Idx = ' + stLayerIdx); map.setMapCenter(mpoint); map.setMapZoomLevel(mapZoom) ; // display the map map.init() ; statesLayer=layer2; // add rt-click event listener to show counties for the state layer2.addListener(OM.event.MouseEvent.MOUSE_RIGHT_CLICK,stateRtClick); } // end if } // end states function setStyle(styleName) { // alert("Selected Style = " + styleName); // there may be a counties layer also displayed. // that wll have different bucket ranges so create // one style for states and one for counties var newRenderingStyle = null; if (layerName === "States") { if(/3/.test(styleName)) { newRenderingStyle = createBucketColorStyle(styleName, colorSeries, 'States3', false); currentStyle = createBucketColorStyle(styleName, colorSeries, 'Counties3', false); } else { newRenderingStyle = createBucketColorStyle(styleName, colorSeries, 'States5', false); currentStyle = createBucketColorStyle(styleName, colorSeries, 'Counties5', false); } statesLayer.setRenderingStyle(newRenderingStyle, ["TOTPOP"]); if (stateCountyLayer) stateCountyLayer.setRenderingStyle(currentStyle, ["TOTPOP"]); } } // end setStyle function stateRtClick(evt){ var foi = evt.feature; //alert('Rt-Click on State: ' + foi.attributes['_label_'] + // ' with pop ' + foi.attributes['TOTPOP']); // display another layer with counties info // layer may change on each rt-click so create and add each time. 
var countyByState = null ; // the _label_ attribute of a feature in this case is the state abbreviation // we will use that to query and get the counties for a state var sqlText = "select totpop,geom32775 from counties_32775_moved where state_abrv="+ "'"+foi.getAttributeValue('_label_')+"'"; // alert(sqlText); if (currentStyle === null) currentStyle = createBucketColorStyle('YlBr5', colorSeries, 'Counties5', false); /* try a simple style instead new OM.style.ColorStyle( { stroke: "#B8F4FF", fill: "#18E5F4", fillOpacity:0 } ); */ // remove existing layer if any if(stateCountyLayer) map.removeLayer(stateCountyLayer); countyByState = new OM.layer.VectorLayer("stCountyLayer", {def:{type:OM.layer.VectorLayer.TYPE_JDBC, dataSource:"mvdemo", sql:sqlText, url:baseURL}}); // url:baseURL}, // renderingStyle:currentStyle}); countyByState.setVisible(true); // specify which attribute is used in determining the bucket (i.e. color) to use for the state countyByState.setRenderingStyle(currentStyle, ["TOTPOP"]); var ctLayerIdx = map.addLayer(countyByState); // alert('County Layer Idx = ' + ctLayerIdx); //map.addLayer(countyByState); stateCountyLayer = countyByState; } // end stateRtClick function useSelectedColorScheme(theColorScheme) { if(map) { // code to update renderStyle goes here //alert('will try to change render style'); setStyle(theColorScheme); } else { // do nothing } } </script> </head> <body bgcolor="#b4c5cc" style="height:100%;font-family:Arial,Helvetica,Verdana"> <h3 align="center">State population thematic map </h3> <div id="demo" style="position:absolute; left:68%; top:44px; width:28%; height:100%"> <HR/> <p/> Choose Color Scheme: <select id="demo-htmlselect"> <option value="YlBl3"> YellowBlue3</option> <option value="YlBr3"> YellowBrown3</option> <option value="YlBl5"> YellowBlue5</option> <option value="YlBr5" selected="selected"> YellowBrown5</option> <option value="Blues5"> Blues</option> <option value="Greens5"> Greens</option> <option value="Greys5"> Greys</option> <option value="Oranges5"> Oranges</option> <option value="Purples5"> Purples</option> <option value="Reds5"> Reds</option> </select> <p/> </div> <div id="map" style="position:absolute; left:10px; top:50px; width:65%; height:75%; background-color:#778f99"></div> <div style="position:absolute;top:85%; left:10px;width:98%" class="noprint"> <HR/> <p> Note: This demo uses HTML5 Canvas and requires IE9+, Firefox 10+, or Chrome. No map will show up in IE8 or earlier. </p> </div> </body> </html>

    Read the article

  • Class Mapping Error: 'T' must be a non-abstract type with a public parameterless constructor

    - by Amit Ranjan
    Hi, While mapping class i am getting error 'T' must be a non-abstract type with a public parameterless constructor in order to use it as parameter 'T' in the generic type or method. Below is my SqlReaderBase Class public abstract class SqlReaderBase<T> : ConnectionProvider { #region Abstract Methods protected abstract string commandText { get; } protected abstract CommandType commandType { get; } protected abstract Collection<IDataParameter> GetParameters(IDbCommand command); **protected abstract MapperBase<T> GetMapper();** #endregion #region Non Abstract Methods /// <summary> /// Method to Execute Select Queries for Retrieveing List of Result /// </summary> /// <returns></returns> public Collection<T> ExecuteReader() { //Collection of Type on which Template is applied Collection<T> collection = new Collection<T>(); // initializing connection using (IDbConnection connection = GetConnection()) { try { // creates command for sql operations IDbCommand command = connection.CreateCommand(); // assign connection to command command.Connection = connection; // assign query command.CommandText = commandText; //state what type of query is used, text, table or Sp command.CommandType = commandType; // retrieves parameter from IDataParameter Collection and assigns it to command object foreach (IDataParameter param in GetParameters(command)) command.Parameters.Add(param); // Establishes connection with database server connection.Open(); // Since it is designed for executing Select statements that will return a list of results // so we will call command's execute reader method that return a Forward Only reader with // list of results inside. using (IDataReader reader = command.ExecuteReader()) { try { // Call to Mapper Class of the template to map the data to its // respective fields MapperBase<T> mapper = GetMapper(); collection = mapper.MapAll(reader); } catch (Exception ex) // catch exception { throw ex; // log errr } finally { reader.Close(); reader.Dispose(); } } } catch (Exception ex) { throw ex; } finally { connection.Close(); connection.Dispose(); } } return collection; } #endregion } What I am trying to do is , I am executine some command and filling my class dynamically. 
The class is given below: namespace FooZo.Core { public class Restaurant { #region Private Member Variables private int _restaurantId = 0; private string _email = string.Empty; private string _website = string.Empty; private string _name = string.Empty; private string _address = string.Empty; private string _phone = string.Empty; private bool _hasMenu = false; private string _menuImagePath = string.Empty; private int _cuisine = 0; private bool _hasBar = false; private bool _hasHomeDelivery = false; private bool _hasDineIn = false; private int _type = 0; private string _restaurantImagePath = string.Empty; private string _serviceAvailableTill = string.Empty; private string _serviceAvailableFrom = string.Empty; public string Name { get { return _name; } set { _name = value; } } public string Address { get { return _address; } set { _address = value; } } public int RestaurantId { get { return _restaurantId; } set { _restaurantId = value; } } public string Website { get { return _website; } set { _website = value; } } public string Email { get { return _email; } set { _email = value; } } public string Phone { get { return _phone; } set { _phone = value; } } public bool HasMenu { get { return _hasMenu; } set { _hasMenu = value; } } public string MenuImagePath { get { return _menuImagePath; } set { _menuImagePath = value; } } public string RestaurantImagePath { get { return _restaurantImagePath; } set { _restaurantImagePath = value; } } public int Type { get { return _type; } set { _type = value; } } public int Cuisine { get { return _cuisine; } set { _cuisine = value; } } public bool HasBar { get { return _hasBar; } set { _hasBar = value; } } public bool HasHomeDelivery { get { return _hasHomeDelivery; } set { _hasHomeDelivery = value; } } public bool HasDineIn { get { return _hasDineIn; } set { _hasDineIn = value; } } public string ServiceAvailableFrom { get { return _serviceAvailableFrom; } set { _serviceAvailableFrom = value; } } public string ServiceAvailableTill { get { return _serviceAvailableTill; } set { _serviceAvailableTill = value; } } #endregion public Restaurant() { } } } For filling my class properties dynamically i have another class called MapperBase Class with following methods: public abstract class MapperBase<T> where T : new() { protected T Map(IDataRecord record) { T instance = new T(); string fieldName; PropertyInfo[] properties = typeof(T).GetProperties(); for (int i = 0; i < record.FieldCount; i++) { fieldName = record.GetName(i); foreach (PropertyInfo property in properties) { if (property.Name == fieldName) { property.SetValue(instance, record[i], null); } } } return instance; } public Collection<T> MapAll(IDataReader reader) { Collection<T> collection = new Collection<T>(); while (reader.Read()) { collection.Add(Map(reader)); } return collection; } } There is another class which inherits the SqlreaderBaseClass called DefaultSearch. 
The code is below: public class DefaultSearch: SqlReaderBase<Restaurant> { protected override string commandText { get { return "Select Name from vw_Restaurants"; } } protected override CommandType commandType { get { return CommandType.Text; } } protected override Collection<IDataParameter> GetParameters(IDbCommand command) { Collection<IDataParameter> parameters = new Collection<IDataParameter>(); parameters.Clear(); return parameters; } protected override MapperBase<Restaurant> GetMapper() { MapperBase<Restaurant> mapper = new RMapper(); return mapper; } } But whenever I try to build, I get the error 'T' must be a non-abstract type with a public parameterless constructor in order to use it as parameter 'T' in the generic type or method, even though T here is Restaurant, which has a public parameterless constructor.
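A likely cause, sketched below as a suggestion rather than a confirmed answer: MapperBase<T> declares the constraint where T : new(), so any generic type that exposes a MapperBase<T> (here SqlReaderBase<T> via its GetMapper() method) must carry the same constraint, otherwise the compiler cannot prove that T satisfies it. Adding the constraint to SqlReaderBase<T> should let the declaration compile; the member list is abbreviated.

// Sketch: propagate the new() constraint required by MapperBase<T>
// to the class that uses it. Restaurant already has a public
// parameterless constructor, so DefaultSearch : SqlReaderBase<Restaurant> still compiles.
public abstract class SqlReaderBase<T> : ConnectionProvider where T : new()
{
    protected abstract string commandText { get; }
    protected abstract CommandType commandType { get; }
    protected abstract Collection<IDataParameter> GetParameters(IDbCommand command);
    protected abstract MapperBase<T> GetMapper(); // now T satisfies MapperBase<T>'s new() constraint
    // ... ExecuteReader() unchanged ...
}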

    Read the article

  • Spring security - Reach users ID without passing it through every controller

    - by nilsi
    I have a design issue that I don't know how to solve. I'm using Spring 3.2.4 and Spring security 3.1.4. I have a Account table in my database that looks like this: create table Account (id identity, username varchar unique, password varchar not null, firstName varchar not null, lastName varchar not null, university varchar not null, primary key (id)); Until recently my username was just only a username but I changed it to be the email address instead since many users want to login with that instead. I have a header that I include on all my pages which got a link to the users profile like this: <a href="/project/users/<%= request.getUserPrincipal().getName()%>" class="navbar-link"><strong><%= request.getUserPrincipal().getName()%></strong></a> The problem is that <%= request.getUserPrincipal().getName()%> returns the email now, I don't want to link the user's with thier emails. Instead I want to use the id every user have to link to the profile. How do I reach the users id's from every page? I have been thinking of two solutions but I'm not sure: Change the principal to contain the id as well, don't know how to do this and having problem finding good information on the topic. Add a model attribute to all my controllers that contain the whole user but this would be really ugly, like this. Account account = entityManager.find(Account.class, email); model.addAttribute("account", account); There are more way's as well and I have no clue which one is to prefer. I hope it's clear enough and thank you for any help on this. ====== Edit according to answer ======= I edited Account to implement UserDetails, it now looks like this (will fix the auto generated stuff later): @Entity @Table(name="Account") public class Account implements UserDetails { @Id private int id; private String username; private String password; private String firstName; private String lastName; @ManyToOne private University university; public Account() { } public Account(String username, String password, String firstName, String lastName, University university) { this.username = username; this.password = password; this.firstName = firstName; this.lastName = lastName; this.university = university; } public String getUsername() { return username; } public String getPassword() { return password; } public String getFirstName() { return firstName; } public String getLastName() { return lastName; } public void setUsername(String username) { this.username = username; } public void setPassword(String password) { this.password = password; } public void setFirstName(String firstName) { this.firstName = firstName; } public void setLastName(String lastName) { this.lastName = lastName; } public University getUniversity() { return university; } public void setUniversity(University university) { this.university = university; } public int getId() { return id; } public void setId(int id) { this.id = id; } @Override public Collection<? 
extends GrantedAuthority> getAuthorities() { // TODO Auto-generated method stub return null; } @Override public boolean isAccountNonExpired() { // TODO Auto-generated method stub return false; } @Override public boolean isAccountNonLocked() { // TODO Auto-generated method stub return false; } @Override public boolean isCredentialsNonExpired() { // TODO Auto-generated method stub return false; } @Override public boolean isEnabled() { // TODO Auto-generated method stub return true; } } I also added <%@ taglib prefix="sec" uri="http://www.springframework.org/security/tags" %> To my jsp files and trying to reach the id by <sec:authentication property="principal.id" /> This gives me the following org.springframework.beans.NotReadablePropertyException: Invalid property 'principal.id' of bean class [org.springframework.security.authentication.UsernamePasswordAuthenticationToken]: Bean property 'principal.id' is not readable or has an invalid getter method: Does the return type of the getter match the parameter type of the setter? ====== Edit 2 according to answer ======= I based my application on spring social samples and I never had to change anything until now. This are the files I think are relevant, please tell me if theres something you need to see besides this. AccountRepository.java public interface AccountRepository { void createAccount(Account account) throws UsernameAlreadyInUseException; Account findAccountByUsername(String username); } JdbcAccountRepository.java @Repository public class JdbcAccountRepository implements AccountRepository { private final JdbcTemplate jdbcTemplate; private final PasswordEncoder passwordEncoder; @Inject public JdbcAccountRepository(JdbcTemplate jdbcTemplate, PasswordEncoder passwordEncoder) { this.jdbcTemplate = jdbcTemplate; this.passwordEncoder = passwordEncoder; } @Transactional public void createAccount(Account user) throws UsernameAlreadyInUseException { try { jdbcTemplate.update( "insert into Account (firstName, lastName, username, university, password) values (?, ?, ?, ?, ?)", user.getFirstName(), user.getLastName(), user.getUsername(), user.getUniversity(), passwordEncoder.encode(user.getPassword())); } catch (DuplicateKeyException e) { throw new UsernameAlreadyInUseException(user.getUsername()); } } public Account findAccountByUsername(String username) { return jdbcTemplate.queryForObject("select username, firstName, lastName, university from Account where username = ?", new RowMapper<Account>() { public Account mapRow(ResultSet rs, int rowNum) throws SQLException { return new Account(rs.getString("username"), null, rs.getString("firstName"), rs.getString("lastName"), new University("test")); } }, username); } } security.xml <?xml version="1.0" encoding="UTF-8"?> <beans:beans xmlns="http://www.springframework.org/schema/security" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:beans="http://www.springframework.org/schema/beans" xsi:schemaLocation="http://www.springframework.org/schema/security http://www.springframework.org/schema/security/spring-security-3.1.xsd http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.1.xsd"> <http pattern="/resources/**" security="none" /> <http pattern="/project/" security="none" /> <http use-expressions="true"> <!-- Authentication policy --> <form-login login-page="/signin" login-processing-url="/signin/authenticate" authentication-failure-url="/signin?error=bad_credentials" /> <logout logout-url="/signout" delete-cookies="JSESSIONID" /> <intercept-url 
pattern="/addcourse" access="isAuthenticated()" /> <intercept-url pattern="/courses/**/**/edit" access="isAuthenticated()" /> <intercept-url pattern="/users/**/edit" access="isAuthenticated()" /> </http> <authentication-manager alias="authenticationManager"> <authentication-provider> <password-encoder ref="passwordEncoder" /> <jdbc-user-service data-source-ref="dataSource" users-by-username-query="select username, password, true from Account where username = ?" authorities-by-username-query="select username, 'ROLE_USER' from Account where username = ?"/> </authentication-provider> <authentication-provider> <user-service> <user name="admin" password="admin" authorities="ROLE_USER, ROLE_ADMIN" /> </user-service> </authentication-provider> </authentication-manager> </beans:beans> And this is my try of implementing a UserDetailsService public class RepositoryUserDetailsService implements UserDetailsService { private final AccountRepository accountRepository; @Autowired public RepositoryUserDetailsService(AccountRepository repository) { this.accountRepository = repository; } @Override public UserDetails loadUserByUsername(String username) throws UsernameNotFoundException { Account user = accountRepository.findAccountByUsername(username); if (user == null) { throw new UsernameNotFoundException("No user found with username: " + username); } return user; } } Still gives me the same error, do I need to add the UserDetailsService somewhere? This is starting to be something else compared to my initial question, I should maybe start another question. Sorry for my lack of experience in this. I have to read up.

    Read the article

  • Many to Many Logic in ASP.NET MVC

    - by Jonathan Stowell
    Hi All, I will restrict this to the three tables I am trying to work with Problem, Communications, and ProbComms. The scenario is that a Student may have many Problems concurrently which may affect their studies. Lecturers may have future communications with a student after an initial problem is logged, however as a Student may have multiple Problems the Lecturer may decide that the discussion they had is related to more than one Problem. Here is a screenshot of the LINQ representation of my DB: LINQ Screenshot At the moment in my StudentController I have a StudentFormViewModel Class: // //ViewModel Class public class StudentFormViewModel { IProbCommRepository probCommRepository; // Properties public Student Student { get; private set; } public IEnumerable<ProbComm> ProbComm { get; private set; } // // Dependency Injection enabled constructors public StudentFormViewModel(Student student, IEnumerable<ProbComm> probComm) : this(new ProbCommRepository()) { this.Student = student; this.ProbComm = probComm; } public StudentFormViewModel(IProbCommRepository pRepository) { probCommRepository = pRepository; } } When I go to the Students Detail Page this runs: public ActionResult Details(string id) { StudentFormViewModel viewdata = new StudentFormViewModel(studentRepository.GetStudent(id), probCommRepository.FindAllProblemComms(id)); if (viewdata == null) return View("NotFound"); else return View(viewdata); } The GetStudent works fine and returns an instance of the student to output on the page, below the student I output all problems logged against them, but underneath these problems I want to show the communications related to the Problem. The LINQ I am using for ProbComms is This is located in the Model class ProbCommRepository, and accessed via a IProbCommRepository interface: public IQueryable<ProbComm> FindAllProblemComms(string studentEmail) { return (from p in db.ProbComms where p.Problem.StudentEmail.Equals(studentEmail) orderby p.Problem.ProblemDateTime select p); } However for example if I have this data in the ProbComms table: ProblemID CommunicationID 1 1 1 2 The query returns two rows so I assume I somehow have to groupby Problem or ProblemID but I am not too sure how to do this with the way I have built things as the return type has to be ProbComm for the query as thats what Model class its located in. 
When it comes to the view the Details.aspx calls two partial views each passing the relevant view data through, StudentDetails works fine page: <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<MitigatingCircumstances.Controllers.StudentFormViewModel>" %> <% Html.RenderPartial("StudentDetails", this.ViewData.Model.Student); %> <% Html.RenderPartial("StudentProblems", this.ViewData.Model.ProbComm); %> StudentProblems uses a foreach loop to loop through records in the Model and I am trying another foreach loop to output the communication details: <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<IEnumerable<MitigatingCircumstances.Models.ProbComm>>" %> <script type="text/javascript" language="javascript"> $(document).ready(function() { $("DIV.ContainerPanel > DIV.collapsePanelHeader > DIV.ArrowExpand").toggle( function() { $(this).parent().next("div.Content").show("slow"); $(this).attr("class", "ArrowClose"); }, function() { $(this).parent().next("div.Content").hide("slow"); $(this).attr("class", "ArrowExpand"); }); }); </script> <div class="studentProblems"> <% var i = 0; foreach (var item in Model) { %> <div id="ContainerPanel<%= i = i + 1 %>" class="ContainerPanel"> <div id="header<%= i = i + 1 %>" class="collapsePanelHeader"> <div id="dvHeaderText<%= i = i + 1 %>" class="HeaderContent"><%= Html.Encode(String.Format("{0:dd/MM/yyyy}", item.Problem.ProblemDateTime))%></div> <div id="dvArrow<%= i = i + 1 %>" class="ArrowExpand"></div> </div> <div id="dvContent<%= i = i + 1 %>" class="Content" style="display: none"> <p> Type: <%= Html.Encode(item.Problem.CommunicationType.TypeName) %> </p> <p> Problem Outline: <%= Html.Encode(item.Problem.ProblemOutline)%> </p> <p> Mitigating Circumstance Form: <%= Html.Encode(item.Problem.MCF)%> </p> <p> Mitigating Circumstance Level: <%= Html.Encode(item.Problem.MitigatingCircumstanceLevel.MCLevel)%> </p> <p> Absent From: <%= Html.Encode(String.Format("{0:g}", item.Problem.AbsentFrom))%> </p> <p> Absent Until: <%= Html.Encode(String.Format("{0:g}", item.Problem.AbsentUntil))%> </p> <p> Requested Follow Up: <%= Html.Encode(String.Format("{0:g}", item.Problem.RequestedFollowUp))%> </p> <p>Problem Communications</p> <% foreach (var comm in Model) { %> <p> <% if (item.Problem.ProblemID == comm.ProblemID) { %> <%= Html.Encode(comm.ProblemCommunication.CommunicationOutline)%> <% } %> </p> <% } %> </div> </div> <br /> <% } %> </div> The issue is that using the example data before the Model has two records for the same problem as there are two communications for that problem, therefore duplicating the output. Any help with this would be gratefully appreciated. Thanks, Jon
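    One way to remove the duplicated rows, sketched as a change to the repository (FindProblemCommsGrouped is a hypothetical method name, not existing code): group the flat ProbComm rows by ProblemID so the view can print each problem once and nest its communications under it.

        // Hedged sketch - grouping is done in memory after the query runs.
        public IEnumerable<IGrouping<int, ProbComm>> FindProblemCommsGrouped(string studentEmail)
        {
            return (from p in db.ProbComms
                    where p.Problem.StudentEmail.Equals(studentEmail)
                    orderby p.Problem.ProblemDateTime
                    select p)
                   .AsEnumerable()                 // materialise first; grouping in memory keeps LINQ to SQL simple
                   .GroupBy(p => p.ProblemID);
        }

    The partial view then loops over each group, reads the problem fields from group.First().Problem, and runs the inner loop over the group itself instead of over the whole Model, so each communication appears only under its own problem.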

    Read the article

  • Many to Many with LINQ-To-Sql and ASP.NET MVC

    - by Jonathan Stowell
    Hi All, I will restrict this to the three tables I am trying to work with Problem, Communications, and ProbComms. The scenario is that a Student may have many Problems concurrently which may affect their studies. Lecturers may have future communications with a student after an initial problem is logged, however as a Student may have multiple Problems the Lecturer may decide that the discussion they had is related to more than one Problem. Here is a screenshot of the LINQ-To-Sql representation of my DB: LINQ-To-Sql Screenshot At the moment in my StudentController I have a StudentFormViewModel Class: // //ViewModel Class public class StudentFormViewModel { IProbCommRepository probCommRepository; // Properties public Student Student { get; private set; } public IEnumerable<ProbComm> ProbComm { get; private set; } // // Dependency Injection enabled constructors public StudentFormViewModel(Student student, IEnumerable<ProbComm> probComm) : this(new ProbCommRepository()) { this.Student = student; this.ProbComm = probComm; } public StudentFormViewModel(IProbCommRepository pRepository) { probCommRepository = pRepository; } } When I go to the Students Detail Page this runs: public ActionResult Details(string id) { StudentFormViewModel viewdata = new StudentFormViewModel(studentRepository.GetStudent(id), probCommRepository.FindAllProblemComms(id)); if (viewdata == null) return View("NotFound"); else return View(viewdata); } The GetStudent works fine and returns an instance of the student to output on the page, below the student I output all problems logged against them, but underneath these problems I want to show the communications related to the Problem. The LINQ I am using for ProbComms is This is located in the Model class ProbCommRepository, and accessed via a IProbCommRepository interface: public IQueryable<ProbComm> FindAllProblemComms(string studentEmail) { return (from p in db.ProbComms where p.Problem.StudentEmail.Equals(studentEmail) orderby p.Problem.ProblemDateTime select p); } However for example if I have this data in the ProbComms table: ProblemID CommunicationID 1 1 1 2 The query returns two rows so I assume I somehow have to groupby Problem or ProblemID but I am not too sure how to do this with the way I have built things as the return type has to be ProbComm for the query as thats what Model class its located in. 
When it comes to the view the Details.aspx calls two partial views each passing the relevant view data through, StudentDetails works fine page: <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<MitigatingCircumstances.Controllers.StudentFormViewModel>" %> <% Html.RenderPartial("StudentDetails", this.ViewData.Model.Student); %> <% Html.RenderPartial("StudentProblems", this.ViewData.Model.ProbComm); %> StudentProblems uses a foreach loop to loop through records in the Model and I am trying another foreach loop to output the communication details: <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<IEnumerable<MitigatingCircumstances.Models.ProbComm>>" %> <script type="text/javascript" language="javascript"> $(document).ready(function() { $("DIV.ContainerPanel > DIV.collapsePanelHeader > DIV.ArrowExpand").toggle( function() { $(this).parent().next("div.Content").show("slow"); $(this).attr("class", "ArrowClose"); }, function() { $(this).parent().next("div.Content").hide("slow"); $(this).attr("class", "ArrowExpand"); }); }); </script> <div class="studentProblems"> <% var i = 0; foreach (var item in Model) { %> <div id="ContainerPanel<%= i = i + 1 %>" class="ContainerPanel"> <div id="header<%= i = i + 1 %>" class="collapsePanelHeader"> <div id="dvHeaderText<%= i = i + 1 %>" class="HeaderContent"><%= Html.Encode(String.Format("{0:dd/MM/yyyy}", item.Problem.ProblemDateTime))%></div> <div id="dvArrow<%= i = i + 1 %>" class="ArrowExpand"></div> </div> <div id="dvContent<%= i = i + 1 %>" class="Content" style="display: none"> <p> Type: <%= Html.Encode(item.Problem.CommunicationType.TypeName) %> </p> <p> Problem Outline: <%= Html.Encode(item.Problem.ProblemOutline)%> </p> <p> Mitigating Circumstance Form: <%= Html.Encode(item.Problem.MCF)%> </p> <p> Mitigating Circumstance Level: <%= Html.Encode(item.Problem.MitigatingCircumstanceLevel.MCLevel)%> </p> <p> Absent From: <%= Html.Encode(String.Format("{0:g}", item.Problem.AbsentFrom))%> </p> <p> Absent Until: <%= Html.Encode(String.Format("{0:g}", item.Problem.AbsentUntil))%> </p> <p> Requested Follow Up: <%= Html.Encode(String.Format("{0:g}", item.Problem.RequestedFollowUp))%> </p> <p>Problem Communications</p> <% foreach (var comm in Model) { %> <p> <% if (item.Problem.ProblemID == comm.ProblemID) { %> <%= Html.Encode(comm.ProblemCommunication.CommunicationOutline)%> <% } %> </p> <% } %> </div> </div> <br /> <% } %> </div> The issue is that using the example data before the Model has two records for the same problem as there are two communications for that problem, therefore duplicating the output. Any help with this would be gratefully appreciated. Thanks, Jon
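    An alternative sketch that leaves the repository untouched (it assumes the partial view can import System.Linq): group the rows inside the StudentProblems partial, render the problem once per group, and loop the communications from the group rather than from the whole Model.

        <%-- Hedged sketch of the outer/inner loops in the partial view --%>
        <% foreach (var problemGroup in Model.GroupBy(p => p.ProblemID)) {
               var item = problemGroup.First(); %>
            <p>Problem Outline: <%= Html.Encode(item.Problem.ProblemOutline) %></p>
            <p>Problem Communications</p>
            <% foreach (var comm in problemGroup) { %>
                <p><%= Html.Encode(comm.ProblemCommunication.CommunicationOutline) %></p>
            <% } %>
        <% } %>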

    Read the article

  • TSQL Conditionally Select Specific Value

    - by Dzejms
    This is a follow-up to #1644748 where I successfully answered my own question, but Quassnoi helped me to realize that it was the wrong question. He gave me a solution that worked for my sample data, but I couldn't plug it back into the parent stored procedure because I fail at SQL 2005 syntax. So here is an attempt to paint the broader picture and ask what I actually need. This is part of a stored procedure that returns a list of items in a bug tracking application I've inherited. There are are over 100 fields and 26 joins so I'm pulling out only the mostly relevant bits. SELECT tickets.ticketid, tickets.tickettype, tickets_tickettype_lu.tickettypedesc, tickets.stage, tickets.position, tickets.sponsor, tickets.dev, tickets.qa, DATEDIFF(DAY, ticket_history_assignment.savedate, GETDATE()) as 'daysinqueue' FROM dbo.tickets WITH (NOLOCK) LEFT OUTER JOIN dbo.tickets_tickettype_lu WITH (NOLOCK) ON tickets.tickettype = tickets_tickettype_lu.tickettypeid LEFT OUTER JOIN dbo.tickets_history_assignment WITH (NOLOCK) ON tickets_history_assignment.ticketid = tickets.ticketid AND tickets_history_assignment.historyid = ( SELECT MAX(historyid) FROM dbo.tickets_history_assignment WITH (NOLOCK) WHERE tickets_history_assignment.ticketid = tickets.ticketid GROUP BY tickets_history_assignment.ticketid ) WHERE tickets.sponsor = @sponsor The area of interest is the daysinqueue subquery mess. The tickets_history_assignment table looks roughly as follows declare @tickets_history_assignment table ( historyid int, ticketid int, sponsor int, dev int, qa int, savedate datetime ) insert into @tickets_history_assignment values (1521402, 92774,20,14, 20, '2009-10-27 09:17:59.527') insert into @tickets_history_assignment values (1521399, 92774,20,14, 42, '2009-08-31 12:07:52.917') insert into @tickets_history_assignment values (1521311, 92774,100,14, 42, '2008-12-08 16:15:49.887') insert into @tickets_history_assignment values (1521336, 92774,100,14, 42, '2009-01-16 14:27:43.577') Whenever a ticket is saved, the current values for sponsor, dev and qa are stored in the tickets_history_assignment table with the ticketid and a timestamp. So it is possible for someone to change the value for qa, but leave sponsor alone. What I want to know, based on all of these conditions, is the historyid of the record in the tickets_history_assignment table where the sponsor value was last changed so that I can calculate the value for daysinqueue. If a record is inserted into the history table, and only the qa value has changed, I don't want that record. So simply relying on MAX(historyid) won't work for me. Quassnoi came up with the following which seemed to work with my sample data, but I can't plug it into the larger query, SQL Manager bitches about the WITH statement. ;WITH rows AS ( SELECT *, ROW_NUMBER() OVER (PARTITION BY ticketid ORDER BY savedate DESC) AS rn FROM @Table ) SELECT rl.sponsor, ro.savedate FROM rows rl CROSS APPLY ( SELECT TOP 1 rc.savedate FROM rows rc JOIN rows rn ON rn.ticketid = rc.ticketid AND rn.rn = rc.rn + 1 AND rn.sponsor <> rc.sponsor WHERE rc.ticketid = rl.ticketid ORDER BY rc.rn ) ro WHERE rl.rn = 1 I played with it yesterday afternoon and got nowhere because I don't fundamentally understand what is going on here and how it should fit into the larger context. So, any takers? UPDATE Ok, here's the whole thing. I've been switching some of the table and column names in an attempt to simplify things so here's the full unedited mess. 
snip - old bad code Here are the errors: Msg 102, Level 15, State 1, Procedure usp_GetProjectRecordsByAssignment, Line 159 Incorrect syntax near ';'. Msg 102, Level 15, State 1, Procedure usp_GetProjectRecordsByAssignment, Line 179 Incorrect syntax near ')'. Line numbers are of course not correct but refer to ;WITH rows AS And the ')' char after the WHERE rl.rn = 1 ) Respectively Is there a tag for extra super long question? UPDATE #2 Here is the finished query for anyone who may need this: CREATE PROCEDURE [dbo].[usp_GetProjectRecordsByAssignment] ( @assigned numeric(18,0), @assignedtype numeric(18,0) ) AS SET NOCOUNT ON WITH rows AS ( SELECT *, ROW_NUMBER() OVER (PARTITION BY recordid ORDER BY savedate DESC) AS rn FROM projects_history_assignment ) SELECT projects_records.recordid, projects_records.recordtype, projects_recordtype_lu.recordtypedesc, projects_records.stage, projects_stage_lu.stagedesc, projects_records.position, projects_position_lu.positiondesc, CASE projects_records.clientrequested WHEN '1' THEN 'Yes' WHEN '0' THEN 'No' END AS clientrequested, projects_records.reportingmethod, projects_reportingmethod_lu.reportingmethoddesc, projects_records.clientaccess, projects_clientaccess_lu.clientaccessdesc, projects_records.clientnumber, projects_records.project, projects_lu.projectdesc, projects_records.version, projects_version_lu.versiondesc, projects_records.projectedversion, projects_version_lu_projected.versiondesc AS projectedversiondesc, projects_records.sitetype, projects_sitetype_lu.sitetypedesc, projects_records.title, projects_records.module, projects_module_lu.moduledesc, projects_records.component, projects_component_lu.componentdesc, projects_records.loginusername, projects_records.loginpassword, projects_records.assistedusername, projects_records.browsername, projects_browsername_lu.browsernamedesc, projects_records.browserversion, projects_records.osname, projects_osname_lu.osnamedesc, projects_records.osversion, projects_records.errortype, projects_errortype_lu.errortypedesc, projects_records.gsipriority, projects_gsipriority_lu.gsiprioritydesc, projects_records.clientpriority, projects_clientpriority_lu.clientprioritydesc, projects_records.scheduledstartdate, projects_records.scheduledcompletiondate, projects_records.projectedhours, projects_records.actualstartdate, projects_records.actualcompletiondate, projects_records.actualhours, CASE projects_records.billclient WHEN '1' THEN 'Yes' WHEN '0' THEN 'No' END AS billclient, projects_records.billamount, projects_records.status, projects_status_lu.statusdesc, CASE CAST(projects_records.assigned AS VARCHAR(5)) WHEN '0' THEN 'N/A' WHEN '10000' THEN 'Unassigned' WHEN '20000' THEN 'Client' WHEN '30000' THEN 'Tech Support' WHEN '40000' THEN 'LMI Tech Support' WHEN '50000' THEN 'Upload' WHEN '60000' THEN 'Spider' WHEN '70000' THEN 'DB Admin' ELSE rtrim(users_assigned.nickname) + ' ' + rtrim(users_assigned.lastname) END AS assigned, CASE CAST(projects_records.assigneddev AS VARCHAR(5)) WHEN '0' THEN 'N/A' WHEN '10000' THEN 'Unassigned' ELSE rtrim(users_assigneddev.nickname) + ' ' + rtrim(users_assigneddev.lastname) END AS assigneddev, CASE CAST(projects_records.assignedqa AS VARCHAR(5)) WHEN '0' THEN 'N/A' WHEN '10000' THEN 'Unassigned' ELSE rtrim(users_assignedqa.nickname) + ' ' + rtrim(users_assignedqa.lastname) END AS assignedqa, CASE CAST(projects_records.assignedsponsor AS VARCHAR(5)) WHEN '0' THEN 'N/A' WHEN '10000' THEN 'Unassigned' ELSE rtrim(users_assignedsponsor.nickname) + ' ' + 
rtrim(users_assignedsponsor.lastname) END AS assignedsponsor, projects_records.clientcreated, CASE projects_records.clientcreated WHEN '1' THEN 'Yes' WHEN '0' THEN 'No' END AS clientcreateddesc, CASE projects_records.clientcreated WHEN '1' THEN rtrim(clientusers_createuser.firstname) + ' ' + rtrim(clientusers_createuser.lastname) + ' (Client)' ELSE rtrim(users_createuser.nickname) + ' ' + rtrim(users_createuser.lastname) END AS createuser, projects_records.createdate, projects_records.savedate, projects_resolution.sitesaffected, projects_sitesaffected_lu.sitesaffecteddesc, DATEDIFF(DAY, projects_history_assignment.savedate, GETDATE()) as 'daysinqueue', projects_records.iOnHitList, projects_records.changetype FROM dbo.projects_records WITH (NOLOCK) LEFT OUTER JOIN dbo.projects_recordtype_lu WITH (NOLOCK) ON projects_records.recordtype = projects_recordtype_lu.recordtypeid LEFT OUTER JOIN dbo.projects_stage_lu WITH (NOLOCK) ON projects_records.stage = projects_stage_lu.stageid LEFT OUTER JOIN dbo.projects_position_lu WITH (NOLOCK) ON projects_records.position = projects_position_lu.positionid LEFT OUTER JOIN dbo.projects_reportingmethod_lu WITH (NOLOCK) ON projects_records.reportingmethod = projects_reportingmethod_lu.reportingmethodid LEFT OUTER JOIN dbo.projects_lu WITH (NOLOCK) ON projects_records.project = projects_lu.projectid LEFT OUTER JOIN dbo.projects_version_lu WITH (NOLOCK) ON projects_records.version = projects_version_lu.versionid LEFT OUTER JOIN dbo.projects_version_lu projects_version_lu_projected WITH (NOLOCK) ON projects_records.projectedversion = projects_version_lu_projected.versionid LEFT OUTER JOIN dbo.projects_sitetype_lu WITH (NOLOCK) ON projects_records.sitetype = projects_sitetype_lu.sitetypeid LEFT OUTER JOIN dbo.projects_module_lu WITH (NOLOCK) ON projects_records.module = projects_module_lu.moduleid LEFT OUTER JOIN dbo.projects_component_lu WITH (NOLOCK) ON projects_records.component = projects_component_lu.componentid LEFT OUTER JOIN dbo.projects_browsername_lu WITH (NOLOCK) ON projects_records.browsername = projects_browsername_lu.browsernameid LEFT OUTER JOIN dbo.projects_osname_lu WITH (NOLOCK) ON projects_records.osname = projects_osname_lu.osnameid LEFT OUTER JOIN dbo.projects_errortype_lu WITH (NOLOCK) ON projects_records.errortype = projects_errortype_lu.errortypeid LEFT OUTER JOIN dbo.projects_resolution WITH (NOLOCK) ON projects_records.recordid = projects_resolution.recordid LEFT OUTER JOIN dbo.projects_sitesaffected_lu WITH (NOLOCK) ON projects_resolution.sitesaffected = projects_sitesaffected_lu.sitesaffectedid LEFT OUTER JOIN dbo.projects_gsipriority_lu WITH (NOLOCK) ON projects_records.gsipriority = projects_gsipriority_lu.gsipriorityid LEFT OUTER JOIN dbo.projects_clientpriority_lu WITH (NOLOCK) ON projects_records.clientpriority = projects_clientpriority_lu.clientpriorityid LEFT OUTER JOIN dbo.projects_status_lu WITH (NOLOCK) ON projects_records.status = projects_status_lu.statusid LEFT OUTER JOIN dbo.projects_clientaccess_lu WITH (NOLOCK) ON projects_records.clientaccess = projects_clientaccess_lu.clientaccessid LEFT OUTER JOIN dbo.users users_assigned WITH (NOLOCK) ON projects_records.assigned = users_assigned.userid LEFT OUTER JOIN dbo.users users_assigneddev WITH (NOLOCK) ON projects_records.assigneddev = users_assigneddev.userid LEFT OUTER JOIN dbo.users users_assignedqa WITH (NOLOCK) ON projects_records.assignedqa = users_assignedqa.userid LEFT OUTER JOIN dbo.users users_assignedsponsor WITH (NOLOCK) ON projects_records.assignedsponsor = 
users_assignedsponsor.userid LEFT OUTER JOIN dbo.users users_createuser WITH (NOLOCK) ON projects_records.createuser = users_createuser.userid LEFT OUTER JOIN dbo.clientusers clientusers_createuser WITH (NOLOCK) ON projects_records.createuser = clientusers_createuser.userid LEFT OUTER JOIN dbo.projects_history_assignment WITH (NOLOCK) ON projects_history_assignment.recordid = projects_records.recordid AND projects_history_assignment.historyid = ( SELECT ro.historyid FROM rows rl CROSS APPLY ( SELECT TOP 1 rc.historyid FROM rows rc JOIN rows rn ON rn.recordid = rc.recordid AND rn.rn = rc.rn + 1 AND rn.assigned <> rc.assigned WHERE rc.recordid = rl.recordid ORDER BY rc.rn ) ro WHERE rl.rn = 1 AND rl.recordid = projects_records.recordid ) WHERE (@assignedtype='0' and projects_records.assigned = @assigned) OR (@assignedtype='1' and projects_records.assigneddev = @assigned) OR (@assignedtype='2' and projects_records.assignedqa = @assigned) OR (@assignedtype='3' and projects_records.assignedsponsor = @assigned) OR (@assignedtype='4' and projects_records.createuser = @assigned)
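    For anyone hitting the same syntax errors, a minimal illustration of the rule that tripped this up (table and column names are borrowed from the question, the rest is a sketch): a WITH common table expression has to introduce the whole statement, so it cannot be dropped inside a join's correlated subquery; declare it once at the top and reference it from the subquery instead.

        -- This shape fails with "Incorrect syntax near ';'":
        --   ... AND t.historyid = ( ;WITH rows AS (...) SELECT ... )
        -- This shape works: terminate the previous statement, open the CTE, then run one statement.
        ;WITH rows AS
        (
            SELECT *,
                   ROW_NUMBER() OVER (PARTITION BY recordid ORDER BY savedate DESC) AS rn
            FROM   projects_history_assignment
        )
        SELECT recordid, historyid
        FROM   rows
        WHERE  rn = 1;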

    Read the article

  • HttpClient POST fails to submit the form + resulting string is cut-off (incomplete)

    - by Jayomat
    Hi, I'm writing an app to check for the bus timetable's. Therefor I need to post some data to a html page, submit it, and parse the resulting page with htmlparser. Though it may be asked a lot, can some one help me identify if 1) this page does support post/get (I think it does) 2) which fields I need to use? 3) How to make the actual request? this is my code so far: String url = "http://busspur02.aseag.de/bs.exe?Cmd=RV&Karten=true&DatumT=30&DatumM=4&DatumJ=2010&ZeitH=&ZeitM=&Suchen=%28S%29uchen&GT0=&HT0=&GT1=&HT1="; String charset = "CP1252"; System.out.println("startFrom: "+start_from); System.out.println("goTo: "+destination); //String tag.v List<NameValuePair> params = new ArrayList<NameValuePair>(); params.add(new BasicNameValuePair("HTO", start_from)); params.add(new BasicNameValuePair("HT1", destination)); params.add(new BasicNameValuePair("GTO", "Aachen")); params.add(new BasicNameValuePair("GT1", "Aachen")); params.add(new BasicNameValuePair("DatumT", day)); params.add(new BasicNameValuePair("DatumM", month)); params.add(new BasicNameValuePair("DatumJ", year)); params.add(new BasicNameValuePair("ZeitH", hour)); params.add(new BasicNameValuePair("ZeitM", min)); UrlEncodedFormEntity query = new UrlEncodedFormEntity(params, charset); HttpPost post = new HttpPost(url); post.setEntity(query); InputStream response = new DefaultHttpClient().execute(post).getEntity().getContent(); // Now do your thing with the facebook response. String source = readText(response,"CP1252"); Log.d(TAG_AVV,response.toString()); System.out.println("STREAM "+source); EDIT: This is my new code: try { HttpClient client = new DefaultHttpClient(); String getURL = "http://busspur02.aseag.de/bs.exe?SID=5FC39&ScreenX=1440&ScreenY=900&CMD=CR&Karten=true&DatumT="+day+"&DatumM="+month+"&DatumJ="+year+"&ZeitH="+hour+"&ZeitM="+min+"&Intervall=60&Suchen=(S)uchen&GT0=Aachen&T0=H&HT0="+start_from+"&GT1=Aachen&T0=H&HT1="+destination+""; HttpGet get = new HttpGet(getURL); HttpResponse responseGet = client.execute(get); HttpEntity resEntityGet = responseGet.getEntity(); if (resEntityGet != null) { //do something with the response Log.i("GET RESPONSE",EntityUtils.toString(resEntityGet)); } } catch (Exception e) { e.printStackTrace(); } But the output file is cut-off. If I do the same request in a browser I get like 14 different routes. Now the file suddenly stops and I only get 3 routes.... what's wrong? 
04-30 12:19:12.362: INFO/GET RESPONSE(256): <!-- Ausgabebereich (automatisch erzeugt) --> 04-30 12:19:12.362: INFO/GET RESPONSE(256): <div align="center"> 04-30 12:19:12.362: INFO/GET RESPONSE(256): <p></p> 04-30 12:19:12.362: INFO/GET RESPONSE(256): <p>Ihr Fahrplan für die Verbindung von Aachen, Kaiserplatz nach Aachen, Karlsgraben am Freitag, den 30.04.2010 (Koniginnedag), Abfahrten ab 12:19 Uhr</p> 04-30 12:19:12.362: INFO/GET RESPONSE(256): <table class="Result"> 04-30 12:19:12.362: INFO/GET RESPONSE(256): <tr> 04-30 12:19:12.362: INFO/GET RESPONSE(256): <th class="fussnote">Fussnote</th> 04-30 12:19:12.362: INFO/GET RESPONSE(256): <th class="fahrzeug">Fahrzeug</th> 04-30 12:19:12.362: INFO/GET RESPONSE(256): <th class="abfahrt">Abfahrt</th> 04-30 12:19:12.362: INFO/GET RESPONSE(256): <th class="haltestellean">Haltestelle</th> 04-30 12:19:12.362: INFO/GET RESPONSE(256): <th class="linie">Linie</th> 04-30 12:19:12.362: INFO/GET RESPONSE(256): <th class="haltestelleab">Haltestelle</th> 04-30 12:19:12.362: INFO/GET RESPONSE(256): <th class="ankunft">Ankunft</th> 04-30 12:19:12.362: INFO/GET RESPONSE(256): <th class="fahrzeit">Fahrzeit/Tarif</th> 04-30 12:19:12.362: INFO/GET RESPONSE(256): </tr> 04-30 12:19:12.362: INFO/GET RESPONSE(256): <tr> 04-30 12:19:12.362: INFO/GET RESPONSE(256): <td>&nbsp;</td><td>&nbsp;<img src="http://www.busspur.de/logos/efa-bus.gif" title="Niederflurbus"></td><td>12:23</td><td title="lc0">Aachen, Kaiserplatz [Heinrichsalle Ri. Hansemannplatz]&nbsp;<a href="http://download.avv.de/Umgebungsplaene/hlp_ac_kaiserplatz.pdf">Umgebungsplan</a></td><td>45</td><td title="lc0">Aachen, Karlsgraben&nbsp;<a href="http://download.avv.de/Umgebungsplaene/hlp_ac_karlsgraben.pdf">Umgebungsplan</a></td><td>12:34</td><td>00:11 /  1,00</td><td>&nbsp;</td> 04-30 12:19:12.362: INFO/GET RESPONSE(256): </tr> 04-30 12:19:12.362: INFO/GET RESPONSE(256): <tr><td colspan="9"><a href="/bs.exe?RI=0&amp;SID=5FC39">Fahrtbegleiter</a>&nbsp;&nbsp;</td></tr> 04-30 12:19:12.362: INFO/GET RESPONSE(256): <tr><td colspan="9"><hr></td></tr> 04-30 12:19:12.362: INFO/GET RESPONSE(256): <tr> 04-30 12:19:12.362: INFO/GET RESPONSE(256): <td>&nbsp;</td><td>&nbsp;<img src="http://www.busspur.de/logos/efa-bus.gif" title="Niederflurbus"></td><td>12:26</td><td title="lc0">Aachen, Kaiserplatz [Heinrichsalle Ri. Hansemannplatz]&nbsp;<a href="http://download.avv.de/Umgebungsplaene/hlp_ac_kaiserplatz.pdf">Umgebungsplan</a></td><td>22</td><td title="lc0">Aachen, Karlsgraben&nbsp;<a href="http://download.avv.de/Umgebungsplaene/hlp_ac_karlsgraben.pdf">Umgebungsplan</a></td><td>12:37</td><td>00:11 /  1,00</td><td>&nbsp;</td> 04-30 12:19:12.362: INFO/GET RESPONSE(256): </tr> 04-30 12:19:12.362: INFO/GET RESPONSE(256): <tr><td colspan="9"><a href="/bs.exe?RI=1&amp;SID=5FC39">Fahrtbegleiter</a>&nbsp;&nbsp;</td></tr> 04-30 12:19:12.362: INFO/GET RESPONSE(256): <tr><td colspan="9"><hr></td></tr> 04-30 12:19:12.362: INFO/GET RESPONSE(256): <tr> 04-30 12:19:12.362: INFO/GET RESPONSE(256): <td>&nbsp;</td><td>&nbsp;<img src="http://www.busspur.de/logos/efa-bus.gif" title="Niederflurbus"></td><td>12:28</td><td title="lc0">Aachen, Kaiserplatz [Heinrichsalle Ri. 
Hansemannplatz]&nbsp;<a href="http://download.avv.de/Umgebungsplaene/hlp_ac_kaiserplatz.pdf">Umgebungsplan</a></td><td>25</td><td title="lc0">Aachen, Karlsgraben&nbsp;<a href="http://download.avv.de/Umgebungsplaene/hlp_ac_karlsgraben.pdf">Umgebungsplan</a></td><td>12:39</td><td>00:11 /  1,00</td><td>&nbsp;</td> 04-30 12:19:12.362: INFO/GET RESPONSE(256): </tr> 04-30 12:19:12.362: INFO/GET RESPONSE(256): <tr><td colspan="9"><a href="/bs.exe?RI=2&amp;SID=5FC39">Fahrtbegl
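    One thing worth checking before assuming the response itself is incomplete (a sketch using the same Apache HttpClient and android.util.Log classes as above): LogCat truncates long messages, typically somewhere around 4,000 characters per call, so a single Log.i of the whole page stops mid-HTML even when the entity was read completely. Reading the body once with the page's charset and logging it in chunks, or handing it straight to the parser, shows whether all routes actually arrived.

        HttpEntity resEntityGet = responseGet.getEntity();
        if (resEntityGet != null) {
            // read the full body once, honouring the page's CP1252 encoding
            String html = EntityUtils.toString(resEntityGet, "CP1252");
            // log in slices only for debugging; the HTML parser should consume `html` directly
            for (int i = 0; i < html.length(); i += 2000) {
                Log.i("GET RESPONSE", html.substring(i, Math.min(html.length(), i + 2000)));
            }
        }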

    Read the article

  • c++ stl priority queue insert bad_alloc exception

    - by bsg
    Hi, I am working on a query processor that reads in long lists of document id's from memory and looks for matching id's. When it finds one, it creates a DOC struct containing the docid (an int) and the document's rank (a double) and pushes it on to a priority queue. My problem is that when the word(s) searched for has a long list, when I try to push the DOC on to the queue, I get the following exception: Unhandled exception at 0x7c812afb in QueryProcessor.exe: Microsoft C++ exception: std::bad_alloc at memory location 0x0012ee88.. When the word has a short list, it works fine. I tried pushing DOC's onto the queue in several places in my code, and they all work until a certain line; after that, I get the above error. I am completely at a loss as to what is wrong because the longest list read in is less than 1 MB and I free all memory that I allocate. Why should there suddenly be a bad_alloc exception when I try to push a DOC onto a queue that has a capacity to hold it (I used a vector with enough space reserved as the underlying data structure for the priority queue)? I know that questions like this are almost impossible to answer without seeing all the code, but it's too long to post here. I'm putting as much as I can and am anxiously hoping that someone can give me an answer, because I am at my wits' end. The NextGEQ function is too long to put here, but it reads a list of compressed blocks of docids block by block. That is, if it sees that the lastdocid in the block (in a separate list) is larger than the docid passed in, it decompresses the block and searches until it finds the right one. If it sees that it was already decompressed, it just searches. Below, when I call the function the first time, it decompresses a block and finds the docid; the push onto the queue after that works. The second time, it doesn't even need to decompress; that is, no new memory is allocated, but after that time, pushing on to the queue gives a bad_alloc error. 
struct DOC{ long int docid; long double rank; public: DOC() { docid = 0; rank = 0.0; } DOC(int num, double ranking) { docid = num; rank = ranking; } bool operator>( const DOC & d ) const { return rank > d.rank; } bool operator<( const DOC & d ) const { return rank < d.rank; } }; struct listnode{ int* metapointer; int* blockpointer; int docposition; int frequency; int numberdocs; int* iquery; listnode* nextnode; }; void QUERYMANAGER::SubmitQuery(char *query){ vector<DOC> docvec; docvec.reserve(20); DOC doct; //create a priority queue to use as a min-heap to store the documents and rankings; //although the priority queue uses the heap as its underlying data structure, //I found it easier to use the STL priority queue implementation priority_queue<DOC, vector<DOC>,std::greater<DOC>> q(docvec.begin(), docvec.end()); q.push(doct); //do some processing here; startlist is a pointer to a listnode struct that starts the //linked list cout << "Opening lists:" << endl; //point the linked list start pointer to the node returned by the OpenList method startlist = &OpenList(value); listnode* minpointer; q.push(doct); //more processing here; else{ //start by finding the first docid in the shortest list int i = 0; q.push(doct); num = NextGEQ(0, *startlist); q.push(doct); while(num != -1) cout << "finding nextGEQ from shortest list" << endl; q.push(doct); //the is where the problem starts - every previous q.push(doct) works; the one after //NextGEQ(num +1, *startlist) gives the bad_alloc error num = NextGEQ(num + 1, *startlist); q.push(doct); //if you didn't break out of the loop; i.e., all lists contain a matching docid, //calculate the document's rank; if it's one of the top 20, create a struct //containing the docid and the rank and add it to the priority queue if(!loop) { cout << "found match" << endl; if(num < 0) { cout << "reached end of list" << endl; //reached the end of the shortest list; close the list CloseList(startlist); break; } rank = calculateRanking(table, num); try{ //if the heap is not full, create a DOC struct with the docid and //rank and add it to the heap if(q.size() < 20) { doc.docid = num; doc.rank = rank; q.push(doct); q.push(doc); } } catch (exception& e) { cout << e.what() << endl; } } } Thank you very much, bsg.
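    A self-contained sketch that may help narrow this down (it is an isolated test, not the real query processor): pushing large numbers of DOCs into the same kind of priority_queue does not throw, which suggests the bad_alloc is a symptom of earlier heap corruption, for example the block decompression in NextGEQ writing past its buffer, rather than a problem with the queue itself. Running the list-reading code under a memory checker (valgrind, or _CrtCheckMemory on Windows) around the decompression is the usual next step.

        #include <queue>
        #include <vector>
        #include <functional>
        #include <iostream>

        struct DOC {
            long int docid;
            long double rank;
            DOC(long int d = 0, long double r = 0.0) : docid(d), rank(r) {}
            bool operator>(const DOC& other) const { return rank > other.rank; }
        };

        int main() {
            // same container/comparator shape as in the question
            std::priority_queue<DOC, std::vector<DOC>, std::greater<DOC> > q;
            for (int i = 0; i < 1000000; ++i)
                q.push(DOC(i, i * 0.5));      // millions of pushes, no bad_alloc
            std::cout << q.size() << std::endl;
            return 0;
        }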

    Read the article

  • PHP upload script

    - by Darkmage
    Using this upload script and it was working ok a week ago but when i checked it today it fails. I have checked writ privileges on the folder and it is set to 777 so don't think that is the problem. Anyone have a idea of what the problem can be? this is the error Warning: move_uploaded_file() [function.move-uploaded-file]: Unable to access replays/1275389246.ruse in /usr/home/web/wno159003/systemio.net/ruse.systemio.net/scripts/upload.php on line 95 my script is <?php require($_SERVER['DOCUMENT_ROOT'].'/xxxx/xxxx'); $connection = @mysql_connect($db_host, $db_user, $db_password) or die("error connecting"); mysql_select_db($db_name, $connection); $name = basename($_FILES['uploaded']['name']); $comment = $_POST["comment"]; $len = strlen($comment); $username = $_POST["username"]; $typekamp = $_POST["typekamp"]; $date = time(); $target = "replays/"; $target .= basename($_FILES['uploaded']['name']); $maxsize = 20971520; // 20mb Maximum size of the uploaded file in bytes // File extension control // Whilelisting takes preference over blacklisting, so if there is anything in the whilelist, the blacklist _will_ be ignored // Fill either array as you see fit - eg. Array("zip", "exe", "php") $fileextensionwhitelist = Array("ruse"); // Whilelist (allow only) $fileextensionblacklist = Array("zip", "exe", "php", "asp", "txt"); // Blacklist (deny) $ok = 1; if ($_FILES['uploaded']['error'] == 4) { echo "<html><head><title>php</title></head>"; echo '<body bgcolor="#413839" text="#ffffff"> <p><B>info</b></p>'; die("No file was uploaded"); } if ($_FILES['uploaded']['error'] !== 0) { echo "<html><head><title>php</title></head>"; echo '<body bgcolor="#413839" text="#ffffff"> <p><B>info</b></p>'; die("An unexpected upload error has occured."); } // This is our size condition if ($_FILES['uploaded']['size'] > $maxsize) { echo "<html><head><title>php</title></head>"; echo '<body bgcolor="#413839" text="#ffffff"> <p><B>info</b></p>'; echo "Your file is too large.<br />\n"; $ok = 0; } // This is our limit file type condition if ((!empty($fileextensionwhitelist) && !in_array(substr(strrchr($_FILES['uploaded']['name'], "."), 1), $fileextensionwhitelist)) || (empty($fileextensionwhitelist) && !empty($fileextensionblacklist) && in_array(substr(strrchr($_FILES['uploaded']['name'], "."), 1), $fileextensionblacklist))) { echo "<html><head><title>php</title></head>"; echo '<body bgcolor="#413839" text="#ffffff"> <p><B>info</b></p>'; echo "This type of file has been disallowed.<br />\n"; $ok = 0; } // Here we check that $ok was not set to 0 by an error if ($ok == 0) { echo "<html><head><title>php</title></head>"; echo '<body bgcolor="#413839" text="#ffffff"> <p><B>info</b></p>'; echo "Sorry, your file was not uploaded. Refer to the errors above."; } // If everything is ok we try to upload it else { if($len > 0) { $target = "replays/".time().'.'."ruse"; $name = time().'.'."ruse"; $query = "INSERT INTO RR_upload(ID, filename, username, comment, typekamp, date) VALUES (NULL, '$name', '$username','$comment', '$typekamp' ,'$date')"; if (file_exists($target)) { $target .= "_".time().'.'."ruse"; echo "<html><head><title>php</title></head>"; echo '<body bgcolor="#413839" text="#ffffff"> <p><B>info</b></p>'; echo "File already exists, will be uploaded as ".$target; } mysql_query($query, $connection) or die (mysql_error()); echo "<html><head><title>php</title></head>"; echo '<body bgcolor="#413839" text="#ffffff"> <p><B>info</b></p>'; echo (move_uploaded_file($_FILES['uploaded']['tmp_name'], $target)) ? 
"The file ".basename( $_FILES['uploaded']['name'])." has been uploaded. \n" : "Sorry, there was a problem uploading your file. <br>"; echo "<br>Variable filename: ".$name; echo "<br>Variable name: ".$username; echo "<br>Variables comment: ".$comment; echo "<br>Variables date: ".$date; echo "<br>Var typekamp; ".$typekamp; echo "<br>Var target; ".$target; } else { echo "<html><head><title>php</title></head>"; echo '<body bgcolor="#413839" text="#ffffff"> <p><B>info</b></p>'; echo"you have to put in comment/description"; } } ?>

    Read the article

  • I need some help with either my SQL or my PHP, I do not know which...

    - by sico87
    Hello I am creating a CMS and some of the functionality of it that the images that are within the content are managable. I currently trying to display a table that shows the the content title and then the associated images, ideally I would like a layout similar to this, Content Title Image 1 Image 2 Image 3 Content Title 2 Image 1 Image 2 Content Title 3 Image 1 The SQL the returns the data is actually formed using Codeigniters Active Record class, function getAllContentImages() { $this->db->select('*'); $this->db->from('contentImagesTable'); $this->db->join('contentTable', 'contentTable.contentId = contentImagesTable.contentId'); $this->db->join('categoryTable', 'categoryTable.categoryId = contentTable.categoryId'); $query = $this->db->get(); return $query->result_array(); } The array that is returned is looks like this, I have cut the size down for readability. Array ( [0] => Array ( [contentImageId] => 25 [contentImageName] => green.png [contentImageType] => .png [contentImagePath] => /var/www/bangmarketing.bang/media/uploads/contentImages/2/green.png [isHeadlineImage] => 1 [contentImageDateUploaded] => 1265222654 [contentId] => 2 [dashboardUserId] => 0 [contentTitle] => sadsadsadassss [contentAbstract] => <p>Pllllleeeeeeeaaaaasssssseeeeee Work</p> [contentBody] => <p>Please work :-( please</p> [contentOnline] => 0 [contentAllowComments] => 0 [contentDateCreated] => 1265124038 [categoryId] => 1 [categoryTitle] => blogsss [categoryAbstract] => <p>asdsdsadasdsadfdsgdgdsgdsgssssssssssss</p> [categorySlug] => blog [categoryIsSpecial] => 0 [categoryOnline] => 1 [categoryDateCreated] => 1266588327 ) [1] => Array ( [contentImageId] => 28 [contentImageName] => yellow.png [contentImageType] => .png [contentImagePath] => /var/www/bangmarketing.bang/media/uploads/contentImages/7/yellow.png [isHeadlineImage] => 1 [contentImageDateUploaded] => 1265388055 [contentId] => 7 [dashboardUserId] => 0 [contentTitle] => Another Blog [contentAbstract] => <p>This is another blog and it is shit becuase this does not work</p> [contentBody] => <p>ioasfihfududfhdufhuishdfiudshfiudhsfiuhdsiufhusdhfuids</p> [contentOnline] => 1 [contentAllowComments] => 0 [contentDateCreated] => 1265388034 [categoryId] => 1 [categoryTitle] => blogsss [categoryAbstract] => <p>asdsdsadasdsadfdsgdgdsgdsgssssssssssss</p> [categorySlug] => blog [categoryIsSpecial] => 0 [categoryOnline] => 1 [categoryDateCreated] => 1266588327 ) [2] => Array ( [contentImageId] => 33 [contentImageName] => portaski.jpg [contentImageType] => .jpg [contentImagePath] => /var/www/bangmarketing.bang/media/uploads/contentImages/11/portaski.jpg [isHeadlineImage] => 1 [contentImageDateUploaded] => 1265714175 [contentId] => 11 [dashboardUserId] => 0 [contentTitle] => Portaski - new product and brand launch by Bang [contentAbstract] => <p>Bang's experience in new product development has helped launch PortaSki &ndash; the pocket-sized device which is set to revolutionise skiing.</p> [contentBody] => <p>After developing Portaski's brand identity and positioning, Bang re-designed the product and its packaging ahead of launch in late 2008.</p> <p>A media and PR strategy was devised and implemented using Bang's close relationship with two of the UK's most influential organisations in the Advertising and Media Buying industries. 
On-line advertising was supported with editorial reviews in the UK's leading broadsheets and tabloids, which combined with pin-point HTML direct mail to drive consumers to the new e-commerce site.</p> <p>Impressive month-on-month growth has been achieved since launch, and the direct marketing activity resulted in an unprecedented 2.71% of targets going on-line to purchase a PortaSki.</p> <p>For further information visit <a href="http://www.portaski.com" target="_blank">www.portaski.com</a></p> [contentOnline] => 1 [contentAllowComments] => 0 [contentDateCreated] => 1265718184 [categoryId] => 1 [categoryTitle] => blogsss [categoryAbstract] => <p>asdsdsadasdsadfdsgdgdsgdsgssssssssssss</p> [categorySlug] => blog [categoryIsSpecial] => 0 [categoryOnline] => 1 [categoryDateCreated] => 1266588327 ) [3] => Array ( [contentImageId] => 26 [contentImageName] => housingplus.jpg [contentImageType] => .jpg [contentImagePath] => /var/www/bangmarketing.bang/media/uploads/contentImages/5/housingplus.jpg [isHeadlineImage] => 1 [contentImageDateUploaded] => 1265284989 [contentId] => 5 [dashboardUserId] => 0 [contentTitle] => Bang launches Housing Plus [contentAbstract] => <p>Bang has launched Housing Plus, the new brand for the Central Borders Housing Group, along with new sub-brands Property Care and SSHA.</p> [contentBody] => <p>The Midlands based Group, with turnover in excess of &pound;21M, appointed Bang in 2008 following an open pitch of over 40 agencies. Bang's work began with an extensive marketing research strategy that challenged the Group's former positioning and brand structure.</p> <p>The research unveiled that the housing sector demanded a values-led Group. This led Bang to develop the brave &lsquo;Together for the Right Reasons' positioning for Housing Plus.</p> <p>Chris Garratt, Marketing Director at Bang explained "The housing sector has witnessed wholesale change in recent years. Much to tenant's dismay, many associations and Groups appear to be losing touch with their roots, we wanted to develop a Group for associations who place principles at the heart of their corporate strategy".</p> <p>The repositioned sub-brands also play an important role in the Group's revised brand by highlighting Housing Plus' willingness to embrace and nurture individual identities. Chris Garratt continued "By adopting a &lsquo;house of brands' hierarchy from the outset, Housing Plus has sent out a strong message to prospective strategic partners".</p> <p>Bang handled all aspects of work for the redevelopment of the three brands, including research, brand creation, naming, positioning, internal branding and communications, advertising, the brand launches, building the brands' on-line presence and the creation of a powerful brand film &ndash; which is already attracting significant interest from across the sector.</p> [contentOnline] => 1 [contentAllowComments] => 0 [contentDateCreated] => 1265285940 [categoryId] => 8 [categoryTitle] => News [categoryAbstract] => <p>The world at Bang Marketing moves fast, keep up to date w [categorySlug] => news [categoryIsSpecial] => 0 [categoryOnline] => 1 [categoryDateCreated] => 1265283717 ) I need a way that I can get all the content images associated with the same content title in one group and then display under the content title. Can anyone help?
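    One sketch of the display side (it assumes $rows holds the array returned by getAllContentImages(); the variable names are made up): group the flat rows by contentId once in PHP, then print each title a single time with its images listed underneath, which gives the Content Title / Image 1 / Image 2 layout described above.

        // sketch - $rows is the result of getAllContentImages()
        $grouped = array();
        foreach ($rows as $row) {
            $grouped[$row['contentId']]['title']    = $row['contentTitle'];
            $grouped[$row['contentId']]['images'][] = $row['contentImageName'];
        }
        foreach ($grouped as $content) {
            echo '<h3>' . $content['title'] . '</h3><ul>';
            foreach ($content['images'] as $imageName) {
                echo '<li>' . $imageName . '</li>';
            }
            echo '</ul>';
        }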

    Read the article

  • edit row in gridview

    - by user576998
    Hi.I would like to help me with my code. I have 2 gridviews. In the first gridview the user can choose with a checkbox every row he wants. These rows are transfered in the second gridview. All these my code does them well.Now, I want to edit the quantity column in second gridview to change the value but i don't know what i must write in edit box. Here is my code: using System; using System.Data; using System.Data.SqlClient; using System.Configuration; using System.Web; using System.Web.Security; using System.Web.UI; using System.Web.UI.WebControls; using System.Web.UI.WebControls.WebParts; using System.Web.UI.HtmlControls; using System.Collections; public partial class ShowLand : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { if (!IsPostBack) { BindPrimaryGrid(); BindSecondaryGrid(); } } private void BindPrimaryGrid() { string constr = ConfigurationManager.ConnectionStrings["conString"].ConnectionString; string query = "select * from Land"; SqlConnection con = new SqlConnection(constr); SqlDataAdapter sda = new SqlDataAdapter(query, con); DataTable dt = new DataTable(); sda.Fill(dt); gridview2.DataSource = dt; gridview2.DataBind(); } private void GetData() { DataTable dt; if (ViewState["SelectedRecords1"] != null) dt = (DataTable)ViewState["SelectedRecords1"]; else dt = CreateDataTable(); CheckBox chkAll = (CheckBox)gridview2.HeaderRow .Cells[0].FindControl("chkAll"); for (int i = 0; i < gridview2.Rows.Count; i++) { if (chkAll.Checked) { dt = AddRow(gridview2.Rows[i], dt); } else { CheckBox chk = (CheckBox)gridview2.Rows[i] .Cells[0].FindControl("chk"); if (chk.Checked) { dt = AddRow(gridview2.Rows[i], dt); } else { dt = RemoveRow(gridview2.Rows[i], dt); } } } ViewState["SelectedRecords1"] = dt; } private void SetData() { CheckBox chkAll = (CheckBox)gridview2.HeaderRow.Cells[0].FindControl("chkAll"); chkAll.Checked = true; if (ViewState["SelectedRecords1"] != null) { DataTable dt = (DataTable)ViewState["SelectedRecords1"]; for (int i = 0; i < gridview2.Rows.Count; i++) { CheckBox chk = (CheckBox)gridview2.Rows[i].Cells[0].FindControl("chk"); if (chk != null) { DataRow[] dr = dt.Select("id = '" + gridview2.Rows[i].Cells[1].Text + "'"); chk.Checked = dr.Length > 0; if (!chk.Checked) { chkAll.Checked = false; } } } } } private DataTable CreateDataTable() { DataTable dt = new DataTable(); dt.Columns.Add("id"); dt.Columns.Add("name"); dt.Columns.Add("price"); dt.Columns.Add("quantity"); dt.Columns.Add("total"); dt.AcceptChanges(); return dt; } private DataTable AddRow(GridViewRow gvRow, DataTable dt) { DataRow[] dr = dt.Select("id = '" + gvRow.Cells[1].Text + "'"); if (dr.Length <= 0) { dt.Rows.Add(); dt.Rows[dt.Rows.Count - 1]["id"] = gvRow.Cells[1].Text; dt.Rows[dt.Rows.Count - 1]["name"] = gvRow.Cells[2].Text; dt.Rows[dt.Rows.Count - 1]["price"] = gvRow.Cells[3].Text; dt.Rows[dt.Rows.Count - 1]["quantity"] = gvRow.Cells[4].Text; dt.Rows[dt.Rows.Count - 1]["total"] = gvRow.Cells[5].Text; dt.AcceptChanges(); } return dt; } private DataTable RemoveRow(GridViewRow gvRow, DataTable dt) { DataRow[] dr = dt.Select("id = '" + gvRow.Cells[1].Text + "'"); if (dr.Length > 0) { dt.Rows.Remove(dr[0]); dt.AcceptChanges(); } return dt; } protected void CheckBox_CheckChanged(object sender, EventArgs e) { GetData(); SetData(); BindSecondaryGrid(); } private void BindSecondaryGrid() { DataTable dt = (DataTable)ViewState["SelectedRecords1"]; gridview3.DataSource = dt; gridview3.DataBind(); } } and the source code is <asp:GridView ID="gridview2" runat="server" 
AutoGenerateColumns="False" DataKeyNames="id" DataSourceID="SqlDataSource5"> <Columns> <asp:TemplateField> <HeaderTemplate> <asp:CheckBox ID="chkAll" runat="server" onclick = "checkAll(this);" AutoPostBack = "true" OnCheckedChanged = "CheckBox_CheckChanged"/> </HeaderTemplate> <ItemTemplate> <asp:CheckBox ID="chk" runat="server" onclick = "Check_Click(this)" AutoPostBack = "true" OnCheckedChanged = "CheckBox_CheckChanged" /> </ItemTemplate> </asp:TemplateField> <asp:BoundField DataField="id" HeaderText="id" InsertVisible="False" ReadOnly="True" SortExpression="id" /> <asp:BoundField DataField="name" HeaderText="name" SortExpression="name" /> <asp:BoundField DataField="price" HeaderText="price" SortExpression="price" /> <asp:BoundField DataField="quantity" HeaderText="quantity" SortExpression="quantity" /> <asp:BoundField DataField="total" HeaderText="total" SortExpression="total" /> </Columns> </asp:GridView> <asp:SqlDataSource ID="SqlDataSource5" runat="server" ConnectionString="<%$ ConnectionStrings:ConnectionString %>" SelectCommand="SELECT * FROM [Land]"></asp:SqlDataSource> <br /> </div> <div> <asp:GridView ID="gridview3" runat="server" AutoGenerateColumns = "False" DataKeyNames="id" EmptyDataText = "No Records Selected" > <Columns> <asp:BoundField DataField = "id" HeaderText = "id" /> <asp:BoundField DataField = "name" HeaderText = "name" ReadOnly="True" /> <asp:BoundField DataField = "price" HeaderText = "price" DataFormatString="{0:c}" ReadOnly="True" /> <asp:TemplateField HeaderText="quantity"> <EditItemTemplate> <asp:TextBox ID="TextBox1" runat="server" Text='<%# Bind("quantity")%>'</asp:TextBox> </EditItemTemplate> <ItemTemplate> <asp:Label ID="Label1" runat="server" Text='<%# Bind("quantity") %>'></asp:Label> </ItemTemplate> </asp:TemplateField> <asp:BoundField DataField = "total" HeaderText = "total" DataFormatString="{0:c}" ReadOnly="True" /> <asp:CommandField ShowEditButton="True" /> </Columns> </asp:GridView> <asp:Label ID="totalLabel" runat="server"></asp:Label> <br /> </div> </form> </body> </html>

    Read the article

  • I get java.lang.NullPointerException when trying to get the contents of the database in Android

    - by ncountr
    I am using 8 EditText boxes from the NewCard.xml from which i am taking the values and when the save button is pressed i am storing the values into a database, in the same process of saving i am trying to get the values and present them into 8 different TextView boxes on the main.xml file and when i press the button i get an FC from the emulator and the resulting error is java.lang.NullPointerException. If Some 1 could help me that would be great, since i have never used databases and this is my first application for android and this is the only thing keepeng me to complete the whole thing and publish it on the market like a free app. Here's the full code from NewCard.java. public class NewCard extends Activity { private static String[] FROM = { _ID, FIRST_NAME, LAST_NAME, POSITION, POSTAL_ADDRESS, PHONE_NUMBER, FAX_NUMBER, MAIL_ADDRESS, WEB_ADDRESS}; private static String ORDER_BY = FIRST_NAME; private CardsData cards; EditText First_Name; EditText Last_Name; EditText Position; EditText Postal_Address; EditText Phone_Number; EditText Fax_Number; EditText Mail_Address; EditText Web_Address; Button New_Cancel; Button New_Save; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.newcard); cards = new CardsData(this); //Define the Cancel Button in NewCard Activity New_Cancel = (Button) this.findViewById(R.id.new_cancel_button); //Define the Cancel Button Activity/s New_Cancel.setOnClickListener ( new OnClickListener() { public void onClick(View arg0) { NewCancelDialog(); } } );//End of the Cancel Button Activity/s //Define the Save Button in NewCard Activity New_Save = (Button) this.findViewById(R.id.new_save_button); //Define the EditText Fields to Get Their Values Into the Database First_Name = (EditText) this.findViewById(R.id.new_first_name); Last_Name = (EditText) this.findViewById(R.id.new_last_name); Position = (EditText) this.findViewById(R.id.new_position); Postal_Address = (EditText) this.findViewById(R.id.new_postal_address); Phone_Number = (EditText) this.findViewById(R.id.new_phone_number); Fax_Number = (EditText) this.findViewById(R.id.new_fax_number); Mail_Address = (EditText) this.findViewById(R.id.new_mail_address); Web_Address = (EditText) this.findViewById(R.id.new_web_address); //Define the Save Button Activity/s New_Save.setOnClickListener ( new OnClickListener() { public void onClick(View arg0) { //Add Code For Saving The Attributes Into The Database try { addCard(First_Name.getText().toString(), Last_Name.getText().toString(), Position.getText().toString(), Postal_Address.getText().toString(), Integer.parseInt(Phone_Number.getText().toString()), Integer.parseInt(Fax_Number.getText().toString()), Mail_Address.getText().toString(), Web_Address.getText().toString()); Cursor cursor = getCard(); showCard(cursor); } finally { cards.close(); NewCard.this.finish(); } } } );//End of the Save Button Activity/s } //======================================================================================// //DATABASE FUNCTIONS private void addCard(String firstname, String lastname, String position, String postaladdress, int phonenumber, int faxnumber, String mailaddress, String webaddress) { // Insert a new record into the Events data source. // You would do something similar for delete and update. 
SQLiteDatabase db = cards.getWritableDatabase(); ContentValues values = new ContentValues(); values.put(FIRST_NAME, firstname); values.put(LAST_NAME, lastname); values.put(POSITION, position); values.put(POSTAL_ADDRESS, postaladdress); values.put(PHONE_NUMBER, phonenumber); values.put(FAX_NUMBER, phonenumber); values.put(MAIL_ADDRESS, mailaddress); values.put(WEB_ADDRESS, webaddress); db.insertOrThrow(TABLE_NAME, null, values); } private Cursor getCard() { // Perform a managed query. The Activity will handle closing // and re-querying the cursor when needed. SQLiteDatabase db = cards.getReadableDatabase(); Cursor cursor = db.query(TABLE_NAME, FROM, null, null, null, null, ORDER_BY); startManagingCursor(cursor); return cursor; } private void showCard(Cursor cursor) { // Stuff them all into a big string long id = 0; String firstname = null; String lastname = null; String position = null; String postaladdress = null; long phonenumber = 0; long faxnumber = 0; String mailaddress = null; String webaddress = null; while (cursor.moveToNext()) { // Could use getColumnIndexOrThrow() to get indexes id = cursor.getLong(0); firstname = cursor.getString(1); lastname = cursor.getString(2); position = cursor.getString(3); postaladdress = cursor.getString(4); phonenumber = cursor.getLong(5); faxnumber = cursor.getLong(6); mailaddress = cursor.getString(7); webaddress = cursor.getString(8); } // Display on the screen add for each textView TextView ids = (TextView) findViewById(R.id.id); TextView fn = (TextView) findViewById(R.id.firstname); TextView ln = (TextView) findViewById(R.id.lastname); TextView pos = (TextView) findViewById(R.id.position); TextView pa = (TextView) findViewById(R.id.postaladdress); TextView pn = (TextView) findViewById(R.id.phonenumber); TextView fxn = (TextView) findViewById(R.id.faxnumber); TextView ma = (TextView) findViewById(R.id.mailaddress); TextView wa = (TextView) findViewById(R.id.webaddress); ids.setText(String.valueOf(id)); fn.setText(String.valueOf(firstname)); ln.setText(String.valueOf(lastname)); pos.setText(String.valueOf(position)); pa.setText(String.valueOf(postaladdress)); pn.setText(String.valueOf(phonenumber)); fxn.setText(String.valueOf(faxnumber)); ma.setText(String.valueOf(mailaddress)); wa.setText(String.valueOf(webaddress)); } //======================================================================================// //Define the Dialog that alerts you when you press the Cancel button private void NewCancelDialog() { new AlertDialog.Builder(this) .setMessage("Are you sure you want to cancel?") .setTitle("Cancel") .setCancelable(false) .setPositiveButton("Yes", new DialogInterface.OnClickListener() { public void onClick(DialogInterface dialog, int id) { NewCard.this.finish(); } }) .setNegativeButton("No", new DialogInterface.OnClickListener() { public void onClick(DialogInterface dialog, int id) { dialog.cancel(); } }) .show(); }//End of the Cancel Dialog }
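A plausible diagnosis, offered as an assumption rather than a confirmed answer: the TextViews with ids such as R.id.firstname live in main.xml, but NewCard's content view is newcard.xml, so every findViewById() call inside showCard() returns null and the first setText() throws the NullPointerException. A minimal sketch of the usual fix follows; it hands the saved values back to the Activity that actually displays main.xml and guards the view lookups (the extra names are made up for illustration, and it assumes NewCard was started with startActivityForResult()):

    // Sketch only - R.id.firstname comes from the post, the extra names are hypothetical.
    // Needs: import android.content.Intent; import android.widget.TextView;

    // Inside NewCard, after db.insertOrThrow(...) has succeeded:
    Intent result = new Intent();
    result.putExtra("firstName", First_Name.getText().toString());
    result.putExtra("lastName", Last_Name.getText().toString());
    setResult(RESULT_OK, result);   // hand the values back instead of touching main.xml views here
    finish();

    // Inside the Activity whose layout really is main.xml:
    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (resultCode != RESULT_OK || data == null) {
            return;                                   // nothing was saved
        }
        TextView fn = (TextView) findViewById(R.id.firstname);
        if (fn != null) {                             // guard: null means the id is not in this layout
            fn.setText(data.getStringExtra("firstName"));
        }
    }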

    Read the article

  • Dojo - How to position tooltip close to text?

    - by user244394
    Like the title says i want to be able to display the tooltip close to the text, currently it is displayed far away in the cell. Tobe noted the tooltip positions correctly for large text, only fails for small text. In DOJO How can i position the tooltip close to the text? I have this bit of code snippet that display the tooltip in the grid cells. Screenshot attached, html <div class="some_app claro"></div> ... com.c.widget.EnhancedGrid = function ( aParent, options ) { var grid, options; this.theParentApp = aParent; dojo.require("dojox.grid.EnhancedGrid"); dojo.require("dojox.grid.enhanced.plugins.Menu"); dojo.require("dojox.grid.enhanced.plugins.Selector"); dojo.require("dojox.grid.enhanced.plugins.Pagination"); dojo.require("dojo.store.Memory"); dojo.require("dojo.data.ObjectStore"); dojo.require("dojo._base.xhr"); dojo.require("dojo.domReady!"); dojo.require("dojo.date.locale"); dojo.require("dojo._base.connect"); dojo.require("dojox.data.JsonRestStore"); dojo.require("dojo.data.ItemFileReadStore"); dojo.require("dijit.Menu"); dojo.require("dijit.MenuItem"); dojo.require('dijit.MenuSeparator'); dojo.require('dijit.CheckedMenuItem'); dojo.require('dijit.Tooltip'); dojo.require('dojo/query'); dojo.require("dojox.data.QueryReadStore"); // main initialization function this.init = function( options ) { var me = this; // default options var defaultOptions = { widgetName: ' Enhancedgrid', render: true, // immediately render the grid draggable: true, // disables column dragging containerNode: false, // the Node to hold the Grid (optional) mashupUrl: false, // the URL of the mashup (required) rowsPerPage: 20, //Default number of items per page columns: false, // columns (required) width: "100%", // width of grid height: "100%", // height of grid rowClass: function (rowData) {}, onClick: function () {}, headerMenu: false, // adding a menu pop-up for the header. selectedRegionMenu: false, // adding a menu pop-up for the rows. menusObject: false, //object to start-up the menus using the plug-in. sortInfo: false, // The column default sort infiniteScrolling: false //If true, the enhanced grid will have an infinite scrolling. }; // merge user provided options me.options = jQuery.extend( {}, defaultOptions, options ); // check we have minimum required options if ( ! me.options.mashupUrl ){ throw ("You must supply a mashupUrl"); } if ( ! me.options.columns ){ throw ("You must supply columns"); } // make the column for formatting based on its data type. me.preProcessColumns(); // create the Contextual Menu me.createMenu(); // create the grid object and return me.createGrid(); }; // Loading the data to the grid. 
this.loadData = function () { var me = this; if (!me.options.infiniteScrolling) { var xhrArgs = { url: me.options.mashupUrl, handleAs: "json", load: function( data ){ var store = new dojo.data.ItemFileReadStore({ data : {items : eval( "data."+me.options.dataRoot)}}); store.fetch({ onComplete : function(items, request) { if (me.grid.selection !== null) { me.grid.selection.clear(); } me.grid.setStore(store); }, onError : function(error) { me.onError(error); } }); }, error: function (error) { me.onError(error); } }; dojo.xhrGet(xhrArgs); } else { dojo.declare('NotificationQueryReadStore', dojox.data.QueryReadStore, { // // hacked -- override to map to proper data structure // from mashup // _xhrFetchHandler : function(data, request, fetchHandler, errorHandler) { // // TODO: need to have error handling here when // data has "error" data structure // // // remap data object before process by super method // var dataRoot = eval ("data."+me.options.dataRoot); var dataTotal = eval ("data."+me.options.dataTotal); data = { numRows : dataTotal, items : dataRoot }; // call to super method to process mapped data and // set rowcount // for proper display this.inherited(arguments); } }); var queryStore = new NotificationQueryReadStore({ url : me.options.mashupUrl, urlPreventCache: true, requestMethod : "get", onError: function (error) { me.onError(error); } }); me.grid.setStore(queryStore); } }; this.preProcessColumns = function () { var me = this; var options = me.options; for (i=0;i<this.options.columns.length;i++) { if (this.options.columns[i].formatter==null) { switch (this.options.columns[i].datatype) { case "string": this.options.columns[i].formatter = me.formatString; break; case "date": this.options.columns[i].formatter = me.formatDate; var todayDate = new Date(); var gmtTime = c.util.Date.parseDate(todayDate.toString()).toString(); var gmtval = gmtTime.substring(gmtTime.indexOf('GMT'),(gmtTime.indexOf('(')-1)); this.options.columns[i].name = this.options.columns[i].name + " ("+gmtval+")"; } } if (this.options.columns[i].sortDefault) { me.options.sortInfo = i+1; } } }; // create GRID object using supplied options this.createGrid = function () { var me = this; var options = me.options; // create a new grid this.grid = new dojox.grid.EnhancedGrid ({ width: options.width, height: options.height, query: { id: "*" }, keepSelection: true, formatterScope: this, structure: options.columns, columnReordering: options.draggable, rowsPerPage: options.rowsPerPage, //sortInfo: options.sortInfo, plugins : { menus: options.menusObject, selector: {"row":"multi", "cell": "disabled" }, }, //Allow the user to decide if a column is sortable by setting sortable = true / false canSort: function(col) { if (options.columns[Math.abs(col)-1].sortable) return true; else return false; }, //Change the row colors depending on severity column. 
onStyleRow: function (row) { var grid = me.grid; var item = grid.getItem(row.index); if (item && options.rowClass(item)) { row.customClasses += " " +options.rowClass(item); if (grid.selection.selectedIndex == row.index) { row.customClasses += " dojoxGridRowSelected"; } grid.focus.styleRow(row); grid.edit.styleRow(row); } }, onCellMouseOver: function (e){ // var pos = dojo.position(this, true); // alert(pos); console.log( e.rowIndex +" cell node :"+ e.cellNode.innerHTML); // var pos = dojo.position(this, true); console.log( " pos :"+ e.pos); if (e.cellNode.innerHTML!="") { dijit.showTooltip(e.cellNode.innerHTML, e.cellNode); } }, onCellMouseOut: function (e){ dijit.hideTooltip(e.cellNode); }, onHeaderCellMouseOver: function (e){ if (e.cellNode.innerHTML!="") { dijit.showTooltip(e.cellNode.innerHTML, e.cellNode); } }, onHeaderCellMouseOut: function (e){ dijit.hideTooltip(e.cellNode); }, }); // ADDED CODE FOR TOOLTIP var gridTooltip = new Tooltip({ connectId: "grid1", selector: "td", position: ["above"], getContent: function(matchedNode){ var childNode = matchedNode.childNodes[0]; if(childNode.nodeType == 1 && childNode.className == "user") { this.position = ["after"]; this.open(childNode); return false; } if(matchedNode.className && matchedNode.className == "user") { this.position = ["after"]; } else { this.position = ["above"]; } return matchedNode.textContent; } }); ... //Construct the grid this.buildGrid = function(){ var datagrid = new com.emc.widget.EnhancedGrid(this,{ Url: "/dge/api/-resultFormat=json&id="+encodeURIComponent(idUrl), dataRoot: "Root.ATrail", height: '100%', columns: [ { name: 'Time', field: 'Time', width: '20%', datatype: 'date', sortable: true, searchable: true, hidden: false}, { name: 'Type', field: 'Type', width: '20%', datatype: 'string', sortable: true, searchable: true, hidden: false}, { name: 'User ID', field: 'UserID', width: '20%', datatype: 'string', sortable: true, searchable: true, hidden: false } ] }); this.grid = datagrid; };

    Read the article

  • Authoritative sources about Database vs. Flatfile decision

    - by FastAl
    <tldr>looking for a reference to a book or other undeniably authoritative source that gives reasons when you should choose a database vs. when you should choose other storage methods. I have provided an un-authoritative list of reasons about 2/3 of the way down this post.</tldr> I have a situation at my company where a database is being used where it would be better to use another solution (in this case, an auto-generated piece of source code that contains a static lookup table, searched by binary sort). Normally, a database would be an OK solution even though the problem does not require a database, e.g, none of the elements of ACID are needed, as it is read-only data, updated about every 3-5 years (also requiring other sourcecode changes), and fits in memory, and can be keyed into via binary search (a tad faster than db, but speed is not an issue). The problem is that this code runs on our enterprise server, but is shared with several PC platforms (some disconnected, some use a central DB, etc.), and parts of it are managed by multiple programming units, parts by the DBAs, parts even by mathematicians in another department, etc. These hit their own platform’s version of their databases (containing their own copy of the static data). What happens is that every implementation, every little change, something different goes wrong. There are many other issues as well. I can’t even use a flatfile, because one mode of running on our enterprise server does not have permission to read files (only databases, and of course, its own literal storage, e.g., in-source table). Of course, other parts of the system use databases in proper, less obscure manners; there is no problem with those parts. So why don’t we just change it? I don’t have administrative ability to force a change. But I’m affected because sometimes I have to help fix the problems, but mostly because it causes outages and tons of extra IT time by other programmers and d*mmit that makes me mad! The reason neither management, nor the designers of the system, can see the problem is that they propose a solution that won’t work: increase communication; implement more safeguards and standards; etc. But every time, in a different part of the already-pared-down but still multi-step processes, a few different diligent, hard-working, top performing IT personnel make a unique subtle error that causes it to fail, sometimes after the last round of testing! And in general these are not single-person failures, but understandable miscommunications. And communication at our company is actually better than most. People just don't think that's the case because they haven't dug into the matter. However, I have it on very good word from somebody with extensive formal study of sociology and psychology that the relatively small amount of less-than-proper database usage in this gigantic cross-platform multi-source, multi-language project is bureaucratically un-maintainable. Impossible. No chance. At least with Human Beings in the loop, and it can’t be automated. In addition, the management and developers who could change this, though intelligent and capable, don’t understand the rigidity of this ‘how humans are’ issue, and are not convincible on the matter. 
The reason putting the static data in sourcecode will solve the problem is that, although the solution is less sexy than a database, it would function with no technical drawbacks; and since the sharing of sourcecode already works very well, you basically erase any database-related effort from this section of the project, along with all the drawbacks of it that are causing problems. OK, that's the background, for the curious. I won't be able to convince management that this is an unfixable sociological problem, and that the real solution is coding around these limits of human nature, just as you would code around a bug in a 3rd party component that you can't change. So what I have to do is exploit the unsuitability of the database solution, and not do it using logic, but rather authority. I am aware of many reasons, and of posts on this site giving reasons for one over the other; I'm not looking for lists of reasons like these (although you can add a comment if I've missed a doozy):

WHY USE A DATABASE? instead of flatfile/other. DB vs. file: if you need...
- Random Read / Transparent search optimization
- Advanced / varied / customizable Searching and sorting capabilities
- Transaction/rollback
- Locks, semaphores
- Concurrency control / Shared users
- Security
- 1-many/m-m is easier
- Easy modification
- Scalability
- Load Balancing
- Random updates / inserts / deletes
- Advanced query
- Administrative control of design, etc.
- SQL / learning curve
- Debugging / Logging
- Centralized / Live Backup capabilities
- Cached queries / dvlp & cache execution plans
- Interleaved update/read
- Referential integrity, avoid redundant/missing/corrupt/out-of-sync data
- Reporting (from an OLAP or OLTP db) / turnkey generation tools
[Disadvantages:]
- Important to get right the first time - professional design - but only b/c it's meant to last
- s/w & h/w cost
- Usu. over a network, speed issue (best vs. best design vs. local = even then a separate process req's marshalling/netwk layers/inter-p comm)
- indices and query processing can stand in the way of simple processing (vs. flatfile)

WHY USE FLATFILE: If you only need...
- Sequential Row processing only
- Limited usage: append only (no reading, no master key/update)
- Only Update the record you're reading (fixed length recs only)
- Too big to fit into memory
- If Local disk / read-ahead network connection
- Portability / small system
- Email / cut & Paste / store as document by novice - simple format
- Low design learning curve but high cost later

WHY USE IN-MEMORY/TABLE (tables, arrays, etc.): if you need...
- Processing a single db/ff record that was imported
- Known size of data
- Static data if hardcoding the table
- Narrow, unchanging use (e.g., one program or proc) - includes a class that will be shared, but encapsulates its data manipulation
- Extreme speed needed / high transaction frequency
- Random access - but search is dependent on implementation

Following are some other posts about the topic:
http://stackoverflow.com/questions/1499239/database-vs-flat-text-file-what-are-some-technical-reasons-for-choosing-one-over
http://stackoverflow.com/questions/332825/are-flat-file-databases-any-good
http://stackoverflow.com/questions/2356851/database-vs-flat-files
http://stackoverflow.com/questions/514455/databases-vs-plain-text/514530

What I'd like to know is if anybody could recommend a hard, authoritative source containing these reasons. I'm looking for a paper book I can buy, or a reputable website with whitepapers about the issue (e.g., Microsoft, IBM), not counting the user-generated content on those sites.
This will have a greater chance of eliciting the change I'm looking for: less wasted programmer time, and more reliable programs. Thanks very much for your help. You win a prize for reading such a large post!
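For illustration only, here is a minimal sketch (Java, with made-up keys and values, not taken from the real system) of the kind of auto-generated, in-source lookup table described above; the generator keeps the keys sorted so a plain binary search is all that is needed at runtime:

    // RateTable.java - AUTO-GENERATED example; names and data are hypothetical.
    import java.util.Arrays;

    public final class RateTable {
        // Keys must stay sorted; the code generator guarantees this at build time.
        private static final int[]    KEYS   = { 100, 250, 400, 975 };
        private static final double[] VALUES = { 0.010, 0.025, 0.040, 0.0975 };

        private RateTable() { }   // static lookup only, never instantiated

        // Returns the value for an exact key, or defaultValue when the key is absent.
        public static double lookup(int key, double defaultValue) {
            int i = Arrays.binarySearch(KEYS, key);
            return i >= 0 ? VALUES[i] : defaultValue;
        }
    }

Because the table ships inside the source, every platform that compiles the code gets exactly the same data, with no database, file permission, or deployment step in the way.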

    Read the article

  • parseInt and viewflipper layout problems

    - by user1234167
    I have a problem with parseInt it throws the error: unable to parse 'null' as integer. My view flipper is also not working. Hopefully this is an easy enough question. Here is my activity: import javax.xml.parsers.SAXParser; import javax.xml.parsers.SAXParserFactory; import org.xml.sax.InputSource; import org.xml.sax.XMLReader; import android.app.Activity; import android.graphics.Color; import android.os.Bundle; import android.util.Log; import android.view.View; import android.view.View.OnClickListener; import android.widget.Button; import android.widget.LinearLayout; import android.widget.TextView; import android.widget.ViewFlipper; import xml.parser.dataset; public class XmlParserActivity extends Activity implements OnClickListener { private final String MY_DEBUG_TAG = "WeatherForcaster"; // private dataset myDataSet; private LinearLayout layout; private int temp= 0; /** Called when the activity is first created. */ //the ViewSwitcher private Button btn; private ViewFlipper flip; // private TextView tv; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); layout=(LinearLayout)findViewById(R.id.linearlayout1); btn=(Button)findViewById(R.id.btn); btn.setOnClickListener(this); flip=(ViewFlipper)findViewById(R.id.flip); //when a view is displayed flip.setInAnimation(this,android.R.anim.fade_in); //when a view disappears flip.setOutAnimation(this, android.R.anim.fade_out); // String postcode = null; // public String getPostcode { // return postcode; // } //URL newUrl = c; // myweather.setText(c.toString()); /* Create a new TextView to display the parsingresult later. */ TextView tv = new TextView(this); // run(0); //WeatherApplicationActivity postcode = new WeatherApplicationActivity(); try { /* Create a URL we want to load some xml-data from. */ URL url = new URL("http://new.myweather2.com/developer/forecast.ashx?uac=gcV3ynNdoV&output=xml&query=G41"); //String url = new String("http://new.myweather2.com/developer/forecast.ashx?uac=gcV3ynNdoV&output=xml&query="+WeatherApplicationActivity.postcode ); //URL url = new URL(url); //url.toString( ); //myString(url.toString() + WeatherApplicationActivity.getString(postcode)); // url + WeatherApplicationActivity.getString(postcode); /* Get a SAXParser from the SAXPArserFactory. */ SAXParserFactory spf = SAXParserFactory.newInstance(); SAXParser sp = spf.newSAXParser(); /* Get the XMLReader of the SAXParser we created. */ XMLReader xr = sp.getXMLReader(); /* Create a new ContentHandler and apply it to the XML-Reader*/ handler myHandler = new handler(); xr.setContentHandler(myHandler); /* Parse the xml-data from our URL. */ xr.parse(new InputSource(url.openStream())); /* Parsing has finished. */ /* Our ExampleHandler now provides the parsed data to us. */ dataset parsedDataSet = myHandler.getParsedData(); /* Set the result to be displayed in our GUI. */ tv.setText(parsedDataSet.toString()); } catch (Exception e) { /* Display any Error to the GUI. 
*/ tv.setText("Error: " + e.getMessage()); Log.e(MY_DEBUG_TAG, "WeatherQueryError", e); } temp = Integer.parseInt(xml.parser.dataset.getTemp()); if(temp <0){ //layout.setBackgroundColor(Color.BLUE); //layout.setBackgroundColor(getResources().getColor(R.color.silver)); findViewById(R.id.flip).setBackgroundColor(Color.BLUE); } else if(temp > 0 && temp < 9) { //layout.setBackgroundColor(Color.GREEN); //layout.setBackgroundColor(getResources().getColor(R.color.silver)); findViewById(R.id.flip).setBackgroundColor(Color.GREEN); } else { //layout.setBackgroundColor(Color.YELLOW); //layout.setBackgroundColor(getResources().getColor(R.color.silver)); findViewById(R.id.flip).setBackgroundColor(Color.YELLOW); } /* Display the TextView. */ this.setContentView(tv); } @Override public void onClick(View arg0) { // TODO Auto-generated method stub onClick(View arg0) { // TODO Auto-generated method stub flip.showNext(); //specify flipping interval //flip.setFlipInterval(1000); //flip.startFlipping(); } } this is my dataset: package xml.parser; public class dataset { static String temp = null; // private int extractedInt = 0; public static String getTemp() { return temp; } public void setTemp(String temp) { this.temp = temp; } this is my handler: public void characters(char ch[], int start, int length) { if(this.in_temp){ String setTemp = new String(ch, start, length); // myParsedDataSet.setTempUnit(new String(ch, start, length)); // myParsedDataSet.setTemp; } the dataset and handler i only pasted the code that involves the temp as i no they r working when i take out the if statement. However even then my viewflipper wont work. This is my main xml: <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:orientation="vertical" android:layout_width="fill_parent" android:layout_height="fill_parent" android:id="@+id/linearlayout1" > <TextView android:layout_width="fill_parent" android:layout_height="wrap_content" android:textSize="25dip" android:text="Flip Example" /> <TextView android:layout_width="fill_parent" android:layout_height="wrap_content" android:textSize="25dip" android:id="@+id/tv" /> <Button android:layout_width="wrap_content" android:layout_height="wrap_content" android:textSize="25dip" android:text="Flip" android:id="@+id/btn" android:onClick="ClickHandler" /> <ViewFlipper android:layout_width="fill_parent" android:layout_height="fill_parent" android:id="@+id/flip"> <LinearLayout android:orientation="vertical" android:layout_width="fill_parent" android:layout_height="fill_parent" > <TextView android:layout_width="fill_parent" android:layout_height="wrap_content" android:textSize="25dip" android:text="Item1a" /> </LinearLayout> <TextView android:layout_width="fill_parent" android:layout_height="wrap_content" android:textSize="25dip" android:id="@+id/tv2" /> </ViewFlipper> </LinearLayout> this is my logcat: 04-01 18:02:24.744: E/AndroidRuntime(7331): FATAL EXCEPTION: main 04-01 18:02:24.744: E/AndroidRuntime(7331): java.lang.RuntimeException: Unable to start activity ComponentInfo{xml.parser/xml.parser.XmlParserActivity}: java.lang.NumberFormatException: unable to parse 'null' as integer 04-01 18:02:24.744: E/AndroidRuntime(7331): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:1830) 04-01 18:02:24.744: E/AndroidRuntime(7331): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:1851) 04-01 18:02:24.744: E/AndroidRuntime(7331): at android.app.ActivityThread.access$1500(ActivityThread.java:132) 
04-01 18:02:24.744: E/AndroidRuntime(7331): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1038) 04-01 18:02:24.744: E/AndroidRuntime(7331): at android.os.Handler.dispatchMessage(Handler.java:99) 04-01 18:02:24.744: E/AndroidRuntime(7331): at android.os.Looper.loop(Looper.java:150) 04-01 18:02:24.744: E/AndroidRuntime(7331): at android.app.ActivityThread.main(ActivityThread.java:4293) 04-01 18:02:24.744: E/AndroidRuntime(7331): at java.lang.reflect.Method.invokeNative(Native Method) 04-01 18:02:24.744: E/AndroidRuntime(7331): at java.lang.reflect.Method.invoke(Method.java:507) 04-01 18:02:24.744: E/AndroidRuntime(7331): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:849) 04-01 18:02:24.744: E/AndroidRuntime(7331): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:607) 04-01 18:02:24.744: E/AndroidRuntime(7331): at dalvik.system.NativeStart.main(Native Method) 04-01 18:02:24.744: E/AndroidRuntime(7331): Caused by: java.lang.NumberFormatException: unable to parse 'null' as integer 04-01 18:02:24.744: E/AndroidRuntime(7331): at java.lang.Integer.parseInt(Integer.java:356) 04-01 18:02:24.744: E/AndroidRuntime(7331): at java.lang.Integer.parseInt(Integer.java:332) 04-01 18:02:24.744: E/AndroidRuntime(7331): at xml.parser.XmlParserActivity.onCreate(XmlParserActivity.java:118) 04-01 18:02:24.744: E/AndroidRuntime(7331): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1072) 04-01 18:02:24.744: E/AndroidRuntime(7331): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:1794) I hope I have given enough information about my problems. I will be extremely grateful if anyone can help me out.
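A hedged reading of the logcat, based only on the snippets above: setTemp() is never actually called inside characters() (the assignment is commented out), so dataset.getTemp() still returns its initial null and Integer.parseInt(null) fails with exactly the "unable to parse 'null' as integer" message shown. The ViewFlipper symptom may be separate: the final this.setContentView(tv) call replaces the layout that contains the flipper, so the flipper never stays on screen. A minimal sketch of the parsing fix (identifiers follow the post and are assumptions, not verified code):

    // In the SAX handler: actually store the parsed text.
    public void characters(char[] ch, int start, int length) {
        if (this.in_temp) {
            myParsedDataSet.setTemp(new String(ch, start, length).trim());
        }
    }

    // In the Activity, parse defensively instead of assuming the value exists.
    String rawTemp = parsedDataSet.getTemp();
    int temp = 0;                                 // fallback when no temperature was parsed
    if (rawTemp != null && rawTemp.length() > 0) {
        try {
            temp = Integer.parseInt(rawTemp);
        } catch (NumberFormatException nfe) {
            Log.e(MY_DEBUG_TAG, "Bad temperature value: " + rawTemp, nfe);
        }
    }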

    Read the article

  • How do I prove I should put a table of values in source code instead of a database table?

    - by FastAl
    <tldr>looking for a reference to a book or other undeniably authoritative source that gives reasons when you should choose a database vs. when you should choose other storage methods. I have provided an un-authoritative list of reasons about 2/3 of the way down this post.</tldr> I have a situation at my company where a database is being used where it would be better to use another solution (in this case, an auto-generated piece of source code that contains a static lookup table, searched by binary sort). Normally, a database would be an OK solution even though the problem does not require a database, e.g, none of the elements of ACID are needed, as it is read-only data, updated about every 3-5 years (also requiring other sourcecode changes), and fits in memory, and can be keyed into via binary search (a tad faster than db, but speed is not an issue). The problem is that this code runs on our enterprise server, but is shared with several PC platforms (some disconnected, some use a central DB, etc.), and parts of it are managed by multiple programming units, parts by the DBAs, parts even by mathematicians in another department, etc. These hit their own platform’s version of their databases (containing their own copy of the static data). What happens is that every implementation, every little change, something different goes wrong. There are many other issues as well. I can’t even use a flatfile, because one mode of running on our enterprise server does not have permission to read files (only databases, and of course, its own literal storage, e.g., in-source table). Of course, other parts of the system use databases in proper, less obscure manners; there is no problem with those parts. So why don’t we just change it? I don’t have administrative ability to force a change. But I’m affected because sometimes I have to help fix the problems, but mostly because it causes outages and tons of extra IT time by other programmers and d*mmit that makes me mad! The reason neither management, nor the designers of the system, can see the problem is that they propose a solution that won’t work: increase communication; implement more safeguards and standards; etc. But every time, in a different part of the already-pared-down but still multi-step processes, a few different diligent, hard-working, top performing IT personnel make a unique subtle error that causes it to fail, sometimes after the last round of testing! And in general these are not single-person failures, but understandable miscommunications. And communication at our company is actually better than most. People just don't think that's the case because they haven't dug into the matter. However, I have it on very good word from somebody with extensive formal study of sociology and psychology that the relatively small amount of less-than-proper database usage in this gigantic cross-platform multi-source, multi-language project is bureaucratically un-maintainable. Impossible. No chance. At least with Human Beings in the loop, and it can’t be automated. In addition, the management and developers who could change this, though intelligent and capable, don’t understand the rigidity of this ‘how humans are’ issue, and are not convincible on the matter. 
The reason putting the static data in sourcecode will solve the problem is that, although the solution is less sexy than a database, it would function with no technical drawbacks; and since the sharing of sourcecode already works very well, you basically erase any database-related effort from this section of the project, along with all the drawbacks of it that are causing problems. OK, that's the background, for the curious. I won't be able to convince management that this is an unfixable sociological problem, and that the real solution is coding around these limits of human nature, just as you would code around a bug in a 3rd party component that you can't change. So what I have to do is exploit the unsuitability of the database solution, and not do it using logic, but rather authority. I am aware of many reasons, and of posts on this site giving reasons for one over the other; I'm not looking for lists of reasons like these (although you can add a comment if I've missed a doozy):

WHY USE A DATABASE? instead of flatfile/other. DB vs. file: if you need...
- Random Read / Transparent search optimization
- Advanced / varied / customizable Searching and sorting capabilities
- Transaction/rollback
- Locks, semaphores
- Concurrency control / Shared users
- Security
- 1-many/m-m is easier
- Easy modification
- Scalability
- Load Balancing
- Random updates / inserts / deletes
- Advanced query
- Administrative control of design, etc.
- SQL / learning curve
- Debugging / Logging
- Centralized / Live Backup capabilities
- Cached queries / dvlp & cache execution plans
- Interleaved update/read
- Referential integrity, avoid redundant/missing/corrupt/out-of-sync data
- Reporting (from an OLAP or OLTP db) / turnkey generation tools
[Disadvantages:]
- Important to get right the first time - professional design - but only b/c it's meant to last
- s/w & h/w cost
- Usu. over a network, speed issue (best vs. best design vs. local = even then a separate process req's marshalling/netwk layers/inter-p comm)
- indices and query processing can stand in the way of simple processing (vs. flatfile)

WHY USE FLATFILE: If you only need...
- Sequential Row processing only
- Limited usage: append only (no reading, no master key/update)
- Only Update the record you're reading (fixed length recs only)
- Too big to fit into memory
- If Local disk / read-ahead network connection
- Portability / small system
- Email / cut & Paste / store as document by novice - simple format
- Low design learning curve but high cost later

WHY USE IN-MEMORY/TABLE (tables, arrays, etc.): if you need...
- Processing a single db/ff record that was imported
- Known size of data
- Static data if hardcoding the table
- Narrow, unchanging use (e.g., one program or proc) - includes a class that will be shared, but encapsulates its data manipulation
- Extreme speed needed / high transaction frequency
- Random access - but search is dependent on implementation

Following are some other posts about the topic:
http://stackoverflow.com/questions/1499239/database-vs-flat-text-file-what-are-some-technical-reasons-for-choosing-one-over
http://stackoverflow.com/questions/332825/are-flat-file-databases-any-good
http://stackoverflow.com/questions/2356851/database-vs-flat-files
http://stackoverflow.com/questions/514455/databases-vs-plain-text/514530

What I'd like to know is if anybody could recommend a hard, authoritative source containing these reasons. I'm looking for a paper book I can buy, or a reputable website with whitepapers about the issue (e.g., Microsoft, IBM), not counting the user-generated content on those sites.
This will have a greater chance of eliciting the change I'm looking for: less wasted programmer time, and more reliable programs. Thanks very much for your help. You win a prize for reading such a large post!
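As a purely illustrative sketch (file names, CSV format, and class names are hypothetical, not the real system), the "auto-generated piece of source code" can come from a tiny build-time generator that reads the master data kept under version control and emits the lookup class, so the in-source table can never drift from its master copy:

    // GenerateLookupTable.java - hypothetical build step, not production code.
    import java.io.*;
    import java.util.*;

    public class GenerateLookupTable {
        public static void main(String[] args) throws IOException {
            // TreeMap keeps the keys sorted, which is what the runtime binary search relies on.
            TreeMap<Integer, Double> rows = new TreeMap<Integer, Double>();
            BufferedReader in = new BufferedReader(new FileReader("master_data.csv"));
            String line;
            while ((line = in.readLine()) != null) {
                String[] parts = line.split(",");
                rows.put(Integer.valueOf(parts[0].trim()), Double.valueOf(parts[1].trim()));
            }
            in.close();

            PrintWriter out = new PrintWriter(new FileWriter("RateTable.java"));
            out.println("// AUTO-GENERATED - edit master_data.csv and re-run the generator instead.");
            out.println("public final class RateTable {");
            out.print("    static final int[] KEYS = {");
            for (int k : rows.keySet()) out.print(" " + k + ",");
            out.println(" };");
            out.print("    static final double[] VALUES = {");
            for (double v : rows.values()) out.print(" " + v + ",");
            out.println(" };");
            out.println("}");
            out.close();
        }
    }

Regenerating the file is part of the build, so a data change becomes an ordinary source-control commit that is reviewed like any other code change.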

    Read the article
