Search Results

Search found 15237 results on 610 pages for 'browser defaults'.


  • web.xml not reloading in Tomcat even after stop/start

    - by ajay
    This is in relation to:- http://stackoverflow.com/questions/2576514/basic-tomcat-servlet-error I changed my web.xml file, did ant compile , all, /etc/init.d/tomcat stop , start Even then my web.xml file in tomcat deployment is still unchanged. This is build.properties file:- app.name=hello catalina.home=/usr/local/tomcat manager.username=admin manager.password=admin This is my build.xml file. Is there something wrong with this:- <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <!-- General purpose build script for web applications and web services, including enhanced support for deploying directly to a Tomcat 6 based server. This build script assumes that the source code of your web application is organized into the following subdirectories underneath the source code directory from which you execute the build script: docs Static documentation files to be copied to the "docs" subdirectory of your distribution. src Java source code (and associated resource files) to be compiled to the "WEB-INF/classes" subdirectory of your web applicaiton. web Static HTML, JSP, and other content (such as image files), including the WEB-INF subdirectory and its configuration file contents. $Id: build.xml.txt 562814 2007-08-05 03:52:04Z markt $ --> <!-- A "project" describes a set of targets that may be requested when Ant is executed. The "default" attribute defines the target which is executed if no specific target is requested, and the "basedir" attribute defines the current working directory from which Ant executes the requested task. This is normally set to the current working directory. --> <project name="My Project" default="compile" basedir="."> <!-- ===================== Property Definitions =========================== --> <!-- Each of the following properties are used in the build script. Values for these properties are set by the first place they are defined, from the following list: * Definitions on the "ant" command line (ant -Dfoo=bar compile). * Definitions from a "build.properties" file in the top level source directory of this application. * Definitions from a "build.properties" file in the developer's home directory. * Default definitions in this build.xml file. You will note below that property values can be composed based on the contents of previously defined properties. This is a powerful technique that helps you minimize the number of changes required when your development environment is modified. Note that property composition is allowed within "build.properties" files as well as in the "build.xml" script. 
--> <property file="build.properties"/> <property file="${user.home}/build.properties"/> <!-- ==================== File and Directory Names ======================== --> <!-- These properties generally define file and directory names (or paths) that affect where the build process stores its outputs. app.name Base name of this application, used to construct filenames and directories. Defaults to "myapp". app.path Context path to which this application should be deployed (defaults to "/" plus the value of the "app.name" property). app.version Version number of this iteration of the application. build.home The directory into which the "prepare" and "compile" targets will generate their output. Defaults to "build". catalina.home The directory in which you have installed a binary distribution of Tomcat 6. This will be used by the "deploy" target. dist.home The name of the base directory in which distribution files are created. Defaults to "dist". manager.password The login password of a user that is assigned the "manager" role (so that he or she can execute commands via the "/manager" web application) manager.url The URL of the "/manager" web application on the Tomcat installation to which we will deploy web applications and web services. manager.username The login username of a user that is assigned the "manager" role (so that he or she can execute commands via the "/manager" web application) --> <property name="app.name" value="myapp"/> <property name="app.path" value="/${app.name}"/> <property name="app.version" value="0.1-dev"/> <property name="build.home" value="${basedir}/build"/> <property name="catalina.home" value="../../../.."/> <!-- UPDATE THIS! --> <property name="dist.home" value="${basedir}/dist"/> <property name="docs.home" value="${basedir}/docs"/> <property name="manager.url" value="http://localhost:8080/manager"/> <property name="src.home" value="${basedir}/src"/> <property name="web.home" value="${basedir}/web"/> <!-- ==================== External Dependencies =========================== --> <!-- Use property values to define the locations of external JAR files on which your application will depend. In general, these values will be used for two purposes: * Inclusion on the classpath that is passed to the Javac compiler * Being copied into the "/WEB-INF/lib" directory during execution of the "deploy" target. Because we will automatically include all of the Java classes that Tomcat 6 exposes to web applications, we will not need to explicitly list any of those dependencies. You only need to worry about external dependencies for JAR files that you are going to include inside your "/WEB-INF/lib" directory. --> <!-- Dummy external dependency --> <!-- <property name="foo.jar" value="/path/to/foo.jar"/> --> <!-- ==================== Compilation Classpath =========================== --> <!-- Rather than relying on the CLASSPATH environment variable, Ant includes features that makes it easy to dynamically construct the classpath you need for each compilation. The example below constructs the compile classpath to include the servlet.jar file, as well as the other components that Tomcat makes available to web applications automatically, plus anything that you explicitly added. 
--> <path id="compile.classpath"> <!-- Include all JAR files that will be included in /WEB-INF/lib --> <!-- *** CUSTOMIZE HERE AS REQUIRED BY YOUR APPLICATION *** --> <!-- <pathelement location="${foo.jar}"/> --> <!-- Include all elements that Tomcat exposes to applications --> <fileset dir="${catalina.home}/bin"> <include name="*.jar"/> </fileset> <pathelement location="${catalina.home}/lib"/> <fileset dir="${catalina.home}/lib"> <include name="*.jar"/> </fileset> </path> <!-- ================== Custom Ant Task Definitions ======================= --> <!-- These properties define custom tasks for the Ant build tool that interact with the "/manager" web application installed with Tomcat 6. Before they can be successfully utilized, you must perform the following steps: - Copy the file "lib/catalina-ant.jar" from your Tomcat 6 installation into the "lib" directory of your Ant installation. - Create a "build.properties" file in your application's top-level source directory (or your user login home directory) that defines appropriate values for the "manager.password", "manager.url", and "manager.username" properties described above. For more information about the Manager web application, and the functionality of these tasks, see <http://localhost:8080/tomcat-docs/manager-howto.html>. --> <taskdef resource="org/apache/catalina/ant/catalina.tasks" classpathref="compile.classpath"/> <!-- ==================== Compilation Control Options ==================== --> <!-- These properties control option settings on the Javac compiler when it is invoked using the <javac> task. compile.debug Should compilation include the debug option? compile.deprecation Should compilation include the deprecation option? compile.optimize Should compilation include the optimize option? --> <property name="compile.debug" value="true"/> <property name="compile.deprecation" value="false"/> <property name="compile.optimize" value="true"/> <!-- ==================== All Target ====================================== --> <!-- The "all" target is a shortcut for running the "clean" target followed by the "compile" target, to force a complete recompile. --> <target name="all" depends="clean,compile" description="Clean build and dist directories, then compile"/> <!-- ==================== Clean Target ==================================== --> <!-- The "clean" target deletes any previous "build" and "dist" directory, so that you can be ensured the application can be built from scratch. --> <target name="clean" description="Delete old build and dist directories"> <delete dir="${build.home}"/> <delete dir="${dist.home}"/> </target> <!-- ==================== Compile Target ================================== --> <!-- The "compile" target transforms source files (from your "src" directory) into object files in the appropriate location in the build directory. This example assumes that you will be including your classes in an unpacked directory hierarchy under "/WEB-INF/classes". 
--> <target name="compile" depends="prepare" description="Compile Java sources"> <!-- Compile Java classes as necessary --> <mkdir dir="${build.home}/WEB-INF/classes"/> <javac srcdir="${src.home}" destdir="${build.home}/WEB-INF/classes" debug="${compile.debug}" deprecation="${compile.deprecation}" optimize="${compile.optimize}"> <classpath refid="compile.classpath"/> </javac> <!-- Copy application resources --> <copy todir="${build.home}/WEB-INF/classes"> <fileset dir="${src.home}" excludes="**/*.java"/> </copy> </target> <!-- ==================== Dist Target ===================================== --> <!-- The "dist" target creates a binary distribution of your application in a directory structure ready to be archived in a tar.gz or zip file. Note that this target depends on two others: * "compile" so that the entire web application (including external dependencies) will have been assembled * "javadoc" so that the application Javadocs will have been created --> <target name="dist" depends="compile,javadoc" description="Create binary distribution"> <!-- Copy documentation subdirectories --> <mkdir dir="${dist.home}/docs"/> <copy todir="${dist.home}/docs"> <fileset dir="${docs.home}"/> </copy> <!-- Create application JAR file --> <jar jarfile="${dist.home}/${app.name}-${app.version}.war" basedir="${build.home}"/> <!-- Copy additional files to ${dist.home} as necessary --> </target> <!-- ==================== Install Target ================================== --> <!-- The "install" target tells the specified Tomcat 6 installation to dynamically install this web application and make it available for execution. It does *not* cause the existence of this web application to be remembered across Tomcat restarts; if you restart the server, you will need to re-install all this web application. If you have already installed this application, and simply want Tomcat to recognize that you have updated Java classes (or the web.xml file), use the "reload" target instead. NOTE: This target will only succeed if it is run from the same server that Tomcat is running on. NOTE: This is the logical opposite of the "remove" target. --> <target name="install" depends="compile" description="Install application to servlet container"> <deploy url="${manager.url}" username="${manager.username}" password="${manager.password}" path="${app.path}" localWar="file://${build.home}"/> </target> <!-- ==================== Javadoc Target ================================== --> <!-- The "javadoc" target creates Javadoc API documentation for the Java classes included in your application. Normally, this is only required when preparing a distribution release, but is available as a separate target in case the developer wants to create Javadocs independently. --> <target name="javadoc" depends="compile" description="Create Javadoc API documentation"> <mkdir dir="${dist.home}/docs/api"/> <javadoc sourcepath="${src.home}" destdir="${dist.home}/docs/api" packagenames="*"> <classpath refid="compile.classpath"/> </javadoc> </target> <!-- ====================== List Target =================================== --> <!-- The "list" target asks the specified Tomcat 6 installation to list the currently running web applications, either loaded at startup time or installed dynamically. It is useful to determine whether or not the application you are currently developing has been installed. 
--> <target name="list" description="List installed applications on servlet container"> <list url="${manager.url}" username="${manager.username}" password="${manager.password}"/> </target> <!-- ==================== Prepare Target ================================== --> <!-- The "prepare" target is used to create the "build" destination directory, and copy the static contents of your web application to it. If you need to copy static files from external dependencies, you can customize the contents of this task. Normally, this task is executed indirectly when needed. --> <target name="prepare"> <!-- Create build directories as needed --> <mkdir dir="${build.home}"/> <mkdir dir="${build.home}/WEB-INF"/> <mkdir dir="${build.home}/WEB-INF/classes"/> <!-- Copy static content of this web application --> <copy todir="${build.home}"> <fileset dir="${web.home}"/> </copy> <!-- Copy external dependencies as required --> <!-- *** CUSTOMIZE HERE AS REQUIRED BY YOUR APPLICATION *** --> <mkdir dir="${build.home}/WEB-INF/lib"/> <!-- <copy todir="${build.home}/WEB-INF/lib" file="${foo.jar}"/> --> <!-- Copy static files from external dependencies as needed --> <!-- *** CUSTOMIZE HERE AS REQUIRED BY YOUR APPLICATION *** --> </target> <!-- ==================== Reload Target =================================== --> <!-- The "reload" signals the specified application Tomcat 6 to shut itself down and reload. This can be useful when the web application context is not reloadable and you have updated classes or property files in the /WEB-INF/classes directory or when you have added or updated jar files in the /WEB-INF/lib directory. NOTE: The /WEB-INF/web.xml web application configuration file is not reread on a reload. If you have made changes to your web.xml file you must stop then start the web application. --> <target name="reload" depends="compile" description="Reload application on servlet container"> <reload url="${manager.url}" username="${manager.username}" password="${manager.password}" path="${app.path}"/> </target> <!-- ==================== Remove Target =================================== --> <!-- The "remove" target tells the specified Tomcat 6 installation to dynamically remove this web application from service. NOTE: This is the logical opposite of the "install" target. --> <target name="remove" description="Remove application on servlet container"> <undeploy url="${manager.url}" username="${manager.username}" password="${manager.password}" path="${app.path}"/> </target> </project>
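One thing worth noting about this build file: the "install" target deploys only through the Tomcat manager application, the "reload" target explicitly does not re-read web.xml, and no target copies the built application into Tomcat's webapps directory. A minimal sketch of an extra target that deploys the packaged WAR locally might look like the following; the target name "deploy-local" and the clean-up steps are illustrative additions, not part of the original build file:

    <!-- Hypothetical helper target: copies the WAR produced by "dist" into the
         local Tomcat webapps directory so that a full stop/start picks up the
         new web.xml. Adjust the paths to your installation. -->
    <target name="deploy-local" depends="dist"
            description="Copy the packaged WAR into Tomcat's webapps directory">
      <!-- Remove the old expanded webapp so a stale copy of web.xml cannot survive -->
      <delete dir="${catalina.home}/webapps/${app.name}"/>
      <delete file="${catalina.home}/webapps/${app.name}.war"/>
      <!-- Copy the freshly built WAR; Tomcat expands it on the next start -->
      <copy file="${dist.home}/${app.name}-${app.version}.war"
            tofile="${catalina.home}/webapps/${app.name}.war"/>
    </target>

Running "ant deploy-local" and then stopping and starting Tomcat would rebuild the expanded webapp, including its web.xml, from the new WAR.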


  • Cannot see the variable in my own jQuery plugin's function

    - by qinHaiXiang
    I am writing one of my own JQuery plugin. And I got some strange which make me confused. I am using JQuery UI datepicker with my plugin. ;(function($){ var newMW = 1, mwZIndex = 0; // IgtoMW contructor Igtomw = function(elem , options){ var activePanel, lastPanel, daysWithRecords, sliding; // used to check the animation below is executed to the end. // used to access the plugin's default configuration this.opts = $.extend({}, $.fn.igtomw.defaults, options); // intial the model window this.intialMW(); }; $.extend(Igtomw.prototype, { // intial model window intialMW : function(){ this.sliding = false; //this.daysWithRecords = []; this.igtoMW = $('<div />',{'id':'igto'+newMW,'class':'igtoMW',}) .css({'z-index':mwZIndex}) // make it in front of all exist model window; .appendTo('body') .draggable({ containment: 'parent' , handle: '.dragHandle' , distance: 5 }); //var igtoWrapper = igtoMW.append($('<div />',{'class':'igtoWrapper'})); this.igtoWrapper = $('<div />',{'class':'igtoWrapper'}).appendTo(this.igtoMW); this.igtoOpacityBody = $('<div />',{'class':'igtoOpacityBody'}).appendTo(this.igtoMW); //var igtoHeaderInfo = igtoWrapper.append($('<div />',{'class':'igtoHeaderInfo dragHandle'})); this.igtoHeaderInfo = $('<div />',{'class':'igtoHeaderInfo dragHandle'}) .appendTo(this.igtoWrapper); this.igtoQuickNavigation = $('<div />',{'class':'igtoQuickNavigation'}) .css({'color':'#fff'}) .appendTo(this.igtoWrapper); this.igtoContentSlider = $('<div />',{'class':'igtoContentSlider'}) .appendTo(this.igtoWrapper); this.igtoQuickMenu = $('<div />',{'class':'igtoQuickMenu'}) .appendTo(this.igtoWrapper); this.igtoFooter = $('<div />',{'class':'igtoFooter dragHandle'}) .appendTo(this.igtoWrapper); // append to igtoHeaderInfo this.headTitle = this.igtoHeaderInfo.append($('<div />',{'class':'headTitle'})); // append to igtoQuickNavigation this.igQuickNav = $('<div />', {'class':'igQuickNav'}) .html('??') .appendTo(this.igtoQuickNavigation); // append to igtoContentSlider this.igInnerPanelTopMenu = $('<div />',{'class':'igInnerPanelTopMenu'}) .appendTo(this.igtoContentSlider); this.igInnerPanelTopMenu.append('<div class="igInnerPanelButtonPreWrapper"><a href="" class="igInnerPanelButton Pre" action="" style="background-image:url(images/igto/igInnerPanelTopMenu.bt.bg.png);"></a></div>'); this.igInnerPanelTopMenu.append('<div class="igInnerPanelSearch"><input type="text" name="igInnerSearch" /><a href="" class="igInnerSearch">??</a></div>' ); this.igInnerPanelTopMenu.append('<div class="igInnerPanelButtonNextWrapper"><a href="" class="igInnerPanelButton Next" action="sm" style="background-image:url(images/igto/igInnerPanelTopMenu.bt.bg.png); background-position:-272px"></a></div>' ); this.igInnerPanelBottomMenu = $('<div />',{'class':'igInnerPanelBottomMenu'}) .appendTo(this.igtoContentSlider); this.icWrapper = $('<div />',{'class':'icWrapper','id':'igto'+newMW+'Panel'}) .appendTo(this.igtoContentSlider); this.icWrapperCotentPre = $('<div class="slider pre"></div>').appendTo(this.icWrapper); this.icWrapperCotentShow = $('<div class="slider firstShow "></div>').appendTo(this.icWrapper); this.icWrapperCotentnext = $('<div class="slider next"></div>').appendTo(this.icWrapper); this.initialPanel(); this.initialQuickMenus(); console.log(this.leftPad(9)); newMW++; mwZIndex++; this.igtoMW.bind('mousedown',function(){ var $this = $(this); //alert($this.css('z-index') + ' '+mwZIndex); if( parseInt($this.css('z-index')) === (mwZIndex-1) ) return; $this.css({'z-index':mwZIndex}); mwZIndex++; //alert(mwZIndex); }); }, 
initialPanel : function(){ this.defaultPanelNum = this.opts.initialPanel; this.activePanel = this.defaultPanelNum; this.lastPanel = this.defaultPanelNum; this.defaultPanel = this.loadPanelContents(this.defaultPanelNum); $(this.defaultPanel).appendTo(this.icWrapperCotentShow); }, initialQuickMenus : function(){ // store the current element var obj = this; var defaultQM = this.opts.initialQuickMenu; var strMenu = ''; var marginFirstEle = '8'; $.each(defaultQM,function(key,value){ //alert(key+':'+value); if(marginFirstEle === '8'){ strMenu += '<a href="" class="btPanel" panel="'+key+'" style="margin-left: 8px;" >'+value+'</a>'; marginFirstEle = '4'; } else{ strMenu += '<a href="" class="btPanel" panel="'+key+'" style="margin-left: 4px;" >'+value+'</a>'; } }); // append to igtoQuickMenu this.igtoQMenu = $(strMenu).appendTo(this.igtoQuickMenu); this.igtoQMenu.bind('click',function(event){ event.preventDefault(); var element = $(this); if(element.is('.active')){ return; } else{ $(obj.igtoQMenu).removeClass('active'); element.addClass('active'); } var d = new Date(); var year = d.getFullYear(); var month = obj.leftPad( d.getMonth() ); var inst = null; if( obj.sliding === false){ console.log(obj.lastPanel); var currentPanelNum = parseInt(element.attr('panel')); obj.checkAvailability(); obj.getDays(year,month,inst,currentPanelNum); obj.slidePanel(currentPanelNum); obj.activePanel = currentPanelNum; console.log(obj.activePanel); obj.lastPanel = obj.activePanel; obj.icWrapper.find('input').val(obj.activePanel); } }); }, initialLoginPanel : function(){ var obj = this; this.igPanelLogin = $('<div />',{'class':"igPanelLogin"}); this.igEnterName = $('<div />',{'class':"igEnterName"}).appendTo(this.igPanelLogin); this.igInput = $('<input type="text" name="name" value="???" 
/>').appendTo(this.igEnterName); this.igtoLoginBtWrap = $('<div />',{'class':"igButtons"}).appendTo(this.igPanelLogin); this.igtoLoginBt = $('<a href="" class="igtoLoginBt" action="OK" >??</a>\ <a href="" class="igtoLoginBt" action="CANCEL" >??</a>\ <a href="" class="igtoLoginBt" action="ADD" >????</a>').appendTo(this.igtoLoginBtWrap); this.igtoLoginBt.bind('click',function(event){ event.preventDefault(); var elem = $(this); var action = elem.attr('action'); var userName = obj.igInput.val(); obj.loadRootMenu(); }); return this.igPanelLogin; }, initialWatchHistory : function(){ var obj = this; // for thirt part plugin used if(this.sliding === false){ this.watchHistory = $('<div />',{'class':'igInnerPanelSlider'}).append($('<div />',{'class':'igInnerPanel_pre'}).addClass('igInnerPanel')) .append($('<div />',{'class':'igInnerPanel'}).datepicker({ dateFormat: 'yy-mm-dd',defaultDate: '2010-12-01' ,showWeek: true,firstDay: 1, //beforeShow:setDateStatistics(), onChangeMonthYear:function(year, month, inst) { var panelNum = 1; month = obj.leftPad(month); obj.getDays(year,month,inst,panelNum); } , beforeShowDay: obj.checkAvailability, onSelect: function(dateText, inst) { obj.checkAvailability(); } }).append($('<div />',{'class':'extraMenu'})) ) .append($('<div />',{'class':'igInnerPanel_next'}).addClass('igInnerPanel')); return this.watchHistory; } }, loadPanelContents : function(panelNum){ switch(panelNum){ case 1: alert('inside loadPanelContents') return this.initialWatchHistory(); break; case 2: return this.initialWatchHistory(); break; case 3: return this.initialWatchHistory(); break; case 4: return this.initialWatchHistory(); break; case 5: return this.initialLoginPanel(); break; } }, loadRootMenu : function(){ var obj = this; var mainMenuPanel = $('<div />',{'class':'igRootMenu'}); var currentMWId = this.igtoMW.attr('id'); this.activePanel = 0; $('#'+currentMWId+'Panel .pre'). queue(function(next){ $(this). html(mainMenuPanel). addClass('panelShow'). removeClass('pre'). attr('panelNum',0); next(); }). queue(function(next){ $('<div style="width:0;" class="slider pre"></div>'). prependTo('#'+currentMWId+'Panel').animate({width:348}, function(){ $('#'+currentMWId+'Panel .slider:last').remove() $('#'+currentMWId+'Panel .slider:last').replaceWith('<div class="slider next"></div>'); $('.btMenu').remove(); // remove bottom quick menu obj.sliding = false; $(this).removeAttr('style'); }); $('.igtoQuickMenu .active').removeClass('active'); next(); }); }, slidePanel : function(currentPanelNum){ var currentMWId = this.igtoMW.attr('id'); var obj = this; //alert(obj.loadPanelContents(currentPanelNum)); if( this.activePanel > currentPanelNum){ $('#'+currentMWId+'Panel .pre'). queue(function(next){ alert('inside slidePanel') //var initialDate = getPanelDateStatus(panelNum); //console.log('intial day in bigger panel '+initialDate) $(this). html(obj.loadPanelContents(currentPanelNum)). addClass('panelShow'). removeClass('pre'). attr('panelNum',currentPanelNum); $('#'+currentMWId+'Panel .next').remove(); next(); }). queue(function(next){ $('<div style="width:0;" class="slider pre"></div>'). 
prependTo('#'+currentMWId+'Panel').animate({width:348}, function(){ //$('#igto1Panel .slider:last').find(setPanel(currentPanelNum)).datepicker('destroy'); $('#'+currentMWId+'Panel .slider:last').empty().removeClass('panelShow').addClass('next').removeAttr('panelNum'); $('#'+currentMWId+'Panel .slider:last').replaceWith('<div class="slider next"></div>') obj.sliding = false;console.log('inuse inside animation: '+obj.sliding); $(this).removeAttr('style'); }); next(); }); } else{ ///// current panel num smaller than next $('#'+currentMWId+'Panel .next'). queue(function(next){ $(this). html(obj.loadPanelContents(currentPanelNum)). addClass('panelShow'). removeClass('next'). attr('panelNum',currentPanelNum); $('<div class="slider next">empty</div>').appendTo('#'+currentMWId+'Panel'); next(); }). queue(function(next){ $('#'+currentMWId+'Panel .pre').animate({width:0}, function(){ $(this).remove(); //$('#igto1Panel .slider:first').find(setPanel(currentPanelNum)).datepicker('destroy'); $('#'+currentMWId+'Panel .slider:first').empty().removeClass('panelShow').addClass('pre').removeAttr('panelNum').removeAttr('style'); $('#'+currentMWId+'Panel .slider:first').replaceWith('<div class="slider pre"></div>') obj.sliding = false; console.log('inuse inside animation: '+obj.sliding); }); next(); }); } }, getDays : function(year,month,inst,panelNum){ var obj = this; // depand on the mysql qurey condition var table_of_record = 'moviewh';//getTable(panelNum); var date_of_record = 'watching_date';//getTableDateCol(panelNum); var date_to_find = year+'-'+month; var node_of_xml_date_list = 'whDateRecords';//getXMLDateNode(panelNum); var user_id = '1';//getLoginUserId(); //var daysWithRecords = []; // empty array before asigning this.daysWithRecords.length = 0; $.ajax({ type: "GET", url: "include/get.date.list.process.php", data:({ table_of_record : table_of_record,date_of_record:date_of_record,date_to_find:date_to_find,user_id:user_id,node_of_xml_date_list:node_of_xml_date_list }), dataType: "json", cache: false, // force broser don't cache the xml file async: false, // using this option to prevent datepicker refresh ??NO success:function(data){ // had no date records if(data === null) return; obj.daysWithRecords = data; } }); //setPanelDateStatus(year,month,panelNum); console.log('call from getdays() ' + this.daysWithRecords); }, checkAvailability : function(availableDays) { // var i; var checkdate = $.datepicker.formatDate('yy-mm-dd', availableDays); //console.log( checkdate); // for(var i = 0; i < this.daysWithRecords.length; i++) { // // if(this.daysWithRecords[i] == checkdate){ // // return [true, "available"]; // } // } //console.log('inside check availablility '+ this.daysWithRecords); //return [true, "available"]; console.log(typeof this.daysWithRecords) for(i in this.daysWithRecords){ //if(this.daysWithRecords[i] == checkdate){ console.log(typeof this.daysWithRecords[i]); //return [true, "available"]; //} } return [true, "available"]; //return [false, ""]; }, leftPad : function(num) { return (num < 10) ? 
'0' + num : num; } });
$.fn.igtomw = function(options){
    // Merge options passed in with global defaults
    var opt = $.extend({}, $.fn.igtomw.defaults , options);
    return this.each(function() { new Igtomw(this,opt); });
};
$.fn.igtomw.defaults = {
    // 0:mainMenu 1:whatchHistor 2:requestHistory 3:userManager
    // 4:shoppingCart 5:loginPanel
    initialPanel : 5, // default panel is LoginPanel
    initialQuickMenu : {'1':'whatchHIstory','2':'????','3':'????','4':'????'} // defalut quick menu
};
})(jQuery);

Usage:
$('.openMW').click(function(event){
    event.preventDefault();
    $('<div class="">').igtomw();
})

HTML code:
<div id="taskBarAndStartMenu">
    <div class="taskBarAndStartMenuM"> <a href="" class="openMW" >??IGTO</a> </div>
    <div class="taskBarAndStartMenuO"></div>
</div>

In my workflow: when I click the "whatchHistory" button, the plugin loads a panel containing a jQuery UI datepicker on which certain days should be marked as available. I use the function "getDays()" to fetch the list of available days and store it in "daysWithRecords", and the datepicker's "beforeShowDay" option then calls the function "checkAvailability()" to mark those days. The variable "daysWithRecords" is declared inside Igtomw = function(elem , options) and is assigned inside the function getDays(). I use the function "initialWatchHistory()" to initialize and render the jQuery UI datepicker on the page. My problem is that the function "checkAvailability()" cannot see the variable "daysWithRecords": Firebug reports that "daysWithRecords" is "undefined". This is the first plugin I have written, so... Thank you very much for any help!!
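For what it is worth, the symptom described above is consistent with two common JavaScript scoping issues rather than anything datepicker-specific: "var daysWithRecords" inside the constructor creates only a local variable, not an instance property (getDays later assigns the property explicitly via obj.daysWithRecords = data), and passing obj.checkAvailability directly as the beforeShowDay callback means jQuery UI invokes it with a different "this" (the element the datepicker is bound to), so this.daysWithRecords is not reachable there. A minimal sketch of the pattern that avoids both problems; the code is deliberately stripped down and is not the plugin's actual implementation:

    ;(function ($) {
        // Hypothetical, reduced version of the plugin to illustrate the fix.
        var Igtomw = function () {
            this.daysWithRecords = [];   // instance property, not a constructor-local "var"
            this.initCalendar();
        };

        Igtomw.prototype.initCalendar = function () {
            var obj = this;              // capture the plugin instance
            $('<div/>').appendTo('body').datepicker({
                // Wrap the callback so "this" inside checkAvailability is the
                // plugin instance instead of the element datepicker binds to.
                beforeShowDay: function (date) {
                    return obj.checkAvailability(date);
                }
            });
        };

        Igtomw.prototype.checkAvailability = function (date) {
            var checkdate = $.datepicker.formatDate('yy-mm-dd', date);
            // this.daysWithRecords is visible here because "this" is the instance
            if ($.inArray(checkdate, this.daysWithRecords) !== -1) {
                return [true, 'available'];
            }
            return [false, ''];
        };
    })(jQuery);

Equivalently, $.proxy(obj.checkAvailability, obj) could be passed as the beforeShowDay callback to preserve the instance context.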


  • Reading data in from a file

    - by user667430
    Hi Here is link if you want to download application: Simple banking app Text file with data to read I am trying to create a simple banking application that reads in data from a text file. So far i have managed to read in all the customers which there are 20 of them. However when reading in the accounts and transactions stuff it only reads in 20 but there is alot more in the text file. Here is what i have so far. I think it has something to do with the nested for loop in the getNextCustomer method. using System; using System.Collections; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.IO; using System.Linq; using System.Text; using System.Windows.Forms; namespace e_SOFT_Banking { public partial class Form1 : Form { public static ArrayList bankDetails = new ArrayList(); public static ArrayList accDetails = new ArrayList(); public static ArrayList tranDetails = new ArrayList(); string inputDataFile = @"C:\e-SOFT_v1.txt"; const int numCustItems = 14; const int numAccItems = 7; const int numTransItems = 5; public Form1() { InitializeComponent(); setUpBank(); } private void btnShowData_Click_1(object sender, EventArgs e) { showListsOfCust(); } private void setUpBank() { readData(); } private void showListsOfCust() { listBox1.Items.Clear(); foreach (Customer c in bankDetails) listBox1.Items.Add(c.getCustomerNumber() + " " + c.getCustomerTitle() + " " + c.getFirstName() + " " + c.getInitials() + " " + c.getSurname() + " " + c.getDateOfBirth() + " " + c.getHouseNameNumber() + " " + c.getStreetName() + " " + c.getArea() + " " + c.getCityTown() + " " + c.getCounty() + " " + c.getPostcode() + " " + c.getPassword() + " " + c.getNumberAccounts()); foreach (Account a in accDetails) listBox1.Items.Add(a.getAccSort() + " " + a.getAccNumber() + " " + a.getAccNick() + " " + a.getAccDate() + " " + a.getAccCurBal() + " " + a.getAccOverDraft() + " " + a.getAccNumTrans()); foreach (Transaction t in tranDetails) listBox1.Items.Add(t.getDate() + " " + t.getType() + " " + t.getDescription() + " " + t.getAmount() + " " + t.getBalAfter()); } private void readData() { StreamReader readerIn = null; Transaction curTrans; Account curAcc; Customer curCust; bool anyMoreData; string[] customerData = new string[numCustItems]; string[] accountData = new string[numAccItems]; string[] transactionData = new string[numTransItems]; if (readOK(inputDataFile, ref readerIn)) { anyMoreData = getNextCustomer(readerIn, customerData, accountData, transactionData); while (anyMoreData == true) { curCust = new Customer(customerData[0], customerData[1], customerData[2], customerData[3], customerData[4], customerData[5], customerData[6], customerData[7], customerData[8], customerData[9], customerData[10], customerData[11], customerData[12], customerData[13]); curAcc = new Account(accountData[0], accountData[1], accountData[2], accountData[3], accountData[4], accountData[5], accountData[6]); curTrans = new Transaction(transactionData[0], transactionData[1], transactionData[2], transactionData[3], transactionData[4]); bankDetails.Add(curCust); accDetails.Add(curAcc); tranDetails.Add(curTrans); anyMoreData = getNextCustomer(readerIn, customerData, accountData, transactionData); } if (readerIn != null) readerIn.Close(); } } private bool getNextCustomer(StreamReader inNext, string[] nextCustomerData, string[] nextAccountData, string[] nextTransactionData) { string nextLine; int numCItems = nextCustomerData.Count(); int numAItems = nextAccountData.Count(); int numTItems = 
nextTransactionData.Count(); for (int i = 0; i < numCItems; i++) { nextLine = inNext.ReadLine(); if (nextLine != null) { nextCustomerData[i] = nextLine; if (i == 13) { int cItems = Convert.ToInt32(nextCustomerData[13]); for (int q = 0; q < cItems; q++) { for (int a = 0; a < numAItems; a++) { nextLine = inNext.ReadLine(); nextAccountData[a] = nextLine; if (a == 6) { int aItems = Convert.ToInt32(nextAccountData[6]); for (int w = 0; w < aItems; w++) { for (int t = 0; t < numTItems; t++) { nextLine = inNext.ReadLine(); nextTransactionData[t] = nextLine; } } } } } } } else return false; } return true; } private bool readOK(string readFile, ref StreamReader readerIn) { try { readerIn = new StreamReader(readFile); return true; } catch (FileNotFoundException notFound) { MessageBox.Show("ERROR Opening file (when reading data in)" + " - File could not be found.\n" + notFound.Message); return false; } catch (Exception e) { MessageBox.Show("ERROR Opening File (when reading data in)" + "- Operation failed.\n" + e.Message); return false; } } } } I also have three classes one for customers, one for accounts and one for transactions, which follow in that order. using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace e_SOFT_Banking { class Customer { private string customerNumber; private string customerTitle; private string firstName; private string initials; //not required - defaults to null private string surname; private string dateOfBirth; private string houseNameNumber; private string streetName; private string area; //not required - defaults to null private string cityTown; private string county; private string postcode; private string password; private int numberAccounts; public Customer(string theCustomerNumber, string theCustomerTitle, string theFirstName, string theInitials, string theSurname, string theDateOfBirth, string theHouseNameNumber, string theStreetName, string theArea, string theCityTown, string theCounty, string thePostcode, string thePassword, string theNumberAccounts) { customerNumber = theCustomerNumber; customerTitle = theCustomerTitle; firstName = theFirstName; initials = theInitials; surname = theSurname; dateOfBirth = theDateOfBirth; houseNameNumber = theHouseNameNumber; streetName = theStreetName; area = theArea; cityTown = theCityTown; county = theCounty; postcode = thePostcode; password = thePassword; setNumberAccounts(theNumberAccounts); } public string getCustomerNumber() { return customerNumber; } public string getCustomerTitle() { return customerTitle; } public string getFirstName() { return firstName; } public string getInitials() { return initials; } public string getSurname() { return surname; } public string getDateOfBirth() { return dateOfBirth; } public string getHouseNameNumber() { return houseNameNumber; } public string getStreetName() { return streetName; } public string getArea() { return area; } public string getCityTown() { return cityTown; } public string getCounty() { return county; } public string getPostcode() { return postcode; } public string getPassword() { return password; } public int getNumberAccounts() { return numberAccounts; } public void setCustomerNumber(string inCustomerNumber) { customerNumber = inCustomerNumber; } public void setCustomerTitle(string inCustomerTitle) { customerTitle = inCustomerTitle; } public void setFirstName(string inFirstName) { firstName = inFirstName; } public void setInitials(string inInitials) { initials = inInitials; } public void setSurname(string inSurname) { surname = 
inSurname; } public void setDateOfBirth(string inDateOfBirth) { dateOfBirth = inDateOfBirth; } public void setHouseNameNumber(string inHouseNameNumber) { houseNameNumber = inHouseNameNumber; } public void setStreetName(string inStreetName) { streetName = inStreetName; } public void setArea(string inArea) { area = inArea; } public void setCityTown(string inCityTown) { cityTown = inCityTown; } public void setCounty(string inCounty) { county = inCounty; } public void setPostcode(string inPostcode) { postcode = inPostcode; } public void setPassword(string inPassword) { password = inPassword; } public void setNumberAccounts(string inNumberAccounts) { try { numberAccounts = Convert.ToInt32(inNumberAccounts); } catch (FormatException invalidInput) { System.Windows.Forms.MessageBox.Show("ERROR" + invalidInput.Message + "Please enter a valid number"); } } } } Accounts: using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace e_SOFT_Banking { class Account { private string accSort; private Int64 accNumber; private string accNick; private string accDate; //not required - defaults to null private double accCurBal; private double accOverDraft; private int accNumTrans; public Account(string theAccSort, string theAccNumber, string theAccNick, string theAccDate, string theAccCurBal, string theAccOverDraft, string theAccNumTrans) { accSort = theAccSort; setAccNumber(theAccNumber); accNick = theAccNick; accDate = theAccDate; setAccCurBal(theAccCurBal); setAccOverDraft(theAccOverDraft); setAccNumTrans(theAccNumTrans); } public string getAccSort() { return accSort; } public long getAccNumber() { return accNumber; } public string getAccNick() { return accNick; } public string getAccDate() { return accDate; } public double getAccCurBal() { return accCurBal; } public double getAccOverDraft() { return accOverDraft; } public int getAccNumTrans() { return accNumTrans; } public void setAccSort(string inAccSort) { accSort = inAccSort; } public void setAccNumber(string inAccNumber) { try { accNumber = Convert.ToInt64(inAccNumber); } catch (FormatException invalidInput) { System.Windows.Forms.MessageBox.Show("ERROR" + invalidInput.Message + "Please enter a valid number"); } } public void setAccNick(string inAccNick) { accNick = inAccNick; } public void setAccDate(string inAccDate) { accDate = inAccDate; } public void setAccCurBal(string inAccCurBal) { try { accCurBal = Convert.ToDouble(inAccCurBal); } catch (FormatException invalidInput) { System.Windows.Forms.MessageBox.Show("ERROR" + invalidInput.Message + "Please enter a valid number"); } } public void setAccOverDraft(string inAccOverDraft) { try { accOverDraft = Convert.ToDouble(inAccOverDraft); } catch (FormatException invalidInput) { System.Windows.Forms.MessageBox.Show("ERROR" + invalidInput.Message + "Please enter a valid number"); } } public void setAccNumTrans(string inAccNumTrans) { try { accNumTrans = Convert.ToInt32(inAccNumTrans); } catch (FormatException invalidInput) { System.Windows.Forms.MessageBox.Show("ERROR" + invalidInput.Message + "Please enter a valid number"); } } } } Transactions: using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace e_SOFT_Banking { class Transaction { private string date; private string type; private string description; private double amount; //not required - defaults to null private double balAfter; public Transaction(string theDate, string theType, string theDescription, string theAmount, string theBalAfter) { date = theDate; type = theType; 
description = theDescription; setAmount(theAmount); setBalAfter(theBalAfter); } public string getDate() { return date; } public string getType() { return type; } public string getDescription() { return description; } public double getAmount() { return amount; } public double getBalAfter() { return balAfter; } public void setDate(string inDate) { date = inDate; } public void setType(string inType) { type = inType; } public void setDescription(string inDescription) { description = inDescription; } public void setAmount(string inAmount) { try { amount = Convert.ToDouble(inAmount); } catch (FormatException invalidInput) { System.Windows.Forms.MessageBox.Show("ERROR" + invalidInput.Message + "Please enter a valid number"); } } public void setBalAfter(string inBalAfter) { try { balAfter = Convert.ToDouble(inBalAfter); } catch (FormatException invalidInput) { System.Windows.Forms.MessageBox.Show("ERROR" + invalidInput.Message + "Please enter a valid number"); } } } } Any help greatly appreciated.
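The file layout implied by the constants above (14 customer lines, then 7 lines per account where the last line is the number of transactions, then 5 lines per transaction) means each customer can own several accounts and each account several transactions, so keeping only one account array and one transaction array per customer will drop most of the records. A rough sketch of how the nested reads are usually accumulated into the lists might look like this; the method name ReadNextCustomerBlock is illustrative and not taken from the project, and the sketch assumes a well-formed file:

    // Hypothetical sketch (not the project's code): read one customer block and
    // all of its accounts and transactions, appending every record to the
    // shared lists instead of keeping only the last one read.
    private bool ReadNextCustomerBlock(StreamReader reader)
    {
        string[] customerData = new string[numCustItems];
        for (int i = 0; i < numCustItems; i++)
        {
            string line = reader.ReadLine();
            if (line == null) return false;                   // no more customers
            customerData[i] = line;
        }
        bankDetails.Add(new Customer(customerData[0], customerData[1], customerData[2],
            customerData[3], customerData[4], customerData[5], customerData[6],
            customerData[7], customerData[8], customerData[9], customerData[10],
            customerData[11], customerData[12], customerData[13]));

        int accountCount = Convert.ToInt32(customerData[13]); // last customer field
        for (int a = 0; a < accountCount; a++)
        {
            string[] accountData = new string[numAccItems];
            for (int i = 0; i < numAccItems; i++)
                accountData[i] = reader.ReadLine();
            accDetails.Add(new Account(accountData[0], accountData[1], accountData[2],
                accountData[3], accountData[4], accountData[5], accountData[6]));

            int transCount = Convert.ToInt32(accountData[6]); // last account field
            for (int t = 0; t < transCount; t++)
            {
                string[] transData = new string[numTransItems];
                for (int i = 0; i < numTransItems; i++)
                    transData[i] = reader.ReadLine();
                tranDetails.Add(new Transaction(transData[0], transData[1],
                    transData[2], transData[3], transData[4]));
            }
        }
        return true;
    }

Called from readData's while loop in place of getNextCustomer, something along these lines keeps the temporary arrays local and adds each account and transaction to its list as soon as it is read.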


  • Creating STA COM compatible ASP.NET Applications

    - by Rick Strahl
    When building ASP.NET applications that interface with old school COM objects like those created with VB6 or Visual FoxPro (MTDLL), it's extremely important that the threads serving requests use Single Threaded Apartment (STA) threading. STA is a COM built-in technology that allows essentially single threaded components to operate reliably in a multi-threaded environment. STAs guarantee that COM objects instantiated on a specific thread stay on that specific thread, and any access to a COM object from another thread is automatically marshaled to the STA thread. The end effect is that you can have multiple threads, but a COM object instance lives on a fixed, never changing thread.

    ASP.NET by default uses MTA (multi-threaded apartment) threads, which are truly free-spinning threads that pay no heed to COM object marshaling. This is vastly more efficient than STA threading, which has a bit of overhead in determining whether it's OK to run code on a given thread or whether some sort of thread/COM marshaling needs to occur. MTA COM components can be very efficient, but STA COM components in a multi-threaded environment always tend to have a fair amount of overhead.

    It's amazing how much COM Interop I still see today, so while it seems really old school to be talking about this topic, it's actually quite apropos for me as I have many customers using legacy COM systems that need to interface with other .NET applications. In this post I'm consolidating some of the hacks I've used to integrate with various ASP.NET technologies when using STA COM components.

    STA in ASP.NET

    Support for STA threading in the ASP.NET framework is fairly limited. Specifically, only the original ASP.NET WebForms technology supports STA threading directly via its STA Page Handler implementation, or what you might know as ASPCOMPAT mode. For WebForms, running STA components is as easy as specifying the ASPCOMPAT attribute in the @Page tag:

    <%@ Page Language="C#" AspCompat="true" %>

    which runs the page in STA mode. Removing it runs in MTA mode. Simple.

    Unfortunately, none of the other ASP.NET technologies built on top of the core ASP.NET engine support STA natively. So if you want to use STA COM components in MVC or with classic ASMX Web Services, there's no automatic way like the ASPCOMPAT keyword available.

    So what happens when you run an STA COM component in an MTA application? In low volume environments - nothing much will happen. The COM objects will appear to work just fine as there are no simultaneous thread interactions, and the COM component will happily run on a single thread or multiple single threads one at a time. So for testing, running components in MTA environments may appear to work just fine. However, as load increases and threads get re-used by ASP.NET, COM objects will end up getting created on multiple different threads. This can result in crashes or hangs, or data corruption in the STA components, which store their state in thread local storage on the STA thread. If threads overlap, this global store can easily get corrupted, which in turn causes problems. STA ensures that any COM object instance loaded always stays on the same thread it was instantiated on.

    What about COM+?

    COM+ is supposed to address the problem of STA in MTA applications by providing an abstraction with its own thread pool manager for COM objects. It steps into the COM instantiation pipeline and hands out COM instances from its own internally maintained STA thread pool.
This guarantees that the COM instantiation threads are STA threads if using STA components. COM+ works, but in my experience the technology is very, very slow for STA components. It adds a ton of overhead and reduces COM performance noticably in load tests in IIS. COM+ can make sense in some situations but for Web apps with STA components it falls short. In addition there's also the need to ensure that COM+ is set up and configured on the target machine and the fact that components have to be registered in COM+. COM+ also keeps components up at all times, so if a component needs to be replaced the COM+ package needs to be unloaded (same is true for IIS hosted components but it's more common to manage that). COM+ is an option for well established components, but native STA support tends to provide better performance and more consistent usability, IMHO. STA for non supporting ASP.NET Technologies As mentioned above only WebForms supports STA natively. However, by utilizing the WebForms ASP.NET Page handler internally it's actually possible to trick various other ASP.NET technologies and let them work with STA components. This is ugly but I've used each of these in various applications and I've had minimal problems making them work with FoxPro STA COM components which is about as dififcult as it gets for COM Interop in .NET. In this post I summarize several STA workarounds that enable you to use STA threading with these ASP.NET Technologies: ASMX Web Services ASP.NET MVC WCF Web Services ASP.NET Web API ASMX Web Services I start with classic ASP.NET ASMX Web Services because it's the easiest mechanism that allows for STA modification. It also clearly demonstrates how the WebForms STA Page Handler is the key technology to enable the various other solutions to create STA components. Essentially the way this works is to override the WebForms Page class and hijack it's init functionality for processing requests. Here's what this looks like for Web Services:namespace FoxProAspNet { public class WebServiceStaHandler : System.Web.UI.Page, IHttpAsyncHandler { protected override void OnInit(EventArgs e) { IHttpHandler handler = new WebServiceHandlerFactory().GetHandler( this.Context, this.Context.Request.HttpMethod, this.Context.Request.FilePath, this.Context.Request.PhysicalPath); handler.ProcessRequest(this.Context); this.Context.ApplicationInstance.CompleteRequest(); } public IAsyncResult BeginProcessRequest( HttpContext context, AsyncCallback cb, object extraData) { return this.AspCompatBeginProcessRequest(context, cb, extraData); } public void EndProcessRequest(IAsyncResult result) { this.AspCompatEndProcessRequest(result); } } public class AspCompatWebServiceStaHandlerWithSessionState : WebServiceStaHandler, IRequiresSessionState { } } This class overrides the ASP.NET WebForms Page class which has a little known AspCompatBeginProcessRequest() and AspCompatEndProcessRequest() method that is responsible for providing the WebForms ASPCOMPAT functionality. These methods handle routing requests to STA threads. Note there are two classes - one that includes session state and one that does not. If you plan on using ASP.NET Session state use the latter class, otherwise stick to the former. This maps to the EnableSessionState page setting in WebForms. This class simply hooks into this functionality by overriding the BeginProcessRequest and EndProcessRequest methods and always forcing it into the AspCompat methods. 
The way this works is that BeginProcessRequest() fires first to set up the threads and starts intializing the handler. As part of that process the OnInit() method is fired which is now already running on an STA thread. The code then creates an instance of the actual WebService handler factory and calls its ProcessRequest method to start executing which generates the Web Service result. Immediately after ProcessRequest the request is stopped with Application.CompletRequest() which ensures that the rest of the Page handler logic doesn't fire. This means that even though the fairly heavy Page class is overridden here, it doesn't end up executing any of its internal processing which makes this code fairly efficient. In a nutshell, we're highjacking the Page HttpHandler and forcing it to process the WebService process handler in the context of the AspCompat handler behavior. Hooking up the Handler Because the above is an HttpHandler implementation you need to hook up the custom handler and replace the standard ASMX handler. To do this you need to modify the web.config file (here for IIS 7 and IIS Express): <configuration> <system.webServer> <handlers> <remove name="WebServiceHandlerFactory-Integrated-4.0" /> <add name="Asmx STA Web Service Handler" path="*.asmx" verb="*" type="FoxProAspNet.WebServiceStaHandler" precondition="integrated"/> </handlers> </system.webServer> </configuration> (Note: The name for the WebServiceHandlerFactory-Integrated-4.0 might be slightly different depending on your server version. Check the IIS Handler configuration in the IIS Management Console for the exact name or simply remove the handler from the list there which will propagate to your web.config). For IIS 5 & 6 (Windows XP/2003) or the Visual Studio Web Server use:<configuration> <system.web> <httpHandlers> <remove path="*.asmx" verb="*" /> <add path="*.asmx" verb="*" type="FoxProAspNet.WebServiceStaHandler" /> </httpHandlers> </system.web></configuration> To test, create a new ASMX Web Service and create a method like this: [WebService(Namespace = "http://foxaspnet.org/")] [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)] public class FoxWebService : System.Web.Services.WebService { [WebMethod] public string HelloWorld() { return "Hello World. Threading mode is: " + System.Threading.Thread.CurrentThread.GetApartmentState(); } } Run this before you put in the web.config configuration changes and you should get: Hello World. Threading mode is: MTA Then put the handler mapping into Web.config and you should see: Hello World. Threading mode is: STA And you're on your way to using STA COM components. It's a hack but it works well! I've used this with several high volume Web Service installations with various customers and it's been fast and reliable. ASP.NET MVC ASP.NET MVC has quickly become the most popular ASP.NET technology, replacing WebForms for creating HTML output. MVC is more complex to get started with, but once you understand the basic structure of how requests flow through the MVC pipeline it's easy to use and amazingly flexible in manipulating HTML requests. In addition, MVC has great support for non-HTML output sources like JSON and XML, making it an excellent choice for AJAX requests without any additional tools. Unlike WebForms ASP.NET MVC doesn't support STA threads natively and so some trickery is needed to make it work with STA threads as well. MVC gets its handler implementation through custom route handlers using ASP.NET's built in routing semantics. 
To work in an STA handler requires working in the Page Handler as part of the Route Handler implementation. As with the Web Service handler the first step is to create a custom HttpHandler that can instantiate an MVC request pipeline properly:public class MvcStaThreadHttpAsyncHandler : Page, IHttpAsyncHandler, IRequiresSessionState { private RequestContext _requestContext; public MvcStaThreadHttpAsyncHandler(RequestContext requestContext) { if (requestContext == null) throw new ArgumentNullException("requestContext"); _requestContext = requestContext; } public IAsyncResult BeginProcessRequest(HttpContext context, AsyncCallback cb, object extraData) { return this.AspCompatBeginProcessRequest(context, cb, extraData); } protected override void OnInit(EventArgs e) { var controllerName = _requestContext.RouteData.GetRequiredString("controller"); var controllerFactory = ControllerBuilder.Current.GetControllerFactory(); var controller = controllerFactory.CreateController(_requestContext, controllerName); if (controller == null) throw new InvalidOperationException("Could not find controller: " + controllerName); try { controller.Execute(_requestContext); } finally { controllerFactory.ReleaseController(controller); } this.Context.ApplicationInstance.CompleteRequest(); } public void EndProcessRequest(IAsyncResult result) { this.AspCompatEndProcessRequest(result); } public override void ProcessRequest(HttpContext httpContext) { throw new NotSupportedException("STAThreadRouteHandler does not support ProcessRequest called (only BeginProcessRequest)"); } } This handler code figures out which controller to load and then executes the controller. MVC internally provides the information needed to route to the appropriate method and pass the right parameters. Like the Web Service handler the logic occurs in the OnInit() and performs all the processing in that part of the request. Next, we need a RouteHandler that can actually pick up this handler. Unlike the Web Service handler where we simply registered the handler, MVC requires a RouteHandler to pick up the handler. RouteHandlers look at the URL's path and based on that decide on what handler to invoke. The route handler is pretty simple - all it does is load our custom handler: public class MvcStaThreadRouteHandler : IRouteHandler { public IHttpHandler GetHttpHandler(RequestContext requestContext) { if (requestContext == null) throw new ArgumentNullException("requestContext"); return new MvcStaThreadHttpAsyncHandler(requestContext); } } At this point you can instantiate this route handler and force STA requests to MVC by specifying a route. 
The following sets up the ASP.NET Default Route:Route mvcRoute = new Route("{controller}/{action}/{id}", new RouteValueDictionary( new { controller = "Home", action = "Index", id = UrlParameter.Optional }), new MvcStaThreadRouteHandler()); RouteTable.Routes.Add(mvcRoute);   To make this code a little easier to work with and mimic the behavior of the routes.MapRoute() functionality extension method that MVC provides, here is an extension method for MapMvcStaRoute(): public static class RouteCollectionExtensions { public static void MapMvcStaRoute(this RouteCollection routeTable, string name, string url, object defaults = null) { Route mvcRoute = new Route(url, new RouteValueDictionary(defaults), new MvcStaThreadRouteHandler()); RouteTable.Routes.Add(mvcRoute); } } With this the syntax to add  route becomes a little easier and matches the MapRoute() method:RouteTable.Routes.MapMvcStaRoute( name: "Default", url: "{controller}/{action}/{id}", defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional } ); The nice thing about this route handler, STA Handler and extension method is that it's fully self contained. You can put all three into a single class file and stick it into your Web app, and then simply call MapMvcStaRoute() and it just works. Easy! To see whether this works create an MVC controller like this: public class ThreadTestController : Controller { public string ThreadingMode() { return Thread.CurrentThread.GetApartmentState().ToString(); } } Try this test both with only the MapRoute() hookup in the RouteConfiguration in which case you should get MTA as the value. Then change the MapRoute() call to MapMvcStaRoute() leaving all the parameters the same and re-run the request. You now should see STA as the result. You're on your way using STA COM components reliably in ASP.NET MVC. WCF Web Services running through IIS WCF Web Services provide a more robust and wider range of services for Web Services. You can use WCF over HTTP, TCP, and Pipes, and WCF services support WS* secure services. There are many features in WCF that go way beyond what ASMX can do. But it's also a bit more complex than ASMX. As a basic rule if you need to serve straight SOAP Services over HTTP I 'd recommend sticking with the simpler ASMX services especially if COM is involved. If you need WS* support or want to serve data over non-HTTP protocols then WCF makes more sense. WCF is not my forte but I found a solution from Scott Seely on his blog that describes the progress and that seems to work well. I'm copying his code below so this STA information is all in one place and quickly explain. Scott's code basically works by creating a custom OperationBehavior which can be specified via an [STAOperation] attribute on every method. Using his attribute you end up with a class (or Interface if you separate the contract and class) that looks like this: [ServiceContract] public class WcfService { [OperationContract] public string HelloWorldMta() { return Thread.CurrentThread.GetApartmentState().ToString(); } // Make sure you use this custom STAOperationBehavior // attribute to force STA operation of service methods [STAOperationBehavior] [OperationContract] public string HelloWorldSta() { return Thread.CurrentThread.GetApartmentState().ToString(); } } Pretty straight forward. The latter method returns STA while the former returns MTA. To make STA work every method needs to be marked up. The implementation consists of the attribute and OperationInvoker implementation. 
Here are the two classes required to make this work from Scott's post:public class STAOperationBehaviorAttribute : Attribute, IOperationBehavior { public void AddBindingParameters(OperationDescription operationDescription, System.ServiceModel.Channels.BindingParameterCollection bindingParameters) { } public void ApplyClientBehavior(OperationDescription operationDescription, System.ServiceModel.Dispatcher.ClientOperation clientOperation) { // If this is applied on the client, well, it just doesn’t make sense. // Don’t throw in case this attribute was applied on the contract // instead of the implementation. } public void ApplyDispatchBehavior(OperationDescription operationDescription, System.ServiceModel.Dispatcher.DispatchOperation dispatchOperation) { // Change the IOperationInvoker for this operation. dispatchOperation.Invoker = new STAOperationInvoker(dispatchOperation.Invoker); } public void Validate(OperationDescription operationDescription) { if (operationDescription.SyncMethod == null) { throw new InvalidOperationException("The STAOperationBehaviorAttribute " + "only works for synchronous method invocations."); } } } public class STAOperationInvoker : IOperationInvoker { IOperationInvoker _innerInvoker; public STAOperationInvoker(IOperationInvoker invoker) { _innerInvoker = invoker; } public object[] AllocateInputs() { return _innerInvoker.AllocateInputs(); } public object Invoke(object instance, object[] inputs, out object[] outputs) { // Create a new, STA thread object[] staOutputs = null; object retval = null; Thread thread = new Thread( delegate() { retval = _innerInvoker.Invoke(instance, inputs, out staOutputs); }); thread.SetApartmentState(ApartmentState.STA); thread.Start(); thread.Join(); outputs = staOutputs; return retval; } public IAsyncResult InvokeBegin(object instance, object[] inputs, AsyncCallback callback, object state) { // We don’t handle async… throw new NotImplementedException(); } public object InvokeEnd(object instance, out object[] outputs, IAsyncResult result) { // We don’t handle async… throw new NotImplementedException(); } public bool IsSynchronous { get { return true; } } } The key in this setup is the Invoker and the Invoke method which creates a new thread and then fires the request on this new thread. Because this approach creates a new thread for every request it's not super efficient. There's a bunch of overhead involved in creating the thread and throwing it away after each thread, but it'll work for low volume requests and insure each thread runs in STA mode. If better performance is required it would be useful to create a custom thread manager that can pool a number of STA threads and hand off threads as needed rather than creating new threads on every request. If your Web Service needs are simple and you need only to serve standard SOAP 1.x requests, I would recommend sticking with ASMX services. It's easier to set up and work with and for STA component use it'll be significantly better performing since ASP.NET manages the STA thread pool for you rather than firing new threads for each request. One nice thing about Scotts code is though that it works in any WCF environment including self hosting. It has no dependency on ASP.NET or WebForms for that matter. STA - If you must STA components are a  pain in the ass and thankfully there isn't too much stuff out there anymore that requires it. But when you need it and you need to access STA functionality from .NET at least there are a few options available to make it happen. 
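Before moving on, picking up on the thread-pooling suggestion above: the following is a minimal, hedged sketch (my own illustration, not part of Scott's code) of what a small pool of reusable STA threads could look like. The class name and sizing are assumptions; the idea is simply to keep a few long-lived STA threads alive and hand work off to them instead of creating a new thread per request.

using System;
using System.Collections.Concurrent;
using System.Threading;

// Hypothetical helper: a fixed-size pool of long-lived STA threads.
// Work items are queued and executed on one of the pooled threads,
// avoiding the per-request thread creation cost of the invoker above.
public class StaThreadPool : IDisposable
{
    private readonly BlockingCollection<Action> _workItems = new BlockingCollection<Action>();
    private readonly Thread[] _threads;

    public StaThreadPool(int threadCount)
    {
        _threads = new Thread[threadCount];
        for (int i = 0; i < threadCount; i++)
        {
            _threads[i] = new Thread(WorkLoop) { IsBackground = true };
            _threads[i].SetApartmentState(ApartmentState.STA);
            _threads[i].Start();
        }
    }

    private void WorkLoop()
    {
        // Each pooled thread pulls queued work items until the pool is disposed.
        foreach (var work in _workItems.GetConsumingEnumerable())
            work();
    }

    // Queues a delegate and blocks the caller until an STA thread has run it.
    public void Invoke(Action action)
    {
        using (var done = new ManualResetEventSlim())
        {
            Exception error = null;
            _workItems.Add(() =>
            {
                try { action(); }
                catch (Exception ex) { error = ex; }
                finally { done.Set(); }
            });
            done.Wait();
            if (error != null) throw error;
        }
    }

    public void Dispose()
    {
        _workItems.CompleteAdding();
    }
}

With something like this in place, STAOperationInvoker.Invoke() could call pool.Invoke(...) instead of spinning up a fresh thread, trading a little queuing latency for much lower per-request overhead. Treat it as a starting point rather than production code.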
Each of these solutions is a bit hacky, but they work - I've used all of them in production with good results with FoxPro components. I hope compiling all of these in one place makes STA consumption a little bit easier. I feel your pain :-) Resources: Download STA Handler Code Examples; Scott Seely's original STA WCF OperationBehavior Article. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in FoxPro, ASP.NET, .NET, COM

    Read the article

  • Tweaking a few URL validation settings on ASP.NET v4.0

    - by Carlyle Dacosta
ASP.NET has a few default settings for URLs out of the box. These can be configured quite easily in the web.config file within the <system.web>/<httpRuntime> configuration section. Some of these are: <httpRuntime maxUrlLength=”<number here>” />. This number should be an integer value (defaults to 260 characters). The value must be greater than or equal to zero, though obviously small values will lead to an unusable website. This attribute gates the length of the Url without the query string. <httpRuntime maxQueryStringLength=”<number here>” />. This number should be an integer value (defaults to 2048 characters). The value must be greater than or equal to zero, though obviously small values will lead to an unusable website. <httpRuntime requestPathInvalidCharacters=”list of characters you need included in ASP.NET’s validation checks” />. By default the characters are “<,>,*,%,&,:,\,?”. However, one can easily change this by modifying web.config. Remember, these characters can be specified in a variety of formats. For example, I want the character ‘!’ to be included in ASP.NET’s URL validation logic, so I set the following: <httpRuntime requestPathInvalidCharacters=”<,>,*,%,&,:,\,?,!” />. A character could also be specified in its XML-encoded form (‘&lt;’ would mean the ‘<’ sign). I could specify the ‘!’ in its XML-encoded Unicode format, such as requestPathInvalidCharacters=”<,>,*,%,&,:,\,?,&#x0021;”, or in its URL-encoded Unicode form, the “<,>,*,%,&,:,\,?,%u0021” format. The following settings can be applied at Root Web.config level, App Web.config level, Folder level or within a location tag: <location path="some path here"> <system.web> <httpRuntime maxUrlLength="" maxQueryStringLength="" requestPathInvalidCharacters="" /> </system.web> </location> If any of the above settings fail request validation, an Http 400 “Bad Request” HttpException is thrown. These can be easily handled in the Application_Error handler in Global.asax. Also, a new attribute in <httpRuntime /> called “relaxedUrlToFileSystemMapping” has been added with a default of false: <httpRuntime … relaxedUrlToFileSystemMapping="true|false" />. When the relaxedUrlToFileSystemMapping attribute is set to false, inbound Urls still need to be valid NTFS file paths. For example, Urls (sans query string) need to be less than 260 characters; no path segment within a Url can use old-style DOS device names (LPT1, COM1, etc.); Urls must be valid Windows file paths. A URL like “http://digg.com/http://cnn.com” should work with this attribute set to true (of course a few characters will need to be unblocked by removing them from requestPathInvalidCharacters="" above). Managed configuration for non-NTFS-compliant Urls is determined from the first valid configuration path found when walking up the path segments of the Url. 
For example, if the request Url is "/foo/bar/baz/<blah>data</blah>", and there is a web.config in the "/foo/bar" directory, then the managed configuration for the request comes from merging the configuration hierarchy to include the web.config from "/foo/bar". The value of the public property HttpRequest.PhysicalPath is set to [physical file path of the application root] + "REQUEST_URL_IS_NOT_A_VALID_FILESYSTEM_PATH". For example, given a request Url like "/foo/bar/baz/<blah>data</blah>", where the application root is "/foo/bar" and the physical file path for that root is "c:\inetpub\wwwroot\foo\bar", then PhysicalPath would be "c:\inetpub\wwwroot\foo\bar\REQUEST_URL_IS_NOT_A_VALID_FILESYSTEM_PATH". Carl Dacosta, ASP.NET QA Team
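As a quick illustration of the 400 handling mentioned earlier in this post, here is a hedged sketch (not from the original article) of what an Application_Error handler in Global.asax.cs might look like; the response text is an assumption.

using System;
using System.Web;

// Global.asax.cs - the application class derives from HttpApplication.
public class Global : HttpApplication
{
    protected void Application_Error(object sender, EventArgs e)
    {
        // Request validation failures surface as an HttpException with status code 400.
        var ex = Server.GetLastError() as HttpException;
        if (ex != null && ex.GetHttpCode() == 400)
        {
            Server.ClearError();
            Response.StatusCode = 400;
            Response.Write("The requested URL is not valid."); // hypothetical friendly message
            Response.End();
        }
    }
}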

    Read the article

  • Compress Large Video Files with DivX / Xvid and AutoGK

    - by DigitalGeekery
Have you ever recorded home video on a camcorder only to find the video size is enormous? What if you wanted to share a video clip on YouTube or another video sharing site, but the file size was bigger than the maximum upload size? Today we’ll look at a way to compress certain video files, such as MPEG and AVI, with Auto Gordian Knot (AutoGK). AutoGK is a free application that runs on Windows. It supports Mpeg1, Mpeg2, Transport Streams, Vobs, and virtually any codec used for an .AVI file. AutoGK will accept as input the following file types: MPG, MPEG, VOB, VRO, M2V, DAT, IFO, TS, TP, TRP, M2T, and AVI. Files are output as .AVI files and are converted using the DivX or XviD codecs. Installing and Using AutoGK Download and install AutoGK (link below). Open AutoGK. You’ll need to navigate a few wizard screens, but you can just accept the defaults. Choose your video file by clicking on the folder to the right of the Input file text box. Browse for and select your video file and click “Open.” For this example, we’ll be working with an .AVI file that’s 167MB in size. The output file is copied into the same directory as the input file by default, but you can change this if you choose. If the input file is also .AVI, AutoGK will append an _agk to the output file so that the original is not overwritten. Next, you’ll see any audio tracks listed. You can unselect the check box if you’d like to remove the audio track. You can choose one of the Predefined size options… Or, select a Custom size in MB or Target Quality in percentage. For our example, we’ll be compressing our 167MB file to 35MB. Click on Advanced Settings. Here you can choose your codec, if you have a preference, as well as output resolution and output audio. If you’d like to use the DivX codec, you’ll need to download and install it separately (see link below). Typically you’ll want to keep the defaults. Click “OK.” Now you’re ready to add your file conversion job to the Job queue. Click Add Job to add it to the queue. You can add multiple file conversions to the job queue and convert them in one batch. Click Start to begin the conversion process. The process will begin. You’ll be able to see the progress in the Log window on the bottom left. When the conversion is complete you’ll see a “Job finished” message and the total time in the log window. Check your output file to see its compressed size. Test your video just to make sure the output quality is satisfactory. Note: Conversion times can vary greatly depending on the size of the file and your computer hardware. Files that are several GBs in size may take several hours to compress. AutoGK is no longer being actively developed but is still a wonderful DivX/XviD conversion tool. It can also be used to compress and convert non-copy protected DVDs. 
Downloads: AutoGordianKnot, DivX (optional)

    Read the article

  • The Interaction between Three-Tier Client/Server Model and Three-Tier Application Architecture Model

The three-tier client/server model is a network architectural approach currently used in modern networking. This approach divides a network into three distinct components. Three-Tier Client/Server Model Components Client Component Server Component Database Component The Client Component of the network typically represents any device on the network. A basic example of this would be a computer or another network/web-enabled device connected to a network. Network clients request resources on the network, and are usually equipped with a user interface for the presentation of the data returned from the Server Component. This process is done through the use of various software clients, and an example of this can be seen through the use of a web browser client. The web browser requests information from the Server Component located on the network and then renders the results for the user to process. The Server Components of the network return data based on specific client requests back to the requesting client. Server Components also inherit the attributes of a Client Component in that they are a device on the network and that they can also request information from other Server Components. However, what differentiates a Client Component from a Server Component is that a Server Component responds to requests from devices on the network. An example of a Server Component can be seen in a web server. A web server listens for new requests and then interprets the request, processes the web pages, and then returns the processed data back to the web browser client so that it may render the data for the user to interpret. The Database Component of the network returns unprocessed data from databases or other resources. This component also inherits attributes from the Server Component in that it is a device on a network, it can request information from other server components and database components, and it also listens for new requests so that it can return data when needed. The three-tier client/server model is very similar to the three-tier application architecture model, and in fact the layers can be mapped to one another. Three-Tier Application Architecture Model Presentation Layer/Logic Business Layer/Logic Data Layer/Logic The Presentation Layer, including its underlying logic, is very similar to the Client Component of the three-tiered model. The Presentation Layer focuses on interpreting the data returned by the Business Layer as well as presenting the data back to the user. Both the Presentation Layer and the Client Component focus primarily on the user and their experience. This allows for segments of the Business Layer to be distributable and interchangeable because the Presentation Layer is not directly integrated with the Business Layer. The Presentation Layer does not care where the data comes from as long as it is in the proper format. This allows for the Presentation Layer and Business Layer to be stored on one or more different servers so that they can provide higher availability to clients requesting data. A good example of this is a web site that uses load balancing. When a web site decides to take on the task of load balancing it must obtain a network device that sits in front of one or more machines in order to distribute requests across multiple servers. When a user comes in through the load balanced device they are redirected to a specific server based on a few factors. 
Common Load Balancing Factors Current Server Availability Current Server Response Time Current Server Priority The Business Layer and corresponding logic are business rules applied to data prior to it being sent to the Presentation Layer. These rules are used to manipulate the data coming from the Data Access Layer, in addition to validating any data prior to being stored in the Data Access Layer. A good example of this would be when a user is trying to create multiple accounts under one email address. The Business Layer logic can prevent duplicate accounts by enforcing a unique email for every new account before the data is even stored in the Data Access Layer (a short code sketch of this rule appears at the end of this article). The Server Component can be directly tied to this layer in that the server typically stores and processes the Business Layer logic before the result is returned to the end user via the Presentation Layer. In addition the Server Component can also run automated processes through the Business Layer on the data in the Data Access Layer so that additional business analysis can be derived from the data that has already been collected. The Data Layer and its logic are responsible for storing information so that it can be easily retrieved. Typically in most modern applications data is stored in a database management system; however, data can also be in the form of files stored on a file server. In addition a database can take on one of several forms. Common Database Formats XML File Pipe Delimited File Tab Delimited File Comma Delimited File (CSV) Plain Text File Microsoft Access Microsoft SQL Server MySql Oracle Sybase The Database Component of the networking model can be directly tied to the Data Layer because this is where the Data Layer obtains the data to return back to the Business Layer. The Database Component basically allows for a place on the network to store data for future use. This enables applications to save data when they can and then quickly recall the saved data as needed so that the application does not have to worry about storing the data in memory. This prevents overhead that could be created when an application must retain all data in memory. As you can see, the Three-Tier Client/Server Networking Model and the Three-Tiered Application Architecture Model rely very heavily on one another to function, especially if different aspects of an application are distributed across an entire network. The use of various servers and database servers is wonderful when an application has a need to distribute work across the network. Network Components and Application Layers Interaction Database components will store all data needed for the Data Access Layer to manipulate and return to the Business Layer Server Component executes the Business Layer that manipulates data so that it can be returned to the Presentation Layer Client Component hosts the Presentation Layer that interprets the data and presents it to the user
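To make the duplicate-email business rule above concrete, here is a small illustrative sketch; the class and interface names are hypothetical and not taken from the article.

using System;

// Illustrative Business Layer rule: reject duplicate email addresses before
// the Data Layer ever stores the new account.
public class AccountService
{
    private readonly IAccountRepository _repository;   // Data Layer abstraction

    public AccountService(IAccountRepository repository)
    {
        _repository = repository;
    }

    public void CreateAccount(string email, string displayName)
    {
        // Business rule: one account per email address.
        if (_repository.EmailExists(email))
            throw new InvalidOperationException("An account already exists for " + email);

        _repository.Save(new Account { Email = email, DisplayName = displayName });
    }
}

public interface IAccountRepository
{
    bool EmailExists(string email);
    void Save(Account account);
}

public class Account
{
    public string Email { get; set; }
    public string DisplayName { get; set; }
}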

    Read the article

  • Windows Azure: Import/Export Hard Drives, VM ACLs, Web Sockets, Remote Debugging, Continuous Delivery, New Relic, Billing Alerts and More

    - by ScottGu
    Two weeks ago we released a giant set of improvements to Windows Azure, as well as a significant update of the Windows Azure SDK. This morning we released another massive set of enhancements to Windows Azure.  Today’s new capabilities include: Storage: Import/Export Hard Disk Drives to your Storage Accounts HDInsight: General Availability of our Hadoop Service in the cloud Virtual Machines: New VM Gallery, ACL support for VIPs Web Sites: WebSocket and Remote Debugging Support Notification Hubs: Segmented customer push notification support with tag expressions TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services Developer Analytics: New Relic support for Web Sites + Mobile Services Service Bus: Support for partitioned queues and topics Billing: New Billing Alert Service that sends emails notifications when your bill hits a threshold you define All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them. Storage: Import/Export Hard Disk Drives to Windows Azure I am excited to announce the preview of our new Windows Azure Import/Export Service! The Windows Azure Import/Export Service enables you to move large amounts of on-premises data into and out of your Windows Azure Storage accounts. It does this by enabling you to securely ship hard disk drives directly to our Windows Azure data centers. Once we receive the drives we’ll automatically transfer the data to or from your Windows Azure Storage account.  This enables you to import or export massive amounts of data more quickly and cost effectively (and not be constrained by available network bandwidth). Encrypted Transport Our Import/Export service provides built-in support for BitLocker disk encryption – which enables you to securely encrypt data on the hard drives before you send it, and not have to worry about it being compromised even if the disk is lost/stolen in transit (since the content on the transported hard drives is completely encrypted and you are the only one who has the key to it).  The drive preparation tool we are shipping today makes setting up bitlocker encryption on these hard drives easy. How to Import/Export your first Hard Drive of Data You can read our Getting Started Guide to learn more about how to begin using the import/export service.  You can create import and export jobs via the Windows Azure Management Portal as well as programmatically using our Server Management APIs. It is really easy to create a new import or export job using the Windows Azure Management Portal.  Simply navigate to a Windows Azure storage account, and then click the new Import/Export tab now available within it (note: if you don’t have this tab make sure to sign-up for the Import/Export preview): Then click the “Create Import Job” or “Create Export Job” commands at the bottom of it.  This will launch a wizard that easily walks you through the steps required: For more comprehensive information about Import/Export, refer to Windows Azure Storage team blog.  You can also send questions and comments to the [email protected] email address. We think you’ll find this new service makes it much easier to move data into and out of Windows Azure, and it will dramatically cut down the network bandwidth required when working on large data migration projects.  We hope you like it. HDInsight: 100% Compatible Hadoop Service in the Cloud Last week we announced the general availability release of Windows Azure HDInsight. 
HDInsight is a 100% compatible Hadoop service that allows you to easily provision and manage Hadoop clusters for big data processing in Windows Azure.  This release is now live in production, backed by an enterprise SLA, supported 24x7 by Microsoft Support, and is ready to use for production scenarios. HDInsight allows you to use Apache Hadoop tools, such as Pig and Hive, to process large amounts of data in Windows Azure Blob Storage. Because data is stored in Windows Azure Blob Storage, you can choose to dynamically create Hadoop clusters only when you need them, and then shut them down when they are no longer required (since you pay only for the time the Hadoop cluster instances are running this provides a super cost effective way to use them).  You can create Hadoop clusters using either the Windows Azure Management Portal (see below) or using our PowerShell and Cross Platform Command line tools: The import/export hard drive support that came out today is a perfect companion service to use with HDInsight – the combination allows you to easily ingest, process and optionally export a limitless amount of data.  We’ve also integrated HDInsight with our Business Intelligence tools, so users can leverage familiar tools like Excel in order to analyze the output of jobs.  You can find out more about how to get started with HDInsight here. Virtual Machines: VM Gallery Enhancements Today’s update of Windows Azure brings with it a new Virtual Machine gallery that you can use to create new VMs in the cloud.  You can launch the gallery by doing New->Compute->Virtual Machine->From Gallery within the Windows Azure Management Portal: The new Virtual Machine Gallery includes some nice enhancements that make it even easier to use: Search: You can now easily search and filter images using the search box in the top-right of the dialog.  For example, simply type “SQL” and we’ll filter to show those images in the gallery that contain that substring. Category Tree-view: Each month we add more built-in VM images to the gallery.  You can continue to browse these using the “All” view within the VM Gallery – or now quickly filter them using the category tree-view on the left-hand side of the dialog.  For example, by selecting “Oracle” in the tree-view you can now quickly filter to see the official Oracle supplied images. MSDN and Supported checkboxes: With today’s update we are also introducing filters that makes it easy to filter out types of images that you may not be interested in. The first checkbox is MSDN: using this filter you can exclude any image that is not part of the Windows Azure benefits for MSDN subscribers (which have highly discounted pricing - you can learn more about the MSDN pricing here). The second checkbox is Supported: this filter will exclude any image that contains prerelease software, so you can feel confident that the software you choose to deploy is fully supported by Windows Azure and our partners. Sort options: We sort gallery images by what we think customers are most interested in, but sometimes you might want to sort using different views. So we’re providing some additional sort options, like “Newest,” to customize the image list for what suits you best. Pricing information: We now provide additional pricing information about images and options on how to cost effectively run them directly within the VM Gallery. The above improvements make it even easier to use the VM Gallery and quickly create launch and run Virtual Machines in the cloud. 
Virtual Machines: ACL Support for VIPs A few months ago we exposed the ability to configure Access Control Lists (ACLs) for Virtual Machines using Windows PowerShell cmdlets and our Service Management API. With today’s release, you can now configure VM ACLs using the Windows Azure Management Portal as well. You can now do this by clicking the new Manage ACL command in the Endpoints tab of a virtual machine instance: This will enable you to configure an ordered list of permit and deny rules to scope the traffic that can access your VM’s network endpoints. For example, if you were on a virtual network, you could limit RDP access to a Windows Azure virtual machine to only a few computers attached to your enterprise. Or if you weren’t on a virtual network you could alternatively limit traffic from public IPs that can access your workloads: Here is the default behaviors for ACLs in Windows Azure: By default (i.e. no rules specified), all traffic is permitted. When using only Permit rules, all other traffic is denied. When using only Deny rules, all other traffic is permitted. When there is a combination of Permit and Deny rules, all other traffic is denied. Lastly, remember that configuring endpoints does not automatically configure them within the VM if it also has firewall rules enabled at the OS level.  So if you create an endpoint using the Windows Azure Management Portal, Windows PowerShell, or REST API, be sure to also configure your guest VM firewall appropriately as well. Web Sites: Web Sockets Support With today’s release you can now use Web Sockets with Windows Azure Web Sites.  This feature enables you to easily integrate real-time communication scenarios within your web based applications, and is available at no extra charge (it even works with the free tier).  Higher level programming libraries like SignalR and socket.io are also now supported with it. You can enable Web Sockets support on a web site by navigating to the Configure tab of a Web Site, and by toggling Web Sockets support to “on”: Once Web Sockets is enabled you can start to integrate some really cool scenarios into your web applications.  Check out the new SignalR documentation hub on www.asp.net to learn more about some of the awesome scenarios you can do with it. Web Sites: Remote Debugging Support The Windows Azure SDK 2.2 we released two weeks ago introduced remote debugging support for Windows Azure Cloud Services. With today’s Windows Azure release we are extending this remote debugging support to also work with Windows Azure Web Sites. With live, remote debugging support inside of Visual Studio, you are able to have more visibility than ever before into how your code is operating live in Windows Azure. It is now super easy to attach the debugger and quickly see what is going on with your application in the cloud. Remote Debugging of a Windows Azure Web Site using VS 2013 Enabling the remote debugging of a Windows Azure Web Site using VS 2013 is really easy.  Start by opening up your web application’s project within Visual Studio. Then navigate to the “Server Explorer” tab within Visual Studio, and click on the deployed web-site you want to debug that is running within Windows Azure using the Windows Azure->Web Sites node in the Server Explorer.  Then right-click and choose the “Attach Debugger” option on it: When you do this Visual Studio will remotely attach the debugger to the Web Site running within Windows Azure.  
The debugger will then stop the web site’s execution when it hits any break points that you have set within your web application’s project inside Visual Studio.  For example, below I set a breakpoint on the “ViewBag.Message” assignment statement within the HomeController of the standard ASP.NET MVC project template.  When I hit refresh on the “About” page of the web site within the browser, the breakpoint was triggered and I am now able to debug the app remotely using Visual Studio: Note above how we can debug variables (including autos/watchlist/etc), as well as use the Immediate and Command Windows. In the debug session above I used the Immediate Window to explore some of the request object state, as well as to dynamically change the ViewBag.Message property.  When we click the the “Continue” button (or press F5) the app will continue execution and the Web Site will render the content back to the browser.  This makes it super easy to debug web apps remotely. Tips for Better Debugging To get the best experience while debugging, we recommend publishing your site using the Debug configuration within Visual Studio’s Web Publish dialog. This will ensure that debug symbol information is uploaded to the Web Site which will enable a richer debug experience within Visual Studio.  You can find this option on the Web Publish dialog on the Settings tab: When you ultimately deploy/run the application in production we recommend using the “Release” configuration setting – the release configuration is memory optimized and will provide the best production performance.  To learn more about diagnosing and debugging Windows Azure Web Sites read our new Troubleshooting Windows Azure Web Sites in Visual Studio guide. Notification Hubs: Segmented Push Notification support with tag expressions In August we announced the General Availability of Windows Azure Notification Hubs - a powerful Mobile Push Notifications service that makes it easy to send high volume push notifications with low latency from any mobile app back-end.  Notification hubs can be used with any mobile app back-end (including ones built using our Mobile Services capability) and can also be used with back-ends that run in the cloud as well as on-premises. Beginning with the initial release, Notification Hubs allowed developers to send personalized push notifications to both individual users as well as groups of users by interest, by associating their devices with tags representing the logical target of the notification. For example, by registering all devices of customers interested in a favorite MLB team with a corresponding tag, it is possible to broadcast one message to millions of Boston Red Sox fans and another message to millions of St. Louis Cardinals fans with a single API call respectively. New support for using tag expressions to enable advanced customer segmentation With today’s release we are adding support for even more advanced customer targeting.  You can now identify customers that you want to send push notifications to by defining rich tag expressions. With tag expressions, you can now not only broadcast notifications to Boston Red Sox fans, but take that segmenting a step farther and reach more granular segments. This opens up a variety of scenarios, for example: Offers based on multiple preferences—e.g. send a game day vegetarian special to users tagged as both a Boston Red Sox fan AND a vegetarian Push content to multiple segments in a single message—e.g. 
rain delay information only to users who are tagged as either a Boston Red Sox fan OR a St. Louis Cardinal fan Avoid presenting subsets of a segment with irrelevant content—e.g. season ticket availability reminder to users who are tagged as a Boston Red Sox fan but NOT also a season ticket holder To illustrate with code, consider a restaurant chain app that sends an offer related to a Red Sox vs Cardinals game for users in Boston. Devices can be tagged by your app with location tags (e.g. “Loc:Boston”) and interest tags (e.g. “Follows:RedSox”, “Follows:Cardinals”), and then a notification can be sent by your back-end to “(Follows:RedSox || Follows:Cardinals) && Loc:Boston” in order to deliver an offer to all devices in Boston that follow either the RedSox or the Cardinals. This can be done directly in your server backend send logic using the code below: var notification = new WindowsNotification(messagePayload); hub.SendNotificationAsync(notification, "(Follows:RedSox || Follows:Cardinals) && Loc:Boston"); In your expressions you can use all Boolean operators: AND (&&), OR (||), and NOT (!).  Some other cool use cases for tag expressions that are now supported include: Social: To “all my group except me” - group:id && !user:id Events: Touchdown event is sent to everybody following either team or any of the players involved in the action: Followteam:A || Followteam:B || followplayer:1 || followplayer:2 … Hours: Send notifications at specific times. E.g. Tag devices with time zone and when it is 12pm in Seattle send to: GMT8 && follows:thaifood Versions and platforms: Send a reminder to people still using your first version for Android - version:1.0 && platform:Android For help on getting started with Notification Hubs, visit the Notification Hub documentation center.  Then download the latest NuGet package (or use the Notification Hubs REST APIs directly) to start sending push notifications using tag expressions.  They are really powerful and enable a bunch of great new scenarios. TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services With today’s Windows Azure release we are making it really easy to enable continuous delivery support with Windows Azure and Team Foundation Services.  Team Foundation Services is a cloud based offering from Microsoft that provides integrated source control (with both TFS and Git support), build server, test execution, collaboration tools, and agile planning support.  It makes it really easy to setup a team project (complete with automated builds and test runners) in the cloud, and it has really rich integration with Visual Studio. With today’s Windows Azure release it is now really easy to enable continuous delivery support with both TFS and Git based repositories hosted using Team Foundation Services.  This enables a workflow where when code is checked in, built successfully on an automated build server, and all tests pass on it – I can automatically have the app deployed on Windows Azure with zero manual intervention or work required. The below screen-shots demonstrate how to quickly setup a continuous delivery workflow to Windows Azure with a Git-based ASP.NET MVC project hosted using Team Foundation Services. Enabling Continuous Delivery to Windows Azure with Team Foundation Services The project I’m going to enable continuous delivery with is a simple ASP.NET MVC project whose source code I’m hosting using Team Foundation Services.  
I did this by creating a “SimpleContinuousDeploymentTest” repository there using Git – and then used the new built-in Git tooling support within Visual Studio 2013 to push the source code to it.  Below is a screen-shot of the Git repository hosted within Team Foundation Services: I can access the repository within Visual Studio 2013 and easily make commits with it (as well as branch, merge and do other tasks).  Using VS 2013 I can also setup automated builds to take place in the cloud using Team Foundation Services every time someone checks in code to the repository: The cool thing about this is that I don’t have to buy or rent my own build server – Team Foundation Services automatically maintains its own build server farm and can automatically queue up a build for me (for free) every time someone checks in code using the above settings.  This build server (and automated testing) support now works with both TFS and Git based source control repositories. Connecting a Team Foundation Services project to Windows Azure Once I have a source repository hosted in Team Foundation Services with Automated Builds and Testing set up, I can then go even further and set it up so that it will be automatically deployed to Windows Azure when a source code commit is made to the repository (assuming the Build + Tests pass).  Enabling this is now really easy.  To set this up with a Windows Azure Web Site simply use the New->Compute->Web Site->Custom Create command inside the Windows Azure Management Portal.  This will create a dialog like below.  I gave the web site a name and then made sure the “Publish from source control” checkbox was selected: When we click next we’ll be prompted for the location of the source repository.  We’ll select “Team Foundation Services”: Once we do this we’ll be prompted for our Team Foundation Services account that our source repository is hosted under (in this case my TFS account is “scottguthrie”): When we click the “Authorize Now” button we’ll be prompted to give Windows Azure permissions to connect to the Team Foundation Services account.  Once we do this we’ll be prompted to pick the source repository we want to connect to.  Starting with today’s Windows Azure release you can now connect to both TFS and Git based source repositories.  This new support allows me to connect to the “SimpleContinuousDeploymentTest” respository we created earlier: Clicking the finish button will then create the Web Site with the continuous delivery hooks setup with Team Foundation Services.  Now every time someone pushes source control to the repository in Team Foundation Services, it will kick off an automated build, run all of the unit tests in the solution , and if they pass the app will be automatically deployed to our Web Site in Windows Azure.  You can monitor the history and status of these automated deployments using the Deployments tab within the Web Site: This enables a really slick continuous delivery workflow, and enables you to build and deploy apps in a really nice way. Developer Analytics: New Relic support for Web Sites + Mobile Services With today’s Windows Azure release we are making it really easy to enable Developer Analytics and Monitoring support with both Windows Azure Web Site and Windows Azure Mobile Services.  We are partnering with New Relic, who provide a great dev analytics and app performance monitoring offering, to enable this - and we have updated the Windows Azure Management Portal to make it really easy to configure. 
Enabling New Relic with a Windows Azure Web Site Enabling New Relic support with a Windows Azure Web Site is now really easy.  Simply navigate to the Configure tab of a Web Site and scroll down to the “developer analytics” section that is now within it: Clicking the “add-on” button will display some additional UI.  If you don’t already have a New Relic subscription, you can click the “view windows azure store” button to obtain a subscription (note: New Relic has a perpetually free tier so you can enable it even without paying anything): Clicking the “view windows azure store” button will launch the integrated Windows Azure Store experience we have within the Windows Azure Management Portal.  You can use this to browse from a variety of great add-on services – including New Relic: Select “New Relic” within the dialog above, then click the next button, and you’ll be able to choose which type of New Relic subscription you wish to purchase.  For this demo we’ll simply select the “Free Standard Version” – which does not cost anything and can be used forever:  Once we’ve signed-up for our New Relic subscription and added it to our Windows Azure account, we can go back to the Web Site’s configuration tab and choose to use the New Relic add-on with our Windows Azure Web Site.  We can do this by simply selecting it from the “add-on” dropdown (it is automatically populated within it once we have a New Relic subscription in our account): Clicking the “Save” button will then cause the Windows Azure Management Portal to automatically populate all of the needed New Relic configuration settings to our Web Site: Deploying the New Relic Agent as part of a Web Site The final step to enable developer analytics using New Relic is to add the New Relic runtime agent to our web app.  We can do this within Visual Studio by right-clicking on our web project and selecting the “Manage NuGet Packages” context menu: This will bring up the NuGet package manager.  You can search for “New Relic” within it to find the New Relic agent.  Note that there is both a 32-bit and 64-bit edition of it – make sure to install the version that matches how your Web Site is running within Windows Azure (note: you can configure your Web Site to run in either 32-bit or 64-bit mode using the Web Site’s “Configuration” tab within the Windows Azure Management Portal): Once we install the NuGet package we are all set to go.  We’ll simply re-publish the web site again to Windows Azure and New Relic will now automatically start monitoring the application Monitoring a Web Site using New Relic Now that the application has developer analytics support with New Relic enabled, we can launch the New Relic monitoring portal to start monitoring the health of it.  We can do this by clicking on the “Add Ons” tab in the left-hand side of the Windows Azure Management Portal.  Then select the New Relic add-on we signed-up for within it.  The Windows Azure Management Portal will provide some default information about the add-on when we do this.  Clicking the “Manage” button in the tray at the bottom will launch a new browser tab and single-sign us into the New Relic monitoring portal associated with our account: When we do this a new browser tab will launch with the New Relic admin tool loaded within it: We can now see insights into how our app is performing – without having to have written a single line of monitoring code.  
The New Relic service provides a ton of great built-in monitoring features allowing us to quickly see: Performance times (including browser rendering speed) for the overall site and individual pages.  You can optionally set alert thresholds to trigger if the speed does not meet a threshold you specify. Information about where in the world your customers are hitting the site from (and how performance varies by region) Details on the latency performance of external services your web apps are using (for example: SQL, Storage, Twitter, etc) Error information including call stack details for exceptions that have occurred at runtime SQL Server profiling information – including which queries executed against your database and what their performance was And a whole bunch more… The cool thing about New Relic is that you don’t need to write monitoring code within your application to get all of the above reports (plus a lot more).  The New Relic agent automatically enables the CLR profiler within applications and automatically captures the information necessary to identify these.  This makes it super easy to get started and immediately have a rich developer analytics view for your solutions with very little effort. If you haven’t tried New Relic out yet with Windows Azure I recommend you do so – I think you’ll find it helps you build even better cloud applications.  Following the above steps will help you get started and deliver you a really good application monitoring solution in only minutes. Service Bus: Support for partitioned queues and topics With today’s release, we are enabling support within Service Bus for partitioned queues and topics. Enabling partitioning enables you to achieve a higher message throughput and better availability from your queues and topics. Higher message throughput is achieved by implementing multiple message brokers for each partitioned queue and topic.  The  multiple messaging stores will also provide higher availability. You can create a partitioned queue or topic by simply checking the Enable Partitioning option in the custom create wizard for a Queue or Topic: Read this article to learn more about partitioned queues and topics and how to take advantage of them today. Billing: New Billing Alert Service Today’s Windows Azure update enables a new Billing Alert Service Preview that enables you to get proactive email notifications when your Windows Azure bill goes above a certain monetary threshold that you configure.  This makes it easier to manage your bill and avoid potential surprises at the end of the month. With the Billing Alert Service Preview, you can now create email alerts to monitor and manage your monetary credits or your current bill total.  To set up an alert first sign-up for the free Billing Alert Service Preview.  Then visit the account management page, click on a subscription you have setup, and then navigate to the new Alerts tab that is available: The alerts tab allows you to setup email alerts that will be sent automatically once a certain threshold is hit.  For example, by clicking the “add alert” button above I can setup a rule to send myself email anytime my Windows Azure bill goes above $100 for the month: The Billing Alert Service will evolve to support additional aspects of your bill as well as support multiple forms of alerts such as SMS.  Try out the new Billing Alert Service Preview today and give us feedback. 
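One code-level footnote on the Service Bus improvements above: partitioned queues can also be created programmatically rather than through the portal checkbox. The sketch below is a hedged example based on the Service Bus .NET client of this timeframe (NamespaceManager / QueueDescription); the queue name and connection string are placeholders, not values from this post.

using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

class PartitionedQueueSetup
{
    static void Main()
    {
        // Placeholder connection string - use the one shown for your namespace in the portal.
        var connectionString = "Endpoint=sb://yournamespace.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=YOUR_KEY";
        var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

        var description = new QueueDescription("orders")
        {
            EnablePartitioning = true // spreads the queue across multiple message brokers and stores
        };

        if (!namespaceManager.QueueExists(description.Path))
            namespaceManager.CreateQueue(description);
    }
}

TopicDescription exposes a corresponding EnablePartitioning property if you want to do the same for topics.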
Summary Today’s Windows Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier. If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Installing SOA Suite 11.1.1.3

    - by James Taylor
    With the release of Oracle SOA Suite 11.1.1.3 last week (28 April 2010) I thought I would attempt to implement a complete SOA Environment with SOA Suite, BPM and OSB on the WLS infrastructure. One major point of difference with the 11.1.1.3 is that is is released as a point release so you must have 11.1.1.2 installed first, then upgrade to 11.1.1.3. This post is performing the upgrade on Linux, if upgrading on windows you will need to substitute the directories and files accordingly. This post assumes that you have SOA Suite 11.1.1.2 installed already. 1. Download 11.1.1.3 software from the following site: http://www.oracle.com/technology/software/products/middleware/htdocs/fmw_11_download.html WLS 11.1.1.3   RCU 11.1.1.3 SOA Suite 11.1.1.3 OSB 11.1.1.3 Copy files to a staging area. For the purpose of this document the staging area is: /u01/stage  2. Shutdown your existing SOA Suite 11.1.1.2 environment 3. Execute the WLS 11.1.1.3 install from the stage directory. wls1033_linux32.bin 4. Choose the existing 11.1.1.2 Middleware Home 5. Ignore the security update notification 6. Accept the default products to be upgraded. 7. Upgrade of WebLogic has been completed   8. Upgrade the SOA Suite database schemas using the RCU utility. Unzip the RCU utility into the staging area and run the install ./u01/stage/rcuHome/bin/rcu 9. Drop the existing Repository and provide connection details 9. Install SOA Suite patch set 11.1.1.3. Unzip the SOA Suite patchset and execute the runInstaller with the following command. ./u01/stage/Disk1/runInstaller –jreLoc $MW_HOME/jdk160_18/jre 10. Choose the existing 11.1.1.2 middleware home 11. Start Install 12. Your SOA Suite Install should now be completed. Now we need to update the database repository. Login to SQLPlus as sysdba and execute the following command. SELECT version, status FROM schema_version_registry where owner = 'DEV_SOAINFRA'; the result should be similar to this: VERSION                        STATUS      OWNER ------------------------------ ----------- ------------------------------ 11.1.1.2.0                     VALID       DEV_SOAINFRA As you can see the version if these repositories are still at 11.1.1.2. 13. To upgrade these versions you have 2 options. 1 install via RCU, but this will remove any existing services. The second option is to use the Patch Set Assistant. From the $MW_HOME directory run the following command ./Oracle_SOA1/bin/psa -dbType Oracle -dbConnectString 'localhost:1521:xe' -dbaUserName sys -schemaUserName DEV_SOAINFRA 14. Install OSB. For the OSB install I did not install the IDE, or the Examples. run the runInstaller from the command line, unzip the OSB download to the stage area. ./u01/stage/osb/Disk1/runInstaller –jreLoc $MW_HOME/jdk160_18/jre 15. Choose Custom Install NOT to install the IDE (Eclipse) or Examples. 16. Unselect the, Examples and IDE checkboxes. 17. Accept the defaults and start installing. 18. Once the install has been completed configure the domain by running the Configuration Wizard. $MW_HOME/oracle_common/common/bin/config.sh You can create a new domain. In this document I will extend the soa_domain. 19. Select the following from the check list. I have selected the BPM Suite, this is unrelated to OSB but wanted it for my development purposes. To use this functionality additional license are required. 20. Configure the database connectivity. 21. Configure the database connectivity for the OSB schema. 22. 
Accept the defaults if installing on standard machine, if you require a cluster or advanced configuration then choose the option for you. 23. Upgrade is complete and OSB has been installed. Now you can start your environment.

    Read the article

  • Testing Mobile Websites with Adobe Shadow

    - by dwahlin
    It’s no surprise that mobile development is all the rage these days. With all of the new mobile devices being released nearly every day the ability for developers to deliver mobile solutions is more important than ever. Nearly every developer or company I’ve talked to recently about mobile development in training classes, at conferences, and on consulting projects says that they need to find a solution to get existing websites into the mobile space. Although there are several different frameworks out there that can be used such as jQuery Mobile, Sencha Touch, jQTouch, and others, how do you test how your site renders on iOS, Android, Blackberry, Windows Phone, and the variety of mobile form factors out there? Although there are different virtual solutions that can be used including Electric Plum for iOS, emulators, browser plugins for resizing the laptop/desktop browser, and more, at some point you need to test on as many physical devices as possible. This can be extremely challenging and quite time consuming though especially when you consider that you have to manually enter URLs into devices and click links on each one to drill-down into sites. Adobe Labs just released a product called Adobe Shadow (thanks to Kurt Sprinzl for letting me know about it) that significantly simplifies testing sites on physical devices, debugging problems you find, and even making live modifications to HTML and CSS content while viewing a site on the device to see how rendering changes. You can view a page in your laptop/desktop browser and have it automatically pushed to all of your devices without actually touching the device (a huge time saver). See a problem with a device? Locate it using the free Chrome extension, pull up inspection tools (based on the Chrome Developer tools) and make live changes through Chrome that appear on the respective device so that it’s easy to identify how problems can be resolved. I’ve been using Adobe Shadow and am very impressed with the amount of time saved and the different features that it offers. In the rest of the post I’ll walk through how to get it installed, get it started, and use it to view and debug pages.   Getting Adobe Shadow Installed The following steps can be used to get Adobe Shadow installed: 1. Download and install Adobe Shadow on your laptop/desktop 2. Install the Adobe Shadow extension for Chrome 3. Install the Adobe Shadow app on all of your devices (you can find it in various app stores) 4. Connect your devices to Wifi. Make sure they’re on the same network that your laptop/desktop machine is on   Getting Adobe Shadow Started Once Adobe Shadow is installed, you’ll need to get it running on your laptop/desktop and on all your mobile devices. The following steps walk through that process: 1. Start the Adobe Shadow application on your laptop/desktop 2. Start the Adobe Shadow app on each of your mobile devices 3. Locate the laptop/desktop name in the list that’s shown on each mobile device: 4. Select the laptop/desktop name and a passcode will be shown: 5. Open the Adobe Shadow Chrome extension on the laptop/desktop and enter the passcode for the given device: Using Adobe Shadow to View and Modify Pages Once Adobe Shadow is up and running on your laptop/desktop and on all of your mobile devices you can navigate to a page in Chrome on the laptop/desktop and it will automatically be pushed out to all connected mobile devices. If you have 5 mobile devices setup they’ll all navigate to the page displayed in Chrome (pretty awesome!). 
This makes it super easy to see how a given page looks on your iPad, Android device, etc. without having to touch the device itself. If you find a problem with a page on a device, you can select the device in the Chrome Adobe Shadow extension on your laptop/desktop and select the remote inspector icon (it’s the < > icon): This will pull up the Adobe Shadow remote debugging window, which contains the standard Chrome Developer tool tabs such as Elements, Resources, Network, etc. Click on the Elements tab to see the HTML rendered for the target device and then drill into the respective HTML content, CSS styles, etc. As HTML elements are selected in the Adobe Shadow debugging tool they’ll be highlighted on the device itself, just like they would if you were debugging a page directly in Chrome with the developer tools. Here’s an example from my Android device that shows how the page looks on the device as I select different HTML elements on the laptop/desktop: Conclusion I’m really impressed with what I’ve seen so far from Adobe Shadow. Controlling pages that display on devices directly from my laptop/desktop is a big time saver, and the ability to remotely see changes made through the Chrome Developer Tools (on my laptop/desktop) really pushes the tool over the top. If you’re developing mobile applications it’s definitely something to check out. It’s currently free to download and use. For additional details check out the video below:  

    Read the article

  • Hack Extension Files to Make Them Version-Compatible for Firefox

    - by Asian Angel
    A well known drawback in using Firefox is the problem with extension compatibility when a new major version is released. Whether it is for a new extension that you are trying for the first time or an old favorite we have a way to get those extensions working for you again. There are multiple reasons why you might want to choose this method to fix a non-compatible extension: You are uncomfortable with tweaking the “about:config” settings You prefer to maintain the original “about:config” settings in a pristine state and like having compatibility checking active You are looking to gain some “geek cred” Keep in mind that most extensions will work perfectly well with a new version of Firefox and simply have the “version compatibility number” problem. But once in a while there may be one that needs to have some work done on it by the extension’s author. The Problem Here is a perfect example of everyone’s least favorite “extension message”. This is the last thing that you need when all that you want is for your favorite extension (or a new one) to work on a fresh clean install. Note: This works nicely to “replace” non-compatible extensions already present in your browser if you are simply upgrading. Hacking the XPI File For this procedure you will need to manually download the extension to your hard-drive (right click on the extension’s “Install Button” and select “Save As”). Once you have done that you are ready to start hacking the extension. For our example we chose the “GCal Popup Extension”. The best thing to do is place the extension in a new folder (i.e. the Desktop or other convenient location) then unzip it just the same way that you would with any regular zip file. Once it is unzipped you will see the various folders and files that were in the “xpi file” (we had four files here but depending on the extension the number may vary). There is only one file that you need to focus on…the “install.rdf” file. Note: At this point you should move the original extension file to a different location (i.e. outside of the folder) so that it is no longer present. Open the file in “Notepad” so that you can change the number for the “maxVersion”. Here the number is listed as “3.5.*” but we needed to make it higher… Replacing the “5” with a “7” is all that we needed to do. Once you have entered your new “maxVersion” number save the file. At this point you will need to re-zip all of the files back into a single file. Make certain that you “create” a file with the “.zip file extension” otherwise this will not work. Once you have the new zip file created you will need to rename the entire file including the “file extension”. For our example we copied and pasted the original extension name. Once you have changed the name click outside of the “text area”. You will see a small message window like this asking for confirmation…click “Yes” to finish the process. Now your modified/updated extension is ready to install. Drag the extension into your browser to install it and watch that wonderful “Restart to complete the installation.” message appear. As soon as your browser starts you can check the “Add-ons Manager Window” and see the version compatibility numbers for the extension. Looking very very nice! And just like that your extension should be up and running without any problems. 
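The same edit can also be scripted if you are comfortable on the command line. A rough sketch, assuming the downloaded file is named gcal-popup.xpi and that the install.rdf line matches the 3.5.* value shown above (adjust the file name and version numbers to suit your extension):

    # An .xpi file is just a renamed zip archive
    mkdir extension && cd extension
    unzip ../gcal-popup.xpi

    # Bump the maxVersion entry in the install manifest
    sed -i 's|<em:maxVersion>3\.5\.\*</em:maxVersion>|<em:maxVersion>3.7.*</em:maxVersion>|' install.rdf

    # Repackage and rename back to .xpi, then drag it into Firefox as before
    zip -r ../gcal-popup-fixed.xpi *

Either way, the end result is the same install.rdf change described above.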
Conclusion If you are looking to try something new, gain some geek cred, or just want to keep your Firefox install as close to the original condition as possible this method should get those extensions working nicely for you again.

    Read the article

  • Upgrading Fusion Middleware 11.1.1.x to 11.1.1.4

    - by James Taylor
    This is a follow-on from my previous post where we upgraded 11.1.1.2 to 11.1.1.3. The instructions I provide here will work for Fusion Middleware 11.1.1.2 and 11.1.1.3 installations upgrading to 11.1.1.4. In this example I’m just upgrading SOA Suite on 64-bit OEL, but the steps will be the same; some of the downloads may be different based on your environment. To upgrade to 11.1.1.4 you need to have access to http://support.oracle.com as this is where the downloads reside. Oracle also provides 11.1.1.4 as a standalone download, so you can do a fresh install if required using OTN downloads (http://www.oracle.com/technetwork/indexes/downloads/index.html). The high-level steps to upgrade are as follows: Download the software. Shut down your SOA environment. Upgrade WLS to 11.1.1.4. Upgrade SOA Suite to 11.1.1.4. Upgrade OSB to 11.1.1.4. Upgrade the MDS schemas. Identify the downloads you require for your install. You will need the WebLogic Server upgrade and the additional product downloads. If you are using 64-bit, use the generic version. The downloads can be found at the following location - http://download.oracle.com/docs/html/E18749_01/download_readme.htm#BABDDIIC For the purpose of this post I downloaded the following patches: 11060985 – WLS Server Generic 11060960 – SOA Suite 11061005 – OSB Suite You must also download the 11.1.1.4 RCU tool to upgrade the DB schemas. It is available via OTN or Oracle Support; I have provided the link from Oracle Support. 11060956 – RCU Make sure you have set the Java executable in your PATH, e.g. export PATH=$JAVA_HOME/bin:$PATH Make sure your entire WebLogic environment has been shut down before performing the upgrade. Extract the WLS patch 11060985 to a temporary directory and start the installer: java -jar wls1034_upgrade_generic.jar Please note that if you are not running 64-bit, the upgrade executable will be a bin file which you can execute directly. Choose the right Oracle home for your WebLogic Server install. In the Register for Security Updates step you can enter your details or just click Next. If you do not enter details, confirm that you don’t want to receive these updates. Select the products you want to upgrade and select Next. It is recommended that you accept the defaults. Confirm the directories that will be upgraded. The upgrade of WLS has now been completed. Extract both of your SOA downloads to a temporary directory and run the installer found in Disk1: ./runInstaller -jreLoc /java/jdk1.6.0_20/jre Please note that the Java location and version may be different for your environment. Skip the Software Updates step. Ensure your system meets the prerequisites. Set the Oracle home for your SOA install. You will be asked to confirm that you want to upgrade; click Yes. Choose your application server. Since you are upgrading from 11.1.1.x you will be on WebLogic. Start the install. Once the upgrade of SOA Suite has completed, accept the defaults to finish. In my environment I have OSB installed, so I need to upgrade this next. If you don’t have OSB you can go straight to completing the DB schema updates at Step 24. Extract the OSB upgrade files to a temporary directory and execute the installer found in the Disk1 folder. 
./runInstaller -jreLoc /java/jdk1.6.0_20/jre Skip the software updates. Select the Oracle home for your environment. Accept the warning to continue the upgrade. Point to the location of your WebLogic Server installation. Install the OSB upgrade. Once the upgrade has completed, accept the defaults. Change directory to $MW_HOME/oracle_common/bin, where the Patch Set Assistant is installed. Execute the following command to update the MDS schema. Please note that for my examples I have the context set to DEV; yours may be different. This means that all my schemas are prefixed by DEV. ./psa -dbType Oracle -dbConnectString 'localhost:1521:xe' -dbaUserName sys -schemaUserName DEV_MDS You will be asked for the passwords for sys and the schema: Enter the database administrator password for "sys": Enter the schema password for schema user "DEV_MDS": Change directory to $MW_HOME/Oracle_SOA1/bin, where the Patch Set Assistant for SOA Suite is installed. Execute the following command to update the SOA and BAM schemas: ./psa -dbType Oracle -dbConnectString 'localhost:1521:xe' -dbaUserName sys -schemaUserName DEV_SOAINFRA To check that the schemas have been upgraded correctly, run the following SQL as sysdba:

SELECT owner, version, status FROM schema_version_registry;

OWNER           VERSION      STATUS
--------------- ------------ -------
DEV_MDS         11.1.1.4.0   VALID
DEV_SOAINFRA    11.1.1.4.0   VALID

Don’t stress if the versions are not all sitting at 11.1.1.4, as not all schemas need to be updated. The key ones are MDS and SOAINFRA.

    Read the article

  • Silverlight 5 &ndash; What&rsquo;s New? (Including Screenshots &amp; Code Snippets)

    - by mbcrump
    Silverlight 5 is coming next year (2011) and this blog post will tell you what you need to know before the beta ships. First, let me address people saying that it is dead after PDC 2010. I believe that it’s best to see what the market is doing, not the vendor. Below is a list of companies that are developing Silverlight 4 applications shown during the Silverlight Firestarter. Some of the companies have shipped and some haven’t. It’s just great to see the actual company names that are working on Silverlight instead of “people are developing for Silverlight”. The next thing that I wanted to point out was that HTML5, WPF and Silverlight can co-exist. In case you missed Scott Gutherie’s keynote, they actually had a slide with all three stacked together. This shows Microsoft will be heavily investing in each technology.  Even I, a Silverlight developer, am reading Pro HTML5. Microsoft said that according to the Silverlight Feature Voting site, 21k votes were entered. Microsoft has implemented about 70% of these votes in Silverlight 5. That is an amazing number, and I am crossing my fingers that Microsoft bundles Silverlight with Windows 8. Let’s get started… what’s new in Silverlight 5? I am going to show you some great application and actual code shown during the Firestarter event. Media Hardware Video Decode – Instead of using CPU to decode, we will offload it to GPU. This will allow netbooks, etc to play videos. Trickplay – Variable Speed Playback – Pitch Correction (If you speed up someone talking they won’t sound like a chipmunk). Power Management – Less battery when playing video. Screensavers will no longer kick in if watching a video. If you pause a video then screensaver will kick in. Remote Control Support – This will allow users to control playback functions like Pause, Rewind and Fastforward. IIS Media Services 4 has shipped and now supports Azure. Data Binding Layout Transitions – Just with a few lines of XAML you can create a really rich experience that is not using Storyboards or animations. RelativeSource FindAncestor – Ancestor RelativeSource bindings make it much easier for a DataTemplate to bind to a property on a container control. Custom Markup Extensions – Markup extensions allow code to be run at XAML parse time for both properties and event handlers. This is great for MVVM support. Changing Styles during Runtime By Binding in Style Setters – Changing Styles at runtime used to be a real pain in Silverlight 4, now it’s much easier. Binding in style setters allows bindings to reference other properties. XAML Debugging – Below you can see that we set a breakpoint in XAML. This shows us exactly what is going on with our binding.  WCF & RIA Services WS-Trust Support – Taken from Wikipedia: WS-Trust is a WS-* specification and OASIS standard that provides extensions to WS-Security, specifically dealing with the issuing, renewing, and validating of security tokens, as well as with ways to establish, assess the presence of, and broker trust relationships between participants in a secure message exchange. You can reduce network latency by using a background thread for networking. Supports Azure now.  Text and Printing Improved text clarity that enables better text rendering. Multi-column text flow, Character tracking and leading support, and full OpenType font support.  Includes a new Postscript Vector Printing API that provides control over what you print . Pivot functionality baked into Silverlight 5 SDK. 
Graphics Immediate mode graphics support that will enable you to use the GPU and 3D graphics support. Take a look at what was shown in the demos below. 1) 3D view of the Earth – not really a real-world application though. 2) A doctor’s portal. This demo really stood out for me as it shows what we can do with the 3D / GPU support. Out of Browser OOB applications can now create and manage child windows as shown in the screenshot below.  Trusted OOB applications can use P/Invoke to call Win32 APIs and unmanaged libraries.  Enterprise Group Policy Support allows enterprises to lock down (or open up) the sandbox capabilities of Silverlight 5 applications. In this demo, he tore the “notes” off of the application and it appeared in a new window. See the black arrow below. In this demo, he connected a USB device which fired off a local Win32 application that provided the data from the USB stick to Silverlight. Another demo showed a Silverlight 5 application exporting data right into Excel while running inside the browser. Testing They demoed Coded UI, which is available now in the Visual Studio Feature Pack 2. This will allow you to create automated tests without writing any code manually. Performance: Microsoft has worked to improve Silverlight startup time. Silverlight 5 provides 64-bit browser support.  Silverlight 5 also provides IE9 hardware acceleration.   I am looking forward to Silverlight 5 and I hope you are too. Thanks for reading and I hope you visit again soon.

    Read the article

  • Ctrl + C doesn't abort programs in terminal

    - by jake
    I changed the keyboard shortcut in the terminal so that Ctrl + C would copy text. I realized I can't abort a program I am running, since Ctrl + C used to be the abort command. I know that Ctrl + Shift + C works, but I want it switched back. Is there a way to revert the keyboard shortcuts to the real defaults from before I decided to mess with them? What is the abort command defined as in the keyboard shortcuts? Not a big problem if I can't, but it would be nice to know.
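One thing worth separating here: the terminal emulator's Copy shortcut and the tty's interrupt character are two different settings, and only the latter actually sends SIGINT to the running program. A quick sanity check (a minimal sketch; the exact stty output formatting varies slightly between systems) to confirm the tty itself still maps Ctrl + C to interrupt:

    # Show the terminal's control characters; you should see "intr = ^C"
    stty -a | grep -o 'intr = [^;]*'

    # If the interrupt character itself was changed at the tty level,
    # this restores the usual binding for the current session
    stty intr ^C

If stty already reports intr = ^C, the remaining fix is purely in the terminal emulator's own keyboard-shortcut settings, i.e. reassigning Copy back to Ctrl + Shift + C.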

    Read the article

  • Is there a expected set of button mappings games commonly use?

    - by Scott Chamberlain
    I am making a game that will support a XBox 360 controller but I would like to try and keep the default button mappings to be what is expected from a user's past history from playing other games. Is there a set of guidelines from Microsoft on what should map to what (Do you use A for fire or left trigger?), or has the gaming community picked up a common set of controls that is just not written anywhere, everyone just "knows" it (like WASD for movement). The hardest thing for me is I have walking movement, vehicle movement, and airplane movement. I plan on allowing custom configuration of each, but I don't know what to set as the defaults.

    Read the article

  • gksudo waits for a few seconds after execution

    - by phoenix
    I'm frequently using application launchers to run personal bash scripts, and thus I often use gksudo when I perform administrative tasks. The problem is that when I execute a command with gksudo, the execution is successful, but afterwards gksudo waits for about 5 seconds before it closes/finishes. In some scripts I use gksudo multiple times, resulting in execution times of a few minutes, even though everything should be done in a few seconds. Can anyone help me here? PS: here are my main /etc/sudoers settings (they might have something to do with my problem): Defaults env_reset,!tty_tickets,timestamp_timeout=2 phoenix ALL= NOPASSWD: /bin/mount,/bin/umount,/usr/sbin/firestarter,/usr/bin/truecrypt,/usr/bin/apt-get
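A useful first step when chasing a delay like this is to establish whether it comes from sudo itself (and therefore the sudoers configuration above) or from the gksu/gksudo wrapper. A minimal comparison, assuming a trivial command such as true is acceptable for the test:

    # Run each a couple of times so cached credentials are used and the
    # password prompt does not skew the timings
    time sudo true
    time gksudo true

If plain sudo returns immediately while gksudo consistently pauses for several seconds, the sudoers Defaults shown above are unlikely to be the cause and the wait is happening inside gksudo before it exits.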

    Read the article

  • Oracle Enterprise Manager Ops Center 12c : Enterprise Controller High Availability (EC HA)

    - by Anand Akela
    Contributed by Mahesh sharma, Oracle Enterprise Manager Ops Center team In Oracle Enterprise Manager Ops Center 12c we introduced a new feature to make the Enterprise Controllers highly available. With EC HA if the hardware crashes, or if the Enterprise Controller services and/or the remote database stop responding, then the enterprise services are immediately restarted on the other standby Enterprise Controller without administrative intervention. In today's post, I'll briefly describe EC HA, look at some of the prerequisites and then show some screen shots of how the Enterprise Controller is represented in the BUI. In my next post, I'll show you how to install the EC in a HA environment and some of the new commands. What is EC HA? Enterprise Controller High Availability (EC HA) provides an active/standby fail-over solution for two or more Ops Center Enterprise Controllers, all within an Oracle Clusterware framework. This allows EC resources to relocate to a standby if the hardware crashes, or if certain services fail. It is also possible to manually relocate the services if maintenance on the active EC is required. When the EC services are relocated to the standby, EC services are interrupted only for the period it takes for the EC services to stop on the active node and to start back up on a standby node. What are the prerequisites? To install EC in a HA framework an understanding of the prerequisites are required. There are many possibilities on how these prerequisites can be installed and configured - we will not discuss these in this post. However, best practices should be applied when installing and configuring, I would suggest that you get expert help if you are not familiar with them. Lets briefly look at each of these prerequisites in turn: Hardware : Servers are required to host the active and standby node(s). As the nodes will be in a clustered environment, they need to be the same model and configured identically. The nodes should have the same processor class, number of cores, memory, network cards, for example. Operating System : We can use Solaris 10 9/10 or higher, Solaris 11, OEL 5.5 or higher on x86 or Sparc Network : There are a number of requirements for network cards in clusterware, and cables should be networked identically on all the nodes. We must also consider IP allocation for public / private and Virtual IP's (VIP's). Storage : Shared storage will be required for the cluster voting disks, Oracle Cluster Register (OCR) and the EC's libraries. Clusterware : Oracle Clusterware version 11.2.0.3 or later is required. This can be downloaded from: http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html Remote Database : Oracle RDBMS 11.1.0.x or later is required. This can be downloaded from: http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html For detailed information on how to install EC HA , please read : http://docs.oracle.com/cd/E27363_01/doc.121/e25140/install_config-shared.htm#OPCSO242 For detailed instructions on installing Oracle Clusterware, please read : http://docs.oracle.com/cd/E11882_01/install.112/e17214/chklist.htm#BHACBGII For detailed instructions on installing the remote Oracle database have a read of: http://www.oracle.com/technetwork/database/enterprise-edition/documentation/index.html The schematic diagram below gives a visual view of how the prerequisites are connected. When a fail-over occurs the Enterprise Controller resources and the VIP are relocated to one of the standby nodes. 
The standby node then becomes active and all Ops Center services are resumed. Connecting to the Enterprise Controller from your favourite browser: let's presume we have installed and configured all the prerequisites, and installed Ops Center on the active and standby nodes. We can now connect to the active node from a browser, i.e. http://<active_node1>/; this will redirect us to the virtual IP address (VIP). The VIP is the IP address that moves with the Enterprise Controller resource. Once you log on and view the assets, you will see some new symbols; these indicate that the nodes are cluster members, with one being an active member and the other a standby member in this case. If you connect to the standby node, the browser will redirect you to a splash page, indicating that you have connected to the standby node. Hope you find this topic interesting. Next time I will post about how to install the Enterprise Controller in the HA framework.

    Read the article

  • Virtual host is not working in Ubuntu 14 VPS using XAMPP 1.8.3

    - by viral4ever
    I am using XAMPP as server in ubuntu 14.04 VPS of digitalocean. I tried to setup virtual hosts. But it is not working and I am getting 403 error of access denied. I changed files too. My files with changes are /opt/lampp/etc/httpd.conf # # This is the main Apache HTTP server configuration file. It contains the # configuration directives that give the server its instructions. # See <URL:http://httpd.apache.org/docs/trunk/> for detailed information. # In particular, see # <URL:http://httpd.apache.org/docs/trunk/mod/directives.html> # for a discussion of each configuration directive. # # Do NOT simply read the instructions in here without understanding # what they do. They're here only as hints or reminders. If you are unsure # consult the online docs. You have been warned. # # Configuration and logfile names: If the filenames you specify for many # of the server's control files begin with "/" (or "drive:/" for Win32), the # server will use that explicit path. If the filenames do *not* begin # with "/", the value of ServerRoot is prepended -- so 'log/access_log' # with ServerRoot set to '/www' will be interpreted by the # server as '/www/log/access_log', where as '/log/access_log' will be # interpreted as '/log/access_log'. # # ServerRoot: The top of the directory tree under which the server's # configuration, error, and log files are kept. # # Do not add a slash at the end of the directory path. If you point # ServerRoot at a non-local disk, be sure to specify a local disk on the # Mutex directive, if file-based mutexes are used. If you wish to share the # same ServerRoot for multiple httpd daemons, you will need to change at # least PidFile. # ServerRoot "/opt/lampp" # # Mutex: Allows you to set the mutex mechanism and mutex file directory # for individual mutexes, or change the global defaults # # Uncomment and change the directory if mutexes are file-based and the default # mutex file directory is not on a local disk or is not appropriate for some # other reason. # # Mutex default:logs # # Listen: Allows you to bind Apache to specific IP addresses and/or # ports, instead of the default. See also the <VirtualHost> # directive. # # Change this to Listen on specific IP addresses as shown below to # prevent Apache from glomming onto all bound IP addresses. # #Listen 12.34.56.78:80 Listen 80 # # Dynamic Shared Object (DSO) Support # # To be able to use the functionality of a module which was built as a DSO you # have to place corresponding `LoadModule' lines at this location so the # directives contained in it are actually available _before_ they are used. # Statically compiled modules (those listed by `httpd -l') do not need # to be loaded here. 
# # Example: # LoadModule foo_module modules/mod_foo.so # LoadModule authn_file_module modules/mod_authn_file.so LoadModule authn_dbm_module modules/mod_authn_dbm.so LoadModule authn_anon_module modules/mod_authn_anon.so LoadModule authn_dbd_module modules/mod_authn_dbd.so LoadModule authn_socache_module modules/mod_authn_socache.so LoadModule authn_core_module modules/mod_authn_core.so LoadModule authz_host_module modules/mod_authz_host.so LoadModule authz_groupfile_module modules/mod_authz_groupfile.so LoadModule authz_user_module modules/mod_authz_user.so LoadModule authz_dbm_module modules/mod_authz_dbm.so LoadModule authz_owner_module modules/mod_authz_owner.so LoadModule authz_dbd_module modules/mod_authz_dbd.so LoadModule authz_core_module modules/mod_authz_core.so LoadModule authnz_ldap_module modules/mod_authnz_ldap.so LoadModule access_compat_module modules/mod_access_compat.so LoadModule auth_basic_module modules/mod_auth_basic.so LoadModule auth_form_module modules/mod_auth_form.so LoadModule auth_digest_module modules/mod_auth_digest.so LoadModule allowmethods_module modules/mod_allowmethods.so LoadModule file_cache_module modules/mod_file_cache.so LoadModule cache_module modules/mod_cache.so LoadModule cache_disk_module modules/mod_cache_disk.so LoadModule socache_shmcb_module modules/mod_socache_shmcb.so LoadModule socache_dbm_module modules/mod_socache_dbm.so LoadModule socache_memcache_module modules/mod_socache_memcache.so LoadModule dbd_module modules/mod_dbd.so LoadModule bucketeer_module modules/mod_bucketeer.so LoadModule dumpio_module modules/mod_dumpio.so LoadModule echo_module modules/mod_echo.so LoadModule case_filter_module modules/mod_case_filter.so LoadModule case_filter_in_module modules/mod_case_filter_in.so LoadModule buffer_module modules/mod_buffer.so LoadModule ratelimit_module modules/mod_ratelimit.so LoadModule reqtimeout_module modules/mod_reqtimeout.so LoadModule ext_filter_module modules/mod_ext_filter.so LoadModule request_module modules/mod_request.so LoadModule include_module modules/mod_include.so LoadModule filter_module modules/mod_filter.so LoadModule substitute_module modules/mod_substitute.so LoadModule sed_module modules/mod_sed.so LoadModule charset_lite_module modules/mod_charset_lite.so LoadModule deflate_module modules/mod_deflate.so LoadModule mime_module modules/mod_mime.so LoadModule ldap_module modules/mod_ldap.so LoadModule log_config_module modules/mod_log_config.so LoadModule log_debug_module modules/mod_log_debug.so LoadModule logio_module modules/mod_logio.so LoadModule env_module modules/mod_env.so LoadModule mime_magic_module modules/mod_mime_magic.so LoadModule cern_meta_module modules/mod_cern_meta.so LoadModule expires_module modules/mod_expires.so LoadModule headers_module modules/mod_headers.so LoadModule usertrack_module modules/mod_usertrack.so LoadModule unique_id_module modules/mod_unique_id.so LoadModule setenvif_module modules/mod_setenvif.so LoadModule version_module modules/mod_version.so LoadModule remoteip_module modules/mod_remoteip.so LoadModule proxy_module modules/mod_proxy.so LoadModule proxy_connect_module modules/mod_proxy_connect.so LoadModule proxy_ftp_module modules/mod_proxy_ftp.so LoadModule proxy_http_module modules/mod_proxy_http.so LoadModule proxy_fcgi_module modules/mod_proxy_fcgi.so LoadModule proxy_scgi_module modules/mod_proxy_scgi.so LoadModule proxy_ajp_module modules/mod_proxy_ajp.so LoadModule proxy_balancer_module modules/mod_proxy_balancer.so LoadModule proxy_express_module 
modules/mod_proxy_express.so LoadModule session_module modules/mod_session.so LoadModule session_cookie_module modules/mod_session_cookie.so LoadModule session_dbd_module modules/mod_session_dbd.so LoadModule slotmem_shm_module modules/mod_slotmem_shm.so LoadModule ssl_module modules/mod_ssl.so LoadModule lbmethod_byrequests_module modules/mod_lbmethod_byrequests.so LoadModule lbmethod_bytraffic_module modules/mod_lbmethod_bytraffic.so LoadModule lbmethod_bybusyness_module modules/mod_lbmethod_bybusyness.so LoadModule lbmethod_heartbeat_module modules/mod_lbmethod_heartbeat.so LoadModule unixd_module modules/mod_unixd.so LoadModule dav_module modules/mod_dav.so LoadModule status_module modules/mod_status.so LoadModule autoindex_module modules/mod_autoindex.so LoadModule info_module modules/mod_info.so LoadModule suexec_module modules/mod_suexec.so LoadModule cgi_module modules/mod_cgi.so LoadModule cgid_module modules/mod_cgid.so LoadModule dav_fs_module modules/mod_dav_fs.so LoadModule vhost_alias_module modules/mod_vhost_alias.so LoadModule negotiation_module modules/mod_negotiation.so LoadModule dir_module modules/mod_dir.so LoadModule actions_module modules/mod_actions.so LoadModule speling_module modules/mod_speling.so LoadModule userdir_module modules/mod_userdir.so LoadModule alias_module modules/mod_alias.so LoadModule rewrite_module modules/mod_rewrite.so <IfDefine JUSTTOMAKEAPXSHAPPY> LoadModule php4_module modules/libphp4.so LoadModule php5_module modules/libphp5.so </IfDefine> <IfModule unixd_module> # # If you wish httpd to run as a different user or group, you must run # httpd as root initially and it will switch. # # User/Group: The name (or #number) of the user/group to run httpd as. # It is usually good practice to create a dedicated user and group for # running httpd, as with most system services. # User root Group www </IfModule> # 'Main' server configuration # # The directives in this section set up the values used by the 'main' # server, which responds to any requests that aren't handled by a # <VirtualHost> definition. These values also provide defaults for # any <VirtualHost> containers you may define later in the file. # # All of these directives may appear inside <VirtualHost> containers, # in which case these default settings will be overridden for the # virtual host being defined. # # # ServerAdmin: Your address, where problems with the server should be # e-mailed. This address appears on some server-generated pages, such # as error documents. e.g. [email protected] # ServerAdmin [email protected] # # ServerName gives the name and port that the server uses to identify itself. # This can often be determined automatically, but we recommend you specify # it explicitly to prevent problems during startup. # # If your host doesn't have a registered DNS name, enter its IP address here. # #ServerName www.example.com:@@Port@@ # XAMPP ServerName localhost # # Deny access to the entirety of your server's filesystem. You must # explicitly permit access to web content directories in other # <Directory> blocks below. # <Directory /> AllowOverride none Require all denied </Directory> # # Note that from this point forward you must specifically allow # particular features to be enabled - so if something's not working as # you might expect, make sure that you have specifically enabled it # below. # # # DocumentRoot: The directory out of which you will serve your # documents. 
By default, all requests are taken from this directory, but # symbolic links and aliases may be used to point to other locations. # DocumentRoot "/opt/lampp/htdocs" <Directory "/opt/lampp/htdocs"> # # Possible values for the Options directive are "None", "All", # or any combination of: # Indexes Includes FollowSymLinks SymLinksifOwnerMatch ExecCGI MultiViews # # Note that "MultiViews" must be named *explicitly* --- "Options All" # doesn't give it to you. # # The Options directive is both complicated and important. Please see # http://httpd.apache.org/docs/trunk/mod/core.html#options # for more information. # #Options Indexes FollowSymLinks # XAMPP Options Indexes FollowSymLinks ExecCGI Includes # # AllowOverride controls what directives may be placed in .htaccess files. # It can be "All", "None", or any combination of the keywords: # Options FileInfo AuthConfig Limit # #AllowOverride None # since XAMPP 1.4: AllowOverride All # # Controls who can get stuff from this server. # Require all granted </Directory> # # DirectoryIndex: sets the file that Apache will serve if a directory # is requested. # <IfModule dir_module> #DirectoryIndex index.html # XAMPP DirectoryIndex index.html index.html.var index.php index.php3 index.php4 </IfModule> # # The following lines prevent .htaccess and .htpasswd files from being # viewed by Web clients. # <Files ".ht*"> Require all denied </Files> # # ErrorLog: The location of the error log file. # If you do not specify an ErrorLog directive within a <VirtualHost> # container, error messages relating to that virtual host will be # logged here. If you *do* define an error logfile for a <VirtualHost> # container, that host's errors will be logged there and not here. # ErrorLog "logs/error_log" # # LogLevel: Control the number of messages logged to the error_log. # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. # LogLevel warn <IfModule log_config_module> # # The following directives define some format nicknames for use with # a CustomLog directive (see below). # LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %b" common <IfModule logio_module> # You need to enable mod_logio.c to use %I and %O LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio </IfModule> # # The location and format of the access logfile (Common Logfile Format). # If you do not define any access logfiles within a <VirtualHost> # container, they will be logged here. Contrariwise, if you *do* # define per-<VirtualHost> access logfiles, transactions will be # logged therein and *not* in this file. # CustomLog "logs/access_log" common # # If you prefer a logfile with access, agent, and referer information # (Combined Logfile Format) you can use the following directive. # #CustomLog "logs/access_log" combined </IfModule> <IfModule alias_module> # # Redirect: Allows you to tell clients about documents that used to # exist in your server's namespace, but do not anymore. The client # will make a new request for the document at its new location. # Example: # Redirect permanent /foo http://www.example.com/bar # # Alias: Maps web paths into filesystem paths and is used to # access content that does not live under the DocumentRoot. # Example: # Alias /webpath /full/filesystem/path # # If you include a trailing / on /webpath then the server will # require it to be present in the URL. 
You will also likely # need to provide a <Directory> section to allow access to # the filesystem path. # # ScriptAlias: This controls which directories contain server scripts. # ScriptAliases are essentially the same as Aliases, except that # documents in the target directory are treated as applications and # run by the server when requested rather than as documents sent to the # client. The same rules about trailing "/" apply to ScriptAlias # directives as to Alias. # ScriptAlias /cgi-bin/ "/opt/lampp/cgi-bin/" </IfModule> <IfModule cgid_module> # # ScriptSock: On threaded servers, designate the path to the UNIX # socket used to communicate with the CGI daemon of mod_cgid. # #Scriptsock logs/cgisock </IfModule> # # "/opt/lampp/cgi-bin" should be changed to whatever your ScriptAliased # CGI directory exists, if you have that configured. # <Directory "/opt/lampp/cgi-bin"> AllowOverride None Options None Require all granted </Directory> <IfModule mime_module> # # TypesConfig points to the file containing the list of mappings from # filename extension to MIME-type. # TypesConfig etc/mime.types # # AddType allows you to add to or override the MIME configuration # file specified in TypesConfig for specific file types. # #AddType application/x-gzip .tgz # # AddEncoding allows you to have certain browsers uncompress # information on the fly. Note: Not all browsers support this. # #AddEncoding x-compress .Z #AddEncoding x-gzip .gz .tgz # # If the AddEncoding directives above are commented-out, then you # probably should define those extensions to indicate media types: # AddType application/x-compress .Z AddType application/x-gzip .gz .tgz # # AddHandler allows you to map certain file extensions to "handlers": # actions unrelated to filetype. These can be either built into the server # or added with the Action directive (see below) # # To use CGI scripts outside of ScriptAliased directories: # (You will also need to add "ExecCGI" to the "Options" directive.) # #AddHandler cgi-script .cgi # XAMPP, since LAMPP 0.9.8: AddHandler cgi-script .cgi .pl # For type maps (negotiated resources): #AddHandler type-map var # # Filters allow you to process content before it is sent to the client. # # To parse .shtml files for server-side includes (SSI): # (You will also need to add "Includes" to the "Options" directive.) # # XAMPP AddType text/html .shtml AddOutputFilter INCLUDES .shtml </IfModule> # # The mod_mime_magic module allows the server to use various hints from the # contents of the file itself to determine its type. The MIMEMagicFile # directive tells the module where the hint definitions are located. # #MIMEMagicFile etc/magic # # Customizable error responses come in three flavors: # 1) plain text 2) local redirects 3) external redirects # # Some examples: #ErrorDocument 500 "The server made a boo boo." #ErrorDocument 404 /missing.html #ErrorDocument 404 "/cgi-bin/missing_handler.pl" #ErrorDocument 402 http://www.example.com/subscription_info.html # # # MaxRanges: Maximum number of Ranges in a request before # returning the entire resource, or one of the special # values 'default', 'none' or 'unlimited'. # Default setting is to accept 200 Ranges. #MaxRanges unlimited # # EnableMMAP and EnableSendfile: On systems that support it, # memory-mapping or the sendfile syscall may be used to deliver # files. This usually improves server performance, but must # be turned off when serving from networked-mounted # filesystems or if support for these functions is otherwise # broken on your system. 
# Defaults: EnableMMAP On, EnableSendfile Off # EnableMMAP off EnableSendfile off # Supplemental configuration # # The configuration files in the etc/extra/ directory can be # included to add extra features or to modify the default configuration of # the server, or you may simply copy their contents here and change as # necessary. # Server-pool management (MPM specific) #Include etc/extra/httpd-mpm.conf # Multi-language error messages Include etc/extra/httpd-multilang-errordoc.conf # Fancy directory listings Include etc/extra/httpd-autoindex.conf # Language settings #Include etc/extra/httpd-languages.conf # User home directories #Include etc/extra/httpd-userdir.conf # Real-time info on requests and configuration #Include etc/extra/httpd-info.conf # Virtual hosts Include etc/extra/httpd-vhosts.conf # Local access to the Apache HTTP Server Manual #Include etc/extra/httpd-manual.conf # Distributed authoring and versioning (WebDAV) #Include etc/extra/httpd-dav.conf # Various default settings Include etc/extra/httpd-default.conf # Configure mod_proxy_html to understand HTML4/XHTML1 <IfModule proxy_html_module> Include etc/extra/proxy-html.conf </IfModule> # Secure (SSL/TLS) connections <IfModule ssl_module> # XAMPP <IfDefine SSL> Include etc/extra/httpd-ssl.conf </IfDefine> </IfModule> # # Note: The following must must be present to support # starting without SSL on platforms with no /dev/random equivalent # but a statically compiled-in mod_ssl. # <IfModule ssl_module> SSLRandomSeed startup builtin SSLRandomSeed connect builtin </IfModule> # XAMPP Include etc/extra/httpd-xampp.conf Include "/opt/lampp/apache2/conf/httpd.conf" I used command shown in this example. I used below lines to change and add group Add group "groupadd www" Add user to group "usermod -aG www root" Change htdocs group "chgrp -R www /opt/lampp/htdocs" Change sitedir group "chgrp -R www /opt/lampp/htdocs/mysite" Change htdocs chmod "chmod 2775 /opt/lampp/htdocs" Change sitedir chmod "chmod 2775 /opt/lampp/htdocs/mysite" And then I changed my vhosts.conf file # Virtual Hosts # # Required modules: mod_log_config # If you want to maintain multiple domains/hostnames on your # machine you can setup VirtualHost containers for them. Most configurations # use only name-based virtual hosts so the server doesn't need to worry about # IP addresses. This is indicated by the asterisks in the directives below. # # Please see the documentation at # <URL:http://httpd.apache.org/docs/2.4/vhosts/> # for further details before you try to setup virtual hosts. # # You may use the command line option '-S' to verify your virtual host # configuration. # # VirtualHost example: # Almost any Apache directive may go into a VirtualHost container. # The first VirtualHost section is used for all requests that do not # match a ServerName or ServerAlias in any <VirtualHost> block. 
#
<VirtualHost *:80>
    ServerAdmin [email protected]
    DocumentRoot "/opt/lampp/docs/dummy-host.example.com"
    ServerName dummy-host.example.com
    ServerAlias www.dummy-host.example.com
    ErrorLog "logs/dummy-host.example.com-error_log"
    CustomLog "logs/dummy-host.example.com-access_log" common
</VirtualHost>

<VirtualHost *:80>
    ServerAdmin [email protected]
    DocumentRoot "/opt/lampp/docs/dummy-host2.example.com"
    ServerName dummy-host2.example.com
    ErrorLog "logs/dummy-host2.example.com-error_log"
    CustomLog "logs/dummy-host2.example.com-access_log" common
</VirtualHost>

NameVirtualHost *
<VirtualHost *>
    ServerAdmin [email protected]
    DocumentRoot "/opt/lampp/htdocs/mysite"
    ServerName mysite.com
    ServerAlias mysite.com
    ErrorLog "/opt/lampp/htdocs/mysite/errorlogs"
    CustomLog "/opt/lampp/htdocs/mysite/customlog" common
    <Directory "/opt/lampp/htdocs/mysite">
        Options Indexes FollowSymLinks Includes ExecCGI
        AllowOverride All
        Order Allow,Deny
        Allow from all
        Require all granted
    </Directory>
</VirtualHost>

But still it's not working: I am getting a 403 error on my IP and domain, although I can access phpMyAdmin. If anyone can help me, please help me.
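A few things stand out in that last <VirtualHost> block. On Apache 2.4 (which XAMPP 1.8.3 ships), NameVirtualHost is no longer needed, the bare <VirtualHost *> does not match the *:80 name-based hosts above it, and mixing the old Order/Allow directives with the new Require directive is a common source of unexpected 403s. A stripped-down 2.4-style host for the paths in the question might look like the sketch below; this is an illustration of the idea rather than a guaranteed drop-in fix, since a 403 can also come from filesystem permissions or from XAMPP's own security includes:

    <VirtualHost *:80>
        ServerAdmin [email protected]
        ServerName mysite.com
        DocumentRoot "/opt/lampp/htdocs/mysite"
        ErrorLog "logs/mysite-error_log"
        CustomLog "logs/mysite-access_log" common

        <Directory "/opt/lampp/htdocs/mysite">
            Options Indexes FollowSymLinks Includes ExecCGI
            AllowOverride All
            # Apache 2.4 access control only; no Order/Allow lines
            Require all granted
        </Directory>
    </VirtualHost>

After a change like this, a configuration check with apachectl configtest (run from XAMPP's bin directory), a restart via /opt/lampp/lampp restart, and a look at the error log named above will usually say exactly which rule produced the 403.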

    Read the article

  • Check Your Spelling, Grammar, and Style in Firefox and Chrome

    - by Matthew Guay
    Are you tired of making simple writing mistakes that get past your browser’s spell-check?  Here’s how you can get advanced grammar check and more in Firefox and Chrome with After the Deadline. Microsoft Word has spoiled us with grammar, syntax, and spell checking, but the default spell check in Firefox and Chrome still only does basic checks.  Even webapps like Google Docs don’t check more than basic spelling errors.  However, WordPress.com is an exception; it offers advanced spelling, grammar, and syntax checking with its After the Deadline proofing system.  This helps you keep from making embarrassing mistakes on your blog posts, and now, thanks to a couple free browser plugins, it can help you keep from making these mistakes in any website or webapp. After the Deadline in Google Chrome Add the After the Deadline extension (link below) to Chrome as usual. As soon as it’s installed, you’re ready to start improving your online writing.  To check spelling, grammar, and more, click the ABC button that you’ll now see at the bottom of most text boxes online. After a quick scan, grammar mistakes are highlighted in green, complex expressions and other syntax problems are highlighted in blue, and spelling mistakes are highlighted in red as would be expected.  Click on an underlined word to choose one of its recommended changes or ignore the suggestion. Or, if you want more explanation about what was wrong with that word or phrase, click Explain for more info. And, if you forget to run an After the Deadline scan before submitting a text entry, it will automatically check to make sure you still want to submit it.  Click Cancel to go back and check your writing first.   To change the After the Deadline settings, click its icon in the toolbar and select View Options.  Additionally, if you want to disable it on the site you’re on, you can click Disable on this site directly from the popup. From the settings page, you can choose extra things to check for such as double negatives and redundant phrases, as well as add sites and words to ignore. After the Deadline in Firefox Add the After the Deadline add-on to Firefox (link below) as normal. After the Deadline basically the same in Firefox as it does in Chrome.  Select the ABC icon in the lower right corner of textboxes to check them for problems, and After the Deadline will underline the problems as it did in Chrome.  To view a suggested change in Firefox, right-click on the underlined word and select the recommended change or ignore the suggestion. And, if you forget to check, you’ll see a friendly reminder asking if you’re sure you want to submit your text like it is. You can access the After the Deadline settings in Firefox from the menu bar.  Click Tools, then select AtD Preferences.  In Firefox, the settings are in a options dialog with three tabs, but it includes the same options as the Chrome settings page.  Here you can make After the Deadline as correction-happy as you like.   Conclusion The web has increasingly become an interactive place, and seldom does a day go by that we aren’t entering text in forms and comments that may stay online forever.  Even our insignificant tweets are being archived in the Library of Congress.  After the Deadline can help you make sure that your permanent internet record is as grammatically correct as possible.  Even though it doesn’t catch every problem, and even misses some spelling mistakes, it’s still a great help. 
Links Download the After the Deadline extension for Google Chrome Download the After the Deadline add-on for Firefox

    Read the article

  • Understanding each other in web development

    - by Pete Hotchkin
    During my career I have been lucky enough to work in several different roles within web development with many extremely talented people, from incredible designers who were passionate about the placement of every pixel right through to server administrators and DBAs who were always measuring the improvements they were making to their queries in the smallest possible unit. The problem I always faced was that more often than not I was stuck in the middle trying to mediate between these different functions and enable each side to understand the other’s point of view. The main areas of contention that there have always been between these functional groups in my experience have been at 2 key points: during the build phase and then when there is a problem post-build. During both of these times it is often easier for someone to pass the buck onto someone else than spend the time to understand the other person’s perspective. Below is a quick look at two upcoming tools that will not only speed up the build phase for each function, but  also help when it comes to the issues faced once a site has been pushed live. In my experience a web project goes through several phases of development. The first of these is design, generally handled as Photoshop files which are then passed onto a front-end developer. This is the first point at which heated discussions can arise. One problem I’ve seen several times is that the designer doesn’t fully understand the platform constraints that need to be considered, and as a result has designed something that does not translate very well or is simply not possible. Working at Red Gate, I am lucky enough to be able to meet some amazing people and this happened just the other day when I was introduced to Neil Kinnish and Pete Nelson, the creators of what I believe could be a great asset in this designer-developer relationship, Mixture. Mixture allows the front end developer to quickly prototype a web page with built-in frameworks such as bootstrap. It’s not an IDE however, it just sits there in the background and monitors the project files in the background so every time you save a file from your favorite IDE, it will compile things like LESS, compact your JavaScript and the automatically refresh your test browser so you can see the changes instantly. I think one of the best parts of this however is a single button that pushes the changed files up to the web so the designer can instantly see how far the developer has got and the problem that he is facing at that time without the need to spend time setting up a remote server. I can see this being a real asset to remote teams where there needs to be a compromise between the designer and the front-end developer, or just to allow the designer to see how the build is progressing and suggest small alterations. Once the design has been built into the front end the designer’s job is generally done and there are no other points of contention between the designer and the other functions involved in building these web projects. As the project moves into the stage of integrating it into the back end and deploying it to the production server other functions start to be pulled in and other issues arise such as the back-end developer understanding the frameworks that they are using such as the routes that are in place in an MVC application or the number of database calls that the ORM layer is actually making. 
There are many tools out there that can actually help with these problems such as mini profiler that gives you a quick snapshot of what is going on directly in the browser. For a slightly more in-depth look at what is happening and to gain a deeper understanding of an application you may be working on though, you may want to consider Glimpse. Created by Nik and Anthony, it is an application that sits at the bottom of your browser (installed via NuGet) which can show you information about how your application is pieced together and how the information on screen is being delivered as it happens. With a wealth of community-built plugins such as one for nHibernate and linq2SQL (full list of plugins on NuGet). It can be customized directly to your own setup to truly delve into the code to see what is happening, and can help to reduce the number of confusing moments about whether it is your code that is going wrong or whether there is something more sinister happening directly on the server. All the tools that I have mentioned in this post help to do one thing above all, and that is to ease the barrier of understanding between the different functions that are involved in building and maintaining a web application. In my experience it is very easy to say “Well, that’s not my problem”, simply because the two functions involved don’t truly understand the other’s point of view. Software should not only be seen as a way to streamline our own working process or as a debugging tool but also a communication aid to improve the entire lifecycle of a web project. Glimpse is actually the project that I am the designer on and I would love to get your feedback if you do decide to try it out or if you would like to share your own experiences of working on web projects please fill in your details at https://www.surveymk.com/s/joinGlimpse  or add a comment below and I will get in touch with you.

    Read the article

  • Demystified - BI in SharePoint 2010

    - by Sahil Malik
    Ad:: SharePoint 2007 Training in .NET 3.5 technologies (more information). Frequently, my clients ask me if there is a good guide to deciphering the seemingly daunting choice of products from Microsoft when it comes to business intelligence offerings in a SharePoint 2010 world. These are all described in detail in my book, but here is a one (well, maybe two) page executive overview.
Microsoft Excel: Yes, Microsoft Excel! Your favorite, and the most commonly used 'database' in the world. No, it isn't a database by any technically pure definition, but it is the most commonly used 'database' in the world, and you will find many business users craft up very compelling Excel sheets with tonnes of logic inside them. Good for: Quick ad-hoc reports. Excel 64-bit allows the possibility of very large datasheets (also see 32-bit vs 64-bit Office, and the PowerPivot Add-In below). Audience: End business users can build such solutions. Related technologies: PowerPivot, Excel Services.
Microsoft Excel with PowerPivot Add-In: The PowerPivot add-in is an extension to Excel that adds support for large-scale data. Think of this as Excel with the ability to deal with very large amounts of data, with an in-memory data store as an option for Analysis Services. Good for: Ad-hoc reporting and logic with very large amounts of data. Audience: End business users can build such solutions. Related technologies: Excel, and Excel Services.
Excel Services: Excel Services is a Microsoft SharePoint Server 2010 shared service that brings the power of Excel to SharePoint Server by providing server-side calculation and browser-based rendering of Excel workbooks. Excel sheets can be created by end users and published to the SharePoint server, where they are rendered right through the browser in read-only or parameterized-read-only modes. They can also be accessed by other software via SOAP or REST based APIs (a minimal sketch of such a REST call follows below). Good for: Sharing Excel sheets with a larger number of people while maintaining control/version control, and sharing logic embedded in Excel sheets with other software across the organization via REST/SOAP interfaces. Audience: End business users can build such solutions once your tech staff has set up Excel Services on a SharePoint server instance; programmers can write software consuming functionality/complex formulae contained in your sheets. Related technologies: PerformancePoint Services, Excel, and PowerPivot.
Visio Services: Visio Services is a shared service on the Microsoft SharePoint Server 2010 platform that allows users to share and view Visio diagrams that may or may not have data connected to them. Connected data can update these diagrams, giving a visual/graphical view into the data. The diagrams are viewable through the browser; they are rendered in Silverlight but will automatically down-convert to .png format. Good for: Showing data as diagrams, with live updating. Comes with a developer story. Audience: End business users can build such solutions once your tech staff has set up Visio Services on a SharePoint server instance; developers can enhance the visualizations. Related technologies: Visio Services can be used to render workflow visualizations in SP2010.
Reporting Services: SQL Server Reporting Services can integrate with SharePoint, allowing you to store reports and data sources in SharePoint document libraries and render these reports, and associated functionality such as subscriptions, through a SharePoint site. In SharePoint 2010 you can also write reports against SharePoint lists (Access Services uses this technique). Good for: Showing complex reports running in an industry-standard data store, such as SQL Server. Audience: This is definitely developer land; don't expect end users to craft up reports unless a report model has previously been published. Related technologies: PerformancePoint Services.
PerformancePoint Services: PerformancePoint Services in SharePoint 2010 is now fully integrated with SharePoint, and comes with features that can either be used in the BI center site definition or on their own as activated features in existing site collections. PerformancePoint Services allows you to build reports and dashboards that target a variety of back-end data sources including SQL Server Reporting Services, SQL Server Analysis Services, SharePoint lists, Excel Services, simple tables, etc. Using these you can create dashboards, scorecards/KPIs, and simple reports, as well as reports targeting hierarchical multidimensional data sources. The visual decomposition tree is a new report type that lets you quickly break down multi-dimensional data. Good for: Mostly everything :), except your wallet – it's not free! But this is the most comprehensive offering; if you have SharePoint Server, forget everything and go with PerformancePoint. Audience: Developers need to set up the back-end sources and the manageability story; DBAs need to set up data warehouses with cubes; moderately sophisticated business users or developers can craft up reports using Dashboard Designer, which is a click-once app that deploys with PerformancePoint. Related technologies: Excel Services, Reporting Services, etc.
Other relevant technologies to know about: Business Connectivity Services allows for consumption of external data in SharePoint as columns or external lists, and can be paired with one or more of the above BI offerings to give insight into such data. Access Services allows the representation/publishing of an Access database as a SharePoint 2010 site, leveraging many SharePoint features; Reporting Services is used by Access Services. Secure Store Service: the SP2010 Secure Store Service is a replacement for the SP2007 single sign-on feature. It acts as a credential policeman, providing credentials to various applications running with SharePoint; BCS, PerformancePoint Services, Excel Services, and many other apps use the SSS (Secure Store Service) for credential control.
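To make the "accessed via SOAP or REST based APIs" point above concrete, here is a minimal sketch of reading a named range from a published workbook through the SharePoint 2010 Excel Services REST endpoint (_vti_bin/ExcelRest.aspx). This is not from the article: the server name, site, document library, workbook and range name are hypothetical placeholders, and authentication is environment specific and omitted.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Issues a GET against the Excel Services REST API and prints the Atom payload
// describing the requested range. Hypothetical server/site/workbook/range below.
public class ExcelRestSketch {
    public static void main(String[] args) throws Exception {
        // General shape: http://<server>/<site>/_vti_bin/ExcelRest.aspx/<library>/<workbook>/Model/Ranges('<name>')
        String url = "http://intranet.contoso.com/sites/bi/_vti_bin/ExcelRest.aspx"
                   + "/Shared%20Documents/SalesSummary.xlsx"
                   + "/Model/Ranges('SalesTotal')?$format=atom";

        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestMethod("GET");
        // Windows/claims authentication would normally be configured here; omitted in this sketch.

        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line); // Atom XML containing the range values
            }
        } finally {
            conn.disconnect();
        }
    }
}
The same URL with $format=html or $format=image returns the range as an HTML fragment or a picture, which is what makes the REST interface handy for embedding live workbook snippets in other applications.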

    Read the article

  • How to automatically mount hibernated NTFS to read-only?

    - by Piotr
    Is there any way to set up Ubuntu so that if it can't mount a filesystem in rw mode, it mounts it in ro mode in the same directory? As a result I should never see the notification that the system can't mount the filesystem (the "Skip or manual fix" prompt). So when I start the system, my NTFS partitions should be mounted in either rw or ro mode, depending on whether Windows is hibernated.
fstab entry: #/dev/sda7 UUID=D0B43178B43161E0 /media/Dane ntfs defaults,errors=remount-ro 0 1
"mount -a" result: The disk contains an unclean file system (0, 0). Metadata kept in Windows cache, refused to mount. Failed to mount '/dev/sda7': Operation not permitted. The NTFS partition is in an unsafe state. Please resume and shutdown Windows fully (no hibernation or fast restarting), or mount the volume read-only with the 'ro' mount option.
I have Ubuntu 13.10 and Windows 8, and I use UEFI Secure Boot.

    Read the article

  • Monitoring Events in your BPEL Runtime - RSS Feeds?

    - by Ramkumar Menon
    @10g - It had been a while since I'd tried something different, so here's what I did this week! Whenever our developers deployed processes to the BPEL runtime, or perhaps when a process got turned off due to connectivity issues, or maybe someone retired a process, I needed to know. So here's what I did.
Step 1: Downloaded the Quartz libraries and went through the documentation to understand what it takes to schedule a recurring job.
Step 2: Cranked out two components using Oracle JDeveloper [within a new Web Project].
a) A simple Java class named FeedUpdater that implements org.quartz.Job. All this class does is connect to your BPEL Runtime [via opmn:ormi] and fetch all events that occurred in the last "n" minutes. Events? If that doesn't ring a bell, they are right there on the BPEL Console: if you click on "Administration > Process Log", what you see are events. The API to retrieve the events is:
//get the locator reference for the domain you are interested in.
Locator l = ....
//Predicate to retrieve events for last "n" minutes
WhereCondition wc = new WhereCondition(...)
//get all those events you needed.
BPELProcessEvent[] events = l.listProcessEvents(wc);
After you get all these events, write them out into an RSS feed XML structure and stream it into a file that resides either in your Apache htdocs, or wherever it can be accessed via HTTP. You can read all about RSS 2.0 here. At a high level, here is how it looks:
<?xml version = '1.0' encoding = 'UTF-8'?>
<rss version="2.0">
  <channel>
    <title>Live Updates from the Development Environment</title>
    <link>http://soadev.myserver.com/feeds/</link>
    <description>Live Updates from the Development Environment</description>
    <lastBuildDate>Fri, 19 Nov 2010 01:03:00 PST</lastBuildDate>
    <language>en-us</language>
    <ttl>1</ttl>
    <item>
      <guid>1290213724692</guid>
      <title>Process compiled</title>
      <link>http://soadev.myserver.com/BPELConsole/mdm_product/administration.jsp?mode=processLog&amp;processName=&amp;dn=all&amp;eventType=all&amp;eventDate=600&amp;Filter=++Filter++</link>
      <pubDate>Fri Nov 19 00:00:37 PST 2010</pubDate>
      <description>SendPurchaseOrderRequestService: 3.0 Time : Fri Nov 19 00:00:37 PST 2010</description>
    </item>
    ......
  </channel>
</rss>
For writing out XML content, read through the Oracle XML Parser APIs [search around for oracle.xml.parser.v2].
b) Now that my "Job" was done, my job was half done. Next, I wrote up a simple scheduler servlet that schedules the above "Job" class to be executed every "n" minutes. It is very straightforward. Here is the primary section of the code:
try {
    Scheduler sched = StdSchedulerFactory.getDefaultScheduler();
    //get n and make a trigger that executes every "n" seconds
    Trigger trigger = TriggerUtils.makeSecondlyTrigger(n);
    trigger.setName("feedTrigger" + System.currentTimeMillis());
    trigger.setGroup("feedGroup");
    JobDetail job = new JobDetail("SOA_Feed" + System.currentTimeMillis(), "feedGroup", FeedUpdater.class);
    sched.scheduleJob(job, trigger);
} catch (Exception ex) {
    ex.printStackTrace();
    throw new ServletException(ex.getMessage());
}
Look up the Quartz API and documentation; it will make this look much simpler. Now that both components were ready, I packaged the application into a war file and deployed it onto my application server. When the servlet initialized, the "n" second schedule was set/initialized.
From then on, the servlet kept populating the RSS feed file. I just ensured that my "Job" code keeps only the 30 latest events, so that the feed file stays small and under control [a few KBs]. Next I opened up the feed XML in my browser - it requested a subscription - and here I was, watching new deployments and lifecycle events pop up on my browser toolbar every 5 (actually n) minutes! You could do this in a browser/reader of your choice, or perhaps read them like you read email in your Thunderbird. (A skeleton of the FeedUpdater job, pieced together from the fragments above, is sketched below.)
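Here is a minimal skeleton of what the FeedUpdater job described above could look like. It is not the author's original class: only the calls quoted in the post (a Locator, a WhereCondition predicate and listProcessEvents) are taken as given; the import packages, the Locator constructor form, the domain/password, the feed file path and the placeholder predicate are assumptions, and the RSS writing is reduced to plain string output instead of the Oracle XML Parser.
import java.io.FileWriter;
import java.io.PrintWriter;

import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

import com.oracle.bpel.client.BPELProcessEvent;   // package paths assumed for the 10g client API
import com.oracle.bpel.client.Locator;
import com.oracle.bpel.client.util.WhereCondition;

// Recurring Quartz job: fetch recent BPEL process events and rewrite the feed file.
public class FeedUpdater implements Job {

    private static final String FEED_FILE = "/var/www/html/feeds/bpel.xml"; // hypothetical htdocs path

    public void execute(JobExecutionContext context) throws JobExecutionException {
        try {
            // get the locator reference for the domain you are interested in
            // (domain name/password and constructor form are assumptions)
            Locator locator = new Locator("default", "welcome1");
            // predicate to retrieve events for the last "n" minutes (exact condition elided in the post)
            WhereCondition lastN = new WhereCondition("...");
            BPELProcessEvent[] events = locator.listProcessEvents(lastN);

            PrintWriter out = new PrintWriter(new FileWriter(FEED_FILE));
            try {
                out.println("<?xml version='1.0' encoding='UTF-8'?>");
                out.println("<rss version=\"2.0\"><channel>");
                out.println("<title>Live Updates from the Development Environment</title>");
                for (BPELProcessEvent event : events) {
                    out.println("<item>");
                    out.println("  <guid>" + event.hashCode() + "</guid>");
                    out.println("  <title>" + event + "</title>"); // toString as a stand-in for real event fields
                    out.println("</item>");
                }
                out.println("</channel></rss>");
            } finally {
                out.close();
            }
        } catch (Exception e) {
            throw new JobExecutionException(e);
        }
    }
}
The scheduler servlet shown above then only needs to register this class in its JobDetail for the feed to refresh on every trigger firing.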

    Read the article

  • Node.js Adventure - Storage Services and Service Runtime

    - by Shaun
    When I described how to host a Node.js application on Windows Azure, one question that might be raised is how to consume the various Windows Azure services, such as storage, service bus, access control, etc. Interacting with Windows Azure services is available in Node.js through the Windows Azure Node.js SDK, which is a module available on NPM. In this post I would like to describe how to use Windows Azure Storage (a.k.a. WAS) as well as the service runtime.
Consume Windows Azure Storage
Let’s first have a look at how to consume WAS through Node.js. As we know from the previous post, we can host a Node.js application on Windows Azure Web Site (a.k.a. WAWS) as well as Windows Azure Cloud Service (a.k.a. WACS). In theory, WAWS is also built on top of WACS worker roles with some more features, hence in this post I will only demonstrate hosting in a WACS worker role. The Node.js code for consuming WAS can also be used when hosted on WAWS, but since there are no roles in WAWS, the code for consuming the service runtime mentioned in the next section cannot be used in a WAWS node application. We can use the solution that I created in my last post. Alternatively, we can create a new Windows Azure project in Visual Studio with a worker role, add “node.exe” and “index.js”, install the “express” and “node-sqlserver” modules, and mark all files as “Copy always”. In order to use Windows Azure services we need the Windows Azure Node.js SDK, also known as a module named “azure”, which can be installed through NPM. Once downloaded and installed, we need to include it in our worker role project and mark it as “Copy always”. You can use my “Copy all always” tool mentioned in my last post to update the current worker role project file. You can also find the source code of this tool here. The source code of the Windows Azure SDK for Node.js can be found on its GitHub page. It contains two parts. One is a CLI tool which provides a cross-platform command line package for Mac and Linux to manage WAWS and Windows Azure Virtual Machines (a.k.a. WAVM). The other is a library for managing and consuming the various Windows Azure services, including tables, blobs, queues, service bus and the service runtime. I will not cover all of them, but will only demonstrate how to use tables and service runtime information in this post. You can find the full documentation of this SDK here. Back in Visual Studio, open “index.js” and let’s continue our application from the last post, which was working against Windows Azure SQL Database (a.k.a. WASD). The code should look like this.
1: var express = require("express"); 2: var sql = require("node-sqlserver"); 3:  4: var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;"; 5: var port = 80; 6:  7: var app = express(); 8:  9: app.configure(function () { 10: app.use(express.bodyParser()); 11: }); 12:  13: app.get("/", function (req, res) { 14: sql.open(connectionString, function (err, conn) { 15: if (err) { 16: console.log(err); 17: res.send(500, "Cannot open connection."); 18: } 19: else { 20: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 21: if (err) { 22: console.log(err); 23: res.send(500, "Cannot retrieve records."); 24: } 25: else { 26: res.json(results); 27: } 28: }); 29: } 30: }); 31: }); 32:  33: app.get("/text/:key/:culture", function (req, res) { 34: sql.open(connectionString, function (err, conn) { 35: if (err) { 36: console.log(err); 37: res.send(500, "Cannot open connection."); 38: } 39: else { 40: var key = req.params.key; 41: var culture = req.params.culture; 42: var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'"; 43: conn.queryRaw(command, function (err, results) { 44: if (err) { 45: console.log(err); 46: res.send(500, "Cannot retrieve records."); 47: } 48: else { 49: res.json(results); 50: } 51: }); 52: } 53: }); 54: }); 55:  56: app.get("/sproc/:key/:culture", function (req, res) { 57: sql.open(connectionString, function (err, conn) { 58: if (err) { 59: console.log(err); 60: res.send(500, "Cannot open connection."); 61: } 62: else { 63: var key = req.params.key; 64: var culture = req.params.culture; 65: var command = "EXEC GetItem '" + key + "', '" + culture + "'"; 66: conn.queryRaw(command, function (err, results) { 67: if (err) { 68: console.log(err); 69: res.send(500, "Cannot retrieve records."); 70: } 71: else { 72: res.json(results); 73: } 74: }); 75: } 76: }); 77: }); 78:  79: app.post("/new", function (req, res) { 80: var key = req.body.key; 81: var culture = req.body.culture; 82: var val = req.body.val; 83:  84: sql.open(connectionString, function (err, conn) { 85: if (err) { 86: console.log(err); 87: res.send(500, "Cannot open connection."); 88: } 89: else { 90: var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')"; 91: conn.queryRaw(command, function (err, results) { 92: if (err) { 93: console.log(err); 94: res.send(500, "Cannot retrieve records."); 95: } 96: else { 97: res.send(200, "Inserted Successful"); 98: } 99: }); 100: } 101: }); 102: }); 103:  104: app.listen(port); Now let’s create a new function, copy the records from WASD to table service. 1. Delete the table named “resource”. 2. Create a new table named “resource”. These 2 steps ensures that we have an empty table. 3. Load all records from the “resource” table in WASD. 4. For each records loaded from WASD, insert them into the table one by one. 5. Prompt to user when finished. In order to use table service we need the storage account and key, which can be found from the developer portal. Just select the storage account and click the Manage Keys button. Then create two local variants in our Node.js application for the storage account name and key. Since we need to use WAS we need to import the azure module. Also I created another variant stored the table name. In order to work with table service I need to create the storage client for table service. 
This is very similar as the Windows Azure SDK for .NET. As the code below I created a new variant named “client” and use “createTableService”, specified my storage account name and key. 1: var azure = require("azure"); 2: var storageAccountName = "synctile"; 3: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 4: var tableName = "resource"; 5: var client = azure.createTableService(storageAccountName, storageAccountKey); Now create a new function for URL “/was/init” so that we can trigger it through browser. Then in this function we will firstly load all records from WASD. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: } 18: } 19: }); 20: } 21: }); 22: }); When we succeed loaded all records we can start to transform them into table service. First I need to recreate the table in table service. This can be done by deleting and creating the table through table client I had just created previously. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: } 27: }); 28: }); 29: } 30: } 31: }); 32: } 33: }); 34: }); As you can see, the azure SDK provide its methods in callback pattern. In fact, almost all modules in Node.js use the callback pattern. For example, when I deleted a table I invoked “deleteTable” method, provided the name of the table and a callback function which will be performed when the table had been deleted or failed. Underlying, the azure module will perform the table deletion operation in POSIX async threads pool asynchronously. And once it’s done the callback function will be performed. This is the reason we need to nest the table creation code inside the deletion function. If we perform the table creation code after the deletion code then they will be invoked in parallel. Next, for each records in WASD I created an entity and then insert into the table service. Finally I send the response to the browser. Can you find a bug in the code below? I will describe it later in this post. 
1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: for (var i = 0; i < results.rows.length; i++) { 27: var entity = { 28: "PartitionKey": results.rows[i][1], 29: "RowKey": results.rows[i][0], 30: "Value": results.rows[i][2] 31: }; 32: client.insertEntity(tableName, entity, function (error) { 33: if (error) { 34: error["target"] = "insertEntity"; 35: res.send(500, error); 36: } 37: else { 38: console.log("entity inserted"); 39: } 40: }); 41: } 42: // send the 43: console.log("all done"); 44: res.send(200, "All done!"); 45: } 46: }); 47: }); 48: } 49: } 50: }); 51: } 52: }); 53: }); Now we can publish it to the cloud and have a try. But normally we’d better test it at the local emulator first. In Node.js SDK there are three build-in properties which provides the account name, key and host address for local storage emulator. We can use them to initialize our table service client. We also need to change the SQL connection string to let it use my local database. The code will be changed as below. 1: // windows azure sql database 2: //var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd=eszqu94XZY;Encrypt=yes;Connection Timeout=30;"; 3: // sql server 4: var connectionString = "Driver={SQL Server Native Client 11.0};Server={.};Database={Caspar};Trusted_Connection={Yes};"; 5:  6: var azure = require("azure"); 7: var storageAccountName = "synctile"; 8: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 9: var tableName = "resource"; 10: // windows azure storage 11: //var client = azure.createTableService(storageAccountName, storageAccountKey); 12: // local storage emulator 13: var client = azure.createTableService(azure.ServiceClient.DEVSTORE_STORAGE_ACCOUNT, azure.ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY, azure.ServiceClient.DEVSTORE_TABLE_HOST); Now let’s run the application and navigate to “localhost:12345/was/init” as I hosted it on port 12345. We can find it transformed the data from my local database to local table service. Everything looks fine. But there is a bug in my code. If we have a look on the Node.js command window we will find that it sent response before all records had been inserted, which is not what I expected. The reason is that, as I mentioned before, Node.js perform all IO operations in non-blocking model. When we inserted the records we executed the table service insert method in parallel, and the operation of sending response was also executed in parallel, even though I wrote it at the end of my logic. 
The correct logic should be, when all entities had been copied to table service with no error, then I will send response to the browser, otherwise I should send error message to the browser. To do so I need to import another module named “async”, which helps us to coordinate our asynchronous code. Install the module and import it at the beginning of the code. Then we can use its “forEach” method for the asynchronous code of inserting table entities. The first argument of “forEach” is the array that will be performed. The second argument is the operation for each items in the array. And the third argument will be invoked then all items had been performed or any errors occurred. Here we can send our response to browser. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: async.forEach(results.rows, 26: // transform the records 27: function (row, callback) { 28: var entity = { 29: "PartitionKey": row[1], 30: "RowKey": row[0], 31: "Value": row[2] 32: }; 33: client.insertEntity(tableName, entity, function (error) { 34: if (error) { 35: callback(error); 36: } 37: else { 38: console.log("entity inserted."); 39: callback(null); 40: } 41: }); 42: }, 43: // send reponse 44: function (error) { 45: if (error) { 46: error["target"] = "insertEntity"; 47: res.send(500, error); 48: } 49: else { 50: console.log("all done"); 51: res.send(200, "All done!"); 52: } 53: } 54: ); 55: } 56: }); 57: }); 58: } 59: } 60: }); 61: } 62: }); 63: }); Run it locally and now we can find the response was sent after all entities had been inserted. Query entities against table service is simple as well. Just use the “queryEntity” method from the table service client and providing the partition key and row key. We can also provide a complex query criteria as well, for example the code here. In the code below I queried an entity by the partition key and row key, and return the proper localization value in response. 1: app.get("/was/:key/:culture", function (req, res) { 2: var key = req.params.key; 3: var culture = req.params.culture; 4: client.queryEntity(tableName, culture, key, function (error, entity) { 5: if (error) { 6: res.send(500, error); 7: } 8: else { 9: res.json(entity); 10: } 11: }); 12: }); And then tested it on local emulator. Finally if we want to publish this application to the cloud we should change the database connection string and storage account. For more information about how to consume blob and queue service, as well as the service bus please refer to the MSDN page.   Consume Service Runtime As I mentioned above, before we published our application to the cloud we need to change the connection string and account information in our code. 
But if you had played with WACS you should have known that the service runtime provides the ability to retrieve configuration settings, endpoints and local resource information at runtime. Which means we can have these values defined in CSCFG and CSDEF files and then the runtime should be able to retrieve the proper values. For example we can add some role settings though the property window of the role, specify the connection string and storage account for cloud and local. And the can also use the endpoint which defined in role environment to our Node.js application. In Node.js SDK we can get an object from “azure.RoleEnvironment”, which provides the functionalities to retrieve the configuration settings and endpoints, etc.. In the code below I defined the connection string variants and then use the SDK to retrieve and initialize the table client. 1: var connectionString = ""; 2: var storageAccountName = ""; 3: var storageAccountKey = ""; 4: var tableName = ""; 5: var client; 6:  7: azure.RoleEnvironment.getConfigurationSettings(function (error, settings) { 8: if (error) { 9: console.log("ERROR: getConfigurationSettings"); 10: console.log(JSON.stringify(error)); 11: } 12: else { 13: console.log(JSON.stringify(settings)); 14: connectionString = settings["SqlConnectionString"]; 15: storageAccountName = settings["StorageAccountName"]; 16: storageAccountKey = settings["StorageAccountKey"]; 17: tableName = settings["TableName"]; 18:  19: console.log("connectionString = %s", connectionString); 20: console.log("storageAccountName = %s", storageAccountName); 21: console.log("storageAccountKey = %s", storageAccountKey); 22: console.log("tableName = %s", tableName); 23:  24: client = azure.createTableService(storageAccountName, storageAccountKey); 25: } 26: }); In this way we don’t need to amend the code for the configurations between local and cloud environment since the service runtime will take care of it. At the end of the code we will listen the application on the port retrieved from SDK as well. 1: azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) { 2: if (error) { 3: console.log("ERROR: getCurrentRoleInstance"); 4: console.log(JSON.stringify(error)); 5: } 6: else { 7: console.log(JSON.stringify(instance)); 8: if (instance["endpoints"] && instance["endpoints"]["nodejs"]) { 9: var endpoint = instance["endpoints"]["nodejs"]; 10: app.listen(endpoint["port"]); 11: } 12: else { 13: app.listen(8080); 14: } 15: } 16: }); But if we tested the application right now we will find that it cannot retrieve any values from service runtime. This is because by default, the entry point of this role was defined to the worker role class. In windows azure environment the service runtime will open a named pipeline to the entry point instance, so that it can connect to the runtime and retrieve values. But in this case, since the entry point was worker role and the Node.js was opened inside the role, the named pipeline was established between our worker role class and service runtime, so our Node.js application cannot use it. To fix this problem we need to open the CSDEF file under the azure project, add a new element named Runtime. Then add an element named EntryPoint which specify the Node.js command line. So that the Node.js application will have the connection to service runtime, then it’s able to read the configurations. Start the Node.js at local emulator we can find it retrieved the connections, storage account for local. 
And if we publish our application to Azure, it works with WASD and the storage service through the cloud configurations.
Summary
In this post I demonstrated how to use the Windows Azure SDK for Node.js to interact with the storage service, especially the table service. I also demonstrated how to use the WACS service runtime and how to retrieve the configuration settings and the endpoint information. In order to make the service runtime available to my Node.js application, I needed to create an entry point element in the CSDEF file and set “node.exe” as the entry point. I used five posts to introduce and demonstrate how to run a Node.js application on the Windows platform, and how to use Windows Azure Web Site and a Windows Azure Cloud Service worker role to host our Node.js application. I also described how to work with other services provided by the Windows Azure platform through the Windows Azure SDK for Node.js. Node.js is a very new and young network application platform, but since it is very simple and easy to learn and deploy, and it utilizes a single-threaded non-blocking IO model, Node.js has become more and more popular for web application and web service development, especially for IO-intensive projects. And as Node.js is very good at scaling out, it is all the more useful on a cloud computing platform. Using Node.js on the Windows platform is new, too. The modules for SQL database and the Windows Azure SDK are still under development and enhancement: “node-sqlserver” does not yet support SQL parameters, and “azure” does not yet support using a storage connection string to create the storage client. But Microsoft is working on making them easier to use and on adding more features and functionality.
PS, you can download the source code here. You can download the source code of my “Copy all always” tool here.
Hope this helps, Shaun
All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article
