Search Results

Search found 6178 results on 248 pages for 'section'.


  • Creating a list based on a column

    - by MikkoP
    I need to create a dropdown list in sheet A based on the values in column A of sheet B. I selected column A in sheet B and named it Models. Then I clicked the cell in sheet A where I wanted the list to be and selected Data -> Data validation -> Data validation. On the Settings page I selected List in the Allow section and checked Ignore blank and In-cell dropdown. In the Source section I entered =Models. This way I get all the right values plus a lot of blank entries. How do I prevent the blank lines from appearing in the list?
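    A common approach here (not part of the original question) is to point the Models name at a dynamic range so it only covers cells that actually contain data. Assuming the model names sit in column A of a sheet literally named B, starting at row 1 with no gaps, the name definition would look roughly like this:

      =OFFSET(B!$A$1, 0, 0, COUNTA(B!$A:$A), 1)

    COUNTA counts the non-empty cells in column A, so the named range grows and shrinks with the list and the in-cell dropdown no longer shows blank entries.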

    Read the article

  • Running JBoss 6 with Runit / daemontools or other process supervision framework

    - by Alex Recarey
    I'm trying to use runit to daemonize JBoss. I use the /opt/jboss-6.1.0.Final/bin/run.sh script to start the server. When I do so from the command line, JBoss does not detach (which is what we want), and will also shut down when CTRL+C is pressed. In theory a perfect candidate to use runit on. Everything works fine except when I try to get runit to shut down JBoss. When I issue the command sv stop jboss nothing happens. Runit thinks the process is stopped but JBoss continues to run normally. I'm not doing anything special with the run script. This is my runit run script: #!/bin/sh exec 2>&1 exec /opt/jboss-6.1.0.Final/bin/run.sh -c standard -b 0.0.0.0 Looking at the jboss_init_redhat.sh script, the start section does mention ./bin/run.sh but the stop section has the following text: JBOSS_CMD_STOP=${JBOSS_CMD_STOP:-"java -classpath $JBOSSCP org.jboss.Shutdown --shutdown"} Any ideas of what I could try?
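    For context (this is not from the original poster's setup): run.sh typically starts the JVM as a child process, so the TERM signal runit sends to run.sh never reaches Java itself. runit's runsv(8) supports optional per-service control scripts, so one sketch of a workaround is a hook along these lines, where the shutdown.sh path and flag are assumptions:

      #!/bin/sh
      # Hypothetical <service-dir>/control/d script for the jboss service.
      # runsv runs this when 'sv stop jboss' (or 'sv down') is issued; per runsv(8),
      # exiting 0 suppresses the default signal handling, so JBoss gets a clean
      # shutdown request instead of a TERM that only hits the wrapper shell.
      /opt/jboss-6.1.0.Final/bin/shutdown.sh -S
      exit 0

    The script must be executable, and the same idea works with the java -classpath ... org.jboss.Shutdown --shutdown command quoted from jboss_init_redhat.sh above.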

    Read the article

  • How to enable Postfix filter on OS X server (Lion) without getting overwritten?

    - by mjbraun
    I have a Postfix after-queue content filter that works fine on a stock Ubuntu Postfix install. However, on an OS X Lion Server I'm testing out, I see this "auto-generated" language in the master.cf file: # ==== Begin auto-generated section ======================================== # This section of the master.cf file is auto-generated by the Server Admin # Mail backend plugin whenever mails settings are modified. smtp inet n - n - 1 postscreen smtpd pass - - n - - smtpd -o receive_override_options=no_address_mappings The problem is, to enable the filter the same area of the master.cf file has to be changed to have an entry like this: smtp inet n - - - - smtpd -o content_filter=mytest_filter:dummy So, it looks like the line would be blown away every time I make a change in Server Admin. Is there a better place for me to put the call to the filter such that it's persistent? I appreciate any assistance and/or guidance!

    Read the article

  • Windows 8: Is Internet Explorer 10 Metro mode disabled? Here is the fix.

    - by Gopinath
    Are you having issues with Internet Explorer 10 on Windows 8? Is the Internet Explorer 10 application not launching in Metro mode and always launching in Desktop mode? Continue reading the post to understand and fix the problem. When does Internet Explorer 10 not launch in Metro mode? On Windows 8, when Internet Explorer is launched from the Start screen it launches in Metro UI mode, and when launched from the Desktop it launches in traditional desktop UI mode. But when Internet Explorer is not set as the default browser, Windows 8 always launches Internet Explorer in Desktop mode irrespective of whether it's launched from the Start screen or the Desktop. This generally happens when we install the browsers Firefox or Chrome and set them as the default browser. How to restore Internet Explorer 10 Metro mode? The problem can be resolved by setting Internet Explorer 10 as the default web browser in Windows 8. But setting the default web browser in Windows 8 is a bit tricky as there is no such option available on the Options screen of Internet Explorer. To set Internet Explorer 10 as the default web browser follow these steps 1. Go to the Start screen of Windows 8 and search for the Default Programs app by typing Default using the keyboard. 2. Click on the link Set your default programs 3. Choose Internet Explorer on the left section of the screen (pointer 1) and then click on the option Set this program as default available at the bottom right section (pointer 2) 4. That’s it. Now Internet Explorer is set as the default web browser and from now on you will be able to launch the Metro UI based Internet Explorer from the Start screen

    Read the article

  • ASP.NET MVC 2 RTM Unit Tests not compiling

    - by nmarun
    I found something weird this time when it came to the ASP.NET MVC 2 release. Only a handful of people ‘made noise’ about the release.. at least on the asp.net blog site; usually there’s a big ‘WOOHAA… <something> is released’ kind of thing. Hmm… but here’s the reason I’m writing this post. I’m not sure how many of you read the release notes before downloading the version.. I did, I did, I did. Now there’s a ‘Known issues’ section in the document and I’m quoting the text as is from this section: Unit test project does not contain reference to ASP.NET MVC 2 project: If the Solution Explorer window is hidden in Visual Studio, when you create a new ASP.NET MVC 2 Web application project and you select the option Yes, create a unit test project in the Create Unit Test Project dialog box, the unit test project is created but does not have a reference to the associated ASP.NET MVC 2 project. When you build the solution, Visual Studio will display compilation errors and the unit tests will not run. There are two workarounds. The first workaround is to make sure that the Solution Explorer is displayed when you create a new ASP.NET MVC 2 Web application project. If you prefer to keep Solution Explorer hidden, the second workaround is to manually add a project reference from the unit test project to the ASP.NET MVC 2 project. This definitely looks like a bug to me; see below for a visual: At the top right corner you’ll see that the Solution Explorer is set to auto hide and there’s no reference to the TestMvc2 project, which is why we get compilation errors without even writing a single line of code. So thanks to <VeryBigFont>ME</VeryBigFont> and <VerySmallFont>Microsoft</VerySmallFont>, we’ve shown the world how to resolve a major issue and to live in Peace with the rest of humanity!
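    For readers hitting the second workaround, the manual fix boils down to adding a project reference from the test project to the MVC project. In Visual Studio that is right-click the test project, Add Reference, Projects tab; in the test project's .csproj it ends up looking roughly like this (the project name, path and GUID below are placeholders, not taken from this post):

      <ItemGroup>
        <!-- hypothetical reference from the unit test project to the MVC 2 web project -->
        <ProjectReference Include="..\TestMvc2\TestMvc2.csproj">
          <Project>{00000000-0000-0000-0000-000000000000}</Project>
          <Name>TestMvc2</Name>
        </ProjectReference>
      </ItemGroup>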

    Read the article

  • Thread Synchronization and Synchronization Primitives

    When considering synchronization in an application, the decision truly depends on what the application and its worker threads are going to do. I would use synchronization if two or more threads could possibly manipulate the same instance of an object at the same time. An example of this in C# is storing data in a static object. A static object is initialized once per application and the data within the object can be accessed by all threads. I would use the synchronization primitives to prevent the data from being manipulated by multiple threads simultaneously, which prevents data corruption within the object. On the other hand, if all the threads used non-static objects and were independent of the other tasks, there would be no need to use synchronization. The synchronization primitives in C# fall into four groups: Basic Blocking, Locking, Signaling, and Non-Blocking Synchronization Constructs. The Basic Blocking methods include Sleep, Join, and Task.Wait. These methods force threads to wait until other threads have completed. In addition, these methods can also force a thread to wait a set amount of time before continuing to work. The Locking primitive prevents a thread from entering a critical section of code while another thread is in the same critical section. If another thread attempts to enter a locked code block, it will wait until the block is released. The Signaling primitive allows a thread to temporarily pause work until receiving a notification from another thread that it is ok to continue working. The Signaling primitive removes the need for polling. The Non-Blocking Synchronization Constructs protect access to a common field by calling upon processor primitives.
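    To make the static-object scenario concrete, here is a minimal C# sketch (type and member names are illustrative, not from the original article): two worker tasks increment a counter held by a shared static class, a lock serializes access to the critical section, and Task.WaitAll is the basic blocking part.

      using System;
      using System.Threading.Tasks;

      static class SharedCounter
      {
          private static readonly object _sync = new object(); // locking primitive
          public static int Value;                              // data visible to all threads

          public static void Add(int amount)
          {
              lock (_sync) // other threads block here while one is inside the critical section
              {
                  Value += amount;
              }
          }
      }

      class Program
      {
          static void Main()
          {
              Task a = Task.Run(() => { for (int i = 0; i < 100000; i++) SharedCounter.Add(1); });
              Task b = Task.Run(() => { for (int i = 0; i < 100000; i++) SharedCounter.Add(1); });
              Task.WaitAll(a, b); // basic blocking: wait for both workers to finish
              Console.WriteLine(SharedCounter.Value); // 200000 every time, thanks to the lock
          }
      }

    Without the lock the two increments can interleave and the final value comes out low; with it, only one thread manipulates the shared field at a time.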

    Read the article

  • Doubt about texture waves in CG Ocean Shader

    - by Alexandre
    I'm new to graphics programming, and I'm having some trouble understanding the Ocean Shader described in "Effective Water Simulation from Physical Models" from GPU Gems. The source code associated with this article is here. My problem has been to understand the concept of texture waves. First of all, what is achieved by texture waves? I'm having a hard time trying to figure out its usefulness. In section 1.2.4 of the article, it does say that the waves summed into the texture have the same parametrization as the waves used for vertex positioning. Does that mean that I can't use the texture provided by the source code if I change the parameters of the waves, or add more waves to sum? And in section 1.4.1, it is said that we can assume that there is no rotation between texture space and world space if the texture coordinates for our normal map are implicit. What does it mean that the normal map's texture coordinates are "implicit"? And why do I need a rotation between texture and world spaces if the normal map's coordinates are not implicit? I would be very grateful for any help on this.

    Read the article

  • How to Use USER_DEFINED Activity in OWB Process Flow

    - by Jinggen He
    Process Flow is a very important component of Oracle Warehouse Builder. With Process Flow, we can create and control the ETL process by setting all kinds of activities in a well-constructed flow. In Oracle Warehouse Builder 11gR2, there are 28 kinds of activities, which fall into three categories: Control activities, OWB specific activities and Utility activities. For more information about Process Flow activities, please refer to OWB online doc. Most of those activities are pre-defined for some specific use. For example, the Mapping activity allows execution an OWB mapping in Process Flow and the FTP activity allows an interaction between the local host and a remote FTP server. Besides those activities for specific purposes, the User Defined activity enables you to incorporate into a Process Flow an activity that is not defined within Warehouse Builder. So the User Defined activity brings flexibility and extensibility to Process Flow. In this article, we will take an amazing tour of using the User Defined activity. Let's start. Enable execution of User Defined activity Let's start this section from creating a very simple Process Flow, which contains a Start activity, a User Defined activity and an End Success activity. Leave all parameters of activity USER_DEFINED unchanged except that we enter /tmp/test.sh into the Value column of the COMMAND parameter. Then let's create the shell script test.sh in /tmp directory. Here is the content of /tmp/test.sh (this article is demonstrating a scenario in Linux system, and /tmp/test.sh is a Bash shell script): echo Hello World! > /tmp/test.txt Note: don't forget to grant the execution privilege on /tmp/test.sh to OS Oracle user. For simplicity, we just use the following command. chmod +x /tmp/test.sh OK, it's so simple that we’ve almost done it. Now deploy the Process Flow and run it. For a newly installed OWB, we will come across an error saying "RPE-02248: For security reasons, activity operator Shell has been disabled by the DBA". See below. That's because, by default, the User Defined activity is DISABLED. Configuration about this can be found in <ORACLE_HOME>/owb/bin/admin/Runtime.properties: property.RuntimePlatform.0.NativeExecution.Shell.security_constraint=DISABLED The property can be set to three different values: NATIVE_JAVA, SCHEDULER and DISBALED. Where NATIVE_JAVA uses the Java 'Runtime.exec' interface, SCHEDULER uses a DBMS Scheduler external job submitted by the Control Center repository owner which is executed by the default operating system user configured by the DBA. DISABLED prevents execution via these operators. We enable the execution of User Defined activity by setting: property.RuntimePlatform.0.NativeExecution.Shell.security_constraint= NATIVE_JAVA Restart the Control Center service for the change of setting to take effect. cd <ORACLE_HOME>/owb/rtp/sql sqlplus OWBSYS/<password of OWBSYS> @stop_service.sql sqlplus OWBSYS/<password of OWBSYS> @start_service.sql And then run the Process Flow again. We will see that the Process Flow completes successfully. The execution of /tmp/test.sh successfully generated a file /tmp/test.txt, containing the line Hello World!. Pass parameters to User Defined Activity The Process Flow created in the above section has a drawback: the User Defined activity doesn't accept any information from OWB nor does it give any meaningful results back to OWB. That's to say, it lacks interaction. Maybe, sometimes such a Process Flow can fulfill the business requirement. 
But for most of the time, we need to get the User Defined activity executed according to some information prior to that step. In this section, we will see how to pass parameters to the User Defined activity and pass them into the to-be-executed shell script. First, let's see how to pass parameters to the script. The User Defined activity has an input parameter named PARAMETER_LIST. This is a list of parameters that will be passed to the command. Parameters are separated from one another by a token. The token is taken as the first character on the PARAMETER_LIST string, and the string must also end in that token. Warehouse Builder recommends the '?' character, but any character can be used. For example, to pass 'abc,' 'def,' and 'ghi' you can use the following equivalent: ?abc?def?ghi? or !abc!def!ghi! or |abc|def|ghi| If the token character or '\' needs to be included as part of the parameter, then it must be preceded with '\'. For example '\\'. If '\' is the token character, then '/' becomes the escape character. Let's configure the PARAMETER_LIST parameter as below: And modify the shell script /tmp/test.sh as below: echo $1 is saying hello to $2! > /tmp/test.txt Re-deploy the Process Flow and run it. We will see that the generated /tmp/test.txt contains the following line: Bob is saying hello to Alice! In the example above, the parameters passed into the shell script are static. This case is not so useful because: instead of passing parameters, we can directly write the value of the parameters in the shell script. To make the case more meaningful, we can pass two dynamic parameters, that are obtained from the previous activity, to the shell script. Prepare the Process Flow as below: The Mapping activity MAPPING_1 has two output parameters: FROM_USER, TO_USER. The User Defined activity has two input parameters: FROM_USER, TO_USER. All the four parameters are of String type. Additionally, the Process Flow has two string variables: VARIABLE_FOR_FROM_USER, VARIABLE_FOR_TO_USER. Through VARIABLE_FOR_FROM_USER, the input parameter FROM_USER of USER_DEFINED gets value from output parameter FROM_USER of MAPPING_1. We achieve this by binding both parameters to VARIABLE_FOR_FROM_USER. See the two figures below. In the same way, through VARIABLE_FOR_TO_USER, the input parameter TO_USER of USER_DEFINED gets value from output parameter TO_USER of MAPPING_1. Also, we need to change the PARAMETER_LIST of the User Defined activity like below: Now, the shell script is getting input from the Mapping activity dynamically. Deploy the Process Flow and all of its necessary dependees then run the Process Flow. We see that the generated /tmp/test.txt contains the following line: USER B is saying hello to USER A! 'USER B' and 'USER A' are two outputs of the Mapping execution. Write the shell script within Oracle Warehouse Builder In the previous section, the shell script is located in the /tmp directory. But sometimes, when the shell script is small, or for the sake of maintaining consistency, you may want to keep the shell script inside Oracle Warehouse Builder. We can achieve this by configuring these three parameters of a User Defined activity properly: COMMAND: Set the path of interpreter, by which the shell script will be interpreted. PARAMETER_LIST: Set it blank. SCRIPT: Enter the shell script content. Note that in Linux the shell script content is passed into the interpreter as standard input at runtime. About how to actually pass parameters to the shell script, we can utilize variable substitutions. 
As in the following figure, ${FROM_USER} will be replaced by the value of the FROM_USER input parameter of the User Defined activity. So will the ${TO_USER} symbol. Besides the custom substitution variables, OWB also provide some system pre-defined substitution variables. You can refer to the online document for that. Deploy the Process Flow and run it. We see that the generated /tmp/test.txt contains the following line: USER B is saying hello to USER A! Leverage the return value of User Defined activity All of the previous sections are connecting the User Defined activity to END_SUCCESS with an unconditional transition. But what should we do if we want different subsequent activities for different shell script execution results? 1.  The simplest way is to add three simple-conditioned out-going transitions for the User Defined activity just like the figure below. In the figure, to simplify the scenario, we connect the User Defined activity to three End activities. Basically, if the shell script ends successfully, the whole Process Flow will end at END_SUCCESS, otherwise, the whole Process Flow will end at END_ERROR (in our case, ending at END_WARNING seldom happens). In the real world, we can add more complex and meaningful subsequent business logic. 2.  Or we can utilize complex conditions to work with different results of the User Defined activity. Previously, in our script, we only have this line: echo ${FROM_USER} is saying hello to ${TO_USER}! > /tmp/test.txt We can add more logic in it and return different values accordingly. echo ${FROM_USER} is saying hello to ${TO_USER}! > /tmp/test.txt if CONDITION_1 ; then ...... exit 0 fi if CONDITION_2 ; then ...... exit 2 fi if CONDITION_3 ; then ...... exit 3 fi After that we can leverage the result by checking RESULT_CODE in condition expression of those out-going transitions. Let's suppose that we have the Process Flow as the following graph (SUB_PROCESS_n stands for more different further processes): We can set complex condition for the transition from USER_DEFINED to SUB_PROCESS_1 like this: Other transitions can be set in the same way. Note that, in our shell script, we return 0, 2 and 3, but not 1. As in Linux system, if the shell script comes across a system error like IO error, the return value will be 1. We can explicitly handle such a return value. Summary Let's summarize what has been discussed in this article: How to create a Process Flow with a User Defined activity in it How to pass parameters from the prior activity to the User Defined activity and finally into the shell script How to write the shell script within Oracle Warehouse Builder How to do variable substitutions How to let the User Defined activity return different values and in what way can we leverage

    Read the article

  • Selecting Your Theme

    - by Ruth
    Would you like to personalize CRM On Demand? You can quite easily with CRM On Demand Release 17. Whether your company wants a custom theme that matches your company brand, or you have a preference about the look and feel of the application, you can select your theme in a few clicks. If you are interested in creating a custom theme, take a look at the Themes - Create Your CRM Style blog article. Selecting Your Company Theme If you are the company administrator, you can select the company theme from the company profile. Click the Admin link. Navigate to Company Administration, then Company Profile. In the Company Theme Setting section, click the Theme Name field to select a new theme. Selecting Your Personal Theme Even if you are not an administrator, you can select a theme for CRM On Demand on your computer. Your company may not allow access to this option, so talk to your company administrator if you are unable to select your theme. Click the My Setup link. Click the My Profile link. Click the Personal Profile link. In the Additional Information section, select the theme that you want in the Theme Name picklist. Here are some standard themes to help you find the look that you want:

    Read the article

  • enable email on Godaddy when using Zerigo on Heroku hosted app

    - by joelmaranhao
    A little recap of what I have done ... and then my questions Q1, Q2 and Q3 1 - I developed a RoR app that I deployed on Heroku. biowatts.heroku.com 2 - I bought a domain name at GoDaddy: biowattsonline.com 3 - I am using the Zerigo addon for the DNS heroku addons:add custom_domains heroku addons:add zerigo_dns:basic 4 - Added my domains in Heroku heroku domains:add biowattsonline.com heroku domains:add www.biowattsonline.com and subdomains heroku domains:add calculator.biowattsonline.com Q1: Where do we configure the forward to http://biowattsonline.com/biogas_calculator ? 5 - Configured GoDaddy adding the Zerigo domains In the Nameservers section a.ns.zerigo.net b.ns.zerigo.net c.ns.zerigo.net d.ns.zerigo.net e.ns.zerigo.net The GoDaddy DNS section is empty: "Not hosted here" OK, this all works fine ... http://biowattsonline.com is correctly found 6 - Subdomain forward I want calculator.biowattsonline.com to be forwarded to biowattsonline.com/biogas_calculator Q2: So I created the forward in GoDaddy ... but is that correct? 7 - GoDaddy email Q3: I have one free email account with GoDaddy, only now that I am using Zerigo I don't know how to configure GoDaddy to make it work again... because it worked with the default values Any ideas on Q1, Q2 and Q3? Thanks, Joel

    Read the article

  • New Release Overview Part 1

    - by brian.harrison
    Ladies & Gentlemen, I have been getting a lot of questions over the last month or two about the next release of WCI codenamed "Neo". Unfortunately I cannot give you an exact release date, which I know you all would be asking me for if we were talking face to face, but I can definitely provide you with information about some of the features that will be made available. So over the next few blog entries, I am going to provide you with details about two features and even provide you with screenshots for some of them. KD Browser Portlet This portlet will provide a Windows Explorer look and feel to the Knowledge Directory from within a Community Page or My Page. Not only will the portlet provide access to the folder structure and the documents within, but the user or community manager will also have the ability to modify what is being shown. From within a preferences page, the user or community manager can change what top-level folders are shown within the folder structure as well as what properties are available for each document that is shown. There are a number of other portlet-specific customizations available as well. Embedded Tagging Engine As some of you might be aware, there was a product made available just prior to the Oracle acquisition known as Pathways which gave users the ability to add tags to documents that were either in the Knowledge Directory or in the Collaboration Documents section. Although this product is no longer available separately for customers to purchase, we definitely did feel that the functionality was important and interesting enough that other customers should have access to it. The decision was made for this release to embed the original Pathways product as the Tagging Engine for WCI and Collaboration. This tagging engine will allow a user to add tags to a document in the Knowledge Directory as well as through the Collaboration Documents section. Once the tags are added to the Tagging Engine and associated with documents, a user will have the ability to filter the documents when processing a search according to the Tags Cloud that will now be available on the Search Results page, and this will be true no matter what kind of search is being processed. In addition to all of that, all of the Pathways portlets will also be available for users to add to their My Page.

    Read the article

  • Visual Studio confused when there are multiple system.web sections in your web.config

    - by Jeff Widmer
    I am trying to start debugging in Visual Studio for the website I am currently working on but Visual Studio is telling me that I have to enable debugging in the web.config to continue: But I clearly have debugging enabled: At first I chose the option to Modify the Web.config file to enable debugging but then I started receiving the following exception on my site: HTTP Error 500.19 - Internal Server Error The requested page cannot be accessed because the related configuration data for the page is invalid. Config section 'system.web/compilation' already defined. Sections must only appear once per config file. See the help topic <location> for exceptions   So what is going on here?  I already have debug=”true”, Visual Studio tells me I do not, and then when I give Visual Studio permission to fix the problem, I get a configuration error. Eventually I tracked it down to having two <system.web> sections. I had defined customErrors higher in the web.config: And then had a second system.web section with compilation debug=”true” further down in the web.config.  This is valid in the web.config and my site was not complaining but I guess Visual Studio does not know how to handle it and sees the first system.web, does not see the debug=”true” and thinks your site is not set up for debugging. To fix this so that Visual Studio was not going to complain, I removed the duplicate system.web declaration and moved the customErrors statement down.
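    To illustrate the shape of the fix (attribute values here are placeholders, not copied from the author's config), everything ends up under a single system.web element, so Visual Studio finds the debug flag in the first and only section it sees:

      <system.web>
        <!-- illustrative values only -->
        <compilation debug="true" />
        <customErrors mode="RemoteOnly" defaultRedirect="~/Error.aspx" />
      </system.web>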

    Read the article

  • Facebook: Hide Your Status Updates From Your Boss/Ex or Any Specific Friend

    - by Gopinath
    Sometimes we want to hide our status updates from specific people who are already accepted as Friends in Facebook. Do you wonder why we need to accept someone as a friend and then hide status updates from them? Well, maybe you have to accept a friend request from your boss, but would certainly love to hide status updates as well as other Facebook activities from him. Something similar goes for a few annoying friends whom you can’t de-friend but would like to hide your updates from. Thanks to Facebook for providing fine-grained privacy options for controlling what we want to share and with whom we want to share it. It’s very easy to block one or more specific friends from seeing your status updates. Here are the step by step instructions: 1. Login to Facebook and go to the Privacy Settings page. It shows a page something similar to what is shown in the below image. 2. Click on the “Customize settings” link 3. Expand the privacy options available in the section Things I Share -> Posts by me. Choose Customise from the list of available options 4. Type the list of unwanted friends’ names into the input box of the section “Hide this from”. Here is a screen grab of a couple of my friends whom I added for writing this post 5. Click the Save Settings button. That’s all. Facebook will ensure that these people will not see your status updates in their news feed. Enjoyed the Facebook Tip? Join Tech Dreams on Facebook to read all our blog posts right inside your Facebook news feed.

    Read the article

  • Visual WebGui launches a CompanionKit for enhanced developers experience

    - by Webgui
    Visual WebGui launched a new major live demo of the platform's concepts, features and controls and the code behind them. The new Developer CompanionKit is a huge leap forward in the developer experience, allowing developers a hands-on exploration of Visual WebGui which should provide a better understanding of the system and the ability to utilize the great advantages of Visual WebGui in order to develop better performing rich web applications. The CompanionKit is available online at companionkit.visualwebgui.com/main.wgx We invite you to explore Visual WebGui via the new CompanionKit and to watch the CompanionKit Intro video. Below is a screenshot taken from the live CompanionKit which allows developers to see how applying an alternate style to the appearance of a DataGridView is done, how it looks running live and its code (C# or VB.NET). You can access the different Controls (within the Controls section) from the left navigation bar or perform a free text search which shows the relevant results from all the sections - additional sections such as a Concept section are expected to be added in the near future. In addition, the new Developer CompanionKit, which was built with Visual WebGui, showcases the enhanced UI design capabilities for building more engaging, modern Web 2.0 applications. The CompanionKit will also be available for download in the next few days as part of the media for the 6.4 beta 2 SDK (.NET 2.0 or .NET 3.5) under "Help and Documentation".

    Read the article

  • Unity – Part 5: Injecting Values

    - by Ricardo Peres
    Introduction This is the fifth post on Unity. You can find the introductory post here, the second post, on dependency injection here, a third one on Aspect Oriented Programming (AOP) here and the latest so far, on writing custom extensions, here. This time we will talk about injecting simple values. An Inversion of Control (IoC) / Dependency Injector (DI) container like Unity can be used for things other than injecting complex class dependencies. It can also be used for setting property values or method/constructor parameters whenever a class is built. The main difference is that these values do not have a lifetime manager associated with them and do not come from the regular IoC registration store. Unlike, for instance, MEF, Unity won’t let you register as a dependency a string or an integer, so you have to take a different approach, which I will describe in this post. Scenario Let’s imagine we have a base interface that describes a logger – the same as in previous examples: 1: public interface ILogger 2: { 3: void Log(String message); 4: } And a concrete implementation that writes to a file: 1: public class FileLogger : ILogger 2: { 3: public String Filename 4: { 5: get; 6: set; 7: } 8:  9: #region ILogger Members 10:  11: public void Log(String message) 12: { 13: using (Stream file = File.OpenWrite(this.Filename)) 14: { 15: Byte[] data = Encoding.Default.GetBytes(message); 16: 17: file.Write(data, 0, data.Length); 18: } 19: } 20:  21: #endregion 22: } And let’s say we want the Filename property to come from the application settings (appSettings) section on the Web/App.config file. As usual with Unity, there is an extensibility point that allows us to automatically do this, both with code configuration or statically on the configuration file. 
Extending Injection We start by implementing a class that will retrieve a value from the appSettings by inheriting from ValueElement: 1: sealed class AppSettingsParameterValueElement : ValueElement, IDependencyResolverPolicy 2: { 3: #region Private methods 4: private Object CreateInstance(Type parameterType) 5: { 6: Object configurationValue = ConfigurationManager.AppSettings[this.AppSettingsKey]; 7:  8: if (parameterType != typeof(String)) 9: { 10: TypeConverter typeConverter = this.GetTypeConverter(parameterType); 11:  12: configurationValue = typeConverter.ConvertFromInvariantString(configurationValue as String); 13: } 14:  15: return (configurationValue); 16: } 17: #endregion 18:  19: #region Private methods 20: private TypeConverter GetTypeConverter(Type parameterType) 21: { 22: if (String.IsNullOrEmpty(this.TypeConverterTypeName) == false) 23: { 24: return (Activator.CreateInstance(TypeResolver.ResolveType(this.TypeConverterTypeName)) as TypeConverter); 25: } 26: else 27: { 28: return (TypeDescriptor.GetConverter(parameterType)); 29: } 30: } 31: #endregion 32:  33: #region Public override methods 34: public override InjectionParameterValue GetInjectionParameterValue(IUnityContainer container, Type parameterType) 35: { 36: Object value = this.CreateInstance(parameterType); 37: return (new InjectionParameter(parameterType, value)); 38: } 39: #endregion 40:  41: #region IDependencyResolverPolicy Members 42:  43: public Object Resolve(IBuilderContext context) 44: { 45: Type parameterType = null; 46:  47: if (context.CurrentOperation is ResolvingPropertyValueOperation) 48: { 49: ResolvingPropertyValueOperation op = (context.CurrentOperation as ResolvingPropertyValueOperation); 50: PropertyInfo prop = op.TypeBeingConstructed.GetProperty(op.PropertyName); 51: parameterType = prop.PropertyType; 52: } 53: else if (context.CurrentOperation is ConstructorArgumentResolveOperation) 54: { 55: ConstructorArgumentResolveOperation op = (context.CurrentOperation as ConstructorArgumentResolveOperation); 56: String args = op.ConstructorSignature.Split('(')[1].Split(')')[0]; 57: Type[] types = args.Split(',').Select(a => Type.GetType(a.Split(' ')[0])).ToArray(); 58: ConstructorInfo ctor = op.TypeBeingConstructed.GetConstructor(types); 59: parameterType = ctor.GetParameters().Where(p => p.Name == op.ParameterName).Single().ParameterType; 60: } 61: else if (context.CurrentOperation is MethodArgumentResolveOperation) 62: { 63: MethodArgumentResolveOperation op = (context.CurrentOperation as MethodArgumentResolveOperation); 64: String methodName = op.MethodSignature.Split('(')[0].Split(' ')[1]; 65: String args = op.MethodSignature.Split('(')[1].Split(')')[0]; 66: Type[] types = args.Split(',').Select(a => Type.GetType(a.Split(' ')[0])).ToArray(); 67: MethodInfo method = op.TypeBeingConstructed.GetMethod(methodName, types); 68: parameterType = method.GetParameters().Where(p => p.Name == op.ParameterName).Single().ParameterType; 69: } 70:  71: return (this.CreateInstance(parameterType)); 72: } 73:  74: #endregion 75:  76: #region Public properties 77: [ConfigurationProperty("appSettingsKey", IsRequired = true)] 78: public String AppSettingsKey 79: { 80: get 81: { 82: return ((String)base["appSettingsKey"]); 83: } 84:  85: set 86: { 87: base["appSettingsKey"] = value; 88: } 89: } 90: #endregion 91: } As you can see from the implementation of the IDependencyResolverPolicy.Resolve method, this will work in three different scenarios: When it is applied to a property; When it is applied to a constructor parameter; 
When it is applied to an initialization method. The implementation will even try to convert the value to its declared destination, for example, if the destination property is an Int32, it will try to convert the appSettings stored string to an Int32. Injection By Configuration If we want to configure injection by configuration, we need to implement a custom section extension by inheriting from SectionExtension, and registering our custom element with the name “appSettings”: 1: sealed class AppSettingsParameterInjectionElementExtension : SectionExtension 2: { 3: public override void AddExtensions(SectionExtensionContext context) 4: { 5: context.AddElement<AppSettingsParameterValueElement>("appSettings"); 6: } 7: } And on the configuration file, for setting a property, we use it like this: 1: <appSettings> 2: <add key="LoggerFilename" value="Log.txt"/> 3: </appSettings> 4: <unity xmlns="http://schemas.microsoft.com/practices/2010/unity"> 5: <container> 6: <register type="MyNamespace.ILogger, MyAssembly" mapTo="MyNamespace.ConsoleLogger, MyAssembly"/> 7: <register type="MyNamespace.ILogger, MyAssembly" mapTo="MyNamespace.FileLogger, MyAssembly" name="File"> 8: <lifetime type="singleton"/> 9: <property name="Filename"> 10: <appSettings appSettingsKey="LoggerFilename"/> 11: </property> 12: </register> 13: </container> 14: </unity> If we would like to inject the value as a constructor parameter, it would be instead: 1: <unity xmlns="http://schemas.microsoft.com/practices/2010/unity"> 2: <sectionExtension type="MyNamespace.AppSettingsParameterInjectionElementExtension, MyAssembly" /> 3: <container> 4: <register type="MyNamespace.ILogger, MyAssembly" mapTo="MyNamespace.ConsoleLogger, MyAssembly"/> 5: <register type="MyNamespace.ILogger, MyAssembly" mapTo="MyNamespace.FileLogger, MyAssembly" name="File"> 6: <lifetime type="singleton"/> 7: <constructor> 8: <param name="filename" type="System.String"> 9: <appSettings appSettingsKey="LoggerFilename"/> 10: </param> 11: </constructor> 12: </register> 13: </container> 14: </unity> Notice the appSettings section, where we add a LoggerFilename entry, which is the same as the one referred by our AppSettingsParameterInjectionElementExtension extension. For more advanced behavior, you can add a TypeConverterName attribute to the appSettings declaration, where you can pass an assembly qualified name of a class that inherits from TypeConverter. This class will be responsible for converting the appSettings value to a destination type. 
Injection By Attribute If we would like to use attributes instead, we need to create a custom attribute by inheriting from DependencyResolutionAttribute: 1: [Serializable] 2: [AttributeUsage(AttributeTargets.Parameter | AttributeTargets.Property, AllowMultiple = false, Inherited = true)] 3: public sealed class AppSettingsDependencyResolutionAttribute : DependencyResolutionAttribute 4: { 5: public AppSettingsDependencyResolutionAttribute(String appSettingsKey) 6: { 7: this.AppSettingsKey = appSettingsKey; 8: } 9:  10: public String TypeConverterTypeName 11: { 12: get; 13: set; 14: } 15:  16: public String AppSettingsKey 17: { 18: get; 19: private set; 20: } 21:  22: public override IDependencyResolverPolicy CreateResolver(Type typeToResolve) 23: { 24: return (new AppSettingsParameterValueElement() { AppSettingsKey = this.AppSettingsKey, TypeConverterTypeName = this.TypeConverterTypeName }); 25: } 26: } As for file configuration, there is a mandatory property for setting the appSettings key and an optional TypeConverterName  for setting the name of a TypeConverter. Both the custom attribute and the custom section return an instance of the injector AppSettingsParameterValueElement that we implemented in the first place. Now, the attribute needs to be placed before the injected class’ Filename property: 1: public class FileLogger : ILogger 2: { 3: [AppSettingsDependencyResolution("LoggerFilename")] 4: public String Filename 5: { 6: get; 7: set; 8: } 9:  10: #region ILogger Members 11:  12: public void Log(String message) 13: { 14: using (Stream file = File.OpenWrite(this.Filename)) 15: { 16: Byte[] data = Encoding.Default.GetBytes(message); 17: 18: file.Write(data, 0, data.Length); 19: } 20: } 21:  22: #endregion 23: } Or, if we wanted to use constructor injection: 1: public class FileLogger : ILogger 2: { 3: public String Filename 4: { 5: get; 6: set; 7: } 8:  9: public FileLogger([AppSettingsDependencyResolution("LoggerFilename")] String filename) 10: { 11: this.Filename = filename; 12: } 13:  14: #region ILogger Members 15:  16: public void Log(String message) 17: { 18: using (Stream file = File.OpenWrite(this.Filename)) 19: { 20: Byte[] data = Encoding.Default.GetBytes(message); 21: 22: file.Write(data, 0, data.Length); 23: } 24: } 25:  26: #endregion 27: } Usage Just do: 1: ILogger logger = ServiceLocator.Current.GetInstance<ILogger>("File"); And off you go! A simple way do avoid hardcoded values in component registrations. Of course, this same concept can be applied to registry keys, environment values, XML attributes, etc, etc, just change the implementation of the AppSettingsParameterValueElement class. Next stop: custom lifetime managers.

    Read the article

  • Mac OS X Assembly Language Esoteria

    - by veryfoolish
    I've been playing around with assembly and object files in general on Mac OS ? and was wondering if somebody could provide some edification. Specifically, I'm wondering what the extra code GCC generates when compiling the C file in the following example does. I have a toy C program so I can comprehend the assembly output. int main() { int a = 5; int b = 5; int c = a + b; } Running this through gcc -S creates the following assembly: .text .globl _main _main: LFB2: pushq %rbp LCFI0: movq %rsp, %rbp LCFI1: movl $5, -4(%rbp) movl $5, -8(%rbp) movl -8(%rbp), %eax addl -4(%rbp), %eax movl %eax, -12(%rbp) leave ret LFE2: .section __TEXT,__eh_frame,coalesced,no_toc+strip_static_syms+live_support EH_frame1: .set L$set$0,LECIE1-LSCIE1 .long L$set$0 LSCIE1: .long 0x0 .byte 0x1 .ascii "zR\0" .byte 0x1 .byte 0x78 .byte 0x10 .byte 0x1 .byte 0x10 .byte 0xc .byte 0x7 .byte 0x8 .byte 0x90 .byte 0x1 .align 3 LECIE1: .globl _main.eh _main.eh: LSFDE1: .set L$set$1,LEFDE1-LASFDE1 .long L$set$1 LASFDE1: .long LASFDE1-EH_frame1 .quad LFB2-. .set L$set$2,LFE2-LFB2 .quad L$set$2 .byte 0x0 .byte 0x4 .set L$set$3,LCFI0-LFB2 .long L$set$3 .byte 0xe .byte 0x10 .byte 0x86 .byte 0x2 .byte 0x4 .set L$set$4,LCFI1-LCFI0 .long L$set$4 .byte 0xd .byte 0x6 .align 3 LEFDE1: .subsections_via_symbols The LCFI1 section seems to contain the actual logic for the program, but I'm not sure what the misc. other stuff is for... also, is there any scheme these labels are following? I'm sorry this is such a vague question. I'd appreciate anything, including being pointed to a resource where I can find out more about this. Thanks!

    Read the article

  • 'Content' is NOT 'Text' in XAML

    - by psheriff
    One of the key concepts in XAML is that the Content property of a XAML control like a Button or ComboBoxItem does not have to contain just textual data. In fact, Content can be almost any other XAML that you want. To illustrate here is a simple example of how to spruce up your Button controls in Silverlight. Here is some very simple XAML that consists of two Button controls within a StackPanel on a Silverlight User Control. <StackPanel>  <Button Name="btnHome"          HorizontalAlignment="Left"          Content="Home" />  <Button Name="btnLog"          HorizontalAlignment="Left"          Content="Logs" /></StackPanel> The XAML listed above will produce a Silverlight control within a Browser that looks like Figure 1.   Figure 1: Normal button controls are quite boring. With just a little bit of refactoring to move the button attributes into Styles, we can make the buttons look a little better. I am a big believer in Styles, so I typically create a Resources section within my user control where I can factor out the common attribute settings for a particular set of controls. Here is a Resources section that I added to my Silverlight user control. <UserControl.Resources>  <Style TargetType="Button"         x:Key="NormalButton">    <Setter Property="HorizontalAlignment"            Value="Left" />    <Setter Property="MinWidth"            Value="50" />    <Setter Property="Margin"            Value="10" />  </Style></UserControl.Resources> Now back in the XAML within the Grid control I update the Button controls to use the Style attribute and have each button use the Static Resource called NormalButton. <StackPanel>  <Button Name="btnHome"          Style="{StaticResource NormalButton}"          Content="Home" />  <Button Name="btnLog"          Style="{StaticResource NormalButton}"          Content="Logs" /></StackPanel> With the additional attributes set in the Resources section on the Button, the above XAML will now display the two buttons as shown in Figure 2. Figure 2: Use Resources to Make Buttons More Consistent Now let’s re-design these buttons even more. Instead of using words for each button, let’s replace the Content property to use a picture. As they say… a picture is worth a thousand words, so let’s take advantage of that. Modify each of the Button controls and eliminate the Content attribute and instead, insert an <Image> control with the <Button> and the </Button> tags. Add a ToolTip to still display the words you had before in the Content and you will now have better looking buttons, as shown in Figure 3.   Figure 3: Using pictures instead of words can be an effective method of communication The XAML to produce Figure 3 is shown in the following listing: <StackPanel>  <Button Name="btnHome"          ToolTipService.ToolTip="Home"          Style="{StaticResource NormalButton}">    <Image Style="{StaticResource NormalImage}"            Source="Images/Home.jpg" />  </Button>  <Button Name="btnLog"          ToolTipService.ToolTip="Logs"          Style="{StaticResource NormalButton}">    <Image Style="{StaticResource NormalImage}"            Source="Images/Log.jpg" />  </Button></StackPanel> You will also need to add the following XAML to the User Control’s Resources section. <Style TargetType="Image"        x:Key="NormalImage">  <Setter Property="Width"          Value="50" /></Style> Add Multiple Controls to Content Now, since the Content can be whatever we want, you could also modify the Content of each button to be a StackPanel control. 
Then you can have an image and text within the button. <StackPanel>  <Button Name="btnHome"          ToolTipService.ToolTip="Home"          Style="{StaticResource NormalButton}">    <StackPanel>      <Image Style="{StaticResource NormalImage}"              Source="Images/Home.jpg" />      <TextBlock Text="Home"                  Style="{StaticResource NormalTextBlock}" />    </StackPanel>  </Button>  <Button Name="btnLog"          ToolTipService.ToolTip="Logs"          Style="{StaticResource NormalButton}">    <StackPanel>      <Image Style="{StaticResource NormalImage}"              Source="Images/Log.jpg" />      <TextBlock Text="Logs"                  Style="{StaticResource NormalTextBlock}" />    </StackPanel>  </Button></StackPanel> You will need to add one more resource for the TextBlock control too. <Style TargetType="TextBlock"        x:Key="NormalTextBlock">  <Setter Property="HorizontalAlignment"          Value="Center" /></Style> All of the above will now produce the following:   Figure 4: Add multiple controls to the content to make your buttons even more interesting. Summary While this is a simple example, you can see how XAML Content has great flexibility. You could add a MediaElement control as the content of a Button and play a video within the Button. Not that you would necessarily do this, but it does work. What is nice about adding different content within the Button control is you still get the Click event and other attributes of a button, but it does necessarily look like a normal button. Good Luck with your Coding,Paul Sheriff ** SPECIAL OFFER FOR MY BLOG READERS **Visit http://www.pdsa.com/Event/Blog for a free video on Silverlight entitled "Silverlight XAML for the Complete Novice - Part 1."

    Read the article

  • How to Skip the Start Screen and Boot to the Desktop in Windows 8.1

    - by Mark Wilson
    For almost everyone who made the upgrade, Windows 8 proved to be something of a disappointment for one reason or another. Windows 8.1 (or Windows Blue) was released to address many of the issues users had complained about, including reintroducing the ability to boot straight to the desktop. Being able to boot to the desktop rather than the Start screen is something that people have been clamoring for ever since the first preview versions of Windows 8 were unveiled. Various third-party tools and workarounds have been released to get around the problem, but now it is an option that is built directly into the operating system. You’ll need to have downloaded and installed the update in order to proceed, but once you have done this, things are very simple. When you have Windows up and running after the upgrade, right-click an empty section of the taskbar and select Properties to bring up the newly named “Taskbar and Navigation properties” dialog. Move to the Navigation tab and look in the “Start screen” section in the lower half of the dialog. Check the box labelled “Go to the desktop instead of Start when I sign in” and click OK.

    Read the article

  • HTML5-MVC application using VS2010 SP1

    - by nmarun
    This is my first attempt at creating HTML5 pages. VS 2010 allows working with HTML5 now (you just need to make a small change after installing SP1). So my Razor view is now a HTML5 page. I call this application - 5Commerce – (an over-simplified) HTML5 ECommerce site. So here’s the flow of the application: home page renders user enters first and last name, chooses a product and the quantity can enter additional instructions for the order place the order user is then taken to another page showing the order details Off to the details. This is what my page looks in Google Chrome 10 beta (or later) soon after it renders. Here are some of the things to observe on this. Look a little closer and you’ll see a border around the first name textbox – this is ‘autofocus’ in action. I’ve set the autofocus attribute on this textbox. So as soon as the page loads, this control gets focus. 1: <input type="text" autofocus id="firstName" class="inputWidth" data_minlength="" 2: data_maxlength="" placeholder="first name" /> See a partially grayed out ‘last name’ text in the second textbox. This is set using a placeholder attribute (see above). It gets wiped out on-focus and improves the UI visuals in general. The quantity textbox is actually a numerical-only textbox. 1: <input type="number" id="quantity" data_mincount="" class="inputWidth" /> The last line is for additional instructions. This looks like a label but it’s content is editable. Just adding the ‘contenteditable’ attribute to the span allow the user to edit the text inside. 1: <span contenteditable id="additionalInstructions" data_texttype="" class="editableContent">select text and edit </span> All of the above is just plain HTML (no lurking javascript acting in here). Makes it real clean and simple. Going more into the HTML, I see that the _Layout.cshtml already is using some HTML5 content. I created my project before installing SP1, so that was the reason for my surprise. 1: <!DOCTYPE html> This is the doctype declaration in HTML5 and this is supported even by IE6 (just take my word on IE6 now, don’t go install it to test it, especially when MS is doing an IE6 countdown). That’s just amazing and extremely easy to read remember and talk about a few less bytes on every call! I modified the rest of my _Layout.cshtml to the below: 1: <!DOCTYPE html> 2: <html> 3: <head> 4: <title>5Commerce - HTML 5 Ecommerce site</title> 5: <link href="@Url.Content("~/Content/Site.css")" rel="stylesheet" type="text/css" /> 6: <script src="@Url.Content("~/Scripts/jquery-1.4.4.min.js")" type="text/javascript"></script> 7: <script src="@Url.Content("~/Scripts/CustomScripts.js")" type="text/javascript"></script> 8: <script type="text/javascript"> 9: $(document).ready(function () { 10: WireupEvents(); 11: }); 12:</script> 13:  14: </head> 15:  16: <body role="document" class="bodybackground"> 17: <header role="heading"> 18: <h2>5Commerce - HTML 5 Ecommerce site!</h2> 19: </header> 20: <section id="mainForm"> 21: @RenderBody() 22: </section> 23: <footer id="page_footer" role="siteBaseInfo"> 24: <p>&copy; 2011 5Commerce Inc!</p> 25: </footer> 26: </body> 27: </html> I’m sure you’re seeing some of the new tags here. To give a brief intro about them: <header>, <footer>: Marks the header/footer region of a page or section. <section>: A logical grouping of content role attribute: Identifies the responsibility of an element. This attribute can be used by screen readers and can also be filtered through jQuery. SP1 also allows for some intellisense in HTML5. 
You see the other types of input fields – email, date, datetime, month, url and there are others as well. So once my page loads, i.e., ‘on document ready’, I’m wiring up the events following the principles of unobtrusive javascript. In the snippet below, I’m controlling the behavior of the input controls for specific events. 1: $("#productList").bind('change blur', function () { 2: IsSelectedProductValid(); 3: }); 4:  5: $("#quantity").bind('blur', function () { 6: IsQuantityValid(); 7: }); 8:  9: $("#placeOrderButton").click( 10: function () { 11: if (IsPageValid()) { 12: LoadProducts(); 13: } 14: }); This enables some client-side validation to occur before the data is sent to the server. These validation constraints are obtained through a JSON call to the WCF service and are set to the ‘data_’ attributes of the input controls. Have a look at the ‘GetValidators()’ function below: 1: function GetValidators() { 2: // the post to your webservice or page 3: $.ajax({ 4: type: "GET", //GET or POST or PUT or DELETE verb 5: url: "http://localhost:14805/OrderService.svc/GetValidators", // Location of the service 6: data: "{}", //Data sent to server 7: contentType: "application/json; charset=utf-8", // content type sent to server 8: dataType: "json", //Expected data format from server 9: processdata: true, //True or False 10: success: function (result) {//On Successfull service call 11: if (result.length > 0) { 12: for (i = 0; i < result.length; i++) { 13: if (result[i].PropertyName == "FirstName") { 14: if (result[i].MinLength > 0) { 15: $("#firstName").attr("data_minLength", result[i].MinLength); 16: } 17: if (result[i].MaxLength > 0) { 18: $("#firstName").attr("data_maxLength", result[i].MaxLength); 19: } 20: } 21: else if (result[i].PropertyName == "LastName") { 22: if (result[i].MinLength > 0) { 23: $("#lastName").attr("data_minLength", result[i].MinLength); 24: } 25: if (result[i].MaxLength > 0) { 26: $("#lastName").attr("data_maxLength", result[i].MaxLength); 27: } 28: } 29: else if (result[i].PropertyName == "Quantity") { 30: if (result[i].MinCount > 0) { 31: $("#quantity").attr("data_minCount", result[i].MinCount); 32: } 33: } 34: else if (result[i].PropertyName == "AdditionalInstructions") { 35: if (result[i].TextType.length > 0) { 36: $("#additionalInstructions").attr("data_textType", result[i].TextType); 37: } 38: } 39: } 40: } 41: }, 42: error: function (result) {// When Service call fails 43: alert('Service call failed: ' + result.status + ' ' + result.statusText); 44: } 45: }); 46:  47: //.... 48: } Just before the GetValidators() function runs and sets the validation constraints, this is what the html looks like (seen through the Dev tools of Chrome): After the function executes, you see the values in the ‘data_’  attributes. As and when we enter valid data into these fields, the error messages disappear, since the validation is bound to the blur event of the control. There you see… no error messages (well, the catch here is that once you enter THAT name, all errors disappear automatically). Clicking on ‘Place Order!’ runs the SaveOrder function. You can see the JSON for the order object that is getting constructed and passed to the WCF Service. 
1: function SaveOrder() { 2: var addlInstructionsDefaultText = "select text and edit"; 3: var addlInstructions = $("span:first").text(); 4: if(addlInstructions == addlInstructionsDefaultText) 5: { 6: addlInstructions = ''; 7: } 8: var orderJson = { 9: AdditionalInstructions: addlInstructions, 10: Customer: { 11: FirstName: $("#firstName").val(), 12: LastName: $("#lastName").val() 13: }, 14: OrderedProduct: { 15: Id: $("#productList").val(), 16: Quantity: $("#quantity").val() 17: } 18: }; 19:  20: // the post to your webservice or page 21: $.ajax({ 22: type: "POST", //GET or POST or PUT or DELETE verb 23: url: "http://localhost:14805/OrderService.svc/SaveOrder", // Location of the service 24: data: JSON.stringify(orderJson), //Data sent to server 25: contentType: "application/json; charset=utf-8", // content type sent to server 26: dataType: "json", //Expected data format from server 27: processdata: false, //True or False 28: success: function (result) {//On Successfull service call 29: window.location.href = "http://localhost:14805/home/ShowOrderDetail/" + result; 30: }, 31: error: function (request, error) {// When Service call fails 32: alert('Service call failed: ' + request.status + ' ' + request.statusText); 33: } 34: }); 35: } The service saves this order into an XML file and returns the order id (a guid). On success, I redirect to the ShowOrderDetail action method passing the guid. This page will show all the details of the order. Although the back-end weightlifting is done by WCF, I did not show any of that plumbing-work as I wanted to concentrate more on the HTML5 and its associates. However, you can see it all in the source here. I do have one issue with HTML5 and this is an existing issue with HTML4 as well. If you see the snippet above where I’ve declared a textbox for first name, you’ll see the autofocus attribute just dangling by itself. It doesn’t follow the xml syntax of ‘key="value"’ allowing users to continue writing badly-formatted html even in the new version. You’ll see the same issue with the ‘contenteditable’ attribute as well. The work-around is that you can do ‘autofocus=”true”’ and it’ll work fine plus make it well-formatted. But unless the standards enforce this, there will be people (me included) who’ll get by, by just typing the bare minimum! Hoping this will get fixed in the coming version-updates. Source code here. Verdict: I think it’s time for us to embrace the new HTML5. Thank you HTML4 and Welcome HTML5.

    Read the article

  • Financial Statement Presentation Changes

    - by Theresa Hickman
    On March 10, 2010, the FASB and IASB came to an agreement on financial statement presentation. They had been discussing changes for some time, such as displaying more line items and moving certain line items from equity to the P&L, and now it seems they have finally come to a joint decision and put it in writing. I recently learned that there will be a trend to book nothing to equity and to convey everything in the P&L, to take away any facility for companies to hide losses from their shareholders, investors, etc. (I'm exaggerating when I say book nothing to equity. Obviously, those items that already live there, such as stocks and dividends, would stay there.) But accounts like your CTA (Cumulative Translation Adjustment) account, used to plug the gain or loss from equity translation, would move from the equity section to your expense section. The rationale is that when you run translation, you're doing so for a subsidiary that you own, or simply put, it's a foreign investment. Thus, the gains/losses of that foreign investment should be itemized on your P&L and not buried in equity. The FASB will include changes in financial statement presentation in its Exposure Draft that is planned for issuance at the end of April 2010. Companies will be required to adopt the financial statement presentation provisions retroactively. Yes, that means companies will need to apply these changes to previously issued financial statements. The FASB Summary of Board Decisions can be found at fasb.org.

    Read the article

  • Firefox 12's hardware acceleration on Ubuntu 12.04 LTS

    - by user64943
    After installing Ubuntu 12.04 LTS (32-bit), Firefox 12 works fine but without hardware acceleration. Needless to say, I have the latest nVidia proprietary drivers installed, and in Firefox's Preferences, on the "Advanced" tab, "Browsing" section, the option "Use hardware acceleration when available" is checked. I tried the following things before asking this question:

    - Creating a boolean key "webgl.force-enabled" and setting it to true on Firefox's about:config page;
    - Starting a new profile, as suggested in the thread "Mozilla Firefox 12 is very slow on Ubuntu 12.04 LTS";
    - Updating my nVidia driver to version 295.53.

    None of this has worked. As you can see below in my Firefox about:support report, the "Graphics" section shows no "GPU Accelerated Windows":

    Adapter Description: NVIDIA Corporation -- GeForce GTX 460/PCIe/SSE2
    Vendor ID: NVIDIA Corporation
    Device ID: GeForce GTX 460/PCIe/SSE2
    Driver Version: 4.2.0 NVIDIA 295.53
    WebGL Renderer: NVIDIA Corporation -- GeForce GTX 460/PCIe/SSE2 -- 4.2.0 NVIDIA 295.53
    GPU Accelerated Windows: 0
    AzureBackend: skia

    I use the following site to test hardware acceleration: http://ie.microsoft.com/testdrive/Performance/FishBowl/

    On Windows 7 I get 60 fps even with 1,750 fish in the browser's full-screen mode (1680x1050, 32-bit color). On Ubuntu 12.04 LTS, with the same nVidia drivers (as shown in the report), it won't go faster than 15 fps with only 1,000 fish. Can anybody help me? Best Regards,
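    For reference, the about:config change described above can be captured in a user.js file in the Firefox profile directory. This is only a sketch: the first pref is the key created manually above, while the second (layers.acceleration.force-enabled) is an assumption about forcing GPU layer acceleration and is not confirmed to fix the problem.

    // user.js sketch – assumed prefs, placed in the Firefox profile directory
    user_pref("webgl.force-enabled", true);               // the boolean key created in about:config above
    user_pref("layers.acceleration.force-enabled", true); // assumption: force GPU-accelerated layers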

    Read the article

  • How can I make PDFs appear life-size when displayed at 100%?

    - by ændrük
    When I open a letter-size PDF and then zoom to 100%, the page displayed on my monitor is physically smaller than a real letter-size sheet of paper. How can I make "100%" on the computer screen correspond to "100%" in real life?

    Details

    This message suggests that I should be investigating the system-wide DPI settings for my monitor. xdpyinfo reports:

    dimensions: 1024x768 pixels (271x203 millimeters)
    resolution: 96x96 dots per inch

    My monitor has a native display resolution of 1024x768 pixels and a diagonal display size of 12.07 inches. PX CALC returns the following information:

    DPI: 106.05
    Dot Pitch: 0.2395mm
    Size: 9.66" × 7.24" (24.53cm × 18.39cm)

    What I've tried so far

    Running xrandr --dpi 106.05 successfully caused my PDF to appear actual size at 100%, but the effect was lost after rebooting. To make the setting persistent, I tried creating the following /etc/X11/xorg.conf:

    Section "Monitor"
        Identifier "ThinkPad X60 LCD"
        DisplaySize 245 183
    EndSection

    Section "Screen"
        Identifier "Screen0"
        Monitor "ThinkPad X60 LCD"
    EndSection

    After logging back in, /var/log/Xorg.0.log contained

    [ 1167.824] (**) intel(0): Display dimensions: (245, 183) mm
    [ 1167.824] (**) intel(0): DPI set to (106, 106)

    but xdpyinfo still reported

    dimensions: 1024x768 pixels (271x203 millimeters)
    resolution: 96x96 dots per inch

    and "100%" still appeared too small.

    Link to XRANDR wiki
    Link to making XRANDR changes persistent
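    To make those numbers easier to follow, here is a quick sketch of the arithmetic behind the 106.05 DPI figure and the DisplaySize values, written as JavaScript purely to spell out the math; all figures are the ones quoted above:

    // derive the physical DPI and millimeter dimensions from the quoted figures
    var widthPx = 1024, heightPx = 768;
    var diagonalInches = 12.07;
    var diagonalPx = Math.sqrt(widthPx * widthPx + heightPx * heightPx); // 1280 px for a 4:3 panel
    var dpi = diagonalPx / diagonalInches;                               // ~106.05 dpi
    var widthMm = (widthPx / dpi) * 25.4;                                // ~245 mm
    var heightMm = (heightPx / dpi) * 25.4;                              // ~184 mm (rounded to 183 in the xorg.conf above)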

    Read the article

  • How to protect UI components using OPSS Resource Permissions

    - by frank.nimphius
    ADF Security protects ADF-bound pages, bounded task flows and ADF Business Components entities with framework-specific JAAS permission classes (RegionPermission, TaskFlowPermission and EntityPermission). Used in combination with the ADF Security expression language and security checks performed in Java, this protection already provides you with fine-grained access control that can also be used to secure UI components like buttons and input text fields. For example, the EL shown below disables the user profile panel tabs for unauthenticated users:

    <af:panelTabbed id="pt1" position="above">
      ...
      <af:showDetailItem text="User Profile" id="sdi2"
                         disabled="#{!securityContext.authenticated}">
      </af:showDetailItem>
      ...
    </af:panelTabbed>

    The next example disables a panel tab item if the authenticated user is not granted access to the bounded task flow exposed in a region on this tab:

    <af:panelTabbed id="pt1" position="above">
      ...
      <af:showDetailItem text="Employees Overview" id="sdi4"
                         disabled="#{!securityContext.taskflowViewable['/WEB-INF/EmployeeUpdateFlow.xml#EmployeeUpdateFlow']}">
      </af:showDetailItem>
      ...
    </af:panelTabbed>

    Security expressions like the ones shown above allow developers to check the user's permissions, authentication status and role membership before showing UI components. Similarly, using Java, developers can use code like the following to verify the user's authentication status:

    ADFContext adfContext = ADFContext.getCurrent();
    SecurityContext securityCtx = adfContext.getSecurityContext();
    boolean userAuthenticated = securityCtx.isAuthenticated();

    Note that the Java code uses the same security context reference that is used with expression language. But is this all there is? No! The goal of ADF Security is to enable all ADF developers to build secure web applications with JAAS (Java Authentication and Authorization Service). For this, more fine-grained protection can be defined using the ResourcePermission, a generic JAAS permission class owned by the Oracle Platform Security Services (OPSS).
Using the ResourcePermission class, developers can grant permission to functional parts of an application that are not protected by page or task flow security. For example, an application menu allows creating and canceling product shipments to customers. However, only a specific user group – or application role, which is the better way to use ADF Security – is allowed to cancel a shipment. To enforce this rule, a permission is needed that can be used declaratively on the UI to hide a menu entry, and programmatically in Java to check the user's permission before the action is performed. Note that multiple lines of defense are what you should implement in your application development. Don't just rely on UI protection through hidden or disabled command options.

To create a menu protection permission for an ADF Security-enabled application, you choose Application | Secure | Resource Grants from the Oracle JDeveloper menu. The editor that opens shows a visual representation of the jazn-data.xml file, which is used at design time to define security policies and user identities for testing. An option in the Resource Grants section is to create a new Resource Type. A list of pre-defined types exists for you to create policy definitions for; many of these pre-defined types use the ResourcePermission class. To create a custom Resource Type, for example to protect application menu functions, you click the green plus icon next to the Resource Type select list. The Create Resource Type editor that opens allows you to add a name for the resource type, a display name that is shown when granting resource permissions, and a description. The ResourcePermission class name is already set. In the menu protection sample, you add the following information:

Name: MenuProtection
Display Name: Menu Protection
Description: Permission to grant menu item permissions

OK the dialog to finish creating the resource type. To create a resource policy that can be used to check user permissions at runtime, click the green plus icon in the Resources section of the Resource Grants editor. In the Create Resource dialog, provide a name for the menu option you want to protect. To protect the cancel shipment menu option, create a resource with the following settings:

Resource Type: Menu Protection
Name: Cancel Shipment
Display Name: Cancel Shipment
Description: Grant allows user to cancel customer goods shipment

A new resource, Cancel Shipment, is added to the Resources panel. Initially the resource is not granted to any user, enterprise role or application role. To grant the resource, click the green plus icon in the Granted To section, select the Add Application Role option and choose one or more application roles in the opened dialog. Finally, you select the process action to define the policy. Note that a permission can have multiple actions that you can grant individually to users and roles. The cancel shipment permission, for example, could have another action, "view", defined to determine which users should see that this option exists and which users don't.

To use the cancel shipment permission, select the disabled property on a command item, like af:commandMenuItem, and click the arrow icon on the right. From the context menu, choose the Expression Builder entry. Expand the ADF Bindings | securityContext node and click the userGrantedResource option. Hint: you can expand the Description panel below the EL selection panel to see an example of what the grant should look like.
The EL that is created needs to be manually edited to read:

#{!securityContext.userGrantedResource['resourceName=Cancel Shipment;resourceType=MenuProtection;action=process']}

OK the dialog so the permission-checking EL is added as the value of the disabled property. Running the application and expanding the Shipment menu shows the Cancel Shipments menu item disabled for all users that don't have the custom menu protection resource permission granted.

Note: following the steps listed above, you create a JAAS permission and declaratively configure it for function security in an ADF application. Do you need to understand JAAS for this? No! This is one of the benefits that you gain from using the ADF development framework.

To implement multiple lines of defense for your application, the action performed when clicking the enabled "Cancel Shipments" option should also check whether the authenticated user is allowed to process it. For this, code like the following can be used in a managed bean:

public void onCancelShipment(ActionEvent actionEvent) {
    SecurityContext securityCtx =
        ADFContext.getCurrent().getSecurityContext();
    // create an instance of ResourcePermission(String type, String name, String action)
    ResourcePermission resourcePermission =
        new ResourcePermission("MenuProtection", "Cancel Shipment", "process");
    boolean userHasPermission =
        securityCtx.hasPermission(resourcePermission);
    if (userHasPermission) {
        // execute privileged logic here
    }
}

Note: To learn more about ADF Security, visit http://download.oracle.com/docs/cd/E17904_01/web.1111/b31974/adding_security.htm#BGBGJEAH

Note: A monthly summary of OTN Harvest blog postings can be downloaded from ADF Code Corner. The monthly summary is a PDF document that contains supporting screenshots for some of the postings: http://www.oracle.com/technetwork/developer-tools/adf/learnmore/index-101235.html

    Read the article
