Search Results

Search found 3022 results on 121 pages for 'loose typing'.


  • Too nervous to install

    - by The Prop
    Yesterday I (a professional rugby prop of somewhat limited intellect) landed in http://htmlagilitypack.codeplex.com/ and found myself stranded in a town with no signposts. The locals don't need signposts - they know their way around - so who gives a hoot about visitors? Well I'm a visitor and I'm lost. Here's my plea to the good burgesses of Codeplex-sans-signs: HELP!! Let me back-track and explain what landed me at the bottom of this tangled ruck. There's a "Download" button positioned near the top-right of the Codeplex web page, right? Like the Sword of Damocles, a down-arrow to the left of the button indicates, presumably, what a download would include: CURRENT 1.4.0 Stable DATE Fri May 7 2010 at 7:00 AM STATUS Stable With a simple-minded confidence that has since deserted me (the confidence - not the simple-mindedness), I clicked "Download". This introduced 3 new files to my computer: HtmlAgilityPack.dll, HtmlAgilityPack.pdb, and HtmlAgilityPack.XML This is when the first stab of doubt penetrated that globe between my cauliflower ears that I call a head. Where's the dot cs? Somewhere in Codeplex, I'd read advice to another lost soul to "download and build the HTMLAgilityPack solution". As I've done so many times as an All Black prop, I glared at the opposition front row - ah, I mean the 3 new files. Shouldn't one of them have a ".cs" on the back of his jersey - er, on the end of its name? Or is this just how they play the game in Codeplex-sans-signs? Undaunted (props have more courage than sense) I packed into my first C# scrum. The half-back feeds the ball in, and the front rows collapse - er, the debugging stops at this line of my code: "HtmlAgilityPack.HtmlDocument doc = new HtmlAgilityPack.HtmlDocument();" Then the Referee blows his whistle and announces one of those verdicts that's utterly indecipherable to your average loose-head prop: Locating source for 'C:\Source\htmlagilitypack\Trunk\HtmlAgilityPack\HtmlDocument.cs'. Checksum: MD5 {62 bc f3 7e 9a 92 a6 32 7 d6 5b f8 76 59 7b 5b} The file 'C:\Source\htmlagilitypack\Trunk\HtmlAgilityPack\HtmlDocument.cs' does not exist. Looking in script documents for 'C:\Source\htmlagilitypack\Trunk\HtmlAgilityPack\HtmlDocument.cs'... Looking in the projects for 'C:\Source\htmlagilitypack\Trunk\HtmlAgilityPack\HtmlDocument.cs'. The file was not found in a project. Looking in directory 'C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\vc7\atlmfc'... Looking in directory 'C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\vc7\crt'... The debugger will ask the user to find the file: C:\Source\htmlagilitypack\Trunk\HtmlAgilityPack\HtmlDocument.cs. The user pressed Cancel [a brain-stemmer from the prop] in the Find Source dialog. The debug source files settings for the active solution have been modified so that the debugger will not ask the user to find the file: C:\Source\htmlagilitypack\Trunk\HtmlAgilityPack\HtmlDocument.cs. The debugger could not locate the source file 'C:\Source\htmlagilitypack\Trunk\HtmlAgilityPack\HtmlDocument.cs'. Even if it had been the first 50 stanzas of "Eskimo Nell", I couldn't have been more shocked. I'm so shocked, my jaws clamp shut around the opposition hooker's ear. He thumbs me in the iris. With a cornea-torn eye I peer at the Codeplex site. My brain stem sparks and I punch the "View all downloads" link. It sparks four more times on each download link, and.. lo! 
FOUR files this time: HAPExplorer.zip, HtmlAgilityPack.1.4.0.Source.zip, HtmlAgilityPack.1.4.0.zip, HtmlAgilityPack.Documentation.chm But... is this not the same place arrived at recently by my flat-mate Chaz, journalist extraordinaire? (Chaz, if you're reading this, I'm not plugging for nothing - just write kindly about me in your next report, okay?) Didn't these same four files flummox Chaz The Great? He told me about it. Chaz left a message with Codeplex and then solved the problem by just walking away. Typical journalist, huh. But I'm not like that. I don't walk away. I'm made of the sort of stubborn stuff that becomes an All Black prop. Hence this impassioned plea: GOOD TOWNSFOLK OF CODEPLEX-SANS-SIGNS, WHAT SHOULD I DO NEXT? Can somebody point me to Main Street? How does a simpleton install 'C:\Source\htmlagilitypack\Trunk\HtmlAgilityPack\HtmlDocument.cs'? I'm willing to prostrate myself and grovel to the first kind face that passes in front of my rapidly clouding sight. So help me, I'd even tug my forelock if I had one! Should I hold forth my rod over the wilderness, and create a folder called 'C:\Source\htmlagilitypack\Trunk\HtmlAgilityPack\' or some such? If so, what files should I move into it? ANYTHING else a dum-ass should know about? - and I mean ANYTHING - you just don't know how witless a punch-drunk prop can be.. %( Whenever I've installed other programs they've given me an ".exe" or ".msi" that I can click on and it's all done for me like magic. HEY... there's nothing of that nature here, is there? Am I missing something? Something for dummies to click? (From the waiting rooms of Dr I. Sight Phixes) (signed) The Prop
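    For readers stranded in the same place: the binary download already contains everything needed to use the library. Add a reference to HtmlAgilityPack.dll in the Visual Studio project and the code will compile and run; the debugger message about HtmlDocument.cs only means the shipped .pdb points at the maintainer's local source tree and can be safely ignored (or build HtmlAgilityPack.1.4.0.Source.zip from "View all downloads" if you want the .cs files). Below is a minimal sketch, assuming the DLL has been added as a project reference; the URL is only a placeholder.

        // Minimal sketch, assuming HtmlAgilityPack.dll (from the binary download) has been
        // added as a project reference. The URL below is only a placeholder.
        using System;
        using HtmlAgilityPack;

        class Program
        {
            static void Main()
            {
                var web = new HtmlWeb();                        // fetches and parses a page
                HtmlDocument doc = web.Load("http://example.com/");

                // List the href attribute of every anchor on the page.
                var links = doc.DocumentNode.SelectNodes("//a[@href]");
                if (links != null)
                {
                    foreach (HtmlNode link in links)
                        Console.WriteLine(link.GetAttributeValue("href", string.Empty));
                }
            }
        }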

    Read the article

  • Is your team a high-performing team?

    As a child, I remember looking out of the car window as my father drove along the Interstate in Florida and seeing prisoners in bright orange jumpsuits, with prison guards keeping a watchful eye on them. The prisoners were taking part in a prison road gang. These road gangs were formed to help the state maintain its highway infrastructure; the prisoners' primary responsibility is to pick up trash and debris from the roadway. This is a prime example of a work group (or working group) as used by most prison systems in the United States. Work groups can be defined as a collection of individuals or entities working together to achieve a specific goal or accomplish a specific set of tasks. Typically these groups are established only for a short period of time and are dissolved once the desired outcome has been achieved. More often than not, group members feel as though they are expendable to the group, and some even dread being in the group at all.

    "A team is a small number of people with complementary skills who are committed to a common purpose, performance goals, and approach for which they are mutually accountable." (Katzenbach and Smith, 1993) So how do you determine whether a team is a high-performing team? Three baseline criteria apply: consistently high-quality output, the promotion of personal growth and well-being of all team members, and, most importantly, the ability to learn and grow as a unit. Initially, a team can produce high-quality output without meeting all three criteria; however, this will erode over time, because team members who feel detached from the group, or who feel they are not growing, will let the quality of the output decline.

    High-performing teams are similar to work groups because both rely on a collection of individuals or entities to accomplish tasks. What distinguishes a high-performing team from a work group is its characteristics. High-performing teams share five core characteristics, and these characteristics are what separate a group from a team: Purpose, Performance Measures, People with Tasks and Relationship Skills, Process, and Preparation and Practice.

    A high-performing team is much more than a work group, and it typically has a life cycle that can vary from team to team. The standard team life cycle consists of five states and is comparable to a human life cycle: Formulating, Storming, Normalizing, Performing, and Adjourning. The Formulating State is first realized when the team members are defined and roles are assigned to all members. This initial stage is very important because it can set the tone for the team and can ultimately determine its success or failure. In addition, this stage requires the team to have a strong leader, because team members are normally unclear about the specific roles, obstacles, and goals that may lie ahead of them. Finally, this stage is where most team members initially meet one another prior to working as a team, unless they already know each other.

    The Storming State normally arrives directly after the formation of a new team, because there are still a lot of unknowns amongst the newly formed assembly. As a general rule, most of the parties involved are still getting used to the workload, the pace of work, the deadlines, and the validity of the various tasks that need to be performed by the group. In this state everything is questioned because there are so many unknowns. Items commonly questioned include the credentials of others on the team, the actual validity of a project, and the leadership abilities of the team leader. This can be seen in the interactions between animals when they first meet. Consider a scenario in which two people walk directly toward each other with their dogs. The dogs automatically enter the Storming State because they do not know each other. Typically they attempt to establish which is more dominant, through play or fighting depending on how they interact. Once dominance has been defined and accepted by both dogs, they will either want to play or leave, depending on how the interaction went and on other environmental variables.

    Once the Storming State has run its course, the Normalizing State takes over. A team enters this state once all the questions of the Storming State have been answered and the team has been tested by a few tasks or projects. Typically, participants are filled with energy and camaraderie and have a strong alignment with team goals and objectives. A high school football team at the start of its season is a perfect example of the Normalizing State: the player positions have been assigned, the depth chart has been filled, and everyone is focused on winning each game. All of the players encourage and expect each other to perform to the best of their abilities and are united by competition from other teams.

    The Performing State is achieved when a team's history, working habits, and culture solidify it as one working unit. In this state, team members can anticipate one another's behaviors, attitudes, and reactions, and challenges are seen as opportunities rather than problems. Additionally, each team member knows their own role in the team's success, as well as the roles of others. This is the most productive state of a group, and it is where all the time invested in working together really pays off. If you watch an Olympic figure skating pair skate, you can easily see how the time spent working together benefits their performance. They skate as one unit even though the pair is made up of two skaters. Each skater has their own routine completely memorized, as well as their partner's, which allows them to anticipate each other's moves on the ice and makes their skating look effortless.

    The final state of a team is the Adjourning State. This is where the accomplishments of the team and of each individual member are recognized. It also allows for reflection on the interactions between team members, the work accomplished, and the challenges that were faced. Finally, the team celebrates the challenges it has faced and overcome as a unit.

    Currently, workplace teams are divided into two types: co-located and distributed teams. Co-located teams are defined as the traditional group of people working together in an office, according to Andy Singleton of Assembla. This traditional type of team dominated business in the past due to inadequate technology, which forced workers to interact primarily via face-to-face meetings. Team meetings are primarily led by the person with the highest status in the company. Having personally participated in meetings of this type, I have found that a select few team members usually dominate the flow of communication, which reduces the input of others in group discussions. Because a select few individuals dominate, discussions are skewed in favor of those who communicate the most in meetings. In addition, team members might not give their full opinions on a topic, partly so as not to offend or create controversy amongst the team, which can shift the decisions made in meetings toward the opinions of the dominating team members.

    Distributed teams are, by definition, spread across an area or subdivided into separate sections, and that is exactly how they differ from a more traditional team. It is commonplace for distributed teams to have team members across town, in the next state, across the country and, with the advances in technology over the last 20 years, even across the world. These teams allow for more diversity than other types of teams because they allow for more flexibility regarding location. A team could consist of a 30-year-old Italian project manager from New York, a 50-year-old Hispanic woman from California, and a collection of programmers from India, because technology allows them to communicate as if they were standing next to one another. Distributed team members tend to consult with more team members before making decisions than traditional teams do, and they take longer to reach decisions due to differences in time zones and cultural events. However, team members feel more empowered to speak out when they do not agree with the team and to notify others of potential issues with the work the team is doing.

    Virtual teams, a subset of distributed teams, are changing organizational strategies because a team can now, in essence, be working 24 hours a day by utilizing employees in various time zones and locations. A primary example is customer service departments: a company can have multiple call centers spread across multiple time zones, allowing it to appear open 24 hours a day while all employees work from 9 AM to 5 PM. Virtual teams also allow human resources departments to go after the best talent for the company regardless of where the potential employee lives, because they will be part of a virtual team; all that is needed is the proper technology to allow everyone to communicate. In addition to allowing employees to work from home, the company can save space and resources by not having to provide a desk for every team member. In fact, team members who only occasionally come into the office can share one desk amongst multiple people. This is definitely a cost-cutting plus given the current state of the economy.

    One thing that can turn a team into a high-performing team is leadership. High-performing team leaders need to invest in ongoing personal development; provide team members with the direction, structure, and resources needed to accomplish their work; make the right interventions at the right time; and help the team manage the boundaries between the team and the various external parties involved in the team's work. A team leader needs to invest in ongoing personal development in order to manage their team effectively. People say that attitude is everything; this is very true of leaders and leadership. A team takes on the attitudes and behaviors of its leaders, which can potentially harm the team and the team's output. Leaders must concentrate on self-awareness and on understanding their team's group dynamics in order to fully understand how to lead them. In addition, continually learning new leadership techniques from other effective leaders is very beneficial.

    Providing team members with the direction, structure, and resources they need to accomplish their work sounds easy, but it is not. Leaders need to communicate effectively with their team about how its work helps the company reach its organizational vision. Conversely, the leader needs to allow the team to work autonomously within specific guidelines to turn the company's vision into a reality. That said, the team must be appropriately staffed according to the size and complexity of its tasks. These tasks should be clear and meaningful to the company's objectives, and they should allow feedback to be exchanged between the leader and each team member, and between the leader and upper management. If the team is properly staffed and has a clear and full understanding of what is to be done, the company must also supply the workers with the proper tools to accomplish the tasks they are asked to do. No one should be asked to dig a hole without being given a shovel. Finally, leaders must reward their team members for the accomplishments they achieve. Rewards could range from a simple congratulatory email or a party to celebrate the completion of a large project, to monetary rewards.

    Managing boundaries is very important for team leaders because boundary issues can alter team members' attitudes and add undue stress, which will force them to lose focus on the tasks at hand. Team leaders should promote communication between team members so that burdens are shared and solutions can be derived from the opinions of multiple sources; this also reinforces team camaraderie and working as a unit. Team leaders must manage the type and timing of interventions so as not to create an even bigger mess within the team. Poorly timed interventions can really deflate team members and make them question themselves, which can in turn provoke further, undue interventions by the team leader. Typically, the best time for interventions is when the team is just starting to form, so that unproductive behaviors are removed from the team and it can retain focus on its agenda. If an intervention is executed effectively, the team will feel energized about the work it is doing, communication and interaction amongst the group will improve, and morale will improve overall.

    High-performing teams are very important to organizations because they consistently produce high-quality output and develop a collective purpose for their work. This drive to succeed allows team members to apply their specific talents and to grow in those areas. In addition, these team members usually take on a sense of ownership of their projects and feel that the other team members are irreplaceable.

    References:
    Singleton, A. "Three ways to organize your team: co-located, outsourced, or global." http://blog.assembla.com/assemblablog/tabid/12618/bid/3127/Three-ways-to-organize-your-team-co-located-outsourced-or-global.aspx
    Katzenbach, J.R. & Smith, D.K. (1993). The Wisdom of Teams: Creating the High-Performance Organization. Boston: Harvard Business School Press.

    Read the article

  • org.apache.struts2.dispatcher.Dispatcher: Could not find action or result Error

    - by peterwkc
    i tried to code the following simple struts but encounter this error during run time. [org.apache.struts2.dispatcher.Dispatcher] Could not find action or result: No result defined for action com.peter.action.LoginAction and result success index.jsp <%@ page language="java" contentType="text/html; charset=ISO-8859-1" pageEncoding="ISO-8859-1"%> <%@ taglib prefix="s" uri="/struts-tags" %> <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"> <title>Struts Tutorial</title> </head> <body> <h2>Hello Struts</h2> <s:form action="login" > <s:textfield name="username" label="Username:" /> <s:password name="password" label="Password:"/> <s:submit /> </s:form> </body> </html> LoginAction.java /** * */ package com.peter.action; //import org.apache.struts2.convention.annotation.Namespace; import org.apache.struts2.convention.annotation.ResultPath; import org.apache.struts2.convention.annotation.Result; import org.apache.struts2.convention.annotation.Action; import com.opensymphony.xwork2.ActionSupport; /** * @author nicholas_tse * */ //@Namespace("/") To define URL namespace @ResultPath("/") // To instruct Struts where to search result page(jsp) public class LoginAction extends ActionSupport { private String username, password; /** * */ private static final long serialVersionUID = -8992836566328180883L; /** * */ public LoginAction() { // TODO Auto-generated constructor stub } public String getUsername() { return username; } public void setUsername(String username) { this.username = username; } public String getPassword() { return password; } public void setPassword(String password) { this.password = password; } @Override @Action(value = "login", results = {@Result(name="success", location="welcome.jsp")}) public String execute() { return SUCCESS; } } /* Remove * struts2-gxp-plugin * struts2-portlet-plugin * struts2-jsf-plugin * struts2-osgi-plugin and its related osgi-plugin * struts-rest-plugin * * Add * velocity-tools-view * * */ web.xml <?xml version="1.0" encoding="UTF-8"?> <web-app id="WebApp_ID" version="3.0" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"> <display-name>Struts</display-name> <!-- org.apache.struts2.dispatcher.ng.filter.StrutsPrepareAndExecuteFilter org.apache.struts2.dispatcher.FilterDispatcher --> <filter> <filter-name>Struts_Filter</filter-name> <filter-class>org.apache.struts2.dispatcher.FilterDispatcher</filter-class> <init-param> <param-name>actionPackages</param-name> <param-value>com.peter.action</param-value> </init-param> </filter> <filter-mapping> <filter-name>Struts_Filter</filter-name> <url-pattern>/*</url-pattern> </filter-mapping> <listener> <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class> </listener> <welcome-file-list> <welcome-file>index.jsp</welcome-file> </welcome-file-list> </web-app> Besides the runtime error, there is deployment error which is ERROR [com.opensymphony.xwork2.util.finder.ClassFinder] (MSC service thread 1-2) Unable to read class [WEB-INF.classes.com.peter.action.LoginAction]: Could not load WEB-INF/classes/com/peter/action/LoginAction.class - [unknown location] at com.opensymphony.xwork2.util.finder.ClassFinder.readClassDef(ClassFinder.java:785) [xwork-core-2.3.1.2.jar:2.3.1.2] AFAIK, the scanning methodology of 
    Struts will scan the default packages named struts2 for any annotated class, but I have instructed Struts 2 to scan com.peter.action using the init-param, and it is still unable to find the class. It is pretty weird. Please help. Thanks.
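    For anyone hitting the same pair of errors, here is a hedged sketch of the configuration change that is usually tried first. It is not a verified fix for this exact project: the filter class below is the newer dispatcher already named in the commented-out block of the poster's web.xml, the actionPackages init-param shown in the question may not be honored by the Convention plugin at all, and the struts.convention.action.packages constant name should be checked against the Convention plugin documentation for the Struts 2 version in use.

        <!-- web.xml: use the newer prepare-and-execute filter instead of FilterDispatcher -->
        <filter>
            <filter-name>struts2</filter-name>
            <filter-class>org.apache.struts2.dispatcher.ng.filter.StrutsPrepareAndExecuteFilter</filter-class>
        </filter>
        <filter-mapping>
            <filter-name>struts2</filter-name>
            <url-pattern>/*</url-pattern>
        </filter-mapping>

        <!-- struts.xml: point the Convention plugin at the package containing the annotated actions -->
        <struts>
            <constant name="struts.convention.action.packages" value="com.peter.action"/>
        </struts>

    It is also worth confirming that welcome.jsp actually exists at the location implied by @ResultPath("/"), since the "No result defined ... and result success" message is exactly what Struts reports when the annotated result cannot be resolved.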

    Read the article

  • Change or Reset Windows Password from a Ubuntu Live CD

    - by Trevor Bekolay
    If you can’t log in even after trying your twelve passwords, or you’ve inherited a computer complete with password-protected profiles, worry not – you don’t have to do a fresh install of Windows. We’ll show you how to change or reset your Windows password from a Ubuntu Live CD. This method works for all of the NT-based version of Windows – anything from Windows 2000 and later, basically. And yes, that includes Windows 7. You’ll need a Ubuntu 9.10 Live CD, or a bootable Ubuntu 9.10 Flash Drive. If you don’t have one, or have forgotten how to boot from the flash drive, check out our article on creating a bootable Ubuntu 9.10 flash drive. The program that lets us manipulate Windows passwords is called chntpw. The steps to install it are different in 32-bit and 64-bit versions of Ubuntu. Installation: 32-bit Open up Synaptic Package Manager by clicking on System at the top of the screen, expanding the Administration section, and clicking on Synaptic Package Manager. chntpw is found in the universe repository. Repositories are a way for Ubuntu to group software together so that users are able to choose if they want to use only completely open source software maintained by Ubuntu developers, or branch out and use software with different licenses and maintainers. To enable software from the universe repository, click on Settings > Repositories in the Synaptic window. Add a checkmark beside the box labeled “Community-maintained Open Source software (universe)” and then click close. When you change the repositories you are selecting software from, you have to reload the list of available software. In the main Synaptic window, click on the Reload button. The software lists will be downloaded. Once downloaded, Synaptic must rebuild its search index. The label over the text field by the Search button will read “Rebuilding search index.” When it reads “Quick search,” type chntpw in the text field. The package will show up in the list. Click on the checkbox near the chntpw name. Click on Mark for Installation. chntpw won’t actually be installed until you apply the changes you’ve made, so click on the Apply button in the Synaptic window now. You will be prompted to accept the changes. Click Apply. The changes should be applied quickly. When they’re done, click Close. chntpw is now installed! You can close Synaptic Package Manager. Skip to the section titled Using chntpw to reset your password. Installation: 64-bit The version of chntpw available in Ubuntu’s universe repository will not work properly on a 64-bit machine. Fortunately, a patched version exists in Debian’s Unstable branch, so let’s download it from there and install it manually. Open Firefox. Whether it’s your preferred browser or not, it’s very readily accessible in the Ubuntu Live CD environment, so it will be the easiest to use. There’s a shortcut to Firefox in the top panel. Navigate to http://packages.debian.org/sid/amd64/chntpw/download and download the latest version of chntpw for 64-bit machines. Note: In most cases it would be best to add the Debian Unstable branch to a package manager, but since the Live CD environment will revert to its original state once you reboot, it’ll be faster to just download the .deb file. Save the .deb file to the default location. You can close Firefox if desired. Open a terminal window by clicking on Applications at the top-left of the screen, expanding the Accessories folder, and clicking on Terminal. 
    In the terminal window, enter the following commands, hitting Enter after each line:

        cd Downloads
        sudo dpkg -i chntpw*

    chntpw will now be installed.

    Using chntpw to reset your password

    Before running chntpw, you will have to mount the hard drive that contains your Windows installation. In most cases, Ubuntu 9.10 makes this simple. Click on Places at the top-left of the screen. If your Windows drive is easily identifiable – usually by its size – then left-click on it. If it is not obvious, then click on Computer and check each hard drive until you find the correct one. The correct hard drive will have the WINDOWS folder in it. When you find it, make a note of the drive's label that appears in the title bar of the file browser.

    If you don't already have one open, start a terminal window by going to Applications > Accessories > Terminal. In the terminal window, enter the commands

        cd /media
        ls

    pressing Enter after each line. You should see one or more strings of text appear; one of those strings should correspond with the drive label that appeared in the title bar of the file browser earlier. Change to that directory by entering the command

        cd <hard drive label>

    Since the hard drive label will be very annoying to type in, you can use a shortcut by typing the first few letters or numbers of the drive label (capitalization matters) and pressing the Tab key. It will automatically complete the rest of the string (if those first few letters or numbers are unique). We want to switch to a certain Windows directory. Enter the command:

        cd WINDOWS/system32/config/

    Again, you can use tab-completion to speed up entering this command. To change or reset the administrator password, enter:

        sudo chntpw SAM

    SAM is the registry hive file that holds your Windows user account information. You will see some text appear, including a list of all of the users on your system. At the bottom of the terminal window, you should see a prompt that begins with "User Edit Menu:" and offers four choices. We recommend that you clear the password to blank (you can always set a new password in Windows once you log in). To do this, enter "1" and then "y" to confirm. If you would like to change the password instead, enter "2", then your desired password, and finally "y" to confirm. If you would like to reset or change the password of a user other than the administrator, enter:

        sudo chntpw -u <username> SAM

    From here, you can follow the same steps as before: enter "1" to reset the password to blank, or "2" to change it to a value you provide. And that's it!

    Conclusion

    chntpw is a very useful utility provided for free by the open source community. It may make you think twice about how secure the Windows login system is, but knowing how to use chntpw can save your tail if your memory fails you two or eight times!
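    For reference, here is a compact recap of the terminal steps described above, assuming the Windows partition has already been mounted by clicking it under Places and that its label is WINDOWS_XP (a placeholder; substitute your own drive label):

        # On 32-bit Ubuntu the package can also be installed from the universe repository:
        #   sudo apt-get install chntpw
        cd /media
        ls                                # note the label of the mounted Windows drive
        cd WINDOWS_XP/WINDOWS/system32/config/
        sudo chntpw SAM                   # edit the built-in administrator account
        sudo chntpw -u <username> SAM     # or edit a specific user account
        # At the "User Edit Menu:" prompt, enter 1 to blank the password, then y to confirm.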

    Read the article

  • Sending an Activation Email when a New User Registers

    - by John
    Hello, The code below is a login system that I am using. It is supposed to allow a new user to register and then send the new user an activation email. It is inserting the new user into the MySQL database, but it is not sending the activation email. Any ideas why it's not sending the activation email? Thanks in advance, John header.php: <?php //error_reporting(0); session_start(); require_once ('db_connect.inc.php'); require_once ("function.inc.php"); $seed="0dAfghRqSTgx"; $domain = "...com"; ?> <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <html> <head> <title>The Sandbox - <?php echo $domain; ?></title> <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1"> <link rel="stylesheet" type="text/css" href="sandbox.css"> <div class="hslogo"><a href="http://www...com/sandbox/"><img src="images/hslogo.png" alt="Example" border="0"/></a></div> </head> <body> login.php: <?php if (!isLoggedIn()) { // user is not logged in. if (isset($_POST['cmdlogin'])) { // retrieve the username and password sent from login form & check the login. if (checkLogin($_POST['username'], $_POST['password'])) { show_userbox(); } else { echo "Incorrect Login information !"; show_loginform(); } } else { // User is not logged in and has not pressed the login button // so we show him the loginform show_loginform(); } } else { // The user is already loggedin, so we show the userbox. show_userbox(); } ?> function show_loginform($disabled = false) { echo '<form name="login-form" id="login-form" method="post" action="./index.php?'.$_SERVER['QUERY_STRING'].'"> <div class="usernameformtext"><label title="Username">Username: </label></div> <div class="usernameformfield"><input tabindex="1" accesskey="u" name="username" type="text" maxlength="30" id="username" /></div> <div class="passwordformtext"><label title="Password">Password: </label></div> <div class="passwordformfield"><input tabindex="2" accesskey="p" name="password" type="password" maxlength="15" id="password" /></div> <div class="registertext"><a href="http://www...com/sandbox/register.php" title="Register">Register</a></div> <div class="lostpasswordtext"><a href="http://www...com/sandbox/lostpassword.php" title="Lost Password">Lost password?</a></div> <p class="loginbutton"><input tabindex="3" accesskey="l" type="submit" name="cmdlogin" value="Login" '; if ($disabled == true) { echo 'disabled="disabled"'; } echo ' /></p></form>'; } register.php: <?php require_once "header.php"; if (isset($_POST['register'])){ if (registerNewUser($_POST['username'], $_POST['password'], $_POST['password2'], $_POST['email'])){ echo "<div class='registration'>Thank you for registering, an email has been sent to your inbox, Please activate your account. <a href='http://www...com/sandbox/index.php'>Click here to login.</a> </div>"; }else { echo "Registration failed! Please try again."; show_registration_form(); } } else { // has not pressed the register button show_registration_form(); } ?> New User Function: function registerNewUser($username, $password, $password2, $email) { global $seed; if (!valid_username($username) || !valid_password($password) || !valid_email($email) || $password != $password2 || user_exists($username)) { return false; } $code = generate_code(20); $sql = sprintf("insert into login (username,password,email,actcode) value ('%s','%s','%s','%s')", mysql_real_escape_string($username), mysql_real_escape_string(sha1($password . 
$seed)) , mysql_real_escape_string($email), mysql_real_escape_string($code)); if (mysql_query($sql)) { $id = mysql_insert_id(); if (sendActivationEmail($username, $password, $id, $email, $code)) { return true; } else { return false; } } else { return false; } return false; } Send Activation Email function: function sendActivationEmail($username, $password, $uid, $email, $actcode) { global $domain; $link = "http://www.$domain/sandbox/activate.php?uid=$uid&actcode=$actcode"; $message = " Thank you for registering on http://www.$domain/, Your account information: username: $username password: $password Please click the link below to activate your account. $link Regards $domain Administration "; if (sendMail($email, "Please activate your account.", $message, "no-reply@$domain")) { return true; } else { return false; } }
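    One thing to check: registerNewUser() only reports success if sendMail() returns true, and that helper is not shown in the question. If it is missing, or if PHP's mail() has no working mail transfer agent or SMTP setting behind it, the INSERT will succeed while no message is ever sent. Below is a minimal sketch of what such a helper might look like (hypothetical, not the poster's actual function), built on PHP's built-in mail():

        <?php
        // Hypothetical sendMail() helper built on PHP's mail().
        // mail() returning true only means the message was accepted for delivery,
        // not that it actually arrived; a missing local MTA or an unset SMTP value
        // in php.ini is a common reason nothing shows up in the inbox.
        function sendMail($to, $subject, $message, $from)
        {
            $headers = "From: $from\r\n" .
                       "Reply-To: $from\r\n" .
                       "X-Mailer: PHP/" . phpversion();

            return mail($to, $subject, $message, $headers);
        }
        ?>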

    Read the article

  • ParallelWork: Feature rich multithreaded fluent task execution library for WPF

    - by oazabir
    ParallelWork is a free, open source helper class that lets you run multiple pieces of work on parallel threads, get success, failure and progress updates on the WPF UI thread, wait for work to complete, abort all work (in case of shutdown), queue work to run after a certain time, and chain parallel work one after another. It's more convenient than using .NET's BackgroundWorker because you don't have to declare one component per piece of work, nor do you need to declare event handlers to receive notifications and carry additional data through private variables. You can safely pass objects produced on a different thread to the success callback. Moreover, you can wait for work to complete before you perform a certain operation, and you can abort all parallel work while it is in flight. If you are building a highly responsive WPF UI where you have to carry out multiple jobs in parallel yet want full control over those jobs' completion and cancellation, then the ParallelWork library is the right solution for you.

    I am using the ParallelWork library in my PlantUmlEditor project, which is a free open source UML editor built on WPF. You can see some realistic use of the ParallelWork library there. Moreover, the test project comes with 400 lines of Behavior Driven Development flavored tests that confirm it really does what it says it does. The source code of the library is part of the "Utilities" project in the PlantUmlEditor source code hosted at Google Code.

    The library comes in two flavors: one is the ParallelWork static class, which has a collection of static methods that you can call; the other is the Start class, which is a fluent wrapper over the ParallelWork class that makes the code more readable and aesthetically pleasing. ParallelWork allows you to start work immediately on a separate thread, or you can queue work to start after some duration. You can start immediate work in a new thread using the following methods:

        void StartNow(Action doWork, Action onComplete)
        void StartNow(Action doWork, Action onComplete, Action<Exception> failed)

    For example,

        ParallelWork.StartNow(() => { workStartedAt = DateTime.Now; Thread.Sleep(howLongWorkTakes); }, () => { workEndedAt = DateTime.Now; });

    Or you can use the fluent way, Start.Work:

        Start.Work(() => { workStartedAt = DateTime.Now; Thread.Sleep(howLongWorkTakes); }) .OnComplete(() => { workCompletedAt = DateTime.Now; }) .Run();

    Besides simple execution of work on a parallel thread, you can have the parallel thread produce some object and then pass it to the success callback by using these overloads:

        void StartNow<T>(Func<T> doWork, Action<T> onComplete)
        void StartNow<T>(Func<T> doWork, Action<T> onComplete, Action<Exception> fail)

    For example,

        ParallelWork.StartNow<Dictionary<string, string>>( () => { test = new Dictionary<string,string>(); test.Add("test", "test"); return test; }, (result) => { Assert.True(result.ContainsKey("test")); });

    Or, the fluent way:

        Start<Dictionary<string, string>>.Work(() => { test = new Dictionary<string, string>(); test.Add("test", "test"); return test; }) .OnComplete((result) => { Assert.True(result.ContainsKey("test")); }) .Run();

    You can also start work after some time using these methods:

        DispatcherTimer StartAfter(Action onComplete, TimeSpan duration)
        DispatcherTimer StartAfter(Action doWork, Action onComplete, TimeSpan duration)

    You can use this to perform some timed operation on the UI thread, as well as perform some operation on a separate thread after some time.

        ParallelWork.StartAfter( () => { workStartedAt = DateTime.Now; Thread.Sleep(howLongWorkTakes); }, () => { workCompletedAt = DateTime.Now; }, waitDuration);

    Or, the fluent way:

        Start.Work(() => { workStartedAt = DateTime.Now; Thread.Sleep(howLongWorkTakes); }) .OnComplete(() => { workCompletedAt = DateTime.Now; }) .RunAfter(waitDuration);

    There are several overloads of these functions that take an exception callback for handling exceptions or that report progress updates from the background thread while work is in progress. For example, I use it in my PlantUmlEditor to perform a background update of the application:

        // Check if there's a newer version of the app
        Start<bool>.Work(() => { return UpdateChecker.HasUpdate(Settings.Default.DownloadUrl); }) .OnComplete((hasUpdate) => { if (hasUpdate) { if (MessageBox.Show(Window.GetWindow(me), "There's a newer version available. Do you want to download and install?", "New version available", MessageBoxButton.YesNo, MessageBoxImage.Information) == MessageBoxResult.Yes) { ParallelWork.StartNow(() => { var tempPath = System.IO.Path.Combine( Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData), Settings.Default.SetupExeName); UpdateChecker.DownloadLatestUpdate(Settings.Default.DownloadUrl, tempPath); }, () => { }, (x) => { MessageBox.Show(Window.GetWindow(me), "Download failed. When you run next time, it will try downloading again.", "Download failed", MessageBoxButton.OK, MessageBoxImage.Warning); }); } } }) .OnException((x) => { MessageBox.Show(Window.GetWindow(me), x.Message, "Download failed", MessageBoxButton.OK, MessageBoxImage.Exclamation); });

    The above code shows how to get exception callbacks on the UI thread so that you can take the necessary actions in the UI. It also shows how you can chain two parallel pieces of work to happen one after another.

    Sometimes you want to do some parallel work when the user does some activity in the UI. For example, you might want to save the file in an editor every 10 seconds while the user is typing. In such a case, you need to make sure you don't start another parallel work item every 10 seconds while one is already queued. You need to make sure you start new work only when there's no other background work going on. Here's how you can do it:

        private void ContentEditor_TextChanged(object sender, EventArgs e) { if (!ParallelWork.IsAnyWorkRunning()) { ParallelWork.StartAfter(SaveAndRefreshDiagram, TimeSpan.FromSeconds(10)); } }

    If you want to shut down your application and want to make sure no parallel work is going on, then you can call the StopAll() method:

        ParallelWork.StopAll();

    If you want to wait for parallel work to complete within a timeout, you can call WaitForAllWork(TimeSpan timeout). It will block the current thread until all parallel work completes or the timeout period elapses.

        result = ParallelWork.WaitForAllWork(TimeSpan.FromSeconds(1));

    The result is true if all parallel work completed. If it's false, then the timeout period elapsed and all parallel work did not complete.

    For details on how this library is built and how it works, please read the following CodeProject article: ParallelWork: Feature rich multithreaded fluent task execution library for WPF http://www.codeproject.com/KB/WPF/parallelwork.aspx If you like the article, please vote for me.

    Read the article

  • Recover Deleted Files on an NTFS Hard Drive from a Ubuntu Live CD

    - by Trevor Bekolay
    Accidentally deleting a file is a terrible feeling. Not being able to boot into Windows and undelete that file makes it even worse. Fortunately, you can recover deleted files on NTFS hard drives from an Ubuntu Live CD. To show this process, we created four files on the desktop of a Windows XP machine, and then deleted them. We then booted up the same machine with the bootable Ubuntu 9.10 USB Flash Drive that we created last week. Once Ubuntu 9.10 boots up, open a terminal by clicking Applications in the top left of the screen, and then selecting Accessories > Terminal.

    To undelete our files, we first need to identify the hard drive that we want to undelete from. In the terminal window, type in:

        sudo fdisk -l

    and press enter. What you're looking for is a line that ends with HPFS/NTFS (under the heading System). In our case, the device is "/dev/sda1". This may be slightly different for you, but it will still begin with /dev/. Note this device name. If you have more than one hard drive partition formatted as NTFS, then you may be able to identify the correct partition by its size. If you look at the second line of text in the screenshot above, it reads "Disk /dev/sda: 136.4 GB, …" This means that the hard drive that Ubuntu has named /dev/sda is 136.4 GB large. If your hard drives are of different sizes, then this information can help you track down the right device name to use. Alternatively, you can just try them all, though this can be time consuming for large hard drives.

    Now that you know the name Ubuntu has assigned to your hard drive, we'll scan it to see what files we can uncover. In the terminal window, type:

        sudo ntfsundelete <HD name>

    and hit enter. In our case, the command is:

        sudo ntfsundelete /dev/sda1

    The names of files that can be recovered show up in the far right column. The percentage in the third column tells us how much of each file can be recovered. Three of the four files that we originally deleted are showing up in this list, even though we shut down the computer right after deleting the four files – so even in ideal cases, your files may not be recoverable. Nevertheless, we have three files that we can recover – two JPGs and an MPG.

    Note: ntfsundelete is immediately available in the Ubuntu 9.10 Live CD. If you are using a different version of Ubuntu, or for some other reason get an error when trying to use ntfsundelete, you can install it by entering "sudo apt-get install ntfsprogs" in a terminal window.

    To quickly recover the two JPGs, we will use the * wildcard to recover all of the files that end with .jpg. In the terminal window, enter

        sudo ntfsundelete <HD name> -u -m *.jpg

    which is, in our case,

        sudo ntfsundelete /dev/sda1 -u -m *.jpg

    The two files are recovered from the NTFS hard drive and saved in the current working directory of the terminal. By default, this is the home directory of the current user, though we are working in the Desktop folder. Note that the ntfsundelete program does not make any changes to the original NTFS hard drive. If you want to take those files and put them back on the NTFS hard drive, you will have to move them there after they are undeleted with ntfsundelete. Of course, you can also put them on your flash drive or open Firefox and email them to yourself – the sky's the limit!

    We have one more file to undelete – our MPG. Note the first column on the far left. It contains a number, its Inode. Think of this as the file's unique identifier. Note this number.
    To undelete a file by its Inode, enter the following in the terminal:

        sudo ntfsundelete <HD name> -u -i <Inode>

    In our case, this is:

        sudo ntfsundelete /dev/sda1 -u -i 14159

    This recovers the file, along with an identifier that we don't really care about. All three of our recoverable files are now recovered. However, Ubuntu lets us know visually that we can't use these files yet. That's because the ntfsundelete program saves the files as the "root" user, not the "ubuntu" user. We can verify this by typing the following in our terminal window:

        ls -l

    We want these three files to be owned by ubuntu, not root. To do this, enter the following in the terminal window:

        sudo chown ubuntu <Files>

    If the current folder has other files in it, you may not want to change their owner to ubuntu. However, in our case, we only have these three files in this folder, so we will use the * wildcard to change the owner of all three files:

        sudo chown ubuntu *

    The files now look normal, and we can do whatever we want with them. Hopefully you won't need to use this tip, but if you do, ntfsundelete is a nice command-line utility. It doesn't have a fancy GUI like many of the similar Windows programs, but it is a powerful tool that can recover your files quickly. See ntfsundelete's manual page for more detailed usage information.
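    For reference, here are the commands walked through above in one place, using the example device name and inode number from this article (substitute the values fdisk and ntfsundelete report on your machine):

        sudo fdisk -l                              # identify the NTFS partition (System column: HPFS/NTFS)
        sudo ntfsundelete /dev/sda1                # list recoverable files with their inodes
        sudo ntfsundelete /dev/sda1 -u -m '*.jpg'  # undelete by filename pattern
        sudo ntfsundelete /dev/sda1 -u -i 14159    # undelete a single file by inode
        sudo chown ubuntu *                        # recovered files are owned by root; hand them to ubuntu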

    Read the article

  • Automating Form Login

    - by Greg_Gutkin
    Introduction

    A common task in configuring a web application for proxying in Pagelet Producer is setting up form autologin. PP provides a wizard-like tool for detecting the login form fields, but this is usually only the first step in configuring this feature. If the generated configuration doesn't seem to work, some additional manual modifications will be needed to complete the setup. This article will try to guide you through this process while steering you away from common pitfalls. For the purposes of this article, let's assume the following characteristics about your environment:

    Web Application Base URL: http://host/app (configured as Resource Source URL in PP)
    Pagelet Producer Base URL: http://pp/pagelets

    Form Field Auto-Detection

    Form autologin is configured in the PP Admin UI under resource_name/Autologin/Form Login. First, you'll enter the URL to the login form under "Login Form Identification". This will enable the admin wizard to connect to and display the login page.

    Caution: Redirects. Make sure the entered URL matches what you see in the browser's address bar when the application login page is displayed. For example, even though you may be able to reach the login page by simply typing http://host/app, the URL you end up on may change to http://host/app/login via browser redirect(s). The second URL is the one you will want to use.

    Caution: External Login Servers. The login page may actually come from a different server than the application you are trying to proxy. For example, you may notice that the login page URL changes to http://hostB/appB. This is common when external SSO products are involved. There are two ways of dealing with this situation. One is to configure Pagelet Producer to participate in SSO. This approach is out of scope of this article and is discussed in a separate whitepaper (TODO add link). The second approach is to use the autologin feature to provide stored credentials to the SSO login form. Since the login form URL is not an extension of the application base URL (PP resource URL), you will need to add a new PP resource for the SSO server and configure the login form on that resource instead of the original application resource. One side benefit of this additional resource is that it can be reused for other applications relying on the same SSO server for login.

    After entering the login page URL (make sure the dropdown says "URL"), click "Automatically Detect Form Fields". This will bring up the web app's login page in a new browser window. Fill it out and submit it as you would normally. If everything goes right, Pagelet Producer will intercept the submitted values and fill out all the needed configuration data in the Admin UI. If the login form window doesn't close or the configuration data doesn't get filled in, you may not have entered the login page URL correctly. Review the two cautionary notes above and make any necessary changes. If the form fields got filled in automatically, it's time to save the configuration and test it out. If you can access a protected area of the backend application via a proxied PP URL without filling out its login form, then you are pretty much done with login form configuration. The only other step you will need to complete before declaring this aspect of configuration production ready is configuring the form field source. You may skip to that section below.

    Manual Login Form Identification

    Let's take a closer look at Login Form Identification. This determines how Pagelet Producer recognizes login forms as such.

    URL

    The most efficient way of detecting login forms is by looking at the page URL. This method can only be used under the following conditions: the login page URL must be different from the post-login application URLs, and the login page URL must stay constant regardless of the path taken to reach the page. For example, reaching the login page by going to the application base URL or to a specific protected URL must result in a redirect to the same login page URL (query string excluded). If only the query string parameters change, just leave the query string out of the configured login page URL. If either of these conditions is not fulfilled, you must switch to the RegEx approach below.

    RegEx

    If the login page URL is not uniform enough across all scenarios or is indistinguishable from other page locations, PP can be configured to recognize it by looking at the page markup itself. This is accomplished by changing the dropdown to "RegEx". If regular expressions scare you, take comfort from the fact that in most cases you won't need to enter any special regex characters. Let's look at an example. Say you have a login form that looks like

        <form id='loginForm' action='login?from=pageA' > <input id='user'> <input id='pass'> </form>

    Since this form has an id attribute, you can be reasonably sure that this login form can be uniquely identified across the web application by the snippet "id='loginForm'" (unless, of course, your backend web application contains login forms for other apps). Since no wildcards are needed to find this snippet, you can just enter it as is into the RegEx field – no special regular expression characters needed! If the web developer who created the form wasn't kind enough to provide a unique id, you will need to look for other snippets of the page to uniquely identify it. It could be the action URL, an input field id, or some other markup fragment. You should abstain from using UI text as an identifier, since it may change in translated versions of the page and prevent the login page logic from working for international users. You may need to turn to regular expression wildcard syntax if no simple matches work. For more information on regular expressions, refer to the Resources section.

    Form Submit Location

    Now we'll look at the form submit location. If the captured URL contains query string parameters that will likely change from one form submission to the next, you will need to change its type to RegEx. This type will tell Pagelet Producer to parse the login page for the action URL and submit to the value found. The regular expression needs to point at the actual action URL with its first grouping expression. Taking the example form definition above, the form submit location regex would be:

        action='(.*?)'

    The parentheses are used to identify the actual action URL, while the rest of the expression provides the context for finding it. The expression .*? is a so-called reluctant wildcard that matches any character excluding the single quote that follows. See the Resources section below for further information on regular expressions.

    Manual Form Field Detection

    If the Admin UI form field detection wizard fails to populate the login form configuration page, you will have to enter the fields by hand. Use a built-in browser developer tool or addon (e.g. Firebug) to inspect the form element and its child input elements. For each input element (including hidden elements), create an entry under Form Fields. Change its Source according to the next section.

    Form Field Source

    Change the source of any of the fields not exposed to the users of the login form (i.e. hidden fields) to "Generated". This means Pagelet Producer will just use the values returned by the web app rather than supplying values it stored. For fields that contain sensitive data or vary from user to user (e.g. username & password), change the source to User (Credential) Vault.

    Logging Support

    To help you troubleshoot your autologin configuration, PP provides some useful logging support. To turn on detailed logging for the autologin feature, navigate to Settings in the Admin UI. Under Logging, change the log level for AutoLogin to Finest.

    Known Limitations

    The autologin feature may not work as expected if the login form fields (not just the values, but the DOM elements themselves) are generated dynamically by client-side JavaScript.

    Resources

    RegEx Reference from Java
    RegEx Test Tool
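    To illustrate how a pattern like action='(.*?)' pulls the submit URL out of the login form markup, here is a small, self-contained Java sketch. It simply applies the standard java.util.regex classes to the example form from this article; it is not Pagelet Producer code.

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class LoginFormRegexDemo {
            public static void main(String[] args) {
                // The example login form markup used earlier in this article.
                String page = "<form id='loginForm' action='login?from=pageA' >"
                            + "<input id='user'><input id='pass'></form>";

                // Same reluctant-wildcard expression as the Form Submit Location example;
                // group 1 captures the action URL between the quotes.
                Pattern p = Pattern.compile("action='(.*?)'");
                Matcher m = p.matcher(page);

                if (m.find()) {
                    System.out.println("Submit location: " + m.group(1)); // prints: login?from=pageA
                }
            }
        }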

    Read the article

  • 202 blog articles

    - by mprove
    All my blog articles under blogs.oracle.com since August 2005: 202 blog articles Apr 2012 blogs.oracle.com design patch Mar 2012 Interaction 12 - Critique Mar 2012 Typing. Clicking. Dancing. Feb 2012 Desktop Mobility in Hospitals with Oracle VDI /video Feb 2012 Interaction 12 in Dublin - Highlights of Day 3 Feb 2012 Interaction 12 in Dublin - Highlights of Day 2 Feb 2012 Interaction 12 in Dublin - Highlights of Day 1 Feb 2012 Shit Interaction Designers Say Feb 2012 Tips'n'Tricks for WebCenter #3: How to display custom page titles in Spaces Jan 2012 Tips'n'Tricks for WebCenter #2: How to create an Admin menu in Spaces and save a lot of time Jan 2012 Tips'n'Tricks for WebCenter #1: How to apply custom resources in Spaces Jan 2012 Merry XMas and a Happy 2012! Dec 2011 One Year Oracle SocialChat - The Movie Nov 2011 Frank Ludolph's Last Working Day Nov 2011 Hans Rosling at TED Oct 2011 200 Countries x 200 Years Oct 2011 Blog Aggregation for Desktop Virtualization Oct 2011 Oracle VDI at OOW 2011 Sep 2011 Design for Conversations & Conversations for Design Sep 2011 All Oracle UX Blogs Aug 2011 Farewell Loriot Aug 2011 Oracle VDI 3.3 Overview Aug 2011 Sutherland's Closing Remarks at HyperKult Aug 2011 Surface and Subface Aug 2011 Back to Childhood in UI Design Jul 2011 The Art of Engineering and The Engineering of Art Jul 2011 Oracle VDI Seminar - June-30 Jun 2011 SGD White Paper May 2011 TEDxHamburg Live Feed May 2011 Oracle VDI in 3 Minutes May 2011 Space Ship Earth 2011 May 2011 blog moving times Apr 2011 Frozen tag cloud Apr 2011 Oracle: Hardware Software Complete in 1953 Apr 2011 Interaction Design with Wireframes Apr 2011 A guide to closing down a project Feb 2011 Oracle VDI 3.2.2 Jan 2011 free VDI charts Jan 2011 Sun Founders Panel 2006 Dec 2010 Sutherland on Leadership Dec 2010 SocialChat: Efficiency of E20 Dec 2010 ALWAYS ON Desktop Virtualization Nov 2010 12,000 Desktops at JavaOne Nov 2010 SocialChat on Sharing Best Practices Oct 2010 Globe of Visitors Oct 2010 SocialChat about the Next Big Thing Oct 2010 Oracle VDI UX Story - Wireframes Oct 2010 What's a PC anyway? Oct 2010 SocialChat on Getting Things Done Oct 2010 SocialChat on Infoglut Oct 2010 IT Twenty Twenty Oct 2010 Desktop Virtualization Webcasts from OOW Oct 2010 Oracle VDI 3.2 Overview Sep 2010 Blog Usability Top 7 Sep 2010 100 and counting Aug 2010 Oracle'izing the VDI Blogs Aug 2010 SocialChat on Apple Aug 2010 SocialChat on Video Conferencing Aug 2010 Oracle VDI 3.2 - Features and Screenshots Aug 2010 SocialChat: Don't stop making waves Aug 2010 SocialChat: Giving Back to the Community Aug 2010 SocialChat on Learning in Meetings Aug 2010 iPAD's Natural User Interface Jul 2010 Last day for Sun Microsystems GmbH Jun 2010 SirValUse Celebration Snippets Jun 2010 10 years SirValUse - Happy Birthday! Jun 2010 Wim on Virtualization May 2010 New Home for Oracle VDI Apr 2010 Renaissance Slide Sorter Comments Apr 2010 Unboxing Sun Ray 3 Plus Apr 2010 Desktop Virtualisierung mit Sun VDI 3.1 Apr 2010 Blog Relaunch Mar 2010 Social Messaging Slides from CeBIT Mar 2010 Social Messaging Talk at CeBIT Feb 2010 Welcome Oracle Jan 2010 My last presentation at Sun Jan 2010 Ivan Sutherland on Leadership Jan 2010 Learning French with Sun VDI Jan 2010 Learning Danish with Sun Ray Jan 2010 VDI workshop in Nieuwegein Jan 2010 Happy New Year 2010 Jan 2010 On Creating Slides Dec 2009 Best VDI Ever Nov 2009 How to store the Big Bang Nov 2009 Social Enterprise Tools. Beipiel Sun. 
Nov 2009 Nov-19 Nov 2009 PDF and ODF links on your blog Nov 2009 Q&A on VDI and MySQL Cluster Nov 2009 Zürich next week: Swiss Intranet Summit 09 Nov 2009 Designing for a Sustainable World - World Usabiltiy Day, Nov-12 Nov 2009 How to export a desktop from VDI 3 Nov 2009 Virtualisation Roadshow in the UK Nov 2009 Project Wonderland at EDUCAUSE 09 Nov 2009 VDI Roadshow in Dublin, Nov-26, 2009 Nov 2009 Sun VDI at EDUCAUSE 09 Nov 2009 Sun VDI 3.1 Architecture and New Features Oct 2009 VDI 3.1 is Early-Access Sep 2009 Virtualization for MySQL on VMware Sep 2009 Silpion & 13. Stock Sommerparty Sep 2009 Sun Ray and VMware View 3.1.1 2009-08-31 New Set of Sun Ray Status Icons 2009-08-25 Virtualizing the VDI Core? 2009-08-23 World Usability Day Hamburg 2009 - CfP 2009-07-16 Rising Sun 2009-07-15 featuring twittermeme 2009-06-19 ISC09 Student Party on June-20 /Hamburg 2009-06-18 Before and behind the curtain of JavaOne 2009-06-09 20k desktops at JavaOne 2009-06-01 sweet microblogging 2009-05-25 VDI 3 - Why you need 3 VDI hosts and what you can do about that? 2009-05-21 IA Konferenz 2009 2009-05-20 Sun VDI 3 UX Story - Power of the Web 2009-05-06 Planet of Sun and Oracle User Experience Design 2009-04-22 Sun VDI 3 UX Story - User Research 2009-04-08 Sun VDI 3 UX Story - Concept Workshops 2009-04-06 Localized documentation for Sun Ray Connector for VMware View Manager 1.1 2009-04-03 Sun VDI 3 Press Release 2009-03-25 Sun VDI 3 launches today! 2009-03-25 Sun Ray Connector for VMware View Manager 1.1 Update 2009-03-11 desktop virtualization wiki relaunch 2009-03-06 VDI 3 at CeBIT hall 6, booth E36 2009-03-02 Keyboard layout problems with Sun Ray Connector for VMware VDM 2009-02-23 wikis.sun.com tips & tricks 2009-02-23 Sun VDI 3 is in Early Access 2009-02-09 VirtualCenter unable to decrypt passwords 2009-02-02 Sun & VMware Desktop Training 2009-01-30 VDI at next09? 2009-01-16 Sun VDI: How to use virtual machines with multiple network adapters 2009-01-07 Sun Ray and VMware View 2009-01-07 Hamburg World Usability Day 2008 - Webcasts 2009-01-06 Sun Ray Connector for VMware VDM slides 2008-12-15 mother of all demos 2008-12-08 Build your own Thumper 2008-12-03 Troubleshooting Sun Ray Connector for VMware VDM 2008-12-02 My Roller Tag Cloud 2008-11-28 Sun Ray Connector: SSL connection to VDM 2008-11-25 Setting up SSL and Sun Ray Connector for VMware VDM 2008-11-13 Inspiration for Today and Tomorrow 2008-10-23 Sun Ray Connector for VMware VDM released 2008-10-14 From Sketchpad to ILoveSketch 2008-10-09 Desktop Virtualization on Xing 2008-10-06 User Experience Forum on Xing 2008-10-06 Sun Ray Connector for VMware VDM certified 2008-09-17 Virtual Clouds over Las Vegas 2008-09-14 Bill Verplank sketches metaphors 2008-09-04 End of Early Access - Sun Ray Connector for VMware 2008-08-27 Early Access: Sun Ray Connector for VMware Virtual Desktop Manager 2008-08-12 Sun Virtual Desktop Connector - Insides on Recycling Part 2 2008-07-20 Sun Virtual Desktop Connector - Insides on Recycling Part 3 2008-07-20 Sun Virtual Desktop Connector - Insides on Recycling 2008-07-20 lost in wiki space 2008-07-07 Evolution of the Desktop 2008-06-17 Virtual Desktop Webcast 2008-06-16 Woodstock 2008-06-16 What's a Desktop PC anyway? 2008-06-09 Virtual-T-Box 2008-06-05 Virtualization Glossary 2008-05-06 Five User Experience Principles 2008-04-25 Virtualization News Feed 2008-04-21 Acetylcholinesterase - Second Season 2008-04-18 Acetylcholinesterase - End of Signal 2007-12-31 Produkt-Management ist... 
2007-10-22 Usability Verbände, Verteiler und Netzwerke. 2007-10-02 The Meaning is the Message 2007-09-28 Visualization Methods 2007-09-10 Inhouse und Open Source Projekte – Usability verankern und Synergien nutzen 2007-09-03 Der Schwabe Darth Vader entdeckt das Virale Marketing 2007-08-29 Dick Hardt 3.0 on Identity 2.0 2007-08-27 quality of written text depends on the tool 2007-07-27 podcasts for reboot9 2007-06-04 It is the user's itch that need to be scratched 2007-05-25 A duel at reboot9 2007-05-14 Taxonomien und Folksonomien - Tagging als neues HCI-Element 2007-05-10 Dueling Interaction Models of Personal-Computing and Web-Computing 2007-03-01 22.März: Weizenbaum. Rebel at Work. /Filmpremiere Hamburg 2007-02-25 Bruce Sterling at UbiComp 2006 /webcast 2006-11-12 FSOSS 2006 /webcasts 2006-11-10 Highway 101 2006-11-09 User Experience Roundtable Hamburg: EuroGEL 2006 2006-11-08 Douglas Adams' Hyperland (BBC 1990) 2006-10-08 Taxonomien und Folksonomien – Tagging als neues HCI-Element 2006-09-13 Usability im Unternehmen 2006-09-13 Doug does HyperScope 2006-08-26 TED Talks and TechTalks 2006-08-21 Kai Krause über seine Freundschaft zu Douglas Adams 2006-07-20 Rebel At Work: Film Portrait on Weizenbaum 2006-07-04 Gabriele Fischer, mp3 2006-06-07 Dick Hardt at ETech 06 2006-06-05 Weinberger: From Control to Conversation 2006-04-16 Eye Tracking at User Experience Roundtable Hamburg 2006-04-14 dropping knowledge 2006-04-09 GEL 2005 2006-03-13 slide photos of reboot7 2006-03-04 Dick Hardt on Identity 2.0 2006-02-28 User Experience Newsletter #13: Versioning 2006-02-03 Ester Dyson on Choice and Happyness 2006-02-02 Requirements-Engineering im Spannungsfeld von Individual- und Produktsoftware 2006-01-15 User Experience Newsletter #12: Intuition Quiz 2005-11-30 User Experience und Requirements-Engineering für Software-Projekte 2005-10-31 Ivan Sutherland on "Research and Fun" 2005-10-18 Ars Electronica / Mensch und Computer 2005 2005-09-14 60 Jahre nach Memex: Über die Unvereinbarkeit von Desktop- und Web-Paradigma 2005-08-31 reboot 7 2005-06-30

    Read the article

  • FAQ: GridView Calculation with JavaScript

    - by Vincent Maverick Durano
In my previous post I wrote a simple demo on how to Calculate Totals in GridView and Display it in the Footer. Basically it calculates the total amount as you type into the TextBox and displays the grand total in the footer of the GridView, and it was a server-side implementation. Many users in the forums are asking how to do the same thing without postbacks and how to calculate both amount and total amount together. In this post I will demonstrate how to do this using JavaScript. To get started let's go ahead and set up the form. Just for the simplicity of this demo I set up the form like this:   <asp:gridview ID="GridView1" runat="server" ShowFooter="true" AutoGenerateColumns="false"> <Columns> <asp:BoundField DataField="RowNumber" HeaderText="Row Number" /> <asp:BoundField DataField="Description" HeaderText="Item Description" /> <asp:TemplateField HeaderText="Item Price"> <ItemTemplate> <asp:Label ID="LBLPrice" runat="server" Text='<%# Eval("Price") %>'></asp:Label> </ItemTemplate> </asp:TemplateField> <asp:TemplateField HeaderText="Quantity"> <ItemTemplate> <asp:TextBox ID="TXTQty" runat="server"></asp:TextBox> </ItemTemplate> <FooterTemplate> <b>Total Amount:</b> </FooterTemplate> </asp:TemplateField> <asp:TemplateField HeaderText="Sub-Total"> <ItemTemplate> <asp:Label ID="LBLSubTotal" runat="server"></asp:Label> </ItemTemplate> <FooterTemplate> <asp:Label ID="LBLTotal" runat="server" ForeColor="Green"></asp:Label> </FooterTemplate> </asp:TemplateField> </Columns> </asp:gridview>   As you can see, there's nothing fancy about the markup above. It's just a standard GridView with BoundFields and TemplateFields on it. Now, just for the purpose of this demo, I use dummy data for populating the GridView. Here's the code below:   public partial class GridCalculation : System.Web.UI.Page { private void BindDummyDataToGrid() { DataTable dt = new DataTable(); DataRow dr = null; dt.Columns.Add(new DataColumn("RowNumber", typeof(string))); dt.Columns.Add(new DataColumn("Description", typeof(string))); dt.Columns.Add(new DataColumn("Price", typeof(string))); dr = dt.NewRow(); dr["RowNumber"] = 1; dr["Description"] = "Nike"; dr["Price"] = "1000"; dt.Rows.Add(dr); dr = dt.NewRow(); dr["RowNumber"] = 2; dr["Description"] = "Converse"; dr["Price"] = "800"; dt.Rows.Add(dr); dr = dt.NewRow(); dr["RowNumber"] = 3; dr["Description"] = "Adidas"; dr["Price"] = "500"; dt.Rows.Add(dr); dr = dt.NewRow(); dr["RowNumber"] = 4; dr["Description"] = "Reebok"; dr["Price"] = "750"; dt.Rows.Add(dr); dr = dt.NewRow(); dr["RowNumber"] = 5; dr["Description"] = "Vans"; dr["Price"] = "1100"; dt.Rows.Add(dr); dr = dt.NewRow(); dr["RowNumber"] = 6; dr["Description"] = "Fila"; dr["Price"] = "200"; dt.Rows.Add(dr); //Bind the Gridview GridView1.DataSource = dt; GridView1.DataBind(); } protected void Page_Load(object sender, EventArgs e) { if (!IsPostBack) { BindDummyDataToGrid(); } } }   Now try to run the page.
The output should look something like below: The Client-Side Calculation Here's the code for the GridView calculation:   <script type="text/javascript"> function CalculateTotals() { var gv = document.getElementById("<%= GridView1.ClientID %>"); var tb = gv.getElementsByTagName("input"); var lb = gv.getElementsByTagName("span"); var sub = 0; var total = 0; var indexQ = 1; var indexP = 0; for (var i = 0; i < tb.length; i++) { if (tb[i].type == "text") { sub = parseFloat(lb[indexP].innerHTML) * parseFloat(tb[i].value); if (isNaN(sub)) { lb[i + indexQ].innerHTML = ""; sub = 0; } else { lb[i + indexQ].innerHTML = sub; } indexQ++; indexP = indexP + 2; total += parseFloat(sub); } } lb[lb.length -1].innerHTML = total; } </script>   The code above calculates the sub-total by multiplying the price and the quantity, and at the same time calculates the total amount by adding the sub-total values. Now you can simply call the JavaScript function above like this:   <ItemTemplate> <asp:TextBox ID="TXTQty" runat="server" onkeyup="CalculateTotals();"></asp:TextBox> </ItemTemplate>   Running the code above will display something like below: That's it! I hope someone finds this post useful! Technorati Tags: ASP.NET,JavaScript,GridView,TipsTricks
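If you also want the figures re-checked on the server (for example during a postback), a rough server-side counterpart to the script above might look like the sketch below. It assumes the control IDs from the markup earlier (LBLPrice, TXTQty, LBLSubTotal, LBLTotal) and would sit inside the same GridCalculation page class; the method name and any event wiring are assumptions, not part of the article.

// Hedged sketch: a server-side re-check of the same totals, placed in the
// GridCalculation page class. Control IDs come from the markup above; calling this
// from a TextChanged handler or a save button is left open and is only a suggestion.
protected void RecalculateTotals()
{
    decimal grandTotal = 0m;

    foreach (GridViewRow row in GridView1.Rows)
    {
        var priceLabel = (Label)row.FindControl("LBLPrice");
        var qtyBox = (TextBox)row.FindControl("TXTQty");
        var subTotalLabel = (Label)row.FindControl("LBLSubTotal");

        decimal price;
        decimal qty;
        if (decimal.TryParse(priceLabel.Text, out price) &&
            decimal.TryParse(qtyBox.Text, out qty))
        {
            decimal subTotal = price * qty;
            subTotalLabel.Text = subTotal.ToString();
            grandTotal += subTotal;
        }
        else
        {
            // Mirror the client-side behaviour: blank sub-total on bad input.
            subTotalLabel.Text = "";
        }
    }

    // The grand total label sits in the footer row of the GridView.
    var totalLabel = (Label)GridView1.FooterRow.FindControl("LBLTotal");
    totalLabel.Text = grandTotal.ToString();
}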

    Read the article

  • how to integrate jquery with jsf richfaces tags for print the image and textarea content?

    - by eswaramoorthy-nec
Hi, here I have written code to print the textarea content using jQuery. I load two JavaScript files (jquery-1.3.2.js and jquery.print.js), but these two source files do not support the rich:datascroller tag; the datascroller stops reacting. I need to print the textarea content and also keep the datascroller working. Below are the JSP and related Java files. The code has a datatable, a rich:datascroller and a textarea; the datatable is only used to test the datascroller component. My focus: print the textarea content while the datascroller component keeps working. printer.jsp <%@page contentType="text/html" pageEncoding="UTF-8"%> <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <%@ taglib uri="http://java.sun.com/jsf/html" prefix="h" %> <%@ taglib uri="http://java.sun.com/jsf/core" prefix="f" %> <%@ taglib uri="http://richfaces.org/a4j" prefix="a4j"%> <%@ taglib uri="http://richfaces.org/rich" prefix="rich"%> <html> <head> <title>Print Viewer </title> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> <a4j:loadScript src="resource/jquery-1.3.2.js"/> <a4j:loadScript src="resource/jquery.print.js"/> // The problem is: the two loadScript entries above do not support the datascroller component, // but those two jQuery files are needed to take the print. <script type="text/javascript"> function printData() { //Print the div content for the textarea jQuery( ".printable" ).print(); return( false ); } </script> </head> <body> <h:form id="printViewerForm" binding="#{PrintViewer.initForm}"> <rich:panel id="printViewerRichPanel"> <h:panelGrid cellpadding="3" columns="2" id="printPanelGridId" cellspacing="3" border ="1"> <h:panelGrid> //DataScroller for dataTable <rich:datascroller id = "dataScrollerTop" align="center" for= "printDataTable" page="1" maxPages="20"/> <rich:dataTable id="printDataTable" value="#{PrintViewer.printViewerList}" cellpadding="3" rows = "5" rowKeyVar="rowIndex" cellspacing="3" var="printViewerResultListTo"> <f:facet name="header"> <rich:columnGroup> <rich:column> <h:outputText value="PrintTable"/> </rich:column> </rich:columnGroup> </f:facet> <rich:column> <h:outputText value="#{printViewerResultListTo.printName}"/> </rich:column> </rich:dataTable> </h:panelGrid> //Print Content Region <a4j:region id="printContentViewRegion"> <a4j:commandButton id="printButton" value="PrintContent" onclick="printData()"/> <div id="printContentDiv" class="printable"> <h:inputTextarea id="printContentTextArea" style="width:300px;height:300px;" value="This is Sample Jquery For Test working Text Area"/> </div> </a4j:region> </h:panelGrid> </rich:panel> </h:form> </body> PrintViewer.java import java.util.ArrayList; import java.util.List; import javax.faces.component.html.HtmlForm; public class PrintViewer { private HtmlForm initForm; private List printViewerList = new ArrayList(); public HtmlForm getInitForm() { printViewerList = getPrintList(); return initForm; } private List getPrintList() { if (!printViewerList.isEmpty()) { printViewerList.clear(); } printViewerList.add(new PrintViewerResultListTo("print 1")); printViewerList.add(new PrintViewerResultListTo("print 2")); printViewerList.add(new PrintViewerResultListTo("print 3")); printViewerList.add(new PrintViewerResultListTo("print 4")); printViewerList.add(new PrintViewerResultListTo("print 5")); printViewerList.add(new PrintViewerResultListTo("print 6")); printViewerList.add(new PrintViewerResultListTo("print 7")); printViewerList.add(new PrintViewerResultListTo("print 8")); printViewerList.add(new PrintViewerResultListTo("print 9")); printViewerList.add(new PrintViewerResultListTo("print 10")); printViewerList.add(new PrintViewerResultListTo("print 11")); printViewerList.add(new PrintViewerResultListTo("print 12")); printViewerList.add(new PrintViewerResultListTo("print 13")); printViewerList.add(new PrintViewerResultListTo("print 14")); printViewerList.add(new PrintViewerResultListTo("print 15")); return printViewerList; } public void setInitForm(HtmlForm initForm) { this.initForm = initForm; } public List getPrintViewerList() { return printViewerList; } public void setPrintViewerList(List printViewerList) { this.printViewerList = printViewerList; } } PrintViewerResultListTo.java public class PrintViewerResultListTo { private String printName; PrintViewerResultListTo(String printName) { this.printName = printName; } public String getPrintName() { return printName; } public void setPrintName(String printName) { this.printName = printName; } } I hope someone can help me with this. Thanks in advance.

    Read the article

  • Solaris X86 AESNI OpenSSL Engine

    - by danx
Solaris X86 AESNI OpenSSL Engine Cryptography is a major component of secure e-commerce. Since cryptography is compute intensive and adds a significant load to applications, such as SSL web servers (https), crypto performance is an important factor. Providing accelerated crypto hardware greatly helps these applications and will help lead to a wider adoption of cryptography, and lower cost, in e-commerce and other applications. The Intel Westmere microprocessor has six new instructions to accelerate AES encryption. They are called "AESNI" for "AES New Instructions". These are unprivileged instructions, so no "root", other elevated access, or context switch is required to execute these instructions. These instructions are used in a new built-in OpenSSL 1.0 engine available in Solaris 11, the aesni engine. Previous Work Previously, AESNI instructions were introduced into the Solaris x86 kernel and libraries. That is, the "aes" kernel module (used by IPsec and other kernel modules) and the Solaris pkcs11 library (for user applications). These are available in Solaris 10 10/09 (update 8) and above, and Solaris 11. The work here is to add the aesni engine to OpenSSL. X86 AESNI Instructions Intel's Xeon 5600 is one of the processors that support AESNI. This processor is used in the Sun Fire X4170 M2. As mentioned above, six new instructions accelerate AES encryption in processor silicon. The new instructions are: aesenc performs one round of AES encryption. One encryption round is composed of these steps: substitute bytes, shift rows, mix columns, and xor the round key. aesenclast performs the final encryption round, which is the same as above, except omitting the mix columns (which is only needed for the next encryption round). aesdec performs one round of AES decryption. aesdeclast performs the final AES decryption round. aeskeygenassist helps expand the user-provided key into a "key schedule" of keys, one per round. aesimc performs an "inverse mixed columns" operation to convert the encryption key schedule into a decryption key schedule. pclmulqdq is not an AESNI instruction, but performs "carryless multiply" operations to accelerate AES GCM mode. Since the AESNI instructions are implemented in hardware, they take a constant number of cycles and are not vulnerable to side-channel timing attacks that attempt to discern some bits of data from the time taken to encrypt or decrypt the data. Solaris x86 and OpenSSL Software Optimizations Having X86 AESNI hardware crypto instructions is all well and good, but how do we access it? The software is available with Solaris 11 and is used automatically if you are running Solaris x86 on an AESNI-capable processor. AESNI is used internally in the kernel through kernel crypto modules and is available in user space through the PKCS#11 library. For OpenSSL on Solaris 11, AESNI crypto is available directly with a new built-in OpenSSL 1.0 engine, called the "aesni engine." This is in lieu of the extra overhead of going through the Solaris OpenSSL pkcs11 engine, which accesses Solaris crypto and digest operations. Instead, AESNI assembly is included directly in the new aesni engine. Instead of including the aesni engine in a separate library in /lib/openssl/engines/, the aesni engine is "built-in", meaning it is included directly in OpenSSL's libcrypto.so.1.0.0 library. This reduces overhead and the need to manually specify the aesni engine.
Since the engine is built-in (that is, in libcrypto.so.1.0.0), the openssl -engine command line flag or API call is not needed to access the engine—the aesni engine is used automatically on AESNI hardware. Ciphers and Digests supported by OpenSSL aesni engine The Openssl aesni engine auto-detects if it's running on AESNI hardware and uses AESNI encryption instructions for these ciphers: AES-128-CBC, AES-192-CBC, AES-256-CBC, AES-128-CFB128, AES-192-CFB128, AES-256-CFB128, AES-128-CTR, AES-192-CTR, AES-256-CTR, AES-128-ECB, AES-192-ECB, AES-256-ECB, AES-128-OFB, AES-192-OFB, and AES-256-OFB. Implementation of the OpenSSL aesni engine The AESNI assembly language routines are not a part of the regular Openssl 1.0.0 release. AESNI is a part of the "HEAD" ("development" or "unstable") branch of OpenSSL, for future release. But AESNI is also available as a separate patch provided by Intel to the OpenSSL project for OpenSSL 1.0.0. A minimal amount of "glue" code in the aesni engine works between the OpenSSL libcrypto.so.1.0.0 library and the assembly functions. The aesni engine code is separate from the base OpenSSL code and requires patching only a few source files to use it. That means OpenSSL can be more easily updated to future versions without losing the performance from the built-in aesni engine. OpenSSL aesni engine Performance Here's some graphs of aesni engine performance I measured by running openssl speed -evp $algorithm where $algorithm is aes-128-cbc, aes-192-cbc, and aes-256-cbc. These are using the 64-bit version of openssl on the same AESNI hardware, a Sun Fire X4170 M2 with a Intel Xeon E5620 @2.40GHz, running Solaris 11 FCS. "Before" is openssl without the aesni engine and "after" is openssl with the aesni engine. The numbers are MBytes/second. OpenSSL aesni engine performance on Sun Fire X4170 M2 (Xeon E5620 @2.40GHz) (Higher is better; "before"=OpenSSL on AESNI without AESNI engine software, "after"=OpenSSL AESNI engine) As you can see the speedup is dramatic for all 3 key lengths and for data sizes from 16 bytes to 8 Kbytes—AESNI is about 7.5-8x faster over hand-coded amd64 assembly (without aesni instructions). Verifying the OpenSSL aesni engine is present The easiest way to determine if you are running the aesni engine is to type "openssl engine" on the command line. No configuration, API, or command line options are needed to use the OpenSSL aesni engine. If you are running on Intel AESNI hardware with Solaris 11 FCS, you'll see this output indicating you are using the aesni engine: intel-westmere $ openssl engine (aesni) Intel AES-NI engine (no-aesni) (dynamic) Dynamic engine loading support (pkcs11) PKCS #11 engine support If you are running on Intel without AESNI hardware you'll see this output indicating the hardware can't support the aesni engine: intel-nehalem $ openssl engine (aesni) Intel AES-NI engine (no-aesni) (dynamic) Dynamic engine loading support (pkcs11) PKCS #11 engine support For Solaris on SPARC or older Solaris OpenSSL software, you won't see any aesni engine line at all. Third-party OpenSSL software (built yourself or from outside Oracle) will not have the aesni engine either. Solaris 11 FCS comes with OpenSSL version 1.0.0e. The output of typing "openssl version" should be "OpenSSL 1.0.0e 6 Sep 2011". 64- and 32-bit OpenSSL OpenSSL comes in both 32- and 64-bit binaries. 
The 64-bit executable is now the default, at /usr/bin/openssl, and the OpenSSL 64-bit libraries are at /lib/amd64/libcrypto.so.1.0.0 and libssl.so.1.0.0. The 32-bit executable is at /usr/bin/i86/openssl and the libraries are at /lib/libcrypto.so.1.0.0 and libssl.so.1.0.0. Availability The OpenSSL AESNI engine is available in Solaris 11 x86 for both the 64- and 32-bit versions of OpenSSL. It is not available with Solaris 10. You must have a processor that supports AESNI instructions, otherwise OpenSSL will fall back to the older, slower AES implementation without AESNI. Processors that support AESNI include most Westmere and Sandy Bridge class processor architectures. Some low-end processors (such as for mobile/laptop platforms) do not support AESNI. The easiest way to determine if the processor supports AESNI is with the isainfo -v command—look for "amd64" and "aes" in the output: $ isainfo -v 64-bit amd64 applications pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov amd_sysc cx8 tsc fpu Conclusion The Solaris 11 OpenSSL aesni engine provides easy access to powerful Intel AESNI hardware cryptography, in addition to Solaris userland PKCS#11 libraries and Solaris crypto kernel modules.

    Read the article

  • Dynamically creating a Generic Type at Runtime

    - by Rick Strahl
I learned something new today. Not uncommon, but it's a core .NET runtime feature I simply did not know, although I know I've run into this issue a few times and worked around it in other ways. Today there was no working around it and a few folks on Twitter pointed me in the right direction. The question I ran into is: How do I create a type instance of a generic type when I have dynamically acquired the type at runtime? Yup, it's not something that you do every day, but when you're writing code that parses objects dynamically at runtime it comes up from time to time. In my case it's in the bowels of a custom JSON parser. After some thought triggered by a comment today I realized it would be fairly easy to implement two-way Dictionary parsing for most concrete dictionary types. I could use a custom Dictionary serialization format that serializes as an array of key/value objects. Basically I can use a custom type (that matches the JSON signature) to hold my parsed dictionary data and then add it to the actual dictionary when parsing is complete. Generic Types at Runtime One issue that came up in the process was how to figure out what type the Dictionary<K,V> generic parameters take. Reflection actually makes it fairly easy to figure out generic types at runtime with code like this: if (arrayType.GetInterface("IDictionary") != null) { if (arrayType.IsGenericType) { var keyType = arrayType.GetGenericArguments()[0]; var valueType = arrayType.GetGenericArguments()[1]; … } } The GetArrayType method gets passed a type instance that is the array or array-like object that is rendered in JSON as an array (which includes IList, IDictionary, IDataReader and a few others). In my case the type passed would be something like Dictionary<string, CustomerEntity>. So I know what the parent container class type is. Based on the container type it's then possible to use GetGenericArguments() to retrieve all the generic types in sequential order of definition (ie. string, CustomerEntity). That's the easy part. Creating a Generic Type and Providing Generic Parameters at RunTime The next problem is: how do I get a concrete type instance for the generic type? I know the type name and I have a type instance, but it's generic, so how do I get a type reference to keyvaluepair<K,V> that is specific to the keyType and valueType above? Here are a couple of things that come to mind but that don't work (and yes, I tried them unsuccessfully first): Type elementType = typeof(keyvalue<keyType, valueType>); Type elementType = typeof(keyvalue<typeof(keyType), typeof(valueType)>); The problem is that this explicit syntax expects a type literal, not some dynamic runtime value, so both of the above won't even compile. It turns out the way to create a generic type at runtime is using a fancy bit of syntax that until today I was completely unaware of: Type elementType = typeof(keyvalue<,>).MakeGenericType(keyType, valueType); The key is the typeof(keyvalue<,>) bit, which looks weird at best. It works however and produces a non-generic type reference. You can see the difference between the full generic type and the non-typed (?) generic type in the debugger: The nonGenericType doesn't show any type specialization, while the elementType type shows the string, CustomerEntity (truncated above) in the type name. Once the full type reference exists (elementType) it's then easy to create an instance.
In my case the parser parses through the JSON and when it completes parsing the value/object it creates a new keyvalue<T,V> instance. Now that I know the element type that's pretty trivial with: // Objects start out null until we find the opening tag resultObject = Activator.CreateInstance(elementType); Here the result object is picked up by the JSON array parser, which creates an instance of the child object (keyvalue<K,V>) and then parses and assigns values from the JSON document using the type's key/value property signature. Internally the parser then takes each individually parsed item and adds it to a List<keyvalue<K,V>> of items. Parsing through a Generic type when you only have Runtime Type Information When parsing of the JSON array is done, the List needs to be turned into a de facto Dictionary<K,V>. This should be easy since I know that I'm dealing with an IDictionary, and I know the generic types for the key and value. The problem again, though, is that this needs to happen at runtime, which would mean using several Convert.ChangeType() calls in the code to dynamically cast at runtime. Yuk. In the end I decided the easier and probably only slightly slower way to do this is to use the dynamic type to collect the items and assign them to avoid all the dynamic casting madness: else if (IsIDictionary) { IDictionary dict = Activator.CreateInstance(arrayType) as IDictionary; foreach (dynamic item in items) { dict.Add(item.key, item.value); } return dict; } This code creates an instance of the generic dictionary type first, then loops through all of my custom keyvalue<K,V> items and assigns them to the actual dictionary. By using dynamic here I can sidestep all the explicit type conversions that would be required in the three highlighted areas (not to mention that this nested method doesn't have access to the dictionary item generic types here). Static <- -> Dynamic Dynamic casting in a static language like C# is a bitch to say the least. This is one of the few times when I've cursed static typing and the arcane syntax that's required to coax types into the right format. It works but it's pretty nasty code. If it weren't for dynamic, that last bit of code would have been pretty ugly as well, with a bunch of Convert.ChangeType() calls to litter the code. Fortunately this type of type convulsion is rather rare and reserved for system level code. It's not every day that you create a string to object parser after all :-) © Rick Strahl, West Wind Technologies, 2005-2011. Posted in .NET, CSharp.
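To see the MakeGenericType/Activator combination from the post above in isolation, here is a small, self-contained sketch. It uses the plain framework Dictionary<,> type rather than the parser's keyvalue<K,V> class, so the type names are illustrative only.

// Stand-alone illustration of the technique described above: build a closed generic type
// from Type objects that are only known at runtime, then instantiate and fill it.
using System;
using System.Collections;
using System.Collections.Generic;

class GenericTypeAtRuntimeDemo
{
    static void Main()
    {
        // Pretend these came from reflection over some container type at runtime.
        Type keyType = typeof(string);
        Type valueType = typeof(int);

        // typeof(Dictionary<,>) is the open generic type; MakeGenericType closes it.
        Type dictType = typeof(Dictionary<,>).MakeGenericType(keyType, valueType);

        // Activator creates the instance; the non-generic IDictionary interface
        // lets us add items without knowing K and V at compile time.
        IDictionary dict = (IDictionary)Activator.CreateInstance(dictType);
        dict.Add("answer", 42);

        Console.WriteLine(dict.GetType());   // Dictionary`2[System.String,System.Int32]
        Console.WriteLine(dict["answer"]);   // 42
    }
}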

    Read the article

  • Web Platform Installer bundles for Visual Studio 2010 SP1 - and how you can build your own WebPI bundles

    - by Jon Galloway
    Visual Studio SP1 is  now available via the Web Platform Installer, which means you've got three options: Download the 1.5 GB ISO image Run the 750KB Web Installer (which figures out what you need to download) Install via Web PI Note: I covered some tips for installing VS2010 SP1 last week - including some that apply to all of these, such as removing options you don't use prior to installing the service pack to decrease the installation time and download size. Two Visual Studio 2010 SP1 Web PI packages There are actually two WebPI packages for VS2010 SP1. There's the standard Visual Studio 2010 SP1 package [Web PI link], which includes (quoting ScottGu's post): VS2010 2010 SP1 ASP.NET MVC 3 (runtime + tools support) IIS 7.5 Express SQL Server Compact Edition 4.0 (runtime + tools support) Web Deployment 2.0 The notes on that package sum it up pretty well: Looking for the latest everything? Look no further. This will get you Visual Studio 2010 Service Pack 1 and the RTM releases of ASP.NET MVC 3, IIS 7.5 Express, SQL Server Compact 4.0 with tooling, and Web Deploy 2.0. It's the value meal of Microsoft products. Tell your friends! Note: This bundle includes the Visual Studio 2010 SP1 web installer, which will dynamically determine the appropriate service pack components to download and install. This is typically in the range of 200-500 MB and will take 30-60 minutes to install, depending on your machine configuration. There is also a Visual Studio 2010 SP1 Core package [Web PI link], which only includes only the SP without any of the other goodies (MVC3, IIS Express, etc.). If you're doing any web development, I'd highly recommend the main pack since it the other installs are small, simple installs, but if you're working in another space, you might want the core package. Installing via the Web Platform Installer I generally like to go with the Web PI when possible since it simplifies most software installations due to things like: Smart dependency management - installing apps or tools which have software dependencies will automatically figure out which dependencies you don't have and add them to the list (which you can review before install) Simultaneous download and install - if your install includes more than one package, it will automatically pull the dependencies first and begin installing them while downloading the others Lists the latest downloads - no need to search around, as they're all listed based on a live feed Includes open source applications - a lot of popular open source applications are included as well as Microsoft software and tools No worries about reinstallation - WebPI installations detect what you've got installed, so for instance if you've got MVC 3 installed you don't need to worry about the VS2010 SP1 package install messing anything up In addition to the links I included above, you can install the WebPI from http://www.microsoft.com/web/downloads/platform.aspx, and if you have Web PI installed you can just tap the Windows key and type "Web Platform" to bring it up in the Start search list. You'll see Visual Studio SP1 listed in the spotlight list as shown below. That's the standard package, which includes MVC 3 / IIS 7.5 Express / SQL Compact / Web Deploy. If you just want the core install, you can use the search box in the upper right corner, typing in "Visual Studio SP1" as shown. Core Install: Use Web PI or the Visual Studio Web Installer? I think the big advantage of using Web PI to install VS 2010 SP1 is that it includes the other new bits. 
If you're going to install the SP1 core, I don't think there's as much advantage to using Web PI, as the Web PI Core install just downloads the Visual Studio Web Installer anyways. I think Web PI makes it a little easier to find the download, but not a lot. The Visual Studio Web Installer checks dependencies, so there's no big advantage there. If you do happen to hit any problems installing Visual Studio SP1 via Web PI, I'd recommend running the Visual Studio Web Installer, then running the Web PI VS 2010 SP1 package to get all the other goodies. I talked to one person who hit some random snag, recommended that, and it worked out. Custom Web Platform Installer bundles You can create links that will launch the Web Platform Installer with a custom list of tools. You can see an example of this by clicking through on the install button at http://asp.net/downloads (cancelling the installation dialog). You'll see this in the address bar: http://www.microsoft.com/web/gallery/install.aspx?appsxml=&appid=MVC3;ASPNET;NETFramework4;SQLExpress;VWD Notice that the appid querystring parameter includes a semicolon delimited list, and you can make your own custom Web PI links with your own desired app list. I can think of a lot of cases where that would be handy: linking to a recommended software configuration from a software project or product, setting up a recommended / documented / supported install list for a software development team or IT shop, etc. For instance, here's a link that installs just VS2010 SP1 Core and the SQL CE tools: http://www.microsoft.com/web/gallery/install.aspx?appsxml=&appid=VS2010SP1Core;SQLCETools Note: If you've already got all or some of the products installed, the display will reflect that. On my dev box which has the full SP1 package, here's what the above link gives me: Here's another example - on a fresh box I created a link to install MVC 3 and the Web Farm Framework (http://www.microsoft.com/web/gallery/install.aspx?appsxml=&appid=MVC3;WebFarmFramework) and got the following items added to the cart: But where do I get the App ID's? Aha, that's the trick. You can link to a list of cool packages, but you need to know the App ID's to link to them. To figure that out, I turned on tracing in Web Platform Installer  (also handy if you're ever having trouble with a WebPI install) and from the trace logs saw that the list of packages is pulled from an XML file: DownloadManager Information: 0 : Loading product xml from: https://go.microsoft.com/?linkid=9763242 DownloadManager Verbose: 0 : Connecting to https://go.microsoft.com/?linkid=9763242 with (partial) headers: Referer: wpi://2.1.0.0/Microsoft Windows NT 6.1.7601 Service Pack 1 If-Modified-Since: Wed, 09 Mar 2011 14:15:27 GMT User-Agent:Platform-Installer/3.0.3.0(Microsoft Windows NT 6.1.7601 Service Pack 1) DownloadManager Information: 0 : https://go.microsoft.com/?linkid=9763242 responded with 302 DownloadManager Information: 0 : Response headers: HTTP/1.1 302 Found Cache-Control: private Content-Length: 175 Content-Type: text/html; charset=utf-8 Expires: Wed, 09 Mar 2011 22:52:28 GMT Location: https://www.microsoft.com/web/webpi/3.0/webproductlist.xml Server: Microsoft-IIS/7.5 X-AspNet-Version: 2.0.50727 X-Powered-By: ASP.NET Date: Wed, 09 Mar 2011 22:53:27 GMT Browsing to https://www.microsoft.com/web/webpi/3.0/webproductlist.xml shows the full list. You can search through that in your browser / text editor if you'd like, open it in Excel as an XML table, etc. 
Here's a list of the App ID's as of today: SMO SMO32 PHP52ForIISExpress PHP53ForIISExpress StaticContent DefaultDocument DirectoryBrowse HTTPErrors HTTPRedirection ASPNET NETExtensibility ASP CGI ISAPIExtensions ISAPIFilters ServerSideIncludes HTTPLogging LoggingTools RequestMonitor Tracing CustomLogging ODBCLogging BasicAuthentication WindowsAuthentication DigestAuthentication ClientCertificateMappingAuthentication IISClientCertificateMappingAuthentication URLAuthorization RequestFiltering IPSecurity StaticContentCompression DynamicContentCompression IISManagementConsole IISManagementScriptsAndTools ManagementService MetabaseAndIIS6Compatibility WASProcessModel WASNetFxEnvironment WASConfigurationAPI IIS6WPICompatibility IIS6ScriptingTools IIS6ManagementConsole LegacyFTPServer FTPServer WebDAV LegacyFTPManagementConsole FTPExtensibility AdminPack AdvancedLogging WebFarmFrameworkNonLoc ExternalCacheNonLoc WebFarmFramework WebFarmFrameworkv2 WebFarmFrameworkv2_beta ExternalCache ECacheUpdate ARRv1 ARRv2Beta1 ARRv2Beta2 ARRv2RC ARRv2NonLoc ARRv2 ARRv2Update MVC MVCBeta MVCRC1 MVCRC2 DBManager DbManagerUpdate DynamicIPRestrictions DynamicIPRestrictionsUpdate DynamicIPRestrictionsLegacy DynamicIPRestrictionsBeta2 FTPOOB IISPowershellSnapin RemoteManager SEOToolkit VS2008RTM MySQL SQLDriverPHP52IIS SQLDriverPHP53IIS SQLDriverPHP52IISExpress SQLDriverPHP53IISExpress SQLExpress SQLManagementStudio SQLExpressAdv SQLExpressTools UrlRewrite UrlRewrite2 UrlRewrite2NonLoc UrlRewrite2RC UrlRewrite2Beta UrlRewrite10 UrlScan MVC3Installer MVC3 MVC3LocInstaller MVC3Loc MVC2 VWD VWD2010SP1Pack NETFramework4 WebMatrix WebMatrix_v1Refresh IISExpress IISExpress_v1 IIS7 AspWebPagesVS AspWebPagesVS_1_0 Plan9 Plan9Loc WebMatrix_WHP SQLCE SQLCETools SQLCEVSTools SQLCEVSTools_4_0 SQLCEVSToolsInstaller_4_0 SQLCEVSToolsInstallerNew_4_0 SQLCEVSToolsInstallerRepair_EN_4_0 SQLCEVSToolsInstallerRepair_JA_4_0 SQLCEVSToolsInstallerRepair_FR_4_0 SQLCEVSToolsInstallerRepair_DE_4_0 SQLCEVSToolsInstallerRepair_ES_4_0 SQLCEVSToolsInstallerRepair_IT_4_0 SQLCEVSToolsInstallerRepair_RU_4_0 SQLCEVSToolsInstallerRepair_KO_4_0 SQLCEVSToolsInstallerRepair_ZH_CN_4_0 SQLCEVSToolsInstallerRepair_ZH_TW_4_0 VWD2008 WebDAVOOB WDeploy WDeploy_v2 WDeployNoSMO WDeploy11 WinCache52 WinCache53 NETFramework35 WindowsImagingComponent VC9Redist NETFramework20SP2 WindowsInstaller31 PowerShell PowerShellMsu PowerShell2 WindowsInstaller45 FastCGIUpdate FastCGIBackport FastCGIIIS6 IIS51 IIS60 SQLNativeClient SQLNativeClient2008 SQLNativeClient2005 SQLCLRTypes SQLCLRTypes32 SMO_10_1 MySQLConnector PHP52 PHP53 PHPManager VSVWD2010Feature VWD2010WebFeature_0 VWD2010WebFeature_1 VWD2010WebFeature_2 VS2010SP1Prerequisite RIAServicesToolkitMay2010 Silverlight4Toolkit Silverlight4Tools VSLS SSMAMySQL WebsitePanel VS2010SP1Core VS2010SP1Installer VS2010SP1Pack MissingVWDOrVSVWD2010Feature VB2010Beta2Express VCS2010Beta2Express VC2010Beta2Express RIAServicesToolkitApr2010 VS2010Beta1 VS2010RC VS2010Beta2 VS2010Beta2Express VS2k8RTM VSCPP2k8RTM VSVB2k8RTM VSCS2k8RTM VSVWDFeature LegacyWinCache SQLExpress2005 SSMS2005
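To make the custom-link idea described above concrete, here is a small helper sketch that glues a list of App IDs into such a URL. The base address and the semicolon-delimited appid format are the ones shown in the article; the helper itself is only an illustration and is not part of Web PI.

// Hedged sketch: build a custom Web PI install link from a list of App IDs.
// The URL format is copied from the article; the helper method is illustrative only.
using System;
using System.Collections.Generic;

class WebPiLinkBuilder
{
    static string BuildInstallLink(IEnumerable<string> appIds)
    {
        return "http://www.microsoft.com/web/gallery/install.aspx?appsxml=&appid="
               + string.Join(";", appIds);
    }

    static void Main()
    {
        // Example: VS2010 SP1 core plus the SQL CE tools, as in the article's example link.
        string link = BuildInstallLink(new[] { "VS2010SP1Core", "SQLCETools" });
        Console.WriteLine(link);
        // http://www.microsoft.com/web/gallery/install.aspx?appsxml=&appid=VS2010SP1Core;SQLCETools
    }
}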

    Read the article

  • A Communication System for XAML Applications

    - by psheriff
    In any application, you want to keep the coupling between any two or more objects as loose as possible. Coupling happens when one class contains a property that is used in another class, or uses another class in one of its methods. If you have this situation, then this is called strong or tight coupling. One popular design pattern to help with keeping objects loosely coupled is called the Mediator design pattern. The basics of this pattern are very simple; avoid one object directly talking to another object, and instead use another class to mediate between the two. As with most of my blog posts, the purpose is to introduce you to a simple approach to using a message broker, not all of the fine details. IPDSAMessageBroker Interface As with most implementations of a design pattern, you typically start with an interface or an abstract base class. In this particular instance, an Interface will work just fine. The interface for our Message Broker class just contains a single method “SendMessage” and one event “MessageReceived”. public delegate void MessageReceivedEventHandler( object sender, PDSAMessageBrokerEventArgs e); public interface IPDSAMessageBroker{  void SendMessage(PDSAMessageBrokerMessage msg);   event MessageReceivedEventHandler MessageReceived;} PDSAMessageBrokerMessage Class As you can see in the interface, the SendMessage method requires a type of PDSAMessageBrokerMessage to be passed to it. This class simply has a MessageName which is a ‘string’ type and a MessageBody property which is of the type ‘object’ so you can pass whatever you want in the body. You might pass a string in the body, or a complete Customer object. The MessageName property will help the receiver of the message know what is in the MessageBody property. public class PDSAMessageBrokerMessage{  public PDSAMessageBrokerMessage()  {  }   public PDSAMessageBrokerMessage(string name, object body)  {    MessageName = name;    MessageBody = body;  }   public string MessageName { get; set; }   public object MessageBody { get; set; }} PDSAMessageBrokerEventArgs Class As our message broker class will be raising an event that others can respond to, it is a good idea to create your own event argument class. This class will inherit from the System.EventArgs class and add a couple of additional properties. The properties are the MessageName and Message. The MessageName property is simply a string value. The Message property is a type of a PDSAMessageBrokerMessage class. public class PDSAMessageBrokerEventArgs : EventArgs{  public PDSAMessageBrokerEventArgs()  {  }   public PDSAMessageBrokerEventArgs(string name,     PDSAMessageBrokerMessage msg)  {    MessageName = name;    Message = msg;  }   public string MessageName { get; set; }   public PDSAMessageBrokerMessage Message { get; set; }} PDSAMessageBroker Class Now that you have an interface class and a class to pass a message through an event, it is time to create your actual PDSAMessageBroker class. This class implements the SendMessage method and will also create the event handler for the delegate created in your Interface. 
public class PDSAMessageBroker : IPDSAMessageBroker{  public void SendMessage(PDSAMessageBrokerMessage msg)  {    PDSAMessageBrokerEventArgs args;     args = new PDSAMessageBrokerEventArgs(      msg.MessageName, msg);     RaiseMessageReceived(args);  }   public event MessageReceivedEventHandler MessageReceived;   protected void RaiseMessageReceived(    PDSAMessageBrokerEventArgs e)  {    if (null != MessageReceived)      MessageReceived(this, e);  }} The SendMessage method will take a PDSAMessageBrokerMessage object as an argument. It then creates an instance of a PDSAMessageBrokerEventArgs class, passing to the constructor two items: the MessageName from the PDSAMessageBrokerMessage object and also the object itself. It may seem a little redundant to pass in the message name when that same message name is part of the message, but it does make consuming the event and checking for the message name a little cleaner – as you will see in the next section. Create a Global Message Broker In your WPF application, create an instance of this message broker class in the App class located in the App.xaml file. Create a public property in the App class and create a new instance of that class in the OnStartUp event procedure as shown in the following code: public partial class App : Application{  public PDSAMessageBroker MessageBroker { get; set; }   protected override void OnStartup(StartupEventArgs e)  {    base.OnStartup(e);     MessageBroker = new PDSAMessageBroker();  }} Sending and Receiving Messages Let’s assume you have a user control that you load into a control on your main window and you want to send a message from that user control to the main window. You might have the main window display a message box, or put a string into a status bar as shown in Figure 1. Figure 1: The main window can receive and send messages The first thing you do in the main window is to hook up an event procedure to the MessageReceived event of the global message broker. This is done in the constructor of the main window: public MainWindow(){  InitializeComponent();   (Application.Current as App).MessageBroker.     MessageReceived += new MessageReceivedEventHandler(       MessageBroker_MessageReceived);} One piece of code you might not be familiar with is accessing a property defined in the App class of your XAML application. Within the App.Xaml file is a class named App that inherits from the Application object. You access the global instance of this App class by using Application.Current. You cast Application.Current to ‘App’ prior to accessing any of the public properties or methods you defined in the App class. Thus, the code (Application.Current as App).MessageBroker, allows you to get at the MessageBroker property defined in the App class. In the MessageReceived event procedure in the main window (shown below) you can now check to see if the MessageName property of the PDSAMessageBrokerEventArgs is equal to “StatusBar” and if it is, then display the message body into the status bar text block control. void MessageBroker_MessageReceived(object sender,   PDSAMessageBrokerEventArgs e){  switch (e.MessageName)  {    case "StatusBar":      tbStatus.Text = e.Message.MessageBody.ToString();      break;  }} In the Page 1 user control’s Loaded event procedure you will send the message “StatusBar” through the global message broker to any listener using the following code: private void UserControl_Loaded(object sender,  RoutedEventArgs e){  // Send Status Message  (Application.Current as App).MessageBroker.    
SendMessage(new PDSAMessageBrokerMessage("StatusBar",      "This is Page 1"));} Since the main window is listening for the message ‘StatusBar’, it will display the value “This is Page 1” in the status bar at the bottom of the main window. Sending a Message to a User Control The previous example sent a message from the user control to the main window. You can also send messages from the main window to any listener as well. Remember that the global message broker is really just a broadcaster to anyone who has hooked into the MessageReceived event. In the constructor of the user control named ucPage1 you can hook into the global message broker’s MessageReceived event. You can then listen for any messages that are sent to this control by using a similar switch-case structure like that in the main window. public ucPage1(){  InitializeComponent();   // Hook to the Global Message Broker  (Application.Current as App).MessageBroker.    MessageReceived += new MessageReceivedEventHandler(      MessageBroker_MessageReceived);} void MessageBroker_MessageReceived(object sender,  PDSAMessageBrokerEventArgs e){  // Look for messages intended for Page 1  switch (e.MessageName)  {    case "ForPage1":      MessageBox.Show(e.Message.MessageBody.ToString());      break;  }} Once the ucPage1 user control has been loaded into the main window you can then send a message using the following code: private void btnSendToPage1_Click(object sender,  RoutedEventArgs e){  PDSAMessageBrokerMessage arg =     new PDSAMessageBrokerMessage();   arg.MessageName = "ForPage1";  arg.MessageBody = "Message For Page 1";   // Send a message to Page 1  (Application.Current as App).MessageBroker.SendMessage(arg);} Since the MessageName matches what is in the ucPage1 MessageReceived event procedure, ucPage1 can do anything in response to that event. It is important to note that when the message gets sent it is sent to all MessageReceived event procedures, not just the one that is looking for a message called “ForPage1”. If the user control ucPage1 is not loaded and this message is broadcast, but no other code is listening for it, then it is simply ignored. Remove Event Handler In each class where you add an event handler to the MessageReceived event you need to make sure to remove those event handlers when you are done. Failure to do so can cause a strong reference to the class and thus not allow that object to be garbage collected. In each of your user control’s make sure in the Unloaded event to remove the event handler. private void UserControl_Unloaded(object sender, RoutedEventArgs e){  if (_MessageBroker != null)    _MessageBroker.MessageReceived -=         _MessageBroker_MessageReceived;} Problems with Message Brokering As with most “global” classes or classes that hook up events to other classes, garbage collection is something you need to consider. Just the simple act of hooking up an event procedure to a global event handler creates a reference between your user control and the message broker in the App class. This means that even when your user control is removed from your UI, the class will still be in memory because of the reference to the message broker. This can cause messages to still being handled even though the UI is not being displayed. It is up to you to make sure you remove those event handlers as discussed in the previous section. If you don’t, then the garbage collector cannot release those objects. 
Instead of using events to send messages from one object to another you might consider registering your objects with a central message broker. This message broker now becomes a collection class into which you pass an object and what messages that object wishes to receive. You do end up with the same problem however. You have to un-register your objects; otherwise they still stay in memory. To alleviate this problem you can look into using the WeakReference class as a method to store your objects so they can be garbage collected if need be. Discussing Weak References is beyond the scope of this post, but you can look this up on the web. Summary In this blog post you learned how to create a simple message broker system that will allow you to send messages from one object to another without having to reference objects directly. This does reduce the coupling between objects in your application. You do need to remember to get rid of any event handlers prior to your objects going out of scope or you run the risk of having memory leaks and events being called even though you can no longer access the object that is responding to that event. NOTE: You can download the sample code for this article by visiting my website at http://www.pdsa.com/downloads. Select “Tips & Tricks”, then select “A Communication System for XAML Applications” from the drop down list.
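As a follow-on to the WeakReference suggestion above, here is one minimal sketch of what a registration-based broker could look like. The IMessageListener interface and the WeakMessageBroker class are assumptions made up for this illustration; only PDSAMessageBrokerMessage comes from the article.

// Hedged sketch: a registration-style broker that holds subscribers behind WeakReference,
// so a subscriber that is never unregistered can still be garbage collected.
using System;
using System.Collections.Generic;

public interface IMessageListener
{
    void OnMessage(PDSAMessageBrokerMessage msg);
}

public class WeakMessageBroker
{
    // Message name -> weakly held listeners interested in that message.
    private readonly Dictionary<string, List<WeakReference>> _listeners =
        new Dictionary<string, List<WeakReference>>();

    public void Register(string messageName, IMessageListener listener)
    {
        List<WeakReference> list;
        if (!_listeners.TryGetValue(messageName, out list))
        {
            list = new List<WeakReference>();
            _listeners[messageName] = list;
        }
        list.Add(new WeakReference(listener));
    }

    public void SendMessage(PDSAMessageBrokerMessage msg)
    {
        List<WeakReference> list;
        if (!_listeners.TryGetValue(msg.MessageName, out list))
            return;

        // Walk backwards so dead entries can be pruned while iterating.
        for (int i = list.Count - 1; i >= 0; i--)
        {
            var listener = list[i].Target as IMessageListener;
            if (listener == null)
                list.RemoveAt(i);          // the subscriber was garbage collected
            else
                listener.OnMessage(msg);
        }
    }
}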

    Read the article

  • ASP.NET MVC 3 Hosting :: Rolling with Razor in MVC v3 Preview

    - by mbridge
Razor is an alternate view engine for asp.net MVC.  It was introduced in the “WebMatrix” tool and has now been released as part of the asp.net MVC 3 preview 1.  Basically, Razor allows us to replace the clunky <% %> syntax with a much cleaner coding model, which integrates very nicely with HTML.  Additionally, it provides some really nice features for master page type scenarios and you don’t lose access to any of the features you are currently familiar with, such as HTML helper methods. First, download and install the ASP.NET MVC Preview 1.  You can find this at http://www.microsoft.com/downloads/details.aspx?FamilyID=cb42f741-8fb1-4f43-a5fa-812096f8d1e8&displaylang=en. Now, follow these steps to create your first asp.net mvc project using Razor: 1. Open Visual Studio 2010 2. Create a new project.  Select File->New->Project (Shift Control N) 3. You will see the list of project types which should look similar to what’s shown:   4. Select “ASP.NET MVC 3 Web Application (Razor).”  Set the application name to RazorTest and the path to c:\projects\RazorTest for this tutorial. If you accidentally select ASPX, you will end up with the standard asp.net view engine and template, which isn’t what you want. 5. For this tutorial, and ONLY for this tutorial, select “No, do not create a unit test project.”  In general, you should create and use a unit test project.  Code without unit tests is kind of like diet ice cream.  It just isn’t very good. Now, once we have this done, our brand new project will be created.  In all likelihood, Visual Studio will leave you looking at the “HomeController.cs” class, as shown below: Immediately, you should notice one difference.  The Index action used to look like: public ActionResult Index() { ViewData["Message"] = "Welcome to ASP.Net MVC!"; return View(); } While this will still compile and run just fine, ASP.Net MVC 3 has a much nicer way of doing this: public ActionResult Index() { ViewModel.Message = "Welcome to ASP.Net MVC!"; return View(); } Instead of using ViewData we are using the new ViewModel object, which uses the new dynamic data typing of .Net 4.0 to allow us to express ourselves much more cleanly.  This isn’t a tutorial on ALL of MVC 3, but the ViewModel concept is one we will need as we dig into Razor. What comes in the box? When we create a project using the ASP.Net MVC 3 Template with Razor, we get a standard project setup, just like we did in ASP.NET MVC 2.0 but with some differences.  Instead of seeing “.aspx” view files and “.ascx” files, we see files with the “.cshtml” extension, which is the default Razor extension.  Before we discuss the details of a razor file, one thing to keep in mind is that since this is an extremely early preview, intellisense is not currently enabled with the razor view engine.  This is promised as an update before the final release.  Just like with the aspx view engine, the convention of the folder name for a set of views matching the controller name without the word “Controller” still stands.  Similarly, each action in the controller will usually have a corresponding view file in the appropriate view directory.  Remember, in asp.net MVC, convention over configuration is key to successful development! The initial template organizes views in the following folders, located in the project under Views: - Account – The default account management views used by the Account controller.  Each file represents a distinct view. - Home – Views corresponding to the appropriate actions within the home controller.
- Shared – This contains common view objects used by multiple views.  Within here, master pages are stored, as well as partial page views (user controls).  By convention, these partial views are named “_XXXPartial.cshtml” where XXX is the appropriate name, such as _LogonPartial.cshtml.  Additionally, display templates are stored under here. With this in mind, let us take a look at the index.cshtml file under the home view directory.  When you open up index.cshtml you should see 1:   @inherits System.Web.Mvc.WebViewPage 2:  @{ 3:          View.Title = "Home Page"; 4:       LayoutPage = "~/Views/Shared/_Layout.cshtml"; 5:   } 6:  <h2>@View.Message</h2> 7:  <p> 8:     To learn more about ASP.NET MVC visit <a href="http://asp.net/mvc" title="ASP.NET MVC     9:    Website">http://asp.net/mvc</a>. 10:  </p> So looking through this, we observe the following facts: Line 1 imports the base page that all views (using Razor) are based on, which is System.Web.Mvc.WebViewPage.  Note that this is different than System.Web.MVC.ViewPage which is used by asp.net MVC 2.0 Also note that instead of the <% %> syntax, we use the very simple ‘@’ sign.  The View Engine contains enough context sensitive logic that it can even distinguish between @ in code and @ in an email.  It’s a very clean markup.  Line 2 introduces the idea of a code block in razor.  A code block is a scoping mechanism just like it is in a normal C# class.  It is designated by @{… }  and any C# code can be placed in between.  Note that this is all server side code just like it is when using the aspx engine and <% %>.  Line 3 allows us to set the page title in the client page’s file.  This is a new feature which I’ll talk more about when we get to master pages, but it is another of the nice things razor brings to asp.net mvc development. Line 4 is where we specify our “master” page, but as you can see, you can place it almost anywhere you want, because you tell it where it is located.  A Layout Page is similar to a master page, but it gains a bit when it comes to flexibility.  Again, we’ll come back to this in a later installment.  Line 6 and beyond is where we display the contents of our view.  No more using <%: %> intermixed with code.  Instead, we get to use very clean syntax such as @View.Message.  This is a lot easier to read than <%:@View.Message%> especially when intermixed with html.  For example: <p> My name is @View.Name and I live at @View.Address </p> Compare this to the equivalent using the aspx view engine <p> My name is <%:View.Name %> and I live at <%: View.Address %> </p> While not an earth shaking simplification, it is easier on the eyes.  As  we explore other features, this clean markup will become more and more valuable.
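To tie the controller and view sides of the excerpt together, here is a short sketch of an action feeding the kind of @View.Name / @View.Address markup shown above. The action and property names are invented for illustration; ViewModel and @View are the MVC 3 Preview 1 names used in this article, and this is only a sketch, not part of the article's sample project.

// Hedged sketch: a controller action using the dynamic ViewModel bag described above.
// The action name, property names and the view markup in the trailing comment are made up.
using System.Web.Mvc;

public class HomeController : Controller
{
    public ActionResult About()
    {
        ViewModel.Name = "Jane Developer";
        ViewModel.Address = "1 Razor Way";
        return View();
    }
}

// The matching About.cshtml view could then render the values with Razor's @ syntax:
//   <p> My name is @View.Name and I live at @View.Address </p>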

    Read the article

  • The DOS DEBUG Environment

    - by MarkPearl
    Today I thought I would go back in time and have a look at the DEBUG command that has been available since the dawn of DOS, MS-DOS and Microsoft Windows. Up to today I always knew it was there, but had no clue how to use it, so for those that are interested this might be a great geek party trick to pull out when you want to awe the younger generation and show them what “real” programming is about. But wait, you will have to do it relatively quickly, as it seems like DEBUG was finally dumped from the Windows group in Windows 7. Not to worry – pull out that Windows XP box, which will get you even more geek points, and you can still poke DEBUG a bit. So, for those that are interested and want to find out a bit about the history of DEBUG, read the wiki link here. That all put aside, let’s get our hands dirty…

How to Start DEBUG in Windows
- Make sure your version of Windows supports DEBUG.
- Open up a console window.
- Make a directory where you want to play with debug – in my instance I called it C221.
- Enter the directory and type Debug.
- You will get a response with a – as illustrated in the image below…

The commands available in DEBUG
There are several commands available in DEBUG. The most common ones are: A (Assemble), R (Register), T (Trace), G (Go), D (Dump or Display), U (Unassemble), E (Enter), P (Proceed), N (Name), L (Load), W (Write), H (Hexadecimal), I (Input), O (Output), Q (Quit). I am not going to cover all these commands, but what I will do is go through a few of them briefly.

A is for Assemble Command (to write code)
The A command translates assembly language statements into machine code. It is quite useful for writing small assembly programs. Below I have written a very basic assembly program. The code typed out is as follows:
mov ax,0015
mov cx,0023
sub cx,ax
mov [120],al
mov cl,[120]
nop

R is for Register (to jump to a point in memory)
The r command turns out to be one of the most frequent commands you will use in DEBUG. It allows you to view the contents of registers and to change their values. It can be used with the following combinations…
- R – Displays the contents of all the registers
- R f – Displays the flags register
- R register_name – Displays the contents of a specific register
All three methods are illustrated in the image above.

T is for Trace (to execute a program step by step)
The t command allows us to execute the program step by step. Before we can trace the program we need to point back to the beginning of the program. We do this by typing in r ip, which moves us back to memory point 100. We then type t, which executes the first line of code (line 100), as shown in the image below starting from the red arrow. You can see from the above image that the register AX now contains 0015 as per our instruction mov ax,0015. You can also see that the IP points to line 0103, which has the MOV CX,0023 command. If we type t again it will now execute the second line of the program, which moves 23 into the cx register. Again, we can see that the line of code was executed and that the CX register now holds the value of 23. What I would like to highlight now is the section underlined in red. These are the status flags.
The ones we are going to look at now are the 1st (NV), 4th (PL), 5th (NZ) & 8th (NC):
- NV means No oVerflow; the alternate would be OV.
- PL means that the sign of the previous arithmetic operation was PLus; the alternate would be NG (Negative).
- NZ means that the result of the previous arithmetic operation was Not Zero; the alternate would be ZR.
- NC means that No final Carry resulted from the previous arithmetic operation; CY means that there was a final Carry.
We could now follow this process of entering the t command until the entire program is executed line by line.

G is for Go (to execute a program up to a certain line number)
So we have looked at executing a program line by line, which is fine if your program is minuscule BUT totally impractical if we have any decent-sized program. A quicker way to run some lines of code is to use the G command. The ‘g’ command executes a program up to a certain specified point. It can be used in connection with the reset IP command. You would set your initial point and then run the G command with the line you want to end on.

P is for Proceed (similar to trace but slightly more streamlined)
Another command similar to trace is the proceed command. All that the p command does is that, if it is called and it encounters a CALL, INT or LOOP command, it terminates the program execution. In the example below I modified our example program to include an int 20 at the end of it, as illustrated in the image below… Then, when executing the code and encountering the int 20 command, I typed the P command and the program terminated normally (illustrated below).

D is for Dump (or, for those more polite, Display)
So, we have all these assembly lines of code, but if you have ever opened up an exe or com file in a text/hex editor, it looks nothing like assembly code. The D command is a way that we can see what our code looks like in memory (or in a hex editor). If we examine the image above, we can see that DEBUG is storing our assembly code with each instruction following immediately after the previous one. For instance at memory address 110 we have int and at 111 we have 20. If we examine the dump of memory we can see that at memory point 110 CD is stored and at memory point 111 20 is stored.

U is for Unassemble (or convert machine code to assembly code)
So up to now we have gone through a bunch of commands, but probably one of the most useful is the U command. Let’s say we don’t understand machine code so well and so instead we want to see it in its equivalent assembly code. We can type the U command followed by the start memory point, followed by the end memory point, and it will show us the assembly code equivalent of the machine code.

E is for a bunch of things…
The E command can be used for a bunch of things… One example is to enter data or machine code instructions directly into memory. It can also be used to display the contents of memory locations. I am not going to worry too much about it in this post.

N / L / W is for Name, Load & Write
So we have written our assembly code in debug, and now we want to save it to disk, write it as a com file or load it. This is where the N, L & W commands come in handy. The n command is used to give a name to the executable program file and is pretty simple to use. The w command is a bit trickier. It uses the bx and cx registers to work out how much memory to save to disk, so you need to set the bx and cx values before it will write your code. Let’s look at an example illustrated below.
You do this by calling the r command followed by either bx or cx. We can then go to the directory where we were working and will see the new file with the name we specified. The L command is relatively simple. You would first specify the name of the file you would like to load using the N command, and then call the L command.

Q is for Quit
The last command that I am going to write about in this post is the Q command. Simply put, calling the Q command exits DEBUG.

Commands we did not Cover
Out of the standard DEBUG commands we covered A, T, G, D, U, E, P, R, N, L & W. The ones we did not cover were H, I & O – I might make mention of these in a later post, but for the basics they are not really needed.

Some Useful Resources
Please note this post is based on the COS2213 handouts for UNISA.
A Guide to DEBUG – http://mirror.href.com/thestarman/asm/debug/debug.htm#NT
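As a postscript to the N / L / W section, here is the whole save sequence written out end to end. The file name is made up, and the length of 12h bytes simply assumes the example program above ends at offset 0112h (check your own end address first, for example with u); the notes in parentheses are annotations, not something you type:

-n first.com      (name the output file)
-r bx             (DEBUG shows BX and a : prompt – enter 0, the high word of the length)
-r cx             (enter 12 at the : prompt – the low word of the length, in hex)
-w                (writes that many bytes, starting at offset 100, to FIRST.COM)
-q                (exit DEBUG)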

    Read the article

  • Generic Http Module

    - by MartinF
    The problem
I am trying to make a generic http module in asp.net C# for handling roles defined by an enum, which I want to be able to change by a generic parameter. This will make it possible to use the generic module with any kind of enum defined for each project. The module hooks into the Authenticate event of the FormsAuthenticationModule, and is called on each request to the website. The module exposes public events which could be defined in the global.asax. But I can't seem to figure out how to make the generic http module work like a non-generic module. There are 3 main problems:

1. I can't register the generic http module in the web.config like any other module, as I can't specify the generic parameter – or is that possible somehow? The way to solve that, as far as I can figure out, is to create a non-generic http module that initializes the generic HttpModule (the generic parameter is defined in a custom section for the module in the web.config). But that introduces the next problem.

2. I can't find out how to make the public events exposed by the generic module available to hook into through the global.asax, as you would normally do with a non-generic module by just making a public method with a name like ModuleClassName_PublicEventName. The Init() method on the http module gets a reference to the HttpApplication object created in the global.asax. I don't know if it somehow could be possible with reflection to search for the methods and, if they are defined in the global.asax (HttpApplication super class), hook them up with the correct event handler? Or if any methods on the HttpApplication object can be used?

3. How would I store and later get a reference to the generic module created in the non-generic module? I can get the non-generic module with HttpContext.Current.ApplicationInstance.Modules.Get("TheModule"); but is there any way I can store a reference to the generic module in the non-generic module (I can't figure out how it should be possible), or store it somewhere else so I can always get it? If I can get a reference to the generic module from the global.asax etc., the events mentioned in nr. 2 can be manually wired to the methods.

Thoughts and other possible solutions
Instead of registering the module in the web.config, it can be manually initialized by overriding the Init method of the HttpApplication and calling the Init method on the module. But that will introduce some new problems. The module will no longer be added to the ModulesCollection, so I will need to store a reference somewhere else. This could be done with a property in the global.asax, and by implementing an interface, or by creating a generic abstract base type inheriting from HttpApplication that the global.asax could inherit from. In the generic abstract base type I could also override the Init method. It will still not automatically hook up methods in the global.asax with events in the generic module. If it is possible with reflection to search for defined methods in the super type of the HttpApplication, it could be automatically done that way. But I can wire the methods in the global.asax with the events in the generic module manually, either in the Init method or anywhere else, by getting a reference to the generic module. It doesn't really need to implement the IHttpModule interface if I choose to manually initialize the generic module. I could just as well move all the code to the abstract base type inheriting from the HttpApplication.
I would prefer to register the module simply by defining it in the web.config, as it will be the easiest and most natural / logical solution. Also it would be great if it could be kept as an HttpModule instead of having to define an abstract base type inheriting from HttpApplication, else it will be more tied up and not as loose and pluggable as I wanted it to be (but maybe it is not possible). Another alternative would be to make it all static. As far as I can figure out I would have to somehow make sure that only one method can be added to the public static events, so it won't add a reference each time a new instance of the global.asax is created. I simply can't find out what is the best solution. I have been messing around with this and thinking about it for days now. Maybe there is an option that I haven't thought of? Hope anyone out there can help me.
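For what it's worth, here is one way the wrapper idea in point 1 could be sketched out. None of this is from the original post – RoleModule<TEnum> stands in for the generic module, and reading the enum type name from appSettings is only to keep the sketch short; a custom config section would work the same way:

using System;
using System.Configuration;
using System.Web;

// Non-generic shim that can be registered in web.config like any other module.
public class RoleModuleWrapper : IHttpModule
{
    // Exposing the inner module lets global.asax (or anything else) reach it
    // and wire its events up manually - one answer to point 3.
    public static IHttpModule Inner { get; private set; }

    public void Init(HttpApplication application)
    {
        // Hypothetical appSettings key holding the enum type, e.g. "MyApp.Roles, MyApp"
        string enumTypeName = ConfigurationManager.AppSettings["roleEnumType"];
        Type enumType = Type.GetType(enumTypeName, true);

        // Close the generic module over the configured enum and let it do the real work.
        Type moduleType = typeof(RoleModule<>).MakeGenericType(enumType);
        Inner = (IHttpModule)Activator.CreateInstance(moduleType);
        Inner.Init(application);
    }

    public void Dispose()
    {
        if (Inner != null) Inner.Dispose();
    }
}

This assumes the generic module itself implements IHttpModule; with the Inner property in place, an overridden Init in global.asax (or Application_Start) can cast it to its concrete type and attach handlers to its events, rather than relying on the ModuleClassName_EventName naming convention.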

    Read the article

  • Solaris X86 AESNI OpenSSL Engine

    - by danx
    Solaris X86 AESNI OpenSSL Engine
Cryptography is a major component of secure e-commerce. Since cryptography is compute intensive and adds a significant load to applications, such as SSL web servers (https), crypto performance is an important factor. Providing accelerated crypto hardware greatly helps these applications and will help lead to a wider adoption of cryptography, and lower cost, in e-commerce and other applications. The Intel Westmere microprocessor has six new instructions to accelerate AES encryption. They are called "AESNI" for "AES New Instructions". These are unprivileged instructions, so no "root", other elevated access, or context switch is required to execute these instructions. These instructions are used in a new built-in OpenSSL 1.0 engine available in Solaris 11, the aesni engine.

Previous Work
Previously, AESNI instructions were introduced into the Solaris x86 kernel and libraries. That is, the "aes" kernel module (used by IPsec and other kernel modules) and the Solaris pkcs11 library (for user applications). These are available in Solaris 10 10/09 (update 8) and above, and Solaris 11. The work here is to add the aesni engine to OpenSSL.

X86 AESNI Instructions
Intel's Xeon 5600 is one of the processors that support AESNI. This processor is used in the Sun Fire X4170 M2. As mentioned above, six new instructions accelerate AES encryption in processor silicon. The new instructions are:
- aesenc – performs one round of AES encryption. One encryption round is composed of these steps: substitute bytes, shift rows, mix columns, and xor the round key.
- aesenclast – performs the final encryption round, which is the same as above, except omitting the mix columns (which is only needed for the next encryption round).
- aesdec – performs one round of AES decryption.
- aesdeclast – performs the final AES decryption round.
- aeskeygenassist – helps expand the user-provided key into a "key schedule" of keys, one per round.
- aesimc – performs an "inverse mixed columns" operation to convert the encryption key schedule into a decryption key schedule.
- pclmulqdq – not an AESNI instruction, but performs "carryless multiply" operations to accelerate AES GCM mode.
Since the AESNI instructions are implemented in hardware, they take a constant number of cycles and are not vulnerable to side-channel timing attacks that attempt to discern some bits of data from the time taken to encrypt or decrypt the data.

Solaris x86 and OpenSSL Software Optimizations
Having X86 AESNI hardware crypto instructions is all well and good, but how do we access them? The software is available with Solaris 11 and is used automatically if you are running Solaris x86 on an AESNI-capable processor. AESNI is used internally in the kernel through kernel crypto modules and is available in user space through the PKCS#11 library. For OpenSSL on Solaris 11, AESNI crypto is available directly with a new built-in OpenSSL 1.0 engine, called the "aesni engine." This is in lieu of the extra overhead of going through the Solaris OpenSSL pkcs11 engine, which accesses Solaris crypto and digest operations. Instead, AESNI assembly is included directly in the new aesni engine. Instead of including the aesni engine in a separate library in /lib/openssl/engines/, the aesni engine is "built-in", meaning it is included directly in OpenSSL's libcrypto.so.1.0.0 library. This reduces overhead and the need to manually specify the aesni engine.
Since the engine is built-in (that is, in libcrypto.so.1.0.0), the openssl -engine command line flag or API call is not needed to access the engine—the aesni engine is used automatically on AESNI hardware.

Ciphers and Digests supported by OpenSSL aesni engine
The OpenSSL aesni engine auto-detects if it's running on AESNI hardware and uses AESNI encryption instructions for these ciphers: AES-128-CBC, AES-192-CBC, AES-256-CBC, AES-128-CFB128, AES-192-CFB128, AES-256-CFB128, AES-128-CTR, AES-192-CTR, AES-256-CTR, AES-128-ECB, AES-192-ECB, AES-256-ECB, AES-128-OFB, AES-192-OFB, and AES-256-OFB.

Implementation of the OpenSSL aesni engine
The AESNI assembly language routines are not a part of the regular OpenSSL 1.0.0 release. AESNI is a part of the "HEAD" ("development" or "unstable") branch of OpenSSL, for future release. But AESNI is also available as a separate patch provided by Intel to the OpenSSL project for OpenSSL 1.0.0. A minimal amount of "glue" code in the aesni engine sits between the OpenSSL libcrypto.so.1.0.0 library and the assembly functions. The aesni engine code is separate from the base OpenSSL code and requires patching only a few source files to use it. That means OpenSSL can be more easily updated to future versions without losing the performance from the built-in aesni engine.

OpenSSL aesni engine Performance
Here are some graphs of aesni engine performance I measured by running openssl speed -evp $algorithm, where $algorithm is aes-128-cbc, aes-192-cbc, and aes-256-cbc. These are using the 64-bit version of openssl on the same AESNI hardware, a Sun Fire X4170 M2 with an Intel Xeon E5620 @2.40GHz, running Solaris 11 FCS. "Before" is openssl without the aesni engine and "after" is openssl with the aesni engine. The numbers are MBytes/second.

OpenSSL aesni engine performance on Sun Fire X4170 M2 (Xeon E5620 @2.40GHz) (Higher is better; "before"=OpenSSL on AESNI without AESNI engine software, "after"=OpenSSL AESNI engine)

As you can see the speedup is dramatic for all 3 key lengths and for data sizes from 16 bytes to 8 Kbytes—AESNI is about 7.5-8x faster than hand-coded amd64 assembly (without aesni instructions).

Verifying the OpenSSL aesni engine is present
The easiest way to determine if you are running the aesni engine is to type "openssl engine" on the command line. No configuration, API, or command line options are needed to use the OpenSSL aesni engine. If you are running on Intel AESNI hardware with Solaris 11 FCS, you'll see this output indicating you are using the aesni engine:

intel-westmere $ openssl engine
(aesni) Intel AES-NI engine
(dynamic) Dynamic engine loading support
(pkcs11) PKCS #11 engine support

If you are running on Intel without AESNI hardware you'll see this output indicating the hardware can't support the aesni engine:

intel-nehalem $ openssl engine
(aesni) Intel AES-NI engine (no-aesni)
(dynamic) Dynamic engine loading support
(pkcs11) PKCS #11 engine support

For Solaris on SPARC or older Solaris OpenSSL software, you won't see any aesni engine line at all. Third-party OpenSSL software (built yourself or from outside Oracle) will not have the aesni engine either. Solaris 11 FCS comes with OpenSSL version 1.0.0e. The output of typing "openssl version" should be "OpenSSL 1.0.0e 6 Sep 2011".

64- and 32-bit OpenSSL
OpenSSL comes in both 32- and 64-bit binaries.
The 64-bit executable is now the default, at /usr/bin/openssl, with the 64-bit OpenSSL libraries at /lib/amd64/libcrypto.so.1.0.0 and libssl.so.1.0.0. The 32-bit executable is at /usr/bin/i86/openssl and the libraries are at /lib/libcrypto.so.1.0.0 and libssl.so.1.0.0.

Availability
The OpenSSL AESNI engine is available in Solaris 11 x86 for both the 64- and 32-bit versions of OpenSSL. It is not available with Solaris 10. You must have a processor that supports AESNI instructions, otherwise OpenSSL will fall back to the older, slower AES implementation without AESNI. Processors that support AESNI include most Westmere and Sandy Bridge class processor architectures. Some low-end processors (such as for mobile/laptop platforms) do not support AESNI. The easiest way to determine if the processor supports AESNI is with the isainfo -v command—look for "amd64" and "aes" in the output:

$ isainfo -v
64-bit amd64 applications
        pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov amd_sysc cx8 tsc fpu

Conclusion
The Solaris 11 OpenSSL aesni engine provides easy access to powerful Intel AESNI hardware cryptography, in addition to Solaris userland PKCS#11 libraries and Solaris crypto kernel modules.

    Read the article

  • More Animation - Self Dismissing Dialogs

    - by Duncan Mills
    In my earlier articles on animation, I discussed various slide, grow and flip transitions for items and containers. In this article I want to discuss a fade animation and specifically the use of fades and auto-dismissal for informational dialogs. If you use a Mac, you may be familiar with Growl as a notification system, and the nice way that messages that are informational just fade out after a few seconds. So in this blog entry I wanted to discuss how we could make an ADF popup behave in the same way. This can be an effective way of communicating information to the user without "getting in the way" with modal alerts. This, of course, has been done before, but everything I've seen previously requires something like JQuery to be in the mix when we don't really need it to be. The solution I've put together is nice and generic and will work with either <af:panelWindow> or <af:dialog> as the child of the popup. In terms of usage it's pretty simple: we just need to ensure that the popup itself has clientComponent set to true and includes the animation JavaScript (animateFadingPopup) on a popupOpened event:

<af:popup id="pop1" clientComponent="true">
  <af:panelWindow title="A Fading Message...">
    ...
  </af:panelWindow>
  <af:clientListener method="animateFadingPopup" type="popupOpened"/>
</af:popup>

The popup can be invoked in the normal way using showPopupBehavior or JavaScript; no special code is required there. As a further twist you can include an additional clientAttribute called preFadeDelay to define a delay before the fade itself starts (the default is 5 seconds). To set the delay to just 2 seconds, for example:

<af:popup ...>
  ...
  <af:clientAttribute name="preFadeDelay" value="2"/>
  <af:clientListener method="animateFadingPopup" type="popupOpened"/>
</af:popup>

The Animation Styles
As before, we have a couple of CSS styles which define the animation. I've put these into the skin in my case, and, as in the other articles, I've only defined the transitions for WebKit browsers (Chrome, Safari) at the moment. In this case, the fade is timed at 5 seconds in duration.

.popupFadeReset {
  opacity: 1;
}

.popupFadeAnimate {
  opacity: 0;
  -webkit-transition: opacity 5s ease-in-out;
}

As you can see here, we are achieving the fade by simply setting the CSS opacity property.

The JavaScript
The final part of the puzzle is, of course, the JavaScript. There are four functions, and these are generic (apart from the style names which, if you've changed them above, you'll need to reflect here):
- The initial function invoked from the popupOpened event, animateFadingPopup, which starts a timer and provides the initial delay before we start to fade the popup.
- The function that applies the fade animation to the popup - initiatePopupFade.
- The callback function - closeFadedPopup - used to reset the style class and correctly hide the popup so that it can be invoked again and again.
- A utility function - findFadeContainer, which is responsible for locating the correct child component of the popup to actually apply the style to.

Function - animateFadingPopup
This function, as stated, is the one hooked up to the popupOpened event via a clientListener. Because of when the code is called it does not actually matter how you launch the popup, or if the popup is re-used from multiple places. All usages will get the fade behavior.
/**
 * Client listener which will kick off the animation to fade the dialog and register
 * a callback to correctly reset the popup once the animation is complete
 * @param event
 */
function animateFadingPopup(event) {
  var fadePopup = event.getSource();
  var fadeCandidate = false;
  //Ensure that the popup is initially Opaque
  //This handles the situation where the user has dismissed
  //the popup whilst it was in the process of fading
  var fadeContainer = findFadeContainer(fadePopup);
  if (fadeContainer != null) {
    fadeCandidate = true;
    fadeContainer.setStyleClass("popupFadeReset");
  }
  //Only continue if we can actually fade this popup
  if (fadeCandidate) {
    //See if a delay has been specified
    var waitTimeSeconds = event.getSource().getProperty('preFadeDelay');
    //Default to 5 seconds if not supplied
    if (waitTimeSeconds == undefined) {
      waitTimeSeconds = 5;
    }
    // Now call the fade after the specified time
    var fadeFunction = function () {
      initiatePopupFade(fadePopup);
    };
    var fadeDelayTimer = setTimeout(fadeFunction, (waitTimeSeconds * 1000));
  }
}

The thing to note about this function is the initial check that we have to do to ensure that the popup is initially opaque, resetting its style if needed. This is to handle the situation where the popup has begun the fade, and yet the user has still explicitly dismissed the popup before it's complete and in doing so has prevented the callback function (described later) from executing. In this particular situation the initial display of the dialog will be (apparently) missing its normal animation, but at least it becomes visible to the user (and most users will probably not notice this difference in any case). You'll notice that the style that we apply to reset the opacity - popupFadeReset - is not applied to the popup component itself but rather the dialog or panelWindow within it. More about that in the description of the next function, findFadeContainer(). Finally, assuming that we have a suitable candidate for fading, a JavaScript timer is started using the specified preFadeDelay wait time (or 5 seconds if that was not supplied). When this timer expires the main animation style class will be applied using the initiatePopupFade() function.

Function - findFadeContainer
As a component, the <af:popup> does not support a styleClass attribute, so we can't apply the animation style directly. Instead we have to look for the container within the popup which defines the window object that can have a style attached. This is achieved by the following code:

/**
 * The thing we actually fade will be the only child
 * of the popup assuming that this is a dialog or window
 * @param popup
 * @return the component, or null if this is not valid for fading
 */
function findFadeContainer(popup) {
  var children = popup.getDescendantComponents();
  var fadeContainer = children[0];
  if (fadeContainer != undefined) {
    var compType = fadeContainer.getComponentType();
    if (compType == "oracle.adf.RichPanelWindow" || compType == "oracle.adf.RichDialog") {
      return fadeContainer;
    }
  }
  return null;
}

So what we do here is to grab the first child component of the popup and check its type. Here I decided to limit the fade behaviour to only <af:dialog> and <af:panelWindow>. This was deliberate. If we apply the fade to, say, an <af:noteWindow>, you would see the text inside the balloon fade, but the balloon itself would hang around until the fade animation was over and then hide.
It would of course be possible to make the code smarter and walk up the DOM tree to find the correct <div> to apply the style to in order to hide the whole balloon; however, that means that this JavaScript would then need to have knowledge of the generated DOM structure, something which may change from release to release, and certainly something to avoid. So, all in all, I think that this is an OK restriction, and frankly it's windows and dialogs that I wanted to fade anyway, not balloons and menus. You could of course extend this technique and handle the other types should you really want to. One thing to note here is the selection of the first (children[0]) child of the popup. It does not matter if there are non-visible children such as a clientListener before the <af:dialog> or <af:panelWindow> within the popup; they are not included in this array, so picking the first element in this way seems to be fine, no matter what the underlying ordering is within the JSF source. If you wanted a super-robust version of the code you might want to iterate through the children array of the popup to check for the right type; again it's up to you.

Function - initiatePopupFade
On to the actual fading. This is actually very simple and, at its heart, just the application of the popupFadeAnimate style to the correct component and then registering a callback to execute once the fade is done.

/**
 * Function which will kick off the animation to fade the dialog and register
 * a callback to correctly reset the popup once the animation is complete
 * @param popup the popup we are animating
 */
function initiatePopupFade(popup) {
  //Only continue if the popup has not already been dismissed
  if (popup.isPopupVisible()) {
    //The skin styles that define the animation
    var fadeoutAnimationStyle = "popupFadeAnimate";
    var fadeAnimationResetStyle = "popupFadeReset";
    var fadeContainer = findFadeContainer(popup);
    if (fadeContainer != null) {
      var fadeContainerReal = AdfAgent.AGENT.getElementById(fadeContainer.getClientId());
      //Define the callback - this will correctly reset the popup once it's disappeared
      var fadeCallbackFunction = function (event) {
        closeFadedPopup(popup, fadeContainer, fadeAnimationResetStyle);
        event.target.removeEventListener("webkitTransitionEnd", fadeCallbackFunction);
      };
      //Initiate the fade
      fadeContainer.setStyleClass(fadeoutAnimationStyle);
      //Register the callback to execute once fade is done
      fadeContainerReal.addEventListener("webkitTransitionEnd", fadeCallbackFunction, false);
    }
  }
}

I've added some extra checks here though. First of all we only start the whole process if the popup is still visible. It may be that the user has closed the popup before the delay timer has finished, so there is no need to start animating in that case. Again we use the findFadeContainer() function to locate the correct component to apply the style to, and additionally we grab the DOM id that represents that container. This physical ID is required for the registration of the callback function. The closeFadedPopup() call is then registered on the callback so as to correctly close the now transparent (but still there) popup.
Function - closeFadedPopup
The final function just cleans things up:

/**
 * Callback function to correctly cancel and reset the style in the popup
 * @param popup id of the popup so we can close it properly
 * @param container the window / dialog within the popup to actually style
 * @param resetStyle the style that sets the opacity back to solid
 */
function closeFadedPopup(popup, container, resetStyle) {
  container.setStyleClass(resetStyle);
  popup.cancel();
}

First of all we reset the style to make the popup contents opaque again, and then we cancel the popup. This will ensure that any of your user code that is waiting for a popup cancelled event will actually get the event; additionally, if you have done this as a modal window / dialog, it will ensure that the glasspane is dismissed and you can interact with the UI again.

What's Next?
There are several ways in which this technique could be used. I've been working on a popup here, but you could apply the same approach to in-line messages. As this code (in the popup case) is generic it will make a pretty nice declarative component, and maybe, if I get time, I'll look at constructing a formal Growl component using a combination of this technique and active data push. Also, I'm sure the above code can be improved a little too. Specifically, things like registering a popup cancelled listener to handle the style reset so that we don't lose the subtle animation that takes place when the popup is opened in that situation where the user has closed the in-fade dialog.
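One small footnote on the styles: because the transitions above are only defined with the -webkit- prefix, non-WebKit browsers will simply snap to invisible rather than fade, and the webkitTransitionEnd callback will never fire for them. If you want the same behaviour elsewhere, adding the un-prefixed property to the skin entry (sketched below) and listening for the standard transitionend event alongside webkitTransitionEnd in the JavaScript should be all that's needed:

.popupFadeAnimate {
  opacity: 0;
  -webkit-transition: opacity 5s ease-in-out;
  transition: opacity 5s ease-in-out;
}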

    Read the article

  • ASP.NET MVC 2 throws exception for ‘favicon.ico’

    - by nmarun
    I must be on fire or something – third blog in 2 days… awesome! Before I begin, in case you’re wondering, favicon.ico is the small image that appears to the left of your web address, once the page loads. In order to learn more about MVC or any thing for that matter, it’s better to look at the source itself. Since MVC is open source (at least some part of it is), I started looking at the source code that’s available for download. While doing so, I hit Steve Sanderson’s blog site where he explains in great detail the way to debug your app using ASP.NET MVC source code. For those who are not aware, Steve Sanderson’s book - Pro ASP.NET MVC Framework, is one of the best books to learn about MVC. Alrighty, I followed the article and I hit F5 to debug the default / unchanged MVC project. I put a breakpoint in the DefaultControllerFactory.cs, CreateController() method. To know a little more about this class and the method, read this. Sure enough, the control stopped at the breakpoint and I hit F5 again and the page rendered just fine. But then what’s this? The breakpoint was hit again, as if something else was being requested. I now hovered my mouse over the ‘controllerName’ parameter and it says – favicon.ico. This by itself was more than enough for me to raise my eye-brows, but what happened next just took the ground below my feet. Oh, oh, I’m sorry I’m just typing, no code, no image, so here are a couple of screen captures. The first one shows the request for the Home controller; I get ‘Home’ when I hover over the parameter: And here’s the one that shows the same for call for ‘favicon.ico’. So, I step through the code and when the control reaches line 91 – GetControllerInstance() method, I step in. This is when I had the ‘ground-losing’ experience. Wow, an exception is being thrown for this file and that too in RTM. For some reason MVC thinks, this as a controller and tries to run it through the MvcHandler and it hits this snag. So it seems like this will happen for any MVC 2 site and this did not happen for me in the previous version of MVC. Before I get to how to resolve it, here’s another way of reproducing this exception. Revert back all your changes that you did as mentioned in Steve’s blog above. Now, add a class to your MVC project and call it say, MyControllerFactory and let this inherit from DefaultControllerFactory class. (Read this for details on the DefaultControllerFactory class is and how it is used in a different context). Add an override for the CreateController() method and for the sake of this blog, just copy the same content from the DefaultControllerFactory class. The last step is to tell your MVC app to use the MyControllerFactory class instead of the default one. 
To do this, go to your Global.asax.cs file and add line 6 of the snippet below:

1: protected void Application_Start()
2: {
3:     AreaRegistration.RegisterAllAreas();
4:
5:     RegisterRoutes(RouteTable.Routes);
6:     ControllerBuilder.Current.SetControllerFactory(new MyControllerFactory());
7: }

Now, you’re ready to reproduce the issue. Just F5 the project and when you hit the overridden CreateController() method for the second time, this is what it looks like for me:

And continuing further gives me the same exception. I believe this is something that MS should fix, as not having a ‘favicon.ico’ file will be common for most applications. So I think that when you create an MVC project, line 6 below should be added by default by Visual Studio itself:

1: public class MvcApplication : System.Web.HttpApplication
2: {
3:     public static void RegisterRoutes(RouteCollection routes)
4:     {
5:         routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
6:         routes.IgnoreRoute("favicon.ico");
7:
8:         routes.MapRoute(
9:             "Default", // Route name
10:            "{controller}/{action}/{id}", // URL with parameters
11:            new { controller = "Home", action = "Index", id = UrlParameter.Optional } // Parameter defaults
12:        );
13:    }

There it is, that’s the solution to avoid the exception altogether. I tried this in both IE8 and Firefox browsers and was able to successfully reproduce the error. Hope someone will look at this issue and find a fix. Just before I finish up, I found another ‘bug’, if you want to call it that, with Visual Studio 2008. Remember how you could change what browser you want your application to run in by just right clicking on the .aspx file and choosing ‘Browse with…’? Seems like that’s missing when you’re working with an MVC project. In order to test the above bug in the other browser, I had to load a classic ASP.NET project, change the settings and then run my MVC project. Felt kinda ‘icky’, for lack of a better word.
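Coming back to the route fix for a moment, and purely as a peek under the covers (the one-liner above is what you would actually use): IgnoreRoute("favicon.ico") boils down to registering a route bound to a StopRoutingHandler, so the request short-circuits in routing and never reaches the MvcHandler or a controller lookup. Written out long-hand, it would look roughly like this:

using System.Web.Mvc;
using System.Web.Routing;

public static void RegisterRoutes(RouteCollection routes)
{
    routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

    // Roughly what IgnoreRoute("favicon.ico") does for you: requests matching this
    // URL stop in the routing module and never trigger a controller lookup.
    routes.Add(new Route("favicon.ico", new StopRoutingHandler()));

    routes.MapRoute(
        "Default",
        "{controller}/{action}/{id}",
        new { controller = "Home", action = "Index", id = UrlParameter.Optional });
}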

    Read the article

  • .NET to iOS: From WinForms to the iPad

    - by RobertChipperfield
    One of the great things about working at Red Gate is getting to play with new technology - and right now, that means mobile. A few weeks ago, we decided that a little research into the tablet computing arena was due, and purely from a numbers point of view, that suggested the iPad as a good target device. A quick trip to iPhoneDevCon in San Diego later, and Marine and I came back full of ideas, and with some concept of how iOS development was meant to work. Here's how we went from there to the release of Stacks & Heaps, our geeky take on the classic "Snakes & Ladders" game.

Step 1: Buy a Mac
I've played with many operating systems in my time: from the original BBC Model B, through DOS, Windows, Linux, and others, but I'd so far managed to avoid buying fruit-flavoured computer hardware! If you want to develop for the iPhone, iPad or iPod Touch, that's the first thing that needs to change. If you've not used OS X before, the first thing you'll realise is that everything is different! In the interests of avoiding a flame war in the comments section, I'll only go so far as to say that a lot of my Windows-flavoured muscle memory no longer worked. If you're in the UK, you'll also realise your keyboard is lacking a # key, and that " and @ are the other way around from normal. The wonderful Ukelele keyboard layout editor restores some sanity here, as long as you don't look at the keyboard when you're typing. I couldn't give up the PC entirely, but a handy application called Synergy comes to the rescue - it lets you share a single keyboard and mouse between multiple machines. There are a few limitations: Alt-Tab always seems to go to the Mac, and Windows 7's UAC dialogs require the local mouse for security reasons, but it gets you a long way at least.

Step 2: Register as an Apple Developer
You can register as an Apple Developer free of charge, and that lets you download XCode and the iOS SDK. You also get the iPhone / iPad emulator, which is handy, since you'll need to be a paid member before you can deploy your apps to a real device. You can either enroll as an individual, or as a company. They both cost the same ($99/year), but there's a few differences between them. If you register as a company, you can add multiple developers to your team (all for the same $99 - not $99 per developer), and you get to use your company name in the App Store. However, you'll need to send off significantly more documentation to Apple, and I suspect the process takes rather longer than for an individual, where they just need to verify some credit card details. Here's a tip: if you're registering as a company, do so as early as possible. The approval process can take a while to complete, so get the application in in plenty of time.

Step 3: Learn to love the square brackets!
Objective-C is the language of the iPad. C and C++ are also supported, and if you're doing some serious game development, you'll probably spend most of your time in C++ talking OpenGL, but for forms-based apps, you'll be interacting with a lot of the Objective-C SDK. Like shifting from Ctrl-C to Cmd-C, it feels a little odd at first, with the familiar string.Format(…) turning into:

NSString *myString = [NSString stringWithFormat:@"Hello world, it's %@", [NSDate date]];

Thankfully XCode's auto-complete is normally passable, if not up to Visual Studio's standards, which coupled with a huge amount of content on Stack Overflow means you'll soon get to grips with the API.
You'll need to get used to some terminology changes, though; here's an incomplete approximation:

Coming from a .NET background, there are some luxuries you no longer have developing Objective-C in XCode:

- Generics! Remember back in .NET 1.1, when all collections were just objects? Yup, we're back there now.
- ReSharper. Or, more generally, very much refactoring support. The not-many-keystrokes to rename a class, its file, and all references to it in Visual Studio turns into a much more painful experience in XCode.
- Garbage collection. This is actually rather less of an issue than you might expect: if you follow the rules, the reference counting provided by Objective-C gets you a long way without too much pain. Circular references are their usual problematic self, though.
- Decent exception handling. You do have exceptions, but they're nowhere near as widely used. Generally, if something goes wrong, you get nil (see translation table above) back. Which brings me on to…
- Calling a method on a nil object isn't a failure - it just returns nil itself! There's many arguments for and against this, but personally I fall into the "stuff should fail as quickly and explicitly as possible" camp.

Less specifically, I found that there's more chance of code failing at runtime rather than getting caught at compile-time: using the @selector(…) syntax to pass a method signature isn't (can't be) checked at compile-time, so the first you know about a typo is a crash when you try and call it. The solution to this is of course lots of great testing, both automated and manual, but I still find comfort in provably correct type safety being enforced in addition to testing.

Step 4: Submit to the App Store
Assuming you want to distribute to more than a handful of devices, you're going to need to submit your app to the Apple App Store. There's a few gotchas in terms of getting builds signed with the right certificates, and you'll be bouncing around between XCode and iTunes Connect a fair bit, but eventually you get everything checked off the to-do list, and are ready to upload your first binary! With some amount of anticipation, I pressed the Upload button in XCode, ready to release our creation into the world, but was instead greeted by an error informing me my XML file was malformed. Uh. A little Googling later, and it turned out that a simple rename from "Stacks&Heaps.app" to "StacksAndHeaps.app" worked around an XML escaping bug, and we were good to go. The next step is to wait for approval (or otherwise). After a couple of weeks of intensive development, this part is agonising. Did we make it? The Apple jury is still out at the moment, but our fingers are firmly crossed! In the meantime, you can see some screenshots and leave us your email address if you'd like us to get in touch when it does go live at the MobileFoo website.

Step 5: Profit!
Actually, that wasn't the idea here: Stacks & Heaps is free; there are no adverts, and we're not going to sell all your data either. So why did we do it? We wanted to get an idea of what it's like to move from coding for a desktop environment to something completely different. We don't know whether in a year's time the iPad will still be the dominant force, or whether Android will have smoothed out some bugs, tweaked the performance, and polished the UI, but I think it's a fairly sure bet that the tablet form factor is here to stay. We want to meet people who are using it, start chatting to them, and find out about some of the pain they're feeling.
What better way to do that than do it ourselves, and get to write a cool game in the process?

    Read the article

  • IntelliSense for Razor Hosting in non-Web Applications

    - by Rick Strahl
    When I posted my Razor Hosting article a couple of weeks ago I got a number of questions on how to get IntelliSense to work inside of Visual Studio while editing your templates. The answer to this question is mainly dependent on how Visual Studio recognizes assemblies, so a little background is required. If you open a template just on its own as a standalone file by clicking on it say in Explorer, Visual Studio will open up with the template in the editor, but you won’t get any IntelliSense on any of your related assemblies that you might be using by default. It’ll give Intellisense on base System namespace, but not on your imported assembly types. This makes sense: Visual Studio has no idea what the assembly associations for the single file are. There are two options available to you to make IntelliSense work for templates: Add the templates as included files to your non-Web project Add a BIN folder to your template’s folder and add all assemblies required there Including Templates in your Host Project By including templates into your Razor hosting project, Visual Studio will pick up the project’s assembly references and make IntelliSense available for any of the custom types in your project and on your templates. To see this work I moved the \Templates folder from the samples from the Debug\Bin folder into the project root and included the templates in the WinForm sample project. Here’s what this looks like in Visual Studio after the templates have been included:   Notice that I take my original example and type cast the Context object to the specific type that it actually represents – namely CustomContext – by using a simple code block: @{ CustomContext Model = Context as CustomContext; } After that assignment my Model local variable is in scope and IntelliSense works as expected. Note that you also will need to add any namespaces with the using command in this case: @using RazorHostingWinForm which has to be defined at the very top of a Razor document. BTW, while you can only pass in a single Context 'parameter’ to the template with the default template I’ve provided realize that you can also assign a complex object to Context. For example you could have a container object that references a variety of other objects which you can then cast to the appropriate types as needed: @{ ContextContainer container = Context as ContextContainer; CustomContext Model = container.Model; CustomDAO DAO = container.DAO; } and so forth. IntelliSense for your Custom Template Notice also that you can get IntelliSense for the top level template by specifying an inherits tag at the top of the document: @inherits RazorHosting.RazorTemplateFolderHost By specifying the above you can then get IntelliSense on your base template’s properties. For example, in my base template there are Request and Response objects. This is very useful especially if you end up creating custom templates that include your custom business objects as you can get effectively see full IntelliSense from the ‘page’ level down. For Html Help Builder for example, I’d have a Help object on the page and assuming I have the references available I can see all the way into that Help object without even having to do anything fancy. Note that the @inherits key is a GREAT and easy way to override the base template you normally specify as the default template. It allows you to create a custom template and as long as it inherits from the base template it’ll work properly. 
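As a concrete (if entirely hypothetical) illustration of that – the HelpTemplate class, HelpTopic type and Help property below are made-up names rather than part of the RazorHosting samples, and this assumes the generic RazorTemplateFolderHost<TModel> variant whose initialization hook is shown a little further down – a strongly typed base template might look like this:

// Hypothetical custom base template: inherits the folder host template and
// surfaces the model as a Help property so templates get IntelliSense on it.
public class HelpTemplate : RazorHosting.RazorTemplateFolderHost<HelpTopic>
{
    // Strongly typed convenience wrapper over the template's Model
    public HelpTopic Help
    {
        get { return this.Model; }
    }
}

A template then opts into it with a single line at the top:

@inherits MyApp.Templates.HelpTemplate

after which @Help and its members show up in IntelliSense just like the Request and Response properties do.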
Since the last post I’ve also made some changes in the base template that allow hooking up some simple initialization logic, so it gets much easier to create custom templates and hook up custom objects with an InitializeTemplate() hook function that gets called with the Context and a Configuration object. These are objects you can pass in at runtime from your host application and then assign to custom properties on your template. For example the default implementation for RazorTemplateFolderHost does this:

public override void InitializeTemplate(object context, object configurationData)
{
    // Pick up configuration data and stuff into Request object
    RazorFolderHostTemplateConfiguration config = configurationData as RazorFolderHostTemplateConfiguration;
    this.Request.TemplatePath = config.TemplatePath;
    this.Request.TemplateRelativePath = config.TemplateRelativePath;

    // Just use the entire ConfigData as the model, but in theory
    // configData could contain many objects or values to set on
    // template properties
    this.Model = config.ConfigData as TModel;
}

to set up a strongly typed Model and the Request object. You can do much more complex hookups here of course and create complex base template pages that contain all the objects that you need in your code with strong typing.

Adding a Bin folder to your Template’s Root Path
Including templates in your host project works if you own the project and you’re the only one modifying the templates. However, if you are distributing the Razor engine as a templating/scripting solution as part of your application or development tool, the original project is likely not available and so that approach is not practical. Another option you have is to add a Bin folder and add all the related assemblies into it. You can also add a Web.Config file with assembly references for any GAC’d assembly references that need to be associated with the templates. Between the web.config and bin folder Visual Studio can figure out how to provide IntelliSense. The Bin folder should contain:

- The RazorHosting.dll
- Your host project’s EXE or DLL – renamed to .dll if it’s an .exe
- Any external (bin folder) dependent assemblies

Note that you most likely also want a reference to the host project if it contains references that are going to be used in templates. Visual Studio doesn’t recognize an EXE reference so you have to rename the EXE to DLL to make it work. Apparently the binary signatures of EXE and DLL files are identical and it just works – learn something new every day…

For GAC assembly references you can add a web.config file to your template root. The Web.config file then should contain any full assembly references to GAC components:

<configuration>
  <system.web>
    <compilation debug="true">
      <assemblies>
        <add assembly="System.Web.Mvc, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
        <add assembly="System.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
        <add assembly="System.Web.Extensions, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
      </assemblies>
    </compilation>
  </system.web>
</configuration>

And with that you should get full IntelliSense. Note that if you add a BIN folder and you also have the templates in your Visual Studio project, Visual Studio will complain about reference conflicts as it’s effectively seeing both the project references and the ones in the bin folder.
So it’s probably a good idea to use one or the other but not both at the same time :-)

Seeing IntelliSense in your Razor templates is a big help for users of your templates. If you’re shipping an application level scripting solution especially, it’ll be real useful for your template consumers/users to be able to get some quick help on creating customized templates – after all that’s what templates are all about – easy customization. Making sure that everything is referenced in your bin folder and web.config is a good idea and it’s great to see that Visual Studio (and presumably WebMatrix/Visual Web Developer as well) will be able to pick up your custom IntelliSense in Razor templates.

© Rick Strahl, West Wind Technologies, 2005-2011. Posted in Razor

    Read the article

  • Fast block placement algorithm, advice needed?

    - by James Morris
    I need to emulate the window placement strategy of the Fluxbox window manager. As a rough guide, visualize randomly sized windows filling up the screen one at a time, where the rough size of each results in an average of 80 windows on screen without any window overlapping another. It is important to note that windows will close and the space that closed windows previously occupied becomes available once more for the placement of new windows. The window placement strategy has three binary options: Windows build horizontal rows or vertical columns (potentially) Windows are placed from left to right or right to left Windows are placed from top to bottom or bottom to top Why is the algorithm a problem? It needs to operate to the deadlines of a real time thread in an audio application. At this moment I am only concerned with getting a fast algorithm, don't concern yourself over the implications of real time threads and all the hurdles in programming that that brings. So far I have two choices which I have built loose prototypes for: 1) A port of the Fluxbox placement algorithm into my code. The problem with this is, the client (my program) gets kicked out of the audio server (JACK) when I try placing the worst case scenario of 256 blocks using the algorithm. This algorithm performs over 14000 full (linear) scans of the list of blocks already placed when placing the 256th window. 2) My alternative approach. Only partially implemented, this approach uses a data structure for each area of rectangular free unused space (the list of windows can be entirely separate, and is not required for testing of this algorithm). The data structure acts as a node in a doubly linked list (with sorted insertion), as well as containing the coordinates of the top-left corner, and the width and height. Furthermore, each block data structure also contains four links which connect to each immediately adjacent (touching) block on each of the four sides. IMPORTANT RULE: Each block may only touch with one block per side. The problem with this approach is, it's very complex. I have implemented the straightforward cases where 1) space is removed from one corner of a block, 2) splitting neighbouring blocks so that the IMPORTANT RULE is adhered to. The less straightforward case, where the space to be removed can only be found within a column or row of boxes, is only partially implemented - if one of the blocks to be removed is an exact fit for width (ie column) or height (ie row) then problems occur. And don't even mention the fact this only checks columns one box wide, and rows one box tall. I've implemented this algorithm in C - the language I am using for this project (I've not used C++ for a few years and am uncomfortable using it after having focused all my attention to C development, it's a hobby). The implementation is 700+ lines of code (including plenty of blank lines, brace lines, comments etc). The implementation only works for the horizontal-rows + left-right + top-bottom placement strategy. So I've either got to add some way of making this +700 lines of code work for the other 7 placement strategy options, or I'm going to have to duplicate those +700 lines of code for the other seven options. Neither of these is attractive, the first, because the existing code is complex enough, the second, because of bloat. 
The algorithm is not even at a stage where I can use it in the real time worst case scenario, because of missing functionality, so I still don't know if it actually performs better or worse than the first approach.

What else is there? I've skimmed over and discounted:

- Bin Packing algorithms: their emphasis on optimal fit does not match the requirements of this algorithm.
- Recursive Bisection Placement algorithms: sounds promising, but these are for circuit design. Their emphasis is optimal wire length.

With both of these, especially the latter, all elements to be placed/packed are known before the algorithm begins. I need an algorithm which works accumulatively, doing what it is given to do when it is told to do it.

What are your thoughts on this? How would you approach it? What other algorithms should I look at? Or even what concepts should I research, seeing as I've not studied computer science/software engineering? Please ask questions in comments if further information is needed.

[edit] If it makes any difference, the units for the coordinates will not be pixels. The units are unimportant, but the grid where windows/blocks/whatever can be placed will be 127 x 127 units.
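For anyone trying to picture the second approach, here is a bare sketch of the free-space node it describes (the field names are mine, not taken from the actual code):

/* One rectangle of unused space on the 127 x 127 grid. */
typedef struct free_block
{
    int x, y;                 /* top-left corner, in grid units */
    int w, h;                 /* width and height */

    struct free_block *prev;  /* doubly linked list, kept sorted on insertion */
    struct free_block *next;

    /* at most one touching neighbour per side (the IMPORTANT RULE) */
    struct free_block *left;
    struct free_block *right;
    struct free_block *above;
    struct free_block *below;
} free_block;

Placing a window then means finding a node (or run of nodes) big enough, carving the window's rectangle out of it, and patching up the list and the four side links so that the one-neighbour-per-side rule still holds – which is exactly where the column/row splitting cases described above get complicated.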

    Read the article

  • Establishing WebLogic Server HTTPS Trust of IIS Using a Microsoft Local Certificate Authority

    - by user647124
    Everyone agrees that self-signed and demo certificates for SSL and HTTPS should never be used in production, and are best avoided elsewhere too. Most self-signed and demo certificates are provided by vendors with the intention that they be used only to integrate within the same environment. In a vendor's perfect world all application servers in a given enterprise come from that same vendor, which makes this lack of interoperability in a non-production environment an advantage. For those of us working in the real world, where we neither use a single vendor everywhere nor have anything better than self-signed certificates for all but production, testing HTTPS between an IIS ASP.NET service provider and a WebLogic J2EE consumer application can be very frustrating to set up. It was for me, especially after finding many blogs and discussion threads where various solutions were described that did not quite work and were all mostly similar but just a little bit different. To save both you and my future self (who always seems to forget the hardest-won lessons) all of the pain and suffering, I am recording the steps that finally worked, for reference and sanity.

How You Know You Need This

The first cold clutch of dread telling you it is going to be a long day comes when you attempt to read a WSDL published by IIS from WebLogic over HTTPS and you see the following:

<Jul 30, 2012 2:51:31 PM EDT> <Warning> <Security> <BEA-090477> <Certificate chain received from myserver.mydomain.com - 10.555.55.123 was not trusted causing SSL handshake failure.>
weblogic.wsee.wsdl.WsdlException: Failed to read wsdl file from url due to -- javax.net.ssl.SSLKeyException: [Security:090477]Certificate chain received from myserver02.mydomain.com - 10.555.55.123 was not trusted causing SSL handshake failure.

The above is what started a three-day sojourn into searching for a solution. Even people who had solved it before would tell me how they did it, and then shrug when I demonstrated that their steps did not end in the success they claimed I would experience. Rather than torture you with the details of everything I did that did not work, here is what finally did work.

Export the Certificates from IE

First, take the offending WSDL URL and paste it into IE (if you have an internal Microsoft CA, you have IE, even if you don't use it in favor of some other browser). To state the semi-obvious: if you received the error above, there is a certificate configured for the IIS host of the service and the SSL port has been configured properly. Otherwise you would see a different error, usually that the site was not found or the connection failed. Once the WSDL loads, to the right of the address bar there will be a lock icon. Click the lock and then click View Certificates in the resulting dialog (if you do not have a lock icon but do have a Certificate Error message, see http://support.microsoft.com/kb/931850 for steps to install the certificate, then continue from the point of finding the lock icon).

Figure 1: View Certificates in IE

Next, select the Details tab in the resulting dialog.

Figure 2: Use Certificate Details to Export Certificate

Click Copy to File, then Next, then select the Base-64 encoded option for the format.

Figure 3: Select the Base-64 encoded option for the format

For the sake of simplicity, I chose to save this to the root of the WebLogic domain. It will work from anywhere, but later you will need to type in the full path rather than just the certificate name if you save it elsewhere.
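Before importing anything, it can be worth sanity-checking the exported file. Assuming keytool (from the JDK used by WebLogic) is on your path and the file was saved under the name used later in this article (microsftcert.cer; substitute whatever name you chose), keytool can display the certificate details without touching any keystore:

keytool -printcert -file microsftcert.cer

If keytool reports that it cannot parse the file, re-export the certificate and make sure the Base-64 encoded format was selected.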
Figure 4: Browse to Save Location

Figure 5: Save the Certificate to the Domain Root for Convenience

This is the point where I ran into some confusion. Some articles mentioned exporting the entire chain of certificates. This supposedly works for some types of certificates, or if you have a few other tools and the time to learn them. The SSL experts out there already have those tools, know how to use them well, and should not be wasting their time reading an article meant for folks who just want to get things wired up and get back to unit testing and development. For the rest of us, the easiest way to make sure things will work is to export all the links in the chain individually and let WebLogic Server worry about re-assembling them into a chain (which it does quite nicely). So…

Next, go to Tools, then Internet Options, then the Content tab, and click Certificates. Go to the Trusted Root Certification Authorities tab and find the root certificate for your Microsoft CA (look for the Issuer of the certificate you exported earlier).

Figure 6: Trusted Root Certification Authorities Tab

Export this one the same way as before, with a different name.

Figure 7: Use a Unique Name for Each Certificate

Repeat this once more for the Intermediate Certification Authorities tab.

Import the Certificates to the WebLogic Domain

Now, open a command prompt, navigate to [WEBLOGIC_DOMAIN_ROOT]\bin and execute setDomainEnv. You should then be in the root of the domain; if not, CD to the domain root. Assuming you saved the certificates in the domain root, execute the following:

keytool -importcert -alias [ALIAS-1] -trustcacerts -file [FULL PATH TO .CER 1] -keystore truststore.jks -storepass [PASSWORD]

An example with the variables filled in is:

keytool -importcert -alias IIS-1 -trustcacerts -file microsftcert.cer -keystore truststore.jks -storepass password

After several lines of output you will be prompted with:

Trust this certificate? [no]:

The correct answer is 'yes' (minus the quotes, of course). You'll know you were successful if the response is:

Certificate was added to keystore

If not, check your typing, as that is generally the source of an error at this point. Repeat this for all three of the certificates you exported, changing the [ALIAS-1] and [FULL PATH TO .CER 1] values each time. For example:

keytool -importcert -alias IIS-1 -trustcacerts -file microsftcert.cer -keystore truststore.jks -storepass password
keytool -importcert -alias IIS-2 -trustcacerts -file microsftcertRoot.cer -keystore truststore.jks -storepass password
keytool -importcert -alias IIS-3 -trustcacerts -file microsftcertIntermediate.cer -keystore truststore.jks -storepass password

In the above we created a new JKS keystore. You can re-use an existing one by changing the name of the JKS file to one you already have and changing the password to the one that matches that JKS file. For the DemoTrust.jks that is included with WebLogic, the password is DemoTrustKeyStorePassPhrase.
An example here would be:

keytool -importcert -alias IIS-1 -trustcacerts -file microsoft.cer -keystore DemoTrust.jks -storepass DemoTrustKeyStorePassPhrase
keytool -importcert -alias IIS-2 -trustcacerts -file microsoftRoot.cer -keystore DemoTrust.jks -storepass DemoTrustKeyStorePassPhrase
keytool -importcert -alias IIS-3 -trustcacerts -file microsoftInter.cer -keystore DemoTrust.jks -storepass DemoTrustKeyStorePassPhrase

Whichever keystore you use, you can check your work with:

keytool -list -keystore truststore.jks -storepass password

where "truststore.jks" and "password" can be replaced appropriately if necessary. The output will look something like this:

Figure 8: Output from keytool -list -keystore

Update the WebLogic Keystore Configuration

If you used an existing keystore rather than creating a new one, you can restart your WebLogic Server and skip the rest of this section. For those of us who created a new one because those were the instructions we found online…

Next, we need to tell WebLogic to use the JKS file (truststore.jks) we just created. Log in to the WebLogic Server Administration Console and navigate to Servers > AdminServer > Configuration > Keystores. Scroll down to "Custom Trust Keystore:" and change the value to "truststore.jks", then set "Custom Trust Keystore Passphrase:" and "Confirm Custom Trust Keystore Passphrase:" to the password you used earlier, and save your changes. You will get a nice message similar to the following:

Figure 9: To Be Safe, Restart Anyways

The "No restarts are necessary" is somewhat of an exaggeration. If you want to be able to use the keystore, you may need to restart the server(s). To save myself aggravation, I always do. Your mileage may vary.

Conclusion

That should get you there. If some of the steps above turn out to be unnecessary for your particular situation, I offer a semi-apology: the process described here does not take long at all, and even with a step or two that could be dropped, it is still much faster than trying to figure this out from other sources.

    Read the article
