Search Results

  • HTML e-mail arrives as a file attachment

    - by johnny
    I have a problem with Outlook 2010. I send an e-mail from a contact form with this code:

        $message = '
        <html>
        <head>
        <title>Anfrage ('.$cfg->get('global.page.title').')</title>
        <style type="text/css">
        body { background:#FFFFFF; color:#000000; }
        #tbl td { background:#F0F0F0; vertical-align:top; }
        #tbl2 td { background:#E0E0E0; vertical-align:top; }
        </style>
        </head>
        <body>
        <p>Mail von der Webseite '.$cfg->get('global.page.title').'</p>
        <table id="tbl">
        <tr>
          <td>Absender</td>
          <td>'.htmlspecialchars($_POST['name']).' ('.htmlspecialchars(trim($_POST['email'])).')</td>
        </tr>
        <tr id="tbl2">
          <td>Betreff:</td>
          <td>'.htmlspecialchars($_POST["topic"]).'</td>
        </tr>
        <tr>
          <td>Nachricht:</td>
          <td>'.nl2br(htmlspecialchars($_POST["message"])).'</td>
        </tr>
        </table>
        </body>
        </html>';

        $absender = $_POST['name'].' <'.$_POST['email'].'>';
        $header = "From: $absender\n";
        $header .= "Reply-To: $absender\n";
        $header .= "X-Mailer: PHP/" . phpversion(). "\n";
        $header .= "X-Sender-IP: " . $_SERVER["REMOTE_ADDR"] . "\n";
        $header .= "Content-Type: text/html; Charset=utf-8";
        $send_mail = mail($cfg->get('contact.toMailAdress'), "Anfrage (".$cfg->get('global.page.title').")", $message, $header);
        //$send_mail = mail("[email protected]", "Anfrage (".$cfg->get('global.page.title').")", $message, $header);
        $_SESSION['kontakt_form_time'] = time();
        $tpl->assign("mail_sent", $send_mail);

    When the e-mail arrives, the message body is not shown. Instead, Outlook generates a file attachment named [NAME].h, and the message is inside that file. How can I fix this so that the message shows in the e-mail itself? Is this a problem with the settings in Outlook?
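    One likely culprit, offered as an editorial note rather than part of the original question: the headers are separated with bare "\n" and there is no MIME-Version header, and some clients, Outlook among them, are strict about both. A minimal sketch of the header block with those two changes; whether this is the actual cause depends on the sending MTA, so treat it as a first thing to try:

        // Sketch: CRLF-terminated headers plus MIME-Version. RFC 5322
        // requires CRLF between header lines; without a MIME-Version
        // header some clients fall back to treating the HTML body as
        // an opaque attachment.
        $absender = $_POST['name'].' <'.$_POST['email'].'>';
        $header  = "From: $absender\r\n";
        $header .= "Reply-To: $absender\r\n";
        $header .= "MIME-Version: 1.0\r\n";
        $header .= "Content-Type: text/html; charset=utf-8";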

  • How to get rid of this strange border on JButton in the Windows system look and feel?

    - by Timo J.
    I have a small problem with the layout of my frame. As you could see in a picture (if I had my 10 rep ;) ), there is a small border around my JButtons. I've been searching a long time on this topic and finally decided to ask for help; I'm just out of ideas. Is this part of the Windows theme that shouldn't be changed? It just doesn't fit into my current layout, as I'm listing text boxes and combo boxes on the page axis without any border. I would be very happy if there is a solution for this issue. Thanks in advance!

    EDIT 1: I do not mean the focus border. I like the highlighting of a focused element. What I mean is the border in the background color which causes a small strip of background-colored space between a JButton and another element. (Picture on personal Web space: http://tijamocobs.no-ip.biz/border_jbutton.png)

    EDIT 2: I'm using the Windows 8 look and feel, but I saw this problem on the Windows 7 look and feel too.

    Short example:

        import javax.swing.BoxLayout;
        import javax.swing.JButton;
        import javax.swing.JFrame;
        import javax.swing.UIManager;
        import javax.swing.UnsupportedLookAndFeelException;

        public class TestFrame extends JFrame {
            public TestFrame() {
                setLayout(new BoxLayout(getContentPane(), BoxLayout.PAGE_AXIS));
                for (int i = 0; i < 3; i++) {
                    JButton btn = new JButton(".");
                    // not working:
                    // btn.setBorder(null);
                    // btn.setBorder(BorderFactory.createEmptyBorder());
                    add(btn);
                }
                pack();
            }

            public static void main(String[] args) throws ClassNotFoundException,
                    InstantiationException, IllegalAccessException,
                    UnsupportedLookAndFeelException {
                UIManager.setLookAndFeel(UIManager.getSystemLookAndFeelClassName());
                TestFrame tiss = new TestFrame();
                tiss.setVisible(true);
            }
        }

    Example code (long version):

        import java.awt.BorderLayout;
        import java.awt.Container;
        import java.awt.Dimension;
        import javax.swing.Box;
        import javax.swing.BoxLayout;
        import javax.swing.JButton;
        import javax.swing.JComboBox;
        import javax.swing.JFrame;
        import javax.swing.JLabel;
        import javax.swing.JPanel;
        import javax.swing.UIManager;
        import javax.swing.UnsupportedLookAndFeelException;

        public class TestFrame extends JFrame {
            public TestFrame() {
                JPanel muh = new JPanel();
                muh.setLayout(new BoxLayout(muh, BoxLayout.PAGE_AXIS));
                for (int i = 0; i < 3; i++) {
                    Container c = new JPanel();
                    c.setLayout(new BoxLayout(c, BoxLayout.LINE_AXIS));
                    Box bx = Box.createHorizontalBox();
                    final String[] tmp = {"anything1", "anything2"};
                    JComboBox<String> cmbbx = new JComboBox<String>(tmp);
                    cmbbx.setMinimumSize(new Dimension(80, 20));
                    bx.add(cmbbx);
                    JButton btn = new JButton(".");
                    btn.setMinimumSize(new Dimension(cmbbx.getMinimumSize().height,
                            cmbbx.getMinimumSize().height));
                    btn.setPreferredSize(new Dimension(30, 30));
                    btn.setMaximumSize(new Dimension(30, 30));
                    bx.add(btn);
                    c.setMaximumSize(new Dimension(Integer.MAX_VALUE, 30));
                    c.add(new JLabel("Just anything"));
                    c.add(bx);
                    muh.add(c);
                }
                add(muh, BorderLayout.CENTER);
                pack();
            }

            public static void main(String[] args) throws ClassNotFoundException,
                    InstantiationException, IllegalAccessException,
                    UnsupportedLookAndFeelException {
                UIManager.setLookAndFeel(UIManager.getSystemLookAndFeelClassName());
                TestFrame tiss = new TestFrame();
                tiss.setVisible(true);
            }
        }
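    An editorial aside, not part of the original question: before fighting the gap, it can help to confirm that it really comes from the border the Windows look and feel installs rather than from the layout manager. A small probe, assuming the system look and feel is available:

        import javax.swing.JButton;
        import javax.swing.UIManager;

        // Diagnostic sketch (not a fix): print the border that the
        // Windows LAF installs on a JButton, along with its insets and
        // the button's margin, to see where the extra space lives.
        public class BorderProbe {
            public static void main(String[] args) throws Exception {
                UIManager.setLookAndFeel(UIManager.getSystemLookAndFeelClassName());
                JButton btn = new JButton(".");
                System.out.println("border: " + btn.getBorder());
                // getBorder() is non-null under the Windows LAF
                System.out.println("insets: " + btn.getBorder().getBorderInsets(btn));
                System.out.println("margin: " + btn.getMargin());
            }
        }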

  • How to figure out who owns a worker thread that is still running when my app exits?

    - by Dave
    Not long after upgrading to VS2010, my application won't shut down cleanly. If I close the app and then hit pause in the IDE, I see this: The problem is, there's no context. The call stack just says [External code], which isn't too helpful. Here's what I've done so far to try to narrow down the problem:
    • deleted all extraneous plugins to minimize the number of worker threads launched
    • set breakpoints in my code anywhere I create worker threads (and delegates + BeginInvoke, since I think they are labeled "Worker Thread" in the debugger anyway). None were hit.
    • set IsBackground = true for all threads

    While I could do the next brute force step, which is to roll my code back to a point where this didn't happen and then look over all of the change logs, this isn't terribly efficient. Can anyone recommend a better way to figure this out, given the notable lack of information presented by the debugger? The only other things I can think of include:
    • read up on WinDbg and try to use it to stop anytime a thread is started. At least, I thought that was possible... :)
    • comment out huge blocks of code until the app closes properly, then start uncommenting until it doesn't.

    UPDATE: Perhaps this information will be of use. I decided to use WinDbg and attach to my application. I then closed it, switched to thread 0, and dumped the stack contents. Here's what I have:

        ThreadCount:      6
        UnstartedThread:  0
        BackgroundThread: 1
        PendingThread:    0
        DeadThread:       4
        Hosted Runtime:   no

                                            PreEmptive  GC Alloc           Lock
              ID  OSID  ThreadOBJ  State    GC          Context            Domain    Count  APT  Exception
        0      1  1c70  005a65c8   6020     Enabled     02dac6e0:02dad7f8  005a03c0  0      STA
        2      2  1b20  005b1980   b220     Enabled     00000000:00000000  005a03c0  0      MTA  (Finalizer)
        XXXX   3        08504048   19820    Enabled     00000000:00000000  005a03c0  0      Ukn
        XXXX   4        08504540   19820    Enabled     00000000:00000000  005a03c0  0      Ukn
        XXXX   5        08516a90   19820    Enabled     00000000:00000000  005a03c0  0      Ukn
        XXXX   6        08517260   19820    Enabled     00000000:00000000  005a03c0  0      Ukn

        0:008> ~0s
        eax=c0674960 ebx=00000000 ecx=00000000 edx=00000000 esi=0040f320 edi=005a65c8
        eip=76c37e47 esp=0040f23c ebp=0040f258 iopl=0  nv up ei pl nz na po nc
        cs=0023 ss=002b ds=002b es=002b fs=0053 gs=002b  efl=00000202
        USER32!NtUserGetMessage+0x15:
        76c37e47 83c404  add esp,4

        0:000> !clrstack
        OS Thread Id: 0x1c70 (0)
        Child SP IP       Call Site
        0040f274 76c37e47 [InlinedCallFrame: 0040f274]
        0040f270 6baa8976 DomainBoundILStubClass.IL_STUB_PInvoke(System.Windows.Interop.MSG ByRef, System.Runtime.InteropServices.HandleRef, Int32, Int32)
        *** WARNING: Unable to verify checksum for C:\Windows\assembly\NativeImages_v4.0.30319_32\WindowsBase\d17606e813f01376bd0def23726ecc62\WindowsBase.ni.dll
        0040f274 6ba924c5 [InlinedCallFrame: 0040f274] MS.Win32.UnsafeNativeMethods.IntGetMessageW(System.Windows.Interop.MSG ByRef, System.Runtime.InteropServices.HandleRef, Int32, Int32)
        0040f2c4 6ba924c5 MS.Win32.UnsafeNativeMethods.GetMessageW(System.Windows.Interop.MSG ByRef, System.Runtime.InteropServices.HandleRef, Int32, Int32)
        0040f2dc 6ba8e5f8 System.Windows.Threading.Dispatcher.GetMessage(System.Windows.Interop.MSG ByRef, IntPtr, Int32, Int32)
        0040f318 6ba8d579 System.Windows.Threading.Dispatcher.PushFrameImpl(System.Windows.Threading.DispatcherFrame)
        0040f368 6ba8d2a1 System.Windows.Threading.Dispatcher.PushFrame(System.Windows.Threading.DispatcherFrame)
        0040f374 6ba7fba0 System.Windows.Threading.Dispatcher.Run()
        0040f380 62e6ccbb System.Windows.Application.RunDispatcher(System.Object)
        *** WARNING: Unable to verify checksum for C:\Windows\assembly\NativeImages_v4.0.30319_32\PresentationFramewo#\7f91eecda3ff7ce478146b6458580c98\PresentationFramework.ni.dll
        0040f38c 62e6c8ff System.Windows.Application.RunInternal(System.Windows.Window)
        0040f3b0 62e6c682 System.Windows.Application.Run(System.Windows.Window)
        0040f3c0 62e6c30b System.Windows.Application.Run()
        0040f3cc 001f00bc MyApplication.App.Main() [C:\code\trunk\MyApplication\obj\Debug\GeneratedInternalTypeHelper.g.cs @ 24]
        0040f608 66c421db [GCFrame: 0040f608]

    EDIT -- not sure if this helps, but the main thread's call stack looks like this:

        [Managed to Native Transition]
        > WindowsBase.dll!MS.Win32.UnsafeNativeMethods.GetMessageW(ref System.Windows.Interop.MSG msg, System.Runtime.InteropServices.HandleRef hWnd, int uMsgFilterMin, int uMsgFilterMax) + 0x15 bytes
        WindowsBase.dll!System.Windows.Threading.Dispatcher.GetMessage(ref System.Windows.Interop.MSG msg, System.IntPtr hwnd, int minMessage, int maxMessage) + 0x48 bytes
        WindowsBase.dll!System.Windows.Threading.Dispatcher.PushFrameImpl(System.Windows.Threading.DispatcherFrame frame = {System.Windows.Threading.DispatcherFrame}) + 0x85 bytes
        WindowsBase.dll!System.Windows.Threading.Dispatcher.PushFrame(System.Windows.Threading.DispatcherFrame frame) + 0x49 bytes
        WindowsBase.dll!System.Windows.Threading.Dispatcher.Run() + 0x4c bytes
        PresentationFramework.dll!System.Windows.Application.RunDispatcher(object ignore) + 0x17 bytes
        PresentationFramework.dll!System.Windows.Application.RunInternal(System.Windows.Window window) + 0x6f bytes
        PresentationFramework.dll!System.Windows.Application.Run(System.Windows.Window window) + 0x26 bytes
        PresentationFramework.dll!System.Windows.Application.Run() + 0x1b bytes

    I did a search on it and found some posts related to WPF GUIs hanging, and maybe that'll give me some more clues.
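    One practical aid, offered as an editorial suggestion rather than an answer from the thread: name every thread your own code creates. Named threads appear by name in the Visual Studio Threads window, which turns "who owns this thread at shutdown" into a lookup instead of a hunt. A minimal sketch; the thread name is hypothetical:

        using System;
        using System.Threading;

        class WorkerExample
        {
            static void Main()
            {
                // "PluginScanner" is an illustrative name; pick one per
                // worker so each shows up identifiably in the debugger.
                Thread worker = new Thread(DoWork);
                worker.Name = "PluginScanner";
                worker.IsBackground = true; // background threads don't keep the process alive
                worker.Start();
                worker.Join();
            }

            static void DoWork()
            {
                Console.WriteLine("working on thread: " + Thread.CurrentThread.Name);
            }
        }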

  • Python socket-related question

    - by paul
    Hello all, I'm totally new to socket programming in Python. I've read some tutorials and manuals, but I didn't find what I want, which is a Python socket script that can send some info to a server and also receive some info back. For example, I want to send my login information (ID and password) to the server and receive the result reply from it, but I have no idea how. I captured the login exchange with Wireshark: the port number is 5300, the server IP is 58.225.56.152, I sent the ID 'aaaaaaa' and the password 'bbbbbbb', and I received a 'User Not Found' result from the server. How can I reproduce this kind of exchange with a Python socket? Any reference, example, or other help is much appreciated!

        0000  00 50 56 f2 c8 cc 00 0c 29 a8 f8 c0 08 00 45 00   .PV.....).....E.
        0010  00 e2 2a 19 40 00 80 06 d0 55 c0 a8 cb 85 3a e1   ..*.@....U....:.
        0020  38 98 05 f3 15 9a b9 86 62 7b 0d ab 0f ba 50 18   8.......b{....P.
        0030  fa f0 26 14 00 00 50 54 3f 09 a2 91 7f 13 00 00   ..&...PT?.......
        0040  00 1f 14 00 02 00 00 00 00 00 00 00 07 00 00 00   ................
        0050  61 61 61 61 61 61 61 50 54 3f 09 a2 91 7f 8b 00   aaaaaaaPT?......
        0060  00 00 1f 15 00 08 00 00 00 07 00 00 00 61 61 61   .............aaa
        0070  61 61 61 61 07 00 00 00 62 62 62 62 62 62 62 01   aaaa....bbbbbbb.
        0080  00 00 00 31 02 00 00 00 4b 52 0f 00 00 00 31 39   ...1....KR....19
        0090  32 2e 31 36 38 2e 32 30 33 2e 31 33 33 30 00 00   2.168.203.1330..
        00a0  00 4d 69 63 72 6f 73 6f 66 74 20 57 69 6e 64 6f   .Microsoft Windo
        00b0  77 73 20 58 50 20 50 72 6f 66 65 73 73 69 6f 6e   ws XP Profession
        00c0  61 6c 20 53 65 72 76 69 63 65 20 50 61 63 6b 20   al Service Pack
        00d0  32 14 00 00 00 31 30 30 31 33 30 30 35 33 31 35   2....10013005315
        00e0  37 38 33 37 32 30 31 32 33 03 00 00 00 34 37 30   783720123....470

        0000  00 0c 29 a8 f8 c0 00 50 56 f2 c8 cc 08 00 45 00   ..)....PV.....E.
        0010  00 28 ae 37 00 00 80 06 8c f1 3a e1 38 98 c0 a8   .(.7......:.8...
        0020  cb 85 15 9a 05 f3 0d ab 0f ba b9 86 63 35 50 10   ............c5P.
        0030  fa f0 5f 8e 00 00 00 00 00 00 00 00               .._.........

        0000  00 0c 29 a8 f8 c0 00 50 56 f2 c8 cc 08 00 45 00   ..)....PV.....E.
        0010  00 4c ae 38 00 00 80 06 8c cc 3a e1 38 98 c0 a8   .L.8......:.8...
        0020  cb 85 15 9a 05 f3 0d ab 0f ba b9 86 63 35 50 18   ............c5P.
        0030  fa f0 3e 75 00 00 50 54 3f 09 a2 91 7f 16 00 00   ..>u..PT?.......
        0040  00 1f 18 00 01 00 00 00 0e 00 00 00 55 73 65 72   ............User
        0050  20 4e 6f 74 20 46 6f 75 6e 64                      Not Found
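    As an editorial illustration (not from an answer in the thread), here is a minimal sketch of the TCP round trip described above. The address and port are the ones stated in the question, and the little-endian length prefixes before each field are a guess read off the hex dump (07 00 00 00 before each 7-byte credential), so treat the framing as an assumption, not the real protocol:

        import socket
        import struct

        HOST, PORT = "58.225.56.152", 5300  # values stated in the question

        with socket.create_connection((HOST, PORT), timeout=10) as sock:
            user = b"aaaaaaa"
            password = b"bbbbbbb"
            # Assumed framing: a 4-byte little-endian length before each field,
            # matching the "07 00 00 00" prefixes visible in the capture.
            payload = (struct.pack("<I", len(user)) + user
                       + struct.pack("<I", len(password)) + password)
            sock.sendall(payload)
            reply = sock.recv(4096)
            print(reply)  # the capture shows the server answering "User Not Found"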

  • JavaScript and the Twitter API rate limit (changing variable values in a loop)

    - by Pablo
    Hello, I have adapted a script from an example at http://github.com/remy/twitterlib. It's a script that queries my Twitter timeline every 10 seconds to get only the messages that begin with a musical notation. It's already working, but I don't know if it is the best way to do this. The Twitter API has a rate limit of 150 requests per hour per IP (queries from the same user). At the moment, my Twitter API access gets blocked after 25 minutes because of the 10-second interval between queries. If I set an interval of 25 seconds between queries, I stay below the hourly rate limit, but the first 10 posts are shown very slowly.

    I think this way I can stay below the Twitter API rate limit and still show the first 10 posts at normal speed:
    • For the first 10 posts, set an interval of 5 seconds between queries.
    • For the rest of the posts, set an interval of 25 seconds between queries.

    I think that if I add a loop somewhere in the code implementing the rules above, changing the frecuency value from 5000 to 25000 after the 10th query (or after 50 seconds, which is the same), that's it. Can you help me modify the code below to make it work (see the sketch after the code)? Thank you in advance.

        var Queue = function (delay, callback) {
            var q = [],
                timer = null,
                processed = {},
                empty = null,
                ignoreRT = twitterlib.filter.format('-"RT @"');

            function process() {
                if (q.length) {
                    callback(q.shift());
                } else {
                    this.stop();
                    setTimeout(empty, 5000);
                }
                return this;
            }

            return {
                push: function (item) {
                    var i;
                    if (!(item instanceof Array)) {
                        item = [item];
                    }
                    if (timer == null && q.length == 0) {
                        this.start();
                    }
                    for (i = 0; i < item.length; i++) {
                        if (!processed[item[i].id] && twitterlib.filter.match(item[i], ignoreRT)) {
                            processed[item[i].id] = true;
                            q.push(item[i]);
                        }
                    }
                    q = q.sort(function (a, b) { return a.id > b.id; });
                    return this;
                },
                start: function () {
                    if (timer == null) {
                        timer = setInterval(process, delay);
                    }
                    return this;
                },
                stop: function () {
                    clearInterval(timer);
                    timer = null;
                    return this;
                },
                empty: function (fn) { empty = fn; return this; },
                q: q,
                next: process
            };
        };

        $.extend($.expr[':'], {
            below: function (a, i, m) {
                var y = m[3];
                return $(a).offset().top > y;
            }
        });

        function renderTweet(data) {
            var html = '';
            html += '';
            html += twitterlib.ify.clean(data.text);
            html += '';
            since_id = data.id;
            return html;
        }

        function passToQueue(data) {
            if (data.length) {
                twitterQueue.push(data.reverse());
            }
        }

        var frecuency = 10000; // the lapse between each new queue run
        var since_id = 1;

        var run = function () {
            twitterlib.timeline('twitteruser', { filter: "'?'", limit: 10 }, passToQueue);
        };

        var twitterQueue = new Queue(frecuency, function (item) {
            var tweet = $(renderTweet(item));
            var tweetClone = tweet.clone().hide().css({ visibility: 'hidden' })
                .prependTo('#tweets').slideDown(1000);
            tweet.css({ top: -200, position: 'absolute' }).prependTo('#tweets')
                .animate({ top: 0 }, 1000, function () {
                    tweetClone.css({ visibility: 'visible' });
                    $(this).remove();
                });
            $('#tweets p:below(' + window.innerHeight + ')').remove();
        }).empty(run);

        run();
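    One way to get the two-speed polling, sketched editorially rather than taken from an answer: drive run() with setTimeout instead of a fixed cadence, and pick the next delay from a counter. In this scheme the trailing run() call and the .empty(run) re-arm would be dropped so that the scheduler alone controls how often the timeline is queried; the Queue's own display delay stays untouched:

        // Sketch: first 10 timeline queries run 5 s apart, later ones
        // 25 s apart. `run` is the polling function defined above.
        var queryCount = 0;

        function pollTimeline() {
            run();                                   // issue one timeline query
            queryCount++;
            var nextDelay = queryCount < 10 ? 5000 : 25000;
            setTimeout(pollTimeline, nextDelay);     // re-arm with the new delay
        }

        pollTimeline();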

  • Socket PHP server not showing messages sent from Android client

    - by Mj1992
    Hi, I am a newbie at this kind of thing, but here's what I want to do. I am trying to implement a chat application in which users send their queries from the website, and as soon as the messages are sent, they appear in the Android application of the site owner, who answers them. In short, I want to implement live chat. Right now I am just trying to send messages from the Android app to the PHP server. But when I run my PHP script from Dreamweaver in Chrome, the browser keeps loading and doesn't show any output when I send a message from the client. Sometimes the PHP script has shown output that I sent from the Android client, but I don't know when it works and when it doesn't. So I want the messages to show in the PHP script as soon as I send them from the client, and vice versa (I have not implemented the vice versa part for the client yet; help with that would be appreciated too). Here's what I've done so far.

    PHP script:

        <?php
        set_time_limit(0);
        $address = '127.0.0.1';
        $port = 1234;
        $sock = socket_create(AF_INET, SOCK_STREAM, 0);
        socket_bind($sock, $address, $port) or die('Could not bind to address');
        socket_listen($sock);
        $client = socket_accept($sock);
        $welcome = "Roll up, roll up, to the greatest show on earth!\n? ";
        socket_write($client, $welcome, strlen($welcome)) or die("Could not send connect string\n");
        do {
            $input = socket_read($client, 1024, 1) or die("Could not read input\n");
            echo "User Says: \n\t\t\t" . $input;
            if (trim($input) != "") {
                echo "Received input: $input\n";
                if (trim($input) == "END") {
                    socket_close($spawn);
                    break;
                }
            } else {
                $output = strrev($input) . "\n";
                socket_write($spawn, $output . "? ", strlen(($output)+2)) or die("Could not write output\n");
                echo "Sent output: " . trim($output) . "\n";
            }
        } while (true);
        socket_close($sock);
        echo "Socket Terminated";
        ?>

    Android code:

        public class ServerClientActivity extends Activity {

            private Button bt;
            private TextView tv;
            private Socket socket;
            private String serverIpAddress = "127.0.0.1";
            private static final int REDIRECTED_SERVERPORT = 1234;

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);
                bt = (Button) findViewById(R.id.myButton);
                tv = (TextView) findViewById(R.id.myTextView);
                try {
                    InetAddress serverAddr = InetAddress.getByName(serverIpAddress);
                    socket = new Socket(serverAddr, REDIRECTED_SERVERPORT);
                } catch (UnknownHostException e1) {
                    e1.printStackTrace();
                } catch (IOException e1) {
                    e1.printStackTrace();
                }
                bt.setOnClickListener(new OnClickListener() {
                    public void onClick(View v) {
                        try {
                            EditText et = (EditText) findViewById(R.id.EditText01);
                            String str = et.getText().toString();
                            PrintWriter out = new PrintWriter(new BufferedWriter(
                                    new OutputStreamWriter(socket.getOutputStream())), true);
                            out.println(str);
                            Log.d("Client", "Client sent message");
                        } catch (UnknownHostException e) {
                            tv.setText(e.getMessage());
                            e.printStackTrace();
                        } catch (IOException e) {
                            tv.setText(e.getMessage());
                            e.printStackTrace();
                        } catch (Exception e) {
                            tv.setText(e.getMessage());
                            e.printStackTrace();
                        }
                    }
                });
            }
        }

    I've only pasted the on-click button event code for Android. EditText01 is the text box where I enter my text. The IP address and port are the same as in the PHP script.
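    A few editorial observations about the script as posted, plus a sketch. First, if the Android client runs in the emulator, 127.0.0.1 refers to the emulator itself; the special address 10.0.2.2 reaches the host machine's loopback. Second, the reverse-and-reply branch writes to $spawn, which is never defined (the accepted socket is $client), computes strlen(($output)+2), which casts the string to a number, and only runs when the trimmed input is empty. A corrected sketch of the read/echo loop, assuming the same single-client setup:

        <?php
        // Corrected loop (sketch): read from $client, print what arrived,
        // and write the reversed string back on the same socket.
        do {
            $input = socket_read($client, 1024) or die("Could not read input\n");
            if (trim($input) == "END") {
                break;
            }
            if (trim($input) != "") {
                echo "Received input: $input\n";
                $output = strrev(trim($input)) . "\n? ";
                socket_write($client, $output, strlen($output))
                    or die("Could not write output\n");
            }
        } while (true);
        socket_close($client);
        socket_close($sock);
        ?>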

  • Java Swing GUI: moving components around, specifically with layouts

    - by Xemiru Scarlet Sanzenin
    I'm making a little test GUI for something I'm building. However, problems occur with the positioning of the panels.

        public winInit() {
            super("Chatterbox - Login");
            try {
                UIManager.setLookAndFeel(UIManager.getSystemLookAndFeelClassName());
            } catch (ClassNotFoundException e) {
            } catch (InstantiationException e) {
            } catch (IllegalAccessException e) {
            } catch (UnsupportedLookAndFeelException e) {
            }
            setSize(300, 135);
            pn1 = new JPanel();
            pn2 = new JPanel();
            pn3 = new JPanel();
            l1 = new JLabel("Username");
            l2 = new JLabel("Password");
            l3 = new JLabel("Random text here");
            l4 = new JLabel("Server Address");
            l5 = new JLabel("No address set.");
            i1 = new JTextField(10);
            p1 = new JPasswordField(10);
            b1 = new JButton("Connect");
            b2 = new JButton("Register");
            b3 = new JButton("Set IP");
            l4.setBounds(10, 12, getDim(l4).width, getDim(l4).height);
            l1.setBounds(10, 35, getDim(l1).width, getDim(l1).height);
            l2.setBounds(10, 60, getDim(l2).width, getDim(l2).height);
            l3.setBounds(10, 85, getDim(l3).width, getDim(l3).height);
            l5.setBounds(l4.getBounds().width + 14, 12, l5.getPreferredSize().width, l5.getPreferredSize().height);
            l5.setForeground(Color.gray);
            i1.setBounds(getDim(l1).width + 15, 35, getDim(i1).width, getDim(i1).height);
            p1.setBounds(getDim(l1).width + 15, 60, getDim(p1).width, getDim(p1).height);
            b1.setBounds(getDim(l1).width + getDim(i1).width + 23, 34, getDim(b2).width, getDim(b1).height - 5);
            b2.setBounds(getDim(l1).width + getDim(i1).width + 23, 60, getDim(b2).width, getDim(b2).height - 5);
            b3.setBounds(getDim(l1).width + getDim(i1).width + 23, 10, getDim(b2).width, getDim(b3).height - 5);
            b1.addActionListener(clickButton);
            b2.addActionListener(clickButton);
            b3.addActionListener(clickButton);
            pn1.setLayout(new FlowLayout(FlowLayout.RIGHT));
            pn2.setLayout(new FlowLayout(FlowLayout.RIGHT));
            pn1.add(l1);
            pn1.add(i1);
            pn1.add(b1);
            pn2.add(l2);
            pn2.add(p1);
            pn2.add(b2);
            add(pn1);
            add(pn2);
        }

    I am attempting to use FlowLayout to position the panels the way I want. I'd use BorderLayout while adding, but the vertical spacing is too far apart when I just use the directions closest to one another. What this code does is create a 300x150 window and place whatever is in the two panels in the exact same space. Yes, I realize there's useless code in there with setBounds(); that was just me experimenting with absolute positioning, which wasn't working out for me either.
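    An editorial note on the likely cause: a JFrame's content pane uses BorderLayout by default, and add(pn1); add(pn2); puts both panels in the CENTER slot, so only the last one added gets laid out, which matches the "exact same space" symptom. A minimal sketch of stacking the two rows with a BoxLayout along the page axis; class and component names are illustrative:

        import javax.swing.*;

        public class StackedRows extends JFrame {
            public StackedRows() {
                super("Stacked rows");
                // Replace the content pane's default BorderLayout so that
                // successive add() calls stack vertically instead of all
                // targeting the CENTER slot.
                getContentPane().setLayout(
                        new BoxLayout(getContentPane(), BoxLayout.PAGE_AXIS));
                JPanel row1 = new JPanel();   // username row
                row1.add(new JLabel("Username"));
                row1.add(new JTextField(10));
                row1.add(new JButton("Connect"));
                JPanel row2 = new JPanel();   // password row
                row2.add(new JLabel("Password"));
                row2.add(new JPasswordField(10));
                row2.add(new JButton("Register"));
                add(row1);
                add(row2);
                pack();
                setDefaultCloseOperation(EXIT_ON_CLOSE);
            }

            public static void main(String[] args) {
                SwingUtilities.invokeLater(new Runnable() {
                    public void run() {
                        new StackedRows().setVisible(true);
                    }
                });
            }
        }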

  • What's New in ASP.NET 4

    - by Navaneeth
    The .NET Framework version 4 includes enhancements for ASP.NET 4 in targeted areas. Visual Studio 2010 and Microsoft Visual Web Developer Express also include enhancements and new features for improved Web development. This document provides an overview of many of the new features that are included in the upcoming release.

    This topic contains the following sections:
    • ASP.NET Core Services
    • ASP.NET Web Forms
    • ASP.NET MVC
    • Dynamic Data
    • ASP.NET Chart Control
    • Visual Web Developer Enhancements
    • Web Application Deployment with Visual Studio 2010
    • Enhancements to ASP.NET Multi-Targeting

    ASP.NET Core Services

    ASP.NET 4 introduces many features that improve core ASP.NET services such as output caching and session state storage.

    Extensible Output Caching

    Since the time that ASP.NET 1.0 was released, output caching has enabled developers to store the generated output of pages, controls, and HTTP responses in memory. On subsequent Web requests, ASP.NET can serve content more quickly by retrieving the generated output from memory instead of regenerating the output from scratch. However, this approach has a limitation: generated content always has to be stored in memory. On servers that experience heavy traffic, the memory requirements for output caching can compete with the memory requirements of other parts of a Web application.

    ASP.NET 4 adds extensibility to output caching that enables you to configure one or more custom output-cache providers. Output-cache providers can use any storage mechanism to persist HTML content. These storage options can include local or remote disks, cloud storage, and distributed cache engines. Output-cache provider extensibility in ASP.NET 4 lets you design more aggressive and more intelligent output-caching strategies for Web sites. For example, you can create an output-cache provider that caches the "Top 10" pages of a site in memory, while caching pages that get lower traffic on disk. Alternatively, you can cache every vary-by combination for a rendered page, but use a distributed cache so that the memory consumption is offloaded from front-end Web servers.

    You create a custom output-cache provider as a class that derives from the OutputCacheProvider type. You can then configure the provider in the Web.config file by using the new providers subsection of the outputCache element. For more information and for examples that show how to configure the output cache, see outputCache Element for caching (ASP.NET Settings Schema). For more information about the classes that support caching, see the documentation for the OutputCache and OutputCacheProvider classes.

    By default, in ASP.NET 4, all HTTP responses, rendered pages, and controls use the in-memory output cache; the defaultProvider attribute for ASP.NET is AspNetInternalProvider. You can change the default output-cache provider used for a Web application by specifying a different provider name for the defaultProvider attribute. In addition, you can select different output-cache providers for individual controls and for individual requests, and programmatically specify which provider to use. For more information, see the HttpApplication.GetOutputCacheProviderName(HttpContext) method.
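    For orientation, here is a minimal sketch of such a provider; it is an editorial illustration rather than part of the original article. The class name is hypothetical, an in-memory dictionary stands in for the disk or distributed store, and expiration handling is omitted:

        using System;
        using System.Collections.Concurrent;
        using System.Web.Caching;

        // Hypothetical custom provider. A real one would persist entries
        // to disk, a database, or a distributed cache, and honor utcExpiry.
        public class DiskOutputCacheProvider : OutputCacheProvider
        {
            private static readonly ConcurrentDictionary<string, object> Store =
                new ConcurrentDictionary<string, object>();

            // Return the cached entry for key, or null if none exists.
            public override object Get(string key)
            {
                object entry;
                return Store.TryGetValue(key, out entry) ? entry : null;
            }

            // Add only if the key is absent; return whatever ends up cached.
            public override object Add(string key, object entry, DateTime utcExpiry)
            {
                return Store.GetOrAdd(key, entry);
            }

            // Insert or overwrite unconditionally.
            public override void Set(string key, object entry, DateTime utcExpiry)
            {
                Store[key] = entry;
            }

            public override void Remove(string key)
            {
                object removed;
                Store.TryRemove(key, out removed);
            }
        }

    Registered in Web.config under the providers subsection of the outputCache element with a name such as "DiskCache", it can then be selected as shown next.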
    The easiest way to choose a different output-cache provider for different Web user controls is to do so declaratively by using the new providerName attribute in a page or control directive, as shown in the following example:

        <%@ OutputCache Duration="60" VaryByParam="None" providerName="DiskCache" %>

    Preloading Web Applications

    Some Web applications must load large amounts of data or must perform expensive initialization processing before serving the first request. In earlier versions of ASP.NET, for these situations you had to devise custom approaches to "wake up" an ASP.NET application and then run initialization code during the Application_Load method in the Global.asax file. To address this scenario, a new application preload manager (autostart feature) is available when ASP.NET 4 runs on IIS 7.5 on Windows Server 2008 R2. The preload feature provides a controlled approach for starting up an application pool, initializing an ASP.NET application, and then accepting HTTP requests. It lets you perform expensive application initialization prior to processing the first HTTP request. For example, you can use the application preload manager to initialize an application and then signal a load-balancer that the application was initialized and is ready to accept HTTP traffic.

    To use the application preload manager, an IIS administrator sets an application pool in IIS 7.5 to be automatically started by using the following configuration in the applicationHost.config file:

        <applicationPools>
          <add name="MyApplicationPool" startMode="AlwaysRunning" />
        </applicationPools>

    Because a single application pool can contain multiple applications, you specify individual applications to be automatically started by using the following configuration in the applicationHost.config file:

        <sites>
          <site name="MySite" id="1">
            <application path="/"
                serviceAutoStartEnabled="true"
                serviceAutoStartProvider="PrewarmMyCache" >
              <!-- Additional content -->
            </application>
          </site>
        </sites>
        <!-- Additional content -->
        <serviceAutoStartProviders>
          <add name="PrewarmMyCache" type="MyNamespace.CustomInitialization, MyLibrary" />
        </serviceAutoStartProviders>

    When an IIS 7.5 server is cold-started or when an individual application pool is recycled, IIS 7.5 uses the information in the applicationHost.config file to determine which Web applications have to be automatically started. For each application that is marked for preload, IIS 7.5 sends a request to ASP.NET 4 to start the application in a state during which the application temporarily does not accept HTTP requests. When it is in this state, ASP.NET instantiates the type defined by the serviceAutoStartProvider attribute (as shown in the previous example) and calls into its public entry point. You create a managed preload type that has the required entry point by implementing the IProcessHostPreloadClient interface, as shown in the following example:

        public class CustomInitialization : System.Web.Hosting.IProcessHostPreloadClient
        {
            public void Preload(string[] parameters)
            {
                // Perform initialization.
            }
        }

    After your initialization code runs in the Preload method and after the method returns, the ASP.NET application is ready to process requests.

    Permanently Redirecting a Page

    Content in Web applications is often moved over the lifetime of the application. This can lead to links being out of date, such as the links that are returned by search engines. In ASP.NET, developers have traditionally handled requests to old URLs by using the Redirect method to forward a request to the new URL.
    However, the Redirect method issues an HTTP 302 (Found) response (which is used for a temporary redirect). This results in an extra HTTP round trip. ASP.NET 4 adds a RedirectPermanent helper method that makes it easy to issue HTTP 301 (Moved Permanently) responses, as in the following example:

        RedirectPermanent("/newpath/foroldcontent.aspx");

    Search engines and other user agents that recognize permanent redirects will store the new URL that is associated with the content, which eliminates the unnecessary round trip made by the browser for temporary redirects.

    Session State Compression

    By default, ASP.NET provides two options for storing session state across a Web farm. The first option is a session state provider that invokes an out-of-process session state server. The second option is a session state provider that stores data in a Microsoft SQL Server database. Because both options store state information outside a Web application's worker process, session state has to be serialized before it is sent to remote storage. If a large amount of data is saved in session state, the size of the serialized data can become very large.

    ASP.NET 4 introduces a new compression option for both kinds of out-of-process session state providers. By using this option, applications that have spare CPU cycles on Web servers can achieve substantial reductions in the size of serialized session state data. You can set this option using the new compressionEnabled attribute of the sessionState element in the configuration file. When the compressionEnabled configuration option is set to true, ASP.NET compresses (and decompresses) serialized session state by using the .NET Framework GZipStream class. The following example shows how to set this attribute.

        <sessionState
            mode="SqlServer"
            sqlConnectionString="data source=dbserver;Initial Catalog=aspnetstate"
            allowCustomSqlDatabase="true"
            compressionEnabled="true" />

    ASP.NET Web Forms

    Web Forms has been a core feature in ASP.NET since the release of ASP.NET 1.0. Many enhancements have been made in this area for ASP.NET 4, such as the following:
    • The ability to set meta tags.
    • More control over view state.
    • Support for recently introduced browsers and devices.
    • Easier ways to work with browser capabilities.
    • Support for using ASP.NET routing with Web Forms.
    • More control over generated IDs.
    • The ability to persist selected rows in data controls.
    • More control over rendered HTML in the FormView and ListView controls.
    • Filtering support for data source controls.
    • Enhanced support for Web standards and accessibility.

    Setting Meta Tags with the Page.MetaKeywords and Page.MetaDescription Properties

    Two properties have been added to the Page class: MetaKeywords and MetaDescription. These two properties represent corresponding meta tags in the HTML rendered for a page, as shown in the following example:

        <head id="Head1" runat="server">
          <title>Untitled Page</title>
          <meta name="keywords" content="keyword1, keyword2" />
          <meta name="description" content="Description of my page" />
        </head>

    These two properties work like the Title property does, and they can be set in the @ Page directive. For more information, see Page.MetaKeywords and Page.MetaDescription.

    Enabling View State for Individual Controls

    A new property has been added to the Control class: ViewStateMode. You can use this property to disable view state for all controls on a page except those for which you explicitly enable view state.
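    As an illustration (not from the original article), a page might turn view state off at the page level and opt a single control back in; the control name here is hypothetical:

        <%@ Page Language="C#" ViewStateMode="Disabled" %>

        <%-- Only this label keeps its view state; everything else on the
             page has view state disabled by the page directive above. --%>
        <asp:Label ID="StatusLabel" runat="server" ViewStateMode="Enabled" />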
    View state data is included in a page's HTML and increases the amount of time it takes to send a page to the client and post it back. Storing more view state than is necessary can cause a significant decrease in performance. In earlier versions of ASP.NET, you could reduce the impact of view state on a page's performance by disabling view state for specific controls. But sometimes it is easier to enable view state for a few controls that need it than to disable it for many that do not. For more information, see Control.ViewStateMode.

    Support for Recently Introduced Browsers and Devices

    ASP.NET includes a feature named browser capabilities that lets you determine the capabilities of the browser that a user is using. Browser capabilities are represented by the HttpBrowserCapabilities object, which is stored in the HttpRequest.Browser property. Information about a particular browser's capabilities is defined by a browser definition file. In ASP.NET 4, these browser definition files have been updated to contain information about recently introduced browsers and devices such as Google Chrome, Research in Motion BlackBerry smart phones, and the Apple iPhone. Existing browser definition files have also been updated. For more information, see How to: Upgrade an ASP.NET Web Application to ASP.NET 4 and ASP.NET Web Server Controls and Browser Capabilities. The browser definition files that are included with ASP.NET 4 are shown in the following list:
    • blackberry.browser
    • chrome.browser
    • Default.browser
    • firefox.browser
    • gateway.browser
    • generic.browser
    • ie.browser
    • iemobile.browser
    • iphone.browser
    • opera.browser
    • safari.browser

    A New Way to Define Browser Capabilities

    ASP.NET 4 includes a new feature referred to as browser capabilities providers. As the name suggests, this lets you build a provider that in turn lets you write custom code to determine browser capabilities. In ASP.NET version 3.5 Service Pack 1, you define browser capabilities in an XML file. This file resides in a machine-level folder or an application-level folder. Most developers do not need to customize these files, but for those who do, the provider approach can be easier than dealing with complex XML syntax. The provider approach makes it possible to simplify the process by implementing a common browser definition syntax, or a database that contains up-to-date browser definitions, or even a Web service for such a database. For more information about the new browser capabilities provider, see the What's New for ASP.NET 4 White Paper.

    Routing in ASP.NET 4

    ASP.NET 4 adds built-in support for routing with Web Forms. Routing is a feature that was introduced with ASP.NET 3.5 SP1 and lets you configure an application to use URLs that are meaningful to users and to search engines because they do not have to specify physical file names. This can make your site more user-friendly and your site content more discoverable by search engines. For example, the URL for a page that displays product categories in your application might look like the following example:

        http://website/products.aspx?categoryid=12

    By using routing, you can use the following URL to render the same information:

        http://website/products/software

    The second URL lets the user know what to expect and can result in significantly improved rankings in search engine results. The new features include the following:
    • The PageRouteHandler class is a simple HTTP handler that you use when you define routes. You no longer have to write a custom route handler.
    • The HttpRequest.RequestContext and Page.RouteData properties make it easier to access information that is passed in URL parameters.
    • The RouteUrl expression provides a simple way to create a routed URL in markup.
    • The RouteValue expression provides a simple way to extract URL parameter values in markup.
    • The RouteParameter class makes it easier to pass URL parameter values to a query for a data source control (similar to FormParameter).
    • You no longer have to change the Web.config file to enable routing.

    For more information about routing, see the following topics:
    • ASP.NET Routing
    • Walkthrough: Using ASP.NET Routing in a Web Forms Application
    • How to: Define Routes for Web Forms Applications
    • How to: Construct URLs from Routes
    • How to: Access URL Parameters in a Routed Page

    Setting Client IDs

    The new ClientIDMode property makes it easier to write client script that references HTML elements rendered for server controls. The increasing use of Microsoft Ajax makes the need to do this more common. For example, you may have a data control that renders a long list of products with prices, and you want to use client script to make a Web service call and update individual prices in the list as they change without refreshing the entire page.

    Typically you get a reference to an HTML element in client script by using the document.getElementById method. You pass to this method the value of the id attribute of the HTML element you want to reference. In the case of elements that are rendered for ASP.NET server controls, earlier versions of ASP.NET could make this difficult or impossible: you were not always able to predict what id values ASP.NET would generate, or ASP.NET could generate very long id values. The problem was especially difficult for data controls that would generate multiple rows for a single instance of the control in your markup.

    ASP.NET 4 adds two new algorithms for generating id attributes. These algorithms can generate id attributes that are easier to work with in client script because they are more predictable and simpler. For more information about how to use the new algorithms, see the following topics:
    • ASP.NET Web Server Control Identification
    • Walkthrough: Making Data-Bound Controls Easier to Access from JavaScript
    • Walkthrough: Making Controls Located in Web User Controls Easier to Access from JavaScript
    • How to: Access Controls from JavaScript by ID

    Persisting Row Selection in Data Controls

    The GridView and ListView controls enable users to select a row. In previous versions of ASP.NET, row selection was based on the row index on the page. For example, if you select the third item on page 1 and then move to page 2, the third item on page 2 is selected. In most cases, it is more desirable not to select any rows on page 2. ASP.NET 4 supports Persisted Selection, a new feature that was initially supported only in Dynamic Data projects in the .NET Framework 3.5 SP1. When this feature is enabled, the selected item is based on the row data key. This means that if you select the third row on page 1 and move to page 2, nothing is selected on page 2. When you move back to page 1, the third row is still selected. This is a much more natural behavior than the behavior in earlier versions of ASP.NET. Persisted selection is now supported for the GridView and ListView controls in all projects.
    You can enable this feature in the GridView control, for example, by setting the EnablePersistedSelection property, as shown in the following example:

        <asp:GridView id="GridView2" runat="server"
            EnablePersistedSelection="true">
        </asp:GridView>

    FormView Control Enhancements

    The FormView control is enhanced to make it easier to style the content of the control with CSS. In previous versions of ASP.NET, the FormView control rendered its contents using an item template. This made styling more difficult in the markup because unexpected table row and table cell tags were rendered by the control. The FormView control supports RenderOuterTable, a new property in ASP.NET 4. When this property is set to false, as shown in the following example, the table tags are not rendered. This makes it easier to apply CSS styles to the contents of the control.

        <asp:FormView ID="FormView1" runat="server" RenderOuterTable="false">

    For more information, see FormView Web Server Control Overview.

    ListView Control Enhancements

    The ListView control, which was introduced in ASP.NET 3.5, has all the functionality of the GridView control while giving you complete control over the output. This control has been made easier to use in ASP.NET 4. The earlier version of the control required that you specify a layout template that contained a server control with a known ID. The following markup shows a typical example of how to use the ListView control in ASP.NET 3.5.

        <asp:ListView ID="ListView1" runat="server">
          <LayoutTemplate>
            <asp:PlaceHolder ID="ItemPlaceHolder" runat="server"></asp:PlaceHolder>
          </LayoutTemplate>
          <ItemTemplate>
            <%# Eval("LastName") %>
          </ItemTemplate>
        </asp:ListView>

    In ASP.NET 4, the ListView control does not require a layout template. The markup shown in the previous example can be replaced with the following markup:

        <asp:ListView ID="ListView1" runat="server">
          <ItemTemplate>
            <%# Eval("LastName") %>
          </ItemTemplate>
        </asp:ListView>

    For more information, see ListView Web Server Control Overview.

    Filtering Data with the QueryExtender Control

    A very common task for developers who create data-driven Web pages is to filter data. This has traditionally been performed by building Where clauses in data source controls. This approach can be complicated, and in some cases the Where syntax does not let you take advantage of the full functionality of the underlying database. To make filtering easier, a new QueryExtender control has been added in ASP.NET 4. This control can be added to EntityDataSource or LinqDataSource controls in order to filter the data returned by these controls. The QueryExtender control relies on LINQ, but you do not need to know how to write LINQ queries to use it.

    The QueryExtender control supports a variety of filter options:
    • SearchExpression: Searches a field or fields for string values and compares them to a specified string value.
    • RangeExpression: Searches a field or fields for values in a range specified by a pair of values.
    • PropertyExpression: Compares a specified value to a property value in a field. If the expression evaluates to true, the data that is being examined is returned.
    • OrderByExpression: Sorts data by a specified column and sort direction.
    • CustomExpression: Calls a function that defines a custom filter in the page.

    For more information, see QueryExtender Web Server Control Overview.
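    To make the options concrete, here is an illustrative sketch (not from the original article) of a SearchExpression wired between a text box and a LinqDataSource; all IDs, type names, and field names are hypothetical:

        <asp:TextBox ID="SearchBox" runat="server" />
        <asp:LinqDataSource ID="ProductsSource" runat="server"
            ContextTypeName="ShopDataContext" TableName="Products" />

        <%-- Filter ProductsSource to rows whose ProductName starts with
             the text typed into SearchBox. --%>
        <asp:QueryExtender ID="ProductsFilter" runat="server"
            TargetControlID="ProductsSource">
          <asp:SearchExpression SearchType="StartsWith" DataFields="ProductName">
            <asp:ControlParameter ControlID="SearchBox" />
          </asp:SearchExpression>
        </asp:QueryExtender>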
    Enhanced Support for Web Standards and Accessibility

    Earlier versions of ASP.NET controls sometimes render markup that does not conform to HTML, XHTML, or accessibility standards. ASP.NET 4 eliminates most of these exceptions. For details about how the HTML that is rendered by each control meets accessibility standards, see ASP.NET Controls and Accessibility.

    CSS for Controls that Can be Disabled

    In ASP.NET 3.5, when a control is disabled (see WebControl.Enabled), a disabled attribute is added to the rendered HTML element. For example, the following markup creates a Label control that is disabled:

        <asp:Label id="Label1" runat="server" Text="Test" Enabled="false" />

    In ASP.NET 3.5, the previous control settings generate the following HTML:

        <span id="Label1" disabled="disabled">Test</span>

    In HTML 4.01, the disabled attribute is not considered valid on span elements. It is valid only on input elements because it specifies that they cannot be accessed. On display-only elements such as span elements, browsers typically support rendering for a disabled appearance, but a Web page that relies on this non-standard behavior is not robust according to accessibility standards. For display-only elements, you should use CSS to indicate a disabled visual appearance. Therefore, by default ASP.NET 4 generates the following HTML for the control settings shown previously:

        <span id="Label1" class="aspNetDisabled">Test</span>

    You can change the value of the class attribute that is rendered by default when a control is disabled by setting the DisabledCssClass property.

    CSS for Validation Controls

    In ASP.NET 3.5, validation controls render a default color of red as an inline style. For example, the following markup creates a RequiredFieldValidator control:

        <asp:RequiredFieldValidator ID="RequiredFieldValidator1" runat="server"
            ErrorMessage="Required Field" ControlToValidate="RadioButtonList1" />

    ASP.NET 3.5 renders the following HTML for the validator control:

        <span id="RequiredFieldValidator1"
            style="color:Red;visibility:hidden;">RequiredFieldValidator</span>

    By default, ASP.NET 4 does not render an inline style to set the color to red. An inline style is used only to hide or show the validator, as shown in the following example:

        <span id="RequiredFieldValidator1"
            style="visibility:hidden;">RequiredFieldValidator</span>

    Therefore, ASP.NET 4 does not automatically show error messages in red. For information about how to use CSS to specify a visual style for a validation control, see Validating User Input in ASP.NET Web Pages.

    CSS for the Hidden Fields Div Element

    ASP.NET uses hidden fields to store state information such as view state and control state. These hidden fields are contained by a div element. In ASP.NET 3.5, this div element does not have a class attribute or an id attribute. Therefore, CSS rules that affect all div elements could unintentionally cause this div to be visible. To avoid this problem, ASP.NET 4 renders the div element for hidden fields with a CSS class that you can use to differentiate the hidden fields div from others. The new class value is shown in the following example:

        <div class="aspNetHidden">

    CSS for the Table, Image, and ImageButton Controls

    By default, in ASP.NET 3.5, some controls set the border attribute of rendered HTML to zero (0). The following example shows HTML that is generated by the Table control in ASP.NET 3.5:

        <table id="Table2" border="0">

    The Image control and the ImageButton control also do this.
    Because this is not necessary and provides visual formatting information that should be provided by using CSS, the attribute is not generated in ASP.NET 4.

    CSS for the UpdatePanel and UpdateProgress Controls

    In ASP.NET 3.5, the UpdatePanel and UpdateProgress controls do not support expando attributes. This makes it impossible to set a CSS class on the HTML elements that they render. In ASP.NET 4 these controls have been changed to accept expando attributes, as shown in the following example:

        <asp:UpdatePanel runat="server" class="myStyle">
        </asp:UpdatePanel>

    The following HTML is rendered for this markup:

        <div id="ctl00_MainContent_UpdatePanel1" class="myStyle">
        </div>

    Eliminating Unnecessary Outer Tables

    In ASP.NET 3.5, the HTML that is rendered for the following controls is wrapped in a table element whose purpose is to apply inline styles to the entire control:
    • FormView
    • Login
    • PasswordRecovery
    • ChangePassword

    If you use templates to customize the appearance of these controls, you can specify CSS styles in the markup that you provide in the templates. In that case, no extra outer table is required. In ASP.NET 4, you can prevent the table from being rendered by setting the new RenderOuterTable property to false.

    Layout Templates for Wizard Controls

    In ASP.NET 3.5, the Wizard and CreateUserWizard controls generate an HTML table element that is used for visual formatting. In ASP.NET 4 you can use a LayoutTemplate element to specify the layout. If you do this, the HTML table element is not generated. In the template, you create placeholder controls to indicate where items should be dynamically inserted into the control. (This is similar to how the template model for the ListView control works.) For more information, see the Wizard.LayoutTemplate property.

    New HTML Formatting Options for the CheckBoxList and RadioButtonList Controls

    ASP.NET 3.5 uses HTML table elements to format the output for the CheckBoxList and RadioButtonList controls. To provide an alternative that does not use tables for visual formatting, ASP.NET 4 adds two new options to the RepeatLayout enumeration:
    • UnorderedList. This option causes the HTML output to be formatted by using ul and li elements instead of a table.
    • OrderedList. This option causes the HTML output to be formatted by using ol and li elements instead of a table.

    For examples of HTML that is rendered for the new options, see the RepeatLayout enumeration.

    Header and Footer Elements for the Table Control

    In ASP.NET 3.5, the Table control can be configured to render thead and tfoot elements by setting the TableSection property of the TableHeaderRow class and the TableFooterRow class. In ASP.NET 4 these properties are set to the appropriate values by default.

    CSS and ARIA Support for the Menu Control

    In ASP.NET 3.5, the Menu control uses HTML table elements for visual formatting, and in some configurations it is not keyboard-accessible. ASP.NET 4 addresses these problems and improves accessibility in the following ways:
    • The generated HTML is structured as an unordered list (ul and li elements).
    • CSS is used for visual formatting.
    • The menu behaves in accordance with ARIA standards for keyboard access. You can use arrow keys to navigate menu items. (For information about ARIA, see Accessibility in Visual Studio and ASP.NET.)
    • ARIA role and property attributes are added to the generated HTML. (Attributes are added by using JavaScript instead of being included in the HTML, to avoid generating HTML that would cause markup validation errors.)
    Styles for the Menu control are rendered in a style block at the top of the page, instead of inline with the rendered HTML elements. If you want to use a separate CSS file so that you can modify the menu styles, you can set the Menu control's new IncludeStyleBlock property to false, in which case the style block is not generated.

    Valid XHTML for the HtmlForm Control

    In ASP.NET 3.5, the HtmlForm control (which is created implicitly by the <form runat="server"> tag) renders an HTML form element that has both name and id attributes. The name attribute is deprecated in XHTML 1.1. Therefore, this control does not render the name attribute in ASP.NET 4.

    Maintaining Backward Compatibility in Control Rendering

    An existing ASP.NET Web site might have code in it that assumes that controls render HTML the way they do in ASP.NET 3.5. To avoid causing backward compatibility problems when you upgrade the site to ASP.NET 4, you can have ASP.NET continue to generate HTML the way it does in ASP.NET 3.5 after you upgrade the site. To do so, you can set the controlRenderingCompatibilityVersion attribute of the pages element to "3.5" in the Web.config file of an ASP.NET 4 Web site, as shown in the following example:

        <system.web>
          <pages controlRenderingCompatibilityVersion="3.5"/>
        </system.web>

    If this setting is omitted, the default value is the same as the version of ASP.NET that the Web site targets. (For information about multi-targeting in ASP.NET, see .NET Framework Multi-Targeting for ASP.NET Web Projects.)

    ASP.NET MVC

    ASP.NET MVC helps Web developers build compelling standards-based Web sites that are easy to maintain because it decreases the dependency among application layers by using the Model-View-Controller (MVC) pattern. MVC provides complete control over the page markup. It also improves testability by inherently supporting Test Driven Development (TDD). Web sites created using ASP.NET MVC have a modular architecture. This allows members of a team to work independently on the various modules and can be used to improve collaboration. For example, developers can work on the model and controller layers (data and logic), while designers work on the view (presentation). For tutorials, walkthroughs, conceptual content, code samples, and a complete API reference, see ASP.NET MVC 2.

    Dynamic Data

    Dynamic Data was introduced in the .NET Framework 3.5 SP1 release in mid-2008. This feature provides many enhancements for creating data-driven applications, such as the following:
    • A RAD experience for quickly building a data-driven Web site.
    • Automatic validation that is based on constraints defined in the data model.
    • The ability to easily change the markup that is generated for fields in the GridView and DetailsView controls by using field templates that are part of your Dynamic Data project.

    For ASP.NET 4, Dynamic Data has been enhanced to give developers even more power for quickly building data-driven Web sites. For more information, see ASP.NET Dynamic Data Content Map.

    Enabling Dynamic Data for Individual Data-Bound Controls in Existing Web Applications

    You can use Dynamic Data features in existing ASP.NET Web applications that do not use scaffolding by enabling Dynamic Data for individual data-bound controls. Dynamic Data provides the presentation and data-layer support for rendering these controls. When you enable Dynamic Data for data-bound controls, you get the following benefits:
    • Setting default values for data fields.
      Dynamic Data enables you to provide default values at run time for fields in a data control.
    • Interacting with the database without creating and registering a data model.
    • Automatically validating the data that is entered by the user without writing any code.

    For more information, see Walkthrough: Enabling Dynamic Data in ASP.NET Data-Bound Controls.

    New Field Templates for URLs and E-mail Addresses

    ASP.NET 4 introduces two new built-in field templates, EmailAddress.ascx and Url.ascx. These templates are used for fields that are marked as EmailAddress or Url using the DataTypeAttribute attribute. For EmailAddress objects, the field is displayed as a hyperlink that is created by using the mailto: protocol. When users click the link, it opens the user's e-mail client and creates a skeleton message. Objects typed as Url are displayed as ordinary hyperlinks. The following example shows how to mark fields.

        [DataType(DataType.EmailAddress)]
        public object HomeEmail { get; set; }

        [DataType(DataType.Url)]
        public object Website { get; set; }

    Creating Links with the DynamicHyperLink Control

    Dynamic Data uses the new routing feature that was added in the .NET Framework 3.5 SP1 to control the URLs that users see when they access the Web site. The new DynamicHyperLink control makes it easy to build links to pages in a Dynamic Data site. For information, see How to: Create Table Action Links in Dynamic Data.

    Support for Inheritance in the Data Model

    Both the ADO.NET Entity Framework and LINQ to SQL support inheritance in their data models. An example of this might be a database that has an InsurancePolicy table. It might also contain CarPolicy and HousePolicy tables that have the same fields as InsurancePolicy plus additional fields. Dynamic Data has been modified to understand inherited objects in the data model and to support scaffolding for the inherited tables. For more information, see Walkthrough: Mapping Table-per-Hierarchy Inheritance in Dynamic Data.

    Support for Many-to-Many Relationships (Entity Framework Only)

    The Entity Framework has rich support for many-to-many relationships between tables, which is implemented by exposing the relationship as a collection on an Entity object. New field templates (ManyToMany.ascx and ManyToMany_Edit.ascx) have been added to provide support for displaying and editing data that is involved in many-to-many relationships. For more information, see Working with Many-to-Many Data Relationships in Dynamic Data.

    New Attributes to Control Display and Support Enumerations

    The DisplayAttribute has been added to give you additional control over how fields are displayed. The DisplayNameAttribute attribute in earlier versions of Dynamic Data enabled you to change the name that is used as a caption for a field. The new DisplayAttribute class lets you specify more options for displaying a field, such as the order in which a field is displayed and whether a field will be used as a filter. The attribute also provides independent control of the name that is used for the labels in a GridView control, the name that is used in a DetailsView control, the help text for the field, and the watermark used for the field (if the field accepts text input). The EnumDataTypeAttribute class has been added to let you map fields to enumerations. When you apply this attribute to a field, you specify an enumeration type. Dynamic Data uses the new Enumeration.ascx field template to create UI for displaying and editing enumeration values.
Enhanced Support for Filters
Dynamic Data 1.0 had built-in filters for Boolean columns and foreign-key columns, but the filters did not let you specify the order in which they were displayed. The new DisplayAttribute attribute addresses this by giving you control over whether a column appears as a filter and in what order it is displayed. In addition, filtering support has been rewritten to use the new QueryExtender feature of Web Forms. This lets you create filters without requiring knowledge of the data source control that the filters will be used with. Along with these extensions, filters have also been turned into template controls, which lets you add new ones. Finally, the DisplayAttribute class mentioned earlier allows the default filter to be overridden, in the same way that UIHint allows the default field template for a column to be overridden. For more information, see Walkthrough: Filtering Rows in Tables That Have a Parent-Child Relationship and QueryableFilterRepeater.

ASP.NET Chart Control
The ASP.NET Chart server control enables you to create ASP.NET pages that have simple, intuitive charts for complex statistical or financial analysis. The Chart control supports the following features:

Data series, chart areas, axes, legends, labels, titles, and more.
Data binding.
Data manipulation, such as copying, splitting, merging, alignment, grouping, sorting, searching, and filtering.
Statistical formulas and financial formulas.
Advanced chart appearance, such as 3-D, anti-aliasing, lighting, and perspective.
Events and customizations.
Interactivity and Microsoft Ajax.
Support for the Ajax Content Delivery Network (CDN), which provides an optimized way for you to add Microsoft Ajax Library and jQuery scripts to your Web applications.

For more information, see Chart Web Server Control Overview.
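As a rough illustration of the control's object model, the following sketch builds a simple line chart in code-behind. The series name, data points, and the form1 container are made up for the example; Chart, Series, and ChartArea are the core types in the System.Web.UI.DataVisualization.Charting namespace.

using System;
using System.Web.UI.DataVisualization.Charting;

public partial class QuotesPage : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // A Chart needs at least one ChartArea to plot into.
        var chart = new Chart { Width = 400, Height = 250 };
        chart.ChartAreas.Add(new ChartArea("Default"));

        // One data series rendered as a line; each point is (day, price).
        var prices = new Series("ClosingPrice") { ChartType = SeriesChartType.Line };
        prices.Points.AddXY(1, 26.11);
        prices.Points.AddXY(2, 26.01);
        prices.Points.AddXY(3, 26.34);
        chart.Series.Add(prices);

        // Add the control to the page's form so it renders with the response.
        form1.Controls.Add(chart);
    }
}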
Visual Web Developer Enhancements
The following sections provide information about enhancements and new features in Visual Studio 2010 and Visual Web Developer Express. The Web page designer in Visual Studio 2010 has been enhanced for better CSS compatibility, includes additional support for HTML and ASP.NET markup snippets, and features a redesigned version of IntelliSense for JScript.

Improved CSS Compatibility
The Visual Web Developer designer in Visual Studio 2010 has been updated to improve CSS 2.1 standards compliance. The designer better preserves HTML source code and is more robust than in previous versions of Visual Studio.

HTML and JScript Snippets
In the HTML editor, IntelliSense auto-completes tag names. The IntelliSense snippets feature auto-completes whole tags and more. In Visual Studio 2010, IntelliSense snippets are supported for JScript, alongside C# and Visual Basic, which were supported in earlier versions of Visual Studio. Visual Studio 2010 includes over 200 snippets that help you auto-complete common ASP.NET and HTML tags, including required attributes (such as runat="server") and common attributes specific to a tag (such as ID, DataSourceID, ControlToValidate, and Text). You can download additional snippets, or you can write your own snippets that encapsulate the blocks of markup that you or your team use for common tasks. For more information on HTML snippets, see Walkthrough: Using HTML Snippets.

JScript IntelliSense Enhancements
In Visual Studio 2010, JScript IntelliSense has been redesigned to provide an even richer editing experience. IntelliSense now recognizes objects that have been dynamically generated by methods such as registerNamespace and by similar techniques used by other JavaScript frameworks. Performance has been improved so that large libraries of script can be analyzed and IntelliSense displayed with little or no processing delay. Compatibility has been significantly increased to support almost all third-party libraries and diverse coding styles. Documentation comments are now parsed as you type and are immediately used by IntelliSense.

Web Application Deployment with Visual Studio 2010
For Web application projects, Visual Studio now provides tools that work with the IIS Web Deployment Tool (Web Deploy) to automate many processes that had to be done manually in earlier versions of ASP.NET. For example, the following tasks can now be automated:

Creating an IIS application on the destination computer and configuring IIS settings.
Copying files to the destination computer.
Changing Web.config settings that must be different in the destination environment.
Propagating changes to data or data structures in SQL Server databases that are used by the Web application.

For more information about Web application deployment, see ASP.NET Deployment Content Map.

Enhancements to ASP.NET Multi-Targeting
ASP.NET 4 adds new features to multi-targeting that make it easier to work with projects that target earlier versions of the .NET Framework. Multi-targeting was introduced in ASP.NET 3.5 to enable you to use the latest version of Visual Studio without having to upgrade existing Web sites or Web services to the latest version of the .NET Framework. In Visual Studio 2008, when you work with a project targeted for an earlier version of the .NET Framework, most features of the development environment adapt to the targeted version; however, IntelliSense displays language features that are available in the current version, and property windows display properties available in the current version. In Visual Studio 2010, only language features and properties available in the targeted version of the .NET Framework are shown. (A minimal Web.config sketch of how a project declares its target framework appears after the links below.) For more information about multi-targeting, see the following topics:

.NET Framework Multi-Targeting for ASP.NET Web Projects
ASP.NET Side-by-Side Execution Overview
How to: Host Web Applications That Use Different Versions of the .NET Framework on the Same Server
How to: Deploy Web Site Projects Targeted for Earlier Versions of the .NET Framework
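As referenced above, a minimal sketch of the target-framework declaration that the multi-targeting tooling reads: in an ASP.NET 4 project, the compilation element in Web.config carries a targetFramework attribute. The surrounding configuration will vary by project, and projects that target earlier frameworks simply omit the attribute.

<configuration>
  <system.web>
    <!-- Declares that this site targets .NET Framework 4; projects that
         target an earlier version of the framework omit this attribute. -->
    <compilation targetFramework="4.0" />
  </system.web>
</configuration>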

    Read the article

  • Opening Skype, Opera, OpenOffice logs me off

    - by anjanesh
What's common among Skype, Opera, and OpenOffice in Ubuntu? Whenever I open one of these applications I get logged off and am shown the login screen again. This started happening since the 10.10 upgrade. Forgot to mention: yes, it's x64. Each time I open one of these applications, the UI shows and then crashes. I started each app and logged the last few lines of /var/log/syslog after each crash. Looks like something to do with sound drivers?

Opera:
Jan 8 09:33:20 al-ubuntu pulseaudio[11532]: pid.c: Daemon already running.
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: snd_pcm_avail_delay() returned strange values: delay 0 is less than avail 8.
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: Most likely this is a bug in the ALSA driver 'snd_hda_intel'. Please report this issue to the ALSA developers.
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: snd_pcm_dump():
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: Soft volume PCM
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: Control: PCM Playback Volume
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: min_dB: -51
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: max_dB: 0
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: resolution: 256
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: Its setup is:
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: stream : CAPTURE
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: access : MMAP_INTERLEAVED
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: format : S16_LE
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: subformat : STD
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: channels : 2
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: rate : 44100
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: exact rate : 44100 (44100/1)
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: msbits : 16
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: buffer_size : 88192
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: period_size : 44096
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: period_time : 999909
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: tstamp_mode : ENABLE
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: period_step : 1
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: avail_min : 87310
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: period_event : 0
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: start_threshold : -1
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: stop_threshold : 6205960286516543488
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: silence_threshold: 0
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: silence_size : 0
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: boundary : 6205960286516543488
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: Slave: Hardware PCM card 0 'HDA Intel' device 0 subdevice 0
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: Its setup is:
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: stream : CAPTURE
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: access : MMAP_INTERLEAVED
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: format : S16_LE
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: subformat : STD
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: channels : 2
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: rate : 44100
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: exact rate : 44100 (44100/1)
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: msbits : 16
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: buffer_size : 88192
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: period_size : 44096
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: period_time : 999909
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: tstamp_mode : ENABLE
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: period_step : 1
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: avail_min : 87310
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: period_event : 0
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: start_threshold : -1
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: stop_threshold : 6205960286516543488
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: silence_threshold: 0
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: silence_size : 0
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: boundary : 6205960286516543488
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: appl_ptr : 87320
Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: hw_ptr : 87320
Jan 8 09:33:22 al-ubuntu kernel: [ 4962.078306] opera[11036]: segfault at 261 ip 0000000000000261 sp 00007fffed7cd9a8 error 14 in opera[400000+122b000]
anjanesh@al-ubuntu:~$

Skype:
Jan 8 09:40:21 al-ubuntu pulseaudio[12602]: pid.c: Daemon already running.
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: snd_pcm_avail_delay() returned strange values: delay 0 is less than avail 8.
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: Most likely this is a bug in the ALSA driver 'snd_hda_intel'. Please report this issue to the ALSA developers.
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: snd_pcm_dump():
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: Soft volume PCM
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: Control: PCM Playback Volume
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: min_dB: -51
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: max_dB: 0
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: resolution: 256
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: Its setup is:
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: stream : CAPTURE
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: access : MMAP_INTERLEAVED
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: format : S16_LE
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: subformat : STD
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: channels : 2
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: rate : 44100
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: exact rate : 44100 (44100/1)
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: msbits : 16
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: buffer_size : 88192
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: period_size : 44096
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: period_time : 999909
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: tstamp_mode : ENABLE
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: period_step : 1
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: avail_min : 87310
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: period_event : 0
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: start_threshold : -1
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: stop_threshold : 6205960286516543488
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: silence_threshold: 0
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: silence_size : 0
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: boundary : 6205960286516543488
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: Slave: Hardware PCM card 0 'HDA Intel' device 0 subdevice 0
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: Its setup is:
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: stream : CAPTURE
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: access : MMAP_INTERLEAVED
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: format : S16_LE
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: subformat : STD
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: channels : 2
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: rate : 44100
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: exact rate : 44100 (44100/1)
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: msbits : 16
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: buffer_size : 88192
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: period_size : 44096
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: period_time : 999909
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: tstamp_mode : ENABLE
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: period_step : 1
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: avail_min : 87310
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: period_event : 0
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: start_threshold : -1
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: stop_threshold : 6205960286516543488
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: silence_threshold: 0
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: silence_size : 0
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: boundary : 6205960286516543488
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: appl_ptr : 87312
Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: hw_ptr : 87312
anjanesh@al-ubuntu:~$

OpenOffice:
Jan 8 09:43:46 al-ubuntu pulseaudio[13157]: pid.c: Daemon already running.
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: snd_pcm_avail_delay() returned strange values: delay 0 is less than avail 16.
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: Most likely this is a bug in the ALSA driver 'snd_hda_intel'. Please report this issue to the ALSA developers.
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: snd_pcm_dump():
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: Soft volume PCM
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: Control: PCM Playback Volume
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: min_dB: -51
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: max_dB: 0
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: resolution: 256
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: Its setup is:
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: stream : CAPTURE
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: access : MMAP_INTERLEAVED
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: format : S16_LE
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: subformat : STD
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: channels : 2
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: rate : 44100
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: exact rate : 44100 (44100/1)
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: msbits : 16
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: buffer_size : 88192
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: period_size : 44096
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: period_time : 999909
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: tstamp_mode : ENABLE
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: period_step : 1
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: avail_min : 87310
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: period_event : 0
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: start_threshold : -1
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: stop_threshold : 6205960286516543488
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: silence_threshold: 0
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: silence_size : 0
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: boundary : 6205960286516543488
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: Slave: Hardware PCM card 0 'HDA Intel' device 0 subdevice 0
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: Its setup is:
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: stream : CAPTURE
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: access : MMAP_INTERLEAVED
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: format : S16_LE
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: subformat : STD
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: channels : 2
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: rate : 44100
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: exact rate : 44100 (44100/1)
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: msbits : 16
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: buffer_size : 88192
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: period_size : 44096
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: period_time : 999909
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: tstamp_mode : ENABLE
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: period_step : 1
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: avail_min : 87310
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: period_event : 0
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: start_threshold : -1
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: stop_threshold : 6205960286516543488
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: silence_threshold: 0
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: silence_size : 0
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: boundary : 6205960286516543488
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: appl_ptr : 87320
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: hw_ptr : 87320
anjanesh@al-ubuntu:~$

    Read the article

  • 26 Days: Countdown to Oracle OpenWorld 2012

    - by Michael Snow
Welcome to our countdown to Oracle OpenWorld! Oracle OpenWorld 2012 is just around the corner. In less than 26 days, San Francisco will be invaded by an expected 50,000 people from all over the world. Here on the Oracle WebCenter team, we've all been working to help make the experience a great one for all our WebCenter customers. For a sneak peek, we'll be spending this week giving you a teaser of what to look forward to if you are joining us in San Francisco from September 30th through October 4th.

We have Oracle WebCenter sessions covering all topics imaginable. Take a look and use the tools we provide to build out your schedule in advance and reserve your seats in your favorite sessions. That gives you plenty of time to plan for your week with us in San Francisco. If, unfortunately, your boss denied your request to attend, there are still some ways that you can join in the experience virtually On-Demand.

This year we are expanding even more, up north of Market Street, and will be taking over Union Square as well. Check out a map of San Francisco to get a sense of how large a footprint Oracle OpenWorld has grown to this year. With so much to see and so many sessions to learn from, it's no wonder that people get excited. Add to that a good mix of fun and all of the possible WebCenter sessions you could attend, and you won't want to sleep at all, so as to take full advantage of such an opportunity. We'll also have our annual WebCenter Customer Appreciation reception - stay tuned this week for more info on registration to make sure you'll be able to join us. If you've been following the America's Cup at all and believe in EXTREME PERFORMANCE, you'll definitely want to take a look at the video from last year's OpenWorld keynote.

Important OpenWorld Links:
Attendee / Presenters Toolkit
Oracle Schedule Builder
WebCenter Sessions (listed in the catalog under Fusion Middleware as "Portals, Sites, Content, and Collaboration")
Oracle Music Festival - AMAZING line-up!!
Oracle Customer Appreciation Night - LOOK HERE!!
Oracle OpenWorld LIVE On-Demand

Here are all the WebCenter sessions, broken down by day, for your viewing pleasure.

Monday, October 1st

CON8885 - Simplify CRM Engagement with Contextual Collaboration
Are your sales teams disconnected and disengaged? Do you want a tool for easily connecting expertise across your organization and providing visibility into the complete sales process? Do you want a way to enhance and retain organization knowledge? Oracle Social Network is the answer. Attend this session to learn how to make CRM easy, effective, and efficient for use across virtual sales teams. Also learn how Oracle Social Network can drive sales force collaboration with natural conversations throughout the sales cycle, promote sales team productivity through purposeful social networking without the noise, and build cross-team knowledge by integrating conversations with CRM and other business applications.

CON8268 - Oracle WebCenter Strategy: Engaging Your Customers. Empowering Your Business
Oracle WebCenter is a user engagement platform for social business, connecting people and information. Attend this session to learn about the Oracle WebCenter strategy, and understand where Oracle is taking the platform to help companies engage customers, empower employees, and enable partners. Business success starts with ensuring that everyone is engaged with the right people and the right information and can access what they need through the channel of their choice—Web, mobile, or social. Are you giving customers, employees, and partners the best-possible experience? Come learn how you can!

HOL10208 - Add Social Capabilities to Your Enterprise Applications
Oracle Social Network enables you to add real-time collaboration capabilities into your enterprise applications, so that conversations can happen directly within your business systems. In this hands-on lab, you will try out the Oracle Social Network product to collaborate with other attendees, using real-time conversations with document sharing capabilities. Next you will embed social capabilities into a sample Web-based enterprise application, using embedded UI components. Experts will also write simple REST-based integrations, using the Oracle Social Network API to programmatically create social interactions.

CON8893 - Improve Employee Productivity with Intuitive and Social Work Environments
Social technologies have already transformed the ways customers, employees, partners, and suppliers communicate and stay informed. Forward-thinking organizations today need technologies and infrastructures to help them advance to the next level and integrate social activities with business applications to deliver a user experience that simplifies business processes and enterprise application engagement. Attend this session to hear from an innovative Oracle Social Network customer and learn how you can improve productivity with intuitive and social work environments and empower your employees with innovative social tools to enable contextual access to content and dynamic personalization of solutions.

CON8270 - Oracle WebCenter Content Strategy and Vision
Oracle WebCenter provides a strategic content infrastructure for managing documents, images, e-mails, and rich media files. With a single repository, organizations can address any content use case, such as accounts payable, HR onboarding, document management, compliance, records management, digital asset management, or Website management. In this session, learn about future plans for how Oracle WebCenter will address new use cases as well as new integrations with Oracle Fusion Middleware and Oracle Applications, leveraging your investments by making your users more productive and error-free.

CON8269 - Oracle WebCenter Sites Strategy and Vision
Oracle's Web experience management solution, Oracle WebCenter Sites, enables organizations to use the online channel to drive customer acquisition and brand loyalty. It helps marketers and business users easily create and manage contextually relevant, social, interactive online experiences across multiple channels on a global scale. In this session, learn about future plans for how Oracle WebCenter Sites will provide you with the tools, capabilities, and integrations you need in order to continue to address your customers' evolving requirements for engaging online experiences and keep moving your business forward.

CON8896 - Living with SharePoint
SharePoint is a popular platform, but it's not always the best fit for Oracle customers. In this session, you'll discover the technical and nontechnical limitations and pitfalls of SharePoint and learn about Oracle alternatives for collaboration, portals, enterprise and Web content management, social computing, and application integration. The presentation shows you how to integrate with SharePoint when business or IT requirements dictate and covers cloud-based (Office 365) and on-premises versions of SharePoint. Presented by a former Microsoft director of SharePoint product management and backed by independent customer research, this session will prepare you to answer the question "Why don't we just use SharePoint for that?" the next time it comes up in your organization.

CON7843 - Content-Enabling Enterprise Processes with Oracle WebCenter
Organizations today continually strive to automate business processes, reduce costs, and improve efficiency. Many business processes are content-intensive and unstructured, requiring ad hoc collaboration, and distributed in nature, requiring many approvals and generating huge volumes of paper. In this session, learn how Oracle and SYSTIME have partnered to help a customer content-enable its enterprise with Oracle WebCenter Content and Oracle WebCenter Imaging 11g and integrate them with Oracle Applications.

CON6114 - Tape Robotics' Newest Superhero: Now Fueled by Oracle Software
For small, midsize, and rapidly growing businesses that want the most energy-efficient, scalable storage infrastructure to meet their rapidly growing data demands, Oracle's most recent addition to its award-winning tape portfolio leverages several pieces of Oracle software. With Oracle Linux, Oracle WebLogic, and Oracle Fusion Middleware tools, the library achieves a higher level of usability than previous products while offering customers a familiar interface for management, plus ease of use. This session examines the competitive advantages of the tape library and how Oracle software raises customer satisfaction. Learn how the combination of Oracle engineered systems, Oracle Secure Backup, and Oracle's StorageTek tape libraries provides end-to-end coverage of your data.

CON9437 - Mobile Access Management
With more than five billion mobile devices on the planet and an increasing number of users using their own devices to access corporate data and applications, securely extending identity management to mobile devices has become a hot topic. This session focuses on how to extend your existing identity management infrastructure and policies to securely and seamlessly enable mobile user access.

CON7815 - Customer Experience Online in Cloud: Oracle WebCenter Sites, Oracle ATG Apps, Oracle Exalogic
Oracle WebCenter Sites and Oracle's ATG product line together can provide a compelling marketing and e-commerce experience. When you couple them with the extreme performance of Oracle Exalogic, you'll see unmatched scalability that provides you with a true cloud-based solution. In this session, you'll learn how running Oracle WebCenter Sites and ATG applications on Oracle Exalogic delivers both a private and a public cloud experience. Find out what it takes to get these systems working together and delivering engaging Web experiences. Even if you aren't considering Oracle Exalogic today, the rich Web experience of Oracle WebCenter, paired with the depth of the ATG product line, can provide your business full support, from merchandising through sale completion.

CON8271 - Oracle WebCenter Portal Strategy and Vision
To innovate and keep a competitive edge, organizations need to leverage the power of agile and responsive Web applications. Oracle WebCenter Portal enables you to do just that, by delivering intuitive user experiences for enterprise applications to drive innovation with composite applications and mashups. Attend this session to learn firsthand from customers how Oracle WebCenter Portal extends the value of existing enterprise applications, business processes, and content; delivers a superior business user experience; and maximizes limited IT resources.

CON8880 - The Connected Customer Experience Begins with the Online Channel
There's a lot of talk these days about how to connect the customer journey across various touchpoints—from Websites and e-commerce to call centers and in-store—to provide experiences that are more relevant and engaging and ultimately gain competitive edge. Doing it all at once isn't a realistic objective, so where do you start? Come to this session, and hear about three steps you can take that can help you begin your journey toward delivering the connected customer experience. You'll hear how Oracle now has an integrated digital marketing platform for your corporate Website, your e-commerce site, your self-service portal, and your marketing and loyalty campaigns, and you'll learn what you can do today to begin executing on your customer experience initiatives.

GEN11451 - General Session: Building Mobile Applications with Oracle Cloud
With the prevalence of smart mobile devices, companies are facing an increased demand to provide access to data and applications from new channels. However, developing applications for mobile devices poses some unique challenges. Come to this session to learn how Oracle addresses these challenges, offering a simpler way to develop and deploy cross-device mobile applications. See how Oracle Cloud enables you to access applications, data, and services from mobile channels in an easier way.

CON8272 - Oracle Social Network Strategy and Vision
One key way of increasing employee productivity is by bringing people, processes, and information together—providing new social capabilities to enable business users to quickly correspond and collaborate on business activities. Oracle WebCenter provides a user engagement platform with social and collaborative technologies to empower business users to focus on their key business processes, applications, and content in the context of their role and process. Attend this session to hear how the latest social capabilities in Oracle Social Network are enabling organizations to transform themselves into social businesses.

Tuesday, October 2nd

HOL10194 - Enterprise Content Management Simplified: Oracle WebCenter Content's Next-Generation UI
Regardless of the nature of your business, unstructured content underpins many of its daily functions. Whether you are working with traditional presentations, spreadsheets, or text documents—or even with digital assets such as images and multimedia files—your content needs to be accessible and manageable in convenient and intuitive ways to make working with the content easier. Additionally, you need the ability to easily share documents with coworkers to facilitate a collaborative working environment. Come to this session to see how Oracle WebCenter Content's next-generation user interface helps modern knowledge workers easily manage personal and enterprise documents in a collaborative environment.

CON8877 - Develop a Mobile Strategy with Oracle WebCenter: Engage Customers, Employees, and Partners
Mobile technology has gone from nice-to-have to a cornerstone of user engagement. Mobile access enables users to have information available at their fingertips, enabling them to take action the moment they make a decision, interact in the moment of convenience, and take advantage of new service offerings in their preferred channels. All your employees have your mobile applications in their pocket; now what are you going to do? It is a critical step for companies to think through what their employees, customers, and partners really need on their devices. Attend this session to see how Oracle WebCenter enables you to better engage your customers, employees, and partners by providing a unified experience across multiple channels.

CON9447 - Enabling Access for Hundreds of Millions of Users
How do you grow your business by identifying, authenticating, authorizing, and federating users on the Web, leveraging social identity and the open source OAuth protocol? How do you scale your access management solution to support hundreds of millions of users? With social identity support out of the box, Oracle's access management solution is also benchmarked for 250-million-user deployment according to real-world customer scenarios. In this session, you will learn about the social identity capability and the 250-million-user benchmark testing of Oracle Access Manager and Oracle Adaptive Access Manager running on Oracle Exalogic and Oracle Exadata.

HOL10207 - Build an Intranet Portal with Oracle WebCenter
In this hands-on lab, you'll work with Oracle WebCenter Portal and Oracle WebCenter Content to build out an enterprise portal that maximizes the productivity of teams and individual contributors. Using browser-based tools, you'll manage site resources such as page styles, templates, and navigation. You'll edit content stored in Oracle WebCenter Content directly from your portal. You'll also experience the latest features that promote collaboration, social networking, and personal productivity.

CON2906 - Get Proactive: Best Practices for Maintaining Oracle Fusion Middleware
You chose Oracle Fusion Middleware products to help your organization deliver superior business results. Now learn how to take full advantage of your software with all the great tools, resources, and product updates you're entitled to through Oracle Support. In this session, Oracle product experts provide proven best practices to help you work more efficiently, plan and prepare for upgrades and patching more effectively, and manage risk. Topics include configuration management tools, remote diagnostics, My Oracle Support Community, and My Oracle Support Lifecycle Advisors. New users and Oracle Fusion Middleware experts alike are guaranteed to leave with fresh ideas and practical, easy-to-implement next steps.

CON8878 - Oracle WebCenter's Cloud Strategy: From Social and Platform Services to Mashups
Cloud computing represents a paradigm shift in how we build applications, automate processes, collaborate, and share, and in how we secure our enterprise. Additionally, as you adopt cloud-based services in your organization, it's likely that you will still have many critical on-premises applications running. With these mixed environments, multiple user interfaces, different security, and multiple datasources and content sources, how do you start evolving your strategy to account for these challenges? Oracle WebCenter offers a complete array of technologies enabling you to solve these challenges and prepare you for the cloud. Attend this session to learn how you can use Oracle WebCenter in the cloud as well as create on-premises and cloud application mash-ups.

CON8901 - Optimize Enterprise Business Processes with Oracle WebCenter and Oracle BPM
Do you have business processes that span multiple applications? Are you grappling with how to have visibility across these business processes; how to manage content that is associated with these processes; and, most importantly, how to model and optimize these business processes? Attend this session to hear how Oracle WebCenter and Oracle Business Process Management provide a unique set of integrated solutions to provide a composite application dashboard across these business processes and offer a solution for content-centric business processes.

CON8883 - Deliver Engaging Interfaces to Oracle Applications with Oracle WebCenter
Critical business processes live within enterprise applications, and application users need to manage and execute these processes as effectively as possible. Oracle provides a comprehensive user engagement platform to increase user productivity and optimize overall processes within Oracle Applications—Oracle E-Business Suite and Oracle's Siebel, PeopleSoft, and JD Edwards product families—and third-party applications. Attend this session to learn how you can integrate these applications with Oracle WebCenter to deliver composite application dashboards to your end users—whether they are your customers, partners, or employees—for enhanced usability and Web 2.0–enabled enterprise portals.

Wednesday, October 3rd

CON8895 - Future-Ready Intranets: How Aramark Re-engineered the Application Landscape
There are essential techniques and technologies you can use to deliver employee portals that garner higher productivity, improve business efficiency, and increase user engagement. Attend this session to learn how you can leverage Oracle WebCenter Portal as a user engagement platform for bringing together business process management, enterprise content management, and business intelligence into a highly relevant and integrated experience. Hear how Aramark has leveraged Oracle WebCenter Portal and Oracle WebCenter Content to deliver a unified workspace providing simpler navigation and processing, consolidation of tools, easy access to information, integrated search, and single sign-on.

CON8886 - Content Consolidation: Save Money, Increase Efficiency, and Eliminate Silos
Organizations are looking for ways to save money and be more efficient. With content in many different places, it's difficult to know where to look for a document and whether the document is the most current version. With Oracle WebCenter, content can be consolidated into one best-of-breed repository that is secure, scalable, and integrated with your business processes and applications. Users can find the content they need, where they need it, and ensure that it is the right content. This session covers content challenges that affect your business; content consolidation that can lead to savings in storage and administration costs and can lower risks; and how companies are realizing savings.

CON8911 - Improve Online Experiences for Customers and Partners with Self-Service Portals
Are you able to provide your customers and partners an easy-to-use online self-service experience? Are you processing high-volume transactions and struggling with call center bottlenecks or back-end systems that won't integrate, causing order delays and customer frustration? Are you looking to target content such as product and service offerings to your end users? This session shares approaches to providing targeted delivery as well as strategies and best practices for transforming your business by providing an intuitive user experience for your customers and partners.

CON6156 - Top 10 Ways to Integrate Oracle WebCenter Content
This session covers 10 common ways to integrate Oracle WebCenter Content with other enterprise applications and middleware. It discusses out-of-the-box modules that provide expanded features in Oracle WebCenter Content—such as enterprise search, SOA, and BPEL—as well as developer tools you can use to create custom integrations. The presentation also gives guidance on which integration option may work best in your environment.

HOL10207 - Build an Intranet Portal with Oracle WebCenter
In this hands-on lab, you'll work with Oracle WebCenter Portal and Oracle WebCenter Content to build out an enterprise portal that maximizes the productivity of teams and individual contributors. Using browser-based tools, you'll manage site resources such as page styles, templates, and navigation. You'll edit content stored in Oracle WebCenter Content directly from your portal. You'll also experience the latest features that promote collaboration, social networking, and personal productivity.

CON7817 - Migration to Oracle WebCenter Imaging 11g
Customers today continually strive to automate business processes, reduce costs, and improve efficiency. The accounts payable process—which is often distributed in nature, requires many approvals, and generates huge volumes of paper invoices—is automated by many customers. In this session, learn how Oracle and SYSTIME have partnered to help a customer migrate its existing Oracle Imaging and Process Management Release 7.6 to the latest Oracle WebCenter Imaging 11g and integrate it with Oracle's JD Edwards family of products.

CON8910 - How to Engage Customers Across Web, Mobile, and Social Channels
Whether on desktops at the office, on tablets at home, or on mobile phones when on the go, today's customers are always connected. To engage today's customers, you need to make the online customer experience connected and consistent across a host of devices and multiple channels, including Web, mobile, and social networks. Managing this multichannel environment can result in lots of headaches without the right tools. Attend this session to learn how Oracle WebCenter Sites solves the challenge of multichannel customer engagement.

HOL10206 - Oracle WebCenter Sites 11g: Transforming the Content Contributor Experience
Oracle WebCenter Sites 11g makes it easy for marketers and business users to contribute to and manage Websites with the new visual, contextual, and intuitive Web authoring interface. In this hands-on lab, you will create and manage content for a sports-themed Website, using many of the new and enhanced features of the 11g release.

CON8900 - Building Next-Generation Portals: An Interactive Customer Panel Discussion
Social and collaborative technologies have changed how people interact, learn, and collaborate, and providing a modern, social Web presence is imperative to remain competitive in today's market. Can your business benefit from a more collaborative and interactive portal environment for employees, customers, and partners? Attend this session to hear from Oracle WebCenter Portal customers as they share their strategies and best practices for providing users with a modern experience that adapts to their needs and includes personalized access to content in context. The panel also addresses how customers have benefited from creating next-generation portals by migrating from older portal technologies to Oracle WebCenter Portal.

CON9625 - Taking Control of Oracle WebCenter Security
Organizations are increasingly looking to extend their Oracle WebCenter portal for social business, to serve external users and provide seamless access to the right information. In particular, many organizations are extending Oracle WebCenter in a business-to-business scenario requiring secure identification and authorization of business partners and their users. This session focuses on how customers are leveraging, securing, and providing access control to Oracle WebCenter portal and mobile solutions. You will learn best practices and hear real-world examples of how to provide flexible and granular access control for Oracle WebCenter deployments, using Oracle Platform Security Services and Oracle Access Management Suite product offerings.

CON8891 - Extending Social into Enterprise Applications and Business Processes
Oracle Social Network is an extensible social platform that enables contextual collaboration within enterprise applications and business processes, providing relevant data from across various enterprise systems in one place. Attend this session to see how an Oracle Social Network customer is integrating multiple applications—such as CRM, HCM, and business processes—into Oracle Social Network and Oracle WebCenter to enable individuals and teams to solve complex cross-organizational business problems more effectively by utilizing the social enterprise.

Thursday, October 4th

CON8899 - Becoming a Social Business: Stories from the Front Lines of Change
What does it really mean to be a social business? How can you change your organization to embrace social approaches? What pitfalls do you need to avoid? In this lively panel discussion, customer and industry thought leaders in social business explore these topics and more as they share their stories of the good, the bad, and the ugly that can happen when embracing social methods and technologies to improve business success. Using moderated questions and open Q&A from the audience, the panel discusses vital topics such as the critical factors for success, the major issues to avoid, how to gain senior executive support for social efforts, how to handle undesired behavior, and how to measure business impact. It takes a thought-provoking look at becoming a social business from the inside.

CON6851 - Oracle WebCenter and Oracle Business Intelligence Enterprise Edition to Create Vendor Portals
Large manufacturers of grocery items routinely find themselves depending on the inventory management expertise of their wholesalers and distributors. Inventory costs can be managed more efficiently by the manufacturers if they have better insight into the inventory levels of items carried by their distributors. This creates a unique opportunity for distributors and wholesalers to leverage this knowledge into a revenue-generating subscription service. Oracle Business Intelligence Enterprise Edition and Oracle WebCenter Portal play a key part in enabling creation of business-managed business intelligence portals for vendors. This session discusses one customer that implemented this by leveraging Oracle WebCenter and Oracle Business Intelligence Enterprise Edition.

CON8879 - Provide a Personalized and Consistent Customer Experience in Your Websites and Portals
Your customers engage with your company online in different ways throughout their journey—from prospecting by acquiring information on your corporate Website to transacting through self-service applications on your customer portal—and then the cycle begins again when they look for new products and services. Ensuring that the customer experience is consistent and personalized across online properties—from branding and content to interactions and transactions—can be a daunting task. Oracle WebCenter enables you to speak and interact with your customers with one voice across your Websites and portals by providing an integrated platform for delivery of self-service and engagement that unifies and personalizes the online experience. Learn more in this session.

CON8898 - Land Mines, Potholes, and Dirt Roads: Navigating the Way to ECM Nirvana
Ten years ago, people were predicting that by this time in history, we'd be some kind of utopian paperless society. As we all know, we're not there yet, but are we getting closer? What is keeping companies from driving down the road to enterprise content management bliss? Most people understand that using ECM as a central platform enables organizations to expedite document-centric processes, but most business processes in organizations are still heavily paper-based. Many of these processes could be automated and improved with an ECM platform infrastructure. In this panel discussion, you'll hear from Oracle WebCenter customers that have already solved some of these challenges as they share their strategies for success and roads to avoid along your journey.

CON8908 - Oracle WebCenter Portal: Creating and Using Content Presenter Templates
Oracle WebCenter Portal applications use task flows to display and integrate content stored in the Oracle WebCenter Content server. Among the most flexible task flows is Content Presenter, which renders various types of content on an Oracle WebCenter Portal page. Although Oracle WebCenter Portal comes with a set of predefined Content Presenter templates, developers can create their own templates for specific rendering needs. This session shows the lifecycle of developing Content Presenter task flows, including how to create, package, import, modify at runtime, and use such templates. In addition to simple examples with Oracle Application Development Framework (Oracle ADF) UI elements to render the content, it shows how to use other UI technologies, CSS files, and JavaScript libraries.

CON8897 - Using Web Experience Management to Drive Online Marketing Success
Every year, the online channel becomes more imperative for driving organizational top-line revenue, but for many companies, mastering how to best market their products and services in a fast-evolving online world with high customer expectations for personalized experiences can be a complex proposition. Come to this panel discussion, and hear directly from online marketers how they are succeeding today by using Web experience management to drive marketing success, using capabilities such as targeting and optimization, user-generated content, mobile site publishing, and site visitor personalization to deliver engaging online experiences.

CON8892 - Oracle's Journey to Social Business
Social business is a revolution, one that is causing rapidly accelerating change in how companies and customers engage with one another and how employees work together. Oracle's goal in becoming a social business is to create a socially connected organization in which working collaboratively across geographical locations, lines of business, and management chains is second nature, enabling innovative solutions to business challenges. We can achieve this by connecting the right people, finding the right content, communicating with the right people, collaborating at the right time, and building the right communities in the right context—all ready in the CLOUD. Attend this session to see how Oracle is transforming itself into a social business.

If you've read all the way to the end here, we are REALLY looking forward to seeing you in San Francisco.

    Read the article

  • Using the West Wind Web Toolkit to set up AJAX and REST Services

    - by Rick Strahl
I frequently get questions about which option to use for creating AJAX and REST backends for ASP.NET applications. There are many solutions out there to do this, but when I have a choice - not surprisingly - I fall back to my own tools in the West Wind Web Toolkit. I've talked a bunch about the 'in-the-box' solutions in the past, so for a change in this post I'll talk about the tools that I use in my own and customer applications to handle AJAX and REST based access to service resources, using the West Wind Web Toolkit.

Let me preface this by saying that I like things to be easy. Yes, flexibility is very important as well, but not at the expense of over-complexity. The goal I've had with my tools is to make things drop-dead easy, with good performance, while providing the core features that I'm after, which are:

Easy AJAX/JSON callbacks
Ability to return any kind of non-JSON content (string, stream, byte[], images)
Ability to work with both XML and JSON interchangeably for input/output
Access to endpoints via POST data, RPC JSON calls, GET querystring values, or the Routing interface
An easy-to-use generic JavaScript client to make RPC calls (same syntax, just what you need)
Ability to create clean URLs with Routing
Ability to use the standard ASP.NET HTTP stack for HTTP semantics

It's all about options! In this post I'll demonstrate most of these features (except XML) in a few simple and short samples, which you can download. So let's take a look and see how you can build an AJAX callback solution with the West Wind Web Toolkit.

Installing the Toolkit Assemblies
The easiest and leanest way of using the Toolkit in your Web project is to grab it via NuGet: right-click anywhere in your project, choose Manage NuGet Packages, and install the West Wind Web and AJAX Utilities (Westwind.Web) package.

NuGet adds two assemblies - Westwind.Web and Westwind.Utilities - and the client ww.jquery.js library. It also adds a couple of entries to web.config: the default namespaces so they can be accessed in pages/views, and a ScriptCompressionModule that the toolkit optionally uses to compress script resources served from within the assembly (namely ww.jquery.js and optionally jquery.js).

Creating a new Service
The West Wind Web Toolkit supports several ways of creating and accessing AJAX services, but for this post I'll stick to the lower-level approach that works from any plain HTML page as well as from MVC, WebForms, and WebPages. There's also a WebForms-specific control that makes this even easier, but I'll leave that for another post.

So, to create a new standalone AJAX/REST service we can create a new HttpHandler in the project, either as a pure class-based handler or as a generic .ASHX handler. Both work equally well, but generic handlers don't require any web.config configuration, so I'll use that here. In the root of the project add a Generic Handler; I'm going to call this one SampleService.ashx. Once the handler has been created, edit the code and remove all of the handler body code. Then change the base class to CallbackHandler and add methods that have a [CallbackMethod] attribute.
Here's what the modified handler implementation looks like with an added HelloWorld method:

using System;
using Westwind.Web;

namespace WestWindWebAjax
{
    /// <summary>
    /// Handler implements CallbackHandler to provide REST/AJAX services
    /// </summary>
    public class SampleService : CallbackHandler
    {
        [CallbackMethod]
        public string HelloWorld(string name)
        {
            return "Hello " + name + ". Time is: " + DateTime.Now.ToString();
        }
    }
}

Notice that the class inherits from CallbackHandler and that the HelloWorld service method is marked up with [CallbackMethod]. We're done here.

URL-based Service Syntax
Once you compile, the 'service' is live and can respond to requests. All CallbackHandlers support input in GET and POST formats, and can return results as JSON or XML. To check our fancy HelloWorld method we can now access the service like this:

http://localhost/WestWindWebAjax/SampleService.ashx?Method=HelloWorld&name=Rick

which produces a default JSON response - in this case a string, wrapped in quotes because it's JSON. (Note that by default most browsers download JSON rather than display it; various options are available to view JSON right in the browser.)

If I want to return the same data as XML, I can tack &format=xml onto the end of the querystring, which produces:

<string>Hello Rick. Time is: 11/1/2011 12:11:13 PM</string>

Cleaner URLs with Routing Syntax
If you want cleaner URLs for each operation, you can also configure custom routes on a per-URL basis, similar to the way WCF REST does. To do this you need to add a new RouteHandler to your application's startup code in global.asax.cs, one for each CallbackHandler-based service you create:

protected void Application_Start(object sender, EventArgs e)
{
    CallbackHandlerRouteHandler.RegisterRoutes<SampleService>(RouteTable.Routes);
}

With this code in place you can now add RouteUrl properties to any of your service methods. For the HelloWorld method that doesn't make a ton of sense, but here is what a routed clean URL might look like in definition:

[CallbackMethod(RouteUrl="stocks/HelloWorld/{name}")]
public string HelloWorld(string name)
{
    return "Hello " + name + ". Time is: " + DateTime.Now.ToString();
}

The same URL I previously used now becomes a bit shorter and more readable:

http://localhost/WestWindWebAjax/HelloWorld/Rick

It's an easy way to create cleaner URLs and still get the same functionality.

Calling the Service with $.getJSON()
Since the result produced is JSON, you can easily consume this data using jQuery's getJSON method. First we need a couple of scripts - jquery.js and ww.jquery.js - in the page:

<!DOCTYPE html>
<html>
<head>
    <link href="Css/Westwind.css" rel="stylesheet" type="text/css" />
    <script src="scripts/jquery.min.js" type="text/javascript"></script>
    <script src="scripts/ww.jquery.min.js" type="text/javascript"></script>
</head>
<body>

Next let's add a small HelloWorld example form (what else) that has a single textbox to type a name, a button, and a div tag to receive the result:

<fieldset>
    <legend>Hello World</legend>
    Please enter a name:
    <input type="text" name="txtHello" id="txtHello" value="" />
    <input type="button" id="btnSayHello" value="Say Hello (POST)" />
    <input type="button" id="btnSayHelloGet" value="Say Hello (GET)" />
    <div id="divHelloMessage" class="errordisplay" style="display:none;width: 450px;">
    </div>
</fieldset>

Then, to call the HelloWorld method, a little jQuery is used to hook the document startup and the button click, followed by the $.getJSON call to retrieve the data from the server.
<script type="text/javascript"> $(document).ready(function () { $("#btnSayHelloGet").click(function () { $.getJSON("SampleService.ashx", { Method: "HelloWorld", name: $("#txtHello").val() }, function (result) { $("#divHelloMessage") .text(result) .fadeIn(1000); }); });</script> .getJSON() expects a full URL to the endpoint of our service, which is the ASHX file. We can either provide a full URL (SampleService.ashx?Method=HelloWorld&name=Rick) or we can just provide the base URL and an object that encodes the query string parameters for us using an object map that has a property that matches each parameter for the server method. We can also use the clean URL routing syntax, but using the object parameter encoding actually is safer as the parameters will get properly encoded by jQuery. The result returned is whatever the result on the server method is - in this case a string. The string is applied to the divHelloMessage element and we're done. Obviously this is a trivial example, but it demonstrates the basics of getting a JSON response back to the browser. AJAX Post Syntax - using ajaxCallMethod() The previous example allows you basic control over the data that you send to the server via querystring parameters. This works OK for simple values like short strings, numbers and boolean values, but doesn't really work if you need to pass something more complex like an object or an array back up to the server. To handle traditional RPC type messaging where the idea is to map server side functions and results to a client side invokation, POST operations can be used. The easiest way to use this functionality is to use ww.jquery.js and the ajaxCallMethod() function. ww.jquery wraps jQuery's AJAX functions and knows implicitly how to call a CallbackServer method with parameters and parse the result. Let's look at another simple example that posts a simple value but returns something more interesting. Let's start with the service method: [CallbackMethod(RouteUrl="stocks/{symbol}")] public StockQuote GetStockQuote(string symbol) { Response.Cache.SetExpires(DateTime.UtcNow.Add(new TimeSpan(0, 2, 0))); StockServer server = new StockServer(); var quote = server.GetStockQuote(symbol); if (quote == null) throw new ApplicationException("Invalid Symbol passed."); return quote; } This sample utilizes a small StockServer helper class (included in the sample) that downloads a stock quote from Yahoo's financial site via plain HTTP GET requests and formats it into a StockQuote object. 
Let's create a small HTML block that lets us query for the quote and display it:

<fieldset>
    <legend>Single Stock Quote</legend>
    Please enter a stock symbol:
    <input type="text" name="txtSymbol" id="txtSymbol" value="msft" />
    <input type="button" id="btnStockQuote" value="Get Quote" />
    <div id="divStockDisplay" class="errordisplay" style="display:none; width: 450px;">
        <div class="label-left">Company:</div>
        <div id="stockCompany"></div>
        <div class="label-left">Last Price:</div>
        <div id="stockLastPrice"></div>
        <div class="label-left">Quote Time:</div>
        <div id="stockQuoteTime"></div>
    </div>
</fieldset>

The final result looks something like this:

Let's hook up the button handler to fire the request and fill in the data as shown:

$("#btnStockQuote").click(function () {
    ajaxCallMethod("SampleService.ashx", "GetStockQuote",
        [$("#txtSymbol").val()],
        function (quote) {
            $("#divStockDisplay").show().fadeIn(1000);
            $("#stockCompany").text(quote.Company + " (" + quote.Symbol + ")");
            $("#stockLastPrice").text(quote.LastPrice);
            $("#stockQuoteTime").text(quote.LastQuoteTime.formatDate("MMM dd, HH:mm EST"));
        }, onPageError);
});

So we point at SampleService.ashx and the GetStockQuote method, passing a single parameter of the input symbol value. Then there are two handlers, for success and failure callbacks. The success handler is the interesting part - it receives the stock quote as a result and assigns its values to various 'holes' in the stock display elements. The data that comes back over the wire is JSON and it looks like this:

{
    "Symbol":"MSFT",
    "Company":"Microsoft Corpora",
    "OpenPrice":26.11,
    "LastPrice":26.01,
    "NetChange":0.02,
    "LastQuoteTime":"2011-11-03T02:00:00Z",
    "LastQuoteTimeString":"Nov. 11, 2011 4:20pm"
}

which is an object representation of the data. JavaScript can evaluate this JSON string back into an object easily, and that's the result that gets passed to the success function. The quote data is then applied to existing page content by manually selecting items and applying them. There are other ways to do this more elegantly, like using templates, but here we're only interested in seeing how the data is returned. The data in the object is typed - LastPrice is a number and LastQuoteTime is a date.

Note about the date value: JavaScript doesn't have a date literal, although the embedded ISO string format used above ("2011-11-03T02:00:00Z") is becoming fairly standard for JSON serializers. However, JSON parsers don't deserialize dates by default and return them as strings. This is why the StockQuote also returns a string value, LastQuoteTimeString, for the same date. ajaxCallMethod() always converts dates properly into 'real' dates, and the example above uses the real date value along with a .formatDate() date extension (also in ww.jquery.js) to display the raw date properly.

Errors and Exceptions
So what happens if your code fails? For example, if I pass an invalid stock symbol to the GetStockQuote() method, you notice that the code does this:

if (quote == null)
    throw new ApplicationException("Invalid Symbol passed.");

CallbackHandler automatically pushes the exception message back to the client so it's easy to pick up the error message. Regardless of what kind of error occurs - server side, client side, protocol errors - any error will fire the failure handler with an error object parameter. The error is returned to the client via a JSON response in the error callback.
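Judging from the error object properties described shortly (isCallbackError, message and stackTrace), the JSON error payload presumably looks something like this - the exact shape is an assumption, not taken from the toolkit documentation:

{ "isCallbackError": true, "message": "Invalid Symbol passed.", "stackTrace": null }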
In the previous examples I called onPageError, which is a generic routine in ww.jquery that displays a status message at the bottom of the screen. But of course you can also take over the error handling yourself:

$("#btnStockQuote").click(function () {
    ajaxCallMethod("SampleService.ashx", "GetStockQuote",
        [$("#txtSymbol").val()],
        function (quote) {
            $("#divStockDisplay").fadeIn(1000);
            $("#stockCompany").text(quote.Company + " (" + quote.Symbol + ")");
            $("#stockLastPrice").text(quote.LastPrice);
            $("#stockQuoteTime").text(quote.LastQuoteTime.formatDate("MMM dd, hh:mmt"));
        },
        function (error, xhr) {
            $("#divErrorDisplay").text(error.message).fadeIn(1000);
        });
});

The error object has isCallbackError, message and stackTrace properties, the latter of which is only populated when running in Debug mode, and this object is returned for all errors: client side, transport and server side errors. Regardless of which type of error you get, the same object is passed (along with the XHR instance, optionally), which makes for a consistent error retrieval mechanism.

Specifying HttpVerbs
You can also specify the HTTP verbs that are allowed using the AllowedHttpVerbs option on the CallbackMethod attribute:

[CallbackMethod(AllowedHttpVerbs=HttpVerbs.GET | HttpVerbs.POST)]
public string HelloWorld(string name)
{ … }

If you're building REST-style APIs this might be useful to force certain request semantics onto the calling client. For the above, a call with a non-allowed HTTP verb returns a 405 error response along with a JSON (or XML) error object result. The default behavior is to allow all verbs access (HttpVerbs.All).

Passing in Object Parameters
Up to now the parameters I passed were very simple. But what if you need to send something more complex like an object or an array? Let's look at another example that passes an object from the client to the server. Keeping with the stock theme, let's add a method called BuyStock that lets us buy some shares of a stock. Consider the following service method that receives a StockBuyOrder object as a parameter:

[CallbackMethod]
public string BuyStock(StockBuyOrder buyOrder)
{
    var server = new StockServer();
    var quote = server.GetStockQuote(buyOrder.Symbol);
    if (quote == null)
        throw new ApplicationException("Invalid or missing stock symbol.");

    return string.Format("You're buying {0} shares of {1} ({2}) stock at {3} for a total of {4} on {5}.",
        buyOrder.Quantity,
        quote.Company,
        quote.Symbol,
        quote.LastPrice.ToString("c"),
        (quote.LastPrice * buyOrder.Quantity).ToString("c"),
        buyOrder.BuyOn.ToString("MMM d"));
}

public class StockBuyOrder
{
    public string Symbol { get; set; }
    public int Quantity { get; set; }
    public DateTime BuyOn { get; set; }

    public StockBuyOrder()
    {
        BuyOn = DateTime.Now;
    }
}

This is a contrived do-nothing example that simply echoes back what was passed in, but it demonstrates how you can pass complex data to a callback method.
On the client side we now have a very simple form that captures the three values:

<fieldset>
    <legend>Post a Stock Buy Order</legend>
    Enter a symbol:
    <input type="text" name="txtBuySymbol" id="txtBuySymbol" value="GLD" />&nbsp;&nbsp;
    Qty:
    <input type="text" name="txtBuyQty" id="txtBuyQty" value="10" style="width: 50px" />&nbsp;&nbsp;
    Buy on:
    <input type="text" name="txtBuyOn" id="txtBuyOn" value="<%= DateTime.Now.ToString("d") %>" style="width: 70px;" />
    <input type="button" id="btnBuyStock" value="Buy Stock" />
    <div id="divStockBuyMessage" class="errordisplay" style="display:none"></div>
</fieldset>

The completed form and demo then look something like this:

The client side code that picks up the input values, assigns them to object properties and sends the AJAX request looks like this:

$("#btnBuyStock").click(function () {
    // create an object map that matches the StockBuyOrder signature
    var buyOrder = {
        Symbol: $("#txtBuySymbol").val(),
        Quantity: $("#txtBuyQty").val() * 1,  // number
        BuyOn: new Date()
    }
    ajaxCallMethod("SampleService.ashx", "BuyStock",
        [buyOrder],
        function (result) {
            $("#divStockBuyMessage").text(result).fadeIn(1000);
        }, onPageError);
});

The code creates an object and attaches the properties that match the server side object passed to the BuyStock method. Each property that you want to update needs to be included and the type must match (i.e. string, number, date in this case). Any missing properties will not be set, but also won't cause any errors.

Passing POST data instead of Objects
In the last example I collected a bunch of values from form variables and stuffed them into object properties in JavaScript code. While that works, often this isn't really helping - I end up converting my types on the client and then doing another conversion on the server. If lots of input controls are on a page and you just want to pick up the values on the server via plain POST variables, that can be done too - and it makes sense especially if you're creating and filling the client side object only to push data to the server. Let's add another method to the server that once again lets us buy a stock, but this time doesn't accept a parameter and instead reads POST data on the server:

[CallbackMethod]
public string BuyStockPost()
{
    StockBuyOrder buyOrder = new StockBuyOrder();
    buyOrder.Symbol = Request.Form["txtBuySymbol"];

    int qty;
    int.TryParse(Request.Form["txtBuyQty"], out qty);
    buyOrder.Quantity = qty;

    DateTime time;
    DateTime.TryParse(Request.Form["txtBuyOn"], out time);
    buyOrder.BuyOn = time;

    // Or easier yet:
    // FormVariableBinder.Unbind(buyOrder, null, "txtBuy");

    var server = new StockServer();
    var quote = server.GetStockQuote(buyOrder.Symbol);
    if (quote == null)
        throw new ApplicationException("Invalid or missing stock symbol.");

    return string.Format("You're buying {0} shares of {1} ({2}) stock at {3} for a total of {4} on {5}.",
        buyOrder.Quantity,
        quote.Company,
        quote.Symbol,
        quote.LastPrice.ToString("c"),
        (quote.LastPrice * buyOrder.Quantity).ToString("c"),
        buyOrder.BuyOn.ToString("MMM d"));
}

Clearly this server method takes more code than it did with the object parameter - we've basically moved the parameter assignment logic from the client to the server. As a result the client code to call this method is now a bit shorter, since there's no client side shuffling of values from the controls to an object.
$("#btnBuyStockPost").click(function () { ajaxCallMethod("SampleService.ashx", "BuyStockPost", [], // Note: No parameters - function (result) { $("#divStockBuyMessage").text(result).fadeIn(1000); }, onPageError, // Force all page Form Variables to be posted { postbackMode: "Post" }); }); The client simply calls the BuyStockQuote method and pushes all the form variables from the page up to the server which parses them instead. The feature that makes this work is one of the options you can pass to the ajaxCallMethod() function: { postbackMode: "Post" }); which directs the function to include form variable POST data when making the service call. Other options include PostNoViewState (for WebForms to strip out WebForms crap vars), PostParametersOnly (default), None. If you pass parameters those are always posted to the server except when None is set. The above code can be simplified a bit by using the FormVariableBinder helper, which can unbind form variables directly into an object: FormVariableBinder.Unbind(buyOrder,null,"txtBuy"); which replaces the manual Request.Form[] reading code. It receives the object to unbind into, a string of properties to skip, and an optional prefix which is stripped off form variables to match property names. The component is similar to the MVC model binder but it's independent of MVC. Returning non-JSON Data CallbackHandler also supports returning non-JSON/XML data via special return types. You can return raw non-JSON encoded strings like this: [CallbackMethod(ReturnAsRawString=true,ContentType="text/plain")] public string HelloWorldNoJSON(string name) { return "Hello " + name + ". Time is: " + DateTime.Now.ToString(); } Calling this method results in just a plain string - no JSON encoding with quotes around the result. This can be useful if your server handling code needs to return a string or HTML result that doesn't fit well for a page or other UI component. Any string output can be returned. You can also return binary data. Stream, byte[] and Bitmap/Image results are automatically streamed back to the client. Notice that you should set the ContentType of the request either on the CallbackMethod attribute or using Response.ContentType. This ensures the Web Server knows how to display your binary response. Using a stream response makes it possible to return any of data. Streamed data can be pretty handy to return bitmap data from a method. The following is a method that returns a stock history graph for a particular stock over a provided number of years: [CallbackMethod(ContentType="image/png",RouteUrl="stocks/history/graph/{symbol}/{years}")] public Stream GetStockHistoryGraph(string symbol, int years = 2,int width = 500, int height=350) { if (width == 0) width = 500; if (height == 0) height = 350; StockServer server = new StockServer(); return server.GetStockHistoryGraph(symbol,"Stock History for " + symbol,width,height,years); } I can now hook this up into the JavaScript code when I get a stock quote. At the end of the process I can assign the URL to the service that returns the image into the src property and so force the image to display. 
Here's the changed code:

$("#btnStockQuote").click(function () {
    var symbol = $("#txtSymbol").val();
    ajaxCallMethod("SampleService.ashx", "GetStockQuote",
        [symbol],
        function (quote) {
            $("#divStockDisplay").fadeIn(1000);
            $("#stockCompany").text(quote.Company + " (" + quote.Symbol + ")");
            $("#stockLastPrice").text(quote.LastPrice);
            $("#stockQuoteTime").text(quote.LastQuoteTime.formatDate("MMM dd, hh:mmt"));

            // display a stock chart
            $("#imgStockHistory").attr("src", "stocks/history/graph/" + symbol + "/2");
        }, onPageError);
});

The resulting output then looks like this: The charting code uses the new ASP.NET 4.0 Chart components via code to display a bar chart of the two-year stock data as part of the StockServer class, which you can find in the sample download. The ability to return arbitrary data from a service is useful, as you can see - in this case the chart is clearly associated with the service, and it's nice that the graph generation can happen off a handler rather than through a page. Images are common resources, but output can also be PDF reports, zip files for downloads etc., which are increasingly common things to return from REST endpoints and other applications.

Why reinvent?
Obviously the examples I've shown here are pretty basic in terms of functionality. But I hope they demonstrate the core features of AJAX callbacks that you need in most applications, which are simple: return data, send back data and potentially retrieve data in various formats. While there are other solutions when it comes down to making AJAX callbacks and servicing REST-like requests, I like the flexibility my home grown solution provides. Simply put, it's still the easiest solution that I've found that addresses my common use cases:

AJAX JSON RPC style callbacks
URL-based access
XML and JSON output from a single method endpoint
XML and JSON POST support, querystring input, routing parameter mapping
UrlEncoded POST data support on callbacks
Ability to return stream/raw string data - essentially the ability to return ANYTHING from the service and pass anything

All these features are available in various solutions, but not together in one place. I've been using this code base for over 4 years now in a number of projects, both for myself and commercial work, and it's served me extremely well. Besides the AJAX functionality CallbackHandler provides, it's also an easy way to create any kind of output endpoint I need. Need to create a few simple routines that spit back some data, but don't want to create a Page or View or full blown handler for it? Create a CallbackHandler, add a method or multiple methods, and you have your generic endpoints. It's a quick and easy way to add small code pieces that are pretty efficient, as they run through a pretty small handler implementation. I can have this up and running in a couple of minutes - literally - without any setup, returning just about any kind of data.

Resources
Download the Sample
NuGet: Westwind Web and AJAX Utilities (Westwind.Web)
ajaxCallMethod() Documentation
Using the AjaxMethodCallback WebForms Control
West Wind Web Toolkit Home Page
West Wind Web Toolkit Source Code

© Rick Strahl, West Wind Technologies, 2005-2011. Posted in ASP.NET, jQuery, AJAX

    Read the article

  • Service Discovery in WCF 4.0 – Part 1

    - by Shaun
When designing a service oriented architecture (SOA) system, there will be a lot of services with many service contracts, endpoints and behaviors. Besides the client calling the service, in a large distributed system a service may invoke other services. In this case, one service might need to know the endpoints it invokes. This might not be a problem in a small system, but when you have more than 10 services it can become one. For example, in my current product there are around 10 services, such as the user authentication service, UI integration service, location service, license service, device monitor service, event monitor service, schedule job service, accounting service, player management service, etc.

Benefit of a Discovery Service
Since almost all my services need to invoke at least one other service, it would be a difficult task to make sure all service endpoints are configured correctly in every service. And furthermore, it would be a nightmare if a service changed its endpoint at runtime. Hence, we need a discovery service to remove the dependency (the configuration dependency). A discovery service acts as a service dictionary which stores the relationships between the contracts and the endpoints for every service. By using the discovery service, when service X wants to invoke service Y, it just needs to ask the discovery service where service Y is; the discovery service then returns all proper endpoints of service Y, and service X can use one of those endpoints to send its request to service Y. And when a service changes its endpoint address, all it needs to do is update its record in the discovery service, and then all others will know the new endpoint. WCF 4.0 Discovery supports both a managed proxy discovery mode and an ad-hoc discovery mode. In ad-hoc mode there is no standalone discovery service. When a client wants to invoke a service, it broadcasts a message (normally over the UDP protocol) to the entire network with the service match criteria. All services which have the discovery behavior enabled will receive this message, and only the matching services will send their endpoints back to the client. The managed proxy discovery service works as I described above. In this post I will only cover the managed proxy mode, where there's a standalone discovery service. For more information about the ad-hoc mode please refer to MSDN.

Service Announcement and Probe
The main functionality of a discovery service is to return the proper endpoint addresses to the service that is looking for them. In most cases the consuming service (acting as a client) will send the contract it wants to request to the discovery service, and the discovery service will find the endpoint and respond. Sometimes the contract and endpoint are not enough - a request may also involve versioning and extension attributes. In this post I will only cover the case that includes contract and endpoint. When a client (or sometimes a service that needs to invoke another service) needs to connect to a target service, it first queries the discovery service through the "Probe" method with the criteria. Basically the criteria contain the contract type name of the target service. Then the discovery service searches its endpoint repository by the criteria. The repository might be a database, a distributed cache or a flat XML file. If it finds matches, the discovery service grabs the endpoint information (called discovery endpoint metadata in WCF) and sends it back. This is called "Probe".
Finally the client receives the discovery endpoint metadata and uses one of the endpoints to connect to the target service. Besides the probe, the discovery service also takes responsibility for knowing when a new service becomes available (goes online) and when a service stops (goes offline). This feature is named "Announcement". When a service starts or stops, it announces this to the discovery service. So the basic functionality of a discovery service should include:
1. An endpoint which receives the service online message and adds the service endpoint information to the discovery repository.
2. An endpoint which receives the service offline message and removes the service endpoint information from the discovery repository.
3. An endpoint which receives the client probe message and returns the matching service endpoints as discovery endpoint metadata.
WCF 4.0 discovery covers all of these features in its infrastructure classes.

Discovery Service in WCF 4.0
WCF 4.0 introduced a new assembly named System.ServiceModel.Discovery which has all the necessary classes and interfaces to build a WS-Discovery-compliant discovery service. It supports the ad-hoc and managed proxy modes. For the case discussed in this post, what we need to build is a standalone discovery service, which is the managed proxy discovery service mode. To build a managed discovery service in WCF 4.0, just create a new class that inherits from the abstract class System.ServiceModel.Discovery.DiscoveryProxy. This class implements and abstracts the procedures of service announcement and probe, and it exposes 8 abstract methods where we can implement our own endpoint register, unregister and find logic. These 8 methods are asynchronous, which means all invocations of the discovery service happen asynchronously, for better service scalability and performance.
1. OnBeginOnlineAnnouncement, OnEndOnlineAnnouncement: invoked when a service sends the online announcement message. We need to add the endpoint information to the repository in this method.
2. OnBeginOfflineAnnouncement, OnEndOfflineAnnouncement: invoked when a service sends the offline announcement message. We need to remove the endpoint information from the repository in this method.
3. OnBeginFind, OnEndFind: invoked when a client sends the probe message to find service endpoint information. We need to look for the proper endpoints matching the client's criteria in the repository in this method.
4. OnBeginResolve, OnEndResolve: invoked when a client sends the resolve message. Different from the find method, when using the resolve method the discovery service returns exactly one service endpoint metadata entry to the client. In our example we will NOT implement this method.

Let's create our own discovery service, inheriting from the base System.ServiceModel.Discovery.DiscoveryProxy. We also need to specify the service behavior on this class. Since the built-in discovery service host class only supports singleton mode, we must set its instance context mode to Single.
1: using System;
2: using System.Collections.Generic;
3: using System.Linq;
4: using System.Text;
5: using System.ServiceModel.Discovery;
6: using System.ServiceModel;
7: 
8: namespace Phare.Service
9: {
10:     [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple)]
11:     public class ManagedProxyDiscoveryService : DiscoveryProxy
12:     {
13:         protected override IAsyncResult OnBeginFind(FindRequestContext findRequestContext, AsyncCallback callback, object state)
14:         {
15:             throw new NotImplementedException();
16:         }
17: 
18:         protected override IAsyncResult OnBeginOfflineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state)
19:         {
20:             throw new NotImplementedException();
21:         }
22: 
23:         protected override IAsyncResult OnBeginOnlineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state)
24:         {
25:             throw new NotImplementedException();
26:         }
27: 
28:         protected override IAsyncResult OnBeginResolve(ResolveCriteria resolveCriteria, AsyncCallback callback, object state)
29:         {
30:             throw new NotImplementedException();
31:         }
32: 
33:         protected override void OnEndFind(IAsyncResult result)
34:         {
35:             throw new NotImplementedException();
36:         }
37: 
38:         protected override void OnEndOfflineAnnouncement(IAsyncResult result)
39:         {
40:             throw new NotImplementedException();
41:         }
42: 
43:         protected override void OnEndOnlineAnnouncement(IAsyncResult result)
44:         {
45:             throw new NotImplementedException();
46:         }
47: 
48:         protected override EndpointDiscoveryMetadata OnEndResolve(IAsyncResult result)
49:         {
50:             throw new NotImplementedException();
51:         }
52:     }
53: }

Then let's implement the online, offline and find methods one by one. The WCF discovery service gives us full flexibility to implement the endpoint add, remove and find logic. For demo purposes we will use an internal dictionary to store the services' endpoint metadata. In the next post we will see how to serialize and store this information in a database. Define a concurrent dictionary inside the service class, since it will be used in multi-threaded scenarios (note that ConcurrentDictionary also requires a using directive for System.Collections.Concurrent):

1: [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple)]
2: public class ManagedProxyDiscoveryService : DiscoveryProxy
3: {
4:     private ConcurrentDictionary<EndpointAddress, EndpointDiscoveryMetadata> _services;
5: 
6:     public ManagedProxyDiscoveryService()
7:     {
8:         _services = new ConcurrentDictionary<EndpointAddress, EndpointDiscoveryMetadata>();
9:     }
10: }

Then we can simply implement the logic of service online and offline.
1: protected override IAsyncResult OnBeginOnlineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state)
2: {
3:     _services.AddOrUpdate(endpointDiscoveryMetadata.Address, endpointDiscoveryMetadata, (key, value) => endpointDiscoveryMetadata);
4:     return new OnOnlineAnnouncementAsyncResult(callback, state);
5: }
6: 
7: protected override void OnEndOnlineAnnouncement(IAsyncResult result)
8: {
9:     OnOnlineAnnouncementAsyncResult.End(result);
10: }
11: 
12: protected override IAsyncResult OnBeginOfflineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state)
13: {
14:     EndpointDiscoveryMetadata endpoint = null;
15:     _services.TryRemove(endpointDiscoveryMetadata.Address, out endpoint);
16:     return new OnOfflineAnnouncementAsyncResult(callback, state);
17: }
18: 
19: protected override void OnEndOfflineAnnouncement(IAsyncResult result)
20: {
21:     OnOfflineAnnouncementAsyncResult.End(result);
22: }

Regarding the find method: the parameter FindRequestContext.Criteria has a method named IsMatch, which we can use to evaluate which service metadata satisfies the criteria. So the implementation of the find method would be like this:

1: protected override IAsyncResult OnBeginFind(FindRequestContext findRequestContext, AsyncCallback callback, object state)
2: {
3:     _services.Where(s => findRequestContext.Criteria.IsMatch(s.Value))
4:         .Select(s => s.Value)
5:         .All(meta =>
6:         {
7:             findRequestContext.AddMatchingEndpoint(meta);
8:             return true;
9:         });
10:     return new OnFindAsyncResult(callback, state);
11: }
12: 
13: protected override void OnEndFind(IAsyncResult result)
14: {
15:     OnFindAsyncResult.End(result);
16: }

As you can see, we check all endpoint metadata in the repository by invoking the IsMatch method, and add all proper endpoint metadata to the request context. Finally, since all these methods are asynchronous, we need some AsyncResult classes as well. Below are the base class and the inherited classes used in the previous methods.
1: using System;
2: using System.Collections.Generic;
3: using System.Linq;
4: using System.Text;
5: using System.Threading;
6: 
7: namespace Phare.Service
8: {
9:     abstract internal class AsyncResult : IAsyncResult
10:     {
11:         AsyncCallback callback;
12:         bool completedSynchronously;
13:         bool endCalled;
14:         Exception exception;
15:         bool isCompleted;
16:         ManualResetEvent manualResetEvent;
17:         object state;
18:         object thisLock;
19: 
20:         protected AsyncResult(AsyncCallback callback, object state)
21:         {
22:             this.callback = callback;
23:             this.state = state;
24:             this.thisLock = new object();
25:         }
26: 
27:         public object AsyncState
28:         {
29:             get
30:             {
31:                 return state;
32:             }
33:         }
34: 
35:         public WaitHandle AsyncWaitHandle
36:         {
37:             get
38:             {
39:                 if (manualResetEvent != null)
40:                 {
41:                     return manualResetEvent;
42:                 }
43:                 lock (ThisLock)
44:                 {
45:                     if (manualResetEvent == null)
46:                     {
47:                         manualResetEvent = new ManualResetEvent(isCompleted);
48:                     }
49:                 }
50:                 return manualResetEvent;
51:             }
52:         }
53: 
54:         public bool CompletedSynchronously
55:         {
56:             get
57:             {
58:                 return completedSynchronously;
59:             }
60:         }
61: 
62:         public bool IsCompleted
63:         {
64:             get
65:             {
66:                 return isCompleted;
67:             }
68:         }
69: 
70:         object ThisLock
71:         {
72:             get
73:             {
74:                 return this.thisLock;
75:             }
76:         }
77: 
78:         protected static TAsyncResult End<TAsyncResult>(IAsyncResult result)
79:             where TAsyncResult : AsyncResult
80:         {
81:             if (result == null)
82:             {
83:                 throw new ArgumentNullException("result");
84:             }
85: 
86:             TAsyncResult asyncResult = result as TAsyncResult;
87: 
88:             if (asyncResult == null)
89:             {
90:                 throw new ArgumentException("Invalid async result.", "result");
91:             }
92: 
93:             if (asyncResult.endCalled)
94:             {
95:                 throw new InvalidOperationException("Async object already ended.");
96:             }
97: 
98:             asyncResult.endCalled = true;
99: 
100:             if (!asyncResult.isCompleted)
101:             {
102:                 asyncResult.AsyncWaitHandle.WaitOne();
103:             }
104: 
105:             if (asyncResult.manualResetEvent != null)
106:             {
107:                 asyncResult.manualResetEvent.Close();
108:             }
109: 
110:             if (asyncResult.exception != null)
111:             {
112:                 throw asyncResult.exception;
113:             }
114: 
115:             return asyncResult;
116:         }
117: 
118:         protected void Complete(bool completedSynchronously)
119:         {
120:             if (isCompleted)
121:             {
122:                 throw new InvalidOperationException("This async result is already completed.");
123:             }
124: 
125:             this.completedSynchronously = completedSynchronously;
126: 
127:             if (completedSynchronously)
128:             {
129:                 this.isCompleted = true;
130:             }
131:             else
132:             {
133:                 lock (ThisLock)
134:                 {
135:                     this.isCompleted = true;
136:                     if (this.manualResetEvent != null)
137:                     {
138:                         this.manualResetEvent.Set();
139:                     }
140:                 }
141:             }
142: 
143:             if (callback != null)
144:             {
145:                 callback(this);
146:             }
147:         }
148: 
149:         protected void Complete(bool completedSynchronously, Exception exception)
150:         {
151:             this.exception = exception;
152:             Complete(completedSynchronously);
153:         }
154:     }
155: }

1: using System;
2: using System.Collections.Generic;
3: using System.Linq;
4: using System.Text;
5: using System.ServiceModel.Discovery;
6: using Phare.Service;
7: 
8: namespace Phare.Service
9: {
10:     internal sealed class OnOnlineAnnouncementAsyncResult : AsyncResult
11:     {
12:         public OnOnlineAnnouncementAsyncResult(AsyncCallback callback, object state)
13:             : base(callback, state)
14:         {
15:             this.Complete(true);
16:         }
17: 
18:         public static void End(IAsyncResult result)
19:         {
20:             AsyncResult.End<OnOnlineAnnouncementAsyncResult>(result);
21:         }
22: 
23:     }
24: 
25:     sealed class OnOfflineAnnouncementAsyncResult : AsyncResult
26:     {
27:         public OnOfflineAnnouncementAsyncResult(AsyncCallback callback, object state)
28:             : base(callback, state)
29:         {
30:             this.Complete(true);
31:         }
32: 
33:         public static void End(IAsyncResult result)
34:         {
35:             AsyncResult.End<OnOfflineAnnouncementAsyncResult>(result);
36:         }
37:     }
38: 
39:     sealed class OnFindAsyncResult : AsyncResult
40:     {
41:         public OnFindAsyncResult(AsyncCallback callback, object state)
42:             : base(callback, state)
43:         {
44:             this.Complete(true);
45:         }
46: 
47:         public static void End(IAsyncResult result)
48:         {
49:             AsyncResult.End<OnFindAsyncResult>(result);
50:         }
51:     }
52: 
53:     sealed class OnResolveAsyncResult : AsyncResult
54:     {
55:         EndpointDiscoveryMetadata matchingEndpoint;
56: 
57:         public OnResolveAsyncResult(EndpointDiscoveryMetadata matchingEndpoint, AsyncCallback callback, object state)
58:             : base(callback, state)
59:         {
60:             this.matchingEndpoint = matchingEndpoint;
61:             this.Complete(true);
62:         }
63: 
64:         public static EndpointDiscoveryMetadata End(IAsyncResult result)
65:         {
66:             OnResolveAsyncResult thisPtr = AsyncResult.End<OnResolveAsyncResult>(result);
67:             return thisPtr.matchingEndpoint;
68:         }
69:     }
70: }

Now we have finished the discovery service. The next step is to host it. The discovery service is a standard WCF service, so we can host it with ServiceHost in a console application, a Windows service, or in IIS as usual. The following code shows how to host the discovery service we just created in a console application:

1: static void Main(string[] args)
2: {
3:     using (var host = new ServiceHost(new ManagedProxyDiscoveryService()))
4:     {
5:         host.Opened += (sender, e) =>
6:         {
7:             host.Description.Endpoints.All((ep) =>
8:             {
9:                 Console.WriteLine(ep.ListenUri);
10:                 return true;
11:             });
12:         };
13: 
14:         try
15:         {
16:             // retrieve the announcement, probe endpoint and binding from configuration
17:             var announcementEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["announcementEndpointAddress"]);
18:             var probeEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["probeEndpointAddress"]);
19:             var binding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding;
20:             var announcementEndpoint = new AnnouncementEndpoint(binding, announcementEndpointAddress);
21:             var probeEndpoint = new DiscoveryEndpoint(binding, probeEndpointAddress);
22:             probeEndpoint.IsSystemEndpoint = false;
23:             // append the service endpoints for announcement and probe
24:             host.AddServiceEndpoint(announcementEndpoint);
25:             host.AddServiceEndpoint(probeEndpoint);
26: 
27:             host.Open();
28: 
29:             Console.WriteLine("Press any key to exit.");
30:             Console.ReadKey();
31:         }
32:         catch (Exception ex)
33:         {
34:             Console.WriteLine(ex.ToString());
35:         }
36:     }
37: 
38:     Console.WriteLine("Done.");
39:     Console.ReadKey();
40: }

What we need to notice is that the discovery service needs two endpoints, one for announcement and one for probe. In this example I just retrieve them from the configuration file. I also specified the binding of these two endpoints in the configuration file.
1: <?xml version="1.0"?> 2: <configuration> 3: <startup> 4: <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/> 5: </startup> 6: <appSettings> 7: <add key="announcementEndpointAddress" value="net.tcp://localhost:10010/announcement"/> 8: <add key="probeEndpointAddress" value="net.tcp://localhost:10011/probe"/> 9: <add key="bindingType" value="System.ServiceModel.NetTcpBinding, System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/> 10: </appSettings> 11: </configuration> And this is the console screen when I ran my discovery service. As you can see there are two endpoints listening for announcement message and probe message.   Discoverable Service and Client Next, let’s create a WCF service that is discoverable, which means it can be found by the discovery service. To do so, we need to let the service send the online announcement message to the discovery service, as well as offline message before it shutdown. Just create a simple service which can make the incoming string to upper. The service contract and implementation would be like this. 1: [ServiceContract] 2: public interface IStringService 3: { 4: [OperationContract] 5: string ToUpper(string content); 6: } 1: public class StringService : IStringService 2: { 3: public string ToUpper(string content) 4: { 5: return content.ToUpper(); 6: } 7: } Then host this service in the console application. In order to make the discovery service easy to be tested the service address will be changed each time it’s started. 1: static void Main(string[] args) 2: { 3: var baseAddress = new Uri(string.Format("net.tcp://localhost:11001/stringservice/{0}/", Guid.NewGuid().ToString())); 4:  5: using (var host = new ServiceHost(typeof(StringService), baseAddress)) 6: { 7: host.Opened += (sender, e) => 8: { 9: Console.WriteLine("Service opened at {0}", host.Description.Endpoints.First().ListenUri); 10: }; 11:  12: host.AddServiceEndpoint(typeof(IStringService), new NetTcpBinding(), string.Empty); 13:  14: host.Open(); 15:  16: Console.WriteLine("Press any key to exit."); 17: Console.ReadKey(); 18: } 19: } Currently this service is NOT discoverable. We need to add a special service behavior so that it could send the online and offline message to the discovery service announcement endpoint when the host is opened and closed. WCF 4.0 introduced a service behavior named ServiceDiscoveryBehavior. When we specified the announcement endpoint address and appended it to the service behaviors this service will be discoverable. 1: var announcementAddress = new EndpointAddress(ConfigurationManager.AppSettings["announcementEndpointAddress"]); 2: var announcementBinding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding; 3: var announcementEndpoint = new AnnouncementEndpoint(announcementBinding, announcementAddress); 4: var discoveryBehavior = new ServiceDiscoveryBehavior(); 5: discoveryBehavior.AnnouncementEndpoints.Add(announcementEndpoint); 6: host.Description.Behaviors.Add(discoveryBehavior); The ServiceDiscoveryBehavior utilizes the service extension and channel dispatcher to implement the online and offline announcement logic. In short, it injected the channel open and close procedure and send the online and offline message to the announcement endpoint.   On client side, when we have the discovery service, a client can invoke a service without knowing its endpoint. 
The WCF discovery assembly provides a class named DiscoveryClient, which can be used to find the proper service endpoints by passing in criteria. In the code below I initialize the DiscoveryClient and specify the discovery service probe endpoint address. Then I create the find criteria by specifying the service contract I want to use, and invoke the Find method. This sends the probe message to the discovery service, which finds the endpoints and sends them back to me. The discovery service will return all endpoints that match the find criteria, which means the result of the find method might contain more than one endpoint. In this example I just return the first match. In the next post I will show how to extend our discovery service to make it work like a service load balancer.

1: static EndpointAddress FindServiceEndpoint()
2: {
3:     var probeEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["probeEndpointAddress"]);
4:     var probeBinding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding;
5:     var discoveryEndpoint = new DiscoveryEndpoint(probeBinding, probeEndpointAddress);
6: 
7:     EndpointAddress address = null;
8:     FindResponse result = null;
9:     using (var discoveryClient = new DiscoveryClient(discoveryEndpoint))
10:     {
11:         result = discoveryClient.Find(new FindCriteria(typeof(IStringService)));
12:     }
13: 
14:     if (result != null && result.Endpoints.Any())
15:     {
16:         var endpointMetadata = result.Endpoints.First();
17:         address = endpointMetadata.Address;
18:     }
19:     return address;
20: }

Once we have probed the discovery service we receive the endpoint, so in the client code we can create the channel factory from the endpoint and binding and invoke the service. When creating the client side channel factory we need to make sure that the client side binding is the same as the service side one. The WCF discovery service can be used to find the endpoint for a service contract, but the binding is NOT included. This is because the binding is not part of the WS-Discovery specification. In the next post I will demonstrate how to add the binding information to the discovery service. At that point the client won't need to create the binding by itself; instead it will use the binding received from the discovery service.

1: static void Main(string[] args)
2: {
3:     Console.WriteLine("Say something...");
4:     var content = Console.ReadLine();
5:     while (!string.IsNullOrWhiteSpace(content))
6:     {
7:         Console.WriteLine("Finding the service endpoint...");
8:         var address = FindServiceEndpoint();
9:         if (address == null)
10:         {
11:             Console.WriteLine("There is no endpoint that matches the criteria.");
12:         }
13:         else
14:         {
15:             Console.WriteLine("Found the endpoint {0}", address.Uri);
16: 
17:             var factory = new ChannelFactory<IStringService>(new NetTcpBinding(), address);
18:             factory.Opened += (sender, e) =>
19:             {
20:                 Console.WriteLine("Connecting to {0}.", factory.Endpoint.ListenUri);
21:             };
22:             var proxy = factory.CreateChannel();
23:             using (proxy as IDisposable)
24:             {
25:                 Console.WriteLine("ToUpper: {0} => {1}", content, proxy.ToUpper(content));
26:             }
27:         }
28: 
29:         Console.WriteLine("Say something...");
30:         content = Console.ReadLine();
31:     }
32: }

Similarly, the discovery service probe endpoint and binding are defined in the configuration file.
1: <?xml version="1.0"?> 2: <configuration> 3: <startup> 4: <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/> 5: </startup> 6: <appSettings> 7: <add key="announcementEndpointAddress" value="net.tcp://localhost:10010/announcement"/> 8: <add key="probeEndpointAddress" value="net.tcp://localhost:10011/probe"/> 9: <add key="bindingType" value="System.ServiceModel.NetTcpBinding, System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/> 10: </appSettings> 11: </configuration> OK, now let’s have a test. Firstly start the discovery service, and then start our discoverable service. When it started it will announced to the discovery service and registered its endpoint into the repository, which is the local dictionary. And then start the client and type something. As you can see the client asked the discovery service for the endpoint and then establish the connection to the discoverable service. And more interesting, do NOT close the client console but terminate the discoverable service but press the enter key. This will make the service send the offline message to the discovery service. Then start the discoverable service again. Since we made it use a different address each time it started, currently it should be hosted on another address. If we enter something in the client we could see that it asked the discovery service and retrieve the new endpoint, and connect the the service.   Summary In this post I discussed the benefit of using the discovery service and the procedures of service announcement and probe. I also demonstrated how to leverage the WCF Discovery feature in WCF 4.0 to build a simple managed discovery service. For test purpose, in this example I used the in memory dictionary as the discovery endpoint metadata repository. And when finding I also just return the first matched endpoint back. I also hard coded the bindings between the discoverable service and the client. In next post I will show you how to solve the problem mentioned above, as well as some additional feature for production usage. You can download the code here.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Create and Consume WCF service using Visual Studio 2010

    - by sreejukg
In this article I am going to demonstrate how to create a WCF service that can be hosted inside IIS, and a Windows application that consumes the WCF service. To support service oriented architecture, Microsoft developed the programming model named Windows Communication Foundation (WCF). ASMX was the prior offering from Microsoft; it was completely based on XML, and the .NET framework will continue to support ASMX web services in future versions as well. While ASMX web services were the first step towards service oriented architecture, Microsoft made a big step forward by introducing WCF. An overview of planning for WCF can be found at this link: http://msdn.microsoft.com/en-us/library/ff649584.aspx. The following are the important differences between WCF and ASMX from an ASP.NET developer's point of view:
1. ASMX web services are easy to write, configure and consume
2. ASMX web services can only be hosted in IIS
3. ASMX web services can only use HTTP
4. WCF can be hosted inside IIS, a Windows service, a console application, WAS (Windows Process Activation Service), etc.
5. WCF can be used with HTTP, TCP/IP, MSMQ and other protocols.
The detailed differences between ASMX web services and WCF can be found here: http://msdn.microsoft.com/en-us/library/cc304771.aspx. Though WCF is a big step toward the future, Visual Studio makes it simple to create, publish and consume a WCF service. In this demonstration I am going to create a service named SayHello that accepts 2 parameters: a name and a language code. The service will return a hello to the user name in the language that corresponds to the language code. So the proposed service usage is as follows:
Caller: SayHello("Sreeju", "en") -> return value -> Hello Sreeju
Caller: SayHello("???", "ar") -> return value -> ????? ???
Caller: SayHello("Sreeju", "es") -> return value -> Hola Sreeju
Note: calling an automated translation service is not the intention of this article. If you are interested, you can find the Bing translator API and use it in your application: http://www.microsofttranslator.com/dev/
So let us start. First I am going to create a service application that offers the SayHello service. Open Visual Studio 2010, go to File -> New Project, select WCF under your preferred language in the templates section, select WCF Service Application as the project type, give the project a name (I named it HelloService), and click OK so that Visual Studio creates the project for you. In this demonstration I have used C# as the programming language. Visual Studio will create the necessary files for you to start with. By default it will create a service named Service1.svc and an interface named IService1.cs. The screenshot for the project in Solution Explorer is as follows. Since I want to demonstrate how to create a new service, I deleted the Service1.svc and IService1.cs files from the project by right clicking each file and selecting Delete. Now there is no service in the project, so I am going to create one. From Solution Explorer, right click the project and select Add -> New Item. The Add New Item dialog will appear. Select WCF Service from the list, give the name as HelloService.svc, and click the Add button. Now Visual Studio will create 2 files, named IHelloService.cs and HelloService.svc. These files are basically the service definition (IHelloService.cs) and the service implementation (HelloService.svc). Let us examine the IHelloService interface.
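The generated code appears only as screenshots in the original post. For reference, the default files that the Visual Studio 2010 WCF Service item template produces look roughly like this - a reconstruction from the standard template, so treat the details (namespace, omitted template comments) as approximate:

using System.ServiceModel;

namespace HelloService
{
    [ServiceContract]
    public interface IHelloService
    {
        [OperationContract]
        void DoWork();
    }
}

namespace HelloService
{
    public class HelloService : IHelloService
    {
        public void DoWork()
        {
        }
    }
}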
The code states that IHelloService is the service definition, and that it provides an operation/method (similar to a web method in ASMX web services) named DoWork(). Any WCF service will have a definition file as an interface that defines the service. Let us see what is inside HelloService.svc: the code there implements the interface IHelloService. The code is self-explanatory; the HelloService class needs to implement all the methods defined in the service definition. Let me now change the service to what I need. Open IHelloService.cs in Visual Studio, delete the DoWork() method and add a definition for SayHello(); do not forget to add the OperationContract attribute to the method. Then implement the SayHello method in the HelloService.svc.cs file. (The post shows the modified code as screenshots; a rough reconstruction appears at the end of this entry.) I am done with the service. Now you can build and run the service by pressing F5 (or selecting Start Debugging from the Debug menu). Visual Studio will host the service and give you a client to test it. The screenshot is as follows: the left pane shows the services available on the server, and on the right side you can invoke the service. To test the SayHello service, double click on it in that window; it will ask you to enter the parameters, then click on the Invoke button. See a sample output below. Now I am done with the service; the next step is to write a service client. Creating a consumer application involves two steps: first, generate the class and configuration file that correspond to the service; second, create a project that uses the generated class and configuration file. First I am going to generate the class and configuration file. There is a great tool that ships with Visual Studio named svcutil.exe; this tool will create the necessary class and configuration files for you. Read the documentation for svcutil.exe here: http://msdn.microsoft.com/en-us/library/aa347733.aspx. Open the Visual Studio command prompt; you can find it under Start Menu -> All Programs -> Visual Studio 2010 -> Visual Studio Tools -> Visual Studio Command Prompt. Make sure the service is running in Visual Studio. Note the URL for the service (from the running window, you can right click and choose Copy Address). Now from the command prompt, run the svcutil.exe command as follows: I have specified the URL and the /d switch for the directory in which to store the output files (in this case d:\temp). If you are writing to the Windows drive (in my case C:), make sure you open the command prompt with the Run as Administrator option, otherwise you will get a permission error (only on Windows 7 or Windows Vista). The tool has created 2 files, HelloService.cs and output.config. The next step is to create a new project, use the created files and consume the service. Let us do that now. I am going to add a console application to the current solution: right click the solution name in Solution Explorer, then Add -> New Project. Under Visual C#, select Console Application and give the project a name; I named it TestService. Now navigate to d:\temp where I generated the files with svcutil.exe. Rename output.config to app.config. The next step is to add both files (d:\temp\HelloService.cs and app.config) to the project: in Solution Explorer, right click the project, Add -> Add Existing Item, browse to the d:\temp folder, select the 2 files as mentioned before, and click the Add button. Now you need to add a reference to System.ServiceModel to the project.
From Solution Explorer, right click References under the TestService project and select Add Reference. In the Add Reference dialog, select the .NET tab, select System.ServiceModel, and click OK. Now open Program.cs by double clicking on it, and add the code that consumes the web service to the Main method. The modified file appears in the post as a screenshot (see the reconstruction below). Right click the TestService project and select Set as StartUp Project. Press F5 to run the project. See the sample output as follows. Publishing a WCF service under IIS is similar to publishing an ASP.NET application: publish the application to a folder using the Visual Studio publishing feature, create a virtual directory, and convert it to an application. Don't forget to set the application pool to use ASP.NET version 4. One last thing you need to check is the app.config file you added to the solution. See the client element under the serviceModel element: there is an endpoint element with an address attribute that points to the published service URL. If you permanently host the service under IIS, you can simply change the address attribute to the corresponding URL and your application will consume the service. You have seen how easily you can build and consume a WCF service. If you need the solution in zipped format, please post your email below.
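Since the SayHello service code and the consuming Program.cs are shown only as screenshots in the original article, here is a rough reconstruction based on the behavior described above. The greeting table and the HelloServiceClient proxy name (following svcutil's usual contract-name-plus-"Client" convention) are my assumptions, not taken from the post:

// IHelloService.cs / HelloService.svc.cs - service side (reconstructed)
using System.Collections.Generic;
using System.ServiceModel;

namespace HelloService
{
    [ServiceContract]
    public interface IHelloService
    {
        [OperationContract]
        string SayHello(string name, string languageCode);
    }

    public class HelloService : IHelloService
    {
        // Greeting templates per language code; the original article also returned
        // an Arabic greeting for "ar", omitted here.
        private static readonly Dictionary<string, string> greetings =
            new Dictionary<string, string>
            {
                { "en", "Hello {0}" },
                { "es", "Hola {0}" }
            };

        public string SayHello(string name, string languageCode)
        {
            string template;
            if (!greetings.TryGetValue(languageCode, out template))
                template = greetings["en"]; // fall back to English for unknown codes
            return string.Format(template, name);
        }
    }
}

// Program.cs - client side (reconstructed)
using System;

namespace TestService
{
    class Program
    {
        static void Main(string[] args)
        {
            // HelloServiceClient is the proxy generated by svcutil.exe (name assumed)
            var client = new HelloServiceClient();
            Console.WriteLine(client.SayHello("Sreeju", "en")); // -> Hello Sreeju
            Console.WriteLine(client.SayHello("Sreeju", "es")); // -> Hola Sreeju
            client.Close();
        }
    }
}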

    Read the article

  • A way of doing real-world test-driven development (and some thoughts about it)

    - by Thomas Weller
Lately, I exchanged some arguments with Derick Bailey about some details of the red-green-refactor cycle of the Test-driven development process. In short, the issue revolved around the fact that it's not enough to have a test red or green, but it's also important to have it red or green for the right reasons. While for me it's sufficient to initially have a NotImplementedException in place, Derick argues that this is not totally correct (see these two posts: Red/Green/Refactor, For The Right Reasons and Red For The Right Reason: Fail By Assertion, Not By Anything Else). And he's right. But on the other hand, I had no idea how his insights could have any practical consequence for my own individual interpretation of the red-green-refactor cycle (which is not really red-green-refactor, at least not in its pure sense; see the rest of this article). This made me think deeply for some days now. In the end I found out that the 'right reason' changes in my understanding depending on what development phase I'm in. To make this clear (at least I hope it becomes clear…) I started to describe my way of working in some detail, and then something strange happened: The scope of the article slightly shifted from focusing 'only' on the 'right reason' issue to something more general, which you might describe as something like 'Doing real-world TDD in .NET, with massive use of third-party add-ins'. This is because I feel that there is a more general statement about Test-driven development to make: It's high time to speak about the 'How' of TDD, not always only the 'Why'. Much has been said about this, and I myself also contributed to that (see here: TDD is not about testing, it's about how we develop software). But always justifying what you do is very unsatisfying in the long run, it is inherently defensive, and it costs time and effort that could be used for better and more important things. And frankly: I'm somewhat sick and tired of repeating time and again that the test-driven way of software development is highly preferable for many reasons - I don't want to spend my time exclusively on stating the obvious… So, again, let's say it clearly: TDD is programming, and programming is TDD. Other ways of programming (code-first, sometimes called cowboy-coding) are exceptional and need justification. – I know that there are many people out there who will disagree with this radical statement, and I also know that it's not a description of the real world but more of a mission statement or something. But nevertheless I'm absolutely sure that in some years this statement will be nothing but a platitude.

Side note: Some parts of this post read as if I were paid by JetBrains (the manufacturer of the ReSharper add-in - R#), but I swear I'm not. Rather I think that Visual Studio is just not production-complete without it, and I wouldn't even consider doing professional work without having this add-in installed...

The three parts of a software component
Before I go into some details, I first should describe my understanding of what belongs to a software component (assembly, type, or method) during the production process (i.e. the coding phase). Roughly, I come up with the three parts shown below:

First, we need to have some initial sort of requirement. This can be a multi-page formal document, a vague idea in some programmer's brain of what might be needed, or anything in between. Either way, there has to be some sort of requirement, be it explicit or not.
– At the C# micro-level, the best way that I found to formulate that is to define interfaces for just about everything, even for internal classes, and to provide them with exhaustive xml comments.

The next step then is to re-formulate these requirements in an executable form. This is specific to the respective programming language. - For C#/.NET, the Gallio framework (which includes MbUnit) in conjunction with the ReSharper add-in for Visual Studio is my toolset of choice.

The third part then finally is the production code itself. Its development is entirely driven by the requirements and their executable formulation. This is the delivery; the two other parts are ‘only’ there to make its production possible, to give it a decent quality and reliability, and to significantly reduce related costs down the maintenance timeline. So while the first two parts are not really relevant for the customer, they are very important for the developer. The customer (or in Scrum terms: the Product Owner) is not interested at all in how the product is developed, he is only interested in the fact that it is developed as cost-effectively as possible, and that it meets his functional and non-functional requirements. The rest is solely a matter of the developer’s craftsmanship, and this is what I want to talk about during the remainder of this article…

An example

To demonstrate my way of doing real-world TDD, I decided to show the development of a (very) simple Calculator component. The example is deliberately trivial and silly, as examples always are. I am totally aware of the fact that real life is never that simple, but I only want to show some development principles here…

The requirement

As already said above, I start with writing down some words on the initial requirement, and I normally use interfaces for that, even for internal classes - the typical question “intf or not” doesn’t even come to mind. I need them for my usual workflow and using them automatically produces highly componentized and testable code anyway. To think about their usage in every single situation would slow down the production process unnecessarily. So this is what I begin with:

namespace Calculator
{
    /// <summary>
    /// Defines a very simple calculator component for demo purposes.
    /// </summary>
    public interface ICalculator
    {
        /// <summary>
        /// Gets the result of the last successful operation.
        /// </summary>
        /// <value>The last result.</value>
        /// <remarks>
        /// Will be <see langword="null" /> before the first successful operation.
        /// </remarks>
        double? LastResult { get; }

    } // interface ICalculator

} // namespace Calculator

So, I’m not beginning with a test, but with a sort of code declaration - and still I insist on being 100% test-driven. There are three important things here:

- Starting this way gives me a method signature, which allows me to use IntelliSense and AutoCompletion and thus eliminates the danger of typos - one of the most regular, annoying, time-consuming, and therefore expensive sources of error in the development process.
- In my understanding, the interface definition as a whole is more of a readable requirement document and technical documentation than anything else. So this is at least as much about documentation as about coding. The documentation must completely describe the behavior of the documented element.
- I normally use an IoC container or some sort of self-written provider-like model in my architecture.
In either case, I need my components defined via service interfaces anyway. - I will use the LinFu IoC framework here, for no other reason than that it is very simple to use.

The ‘Red’ (pt. 1)

First I create a folder for the project’s third-party libraries and put the LinFu.Core dll there. Then I set up a test project (via a Gallio project template), and add references to the Calculator project and the LinFu dll. Finally I’m ready to write the first test, which will look like the following:

namespace Calculator.Test
{
    [TestFixture]
    public class CalculatorTest
    {
        private readonly ServiceContainer container = new ServiceContainer();

        [Test]
        public void CalculatorLastResultIsInitiallyNull()
        {
            ICalculator calculator = container.GetService<ICalculator>();

            Assert.IsNull(calculator.LastResult);
        }

    } // class CalculatorTest

} // namespace Calculator.Test

This is basically the executable formulation of (part of) what the interface definition states.

Side note: There’s one principle of TDD that is just plain wrong in my eyes: I’m talking about the Red is 'does not compile' thing. How could a compiler error ever be interpreted as a valid test outcome? I never understood that, it just makes no sense to me. (Or, in Derick’s terms: this reason is as wrong as a reason ever could be…) A compiler error tells me: Your code is incorrect, but nothing more. Instead, the ‘Red’ part of the red-green-refactor cycle has a clearly defined meaning to me: It means that the test works as intended and fails only if its assumptions are not met for some reason.

Back to our Calculator. When I execute the above test with R#, the Gallio plugin will give me this output: So this tells me that the test is red for the wrong reason: There’s no implementation that the IoC container could load, of course. So let’s fix that. With R#, this is very easy:

1. First, create an ICalculator-derived type.
2. Next, implement the interface members.
3. And finally, move the new class to its own file.

So far my ‘work’ was six mouse clicks long; the only things that are left to do manually here are to add the IoC-specific wiring declaration and to make the respective class non-public, which I regularly do to force my components to communicate exclusively via interfaces. This is what my Calculator class looks like as of now:

using System;
using LinFu.IoC.Configuration;

namespace Calculator
{
    [Implements(typeof(ICalculator))]
    internal class Calculator : ICalculator
    {
        public double? LastResult
        {
            get
            {
                throw new NotImplementedException();
            }
        }
    }
}

Back to the test fixture, we have to put our IoC container to work:

[TestFixture]
public class CalculatorTest
{
    #region Fields

    private readonly ServiceContainer container = new ServiceContainer();

    #endregion // Fields

    #region Setup/TearDown

    [FixtureSetUp]
    public void FixtureSetUp()
    {
       container.LoadFrom(AppDomain.CurrentDomain.BaseDirectory, "Calculator.dll");
    }

    ...

Because I have an R# live template defined for the setup/teardown method skeleton as well, the only manual coding here again is the IoC-specific stuff: two lines, not more…

The ‘Red’ (pt. 2)

Now, the execution of the above test gives the following result: This time, the test outcome tells me that the method under test is called.
And this is the point where Derick and I seem to have somewhat different views on the subject: Of course, the test still is worthless regarding the red/green outcome (or: it’s still red for the wrong reasons, in that it gives a false negative). But as far as I am concerned, I’m not really interested in the test outcome at this point of the red-green-refactor cycle. Rather, I only want to assert that my test actually calls the right method. If that’s the case, I will happily go on to the ‘Green’ part…

The ‘Green’

Making the test green is quite trivial. Just make LastResult an automatic property:

    [Implements(typeof(ICalculator))]
    internal class Calculator : ICalculator
    {
        public double? LastResult { get; private set; }
    }

One more round…

Now on to something slightly more demanding (cough…). Let’s state that our Calculator exposes an Add() method:

        ...

        /// <summary>
        /// Adds the specified operands.
        /// </summary>
        /// <param name="operand1">The operand1.</param>
        /// <param name="operand2">The operand2.</param>
        /// <returns>The result of the addition.</returns>
        /// <exception cref="ArgumentException">
        /// Argument <paramref name="operand1"/> is &lt; 0.<br/>
        /// -- or --<br/>
        /// Argument <paramref name="operand2"/> is &lt; 0.
        /// </exception>
        double Add(double operand1, double operand2);

    } // interface ICalculator

A remark: I sometimes hear the complaint that xml comment stuff like the above is hard to read. That’s certainly true, but irrelevant to me, because I read xml code comments with the CR_Documentor tool window.

Apart from that, I’m heavily using xml code comments (see e.g. here for a detailed guide) because there is the possibility of automating help generation with nightly CI builds (using MS Sandcastle and the Sandcastle Help File Builder), and then publishing the results to some intranet location. This way, a team always has first-class, up-to-date technical documentation at hand about the current codebase. (And, also very important for speeding up things and avoiding typos: You have IntelliSense/AutoCompletion and R# support, and the comments are subject to compiler checking…)

Back to our Calculator again: Two more R# clicks implement the Add() skeleton:

        ...

        public double Add(double operand1, double operand2)
        {
            throw new NotImplementedException();
        }

    } // class Calculator

As we have stated in the interface definition (which actually serves as our requirement document!), the operands are not allowed to be negative. So let’s start implementing that. Here’s the test:

[Test]
[Row(-0.5, 2)]
public void AddThrowsOnNegativeOperands(double operand1, double operand2)
{
    ICalculator calculator = container.GetService<ICalculator>();

    Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2));
}

As you can see, I’m using a data-driven unit test method here, mainly for these two reasons:

- Because I know that I will have to do the same test for the second operand in a few seconds, I save myself from implementing another test method for this purpose. Rather, I will only have to add another Row attribute to the existing one.
- From the test report below, you can see that the argument values are explicitly printed out.
This can be a valuable documentation feature even when everything is green: One can quickly review what values were tested exactly - the complete Gallio HTML report (as it will be produced by the Continuous Integration runs) shows these values in a quite clear format (see below for an example).

Back to our Calculator development again, this is what the test result tells us at the moment: So we’re red again, because there is not yet an implementation… Next we go on and implement the necessary parameter verification to become green again, and then we do the same thing for the second operand. To make a long story short, here’s the test and the method implementation at the end of the second cycle:

// in CalculatorTest:

[Test]
[Row(-0.5, 2)]
[Row(295, -123)]
public void AddThrowsOnNegativeOperands(double operand1, double operand2)
{
    ICalculator calculator = container.GetService<ICalculator>();

    Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2));
}

// in Calculator:

public double Add(double operand1, double operand2)
{
    if (operand1 < 0.0)
    {
        throw new ArgumentException("Value must not be negative.", "operand1");
    }
    if (operand2 < 0.0)
    {
        throw new ArgumentException("Value must not be negative.", "operand2");
    }
    throw new NotImplementedException();
}

So far, we have sheltered our method from unwanted input, and now we can safely operate on the parameters without further caring about their validity (this is my interpretation of the Fail Fast principle, which is discussed here in more detail). Now we can think about the method’s successful outcomes. First let’s write another test for that:

[Test]
[Row(1, 1, 2)]
public void TestAdd(double operand1, double operand2, double expectedResult)
{
    ICalculator calculator = container.GetService<ICalculator>();

    double result = calculator.Add(operand1, operand2);

    Assert.AreEqual(expectedResult, result);
}

Again, I’m regularly using row-based test methods for these kinds of unit tests. The pattern shown above proved to be extremely helpful for my development work; I call it the Defined-Input/Expected-Output test idiom: You define your input arguments together with the expected method result. There are two major benefits from that way of testing:

- In the course of refining a method, it’s very likely to come up with additional test cases. In our case, we might add tests for some edge cases like ‘one of the operands is zero’ or ‘the sum of the two operands causes an overflow’, or maybe there’s an external test protocol that has to be fulfilled (e.g. an ISO norm for medical software), and this results in the need of testing against additional values. In all these scenarios we only have to add another Row attribute to the test.
- Remember that the argument values are written to the test report, so as a side effect this produces valuable documentation. (This can become especially important if the fulfillment of some sort of external requirements has to be proven.)
So your test method might look something like this in the end:

[Test, Description("Arguments: operand1, operand2, expectedResult")]
[Row(1, 1, 2)]
[Row(0, 999999999, 999999999)]
[Row(0, 0, 0)]
[Row(0, double.MaxValue, double.MaxValue)]
[Row(4, double.MaxValue - 2.5, double.MaxValue)]
public void TestAdd(double operand1, double operand2, double expectedResult)
{
    ICalculator calculator = container.GetService<ICalculator>();

    double result = calculator.Add(operand1, operand2);

    Assert.AreEqual(expectedResult, result);
}

And this will produce the following HTML report (with Gallio). Not bad for the amount of work we invested in it, huh? - There might be scenarios where reports like that can be useful for demonstration purposes during a Scrum sprint review…

The last requirement to fulfill is that the LastResult property is expected to store the result of the last operation. I don’t show this here, it’s trivial enough and brings nothing new (a possible shape of such a test is sketched right below)…
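As a side note, here is a minimal sketch of what such a test could look like, following the Defined-Input/Expected-Output idiom from above (the test name and the row values are my own illustrative choices, not taken from the original article):

[Test, Description("Arguments: operand1, operand2, expectedResult")]
[Row(1, 1, 2)]
[Row(0.5, 2, 2.5)]
public void TestAddGivesExpectedLastResult(double operand1, double operand2, double expectedResult)
{
    ICalculator calculator = container.GetService<ICalculator>();

    // The return value is ignored here - only the stored state is of interest.
    calculator.Add(operand1, operand2);

    Assert.AreEqual(expectedResult, calculator.LastResult);
}

This mirrors the TestSubtractGivesExpectedLastResult method that will show up in the refactoring section below.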
And finally: Refactor (for the right reasons)

To demonstrate my way of going through the refactoring portion of the red-green-refactor cycle, I added another method to our Calculator component, namely Subtract(). Here’s the code (tests and production):

// CalculatorTest.cs:

[Test, Description("Arguments: operand1, operand2, expectedResult")]
[Row(1, 1, 0)]
[Row(0, 999999999, -999999999)]
[Row(0, 0, 0)]
[Row(0, double.MaxValue, -double.MaxValue)]
[Row(4, double.MaxValue - 2.5, -double.MaxValue)]
public void TestSubtract(double operand1, double operand2, double expectedResult)
{
    ICalculator calculator = container.GetService<ICalculator>();

    double result = calculator.Subtract(operand1, operand2);

    Assert.AreEqual(expectedResult, result);
}

[Test, Description("Arguments: operand1, operand2, expectedResult")]
[Row(1, 1, 0)]
[Row(0, 999999999, -999999999)]
[Row(0, 0, 0)]
[Row(0, double.MaxValue, -double.MaxValue)]
[Row(4, double.MaxValue - 2.5, -double.MaxValue)]
public void TestSubtractGivesExpectedLastResult(double operand1, double operand2, double expectedResult)
{
    ICalculator calculator = container.GetService<ICalculator>();

    calculator.Subtract(operand1, operand2);

    Assert.AreEqual(expectedResult, calculator.LastResult);
}

...

// ICalculator.cs:

/// <summary>
/// Subtracts the specified operands.
/// </summary>
/// <param name="operand1">The operand1.</param>
/// <param name="operand2">The operand2.</param>
/// <returns>The result of the subtraction.</returns>
/// <exception cref="ArgumentException">
/// Argument <paramref name="operand1"/> is &lt; 0.<br/>
/// -- or --<br/>
/// Argument <paramref name="operand2"/> is &lt; 0.
/// </exception>
double Subtract(double operand1, double operand2);

...

// Calculator.cs:

public double Subtract(double operand1, double operand2)
{
    if (operand1 < 0.0)
    {
        throw new ArgumentException("Value must not be negative.", "operand1");
    }

    if (operand2 < 0.0)
    {
        throw new ArgumentException("Value must not be negative.", "operand2");
    }

    return (this.LastResult = operand1 - operand2).Value;
}

Obviously, the argument validation stuff that was produced during the red-green part of our cycle duplicates the code from the previous Add() method. So, to avoid code duplication and minimize the number of code lines of the production code, we do an Extract Method refactoring. One more time, this is only a matter of a few mouse clicks (and giving the new method a name) with R#. Having done that, our production code finally looks like this:

using System;
using LinFu.IoC.Configuration;

namespace Calculator
{
    [Implements(typeof(ICalculator))]
    internal class Calculator : ICalculator
    {
        #region ICalculator

        public double? LastResult { get; private set; }

        public double Add(double operand1, double operand2)
        {
            ThrowIfOneOperandIsInvalid(operand1, operand2);

            return (this.LastResult = operand1 + operand2).Value;
        }

        public double Subtract(double operand1, double operand2)
        {
            ThrowIfOneOperandIsInvalid(operand1, operand2);

            return (this.LastResult = operand1 - operand2).Value;
        }

        #endregion // ICalculator

        #region Implementation (Helper)

        private static void ThrowIfOneOperandIsInvalid(double operand1, double operand2)
        {
            if (operand1 < 0.0)
            {
                throw new ArgumentException("Value must not be negative.", "operand1");
            }

            if (operand2 < 0.0)
            {
                throw new ArgumentException("Value must not be negative.", "operand2");
            }
        }

        #endregion // Implementation (Helper)

    } // class Calculator

} // namespace Calculator

But is the above worth the effort at all? It’s obviously trivial and not very impressive. All our tests were green (for the right reasons), and refactoring the code did not change anything. It’s not immediately clear how this refactoring work adds value to the project. Derick puts it like this:

STOP! Hold on a second… before you go any further and before you even think about refactoring what you just wrote to make your test pass, you need to understand something: if you're done with your requirements after making the test green, you are not required to refactor the code. I know… I’m speaking heresy, here. Toss me to the wolves, I’ve gone over to the dark side! Seriously, though… if your test is passing for the right reasons, and you do not need to write any test or any more code for your class at this point, what value does refactoring add?

Derick immediately answers his own question:

So why should you follow the refactor portion of red/green/refactor? When you have added code that makes the system less readable, less understandable, less expressive of the domain or concern’s intentions, less architecturally sound, less DRY, etc, then you should refactor it.

I couldn’t state it more precisely. From my personal perspective, I’d add the following: You have to keep in mind that real-world software systems are usually quite large and there are dozens or even hundreds of occasions where micro-refactorings like the above can be applied. It’s the sum of them all that counts. And to have a good overall quality of the system (e.g. in terms of the Code Duplication Percentage metric) you have to be pedantic on the individual, seemingly trivial cases. My job regularly requires the reading and understanding of ‘foreign’ code.
So code quality/readability really makes a HUGE difference for me – sometimes it can even be the difference between project success and failure…

Conclusions

The development process described above emerged over the years, and there were mainly two things that guided its evolution (you might call them eternal principles, personal beliefs, or anything in between):

- Test-driven development is the normal, natural way of writing software; code-first is exceptional. So ‘doing TDD or not’ is not a question. And good, stable code can only reliably be produced by doing TDD (yes, I know: many will strongly disagree here again, but I’ve never seen high-quality code – and high-quality code is code that stood the test of time and causes low maintenance costs – that was produced code-first…)
- It’s the production code that pays our bills in the end. (Though I have seen customers these days who demand an acceptance test battery as part of the final delivery. Things seem to go into the right direction…) The test code serves ‘only’ to make the production code work. But it’s the number of delivered features which solely counts at the end of the day - no matter how much test code you wrote or how good it is.

With these two things in mind, I tried to optimize my coding process for coding speed – or, in business terms: productivity - without sacrificing the principles of TDD (more than I’d do either way…). As a result, I consider a ratio of about 3-5/1 for test code vs. production code as normal and desirable. In other words: roughly 60-80% of my code is test code. (This might sound heavy, but that is mainly due to the fact that software development standards are only beginning to evolve. Historically seen, the entire software development profession is very young and only at its very beginning, and there are no viable standards yet. If you think about software development as a kind of casting process, where the test code is the mold and the resulting production code is the final product, then the above ratio no longer sounds extraordinary…)

Although the above might look like a lot of unnecessary work at first sight, it’s not. With the aid of the mentioned add-ins, doing all the above is a matter of minutes, sometimes seconds (while writing this post took hours and days…). The most important thing is to have the right tools at hand. Slow developer machines or the lack of a tool or something like that - for ‘saving’ a few 100 bucks - is just not acceptable and a very bad decision in business terms (though I have seen and heard that quite a few times…). Production of high-quality products needs the usage of high-quality tools. This is a platitude that every craftsman knows…

The round-trip described here will take me about five to ten minutes in my real-world development practice. I guess it’s about 30% more time compared to developing the ‘traditional’ (code-first) way. But the ‘product’ manufactured this way is of much higher quality and massively reduces maintenance costs, which is by far the single biggest cost factor, as I showed in this previous post: It's the maintenance, stupid! (or: Something is rotten in developerland.). In the end, this is a highly cost-effective way of software development…

But on the other hand, there clearly is a trade-off here: coding speed vs. code quality/later maintenance costs. The development method described here might be a perfect fit for the overwhelming majority of software projects, but there certainly are some scenarios where it’s not - e.g.
if time-to-market is crucial for a software project. So this is a business decision in the end. It’s just that you have to know what you’re doing and what consequences this might have…

Some last words

First, I’d like to thank Derick Bailey again. His two aforementioned posts (which I strongly recommend reading) inspired me to think deeply about my own personal way of doing TDD and to clarify my thoughts about it. I wouldn’t have done that without this inspiration. I really enjoy that kind of discussion… I agree with him in all respects. But I don’t know (yet?) how to bring his insights into the described production process without slowing things down. The method described above proved to be very “good enough” in my practical experience. But of course, I’m open to suggestions here…

My rationale for now is: If the test is initially red during the red-green-refactor cycle, the ‘right reason’ is: it actually calls the right method, but this method is not yet operational. Later on, when the cycle is finished and the tests become part of the regular, automated Continuous Integration process, ‘red’ certainly must occur for the ‘right reason’: in this phase, ‘red’ MUST mean nothing but an unfulfilled assertion - Fail By Assertion, Not By Anything Else!

    Read the article

  • Configuring Fed Authentication Methods in OIF / IdP

    - by Damien Carru
In this article, I will provide examples of how to configure OIF/IdP to map OAM Authentication Schemes to Federation Authentication Methods, based on the concepts introduced in my previous entry. I will show examples for the three protocols supported by OIF:

- SAML 2.0 SSO
- SAML 1.1 SSO
- OpenID 2.0

Enjoy the reading!

Configuration

As I mentioned in my previous article, mapping Federation Authentication Methods to OAM Authentication Schemes is protocol dependent, since the methods are defined in the various protocols (SAML 2.0, SAML 1.1, OpenID 2.0). As such, the WLST commands to set those mappings will involve:

- Either the SP Partner Profile, affecting all Partners referencing that profile which do not override the Federation Authentication Method to OAM Authentication Scheme mappings
- Or the SP Partner entry, which will only affect that SP Partner

It is important to note that if an SP Partner is configured to define one or more Federation Authentication Method to OAM Authentication Scheme mappings, then all the mappings defined in the SP Partner Profile will be ignored.

WLST Commands

The two OIF WLST commands that can be used to define mappings of Federation Authentication Methods to OAM Authentication Schemes are:

- addSPPartnerProfileAuthnMethod() to define a mapping on an SP Partner Profile, taking as parameters: the name of the SP Partner Profile, the Federation Authentication Method, and the OAM Authentication Scheme name
- addSPPartnerAuthnMethod() to define a mapping on an SP Partner, taking as parameters: the name of the SP Partner, the Federation Authentication Method, and the OAM Authentication Scheme name

Note: I will discuss the other parameters of those commands in a subsequent article.

In the next sections, I will show examples of how to use those methods:

- For SAML 2.0, I will configure the SP Partner Profile, which will apply all the mappings to SP Partners referencing this profile, unless they override the mapping definition
- For SAML 1.1, I will configure the SP Partner
- For OpenID 2.0, I will configure the SP/RP Partner

SAML 2.0

Test Setup

In this setup, OIF is acting as an IdP and is integrated with a remote SAML 2.0 SP partner identified by AcmeSP. In this test, I will perform Federation SSO with OIF/IdP configured to:

- Use LDAPScheme as the Authentication Scheme
- Use BasicScheme as the Authentication Scheme, and map BasicScheme to the urn:oasis:names:tc:SAML:2.0:ac:classes:Password Federation Authentication Method
- Use OAMLDAPPluginAuthnScheme as the Authentication Scheme, and map OAMLDAPPluginAuthnScheme to the urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport Federation Authentication Method

LDAPScheme as Authentication Scheme

Using the OOTB settings regarding user authentication in OAM, the user will be challenged via a FORM based login page based on the LDAPScheme. Also, the default Federation Authentication Method mappings configuration maps only urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport to LDAPScheme (also marked as the default scheme used for authentication), FAAuthScheme, BasicScheme and BasicFAScheme. After authentication via FORM, OIF/IdP would issue an Assertion similar to:

<samlp:Response ...>
   <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>
   <samlp:Status>
      <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>
   </samlp:Status>
   <saml:Assertion ...>
      <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>
      <dsig:Signature>
         ...
      </dsig:Signature>
      <saml:Subject>
         <saml:NameID ...>[email protected]</saml:NameID>
         <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">
            <saml:SubjectConfirmationData .../>
         </saml:SubjectConfirmation>
      </saml:Subject>
      <saml:Conditions ...>
         <saml:AudienceRestriction>
            <saml:Audience>https://acme.com/sp</saml:Audience>
         </saml:AudienceRestriction>
      </saml:Conditions>
      <saml:AuthnStatement AuthnInstant="2014-03-21T20:53:55Z" SessionIndex="id-6i-Dm0yB-HekG6cejktwcKIFMzYE8Yrmqwfd0azz" SessionNotOnOrAfter="2014-03-21T21:53:55Z">
         <saml:AuthnContext>
            <saml:AuthnContextClassRef>
               urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport
            </saml:AuthnContextClassRef>
         </saml:AuthnContext>
      </saml:AuthnStatement>
   </saml:Assertion>
</samlp:Response>
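A quick way to double-check which Federation Authentication Method was actually issued is to look at the AuthnContextClassRef element of the returned Assertion. Here is a small, hypothetical C# sketch of such a check (the helper name and the idea of feeding it the captured response XML are my own assumptions for illustration; this is not part of OIF):

using System;
using System.Linq;
using System.Xml.Linq;

static class AssertionCheck
{
    // Returns the Federation Authentication Method carried in a SAML 2.0 response,
    // assuming 'responseXml' contains a captured response like the one shown above.
    public static string GetAuthnMethod(string responseXml)
    {
        XNamespace saml = "urn:oasis:names:tc:SAML:2.0:assertion";

        // The issued method lives in AuthnStatement/AuthnContext/AuthnContextClassRef.
        return XDocument.Parse(responseXml)
                        .Descendants(saml + "AuthnContextClassRef")
                        .Single()
                        .Value
                        .Trim();
    }
}

For the LDAPScheme case above, this would return urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport.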
BasicScheme as Authentication Scheme

For this test, I will switch the default Authentication Scheme for the SP Partner Profile to BasicScheme instead of LDAPScheme. I will use the OIF WLST setSPPartnerProfileDefaultScheme() command and specify which scheme is to be used as the default for the SP Partner Profile referenced by AcmeSP (which is saml20-sp-partner-profile in this case: getFedPartnerProfile("AcmeSP", "sp") ):

1. Enter the WLST environment by executing: $IAM_ORACLE_HOME/common/bin/wlst.sh
2. Connect to the WLS Admin server: connect()
3. Navigate to the Domain Runtime branch: domainRuntime()
4. Execute the setSPPartnerProfileDefaultScheme() command: setSPPartnerProfileDefaultScheme("saml20-sp-partner-profile", "BasicScheme")
5. Exit the WLST environment: exit()

The user will now be challenged via the HTTP Basic Authentication defined in the BasicScheme for AcmeSP. Also, as noted earlier, the default Federation Authentication Method mappings configuration maps only urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport to LDAPScheme (also marked as the default scheme used for authentication), FAAuthScheme, BasicScheme and BasicFAScheme. After authentication via HTTP Basic Authentication, OIF/IdP would issue an Assertion similar to:

<samlp:Response ...>
   <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>
   <samlp:Status>
      <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>
   </samlp:Status>
   <saml:Assertion ...>
      <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>
      <dsig:Signature>
         ...
      </dsig:Signature>
      <saml:Subject>
         <saml:NameID ...>[email protected]</saml:NameID>
         <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">
            <saml:SubjectConfirmationData .../>
         </saml:SubjectConfirmation>
      </saml:Subject>
      <saml:Conditions ...>
         <saml:AudienceRestriction>
            <saml:Audience>https://acme.com/sp</saml:Audience>
         </saml:AudienceRestriction>
      </saml:Conditions>
      <saml:AuthnStatement AuthnInstant="2014-03-21T20:53:55Z" SessionIndex="id-6i-Dm0yB-HekG6cejktwcKIFMzYE8Yrmqwfd0azz" SessionNotOnOrAfter="2014-03-21T21:53:55Z">
         <saml:AuthnContext>
            <saml:AuthnContextClassRef>
               urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport
            </saml:AuthnContextClassRef>
         </saml:AuthnContext>
      </saml:AuthnStatement>
   </saml:Assertion>
</samlp:Response>
Mapping BasicScheme

To change the Federation Authentication Method mapping for the BasicScheme to urn:oasis:names:tc:SAML:2.0:ac:classes:Password instead of urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport for the saml20-sp-partner-profile SAML 2.0 SP Partner Profile (the profile to which my AcmeSP Partner is bound), I will execute the addSPPartnerProfileAuthnMethod() method:

1. Enter the WLST environment by executing: $IAM_ORACLE_HOME/common/bin/wlst.sh
2. Connect to the WLS Admin server: connect()
3. Navigate to the Domain Runtime branch: domainRuntime()
4. Execute the addSPPartnerProfileAuthnMethod() command: addSPPartnerProfileAuthnMethod("saml20-sp-partner-profile", "urn:oasis:names:tc:SAML:2.0:ac:classes:Password", "BasicScheme")
5. Exit the WLST environment: exit()

After authentication via HTTP Basic Authentication, OIF/IdP would now issue an Assertion similar to (see that the AuthnContextClassRef was changed from PasswordProtectedTransport to Password):

<samlp:Response ...>
   <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>
   <samlp:Status>
      <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>
   </samlp:Status>
   <saml:Assertion ...>
      <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>
      <dsig:Signature>
         ...
      </dsig:Signature>
      <saml:Subject>
         <saml:NameID ...>[email protected]</saml:NameID>
         <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">
            <saml:SubjectConfirmationData .../>
         </saml:SubjectConfirmation>
      </saml:Subject>
      <saml:Conditions ...>
         <saml:AudienceRestriction>
            <saml:Audience>https://acme.com/sp</saml:Audience>
         </saml:AudienceRestriction>
      </saml:Conditions>
      <saml:AuthnStatement AuthnInstant="2014-03-21T20:53:55Z" SessionIndex="id-6i-Dm0yB-HekG6cejktwcKIFMzYE8Yrmqwfd0azz" SessionNotOnOrAfter="2014-03-21T21:53:55Z">
         <saml:AuthnContext>
            <saml:AuthnContextClassRef>
               urn:oasis:names:tc:SAML:2.0:ac:classes:Password
            </saml:AuthnContextClassRef>
         </saml:AuthnContext>
      </saml:AuthnStatement>
   </saml:Assertion>
</samlp:Response>

OAMLDAPPluginAuthnScheme as Authentication Scheme

For this test, I will switch the default Authentication Scheme for the SP Partner Profile to OAMLDAPPluginAuthnScheme instead of BasicScheme.
I will use the OIF WLST setSPPartnerProfileDefaultScheme() command and specify which scheme is to be used as the default for the SP Partner Profile referenced by AcmeSP (which is saml20-sp-partner-profile in this case: getFedPartnerProfile("AcmeSP", "sp") ):

1. Enter the WLST environment by executing: $IAM_ORACLE_HOME/common/bin/wlst.sh
2. Connect to the WLS Admin server: connect()
3. Navigate to the Domain Runtime branch: domainRuntime()
4. Execute the setSPPartnerProfileDefaultScheme() command: setSPPartnerProfileDefaultScheme("saml20-sp-partner-profile", "OAMLDAPPluginAuthnScheme")
5. Exit the WLST environment: exit()

The user will now be challenged via the FORM defined in the OAMLDAPPluginAuthnScheme for AcmeSP. Contrary to LDAPScheme and BasicScheme, the OAMLDAPPluginAuthnScheme is not mapped by default to any Federation Authentication Method. As such, OIF/IdP will not be able to find a Federation Authentication Method and will set the method in the SAML Assertion to the OAM Authentication Scheme name. After authentication via FORM, OIF/IdP would issue an Assertion similar to (see the AuthnContextClassRef set to OAMLDAPPluginAuthnScheme):

<samlp:Response ...>
   <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>
   <samlp:Status>
      <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>
   </samlp:Status>
   <saml:Assertion ...>
      <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>
      <dsig:Signature>
         ...
      </dsig:Signature>
      <saml:Subject>
         <saml:NameID ...>[email protected]</saml:NameID>
         <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">
            <saml:SubjectConfirmationData .../>
         </saml:SubjectConfirmation>
      </saml:Subject>
      <saml:Conditions ...>
         <saml:AudienceRestriction>
            <saml:Audience>https://acme.com/sp</saml:Audience>
         </saml:AudienceRestriction>
      </saml:Conditions>
      <saml:AuthnStatement AuthnInstant="2014-03-21T20:53:55Z" SessionIndex="id-6i-Dm0yB-HekG6cejktwcKIFMzYE8Yrmqwfd0azz" SessionNotOnOrAfter="2014-03-21T21:53:55Z">
         <saml:AuthnContext>
            <saml:AuthnContextClassRef>
               OAMLDAPPluginAuthnScheme
            </saml:AuthnContextClassRef>
         </saml:AuthnContext>
      </saml:AuthnStatement>
   </saml:Assertion>
</samlp:Response>

Mapping OAMLDAPPluginAuthnScheme

To add the OAMLDAPPluginAuthnScheme to the urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport Federation Authentication Method mapping, I will execute the addSPPartnerProfileAuthnMethod() method:

1. Enter the WLST environment by executing: $IAM_ORACLE_HOME/common/bin/wlst.sh
2. Connect to the WLS Admin server: connect()
3. Navigate to the Domain Runtime branch: domainRuntime()
4. Execute the addSPPartnerProfileAuthnMethod() command: addSPPartnerProfileAuthnMethod("saml20-sp-partner-profile", "urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport", "OAMLDAPPluginAuthnScheme")
5. Exit the WLST environment: exit()

After authentication via FORM, OIF/IdP would now issue an Assertion similar to (see that the method was changed from OAMLDAPPluginAuthnScheme to PasswordProtectedTransport):

<samlp:Response ...>
   <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>
   <samlp:Status>
      <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>
   </samlp:Status>
   <saml:Assertion ...>
      <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>
      <dsig:Signature>
         ...
      </dsig:Signature>
      <saml:Subject>
         <saml:NameID ...>[email protected]</saml:NameID>
         <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">
            <saml:SubjectConfirmationData .../>
         </saml:SubjectConfirmation>
      </saml:Subject>
      <saml:Conditions ...>
         <saml:AudienceRestriction>
            <saml:Audience>https://acme.com/sp</saml:Audience>
         </saml:AudienceRestriction>
      </saml:Conditions>
      <saml:AuthnStatement AuthnInstant="2014-03-21T20:53:55Z" SessionIndex="id-6i-Dm0yB-HekG6cejktwcKIFMzYE8Yrmqwfd0azz" SessionNotOnOrAfter="2014-03-21T21:53:55Z">
         <saml:AuthnContext>
            <saml:AuthnContextClassRef>
               urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport
            </saml:AuthnContextClassRef>
         </saml:AuthnContext>
      </saml:AuthnStatement>
   </saml:Assertion>
</samlp:Response>
SAML 1.1

Test Setup

In this setup, OIF is acting as an IdP and is integrated with a remote SAML 1.1 SP partner identified by AcmeSP. In this test, I will perform Federation SSO with OIF/IdP configured to:

- Use LDAPScheme as the Authentication Scheme
- Use OAMLDAPPluginAuthnScheme as the Authentication Scheme, and map OAMLDAPPluginAuthnScheme to the urn:oasis:names:tc:SAML:1.0:am:password Federation Authentication Method
- Use LDAPScheme as the Authentication Scheme, and map LDAPScheme to the urn:oasis:names:tc:SAML:1.0:am:password Federation Authentication Method

LDAPScheme as Authentication Scheme

Using the OOTB settings regarding user authentication in OAM, the user will be challenged via a FORM based login page based on the LDAPScheme. Also, the default Federation Authentication Method mappings configuration maps only urn:oasis:names:tc:SAML:1.0:am:password to LDAPScheme (also marked as the default scheme used for authentication), FAAuthScheme, BasicScheme and BasicFAScheme. After authentication via FORM, OIF/IdP would issue an Assertion similar to:

<samlp:Response ...>
   <samlp:Status>
      <samlp:StatusCode Value="samlp:Success"/>
   </samlp:Status>
   <saml:Assertion Issuer="https://idp.com/oam/fed" ...>
      <saml:Conditions ...>
         <saml:AudienceRestriction>
            <saml:Audience>https://acme.com/sp/ssov11</saml:Audience>
         </saml:AudienceRestriction>
      </saml:Conditions>
      <saml:AuthnStatement AuthenticationInstant="2014-03-21T20:53:55Z" AuthenticationMethod="urn:oasis:names:tc:SAML:1.0:am:password">
         <saml:Subject>
            <saml:NameIdentifier ...>[email protected]</saml:NameIdentifier>
            <saml:SubjectConfirmation>
               <saml:ConfirmationMethod>
                  urn:oasis:names:tc:SAML:1.0:cm:bearer
               </saml:ConfirmationMethod>
            </saml:SubjectConfirmation>
         </saml:Subject>
      </saml:AuthnStatement>
      <dsig:Signature>
         ...
      </dsig:Signature>
   </saml:Assertion>
</samlp:Response>

OAMLDAPPluginAuthnScheme as Authentication Scheme

For this test, I will switch the default Authentication Scheme for the SP Partner to OAMLDAPPluginAuthnScheme instead of LDAPScheme.
I will use the OIF WLST setSPPartnerDefaultScheme() command and specify which scheme is to be used as the default for the SP Partner:

1. Enter the WLST environment by executing: $IAM_ORACLE_HOME/common/bin/wlst.sh
2. Connect to the WLS Admin server: connect()
3. Navigate to the Domain Runtime branch: domainRuntime()
4. Execute the setSPPartnerDefaultScheme() command: setSPPartnerDefaultScheme("AcmeSP", "OAMLDAPPluginAuthnScheme")
5. Exit the WLST environment: exit()

The user will be challenged via the FORM defined in the OAMLDAPPluginAuthnScheme for AcmeSP. Contrary to LDAPScheme, the OAMLDAPPluginAuthnScheme is not mapped by default to any Federation Authentication Method (in the SP Partner Profile). As such, OIF/IdP will not be able to find a Federation Authentication Method and will set the method in the SAML Assertion to the OAM Authentication Scheme name. After authentication via FORM, OIF/IdP would issue an Assertion similar to (see the AuthenticationMethod set to OAMLDAPPluginAuthnScheme):

<samlp:Response ...>
   <samlp:Status>
      <samlp:StatusCode Value="samlp:Success"/>
   </samlp:Status>
   <saml:Assertion Issuer="https://idp.com/oam/fed" ...>
      <saml:Conditions ...>
         <saml:AudienceRestriction>
            <saml:Audience>https://acme.com/sp/ssov11</saml:Audience>
         </saml:AudienceRestriction>
      </saml:Conditions>
      <saml:AuthnStatement AuthenticationInstant="2014-03-21T20:53:55Z" AuthenticationMethod="OAMLDAPPluginAuthnScheme">
         <saml:Subject>
            <saml:NameIdentifier ...>[email protected]</saml:NameIdentifier>
            <saml:SubjectConfirmation>
               <saml:ConfirmationMethod>
                  urn:oasis:names:tc:SAML:1.0:cm:bearer
               </saml:ConfirmationMethod>
            </saml:SubjectConfirmation>
         </saml:Subject>
      </saml:AuthnStatement>
      <dsig:Signature>
         ...
      </dsig:Signature>
   </saml:Assertion>
</samlp:Response>
Mapping OAMLDAPPluginAuthnScheme

To map the OAMLDAPPluginAuthnScheme to the Federation Authentication Method urn:oasis:names:tc:SAML:1.0:am:password for this SP Partner only, I will execute the addSPPartnerAuthnMethod() method:

1. Enter the WLST environment by executing: $IAM_ORACLE_HOME/common/bin/wlst.sh
2. Connect to the WLS Admin server: connect()
3. Navigate to the Domain Runtime branch: domainRuntime()
4. Execute the addSPPartnerAuthnMethod() command: addSPPartnerAuthnMethod("AcmeSP", "urn:oasis:names:tc:SAML:1.0:am:password", "OAMLDAPPluginAuthnScheme")
5. Exit the WLST environment: exit()

After authentication via FORM, OIF/IdP would now issue an Assertion similar to (see that the method was changed from OAMLDAPPluginAuthnScheme to password):

<samlp:Response ...>
   <samlp:Status>
      <samlp:StatusCode Value="samlp:Success"/>
   </samlp:Status>
   <saml:Assertion Issuer="https://idp.com/oam/fed" ...>
      <saml:Conditions ...>
         <saml:AudienceRestriction>
            <saml:Audience>https://acme.com/sp/ssov11</saml:Audience>
         </saml:AudienceRestriction>
      </saml:Conditions>
      <saml:AuthnStatement AuthenticationInstant="2014-03-21T20:53:55Z" AuthenticationMethod="urn:oasis:names:tc:SAML:1.0:am:password">
         <saml:Subject>
            <saml:NameIdentifier ...>[email protected]</saml:NameIdentifier>
            <saml:SubjectConfirmation>
               <saml:ConfirmationMethod>
                  urn:oasis:names:tc:SAML:1.0:cm:bearer
               </saml:ConfirmationMethod>
            </saml:SubjectConfirmation>
         </saml:Subject>
      </saml:AuthnStatement>
      <dsig:Signature>
         ...
      </dsig:Signature>
   </saml:Assertion>
</samlp:Response>

LDAPScheme as Authentication Scheme

I will now show that defining a Federation Authentication Method mapping at the Partner level causes all mappings defined at the SP Partner Profile level to be ignored.
For this test, I will switch the default Authentication Scheme for this SP Partner back to LDAPScheme, and the Assertion issued by OIF/IdP will not be able to map this LDAPScheme to a Federation Authentication Method anymore, since:

- A Federation Authentication Method mapping is defined at the SP Partner level, and thus the mappings defined at the SP Partner Profile are ignored
- The LDAPScheme is not listed in the mapping at the Partner level

I will use the OIF WLST setSPPartnerDefaultScheme() command and specify which scheme is to be used as the default for this SP Partner:

1. Enter the WLST environment by executing: $IAM_ORACLE_HOME/common/bin/wlst.sh
2. Connect to the WLS Admin server: connect()
3. Navigate to the Domain Runtime branch: domainRuntime()
4. Execute the setSPPartnerDefaultScheme() command: setSPPartnerDefaultScheme("AcmeSP", "LDAPScheme")
5. Exit the WLST environment: exit()

After authentication via FORM, OIF/IdP would issue an Assertion similar to (see the AuthenticationMethod set to LDAPScheme):

<samlp:Response ...>
   <samlp:Status>
      <samlp:StatusCode Value="samlp:Success"/>
   </samlp:Status>
   <saml:Assertion Issuer="https://idp.com/oam/fed" ...>
      <saml:Conditions ...>
         <saml:AudienceRestriction>
            <saml:Audience>https://acme.com/sp/ssov11</saml:Audience>
         </saml:AudienceRestriction>
      </saml:Conditions>
      <saml:AuthnStatement AuthenticationInstant="2014-03-21T20:53:55Z" AuthenticationMethod="LDAPScheme">
         <saml:Subject>
            <saml:NameIdentifier ...>[email protected]</saml:NameIdentifier>
            <saml:SubjectConfirmation>
               <saml:ConfirmationMethod>
                  urn:oasis:names:tc:SAML:1.0:cm:bearer
               </saml:ConfirmationMethod>
            </saml:SubjectConfirmation>
         </saml:Subject>
      </saml:AuthnStatement>
      <dsig:Signature>
         ...
      </dsig:Signature>
   </saml:Assertion>
</samlp:Response>

Mapping LDAPScheme at Partner Level

To fix this issue, we will need to add the LDAPScheme to the Federation Authentication Method urn:oasis:names:tc:SAML:1.0:am:password mapping for this SP Partner only.
I will execute the addSPPartnerAuthnMethod() method:

1. Enter the WLST environment by executing: $IAM_ORACLE_HOME/common/bin/wlst.sh
2. Connect to the WLS Admin server: connect()
3. Navigate to the Domain Runtime branch: domainRuntime()
4. Execute the addSPPartnerAuthnMethod() command: addSPPartnerAuthnMethod("AcmeSP", "urn:oasis:names:tc:SAML:1.0:am:password", "LDAPScheme")
5. Exit the WLST environment: exit()

After authentication via FORM, OIF/IdP would now issue an Assertion similar to (see that the method was changed from LDAPScheme to password):

<samlp:Response ...>
   <samlp:Status>
      <samlp:StatusCode Value="samlp:Success"/>
   </samlp:Status>
   <saml:Assertion Issuer="https://idp.com/oam/fed" ...>
      <saml:Conditions ...>
         <saml:AudienceRestriction>
            <saml:Audience>https://acme.com/sp/ssov11</saml:Audience>
         </saml:AudienceRestriction>
      </saml:Conditions>
      <saml:AuthnStatement AuthenticationInstant="2014-03-21T20:53:55Z" AuthenticationMethod="urn:oasis:names:tc:SAML:1.0:am:password">
         <saml:Subject>
            <saml:NameIdentifier ...>[email protected]</saml:NameIdentifier>
            <saml:SubjectConfirmation>
               <saml:ConfirmationMethod>
                  urn:oasis:names:tc:SAML:1.0:cm:bearer
               </saml:ConfirmationMethod>
            </saml:SubjectConfirmation>
         </saml:Subject>
      </saml:AuthnStatement>
      <dsig:Signature>
         ...
      </dsig:Signature>
   </saml:Assertion>
</samlp:Response>

OpenID 2.0

In the OpenID 2.0 flows, the RP must request the use of PAPE in order for OIF/IdP/OP to include PAPE information. For OpenID 2.0, the configuration will involve mapping a list of OpenID 2.0 policies to a list of Authentication Schemes. The WLST command will take a list of policies, delimited by the ',' character, instead of SAML 2.0 or SAML 1.1 where a single Federation Authentication Method had to be specified.

Test Setup

In this setup, OIF is acting as an IdP/OP and is integrated with a remote OpenID 2.0 SP/RP partner identified by AcmeRP. In this test, I will perform Federation SSO with OIF/IdP configured to:

- Use LDAPScheme as the Authentication Scheme
- Map LDAPScheme to the http://schemas.openid.net/pape/policies/2007/06/phishing-resistant and http://openid-policies/password-protected policies as Federation Authentication Methods (the second one is custom for this use case)

LDAPScheme as Authentication Scheme

Using the OOTB settings regarding user authentication in OAM, the user will be challenged via a FORM based login page based on the LDAPScheme.
No Federation Authentication Method is defined OOTB for OpenID 2.0, so if the IdP/OP issues an SSO response with a PAPE Response element, it will specify the scheme name instead of a Federation Authentication Method. After authentication via FORM, OIF/IdP would issue an SSO Response similar to:

https://acme.com/openid?refid=id-9PKVXZmRxAeDYcgLqPm36ClzOMA-&openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.mode=id_res&openid.op_endpoint=https%3A%2F%2Fidp.com%2Fopenid&openid.claimed_id=https%3A%2F%2Fidp.com%2Fopenid%3Fid%3Did-38iCmmlAVEXPsFjnFVKArfn5RIiF75D5doorhEgqqPM%3D&openid.identity=https%3A%2F%2Fidp.com%2Fopenid%3Fid%3Did-38iCmmlAVEXPsFjnFVKArfn5RIiF75D5doorhEgqqPM%3D&openid.return_to=https%3A%2F%2Facme.com%2Fopenid%3Frefid%3Did-9PKVXZmRxAeDYcgLqPm36ClzOMA-&openid.response_nonce=2014-03-24T19%3A20%3A06Zid-YPa2kTNNFftZkgBb460jxJGblk2g--iNwPpDI7M1&openid.assoc_handle=id-6a5S6zhAKaRwQNUnjTKROREdAGSjWodG1el4xyz3&openid.ns.ax=http%3A%2F%2Fopenid.net%2Fsrv%2Fax%2F1.0&openid.ax.mode=fetch_response&openid.ax.type.attr0=http%3A%2F%2Fsession%2Fcount&openid.ax.value.attr0=1&openid.ax.type.attr1=http%3A%2F%2Fopenid.net%2Fschema%2FnamePerson%2Ffriendly&openid.ax.value.attr1=My+name+is+Bobby+Smith&openid.ax.type.attr2=http%3A%2F%2Fschemas.openid.net%2Fax%2Fapi%2Fuser_id&openid.ax.value.attr2=bob&openid.ax.type.attr3=http%3A%2F%2Faxschema.org%2Fcontact%2Femail&openid.ax.value.attr3=bob%40oracle.com&openid.ax.type.attr4=http%3A%2F%2Fsession%2Fipaddress&openid.ax.value.attr4=10.145.120.253&openid.ns.pape=http%3A%2F%2Fspecs.openid.net%2Fextensions%2Fpape%2F1.0&openid.pape.auth_time=2014-03-24T19%3A20%3A05Z&openid.pape.auth_policies=LDAPScheme&openid.signed=op_endpoint%2Cclaimed_id%2Cidentity%2Creturn_to%2Cresponse_nonce%2Cassoc_handle%2Cns.ax%2Cax.mode%2Cax.type.attr0%2Cax.value.attr0%2Cax.type.attr1%2Cax.value.attr1%2Cax.type.attr2%2Cax.value.attr2%2Cax.type.attr3%2Cax.value.attr3%2Cax.type.attr4%2Cax.value.attr4%2Cns.pape%2Cpape.auth_time%2Cpape.auth_policies&openid.sig=mYMgbGYSs22l8e%2FDom9NRPw15u8%3D
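Note the openid.pape.auth_policies parameter in the response above, which currently carries the scheme name. Since such responses are hard to read in their URL-encoded form, one could decode the PAPE policies programmatically; here is a small, hypothetical C# sketch (the helper is my own illustration and not part of OIF):

using System;
using System.Web; // add a reference to System.Web.dll for HttpUtility

static class PapeInspector
{
    // Prints the PAPE authentication policies contained in an OpenID 2.0 response URL.
    public static void PrintAuthPolicies(string responseUrl)
    {
        var query = HttpUtility.ParseQueryString(new Uri(responseUrl).Query);

        // ParseQueryString URL-decodes the values; multiple policies are then space-separated.
        string policies = query["openid.pape.auth_policies"] ?? "(none)";
        foreach (string policy in policies.Split(' '))
        {
            Console.WriteLine(policy);
        }
    }
}

For the response above this would print just LDAPScheme; after the mapping performed below, it would print the two policy URIs instead.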
Mapping LDAPScheme

To map the LDAPScheme to the http://schemas.openid.net/pape/policies/2007/06/phishing-resistant and http://openid-policies/password-protected policies as Federation Authentication Methods, I will execute the addSPPartnerAuthnMethod() method (the policies will be comma separated):

1. Enter the WLST environment by executing: $IAM_ORACLE_HOME/common/bin/wlst.sh
2. Connect to the WLS Admin server: connect()
3. Navigate to the Domain Runtime branch: domainRuntime()
4. Execute the addSPPartnerAuthnMethod() command: addSPPartnerAuthnMethod("AcmeRP", "http://schemas.openid.net/pape/policies/2007/06/phishing-resistant,http://openid-policies/password-protected", "LDAPScheme")
5. Exit the WLST environment: exit()

After authentication via FORM, OIF/IdP would now issue an SSO Response similar to (see that the method was changed from LDAPScheme to the two policies):

https://acme.com/openid?refid=id-9PKVXZmRxAeDYcgLqPm36ClzOMA-&openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.mode=id_res&openid.op_endpoint=https%3A%2F%2Fidp.com%2Fopenid&openid.claimed_id=https%3A%2F%2Fidp.com%2Fopenid%3Fid%3Did-38iCmmlAVEXPsFjnFVKArfn5RIiF75D5doorhEgqqPM%3D&openid.identity=https%3A%2F%2Fidp.com%2Fopenid%3Fid%3Did-38iCmmlAVEXPsFjnFVKArfn5RIiF75D5doorhEgqqPM%3D&openid.return_to=https%3A%2F%2Facme.com%2Fopenid%3Frefid%3Did-9PKVXZmRxAeDYcgLqPm36ClzOMA-&openid.response_nonce=2014-03-24T19%3A20%3A06Zid-YPa2kTNNFftZkgBb460jxJGblk2g--iNwPpDI7M1&openid.assoc_handle=id-6a5S6zhAKaRwQNUnjTKROREdAGSjWodG1el4xyz3&openid.ns.ax=http%3A%2F%2Fopenid.net%2Fsrv%2Fax%2F1.0&openid.ax.mode=fetch_response&openid.ax.type.attr0=http%3A%2F%2Fsession%2Fcount&openid.ax.value.attr0=1&openid.ax.type.attr1=http%3A%2F%2Fopenid.net%2Fschema%2FnamePerson%2Ffriendly&openid.ax.value.attr1=My+name+is+Bobby+Smith&openid.ax.type.attr2=http%3A%2F%2Fschemas.openid.net%2Fax%2Fapi%2Fuser_id&openid.ax.value.attr2=bob&openid.ax.type.attr3=http%3A%2F%2Faxschema.org%2Fcontact%2Femail&openid.ax.value.attr3=bob%40oracle.com&openid.ax.type.attr4=http%3A%2F%2Fsession%2Fipaddress&openid.ax.value.attr4=10.145.120.253&openid.ns.pape=http%3A%2F%2Fspecs.openid.net%2Fextensions%2Fpape%2F1.0&openid.pape.auth_time=2014-03-24T19%3A20%3A05Z&openid.pape.auth_policies=http%3A%2F%2Fschemas.openid.net%2Fpape%2Fpolicies%2F2007%2F06%2Fphishing-resistant+http%3A%2F%2Fopenid-policies%2Fpassword-protected&openid.signed=op_endpoint%2Cclaimed_id%2Cidentity%2Creturn_to%2Cresponse_nonce%2Cassoc_handle%2Cns.ax%2Cax.mode%2Cax.type.attr0%2Cax.value.attr0%2Cax.type.attr1%2Cax.value.attr1%2Cax.type.attr2%2Cax.value.attr2%2Cax.type.attr3%2Cax.value.attr3%2Cax.type.attr4%2Cax.value.attr4%2Cns.pape%2Cpape.auth_time%2Cpape.auth_policies&openid.sig=mYMgbGYSs22l8e%2FDom9NRPw15u8%3D

In the next article, I will cover how OIF/IdP can be configured so that an SP can request a specific Federation Authentication Method to challenge the user during Federation SSO.

Cheers,
Damien Carru

    Read the article

  • CodePlex Daily Summary for Friday, April 30, 2010

CodePlex Daily Summary for Friday, April 30, 2010

New Projects
Acres2: This is the new hotness
AsoraX.de - AirlineTycoon: Development phase of AsoraX.de
azshop: Ecommerce macros and user controls based on Commerce4Umbraco from the Umbraco CMS Project.
BioPhotoAnalyzer: Some exercises with F#, WPF and mathematical methods for quantitative analysis of biological photos, such as those obtained from microscopy, PAGE etc.
CECS622 - Simulations - South Computing Center: This simulation will be used to study the traffic flow of the South Computing Center (SCC) on the UofL campus.
Document.Viewer: Basic document viewer for Windows XP, Vista and Windows 7 that supports FlowDocument, Rich and Plain Text formats
DotNetNuke 5 Thai Language Pack: DotNetNuke Thai Language Pack
DotNetNuke Skins Pack: HTML/CSS templates derived from www.freecsstemplates.org and applied as DotNetNuke 4 & 5 skin packs.
Dynamics AX Business Intelligence: Sample code for creating and using Dynamics AX Business Intelligence
Easy Chat: A very simple chat with a DOS interface. Just start the server, start the client, enter the IP and your name, and you're ready to go!
Html Source Transmitter Control: This web control obtains the source of a web page as it is displayed before submit, so a developer can store a view of the HTML page that wa...
HydroServer - CUAHSI Hydrologic Information System Server: HydroServer is a set of software applications for publishing hydrologic datasets on the Internet. HydroServer leverages commercial software such a...
Lisp.NET: Lisp.NET is a project that demonstrates how the Embedded Common Lisp (ECL) environment can be loaded and used from .NET. This enables Common Lisp ...
Live-Exchange Calendar Sync: Live-Exchange Calendar Sync synchronizes users' calendars between 2 Exchange servers (it can be Live Calendar also). It's developed in C#. It u...
MaLoRT : Mac Lovin' RayTracer: Raytracer
Mojo Workflow Web UI: Mojo Workflow Web UI
MoonRover - Enter Island: Simple graphics consumer application that simulates moon rovers
MVC Music Store: MVC Music Store is a tutorial application built on ASP.NET MVC 2. It's a lightweight sample store which sells albums online, demonstrating ASP.NET ...
OpenPOS: OpenPOS, a completely open and free point-of-sale system
PgcDemuxCLI: A modified version of PgcDemux (http://download.videohelp.com/jsoto/dvdtools.htm) that has better CLI support and progress reporting.
Sample 01: basic summary
SharedEx: App dev in VS2010 RTM and Silverlight 4.0 for Windows Phone 7!
SharePoint 2010 Managed Metadata WebPart: This SharePoint 2010 webpart creates a navigation control from a Managed Metadata column assigned to a list/library. The termset the column relates...
SharePoint 2010 Service Manager: The SharePoint Service Manager lets you start and stop all the SharePoint 2010 services on your workstation. This is useful if you are running S...
Sharp DOM: Sharp DOM is a view engine for the ASP.NET MVC platform allowing developers to design extendable and maintainable dynamic HTML layouts using C# 4.0 lang...
simple fx: simple framework written in JavaScript
SimpleGFC.NET: SimpleGFC is a .NET library written in C# for simplifying the connection to Google Friend Connect. It is designed to be simple and concise, not a...
SPSS & PSPP file library: Library for importing SPSS (now called PASW) & PSPP .sav data files, developed in C#. Support for writing and for SPSS portable files will be implemente...
TailspinSpyworks - WebForms Sample Application: Tailspin Spyworks demonstrates how extraordinarily simple it is to create powerful, scalable applications for the .NET platform. It shows off how t...
VividBed: Business website.
卢融凯的个人主页: Lu Rongkai's personal homepage

New Releases
AssemblyVerifier: 4.0.0.0 Release: Updated to target .NET 4.0
Bricks' Bane: Bricks' Bane Editor v1.0.0.4: I've added functionality to the editor, after some personal use. Plus I fixed a bug that was occurring on file loading, mostly when the file was mod...
Dambach Linear Algebra Framework: Linear Algebra Framework - Sneak Peek: This is just to get the code out there into the hands of potential users and to elicit feedback.
DevTreks - social budgeting that improves lives and livelihoods: Social Budgeting Web Software, DevTreks alpha 4a: DevTreks calculators, analyzers, and story tellers are being upgraded so that they can be extended by third parties using either add-ins (Managed ...
DirectQ: Release 1.8.3c: Fixes some more late-breaking bugs and adds some optimizations from the forthcoming 1.8.4 release. No Windows 98 build this time. I may add one a...
Document Assembly from a SharePoint Library using the Open XML SDK: Solution for the March 2010 release of the Open XML SDK: Microsoft changed some class names between the December 2009 CTP and the March 2010 release of the Open XML SDK, which means that the old solution will ...
Document.Viewer: 0.9.0: What's new: new icon set, bug fixes
DotNetNuke Skins Pack: DNN 80 Skins Pack: This release is the first for DNN 4 & 5 with Skin Token Design (legacy skin support on DNN 4 & 5)
DotNetWinService: Release 1.0.0.0: This release includes four different scheduled tasks: 1) TaskURL: petition a specific URL; when working on an ASP.NET website, it's sometimes ...
Hammock for REST: Hammock v1.0.1: v1.0.1 changes: fixes to improve OAuth stability; fixes for asynchronous timeouts; fixes and simplification for retries. v1.0 features: simple, clean...
Home Access Plus+: v4.1.1: v4.1.1 change log: added Enable for Booking Resource to allow resources to be disabled but not deleted; updated HAP.Config for the above; file changes...
HouseFly controls: HouseFly controls alpha 0.9.6.0: HouseFly controls release 0.9.6.0 alpha
HydroServer - CUAHSI Hydrologic Information System Server: ODM Data Loader 1.1.3: Version 1.1.3 of the ODM Data Loader is only compatible with version 1.1 of ODM. This version addresses memory issues in version 1.1.2 that were ...
Jobping Url Shortener: Source Code v0.1: Source code for the 0.1 release.
LinkedIn® for Windows Mobile: LinkedIn for Windows Mobile v0.6: Fixed the crash after accepting the use of internet access.
mojoPortal: 2.3.4.3: see release notes on mojoportal.com http://www.mojoportal.com/mojoportal-2343-released.aspx
MoonRover - Enter Island: Moon Rover: Simulates rover moves on the Moon
NanoPrompt: Release 0.4.1 - Beta 1.1: Now released: Beta 1.1 for .NET 4.0
ORAYLIS BI.SmartDiff: Release 1.0.0: see change log for details
QuickStart Engine (3D Game Engine for XNA): QuickStart Engine v0.22: Main features: clean engine architecture makes it easy to make your own game using the engine; messaging system allows you to communicate between s...
SharePoint 2010 Managed Metadata WebPart: Taxonomy WebPart 0.0.1: Initial version released
SharePoint 2010 Service Manager: SharePoint Service Manager 2010 v1: Starts and stops all SharePoint services; works with both Microsoft SQL Server and SQL Server Express Edition (for SharePoint); supports disablin...
Shweet: SharePoint 2010 Team Messaging built with Pex: Shweet RTM Release: Shweet has been updated to work with the RTM of SharePoint and the latest version of Pex Visual Studio 2010 Power Tools. Known issues: continuous I...
Silverlight Testing Automation Tool: StatLight v1.0 - Beta 1: Things still to accomplish (before the official v1 release): Item #10716: release build hangs on x64 machine; still awaiting UnitDriven change feedback...
SPSS & PSPP file library: 0.1 alpha: This first test release allows reading an .sav SPSS/PSPP/PASW file and retrieving the data, either in the form of a .NET DataReader or as a custom Sp...
SqlDiffFramework - A Visual Differencing Engine for Dissimilar Data Sources: SqlDiffFramework 1.0.0.0: Initial public release. Requirements: .NET Framework 2.0. SqlDiffFramework is designed for the enterprise! Check with your system administrator (or...
sqwarea: Sqwarea 0.0.210.0 (alpha): First public release. The deploy package contains: the documentation for classes; the Azure package with the Web Role; the zip with assemblies wh...
Sweeper: Sweeper Alpha 2: Sweeper, a Visual Studio add-in for C# code formatting (Visual Studio 2008). Includes a UI for options; enable or disable any specific task you want ...
TailspinSpyworks - WebForms Sample Application: TailspinSpyworks-v0.8: ASP.NET 4 ecommerce application
TweetSharp: TweetSharp Release Candidate: New features: complete core rewrite for stability and speed, introducing Hammock http://hammock.codeplex.com; Windows Phone 7, Silverlight 4 OOB (El...
Wicked Compression ASP.NET HTTP Module: Wicked Compression ASP.NET HTTP Module Beta: Now supports AJAX from .NET Framework 2.0 through 4.0! Bug fix for excluded paths! Binary release included for .NET Framework 2.0 through 4.0!

Most Popular Projects
Rawr
WBFS Manager
AJAX Control Toolkit
patterns & practices – Enterprise Library
Microsoft SQL Server Product Samples: Database
Silverlight Toolkit
Windows Presentation Foundation (WPF)
iTuner - The iTunes Companion
ASP.NET
DotNetNuke® Community Edition

Most Active Projects
Rawr
patterns & practices – Enterprise Library
Ionics Isapi Rewrite Filter
HydroServer - CUAHSI Hydrologic Information System Server
GMap.NET - Great Maps for Windows Forms & Presentation
Particle Plot Pivot
patterns & practices: Azure Security Guidance
SqlDiffFramework - A Visual Differencing Engine for Dissimilar Data Sources
NB_Store - Free DotNetNuke Ecommerce Catalog Module
N2 CMS

    Read the article

  • CodePlex Daily Summary for Saturday, March 10, 2012

CodePlex Daily Summary for Saturday, March 10, 2012

Popular Releases
Player Framework by Microsoft: Player Framework for Windows 8 Metro (BETA): Player Framework for HTML/JavaScript and XAML/C# Metro style applications.
WPF Application Framework (WAF): WAF for .NET 4.5 (Experimental): Version: 2.5.0.440 (Experimental): This is an experimental release! It can be used to investigate the new .NET Framework 4.5 features. The ideas shown in this release might come in a future release (after 2.5) of the WPF Application Framework (WAF). More information can be found in this discussion post. Requirements: .NET Framework 4.5 (the package contains a solution file for Visual Studio 11); the unit test projects require Visual Studio 11 Professional. Changelog: All: upgrade all proje...
SSH.NET Library: 2012.3.9: There are still a few outstanding issues I wanted to include in this release, but since it has been a while and there are a few new features already, I decided to create a new release now. New features: SOCKS4, SOCKS5 and HTTP proxy support when connecting to a remote server (for Silverlight, only an IP address can be used for the server address when using a proxy); dynamic port forwarding support using the ForwardedPortDynamic class; a new ShellStream class to work with the SSH shell; support for mu...
Test Case Import Utilities for Visual Studio 2010 and Visual Studio 11 Beta: V1.2 RTM: This release (V1.2 RTM) includes: support for connecting to Hosted Team Foundation Server Preview; support for connecting to Team Foundation Server 11 Beta; a fix for the read-only attribute being set on LinksMapping-ReportFile, which may have led to problems when saving the report file; a fix for "related links" not being set properly in certain conditions; a fix to ensure that the tool works fine when the Excel file contains rich text data. Note: data is still imported in pl...
Audio Pitch & Shift: Audio Pitch And Shift 3.5.0: Modules (mod, xm, it, etc.) support
callisto: callisto 2.0.19: BUG FIX: Autorun.load() function in scripting now has a sandboxed path (thanks Mikey!). BUG FIX: UserObject.Name property now allows full 20-byte string replacements. FEATURE REQUEST: File.* script functions now allow file extensions.
EntitiesToDTOs - Entity Framework DTO Generator: EntitiesToDTOs.v2.1: Changelog: fixed template file access issue on Win7; fix on configuration load when the target project was not found and "Use project default namespace" was checked; minor fix on loading the latest configuration at startup; minor fix in the VisualStudioHelper class; DTOs' property accessors are now on one line; improvements in PropertyHelper for cleaner and more performant code; added Website project type as a not-supported project type; using the Error List pane from the VS IDE to show Enti...
DotNetNuke® Community Edition CMS: 06.01.04: Major highlights: fixed issue with loading the splash page skin in the login, privacy and terms of use pages; fixed issue when searching for words with special characters in them; fixed redirection issue when the user does not have permissions to access a resource; fixed issue when clearing the cache using the ClearHostCache() function; fixed issue when displaying the site structure in the link-to-page feature; fixed issue when inline editing the title of modules; fixed issue with ...
Mayhem: Mayhem Developer Preview: This is the developer preview of Mayhem. Enjoy!
Team Foundation Server Process Template Customization Guidance: v1 - For Visual Studio 11: Welcome to the BETA release of the Team Foundation Server Process Template Customization preview. As this is a BETA release and the quality bar for the final release has not been achieved, we value your candid feedback and recommend that you do not use or deploy these BETA artifacts in a production environment. Quality-bar details: documentation has been reviewed by Visual Studio ALM Rangers; documentation has not been through an independent technical review; documentation has not been rev...
Magelia WebStore Open-source Ecommerce software: Magelia WebStore 1.2: Medium-trust compliant: lots of small changes for medium-trust compliance; full refactoring of user management; refactoring of Client; Magelia.WebStore.Client no longer references Magelia.WebStore.Services.Contract. Refactoring of the category page: multi-parent category added; copy category feature added. Refactoring of the catalog page: copy catalog feature added; variant management improvements; ability to define a default variant for a variable product; ability to ord...
PDFsharp - A .NET library for processing PDF: PDFsharp and MigraDoc Foundation 1.32: PDFsharp and MigraDoc Foundation 1.32 is a stable version that fixes a few bugs that were found with version 1.31. Version 1.32 includes solutions for Visual Studio 2010 only (but it should be possible to add the project files to existing solutions for VS 2005 or VS 2008). Users of VS 2005 or VS 2008 can still download version 1.31 with the solutions for those versions that allow them to easily try the samples that are included. While it may create smaller PDF files than version 1.30 because...
Terminals: Version 2.0 - Release: Changes since version 1.9a: new artwork; new usability in the Organize Favorites window; improved usability of imports/exports and scans; a large number of fixes; improvements in single-instance mode. Compared to the November beta 4, this corrects: new application icons; doesn't show logon error codes; fixed command-line arguments exception for single-instance mode; fixed detaching of tabs and improved usability in the detached window; fixed option settings for Capture Manager; fixed system tray noti...
MFCMAPI: March 2012 Release: Build: 15.0.0.1032. Full release notes at SGriffin's blog. If you just want to run MFCMAPI or MrMAPI, get the executables. If you want to debug them, get the symbol files and the source. The 64-bit builds will only work on a machine with Outlook 2010 64-bit installed. All other machines should use the 32-bit builds, regardless of the operating system. Facebook badge
TortoiseHg: TortoiseHg 2.3.1: bugfix release
Simple Injector: Simple Injector v1.4.1: This release adds two small improvements to SimpleInjector.Extensions.dll. No changes have been made to the core library. New features and improvements in this release for SimpleInjector.Extensions.dll: the RegisterManyForOpenGeneric extension methods now accept non-generic decorators, as long as they implement the given open generic service type; GetTypesToRegister methods were added to the OpenGenericBatchRegistrationExtensions class, which allows customizing the behavior. Note that the...
CommonLibrary: Code: Code
VidCoder: 1.3.1: Updated HandBrake core to the 0.9.6 release (svn 4472). Removed the erroneous "None" container choice. Changed some logic and help text to stop assuming you have to pick the VIDEO_TS folder for a DVD scan; this should make previewing DVD titles in the Queue Multiple Titles window possible when you've picked the root DVD directory.
Google Books Downloader for Windows: Google Books Downloader: Google Books Downloader 1.8
ExtAspNet: ExtAspNet v3.1.0: ExtAspNet is a professional ASP.NET 2.0 control library based on ExtJS with native AJAX support, aiming to let developers build web applications without writing JavaScript or CSS and without UpdatePanel, ViewState or WebServices. Supported browsers: IE 7.0, Firefox 3.6, Chrome 3.0, Opera 10.5, Safari 3.0+. License: Apache License 2.0 (Apache). Site: http://extasp.net/ Forum: http://bbs.extasp.net/ Project: http://extaspnet.codeplex.com/ Blog: http://sanshi.cnblogs.com/ Changelog: +2012-03-04 v3.1.0 ...

New Projects
Ares Backup: Ares Backup is backup software which can save byte diffs and provides several storage plugins.
BackItUp: Backup tool for Visual Studio projects
binbin domain: binbin domain
Blexus Service Plattform: Some cool stuff about WCF services: can communicate files; can communicate XAML objects (dynamically generated GUI)
CardPlay - a Solitaire Framework for .Net: CardPlay is a C# framework for developing solitaire card games. The solution includes a sample WPF client along with over 100 games.
Cloud Files Upload: Windows application to script cloud file uploads.
Code First API Library, Scaffolding & Guidance for Coded UI Tests: Code-first Coded UI Tests for web apps. Library, scaffolding and guidance.
CPEBook by FMUG & TPAY: CPEBook by MUG & TPAY, a .NET project: CPEBook
Dot Net Application String Resources Viewer: Dot Net Application String Resources Viewer
Filter for SharePoint Web Settings Page: This solution shows a simple way to integrate a filter box by using jQuery, a global farm feature and a simple delegate control for AdditionalPageHead.
Google Books Downloader for Windows: Save Google books in PDF, JPEG or PNG format.
GUIToolkit: C++ windowless GUI, DirectUI
Harvest Sports: Harvest Sports
InfoPath Analyzer: InfoPath Analyzer makes InfoPath form development and troubleshooting much easier: it is easy to find the relationships between controls and data fields, to search data fields or controls by name, and to edit InfoPath inner HTML directly.
Kinectsignlanguage project: This project will help Kinect be used for sign language to speech, so that sign language users can be understood while talking to important people.
LotteryVote
manager123: Trying out CodePlex
McRegister: McRegister is an ASP.NET MVC 3 Razor website that enables you to register users on your Minecraft server; it works in conjunction with a Minecraft mod called EasyAuth.
MetroTipi: HelloTipi under the Windows 8 Metro interface
Microsoft AppFactory: AppFactory is a powerful data-driven build system for Windows Phone (and soon Windows 8) projects. Its purpose is to help developers start with template projects and turn them into suites of applications.
MiniStock: MiniStock is an experimental reference architecture for scalable cloud-based architectures. Implemented in .NET.
online book shopping: online book shopping
Scopa: Scopa for WP7 XNA, by Carousel Team
testtom03092012tfs01: testtom03092012tfs01
UsingLib: A library of automatically removed utilities: 1. changing the cursor to an hourglass in Windows Forms; 2. logging. It's developed in C#.
WHMCS Library: WHMCS is an all-in-one client management, billing & support solution for online businesses. Handling everything from signup to termination, WHMCS is a powerful business automation tool that puts you firmly in control. The WHMCS Library is a .NET wrapper for the WHMCS API. Written entirely in C# but really easy to port over to VB.NET. Coming from a VB.NET background, we tried hard to make sure porting would be simple for VB.NET community members.
ZoneEditService: Windows service to update ZoneEdit for dynamic DNS.

    Read the article

  • Why should you choose Oracle WebLogic 12c instead of JBoss EAP 6?

    - by Ricardo Ferreira
In this post, I will cover some technical differences between Oracle WebLogic 12c and JBoss EAP 6, which Red Hat released a couple of days ago. This article aims to help you evaluate the key points you should consider when choosing a Java EE application server. In the following sections, I will present some important aspects that most customers ask us about when they are seriously evaluating a middleware infrastructure, especially if they are considering JBoss for some reason. I suggest you keep the following question in mind while reading these points: "Why should I choose JBoss instead of WebLogic?"

1) Multi-Datacenter Deployment and Clustering

- D/R ("Disaster & Recovery") architecture support is embedded in the WebLogic Server 12c product. JBoss EAP 6, on the other hand, has no direct D/R support included; Red Hat relies on third-party tools at higher prices. When you consider a middleware solution to host your business-critical applications, you should worry about every architectural aspect related to the solution. Fail-over support is only one small aspect of a truly reliable solution; if you do not plan for D/R, your solution will not be reliable. Having said that, with Red Hat and JBoss EAP 6 you carry this extra cost, which considerably increases the total cost of ownership of the solution. As we commonly hear from analysts, open source is not so cheap once you look at the big picture.

- WebLogic Server 12c supports advanced LAN clustering, detection of dead servers, and a common alert framework. JBoss EAP 6, on the other hand, has limited LAN clustering support with no dead-server detection. It generates no alerts when servers go down (only if you buy JBoss ON, which is a separate technology that, as of today, does not support JBoss EAP 6), and manual intervention is required when servers fail. In most cases, administrators must rely on "kill -9", "tail -f someFile.log" and "ps ax | grep java" commands to manage failures and clustering anomalies.

- WebLogic Server 12c supports the concept of a Node Manager, a separate process that runs on the physical or virtual servers and extends administration of the cluster to WebLogic managed servers that are often distributed across multiple machines and geographic locations. JBoss EAP 6, on the other hand, has no equivalent technology: whole server instances must be managed individually.

- The WebLogic Server 12c Node Manager supports Coherence to boost performance when managing servers. JBoss EAP 6 has no similar technology; there is no way to coordinate JBoss instances using high-throughput, low-latency protocols like InfiniBand. Node Manager also enables another very important feature that JBoss EAP lacks: secure administration. When using WebLogic Node Manager, all administration tasks are sent to the managed servers through a secure tunnel protected by a certificate, which means that the transport layer between the WebLogic administration console and the managed servers is secured by SSL.

- WebLogic Server 12c is now integrated with OTD ("Oracle Traffic Director"), a web server technology derived from the former Sun iPlanet Web Server. This software complements the web server support offered by OHS ("Oracle HTTP Server"). Using OTD, WebLogic instances are load-balanced by high-powered software that knows how to handle SDP ("Sockets Direct Protocol") over InfiniBand, which boosts performance when used with engineered-systems technologies like Oracle Exalogic Elastic Cloud. JBoss EAP 6, on the other hand, only offers support for the Apache Web Server with custom modules created to deal with JBoss clusters, and only across standard TCP/IP networks.
2) Application and Runtime Diagnostics

- WebLogic Server 12c has diagnostics capabilities embedded in the server, called WLDF ("WebLogic Diagnostic Framework"), so there is no need to rely on third-party tools. JBoss EAP 6, on the other hand, has no diagnostics capabilities: its only diagnostics tool is the log generated by the application server, and administrators are encouraged to analyze thousands of log lines to find out what is going on.

- WebLogic Server 12c complements WLDF with JRockit MC ("Mission Control"), which gives administrators and developers complete insight into JVM performance, behavior and possible bottlenecks. WebLogic Server 12c also embeds a classloader analysis tool, and even a log analyzer that enables administrators and developers to view the logs of multiple servers at the same time. JBoss EAP 6, on the other hand, relies on third-party tools to do anything similar; again, only log searching is offered to find out what is going on.

- WebLogic Server 12c offers end-to-end traceability and monitoring through Oracle EM ("Enterprise Manager"), including monitoring of business transactions that flow through web servers, ESBs, application servers and database servers, all with deep JVM analysis and diagnostics. JBoss EAP 6, even with JBoss ON ("Operations Network"), which is a separate technology, does not support those features. Red Hat relies on third-party tools to provide direct Oracle database traceability across JVMs; one of those tools is Oracle EM for non-Oracle middleware, which manages JBoss, Tomcat, WebSphere and IIS transparently.

- WebLogic Server 12c, through its JRockit support, offers a tool called JRockit Flight Recorder, which gives developers complete visibility into a given period of production monitoring with zero extra overhead. This always-on recording lets you deeply analyze thread latency, memory leaks, thread contention, resource utilization, stack overflow damage and GC ("Garbage Collection") cycles; observe stop-the-world phenomena in real time; and analyze generational, reference-count and parallel collections as well as mutator threads. JBoss EAP 6 does not even dream of supporting something similar, if only because Red Hat does not have its own JVM. (The kind of JVM runtime data these tools surface is sketched in the JMX example below.)
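For readers who have never used these tools: the data they collect is JVM-level runtime data of the sort any Java process exposes through JMX. A minimal, product-neutral sketch against the local JVM's platform MXBeans follows; WLDF and Mission Control collect far richer data than this, continuously and with correlation, but it illustrates what "runtime diagnostics" means beyond log grepping.

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class JvmSnapshot {
    public static void main(String[] args) {
        // Heap usage, as a memory profiler or WLDF image would report it
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        System.out.println("Heap used: "
                + memory.getHeapMemoryUsage().getUsed() / (1024 * 1024) + " MB");

        // Deadlock detection, one of the checks diagnostic frameworks automate
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        long[] deadlocked = threads.findDeadlockedThreads();
        if (deadlocked != null) {
            for (ThreadInfo info : threads.getThreadInfo(deadlocked)) {
                System.out.println("Deadlocked: " + info.getThreadName());
            }
        }
    }
}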
3) Application Server Administration

- WebLogic Server 12c offers a complete administration console complemented with scripting and macro-like recording capabilities. A single WebLogic console can manage up to hundreds of WebLogic servers belonging to the same domain. JBoss EAP 6, on the other hand, has a limited console and provides an XML-centric administration model. JBoss, after ten years, has only just started developing a rudimentary centralized administration that still leaves many administration tasks aside, so administrators and developers must touch scripts and XML configuration files for most advanced, and even simple, administration tasks. This leads applications to error-prone and risky deployments. Even using JBoss ON, JBoss EAP is unable to offer decent administration features, so administrators must be highly skilled in JBoss internal architecture and its management capabilities.

- Oracle EM is available to manage multiple domains, databases, application servers, operating systems and virtualization, with complete end-to-end visibility. JBoss ON does not provide management capabilities across the complete architecture, only basic monitoring. Even deployment must be done outside JBoss ON, which does not integrate well with software other than JBoss. As of today, JBoss ON does not support JBoss EAP 6, so even this minimal support is unavailable for JBoss EAP 6, leaving customers uncovered and dependent on highly skilled JBoss administrators.

- WebLogic Server 12c has the same administration model whatever topology the customer selects. JBoss EAP 6, on the other hand, differentiates between two operational models, standalone mode and domain mode, which are not consistent with each other: depending on the mode used, the administration skills required are different.

- WebLogic Server 12c has no point-of-failure processes and does not require any specialized server to be defined. The domain model in WebLogic has been available for years (at least ten years or more) and is production proven. JBoss EAP 6, on the other hand, needs special processes to guarantee JBoss integrity: the PC ("Process Controller") and the HC ("Host Controller"). Unlike WebLogic's, the domain model in JBoss is quite new (a year old at most) and needs to mature considerably before it can do what the WebLogic domain model does.

- WebLogic Server 12c supports a parallel deployment model, which enables several artifacts to be deployed at the same time. JBoss EAP 6, on the other hand, has no similar feature: every deployment is done atomically in the containers. This means that if you deploy a huge EAR (say, 120 MB) onto JBoss EAP 6, the EAR will take several minutes before it starts accepting requests. The same EAR deployed onto WebLogic Server 12c will cut the deployment time at least in half compared to JBoss.

4) Support and Upgrades

- WebLogic Server 12c has patch management available. JBoss EAP 6, on the other hand, has no patch management: each JBoss EAP instance must be patched manually. To get such a feature you need to buy a separate technology called JBoss ON ("Operations Network") that manages this type of task. But as of today, JBoss ON does not support JBoss EAP 6, so in practice JBoss EAP 6 does not have this feature.

- WebLogic Server 12c supports previous WebLogic domains without any reconfiguration, since its kernel has been robust and mature since its creation in 1995. JBoss EAP 6, on the other hand, has a proven lack of compatibility between JBoss AS 4, 5, 6 and 7. Different kernels and messaging engines have been introduced into the JBoss stack over the last five years, revealing an inability to build a well-architected and proven middleware technology.

- WebLogic Server 12c has patch prescription based on the customer's configuration. JBoss EAP 6 has no such capability: people need to open support tickets and have their installations reviewed by Red Hat support staff to get any patch prescription.

- Oracle WebLogic Server, regardless of version, has 8 years of support for new patches, with lifetime availability of existing patches beyond that. JBoss EAP 6, on the other hand, provides patches for a specific application server version for up to 5 years after the release date; JBoss EAP 4 and previous versions had only 4 years. A good question for Red Hat to answer is: "What happens when you find issues after year 5?"
5) RAC ("Real Application Clusters") Support

- WebLogic Server 12c ships with a specific JDBC driver to leverage Oracle RAC clustering capabilities (Fast Application Notification, Transaction Affinity, Fast Connection Failover, etc.); the Oracle JDBC thin driver is also available. JBoss EAP 6, on the other hand, ships only the standard Oracle JDBC thin driver. Load balancing with Oracle RAC is not supported, manual intervention is necessary in case of planned or unplanned RAC downtime, and in JBoss EAP 6 the situation does not re-establish itself automatically after downtime.

- WebLogic Server 12c has a feature called Active GridLink for Oracle RAC, which provides up to 3X performance for OLTP applications. This seamless integration between WebLogic and the Oracle database adds value to critical business applications, leveraging investments in Oracle database technology and Oracle middleware. JBoss EAP 6, on the other hand, shows no such performance gains, even when administrators apply connection-pool tuning.

- WebLogic Server 12c also supports transaction and web-session affinity to Oracle RAC, which provides additional performance gains. This is particularly interesting if you are creating a reliable solution distributed not only across a LAN cluster but across different data centers. JBoss EAP 6 has no such support.

6) Standards and Technology Support

- WebLogic Server 12c has been fully Java EE 6 compatible and production ready since December 2011. JBoss EAP 6, on the other hand, became fully compatible with Java EE 6 only in the community version three months later, and production ready only a few days ago, considering that this article was written in June 2012. Red Hat says they are the masters of innovation and technology proliferation, but compared with Oracle, and even with other proprietary vendors like IBM, they have historically been slow to deliver the newest technologies and standards adherence.

- Oracle is the steward of Java, driving innovation into the platform from commercial and open-source vendors. Red Hat, on the other hand, does not have its own JVM and relies on third-party JVMs to complete its application server offering. 95% of Red Hat customers use Oracle HotSpot as their JVM, which means that without Oracle's involvement their support is limited exclusively to the application server layer, and we all know that most problems happen in the JVM layer.

- WebLogic Server 12c natively supports JDK 7, which empowers developers to exploit the full productivity of the Java platform when writing code. This differentiates WebLogic from other application servers (except GlassFish, which is also managed by Oracle), because JDK 7 introduces such remarkable productivity features as the try-with-resources enhancement, catching multiple exceptions in one catch block, Strings in switch statements, JVM improvements in JDBC, I/O, networking, security and concurrency, and of course the most important feature of Java 7: native support for multiple non-Java languages. More features regarding JDK 7 can be found here; a few of the language features are shown in the sketch below. JBoss EAP 6, on the other hand, does not support JDK 7 officially; the community version only comments that "Java SE 7 can be used with JBoss 7", which gives you no guarantee of enterprise support for JDK 7.
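For illustration, here are a few of the JDK 7 language features mentioned above in one self-contained class; this is plain Java SE, nothing WebLogic-specific:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class Jdk7Features {
    static String firstLine(String path, String mode) {
        switch (mode) {                               // Strings in switch
            case "strict":
            case "lenient":
                break;
            default:
                throw new IllegalArgumentException(mode);
        }
        // try-with-resources: the reader is closed automatically
        try (BufferedReader in = new BufferedReader(new FileReader(path))) {
            return in.readLine();
        } catch (IOException | RuntimeException e) {  // multi-catch
            return null;
        }
    }
}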
- Oracle WebLogic Server 12c supports integration with the Spring framework, allowing Spring applications to use WebLogic's transaction manager and exposing bean interfaces as WebLogic MBeans to take advantage of all of WebLogic's monitoring and administration capabilities (a minimal configuration sketch follows below). JBoss EAP 6, on the other hand, has no special integration with Spring. In fact, Red Hat offers a suspicious package called "JBoss Web Platform" that in theory supports Spring, but in practice this package does not offer any special integration; it is just a facility for Red Hat customers to get support for both JBoss and Spring technology through the same customer support channel.
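As a concrete illustration of the Spring side of this integration, here is a minimal sketch that delegates Spring-managed transactions to WebLogic's JTA implementation using Spring's own WebLogic-aware strategy class, org.springframework.transaction.jta.WebLogicJtaTransactionManager; the rest of the application configuration (data sources, repositories) is assumed:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.annotation.EnableTransactionManagement;
import org.springframework.transaction.jta.WebLogicJtaTransactionManager;

@Configuration
@EnableTransactionManagement
public class TxConfig {
    // Spring @Transactional methods now enlist in WebLogic-managed
    // JTA transactions, including WebLogic-specific extensions such
    // as per-transaction names and isolation levels.
    @Bean
    public PlatformTransactionManager transactionManager() {
        return new WebLogicJtaTransactionManager();
    }
}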
7) Lightweight Development

- Oracle WebLogic Server 12c and Oracle GlassFish are completely integrated and can share applications without any modifications. Starting with the 12c version, WebLogic natively understands GlassFish deployment descriptors and specific configurations, in order to offer a truly reliable migration path from a community Java EE application server to an enterprise middleware product like WebLogic. JBoss EAP 6, on the other hand, has no support for natively reusing an existing application (or one still in development) from the JBoss AS community server. JBoss users suffer critical issues at deployment time, including: changing the libraries and dependencies of the application; patching the DTD or XSD deployment descriptors; refactoring application layers due to classloading issues and anomalies; and rebuilding persistence, business and web layers due to issues with "the certified version of a certain dependency" or "frameworks that Red Hat potentially does not recommend". If your enterprise IT culture or directive is to develop Java EE applications on community middleware and, at some future point, transition to vendor-supported enterprise middleware, Oracle WebLogic plus Oracle GlassFish offers you a more sustainable solution.

- WebLogic Server 12c has a very light ZIP distribution (less than 165 MB). The JBoss EAP 6 ZIP is around 130 MB, but together with JBoss ON you add another 100 MB, resulting in a larger download footprint. This is particularly interesting if you plan to automate the setup of application server instances (for example, to rapidly set up a development or staging environment) using Maven or Hudson.

- WebLogic Server 12c has complete Maven integration, allowing developers to set up WebLogic domains with a few commands; tasks like downloading WebLogic, installation, domain creation and data source deployment are completely integrated. JBoss EAP 6, on the other hand, offers only limited integration with those tools.

- WebLogic Server 12c has a startup mode called WLX that turns off the EJB, JMS and JCA containers, leaving only the web container with the Java EE 6 web profile enabled. JBoss EAP 6 has no such feature: you need to manually disable the containers you do not want to use.

- WebLogic Server 12c supports FastSwap, which enables you to change classes without redeployment. This is particularly interesting if you are developing patches for an application that is already deployed and do not want to redeploy the entire application. This is the same behavior most application servers offer for JSP pages, but with WebLogic Server 12c you have the same feature for Java classes in general. JBoss EAP 6 has no such support; even JBoss EAP 5 does not support this today.

8) JMS and Messaging

- WebLogic Server 12c has had a proven, highly scalable JMS implementation since its initial release in 1995. JBoss EAP 6, on the other hand, has a still-immature technology called HornetQ, introduced in JBoss EAP 5 to replace everything implemented in the previous versions. Red Hat loves to introduce new technologies across JBoss versions, playing with customers and their investments, and when asked why they changed the implementation and caused such a mess, their answer is always: "the previous implementation was inadequate and not aligned with the community strategy, so we are creating a new and improved one". This practice leads to uncomfortable investments that in the near future (sometimes less than a year) will be affected in some way.

- WebLogic Server 12c includes JMS troubleshooting and monitoring features in the WebLogic console and WLDF. JBoss EAP 6 has no direct monitoring in the console; activity is reflected only in the logs, and no debug logs are available in case of JMS issues.

- WebLogic Server 12c has extremely good performance and scalability. JBoss EAP 6, on the other hand, has a JMS storage mechanism that relies on an Oracle database or MySQL. This means that if an issue happens in production and Red Hat claims that a performance problem is caused by the database, they will not support you on the performance issue; they will tell you to call Oracle instead.

- WebLogic Server 12c supports enterprise messaging features like SAF ("Store and Forward"), distributed queues/topics, and Foreign JMS provider support, which leverage JMS implementations without compromising developer code, keeping things completely transparent. JBoss EAP 6 does not even dream of supporting such features. (The point of that transparency is that application code stays plain JMS, as in the sketch below.)
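To illustrate that transparency, the application side of such a setup is plain JMS code resolved through JNDI. In the sketch below, the JNDI names "jms/MyCF" and "jms/MyQueue" are hypothetical placeholders for whatever you configure on the server, while the initial-context factory and the t3:// URL are the WebLogic-style connection settings:

import java.util.Hashtable;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.Context;
import javax.naming.InitialContext;

public class JmsSend {
    public static void main(String[] args) throws Exception {
        // WebLogic-style JNDI bootstrap; swap these two entries and the
        // same code runs against any other JMS provider
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        env.put(Context.PROVIDER_URL, "t3://localhost:7001");
        Context ctx = new InitialContext(env);

        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/MyCF");
        Queue queue = (Queue) ctx.lookup("jms/MyQueue");

        Connection con = cf.createConnection();
        try {
            Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            producer.send(session.createTextMessage("hello"));
        } finally {
            con.close();
        }
    }
}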
9) Caching and Grid

- Coherence, the leading and most mature data grid technology from Oracle, has been available since the early 2000s and was integrated with WebLogic in 2009. Coherence and WebLogic clusters can both be managed from the WebLogic administration console; even Node Manager supports Coherence. JBoss, on the other hand, discontinued JBoss Cache, their caching implementation, just as they did with their messaging implementation (JBossMQ), which was a problem for long-term customers. JBoss EAP 6 ships Infinispan version 1.0, which is immature and lacks a proven record of successful cases and reliability.

- WebLogic Server 12c has a feature called ActiveCache, which uses Coherence to replicate HTTP sessions, without any code changes, from both WebLogic and other application servers like JBoss, Tomcat, WebSphere, GlassFish and even Microsoft IIS. JBoss EAP 6 has no such support, and even if it gains something similar in the future, it will probably support only JBoss itself.

- Coherence can be used to manage both L1 and L2 cache levels, supporting Oracle TopLink and other JPA-compliant implementations, even Hibernate (basic Coherence usage is sketched below). JBoss EAP 6 with Infinispan, on the other hand, supports only Hibernate. And most important of all: Infinispan has no successful reference cases of L1 or L2 cache support with Hibernate, which leads us to question its viability.
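For readers new to Coherence, basic usage really is this small. A minimal sketch, assuming a cache named "prices" is defined in the cache configuration; the cache topology (replicated, distributed, near) comes from that configuration, not from the code:

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class CoherenceExample {
    public static void main(String[] args) {
        // Join the cluster (or start a local one) and obtain the named cache
        NamedCache cache = CacheFactory.getCache("prices");

        // NamedCache extends java.util.Map, so it is used like a map
        cache.put("ORCL", 42.0);
        System.out.println("ORCL = " + cache.get("ORCL"));

        // Leave the cluster cleanly
        CacheFactory.shutdown();
    }
}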
10) Performance

- WebLogic Server 12c is certified with Oracle Exalogic Elastic Cloud and can run unchanged applications on this engineered system. Customers benefit from Exalogic optimizations in both the kernel and JVM layers that boost performance by up to 10X for web, OLTP, JMS and grid applications. JBoss EAP 6, on the other hand, has no investment in engineered systems: customers do not have the option of deploying on an ultra-fast Java system when their project grows and performance issues are detected.

- WebLogic Server 12c has maintained a performance gain with each new release: starting from WebLogic 5.1, the overall performance gain has been close to 4X, roughly a 20% gain release over release. JBoss, on the other hand, does not publish SPECjAppServer or SPECjEnterprise performance benchmarks; their so-called "performance gains" remain hidden in their customers' environments, which makes us wonder whether they are real, since we will never get access to those environments.

- WebLogic Server 12c has industry performance benchmarks, with submissions across platforms and configurations leading SPECj. Oracle WebLogic leads SPECjAppServer performance in multiple categories, fitting all customer topologies: dual-node, single-node, multi-node, and multi-node with RAC. JBoss, again, does not provide any SPECjAppServer performance benchmarks.

- WebLogic Server 12c has a feature called work managers, which allows your application to reach new performance levels based on critical resource utilization of the CPUs. Work managers prioritize work and allocate threads based on an execution model that takes into account administrator-defined parameters as well as actual run-time performance and throughput. JBoss EAP 6 has no comparable feature and probably never will. Lacking something like work managers, JBoss EAP 6 forces administrators, and especially developers, to chase performance gains intrusively, rewriting code and doing performance refactorings.

11) Professional Services Support

- With WebLogic Server 12c, as with any other technology sold by Oracle, customers can hire OCS ("Oracle Consulting Services") to manage critical scenarios, assist in deploying new applications, and provide highly skilled consulting on architecture and best practices, with people allocated alongside customer teams. All OCS services are available without restriction, whether the customer has already bought software from Oracle or is just starting an implementation before any acquisition. JBoss EAP 6, or Red Hat to be more specific, only offers professional services if you buy subscriptions from them. If you are developing a new business-critical application and need Red Hat's help with a serious issue or an architecture decision, they will probably say: "OK, I can help you, but first buy subscriptions from me." Red Hat also does not allow its professional services consultants to manage environments that use community-based software; they will likely force you to first buy a subscription, download their "enterprise" version, and then, optionally, hire their consultants.

- Oracle also provides Oracle University to educate your team in our technologies, including, of course, specialized WebLogic application server training. At any time and location you can hire Oracle to train your team, so you get trustworthy knowledge tailored to your specific needs. Certifications for the products are also available if your technical people wish to differentiate themselves as professionals. Red Hat, on the other hand, has a limited pool of resources to train your team in their technologies. They essentially sell training and certification for RHEL ("Red Hat Enterprise Linux"), but if you want more specialized training in JBoss middleware, they will probably connect you to some "certified" partner for localized training, since they are apparently discontinuing their education center, at least here in Brazil. They have not been able to reproduce the success of their RHEL education in the middleware division, since they need to sell the subscriptions first before giving you specialized training. And again, they only offer specialized training based on their enterprise version (EAP, in the case of JBoss), which means the courses are quite outdated. There are reports of developers who took official Red Hat training this year (2012) in which, in an advanced JBoss course, Red Hat supposedly covered JBossMQ as the messaging subsystem, and even the printed material provided was based on JBossMQ, since the training had been created for JBoss EAP 4.3.

12) Encouraging Transparency without Ulterior Motives

- WebLogic Server 12c, like any other software from Oracle, can be downloaded at any time from anywhere; you only need an OTN ("Oracle Technology Network") credential, and you can download any enterprise software as many times as you want. And it is not some kind of "trial" version: it is the official binaries that will run forever in your data center. Oracle does not push "special versions" of its software; the binaries you buy from Oracle are the same binaries anyone in the world can download and use for testing and personal education. JBoss EAP 6, on the other hand, is not available for download unless you buy a subscription and get access to the Red Hat enterprise repositories. If you need to test, learn, or just start creating your application using Red Hat's middleware, you must download it from the community website. You are not allowed to download the enterprise version that, according to Red Hat, is more secure, reliable and robust. But none of us wants to start developing software on insecure, unreliable, non-scalable middleware, right? So what do you do? You are "invited" by Red Hat to buy subscriptions to get access to the "cool" version of the software.

- WebLogic Server 12c prices are publicly available on the Oracle website. If you want to know right now how much WebLogic will cost your organization, just access the price list (in the case of WebLogic, check the "US Oracle Technology Commercial Price List"). Oracle also encourages you to get in touch with a sales representative to discuss discounts that could make the investment possible, but you are not required to do so, only if you are interested in buying our technology or want to discuss discount scenarios. JBoss EAP 6, on the other hand, does not have its cost publicly available on Red Hat's website or anywhere else, or at least the information is not easy to find; the only link you will find on their website is "Contact a Sales Representative". This is not a good relationship between a customer and a vendor, and it is not an example of transparency, especially when the software is sold as open.
In situations like this, customers expect to see software prices publicly available, so they have the chance to decide, based on the software's existing features, whether the cost is fair.

Conclusion

Oracle WebLogic is the most mature, secure, reliable and scalable Java EE application server on the market, with a proven record of success around the globe to prove its maturity. Don't miss the chance to discover today how WebLogic could fit your needs and sustain your global IT middleware strategy, whether or not that strategy is completely based on the Cloud.

    Read the article

  • Unity not Working 14.04

    - by Back.Slash
I am using Ubuntu 14.04 LTS x64. I did a sudo apt-get upgrade yesterday and restarted my PC. Now my taskbar and panel are missing. When I try to restart Unity using

unity --replace

I get this error:

unity-panel-service stop/waiting
compiz (core) - Info: Loading plugin: core
compiz (core) - Info: Starting plugin: core
unity-panel-service start/running, process 3906
compiz (core) - Info: Loading plugin: ccp
compiz (core) - Info: Starting plugin: ccp
compizconfig - Info: Backend : gsettings
compizconfig - Info: Integration : true
compizconfig - Info: Profile : unity
compiz (core) - Info: Loading plugin: composite
compiz (core) - Info: Starting plugin: composite
compiz (core) - Info: Loading plugin: opengl
compiz (core) - Info: Unity is fully supported by your hardware.
compiz (core) - Info: Unity is fully supported by your hardware.
compiz (core) - Info: Starting plugin: opengl
libGL error: dlopen /usr/lib/x86_64-linux-gnu/dri/i965_dri.so failed (/usr/lib/x86_64-linux-gnu/dri/i965_dri.so: undefined symbol: _glapi_tls_Dispatch)
libGL error: dlopen ${ORIGIN}/dri/i965_dri.so failed (${ORIGIN}/dri/i965_dri.so: cannot open shared object file: No such file or directory)
libGL error: dlopen /usr/lib/dri/i965_dri.so failed (/usr/lib/dri/i965_dri.so: cannot open shared object file: No such file or directory)
libGL error: unable to load driver: i965_dri.so
libGL error: driver pointer missing
libGL error: failed to load driver: i965
libGL error: dlopen /usr/lib/x86_64-linux-gnu/dri/swrast_dri.so failed (/usr/lib/x86_64-linux-gnu/dri/swrast_dri.so: undefined symbol: _glapi_tls_Dispatch)
libGL error: dlopen ${ORIGIN}/dri/swrast_dri.so failed (${ORIGIN}/dri/swrast_dri.so: cannot open shared object file: No such file or directory)
libGL error: dlopen /usr/lib/dri/swrast_dri.so failed (/usr/lib/dri/swrast_dri.so: cannot open shared object file: No such file or directory)
libGL error: unable to load driver: swrast_dri.so
libGL error: failed to load driver: swrast
compiz (core) - Info: Loading plugin: compiztoolbox
compiz (core) - Info: Starting plugin: compiztoolbox
compiz (core) - Info: Loading plugin: decor
compiz (core) - Info: Starting plugin: decor
compiz (core) - Info: Loading plugin: vpswitch
compiz (core) - Info: Starting plugin: vpswitch
compiz (core) - Info: Loading plugin: snap
compiz (core) - Info: Starting plugin: snap
compiz (core) - Info: Loading plugin: mousepoll
compiz (core) - Info: Starting plugin: mousepoll
compiz (core) - Info: Loading plugin: resize
compiz (core) - Info: Starting plugin: resize
compiz (core) - Info: Loading plugin: place
compiz (core) - Info: Starting plugin: place
compiz (core) - Info: Loading plugin: move
compiz (core) - Info: Starting plugin: move
compiz (core) - Info: Loading plugin: wall
compiz (core) - Info: Starting plugin: wall
compiz (core) - Info: Loading plugin: grid
compiz (core) - Info: Starting plugin: grid
compiz (core) - Info: Loading plugin: regex
compiz (core) - Info: Starting plugin: regex
compiz (core) - Info: Loading plugin: imgpng
compiz (core) - Info: Starting plugin: imgpng
compiz (core) - Info: Loading plugin: session
compiz (core) - Info: Starting plugin: session
I/O warning : failed to load external entity "/home/sumeet/.compiz/session/10de541a813cc1a8fc140170575114755000000020350005"
compiz (core) - Info: Loading plugin: gnomecompat
compiz (core) - Info: Starting plugin: gnomecompat
compiz (core) - Info: Loading plugin: animation
compiz (core) - Info: Starting plugin: animation
compiz (core) - Info: Loading plugin: fade
compiz (core) - Info: Starting plugin: fade
compiz (core) - Info: Loading plugin: unitymtgrabhandles
compiz (core) - Info: Starting plugin: unitymtgrabhandles
compiz (core) - Info: Loading plugin: workarounds
compiz (core) - Info: Starting plugin: workarounds
compiz (core) - Info: Loading plugin: scale
compiz (core) - Info: Starting plugin: scale
compiz (core) - Info: Loading plugin: expo
compiz (core) - Info: Starting plugin: expo
compiz (core) - Info: Loading plugin: ezoom
compiz (core) - Info: Starting plugin: ezoom
compiz (core) - Info: Loading plugin: unityshell
compiz (core) - Info: Starting plugin: unityshell
WARN 2014-06-02 18:46:23 unity.glib.dbus.server GLibDBusServer.cpp:579 Can't register object 'org.gnome.Shell' yet as we don't have a connection, waiting for it...
ERROR 2014-06-02 18:46:23 unity.debug.interface DebugDBusInterface.cpp:216 Unable to load entry point in libxpathselect: libxpathselect.so.1.4: cannot open shared object file: No such file or directory
compiz (unityshell) - Error: GL_ARB_vertex_buffer_object not supported
ERROR 2014-06-02 18:46:23 unity.shell.compiz unityshell.cpp:3850 Impossible to delete the unity locked stamp file
compiz (core) - Error: Plugin initScreen failed: unityshell
compiz (core) - Error: Failed to start plugin: unityshell
compiz (core) - Info: Unloading plugin: unityshell
X Error of failed request: BadWindow (invalid Window parameter)
Major opcode of failed request: 3 (X_GetWindowAttributes)
Resource id in failed request: 0x3e000c9
Serial number of failed request: 10115
Current serial number in output stream: 10116

Any help would be highly appreciated.

EDIT: My PC configuration:

description: Portable Computer product: Dell System XPS L502X (System SKUNumber) vendor: Dell Inc. version: 0.1 serial: 1006ZP1 width: 64 bits capabilities: smbios-2.6 dmi-2.6 vsyscall32 configuration: administrator_password=unknown boot=normal chassis=portable family=HuronRiver System frontpanel_password=unknown keyboard_password=unknown power-on_password=unknown sku=System SKUNumber uuid=44454C4C-3000-1030-8036-B1C04F5A5031
*-core description: Motherboard product: 0YR8NN vendor: Dell Inc. physical id: 0 version: A00 serial: .1006ZP1.CN4864314C0560. slot: Part Component
*-firmware description: BIOS vendor: Dell Inc. physical id: 0 version: A11 date: 05/29/2012 size: 128KiB capacity: 2496KiB capabilities: pci pnp upgrade shadowing escd cdboot bootselect socketedrom edd int13floppy360 int13floppy1200 int13floppy720 int5printscreen int9keyboard int14serial int17printer int10video acpi usb ls120boot smartbattery biosbootspecification netboot
*-cpu description: CPU product: Intel(R) Core(TM) i7-2630QM CPU @ 2.00GHz vendor: Intel Corp. physical id: 19 bus info: cpu@0 version: Intel(R) Core(TM) i7-2630QM CPU @ 2.00GHz serial: Not Supported by CPU slot: CPU size: 800MHz capacity: 800MHz width: 64 bits clock: 100MHz capabilities: x86-64 fpu fpu_exception wp vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp constant_tsc arch_perfmon pebs bts nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid cpufreq configuration: cores=4 enabledcores=4 threads=8
*-cache:0 description: L1 cache physical id: 1a slot: L1-Cache size: 64KiB capacity: 64KiB capabilities: synchronous internal write-through data
*-cache:1 description: L2 cache physical id: 1b slot: L2-Cache size: 256KiB capacity: 256KiB capabilities: synchronous internal write-through data
*-cache:2 description: L3 cache physical id: 1c slot: L3-Cache size: 6MiB capacity: 6MiB capabilities: synchronous internal write-back unified
*-memory description: System Memory physical id: 1d slot: System board or motherboard size: 6GiB
*-bank:0 description: SODIMM DDR3 Synchronous 1333 MHz (0.8 ns) product: M471B5273DH0-CH9 vendor: Samsung physical id: 0 serial: 450F1160 slot: ChannelA-DIMM0 size: 4GiB width: 64 bits clock: 1333MHz (0.8ns)
*-bank:1 description: SODIMM DDR3 Synchronous 1333 MHz (0.8 ns) product: HMT325S6BFR8C-H9 vendor: Hynix/Hyundai physical id: 1 serial: 0CA0E8E2 slot: ChannelB-DIMM0 size: 2GiB width: 64 bits clock: 1333MHz (0.8ns)
*-pci description: Host bridge product: 2nd Generation Core Processor Family DRAM Controller vendor: Intel Corporation physical id: 100 bus info: pci@0000:00:00.0 version: 09 width: 32 bits clock: 33MHz
*-pci:0 description: PCI bridge product: Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port vendor: Intel Corporation physical id: 1 bus info: pci@0000:00:01.0 version: 09 width: 32 bits clock: 33MHz capabilities: pci pm msi pciexpress normal_decode bus_master cap_list configuration: driver=pcieport resources: irq:40 ioport:3000(size=4096) memory:f0000000-f10fffff ioport:c0000000(size=301989888)
*-generic UNCLAIMED description: Unassigned class product: Illegal Vendor ID vendor: Illegal Vendor ID physical id: 0 bus info: pci@0000:01:00.0 version: ff width: 32 bits clock: 66MHz capabilities: bus_master vga_palette cap_list configuration: latency=255 maxlatency=255 mingnt=255 resources: memory:f0000000-f0ffffff memory:c0000000-cfffffff memory:d0000000-d1ffffff ioport:3000(size=128) memory:f1000000-f107ffff
*-display description: VGA compatible controller product: 2nd Generation Core Processor Family Integrated Graphics Controller vendor: Intel Corporation physical id: 2 bus info: pci@0000:00:02.0 version: 09 width: 64 bits clock: 33MHz capabilities: msi pm vga_controller bus_master cap_list rom configuration: driver=i915 latency=0 resources: irq:52 memory:f1400000-f17fffff memory:e0000000-efffffff ioport:4000(size=64)
*-communication description: Communication controller product: 6 Series/C200 Series Chipset Family MEI Controller #1 vendor: Intel Corporation physical id: 16 bus info: pci@0000:00:16.0 version: 04 width: 64 bits clock: 33MHz capabilities: pm msi bus_master cap_list configuration: driver=mei_me latency=0 resources: irq:50 memory:f1c05000-f1c0500f
*-usb:0 description: USB controller product: 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 vendor: Intel Corporation physical id: 1a bus info: pci@0000:00:1a.0 version: 05 width: 32 bits clock: 33MHz capabilities: pm debug ehci bus_master cap_list configuration: driver=ehci-pci latency=0 resources: irq:16 memory:f1c09000-f1c093ff
*-multimedia description: Audio device product: 6 Series/C200 Series Chipset Family High Definition Audio Controller vendor: Intel Corporation physical id: 1b bus info: pci@0000:00:1b.0 version: 05 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list configuration: driver=snd_hda_intel latency=0 resources: irq:53 memory:f1c00000-f1c03fff
*-pci:1 description: PCI bridge product: 6 Series/C200 Series Chipset Family PCI Express Root Port 1 vendor: Intel Corporation physical id: 1c bus info: pci@0000:00:1c.0 version: b5 width: 32 bits clock: 33MHz capabilities: pci pciexpress msi pm normal_decode cap_list configuration: driver=pcieport resources: irq:16
*-pci:2 description: PCI bridge product: 6 Series/C200 Series Chipset Family PCI Express Root Port 2 vendor: Intel Corporation physical id: 1c.1 bus info: pci@0000:00:1c.1 version: b5 width: 32 bits clock: 33MHz capabilities: pci pciexpress msi pm normal_decode bus_master cap_list configuration: driver=pcieport resources: irq:17 memory:f1b00000-f1bfffff
*-network description: Wireless interface product: Centrino Wireless-N 1030 [Rainbow Peak] vendor: Intel Corporation physical id: 0 bus info: pci@0000:03:00.0 logical name: mon.wlan0 version: 34 serial: bc:77:37:14:47:e5 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list logical wireless ethernet physical configuration: broadcast=yes driver=iwlwifi driverversion=3.13.0-27-generic firmware=18.168.6.1 latency=0 link=no multicast=yes wireless=IEEE 802.11bgn resources: irq:51 memory:f1b00000-f1b01fff
*-pci:3 description: PCI bridge product: 6 Series/C200 Series Chipset Family PCI Express Root Port 4 vendor: Intel Corporation physical id: 1c.3 bus info: pci@0000:00:1c.3 version: b5 width: 32 bits clock: 33MHz capabilities: pci pciexpress msi pm normal_decode bus_master cap_list configuration: driver=pcieport resources: irq:19 memory:f1a00000-f1afffff
*-usb description: USB controller product: uPD720200 USB 3.0 Host Controller vendor: NEC Corporation physical id: 0 bus info: pci@0000:04:00.0 version: 04 width: 64 bits clock: 33MHz capabilities: pm msi msix pciexpress xhci bus_master cap_list configuration: driver=xhci_hcd latency=0 resources: irq:19 memory:f1a00000-f1a01fff
*-pci:4 description: PCI bridge product: 6 Series/C200 Series Chipset Family PCI Express Root Port 5 vendor: Intel Corporation physical id: 1c.4 bus info: pci@0000:00:1c.4 version: b5 width: 32 bits clock: 33MHz capabilities: pci pciexpress msi pm normal_decode bus_master cap_list configuration: driver=pcieport resources: irq:16 memory:f1900000-f19fffff
*-pci:5 description: PCI bridge product: 6 Series/C200 Series Chipset Family PCI Express Root Port 6 vendor: Intel Corporation physical id: 1c.5 bus info: pci@0000:00:1c.5 version: b5 width: 32 bits clock: 33MHz capabilities: pci pciexpress msi pm normal_decode bus_master cap_list configuration: driver=pcieport resources: irq:17 ioport:2000(size=4096) ioport:f1800000(size=1048576)
physical id: 0 bus info: pci@0000:06:00.0 logical name: eth0 version: 06 serial: 14:fe:b5:a3:ac:40 size: 1Gbit/s capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl_nic/rtl8168e-2.fw ip=172.19.167.151 latency=0 link=yes multicast=yes port=MII speed=1Gbit/s resources: irq:49 ioport:2000(size=256) memory:f1804000-f1804fff memory:f1800000-f1803fff *-usb:1 description: USB controller product: 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 vendor: Intel Corporation physical id: 1d bus info: pci@0000:00:1d.0 version: 05 width: 32 bits clock: 33MHz capabilities: pm debug ehci bus_master cap_list configuration: driver=ehci-pci latency=0 resources: irq:23 memory:f1c08000-f1c083ff *-isa description: ISA bridge product: HM67 Express Chipset Family LPC Controller vendor: Intel Corporation physical id: 1f bus info: pci@0000:00:1f.0 version: 05 width: 32 bits clock: 33MHz capabilities: isa bus_master cap_list configuration: driver=lpc_ich latency=0 resources: irq:0 *-ide:0 description: IDE interface product: 6 Series/C200 Series Chipset Family 4 port SATA IDE Controller vendor: Intel Corporation physical id: 1f.2 bus info: pci@0000:00:1f.2 version: 05 width: 32 bits clock: 66MHz capabilities: ide pm bus_master cap_list configuration: driver=ata_piix latency=0 resources: irq:19 ioport:40b8(size=8) ioport:40cc(size=4) ioport:40b0(size=8) ioport:40c8(size=4) ioport:4090(size=16) ioport:4080(size=16) *-serial UNCLAIMED description: SMBus product: 6 Series/C200 Series Chipset Family SMBus Controller vendor: Intel Corporation physical id: 1f.3 bus info: pci@0000:00:1f.3 version: 05 width: 64 bits clock: 33MHz configuration: latency=0 resources: memory:f1c04000-f1c040ff ioport:efa0(size=32) *-ide:1 description: IDE interface product: 6 Series/C200 Series Chipset Family 2 port SATA IDE Controller vendor: Intel Corporation physical id: 1f.5 bus info: pci@0000:00:1f.5 version: 05 width: 32 bits clock: 66MHz capabilities: ide pm bus_master cap_list configuration: driver=ata_piix latency=0 resources: irq:19 ioport:40a8(size=8) ioport:40c4(size=4) ioport:40a0(size=8) ioport:40c0(size=4) ioport:4070(size=16) ioport:4060(size=16) *-scsi:0 physical id: 1 logical name: scsi0 capabilities: emulated *-disk description: ATA Disk product: SAMSUNG HN-M640M physical id: 0.0.0 bus info: scsi@0:0.0.0 logical name: /dev/sda version: 2AR1 serial: S2T3J1KBC00006 size: 596GiB (640GB) capabilities: partitioned partitioned:dos configuration: ansiversion=5 sectorsize=512 signature=6b746d91 *-volume:0 description: Windows NTFS volume physical id: 1 bus info: scsi@0:0.0.0,1 logical name: /dev/sda1 version: 3.1 serial: 0272-3e7f size: 348MiB capacity: 350MiB capabilities: primary bootable ntfs initialized configuration: clustersize=4096 created=2013-09-18 12:20:45 filesystem=ntfs label=System Reserved modified_by_chkdsk=true mounted_on_nt4=true resize_log_file=true state=dirty upgrade_on_mount=true *-volume:1 description: Extended partition physical id: 2 bus info: scsi@0:0.0.0,2 logical name: /dev/sda2 size: 116GiB capacity: 116GiB capabilities: primary extended partitioned partitioned:extended *-logicalvolume:0 description: Linux swap / Solaris partition physical id: 5 logical name: /dev/sda5 capacity: 6037MiB capabilities: nofs *-logicalvolume:1 description: Linux 
filesystem partition physical id: 6 logical name: /dev/sda6 logical name: / capacity: 110GiB configuration: mount.fstype=ext4 mount.options=rw,relatime,errors=remount-ro,data=ordered state=mounted *-volume:2 description: Windows NTFS volume physical id: 3 bus info: scsi@0:0.0.0,3 logical name: /dev/sda3 logical name: /media/os version: 3.1 serial: 4e7853ec-5555-a74d-82e0-9f49798d3772 size: 156GiB capacity: 156GiB capabilities: primary ntfs initialized configuration: clustersize=4096 created=2013-09-19 09:19:00 filesystem=ntfs label=OS mount.fstype=fuseblk mount.options=ro,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other,blksize=4096 state=mounted *-volume:3 description: Windows NTFS volume physical id: 4 bus info: scsi@0:0.0.0,4 logical name: /dev/sda4 logical name: /media/data version: 3.1 serial: 7666d55f-e1bf-e645-9791-2a1a31b24b9a size: 322GiB capacity: 322GiB capabilities: primary ntfs initialized configuration: clustersize=4096 created=2013-09-17 23:27:01 filesystem=ntfs label=Data modified_by_chkdsk=true mount.fstype=fuseblk mount.options=rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other,blksize=4096 mounted_on_nt4=true resize_log_file=true state=mounted upgrade_on_mount=true *-scsi:1 physical id: 2 logical name: scsi1 capabilities: emulated *-cdrom description: DVD-RAM writer product: DVD+-RW GT32N vendor: HL-DT-ST physical id: 0.0.0 bus info: scsi@1:0.0.0 logical name: /dev/cdrom logical name: /dev/sr0 version: A201 capabilities: removable audio cd-r cd-rw dvd dvd-r dvd-ram configuration: ansiversion=5 status=nodisc *-battery product: DELL vendor: SANYO physical id: 1 version: 2008 serial: 1.0 slot: Rear capacity: 57720mWh configuration: voltage=11.1V `

    Read the article

  • The Low Down Dirty Azure Blues

    - by SGWellens
    Remember the SETI screen savers that used to be on everyone's computer? As far as I know, it was the first bona-fide use of "Cloud" computing…albeit an ad hoc cloud. I still think it was a brilliant leveraging of computing power. My interest in clouds was re-piqued when I went to a technical seminar at the local .Net User Group. The speaker was Mike Benkovitch and he expounded magnificently on the virtues of the Azure platform. Mike always does a good job. One killer reason he gave for cloud computing is instant scalability. Not applicable for most applications, but it is there if needed. I have a bunch of files stored on Microsoft's SkyDrive platform, which is cloud storage. It is painfully slow. Accessing a file means going through layers and layers of software, redirections and security. Am I complaining? Hell no! It's free! So my opinions of Cloud Computing are both skeptical and appreciative. What intrigued me at the seminar, in addition to its other features, is that Azure can serve as a web hosting platform. I have a client with an ASP.NET web site I developed who is not happy with the performance of their current hosting service. I checked the cost of Azure, and since the site has low bandwidth/space requirements, the cost would be competitive with the existing host provider: Azure Pricing Calculator. And, Azure has a three month free trial. Perfect! I could try moving the website and see how it works for free. I went through the signup process. Everything was proceeding fine until I went to the MS SQL database management screen. A popup window informed me that I needed to install Silverlight on my machine. Silverlight? No thanks. Buh-Bye. I half-heartedly found the Azure support button and logged a ticket telling them I didn't want Silverlight on my machine. Within 4 to 6 hours (and a myriad (5) of automated support emails) they sent me a link to a database management page that did not require Silverlight. Thanks! I was able to create a database immediately. One really nice feature was that after creating the database, I was given a list of connection strings. I went to the current host provider, made a backup of the database and saved it to my machine. I attached to the remote database using SQL Server Management Studio 2012 and looked for the Restore menu item. It was missing. So I tried using the SQL command:
        RESTORE DATABASE MyDatabase FROM DISK ='C:\temp\MyBackup.bak'
        Msg 40510, Level 16, State 1, Line 1
        Statement 'RESTORE DATABASE' is not supported in this version of SQL Server.
    Are you kidding me? Why on earth…? This can't be happening! I opened both the source database and destination database in SQL Management Studio. I right clicked the source database, selected "Tasks" and noticed a menu selection called "Deploy Database to SQL Azure". Are you kidding me? Could it be? Oh yes, it be! There was a small problem because the database already existed on the Azure machine, so I deployed under a new name, deleted the existing database, and renamed the deployed database to what I needed. It was ridiculously easy. Being able to attach SQL Management Studio to remote databases is an awesome but scary feature. You can limit the IP addresses that can access the database, which enhances security, but when you give people, any people, me included, that much power, one errant mouse click could bring a live system down. My Advice: Tread softly and carry a large backup thumb-drive.
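    As an aside, those IP restrictions don't have to be managed through the web page alone. Here is a minimal sketch of adding a firewall rule from code, assuming plain ADO.NET and the SQL Azure sp_set_firewall_rule stored procedure in the server's master database; the server name, credentials and IP address below are made up for illustration:
        using System.Data;
        using System.Data.SqlClient;

        // Connect to the logical server's master database (hypothetical server/credentials).
        var cs = "Server=tcp:myserver.database.windows.net;Database=master;" +
                 "User ID=admin@myserver;Password=***;Encrypt=True;";
        using (var conn = new SqlConnection(cs))
        using (var cmd = new SqlCommand("sp_set_firewall_rule", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@name", "OfficeIP");                 // rule name
            cmd.Parameters.AddWithValue("@start_ip_address", "203.0.113.17"); // example address
            cmd.Parameters.AddWithValue("@end_ip_address", "203.0.113.17");
            conn.Open();
            cmd.ExecuteNonQuery(); // creates or updates the rule
        }
    Whether scripted or clicked, the same caution applies: one wrong rule can lock you (or your client) out of a live database.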
Then I created a web site; the URL it returned looked something like this: http://MyWebSite.azurewebsites.net/ Azure supports FTP, but I couldn't figure out the settings until I downloaded the publishing profile. It was an XML file that contained the needed information. I still couldn't connect with my FTP client (FileZilla). After about an hour of messing around, I deleted the port number from the FileZilla setup page…and voila, I was in like Flynn.   There are other options of deploying directly from Visual Studio, TFS, etc., but I do not like integrated tools that do things without my asking: it's usually hard to figure out what they did and how to undo it. I uploaded the .aspx, .cs, web.config, etc. files. But it didn't run. The site I ported was in .NET 3.5. The Azure website configuration page gave me a choice between .NET 2.0 and 4.0. So, I switched to Visual Studio 2010, chose .NET 4.0 and upgraded the site. Of course I have the original version completely backed up and stored in a granite cave beneath the Nevada desert. And I have a backup CD under my pillow. The site uses ReportViewer to generate PDF documents. Of course it was the wrong version. I removed the old references to version 9 and added new references to version 10 (*see note below). Since the DLLs were not on the Azure server, I uploaded them to the bin directory, crossed my fingers, burned some incense and gave it a try. After some fiddling around it ran. I don't know if I did anything in particular to make it work or it just needed time to sort things out. However, one critical feature didn't work: ReportViewer could not programmatically generate PDF documents. I was getting this exception: "An error occurred during local report processing. Parameter is not valid." Rats. I did some searching and found other people were having the same problem, so I added a post saying I was having the same problem: http://social.msdn.microsoft.com/Forums/en-US/windowsazurewebsitespreview/thread/b4a6eb43-0013-435f-9d11-00ee26a8d017 Currently they are looking into this problem and I am waiting for the results. Hence I had the time to write this BLOG entry. How lucky you are. This was the last message I got from the Microsoft person: Hi Steve, Windows Azure Web Sites is a multi-tenant environment. For security issue, we limited some API calls. Unfortunately, some GDI APIS required by the PDF converting function are in this list. We have noticed this issue, and still investigation the best way to go. At this moment, there is no news to share. Sorry about this. Will keep you posted. If I had to guess, I would say they are concerned with people uploading images and doing intensive graphics programming which would hog CPU time.  But that is just a guess. Another problem. While trying to resolve the ReportViewer problem, I tried to write a file to the PDF directory with some test code, to see if there was a permissions problem:
        String MyPath = MapPath(@"~\PDFs\Test.txt");
        File.WriteAllText(MyPath, "Hello Azure");
    I got this message: Access to the path <my path> is denied. After some research, I understood that since Azure is a cloud based platform, it can't allow web applications to save files to local directories. The application could be moved or replicated as scaling occurs, and trying to manage local files would be problematic to say the least. There are other options:
    - Use the Azure APIs to get a path. That way the location of the storage is separated from the application.
      However, the web site is then tied to Azure and can't be moved to another hosting platform.
    - Use the ApplicationData folder (not recommended).
    - Write to BLOB storage (see the short sketch at the end of this entry).
    Or, I could try to stream the PDF output directly to the email and not save a file. I'm not going to work on a final solution until the ReportViewer is fixed. I am just sharing some of the things you need to be aware of if you decide to use Azure. I got this information from here. (Note: the author of the BLOG added a comment saying he has updated his entry.) Is my memory faulty? While getting this BLOG ready, I tried to write the test file again. And it worked. My memory is incorrect, or much more likely, something changed on the server…perhaps while they are trying to get ReportViewer to work. (Anyway, that's my story and I'm sticking to it.) *Note: Since Visual Studio 2010 Express doesn't include a report editor, I downloaded and installed SQL Server Report Builder 2.0. It is a standalone report editor to replace the one missing from Visual Studio 2010 Express. I hope someone finds this useful. Steve Wellens CodeProject
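    For the BLOB storage option, a minimal sketch follows. It assumes the Windows Azure Storage client library (Microsoft.WindowsAzure.Storage 2.x), a storage connection string in appSettings, and that the ReportViewer rendering issue above is eventually fixed; the container and blob names are made up for illustration:
        using System.Configuration;
        using System.IO;
        using Microsoft.WindowsAzure.Storage;
        using Microsoft.WindowsAzure.Storage.Blob;

        // Render the report to a byte array instead of writing a local file.
        byte[] pdfBytes = ReportViewer1.LocalReport.Render("PDF");

        // Push the bytes into a blob; the storage lives outside the web application.
        var account = CloudStorageAccount.Parse(
            ConfigurationManager.AppSettings["StorageConnectionString"]);
        CloudBlobContainer container =
            account.CreateCloudBlobClient().GetContainerReference("pdfs");
        container.CreateIfNotExists();

        CloudBlockBlob blob = container.GetBlockBlobReference("Test.pdf");
        using (var ms = new MemoryStream(pdfBytes))
        {
            blob.UploadFromStream(ms);
        }
    If no file is needed at all, the same byte array can be handed straight to the email attachment or the HTTP response.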

    Read the article

  • CodePlex Daily Summary for Friday, July 19, 2013

    CodePlex Daily Summary for Friday, July 19, 2013

    Popular Releases
    - uComponents: uComponents v5.5.0: Following on from our 101355 release, we bring you... v5.5.0! Resolved issues: the following issues have been resolved in 5.5.0: 14634 14848 DataType Grid 14840 14842 14845. Upgrading: The best way to upgrade uComponents is to re-install via Umbraco's back-office package installer (either from the package repository or a local package upload). Do not uninstall an existing uComponents package, as this may lead to data loss! Umbraco compatibility: This release is compatible with Umbraco 4...
    - 51Degrees.mobi - Mobile Device Detection and Redirection: 2.1.19.4: One Click Install from NuGet. This release introduces the 51Degrees.mobi IIS Vary Header Fix. When compression and caching are used in IIS, the Vary header is overwritten, making intelligent caching with dynamic content impossible. Find out more about installing the Vary Header fix. Changes to version 2.1.19.4: Handlers now have a 'Count' property. This is an integer value that shows how many devices in the dataset use that handler. Provider.cs -> GetDeviceInfoByID to address a problem w...
    - SalarDbCodeGenerator: SalarDbCodeGenerator v2.1.2013.0719: Version 2.1.2013.0719, 2013/7/19. Pattern changes: DapperContext pattern is added. All patterns are updated to work with one-to-one relations. Changes: One-to-one relations are supported. Minor bug fixes.
    - Scryber PDF Generation: Scryber.Installer.v0.8.3: This one has tables! A significant update to the scryber library, with the inclusion of tables. There is no column span, or row span, and nested tables are not fully supported. But - they overflow nicely and are well styled. And they can be data bound. Other updates: The Link element has inner components directly as a child, rather than in a Content element. The existing files will still continue to work, but are invalid wrt the schema. Databinding is now supported at the document level, r...
    - Player Framework by Microsoft: Player Framework for Windows and WP (v1.3 beta 2): Includes all changes in v1.3 beta 1. Additional support for Windows 8.1 Preview. New API (JS): addTextTrack. New API (JS): msKeys. New API (JS): msPlayToPreferredSourceUri. New API (JS): msSetMediaKeys. New API (JS): onmsneedkey. New API (Xaml): SetMediaStreamSource method. New API (Xaml): Stretch property. New API (Xaml): StretchChanged event. New API (Xaml): AreTransportControlsEnabled property. New API (Xaml): IsFullWindow property. New API (Xaml): PlayToPreferredSourceUri proper...
    - Outlook 2013 Add-In: Multiple Calendars: As per popular request, this new version includes: support for multiple calendars. This can be enabled in the configuration by choosing which ones to show/hide appointments from. In some cases (public folders) it may time out and crash, and so far it only supports "My Calendars", so not shared ones yet. Also they're currently shown in the same font/color so there are no confusions with color categories, but please drop me a line on any suggestions you'd like to see implemented. Added fri...
    - GoAgent GUI: GoAgent GUI 1.2.0: (Chinese release notes lost to encoding in extraction; surviving links:) https://goagent.codeplex.com/workitem/list/basic · uygw@outlook.com · https://goagent.codeplex.com/documentation
    - Circuit Diagram: Circuit Diagram 2.0 Beta 2: New in this release: Show grid in editor. Cut/copy/paste support. Bug fixes.
    - Ascend 3D: Ascend (2013-07-17): Ascend 2.1: Animation bugfixes and speed/robustness improvements. Added compatibility for multiple top-level bones on armatures. Ascend Blender Exporter 2.1: Removed need to create parent-child relationship between armature and skinning mesh. Added support for exporting multiple top-level bones on armatures. AscendViewer 1.0.4: Updated to use latest version of Ascend library.
    - Community TFS Build Extensions: July 2013: The July 2013 release contains VS2010 Activities (target .NET 4.0), VS2012 Activities (target .NET 4.5), VS2013 Activities (target .NET 4.5.1), and the Community TFS Build Manager VS2012. The Community TFS Build Manager can also be found in the Visual Studio Gallery here, where updates will first become available. A version supporting VS2010 is also available in the Gallery here.
    - smoketestgit1: new release: Wiki Markup Guide bold italics underline Heading 1 Heading 2 Bullet List Bullet List 2 Number List Number List 2 http://www.example.com Do not apply formatting
    - smoketesthg: new release: Wiki Markup Guide bold italics underline Heading 1 Heading 2 Bullet List Bullet List 2 Number List Number List 2 http://www.example.com Do not apply formatting
    - smoketesttfs: new release: Wiki Markup Guide bold italics underline Heading 1 Heading 2 Bullet List Bullet List 2 Number List Number List 2 http://www.example.com Do not apply formatting
    - datajs - JavaScript Library for data-centric web applications: datajs version 1.1.1: datajs is a cross-browser and UI agnostic JavaScript library that enables data-centric web applications with the following features: OData client that enables CRUD operations including batching and metadata support using both ATOM and JSON payloads. Single store abstraction that provides a common API on top of HTML5 local storage technologies. Data cache component that allows reading data ranges from a collection and storing them locally to reduce the number of network requests. Changes ...
    - Orchard Project: Orchard 1.7 RC: Planning released
    - Terminals: Version 3.1 - Release: Changes since version 3.0: 15992 Unified usage of icons in user interface. Added context menu in Organize favorites grid. Fixed: 34219 34210 34223 33981 34209. Install notes: No changes in database (use database from release 3.0). No upgrade of configuration, passwords, credentials or favorites. See also upgrade notes for release 3.0.
    - Media Companion: Media Companion MC3.573b: XBMC Link - Let MC update your XBMC Library. Fixes in place, enjoy the XBMC Link function. Well, Phil's been busy in the background, and come up with a great new feature for Media Companion, currently only implemented for movies. Once we're happy that's working with no issues, we'll extend the functionality to include TV shows. All the help for this is built into the application. Go to General Preferences - XBMC Link for details. Help us make it better. Currently only tested on local and ...
    - Wsus Package Publisher: Release v1.2.1307.15: Fix a bug where WPP crashes if 'ShowPendingUpdates' starts with wrong credentials. Fix a bug where WPP crashes if ArrivalDateAfter and ArrivalDateBefore are equal in the ComputerView. Add a filter in the ComputerView. (Thanks to NorbertFe for this feature request.) Add an option: when right-clicking on a computer, you can ask to display the current logon user of the remote computer. Add an option in settings to choose if WPP pings remote computers using IPv4, IPv6 or IPv6 and, if fail, IP...
    - Lab Of Things: vBeta1: Initial release of LoT
    - VidCoder: 1.4.23: New in 1.4.23: Added French translation. Fixed non-x264 video encoders not sticking in video tab. New in 1.4: Updated HandBrake core to 0.9.9. Blu-ray subtitle (PGS) support. Additional framerates: 30, 50, 59.94, 60. Additional sample rates: 8, 11.025, 12 and 16 kHz. Additional higher bitrates for audio. Same as Source Constant Framerate. 24-bit FLAC encoding. Added Windows Phone 8 and Apple TV 3 presets. Introduced process isolation for encodes. Now if HandBrake crashes, VidCoder will ...

    New Projects
    - [C#]A Complete MySQL Manager Project: An open-source MySQL database manager oriented to multi-threading environments, supporting creating an entire database from scratch and editing it.
    - bCal Javascript Calendar Generator: Using this simple javascript library, you can create simple navigable calendars in your website with just one line of user code.
    - Better Network: Tools to delete network profiles and network lists in Windows 8.
    - C# Intellisense for Notepad++: 'CS-Script Intellisense' is a real C# intellisense solution based on CS-Script and ICSharpCode.NRefactory/Mono.Cecil.
    - DARF: Dynamic Application Runtime Framework: A set of tools and libraries which help software developers create fully distributed and component-based applications.
    - FeiXinV5: Just Test
    - Follower Catcher: Game made for the 2012 Windows 8 Megathon. Second place in Madrid's local competition.
    - MeeOS .NET: The first GNU GPL operating system in Spanish.
    - Music Vault: Music Vault is a program to store your favorite music in one place; you can at any time move your whole music library by moving only one file.
    - MVC uTorrent: This is an ASP.NET MVC uTorrent front end that utilises the uTorrent API. The main features: single page application; can be hosted in IIS.
    - MyTaskManager: This project relates to distributing tasks to logged-in users.
    - OO2RDF: OO2RDF is a framework for data triplification.
    - P(y)rotein Subcellular Location Image Database: A library that allows communication with an OMERO database, an open-source software for storage and manipulation of biological images.
    - PE Toolbox: (Russian description lost to encoding in extraction.) Includes: PE IFE (Importing from Excel)
    - PeGscan: PeGscan
    - ScriptZilla: ScriptZilla is a free editor for Windows with more than 23 languages.
    - Sharepoint: Reconciliate SSRS Reports and Datasets with datasources Powershell: Powershell to link datasources to SSRS reports and datasets.
    - Sync-Moped: The idea behind the "Sync-Moped" is to synchronize two directories with backup functionality.
    - System.Time: System.Time provides a type for .NET that allows managing a time in System.String, System.DateTime and System.TimeSpan form at the same time.
    - Text Analytics: Project summary will be added later.
    - Tibia Universal MC .NET: This tool was written as an attempt to port the [url:http://code.google.com/p/xenomc/] to the .NET Framework.
    - VRFramework: VR Framework is a system for creating virtual reality games using your PC, an Android smart phone, and Unity 3D.

    Read the article

  • Dotfuscator Deep Dive with WP7

    - by Bil Simser
    I thought I would share some experience with code obfuscation (specifically the Dotfuscator product) and Windows Phone 7 apps. These days Twitter is abuzz with black hat and white hat operations coming out about how the marketplace is insecure and Microsoft failed, blah, blah, blah. So it's that much more important to protect your intellectual property. You should protect it no matter what when releasing apps into the wild, but more so when someone is paying for them. You want to protect the time and effort that went into your code and have some comfort that the casual hacker isn't going to usurp your next best thing. Enter code obfuscation. Code obfuscation is one tool that can help protect your IP. Basically it goes into your compiled assemblies, rewrites things at an IL level (like renaming methods and classes and hiding logic flow) and writes them back so that the assembly or executable is still fully functional, but prying eyes using a tool like ILDASM or Reflector can't see what's going on.  You can read more about code obfuscation here on Wikipedia. A word to the wise: code obfuscation isn't 100% secure, more so on the WP7 platform where the OS expects certain things to be as they were meant to be. So don't expect 100% obfuscation of every class and every method and every property. It's just not going to happen. What this does do is give you some level of protection, but don't put all your eggs in one basket and call it done. Like I said, this is just one step in the process. There are a few tools out there that provide code obfuscation and support the Windows Phone 7 platform (see links to other tools at the end of this post). One such tool is Dotfuscator from PreEmptive Solutions. The thing about Dotfuscator is that they've struck a deal with Microsoft to provide a *free* copy of their commercial product for Windows Phone 7. The only drawback is that it only runs until March 31, 2010. However it's a good place to start, and it's the focus of this article.
    Getting Started
    When you fire up Dotfuscator you're presented with a dialog to start a new project or load a previous one. We'll start with a new project. You're then looking at a somewhat blank screen that shows an Input tab (among others) and you're probably wondering what to do. Click on the folder icon (the first one) and browse to where your xap file is. At this point you can save the project and click on the arrow to start the process. Bam! You're done. Right? Think again. The program did indeed run and create a new version of your xap (doing its thing and writing back your *obfuscated* assemblies), but let's take a look at the assembly in Reflector to see the end result. Remember, a xap file is really just a glorified zip file (or cab file if you prefer). When you ran Dotfuscator for the first time with the default settings, you'll see it created a new version of your xap in a folder under "My Documents" called "Dotfuscated" (you can configure the output directory in settings). Here's the new xap file. Since a xap is just a zip, rename it to .cab or .zip or something and open it with your favorite unarchive program (I use WinRar but it doesn't matter as long as it can unzip files). If you already have the xap file associated with your unarchive tool, the rename isn't needed. 
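    As an aside, if you unzip xaps often enough to want to script it, here is a one-line sketch, assuming .NET 4.5's System.IO.Compression.ZipFile class (newer than the tooling discussed in this article) and made-up paths; the manual rename-and-extract above works just as well:
        using System.IO.Compression; // reference System.IO.Compression.FileSystem

        // A xap is just a zip archive, so it can be extracted directly; no rename needed.
        ZipFile.ExtractToDirectory(@"C:\temp\Dotfuscated\MyApp.xap", @"C:\temp\MyApp");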
    Once renamed, extract the contents of the xap to your hard drive. Now you'll have a folder with the contents of the xap file extracted. Double click or load up your assembly (WindowsPhoneDataBoundApplication1.dll in the example) in Reflector and let's see the results. Hmm. That doesn't look right. I can see all the methods, and the code is all there for my LoadData method I wanted to protect. Product failure. Let's return it for a refund. Hold your horses. We need to check out the settings in the program first. Remember when we loaded up our xap file, it started us on the Input tab, but there was a Settings tab before that. Wonder what it does? Here are the default settings:
    Renaming
    Taking a closer look, all of the settings in Feature are disabled. WTF? Yeah, it leaves me scratching my head why an obfuscator by default doesn't obfuscate. However it's a simple fix to change these settings. Let's enable Renaming, as it sounds like a good start. Renaming obscures code by renaming methods and fields to names that are not understandable. Great. Run the tool again and go through the process of unzipping the updated xap, and let's take a look in Reflector again at our project. This looks a lot better. Lots of methods named a, b, c, d, etc. That'll help slow hackers down a bit. What about our logic that we spent days (weeks!) on? Let's take a look at the LoadData method: What gives? We have renaming enabled but all of our code is still there. If you look through all your methods you'll find it's still sitting there out in the open.
    Control Flow
    Back to the settings page again. Let's enable Control Flow now. Control Flow obfuscation synthesizes branching, conditional, and iterative constructs (such as if, for, and while) that produce valid executable logic, but yield non-deterministic semantic results when decompilation is attempted. In other words, the code runs as before, but decompilers cannot reproduce the original code. Do the dance again and let's see the results in Reflector. Ahh, that's better. Methods renamed *and* nobody can look at our LoadData method now. Life is good.
    More than Minimum
    This is the bare minimum to obfuscate your xap to at least a somewhat comfortable level. However, I did find that while this worked in my Hello World demo, it didn't work on one of my real world apps. I had to do some extra tweaking with that. Below are the screens that I used on one app that worked. I'm not sure what it was about the app that the approach above didn't work with (maybe the extra assembly?) but it works and I'm happy with it. YMMV. (Screenshots: settings tab, rename tab, string encryption tab, premark tab.)
    A few final notes
    Play with the settings and keep bumping up the bar to try to get as much obfuscation as you can. The more the better, but remember you can overdo it. Always (always, always, always) deploy your obfuscated xap to your device and test it before submitting, to ensure you haven't obfuscated the obfuscator. I didn't, and got rejected because I had gone overboard with the obfuscation so the app wouldn't launch at all. Not everything is going to be obfuscated. Specifically, I don't see a way to obfuscate auto properties and a few other language features. Again, if you crank the settings up you might hide these, but I haven't spent a lot of time optimizing the process. Some people might say to obfuscate your xaml using string encryption but again, test, test, test. 
Xaml is picky, so too much obfuscation (or any) might disable your app or produce odd rendering effects. Remember, obfuscation is not 100% secure! Don't rely on it as the sole way of protecting your assets.
    Other Tools
    Dotfuscator is just one product and isn't the end-all be-all of obfuscation, so check out the others below. For example, Crypto can make it so Reflector doesn't even recognize the app as a .NET one and won't open it. Others can encrypt resources and Xaml markup files. Here are some other obfuscators that support the Windows Phone 7 platform. Feel free to give them a try and let people know your experience with them!
    - Dotfuscator Windows Phone Edition
    - Crypto Obfuscator for .NET
    - DeepSea Obfuscation

    Read the article

  • CodePlex Daily Summary for Sunday, December 19, 2010

    CodePlex Daily Summary for Sunday, December 19, 2010

    Popular Releases
    - Team Foundation Server Administration Tool: 2.1: TFS Administration Tool 2.1 is the first version of the TFS Administration Tool which is built on top of the Team Foundation Server 2010 object model. TFS Administration Tool 2.1 can be installed on machines that are running either Team Explorer 2010 or Team Foundation Server 2010.
    - SQL Monitor: SQL Monitor 3.0 alpha 5: 1. fix a problem with not expanding nodes in object explorer, sorry
    - Hacker Passwords: HackerPasswords.zip: Source code, executable and documentation
    - WatchersNET.SiteMap: WatchersNET.SiteMap 01.03.03: What's new. Skin object: You can now filter by Terms. For example use: <object id="dnnSITEMAPSL" codetype="dotnetnuke/server" codebase="SITEMAPSL"> <param name="TaxMode" value="terms" /> <param name="TaxTerms" value="TermName1,TermName2" /> </object>. Changes: the Tax Term filter should work correctly now.
    - SubtitleTools: SubtitleTools 1.3: Added .srt FileAssociation & Win7 ShowRecentCategory feature. Applied UnifiedYeKe to fix Persian search problems. Reduced file size of Persian subtitles for uploading @OSDB.
    - EnhSim: EnhSim 2.2.3 ALPHA: 2.2.3 ALPHA. This release adds in the changes for 4.03a at level 85. To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 Added in th...
    - Facebook C# SDK: 4.1.0: Lots of bug fixes. Removed Dynamic Runtime Language dependencies from non-dynamic platforms. Samples included in release for ASP.NET, MVC, Silverlight, Windows Phone 7, WPF, WinForms, and one Visual Basic sample. Changed internal serialization to use Json.net. BREAKING CHANGE: Canvas Session is no longer supported; use Signed Request instead (Canvas Session has been deprecated by Facebook). BREAKING CHANGE: Some renames and changes with Authorizer, CanvasAuthorizer, and Authorization ac...
    - NuGet (formerly NuPack): NuGet 1.0 build 11217.102: Note: this release is slightly newer than RC1, and fixes a couple of issues relating to updating packages to newer versions. NuGet is a free, open source, developer-focused package management system for the .NET platform intent on simplifying the process of incorporating third party libraries into a .NET application during development. This release is a Visual Studio 2010 extension and contains the Package Manager Console and the Add Package Dialog. This new build targets the newer feed (h...
    - WCF Community Site: WCF Web APIs 10.12.17: Welcome to the second release of WCF Web APIs on CodePlex. Here is what is new in this release: WCF Support for jQuery - create WCF web services that are easy to consume from JavaScript clients, in particular jQuery. Better support for using JsonValue as dynamic. Support for JsonValue change notification events for databinding and other purposes. Support for going between JsonValue and CLR types. WCF HTTP - create HTTP / REST based web services. This is a minor release which contains fixe...
    - Orchard Project: Orchard 0.9: Orchard Release Notes. Build: 0.9.253. Published: 12/16/2010. How to install Orchard: To install the Orchard tech preview using Web PI, follow these instructions: http://www.orchardproject.net/docs/Installing-Orchard-Using-Web-PI.ashx Web PI will detect your hardware environment and install the application. Alternatively, to install the release manually, download the Orchard.Web.0.9.253.zip file. The zip contents are pre-built and ready-to-run. Simply extract the contents of the Orch...
    - Pyxis 2: Beta 2.2: An issue stopping the Checkbox from working properly has been resolved (a gradient background has also been added). A TabDialog control has been added. NumericUpDown control added. A driver for the VS1053 has been added (not yet in use). PyxisAPI.OpenFile now works with all its features. PyxisAPI.SaveFile now works with all features. Settings window is now available from the Pyxis menu. You can now connect using static IP or DHCP. You can now set your system time from the Settings windo...
    - sgMotion Animation Library: sgMotion v1.3 for SunBurn 2.0.9 [Includes Sample]: sgMotion 1.3 release for use with SunBurn 2.0.9 (all editions). Includes example project with assets, and full Windows and Xbox support. (Tested on all platforms.)
    - DotSpatial: DotSpatial 12-15-2010: This release contains a few minor bug fixes and hopefully the GDAL libraries for the 3.5 x86 build actually built to the correct directory this time.
    - DotNetNuke® Community Edition: 05.06.01 Beta: This is the initial Beta of DotNetNuke 5.6.1. See the DotNetNuke Roadmap for a full list of changes in this release.
    - MSBuild Extension Pack: December 2010: Release blog post. The MSBuild Extension Pack December 2010 release provides a collection of over 380 MSBuild tasks. A high level summary of what the tasks currently cover includes the following: System items: Active Directory, Certificates, COM+, Console, Date and Time, Drives, Environment Variables, Event Logs, Files and Folders, FTP, GAC, Network, Performance Counters, Registry, Services, Sound. Code: Assemblies, AsyncExec, CAB Files, Code Signing, DynamicExecute, File Detokenisation, GU...
    - TweetSharp: TweetSharp v2.0.0.0 - Preview 5: Documentation for this release may be found at http://tweetsharp.codeplex.com/wikipage?title=UserGuide&referringTitle=Documentation. Note: this code is currently preview quality. Preview 5 changes: Maintenance release with user-reported fixes. Preview 4 changes: Reintroduced fluent interface support via satellite assembly. Added entities support, entity segmentation, and ITweetable/ITweeter interfaces for client development. Numerous fixes reported by preview users. Preview 3 changes: Numerous ...
    - Silverlight Contrib: Silverlight Contrib 2010.1.0: 2010.1.0 new features: compatibility release for Silverlight 4 and Visual Studio 2010.
    - FlickrNet API Library: 3.1.4000: Newest release. Now contains a dedicated Windows Phone 7 DLL as well as all previous DLLs. Also contains Windows Help file documentation now as standard.
    - mojoPortal: 2.3.5.8: See release notes on mojoportal.com: http://www.mojoportal.com/mojoportal-2358-released.aspx Note that we have separate deployment packages for .NET 3.5 and .NET 4.0. The deployment package downloads on this page are pre-compiled and ready for production deployment; they contain no C# source code. To download the source code see the Source Code tab. I recommend getting the latest source code using TortoiseHG; you can get the source code corresponding to this release here.
    - Microsoft All-In-One Code Framework: Visual Studio 2010 Code Samples 2010-12-13: Code samples for Visual Studio 2010.

    New Projects
    - ANLP - Another .NET Lexer Parser: This project aims to have a lexer/parser working in Silverlight and help people to write their own grammar and make the lexer/parser available in Silverlight.
    - Better App Tab Shortcuts Firefox Extension: An extension for Firefox 4.0 which improves the shortcuts for selecting tabs.
    - BizTalk Map Converter: Converts BizTalk maps into Excel.
    - Campo Minado: Minesweeper in Silverlight.
    - CSharpHASH: Hash project is a college project concerning business software development. Written in C#.
    - DS_HW1: hw1
    - DynamicViewModel: MVVM using POCOs with .NET 4.0: This project aims to provide a way to implement the Model View ViewModel (MVVM) architectural pattern using Plain Old CLR Objects (POCOs) while taking full advantage of .NET 4.0's DynamicObject class.
    - Evolution Game: Evolution game that uses genetic algorithms to visualize the life of different species.
    - Fluent Parser: Fluent Parser is a library allowing you to build a syntactic analyser in C#. The grammar is built using only .NET objects in the form of a Parsing Expression Grammar (PEG).
    - GoogleMusic Player: Choose and play music from the newly launched Google Music http://www.google.co.in/music; create and save playlists. Using C#.
    - Gracefully update SharePoint 2010 document metadata: Users want to update the metadata for a document in a document library for which they don't have access, or when they want to update some read-only fields.
    - Hacker Passwords: Hacker Passwords generator - free open source
    - HTML5 Canvas FlowView: FlowView is a simple to use API for creating/implementing a flowview style element in HTML5/Canvas. The FlowView looks much like something from Apple's cover view.
    - L#: An F# program to provide functionality similar to that offered by Prolog, for F# oriented applications.
    - MapInfo Tools: Utilities for working with MapInfo files.
    - Notepad X: Notepad X is an alternative open source text editor for Microsoft Windows, with a lot of customization options, created to help users manage text documents, featuring tab navigation.
    - OpenFlyClient: An educational open source client written in C# for FlyFF. This is for educational and evaluation purposes only. Using this for private servers is not allowed.
    - Orkut API Library: Orkut API Library (more of a wrapper) for .NET makes it easier for .NET developers creating web and/or desktop applications to leverage the Orkut OpenSocial API from within the familiar Visual Studio environment.
    - PasteHtml.NET: PasteHtml.NET is a .NET API for using the PasteHtml.com service.
    - Photon: Silverlight MVVM Framework
    - RSATasync: Running analyses in RiskSpectrum PSA Professional software is a time-consuming task. RSATasync is a middleware which uses .NET 4 Parallel Extensions to parallelize analyses sent by the editor to the analyzer in the RiskSpectrum software.
    - The Killie01 Project: all killie01 projects (open source)
    - United Nations News for Windows Phone 7: Open source project for United Nations News for Windows Phone 7.
    - WCF SIP Stack: WCF SIP
    - Wordpress .NET: WPDotnet is a library to ease Wordpress integration with ASP.NET. Some server controls are implemented. If you don't want to use them, you can just use the classes.

    Read the article

  • Windows Azure Evolution &ndash; Preview Developer Portal

    - by Shaun
    With the MEET Windows Azure event on 7th June, there are many new features and updates in the Windows Azure platform. In the coming several posts I will try to cover some of them. And in this first post I would like to just have a quick walkthrough of the new preview developer portal.
    History of the Developer Portal
    If you have been working with Windows Azure since 2009 or 2010, you should remember the first version of the developer portal. It was built in HTML with very limited features, and I still remember the impression it left on me: the layout was not that attractive and it offered very limited features. In November 2010, along with the SDK 1.3 release, the developer portal got a big upgrade. In order to offer more usability and features, it was rebuilt on Silverlight. Hence it ran like a desktop application with many windows, lists, commands and context menus. From 2010 till now many features were added to this portal, such as remote desktop, co-admin, virtual connect, VM role, etc., and the portal itself became more and more complicated. But using Silverlight brought some problems. The first one is browser compatibility. As you know, in most mobile and tablet devices the browser doesn't allow rich-content plugins such as Flash and Silverlight. This means people cannot open and configure their Azure services from their iPad, iPhone, Windows Phone, etc., even though what they need may just be restarting a hosted service or viewing the status of their databases. Another problem is performance. Silverlight provides a rich experience to the users, but it also needs more bandwidth. So in this upgrade the preview developer portal goes back to HTML, with JavaScript, as a mobile-friendly, cross-browser, interactive web site.
    Preview Portal vs. Silverlight Portal
    Before I start to talk about the new preview portal I'd better highlight that this is a PREVIEW version, which means that even though you can do almost everything that is already in the old one, plus some cool new features I will mention in the coming posts, some things are still being developed and migrated. So sometimes you need to switch back to the old one. For example, in the preview portal there is no co-admin management function, no remote desktop function, and the SQL database management function will take you back to the old SQL Azure Management Portal. But as Microsoft said, these missing features will be moved into the preview portal over the next few months. Since the public URL of the developer portal, https://windows.azure.com/, has been changed to point to this preview one, you need to click the preview button at the top of the page and click the "Take me to the previous portal" link.
    Overview
    There are four parts in the preview portal. On the top is the header, which shows the account you are currently logged in with. If you click on the header it will show the top menu of Windows Azure, where you can navigate to the Windows Azure home page, the price information page, community and account, etc. The navigation bar is on the left hand side, with the categories listed below:
    - ALL ITEMS: All items in your Windows Azure account, including the web sites, services, databases, etc.
    - WEB SITES: The web sites in your Windows Azure account. It will only show the web sites you have. The linked resources will be shown if you drill down into a web site.
    - VIRTUAL MACHINES: The virtual machines that you have deployed to Azure. 
    - CLOUD SERVICES: All Windows Azure hosted services in your account.
    - SQL DATABASES: All SQL databases (SQL Azure) in your account.
    - STORAGE: All Windows Azure storage services in your account.
    - NETWORKS: The virtual networks (Windows Azure Connect) you have created.
    The available items will be listed in the main part of the page based on which category you currently selected. If there's no item, it will show a quick-create link. At the bottom of the page is the command and information bar. Based on what is selected and what is performed by the user, it will show the related information and commands. For example, in the image below, when I was creating a new web site, the information bar told me that my web site was being provisioned, and there were two commands in the command bar. And once it's ready, the command bar shows some commands that I can apply to my new web site. The "Web Sites" is a new feature introduced along with this upgrade. It gives us an easier and quicker way to establish a website from scratch or from some existing library. I will cover it in more detail in the next post. Also in the command bar you can create a service by clicking the NEW button. It will slide the creation panel up to you.
    Where's My Hosted Services
    The Windows Azure Hosted Services have been renamed to Cloud Services. Creating a new service is very easy. Just click the NEW button at the bottom of the page, and select CLOUD SERVICE and QUICK CREATE. This will create a blank hosted service without deployment and certificate. It just needs you to specify the service URL and the affinity/region. Then the service will be shown in the list. If you click the item, all its information will be shown in the main part. Since there's no package deployed to this service, currently we cannot see any information about it. But we can upload a package by using the command at the bottom. And as you can see, we can manage the configuration, instances and certificates, and we can scale our service up and down (change the VM size) and in and out (increase and decrease the instance count). Assuming I have created an ASP.NET MVC 3 web role project in Visual Studio and completed the package, I can then click the UPLOAD button in this page to deploy my package. In the pop-up window I just specify my deployment name, package file and configuration file. Also I can check the box below so that it will NOT warn me if this deployment has only one instance. Once we click the OK button our package is uploaded and provisioned by the platform. After a while the information bar shows that the service is ready. We can get the basic information about this service and deployment if we go to the dashboard page: for example, the usage overview diagram, status, URL, public IP address, etc. In the configure page we can view and change the CSCFG content such as the monitoring setting, connection strings and OS family. In the scale page we can increase and decrease the count of instances. And in the instances page we can view the status of all instances. And if your service is using some SQL databases and storage, they will be shown as linked resources under the linked resources page. And you can manage the certificates of this service as well under the certificates page.
    How About My Storage Services
    The storage services can be managed by clicking the STORAGE link in the navigation bar. And we can create a new storage service from the NEW button. 
    After specifying the storage name and region, it will be provisioned by the platform. If you want to copy or manage the storage keys you can just click the Manage Keys button at the bottom, which is very easy. What I want to highlight here is that you can monitor your storage service by enabling the monitoring configuration. Click the storage item in the list and navigate to the configure page. As you can see in the page, you can enable monitoring for blob, table and queue. And you can also enable logging of the requests that come to the storage. But as the tooltip in the page notes, enabling monitoring and logging will increase the usage of the storage, which means a bigger bill. So make sure you enable them properly. (The same settings can also be applied from code; see the short sketch at the end of this entry.)
    And My SQL Databases (SQL Azure)
    The last thing I want to quickly introduce is the SQL databases, formerly named SQL Azure. You can create a new SQL Database Server and a new database by clicking the ADD button under the SQL Database navigation item. In the pop-up window just specify the database name, the edition, size, collation and the server. You can select an existing SQL Database Server if you have one, or create a new one. If you select to create a new server, there will be one more step: specify the server login, password and the region. Once it's ready you can manage your databases as well as the servers in the portal. In a particular server you can update the firewall settings in its Configure page.
    So, What Else
    There are some other areas of the preview portal I didn't cover, such as the virtual machines, virtual networks and web sites. Regarding the virtual machines and web sites, I will talk about them in future separate posts. Regarding the virtual network, it is the Windows Azure Connect we are familiar with. But as I mentioned at the beginning of this post, the preview portal is still under development. Some features are not available here. For example, you cannot manage the co-admins of your subscriptions, you cannot open the remote desktop on your hosted services, and you cannot navigate to the Windows Azure Service Bus, Access Control and Caching (formerly named Windows Azure AppFabric) directly. In these cases you need to navigate back to the old portal. So in the coming several months we might need to use both sites.
    Summary
    In this post I quickly introduced the new Windows Azure developer portal. Since it has been rearranged and renamed, I demonstrated some features that exist in the old portal, such as how to create and deploy a hosted service, and how to provision a storage service and a SQL database. All features in the old portal have been, are being, or will be migrated into this new portal, but some of them are in a different category or page that we need to figure out.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
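    As an addendum to the storage monitoring section above, the same analytics settings can be turned on programmatically. A minimal sketch, assuming the Windows Azure Storage client library (Microsoft.WindowsAzure.Storage 2.x); the connection string variable and retention values are illustrative only:
        using Microsoft.WindowsAzure.Storage;
        using Microsoft.WindowsAzure.Storage.Shared.Protocol;

        var account = CloudStorageAccount.Parse(connectionString); // hypothetical connection string
        var blobClient = account.CreateCloudBlobClient();

        // Fetch the current service properties, switch on metrics and logging, save them back.
        ServiceProperties props = blobClient.GetServiceProperties();
        props.Metrics.MetricsLevel = MetricsLevel.ServiceAndApi;
        props.Metrics.RetentionDays = 7;
        props.Logging.LoggingOperations = LoggingOperations.All;
        props.Logging.RetentionDays = 7;
        blobClient.SetServiceProperties(props);
    The same GetServiceProperties/SetServiceProperties pair exists on the table and queue clients, and, as the portal tooltip warns, whatever you enable here is billed as ordinary storage capacity and transactions.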

    Read the article

  • CodePlex Daily Summary for Monday, May 10, 2010

    CodePlex Daily Summary for Monday, May 10, 2010

    New Projects
    - Azure Publish-Subscribe: Infrastructure implementing the publish-subscribe pattern in a Windows Azure context. The unit of publishing is an XML document with an optional l...
    - Bakalarska prace SCSF: This work deals with the technology of Microsoft Software Factories, which makes possible efficient development of applications under MS Windows.
    - Begtostudy-Test: NoteExpress User Tool (NEUT) is an open-source project for NoteExpress user developers to share their tools and ideas using secondary development. N...
    - CodeReview: Code Review is an open source development tool based on the same approach as FxCop - check the compiled assemblies to enforce good practices rule...
    - CommonFilter: CommonFilter is a subset of the CommonData project, containing just the functions and unit tests for filtering user input.
    - Custom SharepointDesigner Actions: These are a couple of custom actions that I use in SharePoint Designer to ease workflow creation.
    - Customer Portal Accelerator for Microsoft Dynamics CRM: The Customer Portal accelerator for Microsoft Dynamics CRM provides businesses the ability to deliver portal capabilities to their customers while ...
    - Danzica Asset Management System: Danzica is an asset management system written in C# that can ingest all of your hardware management systems into a single cohesive portal, giving y...
    - Dot2Silverlight: Dot2Silverlight is a project that enables rendering graphs (written in Dot format) in Silverlight. dot2silverlight, dot, silverlight, C#, graphviz...
    - eGrid_Windows7: This project will include a new platform independent version of the eGrid project. The new version will run on Windows 7, using WPF 4 and the Surfa...
    - Game project JAMK: game project at JAMK
    - Hamcast for multi station coordination: Amateur Radio multiple station operation tends to have loggers and operators striving to get particular information from each other, like what IP a...
    - Headspring Labs: Headspring Labs showcases presentations, samples, and starter kits developed by Headspring.
    - Hongrui Software Development Management Platform: Hongrui Software Development Management Platform 鸿瑞软件开发管理平台 (HrSDMP)
    - LinkField: A multi-column Field with links.
    - migre.me plugin for Seesmic Desktop Platform: migre.me is a Brazilian service created to reduce the size of URLs and provide tracking data for shortened links. migre.me plugin for Seesmic ...
    - MISAO: MISAO is a presentation tool.
    - Mongodb Management Studio: Mongodb Management Studio makes it easier for mongodb users (including DBAs/developers/administrators) to use mongodb. It's developed in ASP.NET 4.0...
    - MS Build for DotNetNuke module development: Automate the task of creating DotNetNuke module PA packs easily using MSBuild. Create manifest files, include version #'s automatically and more.
    - Rubyish: C# extensions providing a rubyish syntax to C#
    - SharePoint 2007 Web Parts: The goal of this project is to develop a set of web parts for SharePoint 2007.
    - SharePoint 2010 Web Parts: The goal of this project is to develop a set of web parts for SharePoint 2010.
    - Taxomatic: Taxomatic adds the ability to bulk create Content Types and Columns in a SharePoint site collection. It also caters for the export of the Content T...
    - trackuda: trackuda - track the motion!
    - Web Camera Shooter: Small tool for taking screenshots from a web camera and saving them to disk, with a few image filters as options.
    - Windows API Code Pack Contrib: Extensions to Windows API Code Pack

    New Releases
    - Alan Platform: Technical Preview 1: At the center of this release is the interface; or rather, not the interface itself but the principle on which it is built. Using a couple of the provided properties, you can...
    - Begtostudy-Test: Test: Don't Download.
    - BFBC2 PRoCon: PRoCon 0.3.5.1: Release Notes Coming
    - CBM-Command: 2010-05-09: Release Notes - 2010-05-09. New features: FILE COPYING. Changes: Removed the Swap Panels functionality to make room for file copying. It's still in th...
    - CBM-Command: 2010-05-10: Release Notes - 2010-05-10. New features: Launching PRG files. New color schemes to better match the C128 and C64 platforms. Function keys. Changes: ...
    - CommonFilter: CommonFilter0.3D: This initial release of CommonFilter 0.3D is a subset of the CommonData solution that contains just the filter functions.
    - Custom SharepointDesigner Actions: Custom SPD Actions v1.0: This is the first version of the actions library; it includes: Calling a web service. Converting a string to a double. Converting a string to an integ...
    - Customer Portal Accelerator for Microsoft Dynamics CRM: Customer Portal Accelerator for Dynamics CRM: The Customer Portal accelerator for Microsoft Dynamics CRM provides businesses the ability to deliver portal capabilities to their customers while ...
    - EPiMVC - EPiServer CMS with ASP.NET MVC: EPiMVC CTP1: First release that mainly addresses routing. The release is described in greater detail here.
    - Helium Frog Animator: Helium Frog 2_06SW3: This is a first release (not for end users as it is not packaged) of mods intended to make the program run easier on netbooks and customised for us...
    - HouseFly controls: HouseFly controls alpha 0.9.8.0: HouseFly controls release 0.9.8.0 alpha
    - ID3Tag.Net: ID3TagLib.Net 1.1: The ID3Tag team is proud to release a new version of the ID3Tag lib! New features: Removing of ID3V1 and ID3V2 tags. Logging operations (if enab...
    - IT Tycoon: IT Tycoon 0.2.1: Switched to .NET Framework 4 and implemented some Parallel.ForEach calls.
    - MeaMod Playme: MeaMod Playme 0.9.6.5 Bucking Bunny: Version 0.9.6.5 | Change Set: 48050. Added Playme Store. Added Buy Album. Added OCDS/Core system. Adde...
    - migre.me plugin for Seesmic Desktop Platform: migre.me plugin 0.7.0.1: Initial release. Compatible with Seesmic Desktop Platform 0.7.0.772.
    - MISAO: Ver. 5 Alpha (2010-05-10): Alpha version
    - Multiwfn: multiwfn1.3.2_source: multiwfn1.3.2_source
    - Pocket Wiki: Desktop Wiki (in dev): Full screen wiki for the PC - supports the same parsers that Pocket Wiki does. Currently in development but usable. Left side shows a listbox of all ...
    - Rubyish: alpha: initial build
    - SevenZipLib Library: v9.13 beta: New release to match 7-zip 9.13 beta
    - Shake - C# Make: Shake v0.1.11: Initial version of Shake's services (API); command line parameters (dynamic) now available via the ShakeServices class. Introducing interfaces and bas...
    - SharpNotes: SharpNotes (New): This is the release of SharpNotes.
    - SharpNotes: SharpNotes Source (New): This is the source release of SharpNotes.
    - StackOverflow Desktop Client in C# and WPF: StackOverflow Client 1.0: Improved UI. Showing votes/answers/views on popups. Bug fixes.
    - Taxomatic: Design_0: Design document
    - The Movie DB API: TMDB API v1.2: Updated to reflect changes in The Movie DB API.
    - Web Camera Shooter: 1.0.0.0: Initial release. Unstable. Often an AccessViolation exception from the Touchless SDK.
    - WF Personalplaner: Personalplaner v1.7.29.10127: Drag and drop was added to the plan and to the grids in the dialog under Plan\Plan-Layout anpassen. More small bugfixes.
    - Windows Phone 7 Panorama & Pivot controls: panorama + pivot controls v0.7 (samples included): Panorama and Pivot controls source code + sample projects. Phone.Controls.Samples: source code for the PanoramaControl and PivotControl. Pic...
    - XmlCodeEditor: Release 0.9 Alpha: Release 0.9 Alpha

    Most Popular Projects
    - WBFS Manager
    - Rawr
    - AJAX Control Toolkit
    - Microsoft SQL Server Product Samples: Database
    - Silverlight Toolkit
    - Windows Presentation Foundation (WPF)
    - patterns & practices – Enterprise Library
    - Microsoft SQL Server Community & Samples
    - ASP.NET
    - PHPExcel

    Most Active Projects
    - patterns & practices – Enterprise Library
    - Rawr
    - The Information Literacy Education Learning Environment (ILE)
    - Mirror Testing System
    - Caliburn: An Application Framework for WPF and Silverlight
    - jQuery Library for SharePoint Web Services
    - white
    - patterns & practices - Unity
    - TweetSharp
    - BlogEngine.NET

    Read the article
