Search Results

Search found 16971 results on 679 pages for 'blogs'.

Page 73/679

  • Biff stroganoff

    - by leffen
    Beef stroganoff recipe for 4 people.   Preparation: Peel and chop the onion and garlic and fry until soft and translucent, then remove from the pot. Cut the meat into cubes and brown it quickly over high heat, a little at a time. Deglaze the pot with a little water between each batch. Return the meat, along with the deglazing liquid, to the pot and reduce the heat. Add the onion, flour, tomato paste and sour cream. Season to taste with salt and pepper. Let the pot simmer for about 4 minutes over low heat. Serve with rice or boiled potatoes, preferably with bread and a fresh salad.   Ingredients: 500 g sirloin (ytrefilet), 1 large onion, 3 tbsp margarine, 1 tsp salt, 2 cloves of garlic, 1 pinch of pepper, 1 tbsp wheat flour, 2-3 tbsp tomato paste, 3 dl sour cream.

    Read the article

  • More Mobile Payments

    - by David Dorf
    In the previous post I discussed Bump payments from PayPal, but that's not the only innovative way to make purchases using your phone. Verizon recently announced a partnership with Danal that allows shoppers to charge online purchases to their Verizon bill. For e-commerce sites that accept this type of payment, it's a two-step process. At checkout, the shopper enters their mobile number and billing zip code. Then an SMS message is sent to the mobile phone containing a one-time code that must be entered on the e-commerce site. This two-factor authentication seems pretty secure, and no pre-registration or credit card is necessary. There's a $25-a-month maximum, but I bet the limit gets raised as Verizon gets more comfortable with security. Merchants are charged a fee similar to credit card fees. Another example of mobile payments is offered by BlingNation. Customers attach a small NFC sticker to their phones that allows them to "tap" the POS device to make a payment. The NFC chip is connected to their checking account, so the transaction is treated as a debit payment. Text messages confirming the payments are sent to the mobile so shoppers can easily verify their purchases. BlingNation is working with banks like Adirondack Trust Company and The State Bank of La Junta in Colorado. Heck, you can even send money to inmates in the Arkansas prison system using your mobile phone, now that the state of Arkansas supports payments via its mobile website. Everyone is getting into the act now.

    Read the article

  • Silverlight Cream for May 02, 2010 -- #854

    - by Dave Campbell
    In this Issue: Michael Washington, Jason Young(-2-, -3-), Phil Middlemiss, Jeremy Likness, Victor Gaudioso, Kunal Chowdhury, Antoni Dol, and Jacek Ciereszko(-2-).
    Shoutout: Victor Gaudioso has aggregated All of My Silverlight Video Tutorials in One Place (revised again 05.02.10).
    From SilverlightCream.com:
    - Unit Testing A Silverlight 'Simplified MVVM' Modal Popup: Michael Washington's latest 'Simplified MVVM' post is published at The Code Project and is on Unit Testing with MVVM.
    - Input Localization in Silverlight without IValueConverter: Jason Young sent me some links to posts I've not seen... this first one is on localization by using the Language property of the Root Visual.
    - MVVM – The Model - Part 1 – INotifyPropertyChanged: Jason Young's next archive post is the first of a series on MVVM and Silverlight 4, implementing a simple ViewModel base class.
    - Silverlight, WCF, and ASP.Net Configuration Gotchas: Jason Young worked at tracking down the answers to some forum questions and in the process has produced a post of 'gotchas' with using WCF in Silverlight.
    - A Chrome and Glass Theme - Part 5: Phil Middlemiss has part 5 of his Chrome and Glass Theme tutorial up... in this one, he's looking at the Progress Bar and Slider. Download the files and play along.
    - Silverlight Out of Browser (OOB) Versions, Images, and Isolated Storage: Jeremy Likness has a post up responding to his 3 major questions about OOB apps, and he has the code up for the sample too.
    - New Silverlight Video Tutorial: How to Make a Slide In/Out Navigation Bar – All in Blend: Victor Gaudioso's latest video tutorial is on building a Behavior for a Slide in/out Navigation bar... kinda like the menu sliders on my GlyphMap Utility... only easier!
    - Command Binding in Silverlight 4 (Step-by-Step): Kunal Chowdhury has another post up at DotNetFunda, and this time he's talking about Command Binding in Silverlight 4 with an eye toward MVVM usage.
    - The Silverlight PageCurl implementation: Antoni Dol has a post up about doing a Page Curl effect in Silverlight. He has a manual up on the effect and full application code.
    - How to center and scale Silverlight applications using ViewBox control: Jacek Ciereszko has a couple of posts up about centering and scaling your app with the ViewBox control. This first one is a code solution. Source is available, as is a Polish version.
    - Silverlight Center And Scale Behavior: In Jacek Ciereszko's 2nd post, he provides a Behavior that handles the scaling and centering from the previous post.
    Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group
    Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • At the Java DEMOgrounds - JavaFX

    - by Janice J. Heiss
    JavaFX has made rapid progress in the last year, as is evidenced by the wealth of demos on display. A few questions appear to be prominent in the minds of JavaFX enthusiasts. Here are some of those questions, with answers provided by Oracle's JavaFX team.
    When will the rest of the JavaFX code be available in open source?
    Oracle has started to open source JavaFX. The existing platform code will finish being committed to OpenJFX by the end of the year.
    Why should I use JavaFX instead of HTML5?
    We see JavaFX as complementary to HTML5, and most companies we talk to react positively once they understand how they can benefit from a hybrid solution. As most HTML5 developers will tell you, the biggest obstacle to deploying HTML5 applications is fragmentation. JavaFX offers a convenient way to render HTML and JavaScript within its WebView component, which provides the same level of quality and features across Windows, Mac, and Linux. Additionally, JavaScript in WebView can make calls into the Java code, and vice versa, allowing developers to tap into the best of both worlds.
    What is the market penetration of JavaFX?
    It is currently limited, as we only made JavaFX available on Mac and Linux in August, but we expect JavaFX to be present on millions of desktop-type systems now that JavaFX is included as part of the JRE. We have also significantly lowered the level of effort required to deploy an application bundling the JRE and JavaFX runtime libraries. Finally, we are seeing a lot of interest from companies operating in the embedded market, who have found it hard to develop compelling UIs with existing technologies.
    Below are summaries of the JavaFX demos on display at JavaOne 2012.
    JavaFX Ensemble
    Ensemble is a collection of over 100 JavaFX samples packaged as a JavaFX application. This demo is especially useful to those new to JavaFX, or those not familiar with its latest features (e.g. canvas, color picker). Ensemble is the reference for getting familiar with JavaFX functionality. Each sample can be run from within Ensemble, and the API documentation and source code for each sample are available alongside it. The sample source code can be saved as a NetBeans project for convenience, or can be copied as-is into any other Java IDE. The version of Ensemble shown is packaged as a native Windows application, including the JRE and JavaFX libraries. It was created with the JavaFX packager, which provides multiple packaging options and frees developers from the cumbersome and error-prone process of packaging a Java application.
    FX Experience Tools
    FX Experience Tools is a JavaFX application that provides different utilities to create new skins for your JavaFX applications. One of the most powerful features of JavaFX is the ability to skin applications via CSS, and since not all Java developers are familiar with CSS, these utilities are a great starting point: they make it easy to create new themes for JavaFX applications even if you are not familiar with CSS. FX Experience Tools is itself packaged as a native application including the JRE and JavaFX runtime libraries, and it shows how this type of deployment simplifies the packaging of Java applications without requiring developers to master the intricacies of Java application packaging. The download site for FX Experience Tools is http://fxexperience.com/2012/03/announcing-fx-experience-tools/
    JavaFX Scene Builder
    JavaFX Scene Builder is a visual layout tool that lets users quickly design the UI of their JavaFX applications without coding. Users can drag and drop UI components, modify their properties, and apply style sheets, and the FXML code for the layout is automatically generated in the background. The result is an FXML file that can then be combined with a Java project by binding the UI to the application's logic. Developers can easily create user interfaces for their applications, as well as separate the application's UI from the application logic for easier maintenance. Attendees can get this app by going to javafx.com and checking the link at the top of the "Overview" page. Scene Builder allows developers to easily lay out JavaFX UI controls, charts, shapes, and containers, so that you can quickly prototype user interfaces. It generates FXML, an XML-based markup language that enables users to define an application's user interface separately from the application logic. Scene Builder can be used in combination with any Java IDE, but is more tightly integrated with NetBeans IDE. It is written as a JavaFX application, with native desktop integration on Windows and Mac OS X - a perfect example of a JavaFX application packaged as a native application. Scene Builder is available for your preferred development platform: besides the GA release on Windows and Mac, a Developer Preview of Scene Builder for Linux has just been made available.
    Scenic View
    Scenic View is a tool that can be used to understand the current state of your application UI, and to easily manipulate properties of the scenegraph without having to keep editing your code. Creating UIs is a complex process, and detecting layout issues, editing the code, and then recompiling to test the app again can be hard and tedious. Scenic View is a great diagnostics tool that helps developers identify these issues and correct them at runtime. Attendees can get Scenic View by going to javafx.com, selecting the "Community" tab, and clicking the link under the "Third Party Tools and Utilities" section. Scenic View allows developers to easily examine the state of a JavaFX application scenegraph while the application is running. Some of the latest features added to Scenic View include event monitoring, javadoc browsing, and contextual menus. The download site for Scenic View is http://fxexperience.com/scenic-view/
    Conference Tour
    Conference Tour is an application that lets users discover some of the major Java conferences throughout the world. The Conference Tour application shows how simple it is to mix JavaFX and HTML5 into a single, interactive application. Attendees get Conference Tour here. JavaFX includes a Web engine based on WebKit that provides a consistent web interface to render HTML5 across operating systems, within a JavaFX application. JavaFX features a bi-directional bridge that allows Java APIs to call JavaScript within WebView, or allows JavaScript to make calls to Java APIs. This allows developers to leverage the best of both worlds. Java EE developers can take advantage of WebView and the JavaScript-Java bridge to allow their HTML clients to seamlessly bypass the Web browser's sandbox to access native system resources, providing a richer user experience.
    FXMediaPlayer
    FXMediaPlayer is an application that lets developers check different media functionality in JavaFX, such as the synthesizer or support for HTTP Live Streaming (HLS). This demo shows how developers can embed video content in their Java applications. JavaFX leverages the underlying video (e.g., H.264) and audio (e.g., AAC) codecs on the user's computer. JavaFX APIs allow developers to interact with the video content (e.g. play/pause, or programmable markers). Some of the latest media features introduced in JavaFX 2.2 include HTTP Live Streaming (HLS).
    Obviously there is a lot for JavaFX enthusiasts to chew on!
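    As a rough illustration of that JavaScript-to-Java bridge (a minimal sketch, not one of the JavaOne demos; the JavaApp class and the 'app' member name are assumptions made for this example):

        import javafx.application.Application;
        import javafx.beans.value.ChangeListener;
        import javafx.beans.value.ObservableValue;
        import javafx.concurrent.Worker;
        import javafx.scene.Scene;
        import javafx.scene.web.WebEngine;
        import javafx.scene.web.WebView;
        import javafx.stage.Stage;
        import netscape.javascript.JSObject;

        public class BridgeSketch extends Application {

            // Hypothetical object exposed to JavaScript running inside WebView.
            public static class JavaApp {
                public String greet(String name) {
                    return "Hello " + name + " (from Java)";
                }
            }

            @Override
            public void start(Stage stage) {
                final WebView webView = new WebView();
                final WebEngine engine = webView.getEngine();

                // When the page finishes loading, publish a Java object as 'app' on the JS window,
                // then call back into JavaScript from Java through the same engine.
                engine.getLoadWorker().stateProperty().addListener(new ChangeListener<Worker.State>() {
                    public void changed(ObservableValue<? extends Worker.State> obs,
                                        Worker.State oldState, Worker.State newState) {
                        if (newState == Worker.State.SUCCEEDED) {
                            JSObject window = (JSObject) engine.executeScript("window");
                            window.setMember("app", new JavaApp());
                            engine.executeScript("document.body.textContent = app.greet('WebView');");
                        }
                    }
                });

                engine.loadContent("<html><body></body></html>");
                stage.setScene(new Scene(webView, 400, 200));
                stage.show();
            }

            public static void main(String[] args) {
                launch(args);
            }
        }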

    Read the article

  • Web Services for Info Explorer Zones

    - by Anthony Shorten
    One of the most interesting uses for XAI and configurable objects is the exposure of a query portal as a Web Service. Let me illustrate this with an example. Say you have an interface that requires a list of data from a number of product tables. In the past you would have to build a Java program to do this with SQL and then use an application service, but it is now possible with just configuration.
    The first step in the process is to create the SQL you want to use for the interface. It can be any valid static SQL or use host variables for the WHERE clause (we call that filtered). Once you are happy with the SQL (and it performs acceptably) you can incorporate that SQL into an Info Explorer Zone. You can use any of the explorer zone types, but I typically recommend F1-DE-SINGLE as it supports a single SQL statement with multiple filters (up to 15) as well as hidden filters (up to 5). Hidden filters are typically not displayed in the UI as criteria (remember explorer zones can be used on the user interface as well), but for web services they can be used as normal filters (this means you can use up to 20 filters all up).
    Once you are happy with the zone, you now need to define it as a Business Service. We have a generic service called FWLZDEXP which allows an explorer zone to be defined as a Business Service. If you open any Business Service based upon FWLZDEXP you will see some examples. The schema is standard and pretty self-explanatory in terms of the structure. The schema pattern looks like this:
    - Zone element - maps to the ZONE_CD element, and the default value is the zone name you just created. This links the business service to the zone.
    - Filter elements - you name the filters as you like, but the mapField is set to Fx_VALUE, where x is the filter number corresponding to the filter element in the zone definition.
    - Hidden filter elements - you name the filters as you like, but the mapField is set to Hx_VALUE, where x is the filter number corresponding to the hidden filter element in the zone definition.
    - Results group - this holds the elements of the result set. Each element in your result set has a tag name and is linked to the COL_VALUE mapField, and the row element lists the SEQNO of the column. This corresponds to the column number in the result set of the zone.
    An example schema is shown below for the F1-USGRACML zone, which returns the access modes for a user group and application service filters. In the example, the userGroup and applicationService elements are the filters and the rows would contain a list of accessModeDescr. This is just a simple example to illustrate the point. There are lots of examples in the product that you can investigate. One recommendation, to save time, is to copy the schema from one of the examples rather than typing it from scratch. You can simply modify the tags and other elements to suit your needs.
    Once the Business Service is defined it can simply be exposed as a Web Service by registering an XAI Inbound Service using the Business Service definition as a basis. You now have a Web Service based upon an Info Explorer Zone. This is one of my favorite components as it allows interfaces to be simplified.
    This will be my last blog entry for this year. I hope you all have a great and safe Christmas and an even greater new year. Next year promises to be an exciting year and I look forward to communicating exciting developments we are working on at the moment as they are released.
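    A rough sketch of what that F1-USGRACML schema could look like follows. The element names and exact attribute syntax here are illustrative assumptions (copy a delivered FWLZDEXP-based schema for the real thing); the mapField values and the row/SEQNO idea are the parts described above:

        <schema>
          <!-- Links the business service to the zone -->
          <zone mapField="ZONE_CD" default="F1-USGRACML"/>
          <!-- Filters: Fx_VALUE corresponds to filter x in the zone definition -->
          <userGroup mapField="F1_VALUE"/>
          <applicationService mapField="F2_VALUE"/>
          <!-- Results group: each element maps to COL_VALUE; the row reference
               (shown here as an attribute, an assumption) carries the column SEQNO -->
          <results type="group">
            <accessMode mapField="COL_VALUE" row="1"/>
            <accessModeDescr mapField="COL_VALUE" row="2"/>
          </results>
        </schema>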

    Read the article

  • Using a "white list" for extracting terms for Text Mining

    - by [email protected]
    In Part 1 of my post on "Generating cluster names from a document clustering model" (part 1, part 2, part 3), I showed how to build a clustering model from text documents using Oracle Data Miner, which automates preparing data for text mining. In this process we specified a custom stoplist and lexer and relied on Oracle Text to identify important terms. However, there is an alternative approach, the white list, which uses a thesaurus object with the Oracle Text CTXRULE index to allow you to specify the important terms.
    INTRODUCTION
    A stoplist is used to exclude, i.e., black list, specific words in your documents from being indexed. For example, words like a, if, and, or, and but normally add no value when text mining. Other words can also be excluded if they do not help to differentiate documents, e.g., the word Oracle is ubiquitous in the Oracle product literature. One problem with stoplists is determining which words to specify. This usually requires inspecting the terms that are extracted, manually identifying which ones you don't want, and then re-indexing the documents to determine if you missed any. Since a corpus of documents could contain thousands of words, this could be a tedious exercise. Moreover, since every word is considered as an individual token, a term excluded in one context may be needed to help identify a term in another context. For example, in our Oracle product literature example, the words "Oracle Data Mining" taken individually are not particularly helpful. The term "Oracle" may be found in nearly all documents, as with the term "Data." The term "Mining" is more unique, but could also refer to the mining industry. If we exclude "Oracle" and "Data" by specifying them in the stoplist, we lose valuable information. But if we include them, they may introduce too much noise. Still, when you have a broad vocabulary or don't have a list of specific terms of interest, you rely on the text engine to identify important terms, often by computing the term frequency - inverse document frequency metric. (This is effectively a weight associated with each term indicating its relative importance in a document within a collection of documents. We'll revisit this later.) The results using this technique are often quite valuable.
    As noted above, an alternative to the subtractive nature of the stoplist is to specify a white list, or a list of terms--perhaps multi-word--that we want to extract and use for data mining. The obvious downside to this approach is the need to specify the set of terms of interest. However, this may not be as daunting a task as it seems. For example, in a given domain (Oracle product literature), there is often a recognized glossary, or a list of keywords and phrases (Oracle product names, industry names, product categories, etc.). Being able to identify multi-word terms, e.g., "Oracle Data Mining" or "Customer Relationship Management", as a single token can greatly increase the quality of the data mining results. The remainder of this post and subsequent posts will focus on how to produce a dataset that contains white list terms, suitable for mining.
    CREATING A WHITE LIST
    We'll leverage the thesaurus capability of Oracle Text. Using a thesaurus, we create a set of rules that are in effect our mapping from single and multi-word terms to the tokens used to represent those terms. For example, "Oracle Data Mining" becomes "ORACLEDATAMINING." First, we'll create and populate a mapping table called my_term_token_map.
    All text has been converted to upper case and values in the TERM column are intended to be mapped to the token in the TOKEN column.

        TERM                                TOKEN
        DATA MINING                         DATAMINING
        ORACLE DATA MINING                  ORACLEDATAMINING
        11G                                 ORACLE11G
        JAVA                                JAVA
        CRM                                 CRM
        CUSTOMER RELATIONSHIP MANAGEMENT    CRM
        ...

    Next, we'll create a thesaurus object my_thesaurus and a rules table my_thesaurus_rules:

        CTX_THES.CREATE_THESAURUS('my_thesaurus', FALSE);

        CREATE TABLE my_thesaurus_rules (main_term     VARCHAR2(100),
                                         query_string  VARCHAR2(400));

    We next populate the thesaurus object and rules table using the term token map. A cursor is defined over my_term_token_map. As we iterate over the rows, we insert a synonym relationship 'SYN' into the thesaurus. We also insert into the table my_thesaurus_rules the main term, and the corresponding query string, which specifies synonyms for the token in the thesaurus.

        DECLARE
          cursor c2 is
            select token, term
            from my_term_token_map;
        BEGIN
          for r_c2 in c2 loop
            CTX_THES.CREATE_RELATION('my_thesaurus',r_c2.token,'SYN',r_c2.term);
            EXECUTE IMMEDIATE 'insert into my_thesaurus_rules values
                               (:1,''SYN(' || r_c2.token || ', my_thesaurus)'')'
            using r_c2.token;
          end loop;
        END;

    We are effectively inserting the token to return and the corresponding query that will look up synonyms in our thesaurus into the my_thesaurus_rules table, for example:

        'ORACLEDATAMINING'        SYN ('ORACLEDATAMINING', my_thesaurus)

    At this point, we create a CTXRULE index on the my_thesaurus_rules table:

        create index my_thesaurus_rules_idx on
               my_thesaurus_rules(query_string)
               indextype is ctxsys.ctxrule;

    In my next post, this index will be used to extract the tokens that match each of the rules specified. We'll then compute the tf-idf weights for each of the terms and create a nested table suitable for mining.
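    As a preview of how that CTXRULE index can be consumed (a minimal sketch; the :doc_text bind variable standing in for a document's text is an assumption for illustration), Oracle Text rule indexes are queried with the MATCHES operator:

        -- Return the white-list tokens whose thesaurus-based rules match a given document.
        SELECT r.main_term AS token
        FROM   my_thesaurus_rules r
        WHERE  MATCHES(r.query_string, :doc_text) > 0;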

    Read the article

  • WebSocket and Java EE 7 - Getting Ready for JSR 356 (TOTD #181)

    - by arungupta
    WebSocket is developed as part of the HTML5 specification and provides a bi-directional, full-duplex communication channel over a single TCP socket. It provides a dramatic improvement over the traditional approaches of Polling, Long-Polling, and Streaming for two-way communication, with no latency from establishing new TCP connections for each HTTP message. There is a WebSocket API and the WebSocket Protocol. The Protocol defines "handshake" and "framing". The handshake defines how a normal HTTP connection can be upgraded to a WebSocket connection. The framing defines the wire format of the message; the design philosophy is to keep the framing to a minimum to avoid overhead. Both text and binary data can be sent using the API.
    WebSocket may look like a competing technology to Server-Sent Events (SSE), but it is not. Here are the key differences:
    - WebSocket can send and receive data from a client. A typical example of WebSocket is a two-player game or a chat application. Server-Sent Events can only push data to the client. A typical example of SSE is a stock ticker or news feed. With SSE, XMLHttpRequest can be used to send data to the server.
    - For server-only updates, WebSocket has extra overhead and the programming can be unnecessarily complex. SSE provides a simple and easy-to-use model that is much better suited.
    - SSEs are sent over traditional HTTP, so no modification is required on the server side. WebSockets require servers that understand the protocol.
    - SSE has several features that are missing from WebSocket, such as automatic reconnection, event IDs, and the ability to send arbitrary events. The client automatically tries to reconnect if the connection is closed; the default wait before trying to reconnect is 3 seconds and can be configured by including a "retry: XXXX\n" header, where XXXX is the number of milliseconds to wait before trying to reconnect. The event stream can include a unique event identifier, which allows the server to determine which events need to be fired to each client in case the connection is dropped in between. The data can span multiple lines and can be of any text format as long as the EventSource message handler can process it.
    - WebSockets provide true real-time updates; SSE can be configured to come close to real-time by setting appropriate timeouts.
    OK, so all excited about WebSocket? Want to convert your POJOs into WebSocket endpoints? websocket-sdk and GlassFish 4.0 are here to help! The complete source code shown in this project can be downloaded here.
    On the server side, the WebSocket SDK converts a POJO into a WebSocket endpoint using simple annotations. Here is what a WebSocket endpoint looks like:

        @WebSocket(path="/echo")
        public class EchoBean {

            @WebSocketMessage
            public String echo(String message) {
                return message + " (from your server)";
            }
        }

    In this code, "@WebSocket" is a class-level annotation that declares a POJO to accept WebSocket messages. The path at which the messages are accepted is specified in this annotation. "@WebSocketMessage" indicates the Java method that is invoked when the endpoint receives a message. This method implementation echoes the received message concatenated with an additional string.
    The client-side HTML page looks like:

        <div style="text-align: center;">
            <form action="">
                <input onclick="send_echo()" value="Press me" type="button">
                <input id="textID" name="message" value="Hello WebSocket!" type="text"><br>
            </form>
        </div>
        <div id="output"></div>

    WebSocket allows full-duplex communication.
    So the client, a browser in this case, can send a message to a server, a WebSocket endpoint in this case. And the server can send a message to the client at the same time. This is unlike HTTP, which follows a "request" followed by a "response". In this code, the "send_echo" method in the JavaScript is invoked on the button click. There is also a <div> placeholder to display the response from the WebSocket endpoint. The JavaScript looks like:

        <script language="javascript" type="text/javascript">
            var wsUri = "ws://localhost:8080/websockets/echo";
            var websocket = new WebSocket(wsUri);
            websocket.onopen = function(evt) { onOpen(evt) };
            websocket.onmessage = function(evt) { onMessage(evt) };
            websocket.onerror = function(evt) { onError(evt) };

            function init() { output = document.getElementById("output"); }
            function send_echo() { websocket.send(textID.value); writeToScreen("SENT: " + textID.value); }
            function onOpen(evt) { writeToScreen("CONNECTED"); }
            function onMessage(evt) { writeToScreen("RECEIVED: " + evt.data); }
            function onError(evt) { writeToScreen('<span style="color: red;">ERROR:</span> ' + evt.data); }
            function writeToScreen(message) {
                var pre = document.createElement("p");
                pre.style.wordWrap = "break-word";
                pre.innerHTML = message;
                output.appendChild(pre);
            }

            window.addEventListener("load", init, false);
        </script>

    In this code:
    - The URI to connect to on the server side is of the format ws://<HOST>:<PORT>/websockets/<PATH>. "ws" is a new URI scheme introduced by the WebSocket protocol, and <PATH> is the path on the endpoint where the WebSocket messages are accepted. In our case, it is ws://localhost:8080/websockets/echo. WEBSOCKET_SDK-1 will ensure that the context root is included in the URI as well.
    - The WebSocket is created as a global object so that the connection is created only once. This object establishes a connection with the given host, port and the path at which the endpoint is listening.
    - The WebSocket API defines several callbacks that can be registered on specific events. The "onopen", "onmessage", and "onerror" callbacks are registered in this case. The callbacks print a message on the browser indicating which one is called and additionally print the data sent/received.
    - On the button click, the WebSocket object is used to transmit text data to the endpoint. Binary data can be sent as one blob or using buffering.
    The HTTP request headers sent for the WebSocket call are:

        GET ws://localhost:8080/websockets/echo HTTP/1.1
        Origin: http://localhost:8080
        Connection: Upgrade
        Sec-WebSocket-Extensions: x-webkit-deflate-frame
        Host: localhost:8080
        Sec-WebSocket-Key: mDbnYkAUi0b5Rnal9/cMvQ==
        Upgrade: websocket
        Sec-WebSocket-Version: 13

    And the response headers received are:

        Connection: Upgrade
        Sec-WebSocket-Accept: q4nmgFl/lEtU2ocyKZ64dtQvx10=
        Upgrade: websocket
        (Challenge Response): 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00

    The same headers can be inspected in Chrome's developer tools. The complete source code shown in this project can be downloaded here. The builds from websocket-sdk are integrated in GlassFish 4.0 builds. Would you like to live on the bleeding edge? Then follow the instructions below to check out the workspace and install the latest SDK:
    1. Check out the source code: svn checkout https://svn.java.net/svn/websocket-sdk~source-code-repository
    2. Build and install the trunk in your local repository: mvn install
    3. Copy "./bundles/websocket-osgi/target/websocket-osgi-0.3-SNAPSHOT.jar" to "glassfish3/glassfish/modules/websocket-osgi.jar" in your latest GlassFish 4 promoted build.
    Notice that you need to overwrite the JAR file. Anybody interested in building a cool application using WebSocket and getting it running on GlassFish? :-) This work will also feed into JSR 356 - Java API for WebSocket. On a lighter note, there seems to be less agreement on the name. Here are some of the options that are prevalent:
    - WebSocket (W3C API, though the URL is www.w3.org/TR/websockets)
    - Web Socket (HTML5 Demos - html5demos.com/web-socket)
    - Websocket (Jenkins Plugin - wiki.jenkins-ci.org/display/JENKINS/Websocket%2BPlugin)
    - WebSockets (used by Mozilla - developer.mozilla.org/en/WebSockets, but WebSocket is used as well)
    - Web sockets (HTML5 Working Group - www.whatwg.org/specs/web-apps/current-work/multipage/network.html)
    - Web Sockets (Chrome Blog - blog.chromium.org/2009/12/web-sockets-now-available-in-google.html)
    I prefer "WebSocket" as that seems to be the most common usage and is used by the W3C API as well. What do you use?
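    For comparison, a minimal sketch of what the same endpoint might look like with the javax.websocket annotations targeted by JSR 356 (an assumption about the standard API, not part of the original websocket-sdk example):

        import javax.websocket.OnMessage;
        import javax.websocket.server.ServerEndpoint;

        // Echo endpoint expressed with the standard javax.websocket annotations.
        @ServerEndpoint("/echo")
        public class StandardEchoEndpoint {

            @OnMessage
            public String echo(String message) {
                return message + " (from your server)";
            }
        }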

    Read the article

  • Auto-Suggest via 'Trie' (Pre-fix Tree)

    - by Strenium
    Auto-Suggest (Auto-Complete) “thing” has been around for a few years. Here’s my little snippet on the subject. For one of my projects, I had to deal with a non-trivial set of items to be pulled via auto-suggest used by multiple concurrent users. Simple, dumb iteration through a list in local cache or back-end access didn’t quite cut it. Enter a nifty little structure, perfectly suited for storing and matching verbal data: “Trie” (http://tinyurl.com/db56g) also known as a Pre-fix Tree: “Unlike a binary search tree, no node in the tree stores the key associated with that node; instead, its position in the tree defines the key with which it is associated. All the descendants of a node have a common prefix of the string associated with that node, and the root is associated with the empty string. Values are normally not associated with every node, only with leaves and some inner nodes that correspond to keys of interest.” This is a very scalable, performing structure. Though, as usual, something ‘fast’ comes at a cost of ‘size’; fortunately RAM is more plentiful today so I can live with that. I won’t bore you with the detailed algorithmic performance here - Google can do a better job of such. So, here’s C# implementation of all this. Let’s start with individual node: Trie Node /// <summary> /// Contains datum of a single trie node. /// </summary> public class AutoSuggestTrieNode {     public char Value { get; set; }       /// <summary>     /// Gets a value indicating whether this instance is leaf node.     /// </summary>     /// <value>     ///     <c>true</c> if this instance is leaf node; otherwise, a prefix node <c>false</c>.     /// </value>     public bool IsLeafNode { get; private set; }       public List<AutoSuggestTrieNode> DescendantNodes { get; private set; }         /// <summary>     /// Initializes a new instance of the <see cref="AutoSuggestTrieNode"/> class.     /// </summary>     /// <param name="value">The phonetic value.</param>     /// <param name="isLeafNode">if set to <c>true</c> [is leaf node].</param>     public AutoSuggestTrieNode(char value = ' ', bool isLeafNode = false)     {         Value = value;         IsLeafNode = isLeafNode;           DescendantNodes = new List<AutoSuggestTrieNode>();     }       /// <summary>     /// Gets the descendants of the pre-fix node, if any.     /// </summary>     /// <param name="descendantValue">The descendant value.</param>     /// <returns></returns>     public AutoSuggestTrieNode GetDescendant(char descendantValue)     {         return DescendantNodes.FirstOrDefault(descendant => descendant.Value == descendantValue);     } }   Quite self-explanatory, imho. A node is either a “Pre-fix” or a “Leaf” node. “Leaf” contains the full “word”, while the “Pre-fix” nodes act as indices used for matching the results.   Ok, now the Trie: Trie Structure /// <summary> /// Contains structure and functionality of an AutoSuggest Trie (Pre-fix Tree) /// </summary> public class AutoSuggestTrie {     private readonly AutoSuggestTrieNode _root = new AutoSuggestTrieNode();       /// <summary>     /// Adds the word to the trie by breaking it up to pre-fix nodes + leaf node.     
/// </summary>     /// <param name="word">Phonetic value.</param>     public void AddWord(string word)     {         var currentNode = _root;         word = word.Trim().ToLower();           for (int i = 0; i < word.Length; i++)         {             var child = currentNode.GetDescendant(word[i]);               if (child == null) /* this character hasn't yet been indexed in the trie */             {                 var newNode = new AutoSuggestTrieNode(word[i], word.Count() - 1 == i);                   currentNode.DescendantNodes.Add(newNode);                 currentNode = newNode;             }             else                 currentNode = child; /* this character is already indexed, move down the trie */         }     }         /// <summary>     /// Gets the suggested matches.     /// </summary>     /// <param name="word">The phonetic search value.</param>     /// <returns></returns>     public List<string> GetSuggestedMatches(string word)     {         var currentNode = _root;         word = word.Trim().ToLower();           var indexedNodesValues = new StringBuilder();         var resultBag = new ConcurrentBag<string>();           for (int i = 0; i < word.Trim().Length; i++)  /* traverse the trie collecting closest indexed parent (parent can't be leaf, obviously) */         {             var child = currentNode.GetDescendant(word[i]);               if (child == null || word.Count() - 1 == i)                 break; /* done looking, the rest of the characters aren't indexed in the trie */               indexedNodesValues.Append(word[i]);             currentNode = child;         }           Action<AutoSuggestTrieNode, string> collectAllMatches = null;         collectAllMatches = (node, aggregatedValue) => /* traverse the trie collecting matching leafNodes (i.e. "full words") */             {                 if (node.IsLeafNode) /* full word */                     resultBag.Add(aggregatedValue); /* thread-safe write */                   Parallel.ForEach(node.DescendantNodes, descendandNode => /* asynchronous recursive traversal */                 {                     collectAllMatches(descendandNode, String.Format("{0}{1}", aggregatedValue, descendandNode.Value));                 });             };           collectAllMatches(currentNode, indexedNodesValues.ToString());           return resultBag.OrderBy(o => o).ToList();     }         /// <summary>     /// Gets the total words (leafs) in the trie. Recursive traversal.     /// </summary>     public int TotalWords     {         get         {             int runningCount = 0;               Action<AutoSuggestTrieNode> traverseAllDecendants = null;             traverseAllDecendants = n => { runningCount += n.DescendantNodes.Count(o => o.IsLeafNode); n.DescendantNodes.ForEach(traverseAllDecendants); };             traverseAllDecendants(this._root);               return runningCount;         }     } }   Matching operations and Inserts involve traversing the nodes before the right “spot” is found. Inserts need be synchronous since ordering of data matters here. However, matching can be done in parallel traversal using recursion (line 64). Here’s sample usage:   [TestMethod] public void AutoSuggestTest() {     var autoSuggestCache = new AutoSuggestTrie();       var testInput = @"Lorem ipsum dolor sit amet, consectetur adipiscing elit. Integer nec odio. Praesent libero.                 Sed cursus ante dapibus diam. Sed nisi. Nulla quis sem at nibh elementum imperdiet. Duis sagittis ipsum. Praesent mauris.                 
Fusce nec tellus sed augue semper porta. Mauris massa. Vestibulum lacinia arcu eget nulla. Class aptent taciti sociosqu ad                 litora torquent per conubia nostra, per inceptos himenaeos. Curabitur sodales ligula in libero. Sed dignissim lacinia nunc.                 Curabitur tortor. Pellentesque nibh. Aenean quam. In scelerisque sem at dolor. Maecenas mattis. Sed convallis tristique sem.                 Proin ut ligula vel nunc egestas porttitor. Morbi lectus risus, iaculis vel, suscipit quis, luctus non, massa. Fusce ac                 turpis quis ligula lacinia aliquet. Mauris ipsum. Nulla metus metus, ullamcorper vel, tincidunt sed, euismod in, nibh. Quisque                 volutpat condimentum velit. Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos. Nam                 nec ante. Sed lacinia, urna non tincidunt mattis, tortor neque adipiscing diam, a cursus ipsum ante quis turpis. Nulla                 facilisi. Ut fringilla. Suspendisse potenti. Nunc feugiat mi a tellus consequat imperdiet. Vestibulum sapien. Proin quam. Etiam                 ultrices. Suspendisse in justo eu magna luctus suscipit. Sed lectus. Integer euismod lacus luctus magna. Quisque cursus, metus                 vitae pharetra auctor, sem massa mattis sem, at interdum magna augue eget diam. Vestibulum ante ipsum primis in faucibus orci                 luctus et ultrices posuere cubilia Curae; Morbi lacinia molestie dui. Praesent blandit dolor. Sed non quam. In vel mi sit amet                 augue congue elementum. Morbi in ipsum sit amet pede facilisis laoreet. Donec lacus nunc, viverra nec.";       testInput.Split(' ').ToList().ForEach(word => autoSuggestCache.AddWord(word));       var testMatches = autoSuggestCache.GetSuggestedMatches("le"); }   ..and the result: That’s it!

    Read the article

  • Oracle WebCenter Portlet Debugging

    - by Alexander Rudat
    Introduction
    This article describes how to debug portlets that are already deployed to a WebLogic server using Oracle JDeveloper 11g.
    Overview
    These are the basic steps involved in remote debugging WebCenter portlets deployed in WebLogic:
    - Configuration of WebLogic to support remote debugging
    - Configuration of the portlet project in JDeveloper
    - Actual debugging of the portlet
    Configuration of WebLogic
    To start the WebLogic server in debugging mode, there are a couple of configuration changes that need to be made to the WebLogic domain where the portlet is deployed. First we need to edit the JVM options of the WebLogic server startup script for the managed server where the portlet is deployed. Normally startManagedWebLogic.cmd is used to start this managed server. This startup script is located in the %MIDDLEWARE_HOME%\user_projects\domains\<domain_name>\bin directory, where %MIDDLEWARE_HOME% is the installation directory of WebLogic.
    Add the following line before the set JAVA_OPTIONS= line:

        set REMOTE_DEBUG_JAVA_OPTIONS=-Xdebug -Xnoagent -Xrunjdwp:transport=dt_socket,address=4000,server=y,suspend=n

    Change the set JAVA_OPTIONS= line to read like the one below:

        set JAVA_OPTIONS=%SAVE_JAVA_OPTIONS% %ADF_JAVA_OPTIONS% %REMOTE_DEBUG_JAVA_OPTIONS%

    After these changes, save the startup script, start the managed server and make sure that you have access to the admin console (for example http://localhost:7001/console). Finally we need to check that HTTP tunneling is enabled on the managed server. To do this, log in to the admin console, select the managed server and select the Protocols tab. Be sure that Enable Tunneling is selected.
    Configuration of the portlet
    First let's create a new run configuration specifically for remote debugging. Double-click the project where your portlets are developed. In the Project Properties select the Run/Debug/Profile page. Click New... to create a new run configuration. In the Create Run Configuration dialog enter Remote Debugging for the name of the run configuration. Leave the Copy Settings From selection at Default and click OK to create the new run configuration. Once the Remote Debugging run configuration is created, select it in the Run Configurations and click Edit... to bring up the Edit Run Configuration dialog. In the Launch Settings page click the Remote Debugging checkbox to enable remote debugging for this run configuration. Finally select the Remote page and verify that the Protocol is set to Attach to JPDA and the port matches the port specified earlier when configuring WebLogic for remote debugging (defaults to 4000).
    Actual debugging of the portlet
    To start the remote debugging profile, right-click your portlet project and select Start Remote Debugger. JDeveloper now asks for the host and port; if your WebLogic server is installed locally, you can apply the default settings. Set a breakpoint in your Java code and run the portal (WebCenter) application where the portlet is used. That's all, now you are able to debug the portlet Java code.
    Hope you will find all errors in your portlet :-)
    References: http://www.oracle.com/technology/products/jdev/howtos/weblogic/remotedebugwls.html

    Read the article

  • Dynamic Grouping and Columns

    - by Tim Dexter
    Some good collaboration between myself and Kan Nishida (Oracle BIP Consulting) over at bipconsulting on a question that came in yesterday to an internal mailing list: Is there a way to allow columns to be placed into a template dynamically? This would be similar to the Answers column selector. A customer has said Crystal can do this and I am trying to see how BI Pub can do the same. Example: a report has Regions as a dimension in a table, and they want the user to select a parameter that will insert either Units or Dollars without having to create multiple templates.
    Now whether Crystal can actually do it or not is another question; can Publisher? Yes we can!
    Kan took the first stab. His approach was to swap out columns in a table in the report. Some quick steps:
    1. Create a parameter from the BIP server UI.
    2. Declare the parameter in the RTF template. You can check this post to see how you can declare the parameter from the server: http://bipconsulting.blogspot.com/2010/02/how-to-pass-user-input-values-to-report.html
    3. Use the parameter value to condition whether a particular column needs to be displayed or not. You can use the <?if@column:.....?> syntax for a column-level IF condition. The if@column syntax is covered in the user documentation.
    This would allow a developer to create a report with the parameter, or multiple parameters, to allow the user to pick a column to be included in the report.
    I took a slightly different tack. With the mention of the column selector in the Answers report, I took that to mean that the user wanted to select more of a dimensional column and then have the report recalculate all its totals and subtotals based on that selected column. This is a little bit more involved and requires some smart XSL and XPath expressions, but still very doable. The user can select a column as a parameter that is passed to the template rather than the query. The parameter value that is actually passed is the element name that you want to regroup the data by. Inside the template we then reference that parameter value in our for-each-group loop. That's where we need the tricky XSL/XPath code to get the regrouping to happen. At this juncture, I need to tip my hat to Klaus for his article on dynamic sorting that he wrote back in 2006. I basically took his sorting code and applied it to the for-each loop.
    You can follow both of Kan's first two steps above, i.e.:
    - Create a parameter from the BIP server UI - this just needs to be based on a 'list' type list of values with name/value pairs, e.g. Department/DEPARTMENT_NAME, Job/JOB_TITLE, etc. The user picks the 'friendly' value and the server passes the element name to the template.
    - Declare the parameter in the RTF template - been here before lots of times, right? <?param@begin:group1;'"DEPARTMENT_NAME"'?> I have used a default value so that I can test the functionality inside the template builder (notice the single and double quotes).
    The next step is to use the template builder to build a re-grouped report layout. It does not matter if it's hard-coded right now; we will add in the dynamic piece next. Once you have a functioning template that is re-grouping correctly, open up the for-each-group field and modify it to use the parameter: <?for-each-group:ROW;./*[name(.) = $group1]?> Here 'group1' is my grouping parameter, declared above. We need the XPath expression to find the column in the XML structure that matches the one passed by the parameter; it's essentially looking through the data tree for a match.
    We can show the actual grouping value in the report output with a similar XPath expression: <?./*[name(.) = $group1]?> In my example, I took things a little further so that I could have a dynamic label for the parameter value. For instance, if I am using MANAGER as the parameter I want to show: Manager: Tim Dexter. My XML elements are readable, e.g. DEPARTMENT_NAME, so it's a simple case of replacing the underscore with a space and then 'initcapping' the result: <?xdoxslt:init_cap(translate($group1,'_',' '))?> With this in place, the user can now select a grouping column in the BIP report viewer and the layout will re-group the data and any calculations based on that column. I built a group-above report, but you could equally build the group-left version to truly mimic the Answers column selector. If you are interested you can get an example report, sample data and layout template here. Of course, you can combine Klaus' dynamic sorting, Kan's conditional column approach and this dynamic grouping to build a real kick-ass report for users that will keep them happy for hours.
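    Pulling the pieces together, the relevant fields in the RTF template end up looking roughly like this (a sketch only; SALARY is a hypothetical measure element used to show a total recalculating per group):

        <?param@begin:group1;'"DEPARTMENT_NAME"'?>
        <?for-each-group:ROW;./*[name(.) = $group1]?>
        <?xdoxslt:init_cap(translate($group1,'_',' '))?>: <?./*[name(.) = $group1]?>
        Total: <?sum(current-group()/SALARY)?>
        <?end for-each-group?>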

    Read the article

  • New Communications Industry Data Model with "Factory Installed" Predictive Analytics using Oracle Da

    - by charlie.berger
    Oracle Introduces Oracle Communications Data Model to Provide Actionable Insight for Communications Service Providers
    We've integrated pre-installed analytical methodologies with the new Oracle Communications Data Model to deliver automated, simple, yet powerful predictive analytics solutions for customers. Churn, sentiment analysis, identifying customer segments - all things that can be anticipated and hence preconceived and implemented inside an application. Read on for more information!
    TM Forum Management World, Nice, France - 18 May 2010
    News Facts
    To help communications service providers (CSPs) manage and analyze rapidly growing data volumes cost effectively, Oracle today introduced the Oracle Communications Data Model. With the Oracle Communications Data Model, CSPs can achieve rapid time to value by quickly implementing a standards-based enterprise data warehouse that features communications industry-specific reporting, analytics and data mining. The combination of the Oracle Communications Data Model, Oracle Exadata and the Oracle Business Intelligence (BI) Foundation represents the most comprehensive data warehouse and BI solution for the communications industry. Also announced today, Hong Kong Broadband Network enhanced their data warehouse system, going live on Oracle Communications Data Model in three months. The leading provider increased its subscriber base by 37 percent in six months and reduced customer churn to less than one percent.
    Product Details
    Oracle Communications Data Model provides industry-specific schema and embedded analytics that address key areas such as customer management, marketing segmentation, product development and network health. CSPs can efficiently capture and monitor critical data and transform it into actionable information to support development and delivery of next-generation services using:
    - More than 1,300 industry-specific measurements and key performance indicators (KPIs) such as network reliability statistics, provisioning metrics and customer churn propensity.
    - Embedded OLAP cubes for extremely fast dimensional analysis of business information.
    - Embedded data mining models for sophisticated trending and predictive analysis.
    - Support for multiple lines of business, such as cable, mobile, wireline and Internet, which can be easily extended to support future requirements.
    With Oracle Communications Data Model, CSPs can jump start the implementation of a communications data warehouse in line with communications-industry standards including the TM Forum Information Framework (SID), formerly known as the Shared Information Model. Oracle Communications Data Model is optimized for any Oracle Database 11g platform, including Oracle Exadata, which can improve call data record query performance by 10x or more.
    Supporting Quotes
    "Oracle Communications Data Model covers a wide range of business areas that are relevant to modern communications service providers and is a comprehensive solution - with its data model and pre-packaged templates including BI dashboards, KPIs, OLAP cubes and mining models. It helps us save a great deal of time in building and implementing a customized data warehouse and enables us to leverage the advanced analytics quickly and more effectively," said Yasuki Hayashi, executive manager, NTT Comware Corporation.
    "Data volumes will only continue to grow as communications service providers expand next-generation networks, deploy new services and adopt new business models.
They will increasingly need efficient, reliable data warehouses to capture key insights on data such as customer value, network value and churn probability. With the Oracle Communications Data Model, Oracle has demonstrated its commitment to meeting these needs by delivering data warehouse tools designed to fill communications industry-specific needs," said Elisabeth Rainge, program director, Network Software, IDC. "The TM Forum Conformance Mark provides reassurance to customers seeking standards-based, and therefore, cost-effective and flexible solutions. TM Forum is extremely pleased to work with Oracle to certify its Oracle Communications Data Model solution. Upon successful completion, this certification will represent the broadest and most complete implementation of the TM Forum Information Framework to date, with more than 130 aggregate business entities," said Keith Willetts, chairman and chief executive officer, TM Forum. Supporting Resources Oracle Communications Oracle Communications Data Model Data Sheet Oracle Communications Data Model Podcast Oracle Data Warehousing Oracle Communications on YouTube Oracle Communications on Delicious Oracle Communications on Facebook Oracle Communications on Twitter Oracle Communications on LinkedIn Oracle Database on Twitter The Data Warehouse Insider Blog

    Read the article

  • LOV's autoSuggestBehaviour

    - by raghu.yadav
    The af:autoSuggestBehavior component works pretty straightforwardly on LOVs, both in input forms and in tables. There is a good example by Juan here: http://www.oracle.com/technology/products/jdev/howtos/autosuggest/explaining_autosuggestbehavior.htm
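    As a rough sketch of the usage pattern (the DepartmentName LOV binding name is an assumption, not taken from Juan's example), the behavior is nested inside the LOV component and pointed at the binding's suggestedItems method:

        <af:inputListOfValues id="deptLov" label="Department"
                              value="#{bindings.DepartmentName.inputValue}"
                              model="#{bindings.DepartmentName.listOfValuesModel}">
          <!-- suggestedItems returns the matching list items for the characters typed so far -->
          <af:autoSuggestBehavior suggestedItems="#{bindings.DepartmentName.suggestedItems}"/>
        </af:inputListOfValues>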

    Read the article

  • Podcast Show Notes – Oracle Coherence and Data Grid Technology - Part 1

    - by Bob Rhubart
    This week’s ArchBeat Podcast program kicks off a three-part series featuring a discussion of Oracle Coherence and data grid technology. Listen to Part 1.
    The panelists for this discussion are:
    - Cameron Purdy, VP of Development, Oracle (Blog | Twitter | LinkedIn | Oracle Mix)
    - Aleksandar Seovic, founder and managing director at S4HC Inc. (Blog | Twitter | LinkedIn | Oracle Mix | Oracle ACE Profile) (Aleks is also the author of Oracle Coherence 3.5 from Packt Publishing.)
    - John Stouffer, independent consultant, Oracle Applications DBA/Architect (Blog | LinkedIn | Oracle Mix | Oracle ACE Profile)
    Part 2 will be available on June 23, Part 3 on June 30.
    Coming soon: On July 7 the ArchBeat Podcast kicks off a series featuring an open discussion of Architecture and Agility. Stay tuned: RSS
    Technorati Tags: oracle,otn,arch2arch,archbeat,coherence,data grid,cameron purdy,aleks seovic
    del.icio.us Tags: oracle,otn,arch2arch,archbeat,coherence,data grid,cameron purdy,aleks seovic

    Read the article

  • To 'seal' or to 'wrap': that is the question ...

    - by Simon Thorpe
    If you follow this blog you will already have a good idea of what Oracle Information Rights Management (IRM) does. By encrypting documents Oracle IRM secures and tracks all copies of those documents, everywhere they are shared, stored and used, inside and outside your firewall. Unlike earlier encryption products authorized end users can transparently use IRM-encrypted documents within standard desktop applications such as Microsoft Office, Adobe Reader, Internet Explorer, etc. without first having to manually decrypt the documents. Oracle refers to this encryption process as 'sealing', and it is thanks to the freely available Oracle IRM Desktop that end users can transparently open 'sealed' documents within desktop applications without needing to know they are encrypted and without being able to save them out in unencrypted form. So Oracle IRM provides an amazing, unprecedented capability to secure and track every copy of your most sensitive information - even enabling end user access to be revoked long after the documents have been copied to home computers or burnt to CD/DVDs. But what doesn't it do? The main limitation of Oracle IRM (and IRM products in general) is format and platform support. Oracle IRM supports by far the broadest range of desktop applications and the deepest range of application versions, compared to other IRM vendors. This is important because you don't want to exclude sensitive business processes from being 'sealed' just because either the file format is not supported or users cannot upgrade to the latest version of Microsoft Office or Adobe Reader. But even the Oracle IRM Desktop can only open 'sealed' documents on Windows and does not for example currently support CAD (although this is coming in a future release). IRM products from other vendors are much more restrictive. To address this limitation Oracle has just made available the Oracle IRM Wrapper all-format, any-platform encryption/decryption utility. It uses the same core Oracle IRM web services and classification-based rights model to manually encrypt and decrypt files of any format on any Java-capable operating system. The encryption envelope is the same, and it uses the same role- and classification-based rights as 'sealing', but before you can use 'wrapped' files you must manually decrypt them. Essentially it is old-school manual encryption/decryption using the modern classification-based rights model of Oracle IRM. So if you want to share sensitive CAD documents, ZIP archives, media files, etc. with a partner, and you already have Oracle IRM, it's time to get 'wrapping'! Please note that the Oracle IRM Wrapper is made available as a free sample application (with full source code) and is not formally supported by Oracle. However it is informally supported by its author, Martin Lambert, who also created the widely-used Oracle IRM Hot Folder automated sealing application.

    Read the article

  • Selecting Your Theme

    - by Ruth
    Would you like to personalize CRM On Demand? You can quite easily with CRM On Demand Release 17. Whether your company wants a custom theme that matches your company brand, or you have a preference about the look and feel of the application, you can select your theme in a few clicks. If you are interested in creating a custom theme, take a look at the Themes - Create Your CRM Style blog article.
    Selecting Your Company Theme
    If you are the company administrator, you can select the company theme from the company profile.
    1. Click the Admin link.
    2. Navigate to Company Administration, then Company Profile.
    3. In the Company Theme Setting section, click the Theme Name field to select a new theme.
    Selecting Your Personal Theme
    Even if you are not an administrator, you can select a theme for CRM On Demand on your computer. Your company may not allow access to this option, so talk to your company administrator if you are unable to select your theme.
    1. Click the My Setup link.
    2. Click the My Profile link.
    3. Click the Personal Profile link.
    4. In the Additional Information section, select the theme that you want in the Theme Name picklist.
    Several standard themes are available to help you find the look that you want.

    Read the article

  • Critical Patch Update For Oracle Fusion Middleware – CPU October 2012 by Daniel Mortimer

    - by JuergenKress
    The latest Critical Patch Update (CPU) has been released for Oracle products. Start your reading here: Patch Set Update and Critical Patch Update October 2012 Availability Document [ID 1477727.1]. The affected releases include:
Oracle Fusion Middleware 11g Release 2: 11.1.2.0
Oracle Fusion Middleware 11g Release 1: 11.1.1.4 (Portal, Forms, Reports and Discoverer), 11.1.1.5, 11.1.1.6
Oracle Application Server 10g Release 3: 10.1.3.5
Read the full article here.
WebLogic Partner Community: for regular information, become a member of the WebLogic Partner Community by visiting http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.
Technorati Tags: patch ofm, critical patch, WebLogic Community, Oracle, OPN, Jürgen Kress

    Read the article

  • Getting WCF Services in a Silverlight solution to play nice on deployment

    - by brendonpage
    I have come across 2 issues with deploying WCF services in a Silverlight solution; admittedly one is more of a hiccup, and only occurs if you take the easy way out and reference your services through Visual Studio.
The First Issue
This occurs when you deploy your WCF services to an IIS server. When you browse to the services using your web browser, you are greeted with "This collection already contains an address with scheme http.  There can be at most one address per scheme in this collection.". When you make a call to this service from your Silverlight application, you get the extremely helpful "NotFound" error; this error message can be found in the error property of the event arguments on the complete event handler for that call. As it did with me, this will leave most people scratching their heads, because the very same services work just fine on the ASP.NET Development Web Server and on my local IIS server. Now I'm no server/hosting/IIS expert, so I did a bit of searching when I first encountered this issue. I found out this happens because IIS supports multiple address bindings per protocol (http/https/ftp … etc) per web site, but WCF only supports binding to one address per protocol. This causes a problem when the WCF service is hosted on a site with multiple address bindings, because IIS provides all of the bindings to the host factory when running the service. While this problem occurs mainly on shared hosting solutions, it is not limited to shared hosting; it just seems like all shared hosting providers set up sites on their servers with multiple address bindings. For interest's sake I added functionality to the example project attached to this post to dump the addresses given to the WCF service by IIS into a log file. This was the output on the shared hosting solution I use:
http://mydomain.co.za/Services/TestService.svc
http://www.mydomain.co.za/Services/TestService.svc
http://mydomain-co-za.win13.wadns.net/Services/TestService.svc
http://win13/Services/TestService.svc
As you can see all these addresses are for the http protocol, which is where it all goes wrong for WCF.
Fixes for the First Issue
There are a few ways to get around this. The first, being the easiest: target .NET 4! Yes, that's right, in .NET 4 WCF services support multiple addresses per protocol. This functionality is enabled by an option which is on by default if you create a new project, but which you will need to turn on if you are upgrading to .NET 4. To do this set the multipleSiteBindingsEnabled property of the serviceHostingEnvironment tag in the web.config file to true, as shown below:
<system.serviceModel>
    <serviceHostingEnvironment multipleSiteBindingsEnabled="true" />
</system.serviceModel>
Beware this ONLY works in .NET 4, so if you don't have a server with .NET 4 installed that you can deploy to, you will need to employ one of the other workarounds. The second option will work for .NET 3.5 & 4. For this option all you need to do is modify the web.config file and add baseAddressPrefixFilters to the serviceHostingEnvironment tag as shown below:
<system.serviceModel>
    <serviceHostingEnvironment>
        <baseAddressPrefixFilters>
            <add prefix="http://www.mydomain.co.za"/>
        </baseAddressPrefixFilters>
    </serviceHostingEnvironment>
</system.serviceModel>
These will be used to filter the list of base addresses that IIS provides to the host factory.
When specifying these prefix filters, be sure to specify filters which will only allow one result through, otherwise the entire exercise will be pointless. There is however a problem with this workaround: you are only allowed to specify one prefix filter per protocol, which means you can't add filters for all your environments. This therefore adds to the list of things to do before deploying or switching dev machines. The third option is the one I currently employ; it will work for .NET 3, 3.5 & 4, although it is not needed for .NET 4. For this option you create a custom host factory which inherits from the ServiceHostFactory class. In the implementation of the ServiceHostFactory you employ logic to figure out which of the base addresses given by IIS to use when creating the service host. The logic you use to do this is completely up to you. I have seen quite a few solutions that simply statically reference an index from the list of base addresses; this works for most situations but falls short in others. For instance, if the order of the base addresses were to change, it might end up returning an address that only resolves on the server's local network, like the last one in the example I gave at the beginning. Another instance: if a request comes in on a different protocol, like https, you will be creating the service host using an address which is on the incorrect protocol, like http. To reliably find the correct address to use, I use the address that the service was requested on. To accomplish this I use the HttpContext, which requires the service to operate with AspNetCompatibilityRequirements set on. If for some reason running your services with AspNetCompatibilityRequirements on isn't an option, you can still use this method, you will just have to come up with your own logic for selecting the correct address. First you will need to enable ASP.NET compatibility mode for your hosting environment; to do this set the aspNetCompatibilityEnabled attribute of the serviceHostingEnvironment tag to true in the web.config file, as shown below:
<system.serviceModel>
    <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
</system.serviceModel>
You will then need to mark any services that are going to use the custom host factory to allow AspNetCompatibilityRequirements, as shown below:
[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
public class TestService
{
}
Now for the custom host factory; this is where the logic lives that selects the correct address to create the service host with. The one I use is shown below:
public class CustomHostFactory : ServiceHostFactory
{
    protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
    {
        //
        // Compose a prefix filter based on the requested uri
        //
        string prefixFilter = HttpContext.Current.Request.Url.Scheme + "://" + HttpContext.Current.Request.Url.DnsSafeHost;
        if (!HttpContext.Current.Request.Url.IsDefaultPort)
        {
            prefixFilter += ":" + HttpContext.Current.Request.Url.Port.ToString() + "/";
        }
        //
        // Find a base address that matches the prefix filter
        //
        foreach (Uri baseAddress in baseAddresses)
        {
            if (baseAddress.OriginalString.StartsWith(prefixFilter))
            {
                return new ServiceHost(serviceType, baseAddress);
            }
        }
        //
        // Throw exception if no matching base address was found
        //
        throw new Exception("Custom Host Factory: No base address matching '" + prefixFilter + "' was found.");
    }
}
The most important line in the custom host factory is the one that returns a new service host.
This has to return a service host that specifies only one base address per protocol. Since I filter by the address the request came in on, I only need to create the service host with one address, since this address will always be of the correct protocol. Now that you have a custom host factory, you have to tell your services to use it. To do this you view the markup of the service by right clicking on it in the solution explorer and choosing "View Markup". Then you add/set the value of the Factory property to the full namespace path of your custom host factory, as shown below (a plain-text sketch of this markup also appears at the end of this post). And that is it, done; the service will now use the specified custom host factory.
The Second Issue
As I mentioned earlier this issue is more of a hiccup, but I thought it worthy of a mention so I included it. This issue only occurs when you add a service reference to a Silverlight project. Visual Studio will generate a lot of code for you; part of that generated code is the ServiceReferences.ClientConfig file. This file stores the endpoint configuration that is used when accessing your services using the generated proxy classes. Here is what that file looks like:
<configuration>
    <system.serviceModel>
        <bindings>
            <customBinding>
                <binding name="CustomBinding_TestService">
                    <binaryMessageEncoding />
                    <httpTransport maxReceivedMessageSize="2147483647" maxBufferSize="2147483647" />
                </binding>
                <binding name="CustomBinding_BrokenService">
                    <binaryMessageEncoding />
                    <httpTransport maxReceivedMessageSize="2147483647" maxBufferSize="2147483647" />
                </binding>
            </customBinding>
        </bindings>
        <client>
            <endpoint address="http://localhost:49347/services/TestService.svc"
                binding="customBinding" bindingConfiguration="CustomBinding_TestService"
                contract="TestService.TestService" name="CustomBinding_TestService" />
            <endpoint address="http://localhost:49347/Services/BrokenService.svc"
                binding="customBinding" bindingConfiguration="CustomBinding_BrokenService"
                contract="BrokenService.BrokenService" name="CustomBinding_BrokenService" />
        </client>
    </system.serviceModel>
</configuration>
As you will notice the addresses for the endpoints are set to the addresses of the services you added the service references from, so unless you are adding the service references from your live services, you will have to change these addresses before you deploy. This is little more than an annoyance really, but it adds to the list of things to do before you can deploy, and if left unchecked that list can get out of control.
Fix for the Second Issue
The way you would usually access a service added this way is to create an instance of the proxy class like so:
BrokenServiceClient proxy = new BrokenServiceClient();
Closer inspection of these generated proxy classes reveals that there are a few overloaded constructors, one of which allows you to specify the endpoint address to use when creating the proxy. From here all you have to do is come up with some logic that will provide you with the relative path to your services.
Since my WCF services are usually hosted in the same project as my Silverlight app, I use the class shown below:
public class ServiceProxyHelper
{
    /// <summary>
    /// Create a broken service proxy
    /// </summary>
    /// <returns>A broken service proxy</returns>
    public static BrokenServiceClient CreateBrokenServiceProxy()
    {
        Uri address = new Uri(Application.Current.Host.Source, "../Services/BrokenService.svc");
        return new BrokenServiceClient("CustomBinding_BrokenService", address.AbsoluteUri);
    }
}
Then I will create an instance of the proxy class using my service helper class like so:
BrokenServiceClient proxy = ServiceProxyHelper.CreateBrokenServiceProxy();
The way this works is that "Application.Current.Host.Source" will return the URL of the ClientBin folder the Silverlight app is hosted in, and "../Services/BrokenService.svc" is then used as the relative path to the service from the ClientBin folder; combined in the Uri object, this gives me the URL to my service. The "CustomBinding_BrokenService" is a reference to the endpoint configuration in the ServiceReferences.ClientConfig file. Yes, this means you still need the ServiceReferences.ClientConfig file. All this is doing is using a different endpoint address than the one specified in the ServiceReferences.ClientConfig file; all the other settings from the ServiceReferences.ClientConfig file are still used when creating the proxy. I have uploaded an example project which covers the custom host factory solution from the first issue and everything from the second issue. I included the code to write a list of base addresses to a log file in my implementation of the custom host factory; this is not needed for the custom host factory to function and can safely be removed. Download (WCFServicesDeploymentExample.zip)
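As a footnote to the "View Markup" step above, the ServiceHost directive in the .svc file ends up looking roughly like the sketch below. This is only an illustration; the Service and Factory values are hypothetical names based on this example project and will differ in your own solution.
<%@ ServiceHost Language="C#" Debug="true" Service="WCFServicesDeploymentExample.Web.BrokenService" CodeBehind="BrokenService.svc.cs" Factory="WCFServicesDeploymentExample.Web.CustomHostFactory" %>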

    Read the article

  • Cut Caseload Costs, Speed Service Delivery For Social Services

    - by michael.seback
    Lower Caseload Costs, Speedier Service Delivery with New Oracle Social Services Solution
Oracle has just introduced a new solution for social services agencies that's designed to help case workers address the challenges of rising workloads and growing demands by citizens for additional services. In the past, IT departments developed custom software in an effort to meet program outcomes. "Because this capability is out of the box with the Oracle solution, there's less complexity for organizations and an overall lower total cost of ownership," says Kimberly Ellison-Taylor, Oracle's executive director of health and human services. "Self service brings costs down to just pennies per interaction and makes it possible for clients to receive government services more quickly," Ellison-Taylor says.

    Read the article

  • Oracle WebCenter Partner Program

    - by kellsey.ruppel
    In competitive marketplaces, your company needs to quickly respond to changes and new trends, in order to open opportunities and build long-term growth. Oracle has a variety of next-generation services, solutions and resources that will leverage the differentiators in your offerings. Name your partnering needs: Oracle has the answer. This week we’d like to focus on Partners and the value your organization can gain from working with the Oracle PartnerNetwork. The Oracle PartnerNetwork will empower your company with exceptional resources to distinguish your offerings from the competition, seize opportunities, and increase your sales. We’re happy to welcome Christine Kungl, and Brian Buzzell, from Oracle’s World Wide Alliances & Channels (WWA&C) WebCenter Partner Enablement team, as today’s guests on the Oracle WebCenter blog. Q: What is the Oracle PartnerNetwork (OPN)?A: Christine: Oracle’s PartnerNetwork (OPN) is a collaborative partnership which allows registered companies specific added value resources to help differentiate themselves from their competition. Through OPN programs it provides companies the ability to seize and target opportunities, educate and train their teams, and leverage unparalleled opportunity given Oracle’s large market footprint. OPN’s multi-level programs are targeted at different levels allowing companies to grow and evolve with Oracle based on their business needs.  As part of their OPN memberships partners are encouraged to become OPN Specialized allowing those partners additional differentiation in Oracle’s Partner Network Community.  Q: What is an OPN Specialization and what resources are available for Specialized Partners?A: Brian: Oracle wanted a better way for our partners to differentiate their special skills and expertise, as well a more effective way to communicate that difference to customers.  Oracle’s expanding product portfolio demanded that we be able to identify partners with significant product knowledge—those who had made an investment in Oracle and a continuing commitment to deliver Oracle solutions. And with more than 30,000 Oracle partners around the world, Oracle needed a way for our customers to choose the right partner for their business. So how did Oracle meet this need? With the new partner program:  Oracle PartnerNetwork (OPN) Specialized. In this new program, Oracle partners are: Specialized :  Differentiating themselves from the competition with expertise that set them apart Recognized:  Being acknowledged for investing in becoming Oracle experts in specialized areas. Preferred :  Connecting with potential customers who are seeking  value-added solutions for their business OPN Specialized provides all partners with educational opportunities, training, and tools specially designed to build competency and grow business.  Partners can serve their customers better through key resources:OPN Specialized Knowledge Zones – Located on the updated and enhanced OPN portal— provide a single point of entry for all education and training information for Oracle partners. Enablement 2.0 Resources —Enablement 2.0 helps Oracle partners build their competencies and skills through a variety of educational opportunities and expanded training choices. These resources include: Enablement 2.0 “Boot camps” provide three-tiered learning levels that help jump-start partner training The role-based training covers Oracle’s application and technology products and offers a combination of classroom lectures, hands-on lab exercises, and case studies. 
Enablement 2.0 Interactive guided learning paths (GLPs) with recommendations on how to achieve specialization
Upgraded partner solution kits
Enhanced, specialized business centers available 24/7 around the globe on the OPN portal
OPN Competency Center - Tracking Progress
The OPN Competency Center keeps track as a partner applies for and achieves specialization in selected areas. You start with an assessment that compares your organization's current skills and experience with the requirements for specialization in the area you have chosen. The OPN Competency Center then provides a roadmap that itemizes the skills and the knowledge you need to earn specialized status. In summary, OPN Specialization not only includes key training resources but also a way to track and show progression for your partner organization.
Q: What are the OPN Membership Levels and what are the benefits?
A: Christine: The base OPN membership levels are:
Remarketer: At the Remarketer level, retailers can choose to resell select Oracle products with the backing of authorized, regionally located, value-added distributors (VADs). The Remarketer level has no fees and no partner agreement with Oracle, but does offer online training and sales tools through the OPN portal. Program Details: Remarketer
Silver Level: The Silver level is for Oracle partners who are focused on reselling and developing business with products ordered through the Oracle 1-Click Ordering Program. The Silver level provides a cost-effective, yet scalable way for partners to start an OPN Specialized membership and offers a substantial set of benefits that lets partners increase their competitive positioning. Program Details: Silver
Gold Level: Gold-level partners have the ability to specialize, helping them grow their business and create differentiation in the marketplace. Oracle partners at the Gold level can develop, sell, or implement the full stack of Oracle solutions and can apply to resell Oracle Applications. Program Details: Gold
Platinum Level: The Platinum level is for Oracle partners who want the highest level of benefits and are committed to reaching a minimum of five specializations. Platinum partners are recognized for their expertise in a broad range of products and technology, and receive dedicated support from Oracle. Program Details: Platinum
In addition we recently introduced a new level:
Diamond Level: This level is the most prestigious level of OPN Specialized. It allows companies to differentiate further because of the focused depth and breadth of their expertise. Program Details: Diamond
So as you can see, there are various cost-effective levels through which partners can gain assistance and differentiation from their OPN membership.
Q: What role do Oracle's World Wide Alliances & Channels (WWA&C) Partner Enablement teams and the WebCenter Community play?
A: Brian: Oracle's WWA&C teams are responsible for managing relationships, educating their teams, creating go-to-market solutions and fostering communities for Oracle partners worldwide. The WebCenter Partner Enablement Middleware Team is tasked with creating, managing and distributing Specialization resources for the WebCenter Partner community.
Q: What WebCenter Specializations are currently available?
A: Christine: As of now, here are the WebCenter Specializations and their availability:
Oracle WebCenter Portal Specialization (Oracle WebCenter Portal): Available Now. The Oracle WebCenter Specialization provides insight into the following products: WebCenter Services, WebCenter Spaces, and WebLogic Portal. Oracle WebCenter Specialized Partners can efficiently use Oracle WebCenter products to create social applications, enterprise portals, communities, composite applications, and Internet or intranet Web sites on a standards-based, service-oriented architecture (SOA). The suite combines the development of rich internet applications; a multi-channel portal framework; and a suite of horizontal WebCenter applications, which provide content, presence, and social networking capabilities to create a highly interactive user experience.
Oracle WebCenter Content Specialization: Available Now. The Oracle WebCenter Content Specialization provides insight into the following products: Universal Content Management, WebCenter Records Management, WebCenter Imaging, WebCenter Distributed Capture, and WebCenter Capture. Oracle WebCenter Content Specialized Partners can efficiently build content-rich business applications, reuse content, and integrate hundreds of content services with other business applications. This allows our customers to decrease costs, automate processes, reduce resource bottlenecks, share content effectively, minimize the number of lost documents, and better manage risk.
Oracle WebCenter Sites Specialization: Available Q1 2012. Oracle WebCenter Sites is part of the broader Oracle WebCenter platform that provides organizations with a complete customer experience management solution. Partners that align with the new Oracle WebCenter Sites platform allow their customers' organizations to: leverage customer information from all channels and systems; manage interactions across all channels; unify commerce, merchandising, marketing, and service across all channels; provide personalized, choreographed consumer journeys across all channels; and integrate order orchestration, supply chain management and order fulfillment.
Q: What criteria does the Partner organization need to achieve Specialization? What about individual Sales, PreSales & Implementation Specialist/Technical consultants?
A: Brian: Each Oracle WebCenter Specialization has unique Business Criteria that must be met in order to achieve that Specialization. This includes a unique number of transactions (co-sell, re-sell, and referral), customer references and a unique number of specialists as part of a partner team (Sales, Pre-Sales, Implementation, and Support). Each WebCenter Specialization provides training resources (GLPs, boot camps, assessments and exams) for individuals on a partner's staff to fulfill those requirements. The criteria can be found for each Specialization on the Specialize tab of each WebCenter Knowledge Zone. Here are the sample criteria, recommended courses, and exams for the WebCenter Portal Specialization: WebCenter Portal Specialization Criteria
Q: Do you have any suggestions on the best way for partners to get started if they would like to know more?
A: Christine: The best way to start is for partners to look at their business and core Oracle team focus and then look to become specialized in one or more areas. Once you have selected the Specializations that are right for your business, you need to follow the first 3 key steps described below.
The fourth step outlines the additional process to follow if you meet the criteria to be Advanced Specialized. Note that Step 4 may not be done without first following Steps 1-3.
1. Join the Knowledge Zone(s) where you want to achieve Specialized status: go to the Knowledge Zone, click on the "Why Partner" tab, and click on the "Join Knowledge Zone" link.
2. Meet the Specialization criteria - Define and implement plans in your organization to achieve the competency and business criteria targets of the Specialization. (Note: Worldwide OPN members at the Gold, Platinum, or Diamond level and their Associates at the Gold, Platinum, or Diamond level may count their collective resources to meet the business and competency criteria required for specialization in this area.)
3. Apply for Specialization - when you have met the business and competency criteria required, inform Oracle by completing the following steps: click on the "Specialize" tab in the Knowledge Zone, click on the "Apply Now" button, and complete the online application form. Oracle will validate the information provided, and once approved, you will receive notification from Oracle of your awarded Specialized status. Need more information? Access our Step by Step Guide (PDF).
4. Apply for Advanced Specialization (Optional) - If your company has on staff 50 unique Certified Implementation Specialists in your company's approved Specialization's product set, let Oracle know by following these steps: ensure that you have 50 or more unique individuals that are Certified Implementation Specialists in the specific Specialization awarded to your company; if you are pooling resources from another Associate or Worldwide entity, ensure you know that company's name and country; and have your Oracle PRM Administrator complete the online Advanced Specialization Application. Oracle will validate the information provided, and once approved, you will receive notification from Oracle of your awarded Advanced Specialized status.
There are additional resources on OPN as well as the broader WebCenter Community.

    Read the article

  • Node.js Adventure - Node.js on Windows

    - by Shaun
    Two weeks ago I had a talk with Wang Tao, a C# MVP in China who is currently running his startup company and product named worktile. He asked me to figure out a synchronization solution to help his product in the future, and he preferred that I implement the service in Node.js, since his worktile is written in Node.js. Even though I have some experience in ASP.NET MVC, HTML, CSS and JavaScript, I don't think I'm an expert in JavaScript. In fact I'm very new to it. So it scared me a bit when he asked me to use Node.js. But after about one week of investigation I have to say Node.js is very easy to learn, use and deploy, even if you have very limited JavaScript skill. And I think I have come to love Node.js. Hence I decided to start a series named "Node.js Adventure", where I will demonstrate my story of learning and using Node.js on Windows and Windows Azure. And this is the first one.
(Brief) Introduction of Node.js
I don't want to give a fully detailed introduction of Node.js. There are many resources on the internet we can find, but the best one is its homepage. Node.js was created by Ryan Dahl, sponsored by Joyent. It consists of about 80% C/C++ for the core and 20% JavaScript for the API. It utilizes CommonJS as the module system, which we will explain later. The official definition of Node.js is
Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.
First of all, Node.js utilizes JavaScript as its development language and runs on top of the V8 engine, which is used by Chrome. It brings JavaScript, a client-side language, into the backend service world. So many people say, even though not strictly accurate, that "Node.js is server-side JavaScript". Additionally, Node.js uses an event-driven, non-blocking IO model. This means in Node.js there's no way to block the currently working thread. Every operation in Node.js is executed asynchronously. This is a huge benefit, especially if our code needs IO operations such as reading disks, connecting to a database, consuming a web service, etc. Unlike IIS or Apache, Node.js doesn't utilize the multi-thread model. In Node.js there's only one working thread that serves all user requests and resource responses, shown as the ST star in the figure below. And there is a POSIX async threads pool in Node.js which contains many async threads (AT stars) for IO operations. When a user has an IO request, the ST serves it, but it will not do the IO operation itself. Instead the ST goes to the POSIX async threads pool to pick up an AT, passes the operation to it, and then goes back to serve any other requests. The AT will actually do the IO operation asynchronously. Suppose another user request comes in before the AT completes the IO operation. The ST will serve this new user request, pick up another AT from the POSIX pool and then go back. When the previous AT finishes the IO operation, it will take the result back and wait for the ST to serve it. The ST will take the response, return the AT to the POSIX pool, and then respond to the user. And if the second AT finishes its job, the ST will respond back to the second user in the same way. As you can see, in Node.js there's only one thread serving clients' requests and POSIX results. This thread loops between the users and POSIX and passes the data back and forth. The async jobs are handled by POSIX.
This is the event-driven non-blocking IO model. The performance of this model is much better than that of the multi-threaded blocking model. For example, Apache is built on the multi-threaded blocking model while Nginx uses the event-driven non-blocking model. Below is the performance comparison between them. And below is the memory usage comparison between them. These charts are captured from the video NodeJS Basics: An Introductory Training, presented by a Cloud Foundry Developer Advocate.
Node.js on Windows
Executing a Node.js application on Windows is very simple. First of all we need to download the latest Node.js platform from its website. Once installed, it registers its folder in the system PATH variable so that we can execute Node.js from anywhere. To confirm the Node.js installation, just open up a command window and type "node"; it will show the Node.js console. As you can see this is a JavaScript interactive console. We can type some simple JavaScript code and commands here. To run a Node.js JavaScript application, just specify the source code file name as the argument of the "node" command. For example, let's create a Node.js source code file named "helloworld.js" and copy in the sample code from the Node.js website.
var http = require("http");

http.createServer(function (req, res) {
    res.writeHead(200, {"Content-Type": "text/plain"});
    res.end("Hello World\n");
}).listen(1337, "127.0.0.1");

console.log("Server running at http://127.0.0.1:1337/");
This code will create a web server, listening on port 1337 and returning "Hello World" when any request comes in. Run it in the command window, then open a browser and navigate to http://localhost:1337/. As you can see, when using Node.js we are not creating a web application; in fact we are more like creating a web server. We need to deal with requests, responses and the related headers, status codes, etc. And this is one of the benefits of using Node.js: it is lightweight and straightforward. But creating a website from scratch again and again is not acceptable. The good news is that Node.js utilizes CommonJS as its module system, so we can leverage some modules to simplify our job. And furthermore, there are about ten thousand modules available on the internet, which cover almost all areas of server side application development.
NPM and Node.js Modules
Node.js utilizes CommonJS as its module system. A module is a set of JavaScript files. In Node.js, if we have an entry file named "index.js", then all modules it needs will be located in the "node_modules" folder. And in "index.js" we can import modules by specifying the module name. For example, in the code we've just created, we imported a module named "http", which is a built-in module installed along with Node.js, so we can use the code in this "http" module. Besides the built-in modules there are many modules available at the NPM website. Thousands of developers are contributing and downloading modules at this website. Hence this is another benefit of using Node.js: there are many modules we can use, the number of modules is increasing very fast, and we can also publish our own modules to the community. When I wrote this post, there were a total of 14,608 modules at NPM and about 10 thousand downloads per day. Installing a module is very simple. Let's go back to our command window and input the command "npm install express". This command will install a module named "express", which is an MVC framework on top of Node.js.
And let’s create another JavaScript file named “helloweb.js” and copy the code below in it. I imported the “express” module. And then when the user browse the home page it will response a text. If the incoming URL matches “/Echo/:value” which the “value” is what the user specified, it will pass it back with the current date time in JSON format. And finally my website was listening at 12345 port. 1: var express = require("express"); 2: var app = express(); 3:  4: app.get("/", function(req, res) { 5: res.send("Hello Node.js and Express."); 6: }); 7:  8: app.get("/Echo/:value", function(req, res) { 9: var value = req.params.value; 10: res.json({ 11: "Value" : value, 12: "Time" : new Date() 13: }); 14: }); 15:  16: console.log("Web application opened."); 17: app.listen(12345); For more information and API about the “express”, please have a look here. Start our application from the command window by command “node helloweb.js”, and then navigate to the home page we can see the response in the browser. And if we go to, for example http://localhost:12345/Echo/Hello Shaun, we can see the JSON result. The “express” module is very populate in NPM. It makes the job simple when we need to build a MVC website. There are many modules very useful in NPM. - underscore: A utility module covers many common functionalities such as for each, map, reduce, select, etc.. - request: A very simple HTT request client. - async: Library for coordinate async operations. - wind: Library which enable us to control flow with plain JavaScript for asynchronous programming (and more) without additional pre-compiling steps.   Node.js and IIS I demonstrated how to run the Node.js application from console. Since we are in Windows another common requirement would be, “can I host Node.js in IIS?” The answer is “Yes”. Tomasz Janczuk created a project IISNode at his GitHub space we can find here. And Scott Hanselman had published a blog post introduced about it.   Summary In this post I provided a very brief introduction of Node.js, includes it official definition, architecture and how it implement the event-driven non-blocking model. And then I described how to install and run a Node.js application on windows console. I also described the Node.js module system and NPM command. At the end I referred some links about IISNode, an IIS extension that allows Node.js application runs on IIS. Node.js became a very popular server side application platform especially in this year. By leveraging its non-blocking IO model and async feature it’s very useful for us to build a highly scalable, asynchronously service. I think Node.js will be used widely in the cloud application development in the near future.   In the next post I will explain how to use SQL Server from Node.js.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • A Video Chat with OAUG President David Ferguson

    - by Aaron Lazenby
    A week ago, I had a chance to sit down with OAUG president David Ferguson. I was really looking forward to this conversation after the sharp opinion piece David submitted to Profit Online last year about what it takes to implement social CRM in a sales organization.  Here, David shares his thoughts about this year's Collaborate 10 conference, the topics users are exited about, and the work the OAUG will be doing in the next twelve months.

    Read the article

  • Coherence - How to develop a custom push replication publisher

    - by cosmin.tudor(at)oracle.com
    CoherencePushReplicationDB.zipIn the example bellow I'm describing a way of developing a custom push replication publisher that publishes data to a database via JDBC. This example can be easily changed to publish data to other receivers (JMS,...) by performing changes to step 2 and small changes to step 3, steps that are presented bellow. I've used Eclipse as the development tool. To develop a custom push replication publishers we will need to go through 6 steps: Step 1: Create a custom publisher scheme class Step 2: Create a custom publisher class that should define what the publisher is doing. Step 3: Create a class data is performing the actions (publish to JMS, DB, etc ) for the custom publisher. Step 4: Register the new publisher against a ContentHandler. Step 5: Add the new custom publisher in the cache configuration file. Step 6: Add the custom publisher scheme class to the POF configuration file. All these steps are detailed bellow. The coherence project is attached and conclusions are presented at the end. Step 1: In the Coherence Eclipse project create a class called CustomPublisherScheme that should implement com.oracle.coherence.patterns.pushreplication.publishers.AbstractPublisherScheme. In this class define the elements of the custom-publisher-scheme element. For instance for a CustomPublisherScheme that looks like that: <sync:publisher> <sync:publisher-name>Active2-JDBC-Publisher</sync:publisher-name> <sync:publisher-scheme> <sync:custom-publisher-scheme> <sync:jdbc-string>jdbc:oracle:thin:@machine-name:1521:XE</sync:jdbc-string> <sync:username>hr</sync:username> <sync:password>hr</sync:password> </sync:custom-publisher-scheme> </sync:publisher-scheme> </sync:publisher> the code is: package com.oracle.coherence; import java.io.DataInput; import java.io.DataOutput; import java.io.IOException; import com.oracle.coherence.patterns.pushreplication.Publisher; import com.oracle.coherence.configuration.Configurable; import com.oracle.coherence.configuration.Mandatory; import com.oracle.coherence.configuration.Property; import com.oracle.coherence.configuration.parameters.ParameterScope; import com.oracle.coherence.environment.Environment; import com.tangosol.io.pof.PofReader; import com.tangosol.io.pof.PofWriter; import com.tangosol.util.ExternalizableHelper; @Configurable public class CustomPublisherScheme extends com.oracle.coherence.patterns.pushreplication.publishers.AbstractPublisherScheme { /** * */ private static final long serialVersionUID = 1L; private String jdbcString; private String username; private String password; public String getJdbcString() { return this.jdbcString; } @Property("jdbc-string") @Mandatory public void setJdbcString(String jdbcString) { this.jdbcString = jdbcString; } public String getUsername() { return username; } @Property("username") @Mandatory public void setUsername(String username) { this.username = username; } public String getPassword() { return password; } @Property("password") @Mandatory public void setPassword(String password) { this.password = password; } public Publisher realize(Environment environment, ClassLoader classLoader, ParameterScope parameterScope) { return new CustomPublisher(getJdbcString(), getUsername(), getPassword()); } public void readExternal(DataInput in) throws IOException { super.readExternal(in); this.jdbcString = ExternalizableHelper.readSafeUTF(in); this.username = ExternalizableHelper.readSafeUTF(in); this.password = ExternalizableHelper.readSafeUTF(in); } public void writeExternal(DataOutput out) throws 
IOException { super.writeExternal(out); ExternalizableHelper.writeSafeUTF(out, this.jdbcString); ExternalizableHelper.writeSafeUTF(out, this.username); ExternalizableHelper.writeSafeUTF(out, this.password); } public void readExternal(PofReader reader) throws IOException { super.readExternal(reader); this.jdbcString = reader.readString(100); this.username = reader.readString(101); this.password = reader.readString(102); } public void writeExternal(PofWriter writer) throws IOException { super.writeExternal(writer); writer.writeString(100, this.jdbcString); writer.writeString(101, this.username); writer.writeString(102, this.password); } } Step 2: Define what the CustomPublisher should basically do by creating a new java class called CustomPublisher that implements com.oracle.coherence.patterns.pushreplication.Publisher package com.oracle.coherence; import com.oracle.coherence.patterns.pushreplication.EntryOperation; import com.oracle.coherence.patterns.pushreplication.Publisher; import com.oracle.coherence.patterns.pushreplication.exceptions.PublisherNotReadyException; import java.io.BufferedWriter; import java.util.Iterator; public class CustomPublisher implements Publisher { private String jdbcString; private String username; private String password; private transient BufferedWriter bufferedWriter; public CustomPublisher() { } public CustomPublisher(String jdbcString, String username, String password) { this.jdbcString = jdbcString; this.username = username; this.password = password; this.bufferedWriter = null; } public String getJdbcString() { return this.jdbcString; } public String getUsername() { return username; } public String getPassword() { return password; } public void publishBatch(String cacheName, String publisherName, Iterator<EntryOperation> entryOperations) { DatabasePersistence databasePersistence = new DatabasePersistence( jdbcString, username, password); while (entryOperations.hasNext()) { EntryOperation entryOperation = (EntryOperation) entryOperations .next(); databasePersistence.databasePersist(entryOperation); } } public void start(String cacheName, String publisherName) throws PublisherNotReadyException { System.err .printf("Started: Custom JDBC Publisher for Cache %s with Publisher %s\n", new Object[] { cacheName, publisherName }); } public void stop(String cacheName, String publisherName) { System.err .printf("Stopped: Custom JDBC Publisher for Cache %s with Publisher %s\n", new Object[] { cacheName, publisherName }); } } In the publishBatch method from above we inform the publisher that he is supposed to persist data to a database: DatabasePersistence databasePersistence = new DatabasePersistence( jdbcString, username, password); while (entryOperations.hasNext()) { EntryOperation entryOperation = (EntryOperation) entryOperations .next(); databasePersistence.databasePersist(entryOperation); } Step 3: The class that deals with the persistence is a very basic one that uses JDBC to perform inserts/updates against a database. 
package com.oracle.coherence; import com.oracle.coherence.patterns.pushreplication.EntryOperation; import java.sql.*; import java.text.SimpleDateFormat; import com.oracle.coherence.Order; public class DatabasePersistence { public static String INSERT_OPERATION = "INSERT"; public static String UPDATE_OPERATION = "UPDATE"; public Connection dbConnection; public DatabasePersistence(String jdbcString, String username, String password) { this.dbConnection = createConnection(jdbcString, username, password); } public Connection createConnection(String jdbcString, String username, String password) { Connection connection = null; System.err.println("Connecting to: " + jdbcString + " Username: " + username + " Password: " + password); try { // Load the JDBC driver String driverName = "oracle.jdbc.driver.OracleDriver"; Class.forName(driverName); // Create a connection to the database connection = DriverManager.getConnection(jdbcString, username, password); System.err.println("Connected to:" + jdbcString + " Username: " + username + " Password: " + password); } catch (ClassNotFoundException e) { e.printStackTrace(); } // driver catch (SQLException e) { e.printStackTrace(); } return connection; } public void databasePersist(EntryOperation entryOperation) { if (entryOperation.getOperation().toString() .equalsIgnoreCase(INSERT_OPERATION)) { insert(((Order) entryOperation.getPublishableEntry().getValue())); } else if (entryOperation.getOperation().toString() .equalsIgnoreCase(UPDATE_OPERATION)) { update(((Order) entryOperation.getPublishableEntry().getValue())); } } public void update(Order order) { String update = "UPDATE Orders set QUANTITY= '" + order.getQuantity() + "', AMOUNT='" + order.getAmount() + "', ORD_DATE= '" + (new SimpleDateFormat("dd-MMM-yyyy")).format(order .getOrdDate()) + "' WHERE SYMBOL='" + order.getSymbol() + "'"; System.err.println("UPDATE = " + update); try { Statement stmt = getDbConnection().createStatement(); stmt.execute(update); stmt.close(); } catch (SQLException ex) { System.err.println("SQLException: " + ex.getMessage()); } } public void insert(Order order) { String insert = "insert into Orders values('" + order.getSymbol() + "'," + order.getQuantity() + "," + order.getAmount() + ",'" + (new SimpleDateFormat("dd-MMM-yyyy")).format(order .getOrdDate()) + "')"; System.err.println("INSERT = " + insert); try { Statement stmt = getDbConnection().createStatement(); stmt.execute(insert); stmt.close(); } catch (SQLException ex) { System.err.println("SQLException: " + ex.getMessage()); } } public Connection getDbConnection() { return dbConnection; } public void setDbConnection(Connection dbConnection) { this.dbConnection = dbConnection; } } Step 4: Now we need to register our publisher against a ContentHandler. In order to achieve that we need to create in our eclipse project a new class called CustomPushReplicationNamespaceContentHandler that should extend the com.oracle.coherence.patterns.pushreplication.configuration.PushReplicationNamespaceContentHandler. In the constructor of the new class we define a new handler for our custom publisher. 
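A quick note before Step 4: the INSERT and UPDATE statements in DatabasePersistence above assume that an ORDERS table already exists in the target schema (the HR schema, given this example's JDBC settings). The sketch below is only a guess at a matching table definition, based on the columns referenced by the SQL strings; adjust the column types and sizes to your own Order class and schema.
CREATE TABLE orders (
    symbol   VARCHAR2(32) PRIMARY KEY,
    quantity NUMBER,
    amount   NUMBER,
    ord_date DATE
);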
package com.oracle.coherence; import com.oracle.coherence.configuration.Configurator; import com.oracle.coherence.environment.extensible.ConfigurationContext; import com.oracle.coherence.environment.extensible.ConfigurationException; import com.oracle.coherence.environment.extensible.ElementContentHandler; import com.oracle.coherence.patterns.pushreplication.PublisherScheme; import com.oracle.coherence.environment.extensible.QualifiedName; import com.oracle.coherence.patterns.pushreplication.configuration.PushReplicationNamespaceContentHandler; import com.tangosol.run.xml.XmlElement; public class CustomPushReplicationNamespaceContentHandler extends PushReplicationNamespaceContentHandler { public CustomPushReplicationNamespaceContentHandler() { super(); registerContentHandler("custom-publisher-scheme", new ElementContentHandler() { public Object onElement(ConfigurationContext context, QualifiedName qualifiedName, XmlElement xmlElement) throws ConfigurationException { PublisherScheme publisherScheme = new CustomPublisherScheme(); Configurator.configure(publisherScheme, context, qualifiedName, xmlElement); return publisherScheme; } }); } } Step 5: Now we should define our CustomPublisher in the cache configuration file according to the following documentation. <cache-config xmlns:sync="class:com.oracle.coherence.CustomPushReplicationNamespaceContentHandler" xmlns:cr="class:com.oracle.coherence.environment.extensible.namespaces.InstanceNamespaceContentHandler"> <caching-schemes> <sync:provider pof-enabled="false"> <sync:coherence-provider /> </sync:provider> <caching-scheme-mapping> <cache-mapping> <cache-name>publishing-cache</cache-name> <scheme-name>distributed-scheme-with-publishing-cachestore</scheme-name> <autostart>true</autostart> <sync:publisher> <sync:publisher-name>Active2 Publisher</sync:publisher-name> <sync:publisher-scheme> <sync:remote-cluster-publisher-scheme> <sync:remote-invocation-service-name>remote-site1</sync:remote-invocation-service-name> <sync:remote-publisher-scheme> <sync:local-cache-publisher-scheme> <sync:target-cache-name>publishing-cache</sync:target-cache-name> </sync:local-cache-publisher-scheme> </sync:remote-publisher-scheme> <sync:autostart>true</sync:autostart> </sync:remote-cluster-publisher-scheme> </sync:publisher-scheme> </sync:publisher> <sync:publisher> <sync:publisher-name>Active2-Output-Publisher</sync:publisher-name> <sync:publisher-scheme> <sync:stderr-publisher-scheme> <sync:autostart>true</sync:autostart> <sync:publish-original-value>true</sync:publish-original-value> </sync:stderr-publisher-scheme> </sync:publisher-scheme> </sync:publisher> <sync:publisher> <sync:publisher-name>Active2-JDBC-Publisher</sync:publisher-name> <sync:publisher-scheme> <sync:custom-publisher-scheme> <sync:jdbc-string>jdbc:oracle:thin:@machine_name:1521:XE</sync:jdbc-string> <sync:username>hr</sync:username> <sync:password>hr</sync:password> </sync:custom-publisher-scheme> </sync:publisher-scheme> </sync:publisher> </cache-mapping> </caching-scheme-mapping> <!-- The following scheme is required for each remote-site when using a RemoteInvocationPublisher --> <remote-invocation-scheme> <service-name>remote-site1</service-name> <initiator-config> <tcp-initiator> <remote-addresses> <socket-address> <address>localhost</address> <port>20001</port> </socket-address> </remote-addresses> <connect-timeout>2s</connect-timeout> </tcp-initiator> <outgoing-message-handler> <request-timeout>5s</request-timeout> </outgoing-message-handler> </initiator-config> 
</remote-invocation-scheme> <!-- END: com.oracle.coherence.patterns.pushreplication --> <proxy-scheme> <service-name>ExtendTcpProxyService</service-name> <acceptor-config> <tcp-acceptor> <local-address> <address>localhost</address> <port>20002</port> </local-address> </tcp-acceptor> </acceptor-config> <autostart>true</autostart> </proxy-scheme> </caching-schemes> </cache-config> As you can see in the red-marked text from above I've:       - set new Namespace Content Handler       - define the new custom publisher that should work together with other publishers like: stderr and remote publishers in our case. Step 6: Add the com.oracle.coherence.CustomPublisherScheme to your custom-pof-config file: <pof-config> <user-type-list> <!-- Built in types --> <include>coherence-pof-config.xml</include> <include>coherence-common-pof-config.xml</include> <include>coherence-messagingpattern-pof-config.xml</include> <include>coherence-pushreplicationpattern-pof-config.xml</include> <!-- Application types --> <user-type> <type-id>1901</type-id> <class-name>com.oracle.coherence.Order</class-name> <serializer> <class-name>com.oracle.coherence.OrderSerializer</class-name> </serializer> </user-type> <user-type> <type-id>1902</type-id> <class-name>com.oracle.coherence.CustomPublisherScheme</class-name> </user-type> </user-type-list> </pof-config> CONCLUSIONSThis approach allows for publishers to publish data to almost any other receiver (database, JMS, MQ, ...). The only thing that needs to be changed is the DatabasePersistence.java class that should be adapted to the chosen receiver. Only minor changes are needed for the rest of the code (to publishBatch method from CustomPublisher class).

    Read the article

  • Protecting offline IRM rights and the error "Unable to Connect to Offline database"

    - by Simon Thorpe
    One of the most common problems I get asked about Oracle IRM is in relation to the error message "Unable to Connect to Offline database". This error message is a result of how Oracle IRM is protecting the cached rights on the local machine and if that cache has become invalid in anyway, this error is thrown. Offline rights and security First we need to understand how Oracle IRM handles offline use. The way it is implemented is one of the main reasons why Oracle IRM is the leading document security solution and demonstrates our methodology to ensure that solutions address both security and usability and puts the balance of these two in your control. Each classification has a set of predefined roles that the manager of the classification can assign to users. Each role has an offline period which determines the amount of time a user can access content without having to communicate with the IRM server. By default for the context model, which is the classification system that ships out of the box with Oracle IRM, the offline period for each role is 3 days. This is easily changed however and can be as low as under an hour to as long as years. It is also possible to switch off the ability to access content offline which can be useful when content is very sensitive and requires a tight leash. So when a user is online, transparently in the background, the Oracle IRM Desktop communicates with the server and updates the users rights and offline periods. This transparent synchronization period is determined by the server and communicated to all IRM Desktops and allows for users rights to be kept up to date without their intervention. This allows us to support some very important scenarios which are key to a successful IRM solution. A user doesn't have to make any decision when going offline, they simply unplug their laptop and they already have their offline periods synchronized to the maximum values. Any solution that requires a user to make a decision at the point of going offline isn't going to work because people forget to do this and will therefore be unable to legitimately access their content offline. If your rights change to REMOVE your access to content, this also happens in the background. This is very useful when someone has an offline duration of a week and they happen to make a connection to the internet 3 days into that offline period, the Oracle IRM Desktop detects this online state and automatically updates all rights for the user. This means the business risk is reduced when setting long offline periods, because of the daily transparent sync, you can reflect changes as soon as the user is online. Of course, if they choose not to come online at all during that week offline period, you cannot effect change, but you take that risk in giving the 7 day offline period in the first place. If you are added to a NEW classification during the day, this will automatically be synchronized without the user even having to open a piece of content secured against that classification. This is very important, consider the scenario where a senior executive downloads all their email but doesn't open any of it. Disconnects the laptop and then gets on a plane. During the flight they attempt to open a document attached to a downloaded email which has been secured against an IRM classification the user was not even aware they had access to. Because their new role in this classification was automatically synchronized their experience is a good one and the document opens. 
More information on how the Oracle IRM classification model works can be found in this article by Martin Abrahams.
So what about problems accessing the offline rights database?
So onto the core issue... when these rights are cached to your machine they are stored in an encrypted database. The encryption of this offline database is keyed to the instance of the installation of the IRM Desktop and the Windows user account. Why? Well, what you do not want to happen is for someone to get their rights for content and then copy these files across hundreds of other machines, thereby getting access to sensitive content across many environments. The IRM server has a setting which controls how many times you can cache these rights on unique machines. This is because people typically access IRM content on more than one computer: their work desktop, a laptop and often a home computer. So Oracle IRM allows for the usability of caching rights on more than one computer whilst retaining strong security over this cache. So what happens if these files are corrupted in some way? That's when you will see the error, Unable to Connect to Offline database. The most common instance of seeing this is when you are using virtual machines and copy them from one computer to the next. The virtual machine software, VMWare Workstation for example, makes changes to the unique information of that virtual machine and as such invalidates the offline database.
How do you solve the problem?
Resolution is simple, however. You just delete all of the offline database files on the machine and they will be recreated with working encryption when the Oracle IRM Desktop next starts. However this does mean that the IRM server will think you have your rights cached to more than one computer, and you will need to re-request your rights even though you are only going to be accessing them on one, because it still thinks the old cache is valid. So be aware, it is good practice to increase the server limit from the default of 1 to say 3 or 4. This is done using the Enterprise Manager instance of IRM. To delete these offline files I have a simple .bat file you can use: Download DeleteOfflineDBs.bat. Note that this uses pskill to stop irmBackground.exe from running. This is part of the IRM Desktop and holds open a lock to the offline database. Either kill this from task manager or use pskill as part of the script.

    Read the article

  • What if “Microsoft” were in our shoes? About Windows Phone

    - by Vijaya Malla
    This is what I think about Microsoft Windows Phone. If Microsoft were in our shoes, looking at the various phones available, their configurations, memory, front-facing cameras and so on, it would see why it disappointed the USA customer base again by not bringing the Nokia Lumia 800 here.

    The Past: A few years ago, business people were on their BlackBerrys and a few gadget lovers were on crappy Windows Mobile devices. The world was going along fine until Apple arrived with a revolutionary device, the iPhone, which completely changed our perception of what a phone could be and how great a smartphone can be. It changed not just the phone but the whole technology industry. The romantic appeal of the device and its smooth touch and feel made everyone want one of those bad boys. Sales went up not just for Apple but for AT&T too; even though everyone complained about AT&T's signal strength, everyone wanted to be on it because it had the iPhone. The whole world wanted the iPhone back then, except Microsoft, which made a few comments about how it was not going to last in the market. But it did great and rocked the industry. A few years later, with iPhone and Android taking over the smartphone market, Microsoft realized it should be in the game too. It worked on the design and gave us, in my view, the best mobile OS ever. Everyone thinks iOS is a great phone OS, but if you have touched a Windows Phone and used it for real, you will realize its strengths. So last year we welcomed Windows Phone 7.

    The Present: Windows Phone is one of the fastest growing platforms. The phones are cheap, and you can buy one from any carrier out there. The OS became smarter and smarter with the recent "Mango" (7.5) update, and with the Nokia collaboration Microsoft created a new ecosystem for smartphones, pairing the best smartphone hardware with the best smartphone software. Everyone in the world was excited about the collaboration. As we floated on cloud nine imagining Nokia-made Windows Phones, we heard good news from Nokia at Nokia World: Nokia showed what the best hardware maker can do with Windows Phone 7.5, and the Lumia 800 and 710 took the spotlight. People here in the USA and all over the world wanted to own a Nokia Lumia 800 because of the design, the software, and the proprietary apps from Nokia (maps, ESPN, drive and music). If the USA market had the Nokia Lumia 800, it would have been the best step Microsoft and Nokia had ever made in their smartphone history. With all the numbers going to Android and iPhone, it is not clear why Microsoft and Nokia did not release the Lumia 800 here in the USA, or whether Microsoft has learnt the lesson. If it has, I guess it needs to get the Nokia Lumia 800 to the USA.

    The Future: This is where we hope to get the best from Microsoft. I was an iPhone user (I owned the 2G, 3G, 3GS and 4), then moved to Windows Phone and have never been happier. From the day Nokia announced the partnership with Microsoft and said it was going to come up with a Nokia Windows Phone, I was dreaming of my Nokia phone, but it looks like that is not going to happen any time soon.

    My thoughts about the market: Nokia has the biggest user base in the world. Even though people have moved to Android or iPhone over the years, in other parts of the world like India and China people still love to use Nokia.
Everyone who uses a Windows Phone now will wait for the day the Nokia Lumia comes to the USA, but what either or both of the companies should do for a better market share is make a very aggressive move on hardware and bet on the devices. I am pretty sure it would work: everyone here in the USA would like a dual-core Windows Phone with a front-facing camera and all the other crazy things that Android and Apple phones offer. I think we just have to wait for that day and hope it comes soon. Love Microsoft and Nokia. Thank you for reading.

    Read the article
