Search Results

Search found 14385 results on 576 pages for 'email validation'.


  • Find More Streaming TV Online with Clicker.tv

    - by DigitalGeekery
    Looking for a way to access more of your favorite TV shows and other online entertainment? Today we’ll take a look at Clicker.tv, which offers an awesome way to find tons of TV programs and movies.

    Clicker.tv
    Clicker.tv is an HTML5 web application that indexes both free and premium content from sources like Hulu, Netflix, Amazon, iTunes, and more. Some movies or episodes, such as those from Netflix and Amazon.com’s Video on Demand, will require viewers to have a membership or pay a fee to access content. There is also a Clicker.tv app for Boxee.

    Navigation
    Navigating in Clicker.tv is rather easy with your keyboard. Directional keys: navigate up, down, left, and right. Enter: make a selection. Backspace: return to the previous screen. Escape: return to the Clicker.tv home screen. Note: You can also navigate through Clicker.tv with your PC remote.

    Recommended Browsers
    Firefox 3.6+, Safari 4.0+, Internet Explorer 8+, Google Chrome. Note: You’ll need the latest version of Flash installed to play the majority of content. Earlier versions of the above browsers may work, but for full keyboard functionality, stick with the recommendations.

    Using Clicker.tv
    The first time you go to Clicker.tv (link below) you’ll be met with a welcome screen and some helpful hints. Click Enter when finished. The Home screen features Headliners, Trending Shows, and Trending Episodes. You can scroll through the different options and category links along the left side. The Search link pulls up an onscreen keyboard so you can enter search terms with a remote as well as a keyboard. Type in your search terms and matching items are displayed on the screen. You can also browse by a wide variety of categories. Select TV to browse only available TV programs, or browse only movies in the Movies category. There are also links for Web content and Music.

    Creating an Account
    You can access all Clicker.tv content without an account, but a Clicker account allows users to create playlists, subscribe to shows, and have them automatically added to their playlist. You’ll need to go to Clicker.com and create an account; you’ll find the link at the upper right of the page. Enter a username, password, and email address. There is also an option to link with Facebook, or you can simply skip this step. Then go to Clicker.tv and sign in. You can manually type in your credentials or use the onscreen keyboard with your remote.

    Settings
    If you’d prefer not to display content from premium sites or Netflix, you can remove them through the Settings. Toggle Amazon, iTunes, and Netflix on or off.

    Watching Episodes
    To watch an episode, select the image to begin playing from the default source, or select one of the other options. You can see in the example below that you can choose to watch the episode from Fox, Hulu, or Amazon Video on Demand. Your episode will then launch and begin playing from your chosen source. If you choose a premium content source such as iTunes or Amazon’s VOD, you’ll be taken to Amazon’s website or iTunes and prompted to purchase the content.

    Playlists
    Once you’ve created an account and signed in, you can begin adding shows to your playlist. Choose a series and select Add to Playlist. You’ll see in the example below that Family Guy has been added, and the number 142 shown next to the playlist icon indicates that 142 episodes have been added to your playlist. Underneath the listings for each episode in your playlist you can mark episodes as Watched or remove individual episodes.
    You can also view the playlist or make any changes from the Clicker.com website. Click on “Playlist” at the top right of the Clicker.com site to access your playlists. You can select individual episodes from your playlists, remove them, or mark them as watched or unwatched.

    Clicker.tv and Boxee
    Boxee offers a Clicker.tv app that features a limited amount of the Clicker.tv content. You’ll find Clicker.tv located in the Boxee Apps Library. Select the Clicker app and then choose Start. From the Clicker app interface you can search or browse for available content. Select an episode you’d like to view, then select Play in the pop-up window. You can also add it to your Boxee queue, share it, or add a shortcut, just as you can from other Boxee apps. When you click Play, your episode will launch and begin playing in Boxee.

    Conclusion
    Clicker.tv is currently still in beta and has some limitations. Typical remotes won’t work completely in all external websites, so you’ll still need a keyboard to be able to perform some operations such as switching to full-screen mode. The Boxee app offers a more fully remote-friendly environment, but unfortunately lacks a good portion of the Clicker.tv content. As with many content sites, availability of certain programming may be limited by your geographic location. Want to add Clicker.tv functionality to Windows Media Center? You can do so through the Boxee Integration for Windows 7 Media Center plug-in.

    Clicker.tv
    Clicker.com


  • Import Data from Excel sheet to DB Table through OAF page

    - by PRajkumar
    1. Create a New Workspace and Project
    File > New > General > Workspace Configured for Oracle Applications
    File Name – PrajkumarImportxlsDemo
    A new OA Project will also be created automatically:
    Project Name -- ImportxlsDemo
    Default Package -- prajkumar.oracle.apps.fnd.importxlsdemo

    2. Add the JAR file jxl-2.6.3.jar to the Apache Library
    Download jxl-2.6.3.jar from the following link –
    http://www.findjar.com/jar/net.sourceforge.jexcelapi/jars/jxl-2.6.jar.html
    Steps to add the jxl.jar file on your local machine: right-click on ImportxlsDemo > Project Properties > Libraries > Add jar/Directory, browse to the directory where jxl-2.6.3.jar was downloaded, and select the JAR file.
    Steps to add the jxl.jar file on the EBS middle tier: on your EBS middle tier, copy jxl.jar to $FND_TOP/java/3rdparty/stdalone, then add it to the custom classpath in the jserv.properties file, which is at $IAS_ORACLE_HOME/Apache/Jserv/etc:
        wrapper.classpath=/U01/oracle/dev/devappl/fnd/11.5.0/java/3rdparty/stdalone/jxl.jar
    Bounce the Apache server.

    3. Create a New Application Module (AM)
    Right-click on ImportxlsDemo > New > ADF Business Components > Application Module
    Name -- ImportxlsAM
    Package -- prajkumar.oracle.apps.fnd.importxlsdemo.server
    Check Application Module Class: ImportxlsAMImpl, Generate JavaFile(s).

    4. Create the test table into which we will insert data from Excel:

        CREATE TABLE xx_import_excel_data_demo
        (
           -- Data Columns
           column1            VARCHAR2(100),
           column2            VARCHAR2(100),
           column3            VARCHAR2(100),
           column4            VARCHAR2(100),
           column5            VARCHAR2(100),
           -- Who Columns
           last_update_date   DATE    NOT NULL,
           last_updated_by    NUMBER  NOT NULL,
           creation_date      DATE    NOT NULL,
           created_by         NUMBER  NOT NULL,
           last_update_login  NUMBER
        );

    5. Create a New Entity Object (EO)
    Right-click on ImportxlsDemo > New > ADF Business Components > Entity Object
    Name – ImportxlsEO
    Package -- prajkumar.oracle.apps.fnd.importxlsdemo.schema.server
    Database Objects -- XX_IMPORT_EXCEL_DATA_DEMO
    Note – By default, ROWID will be the primary key if we do not make any column the primary key.
    Check the Accessors, Create Method, Validation Method and Remove Method.

    6. Create a New View Object (VO)
    Right-click on ImportxlsDemo > New > ADF Business Components > View Object
    Name -- ImportxlsVO
    Package -- prajkumar.oracle.apps.fnd.importxlsdemo.server
    In Step 2, on the Entity page, select ImportxlsEO and shuttle it to the selected list. In Step 3, on the Attributes page, select all columns and shuttle them to the selected list. On the Java page, uncheck Generate Java File for the View Object Class (ImportxlsVOImpl) and check Generate Java File for the View Row Class (ImportxlsVORowImpl) -> Generate Java File -> Accessors.

    7. Add Your View Object to the Root UI Application Module
    Right-click on ImportxlsAM > Edit ImportxlsAM > Data Model > select ImportxlsVO and shuttle it to the Data Model list.

    8. Create a New Page
    Right-click on ImportxlsDemo > New > Web Tier > OA Components > Page
    Name -- ImportxlsPG
    Package -- prajkumar.oracle.apps.fnd.importxlsdemo.webui

    9. Select ImportxlsPG and go to the structure pane, where a default region has been created.

    10. Select region1 and set the following properties:
    ID -- PageLayoutRN
    AM Definition -- prajkumar.oracle.apps.fnd.importxlsdemo.server.ImportxlsAM
    Window Title -- Import Data From Excel through OAF Page Demo

    11. Create a messageComponentLayout Region under the Page Layout Region
    Right-click PageLayoutRN > New > Region
    ID -- MainRN
    Item Style -- messageComponentLayout

    12. Create a New messageFileUpload Bean under MainRN
    Right-click on MainRN > New > messageFileUpload, and set the following properties for the new item:
    ID -- MessageFileUpload
    Item Style -- messageFileUpload

    13. Create a New Submit Button Bean under MainRN
    Right-click on MainRN > New > messageLayout, and set the following properties for the messageLayout:
    ID -- ButtonLayout
    Right-click on ButtonLayout > New > Item
    ID -- Go
    Item Style -- submitButton
    Attribute Set -- /oracle/apps/fnd/attributesets/Buttons/Go

    14. Create a Controller for page ImportxlsPG
    Right-click on PageLayoutRN > Set New Controller
    Package Name: prajkumar.oracle.apps.fnd.importxlsdemo.webui
    Class Name: ImportxlsCO

    Write the following code in ImportxlsCO's processFormRequest:

        import java.io.Serializable;
        import oracle.apps.fnd.framework.OAApplicationModule;
        import oracle.apps.fnd.framework.OAException;
        import oracle.apps.fnd.framework.webui.OAControllerImpl;
        import oracle.apps.fnd.framework.webui.OAPageContext;
        import oracle.apps.fnd.framework.webui.beans.OAWebBean;
        import oracle.cabo.ui.data.DataObject;
        import oracle.jbo.domain.BlobDomain;

        public void processFormRequest(OAPageContext pageContext, OAWebBean webBean)
        {
          super.processFormRequest(pageContext, webBean);
          if (pageContext.getParameter("Go") != null)
          {
            // Grab the uploaded file from the messageFileUpload bean
            DataObject fileUploadData =
              (DataObject)pageContext.getNamedDataObject("MessageFileUpload");
            String fileName = null;
            try
            {
              fileName = (String)fileUploadData.selectValue(null, "UPLOAD_FILE_NAME");
            }
            catch (NullPointerException ex)
            {
              throw new OAException("Please Select a File to Upload", OAException.ERROR);
            }
            BlobDomain uploadedByteStream =
              (BlobDomain)fileUploadData.selectValue(null, fileName);
            try
            {
              // Hand the raw bytes to the AM method that parses the spreadsheet
              OAApplicationModule oaapplicationmodule = pageContext.getRootApplicationModule();
              Serializable[] aserializable2 = { uploadedByteStream };
              Class[] aclass2 = { BlobDomain.class };
              oaapplicationmodule.invokeMethod("ReadExcel", aserializable2, aclass2);
            }
            catch (Exception ex)
            {
              throw new OAException(ex.toString(), OAException.ERROR);
            }
          }
        }

    Write the following code in ImportxlsAMImpl.java:

        import java.io.IOException;
        import java.io.InputStream;
        import jxl.Cell;
        import jxl.CellType;
        import jxl.Sheet;
        import jxl.Workbook;
        import jxl.read.biff.BiffException;
        import oracle.apps.fnd.framework.OAViewObject;
        import oracle.apps.fnd.framework.server.OAApplicationModuleImpl;
        import oracle.apps.fnd.framework.server.OAViewObjectImpl;
        import oracle.jbo.Row;
        import oracle.jbo.domain.BlobDomain;

        // Inserts one row into ImportxlsVO from one row of spreadsheet data
        public void createRecord(String[] excel_data)
        {
          OAViewObject vo = (OAViewObject)getImportxlsVO1();
          if (!vo.isPreparedForExecution())
          {
            vo.executeQuery();
          }
          Row row = vo.createRow();
          try
          {
            for (int i = 0; i < excel_data.length; i++)
            {
              row.setAttribute("Column" + (i + 1), excel_data[i]);
            }
          }
          catch (Exception e)
          {
            System.out.println(e.getMessage());
          }
          vo.insertRow(row);
          getTransaction().commit();
        }

        // Parses the uploaded .xls and creates one record per spreadsheet row
        public void ReadExcel(BlobDomain fileData) throws IOException
        {
          String[] excel_data = new String[5];
          InputStream inputWorkbook = fileData.getInputStream();
          Workbook w;
          try
          {
            w = Workbook.getWorkbook(inputWorkbook);
            // Get the first sheet
            Sheet sheet = w.getSheet(0);
            for (int i = 0; i < sheet.getRows(); i++)
            {
              for (int j = 0; j < sheet.getColumns(); j++)
              {
                Cell cell = sheet.getCell(j, i);
                CellType type = cell.getType();
                if (type == CellType.LABEL)
                {
                  System.out.println("I got a label " + cell.getContents());
                  excel_data[j] = cell.getContents();
                }
                if (type == CellType.NUMBER)
                {
                  System.out.println("I got a number " + cell.getContents());
                  excel_data[j] = cell.getContents();
                }
              }
              createRecord(excel_data);
            }
          }
          catch (BiffException e)
          {
            e.printStackTrace();
          }
        }

    15. Congratulations, you have successfully finished. Run your page and test your work. Consider an Excel file PRAJ_TEST.xls with sample data in the first five columns, and try importing it into the DB table.
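    If the OAF wiring misbehaves, it can help to verify the spreadsheet parsing on its own first. Below is a minimal standalone sketch using the same JExcelApi (jxl) calls as ReadExcel above; the file name PRAJ_TEST.xls and its location are assumptions for illustration:

        import java.io.File;
        import jxl.Sheet;
        import jxl.Workbook;

        public class JxlSmokeTest
        {
          public static void main(String[] args) throws Exception
          {
            // Assumed sample file; point this at any small .xls on your machine
            Workbook w = Workbook.getWorkbook(new File("PRAJ_TEST.xls"));
            Sheet sheet = w.getSheet(0);
            for (int i = 0; i < sheet.getRows(); i++)
            {
              for (int j = 0; j < sheet.getColumns(); j++)
              {
                // Note: getCell takes (column, row)
                System.out.print(sheet.getCell(j, i).getContents() + "\t");
              }
              System.out.println();
            }
            w.close();
          }
        }

    If this prints the sheet contents correctly, any remaining problems are in the OAF or classpath setup rather than in the jxl parsing.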


  • SSIS Lookup component tuning tips

    - by jamiet
    Yesterday evening I attended a London meeting of the UK SQL Server User Group at Microsoft’s offices in London Victoria. As usual it was both a fun and informative evening, and in particular there seemed to be a few questions arising about tuning the SSIS Lookup component; I rattled off some comments and figured it would be prudent to drop some of them into a dedicated blog post, hence the one you are reading right now.

    Scene setting
    A popular pattern in SSIS is to use a Lookup component to determine whether a record in the pipeline already exists in the intended destination table or not, and I cover this pattern in my 2006 blog post Checking if a row exists and if it does, has it changed? (note to self: must rewrite that blog post for SSIS2008). Fundamentally the SSIS Lookup component (when using the FullCache option) sucks some data out of a database and holds it in memory so that it can be compared to data in the pipeline. One of the big benefits of using SSIS dataflows is that they process data one buffer at a time; that means that not all of the data from your source exists in the dataflow at the same time, and it is why an SSIS dataflow can process data volumes that far exceed the available memory. However, that only applies to data in the pipeline; for reasons that are hopefully obvious, ALL of the data in the lookup set must exist in the memory cache for the duration of the dataflow’s execution, which means that any memory used by the lookup cache will not be available to be used as a pipeline buffer. Moreover, there’s an obvious correlation between the amount of data in the lookup cache and the time it takes to charge that cache; the more data you have, the longer it will take to charge and the longer you have to wait until the dataflow actually starts to do anything. For these reasons your goal is simple: ensure that the lookup cache contains as little data as possible.

    General tips
    Here is a simple tick list you can follow in order to tune your lookups (a sketch of a cache-charging query follows the list):
    - Use a SQL statement to charge your cache, don’t just pick a table from the dropdown list made available to you. (Read why in SELECT *... or select from a dropdown in an OLE DB Source component?)
    - Only pick the columns that you need; ignore everything else.
    - Make the database columns that your cache is populated from as narrow as possible. If a column is defined as VARCHAR(20) then SSIS will allocate 20 bytes for every value in that column – that is a big waste if the actual values are significantly less than 20 characters in length.
    - Do you need DT_WSTR typed columns or will DT_STR suffice? DT_WSTR uses twice the amount of space to hold values that can be stored using a DT_STR, so if you can use DT_STR, consider doing so. The same principle goes for the numerical datatypes DT_I2/DT_I4/DT_I8.
    - Only populate the cache with data that you KNOW you will need. In other words, think about your WHERE clause!
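    To make the tick list concrete, here is a hedged example of a cache-charging statement; the table and column names (dbo.Customer, CustomerId, Email, IsActive) are invented for illustration:

        -- Charge the Lookup cache with only the columns and rows we need,
        -- keeping each column as narrow as the data allows.
        SELECT  CustomerId,                          -- key column used for the lookup match
                CAST(Email AS VARCHAR(50)) AS Email  -- narrow, non-Unicode where possible
        FROM    dbo.Customer
        WHERE   IsActive = 1;                        -- don't cache rows that can never match

    Compare that with picking the whole table from the dropdown: every unused column and row still costs cache memory and charge time.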
    Thinking outside the box
    It is tempting to build a large monolithic dataflow that does many things, one of which is a Lookup. Often, though, you can make better use of your available resources by, well, mixing things up a little, and here are a few ideas to get your creative juices flowing:
    - There is no rule that says everything has to happen in a single dataflow. If you have some particularly resource-intensive lookups then consider putting that lookup into a dataflow all of its own and using raw files to pass the pipeline data in and out of that dataflow.
    - Know your data. If you think, for example, that the majority of your incoming rows will match with only a small subset of your lookup data then consider chaining multiple Lookup components together; the first would use a FullCache containing that data subset, and the remaining data that doesn’t find a match could be passed to a second Lookup that perhaps uses a NoCache lookup, thus negating the need to pull all of that least-used lookup data into memory.
    - Do you need to process all of your incoming data all at once? If you can process different partitions of your data separately then you can partition your lookup cache as well. For example, if you are using a lookup to convert a location into a [LocationId] then why not process your data one region at a time? This will mean your lookup cache only has to contain data for the location that you are currently processing, and with the ability of the Lookup in SSIS2008 and beyond to charge the cache using a dynamically built SQL statement you’ll be able to achieve it using the same dataflow and simply loop over it using a ForEach loop.
    - Taking the previous data-partitioning idea further: a dataflow can contain more than one data path, so why not split your data using a Conditional Split component and, again, charge your lookup caches with only the data that they need for that partition.
    - Lookups have two uses: to (1) find a matching row from the lookup set and (2) put attributes from that matching row into the pipeline. Ask yourself, do you need to do these two things at the same time? After all, once you have the key column(s) from your lookup set then you can use that key to get the rest of the attributes further downstream, perhaps even in another dataflow.
    - Are you using the same lookup data set multiple times? If so, consider the file caching option in SSIS 2008 and beyond.
    - Above all, experiment and be creative with different combinations. You may be surprised at what works.

    Final thoughts
    If you want to know more about how the Lookup component differs in SSIS2008 from SSIS2005 then I have a dedicated blog post about that at Lookup component gets a makeover. I am on a mini-crusade at the moment to get a BULK MERGE feature into the database engine, the thinking being that if the database engine can quickly merge massive amounts of data in a similar manner to how it can insert massive amounts using BULK INSERT then that’s a lot of work that wouldn’t have to be done in the SSIS pipeline. If you think that is a good idea then go and vote for BULK MERGE on Connect. If you have any other tips to share then please stick them in the comments. Hope this helps!
    @Jamiet


  • Passing multiple POST parameters to Web API Controller Methods

    - by Rick Strahl
    ASP.NET Web API introduces a new API for creating REST APIs and making AJAX callbacks to the server. This new API provides a host of great new functionality that unifies many of the features of the various AJAX/REST APIs that Microsoft created before it - ASP.NET AJAX and WCF REST specifically - and combines them into a single, more consistent API. Web API addresses many of the concerns that developers had with these older APIs, namely that it was very difficult to build consistent REST style resource APIs easily. While Web API provides many new features and makes many scenarios much easier, a lot of the focus has been on making it easier to build REST compliant APIs that are focused on resource based solutions and HTTP verbs. But RPC style calls that are common with AJAX callbacks in Web applications have gotten a lot less focus, and there are a few scenarios that are not that obvious, especially if you're expecting Web API to provide functionality similar to ASP.NET AJAX style AJAX callbacks.

    RPC vs. 'Proper' REST
    RPC style HTTP calls mimic calling a method with parameters and returning a result. Rather than mapping explicit server side resources or 'nouns', RPC calls tend to simply map a server side operation, passing in parameters and receiving a typed result where parameters and result values are marshaled over HTTP. Typically RPC calls - like SOAP calls - tend to always be POST operations rather than following HTTP conventions and using the GET/POST/PUT/DELETE etc. verbs to implicitly determine what operation needs to be fired. RPC might not be considered 'cool' anymore, but for typical private AJAX backend operations of a Web site I'd wager that a large percentage of use cases of Web API will fall towards RPC style calls rather than 'proper' REST style APIs. Web applications that have needs for things like live validation against data, filling data based on user inputs, or handling small UI updates often don't lend themselves very well to limited HTTP verb usage. It might not be what the cool kids do, but I don't see RPC calls getting replaced by proper REST APIs any time soon. Proper REST has its place - for 'real' API scenarios that manage and publish/share resources - but for more transactional operations RPC seems a better choice and much easier to implement than trying to shoehorn a boatload of endpoint methods into a few HTTP verbs. In any case Web API does a good job of providing both the RPC abstraction as well as the HTTP Verb/REST abstraction. RPC works well out of the box, but there are some differences, especially if you're coming from ASP.NET AJAX services or WCF REST, when it comes to multiple parameters.

    Action Routing for RPC Style Calls
    If you've looked at Web API demos you've probably seen a bunch of examples of how to create HTTP Verb based routing endpoints. Verb based routing essentially maps a controller and then uses HTTP verbs to map the methods that are called in response to HTTP requests. This works great for resource APIs but doesn't work so well when you have many operational methods in a single controller. HTTP Verb routing is limited to the few HTTP verbs available (plus separate method signatures) and - worse than that - you can't easily extend the controller with custom routes or action routing beyond that.
    Thankfully Web API also supports action-based routing, which allows you to create RPC style endpoints fairly easily:

        RouteTable.Routes.MapHttpRoute(
            name: "AlbumRpcApiAction",
            routeTemplate: "albums/{action}/{title}",
            defaults: new
            {
                title = RouteParameter.Optional,
                controller = "AlbumApi",
                action = "GetAlbums"
            }
        );

    This uses traditional MVC style {action} method routing, which is different from the HTTP verb based routing you might have read a bunch about in conjunction with Web API. Action based routing like the above lets you specify an endpoint method in a Web API controller either via the {action} parameter in the route string or via a default value for custom routes. Using routing you can pass multiple parameters either on the route itself or pass parameters on the query string, via ModelBinding or content value binding. For most common scenarios this actually works very well. As long as you are passing either a single complex type via a POST operation, or multiple simple types via query string or POST buffer, there's no issue. But if you need to pass multiple parameters as was easily done with WCF REST or ASP.NET AJAX, things are not so obvious. Web API has no issue allowing for a single parameter like this:

        [HttpPost]
        public string PostAlbum(Album album)
        {
            return String.Format("{0} {1:d}", album.AlbumName, album.Entered);
        }

    There are actually two ways to call this endpoint: albums/PostAlbum.

    Using the Model Binder with plain POST values
    In this mechanism you're sending plain urlencoded POST values to the server, which the ModelBinder then maps to the parameter. Each property value is matched to each matching POST value. This works similar to the way that MVC's ModelBinder works. Here's how you can POST using the ModelBinder and jQuery:

        $.ajax({
            url: "albums/PostAlbum",
            type: "POST",
            data: { AlbumName: "Dirty Deeds", Entered: "5/1/2012" },
            success: function (result) { alert(result); },
            error: function (xhr, status, p3, p4) {
                var err = "Error " + " " + status + " " + p3;
                if (xhr.responseText && xhr.responseText[0] == "{")
                    err = JSON.parse(xhr.responseText).message;
                alert(err);
            }
        });

    Here's what the POST data looks like for this request:
    The model binder and its straight form-based POST mechanism is great for posting data directly from HTML pages to model objects. It avoids having to do manual conversions for many operations and is a great boon for AJAX callback requests.

    Using the Web API JSON Formatter
    The other option is to post data using a JSON string. The process for this is similar except that you create a JavaScript object and serialize it to JSON first:

        album = {
            AlbumName: "PowerAge",
            Entered: new Date(1977, 0, 1)
        };
        $.ajax({
            url: "albums/PostAlbum",
            type: "POST",
            contentType: "application/json",
            data: JSON.stringify(album),
            success: function (result) { alert(result); }
        });

    Here the data is sent using a JSON object rather than form data, and the data is JSON encoded over the wire. The trace reveals that the data is sent using plain JSON (source above), which is a little more efficient since there's no UrlEncoding that occurs. BTW, notice that Web API automatically deals with the date. I provided the date as a plain string, rather than a JavaScript date value, and the Formatter and ModelBinder both automatically map the date properly to the Entered DateTime property of the Album object.

    Passing multiple Parameters to a Web API Controller
    Single parameters work fine in either of these RPC scenarios, and that's to be expected. ModelBinding always works against a single object because it maps a model.
    But what happens when you want to pass multiple parameters? Consider an API Controller method that has a signature like the following:

        [HttpPost]
        public string PostAlbum(Album album, string userToken)

    Here I'm asking to pass two objects to an RPC method. Is that possible? This used to be fairly straightforward either with WCF REST or ASP.NET AJAX ASMX services, but as far as I can tell this is not directly possible using a POST operation with Web API. There are a few workarounds that you can use to make this work:

    Use both POST *and* QueryString Parameters in Conjunction
    If you have both complex and simple parameters, you can pass the simple parameters on the query string. The above would actually work with:

        /album/PostAlbum?userToken=sekkritt

    but that's not always possible. In this example it might not be a good idea to pass a user token on the query string, though. It also won't work if you need to pass multiple complex objects, since query string values do not support complex type mapping. They only work with simple types.

    Use a single Object that wraps the two Parameters
    If you go by service-based architecture guidelines, every service method should always pass and return a single value only. The input should wrap potentially multiple input parameters and the output should convey status as well as provide the result value. You typically have an xxxRequest and an xxxResponse class that wraps the inputs and outputs. Here's what this method might look like:

        public PostAlbumResponse PostAlbum(PostAlbumRequest request)
        {
            var album = request.Album;
            var userToken = request.UserToken;

            return new PostAlbumResponse()
            {
                IsSuccess = true,
                Result = String.Format("{0} {1:d} {2}", album.AlbumName, album.Entered, userToken)
            };
        }

    with these support types:

        public class PostAlbumRequest
        {
            public Album Album { get; set; }
            public User User { get; set; }
            public string UserToken { get; set; }
        }

        public class PostAlbumResponse
        {
            public string Result { get; set; }
            public bool IsSuccess { get; set; }
            public string ErrorMessage { get; set; }
        }

    To call this method you now have to assemble these objects on the client and send them up as JSON:

        var album = { AlbumName: "PowerAge", Entered: "1/1/1977" };
        var user = { Name: "Rick" };
        var userToken = "sekkritt";

        $.ajax({
            url: "samples/PostAlbum",
            type: "POST",
            contentType: "application/json",
            data: JSON.stringify({ Album: album, User: user, UserToken: userToken }),
            success: function (result) { alert(result.Result); }
        });

    I assemble the individual types first and then combine them in the data: property of the $.ajax() call into the actual object passed to the server, which mimics the structure of the PostAlbumRequest server class that has Album, User and UserToken properties. This works well enough, but it gets tedious if you have to create Request and Response types for each method signature. If you have common parameters that are always passed (like you always pass an album or usertoken) you might be able to abstract this to use a single object that gets reused for all methods, but this gets confusing too: overload a single 'parameter' too much and it becomes a nightmare to decipher what your method actually can use.

    Use JObject to parse multiple Property Values out of an Object
    If you recall, ASP.NET AJAX and WCF REST used a 'wrapper' object to make default AJAX calls.
    Rather than directly calling a service you always passed an object which contained properties for each parameter:

        { parm1: Value, parm2: Value2 }

    WCF REST/ASP.NET AJAX would then parse these top-level property values and map them to the parameters of the endpoint method. This automatic type wrapping functionality is no longer available directly in Web API, but since Web API now uses JSON.NET for its JSON serializer you can actually simulate that behavior with a little extra code. You can use the JObject class to receive a dynamic JSON result and then use the dynamic cast of JObject to walk through the child objects and even parse them into strongly typed objects. Here's how to do this on the API Controller end:

        [HttpPost]
        public string PostAlbum(JObject jsonData)
        {
            dynamic json = jsonData;
            JObject jalbum = json.Album;
            JObject juser = json.User;
            string token = json.UserToken;

            var album = jalbum.ToObject<Album>();
            var user = juser.ToObject<User>();

            return String.Format("{0} {1} {2}", album.AlbumName, user.Name, token);
        }

    This is clearly not as nice as having the parameters passed directly, but it works to allow you to pass multiple parameters and access them using Web API. JObject is JSON.NET's generic object container, which sports a nice dynamic interface that allows you to walk through the object's properties using standard 'dot' object syntax. All you have to do is cast the object to dynamic to get access to the property interface of the JSON type. Additionally, JObject also allows you to parse JObject instances into strongly typed objects, which enables us here to retrieve the two objects passed as parameters from this jQuery code:

        var album = { AlbumName: "PowerAge", Entered: "1/1/1977" };
        var user = { Name: "Rick" };
        var userToken = "sekkritt";

        $.ajax({
            url: "samples/PostAlbum",
            type: "POST",
            contentType: "application/json",
            data: JSON.stringify({ Album: album, User: user, UserToken: userToken }),
            success: function (result) { alert(result); }
        });

    Summary
    ASP.NET Web API brings many new features and many advantages over the older Microsoft AJAX and REST APIs, but realize that some things, like passing multiple strongly typed object parameters, will work a bit differently. It's not insurmountable, but just knowing what options are available to simulate this behavior is good to know. Now let me say here that it's probably not a good practice to pass a bunch of parameters to an API call. Ideally APIs should be closely factored to accept single parameters or a single content parameter at least, along with some identifier parameters that can be passed on the query string. But saying that doesn't mean that occasionally you don't run into a situation where you have the need to pass several objects to the server, and all three of the options I mentioned might have merit in different situations. For now I'm sure the question of how to pass multiple parameters will come up quite a bit from people migrating WCF REST or ASP.NET AJAX code to Web API.
    At least there are options available to make it work.

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in Web Api


  • Setting up a new Silverlight 4 Project with WCF RIA Services

    - by Kevin Grossnicklaus
    Many of my clients are actively using Silverlight 4 and RIA Services to build powerful line-of-business applications. Getting things set up correctly is critical to being able to take full advantage of the RIA Services plumbing, and when developers struggle with the setup they tend to shy away from the solution as a whole. I’m a big proponent of RIA Services and wanted to take the opportunity to share some of my experiences in setting up these types of projects. In late 2010 I presented a RIA Services Master Class here in St. Louis, MO through my firm (ArchitectNow), and the information shared in this post was promised during that presentation.
    One other thing I want to mention before diving in is the existence of a number of other great posts on this subject. I’ve learned a lot from many of them and wanted to call out a few. The purpose of my post is to point out some of the gotchas that people get caught up on in the process, but I would still encourage you to do as much additional research as you can to find the perfect setup for your needs. Here are a few additional blog posts and articles you should check out on the subject:
    http://msdn.microsoft.com/en-us/library/ee707351(VS.91).aspx
    http://adam-thompson.com/post/2010/07/03/Getting-Started-with-WCF-RIA-Services-for-Silverlight-4.aspx

    Technologies
    I don’t intend for this post to turn into a full WCF RIA Services tutorial, but I did want to point out what technologies we will be using:
    Visual Studio.NET 2010
    Silverlight 4.0
    WCF RIA Services for Visual Studio 2010
    Entity Framework 4.0
    I also wanted to point out that the screenshots came from my personal development box, which has a number of additional plug-ins and frameworks loaded, so a few of the screenshots might not match 100% with what you see on your own machines. If you do not have Visual Studio 2010 you can download the express version from http://www.microsoft.com/express. The Silverlight 4.0 tools and the WCF RIA Services components are installed via the Web Platform Installer (http://www.microsoft.com/web/download). Also, the examples given in this post are done in C#…sorry to you VB folks, but the concepts are 100% identical.

    Setting up a new RIA Services Project
    This section will provide a step-by-step walkthrough of setting up a new RIA Services project using a shared DLL for server-side code and a simple Entity Framework model for data access. All projects are created with the consistent ArchitectNow.RIAServices filename prefix and default namespace. This would be modified to match your company’s standards.
    First, open Visual Studio and open the new project window via File->New->Project. In the New Project window, select the Silverlight folder in the Installed Templates section on the left and select “Silverlight Application” as your project type. Verify your solution name and location are set appropriately. Note that the project name we specified in the example below ends with .Client. This indicates the name which will be given to our Silverlight project. I consider Silverlight a client-side technology and thus use this name to reflect that. Click Ok to continue.
    During the creation of a new Silverlight 4 project you will be prompted with the following dialog to create a new ASP.NET web project to host your Silverlight content. As we are demonstrating the setup of a WCF RIA Services infrastructure, make sure the “Enable WCF RIA Services” option is checked and click OK.
    Obviously, there are some other options here which have an effect on your solution, and you are welcome to look around. For our example we are going to leave the ASP.NET Web Application Project selected. If you are interested in having your Silverlight project hosted in an MVC 2 application or a Web Site project, these options are available as well. Also, whichever web project type you select, the name can be modified here; note that it defaults to the same name as your Silverlight project with the addition of a .Web suffix.
    At this point, your full Silverlight 4 project and host ASP.NET Web Application should be created and will now display in your Visual Studio solution explorer as part of a single Visual Studio solution as follows:
    Now we want to add our WCF RIA Services projects to this same solution. To do so, right-click on the Solution node in the solution explorer and select Add->New Project. In the New Project dialog again select the Silverlight folder under the Visual C# node on the left and, in the main area of the screen, select the WCF RIA Services Class Library project template as shown below. Make sure your project name is set appropriately as well. For the sample below, we will name the project “ArchitectNow.RIAServices.Server.Entities”. The .Server.Entities suffix we use is meant to simply indicate that this particular project will contain our WCF RIA Services entity classes (as you will see below). Click OK to continue.
    Once you have created the WCF RIA Services Class Library specified above, Visual Studio will automatically add TWO projects to your solution. The first will be a project called .Server.Entities (using our naming conventions) and the other will have the same name with a .Web extension. The full solution (with all 4 projects) is shown in the image below.
    The .Entities project will essentially remain empty and is actually a Silverlight 4 class library that will contain generated RIA Services domain objects. It will be referenced by our front-end Silverlight project and thus allow for simplified sharing of code between the client and the server. The .Entities.Web project is a .NET 4.0 class library into which we will put our data access code (via Entity Framework). This is our server-side code and business logic, and the RIA Services plumbing will maintain a link between this project and the front end. Specific entities such as our domain objects and other code we set to be shared will be copied automatically into the .Entities project to be used in both the front end and the back end.
    At this point, we want to do a little cleanup of the projects in our solution, and we will do so by deleting the “Class1.cs” class from both the .Entities project and the .Entities.Web project. (Has anyone ever intentionally named a class “Class1”?)
    Next, we need to configure a few references to make RIA Services work. THIS IS A KEY STEP THAT CAUSES MANY HEADACHES FOR DEVELOPERS NEW TO THIS INFRASTRUCTURE! Using the Add References dialog in Visual Studio, add a project reference from the *.Client project (our Silverlight 4 client) to the *.Entities project (our RIA Services class library). Next, again using the Add References dialog in Visual Studio, add a project reference from the *.Client.Web project (our ASP.NET host project) to the *.Entities.Web project (our back-end data services DLL).
    To get to the Add References dialog, simply right-click on the project you wish to add a reference to in the Visual Studio solution explorer and select “Add Reference” from the resulting context menu. You will want to make sure these references are added as “Project” references to simplify your future debugging. To reiterate the reference direction using the project names we have utilized in this example thus far: .Client references .Entities, and .Client.Web references .Entities.Web. If you have opted for a different naming convention, then the Silverlight project must reference the RIA Services Silverlight class library, and the ASP.NET host project must reference the server-side class library.
    Next, we are going to add a new Entity Framework data model to our data services project (.Entities.Web). We will do this by right-clicking on this project (ArchitectNow.RIAServices.Server.Entities.Web in the above diagram) and selecting Add->New Item. In the Add New Item dialog we will select ADO.NET Entity Data Model as in the following diagram. For now we will call this simply SampleDataModel.edmx and click OK.
    It is worth pointing out that WCF RIA Services is in no way tied to the Entity Framework as a means of accessing data, and any data access technology is supported (as long as the server-side implementation maps to the RIA Services pattern, which is a topic beyond the scope of this post). We are using EF to quickly demonstrate the RIA Services concepts and setup infrastructure; as such, I am not providing a database schema with this post but am instead connecting to a small sample database on my local machine. The following diagram shows a simple EF data model with two tables that I reverse engineered from a local data store. If you are putting together your own solution, feel free to reverse engineer a few tables from any local database to which you have access.
    At this point, once you have an EF data model generated as an EDMX into your .Entities.Web project, YOU MUST BUILD YOUR SOLUTION. I know it seems strange to call that out, but it is important that the solution be built at this point for the next step to be successful. Obviously, if you have any build errors, these must be addressed at this point.
    Now we will add a RIA Services Domain Service to our .Entities.Web project (our server-side code). Right-click on the .Entities.Web project and select Add->New Item. In the Add New Item dialog, select Domain Service Class and verify the name of your new Domain Service is correct (ours is called SampleService.cs in the image below). Next, click “Add”.
    After clicking “Add” to include the Domain Service Class in the selected project, you will be presented with the following dialog. In it, you can choose which entities from the selected EDMX to include in your services and whether they should be allowed to be edited (i.e. inserted, updated, or deleted) via this service. If the “Available DataContext/ObjectContext classes” dropdown is empty, this indicates you have not yet successfully built your project after adding your EDMX. I would also recommend verifying that the “Generate associated classes for metadata” option is selected. Once you have selected the appropriate options, click “OK”.
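    Before moving on, it may help to know roughly what the generated domain service will look like. The sketch below is illustrative only; the entity name Customer and the context name SampleDataModelContainer are assumptions standing in for whatever your EDMX actually contains:

        using System.Linq;
        using System.ServiceModel.DomainServices.EntityFramework;
        using System.ServiceModel.DomainServices.Hosting;

        // Rough shape of the wizard-generated domain service (names vary per model)
        [EnableClientAccess()]
        public class SampleService : LinqToEntitiesDomainService<SampleDataModelContainer>
        {
            // One query method per entity selected in the wizard
            public IQueryable<Customer> GetCustomers()
            {
                return this.ObjectContext.Customers;
            }

            // Insert/update/delete methods appear only if editing was enabled
            public void InsertCustomer(Customer customer)
            {
                this.ObjectContext.Customers.AddObject(customer);
            }
        }

    The query methods are what the Silverlight client will ultimately call through the generated domain context.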
    Once you have added the domain service class to the .Entities.Web project, the resulting solution should look similar to the following:
    Note that in the solution you now have a SampleDataModel.edmx, which represents your EF data mapping to your database, and a SampleService.cs, which will contain a large amount of generated RIA Services code which RIA Services utilizes to access this data from the Silverlight front-end. You will put all your server-side data access code and logic into the SampleService.cs class. The SampleService.metadata.cs class is for decorating the generated domain objects with attributes from the System.ComponentModel.DataAnnotations namespace for validation purposes.
    FINAL AND KEY CONFIGURATION STEP! One key step that causes significant headaches for developers configuring RIA Services for the first time is the fact that, when we added the EDMX to the .Entities.Web project for our EF data access, a connection string was generated and placed within a newly generated App.Config file within that project. While we didn’t point it out at the time, you can see it in the image above. This connection string will be required for the EF data model to successfully locate its data. Also, when we added the Domain Service class to the .Entities.Web project, a number of RIA Services configuration options were added to the same App.Config file.
    Unfortunately, when we ultimately begin to utilize the RIA Services infrastructure, our Silverlight UI will be making RIA Services calls through the ASP.NET host project (i.e. .Client.Web). This host project has a reference to the .Entities.Web project which actually contains the code, so all calls will pass through correctly EXCEPT for the fact that the host project will utilize its own Web.Config for any configuration settings. For this reason we must now merge all the sections of the App.Config file in the .Entities.Web project into the Web.Config file in the .Client.Web project (a sketch of the relevant sections follows). I know this is a bit tedious, and I wish there were a simpler solution, but it is required for our RIA Services Domain Service to be made available to the front-end Silverlight project. Much of this manual merge can be achieved by simply cutting and pasting from App.Config into Web.Config. Unfortunately, the <system.webServer> section will exist in both, and the contents of this section will need to be manually merged. Fortunately, this is a step that needs to be taken only once per solution. As you add additional data structures and Domain Service methods to the server, no additional changes will be necessary to the Web.Config.
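    As a rough illustration of what typically has to move, here is a hedged sketch of the kinds of sections involved; everything below is generated for your specific model and server, so treat each name, path, and version as an assumption to be checked against your own App.Config:

        <!-- Sections copied from .Entities.Web\App.Config into .Client.Web\Web.Config -->
        <connectionStrings>
          <!-- Generated by the EDMX wizard; the name and metadata paths will differ -->
          <add name="SampleDataModelContainer"
               connectionString="metadata=res://*/SampleDataModel.csdl|res://*/SampleDataModel.ssdl|res://*/SampleDataModel.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=.;Initial Catalog=SampleDb;Integrated Security=True&quot;"
               providerName="System.Data.EntityClient" />
        </connectionStrings>
        <system.serviceModel>
          <!-- Lets the RIA Services domain service run inside the ASP.NET host -->
          <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
        </system.serviceModel>

    Remember that the <system.webServer> section will already exist in the host's Web.Config, so its child elements must be merged by hand rather than pasted wholesale.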
    Next Steps
    At this point, we have walked through the basic setup of a simple RIA Services solution. Unfortunately, there is still a lot to know about RIA Services, and we have not even begun to take advantage of the plumbing which we just configured (meaning we haven’t even made a single RIA Services call). I plan on posting a few more introductory posts over the next few weeks to take us to this step. If you have any questions on the content in this post, feel free to reach out to me via this blog and I’ll gladly point you in (hopefully) the right direction.

    Resources
    Prior to closing out this post, I wanted to share a number of resources to help you get started with RIA Services. While I plan on posting more on the subject, I didn’t invent any of this stuff and wanted to give credit to the following areas for helping me put a lot of these pieces into place. The books and online resources below will go a long way to making you extremely productive with RIA Services in the shortest time possible. The only thing required of you is the dedication to take advantage of the resources available.

    Books
    Pro Business Applications with Silverlight 4
    http://www.amazon.com/Pro-Business-Applications-Silverlight-4/dp/1430272074/ref=sr_1_2?ie=UTF8&qid=1291048751&sr=8-2
    Silverlight 4 in Action
    http://www.amazon.com/Silverlight-4-Action-Pete-Brown/dp/1935182374/ref=sr_1_1?ie=UTF8&qid=1291048751&sr=8-1
    Pro Silverlight for the Enterprise (Books for Professionals by Professionals)
    http://www.amazon.com/Pro-Silverlight-Enterprise-Books-Professionals/dp/1430218673/ref=sr_1_3?ie=UTF8&qid=1291048751&sr=8-3

    Web Content
    RIA Services:
    http://channel9.msdn.com/Blogs/RobBagby/NET-RIA-Services-in-5-Minutes
    http://silverlight.net/riaservices/
    http://www.silverlight.net/learn/videos/all/net-ria-services-intro/
    http://www.silverlight.net/learn/videos/all/ria-services-support-visual-studio-2010/
    http://channel9.msdn.com/learn/courses/Silverlight4/SL4BusinessModule2/SL4LOB_02_01_RIAServices
    http://www.myvbprof.com/MainSite/index.aspx#/zSL4_RIA_01
    http://channel9.msdn.com/blogs/egibson/silverlight-firestarter-ria-services
    http://msdn.microsoft.com/en-us/library/ee707336%28v=VS.91%29.aspx
    Silverlight:
    www.silverlight.net
    http://msdn.microsoft.com/en-us/silverlight4trainingcourse.aspx
    http://channel9.msdn.com/shows/silverlighttv


  • Turn A Flash Drive Into a Portable Web Server

    - by Matthew Guay
    Portable applications are very useful for getting work done on the go, but how about portable servers? Here’s how you can turn your flash drive into a portable web server.

    Getting Started
    To put a full web server on our flash drive, we’re going to use XAMPP Lite. This lightweight, preconfigured server includes recent versions of Apache, MySQL, and PHP, so you can run most websites and webapps directly from it. You could use the full XAMPP, which includes more features such as a FileZilla FTP server and OpenSSL, but for most purposes the light version is plenty for a portable server.
    Download the latest version of XAMPP Lite (link below). In this tutorial, we used the self-extracting EXE version; you could choose the ZIP file and extract the files yourself, but we found it easier to use the executable.
    Run the installer, and click Browse to choose where to install your server.
    Select your flash drive, or a folder in it, and click Ok. Make sure your flash drive has at least 250MB of available storage space. XAMPP will create an xampplite folder and store all the files in it during the installation.
    Click Install, and all of the files will be extracted to your flash drive. This may take a few moments depending on your flash drive’s speed.
    When the extraction process is finished, a Command Prompt window will open to finish the installation. The first prompt will ask if you want to add shortcuts to the start menu and desktop; enter “n” since we don’t want to create start menu links to our portable server.
    Now enter “y” to configure XAMPP’s directories automatically.
    Finally, enter “y” to make XAMPP fully portable. It will set up the servers to run without specific drive letters, so your server will run from any computer.
    XAMPP will finalize your changes; press Enter when everything is completed.
    Setup will automatically launch the command line version of XAMPP. On first run, confirm that your time zone is correct. And that’s it! You can now run XAMPP’s control panel by entering 1, or you can exit and run XAMPP from any other computer with your flash drive.
    To complete your portable webserver kit, you may want to install Portable Firefox or Iron Browser on your flash drive so you always have your favorite browser ready to use.

    Running your portable server
    Using your portable server is very simple. Open the xampplite folder on your flash drive and launch xampp-control.exe.
    Click Start beside Apache and MySQL to get your webserver running.
    Please note: Do not check the Svc box, as this will run the server as a Windows service. To keep XAMPP portable, you do not want it running as a service!
    Windows Firewall may prompt you that it blocked the server; click Allow access to let your server run.
    Once they’re running, you can click Admin to open the default XAMPP admin page running from your local webserver. Or, you can view it by browsing to http://localhost/ or http://127.0.0.1/ in your browser.
    If everything is working correctly, you should see this page in your browser. Choose your default language, and then you’ll see the default XAMPP admin page.
    Click the Status link on the left sidebar to make sure everything is running correctly.
    If you click the Admin button for MySQL in the XAMPP Control Panel, it will open phpMyAdmin in your default browser. Alternately, you can open the MySQL admin page by entering http://localhost/phpmyadmin/ or http://127.0.0.1/phpmyadmin/ in your favorite browser.
    Now you can add your own webpages to your webserver. Save all of your web files in the \xampplite\htdocs\ folder on your flash drive; a quick test page (shown below) is an easy way to confirm everything is working.
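    As a quick sanity check, you can drop a small PHP page into that folder and confirm the portable stack is serving it. This is a minimal sketch; the file name hello.php is just an example:

        <?php
        // Save as \xampplite\htdocs\hello.php on the flash drive,
        // then browse to http://localhost/hello.php
        echo "<h1>Hello from the portable server!</h1>";
        echo "<p>Server time: " . date("Y-m-d H:i:s") . "</p>";
        phpinfo(); // dumps the PHP configuration so you can verify versions and modules
        ?>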
    Install WordPress in your portable server
    Since XAMPP Lite includes MySQL and PHP, you can even run webapps such as WordPress, the popular CMS and blogging platform. Download WordPress (link below), and extract the files to the \xampplite\htdocs folder on your flash drive. Now all of the WordPress files are stored in \xampplite\htdocs\wordpress on your flash drive.
    We still need to set up WordPress on our portable server. Open your MySQL admin page at http://localhost/phpmyadmin/ to create a new database for WordPress. Enter a name for your database in the “Create new database” box, and click Create.
    Click the Privileges tab on the top, and then select “Add a new User”. Enter a username and password for the database, and then click the Go button on the bottom of the page.

    Using WordPress
    Now, in your browser, enter http://localhost/wordpress/wp-admin/install.php. Click Create a Configuration File to continue.
    Make sure you have the database name, username, and password we created previously, and click “Let’s Go!”
    Enter your WordPress database name, username, and password, leave the other two entries as default, and click Submit.
    You should now have the database all ready to go. Click “Run the install” to finish installing WordPress.
    Enter a title, username, and password for your test blog, as well as your email address, and then click “Install WordPress”.
    You now have a portable install of WordPress. Click “Log In” to access your WordPress admin page. Enter your username and password, and click Log In.
    Here you can add pages, posts, themes, extensions, and anything else just like you would on a normal WordPress site. This is a great way to experiment with WordPress without messing up your real website. You can view your portable WordPress site by entering http://localhost/wordpress/ in your address bar.

    Closing your server
    When you’re done running your test server, click the Stop button on each of the services, and then click the Exit button in the XAMPP control panel. If you press the exit button on the top of the window, it will just minimize the control panel to the tray. Alternately, you can shut down your server by running xampp_stop.exe from your xampplite folder.

    Conclusion
    XAMPP Lite gives you a great way to run a full webserver directly from your flash drive. Now, anywhere you go, you can test and tweak your webpages and webapps from any Windows computer.

    Links
    Download XAMPP Lite
    Download WordPress


  • Hey, Google: It’s Time to Add Multi-Window Multitasking To Android

    - by Chris Hoffman
    In 2012, Google’s Dianne Hackborn threatened to revoke CyanogenMod’s access to the Android Market if they moved forward with adding “Cornerstone” multitasking to their custom ROM. Samsung has since created their own multi-window multitasking feature. Dianne Hackborn said this “is something that needs to be done at the mainline platform level” so apps wouldn’t break. She was right — Android needs this as a standard feature and it’s time for Google to provide it. Doesn’t Android Have Multitasking? Android originally stood out from Apple’s iOS with its powerful multitasking. Applications can continue running in the background while you’re using another application. This makes Android powerful — you can even have BitTorrent clients downloading files in the background while using another app. Android still kept the design of a single app on screen at a time. This made a lot of sense when Android only ran on smartphones with small screens. Today, Android runs on everything from smaller smartphones all the way up to huge “phablets” like the Galaxy Note. Android has gone beyond phones and runs on 12-inch tablets, convertibles with keyboard docks, laptops, and even Android desktops. Android isn’t just a phone operating system. Samsung’s Multi-Window Isn’t Good Enough Samsung has tried to add value to Android by adding a multi-window feature. When you’re using a high-end phone like the Galaxy Note or Galaxy S, or a Galaxy tablet, you have the ability to run certain apps side-by-side with each other. There are big problems here. This only works on Samsung devices, and only on specific Samsung devices. To add support for this feature in a way that doesn’t break other apps, Samsung’s multi-window feature also only works with specific apps. You can’t just run any app in multi-window view, only the apps on the Multi Window bar Samsung provides. This prevents third-party apps from breaking, which is what Google was worried about with CyanogenMod’s Cornerstone feature. A feature that only works with a handful of apps on specific devices from a single manufacturer isn’t good enough. This feature needs to work on every Android device — or at least ones with suitably large screens and powerful enough internals. It needs to be an Android platform feature so application developers can ensure their apps will work properly with it on every device. Android developers shouldn’t have to add support for each manufacturer’s own multi-window feature if other manufacturers decide to copy Samsung. Floating Apps Are a Dirty Hack Floating apps also enable real multitasking. Remember that Android allows apps to run in the background while you’re using an app in the foreground. These apps can present interfaces that appear floating above the current app — think of it like using “always on top” to make a window always appear over every other app on a desktop operating system. You can install floating apps to browse the web, take notes, chat, and watch videos while using any app. Only apps specifically designed to run as floating apps will work, so you have to seek them out. Floating apps are also awkward to use because they float over the app you’re using, blocking parts of its interface. Microsoft added floating-window support to Skype for Android. You can have a video conversation and the other person’s face will always appear on your screen, even when you leave the Skype app. Microsoft is using more of Android’s multi-window multitasking power than Google is. 
Custom ROMs and Root-Only Tweaks Aren’t Acceptable Some custom ROMs are adding this feature to Android. Google threatened to revoke CyanogenMod’s access to the Android Market (now known as Google Play) if they added this feature because it could potentially break third-party apps. Today, other custom ROMs are working on split-screen multitasking. Samsung added their own version to their own devices. You can also get this feature by using a root-only Xposed Framework tweak known as XMultiWindow. If you have root access, you can get multi-window multitasking for any app on your device. This shouldn’t require rooting your device or installing a custom ROM. These third-party solutions often have awkward interfaces and bugs. We need an integrated, supported solution that works the same on every device. Why Multi-Window is Important Microsoft’s Windows 8.1 stands out among tablet operating systems for its powerful multitasking support, allowing you to view several apps side-by-side at the same time. Apple is also reported to be working on adding side-by-side apps to the iPad with iOS 8. On every competitor’s operating system, you’ll be able to view a web page while you write an email, watch a video while you browse the web, or chat with someone while you do anything else. But Android has remained frozen in time. Despite all Android’s underlying power — and despite the way Android allows apps to adapt to different screen sizes — Google is resisting adding this feature. Large-screen Android tablets like the Nexus 10 (remember that tablet Google hasn’t updated in over 18 months?) need this feature. So do huge phones, convertibles, laptops, and Android desktops. If tablets are the future of personal computing, we should be able to do more than one thing at a time on our tablets’ big screens. Microsoft, Samsung, and even Apple are realizing this — now it’s Google’s turn. Image Credit: Sergey Galyonkin on Flickr, Karlis Dambrans on Flickr

    Read the article

  • Yet Another ASP.NET MVC CRUD Tutorial

    - by Ricardo Peres
    I know that I have not posted much on MVC, mostly because I don’t use it in my daily life, but since I find it so interesting, and since it is gaining such popularity, I will be talking about it much more. This time, it’s about the most basic of scenarios: CRUD. Although there are several ASP.NET MVC tutorials out there that cover ordinary CRUD operations, I couldn’t find any that also explained how to add AJAX, optimistic concurrency control and validation, using Entity Framework Code First, so I set out to write one! I won’t go into explaining what MVC, Code First, optimistic concurrency control or AJAX are; I assume you are all familiar with these concepts by now. Let’s consider a hypothetical use case: products. For simplicity, we only want to be able to either view a single product or edit it. First, we need our model: public class Product { public Product() { this.Details = new HashSet<OrderDetail>(); } [Required] [StringLength(50)] public String Name { get; set; } [Key] [ScaffoldColumn(false)] [DatabaseGenerated(DatabaseGeneratedOption.Identity)] public Int32 ProductId { get; set; } [Required] [Range(1, 100)] public Decimal Price { get; set; } public virtual ISet<OrderDetail> Details { get; protected set; } [Timestamp] [ScaffoldColumn(false)] public Byte[] RowVersion { get; set; } } Keep in mind that this is a simple scenario. Let’s see what we have: A class Product, that maps to a product record on the database; A product has a required (RequiredAttribute) Name property which can contain up to 50 characters (StringLengthAttribute); The product’s Price must be a decimal value between 1 and 100 (RangeAttribute); It contains a set of order details, for each time that it has been ordered, which we will not talk about (Details); The record’s primary key (mapped to property ProductId) comes from a SQL Server IDENTITY column generated by the database (KeyAttribute, DatabaseGeneratedAttribute); The table uses a SQL Server ROWVERSION (previously known as TIMESTAMP) column for optimistic concurrency control mapped to property RowVersion (TimestampAttribute). Then we will need a controller for viewing product details, which will be located in folder ~/Controllers under the name ProductController: public class ProductController : Controller { [HttpGet] public ViewResult Get(Int32 id = 0) { if (id != 0) { using (ProductContext ctx = new ProductContext()) { return (this.View("Single", ctx.Products.Find(id) ?? new Product())); } } else { return (this.View("Single", new Product())); } } } If the requested product does not exist, or one was not requested at all, one with default values will be returned. I am using a view named Single to display the product’s details, more on that later. As you can see, it delegates the loading of products to an Entity Framework context, which is defined as: public class ProductContext : DbContext { public DbSet<Product> Products { get; set; } } Like I said before, I’ll keep it simple for now, only aggregate root Product is available.
The controller will use the standard routes defined by the Visual Studio ASP.NET MVC 3 template: routes.MapRoute( "Default", // Route name "{controller}/{action}/{id}", // URL with parameters new { controller = "Home", action = "Index", id = UrlParameter.Optional } // Parameter defaults ); Next, we need a view for displaying the product details, let’s call it Single, and have it located under ~/Views/Product: <%@ Page Language="C#" Inherits="System.Web.Mvc.ViewPage<Product>" %> <!DOCTYPE html> <html> <head runat="server"> <title>Product</title> <script src="/Scripts/jquery-1.7.2.js" type="text/javascript"></script> <script src="/Scripts/jquery-ui-1.8.19.js" type="text/javascript"></script> <script src="/Scripts/jquery.unobtrusive-ajax.js" type="text/javascript"></script> <script src="/Scripts/jquery.validate.js" type="text/javascript"></script> <script src="/Scripts/jquery.validate.unobtrusive.js" type="text/javascript"></script> <script type="text/javascript"> function onFailure(error) { } function onComplete(ctx) { } </script> </head> <body> <div> <%: this.Html.ValidationSummary(false) %> <% using (this.Ajax.BeginForm("Edit", "Product", new AjaxOptions { HttpMethod = FormMethod.Post.ToString(), OnSuccess = "onSuccess", OnFailure = "onFailure" })) { %> <%: this.Html.EditorForModel() %> <input type="submit" name="submit" value="Submit" /> <% } %> </div> </body> </html> Yes… I am using ASPX syntax… sorry about that! I implemented an editor template for the Product class, which must be located in the ~/Views/Shared/EditorTemplates folder as file Product.ascx: <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<Product>" %> <div> <%: this.Html.HiddenFor(model => model.ProductId) %> <%: this.Html.HiddenFor(model => model.RowVersion) %> <fieldset> <legend>Product</legend> <div class="editor-label"> <%: this.Html.LabelFor(model => model.Name) %> </div> <div class="editor-field"> <%: this.Html.TextBoxFor(model => model.Name) %> <%: this.Html.ValidationMessageFor(model => model.Name) %> </div> <div class="editor-label"> <%= this.Html.LabelFor(model => model.Price) %> </div> <div class="editor-field"> <%= this.Html.TextBoxFor(model => model.Price) %> <%: this.Html.ValidationMessageFor(model => model.Price) %> </div> </fieldset> </div> One thing you’ll notice is that I am including both the ProductId and the RowVersion properties as hidden fields; they will come in handy later, so that we know which product and version we are editing. The other thing to notice is the included JavaScript files: jQuery, jQuery UI and the unobtrusive validation scripts. Also, I am not using the Content extension method for translating relative URLs, because that way I would lose JavaScript IntelliSense for the jQuery functions. OK, so, at this moment, I want to add support for AJAX and optimistic concurrency control. So I write a controller method like this: [HttpPost] [AjaxOnly] [Authorize] public JsonResult Edit(Product product) { if (this.TryValidateModel(product) == true) { using (ProductContext ctx = new ProductContext()) { Boolean success = false; ctx.Entry(product).State = (product.ProductId == 0) ?
EntityState.Added : EntityState.Modified; try { success = (ctx.SaveChanges() == 1); } catch (DbUpdateConcurrencyException) { ctx.Entry(product).Reload(); } return (this.Json(new { Success = success, ProductId = product.ProductId, RowVersion = Convert.ToBase64String(product.RowVersion) })); } } else { return (this.Json(new { Success = false, ProductId = 0, RowVersion = String.Empty })); } } So, this method is only valid for HTTP POST requests (HttpPost), coming from AJAX (AjaxOnly, from MVC Futures), and from authenticated users (Authorize). It returns a JSON object, which is what you would normally use for AJAX requests, containing three properties: Success: a boolean flag; RowVersion: the current version of the ROWVERSION column as a Base-64 string; ProductId: the inserted product id, as coming from the database. If the product is new, it will be inserted into the database, and its primary key will be returned in the ProductId property; Success will be set to true. If a DbUpdateConcurrencyException occurs, it means that the value in the RowVersion property does not match the current ROWVERSION column value on the database, so the record must have been modified between the time that the page was loaded and the time we attempted to save the product. In this case, the controller just gets the new value from the database and returns it in the JSON object; Success will be false. Otherwise, the product will be updated, and Success, ProductId and RowVersion will all have their values set accordingly. So let’s see how we can react to these situations on the client side. Specifically, we want to deal with these situations: The user is not logged in when the update/create request is made, perhaps because the cookie expired; The optimistic concurrency check failed; All went well. So, let’s change our view: <%@ Page Language="C#" Inherits="System.Web.Mvc.ViewPage<Product>" %> <%@ Import Namespace="System.Web.Security" %> <!DOCTYPE html> <html> <head runat="server"> <title>Product</title> <script src="/Scripts/jquery-1.7.2.js" type="text/javascript"></script> <script src="/Scripts/jquery-ui-1.8.19.js" type="text/javascript"></script> <script src="/Scripts/jquery.unobtrusive-ajax.js" type="text/javascript"></script> <script src="/Scripts/jquery.validate.js" type="text/javascript"></script> <script src="/Scripts/jquery.validate.unobtrusive.js" type="text/javascript"></script> <script type="text/javascript"> function onFailure(error) { window.alert('An error occurred: ' + error); } function onSuccess(ctx) { if (typeof (ctx.Success) != 'undefined') { $('input#ProductId').val(ctx.ProductId); $('input#RowVersion').val(ctx.RowVersion); if (ctx.Success == false) { window.alert('An error occurred while updating the entity: it may have been modified by third parties. Please try again.'); } else { window.alert('Saved successfully'); } } else { if (window.confirm('Not logged in.
Login now?') == true) { document.location.href = '<%: FormsAuthentication.LoginUrl %>?ReturnURL=' + document.location.pathname; } } } </script> </head> <body> <div> <%: this.Html.ValidationSummary(false) %> <% using (this.Ajax.BeginForm("Edit", "Product", new AjaxOptions { HttpMethod = FormMethod.Post.ToString(), OnSuccess = "onSuccess", OnFailure = "onFailure" })) { %> <%: this.Html.EditorForModel() %> <input type="submit" name="submit" value="Submit" /> <% } %> </div> </body> </html> The implementation of the onSuccess function first checks if the response contains a Success property; if not, the most likely cause is that the request was redirected to the login page (using Forms Authentication) because it wasn’t authenticated, so we navigate there as well, keeping a reference to the current page. It then saves the current values of the ProductId and RowVersion properties to their respective hidden fields. They will be sent on each successive post and will be used to determine whether the request is adding a new product or updating an existing one. The only thing missing is the ability to insert a new product after inserting/editing an existing one, which can be easily achieved using this snippet: <input type="button" value="New" onclick="$('input#ProductId').val('');$('input#RowVersion').val('');"/> And that’s it.
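One note on the attributes used above: AjaxOnly ships in the ASP.NET MVC Futures assembly. If you would rather not take that dependency, an equivalent filter is only a few lines of C#; the sketch below is a stand-in of my own, not the Futures implementation.

    using System.Web.Mvc;

    // Rejects any request that did not arrive via XMLHttpRequest,
    // so the Edit action stays reachable only through Ajax.BeginForm.
    public class AjaxOnlyAttribute : ActionFilterAttribute
    {
        public override void OnActionExecuting(ActionExecutingContext filterContext)
        {
            if (!filterContext.HttpContext.Request.IsAjaxRequest())
            {
                // Answer 404 so the action is invisible to direct browsing
                filterContext.Result = new HttpNotFoundResult();
            }
        }
    }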

    Read the article

  • Multi-tenant ASP.NET MVC - Views

    - by zowens
    Part I – Introduction Part II – Foundation Part III – Controllers   So far we have covered the basic premise of tenants and how they will be delegated. Now comes a big issue with multi-tenancy: the views. In some applications, you will not have to override views for each tenant. However, one of my requirements is to add extra views (and controller actions) along with overriding views from the core structure. This presents a bit of a problem in locating views for each tenant request. I have chosen quite an opinionated approach at present, but will be coming back to the “views” issue in a later post. What’s the deal? The path I’ve chosen is to use precompiled Spark views. I really love Spark View Engine and was planning on using it in my project anyway. However, I ran across a really neat aspect of the source when I was having a look under the hood. There’s an easy way to hook in embedded views from your project. There are solutions that provide this, but they implement a special Virtual Path Provider. While I think this is a great solution, I would rather just have Spark take care of the view resolution. The magic actually happens during the compilation of the views into a bin-deployable DLL. After the views are compiled, they are simply pulled out of the views DLL. Each tenant has its own views DLL that just has “.Views” appended after the assembly name as a convention. The list of reasons for this approach is quite long. The primary motivation is performance. I’ve had quite a few performance issues in the past and I would like to increase my application’s performance in any way that I can. My customized build of Spark removes insignificant whitespace from the HTML output, so I can save some bandwidth and load time without having to deal with whitespace removal at runtime.   How to set up Tenants for the Host In the source, I’ve provided a single tenant as a sample (Sample1). This will serve as a template for subsequent tenants in your application. The first step is to add a “PostBuildStep” installer into the project. I’ve defined one in the source that will eventually change as we focus more on the construction of dependency containers. The next step is to tell the project to run the installer and copy the DLL output to a folder in the host that will be picked up as a tenant. Here’s the code that will achieve it (this belongs in the Post-build event command line field in the Build Events tab of the project settings): %systemroot%\Microsoft.NET\Framework\v4.0.30319\installutil "$(TargetPath)" copy /Y "$(TargetDir)$(TargetName)*.dll" "$(SolutionDir)Web\Tenants\" copy /Y "$(TargetDir)$(TargetName)*.pdb" "$(SolutionDir)Web\Tenants\" The DLLs with a name starting with the target assembly name will be copied to the “Tenants” folder in the web project. This means something like MultiTenancy.Tenants.Sample1.dll and MultiTenancy.Tenants.Sample1.Views.dll will both be copied along with the debug symbols. This is probably the simplest way to go about this, but it is a tad inflexible. For example, what if you have dependencies? The preferred method would probably be to use ILMerge to merge your dependencies with your target DLL. This would have to be added in the build events. Another way to achieve that would be to simply bypass Visual Studio events and use MSBuild.   I also got a question about how I was setting up the controller factory.
Here are the basics of how I’m setting up tenants inside the host (Global.asax): protected void Application_Start() { RegisterRoutes(RouteTable.Routes); // create a container just to pull in tenants var topContainer = new Container(); topContainer.Configure(config => { config.Scan(scanner => { scanner.AssembliesFromPath(Path.Combine(Server.MapPath("~/"), "Tenants")); scanner.AddAllTypesOf<IApplicationTenant>(); }); }); // create selectors var tenantSelector = new DefaultTenantSelector(topContainer.GetAllInstances<IApplicationTenant>()); var containerSelector = new TenantContainerResolver(tenantSelector); // clear view engines, we don't want anything other than spark ViewEngines.Engines.Clear(); // set view engine ViewEngines.Engines.Add(new TenantViewEngine(tenantSelector)); // set controller factory ControllerBuilder.Current.SetControllerFactory(new ContainerControllerFactory(containerSelector)); } The code to set up the tenants isn’t actually that hard. I’m utilizing assembly scanners in StructureMap as a simple way to pull in DLLs that are not in the AppDomain. Remember that there is a dependency on the host in the tenants, and a tenant cannot simply be referenced by the host because of circular dependencies.   Tenant View Engine TenantViewEngine is a simple delegator to the tenant’s specified view engine. You might have noticed that a tenant has to define a view engine: public interface IApplicationTenant { .... IViewEngine ViewEngine { get; } } The trick comes in specifying the view engine on the tenant side. Here’s some of the code that will pull views from the DLL: protected virtual IViewEngine DetermineViewEngine() { var factory = new SparkViewFactory(); var file = GetType().Assembly.CodeBase.Without("file:///").Replace(".dll", ".Views.dll").Replace('/', '\\'); var assembly = Assembly.LoadFile(file); factory.Engine.LoadBatchCompilation(assembly); return factory; } This code resides in an abstract Tenant class whose fields are set up in the constructor. This method (inside the abstract class) will load the Views assembly and load the compilation into Spark’s “Descriptors”, which will be used to determine views. There is some trickery in determining the file location… but it works just fine.   Up Next There are just a few big things left, such as having StructureMap configure controllers by convention instead of specifying types directly during container construction, and content resolution. I will also try to find a way to use the Web Forms View Engine in the multi-tenant way we achieved with the Spark View Engine, without using a virtual path provider. I will probably not use the Web Forms View Engine personally, but I’m sure some people would prefer WebForms because of the maturity of the engine. As always, I love to take questions by email or on twitter. Suggestions are always welcome as well! (Oh, and here’s another link to the source code).
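To make the delegation concrete, here is roughly what a pass-through TenantViewEngine could look like. This is a sketch only; it assumes the selector exposes a SelectTenant(HttpContextBase) method, which is my guess at the API rather than something taken from the post's source.

    using System.Web;
    using System.Web.Mvc;

    public class TenantViewEngine : IViewEngine
    {
        private readonly DefaultTenantSelector selector;

        public TenantViewEngine(DefaultTenantSelector selector)
        {
            this.selector = selector;
        }

        // Resolve the engine belonging to the tenant that owns the
        // current request, then forward each call to it untouched.
        private IViewEngine Current(ControllerContext context)
        {
            return selector.SelectTenant(context.HttpContext).ViewEngine;
        }

        public ViewEngineResult FindPartialView(ControllerContext controllerContext, string partialViewName, bool useCache)
        {
            return Current(controllerContext).FindPartialView(controllerContext, partialViewName, useCache);
        }

        public ViewEngineResult FindView(ControllerContext controllerContext, string viewName, string masterName, bool useCache)
        {
            return Current(controllerContext).FindView(controllerContext, viewName, masterName, useCache);
        }

        public void ReleaseView(ControllerContext controllerContext, IView view)
        {
            Current(controllerContext).ReleaseView(controllerContext, view);
        }
    }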

    Read the article

  • Recover Deleted Files on an NTFS Hard Drive from a Ubuntu Live CD

    - by Trevor Bekolay
    Accidentally deleting a file is a terrible feeling. Not being able to boot into Windows and undelete that file makes that even worse. Fortunately, you can recover deleted files on NTFS hard drives from an Ubuntu Live CD. To show this process, we created four files on the desktop of a Windows XP machine, and then deleted them. We then booted up the same machine with the bootable Ubuntu 9.10 USB Flash Drive that we created last week. Once Ubuntu 9.10 boots up, open a terminal by clicking Applications in the top left of the screen, and then selecting Accessories > Terminal. To undelete our files, we first need to identify the hard drive that we want to undelete from. In the terminal window, type in: sudo fdisk -l and press enter. What you’re looking for is a line that ends with HPFS/NTFS (under the heading System). In our case, the device is “/dev/sda1”. This may be slightly different for you, but it will still begin with /dev/. Note this device name. If you have more than one hard drive partition formatted as NTFS, then you may be able to identify the correct partition by the size. If you look at the second line of the fdisk output, it reads “Disk /dev/sda: 136.4 GB, …” This means that the hard drive that Ubuntu has named /dev/sda is 136.4 GB large. If your hard drives are of different size, then this information can help you track down the right device name to use. Alternatively, you can just try them all, though this can be time consuming for large hard drives. Now that you know the name Ubuntu has assigned to your hard drive, we’ll scan it to see what files we can uncover. In the terminal window, type: sudo ntfsundelete <HD name> and hit enter. In our case, the command is: sudo ntfsundelete /dev/sda1 The names of files that can be recovered show up in the far right column. The percentage in the third column tells us how much of that file can be recovered. Three of the four files that we originally deleted are showing up in this list, even though we shut down the computer right after deleting the four files – so even in ideal cases, your files may not be recoverable. Nevertheless, we have three files that we can recover – two JPGs and an MPG. Note: ntfsundelete is immediately available in the Ubuntu 9.10 Live CD. If you are in a different version of Ubuntu, or for some other reason get an error when trying to use ntfsundelete, you can install it by entering “sudo apt-get install ntfsprogs” in a terminal window. To quickly recover the two JPGs, we will use the * wildcard to recover all of the files that end with .jpg. In the terminal window, enter sudo ntfsundelete <HD name> -u -m *.jpg which is, in our case, sudo ntfsundelete /dev/sda1 -u -m *.jpg The two files are recovered from the NTFS hard drive and saved in the current working directory of the terminal. By default, this is the home directory of the current user, though we are working in the Desktop folder. Note that the ntfsundelete program does not make any changes to the original NTFS hard drive. If you want to take those files and put them back in the NTFS hard drive, you will have to move them there after they are undeleted with ntfsundelete. Of course, you can also put them on your flash drive or open Firefox and email them to yourself – the sky’s the limit! We have one more file to undelete – our MPG. Note the first column on the far left. It contains a number, its Inode. Think of this as the file’s unique identifier. Note this number.
To undelete a file by its Inode, enter the following in the terminal: sudo ntfsundelete <HD name> -u -i <Inode> In our case, this is: sudo ntfsundelete /dev/sda1 -u -i 14159 This recovers the file, along with an identifier that we don’t really care about. All three of our recoverable files are now recovered. However, Ubuntu lets us know visually that we can’t use these files yet. That’s because the ntfsundelete program saves the files as the “root” user, not the “ubuntu” user. We can verify this by typing the following in our terminal window: ls -l We want these three files to be owned by ubuntu, not root. To do this, enter the following in the terminal window: sudo chown ubuntu <Files> If the current folder has other files in it, you may not want to change their owner to ubuntu. However, in our case, we only have these three files in this folder, so we will use the * wildcard to change the owner of all three files. sudo chown ubuntu * The files now look normal, and we can do whatever we want with them. Hopefully you won’t need to use this tip, but if you do, ntfsundelete is a nice command-line utility. It doesn’t have a fancy GUI like many of the similar Windows programs, but it is a powerful tool that can recover your files quickly. See ntfsundelete’s manual page for more detailed usage information.
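For quick reference, the whole recovery session boils down to a handful of commands. This recap assumes the partition name (/dev/sda1) and inode (14159) from the example above; substitute your own values.

    sudo fdisk -l                              # identify the NTFS partition (look for HPFS/NTFS)
    sudo ntfsundelete /dev/sda1                # scan the partition for recoverable files
    sudo ntfsundelete /dev/sda1 -u -m "*.jpg"  # undelete every file matching *.jpg
    sudo ntfsundelete /dev/sda1 -u -i 14159    # undelete a single file by its inode
    sudo chown ubuntu *                        # take ownership of the recovered files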

    Read the article

  • Developing web apps using ASP.NET MVC 3, Razor and EF Code First - Part 2

    - by shiju
    In my previous post, Developing web apps using ASP.NET MVC 3, Razor and EF Code First - Part 1, we discussed how to work with ASP.NET MVC 3 and EF Code First for developing web apps. We created a generic repository and unit of work with EF Code First for our ASP.NET MVC 3 application and performed basic CRUD operations against a simple domain entity. In this post, I will demonstrate working with a domain entity with a deep object graph, a Service Layer and View Models, and will also complete the rest of the demo application. In the previous post, we performed CRUD operations against the Category entity; this post will focus on the Expense entity, which has an association with the Category entity. You can download the source code from http://efmvc.codeplex.com . The following frameworks will be used for this step by step tutorial.    1. ASP.NET MVC 3 RTM    2. EF Code First CTP 5    3. Unity 2.0 Domain Model Category Entity public class Category { public int CategoryId { get; set; } [Required(ErrorMessage = "Name Required")] [StringLength(25, ErrorMessage = "Must be less than 25 characters")] public string Name { get; set; } public string Description { get; set; } public virtual ICollection<Expense> Expenses { get; set; } } Expense Entity public class Expense { public int ExpenseId { get; set; } public string Transaction { get; set; } public DateTime Date { get; set; } public double Amount { get; set; } public int CategoryId { get; set; } public virtual Category Category { get; set; } } We have two domain entities - Category and Expense. A single category contains a list of expense transactions, and every expense transaction should have a Category. Repository class for Expense Transaction Let’s create a repository class for handling CRUD operations for the Expense entity: public class ExpenseRepository : RepositoryBase<Expense>, IExpenseRepository { public ExpenseRepository(IDatabaseFactory databaseFactory) : base(databaseFactory) { } } public interface IExpenseRepository : IRepository<Expense> { } Service Layer If you are new to the Service Layer pattern, check out Martin Fowler’s article Service Layer. According to Martin Fowler, a Service Layer defines an application’s boundary and its set of available operations from the perspective of interfacing client layers. It encapsulates the application’s business logic, controlling transactions and coordinating responses in the implementation of its operations. Controller classes should be lightweight; do not put much business logic into them. We can use the service layer as the business logic layer and encapsulate the rules of the application there.
Let’s create a Service class that coordinates the transactions for Expense: public interface IExpenseService { IEnumerable<Expense> GetExpenses(DateTime startDate, DateTime endDate); Expense GetExpense(int id); void CreateExpense(Expense expense); void DeleteExpense(int id); void SaveExpense(); } public class ExpenseService : IExpenseService { private readonly IExpenseRepository expenseRepository; private readonly IUnitOfWork unitOfWork; public ExpenseService(IExpenseRepository expenseRepository, IUnitOfWork unitOfWork) { this.expenseRepository = expenseRepository; this.unitOfWork = unitOfWork; } public IEnumerable<Expense> GetExpenses(DateTime startDate, DateTime endDate) { var expenses = expenseRepository.GetMany(exp => exp.Date >= startDate && exp.Date <= endDate); return expenses; } public void CreateExpense(Expense expense) { expenseRepository.Add(expense); unitOfWork.Commit(); } public Expense GetExpense(int id) { var expense = expenseRepository.GetById(id); return expense; } public void DeleteExpense(int id) { var expense = expenseRepository.GetById(id); expenseRepository.Delete(expense); unitOfWork.Commit(); } public void SaveExpense() { unitOfWork.Commit(); } } View Model for Expense Transactions In real world ASP.NET MVC applications, we need to design model objects especially for our views. Our domain objects are mainly designed for the needs of the domain model, representing the domain of our application. On the other hand, View Model objects are designed for the needs of our views. We have an Expense domain entity that has an association with Category. While creating a new Expense, we have to specify which Category the new Expense transaction belongs to. The user interface for the Expense transaction will have form fields representing the Expense entity and a CategoryId representing the Category. So let’s create a view model representing the needs of Expense transactions: public class ExpenseViewModel { public int ExpenseId { get; set; } [Required(ErrorMessage = "Category Required")] public int CategoryId { get; set; } [Required(ErrorMessage = "Transaction Required")] public string Transaction { get; set; } [Required(ErrorMessage = "Date Required")] public DateTime Date { get; set; } [Required(ErrorMessage = "Amount Required")] public double Amount { get; set; } public IEnumerable<SelectListItem> Category { get; set; } } The ExpenseViewModel is designed for the purposes of the View template and contains all the validation rules. It has properties for mapping values to the Expense entity and a property Category for binding the list of Category values to a drop-down.
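For readers who skipped Part 1: the ExpenseService above leans on the IRepository<T> and IUnitOfWork abstractions built there. Their shape below is inferred from the calls made in this post (Add, Delete, GetById, GetMany, Commit), not copied from Part 1.

    using System;
    using System.Collections.Generic;
    using System.Linq.Expressions;

    // Generic data-access contract the repositories implement (inferred sketch).
    public interface IRepository<T> where T : class
    {
        void Add(T entity);
        void Delete(T entity);
        T GetById(int id);
        IEnumerable<T> GetMany(Expression<Func<T, bool>> where);
    }

    // Funnels SaveChanges through one shared DbContext per request.
    public interface IUnitOfWork
    {
        void Commit();
    }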
Create Expense transaction Let’s create action methods in the ExpenseController for creating expense transactions: public ActionResult Create() { var expenseModel = new ExpenseViewModel(); var categories = categoryService.GetCategories(); expenseModel.Category = categories.ToSelectListItems(-1); expenseModel.Date = DateTime.Today; return View(expenseModel); } [HttpPost] public ActionResult Create(ExpenseViewModel expenseViewModel) { if (!ModelState.IsValid) { var categories = categoryService.GetCategories(); expenseViewModel.Category = categories.ToSelectListItems(expenseViewModel.CategoryId); return View("Save", expenseViewModel); } Expense expense = new Expense(); ModelCopier.CopyModel(expenseViewModel, expense); expenseService.CreateExpense(expense); return RedirectToAction("Index"); } In the Create action method for the HttpGet request, we create an instance of our View Model ExpenseViewModel with Category information for the drop-down list and pass the Model object to the View template. The extension method ToSelectListItems is shown below: public static IEnumerable<SelectListItem> ToSelectListItems(this IEnumerable<Category> categories, int selectedId) { return categories.OrderBy(category => category.Name).Select(category => new SelectListItem { Selected = (category.CategoryId == selectedId), Text = category.Name, Value = category.CategoryId.ToString() }); } In the Create action method for HttpPost, our view model object ExpenseViewModel will be bound to the posted form input values. We need to create an instance of Expense for persistence purposes, so we need to copy values from the ExpenseViewModel object to the Expense object. The ASP.NET MVC Futures assembly provides a static class ModelCopier that can be used for copying values between Model objects. The ModelCopier class has two static methods - CopyCollection and CopyModel. The CopyCollection method copies values between two collection objects, and CopyModel copies values between two model objects. We have used the CopyModel method of the ModelCopier class to copy values from the expenseViewModel object to the expense object. Finally, we call the CreateExpense method of the ExpenseService class to persist the new expense transaction. List Expense Transactions We want to list expense transactions based on a date range. So let’s create an action method for filtering expense transactions within a specified date range: public ActionResult Index(DateTime? startDate, DateTime?
endDate) { //If date is not passed, take current month's first and last date DateTime dtNow; dtNow = DateTime.Today; if (!startDate.HasValue) { startDate = new DateTime(dtNow.Year, dtNow.Month, 1); endDate = startDate.Value.AddMonths(1).AddDays(-1); } //take last date of start date's month, if end date is not passed if (startDate.HasValue && !endDate.HasValue) { endDate = (new DateTime(startDate.Value.Year, startDate.Value.Month, 1)).AddMonths(1).AddDays(-1); } var expenses = expenseService.GetExpenses(startDate.Value, endDate.Value); //if request is Ajax will return partial view if (Request.IsAjaxRequest()) { return PartialView("ExpenseList", expenses); } //set start date and end date to ViewBag dictionary ViewBag.StartDate = startDate.Value.ToShortDateString(); ViewBag.EndDate = endDate.Value.ToShortDateString(); //if request is not ajax return View(expenses); } We are using the above Index action method for both Ajax requests and normal requests. If the request is an Ajax request, we return the partial view ExpenseList. Razor Views for listing Expense information Let’s create view templates in Razor for showing the list of Expense information. ExpenseList.cshtml @model IEnumerable<MyFinance.Domain.Expense> <table> <tr> <th>Actions</th> <th>Category</th> <th>Transaction</th> <th>Date</th> <th>Amount</th> </tr> @foreach (var item in Model) { <tr> <td> @Html.ActionLink("Edit", "Edit", new { id = item.ExpenseId }) @Ajax.ActionLink("Delete", "Delete", new { id = item.ExpenseId }, new AjaxOptions { Confirm = "Delete Expense?", HttpMethod = "Post", UpdateTargetId = "divExpenseList" }) </td> <td> @item.Category.Name </td> <td> @item.Transaction </td> <td> @String.Format("{0:d}", item.Date) </td> <td> @String.Format("{0:F}", item.Amount) </td> </tr> } </table> <p> @Html.ActionLink("Create New Expense", "Create") | @Html.ActionLink("Create New Category", "Create", "Category") </p> Index.cshtml @using MyFinance.Helpers; @model IEnumerable<MyFinance.Domain.Expense> @{ ViewBag.Title = "Index"; } <h2>Expense List</h2> <script src="@Url.Content("~/Scripts/jquery.unobtrusive-ajax.min.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/jquery-ui.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/jquery.ui.datepicker.js")" type="text/javascript"></script> <link href="@Url.Content("~/Content/jquery-ui-1.8.6.custom.css")" rel="stylesheet" type="text/css" /> @using (Ajax.BeginForm(new AjaxOptions { UpdateTargetId = "divExpenseList", HttpMethod = "Get" })) { <table> <tr> <td> <div> Start Date: @Html.TextBox("StartDate", Html.Encode(String.Format("{0:mm/dd/yyyy}", ViewData["StartDate"].ToString())), new { @class = "ui-datepicker" }) </div> </td> <td> <div> End Date: @Html.TextBox("EndDate", Html.Encode(String.Format("{0:mm/dd/yyyy}", ViewData["EndDate"].ToString())), new { @class = "ui-datepicker" }) </div> </td> <td>
<input type="submit" value="Search By Transaction Date" /></td> </tr> </table> } <div id="divExpenseList"> @Html.Partial("ExpenseList", Model) </div> <script type="text/javascript"> $().ready(function () { $('.ui-datepicker').datepicker({ dateFormat: 'mm/dd/yy', buttonImage: '@Url.Content("~/Content/calendar.gif")', buttonImageOnly: true, showOn: "button" }); }); </script> Ajax search functionality using Ajax.BeginForm The search functionality of the Index view is provided via Ajax using Ajax.BeginForm. The Ajax.BeginForm() method writes an opening <form> tag to the response. You can use this method in a using block; in that case, the method renders the closing </form> tag at the end of the block, and the form is submitted asynchronously using JavaScript. The search will call the Index action method, which returns the partial view ExpenseList to update the search result. We want the Ajax response to update the divExpenseList element, so we specified UpdateTargetId as "divExpenseList" in the Ajax.BeginForm method. Add jQuery DatePicker Our search functionality uses a date range, so we provide two date pickers using the jQuery datepicker. You need to add references to the following JavaScript files to work with the jQuery datepicker: jquery-ui.js jquery.ui.datepicker.js For theme support, the datepicker can use a customized CSS file; in our example we have used “jquery-ui-1.8.6.custom.css”. For more details about the datepicker component, visit the jQuery UI website at http://jqueryui.com/demos/datepicker . In the jQuery ready event, we use the following JavaScript function to initialize the UI element that shows the date picker: <script type="text/javascript"> $().ready(function () { $('.ui-datepicker').datepicker({ dateFormat: 'mm/dd/yy', buttonImage: '@Url.Content("~/Content/calendar.gif")', buttonImageOnly: true, showOn: "button" }); }); </script> Source Code You can download the source code from http://efmvc.codeplex.com/ . Summary In this two-part series, we have created a simple web application using ASP.NET MVC 3 RTM, Razor and EF Code First CTP 5. I have demonstrated patterns and practices such as Dependency Injection, the Repository pattern, Unit of Work, ViewModel and Service Layer. My primary objective was to demonstrate different practices and options for developing web apps using ASP.NET MVC 3 and EF Code First. You can implement these approaches in your own way for building web apps using ASP.NET MVC 3. I will refactor this demo app at a later time.

    Read the article

  • What is SharePoint Out of the Box?

    - by Bil Simser
    It’s always fun in the blog-o-sphere and SharePoint bloggers always keep the pot boiling. Bjorn Furuknap recently posted a blog entry titled Why Out-of-the-Box Makes No Sense in SharePoint, quickly followed up by a rebuttal by Marc Anderson on his blog. Okay, now that we have all the players and the stage, what’s the big deal? Bjorn started his post saying that you don’t use “out-of-the-box” (OOTB) SharePoint because it makes no sense. I have to disagree with his premise, because what he calls OOTB is basically installing SharePoint and admiring it, but not using it. In his post he lays claim that modifying, say, the OOTB contacts list by removing (or I suppose adding) a column now puts you in a situation where you’re no longer using the OOTB functionality. Really? Side note. Dear Internet, please stop comparing building software to building houses. Or comparing software architecture to building architecture. Or comparing web sites to making dinner. Are you trying to dumb down something so the general masses understand it? Comparing a technical skill to a construction operation isn’t the way to do this. Last time I checked, most people don’t know how to build houses, and last time I checked, people reading technical SharePoint blogs are generally technical people that understand the terms you use. Putting metaphors around software development to make it easy to understand is detrimental to the goal. </rant> Okay, where were we? Right, adding columns to lists means you are no longer using the OOTB functionality. Yeah, I still don’t get it. Another statement Bjorn makes is that using the OOTB functionality kills the flexibility SharePoint has in creating exactly what you want. IMHO this really flies in the absolute face of *where* SharePoint *really* shines. For the past year or so I’ve been leaning more and more towards OOTB solutions over custom development for the simple reason that it’s expensive to maintain systems and code and assets. SharePoint has enabled me to do this simply by providing the tools where I can give users what they need without cracking open Visual Studio. This might be because my day job is with a regulated company and there’s more scrutiny with spending money on anything new, but frankly that should be the position of any responsible developer, architect, manager, or PM. Do you really want to throw money away because some developer tells you that you need a custom web part when, perhaps with some creative thinking or expectation setting with customers, you can meet the need with what you already have? The way I read Bjorn’s terminology of “out-of-the-box” is: install the software and tell people to go to a website and admire the OOTB system, but don’t change it! For those that know things like WordPress, DotNetNuke, SubText, Drupal or any of those content management/blogging systems, it’s akin to installing the software and setting up the “Hello World” blog post or page, then staring at it like it’s useful. “Yes, we are using WordPress!”. Then not adding a new post, creating a new category, or adding an About page. Perhaps I’m wrong in my interpretation. This leads us to: what is OOTB SharePoint? To many people I’ve talked to over the last few hours on twitter, email, etc., it is *not* just installing software but actually using it as it was fit for purpose. What’s the purpose of SharePoint then?
    It has many purposes, but using the OOTB templates Microsoft has given you the ability to collaborate on projects, author/share/publish documents, create pages, track items/contacts/tasks/etc. in a multi-user web based interface, and so on. Microsoft has pretty clear definitions of these different levels of SharePoint we’re talking about, and I think it’s important for everyone to know what they are and what they mean. Personalization and Administration To me, this is the OOTB experience. You install the product and then are able to do things like create new lists, sites, edit and personalize pages, create new views, etc. Basically use the platform services available to you with Windows SharePoint Services (or SharePoint Foundation in 2010) to your full advantage. No code, no special tools needed, and very little user training required. Could you take someone who has never done anything in a website or piece of software and unleash them onto a site? Probably not. However I would argue that anyone who’s configured the Outlook reading layout or applied styles to a Word document probably won’t have too much difficulty in using SharePoint OUT OF THE BOX. Customization Here’s where things might get a bit murky, but to me this is where you start looking at HTML/ASPX page code through SharePoint Designer, using jQuery scripts and plugging them into Web Part Pages via a Content Editor Web Part, and generally enhancing the site. The JavaScript debate might kick in here claiming it’s no different than C#, and frankly you can totally screw a site up with jQuery on a CEWP just as easily as you can with a C# delegate control deployed to the server file system. However (again, my blog, my opinion) the customization label comes in when I need to access the server (for example creating a custom theme) or have some kind of net-new element I add to the system that wasn’t there OOTB. It’s not content (like a new list or site), it’s code and does something functional. Development Here’s where the propeller hats come on and we’re talking algorithms and unit tests and compilers, oh my. Software is deployed to the server, people are writing solutions after some kind of training (perhaps), there might be some specialized tools they use to craft and deploy the solutions, there’s the possibility of exceptions being thrown, etc. There are a lot of definitions here and just like customization it might get murky (do you let non-developers build solutions using development, i.e. jQuery/C#?). In my experience, it’s much more cost effective keeping solutions under the first two umbrellas than leaping into the third one. Arguably you could say that you can’t build useful solutions without *some* kind of code (even just some simple jQuery). I think you can get a *lot* of value just from using the OOTB experience and I don’t think you’re constraining your users that much. I’m not saying Marc or Bjorn are wrong. Like Obi-Wan stated, they’re both correct “from a certain point of view”. To me, SharePoint Out of the Box makes total sense and should not be dismissed. I just don’t agree with the premise that Bjorn is basing his statements on, but that’s just my opinion, and his is different, and never the twain shall meet.

    Read the article

  • Know more about Enqueue Deadlock Detection

    - by Liu Maclean
    A question on the Oracle ALLSTAR forum asked how long Oracle takes to detect an enqueue deadlock. In a single-instance database the answer is about 3 seconds by default: the session that completes the deadlock cycle raises ORA-00060 roughly 3s after it begins to wait: SQL> select * from v$version; BANNER ---------------------------------------------------------------- Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bi PL/SQL Release 10.2.0.5.0 - Production CORE 10.2.0.5.0 Production TNS for Linux: Version 10.2.0.5.0 - Production NLSRTL Version 10.2.0.5.0 - Production SQL> select * from global_name; GLOBAL_NAME -------------------------------------------------------------------------------- www.oracledatabase12g.com PROCESS A: set timing on; update maclean1 set t1=t1+1; PROCESS B: update maclean2 set t1=t1+1; PROCESS A: update maclean2 set t1=t1+1; PROCESS B: update maclean1 set t1=t1+1; After about 3 seconds, PROCESS A reports: ERROR at line 1: ORA-00060: deadlock detected while waiting for resource Elapsed: 00:00:03.02 Process A needed only about 3s to report the deadlock. That interval is governed by a hidden parameter, which we can query and then enlarge to prove the point: SQL> col name for a30 SQL> col value for a5 SQL> col DESCRIB for a50 SQL> set linesize 140 pagesize 1400 SQL> SELECT x.ksppinm NAME, y.ksppstvl VALUE, x.ksppdesc describ 2 FROM SYS.x$ksppi x, SYS.x$ksppcv y 3 WHERE x.inst_id = USERENV ('Instance') 4 AND y.inst_id = USERENV ('Instance') 5 AND x.indx = y.indx 6 AND x.ksppinm='_enqueue_deadlock_scan_secs'; NAME VALUE DESCRIB ------------------------------ ----- -------------------------------------------------- _enqueue_deadlock_scan_secs 0 deadlock scan interval SQL> alter system set "_enqueue_deadlock_scan_secs"=18 scope=spfile; System altered. Elapsed: 00:00:00.01 SQL> startup force; ORACLE instance started. Total System Global Area 851443712 bytes Fixed Size 2100040 bytes Variable Size 738198712 bytes Database Buffers 104857600 bytes Redo Buffers 6287360 bytes Database mounted. Database opened. PROCESS A: SQL> set timing on; SQL> update maclean1 set t1=t1+1; 1 row updated. Elapsed: 00:00:00.06 Process B SQL> update maclean2 set t1=t1+1; 1 row updated. SQL> update maclean1 set t1=t1+1; Process A: SQL> alter session set events '10704 trace name context forever,level 10:10046 trace name context forever,level 8'; Session altered. SQL> update maclean2 set t1=t1+1; update maclean2 set t1=t1+1 * ERROR at line 1: ORA-00060: deadlock detected while waiting for resource Elapsed: 00:00:18.05 ksqcmi: TX,90011,4a9 mode=6 timeout=21474836 WAIT #12: nam='enq: TX - row lock contention' ela= 2930070 name|mode=1415053318 usn<<16 | slot=589841 sequence=1193 obj#=56810 tim=1308114759849120 WAIT #12: nam='enq: TX - row lock contention' ela= 2930636 name|mode=1415053318 usn<<16 | slot=589841 sequence=1193 obj#=56810 tim=1308114762779801 WAIT #12: nam='enq: TX - row lock contention' ela= 2930439 name|mode=1415053318 usn<<16 | slot=589841 sequence=1193 obj#=56810 tim=1308114765710430 *** 2012-06-12 09:58:43.089 WAIT #12: nam='enq: TX - row lock contention' ela= 2931698 name|mode=1415053318 usn<<16 | slot=589841 sequence=1193 obj#=56810 tim=1308114768642192 WAIT #12: nam='enq: TX - row lock contention' ela= 2930428 name|mode=1415053318 usn<<16 | slot=589841 sequence=1193 obj#=56810 tim=1308114771572755 WAIT #12: nam='enq: TX - row lock contention' ela= 2931408 name|mode=1415053318 usn<<16 | slot=589841 sequence=1193 obj#=56810 tim=1308114774504207 DEADLOCK DETECTED ( ORA-00060 ) [Transaction Deadlock] The following deadlock is not an ORACLE error. It is a deadlock due to user error in the design of an application or from issuing incorrect ad-hoc SQL.
The following information may aid in determining the deadlock: Notice that this time Process A waited on 'enq: TX - row lock contention' for 18s rather than 3s before ORA-00060 was raised, matching the new value of the hidden parameter "_enqueue_deadlock_scan_secs" (whose default is 0). Let's dig a little further: SQL> alter system set "_enqueue_deadlock_scan_secs"=4 scope=spfile; System altered. Elapsed: 00:00:00.01 SQL> alter system set "_enqueue_deadlock_time_sec"=9 scope=spfile; System altered. Elapsed: 00:00:00.00 SQL> startup force; ORACLE instance started. Total System Global Area 851443712 bytes Fixed Size 2100040 bytes Variable Size 738198712 bytes Database Buffers 104857600 bytes Redo Buffers 6287360 bytes Database mounted. Database opened. SQL> set linesize 140 pagesize 1400 SQL> show parameter dead NAME TYPE VALUE ------------------------------------ -------------------------------- ------------------------------ _enqueue_deadlock_scan_secs integer 4 _enqueue_deadlock_time_sec integer 9 SQL> set timing on SQL> select * from maclean1 for update wait 8; T1 ---------- 11 Elapsed: 00:00:00.01 PROCESS B SQL> select * from maclean2 for update wait 8; T1 ---------- 3 SQL> select * from maclean1 for update wait 8; select * from maclean1 for update wait 8 PROCESS A SQL> select * from maclean2 for update wait 8; select * from maclean2 for update wait 8 * ERROR at line 1: ORA-30006: resource busy; acquire with WAIT timeout expired Elapsed: 00:00:08.00 Look closely at what happened: the select for update wait set an enqueue request timeout of 8s, and although "_enqueue_deadlock_scan_secs"=4 (deadlock scan interval) implies a deadlock scan after roughly 4s, Process A never reported a deadlock. Instead it raised "ORA-30006: resource busy; acquire with WAIT timeout expired" after the full 8s rather than ORA-00060; no deadlock detection took place at all. The reason is the other hidden parameter, "_enqueue_deadlock_time_sec" (requests with timeout <= this will not have deadlock detection): when an enqueue request carries a timeout below "_enqueue_deadlock_time_sec" (default 5, i.e. timeout < 5s), the server process skips deadlock detection for that request, on the assumption that the short timeout will break the wait anyway; only when the timeout exceeds "_enqueue_deadlock_time_sec" does Oracle perform deadlock detection. Another test confirms this: SQL> show parameter dead NAME TYPE VALUE ------------------------------------ -------------------------------- ------------------------------ _enqueue_deadlock_scan_secs integer 4 _enqueue_deadlock_time_sec integer 9 Process A: SQL> set timing on; SQL> select * from maclean1 for update wait 10; T1 ---------- 11 Process B: SQL> select * from maclean2 for update wait 10; T1 ---------- 3 SQL> select * from maclean1 for update wait 10; PROCESS A: SQL> select * from maclean2 for update wait 10; select * from maclean2 for update wait 10 * ERROR at line 1: ORA-00060: deadlock detected while waiting for resource Elapsed: 00:00:06.02 Here the select for update wait 10 set a 10s timeout; because 10s exceeds the _enqueue_deadlock_time_sec value (9s), Process A did get deadlock detection, and it reported the deadlock after about 6s. That 6s comes from _enqueue_deadlock_scan_secs=4: the effective detection interval is _enqueue_deadlock_scan_secs rounded up to the next multiple of 3 seconds. To summarize how enqueue deadlock detection behaves: 1. In a single-instance database, deadlock detection happens within 3s by default and is controlled by _enqueue_deadlock_scan_secs (deadlock scan interval), whose default is 0; the effective interval is the parameter value rounded up to the nearest multiple of 3, so _enqueue_deadlock_scan_secs=0 means detection in 3s, _enqueue_deadlock_scan_secs=4 means detection in 6s, and so on. 2.
2. Whether detection happens at all depends on "_enqueue_deadlock_time_sec" (requests with timeout <= this will not have deadlock detection): when the enqueue request timeout is less than "_enqueue_deadlock_time_sec" (default 5), the server process skips deadlock detection entirely; only when the request timeout exceeds "_enqueue_deadlock_time_sec" does the "_enqueue_deadlock_scan_secs"-based detection described above apply. Such request timeouts typically come from select for update wait [TIMEOUT] statements.

Note: neither hidden parameter exists in 10.2.0.1; "_enqueue_deadlock_time_sec" was introduced in patchset 10.2.0.3, and "_enqueue_deadlock_scan_secs" in patchset 10.2.0.5.

In RAC the picture is different: deadlock detection defaults to about 10s and is driven by "_lm_dd_interval" (dd time interval in seconds), a parameter that can be traced back to release 8.0.6. Its default has changed across versions: 60s in 10g, reduced to 10s in 11g. Because a shorter interval means the LMD background process scans more often, and RAC LMD deadlock detection has historically consumed noticeable CPU, Oracle's RAC development team optimized the LMD detection code in 11g to keep its CPU usage down.

To watch 11g LMD deadlock detection at work, we can enable UTS trace on the LMD process in an 11g RAC and observe the distributed deadlock (DD) messages:

SQL> select * from v$version;
BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
CORE 11.2.0.3.0 Production
TNS for Linux: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production

SQL> select * from global_name;
GLOBAL_NAME
--------------------------------------------------------------------------------
www.oracledatabase12g.com

SQL> alter system set "_lm_dd_interval"=20 scope=spfile;
System altered.

SQL> startup force;
ORACLE instance started.
Total System Global Area 1570009088 bytes
Fixed Size                  2228704 bytes
Variable Size            1325403680 bytes
Database Buffers          234881024 bytes
Redo Buffers                7495680 bytes
Database mounted.
Database opened.

SQL> set linesize 140 pagesize 1400
SQL> show parameter lm_dd

NAME                                 TYPE                             VALUE
------------------------------------ -------------------------------- ------------------------------
_lm_dd_interval                      integer                          20

SQL> select count(*) from gv$instance;
  COUNT(*)
----------
         2

Instance 1:
SQL> oradebug setorapid 12
Oracle pid: 12, Unix process pid: 8608, image: [email protected] (LMD0)

Attach to LMD0 and enable UTS trace so the RAC-level deadlock detection message flow is captured:

SQL> oradebug event 10046 trace name context forever,level 8:10708 trace name context forever,level 103: trace[rac.*] disk high;
Statement processed.
Elapsed: 00:00:00.00

SQL> update maclean1 set t1=t1+1;
1 row updated.

Instance 2:
SQL> update maclean2 set t1=t1+1;
1 row updated.
SQL> update maclean1 set t1=t1+1;

Instance 1:
SQL> update maclean2 set t1=t1+1;
update maclean2 set t1=t1+1
*
ERROR at line 1:
ORA-00060: deadlock detected while waiting for resource
Elapsed: 00:00:20.51

LMD0's UTS trace shows the deadlock detection dialogue:

2012-06-12 22:27:00.929284 : [kjmpbmsg:process][type 22][msg 0x7fa620ac85a8][from 1][seq 8148.0][len 192]
2012-06-12 22:27:00.929346 : [kjmxmpm][type 22][seq 0.0][msg 0x7fa620ac85a8][from 1]
*** 2012-06-12 22:27:00.929
* kjddind: received DDIND msg with subtype x6
* reqp->dd_master_inst_kjxmddi == 1
* kjddind: dump sgh:
2012-06-12 22:27:00.929346*: kjddind: req->timestamp [0.15], kjddt [0.13]
2012-06-12 22:27:00.929346*: >> DDmsg:KJX_DD_REMOTE,TS[0.15],Inst 1->2,ddxid[id1,id2,inst:2097153,31,1],ddlock[0x95023930,829],ddMasterInst 1
2012-06-12 22:27:00.929346*: lock [0x95023930,829], op = [mast]
2012-06-12 22:27:00.929346*: reqp->timestamp [0.15], kjddt [0.13]
2012-06-12 22:27:00.929346*: kjddind: updated local timestamp [0.15]
* kjddind: case KJX_DD_REMOTE
2012-06-12 22:27:00.929346*: ADD IO NODE WFG: 0 frame pointer
2012-06-12 22:27:00.929346*: PUSH: type=res, enqueue(0xffffffff.0xffffffff)=0xbbb9af40, block=KJUSEREX, snode=1
2012-06-12 22:27:00.929346*: PROCESS: type=res, enqueue(0xffffffff.0xffffffff)=0xbbb9af40, block=KJUSEREX, snode=1
2012-06-12 22:27:00.929346*: POP: type=res, enqueue(0xffffffff.0xffffffff)=0xbbb9af40, block=KJUSEREX, snode=1
2012-06-12 22:27:00.929346*: kjddopr[TX 0xe000c.0x32][ext 0x5,0x0]: blocking lock 0xbbb9a800, owner 2097154 of inst 2
2012-06-12 22:27:00.929346*: PUSH: type=txn, enqueue(0xffffffff.0xffffffff)=0xbbb9a800, block=KJUSEREX, snode=1
2012-06-12 22:27:00.929346*: PROCESS: type=txn, enqueue(0xffffffff.0xffffffff)=0xbbb9a800, block=KJUSEREX, snode=1
2012-06-12 22:27:00.929346*: ADD NODE TO WFG: type=txn, enqueue(0xffffffff.0xffffffff)=0xbbb9a800, block=KJUSEREX, snode=1
2012-06-12 22:27:00.929346*: POP: type=txn, enqueue(0xffffffff.0xffffffff)=0xbbb9a800, block=KJUSEREX, snode=1
2012-06-12 22:27:00.929346*: kjddopt: converting lock 0xbbce92f8 on 'TX' 0x80016.0x5d4,txid [2097154,34]of inst 2
2012-06-12 22:27:00.929346*: PUSH: type=res, enqueue(0xffffffff.0xffffffff)=0xbbce92f8, block=KJUSEREX, snode=1
2012-06-12 22:27:00.929346*: PROCESS: type=res, enqueue(0xffffffff.0xffffffff)=0xbbce92f8, block=KJUSEREX, snode=1
2012-06-12 22:27:00.929346*: ADD NODE TO WFG: type=res, enqueue(0xffffffff.0xffffffff)=0xbbce92f8, block=KJUSEREX, snode=1
2012-06-12 22:27:00.929855 : GSIPC:AMBUF: rcv buff 0x7fa620aa8cd8, pool rcvbuf, rqlen 1102
2012-06-12 22:27:00.929878 : GSIPC:GPBMSG: new bmsg 0x7fa620aa8d48 mb 0x7fa620aa8cd8 msg 0x7fa620aa8d68 mlen 192 dest x100 flushsz -1
2012-06-12 22:27:00.929878*: << DDmsg:KJX_DD_REMOTE,TS[0.15],Inst 2->1,ddxid[id1,id2,inst:2097153,31,1],ddlock[0x95023930,829],ddMasterInst 1
2012-06-12 22:27:00.929878*: lock [0xbbce92f8,287], op = [mast]
2012-06-12 22:27:00.929878*: ADD IO NODE WFG: 0 frame pointer
2012-06-12 22:27:00.929923 : [kjmpbmsg:compl][msg 0x7fa620ac8588][typ p][nmsgs 1][qtime 0][ptime 0]
2012-06-12 22:27:00.929947 : GSIPC:PBAT: flush start.
flag 0x79 end 0 inc 4.4
2012-06-12 22:27:00.929963 : GSIPC:PBAT: send bmsg 0x7fa620aa8d48 blen 224 dest 1.0
2012-06-12 22:27:00.929979 : GSIPC:SNDQ: enq msg 0x7fa620aa8d48, type 65521 seq 8325, inst 1, receiver 0, queued 1
2012-06-12 22:27:00.929996 : GSIPC:BSEND: flushing sndq 0xb491dd28, id 0, dcx 0xbc517770, inst 1, rcvr 0 qlen 0 1
2012-06-12 22:27:00.930014 : GSIPC:BSEND: no batch1 msg 0x7fa620aa8d48 type 65521 len 224 dest (1:0)
2012-06-12 22:27:00.930088 : kjbsentscn[0x0.3f72dc][to 1]
2012-06-12 22:27:00.930144 : GSIPC:SENDM: send msg 0x7fa620aa8d48 dest x10000 seq 8325 type 65521 tkts x1 mlen xe00110
2012-06-12 22:27:00.930531 : GSIPC:KSXPCB: msg 0x7fa620aa8d48 status 30, type 65521, dest 1, rcvr 0
WAIT #0: nam='ges remote message' ela= 1372 waittime=80 loop=0 p3=74 obj#=-1 tim=1339554420931640
2012-06-12 22:27:00.931728 : GSIPC:RCVD: ksxp msg 0x7fa620af6490 sndr 1 seq 0.8149 type 65521 tkts 1
2012-06-12 22:27:00.931746 : GSIPC:RCVD: watq msg 0x7fa620af6490 sndr 1, seq 8149, type 65521, tkts 1
2012-06-12 22:27:00.931763 : GSIPC:RCVD: seq update (0.8148)->(0.8149) tp -15 fg 0x4 from 1 pbattr 0x0
2012-06-12 22:27:00.931779 : GSIPC:TKT: collect msg 0x7fa620af6490 from 1 for rcvr 0, tickets 1
2012-06-12 22:27:00.931794 : kjbrcvdscn[0x0.3f72dc][from 1][idx
2012-06-12 22:27:00.931810 : kjbrcvdscn[no bscn dd_master_inst_kjxmddi == 1
* kjddind: dump sgh:
NXTIN (nil) 0 wq 0 cvtops x0 0x0.0x0(ext 0x0,0x0)[0000-0000-00000000] inst 1
BLOCKER 0xbbb9a800 5 wq 1 cvtops x28 TX 0xe000c.0x32(ext 0x5,0x0)[20000-0002-00000022] inst 2
BLOCKED 0xbbce92f8 5 wq 2 cvtops x1 TX 0x80016.0x5d4(ext 0x2,0x0)[20000-0002-00000022] inst 2
NXTOUT (nil) 0 wq 0 cvtops x0 0x0.0x0(ext 0x0,0x0)[0000-0000-00000000] inst 1
2012-06-12 22:27:00.932058*: kjddind: req->timestamp [0.15], kjddt [0.15]
2012-06-12 22:27:00.932058*: >> DDmsg:KJX_DD_VALIDATE,TS[0.15],Inst 1->2,ddxid[id1,id2,inst:2097153,31,1],ddlock[0x95023930,829],ddMasterInst 1
2012-06-12 22:27:00.932058*: lock [(nil),0], op = [vald_dd]
2012-06-12 22:27:00.932058*: kjddind: updated local timestamp [0.15]
* kjddind: case KJX_DD_VALIDATE
*** 2012-06-12 22:27:00.932
* kjddvald called: kjxmddi stuff:
* cont_lockp (nil)
* dd_lockp 0x95023930
* dd_inst 1
* dd_master_inst 1
* sgh graph:
NXTIN (nil) 0 wq 0 cvtops x0 0x0.0x0(ext 0x0,0x0)[0000-0000-00000000] inst 1
BLOCKER 0xbbb9a800 5 wq 1 cvtops x28 TX 0xe000c.0x32(ext 0x5,0x0)[20000-0002-00000022] inst 2
BLOCKED 0xbbce92f8 5 wq 2 cvtops x1 TX 0x80016.0x5d4(ext 0x2,0x0)[20000-0002-00000022] inst 2
NXTOUT (nil) 0 wq 0 cvtops x0 0x0.0x0(ext 0x0,0x0)[0000-0000-00000000] inst 1
POP WFG NODE: lock=(nil)
* kjddvald: dump the PRQ:
BLOCKER 0xbbb9a800 5 wq 1 cvtops x28 TX 0xe000c.0x32(ext 0x5,0x0)[20000-0002-00000022] inst 2
BLOCKED 0xbbce92f8 5 wq 2 cvtops x1 TX 0x80016.0x5d4(ext 0x2,0x0)[20000-0002-00000022] inst 2
* kjddvald: KJDD_NXTONOD ->node_kjddsg.dinst_kjddnd =1
* kjddvald: ... which is not my node, my subgraph is validated but the cycle is not complete
Global blockers dump start:---------------------------------
DUMP LOCAL BLOCKER/HOLDER: block level 5 res [0x80016][0x5d4],[TX][ext 0x2,0x0]

A deadlock is found! This verifies that in 11.2.0.3 RAC, LMD deadlock detection honors "_lm_dd_interval", which we had raised to 20s (the ORA-00060 came back after about 20 seconds). In 10g, "_lm_dd_interval" defaulted to 60s, so server processes deadlocked across instances could wait a full minute before one of them was killed as the victim; in 11g, with the optimized LMD detection code, the default dropped to 10s, giving considerably faster enqueue deadlock detection.
In 11g, RAC LMD deadlock detection is governed by a family of hidden parameters, not just "_lm_dd_interval"; they can be listed as follows:

SQL> col name for a50
SQL> col describ for a60
SQL> col value for a20
SQL> set linesize 140 pagesize 1400
SQL> SELECT x.ksppinm NAME, y.ksppstvl VALUE, x.ksppdesc describ
  2  FROM SYS.x$ksppi x, SYS.x$ksppcv y
  3  WHERE x.inst_id = USERENV ('Instance')
  4  AND y.inst_id = USERENV ('Instance')
  5  AND x.indx = y.indx
  6  AND x.ksppinm like '_lm_dd%';

NAME                                               VALUE                DESCRIB
-------------------------------------------------- -------------------- ------------------------------------------------------------
_lm_dd_interval                                    20                   dd time interval in seconds
_lm_dd_scan_interval                               5                    dd scan interval in seconds
_lm_dd_search_cnt                                  3                    number of dd search per token get
_lm_dd_max_search_time                             180                  max dd search time per token
_lm_dd_maxdump                                     50                   max number of locks to be dumped during dd validation
_lm_dd_ignore_nodd                                 FALSE                if TRUE nodeadlockwait/nodeadlockblock options are ignored

6 rows selected.
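For readers who want to reproduce these tests: the demos above assume two single-row tables, maclean1 and maclean2, whose DDL the post does not show. A minimal setup sketch follows (the column name matches the updates above; the initial values are assumptions chosen to match the query output in the demos):

create table maclean1 (t1 number);
create table maclean2 (t1 number);
insert into maclean1 values (10);
insert into maclean2 values (2);
commit;

-- Session A: update maclean1, then maclean2.
-- Session B: update maclean2, then maclean1.
-- The second update in each session blocks, the wait-for graph closes into
-- a cycle, and after the detection interval one session receives ORA-00060.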

    Read the article

  • Installing XULrunner, adding mozilla daily ppa doesn't help

    - by Lester Peabody
    I need to install xulrunner so I can use Redcar, but am unable to do so. I followed the steps in this question verbatim Install XULRunner 2.0. Here's the output from adding the Mozilla PPA: lpeabody@ubuntu:/var/www/drupal-7.8$ sudo add-apt-repository ppa:ubuntu-mozilla-daily/ppa You are about to add the following PPA to your system: Official PPA for Firefox and Thunderbird daily builds daily (or even multiple builds per day) for various mozilla projects and branches. For questions and bugs with software in this archive, please contact <email address hidden> or visit #ubuntu-mozillateam on freenode. More info: https://launchpad.net/~ubuntu-mozilla-daily/+archive/ppa Press [ENTER] to continue or ctrl-c to cancel adding it Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /tmp/tmp.n2JGd679HN --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver hkp://keyserver.ubuntu.com:80/ --recv B34505EA326FEAEA07E3618DEF4186FE247510BE gpg: requesting key 247510BE from hkp server keyserver.ubuntu.com gpg: key 247510BE: "Launchpad PPA for Ubuntu Mozilla Daily Build Team" not changed gpg: Total number processed: 1 gpg: unchanged: 1 lpeabody@ubuntu:/var/www/drupal-7.8$ sudo apt-get update Ign http://us.archive.ubuntu.com oneiric InRelease Ign http://extras.ubuntu.com oneiric InRelease Ign http://us.archive.ubuntu.com oneiric-updates InRelease Ign http://us.archive.ubuntu.com oneiric-backports InRelease Get:1 http://extras.ubuntu.com oneiric Release.gpg [72 B] Hit http://us.archive.ubuntu.com oneiric Release.gpg Hit http://extras.ubuntu.com oneiric Release Hit http://us.archive.ubuntu.com oneiric-updates Release.gpg Hit http://us.archive.ubuntu.com oneiric-backports Release.gpg Hit http://extras.ubuntu.com oneiric/main Sources Hit http://extras.ubuntu.com oneiric/main i386 Packages Ign http://extras.ubuntu.com oneiric/main TranslationIndex Hit http://us.archive.ubuntu.com oneiric Release Hit http://us.archive.ubuntu.com oneiric-updates Release Hit http://us.archive.ubuntu.com oneiric-backports Release Ign http://extras.ubuntu.com oneiric/main Translation-en_US Hit http://us.archive.ubuntu.com oneiric/main Sources Hit http://us.archive.ubuntu.com oneiric/restricted Sources Hit http://us.archive.ubuntu.com oneiric/universe Sources Ign http://extras.ubuntu.com oneiric/main Translation-en Hit http://us.archive.ubuntu.com oneiric/multiverse Sources Hit http://us.archive.ubuntu.com oneiric/main i386 Packages Hit http://us.archive.ubuntu.com oneiric/restricted i386 Packages Hit http://us.archive.ubuntu.com oneiric/universe i386 Packages Hit http://us.archive.ubuntu.com oneiric/multiverse i386 Packages Hit http://us.archive.ubuntu.com oneiric/main TranslationIndex Hit http://us.archive.ubuntu.com oneiric/multiverse TranslationIndex Hit http://us.archive.ubuntu.com oneiric/restricted TranslationIndex Hit http://us.archive.ubuntu.com oneiric/universe TranslationIndex Hit http://us.archive.ubuntu.com oneiric-updates/main Sources Hit http://us.archive.ubuntu.com oneiric-updates/restricted Sources Hit http://us.archive.ubuntu.com oneiric-updates/universe Sources Hit http://us.archive.ubuntu.com oneiric-updates/multiverse Sources Hit http://us.archive.ubuntu.com oneiric-updates/main i386 Packages Hit http://us.archive.ubuntu.com oneiric-updates/restricted i386 Packages Hit http://us.archive.ubuntu.com oneiric-updates/universe i386 Packages Hit http://us.archive.ubuntu.com oneiric-updates/multiverse i386 Packages Hit 
http://us.archive.ubuntu.com oneiric-updates/main TranslationIndex Hit http://us.archive.ubuntu.com oneiric-updates/multiverse TranslationIndex Hit http://us.archive.ubuntu.com oneiric-updates/restricted TranslationIndex Hit http://us.archive.ubuntu.com oneiric-updates/universe TranslationIndex Hit http://us.archive.ubuntu.com oneiric-backports/main Sources Hit http://us.archive.ubuntu.com oneiric-backports/restricted Sources Hit http://us.archive.ubuntu.com oneiric-backports/universe Sources Hit http://us.archive.ubuntu.com oneiric-backports/multiverse Sources Hit http://us.archive.ubuntu.com oneiric-backports/main i386 Packages Hit http://us.archive.ubuntu.com oneiric-backports/restricted i386 Packages Hit http://us.archive.ubuntu.com oneiric-backports/universe i386 Packages Hit http://us.archive.ubuntu.com oneiric-backports/multiverse i386 Packages Hit http://us.archive.ubuntu.com oneiric-backports/main TranslationIndex Hit http://us.archive.ubuntu.com oneiric-backports/multiverse TranslationIndex Hit http://us.archive.ubuntu.com oneiric-backports/restricted TranslationIndex Hit http://us.archive.ubuntu.com oneiric-backports/universe TranslationIndex Hit http://us.archive.ubuntu.com oneiric/main Translation-en Hit http://us.archive.ubuntu.com oneiric/multiverse Translation-en Hit http://us.archive.ubuntu.com oneiric/restricted Translation-en Hit http://us.archive.ubuntu.com oneiric/universe Translation-en Hit http://us.archive.ubuntu.com oneiric-updates/main Translation-en Hit http://us.archive.ubuntu.com oneiric-updates/multiverse Translation-en Hit http://us.archive.ubuntu.com oneiric-updates/restricted Translation-en Hit http://us.archive.ubuntu.com oneiric-updates/universe Translation-en Hit http://us.archive.ubuntu.com oneiric-backports/main Translation-en Hit http://us.archive.ubuntu.com oneiric-backports/multiverse Translation-en Hit http://us.archive.ubuntu.com oneiric-backports/restricted Translation-en Hit http://us.archive.ubuntu.com oneiric-backports/universe Translation-en Ign http://security.ubuntu.com oneiric-security InRelease Ign http://ppa.launchpad.net oneiric InRelease Ign http://ppa.launchpad.net oneiric InRelease Hit http://security.ubuntu.com oneiric-security Release.gpg Hit http://ppa.launchpad.net oneiric Release.gpg Hit http://security.ubuntu.com oneiric-security Release Get:2 http://ppa.launchpad.net oneiric Release.gpg [316 B] Hit http://ppa.launchpad.net oneiric Release Hit http://security.ubuntu.com oneiric-security/main Sources Get:3 http://ppa.launchpad.net oneiric Release [9,788 B] Hit http://security.ubuntu.com oneiric-security/restricted Sources Hit http://security.ubuntu.com oneiric-security/universe Sources Hit http://security.ubuntu.com oneiric-security/multiverse Sources Hit http://security.ubuntu.com oneiric-security/main i386 Packages Hit http://security.ubuntu.com oneiric-security/restricted i386 Packages Hit http://security.ubuntu.com oneiric-security/universe i386 Packages Hit http://security.ubuntu.com oneiric-security/multiverse i386 Packages Hit http://security.ubuntu.com oneiric-security/main TranslationIndex Hit http://security.ubuntu.com oneiric-security/multiverse TranslationIndex Hit http://security.ubuntu.com oneiric-security/restricted TranslationIndex Hit http://security.ubuntu.com oneiric-security/universe TranslationIndex Hit http://ppa.launchpad.net oneiric/main Sources Hit http://ppa.launchpad.net oneiric/main i386 Packages Ign http://ppa.launchpad.net oneiric/main TranslationIndex Hit http://security.ubuntu.com 
oneiric-security/main Translation-en Get:4 http://ppa.launchpad.net oneiric/main Sources [1,650 B] Get:5 http://ppa.launchpad.net oneiric/main i386 Packages [12.6 kB] Hit http://security.ubuntu.com oneiric-security/multiverse Translation-en Hit http://security.ubuntu.com oneiric-security/restricted Translation-en Ign http://ppa.launchpad.net oneiric/main TranslationIndex Hit http://security.ubuntu.com oneiric-security/universe Translation-en Ign http://ppa.launchpad.net oneiric/main Translation-en_US Ign http://ppa.launchpad.net oneiric/main Translation-en Ign http://ppa.launchpad.net oneiric/main Translation-en_US Ign http://ppa.launchpad.net oneiric/main Translation-en Fetched 24.5 kB in 5s (4,853 B/s) Reading package lists... Done After running the update, I attempted to install xulrunner, but get the following back: lpeabody@ubuntu:/var/www/drupal-7.8$ sudo apt-get install xulrunner-2.0 xulrunner-2.0-dev xulrunner-2.0-gnome-support xulrunner-dev Reading package lists... Done Building dependency tree Reading state information... Done Package xulrunner-2.0-dev is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another source Package xulrunner-2.0 is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another source E: Package 'xulrunner-2.0' has no installation candidate E: Package 'xulrunner-2.0-dev' has no installation candidate E: Unable to locate package xulrunner-2.0-gnome-support E: Couldn't find any package by regex 'xulrunner-2.0-gnome-support' E: Unable to locate package xulrunner-dev If you need to know anything else please specify, I will be prompt with my answers if asked during the next 2 hours... so 2am EDT.

    Read the article

  • Introduction to ENUM (E.164 Number Mapping)

    - by raul.goycoolea
E.164 Number Mapping (ENUM or Enum) was designed to solve the question of how Internet services can be found through a telephone number, that is, how telephones, which have only 12 keys, can be used to access Internet services. At its most basic, ENUM is therefore about the convergence of the PSTN and IP networks; ENUM makes it possible to map a telephone number to an Internet identifier. In short, Enum is a set of protocols for converting E.164 numbers into URIs, and vice versa, so that the E.164 numbering system has a mapping function onto URI addresses on the Internet. This function is necessary because a telephone number has no meaning in the IP world, just as an IP address has no meaning in the telephone networks. With this technique, communications whose destination is dialed as an E.164 number can terminate at the correct identifier (an E.164 number if they terminate on the PSTN, or a URI if they terminate on IP networks). The technical solution of looking up the destination identifier in a database has very interesting consequences, such as letting the call terminate wherever the called subscriber wishes. This is one of the characteristics ENUM offers: the specific destination, the terminating device or devices, is decided not by whoever initiates the call or sends the message but by the person being called or receiving the message, who has recorded his or her preferences in a database. In other words, the recipient of the call decides how he or she wants to be contacted, whether what is being delivered is an email, an SMS, a fax, or a voice call. When someone wants to call you, all the caller has to do is select your name (the called party's) in the terminal's address book or dial your ENUM number. An application will retrieve from a database the contact and availability details you have chosen, and the message will be delivered exactly as you specified in that database. This is something new: it lets you, as the called party, define your termination preferences for any type of content. For example, you may want all emails to be delivered to you as SMS messages, or voice messages to be forwarded to you as emails; communications no longer depend on where you are or on what kind of terminal you use (telephone, PDA, Internet). Moreover, with ENUM you can manage the portability of your fixed and mobile numbers. ENUM uses an indirect lookup in a database holding NAPTR records ("Naming Authority Pointer Resource Records", as defined in RFC 2915), with the Enum telephone number as the search key, to obtain the URIs that correspond to each telephone number. The database storing these records is a DNS. Although one of its various uses is to facilitate VoIP calls between traditional PSTN networks and IP networks, keep in mind that ENUM is not a VoIP function but a conversion mechanism between numbers and identifiers. It should therefore not be confused with the normal practice of routing VoIP calls via the SIP and H.323 protocols. ENUM can be very useful for organizations that want a standardized way for applications to access each user's communication details.
Fundamentals. For the convergence between the Public Switched Telephone Network (PSTN) and Internet Telephony or Voice over IP (VoIP), and for the development of new multimedia services, to face fewer obstacles, it is fundamental that users can place their calls just as they are used to: by dialing numbers. For that, there must be a universal system mapping numbers to IP addresses (and vice versa), and the different networks must be able to interconnect. There are several schemes that allow one telephone number to establish communication with multiple services. One of them is the Electronic Number Mapping System, ENUM, standardized by the Internet Engineering Task Force (IETF) and the subject of this article; it uses E.164 numbering, telephone protocols, and telephone infrastructure to access different services indirectly. A service is thus reached through a universal numeric identifier: a traditional telephone number. ENUM allows the addresses of the IP world and those of the telephone world to communicate with each other without problems. Before going deeper, a brief sketch of how the mapping between numbers and URIs is organized. Imagine a call initiated from the traditional telephone service toward an Enum number. In Public ENUM, the Enum subscriber to whom the call is destined will have chosen to include in the Enum database one or more URIs or E.164 numbers, forming a list of preferences for terminating the call. The system, as explained below, chooses the right number or URI for that termination. The result of the Enum database query is therefore always a one-to-one relation between the dialed Enum number and the termination identifier, according to the wishes of the called party.

Varieties of ENUM. A possible source of confusion when discussing ENUM is the variety of solutions and systems that carry the name. A reference to ENUM usually means one of the following:

Public ENUM: The original vision of ENUM, a public database, similar to a directory, in which the subscriber opts in to be included. It is managed under the domain e164.arpa, with management of the database and numbering delegated to each country. Also known as User ENUM.

Carrier ENUM, also called Infrastructure ENUM or Operator ENUM: Groups of operators providing electronic communication services agree to share subscriber information via ENUM under private agreements. In this case it is the operators who control the subscriber information, rather than the subscribers themselves opting in. Carrier ENUM is being standardized by IETF for VoIP interconnection (through peering agreements). As explained in the corresponding section, it can also be used for number portability.

Private ENUM: A telephone or VoIP operator, an ISP, or a large user can apply ENUM techniques within its own networks and those of its customers without using public DNS, relying on private or internal DNS.
It is easy to imagine how this technique could be used so that multinational companies, banks, or travel agencies maintain very coherent and efficient numbering plans.

How ENUM works. To learn how Enum operates, see the page on Public ENUM, since that variety is the canonical one and gave rise to all the IETF procedures and standards. More details on:

Public ENUM. That page explains in some detail how Enum works.
Carrier ENUM or Operator ENUM
Private ENUM

Technical standards:
RFC 2915: NAPTR RR. The Naming Authority Pointer (NAPTR) DNS Resource Record
RFC 3761: ENUM Protocol. The E.164 to Uniform Resource Identifiers (URI) Dynamic Delegation Discovery System (DDDS) Application (ENUM). (obsoletes RFC 2916)
RFC 3762: Usage of H323 addresses in ENUM Protocol
RFC 3764: Usage of SIP addresses in ENUM Protocol
RFC 3824: Using E.164 numbers with SIP
RFC 4769: IANA Registration for an Enumservice Containing Public Switched Telephone Network (PSTN) Signaling Information
RFC 3026: Berlin Liaison Statement
RFC 3953: Telephone Number Mapping (ENUM) Service Registration for Presence Services
RFC 2870: Root Name Server Operational Requirements
RFC 3482: Number Portability in the Global Switched Telephone Network (GSTN): An Overview
RFC 2168: Resolution of Uniform Resource Identifiers using the Domain Name System

Organizations related to ENUM:
RIPE - Administrator of ENUM level 0, e164.arpa
ITU-T TSB - International Telecommunication Union
ETSI - European Telecommunications Standards Institute
VisionNG - Administrator of the ENUM range 878-10
IETF ENUM Chapter
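To make the mapping concrete, here is a worked sketch of the ENUM lookup defined in RFC 3761. The telephone number, URIs, and record values below are hypothetical; only the transformation steps are prescribed by the standard:

1. Start with an E.164 number:           +1 202 555 1234
2. Strip everything but the digits:      12025551234
3. Reverse the digits, dot-separated:    4.3.2.1.5.5.5.2.0.2.1
4. Append the ENUM root domain:          4.3.2.1.5.5.5.2.0.2.1.e164.arpa

A DNS query for NAPTR records on that domain name could then return, for example:

   IN NAPTR 100 10 "u" "E2U+sip"          "!^.*$!sip:alice@example.com!" .
   IN NAPTR 102 10 "u" "E2U+email:mailto" "!^.*$!mailto:alice@example.com!" .

The order and preference fields (100/10, 102/10) rank the called party's preferred termination services, which is precisely how the callee, rather than the caller, decides where the communication ends up.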

    Read the article

  • Keeping track of File System Utilization in Ops Center 12c

    - by S Stelting
Enterprise Manager Ops Center 12c provides significant monitoring capabilities, combined with very flexible incident management. These capabilities even extend to monitoring the file systems associated with Solaris or Linux assets. Depending on your needs, you can monitor and manage incidents, or you can fine-tune alert monitoring rules for specific file systems. This article will show you how to use Ops Center 12c to: track file system utilization, adjust file system monitoring rules, disable file system rules, and create custom monitoring rules. If you're interested in this topic, please join us for a WebEx presentation! Date: Thursday, November 8, 2012 Time: 11:00 am, Eastern Standard Time (New York, GMT-05:00) Meeting Number: 598 796 842 Meeting Password: oracle123 To join the online meeting ------------------------------------------------------- 1. Go to https://oracleconferencing.webex.com/oracleconferencing/j.php?ED=209833597&UID=1512095432&PW=NOWQ3YjJlMmYy&RT=MiMxMQ%3D%3D 2. If requested, enter your name and email address. 3. If a password is required, enter the meeting password: oracle123 4. Click "Join". To view in other time zones or languages, please click the link: https://oracleconferencing.webex.com/oracleconferencing/j.php?ED=209833597&UID=1512095432&PW=NOWQ3YjJlMmYy&ORT=MiMxMQ%3D%3D

Monitoring File Systems for OS Assets
The Libraries tab provides basic, device-level information about the storage associated with an OS instance. This tab shows you the local file system associated with the instance and any shared storage libraries mounted by Ops Center. More detailed information about file system storage is available under the Analytics tab, under the sub-tab named Charts. Here, you can select and display the individual mount points of an OS, and export the utilization data if desired. In this example, the OS instance has a basic root file partition and several NFS directories. Each file system mount point can be independently chosen for display in the Ops Center chart.

File Systems and Incident Reporting
Every asset managed by Ops Center has a "monitoring policy", which determines what represents a reportable issue with the asset. The policy is made up of a set of monitoring rules, where each rule describes an attribute to monitor, the conditions which represent an issue, and the level or levels of severity for the issue. When the conditions are met, Ops Center sends a notification and creates an incident. By default, OS instances have three monitoring rules associated with file systems: File System Reachability: triggers an incident if a file system is not reachable. NAS Library Status: triggers an incident for a value of "WARNING" or "DEGRADED" for a NAS-based file system. File System Used Space Percentage: triggers an incident when file system utilization grows beyond defined thresholds. You can view these rules in the Monitoring tab for an OS. The catch with the default monitoring rules is that they apply to every file system associated with an OS instance. As a result, any issue with NAS accessibility or disk utilization will trigger an incident. This can cause incidents for file systems to be reported multiple times if the same shared storage is used by many assets, as shown in this screen shot. Depending on the level of control you'd like, there are a number of ways to fine-tune incident reporting. Note that any changes to an asset's monitoring policy will detach it from the default, creating a new monitoring policy for the asset.
If you'd like, you can extract a monitoring policy from an asset, which allows you to save it and apply the customized monitoring profile to other OS assets.

Solution #1: Modify the Reporting Thresholds
In some cases, you may want to modify the basic conditions for incident reporting in your file system. The changes you make to a default monitoring rule will apply to all of the file systems associated with your operating system. Selecting the File Systems Used Space Percentage entry and clicking the "Edit Alert Monitoring Rule Parameters" button opens a pop-up dialog which allows you to modify the rule. The first screen lets you decide when you will check for file system usage, and how long you will wait before opening an incident in Ops Center. By default, Ops Center monitors continuously and reports disk utilization issues which exist for more than 15 minutes. The second screen lets you define actual threshold values. By default, Ops Center opens a Warning-level incident if utilization rises above 80%, and a Critical-level incident for utilization above 95%.

Solution #2: Disable Incident Reporting for File Systems
If you'd rather not report file system incidents, you can disable the monitoring rules altogether. In this case, you can select the monitoring rules and click the "Disable Alert Monitoring Rule(s)" button to open the pop-up confirmation dialog. Like the first solution, this option affects all file system monitoring. It allows you to completely disable incident reporting for NAS library status or file system space consumption.

Solution #3: Create New Monitoring Rules for Specific File Systems
If you'd like to have the greatest flexibility when monitoring file systems, you can create entirely new rules. Clicking the "Add Alert Monitoring Rule" button (the icon with the green plus sign) opens a wizard which allows you to define a new rule. This rule will be based on a threshold, and will be used to monitor operating system assets. We'd like to add a rule to track disk utilization for a specific file system, the /nfs-guest directory. To do this, we specify the following attribute: FileSystemUsages.name=/nfs-guest.usedSpacePercentage. The value of name in the attribute allows us to define a specific NFS shared directory or file system; in the case of this OS, we could have chosen any of the values shown in the File Systems Utilization chart at the beginning of this article. usedSpacePercentage lets us define a threshold based on the percentage of total disk space used. There are a number of other values that we could use for threshold-based monitoring of FileSystemUsages, including freeSpace, freeSpacePercentage, totalSpace, usedSpace, and usedSpacePercentage. The final sections of the screen allow us to determine when to monitor for disk usage, and how long to wait after utilization reaches a threshold before creating an incident. The next screen lets us define the threshold values and severity levels for the monitoring rule. If historical data is available, Ops Center will display it on the screen. Clicking the Apply button will create the new monitoring rule and activate it in your monitoring policy. If you combine this with one of the previous solutions, you can precisely define which file systems will generate incidents and notifications. For example, this monitoring policy has the default "File System Used Space Percentage" rule disabled, but the new rule reports ONLY on utilization for the /nfs-guest directory.
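For reference, the same attribute pattern works for any mount point shown in the utilization chart and for any of the value names listed above; two hypothetical examples (the paths are invented for illustration):

FileSystemUsages.name=/export/home.usedSpacePercentage
FileSystemUsages.name=/nfs-guest.freeSpacePercentage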

    Read the article

  • Win a set of Infragistics Silverlight Controls with Data Visualization!

    - by mbcrump
Infragistics recently released their new Silverlight Data Visualization Controls. I saw a couple of samples and had to take a look. I headed over to their website and downloaded the controls. I first noticed the hospital floor-plan demo shown on their site and started thinking of ways that I could use this in my own organization. I emailed them asking if I could give away the Silverlight Data Visualization controls on my site and they said Yes! They also wanted to throw in the standard Silverlight Line of Business controls (combined they are worth about $3,000 US). I am very thankful they were willing to help the Silverlight community with this giveaway. So some quick rules below: ----------------------------------------------------------------------------------------------------------------------------------------------------------- Win a FREE developer's license of Infragistics Silverlight Controls with Data Visualization ($3000 Value) Random winner will be announced on January 1st, 2011! To be entered into the contest, do the following things: Subscribe to my feed. Leave a comment below with a valid email account (I WILL NOT share this info with anyone.) For extra entries simply: Retweet a link to this page using the following URL [ http://mcrump.me/iscfree ]. It does not matter what the tweet says, just as long as the URL is the same. Unlimited tweets, but please don't go crazy! This URL will allow me to track the users that Tweet this page. Don't forget to visit Infragistics because they made this possible. ---------------------------------------------------------------------------------------------------------------------------------------------------------- Before we get started with the Silverlight Controls, here are a couple of links to bookmark: The Silverlight Line of Business Control page is here. You can also check out the live demos here. The Data Visualization page is here. You can also check out the live demos here. Don't worry about the Samples/Help Documentation. You can install all of that to your local HDD when you are installing it. I am going to walk you through the Silverlight Controls recently released by Infragistics. Begin by downloading the trial version and running the executable. If you downloaded the Complete bundle then you will have the following options to pick from. I like having help documentation and samples on my local HDD in case I do not have access to the internet and want to code. After it is installed, you may want to take a look at your Toolbox in Visual Studio 2010. Look for NetAdvantage 10.3 Silverlight and you will see that you now have access to all of these controls. At this point, using the controls is as simple as dragging and dropping them onto your Silverlight container. It will create the proper namespaces for you. I wanted to highlight a few of the controls that I liked the most: Grid – After using the Infragistics grid, you will wonder how you ever survived with the grid supplied by the standard Microsoft controls. This grid was designed to get your application up and running very fast. It's simple to bind, handles LARGE DataSets, is easy to filter, and allows endless possibilities for formatting data. The screenshot below is an example of the grid. For a real-time updating demo click here. SpellChecker – If your users are creating emails or performing any other function that requires spell checking, then this control is great. Check out the screenshots below: In this first screen, I have a word that is not in the dictionary [DotNet].
The Spell Checker finds the word and allows the user to correct it. What is so great about Infragistics controls is that it only takes a few lines of code to have a full-featured Spell Checker in your application. TagCloud – This is a control that I haven't seen anywhere else. It allows you to create keywords for popular search terms. This is very similar to the tag clouds seen all over the internet. Below is a screenshot that shows "Facebook" being a very popular item in the cloud. You can link these items to a hyperlink if you want. Importing/Exporting from Excel – I work with data a majority of the time. We all know the importance of Excel in our organizations; it's used a lot. Infragistics controls make importing and exporting data from a Grid into Excel a snap. One of the things that I liked most about this control was the option to choose the Excel format (2003 or 2007). I haven't seen this feature in other controls. Creating/Saving/Extracting/Uploading Zip Files – This is another control that I haven't seen many other vendors offering. It allows you to manipulate a zip file in basically any way you like. You can even create a password on the zip file. Schedule – The Schedule that Infragistics provides resembles Outlook's calendar. I think it's important for a user to see your app for the first time and immediately be able to start using it because they are already familiar with the UI. The Schedule control accomplishes that, in my opinion. I have just barely scratched the surface of the Infragistics Silverlight Line of Business controls. To check out all of them, click here. A quick thing to note is that this giveaway also comes with the following Silverlight Data Visualization Controls. Below is a screenshot that lists all of them: I wanted to highlight 2 of the controls that I liked the most: xamBarcode – The xamBarcode supports the following symbologies: Below is an example of the barcode generated by Infragistics controls. This is a high-resolution barcode, so you will not have to wonder whether your scanner can read it. As long as you have ink in your printer, your barcode will scan. I used a Symbol barcode reader to test this barcode. xamTreemap – I've never seen a way of displaying data like this before, but I like it. You can style this any way that you like, of course, and it also comes with an Office 2010 Theme. Thanks to Infragistics for providing the controls to one lucky reader. I hope that you enjoyed this post and good luck to those that entered the contest.

    Read the article

  • Oracle Database, Your Best Option for Reducing IT Costs

    - by Ivan Hassig
By Victoria Cadavid, Sr. Sales Consultant, Oracle Direct. One of the main challenges in data center administration is reducing operating costs. As companies grow and technology vendors offer increasingly robust solutions, keeping the balance between performance, business support, and Total Cost of Ownership is an ever greater challenge for IT managers and data center administrators. The most common strategies for reducing the costs of data center administration, and of an organization's technology management in general, focus on improving application performance, reducing hardware administration and acquisition costs, reducing storage costs, increasing database administration productivity, and improving request handling and help-desk services. However, cost-reduction strategies should also consider the costs associated with information loss and theft, regulatory compliance, value generation, and business continuity, which are commonly conceived as isolated initiatives and are not always undertaken with cost reduction in mind. A comprehensive IT cost-reduction initiative must consider every factor that generates cost and can be optimized. In this article we approach technology cost reduction through the adoption of what experts rank as the #1 database engine in the market (1). For years, the Oracle database has been recognized for its speed, reliability, security, and capacity to support data loads from highly transactional applications as well as data warehouses and even Big Data (2) analysis, offering high performance and ease of administration. However, when we think about IT cost-reduction projects, beyond the capacity to support applications (even highly transactional applications) with high performance, we think about automation, resource optimization, consolidation, virtualization, and even more convenient licensing alternatives. The Oracle Database is designed to provide all the capabilities a technology department needs to reduce costs, adapting to the different business scenarios and to the capacities and characteristics of each organization. Thus, in addition to the database engine, Oracle offers a series of solutions to optimize information management through mechanisms for optimizing storage use, business continuity, infrastructure consolidation, security, and automatic administration, which promote better use of technology resources, offer advanced configuration options, and target reductions in the time taken by the most common operational tasks. One of the database options that can be leveraged to reduce hardware costs is Oracle Real Application Clusters.
This clustering solution allows several servers (even low-cost servers) to work together to support Grids or Private Database Clouds, providing the benefits of infrastructure consolidation, high-availability schemes, fast performance, and on-demand scalability, making provisioning, database maintenance, and the addition of new nodes faster and less risky, in addition to leveraging investments in lower-cost servers. Another solution that promotes the reduction of technology costs is Oracle In-Memory Database Cache, which allows data to be stored and processed in the applications' memory, enabling maximum use of the middle tier's processing resources, something especially valuable in highly transactional scenarios. In this way, the most is made of existing processing resources, avoiding unnecessary hardware growth. Another way to avoid unnecessary hardware investments by taking advantage of existing resources, even in scenarios of strong growth in data volumes, is data compression. Oracle Advanced Compression can compress the different data types by up to 4x, improving storage capacity without compromising application performance. On the storage side, important IT cost reductions can also be achieved. In this scenario, Oracle Database technology offers Automatic Storage Management capabilities that not only allow an optimal distribution of data across the physical disks to guarantee maximum performance, but also ease the provisioning and removal of defective disks and offer balancing and mirroring, guaranteeing maximum use of each device and the availability of the data. Another solution that eases storage administration is Oracle Partitioning, a database option that allows large tables to be divided into smaller structures. This approach eases information lifecycle management and allows, for example, historical data (which generally becomes read-only information with a low query volume) to be separated and sent to low-cost storage, while keeping the active data on faster storage devices. Additionally, Oracle Partitioning eases the administration of databases that hold a large volume of records and improves database performance, thanks to the ability to optimize queries by scanning only the relevant partitions of a table or index during a search. Other factors that can generate unnecessary costs for technology departments are the loss, corruption, or theft of data, and the unavailability of the applications that support the business. To avoid these situations, which can bring fines as well as lost business and money, Oracle offers solutions that protect and audit the database, recover information in the event of corruption or of actions that compromise information integrity, and guarantee 7x24 availability of application data.
We already discussed the benefits of Oracle RAC for easing consolidation and improving application performance; this solution is also extremely useful in scenarios where organizations want to guarantee high availability of information in the event of server failures or planned outages for maintenance work. Besides Oracle RAC, there are solutions such as Oracle Data Guard and Active Data Guard that automatically replicate databases to a contingency data center, allowing immediate recovery from events that completely disable a data center. Moreover, Active Data Guard allows the contingency database to be used for query workloads, improving application performance. From the security standpoint, Oracle has solutions such as Advanced Security, which encrypts data and the channels through which information is shared; Total Recall, which shows the changes made to the database at a given point in time, to prevent data loss and corruption; Database Vault, which restricts privileged users' access to confidential information; Audit Vault, which verifies who did what and when within an organization's databases; and Oracle Data Masking, which masks data to guarantee the protection of sensitive information and compliance with the policies and standards related to protecting confidential information, for example while applications move from development to production. As mentioned at the beginning, technology cost-reduction initiatives must rest on strategies that consider the different factors that can generate cost overruns, the risk factors that can bring unforeseen costs, the use of current resources to avoid unnecessary investments, and the optimization levers that make the most of current investments. As we saw, all these initiatives can be addressed using Oracle technology at the database level. The most important things are to detect the critical risk points, assess the extent to which current resources are being used, and define the priorities of the organization and the IT area, so as to launch the initiatives that, gradually, will avoid cost overruns and unnecessary investments, providing greater support to the business and a significant impact on the organization's productivity. More information: http://www.oracle.com/lad/products/database/index.html?ssSourceSiteId=otnes (1) Source: Market Share: All Software Markets, Worldwide 2011 by Colleen Graham, Joanne Correia, David Coyle, Fabrizio Biscotti, Matthew Cheung, Ruggero Contu, Yanna Dharmasthira, Tom Eid, Chad Eschinger, Bianca Granetto, Hai Hong Swinehart, Sharon Mertz, Chris Pang, Asheesh Raina, Dan Sommer, Bhavish Sood, Marianne D'Aquila, Laurie Wurster and Jie Zhang. March 29, 2012. (2) Big Data: information gathered from non-traditional sources such as blogs, social networks, email, sensors, photographs, video recordings, etc., normally unstructured and in large volumes.
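To ground two of the storage-side features discussed above, here is a brief SQL sketch. The table, column, and tablespace names are invented for the example, and the OLTP compression syntax shown is the 11g form; Advanced Compression and Partitioning are separately licensed options:

-- A hypothetical orders table combining OLTP compression with range
-- partitioning by date, so cold data can live on cheaper storage.
create table orders (
  order_id   number,
  order_date date,
  amount     number
)
compress for oltp
partition by range (order_date) (
  partition p_hist values less than (date '2012-01-01') tablespace tier2_ts,
  partition p_curr values less than (maxvalue)          tablespace tier1_ts
);

-- As data ages, a partition can be relocated to low-cost storage:
alter table orders move partition p_hist tablespace tier2_ts;

Queries that filter on order_date then touch only the relevant partitions, which is the partition-pruning effect the article describes.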

    Read the article

  • Company Review: Google Products

Google, Inc. offers an array of products and services to all of its end-users. However, its search capabilities are the foundation of Google's current success and its primary business focus. Currently, Google offers over twenty different search applications that allow users to search the internet for books, maps, videos, images, products and much more. Its product decisions have allowed user demands to be met while focusing on a free-based model. This allows users to access Google data free of charge and, together with the accuracy of its search results, indirectly gives Google a strong competitive advantage over its competitors. According to Google, Inc., they offer the following types of searching capabilities:

Alerts: Get email updates on the topics of your choice
Blog Search: Find blogs on your favorite topics
Books: Search the full text of books
Custom Search: Create a customized search experience for your community
Desktop: Search and personalize your computer
Dictionary: Search for definitions of words and phrases
Directory: Search the web, organized by topic or category
Earth: Explore the world from your computer
Finance: Business info, news and interactive charts
GOOG-411: Find and connect for free with businesses from your phone
Images: Search for images on the web
Maps: View maps and directions
News: Search thousands of news stories
Patent Search: Search the full text of US Patents
Product Search: Search for stuff to buy
Scholar: Search scholarly papers
Toolbar: Add a search box to your browser
Trends: Explore past and present search trends
Videos: Search for videos on the web
Web Search: Search billions of web pages
Web Search Features: Find movies, music, stocks, books and more

Google's free-based business model is only one way it differentiates itself from its competition. There is also a strong focus on the accuracy of search results and the speed with which they are returned to the end-user. Quality function deployment (QFD) is a structured method used to help connect user needs to the design features of a project proposed to address those needs. According to the U.S. Department of Transportation Federal Highway Administration, this method is particularly useful in accounting for needs that are not easily articulated or precisely defined. Because QFD is so customer-driven, Google is in a constant state of change as it reengineers its search algorithms and other dependent systems so that end-user requirements are continually met. Value engineering is a key example of this: Google constantly tries to improve all aspects of its products, including system maintainability and interoperability. The Bridgefield Group defines value engineering as an organized methodology that identifies and selects the lowest lifecycle cost options in design, materials and processes that achieves the desired level of performance, reliability and customer satisfaction. In addition, it seeks to remove unnecessary costs in the above areas and is often a joint effort with cross-functional internal teams and relevant suppliers. Common issues that appear when developing large-scale systems like Google's search applications include modular design of a product and/or service and providing accurate value analysis.
The Open System Joint Task Force defines modular design as a design approach that adheres to four fundamental tenets (cohesiveness, encapsulation, self-containment, and high binding) to design a system component as an independently operable unit subject to change. More specifically, M. S. Schmaltz describes modular software design this way: rather than leaving a large collection of statements strung together in one partition of in-line code, we segment or divide the statements into logical groups called modules. Each module performs one or two tasks, and then passes control to another module. By breaking up the code into "bite-sized chunks", so to speak, we are able to better control the flow of data and control. This is especially true in large software systems. Value analysis is a process to evaluate products and services based on effectiveness, safety, and cost. Value analysis involves assessing the quality as well as the cost of a product or service, as defined by the Healthcare Financial Management Association. "Operations Management deals with the design and management of products, processes, services and supply chains. It considers the acquisition, development, and utilization of resources that firms need to deliver the goods and services their clients want." (MIT, 2010) Google, Inc. encourages an open environment among all employees, also known as Googlers. This is reinforced by cross-functional teams, comprised of members from multiple departments, assigned to every project, so that every department (such as marketing, finance, and quality assurance) has input on every project. In addition, Google is known for its openness to new ideas regardless of the status or seniority of an employee. In fact, Google allows 20% of an employee's time to be devoted to developing new ideas and/or pet projects. HumTech.com defines a cross-functional team as a collection of people with varied levels of skills and experience brought together to accomplish a task. As the name implies, cross-functional team members come from different organizational units. Cross-functional teams may be permanent or ad hoc. Google's search application product strategy primarily focuses on mass customization. This allows Google to create a base search application and return results to end-users quickly based on specific parameters and search settings. In addition, Google stores the data that is returned, in case others desire the same results based on supplying the same customized settings. This allows Google to appear to render search results in virtually real time to the user while allowing complete customization of the search criteria. Greg Vogl, a professor at Uganda Martyrs University, defines mass customization as when a business gives its customers the opportunity to tailor its products or services to the customer's specifications. The IT staff at Google play a key role in ensuring that the search application's product strategy is maintained, simply because the IT staff designs, develops, and maintains all of Google's proprietary applications. In fact, they also maintain all network infrastructure to ensure that it is available to all end-users.
    References:
    http://www.google.com/intl/en/options/
    http://ops.fhwa.dot.gov/freight/publications/ftat_user_guide/sec5.htm
    http://www.bridgefieldgroup.com/bridgefieldgroup/glos9.htm#V
    http://www.acq.osd.mil/osjtf/termsdef.html
    http://www.cise.ufl.edu/~mssz/Pascal-CGS2462/prog-dsn.html
    http://www.hfma.org/publications/business_caring_newsletter/exclusives/Supply+and+Inventory+Terms+Defined.htm
    http://mitsloan.mit.edu/omg/om-definition.php
    http://www.humtech.com/opm/grtl/ols/ols3.cfm
    http://www.gregvogl.net/courses/mis1/glossary.htm

    Read the article

  • TechEd 2010 Day One – How I Travel

    - by BuckWoody
    Normally when I blog on the first day of a conference, well, there hasn’t been a first day yet. So I talk about the value of a conference or some other facet. And normally in my (non-conference) blogs, I show you how I have learned to be a data professional – things I’ve learned how to do over the years. But in all that time, I don’t think I’ve ever talked about a big part of my job – traveling. I’ve traveled a lot throughout the years, when I’ve taught, gone to conferences, consulted, and in my current role assisting Microsoft customers with large-scale database system designs. So I’ll share a few thoughts about what I do. Keep in mind that I travel for short durations, just a day or so, and sometimes I travel internationally. For those trips I prepare differently – what I’m talking about here is what I do for a multi-day, same-country trip. Hopefully you find it useful. I’ll tag a few other travelers I know to add their thoughts.

    Preparing for Travel

    When I’m notified of a trip, I begin researching the location. I find the flights, hotel and (if I have to) a car to use while I’m away. We have an in-house system we use to book the travel, but when I travel not-for-Microsoft I use Expedia and Kayak to find what I need. Traveling on Sunday and Friday is the worst. I have to do it sometimes (like this week) and it’s always a bad idea. But you can blunt the impact by booking as early as you can stand it. That means I have to be up super-early, but the flights are normally on time. I stay flexible, and always have a backup plan in case the flights are delayed or canceled.

    For the hotel, I tend to go on the cheaper side, and I look for older hotels that have been renovated, or quirky ones. For instance, in Boise, ID recently I stayed at a 60’s-themed (think Mad Men) hotel that was very cool. I always go on the less expensive side – I find the “luxury” hotels nail me for Internet, food, everything, while the cheaper places include all of that and even have breakfasts, shuttles and other extras that would otherwise start to add up. I even call ahead to make sure there’s an iron and ironing board available, since I’ll need those when I get there.

    I find any way I can not to get a car. I use mass transit wherever possible, and try to make friends and pay their gas to take me places. In a pinch, I’ll use a taxi. It ends up being cheaper, faster, and less stressful all around.

    Packing

    Over the years I’ve learned never to check luggage whenever I can avoid it. To do that, I lay out everything I want to take with me on the bed, and then try to make sure I’m really going to use it. I wear a dark wool set of pants, which I can clean and wear in hot and cold climates. I bring undies and socks of course, and for most places I have to wear “dress up” shirts. I bring at least two print T-shirts in case I want to dress down for something while I’m gone, but I only bring one set of shoes. All the clothes are rolled as tightly as possible, as I learned in the military. Then I use those to cushion the electronics I take.

    For toiletries I bring a shaver, toothpaste and toothbrush, D/O and a small brush. Everything else the hotel will provide.

    For entertainment, I take a small Zune, a full PC headset (so I can make IP calls on the road) and my laptop. I don’t take books or anything else – everything is electronic. I use e-books (downloaded from our Library), audiobooks (on the Zune) and I also bring along a Kaossilator (more here) to play music in the hotel room or even on the plane without being heard.
    If I can, I pack into one roll-on bag. There’s not a lot better than this one, but I also have a bag I was given as a prize for something or other here at Microsoft. Either way, I like something with fewer pockets and more big, open compartments. Everything gets rolled up and packed in, with all of the wires and chargers in small bags my wife made for me. The laptop (and anything I don’t want gate-checked) goes on top or in an outside pouch so I can grab it quickly if I have to gate-check the bag. As much as I can, I try to go in one bag. When I can’t (like this week) I use this bag since it can expand, roll up, crush and even be put away later. It’s super-heavy canvas and worth the price. This allows me to not check a bag.

    Journey Logistics

    The day of the trip, I have everything ready since I’m getting up early. I pack a few small snacks inside a large-mouth plastic water bottle, which protects the snacks and lets me get water in the terminal. I bring along those little powdered drink mixes to add to the water. At the airport, I make a beeline for the power outlets. I charge up my laptop and phone, and download all my e-mails so I can work on them offline in the air. I don’t travel as often as I used to – just every month or so now, so I don’t have a membership to an airline club. If I travel much more, I’ll invest in one again – they are WELL worth the money, for the wifi, food and quiet if for nothing else. I print out my logistics on paper and put that in my pocket – flight numbers, hotel addresses and phones for everything. That way, if things change, I don’t have to boot up anything or even have power to be able to roll with the punches.

    Working While Away

    While I’m away I realize I’m going to be swamped with things at the conference or with my clients. So I turn on Out-Of-Office notifications to let people know I won’t be as responsive, and I keep my Outlook calendar up to date so my co-workers know what I’m up to. I even update it with hotel and phone info in case they really need to reach me. I share my calendar with my wife so my family knows what I’m doing as well. I check my e-mail during breaks, but I only respond in the evening or early morning at the hotel. I tweet during conferences. The point is to be as present as possible during the event or when I’m at the clients. Both deserve it.

    So those are my initial thoughts. I’ll tag Brent Ozar, Brad McGeHee and Paul Randal, and they can tag whomever they wish.

    Read the article

  • CodePlex Daily Summary for Friday, September 07, 2012

    CodePlex Daily Summary for Friday, September 07, 2012

    Popular Releases

    Umbraco CMS: Umbraco 4.9.0: What's new: The media section has been overhauled to support HTML5 uploads; just drag and drop files in – even multiple files are supported on any HTML5-capable browser. The folder content overview is also much improved, allowing you to filter it and perform common actions on your media items. The Rich Text Editor’s “Media” button now uses an embedder based on the open oEmbed standard (if you’re upgrading, enable the media button in the Rich Text Editor datatype settings and set TidyEditorConten...
    menu4web: menu4web 0.4.1 - javascript menu for web sites: This release is for those who believe that global variables are evil. menu4web has been wrapped into the m4w singleton object. Added a "Vertical Tabs" example which illustrates object notation.
    WinRT XAML Toolkit: WinRT XAML Toolkit - 1.2.1: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For the compiled version use NuGet; you can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit. Features: AsyncUI extensions, controls and control extensions, converters, debugging helpers, imaging, IO helpers, VisualTree helpers, samples. Recent changes – NOTE: namespace changes. DebugConsol...
    iPDC - Free Phasor Data Concentrator: iPDC-v1.3.1: iPDC suite version 1.3.1, modifications and bugs fixed (from v1.3.0). New User Manual for iPDC-v1.3.1 available on websites. Bug resolved: PMU Simulator TCP connection error and hanging connection for the client (PDC). Now the PMU Simulator (server) can communicate with more than one PDC (client) over TCP and UDP in parallel. The PMU Simulator now sends the exact data frames specified in the data rate by the user. The PMU Simulator data rate has been verified by iPDC database entries and PMU Connection Tes...
    Microsoft SQL Server Product Samples: Database: AdventureWorks OData Feed: The AdventureWorks OData service exposes resources based on specific SQL views. The SQL views are a limited subset of the AdventureWorks database that results in several consuming scenarios: CompanySales, Documents, ManufacturingInstructions, ProductCatalog, TerritorySalesDrilldown, WorkOrderRouting. How to install the sample: You can consume the AdventureWorks OData feed from http://services.odata.org/AdventureWorksV3/AdventureWorks.svc. You can also consume the AdventureWorks OData fe...
    Desktop Google Reader: 1.4.6: Sorting feeds alphabetically is now optional (see preferences window).
    DotNetNuke® Community Edition CMS: 06.02.03: Major highlights: Fixed issue where mailto: links were not working when sending bulk email. Fixed issue where users did not see friendship relationships (the problem is in 6.2, which does not show in the Versions Affected list above). Fixed the issue with cascade deletes in comments in CoreMessaging_Notification. Fixed UI issue when using a date field as a required profile property during user registration. Fixed error when running the product in debug mode. Fixed visibility issue when...
    Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.65: Fixed null-reference error in the build task constructor.
    B INI Sharp Library: B INI Sharp Library v1.0.0.0 Final Released: The first release.
    Active Social Migrator: ActiveSocialMigrator 1.0.0 Beta: Beta release for the Active Social Migration tool.
    EntLib.com????????: ??????demo??-For SQL 2005-2008: EntLibShopping ???v3.0 - ??????demo??,?????SQL SERVER 2005/2008/2008 R2/2012 ??????。 ??(??)??????。 THANKS.
    Sistem LPK Pemkot Semarang: Panduan Penggunaan Sistem LPK: A guide to using the LPK System application of the Development Division of the Semarang city government.
    Active Forums for DotNetNuke CMS: Active Forums 5.0.0 RC: RC release of Active Forums 5.0.
    Droid Explorer: Droid Explorer 0.8.8.7 Beta: Bug in the display icon for apk's, will fix with next release. Added fallback icon if unable to get the image/icon from the Cloud Service. Removed some stale plugins that were either outdated or incomplete. Added handler for *.ab files for restoring backups. Added plugin to create device backups. Backups stored in %USERPROFILE%\Android Backups\%DEVICE_ID%\. Added custom folder icon for the Android backups directory. Better error handling for installing an apk. Bug fixes for the Runn...
    The Visual Guide for Building Team Foundation Server 2012 Environments: Version 1: --
    Nearforums - ASP.NET MVC forum engine: Nearforums v8.5: Version 8.5 of Nearforums, the ASP.NET MVC Forum Engine. New features include: built-in search engine using Lucene.NET, flood control improvements, notifications improvements (sync option and mail body). View Roadmap for more details. webdeploy package sha1 checksum: 961aff884a9187b6e8a86d68913cdd31f8deaf83
    WiX Toolset: WiX Toolset v3.6: WiX Toolset v3.6 introduces the Burn bootstrapper/chaining engine and support for Visual Studio 2012 and .NET Framework 4.5. Other minor functionality includes: WixDependencyExtension supports dependency checking among MSI packages. WixFirewallExtension supports more features of Windows Firewall. WixTagExtension supports Software Id Tagging. WixUtilExtension now supports recursive directory deletion. Melt simplifies pure-WiX patching by extracting .msi package content and updating .w...
    Iveely Search Engine: Iveely Search Engine (0.2.0): ????ISE?0.1.0??,?????,ISE?0.2.0?????????,???????,????????20???follow?ISE,????,??ISE??????????,??????????,?????????,?????????0.2.0??????,??????????。 Iveely Search Engine ?0.2.0?????????“??????????”,??????,?????????,???????,???????????????????,????、????????????。???0.1.0????????????: 1. ??“????” ??。??????????,?????????,???????????????????。??:????????,????????????,??????????????????。??????。 2. ??“????”??。?0.1.0??????,???????,???????????????,?????????????,????????,?0.2.0?,???????...
    GmailDefaultMaker: GmailDefaultMaker 3.0.0.2: Added QQ Mail. Bug fix.
    Smart Data Access layer: Smart Data Access Layer Ver 3: In this version, support for executing inline queries is added. Check the Documentation section for details.

    New Projects

    Adding 2013 Jewish Holidays for Outlook2003: Instruction: Copy the outlook.hol file to your computer where Outlook2003 is installed. Double-click the file, choose "Israel" and continue. That's it.
    Agilcont System: An accounting system for private companies, primarily for cash offices that work with cash in soles and dollars, and with gold.
    ARB (A Request Broker): The idea is something like a Request Broker, but with some additional functionality.
    BATTLE.NET - SDK: This SDK provides the ability to use the Battle.net (Blizzard) services for all supported games such as Diablo 3 and World of Warcraft.
    Container Terminal System: Summary
    Declarative UX Streaming Data Language for the Cloud: Bringing a better communication paradigm for media and data.
    Get User Profile Information from SharePoint UserProfile Service: Used the SharePoint object model to get the user profile information from the User Profile Service.
    Guess The City & State Windows 8 Source Code: Source code for the Guess The City & State in Malaysia Windows 8 app.
    Jquery Tree: This project is to demonstrate basic tree functionality.
    MCEBuddy 2.x: Convert and remove commercials for your Windows Media Center.
    MvcDesign: MvcDesign engine implementation project.
    My Task Manager: This is a task manager module for DotNetNuke. I am using it to get started developing modules.
    MyAppwithbranches: MyAppwithbranches
    Projecte prova: r
    pyGEO: pyGEO is a python package capable of parsing microarray data files. It also has a primitive plotting function.
    Scarlet Road: Scarlet Road is a top-down shooter. It's you against an unending horde of monsters.
    simplecounter: A simple counter; click and count.
    SiteEmpires: ????????
    Soundcloud Loader: Simple tool for downloading tracks from Soundcloud.
    Windows Phone Samples: Windows Phone code samples.

    Read the article

  • CodePlex Daily Summary for Wednesday, June 02, 2010

    CodePlex Daily Summary for Wednesday, June 02, 2010

    New Projects

    BackupCleaner.Net: A C#.Net-based tool for automatically removing old backups selectively. It can be used when you already make daily backups on disk, and want to cle...
    C# Dotnetnuke Module development Template for Visual Studio 2010: C# DNN Module Development template for Visual Studio 2010. Get a head-start on DNN Module development. Whether you're a pro or just starting with D...
    Christoc's DotNetNuke C# Module Development Template: A quick and easy to use C# Module development template for DotNetNuke 5, Visual Studio 2008.
    Client per la digitalizzazione di documenti integrato con DotNetNuke: This 32-bit Windows application lets you digitize documents with multiple scanners simultaneously and run OCR in 17 languages (p...
    ContainerOne - C# application server: An application server completely written in c# (.net 4.0).
    Drop7 Silverlight: It's a clone of the original Drop7 written in Silverlight (C#).
    Echo.Net: Echo.Net is an embedded task manager for web and windows apps. It allows for simple management of background tasks at specific times. It's develope...
    energy: Smartgrid Demand Response Implementation
    Generate Twitter Message for Live Writer: This is a plug-in for Windows Live Writer that generates a twitter message with your blog post name and a TinyUrl link to the blog post. It will d...
    HomingCMS.Net: A lightweight cms.
    Information Système & Shell à distance: A web service that provides information about the system and lets you launch (terminal) commands remotely.
    Javascript And Jquery: gqq's javascript and jquery examples.
    Memory++: "Tweak the memory to speed up your computer." Memory++ is basically an application that will speed up your computer ensuring comfort in their norma...
    Microformat Parsers for .NET: Microformat Parsers for .NET helps you to collect information you run into on the web that is stored by means of microformats. It's written in C...
    MoneyManager: Trying to make a personal finances management system for my needs. Microsoft stopped supporting MSMoney - it makes me so sad, so I want to make my ...
    Open source software for running a financial markets trading business: The core conceptual model will support running a business in the financial markets, for example running a trading exchange business.
    Ovik: Open Video Converter with a simple and intuitive interface for converting video files to the open formats.
    Oxygen Smil Player: The <project name> is an open a-smil player implementation that is meant to be connected to a Digital Signage CMS like the Oxygen media platform ( www....
    Protect The Carrot: Protect The Carrot is a small fast-paced XNA game. You are a farmer whose single carrot is under attack by ravenous rabbits. You have to shoot the r...
    Race Day Commander: The core project is designed to help coaches of "long distance" or "endurance" sporting events coach their athletes during a race. The idea bein...
    Raygun Diplomacy: Raygun Diplomacy is an action shooter sandbox game set in a futuristic world. It will use procedural generation for the world, weapons, and vehicle...
    Resx-Translator-Bot: Resx-Translator-Bot uses Google Translate to automatically translate the .resx files in your .NET and ASP.NET applications.
    Sistema de Expedición del Permiso Único de Siembra: System for issuing the Single Planting Permit.
    SiteOA: An office automation (OA) system based on asp.net mvc2.
    Straighce: This is a low-featured, cyclic (log files reside in appname\1.txt to at most 31.txt), thread-safe and non-blocking TraceListener with some extensio...
    Touch Mice: Touch Mice turns multiple mice on a computer into individual touch devices. This allows you to create multi-touch applications using the new touch...
    TStringValidator: A project helper to validate strings. Use this class to hold your regex strings for use with any project's string validation.
    Ultimate Dotnetnuke Skin Object: Ultimate Skin Object is a Dotnetnuke 5.4.2+ extension that will allow you to easily change your skin's doc type, remove unneeded css files, inject e...
    Ventosus: Ventosus is an upcoming partially text-based game. No further information is available at this time.
    vit: vit based on asp.net mvc.
    W7 Auto Playlist Generator: Purpose: This application is designed to create W7MC playlists automatically whenever you want. You can select if you want the playlist sorted alpha...
    W7 Video Playlist Creator: Purpose: This program allows you to quickly create wvx video playlists for Windows Media Center. This functionality is not included in WMC and is u...

    New Releases

    BCryptTool: BCryptTool v0.2.1: The Microsoft .NET Framework 4.0 is needed to run this program.
    BFBC2 PRoCon: PRoCon 0.5.2.0: Notes available on phogue.net.
    C# Dotnetnuke Module development Template for Visual Studio 2010: DNNModule 1.0: This is the initial release of DNNModule as was available for download from http://www.subodh.com/Projects/DNNModule In this release: Contains one...
    Client per la digitalizzazione di documenti integrato con DotNetNuke: Versione 3.0.1: Versione 3.0.1.
    CommonLibrary.NET: CommonLibrary.NET 0.9.4 - Final Release: A collection of very reusable code and components in C# 3.5 ranging from ActiveRecord, Csv, Command Line Parsing, Configuration, Holiday Calendars,...
    Community Forums NNTP bridge: Community Forums NNTP Bridge V20: Release of the Community Forums NNTP Bridge to access the social and answers MS forums with a single, open source NNTP bridge. This release has ad...
    Community Forums NNTP bridge: Community Forums NNTP Bridge V21: Release of the Community Forums NNTP Bridge to access the social and answers MS forums with a single, open source NNTP bridge. This release has ad...
    DirectQ: Release 1.8.4: Significant bug fixes and improvements over 1.8.3c; there may be some bugs that I haven't caught here as development became a little disjointed towa...
    DotNetNuke 5 Thai Language Pack: 1.0.1: Fixed installation problem. Version 1.0 -> 1.0.1. Type: character encoding. Change: ().dnn description file to "ไทย (ไทย)".dnn.
    Echo.Net: Echo.Net 1.0: Initial release of Echo.Net.
    Extend SmallBasic: Teaching Extensions v.018: ShapeMaker, Program Window, Timer, and many threading errors fixed.
    Extend SmallBasic: Teaching Extensions v.019: Added rectangles to ShapeMaker, and the bubble quiz.
    Generate Twitter Message for Live Writer: Final Source Code Plus Binaries: Complete C# source code available below. I have also included the binary for those that just want to run it.
    GoogleMap Control: GoogleMap Control 4.5: Map and map events are only functional. New state, persistence and event implementation in progress. Javascript classes are implemented as MS AJAX ...
    Industrial Dashboard: ID 3.1: Added new widget IndustrialSlickGrid. Added example with IndustrialChart.
    LongBar: LongBar 2.1 Build 310: Library: double-clicking on a tile will install it. Feedback: now you can type your e-mail and a comment for an error.
    Mavention: Mavention Insert Lorem ipsum: A Sandbox Solution for SharePoint 2010 that allows you to easily insert Lorem ipsum text into RTE. More information and screenshots available @ htt...
    Memory++: Memory ++: Tweak the memory to speed up your computer. Memory++ is basically an application that will speed up your computer ensuring comfort in their normal ac...
    MyVocabulary: Version 2.0: Improvements over version 1.0: several bug fixes, new shortcuts added to increase usability, and a new section for testing verbs was added.
    nopCommerce. Open Source online shop e-commerce solution.: nopCommerce 1.60: You can also install nopCommerce using the Microsoft Web Platform Installer. Simply click the button below: nopCommerce. To see the full list of f...
    Nuntio Content: Nuntio Content 4.2.1: Patch release that fixes a couple of minor issues with version numbers and priority settings for role content. The release one package for DNN 4 an...
    Ovik: Ovik v0.0.1 (Preview Release): This is a very early preview release of Ovik. It contains only the pure processes of selecting files and launching a conversion process. Preview r...
    PHPExcel: PHPExcel 1.7.3c Production: This is a patch release for 26477. Want to contribute? Please refer to the Contribute page. Donations: donate via PayPal. If you want to, we can also a...
    PowerShell Admin Modules: PAM 0.2: Version 0.2 contains the PAMShare module with Get-Share, Get-ShareAccessMask, Get-ShareSecurity, New-Share, Remove-Share, Set-Share and the PAMath modu...
    Professional MRDS: RDS 2008 R3 Release: This is an updated version of the code to work with RDS 2008 R3 (version 2.2.76.0). IMPORTANT NOTE: these samples are supplied as a ZIP file. Unzip...
    Protect The Carrot: First release: We provide two ways to install the game. The first is the PTC 1.0.0.0 Zip which contains a Click-Once installer (the DVD type since codeplex does not...
    PST File Format SDK: PST File Format SDK v0.2.0: Updated version of pstsdk with several bug fixes, including: improved compiler support (several changes, including two patches from hub). Fixed Do...
    Race Day Commander: Race Day Commander v1: First release. The exact code that was written on the day in 6 hours.
    Resx-Translator-Bot: Release 1.0: Initial release.
    Salient.StackApps: JavaScript API Wrapper beta 2: This is the first draft of the generated JS wrapper. Added basic test coverage for each route that can also serve as basic usage examples. More i...
    SharePoint 2010 PowerShell Scripts & Utilities: PSSP2010 Utils 0.2: Added Install-SPIFilter script cmdlet. More information can be found at http://www.ravichaganti.com/blog/?p=1439
    SharePoint Tools from China Community: ECB菜单控制器 (ECB menu controller): ECB menu controller.
    Shopping Cart .NET: 1.5: Shopping Cart .NET 1.5 has received an upgrade to .NET 4.0 along with SQL Server 2005/8. There is a new AJAX-based inventory system that helps you ...
    sMODfix: sMODfix v1.0b: Added: provisional support for ecm_v54. Added: provisional support for gfx_v88.
    SNCFT Gadget: SNCFT gadget v1: This is version 1 of my gadget.
    Snippet Designer: Snippet Designer 1.3: Change log. Changes for Visual Studio 2010: fixed bug where "Export as Snippet" was failing in a website project. Changed Snippet Explorer search to u...
    sNPCedit: sNPCedit v0.9b: Fixed: structure of resources. Changed: some labels in GUI.
    Software Is Hardwork: Sw. Is Hw. Lib. 3.0.0.x+05: Sw. Is Hw. Lib. 3.0.0.x+05.
    SQL Server PowerShell Extensions: 2.2.3 Beta: Release 2.2 re-implements SQLPSX as PowerShell version 2.0 modules. SQLPSX consists of 9 modules with 133 advanced functions, 2 cmdlets and 7 scri...
    StackOverflow.Net: Preview Beta: Goes with the Stack Apps API version 0.8.
    Ultimate Dotnetnuke Skin Object: Ultimate Skin Object V1.00.00: Ultimate Skin Object is a Dotnetnuke 5.4.2+ extension that will allow you to easily change your skin's doc type, remove unneeded css files, inject e...
    VCC: Latest build, v2.1.30601.0: Automatic drop of latest build.
    Velocity Shop: JUNE 2010: Source code aligned to .NET Framework 4.0, ASP.NET 4.0 and Windows Server AppFabric Caching RC.
    ViperWorks Ignition: ViperWorks_5.0.1005.31: ViperWorks Ignition Source, version 5.0.1005.31.
    Visual Studio 2010 and Team Foundation Server 2010 VM Factory: Session Recordings: This release contains the "raw" and unedited session recordings and slides delivered by the team. 2010-06-01 Create package and add two session r...
    W7 Auto Playlist Generator: Source Code plus Binaries: Complete C# and WinForm source code available below. I have also included the binary for those that just want to run it.
    W7 Video Playlist Creator: Source Code plus Binaries: Complete C# and WPF source code available below. I have also included the binary for those that just want to run it.

    Most Popular Projects: Rawr; WBFS Manager; AJAX Control Toolkit; Microsoft SQL Server Product Samples: Database; Silverlight Toolkit; Windows Presentation Foundation (WPF); patterns & practices – Enterprise Library; PHPExcel; Microsoft SQL Server Community & Samples; ASP.NET

    Most Active Projects: Community Forums NNTP bridge; patterns & practices – Enterprise Library; GMap.NET - Great Maps for Windows Forms & Presentation; BlogEngine.NET; Ionics Isapi Rewrite Filter; Mirror Testing System; Rawr; Caliburn: An Application Framework for WPF and Silverlight; PHPExcel; Customer Portal Accelerator for Microsoft Dynamics CRM

    Read the article

  • CodePlex Daily Summary for Friday, May 28, 2010

    CodePlex Daily Summary for Friday, May 28, 2010

    New Projects

    Bang: Bang
    Box Office: Event Management for Community Theater Groups: Box Office is an event management web application to help theater groups manage & promote their shows. Manage performance schedules, sell tickets, ...
    CellsOnWeb: The space for the cells of the Microsoft Academic Program in Argentina.
    CRM 4.0 Plugin Queue Item Counter: This is a crm 4.0 plugin to count queue items in each folder and display the number at the end of the name. For example, if the queue name is "Tes...
    Date Calculator: Date Calculator is a small desktop utility developed using Windows Forms .NET technology. This utility is analogous to the "Date calculation" modul...
    Enterprise Library Investigate: Enterprise Library Investigate Project.
    eProject Management: A web-based application supporting the management and monitoring of project progress for businesses and organizations.
    Fiddler TreeView Panel Extension: Extension for Fiddler, to display the session information in a TreeView panel instead of the default ListBox, so it groups the information logicall...
    Git Source Control Provider: Git Source Control Provider is a Visual Studio plug-in that integrates Git with Visual Studio.
    InspurProjects: Project on Inspur Co.
    Kryptonite: The Kryptonite project aims to improve development of websites based on the Kentico CMS.
    MLang .NET Wrapper: Detect the encoding of a text without a BOM (Byte Order Mark) and choose the best Encoding for persistence or network transport of text.
    Mondaze: Proof of concept using Windows Azure.
    MultipointControls: A collection of controls that apply the Windows Multipoint Mouse SDK. The Windows Multipoint Mouse SDK enables apps to have multiple mice interact simultan...
    Mundo De Bloques: "Mundo de bloques" makes it easier for analysts to find the shortest way between two states in a problem using a heuristic function for Artificial...
    MyRPGtests: Just some tests :)
    OffInvoice Add-in for MS Office 2010: Project description: the project is based on the ability to extend functionality in the Microsoft Office 2010 suite.
    OpenGraph .NET: A C# client for the Facebook Graph API. Supports desktop, web, ASP.NET MVC, and Silverlight connections and real-time updates. PLEASE NOTE: I dis...
    Portable Extensible Metadata (PEM) Data Annotation Generator: This project intends to help developers who use PEM - Portable Extensible Metadata for Entity Framework by generating Data Annotation information fro...
    Production and sale of plastic window systems: Automation of a company that designs, produces and sells plastic window systems, with management of sales contracts and their execution, print ...
    Renjian Storm (Renjian Image Viewer Uploader): Renjian Image Viewer Uploader.
    Shark Web Intelligence CMS: Shark Web Intelligence Inc. Content Management System.
    Shuffleboard Game for Windows Phone 7: This is a sample Shuffleboard game written in Silverlight for Windows Phone 7. It demonstrates physics, procedural animation, perspective transform...
    Silverlight Property Grid: Visual Studio style PropertyGrid for Silverlight.
    SvnToTfs: Simple tool that migrates every Subversion revision toward Team Foundation Server 2010. It is developed in C# with a WPF front-end.
    Tamias: Basic Cms Mvc Contrib Portable Area: The goal of this project is to have an easy-to-integrate basic cms for ASP.NET MVC applications based on MVC Contrib Portable Areas.
    TwitBy: TwitBy is a Twitter client for anyone who uses Twitter. It's easy to use and all of the major features are there. More features to come. H...
    Under Construction: A simple site that can be used as a splash for sites being upgraded or developed.
    UO Editor: The Owner & Organisation Editor makes it easy to view and edit the names of the registered owner and registered organization for your Windows OS. N...
    webform2010: this is the test project
    Wireless Network: ss
    WiX Toolset: The Windows Installer XML (WiX) is a toolset that builds Windows installation packages from XML source code. The toolset supports a command line en...
    Xna.Extend: A collection of easy to use Xna components for aiding a game programmer in developing the next big thing. I plan on using the components from this...

    New Releases

    A Guide to Parallel Programming: Drop 4 - Guide Preface, Chapters 1 - 5, and code: This is Drop 4 with Guide Preface, Chapters 1 - 5, and References, and the accompanying code samples. This drop requires Visual Studio 2010 Beta 2 ...
    Ajax Toolkit for ASP.NET MVC: MAT 1.1: MAT 1.1.
    Community Forums NNTP bridge: Community Forums NNTP Bridge V09: Release of the Community Forums NNTP Bridge to access the social and answers MS forums with a single, open source NNTP bridge. This release solves ...
    Community Forums NNTP bridge: Community Forums NNTP Bridge V10: Release of the Community Forums NNTP Bridge to access the social and answers MS forums with a single, open source NNTP bridge. This release has add...
    Community Forums NNTP bridge: Community Forums NNTP Bridge V11: Release of the Community Forums NNTP Bridge to access the social and answers MS forums with a single, open source NNTP bridge. This release has add...
    CSS 360 Planetary Calendar: Beta Release: Beta Release Version: 0.2. Description: This is the beta release de...
    Date Calculator: DateCalculator v1.0: This is the first release and as far as I know this is a stable version.
    eComic: eComic 2010.0.0.4: Version 2010.0.0.4 change log: fixed issues in the "Full Screen Control Panel" causing it to lack translucence. Added loupe magnification control ...
    Expression Encoder Batch Processor: Runtime Application v0.2: New in this version: added more error handling if files do not exist. Added button/feature to quit after the current encoding job. Added code to handl...
    Fiddler TreeView Panel Extension: FiddlerTreeViewPanel 0.7: Initial compiled version of the assembly, ready to use. Please refer to http://fiddlertreeviewpanel.codeplex.com/ for instructions and installation.
    Gardens Point LEX: Gardens Point LEX v1.1.4: The main distribution is a zip file. This contains the binary executable, documentation, source code and the examples. Changes: version 1.1.4 corre...
    Gardens Point Parser Generator: Gardens Point Parser Generator v1.4.1: Version 1.4.1 differs from version 1.4.0 only in containing a corrected version of a previously undocumented feature which allows the generation of...
    IsWiX: IsWiX 1.0.264.0: Build 1.0.264.0 - built against Fireworks 1.0.264.0. Adds support for autogenerating the SourceDir preprocessor variable and gives the user the choice t...
    Matrix: Matrix 0.5.2: Updated license.
    Mesopotamia Experiment: Mesopotamia 1.2.90: Release notes - upgraded to Microsoft Robotics Developer Studio 2008 R3. Bug fixes - fix to keep any sole organisms that penetrate to the next fitne...
    Microsoft Crm 4.0 Filtered Lookup: Microsoft Crm 4.0 Filtered Lookup: How to use: Allow passing custom querystring values: create a DWORD registry key named [DisableParameterFilter] under [HKEY_LOCAL_MACHINE\SOFTWAR...
    MSBuild Extension Pack: May 2010: The MSBuild Extension Pack May 2010 release provides a collection of over 340 MSBuild tasks. A high-level summary of what the tasks currently cover...
    MultiPoint Vote: MultiPointVote v.1: This accepts user inputs: number of participants, poll/survey title and the list of options. A text file containing the items listed line per line...
    Mundo De Bloques: Mundo de Bloques, Release 1: "Mundo de bloques" makes it easier for analysts to find the shortest way between two states in a problem using a heuristic function for Artificial...
    OffInvoice Add-in for MS Office 2010: OffInvoice for Office 2010 V1.0 Installer: Add-in for MS Word 2010 or MS Excel 2010 to allow the management (issuing, visualization and reception) of electronic invoices, based on the XML fo...
    OpenGraph .NET: 0.9.1 Beta: This is the first public release of OpenGraph .NET.
    patterns & practices: Composite WPF and Silverlight: Prism v2.2 - May 2010 Release: Composite Application Guidance for WPF and Silverlight - May 2010 Release (Prism V2.2). The Composite Application Guidance for WPF and Silverlight ...
    Portable Extensible Metadata (PEM) Data Annotation Generator: Release 49376: First release.
    Production and sale of plastic window systems: Yanuary 2009: NOTE: before loading the program, make sure you have installed MySQL and created the database stored in the source code (see below). Where is the source?...
    PROGRAMMABLE SOFTWARE DEVELOPMENT ENVIRONMENT: PROGRAMMABLE SOFTWARE DEVELOPMENT ENVIRONMENT--3.2: The current version of the Programmable Software Development Environment has the capability of reading an optional text file in each source develop...
    Rapidshare Episode Downloader: RED 0.8.6: Fixed Edit form to actually save the data. Added Bypass Validation to enable future episodes. Added Search parameter to Edit form. Added refr...
    Renjian Storm (Renjian Image Viewer Uploader): Renjian Storm 0.6: Renjian Storm v0.6, stable version.
    sELedit: sELedit v1.1b: Fixed: when exporting and importing items to text files, there was a bug with "NULL" bytes in the unicode string.
    Shake - C# Make: Shake v0.1.21: Changes: FileTask CopyDir method modified, see documentation.
    SharePoint Labs: SPLab7001A-ENU-Level100: This SharePoint Lab will teach how to analyze and audit WSP files. WSP files are somewhere in a no man's land between ITPro...
    SharePoint Rsync List: 1.0.0.3: Fix spcontext dispose bug in menu. Try and run jobs only on the central admin server. Mark a single file failure if a file is not copied. Don't delete destinat...
    Shuffleboard Game for Windows Phone 7: Shuffleboard 1.0.0.1: Source code, solution files, and assets.
    Software Is Hardwork: Sw. Is Hw. Lib. 3.0.0.x+04: Sw. Is Hw. Lib. 3.0.0.x+04.
    SoulHackers Demon Unite(Chinese version): WPFClient pre alpha: Can unite 2, 3 or more demons. Can un-unite 1 demon into 2 demons (no triple un-unite yet).
    Team Deploy: Team Deploy 2010 R1: This is the initial release for Team Deploy 2010 for TFS Team Build 2010. All features from Team Build 2.x are functional in this version. Comple...
    Under Construction: Under Construction: All files required to show the under construction page. The page will pull through the domain name that the site is being run on; this allows you to use...
    Unit Driven: Version 0.0.5: Tests nested by namespace parts. Run buttons properly disabled based on currently running tests. Timeouts for async tests enabled.
    UO Editor: UO Editor v1.0: Initial release.
    VCC: Latest build, v2.1.30527.0: Automatic drop of latest build.
    Web Service Software Factory Contrib: Import WSDL 2010: Generate Service Contract models from existing WSDL documents for Web Service Software Factory 2010. Usage: install the vsix and right-click on a S...

    Most Popular Projects: Rawr; WBFS Manager; AJAX Control Toolkit; Microsoft SQL Server Product Samples: Database; Silverlight Toolkit; Windows Presentation Foundation (WPF); patterns & practices – Enterprise Library; Microsoft SQL Server Community & Samples; PHPExcel; ASP.NET

    Most Active Projects: AStar.net; patterns & practices – Enterprise Library; GMap.NET - Great Maps for Windows Forms & Presentation; SqlServerExtensions; BlogEngine.NET; Rawr; patterns & practices: Windows Azure Security Guidance; CodeReview; Customer Portal Accelerator for Microsoft Dynamics CRM; Ionics Isapi Rewrite Filter

    Read the article

  • How to Organize a Programming Language Club

    - by Ben Griswold
    I previously noted that we started a language club at work.  You know, I searched around but I couldn’t find a copy of the How to Organize a Programming Language Club Handbook. Maybe it’s sold out?  Yes, Stack Overflow has quite a bit of information on how to learn and teach new languages, and there’s also a good number of online tutorials which provide language introductions, but I was interested in group learning.  So, after two months of meetings, I present to you the Unofficial How to Organize a Programming Language Club Handbook.

    1. Gauge interest. Start by surveying prospects. “Excuse me, smart-developer-whom-I-work-with-and-I-think-might-be-interested-in-learning-a-new-coding-language-with-me. Are you interested in learning a new language with me?” If you’re lucky, you work with a bunch of really smart folks who aren’t shy about teaching/learning in a group setting and you’ll have a collective interest in no time.  Simply suggesting the idea is the only effort required.  If you don’t work in this type of environment, maybe you should consider a new place of employment.

    2. Make it official. Send out a “Welcome to the Club” email: There’s been talk of folks itching to learn new languages – Python, Scala, F# and Haskell to name a few.  Rather than taking on new languages alone, let’s learn in the open.  That’s right.  Let’s start a languages club.  We’ll have everything a real club needs – secret handshake, goofy motto and a high-and-mighty sense that we’re better than everybody else. T-shirts?  Hell YES!  Anyway, I’ve thrown this idea around the office and no one has laughed at me yet so please consider this your very official invitation to be in THE club. [Insert your ideas about how the club might be run, solicit feedback and suggestions, ask what other folks would like to get out of the club, comment about club hazing practices and talk up the T-shirts even more. Finally, call out the languages you are interested in learning and ask the group for their list.]

    3. Send out invitations to the first meeting.  Don’t skimp!  Hallmark greeting cards for everyone.  Personalized.  Hearts over the i’s and everything.  Oh, and be sure to include the list of suggested languages with vote counts.  Here’s the list of languages we are interested in:

    Python 5
    Ruby 4
    Objective-C 3
    F# 2
    Haskell 2
    Scala 2
    Ada 1
    Boo 1
    C# 1
    Clojure 1
    Erlang 1
    Go 1
    Pi 1
    Prolog 1
    Qt 1

    4. At the first meeting, there must be cake.  Lots of cake. And you should tackle some very important questions: Which language should we start with?  You can immediately go with the top vote-getter, or you could do as we did and designate each person to provide a high-level review of each of the proposed languages over the next two weeks.  After all presentations are completed, vote on the language. Our high-level review consisted of answers to a series of questions. Decide how often and where the group will meet.  We, for example, meet for a brown bag lunch every Wednesday.  Decide how you’re going to learn.  We determined that the best way to learn is to just dive in and write code.  After choosing our first language (Python), we talked about building an application or performing coding katas, but we ultimately chose to complete a series of Project Euler problems.  We kept it simple – each member works out the same two problems each week in preparation for a code review the following Wednesday.

    5. Code, Review, Learn.  Prior to the weekly meeting, everyone uploads their solutions to our internal wiki. Each Project Euler problem has a dedicated page.  In the meeting, we use a really fancy HD projector to show off each member’s solution.  It is very important to use an HD projector.  Again, don’t skimp!  Each code author speaks to their solution; everyone else comments, applauds, points fingers and laughs, etc.  As much as I’ve learned from solving the problems on my own, I’ve learned at least twice as much at the group code review.

    6. Rinse. Lather. Repeat.  We’ve hosted the language club for 7 weeks now.  The first meeting just set the stage.  The next two meetings provided a review of the languages followed by a first language selection.  The remaining meetings focused on Python and Project Euler problems.  Today we took a vote as to whether or not we’re ready to switch to another language and/or another problem set.  Pretty much everyone wants to stay the course for a few more weeks at least.  Until then, we’ll continue to code the next two solutions, review and learn.

    Again, we’ve been having a good time with the programming language club.  I’m glad it got off the ground.  What do you think?  Would you be interested in a language club?  Any suggestions on what we might do better?
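    For a sense of the weekly exercise, here is a Python take on Project Euler problem 1 (sum the natural numbers below 1000 that are multiples of 3 or 5) – a sketch of the sort of solution that gets posted to the wiki, not an excerpt from our actual submissions:

        # Project Euler #1, two ways: brute force and a closed form.

        def euler1_brute(limit: int = 1000) -> int:
            return sum(n for n in range(limit) if n % 3 == 0 or n % 5 == 0)

        def euler1_closed(limit: int = 1000) -> int:
            # Inclusion-exclusion over arithmetic series: add the multiples
            # of 3 and of 5, then subtract the multiples of 15 counted twice.
            def series_sum(k: int) -> int:
                m = (limit - 1) // k
                return k * m * (m + 1) // 2
            return series_sum(3) + series_sum(5) - series_sum(15)

        assert euler1_brute() == euler1_closed() == 233168

    Comparing a brute-force answer against a closed form like this is exactly what the group review surfaces – two members rarely solve the same problem the same way.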

    Read the article

  • EM12c Release 4: New Compliance features including DB STIG Standard

    - by DaveWolf
    Enterprise Manager’s compliance framework is a powerful and robust feature that provides users the ability to continuously validate their target configurations against a specified standard. Enterprise Manager’s compliance library is filled with a wide variety of standards based on Oracle’s recommendations, best practices and security guidelines. These standards can be easily associated to a target to generate a report showing its degree of conformance to that standard. (To get an overview of database compliance management in Enterprise Manager, see this screenwatch.) Starting with release 12.1.0.4 of Enterprise Manager, the compliance library contains a new standard based on the US Defense Information Systems Agency (DISA) Security Technical Implementation Guide (STIG) for Oracle Database 11g. According to the DISA website, “The STIGs contain technical guidance to ‘lock down’ information systems/software that might otherwise be vulnerable to a malicious computer attack.” In essence, a STIG is a technical checklist an administrator can follow to secure a system or software. Many US government entities are required to follow these standards; many non-US government entities and commercial companies also base their standards directly or partially on these STIGs. You can find more information about the Oracle Database and other STIGs on the DISA website. The Oracle Database 11g STIG consists of two categories of checks, installation and instance. Installation checks focus primarily on the security of the Oracle Home, while the instance checks focus on the configuration of the running database instance itself. If you view the STIG compliance standard in Enterprise Manager, you will see the rules organized into folders corresponding to these categories. The rule names contain a rule ID (DG0020, for example) which directly maps to the check name in the STIG checklist, along with a helpful brief description. The actual description field contains the text from the STIG documentation to aid in understanding the purpose of the check. All of the rules have also been documented in the Oracle Database Compliance Standards reference documentation.
    In order to use this standard, both the OMS and agent must be at version 12.1.0.4, as the standard takes advantage of several features new in this release, including:

    Agent-Side Compliance Rules
    Manual Compliance Rules
    Violation Suppression
    Additional BI Publisher Compliance Reports

    Agent-Side Compliance Rules

    Agent-side compliance rules are essentially the result of a tighter integration between Configuration Extensions and Compliance Rules. If you ever created custom compliance content in past versions of Enterprise Manager, you likely used Configuration Extensions to collect additional information into the EM repository so it could be used in a repository compliance rule. This process, although powerful, made it confusing to correctly model the SQL in the rule creation wizard. With agent-side rules, the user only needs to choose the Configuration Extension/Alias combination, and that’s it. Enterprise Manager will do the rest for you. This tighter integration also means their lifecycle is managed together. When you associate an agent-side compliance standard to a target, the required Configuration Extensions will be deployed automatically for you. The opposite is also true: when you unassociate the compliance standard, the Configuration Extensions will also be undeployed.
    The Oracle Database STIG compliance standard is implemented as an agent-side standard, which is why you simply need to associate the standard to your database targets without previously deploying the associated Configuration Extensions. You can learn more about using agent-side compliance rules in the screenwatch Using Agent-Side Compliance Rules on Enterprise Manager's Lifecycle Management page on OTN.

    Manual Compliance Rules

    There are many checks in the Oracle Database STIG, as well as in other common standards, which simply cannot be automated. These could be something as simple as “Ensure the datacenter entrance is secured.” or as complex as Oracle Database STIG Rule DG0186 – “The database should not be directly accessible from public or unauthorized networks”. These checks require a human to perform them and attest to their successful completion. Enterprise Manager now supports these types of checks in manual rules. When first associated to a target, each manual rule will generate a single violation. These violations must be manually cleared by a user, who is in essence attesting to successful completion of the check. The user is able to permanently clear the violation or give a future date on which the violation will be regenerated. Setting a future date is useful when policy dictates a periodic re-validation of conformance, wherein the user will have to re-perform the check. The optional reason field gives the user an opportunity to provide details of the check results.
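    As an illustration of that lifecycle, the following is a hypothetical Python model (invented names, not Enterprise Manager's API or schema): a manual-rule violation is open on association, cleared permanently or until a re-validation date, and open again once that date passes.

        # Hypothetical sketch only; the class and its fields are invented.
        from dataclasses import dataclass
        from datetime import date
        from typing import Optional

        @dataclass
        class ManualRuleViolation:
            rule_id: str                          # e.g. a STIG check like DG0186
            reason: str = ""                      # optional attestation details
            cleared_by: Optional[str] = None
            revalidate_on: Optional[date] = None  # None means cleared permanently

            def clear(self, user: str, reason: str = "",
                      revalidate_on: Optional[date] = None) -> None:
                self.cleared_by, self.reason = user, reason
                self.revalidate_on = revalidate_on

            def is_open(self, today: date) -> bool:
                if self.cleared_by is None:
                    return True                      # never attested
                if self.revalidate_on is None:
                    return False                     # permanently cleared
                return today >= self.revalidate_on   # regenerates when due

        v = ManualRuleViolation("DG0186")
        v.clear("jsmith", "Network access reviewed", date(2015, 1, 1))
        assert not v.is_open(date(2014, 6, 1))   # within the grace period
        assert v.is_open(date(2015, 2, 1))       # re-validation is due again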
    Violation Suppression

    There are situations that require permanently or temporarily suppressing a legitimate violation or finding, including approved exceptions and grace periods. Enterprise Manager now supports the ability to temporarily or permanently suppress a violation. Unlike clearing a manual rule violation, suppression simply removes the violation from the compliance results UI, and in turn its negative impact on the score. The violation still remains in the EM repository and can be accounted for in compliance reports. Temporarily suppressing a violation can give users a grace period in which to address an issue. If the issue is not addressed within the specified period, the violation will reappear in the results automatically. Again, the user may enter a reason for the suppression, which will be permanently saved with the event along with the suppressing user ID.

    Additional BI Publisher Compliance Reports

    As I am sure you have learned by now, BI Publisher now ships with, and is integrated with, Enterprise Manager 12.1.0.4. This means users can take full advantage of the powerful reporting engine by using the Oracle-provided reports or building their own. There are many new compliance-related reports available in 12.1.0.4, covering all aspects including association status and the library, as well as summary and detailed results reports.
    [Figure: 10 New Compliance Reports]
    [Figure: Compliance Summary Report example showing STIG results]

    Conclusion

    Together with the Oracle Database 11g STIG compliance standard, these features provide a complete solution for easily auditing and reporting the security posture of your Oracle Databases against this well-known benchmark. You can view an overview presentation and demo in the screenwatch Using the STIG Compliance Standard on Enterprise Manager's Lifecycle Management page on OTN.

    Additional EM12c Compliance Management Information

    Compliance Management - Overview (Presentation)
    Compliance Management - Custom Compliance on Default Data (How To)
    Compliance Management - Custom Compliance using SQL Configuration Extension (How To)
    Compliance Management - Customer Compliance using Command Configuration Extension (How To)

    Read the article

< Previous Page | 553 554 555 556 557 558 559 560 561 562 563 564  | Next Page >