Search Results

Search found 18677 results on 748 pages for 'current'.


  • Reporting Services - It's a Wrap!

    - by smisner
    If you have any experience at all with Reporting Services, you have probably developed a report using the matrix data region. It's handy when you want to generate columns dynamically based on data. If users view a matrix report online, they can scroll horizontally to view all columns and all is well. But if they want to print the report, the experience is completely different and you'll have to decide how you want to handle dynamic columns. By default, when a user prints a matrix report for which the number of columns exceeds the width of the page, Reporting Services determines how many columns can fit on the page and renders one or more separate pages for the additional columns. In this post, I'll explain two techniques for managing dynamic columns. First, I'll show how to use the RepeatRowHeaders property to make it easier to read a report when columns span multiple pages, and then I'll show you how to "wrap" columns so that you can avoid the horizontal page break. Included with this post are the sample RDLs for download. First, let's look at the default behavior of a matrix. A matrix that has too many columns for one printed page (or output to a page-based renderer like PDF or Word) renders the first page with the row group headers and the initial set of columns, as shown in Figure 1. The second page continues by rendering the next set of columns that can fit on the page, as shown in Figure 2. This pattern continues until all columns are rendered. The problem with the default behavior is that you've lost the context of employee and sales order - the row headers - on the second page. That makes it hard for users to read this report because the layout requires them to flip back and forth between the current page and the first page of the report. You can fix this behavior by finding the RepeatRowHeaders property of the tablix report item and changing its value to True. The second (and subsequent) pages of the matrix now look like the image shown in Figure 3. The problem with this approach is that the number of printed pages to flip through is unpredictable when you have a large number of potential columns. What if you want to include all columns on the same page? You can take advantage of the repeating behavior of a tablix and get repeating columns by embedding one tablix inside of another. For this example, I'm using SQL Server 2008 R2 Reporting Services. You can get similar results with SQL Server 2008. (In fact, you could probably do something similar in SQL Server 2005, but I haven't tested it. The steps would be slightly different because you would be working with the old-style matrix as compared to the new-style tablix discussed in this post.) I created a dataset that queries AdventureWorksDW2008 tables:

SELECT TOP (100)
    e.LastName + ', ' + e.FirstName AS EmployeeName,
    d.FullDateAlternateKey,
    f.SalesOrderNumber,
    p.EnglishProductName,
    SUM(SalesAmount) AS SalesAmount
FROM FactResellerSales AS f
INNER JOIN DimProduct AS p ON p.ProductKey = f.ProductKey
INNER JOIN DimDate AS d ON d.DateKey = f.OrderDateKey
INNER JOIN DimEmployee AS e ON e.EmployeeKey = f.EmployeeKey
GROUP BY p.EnglishProductName, d.FullDateAlternateKey, e.LastName + ', ' + e.FirstName, f.SalesOrderNumber
ORDER BY EmployeeName, f.SalesOrderNumber, p.EnglishProductName

To start the report: Add a matrix to the report body and drag Employee Name to the row header, which also creates a group.
Next drag SalesOrderNumber below Employee Name in the Row Groups panel, which creates a second group and a second column in the row header section of the matrix, as shown in Figure 4. Now for some trickiness. Add another column to the row headers. This new column will be associated with the existing EmployeeName group rather than causing BIDS to create a new group. To do this, right-click on the EmployeeName textbox in the bottom row, point to Insert Column, and then click Inside Group-Right. Then add the SalesOrderNumber field to this new column. By doing this, you're creating a report that repeats a set of columns for each EmployeeName/SalesOrderNumber combination that appears in the data. Next, modify the first row group's expression to group on both EmployeeName and SalesOrderNumber. In the Row Groups section, right-click EmployeeName, click Group Properties, click the Add button, and select [SalesOrderNumber]. Now you need to configure the columns to repeat. Rather than use the Columns group of the matrix like you might expect, you're going to use the textbox that belongs to the second group of the tablix as a location for embedding other report items. First, clear out the text that's currently in the third column - SalesOrderNumber - because it's already added as a separate textbox in this report design. Then drag and drop a matrix into that textbox, as shown in Figure 5. Again, you need to do some tricks here to get the appearance and behavior right. We don't really want repeating rows in the embedded matrix, so follow these steps: Click on the Rows label which then displays RowGroup in the Row Groups pane below the report body. Right-click on RowGroup,click Delete Group, and select the option to delete associated rows and columns. As a result, you get a modified matrix which has only a ColumnGroup in it, with a row above a double-dashed line for the column group and a row below the line for the aggregated data. Let's continue: Drag EnglishProductName to the data textbox (below the line). Add a second data row by right-clicking EnglishProductName, pointing to Insert Row, and clicking Below. Add the SalesAmount field to the new data textbox. Now eliminate the column group row without eliminating the group. To do this, right-click the row above the double-dashed line, click Delete Rows, and then select Delete Rows Only in the message box. Now you're ready for the fit and finish phase: Resize the column containing the embedded matrix so that it fits completely. Also, the final column in the matrix is for the column group. You can't delete this column, but you can make it as small as possible. Just click on the matrix to display the row and column handles, and then drag the right edge of the rightmost column to the left to make the column virtually disappear. Next, configure the groups so that the columns of the embedded matrix will wrap. In the Column Groups pane, right-click ColumnGroup1 and click on the expression button (labeled fx) to the right of Group On [EnglishProductName]. Replace the expression with the following: =RowNumber("SalesOrderNumber" ). We use SalesOrderNumber here because that is the name of the group that "contains" the embedded matrix. The next step is to configure the number of columns to display before wrapping. Click any cell in the matrix that is not inside the embedded matrix, and then double-click the second group in the Row Groups pane - SalesOrderNumber. 
Change the group expression to the following expression: =Ceiling(RowNumber("EmployeeName")/3) The last step is to apply formatting. In my example, I set the SalesAmount textbox's Format property to C2 and also right-aligned the text in both the EnglishProductName and the SalesAmount textboxes. And voila - Figure 6 shows a matrix report with wrapping columns.
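To recap, the wrapping behavior in this example hinges on the two group expressions shown in the steps above (the group names assume the defaults used in this walkthrough):

    ColumnGroup1 (embedded matrix, column group)   Group On: =RowNumber("SalesOrderNumber")
    SalesOrderNumber (outer matrix, row group)     Group On: =Ceiling(RowNumber("EmployeeName")/3)

Here 3 is simply the number of embedded-matrix columns to render before wrapping to a new row; change that value to wrap after a different number of columns.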


  • CodePlex Daily Summary for Sunday, February 28, 2010

    CodePlex Daily Summary for Sunday, February 28, 2010

New Projects

ESB Toolkit Extensions: ESB Extensions is a solution containing multiple .Net Projects and artifacts: Unit Tests, Itineraries, Business Rules, Binding Files, and C# Class ...
Event-Based Components Binder: The Binder automatically connects output-pins to input-pins of Event-Based Components based on message type information and naming conventions. ...
Haze Anti-Virus: Haze Anti-Virus is an anti-virus written in C# and has features such as realtime process watching and a Process Blacklist, and is able to download Da...
latex2mathml: A .NET 2.0 library written in C# which allows the conversion of LaTeX documents to XHTML+MathML format. A stand-alone converter is included. The li...
Project Lyrebird: Project Lyrebird is an attempt to create an all-purpose media player. It is designed to be simple, yet powerful. It's written in C#
QueryToGrid Module for DotNetNuke®: This is a module that allows you to execute and display the results of T-SQL queries in DotNetNuke using your choice of AJAX grids.
Reusable Library Demo: A demonstration of reusable abstractions for enterprise application developer
SharePoint 2010 Conference Samples: This project contains source code from various SharePoint 2010 conferences where Scot Hillier presented.
Silverlight Photo Blogger: Silverlight Photo Blogger gives you the tools you need to capture and blog about your travels in a rich and interactive web experience. Enjoy som...
SMTP Test: Several times we are faced with applications that send email; the SMTP Tester's principal objective is to test various possibilities of sending
Solution Tools - tools for Visual Studio solutions and projects: Solution Tools are a collection of tools that you can use with your Visual Studio solutions and projects.

New Releases

Agile Poker Cards for Windows Mobile: Agile Poker Cards v1.1.0.0: Use this application to display poker cards in a planning session on a Windows Mobile device. Release notes: Added new ...
BuildTools - Toolset for automated builds: BuildTools 2.0 Feb 2010 Milestone: The Feb 2010 Milestone release is a complete rewrite of the old codebase in Visual Studio 2010 RC. It features MSBuild tasks for generating build v...
Composure: NHibernate-Trunk-2010-02-25-VS2010.NET4 Alpha1: Recent NHibernate-Trunk conversion for Visual Studio 2010 Beta2 against .NET 4.0. Although all of the tests pass (other than the "Ignored"), this ...
Employee Scheduler: Employee Scheduler 2.4: Extract the files to a directory and run Lab Hours.exe. Add an employee. Double click an employee to modify their times. Please contact me through ...
ESB Toolkit Extensions: Tellago BizTalk ESB 2.0 Toolkit Extensions: Windows Installer file that installs the library on a BizTalk ESB 2.0 system. This install automatically configures the esb.config to use the new compo...
Haze Anti-Virus: Haze Anti-Virus Binary v1.0.3: This is the compiled version of Haze Anti-Virus; please let me know about any bugs, thanks. Please note that database updating is currently not avai...
Haze Anti-Virus: Haze Anti-Virus Source v1.0.3: This is the source for Haze Anti-Virus
HOG Project: HOG Visual Studio Template: This is the Visual Studio HOG Template. Created by the great tool: Solution Factory
HOG Project: Template user guide: HOW TO
iTuner - The iTunes Companion: iTuner 1.1.3711: Two new features are available: the Automated Librarian and Playlist Exporter. The iTuner Automated Librarian automatically cleans the iTunes libr...
johanleino.codeplex.com: SilverlightMultiLevelNavigationExample: The source code for SilverlightMultiLevelNavigationExample (VS 2010)
MDownloader: MDownloader-0.15.3.56128: Fixed filefactory provider implementation after site changes.
MiniTwitter: 1.09: Update details (translated from Japanese): changed to keep the scroll position on auto-refresh and when posting if the scroll position is not at the top; changed so the scroll position no longer changes per tab; changed to convert URLs containing ? or ! to shortened URLs
NMock3: NMock3 - Beta 4, .NET 3.5: This release includes the most current version of the NMock2 project code from Source Forge. Please start providing feedback on the tutorials. The...
QueryUnit: QueryUnitPOC v. 0.0.0.7: This version fixes problems related to the fact that in previous releases you had to specify expected values using locale-specific formats. Now e...
RapidWebDev - .NET Enterprise Software Development Infrastructure: RapidWebDev 1.51: This is a hot-fix version for 1.5 which adds a new RESTful web service for concrete data and fixes some major bugs. The change list is as follo...
Rawr: Rawr 2.3.11: Load from Armory code cleaned up. Tiny Abomination in a Jar's proc has now been more accurately modeled. You should now be able to reload...
Resharper Settings Manager: RSM v1.2: Changes: Added Default Settings File option. The selected settings file will be loaded automatically for solutions with no settings sharing. Added...
Reusable Library Demo: Reusable Library Demo v1.0.0: A demonstration of reusable abstractions for enterprise application developer
Rounded Corners / DIV Container: MJC RoundedDiv 3.2: This is the first public release on Codeplex.com. Versions previous to 3.2 were created before this control was made available on Codeplex.com.
SharePoint 2010 Conference Samples: Samples: Download the samples from the conferences
SharePoint Outlook Connector: Version 1.2.2.8: Saving an email message as a list item and its attachments as attachments of the list item has been added
SharePoint URL Ping Tool: Url Ping Tool Solution: A solution that contains one farm feature that will add a link under the Site Administration section in the Site Settings page.
SMTP Test: First SMTP Tester: First Release
Solution Tools - tools for Visual Studio solutions and projects: SolutionTools binary: Initial release of the tool. Turns out, this project was just a big waste of effort - use Project Linker instead!
Solution Tools - tools for Visual Studio solutions and projects: SolutionTools source - don't use this tool: Initial release of the tool. Turns out, this project was just a big waste of effort - use Project Linker instead! Anyway, here's the source code...
Spark View Engine: Spark v1.1 RC1: Overview: This build is a preview of v1.1. Among other changes it provides support for ASP.NET MVC 2 RC2. Spark v1.1 release will be created soon ...
Sprite Sheet Packer: 2.0 Release: I'm calling this a full new release because I can. Refactored all of the build logic to sspack.exe. This allows you to run this from the command l...
SPSF SharePoint Software Factory: SPSF SharePoint Software Factory 2.4.3: New features: WSPBuilder support, Simple Application now with optional multilanguage support, extended deployment script for large deployments. Fix...
TortoiseHg: Beta for TortoiseHg 1.0 (0.9.31201): Please backup your user Mercurial.ini file and then uninstall any 0.9.X release before installing. Use the x86...
UI Compiler .NET - JavaScript compiler/minifier built on Google Closure Compiler: UI Compiler .NET 1.5 Beta: UI Compiler .NET does not include Java. To be able to run Google Closure Compiler locally you must make sure that Java 6 is installed. If Java 6 (o...
VCC: Latest build, v2.1.30227.0: Automatic drop of latest build
Visual Studio DSite: File Encryption and Decryption (Visual Basic 2008): This program will create an encrypted copy of the file specified. It also decrypts the file specified. This program contains the source code but if yo...
Visual Studio DSite: Visual C++ 2008 CLR Console Application Random Int: This source code includes an example of generating a random integer between the numbers 1-100.
Weather Forecast Control: MJC MyWeather 2.2: This is the first public release on Codeplex.com. Versions previous to 2.2 were created before this control was made available on Codeplex.com.

Most Popular Projects

Rawr
WBFS Manager
AJAX Control Toolkit
Microsoft SQL Server Product Samples: Database
Silverlight Toolkit
Windows Presentation Foundation (WPF)
Microsoft SQL Server Community & Samples
ASP.NET
DotNetNuke® Community Edition
BlogEngine.NET

Most Active Projects

DinnerNow.net
Rawr
BlogEngine.NET
MapWindow GIS
SLARToolkit - Silverlight Augmented Reality Toolkit
Common Context Adapters
patterns & practices – Enterprise Library
SharpMap - Geospatial Application Framework for the CLR
NB_Store - Free DotNetNuke Ecommerce Catalog Module
Rapid Entity Framework. (ORM). CTP 2


  • Securing an ASP.NET MVC 2 Application

    - by rajbk
    This post attempts to look at some of the methods that can be used to secure an ASP.NET MVC 2 Application called Northwind Traders Human Resources.  The sample code for the project is attached at the bottom of this post. We are going to use a slightly modified Northwind database. The screen capture from SQL server management studio shows the change. I added a new column called Salary, inserted some random salaries for the employees and then turned off AllowNulls.   The reporting relationship for Northwind Employees is shown below.   The requirements for our application are as follows: Employees can see their LastName, FirstName, Title, Address and Salary Employees are allowed to edit only their Address information Employees can see the LastName, FirstName, Title, Address and Salary of their immediate reports Employees cannot see records of non immediate reports.  Employees are allowed to edit only the Salary and Title information of their immediate reports. Employees are not allowed to edit the Address of an immediate report Employees should be authenticated into the system. Employees by default get the “Employee” role. If a user has direct reports, they will also get assigned a “Manager” role. We use a very basic empId/pwd scheme of EmployeeID (1-9) and password test$1. You should never do this in an actual application. The application should protect from Cross Site Request Forgery (CSRF). For example, Michael could trick Steven, who is already logged on to the HR website, to load a page which contains a malicious request. where without Steven’s knowledge, a form on the site posts information back to the Northwind HR website using Steven’s credentials. Michael could use this technique to give himself a raise :-) UI Notes The layout of our app looks like so: When Nancy (EmpID 1) signs on, she sees the default page with her details and is allowed to edit her address. If Nancy attempts to view the record of employee Andrew who has an employeeID of 2 (Employees/Edit/2), she will get a “Not Authorized” error page. When Andrew (EmpID 2) signs on, he can edit the address field of his record and change the title and salary of employees that directly report to him. Implementation Notes All controllers inherit from a BaseController. The BaseController currently only has error handling code. When a user signs on, we check to see if they are in a Manager role. We then create a FormsAuthenticationTicket, encrypt it (including the roles that the employee belongs to) and add it to a cookie. private void SetAuthenticationCookie(int employeeID, List<string> roles) { HttpCookiesSection cookieSection = (HttpCookiesSection) ConfigurationManager.GetSection("system.web/httpCookies"); AuthenticationSection authenticationSection = (AuthenticationSection) ConfigurationManager.GetSection("system.web/authentication"); FormsAuthenticationTicket authTicket = new FormsAuthenticationTicket( 1, employeeID.ToString(), DateTime.Now, DateTime.Now.AddMinutes(authenticationSection.Forms.Timeout.TotalMinutes), false, string.Join("|", roles.ToArray())); String encryptedTicket = FormsAuthentication.Encrypt(authTicket); HttpCookie authCookie = new HttpCookie(FormsAuthentication.FormsCookieName, encryptedTicket); if (cookieSection.RequireSSL || authenticationSection.Forms.RequireSSL) { authCookie.Secure = true; } HttpContext.Current.Response.Cookies.Add(authCookie); } We read this cookie back in Global.asax and set the Context.User to be a new GenericPrincipal with the roles we assigned earlier. 
protected void Application_AuthenticateRequest(Object sender, EventArgs e){ if (Context.User != null) { string cookieName = FormsAuthentication.FormsCookieName; HttpCookie authCookie = Context.Request.Cookies[cookieName]; if (authCookie == null) return; FormsAuthenticationTicket authTicket = FormsAuthentication.Decrypt(authCookie.Value); string[] roles = authTicket.UserData.Split(new char[] { '|' }); FormsIdentity fi = (FormsIdentity)(Context.User.Identity); Context.User = new System.Security.Principal.GenericPrincipal(fi, roles); }} We ensure that a user has permissions to view a record by creating a custom attribute AuthorizeToViewID that inherits from ActionFilterAttribute. public class AuthorizeToViewIDAttribute : ActionFilterAttribute{ IEmployeeRepository employeeRepository = new EmployeeRepository(); public override void OnActionExecuting(ActionExecutingContext filterContext) { if (filterContext.ActionParameters.ContainsKey("id") && filterContext.ActionParameters["id"] != null) { if (employeeRepository.IsAuthorizedToView((int)filterContext.ActionParameters["id"])) { return; } } throw new UnauthorizedAccessException("The record does not exist or you do not have permission to access it"); }} We add the AuthorizeToView attribute to any Action method that requires authorization. [HttpPost][Authorize(Order = 1)]//To prevent CSRF[ValidateAntiForgeryToken(Salt = Globals.EditSalt, Order = 2)]//See AuthorizeToViewIDAttribute class[AuthorizeToViewID(Order = 3)] [ActionName("Edit")]public ActionResult Update(int id){ var employeeToEdit = employeeRepository.GetEmployee(id); if (employeeToEdit != null) { //Employees can edit only their address //A manager can edit the title and salary of their subordinate string[] whiteList = (employeeToEdit.IsSubordinate) ? new string[] { "Title", "Salary" } : new string[] { "Address" }; if (TryUpdateModel(employeeToEdit, whiteList)) { employeeRepository.Save(employeeToEdit); return RedirectToAction("Details", new { id = id }); } else { ModelState.AddModelError("", "Please correct the following errors."); } } return View(employeeToEdit);} The Authorize attribute is added to ensure that only authorized users can execute that Action. We use the TryUpdateModel with a white list to ensure that (a) an employee is able to edit only their Address and (b) that a manager is able to edit only the Title and Salary of a subordinate. This works in conjunction with the AuthorizeToViewIDAttribute. The ValidateAntiForgeryToken attribute is added (with a salt) to avoid CSRF. The Order on the attributes specify the order in which the attributes are executed. The Edit View uses the AntiForgeryToken helper to render the hidden token: ......<% using (Html.BeginForm()) {%><%=Html.AntiForgeryToken(NorthwindHR.Models.Globals.EditSalt)%><%= Html.ValidationSummary(true, "Please correct the errors and try again.") %><div class="editor-label"> <%= Html.LabelFor(model => model.LastName) %></div><div class="editor-field">...... The application uses View specific models for ease of model binding. 
public class EmployeeViewModel{ public int EmployeeID; [Required] [DisplayName("Last Name")] public string LastName { get; set; } [Required] [DisplayName("First Name")] public string FirstName { get; set; } [Required] [DisplayName("Title")] public string Title { get; set; } [Required] [DisplayName("Address")] public string Address { get; set; } [Required] [DisplayName("Salary")] [Range(500, double.MaxValue)] public decimal Salary { get; set; } public bool IsSubordinate { get; set; }} To help with displaying readonly/editable fields, we use a helper method. //Simple extension method to display a TextboxFor or DisplayFor based on the isEditable variablepublic static MvcHtmlString TextBoxOrLabelFor<TModel, TProperty>(this HtmlHelper<TModel> htmlHelper, Expression<Func<TModel, TProperty>> expression, bool isEditable){ if (isEditable) { return htmlHelper.TextBoxFor(expression); } else { return htmlHelper.DisplayFor(expression); }} The helper method is used in the view like so: <%=Html.TextBoxOrLabelFor(model => model.Title, Model.IsSubordinate)%> As mentioned in this post, there is a much easier way to update properties on an object. Download Demo Project VS 2008, ASP.NET MVC 2 RTM Remember to change the connectionString to point to your Northwind DB NorthwindHR.zip Feedback and bugs are always welcome :-)
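As a side note, the AuthorizeToViewIDAttribute shown earlier calls employeeRepository.IsAuthorizedToView(id), but the post does not show that method's body. Purely as an illustration, here is a minimal sketch of what such a check could look like, assuming the forms ticket name holds the EmployeeID (as it does in SetAuthenticationCookie above) and assuming a hypothetical GetManagerID lookup against the Employees table; the real sample project may implement this differently.

    using System;
    using System.Web;

    public class EmployeeRepositorySketch
    {
        // Hypothetical lookup: returns the EmployeeID of the given employee's manager (ReportsTo), or null.
        private int? GetManagerID(int employeeID)
        {
            // A LINQ to SQL / Entity Framework query against the Employees table would go here.
            return null;
        }

        public bool IsAuthorizedToView(int employeeID)
        {
            // The forms ticket name was set to the EmployeeID at sign-on.
            int currentEmployeeID = int.Parse(HttpContext.Current.User.Identity.Name);

            // Employees can always view their own record.
            if (employeeID == currentEmployeeID)
                return true;

            // Managers can view only the records of their immediate reports.
            int? managerID = GetManagerID(employeeID);
            return managerID.HasValue && managerID.Value == currentEmployeeID;
        }
    }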


  • Issue 15: Oracle PartnerNetwork Exchange @ Oracle OpenWorld

    - by rituchhibber
     ORACLE FOCUS: Oracle PartnerNetwork Exchange @ Oracle OpenWorld
Sylvie Michou, Senior Director, Partner Marketing & Communications and Strategic Programs

If you are attending our forthcoming Oracle OpenWorld 2012 conference in San Francisco from 30 September to 4 October, you will discover a new dedicated programme of keynotes and sessions tailored especially for you, our valued partners. Oracle PartnerNetwork Exchange @ OpenWorld has been created to enhance the opportunities for you to learn from and network with Oracle executives and experts. The programme also provides more informal opportunities than ever throughout the week to meet up with the people who are most important to your business: customers, prospects, colleagues and the Oracle EMEA Alliances & Channels management team. Oracle remains fully focused on building the industry's most admired partner ecosystem—which today spans over 25,000 partners. This new OPN Exchange programme offers an exciting change of pace for partners throughout the conference. Now it will be possible to enjoy a fully-integrated, partner-dedicated session schedule throughout the week, as well as key social events such as the Sunday night Welcome Reception, networking lunches from Monday to Thursday at the Howard Street Tent, and a fantastic closing event on the last Thursday afternoon. In addition to the regular Oracle OpenWorld conference schedule, if you have registered for the Oracle PartnerNetwork Exchange @ OpenWorld programme, you will be invited to attend a much anticipated global partner keynote presentation, plus more than 40 conference sessions aimed squarely at what's most important to you, as partners. Prominent topics for discussion will include: Oracle technologies and roadmaps and how they fit with partners' business plans; business development; regional distinctions in business practices; and much more. Each session will provide plenty of food for thought ahead of the numerous networking opportunities throughout the week, encouraging the knowledge exchange with Oracle executives, customers, prospects, and colleagues that will make this conference of even greater value for you. At Oracle we always work closely with our partners to deliver solution offerings that improve business value, simplify the IT experience and drive innovation and efficiencies for joint customers. The most important element of our new OPN Exchange is content that helps you get more from technology investments, more from your peer-to-peer connections, and more from your interactions with customers. To this end we've created some partner-specific tools which can be used by OPN members ahead of the conference itself. Crucially, a comprehensive Content Catalog already lists and organises details of every OPN Exchange session, speaker, exhibitor, demonstration and related materials. This Content Catalog can be used by all our partners to identify interesting content that you can add to your own personalised Oracle OpenWorld Schedule Builder, allowing more effective planning and pre-enrolment for vital sessions.
There are numerous highlights that you will definitely want to include in those personal schedules. On Sunday morning, 30 September we will start the week with partner dedicated OPN Exchange sessions, following our Global Partner Keynote at 13:00 with Judson Althoff, SVP, Worldwide Alliances & Channels and Embedded Sales and senior executives, giving insight into Oracle's partner vision, strategy, and resources—all designed to help build and strengthen market opportunities for you. This will be followed by a number of OPN Exchange general sessions, the Oracle OpenWorld Opening Keynote with Larry Ellison, CEO, Oracle and concluded with the OPN Exchange AfterDark Welcome Reception, starting at 19:30 at the Metreon. From Monday 1 to Thursday 4 October, you can attend the OPN Exchange sessions that are most relevant to your business today and over the coming year. Oracle's top product and sales leaders will be on hand to discuss Oracle's strategic direction in 40+ targeted and in-depth sessions focussing on critical success factors to develop your business. Oracle's dedication to innovation, specialization, enablement and engineering provides Oracle partners with a huge opportunity to create new services and solutions, differentiate themselves and deliver extreme value to joint customers across the globe. Oracle will even be helping over 1000 partners to earn OPN Specialization certification during the Oracle OpenWorld OPN Exchange Test Fest, which will be providing all the study materials and exams required to drive Specialization for free at the conference. You simply need to check the list of current certification tracks available, and make sure you pre-register to reserve a seat in one of the ten sessions being offered free to OPN Exchange registered attendees. And finally, let's not forget those all-important networking opportunities, which can so often provide partners with valuable long-term alliances as well as exciting new business leads. The Oracle PartnerNetwork Lounge, located at Moscone South, exhibition hall, room 100 is the place where partners can meet formally or informally with colleagues, customers, prospects, and other industry professionals. OPN Specialized partners with OPN Exchange passes can also visit the OPN Video Blogging room to record and share ideas, and at the OPN Information Station you will find consultants available to answer your questions. "For the first time ever we will have a full partner conference within OpenWorld. OPN Exchange @ OpenWorld will kick-off on the first Sunday and run the entire week. We'll have over 40 sessions throughout that time and partners will hear from our top development executives, with special sessions dedicated to partnering throughout. It's going to be a phenomenal event, and we look forward to seeing our partners there." Judson Althoff, SVP, Oracle Worldwide Alliances & Channels and Embedded Sales So if you haven't done so already, please register for Oracle PartnerNetwork Exchange @ OpenWorld today or add OPN Exchange to your existing registration for just $100 through My Account. 
And if you have any further questions regarding partner activities at Oracle OpenWorld, please don't hesitate to contact the Oracle PartnerNetwork team at [email protected]. We will be on hand to share the very latest information about:

Oracle's SPARC Superclusters: the latest Engineered Systems from Oracle, delivering radically improved performance, faster deployment and greatly reduced operational costs for mixed database and enterprise application consolidation

Oracle's SPARC T4 servers: with the newly developed T4 processor and Oracle Solaris providing up to five times the single threaded performance and better overall system throughput for expanded application versatility

Oracle Database Appliance: a new way to take advantage of the world's most popular database, Oracle Database 11g, in a single, easy-to-deploy and manage system. It's a complete package engineered to deliver simple, reliable and affordable database services to small and medium size businesses and departmental systems. All hardware and software components are supported together and offer customers unique pay-as-you-grow software licensing to quickly scale from two to 24 processor cores without incurring the costs and downtime usually associated with hardware upgrades

Oracle Exalogic: the world's only integrated cloud machine, featuring server hardware and middleware software engineered together for maximum performance with minimum set-up and operational cost

Oracle Exadata Database Machine: the only database machine that provides extreme performance for both data warehousing and online transaction processing (OLTP) applications, making it the ideal platform for consolidating onto grids or private clouds. It is a complete package of servers, storage, networking and software that is massively scalable, secure and redundant

Oracle Sun ZFS Storage Appliances: providing enterprise-class NAS performance, price-performance, manageability and TCO by combining third-generation software with high-performance controllers, flash-based caches and disks

Oracle Pillar Axiom Quality-of-Service: confidently consolidate storage for multiple applications into a single datacentre storage solution

Oracle Solaris 11: delivering secure enterprise cloud deployments with the ability to run hundreds of virtual applications with no overhead, and co-engineered with other Oracle software products to provide the highest levels of security, manageability and performance

Oracle Enterprise Manager 12c: Oracle's integrated enterprise IT management product, providing the industry's only complete, integrated and business-driven enterprise cloud management solution

Oracle VM 3.0: the latest release of Oracle's server virtualisation and management solution, helping to move datacentres beyond server consolidation to improve application deployment and management.

Register today and ensure your place at the Extreme Performance Tour! Extreme Performance Tour events are free to attend, but places are limited. To make sure that you don't miss out, please visit Oracle's Extreme Performance Tour website, select the city that you'd be interested in attending an event in, and then click on the 'Register Now' button for that city to secure your interest. Each individual city page also contains more in-depth information about your local event, including logistics, agenda and maybe even a preview of VIP guest speakers. -- Oracle OpenWorld 2010 Whether you attended Oracle OpenWorld 2009 or not, don't forget to save the date now for Oracle OpenWorld 2010.
The event will be held a little earlier next year, from 19th-23rd September, so please don't miss out. With thousands of sessions and hundreds of exhibits and demos already lined up, there's no better place to learn how to optimise your existing systems, get an inside line on upcoming technology breakthroughs, and meet with your partner peers, Oracle strategists and even the developers responsible for the products and services that help you get better results for your end customers. Register Now for Oracle OpenWorld 2010! Perhaps you are interested in learning more about Oracle OpenWorld 2010, but don't wish to register at this time? Great! Please just enter your contact information here and we will contact you at a later date.

How to Exhibit at Oracle OpenWorld 2010
Sponsorship Opportunities at Oracle OpenWorld 2010
Advertising Opportunities at Oracle OpenWorld 2010


  • SQL Server and Hyper-V Dynamic Memory Part 2

    - by SQLOS Team
    Part 1 of this series was an introduction and overview of Hyper-V Dynamic Memory. This part looks at SQL Server memory management and how the SQL engine responds to changing OS memory conditions.   Part 2: SQL Server Memory Management As with any Windows process, sqlserver.exe has a virtual address space (VAS) of 4GB on 32-bit and 8TB in 64-bit editions. Pages in its VAS are mapped to pages in physical memory when the memory is committed and referenced for the first time. The collection of VAS pages that have been recently referenced is known as the Working Set. How and when SQL Server allocates virtual memory and grows its working set depends on the memory model it uses. SQL Server supports three basic memory models:   1. Conventional Memory Model   The Conventional model is the default SQL Server memory model and has the following properties: - Dynamic - can grow or shrink its working set in response to load and external (operating system) memory conditions. - OS uses 4K pages – (not to be confused with SQL Server “pages” which are 8K regions of committed memory).- Pageable - Can be paged out to disk by the operating system.   2. Locked Page Model The locked page memory model is set when SQL Server is started with "Lock Pages in Memory" privilege*. It has the following characteristics: - Dynamic - can grow or shrink its working set in the same way as the Conventional model.- OS uses 4K pages - Non-Pageable – When memory is committed it is locked in memory, meaning that it will remain backed by physical memory and will not be paged out by the operating system. A common misconception is to interpret "locked" as non-dynamic. A SQL Server instance using the locked page memory model will grow and shrink (allocate memory and release memory) in response to changing workload and OS memory conditions in the same way as it does with the conventional model.   This is an important consideration when we look at Hyper-V Dynamic Memory – “locked” memory works perfectly well with “dynamic” memory.   * Note in “Denali” (Standard Edition and above), and in SQL 2008 R2 64-bit (Enterprise and above editions) the Lock Pages in Memory privilege is all that is required to set this model. In 2008 R2 64-Bit standard edition it also requires trace flag 845 to be set, in 2008 R2 32-bit editions it requires sp_configure 'awe enabled' 1.   3. Large Page Model The Large page model is set using trace flag 834 and potentially offers a small performance boost for systems that are configured with large pages. It is characterized by: - Static - memory is allocated at startup and does not change. - OS uses large (>2MB) pages - Non-Pageable The large page model is supported with Hyper-V Dynamic Memory (and Hyper-V also supports large pages), but you get no benefit from using Dynamic Memory with this model since SQL Server memory does not grow or shrink. The rest of this article will focus on the locked and conventional SQL Server memory models.   When does SQL Server grow? For “dynamic” configurations (Conventional and Locked memory models), the sqlservr.exe process grows – allocates and commits memory from the OS – in response to a workload. As much memory is allocated as is required to optimally run the query and buffer data for future queries, subject to limitations imposed by:   - SQL Server max server memory setting. If this configuration option is set, the buffer pool is not allowed to grow to more than this value. 
In SQL Server 2008 this value represents single page allocations, and in “Denali” it represents any size page allocations and also managed CLR procedure allocations.   - Memory signals from OS. The operating system sets a signal on memory resource notification objects to indicate whether it has memory available or whether it is low on available memory. If there is only 32MB free for every 4GB of memory a low memory signal is set, which continues until 64MB/4GB is free. If there is 96MB/4GB free the operating system sets a high memory signal. SQL Server only allocates memory when the high memory signal is set.   To summarize, for SQL Server to grow you need three conditions: a workload, max server memory setting higher than the current allocation, high memory signals from the OS.    When does SQL Server shrink caches? SQL Server as a rule does not like to return memory to the OS, but it will shrink its caches in response to memory pressure. Memory pressure can be divided into “internal” and “external”.   - External memory pressure occurs when the operating system is running low on memory and low memory signals are set. The SQL Server Resource Monitor checks for low memory signals approximately every 5 seconds and it will attempt to free memory until the signals stop.   To free memory SQL Server does the following: ·         Frees unused memory. ·         Notifies Memory Manager Clients to release memory o   Caches – Free unreferenced cache objects. o   Buffer pool - Based on oldest access times.   The freed memory is released back to the operating system. This process continues until the low memory resource notifications stop.    - Internal memory pressure occurs when the size of different caches and allocations increase but the SQL Server process needs to keep its total memory within a target value. For example if max server memory is set and certain caches are growing large, it will cause SQL to free memory for re-use internally, but not to release memory back to the OS. If you lower the value of max server memory you will generate internal memory pressure that will cause SQL to release memory back to the OS.    Memory pressure handling has not changed much since SQL 2005 and it was described in detail in a blog post by Slava Oks.   Note that SQL Server Express is an exception to the above behavior. Unlike other editions it does not assume it is the most important process running on the system but tries to be more “desktop” friendly. It will empty its working set after a period of inactivity.   How does SQL Server respond to changing OS memory?    In SQL Server 2005 support for Hot-Add memory was introduced. This feature, available in Enterprise and above editions, allows the server to make use of any extra physical memory that was added after SQL Server started. Being able to add physical memory when the system is running is limited to specialized hardware, but with the Hyper-V Dynamic Memory feature, when new memory is allocated to a guest virtual machine, it looks like hot-add physical memory to the guest. What this means is that thanks to the hot-add memory feature, SQL Server 2005 and higher can dynamically grow if more “physical” memory is granted to a guest VM by Hyper-V dynamic memory.   SQL Server checks OS memory every second and dynamically adjusts its “target” (based on available OS memory and max server memory) accordingly.   In “Denali” Standard Edition will also have sqlserver.exe support for hot-add memory when running virtualized (i.e. 
detecting and acting on Hyper-V Dynamic Memory allocations).   How does a SQL Server workload in a guest VM impact Hyper-V dynamic memory scheduling?   When a SQL workload causes the sqlserver.exe process to grow its working set, the Hyper-V memory scheduler will detect memory pressure in the guest VM and add memory to it. SQL Server will then detect the extra memory and grow according to workload demand. In our tests we have seen this feedback process cause a guest VM to grow quickly in response to SQL workload - we are still working on characterizing this ramp-up.    How does SQL Server respond when Hyper-V removes memory from a guest VM through ballooning?   If pressure from other VMs causes Hyper-V Dynamic Memory to take memory away from a VM through ballooning (allocating memory with a virtual device driver and returning it to the host OS), Windows Memory Manager will page out unlocked portions of memory and signal low resource notification events. When SQL Server detects these events it will shrink memory until the low memory notifications stop (see cache shrinking description above).    This raises another question. Can we make SQL Server release memory more readily and hence behave more "dynamically" without compromising performance? In certain circumstances where the application workload is predictable it may be possible to have a job which varies "max server memory" according to need, lowering it when the engine is inactive and raising it before a period of activity. This would have limited applicability but it is something we're looking into.   What Memory Management changes are there in SQL Server “Denali”?   In SQL Server “Denali” (aka SQL11) the Memory Manager has been re-written to be more efficient. The main changes are summarized in this post. An important change with respect to Hyper-V Dynamic Memory support is that now the max server memory setting includes any size page allocations and managed CLR procedure allocations, so it now represents a closer approximation to total sqlserver.exe memory usage. This makes it easier to calculate a value for max server memory, which becomes important when configuring virtual machines to work well with Hyper-V Dynamic Memory Startup and Maximum RAM settings.   Another important change is no more AWE or hot-add support for 32-bit edition. This means if you're running a 32-bit edition of Denali you're limited to a 4GB address space and will not be able to take advantage of dynamically added OS memory that wasn't present when SQL Server started (though Hyper-V Dynamic Memory is still a supported configuration).   In part 3 we’ll develop some best practices for configuring and using SQL Server with Dynamic Memory. Originally posted at http://blogs.msdn.com/b/sqlosteam/
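As a practical aside (not from the original post), the max server memory setting discussed above is changed with sp_configure, and sys.dm_os_process_memory (SQL Server 2008 and later) is one way to watch how much memory sqlservr.exe is actually using while Dynamic Memory grows or shrinks the guest. A minimal sketch follows; the 4096 MB value is only an example and should be chosen to suit the VM's Maximum RAM setting:

    -- Lower or raise the buffer pool ceiling; lowering it creates internal memory
    -- pressure and causes SQL Server to release memory back to the OS.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', 4096;
    RECONFIGURE;

    -- Observe current process memory usage, including locked and large page allocations.
    SELECT physical_memory_in_use_kb,
           locked_page_allocations_kb,
           large_page_allocations_kb
    FROM sys.dm_os_process_memory;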


  • Create and Consume WCF service using Visual Studio 2010

    - by sreejukg
    In this article I am going to demonstrate how to create a WCF service that can be hosted inside IIS and a Windows application that consumes the WCF service. To support service oriented architecture, Microsoft developed the programming model named Windows Communication Foundation (WCF). ASMX, the prior model from Microsoft, was completely based on XML, and the .Net framework continues to support ASMX web services in future versions as well. While ASMX web services were the first step towards the service oriented architecture, Microsoft has made a big step forward by introducing WCF. An overview of planning for WCF can be found at this link http://msdn.microsoft.com/en-us/library/ff649584.aspx . The following are the important differences between WCF and ASMX from an asp.net developer point of view. 1. ASMX web services are easy to write, configure and consume 2. ASMX web services are only hosted in IIS 3. ASMX web services can only use HTTP 4. WCF can be hosted inside IIS, a Windows service, a console application, WAS (Windows Process Activation Service), etc. 5. WCF can be used with HTTP, TCP/IP, MSMQ and other protocols. The detailed difference between ASMX web services and WCF can be found here: http://msdn.microsoft.com/en-us/library/cc304771.aspx Though WCF is a bigger step for the future, Visual Studio makes it simpler to create, publish and consume the WCF service. In this demonstration, I am going to create a service named SayHello that accepts 2 parameters, a name and a language code. The service will return a hello greeting to the user name in the language that corresponds to the language code. So the proposed service usage is as follows. Caller: SayHello(“Sreeju”, “en”) -> return value -> Hello Sreeju Caller: SayHello(“???”, “ar”) -> return value -> ????? ??? Caller: SayHello(“Sreeju”, “es”) -> return value -> Hola Sreeju Note: calling an automated translation service is not the intention of this article. If you are interested, you can find the Bing Translator API and use it in your application. http://www.microsofttranslator.com/dev/ So let us start. First I am going to create a Service Application that offers the SayHello service. Open Visual Studio 2010, go to File -> New Project, under your preferred language in the templates section select WCF, select WCF Service Application as the project type, give the project a name (I named it HelloService), and click OK so that Visual Studio will create the project for you. In this demonstration, I have used C# as the programming language. Visual Studio will create the necessary files for you to start with. By default it will create a service named Service1.svc and there will be an interface named IService1.cs. The screenshot for the project in solution explorer is as follows Since I want to demonstrate how to create a new service, I deleted the Service1.svc and IService1.cs files from the project by right-clicking each file and selecting delete. Now in the project there is no service available, so I am going to create one. From the solution explorer, right-click the project and select Add -> New Item. The Add New Item dialog will appear. Select WCF service from the list, give the name as HelloService.svc, and click on the Add button. Now Visual Studio will create 2 files with the names IHelloService.cs and HelloService.svc. These files are basically the service definition (IHelloService.cs) and the service implementation (HelloService.svc). Let us examine the IHelloService interface.
The code states that IHelloService is the service definition and it provides an operation/method (similar to a web method in ASMX web services) named DoWork(). Any WCF service will have a definition file as an interface that defines the service. Let us see what is inside HelloService.svc. The code illustrated implements the interface IHelloService. The code is self-explanatory; the HelloService class needs to implement all the methods defined in the service definition. Let me modify the service to what I require. Open IHelloService.cs in Visual Studio, delete the DoWork() method and add a definition for SayHello(); do not forget to add the OperationContract attribute to the method. The modified IHelloService.cs will look as follows. Now implement the SayHello method in the HelloService.svc.cs file. Here I wrote the code for the SayHello method as follows. I am done with the service. Now you can build and run the service by pressing F5 (or selecting start debugging from the debug menu). Visual Studio will host the service and give you a client to test it. The screenshot is as follows. In the left pane, it shows the services available in the server and on the right side you can invoke the service. To test the SayHello service, double click on it from the above window. It will ask you to enter the parameters and click on the invoke button. See a sample output below. Now I am done with the service. The next step is to write a service client. Creating a consumer application involves 2 steps: first, generating the class and configuration file that correspond to the service; second, creating a project that utilizes the generated class and configuration file. First I am going to generate the class and configuration file. There is a great tool available with Visual Studio named svcutil.exe; this tool will create the necessary class and configuration files for you. Read the documentation for svcutil.exe here: http://msdn.microsoft.com/en-us/library/aa347733.aspx . Open the Visual Studio command prompt; you can find it under Start Menu -> All Programs -> Visual Studio 2010 -> Visual Studio Tools -> Visual Studio Command Prompt. Make sure the service is in a running state in Visual Studio. Note the URL for the service (from the running window, you can right click and choose copy address). Now from the command prompt, enter the svcutil.exe command as follows. I have mentioned the URL and the /d switch – for the directory to store the output files (in this case d:\temp). If you are using the Windows drive (in my case it is c:), make sure you open the command prompt with the run as administrator option, otherwise you will get a permission error (only in Windows 7 or Windows Vista). The tool has created 2 files, HelloService.cs and output.config. Now the next step is to create a new project, use the created files and consume the service. Let us do that now. I am going to add a console application to the current solution. Right-click the solution name in the solution explorer and select Add -> New Project. Under Visual C#, select console application and give the project a name; I named it TestService. Now navigate to d:\temp where I generated the files with svcutil.exe. Rename output.config to app.config. The next step is to add both files (d:\temp\helloservice.cs and app.config) to the project. In the solution explorer, right-click the project, select Add -> Add Existing Item, browse to the d:\temp folder, select the 2 files as mentioned before, and click on the add button. Now you need to add a reference to System.ServiceModel to the project.
From solution explorer, right click the references under testservice project, select Add reference. In the Add reference dialog, select the .Net tab, select System.ServiceModel, and click ok Now open program.cs by double clicking on it and add the code to consume the web service to the main method. The modified file looks as follows Right click the testservice project and set as startup project. Click f5 to run the project. See the sample output as follows Publishing WCF service under IIS is similar to publishing ASP.Net application. Publish the application to a folder using Visual studio publishing feature, create a virtual directory and create it as an application. Don’t forget to set the application pool to use ASP.Net version 4. One last thing you need to check is the app.config file you have added to the solution. See the element client under ServiceModel element. There is an endpoint element with address attribute that points to the published service URL. If you permanently host the service under IIS, you can simply change the address parameter to the corresponding one and your application will consume the service. You have seen how easily you can build/consume WCF service. If you need the solution in zipped format, please post your email below.
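Since the service and client code appear above only as screenshots, here is a minimal sketch of what the pieces described in this walkthrough could look like. The method signature, the svcutil command line and the generated proxy name HelloServiceClient are assumptions based on the steps above, not code copied from the original article.

    // IHelloService.cs - the service definition after replacing DoWork() with SayHello().
    using System.ServiceModel;

    [ServiceContract]
    public interface IHelloService
    {
        [OperationContract]
        string SayHello(string name, string languageCode);
    }

    // HelloService.svc.cs - the service implementation.
    public class HelloService : IHelloService
    {
        public string SayHello(string name, string languageCode)
        {
            // Return a greeting for the requested language; only a couple of codes are handled in this sketch.
            switch (languageCode)
            {
                case "es": return "Hola " + name;
                default:   return "Hello " + name;
            }
        }
    }

Generating the proxy from the Visual Studio command prompt might look like this, where the port number is whatever the Visual Studio test host assigned when you copied the service address:

    svcutil.exe http://localhost:12345/HelloService.svc /d:d:\temp

And the console client simply creates the generated proxy and calls the operation:

    // Program.cs in the TestService console application.
    using System;

    class Program
    {
        static void Main()
        {
            HelloServiceClient client = new HelloServiceClient();
            Console.WriteLine(client.SayHello("Sreeju", "es"));   // prints "Hola Sreeju"
            client.Close();
        }
    }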


  • 17 new features in Visual Studio 2010

    - by vik20000in
    Visual Studio 2010 was released to RTM a few days back. This release of Visual Studio 2010 comes with a large number of improvements on many fronts. In this post I will try and point out some of the major improvements in Visual Studio 2010. 1)      Visual Studio IDE improvements - The Visual Studio IDE has been rewritten in WPF. The look and feel of the studio has been improved for better readability. The Start Page has been redesigned and is now template-based so that anyone can change the start page as they wish. 2)      Multiple Monitor - Support for multiple monitors was already there in Visual Studio, but in this edition it has been improved so much that we can now place the document, design and code windows outside the IDE on another monitor. 3)      ZOOM in Code Editor – Rewriting the editors in WPF has brought significant improvements. The best one that I like is the ZOOM feature. We can now zoom in the code editor with the help of Ctrl + mouse scroll. The zoom feature does not work on the design surface or on windows with icons like the solution view and toolbox. 4)      Box Selection - Another important improvement in Visual Studio 2010 is box selection. We can select a rectangular region by holding down the Alt key and selecting with the mouse. Now in the rectangular selection we can insert text, paste the same code on different lines, etc. This is helpful if you want to convert a number of variables from public to private etc… 5)      New Improved Search – One of the best productivity improvements in Visual Studio 2010 is its new search-as-you-type support. This has been done in the Navigate To window which can be brought up by pressing (Ctrl + ,). The Navigate To window also understands camel casing and will be able to search by camel casing when characters are entered in upper case. For example, we can search AOH for AddOrderHeader. 6)      Call Hierarchy – This feature is only available in the Visual C# and Visual C++ editors. The call hierarchy window displays the calls made to and from (yes, both to and from) a selected method, property or constructor. The call hierarchy also shows the implementations of an interface and the overrides of virtual or abstract methods. This window is very helpful in understanding the code flow and evaluating the effect of making changes. The best part is it is available at design time and not only at runtime like a debugger. 7)      Highlighting references – One of the very cool features in Visual Studio 2010 is that if you select a variable then all the uses of that variable will be highlighted alongside. This works for all the symbol results returned by Find All References. This also works for names of classes, object variables, properties and methods. We can also use Ctrl + Shift + Down Arrow or Up Arrow to move through them. 8)      Generate from usage - The Generate from usage feature lets you use classes and members before you define them. You can generate a stub for any undefined class, constructor, method, property, field, or enum that you want to use but have not yet defined. You can generate new types and members without leaving your current location in code. This minimizes interruption to your workflow. 9)      IntelliSense Suggestion Mode - IntelliSense now provides two alternatives for IntelliSense statement completion, completion mode and suggestion mode. Use suggestion mode for situations where classes and members are used before they are defined. 
In suggestion mode, when you type in the editor and then commit the entry, the text you typed is inserted into the code. When you commit an entry in completion mode, the editor shows the entry that is highlighted on the members list. When an IntelliSense window is open, you can press CTRL+ALT+SPACEBAR to toggle between completion mode and suggestion mode. 10)   Application Lifecycle Management – A client application for management of the application lifecycle - version control, work item tracking, build automation, team portal, etc. - is available for free (this is not available for the Express edition). 11)   Start Page – The start page has been redesigned with WPF for new functionality and look. Tabbed areas are provided for content from different sources including MSDN. Once you open a project the start page closes automatically. The list of recent projects also lets you remove projects from the list. And above all the start page is customizable enough to be changed as per individual requirements. 12)   Extension Manager – Visual Studio 2010 provides good ways to be extended. We can also use MEF to extend most of the features of Visual Studio. The new Extension Manager can now go to the Visual Studio Gallery and install extensions without even opening a browser. 13)   Code snippets – Visual Studio 2010 now provides code snippets for HTML, JScript and ASP.NET as well. 14)   Improved IntelliSense for JavaScript – IntelliSense for JavaScript has been improved vastly (it is around 2-5 times faster). IntelliSense now also shows XML documentation comments as you type. 15)   Web Deployment – Web deployment has been vastly improved. We can package and publish the web application in one click. The three major supported deployment scenarios are web packages, one-click deployment and web configuration transformation. 16)   SharePoint - Visual Studio 2010 also brings a vastly improved development experience for SharePoint. We can create, edit, debug, package, deploy and activate SharePoint projects from within Visual Studio. Deployment of a site is as easy as hitting F5. 17)   Azure – Visual Studio 2010 also comes with handy improvements for developing on the Windows Azure environment. Vikram

    Read the article

  • Integrate Bing Search API into ASP.Net application

    - by sreejukg
    Couple of months back, I wrote an article about how to integrate Bing Search engine (API 2.0) with ASP.Net website. You can refer the article here http://weblogs.asp.net/sreejukg/archive/2012/04/07/integrate-bing-api-for-search-inside-asp-net-web-application.aspx Things are changing rapidly in the tech world and Bing has also changed! The Bing Search API 2.0 will work until August 1, 2012, after that it will not return results. Shocked? Don’t worry the API has moved to Windows Azure market place and available for you to sign up and continue using it and there is a free version available based on your usage. In this article, I am going to explain how you can integrate the new Bing API that is available in the Windows Azure market place with your website. You can access the Windows Azure market place from the below link https://datamarket.azure.com/ There is lot of applications available for you to subscribe and use. Bing is one of them. You can find the new Bing Search API from the below link https://datamarket.azure.com/dataset/5BA839F1-12CE-4CCE-BF57-A49D98D29A44 To get access to Bing Search API, first you need to register an account with Windows Azure market place. Sign in to the Windows Azure market place site using your windows live account. Once you sign in with your windows live account, you need to register to Windows Azure Market place account. From the Windows Azure market place, you will see the sign in button it the top right of the page. Clicking on the sign in button will take you to the Windows live ID authentication page. You can enter a windows live ID here to login. Once logged in you will see the Registration page for the Windows Azure market place as follows. You can agree or disagree for the email address usage by Microsoft. I believe selecting the check box means you will get email about what is happening in Windows Azure market place. Click on continue button once you are done. In the next page, you should accept the terms of use, it is not optional, you must agree to terms and conditions. Scroll down to the page and select the I agree checkbox and click on Register Button. Now you are a registered member of Windows Azure market place. You can subscribe to data applications. In order to use BING API in your application, you must obtain your account Key, in the previous version of Bing you were required an API key, the current version uses Account Key instead. Once you logged in to the Windows Azure market place, you can see “My Account” in the top menu, from the Top menu; go to “My Account” Section. From the My Account section, you can manage your subscriptions and Account Keys. Account Keys will be used by your applications to access the subscriptions from the market place. Click on My Account link, you can see Account Keys in the left menu and then Add an account key or you can use the default Account key available. Creating account key is very simple process. Also you can remove the account keys you create if necessary. The next step is to subscribe to BING Search API. At this moment, Bing Offers 2 APIs for search. The available options are as follows. 1. Bing Search API - https://datamarket.azure.com/dataset/5ba839f1-12ce-4cce-bf57-a49d98d29a44 2. Bing Search API – Web Results only - https://datamarket.azure.com/dataset/8818f55e-2fe5-4ce3-a617-0b8ba8419f65 The difference is that the later will give you only web results where the other you can specify the source type such as image, video, web, news etc. 
Carefully choose the API based on your application requirements. In this article, I am going to use the Web Results Only API, but the steps will be similar for both. Go to the API page https://datamarket.azure.com/dataset/8818f55e-2fe5-4ce3-a617-0b8ba8419f65; you can see the subscription options on the right side, and at the bottom of the page you can see the free option. Since I am going to use the free option, just click the Sign Up link for it, select the I agree check box and click on the Sign Up button. You will get a receipt page that details your subscription. Now you are ready to use the Bing Search API – Web Results. The next step is to integrate the API into your ASP.Net application. If you go to the Search API page (as well as the receipt page), you can see a .Net C# Class Library link; click on the link and you will get a code file named “BingSearchContainer.cs”. In the following sections I am going to demonstrate the use of the Bing Search API from an ASP.Net application. Create an empty ASP.Net web application. In the solution explorer, the application will look as follows. Now add the downloaded code file (“BingSearchContainer.cs”) to the project: right click your project in solution explorer, choose Add -> Existing Item, browse to the downloaded location, select the “BingSearchContainer.cs” file and add it to the project. To build the code file you need to add a reference to the following library: System.Data.Services.Client. You can find the library in the .Net tab when you select Add -> Reference. Try to build your project now; it should build without any errors. Add an ASP.Net page to the project. I have included a text box, a button and a GridView on the page; the idea is to search for the text entered and display the results in the GridView. The page looks as follows in the Visual Studio designer. The markup of the page is as follows. In the button click event handler for the search button, I have used the following code. Now run your project, enter some text in the text box and click the Search button; you will see the results coming from Bing, cool. I entered the text “Microsoft” in the textbox, clicked on the button and got the following results. Searching Specific Websites If you want to search a particular website, you pass the site url with site:<site url name>, and if you have more sites, use a pipe (|). e.g. The following search query site:microsoft.com | site:adobe.com design will search for the word design and return results from Microsoft.com and Adobe.com. See the sample code that searches only Microsoft.com for the text entered in the above sample. var webResults = bingContainer.Web("site:www.Microsoft.com " + txtSearch.Text, null, null, null, null, null, null); Paging the results returned by the API By default the Bing API will return 100 results for your query. The default code file that you downloaded from Bing doesn’t include any option for this, but you can modify the downloaded code to perform paging. The Bing API supports two parameters, $top (the number of results to return) and $skip (the number of records to skip). So if you want the 3rd page of results with a page size of 10, you need to pass $top = 10 and $skip = 20. Open BingSearchContainer.cs in the editor. You can see the Web method in it as follows. public DataServiceQuery<WebResult> Web(String Query, String Market, String Adult, Double? Latitude, Double?
Longitude, String WebFileType, String Options) { In the method signature, I have added two more parameters: public DataServiceQuery<WebResult> Web(String Query, String Market, String Adult, Double? Latitude, Double? Longitude, String WebFileType, String Options, int resultCount, int pageNo) { and in the method body, you need to pass the parameters to the query variable: query = query.AddQueryOption("$top", resultCount); query = query.AddQueryOption("$skip", (pageNo - 1) * resultCount); return query; Note that I didn’t perform any validation, but you need to check conditions such as resultCount and pageNo being greater than or equal to 1; if the parameters are not valid, the Bing Search API will throw an error. The modified method is as follows, with the changes highlighted (a rough sketch of the full method is also included after this post). Now see the following code in the SearchPage.aspx.cs file: protected void btnSearch_Click(object sender, EventArgs e) {     var bingContainer = new Bing.BingSearchContainer(new Uri("https://api.datamarket.azure.com/Bing/SearchWeb/"));     // replace this value with your account key     var accountKey = "your key";     // the next line configures the bingContainer to use your credentials.     bingContainer.Credentials = new NetworkCredential(accountKey, accountKey);     var webResults = bingContainer.Web("site:microsoft.com " + txtSearch.Text, null, null, null, null, null, null, 3, 2);     lstResults.DataSource = webResults;     lstResults.DataBind(); } This code returns 3 results starting from the second page (by skipping the first 3 results). See the result page as follows. Bing provides complete integration to its offerings; when you develop search-based applications, you can use the power of Bing to perform the search. Integrating the Bing Search API into an ASP.Net application is a simple process, and without investing much time you can develop a good search-based application. Make sure you read the terms of use before designing the application and decide which API usage is suitable for you. Further readings BING API Migration Guide http://go.microsoft.com/fwlink/?LinkID=248077 Bing API FAQ http://go.microsoft.com/fwlink/?LinkID=252146 Bing API Schema Guide http://go.microsoft.com/fwlink/?LinkID=252151
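The modified method itself does not come through in the text above (it was presumably shown as an image), so here is a rough sketch of what the paged Web method could look like after the change. The part that builds the query options for the original parameters is abbreviated, and the string escaping the generated file performs is omitted, so treat this as illustrative rather than the exact generated code.

public DataServiceQuery<WebResult> Web(String Query, String Market, String Adult,
    Double? Latitude, Double? Longitude, String WebFileType, String Options,
    int resultCount, int pageNo)
{
    DataServiceQuery<WebResult> query = base.CreateQuery<WebResult>("Web");

    // ...the generated code that adds Query, Market, Adult, Latitude, Longitude,
    // WebFileType and Options as query options stays exactly as it was...

    // New paging support: $top sets the page size, $skip jumps to the requested page.
    query = query.AddQueryOption("$top", resultCount);
    query = query.AddQueryOption("$skip", (pageNo - 1) * resultCount);

    return query;
}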

    Read the article

  • How about a new platform for your next API… a CMS?

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2014/05/22/how-about-a-new-platform-for-your-next-apihellip-a.aspxSay what? I’m seeing a type of API emerge which serves static or long-lived resources, which are mostly read-only and have a controlled process to update the data that gets served. Think of something like an app configuration API, where you want a central location for changeable settings. You could use this server side to store database connection strings and keep all your instances in sync, or it could be used client side to push changes out to all users (and potentially driving A/B or MVT testing). That’s a good candidate for a RESTful API which makes proper use of HTTP expiration and validation caching to minimise traffic, but really you want a front end UI where you can edit the current config that the API returns and publish your changes. Sound like a Content Mangement System would be a good fit? I’ve been looking at that and it’s a great fit for this scenario. You get a lot of what you need out of the box, the amount of custom code you need to write is minimal, and you get a whole lot of extra stuff from using CMS which is very useful, but probably not something you’d build if you had to put together a quick UI over your API content (like a publish workflow, fine-grained security and an audit trail). You typically use a CMS for HTML resources, but it’s simple to expose JSON instead – or to do content negotiation to support both, so you can open a resource in a browser and see a nice visual representation, or request it with: Accept=application/json and get the same content rendered as JSON for the app to use. Enter Umbraco Umbraco is an open source .NET CMS that’s been around for a while. It has very good adoption, a lively community and a good release cycle. It’s easy to use, has all the functionality you need for a CMS-driven API, and it’s scalable (although you won’t necessarily put much scale on the CMS layer). In the rest of this post, I’ll build out a simple app config API using Umbraco. We’ll define the structure of the configuration resource by creating a new Document Type and setting custom properties; then we’ll build a very simple Razor template to return configuration documents as JSON; then create a resource and see how it looks. And we’ll look at how you could build this into a wider solution. If you want to try this for yourself, it’s ultra easy – there’s an Umbraco image in the Azure Website gallery, so all you need to to is create a new Website, select Umbraco from the image and complete the installation. It will create a SQL Azure website to store all the content, as well as a Website instance for editing and accessing content. They’re standard Azure resources, so you can scale them as you need. The default install creates a starter site for some HTML content, which you can use to learn your way around (or just delete). 1. Create Configuration Document Type In Umbraco you manage content by creating and modifying documents, and every document has a known type, defining what properties it holds. We’ll create a new Document Type to describe some basic config settings. In the Settings section from the left navigation (spanner icon), expand Document Types and Master, hit the ellipsis and select to create a new Document Type: This will base your new type off the Master type, which gives you some existing properties that we’ll use – like the Page Title which will be the resource URL. 
In the Generic Properties tab for the new Document Type, you set the properties you’ll be able to edit and return for the resource: Here I’ve added a text string where I’ll set a default cache lifespan, an image which I can use for a banner display, and a date which could show the user when the next release is due. This is the sort of thing that sits nicely in an app config API. It’s likely to change during the life of the product, but not very often, so it’s good to have a centralised place where you can make and publish changes easily and safely. It also enables A/B and MVT testing, as you can change the response each client gets based on your set logic, and their apps will behave differently without needing a release. 2. Define the response template Now we’ve defined the structure of the resource (as a document), in Umbraco we can define a C# Razor template to say how that resource gets rendered to the client. If you only want to provide JSON, it’s easy to render the content of the document by building each property in the response (Umbraco uses dynamic objects so you can specify document properties as object properties), or you can support content negotiation with very little effort. Here’s a template to render the document as HTML or JSON depending on the Accept header, using JSON.NET for the API rendering: @inherits Umbraco.Web.Mvc.UmbracoTemplatePage @using Newtonsoft.Json @{ Layout = null; } @if(UmbracoContext.HttpContext.Request.Headers["accept"] != null &amp;&amp; UmbracoContext.HttpContext.Request.Headers["accept"] == "application/json") { Response.ContentType = "application/json"; @Html.Raw(JsonConvert.SerializeObject(new { cacheLifespan = CurrentPage.cacheLifespan, bannerImageUrl = CurrentPage.bannerImage, nextReleaseDate = CurrentPage.nextReleaseDate })) } else { <h1>App configuration</h1> <p>Cache lifespan: <b>@CurrentPage.cacheLifespan</b></p> <p>Banner Image: </p> <img src="@CurrentPage.bannerImage"> <p>Next Release Date: <b>@CurrentPage.nextReleaseDate</b></p> } That’s a rough-and ready example of what you can do. You could make it completely generic and just render all the document’s properties as JSON, but having a specific template for each resource gives you control over what gets sent out. And the templates are evaluated at run-time, so if you need to change the output – or extend it, say to add caching response headers – you just edit the template and save, and the next client request gets rendered from the new template. No code to build and ship. 3. Create the content With your document type created, in  the Content pane you can create a new instance of that document, where Umbraco gives you a nice UI to input values for the properties we set up on the Document Type: Here I’ve set the cache lifespan to an xs:duration value, uploaded an image for the banner and specified a release date. Each property gets the appropriate input control – text box, file upload and date picker. At the top of the page is the name of the resource – myapp in this example. That specifies the URL for the resource, so if I had a DNS entry pointing to my Umbraco instance, I could access the config with a URL like http://static.x.y.z.com/config/myapp. The setup is all done now, so when we publish this resource it’ll be available to access.  4. Access the resource Now if you open  that URL in the browser, you’ll see the HTML version rendered: - complete with the  image and formatted date. 
Umbraco lets you save changes and preview them before publishing, so the HTML view could be a good way of showing editors their changes in a usable view, before they confirm them. If you browse the same URL from a REST client, specifying the Accept=application/json request header, you get this response:   That’s the exact same resource, with a managed UI to publish it, being accessed as HTML or JSON with a tiny amount of effort. 5. The wider landscape If you have fairy stable content to expose as an API, I think  this approach is really worth considering. Umbraco scales very nicely, but in a typical solution you probably wouldn’t need it to. When you have additional requirements, like logging API access requests - but doing it out-of-band so clients aren’t impacted, you can put a very thin API layer on top of Umbraco, and cache the CMS responses in your API layer:   Here the API does a passthrough to CMS, so the CMS still controls the content, but it caches the response. If the response is cached for 1 minute, then Umbraco only needs to handle 1 request per minute (multiplied by the number of API instances), so if you need to support 1000s of request per second, you’re scaling a thin, simple API layer rather than having to scale the more complex CMS infrastructure (including the database). This diagram also shows an approach to logging, by asynchronously publishing a message to a queue (Redis in this case), which can be picked up later and persisted by a different process. Does it work? Beautifully. Using Azure, I spiked the solution above (including the Redis logging framework which I’ll blog about later) in half a day. That included setting up different roles in Umbraco to demonstrate a managed workflow for publishing changes, and a couple of document types representing different resources. Is it maintainable? We have three moving parts, which are all managed resources in Azure –  an Azure Website for Umbraco which may need a couple of instances for HA (or may not, depending on how long the content can be cached), a message queue (Redis is in preview in Azure, but you can easily use Service Bus Queues if performance is less of a concern), and the Web Role for the API. Two of the components are off-the-shelf, from open source projects, and the only custom code is the API which is very simple. Does it scale? Pretty nicely. With a single Umbraco instance running as an Azure Website, and with 4x instances for my API layer (Standard sized Web Roles), I got just under 4,000 requests per second served reliably, with a Worker Role in the background saving the access logs. So we had a nice UI to publish app config changes, with a friendly Web preview and a publishing workflow, capable of supporting 14 million requests in an hour, with less than a day’s effort. Worth considering if you’re publishing long-lived resources through your API.
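Section 5 mentions putting a very thin, caching API layer in front of the CMS but does not show it, so here is one possible sketch using ASP.NET Web API and MemoryCache. The Umbraco base address, the route shape and the one-minute cache lifetime are assumptions made for the example, not details from the original solution.

using System;
using System.Net.Http;
using System.Runtime.Caching;
using System.Threading.Tasks;
using System.Web.Http;
using Newtonsoft.Json.Linq;

public class ConfigController : ApiController
{
    private static readonly MemoryCache Cache = MemoryCache.Default;
    private static readonly HttpClient Cms = new HttpClient
    {
        // Assumed CMS address - point this at the Umbraco instance serving the config documents.
        BaseAddress = new Uri("http://my-umbraco-site.azurewebsites.net/config/")
    };

    // GET api/config/{id} - id is the resource name, e.g. "myapp".
    public async Task<IHttpActionResult> Get(string id)
    {
        var json = Cache.Get(id) as string;
        if (json == null)
        {
            // Ask the CMS for the JSON rendering of the resource (content negotiation,
            // as in the Razor template above).
            var request = new HttpRequestMessage(HttpMethod.Get, id);
            request.Headers.Accept.ParseAdd("application/json");
            var response = await Cms.SendAsync(request);
            response.EnsureSuccessStatusCode();
            json = await response.Content.ReadAsStringAsync();

            // Cache for one minute so Umbraco only sees roughly one request per minute per resource.
            Cache.Set(id, json, DateTimeOffset.UtcNow.AddMinutes(1));
        }
        return Json(JObject.Parse(json));
    }
}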

    Read the article

  • Upgrading Windows 8 boot to VHD to Windows 8.1 – Step by step guide

    - by Liam Westley
    Originally posted on: http://geekswithblogs.net/twickers/archive/2013/10/19/upgrading-windows-8-boot-to-vhd-to-windows-8.1ndashstep-by.aspxBoot to VHD – dual booting Windows 7 and Windows 8 became easy When Windows 8 arrived, quite a few people decided that they would still dual boot their machines, and instead of mucking about with resizing disk partitions to free up space for Windows 8 they decided to use the boot from VHD feature to create a huge hard disc image into which Windows 8 could be installed.  Scott Hanselman wrote this installation guide, while I myself used the installation guide from Ed Bott of ZD net fame. Boot to VHD is a great solution, it achieves a dual boot, can be backed up easily and had virtually no effect on the original Windows 7 partition. As a developer who has dual booted Windows operating systems for years, hacking boot.ini files, the boot to VHD was a much easier solution. Upgrade to Windows 8.1 – ah, you can’t do that on a virtual disk installation (boot to VHD) Last week the final version of Windows 8.1 arrived, and I went into the Windows Store to upgrade.  Luckily I’m on a fast download service, and use an SSD, because once the upgrade was downloaded and prepared Windows informed that This PC can’t run Windows 8.1, and provided the reason, You can’t install Windows on a virtual drive.  You can see an image of the message and discussion that sparked my search for a solution in this Microsoft Technet forum post. I was determined not to have to resize partitions yet again and fiddle with VHD to disk utilities and back again, and in the end I did succeed in upgrading to a Windows 8.1 boot to VHD partition.  It takes quite a bit of effort though … tldr; Simple steps of how you upgrade Boot into Windows 7 – make a copy of your Windows 8 VHD, to become Windows 8.1 Enable Hyper-V in your Windows 8 (the original boot to VHD partition) Create a new virtual machine, attaching the copy of your Windows 8 VHD Start the virtual machine, upgrade it via the Windows Store to Windows 8.1 Shutdown the virtual machine Boot into Windows 7 – use the bcedit tool to create a new Windows 8.1 boot to VHD option (pointing at the copy) Boot into the new Windows 8.1 option Reactivate Windows 8.1 (it will have become deactivated by running under Hyper-V) Remove the original Windows 8 VHD, and in Windows 7 use bcedit to remove it from the boot menu Things you’ll need A system that can run Hyper-V under Windows 8 (Intel i5, i7 class CPU) Enough space to have your original Windows 8 boot to VHD and a copy at the same time An ISO or DVD for Windows 8 to create a bootable Windows 8 partition Step by step guide Boot to your base o/s, the real one, Windows 7. Make a copy of the Windows 8 VHD file that you use to boot Windows 8 (via boot from VHD) – I copied it from a folder on C: called VHD-Win8 to VHD-Win8.1 on my N: drive. Reboot your system into Windows 8, and enable Hyper-V if not already present (this may require reboot) Use the Hyper-V manager , create a new Hyper-V machine, using half your system memory, and use the option to attach an existing VHD on the main IDE controller – this will be the new copy you made in Step 2. Start the virtual machine, use Connect to view it, and you’ll probably discover it cannot boot as there is no boot record If this is the case, go to Hyper-V manager, edit the Settings for the virtual machine to attach an ISO of a Windows 8 DVD to the second IDE controller. 
Start the virtual machine and use Connect to view it; it should now attempt a fresh installation of Windows 8. Select Advanced Options and choose Repair – this will make the VHD bootable. When the setup reboots your virtual machine, turn off the virtual machine and remove the ISO of the Windows 8 DVD from the virtual machine settings. Start the virtual machine and use Connect to view it. You will see the devices being re-discovered (including your quad CPU becoming a single CPU), and eventually you should see the Windows login screen. You may notice that your desktop background (Win+D) has turned black, as your Windows installation has become deactivated due to the hardware changes between your real PC and Hyper-V. Fortunately, becoming deactivated does not stop you using the Windows Store, where you can select the update to Windows 8.1. You can now watch the progress of the Windows 8.1 update: downloading, preparing to update, checking compatibility, gathering info, preparing to restart, and finally, confirm restart – remember that you are restarting your virtual machine sitting on the copy of the VHD, not the Windows 8 boot to VHD you are currently using to run Hyper-V (confused yet?). After the reboot you get the real upgrade messages; setting up x%, xx% (quite slow); after a while, Getting ready; Applying PC settings x%, xx% (really slow); Updating your system (fast); Setting up a few more things x% (quite slow); Getting ready, again; Accept license terms; Express settings; Confirm previous password. Next, I had to set up a Microsoft account – which is possibly now required, and not optional. Using the Microsoft account required a two-factor authorization via text message, a 7 digit code for me. Finalising settings; blank screen; 'Hi... We're setting up things for you' (similar to the original Windows 8 install); 'You can get new apps from the Store', below which is 'Installing your apps' – I had Windows Media Center, which counts as an app from the Store; 'Taking care of a few things', below which is 'Installing your apps'; 'Taking care of a few things', below 'Don't turn off your PC'; 'Getting your apps ready', below 'Don't turn off your PC'; 'Almost ready', below 'Don't turn off your PC'; … finally, we get the Windows 8.1 start menu, and a quick Win+D to check the desktop confirmed all the application icons I expected, the pinned items on the taskbar, and one app moaning about a missing drive. At this point the upgrade is complete – you can shut down the virtual machine. Reboot from the original Windows 8 and return to Windows 7 to configure booting to the Windows 8.1 copy of the VHD. In an administrator command prompt do the following, using the bcdedit tool (from an MSDN blog about configuring VHD to boot in Windows 7): Type bcdedit to list the current boot options, so you can copy the GUID (complete with brackets/braces) for the original Windows 8 boot to VHD. Create a new menu option as a copy of the Windows 8 option; bcdedit /copy {originalguid} /d "Windows 8.1" Point the new Windows 8.1 option's boot device to the copy of the VHD; bcdedit /set {newguid} device vhd=[D:]\Image.vhd Point the new Windows 8.1 option's OS device to the copy of the VHD; bcdedit /set {newguid} osdevice vhd=[D:]\Image.vhd Set autodetection of the HAL (may already be set); bcdedit /set {newguid} detecthal on (these commands are collected into a single listing at the end of this post). Reboot from Windows 7 and select the new option 'Windows 8.1' on the boot menu, and you'll have some messages to look at as your hardware is redetected (since you are back from 1 CPU to 4 CPUs): 'Getting devices ready', blank then xx%, with an occasional blank screen
for the graphics driver (fast-ish); Getting ready message (fast). You will have to suffer one final reboot, choose 'Windows 8.1', and you can now log in to a lovely Windows 8.1 start screen running on non-virtualized hardware via boot to VHD. After checking everything is running fine, you can choose to Activate Windows, which for me was a toll-free phone call to the automated system where you type in lots of numbers to be given a whole bunch of new activation codes. Once you're happy with your new Windows 8.1 boot to VHD and no longer need the Windows 8 boot to VHD, feel free to delete the old one. I do believe that once you upgrade, you are no longer licensed to use it anyway. There, that was simple, wasn't it? Looking at the huge list of steps it took to perform this upgrade, you may wonder whether I think this is worth it. Well, I think it is worth booting to VHD. It makes backups a snap (go to Windows 7, copy the VHD, and you have backed up the o/s) and helps with disk management – if you want to move the o/s, you can move the VHD and repoint the boot menu to the new location. The downside is that Microsoft has completely neglected to support boot to VHD as an upgradable option. Quite a poor decision in my opinion, and if you read Twitter and the forums quite a few people agree with that view. It's a shame this got missed in the work on creating the upgrade packages for Windows 8.1.
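As referenced above, the boot-menu changes condense into one elevated command prompt session. {originalguid} and {newguid} stand for the GUIDs bcdedit prints, and [D:]\Image.vhd is the article's example path; substitute the location of your copied VHD.

rem List the current boot entries and note the GUID of the existing Windows 8 (boot to VHD) entry
bcdedit

rem Duplicate that entry under a new name
bcdedit /copy {originalguid} /d "Windows 8.1"

rem Point the new entry's boot and OS devices at the copied VHD
bcdedit /set {newguid} device vhd=[D:]\Image.vhd
bcdedit /set {newguid} osdevice vhd=[D:]\Image.vhd

rem Let Windows detect the HAL automatically (may already be set)
bcdedit /set {newguid} detecthal on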

    Read the article

  • Building and Deploying Windows Azure Web Sites using Git and GitHub for Windows

    - by shiju
    Microsoft Windows Azure team has released a new version of Windows Azure which is providing many excellent features. The new Windows Azure provides Web Sites which allows you to deploy up to 10 web sites  for free in a multitenant shared environment and you can easily upgrade this web site to a private, dedicated virtual server when the traffic is grows. The Meet Windows Azure Fact Sheet provides the following information about a Windows Azure Web Site: Windows Azure Web Sites enable developers to easily build and deploy websites with support for multiple frameworks and popular open source applications, including ASP.NET, PHP and Node.js. With just a few clicks, developers can take advantage of Windows Azure’s global scale without having to worry about operations, servers or infrastructure. It is easy to deploy existing sites, if they run on Internet Information Services (IIS) 7, or to build new sites, with a free offer of 10 websites upon signup, with the ability to scale up as needed with reserved instances. Windows Azure Web Sites includes support for the following: Multiple frameworks including ASP.NET, PHP and Node.js Popular open source software apps including WordPress, Joomla!, Drupal, Umbraco and DotNetNuke Windows Azure SQL Database and MySQL databases Multiple types of developer tools and protocols including Visual Studio, Git, FTP, Visual Studio Team Foundation Services and Microsoft WebMatrix Signup to Windows and Enable Azure Web Sites You can signup for a 90 days free trial account in Windows Azure from here. After creating an account in Windows Azure, go to https://account.windowsazure.com/ , and select to preview features to view the available previews. In the Web Sites section of the preview features, click “try it now” which will enables the web sites feature Create Web Site in Windows Azure To create a web sites, login to the Windows Azure portal, and select Web Sites from and click New icon from the left corner  Click WEB SITE, QUICK CREATE and put values for URL and REGION dropdown. You can see the all web sites from the dashboard of the Windows Azure portal Set up Git Publishing Select your web site from the dashboard, and select Set up Git publishing To enable Git publishing , you must give user name and password which will initialize a Git repository Clone Git Repository We can use GitHub for Windows to publish apps to non-GitHub repositories which is well explained by Phil Haack on his blog post. Here we are going to deploy the web site using GitHub for Windows. Let’s clone a Git repository using the Git Url which will be getting from the Windows Azure portal. Let’s copy the Git url and execute the “git clone” with the git url. You can use the Git Shell provided by GitHub for Windows. To get it, right on the GitHub for Windows, and select open shell here as shown in the below picture. When executing the Git Clone command, it will ask for a password where you have to give password which specified in the Windows Azure portal. After cloning the GIT repository, you can drag and drop the local Git repository folder to GitHub for Windows GUI. This will automatically add the Windows Azure Web Site repository onto GitHub for Windows where you can commit your changes and publish your web sites to Windows Azure. Publish the Web Site using GitHub for Windows We can add multiple framework level files including ASP.NET, PHP and Node.js, to the local repository folder can easily publish to Windows Azure from GitHub for Windows GUI. 
For this demo, let me just add a simple Node.js file named Server.js which handles few request handlers. 1: var http = require('http'); 2: var port=process.env.PORT; 3: var querystring = require('querystring'); 4: var utils = require('util'); 5: var url = require("url"); 6:   7: var server = http.createServer(function(req, res) { 8: switch (req.url) { //checking the request url 9: case '/': 10: homePageHandler (req, res); //handler for home page 11: break; 12: case '/register': 13: registerFormHandler (req, res);//hamdler for register 14: break; 15: default: 16: nofoundHandler (req, res);// handler for 404 not found 17: break; 18: } 19: }); 20: server.listen(port); 21: //function to display the html form 22: function homePageHandler (req, res) { 23: console.log('Request handler home was called.'); 24: res.writeHead(200, {'Content-Type': 'text/html'}); 25: var body = '<html>'+ 26: '<head>'+ 27: '<meta http-equiv="Content-Type" content="text/html; '+ 28: 'charset=UTF-8" />'+ 29: '</head>'+ 30: '<body>'+ 31: '<form action="/register" method="post">'+ 32: 'Name:<input type=text value="" name="name" size=15></br>'+ 33: 'Email:<input type=text value="" name="email" size=15></br>'+ 34: '<input type="submit" value="Submit" />'+ 35: '</form>'+ 36: '</body>'+ 37: '</html>'; 38: //response content 39: res.end(body); 40: } 41: //handler for Post request 42: function registerFormHandler (req, res) { 43: console.log('Request handler register was called.'); 44: var pathname = url.parse(req.url).pathname; 45: console.log("Request for " + pathname + " received."); 46: var postData = ""; 47: req.on('data', function(chunk) { 48: // append the current chunk of data to the postData variable 49: postData += chunk.toString(); 50: }); 51: req.on('end', function() { 52: // doing something with the posted data 53: res.writeHead(200, "OK", {'Content-Type': 'text/html'}); 54: // parse the posted data 55: var decodedBody = querystring.parse(postData); 56: // output the decoded data to the HTTP response 57: res.write('<html><head><title>Post data</title></head><body><pre>'); 58: res.write(utils.inspect(decodedBody)); 59: res.write('</pre></body></html>'); 60: res.end(); 61: }); 62: } 63: //Error handler for 404 no found 64: function nofoundHandler(req, res) { 65: console.log('Request handler nofound was called.'); 66: res.writeHead(404, {'Content-Type': 'text/plain'}); 67: res.end('404 Error - Request handler not found'); 68: } .csharpcode, .csharpcode pre { font-size: small; color: black; font-family: consolas, "Courier New", courier, monospace; background-color: #ffffff; /*white-space: pre;*/ } .csharpcode pre { margin: 0em; } .csharpcode .rem { color: #008000; } .csharpcode .kwrd { color: #0000ff; } .csharpcode .str { color: #006080; } .csharpcode .op { color: #0000c0; } .csharpcode .preproc { color: #cc6633; } .csharpcode .asp { background-color: #ffff00; } .csharpcode .html { color: #800000; } .csharpcode .attr { color: #ff0000; } .csharpcode .alt { background-color: #f4f4f4; width: 100%; margin: 0em; } .csharpcode .lnum { color: #606060; } If there is any change in the local repository folder, GitHub for Windows will automatically detect the changes. In the above step, we have just added a Server.js file so that GitHub for Windows will detect the changes. Let’s commit the changes to the local repository before publishing the web site to Windows Azure. After committed the all changes, you can click publish button which will publish the all changes to Windows Azure repository. 
The following screen shot shows deployment history from the Windows Azure portal.   GitHub for Windows is providing a sync button which can use for synchronizing between local repository and Windows Azure repository after making any commit on the local repository after any changes. Our web site is running after the deployment using Git Summary Windows Azure Web Sites lets the developers to easily build and deploy websites with support for multiple framework including ASP.NET, PHP and Node.js and can easily deploy the Web Sites using Visual Studio, Git, FTP, Visual Studio Team Foundation Services and Microsoft WebMatrix. In this demo, we have deployed a Node.js Web Site to Windows Azure using Git. We can use GitHub for Windows to publish apps to non-GitHub repositories and can use to publish Web SItes to Windows Azure.
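For readers who prefer the plain Git command line to the GitHub for Windows GUI, the same publish flow looks roughly like the following. The repository URL is a placeholder; use the Git URL and the deployment credentials shown for your own site in the Windows Azure portal.

# Clone the Web Site repository using the Git URL from the Azure portal
git clone https://username@yoursite.scm.azurewebsites.net/yoursite.git
cd yoursite

# Add the Node.js file (or any other supported content) and commit locally
git add Server.js
git commit -m "Add simple Node.js request handlers"

# Push to master; Windows Azure deploys the site on every push
git push origin master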

    Read the article

  • CodePlex Daily Summary for Monday, April 12, 2010

    CodePlex Daily Summary for Monday, April 12, 2010New Projects3 Hour Game Design Contest: The 3 Hour Game Design Contest is a programming contest for making simple games in 3 hours. 3 hours may not seem like enough time to make a game, b...BI Monkey SSIS ETL Framework: The BI Monkey SSIS ETL Framework is an ETL Execution, Control and Logging system for ETL projects using SSIS. It is supported by a SQL Server metad...Blend Sample Data Helpers: Helper behavior classes to generate sample images and data from Internet sources such as Flickr images. Bold TCP for Delphi 7: Open Sourcing the Bold TCP for Delphi 7.cfThreadingTools: This library project contains classes and extensions which will allow easy handling of multi-threaded UI-accesses.CuBiX_SDL: CuBiX_SDL : CuBiX est un projet personnel.Draglets: Draglets makes it easier for editors and CMS-developers to move and reorder content at their web sites. It's developed in ASP.NET, C# with WCF and ...DSQLT - Dynamic SQL Templates: DSQLT - Dynamic SQL Templates Use Stored Procedures as templates for dynamic SQL statements. Substitute parameters @0-@9 with values like objectna...Edtter: Edtter is a sample web application built on ASP.NET MVC 2 Framework. (Japanese Version Only)Forms Based Authentication Management - SharePoint2007FBA: This is my own update to Stacy Draper's FBABasic project for Forms Based Authentication in MOSS 2007. In additon to managing your fba user's roles,...Height Map to 3D World at XNA: Height Map to 3D World is a XNA project that developed firstly by Eric Grossinger and secondly improved by Karadeniz Technical University Computer ...HouseFly: A simple contact and note taking applicationITM 495 - iPhone Web App: School ProjectKaufleute: This will be finished laterLR: this project is about connecting toPowerShell Integration Services: A set of tools aimed at Extract Transform and Load tasks. Focused on getting the most common ETL tasks done without SSIS. Salient: A collection of, hopefully, useful libraries.Samurai.Validation: Extensible and flexible .Net object validation frameworkSamurai.Workflow: Samurai Workflow is a slim, easy-to-use workflow framework for WPF applications.SharePoint User Management WebPart: SharePoint User Management WebPartUrl shorte(ne)r: It's simple Url Shortener (like: http://tinyurl.com) Currently only Polish language is supported. In future will be provided multi language suppor...Yasbg: Yasbg (pronounced yas-bug) is Yet Another Static Blog Generator. It is made in C# using MarkdownSharp for markdown. Currently in alpha. New Releases.NET Extensions - Extension Methods Library: Release 2010.06: Added an universal approach for grouping extension methods like conversions. Conversion are now available on any data type (it's actually extension...3 Hour Game Design Contest: 3H-GDC mVII: This is the collection of game files for the 7th 3H-GDCB&W Port Scanner: Black`n`White Port Scanner 3.0: B&W Port Scanner 3 includes FTP Server detection tool, Better stability, Optimized memory management, Saving & Opening Result sets ... and more new...BI Monkey SSIS ETL Framework: Framework v1 Alpha: This Alpha release is not fully tested and some functionality is not operating as intended.Bluetooth Radar: Version 1.7: UI Changes Device UserControl Randomly placed devices.BugTracker.NET: BugTracker.NET 3.4.1: For the tasks/time tracking feature, added a way of viewing all the tasks at once, not just the tasks for one bug. 
Also added a way of exporting a...cfThreadingTools: cfThreadingTools 0.1.1.8: This is the first public available release. Following items are included: BaseTools-class which allows thread-safe setting of properties and callin...DeepZoom Pivot Constructor: DeepZoom Pivot Constructor v0.1: This is a test release of the library platform - Targets .NET 3.5 No samples yet, etc., but it works well :-)DSQLT - Dynamic SQL Templates: Initial release with License Included: nothing changed but license print procedure included the zip file contains database backup SQL script readmeForms Based Authentication Management - SharePoint2007FBA: SharePoint2007FBA 1.0.0.0: Downloads for the Project solution and the WSP package. Please read the Setup Guide. If you are unfamiliar with setting up Forms Based Authenticati...Foursquare BlogEngine Widget: foursquare widget for BlogEngine.NET version 0.3: To see the changes which have been made, visit http://philippkueng.ch/post/Foursquare-BlogEngineNET-widget-version-03.aspx For installation instruc...Framework Detector: FrameworkDetect Support .NET 4 v2: FrameworkDetect Support .NET 4Happy Turtle Plugins for BVI :: Repository Based Versioning for Visual Studio: Happy Turtle 1.0.46860: This is the second beta release of the SVN based version incrementor. Please feel free to create a thread in the discussion tabs and provide feedb...Height Map to 3D World at XNA: 3DWorld: Just open .rar file and extract it any folder and run Proje2Dto3D.exe file.HTML Ruby: 6.20.2: Removed rubyLineSpace option Improved options panel Fixed ruby text font-size rendering issue with complex ruby annotation Removed more waste...HTML Ruby: 6.20.3: Removed unused code Temporary partial fix for Firefox 3.7a4pre nightly buildHTML Ruby: 6.21.0: Added support for current HTML5 ruby annotation format. All ruby annotations are converted to XHTML 1.1 complex ruby annotation.Kooboo HTML form: Kooboo HTML Form Module for 2.1.0.0: Compatible with Kooboo cms 2.1.0.0 Upgrade to MVC 2Kooboo Menu: Kooboo CMS Menu for 2.1.0.0: Compatible with Kooboo cms 2.1.0.0 Upgrade to MVC 2Kooboo Meta: Kooboo Meta Module for 2.1.0.0: Compatible with Kooboo cms 2.1.0.0 Upgrade to MVC 2Kooboo PageMenu: Kooboo CMS PageMenu for 2.1.0.0: Compatible with Kooboo cms 2.1.0.0 Upgrade to MVC 2Kooboo Search: Kooboo CMS Search module for 2.1.0.0: Compatible with Kooboo cms 2.1.0.0 Upgrade to MVC 2Numina Application/Security Framework: Numina.Framework Core 50212: Added bulk import user page Added General settings page for updating Company Name, Theme, and API Key Add/Edit application calls Full URL to h...Rawr: Rawr 2.3.14: - Rawr3: Tons of fixes for Rawr3 compatability and UI. - Significant performance improvements all around. - More fixes and improvements to Wowhea...Rich Ajax empowered Web/Cloud Applications: 6.4 beta 2: The first fully featured version of Visual webGui offering web/cloud development tool that puts all ASP.NET Ajax limits behind with enhanced perfor...SharePoint User Management WebPart: User Management Web part 1.0: Most of the organization have one SharePoint Site which is configured with windows authenticated which is for internal employees having AD authenti...SkeinLibManaged: SkeinLibManaged 1.1.0.0 (Beta): This is the compiled DLL with XML documentation, so there should be plenty of context sensitive help and Intellisense. 
This is the Release version,...VCC: Latest build, v2.1.30411.0: Automatic drop of latest buildVFPX: Code References 1.1 Beta: Visit the Code References Info Page for complete information about this release.VisioAutomation: VisioAutomation 2.5.0: VisioAutomation 2.5.0- General cleanup/bugfixes - Many low-level changes the the VisioAutomation extension methods - these are far fewer now - This...Visual Studio DSite: English To Spanish Translator (Visual C++ 2008): A simple english to spanish translator made in visual c 2008, using the Google Translate API.WatchersNET CKEditor™ Provider for DotNetNuke: CKEditor Provider 1.10.00: Whats NewFile Browser: Inherits Folder Permissions from DotNetNuke Updated the Editor to Version 3.2.1 revision 5372 Added CkEditor jQuery Adap...Web/Cloud Applications Development Framework | Visual WebGui: 6.4 beta 2: The first fully featured version of Visual webGui offering web/cloud development tool that puts all ASP.NET Ajax limits behind with unique develope...WPF Data Virtualization: 1.0.0.0: First ReleaseYasbg: Yasbg Alpha: ReadmeYet Another Static Blog Generator is a command line utility that generates static html files for blogs. Currently, it is NOT feed enabled. I...異世界の新着動画: Ver. 10-04-12: ニコ生の仕様変更に対応 アンケート時間の設定追加Most Popular ProjectsWBFS ManagerRawrASP.NET Ajax LibraryMicrosoft SQL Server Product Samples: DatabaseAJAX Control ToolkitSilverlight ToolkitWindows Presentation Foundation (WPF)ASP.NETMicrosoft SQL Server Community & SamplesFacebook Developer ToolkitMost Active ProjectsRawrnopCommerce. Open Source online shop e-commerce solution.AutoPocopatterns & practices – Enterprise LibraryShweet: SharePoint 2010 Team Messaging built with PexFarseer Physics EngineNB_Store - Free DotNetNuke Ecommerce Catalog ModuleIonics Isapi Rewrite FilterBlogEngine.NETBeanProxy

    Read the article

  • Use an Ubuntu Live CD to Securely Wipe Your PC’s Hard Drive

    - by Trevor Bekolay
    Deleting files or quickly formatting a drive isn’t enough for sensitive personal information. We’ll show you how to get rid of it for good using a Ubuntu Live CD. When you delete a file in Windows, Ubuntu, or any other operating system, it doesn’t actually destroy the data stored on your hard drive, it just marks that data as “deleted.” If you overwrite it later, then that data is generally unrecoverable, but if the operating system doesn’t happen to overwrite it, then your data is still stored on your hard drive, recoverable by anyone who has the right software. By securely deleting files or entire hard drives, you can make sure your data is gone for good. Note: Modern hard drives are extremely sophisticated, as are the experts who recover data for a living. There is no guarantee that the methods covered in this article will make your data completely unrecoverable; however, they will make your data unrecoverable to the majority of recovery methods, and all methods that are readily available to the general public. Shred individual files Most of the data stored on your hard drive is harmless, and doesn’t reveal anything about you. If there are just a few files that you know you don’t want someone else to see, then the easiest way to get rid of them is with a built-in Linux utility called shred. Open a terminal window by clicking on Applications at the top-left of the screen, then expanding the Accessories menu and clicking on Terminal. Navigate to the file that you want to delete using cd to change directories and ls to list the files and folders in the current directory. As an example, we’ve got a file called BankInfo.txt on a Windows NTFS-formatted hard drive. We want to delete it securely, so we’ll call shred by entering the following in the terminal window: shred <file> which is, in our example: shred BankInfo.txt Notice that our BankInfo.txt file still exists, even though we’ve shredded it. A quick look at the contents of BankInfo.txt makes it obvious that the file has indeed been securely overwritten. We can use some command-line arguments to make shred delete the file from the hard drive as well. We can also be extra-careful about the shredding process by upping the number of times shred overwrites the original file. To do this, in the terminal, type in: shred --remove --iterations=<num> <file> By default, shred overwrites the file 25 times. We’ll double this, giving us the following command: shred --remove --iterations=50 BankInfo.txt BankInfo.txt has now been securely wiped on the physical disk, and also no longer shows up in the directory listing. Repeat this process for any sensitive files on your hard drive! Wipe entire hard drives If you’re disposing of an old hard drive, or giving it to someone else, then you might instead want to wipe your entire hard drive. shred can be invoked on hard drives, but on modern file systems, the shred process may be reversible. We’ll use the program wipe to securely delete all of the data on a hard drive. Unlike shred, wipe is not included in Ubuntu by default, so we have to install it. Open up the Synaptic Package Manager by clicking on System in the top-left corner of the screen, then expanding the Administration folder and clicking on Synaptic Package Manager. wipe is part of the Universe repository, which is not enabled by default. We’ll enable it by clicking on Settings > Repositories in the Synaptic Package Manager window. Check the checkbox next to “Community-maintained Open Source software (universe)”. Click Close.
You’ll need to reload Synaptic’s package list. Click on the Reload button in the main Synaptic Package Manager window. Once the package list has been reloaded, the text over the search field will change to “Rebuilding search index”. Wait until it reads “Quick search,” and then type “wipe” into the search field. The wipe package should come up, along with some other packages that perform similar functions. Click on the checkbox to the left of the label “wipe” and select “Mark for Installation”. Click on the Apply button to start the installation process, and click the Apply button on the Summary window that pops up. Once the installation is done, click the Close button and close the Synaptic Package Manager window. Open a terminal window by clicking on Applications in the top-left of the screen, then Accessories > Terminal. You need to figure out the correct hard drive to wipe. If you wipe the wrong hard drive, that data will not be recoverable, so exercise caution! In the terminal window, type in: sudo fdisk -l A list of your hard drives will show up. A few factors will help you identify the right hard drive. One is the file system, found in the System column of the list – Windows hard drives are usually formatted as NTFS (which shows up as HPFS/NTFS). Another good identifier is the size of the hard drive, which appears after its identifier (highlighted in the following screenshot). In our case, the hard drive we want to wipe is only around 1 GB large, and is formatted as NTFS. We make a note of the label found under the Device column heading. If you have multiple partitions on this hard drive, then there will be more than one device in this list. The wipe developers recommend wiping each partition separately. To start the wiping process, type the following into the terminal: sudo wipe <device label> In our case, this is: sudo wipe /dev/sda1 Again, exercise caution – this is the point of no return! Your hard drive will be completely wiped. It may take some time to complete, depending on the size of the drive you’re wiping. Conclusion If you have sensitive information on your hard drive – and chances are you probably do – then it’s a good idea to securely delete sensitive files before you give away or dispose of your hard drive. The most secure way to delete your data is with a few swings of a hammer, but shred and wipe from a Ubuntu Live CD is a good alternative!
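As a quick recap, the whole workflow above boils down to a few terminal commands. BankInfo.txt and /dev/sda1 are the article's examples, and installing wipe from the command line instead of Synaptic assumes the universe repository is already enabled; double-check the device name with fdisk first, because wiping is irreversible.

# Securely overwrite a single file 50 times and then delete it
shred --remove --iterations=50 BankInfo.txt

# Install wipe and list the attached drives to identify the target partition
sudo apt-get install wipe
sudo fdisk -l

# Wipe the chosen partition - the point of no return
sudo wipe /dev/sda1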

    Read the article

  • Sending Messages to SignalR Hubs from the Outside

    - by Ricardo Peres
    Introduction You are by now probably familiarized with SignalR, Microsoft’s API for real-time web functionality. This is, in my opinion, one of the greatest products Microsoft has released in recent time. Usually, people login to a site and enter some page which is connected to a SignalR hub. Then they can send and receive messages – not just text messages, mind you – to other users in the same hub. Also, the server can also take the initiative to send messages to all or a specified subset of users on its own, this is known as server push. The normal flow is pretty straightforward, Microsoft has done a great job with the API, it’s clean and quite simple to use. And for the latter – the server taking the initiative – it’s also quite simple, just involves a little more work. The Problem The API for sending messages can be achieved from inside a hub – an instance of the Hub class – which is something that we don’t have if we are the server and we want to send a message to some user or group of users: the Hub instance is only instantiated in response to a client message. The Solution It is possible to acquire a hub’s context from outside of an actual Hub instance, by calling GlobalHost.ConnectionManager.GetHubContext<T>(). This API allows us to: Broadcast messages to all connected clients (possibly excluding some); Send messages to a specific client; Send messages to a group of clients. So, we have groups and clients, each is identified by a string. Client strings are called connection ids and group names are free-form, given by us. The problem with client strings is, we do not know how these map to actual users. One way to achieve this mapping is by overriding the Hub’s OnConnected and OnDisconnected methods and managing the association there. Here’s an example: 1: public class MyHub : Hub 2: { 3: private static readonly IDictionary<String, ISet<String>> users = new ConcurrentDictionary<String, ISet<String>>(); 4:  5: public static IEnumerable<String> GetUserConnections(String username) 6: { 7: ISet<String> connections; 8:  9: users.TryGetValue(username, out connections); 10:  11: return (connections ?? Enumerable.Empty<String>()); 12: } 13:  14: private static void AddUser(String username, String connectionId) 15: { 16: ISet<String> connections; 17:  18: if (users.TryGetValue(username, out connections) == false) 19: { 20: connections = users[username] = new HashSet<String>(); 21: } 22:  23: connections.Add(connectionId); 24: } 25:  26: private static void RemoveUser(String username, String connectionId) 27: { 28: users[username].Remove(connectionId); 29: } 30:  31: public override Task OnConnected() 32: { 33: AddUser(this.Context.Request.User.Identity.Name, this.Context.ConnectionId); 34: return (base.OnConnected()); 35: } 36:  37: public override Task OnDisconnected() 38: { 39: RemoveUser(this.Context.Request.User.Identity.Name, this.Context.ConnectionId); 40: return (base.OnDisconnected()); 41: } 42: } As you can see, I am using a static field to store the mapping between a user and its possibly many connections – for example, multiple open browser tabs or even multiple browsers accessing the same page with the same login credentials. The user identity, as is normal in .NET, is obtained from the IPrincipal which in SignalR hubs case is stored in Context.Request.User. Of course, this property will only have a meaningful value if we enforce authentication. 
Another way to go is by creating a group for each user that connects: 1: public class MyHub : Hub 2: { 3: public override Task OnConnected() 4: { 5: this.Groups.Add(this.Context.ConnectionId, this.Context.Request.User.Identity.Name); 6: return (base.OnConnected()); 7: } 8:  9: public override Task OnDisconnected() 10: { 11: this.Groups.Remove(this.Context.ConnectionId, this.Context.Request.User.Identity.Name); 12: return (base.OnDisconnected()); 13: } 14: } In this case, we will have a one-to-one equivalence between users and groups. All connections belonging to the same user will fall in the same group. So, if we want to send messages to a user from outside an instance of the Hub class, we can do something like this, for the first option – user mappings stored in a static field: 1: public void SendUserMessage(String username, String message) 2: { 3: var context = GlobalHost.ConnectionManager.GetHubContext<MyHub>(); 4: 5: foreach (String connectionId in HelloHub.GetUserConnections(username)) 6: { 7: context.Clients.Client(connectionId).sendUserMessage(message); 8: } 9: } And for using groups, its even simpler: 1: public void SendUserMessage(String username, String message) 2: { 3: var context = GlobalHost.ConnectionManager.GetHubContext<MyHub>(); 4:  5: context.Clients.Group(username).sendUserMessage(message); 6: } Using groups has the advantage that the IHubContext interface returned from GetHubContext has direct support for groups, no need to send messages to individual connections. Of course, you can wrap both mapping options in a common API, perhaps exposed through IoC. One example of its interface might be: 1: public interface IUserToConnectionMappingService 2: { 3: //associate and dissociate connections to users 4:  5: void AddUserConnection(String username, String connectionId); 6:  7: void RemoveUserConnection(String username, String connectionId); 8: } SignalR has built-in dependency resolution, by means of the static GlobalHost.DependencyResolver property: 1: //for using groups (in the Global class) 2: GlobalHost.DependencyResolver.Register(typeof(IUserToConnectionMappingService), () => new GroupsMappingService()); 3:  4: //for using a static field (in the Global class) 5: GlobalHost.DependencyResolver.Register(typeof(IUserToConnectionMappingService), () => new StaticMappingService()); 6:  7: //retrieving the current service (in the Hub class) 8: var mapping = GlobalHost.DependencyResolver.Resolve<IUserToConnectionMappingService>(); Now all you have to do is implement GroupsMappingService and StaticMappingService with the code I shown here and change SendUserMessage method to rely in the dependency resolver for the actual implementation. Stay tuned for more SignalR posts!
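The post ends by saying SendUserMessage should be changed to rely on the dependency resolver, without showing the result, so here is one minimal sketch. The SendToUser method is an assumed extension of the interface above, not part of the original code; it lets each mapping implementation decide how a username is routed (a group name for the group-based service, individual connection ids for the static-field one).

public interface IUserToConnectionMappingService
{
    void AddUserConnection(String username, String connectionId);

    void RemoveUserConnection(String username, String connectionId);

    // Assumed addition for this sketch: deliver a message to every connection of a user.
    void SendToUser(IHubContext context, String username, String message);
}

// Group-per-user variant: membership is maintained by the hub's OnConnected/OnDisconnected,
// so delivery is a single Group() call.
public class GroupsMappingService : IUserToConnectionMappingService
{
    public void AddUserConnection(String username, String connectionId) { }

    public void RemoveUserConnection(String username, String connectionId) { }

    public void SendToUser(IHubContext context, String username, String message)
    {
        context.Clients.Group(username).sendUserMessage(message);
    }
}

public void SendUserMessage(String username, String message)
{
    var context = GlobalHost.ConnectionManager.GetHubContext<MyHub>();
    var mapping = GlobalHost.DependencyResolver.Resolve<IUserToConnectionMappingService>();

    mapping.SendToUser(context, username, message);
}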

    Read the article

  • How to Manage Your Movies in Boxee

    - by DigitalGeekery
    Boxee is a free cross-platform HTPC application that plays media locally and via the Internet. Today we’ll take a look at how to manage your local movie collection in Boxee. Note: We are using the most recent version of Boxee running on Windows 7. Your experience on an earlier version or a Mac or Linux build may vary slightly. If you are using an earlier version of Boxee, we recommend you update to the current version (0.9.21.11487). The latest update features significant improvements in file and media identification. Naming your Movie Files Proper file naming is important for Boxee to correctly identify your movie files. Before you get started you may want to take some time to name your files properly. Boxee supports the following naming conventions: Lawrence of Arabia.avi Lawrence.of.Arabia.avi Lawrence of Arabia (1962).avi Lawrence.of.Arabia(1962).avi For multi-part movies, you can use .part or .cd to identify the first and second parts of the movie. Gettysburg.part1.avi Gettysburg.part2.avi If you are unsure of the correct title of the movie, check with IMDB.com. Supported File Types Boxee supports the following video file types and codecs: AVI, MPEG, WMV, ASF, FLV, MKV, MOV, MP4, M4A, AAC, NUT, Ogg, OGM, RealMedia RAM/RM/RV/RA/RMVB, 3gp, VIVO, PVA, NUV, NSV, NSA, FLI, FLC, and DVR-MS (beta support) CDs, DVDs, VCD/SVCD MPEG-1, MPEG-2, MPEG-4 (SP and ASP, including DivX, XviD, 3ivx, DV, H.263), MPEG-4 AVC (aka H.264), HuffYUV, Indeo, MJPEG, RealVideo, QuickTime, Sorenson, WMV, Cinepak Adding Movie Files to Boxee Boxee will automatically scan your default media folders and add any movie files to My Movies. Boxee will attempt to identify the media and check sources on the web to get data like cover art and other metadata. You can add as many sources to Boxee as you like from your local hard drive, external hard drives or from your network. You will need to make sure you have access to shared folders on the networked computer hosting the media you want to share. You can browse for other folders to scan by selecting Scan Media Folders.   You can also add media files by selecting Settings from the Home screen… Then select Media… and then Add Sources. Browse for your directory and select Add source. Next, you’ll need to select the media type and the type of scanning. You can also change the share name if you’d like. When finished, select Add. You should see a quick notification at the top of the screen that the source was added.   Select Scan source to have Boxee begin scanning your media files and attempt to properly identify them. Your movies may not show up instantly in My Movies. It will take Boxee some time to fully scan your sources, especially if you have a large collection. Eventually you should see My Movies begin to populate with cover art and metadata.   You can see the progress and find unidentified files by clicking on the yellow arrow to the left, or navigating to the left with your keyboard or remote and selecting Manage Sources.   Here you can see how many files (if any) Boxee failed to identify. To see which titles are unresolved, select Unidentified Files.   Here you’ll find your unresolved files. Select one of the unidentified files to search for the proper movie information. Next, select the Identify Video icon. Boxee will fill in the title of the file, or you can edit the title yourself in the text box. Click Search. The results of your search will be displayed. Scroll through and select the title that fits your movie. 
Check the details of the film to make sure you have the correct title and select Done.   Fixing Incorrectly Identified Files If you find a movie has been incorrectly identified, you can correct it manually. Select the movie. Then search for the correct movie title from the list and select it. When you’re sure you’ve found the correct movie, click Done. Filtering your Movies You can filter your movie collection by genre, or by whether it has been marked as watched or unwatched. When you’ve finished watching a movie, Boxee will mark it as watched.   You can also manually mark a title as watched.   Boxee also features a wide variety of genres by which you can filter the titles in your library. Playing your Movie When you’re ready to start watching a movie, simply select your title.   From here, you can select the “i” icon to read more information about the movie, add it to your queue, or add a shortcut. Click Local File to begin playing.   Now you’re ready to enjoy your movie. If you don’t have a large movie collection or just need more selection, you may want to check out the Netflix App for Boxee. Looking for a Boxee remote? Check out the iPhone App for Boxee. Links Download Boxee IMDB.com

    Read the article

  • OS X 10.9 Mavericks Kernel Panics out of the box

    - by Kevin
    OS X Kernel panics after a fresh install of OS X 10.9 on a 17" Macbook Pro. Anonymous UUID: D002464D-24B7-C2B5-3D83-1C0B02873B29 Wed Oct 30 11:08:17 2013 panic(cpu 1 caller 0xffffff8006edc19e): Kernel trap at 0xffffff7f88e0a96c, type 14=page fault, registers: CR0: 0x000000008001003b, CR2: 0xffffef7f88e309b8, CR3: 0x0000000009c2d000, CR4: 0x0000000000000660 RAX: 0x0fffffd0c7b30000, RBX: 0xffffef7f88e309b0, RCX: 0x0000000000000001, RDX: 0x000002f384d06471 RSP: 0xffffff80eff03d80, RBP: 0xffffff80eff03e70, RSI: 0x0000031384cfb168, RDI: 0xffffff80e8f05148 R8: 0xffffff801b0f8670, R9: 0x0000000000000005, R10: 0x0000000000004a24, R11: 0x0000000000000202 R12: 0xffffff801938b800, R13: 0x0000000000000005, R14: 0xffffff80e8f05148, R15: 0xffffff7f88e2ee20 RFL: 0x0000000000010006, RIP: 0xffffff7f88e0a96c, CS: 0x0000000000000008, SS: 0x0000000000000010 Fault CR2: 0xffffef7f88e309b8, Error code: 0x0000000000000002, Fault CPU: 0x1 Backtrace (CPU 1), Frame : Return Address 0xffffff80eff03a10 : 0xffffff8006e22f69 0xffffff80eff03a90 : 0xffffff8006edc19e 0xffffff80eff03c60 : 0xffffff8006ef3606 0xffffff80eff03c80 : 0xffffff7f88e0a96c 0xffffff80eff03e70 : 0xffffff7f88e09b89 0xffffff80eff03f30 : 0xffffff8006edda5c 0xffffff80eff03f50 : 0xffffff8006e3757a 0xffffff80eff03f90 : 0xffffff8006e378c8 0xffffff80eff03fb0 : 0xffffff8006ed6aa7 Kernel Extensions in backtrace: com.apple.driver.AppleIntelCPUPowerManagement(216.0)[A6EE4D7B-228E-3A3C-95BA-10ED6F331236]@0xffffff7f88e07000->0xffffff7f88e31fff BSD process name corresponding to current thread: kernel_task Mac OS version: 13A603 Kernel version: Darwin Kernel Version 13.0.0: Thu Sep 19 22:22:27 PDT 2013; root:xnu-2422.1.72~6/RELEASE_X86_64 Kernel UUID: 1D9369E3-D0A5-31B6-8D16-BFFBBB390393 Kernel slide: 0x0000000006c00000 Kernel text base: 0xffffff8006e00000 System model name: MacBookPro5,2 (Mac-F2268EC8) System uptime in nanoseconds: 4634353513870 last loaded kext at 39203945245: com.viscosityvpn.Viscosity.tun 1.0 (addr 0xffffff7f89200000, size 32768) last unloaded kext at 147930318702: com.apple.driver.AppleFileSystemDriver 3.0.1 (addr 0xffffff7f89110000, size 8192) loaded kexts: com.viscosityvpn.Viscosity.tun 1.0 com.viscosityvpn.Viscosity.tap 1.0 com.apple.driver.AudioAUUC 1.60 com.apple.driver.AppleHWSensor 1.9.5d0 com.apple.filesystems.autofs 3.0 com.apple.iokit.IOBluetoothSerialManager 4.2.0f6 com.apple.driver.AGPM 100.14.11 com.apple.driver.AppleMikeyHIDDriver 124 com.apple.driver.AppleHDA 2.5.2fc2 com.apple.iokit.BroadcomBluetoothHostControllerUSBTransport 4.2.0f6 com.apple.GeForceTesla 8.1.8 com.apple.driver.AppleMikeyDriver 2.5.2fc2 com.apple.iokit.IOUserEthernet 1.0.0d1 com.apple.driver.AppleUpstreamUserClient 3.5.13 com.apple.driver.AppleMuxControl 3.4.12 com.apple.driver.ACPI_SMC_PlatformPlugin 1.0.0 com.apple.driver.AppleSMCLMU 2.0.4d1 com.apple.Dont_Steal_Mac_OS_X 7.0.0 com.apple.driver.AppleHWAccess 1 com.apple.driver.AppleMCCSControl 1.1.12 com.apple.driver.AppleLPC 1.7.0 com.apple.driver.SMCMotionSensor 3.0.4d1 com.apple.driver.AppleUSBTCButtons 240.2 com.apple.driver.AppleUSBTCKeyboard 240.2 com.apple.driver.AppleIRController 325.7 com.apple.AppleFSCompression.AppleFSCompressionTypeDataless 1.0.0d1 com.apple.AppleFSCompression.AppleFSCompressionTypeZlib 1.0.0d1 com.apple.BootCache 35 com.apple.iokit.SCSITaskUserClient 3.6.0 com.apple.driver.XsanFilter 404 com.apple.iokit.IOAHCIBlockStorage 2.4.0 com.apple.driver.AppleUSBHub 650.4.4 com.apple.driver.AppleUSBEHCI 650.4.1 com.apple.driver.AppleFWOHCI 4.9.9 com.apple.driver.AirPort.Brcm4331 700.20.22 
com.apple.driver.AppleAHCIPort 2.9.5 com.apple.nvenet 2.0.21 com.apple.driver.AppleUSBOHCI 650.4.1 com.apple.driver.AppleSmartBatteryManager 161.0.0 com.apple.driver.AppleRTC 2.0 com.apple.driver.AppleHPET 1.8 com.apple.driver.AppleACPIButtons 2.0 com.apple.driver.AppleSMBIOS 2.0 com.apple.driver.AppleACPIEC 2.0 com.apple.driver.AppleAPIC 1.7 com.apple.driver.AppleIntelCPUPowerManagementClient 216.0.0 com.apple.nke.applicationfirewall 153 com.apple.security.quarantine 3 com.apple.driver.AppleIntelCPUPowerManagement 216.0.0 com.apple.kext.triggers 1.0 com.apple.iokit.IOSerialFamily 10.0.7 com.apple.AppleGraphicsDeviceControl 3.4.12 com.apple.driver.DspFuncLib 2.5.2fc2 com.apple.vecLib.kext 1.0.0 com.apple.iokit.IOAudioFamily 1.9.4fc11 com.apple.kext.OSvKernDSPLib 1.14 com.apple.iokit.IOBluetoothHostControllerUSBTransport 4.2.0f6 com.apple.iokit.IOSurface 91 com.apple.iokit.IOBluetoothFamily 4.2.0f6 com.apple.nvidia.classic.NVDANV50HalTesla 8.1.8 com.apple.driver.AppleSMBusPCI 1.0.12d1 com.apple.driver.AppleGraphicsControl 3.4.12 com.apple.driver.IOPlatformPluginLegacy 1.0.0 com.apple.driver.AppleBacklightExpert 1.0.4 com.apple.iokit.IOFireWireIP 2.2.5 com.apple.driver.AppleHDAController 2.5.2fc2 com.apple.iokit.IOHDAFamily 2.5.2fc2 com.apple.driver.AppleSMBusController 1.0.11d1 com.apple.nvidia.classic.NVDAResmanTesla 8.1.8 com.apple.driver.IOPlatformPluginFamily 5.5.1d27 com.apple.iokit.IONDRVSupport 2.3.6 com.apple.iokit.IOGraphicsFamily 2.3.6 com.apple.driver.AppleSMC 3.1.6d1 com.apple.driver.AppleUSBMultitouch 240.6 com.apple.iokit.IOUSBHIDDriver 650.4.4 com.apple.driver.AppleUSBMergeNub 650.4.0 com.apple.driver.AppleUSBComposite 650.4.0 com.apple.driver.CoreStorage 380 com.apple.iokit.IOSCSIMultimediaCommandsDevice 3.6.0 com.apple.iokit.IOBDStorageFamily 1.7 com.apple.iokit.IODVDStorageFamily 1.7.1 com.apple.iokit.IOCDStorageFamily 1.7.1 com.apple.iokit.IOAHCISerialATAPI 2.6.0 com.apple.iokit.IOSCSIArchitectureModelFamily 3.6.0 com.apple.iokit.IOUSBUserClient 650.4.4 com.apple.iokit.IOFireWireFamily 4.5.5 com.apple.iokit.IO80211Family 600.34 com.apple.iokit.IOAHCIFamily 2.6.0 com.apple.iokit.IONetworkingFamily 3.2 com.apple.iokit.IOUSBFamily 650.4.4 com.apple.driver.NVSMU 2.2.9 com.apple.driver.AppleEFINVRAM 2.0 com.apple.driver.AppleEFIRuntime 2.0 com.apple.iokit.IOHIDFamily 2.0.0 com.apple.iokit.IOSMBusFamily 1.1 com.apple.security.sandbox 278.10 com.apple.kext.AppleMatch 1.0.0d1 com.apple.security.TMSafetyNet 7 com.apple.driver.AppleKeyStore 2 com.apple.driver.DiskImages 371.1 com.apple.iokit.IOStorageFamily 1.9 com.apple.iokit.IOReportFamily 21 com.apple.driver.AppleFDEKeyStore 28.30 com.apple.driver.AppleACPIPlatform 2.0 com.apple.iokit.IOPCIFamily 2.8 com.apple.iokit.IOACPIFamily 1.4 com.apple.kec.pthread 1 com.apple.kec.corecrypto 1.0 System Profile: Model: MacBookPro5,2, BootROM MBP52.008E.B05, 2 processors, Intel Core 2 Duo, 2.8 GHz, 8 GB, SMC 1.42f4 Graphics: NVIDIA GeForce 9400M, NVIDIA GeForce 9400M, PCI, 256 MB Graphics: NVIDIA GeForce 9600M GT, NVIDIA GeForce 9600M GT, PCIe, 512 MB Memory Module: BANK 0/DIMM0, 4 GB, DDR3, 1333 MHz, 0x04CD, 0x46332D3130363636434C392D344742535100 Memory Module: BANK 1/DIMM0, 4 GB, DDR3, 1333 MHz, 0x04CD, 0x46332D3130363636434C392D344742535100 AirPort: spairport_wireless_card_type_airport_extreme (0x14E4, 0x8D), Broadcom BCM43xx 1.0 (5.106.98.100.22) Bluetooth: Version 4.2.0f6 12982, 3 services, 15 devices, 1 incoming serial ports Network Service: Wi-Fi, AirPort, en1 Serial ATA Device: Samsung SSD 840 Series, 120.03 GB Serial ATA Device: 
MATSHITADVD-R UJ-868 USB Device: Built-in iSight USB Device: BRCM2046 Hub USB Device: Bluetooth USB Host Controller USB Device: Apple Internal Keyboard / Trackpad USB Device: IR Receiver Thunderbolt Bus:

    Read the article

  • Analytic functions – they’re not aggregates

    - by Rob Farley
    SQL 2012 brings us a bunch of new analytic functions, together with enhancements to the OVER clause. People who have known me over the years will remember that I’m a big fan of the OVER clause and the types of things that it brings us when applied to aggregate functions, as well as the ranking functions that it enables. The OVER clause was introduced in SQL Server 2005, and remained frustratingly unchanged until SQL Server 2012. This post is going to look at a particular aspect of the analytic functions though (not the enhancements to the OVER clause). When I give presentations about the analytic functions around Australia as part of the tour of SQL Saturdays (starting in Brisbane this Thursday), and in Chicago next month, I’ll make sure it’s sufficiently well described. But for this post – I’m going to skip that and assume you get it. The analytic functions introduced in SQL 2012 seem to come in pairs – FIRST_VALUE and LAST_VALUE, LAG and LEAD, CUME_DIST and PERCENT_RANK, PERCENTILE_CONT and PERCENTILE_DISC. Perhaps frustratingly, they take slightly different forms as well. The ones I want to look at now are FIRST_VALUE and LAST_VALUE, and PERCENTILE_CONT and PERCENTILE_DISC. The reason I’m pulling these ones out is that they always produce the same result within their partitions (if you’re applying them to the whole partition). Consider the following query: SELECT     YEAR(OrderDate),     FIRST_VALUE(TotalDue)         OVER (PARTITION BY YEAR(OrderDate)               ORDER BY OrderDate, SalesOrderID               RANGE BETWEEN UNBOUNDED PRECEDING                         AND UNBOUNDED FOLLOWING),     LAST_VALUE(TotalDue)         OVER (PARTITION BY YEAR(OrderDate)               ORDER BY OrderDate, SalesOrderID               RANGE BETWEEN UNBOUNDED PRECEDING                         AND UNBOUNDED FOLLOWING),     PERCENTILE_CONT(0.95)         WITHIN GROUP (ORDER BY TotalDue)         OVER (PARTITION BY YEAR(OrderDate)),     PERCENTILE_DISC(0.95)         WITHIN GROUP (ORDER BY TotalDue)         OVER (PARTITION BY YEAR(OrderDate)) FROM Sales.SalesOrderHeader ; This is designed to get the TotalDue for the first order of the year, the last order of the year, and also the 95th percentile, using both the continuous and discrete methods (‘discrete’ means it picks the closest one from the values available – ‘continuous’ means it will happily use something in between, similar to what you would do for a traditional median of four values). I’m sure you can imagine the results – a different value for each field, but within each year, all the rows the same. Notice that I’m not grouping by the year. Nor am I filtering. This query gives us a result for every row in the SalesOrderHeader table – 31465 in this case (using the original AdventureWorks that dates back to the SQL 2005 days). The RANGE BETWEEN bit in FIRST_VALUE and LAST_VALUE is needed to make sure that we’re considering all the rows available. If we don’t specify that, it assumes we only mean “RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW”, which means that LAST_VALUE ends up being the row we’re looking at. At this point you might think about other environments such as Access or Reporting Services, and remember aggregate functions like FIRST. 
We really should be able to do something like: SELECT     YEAR(OrderDate),     FIRST_VALUE(TotalDue)         OVER (PARTITION BY YEAR(OrderDate)               ORDER BY OrderDate, SalesOrderID               RANGE BETWEEN UNBOUNDED PRECEDING                         AND UNBOUNDED FOLLOWING) FROM Sales.SalesOrderHeader GROUP BY YEAR(OrderDate) ; But you can’t. You get that age-old error: Msg 8120, Level 16, State 1, Line 5 Column 'Sales.SalesOrderHeader.OrderDate' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause. Msg 8120, Level 16, State 1, Line 5 Column 'Sales.SalesOrderHeader.SalesOrderID' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause. Hmm. You see, FIRST_VALUE isn’t an aggregate function. None of these analytic functions are. There are too many things involved for SQL to realise that the values produced might be identical within the group. Furthermore, you can’t even surround it in a MAX. Then you get a different error, telling you that you can’t use windowed functions in the context of an aggregate. And so we end up grouping by doing a DISTINCT. SELECT DISTINCT     YEAR(OrderDate),         FIRST_VALUE(TotalDue)              OVER (PARTITION BY YEAR(OrderDate)                   ORDER BY OrderDate, SalesOrderID                   RANGE BETWEEN UNBOUNDED PRECEDING                             AND UNBOUNDED FOLLOWING),         LAST_VALUE(TotalDue)             OVER (PARTITION BY YEAR(OrderDate)                   ORDER BY OrderDate, SalesOrderID                   RANGE BETWEEN UNBOUNDED PRECEDING                             AND UNBOUNDED FOLLOWING),     PERCENTILE_CONT(0.95)          WITHIN GROUP (ORDER BY TotalDue)         OVER (PARTITION BY YEAR(OrderDate)),     PERCENTILE_DISC(0.95)         WITHIN GROUP (ORDER BY TotalDue)         OVER (PARTITION BY YEAR(OrderDate)) FROM Sales.SalesOrderHeader ; I’m sorry. It’s just the way it goes. Hopefully it’ll change in the future, but for now, it’s what you’ll have to do. If we look at the execution plan, we see that it’s incredibly ugly, and actually works out the results of these analytic functions for all 31465 rows, finally performing the distinct operation to convert it into the four rows we get in the results. You might be able to achieve a better plan using things like TOP, or the kind of calculation that I used in http://sqlblog.com/blogs/rob_farley/archive/2011/08/23/t-sql-thoughts-about-the-95th-percentile.aspx (which is how PERCENTILE_CONT works), but it’s definitely convenient to use these functions, and in time, I’m sure we’ll see good improvements in the way that they are implemented. Oh, and this post should be good for fellow SQL Server MVP Nigel Sammy’s T-SQL Tuesday this month.

    Read the article

  • E-Business Suite Technology Sessions at OAUG Collaborate 12

    - by Max Arderius
    Members of our E-Business Suite Applications Technology Group will be at the OAUG Collaborate 12 conference at the Mandalay Bay Convention Center in Las Vegas, Nevada on April 22 to 26, 2012.  Please drop by any of our sessions to hear the latest news and meet up with us. Speaker Sessions Session 9675Planning Your Oracle E-Business Suite Upgrade from Release 11i to 12.1 and BeyondAnne Carlson, Senior Director, Applications Technology Group, OracleSunday, April 22, 2:00 pm - 3:00 pmLocation: Jasmine B Attend this session to hear the latest Oracle E-Business Suite Release 12.1 upgrade planning tips gleaned from customers who have already performed the upgrade. Youll get specific, cross-product advice on how to decide your project's scope, understand the factors that affect your project's duration, develop a robust testing strategy, leverage Oracle Support resources, and more. In a nutshell, this session tells you things you need to know before embarking upon your Release 12.1 upgrade project. Session 9401Minimizing Oracle E-Business Suite Maintenance DowntimesElke Phelps, Principal Product Manager, Applications Technology Group, OracleKevin Hudson, Sr. Director, Applications Technology Group, OracleSunday, April 22, 2:10 pm - 3:10 pmLocation: South Seas EThis session starts with an architecture review of Oracle E-Business Suite fundamentals and then moves to a practical view of the different tools and approaches for downtimes. Topics include patching shortcuts, merging patches, distributing worker processes across multiple servers, running ADPatch in no-interactive mode, staged APPL_TOPs, shared file systems, deferring system-wide database tasks, avoiding resource bottlenecks etc... This session also describes the online patching capabilities coming in Release 12.2. Session 9368Oracle E-Business Suite Technology: Latest Features and RoadmapLisa Parekh, Vice President, Applications Technology Group, Oracle Sunday, April 22, 4:30 pm - 5:30 pmLocation: South Seas EThis session provides an overview of Oracle E-Business Suite technology strategy, the capabilities and associated business benefits of recent releases, as well as a review of the product roadmap. As a cornerstone session for Oracle E-Business Suite technology, come hear about the latest usability enhancements, systems administration and configuration management tools, security-related updates, and tools and options for extending, customizing, and integrating the Oracle E-Business Suite with other applications. Session 10709Oracle E-Business Suite Applications Strategy and General Manager UpdateCliff Godwin, Sr. VP, Application Development, OracleMonday, April 23, 2:30 pm - 3:30 pmLocation: Mandalay Bay DIn this session, hear from Oracle E-Business Suite General Manager Cliff Godwin as he delivers an update on the Oracle E-Business Suite product line. The session covers the value delivered by the current release of Oracle E-Business Suite applications, the momentum, and how Oracle E-Business Suite applications integrate into Oracle’s overall applications strategy. You will come away with an understanding of the value Oracle E-Business Suite applications deliver now and in the future. 
Session 9398How to Reduce TCO Using Oracle Application Management Suite for Oracle E-Business SuiteAngelo Rosado, Principal Product Manager, Applications Technology Group, OracleKenneth Baxter, Principal Product Strategy Manager, Management Pack Fusion Middleware Management, OracleTuesday, April 24, 8:00 am - 9:00 amLocation: Breakers GThis session covers the methods and tools you can use to gain insights into your end users, troubleshoot performance problems, define service-level objectives, and proactively monitor your end-to-end Oracle E-Business Suite environment to meet your availability and performance targets. Come hear how you can manage, diagnose, and monitor the Oracle E-Business Suite environment from a single console by using Oracle Enterprise Manager together with the Oracle Application Management Suite for Oracle E-Business Suite. Session 9370 Coexistence of Oracle E-Business Suite and Oracle Fusion Applications: Platform Perspective Nadia Bendjedou, Senior Director, Product Strategy, Oracle Tuesday, April 24, 2:00 pm - 3:00 pm Location: South Seas E Join us at this session if you are wondering which tools to integrate your data, your processes and your User Interface. Or what tools to customize and extend your screens and reports (OAF, Forms, ADF, Oracle Reports, BI etc....), what tools to secure, protect and manage your Oracle E-Business Suite etc... Or simply if you are looking for a technical roadmap for your Oracle E-Business Suite infrastructure to CO-EXIST with the rest of your enterprise applications including Oracle Fusion Applications. Session 9375 Oracle E-Business Suite Directions: Deployment and System AdministrationMax Arderius, Manager, Applications Development Group, OracleTuesday, April 24, 4:30 pm - 5:30 pmLocation: Breakers GWhat's coming in the next major version of Oracle E-Business Suite 12? This session covers the latest technology stack, including the use of Oracle WebLogic Server and Oracle Database 11g Release 2. Topics include an architectural overview, installation and upgrade options, new configuration options, and new tools for hot-cloning and automated "lights out" cloning. Learn about how online patching will reduce your database patching downtimes to the time it takes to bounce your database server.Session 9369Oracle E-Business Suite Technology Certification Primer and RoadmapSteven Chan, Sr. Director, Applications Technology Group, Oracle Wednesday, April 25, 8:15 am - 9:15 amLocation: South Seas FThis Oracle Development session summarizes the latest certifications and roadmap for the Oracle E-Business Suite technology stack, including database releases/options, Java, Oracle Forms, Oracle Containers for J2EE, desktop OS, browsers, JRE releases, Office/OpenOffice, development and Web authoring tools, user authentication and management, BI, security options, clouds, Oracle VM etc.... It also covers the most-commonly-asked questions about technology stack component support dates and upgrade implications. Session 9407The Latest Oracle E-Business Suite Release User Interface and Usability EnhancementsGustavo Jimenez, Sr. Manager, Applications Technology Group, Oracle Wednesday, April 25, 1:00 pm - 2:00 pmLocation: South Seas GIn this session, developers will get a detailed look at new features designed to enhance usability, offer more capabilities for personalization and extensions, and support the development and use of dashboards and Web services. 
Topics include rich new UI capabilities such as new home page features, Navigator and Favorites pull-down menus, Oracle ADF task flows etc.... In addition, we will cover the personalization/extensibility enhancements, business layer extensions, Oracle ADF integration and much more. Session 9374Best Practices for Oracle E-Business Suite Performance Tuning and Upgrade OptimizationIsam Alyousfi, Senior Director, Applications Performance, OracleUdayan Parvate, Director, Release Engineering, Quality and Release Management, Oracle Thursday, April 26, 8:30 am - 9:30 amLocation: South Seas FThis presentation will offer tips and techniques on tuning all the layers of the Oracle E-Business Suite stack including the various tiers of the Oracle E-Business Suite environment. You will learn about tuning Oracle Forms, Concurrent Manager, Apache, and Oracle Discoverer. Track down memory leaks and other issues on the Java and Java Virtual Machine layers. The session also covers Oracle E-Business Suite product-level tuning, including Oracle Workflow, Oracle Order Management, Oracle Payroll, and other modules.Session 9412 Oracle E-Business Suite 12.1 Desktop Integration: Beyond Oracle Applications Desktop IntegratorGustavo Jimenez, Sr. Manager, Applications Technology Group, OracleThursday, April 26, 8:30 am - 9:30 amLocation: Breakers GThis session describes the new expanded functionality in Oracle Web Applications Desktop Integrator, Oracle Report Manager, and dedicated integrators. You have more options for desktop integration now, not fewer. Topics include an overview of prepackaged solutions for integrating Oracle E-Business Suite with desktop applications such as Microsoft Excel, Word, and Projects. The session also discusses how you can use the Desktop Integration Framework feature to create your own integrators quickly and easily.Session 9533 Upgrading your Customizations to Oracle E-Business Suite Release 12.1Sara Woodhull, Principal Product Manager, Applications Technology Group, Oracle Thursday, April 26, 11:00 am - 12:00 pmLocation: South Seas FHave you personalized Forms or OA Framework screens? Have you used mod_plsql or Applications Express to tailor your Release 11i functionality? Have you extended or customized your Release 11i environment using other tools? This session will help you understand customization scenarios, use cases, tools, and technologies for ensuring that your Oracle E-Business Suite Release 12.1 environment fits your users' needs closely and that any future customizations will be easy to upgrade. Special Interest Groups (SIG) Session 10535OAUG Database SIG- Part IMichael Brown, Colibri Limited Company Sunday, April 22, 3:20 pm - 4:20 pmLocation: South Seas FThis is the annual meeting of the Database SIG at Collaborate. The call for candidates for the chair will be closed at the meeting. Plans include a speaker from Oracle and a presentation on applications performance. The details of the meeting will be posted on http://www.dbsig.com. Guest Presentation: Oracle E-Business Suite Database PerformanceIsam Alyousfi, Senior Director, Applications Performance, Oracle Session 10720OAUG EBS Applications Technology SIG- Part ISrini Chaval, Cummins Monday, April 23, 2:30 pm - 3:30 pmLocation: South Seas F Guest Presentation:Oracle E-Business Suite Technology Certification RoadmapSteven Chan, Sr. 
Director, Applications Technology Group, Oracle Session 10510OAUG EBS Applications Technology SIG- Part IISrini Chaval, CumminsMonday, April 23, 3:45 pm - 4:45 pmLocation: South Seas F Guest Presentation:Oracle E-Business Suite 12.2 Online Patching Kevin Hudson, Sr. Director, Applications Technology Group, Oracle Session 10522 OAUG Upgrade SIG- Part IISandra Vucinic, VLAD Group, Inc. Wednesday, April 25, 3:00 pm - 4:00 pmLocation: South Seas FUpgrade SIG will host a business meeting followed by panel (Q&A) related to EBS Upgrade topics and Oracle presentation. Guest Presentation:Upgrading E-Business Suite Amrita Mehrok, Director, Financials Product Strategy, Oracle Nadia Bendjedou, Senior Director, Product Strategy, Oracle Session 10722OAUG Upgrade SIG- Part IISandra Vucinic, VLAD Group, Inc. Wednesday, April 25, 4:15 pm - 5:15 pmLocation: South Seas FUpgrade SIG will host a business meeting followed by panel (Q&A) related to EBS Upgrade topics and Oracle presentation. Guest Presentation:Tuning the Oracle E-Business Suite Upgrade Isam Alyousfi, Senior Director, Applications Performance, Oracle Panels Session 9360Oracle E-Business Suite Cloning PanelSandra Vucinic, VLAD Group, Inc. Guest Speaker: Max Arderius, Manager, Applications Technology Group, OracleWednesday, April 25, 9:30 am - 10:30 amLocation: South Seas FThis panel will discuss differences between available release 11i, R12 and R12.1 cloning methods. Advantages and disadvantages of each cloning method will be discussed in depth. This panel of experienced database administrators will lead a discussion focusing on the questions such as “which cloning method is best to use in your particular environment”. Attendees will gain practical knowledge, tips and tricks to assist with cloning of Oracle E-Business Suite release 11i, R12 and R12.1 environments. Session 10022Oracle Applications Tuning PanelMark Farnham, Rightsizing, Inc.Guest Speaker: Isam Alyousfi, Senior Director, Applications Performance, OracleThursday, April 26, 09:45 am - 10:45 amLocation: South Seas FThis applications performance panel session, sponsored by the OAUG Database SIG, provides a Q&A forum focused on helping you address your Oracle Applications (Oracle E-Business Suite and Oracle's PeopleSoft Enterprise and Siebel applications) performance- and scalability-related issues. The panel comprises several well-known Oracle Applications performance experts. Topic areas include Oracle Database; the network; and the applications tier, including patching and upgrade performance. For complete listing of all speaker sessions and other activities, please visit the OAUG Collaborate Web Site.

    Read the article

  • CodePlex Daily Summary for Saturday, June 12, 2010

    CodePlex Daily Summary for Saturday, June 12, 2010New ProjectsAdverTool (Advertisement tool): AdverTool is an online tool which integrates the most popular advertisement networks (such as Microsoft adCenter, Google AdWords, Yahoo! Search Mar...Authentication Configuration Tool for SharePoint: Helpful tools to automatically configure SharePoint 2007 and 2010 for forms based authentication and other authentication mechanisms.Bacicworx: A C# .Net 3.5 helper library containing functionality for compression, encryption, hashes, downloading, PayPal API, text analysis and generation, a...BlogEngine.Net iPhone Theme: A port of BETouch originally created by soundbbgBT UPnP Nat Library: This Library makes it extremly simple to add NAT upnp port forwarding to your .net applications. Developed in C# using .Net 4.0CheckBox & CheckBoxList Validators: These validators fill the much needed gap in the Asp.Net Server controlsDataFactories: The DataFactories project was created to provide a standardized interface to SSAS and MSSQL data. However, as it is implemented using the Abstract ...DVD Swarm: Converts unprotected DVD video & audio streams to H.264 with AAC/Vorbis.Frio IM: Frio IM - is cross protocol instant messenger.jiuyuan: jiuyuan management systemMGM: MyGroupManager is a simple graphical interface written in PowerShell that can be deployed to Active Directory users to simplify the managed of grou...MGR2010: This the MA thesis by Witold Stanik & Michał Sereja, PJWSTK.Nauplius.ActiveDirectory: Web-based Active Directory management.Partial rendering control using JQuery: This article show a web custom control that allows partial rendering using JQueryREG - The Random Entertainment Generator: A simple tool to make your mid up when you can't figure out what you want to do!Runes of Magic - Heilerrechner: Heilerrechner für die Heiler von Runes of Magic (www.runes.ofmagic.com)Semagsoft Calculator: Basic calculator for Windows XP, Vista and Windows 7.SO League Tables: SOLT: Stack Overflow League Tables. A fun little app that lets you compare your stack overflow performance for each month, relative to other member...Stacky StackApps .Net Client Library: StackApps is a REST API for which provides access to the stackoverflow.com family of websites. Stacky is a .net client for that API. Stacky current...TwitterDotNet: TwitterDotNet is a TwitterLibrary for .NET Framework.ValiVIN: VIN (Vehicle Identification Number) Validator Validate Vin NumberWorkLogger: Simple work hour logger in WPFNew ReleasesAdverTool (Advertisement tool): Official releases: Please visit http://advertool.org to access the complete source code and downloads.Authentication Configuration Tool for SharePoint: Auth Config Tool (WSS 3.0, MOSS 2007 version): This tool automates the setup of dual authentication web applications in SharePoint that use Windows Authentication and Forms Based Authentication....BlogEngine.Net iPhone Theme: Version 0.1: Original version 0.1 from soundbbgBraintree Client Library: Braintree-2.3.0: Return AvsErrorResponseCode, AvsPostalCodeResponseCode, AvsStreetAddressResponseCode, CurrencyIsoCode, CvvResponseCode with Transaction Return Cr...BT UPnP Nat Library: Bt_Upnp Nat Library Alpha: Alpha Release of the libraryCNZK Library: Silverlight Behaviors - Deep Zoom Tag Filter: Behavior library for Silverlight 4 containing a Deep Zoom Tag Filter Behavior. Sample at the Expression Gallery http://gallery.expression.microsof...Demina: Demina Binaries version 0.2: Updated binaries. 
This release contains all of the new features, including simple animation transitions.DTLoggedExec: 1.0.0.2: -Fixed a bug that prevented loading packages from SSIS Package Store -Added support for {filename} placeholder in both Data Flow Profiling and CSV ...DVD Swarm: v0.8.10.611: Initial release, mostly stable.Exchange 2010 RBAC Editor (RBAC GUI) - updated on 6/11/2010: RBAC Editor 0.9.5.1: now supports creating and editing Role Assignment Policies; rest of the stuff is the same - still a lot of way to go :) Please use email address i...Extend SmallBasic: Teaching Extensions v.021: Compatible with SmallBasic v0.9 Lame version of TicTacToe Added - more coming later.Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.1.1 GA Released: Hi, Today we are releasing Visifire 3.1.1 GA with the following features: * Logarithmic Axis * ShowIndicator() in Chart. * HideIndica...Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.5.4 GA Released: Hi, Today we are releasing Visifire 3.1.1 GA with the following features: Logarithmic Axis ShowIndicator() in Chart. HideIndicator() in Chart...Keep Focused - an enhanced tool for Time Management using Pomodoro Technique: Release 0.3.1 Alpha: Release 0.3.1 Alpha Technical patch. The previous release 0.3 Alpha had some errors and missing features. It was probably not build from the source...Mesopotamia Experiment: Mesopotamia 1.2.96: Bug Fixes - Fixed duplicate cells being added on creating new cells via mutations - Fixed bug where organisms without IO synapses where getting ios...NLog - Advanced .NET Logging: Nightly Build 2010.06.11.001: Changes since the last build:No changes. Unit test results:Passed 243/243 (100%) Passed 243/243 (100%) Passed 267/267 (100%) Passed 269/269 (100%)...Partial rendering control using JQuery: JQuery Web Control V 1.0: This is the first release of the code. It includes the source code and a web application to see how it worksphpxw: Phpxw2.0: 框架目录说明 ./_mod 模块存放目录 ./phpxw/ 框架核心目录 ./phpxw/common/ 框架核心函数 ./phpxw/system/ 框架核心基础类存放目录 ./phpxw/userlib/ 用户继承类存放目录 ./temp...Questionable Content Screensaver: Questionable Content Screensaver: Should be pretty self explanatory, install the appropriate version for your computer (x64 or x86). Features Include Cache comics for offline viewi...Quick Performance Monitor: Version 1.4.1: Added option to change the 'minimum' maximum value visible on the graph at run-time. Also fixed a number of other bugs.Refix - .NET dependency management: Refix v0.1.0.82 ALPHA: This has now been run against a real life project to tease out some of the issues. While this remains alpha software, which you use at your own ris...Rhyduino - Arduino and Managed Code: Beta Release (v0.8.2): ContentsSample Project - Demonstrates basic functionality and is flooded with code comments, so it's capable of being used as a learning tool. It d...Runes of Magic - Heilerrechner: Rom_Heiler_0.1: Erste Version von "RoM Heilerrechner". .Net 4.0 Framework wird vorausgesetzt. 
Das erhälst du hier: http://www.microsoft.com/downloads/details.aspx?...Semagsoft Calculator: 2.0: new theme and bug fix'sSilverlight Reporting: Release 2: Updated to correct issue in report footer xaml, and to add support for a calculated report footer.Stacky StackApps .Net Client Library: Beta Preview: This is a beta preview to go along with the StackApps beta.TwitterDotNet: TwitterDotNet Library: first versionUnOfficial AW Wrapper dot Net: Aw Wrapper 1.0.0.0 (5.0): New Functions :DValiVIN: ValiVIN first release: First Iteration. METHODS: IsValid(string vin) - Checks if a string is a valid VIN (returns true or false) GetCheckSumValue(string vin) - Returns...VCC: Latest build, v2.1.30611.0: Automatic drop of latest buildViewModelSupport: ViewModelSupport 1.0: Version 1.0 More information: http://houseofbilz.net/archives/2010/05/08/adventures-in-mvvm-my-viewmodel-base/ http://houseofbilz.net/archives/201...VolgaTransTelecomClient: v.1.0.3.0: v.1.0.3.0WCF Client Generator: Version 0.9.3.19259: Changed: - Always generate full type names for parameters and return typesWCF Client Generator: Version 0.9.3.21153: Fixed: - Service contracts namespace generation Added: - Templates assembly code base read from configurationXen: Graphics API for XNA: Xen 2.0 ALPHA: This is a very early alpha for Xen 2.0. Please note: The documentation for this alpha has not been updated yet. Xen 2.0 is not backwards compatib...ZGuideTV.NET: ZGuideTV.NET 0.93: Vendredi 11 avril 2010 (ZGuideTV.NET bêta 9 build 0.93) - English below Ajout : - Classement du contenu dans la description (affichage légende si...Most Popular ProjectsCAML GeneratorSharePoint Geographic Data VisualizerDbIdiom for ADO.NET CorestudyDTSRun Job RunnerXBStudio.asp.net.automationSilverlight load on demand with MEFCloud Business ServicesSharePoint 2010 Taxonomy Import UtilitySTS Federation Metadata EditorMost Active ProjectsRhyduino - Arduino and Managed Codepatterns & practices – Enterprise LibraryjQuery Library for SharePoint Web ServicesNB_Store - Free DotNetNuke Ecommerce Catalog ModuleCommunity Forums NNTP bridgeCassandraemonBlogEngine.NETMediaCoder.NETMicrosoft Silverlight Media FrameworkAndrew's XNA Helpers

    Read the article

  • How can I get penetration depth from Minkowski Portal Refinement / Xenocollide?

    - by Raven Dreamer
    I recently got an implementation of Minkowski Portal Refinement (MPR) successfully detecting collision. Even better, my implementation returns a good estimate (local minimum) direction for the minimum penetration depth. So I took a stab at adjusting the algorithm to return the penetration depth in an arbitrary direction, and was modestly successful - my altered method works splendidly for face-edge collision resolution! What it doesn't currently do, is correctly provide the minimum penetration depth for edge-edge scenarios, such as the case on the right: What I perceive to be happening, is that my current method returns the minimum penetration depth to the nearest vertex - which works fine when the collision is actually occurring on the plane of that vertex, but not when the collision happens along an edge. Is there a way I can alter my method to return the penetration depth to the point of collision, rather than the nearest vertex? Here's the method that's supposed to return the minimum penetration distance along a specific direction: public static Vector3 CalcMinDistance(List<Vector3> shape1, List<Vector3> shape2, Vector3 dir) { //holding variables Vector3 n = Vector3.zero; Vector3 swap = Vector3.zero; // v0 = center of Minkowski sum v0 = Vector3.zero; // Avoid case where centers overlap -- any direction is fine in this case //if (v0 == Vector3.zero) return Vector3.zero; //always pass in a valid direction. // v1 = support in direction of origin n = -dir; //get the differnce of the minkowski sum Vector3 v11 = GetSupport(shape1, -n); Vector3 v12 = GetSupport(shape2, n); v1 = v12 - v11; //if the support point is not in the direction of the origin if (v1.Dot(n) <= 0) { //Debug.Log("Could find no points this direction"); return Vector3.zero; } // v2 - support perpendicular to v1,v0 n = v1.Cross(v0); if (n == Vector3.zero) { //v1 and v0 are parallel, which means //the direction leads directly to an endpoint n = v1 - v0; //shortest distance is just n //Debug.Log("2 point return"); return n; } //get the new support point Vector3 v21 = GetSupport(shape1, -n); Vector3 v22 = GetSupport(shape2, n); v2 = v22 - v21; if (v2.Dot(n) <= 0) { //can't reach the origin in this direction, ergo, no collision //Debug.Log("Could not reach edge?"); return Vector2.zero; } // Determine whether origin is on + or - side of plane (v1,v0,v2) //tests linesegments v0v1 and v0v2 n = (v1 - v0).Cross(v2 - v0); float dist = n.Dot(v0); // If the origin is on the - side of the plane, reverse the direction of the plane if (dist > 0) { //swap the winding order of v1 and v2 swap = v1; v1 = v2; v2 = swap; //swap the winding order of v11 and v12 swap = v12; v12 = v11; v11 = swap; //swap the winding order of v11 and v12 swap = v22; v22 = v21; v21 = swap; //and swap the plane normal n = -n; } /// // Phase One: Identify a portal while (true) { // Obtain the support point in a direction perpendicular to the existing plane // Note: This point is guaranteed to lie off the plane Vector3 v31 = GetSupport(shape1, -n); Vector3 v32 = GetSupport(shape2, n); v3 = v32 - v31; if (v3.Dot(n) <= 0) { //can't enclose the origin within our tetrahedron //Debug.Log("Could not reach edge after portal?"); return Vector3.zero; } // If origin is outside (v1,v0,v3), then eliminate v2 and loop if (v1.Cross(v3).Dot(v0) < 0) { //failed to enclose the origin, adjust points; v2 = v3; v21 = v31; v22 = v32; n = (v1 - v0).Cross(v3 - v0); continue; } // If origin is outside (v3,v0,v2), then eliminate v1 and loop if (v3.Cross(v2).Dot(v0) < 0) { //failed to enclose 
the origin, adjust points; v1 = v3; v11 = v31; v12 = v32; n = (v3 - v0).Cross(v2 - v0); continue; } bool hit = false; /// // Phase Two: Refine the portal int phase2 = 0; // We are now inside of a wedge... while (phase2 < 20) { phase2++; // Compute normal of the wedge face n = (v2 - v1).Cross(v3 - v1); n.Normalize(); // Compute distance from origin to wedge face float d = n.Dot(v1); // If the origin is inside the wedge, we have a hit if (d > 0 ) { //Debug.Log("Do plane test here"); float T = n.Dot(v2) / n.Dot(dir); Vector3 pointInPlane = (dir * T); return pointInPlane; } // Find the support point in the direction of the wedge face Vector3 v41 = GetSupport(shape1, -n); Vector3 v42 = GetSupport(shape2, n); v4 = v42 - v41; float delta = (v4 - v3).Dot(n); float separation = -(v4.Dot(n)); if (delta <= kCollideEpsilon || separation >= 0) { //Debug.Log("Non-convergance detected"); //Debug.Log("Do plane test here"); return Vector3.zero; } // Compute the tetrahedron dividing face (v4,v0,v1) float d1 = v4.Cross(v1).Dot(v0); // Compute the tetrahedron dividing face (v4,v0,v2) float d2 = v4.Cross(v2).Dot(v0); // Compute the tetrahedron dividing face (v4,v0,v3) float d3 = v4.Cross(v3).Dot(v0); if (d1 < 0) { if (d2 < 0) { // Inside d1 & inside d2 ==> eliminate v1 v1 = v4; v11 = v41; v12 = v42; } else { // Inside d1 & outside d2 ==> eliminate v3 v3 = v4; v31 = v41; v32 = v42; } } else { if (d3 < 0) { // Outside d1 & inside d3 ==> eliminate v2 v2 = v4; v21 = v41; v22 = v42; } else { // Outside d1 & outside d3 ==> eliminate v1 v1 = v4; v11 = v41; v12 = v42; } } } return Vector3.zero; } }
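The GetSupport helper is not shown in the question; for anyone trying to reproduce this, here is a minimal sketch of a support function for a convex point cloud. It is my assumption about how it might be implemented, not code from the original post:

    using System.Collections.Generic;
    using UnityEngine;

    public static class SupportFunctions
    {
        // Support mapping for a convex point cloud: returns the vertex of 'shape'
        // that lies furthest along direction 'dir'. MPR (and GJK) only ever query
        // the shapes through this function, so it can only ever return vertices.
        public static Vector3 GetSupport(List<Vector3> shape, Vector3 dir)
        {
            Vector3 best = shape[0];
            float bestDot = Vector3.Dot(best, dir);

            for (int i = 1; i < shape.Count; i++)
            {
                float d = Vector3.Dot(shape[i], dir);
                if (d > bestDot)
                {
                    bestDot = d;
                    best = shape[i];
                }
            }

            return best;
        }
    }

The instance-style calls such as v1.Dot(n) and v1.Cross(v0) in the question presumably come from small extension methods wrapping Unity's static Vector3.Dot and Vector3.Cross.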

    Read the article

  • Quick guide to Oracle IRM 11g: Creating your first sealed document

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g index The previous articles in this guide have detailed how to install, configure and secure your Oracle IRM 11g service. This article walks you through the process of now creating your first context and securing a document against it. I should mention that it would be worth reviewing the following to ensure your installation is ready for that all-important first document. Ensure you have correctly configured the keystore for the IRM wrapper keys. If this is not correctly configured, creating the context below will fail. Make sure the IRM server URL correctly resolves and uses the right protocol (HTTP or HTTPS) Contents Create the first context Install the Oracle IRM Desktop Seal your first document Create the first context In Oracle 11g there is a built-in classification and rights system called the "standard rights model" which is based on 10 years of customer use cases and innovation. It is a system which enables IRM to scale massively whilst retaining the ability to balance security and usability, and also to separate duties by allowing contacts in the business to own classifications. The final article in this guide goes into detail on this inbuilt classification model, but for the purposes of this current article all we need to do is create at least one context to test our system out. With a new IRM server there is a set of predefined context templates and roles which, again, are set up in a way which reflects the most common use we've learned from our customers. We will use these out-of-the-box configurations as they are to create the first context against which we will seal some content. First, log in to your Oracle IRM Management Website located at https://irm.company.com/irm_rights/. Currently the system is only configured to use the built-in LDAP for users, so use the only account we have at the moment, which by default is weblogic. Once logged in, switch to the Contexts tab. Click on the New Context icon () in the menu bar on the left. In the resulting dialog select the Standard context template and enter in a name for the context. Then just hit finish; the weblogic account will automatically be made the manager. You'll now see your brand new context ready for users to be assigned. Now click on the Assign Role icon () in the menu bar and in the resulting dialog search for your only user account, weblogic, and add it to the list on the right. Now select a role for this user. Because we need to create a document with this user we must select contributor, as this is the only role which allows for the ability to seal. Finally hit next and then finish. We now have a context with a user that has the rights to create a document. The next step is to configure the IRM Desktop to get these rights from the server. Install the Oracle IRM Desktop Before we can seal a document we need the client software installed. Oracle IRM has a very small, lightweight client called the Oracle IRM Desktop which can be freely downloaded in 27 languages from here. Double click on the installer and click on next... Next again... And finally on install... Very easy. You may get a warning about closing Outlook, Word or another application and most of the time no reboots are required. Once it is installed you will see the IRM Desktop icon running in your tool tray, at the bottom right of the desktop. Seal your first document Finally the prize is within reach, creating your first sealed document. 
The server is running, we've got a context ready, a user assigned a role in the context, but there is one simple and obvious hoop left to jump through. To seal a document we need to have the user's rights cached to the local machine. For this to take place, the IRM Desktop needs to know where the Oracle IRM server is on the network so we can synchronize these rights and then be able to seal a document. The usual way for the IRM Desktop to know about the IRM server is that it learns about it automatically when you open an existing piece of content that someone has sent you... ack. Bit of a chicken-and-egg dilemma. The solution is to manually tell the IRM Desktop the location of the IRM Server and then force a synchronization of rights. Right click on the Oracle IRM Desktop icon in the system tray and select Options.... Then switch to the Servers tab in the resulting dialog. There are no servers in the list because you've never opened any content. This list is usually populated automatically, but we are going to add a server manually, so click on New.... Into the dialog enter in the full URL to the IRM server. Note that this time you use the path /irm_desktop/ and not /irm_rights/. You can see an example from the image below. Click on the validate button and you'll be asked to authenticate. Enter in your weblogic username and password and also check the Remember my password check box. Click OK and the IRM Desktop will confirm a successful connection to the server. OK all the dialogs and we are ready to Synchronize this user's rights to the desktop. Right click once more on the Oracle IRM Desktop icon in the system tray. Now the Synchronize menu option is available. Select this and the IRM Desktop will now talk to the IRM server, authenticate using your weblogic account and get your rights to the context we created. Because this is the first time this user has communicated with the IRM server, the IRM Desktop presents a privacy policy dialog. This is a chance for the business to ask users to agree to any policy about the use of IRM before opening secured documents. In our guide we've not bothered to set up this URL so just click on the check box and hit Accept. The IRM Desktop will then talk to the server, get your rights and display a success dialog. Let's protect a document Now we are ready to seal a piece of content. In my guide I'm going to protect a Microsoft Word document. This means I have to have a copy of Office installed; in this guide I'm using Microsoft Office 2007. You could also seal a PDF document; for that you'll need to download and install Adobe Acrobat Reader. A very simple test could be to seal a GIF/JPG/PNG or piece of HTML because this is rendered using Internet Explorer. But as I say, I'm going to protect a Word document. The following example demonstrates choosing a file in Windows Explorer; there are many ways to seal a file and you can watch a few in this video. Open a copy of Windows Explorer and locate the file you wish to seal. Right click on the document and select Seal To -> Context You are now presented with the Select Context dialog. You'll now have a sealed copy of the document sat in the same location. Double click on this document and it will open, again using the credentials you've already provided. That is it, now you just need to add more users, more documents, more classifications and start exploring the different roles and experiment with different offline periods etc. 
You may wish to set up the server against an existing LDAP or Active Directory environment instead of using the built-in WebLogic LDAP store. You can read how to use your corporate directory here. But before we finish this guide, there is one more article, and arguably the most important article of all. Next I discuss the all-important decision making surrounding the actual implementation of Oracle IRM inside your business. Who has rights to what? How do you map contexts to your existing business practices? It is the next article which actually ensures you deploy a successful IRM solution, by looking at the business and understanding how they use your sensitive information and then configuring Oracle IRM to reflect their use.

    Read the article

  • ASP.NET and WIF: Showing custom profile username as User.Identity.Name

    - by DigiMortal
    I am building an ASP.NET MVC application that uses external services to authenticate users. For ASP.NET, users are fully authenticated when they are redirected back from the external service. In the system, they are logically authenticated once they have created a user profile. In this posting I will show you how to make User.Identity.Name return the username from such a custom user profile. Using external authentication sources with AppFabric Suppose you want to be user-friendly and you don’t want to force users to keep in mind another username/password when they visit your site. You can accept logins from different popular sites like Windows Live, Facebook, Yahoo, Google and many more. If a user has an account with one of these services then he or she can use that account to log in to your site. If you have a community site then you usually have support for user profiles too. Some of these providers give you some information about users and others don’t. So the only thing in common you get from all those providers is some unique ID that identifies the user uniquely within that service. The image above shows how a new user joins your site. Existing users who already have a profile are directed to the user homepage after they are authenticated. You can read more about how to solve the semi-authorized users problem in my blog posting ASP.NET MVC: Using ProfileRequiredAttribute to restrict access to pages. The other problem is related to usernames, which we don’t get from all identity providers. Why is IIdentity.Name sometimes empty? The problem is described more specifically in my blog posting Identifying AppFabric Access Control Service users uniquely. In short, the problem is that not all providers supply a claim called http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name. The following diagram illustrates what happens when a user gets a token from AppFabric ACS and is redirected to your site. Now, when the user was authenticated using Windows Live ID, we don’t have a name claim in the token and that’s why User.Identity.Name is empty. Okay, we can force the nameidentifier claim to be used as the name (we can do it in the web.config file), but we have user profiles and we want the username from the profile to be shown whenever a username is asked for. Modifying the name claim Now let’s force IClaimsIdentity to use the username from our user profiles. You can read more about my profiles topic in my blog posting ASP.NET MVC: Using ProfileRequiredAttribute to restrict access to pages, and you can find some useful extension methods for claims identity in my blog posting Identifying AppFabric Access Control Service users uniquely. Here is what we do to set User.Identity.Name: we check if the user has a profile; if so, we check whether User.Identity.Name matches the name given by the profile; if the names do not match then the identity provider probably returned some other name for the user, so we remove the name claim and recreate it with the correct username; finally, we add the new name claim to the claims collection. All this happens in the Application_AuthorizeRequest event of our web application. The code is here. 
protected void Application_AuthorizeRequest()
{
    if (string.IsNullOrEmpty(User.Identity.Name))
    {
        var identity = User.Identity;
        var profile = identity.GetProfile();
        if (profile != null)
        {
            if (profile.UserName != identity.Name)
            {
                // Drop whatever name claim the identity provider supplied
                // and replace it with the username from the profile.
                identity.RemoveName();

                var claim = new Claim("http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name", profile.UserName);
                var claimsIdentity = (IClaimsIdentity)identity;
                claimsIdentity.Claims.Add(claim);
            }
        }
    }
}

The RemoveName extension method is simple – it looks for name claims in the claims collection of the IClaimsIdentity and removes them.

public static void RemoveName(this IIdentity identity)
{
    if (identity == null)
        return;

    var claimsIdentity = identity as ClaimsIdentity;
    if (claimsIdentity == null)
        return;

    // Walk backwards so RemoveAt does not shift the claims we still have to inspect.
    for (var i = claimsIdentity.Claims.Count - 1; i >= 0; i--)
    {
        var claim = claimsIdentity.Claims[i];
        if (claim.ClaimType == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name")
            claimsIdentity.Claims.RemoveAt(i);
    }
}

And we are done. Now User.Identity.Name returns the username from the user profile and you can use it to show the username of the current user everywhere in your site.
Conclusion
Mixing AppFabric Access Control Service and Windows Identity Foundation with custom authorization logic is not impossible, but it is a little bit tricky. This posting finishes my little series about AppFabric ACS and WIF for now, and hopefully you found some useful tricks, tips, hacks and code pieces you can use in your own applications.
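The GetProfile() extension method used above comes from the author's earlier profile posting and is not reproduced here. As a rough idea of what such a helper could look like — the UserProfile type, the ProfileRepository and the nameidentifier lookup are all hypothetical illustrations, not the author's actual code — a minimal sketch might be:

using System.Security.Principal;
using Microsoft.IdentityModel.Claims;

public class UserProfile                       // hypothetical profile entity
{
    public string UserName { get; set; }
}

public static class ProfileRepository          // hypothetical lookup against your profile store
{
    public static UserProfile FindByExternalId(string externalId)
    {
        // Replace with a real database or membership lookup.
        return null;
    }
}

public static class IdentityProfileExtensions
{
    private const string NameIdentifierClaimType =
        "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier";

    // Look the profile up by the unique ID the identity provider supplied,
    // because the name claim may be missing for providers such as Windows Live ID.
    public static UserProfile GetProfile(this IIdentity identity)
    {
        var claimsIdentity = identity as IClaimsIdentity;
        if (claimsIdentity == null)
            return null;

        foreach (var claim in claimsIdentity.Claims)
        {
            if (claim.ClaimType == NameIdentifierClaimType)
                return ProfileRepository.FindByExternalId(claim.Value);
        }

        return null;
    }
}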

    Read the article

  • Scott Guthrie in Glasgow

    - by Martin Hinshelwood
Last week Scott Guthrie was in Glasgow for his new Guathon tour, which was a roaring success. Scott did talks on the new features in Visual Studio 2010, Silverlight 4, ASP.NET MVC 2 and Windows Phone 7. Scott talked from 10am till 4pm, so this can only contain what I remember, and I am sure lots of things he discussed went in one ear and out the other; however, I have tried to capture at least all of my oohs and aahs.
Visual Studio 2010
Right now you can download and install the Visual Studio 2010 Release Candidate, but soon we will have the final product in our hands. With it come some amazing improvements, and not just in the IDE. New versions of VB and C# come out of the box, as well as Silverlight 4 and SharePoint 2010 integration. The new IntelliSense features allow inline support for Types and Dictionaries, as well as being able to type just part of a name and have the list filter accordingly. Even better, and my personal favourite, is one that Scott did not mention: it is not case sensitive, so I can actually find things in C# despite its reasonless case sensitivity (Scott, can we please have an option to turn that off?). Another nice addition is that the routing engine created for ASP.NET MVC is now available for WebForms, which is good news for all those who just imported the MVC DLLs to get at it anyway. Another fantastic feature that will need some exploring is the ability to add validation rules to your entities and have them validated automatically on the front end. This removes the need to add your own validators and means that you can control an object's validation rules from a single location: the object. A simple call, "GridView.EnableDynamicData(gettype(product))", will enable this feature on controls (a rough C# sketch appears below, after the Silverlight notes). What was not clear was whether there would be support for this in WPF and WinForms as well. If there is, we can write our validation rules once and use them everywhere. I was disappointed to hear that there would be no inbuilt support for the Dynamic Language Runtime (DLR) in VS2010, but I think it will be there for .vNext. Because I have been concentrating on the Visual Studio ALM enhancements to VS2010, I found this section invaluable as I now know at least some of what I missed.
Silverlight 4
I am not a big fan of Silverlight. There, I said it, and I will probably get lynched for it. My big problem with Silverlight is that most of the really useful things I learned from WPF do not work. I am only going to mention one thing, and that is "x:Type". If you are a WPF developer you will know how much power these six little letters provide; the ability to target templates at object types is the most magical and useful. But, and this is a massive but, if you are developing applications that MUST run on platforms other than Windows, then Silverlight is your only choice (well, that and Flash, but let's just not go there). And Silverlight has a huge install base as well: 60% of all internet-connected devices have Silverlight. Can Adobe say that? Even though I am not a fan of it, my current project is a Silverlight one. If you start your XAML experience with Silverlight you will not be disappointed, and neither will the users of the applications you build. Scott showed us a fantastic application called "Silverface" that is a Silverlight 4 Out of Browser application. I have looked for a link and can't find one, but true to form, here is a fantastic WPF version called Fish Bowl from Microsoft.
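Picking up the dynamic data validation mentioned in the Visual Studio 2010 notes above: the talk showed the VB call GridView.EnableDynamicData(gettype(product)). The following is a rough C# sketch of the same idea, assuming a project reference to the Dynamic Data assemblies; the Product type, its properties and the page layout are illustrative placeholders, not anything shown in the talk:

using System;
using System.ComponentModel.DataAnnotations;
using System.Web.UI;
using System.Web.UI.WebControls;

// Validation rules live on the entity itself, so they are written once
// and picked up wherever dynamic data is enabled.
public class Product
{
    [Required, StringLength(50)]
    public string Name { get; set; }

    [Range(0, 10000)]
    public decimal Price { get; set; }
}

public class ProductsPage : Page
{
    private readonly GridView productsGrid = new GridView();

    protected override void OnInit(EventArgs e)
    {
        base.OnInit(e);
        Controls.Add(productsGrid);

        // Wires the Dynamic Data field templates and the attribute metadata on
        // Product into the grid, so the front end validates without hand-written validators.
        productsGrid.EnableDynamicData(typeof(Product));
    }
}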
ASP.NET MVC 2
ASP.NET MVC is something I have played with but never used in anger. It is definitely the way forward, but WebForms is not dead yet; there are still circumstances where WebForms is better. If you are starting from a greenfield and you are using TDD, then MVC is ultimately the only way you can go. New in version 2 are Dynamic Scaffolding helpers that let you control how data is presented in the UI from the entities. Adding validation rules and other options that make sense there can help improve the overall ease of developing the UI. Also, the Microsoft team have heard the cries for help from the larger site builders and provided "Areas", which allow a level of categorisation for your Controllers and Views. These work just like add-ins and have their own folder, but also have sub-Controllers and Views. Areas are totally pluggable and can be dropped onto existing sites, giving the ability to have boxed products in MVC, although what you do with all of those views is anyone's guess. They have been listening to everyone again with the new option to encapsulate UI using Html.Action or Html.RenderAction (a small hypothetical sketch appears at the end of this post). This uses the existing .ascx functionality in ASP.NET to render partial views to the screen in certain areas. While this was possible before, it makes the method official, thereby opening it up to the masses and making it a standard. At the end of the session Scott pulled out some IIS goodies, including the IIS SEO Toolkit, which can be used to verify that your own site is "good" for search engine consumption. Better yet, he suggested that you run it against your friends' sites and shame them with how bad they are. Note: make sure you have fixed yours first.
Windows Phone 7 Series
I had already seen the new UI for WP7 and heard about the developer story, but Scott brought that home by building a Twitter application in about 20 minutes using the emulator. Scott's only mistake was loading @plip's tweets into the app… And guess what, it was written in Silverlight. When Windows Phone 7 launches you will be able to use about 90% of the codebase of your existing Silverlight application on the phone! There are two downsides to the new WP7 architecture: no, your existing application WILL NOT work without being converted to either a Silverlight or XNA UI; and no, you will not be able to get your applications onto the phone any other way but through the Marketplace. Do I think these are problems? No, not even slightly. This phone is aimed at consumers who have probably never tried to install an application directly onto a device. There will be support for enterprise apps in the future, but for now enterprises should stay on Windows Phone 6.5.x devices.
Post Event drinks
At the after-event drinks gathering Scott was checking out my HTC HD2 (released to the US this month on T-Mobile) and liked the Windows Phone 6.5.5 build I have on it. We discussed why Microsoft were not going to allow Windows Phone 7 Series onto it, with my understanding being that it had 5 buttons and not 3, while Scott was sure that there was more to it from a hardware standpoint. I think he is right, and although the HTC HD2 has a DX9-compatible processor, it was never built with WP7 in mind. However, as if by magic, Saturday brought fantastic news for all those that have already bought an HD2: yes, this appears to be Windows Phone 7 running on an HTC HD2. The HD2 itself won't be getting an official upgrade to Windows Phone 7 Series, so all eyes are on the ROM chefs at the moment.
The rather massive photos have been posted by Tom Codon on HTCPedia and they've apparently got WiFi, GPS, Bluetooth and other bits working. The ROM isn't online yet but according to the post there's a beta version coming soon. Leigh Geary - http://www.coolsmartphone.com/news5648.html  What was Scott working on on his flight back to the US?   Technorati Tags: VS2010,MVC2,WP7S,WP7 Follow: @CAMURPHY, @ColinMackay, @plip and of course @ScottGu
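As promised in the ASP.NET MVC 2 notes above, here is a minimal hypothetical sketch of the Html.Action / Html.RenderAction style of UI encapsulation; the controller, action, view name and data are placeholders of my own, not code from the talk:

using System.Linq;
using System.Web.Mvc;

public class NewsController : Controller
{
    // A child action can only be invoked from a view, via Html.Action(...)
    // or Html.RenderAction(...), never from a browser URL directly.
    [ChildActionOnly]
    public ActionResult LatestHeadlines(int count)
    {
        var headlines = new[] { "Headline one", "Headline two", "Headline three" }; // placeholder data
        return PartialView("_Headlines", headlines.Take(count).ToArray());          // renders an .ascx partial in MVC 2
    }
}

// In any .aspx or .ascx view (the WebForms view engine used by MVC 2):
//   <% Html.RenderAction("LatestHeadlines", "News", new { count = 3 }); %>
// or, to capture the markup as a string:
//   <%= Html.Action("LatestHeadlines", "News", new { count = 3 }) %>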

    Read the article

  • Setting up a VPN connection to Amazon VPC - routing

    - by Keeno
I am having some real issues setting up a VPN between our office and the AWS VPC. The "tunnels" appear to be up, however I don't know if they are configured correctly. The device I am using is a Netgear VPN Firewall (FVS336GV2). As you can see in the attached config downloaded from VPC (#3 Tunnel Interface Configuration), it gives me some "inside" addresses for the tunnel. When setting up the IPsec tunnels, do I use the inside tunnel IPs (e.g. 169.254.254.2/30) or do I use my internal network subnet (10.1.1.0/24)? I have tried both: when I tried the local network (10.1.1.x) the tracert stops at the router; when I tried the "inside" IPs, the tracert to the Amazon VPC (10.0.0.x) goes out over the internet. This all leads me to the next question: for this router, how do I set up stage #4, the static next hop? What are these seemingly random "inside" addresses and where did Amazon generate them from? 169.254.254.x seems odd. With a device like this, is the VPN behind the firewall? I have tweaked the IP addresses below so that they are not "real". I am fully aware this is probably badly worded. If there is any further info or screenshots that would help, please let me know.
Amazon Web Services Virtual Private Cloud
IPSec Tunnel #1
================================================================================
#1: Internet Key Exchange Configuration
Configure the IKE SA as follows:
- Authentication Method : Pre-Shared Key
- Pre-Shared Key : ---
- Authentication Algorithm : sha1
- Encryption Algorithm : aes-128-cbc
- Lifetime : 28800 seconds
- Phase 1 Negotiation Mode : main
- Perfect Forward Secrecy : Diffie-Hellman Group 2
#2: IPSec Configuration
Configure the IPSec SA as follows:
- Protocol : esp
- Authentication Algorithm : hmac-sha1-96
- Encryption Algorithm : aes-128-cbc
- Lifetime : 3600 seconds
- Mode : tunnel
- Perfect Forward Secrecy : Diffie-Hellman Group 2
IPSec Dead Peer Detection (DPD) will be enabled on the AWS Endpoint. We recommend configuring DPD on your endpoint as follows:
- DPD Interval : 10
- DPD Retries : 3
IPSec ESP (Encapsulating Security Payload) inserts additional headers to transmit packets. These headers require additional space, which reduces the amount of space available to transmit application data. To limit the impact of this behavior, we recommend the following configuration on your Customer Gateway:
- TCP MSS Adjustment : 1387 bytes
- Clear Don't Fragment Bit : enabled
- Fragmentation : Before encryption
#3: Tunnel Interface Configuration
Your Customer Gateway must be configured with a tunnel interface that is associated with the IPSec tunnel. All traffic transmitted to the tunnel interface is encrypted and transmitted to the Virtual Private Gateway. The Customer Gateway and Virtual Private Gateway each have two addresses that relate to this IPSec tunnel. Each contains an outside address, upon which encrypted traffic is exchanged. Each also contains an inside address associated with the tunnel interface. The Customer Gateway outside IP address was provided when the Customer Gateway was created. Changing the IP address requires the creation of a new Customer Gateway. The Customer Gateway inside IP address should be configured on your tunnel interface.
Outside IP Addresses:
- Customer Gateway : 217.33.22.33
- Virtual Private Gateway : 87.222.33.42
Inside IP Addresses:
- Customer Gateway : 169.254.254.2/30
- Virtual Private Gateway : 169.254.254.1/30
Configure your tunnel to fragment at the optimal size:
- Tunnel interface MTU : 1436 bytes
#4: Static Routing Configuration
To route traffic between your internal network and your VPC, you will need a static route added to your router.
Static Route Configuration Options:
- Next hop : 169.254.254.1
You should add static routes towards your internal network on the VGW. The VGW will then send traffic towards your internal network over the tunnels.
IPSec Tunnel #2
================================================================================
#1: Internet Key Exchange Configuration
Configure the IKE SA as follows:
- Authentication Method : Pre-Shared Key
- Pre-Shared Key : ---
- Authentication Algorithm : sha1
- Encryption Algorithm : aes-128-cbc
- Lifetime : 28800 seconds
- Phase 1 Negotiation Mode : main
- Perfect Forward Secrecy : Diffie-Hellman Group 2
#2: IPSec Configuration
Configure the IPSec SA as follows:
- Protocol : esp
- Authentication Algorithm : hmac-sha1-96
- Encryption Algorithm : aes-128-cbc
- Lifetime : 3600 seconds
- Mode : tunnel
- Perfect Forward Secrecy : Diffie-Hellman Group 2
IPSec Dead Peer Detection (DPD) will be enabled on the AWS Endpoint. We recommend configuring DPD on your endpoint as follows:
- DPD Interval : 10
- DPD Retries : 3
IPSec ESP (Encapsulating Security Payload) inserts additional headers to transmit packets. These headers require additional space, which reduces the amount of space available to transmit application data. To limit the impact of this behavior, we recommend the following configuration on your Customer Gateway:
- TCP MSS Adjustment : 1387 bytes
- Clear Don't Fragment Bit : enabled
- Fragmentation : Before encryption
#3: Tunnel Interface Configuration
Outside IP Addresses:
- Customer Gateway : 217.33.22.33
- Virtual Private Gateway : 87.222.33.46
Inside IP Addresses:
- Customer Gateway : 169.254.254.6/30
- Virtual Private Gateway : 169.254.254.5/30
Configure your tunnel to fragment at the optimal size:
- Tunnel interface MTU : 1436 bytes
#4: Static Routing Configuration
Static Route Configuration Options:
- Next hop : 169.254.254.5
You should add static routes towards your internal network on the VGW. The VGW will then send traffic towards your internal network over the tunnels.
EDIT #1
After writing this post, I continued to fiddle and something started to work, just not very reliably. The local IPs to use when setting up the tunnels were indeed my network subnets, which further confuses me over what these "inside" IP addresses are for. The problem is, the results are not consistent whatsoever. I can "sometimes" ping, I can "sometimes" RDP using the VPN. Sometimes Tunnel 1 or Tunnel 2 can be up or down. When I came back into work today, Tunnel 1 was down, so I deleted it and re-created it from scratch. Now I can't ping anything, but Amazon AND the router are telling me tunnels 1 and 2 are fine. I guess the router/VPN hardware I have just isn't up to the job.
EDIT #2
Now Tunnel 1 is up, Tunnel 2 is down (I didn't change any settings) and I can ping/RDP again.
EDIT #3
Screenshot of the route table that the router has built up. Current state: Tunnel 1 is still up and going strong, Tunnel 2 is still down and won't re-connect.

    Read the article

  • Problem with RAID5 (mdadm) - disk detached

    - by poscaman
    Having these lines in /var/log/syslog Apr 18 16:53:05 Server kernel: [4487878.816036] ata4: EH in SWNCQ mode,QC:qc_active 0x1 sactive 0x1 Apr 18 16:53:05 Server kernel: [4487878.816058] ata4: SWNCQ:qc_active 0x1 defer_bits 0x0 last_issue_tag 0x0 Apr 18 16:53:05 Server kernel: [4487878.816059] dhfis 0x1 dmafis 0x1 sdbfis 0x0 Apr 18 16:53:05 Server kernel: [4487878.816093] ata4: ATA_REG 0x40 ERR_REG 0x0 Apr 18 16:53:05 Server kernel: [4487878.816108] ata4: tag : dhfis dmafis sdbfis sacitve Apr 18 16:53:05 Server kernel: [4487878.816125] ata4: tag 0x0: 1 1 0 1 Apr 18 16:53:05 Server kernel: [4487878.816150] ata4.00: exception Emask 0x0 SAct 0x1 SErr 0x0 action 0x6 frozen Apr 18 16:53:05 Server kernel: [4487878.816178] ata4.00: failed command: WRITE FPDMA QUEUED Apr 18 16:53:05 Server kernel: [4487878.816199] ata4.00: cmd 61/08:00:00:88:e0/00:00:e8:00:00/40 tag 0 ncq 4096 out Apr 18 16:53:05 Server kernel: [4487878.816200] res 40/00:00:01:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout) Apr 18 16:53:05 Server kernel: [4487878.816253] ata4.00: status: { DRDY } Apr 18 16:53:05 Server kernel: [4487878.816272] ata4: hard resetting link Apr 18 16:53:05 Server kernel: [4487878.816274] ata4: nv: skipping hardreset on occupied port Apr 18 16:53:06 Server kernel: [4487879.676029] ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 300) Apr 18 16:53:07 Server kernel: [4487880.416749] ata4.00: n_sectors mismatch 3907029168 != 268435455 Apr 18 16:53:07 Server kernel: [4487880.416752] ata4.00: revalidation failed (errno=-19) Apr 18 16:53:07 Server kernel: [4487880.416773] ata4.00: limiting speed to UDMA/133:PIO2 Apr 18 16:53:11 Server kernel: [4487884.676024] ata4: hard resetting link Apr 18 16:53:11 Server kernel: [4487884.676027] ata4: nv: skipping hardreset on occupied port Apr 18 16:53:12 Server kernel: [4487885.144032] ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 300) Apr 18 16:53:12 Server kernel: [4487885.240185] ata4.00: failed to IDENTIFY (INIT_DEV_PARAMS failed, err_mask=0x80) Apr 18 16:53:12 Server kernel: [4487885.240190] ata4.00: revalidation failed (errno=-5) Apr 18 16:53:12 Server kernel: [4487885.240210] ata4.00: disabled Apr 18 16:53:17 Server kernel: [4487890.144023] ata4: hard resetting link Apr 18 16:53:17 Server kernel: [4487891.024033] ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 300) Apr 18 16:53:17 Server kernel: [4487891.033357] ata4.00: ATA-8: WDC WD20EARS-00S8B1, 80.00A80, max UDMA/133 Apr 18 16:53:17 Server kernel: [4487891.033360] ata4.00: 3907029168 sectors, multi 1: LBA48 NCQ (depth 31/32) Apr 18 16:53:17 Server kernel: [4487891.048347] ata4.00: configured for UDMA/133 Apr 18 16:53:17 Server kernel: [4487891.048361] sd 3:0:0:0: [sdc] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE Apr 18 16:53:17 Server kernel: [4487891.048365] sd 3:0:0:0: [sdc] Sense Key : Aborted Command [current] [descriptor] Apr 18 16:53:17 Server kernel: [4487891.048369] Descriptor sense data with sense descriptors (in hex): Apr 18 16:53:17 Server kernel: [4487891.048371] 72 0b 00 00 00 00 00 0c 00 0a 80 00 00 00 00 00 Apr 18 16:53:17 Server kernel: [4487891.048378] 00 00 00 00 Apr 18 16:53:17 Server kernel: [4487891.048382] sd 3:0:0:0: [sdc] Add. 
Sense: No additional sense information Apr 18 16:53:17 Server kernel: [4487891.048385] sd 3:0:0:0: [sdc] CDB: Write(10): 2a 00 e8 e0 88 00 00 00 08 00 Apr 18 16:53:17 Server kernel: [4487891.048393] end_request: I/O error, dev sdc, sector 3907028992 Apr 18 16:53:17 Server kernel: [4487891.048420] sd 3:0:0:0: rejecting I/O to offline device Apr 18 16:53:17 Server kernel: [4487891.048440] sd 3:0:0:0: rejecting I/O to offline device Apr 18 16:53:17 Server kernel: [4487891.048458] end_request: I/O error, dev sdc, sector 3907028992 Apr 18 16:53:17 Server kernel: [4487891.048477] md: super_written gets error=-5, uptodate=0 Apr 18 16:53:17 Server kernel: [4487891.048482] raid5: Disk failure on sdc, disabling device. Apr 18 16:53:17 Server kernel: [4487891.048483] raid5: Operation continuing on 3 devices. Apr 18 16:53:17 Server kernel: [4487891.048525] ata4: EH complete Apr 18 16:53:17 Server kernel: [4487891.048554] sd 3:0:0:0: rejecting I/O to offline device Apr 18 16:53:17 Server kernel: [4487891.048576] sd 3:0:0:0: rejecting I/O to offline device Apr 18 16:53:17 Server kernel: [4487891.048596] sd 3:0:0:0: rejecting I/O to offline device Apr 18 16:53:17 Server kernel: [4487891.048615] sd 3:0:0:0: [sdc] READ CAPACITY(16) failed Apr 18 16:53:17 Server kernel: [4487891.048617] sd 3:0:0:0: [sdc] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK Apr 18 16:53:17 Server kernel: [4487891.048620] sd 3:0:0:0: [sdc] Sense not available. Apr 18 16:53:17 Server kernel: [4487891.048624] sd 3:0:0:0: rejecting I/O to offline device Apr 18 16:53:17 Server kernel: [4487891.048643] sd 3:0:0:0: rejecting I/O to offline device Apr 18 16:53:17 Server kernel: [4487891.048663] sd 3:0:0:0: rejecting I/O to offline device Apr 18 16:53:17 Server kernel: [4487891.048681] sd 3:0:0:0: [sdc] READ CAPACITY failed Apr 18 16:53:17 Server kernel: [4487891.048683] sd 3:0:0:0: [sdc] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK Apr 18 16:53:17 Server kernel: [4487891.048685] sd 3:0:0:0: [sdc] Sense not available. 
Apr 18 16:53:17 Server kernel: [4487891.048689] sd 3:0:0:0: rejecting I/O to offline device Apr 18 16:53:17 Server kernel: [4487891.048709] sd 3:0:0:0: rejecting I/O to offline device Apr 18 16:53:17 Server kernel: [4487891.048800] sd 3:0:0:0: rejecting I/O to offline device Apr 18 16:53:17 Server kernel: [4487891.048860] sd 3:0:0:0: rejecting I/O to offline device Apr 18 16:53:17 Server kernel: [4487891.049028] sd 3:0:0:0: [sdc] Asking for cache data failed Apr 18 16:53:17 Server kernel: [4487891.049048] sd 3:0:0:0: [sdc] Assuming drive cache: write through Apr 18 16:53:17 Server kernel: [4487891.049071] sdc: detected capacity change from 2000398934016 to 0 Apr 18 16:53:17 Server kernel: [4487891.049080] ata4.00: detaching (SCSI 3:0:0:0) Apr 18 16:53:18 Server kernel: [4487891.061149] sd 3:0:0:0: [sdc] Stopping disk Apr 18 16:53:18 Server kernel: [4487891.485492] RAID5 conf printout: Apr 18 16:53:18 Server kernel: [4487891.485496] --- rd:4 wd:3 Apr 18 16:53:18 Server kernel: [4487891.485500] disk 0, o:1, dev:sdb Apr 18 16:53:18 Server kernel: [4487891.485502] disk 1, o:0, dev:sdc Apr 18 16:53:18 Server kernel: [4487891.485504] disk 2, o:1, dev:sdd Apr 18 16:53:18 Server kernel: [4487891.485506] disk 3, o:1, dev:sde Apr 18 16:53:18 Server kernel: [4487891.497014] RAID5 conf printout: Apr 18 16:53:18 Server kernel: [4487891.497016] --- rd:4 wd:3 Apr 18 16:53:18 Server kernel: [4487891.497018] disk 0, o:1, dev:sdb Apr 18 16:53:18 Server kernel: [4487891.497019] disk 2, o:1, dev:sdd Apr 18 16:53:18 Server kernel: [4487891.497021] disk 3, o:1, dev:sde Apr 18 16:53:18 Server kernel: [4487891.838719] scsi 3:0:0:0: Direct-Access ATA WDC WD20EARS-00S 80.0 PQ: 0 ANSI: 5 Apr 18 16:53:18 Server kernel: [4487891.838886] sd 3:0:0:0: Attached scsi generic sg3 type 0 Apr 18 16:53:18 Server kernel: [4487891.838911] sd 3:0:0:0: [sdf] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB) Apr 18 16:53:18 Server kernel: [4487891.838964] sd 3:0:0:0: [sdf] Write Protect is off Apr 18 16:53:18 Server kernel: [4487891.838967] sd 3:0:0:0: [sdf] Mode Sense: 00 3a 00 00 Apr 18 16:53:18 Server kernel: [4487891.838988] sd 3:0:0:0: [sdf] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 18 16:53:20 Server kernel: [4487891.839147] sdf: unknown partition table Apr 18 16:53:20 Server kernel: [4487893.130026] sd 3:0:0:0: [sdf] Attached SCSI disk
Right now, I'm unable to do anything on /dev/sdc. Is there any way to try to re-attach it? I don't want to power down the server unless absolutely necessary.
System: Debian Stable 2.6.32-5-amd64
mdadm version 3.1.4-1+8efb9d1
cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdb[0] sdc[4](F) sde[3] sdd[2]
      5860543488 blocks level 5, 64k chunk, algorithm 2 [4/3] [U_UU]
unused devices: <none>
mdadm --examine --scan
ARRAY /dev/md0 UUID=1a7744b5:912ec7af:f82a9565:e3b453b4

    Read the article
