Search Results


  • Entity Framework version 1 – Brief Synopsis and Tips – Part 1

    - by Rohit Gupta
    To do eager loading, use projections (e.g. from c in context.Contacts select new { c, c.Addresses }) or the Include query builder method (Include("Addresses")). If there is multi-level hierarchical data, eager load all the relationships with Include calls like customers.Include("Order.OrderDetail") to include the Order and OrderDetail collections, or customers.Include("Order.OrderDetail.Location") to include the Order, OrderDetail and Location collections with a single Include statement (see the sketch after this list).

    - If the query uses joins, the Include() query builder method is ignored; use nested queries instead. Include() is also ignored when the query does projections.
    - Use Address.ContactReference.Load() or Contact.Addresses.Load() if you need to defer-load a specific entity; this results in extra round trips to the database.
    - ObjectQuery<> cannot return anonymous types; it returns an ObjectQuery<DbDataRecord>.
    - Only the Include method can be appended to LINQ query methods, while any LINQ query method can be appended to query builder methods. If you need to append a query builder method other than Include after a LINQ method, cast the IQueryable<Contact> to ObjectQuery<Contact> first and then append the query builder method to it.
    - Query builder methods are the Select, Where and Include methods that take Entity SQL as parameters, e.g. "it.StartDate, it.EndDate". When query builder methods do a projection, they return ObjectQuery<DbDataRecord>, so to iterate over the collection use contact["Name"].ToString(). When LINQ to Entities methods do a projection, they return a collection of anonymous types, so the collection is strongly typed and supports IntelliSense.
    - The EF ObjectContext can track changes only on entities, not on anonymous types.
    - If you use a DefiningQuery for an EntitySet, the EntitySet becomes read-only, since a DefiningQuery is the same as a view (and views are treated as read-only by default). If you want to use this EntitySet for inserts/updates/deletes, map stored procs (as created in the database) to the insert/update/delete functions of the entity in the designer.
    - You can use either the Execute method or the ToList() method to bind data to data sources/binding sources. If you use Execute, remember that you can traverse the ObjectResult<> collection it returns only once.
    - In WPF, bind to an ObservableCollection to keep track of changes and let EF send updates to the database automatically.
    - Use extension methods to add logic to entities, e.g. create extension methods for the EntityObject class. Alternatively, create a method in the ObjectContext partial class that takes the entity as a parameter, and call that method as desired from within each entity.
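    As an illustration, here is a minimal sketch of the eager loading and the ObjectQuery cast described above (the context and entity names such as BAEntities, Customers and Contact follow the style of the examples in this post and are placeholders for your own model):

        using System;
        using System.Linq;
        using System.Data.Objects;

        public static class EagerLoadingSketch
        {
            public static void Run()
            {
                using (var context = new BAEntities())
                {
                    // Eager load two levels of the graph with a single Include.
                    var customers = context.Customers
                                           .Include("Order.OrderDetail")
                                           .ToList();

                    // To append a query builder method other than Include after a
                    // LINQ method, cast back to ObjectQuery<T> first.
                    IQueryable<Contact> linqQuery =
                        context.Contacts.Where(c => c.LastName == "Gupta");
                    ObjectQuery<Contact> builderQuery =
                        ((ObjectQuery<Contact>)linqQuery).Where("it.FirstName = 'Rohit'");
                }
            }
        }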
    DefiningQueries and stored procedures: for custom entities, you can use a DefiningQuery or a stored procedure, so the custom entity collection is populated using the DefiningQuery (of the EntitySet) or the sproc. If you use a sproc to populate the EntityCollection, query execution is immediate and happens on the server side, and any filters applied are applied afterwards in the client app. If you use a DefiningQuery instead, the query is composable: the filters (if applied to the EntitySet) are sent together with it as a single query to the database, returning only filtered results.

    If the sproc returns results that cannot be mapped to an existing entity, first create the Entity/EntitySet in the CSDL using the designer, then create a dummy Entity/EntitySet in the SSDL using XML. When creating the EntitySet in the SSDL for this dummy entity, use a TSQL statement that returns no rows but does return the relevant columns, e.g. select ContactID, FirstName, LastName from dbo.Contact where 1=2. Also ensure that the entity created in the SSDL uses SQL data types, not .NET data types. If you are unable to open the EDMX file in the designer, note the errors; they give precise information about what is wrong.

    The third option is to simply create a native query in the SSDL using <Function Name="PaymentsforContact" IsComposable="false"><CommandText>SELECT ActivityId, Activity AS ActivityName, ImagePath, Category FROM dbo.Activities</CommandText></Function> and then map this Function to an existing entity. This is a quick way to get a custom entity that is a regular entity with renamed columns or additional (computed) columns. The disadvantage is that it returns all the rows from the defining query, and any filter (if defined) is applied only on the client side after all the rows come back from the database; if you use a DefiningQuery instead, it remains composable.

    The fourth option (for mapping READ stored proc results to a non-existent entity) is to create a view in the database that returns all the fields the sproc returns, update the model so that it contains this view as an entity, and then map the read sproc to this view entity. The other option would be to simply create the view and remove the sproc altogether.

    To execute a sproc that does not return an entity, use an EntityCommand. You cannot call a sproc FunctionImport that does not return entities from code; the only way is to call the SSDL function using EntityCommand. This changes with Entity Framework version 4, where function imports can return scalar types, complex types, entities, or nothing (non-query).
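    A rough sketch of calling such an SSDL function through EntityCommand (the connection string name, the "BAModel.Store" namespace and the "LogPayment" function are hypothetical; substitute the names from your own SSDL):

        using System.Data;
        using System.Data.EntityClient;

        public static class SsdlFunctionSketch
        {
            public static void Run()
            {
                using (var connection = new EntityConnection("name=BAEntities"))
                {
                    connection.Open();
                    using (EntityCommand command = connection.CreateCommand())
                    {
                        // "BAModel.Store" is the SSDL namespace; "LogPayment" is a
                        // hypothetical non-composable function defined there.
                        command.CommandType = CommandType.StoredProcedure;
                        command.CommandText = "BAModel.Store.LogPayment";
                        command.Parameters.AddWithValue("PaymentId", 42);
                        command.ExecuteNonQuery();
                    }
                }
            }
        }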
    UDFs, when created as a Function in the SSDL, need the Name and IsComposable properties set on the Function element. IsComposable is always false for sprocs; for UDFs, set it to true. You cannot call a UDF Function from code, since a UDF Function cannot be imported into the CSDL model with version 1 of EF; only stored procedures can be imported and then mapped to an entity.

    Entity Framework requires properties that are involved in association mappings to be mapped in all of the function mappings for the entity (insert, update and delete). Because Payment has an association to Reservation, we need to pass both the PaymentId and the ReservationId to the delete sproc, even though just the PaymentId is the primary key on the Payment table.

    When mapping insert, update and delete procs to an entity, ensure that either all three or none are mapped. Further, if you have a base class and a derived class in the CSDL, you must map (insert, update, delete) sprocs to all parent and child entities in the inheritance relationship. Note that this limitation, that base and derived entity methods must all be mapped, does not apply when you are mapping read stored procedures.

    You can write stored procedure SQL directly into the SSDL by creating a Function element there; once created, you can map this Function to a CSDL entity directly in the designer during Function Import.

    You can do entity splitting, such that one entity maps to multiple tables in the database. For example, the Customer entity currently derives from the Contact entity and in addition references the ContactPersonalInfo entity. One can copy all properties from the ContactPersonalInfo entity into the Customer entity, delete the ContactPersonalInfo entity, and finally map the copied properties to the ContactPersonalInfo table in Table Mapping (by adding another table, ContactPersonalInfo, to the table mapping); this is called entity splitting. Now when you insert a Customer record, EF automatically generates SQL to insert records into the Contact, Customers and ContactPersonalInfo tables, even though there is a single Customer entity in the CSDL.

    There is also Table per Type inheritance, where one EDM entity derives from another EDM entity and absorbs the inherited entity's properties. For example, in the Break Away Geek Adventures EDM, the Customer entity inherits from the Contact entity and absorbs all the properties of the Contact entity. Thus, when you create a Customer entity in code and then call context.SaveChanges, the ObjectContext first generates TSQL to insert into the Contact table, followed by TSQL to insert into the Customer table (see the sketch below).
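    For instance, a minimal sketch of that behavior (entity and property names assumed from the BreakAway sample, not taken verbatim from it):

        public static class InheritanceInsertSketch
        {
            public static void Run()
            {
                using (var context = new BAEntities())
                {
                    var customer = new Customer
                    {
                        FirstName = "Ada",    // properties absorbed from Contact
                        LastName = "Lovelace"
                    };
                    context.AddToContacts(customer); // Customer derives from Contact
                    context.SaveChanges();           // INSERT Contact, then INSERT Customer
                }
            }
        }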
    Then there is Table per Hierarchy inheritance, where different types are created based on a condition (similar to applying a condition to filter an entity to contain filtered records); the difference is that the filter condition populates a new entity type derived from the base entity. In the BreakAway sample the example is the Lodging entity, which is abstract; the Resort and NonResort entities derive from Lodging, and records are filtered based on the value of the Resort boolean field.

    Then there is Table per Concrete Type inheritance, where we create a concrete entity for each table in the database. In the BreakAway sample there is an entity for the Reservation table and another entity for the OldReservation table, even though both tables contain the same fields. The OldReservation entity can then inherit from the Reservation entity, and you configure the OldReservation entity to remove all scalar properties from it (since it inherits the properties from Reservation and filters based on the ReservationDate field).

    Complex types (complex properties): entities in EF can also contain complex properties (in addition to scalar properties), and these complex properties reference a ComplexType (not an EntityType). DropDownList, ListBox, RadioButtonList, CheckBoxList and BulletedList are examples of list server controls (not data-bound controls); these controls cannot use complex properties during databinding, they need scalar properties. So if an entity contains complex properties and you need to bind them to list server controls, use projections to return scalar properties and bind those to the control (the disadvantage is that projected collections are not tracked by the ObjectContext and hence changes to collections bound to controls cannot be persisted); a sketch follows below. ObjectDataSource and EntityDataSource do account for complex properties: one can bind entities with complex properties to data source controls and they will be tracked for changes, with no additional plumbing needed to persist changes to these collections. So data-bound controls like GridView and FormView need to use EntityDataSource or ObjectDataSource as the data source for entities that contain complex properties, so that changes made through the control can be persisted to the database (enabling the controls for updates); if you cannot use the EntityDataSource, you need to flatten the ComplexType properties using projections. With EF version 4, complex types are supported by the designer, and you can add/remove/compose complex types directly there.
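    A small sketch of the flattening projection mentioned above (the FullName complex property and the ddlContacts control are assumptions for illustration):

        // Flatten the complex property into scalars for a list server control.
        var contacts = from c in context.Contacts
                       select new
                       {
                           c.ContactID,
                           Name = c.FullName.FirstName + " " + c.FullName.LastName
                       };
        ddlContacts.DataSource = contacts.ToList();
        ddlContacts.DataValueField = "ContactID";
        ddlContacts.DataTextField = "Name";
        ddlContacts.DataBind();
        // Note: the projected collection is not tracked by the ObjectContext,
        // so edits made through the control cannot be persisted from it.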
    Conditional mapping is like Table per Hierarchy inheritance: entities inherit from a base class and conditions are used to populate the EntitySet. Conditional mapping has limitations, since you can only use the =, IS NULL and IS NOT NULL operators. If you need more operators for filtering/mapping conditionally, use a QueryView (or possibly a DefiningQuery) to create a read-only entity. QueryViews are read-only by default; the EntitySet created by a QueryView is enabled for change tracking by the ObjectContext, but the ObjectContext cannot create insert/update/delete TSQL statements for these entities when SaveChanges is called, since it is a QueryView. One way to get around this limitation is to map stored procedures for the insert/update/delete operations in the designer.

    Difference between QueryView and DefiningQuery: a QueryView is defined in the mapping (MSL) file/section of the EDM XML, whereas a DefiningQuery is defined in the store schema (SSDL). A QueryView is written in Entity SQL and is thus database agnostic, usable against any database/data layer. A DefiningQuery is written in the database's own language (TSQL, PL/SQL, etc.), so you have more control.

    Performance: lazy loading is deferred loading done automatically. Lazy loading is supported from EF version 4 and is on by default; if you need to turn it off, use context.ContextOptions.LazyLoadingEnabled = false. To improve performance, consider precompiling ObjectQueries using the CompiledQuery.Compile method, sketched below.
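    A minimal sketch of a precompiled query (BAEntities and Contact are assumed placeholder names):

        using System;
        using System.Linq;
        using System.Data.Objects;

        public static class PrecompiledQuerySketch
        {
            // The expression tree is compiled once and reused on every invocation.
            static readonly Func<BAEntities, string, IQueryable<Contact>> ContactsByLastName =
                CompiledQuery.Compile((BAEntities context, string lastName) =>
                    context.Contacts.Where(c => c.LastName == lastName));

            public static void Run(BAEntities context)
            {
                var results = ContactsByLastName(context, "Gupta").ToList();
            }
        }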

    Read the article

  • Anti-Forgery Request Recipes For ASP.NET MVC And AJAX

    - by Dixin
    Background: To secure websites against cross-site request forgery (CSRF, or XSRF) attacks, ASP.NET MVC provides an excellent mechanism: the server prints tokens to a cookie and inside the form; when the form is submitted to the server, the token in the cookie and the token inside the form are both sent in the HTTP request; the server validates the tokens. To print tokens to the browser, just invoke HtmlHelper.AntiForgeryToken(): <% using (Html.BeginForm()) { %> <%: this.Html.AntiForgeryToken(Constants.AntiForgeryTokenSalt)%> <%-- Other fields. --%> <input type="submit" value="Submit" /> <% } %> This invocation generates a token and writes it inside the form: <form action="..." method="post"> <input name="__RequestVerificationToken" type="hidden" value="J56khgCvbE3bVcsCSZkNVuH9Cclm9SSIT/ywruFsXEgmV8CL2eW5C/gGsQUf/YuP" /> <!-- Other fields. --> <input type="submit" value="Submit" /> </form> and also writes it into the cookie: __RequestVerificationToken_Lw__= J56khgCvbE3bVcsCSZkNVuH9Cclm9SSIT/ywruFsXEgmV8CL2eW5C/gGsQUf/YuP When the above form is submitted, both are sent to the server. On the server side, the [ValidateAntiForgeryToken] attribute is used to specify which controllers or actions validate them: [HttpPost] [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public ActionResult Action(/* ... */) { // ... } This is very productive for form scenarios, but recently, while resolving security vulnerabilities for Web products, some problems were encountered.

    Specify validation on the controller (not on each action). The server-side problem is that it would be natural to declare [ValidateAntiForgeryToken] once on the controller, but it actually has to be declared on each POST action. Because there are usually many more POST actions than controllers, the work gets a little crazy.

    Problem: Usually a controller contains actions for HTTP GET and actions for HTTP POST requests, and usually only the HTTP POST requests need validation. So if [ValidateAntiForgeryToken] is declared on the controller, the HTTP GET requests become invalid: [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public class SomeController : Controller // One [ValidateAntiForgeryToken] attribute. { [HttpGet] public ActionResult Index() // Index() cannot work. { // ... } [HttpPost] public ActionResult PostAction1(/* ... */) { // ... } [HttpPost] public ActionResult PostAction2(/* ... */) { // ... } // ... } If the browser sends an HTTP GET request by clicking a link, http://Site/Some/Index, validation definitely fails, because no token is provided. So the result is that the [ValidateAntiForgeryToken] attribute must be distributed to each POST action: public class SomeController : Controller // Many [ValidateAntiForgeryToken] attributes. { [HttpGet] public ActionResult Index() // Works. { // ... } [HttpPost] [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public ActionResult PostAction1(/* ... */) { // ... } [HttpPost] [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public ActionResult PostAction2(/* ... */) { // ... } // ... } This is a little bit crazy, because one application can have a lot of POST actions.
    Solution: To avoid a large number of [ValidateAntiForgeryToken] attributes (one for each POST action), the following ValidateAntiForgeryTokenWrapperAttribute wrapper class can be helpful; it allows the HTTP verbs to validate to be specified: [AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = false, Inherited = true)] public class ValidateAntiForgeryTokenWrapperAttribute : FilterAttribute, IAuthorizationFilter { private readonly ValidateAntiForgeryTokenAttribute _validator; private readonly AcceptVerbsAttribute _verbs; public ValidateAntiForgeryTokenWrapperAttribute(HttpVerbs verbs) : this(verbs, null) { } public ValidateAntiForgeryTokenWrapperAttribute(HttpVerbs verbs, string salt) { this._verbs = new AcceptVerbsAttribute(verbs); this._validator = new ValidateAntiForgeryTokenAttribute() { Salt = salt }; } public void OnAuthorization(AuthorizationContext filterContext) { string httpMethodOverride = filterContext.HttpContext.Request.GetHttpMethodOverride(); if (this._verbs.Verbs.Contains(httpMethodOverride, StringComparer.OrdinalIgnoreCase)) { this._validator.OnAuthorization(filterContext); } } } When this attribute is declared on a controller, only HTTP requests with the specified verbs are validated: [ValidateAntiForgeryTokenWrapper(HttpVerbs.Post, Constants.AntiForgeryTokenSalt)] public class SomeController : Controller { // GET actions are not affected. // Only HTTP POST requests are validated. } Now one single attribute on the controller turns on validation for all POST actions. Maybe it would be nice if HTTP verbs could be specified on the built-in [ValidateAntiForgeryToken] attribute; that would be easy to implement.

    Specify a non-constant salt at runtime. By default the salt must be a compile-time constant, so that it can be used with the [ValidateAntiForgeryToken] or [ValidateAntiForgeryTokenWrapper] attribute.

    Problem: One Web product might be sold to many clients. If a constant salt is evaluated at compile time, then after the product is built and deployed to many clients, they all have the same salt. Of course, clients do not like this; some clients might even want to specify a custom salt in configuration. In these scenarios, the salt is required to be a runtime value.

    Solution: In the above [ValidateAntiForgeryToken] and [ValidateAntiForgeryTokenWrapper] attributes, the salt is passed through the constructor. So one solution is to remove this parameter: public class ValidateAntiForgeryTokenWrapperAttribute : FilterAttribute, IAuthorizationFilter { public ValidateAntiForgeryTokenWrapperAttribute(HttpVerbs verbs) { this._verbs = new AcceptVerbsAttribute(verbs); this._validator = new ValidateAntiForgeryTokenAttribute() { Salt = AntiForgeryToken.Value }; } // Other members. } But here the injected dependency becomes a hard dependency.
    So the other solution is to move the validation code into the controller, working around the limitation of attributes: public abstract class AntiForgeryControllerBase : Controller { private readonly ValidateAntiForgeryTokenAttribute _validator; private readonly AcceptVerbsAttribute _verbs; protected AntiForgeryControllerBase(HttpVerbs verbs, string salt) { this._verbs = new AcceptVerbsAttribute(verbs); this._validator = new ValidateAntiForgeryTokenAttribute() { Salt = salt }; } protected override void OnAuthorization(AuthorizationContext filterContext) { base.OnAuthorization(filterContext); string httpMethodOverride = filterContext.HttpContext.Request.GetHttpMethodOverride(); if (this._verbs.Verbs.Contains(httpMethodOverride, StringComparer.OrdinalIgnoreCase)) { this._validator.OnAuthorization(filterContext); } } } Then make controller classes inherit from this AntiForgeryControllerBase class. Now the salt is no longer required to be a compile-time constant.

    Submit the token via AJAX. On the browser side, once the server turns on anti-forgery validation for HTTP POST, all AJAX POST requests will fail by default.

    Problem: In AJAX scenarios, the HTTP POST request is not sent by a form. Take jQuery as an example: $.post(url, { productName: "Tofu", categoryId: 1 // Token is not posted. }, callback); This kind of AJAX POST request will always be invalid, because the server-side code cannot see the token in the posted data.

    Solution: Basically, the tokens must be printed to the browser and then sent back to the server. So first of all, HtmlHelper.AntiForgeryToken() needs to be called somewhere, so that the browser has the token in both the HTML and the cookie. Then jQuery must find the printed token in the HTML and append it to the data before sending: $.post(url, { productName: "Tofu", categoryId: 1, __RequestVerificationToken: getToken() // Token is posted. }, callback); To be reusable, this can be encapsulated in a tiny jQuery plugin: /// <reference path="jquery-1.4.2.js" /> (function ($) { $.getAntiForgeryToken = function (tokenWindow, appPath) { // HtmlHelper.AntiForgeryToken() must be invoked to print the token. tokenWindow = tokenWindow && typeof tokenWindow === typeof window ? tokenWindow : window; appPath = appPath && typeof appPath === "string" ? "_" + appPath.toString() : ""; // The name attribute is either __RequestVerificationToken, // or __RequestVerificationToken_{appPath}. var tokenName = "__RequestVerificationToken" + appPath; // Finds the <input type="hidden" name={tokenName} value="..." /> in the specified window. // var inputElements = $("input[type='hidden'][name='__RequestVerificationToken" + appPath + "']"); var inputElements = tokenWindow.document.getElementsByTagName("input"); for (var i = 0; i < inputElements.length; i++) { var inputElement = inputElements[i]; if (inputElement.type === "hidden" && inputElement.name === tokenName) { return { name: tokenName, value: inputElement.value }; } } return null; }; $.appendAntiForgeryToken = function (data, token) { // Converts data if not already a string. if (data && typeof data !== "string") { data = $.param(data); } // Gets token from current window by default. token = token ? token : $.getAntiForgeryToken(); // $.getAntiForgeryToken(window). data = data ? data + "&" : ""; // If token exists, appends {token.name}={token.value} to data. return token ? data + encodeURIComponent(token.name) + "=" + encodeURIComponent(token.value) : data; }; // Wraps $.post(url, data, callback, type).
    $.postAntiForgery = function (url, data, callback, type) { return $.post(url, $.appendAntiForgeryToken(data), callback, type); }; // Wraps $.ajax(settings). $.ajaxAntiForgery = function (settings) { settings.data = $.appendAntiForgeryToken(settings.data); return $.ajax(settings); }; })(jQuery); In most scenarios, it is OK to just replace a $.post() invocation with $.postAntiForgery(), and $.ajax() with $.ajaxAntiForgery(): $.postAntiForgery(url, { productName: "Tofu", categoryId: 1 }, callback); // Token is posted. There might be some scenarios with a custom token, where $.appendAntiForgeryToken() is useful: data = $.appendAntiForgeryToken(data, token); // Token is already in data. No need to invoke $.postAntiForgery(). $.post(url, data, callback); And there are scenarios where the token is not in the current window. For example, an HTTP POST request can be sent by an iframe while the token is in the parent window. Here, the token's container window can be specified for $.getAntiForgeryToken(): data = $.appendAntiForgeryToken(data, $.getAntiForgeryToken(window.parent)); // Token is already in data. No need to invoke $.postAntiForgery(). $.post(url, data, callback); If you have a better solution, please do tell me.

    Read the article

  • One Exception to Aggregate Them All

    - by João Angelo
    .NET 4.0 introduced a new type of exception, the AggregateException, which as the name implies makes it possible to aggregate several exceptions inside a single throwable exception instance. It is used extensively in the Task Parallel Library (TPL) and, besides representing a simple collection of exceptions, can also represent a set of exceptions in a tree-like structure. Besides its InnerExceptions property, which is a read-only collection of exceptions, the most relevant members of this new type are the methods Flatten and Handle. The former flattens a tree hierarchy, removing the need to recurse while working with an aggregate exception: flattening a tree of nested aggregate exceptions collapses it into a single level containing all the leaf exceptions. The other method, Handle, accepts a predicate that is invoked for each aggregated exception and returns a boolean indicating whether that exception is handled. If at least one exception goes unhandled, Handle throws a new AggregateException containing only the unhandled exceptions. The following code snippet illustrates this behavior, and also another scenario where an aggregate exception proves useful: single-threaded batch processing. static void Main() { try { ConvertAllToInt32("10", "x1x", "0", "II"); } catch (AggregateException errors) { // Contained exceptions are all FormatException, // so Handle does not throw any exception. errors.Handle(e => e is FormatException); } try { ConvertAllToInt32("1", "1x", null, "-2", "#4"); } catch (AggregateException errors) { // Handle throws a new AggregateException containing // the exceptions for which the predicate failed. // In this case it will contain a single ArgumentNullException. errors.Handle(e => e is FormatException); } } private static int[] ConvertAllToInt32(params string[] values) { var errors = new List<Exception>(); var integers = new List<int>(); foreach (var item in values) { try { integers.Add(Int32.Parse(item)); } catch (Exception e) { errors.Add(e); } } if (errors.Count > 0) throw new AggregateException(errors); return integers.ToArray(); }
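    Since the snippet above only exercises Handle, here is a short complementary sketch of Flatten on a hand-built nested AggregateException:

        using System;

        class FlattenSketch
        {
            static void Main()
            {
                // Build a nested tree: the outer aggregate contains an inner aggregate.
                var inner = new AggregateException(new FormatException());
                var outer = new AggregateException(new NullReferenceException(), inner);

                // Flatten collapses the tree into a single level, so no recursion
                // is needed to visit every leaf exception.
                foreach (var e in outer.Flatten().InnerExceptions)
                {
                    Console.WriteLine(e.GetType().Name); // NullReferenceException, FormatException
                }
            }
        }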

    Read the article

  • NullPointerException on jetty 7 startup with jetty deployment descriptor

    - by Draemon
    I'm getting the following error when I start Jetty:

    2010-03-01 12:30:19.328:WARN::Failed startup of context WebAppContext@15ddf5@15ddf5/webapp,null,/path/to/jetty-distribution-7.0.1.v20091125/webapps-plus/webapp.war

    with this command line:

    java -jar start.jar OPTIONS=All lib=/path/to/jetty-distribution-7.0.1.v20091125/lib/ext etc/jetty.xml etc/jetty-plus.xml /path/to/webapp/src/configuration/test.xml

    And test.xml contains:

    <?xml version="1.0" encoding="ISO-8859-1"?> <!DOCTYPE Configure PUBLIC "-//Mort Bay Consulting//DTD Configure//EN" "http://www.eclipse.org/jetty/configure.dtd"> <Configure class="org.eclipse.jetty.webapp.WebAppContext"> <Set name="contextPath">/webapp</Set> <Set name="war"><SystemProperty name="jetty.home" default="."/>/webapps-plus/webapp.war</Set> </Configure>

    If I don't include test.xml on the command line, it works fine.

    Read the article

  • Visual Studio 2010 Hosting :: Connect to MySQL Database from Visual Studio VS2010

    - by mbridge
    So, in order to connect to a MySQL database from VS2010 you need to:
    1. Download the latest version of the MySQL Connector/NET from http://www.mysql.com/downloads/connector/net/
    2. Install the connector (if you have an older version, you need to remove it from Control Panel -> Add / Remove Programs first)
    3. Open Visual Studio 2010
    4. Open the Server Explorer window (View -> Server Explorer)
    5. Use the Connect to Database button
    6. In the Choose Data Source window, select MySQL Database and press Continue
    7. In the Add Connection window, set the server name (127.0.0.1 or localhost for a MySQL server running on the local machine, or an IP address for a remote server) plus the username and password; if the above data is correct and the connection can be made, you will be able to select the database

    If you want to connect to a MySQL database from a C# application (Windows or Web), you can use the next sequence:

    //define the connection reference and initialize it
    MySql.Data.MySqlClient.MySqlConnection msqlConnection = null;
    msqlConnection = new MySql.Data.MySqlClient.MySqlConnection("server=localhost;user id=UserName;Password=UserPassword;database=DatabaseName;persist security info=False");

    //define the command reference
    MySql.Data.MySqlClient.MySqlCommand msqlCommand = new MySql.Data.MySqlClient.MySqlCommand();

    //define the connection used by the command object
    msqlCommand.Connection = msqlConnection;

    //define the command text
    msqlCommand.CommandText = "SELECT * FROM TestTable;";

    try
    {
        //open the connection
        msqlConnection.Open();

        //use a DataReader to process each record
        MySql.Data.MySqlClient.MySqlDataReader msqlReader = msqlCommand.ExecuteReader();
        while (msqlReader.Read())
        {
            //do something with each record
        }
    }
    catch (Exception er)
    {
        //do something with the exception
    }
    finally
    {
        //always close the connection
        msqlConnection.Close();
    }

    Read the article

  • Logging in user in Windows 2008 server using LogonUser fails on LogonType LOGON32_LOGON_SERVICE

    - by Ofiris
    I am using the LogonUser function to log an account on to a Windows 2008 R2 server on a domain with clustering. When using LOGON32_LOGON_INTERACTIVE as the LogonType, I successfully log in. When using LOGON32_LOGON_SERVICE as the LogonType, login fails and the Event Viewer says:

    An account failed to log on. Logon Type: 5 Account For Which Logon Failed: Security ID: NULL SID Account Name: thename Account Domain: thedomain Logon ID: 0x1009371c Logon Type: 5 Failure Information: Failure Reason: The user has not been granted the requested logon type at this machine. Status: 0xc000015b Sub Status: 0x0

    I was not sure if this is for Super User or Stack Overflow (I am calling LogonUser from C# code), but I guess it is some Windows Server issue. EventID = 4625

    Edit: Found that 0xc000015b means "The user has not been granted the requested logon type (aka logon right) at this machine".

    Edit: This should probably be a Server Fault question...
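    For reference, a sketch of the P/Invoke declaration typically used to call LogonUser from C#; note that logon type 5 requires the account to hold the "Log on as a service" right on that machine, which matches the failure reason in the event log above:

        using System;
        using System.Runtime.InteropServices;

        static class NativeMethods
        {
            // Well-known logon type / provider constants from winbase.h.
            public const int LOGON32_LOGON_INTERACTIVE = 2;
            public const int LOGON32_LOGON_SERVICE = 5; // needs "Log on as a service" right
            public const int LOGON32_PROVIDER_DEFAULT = 0;

            [DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
            public static extern bool LogonUser(
                string lpszUsername,
                string lpszDomain,
                string lpszPassword,
                int dwLogonType,
                int dwLogonProvider,
                out IntPtr phToken);
        }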

    Read the article

  • What code smell best describes this code?

    - by Paul Stovell
    Suppose you have this code in a class: private DataContext _context; public Customer[] GetCustomers() { GetContext(); return _context.Customers.ToArray(); } public Order[] GetOrders() { GetContext(); return _context.Orders.ToArray(); } // For the sake of this example, a new DataContext is *required* // for every public method call private void GetContext() { if (_context != null) { _context.Dispose(); } _context = new DataContext(); } This code isn't thread-safe: if two calls to GetOrders/GetCustomers are made at the same time from different threads, they may end up using the same context, or the context could be disposed while being used. Even if this bug didn't exist, however, it still "smells" like bad code. A much better design would be for GetContext to always return a new instance of DataContext, to get rid of the private field, and to dispose of the instance when done; a sketch follows below. Changing from an inappropriate private field to a local variable feels like a better solution. I've looked over the code smell lists and can't find one that describes this. In the past I've thought of it as temporal coupling, but the Wikipedia description suggests that's not the term: "Temporal coupling: when two actions are bundled together into one module just because they happen to occur at the same time." That page discusses temporal coupling, but its example is the public API of a class, while my question is about the internal design. Does this smell have a name? Or is it simply "buggy code"?
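    For illustration, a minimal sketch of the locally-scoped alternative described above:

        public Customer[] GetCustomers()
        {
            // Each call gets its own context; no shared mutable field.
            using (var context = new DataContext())
            {
                return context.Customers.ToArray();
            }
        }

        public Order[] GetOrders()
        {
            using (var context = new DataContext())
            {
                return context.Orders.ToArray();
            }
        }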

    Read the article

  • Update on ASP.NET MVC 3 RC2 (and a workaround for a bug in it)

    - by ScottGu
    Last week we published the RC2 build of ASP.NET MVC 3.  I blogged a bunch of details about it here. One of the reasons we publish release candidates is to help find those last "hard to find" bugs. So far we haven't seen many issues reported with the RC2 release (which is good), although we have seen a few reports of a metadata caching bug that manifests itself in at least two scenarios: Nullable parameters in action methods have problems: when you have a controller action method with a nullable parameter (like int?, or a complex type that has a nullable sub-property), the nullable parameter might always end up being null, even when the request contains a valid value for the parameter. [AllowHtml] doesn't allow HTML in model binding: when you decorate a model property with an [AllowHtml] attribute (to turn off HTML injection protection), model binding still fails when HTML content is posted to it. Both of these issues are caused by an over-eager caching optimization we introduced very late in the RC2 milestone.  This issue will be fixed for the final ASP.NET MVC 3 release.  Below is a workaround step you can implement today. Workaround You Can Use Today: You can fix the above issues with the current ASP.NET MVC 3 RC2 release by adding one line of code to the Application_Start() event handler within the Global.asax class of your application (reconstructed as code at the end of this post). That line sets the ModelMetadataProviders.Current property to use the DataAnnotationsModelMetadataProvider.  This causes ASP.NET MVC 3 to use a metadata provider implementation that doesn't have the more aggressive caching logic we introduced late in the RC2 release, and prevents the caching issues that cause the above problems to occur.  You don't need to change any other code within your application.  Once you make this change, the above issues are fixed.  You won't need this line of code once the final ASP.NET MVC 3 release ships (although keeping it in won't cause any problems). Hope this helps, and please keep any reports of issues coming our way, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
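    For reference, here is the one-line workaround rendered as code (the original post showed it as an image; the rest of the method body is abbreviated):

        protected void Application_Start()
        {
            // Workaround for the RC2 metadata caching bug: use the
            // DataAnnotations provider without the aggressive caching.
            ModelMetadataProviders.Current = new DataAnnotationsModelMetadataProvider();

            // ... existing area/route/filter registration goes here ...
        }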

    Read the article

  • Entity Framework v1 … Brief Synopsis and Tips – Part 2

    - by Rohit Gupta
    Using Entity Framework with ASMX web services and WCF services: if you use an ASMX web service to expose entity objects from Entity Framework, the ASMX web service does not include object graphs; one workaround is to use the Facade pattern, another is to use a WCF service. The other important aspect of using ASMX web services along with Entity Framework is that the ASMX client is not aware of the existence of EF v1, since the client deals solely with C# objects (not EntityObjects or an ObjectContext). Since the client is not aware of the ObjectContext, the client cannot participate in change tracking, because the client receives only the current values, not the original values, when the service sends the entity objects to it. Thus there are two drawbacks to using Entity Framework with an ASMX web service: 1. Object state is not maintained; to overcome this limitation we need to insert/update a single entity at a time and retrieve the original values for the entity being updated on the server/service end before calling SaveChanges. 2. ASMX does not maintain object graphs, i.e. the Customer.Reservations or Customer.Reservations.Trip relationships are not maintained, so you need to send these relationships separately from service to client. A WCF service overcomes the object-graph limitation of the ASMX web service, but we need to ensure that we populate all the non-null scalar properties of all the objects in the object graph before calling update. A WCF service still cannot overcome the second limitation: tracking changes to entities at the client end. Also note that the "Customer" class on the client is very different from the "Customer" class in the Entity Framework model entities; they are incompatible with each other, so we cannot cast one to the other. However, the .NET Framework translates the client "Customer" entity to the EF v1 model "Customer" entity once the entity is serialized back on the ASMX server end. If you need change tracking enabled on the client, you need to use WCF Data Services, which is available with VS 2010.

    In WCF, when adding an object that has relationships, the framework assumes that every object in the object graph needs to be added to the store. For example, in a Customer.Reservations.Trip object graph, when a Customer entity is added to the store, EF v1 assumes that it needs to add a Reservations collection and also a Trip for each Reservation. Thus, if we need to use existing Trips for reservations, we need to ensure that we null out the Trip object reference from each Reservation and set the TripReference to the EntityKey of the desired Trip instead (see the sketch below).
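    A small sketch of that EntityKey technique; the container, entity set and key names (BAEntities.Trips, TripID, AddToReservations) follow the BreakAway-style examples in this post and are assumptions:

        // Reuse an existing Trip rather than inserting a new one: clear the
        // object reference and send only the key of the Trip to relate to.
        newReservation.Trip = null;
        newReservation.TripReference.EntityKey =
            new EntityKey("BAEntities.Trips", "TripID", tripId);

        context.AddToReservations(newReservation); // only the Reservation is added
        context.SaveChanges();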
    Understanding relationships and associations in EF v1: the golden rule of EF is that it does not load entities/relationships unless you explicitly ask it to. However, there is one exception to this rule, which happens when you attach or detach entities from the ObjectContext. If you detach an entity in an object graph from the ObjectContext, the ObjectContext removes the ObjectStateEntry for that entity and all the relationship objects associated with it. For example, in a Customer.Order.OrderDetails graph, if the Customer entity is detached from the ObjectContext, then you cannot traverse to the Order and OrderDetails entities (which still exist in the ObjectContext) from the Customer entity (which does not exist in the ObjectContext). Conversely, if you JOIN an entity that is not in the ObjectContext with an entity that is, the first entity is automatically added to the ObjectContext, since relationships for the two entities need to exist in the ObjectContext.

    You cannot attach an EntityCollection to an entity through its navigation property; e.g. you cannot code myContact.Addresses = myAddressEntityCollection.

    Cascade deletes in the EDM: the designer does not support specifying cascade deletes for an entity. To enable cascade deletes on an entity in the EDM, use the Association definition in the CSDL for that entity. For example, SalesOrderDetail (SOD) has a foreign key relationship with SalesOrderHeader (SalesOrderHeader 1 : SalesOrderDetail *); if you specify a cascade delete on the SalesOrderHeader entity, then calling DeleteObject on a SalesOrderHeader (SOH) entity will send delete commands for the SOH record and all the SOD records that reference it.

    As a good design practice, if you use cascade deletes, ensure that the cascade delete facet is set both in the EDM and in the database. It is not absolutely mandatory to have cascade deletes in both places (the cascade delete spec on the SOH entity in the EDM alone will ensure that the SOH record and all related SOD records are deleted from the database, even if cascade delete is not configured on the SOD table in the database), but keeping them in sync is the safer configuration.

    Maintaining relationships in code: when setting a navigation property of an entity (for example, setting the Contact navigation property of an Address entity), the following rules apply. If both objects are detached, no relationship object is created; you are simply setting a property the CLR way. If both objects are attached, a relationship object is created. If only one of the objects is attached, the other becomes attached and a relationship object is created; if that detached object is new, when it is attached to the context its EntityState will be Added. One important rule to remember regarding synchronizing the EntityReference.Value and EntityReference.EntityKey properties: when attaching an entity which has an EntityReference (e.g. an Address entity with a ContactReference), the Value property takes precedence, and if the Value and EntityKey are out of sync, the EntityKey is updated to match the Value.

    If you call the .Load() method on a detached entity, the .Load() operation throws an exception. There is one exception to this rule: if you load entities using MergeOption.NoTracking, you can still call .Load() on them, since those entities remain accessible to the ObjectContext. So the bottom line is that we need an ObjectContext to be able to call the .Load() method for deferred loading of an EntityReference or EntityCollection.
    Another rule to remember is that you cannot call .Load() on entities in the EntityState.Added state, since the ObjectContext uses the EntityKey of the primary (parent) entity when loading the related (child) entity, not the EntityKey of the child (even if the child's EntityKey is present before calling .Load()).

    You can use ObjectContext.AddObject() to add an entity to the ObjectContext and set the EntityState of the new entity to EntityState.Added; here no relationships are added or updated. You can also use the EntityCollection.Add() method to add an entity to another entity's related EntityCollection. For example, a Contact has an Addresses EntityCollection, so use contact.Addresses.Add(newAddress) to add a new address to that collection. Note that if the entity does not already exist in the ObjectContext, calling contact.Addresses.Add(newAddress) causes a new Address entity to be added to the ObjectContext with EntityState.Added, and also adds a RelationshipEntry (a relationship object) with EntityState.Added that connects the contact with the new address. If the entity already exists in the ObjectContext (say as part of the theOtherContact.Addresses collection), then calling contact.Addresses.Add(existingAddress) adds two RelationshipEntry objects to the ObjectStateEntry collection, one with EntityState.Deleted and the other with EntityState.Added. This means the existingAddress entity is removed from the theOtherContact.Addresses collection and added to the contact.Addresses collection, effectively reassigning the address entity from theOtherContact to contact. This is called moving an existing entity to a new object graph.

    You usually use the ObjectContext.Attach() and EntityCollection.Attach() methods when you need to reconstruct an object graph after deserializing objects received from an ASMX web service client. Attach is usually used to connect entities that already exist in the ObjectContext. When EntityCollection.Attach() is called, the EntityState of the RelationshipEntry (the relationship object) remains EntityState.Unchanged, whereas when EntityCollection.Add() is called, the EntityState of the relationship object changes to EntityState.Added or EntityState.Deleted as the situation demands.

    LINQ to Entities tips: SelectMany does an inner join by default. For example, from c in Contacts from a in c.Addresses select c performs an inner join between the Contacts and Addresses tables and returns only those contacts that have an address.

    Group joins do a LEFT join by default. E.g. from a in Addresses join c in Contacts on a.Contact.ContactID equals c.ContactID into g where a.CountryRegion == "US" select g does a left join on the Contact table and returns the contacts that have an address in the "US" region. The following query: from c in Contacts join a in Addresses.Where(a1 => a1.CountryRegion == "US") on c.ContactID equals a.Contact.ContactID into addresses select new { c, addresses } does a left join on the Address table and returns all contacts.
    Of these contacts, only those with an address in the "US" region will have their Addresses EntityCollection populated; the other contacts will have 0 addresses in the collection (even if addresses for those contacts exist in the database in a different region).

    LINQ to Entities does not support DefaultIfEmpty(); instead, use the .Include("Address") query builder method to do a left join, or use group joins if you need more control, such as filtering on the Addresses EntityCollection of the Contact entity.

    Use CreateSourceQuery() on an EntityReference or EntityCollection if you need to add filters during deferred loading of entities (deferred loading in EF v1 happens when you call the Load() method on the EntityReference or EntityCollection). For example: var cust = context.Contacts.OfType<Customer>().First(); var sq = cust.Reservations.CreateSourceQuery().Where(r => r.ReservationDate > new DateTime(2008,1,1)); cust.Reservations.Attach(sq); This populates only those reservations made after January 1, 2008. This is also the only way in EF v1 to attach a range of entities to an EntityCollection using the Attach() method.

    If you need to get the foreign key value for an entity (e.g. the ContactID value from an Address entity), use: address.ContactReference.EntityKey.EntityKeyValues.First(k => k.Key == "ContactID").Value

    Read the article

  • TOTD #166: Using NoSQL database in your Java EE 6 Applications on GlassFish - MongoDB for now!

    - by arungupta
    The Java EE 6 platform includes the Java Persistence API to work with RDBMS. The JPA specification defines a comprehensive API that includes, but is not restricted to, how a database table can be mapped to a POJO and vice versa, and it provides mechanisms for how a PersistenceContext can be injected into a @Stateless bean and then used for performing different operations on the database table and writing typesafe queries. There are several well-known advantages of RDBMS, but the NoSQL movement has gained traction over the past couple of years. NoSQL databases are not intended to be a replacement for the mainstream RDBMS. As Philosophy of NoSQL explains, NoSQL databases were designed for casual use where all the features typically provided by an RDBMS are not required. The name "NoSQL" is more a category of databases that is better known for what it is not rather than what it is. The basic principles of NoSQL databases are:

    - No need to have a pre-defined schema, which makes them schema-less databases. Adding new properties to existing objects is easy and does not require ALTER TABLE. The unstructured data gives the flexibility to change the format of the data at any time without downtime or reduced service levels. Also, no joins happen on the server, because there is no structure and thus no relations between tables.
    - Scalability and performance are more important than the entire set of functionality typically provided by an RDBMS. These databases provide eventual consistency and/or transactions restricted to single items, and focus more on CRUD.
    - No restriction to SQL for accessing the information stored in the backing database.
    - Designed to scale out (horizontally) instead of scale up (vertically). This is important knowing that databases, and everything else as well, are moving into the cloud. RDBMS can scale out using sharding, but that requires complex management and is not for the faint of heart.
    - Unlike RDBMS, which require a separate caching tier, most NoSQL databases come with integrated caching.
    - Designed for less management; simpler data models lead to lower administration costs as well.

    There are primarily three types of NoSQL databases:

    - Key-value stores (e.g. Cassandra and Riak)
    - Document databases (MongoDB or CouchDB)
    - Graph databases (Neo4J)

    You may think NoSQL is a panacea, but as mentioned above these databases are not meant to replace the mainstream databases, and here is why:

    - RDBMS have been around for many years, are very stable, and are functionally rich. This is something CIOs and CTOs can bet their money on without much worry. There is a reason 98% of Fortune 100 companies run Oracle :-) NoSQL is cutting edge and brings excitement to developers, but enterprises are cautious about it.
    - Commercial databases like Oracle are well supported by the backing enterprises in terms of providing support resources on a global scale. There is a full ecosystem built around these commercial databases providing training, performance tuning, architecture guidance, and everything else. NoSQL is fairly new and typically backed by a single company unable to meet the scale of these big enterprises.
    - NoSQL databases are good for CRUD operations, but business intelligence is extremely important for enterprises to stay competitive. RDBMS provide extensive tooling to generate this data, whereas that was not the original intention of NoSQL databases, so they are lacking in this area. Generating any meaningful information beyond CRUD requires extensive programming.
    - Not suited for complex transactions, such as banking systems or other highly transactional applications requiring 2-phase commit.
    - SQL cannot be used with NoSQL databases, and writing even simple queries can be involved.

    Enough talking, let's take a look at some code. This blog has published multiple posts on how to access an RDBMS using JPA in a Java EE 6 application. This Tip Of The Day (TOTD) will show how you can use MongoDB (a document-oriented database) with a typical 3-tier Java EE 6 application. Let's get started! The complete source code of this project can be downloaded here.

    Download MongoDB for your platform from here (1.8.2 as of this writing) and start the server as:

    arun@ArunUbuntu:~/tools/mongodb-linux-x86_64-1.8.2/bin$ ./mongod
    ./mongod --help for help and startup options
    Sun Jun 26 20:41:11 [initandlisten] MongoDB starting : pid=11210 port=27017 dbpath=/data/db/ 64-bit
    Sun Jun 26 20:41:11 [initandlisten] db version v1.8.2, pdfile version 4.5
    Sun Jun 26 20:41:11 [initandlisten] git version: 433bbaa14aaba6860da15bd4de8edf600f56501b
    Sun Jun 26 20:41:11 [initandlisten] build sys info: Linux bs-linux64.10gen.cc 2.6.21.7-2.ec2.v1.2.fc8xen #1 SMP Fri Nov 20 17:48:28 EST 2009 x86_64 BOOST_LIB_VERSION=1_41
    Sun Jun 26 20:41:11 [initandlisten] waiting for connections on port 27017
    Sun Jun 26 20:41:11 [websvr] web admin interface listening on port 28017

    The default directory for the database is /data/db and needs to be created as:

    sudo mkdir -p /data/db/
    sudo chown `id -u` /data/db

    You can specify a different directory using the "--dbpath" option. Refer to the Quickstart for your specific platform.

    Using NetBeans, create a Java EE 6 project and make sure to enable CDI and add the JavaServer Faces framework. Download the MongoDB Java Driver (2.6.3 as of this writing) and add it to the project library by selecting "Properties", "Libraries", "Add Library...", creating a new library by specifying the location of the JAR file, and adding the library to the created project.

    Edit the generated "index.xhtml" so that it looks like:

    <h1>Add a new movie</h1>
    <h:form>
      Name: <h:inputText value="#{movie.name}" size="20"/><br/>
      Year: <h:inputText value="#{movie.year}" size="6"/><br/>
      Language: <h:inputText value="#{movie.language}" size="20"/><br/>
      <h:commandButton actionListener="#{movieSessionBean.createMovie}" action="show" title="Add" value="submit"/>
    </h:form>

    This page has a simple HTML form with three text boxes and a submit button. The text boxes take the name, year, and language of a movie, and the submit button invokes the "createMovie" method of "movieSessionBean" and then renders "show.xhtml".

    Create "show.xhtml" ("New" -> "Other..." -> "Other" -> "XHTML File") so that it looks like:

    <head>
      <title><h1>List of movies</h1></title>
    </head>
    <body>
      <h:form>
        <h:dataTable value="#{movieSessionBean.movies}" var="m">
          <h:column><f:facet name="header">Name</f:facet>#{m.name}</h:column>
          <h:column><f:facet name="header">Year</f:facet>#{m.year}</h:column>
          <h:column><f:facet name="header">Language</f:facet>#{m.language}</h:column>
        </h:dataTable>
      </h:form>
    </body>

    This page shows the name, year, and language of all movies stored in the database so far. The list of movies is returned by the "movieSessionBean.movies" property.
Now create the "Movie" class such that it looks like: import com.mongodb.BasicDBObject;import com.mongodb.BasicDBObject;import com.mongodb.DBObject;import javax.enterprise.inject.Model;import javax.validation.constraints.Size;/** * @author arun */@Modelpublic class Movie { @Size(min=1, max=20) private String name; @Size(min=1, max=20) private String language; private int year; // getters and setters for "name", "year", "language" public BasicDBObject toDBObject() { BasicDBObject doc = new BasicDBObject(); doc.put("name", name); doc.put("year", year); doc.put("language", language); return doc; } public static Movie fromDBObject(DBObject doc) { Movie m = new Movie(); m.name = (String)doc.get("name"); m.year = (int)doc.get("year"); m.language = (String)doc.get("language"); return m; } @Override public String toString() { return name + ", " + year + ", " + language; }} Other than the usual boilerplate code, the key methods here are "toDBObject" and "fromDBObject". These methods provide a conversion from "Movie" -> "DBObject" and vice versa. The "DBObject" is a MongoDB class that comes as part of the mongo-2.6.3.jar file and which we added to our project earlier.  The complete javadoc for 2.6.3 can be seen here. Notice, this class also uses Bean Validation constraints and will be honored by the JSF layer. Finally, create "MovieSessionBean" stateless EJB with all the business logic such that it looks like: package org.glassfish.samples;import com.mongodb.BasicDBObject;import com.mongodb.DB;import com.mongodb.DBCollection;import com.mongodb.DBCursor;import com.mongodb.DBObject;import com.mongodb.Mongo;import java.net.UnknownHostException;import java.util.ArrayList;import java.util.List;import javax.annotation.PostConstruct;import javax.ejb.Stateless;import javax.inject.Inject;import javax.inject.Named;/** * @author arun */@Stateless@Namedpublic class MovieSessionBean { @Inject Movie movie; DBCollection movieColl; @PostConstruct private void initDB() throws UnknownHostException { Mongo m = new Mongo(); DB db = m.getDB("movieDB"); movieColl = db.getCollection("movies"); if (movieColl == null) { movieColl = db.createCollection("movies", null); } } public void createMovie() { BasicDBObject doc = movie.toDBObject(); movieColl.insert(doc); } public List<Movie> getMovies() { List<Movie> movies = new ArrayList(); DBCursor cur = movieColl.find(); System.out.println("getMovies: Found " + cur.size() + " movie(s)"); for (DBObject dbo : cur.toArray()) { movies.add(Movie.fromDBObject(dbo)); } return movies; }} The database is initialized in @PostConstruct. Instead of a working with a database table, NoSQL databases work with a schema-less document. The "Movie" class is the document in our case and stored in the collection "movies". The collection allows us to perform query functions on all movies. The "getMovies" method invokes "find" method on the collection which is equivalent to the SQL query "select * from movies" and then returns a List<Movie>. Also notice that there is no "persistence.xml" in the project. Right-click and run the project to see the output as: Enter some values in the text box and click on enter to see the result as: If you reached here then you've successfully used MongoDB in your Java EE 6 application, congratulations! Some food for thought and further play ... SQL to MongoDB mapping shows mapping between traditional SQL -> Mongo query language. Tutorial shows fun things you can do with MongoDB. 
    - Try the interactive online shell.
    - The cookbook provides common ways of using MongoDB.

    In terms of this project, here are some tasks that can be tried:

    - Encapsulate the database management in a JPA persistence provider. Is it even worth it, given that the capabilities are going to be very different?
    - MongoDB uses the "BSONObject" class for its JSON representation; add @XmlRootElement to a POJO and see how a compatible JSON representation can be generated. This would make the fromXXX and toXXX methods redundant.

    Read the article

  • TOTD #166: Using NoSQL database in your Java EE 6 Applications on GlassFish - MongoDB for now!

    - by arungupta
The Java EE 6 platform includes the Java Persistence API to work with RDBMS. The JPA specification defines a comprehensive API that includes, but is not limited to, how a database table can be mapped to a POJO and vice versa, how a PersistenceContext can be injected into a @Stateless bean and then used for performing different operations on the database table, and how to write typesafe queries. There are several well-known advantages to RDBMS, but the NoSQL movement has gained traction over the past couple of years. NoSQL databases are not intended to be a replacement for mainstream RDBMS. As the Philosophy of NoSQL explains, a NoSQL database is designed for casual use where all the features typically provided by an RDBMS are not required. The name "NoSQL" is more a category of databases that is better known for what it is not than for what it is. The basic principles of a NoSQL database are:
- No need to have a pre-defined schema, which makes them schema-less databases. Adding new properties to existing objects is easy and does not require an ALTER TABLE. The unstructured data gives the flexibility to change the format of the data at any time without downtime or reduced service levels. There are also no joins happening on the server, because there is no fixed structure and thus no relations between entities.
- Scalability and performance are more important than the entire set of functionality typically provided by an RDBMS. These databases provide eventual consistency and/or transactions restricted to single items, with the main focus on CRUD.
- Not restricted to SQL to access the information stored in the backing database.
- Designed to scale out (horizontally) instead of scale up (vertically). This is important knowing that databases, and everything else as well, are moving into the cloud. An RDBMS can scale out using sharding, but that requires complex management and is not for the faint of heart.
- Unlike an RDBMS, which requires a separate caching tier, most NoSQL databases come with integrated caching.
- Designed for less management; simpler data models lead to lower administration overhead as well.
There are primarily three types of NoSQL databases: Key-Value stores (e.g. Cassandra and Riak), Document databases (MongoDB or CouchDB), and Graph databases (Neo4J). You may think NoSQL is a panacea, but as I mentioned above these databases are not meant to replace the mainstream databases, and here is why:
- RDBMS have been around for many years, are very stable, and are functionally rich. This is something CIOs and CTOs can bet their money on without much worry. There is a reason 98% of Fortune 100 companies run Oracle :-) NoSQL is cutting edge and brings excitement to developers, but enterprises are cautious about it.
- Commercial databases like Oracle are well supported by the backing enterprises in terms of providing support resources on a global scale. There is a full ecosystem built around these commercial databases providing training, performance tuning, architecture guidance, and everything else. NoSQL is fairly new and typically backed by a single company not able to meet the scale of these big enterprises.
- NoSQL databases are good for CRUD operations, but business intelligence is extremely important for enterprises to stay competitive. RDBMS provide extensive tooling to generate this data; that was not the original intention of NoSQL databases, which are lacking in that area. Generating any meaningful information other than CRUD requires extensive programming.
- Not suited for complex transactions such as banking systems or other highly transactional applications requiring 2-phase commit.
- SQL cannot be used with NoSQL databases, and writing even simple queries can be involved.
Enough talking, let's take a look at some code. This blog has published multiple posts on how to access an RDBMS using JPA in a Java EE 6 application. This Tip Of The Day (TOTD) will show how you can use MongoDB (a document-oriented database) with a typical 3-tier Java EE 6 application. Let's get started! The complete source code of this project can be downloaded here. Download MongoDB for your platform from here (1.8.2 as of this writing) and start the server as:

arun@ArunUbuntu:~/tools/mongodb-linux-x86_64-1.8.2/bin$ ./mongod
./mongod --help for help and startup options
Sun Jun 26 20:41:11 [initandlisten] MongoDB starting : pid=11210 port=27017 dbpath=/data/db/ 64-bit
Sun Jun 26 20:41:11 [initandlisten] db version v1.8.2, pdfile version 4.5
Sun Jun 26 20:41:11 [initandlisten] git version: 433bbaa14aaba6860da15bd4de8edf600f56501b
Sun Jun 26 20:41:11 [initandlisten] build sys info: Linux bs-linux64.10gen.cc 2.6.21.7-2.ec2.v1.2.fc8xen #1 SMP Fri Nov 20 17:48:28 EST 2009 x86_64 BOOST_LIB_VERSION=1_41
Sun Jun 26 20:41:11 [initandlisten] waiting for connections on port 27017
Sun Jun 26 20:41:11 [websvr] web admin interface listening on port 28017

The default directory for the database is /data/db and needs to be created as:

sudo mkdir -p /data/db/
sudo chown `id -u` /data/db

You can specify a different directory using the "--dbpath" option. Refer to the Quickstart for your specific platform. Using NetBeans, create a Java EE 6 project and make sure to enable CDI and add the JavaServer Faces framework. Download the MongoDB Java Driver (2.6.3 as of this writing) and add it to the project library by selecting "Properties", "Libraries", "Add Library...", creating a new library by specifying the location of the JAR file, and adding the library to the created project. Edit the generated "index.xhtml" such that it looks like:

<h1>Add a new movie</h1>
<h:form>
    Name: <h:inputText value="#{movie.name}" size="20"/><br/>
    Year: <h:inputText value="#{movie.year}" size="6"/><br/>
    Language: <h:inputText value="#{movie.language}" size="20"/><br/>
    <h:commandButton actionListener="#{movieSessionBean.createMovie}" action="show" title="Add" value="submit"/>
</h:form>

This page has a simple HTML form with three text boxes and a submit button. The text boxes take the name, year, and language of a movie, and the submit button invokes the "createMovie" method of "movieSessionBean" and then renders "show.xhtml". Create "show.xhtml" ("New" -> "Other..." -> "Other" -> "XHTML File") such that it looks like:

<head>
    <title>List of movies</title>
</head>
<body>
    <h1>List of movies</h1>
    <h:form>
        <h:dataTable value="#{movieSessionBean.movies}" var="m">
            <h:column><f:facet name="header">Name</f:facet>#{m.name}</h:column>
            <h:column><f:facet name="header">Year</f:facet>#{m.year}</h:column>
            <h:column><f:facet name="header">Language</f:facet>#{m.language}</h:column>
        </h:dataTable>
    </h:form>
</body>

This page shows the name, year, and language of all movies stored in the database so far. The list of movies is returned by the "movieSessionBean.movies" property.
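Before building the data layer, it may help to confirm that the mongod instance started above is reachable from Java. A minimal sketch, assuming the 2.6.x driver is on the classpath (the class name is mine):

import com.mongodb.Mongo;

public class PingMongo {
    public static void main(String[] args) throws Exception {
        // Connects to the default host/port used by the mongod started above
        Mongo m = new Mongo("localhost", 27017);
        // Listing database names forces a round trip, so this fails fast if the server is down
        System.out.println(m.getDatabaseNames());
        m.close();
    }
}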
Now create the "Movie" class such that it looks like: import com.mongodb.BasicDBObject;import com.mongodb.BasicDBObject;import com.mongodb.DBObject;import javax.enterprise.inject.Model;import javax.validation.constraints.Size;/** * @author arun */@Modelpublic class Movie { @Size(min=1, max=20) private String name; @Size(min=1, max=20) private String language; private int year; // getters and setters for "name", "year", "language" public BasicDBObject toDBObject() { BasicDBObject doc = new BasicDBObject(); doc.put("name", name); doc.put("year", year); doc.put("language", language); return doc; } public static Movie fromDBObject(DBObject doc) { Movie m = new Movie(); m.name = (String)doc.get("name"); m.year = (int)doc.get("year"); m.language = (String)doc.get("language"); return m; } @Override public String toString() { return name + ", " + year + ", " + language; }} Other than the usual boilerplate code, the key methods here are "toDBObject" and "fromDBObject". These methods provide a conversion from "Movie" -> "DBObject" and vice versa. The "DBObject" is a MongoDB class that comes as part of the mongo-2.6.3.jar file and which we added to our project earlier.  The complete javadoc for 2.6.3 can be seen here. Notice, this class also uses Bean Validation constraints and will be honored by the JSF layer. Finally, create "MovieSessionBean" stateless EJB with all the business logic such that it looks like: package org.glassfish.samples;import com.mongodb.BasicDBObject;import com.mongodb.DB;import com.mongodb.DBCollection;import com.mongodb.DBCursor;import com.mongodb.DBObject;import com.mongodb.Mongo;import java.net.UnknownHostException;import java.util.ArrayList;import java.util.List;import javax.annotation.PostConstruct;import javax.ejb.Stateless;import javax.inject.Inject;import javax.inject.Named;/** * @author arun */@Stateless@Namedpublic class MovieSessionBean { @Inject Movie movie; DBCollection movieColl; @PostConstruct private void initDB() throws UnknownHostException { Mongo m = new Mongo(); DB db = m.getDB("movieDB"); movieColl = db.getCollection("movies"); if (movieColl == null) { movieColl = db.createCollection("movies", null); } } public void createMovie() { BasicDBObject doc = movie.toDBObject(); movieColl.insert(doc); } public List<Movie> getMovies() { List<Movie> movies = new ArrayList(); DBCursor cur = movieColl.find(); System.out.println("getMovies: Found " + cur.size() + " movie(s)"); for (DBObject dbo : cur.toArray()) { movies.add(Movie.fromDBObject(dbo)); } return movies; }} The database is initialized in @PostConstruct. Instead of a working with a database table, NoSQL databases work with a schema-less document. The "Movie" class is the document in our case and stored in the collection "movies". The collection allows us to perform query functions on all movies. The "getMovies" method invokes "find" method on the collection which is equivalent to the SQL query "select * from movies" and then returns a List<Movie>. Also notice that there is no "persistence.xml" in the project. Right-click and run the project to see the output as: Enter some values in the text box and click on enter to see the result as: If you reached here then you've successfully used MongoDB in your Java EE 6 application, congratulations! Some food for thought and further play ... SQL to MongoDB mapping shows mapping between traditional SQL -> Mongo query language. Tutorial shows fun things you can do with MongoDB. 
Some food for thought and further play ...
- SQL to MongoDB mapping shows the mapping between traditional SQL and the Mongo query language.
- The Tutorial shows fun things you can do with MongoDB.
- Try the interactive online shell.
- The cookbook provides common ways of using MongoDB.
In terms of this project, here are some tasks that can be tried:
- Encapsulate database management in a JPA persistence provider. Is it even worth it, given that the capabilities are going to be very different?
- MongoDB uses the "BSONObject" class for its JSON representation; add @XmlRootElement on a POJO and see how a compatible JSON representation can be generated (a sketch follows below). This would make the fromXXX and toXXX methods redundant.
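On that last task, a minimal sketch of the JAXB direction, assuming a JAXB-aware JSON provider (such as the one bundled with Jersey) is available:

import javax.xml.bind.annotation.XmlRootElement;

// A JAXB-aware JSON provider can marshal this POJO to/from JSON directly,
// which is what would make the toDBObject/fromDBObject pair redundant
@XmlRootElement
public class Movie {
    private String name;
    private String language;
    private int year;
    // getters and setters omitted
}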

    Read the article

  • How to use SharePoint modal dialog box to display Custom Page Part2

    - by ybbest
In the first part of the series, I showed you how to display and close a custom page in a SharePoint modal dialog using JavaScript. In this one, I'd like to show you how to display some information after the modal dialog is closed. You can download the source code here. 1. Firstly, modify the element file as follows:

<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
  <CustomAction Id="ReportConcern"
                RegistrationType="ContentType"
                RegistrationId="0x010100866B1423D33DDA4CA1A4639B54DD4642"
                Location="EditControlBlock"
                Sequence="107"
                Title="Display Custom Page"
                Description="To Display Custom Page in a modal dialog box on this item">
    <UrlAction Url="javascript:
      function emitStatus(messageToDisplay) {
        statusId = SP.UI.Status.addStatus(messageToDisplay.message + ' ' + messageToDisplay.location);
        SP.UI.Status.setStatusPriColor(statusId, 'Green');
      }
      function portalModalDialogClosedCallback(result, value) {
        if (value !== null) {
          emitStatus(value);
        }
      }
      var options = {
        url: '{SiteUrl}' + '/_layouts/YBBEST/TitleRename.aspx?List={ListId}&amp;ID={ItemId}',
        title: 'Rename title',
        allowMaximize: false,
        showClose: true,
        width: 500,
        height: 300,
        dialogReturnValueCallback: portalModalDialogClosedCallback
      };
      SP.UI.ModalDialog.showModalDialog(options);" />
  </CustomAction>
</Elements>

2. In your code behind, you can implement a close dialog function as below. This will close your modal dialog box once the button is clicked and display a status bar.

protected static string GetCloseDialogScript(string message)
{
    var scriptBuilder = new StringBuilder();
    scriptBuilder.Append("<script type='text/javascript'>" +
        "SP.UI.ModalDialog.commonModalDialogClose(1,").Append(message).Append("); </script>");
    return scriptBuilder.ToString();
}
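For completeness, a hypothetical button handler showing how GetCloseDialogScript might be wired up; the field names in the message literal match what emitStatus reads, while btnOk and newTitle are assumptions of mine:

protected void btnOk_Click(object sender, EventArgs e)
{
    // The object literal becomes the "value" passed to portalModalDialogClosedCallback
    string message = "{ message: 'Title renamed to', location: '" + newTitle + "' }";
    Page.ClientScript.RegisterStartupScript(GetType(), "CloseDialog", GetCloseDialogScript(message));
}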

    Read the article

  • Server overloaded with log messages: tty_release_dev: pts0: read/write wait queue active!

    - by Raph
In the logs, I have this (extract from the full kernel messages logged at 06:01:14):

Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.863038] BUG: unable to handle kernel NULL pointer dereference at 0000000000000015
Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861081] Process telnet (pid: 20247, threadinfo ffff8800f8598000, task ffff8800024d4500)

And then the server logs were flooded by this message:

Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861547] tty_release_dev: pts0: read/write wait queue active!

In the end, 2 hours later, I had to reboot because it had become inaccessible: the load had grown to 160%. The last command does not show anyone logged on pts0 at that time. I also don't know where this telnet process could come from... This is an AWS instance running Ubuntu 10.04 LTS, and here are the complete logs:

Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.863038] BUG: unable to handle kernel NULL pointer dereference at 0000000000000015
Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861007] IP: [<ffffffff81363dde>] n_tty_read+0x2ce/0x970
Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861019] PGD ee13d067 PUD f8698067 PMD 0
Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861025] Oops: 0000 [#1] SMP
Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861028] last sysfs file: /sys/devices/xen/vbd-2208/block/sdk/removable
Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861032] CPU 0
Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861034] Modules linked in: ipv6
Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861040] Pid: 20247, comm: telnet Not tainted 2.6.32-312-ec2 #24-Ubuntu
Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861042] RIP: e030:[<ffffffff81363dde>] [<ffffffff81363dde>] n_tty_read+0x2ce/0x970
Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861047] RSP: e02b:ffff8800f8599d88 EFLAGS: 00010246
Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861049] RAX: 0000000000000015 RBX: ffff8800f8598000 RCX: 0000000001aed069
Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861052] RDX: 0000000000000000 RSI: ffff8800f8599e67 RDI: ffff8801dd833d1c
Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861054] RBP: ffff8800f8599e98 R08: ffffffff8135eb10 R09: 7fffffffffffffff
Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861057] R10: 0000000000000000 R11: 0000000000000246 R12: ffff8801dd833800
Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861059] R13: 0000000000000000 R14: ffff8801dd833a68 R15: ffff8801dd833d1c
Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861065] FS: 00007f90121f6720(0000) GS:ffff880002c40000(0000) knlGS:0000000000000000
Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861068] CS: e033 DS: 0000 ES: 0000 CR0: 000000008005003b
Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861070] CR2: 0000000000000015 CR3: 0000000032a59000 CR4: 0000000000002660
Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861073] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861076] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861081] Process telnet (pid: 20247, threadinfo ffff8800f8598000, task ffff8800024d4500)
Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861083] Stack:
Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861085] 0000000000000000 0000000001aed069 ffff8801dd8339c8 ffff8800024d4500
Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861089] <0> ffff8801dd8339c0 ffff8801dd833c90 0000000001aed027 ffff8800024d4500
Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861094] <0> ffff8801dd8338d8 0000000000000000 ffff8800024d4500 0000000000000000
Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861099] Call Trace:
Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861107] [<ffffffff81034bc0>] ? default_wake_function+0x0/0x10
Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861113] [<ffffffff8135ebb6>] tty_read+0xa6/0xf0
Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861118] [<ffffffff810ee7e5>] vfs_read+0xb5/0x1a0
Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861122] [<ffffffff810ee91c>] sys_read+0x4c/0x80
Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861127] [<ffffffff81009ba8>] system_call_fastpath+0x16/0x1b
Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861131] [<ffffffff81009b40>] ? system_call+0x0/0x52
Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861133] Code: 85 d2 0f 84 92 00 00 00 45 8b ac 24 5c 02 00 00 f0 45 0f b3 2e 45 19 ed 49 63 84 24 5c 02 00 00 49 8b 94 24 50 02 00 00 4c 89 ff <0f> be 1c 02 e8 a9 d3 14 00 41 8b 94 24 5c 02 00 00 41 83 ac 24
Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861171] RIP [<ffffffff81363dde>] n_tty_read+0x2ce/0x970
Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861175] RSP <ffff8800f8599d88>
Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861171] RIP [<ffffffff81363dde>] n_tty_read+0x2ce/0x970
Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861175] RSP <ffff8800f8599d88>
Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861177] CR2: 0000000000000015
Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861205] ---[ end trace f10eee2057ff4f6b ]---
Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861547] tty_release_dev: pts0: read/write wait queue active!

    Read the article

  • How to use SharePoint modal dialog box to display Custom Page Part1

    - by ybbest
In part 1 of this series, I will show you how to use the modal dialog box to display a custom page and close it. You can download the solution here. 1. Firstly, I create a custom action on the list item ECB called "Display Custom Page". To do so, you need to create an element item in the SharePoint project and copy the following XML to the element file:

<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
  <CustomAction Id="ReportConcern"
                RegistrationType="ContentType"
                RegistrationId="0x010100866B1423D33DDA4CA1A4639B54DD4642"
                Location="EditControlBlock"
                Sequence="107"
                Title="Display Custom Page"
                Description="To Display Custom Page in a modal dialog box on this item">
    <UrlAction Url="javascript:
      function CallDETCustomDialog(dialogResult, returnValue) {
        SP.UI.ModalDialog.RefreshPage(SP.UI.DialogResult.OK);
      }
      var options = {
        url: '{SiteUrl}' + '/_layouts/YBBEST/TitleRename.aspx?List={ListId}&amp;ID={ItemId}',
        title: 'Rename title',
        allowMaximize: false,
        showClose: true,
        width: 500,
        height: 300,
        dialogReturnValueCallback: CallDETCustomDialog
      };
      SP.UI.ModalDialog.showModalDialog(options);" />
  </CustomAction>
</Elements>

2. In your code behind, you can implement a close dialog function as below. This will close your modal dialog box once the button is clicked.

protected void CloseDialog()
{
    if (HttpContext.Current.Request.QueryString["IsDlg"] == null)
        return;
    if (!ClientScript.IsStartupScriptRegistered("CloseDialogFunction"))
    {
        const string script = "<script type='text/javascript'>" +
            "SP.UI.ModalDialog.commonModalDialogClose(1, 1);" +
            "</script>";
        ClientScript.RegisterStartupScript(GetType(), "CloseDialogFunction", script);
    }
}
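As a usage sketch (the handler and control names are assumptions of mine), the function would typically be called at the end of the dialog page's OK-button handler:

protected void btnOk_Click(object sender, EventArgs e)
{
    // ... persist the new title here ...
    CloseDialog(); // emits SP.UI.ModalDialog.commonModalDialogClose(1, 1)
}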

    Read the article


  • How to create multiboot flash drive

    - by Nrew
I've found a guide here: http://www.pendrivelinux.com/boot-multiple-iso-from-usb-multiboot-usb/ and found this menu.lst on my flash drive, which seems to be the one that I'm seeing when I boot from it:

# This Menu Created by Lance http://www.pendrivelinux.com
# Ongoing Suggested Menu Entries and the Suggestor are noted!
default 0
timeout 30
color NORMAL HIGHLIGHT HELPTEXT HEADING
splashimage=(hd0,0)/splash.xpm.gz
foreground=FFFFFF
background=0066FF

title Memtest86+
  find --set-root /memtest86+-4.00.iso map --mem /memtest86+-4.00.iso (hd32) map --hook root (hd32) chainloader (hd32)

# Suggested by madprofessor
title Boot Clonezilla
  root (hd0,0) kernel /clonezilla/live/vmlinuz live-media-path=clonezilla/live bootfrom=/dev/sd boot=live union=aufs noprompt ocs_live_run="ocs-live-general" ocs_live_extra_param="" ocs_live_keymap="" ocs_live_batch="no" ocs_lang="" vga=791 ip=frommedia initrd /clonezilla/live/initrd.img

title Parted Magic 4.9 (Partition Tools)
  find --set-root /pmagic-4.9.iso map /pmagic-4.9.iso (hd32) map --hook root (hd32) chainloader (hd32)

# Suggested by Deb
title Partition Wizard 4.2 (Partition Tools)
  find --set-root /pwhe42.iso map /pwhe42.iso (hd32) map --hook root (hd32) chainloader (hd32)

title Balder DOS image (FreeDOS)
  map --unsafe-boot /balder10.img (fd0) map --hook chainloader --force (fd0)+1 rootnoverify (fd0)

# Suggested by Szymon Silski
title Linux Mint 8
  find --set-root /LinuxMint-8.iso map /LinuxMint-8.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/mint.seed boot=casper persistent iso-scan/filename=/LinuxMint-8.iso splash initrd /casper/initrd.lz

title Ubuntu 10.04
  find --set-root /ubuntu-10.04-desktop-i386.iso map /ubuntu-10.04-desktop-i386.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/ubuntu.seed boot=casper persistent iso-scan/filename=/ubuntu-10.04-desktop-i386.iso splash initrd /casper/initrd.lz

title Xubuntu 10.04 (XFCE Desktop)
  find --set-root /xubuntu-10.04-desktop-i386.iso map /xubuntu-10.04-desktop-i386.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/xubuntu.seed boot=casper persistent iso-scan/filename=/xubuntu-10.04-desktop-i386.iso splash initrd /casper/initrd.lz

title Kubuntu 10.04 (KDE Desktop)
  find --set-root /kubuntu-10.04-desktop-i386.iso map /kubuntu-10.04-desktop-i386.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/kubuntu.seed boot=casper persistent iso-scan/filename=/kubuntu-10.04-desktop-i386.iso splash initrd /casper/initrd.lz

# Suggested by Ambriel
title Lubuntu 10.04 (LXDE Lightweight Desktop)
  find --set-root /lubuntu-10.04.iso map /lubuntu-10.04.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/lubuntu.seed boot=casper persistent iso-scan/filename=/lubuntu-10.04.iso splash initrd /casper/initrd.lz

title Ubuntu 10.04 Netbook Remix (NetBook Distro)
  find --set-root /ubuntu-10.04-netbook-i386.iso map /ubuntu-10.04-netbook-i386.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/netbook-remix.seed boot=casper persistent iso-scan/filename=/ubuntu-10.04-netbook-i386.iso splash initrd /casper/initrd.lz

title Ubuntu 10.04 Server Edition Installer (32 bit Installer Only)
  find --set-root /ubuntu-10.04-server-i386.iso map /ubuntu-10.04-server-i386.iso (0xff) map --hook root (0xff) kernel /install/vmlinuz file=/cdrom/preseed/ubuntu-server.seed boot=install iso-scan/filename=/ubuntu-10.04-server-i386.iso splash initrd /install/initrd.gz

title Ubuntu 9.10
  find --set-root /ubuntu-9.10-desktop-i386.iso map /ubuntu-9.10-desktop-i386.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/ubuntu.seed boot=casper persistent iso-scan/filename=/ubuntu-9.10-desktop-i386.iso splash initrd /casper/initrd.lz

title Xubuntu 9.10
  find --set-root /xubuntu-9.10-desktop-i386.iso map /xubuntu-9.10-desktop-i386.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/xubuntu.seed boot=casper persistent iso-scan/filename=/xubuntu-9.10-desktop-i386.iso splash initrd /casper/initrd.lz

title Kubuntu 9.10
  find --set-root /kubuntu-9.10-desktop-i386.iso map /kubuntu-9.10-desktop-i386.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/kubuntu.seed boot=casper persistent iso-scan/filename=/kubuntu-9.10-desktop-i386.iso splash initrd /casper/initrd.lz

# Ubuntu Server and Netbook Remix suggested by Wojciech Holek
title Ubuntu 9.10 Server Edition Installer (Installer Only)
  find --set-root /ubuntu-9.10-server-i386.iso map /ubuntu-9.10-server-i386.iso (0xff) map --hook root (0xff) kernel /install/vmlinuz file=/cdrom/preseed/ubuntu-server.seed boot=install iso-scan/filename=/ubuntu-9.10-server-i386.iso splash initrd /install/initrd.gz

title Ubuntu 9.10 Netbook Remix (NetBook Distro)
  find --set-root /ubuntu-9.10-netbook-remix-i386.iso map /ubuntu-9.10-netbook-remix-i386.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/netbook-remix.seed boot=casper persistent iso-scan/filename=/ubuntu-9.10-netbook-remix-i386.iso splash initrd /casper/initrd.lz

title Ubuntu 9.10 Rescue Remix (Recovery Tools)
  find --set-root /ubuntu-rescue-remix-9-10-revision1.iso map /ubuntu-rescue-remix-9-10-revision1.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/ubuntu.seed boot=casper iso-scan/filename=/ubuntu-rescue-remix-9-10-revision1.iso splash initrd /casper/initrd.lz

title DSL 4.4.10
  find --set-root /dsl-4.4.10-initrd.iso map --mem /dsl-4.4.10-initrd.iso (hd32) map --hook root (hd32) chainloader (hd32)

title AVG Rescue CD (Anti-Virus + Anti-Spyware)
  find --set-root /avg_arl_en_90_100114.iso map /avg_arl_en_90_100114.iso (hd32) map --hook chainloader (hd32)

title Ultimate Boot CD 4.11
  find --set-root /ubcd411.iso map /ubcd411.iso (hd32) map --hook chainloader (hd32)

title OphCrack XP 2.3.1 (XP Password Cracker)
  find --set-root /ophcrack-xp-livecd-2.3.1.iso map /ophcrack-xp-livecd-2.3.1.iso (0xff) map --hook root (0xff) kernel /boot/bzImage rw root=/dev/null vga=normal lang=C kmap=us screen=1024x768x16 autologin initrd /boot/rootfs.gz

title OphCrack Vista 2.3.1 (Vista Password Cracker)
  find --set-root /ophcrack-vista-livecd-2.3.1.iso map /ophcrack-vista-livecd-2.3.1.iso (0xff) map --hook root (0xff) kernel /boot/bzImage rw root=/dev/null vga=normal lang=C kmap=us screen=1024x768x16 autologin initrd /boot/rootfs.gz

# Suggested by Greg Steer
title Offline NT Password & Registy Editor
  find --set-root /cd080802.iso map /cd080802.iso (hd32) map --hook chainloader (hd32)

title SliTaz 2.0
  find --set-root /slitaz-2.0.iso map --mem /slitaz-2.0.iso (hd32) map --hook chainloader (hd32)

title Riplinux 9.3
  find --set-root /RIPLinuX-9.3.iso map --heads=0 --sectors-per-track=0 /RIPLinuX-9.3.iso (0xff) || map --heads=0 --sectors-per-track=0 --mem /RIPLinuX-9.3.iso (0xff) map --hook chainloader (0xff)

# Suggested by Sunny
title YlmF (Windows Like OS)
  find --set-root /YlmF_OS_EN_v1.0.iso map /YlmF_OS_EN_v1.0.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/ubuntu.seed boot=casper persistent iso-scan/filename=/YlmF_OS_EN_v1.0.iso splash initrd /casper/initrd.lz

# Suggested by Martin Andersson
title DBAN 1.0.7 (Drive Nuker)
  find --set-root /dban-1.0.7_i386.iso map --mem /dban-1.0.7_i386.iso (hd32) map --hook root (hd32) chainloader (hd32)

# Suggested by Robin McGough
title xPUD 0.9.2 (NetBook Distro)
  find --set-root --ignore-floppies --ignore-cd /xpud-0.9.2.iso map --heads=0 --sectors-per-track=0 /xpud-0.9.2.iso (hd32) map --hook chainloader (hd32)

title Puppy 4.3.1
  find --set-root /puppy/pup-431.sfs kernel /puppy/vmlinuz initrd /puppy/initrd.gz

# Suggested by Relst
title Run a Linux OS from the Internet
  kernel /gpxe.lkrn

I also put some .iso files for OS installers (Windows XP SP2 and Ubuntu 10.04), but they didn't show up in the list when I booted. Do I need to:
- extract the .iso files and put them in their respective folders?
- add the OS entries that I added to the menu.lst?
How do I add an ISO image (OS) to the menu.lst? Before adding the .iso files I first made a folder named "Windows xp sp2" and then placed the .iso files in there. Please help; I think I need to add the folder name or the file name to the menu.lst, but I don't know how.

    Read the article


  • Migrating from SQL Trace to Extended Events

    - by extended_events
In SQL Server codenamed "Denali" we are moving our diagnostic tracing capabilities forward by building a system on top of Extended Events. With every new system you face the specter of migration, which is always a bit of a hassle. I'm obviously motivated to see everyone move their diagnostic tracing systems over to the new Extended Events based system, so I wanted to make sure we lowered the bar for the migration process to help ease your trials. In my initial post on Denali CTP 1 I described a couple of tables that we created that will help map the existing SQL Trace Event Classes to the equivalent Extended Events events. In this post I'll describe the tables in a bit more detail, explain the relationship between the SQL Trace objects (Event Class & Column) and Extended Events objects (Events & Actions), and at the end provide some sample code for a managed stored procedure that will take an existing SQL Trace session (e.g. a trace that you can see in sys.traces) and convert it into event session DDL. Can you relate? In some ways, SQL Trace and Extended Events are kind of like the Standard and Metric measuring systems in the United States. If you spend too much time trying to figure out how to convert between the two it will probably make your head hurt. It's often better to just use the new system without trying to translate between the two. That said, people like to relate new things to the things they're comfortable with, so, with some trepidation, I will now explain how these two systems are related to each other. First, some terms… SQL Trace is made up of Event Classes and Columns. The Event Class occurs as the result of some activity in the database engine; for example, SQL:Batch Completed fires when a batch has completed executing on the server. Each Event Class can have any number of Columns associated with it, and those Columns contain the data that is interesting about the Event Class, such as the duration or database name. In Extended Events we have objects named Events, EventData fields, and Actions. The Event (some people call this an xEvent but I'll stick with Event) is equivalent to the Event Class in SQL Trace since it is the thing that occurs as the result of some activity taking place in the server. An EventData field (from now on I'll just refer to these as fields) is a piece of information that is highly correlated with the event and is always included as part of the schema of an Event. An Action is something that can be associated with any Event, and it will cause some additional "action" to occur whenever the parent Event occurs. Actions can do a number of different things; for example, there are Actions that collect additional data or take memory dumps. When mapping SQL Trace onto Extended Events, Columns are covered by a combination of both fields and Actions. Knowing exactly where a Column is covered by a field and where it is covered by an Action is a bit of an art, so we created the mapping tables to make you an Artist without the years of practice. Let me draw you a map. Event Mapping: the table dbo.trace_xe_event_map exists in the master database with the following structure:

Column_name       Type
trace_event_id    smallint
package_name      nvarchar
xe_event_name     nvarchar

By joining this table to sys.trace_events using trace_event_id and to sys.dm_xe_objects using xe_event_name, you can get a fair amount of information about how Event Classes are related to Events. The most basic query this lends itself to is to match an Event Class with the corresponding Event.
SELECT
    t.trace_event_id,
    t.name [event_class],
    e.package_name,
    e.xe_event_name
FROM sys.trace_events t
INNER JOIN dbo.trace_xe_event_map e
    ON t.trace_event_id = e.trace_event_id

There are a couple of things you'll notice as you peruse the output of this query: For the most part, the names of Events are fairly close to the original Event Class; e.g. SP:CacheMiss == sp_cache_miss, and so on. We've mostly stuck to a one-to-one mapping between Event Classes and Events, but there are a few cases where we have combined them when it made sense. For example, Data File Auto Grow, Log File Auto Grow, Data File Auto Shrink & Log File Auto Shrink are now all covered by a single event named database_file_size_change. This just seemed like a "smarter" implementation for this type of event; you can get all the same information from this single event (grow/shrink, Data/Log, Auto/Manual growth) without having multiple different events. You can use Predicates if you want to limit the output to just one of the original Event Class measures. There are some Event Classes that did not make the cut and were not migrated. These fall into two categories: there were a few Event Classes that had been deprecated, or that just did not make sense, so we didn't migrate them. (You won't find an Event related to mounting a tape - sorry.) The second class is bigger; with rare exception, we did not migrate any of the Event Classes that were related to Security Auditing using SQL Trace. We introduced the SQL Audit feature in SQL Server 2008 and that will be the compliance and auditing feature going forward. This is a very deliberate decision to support separation of duties for DBAs. There are separate permissions required for SQL Audit and Extended Events tracing, so you can assign these tasks to different people if you choose. (If you're wondering, the permission for Extended Events is ALTER ANY EVENT SESSION, which is covered by CONTROL SERVER.) Action Mapping: the table dbo.trace_xe_action_map exists in the master database with the following structure:

Column_name        Type
trace_column_id    smallint
package_name       nvarchar
xe_action_name     nvarchar

You can find more details by joining this to sys.trace_columns on the trace_column_id field.

SELECT
    c.trace_column_id,
    c.name [column_name],
    a.package_name,
    a.xe_action_name
FROM sys.trace_columns c
INNER JOIN dbo.trace_xe_action_map a
    ON c.trace_column_id = a.trace_column_id

If you examine this list, you'll notice that there are relatively few Actions that map to SQL Trace Columns given the number of Columns that exist. This is not because we forgot to migrate all the Columns, but because much of the data for individual Event Classes is included as part of the EventData fields of the equivalent Events, so there is no need to specify them as Actions. Putting it all together: if you've spent a bunch of time figuring out the inner workings of SQL Trace, and who hasn't, then you probably know that the typical set of Columns you find associated with any given Event Class in SQL Profiler is not fixed, but is determined by the contents of the table sys.trace_event_bindings. We've used this table along with the mapping tables to produce a list of Event + Action combinations that duplicate the SQL Profiler Event Class definitions using the following query, which you can also find in the Books Online topic How To: View the Extended Events Equivalents to SQL Trace Event Classes.
USE master;
GO
SELECT DISTINCT
    tb.trace_event_id,
    te.name AS 'Event Class',
    em.package_name AS 'Package',
    em.xe_event_name AS 'XEvent Name',
    tb.trace_column_id,
    tc.name AS 'SQL Trace Column',
    am.xe_action_name AS 'Extended Events action'
FROM (sys.trace_events te
    LEFT OUTER JOIN dbo.trace_xe_event_map em
        ON te.trace_event_id = em.trace_event_id)
LEFT OUTER JOIN sys.trace_event_bindings tb
    ON em.trace_event_id = tb.trace_event_id
LEFT OUTER JOIN sys.trace_columns tc
    ON tb.trace_column_id = tc.trace_column_id
LEFT OUTER JOIN dbo.trace_xe_action_map am
    ON tc.trace_column_id = am.trace_column_id
ORDER BY te.name, tc.name

As you might imagine, it's also possible to map an existing trace definition to the equivalent event session by judicious use of fn_trace_geteventinfo joined with the two mapping tables. This query extracts the list of Events and Actions equivalent to the trace with ID = 1, which is most likely the Default Trace. You can find this query, along with a set of other queries and steps required to migrate your existing traces over to Extended Events, in the Books Online topic How to: Convert an Existing SQL Trace Script to an Extended Events Session.

USE master;
GO
DECLARE @trace_id int
SET @trace_id = 1
SELECT DISTINCT
    el.eventid,
    em.package_name,
    em.xe_event_name AS 'event',
    el.columnid,
    ec.xe_action_name AS 'action'
FROM (sys.fn_trace_geteventinfo(@trace_id) AS el
    LEFT OUTER JOIN dbo.trace_xe_event_map AS em
        ON el.eventid = em.trace_event_id)
LEFT OUTER JOIN dbo.trace_xe_action_map AS ec
    ON el.columnid = ec.trace_column_id
WHERE em.xe_event_name IS NOT NULL
    AND ec.xe_action_name IS NOT NULL

You'll notice in the output that the list doesn't include any of the security audit Event Classes; as I wrote earlier, those were not migrated. But wait…there's more! If this were an infomercial there'd be some obnoxious guy next to me blogging "Well Mike…that's pretty neat, but I'm sure you can do more. Can't you make it even easier to migrate from SQL Trace?" Needless to say, I'd blog back, in an overly excited way, "You bet I can, obnoxious blogger side-kick!" What I've got for you here is an Extended Events Team Blog only special - this tool will not be sold in any store; it's a special offer for those of you reading the blog. I've wrapped all the logic of pulling the configuration information out of an existing trace and building the Extended Events DDL statement into a handy, dandy CLR stored procedure. Once you load the assembly and register the procedure, you just supply the trace id (from sys.traces) and provide a name for the event session. Run the procedure and out pops the DDL required to create an equivalent session. Any aspects of the trace that could not be duplicated are included as comments within the DDL output. This procedure does not actually create the event session - you need to copy the DDL out of the message tab and put it into a new query window to do that. It also requires an existing trace (but it doesn't have to be running) to evaluate; there is no functionality to parse T-SQL scripts. I'm not going to spend a bunch of time explaining the code here - the code is pretty well commented and hopefully easy to follow. If not, you can always post comments or hit the feedback button to send us some mail.
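To give a flavor of what comes out the other end, the emitted DDL for a trace capturing batch completions might look roughly like the sketch below; the session name and file path are assumptions of mine, and the event, action, and target names follow the Denali-era syntax. Once the procedure described next is registered, you would generate the real thing with the EXEC call shown:

-- A sketch of the kind of event session DDL the converter produces
CREATE EVENT SESSION [migrated_default_trace] ON SERVER
ADD EVENT sqlserver.sql_batch_completed(
    ACTION (sqlserver.client_app_name, sqlserver.database_id, sqlserver.server_principal_name))
ADD TARGET package0.event_file(SET filename = N'C:\Temp\migrated_default_trace.xel');
GO

-- Once the CLR procedure below is installed, generate DDL for trace ID 1 (the Default Trace)
EXEC CreateEventSessionFromTrace @trace_id = 1, @session_name = N'migrated_default_trace';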
Sample code: TraceToExtendedEventDDL. Installing the procedure: just in case you're not familiar with installing CLR procedures… once you've compiled the assembly you can load it using a script like this:

-- Context to master
USE master
GO
-- Create the assembly from a shared location.
CREATE ASSEMBLY TraceToXESessionConverter
FROM 'C:\Temp\TraceToXEventSessionConverter.dll'
WITH PERMISSION_SET = SAFE
GO
-- Create a stored procedure from the assembly.
CREATE PROCEDURE CreateEventSessionFromTrace
@trace_id int, @session_name nvarchar(max)
AS
EXTERNAL NAME TraceToXESessionConverter.StoredProcedures.ConvertTraceToExtendedEvent
GO

Enjoy! -Mike

    Read the article

  • Slides and Samples for My TechEd / Microsoft BI Conference Talks

    - by plitwin
I posted the slides and samples for the talks I delivered in New Orleans on June 8th at Microsoft TechEd and the Business Intelligence Conference. They can be downloaded from Paul Litwin's Conference Downloads. #1 Creating Report Subscriptions with SQL Server 2008 Reporting Services, at 8 AM on Tuesday, Room 241. In this session, learn how to set up standard and data-driven subscriptions using Report Manager. We discuss creating file-share, email, and null subscriptions, and how to deal with potential issues with parameters and security. We also demonstrate a sophisticated Microsoft ASP.NET-based application that creates subscriptions by calling the SSRS Web Services API. #2 ASP.NET MVC for Web Forms Programmers, at 3:15 PM Tuesday, Room 391. Are you comfortable creating ASP.NET Web Form applications but even a little curious about what all the fuss is about MVC and test-driven development? In this session, Web Form junkie Paul Litwin takes a critical look at the world of ASP.NET MVC, but not from any expert point of view. Instead, Paul shares his experience as a Web Form developer who decided to take a closer look at this radical new approach to ASP.NET development. Come hear what Paul learned and whether he plans to employ ASP.NET MVC in his future ASP.NET applications.

    Read the article

  • Setting up Github post-receive webhook with private Jenkins and private repo

    - by Joseph S.
I'm trying to set up a private GitHub project to send a post-receive request to a private Jenkins instance to trigger a project build on branch push. I'm using the latest Jenkins with the GitHub plugin. I believe I set up everything correctly on the Jenkins side, because sending a request from a public server with curl like this:

curl http://username:password@ipaddress:port/github-webhook/

results in:

Stacktrace: net.sf.json.JSONException: null object

which is fine, because the JSON payload is missing. Sending the wrong username and password in the URI results in:

Exception: Failed to login as username

I interpret this as a correct Jenkins configuration. Both of these requests also result in entries in the Jenkins log. However, when pasting the exact same URI into the GitHub repository's Post-Receive URLs Service Hook and clicking on Test Hook, absolutely nothing seems to happen on my server: nothing in the Jenkins log, and the GitHub Hook Log in the Jenkins project says "Polling has not run yet". I have run out of ideas and don't know how to proceed further.
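One way to narrow this down might be to simulate what GitHub actually sends, a form-encoded "payload" parameter carrying JSON, rather than an empty request. A sketch (the file name and its contents are assumptions):

curl -X POST http://username:password@ipaddress:port/github-webhook/ \
     --data-urlencode payload@payload.json

where payload.json would hold a minimal post-receive document, for example {"repository":{"name":"myrepo","url":"https://github.com/me/myrepo"},"ref":"refs/heads/master"}. If that triggers a build, the Jenkins side is fine and the problem is the hook delivery path between GitHub and the server.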

    Read the article

  • Restful Services, oData, and Rest Sharp

    - by jkrebsbach
After a great presentation by Jason Sheehan at MDC about RestSharp, I decided to implement it. RestSharp is a .NET framework for consuming RESTful data sources via either JSON or XML. My first step was to put together a RESTful data source for RestSharp to consume. Staying entirely within .NET, I decided to use Microsoft's oData implementation, built on System.Data.Services. Natively, these support JSON or atom+pub XML (XML with a few bells and whistles added on). There are three main steps for creating an oData data source:

1) Override CreateDSPMetaData. This is where the metadata is returned. The metadata defines the structure of the data to return: the relationships between data objects, along with what properties the objects expose. The metadata can and should be cached somehow so that the structure is not rebuilt with every data request.

2) Override CreateDataSource. The context contains the data the data source will publish. This method is the conduit which will populate the metadata objects to be returned to the requestor.

3) Implement static InitializeService. At this point we can set up security, along with setting up properties of the web service (versioning, etc.).

Here is a web service which publishes stock prices for various Products (stocks) in various Categories:

namespace RestService
{
    public class RestServiceImpl : DSPDataService<DSPContext>
    {
        private static DSPContext _context;
        private static DSPMetadata _metadata;

        /// <summary>
        /// Populate traversable data source
        /// </summary>
        protected override DSPContext CreateDataSource()
        {
            if (_context == null)
            {
                _context = new DSPContext();

                Category utilities = new Category(0);
                utilities.Name = "Electric";
                Category financials = new Category(1);
                financials.Name = "Financial";

                IList products = _context.GetResourceSetEntities("Products");

                Product electric = new Product(0, utilities);
                electric.Name = "ABC Electric";
                electric.Description = "Electric Utility";
                electric.Price = 3.5;
                products.Add(electric);

                Product water = new Product(1, utilities);
                water.Name = "XYZ Water";
                water.Description = "Water Utility";
                water.Price = 2.4;
                products.Add(water);

                Product banks = new Product(2, financials);
                banks.Name = "FatCat Bank";
                banks.Description = "A bank that's almost too big";
                banks.Price = 19.9; // This will never get to the client
                products.Add(banks);

                IList categories = _context.GetResourceSetEntities("Categories");
                categories.Add(utilities);
                categories.Add(financials);

                utilities.Products.Add(electric);
                utilities.Products.Add(water);
                financials.Products.Add(banks);
            }
            return _context;
        }

        /// <summary>
        /// Setup rules describing the published data structure - relationships
        /// between data, key field, other searchable fields, etc.
        /// </summary>
        protected override DSPMetadata CreateDSPMetaData()
        {
            if (_metadata == null)
            {
                _metadata = new DSPMetadata("DemoService", "DataServiceProviderDemo");

                // Define entity type product
                ResourceType product = _metadata.AddEntityType(typeof(Product), "Product");
                _metadata.AddKeyProperty(product, "ProductID");
                // Only add properties we wish to share with end users
                _metadata.AddPrimitiveProperty(product, "Name");
                _metadata.AddPrimitiveProperty(product, "Description");

                EntityPropertyMappingAttribute att = new EntityPropertyMappingAttribute("Name",
                    SyndicationItemProperty.Title, SyndicationTextContentKind.Plaintext, true);
                product.AddEntityPropertyMappingAttribute(att);
                att = new EntityPropertyMappingAttribute("Description",
                    SyndicationItemProperty.Summary, SyndicationTextContentKind.Plaintext, true);
                product.AddEntityPropertyMappingAttribute(att);

                // Define products as a set of product entities
                ResourceSet products = _metadata.AddResourceSet("Products", product);

                // Define entity type category
                ResourceType category = _metadata.AddEntityType(typeof(Category), "Category");
                _metadata.AddKeyProperty(category, "CategoryID");
                _metadata.AddPrimitiveProperty(category, "Name");
                _metadata.AddPrimitiveProperty(category, "Description");

                // Define categories as a set of category entities
                ResourceSet categories = _metadata.AddResourceSet("Categories", category);

                att = new EntityPropertyMappingAttribute("Name",
                    SyndicationItemProperty.Title, SyndicationTextContentKind.Plaintext, true);
                category.AddEntityPropertyMappingAttribute(att);
                att = new EntityPropertyMappingAttribute("Description",
                    SyndicationItemProperty.Summary, SyndicationTextContentKind.Plaintext, true);
                category.AddEntityPropertyMappingAttribute(att);

                // A product has a category, a category has products
                _metadata.AddResourceReferenceProperty(product, "Category", categories);
                _metadata.AddResourceSetReferenceProperty(category, "Products", products);
            }
            return _metadata;
        }

        /// <summary>
        /// Based on the requesting user, can set up permissions to Read, Write, etc.
        /// </summary>
        public static void InitializeService(DataServiceConfiguration config)
        {
            config.SetEntitySetAccessRule("*", EntitySetRights.All);
            config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
            config.DataServiceBehavior.AcceptProjectionRequests = true;
        }
    }
}

The objects prefixed with DSP come from the samples on the oData site: http://www.odata.org/developers. The products and categories objects are POCO business objects with no special modifiers. Three main options are available for defining the metadata of data sources in .NET:

1) Generate an Entity Data Model (potentially directly from a SQL Server database). This requires the least amount of manual interaction and uses the edmx WYSIWYG editor to generate a data model. This can be directly tied to the SQL Server database and generated from the database if you want a data access layer tightly coupled with your database.

2) Object model decorations. If you already have a POCO data layer, you can decorate your objects with attributes to statically inform the compiler how the objects are related. The disadvantage is there are now tags strewn about your business layer that need to be updated as the business rules change. (A short sketch of this option follows at the end of this post.)

3) Programmatically construct the metadata object. This is the approach illustrated above in CreateDSPMetaData. It puts all relationship information into one central programmatic location; business rules are constructed when the DSPMetadata response object is returned.

Once you have your service up and running, RestSharp is designed for XML / JSON, along with the native Microsoft library. There are currently some differences between how Jason made RestSharp expect XML and how atom+pub works, so I found better results currently with the JSON implementation - modifying the RestSharp XML parser to make an atom+pub parser is fairly trivial though, so use whichever implementation works best for you. I put together a sample console app which calls the RestSvcImpl.svc service defined above (and assumes it to be running on port 2000). I used both RestSharp as a client and also the default Microsoft oData client tools.

namespace RestConsole
{
    class Program
    {
        private static DataServiceContext _ctx;

        private enum DemoType
        {
            Xml,
            Json
        }

        static void Main(string[] args)
        {
            // Microsoft implementation
            _ctx = new DataServiceContext(new System.Uri("http://localhost:2000/RestServiceImpl.svc"));
            var msProducts = RunQuery<Product>("Products").ToList();
            var msCategory = RunQuery<Category>("/Products(0)/Category").AsEnumerable().Single();
            var msFilteredProducts = RunQuery<Product>("/Products?$filter=length(Name) ge 4").ToList();

            // RestSharp implementation
            DemoType demoType = DemoType.Json;
            var client = new RestClient("http://localhost:2000/RestServiceImpl.svc");
            client.ClearHandlers(); // Remove all available handlers

            // Set up handler depending on what situation dictates
            if (demoType == DemoType.Json)
                client.AddHandler("application/json", new RestSharp.Deserializers.JsonDeserializer());
            else if (demoType == DemoType.Xml)
            {
                client.AddHandler("application/atom+xml", new RestSharp.Deserializers.XmlDeserializer());
            }

            var request = new RestRequest();
            if (demoType == DemoType.Json)
                request.RootElement = "d"; // service root element for json
            else if (demoType == DemoType.Xml)
            {
                request.XmlNamespace = "http://www.w3.org/2005/Atom";
            }

            // Return all products
            request.Resource = "/Products?$orderby=Name";
            RestResponse<List<Product>> productsResp = client.Execute<List<Product>>(request);
            List<Product> products = productsResp.Data;

            // Find category for product with ProductID = 1
            request.Resource = string.Format("/Products(1)/Category");
            RestResponse<Category> categoryResp = client.Execute<Category>(request);
            Category category = categoryResp.Data;

            // Specialized queries
            request.Resource = string.Format("/Products?$filter=ProductID eq {0}", 1);
            RestResponse<Product> productResp = client.Execute<Product>(request);
            Product product = productResp.Data;

            request.Resource = string.Format("/Products?$filter=Name eq '{0}'", "XYZ Water");
            productResp = client.Execute<Product>(request);
            product = productResp.Data;
        }

        private static IEnumerable<TElement> RunQuery<TElement>(string queryUri)
        {
            try
            {
                return _ctx.Execute<TElement>(new Uri(queryUri, UriKind.Relative));
            }
            catch (Exception ex)
            {
                throw ex;
            }
        }
    }
}

Feel free to step through the code a few times and to attach a debugger to the service as well to see how and where the context and metadata objects are constructed and returned. Pay special attention to the response object being returned by the oData service - there are several properties of the RestRequest that can be used to help troubleshoot when the structure of the response is not exactly what would be expected.
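As promised above, here is a minimal sketch of option 2 (attribute decoration), assuming the classic ADO.NET Data Services attribute from System.Data.Services.Common:

using System.Data.Services.Common;

// Decorating the POCO tells the runtime which property is the entity key,
// replacing the programmatic AddKeyProperty call used in option 3
[DataServiceKey("ProductID")]
public class Product
{
    public int ProductID { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
    public double Price { get; set; }
    public Category Category { get; set; }
}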

    Read the article

  • Initially Unselected DropDownList

    - by Ricardo Peres
One of the most annoying things (IMHO) with the DropDownList is its inability to show an unselected value at load time, which is something that HTML does permit. I decided to extend the DropDownList to add this behavior. All that was needed was some JavaScript and reflection. See the result for yourself:

public class CustomDropDownList : DropDownList
{
    public CustomDropDownList()
    {
        this.InitiallyUnselected = true;
    }

    [DefaultValue(true)]
    public Boolean InitiallyUnselected { get; set; }

    protected override void OnInit(EventArgs e)
    {
        this.Page.RegisterRequiresControlState(this);
        this.Page.PreRenderComplete += this.OnPreRenderComplete;
        base.OnInit(e);
    }

    protected virtual void OnPreRenderComplete(Object sender, EventArgs args)
    {
        // Peek at ListControl's private cachedSelectedValue field to see if a selection was already made
        FieldInfo cachedSelectedValue = typeof(ListControl).GetField("cachedSelectedValue", BindingFlags.NonPublic | BindingFlags.Instance);

        if (String.IsNullOrEmpty(cachedSelectedValue.GetValue(this) as String) == true)
        {
            if (this.InitiallyUnselected == true)
            {
                // Clear the selection client-side once the page (or async postback) renders
                if ((ScriptManager.GetCurrent(this.Page) != null) && (ScriptManager.GetCurrent(this.Page).IsInAsyncPostBack == true))
                {
                    ScriptManager.RegisterStartupScript(this, this.GetType(), "unselect" + this.ClientID, "$get('" + this.ClientID + "').selectedIndex = -1;", true);
                }
                else
                {
                    this.Page.ClientScript.RegisterStartupScript(this.GetType(), "unselect" + this.ClientID, "$get('" + this.ClientID + "').selectedIndex = -1;", true);
                }
            }
        }
    }
}
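A quick usage sketch, assuming a page with a runat="server" form named form1 (the IDs and items here are illustrative, not from the original post):

// In the page's code-behind: add the control dynamically and populate it;
// on first render the list will show no selected item
protected void Page_Load(object sender, EventArgs e)
{
    var list = new CustomDropDownList { ID = "myList", InitiallyUnselected = true };
    list.Items.Add(new ListItem("First option", "1"));
    list.Items.Add(new ListItem("Second option", "2"));
    form1.Controls.Add(list);
}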

    Read the article

  • apt-get doesn't see packages in my trivial repository

    - by lorin
I've tried to set up a trivial repository with binary .debs for internal use, but apt-get doesn't see the packages. I've done the following:

On the web server:

Created the binary debs with dpkg-buildpackage
Put all of the binary debs in a web-accessible directory which corresponds to http://www.example.com/packages
Generated a Packages.gz file in the same directory by doing:

dpkg-scansources . /dev/null | gzip -9c > Packages.gz

On the client machine:

Added the following line to my /etc/apt/sources.list file:

deb http://www.example.com/packages /

Ran: sudo apt-get update

The output related to my trivial repository looked like this:

Ign http://www.example.com Release.gpg
Ign http://www.example.com/packages/ Translation-en_US
Ign http://www.example.com Release
Ign http://www.example.com Packages
Ign http://www.example.com Packages
Hit http://www.example.com Packages

But I can't install the package by name. For example, there's a package called "python-nova" which corresponds to the package python-nova_2011.3-custom~bzr680-0ubuntu1_all.deb. I've tried to do apt-get install python-nova, but I get the following error:

$ sudo apt-get install python-nova
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Couldn't find package python-nova
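As a hedged aside (not part of the original question): dpkg-scansources only indexes source packages, so for binary .debs the index would normally be generated with dpkg-scanpackages instead, which is a likely reason the repository index never yields installable packages:

# On the web server: index the binary packages in the current directory
dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz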

    Read the article

  • Oracle Coherence & Oracle Service Bus: REST API Integration

    - by Nino Guarnacci
This post aims to highlight one of the features of Oracle Coherence that allows it to be easily added to and integrated within a wide variety of projects: the REST API exposed by Coherence nodes, through which you can interact with the in-memory data grid.

Oracle Coherence and Oracle Service Bus are natively integrated through a feature of the Oracle Service Bus which allows you to use the Coherence grid cache when configuring a business service. This feature lets you use an intermediate cache layer to return the responses of previous invocations of the same service, without necessarily having to invoke the real business service again. Directly from the Oracle Service Bus web console, you can set the eviction policies for the cached objects/responses and define the discriminating parameters that identify their uniqueness.

The Coherence REST APIs, however, allow you to integrate both products for other needs, enabling new architectural designs. Consider a Coherence node as a simple service that interoperates through standard protocols, REST in particular (with JSON and XML). Think of Coherence as a company-wide shared service: a centralized "map and reduce" implementation that you can access through a huge variety of protocols (transports and envelopes). An amazing step forward for those who still think in terms of connectors and custom code: this type of integration does not require writing custom code or a complex implementation to be self-supporting. The added value comes from the considerable value of each product independently, and even more from their simple and robust integration. As already mentioned, this scenario opens a hidden new door behind these two products, leading to new ideas and perspectives for enterprise architectures that increasingly wink at next-generation applications: simple and dynamic, perhaps oriented towards mobile and Web 2.0.

Below is a small, simple demo that shows how easy it is to integrate these two products using the Coherence REST API; it is also intended to help you imagine new enterprise architectures based on this approach. The idea is to create a centralized alerting system, easily fed from any of the company's applications regardless of the technology they were built with, and then to expose it through a standard representation protocol - RSS - via a service published on the service bus, so you can browse and search only the alerts you are interested in, by category, author, title, date, and so on. The steps needed to implement this system are few and simple. They are listed below and described so they can easily be replicated in your environment. Remember that the demo is only meant to demonstrate how easy it is to integrate Oracle Coherence and Oracle Service Bus, and to stimulate your imagination towards new technological approaches.

1) Install the two products used in this demo (if necessary, consult the installation guides of the two products):
- Oracle Service Bus ver. 11.1.1.5.0: http://www.oracle.com/technetwork/middleware/service-bus/downloads/index.html
- Oracle Coherence ver. 3.7.1: http://www.oracle.com/technetwork/middleware/coherence/downloads/index.html

2) Since we chose to create a centralized alerting system, we need to define a structured type containing the alerting attributes used to preserve and organize the information of the various alerts sent by the different applications.
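The original post showed this class as an image that is missing here; as a stand-in, here is a minimal sketch, assuming plain JavaBean conventions, the oracle.cohsb package referenced in the configuration below, and a JAXB annotation for XML marshalling - the actual demo class may differ:

package oracle.cohsb;

import java.io.Serializable;
import javax.xml.bind.annotation.XmlRootElement;

// Minimal alert bean: serializable so it can be stored in the grid,
// annotated so the REST layer can marshal it to and from XML
@XmlRootElement
public class Alert implements Serializable {

    private String title;
    private String description;
    private String system;
    private long time;
    private String severity;

    public Alert() {
    }

    public String getTitle() { return title; }
    public void setTitle(String title) { this.title = title; }

    public String getDescription() { return description; }
    public void setDescription(String description) { this.description = description; }

    public String getSystem() { return system; }
    public void setSystem(String system) { this.system = system; }

    public long getTime() { return time; }
    public void setTime(long time) { this.time = time; }

    public String getSeverity() { return severity; }
    public void setSeverity(String severity) { this.severity = severity; }
}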
The class, named Alert, carries the canonical properties of an alert: Title, Description, System, Time and Severity.

3) Next we need to create two configuration files for the Coherence node, in order to store the Alert objects in the grid through the REST/HTTP protocol (in addition to the native APIs for Java, C++, C and .NET). Here are the two minimal configuration files for Coherence:

coherence-rest-config.xml
resty-server-config.xml

This minimal configuration allows me to use a distributed cache named "alerts" which can also be accessed via HTTP/REST on host "localhost" over port "8080"; the objects are of type oracle.cohsb.Alert.

4) The simple Java class sketched above represents the type of the alert messages.

5) At this point we just need to start up our Coherence node, listening over HTTP and managing the "alerts" cache, which will receive incoming XML or JSON objects of type Alert. Remember to include in the classpath of the Coherence node the Alert class, the Coherence libraries and the configuration files. Then run the Coherence node class com.tangosol.net.DefaultCacheServer, setting the following parameters:

-Dtangosol.coherence.log.level=9
-Dtangosol.coherence.log=stdout
-Dtangosol.coherence.cacheconfig=[PATH_TO_THE_FILE]\resty-server-config.xml

6) Let's create a procedure to test our Coherence configuration and insert some custom alerts into our cache. The technology used to achieve this hardly matters - JavaScript, Python, Ruby, Scala, C++, Java - because the protocol used to communicate with Coherence is simply HTTP with JSON or XML. For this little demo I chose Java, writing: a method to send/put an alert to the cache; a method to query and view the content of the cache; and finally a main method that executes both. No special libraries are added to the classpath (the JSON structure is statically defined). When executed, the program asks for some information such as title and description, composes and sends an alert to the cache, and then queries the same cache. A good exercise at this point would be to recreate the same procedure using other technologies, such as a simple HTML page containing some JavaScript, or Python, Ruby, and so on.

7) Now we are ready to start configuring the Oracle Service Bus in order to integrate the two products. First we integrate the internal alerting system of the Oracle Service Bus with our centralized, Coherence-based alerting system, so that from monitoring, or directly from within our proxy message flows, we can raise alerts and save them directly into the Coherence node. To do this I chose JMS, natively available in Oracle WebLogic / Service Bus. Access the Oracle WebLogic Administration Console and create and configure a new JMS connection factory and a new JMS destination (queue). Then create a new resource of type "alert destination" within the Oracle Service Bus project, configured with the newly created JMS connection factory and JMS destination.
Finally, in order to withdraw the alert messages enqueued in our JMS destination and send them to our Coherence node, we just need to create a new business service and a new proxy service within our Oracle Service Bus project. The business service is responsible for sending a message to our Coherence REST service using PUT as the method action. The proxy service collects the messages enqueued on the destination and executes an XQuery transformation on them, translating them into valid XML Alert objects to be sent to our Coherence service through the newly created business service. The message flow pipeline contains the XQuery transformation.

Incredibly, we just did a first, basic integration between the native alerting system of the Oracle Service Bus and our centralized alerting system, simply by configuring our Coherence node and without developing anything. It's time to test it out. To do this I created a proxy service that generates an alert through our "alert destination" whenever it is invoked. After a few invocations of this proxy, which generates fake alerts, we can open an Internet browser at http://localhost:8080/alerts/ to see what has been inserted into the Coherence node.

8) We are ready for the final step: a new message flow that can be used to search and display the results in a standard way. For this I chose RSS as the representation, so a formatted result can be displayed on a huge variety of devices, such as feed readers for iPhone and Android. The query can already be defined at request time, returning only the feed items we need. To do this we create a new business service, a new proxy service, and a new XQuery transformation that takes care of translating the collection of alerts returned by our Coherence node into a nicely formatted, standard RSS document.

So we start right from this resource (the XQuery), which transforms a collection of XML alerts returned from the Coherence node into a well-formed RSS 2.0 feed; then our new business service, which searches the alerts on our Coherence node using the REST API; and finally our last resource, the proxy service, exposed as an RSS feed to the various mobile devices and traditional web readers, in which we intercept any search query and transform the result returned by the business service into an RSS 2.0 feed. The message flow includes the transformation phase (Alert TO Feed Items).

Finally, some little tricks to follow during the routing to the business service:

- check for any query present in the URL, to request only a subset of alerts
- set the HTTP header "Accept", to get an answer in XML instead of JSON

In our little demo we also statically added some Coherence parameters to the request: sort=time:desc;start=0;count=100 - asking Coherence to return the results sorted by time, descending, starting from the first and up to a maximum of 100.

Done!! Just incredible: our centralized alerting system is ready.
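As an aside, the same query and Accept-header tricks can be exercised from any HTTP client. Here is a hedged sketch in plain Java; the endpoint, filter and parameter syntax mirror those used in this post and may need adjusting for your environment:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class AlertQueryClient {
    public static void main(String[] args) throws Exception {
        // CohQL-style filter, URL-encoded; the severity value is illustrative
        String filter = URLEncoder.encode("severity = 'Fatal'", "UTF-8");
        // Query the "alerts" cache on the Coherence REST endpoint, sorted by time descending
        URL url = new URL("http://localhost:8080/alerts?q=" + filter + "&sort=time:desc");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        // Ask for XML instead of the default JSON, as described above
        conn.setRequestProperty("Accept", "application/xml");
        BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
        for (String line; (line = in.readLine()) != null; ) {
            System.out.println(line); // dump the raw feed of matching alerts
        }
        in.close();
    }
}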
It inherits all the qualities and capabilities of the two products involved, Oracle Coherence and Oracle Service Bus: RASP (Reliability, Availability, Scalability, Performance).

Now try using your mobile device, or a normal Internet browser, to access the RSS feed just published. Some URLs you may test:

Search for the last 100 alerts:
http://localhost:7001/alarms

Search for alerts whose time property is not null (time is not null):
http://localhost:7001/alarms?q=time+is+not+null

Search for alerts whose system property is "Web Browser" (system = 'Web Browser'):
http://localhost:7001/alarms?q=system+%3D+%27Web+Browser%27

Search for alerts whose system property is "Web Browser", severity is "Fatal" and title contains the word "Javascript" (system = 'Web Browser' and severity = 'Fatal' and title like '%Javascript%'):
http://localhost:8080/alerts?q=system+%3D+%27Web+Browser%27+AND+severity+%3D+%27Fatal%27+AND+title+LIKE+%27%25Javascript%25%27

To compose more complex queries for your needs, I would suggest reading the chapter of the Coherence documentation on CohQL (Coherence Query Language): http://download.oracle.com/docs/cd/E24290_01/coh.371/e22837/api_cq.htm

Some useful links:
- Oracle Coherence REST API documentation: http://download.oracle.com/docs/cd/E24290_01/coh.371/e22839/rest_intro.htm
- Oracle Service Bus documentation: http://download.oracle.com/docs/cd/E21764_01/soa.htm#osb
- REST explanation on Wikipedia: http://en.wikipedia.org/wiki/Representational_state_transfer

The whole material for this demo can be downloaded from http://blogs.oracle.com/slc/resource/cosb/coh-sb-demo.zip

Author: Nino Guarnacci.

    Read the article

  • Razor – Hiding a Section in a Layout

    - by João Angelo
Layouts in Razor allow you to define placeholders, named sections, where content pages may insert custom content, much like the ContentPlaceHolder available in ASPX master pages. When you define a section in a Razor layout, it's possible to specify whether the section must be defined in every content page using the layout, or whether its definition is optional, allowing a page not to provide any content for that section. For the latter case, it's also possible, using the IsSectionDefined method, to render default content when a page does not define the section.

However, if you ever need to hide a given section from all pages based on some runtime condition, you might be tempted to conditionally define it in the layout, much like in the following code snippet:

@if (condition)
{
    @RenderSection("ConditionalSection", false)
}

With this code you'll hit an error as soon as any content page provides content for the section, which makes sense: if a page inherits a layout, then it should only define sections that are also defined in it.

To work around this scenario you have a couple of options. One is to make the given section optional and move the condition that enables or disables it into every content page. This leads to code duplication, and future pages may forget to define the section only under that same condition. The other option is to conditionally define the section in the layout page using the following hack:

@{
    if (condition)
    {
        @RenderSection("ConditionalSection", false)
    }
    else
    {
        RenderSection("ConditionalSection", false).WriteTo(TextWriter.Null);
    }
}

Hack inspired by a recent stackoverflow question.
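For reference, the IsSectionDefined approach mentioned above might look like this in a layout (a minimal sketch; the section name "Sidebar" and the fallback markup are illustrative):

@* Render the page's section when it is defined; otherwise fall back to default markup *@
@if (IsSectionDefined("Sidebar"))
{
    @RenderSection("Sidebar")
}
else
{
    <p>Default sidebar content.</p>
}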

    Read the article

< Previous Page | 403 404 405 406 407 408 409 410 411 412 413 414  | Next Page >