Search Results

Search found 24245 results on 970 pages for 'date model'.


  • SQL SERVER – Import CSV into Database – Transferring File Content into a Database Table using CSVexpress

    - by pinaldave
    One of the most common data integration tasks I run into is a desire to move data from a file into a database table.  Generally the user is familiar with his data, the structure of the file, and the database table, but is unfamiliar with data integration tools and therefore views this task as something that is difficult.  What these users really need is a point and click approach that minimizes the learning curve for the data integration tool.  This is what CSVexpress (www.CSVexpress.com) is all about!  It is based on expressor Studio, a data integration tool I’ve been reviewing over the last several months. With CSVexpress, moving data between data sources can be as simple as providing the database connection details, describing the structure of the incoming and outgoing data and then connecting two pre-programmed operators.  There’s no need to learn the intricacies of the data integration tool or to write code.  Let’s look at an example. Suppose I have a comma separated value data file with data similar to the following, which is a listing of terminated employees that includes their hiring and termination date, department, job description, and final salary.

        EMP_ID,STRT_DATE,END_DATE,JOB_ID,DEPT_ID,SALARY
        102,13-JAN-93,24-JUL-98 17:00,Programmer,60,"$85,000"
        101,21-SEP-89,27-OCT-93 17:00,Account Representative,110,"$65,000"
        103,28-OCT-93,15-MAR-97 17:00,Account Manager,110,"$75,000"
        304,17-FEB-96,19-DEC-99 17:00,Marketing,20,"$45,000"
        333,24-MAR-98,31-DEC-99 17:00,Data Entry Clerk,50,"$35,000"
        100,17-SEP-87,17-JUN-93 17:00,Administrative Assistant,90,"$40,000"
        334,24-MAR-98,31-DEC-98 17:00,Sales Representative,80,"$40,000"
        400,01-JAN-99,31-DEC-99 17:00,Sales Manager,80,"$55,000"

    Notice the concise format used for the date values, the fact that the termination date includes both date and time information, and that the salary is clearly identified as money by the dollar sign and digit grouping.  In moving this data to a database table I want to express the dates using a format that includes the century since it’s obvious that this listing could include employees who left the company in both the 20th and 21st centuries, and I want the salary to be stored as a decimal value without the currency symbol and grouping character.  Most data integration tools would require coding within a transformation operation to effect these changes, but not expressor Studio.  Directives for these modifications are included in the description of the incoming data. Besides starting the expressor Studio tool and opening a project, the first step is to create connection artifacts, which describe to expressor where data is stored.  For this example, two connection artifacts are required: a file connection, which encapsulates the file system location of my file; and a database connection, which encapsulates the database connection information.  With expressor Studio, I use wizards to create these artifacts. First click New Connection > File Connection in the Home tab of expressor Studio’s ribbon bar, which starts the File Connection wizard.  In the first window, I enter the path to the directory that contains the input file.  Note that the file connection artifact only specifies the file system location, not the name of the file. Then I click Next and enter a meaningful name for this connection artifact; clicking Finish closes the wizard and saves the artifact.
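    As an aside, it may help to see what the schema directives described below actually accomplish. The following hedged C# sketch illustrates the equivalent transformations (the century pivot for two-digit years and the currency-to-decimal conversion); it is not part of expressor Studio or CSVexpress, and the helper names are hypothetical.

        using System;
        using System.Globalization;

        class TransformSketch
        {
            // Two-digit years greater than 01 map to 19xx, years of 01 or earlier to 20xx,
            // mirroring the Y01 directive described later in this article.
            static DateTime ParseWithCenturyPivot(string value, string format)
            {
                var culture = new CultureInfo("en-US");
                culture.DateTimeFormat.Calendar.TwoDigitYearMax = 2001; // pivot at '01'
                return DateTime.ParseExact(value, format, culture);
            }

            // Strip the currency symbol and digit grouping, e.g. "$85,000" -> 85000.
            static decimal ParseSalary(string value)
            {
                return decimal.Parse(value, NumberStyles.Currency, CultureInfo.GetCultureInfo("en-US"));
            }

            static void Main()
            {
                Console.WriteLine(ParseWithCenturyPivot("13-JAN-93", "dd-MMM-yy"));             // 13 Jan 1993
                Console.WriteLine(ParseWithCenturyPivot("24-JUL-98 17:00", "dd-MMM-yy HH:mm")); // 24 Jul 1998, 17:00
                Console.WriteLine(ParseSalary("$85,000"));                                      // 85000
            }
        }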
    To create the Database Connection artifact, I must know the location, or instance name, of the target database and have the credentials of an account with sufficient privileges to write to the target table.  To use expressor Studio’s features to the fullest, this account should also have the authority to create a table. I click New Connection > Database Connection in the Home tab of expressor Studio’s ribbon bar, which starts the Database Connection wizard.  expressor Studio includes high-performance drivers for many relational database management systems, so I can simply make a selection from the “Supplied database drivers” drop down control.  If my desired RDBMS isn’t listed, I can optionally use an existing ODBC DSN by selecting the “Existing DSN” radio button. In the following window, I enter the connection details.  With Microsoft SQL Server, I may choose to use Windows Authentication rather than account credentials.  After clicking Next, I enter a meaningful name for this connection artifact; clicking Finish closes the wizard and saves the artifact. Now I create a schema artifact, which describes the structure of the file data.  When expressor reads a file, all data fields are typed as strings.  In some use cases this may be exactly what is needed and there is no need to edit the schema artifact.  But in this example, editing the schema artifact will be used to specify how the data should be transformed; that is, reformat the dates to include century designations, change the employee and department IDs to integers, and convert the salary to a decimal value. Again a wizard is used to create the schema artifact.  I click New Schema > Delimited Schema in the Home tab of expressor Studio’s ribbon bar, which starts the Delimited Schema wizard.  In the first window, I click Get Data from File, which then displays a listing of the file connections in the project.  When I click on the file connection I previously created, a browse window opens to this file system location; I then select the file and click Open, which imports 10 lines from the file into the wizard. I now view the file’s content and confirm that the appropriate delimiter characters are selected in the “Field Delimiter” and “Record Delimiter” drop down controls; then I click Next. Since the input file includes a header row, I can easily indicate that fields in the file should be identified through the corresponding header value by clicking “Set All Names from Selected Row.” Alternatively, I could enter a different identifier into the Field Details > Name text box.  I click Next and enter a meaningful name for this schema artifact; clicking Finish closes the wizard and saves the artifact. Now I open the schema artifact in the schema editor.  When I first view the schema’s content, I note that the types of all attributes in the Semantic Type (the right-hand panel) are strings and that the attribute names are the same as the field names in the data file.  To change an attribute’s name and type, I highlight the attribute and click Edit in the Attributes grouping on the Schema > Edit tab of the editor’s ribbon bar.  This opens the Edit Attribute window; I can change the attribute name and select the desired type from the “Data type” drop down control.  In this example, I change the name of each attribute to the name of the corresponding database table column (EmployeeID, StartingDate, TerminationDate, JobDescription, DepartmentID, and FinalSalary). 
    Then for the EmployeeID and DepartmentID attributes, I select Integer as the data type; for the StartingDate and TerminationDate attributes, I select Datetime as the data type; and for the FinalSalary attribute, I select the Decimal type. But I can do much more in the schema editor.  For the datetime attributes, I can set a constraint that ensures that the data adheres to some predetermined specifications; a starting date must be later than January 1, 1980 (the date on which the company began operations) and a termination date must be earlier than 11:59 PM on December 31, 1999.  I simply select the appropriate constraint and enter the value (1980-01-01 00:00 as the starting date and 1999-12-31 23:59 as the termination date). As a last step in setting up these datetime conversions, I edit the mapping, describing the format of each datetime type in the source file. I highlight the mapping line for the StartingDate attribute and click Edit Mapping in the Mappings grouping on the Schema > Edit tab of the editor’s ribbon bar.  This opens the Edit Mapping window in which I either enter, or select, a format that describes how the datetime values are represented in the file.  Note the use of Y01 as the syntax for the year.  This syntax tells expressor Studio to derive the century by treating any two-digit year greater than 01 as falling in the 20th century and any year of 01 or earlier as falling in the 21st century.  As each datetime value is read from the file, the two-digit year values are transformed into century and year values. For the TerminationDate attribute, my format also indicates that the datetime value includes hours and minutes. And now to the FinalSalary attribute. I open its mapping and in the Edit Mapping window select the Currency tab and the “Use currency” check box.  This indicates that the file data will include the dollar sign (or in Europe the Pound or Euro sign), which should be removed. And on the Grouping tab, I select the “Use grouping” checkbox and enter 3 into the “Group size” text box, a comma into the “Grouping character” text box, and a decimal point into the “Decimal separator” character text box. These entries allow the string to be properly converted into a decimal value. By making these entries into the schema that describes my input file, I’ve specified how I want the data transformed prior to writing to the database table and completely removed the requirement for coding within the data integration application itself. Assembling the data integration application is simple.  Onto the canvas I drag the Read File and Write Table operators, connecting the output of the Read File operator to the input of the Write Table operator. Next, I select the Read File operator and its Properties panel opens on the right-hand side of expressor Studio.  For each property, I can select an appropriate entry from the corresponding drop down control.  Clicking on the button to the right of the “File name” text box opens the file system location specified in the file connection artifact, allowing me to select the appropriate input file.  I also indicate that the first row in the file, the header row, should be skipped, and that any record that fails one of the datetime constraints should be skipped. I then select the Write Table operator and in its Properties panel specify the database connection, normal for the “Mode,” and the “Truncate” and “Create Missing Table” options. 
    If my target table does not yet exist, expressor will create the table using the information encapsulated in the schema artifact assigned to the operator. The last task needed to complete the application is to create the schema artifact used by the Write Table operator.  This is extremely easy as another wizard is capable of using the schema artifact assigned to the Read File operator to create a schema artifact for the Write Table operator.  In the Write Table Properties panel, I click the drop down control to the right of the “Schema” property and select “New Table Schema from Upstream Output…” from the drop down menu. The wizard first displays the table description and in its second screen asks me to select the database connection artifact that specifies the RDBMS in which the target table will exist.  The wizard then connects to the RDBMS and retrieves a list of database schemas from which I make a selection.  The fourth screen gives me the opportunity to fine-tune the table’s description.  In this example, I set the width of the JobDescription column to a maximum of 40 characters and select money as the type of the FinalSalary column.  I also provide the name for the table. This completes development of the application.  The entire application was created through the use of wizards and the required data transformations specified through simple constraints and specifications rather than through coding.  To develop this application, I only needed a basic understanding of expressor Studio, a level of expertise that can be gained by working through a few introductory tutorials.  expressor Studio is as close to a point and click data integration tool as one could want and I urge you to try this product if you have a need to move data between files or from files to database tables. Check out CSVexpress in more detail.  It offers a few basic video tutorials and a preview of expressor Studio 3.5, which will support reading data from and writing data to Salesforce.com. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • HTML5 Form Validation

    - by Stephen.Walther
    The latest versions of Google Chrome (16+), Mozilla Firefox (8+), and Internet Explorer (10+) all support HTML5 client-side validation. It is time to take HTML5 validation seriously. The purpose of the blog post is to describe how you can take advantage of HTML5 client-side validation regardless of the type of application that you are building. You learn how to use the HTML5 validation attributes, how to perform custom validation using the JavaScript validation constraint API, and how to simulate HTML5 validation on older browsers by taking advantage of a jQuery plugin. Finally, we discuss the security issues related to using client-side validation. Using Client-Side Validation Attributes The HTML5 specification discusses several attributes which you can use with INPUT elements to perform client-side validation including the required, pattern, min, max, step, and maxlength attributes. For example, you use the required attribute to require a user to enter a value for an INPUT element. The following form demonstrates how you can make the firstName and lastName form fields required: <!DOCTYPE html> <html > <head> <title>Required Demo</title> </head> <body> <form> <label> First Name: <input required title="First Name is Required!" /> </label> <label> Last Name: <input required title="Last Name is Required!" /> </label> <button>Register</button> </form> </body> </html> If you attempt to submit this form without entering a value for firstName or lastName then you get the validation error message: Notice that the value of the title attribute is used to display the validation error message “First Name is Required!”. The title attribute does not work this way with the current version of Firefox. If you want to display a custom validation error message with Firefox then you need to include an x-moz-errormessage attribute like this: <input required title="First Name is Required!" x-moz-errormessage="First Name is Required!" /> The pattern attribute enables you to validate the value of an INPUT element against a regular expression. For example, the following form includes a social security number field which includes a pattern attribute: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Pattern</title> </head> <body> <form> <label> Social Security Number: <input required pattern="^\d{3}-\d{2}-\d{4}$" title="###-##-####" /> </label> <button>Register</button> </form> </body> </html> The regular expression in the form above requires the social security number to match the pattern ###-##-####: Notice that the input field includes both a pattern and a required validation attribute. If you don’t enter a value then the regular expression is never triggered. You need to include the required attribute to force a user to enter a value and cause the value to be validated against the regular expression. Custom Validation You can take advantage of the HTML5 constraint validation API to perform custom validation. You can perform any custom validation that you need. The only requirement is that you write a JavaScript function. 
For example, when booking a hotel room, you might want to validate that the Arrival Date is in the future instead of the past: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Constraint Validation API</title> </head> <body> <form> <label> Arrival Date: <input id="arrivalDate" type="date" required /> </label> <button>Submit Reservation</button> </form> <script type="text/javascript"> var arrivalDate = document.getElementById("arrivalDate"); arrivalDate.addEventListener("input", function() { var value = new Date(arrivalDate.value); if (value < new Date()) { arrivalDate.setCustomValidity("Arrival date must be after now!"); } else { arrivalDate.setCustomValidity(""); } }); </script> </body> </html> The form above contains an input field named arrivalDate. Entering a value into the arrivalDate field triggers the input event. The JavaScript code adds an event listener for the input event and checks whether the date entered is greater than the current date. If validation fails then the validation error message “Arrival date must be after now!” is assigned to the arrivalDate input field by calling the setCustomValidity() method of the validation constraint API. Otherwise, the validation error message is cleared by calling setCustomValidity() with an empty string. HTML5 Validation and Older Browsers But what about older browsers? For example, what about Apple Safari and versions of Microsoft Internet Explorer older than Internet Explorer 10? What the world really needs is a jQuery plugin which provides backwards compatibility for the HTML5 validation attributes. If a browser supports the HTML5 validation attributes then the plugin would do nothing. Otherwise, the plugin would add support for the attributes. Unfortunately, as far as I know, this plugin does not exist. I have not been able to find any plugin which supports both the required and pattern attributes for older browsers, but does not get in the way of these attributes in the case of newer browsers. There are several jQuery plugins which provide partial support for the HTML5 validation attributes including: · jQuery Validation — http://docs.jquery.com/Plugins/Validation · html5Form — http://www.matiasmancini.com.ar/jquery-plugin-ajax-form-validation-html5.html · h5Validate — http://ericleads.com/h5validate/ The jQuery Validation plugin – the most popular JavaScript validation library – supports the HTML5 required attribute, but it does not support the HTML5 pattern attribute. Likewise, the html5Form plugin does not support the pattern attribute. The h5Validate plugin provides the best support for the HTML5 validation attributes. 
The following page illustrates how this plugin supports both the required and pattern attributes: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>h5Validate</title> <style type="text/css"> .validationError { border: solid 2px red; } .validationValid { border: solid 2px green; } </style> </head> <body> <form id="customerForm"> <label> First Name: <input id="firstName" required /> </label> <label> Social Security Number: <input id="ssn" required pattern="^\d{3}-\d{2}-\d{4}$" title="Expected pattern is ###-##-####" /> </label> <input type="submit" /> </form> <script type="text/javascript" src="Scripts/jquery-1.4.4.min.js"></script> <script type="text/javascript" src="Scripts/jquery.h5validate.js"></script> <script type="text/javascript"> // Enable h5Validate plugin $("#customerForm").h5Validate({ errorClass: "validationError", validClass: "validationValid" }); // Prevent form submission when errors $("#customerForm").submit(function (evt) { if ($("#customerForm").h5Validate("allValid") === false) { evt.preventDefault(); } }); </script> </body> </html> When an input field fails validation, the validationError CSS class is applied to the field and the field appears with a red border. When an input field passes validation, the validationValid CSS class is applied to the field and the field appears with a green border. From the perspective of HTML5 validation, the h5Validate plugin is the best of the plugins. It adds support for the required and pattern attributes to browsers which do not natively support these attributes such as IE9. However, this plugin does not include everything in my wish list for a perfect HTML5 validation plugin. Here’s my wish list for the perfect back compat HTML5 validation plugin: 1. The plugin would disable itself when used with a browser which natively supports HTML5 validation attributes. The plugin should not be too greedy – it should not handle validation when a browser could do the work itself. 2. The plugin should simulate the same user interface for displaying validation error messages as the user interface displayed by browsers which natively support HTML5 validation. Chrome, Firefox, and Internet Explorer all display validation errors in a popup. The perfect plugin would also display a popup. 3. Finally, the plugin would add support for the setCustomValidity() method and the other methods of the HTML5 validation constraint API. That way, you could implement custom validation in a standards compatible way and you would know that it worked across all browsers both old and new. Security It would be irresponsible of me to end this blog post without mentioning the issue of security. It is important to remember that any client-side validation — including HTML5 validation — can be bypassed. You should use client-side validation with the intention to create a better user experience. Client validation is great for providing a user with immediate feedback when the user is in the process of completing a form. However, client-side validation cannot prevent an evil hacker from submitting unexpected form data to your web server. You should always enforce your validation rules on the server. The only way to ensure that a required field has a value is to verify that the required field has a value on the server. The HTML5 required attribute does not guarantee anything. Summary The goal of this blog post was to describe the support for validation contained in the HTML5 standard. 
You learned how to use both the required and the pattern attributes in an HTML5 form. We also discussed how you can implement custom validation by taking advantage of the setCustomValidity() method. Finally, I discussed the available jQuery plugins for adding support for the HTML5 validation attributes to older browsers. Unfortunately, I am unaware of any jQuery plugin which provides a perfect solution to the problem of backwards compatibility.
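    To round out the security point above, here is a minimal sketch of enforcing the same required and pattern rules on the server, assuming an ASP.NET MVC application with data annotations; the model and controller names are hypothetical and only illustrate the idea that the server re-checks everything the browser claimed to validate.

        using System.ComponentModel.DataAnnotations;
        using System.Web.Mvc;

        // Mirrors the client-side rules from the examples above on the server.
        public class RegistrationModel
        {
            [Required(ErrorMessage = "First Name is Required!")]
            public string FirstName { get; set; }

            [Required]
            [RegularExpression(@"^\d{3}-\d{2}-\d{4}$", ErrorMessage = "Expected pattern is ###-##-####")]
            public string SocialSecurityNumber { get; set; }
        }

        public class RegistrationController : Controller
        {
            [HttpPost]
            public ActionResult Register(RegistrationModel model)
            {
                // Runs regardless of what the browser did (or did not) validate.
                if (!ModelState.IsValid)
                    return View(model);
                return RedirectToAction("Success");
            }
        }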

    Read the article

  • Profile System: Users share the same id

    - by Malcolm Frexner
    I have a strange effect on my site when it is under heavy load. I randomly get the properties of other users settings. I have my own implementation of the profile system so I guess I can not blame the profile system itself. I just need a point to start debugging from. I guess there is a cookie-value that maps to an Profile entry somewhere. Is there any chance to see how this mapping works? Here is my profile provider: using System; using System.Text; using System.Configuration; using System.Web; using System.Web.Profile; using System.Collections; using System.Collections.Specialized; using B2CShop.Model; using log4net; using System.Collections.Generic; using System.Diagnostics; using B2CShop.DAL; using B2CShop.Model.RepositoryInterfaces; [assembly: log4net.Config.XmlConfigurator()] namespace B2CShop.Profile { public class B2CShopProfileProvider : ProfileProvider { private static readonly ILog _log = LogManager.GetLogger(typeof(B2CShopProfileProvider)); // Get an instance of the Profile DAL using the ProfileDALFactory private static readonly B2CShop.DAL.UserRepository dal = new B2CShop.DAL.UserRepository(); // Private members private const string ERR_INVALID_PARAMETER = "Invalid Profile parameter:"; private const string PROFILE_USER = "User"; private static string applicationName = B2CShop.Model.Configuration.ApplicationConfiguration.MembershipApplicationName; /// <summary> /// The name of the application using the custom profile provider. /// </summary> public override string ApplicationName { get { return applicationName; } set { applicationName = value; } } /// <summary> /// Initializes the provider. /// </summary> /// <param name="name">The friendly name of the provider.</param> /// <param name="config">A collection of the name/value pairs representing the provider-specific attributes specified in the configuration for this provider.</param> public override void Initialize(string name, NameValueCollection config) { if (config == null) throw new ArgumentNullException("config"); if (string.IsNullOrEmpty(config["description"])) { config.Remove("description"); config.Add("description", "B2C Shop Custom Provider"); } if (string.IsNullOrEmpty(name)) name = "b2c_shop"; if (config["applicationName"] != null && !string.IsNullOrEmpty(config["applicationName"].Trim())) applicationName = config["applicationName"]; base.Initialize(name, config); } /// <summary> /// Returns the collection of settings property values for the specified application instance and settings property group. 
/// </summary> /// <param name="context">A System.Configuration.SettingsContext describing the current application usage.</param> /// <param name="collection">A System.Configuration.SettingsPropertyCollection containing the settings property group whose values are to be retrieved.</param> /// <returns>A System.Configuration.SettingsPropertyValueCollection containing the values for the specified settings property group.</returns> public override SettingsPropertyValueCollection GetPropertyValues(SettingsContext context, SettingsPropertyCollection collection) { string username = (string)context["UserName"]; bool isAuthenticated = (bool)context["IsAuthenticated"]; //if (!isAuthenticated) return null; int uniqueID = dal.GetUniqueID(username, isAuthenticated, false, ApplicationName); SettingsPropertyValueCollection svc = new SettingsPropertyValueCollection(); foreach (SettingsProperty prop in collection) { SettingsPropertyValue pv = new SettingsPropertyValue(prop); switch (pv.Property.Name) { case PROFILE_USER: if (!String.IsNullOrEmpty(username)) { pv.PropertyValue = GetUser(uniqueID); } break; default: throw new ApplicationException(ERR_INVALID_PARAMETER + " name."); } svc.Add(pv); } return svc; } /// <summary> /// Sets the values of the specified group of property settings. /// </summary> /// <param name="context">A System.Configuration.SettingsContext describing the current application usage.</param> /// <param name="collection">A System.Configuration.SettingsPropertyValueCollection representing the group of property settings to set.</param> public override void SetPropertyValues(SettingsContext context, SettingsPropertyValueCollection collection) { string username = (string)context["UserName"]; CheckUserName(username); bool isAuthenticated = (bool)context["IsAuthenticated"]; int uniqueID = dal.GetUniqueID(username, isAuthenticated, false, ApplicationName); if (uniqueID == 0) { uniqueID = dal.CreateProfileForUser(username, isAuthenticated, ApplicationName); } foreach (SettingsPropertyValue pv in collection) { if (pv.PropertyValue != null) { switch (pv.Property.Name) { case PROFILE_USER: SetUser(uniqueID, (UserInfo)pv.PropertyValue); break; default: throw new ApplicationException(ERR_INVALID_PARAMETER + " name."); } } } UpdateActivityDates(username, false); } // Profile getters // Retrieve UserInfo private static UserInfo GetUser(int userID) { return dal.GetUser(userID); } // Update account info private static void SetUser(int uniqueID, UserInfo user) { user.UserID = uniqueID; dal.SetUser(user); } // UpdateActivityDates // Updates the LastActivityDate and LastUpdatedDate values // when profile properties are accessed by the // GetPropertyValues and SetPropertyValues methods. // Passing true as the activityOnly parameter will update // only the LastActivityDate. private static void UpdateActivityDates(string username, bool activityOnly) { dal.UpdateActivityDates(username, activityOnly, applicationName); } /// <summary> /// Deletes profile properties and information for the supplied list of profiles. 
/// </summary> /// <param name="profiles">A System.Web.Profile.ProfileInfoCollection of information about profiles that are to be deleted.</param> /// <returns>The number of profiles deleted from the data source.</returns> public override int DeleteProfiles(ProfileInfoCollection profiles) { int deleteCount = 0; foreach (ProfileInfo p in profiles) if (DeleteProfile(p.UserName)) deleteCount++; return deleteCount; } /// <summary> /// Deletes profile properties and information for profiles that match the supplied list of user names. /// </summary> /// <param name="usernames">A string array of user names for profiles to be deleted.</param> /// <returns>The number of profiles deleted from the data source.</returns> public override int DeleteProfiles(string[] usernames) { int deleteCount = 0; foreach (string user in usernames) if (DeleteProfile(user)) deleteCount++; return deleteCount; } // DeleteProfile // Deletes profile data from the database for the specified user name. private static bool DeleteProfile(string username) { CheckUserName(username); return dal.DeleteAnonymousProfile(username, applicationName); } // Verifies user name for size and comma private static void CheckUserName(string userName) { if (string.IsNullOrEmpty(userName) || userName.Length > 256 || userName.IndexOf(",") > 0) throw new ApplicationException(ERR_INVALID_PARAMETER + " user name."); } /// <summary> /// Deletes all user-profile data for profiles in which the last activity date occurred before the specified date. /// </summary> /// <param name="authenticationOption">One of the System.Web.Profile.ProfileAuthenticationOption values, specifying whether anonymous, authenticated, or both types of profiles are deleted.</param> /// <param name="userInactiveSinceDate">A System.DateTime that identifies which user profiles are considered inactive. If the System.Web.Profile.ProfileInfo.LastActivityDate value of a user profile occurs on or before this date and time, the profile is considered inactive.</param> /// <returns>The number of profiles deleted from the data source.</returns> public override int DeleteInactiveProfiles(ProfileAuthenticationOption authenticationOption, DateTime userInactiveSinceDate) { string[] userArray = new string[0]; dal.GetInactiveProfiles((int)authenticationOption, userInactiveSinceDate, ApplicationName).CopyTo(userArray, 0); return DeleteProfiles(userArray); } /// <summary> /// Retrieves profile information for profiles in which the user name matches the specified user names. 
/// </summary> /// <param name="authenticationOption">One of the System.Web.Profile.ProfileAuthenticationOption values, specifying whether anonymous, authenticated, or both types of profiles are returned.</param> /// <param name="usernameToMatch">The user name to search for.</param> /// <param name="pageIndex">The index of the page of results to return.</param> /// <param name="pageSize">The size of the page of results to return.</param> /// <param name="totalRecords">When this method returns, contains the total number of profiles.</param> /// <returns>A System.Web.Profile.ProfileInfoCollection containing user-profile information // for profiles where the user name matches the supplied usernameToMatch parameter.</returns> public override ProfileInfoCollection FindProfilesByUserName(ProfileAuthenticationOption authenticationOption, string usernameToMatch, int pageIndex, int pageSize, out int totalRecords) { CheckParameters(pageIndex, pageSize); return GetProfileInfo(authenticationOption, usernameToMatch, null, pageIndex, pageSize, out totalRecords); } /// <summary> /// Retrieves profile information for profiles in which the last activity date occurred on or before the specified date and the user name matches the specified user name. /// </summary> /// <param name="authenticationOption">One of the System.Web.Profile.ProfileAuthenticationOption values, specifying whether anonymous, authenticated, or both types of profiles are returned.</param> /// <param name="usernameToMatch">The user name to search for.</param> /// <param name="userInactiveSinceDate">A System.DateTime that identifies which user profiles are considered inactive. If the System.Web.Profile.ProfileInfo.LastActivityDate value of a user profile occurs on or before this date and time, the profile is considered inactive.</param> /// <param name="pageIndex">The index of the page of results to return.</param> /// <param name="pageSize">The size of the page of results to return.</param> /// <param name="totalRecords">When this method returns, contains the total number of profiles.</param> /// <returns>A System.Web.Profile.ProfileInfoCollection containing user profile information for inactive profiles where the user name matches the supplied usernameToMatch parameter.</returns> public override ProfileInfoCollection FindInactiveProfilesByUserName(ProfileAuthenticationOption authenticationOption, string usernameToMatch, DateTime userInactiveSinceDate, int pageIndex, int pageSize, out int totalRecords) { CheckParameters(pageIndex, pageSize); return GetProfileInfo(authenticationOption, usernameToMatch, userInactiveSinceDate, pageIndex, pageSize, out totalRecords); } /// <summary> /// Retrieves user profile data for all profiles in the data source. 
/// </summary> /// <param name="authenticationOption">One of the System.Web.Profile.ProfileAuthenticationOption values, specifying whether anonymous, authenticated, or both types of profiles are returned.</param> /// <param name="pageIndex">The index of the page of results to return.</param> /// <param name="pageSize">The size of the page of results to return.</param> /// <param name="totalRecords">When this method returns, contains the total number of profiles.</param> /// <returns>A System.Web.Profile.ProfileInfoCollection containing user-profile information for all profiles in the data source.</returns> public override ProfileInfoCollection GetAllProfiles(ProfileAuthenticationOption authenticationOption, int pageIndex, int pageSize, out int totalRecords) { CheckParameters(pageIndex, pageSize); return GetProfileInfo(authenticationOption, null, null, pageIndex, pageSize, out totalRecords); } /// <summary> /// Retrieves user-profile data from the data source for profiles in which the last activity date occurred on or before the specified date. /// </summary> /// <param name="authenticationOption">One of the System.Web.Profile.ProfileAuthenticationOption values, specifying whether anonymous, authenticated, or both types of profiles are returned.</param> /// <param name="userInactiveSinceDate">A System.DateTime that identifies which user profiles are considered inactive. If the System.Web.Profile.ProfileInfo.LastActivityDate of a user profile occurs on or before this date and time, the profile is considered inactive.</param> /// <param name="pageIndex">The index of the page of results to return.</param> /// <param name="pageSize">The size of the page of results to return.</param> /// <param name="totalRecords">When this method returns, contains the total number of profiles.</param> /// <returns>A System.Web.Profile.ProfileInfoCollection containing user-profile information about the inactive profiles.</returns> public override ProfileInfoCollection GetAllInactiveProfiles(ProfileAuthenticationOption authenticationOption, DateTime userInactiveSinceDate, int pageIndex, int pageSize, out int totalRecords) { CheckParameters(pageIndex, pageSize); return GetProfileInfo(authenticationOption, null, userInactiveSinceDate, pageIndex, pageSize, out totalRecords); } /// <summary> /// Returns the number of profiles in which the last activity date occurred on or before the specified date. /// </summary> /// <param name="authenticationOption">One of the System.Web.Profile.ProfileAuthenticationOption values, specifying whether anonymous, authenticated, or both types of profiles are returned.</param> /// <param name="userInactiveSinceDate">A System.DateTime that identifies which user profiles are considered inactive. If the System.Web.Profile.ProfileInfo.LastActivityDate of a user profile occurs on or before this date and time, the profile is considered inactive.</param> /// <returns>The number of profiles in which the last activity date occurred on or before the specified date.</returns> public override int GetNumberOfInactiveProfiles(ProfileAuthenticationOption authenticationOption, DateTime userInactiveSinceDate) { int inactiveProfiles = 0; ProfileInfoCollection profiles = GetProfileInfo(authenticationOption, null, userInactiveSinceDate, 0, 0, out inactiveProfiles); return inactiveProfiles; } //Verifies input parameters for page size and page index. 
private static void CheckParameters(int pageIndex, int pageSize) { if (pageIndex < 1 || pageSize < 1) throw new ApplicationException(ERR_INVALID_PARAMETER + " page index."); } //GetProfileInfo //Retrieves a count of profiles and creates a //ProfileInfoCollection from the profile data in the //database. Called by GetAllProfiles, GetAllInactiveProfiles, //FindProfilesByUserName, FindInactiveProfilesByUserName, //and GetNumberOfInactiveProfiles. //Specifying a pageIndex of 0 retrieves a count of the results only. private static ProfileInfoCollection GetProfileInfo(ProfileAuthenticationOption authenticationOption, string usernameToMatch, object userInactiveSinceDate, int pageIndex, int pageSize, out int totalRecords) { ProfileInfoCollection profiles = new ProfileInfoCollection(); totalRecords = 0; // Count profiles only. if (pageSize == 0) return profiles; int counter = 0; int startIndex = pageSize * (pageIndex - 1); int endIndex = startIndex + pageSize - 1; DateTime dt = new DateTime(1900, 1, 1); if (userInactiveSinceDate != null) dt = (DateTime)userInactiveSinceDate; /* foreach(CustomProfileInfo profile in dal.GetProfileInfo((int)authenticationOption, usernameToMatch, dt, applicationName, out totalRecords)) { if(counter >= startIndex) { ProfileInfo p = new ProfileInfo(profile.UserName, profile.IsAnonymous, profile.LastActivityDate, profile.LastUpdatedDate, 0); profiles.Add(p); } if(counter >= endIndex) { break; } counter++; } */ return profiles; } } } This is how I use it in the controller: public ActionResult AddTyreToCart(CartViewModel model) { string profile = Request.IsAuthenticated ? Request.AnonymousID : User.Identity.Name; } I would like to debug: How can 2 users who provide different cookies get the same profileid? EDIT Here is the code for getuniqueid public int GetUniqueID(string userName, bool isAuthenticated, bool ignoreAuthenticationType, string appName) { SqlParameter[] parms = { new SqlParameter("@Username", SqlDbType.VarChar, 256), new SqlParameter("@ApplicationName", SqlDbType.VarChar, 256)}; parms[0].Value = userName; parms[1].Value = appName; if (!ignoreAuthenticationType) { Array.Resize(ref parms, parms.Length + 1); parms[2] = new SqlParameter("@IsAnonymous", SqlDbType.Bit) { Value = !isAuthenticated }; } int userID; object retVal = null; retVal = SqlHelper.ExecuteScalar(ConfigurationManager.ConnectionStrings["SQLOrderB2CConnString"].ConnectionString, CommandType.StoredProcedure, "getProfileUniqueID", parms); if (retVal == null) userID = CreateProfileForUser(userName, isAuthenticated, appName); else userID = Convert.ToInt32(retVal); return userID; } And this is the SP: CREATE PROCEDURE [dbo].[getProfileUniqueID] @Username VarChar( 256), @ApplicationName VarChar( 256), @IsAnonymous bit = null AS BEGIN SET NOCOUNT ON; /* [getProfileUniqueID] created 08.07.2009 mf Retrive unique id for current user */ SELECT UniqueID FROM dbo.Profiles WHERE Username = @Username AND ApplicationName = @ApplicationName AND IsAnonymous = @IsAnonymous or @IsAnonymous = null END
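    As a starting point for the debugging the poster asks for, one option is to instrument the provider so that each cookie-to-profile resolution is logged and concurrent requests can be compared. This is a hedged sketch only, reusing the _log field already present in the class; everything after the logging line is the method body as posted above.

        // Hypothetical instrumentation at the top of GetPropertyValues:
        public override SettingsPropertyValueCollection GetPropertyValues(SettingsContext context, SettingsPropertyCollection collection)
        {
            string username = (string)context["UserName"];
            bool isAuthenticated = (bool)context["IsAuthenticated"];
            int uniqueID = dal.GetUniqueID(username, isAuthenticated, false, ApplicationName);

            // Log the mapping so two requests resolving to the same uniqueID become visible in the log.
            _log.DebugFormat("GetPropertyValues: user='{0}' authenticated={1} -> uniqueID={2} (thread {3})",
                username, isAuthenticated, uniqueID, System.Threading.Thread.CurrentThread.ManagedThreadId);

            // ... rest of the method unchanged ...
        }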

    Read the article

  • Windows 7 Phone Database – Querying with Views and Filters

    - by SeanMcAlinden
    I’ve just added a feature to Rapid Repository to greatly improve how the Windows 7 Phone Database is queried for performance (this is in the trunk, not in Release V1.0). The main concept behind it is to create a View Model class which would have only the minimum data you need for a page. This View Model is then stored and retrieved rather than the whole list of entities. Another feature of the views is that they can be pre-filtered to even further improve performance when querying. You can download the source from the Microsoft Codeplex site http://rapidrepository.codeplex.com/. Setting up a view Let’s say you have an entity that stores lots of data about a game result for example: GameScore entity public class GameScore : IRapidEntity {     public Guid Id { get; set; }     public string GamerId {get;set;}     public string Name { get; set; }     public Double Score { get; set; }     public Byte[] ThumbnailAvatar { get; set; }     public DateTime DateAdded { get; set; } }   On your page you want to display a list of scores but you only want to display the score and the date added, so you create a View Model for displaying just those properties. GameScoreView public class GameScoreView : IRapidView {     public Guid Id { get; set; }     public Double Score { get; set; }     public DateTime DateAdded { get; set; } }   Now you have the view model, the first thing to do is set up the view at application start up. This is done using the following syntax. View Setup public MainPage() {     RapidRepository<GameScore>.AddView<GameScoreView>(x => new GameScoreView { DateAdded = x.DateAdded, Score = x.Score }); } As you can see, using a little bit of lambda syntax, you put in the code for constructing a single view; this is used internally for mapping an entity to a view. *Note* you do not need to map the Id property, this is done automatically; a view model id will always be the same as that of its corresponding entity.   Adding Filters One of the cool features of the view is that you can add filters to limit the amount of data stored in the view, which will dramatically improve performance. You can add multiple filters using the fluent syntax if required. In this example, let’s say that you will only ever show the scores for the last 10 days, you could add a filter like the following: Add single filter public MainPage() {     RapidRepository<GameScore>.AddView<GameScoreView>(x => new GameScoreView { DateAdded = x.DateAdded, Score = x.Score })         .AddFilter(x => x.DateAdded > DateTime.Now.AddDays(-10)); } If you wanted to further limit the data, you could also say only scores above 100: Add multiple filters public MainPage() {     RapidRepository<GameScore>.AddView<GameScoreView>(x => new GameScoreView { DateAdded = x.DateAdded, Score = x.Score })         .AddFilter(x => x.DateAdded > DateTime.Now.AddDays(-10))         .AddFilter(x => x.Score > 100); }   Querying the view model So the important part is how to query the data. This is done using the repository; there is a method called Query which accepts the type of view as a generic parameter (you can have multiple View Model types per entity type). You can either use the result of the query method directly or perform further querying on the result if required. 
Querying the View public void DisplayScores() {     RapidRepository<GameScore> repository = new RapidRepository<GameScore>();     List<GameScoreView> scores = repository.Query<GameScoreView>();       // display logic } Further Filtering public void TodaysScores() {     RapidRepository<GameScore> repository = new RapidRepository<GameScore>();     List<GameScoreView> todaysScores = repository.Query<GameScoreView>().Where(x => x.DateAdded > DateTime.Now.AddDays(-1)).ToList();       // display logic }   Retrieving the actual entity Retrieving the actual entity can be done easily by using the GetById method on the repository. Say for example you allow the user to click on a specific score to get further information, you can use the Id populated in the returned View Model GameScoreView and use it directly on the repository to retrieve the full entity. Get Full Entity public void GetFullEntity(Guid gameScoreViewId) {     RapidRepository<GameScore> repository = new RapidRepository<GameScore>();     GameScore fullEntity = repository.GetById(gameScoreViewId);       // display logic } Synchronising The View If you are upgrading from Rapid Repository V1.0 and are likely to have data in the repository already, you will need to perform a synchronisation to ensure the views and entities are fully in sync. You can either do this as a one off during the application upgrade or if you are a little more cautious, you could run this at each application start up. Synchronise the view public void MyUpgradeTasks() {     RapidRepository<GameScore>.SynchroniseView<GameScoreView>(); } It’s worth noting that in normal operation, the view keeps itself in sync with the entities so this is only really required if you are upgrading from V1.0 to V2.0 when it gets released shortly.   Summary I really hope you like this feature, it will be great for performance and I believe supports good practice by promoting the use of View Models for specific pages. I’m hoping to produce a beta for this over the next few days, I just want to add some more tests and hopefully iron out any bugs. I would really appreciate any thoughts on this feature and would really love to know of any bugs you find. You can download the source from the following : http://rapidrepository.codeplex.com/ Kind Regards, Sean McAlinden.

    Read the article

  • Installing Lubuntu 14.04.1 forcepae fails

    - by Rantanplan
    I tried to install Lubuntu 14.04.1 from a CD. First, I chose Try Lubuntu without installing which gave: ERROR: PAE is disabled on this Pentium M (PAE can potentially be enabled with kernel parameter "forcepae" ... Following the description on https://help.ubuntu.com/community/PAE, I used forcepae and tried Try Lubuntu without installing again. That worked fine. dmesg | grep -i pae showed: [ 0.000000] Kernel command line: file=/cdrom/preseed/lubuntu.seed boot=casper initrd=/casper/initrd.lz quiet splash -- forcepae [ 0.008118] PAE forced! On the live-CD session, I tried installing Lubuntu double clicking on the install button on the desktop. Here, the CD starts running but then stops running and nothing happens. Next, I rebooted and tried installing Lubuntu directly from the boot menu screen using forcepae again. After a while, I receive the following error message: The installer encountered an unrecoverable error. A desktop session will now be run so that you may investigate the problem or try installing again. Hitting Enter brings me to the desktop. For what errors should I search? And how? Finally, I rebooted once more and tried Check disc for defects with forcepae option; no errors have been found. Now, I am wondering how to find the error or whether it would be better to follow advice c in https://help.ubuntu.com/community/PAE: "Move the hard disk to a computer on which the processor has PAE capability and PAE flag (that is, almost everything else than a Banias). Install the system as usual but don't add restricted drivers. After the install move the disk back." Thanks for some hints! Perhaps some of the following can help: On Lubuntu 12.04: cat /proc/cpuinfo processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 13 model name : Intel(R) Pentium(R) M processor 1.50GHz stepping : 6 microcode : 0x17 cpu MHz : 600.000 cache size : 2048 KB fdiv_bug : no hlt_bug : no f00f_bug : no coma_bug : no fpu : yes fpu_exception : yes cpuid level : 2 wp : yes flags : fpu vme de pse tsc msr mce cx8 mtrr pge mca cmov clflush dts acpi mmx fxsr sse sse2 ss tm pbe up bts est tm2 bogomips : 1284.76 clflush size : 64 cache_alignment : 64 address sizes : 32 bits physical, 32 bits virtual power management: uname -a Linux humboldt 3.2.0-67-generic #101-Ubuntu SMP Tue Jul 15 17:45:51 UTC 2014 i686 i686 i386 GNU/Linux lsb_release -a No LSB modules are available. 
Distributor ID: Ubuntu Description: Ubuntu 12.04.5 LTS Release: 12.04 Codename: precise cpuid eax in eax ebx ecx edx 00000000 00000002 756e6547 6c65746e 49656e69 00000001 000006d6 00000816 00000180 afe9f9bf 00000002 02b3b001 000000f0 00000000 2c04307d 80000000 80000004 00000000 00000000 00000000 80000001 00000000 00000000 00000000 00000000 80000002 20202020 20202020 65746e49 2952286c 80000003 6e655020 6d756974 20295228 7270204d 80000004 7365636f 20726f73 30352e31 007a4847 Vendor ID: "GenuineIntel"; CPUID level 2 Intel-specific functions: Version 000006d6: Type 0 - Original OEM Family 6 - Pentium Pro Model 13 - Stepping 6 Reserved 0 Brand index: 22 [not in table] Extended brand string: " Intel(R) Pentium(R) M processor 1.50GHz" CLFLUSH instruction cache line size: 8 Feature flags afe9f9bf: FPU Floating Point Unit VME Virtual 8086 Mode Enhancements DE Debugging Extensions PSE Page Size Extensions TSC Time Stamp Counter MSR Model Specific Registers MCE Machine Check Exception CX8 COMPXCHG8B Instruction SEP Fast System Call MTRR Memory Type Range Registers PGE PTE Global Flag MCA Machine Check Architecture CMOV Conditional Move and Compare Instructions FGPAT Page Attribute Table CLFSH CFLUSH instruction DS Debug store ACPI Thermal Monitor and Clock Ctrl MMX MMX instruction set FXSR Fast FP/MMX Streaming SIMD Extensions save/restore SSE Streaming SIMD Extensions instruction set SSE2 SSE2 extensions SS Self Snoop TM Thermal monitor 31 reserved TLB and cache info: b0: unknown TLB/cache descriptor b3: unknown TLB/cache descriptor 02: Instruction TLB: 4MB pages, 4-way set assoc, 2 entries f0: unknown TLB/cache descriptor 7d: unknown TLB/cache descriptor 30: unknown TLB/cache descriptor 04: Data TLB: 4MB pages, 4-way set assoc, 8 entries 2c: unknown TLB/cache descriptor On Lubuntu 14.04.1 live-CD with forcepae: cat /proc/cpuinfo processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 13 model name : Intel(R) Pentium(R) M processor 1.50GHz stepping : 6 microcode : 0x17 cpu MHz : 600.000 cache size : 2048 KB physical id : 0 siblings : 1 core id : 0 cpu cores : 1 apicid : 0 initial apicid : 0 fdiv_bug : no f00f_bug : no coma_bug : no fpu : yes fpu_exception : yes cpuid level : 2 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 sep mtrr pge mca cmov clflush dts acpi mmx fxsr sse sse2 ss tm pbe bts est tm2 bogomips : 1284.68 clflush size : 64 cache_alignment : 64 address sizes : 36 bits physical, 32 bits virtual power management: uname -a Linux lubuntu 3.13.0-32-generic #57-Ubuntu SMP Tue Jul 15 03:51:12 UTC 2014 i686 i686 i686 GNU/Linux lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 14.04.1 LTS Release: 14.04 Codename: trusty cpuid CPU 0: vendor_id = "GenuineIntel" version information (1/eax): processor type = primary processor (0) family = Intel Pentium Pro/II/III/Celeron/Core/Core 2/Atom, AMD Athlon/Duron, Cyrix M2, VIA C3 (6) model = 0xd (13) stepping id = 0x6 (6) extended family = 0x0 (0) extended model = 0x0 (0) (simple synth) = Intel Pentium M (Dothan B1) / Celeron M (Dothan B1), 90nm miscellaneous (1/ebx): process local APIC physical ID = 0x0 (0) cpu count = 0x0 (0) CLFLUSH line size = 0x8 (8) brand index = 0x16 (22) brand id = 0x16 (22): Intel Pentium M, .13um feature information (1/edx): x87 FPU on chip = true virtual-8086 mode enhancement = true debugging extensions = true page size extensions = true time stamp counter = true RDMSR and WRMSR support = true physical address extensions = false machine check exception = true CMPXCHG8B inst. 
= true APIC on chip = false SYSENTER and SYSEXIT = true memory type range registers = true PTE global bit = true machine check architecture = true conditional move/compare instruction = true page attribute table = true page size extension = false processor serial number = false CLFLUSH instruction = true debug store = true thermal monitor and clock ctrl = true MMX Technology = true FXSAVE/FXRSTOR = true SSE extensions = true SSE2 extensions = true self snoop = true hyper-threading / multi-core supported = false therm. monitor = true IA64 = false pending break event = true feature information (1/ecx): PNI/SSE3: Prescott New Instructions = false PCLMULDQ instruction = false 64-bit debug store = false MONITOR/MWAIT = false CPL-qualified debug store = false VMX: virtual machine extensions = false SMX: safer mode extensions = false Enhanced Intel SpeedStep Technology = true thermal monitor 2 = true SSSE3 extensions = false context ID: adaptive or shared L1 data = false FMA instruction = false CMPXCHG16B instruction = false xTPR disable = false perfmon and debug = false process context identifiers = false direct cache access = false SSE4.1 extensions = false SSE4.2 extensions = false extended xAPIC support = false MOVBE instruction = false POPCNT instruction = false time stamp counter deadline = false AES instruction = false XSAVE/XSTOR states = false OS-enabled XSAVE/XSTOR = false AVX: advanced vector extensions = false F16C half-precision convert instruction = false RDRAND instruction = false hypervisor guest status = false cache and TLB information (2): 0xb0: instruction TLB: 4K, 4-way, 128 entries 0xb3: data TLB: 4K, 4-way, 128 entries 0x02: instruction TLB: 4M pages, 4-way, 2 entries 0xf0: 64 byte prefetching 0x7d: L2 cache: 2M, 8-way, sectored, 64 byte lines 0x30: L1 cache: 32K, 8-way, 64 byte lines 0x04: data TLB: 4M pages, 4-way, 8 entries 0x2c: L1 data cache: 32K, 8-way, 64 byte lines extended feature flags (0x80000001/edx): SYSCALL and SYSRET instructions = false execution disable = false 1-GB large page support = false RDTSCP = false 64-bit extensions technology available = false Intel feature flags (0x80000001/ecx): LAHF/SAHF supported in 64-bit mode = false LZCNT advanced bit manipulation = false 3DNow! PREFETCH/PREFETCHW instructions = false brand = " Intel(R) Pentium(R) M processor 1.50GHz" (multi-processing synth): none (multi-processing method): Intel leaf 1 (synth) = Intel Pentium M (Dothan B1), 90nm

    Read the article

  • Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 1

    - by rajbk
    The Open Data Protocol, referred to as OData, is a new data-sharing standard that breaks down silos and fosters an interoperable ecosystem for data consumers (clients) and producers (services) that is far more powerful than currently possible. It enables more applications to make sense of a broader set of data, and helps every data service and client add value to the whole ecosystem. WCF Data Services (previously known as ADO.NET Data Services) was the first Microsoft technology to support the Open Data Protocol, shipping in Visual Studio 2008 SP1. It provides developers with client libraries for .NET, Silverlight, AJAX, PHP and Java. Microsoft now also supports OData in SQL Server 2008 R2, Windows Azure Storage, Excel 2010 (through PowerPivot), and SharePoint 2010. Many other applications are in the works. * This post walks you through how to create an OData feed, define a shape for the data and pre-filter the data using Visual Studio 2010, WCF Data Services and the Entity Framework. A sample project is attached at the bottom of Part 2 of this post. Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 2 Create the Web Application File –› New –› Project, select “ASP.NET Empty Web Application” Add the Entity Data Model Right click on the Web Application in the Solution Explorer and select “Add New Item..” Select “ADO.NET Entity Data Model” under “Data”. Name the Model “Northwind” and click “Add”.   In the “Choose Model Contents” screen, select “Generate Model From Database” and click “Next”.   Define a connection to your database containing the Northwind database in the next screen. We are going to expose the Products table through our OData feed. Select “Products” in the “Choose your Database Object” screen.   Click “Finish”. We are done creating our Entity Data Model. Save the Northwind.edmx file created. Add the WCF Data Service Right click on the Web Application in the Solution Explorer and select “Add New Item..” Select “WCF Data Service” from the list and call the service “DataService” (creative, huh?). Click “Add”.   Enable Access to the Data Service Open the DataService.svc.cs class. The class is well commented and instructs us on the next steps. public class DataService : DataService< /* TODO: put your data source class name here */ > { // This method is called only once to initialize service-wide policies. public static void InitializeService(DataServiceConfiguration config) { // TODO: set rules to indicate which entity sets and service operations are visible, updatable, etc. // Examples: // config.SetEntitySetAccessRule("MyEntityset", EntitySetRights.AllRead); // config.SetServiceOperationAccessRule("MyServiceOperation", ServiceOperationRights.All); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; } } Replace the comment that starts with “/* TODO:” with “NorthwindEntities” (the entity container name of the Model we created earlier).  WCF Data Services is locked down by default, FTW! No data is exposed without you explicitly setting it. You have to explicitly specify which entity sets you wish to expose and what rights are allowed by using the SetEntitySetAccessRule. The SetServiceOperationAccessRule on the other hand sets rules for a specified operation. Let us define an access rule to expose the Products entity we created earlier. We use EntitySetRights.AllRead since we want to give read-only access. Our modified code is shown below. 
public class DataService : DataService<NorthwindEntities> { public static void InitializeService(DataServiceConfiguration config) { config.SetEntitySetAccessRule("Products", EntitySetRights.AllRead); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; } } We are done setting up our OData feed! Compile your project. Right click on DataService.svc and select “View in Browser” to see the OData feed. To view the feed in IE, you must make sure that “Feed Reading View” is turned off. You set this under Tools –› Internet Options –› Content tab.   If you navigate to “Products”, you should see the Products feed. Note also that URIs are case sensitive, i.e. Products works but products doesn’t.   Filtering our data OData has a set of system query options you can use to perform common operations against data exposed by the model. For example, to see only Products in CategoryID 2, we can use the following request: /DataService.svc/Products?$filter=CategoryID eq 2 At the time of this writing, the supported options are $orderby, $top, $skip, $filter, $expand, $format†, $select and $inlinecount. Pre-filtering our data using Query Interceptors The Products feed currently returns all Products. We want to change that so that it contains only Products that have not been discontinued. WCF Data Services introduces the concept of interceptors, which allow us to inject custom validation/policy logic into the request/response pipeline of a WCF data service. We will use a QueryInterceptor to pre-filter the data so that it returns only Products that are not discontinued. To create a QueryInterceptor, write a method that returns an Expression<Func<T, bool>> and mark it with the QueryInterceptor attribute as shown below. [QueryInterceptor("Products")] public Expression<Func<Product, bool>> OnReadProducts() { return o => o.Discontinued == false; } Viewing the feed after compilation will only show products that have not been discontinued. We can also confirm this by looking at the WHERE clause in the SQL generated by the Entity Framework. SELECT [Extent1].[ProductID] AS [ProductID], ... ... [Extent1].[Discontinued] AS [Discontinued] FROM [dbo].[Products] AS [Extent1] WHERE 0 = [Extent1].[Discontinued] Other examples of query/change interceptors can be seen here, including an example that filters data based on the identity of the authenticated user. We are done pre-filtering our data. In the next part of this post, we will see how to shape our data. Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 2 Foot Notes * http://msdn.microsoft.com/en-us/data/aa937697.aspx † $format did not work for me. The way to get a JSON response is to include the following in the request header: “Accept: application/json, text/javascript, */*”. This is easily done with most JavaScript libraries.
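    The feed can also be consumed from C# with the WCF Data Services client library. Here is a minimal sketch, assuming the service is hosted at a hypothetical localhost address and that a Product proxy class has been generated with Add Service Reference (or DataSvcUtil.exe); the LINQ query is translated into the $filter, $orderby and $top options described above.

    using System;
    using System.Data.Services.Client; // WCF Data Services client library
    using System.Linq;

    class FeedClient
    {
        static void Main()
        {
            // Hypothetical service root; adjust to wherever DataService.svc is hosted.
            var context = new DataServiceContext(new Uri("http://localhost:1234/DataService.svc"));

            // Translated to: /Products?$filter=CategoryID eq 2&$orderby=ProductName&$top=5
            var products = context.CreateQuery<Product>("Products")
                .Where(p => p.CategoryID == 2)
                .OrderBy(p => p.ProductName)
                .Take(5);

            foreach (var p in products)
                Console.WriteLine("{0}: {1}", p.ProductID, p.ProductName);
        }
    }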

    Read the article

  • SQL Server and Hyper-V Dynamic Memory Part 3

    - by SQLOS Team
    In parts 1 and 2 of this series we looked at the basics of Hyper-V Dynamic Memory and SQL Server memory management. In this part Serdar looks at configuration guidelines for SQL Server memory management. Part 3: Configuration Guidelines for Hyper-V Dynamic Memory and SQL Server Now that we understand SQL Server memory management and Hyper-V Dynamic Memory basics, let’s take a look at general configuration guidelines for getting the benefits of Hyper-V Dynamic Memory in your SQL Server VMs. Requirements Host Operating System Requirements The Hyper-V Dynamic Memory feature was introduced with Windows Server 2008 R2 SP1. Therefore, in order to use Dynamic Memory for your virtual machines, you need to have Windows Server 2008 R2 SP1 or Microsoft Hyper-V Server 2008 R2 SP1 on your Hyper-V host. Guest Operating System Requirements In addition, Dynamic Memory is only supported in the Standard, Web, Enterprise and Datacenter editions of Windows running inside VMs. Make sure that your VM is running one of these editions. For additional requirements on each operating system see “Dynamic Memory Configuration Guidelines” here. SQL Server Requirements All versions of SQL Server support Hyper-V Dynamic Memory. However, only certain editions of SQL Server are aware of dynamically changing system memory. To have a truly dynamic environment for your SQL Server VMs make sure that you are running one of the SQL Server editions listed below: ·         SQL Server 2005 Enterprise ·         SQL Server 2008 Enterprise / Datacenter Editions ·         SQL Server 2008 R2 Enterprise / Datacenter Editions Configuration guidelines for other versions of SQL Server are covered below in the FAQ section. Guidelines for configuring Dynamic Memory Parameters Here is how to configure Dynamic Memory for your SQL VMs in a nutshell: Startup RAM = 1 GB + SQL Min Server Memory; Maximum RAM > SQL Max Server Memory; Memory Buffer % = 5; Memory Weight = based on performance needs.   Startup RAM In order to ensure that your SQL Server VMs can start correctly, ensure that Startup RAM is higher than the configured SQL Min Server Memory for your VMs. Otherwise the SQL Server service will need to page in order to start, since it will not be able to see enough memory during startup. Also note that Startup Memory will always be reserved for your VMs. This guarantees a certain level of performance for your SQL Servers; however, setting it too high will limit the consolidation benefits you’ll get out of your virtualization environment. Maximum RAM This one is obvious. If you’ve configured SQL Max Server Memory for your SQL Server, make sure that the Dynamic Memory Maximum RAM configuration is higher than this value. Otherwise your SQL Server will not grow to memory values higher than the value configured for Dynamic Memory. Memory Buffer % The memory buffer configuration is used to provision file cache to virtual machines in order to improve performance. Because SQL Server manages its own buffer pool, the Memory Buffer setting should be configured to the lowest value possible, 5%. Configuring a higher memory buffer will prevent low-resource notifications from the Windows Memory Manager and will prevent reclaiming memory from SQL Server VMs. Memory Weight The memory weight configuration defines the importance of memory to a VM. Configure higher values for the VMs that have higher performance requirements. 
VMs with higher memory weight will have more memory under high memory pressure conditions on your host. Questions and Answers Q1 – Which SQL Server memory model is best for Dynamic Memory? The best SQL Server memory model for Dynamic Memory is the “Locked Page Memory Model”. This memory model ensures that SQL Server memory is never paged out, and it is also adaptive to dynamically changing memory in the system. This is extremely useful when Dynamic Memory is attempting to remove memory from SQL Server VMs, ensuring no SQL Server memory is paged out. You can find instructions on configuring the “Locked Page Memory Model” for your SQL Servers here. Q2 – What about other SQL Server editions; how should I configure Dynamic Memory for them? Other editions of SQL Server do not adapt to dynamically changing environments. They determine how much memory to allocate during startup and don’t change this value afterwards. Therefore, make sure that you configure a higher startup memory for your VM, because that will be all the memory that SQL Server utilizes. Tune Maximum Memory and Memory Buffer based on the other workloads running on the system. If there are no other workloads, consider using Static Memory for these editions. Q3 – What if I have multiple SQL Server instances in a VM? Having multiple SQL Server instances in a VM is not a general recommendation for predictable performance, manageability and isolation. In order to achieve predictable behavior, make sure that you configure SQL Min Server Memory and SQL Max Server Memory for each instance in the VM. And make sure that: ·         Dynamic Memory Startup Memory is greater than the sum of the SQL Min Server Memory values for the instances in the VM ·         Dynamic Memory Maximum Memory is greater than the sum of the SQL Max Server Memory values for the instances in the VM Q4 – I’m using the Large Page Memory Model for my SQL Server. Can I still use Dynamic Memory? The short answer is no. SQL Server does not dynamically change its memory size when configured with the Large Page Memory Model. In virtualized environments Hyper-V provides large page support by default. Most of the time, the Large Page Memory Model doesn’t bring any benefits to a SQL Server running in a virtualized environment. Q5 – How do I monitor SQL performance when I’m trying Dynamic Memory on my VMs? Use the performance counters below to monitor memory performance for SQL Server: Process – Working Set: This counter is available in the VM via process performance counters. It represents the actual amount of physical memory being used by the SQL Server process in the VM. SQL Server – Buffer Cache Hit Ratio: This counter is available in the VM via SQL Server counters. It represents the paging being done by SQL Server. A rate of 90% or higher is desirable. Conclusion These blog posts are a quick start to a story that will be developing more in the near future. We’re still continuing our testing and investigations to provide more detailed configuration guidelines, with example performance numbers, in a white paper in the upcoming months. Now it’s time to give SQL Server and Hyper-V Dynamic Memory a try. Use these guidelines to kick-start your environment. See what you think about it and let us know of your experiences. - Serdar Sutay Originally posted at http://blogs.msdn.com/b/sqlosteam/
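    The two counters from Q5 can also be sampled programmatically from inside the VM. Here is a minimal sketch using System.Diagnostics; the process instance name and counter category are assumptions that hold for a default instance (a named instance uses "MSSQL$InstanceName:Buffer Manager").

    using System;
    using System.Diagnostics;
    using System.Threading;

    class SqlMemoryMonitor
    {
        static void Main()
        {
            // Working set of the SQL Server process (default instance name assumed).
            var workingSet = new PerformanceCounter("Process", "Working Set", "sqlservr");

            // Buffer cache hit ratio is a fraction counter: value divided by its base.
            var hits = new PerformanceCounter("SQLServer:Buffer Manager", "Buffer cache hit ratio");
            var hitsBase = new PerformanceCounter("SQLServer:Buffer Manager", "Buffer cache hit ratio base");

            for (int i = 0; i < 10; i++)
            {
                float ratio = 100f * hits.RawValue / Math.Max(1, hitsBase.RawValue);
                Console.WriteLine("Working set: {0:N0} bytes, buffer cache hit ratio: {1:N1}%",
                    workingSet.NextValue(), ratio);
                Thread.Sleep(1000);
            }
        }
    }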

    Read the article

  • Exploring the Excel Services REST API

    - by jamiet
    Over the last few years Analysis Services guru Chris Webb and I have been on something of a crusade to enable better access to data that is locked up in countless Excel workbooks that litter the hard drives of enterprise PCs. The most prominent manifestation of that crusade up to now has been a forum thread that Chris began on Microsoft Answers entitled Excel Web App API? Chris began that thread with: I was wondering whether there was an API for the Excel Web App? Specifically, I was wondering if it was possible (or if it will be possible in the future) to expose data in a spreadsheet in the Excel Web App as an OData feed, in the way that it is possible with Excel Services? Until recently the last 10 words of that paragraph, "in the way that it is possible with Excel Services", had completely washed over me; however, a comment on my recent blog post Thoughts on ExcelMashup.com (and a rant) by Josh Booker, in which Josh said: Excel Services is a service application built for sharepoint 2010 which exposes a REST API for excel documents. We're looking forward to pros like you giving it a try now that Office365 makes sharepoint more easily accessible.  Can't wait for your future blog about using REST API to load data from Excel on Offce 365 in SSIS. made me think that perhaps the Excel Services REST API is something I should be looking into, and indeed that is what I have been doing over the past few days. And you know what? I'm rather impressed with some of what Excel Services' REST API has to offer. Unfortunately Excel Services' REST API also has one debilitating aspect that renders this blog post much less useful than it otherwise would be; namely that it is not publicly available from the Excel Web App on SkyDrive. Therefore all I can do in this blog post is show you screenshots of what the REST API provides in Sharepoint rather than linking you directly to those REST resources; that's a great shame because one of the benefits of a REST API is that it is easily and ubiquitously demonstrable from a web browser. Instead I am hosting a workbook on Sharepoint in Office 365 because that does include Excel Services' REST API but, again, all I can do is show you screenshots. N.B. If anyone out there knows how to make Office-365-hosted spreadsheets publicly accessible (i.e. without requiring a username/password) please do let me know (because knowing which forum on which to ask the question is an exercise in futility). In order to demonstrate Excel Services' REST API I needed some decent data and for that I used the World Tourism Organization Statistics Database and Yearbook - United Nations World Tourism Organization dataset hosted on Azure Datamarket (it's free, by the way); this dataset "provides comprehensive information on international tourism worldwide and offers a selection of the latest available statistics on international tourist arrivals, tourism receipts and expenditure" and you can explore the data for yourself here. If you want to play along at home by viewing the data as it exists in Excel then it can be viewed here. Let's dive in.   The root of Excel Services' REST API is the model resource which resides at: http://server/_vti_bin/ExcelRest.aspx/Documents/TourismExpenditureInMillionsOfUSD.xlsx/model Note that this is true for every workbook hosted in a Sharepoint document library - each Excel workbook is a RESTful resource. (Update: Mark Stacey on Twitter tells me that "It's turned off by default in onpremise Sharepoint (1 tickbox to turn on though)". Thanks Mark!) 
The data is provided as an Atom feed but I have Firefox's feed reading ability turned on so you don't see the underlying XML goo. As you can see there are four top-level resources: Ranges, Charts, Tables and PivotTables; exploring one of those resources is where things get interesting. Let's take a look at the Tables resource: http://server/_vti_bin/ExcelRest.aspx/Documents/TourismExpenditureInMillionsOfUSD.xlsx/model/Tables Our workbook contains only one table, called ‘Table1’ (to reiterate, you can explore this table yourself here). Viewing that table via the REST API is pretty easy: we simply append the name of the table onto our previous URI: http://server/_vti_bin/ExcelRest.aspx/Documents/TourismExpenditureInMillionsOfUSD.xlsx/model/Tables('Table1') As you can see, that quite simply gives us a representation of the data in that table. What you cannot see from this screenshot is that this is pure HTML that is being served up; that is all well and good but actually we can do more interesting things. If we specify that the data should be returned not as HTML but as: http://server/_vti_bin/ExcelRest.aspx/Documents/TourismExpenditureInMillionsOfUSD.xlsx/model/Tables('Table1')?$format=image then that data comes back as a pure image and can be used in any web page where you would ordinarily use images. This is the thing that I really like about Excel Services’ REST API – we can embed an image in any web page but instead of being a copy of the data, that image is actually live – if the underlying data in the workbook were to change then hitting refresh will show a new image. Pretty cool, no? The same is true of any Charts or Pivot Tables in your workbook - those can be embedded as images too and if the underlying data changes, boom, the image in your web page changes too. There is a lot of data in the workbook so the image returned by that previous URI is too large to show here so instead let’s take a look at a different resource, this time a range: http://server/_vti_bin/ExcelRest.aspx/Documents/TourismExpenditureInMillionsOfUSD.xlsx/model/Ranges('Data!A1|C15') That URI returns cells A1 to C15 from a worksheet called “Data”: And if we ask for that as an image again: http://server/_vti_bin/ExcelRest.aspx/Documents/TourismExpenditureInMillionsOfUSD.xlsx/model/Ranges('Data!A1|C15')?$format=image Were this image resource not behind a username/password then this would be a live image of the data in the workbook as opposed to one that I had to copy and upload elsewhere. Nonetheless I hope this little wrinkle doesn't detract from the innate value of what I am trying to articulate here: that an existing image in a web page can be changed on-the-fly simply by inserting some data into an Excel workbook. I for one think that that is very cool indeed! I think that's enough in the way of demo for now as this shows what is possible using Excel Services' REST API. Of course, not all features work quite how I would like and here is a bulleted list of some of my more negative feedback: The URIs are pig-ugly. Are "_vti_bin" & "ExcelRest.aspx" really necessary as part of the URI? Would this not be better: http://server/Documents/TourismExpenditureInMillionsOfUSD.xlsx/Model/Tables(‘Table1’) That URI provides the necessary addressability and is a lot easier to remember. Discoverability of these resources is not easy; we essentially have to handcrank a URI ourselves. 
Take the example of embedding a chart into a blog post - would it not be better if I could browse first through the document library to an Excel workbook and THEN through the workbook to the chart/range/table that I am interested in? Call it a wizard if you like. That would be really cool and would, I am sure, promote this feature and cut down on the copy-and-paste disease that the REST API is meant to alleviate. The resources that I demonstrated can be returned as feeds as well as images or HTML simply by changing the format parameter to ?$format=atom; however, for some inexplicable reason they don't return OData and no-one on the Excel Services team can tell me why (believe me, I have asked). $format is an OData parameter, but other useful parameters such as $top and $filter are not supported. It would be nice if they were. Although I haven't demonstrated it here, Excel Services' REST API does provide a makeshift way of altering the data by changing the value of specific cells; however, what it does not allow you to do is add new data to the workbook. Google Docs allows this and was one of the motivating factors for Chris Webb's forum post that I linked to above. None of this works for Excel workbooks hosted on SkyDrive. This blog post is as long as it needs to be for a short introduction so I'll stop now. If you want to know more then I recommend checking out a few links: Excel Services REST API documentation on MSDN; So what does REST on Excel Services look like??? by Shahar Prish; Excel Services in SharePoint 2010 REST API Syntax by Christian Stich. Any thoughts? Let's hear them in the comments section below! @Jamiet 
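    For anyone wanting to script against the API rather than poke at it from a browser, here is a minimal C# sketch of fetching a range as an image; the server name, library path and workbook name are assumptions taken from the URIs above, and the request authenticates with the current Windows credentials.

    using System;
    using System.IO;
    using System.Net;

    class ExcelRestFetch
    {
        static void Main()
        {
            // Assumed server/library/workbook; substitute your own.
            string uri = "http://server/_vti_bin/ExcelRest.aspx/Documents/" +
                         "TourismExpenditureInMillionsOfUSD.xlsx/model/Ranges('Data!A1|C15')?$format=image";

            var request = (HttpWebRequest)WebRequest.Create(uri);
            request.Credentials = CredentialCache.DefaultCredentials; // SharePoint auth

            using (var response = (HttpWebResponse)request.GetResponse())
            using (var input = response.GetResponseStream())
            using (var output = File.Create("range.png"))
            {
                input.CopyTo(output); // live snapshot of the range, rendered server-side
            }
            Console.WriteLine("Saved range image to range.png");
        }
    }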

    Read the article

  • XNA: Rotating Bones

    - by MLM
    XNA 4.0. I am trying to learn how to rotate bones on a very simple tank model I made in Cinema 4D. It is rigged with 3 bones: Root - Main - Turret - Barrel. I have bound all of the objects to the bones so that all translations/rotations work as planned in C4D, and I exported it as .fbx. I based my test project on: http://create.msdn.com/en-US/education/catalog/sample/simple_animation I can build successfully with no errors, but all the rotations I try to apply to my bones have no effect. I can transform my Root successfully using the code below, but the bone transforms have no effect: myModel.Root.Transform = world; Matrix turretRotation = Matrix.CreateRotationY(MathHelper.ToRadians(37)); Matrix barrelRotation = Matrix.CreateRotationX(barrelRotationValue); MainBone.Transform = MainTransform; TurretBone.Transform = turretRotation * TurretTransform; BarrelBone.Transform = barrelRotation * BarrelTransform; I am wondering if my model is just not right or if I am missing something important in the code. Here is my Game1.cs: using System; using System.Collections.Generic; using System.Linq; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Audio; using Microsoft.Xna.Framework.Content; using Microsoft.Xna.Framework.GamerServices; using Microsoft.Xna.Framework.Graphics; using Microsoft.Xna.Framework.Input; using Microsoft.Xna.Framework.Media; namespace ModelTesting { /// <summary> /// This is the main type for your game /// </summary> public class Game1 : Microsoft.Xna.Framework.Game { GraphicsDeviceManager graphics; SpriteBatch spriteBatch; float aspectRatio; Tank myModel; public Game1() { graphics = new GraphicsDeviceManager(this); Content.RootDirectory = "Content"; } /// <summary> /// Allows the game to perform any initialization it needs to before starting to run. /// This is where it can query for any required services and load any non-graphic /// related content. Calling base.Initialize will enumerate through any components /// and initialize them as well. /// </summary> protected override void Initialize() { // TODO: Add your initialization logic here myModel = new Tank(); base.Initialize(); } /// <summary> /// LoadContent will be called once per game and is the place to load /// all of your content. /// </summary> protected override void LoadContent() { // Create a new SpriteBatch, which can be used to draw textures. spriteBatch = new SpriteBatch(GraphicsDevice); // TODO: use this.Content to load your game content here myModel.Load(Content); aspectRatio = graphics.GraphicsDevice.Viewport.AspectRatio; } /// <summary> /// UnloadContent will be called once per game and is the place to unload /// all content. /// </summary> protected override void UnloadContent() { // TODO: Unload any non ContentManager content here } /// <summary> /// Allows the game to run logic such as updating the world, /// checking for collisions, gathering input, and playing audio. /// </summary> /// <param name="gameTime">Provides a snapshot of timing values.</param> protected override void Update(GameTime gameTime) { // Allows the game to exit if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed) this.Exit(); // TODO: Add your update logic here float time = (float)gameTime.TotalGameTime.TotalSeconds; // Move the pieces /* myModel.TurretRotation = (float)Math.Sin(time * 0.333f) * 1.25f; myModel.BarrelRotation = (float)Math.Sin(time * 0.25f) * 0.333f - 0.333f; */ base.Update(gameTime); } /// <summary> /// This is called when the game should draw itself. 
/// </summary> /// <param name="gameTime">Provides a snapshot of timing values.</param> protected override void Draw(GameTime gameTime) { GraphicsDevice.Clear(Color.CornflowerBlue); // Calculate the camera matrices. float time = (float)gameTime.TotalGameTime.TotalSeconds; Matrix rotation = Matrix.CreateRotationY(MathHelper.ToRadians(45)); Matrix view = Matrix.CreateLookAt(new Vector3(2000, 500, 0), new Vector3(0, 150, 0), Vector3.Up); Matrix projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, graphics.GraphicsDevice.Viewport.AspectRatio, 10, 10000); // TODO: Add your drawing code here myModel.Draw(rotation, view, projection); base.Draw(gameTime); } } } And here is my tank class: using System; using System.Collections.Generic; using System.Linq; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Audio; using Microsoft.Xna.Framework.Content; using Microsoft.Xna.Framework.GamerServices; using Microsoft.Xna.Framework.Graphics; using Microsoft.Xna.Framework.Input; using Microsoft.Xna.Framework.Media; namespace ModelTesting { public class Tank { Model myModel; // Array holding all the bone transform matrices for the entire model. // We could just allocate this locally inside the Draw method, but it // is more efficient to reuse a single array, as this avoids creating // unnecessary garbage. public Matrix[] boneTransforms; // Shortcut references to the bones that we are going to animate. // We could just look these up inside the Draw method, but it is more // efficient to do the lookups while loading and cache the results. ModelBone MainBone; ModelBone TurretBone; ModelBone BarrelBone; // Store the original transform matrix for each animating bone. Matrix MainTransform; Matrix TurretTransform; Matrix BarrelTransform; // current animation positions float turretRotationValue; float barrelRotationValue; /// <summary> /// Gets or sets the turret rotation amount. /// </summary> public float TurretRotation { get { return turretRotationValue; } set { turretRotationValue = value; } } /// <summary> /// Gets or sets the barrel rotation amount. /// </summary> public float BarrelRotation { get { return barrelRotationValue; } set { barrelRotationValue = value; } } /// <summary> /// Load the model /// </summary> public void Load(ContentManager Content) { // TODO: use this.Content to load your game content here myModel = Content.Load<Model>("Models\\simple_tank02"); MainBone = myModel.Bones["Main"]; TurretBone = myModel.Bones["Turret"]; BarrelBone = myModel.Bones["Barrel"]; MainTransform = MainBone.Transform; TurretTransform = TurretBone.Transform; BarrelTransform = BarrelBone.Transform; // Allocate the transform matrix array. 
boneTransforms = new Matrix[myModel.Bones.Count]; } public void Draw(Matrix world, Matrix view, Matrix projection) { myModel.Root.Transform = world; Matrix turretRotation = Matrix.CreateRotationY(MathHelper.ToRadians(37)); Matrix barrelRotation = Matrix.CreateRotationX(barrelRotationValue); MainBone.Transform = MainTransform; TurretBone.Transform = turretRotation * TurretTransform; BarrelBone.Transform = barrelRotation * BarrelTransform; myModel.CopyAbsoluteBoneTransformsTo(boneTransforms); // Draw the model, a model can have multiple meshes, so loop foreach (ModelMesh mesh in myModel.Meshes) { // This is where the mesh orientation is set foreach (BasicEffect effect in mesh.Effects) { effect.World = boneTransforms[mesh.ParentBone.Index]; effect.View = view; effect.Projection = projection; effect.EnableDefaultLighting(); } // Draw the mesh, will use the effects set above mesh.Draw(); } } } }
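    One sanity check worth trying (a minimal sketch against the model loaded above, not part of the original question): dump the bone hierarchy at load time to verify that the bone names "Main", "Turret" and "Barrel" actually survived the FBX export, since exporters sometimes rename or collapse bones.

    // Inside Tank.Load, after the model is loaded: dump every bone and its parent
    // so you can confirm the names match what the code looks up.
    foreach (ModelBone bone in myModel.Bones)
    {
        string parent = bone.Parent != null ? bone.Parent.Name : "(root)";
        System.Diagnostics.Debug.WriteLine(bone.Index + ": " + bone.Name + " <- " + parent);
    }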

    Read the article

  • REST to Objects in C#

    RESTful interfaces for web services are all the rage for many Web 2.0 sites.  If you want to consume these in a very simple fashion, LINQ to XML can do the job pretty easily in C#.  If you go searching for help on this, you'll find a lot of incomplete solutions and fairly large toolkits and frameworks (guess how I know this); this quick article is meant to be a no-fluff, just-stuff approach to making this work. POCO Objects Let's assume you have a Model that you want to suck data into from a RESTful web service.  Ideally this is a Plain Old CLR Object, meaning it isn't infected with any persistence or serialization goop.  It might look something like this: public class Entry { public int Id; public int UserId; public DateTime Date; public float Hours; public string Notes; public bool Billable;   public override string ToString() { return String.Format("[{0}] User: {1} Date: {2} Hours: {3} Notes: {4} Billable {5}", Id, UserId, Date, Hours, Notes, Billable); } } Note that this isn't a completely trivial object.  Let's look at the API for the service.  RESTful HTTP Service In this case, it's TickSpot's API, with the following sample output: <?xml version="1.0" encoding="UTF-8"?> <entries type="array"> <entry> <id type="integer">24</id> <task_id type="integer">14</task_id> <user_id type="integer">3</user_id> <date type="date">2008-03-08</date> <hours type="float">1.00</hours> <notes>Had trouble with tribbles.</notes> <billable>true</billable> # Billable is an attribute inherited from the task <billed>true</billed> # Billed is an attribute to track whether the entry has been invoiced <created_at type="datetime">Tue, 07 Oct 2008 14:46:16 -0400</created_at> <updated_at type="datetime">Tue, 07 Oct 2008 14:46:16 -0400</updated_at> # The following attributes are derived and provided for informational purposes: <user_email>[email protected]</user_email> <task_name>Remove converter assembly</task_name> <sum_hours type="float">2.00</sum_hours> <budget type="float">10.00</budget> <project_name>Realign dilithium crystals</project_name> <client_name>Starfleet Command</client_name> </entry> </entries> I'm assuming in this case that I don't necessarily care about all of the data fields the service is returning; I just need some of them for my application's purposes.  Thus, you can see there are more elements in the <entry> XML than I have in my Entry class. Get The XML with C# The next step is to get the XML.  The following snippet does the heavy lifting once you pass it the appropriate URL: protected XElement GetResponse(string uri) { var request = WebRequest.Create(uri) as HttpWebRequest; request.UserAgent = ".NET Sample"; request.KeepAlive = false;   request.Timeout = 15 * 1000;   var response = request.GetResponse() as HttpWebResponse;   if (request.HaveResponse == true && response != null) { var reader = new StreamReader(response.GetResponseStream()); return XElement.Parse(reader.ReadToEnd()); } throw new Exception("Error fetching data."); } This is adapted from the Yahoo Developer article on Web Service REST calls.  Once you have the XML, the last step is to get the data back as your POCO. 
Use LINQ-To-XML to Deserialize POCOs from XML This is done via the following code: public IEnumerable<Entry> List(DateTime startDate, DateTime endDate) { string additionalParameters = String.Format("start_date={0}&end_date={1}", startDate.ToShortDateString(), endDate.ToShortDateString()); string uri = BuildUrl("entries", additionalParameters);   XElement elements = GetResponse(uri);   var entries = from e in elements.Elements() where e.Name.LocalName == "entry" select new Entry { Id = int.Parse(e.Element("id").Value), UserId = int.Parse(e.Element("user_id").Value), Date = DateTime.Parse(e.Element("date").Value), Hours = float.Parse(e.Element("hours").Value), Notes = e.Element("notes").Value, Billable = bool.Parse(e.Element("billable").Value) }; return entries; }   For completeness, here's the BuildUrl method for my TickSpot API wrapper: // Change these to your settings protected const string projectDomain = "DOMAIN.tickspot.com"; private const string authParams = "[email protected]&password=MyTickSpotPassword";   protected string BuildUrl(string apiMethod, string additionalParams) { if (projectDomain.Contains("DOMAIN")) { throw new ApplicationException("You must update your domain in ProjectRepository.cs."); } if (authParams.Contains("MyTickSpotPassword")) { throw new ApplicationException("You must update your email and password in ProjectRepository.cs."); } return string.Format("https://{0}/api/{1}?{2}&{3}", projectDomain, apiMethod, authParams, additionalParams); } That's it!  Now go forth and consume XML and map it to classes you actually want to work with.  Have fun! Did you know that DotNetSlackers also publishes .net articles written by top known .net Authors? We already have over 80 articles in several categories including Silverlight. Take a look: here.
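    To round the article out, here is a hedged usage sketch; EntryRepository is an assumed name for whatever class hosts the List, GetResponse and BuildUrl methods above.

    using System;

    class Program
    {
        static void Main()
        {
            // Hypothetical host class for the methods shown in the article.
            var repository = new EntryRepository();

            // Pull the last week of time entries and dump them via the
            // overridden Entry.ToString().
            var entries = repository.List(DateTime.Today.AddDays(-7), DateTime.Today);
            foreach (var entry in entries)
                Console.WriteLine(entry);
        }
    }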

    Read the article

  • Nashorn in the Twitterverse (continued)

    - by Homma
    This post (originally written in Japanese) shows how to use Nashorn with Java class libraries to pull data about Nashorn itself from Twitter and chart it with JavaFX. It is based on the following entry from the Nashorn Blog by jlaskey: https://blogs.oracle.com/nashorn/entry/nashorn_in_the_twitterverse_continued The previous entry gathered tweet counts through the Twitter search API; this one goes further and graphs the result with a JavaFX bar chart, all driven from JavaScript running on Nashorn. Because Nashorn scripts can call Java classes directly, neither step needs anything beyond the twitter4j and JavaFX libraries (no JavaFX-specific scripting support is required). First, the data collection. The getTrendingData() function below searches for tweets containing "nashorn" or "nashornjs" and tallies the number of tweets per day: var twitter4j = Packages.twitter4j; var TwitterFactory = twitter4j.TwitterFactory; var Query = twitter4j.Query; function getTrendingData() { var twitter = new TwitterFactory().instance; var query = new Query("nashorn OR nashornjs"); query.since("2012-11-21"); query.count = 100; var data = {}; do { var result = twitter.search(query); var tweets = result.tweets; for each (var tweet in tweets) { var date = tweet.createdAt; var key = (1900 + date.year) + "/" + (1 + date.month) + "/" + date.date; data[key] = (data[key] || 0) + 1; } } while (query = result.nextQuery()); return data; } The query starts at 2012-11-21 (around the date the Nashorn sources were pushed to OpenJDK) and pages through the results 100 at a time, building a date-keyed map of counts. Next, the tallies are rendered with a JavaFX BarChart: var javafx = Packages.javafx; var Stage = javafx.stage.Stage var Scene = javafx.scene.Scene; var Group = javafx.scene.Group; var Chart = javafx.scene.chart.Chart; var FXCollections = javafx.collections.FXCollections; var ObservableList = javafx.collections.ObservableList; var CategoryAxis = javafx.scene.chart.CategoryAxis; var NumberAxis = javafx.scene.chart.NumberAxis; var BarChart = javafx.scene.chart.BarChart; var XYChart = javafx.scene.chart.XYChart; var Series = javafx.scene.chart.XYChart.Series; var Data = javafx.scene.chart.XYChart.Data; function graph(stage, data) { var root = new Group(); stage.scene = new Scene(root); var dates = Object.keys(data); var xAxis = new CategoryAxis(); xAxis.categories = FXCollections.observableArrayList(dates); var yAxis = new NumberAxis("Tweets", 0.0, 200.0, 50.0); var series = FXCollections.observableArrayList(); for (var date in data) { series.add(new Data(date, data[date])); } var tweets = new Series("Tweets", series); var barChartData = FXCollections.observableArrayList(tweets); var chart = new BarChart(xAxis, yAxis, barChartData, 25.0); root.children.add(chart); } Two details here show how naturally JavaFX reads from Nashorn. First, stage.scene = new Scene(root) is equivalent to calling stage.setScene(new Scene(root)): through its Dynalink-based linker, Nashorn exposes Java Beans getter/setter pairs as plain JavaScript properties. Second, FXCollections.observableArrayList(dates) accepts the JavaScript array dates directly; Nashorn converts JavaScript values to the appropriate Java types at the call site, so script objects flow into Java APIs without explicit marshalling. One wrinkle remains: a JavaFX application must be launched from a Java class that extends javafx.application.Application, so a small Java driver is needed to bootstrap the script. 
import java.io.IOException; import java.io.InputStream; import java.io.InputStreamReader; import javafx.application.Application; import javafx.stage.Stage; import javax.script.ScriptEngine; import javax.script.ScriptEngineManager; import javax.script.ScriptException; public class TrendingMain extends Application { private static final ScriptEngineManager MANAGER = new ScriptEngineManager(); private final ScriptEngine engine = MANAGER.getEngineByName("nashorn"); private Trending trending; public static void main(String[] args) { launch(args); } @Override public void start(Stage stage) throws Exception { trending = (Trending) load("Trending.js"); trending.start(stage); } @Override public void stop() throws Exception { trending.stop(); } private Object load(String script) throws IOException, ScriptException { try (final InputStream is = TrendingMain.class.getResourceAsStream(script)) { return engine.eval(new InputStreamReader(is, "utf-8")); } } } The driver obtains the Nashorn engine through the standard JSR-223 javax.script API: private static final ScriptEngineManager MANAGER = new ScriptEngineManager(); private final ScriptEngine engine = MANAGER.getEngineByName("nashorn"); Its load helper reads a JavaScript source file from the classpath and evaluates it with engine.eval(). To give Java a typed handle on the script, the script implements a small Java interface whose methods mirror the JavaFX application lifecycle: public interface Trending { public void start(Stage stage) throws Exception; public void stop() throws Exception; } On the JavaScript side the interface is implemented with an anonymous subclass, much as it would be in Java: function newTrending() { return new Packages.Trending() { start: function(stage) { var data = getTrendingData(); graph(stage, data); stage.show(); }, stop: function() { } } } newTrending(); The value of the last expression evaluated becomes the return value of eval, so trending = (Trending) load("Trending.js"); hands back an object implementing Trending: Trending.js defines getTrendingData, graph and newTrending, and finishes with the call to newTrending(). From there the driver simply calls trending.start(stage); and the chart appears. For more about Nashorn, see the JavaOne session slides at http://www.myexpospace.com/JavaOne2012/SessionFiles/CON5251_PDF_5251_0001.pdf and for more about Dynalink see https://github.com/szegedi/dynalink

    Read the article

  • ApplyCurrentValues in EF 4

    - by ali62b
    I was just playing with EF 4 in VS 2010 RC and found that ApplyCurrentValues doesn't work when the property is of type bool and the new value is false (!?), yet it works when the new value is true. I don't know if this is a bug or whether I'm missing something, but for now I'm using a very ugly workaround: public void UpdateProduct(Product updatedProduct) { using (model) { model.Products.Attach(new Product { ProductID = updatedProduct.ProductID }); model.Products.ApplyCurrentValues(updatedProduct); Product originalProduct = model.Products.Single(p => p.ProductID == updatedProduct.ProductID); originalProduct.Discontinued = updatedProduct.Discontinued; model.SaveChanges(); } } Any idea, or a better workaround?
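    One possible cleaner workaround, sketched under my own assumptions (NorthwindEntities as the context type): ApplyCurrentValues only flags properties whose values differ from the attached stub's, and a stub's bool defaults to false, so a false value looks unchanged. Explicitly marking the property as modified sidesteps that comparison.

    public void UpdateProduct(Product updatedProduct)
    {
        using (var model = new NorthwindEntities())
        {
            model.Products.Attach(new Product { ProductID = updatedProduct.ProductID });

            // ApplyCurrentValues returns the attached copy with the new values applied.
            var attached = model.Products.ApplyCurrentValues(updatedProduct);

            // Force the bool to be treated as modified even when it equals the
            // stub's default (false), so the value comparison can't skip it.
            model.ObjectStateManager
                 .GetObjectStateEntry(attached)
                 .SetModifiedProperty("Discontinued");

            model.SaveChanges();
        }
    }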

    Read the article

  • ASP.NET MVC - ValidationMessageFor

    - by Murat
    Hi, I asked this question on the ASP.NET forums (http://forums.asp.net/t/1545006.aspx) but got no valid answer, so I'll try it here as well. I was migrating an MVC 1 app to MVC 2 today and came across a problem while changing the ValidationMessage implementation to ValidationMessageFor. Below is the select list in my view: <%=Html.DropDownListFor(model => model.SecurityQuestions[0].Question, "Some Security question", new { @class = "form_element_select" })%> The code below works fine and I can see the validation message coming from ModelState: <%= Html.ValidationMessage("SecurityQuestions_0__Question")%> but this one does not work: <%= Html.ValidationMessageFor(model => model.SecurityQuestions[0].Question)%> SecurityQuestions is a generic list in my model: public List<SecurityQuestion> SecurityQuestions { get; set; } Is this a bug in ValidationMessageFor, or am I missing something here?
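    One hedged observation (an assumption about this scenario, not a confirmed fix): ValidationMessageFor derives its ModelState lookup key from the expression text, i.e. "SecurityQuestions[0].Question", not from the HTML-id form "SecurityQuestions_0__Question". If the error is being added manually, using the dot/bracket form should let both helpers find it:

    // Controller side: add the error under the expression-style key so that
    // Html.ValidationMessageFor(model => model.SecurityQuestions[0].Question)
    // can find it. (Key format is my assumption about this scenario.)
    ModelState.AddModelError("SecurityQuestions[0].Question",
                             "Please choose a security question.");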

    Read the article

  • Question on MVVM pattern in WPF

    - by Ashish Ashu
    I have a user control, let's say UC1. This user control has a view model, UC1_vm. Inside UC1 there is a canvas in which the curve-drawing logic is implemented. That drawing logic is based on a data points property in the view model (UC1_vm); the data points property changes under different conditions, and the generation of the data points is written in the view model. I want to bind the data points property in the view model to the curve-drawing logic inside the user control (view), so that whenever the data points property changes in the view model, the canvas calls the draw-curve method. Can I set a property of the canvas which, when changed, calls the paint logic automatically? Please suggest an approach for implementing this scenario!
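    One possible approach, a minimal sketch with assumed names: derive from Canvas, expose the data points as a DependencyProperty registered with AffectsRender, bind it to the view model, and do the drawing in OnRender; WPF then repaints automatically whenever the bound property value changes.

    using System.Collections.Generic;
    using System.Windows;
    using System.Windows.Controls;
    using System.Windows.Media;

    // A canvas that redraws its curve whenever the bound DataPoints value changes.
    public class CurveCanvas : Canvas
    {
        public static readonly DependencyProperty DataPointsProperty =
            DependencyProperty.Register(
                "DataPoints",
                typeof(IList<Point>),
                typeof(CurveCanvas),
                new FrameworkPropertyMetadata(null,
                    FrameworkPropertyMetadataOptions.AffectsRender));

        public IList<Point> DataPoints
        {
            get { return (IList<Point>)GetValue(DataPointsProperty); }
            set { SetValue(DataPointsProperty, value); }
        }

        protected override void OnRender(DrawingContext dc)
        {
            base.OnRender(dc);
            var points = DataPoints;
            if (points == null || points.Count < 2) return;

            // Draw the curve as a simple polyline through the data points.
            var pen = new Pen(Brushes.Black, 1.0);
            for (int i = 1; i < points.Count; i++)
                dc.DrawLine(pen, points[i - 1], points[i]);
        }
    }

    In XAML this would be used as <local:CurveCanvas DataPoints="{Binding DataPoints}" /> (names assumed). Note that the view model must assign a new collection instance (and raise PropertyChanged) for the binding to push a new value; mutating the same list in place will not trigger a repaint with this simple sketch.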

    Read the article

  • Changing PerformancePoint Time Intelligence Post Formula Default Value

    - by Gabriel Guimarães
    Is there a way to change the default value of the Time Intelligence Post Formula? I have a dashboard where I use From Date and To Date filters to calculate values for the user-selected period; for those filters I use the Time Intelligence Post Formula. However, the default when the user enters the page is always today's date in both filters (and a from-today-to-today analysis doesn't make sense here). What I would like is for From Date to default to something like 30 days before today's date, without forcing that on the report receiving the filters; I just want the filter to have a default value and then let the user choose whatever he wants. Does anybody know of something that can be done, or can this just not be done?

    Read the article

  • What jQuery code or plugin would I use to update the position of an element?

    - by Breadtruck
    I am using a jQuery plugin from [ FilamentGroup ] called DateRangePicker. I have a simple form with two text inputs for the start and end date, which I bind the DateRangePicker to using this: $('input.tbDate').daterangepicker({ dateFormat: 'mm/dd/yy', earliestDate: new Date(minDate), latestDate: new Date(maxDate), datepickerOptions: { changeMonth: true, changeYear: true, minDate: new Date(minDate), maxDate: new Date(maxDate) } }); I have a collapsible table above this form that, when shown, moves the form and the elements that the daterangepicker plugin is bound to lower down the page, but the daterangepicker appears to keep the position from when it was actually created. What code could I put in the daterangepicker's onShow callback to update its position to be next to the element it was initially bound to? Or is there some specific jQuery method or plugin that I could chain to the daterangepicker plugin so that it will update its position correctly? That would come in handy for some other plugins I use that don't seem to keep their position relative to other elements correctly either.

    Read the article

  • Advice on my jQuery Ajax Function

    - by NessDan
    So on my site, a user can post a comment on 2 things: a user's profile and an app. The code works fine in PHP, but we decided to add Ajax to make it more stylish. The comment just fades into the page and works fine. I decided I wanted to make a function so that I wouldn't have to manage 2 (or more) blocks of code in different files. Right now the code is as follows for the two pages (not in a separate .js file; it is written inside the head tags for each page): // App page $("input#comment_submit").click(function() { var comment = $("#comment_box").val(); $.ajax({ type: "POST", url: "app.php?id=<?php echo $id; ?>", data: {comment: comment}, success: function() { $("input#comment_submit").attr("disabled", "disabled").val("Comment Submitted!"); $("textarea#comment_box").attr("disabled", "disabled") $("#comments").prepend("<div class=\"comment new\"></div>"); $(".new").prepend("<a href=\"profile.php?username=<?php echo $_SESSION['username']; ?>\" class=\"commentname\"><?php echo $_SESSION['username']; ?></a><p class=\"commentdate\"><?php echo date("M. d, Y", time()) ?> - <?php echo date("g:i A", time()); ?></p><p class=\"commentpost\">" + comment + "</p>").hide().fadeIn(1000); } }); return false; }); And next up, // Profile page $("input#comment_submit").click(function() { var comment = $("#comment_box").val(); $.ajax({ type: "POST", url: "profile.php?username=<?php echo $user; ?>", data: {comment: comment}, success: function() { $("input#comment_submit").attr("disabled", "disabled").val("Comment Submitted!"); $("textarea#comment_box").attr("disabled", "disabled") $("#comments").prepend("<div class=\"comment new\"></div>"); $(".new").prepend("<a href=\"profile.php?username=<?php echo $_SESSION['username']; ?>\" class=\"commentname\"><?php echo $_SESSION['username']; ?></a><p class=\"commentdate\"><?php echo date("M. d, Y", time()) ?> - <?php echo date("g:i A", time()); ?></p><p class=\"commentpost\">" + comment + "</p>").hide().fadeIn(1000); } }); return false; }); Now, on each page the box names will always be the same (comment_box and comment_submit), so what do you guys think of this function (note: postComment is in the head tag on the page): // On the page, (profile.php) $(function() { $("input#comment_submit").click(function() { postComment("profile", "<?php echo $user ?>", "<?php echo $_SESSION['username']; ?>", "<?php echo date("M. d, Y", time()) ?>", "<?php echo date("g:i A", time()); ?>"); }); }); Which leads to this function (stored in a separate file called functions.js): function postComment(page, argvalue, username, date, time) { if (page == "app") { var arg = "id"; } if (page == "profile") { var arg = "username"; } var comment = $("#comment_box").val(); $.ajax({ type: "POST", url: page + ".php?" + arg + "=" + argvalue, data: {comment: comment}, success: function() { $("textarea#comment_box").attr("disabled", "disabled") $("input#comment_submit").attr("disabled", "disabled").val("Comment Submitted!"); $("#comments").prepend("<div class=\"comment new\"></div>"); $(".new").prepend("<a href=\"" + page + ".php?" + arg + "=" + username + "\" class=\"commentname\">" + username + "</a><p class=\"commentdate\">" + date + " - " + time + "</p><p class=\"commentpost\">" + nl2br(comment) + "</p>").hide().fadeIn(1000); } }); return false; } That's what I came up with! So, some problems: When I hit the button the page refreshes. What fixed this was taking the return false from the function and putting it into the button click. 
Any way to keep it in the function and have the same effect? But my real question is this: Can any coders out there that are familiar to jQuery tell me techniques, coding practices, or ways to write my code more efficiently/elegantly? I've done a lot of work in PHP but I know that echoing the date may not be the most efficient way to get the date and time. So any tips that can really help me streamline this function and also make me better with writing jQuery are very welcome!
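    One way to answer the refresh question, sketched under the assumption that nothing else in the click handler needs to run: jQuery treats a false return value from a handler as "prevent the default action and stop propagation", so the handler can simply forward whatever postComment returns:

    // In the page head: "return false" stays inside postComment (functions.js);
    // the click handler just passes that return value back to jQuery.
    $(function() {
        $("input#comment_submit").click(function() {
            return postComment("profile", "<?php echo $user ?>", "<?php echo $_SESSION['username']; ?>", "<?php echo date("M. d, Y", time()) ?>", "<?php echo date("g:i A", time()); ?>");
        });
    });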

    Read the article

  • Time to stop using “Execute Package Task” – a way to execute a package in the SSIS catalog, taking advantage of the new project deployment model and the logging and reporting features

    - by Kevin Shyr
    I set out to find a way to dynamically call a package in SSIS 2012. The following are two excellent blogs I found and used heavily. The code below adds some parameter types and message types, but is essentially derived from the blogs.

    http://sqlblog.com/blogs/jamie_thomson/archive/2011/07/16/ssis-logging-in-denali.aspx
    http://www.ssistalk.com/2012/07/24/quick-tip-run-ssis-2012-packages-synchronously-and-other-execution-options/

    The code: every package will be called by a PackageController package. The PackageController is initialized with information on which package to run and what values to pass in. The following is the stored procedure called from the "Execute SQL Task". Its highlights:

    - It takes in a package name, a project name, and a folder name (the folder in the SSIS catalog the project was deployed to).
    - It sets the package variables of the upcoming package execution.
    - It executes the package in the SSIS catalog.
    - It gets the status of the execution and, if error messages exist, stores their message_id values in the management database.
    - It returns a value to the "Execute SQL Task" so failure can be managed properly.

    CREATE PROCEDURE [AUDIT].[LaunchPackageExecutionInSSISCatalog]
           @PackageName NVARCHAR(255)
           , @ProjectFolder NVARCHAR(255)
           , @ProjectName NVARCHAR(255)
           , @AuditKey INT
           , @DisableNotification BIT
           , @PackageExecutionLogID INT
    AS
    BEGIN TRY
           DECLARE @execution_id BIGINT = 0;

           -- Create a package execution
           EXEC [SSISDB].[catalog].[create_execution]
                  @package_name = @PackageName,
                  @execution_id = @execution_id OUTPUT,
                  @folder_name = @ProjectFolder,
                  @project_name = @ProjectName,
                  @use32bitruntime = False;

           UPDATE [AUDIT].[PackageInstanceExecutionLog] WITH(ROWLOCK)
           SET [SSISCatalogExecutionID] = @execution_id
           WHERE [PackageInstanceExecutionLogID] = @PackageExecutionLogID

           -- Make the execution synchronous so that the result can be checked at the end
           EXEC [SSISDB].[catalog].[set_execution_parameter_value]
                  @execution_id,
                  @object_type = 50,
                  @parameter_name = N'SYNCHRONIZED',
                  @parameter_value = 1; -- true

           /*******************************************************
              Section: setting parameters
              Source table: SSISDB.internal.object_parameters
              object_type list:
                     20: project level variables
                     30: package level variables
                     50: execution parameter
            *******************************************************/
           EXEC [SSISDB].[catalog].[set_execution_parameter_value]
                  @execution_id,
                  @object_type = 30,
                  @parameter_name = N'FromParent_AuditKey',
                  @parameter_value = @AuditKey;

           EXEC [SSISDB].[catalog].[set_execution_parameter_value]
                  @execution_id,
                  @object_type = 30,
                  @parameter_name = N'FromParent_DisableNotification',
                  @parameter_value = @DisableNotification;

           EXEC [SSISDB].[catalog].[set_execution_parameter_value]
                  @execution_id,
                  @object_type = 30,
                  @parameter_name = N'FromParent_PackageInstanceExecutionID',
                  @parameter_value = @PackageExecutionLogID;
           /* Section: setting variables END */

           /* This section is carried over from the example code;
              I don't see a reason to change it yet. */
           -- Set our package parameters
           EXEC [SSISDB].[catalog].[set_execution_parameter_value]
                  @execution_id,
                  @object_type = 50,
                  @parameter_name = N'DUMP_ON_EVENT',
                  @parameter_value = 1; -- true

           EXEC [SSISDB].[catalog].[set_execution_parameter_value]
                  @execution_id,
                  @object_type = 50,
                  @parameter_name = N'DUMP_EVENT_CODE',
                  @parameter_value = N'0x80040E4D;0x80004005';

           EXEC [SSISDB].[catalog].[set_execution_parameter_value]
                  @execution_id,
                  @object_type = 50,
                  @parameter_name = N'LOGGING_LEVEL',
                  @parameter_value = 1; -- Basic

           EXEC [SSISDB].[catalog].[set_execution_parameter_value]
                  @execution_id,
                  @object_type = 50,
                  @parameter_name = N'DUMP_ON_ERROR',
                  @parameter_value = 1; -- true

           /* Section: EXECUTING */
           EXEC [SSISDB].[catalog].[start_execution]
                  @execution_id;
           /* Section: EXECUTING END */

           /*******************************************************
              Section: checking execution result
              Source table: [SSISDB].[catalog].[executions]
              status:
                     1: created          2: running
                     3: cancelled        4: failed
                     5: pending          6: ended unexpectedly
                     7: succeeded        8: stopping
                     9: completed
            *******************************************************/
           IF EXISTS(SELECT TOP 1 1
                     FROM [SSISDB].[catalog].[executions] WITH(NOLOCK)
                     WHERE [execution_id] = @execution_id
                           AND [status] NOT IN (2, 7, 9))
           BEGIN
                  /*******************************************************
                     Section: logging error messages
                     Source table: [SSISDB].[internal].[operation_messages]
                     message type:
                            10: OnPreValidate     20: OnPostValidate
                            30: OnPreExecute      40: OnPostExecute
                            60: OnProgress        70: OnInformation
                            90: Diagnostic        110: OnWarning
                            120: OnError          130: Failure
                            140: DiagnosticEx     200: Custom events
                            400: OnPipeline
                     message source type:
                            10: messages logged by the entry APIs (e.g. T-SQL, CLR stored procedures)
                            20: messages logged by the external process used to run the package (ISServerExec)
                            30: messages logged by the package-level objects
                            40: messages logged by tasks in the control flow
                            50: messages logged by containers (For, ForEach, Sequence) in the control flow
                            60: messages logged by the Data Flow Task
                   *******************************************************/
                  INSERT INTO AUDIT.PackageInstanceExecutionOperationErrorLink
                         SELECT @PackageExecutionLogID
                                , [operation_message_id]
                         FROM [SSISDB].[internal].[operation_messages] WITH(NOLOCK)
                         WHERE operation_id = @execution_id
                               AND message_type IN (120, 130)

                  EXEC [AUDIT].[FailPackageInstanceExecution] @PackageExecutionLogID, 'SSISDB Internal operation_messages found'

                  GOTO ReturnTrueAsErrorFlag
                  /* Section: checking messages END */

                  /* This part is not really working, so now using rowcount to pass status
                  --DECLARE @PackageErrorMessage NVARCHAR(4000)
                  --SET @PackageErrorMessage = @PackageName + ' failed with executionID: ' + CONVERT(VARCHAR(20), @execution_id)

                  --RAISERROR (@PackageErrorMessage -- Message text.
                  --     , 18 -- Severity,
                  --     , 1 -- State,
                  --     , N'check table AUDIT.PackageInstanceExecutionErrorMessages' -- First argument.
                  --     );
                  */
           END
           ELSE BEGIN
                  GOTO ReturnFalseAsErrorFlagToSignalSuccess
           END
           /* Section: checking execution result END */
    END TRY
    BEGIN CATCH
           DECLARE @SSISCatalogCallError NVARCHAR(MAX)
           SELECT @SSISCatalogCallError = ERROR_MESSAGE()

           EXEC [AUDIT].[FailPackageInstanceExecution] @PackageExecutionLogID, @SSISCatalogCallError

           GOTO ReturnTrueAsErrorFlag
    END CATCH;

    /* Section: end result */
    ReturnTrueAsErrorFlag:
           SELECT CONVERT(BIT, 1) AS PackageExecutionErrorExists
           RETURN; -- prevent fall-through into the success label below
    ReturnFalseAsErrorFlagToSignalSuccess:
           SELECT CONVERT(BIT, 0) AS PackageExecutionErrorExists
    GO
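    For reference, a hypothetical invocation of the procedure (all parameter values below are placeholders, not from the original post). The single-row result set is what the "Execute SQL Task" maps to an SSIS variable in order to fail the PackageController on error:

    -- Hypothetical call; PackageExecutionErrorExists = 1 signals failure.
    EXEC [AUDIT].[LaunchPackageExecutionInSSISCatalog]
           @PackageName = N'LoadCustomer.dtsx',
           @ProjectFolder = N'ETL',
           @ProjectName = N'WarehouseLoad',
           @AuditKey = 42,
           @DisableNotification = 0,
           @PackageExecutionLogID = 1001;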

    Read the article

  • Django QuerySet filter method returns multiple entries for one record

    - by Yaroslav
    Trying to retrieve blogs (see model description below) that contain entries satisfying some criteria:

    Blog.objects.filter(entries__title__contains='entry')

    The result is:

    [<Blog: blog1>, <Blog: blog1>]

    The same blog object is retrieved twice because of the JOIN performed to filter objects on the related model. What is the right syntax for filtering only unique objects?

    Data model:

    class Blog(models.Model):
        name = models.CharField(max_length=100)

        def __unicode__(self):
            return self.name

    class Entry(models.Model):
        title = models.CharField(max_length=100)
        blog = models.ForeignKey(Blog, related_name='entries')

        def __unicode__(self):
            return self.title

    Sample data:

    b1 = Blog.objects.create(name='blog1')
    e1 = Entry.objects.create(title='entry 1', blog=b1)
    e2 = Entry.objects.create(title='entry 2', blog=b1)
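    For what it's worth, Django's QuerySet API has a distinct() method for exactly this case; a minimal sketch:

    # Collapse the duplicate rows produced by the JOIN back into unique blogs.
    Blog.objects.filter(entries__title__contains='entry').distinct()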

    Read the article

  • Error deploying Spring with JAX-WS application in JBoss 6 server

    - by arlahiru
    I'm getting the following error when deploying a Spring + JAX-WS application on JBoss server 6.1.0.

    09:14:38,175 ERROR [org.springframework.web.context.ContextLoader] Context initialization failed: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'com.sun.xml.ws.transport.http.servlet.SpringBinding#0' defined in ServletContext resource [/WEB-INF/applicationContext.xml]: Cannot create inner bean '(inner bean)' of type [org.jvnet.jax_ws_commons.spring.SpringService] while setting bean property 'service'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name '(inner bean)': FactoryBean threw exception on object creation; nested exception is java.lang.LinkageError: loader constraint violation: when resolving field "DATETIME" the class loader (instance of org/jboss/classloader/spi/base/BaseClassLoader) of the referring class, javax/xml/datatype/DatatypeConstants, and the class loader (instance of <bootloader>) for the field's resolved type, loader constraint violation: when resolving field "DATETIME" the class loader at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveInnerBean(BeanDefinitionValueResolver.java:230) [:2.5.6] at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:122) [:2.5.6] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1245) [:2.5.6] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1010) [:2.5.6] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:472) [:2.5.6] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory$1.run(AbstractAutowireCapableBeanFactory.java:409) [:2.5.6] at java.security.AccessController.doPrivileged(Native Method) [:1.7.0_05] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:380) [:2.5.6] at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:264) [:2.5.6] at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) [:2.5.6] at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:261) [:2.5.6] at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:185) [:2.5.6] at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:164) [:2.5.6] at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:429) [:2.5.6] at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:728) [:2.5.6] at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:380) [:2.5.6] at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:255) [:2.5.6] at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:199) [:2.5.6] at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:45) [:2.5.6] at
org.apache.catalina.core.StandardContext.contextListenerStart(StandardContext.java:3369) [:6.1.0.Final] at org.apache.catalina.core.StandardContext.start(StandardContext.java:3828) [:6.1.0.Final] at org.jboss.web.tomcat.service.deployers.TomcatDeployment.performDeployInternal(TomcatDeployment.java:294) [:6.1.0.Final] at org.jboss.web.tomcat.service.deployers.TomcatDeployment.performDeploy(TomcatDeployment.java:146) [:6.1.0.Final] at org.jboss.web.deployers.AbstractWarDeployment.start(AbstractWarDeployment.java:476) [:6.1.0.Final] at org.jboss.web.deployers.WebModule.startModule(WebModule.java:118) [:6.1.0.Final] at org.jboss.web.deployers.WebModule.start(WebModule.java:95) [:6.1.0.Final] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [:1.7.0_05] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) [:1.7.0_05] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [:1.7.0_05] at java.lang.reflect.Method.invoke(Method.java:601) [:1.7.0_05] at org.jboss.mx.interceptor.ReflectedDispatcher.invoke(ReflectedDispatcher.java:157) [:6.0.0.GA] at org.jboss.mx.server.Invocation.dispatch(Invocation.java:96) [:6.0.0.GA] at org.jboss.mx.server.Invocation.invoke(Invocation.java:88) [:6.0.0.GA] at org.jboss.mx.server.AbstractMBeanInvoker.invoke(AbstractMBeanInvoker.java:271) [:6.0.0.GA] at org.jboss.mx.server.MBeanServerImpl.invoke(MBeanServerImpl.java:670) [:6.0.0.GA] at org.jboss.system.microcontainer.ServiceProxy.invoke(ServiceProxy.java:206) [:2.2.0.SP2] at $Proxy41.start(Unknown Source) at org.jboss.system.microcontainer.StartStopLifecycleAction.installAction(StartStopLifecycleAction.java:53) [:2.2.0.SP2] at org.jboss.system.microcontainer.StartStopLifecycleAction.installAction(StartStopLifecycleAction.java:41) [:2.2.0.SP2] at org.jboss.dependency.plugins.action.SimpleControllerContextAction.simpleInstallAction(SimpleControllerContextAction.java:62) [jboss-dependency.jar:2.2.0.SP2] at org.jboss.dependency.plugins.action.AccessControllerContextAction.install(AccessControllerContextAction.java:71) [jboss-dependency.jar:2.2.0.SP2] at org.jboss.dependency.plugins.AbstractControllerContextActions.install(AbstractControllerContextActions.java:51) [jboss-dependency.jar:2.2.0.SP2] at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:379) [jboss-dependency.jar:2.2.0.SP2] at org.jboss.system.microcontainer.ServiceControllerContext.install(ServiceControllerContext.java:301) [:2.2.0.SP2] at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:2044) [jboss-dependency.jar:2.2.0.SP2] at org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:1083) [jboss-dependency.jar:2.2.0.SP2] at org.jboss.dependency.plugins.AbstractController.executeOrIncrementStateDirectly(AbstractController.java:1322) [jboss-dependency.jar:2.2.0.SP2] at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1246) [jboss-dependency.jar:2.2.0.SP2] at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1139) [jboss-dependency.jar:2.2.0.SP2] at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:939) [jboss-dependency.jar:2.2.0.SP2] at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:654) [jboss-dependency.jar:2.2.0.SP2] at org.jboss.system.ServiceController.doChange(ServiceController.java:671) [:6.1.0.Final (Build SVNTag:JBoss_6.1.0.Final date: 
20110816)] at org.jboss.system.ServiceController.start(ServiceController.java:443) [:6.1.0.Final (Build SVNTag:JBoss_6.1.0.Final date: 20110816)] at org.jboss.system.deployers.ServiceDeployer.start(ServiceDeployer.java:189) [:6.1.0.Final] at org.jboss.system.deployers.ServiceDeployer.deploy(ServiceDeployer.java:102) [:6.1.0.Final] at org.jboss.system.deployers.ServiceDeployer.deploy(ServiceDeployer.java:49) [:6.1.0.Final] at org.jboss.deployers.spi.deployer.helpers.AbstractSimpleRealDeployer.internalDeploy(AbstractSimpleRealDeployer.java:63) [:2.2.2.GA] at org.jboss.deployers.spi.deployer.helpers.AbstractRealDeployer.deploy(AbstractRealDeployer.java:55) [:2.2.2.GA] at org.jboss.deployers.plugins.deployers.DeployerWrapper.deploy(DeployerWrapper.java:179) [:2.2.2.GA] at org.jboss.deployers.plugins.deployers.DeployersImpl.doDeploy(DeployersImpl.java:1832) [:2.2.2.GA] at org.jboss.deployers.plugins.deployers.DeployersImpl.doInstallParentFirst(DeployersImpl.java:1550) [:2.2.2.GA] at org.jboss.deployers.plugins.deployers.DeployersImpl.doInstallParentFirst(DeployersImpl.java:1571) [:2.2.2.GA] at org.jboss.deployers.plugins.deployers.DeployersImpl.install(DeployersImpl.java:1491) [:2.2.2.GA] at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:379) [jboss-dependency.jar:2.2.0.SP2] at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:2044) [jboss-dependency.jar:2.2.0.SP2] at org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:1083) [jboss-dependency.jar:2.2.0.SP2] at org.jboss.dependency.plugins.AbstractController.executeOrIncrementStateDirectly(AbstractController.java:1322) [jboss-dependency.jar:2.2.0.SP2] at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1246) [jboss-dependency.jar:2.2.0.SP2] at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1139) [jboss-dependency.jar:2.2.0.SP2] at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:939) [jboss-dependency.jar:2.2.0.SP2] at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:654) [jboss-dependency.jar:2.2.0.SP2] at org.jboss.deployers.plugins.deployers.DeployersImpl.change(DeployersImpl.java:1983) [:2.2.2.GA] at org.jboss.deployers.plugins.deployers.DeployersImpl.process(DeployersImpl.java:1076) [:2.2.2.GA] at org.jboss.deployers.plugins.main.MainDeployerImpl.process(MainDeployerImpl.java:679) [:2.2.2.GA] at org.jboss.system.server.profileservice.deployers.MainDeployerPlugin.process(MainDeployerPlugin.java:106) [:6.1.0.Final] at org.jboss.profileservice.dependency.ProfileControllerContext$DelegateDeployer.process(ProfileControllerContext.java:143) [:0.2.2] at org.jboss.profileservice.plugins.deploy.actions.DeploymentStartAction.doPrepare(DeploymentStartAction.java:98) [:0.2.2] at org.jboss.profileservice.management.actions.AbstractTwoPhaseModificationAction.prepare(AbstractTwoPhaseModificationAction.java:101) [:0.2.2] at org.jboss.profileservice.management.ModificationSession.prepare(ModificationSession.java:87) [:0.2.2] at org.jboss.profileservice.management.AbstractActionController.internalPerfom(AbstractActionController.java:234) [:0.2.2] at org.jboss.profileservice.management.AbstractActionController.performWrite(AbstractActionController.java:213) [:0.2.2] at org.jboss.profileservice.management.AbstractActionController.perform(AbstractActionController.java:150) [:0.2.2] at 
org.jboss.profileservice.plugins.deploy.AbstractDeployHandler.startDeployments(AbstractDeployHandler.java:168) [:0.2.2] at org.jboss.profileservice.management.upload.remoting.DeployHandlerDelegate.startDeployments(DeployHandlerDelegate.java:74) [:6.1.0.Final] at org.jboss.profileservice.management.upload.remoting.DeployHandler.invoke(DeployHandler.java:156) [:6.1.0.Final] at org.jboss.remoting.ServerInvoker.invoke(ServerInvoker.java:967) [:6.1.0.Final] at org.jboss.remoting.transport.socket.ServerThread.completeInvocation(ServerThread.java:791) [:6.1.0.Final] at org.jboss.remoting.transport.socket.ServerThread.processInvocation(ServerThread.java:744) [:6.1.0.Final] at org.jboss.remoting.transport.socket.ServerThread.dorun(ServerThread.java:548) [:6.1.0.Final] at org.jboss.remoting.transport.socket.ServerThread.run(ServerThread.java:234) [:6.1.0.Final] Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name '(inner bean)': FactoryBean threw exception on object creation; nested exception is java.lang.LinkageError: loader constraint violation: when resolving field "DATETIME" the class loader (instance of org/jboss/classloader/spi/base/BaseClassLoader) of the referring class, javax/xml/datatype/DatatypeConstants, and the class loader (instance of <bootloader>) for the field's resolved type, loader constraint violation: when resolving field "DATETIME" the class loader at org.springframework.beans.factory.support.FactoryBeanRegistrySupport$1.run(FactoryBeanRegistrySupport.java:127) [:2.5.6] at java.security.AccessController.doPrivileged(Native Method) [:1.7.0_05] at org.springframework.beans.factory.support.FactoryBeanRegistrySupport.doGetObjectFromFactoryBean(FactoryBeanRegistrySupport.java:116) [:2.5.6] at org.springframework.beans.factory.support.FactoryBeanRegistrySupport.getObjectFromFactoryBean(FactoryBeanRegistrySupport.java:98) [:2.5.6] at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveInnerBean(BeanDefinitionValueResolver.java:223) [:2.5.6] ... 
89 more Caused by: java.lang.LinkageError: loader constraint violation: when resolving field "DATETIME" the class loader (instance of org/jboss/classloader/spi/base/BaseClassLoader) of the referring class, javax/xml/datatype/DatatypeConstants, and the class loader (instance of <bootloader>) for the field's resolved type, loader constraint violation: when resolving field "DATETIME" the class loader at com.sun.xml.bind.v2.model.impl.RuntimeBuiltinLeafInfoImpl.<clinit>(RuntimeBuiltinLeafInfoImpl.java:263) [:2.2] at com.sun.xml.bind.v2.model.impl.RuntimeTypeInfoSetImpl.<init>(RuntimeTypeInfoSetImpl.java:65) [:2.2] at com.sun.xml.bind.v2.model.impl.RuntimeModelBuilder.createTypeInfoSet(RuntimeModelBuilder.java:133) [:2.2] at com.sun.xml.bind.v2.model.impl.RuntimeModelBuilder.createTypeInfoSet(RuntimeModelBuilder.java:85) [:2.2] at com.sun.xml.bind.v2.model.impl.ModelBuilder.<init>(ModelBuilder.java:156) [:2.2] at com.sun.xml.bind.v2.model.impl.RuntimeModelBuilder.<init>(RuntimeModelBuilder.java:93) [:2.2] at com.sun.xml.bind.v2.runtime.JAXBContextImpl.getTypeInfoSet(JAXBContextImpl.java:473) [:2.2] at com.sun.xml.bind.v2.runtime.JAXBContextImpl.<init>(JAXBContextImpl.java:319) [:2.2] at com.sun.xml.bind.v2.runtime.JAXBContextImpl$JAXBContextBuilder.build(JAXBContextImpl.java:1170) [:2.2] at com.sun.xml.bind.v2.ContextFactory.createContext(ContextFactory.java:188) [:2.2] at com.sun.xml.bind.api.JAXBRIContext.newInstance(JAXBRIContext.java:111) [:2.2] at com.sun.xml.ws.developer.JAXBContextFactory$1.createJAXBContext(JAXBContextFactory.java:113) [:2.2.3] at com.sun.xml.ws.model.AbstractSEIModelImpl$1.run(AbstractSEIModelImpl.java:166) [:2.2.3] at com.sun.xml.ws.model.AbstractSEIModelImpl$1.run(AbstractSEIModelImpl.java:159) [:2.2.3] at java.security.AccessController.doPrivileged(Native Method) [:1.7.0_05] at com.sun.xml.ws.model.AbstractSEIModelImpl.createJAXBContext(AbstractSEIModelImpl.java:158) [:2.2.3] at com.sun.xml.ws.model.AbstractSEIModelImpl.postProcess(AbstractSEIModelImpl.java:99) [:2.2.3] at com.sun.xml.ws.model.RuntimeModeler.buildRuntimeModel(RuntimeModeler.java:250) [:2.2.3] at com.sun.xml.ws.server.EndpointFactory.createSEIModel(EndpointFactory.java:343) [:2.2.3] at com.sun.xml.ws.server.EndpointFactory.createEndpoint(EndpointFactory.java:205) [:2.2.3] at com.sun.xml.ws.api.server.WSEndpoint.create(WSEndpoint.java:513) [:2.2.3] at org.jvnet.jax_ws_commons.spring.SpringService.getObject(SpringService.java:333) [:] at org.jvnet.jax_ws_commons.spring.SpringService.getObject(SpringService.java:45) [:] at org.springframework.beans.factory.support.FactoryBeanRegistrySupport$1.run(FactoryBeanRegistrySupport.java:121) [:2.5.6] ... 93 more *** DEPLOYMENTS IN ERROR: Name -> Error vfs:///usr/jboss/jboss-6.1.0.Final/server/default/deploy/SpringWS.war -> org.jboss.deployers.spi.DeploymentException: URL file:/usr/jboss/jboss-6.1.0.Final/server/default/tmp/vfs/automountb159aa6e8c1b8582/SpringWS.war-4ec4d0151b4c7d7/ deployment failed DEPLOYMENTS IN ERROR: Deployment "vfs:///usr/jboss/jboss-6.1.0.Final/server/default/deploy/SpringWS.war" is in error due to the following reason(s): org.jboss.deployers.spi.DeploymentException: URL file:/usr/jboss/jboss-6.1.0.Final/server/default/tmp/vfs/automountb159aa6e8c1b8582/SpringWS.war-4ec4d0151b4c7d7/ deployment failed

    But this application is working correctly on a GlassFish 3.x server, where the web service is up and running. I'm using NetBeans IDE on Ubuntu 12.04 to build and deploy the application, and I couldn't figure out what the issue is here. I guess it's something between Spring and JBoss, because it works smoothly in GlassFish. Please help me solve this issue. Thanks all in advance.

    Read the article

  • FullCalendar with jQuery and Google Calendar

    - by Marv
    Hi, I have a problem with FullCalendar and Google Calendar: my FullCalendar does not show the Google entries. Need help please :)

    <script type="text/javascript">
    // <![CDATA[
    $(document).ready(function() {
        var date = new Date();
        var d = date.getDate();
        var m = date.getMonth();
        var y = date.getFullYear();

        $('#calendar').fullCalendar({
            header: {
                left: 'prev,next today',
                center: 'title',
                right: 'month,basicWeek,basicDay'
            }
        });

        $('#calendar').fullCalendar({
            events: $.fullCalendar.gcalFeed(
                "http://www.google.com/calendar/feeds/marvin.spies%40gmx.de/public/basic/",
                {   // put your options here
                    className: 'gcal-event',
                    editable: true,
                    currentTimezone: 'Europe/Berlin'
                }
            )
        });
    });
    // ]]>
    </script>
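    A likely culprit, for what it's worth: the element is initialized twice. Once FullCalendar has been attached to #calendar, the second .fullCalendar(options) call is ignored, so the Google feed never registers. A minimal sketch of a single combined initialization (same feed URL and options as above; this also assumes FullCalendar's gcal.js extension is loaded on the page):

    $(document).ready(function() {
        $('#calendar').fullCalendar({
            header: {
                left: 'prev,next today',
                center: 'title',
                right: 'month,basicWeek,basicDay'
            },
            // Register the Google Calendar feed in the same initialization call.
            events: $.fullCalendar.gcalFeed(
                "http://www.google.com/calendar/feeds/marvin.spies%40gmx.de/public/basic/",
                { className: 'gcal-event', editable: true, currentTimezone: 'Europe/Berlin' }
            )
        });
    });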

    Read the article

  • jQuery datepicker getMinDate '+1d'

    - by Adrian Adkison
    Once I have set the minDate property of a datepicker with the convenient string syntax

    $(elem).datepicker('option', 'minDate', '+1d +3m');

    how can I get the date object of the minDate? To help illustrate, there is a method

    $(elem).datepicker('getDate');

    which returns the date that is entered in the input in the form of a date object. I would like the same thing, but for datepicker('getMinDate'). There is an option like this:

    $(elem).datepicker('option', 'minDate');

    but it returns '+1d +3m', which is not helpful. I need the actual date object to compare with another date object. Any ideas?
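    As far as I know, datepicker does not ship a getMinDate method, and the 'option' getter just echoes back whatever value was set. One workaround is to interpret the relative-offset syntax yourself. The helper below is a sketch, not part of jQuery UI; it assumes minDate was set as a string of offsets of the form +Nd / +Nw / +Nm / +Ny relative to today:

    // Hypothetical helper: turn a datepicker-style offset string such as
    // "+1d +3m" into a Date relative to today. Supports d (days), w (weeks),
    // m (months) and y (years).
    function resolveRelativeDate(offset) {
        var date = new Date();
        var pattern = /([+-]?\d+)\s*(d|w|m|y)/gi;
        var match;
        while ((match = pattern.exec(offset)) !== null) {
            var amount = parseInt(match[1], 10);
            switch (match[2].toLowerCase()) {
                case 'd': date.setDate(date.getDate() + amount); break;
                case 'w': date.setDate(date.getDate() + amount * 7); break;
                case 'm': date.setMonth(date.getMonth() + amount); break;
                case 'y': date.setFullYear(date.getFullYear() + amount); break;
            }
        }
        return date;
    }

    // Usage:
    var minDate = resolveRelativeDate($(elem).datepicker('option', 'minDate'));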

    Read the article

  • Getting Started with Prism (aka Composite Application Guidance for WPF and Silverlight)

    - by dotneteer
    Overview

    Prism is a framework from the Microsoft Patterns and Practices team that allows you to create WPF and Silverlight applications in a modular way. It is especially valuable for larger projects in which a large number of developers can develop in parallel. Prism achieves its goal by supplying several services:

    - Dependency Injection (DI) and Inversion of Control (IoC): By using DI, Prism takes the responsibility for instantiating and managing the lifetime of dependency objects away from individual components and gives it to a container. Prism relies on containers to discover, manage and compose large numbers of objects. By varying the configuration, the container can also inject mock objects for unit testing. Out of the box, Prism supports Unity and MEF as containers, although it is possible to use other containers by subclassing the Bootstrapper class.

    - Modularity and regions: Prism supplies the framework to split an application into modules loaded by the application shell. Each module is a library project that contains both UI and code and is responsible for initializing itself when loaded by the shell. Each window can be further divided into regions. A region is a user control with an associated model.

    - Model, view and view-model (MVVM) pattern: Prism promotes the use of MVVM. The use of a DI container makes it much easier to inject the model into the view. WPF already has excellent data binding and commanding mechanisms. To be productive with Prism, it is important to understand WPF data binding and commanding well.

    - Event aggregation: Prism promotes loosely coupled components. Prism discourages components from different modules from communicating with each other directly, since that leads to dependencies. Instead, Prism supplies an event-aggregation mechanism that allows components to publish and subscribe to events without knowing about each other.

    Architecture

    In the following, I will go into a little more detail on the services provided by Prism.

    Bootstrapper

    In a typical WPF application, application start-up is controlled by App.xaml and its code-behind, and the main window of the application is typically specified in the App.xaml file. In a Prism application, we start a bootstrapper in the App class and delegate the duty of creating the main window to the bootstrapper. The bootstrapper starts a dependency-injection container, so all future object instantiations are managed by the container. Out of the box, Prism provides the UnityBootstrapper and MefBootstrapper abstract classes. Every application needs either to provide a concrete implementation of one of these bootstrappers or, alternatively, to subclass the Bootstrapper class with another DI container. A concrete bootstrapper class must implement the CreateShell method; its responsibility is to resolve and create the Shell object through the DI container to serve as the main window for the application. The other important method to override is ConfigureModuleCatalog, where the bootstrapper can register modules for the application. In a more advanced scenario, an application does not have to know all its modules at compile time; modules can be discovered at run time. Readers can refer to one of the Modularity QuickStarts for more information.

    Modules

    Once modules are registered with or discovered by Prism, they are instantiated by the DI container and their Initialize method is called. The DI container can inject into a module a region registry that implements the IRegionViewRegistry interface. The module, in its Initialize method, can then call the RegisterViewWithRegion method of the registry to register its regions.

    Regions

    Regions, once registered, are managed by the RegionManager. The shell can then load regions either through the RegionManager.RegionName attached property or dynamically through code. When a view is created by the region manager, the DI container can inject a view model and other services into the view. The view then has a reference to the view model, through which it can interact with backend services.

    Service locator

    Although it is possible to inject services into dependent classes through a DI container, an alternative is to use the ServiceLocator to retrieve a service on demand. Prism supplies a service locator implementation, and it is possible to get an instance of a service by calling:

    ServiceLocator.Current.GetInstance<IServiceType>()

    Event aggregator

    Prism supplies an IEventAggregator interface and implementation that can be injected into any classes that need to communicate with each other in a loosely coupled fashion. The event aggregator uses a publisher/subscriber model. A class can publish an event by calling eventAggregator.GetEvent<EventType>().Publish(parameter) to raise the event; other classes can subscribe to the event by calling eventAggregator.GetEvent<EventType>().Subscribe(EventHandler, other options). A short sketch appears at the end of this article.

    Getting started

    The easiest way to get started with Prism is to go through the Prism Hands-On Labs and look at the Hello World QuickStart, which shows how the bootstrapper, modules and regions work. Next, I would recommend looking at the Stock Trader Reference Implementation. It is a more in-depth example that resembles how we would want to set up a real application. Several other QuickStarts cover individual Prism services; some scenarios, such as dynamic module discovery, are more advanced. Apart from the official Prism documentation, you can get an overview by reading Glenn Block's MSDN Magazine article. I have found that the best free training material is from the Boise Code Camp. To be effective with Prism, it is important to first understand key concepts of WPF well, such as the DependencyProperty system, data binding, resources, themes and ICommand. It is also important to know your DI container of choice well. I will try to explore these subjects in depth in the future.

    Testimony

    Recently, I worked on a desktop WPF application using Prism and had a wonderful experience with it. Prism is flexible enough even in the presence of third-party controls such as the Telerik WPF controls. We never encountered any significant obstacle.
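    To make the event-aggregation description concrete, here is a minimal sketch. The event and view-model names are hypothetical; the CompositePresentationEvent API shown is from Prism 4's Microsoft.Practices.Prism.Events namespace:

    using Microsoft.Practices.Prism.Events;

    // Hypothetical event type; the payload is the selected stock symbol.
    public class StockSelectedEvent : CompositePresentationEvent<string> { }

    public class PublisherViewModel
    {
        private readonly IEventAggregator eventAggregator;

        public PublisherViewModel(IEventAggregator eventAggregator)
        {
            this.eventAggregator = eventAggregator;
        }

        public void OnStockSelected(string symbol)
        {
            // Raise the event without holding a reference to any subscriber.
            eventAggregator.GetEvent<StockSelectedEvent>().Publish(symbol);
        }
    }

    public class SubscriberViewModel
    {
        public SubscriberViewModel(IEventAggregator eventAggregator)
        {
            // Subscribe without knowing who publishes.
            eventAggregator.GetEvent<StockSelectedEvent>().Subscribe(OnStockSelected);
        }

        private void OnStockSelected(string symbol)
        {
            // React to the selection here.
        }
    }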

    Read the article

< Previous Page | 212 213 214 215 216 217 218 219 220 221 222 223  | Next Page >