Search Results

Search found 58436 results on 2338 pages for 'data'.


  • CodePlex Daily Summary for Monday, May 28, 2012

    CodePlex Daily Summary for Monday, May 28, 2012

    Popular Releases

    ScreenShot: InstallScreenShot: This is the current stable release.
    .Net Code Samples: Code Samples: Code samples (SLNs).
    LINQ_Koans: LinqKoans v.02: Cleaned up a bit.
    KeelKit: KeelKit 3.0.7600.638: MySQL Model support. Download mysql-connector-net-6.5.4.msi from http://dev.mysql.com/downloads/connector/net/, see the DemoMySQL.rar demo, and register the provider in C:\Windows\Microsoft.NET\Framework\v4.0.30319\Config\machine.config: <system.data> <DbProviderFactories> <add name="MySQL Data Provider" invariant="MySql.Data.MySqlClient" description=".Net Framework Data Provider for MySQL" type="MySql.Data.MySqlClient.MySqlC...
    TwitterOAuth: TwitterOauth 0.25.16.0116: Beta release.
    testtom05242012git02: d1: d1.
    testdd05242012git001: zxczxc: zxczxczxc.
    CODE Framework: 4.0.20524.0: This release has quite a few enhancements for WPF applications and SOA features. See change logs for more details.
    CommonLibrary.NET: CommonLibrary.NET 0.9.8 - Final Release: A collection of very reusable code and components in C# 4.0, ranging from ActiveRecord, Csv, Command Line Parsing, Configuration, Holiday Calendars, Logging, Authentication, and much more. CommonLibrary.NET 0.9.8 contains a scripting language called FluentScript. Application: FluentScript; Version: 0.9.8; Build: 0.9.8.4; Changeset: 75050 (CommonLibrary.NET); Release date: May 24, 2012; Binaries: CommonLibrary.dll; Namespace: ComLib.Lang; Project site: http://fluentscript.codeplex.com...
    System Center Orchestrator Integration Packs: Active Directory 3.2: An integration pack enabling AD automation. 3.2 updates: LDAP pathing updated to support cross-forest scenarios; Get Object Property Value filtering efficiency enhancements.
    Bunch of Small Tools: Mélangeur de vocabulaire japonais: Generates randomized Japanese vocabulary exercises from vocabulary lists; 22 lists ship with the program.
    Expression Tree Visualizer for VS 2010: Expression Tree Visualizer Beta: This is a beta release; some expression types are not handled and use a default visualization behavior. The first release will be published soon. Wait for it...
    Ulfi: Ulfi source: Build with Visual Studio 2010 Express C# or better.
    JayData - The cross-platform HTML5 data-management library for JavaScript: JayData 1.0 RC1 Refresh 2: JayData is a unified data access library for JavaScript developers to query and update data from different sources like webSQL, indexedDB, OData, Facebook or YQL. See it in action in this 6-minute video: http://www.youtube.com/watch?v=LlJHgj1y0CU. RC1 R2 release highlights: Knockout.js integration. Using the Knockout.js module, your UI can be automatically refreshed when the data model changes, so you can develop the front-end of your data manager app even faster. Querying 1:N relations in W...
    Christoc's DotNetNuke Module Development Template: 00.00.08 for DNN6: BEFORE USE you need to install the MSBuild Community Tasks, available from http://msbuildtasks.tigris.org. For best results, configure your development environment as described in this blog post, then read this latest blog post about customizing and using these custom templates. Installation is simple: place the ZIP (not extracted) file in My Documents\Visual Studio 2010\Templates\ProjectTemplates\Visual C#\Web, or for VB, My Documents\Visual Studio 2010\Te...
    Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.53: Fix for issue #18106, where member operators on numeric literals caused the member part to be duplicated when not minifying numeric literals. NEW FEATURE: ability to create source map files! The first map-file format to be supported is the Script# format. Use the new -map filename switch to create map files when building your sources.
    CreditAnalytics: CreditAnalytics Release 1.5: 22 May 2012 (v1.5) (Build 449). Regressor Framework: implementation of the regressor set, tolerance check, curve scenario regressors, regression framework suite, and the eventual regression output. Discount Curve Regression: regressing base curve creation, scenario curve creation, and calculation of spot/effective implied rates and discount factors. Credit Curve Regression: regressing base curve creation, scenario curve creation, and calculation of spot/effective implied hazard rates, reco...
    BlackJumboDog: Ver5.6.3 (2012.05.22): HTTP and ftp:// handling fixes.
    LogicCircuit: LogicCircuit 2.12.5.22: Logic Circuit is educational software for designing and simulating logic circuits. An intuitive graphical user interface allows you to create an unrestricted circuit hierarchy with multi-bit buses, debug circuit behavior with an oscilloscope, and navigate a running circuit hierarchy. This release fixes a start-up issue.
    Orchard Project: Orchard 1.4.2: This is a service release to address 1.4 and 1.4.1 bugs. Please read the release notes for Orchard 1.4.2: http://docs.orchardproject.net/Documentation/Orchard-1-4-Release-Notes

    New Projects

    .Net Code Samples: Various .Net code samples.
    AFSAspnetPusherV1: my new wns project lol
    AFSAspnetPusherV2: renewed version of
    AFSAspnetPusherV4: renewed one v4
    AgileDesign Utilities Library: This library provides common functionality usable for most software projects: Logger (asynchronous logging on top of the new Microsoft logging class TraceSource with a simplified API), NameOf (avoid using string names using static reflection), and various reflection helpers.
    Associate Many to Many Relationship Entities Tool for Dynamics CRM 2011: Used in Dynamics CRM 2011 to associate or disassociate N:N relationship entities. This tool is a Dynamics CRM 2011 solution consisting of one entity and one plugin. A "Many to Many Relationship" entity record is used by the plugin to associate or disassociate entities: when a many-to-many relationship entity record is created, the plugin associates/disassociates entities from the record data.
    Boxhead Multiplayer Server: A PHP dedicated server for my multiplayer version of Boxhead.
    CodedUITraceFiletoCSV: Console application to parse the ".trx" result file generated by Coded UI Test execution into a comma-separated file for more readable and detailed results.
    FlipExt: FlipExt is an easy-to-use image converter. It converts any image to .png, .bmp, .jpg, .gif, .tif, .jpeg, .tiff, or .ico. More extensions will be added soon.
    Foo Values Maker: Foo creates values for your test class variables so that you can write tests faster.
    FoodFree: Flood monitoring project (original description in Portuguese: "Projeto de monitoramento de enchentes").
    HouodeProject.
    Land of Dreams, Codename: Waterloo: We want to create a classical live MMORPG you can play on your smartphone (in the first step only "Windows Phone" will be supported), with the basic idea of Ultima Online or similar games in mind. You can create one or more characters; choose a name, gender, basic attributes (skin and hair color, …), a race (e.g. 'Human') and a profession (e.g. 'fighter' or 'craftsman'). A character can then freely travel through the whole world, meeting other players, fighting monsters, completing quests, tra...
    Makecert UI: Makecert UI is a shell layer application on top of the Microsoft makecert.exe utility. It makes it easy to create self-signed certificates, even from your own CA.
    MS CRM Rich Text box: Rich text box plug-in for MS CRM 2011. Hope it will be helpful for many of you. Thank you for using it, and let me know if any further help is needed.
    Nivo Slider Web Part SharePoint 2010: SharePoint 2010 implementation of Nivo Slider. An easy way to put the Nivo slider on any page!
    Office365 Weather WebPart: Office 365 web part that displays a 5-day weather forecast for a given location. The weather data is retrieved from the Met Office feed hosted on the Windows Azure Data Market, a free feed that provides weather data for the UK only.
    Private Cloud Solution Design: This project is named "Training Cloud". It provides a solution for technical audiences to self-learn with a hands-on-lab experience using Microsoft technology hosted in a virtualized environment built on System Center 2012. It depends on hardware such as RAM, storage, and network. End users get labs deployed on a private cloud, which can be easily maintained and set up using the cloud's functionality.
    pyUpdater: pyUpdater provides a platform for updating Python-based applications.
    SharePoint Document Navigator: SP Document Navigator is a front-end solution for navigating a document library using jQuery and jQuery Mobile.
    Trimetable: Train schedule for WP7.
    Upload Master Pages & Page Layouts to Master Page Gallery using PowerShell: This document details the steps to upload master pages and page layouts to the Master Page Gallery using the "Upload Master Pages" utility. 1. Download the .zip file. 2. Edit the "UploadMasterPages.bat" file and change the <<site collection url>> with respect to the environment, e.g. http://sitecollectionurl. 3. Save the "UploadMasterPages.bat" file and close it. 4. Put all of your master pages and page layouts in the Doc folder. 5. Run "UploadMasterPages.bat" as Administrat...
    vivo: vivo Vietnamese Voice: Vietnamese voice recognition project. thaihung.bkhn@gmail.com, http://eking.vn
    vnv: VNV Vietnamese Voice: Vietnamese voice recognition project. thaihung.bkhn@gmail.com, http://eking.vn
    Windows Phone 7 User Guide Page: Take your app's users through a guided tour! Make your app's hidden gems shine; make users understand your app's logic and UX better.
    WP-FTS: This plugin for Wordpress replaces the default search engine, implemented using a simple "LIKE" operator, with the more powerful Full-Text Engine that comes with SQL Server.


  • SQL SERVER – DELETE, TRUNCATE and RESEED Identity

    - by pinaldave
    Yesterday I had a headache answering questions from a DBA on the subject of resetting identity values for all tables. After talking to him I realized that he had no idea how the identity column behaves when DELETE, TRUNCATE or RESEED is used. Let us run a small T-SQL script. Create a temp table with an identity column beginning with value 11; the seed value is 11.

    USE [TempDB]
    GO
    -- Create Table
    CREATE TABLE [dbo].[TestTable](
        [ID] [int] IDENTITY(11,1) NOT NULL,
        [var] [nchar](10) NULL
    ) ON [PRIMARY]
    GO
    -- Build sample data
    INSERT INTO [TestTable] VALUES ('val')
    GO

    When the seed value is 11, the first row inserted gets identity value 11.

    -- Select Data
    SELECT * FROM [TestTable]
    GO

    Effect of the DELETE statement:

    -- Delete Data
    DELETE FROM [TestTable]
    GO

    When the DELETE statement is executed without a WHERE clause, it deletes all the rows. However, when a new record is inserted, the identity value increases from 11 to 12. It does not reset but keeps on increasing.

    -- Build sample data
    INSERT INTO [TestTable] VALUES ('val')
    GO
    -- Select Data
    SELECT * FROM [TestTable]

    Effect of the TRUNCATE statement:

    -- Truncate table
    TRUNCATE TABLE [TestTable]
    GO

    When the TRUNCATE statement is executed, it removes all the rows, and the next record inserted starts again at 11: TRUNCATE resets the identity value to the original seed value of the table.

    -- Build sample data
    INSERT INTO [TestTable] VALUES ('val')
    GO
    -- Select Data
    SELECT * FROM [TestTable]
    GO

    Effect of the RESEED statement: notice that I am using a reseed value of 1. The original seed value when I created the table was 11; however, I am reseeding it with the value 1.

    -- Reseed
    DBCC CHECKIDENT ('TestTable', RESEED, 1)
    GO

    When we insert one more row and check the value, the new value generated is 2. The logic for this new value is reseed value + increment; in this case, 1 + 1 = 2.

    -- Build sample data
    INSERT INTO [TestTable] VALUES ('val')
    GO
    -- Select Data
    SELECT * FROM [TestTable]
    GO

    Here is the clean-up act.

    -- Clean up
    DROP TABLE [TestTable]
    GO

    Question for you: if I reseed with some random number and then run a TRUNCATE on the table, what will the seed value of the table be? (For example, if the original seed value is 11 and I reseed the value to 1, and then truncate the table, what is the seed value now?) Here is the complete script together; you can modify it and find the answer to the above question. Please leave a comment with your answer.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
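    If you want to test the question directly, here is a minimal sketch of that experiment, reusing the same table definition as above (run it and check the ID of the final insert; the script deliberately does not reveal the answer):

    -- Sketch: reseed, then truncate, then insert -- what seed wins?
    CREATE TABLE [dbo].[TestTable](
        [ID] [int] IDENTITY(11,1) NOT NULL,  -- original seed is 11
        [var] [nchar](10) NULL
    ) ON [PRIMARY]
    GO
    DBCC CHECKIDENT ('TestTable', RESEED, 1)  -- reseed to 1
    GO
    TRUNCATE TABLE [TestTable]                -- now truncate
    GO
    INSERT INTO [TestTable] VALUES ('val')
    GO
    SELECT * FROM [TestTable]                 -- the ID here is the answer
    GO
    DROP TABLE [TestTable]
    GO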


  • Endeca Information Discovery 3-Day Hands-on Training Workshop

    - by Mike.Hallett(at)Oracle-BI&EPM
    For Oracle Partners, on October 3-5, 2012 in Milan, Italy: Register here. Endeca Information Discovery plays a key role in your big data analysis and complements Oracle Business Intelligence solutions such as OBIEE. This FREE hands-on workshop for Oracle Partners highlights technical know-how of the product and helps you understand its value proposition. We will walk you through four key components of the product:

    Oracle Endeca Server: a highly scalable, search-analytical database that derives the data model based on the data presented to it, thereby reducing data modeling requirements.
    Studio: a highly interactive, component-based user interface for configuring advanced, yet intuitive, analytical applications.
    Integration Suite: provides rapid unification and enrichment of diverse sources of information into a single integrated view.
    Extensible Value-Added Modules: add-on modules that provide value quickly through configuration instead of custom coding.

    Topics covered will include Data Exploration with Endeca Information Discovery, Data Ingest, Project Lifecycle, Building an Endeca Server data model and advanced modeling techniques, and Working with Studio.

    Lab Outline: The labs showcase Oracle Endeca Information Discovery components and functionality by providing expertise on features and know-how of building such applications. The hands-on activities are based on a Quick Start application provided during the class.

    Audience: Oracle Partners; Big Data Analytics Developers and Architects; BI and EPM Application Developers and Implementers; Data Warehouse Developers.

    Equipment Requirements: Attendees must provide their own laptops, meeting the following minimum hardware/software requirements.

    Hardware: 8 GB RAM is highly recommended (a 64-bit Windows machine is required); 40 GB free space (includes staging); at least one available USB 2.0 port.

    Software: one of the following operating systems: a 64-bit Windows host/laptop OS (Windows 7 or Windows Server 2008), or a 64-bit host/laptop OS with a Windows VM (Server, or Win 7, BIC2g, etc.); Internet Explorer 8.x, Firefox 3.6 or Firefox 6.0; WinRAR or 7-Zip to unzip the workshop files (downloadable from http://www.win-rar.com/download.html and http://www.7zip.com/).

    Oracle Endeca Information Discovery Workshop. Register here: October 3-5, 2012: Cinisello Balsamo, Milan. We will confirm your place within 2 weeks.

    Questions? Send email to: [email protected] : Oracle Platform Technologies Enablement Services.


  • How to transform gameObjects in array?

    - by user1764781
    I have an array of the available gameObjects in the scene. The array of GOs should be transformed according to floats received through a UDP connection. I know it sounds simple, but I can't figure out how to apply the transformation to the array of GOs from the unique floats received for each GO; any help will be appreciated. Here is how the transformation floats are read, in case it is helpful:

    x = ReadSingleBigEndian(data, signalOffset); signalOffset += 4;
    y = ReadSingleBigEndian(data, signalOffset); signalOffset += 4;
    z = ReadSingleBigEndian(data, signalOffset); signalOffset += 4;
    alpha = ReadSingleBigEndian(data, signalOffset); signalOffset += 4;
    theta = ReadSingleBigEndian(data, signalOffset); signalOffset += 4;
    phi = ReadSingleBigEndian(data, signalOffset); signalOffset += 4;
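    A minimal sketch of one way to wire this up in Unity C#, assuming the stream carries one six-float block per GameObject in array order, and that alpha/theta/phi map to Euler rotation angles (both assumptions are mine; ReadSingleBigEndian is the helper from the question):

    // Hypothetical sketch: decode one 6-float block per GameObject, in array order
    void ApplyTransforms(GameObject[] objects, byte[] data)
    {
        int signalOffset = 0;
        foreach (GameObject go in objects)
        {
            float x = ReadSingleBigEndian(data, signalOffset); signalOffset += 4;
            float y = ReadSingleBigEndian(data, signalOffset); signalOffset += 4;
            float z = ReadSingleBigEndian(data, signalOffset); signalOffset += 4;
            float alpha = ReadSingleBigEndian(data, signalOffset); signalOffset += 4;
            float theta = ReadSingleBigEndian(data, signalOffset); signalOffset += 4;
            float phi = ReadSingleBigEndian(data, signalOffset); signalOffset += 4;

            go.transform.position = new Vector3(x, y, z);
            // Assumes alpha/theta/phi are Euler angles in degrees; adjust axis mapping to your protocol
            go.transform.rotation = Quaternion.Euler(alpha, theta, phi);
        }
    }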


  • Azure - Part 5 - Repository Pattern for Table Service

    - by Shaun
    In my last post I created a very simple WCF service with user registration functionality. I created an entity for the user data and a DataContext class which provides some methods for operating on the entities, such as add, delete, etc., and in the service method I utilized it to add a new entity into the table service. But I didn't have any validation before registering, which is not acceptable in a real project. So in this post I will first add some validation before the data creation code, and show how to use LINQ with the table service.

    LINQ to Table Service

    Since the table service utilizes ADO.NET Data Services to expose the data, and the managed library of ADO.NET Data Services supports LINQ, we can use it to deal with the data of the table service. Let me explain with my current example: I would like to ensure that when registering a new user the email address is unique, so I need to check the account entities in the table service before the add. If you remember, in my last post I mentioned that there's a method in the TableServiceContext class, CreateQuery, which will create an IQueryable instance for a given type of entity. So here I create a method under my AccountDataContext class, named Load, which returns an IQueryable<Account>.

    public class AccountDataContext : TableServiceContext
    {
        private CloudStorageAccount _storageAccount;

        public AccountDataContext(CloudStorageAccount storageAccount)
            : base(storageAccount.TableEndpoint.AbsoluteUri, storageAccount.Credentials)
        {
            _storageAccount = storageAccount;

            var tableStorage = new CloudTableClient(_storageAccount.TableEndpoint.AbsoluteUri,
                _storageAccount.Credentials);
            tableStorage.CreateTableIfNotExist("Account");
        }

        public void Add(Account accountToAdd)
        {
            AddObject("Account", accountToAdd);
            SaveChanges();
        }

        public IQueryable<Account> Load()
        {
            return CreateQuery<Account>("Account");
        }
    }

    The method returns an IQueryable<Account> so that I can perform LINQ operations on it. Back in my service class, I use it to implement my validation.

    public bool Register(string email, string password)
    {
        var storageAccount = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
        var accountToAdd = new Account(email, password) { DateCreated = DateTime.Now };
        var accountContext = new AccountDataContext(storageAccount);

        // validation
        var accountNumber = accountContext.Load()
            .Where(a => a.Email == accountToAdd.Email)
            .Count();
        if (accountNumber > 0)
        {
            throw new ApplicationException(string.Format("Your account {0} had been used.", accountToAdd.Email));
        }

        // create entity
        try
        {
            accountContext.Add(accountToAdd);
            return true;
        }
        catch (Exception ex)
        {
            Trace.TraceInformation(ex.ToString());
        }
        return false;
    }

    I used the Load method to retrieve the IQueryable<Account> and the Where method to find the accounts whose email address is the same as the one being registered. If any exist, I throw an exception back to the client side. Let's run it and test from my simple client application.

    Oops! Looks like we encountered an unexpected exception. It said "Count" is not supported by the ADO.NET Data Services LINQ managed library. That is because the table storage managed library (aka TableServiceContext) is based on ADO.NET Data Services, and it supports only a very limited set of LINQ operations. Although I didn't find a full list or documentation of which LINQ methods it supports, there is a page on MSDN that gives a rough summary of which query operations the ADO.NET Data Services managed library supports and which it doesn't. As you can see, the Count method is not in the supported list. And it is not only the query operators: the lambda expressions inside the Where method are limited as well when using the ADO.NET Data Services managed library. For example, if you added (a => !a.DateDeleted.HasValue) to the Where method to exclude deleted accounts, it would raise an exception saying "Invalid Input". Based on my experience, you should always use simple comparisons (such as ==, >, <=, etc.) on simple members (such as string, integer, etc.) and not use any shortcut methods (such as string.Compare, string.IsNullOrEmpty, etc.).

    // validation
    var accountNumber = accountContext.Load()
        .Where(a => a.Email == accountToAdd.Email)
        .ToList()
        .Count;
    if (accountNumber > 0)
    {
        throw new ApplicationException(string.Format("Your account {0} had been used.", accountToAdd.Email));
    }

    We changed the code a bit and tried again. Since I had already created an account with my mail address, this time it gave me an exception saying that the email had been used, which is correct.

    Repository Pattern for Table Service

    The AccountDataContext takes the responsibility to save and load the account entity, but only for that specific entity. Is it possible to have a dynamic or generic DataContext class which can operate on any kind of entity in my system? Of course, yes. Although there's no typical database in the table service, we can treat the entities as records, similar to data entities when using OR mapping. Since we can use ORM architecture patterns here, we should be able to adopt one of them, the Repository Pattern, in this example. We know that the base class, TableServiceContext, provides 4 methods for operating on table entities: CreateQuery, AddObject, UpdateObject and DeleteObject. And we can create a relationship between the entity class, the table container name and the entity set name. So it's really simple to have a generic base class for any kind of entity. Let's rename AccountDataContext to DynamicDataContext and make the Account type a type parameter of it.

    public class DynamicDataContext<T> : TableServiceContext where T : TableServiceEntity
    {
        private CloudStorageAccount _storageAccount;
        private string _entitySetName;

        public DynamicDataContext(CloudStorageAccount storageAccount)
            : base(storageAccount.TableEndpoint.AbsoluteUri, storageAccount.Credentials)
        {
            _storageAccount = storageAccount;
            _entitySetName = typeof(T).Name;

            var tableStorage = new CloudTableClient(_storageAccount.TableEndpoint.AbsoluteUri,
                _storageAccount.Credentials);
            tableStorage.CreateTableIfNotExist(_entitySetName);
        }

        public void Add(T entityToAdd)
        {
            AddObject(_entitySetName, entityToAdd);
            SaveChanges();
        }

        public void Update(T entityToUpdate)
        {
            UpdateObject(entityToUpdate);
            SaveChanges();
        }

        public void Delete(T entityToDelete)
        {
            DeleteObject(entityToDelete);
            SaveChanges();
        }

        public IQueryable<T> Load()
        {
            return CreateQuery<T>(_entitySetName);
        }
    }

    I save the name of the entity type at construction time as a performance matter. The table name and entity set name will be the same as the name of the entity class. The Load method returns a generic IQueryable instance which supports the lazy-load feature. Then in my service class I changed the AccountDataContext to DynamicDataContext, and that's all.

    var accountContext = new DynamicDataContext<Account>(storageAccount);

    Run it again and register another account. The DynamicDataContext can now be used for any entities. For example, I would like the account to have a list of notes, each of which contains 3 custom properties: Account Email, Title and Content. We create the Note entity class.

    public class Note : TableServiceEntity
    {
        public string AccountEmail { get; set; }
        public string Title { get; set; }
        public string Content { get; set; }
        public DateTime DateCreated { get; set; }
        public DateTime? DateDeleted { get; set; }

        public Note()
            : base()
        {
        }

        public Note(string email)
            : base(email, string.Format("{0}_{1}", email, Guid.NewGuid().ToString()))
        {
            AccountEmail = email;
        }
    }

    With no need to tweak the DynamicDataContext, we can go directly to the service class to implement the logic. Notice here I utilized two DynamicDataContext instances with different type parameters: Note and Account.

    public class NoteService : INoteService
    {
        public void Create(string email, string title, string content)
        {
            var storageAccount = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
            var accountContext = new DynamicDataContext<Account>(storageAccount);
            var noteContext = new DynamicDataContext<Note>(storageAccount);

            // validate - email must exist
            var accounts = accountContext.Load()
                .Where(a => a.Email == email)
                .ToList()
                .Count;
            if (accounts <= 0)
                throw new ApplicationException(string.Format("The account {0} does not exist in the system; please register and try again.", email));

            // save the note
            var noteToAdd = new Note(email) { Title = title, Content = content, DateCreated = DateTime.Now };
            noteContext.Add(noteToAdd);
        }
    }

    And we updated our client application to test the service. I didn't implement any list service to show all the notes, but we can have a look at the local SQL database if we run it in the local development fabric.

    Summary

    In this post I explained a bit about the limited LINQ support in the table service, and then demonstrated how to use the Repository Pattern in the table service data access layer and make the DataContext dynamic. The DynamicDataContext I created in this post is just a prototype. In fact, we should create the relevant interface to make it testable, and for better structure we should separate the DataContext classes for each individual kind of entity. So it should have IDataContextBase<T> and DataContextBase<T>, and for each entity we would have

    class AccountDataContext : DataContextBase<Account>, IDataContextBase<Account> { … }
    class NoteDataContext : DataContextBase<Note>, IDataContextBase<Note> { … }

    Besides structured data saving and loading, another common scenario is saving and loading binary data such as images and files. In my next post I will show how to use the Blob Service to store binary data, letting the account upload a logo in my example.

    Hope this helps,
    Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
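    As a concrete starting point for that split, here is a minimal sketch; the interface shape is my assumption based on the summary above, and the implementation simply reuses the article's own DynamicDataContext code (same Microsoft.WindowsAzure / StorageClient and System.Linq namespaces as the blocks above):

    // Hypothetical sketch of the IDataContextBase<T> / DataContextBase<T> split (not the author's published code)
    public interface IDataContextBase<T> where T : TableServiceEntity
    {
        void Add(T entity);
        void Update(T entity);
        void Delete(T entity);
        IQueryable<T> Load();
    }

    public abstract class DataContextBase<T> : TableServiceContext, IDataContextBase<T>
        where T : TableServiceEntity
    {
        private readonly string _entitySetName;

        protected DataContextBase(CloudStorageAccount storageAccount)
            : base(storageAccount.TableEndpoint.AbsoluteUri, storageAccount.Credentials)
        {
            _entitySetName = typeof(T).Name;
            var tableStorage = new CloudTableClient(storageAccount.TableEndpoint.AbsoluteUri,
                storageAccount.Credentials);
            tableStorage.CreateTableIfNotExist(_entitySetName);
        }

        public void Add(T entity) { AddObject(_entitySetName, entity); SaveChanges(); }
        public void Update(T entity) { UpdateObject(entity); SaveChanges(); }
        public void Delete(T entity) { DeleteObject(entity); SaveChanges(); }
        public IQueryable<T> Load() { return CreateQuery<T>(_entitySetName); }
    }

    // One thin, mockable context per entity kind
    public class AccountDataContext : DataContextBase<Account>, IDataContextBase<Account>
    {
        public AccountDataContext(CloudStorageAccount storageAccount) : base(storageAccount) { }
    }

    Services can then depend on IDataContextBase<Account> instead of a concrete class, which is what makes the validation logic unit-testable.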


  • Stumbling Through: Visual Studio 2010 (Part II)

    I would now like to expand a little on what I stumbled through in part I of my Visual Studio 2010 post and touch on a few other features of VS 2010. Specifically, I want to generate some code based off of an Entity Framework model and tie it up to an actual data source. I'm not going to take the easy way and tie to a SQL Server data source, though; I will tie it to an XML data file instead. Why? Well, why not? This is purely for learning; there are probably much better ways to get strongly-typed classes around XML, but it will force us to go down a path less travelled and maybe learn a few things along the way. Once we get this XML data and the means to interact with it, I will revisit data binding to this data in a WPF form and see if I can't get reading, adding, deleting, and updating working smoothly with minimal code.

    To begin, I will use what was learned in the first part of this blog topic and draw out a data model for the MFL (My Football League) - I don't want the NFL to come down and sue me for using their name in this totally football-related article. The data model looks as follows, with Teams having Players, and Players having a position and statistics for each season they played:

    Note that when making the associations between these entities, I was given the option to create the foreign key, but I only chose to select this option for the association between Player and Position. The reason for this is that I am picturing the XML that will contain this data to look somewhat like this:

    <MFL>
      <Position/>
      <Position/>
      <Position/>
      <Team>
        <Player>
          <Statistic/>
        </Player>
      </Team>
    </MFL>

    Statistic will be under its associated Player node, and Player will be under its associated Team node; no need to have an Id to reference it if we know it will always fall under its parent. Position, however, is more of a lookup value that will not have any hierarchical relationship to the player. In fact, the Position data itself may be in a completely different XML file (something I'd like to play around with), so in any case, a player will need to reference the position by its Id.

    So now that we have a simple data model laid out, I would like to generate two things based on it: a class for each entity with properties corresponding to each entity property, and an IO class with methods to get data for each entity, either all instances, by Id or by parent. Now, my experience with code generation in the past has consisted of writing up little apps that use the code dom directly to regenerate code on demand (or using tools like CodeSmith). Surely there has got to be a more fun way to do this, given that we are using the Entity Framework, which already has built-in code generation for SQL Server support. Let's start with that built-in stuff to give us a base to work off of. Right click anywhere in the canvas of our model and select "Add Code Generation Item":

    Just adding that code item seemed to do quite a bit towards what I was intending: it apparently generated a class for each entity, but also a whole ton more. I mean a TON more. Way too much complicated code was generated. Now, that code is likely to be a black box anyway, so it shouldn't matter, but we need to understand how to make this work the way we want it to work, so let's get ready to do some stumbling through that text template (tt) file.

    When I open the .tt file that was generated, right off the bat I realize there is going to be trouble: there is no color coding, no intellisense, no nothing! That is going to make stumbling through more like groping blindly in the dark while handcuffed and hopping on one foot, which was one of the alternate titles I was considering for this blog. Thankfully, the community comes to my rescue and I won't have to cast my mind back to the glory days of coding in VI (look it up, kids). Using the Extension Manager (available under the Tools menu), I did a quick search for "tt editor" in the Online Gallery and quickly found the Tangible T4 Editor:

    Downloading and installing this was a breeze, and after doing so I got some color coding and intellisense while editing the tt files. If you will be doing any customizing of tt files, I highly recommend installing this extension. Next, we'll see if that is enough help for us to tweak that tt file to do the kind of code generation that we want.


  • Conflict between Change Control and ASL Mapping

    - by Jie Chen
    Yesterday I got a strange report that on Agile 9.3.1.2, adding a Supplier on an Item's Suppliers tab always removes all the data from the Item.PageTwo.MultiList01 field, which is assigned to a User Group list. The detailed problem description is below.

    In JavaClient, the MultiList01 attribute on the Parts class's PageTwo tab is enabled and assigned to the User Group list. In WebClient, the user created a new Part and assigned MultiList01 two User Groups: "Global User Group Test1" and "Personal Group_Test1". He then went to the Suppliers tab and added three Suppliers. Switching back to the Part's TitleBlock, we see that MultiList01 has lost the User Group data.

    To confirm whether MultiList01 really loses the data or saves other, wrong data, I checked the database and found strange data: MultiList01 holds ",7976911,7976907,7976959,", which is exactly the IDs of those three Suppliers. So I suspected the Supplier attribute on the Suppliers tab must be mapped to MultiList01. However, when I checked Supplier in JavaClient, "ASL mapped to" was blank. More interestingly, the database clearly shows the Supplier attribute (Base ID = 2000004219) is mapped to 2090, which is the Base ID of PageTwo.MultiList01.

    At this point we can conclude that Supplier data really is mapped to MultiList01, even though we assigned MultiList01 to the User Group list and Supplier does not show "ASL mapped to". Some other function must override the "ASL mapped to" visibility in JavaClient with higher priority: the "Change Controlled" function. We immediately see "ASL mapped to" with the value "MultiList01" when we disable Change Controlled for MultiList01.

    If an attribute is Change Controlled, Supplier data should not be mappable to it, because Supplier data can be modified dynamically by users, not by Changes. That this is possible in Agile 9.3.1.2 could be a code defect. We can imagine the scenario the customer hit: he set up Parts.PageTwo.MultiList01 assigned to the Supplier list, then set "ASL mapped to" on the Parts.Suppliers.Supplier attribute to "MultiList01". Later the company's business changed, so he made MultiList01 Change Controlled and re-assigned it to the User Group list, forgetting to remove "ASL mapped to" before modifying MultiList01.

    The solution depends on the real business need. If Supplier still needs to be mapped to a Parts.PageTwo attribute, change "ASL mapped to" to another attribute that is assigned to the Suppliers list. If the "ASL mapped to" function is not needed, delete the data at the database level; we cannot do it from the JavaClient UI:

    delete propertytable
    where id in (select p.id
                 from propertytable p, nodetable n
                 where p.parentid = n.id
                   and n.inherit = 2000004219
                   and propertyid = 794)
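    Before running that delete, a read-only query over the same tables can confirm the stale mapping rows exist; this sketch just reuses the IDs from the delete statement above and assumes, as the delete implies, that propertyid is a column of propertytable:

    -- Sketch: verify the stale "ASL mapped to" rows before deleting them
    select p.id, p.parentid, n.inherit
    from propertytable p, nodetable n
    where p.parentid = n.id
      and n.inherit = 2000004219   -- Supplier attribute base ID from above
      and p.propertyid = 794       -- the property being cleared by the delete

    If this returns the rows you expect (and only those), the delete statement above will remove exactly the same set.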


  • CodePlex Daily Summary for Thursday, November 29, 2012

    CodePlex Daily Summary for Thursday, November 29, 2012

    Popular Releases

    JayData - The cross-platform HTML5 data-management library for JavaScript: JayData 1.2.5: For detailed release notes check the release notes. Handlebars template engine support: implement data manager applications with JayData using Handlebars.js for templating; include JayDataModules/handlebars.js and begin typing the mustaches :) Blog posts: Handlebars templates in JayData; Handlebars helpers and model driven commanding in JayData. Easy JayStorm cloud data management: manage cloud data using the same syntax and data management concept, just like any other data ...
    nopCommerce. Open source shopping cart (ASP.NET MVC): nopcommerce 2.70: Highlighted features & improvements: performance optimization; search engine optimization (ID-less URLs for products, categories, and manufacturers); added ACL support (access control list) on products and categories; minify and bundle JavaScript files; allow a store owner to decide which billing/shipping address fields are enabled/disabled/required (as is already done for the registration page); moved to MVC 4 (.NET 4.5 is required); now Visual Studio 2012 is required to work ...
    SQL Server Partition Management: Partition Management Release 3.0: Release 3.0 adds support for SQL Server 2012 and is backward compatible with SQL Server 2008 and 2005. The release consists of a Readme file, the executable, and the source code (Visual Studio project). Enhancements include: support for columnstore indexes in SQL Server 2012; the ability to create TSQL scripts for staging table and index creation operations; full, locale-independent support for global date and time formats; support for binary partitioning column types; fixes to is...
    PDF Library: PDFLib v2.0: This new version includes many bug fixes and adds support for stream objects and cross-reference object streams. New feature: extract images from the PDF.
    MCEBuddy 2.x: MCEBuddy 2.3.10: Critical update to 2.3.9. Changelog for 2.3.10 (32bit and 64bit): 1. AsfBin executable missing from build. 2. Removed extra references from build to avoid conflict. 3. Showanalyzer installation now checked on remote engine machine. Changelog for 2.3.9 (32bit and 64bit): 1. Added support for WTV output profile. 2. Added support for minimizing MCEBuddy to the system tray. 3. Added support for custom archive folder. 4. Added support to disable subdirectory monitoring. 5. Added support for better TS fil...
    DotNetNuke® Community Edition CMS: 07.00.00: Major highlights: fixed an issue that caused profiles of deleted users to be available; removed the postback after checkboxes are selected in Page Settings > Taxonomy; implemented the functionality required to edit security role names and social group names; fixed a JavaScript error when using a ";" semicolon as a profile property; fixed an issue when using DateTime properties in profiles; fixed a viewstate error when using Facebook authentication in conjunction with "require valid profile fo...
    CODE Framework: 4.0.21128.0: See change notes in the documentation section for details on what's new.
    Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.76: Fixed a typo in ObjectLiteralProperty.IsConstant that caused all object literals to be treated like they were constants, and possibly moved around in the code when they shouldn't be.
    Kooboo CMS: Kooboo CMS 3.3.0: New features: Dropdown/Radio/Checkbox lists no longer reference the userkey; instead they refer to the UUID field for the input value. You can now delete, export, and import content from the database in the site settings. Labels can now be imported and exported. You can now set the required password strength and the maximum number of incorrect login attempts. Child sites can inherit plugins from their parent sites. The view parameter can be changed through the page_context.current value. Addition of c...
    Distributed Publish/Subscribe (Pub/Sub) Event System: Distributed Pub Sub Event System Version 3.0: Important: Wsp 3.0 is NOT backward compatible with Wsp 2.1. Prerequisites: you need to install the Microsoft Visual C++ 2010 Redistributable Package (x64: http://www.microsoft.com/download/en/details.aspx?id=14632; x86: http://www.microsoft.com/download/en/details.aspx?id=5555). Wsp now uses Rx (Reactive Extensions) and .Net 4.0. 3.0 enhancements: the topology changed from a hierarchy to peer-to-peer groups, which should provide much greater scalability and more fault-resi...
    datajs - JavaScript Library for data-centric web applications: datajs version 1.1.0: datajs is a cross-browser and UI agnostic JavaScript library that enables data-centric web applications with the following features: an OData client that enables CRUD operations, including batching and metadata support, using both ATOM and JSON payloads; a single store abstraction that provides a common API on top of HTML5 local storage technologies; a data cache component that allows reading data ranges from a collection and storing them locally to reduce the number of network requests. Changes...
    Team Foundation Server Administration Tool: 2.2: TFS Administration Tool 2.2 supports the Team Foundation Server 2012 Object Model. Visual Studio 2012 or Team Explorer 2012 must be installed before you can install this tool. You can download and install Team Explorer 2012 from http://aka.ms/TeamExplorer2012. There are no functional changes between the previous release (2.1) and this release.
    Coding Guidelines for C# 3.0, C# 4.0 and C# 5.0: Coding Guidelines for CSharp 3.0, 4.0 and 5.0: See Change History for a detailed list of modifications.
    Math.NET Numerics: Math.NET Numerics v2.3.0: Portable library build: adds support for WP8 (.Net 4.0 and higher, SL5, WP8 and .NET for Windows Store apps). New: portable build also for F# extensions (.Net 4.5, SL5 and .NET for Windows Store apps). NuGet: portable builds are now included in the main packages; no more need for special portable packages. Linear algebra: continued major storage rework, in this release focusing on vectors (the previous release was on matrices); thin QR decomposition (in addition to existing full QR); static Cr...
    ExtJS based ASP.NET 2.0 Controls: FineUI v3.2.1: 2012-11-25 v3.2.1: grid updates including SelectAllRows, PageItems (example pages grid/gridpageitems.aspx, grid/gridpageitemsrowexpander.aspx, grid/gridpageitems_pagesize.aspx) and ExpandAllRowExpanders (grid/gridrowexpanderexpandall2.aspx).
    VidCoder: 1.4.9 Beta: Updated HandBrake core to SVN 5079. Fixed crashes when encoding DVDs with title gaps.
    ZXing.Net: ZXing.Net 0.10.0.0: On the way to release 1.0, the API should be stable now with this version; sync with rev. 2521 of the Java version; Windows Phone 8 assemblies; improvements and fixes.
    BlackJumboDog: Ver5.7.3 (2012.11.24): SMTP and DNS fixes, including CNAME resolution and TTL handling.
    Liberty: v3.4.3.0 Release 23rd November 2012: Change log. Added: (H4) a dialog which gives further instructions when attempting to open a "Halo 4 Data" file; (H4) a short note in the weapon editor stating that dropping your weapons will cap their ammo; (Reach) edit the world's gravity; (Reach) fine invincibility controls in the object editor; (Reach) edit object velocity; (Reach) change the teams of AI bipeds and vehicles; (Reach) enable/disable fall damage on the biped editor screen; (Reach) make AIs deaf and/or blind in the objec...
    Umbraco CMS: Umbraco 4.11.1: NuGet: NuGet Blog. Read the release blog post for 4.11.0 and for 4.11.1. What's new: 50 bugfixes (see the issue tracker for a complete list). Read the documentation for the MVC bits. Breaking changes: GetPropertyValue now returns an object, not a string (only affects upgrades from 4.10.x to 4.11.0). Note: if you need Courier, use the release candidate (as of build 26). The code editor has been greatly improved, but is sometimes problematic in Internet Explorer 9 and lower. Pr...

    New Projects

    Convection Game API: A basic game API written in C# using XNA 4.0.
    CS^2 (Casaba's Simple Code Scanner): Casaba's simple code scanner is a tool for managing greps to be run over a source tree and correlating those greps to issues for bug generation.
    CSparse.NET: A Concise Sparse Matrix Package for .NET.
    Document Generation Utility: Extracts different XML entities and wraps them up with jumbled legal English alphabets, with output in the file format defined in settings.
    DuinoExplorer: file manager, http, remote, copy & paste.
    GenomeOS: An experimental x86 object-oriented operating system programmed in C/C++ and x86 Intel assembly.
    Hush.Project: Database.
    Invi: NA.
    LibreTimeTracker Client: Free alternative to the original TimeTracker Client.
    MO Virtual Router: Virtual router for Windows 8.
    My Word Game Publisher: A full web application developed in .NET 2.0 using C#, XML, MS SQL and ASPX.
    MyWGP XmlDataValidator: Designed and developed (in Silverlight 2, C#) to validate the requirements of data in XML files: the type & size of data, and the requirement status. It allows a user to choose the type of data, enter min & max size, and check the requirement status for data elements.
    OMX_AL_test_environment: OMX application layer simulation/testing environment.
    Orchard Coverflow: An Orchard CMS module that provides a way to create iTunes-like coverflow displays out of your Media items.
    SharePoint standart list form javascript utility (SPListFormUtility): SPListFormUtility is a small JavaScript library that helps control the appearance and behavior of standard SharePoint list forms. SharePoint 2010 and 2013 support.
    Shuttle Core: Shuttle Core is a project that contains cross-cutting libraries for use in .NET software development.
    The Prism architecture lack for the Interactions!: The Prism architecture lacks for the Interactions! (Silverlight) The defect opens a popup more than once and creates memory leaks. Example of the lack here.
    YouCast: YouCast (from YouTube and Podcast) allows you to subscribe to video feeds on YouTube as podcasts in any standard podcatcher like Zune PC, iTunes and so forth.
    ZEFIT: Time-tracking tool built as a fourth-year IT apprenticeship project at GIBB in Bern (original description in German).


  • JS closures - Passing a function to a child, how should the shared object be accessed

    - by slicedtoad
    I have a design and am wondering what the appropriate way to access variables is. I'll demonstrate with this example, since I can't seem to describe it better than the title.

    Term is an object representing a bunch of time data (a repeating duration of time defined by a bunch of attributes). Term has some print functionality but does not implement the print functions itself; rather, they are passed in as anonymous functions by the parent. This would be similar to how shaders can be passed to a renderer rather than defined by the renderer. A container (let's call it Box) has a Schedule object that can understand and use Term objects. Box creates Term objects and passes them to Schedule as required. Box also defines the print functions stored in Term. A print function usually takes an argument and uses it to return a string based on that argument and Term's internal data. Sometimes the print function could also use data stored in Schedule, though. I'm calling this data shared.

    So, the question is: what is the best way to access this shared data? I have a lot of options since JS has closures, and I'm not familiar enough to know if I should be using them or avoiding them in this case.

    Options:

    1. Create a local "reference" (term used lightly) to the shared data (the data is not a primitive) when defining the print function, by accessing the shared data through Schedule from Box. Example:

    var schedule = function () {
        var sched = Schedule();
        var t1 = Term(function (x) { // Term.print()
            return (x + sched.data).format();
        });
    };

    2. Bind it to Term explicitly (pass it in Term's constructor or something, or bind it in Schedule after Box passes it), and then access it as an attribute of Term.

    3. Pass it in at the same time x is passed to the print function (from Schedule). This is the most familiar way for me, but it doesn't feel right given JS's closure ability.

    4. Do something weird like bind some context and arguments to print.

    I'm hoping the correct answer isn't purely subjective. If it is, then I guess the answer is just "do whatever works". But I feel like there are some significant differences between the approaches that could have a large impact when stretched beyond my small example.
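    For contrast with the closure version in option 1, here is a sketch of what option 2 could look like; the Term constructor shape is invented for illustration (the question does not define it), and the .format() call mirrors the question's own example:

    // Hypothetical sketch of option 2: the shared object is handed to Term explicitly
    function Term(print, shared) {
        return {
            shared: shared,                  // shared data attached as an attribute of Term
            print: function (x) {
                return print.call(this, x);  // print reaches it via this.shared, no closure over sched
            }
        };
    }

    var sched = Schedule();                  // Schedule() as in the question
    var t1 = Term(function (x) {
        return (x + this.shared.data).format();
    }, sched);

    The practical difference from option 1 is that the print function no longer closes over a particular sched variable, so the same function could be reused for Terms attached to different Schedules.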


  • Coherence Webcast for Developers July 11

    - by jeckels
    Coming on July 11th, we look forward to having you join us for a special Coherence webcast - just for developers! Want to learn how you, the developer, can make applications Big Data and Fast Data ready? Want to be able to customize and manage your applications and services to provide real-time data and processing with ease? Then this webcast is for you.

    Coherence Live Webcast
    Developers: Deploy Highly-Available Custom Services on Your Data Grid
    July 11, 10am Pacific Time
    >> Register now! << (of course, it's free)

    Join Brian Oliver of the Coherence team to see how you can create and deploy customized, highly-available services for your data grid, and how real-time data processing will allow you to provide unmatched end-user experiences. We look forward to having you join us.


  • How do I organize a GUI application for passing around events and for setting up reads from a shared resource

    - by Savanni D'Gerinel
    My tools involved here are GTK and Haskell. My questions are probably pretty trivial for anyone who has done significant GUI work, but I've been off in the equivalent of CGI applications for my whole career.

    I'm building an application that displays tabular data, displays the same data in graph form, and has an edit field both for entering new data and for editing existing data. After asking about sharing resources, I decided that all of the data involved will be stored in an MVar so that every component can just read the current state from the MVar. All of that works, but now it is time for me to rearrange the application so that it can be interactive.

    With that in mind, I have three widgets: a TextView (for editing), a TreeView (for displaying the data), and a DrawingArea (for displaying the data as a graph). I THINK I need to do two things, and the core of my question is: are these the right things, or is there a better way?

    Thing the first: all event handlers, those functions that will be called any time a redisplay is needed, need to be written at a high level and then passed into the function that actually constructs the widget to begin with. For instance:

    drawStatData :: DrawingArea -> MVar Core.ST -> (Core.ST -> SetRepWorkout.WorkoutStore) -> IO ()
    createStatView :: (DrawingArea -> IO ()) -> IO VBox

    createUI :: MVar Core.ST -> (Core.ST -> SetRepWorkout.WorkoutStore) -> IO HBox
    createUI storeMVar field = do
        graphs <- createStatView (\area -> drawStatData area storeMVar field)
        hbox <- hBoxNew False 10
        boxPackStart hbox graphs PackNatural 0
        return hbox

    In this case, createStatView builds up a VBox that contains a DrawingArea to graph the data, and potentially other widgets. It attaches drawStatData to the realize and exposeEvent events of the DrawingArea. I would do something similar for the TreeView, but I am not completely sure how, since I have not yet done it, and what I am thinking of would involve replacing the TreeModel every time the TreeView needs to be updated.

    My alternative to the above would be...

    drawStatData :: DrawingArea -> MVar Core.ST -> (Core.ST -> SetRepWorkout.WorkoutStore) -> IO ()
    createStatView :: IO (VBox, DrawingArea)

    ...but in this case, I would arrange createUI like so:

    createUI :: MVar Core.ST -> (Core.ST -> SetRepWorkout.WorkoutStore) -> IO HBox
    createUI storeMVar field = do
        (graphbox, graph) <- createStatView
        hbox <- hBoxNew False 10
        boxPackStart hbox graphbox PackNatural 0
        on graph realize (drawStatData graph storeMVar field)
        on graph exposeEvent (do
            liftIO $ drawStatData graph storeMVar field
            return ())
        return hbox

    I'm not sure which is better, but that does lead me to...

    Thing the second: it will be necessary for me to rig up an event system so that various events can send signals all the way to my widgets. I'm going to need a mediator of some kind to pass events around and to translate application-semantic events to the actual events that my widgets respond to. Is it better for me to pass my addressable widgets up the call stack to the level where the mediator lives, or to pass the mediator down the call stack and have the widgets register directly with it?

    So, in summary, my two questions:

    1) Pass widgets up the call stack to a global mediator, or pass the global mediator down and have the widgets register themselves with it?
    2) Pass my redraw functions to the builders and have the builders attach the redraw functions to the constructed widgets, or pass the constructed widgets back and have a higher level attach the redraw functions (and potentially link some widgets together)?

    Okay, and...

    3) Books or wikis about GUI application architecture, preferably coherent architectures where people aren't arguing about minute details?

    The application in its current form (displays data but does not write data or allow for much interaction) is available at https://bitbucket.org/savannidgerinel/fitness . You can run the application by going to the root directory and typing

    runhaskell -isrc src/Main.hs data/

    or...

    cabal build
    dist/build/fitness/fitness data/

    You may need to install libraries, but cabal should tell you which ones.
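    On question 1, here is a minimal, GTK-independent sketch of the "pass the mediator down" option; the AppEvent names and the Mediator shape are illustrative inventions, not taken from the fitness project:

    -- Hypothetical sketch: a mediator passed down to widget builders,
    -- which register their redraw actions against application-level events.
    import Data.IORef
    import qualified Data.Map as Map

    data AppEvent = DataChanged | SelectionChanged deriving (Eq, Ord, Show)

    newtype Mediator = Mediator (IORef (Map.Map AppEvent [IO ()]))

    newMediator :: IO Mediator
    newMediator = Mediator <$> newIORef Map.empty

    -- Widget builders call this to subscribe their redraw action
    subscribe :: Mediator -> AppEvent -> IO () -> IO ()
    subscribe (Mediator ref) ev action =
        modifyIORef ref (Map.insertWith (++) ev [action])

    -- Whatever mutates the MVar fires the event afterwards
    fire :: Mediator -> AppEvent -> IO ()
    fire (Mediator ref) ev = do
        handlers <- readIORef ref
        sequence_ (Map.findWithDefault [] ev handlers)

    A builder like createStatView would then take the Mediator as an extra argument and call subscribe m DataChanged (drawStatData area storeMVar field), while any code that writes to the MVar ends with fire m DataChanged; no widget ever needs to travel up the call stack.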


  • Unobtrusive Maximum Input Lengths with JQuery and FluentValidation

    - by Steve Wilkes
    If you use FluentValidation and set a maximum length for a string or a maximum value for a numeric property, jQuery validation is used to show an error message when the user inputs too many characters or a numeric value which is too big. On a recent project we wanted to use the input maxlength attribute to prevent a user from entering too many characters, rather than cure the problem with an error message, so I added this jQuery to add maxlength attributes based on jQuery validation's data- attributes.

    $(function () {
        $("input[data-val-range-max],input[data-val-length-max]").each(function (i, e) {
            var input = $(e);
            var maxlength = input.is("[data-val-range-max]")
                ? input.data("valRangeMax").toString().length
                : input.data("valLengthMax");
            input.attr("maxlength", maxlength);
        });
    });

    Presto!
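    For context, here is a sketch of the kind of markup the script above targets; data-val-length-max and data-val-range-max are the standard attributes ASP.NET MVC's unobtrusive validation emits, while the field names are invented:

    <!-- Hypothetical inputs the script above would decorate -->
    <!-- A string capped at 50 characters gets maxlength="50" -->
    <input type="text" name="Surname" data-val="true" data-val-length-max="50" />
    <!-- A number capped at 9999 gets maxlength="4" (the length of "9999") -->
    <input type="text" name="Quantity" data-val="true" data-val-range-max="9999" />

    Note the range case: the script uses the number of digits in the maximum value as the character cap, which is why toString().length is taken rather than the value itself.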


  • Google Analytics Campaigns Not Tracking E-Commerce

    - by Paul
    I am running email campaigns via MailChimp and tracking the success of my campaigns via Google Analytics. I can successfully see data being tracked for:

    Reporting > Conversions > Ecommerce (receiving data)
    Reporting > Traffic Sources > Campaigns (receiving data)

    However, I am not receiving any Ecommerce data for the individual campaigns:

    Reporting > Traffic Sources > Campaigns > Ecommerce (no data)

    So I see data like:

    Visits: 18,501
    Revenue: $0.00

    Everything I have read leads me to believe this should just "work" if Ecommerce is set up. Is there some additional action I need to take for this to work? Any help would be appreciated!


  • Legitimate use of the Windows "Documents" folder in programs.

    - by romkyns
    Anyone who likes their Documents folder to contain only things they place there knows that the standard Documents folder is completely unsuitable for this task. Every program seems to want to put its settings, data, or something equally irrelevant into the Documents folder, despite the fact that there are folders specifically for this job1.

    So that this doesn't sound empty, take my personal "Documents" folder as an example. I don't ever use it, in that I never, under any circumstances, save anything into this folder myself. And yet, it contains 46 folders and 3 files at the top level, for a total of 800 files in 500 folders. That's 190 MB of "documents" I didn't create. Obviously any actual documents would immediately get lost in this mess.

    My question is: can anything be done to improve the situation sufficiently to make "Documents" useful again, say over the next 5 years? Can programmers be somehow educated en masse not to use it as a dumping ground? Could the OS start reporting some "fake" location hidden under AppData through the existing APIs, while only allowing Explorer and the various Open/Save dialogs to know where the "real" Documents folder resides? Or are any attempts completely futile or even unnecessary?

    1 For the record, here's a quick summary of the various standard directories that should be used instead of "Documents":

    RoamingAppData for user-specific data and settings. This is the directory to use for user-specific non-temporary data. Anything placed here will be available on any machine that a given user logs on to in networks where this is configured. Do not place large files here though, because they slow down login/logout in such environments.
    LocalAppData for user-and-machine-specific data and settings. This data differs for every user and every machine. This is also where very large user-specific data should be placed.
    ProgramData for machine-specific data and settings. These are the same regardless of which user is logged on, and will not roam to other machines in a network.
    GetTempPath for all files that may be wiped without loss of data when not in use. This is also the place for things like caches, because, like temporary data, a cache does not need to be backed up. Place your huge cache here and you'll save your user some backup trouble.

    "Documents" itself should only ever be used if the user specified it manually by entering a path or selecting it in a Save dialog. That is the only time it is ever appropriate to save stuff in "Documents".
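    For illustration, this is how a .NET program would resolve those locations instead of hard-coding a path under Documents; these are the standard framework APIs for the folders listed above, and "MyApp" is a placeholder name:

    // Resolving the standard data folders in .NET instead of dumping into Documents
    using System;
    using System.IO;

    static class AppPaths
    {
        const string App = "MyApp"; // placeholder application name

        // RoamingAppData: user-specific settings that follow the user across machines
        public static readonly string Roaming =
            Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData), App);

        // LocalAppData: user-and-machine-specific data, including large files
        public static readonly string Local =
            Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData), App);

        // ProgramData: machine-wide data, identical for every user
        public static readonly string Machine =
            Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData), App);

        // Temp: disposable caches and scratch files that need no backup
        public static readonly string Temp =
            Path.Combine(Path.GetTempPath(), App);
    }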

    Read the article

  • Nginx + PHP-FPM on Centos 6.5 gives me 502 Bad Gateway (fpm error: unable to read what child say: Bad file descriptor)

    - by Latheesan Kanes
I am setting up a standard LEMP stack. My current setup is giving me the following error: 502 Bad Gateway. This is what is currently installed on my server. Here are the configurations I've created/updated so far; can someone take a look at the following and see where the error might be? I've already checked my logs; there's nothing in there (http://i.imgur.com/iRq3ksb.png). And I saw the following in the /var/log/php-fpm/error.log file. Side note: both nginx and php-fpm have been configured to run under a local account called www-data, and the following folders exist on the server.

nginx.conf (global nginx configuration):

user www-data;
worker_processes 6;
worker_rlimit_nofile 100000;
error_log /var/log/nginx/error.log crit;
pid /var/run/nginx.pid;

events {
    worker_connections 2048;
    use epoll;
    multi_accept on;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # cache information about FDs; frequently accessed files can boost performance
    open_file_cache max=200000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    # to boost IO on HDD we can disable access logs
    access_log off;

    # copies data between one FD and another from within the kernel,
    # faster than read() + write()
    sendfile on;

    # send headers in one piece; better than sending them one by one
    tcp_nopush on;

    # don't buffer data sent; good for small data bursts in real time
    tcp_nodelay on;

    # server will close connection after this time
    keepalive_timeout 60;

    # number of requests a client can make over keep-alive -- for testing
    keepalive_requests 100000;

    # allow the server to close connections on non-responding clients; frees up memory
    reset_timedout_connection on;

    # request timed out -- default 60
    client_body_timeout 60;

    # if client stops responding, free up memory -- default 60
    send_timeout 60;

    # reduce the data that needs to be sent over the network
    gzip on;
    gzip_min_length 10240;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml;
    gzip_disable "MSIE [1-6]\.";

    # Load vHosts
    include /etc/nginx/conf.d/*.conf;
}

conf.d/www.domain.com.conf (my vhost entry):

## Nginx php-fpm Upstream
upstream wwwdomaincom {
    server unix:/var/run/php-fcgi-www-data.sock;
}

## Global Config
client_max_body_size 10M;
server_names_hash_bucket_size 64;

## Web Server Config
server {
    ## Server Info
    listen 80;
    server_name domain.com *.domain.com;
    root /home/www-data/public_html;
    index index.html index.php;

    ## Error log
    error_log /home/www-data/logs/nginx-errors.log;

    ## DocumentRoot setup
    location / {
        try_files $uri $uri/ @handler;
        expires 30d;
    }

    ## These locations would be hidden by .htaccess normally
    #location /app/ { deny all; }

    ## Disable .htaccess and other hidden files
    location /. { return 404; }

    ## Magento uses a common front handler
    location @handler { rewrite / /index.php; }

    ## Forward paths like /js/index.php/x.js to relevant handler
    location ~ .php/ { rewrite ^(.*.php)/ $1 last; }

    ## Execute PHP scripts
    location ~ \.php$ {
        try_files $uri =404;
        expires off;
        fastcgi_read_timeout 900;
        fastcgi_pass wwwdomaincom;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }

    ## GZip Compression
    gzip on;
    gzip_comp_level 8;
    gzip_min_length 1000;
    gzip_proxied any;
    gzip_types text/plain application/xml text/css text/js application/x-javascript;
}

/etc/php-fpm.d/www-data.conf (my php-fpm pool config):

(the vhost configuration above is repeated here verbatim)

I've got a file at /home/www-data/public_html/index.php with the code <?php phpinfo(); ?> (file uploaded as user www-data).

    Read the article

  • Strategy to use two different measurement systems in software

    - by Dennis
I have an application that needs to accept and output values in both US customary units and the metric system. Right now the conversion, input, and output are a mess. You can only enter values in the US system, but you can choose the output to be US or metric, and the code to do the conversions is everywhere. So I want to organize this and put together some simple rules. Here is what I came up with.

Rules: the user can enter values in either US or metric units, and the user interface will take care of marking this properly. All units internally will be stored as US, since the majority of the system already has most of the data stored like that and depends on this; it shouldn't matter, I suppose, as long as you don't mix units. All output will be in US or metric, depending on the user's selection/choice/preference.

In theory this sounds great and seems like a solution. However, one little problem I came across is this: there is some data stored in code or in the database that already returns data like this: 4 x 13/16" screws, which means "four 13/16-inch screws". I need the size to be in either US or metric units. Where exactly do I put the conversion code for this unit? The above already mixes presentation and data, but the data for the field I need to populate is that whole string. I can certainly split it up into the number 4, the 13/16", the " x ", and the " screws", but the question remains... where do I put the conversion code?

Different locations for conversion routines: 1) Right now the string is in a class where it's produced. I can put conversion code right into that class, and it may be a good solution. Except then, to be consistent, I will be putting conversion procedures everywhere in the code at the data source, or right after reading from the database. The problem, though, is that I think my code will then have to deal with two systems throughout the codebase, should I do this. 2) According to the rules, my idea was to put it in the view script, aka the last chance to modify it before it is shown to the user. And it may be the right thing to do, but then it strikes me it may not always be the best solution. (First, it complicates the view script a tad; second, I need to do more work on the data side to split things up more, or do extra parsing, such as in my case above.) 3) Another solution is to do this somewhere in the data-prep step before the view, aka somewhere in the middle, before the view but after the data source. This strikes me as messy, and that could be the reason why my codebase is in such a mess right now. It seems that there is no best solution. What do I do?
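A minimal C# sketch of what rule 3 (store in US units, convert only at output) could look like; all type and member names here are hypothetical, not from the question:

// Hypothetical helper: values are stored in US units (inches here) and
// converted only when formatted for display, per the user's preference.
public enum UnitSystem { US, Metric }

public static class LengthFormatter
{
    const double MillimetersPerInch = 25.4;

    public static string Format(double inches, UnitSystem target)
    {
        return target == UnitSystem.Metric
            ? string.Format("{0:0.#} mm", inches * MillimetersPerInch)
            : string.Format("{0:0.###}\"", inches);
    }
}

// View-layer usage: the data layer keeps the "4 x 13/16\" screws" string split
// into parts, e.g. quantity 4 and size 13/16" (0.8125 in), and the view composes:
// string label = "4 x " + LengthFormatter.Format(0.8125, user.Units) + " screws";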

    Read the article

  • Processing Text and Binary (Blob, ArrayBuffer, ArrayBufferView) Payload in WebSocket - (TOTD #185)

    - by arungupta
The WebSocket API defines different send(xxx) methods that can be used to send text and binary data. This Tip Of The Day (TOTD) will show how to send and receive text and binary data using WebSocket. TOTD #183 explains how to get started with a WebSocket endpoint using GlassFish 4. A simple endpoint from that blog looks like:

@WebSocketEndpoint("/endpoint")
public class MyEndpoint {
    public void receiveTextMessage(String message) {
        . . .
    }
}

A method with the first parameter of type String is invoked when a text payload is received. The payload of the incoming WebSocket frame is mapped to this first parameter. An optional second parameter, Session, can be specified to map to the "other end" of this conversation. For example:

public void receiveTextMessage(String message, Session session) {
    . . .
}

The return type is void, and that means no response is returned to the client that invoked this endpoint. A response may be returned to the client in two different ways. First, set the return type to the expected type, such as:

public String receiveTextMessage(String message) {
    String response = . . .
    . . .
    return response;
}

In this case a text payload is returned back to the invoking endpoint. The second way to send a response back is to use the mapped session to send a response using one of the sendXXX methods in Session, when and if needed:

public void receiveTextMessage(String message, Session session) {
    . . .
    RemoteEndpoint remote = session.getRemote();
    remote.sendString(...);
    . . .
    remote.sendString(...);
    . . .
    remote.sendString(...);
}

This shows how duplex and asynchronous communication between the two endpoints can be achieved. This can be used to define different message exchange patterns between the client and server. The WebSocket client can send the message as:

websocket.send(myTextField.value);

where myTextField is a text field in the web page. Binary payload in the incoming WebSocket frame can be received if ByteBuffer is used as the first parameter of the method signature. The endpoint method signature in that case would look like:

public void receiveBinaryMessage(ByteBuffer message) {
    . . .
}

From the client side, the binary data can be sent using Blob, ArrayBuffer, and ArrayBufferView. A Blob is just raw data and the actual interpretation is left to the application. ArrayBuffer and ArrayBufferView are defined in the TypedArray specification and are designed to send binary data using WebSocket. In short, an ArrayBuffer is a fixed-length binary buffer with no format and no mechanism for accessing its contents. These buffers are manipulated using one of the views defined by the subclasses of ArrayBufferView listed below:

Int8Array (signed 8-bit integer or char)
Uint8Array (unsigned 8-bit integer or unsigned char)
Int16Array (signed 16-bit integer or short)
Uint16Array (unsigned 16-bit integer or unsigned short)
Int32Array (signed 32-bit integer or int)
Uint32Array (unsigned 32-bit integer or unsigned int)
Float32Array (signed 32-bit float or float)
Float64Array (signed 64-bit float or double)

WebSocket can send binary data using an ArrayBuffer with a view defined by a subclass of ArrayBufferView, or a subclass of ArrayBufferView itself. The WebSocket client can send the message using Blob as:

blob = new Blob([myField2.value]);
websocket.send(blob);

where myField2 is a text field in the web page.
The WebSocket client can send the message using ArrayBuffer as:

var buffer = new ArrayBuffer(10);
var bytes = new Uint8Array(buffer);
for (var i = 0; i < bytes.length; i++) {
    bytes[i] = i;
}
websocket.send(buffer);

A concrete implementation of receiving the binary message may look like:

@WebSocketMessage
public void echoBinary(ByteBuffer data, Session session) throws IOException {
    System.out.println("echoBinary: " + data);
    for (byte b : data.array()) {
        System.out.print(b);
    }
    session.getRemote().sendBytes(data);
}

This method just prints the binary data for verification, but you may actually be storing it in a database or converting it to an image or something more meaningful. Be aware of TYRUS-51 if you are trying to send binary data from server to client using the method return type. Here are some references for you: JSR 356: Java API for WebSocket - Specification (Early Draft) and Implementation (already integrated in GlassFish 4 promoted builds); TOTD #183 - Getting Started with WebSocket in GlassFish; TOTD #184 - Logging WebSocket Frames using Chrome Developer Tools, Net-internals and Wireshark. Subsequent blogs will discuss the following topics (not necessarily in that order): error handling, custom payloads using encoder/decoder, interface-driven WebSocket endpoint, Java client API, client and server configuration, security, subprotocols, extensions, and other topics from the API.
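The same text-versus-binary frame distinction exists in other client stacks too; for comparison, a minimal .NET client sketch using System.Net.WebSockets, where the endpoint URL is illustrative and not from the article:

using System;
using System.Net.WebSockets;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

class WsClient
{
    static async Task Main()
    {
        using (var ws = new ClientWebSocket())
        {
            // Hypothetical URL for an endpoint like @WebSocketEndpoint("/endpoint") above
            await ws.ConnectAsync(new Uri("ws://localhost:8080/endpoint"), CancellationToken.None);

            // Text frame: the server maps this to the String parameter
            byte[] text = Encoding.UTF8.GetBytes("hello");
            await ws.SendAsync(new ArraySegment<byte>(text),
                WebSocketMessageType.Text, true, CancellationToken.None);

            // Binary frame: the server maps this to the ByteBuffer parameter
            byte[] bytes = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };
            await ws.SendAsync(new ArraySegment<byte>(bytes),
                WebSocketMessageType.Binary, true, CancellationToken.None);
        }
    }
}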

    Read the article

  • Taking a Projects Development to the Next Level

    - by user1745022
I have been looking for some advice for a while on how to handle a project I am working on, but to no avail. I am pretty much on my fourth iteration of improving an "application" I am working on; the first two times were in Excel, the third time in Access, and now in Visual Studio. The field is manufacturing. The basic idea is that I am taking read-only data from a massive Sybase server, filtering it and creating much smaller tables in Access daily (using delete and append queries), and then doing a bunch of stuff. More specifically, I use a series of queries to either combine data from multiple tables or group data in specific ways (aggregate functions), and then I place this data into a table (so I can sort and manipulate data using DAO.Recordset and run multiple custom algorithms). This process is then repeated multiple times throughout the database until a set of relevant tables is created. Many times I will create a field in a query with a value such as 1.1 so that when I append it to a table I can store information in the field from the algorithms. So as the process continues, the number of fields for the tables changes. The overall application consists of 4 "back-end" databases linked together on a shared drive, with various output (either front-end Access applications or Excel). So my question is: is this essentially how many data-driven applications that solve problems work? Each back-end database is updated with fresh data daily, and updating takes around 10 seconds (for three of them) and 2 minutes (for one). Project objectives: I want to/am moving to SQL Server soon. The front end will be a web application (I know basic web development and like the administration flexibility), and Visual Studio will be the IDE with C#/.NET. Should these algorithms be run "inside the database," or as a series of C# functions on each server request? I know you're not supposed to store data in a database unless it is an actual data point, and in Access I have many columns that just hold calculations from algorithms in VBA. The truth is, I have seen multiple professional Access applications, and have never seen one that has the complexity of mine or does even close to what mine does (for better or worse). But I know some professional software applications are 1000 times better than mine. So please, please, please give me a suggestion of some sort. I have been completely on my own and need some guidance on how to approach this project the right way.
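On the "don't store calculations" point, one hedged illustration of the alternative: derived values can live in a projection computed per request in the C# layer rather than in persisted columns. All type and column names below are hypothetical, not from the question:

using System.Collections.Generic;
using System.Linq;

// Hypothetical shapes: a persisted row and a per-request projection.
public class ProductionRow
{
    public string Part { get; set; }
    public int UnitsProduced { get; set; }
    public int UnitsScrapped { get; set; }
}

public class ProductionSummary
{
    public string Part { get; set; }
    public int UnitsProduced { get; set; }
    public double ScrapRate { get; set; }   // computed on demand, never stored
}

public static class ProductionReport
{
    public static List<ProductionSummary> Summarize(IEnumerable<ProductionRow> rows)
    {
        // Grouping and aggregation that would otherwise be baked into stored columns
        return rows
            .GroupBy(r => r.Part)
            .Select(g => new ProductionSummary
            {
                Part = g.Key,
                UnitsProduced = g.Sum(r => r.UnitsProduced),
                ScrapRate = g.Sum(r => (double)r.UnitsScrapped) /
                            System.Math.Max(1, g.Sum(r => r.UnitsProduced + r.UnitsScrapped))
            })
            .ToList();
    }
}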

    Read the article

  • Cost of Web Server that hosted and delivered text only

    - by slandau
We are developing an application that needs a web server to interact with the two (or more) entities involved. They will not ever see anything on the web, but the server is required for the transfer of data between them. It's sort of a holding point. Now, the only thing the server is going to be holding is textual data; the two entities are going to be doing the work with the data. I was wondering what the cost of this type of server would be. Since it would be JUST a database with no front end, would it make sense to employ a service through Amazon or Google that just holds data for me to access, instead of buying a server and making my own database? The amount of data can grow very large; however, it's only text, and all data over a day old will be deleted, for the most part, every day. Thanks!

    Read the article

  • Stairway to T-SQL DML Level 4: The Mathematics of SQL: Part 1

A relational database contains tables that relate to each other by key values. When querying data from these related tables you may choose to select data from a single table or many tables. If you select data from many tables, you normally join those tables together using specified join criteria. The concepts of selecting data from tables and joining tables together are all about managing and manipulating sets of data. In Level 4 of this Stairway we will explore the concepts of set theory and mathematical operators to join, merge, and return data from multiple SQL Server tables.
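As a taste of the set operations the Level covers, the same ideas map directly onto, for example, LINQ in C#; a small illustrative sketch, with made-up sample data:

using System.Linq;

class SetDemo
{
    static void Main()
    {
        int[] a = { 1, 2, 3, 4 };
        int[] b = { 3, 4, 5, 6 };

        var union     = a.Union(b);      // like UNION:     1 2 3 4 5 6
        var intersect = a.Intersect(b);  // like INTERSECT: 3 4
        var except    = a.Except(b);     // like EXCEPT:    1 2

        // An inner join on matching key values, like JOIN ... ON a.x = b.x
        var joined = from x in a
                     join y in b on x equals y
                     select new { x, y };

        System.Console.WriteLine(string.Join(" ", union));
    }
}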

    Read the article

  • Facebook likes reset after moving to HTTPS (URL manually set in script, though)

    - by aarondicks
Hi fellow Facebook developers. I've got a question regarding the Facebook like button. We worked on a piece recently that embeds a number of social share buttons (please see the source code below or here on Harvey Water Softeners' website). When the piece was released, it was on HTTP, and received over 2k likes (the URL 'slug' hasn't changed at all). The site was recently migrated to permanent-on HTTPS, and the like data has been reset; we've been left with 50 new, recent likes. As you can see in the source code, the URL is set explicitly to like the HTTP version, which I believe to be correct. Can anyone help me work out what's happened here? Here's the HTML bit of the like button: <div class="fb-like" data-href="http://www.harveywatersofteners.co.uk/history-interior-design" data-layout="box_count" data-action="like" data-show-faces="false" data-share="false"></div> Thanks in advance Aaron

    Read the article

  • Stumbling Through: Visual Studio 2010 (Part II)

I would now like to expand a little on what I stumbled through in part I of my Visual Studio 2010 post and touch on a few other features of VS 2010. Specifically, I want to generate some code based off of an Entity Framework model and tie it up to an actual data source. I'm not going to take the easy way and tie to a SQL Server data source, though; I will tie it to an XML data file instead. Why? Well, why not? This is purely for learning; there are probably much better ways to get strongly-typed classes around XML, but it will force us to go down a path less travelled and maybe learn a few things along the way. Once we get this XML data and the means to interact with it, I will revisit data binding to this data in a WPF form and see if I can't get reading, adding, deleting, and updating working smoothly with minimal code. To begin, I will use what was learned in the first part of this blog topic and draw out a data model for the MFL (My Football League) - I don't want the NFL to come down and sue me for using their name in this totally football-related article. The data model looks as follows, with Teams having Players, and Players having a position and statistics for each season they played. Note that when making the associations between these entities, I was given the option to create the foreign key, but I only chose to select this option for the association between Player and Position. The reason for this is that I am picturing the XML that will contain this data to look somewhat like this:

<MFL>
  <Position/>
  <Position/>
  <Position/>
  <Team>
    <Player>
      <Statistic/>
    </Player>
  </Team>
</MFL>

Statistic will be under its associated Player node, and Player will be under its associated Team node - no need to have an Id to reference it if we know it will always fall under its parent. Position, however, is more of a lookup value that will not have any hierarchical relationship to the player. In fact, the Position data itself may be in a completely different XML file (something I'd like to play around with), so in any case, a player will need to reference the position by its Id. So now that we have a simple data model laid out, I would like to generate two things based on it: a class for each entity with properties corresponding to each entity property, and an IO class with methods to get data for each entity, either all instances, by Id, or by parent. Now my experience with code generation in the past has consisted of writing up little apps that use the CodeDOM directly to regenerate code on demand (or using tools like CodeSmith). Surely there has got to be a more fun way to do this, given that we are using the Entity Framework, which already has built-in code generation for SQL Server support. Let's start with that built-in stuff to give us a base to work off of. Right-click anywhere in the canvas of our model and select "Add Code Generation Item". Just adding that code item seemed to do quite a bit towards what I was intending: it apparently generated a class for each entity, but also a whole ton more. I mean a TON more. Way too much complicated code was generated; now, that code is likely to be a black box anyway, so it shouldn't matter, but we need to understand how to make this work the way we want it to, so let's get ready to do some stumbling through that text template (.tt) file. When I open the .tt file that was generated, right off the bat I realize there is going to be trouble: there is no color coding, no IntelliSense, no nothing!
That is going to make stumbling through more like groping blindly in the dark while handcuffed and hopping on one foot, which was one of the alternate titles I was considering for this blog. Thankfully, the community comes to my rescue and I won't have to cast my mind back to the glory days of coding in vi (look it up, kids). Using the Extension Manager (available under the Tools menu), I did a quick search for "tt editor" in the Online Gallery and quickly found the Tangible T4 Editor. Downloading and installing this was a breeze, and after doing so I got some color coding and IntelliSense while editing the .tt files. If you will be doing any customizing of .tt files, I highly recommend installing this extension. Next, we'll see if that is enough help for us to tweak that .tt file to do the kind of code generation that we want.
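For orientation, stripped of the generated plumbing, the entities boil down to plain classes. Here is a simplified, hand-written sketch of roughly what generation would produce for this model; the property lists are assumptions based on the diagram described above, and the real template output adds change tracking, relationship fixup, and serialization attributes:

using System.Collections.Generic;

// Simplified sketch of the MFL entities; names beyond Team/Player/Position/Statistic
// are illustrative assumptions.
public class Team
{
    public int Id { get; set; }
    public string Name { get; set; }
    public List<Player> Players { get; set; }
}

public class Player
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int PositionId { get; set; }   // FK kept, since Position is a lookup value
    public Position Position { get; set; }
    public List<Statistic> Statistics { get; set; }
}

public class Position
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class Statistic
{
    public int Id { get; set; }
    public int Season { get; set; }
}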

    Read the article

  • Deleted files not increasing available free space on Ubuntu (as reported by df -h)

    - by Homunculus Reticulli
I am writing data-munging scripts (Python and Bash) to munge data and import large quantities of text files into a database. I am currently in the test phase, so I am generating several thousand files and deleting them (the files consume about 20G of space). After a test run, I delete the files (sometimes without having imported them into the database). I notice that there is a steady decrease in the amount of free space on my disk (as reported by df -h). I don't understand this, as I use rm * (in the data directory), and in the cases where I use Nautilus, I empty the Trash bin as well. Similarly, I notice that when I import the data into the (PostgreSQL) database and then delete the data from the tables using DELETE FROM tablename;, the size consumed in the PostgreSQL data directory does not go down either. Currently, I have lost approximately 200G from my hard drive, and I need to reclaim that, but I don't know what to do to reclaim it. Any ideas? I am running Ubuntu 10.04 LTS + PostgreSQL 8.4.

    Read the article

  • How to change the data in Telerik's RadGrid based on Calendar's selected dates?

    - by Jronny
I was creating another user control with Telerik's RadGrid and Calendar:

<%@ Register Assembly="Telerik.Web.UI" Namespace="Telerik.Web.UI" TagPrefix="telerik" %>
<table class="style1">
  <tr>
    <td>From</td>
    <td>To</td>
  </tr>
  <tr>
    <td><asp:Calendar ID="Calendar1" runat="server" SelectionMode="Day"></asp:Calendar></td>
    <td><asp:Calendar ID="Calendar2" runat="server" SelectionMode="Day"></asp:Calendar></td>
  </tr>
  <tr>
    <td><asp:Button ID="btnSubmit" runat="server" Text="Submit" OnClick="btnSubmit_Click" /></td>
    <td><asp:Button ID="btnClear" runat="server" Text="Clear" OnClick="btnClear_Click" /></td>
  </tr>
</table>
<telerik:RadGrid ID="RadGrid1" runat="server">
  <MasterTableView CommandItemDisplay="Top"></MasterTableView>
</telerik:RadGrid>

and I am using LINQ in the code-behind:

Entities1 entities = new Entities1();
public static object DataSource = null;

protected void Page_Load(object sender, EventArgs e)
{
    if (DataSource == null)
    {
        DataSource = (from entity in entities.nsc_moneytransaction
                      select new
                      {
                          date = entity.transaction_date.Value,
                          username = entity.username,
                          cashbalance = entity.cash_balance
                      }).OrderByDescending(a => a.date);
    }
    BindData();
}

public void BindData()
{
    RadGrid1.DataSource = DataSource;
}

protected void btnSubmit_Click(object sender, EventArgs e)
{
    DateTime startdate = new DateTime();
    DateTime enddatedate = new DateTime();
    if (Calendar1.SelectedDate != null && Calendar2.SelectedDate != null)
    {
        startdate = Calendar1.SelectedDate;
        enddatedate = Calendar2.SelectedDate;
        var queryDateRange = from entity in entities.nsc_moneytransaction
                             where DateTime.Parse(entity.transaction_date.Value.ToShortDateString()) >= DateTime.Parse(startdate.ToShortDateString())
                                && DateTime.Parse(entity.transaction_date.Value.ToShortDateString()) <= DateTime.Parse(enddatedate.ToShortDateString())
                             select new
                             {
                                 date = entity.transaction_date.Value,
                                 username = entity.username,
                                 cashbalance = entity.cash_balance
                             };
        DataSource = queryDateRange.OrderByDescending(a => a.date);
    }
    else if (Calendar1.SelectedDate != null)
    {
        startdate = Calendar1.SelectedDate;
        var querySetDate = from entity in entities.nsc_moneytransaction
                           where entity.transaction_date.Value == startdate
                           select new
                           {
                               date = entity.transaction_date.Value,
                               username = entity.username,
                               cashbalance = entity.cash_balance
                           };
        DataSource = querySetDate.OrderByDescending(a => a.date);
    }
    BindData();
}

protected void btnClear_Click(object sender, EventArgs e)
{
    Calendar1.SelectedDates.Clear();
    Calendar2.SelectedDates.Clear();
}

The problems are: (1) when I click the submit button, the data in the RadGrid is not changed; (2) how can we check whether anything is selected in the Calendar controls? A date (01/01/0001) is set even if we do not select anything from the calendar, so Calendar1.SelectedDate != null is not enough. =( Thanks.
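A hedged sketch of one possible fix for both problems: ASP.NET's Calendar.SelectedDate is a non-nullable DateTime (hence the 01/01/0001, which is DateTime.MinValue), so test the SelectedDates collection instead, compare the dates directly rather than through DateTime.Parse (which LINQ to Entities cannot translate), and rebind the grid after changing its source. The static DataSource field shared across all users of the page is also worth revisiting. A revised handler along these lines:

protected void btnSubmit_Click(object sender, EventArgs e)
{
    // SelectedDate is never null; an empty selection is DateTime.MinValue,
    // so check the SelectedDates collection instead.
    bool hasStart = Calendar1.SelectedDates.Count > 0;
    bool hasEnd = Calendar2.SelectedDates.Count > 0;

    if (hasStart && hasEnd)
    {
        DateTime start = Calendar1.SelectedDate.Date;
        DateTime end = Calendar2.SelectedDate.Date.AddDays(1);  // exclusive upper bound

        // Compare DateTime values directly so the filter can run in the database.
        DataSource = (from entity in entities.nsc_moneytransaction
                      where entity.transaction_date.Value >= start
                         && entity.transaction_date.Value < end
                      select new
                      {
                          date = entity.transaction_date.Value,
                          username = entity.username,
                          cashbalance = entity.cash_balance
                      }).OrderByDescending(a => a.date);
    }

    // Rebinding after the click is what addresses problem (1): Page_Load has
    // already bound the old data by the time this handler runs.
    RadGrid1.DataSource = DataSource;
    RadGrid1.DataBind();
}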

    Read the article

< Previous Page | 495 496 497 498 499 500 501 502 503 504 505 506  | Next Page >