Search Results

Search found 28864 results on 1155 pages for 'ob start'.


  • How to improve my software project's speed?

    - by Blitzkr1eg
    I'm doing a school software project with my classmates in Java. We store the data in a remote database. When we start the application we pull all the information from the database and turn it into objects for the application to use (via plain Java SQL statements). While the application runs we edit some of these objects, and when we exit we save or update the information in the database using Hibernate. So we don't use Hibernate for loading, only for saving and updating. We have two very similar problems: loading the objects at startup and saving them with Hibernate at shutdown both take too much time. Our project is not a huge enterprise application; it's quite a small app that just manages some students, teachers, homework and tests, so the database is also very small. How could we increase performance? Later edit: with a local database it runs very quickly; it is only slow against a remote database.
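    Since the slowdown only shows up against the remote database, the usual suspect is one network round trip per row. Below is a minimal, hedged sketch of batching the Hibernate save phase; SessionFactory setup is assumed, and Student, editedStudents and BatchSaver are placeholder names for this example, with the batch size of 50 chosen arbitrarily. The hibernate.jdbc.batch_size property would also need to be set (e.g. to 50) in hibernate.cfg.xml for the JDBC driver to actually batch the statements.

        import java.util.List;
        import org.hibernate.Session;
        import org.hibernate.SessionFactory;
        import org.hibernate.Transaction;

        public class BatchSaver {
            // Save or update all edited entities, flushing in chunks so the remote
            // server sees a few batched round trips instead of one per object.
            public static void saveAll(SessionFactory sessionFactory, List<Student> editedStudents) {
                Session session = sessionFactory.openSession();
                Transaction tx = session.beginTransaction();
                try {
                    int count = 0;
                    for (Student s : editedStudents) {
                        session.saveOrUpdate(s);
                        if (++count % 50 == 0) {   // match hibernate.jdbc.batch_size
                            session.flush();       // push one batched round trip
                            session.clear();       // keep the session cache small
                        }
                    }
                    tx.commit();
                } catch (RuntimeException e) {
                    tx.rollback();
                    throw e;
                } finally {
                    session.close();
                }
            }
        }

    The same idea applies to the load phase: pulling whole tables in as few statements as possible over the slow link tends to matter far more than how the rows are turned into objects afterwards.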

    Read the article

  • Form doesn't resize smoothly with a timer event

    - by BDotA
    I have a grid control at the bottom of my form, and the user can show or hide it. One way would be to use the form's AutoSize and flip the grid's Visible property between true and false, but I thought I'd make it a little cooler and have the form resize more slowly, like a garage door. So I dropped a Timer on the form and increase the form's height little by little on each tick. When the user toggles the grid I run something like this: timer1.Enabled = true; timer1.Start(); and this in the timer's Tick event: this.Height = this.Height + 5; if (this.Height - 10 > ErrorsGrid.Bottom) timer1.Stop(); It kind of works, but it's not perfect. For example, it lags at the very beginning: it stops for about a second and only then starts moving. With this idea in mind, what changes do you suggest to make this look and work better?

    Read the article

  • Passing command line arguments in C#

    - by Mark
    Hi, I'm trying to pass command line arguments to a C# application, but I have a problem passing a path like this: "C:\Documents and Settings\All Users\Start Menu\Programs\App name", even though I wrap the argument in quotes. Any help? Here is the code: public ObjectModel(String[] args) { if (args.Length == 0) return; // no command line args //System.Windows.Forms.MessageBox.Show(args.Length.ToString()); //System.Windows.Forms.MessageBox.Show(args[0]); //System.Windows.Forms.MessageBox.Show(args[1]); //System.Windows.Forms.MessageBox.Show(args[2]); //System.Windows.Forms.MessageBox.Show(args[3]); if (args.Length == 3) { try { RemoveInstalledFolder(args[0]); RemoveUserAccount(args[1]); RemoveShortCutFolder(args[2]); RemoveRegistryEntry(); } catch (Exception e) { } } } And here is what I'm passing: C:\WINDOWS\Uninstaller.exe "C:\Program Files\Application name\" "username" "C:\Documents and Settings\All Users\Start Menu\Programs\application name" The problem is that the first and second arguments come through correctly, but the last one arrives as just: C:\Documents

    Read the article

  • F# why my recursion is faster than Seq.exists?

    - by user38397
    I am pretty new to F#. I'm trying to understand how to write fast code in F#, so I wrote two methods (IsPrime1 and IsPrime2) to benchmark against each other. My code is: // Learn more about F# at http://fsharp.net open System open System.Diagnostics #light let isDivisible n d = n % d = 0 let IsPrime1 n = Array.init (n-2) ((+) 2) |> Array.exists (isDivisible n) |> not let rec hasDivisor n d = match d with | x when x < n -> (n % x = 0) || (hasDivisor n (d+1)) | _ -> false let IsPrime2 n = hasDivisor n 2 |> not let SumOfPrimes max = [|2..max|] |> Array.filter IsPrime1 |> Array.sum let maxVal = 20000 let s = new Stopwatch() s.Start() let valOfSum = SumOfPrimes maxVal s.Stop() Console.WriteLine valOfSum Console.WriteLine("IsPrime1: {0}", s.ElapsedMilliseconds) ////////////////////////////////// s.Reset() s.Start() let SumOfPrimes2 max = [|2..max|] |> Array.filter IsPrime2 |> Array.sum let valOfSum2 = SumOfPrimes2 maxVal s.Stop() Console.WriteLine valOfSum2 Console.WriteLine("IsPrime2: {0}", s.ElapsedMilliseconds) Console.ReadKey() IsPrime1 takes 760 ms while IsPrime2 takes 260 ms for the same result. What's going on here, and how can I make my code even faster?

    Read the article

  • getting (this) at different function

    - by twen_ta
    I have a couple of input fields with the class "link". All of them should open the jQuery UI dialog, which is why I bind the handler to a class and not to a single id. The difficulty is that I can't use this in line 12, because there it refers to the dialog and not to the input element. As I am a beginner, I don't know how to pass the clicked input element through to that event. What I want to achieve is that the dialog is opened from an input field and writes its result back to that same input field. 1. // this is the click event for the input-field class called "link" 2. $('.link') 3. .button() 4. .click(function() { 5. $('#dialog-form').dialog('open'); 6. 7. }); 8. 9. //this is an excerpt from the opened dialog box and the write back to the input field 10. $("#dialog-form").dialog({ 11. if (bValid) { 12. $('.link').val('' + 14. name.val() + ''); 15. $(this).dialog('close'); 16. } 17. });

    Read the article

  • bluetooth BluetoothSocket.connect() thread. how to close this thread

    - by Hia
    I am trying to write an Android app that connects to a Bluetooth device. It works fine, but when I call BluetoothSocket.connect() and it cannot reach the device, the call blocks the thread and does not throw any exception. So when I try to close the application while connect() is running, it stops responding. How can I cancel it? I used BluetoothSocket.close() in ... but it is still not working for me. protected void simpleComm(Integer port) { // The documents tell us to cancel the discovery process. try { Method m = mmDevice.getClass().getMethod("createRfcommSocket", new Class[] { int.class }); mmSocket = (BluetoothSocket) m.invoke(mmDevice, port); mmSocket.connect(); // <== blocks until it connects or fails Log.d(TAG, " connection success==="); }catch(Exception e){ if (!abort) { connectionFailed(); // Close the socket try { mmSocket.close(); // Start the service over to restart listening mode BluetoothService.this.start(); } catch (IOException e2) { Log.e(TAG,"unable to close() socket during connection failure", e2); } } return; } } public void cancel() { try { abort = true; mmSocket.close(); } catch (IOException e) { Log.e(TAG, "close() of connect socket failed", e); } }
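    The pattern that usually works here is to run connect() on a worker thread and close the socket from another thread when the app shuts down: closing the socket makes the blocked connect() fail with an IOException instead of hanging. Below is only a hedged sketch, using the public createRfcommSocketToServiceRecord API rather than the reflection call from the question; ConnectThread, the "BT" log tag and the standard SPP UUID are assumptions of this example. Cancelling discovery before connecting is also worthwhile, since an active scan slows connect() down.

        import java.io.IOException;
        import java.util.UUID;

        import android.bluetooth.BluetoothAdapter;
        import android.bluetooth.BluetoothDevice;
        import android.bluetooth.BluetoothSocket;
        import android.util.Log;

        class ConnectThread extends Thread {
            // Standard Serial Port Profile UUID; replace if the device expects another service.
            private static final UUID SPP_UUID =
                    UUID.fromString("00001101-0000-1000-8000-00805F9B34FB");
            private final BluetoothSocket socket;

            ConnectThread(BluetoothDevice device) throws IOException {
                socket = device.createRfcommSocketToServiceRecord(SPP_UUID);
            }

            @Override
            public void run() {
                BluetoothAdapter.getDefaultAdapter().cancelDiscovery(); // discovery slows connect()
                try {
                    socket.connect();   // blocks until connected, failed, or close() is called
                    Log.d("BT", "connection success");
                } catch (IOException e) {
                    Log.e("BT", "connect failed or was cancelled", e);
                }
            }

            // Call from the UI thread (e.g. in onDestroy) to abort a hanging connect().
            void cancel() {
                try {
                    socket.close();     // unblocks connect() by making it throw IOException
                } catch (IOException e) {
                    Log.e("BT", "close() of connect socket failed", e);
                }
            }
        }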

    Read the article

  • run two thread at the same time in java

    - by user1805005
    I have used TimerTask to schedule my Java program. When the run method of the TimerTask executes, I want to start two threads that run at the same time and do different things. Here is my code; please help me. import java.util.Calendar; import java.util.Date; import java.util.Timer; import java.util.TimerTask; public class timercheck extends TimerTask{ // my first thread Thread t1 = new Thread(){ public void run(){ for(int i = 1;i <= 10;i++) { System.out.println(i); } } }; // my second thread Thread t2 = new Thread(){ public void run(){ for(int i = 11;i <= 20;i++) { System.out.println(i); } } }; public static void main(String[] args){ long ONCE_PER_DAY = 1000*60*60*24; Calendar calendar = Calendar.getInstance(); calendar.set(Calendar.HOUR_OF_DAY, 12); calendar.set(Calendar.MINUTE, 05); calendar.set(Calendar.SECOND, 00); Date time = calendar.getTime(); TimerTask check = new timercheck(); Timer timer = new Timer(); timer.scheduleAtFixedRate(check, time ,ONCE_PER_DAY); } @Override // run method of timer task public void run() { t1.start(); t2.start(); } }
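    One thing worth noting, offered as a sketch rather than a definitive fix: a java.lang.Thread object can only be started once, so because scheduleAtFixedRate reuses the same TimerTask instance, calling t1.start() and t2.start() again on the next scheduled run will throw IllegalThreadStateException. Creating fresh Thread objects inside run() avoids that; the class name TimerCheck and the one-minute schedule below are placeholders for illustration.

        import java.util.Timer;
        import java.util.TimerTask;

        public class TimerCheck extends TimerTask {
            @Override
            public void run() {
                // Create new Thread objects on every tick so the task can fire repeatedly.
                Thread t1 = new Thread() {
                    public void run() {
                        for (int i = 1; i <= 10; i++) System.out.println(i);
                    }
                };
                Thread t2 = new Thread() {
                    public void run() {
                        for (int i = 11; i <= 20; i++) System.out.println(i);
                    }
                };
                t1.start();   // both loops now run concurrently
                t2.start();
            }

            public static void main(String[] args) {
                // Fire once per minute here just to demonstrate repeated scheduling.
                new Timer().scheduleAtFixedRate(new TimerCheck(), 0, 60 * 1000);
            }
        }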

    Read the article

  • Q1 2010 New Feature: Paging with RadGridView for Silverlight and WPF

    We are glad to announce that the Q1 2010 Release has added another weapon to RadGridViews growing arsenal of features. This is the brand new RadDataPager control which provides the user interface for paging through a collection of data. The good news is that RadDataPager can be used to page any collection. It does not depend on RadGridView in any way, so you will be free to use it with the rest of your ItemsControls if you chose to do so. Before you read on, you might want to download the samples solution that I have attached. It contains a sample project for every scenario that I will discuss later on. Looking at the code while reading will make things much easier for you. There is something for everyone among the 10 Visual Studio projects that are included in the solution. So go and grab it. I. Paging essentials The single most important piece of software concerning paging in Silverlight is the System.ComponentModel.IPagedCollectionView interface. Those of you who are on the WPF front need not worry though. As you might already know, Teleriks Silverlight and WPF controls is share the same code-base. Since WPF does not contain a similar interface, Telerik has provided its own Telerik.Windows.Data.IPagedCollectionView. The IPagedCollectionView interface contains several important members which are used by RadGridView to perform the actual paging. Silverlight provides a default implementation of this interface which, naturally, is called PagedCollectionView. You should definitely take a look at its source code in case you are interested in what is going on under the hood. But this is not a prerequisite for our discussion. The WPF default implementation of the interface is Teleriks QueryableCollectionView which, among many other interfaces, implements IPagedCollectionView. II. No Paging In order to gradually build up my case, I will start with a very simple example that lacks paging whatsoever. It might sound stupid, but this will help us build on top of this paging-devoid example. Let us imagine that we have the simplest possible scenario. That is a simple IEnumerable and an ItemsControl that shows its contents. This will look like this: No Paging IEnumerable itemsSource = Enumerable.Range(0, 1000); this.itemsControl.ItemsSource = itemsSource; XAML <Border Grid.Row="0" BorderBrush="Black" BorderThickness="1" Margin="5">     <ListBox Name="itemsControl"/> </Border> <Border Grid.Row="1" BorderBrush="Black" BorderThickness="1" Margin="5">     <TextBlock Text="No Paging"/> </Border> Nothing special for now. Just some data displayed in a ListBox. The two sample projects in the solution that I have attached are: NoPaging_WPF NoPaging_SL3 With every next sample those two project will evolve in some way or another. III. Paging simple collections The single most important property of RadDataPager is its Source property. This is where you pass in your collection of data for paging. More often than not your collection will not be an IPagedCollectionView. It will either be a simple List<T>, or an ObservableCollection<T>, or anything that is simply IEnumerable. Unless you had paging in mind when you designed your project, it is almost certain that your data source will not be pageable out of the box. So what are the options? III. 1. Wrapping the simple collection in an IPagedCollectionView If you look at the constructors of PagedCollectionView and QueryableCollectionView you will notice that you can pass in a simple IEnumerable as a parameter. 
Those two classes will wrap it and provide paging capabilities over your original data. In fact, this is what RadGridView does internally. It wraps your original collection in an QueryableCollectionView in order to easily perform many useful tasks such as filtering, sorting, and others, but in our case the most important one is paging. So let us start our series of examples with the most simplistic one. Imagine that you have a simple IEnumerable which is the source for an ItemsControl. Here is how to wrap it in order to enable paging: Silverlight IEnumerable itemsSource = Enumerable.Range(0, 1000); var pagedSource = new PagedCollectionView(itemsSource); this.radDataPager.Source = pagedSource; this.itemsControl.ItemsSource = pagedSource; WPF IEnumerable itemsSource = Enumerable.Range(0, 1000); var pagedSource = new QueryableCollectionView(itemsSource); this.radDataPager.Source = pagedSource; this.itemsControl.ItemsSource = pagedSource; XAML <Border Grid.Row="0"         BorderBrush="Black"         BorderThickness="1"         Margin="5">     <ListBox Name="itemsControl"/> </Border> <Border Grid.Row="1"         BorderBrush="Black"         BorderThickness="1"         Margin="5">     <telerikGrid:RadDataPager Name="radDataPager"                               PageSize="10"                              IsTotalItemCountFixed="True"                              DisplayMode="All"/> This will do the trick. It is quite simple, isnt it? The two sample projects in the solution that I have attached are: PagingSimpleCollectionWithWrapping_WPF PagingSimpleCollectionWithWrapping_SL3 III. 2. Binding to RadDataPager.PagedSource In case you do not like this approach there is a better one. When you assign an IEnumerable as the Source of a RadDataPager it will automatically wrap it in a QueryableCollectionView and expose it through its PagedSource property. From then on, you can attach any number of ItemsControls to the PagedSource and they will be automatically paged. Here is how to do this entirely in XAML: Using RadDataPager.PagedSource <Border Grid.Row="0"         BorderBrush="Black"         BorderThickness="1" Margin="5">     <ListBox Name="itemsControl"              ItemsSource="{Binding PagedSource, ElementName=radDataPager}"/> </Border> <Border Grid.Row="1"         BorderBrush="Black"         BorderThickness="1"         Margin="5">     <telerikGrid:RadDataPager Name="radDataPager"                               Source="{Binding ItemsSource}"                              PageSize="10"                              IsTotalItemCountFixed="True"                              DisplayMode="All"/> The two sample projects in the solution that I have attached are: PagingSimpleCollectionWithPagedSource_WPF PagingSimpleCollectionWithPagedSource_SL3 IV. Paging collections implementing IPagedCollectionView Those of you who are using WCF RIA Services should feel very lucky. After a quick look with Reflector or the debugger we can see that the DomainDataSource.Data property is in fact an instance of the DomainDataSourceView class. This class implements a handful of useful interfaces: ICollectionView IEnumerable INotifyCollectionChanged IEditableCollectionView IPagedCollectionView INotifyPropertyChanged Luckily, IPagedCollectionView is among them which lets you do the whole paging in the server. So lets do this. We will add a DomainDataSource control to our page/window and connect the items control and the pager to it. 
Here is how to do this: MainPage <riaControls:DomainDataSource x:Name="invoicesDataSource"                               AutoLoad="True"                               QueryName="GetInvoicesQuery">     <riaControls:DomainDataSource.DomainContext>         <services:ChinookDomainContext/>     </riaControls:DomainDataSource.DomainContext> </riaControls:DomainDataSource> <Border Grid.Row="0"         BorderBrush="Black"         BorderThickness="1"         Margin="5">     <ListBox Name="itemsControl"              ItemsSource="{Binding Data, ElementName=invoicesDataSource}"/> </Border> <Border Grid.Row="1"         BorderBrush="Black"         BorderThickness="1"         Margin="5">     <telerikGrid:RadDataPager Name="radDataPager"                               Source="{Binding Data, ElementName=invoicesDataSource}"                              PageSize="10"                              IsTotalItemCountFixed="True"                              DisplayMode="All"/> By the way, you can replace the ListBox from the above code snippet with any other ItemsControl. It can be RadGridView, it can be the MS DataGrid, you name it. Essentially, RadDataPager is sending paging commands to the the DomainDataSource.Data. It does not care who, what, or how many different controls are bound to this same Data property of the DomainDataSource control. So if you would like to experiment with this, you can throw in any number of other ItemsControls next to the ListBox, bind them in the same manner, and all of them will be paged by our single RadDataPager. Furthermore, you can throw in any number of RadDataPagers and bind them to the same property. Then when you page with any one of them will automatically update all of the rest. The whole picture is simply beautiful and we can do all of this thanks to WCF RIA Services. The two sample projects (Silverlight only) in the solution that I have attached are: PagingIPagedCollectionView PagingIPagedCollectionView.Web IV. Paging RadGridView While you can replace the ListBox in any of the above examples with a RadGridView, RadGridView offers something extra. Similar to the DomainDataSource.Data property, the RadGridView.Items collection implements the IPagedCollectionView interface. So you are already thinking: Then why not bind the Source property of RadDataPager to RadGridView.Items? Well thats exactly what you can do and you will start paging RadGridView out-of-the-box. It is as simple as that, no code-behind is involved: MainPage <Border Grid.Row="0"         BorderBrush="Black"         BorderThickness="1" Margin="5">     <telerikGrid:RadGridView Name="radGridView"                              ItemsSource="{Binding ItemsSource}"/> </Border> <Border Grid.Row="1"         BorderBrush="Black"         BorderThickness="1"         Margin="5">     <telerikGrid:RadDataPager Name="radDataPager"                               Source="{Binding Items, ElementName=radGridView}"                              PageSize="10"                              IsTotalItemCountFixed="True"                              DisplayMode="All"/> The two sample projects in the solution that I have attached are: PagingRadGridView_SL3 PagingRadGridView_WPF With this last example I think I have covered every possible paging combination. In case you would like to see an example of something that I have not covered, please let me know. 
Also, make sure you check out those great online examples: WCF RIA Services with DomainDataSource Paging Configurator Endless Paging Paging Any Collection Paging RadGridView Happy Paging! Download Full Source Code

    Read the article

  • Guide to reduce TFS database growth using the Test Attachment Cleaner

    - by terje
    Recently there have been several reports of TFS databases growing too fast and too big. Notably, this has been observed when teams start to use more features of the testing system. Also, TFS 2010 handles test results differently from TFS 2008, which leads to more data being stored in the TFS databases. As a consequence, some tools have been released to remove unneeded data from the database, along with fixes for bugs found and corrected during this process. Further, some preventive practices and maintenance rules should be adopted. A lot of people have blogged about this, among them: Anu's very important blog post here describes both the problem and solutions to handle it. She covers the Test Attachment Cleaner tool, as well as some QFE/CU releases that fix underlying bugs which prevented the tool from being fully effective. Brian Harry's blog post here describes the problem too. This forum thread describes the problem with some solution hints. Ravi Shanker's blog post here describes best practices for solving this (TBP). Grant Holiday's blog post here describes strategies for using the Test Attachment Cleaner both to detect space problems and to rectify them. The problem can be divided into the following areas: publishing of test results from builds; publishing of manual test results and their attachments in particular; publishing of deployment binaries for use during a test run; and bugs in SQL Server preventing total cleanup of data. (All the published data above is stored in the TFS database as attachments.) The test results include all data collected during the run. Some of this data can grow rather large, like IntelliTrace logs and video recordings. The pushing of binaries that happens for automated test runs, including tests run during a build using code coverage, which uploads all the files in the deployment folder, also contributes a lot to the size of the attached data. In order to handle this systematically, I have set up a three-stage process: find out whether you have a database space issue; set up your TFS server to minimize potential database issues; and, if you have the problem, clean up the database and then keep it clean. Analyze the data Are your databases growing? Are unused test results growing out of proportion? To find out, you need to query your TFS database for some of the information, and use the Test Attachment Cleaner (TAC) to obtain more detailed information. If you don't have too many databases you can use the SQL Server reports from within Management Studio to analyze database and table sizes. Or you can use a set of queries. I often find queries faster to use because I can tweak them the way I want. But be aware that these queries are undocumented and unsupported, and may change when the product team decides to change them. If you have multiple project collections, find out which might have problems. (Disclaimer: the queries below work on TFS 2010. They will not work on Dev-11, since the table structure has been changed. I will try to update them for Dev-11 when it is released.) Open a SQL Management Studio session against the SQL Server that hosts your TFS databases, and use the query below to find the project collection databases and their sizes, in descending size order.
use master select DB_NAME(database_id) AS DBName, (size/128) SizeInMB FROM sys.master_files where type=0 and substring(db_name(database_id),1,4)='Tfs_' and DB_NAME(database_id)<>'Tfs_Configuration' order by size desc Doing this on one of our SQL servers gives the following results: It is pretty easy to see on which collection to start the work   Find out which tables are possibly too large Keep a special watch out for the Tfs_Attachment table. Use the script at the bottom of Grant’s blog to find the table sizes in descending size order. In our case we got this result: From Grant’s blog we learnt that the tbl_Content is in the Version Control category, so the major only big issue we have here is the tbl_AttachmentContent.   Find out which team projects have possibly too large attachments In order to use the TAC to find and eventually delete attachment data we need to find out which team projects have these attachments. The team project is a required parameter to the TAC. Use the following query to find this, replace the collection database name with whatever applies in your case:   use Tfs_DefaultCollection select p.projectname, sum(a.compressedlength)/1024/1024 as sizeInMB from dbo.tbl_Attachment as a inner join tbl_testrun as tr on a.testrunid=tr.testrunid inner join tbl_project as p on p.projectid=tr.projectid group by p.projectname order by sum(a.compressedlength) desc In our case we got this result (had to remove some names), out of more than 100 team projects accumulated over quite some years: As can be seen here it is pretty obvious the “Byggtjeneste – Projects” are the main team project to take care of, with the ones on lines 2-4 as the next ones.  Check which attachment types takes up the most space It can be nice to know which attachment types takes up the space, so run the following query: use Tfs_DefaultCollection select a.attachmenttype, sum(a.compressedlength)/1024/1024 as sizeInMB from dbo.tbl_Attachment as a inner join tbl_testrun as tr on a.testrunid=tr.testrunid inner join tbl_project as p on p.projectid=tr.projectid group by a.attachmenttype order by sum(a.compressedlength) desc We then got this result: From this it is pretty obvious that the problem here is the binary files, as also mentioned in Anu’s blog. Check which file types, by their extension, takes up the most space Run the following query use Tfs_DefaultCollection select SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999)as Extension, sum(compressedlength)/1024 as SizeInKB from tbl_Attachment group by SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999) order by sum(compressedlength) desc This gives a result like this:   Now you should have collected enough information to tell you what to do – if you got to do something, and some of the information you need in order to set up your TAC settings file, both for a cleanup and for scheduled maintenance later.    Get your TFS server and environment properly set up Even if you have got the problem or if have yet not got the problem, you should ensure the TFS server is set up so that the risk of getting into this problem is minimized.  To ensure this you should install the following set of updates and components. The assumption is that your TFS Server is at SP1 level. Install the QFE for KB2608743 – which also contains detailed instructions on its use, download from here. The QFE changes the default settings to not upload deployed binaries, which are used in automated test runs. 
Binaries will still be uploaded if: Code coverage is enabled in the test settings. You change the UploadDeploymentItem to true in the testsettings file. Be aware that this might be reset back to false by another user which haven't installed this QFE. The hotfix should be installed to The build servers (the build agents) The machine hosting the Test Controller Local development computers (Visual Studio) Local test computers (MTM) It is not required to install it to the TFS Server, test agents or the build controller – it has no effect on these programs. If you use the SQL Server 2008 R2 you should also install the CU 10 (or later).  This CU fixes a potential problem of hanging “ghost” files.  This seems to happen only in certain trigger situations, but to ensure it doesn’t bite you, it is better to make sure this CU is installed. There is no such CU for SQL Server 2008 pre-R2 Work around:  If you suspect hanging ghost files, they can be – with some mental effort, deduced from the ghost counters using the following SQL query: use master SELECT DB_NAME(database_id) as 'database',OBJECT_NAME(object_id) as 'objectname', index_type_desc,ghost_record_count,version_ghost_record_count,record_count,avg_record_size_in_bytes FROM sys.dm_db_index_physical_stats (DB_ID(N'<DatabaseName>'), OBJECT_ID(N'<TableName>'), NULL, NULL , 'DETAILED') The problem is a stalled ghost cleanup process.  Restarting the SQL server after having stopped all components that depends on it, like the TFS Server and SPS services – that is all applications that connect to the SQL server. Then restart the SQL server, and finally start up all dependent processes again.  (I would guess a complete server reboot would do the trick too.) After this the ghost cleanup process will run properly again. The fix will come in the next CU cycle for SQL Server R2 SP1.  The R2 pre-SP1 and R2 SP1 have separate maintenance cycles, and are maintained individually. Each have its own set of CU’s. When it comes I will add the link here to that CU. The "hanging ghost file” issue came up after one have run the TAC, and deleted enourmes amount of data.  The SQL Server can get into this hanging state (without the QFE) in certain cases due to this. And of course, install and set up the Test Attachment Cleaner command line power tool.  This should be done following some guidelines from Ravi Shanker: “When you run TAC, ensure that you are deleting small chunks of data at regular intervals (say run TAC every night at 3AM to delete data that is between age 730 to 731 days) – this will ensure that small amounts of data are being deleted and SQL ghosted record cleanup can catch up with the number of deletes performed. “ This rule minimizes the risk of the ghosted hang problem to occur, and further makes it easier for the SQL server ghosting process to work smoothly. “Run DBCC SHRINKDB post the ghosted records are cleaned up to physically reclaim the space on the file system” This is the last step in a 3 step process of removing SQL server data. First they are logically deleted. Then they are cleaned out by the ghosting process, and finally removed using the shrinkdb command. Cleaning out the attachments The TAC is run from the command line using a set of parameters and controlled by a settingsfile.  The parameters point out a server uri including the team project collection and also point at a specific team project. So in order to run this for multiple team projects regularly one has to set up a script to run the TAC multiple times, once for each team project.  
When you install the TAC there is a very useful readme file in the same directory. When the deployment binaries are published to the TFS server, ALL items are published up from the deployment folder. That often means much more files than you would assume are necessary. This is a brute force technique. It works, but you need to take care when cleaning up. Grant has shown how their settings file looks in his blog post, removing all attachments older than 180 days , as long as there are no active workitems connected to them. This setting can be useful to clean out all items, both in a clean-up once operation, and in a general There are two scenarios we need to consider: Cleaning up an existing overgrown database Maintaining a server to avoid an overgrown database using scheduled TAC   1. Cleaning up a database which has grown too big due to these attachments. This job is a “Once” job.  We do this once and then move on to make sure it won’t happen again, by taking the actions in 2) below.  In this scenario you should only consider the large files. Your goal should be to simply reduce the size, and don’t bother about  the smaller stuff. That can be left a scheduled TAC cleanup ( 2 below). Here you can use a very general settings file, and just remove the large attachments, or you can choose to remove any old items.  Grant’s settings file is an example of the last one.  A settings file to remove only large attachments could look like this: <!-- Scenario : Remove large files --> <DeletionCriteria> <TestRun /> <Attachment> <SizeInMB GreaterThan="10" /> </Attachment> </DeletionCriteria> Or like this: If you want only to remove dll’s and pdb’s about that size, add an Extensions-section.  Without that section, all extensions will be deleted. <!-- Scenario : Remove large files of type dll's and pdb's --> <DeletionCriteria> <TestRun /> <Attachment> <SizeInMB GreaterThan="10" /> <Extensions> <Include value="dll" /> <Include value="pdb" /> </Extensions> </Attachment> </DeletionCriteria> Before you start up your scheduled maintenance, you should clear out all older items. 2. Scheduled maintenance using the TAC If you run a schedule every night, and remove old items, and also remove them in small batches.  It is important to run this often, like every night, in order to keep the number of deleted items low. That way the SQL ghost process works better. One approach could be to delete all items older than some number of days, let’s say 180 days. This could be combined with restricting it to keep attachments with active or resolved bugs.  Doing this every night ensures that only small amounts of data is deleted. <!-- Scenario : Remove old items except if they have active or resolved bugs --> <DeletionCriteria> <TestRun> <AgeInDays OlderThan="180" /> </TestRun> <Attachment /> <LinkedBugs> <Exclude state="Active" /> <Exclude state="Resolved"/> </LinkedBugs> </DeletionCriteria> In my experience there are projects which are left with active or resolved workitems, akthough no further work is done.  It can be wise to have a cleanup process with no restrictions on linked bugs at all. Note that you then have to remove the whole LinkedBugs section. A approach which could work better here is to do a two step approach, use the schedule above to with no LinkedBugs as a sweeper cleaning task taking away all data older than you could care about.  Then have another scheduled TAC task to take out more specifically attachments that you are not likely to use. 
This task could be much more specific, and based on your analysis clean out what you know is troublesome data. <!-- Scenario : Remove specific files early --> <DeletionCriteria> <TestRun > <AgeInDays OlderThan="30" /> </TestRun> <Attachment> <SizeInMB GreaterThan="10" /> <Extensions> <Include value="iTrace"/> <Include value="dll"/> <Include value="pdb"/> <Include value="wmv"/> </Extensions> </Attachment> <LinkedBugs> <Exclude state="Active" /> <Exclude state="Resolved" /> </LinkedBugs> </DeletionCriteria> The readme document for the TAC says that it recognizes “internal” extensions, but it does recognize any extension. To run the tool do the following command: tcmpt attachmentcleanup /collection:your_tfs_collection_url /teamproject:your_team_project /settingsfile:path_to_settingsfile /outputfile:%temp%/teamproject.tcmpt.log /mode:delete   Shrinking the database You could run a shrink database command after the TAC has run in cases where there are a lot of data being deleted.  In this case you SHOULD do it, to free up all that space.  But, after the shrink operation you should do a rebuild indexes, since the shrink operation will leave the database in a very fragmented state, which will reduce performance. Note that you need to rebuild indexes, reorganizing is not enough. For smaller amounts of data you should NOT shrink the database, since the data will be reused by the SQL server when it need to add more records.  In fact, it is regarded as a bad practice to shrink the database regularly.  So on a daily maintenance schedule you should NOT shrink the database. To shrink the database you do a DBCC SHRINKDATABASE command, and then follow up with a DBCC INDEXDEFRAG afterwards.  I find the easiest way to do this is to create a SQL Maintenance plan including the Shrink Database Task and the Rebuild Index Task and just execute it when you need to do this.

    Read the article

  • Visual Studio 2010 and .NET 4 Released

    - by ScottGu
    The final release of Visual Studio 2010 and .NET 4 is now available. Download and Install Today MSDN subscribers, as well as WebsiteSpark/BizSpark/DreamSpark members, can now download the final releases of Visual Studio 2010 and TFS 2010 through the MSDN subscribers download center.  If you are not an MSDN Subscriber, you can download free 90-day trial editions of Visual Studio 2010.  Or you can can download the free Visual Studio express editions of Visual Web Developer 2010, Visual Basic 2010, Visual C# 2010 and Visual C++.  These express editions are available completely for free (and never time out).  If you are looking for an easy way to setup a new machine for web-development you can automate installing ASP.NET 4, ASP.NET MVC 2, IIS, SQL Server Express and Visual Web Developer 2010 Express really quickly with the Microsoft Web Platform Installer (just click the install button on the page). What is new with VS 2010 and .NET 4 Today’s release is a big one – and brings with it a ton of new feature and capabilities. One of the things we tried hard to focus on with this release was to invest heavily in making existing applications, projects and developer experiences better.  What this means is that you don’t need to read 1000+ page books or spend time learning major new concepts in order to take advantage of the release.  There are literally thousands of improvements (both big and small) that make you more productive and successful without having to learn big new concepts in order to start using them.  Below is just a small sampling of some of the improvements with this release: Visual Studio 2010 IDE  Visual Studio 2010 now supports multiple-monitors (enabling much better use of screen real-estate).  It has new code Intellisense support that makes it easier to find and use classes and methods. It has improved code navigation support for searching code-bases and seeing how code is called and used.  It has new code visualization support that allows you to see the relationships across projects and classes within projects, as well as to automatically generate sequence diagrams to chart execution flow.  The editor now supports HTML and JavaScript snippet support as well as improved JavaScript intellisense. The VS 2010 Debugger and Profiling support is now much, much richer and enables new features like Intellitrace (aka Historical Debugging), debugging of Crash/Dump files, and better parallel debugging.  VS 2010’s multi-targeting support is now much richer, and enables you to use VS 2010 to target .NET 2, .NET 3, .NET 3.5 and .NET 4 applications.  And the infamous Add Reference dialog now loads much faster. TFS 2010 is now easy to setup (you can now install the server in under 10 minutes) and enables great source-control, bug/work-item tracking, and continuous integration support.  Testing (both automated and manual) is now much, much richer.  And VS 2010 Premium and Ultimate provide much richer architecture and design tooling support. VB and C# Language Features VB and C# in VS 2010 both contain a bunch of new features and capabilities.  VB adds new support for automatic properties, collection initializers, and implicit line continuation support among many other features.  C# adds support for optional parameters and named arguments, a new dynamic keyword, co-variance and contra-variance, and among many other features. ASP.NET 4 and ASP.NET MVC 2 With ASP.NET 4, Web Forms controls now render clean, semantically correct, and CSS friendly HTML markup. 
Built-in URL routing functionality allows you to expose clean, search engine friendly, URLs and increase the traffic to your Website.  ViewState within applications can now be more easily controlled and made smaller.  ASP.NET Dynamic Data support has been expanded.  More controls, including rich charting and data controls, are now built-into ASP.NET 4 and enable you to build applications even faster.  New starter project templates now make it easier to get going with new projects.  SEO enhancements make it easier to drive traffic to your public facing sites.  And web.config files are now clean and simple. ASP.NET MVC 2 is now built-into VS 2010 and ASP.NET 4, and provides a great way to build web sites and applications using a model-view-controller based pattern. ASP.NET MVC 2 adds features to easily enable client and server validation logic, provides new strongly-typed HTML and UI-scaffolding helper methods.  It also enables more modular/reusable applications.  The new <%: %> syntax in ASP.NET makes it easier to HTML encode output.  Visual Studio 2010 also now includes better tooling support for unit testing and TDD.  In particular, “Consume first intellisense” and “generate from usage" support within VS 2010 make it easier to write your unit tests first, and then drive your implementation from them. Deploying ASP.NET applications gets a lot easier with this release. You can now publish your Websites and applications to a staging or production server from within Visual Studio itself. Visual Studio 2010 makes it easy to transfer all your files, code, configuration, database schema and data in one complete package. VS 2010 also makes it easy to manage separate web.config configuration files settings depending upon whether you are in debug, release, staging or production modes. WPF 4 and Silverlight 4 WPF 4 includes a ton of new improvements and capabilities including more built-in controls, richer graphics features (cached composition, pixel shader 3 support, layoutrounding, and animation easing functions), a much improved text stack (with crisper text rendering, custom dictionary support, and selection and caret brush options).  WPF 4 also includes a bunch of support to enable you to take advantage of new Windows 7 features – including multi-touch and Windows 7 shell integration. Silverlight 4 will launch this week as well.  You can watch my Silverlight 4 launch keynote streamed live Tuesday (April 13th) at 8am Pacific Time.  Silverlight 4 includes a ton of new capabilities – including a bunch for making it possible to build great business applications and out of the browser applications.  I’ll be doing a separate blog post later this week (once it is live on the web) that talks more about its capabilities. Visual Studio 2010 now includes great tooling support for both WPF and Silverlight.  The new VS 2010 WPF and Silverlight designer makes it much easier to build client applications as well as build great line of business solutions, as well as integrate and bind with data.  Tooling support for Silverlight 4 with the final release of Visual Studio 2010 will be available when Silverlight 4 releases to the web this week. SharePoint and Azure Visual Studio 2010 now includes built-in support for building SharePoint applications.  You can now create, edit, build, and debug SharePoint applications directly within Visual Studio 2010.  You can also now use SharePoint with TFS 2010. 
Support for creating Azure-hosted applications is also now included with VS 2010 – allowing you to build ASP.NET and WCF based applications and host them within the cloud. Data Access Data access has a lot of improvements coming to it with .NET 4.  Entity Framework 4 includes a ton of new features and capabilities – including support for model first and POCO development, default support for lazy loading, built-in support for pluralization/singularization of table/property names within the VS 2010 designer, full support for all the LINQ operators, the ability to optionally expose foreign keys on model objects (useful for some stateless web scenarios), disconnected API support to better handle N-Tier and stateless web scenarios, and T4 template customization support within VS 2010 to allow you to customize and automate how code is generated for you by the data designer.  In addition to improvements with the Entity Framework, LINQ to SQL with .NET 4 also includes a bunch of nice improvements.  WCF and Workflow WCF includes a bunch of great new capabilities – including better REST, activation and configuration support.  WCF Data Services (formerly known as Astoria) and WCF RIA Services also now enable you to easily expose and work with data from remote clients. Windows Workflow is now much faster, includes flowchart services, and now makes it easier to make custom services than before.  More details can be found here. CLR and Core .NET Library Improvements .NET 4 includes the new CLR 4 engine – which includes a lot of nice performance and feature improvements.  CLR 4 engine now runs side-by-side in-process with older versions of the CLR – allowing you to use two different versions of .NET within the same process.  It also includes improved COM interop support.  The .NET 4 base class libraries (BCL) include a bunch of nice additions and refinements.  In particular, the .NET 4 BCL now includes new parallel programming support that makes it much easier to build applications that take advantage of multiple CPUs and cores on a computer.  This work dove-tails nicely with the new VS 2010 parallel debugger (making it much easier to debug parallel applications), as well as the new F# functional language support now included in the VS 2010 IDE.  .NET 4 also now also has the Dynamic Language Runtime (DLR) library built-in – which makes it easier to use dynamic language functionality with .NET.  MEF – a really cool library that enables rich extensibility – is also now built-into .NET 4 and included as part of the base class libraries.  .NET 4 Client Profile The download size of the .NET 4 redist is now much smaller than it was before (the x86 full .NET 4 package is about 36MB).  We also now have a .NET 4 Client Profile package which is a pure sub-set of the full .NET that can be used to streamline client application installs. C++ VS 2010 includes a bunch of great improvements for C++ development.  This includes better C++ Intellisense support, MSBuild support for projects, improved parallel debugging and profiler support, MFC improvements, and a number of language features and compiler optimizations. My VS 2010 and .NET 4 Blog Series I’ve been cranking away on a blog series the last few months that highlights many of the new VS 2010 and .NET 4 improvements.  The good news is that I have about 20 in-depth posts already written.  The bad news (for me) is that I have about 200 more to go until I’m done!  
I’m going to try and keep adding a few more each week over the next few months to discuss the new improvements and how best to take advantage of them. Below is a list of the already written ones that you can check out today: Clean Web.Config Files Starter Project Templates Multi-targeting Multiple Monitor Support New Code Focused Web Profile Option HTML / ASP.NET / JavaScript Code Snippets Auto-Start ASP.NET Applications URL Routing with ASP.NET 4 Web Forms Searching and Navigating Code in VS 2010 VS 2010 Code Intellisense Improvements WPF 4 Add Reference Dialog Improvements SEO Improvements with ASP.NET 4 Output Cache Extensibility with ASP.NET 4 Built-in Charting Controls for ASP.NET and Windows Forms Cleaner HTML Markup with ASP.NET 4 - Client IDs Optional Parameters and Named Arguments in C# 4 - and a cool scenarios with ASP.NET MVC 2 Automatic Properties, Collection Initializers and Implicit Line Continuation Support with VB 2010 New <%: %> Syntax for HTML Encoding Output using ASP.NET 4 JavaScript Intellisense Improvements with VS 2010 Stay tuned to my blog as I post more.  Also check out this page which links to a bunch of great articles and videos done by others. VS 2010 Installation Notes If you have installed a previous version of VS 2010 on your machine (either the beta or the RC) you must first uninstall it before installing the final VS 2010 release.  I also recommend uninstalling .NET 4 betas (including both the client and full .NET 4 installs) as well as the other installs that come with VS 2010 (e.g. ASP.NET MVC 2 preview builds, etc).  The uninstalls of the betas/RCs will clean up all the old state on your machine – after which you can install the final VS 2010 version and should have everything just work (this is what I’ve done on all of my machines and I haven’t had any problems). The VS 2010 and .NET 4 installs add a bunch of new managed assemblies to your machine.  Some of these will be “NGEN’d” to native code during the actual install process (making them run fast).  To avoid adding too much time to VS setup, though, we don’t NGEN all assemblies immediately – and instead will NGEN the rest in the background when your machine is idle.  Until it finishes NGENing the assemblies they will be JIT’d to native code the first time they are used in a process – which for large assemblies can sometimes cause a slight performance hit. If you run into this you can manually force all assemblies to be NGEN’d to native code immediately (and not just wait till the machine is idle) by launching the Visual Studio command line prompt from the Windows Start Menu (Microsoft Visual Studio 2010->Visual Studio Tools->Visual Studio Command Prompt).  Within the command prompt type “Ngen executequeueditems” – this will cause everything to be NGEN’d immediately. How to Buy Visual Studio 2010 You can can download and use the free Visual Studio express editions of Visual Web Developer 2010, Visual Basic 2010, Visual C# 2010 and Visual C++.  These express editions are available completely for free (and never time out). You can buy a new copy of VS 2010 Professional that includes a 1 year subscription to MSDN Essentials for $799.  MSDN Essentials includes a developer license of Windows 7 Ultimate, Windows Server 2008 R2 Enterprise, SQL Server 2008 DataCenter R2, and 20 hours of Azure hosting time.  Subscribers also have access to MSDN’s Online Concierge, and Priority Support in MSDN Forums. Upgrade prices from previous releases of Visual Studio are also available.  
Existing Visual Studio 2005/2008 Standard customers can upgrade to Visual Studio 2010 Professional for a special $299 retail price until October.  You can take advantage of this VS Standard->Professional upgrade promotion here. Web developers who build applications for others, and who are either independent developers or who work for companies with less than 10 employees, can also optionally take advantage of the Microsoft WebSiteSpark program.  This program gives you three copies of Visual Studio 2010 Professional, 1 copy of Expression Studio, and 4 CPU licenses of both Windows 2008 R2 Web Server and SQL 2008 Web Edition that you can use to both develop and deploy applications with at no cost for 3 years.  At the end of the 3 years there is no obligation to buy anything.  You can sign-up for WebSiteSpark today in under 5 minutes – and immediately have access to the products to download. Summary Today’s release is a big one – and has a bunch of improvements for pretty much every developer.  Thank you everyone who provided feedback, suggestions and reported bugs throughout the development process – we couldn’t have delivered it without you.  Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • An easy way to create Side by Side registrationless COM Manifests with Visual Studio

    - by Rick Strahl
    Here's something I didn't find out until today: you can use Visual Studio to create registrationless COM manifest files for you with just a couple of small steps. Registrationless COM lets you use COM components without them being registered in the registry. This means it's possible to deploy COM components along with another application using plain xcopy semantics. To be sure, it's rarely quite that easy - you need to watch out for dependencies - but if you know your COM components are lightweight and have no (or known) dependencies, it's easy to get everything into a single folder and off you go. Registrationless COM works via manifest files which carry the same name as the executable plus a .manifest extension (i.e. yourapp.exe.manifest). I'm going to use a Visual FoxPro COM object as an example and create a simple Windows Forms app that calls the component - without that component being registered. Let's take a walk down memory lane… Create a COM Component I start by creating a FoxPro COM component because that's what I know and am working with here in my legacy environment. You can use a classic VB or C++ ATL object if that's more to your liking. Here's a really simple Fox one: DEFINE CLASS SimpleServer as Session OLEPUBLIC FUNCTION HelloWorld(lcName) RETURN "Hello " + lcName ENDDEFINE Compile it into a DLL COM component with: BUILD MTDLL simpleserver FROM simpleserver RECOMPILE And to make sure it works, test it quickly from Visual FoxPro: server = CREATEOBJECT("simpleServer.simpleserver") MESSAGEBOX( server.HelloWorld("Rick") ) Using Visual Studio to create a Manifest File for a COM Component Next open Visual Studio and create a new executable project - a Console App, WinForms or WPF application will all do. Go to the References node, select Add Reference, then use the Browse tab and find your compiled DLL to import. Next you'll see your assembly in the project. Right click on the reference and select Properties, click on the Isolated dropdown and select True, compile, and that's all there's to it. Visual Studio will create an App.exe.manifest file right alongside your application's EXE. The manifest file created looks like this: xml version="1.0" encoding="utf-8"?
assembly xsi:schemaLocation="urn:schemas-microsoft-com:asm.v1 assembly.adaptive.xsd" manifestVersion="1.0" xmlns:asmv1="urn:schemas-microsoft-com:asm.v1" xmlns:asmv2="urn:schemas-microsoft-com:asm.v2" xmlns:asmv3="urn:schemas-microsoft-com:asm.v3" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" xmlns:co.v1="urn:schemas-microsoft-com:clickonce.v1" xmlns:co.v2="urn:schemas-microsoft-com:clickonce.v2" xmlns="urn:schemas-microsoft-com:asm.v1" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" assemblyIdentity name="App.exe" version="1.0.0.0" processorArchitecture="x86" type="win32" / file name="simpleserver.DLL" asmv2:size="27293" hash xmlns="urn:schemas-microsoft-com:asm.v2" dsig:Transforms dsig:Transform Algorithm="urn:schemas-microsoft-com:HashTransforms.Identity" / dsig:Transforms dsig:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1" / dsig:DigestValuepuq+ua20bbidGOWhPOxfquztBCU=dsig:DigestValue hash typelib tlbid="{f10346e2-c9d9-47f7-81d1-74059cc15c3c}" version="1.0" helpdir="" resourceid="0" flags="HASDISKIMAGE" / comClass clsid="{af2c2811-0657-4264-a1f5-06d033a969ff}" threadingModel="Apartment" tlbid="{f10346e2-c9d9-47f7-81d1-74059cc15c3c}" progid="simpleserver.SimpleServer" description="simpleserver.SimpleServer" / file assembly Now let's finish our super complex console app to test with: using System; using System.Collections.Generic; using System.Text; namespace ConsoleApplication1 {     class Program     {         static voidMain(string[] args)         { Type type = Type.GetTypeFromProgID("simpleserver.simpleserver",true); dynamic server = Activator.CreateInstance(type); Console.WriteLine(server.HelloWorld("rick")); Console.ReadLine(); } } } Now run the Console Application… As expected that should work. And why not? The COM component is still registered, right? :-) Nothing tricky about that. Let's unregister the COM component and then re-run and see what happens. Go to the Command Prompt Change to the folder where the DLL is installed Unregister with: RegSvr32 -u simpleserver.dll      To be sure that the COM component no longer works, check it out with the same test you used earlier (ie. o = CREATEOBJECT("SimpleServer.SimpleServer") in your development environment or VBScript etc.). Make sure you run the EXE and you don't re-compile the application or else Visual Studio will complain that it can't find the COM component in the registry while compiling. In fact now that we have our .manifest file you can remove the COM object from the project. When you run run the EXE from Windows Explorer or a command prompt to avoid the recompile. Watch out for embedded Manifest Files Now recompile your .NET project and run it… and it will most likely fail! The problem is that .NET applications by default embeds a manifest file into the compiled EXE application which results in the externally created manifest file being completely ignored. Only one manifest can be applied at a time and the compiled manifest takes precedency. Uh, thanks Visual Studio - not very helpful… Note that if you use another development tool like Visual FoxPro to create your EXE this won't be an issue as long as the tool doesn't automatically add a manifest file. Creating a Visual FoxPro EXE for example will work immediately with the generated manifest file as is. 
If you are using .NET and Visual Studio you have a couple of options of getting around this: Remove the embedded manifest file Copy the contents of the generated manifest file into a project manifest file and compile that in To remove an embedded manifest in a Visual Studio project: Open the Project Properties (Alt-Enter on project node) Go down to Resources | Manifest and select | Create Application without a Manifest   You can now add use the external manifest file and it will actually be respected when the app runs. The other option is to let Visual Studio create the manifest file on disk and then explicitly add the manifest file into the project. Notice on the dialog above I did this for app.exe.manifest and the manifest actually shows up in the list. If I select this file it will be compiled into the EXE and be used in lieu of any external files and that works as well. Remove the simpleserver.dll reference so you can compile your code and run the application. Now it should work without COM registration of the component. Personally I prefer external manifests because they can be modified after the fact - compiled manifests are evil in my mind because they are immutable - once they are there they can't be overriden or changed. So I prefer an external manifest. However, if you are absolutely sure nothing needs to change and you don't want anybody messing with your manifest, you can also embed it. The option to either is there. Watch for Manifest Caching While working trying to get this to work I ran into some problems at first. Specifically when it wasn't working at first (due to the embedded schema) I played with various different manifest layouts in different files etc.. There are a number of different ways to actually represent manifest files including offloading to separate folder (more on that later). A few times I made deliberate errors in the schema file and I found that regardless of what I did once the app failed or worked no amount of changing of the manifest file would make it behave differently. It appears that Windows is caching the manifest data for a given EXE or DLL. It takes a restart or a recompile of either the EXE or the DLL to clear the caching. Recompile your servers in order to see manifest changes unless there's an outright failure of an invalid manifest file. If the app starts the manifest is being read and caches immediately. This can be very confusing especially if you don't know that it's happening. I found myself always recompiling the exe after each run and before making any changes to the manifest file. Don't forget about Runtimes of COM Objects In the example I used above I used a Visual FoxPro COM component. Visual FoxPro is a runtime based environment so if I'm going to distribute an application that uses a FoxPro COM object the runtimes need to be distributed as well. The same is true of classic Visual Basic applications. Assuming that you don't know whether the runtimes are installed on the target machines make sure to install all the additional files in the EXE's directory alongside the COM DLL. In the case of Visual FoxPro the target folder should contain: The EXE  App.exe The Manifest file (unless it's compiled in) App.exe.manifest The COM object DLL (simpleserver.dll) Visual FoxPro Runtimes: VFP9t.dll (or VFP9r.dll for non-multithreaded dlls), vfp9rENU.dll, msvcr71.dll All these files should be in the same folder. 
Debugging Manifest load Errors

If you for some reason get your manifest loading wrong, there are a couple of useful tools available - SxSTrace and SxSParse. These two tools can be a huge help in debugging manifest loading errors. Put the following into a batch file (SxS_Trace.bat for example):

sxstrace Trace -logfile:sxs.bin
sxstrace Parse -logfile:sxs.bin -outfile:sxs.txt

Then start the batch file before running your EXE. Make sure there's no caching happening as described in the previous section. For example, if I go into the manifest file and explicitly break the CLSID and/or ProgID, I get a detailed report on where the EXE is looking for the manifest and what it's reading. Eventually the trace gives me an error like this:

INFO: Parsing Manifest File C:\wwapps\Conf\SideBySide\Code\app.EXE.
INFO: Manifest Definition Identity is App.exe,processorArchitecture="x86",type="win32",version="1.0.0.0".
ERROR: Line 13: The value {AAaf2c2811-0657-4264-a1f5-06d033a969ff} of attribute clsid in element comClass is invalid.
ERROR: Activation Context generation failed.
End Activation Context Generation.

pinpointing nicely where the error lies. Pay special attention to the various attributes - they have to match exactly in the different sections of the manifest file(s).

Multiple COM Objects

The manifest file that Visual Studio creates is actually quite a bit more complex than is required for basic registrationless COM object invocation. The manifest file can actually be simplified a lot by stripping off various namespaces and removing the type library references altogether. Here's an example of a simplified manifest file that includes references to 2 COM servers:

<?xml version="1.0" encoding="utf-8"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <assemblyIdentity name="App.exe" version="1.0.0.0" processorArchitecture="x86" type="win32" />
  <file name="simpleserver.DLL">
    <comClass clsid="{af2c2811-0657-4264-a1f5-06d033a969ff}" threadingModel="Apartment" progid="simpleserver.SimpleServer" description="simpleserver.SimpleServer" />
  </file>
  <file name="sidebysidedeploy.dll">
    <comClass clsid="{EF82B819-7963-4C36-9443-3978CD94F57C}" progid="sidebysidedeploy.SidebysidedeployServer" description="SidebySideDeploy Server" threadingModel="apartment" />
  </file>
</assembly>

Simple enough, right?

Routing to separate Manifest Files and Folders

In the examples above all files ended up in the application's root folder - all the DLLs, support files and runtimes. Sometimes that's not so desirable, and you can actually create separate manifest files. The easiest way to do this is to create a manifest file that 'routes' to another manifest file in a separate folder. Basically you create a new 'assembly identity' via a named id. You can then create a folder and another manifest with the id plus .manifest that points at the actual file. In this example I create:

App.exe.manifest
A folder called App.deploy
A manifest file in App.deploy
All DLLs and runtimes in App.deploy

Let's start with that master manifest file. This file only holds a reference to another manifest file:

App.exe.manifest

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <assemblyIdentity name="App.exe" version="1.0.0.0" processorArchitecture="x86" type="win32" />
  <dependency>
    <dependentAssembly>
      <assemblyIdentity name="App.deploy" version="1.0.0.0" type="win32" />
    </dependentAssembly>
  </dependency>
</assembly>

Note this file only contains a dependency to App.deploy, which is another manifest id. I can then create App.deploy.manifest in the current folder or in an App.deploy folder. In this case I'll create App.deploy and in it copy the DLLs and support runtimes. I then create App.deploy.manifest.

App.deploy.manifest

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <assemblyIdentity name="App.deploy" type="win32" version="1.0.0.0" />
  <file name="simpleserver.DLL">
    <comClass clsid="{af2c2811-0657-4264-a1f5-06d033a969ff}" threadingModel="Apartment" progid="simpleserver.SimpleServer" description="simpleserver.SimpleServer" />
  </file>
  <file name="sidebysidedeploy.dll">
    <comClass clsid="{EF82B819-7963-4C36-9443-3978CD94F57C}" threadingModel="Apartment" progid="sidebysidedeploy.SidebysidedeployServer" description="SidebySideDeploy Server" />
  </file>
</assembly>

In this manifest file I then host my COM DLLs and any support runtimes. This is quite useful if you have lots of DLLs you are referencing, or if you need to have separate configuration and application files that are associated with the COM object. This way the operation of your main application and the COM objects it interacts with is somewhat separated. You can see the two folders here:

Routing Manifests to different Folders

In theory registrationless COM should be pretty easy and painless - you've seen the configuration manifest files and it certainly doesn't look very complicated, right? But the devil's in the details. The ActivationContext API (SxS - side by side activation) is very intolerant of small errors in the XML or formatting of the keys, so be really careful when setting up components, especially if you are manually editing these files. If you do run into trouble, SxsTrace/SxsParse are a huge help to track down the problems. And remember that if you do have problems, you'll need to recompile your EXEs or DLLs for the SxS APIs to refresh themselves properly. All of this gets even more fun if you want to do registrationless COM inside of IIS :-) But I'll leave that for another blog post…

© Rick Strahl, West Wind Technologies, 2005-2011. Posted in COM, .NET, FoxPro

    Read the article

  • Jolicloud is a Nifty New OS for Your Netbook

    - by Matthew Guay
    Want to breathe new life into your netbook?  Here’s a quick look at Jolicloud, a unique new Linux based OS that lets you use your netbook in a whole new way. Netbooks have been an interesting category of computers.  When they were first released, most netbooks came with a stripped down Linux based operating system designed to let you easily access the internet first and foremost.  Consumers wanted more from their netbooks, so full OSes such as Windows XP and Ubuntu became the standard on netbooks.  Microsoft worked hard to get Windows 7 working great on netbooks, and today most netbooks run Windows 7 great.  But the Linux community hasn’t stood still either, and Jolicloud is proof of that.  Jolicloud is a unique OS designed to bring the best of both webapps and standard programs to your netbook.   Keep reading to see if this is the perfect netbook OS for you. Getting Started Installing Jolicloud on your netbook is easy thanks to a the Jolicloud Express installer for Windows.  Since many netbooks run Windows by default, this makes it easy to install Jolicloud.  Plus, your Windows install is left untouched, so you can still easily access all your Windows files and programs. Download and run the roughly 700Mb installer (link below) just as a normal installer in Windows. This will first extract the needed files. Click Get started to install Jolicloud on your netbook. Enter a username, password, and nickname for your computer.  Please note that the username must be all lowercase, and the nickname should not contain spaces or special characters.   Now you can review the default installation settings.  By default it will take up 39Gb and install on your C:\ drive in English.  If you wish to change this, click Change. We chose to install it on the D: drive on this netbook, as its harddrive was already partitioned into two parts.  Click Save when your settings are all correct, and then click Next in the previous window. Jolicloud will prepare for the installation.  This took about 5 minutes in our test.  Click Next when this is finished. Click Restart now to install and run Jolicloud. When your netbook reboots, it will initialize the Jolicloud setup. It will then automatically finish the installation.  Just sit back and wait; there’s nothing for you to do right now.  The installation took about 20 minutes in our test. Jolicloud will automatically reboot when the setup is finished. Once it’s rebooted, you’re ready to go!  Enter the username, then the password, that you chose earlier when you were installing Jolicloud from Windows. Welcome to your Jolicloud desktop! Hardware Support We installed Jolicloud on a Samsung N150 netbook with an Atom N450 processor, 1Gb Ram, 250Gb harddrive, and WiFi b/g/n with Bluetooth.  Amazingly, once Jolicloud was installed, everything was ready to use.  No drivers to install, no settings to hassle with, it was all installed and set up perfectly.  Power settings worked great, and closing the netbook put it to sleep just like in Windows. WiFi drivers have typically been difficult to find and install on Linux, but Jolicloud had our netbook’s wifi working immediately.  To get online, simply click the Wireless icon on the top right, and select the wireless network you want to connect to. Jolicloud will let you know when it is signed on. Wired Lan networking was also seamless; simply connect your cable and you’re ready to go.  The webcam and touchpad also worked perfectly directly.  
The only thing missing was multitouch; this touchpad has two finger scroll, pinch zoom, and other nice multitouch features in Windows, but in Julicloud it only functioned as a standard touchpad.  It did have tap to click activated by default, as well as right-side scrolling, which is nice. Jolicloud also supported our video card without any extra work.  The native resolution was already selected, and the only problem we had with the screen was that there was no apparent way to change the brightness.  This is not a major problem, but would be nice to have.  The Samsung N150 has Intel GMA3150 integrated graphics, and Jolicloud promises 1080p HD video on it.  It did playback 720p H.264 video flawlessly without installing anything extra, but it stuttered on full 1080p HD (which is the exact same as this netbook’s video playback in Windows 7 – 720p works great, but it stutters on 1080p).  We would be excited to see full HD on this netbook, but 720p is definitely fine for most stuff.   Jolicloud supports a wide range of netbooks, and based on our experience we would expect it to work as good on any supported hardware.  Check out the list of supported netbooks to see if your netbook is supported; if not, it still may work but you may have to install special drivers. Jolicloud’s performance was very similar to Windows 7 on our netbook.  It boots in about 30 seconds, and apps load fairly quickly.  In general, we couldn’t tell much difference in performance between Jolicloud and Windows 7, though this isn’t a problem since Windows 7 runs great on the current generation of netbooks. Using Jolicloud Ready to start putting Jolicloud to use?  Your fresh Jolicloud install you can run several built-in apps, such as Firefox, a calculator, and the chat client Pidgin.  It also has a media player and file viewer installed, so you can play MP3s or MPG videos, or read PDF ebooks without installing anything extra.  It also has Flash player installed so you can watch videos online easily. You can also directly access all of your files from the right side of your home screen.  You can even access your Windows files; in our test, the 116.9 GB Media was C: from Windows.  Select it to browse and open any file you had saved in Windows. You may need to enter your password to access it. Once you’re authenticated it, you’ll see all of your Windows files and folders.  Your User files (Documents, Music, Videos, etc.) will be in the Users folder. And, you can easily add files from removable media such as USB flash drives and memory cards.  Jolicloud recognized a flash drive we tested with no trouble at all. Add new apps But, the best part about Jolicloud is that it makes it very easy to install new apps.  Click the Get Started button on your homescreen. You’ll first need to create an account.  You can then use this same account on another netbook if you wish, and your settings will automatically be synced between the two. You can either signup using your Facebook account, …or you can sign up the traditional way with your email address, name, and password.  If you sign up this way, you will need to confirm your email address before your account will be finished. Now, choose your netbook model from the list, and enter a name for your computer. And that’s it!  You’ll now see the Jolicloud dashboard, which will show you updates and notifications from friends who also use Jolicloud. Click the App directory to find new apps for your netbook.  
Here you will find a variety of webapps, such as Gmail, along with native applications, such as Skype, that you can install on your netbook.  Simply click the Install button on the right to add the app to your netbook. You will be prompted to enter your system password, and then the app will install without any further input.   Once an app is installed, a check mark will appear beside its name.  You can remove it by clicking the Remove button, and it will uninstall seamlessly. Webapps, such as Gmail, actually run in in a Chrome-powered window that lets the webapp run full screen.  This gives the webapps a native feel, but actually they’re just running the same as they would in a standard web browser.   The Jolicloud Interface Most apps run maximized, and there is no way to run them smaller.  This in general works good, since with small screens most apps need to run full-screen anyhow. Smaller apps, such as a calculator or the Pidgin chat client, run in a window just like they do on other operating systems. You can switch to another app that’s running by selecting it’s icon on the top left, or you can go back to the home screen by clicking the home screen.  If you’re finished with an program, simply click the red X button on the top right of the window when you’re running it. Or, you can switch between programs using standard keyboard shortcuts such as Alt-tab. The default page on the home screen is the favorites page, and all of your other programs are orginized in their own sections on the left hand side.  But, if you want to add one of these to your favorites page, simply right-click on it and select Add to Favorites. When you’re done for the day, you can simply close your netbook to put it to sleep.  Or, if you want to shut down, just press the Quit button on the bottom right of the home screen and then select Shut Down. Booting Jolicloud When you install Jolicloud, it will set itself as the default operating system.  Now, when you boot your netbook, it will show you a list of installed operating systems.  You can select either Windows or Jolicloud, but if you don’t make a selection it will boot into Jolicloud after waiting 10 seconds. If you’d perfer to boot into Windows by default, you can easily change this.  First, boot your netbook in to Windows.  Open the start menu, right-click on the Computer button, and select Properties.   Click the “Advanced system settings” link on the left side. Click the Settings button in the Startup and Recovery section. Now, select Windows as the default operating system, and click Ok.  Your netbook will now boot into Windows by default, but will give you 10 seconds to choose to boot into Jolicloud when you start your computer. Or, if you decided you don’t want Jolicloud, you can easily uninstall it from within Windows. Please note that this will also remove any files you may have saved in Jolicloud, so be sure to copy them to your Windows drive before uninstalling. To uninstall Jolicloud from within Windows, open Control Panel, and select Uninstall a Program. Scroll down to select Jolicloud, and click Uninstall/Change. Click Yes to confirm that you want to uninstall Jolicloud. After a few moments, it will let you know that Jolicloud has been uninstalled.  You’re netbook is now back the same as it was before you installed Jolicloud, with only Windows installed. Closing Whether you’re wanting to replace your current OS on your netbook or would simply like to try out a fresh new Linux version on your netbook, Jolicloud is a great option for you.  
We were very impressed by its solid hardware support and the ease of installing new apps in Jolicloud.  Rather than simply giving us a standard OS, Jolicloud offers a unique way to use your netbook with native programs and webapps.  And whether you’re an IT pro or a new computer user, Jolicloud was easy enough for anyone to use.  Give it a try, and let us know what your favorite netbook OS is! Link: Download Jolicloud for your netbook

    Read the article

  • Using an alternate search platform in Commerce Server 2009

    - by Lewis Benge
    Although Microsoft Commerce Server 2009's architecture is built upon Microsoft SQL Server, and has the full power of the SQL Full Text Indexing Search Platform, there are time however when you may require a richer or alternate search platform. One of these scenarios if when you want to implement a faceted (refinement) search into your site, which provides dynamic refinements based on the search results dataset. Faceted search is becoming popular in most online retail environments as a way of providing an enhanced user experience when browsing a larger catalogue. This is powerful for two reasons, firstly with a traditional search it is down to a user to think of a search term suitable for the product they are trying to find. This typically will not return similar products or help in any way to refine a larger dataset. Faceted searches on the other hand provide a comprehensive list of product properties, grouped together by similarity to help the user narrow down the results returned, as the user progressively restricts the search criteria by selecting additional criteria to search again, these facets needs to continually refresh. The whole experience allows users to explore alternate brands, price-ranges, or find products they hadn't initially thought of or where looking for in a bid to enhance cross sell in the retail environment. The second advantage of this type of search from a business perspective is also to harvest the search result to start to profile your user. Even though anonymous users may routinely visit your site, and will not necessarily register or complete a transaction to build up marketing data- profiling, you can still achieve the same result by recording search facets used within the search sequence. Below is a faceted search scenario generated from eBay using the search term "server". By creating a search profile of clicking through Computer & Networking -> Servers -> Dell - > New and recording this information against my user profile you can start to predict with a lot more certainty what types of products I am interested in. This will allow you to apply shopping-cart analysis against your search data and provide great cross-sale or advertising opportunity, or personalise the user experience based on your prediction of what the user may be interested in. This type of search is extremely beneficial in e-Commerce environments but achieving it out of the box with Commerce Server and SQL Full Text indexing can be challenging. In many deployments it is often easier to use an alternate search platform such as Microsoft's FAST, Apache SOLR, or Endecca, however you still want these products to integrate natively into Commerce Server to ensure that up-to-date inventory information is presented, profile information is generated, and you provide a consistant API. To do so we make the most of the Commerce Server extensibilty points called operation sequence components. In this example I will be talking about Apache Solr hosted on Apache Tomcat, in this specific example I have used the SolrNet C# library to interface to the Java platform. Also I am not going to talk about Solr configuration of indexing – but in a production envionrment this would typically happen by using Powershell to call the Commerce Server management webservice to export your catalog as XML, apply an XSLT transform to the file to make it conform to SOLR and use a simple HTTP Post to send it to the search enginge for indexing. 
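As a rough illustration of that last indexing step, pushing the transformed catalog XML into Solr is just an HTTP POST to the update handler followed by a commit. The sketch below assumes a local Solr instance at http://localhost:8983/solr and a catalog file that has already been run through the XSLT - both are placeholders of mine, not details from this article:

using System;
using System.IO;
using System.Net;

class SolrCatalogIndexer
{
    static void Main()
    {
        // Placeholder locations - point these at your own Solr core and the
        // XSLT-transformed catalog export.
        string solrUpdateUrl = "http://localhost:8983/solr/update";
        string transformedCatalog = @"C:\temp\catalog-solr.xml";

        using (var client = new WebClient())
        {
            // Post the <add><doc>...</doc></add> payload produced by the transform.
            client.Headers[HttpRequestHeader.ContentType] = "text/xml; charset=utf-8";
            Console.WriteLine(client.UploadString(solrUpdateUrl, File.ReadAllText(transformedCatalog)));

            // Commit so the newly added documents become visible to queries.
            client.Headers[HttpRequestHeader.ContentType] = "text/xml; charset=utf-8";
            Console.WriteLine(client.UploadString(solrUpdateUrl, "<commit/>"));
        }
    }
}

In production you would typically drive this from the PowerShell export script mentioned above rather than a standalone console app, but the HTTP interaction is the same.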
Essentially a sequance component is a step in a serial workflow used to call a data repository (which in most cases is usually the Commerce Server pipelines or databases) and map to and from a Commerce Entity object whilst enforcing any business rules. So the first step in the process is to add a new class library to your existing Commerce Server site. You will need to use a new library as Sequence Components will need to be strongly named to be deployed. Once you are inside of your new project, add a new class file and add a reference to the Microsoft.Commerce.Providers, Microsoft.Commerce.Contracts and the Microsoft.Commerce.Broker assemblies. Now make your new class derive from the base object Microsoft.Commerce.Providers.Components.OperationSequanceComponent and overide the ExecuteQueryMethod. Your screen will then look something similar ot this: As all we are doing on this component is conducting a search we are only interested in the ExecuteQuery method. This method accepts three arguments, queryOperation, operationCache, and response. The queryOperation will be the object in which we receive our search parameters, the cache allows access to the Commerce Server cache allowing us to store regulary accessed information, and the response object is the object which we will return the result of our search upon. Inside this method is simply where we are going to inject our logic for our third party search platform. As I am not going to explain the inner-workings of actually making a SOLR call, I'll simply provide the sample code here. I would highly recommend however looking at the SolrNet wiki as they have some great explinations of how the API works. What you will find however is that there are some further extensions required when attempting to integrate a custom search provider. Firstly you out of the box the CommerceQueryOperation you will receive into the method when conducting a search against a catalog is specifically geared towards a SQL Full Text Search with properties such as a Where clause. To make the operation you receive more relevant you will need to create another class, this time derived from Microsoft.Commerce.Contract.Messages.CommerceSearchCriteria and within this you need to detail the properties you will require to allow you to submit as parameters to the SOLR search API. My exmaple looks like this: [DataContract(Namespace = "http://schemas.microsoft.com/microsoft-multi-channel-commerce-foundation/types/2008/03")] public class CommerceCatalogSolrSearch : CommerceSearchCriteria { private Dictionary<string, string> _facetQueries;   public CommerceCatalogSolrSearch() { _facetQueries = new Dictionary<String, String>();   }     public Dictionary<String, String> FacetQueries { get { return _facetQueries; } set { _facetQueries = value; } }   public String SearchPhrase{ get; set; } public int PageIndex { get; set; } public int PageSize { get; set; } public IEnumerable<String> Facets { get; set; }   public string Sort { get; set; }   public new int FirstItemIndex { get { return (PageIndex-1)*PageSize; } }   public int LastItemIndex { get { return FirstItemIndex + PageSize; } } }  To allow you to construct a CommerceQueryOperation call within the API you will also need to construct another class to derived from Microsoft.Commerce.Common.MessageBuilders.CommerceSearchCriteriaBuilder and is simply used to construct an instance of the CommerceQueryOperation you have just created and expose the properties you want set. 
My Message builder looks like this: public class CommerceCatalogSolrSearchBuilder : CommerceSearchCriteriaBuilder { private CommerceCatalogSolrSearch _solrSearch;   public CommerceCatalogSolrSearchBuilder() { _solrSearch = new CommerceCatalogSolrSearch(); }   public String SearchPhrase { get { return _solrSearch.SearchPhrase; } set { _solrSearch.SearchPhrase = value; } }   public int PageIndex { get { return _solrSearch.PageIndex; } set { _solrSearch.PageIndex = value; } }   public int PageSize { get { return _solrSearch.PageSize; } set { _solrSearch.PageSize = value; } }   public Dictionary<String,String> FacetQueries { get { return _solrSearch.FacetQueries; } set { _solrSearch.FacetQueries = value; } }   public String[] Facets { get { return _solrSearch.Facets.ToArray(); } set { _solrSearch.Facets = value; } } public override CommerceSearchCriteria ToSearchCriteria() { return _solrSearch; } }  Once you have these two classes in place you can now safely cast the CommerceOperation you receive as an argument of the overidden ExecuteQuery method in the SequenceComponent to the CommerceCatalogSolrSearch operation you have just created, e.g. public CommerceCatalogSolrSearch TryGetSearchCriteria(CommerceOperation operation) { var searchCriteria = operation as CommerceQueryOperation; if (searchCriteria == null) throw new Exception("No search criteria present");   var local = (CommerceCatalogSolrSearch) searchCriteria.SearchCriteria; if (local == null) throw new Exception("Unexpected Search Criteria in Operation");   return local; }  Now you have all of your search parameters present, you can go off an call the external search platform API. You will of-course get proprietry objects returned, so the next step in the process is to convert the results being returned back into CommerceEntities. You do this via another extensibility point within the Commerce Server API called translatators. Translators are another separate class, this time derived inheriting the interface Microsoft.Commerce.Providers.Translators.IToCommerceEntityTranslator . As you can imaginge this interface is specific for the conversion of the object TO a CommerceEntity, you will need to implement a separate interface if you also need to go in the opposite direction. If you implement the required method for the interace you will get a single translate method which has a source onkect, destination CommerceEntity, and a collection of properties as arguments. For simplicity sake in this example I have hard-coded the mappings, however best practice would dictate you map the objects using your metadatadefintions.xml file . 
Once complete your translator would look something like the following: public class SolrEntityTranslator : IToCommerceEntityTranslator { #region IToCommerceEntityTranslator Members   public void Translate(object source, CommerceEntity destinationCommerceEntity, CommercePropertyCollection propertiesToReturn) { if (source.GetType().Equals(typeof (SearchProduct))) { var searchResult = (SearchProduct) source;   destinationCommerceEntity.Id = searchResult.ProductId; destinationCommerceEntity.SetPropertyValue("DisplayName", searchResult.Title); destinationCommerceEntity.ModelName = "Product";   } }  Once you have a translator in place you can then safely map the results of your search platform into Commerce Entities and attach them on to the CommerceResponse object in a fashion similar to this: foreach (SearchProduct result in matchingProducts) { var destinationEntity = new CommerceEntity(_returnModelName);   Translator.ToCommerceEntity(result, destinationEntity, _queryOperation.Model.Properties); response.CommerceEntities.Add(destinationEntity); }  In SOLR I actually have two objects being returned – a product, and a collection of facets so I have an additional translator for facet (which maps to a custom facet CommerceEntity) and my facet response from SOLR is passed into the Translator helper class seperatley. When all of this is pieced together you have sucessfully completed the extensiblity point coding. You would have created a new OperationSequanceComponent, a custom SearchCritiera object and message builder class, and translators to convert the objects into Commerce Entities. Now you simply need to configure them, and can start calling them in your code. Make sure you sign you assembly, compile it and identiy its signature. Next you need to put this a reference of your new assembly into the Channel.Config configuration file replacing that of the existing SQL Full Text component: You will also need to add your translators to the Translators node of your Channel.Config too: Lastly add any custom CommerceEntities you have developed to your MetaDataDefintions.xml file. Your configuration is now complete, and you should now be able to happily make a call to the Commerce Foundation API, which will act as a proxy to your third party search platform and return back CommerceEntities of your search results. If you require data to be enriched, or logged, or any other logic applied then simply add further sequence components into the OperationSequence (obviously keeping the search response first) to the node of your Channel.Config file. Now to call your code you simply request it as per any other CommerceQuery operation, but taking into account you may be receiving multiple types of CommerceEntity returned: public KeyValuePair<FacetCollection ,List<Product>> DoFacetedProductQuerySearch(string searchPhrase, string orderKey, string sortOrder, int recordIndex, int recordsPerPage, Dictionary<string, string> facetQueries, out int totalItemCount) { var products = new List<Product>(); var query = new CommerceQuery<CatalogEntity, CommerceCatalogSolrSearchBuilder>();   query.SearchCriteria.PageIndex = recordIndex; query.SearchCriteria.PageSize = recordsPerPage; query.SearchCriteria.SearchPhrase = searchPhrase; query.SearchCriteria.FacetQueries = facetQueries;     totalItemCount = 0; CommerceResponse response = SiteContext.ProcessRequest(query.ToRequest()); var queryResponse = response.OperationResponses[0] as CommerceQueryOperationResponse;   // No results. 
Return the empty list if (queryResponse != null && queryResponse.CommerceEntities.Count == 0) return new KeyValuePair<FacetCollection, List<Product>>();   totalItemCount = (int)queryResponse.TotalItemCount;   // Prepare a multi-operation to retrieve the product variants var multiOperation = new CommerceMultiOperation();     //Add products to results foreach (Product product in queryResponse.CommerceEntities.Where(x => x.ModelName == "Product")) { var productQuery = new CommerceQuery<Product>(Product.ModelNameDefinition); productQuery.SearchCriteria.Model.Id = product.Id; productQuery.SearchCriteria.Model.CatalogId = product.CatalogId;   var variantQuery = new CommerceQueryRelatedItem<Variant>(Product.RelationshipName.Variants);   productQuery.RelatedOperations.Add(variantQuery);   multiOperation.Add(productQuery); }   CommerceResponse variantsResponse = SiteContext.ProcessRequest(multiOperation.ToRequest()); foreach (CommerceQueryOperationResponse queryOpResponse in variantsResponse.OperationResponses) { if (queryOpResponse.CommerceEntities.Count() > 0) products.Add(queryOpResponse.CommerceEntities[0]); }   //Get facet collection FacetCollection facetCollection = queryResponse.CommerceEntities.Where(x => x.ModelName == "FacetCollection").FirstOrDefault();     return new KeyValuePair<FacetCollection, List<Product>>(facetCollection, products); }    ..And that is it – simply a few classes and some configuration will allow you to extend the Commerce Server query operations to call a third party search platform, whilst still maintaing a unifed API in the remainder of your code. This logic stands for any extensibility within CommerceServer, which requires excution in a serial fashioon such as call to LOB systems or web service to validate or enrich data. Feel free to use this example on other applications, and if you have any questions please feel free to e-mail and I'll help out where I can!
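For reference, a call site for the DoFacetedProductQuerySearch helper above might look roughly like the following. The facet field name and values are illustrative placeholders of mine, and the snippet assumes it lives in the same class that defines the helper:

// Hypothetical caller, assumed to live alongside DoFacetedProductQuerySearch.
public void ShowFirstResultsPage()
{
    int totalItemCount;

    // e.g. the user clicked a "Dell" refinement under a hypothetical "brand" facet
    var facetQueries = new Dictionary<string, string> { { "brand", "Dell" } };

    KeyValuePair<FacetCollection, List<Product>> result =
        DoFacetedProductQuerySearch(
            "server",      // search phrase
            null, null,    // orderKey / sortOrder
            1, 20,         // first page, 20 products per page
            facetQueries,
            out totalItemCount);

    List<Product> products = result.Value;
    FacetCollection facets = result.Key;

    // Bind products and facets to the UI; keep facetQueries around so the next
    // click can add a further refinement and re-run the same query.
    Console.WriteLine("{0} of {1} matching products returned", products.Count, totalItemCount);
}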

    Read the article

  • Error in installing ZTE AC2738 on ubuntu 3.0.0-12-generic

    - by Netro
    I am getting this error ,struct usb_serial_driver has no member named shutdown. I am installing on 64bit ubuntu 3.0.0-12-generic ... Beginning Verify CD ... ... Verify CD Succeed! ... Beginning Copy Install Package Files ... ... will take a long time, waiting 5 seconds, please ... Copy Install Package Files Succeed! ... 'ztemtApp' previous version not found. and install now Beginning install ... ... Current linux release version is 'Ubuntu' ... Checking 'App' process ... Checking old installation ... Installing ... Current Path is : . : /tmp/ztemt_datacard/Linux 1. Checking Previous Version ... 2. Copying Data Bin ... ... will take a few seconds, please waiting ... /tmp/ztemt_datacard/Linux 3. Auto Load Usb Driver Module ... Rather than invoking init scripts through /etc/init.d, use the service(8) utility, e.g. service acpid restart Since the script you are attempting to invoke has been converted to an Upstart job, you may also use the stop(8) and then start(8) utilities, e.g. stop acpid ; start acpid. The restart(8) utility is also available. acpid stop/waiting acpid start/running, process 11802 4. Changing pppd Options ... 5. Changing File Permission ... 6. Deleting Qt lib When Local QT Vertion > V4.4.0 ... ... Package 'libqtgui4' exist ... QT_VERSION = 4 7. Deleting process id file: EVDOApp.pid ... 8. Making USB Serial Driver Module : ztemt.ko ... ... will take a few seconds, please waiting ... make -C /lib/modules/3.0.0-12-generic/build M=/usr/local/bin/ztemtApp/zteusbserial/below2.6.27 modules make[1]: Entering directory `/usr/src/linux-headers-3.0.0-12-generic' CC [M] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.o /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘destroy_serial’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:159:14: error: ‘struct usb_serial_driver’ has no member named ‘shutdown’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:165:18: error: ‘struct usb_serial_port’ has no member named ‘open_count’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘serial_open’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:241:36: error: ‘struct usb_serial_port’ has no member named ‘mutex’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:246:8: error: ‘struct usb_serial_port’ has no member named ‘open_count’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:251:6: error: ‘struct usb_serial_port’ has no member named ‘tty’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:253:10: error: ‘struct usb_serial_port’ has no member named ‘open_count’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:265:3: warning: passing argument 1 of ‘serial->type->open’ from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:265:3: note: expected ‘struct tty_struct *’ but argument is of type ‘struct usb_serial_port *’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:265:3: warning: passing argument 2 of ‘serial->type->open’ from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:265:3: note: expected ‘struct usb_serial_port *’ but argument is of type ‘struct file *’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:270:20: error: ‘struct usb_serial_port’ has no member named ‘mutex’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:276:6: error: ‘struct usb_serial_port’ has no member named 
‘open_count’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:278:6: error: ‘struct usb_serial_port’ has no member named ‘tty’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:279:20: error: ‘struct usb_serial_port’ has no member named ‘mutex’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘serial_close’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:355:18: error: ‘struct usb_serial_port’ has no member named ‘mutex’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:357:10: error: ‘struct usb_serial_port’ has no member named ‘open_count’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:358:21: error: ‘struct usb_serial_port’ has no member named ‘mutex’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:371:8: error: ‘struct usb_serial_port’ has no member named ‘open_count’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:372:10: error: ‘struct usb_serial_port’ has no member named ‘open_count’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:375:3: error: too many arguments to function ‘port->serial->type->close’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:377:11: error: ‘struct usb_serial_port’ has no member named ‘tty’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:378:12: error: ‘struct usb_serial_port’ has no member named ‘tty’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:379:9: error: ‘struct usb_serial_port’ has no member named ‘tty’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:380:8: error: ‘struct usb_serial_port’ has no member named ‘tty’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:386:20: error: ‘struct usb_serial_port’ has no member named ‘mutex’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘serial_write’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:407:11: error: ‘struct usb_serial_port’ has no member named ‘open_count’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:413:2: warning: passing argument 1 of ‘port->serial->type->write’ from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:413:2: note: expected ‘struct tty_struct *’ but argument is of type ‘struct usb_serial_port *’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:413:2: warning: passing argument 2 of ‘port->serial->type->write’ from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:413:2: note: expected ‘struct usb_serial_port *’ but argument is of type ‘const unsigned char *’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:413:2: warning: passing argument 3 of ‘port->serial->type->write’ makes pointer from integer without a cast [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:413:2: note: expected ‘const unsigned char *’ but argument is of type ‘int’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:413:2: error: too few arguments to function ‘port->serial->type->write’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘serial_write_room’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:429:11: error: ‘struct usb_serial_port’ has no member named ‘open_count’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:435:2: warning: passing argument 1 of ‘port->serial->type->write_room’ from incompatible pointer 
type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:435:2: note: expected ‘struct tty_struct *’ but argument is of type ‘struct usb_serial_port *’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘serial_chars_in_buffer’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:451:11: error: ‘struct usb_serial_port’ has no member named ‘open_count’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:457:2: warning: passing argument 1 of ‘port->serial->type->chars_in_buffer’ from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:457:2: note: expected ‘struct tty_struct *’ but argument is of type ‘struct usb_serial_port *’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘serial_throttle’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:472:11: error: ‘struct usb_serial_port’ has no member named ‘open_count’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:479:3: warning: passing argument 1 of ‘port->serial->type->throttle’ from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:479:3: note: expected ‘struct tty_struct *’ but argument is of type ‘struct usb_serial_port *’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘serial_unthrottle’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:491:11: error: ‘struct usb_serial_port’ has no member named ‘open_count’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:498:3: warning: passing argument 1 of ‘port->serial->type->unthrottle’ from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:498:3: note: expected ‘struct tty_struct *’ but argument is of type ‘struct usb_serial_port *’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘serial_ioctl’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:511:11: error: ‘struct usb_serial_port’ has no member named ‘open_count’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:518:3: warning: passing argument 1 of ‘port->serial->type->ioctl’ from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:518:3: note: expected ‘struct tty_struct *’ but argument is of type ‘struct usb_serial_port *’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:518:3: warning: passing argument 2 of ‘port->serial->type->ioctl’ makes integer from pointer without a cast [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:518:3: note: expected ‘unsigned int’ but argument is of type ‘struct file *’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:518:3: error: too many arguments to function ‘port->serial->type->ioctl’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘serial_set_termios’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:535:11: error: ‘struct usb_serial_port’ has no member named ‘open_count’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:542:3: warning: passing argument 1 of ‘port->serial->type->set_termios’ from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:542:3: note: expected ‘struct tty_struct *’ but argument is of type ‘struct usb_serial_port *’ 
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:542:3: warning: passing argument 2 of ‘port->serial->type->set_termios’ from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:542:3: note: expected ‘struct usb_serial_port *’ but argument is of type ‘struct termios *’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:542:3: error: too few arguments to function ‘port->serial->type->set_termios’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘serial_break’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:554:11: error: ‘struct usb_serial_port’ has no member named ‘open_count’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:561:3: warning: passing argument 1 of ‘port->serial->type->break_ctl’ from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:561:3: note: expected ‘struct tty_struct *’ but argument is of type ‘struct usb_serial_port *’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘serial_tiocmget’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:629:11: error: ‘struct usb_serial_port’ has no member named ‘open_count’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:635:3: warning: passing argument 1 of ‘port->serial->type->tiocmget’ from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:635:3: note: expected ‘struct tty_struct *’ but argument is of type ‘struct usb_serial_port *’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:635:3: error: too many arguments to function ‘port->serial->type->tiocmget’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘serial_tiocmset’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:651:11: error: ‘struct usb_serial_port’ has no member named ‘open_count’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:657:3: warning: passing argument 1 of ‘port->serial->type->tiocmset’ from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:657:3: note: expected ‘struct tty_struct *’ but argument is of type ‘struct usb_serial_port *’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:657:3: warning: passing argument 2 of ‘port->serial->type->tiocmset’ makes integer from pointer without a cast [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:657:3: note: expected ‘unsigned int’ but argument is of type ‘struct file *’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:657:3: error: too many arguments to function ‘port->serial->type->tiocmset’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘usb_serial_port_work’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:697:12: error: ‘struct usb_serial_port’ has no member named ‘tty’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘port_release’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:709:2: error: ‘struct device’ has no member named ‘bus_id’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘usb_serial_probe’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:858:2: error: implicit declaration of function ‘lock_kernel’ [-Werror=implicit-function-declaration] 
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:861:3: error: implicit declaration of function ‘unlock_kernel’ [-Werror=implicit-function-declaration] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1034:3: error: ‘struct usb_serial_port’ has no member named ‘mutex’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1182:23: error: ‘struct device’ has no member named ‘bus_id’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1182:51: error: ‘struct device’ has no member named ‘bus_id’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1183:3: error: ‘struct device’ has no member named ‘bus_id’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘usb_serial_disconnect’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1257:13: error: ‘struct usb_serial_port’ has no member named ‘tty’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1258:21: error: ‘struct usb_serial_port’ has no member named ‘tty’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: At top level: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1280:2: warning: initialization from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1280:2: warning: (near initialization for ‘serial_ops.ioctl’) [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1281:2: warning: initialization from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1281:2: warning: (near initialization for ‘serial_ops.set_termios’) [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1284:2: warning: initialization from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1284:2: warning: (near initialization for ‘serial_ops.break_ctl’) [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1286:2: error: unknown field ‘read_proc’ specified in initializer /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1286:2: warning: initialization from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1286:2: warning: (near initialization for ‘serial_ops.ioctl’) [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1287:2: warning: initialization from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1287:2: warning: (near initialization for ‘serial_ops.tiocmget’) [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1288:2: warning: initialization from incompatible pointer type [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1288:2: warning: (near initialization for ‘serial_ops.tiocmset’) [enabled by default] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘usb_serial_init’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1352:2: error: implicit declaration of function ‘info’ [-Werror=implicit-function-declaration] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘fixup_generic’: /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1406:2: error: ‘struct usb_serial_driver’ has no member named ‘shutdown’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1406:2: error: ‘struct 
usb_serial_driver’ has no member named ‘shutdown’ /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1406:1: error: ‘usb_serial_generic_shutdown’ undeclared (first use in this function) /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1406:1: note: each undeclared identifier is reported only once for each function it appears in cc1: some warnings being treated as errors make[2]: *** [/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.o] Error 1 make[1]: *** [_module_/usr/local/bin/ztemtApp/zteusbserial/below2.6.27] Error 2 make[1]: Leaving directory `/usr/src/linux-headers-3.0.0-12-generic' make: *** [modules] Error 2 Install finished! Any suggestion? Update: from installation document , In some special cases, the setup package can’t automatically compile the driver, so you need change the configurations yourself and manually compile the driver. Method: enter the directory: /usr/local/bin/ztemtApp/zteusbserial, and find the corresponding kernel version of current system. I can see 2.6.27 2.6.28 2.6.29 2.6.30 2.6.31 2.6.32 2.6.33 2.6.34 2.6.35 2.6.36 2.6.37 2.6.38 2.6.39 below2.6.2 in the directory. Which version shall I pick up for installation? Readme file says, we have to do this to insert ztemt.ko in kernel.

    Read the article

  • Macbook Pro Wireless Reconnecting

    - by A Student at a University
    I'm using a WPA2 EAP network. I'm sitting next to the access point. The connection keeps dropping and taking ~10 seconds to reconnect. My other devices are staying online. What's causing it? syslog: 01:21:10 dhclient: DHCPREQUEST of XXX.XXX.XXX.XXX on eth1 to XXX.XXX.XXX.XXX port 67 01:21:10 dhclient: DHCPACK of XXX.XXX.XXX.XXX from XXX.XXX.XXX.XXX 01:21:10 NetworkManager[XX40]: <info> (eth1): DHCPv4 state changed reboot -> renew 01:21:10 NetworkManager[XX40]: <info> address XXX.XXX.XXX.XXX 01:21:10 NetworkManager[XX40]: <info> prefix 20 (XXX.XXX.XXX.XXX) 01:21:10 NetworkManager[XX40]: <info> gateway XXX.XXX.XXX.XXX 01:21:10 NetworkManager[XX40]: <info> nameserver 'XXX.XXX.XXX.XXX' 01:21:10 NetworkManager[XX40]: <info> nameserver 'XXX.XXX.XXX.XXX' 01:21:10 NetworkManager[XX40]: <info> nameserver 'XXX.XXX.XXX.XXX' 01:21:10 NetworkManager[XX40]: <info> domain name 'server.domain.tld' 01:21:10 dhclient: bound to XXX.XXX.XXX.XXX -- renewal in XXX seconds. 01:33:30 dhclient: DHCPREQUEST of XXX.XXX.XXX.XXX on eth1 to XXX.XXX.XXX.XXX port 67 01:33:30 dhclient: DHCPACK of XXX.XXX.XXX.XXX from XXX.XXX.XXX.XXX 01:33:30 dhclient: bound to XXX.XXX.XXX.XXX -- renewal in XXX seconds. 01:35:13 wpa_supplicant[XX60]: CTRL-EVENT-EAP-STARTED EAP authentication started 01:35:13 wpa_supplicant[XX60]: CTRL-EVENT-EAP-METHOD EAP vendor 0 method 25 (PEAP) selected 01:35:14 wpa_supplicant[XX60]: EAP-MSCHAPV2: Authentication succeeded 01:35:14 wpa_supplicant[XX60]: EAP-TLV: TLV Result - Success - EAP-TLV/Phase2 Completed 01:35:14 wpa_supplicant[XX60]: CTRL-EVENT-EAP-SUCCESS EAP authentication completed successfully 01:35:14 NetworkManager[XX40]: <info> (eth1): supplicant connection state: completed -> 4-way handshake 01:35:14 wpa_supplicant[XX60]: WPA: Key negotiation completed with XX:XX:XX:XX:XX:XX [PTK=CCMP GTK=TKIP] 01:35:14 NetworkManager[XX40]: <info> (eth1): supplicant connection state: 4-way handshake -> group handshake 01:35:14 NetworkManager[XX40]: <info> (eth1): supplicant connection state: group handshake -> completed 01:35:17 wpa_supplicant[XX60]: CTRL-EVENT-DISCONNECTED - Disconnect event - remove keys 01:35:17 NetworkManager[XX40]: <info> (eth1): supplicant connection state: completed -> disconnected 01:35:17 NetworkManager[XX40]: <info> (eth1): supplicant connection state: disconnected -> scanning 01:35:26 wpa_supplicant[XX60]: CTRL-EVENT-DISCONNECTED - Disconnect event - remove keys 01:35:26 NetworkManager[XX40]: <info> (eth1): supplicant connection state: scanning -> disconnected 01:35:29 NetworkManager[XX40]: <info> (eth1): supplicant connection state: disconnected -> scanning 01:35:32 NetworkManager[XX40]: <info> (eth1): device state change: 8 -> 3 (reason 11) 01:35:32 NetworkManager[XX40]: <info> (eth1): deactivating device (reason: 11). 01:35:32 NetworkManager[XX40]: <info> (eth1): canceled DHCP transaction, DHCP client pid XX27 01:35:32 NetworkManager[XX40]: <info> Activation (eth1) starting connection 'Auto XXXXXXXXXX' 01:35:32 NetworkManager[XX40]: <info> (eth1): device state change: 3 -> 4 (reason 0) 01:35:32 NetworkManager[XX40]: <info> Activation (eth1) Stage 1 of 5 (Device Prepare) scheduled... 01:35:32 NetworkManager[XX40]: <info> (eth1): supplicant connection state: scanning -> disconnected 01:35:32 NetworkManager[XX40]: <info> Activation (eth1) Stage 1 of 5 (Device Prepare) started... 01:35:32 NetworkManager[XX40]: <info> Activation (eth1) Stage 2 of 5 (Device Configure) scheduled... 01:35:32 NetworkManager[XX40]: <info> Activation (eth1) Stage 1 of 5 (Device Prepare) complete. 
01:35:32 NetworkManager[XX40]: <info> Activation (eth1) Stage 2 of 5 (Device Configure) starting... 01:35:32 NetworkManager[XX40]: <info> (eth1): device state change: 4 -> 5 (reason 0) 01:35:32 NetworkManager[XX40]: <info> Activation (eth1/wireless): access point 'Auto XXXXXXXXXX' has security, but secrets are required. 01:35:32 NetworkManager[XX40]: <info> (eth1): device state change: 5 -> 6 (reason 0) 01:35:32 NetworkManager[XX40]: <info> Activation (eth1) Stage 2 of 5 (Device Configure) complete. 01:35:32 NetworkManager[XX40]: <info> Activation (eth1) Stage 1 of 5 (Device Prepare) scheduled... 01:35:32 NetworkManager[XX40]: <info> Activation (eth1) Stage 1 of 5 (Device Prepare) started... 01:35:32 NetworkManager[XX40]: <info> (eth1): device state change: 6 -> 4 (reason 0) 01:35:32 NetworkManager[XX40]: <info> Activation (eth1) Stage 2 of 5 (Device Configure) scheduled... 01:35:32 NetworkManager[XX40]: <info> Activation (eth1) Stage 1 of 5 (Device Prepare) complete. 01:35:32 NetworkManager[XX40]: <info> Activation (eth1) Stage 2 of 5 (Device Configure) starting... 01:35:32 NetworkManager[XX40]: <info> (eth1): device state change: 4 -> 5 (reason 0) 01:35:32 NetworkManager[XX40]: <info> Activation (eth1/wireless): connection 'Auto XXXXXXXXXX' has security, and secrets exist. No new secrets needed. 01:35:32 NetworkManager[XX40]: <info> Config: added 'ssid' value 'XXXXXXXXXX' 01:35:32 NetworkManager[XX40]: <info> Config: added 'scan_ssid' value '1' 01:35:32 NetworkManager[XX40]: <info> Config: added 'key_mgmt' value 'WPA-EAP' 01:35:32 NetworkManager[XX40]: <info> Config: added 'password' value '<omitted>' 01:35:32 NetworkManager[XX40]: <info> Config: added 'eap' value 'PEAP' 01:35:32 NetworkManager[XX40]: <info> Config: added 'fragment_size' value 'XXX0' 01:35:32 NetworkManager[XX40]: <info> Config: added 'phase2' value 'auth=MSCHAPV2' 01:35:32 NetworkManager[XX40]: <info> Config: added 'ca_cert' value '/etc/ssl/certs/Equifax_Secure_CA.pem' 01:35:32 NetworkManager[XX40]: <info> Config: added 'identity' value 'XXXXXXX' 01:35:32 NetworkManager[XX40]: <info> Activation (eth1) Stage 2 of 5 (Device Configure) complete. 
01:35:32 NetworkManager[XX40]: <info> Config: set interface ap_scan to 1 01:35:32 NetworkManager[XX40]: <info> (eth1): supplicant connection state: disconnected -> scanning 01:35:36 wpa_supplicant[XX60]: Associated with XX:XX:XX:XX:XX:XX 01:35:36 NetworkManager[XX40]: <info> (eth1): supplicant connection state: scanning -> associated 01:35:36 wpa_supplicant[XX60]: CTRL-EVENT-EAP-STARTED EAP authentication started 01:35:36 wpa_supplicant[XX60]: CTRL-EVENT-EAP-METHOD EAP vendor 0 method 25 (PEAP) selected 01:35:36 wpa_supplicant[XX60]: EAP-MSCHAPV2: Authentication succeeded 01:35:36 wpa_supplicant[XX60]: EAP-TLV: TLV Result - Success - EAP-TLV/Phase2 Completed 01:35:36 wpa_supplicant[XX60]: CTRL-EVENT-EAP-SUCCESS EAP authentication completed successfully 01:35:36 NetworkManager[XX40]: <info> (eth1): supplicant connection state: associated -> 4-way handshake 01:35:36 wpa_supplicant[XX60]: WPA: Could not find AP from the scan results 01:35:36 wpa_supplicant[XX60]: WPA: Key negotiation completed with XX:XX:XX:XX:XX:XX [PTK=CCMP GTK=TKIP] 01:35:36 wpa_supplicant[XX60]: CTRL-EVENT-CONNECTED - Connection to XX:XX:XX:XX:XX:XX completed (reauth) [id=0 id_str=] 01:35:36 NetworkManager[XX40]: <info> (eth1): supplicant connection state: 4-way handshake -> group handshake 01:35:36 NetworkManager[XX40]: <info> (eth1): supplicant connection state: group handshake -> completed 01:35:36 NetworkManager[XX40]: <info> Activation (eth1/wireless) Stage 2 of 5 (Device Configure) successful. Connected to wireless network 'XXXXXXXXXX'. 01:35:36 NetworkManager[XX40]: <info> Activation (eth1) Stage 3 of 5 (IP Configure Start) scheduled. 01:35:36 NetworkManager[XX40]: <info> Activation (eth1) Stage 3 of 5 (IP Configure Start) started... 01:35:36 NetworkManager[XX40]: <info> (eth1): device state change: 5 -> 7 (reason 0) 01:35:36 NetworkManager[XX40]: <info> Activation (eth1) Beginning DHCPv4 transaction (timeout in 45 seconds) 01:35:36 NetworkManager[XX40]: <info> dhclient started with pid XX87 01:35:36 NetworkManager[XX40]: <info> Activation (eth1) Stage 3 of 5 (IP Configure Start) complete. 01:35:36 dhclient: Internet Systems Consortium DHCP Client VXXX.XXX.XXX 01:35:36 dhclient: Copyright 2004-2009 Internet Systems Consortium. 01:35:36 dhclient: All rights reserved. 01:35:36 dhclient: For info, please visit https://www.isc.org/software/dhcp/ 01:35:36 dhclient: 01:35:36 NetworkManager[XX40]: <info> (eth1): DHCPv4 state changed nbi -> preinit 01:35:36 dhclient: Listening on LPF/eth1/XX:XX:XX:XX:XX:XX 01:35:36 dhclient: Sending on LPF/eth1/XX:XX:XX:XX:XX:XX 01:35:36 dhclient: Sending on Socket/fallback 01:35:36 dhclient: DHCPREQUEST of XXX.XXX.XXX.XXX on eth1 to XXX.XXX.XXX.XXX port 67 01:35:36 dhclient: DHCPACK of XXX.XXX.XXX.XXX from XXX.XXX.XXX.XXX 01:35:36 dhclient: bound to XXX.XXX.XXX.XXX -- renewal in XXX seconds. 01:35:36 NetworkManager[XX40]: <info> (eth1): DHCPv4 state changed preinit -> reboot 01:35:36 NetworkManager[XX40]: <info> Activation (eth1) Stage 4 of 5 (IP4 Configure Get) scheduled... 01:35:36 NetworkManager[XX40]: <info> Activation (eth1) Stage 4 of 5 (IP4 Configure Get) started... 
01:35:36 NetworkManager[XX40]: <info> address XXX.XXX.XXX.XXX 01:35:36 NetworkManager[XX40]: <info> prefix 20 (XXX.XXX.XXX.XXX) 01:35:36 NetworkManager[XX40]: <info> gateway XXX.XXX.XXX.XXX 01:35:36 NetworkManager[XX40]: <info> nameserver 'XXX.XXX.XXX.XXX' 01:35:36 NetworkManager[XX40]: <info> nameserver 'XXX.XXX.XXX.XXX' 01:35:36 NetworkManager[XX40]: <info> nameserver 'XXX.XXX.XXX.XXX' 01:35:36 NetworkManager[XX40]: <info> domain name 'server.domain.tld' 01:35:36 NetworkManager[XX40]: <info> Activation (eth1) Stage 5 of 5 (IP Configure Commit) scheduled... 01:35:36 NetworkManager[XX40]: <info> Activation (eth1) Stage 4 of 5 (IP4 Configure Get) complete. 01:35:36 NetworkManager[XX40]: <info> Activation (eth1) Stage 5 of 5 (IP Configure Commit) started... 01:35:37 NetworkManager[XX40]: <info> (eth1): device state change: 7 -> 8 (reason 0) 01:35:37 NetworkManager[XX40]: <info> (eth1): roamed from BSSID XX:XX:XX:XX:XX:XX (XXXXXXXXXX) to XX:XX:XX:XX:XX:XX (XXXXXXXXX) 01:35:37 NetworkManager[XX40]: <info> Policy set 'Auto XXXXXXXXXX' (eth1) as default for IPv4 routing and DNS. 01:35:37 NetworkManager[XX40]: <info> Activation (eth1) successful, device activated. 01:35:37 NetworkManager[XX40]: <info> Activation (eth1) Stage 5 of 5 (IP Configure Commit) complete. 01:35:43 wpa_supplicant[XX60]: Trying to associate with XX:XX:XX:XX:XX:XX (SSID='XXXXXXXXXX' freq=2412 MHz) 01:35:43 NetworkManager[XX40]: <info> (eth1): supplicant connection state: completed -> associating 01:35:43 wpa_supplicant[XX60]: Association request to the driver failed 01:35:46 wpa_supplicant[XX60]: Associated with XX:XX:XX:XX:XX:XX 01:35:46 NetworkManager[XX40]: <info> (eth1): supplicant connection state: associating -> associated 01:35:46 NetworkManager[XX40]: <info> (eth1): supplicant connection state: associated -> 4-way handshake 01:35:46 wpa_supplicant[XX60]: WPA: Key negotiation completed with XX:XX:XX:XX:XX:XX [PTK=CCMP GTK=TKIP] 01:35:46 wpa_supplicant[XX60]: CTRL-EVENT-CONNECTED - Connection to XX:XX:XX:XX:XX:XX completed (reauth) [id=0 id_str=] 01:35:46 NetworkManager[XX40]: <info> (eth1): supplicant connection state: 4-way handshake -> group handshake 01:35:46 NetworkManager[XX40]: <info> (eth1): supplicant connection state: group handshake -> completed 01:40:47 wpa_supplicant[XX60]: WPA: Group rekeying completed with XX:XX:XX:XX:XX:XX [GTK=TKIP] 01:40:47 NetworkManager[XX40]: <info> (eth1): supplicant connection state: completed -> group handshake 01:40:47 NetworkManager[XX40]: <info> (eth1): supplicant connection state: group handshake -> completed 01:50:19 dhclient: DHCPREQUEST of XXX.XXX.XXX.XXX on eth1 to XXX.XXX.XXX.XXX port 67 01:50:19 dhclient: DHCPACK of XXX.XXX.XXX.XXX from XXX.XXX.XXX.XXX

    Read the article

  • Windows Azure: General Availability of Web Sites + Mobile Services, New AutoScale + Alerts Support, No Credit Card Needed for MSDN

    - by ScottGu
    This morning we released a major set of updates to Windows Azure.  These updates included: Web Sites: General Availability Release of Windows Azure Web Sites with SLA Mobile Services: General Availability Release of Windows Azure Mobile Services with SLA Auto-Scale: New automatic scaling support for Web Sites, Cloud Services and Virtual Machines Alerts/Notifications: New email alerting support for all Compute Services (Web Sites, Mobile Services, Cloud Services, and Virtual Machines) MSDN: No more credit card requirement for sign-up All of these improvements are now available to use immediately (note: some are still in preview).  Below are more details about them. Web Sites: General Availability Release of Windows Azure Web Sites I’m incredibly excited to announce the General Availability release of Windows Azure Web Sites. The Windows Azure Web Sites service is perfect for hosting a web presence, building customer engagement solutions, and delivering business web apps.  Today’s General Availability release means we are taking off the “preview” tag from the Free and Standard (formerly called reserved) tiers of Windows Azure Web Sites.  This means we are providing: A 99.9% monthly SLA (Service Level Agreement) for the Standard tier Microsoft Support available on a 24x7 basis (with plans that range from developer plans to enterprise Premier support) The Free tier runs in a shared compute environment and supports up to 10 web sites. While the Free tier does not come with an SLA, it works great for rapid development and testing and enables you to quickly spike out ideas at no cost. The Standard tier, which was called “Reserved” during the preview, runs using dedicated per-customer VM instances for great performance, isolation and scalability, and enables you to host up to 500 different Web sites within them.  You can easily scale your Standard instances on-demand using the Windows Azure Management Portal.  You can adjust VM instance sizes from a Small instance size (1 core, 1.75GB of RAM), up to a Medium instance size (2 core, 3.5GB of RAM), or Large instance (4 cores and 7 GB RAM).  You can choose to run between 1 and 10 Standard instances, enabling you to easily scale up your web backend to 40 cores of CPU and 70GB of RAM: Today’s release also includes general availability support for custom domain SSL certificate bindings for web sites running using the Standard tier. Customers will be able to utilize certificates they purchase for their custom domains and use either SNI or IP based SSL encryption. SNI encryption is available for all modern browsers and does not require an IP address.  SSL certificates can be used for individual sites or wild-card mapped across multiple sites (we charge extra for the use of a SSL cert – but the fee is per-cert and not per site which means you pay once for it regardless of how many sites you use it with).  Today’s release also includes the following new features: Auto-Scale support Today’s Windows Azure release adds preview support for Auto-Scaling web sites.  This enables you to setup automatic scale rules based on the activity of your instances – allowing you to automatically scale down (and save money) when they are below a CPU threshold you define, and automatically scale up quickly when traffic increases.  See below for more details. 64-bit and 32-bit mode support You can now choose to run your standard tier instances in either 32-bit or 64-bit mode (previously they only ran in 32-bit mode).  
This enables you to address even more memory within individual web applications. Memory dumps Memory dumps can be very useful for diagnosing issues and debugging apps. Using a REST API, you can now get a memory dump of your sites, which you can then use for investigating issues in Visual Studio Debugger, WinDbg, and other tools. Scaling Sites Independently Prior to today’s release, all sites scaled up/down together whenever you scaled any site in a sub-region. So you may have had to keep your proof-of-concept or testing sites in a separate sub-region if you wanted to keep them in the Free tier. This will no longer be necessary.  Windows Azure Web Sites can now mix different tier levels in the same geographic sub-region. This allows you, for example, to selectively move some of your sites in the West US sub-region up to Standard tier when they require the features, scalability, and SLA of the Standard tier. Full pricing details on Windows Azure Web Sites can be found here.  Note that the “Shared Tier” of Windows Azure Web Sites remains in preview mode (and continues to have discounted preview pricing).  Mobile Services: General Availability Release of Windows Azure Mobile Services I’m incredibly excited to announce the General Availability release of Windows Azure Mobile Services.  Mobile Services is perfect for building scalable cloud back-ends for Windows 8.x, Windows Phone, Apple iOS, Android, and HTML/JavaScript applications.  Customers We’ve seen tremendous adoption of Windows Azure Mobile Services since we first previewed it last September, and more than 20,000 customers are now running mobile back-ends in production using it.  These customers range from startups like Yatterbox, to university students using Mobile Services to complete apps like Sly Fox in their spare time, to media giants like Verdens Gang finding new ways to deliver content, and telcos like TalkTalk Business delivering the up-to-the-minute information their customers require.  In today’s Build keynote, we demonstrated how TalkTalk Business is using Windows Azure Mobile Services to deliver service, outage and billing information to its customers, wherever they might be. Partners When we unveiled the source control and Custom API features I blogged about two weeks ago, we enabled a range of new scenarios, one of which is a more flexible way to work with third party services.  The following blogs, samples and tutorials from our partners cover great ways you can extend Mobile Services to help you build rich modern apps: New Relic allows developers to monitor and manage the end-to-end performance of iOS and Android applications connected to Mobile Services. SendGrid eliminates the complexity of sending email from Mobile Services, saving time and money, while providing reliable delivery to the inbox. Twilio provides a telephony infrastructure web service in the cloud that you can use with Mobile Services to integrate phone calls, text messages and IP voice communications into your mobile apps. Xamarin provides a Mobile Services add-on to make it easy to build cross-platform connected mobile apps. Pusher lets you quickly and securely add scalable real-time messaging functionality to Mobile Services-based web and mobile apps. Visual Studio 2013 and Windows 8.1 This week during the //build/ keynote, we demonstrated how Visual Studio 2013, Mobile Services and Windows 8.1 make building connected apps easier than ever. 
Developers building Windows 8 applications in Visual Studio can now connect them to Windows Azure Mobile Services by simply right clicking then choosing Add Connected Service. You can either create a new Mobile Service or choose an existing Mobile Service in the Add Connected Service dialog. Once completed, Visual Studio adds a reference to the Mobile Services SDK to your project and generates a Mobile Services client initialization snippet automatically. Add Push Notifications Push Notifications and Live Tiles are key to building engaging experiences. Visual Studio 2013 and Mobile Services make it super easy to add push notifications to your Windows 8.1 app, by clicking Add a Push Notification item: The Add Push Notification wizard will then guide you through the registration with the Windows Store as well as connecting your app to a new or existing mobile service. Upon completion of the wizard, Visual Studio will configure your mobile service with the WNS credentials, as well as add sample logic to your client project and your mobile service that demonstrates how to send push notifications to your app. Server Explorer Integration In Visual Studio 2013 you can also now view your Mobile Services in the Server Explorer. You can add tables, edit, and save server side scripts without ever leaving Visual Studio, as shown in the image below: Pricing With today’s general availability release we are announcing that we will be offering Mobile Services in three tiers – Free, Standard, and Premium.  Each tier is metered using a simple pricing model based on the # of API calls (bandwidth is included at no extra charge), and the Standard and Premium tiers are backed by 99.9% monthly SLAs.  You can elastically scale up or down the number of instances you have of each tier to increase the # of API requests your service can support – allowing you to efficiently scale as your business grows. The following table summarizes the new pricing model (full pricing details here):   You can find the full details of the new pricing model here. Build Conference Talks The //BUILD/ conference will be packed with sessions covering every aspect of developing connected applications with Mobile Services. The best part is that, even if you can’t be with us in San Francisco, every session is being streamed live. Be sure not to miss these talks: Mobile Services – Soup to Nuts — Josh Twist Building Cross-Platform Apps with Windows Azure Mobile Services — Chris Risner Connected Windows Phone Apps made Easy with Mobile Services — Yavor Georgiev Build Connected Windows 8.1 Apps with Mobile Services — Nick Harris Who’s that user? Identity in Mobile Apps — Dinesh Kulkarni Building REST Services with JavaScript — Nathan Totten Going Live and Beyond with Windows Azure Mobile Services — Kirill Gavrylyuk, Paul Batum Protips for Windows Azure Mobile Services — Chris Risner AutoScale: Dynamically scale up/down your app based on real-world usage One of the key benefits of Windows Azure is that you can dynamically scale your application in response to changing demand. In the past, though, you have had to either manually change the scale of your application, or use additional tooling (such as WASABi or MetricsHub) to automatically scale your application. Today, we’re announcing that AutoScale will be built into Windows Azure directly.  With today’s release it is now enabled for Cloud Services, Virtual Machines and Web Sites (Mobile Services support will come soon). 
Auto-scale enables you to configure Windows Azure to automatically scale your application dynamically on your behalf (without any manual intervention) so you can achieve the ideal performance and cost balance. Once configured it will regularly adjust the number of instances running in response to the load in your application. Currently, we support two different load metrics: CPU percentage Storage queue depth (Cloud Services and Virtual Machines only) We’ll enable automatic scaling on even more scale metrics in future updates. When to use Auto-Scale The following are good criteria for services/apps that will benefit from the use of auto-scale: The service/app can scale horizontally (e.g. it can be duplicated to multiple instances) The service/app load changes over time If your app meets these criteria, then you should look to leverage auto-scale. How to Enable Auto-Scale To enable auto-scale, simply navigate to the Scale tab in the Windows Azure Management Portal for the app/service you wish to enable.  Within the scale tab turn the Auto-Scale setting on to either CPU or Queue (for Cloud Services and VMs) to enable Auto-Scale.  Then change the instance count and target CPU settings to configure the Auto-Scale ranges you want to maintain. The image below demonstrates how to enable Auto-Scale on a Windows Azure Web-Site.  I’ve configured the web-site so that it will run using between 1 and 5 VM instances.  The exact # used will depend on the aggregate CPU of the VMs using the 40-70% range I’ve configured below.  If the aggregate CPU goes above 70%, then Windows Azure will automatically add new VMs to the pool (up to the maximum of 5 instances I’ve configured it to use).  If the aggregate CPU drops below 40% then Windows Azure will automatically start shutting down VMs to save me money: Once you’ve turned auto-scale on, you can return to the Scale tab at any point and select Off to manually set the number of instances. Using the Auto-Scale Preview With today’s update you can now, in just a few minutes, have Windows Azure automatically adjust the number of instances you have running  in your apps to keep your service performant at an even better cost. Auto-scale is being released today as a preview feature, and will be free until General Availability. During preview, each subscription is limited to 10 separate auto-scale rules across all of the resources they have (Web sites, Cloud services or Virtual Machines). If you hit the 10 limit, you can disable auto-scale for any resource to enable it for another. Alerts and Notifications Starting today we are now providing the ability to configure threshold based alerts on monitoring metrics. This feature is available for compute services (cloud services, VM, websites and mobiles services). Alerts provide you the ability to get proactively notified of active or impending issues within your application.  You can define alert rules for: Virtual machine monitoring metrics that are collected from the host operating system (CPU percentage, network in/out, disk read bytes/sec and disk write bytes/sec) and on monitoring metrics from monitoring web endpoint urls (response time and uptime) that you have configured. Cloud service monitoring metrics that are collected from the host operating system (same as VM), monitoring metrics from the guest VM (from performance counters within the VM) and on monitoring metrics from monitoring web endpoint urls (response time and uptime) that you have configured. 
For Web Sites and Mobile Services, alerting rules can be configured on monitoring metrics from monitoring endpoint urls (response time and uptime) that you have configured. Creating Alert Rules You can add an alert rule for a monitoring metric by navigating to the Setting -> Alerts tab in the Windows Azure Management Portal. Click on the Add Rule button to create an alert rule. Give the alert rule a name and optionally add a description. Then pick the service which you want to define the alert rule on: The next step in the alert creation wizard will then filter the monitoring metrics based on the service you selected:   Once created the rule will show up in your alerts list within the settings tab: The rule above is defined as “not activated” since it hasn’t tripped over the CPU threshold we set.  If the CPU on the above machine goes over the limit, though, I’ll get an email notifying me from a Windows Azure Alerts email address ([email protected]). And when I log into the portal and revisit the alerts tab I’ll see it highlighted in red.  Clicking it will then enable me to see what is causing it to fail, as well as view the history of when it has happened in the past. Alert Notifications With today’s initial preview you can now easily create alerting rules based on monitoring metrics and get notified on active or impending issues within your application that require attention. During preview, each subscription is limited to 10 alert rules across all of the services that support alert rules. No More Credit Card Requirement for MSDN Subscribers Earlier this month (during TechEd 2013), Windows Azure announced that MSDN users will get Windows Azure Credits every month that they can use for any Windows Azure services they want. You can read details about this in my previous Dev/Test blog post. Today we are making further updates to enable an easier Windows Azure signup for MSDN users. MSDN users will now not be required to provide payment information (e.g. no credit card) during sign-up, so long as they use the service within the included monetary credit for the billing period. For usage beyond the monetary credit, they can enable overages by providing the payment information and remove the spending limit. This enables a super easy, one page sign-up experience for MSDN users.  Simply sign up for your Windows Azure trial using the same Microsoft ID that you use to manage your MSDN account, then complete the one page sign-up form below and you will be able to spend your free monthly MSDN credits (up to $150 each month) on any Windows Azure resource for dev/test:   This makes it trivially easy for every MSDN customer to start using Windows Azure today.  If you haven’t signed up yet, I definitely recommend checking it out. Summary Today’s release includes a ton of great features that enable you to build even better cloud solutions.  If you don’t already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
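One footnote on the Mobile Services tooling mentioned above: the client initialization code that the Add Connected Service wizard generates is only a few lines. The following is a rough sketch rather than the exact generated output; the service URL, application key and TodoItem type are placeholders, and it assumes the Microsoft.WindowsAzure.MobileServices client library that ships with the Mobile Services SDK:

using Microsoft.WindowsAzure.MobileServices;

public static class MobileService
{
    // Placeholder URL and application key - substitute the values from your own mobile service
    public static readonly MobileServiceClient Client = new MobileServiceClient(
        "https://your-service.azure-mobile.net/",
        "YOUR-APPLICATION-KEY");
}

// Elsewhere in the app, inserting a record into a table backing the service looks like:
// await MobileService.Client.GetTable<TodoItem>().InsertAsync(new TodoItem { Text = "Hello" });

From there the table object takes care of the REST calls to the service for you.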

    Read the article

  • 12.04 upgrade broke grub? (not wubi related)

    - by kaare
    I just updated from 11.10 to 12.04, with no major problems (it took a while to get past a request to restart ssh, mysql and some other services, but I did no fiddling by myself, everything was done by the installer). However, after restarting, grub can't do anything. Picking the new linux installation (first entry), I just get error: no such partition error: no such partition error: no such partition and picking the recovery-version just gives 5 lines instead of 3. I have windows 7 installed on a different drive, and can run it by booting from that drive instead. Picking it from the grub menu gives the same error as above (can't remember how many lines, though). I'll be honest and say that I don't remember if win 7 could be booted from grub before the update, though. In short, nothing on the grub menu works. any solutions? The grub menu changed appearance - before it was on a purple background, small letters, now it's white-on-black, big letters, looking very basic. The original installation was from a usb-drive, and I hadn't heard about wubi until I started googling this problem, so I doubt there's any connection. I really hope there are some grub-savvy people out there :) EDIT: ok. so, I made a bootable usb, and am running from that right now. when I ran the bootinfoscript, it warned me that "gawk" could not be found, using "busybox awk" instead. This may lead to unreliable results. just so you know. The contents of RESULTS.txt are: Boot Info Script 0.61 [1 April 2012] ============================= Boot Info Summary: =============================== => Windows is installed in the MBR of /dev/sda. => Grub2 (v1.99) is installed in the MBR of /dev/sdb and looks at sector 1 of the same hard drive for core.img. core.img is at this location and looks for (,msdos3)/boot/grub on this drive. => Syslinux MBR (4.04 and higher) is installed in the MBR of /dev/sdc. sda1: __________________________________________ File system: vfat Boot sector type: Dell Utility: FAT16 Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files: /DELLBIO.BIN /DELLRMK.BIN /COMMAND.COM sda2: __________________________________________ File system: ntfs Boot sector type: Windows Vista/7: NTFS Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files: sda3: __________________________________________ File system: ntfs Boot sector type: Windows Vista/7: NTFS Boot sector info: No errors found in the Boot Parameter Block. Operating System: Windows 7 Boot files: /bootmgr /Boot/BCD /Windows/System32/winload.exe sda4: __________________________________________ File system: Extended Partition Boot sector type: - Boot sector info: sda5: __________________________________________ File system: vfat Boot sector type: Windows 7: FAT32 Boot sector info: No errors found in the Boot Parameter Block. Operating System: Windows XP Boot files: /boot.ini /bootmgr /ntldr /NTDETECT.COM sdb1: __________________________________________ File system: ntfs Boot sector type: Windows XP: NTFS Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files: sdb2: __________________________________________ File system: swap Boot sector type: - Boot sector info: sdb3: __________________________________________ File system: ext4 Boot sector type: Grub2 (v1.99) Boot sector info: Grub2 (v1.99) is installed in the boot sector of sdb3 and looks at sector 375893584 of the same hard drive for core.img. 
core.img is at this location and looks for (,msdos3)/boot/grub on this drive. Operating System: Ubuntu 12.04 LTS Boot files: /boot/grub/grub.cfg /etc/fstab /boot/grub/core.img sdb4: __________________________________________ File system: ext4 Boot sector type: - Boot sector info: Operating System: Boot files: sdc1: __________________________________________ File system: ntfs Boot sector type: SYSLINUX 4.06 4.06-pre1 Boot sector info: Syslinux looks at sector 4649656 of /dev/sdc1 for its second stage. SYSLINUX is installed in the directory. The integrity check of the ADV area failed. No errors found in the Boot Parameter Block. Operating System: Boot files: /boot/grub/grub.cfg /syslinux/syslinux.cfg /ldlinux.sys ============================ Drive/Partition Info: ============================= Drive: sda _______________________________________ Disk /dev/sda: 250.1 GB, 250059350016 bytes 255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Partition Boot Start Sector End Sector # of Sectors Id System /dev/sda1 63 240,974 240,912 de Dell Utility /dev/sda2 241,664 21,213,183 20,971,520 7 NTFS / exFAT / HPFS /dev/sda3 * 21,213,184 483,151,863 461,938,680 7 NTFS / exFAT / HPFS /dev/sda4 483,151,872 488,394,751 5,242,880 f W95 Extended (LBA) /dev/sda5 483,153,920 488,394,751 5,240,832 dd Dell Media Direct Drive: sdb _______________________________________ Disk /dev/sdb: 250.1 GB, 250059350016 bytes 255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Partition Boot Start Sector End Sector # of Sectors Id System /dev/sdb1 63 345,886,749 345,886,687 7 NTFS / exFAT / HPFS /dev/sdb2 345,888,768 361,510,911 15,622,144 82 Linux swap / Solaris /dev/sdb3 * 361,510,912 390,807,786 29,296,875 83 Linux /dev/sdb4 390,809,600 488,394,751 97,585,152 83 Linux Drive: sdc _______________________________________ Disk /dev/sdc: 8015 MB, 8015282176 bytes 255 heads, 63 sectors/track, 974 cylinders, total 15654848 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Partition Boot Start Sector End Sector # of Sectors Id System /dev/sdc1 * 2,048 15,652,863 15,650,816 7 NTFS / exFAT / HPFS "blkid" output: ____________________________________ Device UUID TYPE LABEL /dev/loop0 squashfs /dev/sda1 07D8-0411 vfat DellUtility /dev/sda2 E2765BBC765B9061 ntfs RECOVERY /dev/sda3 98DC5E54DC5E2D2E ntfs OS /dev/sda5 7061-9DF5 vfat MEDIADIRECT /dev/sdb1 01CBBB4C3374C3B0 ntfs Data1 /dev/sdb2 1ca45f3f-f888-43d1-8137-02699597189a swap /dev/sdb3 6bc1b599-ad4b-403c-a155-a5bc81211f5e ext4 /dev/sdb4 58e2b257-8608-4b11-b20b-dc162bb80b62 ext4 /dev/sdc1 0C02B64402B63316 ntfs PENDRIVE ================================ Mount points: ================================= Device Mount_Point Type Options /dev/loop0 /rofs squashfs (ro,noatime) /dev/sdb4 /media/58e2b257-8608-4b11-b20b-dc162bb80b62 ext4 (rw,nosuid,nodev,uhelper=udisks) /dev/sdc1 /cdrom fuseblk (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other,blksize=4096) ================================ sda5/boot.ini: ================================ [boot loader] timeout=0 default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS [operating systems] multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Embedded" /fastdetect /KERNEL=NTOSBOOT.EXE /maxmem=1024 =========================== sdb3/boot/grub/grub.cfg: 
=========================== -------------------------------------------------------------------------------- # # DO NOT EDIT THIS FILE # # It is automatically generated by grub-mkconfig using templates # from /etc/grub.d and settings from /etc/default/grub # ### BEGIN /etc/grub.d/00_header ### if [ -s $prefix/grubenv ]; then set have_grubenv=true load_env fi set default="0" if [ "${prev_saved_entry}" ]; then set saved_entry="${prev_saved_entry}" save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=true fi function savedefault { if [ -z "${boot_once}" ]; then saved_entry="${chosen}" save_env saved_entry fi } function recordfail { set recordfail=1 if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi } function load_video { insmod vbe insmod vga insmod video_bochs insmod video_cirrus } insmod part_msdos insmod ext2 set root='(hd1,msdos3)' search --no-floppy --fs-uuid --set=root 6bc1b599-ad4b-403c-a155-a5bc81211f5e if loadfont /usr/share/grub/unicode.pf2 ; then set gfxmode=auto load_video insmod gfxterm insmod part_msdos insmod ext2 set root='(hd1,msdos3)' search --no-floppy --fs-uuid --set=root 6bc1b599-ad4b-403c-a155-a5bc81211f5e set locale_dir=($root)/boot/grub/locale set lang=en_US insmod gettext fi terminal_output gfxterm if [ "${recordfail}" = 1 ]; then set timeout=-1 else set timeout=10 fi ### END /etc/grub.d/00_header ### ### BEGIN /etc/grub.d/05_debian_theme ### set menu_color_normal=white/black set menu_color_highlight=black/light-gray if background_color 44,0,30; then clear fi ### END /etc/grub.d/05_debian_theme ### ### BEGIN /etc/grub.d/10_linux ### function gfxmode { set gfxpayload="$1" if [ "$1" = "keep" ]; then set vt_handoff=vt.handoff=7 else set vt_handoff= fi } if [ ${recordfail} != 1 ]; then if [ -e ${prefix}/gfxblacklist.txt ]; then if hwmatch ${prefix}/gfxblacklist.txt 3; then if [ ${match} = 0 ]; then set linux_gfx_mode=keep else set linux_gfx_mode=text fi else set linux_gfx_mode=text fi else set linux_gfx_mode=keep fi else set linux_gfx_mode=text fi export linux_gfx_mode if [ "$linux_gfx_mode" != "text" ]; then load_video; fi menuentry 'Ubuntu, with Linux 3.2.0-24-generic' --class ubuntu --class gnu-linux --class gnu --class os { recordfail gfxmode $linux_gfx_mode insmod gzio insmod part_msdos insmod ext2 set root='(hd1,msdos3)' search --no-floppy --fs-uuid --set=root 6bc1b599-ad4b-403c-a155-a5bc81211f5e linux /boot/vmlinuz-3.2.0-24-generic root=UUID=6bc1b599-ad4b-403c-a155-a5bc81211f5e ro quiet splash $vt_handoff initrd /boot/initrd.img-3.2.0-24-generic } menuentry 'Ubuntu, with Linux 3.2.0-24-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail insmod gzio insmod part_msdos insmod ext2 set root='(hd1,msdos3)' search --no-floppy --fs-uuid --set=root 6bc1b599-ad4b-403c-a155-a5bc81211f5e echo 'Loading Linux 3.2.0-24-generic ...' linux /boot/vmlinuz-3.2.0-24-generic root=UUID=6bc1b599-ad4b-403c-a155-a5bc81211f5e ro recovery nomodeset echo 'Loading initial ramdisk ...' 
initrd /boot/initrd.img-3.2.0-24-generic } submenu "Previous Linux versions" { menuentry 'Ubuntu, with Linux 3.0.0-19-generic' --class ubuntu --class gnu-linux --class gnu --class os { recordfail gfxmode $linux_gfx_mode insmod gzio insmod part_msdos insmod ext2 set root='(hd1,msdos3)' search --no-floppy --fs-uuid --set=root 6bc1b599-ad4b-403c-a155-a5bc81211f5e linux /boot/vmlinuz-3.0.0-19-generic root=UUID=6bc1b599-ad4b-403c-a155-a5bc81211f5e ro quiet splash $vt_handoff initrd /boot/initrd.img-3.0.0-19-generic } menuentry 'Ubuntu, with Linux 3.0.0-19-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail insmod gzio insmod part_msdos insmod ext2 set root='(hd1,msdos3)' search --no-floppy --fs-uuid --set=root 6bc1b599-ad4b-403c-a155-a5bc81211f5e echo 'Loading Linux 3.0.0-19-generic ...' linux /boot/vmlinuz-3.0.0-19-generic root=UUID=6bc1b599-ad4b-403c-a155-a5bc81211f5e ro recovery nomodeset echo 'Loading initial ramdisk ...' initrd /boot/initrd.img-3.0.0-19-generic } } ### END /etc/grub.d/10_linux ### ### BEGIN /etc/grub.d/20_linux_xen ### ### END /etc/grub.d/20_linux_xen ### ### BEGIN /etc/grub.d/20_memtest86+ ### menuentry "Memory test (memtest86+)" { insmod part_msdos insmod ext2 set root='(hd1,msdos3)' search --no-floppy --fs-uuid --set=root 6bc1b599-ad4b-403c-a155-a5bc81211f5e linux16 /boot/memtest86+.bin } menuentry "Memory test (memtest86+, serial console 115200)" { insmod part_msdos insmod ext2 set root='(hd1,msdos3)' search --no-floppy --fs-uuid --set=root 6bc1b599-ad4b-403c-a155-a5bc81211f5e linux16 /boot/memtest86+.bin console=ttyS0,115200n8 } ### END /etc/grub.d/20_memtest86+ ### ### BEGIN /etc/grub.d/30_os-prober ### menuentry "Windows 7 (loader) (on /dev/sda3)" --class windows --class os { insmod part_msdos insmod ntfs set root='(hd0,msdos3)' search --no-floppy --fs-uuid --set=root 98DC5E54DC5E2D2E chainloader +1 } menuentry "Microsoft Windows XP Embedded (on /dev/sda5)" --class windows --class os { insmod part_msdos insmod fat set root='(hd0,msdos5)' search --no-floppy --fs-uuid --set=root 7061-9DF5 drivemap -s (hd0) ${root} chainloader +1 } ### END /etc/grub.d/30_os-prober ### ### BEGIN /etc/grub.d/40_custom ### # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. ### END /etc/grub.d/40_custom ### ### BEGIN /etc/grub.d/41_custom ### if [ -f $prefix/custom.cfg ]; then source $prefix/custom.cfg; fi ### END /etc/grub.d/41_custom ### =============================== sdb3/etc/fstab: ================================ # /etc/fstab: static file system information. # # Use 'blkid' to print the universally unique identifier for a # device; this may be used with UUID= as a more robust way to name devices # that works even if disks are added and removed. See fstab(5). 
# # <file system> <mount point> <type> <options> <dump> <pass> proc /proc proc nodev,noexec,nosuid 0 0 # / was on /dev/sdb3 during installation UUID=6bc1b599-ad4b-403c-a155-a5bc81211f5e / ext4 errors=remount-ro 0 1 # /home was on /dev/sdb4 during installation UUID=58e2b257-8608-4b11-b20b-dc162bb80b62 /home ext4 defaults,user_xattr 0 2 # swap was on /dev/sdb2 during installation UUID=1ca45f3f-f888-43d1-8137-02699597189a none swap sw 0 0 =================== sdb3: Location of files loaded by Grub: ==================== GiB - GB File Fragment(s) = boot/grub/core.img 1 = boot/grub/grub.cfg 1 = boot/initrd.img-3.0.0-19-generic 2 = boot/initrd.img-3.2.0-24-generic 2 = boot/vmlinuz-3.0.0-19-generic 2 = boot/vmlinuz-3.2.0-24-generic 1 = vmlinuz 1 = vmlinuz.old 2 =========================== sdc1/boot/grub/grub.cfg: =========================== if loadfont /boot/grub/font.pf2 ; then set gfxmode=auto insmod efi_gop insmod efi_uga insmod gfxterm terminal_output gfxterm fi set menu_color_normal=white/black set menu_color_highlight=black/light-gray menuentry "Try Ubuntu without installing" { set gfxpayload=keep linux /casper/vmlinuz file=/cdrom/preseed/ubuntu.seed boot=casper quiet splash -- initrd /casper/initrd.lz } menuentry "Install Ubuntu" { set gfxpayload=keep linux /casper/vmlinuz file=/cdrom/preseed/ubuntu.seed boot=casper only-ubiquity quiet splash -- initrd /casper/initrd.lz } menuentry "Check disc for defects" { set gfxpayload=keep linux /casper/vmlinuz boot=casper integrity-check quiet splash -- initrd /casper/initrd.lz } ========================= sdc1/syslinux/syslinux.cfg: ========================== # D-I config version 2.0 include menu.cfg default vesamenu.c32 prompt 0 timeout 50 # If you would like to use the new menu and be presented with the option to install or run from USB at startup, remove # from the following line. This line was commented out (by request of many) to allow the old menu to be presented and to enable booting straight into the Live Environment! # ui gfxboot bootlogo =================== sdc1: Location of files loaded by Grub: ==================== GiB - GB File Fragment(s) ?? = ?? boot/grub/grub.cfg 0 ================= sdc1: Location of files loaded by Syslinux: ================== GiB - GB File Fragment(s) ?? = ?? ldlinux.sys 1 ?? = ?? syslinux/chain.c32 1 ?? = ?? syslinux/gfxboot.c32 1 ?? = ?? syslinux/syslinux.cfg 0 ?? = ?? syslinux/vesamenu.c32 1 ============== sdc1: Version of COM32(R) files used by Syslinux: =============== syslinux/chain.c32 : COM32R module (v4.xx) syslinux/gfxboot.c32 : COM32R module (v4.xx) syslinux/vesamenu.c32 : COM32R module (v4.xx) =============================== StdErr Messages: =============================== xz: (stdin): Compressed data is corrupt xz: (stdin): Compressed data is corrupt awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in ./bootinfoscript: line 1646: [: 2.73495e+09: integer expression expected

    Read the article

  • Introducing Data Annotations Extensions

    - by srkirkland
Validation of user input is integral to building a modern web application, and ASP.NET MVC offers us a way to enforce business rules on both the client and server using Model Validation.  The recent release of ASP.NET MVC 3 has improved these offerings on the client side by introducing an unobtrusive validation library built on top of jquery.validation.  Out of the box MVC comes with support for Data Annotations (that is, System.ComponentModel.DataAnnotations) and can be extended to support other frameworks.  Data Annotations Validation is becoming more popular and is being baked in to many other Microsoft offerings, including Entity Framework, though with MVC it only contains four validators: Range, Required, StringLength and Regular Expression.  The Data Annotations Extensions project attempts to augment these validators with additional attributes while maintaining the clean integration Data Annotations provides. A Quick Word About Data Annotations Extensions The Data Annotations Extensions project can be found at http://dataannotationsextensions.org/, and currently provides 11 additional validation attributes (ex: Email, EqualTo, Min/Max) on top of Data Annotations’ original 4.  You can find a current list of the validation attributes on the aforementioned website. The core library provides server-side validation attributes that can be used in any .NET 4.0 project (no MVC dependency). There is also an easily pluggable client-side validation library which can be used in ASP.NET MVC 3 projects using unobtrusive jquery validation (only MVC3 included javascript files are required). On to the Preview Let’s say you had the following “Customer” domain model (or view model, depending on your project structure) in an MVC 3 project: public class Customer { public string Email { get; set; } public int Age { get; set; } public string ProfilePictureLocation { get; set; } } When it comes time to create/edit this Customer, you will probably have a CustomerController and a simple form that just uses one of the Html.EditorFor() methods that the ASP.NET MVC tooling generates for you (or you can write yourself).  It should look something like this: With no validation, the customer can enter nonsense for an email address, and then can even report their age as a negative number!  With the built-in Data Annotations validation, I could do a bit better by adding a Range to the age, adding a RegularExpression for email (yuck!), and adding some required attributes.  However, I’d still be able to report my age as 10.75 years old, and my profile picture could still be any string.
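For comparison, here is roughly what that built-in-attributes-only version of Customer could look like (a sketch; the regular expression is only there to illustrate the point, not a recommended email pattern):

public class Customer
{
    [Required]
    [RegularExpression(@"^[^@\s]+@[^@\s]+\.[^@\s]+$", ErrorMessage = "Not a valid email address")]
    public string Email { get; set; }

    [Required]
    [Range(1, 110)]
    public int Age { get; set; }

    // No built-in attribute restricts this to image file extensions
    public string ProfilePictureLocation { get; set; }
}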
Let’s use Data Annotations along with this project, Data Annotations Extensions, and see what we can get: public class Customer { [Email] [Required] public string Email { get; set; }   [Integer] [Min(1, ErrorMessage="Unless you are benjamin button you are lying.")] [Required] public int Age { get; set; }   [FileExtensions("png|jpg|jpeg|gif")] public string ProfilePictureLocation { get; set; } } Now let’s try to put in some invalid values and see what happens: That is very nice validation, all done on the client side (will also be validated on the server).  Also, the Customer class validation attributes are very easy to read and understand. Another bonus: Since Data Annotations Extensions can integrate with MVC 3’s unobtrusive validation, no additional scripts are required! Now that we’ve seen our target, let’s take a look at how to get there within a new MVC 3 project. Adding Data Annotations Extensions To Your Project First we will File->New Project and create an ASP.NET MVC 3 project.  I am going to use Razor for these examples, but any view engine can be used in practice.  Now go into the NuGet Extension Manager (right click on references and select add Library Package Reference) and search for “DataAnnotationsExtensions.”  You should see the following two packages: The first package is for server-side validation scenarios, but since we are using MVC 3 and would like comprehensive server and client validation support, click on the DataAnnotationsExtensions.MVC3 project and then click Install.  This will install the Data Annotations Extensions server and client validation DLLs along with David Ebbo’s web activator (which enables the validation attributes to be registered with MVC 3). Now that Data Annotations Extensions is installed you have all you need to start doing advanced model validation.  If you are already using Data Annotations in your project, just making use of the additional validation attributes will provide client and server validation automatically.  However, assuming you are starting with a blank project I’ll walk you through setting up a controller and model to test with. Creating Your Model In the Models folder, create a new User.cs file with a User class that you can use as a model.  To start with, I’ll use the following class: public class User { public string Email { get; set; } public string Password { get; set; } public string PasswordConfirm { get; set; } public string HomePage { get; set; } public int Age { get; set; } } Next, create a simple controller with at least a Create method, and then a matching Create view (note, you can do all of this via the MVC built-in tooling).
Your files will look something like this: UserController.cs: public class UserController : Controller { public ActionResult Create() { return View(new User()); }   [HttpPost] public ActionResult Create(User user) { if (!ModelState.IsValid) { return View(user); }   return Content("User valid!"); } } Create.cshtml: @model NuGetValidationTester.Models.User   @{ ViewBag.Title = "Create"; }   <h2>Create</h2>   <script src="@Url.Content("~/Scripts/jquery.validate.min.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/jquery.validate.unobtrusive.min.js")" type="text/javascript"></script>   @using (Html.BeginForm()) { @Html.ValidationSummary(true) <fieldset> <legend>User</legend> @Html.EditorForModel() <p> <input type="submit" value="Create" /> </p> </fieldset> } In the Create.cshtml view, note that we are referencing jquery validation and jquery unobtrusive (jquery is referenced in the layout page).  These MVC 3 included scripts are the only ones you need to enjoy both the basic Data Annotations validation as well as the validation additions available in Data Annotations Extensions.  These references are added by default when you use the MVC 3 “Add View” dialog on a modification template type. Now when we go to /User/Create we should see a form for editing a User. Since we haven’t yet added any validation attributes, this form is valid as shown (including no password, email and an age of 0).  With the built-in Data Annotations attributes we can make some of the fields required, and we could use a range validator of maybe 1 to 110 on Age (of course we don’t want to leave out supercentenarians) but let’s go further and validate our input comprehensively using Data Annotations Extensions.  The new and improved User.cs model class:
public class User { [Required] [Email] public string Email { get; set; }   [Required] public string Password { get; set; }   [Required] [EqualTo("Password")] public string PasswordConfirm { get; set; }   [Url] public string HomePage { get; set; }   [Integer] [Min(1)] public int Age { get; set; } } Now let’s re-run our form and try to use some invalid values: All of the validation errors you see above occurred on the client, without ever even hitting submit.  The validation is also checked on the server, which is a good practice since client validation is easily bypassed. That’s all you need to do to start a new project and include Data Annotations Extensions, and of course you can integrate it into an existing project just as easily. Nitpickers Corner ASP.NET MVC 3 futures defines four new data annotations attributes which this project has as well: CreditCard, Email, Url and EqualTo.  Unfortunately referencing MVC 3 futures necessitates taking a dependency on MVC 3 in your model layer, which may be unadvisable in a multi-tiered project.  Data Annotations Extensions keeps the server and client side libraries separate, so using the project’s validation attributes doesn’t require you to take any additional dependencies in your model layer, while still allowing for the rich client validation experience if you are using MVC 3. Custom Error Message and Globalization: Since the Data Annotations Extensions are built on top of Data Annotations, you have the ability to define your own static error messages and even to use resource files for very customizable error messages. Available Validators: Please see the project site at http://dataannotationsextensions.org/ for an up-to-date list of the new validators included in this project.  As of this post, the following validators are available: CreditCard Date Digits Email EqualTo FileExtensions Integer Max Min Numeric Url Conclusion Hopefully I’ve illustrated how easy it is to add server and client validation to your MVC 3 projects, and how easily you can extend the available validation options to meet real world needs. The Data Annotations Extensions project is fully open source under the BSD license.  Any feedback would be greatly appreciated.  More information than you require, along with links to the source code, is available at http://dataannotationsextensions.org/. Enjoy!
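A quick illustrative sketch of the Custom Error Message and Globalization point above: because the extension attributes build on the standard ValidationAttribute base, the usual ErrorMessage and resource-based overloads from Data Annotations should apply. The ValidationMessages resource class below is a placeholder, not part of the project:

public class User
{
    [Required]
    [Email(ErrorMessage = "Please enter a valid email address")]
    public string Email { get; set; }

    // Pulls a localized message from a .resx file instead of a hard-coded string
    [Integer]
    [Min(1, ErrorMessageResourceType = typeof(ValidationMessages),
            ErrorMessageResourceName = "AgeTooSmall")]
    public int Age { get; set; }
}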

    Read the article

  • Silverlight Tree View with Multiple Levels

    - by psheriff
There are many examples of the Silverlight Tree View that you will find on the web, however, most of them only show you how to go to two levels. What if you have more than two levels? This is where understanding exactly how the Hierarchical Data Templates work is vital. In this blog post, I am going to break down how these templates work so you can really understand what is going on underneath the hood. To start, let’s look at the typical two-level Silverlight Tree View that has been hard-coded with the values shown below: <sdk:TreeView>  <sdk:TreeViewItem Header="Managers">    <TextBlock Text="Michael" />    <TextBlock Text="Paul" />  </sdk:TreeViewItem>  <sdk:TreeViewItem Header="Supervisors">    <TextBlock Text="John" />    <TextBlock Text="Tim" />    <TextBlock Text="David" />  </sdk:TreeViewItem></sdk:TreeView> Figure 1 shows you how this tree view looks when you run the Silverlight application. Figure 1: A hard-coded, two level Tree View. Next, let’s create three classes to mimic the hard-coded Tree View shown above. First, you need an Employee class and an EmployeeType class. The Employee class simply has one property called Name. The constructor is created to accept a “name” argument that you can use to set the Name property when you create an Employee object. public class Employee{  public Employee(string name)  {    Name = name;  }   public string Name { get; set; }} Next you create an EmployeeType class. This class has one property called EmpType and contains a generic List<> collection of Employee objects. The property that holds the collection is called Employees. public class EmployeeType{  public EmployeeType(string empType)  {    EmpType = empType;    Employees = new List<Employee>();  }   public string EmpType { get; set; }  public List<Employee> Employees { get; set; }} Finally we have a collection class called EmployeeTypes created using the generic List<> class. It is in the constructor for this class where you will build the collection of EmployeeTypes and fill it with Employee objects: public class EmployeeTypes : List<EmployeeType>{  public EmployeeTypes()  {    EmployeeType type;            type = new EmployeeType("Manager");    type.Employees.Add(new Employee("Michael"));    type.Employees.Add(new Employee("Paul"));    this.Add(type);     type = new EmployeeType("Project Managers");    type.Employees.Add(new Employee("Tim"));    type.Employees.Add(new Employee("John"));    type.Employees.Add(new Employee("David"));    this.Add(type);  }} You now have a data hierarchy in memory (Figure 2) which is what the Tree View control expects to receive as its data source. Figure 2: A hierarchical data structure of Employee Types containing a collection of Employee objects. To connect up this hierarchy of data to your Tree View you create an instance of the EmployeeTypes class in XAML as shown in line 13 of Figure 3. The key assigned to this object is “empTypes”. This key is used as the source of data to the entire Tree View by setting the ItemsSource property as shown in Figure 3, Callout #1. Figure 3: You need to start from the bottom up when laying out your templates for a Tree View. The ItemsSource property of the Tree View control is used as the data source in the Hierarchical Data Template with the key of employeeTypeTemplate. In this case there is only one Hierarchical Data Template, so any data you wish to display within that template comes from the collection of Employee Types. The TextBlock control in line 20 uses the EmpType property of the EmployeeType class. 
You specify the name of the Hierarchical Data Template to use in the ItemTemplate property of the Tree View (Callout #2). For the second (and last) level of the Tree View control you use a normal <DataTemplate> with the name of employeeTemplate (line 14). The Hierarchical Data Template in lines 17-21 sets its ItemTemplate property to the key name of employeeTemplate (Line 19 connects to Line 14). The source of the data for the <DataTemplate> needs to be a property of the EmployeeTypes collection used in the Hierarchical Data Template. In this case that is the Employees property. In the Employees property there is a “Name” property of the Employee class that is used to display the employee name in the second level of the Tree View (Line 15). What is important here is that your lowest level in your Tree View is expressed in a <DataTemplate> and should be listed first in your Resources section. The next level up in your Tree View should be a <HierarchicalDataTemplate> which has its ItemTemplate property set to the key name of the <DataTemplate> and the ItemsSource property set to the data you wish to display in the <DataTemplate>. The Tree View control should have its ItemsSource property set to the data you wish to display in the <HierarchicalDataTemplate> and its ItemTemplate property set to the key name of the <HierarchicalDataTemplate> object. It is in this way that you get the Tree View to display all levels of your hierarchical data structure. Three Levels in a Tree View Now let’s expand upon this concept and use three levels in our Tree View (Figure 4). This Tree View shows that you now have EmployeeTypes at the top of the tree, followed by a small set of employees that themselves manage employees. This means that the EmployeeType class has a collection of Employee objects. Each Employee class has a collection of Employee objects as well. Figure 4: When using 3 levels in your TreeView you will have 2 Hierarchical Data Templates and 1 Data Template. The EmployeeType class has not changed at all from our previous example. However, the Employee class now has one additional property as shown below: public class Employee{  public Employee(string name)  {    Name = name;    ManagedEmployees = new List<Employee>();  }   public string Name { get; set; }  public List<Employee> ManagedEmployees { get; set; }} The next thing that changes in our code is the EmployeeTypes class. The constructor now needs additional code to create a list of managed employees. Below is the new code. public class EmployeeTypes : List<EmployeeType>{  public EmployeeTypes()  {    EmployeeType type;    Employee emp;    Employee managed;     type = new EmployeeType("Manager");    emp = new Employee("Michael");    managed = new Employee("John");    emp.ManagedEmployees.Add(managed);    managed = new Employee("Tim");    emp.ManagedEmployees.Add(managed);    type.Employees.Add(emp);     emp = new Employee("Paul");    managed = new Employee("Michael");    emp.ManagedEmployees.Add(managed);    managed = new Employee("Sara");    emp.ManagedEmployees.Add(managed);    type.Employees.Add(emp);    this.Add(type);     type = new EmployeeType("Project Managers");    type.Employees.Add(new Employee("Tim"));    type.Employees.Add(new Employee("John"));    type.Employees.Add(new Employee("David"));    this.Add(type);  }} Now that you have all of the data built in your classes, you are now ready to hook up this three-level structure to your Tree View. Figure 5 shows the complete XAML needed to hook up your three-level Tree View. 
You can see in the XAML that there are now two Hierarchical Data Templates and one Data Template. Again you list the Data Template first since that is the lowest level in your Tree View. The next Hierarchical Data Template listed is the next level up from the lowest level, and finally you have a Hierarchical Data Template for the first level in your tree. You need to work your way from the bottom up when creating your Tree View hierarchy. XAML is processed from the top down, so if you attempt to reference a XAML key name that is below where you are referencing it from, you will get a runtime error. Figure 5: For three levels in a Tree View you will need two Hierarchical Data Templates and one Data Template. Each Hierarchical Data Template uses the previous template as its ItemTemplate. The ItemsSource of each Hierarchical Data Template is used to feed the data to the previous template. This is probably the most confusing part about working with the Tree View control. You are expecting the content of the current Hierarchical Data Template to use the properties set in the ItemsSource property of that template. But you need to look to the template lower down in the XAML to see the source of the data as shown in Figure 6. Figure 6: The properties you use within the Content of a template come from the ItemsSource of the next template in the resources section. Summary Understanding how to put together your hierarchy in a Tree View is simple once you understand that you need to work from the bottom up. Start with the bottom node in your Tree View and determine what that will look like and where the data will come from. You then build the next Hierarchical Data Template to feed the data to the previous template you created. You keep doing this for each level in your Tree View until you get to the last level. The data for that last Hierarchical Data Template comes from the ItemsSource in the Tree View itself. NOTE: You can download the sample code for this article by visiting my website at http://www.pdsa.com/downloads. Select “Tips & Tricks”, then select “Silverlight TreeView with Multiple Levels” from the drop down list.
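One practical note to close with: because every node is generated from a data template, the Tree View's SelectedItem is the underlying data object (an EmployeeType or an Employee), not a TreeViewItem. The handler below is a hypothetical sketch rather than part of the sample download; it assumes the Tree View is named EmployeeTree and that its SelectedItemChanged event has been wired up in XAML.

private void EmployeeTree_SelectedItemChanged(object sender, RoutedPropertyChangedEventArgs<object> e)
{
    // e.NewValue is whatever data object backs the selected node.
    Employee employee = e.NewValue as Employee;
    if (employee != null)
    {
        MessageBox.Show("Employee selected: " + employee.Name);
        return;
    }

    EmployeeType employeeType = e.NewValue as EmployeeType;
    if (employeeType != null)
    {
        MessageBox.Show("Group selected: " + employeeType.EmpType);
    }
}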

    Read the article

  • Developing web apps using ASP.NET MVC 3, Razor and EF Code First - Part 2

    - by shiju
    In my previous post Developing web apps using ASP.NET MVC 3, Razor and EF Code First - Part 1, we discussed how to work with ASP.NET MVC 3 and EF Code First for developing web apps. We created a generic repository and unit of work with EF Code First for our ASP.NET MVC 3 application and did basic CRUD operations against a simple domain entity. In this post, I will demonstrate working with a domain entity that has a deeper object graph, a Service Layer and View Models, and will also complete the rest of the demo application. In the previous post we performed CRUD operations against the Category entity; this post will focus on the Expense entity, which has an association with the Category entity. You can download the source code from http://efmvc.codeplex.com . The following frameworks will be used for this step-by-step tutorial.    1. ASP.NET MVC 3 RTM    2. EF Code First CTP 5    3. Unity 2.0 Domain Model Category Entity public class Category   {       public int CategoryId { get; set; }       [Required(ErrorMessage = "Name Required")]       [StringLength(25, ErrorMessage = "Must be less than 25 characters")]       public string Name { get; set;}       public string Description { get; set; }       public virtual ICollection<Expense> Expenses { get; set; }   } Expense Entity public class Expense     {                public int ExpenseId { get; set; }                public string  Transaction { get; set; }         public DateTime Date { get; set; }         public double Amount { get; set; }         public int CategoryId { get; set; }         public virtual Category Category { get; set; }     } We have two domain entities - Category and Expense. A single category contains a list of expense transactions, and every expense transaction should have a Category. Repository class for Expense Transaction Let’s create a repository class for handling CRUD operations on the Expense entity public class ExpenseRepository : RepositoryBase<Expense>, IExpenseRepository     {     public ExpenseRepository(IDatabaseFactory databaseFactory)         : base(databaseFactory)         {         }                } public interface IExpenseRepository : IRepository<Expense> { } Service Layer If you are new to the Service Layer pattern, check out Martin Fowler's article Service Layer. According to Martin Fowler, a Service Layer defines an application's boundary and its set of available operations from the perspective of interfacing client layers. It encapsulates the application's business logic, controlling transactions and coordinating responses in the implementation of its operations. Controller classes should be lightweight and should not contain much business logic. We can use the service layer as the business logic layer to encapsulate the rules of the application. 
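If you skipped Part 1, note that the ExpenseRepository above leans on the generic abstractions created there (IRepository<T>, RepositoryBase<T>, IDatabaseFactory and IUnitOfWork), and the service class below depends on them as well. The sketch that follows is only an approximation of those contracts so this post reads on its own; the real definitions are in the Part 1 source at http://efmvc.codeplex.com.

using System;
using System.Collections.Generic;
using System.Linq.Expressions;

// Approximate shape only; see the Part 1 code for the actual definitions.
public interface IRepository<T> where T : class
{
    void Add(T entity);
    void Delete(T entity);
    T GetById(int id);
    IEnumerable<T> GetMany(Expression<Func<T, bool>> where);
}

public interface IUnitOfWork
{
    void Commit();
}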
    Let’s create a Service class that coordinates the transactions for Expense public interface IExpenseService {     IEnumerable<Expense> GetExpenses(DateTime startDate, DateTime endDate);     Expense GetExpense(int id);             void CreateExpense(Expense expense);     void DeleteExpense(int id);     void SaveExpense(); } public class ExpenseService : IExpenseService {     private readonly IExpenseRepository expenseRepository;            private readonly IUnitOfWork unitOfWork;     public ExpenseService(IExpenseRepository expenseRepository, IUnitOfWork unitOfWork)     {                  this.expenseRepository = expenseRepository;         this.unitOfWork = unitOfWork;     }     public IEnumerable<Expense> GetExpenses(DateTime startDate, DateTime endDate)     {         var expenses = expenseRepository.GetMany(exp => exp.Date >= startDate && exp.Date <= endDate);         return expenses;     }     public void CreateExpense(Expense expense)     {         expenseRepository.Add(expense);         unitOfWork.Commit();     }     public Expense GetExpense(int id)     {         var expense = expenseRepository.GetById(id);         return expense;     }     public void DeleteExpense(int id)     {         var expense = expenseRepository.GetById(id);         expenseRepository.Delete(expense);         unitOfWork.Commit();     }     public void SaveExpense()     {         unitOfWork.Commit();     } }   View Model for Expense Transactions In real-world ASP.NET MVC applications, we need to design model objects specifically for our views. Our domain objects are designed for the needs of the domain model and represent the domain of our application. View Model objects, on the other hand, are designed for the needs of our views. We have an Expense domain entity that has an association with Category. When creating a new Expense, we have to specify which Category the new Expense transaction belongs to. The user interface for an Expense transaction will have form fields representing the Expense entity and a CategoryId representing the Category. So let's create a view model that represents the needs of an Expense transaction. public class ExpenseViewModel {     public int ExpenseId { get; set; }       [Required(ErrorMessage = "Category Required")]     public int CategoryId { get; set; }       [Required(ErrorMessage = "Transaction Required")]     public string Transaction { get; set; }       [Required(ErrorMessage = "Date Required")]     public DateTime Date { get; set; }       [Required(ErrorMessage = "Amount Required")]     public double Amount { get; set; }       public IEnumerable<SelectListItem> Category { get; set; } } The ExpenseViewModel is designed for the View template and contains all the validation rules. It has properties for mapping values to the Expense entity, plus a Category property for binding the list of Category values to a drop-down. 
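Because the data annotations live on the view model, they can also be exercised outside of MVC's model binder, which is handy in a quick unit test. The helper below is an illustrative sketch (not part of the article's source) that uses the standard Validator class from System.ComponentModel.DataAnnotations to report which [Required] rules a given ExpenseViewModel instance violates; for example, passing it a new ExpenseViewModel() flags the missing Transaction value before the model ever reaches a controller.

using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

public static class ExpenseViewModelChecker
{
    public static IList<ValidationResult> Validate(ExpenseViewModel model)
    {
        var results = new List<ValidationResult>();
        var context = new ValidationContext(model, serviceProvider: null, items: null);

        // Evaluates the [Required] attributes declared on ExpenseViewModel.
        Validator.TryValidateObject(model, context, results, validateAllProperties: true);
        return results;
    }
}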
Create Expense transaction Let’s create action methods in the ExpenseController for creating expense transactions public ActionResult Create() {     var expenseModel = new ExpenseViewModel();     var categories = categoryService.GetCategories();     expenseModel.Category = categories.ToSelectListItems(-1);     expenseModel.Date = DateTime.Today;     return View(expenseModel); } [HttpPost] public ActionResult Create(ExpenseViewModel expenseViewModel) {                      if (!ModelState.IsValid)         {             var categories = categoryService.GetCategories();             expenseViewModel.Category = categories.ToSelectListItems(expenseViewModel.CategoryId);             return View("Save", expenseViewModel);         }         Expense expense=new Expense();         ModelCopier.CopyModel(expenseViewModel,expense);         expenseService.CreateExpense(expense);         return RedirectToAction("Index");              } In the Create action method for HttpGet request, we have created an instance of our View Model ExpenseViewModel with Category information for the drop-down list and passing the Model object to View template. The extension method ToSelectListItems is shown below   public static IEnumerable<SelectListItem> ToSelectListItems(         this IEnumerable<Category> categories, int  selectedId) {     return           categories.OrderBy(category => category.Name)                 .Select(category =>                     new SelectListItem                     {                         Selected = (category.CategoryId == selectedId),                         Text = category.Name,                         Value = category.CategoryId.ToString()                     }); } In the Create action method for HttpPost, our view model object ExpenseViewModel will map with posted form input values. We need to create an instance of Expense for the persistence purpose. So we need to copy values from ExpenseViewModel object to Expense object. ASP.NET MVC futures assembly provides a static class ModelCopier that can use for copying values between Model objects. ModelCopier class has two static methods - CopyCollection and CopyModel.CopyCollection method will copy values between two collection objects and CopyModel will copy values between two model objects. We have used CopyModel method of ModelCopier class for copying values from expenseViewModel object to expense object. Finally we did a call to CreateExpense method of ExpenseService class for persisting new expense transaction. List Expense Transactions We want to list expense transactions based on a date range. So let’s create action method for filtering expense transactions with a specified date range. public ActionResult Index(DateTime? startDate, DateTime? 
endDate) {     //If date is not passed, take current month's first and last dte     DateTime dtNow;     dtNow = DateTime.Today;     if (!startDate.HasValue)     {         startDate = new DateTime(dtNow.Year, dtNow.Month, 1);         endDate = startDate.Value.AddMonths(1).AddDays(-1);     }     //take last date of start date's month, if end date is not passed     if (startDate.HasValue && !endDate.HasValue)     {         endDate = (new DateTime(startDate.Value.Year, startDate.Value.Month, 1)).AddMonths(1).AddDays(-1);     }     var expenses = expenseService.GetExpenses(startDate.Value ,endDate.Value);     //if request is Ajax will return partial view     if (Request.IsAjaxRequest())     {         return PartialView("ExpenseList", expenses);     }     //set start date and end date to ViewBag dictionary     ViewBag.StartDate = startDate.Value.ToShortDateString();     ViewBag.EndDate = endDate.Value.ToShortDateString();     //if request is not ajax     return View(expenses); } We are using the above Index Action method for both Ajax requests and normal requests. If there is a request for Ajax, we will call the PartialView ExpenseList. Razor Views for listing Expense information Let’s create view templates in Razor for showing list of Expense information ExpenseList.cshtml @model IEnumerable<MyFinance.Domain.Expense>   <table>         <tr>             <th>Actions</th>             <th>Category</th>             <th>                 Transaction             </th>             <th>                 Date             </th>             <th>                 Amount             </th>         </tr>       @foreach (var item in Model) {              <tr>             <td>                 @Html.ActionLink("Edit", "Edit",new { id = item.ExpenseId })                 @Ajax.ActionLink("Delete", "Delete", new { id = item.ExpenseId }, new AjaxOptions { Confirm = "Delete Expense?", HttpMethod = "Post", UpdateTargetId = "divExpenseList" })             </td>              <td>                 @item.Category.Name             </td>             <td>                 @item.Transaction             </td>             <td>                 @String.Format("{0:d}", item.Date)             </td>             <td>                 @String.Format("{0:F}", item.Amount)             </td>         </tr>          }       </table>     <p>         @Html.ActionLink("Create New Expense", "Create") |         @Html.ActionLink("Create New Category", "Create","Category")     </p> Index.cshtml @using MyFinance.Helpers; @model IEnumerable<MyFinance.Domain.Expense> @{     ViewBag.Title = "Index"; }    <h2>Expense List</h2>    <script src="@Url.Content("~/Scripts/jquery.unobtrusive-ajax.min.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/jquery-ui.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/jquery.ui.datepicker.js")" type="text/javascript"></script> <link href="@Url.Content("~/Content/jquery-ui-1.8.6.custom.css")" rel="stylesheet" type="text/css" />      @using (Ajax.BeginForm(new AjaxOptions{ UpdateTargetId="divExpenseList", HttpMethod="Get"})) {     <table>         <tr>         <td>         <div>           Start Date: @Html.TextBox("StartDate", Html.Encode(String.Format("{0:mm/dd/yyyy}", ViewData["StartDate"].ToString())), new { @class = "ui-datepicker" })         </div>         </td>         <td><div>            End Date: @Html.TextBox("EndDate", Html.Encode(String.Format("{0:mm/dd/yyyy}", ViewData["EndDate"].ToString())), new { @class = "ui-datepicker" })          </div></td>          <td> 
<input type="submit" value="Search By TransactionDate" /></td>         </tr>     </table>         }   <div id="divExpenseList">             @Html.Partial("ExpenseList", Model)     </div> <script type="text/javascript">     $().ready(function () {         $('.ui-datepicker').datepicker({             dateFormat: 'mm/dd/yy',             buttonImage: '@Url.Content("~/Content/calendar.gif")',             buttonImageOnly: true,             showOn: "button"         });     }); </script> Ajax search functionality using Ajax.BeginForm The search functionality of Index view is providing Ajax functionality using Ajax.BeginForm. The Ajax.BeginForm() method writes an opening <form> tag to the response. You can use this method in a using block. In that case, the method renders the closing </form> tag at the end of the using block and the form is submitted asynchronously by using JavaScript. The search functionality will call the Index Action method and this will return partial view ExpenseList for updating the search result. We want to update the response UI for the Ajax request onto divExpenseList element. So we have specified the UpdateTargetId as "divExpenseList" in the Ajax.BeginForm method. Add jQuery DatePicker Our search functionality is using a date range so we are providing two date pickers using jQuery datepicker. You need to add reference to the following JavaScript files to working with jQuery datepicker. jquery-ui.js jquery.ui.datepicker.js For theme support for datepicker, we can use a customized CSS class. In our example we have used a CSS file “jquery-ui-1.8.6.custom.css”. For more details about the datepicker component, visit jquery UI website at http://jqueryui.com/demos/datepicker . In the jQuery ready event, we have used following JavaScript function to initialize the UI element to show date picker. <script type="text/javascript">     $().ready(function () {         $('.ui-datepicker').datepicker({             dateFormat: 'mm/dd/yy',             buttonImage: '@Url.Content("~/Content/calendar.gif")',             buttonImageOnly: true,             showOn: "button"         });     }); </script>   Source Code You can download the source code from http://efmvc.codeplex.com/ . Summary In this two-part series, we have created a simple web application using ASP.NET MVC 3 RTM, Razor and EF Code First CTP 5. I have demonstrated patterns and practices  such as Dependency Injection, Repository pattern, Unit of Work, ViewModel and Service Layer. My primary objective was to demonstrate different practices and options for developing web apps using ASP.NET MVC 3 and EF Code First. You can implement these approaches in your own way for building web apps using ASP.NET MVC 3. I will refactor this demo app on later time.
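One small footnote to the Create action shown earlier: ModelCopier ships in the ASP.NET MVC futures assembly, so a team that prefers not to take that extra dependency can copy the handful of fields by hand (or use a mapping library). A hand-rolled equivalent, using only the types defined in this post, might look like the sketch below.

public static class ExpenseMapper
{
    // Copies the user-editable fields from the view model onto the domain entity.
    public static Expense ToExpense(ExpenseViewModel viewModel)
    {
        return new Expense
        {
            ExpenseId   = viewModel.ExpenseId,
            CategoryId  = viewModel.CategoryId,
            Transaction = viewModel.Transaction,
            Date        = viewModel.Date,
            Amount      = viewModel.Amount
        };
    }
}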

    Read the article

  • eBooks on iPad vs. Kindle: More Debate than Smackdown

    - by andrewbrust
    When the iPad was presented at its San Francisco launch event on January 28th, Steve Jobs spent a significant amount of time explaining how well the device would serve as an eBook reader. He showed the iBooks reader application and iBookstore and laid down the gauntlet before Amazon and its beloved Kindle device. Almost immediately afterwards, criticism came rushing forth that the iPad could never beat the Kindle for book reading. The curious part of that criticism is that virtually no one offering it had actually used the iPad yet. A few weeks later, on April 3rd, the iPad was released for sale in the United States. I bought one on that day and in the few additional weeks that have elapsed, I’ve given quite a workout to most of its capabilities, including its eBook features. I’ve also spent some time with the Kindle, albeit a first-generation model, to see how it actually compares to the iPad. I had some expectations going in, but I came away with conclusions about each device that were more scenario-based than absolute. I present my findings to you here.   Vital Statistics Let’s start with an inventory of each device’s underlying technology. The iPad has a color, backlit LCD screen and an on-screen keyboard. It has a battery which, on a full charge, lasts anywhere from 6-10 hours. The Kindle offers a monochrome, reflective E Ink display, a physical keyboard and a battery that on my first gen loaner unit can go up to a week between charges (Amazon claims the battery on the Kindle 2 can last up to 2 weeks on a single charge). The Kindle connects to Amazon’s Kindle Store using a 3G modem (the technology and network vary depending on the model) that incurs no airtime service charges whatsoever. The iPad units that are on-sale today work over WiFi only. 3G-equipped models will be on sale shortly and will command a $130 premium over their WiFi-only counterparts. 3G service on the iPad, in the U.S. from AT&T, will be fee-based, with a 250MB plan at $14.99 per month and an unlimited plan at $29.99. No contract is required for 3G service. All these tech specs aside, I think a more useful observation is that the iPad is a multi-purpose Internet-connected entertainment device, while the Kindle is a dedicated reading device. The question is whether those differences in design and intended use create a clear-cut winner for reading electronic publications. Let’s take a look at each device, in isolation, now.   Kindle To me, what’s most innovative about the Kindle is its E Ink display. E Ink really looks like ink on a sheet of paper. It requires no backlight, it’s fully visible in direct sunlight and it causes almost none of the eyestrain that LCD-based computer display technology (like that used on the iPad) does. It’s really versatile in an all-around way. Forgive me if this sounds precious, but reading on it is really a joy. In fact, it’s a genuinely relaxing experience. Through the Kindle Store, Amazon allows users to download books (including audio books), magazines, newspapers and blog feeds. Books and magazines can be purchased either on a single-issue basis or as an annual subscription. Books, of course, are purchased singly. Oddly, blogs are not free, but instead carry a monthly subscription fee, typically $1.99. To me this is ludicrous, but I suppose the free 3G service is partially to blame. Books and magazine issues download quickly. Magazine and blog subscriptions cause new issues or posts to be pushed to your device on an automated basis. 
Available blogs include 9000-odd feeds that Amazon offers on the Kindle Store; unless I missed something, arbitrary RSS feeds are not supported (though there are third party workarounds to this limitation). The shopping experience is integrated well, has an huge selection, and offers certain graphical perks. For example, magazine and newspaper logos are displayed in menus, and book cover thumbnails appear as well. A simple search mechanism is provided and text entry through the physical keyboard is relatively painless. It’s very easy and straightforward to enter the store, find something you like and start reading it quickly. If you know what you’re looking for, it’s even faster. Given Kindle’s high portability, very reliable battery, instant-on capability and highly integrated content acquisition, it makes reading on whim, and in random spurts of downtime, very attractive. The Kindle’s home screen lists all of your publications, and easily lets you select one, then start reading it. Once opened, publications display in crisp, attractive text that is adjustable in size. “Turning” pages is achieved through buttons dedicated to the task. Notes can be recorded, bookmarks can be saved and pages can be saved as clippings. I am not an avid book reader, and yet I found the Kindle made it really fun, convenient and soothing to read. There’s something about the easy access to the material and the simplicity of the display that makes the Kindle seduce you into chilling out and reading page after page. On the other hand, the Kindle has an awkward navigation interface. While menus are displayed clearly on the screen, the method of selecting menu items is tricky: alongside the right-hand edge of the main display is a thin column that acts as a second display. It has a white background, and a scrollable silver cursor that is moved up or down through the use of the device’s scrollwheel. Picking a menu item on the main display involves scrolling the silver cursor to a position parallel to that menu item and pushing the scrollwheel in. This navigation technique creates a disconnect, literally. You don’t really click on a selection so much as you gesture toward it. I got used to this technique quickly, but I didn’t love it. It definitely created a kind of anxiety in me, making me feel the need to speed through menus and get to my destination document quickly. Once there, I could calm down and relax. Books are great on the Kindle. Magazines and newspapers much less so. I found the rendering of photographs, and even illustrations, to be unacceptably crude. For this reason, I expect that reading textbooks on the Kindle may leave students wanting. I found that the original flow and layout of any publication was sacrificed on the Kindle. In effect, browsing a magazine or newspaper was almost impossible. Reading the text of individual articles was enjoyable, but having to read this way made the whole experience much more “a la carte” than cohesive and thematic between articles. I imagine that for academic journals this is ideal, but for consumer publications it imposes a stripped-down, low-fidelity experience that evokes a sense of deprivation. In general, the Kindle is great for reading text. For just about anything else, especially activity that involves exploratory browsing, meandering and short-attention-span reading, it presents a real barrier to entry and adoption. Avid book readers will enjoy the Kindle (if they’re not already). It’s a great device for losing oneself in a book over long sittings. 
Multitaskers who are more interested in periodicals, be they online or off, will like it much less, as they will find compromise, and even sacrifice, to be palpable.   iPad The iPad is a very different device from the Kindle. While the Kindle is oriented to pages of text, the iPad orbits around applications and their interfaces. Be it the pinch and zoom experience in the browser, the rich media features that augment content on news and weather sites, or the ability to interact with social networking services like Twitter, the iPad is versatile. While it shares a slate-like form factor with the Kindle, it’s effectively an elegant personal computer. One of its many features is the iBook application and integration of the iBookstore. But it’s a multi-purpose device. That turns out to be good and bad, depending on what you’re reading. The iBookstore is great for browsing. It’s color, rich animation-laden user interface make it possible to shop for books, rather than merely search and acquire them. Unfortunately, its selection is rather sparse at the moment. If you’re looking for a New York Times bestseller, or other popular titles, you should be OK. If you want to read something more specialized, it’s much harder. Unlike the awkward navigation interface of the Kindle, the iPad offers a nearly flawless touch-screen interface that seduces the user into tinkering and kibitzing every bit as much as the Kindle lulls you into a deep, concentrated read. It’s a dynamic and interactive device, whereas the Kindle is static and passive. The iBook reader is slick and fun. Use the iPad in landscape mode and you can read the book in 2-up (left/right 2-page) display; use it in portrait mode and you can read one page at a time. Rather than clicking a hardware button to turn pages, you simply drag and wipe from right-to-left to flip the single or right-hand page. The page actually travels through an animated path as it would in a physical book. The intuitiveness of the interface is uncanny. The reader also accommodates saving of bookmarks, searching of the text, and the ability to highlight a word and look it up in a dictionary. Pages display brightly and clearly. They’re easy to read. But the backlight and the glare made me less comfortable than I was with the Kindle. The knowledge that completely different applications (including the Web and email and Twitter) were just a few taps away made me antsy and very tempted to task-switch. The knowledge that battery life is an issue created subtle discomfort. If the Kindle makes you feel like you’re in a library reading room, then the iPad makes you feel, at best, like you’re under fluorescent lights at a Barnes and Noble or Borders store. If you’re lucky, you’d be on a couch or at a reading table in the store, but you might also be standing up, in the aisles. Clearly, I didn’t find this conducive to focused and sustained reading. But that may have more to do with my own tendency to read periodicals far more than books, and my neurotic . And, truth be known, the book reading experience, when not explicitly compared to Kindle’s, was still pleasant. It is also important to point out that Kindle Store-sourced books can be read on the iPad through a Kindle reader application, from Amazon, specific to the device. This offered a less rich experience than the iBooks reader, but it was completely adequate. Despite the Kindle brand of the reader, however, it offered little in terms of simulating the reading experience on its namesake device. 
When it comes to periodicals, the iPad wins hands down. Magazines, even if merely scanned images of their print editions, read on the iPad in a way that felt similar to reading hard copy. The full color display, touch navigation and even the ability to render advertisements in their full glory makes the iPad a great way to read through any piece of work that is measured in pages, rather than chapters. There are many ways to get magazines and newspapers onto the iPad, including the Zinio reader, and publication-specific applications like the Wall Street Journal’s and Popular Science’s. The New York Times’ free Editors’ Choice application offers a Times Reader-like interface to a subset of the Gray Lady’s daily content. The completely Web-based but iPad-optimized Times Skimmer site (at www.nytimes.com/timesskimmer) works well too. Even conventional Web sites themselves can be read much like magazines, given the iPad’s ability to zoom in on the text and crop out advertisements on the margins. While the Kindle does have an experimental Web browser, it reminded me a lot of early mobile phone browsers, only in a larger size. For text-heavy sites with simple layout, it works fine. For just about anything else, it becomes more trouble than it’s worth. And given the way magazine articles make me think of things I want to look up online, I think that’s a real liability for the Kindle.   Summing Up What I came to realize is that the Kindle isn’t so much a computer or even an Internet device as it is a printer. While it doesn’t use physical paper, it still renders its content a page at a time, just like a laser printer does, and its output appears strikingly similar. You can read the rendered text, but you can’t interact with it in any way. That’s why the navigation requires a separate cursor display area. And because of the page-oriented rendering behavior, turning pages causes a flash on the display and requires a sometimes long pause before the next page is rendered. The good side of this is that once the page is generated, no battery power is required to display it. That makes for great battery life, optimal viewing under most lighting conditions (as long as there is some light) and low-eyestrain text-centric display of content. The Kindle is highly portable, has an excellent selection in its store and is refreshingly distraction-free. All of this is ideal for reading books. And iPad doesn’t offer any of it. What iPad does offer is versatility, variety, richness and luxury. It’s flush with accoutrements even if it’s low on focused, sustained text display. That makes it inferior to the Kindle for book reading. But that also makes it better than the Kindle for almost everything else. As such, and given that its book reading experience is still decent (even if not superior), I think the iPad will give Kindle a run for its money. True book lovers, and people on a budget, will want the Kindle. People with a robust amount of discretionary income may want both devices. Everyone else who is interested in a slate form factor e-reading device, especially if they also wish to have leisure-friendly Internet access, will likely choose the iPad exclusively. One thing is for sure: iPad has reduced Kindle’s market, and may have shifted its mass market potential to a mere niche play. If Amazon is smart, it will improve its iPad-based Kindle reader app significantly. It can then leverage the iPad channel as a significant market for the Kindle Store. 
After all, selling the eBooks themselves is what Amazon should care most about.

    Read the article

  • CodePlex Daily Summary for Tuesday, May 25, 2010

    CodePlex Daily Summary for Tuesday, May 25, 2010New ProjectsBibleNames: BibleNames BibleNames BibleNames BibleNames BibleNamesBing Search for PHP Developers: This is the Bing SDK for PHP, a toolkit that allows you to easily use Bing's API to fetch search results and use in your own application. The Bin...Fading Clock: Fading Clock is a normal clock that has the ability to fade in out. It's developed in C# as a Windows App in .net 2.0.Fuzzy string matching algorithm in C# and LINQ: Fuzzy matching strings. Find out how similar two string is, and find the best fuzzy matching string from a string table. Given a string (strA) and...HexTile editor: Testing hexagonal tile editorhgcheck12: hgcheck12Metaverse Router: Metaverse Router makes it easier for MIIS, ILM, FIM Sync engine administrators to manage multiple provisioning modules, turn on/off provisioning wi...MyVocabulary: Use MyVocabulary to structure and test the words you want to learn in a foreign language. This is a .net 3.5 windows forms application developed in...phpxw: Phpxw 是一个简易的PHP框架。以我自己的姓名命名的。 Phpxw is a simple PHP framework. Take my name named.Plop: Social networking wrappers and projects.PST Data Structure View Tool: PST Data Structure View Tool (PSTViewTool) is a tool supporting the PST file format documentation effort. It allows the user to browse the internal...PST File Format SDK: PST File Format SDK (pstsdk) is a cross platform header only C++ library for reading PST files.QWine: QWine is Queue Machine ApplicationSharePoint Cross Site Collection Security Trimmed Navigation: This SP2010 project will show security trimmed navigation that works across site collections. The project is written for SP2010, but can be easily ...SharePoint List Field Manager: The SharePoint List Field Manager allows users to manage the Boolean properties of a list such as Read Only, Hidden, Show in New Form etc... It sup...Silverlight Toolbar for DataGrid working with RIA Services or local data: DataGridToolbar contains two controls for Silverlight DataGrid designed for RIA Services and local data. It can be used to filter or remove a data,...SilverShader - Silverlight Pixel Shader Demos: SilverShader is an extensible Silverlight application that is used to demonstrate the effect of different pixel shaders. The shaders can be applied...SNCFT Gadget: Ce gadget permet de consulter les horaires des trains et de chercher des informations sur le site de la société nationale des chemins de fer tunisi...Software Transaction Memory: Software Transaction Memory for .NETStreamInsight Samples: This project contains sample code for StreamInsight, Microsoft's platform for complex event processing. The purpose of the samples is to provide a ...StyleAnalizer: A CSS parserSudoku (Multiplayer in RnD): Sudoku project was to practice on C# by making a desktop application using some algorithm Before this, I had worked on http://shaktisaran.tech.o...Tiplican: A small website built for the purpose of learning .Net 4 and MVC 2TPager: Mercurial pager with color support on Windowsunirca: UNIRCA projectWcfTrace: The WcfTrace is a easy way to collect information about WCF-service call order, processing time and etc. 
It's developed in C#.New ReleasesASP.NET TimePicker Control: 12 24 Hour Bug Fix: 12 24 Hour Bug FixASP.NET TimePicker Control: ASP.NET TimePicker Control: This release fixes a focus bug that manifests itself when switching focus between two different timepicker controls on the same page, and make chan...ASP.NET TimePicker Control: ASP.NET TimePicker Control - colon CSS Fix: Fixes ":" seperator placement issues being too hi and too low in IE and FireFox, respectively.ASP.NET TimePicker Control: Release fixes 24 Hour Mode Bug: Release fixes 24 Hour Mode BugBFBC2 PRoCon: PRoCon 0.5.1.1: Visit http://phogue.net/?p=604 for release notes.BFBC2 PRoCon: PRoCon 0.5.1.2: Release notes can be found at http://phogue.net/?p=604BFBC2 PRoCon: PRoCon 0.5.1.4: Ha.. choosing the "stable" option at the moment is a bit of a joke =\ Release notes at http://phogue.net/?p=604BFBC2 PRoCon: PRoCon 0.5.1.5: BWHAHAHA stable.. ha. Actually this ones looking pretty good now. http://phogue.net/?p=604Bojinx: Bojinx Core V4.5.14: Issues fixed in this release: Fixed an issue that caused referencePropertyName when used through a property configuration in the context to not wo...Bojinx: Bojinx Debugger V0.9B: Output trace and filtering that works with the Bojinx logger.CassiniDev - Cassini 3.5/4.0 Developers Edition: CassiniDev 3.5.1.5 and 4.0.1.5 beta3: Fixed fairly serious bug #13290 http://cassinidev.codeplex.com/WorkItem/View.aspx?WorkItemId=13290Content Rendering: Content Rendering API 1.0.0 Revision 46406: Initial releaseDeploy Workflow Manager: Deploy Workflow Manager Web Part v2: Recommend you test in your development environment first BEFORE using in production.dotSpatial: System.Spatial.Projections Zip May 24, 2010: Adds a new spherical projection.eComic: eComic 2010.0.0.2: Quick release to fix a couple of bugs found in the previous version. Version 2010.0.0.2 Fixed crash error when accessing the "Go To Page" dialog ...Exchange 2010 RBAC Editor (RBAC GUI) - updated on 5/24/2010: RBAC Editor 0.9.4.1: Some bugs fixed; support for unscopoedtoplevel (adding script is not working yet) Please use email address in About menu of the tool for any feedb...Facebook Graph Toolkit: Preview 1 Binaries: The first preview release of the toolkit. Not recommended for use in production, but enought to get started developing your app.Fading Clock: Clock.zip: Clock.zip is a zip file that contains the application Clock.exe.hgcheck12: Rel8082: Rel8082hgcheck12: scsc: scasMetaverse Router: Metaverse Router v1.0: Initial stable release (v.1.0.0.0) of Metaverse Router Working with: FIM 2010 Synchronization Engine ILM 2007 MIIS 2003MSTestGlider: MSTestGlider 1.5: What MSTestGlider 1.5.zip unzips to: http://i49.tinypic.com/2lcv4eg.png If you compile MSTestGlider from its source you will need to change the ou...MyVocabulary: Version 1.0: First releaseNLog - Advanced .NET Logging: Nightly Build 2010.05.24.001: Changes since the last build:2010-05-23 20:45:37 Jarek Kowalski fixed whitespace in NLog.wxs 2010-05-23 12:01:48 Jarek Kowalski BufferingTargetWra...NoteExpress User Tools (NEUT) - Do it by ourselves!: NoteExpress User Tools 2.0.0: 测试版本:NoteExpress 2.5.0.1154 +调整了Tab页的排列方式 注:2.0未做大的改动,仅仅是运行环境由原来的.net 3.5升级到4.0。openrs: Beta Release (Revision 1): This is the beta release of the openrs framework. Please make sure you submit issues in the issue tracker tab. As this is beta, extreme, flawless ...openrs: Revision 2: Revision 2 of the framework. 
Basic worker example as well as minor improvements on the auth packet.phpxw: Phpxw: Phpxw 1.0 phpxw 是一个简易的PHP框架。以我自己的姓名命名的。 Phpxw is a simple PHP framework. Take my name named. 支持基本的业务逻辑流程,功能模块化,实现了简易的模板技术,同时也可以支持外接模板引擎。 Support...sELedit: sELedit v1.1a: Fixed: clean file before overwriting Fixed: list57 will only created when eLC.Lists.length > 57sGSHOPedit: sGSHOPedit v1.0a: Fixed: bug with wrong item array re-size when adding items after deleting items Added: link to project page pwdatabase.com version is now selec...SharePoint Cross Site Collection Security Trimmed Navigation: Release 1.0.0.0: If you want just the .wsp, and start using this, you can grab it here. Just stsadm add/deploy to your website, and activate the feature as describ...Silverlight 4.0 Popup Menu: Context Menu for Silverlight 4.0 v1.24 Beta: - Updated the demo and added clipboard cut/copy and paste functionality. - Added delay on hover events for both parent and child menus. - Parent me...Silverlight Toolbar for DataGrid working with RIA Services or local data: DataGridToolBar Beta: For Silverlight 4.0sMAPtool: sMAPtool v0.7d (without Maps): Added: link to project pagesMODfix: sMODfix v1.0a: Added: Support for ECM v52 modelssNPCedit: sNPCedit v0.9a: browse source commits for all changes...SocialScapes: SocialScapes TwitterWidget 1.0.0: The SocialScapes TwitterWidget is a DotNetNuke Widget for displaying Twitter searches. This widget will be used to replace the twitter functionali...SQL Server 2005 and 2008 - Backup, Integrity Check and Index Optimization: 23 May 2010: This is the latest version of my solution for Backup, Integrity Check and Index Optimization in SQL Server 2005, SQL Server 2008 and SQL Server 200...sqwarea: Sqwarea 0.0.280.0 (alpha): This release brings a lot of improvements. We strongly recommend you to upgrade to this version.sTASKedit: sTASKedit v0.7c: Minor Changes in GUI & BehaviourSudoku (Multiplayer in RnD): Sudoku (Multiplayer in RnD) 1.0.0.0 source: Sudoku project was to practice on C# by making a desktop application using some algorithm Idea: The basic idea of algorithm is from http://www.ac...Sudoku (Multiplayer in RnD): Sudoku (Multiplayer in RnD) 1.0.0.1 source: Worked on user-interface, would improve it Sudoku project was to practice on C# by making a desktop application using some algorithm Idea: The b...TFS WorkItem Watcher: TFS WorkItem Watcher Version 1.0: This version contains the following new features: Added support to autodetect whether to start as a service or to start in console mode. The "-c" ...TfsPolicyPack: TfsPolicyPack 0.1: This is the first release of the TfsPolicyPack. This release includes the following policies: CustomRegexPathPolicythinktecture Starter STS (Community Edition): StarterSTS v1.1 CTP: Added ActAs / identity delegation support.TPager: TPager-20100524: TPager 2010-05-24 releaseTrance Layer: TranceLayer Transformer: Transformer is a Beta version 2, morphing from "Digger" to "Transformer" release cycle. It is intended to be used as a demonstration of muscles wh...TweetSharp: TweetSharp v1.0.0.0: Changes in v1.0.0Added 100% public code comments Bug fixes based on feedback from the Release Candidate Changes to handle Twitter schema additi...VCC: Latest build, v2.1.30524.0: Automatic drop of latest buildWCF Client Generator: Version 0.9.2.33468: Version 0.9.2.33468 Fixed: Nested enum types names are not handled correctly. 
Can't close Visual Studio if generated files are open when the code...Word 2007 Redaction Tool: Version 1.2: A minor update to the Word 2007 Redaction Tool. This version can be installed directly over any existing version. Updates to Version 1.2Fixed bugs:...xPollinate - Windows Live Writer Cross Post Plugin: 1.0.0.5 for WLW 14.0.8117.416: This version works with WLW 14.0.8117.416. This release includes a fix to enable publishing posts that have been opened directly from a blog, but ...Yet another developer blog - Examples: jQuery Autocomplete in ASP.NET MVC: This sample application shows how to use jQuery Autocomplete plugin in ASP.NET MVC. This application is accompanied by the following entry on Yet a...Most Popular ProjectsRawrWBFS ManagerAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)patterns & practices – Enterprise LibraryMicrosoft SQL Server Community & SamplesPHPExcelASP.NETMost Active Projectspatterns & practices – Enterprise LibraryRawrpatterns & practices: Windows Azure Security GuidanceSqlServerExtensionsGMap.NET - Great Maps for Windows Forms & PresentationMono.AddinsCaliburn: An Application Framework for WPF and SilverlightBlogEngine.NETIonics Isapi Rewrite FilterSQL Server PowerShell Extensions

    Read the article

  • Auto blocking attacking IP address

    - by dong
    This is to share my PowerShell code online. I original asked this question on MSDN forum (or TechNet?) here: http://social.technet.microsoft.com/Forums/en-US/winserversecurity/thread/f950686e-e3f8-4cf2-b8ec-2685c1ed7a77 In short, this is trying to find attacking IP address then add it into Firewall block rule. So I suppose: 1, You are running a Windows Server 2008 facing the Internet. 2, You need to have some port open for service, e.g. TCP 21 for FTP; TCP 3389 for Remote Desktop. You can see in my code I’m only dealing with these two since that’s what I opened. You can add further port number if you like, but the way to process might be different with these two. 3, I strongly suggest you use STRONG password and follow all security best practices, this ps1 code is NOT for adding security to your server, but reduce the nuisance from brute force attack, and make sys admin’s life easier: i.e. your FTP log won’t hold megabytes of nonsense, your Windows system log will not roll back and only can tell you what happened last month. 4, You are comfortable with setting up Windows Firewall rules, in my code, my rule has a name of “MY BLACKLIST”, you need to setup a similar one, and set it to BLOCK everything. 5, My rule is dangerous because it has the risk to block myself out as well. I do have a backup plan i.e. the DELL DRAC5 so that if that happens, I still can remote console to my server and reset the firewall. 6, By no means the code is perfect, the coding style, the use of PowerShell skills, the hard coded part, all can be improved, it’s just that it’s good enough for me already. It has been running on my server for more than 7 MONTHS. 7, Current code still has problem, I didn’t solve it yet, further on this point after the code. :)    #Dong Xie, March 2012  #my simple code to monitor attack and deal with it  #Windows Server 2008 Logon Type  #8: NetworkCleartext, i.e. FTP  #10: RemoteInteractive, i.e. RDP    $tick = 0;  "Start to run at: " + (get-date);    $regex1 = [regex] "192\.168\.100\.(?:101|102):3389\s+(\d+\.\d+\.\d+\.\d+)";  $regex2 = [regex] "Source Network Address:\t(\d+\.\d+\.\d+\.\d+)";    while($True) {   $blacklist = @();     "Running... (tick:" + $tick + ")"; $tick+=1;    #Port 3389  $a = @()  netstat -no | Select-String ":3389" | ? { $m = $regex1.Match($_); `    $ip = $m.Groups[1].Value; if ($m.Success -and $ip -ne "10.0.0.1") {$a = $a + $ip;} }  if ($a.count -gt 0) {    $ips = get-eventlog Security -Newest 1000 | Where-Object {$_.EventID -eq 4625 -and $_.Message -match "Logon Type:\s+10"} | foreach { `      $m = $regex2.Match($_.Message); $ip = $m.Groups[1].Value; $ip; } | Sort-Object | Tee-Object -Variable list | Get-Unique    foreach ($ip in $a) { if ($ips -contains $ip) {      if (-not ($blacklist -contains $ip)) {        $attack_count = ($list | Select-String $ip -SimpleMatch | Measure-Object).count;        "Found attacking IP on 3389: " + $ip + ", with count: " + $attack_count;        if ($attack_count -ge 20) {$blacklist = $blacklist + $ip;}      }      }    }  }      #FTP  $now = (Get-Date).AddMinutes(-5); #check only last 5 mins.     #Get-EventLog has built-in switch for EventID, Message, Time, etc. but using any of these it will be VERY slow.  
$count = (Get-EventLog Security -Newest 1000 | Where-Object {$_.EventID -eq 4625 -and $_.Message -match "Logon Type:\s+8" -and `              $_.TimeGenerated.CompareTo($now) -gt 0} | Measure-Object).count;  if ($count -gt 50) #threshold  {     $ips = @();     $ips1 = dir "C:\inetpub\logs\LogFiles\FPTSVC2" | Sort-Object -Property LastWriteTime -Descending `       | select -First 1 | gc | select -Last 200 | where {$_ -match "An\+error\+occured\+during\+the\+authentication\+process."} `        | Select-String -Pattern "(\d+\.\d+\.\d+\.\d+)" | select -ExpandProperty Matches | select -ExpandProperty value | Group-Object `        | where {$_.Count -ge 10} | select -ExpandProperty Name;       $ips2 = dir "C:\inetpub\logs\LogFiles\FTPSVC3" | Sort-Object -Property LastWriteTime -Descending `       | select -First 1 | gc | select -Last 200 | where {$_ -match "An\+error\+occured\+during\+the\+authentication\+process."} `        | Select-String -Pattern "(\d+\.\d+\.\d+\.\d+)" | select -ExpandProperty Matches | select -ExpandProperty value | Group-Object `        | where {$_.Count -ge 10} | select -ExpandProperty Name;     $ips += $ips1; $ips += $ips2; $ips = $ips | where {$_ -ne "10.0.0.1"} | Sort-Object | Get-Unique;         foreach ($ip in $ips) {       if (-not ($blacklist -contains $ip)) {        "Found attacking IP on FTP: " + $ip;        $blacklist = $blacklist + $ip;       }     }  }        #Firewall change <# $current = (netsh advfirewall firewall show rule name="MY BLACKLIST" | where {$_ -match "RemoteIP"}).replace("RemoteIP:", "").replace(" ","").replace("/255.255.255.255",""); #inside $current there is no \r or \n need remove. foreach ($ip in $blacklist) { if (-not ($current -match $ip) -and -not ($ip -like "10.0.0.*")) {"Adding this IP into firewall blocklist: " + $ip; $c= 'netsh advfirewall firewall set rule name="MY BLACKLIST" new RemoteIP="{0},{1}"' -f $ip, $current; Invoke-Expression $c; } } #>    foreach ($ip in $blacklist) {    $fw=New-object –comObject HNetCfg.FwPolicy2; # http://blogs.technet.com/b/jamesone/archive/2009/02/18/how-to-manage-the-windows-firewall-settings-with-powershell.aspx    $myrule = $fw.Rules | where {$_.Name -eq "MY BLACKLIST"} | select -First 1; # Potential bug here?    if (-not ($myrule.RemoteAddresses -match $ip) -and -not ($ip -like "10.0.0.*"))      {"Adding this IP into firewall blocklist: " + $ip;         $myrule.RemoteAddresses+=(","+$ip);      }  }    Wait-Event -Timeout 30 #pause 30 secs    } # end of top while loop.   Further points: 1, I suppose the server is listening on port 3389 on server IP: 192.168.100.101 and 192.168.100.102, you need to replace that with your real IP. 2, I suppose you are Remote Desktop to this server from a workstation with IP: 10.0.0.1. Please replace as well. 3, The threshold for 3389 attack is 20, you don’t want to block yourself just because you typed your password wrong 3 times, you can change this threshold by your own reasoning. 4, FTP is checking the log for attack only to the last 5 mins, you can change that as well. 5, I suppose the server is serving FTP on both IP address and their LOG path are C:\inetpub\logs\LogFiles\FPTSVC2 and C:\inetpub\logs\LogFiles\FPTSVC3. Change accordingly. 6, FTP checking code is only asking for the last 200 lines of log, and the threshold is 10, change as you wish. 7, the code runs in a loop, you can set the loop time at the last line. 
To run this code, copy and paste to your editor, finish all the editing, get it to your server, and open an CMD window, then type powershell.exe –file your_powershell_file_name.ps1, it will start running, you can Ctrl-C to break it. This is what you see when it’s running: This is when it detected attack and adding the firewall rule: Regarding the design of the code: 1, There are many ways you can detect the attack, but to add an IP into a block rule is no small thing, you need to think hard before doing it, reason for that may include: You don’t want block yourself; and not blocking your customer/user, i.e. the good guy. 2, Thus for each service/port, I double check. For 3389, first it needs to show in netstat.exe, then the Event log; for FTP, first check the Event log, then the FTP log files. 3, At three places I need to make sure I’m not adding myself into the block rule. –ne with single IP, –like with subnet.   Now the final bit: 1, The code will stop working after a while (depends on how busy you are attacked, could be weeks, months, or days?!) It will throw Red error message in CMD, don’t Panic, it does no harm, but it also no longer blocking new attack. THE REASON is not confirmed with MS people: the COM object to manage firewall, you can only give it a list of IP addresses to the length of around 32KB I think, once it reaches the limit, you get the error message. 2, This is in fact my second solution to use the COM object, the first solution is still in the comment block for your reference, which is using netsh, that fails because being run from CMD, you can only throw it a list of IP to 8KB. 3, I haven’t worked the workaround yet, some ideas include: wrap that RemoteAddresses setting line with error checking and once it reaches the limit, use the newly detected IP to be the list, not appending to it. This basically reset your block rule to ground zero and lose the previous bad IPs. This does no harm as it sounds, because given a certain period has passed, any these bad IPs still not repent and continue the attack to you, it only got 30 seconds or 20 guesses of your password before you block it again. And there is the benefit that the bad IP may turn back to the good hands again, and you are not blocking a potential customer or your CEO’s home pc because once upon a time, it’s a zombie. Thus the ZEN of blocking: never block any IP for too long. 4, But if you insist to block the ugly forever, my other ideas include: You call MS support, ask them how can we set an arbitrary length of IP addresses in a rule; at least from my experiences at the Forum, they don’t know and they don’t care, because they think the dynamic blocking should be done by some expensive hardware. Or, from programming perspective, you can create a new rule once the old is full, then you’ll have MY BLACKLIST1, MY  BLACKLIST2, MY BLACKLIST3, … etc. Once in a while you can compile them together and start a business to sell your blacklist on the market! Enjoy the code! p.s. (PowerShell is REALLY REALLY GREAT!)
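For anyone who would rather drive the same block rule from a small C# utility instead of PowerShell, the equivalent COM automation is available through the NetFwTypeLib interop (add a COM reference to NetFwTypeLib in Visual Studio). The sketch below is only an illustration of the idea; it uses the same HNetCfg.FwPolicy2 object as the script, must run elevated, and inherits the same RemoteAddresses length limitation discussed above.

using System;
using NetFwTypeLib;

public static class BlacklistRule
{
    public static void AddIp(string ip)
    {
        // Same COM object the PowerShell script uses: HNetCfg.FwPolicy2.
        Type policyType = Type.GetTypeFromProgID("HNetCfg.FwPolicy2");
        INetFwPolicy2 firewallPolicy = (INetFwPolicy2)Activator.CreateInstance(policyType);

        // Look up the pre-created "MY BLACKLIST" block rule by name.
        INetFwRule rule = firewallPolicy.Rules.Item("MY BLACKLIST");

        // Append the address unless it is already present.
        if (rule.RemoteAddresses.IndexOf(ip, StringComparison.Ordinal) < 0)
        {
            rule.RemoteAddresses += "," + ip;
        }
    }
}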

    Read the article

  • VS 2012 Code Review – Before Check In OR After Check In?

    - by Tarun Arora
    “Is Code Review Important and Effective?” There is a consensus across the industry that code review is an effective and practical way to collar code inconsistency and possible defects early in the software development life cycle. Among others some of the advantages of code reviews are, Bugs are found faster Forces developers to write readable code (code that can be read without explanation or introduction!) Optimization methods/tricks/productive programs spread faster Programmers as specialists "evolve" faster It's fun “Code review is systematic examination (often known as peer review) of computer source code. It is intended to find and fix mistakes overlooked in the initial development phase, improving both the overall quality of software and the developers' skills. Reviews are done in various forms such as pair programming, informal walkthroughs, and formal inspections.” Wikipedia No where does the definition mention whether its better to review code before the code has been committed to version control or after the commit has been performed. No matter which side you favour, Visual Studio 2012 allows you to request for a code review both before check in and also request for a review after check in. Let’s weigh the pros and cons of the approaches independently. Code Review Before Check In or Code Review After Check In? Approach 1 – Code Review before Check in Developer completes the code and feels the code quality is appropriate for check in to TFS. The developer raises a code review request to have a second pair of eyes validate if the code abides to the recommended best practices, will not result in any defects due to common coding mistakes and whether any optimizations can be made to improve the code quality.                                             Image 1 – code review before check in Pros Everything that gets committed to source control is reviewed. Minimizes the chances of smelly code making its way into the code base. Decreases the cost of fixing bugs, remember, the earlier you find them, the lesser the pain in fixing them. Cons Development Code Freeze – Since the changes aren’t in the source control yet. Further development can only be done off-line. The changes have not been through a CI build, hard to say whether the code abides to all build quality standards. Inconsistent! Cumbersome to track the actual code review process.  Not every change to the code base is worth reviewing, a lot of effort is invested for very little gain. Approach 2 – Code Review after Check in Developer checks in, random code reviews are performed on the checked in code.                                                      Image 2 – Code review after check in Pros The code has already passed the CI build and run through any code analysis plug ins you may have running on the build server. Instruct the developer to ensure ZERO fx cop, style cop and static code analysis before check in. Code is cleaner and smell free even before the code review. No Offline development, developers can continue to develop against the source control. Cons Bad code can easily make its way into the code base. Since the review take place much later in the cycle, the cost of fixing issues can prove to be much higher. Approach 3 – Hybrid Approach The community advocates a more hybrid approach, a blend of tooling and human accountability quotient.                                                               Image 3 – Hybrid Approach 1. Code review high impact check ins. 
It is not possible to review everything, by setting up code review check in policies you can end up slowing your team. More over, the code that you are reviewing before check in hasn't even been through a green CI build either. 2. Tooling. Let the tooling work for you. By running static analysis, fx cop, style cop and other plug ins on the build agent, you can identify the real issues that in my opinion can't possibly be identified using human reviews. Configure the tooling to report back top 10 issues every day. Mandate the manual code review of individuals who keep making it to this list of shame more often. 3. During Merge. I would prefer eliminating some of the other code issues during merge from Main branch to the release branch. In a scrum project this is still easier because cheery picking the merges is a possibility and the size of code being reviewed is still limited. Let the tooling work for you, if some one breaks the CI build often, put them on a gated check in build course until you see improvement. If some one appears on the top 10 list of shame generated via the build then ensure that all their code is reviewed till you see improvement. At the end of the day, the goal is to ensure that the code being delivered is top quality. By enforcing a code review before any check in, you force the developer to work offline or stay put till the review is complete. What do the experts say? So I asked a few expects what they thought of “Code Review quality gate before Checking in code?" Terje Sandstrom | Microsoft ALM MVP You mean a review quality gate BEFORE checking in code????? That would mean a lot of code staying either local or in shelvesets, and not even been through a CI build, and a green CI build being the main criteria for going further, f.e. to the review state. I would not like code laying around with no checkin’s. Having a requirement that code is checked in small pieces, 4-8 hours work max, and AT LEAST daily checkins, a manual code review comes second down the lane. I would expect review quality gates to happen before merging back to main, or before merging to release.  But that would all be on checked-in code.  Branching is absolutely one way to ease the pain.   Another way we are using is automatic quality builds, running metrics, coverage, static code analysis.  Unfortunately it takes some time, would be great to be on CI’s – but…., so it’s done scheduled every night. Based on this we get, among other stuff,  top 10 lists of suspicious code, which is then subjected to reviews.  If a person seems to be very popular on these top 10 lists, we subject every check in from that person to a review for a period. That normally helps.   None of the clients I have can afford to have every checkin reviewed, so we need to find ways around it. I don’t disagree with the nicety of having all the code reviewed, but I find it hard to find those resources in today’s enterprises. David V. Corbin | Visual Studio ALM Ranger I tend to agree with both sides. I hate having code that is not checked in, but at the same time hate having “bad” code in the repository. I have found that branching is one approach to solving this dilemma. Code is checked into the private/feature branch before the review, but is not merged over to the “official” branch until after the review. I advocate both, depending on circumstance (especially team dynamics)   - The “pre-checkin” is usually for elements that may impact the project as a whole. Think of it as another “gate” along with passing unit tests. 
What do the experts say? I asked a few experts what they thought of a "code review quality gate before checking in code".
Terje Sandstrom | Microsoft ALM MVP
You mean a review quality gate BEFORE checking in code? That would mean a lot of code staying either local or in shelvesets, not even having been through a CI build, and a green CI build is the main criterion for going further, for example to the review state. I would not like code lying around with no check-ins. If you have a requirement that code is checked in in small pieces (4-8 hours of work max, and at least daily check-ins), a manual code review comes second down the lane. I would expect review quality gates to happen before merging back to main, or before merging to release, but that would all be on checked-in code. Branching is absolutely one way to ease the pain. Another way we are using is automatic quality builds, running metrics, coverage and static code analysis. Unfortunately that takes some time (it would be great to have it on CI builds), so it is scheduled every night. Based on this we get, among other things, top 10 lists of suspicious code, which is then subjected to reviews. If a person seems to be very popular on these top 10 lists, we subject every check-in from that person to a review for a period. That normally helps. None of the clients I have can afford to have every check-in reviewed, so we need to find ways around it. I don't disagree with the nicety of having all the code reviewed, but I find it hard to find those resources in today's enterprises.
David V. Corbin | Visual Studio ALM Ranger
I tend to agree with both sides. I hate having code that is not checked in, but at the same time hate having "bad" code in the repository. I have found that branching is one approach to solving this dilemma. Code is checked into the private/feature branch before the review, but is not merged over to the "official" branch until after the review. I advocate both, depending on circumstance (especially team dynamics):
- The "pre-checkin" review is usually for elements that may impact the project as a whole. Think of it as another "gate" along with passing unit tests.
- The "post-checkin" review may very well not be at the changeset level, but correlates to a review at the "user story" level.
Again, this depends on the team dynamics in play.
Robert MacLean | Microsoft ALM MVP
I do not think there is a single right answer for the industry as a whole. In short, the question is: why do you do reviews? Your question implies risk mitigation, so in low-risk areas you can get away with reviewing after check-in, while in high-risk areas you need to do it before check-in. For example, those new to a team, or juniors, need it much earlier (maybe before check-in, maybe soon after) than seniors who have shipped twenty sprints on the team.
Abhimanyu Singhal | Visual Studio ALM Ranger
It depends on the scenario. We recommend post-check-in reviews when:
1. We don't want to block other checks and processes on manual code reviews. Manual reviews take time, and some pieces may not require manual reviews at all.
2. We need to trace all changes and track history.
3. We have a code promotion strategy/process in place. For risk mitigation, post-check-in code can be promoted to Accepted branches, or it can be rejected.
Pre-check-in reviews are used when:
1. There is a high risk factor associated with the change.
2. Reviewers generally (most of the time) have immediate availability.
3. The team does not have strict tracking needs.
Simply speaking, no single process fits all scenarios. You need to select what works best for your team/project.
Thomas Schissler | Visual Studio ALM Ranger
This is an interesting discussion; I'm right now discussing details about executing code reviews with my teams. I see and understand the aspects you brought in, but there is another side as well that I'd like to point out.
1. If you review every check-in, this is not very practical as a hard rule, because it will disturb the flow of the team very often, or it will reduce the check-in frequency of the devs, which I would not accept.
2. If you do later reviews, for example if you review PBIs, it is not easy to find out which code you should review. Either you review all changesets associated with the PBI, but then you might review code which has been changed by a later check-in and the dev may have already fixed the issue; or you review the diff of the latest changeset of the PBI against the first, but then you might also review changes from other PBIs.
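Thomas's second point, working out which code a PBI review should actually cover, is something the TFS client object model can help with. The sketch below is a rough starting point rather than an official recipe: it assumes changeset links on the work item surface as external links with a vstfs:///VersionControl/Changeset/&lt;id&gt; artifact URI, and the collection URL and work item id are placeholders.

```csharp
using System;
using System.Linq;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.VersionControl.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

// Lists the changesets linked to a PBI so a reviewer can see exactly
// which files were touched for that backlog item.
class PbiReviewScope
{
    static void Main()
    {
        // Collection URL and work item id are placeholders.
        var tpc = new TfsTeamProjectCollection(
            new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
        var store = tpc.GetService<WorkItemStore>();
        var vcs = tpc.GetService<VersionControlServer>();

        WorkItem pbi = store.GetWorkItem(1234);

        // Assumption: changeset links appear as external links whose artifact
        // URI looks like vstfs:///VersionControl/Changeset/<id>.
        var changesetIds = pbi.Links.OfType<ExternalLink>()
            .Select(link => link.LinkedArtifactUri)
            .Where(uri => uri.StartsWith("vstfs:///VersionControl/Changeset/",
                                         StringComparison.OrdinalIgnoreCase))
            .Select(uri => int.Parse(uri.Substring(uri.LastIndexOf('/') + 1)));

        foreach (int id in changesetIds)
        {
            Changeset cs = vcs.GetChangeset(id);
            Console.WriteLine("C{0} by {1}: {2}", cs.ChangesetId, cs.Committer, cs.Comment);
            foreach (Change change in cs.Changes)
            {
                Console.WriteLine("    {0} ({1})", change.Item.ServerItem, change.ChangeType);
            }
        }
    }
}
```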
Jakob Leander | Sr. Director, Avanade
In my experience, manual code review:
1. Does not get done, and at the very least does not get redone after changes (regardless of intentions at the start of the project).
2. When a project actually does it, they often do not do it right away, so errors pile up.
3. Requires a lot of time discussing/defining the standard and for the team to learn it.
However, code review is very important, since even small memory leaks in a high-volume web solution have big consequences. In the last years I have advocated the following approach to code review:
- Architects up front do "at least one best practice example" of each type of component and tell the team: copy from this one. This should include error handling, logging, security etc.
- The dev lead on the project continuously browses code to validate that the best practices are used, especially that patterns etc. are not broken. You can do this formally after each sprint/iteration if you want. Once this is validated it is unlikely to "go bad" even during later code changes.
- Agree with the customer to rely on static code analysis from Visual Studio as the one and only coding standard.
This has HUUGE benefits:
- You can easily tweak it to reach the level you desire together with the customer.
- It is easy to measure for both developers and management.
- It is 100% consistent across the code base.
- It gets validated all the time, so you never end up getting hammered by a customer review in the end.
- It is easy to tell the developer that you do not want code back unless it has zero errors, which minimizes communication.
You need to track this at least during nightly builds and make sure the team sees the total number of issues. Do not allow the number of issues to grow uncontrolled. On the projects I run I require code analysis to have run on the code before check-in (check-in rule). This means:
- You have to have a clean compile (or CA won't run), so as an extra benefit you get very few broken builds.
- You can change a few of the rules to compile as errors instead of warnings. I often do this for "missing dispose" issues, which you REALLY do not want in your app.
Tip: place your custom CA rules files as part of the solution. That way it works when you do branching etc. (the path to the CA file is relative in VS).
Some may argue that CA is not as good as manual inspection. But since manual inspection in reality suffers from the three issues listed at the start, it is in my opinion a MUCH better (and much cheaper) approach from a helicopter perspective.
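The "missing dispose" issues Jakob mentions map to Code Analysis rules such as CA2000 and CA1063, which are good candidates for promoting from warning to error. As a reminder of what those rules expect, here is a small, hypothetical example of the standard dispose pattern; the class and the wrapped resource are made up for illustration.

```csharp
using System;
using System.IO;

// Hypothetical report writer that owns a disposable resource. Implementing
// the standard dispose pattern keeps rules like CA1063/CA2000 quiet.
public class ReportWriter : IDisposable
{
    private readonly StreamWriter _writer;
    private bool _disposed;

    public ReportWriter(string path)
    {
        _writer = new StreamWriter(path);
    }

    public void WriteLine(string line)
    {
        if (_disposed)
        {
            throw new ObjectDisposedException(typeof(ReportWriter).Name);
        }
        _writer.WriteLine(line);
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (_disposed)
        {
            return;
        }
        if (disposing)
        {
            _writer.Dispose(); // release managed resources
        }
        _disposed = true;
    }
}

// Callers wrap the instance in a using block so it is always disposed:
//   using (var writer = new ReportWriter("report.txt"))
//   {
//       writer.WriteLine("build succeeded");
//   }
```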
Tirthankar Dutta | Director, Avanade
I think code review should be run both before and after check-ins. There are some code metrics that are meant to be run on the entire codebase. Also, especially on multi-site projects, one should strive to architect in a way that lets men manage the framework while boys write the repetitive code… it scales very well with the need to review less, by containment and by imposing architectural restrictions to emphasise the design.
Bruno Capuano | Microsoft ALM MVP
For code reviews (meaning peer reviews) in a distributed team I use http://www.vsanywhere.com/default.aspx
David Jobling | Global Sr. Director, Avanade
Peer review is the only way to scale, and it's a great practice for everyone in the team to learn to perform and accept. In my experience you soon learn whose code to watch more than others and tune the attention accordingly.
Mikkel Toudal Kristiansen | Manager, Avanade
If you have several branches in your code base, you will need to merge often. This requires manual merging when a file has been changed in both branches, and it offers a good opportunity to actually review the changed code. So my advice is: merging between branches should be done as often as possible, it should be done by a senior developer, and he/she should perform a full code review of the code being merged. As for detecting architectural smells and code smells creeping into the code base, one really good third-party tool exists: NDepend (http://www.ndepend.com/, for static code analysis of the current state of the code base). You could also consider adding StyleCop to the solution.
Jesse Houwing | Visual Studio ALM Ranger
I gave a presentation on this subject at the TechDays conference in NL last year. See my presentation and slides here (talk in Dutch, but English slides): http://blog.jessehouwing.nl/2012/03/did-you-miss-my-techdaysnl-talk-on-code.html
I'd like to add a few more points:
- Before/after check-in is mostly a trust issue. If you have a team that does diligent peer reviews and regularly talks, sits together or peer programs, there's no need to enforce a before-check-in policy. The peer programming and regular feedback during development can take care of most of the review requirements, as long as the team isn't under stress.
- Under stress, enforce pre-check-in reviews. It might sound strange if you're already under time or budget constraints, but it is under such conditions that most real issues start to be created or pile up.
- Use tools to catch the most common errors. Code Analysis/FxCop was already mentioned; HP Fortify, ReSharper, CodeRush etc. can help you there. There are also a lot of third-party rules you can add to Code Analysis. I've written a few myself (http://fccopcontrib.codeplex.com) and various teams from Microsoft have added their own rules (MSOCAF for SharePoint, WSSF for WCF). For common errors that keep cropping up, see if you can define a rule; it's much easier. But more importantly, make sure you have a good help page explaining *WHY* it's wrong.
- If you have small feature or developer branches/shelvesets, you might want to review pre-merge. It's still better to do peer reviews and peer programming, but the most important thing is that bad-quality code doesn't make it into the important branch.
So my philosophy:
- Use tooling as much as possible.
- Make sure the team understands the tooling and the importance of the things it flags. It's too easy to just click "suppress all" to ignore the warnings.
- Under stress, tighten the process; it's under stress that the problems of late reviews will really surface.
- Most importantly, if you do reviews, do them as early as possible, but never later than needed. In other words, pre-check-in versus post-check-in doesn't really matter, as long as the review is done before the code is released. It will just be much more expensive to fix any review outcomes the later you find them.
I would love to hear what you think!

    Read the article

  • CodePlex Daily Summary for Friday, June 10, 2011

    CodePlex Daily Summary for Friday, June 10, 2011
Popular Releases
- Zen4Sync, Orchestration and Test Load platform for SQL Server Merge Replication: Zen4Sync Report 1.0 - Excel Add-In: Zen4Sync Report is an Excel Add-In allowing you to generate reports based on the data generated by your Test Sessions. Choose a Zen4Sync Report release according to your version of Excel. The downloaded .zip archive contains all the resources needed to install the Zen4Sync Report Add-In for your version of Excel. For instructions on how to install, use and customize Zen4Sync Report, please see the Zen4Sync Report Guide.
- Candescent NUI: Candescent NUI Start Menu: This is the first version of the Candescent NUI Start Menu. There is currently only one item in the start menu by default (Windows Explorer). There is no user interface for configuration yet, but you can add programs yourself by adding lines to the file menu_config.csv. Please don't change the first line. Lines for programs have the following format: Name;Path, e.g. Default Explorer;c:\windows\explorer.exe or MyApp;c:\...\myapp.exe. To show the menu, present your open hand to the Kinect at a distan...
- Media Companion: MC 3.406b weekly: A minor issue was found with 3.406b; the fixed version is now posted. Extract the entire archive to a folder which has user access rights, e.g. desktop, documents etc. Refer to the documentation on this site for the Installation & Setup Guide. Important! If you find MC not displaying movie data properly, please try a 'movie rebuild' to reload the data from the nfo's into MC's cache. Fixes (Movies): re-added movie preference to rename invalid or scene nfo's to info extension; fix crash during ...
- SCCM Client Actions Tool: SCCM Client Actions Tool v0.5.1: SCCM Client Actions Tool v0.5.1 is currently the most stable version and includes all of the functionality requested so far. It comes with the following changes since the last version: fixed an incorrect path to the x64 client setup folder. It comes as a ZIP file that contains three files: ClientActionsTool.hta (the tool itself); Cmdkey.exe (command line tool for managing cached credentials; this is needed for the alternate credentials feature when running the HTA on Windows XP; Cmdkey.exe is natively a...)
- Windows Azure VM Assistant: AzureVMAssist V1.0.0.5: AzureVMAssist V1.0.0.5 (Debug) - Test Release Version
- SizeOnDisk: 1.8.0.3: Fix (issue 317): Main window icon loading error on Windows Server 2003 32bit running on x86 CPU. Bypass of the Microsoft Windows COMException.
- SketchFlow Template for Windows Phone: SketchFlow Template for Windows Phone 7: Initial release. Please make sure you are using a version of Blend with SketchFlow enabled and have also installed the Mango developer tools for Windows Phone.
- fastJSON: v1.8: Silverlight 4.0+ support merged; RegisterCustomType() for user-defined serialization/deserialization without changing the source code (open closed principle).
- NetOffice - The easiest way to use Office in .NET: NetOffice Release 0.9: Changes: fix examples (include issue 16026); add new examples; 32Bit/64Bit Walkthrough is now available in technical documentation. Includes: Runtime Binaries and Source Code for .NET Framework v2.0, v3.0, v3.5, v4.0; Tutorials in C# and VB.Net (COM Proxy Management, Events, etc.); Examples in C# and VB.Net (Excel, Word, Outlook, PowerPoint, Access); COMAddi...
- Reusable Library: V1.1.3: A collection of reusable abstractions for enterprise application developer.
- ClosedXML - The easy way to OpenXML: ClosedXML 0.54.0: New on this release: 1) Major performance improvements. 2) AdjustToContents now takes into account the text rotation. 3) Fixed issues 6782, 6784, 6788.
- HTML-IDEx: HTML-IDEx .15 ALPHA: This release fixes line counting a little bit and adds the masshighlight() sub, which highlights pasted and inserted code.
- AutoLoL: AutoLoL v2.0.3: Improved summoner spells are now displayed; fixed some of the startup errors people got; double clicking an item selects it; some usability changes that make using AutoLoL just a little easier; bug fixes. AutoLoL v2 is not an update, but an entirely new version! Please install to a different directory than AutoLoL v1.
- Host Profiles: Host Profiles 1.0: Host Profiles 1.0 Release. Quickly modify host file. Automatically flush DNS.
- VidCoder: 0.9.2: Updated to HandBrake 4024svn. This fixes problems with mpeg2 sources: corrupted previews, incorrect progress indicators and encodes that incorrectly report as failed. Fixed a problem that prevented target sizes above 2048 MB.
- SharePoint Search XSL Samples: SharePoint 2010 Samples: I have updated some of the samples from the 2007 release. These all work in SharePoint 2010. I removed the Pivot on File Extension because SharePoint 2010 search has refiners that perform the same function.
- AcDown????? - Anime&Comic Downloader: AcDown????? v3.0 Beta5
- VFPX: GoFish 4 Beta 1: Current beta is Build 144 (released 2011-06-07). See the GoFish4 info page for details and video link: http://vfpx.codeplex.com/wikipage?title=GoFish
- Sterling NoSQL OODB for .NET 4.0, Silverlight 4 and 5, and Windows Phone 7: Sterling OODB v1.5: Welcome to the Sterling 1.5 RTM. This version is backwards compatible without modification to the 1.4 beta. For the 1.0, you will need to upgrade your database. Please see this discussion for details. You must modify your 1.0 code for persistence. The 1.5 version defaults to an in-memory driver. To save to isolated storage or use one of the new mechanisms, see the available drivers and pass an instance of the appropriate one to your database (different databases may use different drivers). ...
- patterns & practices: Project Silk: Project Silk Community Drop 10 - June 3, 2011: Changes from previous drop: many code changes, please see the readme.mht for details; new "Application Notifications" chapter; updated "Server-Side Implementation" chapter. Guidance Chapters Ready for Review: the Word documents for the chapters are included with the source code in addition to the CHM to help you provide feedback. The PDF is provided as a separate download for your convenience. Installation Overview: to install and run the reference implementation, you must perform the fol...
New Projects
- Angry Apps: A game platform written on top of XNA Game Studio. The purpose of this project is to provide a vanilla-type Game project which can be used in many types of games.
- BLooD_ICQ: bloodicq
- Cloud Fox: Cloud Fox is a Windows Phone application that allows you to view your Firefox Sync data on your mobile phone. It is similar to Firefox Home for iPhone. The current version will target phones running NoDo, but future versions will eventually require Mango. The application is developed in C# using Silverlight, Json.Net, Mvvm Light and Ninject.
- Configuration files Merger: This program helps to merge different config files (environmental differences) and a common config file into a single web.config/app.config.
- CRM 2011 Plugin Utilities: This project contains utilities for CRM 2011 plugins. Generate/calculate the full name of a custom entity given the first, last and middle name.
- CUDA driver API: Making the CUDA driver API as simple to use as the runtime API. Almost.
- DBXMLTransfer: Command line application that: 1) extracts xml returned by a stored procedure to a file and 2) passes xml contained in a file to a stored procedure (which can use it for inserts & updates for example).
- Dot Generator: This is a project I did to generate numbers using the number of dots in the number itself to get a visual representation of how big large numbers are.
- DRYlib.Net: DRY (Don't Repeat Yourself) -- or, in other words, code-reuse -- made me create this Class Library so I don't have to keep creating the same code here and there. Feel free to use these snippets. I'm releasing them under a permissive license (Apache Public License 2.0). The DRYlib is created using Visual BASIC 2010 Express. What you can find in this DRYlib includes, but is not limited to: CRC Hash algorithms; simple, high-performance Integer extensions; simple, oft-used Stri...
- DW.Configurations: Enables easy handling of application settings. Please see www.my.libraries.de for more information and documentation.
- DW.Game.MauMau: A MauMau game with the possibility to define all rules.
- DW.Game.Sudoku: It's just a Sudoku game.
- DW.Interactivity: Brings additional functionality to WPF controls. Please see www.my.libraries.de for more information and documentation.
- DW.Logging: Supports easy working with log files. Please see www.my.libraries.de for more information and documentation.
- DW.Services: Brings standard services. Please see www.my.libraries.de for more information and documentation.
- DW.SharpTools: Brings additional possibilities to C#. Please see www.my.libraries.de for more information and documentation.
- DW.UnitTests: Gives some objects for easy unit tests. Please see www.my.libraries.de for more information and documentation.
- DW.WPFDev: Some useful objects for developing custom controls and behaviors. Please see www.my.libraries.de for more information and documentation.
- DW.WPFToolkit: A custom controls library. Please see www.my.libraries.de for more information and documentation.
- Elucidate: A GUI to drive the SnapRAID command line (all supported platforms). Definition: explain in detail. Synonyms: annotate, clarify, clear, clear up, decode, demonstrate, enlighten, exemplify, explicate, expound, get across, illuminate, illustrate, interpret, make perfectly clear. This will take on the task of creating a SnapRAID GUI to drive its command line options, but give a little help and clarification (and logging) to guide the novice user.
- FixWordProperties for Office 2003, 2007 and 2010: Ken Getz originally wrote FixWordProperties, I believe in 2006. I had a few extra requirements, like the ability to unlock locked files without passwords using the Office interop model instead of Word, and a few more things that I needed in the winter of 2007.
- Information System Alumni Community: Here is our Internet Programming project. This site helps alumni to always keep interacting with their friends through this site. They can track a friend's name, city and occupation and then interact with them (whoa... like a social network, right!). Go check this out.
- int main code samples: Repository for all demos/samples posted at http://blog.r2d2rigo.es/english/
- LarX - XNA Game Engine: LarX is an XNA Game Engine, 2D and 3D, that uses SunBurn for rendering, sgMotion for animations, and BEPU for physics. It enables developers to quickly write AAA games.
- Membership, Role and Profile Providers for develop, debug, test: These ASP.NET providers for Membership, Roles and Profiles are valuable during development, debugging and testing due to the ease of creating, removing and changing users, user roles, etc.
- Music search engine: Ilovethismusic.com lets you listen & download music based on your mood, weather, or any type of expression you can think of. Just press Play.
- NicolasLight: This is for a business project.
- Olympic.Magazine: Sport supplements e-shop project.
- Project Obscura: Project Obscura is a game about to be in development by a group of friends, more data soon...
- RegularExpressionTest: .
- SalesManagementSystem: SalesManagementSystem
- Shoozla: Very simple and powerful tool to search for missing covers. It finds album covers automatically and periodically. It uses the LASTFM web service (website registration needed). WPF application developed in C# following the MVVM design pattern.
- Silverlight Mind Map demo: A Mind Map control library and sample application created in Silverlight. Although the library can be useful by itself, the main goal of this project is to serve as a demo and reference application for a wiki that shows different Silverlight testing techniques.
- SketchFlow Template for Windows Phone: The SketchFlow Template for Windows Phone adds a new SketchFlow template for Expression Blend users, making it fast and easy to prototype Windows Phone applications.
- Smart Blog: Smart Blog ????,??ASP.NET??,?????Entity Framework、MVC 3.0(razor)、WCF???。???????SqlCE?????,????????,????、??。
- SQLiteManager (sys_27): SQLiteManager makes it easy to manage SQLite databases. It's developed in C# (WPF, using the MVVM pattern).
- testing access to TFS: This is just a trivial set of test files to learn about TFS. I am testing each step of this process and will try to document it for others. Maybe this is obvious to others, but I am still learning TFS.
- TFS NuGetter: NuGetter is a TFS 2010 Build Activity designed to provide packaging and deployment management to projects destined for a NuGet Gallery.

    Read the article
