Search Results

Search found 35291 results on 1412 pages for 'start menu'.


  • XNA, C# - Check if a Vector2 path crosses another Vector2 path.

    - by Nick
    Hello all, I have an XNA question for those with more experience in these matters than myself (maths). Background: I have a game that implements a boundary class; it simply hosts two Vector2 objects, a start and an end point. The current implementation crudely handles collision detection by assuming boundaries are always vertical or horizontal, i.e. if start.X and end.X are the same, it checks that I am not trying to cross that X value, etc. Ideally, what I would like to implement is a method that accepts two Vector2 parameters: the first being the current location, and the second being the requested location (where I would like to move to, assuming no objections). The method would also accept a boundary object. What the method should then do is tell me whether I am going to cross the boundary in this move. This could be a bool, or ideally something representing how far I can actually move. This empty method might explain better than I can in words:

        /// <summary>
        /// Checks the move.
        /// </summary>
        /// <param name="current">The current location.</param>
        /// <param name="requested">The requested location.</param>
        /// <param name="boundry">The boundary.</param>
        /// <returns></returns>
        public bool CheckMove(Vector2 current, Vector2 requested, Boundry boundry)
        {
            // Return a bool that indicates whether the suggested move will cross the boundary.
            return true;
        }
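
    One standard way to implement this (not from the thread itself) is a segment-segment intersection test using 2D cross products; a minimal sketch, assuming the Boundry class exposes its two points as Start and End:

        // Sketch only: Boundry is assumed to expose the two points described above.
        public class Boundry
        {
            public Vector2 Start;
            public Vector2 End;
        }

        // Returns true if the segment current->requested crosses the boundary segment.
        public bool CheckMove(Vector2 current, Vector2 requested, Boundry boundry)
        {
            Vector2 r = requested - current;           // direction of the move
            Vector2 s = boundry.End - boundry.Start;   // direction of the boundary
            float rxs = r.X * s.Y - r.Y * s.X;         // 2D cross product

            // Parallel (or zero-length) segments are treated as not crossing here.
            if (Math.Abs(rxs) < 1e-6f)
                return false;

            Vector2 qp = boundry.Start - current;
            float t = (qp.X * s.Y - qp.Y * s.X) / rxs; // how far along the move
            float u = (qp.X * r.Y - qp.Y * r.X) / rxs; // how far along the boundary

            // Both parameters must land inside [0,1] for the segments to cross;
            // t also tells you how far you can move before hitting the boundary.
            return t >= 0f && t <= 1f && u >= 0f && u <= 1f;
        }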

    Read the article

  • Creating a new window that stays on top even when in full screen mode (Qt on Linux)

    - by Lorenz03Tx
    I'm using Qt 4.6.3 and Ubuntu Linux on an embedded target. I call dlg->setWindowState(Qt::WindowFullScreen); on the windows in my application (so I don't lose any real estate on the touch screen to the task bar and status panel on the top and bottom of the screen). This all works fine and as expected. The issue comes in when I want to pop up the on-screen keyboard to allow the user to input some data. I use

        m_keyProc = new QProcess();
        m_keyProc->start("onboard -s 640x120");

    This pops up the keyboard, but it is behind the full screen window. The onboard keyboard's preferences are set such that it is always on top, but that seems to actually mean "except for full screen windows". I guess that makes sense and probably meets most use cases, but I need it to be really on top. Can I either A) not be in full screen mode (so the keyboard works) and programmatically hide the task bars, or B) force the keyboard to be on top despite my full screen status? Note: on Windows we call m_keyProc->start("C:\\Windows\\system32\\osk.exe"); and the osk keyboard is on top despite the full screen status. So I'm guessing this is a difference in window managers on the different operating systems. So do I need to set some flag on the window with the Linux window manager?

    Read the article

  • Why would GNU ld link order cause a Signal 11 (SEGV) on startup?

    - by Benoit
    We are building a large application in C++ that includes the use of many (static) libraries. We have a problem where the application crashes on startup with a Signal 11, before we even reach main. After much debugging, we have observed that if we explicitly reference an object file so its link order is early, the program crashes on startup. If the file is referenced later (or not referenced at all), the program does not crash. To be clear, there is NO code invoked directly from this object file. However, as it is C++, there might be static objects that do get constructed (it's a CORBA IDL generated file). We use the -Wl,--start-group ... --end-group arguments to multi-pass link the symbols since the libraries are interdependent. Here is a representation of what I mean. This is what the linker's object file order is:

        Order 1      Order 2      Order 3
        foo.o        foo.o        foo.o
        ...          ...          ...
        main.o       main.o       main.o
        crasher.o    libA.o       libA.o
        libA.o       LibB.o       LibB.o
        LibB.o       LibC.o       LibC.o
        LibC.o                    crasher.o

        Results:  NO CRASH     NO CRASH     CRASH

    Does anyone have an idea why the linkage order has an effect on the crash? It would be nice if we could force the crasher.o to link later, but we're really after an explanation. Also, is there a way to force the linker to place crasher.o towards the end? Just to add to the fun, in actuality, crasher.o is part of a library in the --start/--end-group.

    Read the article

  • How to make custom ROMs for lesser-known Android devices (e.g., WellcoM A69)

    - by gonzobrains
    Hi, I have a WellcoM A69 Android phone I would like to start hacking. I bought it in Thailand (I think WellcoM is a Thai company). However, the docs don't explain how to get a recovery menu or anything like that. I would like to figure out how to make custom ROMs for it, because it doesn't have any Google Experience apps and I also want to change the boot screen. How can I go about doing this? If I can't do this, I want to at least be able to use the device in the Eclipse debugger. I select the debugging option under Applications, but the device still isn't recognized. Is this something that can be disabled by the manufacturer? In any case, I would like to re-enable it if a custom ROM can allow this. If this cannot be done at all, please at least point me in the direction of where I can start writing/building my own ROMs for the G1. I figured that would be a good starting point for learning. Thanks, Jeff

    Read the article

  • Calculations on the iteration count in a for loop in Ruby 1.8.7

    - by user1805035
    I was playing around with Ruby and LaTeX to create a color coding set. I'm more than a novice with C++, but I hadn't looked at Ruby until now, so I'm still learning a lot of coding. I have the following block of code below. When attempting to run this, band1 = 1e+02. I've tried band1 = (BigDecimal(i) * 100).to_f, thinking maybe there was some odd floating point issue. This is just me trying anything, though, as an integer multiplied by an integer should create an integer, if I'm still thinking correctly. I've tried a variety of other things as well (things that I can do in C++, but this ain't C++), but to no avail.

        (1..9).each do |i|        # Band 1
          (0..9).each do |j|      # Band 2
            (0..11).each do |k|   # Band 3
              # Band 3 Start
              # these are the colors of the resistor bands
              b1 = $c_band12[i]
              b2 = $c_band12[j]
              b3 = $c_band3[k]
              b4 = "Gold"
              oms = ((i*100) + (j*10)) * $mult[k]
              band1 = i*100
              band2 = j
              band3 = $mult[k]
            end
          end
        end

    Not sure what I'm missing. Should I be using .each_with_index through these iterations? I've tried this:

        (1..9).each_with_index {|i, indexi|      # Band 1
          (0..9).each_with_index {|j, indexj|    # Band 2
            (0..11).each_with_index {|k, indexk| # Band 3
              # Band 3 Start
              # these are the colors of the resistor bands
              b1 = $c_band12[i]
              b2 = $c_band12[j]
              b3 = $c_band3[k]
              b4 = "Gold"
              oms = ((i*100) + (j*10)) * $mult[k]
              band1 = indexk * 100

    and I get the same answer. I can't see why 1*100 should equate to such a large number. Thank you, AT

    Read the article

  • Game design flaw, need help investigating

    - by Snake
    I am not sure if I will be able to get help here, but I'll give it a shot. The problem is I don't know where the problem is. I have a card game in which, when you (the human) play by dragging a card, a handler using postExecute is called at the end of the drag, with a delay of 0.5 sec, to start the next player in turn (which is a bot). The bot chooses the color and plays it, and at the end of the animation (the card moving to the middle) a handler is started for the next bot, and so on. Once play reaches the human player again, it waits for his touches to drag the card and start the cycle again. The problem is that in production I sometimes get errors. The error results in somehow messing up the sequence, which ends up with players having more cards than others. After investigation, I found that the transition from human to bot is the problem. Somehow, the transition happens twice (meaning the handler calls postExecute twice, the bot plays twice, and everything is messed up). It's been multiple months and I can't reproduce it (to fix it), and I can't figure out why this is happening. ANY IDEA how I can go after it? How can I get more info about it, or how can I solve something like that? Any pointer would help me.
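
    One defensive pattern for double-fired callbacks like this (a sketch in C#, though the same idea ports to whatever handler mechanism the game uses) is to stamp each scheduled turn with a token and ignore stale callbacks; the suppressed-callback log line would also confirm whether the double transition really happens:

        using System;
        using System.Threading;
        using System.Threading.Tasks;

        class TurnScheduler
        {
            private int turnToken; // incremented every time a new turn is scheduled

            // Schedules playTurn after a delay; if another turn gets scheduled
            // in the meantime, the older callback is recognized as stale and skipped.
            public void ScheduleNextTurn(Action playTurn, int delayMs = 500)
            {
                int token = Interlocked.Increment(ref turnToken);
                Task.Delay(delayMs).ContinueWith(_ =>
                {
                    if (token != Volatile.Read(ref turnToken))
                    {
                        Console.WriteLine("stale turn callback suppressed"); // evidence of the bug
                        return;
                    }
                    playTurn();
                });
            }

            static void Main()
            {
                var scheduler = new TurnScheduler();
                scheduler.ScheduleNextTurn(() => Console.WriteLine("bot plays"));
                scheduler.ScheduleNextTurn(() => Console.WriteLine("bot plays (only this one should run)"));
                Thread.Sleep(1000); // let the delayed callbacks fire before exiting
            }
        }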

    Read the article

  • How can I display a configuration form before the main form?

    - by Hamidm
    Hello, in my project I have two forms (Form1, Form2); Form1 is a configuration form. I want to show Form1, and when we click Button1, show Form2 and free (Release) Form1. How can I do this? I use the code below, but the project starts and then exits automatically. A friend said this is because the application message loop never starts, and the application terminates because the main form does not exist. How can I solve this problem?

        uses Unit2;

        {$R *.dfm}

        procedure TForm1.Button1Click(Sender: TObject);
        begin
          Application.CreateForm(TForm2, Form2);
          Release;
        end;

        ///

        program Project1;

        uses
          Forms,
          Unit1 in 'Unit1.pas' {Form1},
          Unit2 in 'Unit2.pas' {Form2};

        {$R *.res}

        begin
          Application.Initialize;
          Application.MainFormOnTaskbar := True;
          Form1 := TForm1.Create(Application);
          Application.Run;
        end.

    Read the article

  • Array values changing unexpectedly

    - by Lizard
    I am using CakePHP 1.2 and I have an array that appears to have a value change even though that variable is not being manipulated. Below is the code that is causing me trouble. PLEASE NOTE - UPDATE: changing the variable name makes no difference to the outcome. The values get changed somewhere between the two print_r calls, and I can't see why the $this->find would do this.

        echo "Start of findCountByString()";
        print_r($myArr);
        $test = $this->find('count', array(
            'conditions' => $conditions,
            'joins' => array('LEFT JOIN `articles_entities` AS ArticleEntity ON `ArticleEntity`.`article_id` = `Article`.`id`'),
            'group' => 'Article.id'
        ));
        echo "End of findCountByString()";
        print_r($myArr);

    I am getting the following output:

        Start of findCountByString()
        Array
        (
            [0] => 4bdb1d96-c680-4c2c-aae7-104c39d70629
            [1] => 4bdb1d6a-9e38-479d-9ad4-105c39d70629
            [2] => 4bdb1b55-35f0-4d22-ab38-104e39d70629
            [3] => 4bdb25f4-34d4-46ea-bcb6-104f39d70629
        )
        End of findCountByString()
        Array
        (
            [0] => 4bdb1d96-c680-4c2c-aae7-104c39d70629
            [1] => 4bdb1d6a-9e38-479d-9ad4-105c39d70629
            [2] => 4bdb1b55-35f0-4d22-ab38-104e39d70629
            [3] => '4bdb25f4-34d4-46ea-bcb6-104f39d70629'   # This is now in inverted commas
        )

    The values in my array have changed, and I don't know why. Any suggestions?

    Read the article

  • Java threads don't see shared boolean changes

    - by andymur
    Here is the code:

        class Aux implements Runnable {
            private Boolean isOn = false;
            private String statusMessage;
            private final Object lock;

            public Aux(String message, Object lock) {
                this.lock = lock;
                this.statusMessage = message;
            }

            @Override
            public void run() {
                for (;;) {
                    synchronized (lock) {
                        if (isOn && "left".equals(this.statusMessage)) {
                            isOn = false;
                            System.out.println(statusMessage);
                        } else if (!isOn && "right".equals(this.statusMessage)) {
                            isOn = true;
                            System.out.println(statusMessage);
                        }
                        if ("left".equals(this.statusMessage)) {
                            System.out.println("left " + isOn);
                        }
                    }
                }
            }
        }

        public class Question {
            public static void main(String[] args) {
                Object lock = new Object();
                new Thread(new Aux("left", lock)).start();
                new Thread(new Aux("right", lock)).start();
            }
        }

    In this code I expect to see "left", "right", "left", "right" and so on, but when the thread with the "left" message changes isOn to false, the thread with the "right" message doesn't see it, and I get "right true" and "left false" console messages. The left thread doesn't get isOn in true, but the right thread can't change it because it always sees the old isOn value (true). When I add the volatile modifier to isOn, nothing changes; but if I change isOn to some class with a boolean field and change that field instead, then the threads see the changes and it works fine. Thanks in advance.

    Read the article

  • jQuery variable and object caching

    - by niksy
    This is something that has been bugging me for some time, and every time I've found myself using a different solution for it. So, I have a link in my document which on click creates a new element with some ID: <a href="#" id="test-link">Link</a>. For the purpose of easier reuse, I would like to store that new element's ID in a variable which is a jQuery object: var test = $('#test');. On click I append that new element to the body; the new element is a DIV: $('body').append('<div id="test"/>');. And here goes the main "problem": if I test this new element's length with test.length it first returns 0 and later 1. But when I test it with $('#test').length it returns 1 from the start. I suppose it is some caching mechanism, and I was wondering whether there is a better, all-around solution which will allow storing elements in variables at the start for later reuse, and at the same time work with dynamically created elements. Live, delegate, something else? What I do sometimes is create a string and add it to a jQuery object, but I think this is just avoiding the real issue. Also, using .find() inside another jQuery object. Thanks in advance.

    Read the article

  • How do I pause main() until all other threads have died?

    - by thechiman
    In my program, I am creating several threads in the main() method. The last line in the main method is a call to System.out.println(), which I don't want to call until all the threads have died. I have tried calling Thread.join() on each thread; however, that blocks each thread so that they execute sequentially instead of in parallel. Is there a way to block the main() thread until all other threads have finished executing? Here is the relevant part of my code:

        public static void main(String[] args) {
            // some other initialization code

            // Make array of Thread objects
            Thread[] racecars = new Thread[numberOfRaceCars];

            // Fill array with RaceCar objects
            for (int i = 0; i < numberOfRaceCars; i++) {
                racecars[i] = new RaceCar(laps, args[i]);
            }

            // Call start() on each Thread
            for (int i = 0; i < numberOfRaceCars; i++) {
                racecars[i].start();
                try {
                    racecars[i].join(); // This is where I tried using join().
                                        // It just blocks all other threads until the
                                        // current thread finishes.
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }

            // This is the line I want to execute after all other Threads have finished
            System.out.println("It's Over!");
        }

    Thanks for the help guys! Eric
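
    The usual fix for this shape of problem (the general pattern, not code from the thread) is to start all the threads in one loop and only then join them in a second loop, so the join blocks only main() while the workers run in parallel. A minimal C# sketch of that structure; the same shape applies in Java:

        using System;
        using System.Threading;

        class JoinAfterStart
        {
            static void Main()
            {
                const int count = 4;
                var racers = new Thread[count];

                // First loop: start everything, so all threads run in parallel.
                for (int i = 0; i < count; i++)
                {
                    int lane = i; // capture the loop variable for the closure
                    racers[i] = new Thread(() => Console.WriteLine("car {0} finished", lane));
                    racers[i].Start();
                }

                // Second loop: only now block, waiting for each thread to die.
                foreach (var t in racers)
                    t.Join();

                Console.WriteLine("It's Over!"); // runs after every thread has finished
            }
        }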

    Read the article

  • How to improve my software project's speed?

    - by Blitzkr1eg
    I'm doing a school software project with my classmates in Java. We store the info in a remote db. When we start the application we pull all the information from the database and transform it into objects to use in our application (using Java SQL statements). In the application we edit some of these objects, and then when we exit the application we save or update the information in the database using Hibernate. As you see, we don't use Hibernate for pulling in information; we use it just for saving and updating. We have two very similar problems: the loading of objects (when we start the app) and the saving of objects with Hibernate (when closing the app) both take too much time. And our project is not a huge enterprise application; it's quite a small app. We just manage some students, teachers, homework and tests, so our db is also very, very small. How could we increase performance? Later edit: if we use a local database it runs very quickly; it is just slow on remote databases.

    Read the article

  • Form doesn't resize smoothly with a timer event

    - by BDotA
    I have a grid control at the bottom of my form, and it can be shown or hidden if the user wants to show/hide it. So one way was to simply use AutoSize on the form and change the Visible property of that grid to true or false... But I thought, let's make it a little cooler! I wanted the form to resize a little more slowly, like a garage door! So I dropped a Timer on the form and started increasing the height of the form little by little while the timer ticks... so, something like this when the user says show/hide the grid:

        timer1.Enabled = true;
        timer1.Start();

    and something like this in the timer's Tick event:

        this.Height = this.Height + 5;
        if (this.Height - 10 > ErrorsGrid.Bottom)
            timer1.Stop();

    It kind of works, but it's still not perfect. For example, it lags at the very beginning: it stops for about a second and then starts moving... So now, with this idea in mind, what alterations do you suggest to make this thing look and work better?
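
    Two things that usually help here (a sketch under assumptions, with hypothetical names, not a verified fix for this exact form): the WinForms Timer defaults to a 100 ms interval and fires its first tick only after a full interval, which can read as the startup lag, so lower the interval; and easing toward a target height looks smoother than fixed 5-pixel steps:

        using System;
        using System.Windows.Forms;

        public class GarageDoorForm : Form
        {
            private readonly Timer timer1 = new Timer();
            private int targetHeight;                // where the "door" should stop
            private const int CollapsedHeight = 300; // placeholder sizes for the sketch
            private const int ExpandedHeight = 500;

            public GarageDoorForm()
            {
                timer1.Interval = 15;   // default is 100 ms, which looks jerky
                timer1.Tick += OnTick;
            }

            private void ToggleGrid(bool show)
            {
                targetHeight = show ? ExpandedHeight : CollapsedHeight;
                timer1.Start();
            }

            private void OnTick(object sender, EventArgs e)
            {
                int remaining = targetHeight - Height;
                if (Math.Abs(remaining) <= 2)
                {
                    Height = targetHeight;  // snap to the final size and stop
                    timer1.Stop();
                    return;
                }
                // Ease out: cover a fraction of the remaining distance each tick,
                // so the door moves fast at first and settles gently at the end.
                Height += Math.Sign(remaining) * Math.Max(1, Math.Abs(remaining) / 4);
            }
        }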

    Read the article

  • F#: why is my recursion faster than Seq.exists?

    - by user38397
    I am pretty new to F#. I'm trying to understand how I can get fast code in F#. For this, I tried to write two methods (IsPrime1 and IsPrime2) for benchmarking. My code is:

        // Learn more about F# at http://fsharp.net
        open System
        open System.Diagnostics
        #light

        let isDivisible n d = n % d = 0

        let IsPrime1 n =
            Array.init (n-2) ((+) 2)
            |> Array.exists (isDivisible n)
            |> not

        let rec hasDivisor n d =
            match d with
            | x when x < n -> (n % x = 0) || (hasDivisor n (d+1))
            | _ -> false

        let IsPrime2 n = hasDivisor n 2 |> not

        let SumOfPrimes max =
            [|2..max|] |> Array.filter IsPrime1 |> Array.sum

        let maxVal = 20000
        let s = new Stopwatch()
        s.Start()
        let valOfSum = SumOfPrimes maxVal
        s.Stop()
        Console.WriteLine valOfSum
        Console.WriteLine("IsPrime1: {0}", s.ElapsedMilliseconds)

        //////////////////////////////////
        s.Reset()
        s.Start()

        let SumOfPrimes2 max =
            [|2..max|] |> Array.filter IsPrime2 |> Array.sum

        let valOfSum2 = SumOfPrimes2 maxVal
        s.Stop()
        Console.WriteLine valOfSum2
        Console.WriteLine("IsPrime2: {0}", s.ElapsedMilliseconds)
        Console.ReadKey()

    IsPrime1 takes 760 ms while IsPrime2 takes 260 ms for the same result. What's going on here, and how can I make my code even faster?
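
    A likely explanation (my reading; not quoted from the thread): Array.init in IsPrime1 materializes the whole divisor array before Array.exists can short-circuit, while the recursion in IsPrime2 stops at the first divisor it finds. The same eager-versus-lazy contrast, sketched in C#:

        using System;
        using System.Linq;

        class EagerVsLazy
        {
            // Eager: builds the entire divisor array first, like Array.init in IsPrime1.
            static bool IsPrimeEager(int n) =>
                !Enumerable.Range(2, Math.Max(0, n - 2)).ToArray().Any(d => n % d == 0);

            // Lazy: stops at the first divisor found, like the recursive hasDivisor.
            static bool IsPrimeLazy(int n) =>
                !Enumerable.Range(2, Math.Max(0, n - 2)).Any(d => n % d == 0);

            static void Main()
            {
                Console.WriteLine(IsPrimeEager(19997)); // allocates ~20000 ints up front
                Console.WriteLine(IsPrimeLazy(19997));  // bails out early on composites
            }
        }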

    Read the article

  • Getting (this) in a different function

    - by twen_ta
    I have a couple of input fields with the class "link". All of them should start the jQuery UI dialog, so this is why I bind the method to a class and not a single id. The difficulty is that I can't use the (this) in line 12, because that gives me the identity of the dialog and not the input element. As I am a beginner, I don't know how to pass a variable with the input field's element to this event. What I want to achieve is that the dialog should start from the input field and should write the result back to that input field.

        1.  // this is the click event for the input-field class called "link"
        2.  $('.link')
        3.      .button()
        4.      .click(function() {
        5.          $('#dialog-form').dialog('open');
        6.
        7.      });
        8.
        9.  // this is an excerpt from the opened dialog box and the write-back to the input field
        10. $("#dialog-form").dialog({
        11.     if (bValid) {
        12.         $('.link').val('' +
        14.             name.val() + '');
        15.         $(this).dialog('close');
        16.     }
        17. });

    Read the article

  • Bluetooth BluetoothSocket.connect() thread: how to close this thread

    - by Hia
    I am trying to make an Android app that makes a connection to a Bluetooth device. It works fine, but when I call BluetoothSocket.connect() and it is not able to connect to the device, it blocks the thread and does not throw any exception. So when I try to close the application while connect() is running, it does not respond. How can I cancel it? I used BluetoothSocket.close() in ... but still it's not working for me.

        protected void simpleComm(Integer port) {
            // The documents tell us to cancel the discovery process.
            try {
                Method m = mmDevice.getClass().getMethod("createRfcommSocket", new Class[] { int.class });
                mmSocket = (BluetoothSocket) m.invoke(mmDevice, port);
                mmSocket.connect(); // <== blocks until it is connected
                Log.d(TAG, " connection success===");
            } catch (Exception e) {
                if (!abort) {
                    connectionFailed();
                    // Close the socket
                    try {
                        mmSocket.close();
                        // Start the service over to restart listening mode
                        BluetoothService.this.start();
                    } catch (IOException e2) {
                        Log.e(TAG, "unable to close() socket during connection failure", e2);
                    }
                }
                return;
            }
        }

        public void cancel() {
            try {
                abort = true;
                mmSocket.close();
            } catch (IOException e) {
                Log.e(TAG, "close() of connect socket failed", e);
            }
        }

    Read the article

  • Run two threads at the same time in Java

    - by user1805005
    I have used TimerTask to schedule my Java program. Now, when the run method of the TimerTask is in process, I want to run two threads which run at the same time and do different things. Here is my code; please help me.

        import java.util.Calendar;
        import java.util.Date;
        import java.util.Timer;
        import java.util.TimerTask;

        public class timercheck extends TimerTask {

            // my first thread
            Thread t1 = new Thread() {
                public void run() {
                    for (int i = 1; i <= 10; i++) {
                        System.out.println(i);
                    }
                }
            };

            // my second thread
            Thread t2 = new Thread() {
                public void run() {
                    for (int i = 11; i <= 20; i++) {
                        System.out.println(i);
                    }
                }
            };

            public static void main(String[] args) {
                long ONCE_PER_DAY = 1000 * 60 * 60 * 24;

                Calendar calendar = Calendar.getInstance();
                calendar.set(Calendar.HOUR_OF_DAY, 12);
                calendar.set(Calendar.MINUTE, 05);
                calendar.set(Calendar.SECOND, 00);
                Date time = calendar.getTime();

                TimerTask check = new timercheck();
                Timer timer = new Timer();
                timer.scheduleAtFixedRate(check, time, ONCE_PER_DAY);
            }

            @Override // run method of the timer task
            public void run() {
                t1.start();
                t2.start();
            }
        }

    Read the article

  • Q1 2010 New Feature: Paging with RadGridView for Silverlight and WPF

    We are glad to announce that the Q1 2010 Release has added another weapon to RadGridView's growing arsenal of features. This is the brand new RadDataPager control, which provides the user interface for paging through a collection of data. The good news is that RadDataPager can be used to page any collection. It does not depend on RadGridView in any way, so you will be free to use it with the rest of your ItemsControls if you choose to do so. Before you read on, you might want to download the samples solution that I have attached. It contains a sample project for every scenario that I will discuss later on. Looking at the code while reading will make things much easier for you. There is something for everyone among the 10 Visual Studio projects that are included in the solution. So go and grab it.

    I. Paging essentials

    The single most important piece of software concerning paging in Silverlight is the System.ComponentModel.IPagedCollectionView interface. Those of you who are on the WPF front need not worry, though. As you might already know, Telerik's Silverlight and WPF controls share the same code base. Since WPF does not contain a similar interface, Telerik has provided its own Telerik.Windows.Data.IPagedCollectionView. The IPagedCollectionView interface contains several important members which are used by RadGridView to perform the actual paging. Silverlight provides a default implementation of this interface which, naturally, is called PagedCollectionView. You should definitely take a look at its source code in case you are interested in what is going on under the hood, but this is not a prerequisite for our discussion. The WPF default implementation of the interface is Telerik's QueryableCollectionView which, among many other interfaces, implements IPagedCollectionView.

    II. No Paging

    In order to gradually build up my case, I will start with a very simple example that lacks paging whatsoever. It might sound stupid, but this will help us build on top of this paging-devoid example. Let us imagine that we have the simplest possible scenario: a simple IEnumerable and an ItemsControl that shows its contents. This will look like this:

    No Paging

        IEnumerable itemsSource = Enumerable.Range(0, 1000);
        this.itemsControl.ItemsSource = itemsSource;

    XAML

        <Border Grid.Row="0" BorderBrush="Black" BorderThickness="1" Margin="5">
            <ListBox Name="itemsControl"/>
        </Border>
        <Border Grid.Row="1" BorderBrush="Black" BorderThickness="1" Margin="5">
            <TextBlock Text="No Paging"/>
        </Border>

    Nothing special for now. Just some data displayed in a ListBox. The two sample projects in the solution that I have attached are:

    NoPaging_WPF
    NoPaging_SL3

    With every next sample those two projects will evolve in some way or another.

    III. Paging simple collections

    The single most important property of RadDataPager is its Source property. This is where you pass in your collection of data for paging. More often than not your collection will not be an IPagedCollectionView. It will either be a simple List<T>, or an ObservableCollection<T>, or anything that is simply IEnumerable. Unless you had paging in mind when you designed your project, it is almost certain that your data source will not be pageable out of the box. So what are the options?

    III.1. Wrapping the simple collection in an IPagedCollectionView

    If you look at the constructors of PagedCollectionView and QueryableCollectionView you will notice that you can pass in a simple IEnumerable as a parameter.
    Those two classes will wrap it and provide paging capabilities over your original data. In fact, this is what RadGridView does internally. It wraps your original collection in a QueryableCollectionView in order to easily perform many useful tasks such as filtering, sorting, and others, but in our case the most important one is paging. So let us start our series of examples with the most simplistic one. Imagine that you have a simple IEnumerable which is the source for an ItemsControl. Here is how to wrap it in order to enable paging:

    Silverlight

        IEnumerable itemsSource = Enumerable.Range(0, 1000);
        var pagedSource = new PagedCollectionView(itemsSource);
        this.radDataPager.Source = pagedSource;
        this.itemsControl.ItemsSource = pagedSource;

    WPF

        IEnumerable itemsSource = Enumerable.Range(0, 1000);
        var pagedSource = new QueryableCollectionView(itemsSource);
        this.radDataPager.Source = pagedSource;
        this.itemsControl.ItemsSource = pagedSource;

    XAML

        <Border Grid.Row="0"
                BorderBrush="Black"
                BorderThickness="1"
                Margin="5">
            <ListBox Name="itemsControl"/>
        </Border>
        <Border Grid.Row="1"
                BorderBrush="Black"
                BorderThickness="1"
                Margin="5">
            <telerikGrid:RadDataPager Name="radDataPager"
                                      PageSize="10"
                                      IsTotalItemCountFixed="True"
                                      DisplayMode="All"/>
        </Border>

    This will do the trick. It is quite simple, isn't it? The two sample projects in the solution that I have attached are:

    PagingSimpleCollectionWithWrapping_WPF
    PagingSimpleCollectionWithWrapping_SL3

    III.2. Binding to RadDataPager.PagedSource

    In case you do not like this approach, there is a better one. When you assign an IEnumerable as the Source of a RadDataPager, it will automatically wrap it in a QueryableCollectionView and expose it through its PagedSource property. From then on, you can attach any number of ItemsControls to the PagedSource and they will be automatically paged. Here is how to do this entirely in XAML:

    Using RadDataPager.PagedSource

        <Border Grid.Row="0"
                BorderBrush="Black"
                BorderThickness="1" Margin="5">
            <ListBox Name="itemsControl"
                     ItemsSource="{Binding PagedSource, ElementName=radDataPager}"/>
        </Border>
        <Border Grid.Row="1"
                BorderBrush="Black"
                BorderThickness="1"
                Margin="5">
            <telerikGrid:RadDataPager Name="radDataPager"
                                      Source="{Binding ItemsSource}"
                                      PageSize="10"
                                      IsTotalItemCountFixed="True"
                                      DisplayMode="All"/>
        </Border>

    The two sample projects in the solution that I have attached are:

    PagingSimpleCollectionWithPagedSource_WPF
    PagingSimpleCollectionWithPagedSource_SL3

    IV. Paging collections implementing IPagedCollectionView

    Those of you who are using WCF RIA Services should feel very lucky. After a quick look with Reflector or the debugger, we can see that the DomainDataSource.Data property is in fact an instance of the DomainDataSourceView class. This class implements a handful of useful interfaces:

    ICollectionView
    IEnumerable
    INotifyCollectionChanged
    IEditableCollectionView
    IPagedCollectionView
    INotifyPropertyChanged

    Luckily, IPagedCollectionView is among them, which lets you do the whole paging on the server. So let's do this. We will add a DomainDataSource control to our page/window and connect the items control and the pager to it.
    Here is how to do this:

    MainPage

        <riaControls:DomainDataSource x:Name="invoicesDataSource"
                                      AutoLoad="True"
                                      QueryName="GetInvoicesQuery">
            <riaControls:DomainDataSource.DomainContext>
                <services:ChinookDomainContext/>
            </riaControls:DomainDataSource.DomainContext>
        </riaControls:DomainDataSource>
        <Border Grid.Row="0"
                BorderBrush="Black"
                BorderThickness="1"
                Margin="5">
            <ListBox Name="itemsControl"
                     ItemsSource="{Binding Data, ElementName=invoicesDataSource}"/>
        </Border>
        <Border Grid.Row="1"
                BorderBrush="Black"
                BorderThickness="1"
                Margin="5">
            <telerikGrid:RadDataPager Name="radDataPager"
                                      Source="{Binding Data, ElementName=invoicesDataSource}"
                                      PageSize="10"
                                      IsTotalItemCountFixed="True"
                                      DisplayMode="All"/>
        </Border>

    By the way, you can replace the ListBox from the above code snippet with any other ItemsControl. It can be RadGridView, it can be the MS DataGrid, you name it. Essentially, RadDataPager is sending paging commands to the DomainDataSource.Data. It does not care who, what, or how many different controls are bound to this same Data property of the DomainDataSource control. So if you would like to experiment with this, you can throw in any number of other ItemsControls next to the ListBox, bind them in the same manner, and all of them will be paged by our single RadDataPager. Furthermore, you can throw in any number of RadDataPagers and bind them to the same property. Paging with any one of them will then automatically update all of the rest. The whole picture is simply beautiful, and we can do all of this thanks to WCF RIA Services. The two sample projects (Silverlight only) in the solution that I have attached are:

    PagingIPagedCollectionView
    PagingIPagedCollectionView.Web

    V. Paging RadGridView

    While you can replace the ListBox in any of the above examples with a RadGridView, RadGridView offers something extra. Similar to the DomainDataSource.Data property, the RadGridView.Items collection implements the IPagedCollectionView interface. So you are already thinking: "Then why not bind the Source property of RadDataPager to RadGridView.Items?" Well, that's exactly what you can do, and you will start paging RadGridView out of the box. It is as simple as that; no code-behind is involved:

    MainPage

        <Border Grid.Row="0"
                BorderBrush="Black"
                BorderThickness="1" Margin="5">
            <telerikGrid:RadGridView Name="radGridView"
                                     ItemsSource="{Binding ItemsSource}"/>
        </Border>
        <Border Grid.Row="1"
                BorderBrush="Black"
                BorderThickness="1"
                Margin="5">
            <telerikGrid:RadDataPager Name="radDataPager"
                                      Source="{Binding Items, ElementName=radGridView}"
                                      PageSize="10"
                                      IsTotalItemCountFixed="True"
                                      DisplayMode="All"/>
        </Border>

    The two sample projects in the solution that I have attached are:

    PagingRadGridView_SL3
    PagingRadGridView_WPF

    With this last example I think I have covered every possible paging combination. In case you would like to see an example of something that I have not covered, please let me know.
    Also, make sure you check out these great online examples:

    WCF RIA Services with DomainDataSource
    Paging Configurator
    Endless Paging
    Paging Any Collection
    Paging RadGridView

    Happy Paging!

    Download Full Source Code
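
    For quick reference alongside the article, here is the shape of the Silverlight IPagedCollectionView interface that all of the above builds on (reconstructed from memory as a sketch; verify against the System.Windows.Data assembly for your framework version):

        // Sketch of the System.ComponentModel.IPagedCollectionView surface.
        public interface IPagedCollectionView
        {
            bool CanChangePage { get; }
            bool IsPageChanging { get; }
            int ItemCount { get; }          // number of items known to the view
            int PageIndex { get; }          // zero-based index of the current page
            int PageSize { get; set; }
            int TotalItemCount { get; }     // -1 when the total is unknown

            event EventHandler<EventArgs> PageChanged;
            event EventHandler<PageChangingEventArgs> PageChanging;

            bool MoveToFirstPage();
            bool MoveToLastPage();
            bool MoveToNextPage();
            bool MoveToPreviousPage();
            bool MoveToPage(int pageIndex);
        }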

    Read the article

  • Guide to reduce TFS database growth using the Test Attachment Cleaner

    - by terje
    Recently there have been several reports of TFS databases growing too fast and growing too big. Notably, this has been observed when one has started to use more features of the testing system. Also, TFS 2010 handles test results differently from TFS 2008, and this leads to more data stored in the TFS databases. As a consequence, some tools have been released to remove unneeded data in the database, along with some fixes for bugs which were found during this process. Further, some preventive practices and maintenance rules should be adopted. A lot of people have blogged about this; among these are:

    Anu's very important blog post here describes both the problem and solutions to handle it. She describes both the Test Attachment Cleaner tool and some QFE/CU releases to fix underlying bugs which prevented the tool from being fully effective.
    Brian Harry's blog post here describes the problem too.
    This forum thread describes the problem with some solution hints.
    Ravi Shanker's blog post here describes best practices on solving this (TBP).
    Grant Holliday's blog post here describes strategies for using the Test Attachment Cleaner both to detect space problems and to rectify them.

    The problem can be divided into the following areas:

    Publishing of test results from builds
    Publishing of manual test results and their attachments in particular
    Publishing of deployment binaries for use during a test run
    Bugs in SQL Server preventing total cleanup of data

    (All the published data above is published into the TFS database as attachments.) The test results will include all data being collected during the run. Some of this data can grow rather large, like IntelliTrace logs and video recordings. Also, the pushing of binaries which happens for automated test runs (including tests run during a build using code coverage, which will include all the files in the deployment folder) contributes a lot to the size of the attached data.

    In order to handle this systematically, I have set up a 3-stage process:

    1. Find out if you have a database space issue
    2. Set up your TFS server to minimize potential database issues
    3. If you have the "problem", clean up the database and otherwise keep it clean

    Analyze the data

    Are your databases growing? Are unused test results growing out of proportion? To find out, you need to query your TFS database for some of the information, and use the Test Attachment Cleaner (TAC) to obtain some more detailed information. If you don't have too many databases you can use the SQL Server reports from within Management Studio to analyze the database and table sizes. Or, you can use a set of queries. I often find queries faster to use because I can tweak them the way I want. But be aware that these queries are undocumented and unsupported, and may change when the product team wants to change them.

    If you have multiple Project Collections, find out which might have problems. (Disclaimer: the queries below work on TFS 2010. They will not work on Dev-11, since the table structure has been changed. I will try to update them for Dev-11 when it is released.) Open a SQL Management Studio session onto the SQL Server where you have your TFS databases. Use the query below to find the Project Collection databases and their sizes, in descending size order.
        use master
        select DB_NAME(database_id) AS DBName, (size/128) SizeInMB
        FROM sys.master_files
        where type=0
          and substring(db_name(database_id),1,4)='Tfs_'
          and DB_NAME(database_id)<>'Tfs_Configuration'
        order by size desc

    Doing this on one of our SQL servers gave a result where it was pretty easy to see on which collection to start the work.

    Find out which tables are possibly too large

    Keep a special watch out for the Tfs_Attachment table. Use the script at the bottom of Grant's blog to find the table sizes in descending size order. In our case the result showed (as Grant's blog explains, tbl_Content is in the Version Control category) that the only big issue we have here is the tbl_AttachmentContent table.

    Find out which team projects have possibly too large attachments

    In order to use the TAC to find, and eventually delete, attachment data, we need to find out which team projects have these attachments. The team project is a required parameter to the TAC. Use the following query to find this, replacing the collection database name with whatever applies in your case:

        use Tfs_DefaultCollection
        select p.projectname, sum(a.compressedlength)/1024/1024 as sizeInMB
        from dbo.tbl_Attachment as a
        inner join tbl_testrun as tr on a.testrunid=tr.testrunid
        inner join tbl_project as p on p.projectid=tr.projectid
        group by p.projectname
        order by sum(a.compressedlength) desc

    In our case (with some names removed), out of more than 100 team projects accumulated over quite some years, it was pretty obvious that "Byggtjeneste – Projects" was the main team project to take care of, with the next three as the ones to handle after that.

    Check which attachment types take up the most space

    It can be nice to know which attachment types take up the space, so run the following query:
        use Tfs_DefaultCollection
        select a.attachmenttype, sum(a.compressedlength)/1024/1024 as sizeInMB
        from dbo.tbl_Attachment as a
        inner join tbl_testrun as tr on a.testrunid=tr.testrunid
        inner join tbl_project as p on p.projectid=tr.projectid
        group by a.attachmenttype
        order by sum(a.compressedlength) desc

    From the result it was pretty obvious that the problem here is the binary files, as also mentioned in Anu's blog.

    Check which file types, by their extension, take up the most space

    Run the following query:

        use Tfs_DefaultCollection
        select SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999) as Extension,
               sum(compressedlength)/1024 as SizeInKB
        from tbl_Attachment
        group by SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999)
        order by sum(compressedlength) desc

    Now you should have collected enough information to tell you what to do – if you need to do something – and some of the information you need in order to set up your TAC settings file, both for a cleanup and for scheduled maintenance later.

    Get your TFS server and environment properly set up

    Whether or not you have the problem yet, you should ensure the TFS server is set up so that the risk of getting into this problem is minimized. To ensure this, you should install the following set of updates and components. The assumption is that your TFS Server is at SP1 level.

    Install the QFE for KB2608743, which also contains detailed instructions on its use; download it from here. The QFE changes the default settings to not upload deployed binaries, which are used in automated test runs. Binaries will still be uploaded if:

    Code coverage is enabled in the test settings.
    You change the UploadDeploymentItem to true in the testsettings file. Be aware that this might be reset back to false by another user who hasn't installed this QFE.

    The hotfix should be installed on:

    The build servers (the build agents)
    The machine hosting the Test Controller
    Local development computers (Visual Studio)
    Local test computers (MTM)

    It is not required to install it on the TFS Server, test agents or the build controller – it has no effect on these programs.

    If you use SQL Server 2008 R2 you should also install CU 10 (or later). This CU fixes a potential problem of hanging "ghost" files. This seems to happen only in certain trigger situations, but to make sure it doesn't bite you, it is better to ensure this CU is installed. There is no such CU for SQL Server 2008 pre-R2.

    Workaround: if you suspect hanging ghost files, they can be – with some mental effort – deduced from the ghost counters using the following SQL query:

        use master
        SELECT DB_NAME(database_id) as 'database', OBJECT_NAME(object_id) as 'objectname',
               index_type_desc, ghost_record_count, version_ghost_record_count,
               record_count, avg_record_size_in_bytes
        FROM sys.dm_db_index_physical_stats (DB_ID(N'<DatabaseName>'), OBJECT_ID(N'<TableName>'), NULL, NULL, 'DETAILED')

    The problem is a stalled ghost cleanup process. The fix is to stop all components that depend on the SQL Server, like the TFS Server and SPS services – that is, all applications that connect to the SQL Server. Then restart the SQL Server, and finally start up all dependent processes again. (I would guess a complete server reboot would do the trick too.) After this, the ghost cleanup process will run properly again. The fix will come in the next CU cycle for SQL Server R2 SP1. The R2 pre-SP1 and R2 SP1 have separate maintenance cycles and are maintained individually; each has its own set of CUs. When it comes, I will add the link here to that CU. The "hanging ghost file" issue came up after one had run the TAC and deleted enormous amounts of data. The SQL Server can get into this hanging state (without the QFE) in certain cases due to this.

    And of course, install and set up the Test Attachment Cleaner command line power tool. This should be done following some guidelines from Ravi Shanker:

    "When you run TAC, ensure that you are deleting small chunks of data at regular intervals (say run TAC every night at 3AM to delete data that is between age 730 to 731 days) – this will ensure that small amounts of data are being deleted and SQL ghosted record cleanup can catch up with the number of deletes performed." This rule minimizes the risk of the ghosted hang problem occurring, and further makes it easier for the SQL Server ghosting process to work smoothly.

    "Run DBCC SHRINKDB post the ghosted records are cleaned up to physically reclaim the space on the file system." This is the last step in a 3-step process of removing SQL Server data. First the records are logically deleted. Then they are cleaned out by the ghosting process, and finally removed using the shrinkdb command.

    Cleaning out the attachments

    The TAC is run from the command line using a set of parameters and controlled by a settings file. The parameters point out a server URI, including the team project collection, and also point at a specific team project. So in order to run this for multiple team projects regularly, one has to set up a script to run the TAC multiple times, once for each team project.
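
    Since the post leaves that multi-project script as an exercise, here is a minimal sketch of such a driver in C# (the collection URL, project names and settings file path are placeholders; the tcmpt command line itself is the one shown further below):

        using System.Diagnostics;

        class TacDriver
        {
            static void Main()
            {
                // Placeholder values – substitute your own collection, projects and settings file.
                string collection = "http://tfsserver:8080/tfs/DefaultCollection";
                string settings = @"C:\tac\nightly-settings.xml";
                string[] projects = { "TeamProjectA", "TeamProjectB", "TeamProjectC" };

                foreach (string project in projects)
                {
                    var psi = new ProcessStartInfo
                    {
                        FileName = "tcmpt",
                        Arguments = string.Format(
                            "attachmentcleanup /collection:{0} /teamproject:\"{1}\" " +
                            "/settingsfile:\"{2}\" /outputfile:\"{1}.tcmpt.log\" /mode:delete",
                            collection, project, settings),
                        UseShellExecute = false
                    };
                    // Run the projects one at a time so the deletes stay in small chunks.
                    using (var process = Process.Start(psi))
                    {
                        process.WaitForExit();
                    }
                }
            }
        }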
    When you install the TAC there is a very useful readme file in the same directory. When the deployment binaries are published to the TFS server, ALL items are published up from the deployment folder. That often means many more files than you would assume are necessary. This is a brute force technique. It works, but you need to take care when cleaning up.

    Grant has shown what their settings file looks like in his blog post: removing all attachments older than 180 days, as long as there are no active work items connected to them. This setting can be useful to clean out all items, both in a one-time clean-up operation and in general maintenance. There are two scenarios we need to consider:

    Cleaning up an existing overgrown database
    Maintaining a server to avoid an overgrown database, using scheduled TAC runs

    1. Cleaning up a database which has grown too big due to these attachments

    This job is a "once" job. We do it once and then move on to make sure it won't happen again, by taking the actions in 2) below. In this scenario you should only consider the large files. Your goal should be to simply reduce the size; don't bother about the smaller stuff. That can be left to a scheduled TAC cleanup (2 below). Here you can use a very general settings file and just remove the large attachments, or you can choose to remove any old items. Grant's settings file is an example of the latter. A settings file to remove only large attachments could look like this:

        <!-- Scenario : Remove large files -->
        <DeletionCriteria>
          <TestRun />
          <Attachment>
            <SizeInMB GreaterThan="10" />
          </Attachment>
        </DeletionCriteria>

    Or, if you want to remove only dll's and pdb's above that size, add an Extensions section. Without that section, all extensions will be deleted.

        <!-- Scenario : Remove large files of type dll's and pdb's -->
        <DeletionCriteria>
          <TestRun />
          <Attachment>
            <SizeInMB GreaterThan="10" />
            <Extensions>
              <Include value="dll" />
              <Include value="pdb" />
            </Extensions>
          </Attachment>
        </DeletionCriteria>

    Before you start up your scheduled maintenance, you should clear out all older items.

    2. Scheduled maintenance using the TAC

    Run a schedule every night that removes old items in small batches. It is important to run this often, like every night, in order to keep the number of deleted items low; that way the SQL ghost process works better. One approach could be to delete all items older than some number of days, let's say 180 days. This could be combined with a restriction to keep attachments with active or resolved bugs. Doing this every night ensures that only small amounts of data are deleted.

        <!-- Scenario : Remove old items except if they have active or resolved bugs -->
        <DeletionCriteria>
          <TestRun>
            <AgeInDays OlderThan="180" />
          </TestRun>
          <Attachment />
          <LinkedBugs>
            <Exclude state="Active" />
            <Exclude state="Resolved"/>
          </LinkedBugs>
        </DeletionCriteria>

    In my experience there are projects which are left with active or resolved work items although no further work is done on them. It can be wise to have a cleanup process with no restrictions on linked bugs at all; note that you then have to remove the whole LinkedBugs section. An approach which could work better here is a two-step one: use the schedule above, with no LinkedBugs section, as a sweeper task taking away all data older than you could care about. Then have another scheduled TAC task to take out more specifically the attachments that you are not likely to use.
    This task could be much more specific and, based on your analysis, clean out what you know is troublesome data:

        <!-- Scenario : Remove specific files early -->
        <DeletionCriteria>
          <TestRun>
            <AgeInDays OlderThan="30" />
          </TestRun>
          <Attachment>
            <SizeInMB GreaterThan="10" />
            <Extensions>
              <Include value="iTrace"/>
              <Include value="dll"/>
              <Include value="pdb"/>
              <Include value="wmv"/>
            </Extensions>
          </Attachment>
          <LinkedBugs>
            <Exclude state="Active" />
            <Exclude state="Resolved" />
          </LinkedBugs>
        </DeletionCriteria>

    The readme document for the TAC says that it recognizes "internal" extensions, but it does in fact recognize any extension. To run the tool, use the following command:

        tcmpt attachmentcleanup /collection:your_tfs_collection_url /teamproject:your_team_project /settingsfile:path_to_settingsfile /outputfile:%temp%/teamproject.tcmpt.log /mode:delete

    Shrinking the database

    You could run a shrink database command after the TAC has run in cases where a lot of data has been deleted. In those cases you SHOULD do it, to free up all that space. But after the shrink operation you should rebuild the indexes, since the shrink operation will leave the database in a very fragmented state, which will reduce performance. Note that you need to rebuild indexes; reorganizing is not enough. For smaller amounts of data you should NOT shrink the database, since the data will be reused by the SQL Server when it needs to add more records. In fact, it is regarded as bad practice to shrink the database regularly. So on a daily maintenance schedule you should NOT shrink the database.

    To shrink the database you run a DBCC SHRINKDATABASE command, and then follow up with a DBCC INDEXDEFRAG afterwards. I find the easiest way to do this is to create a SQL Maintenance plan including the Shrink Database Task and the Rebuild Index Task, and just execute it when you need to.

    Read the article

  • Visual Studio 2010 and .NET 4 Released

    - by ScottGu
    The final release of Visual Studio 2010 and .NET 4 is now available.

    Download and Install Today

    MSDN subscribers, as well as WebsiteSpark/BizSpark/DreamSpark members, can now download the final releases of Visual Studio 2010 and TFS 2010 through the MSDN subscribers download center. If you are not an MSDN subscriber, you can download free 90-day trial editions of Visual Studio 2010. Or you can download the free Visual Studio Express editions of Visual Web Developer 2010, Visual Basic 2010, Visual C# 2010 and Visual C++. These Express editions are available completely for free (and never time out). If you are looking for an easy way to set up a new machine for web development, you can automate installing ASP.NET 4, ASP.NET MVC 2, IIS, SQL Server Express and Visual Web Developer 2010 Express really quickly with the Microsoft Web Platform Installer (just click the install button on the page).

    What is new with VS 2010 and .NET 4

    Today's release is a big one – and brings with it a ton of new features and capabilities. One of the things we tried hard to focus on with this release was to invest heavily in making existing applications, projects and developer experiences better. What this means is that you don't need to read 1000+ page books or spend time learning major new concepts in order to take advantage of the release. There are literally thousands of improvements (both big and small) that make you more productive and successful without having to learn big new concepts in order to start using them. Below is just a small sampling of some of the improvements with this release:

    Visual Studio 2010 IDE

    Visual Studio 2010 now supports multiple monitors (enabling much better use of screen real estate). It has new code Intellisense support that makes it easier to find and use classes and methods. It has improved code navigation support for searching code-bases and seeing how code is called and used. It has new code visualization support that allows you to see the relationships across projects and classes within projects, as well as to automatically generate sequence diagrams to chart execution flow. The editor now supports HTML and JavaScript snippets as well as improved JavaScript Intellisense. The VS 2010 debugger and profiling support is now much, much richer and enables new features like Intellitrace (aka historical debugging), debugging of crash/dump files, and better parallel debugging. VS 2010's multi-targeting support is now much richer, and enables you to use VS 2010 to target .NET 2, .NET 3, .NET 3.5 and .NET 4 applications. And the infamous Add Reference dialog now loads much faster. TFS 2010 is now easy to set up (you can install the server in under 10 minutes) and enables great source control, bug/work-item tracking, and continuous integration support. Testing (both automated and manual) is now much, much richer. And VS 2010 Premium and Ultimate provide much richer architecture and design tooling support.

    VB and C# Language Features

    VB and C# in VS 2010 both contain a bunch of new features and capabilities. VB adds new support for automatic properties, collection initializers, and implicit line continuation, among many other features. C# adds support for optional parameters and named arguments, a new dynamic keyword, and co-variance and contra-variance, among many other features.

    ASP.NET 4 and ASP.NET MVC 2

    With ASP.NET 4, Web Forms controls now render clean, semantically correct, and CSS-friendly HTML markup.
    Built-in URL routing functionality allows you to expose clean, search-engine-friendly URLs and increase the traffic to your website. ViewState within applications can now be more easily controlled and made smaller. ASP.NET Dynamic Data support has been expanded. More controls, including rich charting and data controls, are now built into ASP.NET 4 and enable you to build applications even faster. New starter project templates now make it easier to get going with new projects. SEO enhancements make it easier to drive traffic to your public-facing sites. And web.config files are now clean and simple.

    ASP.NET MVC 2 is now built into VS 2010 and ASP.NET 4, and provides a great way to build web sites and applications using a model-view-controller based pattern. ASP.NET MVC 2 adds features to easily enable client and server validation logic, and provides new strongly-typed HTML and UI-scaffolding helper methods. It also enables more modular/reusable applications. The new <%: %> syntax in ASP.NET makes it easier to HTML-encode output. Visual Studio 2010 also now includes better tooling support for unit testing and TDD. In particular, "consume-first Intellisense" and "generate from usage" support within VS 2010 make it easier to write your unit tests first, and then drive your implementation from them.

    Deploying ASP.NET applications gets a lot easier with this release. You can now publish your websites and applications to a staging or production server from within Visual Studio itself. Visual Studio 2010 makes it easy to transfer all your files, code, configuration, database schema and data in one complete package. VS 2010 also makes it easy to manage separate web.config configuration file settings depending upon whether you are in debug, release, staging or production mode.

    WPF 4 and Silverlight 4

    WPF 4 includes a ton of new improvements and capabilities, including more built-in controls, richer graphics features (cached composition, pixel shader 3 support, LayoutRounding, and animation easing functions), and a much improved text stack (with crisper text rendering, custom dictionary support, and selection and caret brush options). WPF 4 also includes a bunch of support to enable you to take advantage of new Windows 7 features, including multi-touch and Windows 7 shell integration.

    Silverlight 4 will launch this week as well. You can watch my Silverlight 4 launch keynote streamed live Tuesday (April 13th) at 8am Pacific Time. Silverlight 4 includes a ton of new capabilities, including a bunch for making it possible to build great business applications and out-of-the-browser applications. I'll be doing a separate blog post later this week (once it is live on the web) that talks more about its capabilities.

    Visual Studio 2010 now includes great tooling support for both WPF and Silverlight. The new VS 2010 WPF and Silverlight designer makes it much easier to build client applications as well as great line-of-business solutions, and to integrate and bind with data. Tooling support for Silverlight 4 with the final release of Visual Studio 2010 will be available when Silverlight 4 releases to the web this week.

    SharePoint and Azure

    Visual Studio 2010 now includes built-in support for building SharePoint applications. You can now create, edit, build, and debug SharePoint applications directly within Visual Studio 2010. You can also now use SharePoint with TFS 2010.
    Support for creating Azure-hosted applications is also now included with VS 2010, allowing you to build ASP.NET and WCF based applications and host them within the cloud.

    Data Access

    Data access has a lot of improvements coming to it with .NET 4. Entity Framework 4 includes a ton of new features and capabilities, including support for model-first and POCO development, default support for lazy loading, built-in support for pluralization/singularization of table/property names within the VS 2010 designer, full support for all the LINQ operators, the ability to optionally expose foreign keys on model objects (useful for some stateless web scenarios), disconnected API support to better handle N-tier and stateless web scenarios, and T4 template customization support within VS 2010 to allow you to customize and automate how code is generated for you by the data designer. In addition to improvements with the Entity Framework, LINQ to SQL with .NET 4 also includes a bunch of nice improvements.

    WCF and Workflow

    WCF includes a bunch of great new capabilities, including better REST, activation and configuration support. WCF Data Services (formerly known as Astoria) and WCF RIA Services also now enable you to easily expose and work with data from remote clients. Windows Workflow is now much faster, includes flowchart services, and now makes it easier to create custom services than before. More details can be found here.

    CLR and Core .NET Library Improvements

    .NET 4 includes the new CLR 4 engine, which includes a lot of nice performance and feature improvements. The CLR 4 engine now runs side-by-side in-process with older versions of the CLR, allowing you to use two different versions of .NET within the same process. It also includes improved COM interop support. The .NET 4 base class libraries (BCL) include a bunch of nice additions and refinements. In particular, the .NET 4 BCL now includes new parallel programming support that makes it much easier to build applications that take advantage of multiple CPUs and cores on a computer. This work dove-tails nicely with the new VS 2010 parallel debugger (making it much easier to debug parallel applications), as well as the new F# functional language support now included in the VS 2010 IDE. .NET 4 also now has the Dynamic Language Runtime (DLR) library built in, which makes it easier to use dynamic language functionality with .NET. MEF, a really cool library that enables rich extensibility, is also now built into .NET 4 and included as part of the base class libraries.

    .NET 4 Client Profile

    The download size of the .NET 4 redist is now much smaller than it was before (the x86 full .NET 4 package is about 36MB). We also now have a .NET 4 Client Profile package which is a pure subset of the full .NET that can be used to streamline client application installs.

    C++

    VS 2010 includes a bunch of great improvements for C++ development. This includes better C++ Intellisense support, MSBuild support for projects, improved parallel debugging and profiler support, MFC improvements, and a number of language features and compiler optimizations.

    My VS 2010 and .NET 4 Blog Series

    I've been cranking away on a blog series the last few months that highlights many of the new VS 2010 and .NET 4 improvements. The good news is that I have about 20 in-depth posts already written. The bad news (for me) is that I have about 200 more to go until I'm done!
.NET 4 Client Profile

The download size of the .NET 4 redist is now much smaller than it was before (the x86 full .NET 4 package is about 36MB).  We also now have a .NET 4 Client Profile package which is a pure sub-set of the full .NET that can be used to streamline client application installs.

C++

VS 2010 includes a bunch of great improvements for C++ development.  This includes better C++ Intellisense support, MSBuild support for projects, improved parallel debugging and profiler support, MFC improvements, and a number of language features and compiler optimizations.

My VS 2010 and .NET 4 Blog Series

I’ve been cranking away on a blog series the last few months that highlights many of the new VS 2010 and .NET 4 improvements.  The good news is that I have about 20 in-depth posts already written.  The bad news (for me) is that I have about 200 more to go until I’m done!  I’m going to try and keep adding a few more each week over the next few months to discuss the new improvements and how best to take advantage of them. Below is a list of the already written ones that you can check out today:

- Clean Web.Config Files
- Starter Project Templates
- Multi-targeting
- Multiple Monitor Support
- New Code Focused Web Profile Option
- HTML / ASP.NET / JavaScript Code Snippets
- Auto-Start ASP.NET Applications
- URL Routing with ASP.NET 4 Web Forms
- Searching and Navigating Code in VS 2010
- VS 2010 Code Intellisense Improvements
- WPF 4
- Add Reference Dialog Improvements
- SEO Improvements with ASP.NET 4
- Output Cache Extensibility with ASP.NET 4
- Built-in Charting Controls for ASP.NET and Windows Forms
- Cleaner HTML Markup with ASP.NET 4 - Client IDs
- Optional Parameters and Named Arguments in C# 4 - and a cool scenario with ASP.NET MVC 2
- Automatic Properties, Collection Initializers and Implicit Line Continuation Support with VB 2010
- New <%: %> Syntax for HTML Encoding Output using ASP.NET 4
- JavaScript Intellisense Improvements with VS 2010

Stay tuned to my blog as I post more.  Also check out this page which links to a bunch of great articles and videos done by others.

VS 2010 Installation Notes

If you have installed a previous version of VS 2010 on your machine (either the beta or the RC) you must first uninstall it before installing the final VS 2010 release.  I also recommend uninstalling the .NET 4 betas (including both the client and full .NET 4 installs) as well as the other installs that come with VS 2010 (e.g. ASP.NET MVC 2 preview builds, etc).  The uninstalls of the betas/RCs will clean up all the old state on your machine – after which you can install the final VS 2010 version and should have everything just work (this is what I’ve done on all of my machines and I haven’t had any problems).

The VS 2010 and .NET 4 installs add a bunch of new managed assemblies to your machine.  Some of these will be “NGEN’d” to native code during the actual install process (making them run fast).  To avoid adding too much time to VS setup, though, we don’t NGEN all assemblies immediately – and instead will NGEN the rest in the background when your machine is idle.  Until it finishes NGENing the assemblies they will be JIT’d to native code the first time they are used in a process – which for large assemblies can sometimes cause a slight performance hit. If you run into this you can manually force all assemblies to be NGEN’d to native code immediately (and not just wait till the machine is idle) by launching the Visual Studio command line prompt from the Windows Start Menu (Microsoft Visual Studio 2010->Visual Studio Tools->Visual Studio Command Prompt).  Within the command prompt type “Ngen executequeueditems” – this will cause everything to be NGEN’d immediately.

How to Buy Visual Studio 2010

You can download and use the free Visual Studio express editions of Visual Web Developer 2010, Visual Basic 2010, Visual C# 2010 and Visual C++.  These express editions are available completely for free (and never time out).

You can buy a new copy of VS 2010 Professional that includes a 1 year subscription to MSDN Essentials for $799.  MSDN Essentials includes a developer license of Windows 7 Ultimate, Windows Server 2008 R2 Enterprise, SQL Server 2008 DataCenter R2, and 20 hours of Azure hosting time.  Subscribers also have access to MSDN’s Online Concierge, and Priority Support in MSDN Forums. Upgrade prices from previous releases of Visual Studio are also available.  
Existing Visual Studio 2005/2008 Standard customers can upgrade to Visual Studio 2010 Professional for a special $299 retail price until October.  You can take advantage of this VS Standard->Professional upgrade promotion here.

Web developers who build applications for others, and who are either independent developers or who work for companies with fewer than 10 employees, can also optionally take advantage of the Microsoft WebSiteSpark program.  This program gives you three copies of Visual Studio 2010 Professional, 1 copy of Expression Studio, and 4 CPU licenses of both Windows 2008 R2 Web Server and SQL 2008 Web Edition that you can use to both develop and deploy applications with at no cost for 3 years.  At the end of the 3 years there is no obligation to buy anything.  You can sign up for WebSiteSpark today in under 5 minutes – and immediately have access to the products to download.

Summary

Today’s release is a big one – and has a bunch of improvements for pretty much every developer.  Thank you to everyone who provided feedback, suggestions and reported bugs throughout the development process – we couldn’t have delivered it without you.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • An easy way to create Side by Side registrationless COM Manifests with Visual Studio

    - by Rick Strahl
Here's something I didn't find out until today: You can use Visual Studio to easily create registrationless COM manifest files for you with just a couple of small steps. Registrationless COM lets you use COM components without them being registered in the registry. This means it's possible to deploy COM components along with another application using plain xcopy semantics. To be sure it's rarely quite that easy - you need to watch out for dependencies - but if you know you have COM components that are lightweight and have no or known dependencies it's easy to get everything into a single folder and off you go. Registrationless COM works via manifest files which carry the same name as the executable plus a .manifest extension (i.e. yourapp.exe.manifest). I'm going to use a Visual FoxPro COM object as an example and create a simple Windows Forms app that calls the component - without that component being registered. Let's take a walk down memory lane…

Create a COM Component

I start by creating a FoxPro COM component because that's what I know and am working with here in my legacy environment. You can use a VB classic or C++ ATL object if that's more to your liking. Here's a real simple Fox one:

DEFINE CLASS SimpleServer as Session OLEPUBLIC

FUNCTION HelloWorld(lcName)
RETURN "Hello " + lcName

ENDDEFINE

Compile it into a DLL COM component with:

BUILD MTDLL simpleserver FROM simpleserver RECOMPILE

And to make sure it works test it quickly from Visual FoxPro:

server = CREATEOBJECT("simpleServer.simpleserver")
MESSAGEBOX( server.HelloWorld("Rick") )

Using Visual Studio to create a Manifest File for a COM Component

Next open Visual Studio and create a new executable project - a Console App or WinForms or WPF application will all do.

- Go to the References node
- Select Add Reference
- Use the Browse tab and find your compiled DLL to import

Next you'll see your assembly in the project.

- Right click on the reference and select Properties
- Click on the Isolated dropdown and select True

Compile and that's all there's to it. Visual Studio will create an App.exe.manifest file right alongside your application's EXE. The manifest file created looks like this:
<?xml version="1.0" encoding="utf-8"?>
<assembly xsi:schemaLocation="urn:schemas-microsoft-com:asm.v1 assembly.adaptive.xsd"
          manifestVersion="1.0"
          xmlns:asmv1="urn:schemas-microsoft-com:asm.v1"
          xmlns:asmv2="urn:schemas-microsoft-com:asm.v2"
          xmlns:asmv3="urn:schemas-microsoft-com:asm.v3"
          xmlns:dsig="http://www.w3.org/2000/09/xmldsig#"
          xmlns:co.v1="urn:schemas-microsoft-com:clickonce.v1"
          xmlns:co.v2="urn:schemas-microsoft-com:clickonce.v2"
          xmlns="urn:schemas-microsoft-com:asm.v1"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <assemblyIdentity name="App.exe" version="1.0.0.0" processorArchitecture="x86" type="win32" />
  <file name="simpleserver.DLL" asmv2:size="27293">
    <hash xmlns="urn:schemas-microsoft-com:asm.v2">
      <dsig:Transforms>
        <dsig:Transform Algorithm="urn:schemas-microsoft-com:HashTransforms.Identity" />
      </dsig:Transforms>
      <dsig:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1" />
      <dsig:DigestValue>puq+ua20bbidGOWhPOxfquztBCU=</dsig:DigestValue>
    </hash>
    <typelib tlbid="{f10346e2-c9d9-47f7-81d1-74059cc15c3c}" version="1.0" helpdir="" resourceid="0" flags="HASDISKIMAGE" />
    <comClass clsid="{af2c2811-0657-4264-a1f5-06d033a969ff}" threadingModel="Apartment" tlbid="{f10346e2-c9d9-47f7-81d1-74059cc15c3c}" progid="simpleserver.SimpleServer" description="simpleserver.SimpleServer" />
  </file>
</assembly>

Now let's finish our super complex console app to test with:

using System;
using System.Collections.Generic;
using System.Text;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            Type type = Type.GetTypeFromProgID("simpleserver.simpleserver", true);
            dynamic server = Activator.CreateInstance(type);
            Console.WriteLine(server.HelloWorld("rick"));
            Console.ReadLine();
        }
    }
}

Now run the Console Application… As expected that should work. And why not? The COM component is still registered, right? :-) Nothing tricky about that. Let's unregister the COM component and then re-run and see what happens.

- Go to the Command Prompt
- Change to the folder where the DLL is installed
- Unregister with: RegSvr32 -u simpleserver.dll

To be sure that the COM component no longer works, check it out with the same test you used earlier (i.e. o = CREATEOBJECT("SimpleServer.SimpleServer") in your development environment or VBScript etc.). Make sure you run the EXE and don't re-compile the application, or else Visual Studio will complain that it can't find the COM component in the registry while compiling. In fact, now that we have our .manifest file you can remove the COM object from the project. When you test, run the EXE from Windows Explorer or a command prompt to avoid the recompile.

Watch out for embedded Manifest Files

Now recompile your .NET project and run it… and it will most likely fail! The problem is that .NET applications by default embed a manifest file into the compiled EXE, which results in the externally created manifest file being completely ignored. Only one manifest can be applied at a time and the compiled manifest takes precedence. Uh, thanks Visual Studio - not very helpful…

Note that if you use another development tool like Visual FoxPro to create your EXE this won't be an issue as long as the tool doesn't automatically add a manifest file. Creating a Visual FoxPro EXE for example will work immediately with the generated manifest file as is. 
If you are using .NET and Visual Studio you have a couple of options for getting around this:

- Remove the embedded manifest file
- Copy the contents of the generated manifest file into a project manifest file and compile that in

To remove an embedded manifest in a Visual Studio project:

- Open the Project Properties (Alt-Enter on the project node)
- Go down to Resources | Manifest and select Create Application without a Manifest

You can now use the external manifest file and it will actually be respected when the app runs.

The other option is to let Visual Studio create the manifest file on disk and then explicitly add the manifest file into the project. Notice on the dialog above I did this for app.exe.manifest and the manifest actually shows up in the list. If I select this file it will be compiled into the EXE and be used in lieu of any external files, and that works as well. Remove the simpleserver.dll reference so you can compile your code and run the application. Now it should work without COM registration of the component.

Personally I prefer external manifests because they can be modified after the fact - compiled manifests are evil in my mind because they are immutable - once they are there they can't be overridden or changed. So I prefer an external manifest. However, if you are absolutely sure nothing needs to change and you don't want anybody messing with your manifest, you can also embed it. The option to do either is there.

Watch for Manifest Caching

While trying to get this to work I ran into some problems at first. Specifically, when it wasn't working at first (due to the embedded manifest) I played with various different manifest layouts in different files etc. There are a number of different ways to actually represent manifest files, including offloading to a separate folder (more on that later). A few times I made deliberate errors in the manifest file and I found that regardless of what I did, once the app failed or worked, no amount of changing the manifest file would make it behave differently. It appears that Windows caches the manifest data for a given EXE or DLL. It takes a restart or a recompile of either the EXE or the DLL to clear the caching. Recompile your servers in order to see manifest changes unless there's an outright failure of an invalid manifest file. If the app starts, the manifest is being read and cached immediately. This can be very confusing, especially if you don't know that it's happening. I found myself always recompiling the EXE after each run and before making any changes to the manifest file.

Don't forget about Runtimes of COM Objects

In the example above I used a Visual FoxPro COM component. Visual FoxPro is a runtime based environment, so if I'm going to distribute an application that uses a FoxPro COM object the runtimes need to be distributed as well. The same is true of classic Visual Basic applications. Assuming that you don't know whether the runtimes are installed on the target machines, make sure to install all the additional files in the EXE's directory alongside the COM DLL. In the case of Visual FoxPro the target folder should contain:

- The EXE: App.exe
- The manifest file (unless it's compiled in): App.exe.manifest
- The COM object DLL (simpleserver.dll)
- Visual FoxPro runtimes: VFP9t.dll (or VFP9r.dll for non-multithreaded dlls), vfp9rENU.dll, msvcr71.dll

All these files should be in the same folder.
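As an aside that goes beyond what the Visual Studio tooling generates: if you can't control the host EXE's manifest at all (say your component is loaded into somebody else's process), the Win32 Activation Context API lets you activate a manifest explicitly around the COM creation call. Here's a minimal, hypothetical C# sketch of that approach - the manifest path and ProgID are assumptions carried over from this example:

using System;
using System.Runtime.InteropServices;

static class ManifestScope
{
    [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
    struct ACTCTX
    {
        public int cbSize;
        public uint dwFlags;
        public string lpSource;            // path to the .manifest file
        public ushort wProcessorArchitecture;
        public ushort wLangId;
        public string lpAssemblyDirectory;
        public string lpResourceName;
        public string lpApplicationName;
        public IntPtr hModule;
    }

    [DllImport("kernel32.dll", CharSet = CharSet.Unicode)]
    static extern IntPtr CreateActCtx(ref ACTCTX pActCtx);
    [DllImport("kernel32.dll")]
    static extern bool ActivateActCtx(IntPtr hActCtx, out IntPtr lpCookie);
    [DllImport("kernel32.dll")]
    static extern bool DeactivateActCtx(uint dwFlags, IntPtr lpCookie);
    [DllImport("kernel32.dll")]
    static extern void ReleaseActCtx(IntPtr hActCtx);

    static readonly IntPtr INVALID_HANDLE_VALUE = new IntPtr(-1);

    public static dynamic CreateComObject(string manifestPath, string progId)
    {
        var ctx = new ACTCTX { cbSize = Marshal.SizeOf(typeof(ACTCTX)), lpSource = manifestPath };
        IntPtr hCtx = CreateActCtx(ref ctx);
        if (hCtx == INVALID_HANDLE_VALUE)
            throw new System.ComponentModel.Win32Exception();

        IntPtr cookie;
        if (!ActivateActCtx(hCtx, out cookie))
        {
            ReleaseActCtx(hCtx);
            throw new System.ComponentModel.Win32Exception();
        }
        try
        {
            // While the context is active, the ProgID resolves via the manifest
            Type type = Type.GetTypeFromProgID(progId, true);
            return Activator.CreateInstance(type);
        }
        finally
        {
            DeactivateActCtx(0, cookie);
            ReleaseActCtx(hCtx);
        }
    }
}

// Usage (hypothetical):
// dynamic server = ManifestScope.CreateComObject(@"simpleserver.manifest", "simpleserver.simpleserver");

Objects created while the context is active keep working after it is deactivated, since the class factory is resolved at creation time.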
Debugging Manifest Load Errors

If you for some reason get your manifest loading wrong there are a couple of useful tools available - SxsTrace and SxsParse. These two tools can be a huge help in debugging manifest loading errors. Put the following into a batch file (SxS_Trace.bat for example):

sxstrace Trace -logfile:sxs.bin
sxstrace Parse -logfile:sxs.bin -outfile:sxs.txt

Then start the batch file before running your EXE. Make sure there's no caching happening as described in the previous section. For example, if I go into the manifest file and explicitly break the CLSID and/or ProgID I get a detailed report on where the EXE is looking for the manifest and what it's reading. Eventually the trace gives me an error like this:

INFO: Parsing Manifest File C:\wwapps\Conf\SideBySide\Code\app.EXE.
INFO: Manifest Definition Identity is App.exe,processorArchitecture="x86",type="win32",version="1.0.0.0".
ERROR: Line 13: The value {AAaf2c2811-0657-4264-a1f5-06d033a969ff} of attribute clsid in element comClass is invalid.
ERROR: Activation Context generation failed.
End Activation Context Generation.

pinpointing nicely where the error lies. Pay special attention to the various attributes - they have to match exactly in the different sections of the manifest file(s).

Multiple COM Objects

The manifest file that Visual Studio creates is actually quite a bit more complex than is required for basic registrationless COM object invocation. The manifest file can actually be simplified a lot by stripping off various namespaces and removing the type library references altogether. Here's an example of a simplified manifest file that actually includes references to 2 COM servers:

<?xml version="1.0" encoding="utf-8"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <assemblyIdentity name="App.exe" version="1.0.0.0" processorArchitecture="x86" type="win32" />
  <file name="simpleserver.DLL">
    <comClass clsid="{af2c2811-0657-4264-a1f5-06d033a969ff}" threadingModel="Apartment" progid="simpleserver.SimpleServer" description="simpleserver.SimpleServer" />
  </file>
  <file name="sidebysidedeploy.dll">
    <comClass clsid="{EF82B819-7963-4C36-9443-3978CD94F57C}" progid="sidebysidedeploy.SidebysidedeployServer" description="SidebySideDeploy Server" threadingModel="apartment" />
  </file>
</assembly>

Simple enough, right?

Routing to separate Manifest Files and Folders

In the examples above all files ended up in the application's root folder - all the DLLs, support files and runtimes. Sometimes that's not so desirable, and you can actually create separate manifest files. The easiest way to do this is to create a manifest file that 'routes' to another manifest file in a separate folder. Basically you create a new 'assembly identity' via a named id. You can then create a folder and another manifest with the id plus .manifest that points at the actual file. In this example I create:

- App.exe.manifest
- A folder called App.deploy
- A manifest file in App.deploy
- All DLLs and runtimes in App.deploy

Let's start with that master manifest file. This file only holds a reference to another manifest file:

App.exe.manifest
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <assemblyIdentity name="App.exe" version="1.0.0.0" processorArchitecture="x86" type="win32" />
  <dependency>
    <dependentAssembly>
      <assemblyIdentity name="App.deploy" version="1.0.0.0" type="win32" />
    </dependentAssembly>
  </dependency>
</assembly>

Note this file only contains a dependency to App.deploy, which is another manifest id. I can then create App.deploy.manifest in the current folder or in an App.deploy folder. In this case I'll create App.deploy and into it copy the DLLs and support runtimes. I then create App.deploy.manifest:

App.deploy.manifest

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <assemblyIdentity name="App.deploy" type="win32" version="1.0.0.0" />
  <file name="simpleserver.DLL">
    <comClass clsid="{af2c2811-0657-4264-a1f5-06d033a969ff}" threadingModel="Apartment" progid="simpleserver.SimpleServer" description="simpleserver.SimpleServer" />
  </file>
  <file name="sidebysidedeploy.dll">
    <comClass clsid="{EF82B819-7963-4C36-9443-3978CD94F57C}" threadingModel="Apartment" progid="sidebysidedeploy.SidebysidedeployServer" description="SidebySideDeploy Server" />
  </file>
</assembly>

In this manifest file I then host my COM DLLs and any support runtimes. This is quite useful if you have lots of DLLs you are referencing, or if you need to have separate configuration and application files that are associated with the COM object. This way the operation of your main application and the COM objects it interacts with is somewhat separated. You can see the two folders here:

Routing Manifests to different Folders

In theory registrationless COM should be pretty easy and painless - you've seen the configuration manifest files and it certainly doesn't look very complicated, right? But the devil's in the details. The ActivationContext API (SxS - side-by-side activation) is very intolerant of small errors in the XML or formatting of the keys, so be really careful when setting up components, especially if you are manually editing these files. If you do run into trouble, SxsTrace/SxsParse are a huge help to track down the problems. And remember that if you do have problems you'll need to recompile your EXEs or DLLs for the SxS APIs to refresh themselves properly. All of this gets even more fun if you want to do registrationless COM inside of IIS :-) But I'll leave that for another blog post…

© Rick Strahl, West Wind Technologies, 2005-2011. Posted in COM, .NET, FoxPro.

    Read the article

  • The Execute SQL Task

In this article we are going to take you through the Execute SQL Task in SQL Server Integration Services for SQL Server 2005 (although it applies just as well to SQL Server 2008).  We will be covering all the essentials that you will need to know to effectively use this task and make it as flexible as possible. The things we will be looking at are as follows:

- A tour of the Task.
- The properties of the Task.

After looking at these introductory topics we will then get into some examples. The examples will show different types of usage for the task:

- Returning a single value from a SQL query with two input parameters.
- Returning a rowset from a SQL query.
- Executing a stored procedure and retrieving a rowset, a return value, an output parameter value and passing in an input parameter.
- Passing in the SQL Statement from a variable.
- Passing in the SQL Statement from a file.

Tour Of The Task

Before we can start to use the Execute SQL Task in our packages we are going to need to locate it in the toolbox. Let's do that now. Whilst in the Control Flow section of the package, expand your toolbox and locate the Execute SQL Task. Below is how we found ours. Now drag the task onto the designer. As you can see from the following image, we have a validation error appear telling us that no connection manager has been assigned to the task. This can be easily remedied by creating a connection manager. There are certain types of connection manager that are compatible with this task, so we cannot just create any connection manager; these are detailed in a few graphics' time. Double click on the task itself to take a look at the custom user interface provided to us for this task. The task will open on the General tab as shown below. Take a bit of time to have a look around here, as throughout this article we will be revisiting this page many times. Whilst on the General tab, drop down the combobox next to the ConnectionType property. In here you will see the types of connection manager which this task will accept. As with SQL Server 2000 DTS, SSIS allows you to output values from this task in a number of formats. Have a look at the combobox next to the Resultset property. The major difference here is the ability to output into XML. If you drop down the combobox next to the SQLSourceType property you will see the ways in which you can pass a SQL Statement into the task itself. We will have examples of each of these later on, but certainly when we saw these for the first time we were very excited. Next to the SQLStatement property, if you click in the empty box next to it you will see ellipses appear. Click on them and you will see the very basic query editor that becomes available to you. Alternatively, after you have specified a connection manager for the task, you can click on the Build Query button to bring up a completely different query editor. This is slightly inconsistent. Once you've finished looking around the General tab, move on to the next tab, which is the Parameter Mapping tab. We shall, again, be visiting this tab throughout the article, but to give you an initial heads up, this is where you define the input, output and return values from your task. Note this is not where you specify the resultset. If however you now move on to the ResultSet tab, this is where you define what variable will receive the output from your SQL Statement in whatever form that is.
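Incidentally, everything this UI captures ends up as plain task properties, so the same task can also be created and configured from code through the SSIS object model. Here is a rough, hypothetical sketch (it assumes a reference to the Microsoft.SQLServer.ManagedDTS assembly; the connection string and statement are illustrative, and the Connection property can also be given the connection manager's ID rather than its name):

using Microsoft.SqlServer.Dts.Runtime;

class BuildPackage
{
    static void Main()
    {
        var package = new Package();

        // Add an OLE DB connection manager for the task to use
        ConnectionManager conn = package.Connections.Add("OLEDB");
        conn.Name = "AdventureWorksConn";
        conn.ConnectionString = "Provider=SQLNCLI;Data Source=localhost;" +
                                "Initial Catalog=AdventureWorks;Integrated Security=SSPI;";

        // "STOCK:SQLTask" is the creation moniker for the Execute SQL Task
        Executable exec = package.Executables.Add("STOCK:SQLTask");
        var taskHost = (TaskHost)exec;
        taskHost.Properties["Connection"].SetValue(taskHost, conn.Name);
        taskHost.Properties["SqlStatementSource"].SetValue(
            taskHost, "SELECT COUNT(*) FROM HumanResources.Employee");

        DTSExecResult result = package.Execute();
    }
}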
Property Expressions are one of the most amazing things to happen in SSIS and they will not be covered here as they deserve a whole article to themselves. Watch out for these as their usefulness will astound you. For a more detailed discussion of what the parameter markers in the SQL Statements on the General tab should be, and how to map them to variables on the Parameter Mapping tab, see Working with Parameters and Return Codes in the Execute SQL Task.

Task Properties

There are two places where you can specify the properties for your task. One is in the task UI itself and the other is in the property pane which will appear if you right click on your task and select Properties from the context menu. We will be doing plenty of property setting in the UI later, so let's take a moment to have a look at the property pane. Below is a graphic showing our properties pane. Now we shall take you through all the properties and tell you exactly what they mean. A lot of these properties you will see across all tasks as well as the package, because of everything's base structure, The Container.

- BypassPrepare: Should the statement be prepared before sending to the connection manager destination (True/False)?
- Connection: This is simply the name of the connection manager that the task will use. We can get this from the connection manager tray at the bottom of the package.
- DelayValidation: Really interesting property; it tells the task not to validate until it actually executes. A usage for this may be that you are operating on a table yet to be created, but at runtime you know the table will be there.
- Description: Very simply, the description of your Task.
- Disable: Should the task be enabled or not? You can also set this through a context menu by right clicking on the task itself.
- DisableEventHandlers: As a result of events that happen in the task, should the event handlers for the container fire?
- ExecValueVariable: The variable assigned here will get or set the execution value of the task.
- Expressions: Expressions, as we mentioned earlier, are a really powerful tool in SSIS and the graphic below shows us a small peek of what you can do. We select a property on the left and assign an expression to the value of that property on the right, causing the value to be dynamically changed at runtime. One of the most obvious uses of this is that the property value can be built dynamically from within the package, allowing you a great deal of flexibility.
- FailPackageOnFailure: If this task fails, does the package?
- FailParentOnFailure: If this task fails, does the parent container? A task can be hosted inside another container, i.e. the For Each Loop Container, and this would then be the parent.
- ForcedExecutionValue: This property allows you to hard code an execution value for the task.
- ForcedExecutionValueType: What is the datatype of the ForcedExecutionValue?
- ForceExecutionResult: Force the task to return a certain execution result. This could then be used by the workflow constraints. Possible values are None, Success, Failure and Completion.
- ForceExecutionValue: Should we force the execution result?
- IsolationLevel: This is the transaction isolation level of the task.
- IsStoredProcedure: Certain optimisations are made by the task if it knows that the query is a Stored Procedure invocation. The docs say this will always be false unless the connection is an ADO connection.
- LocaleID: Gets or sets the LocaleID of the container.
- LoggingMode: Should we log for this container and what settings should we use? The value choices are UseParentSetting, Enabled and Disabled.
- MaximumErrorCount: How many times can the task fail before we call it a day?
- Name: Very simply, the name of the task.
- ResultSetType: How do you want the results of your query returned? The choices are ResultSetType_None, ResultSetType_SingleRow, ResultSetType_Rowset and ResultSetType_XML.
- SqlStatementSource: Your Query/SQL Statement.
- SqlStatementSourceType: The method of specifying the query. Your choices here are DirectInput, FileConnection and Variables.
- TimeOut: How long should the task wait to receive results?
- TransactionOption: How should the task handle being asked to join a transaction?

Usage Examples

As we move through the examples we will only cover in them what we think you must know and what we think you should see. This means that some of the more elementary steps, like setting up variables, will be covered in the early examples but skipped and simply referred to in later ones. All these examples use the AdventureWorks database that comes with SQL Server 2005.

Returning a Single Value, Passing in Two Input Parameters

So the first thing we are going to do is add some variables to our package. The graphic below shows us those variables having been defined. Here the CountOfEmployees variable will be used as the output from the query, and EndDate and StartDate will be used as input parameters. As you can see, all these variables have been scoped to the package. Scoping allows us to have domains for variables. Each container has a scope, and remember a package is a container as well. Variable values of the parent container can be seen in child containers, but cannot be passed back up to the parent from a child. Our following graphic has had a number of changes made. The first of those changes is that we have created and assigned an OLEDB connection manager to this task, ExecuteSQL Task Connection. The next thing is we have made sure that the SQLSourceType property is set to Direct Input, as we will be writing in our statement ourselves. We have also specified that only a single row will be returned from this query. The statement we typed in was:

SELECT COUNT(*) AS CountOfEmployees
FROM HumanResources.Employee
WHERE (HireDate BETWEEN ? AND ?)

Moving on now to the Parameter Mapping tab, this is where we are going to tell the task about our input parameters. We Add them to the window, specifying their direction and datatype. A quick word here about the structure of the variable name: as you can see, SSIS has preceded the variable with the word User. This is a default namespace for variables, but you can create your own. When defining your variables, if you look at the variables window title bar you will see some icons. If you hover over the last one on the right you will see it says "Choose Variable Columns". If you click the button you will see a list of checkbox options, and one of them is Namespace. After checking this you will now see where you can define your own namespace. The next tab, Result Set, is where we need to get back the value(s) returned from our statement and assign them to a variable, which in our case is CountOfEmployees, so we can use it later perhaps. Because we are only returning a single value, if you remember from earlier we are allowed to assign a name to the resultset, but it must be the name of the column (or alias) from the query. A really cool feature of Business Intelligence Studio being hosted by Visual Studio is that we get breakpoint support for free.
In our package we set a breakpoint so we can break the package and have a look in a watch window at the variable values as they appear to our task, and at what the variable value of our resultset is after the task has done the assignment. Here's that window now. As you can see, the count of employees that matched the date range was 2.

Returning a Rowset

In this example we are going to return a resultset back to a variable after the task has executed, not just a single-row, single value. There are no input parameters required, so the variables window is nice and straightforward: one variable of type Object. Here is the statement that will form the source for our resultset:

select p.ProductNumber, p.name, pc.Name as ProductCategoryName
FROM Production.ProductCategory pc
JOIN Production.ProductSubCategory psc
ON pc.ProductCategoryID = psc.ProductCategoryID
JOIN Production.Product p
ON psc.ProductSubCategoryID = p.ProductSubCategoryID

We need to make sure that we have selected Full result set as the ResultSet, as shown below on the task's General tab. Because there are no input parameters we can skip the Parameter Mapping tab and move straight to the Result Set tab. Here we need to Add our variable defined earlier and map it to the result name of 0 (remember we covered this earlier). Once we run the task we can again set a breakpoint and have a look at the values coming back from the task. In the following graphic you can see the result set returned to us as a COM object. We can do some pretty interesting things with this COM object, and in later articles that is exactly what we shall be doing.
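As a taste of those interesting things: when the task uses an OLE DB connection, the Object variable actually holds an ADO recordset, which a downstream Script Task can shred into a DataTable. Here is a minimal, hypothetical C# sketch (C# Script Tasks arrived with SQL Server 2008 - in 2005 the same calls work from VB.NET - and the variable name User::ProductList is an assumption):

using System.Data;
using System.Data.OleDb;

public void Main()
{
    // OleDbDataAdapter.Fill can consume an ADO recordset
    // straight from the package's Object variable
    var table = new DataTable();
    var adapter = new OleDbDataAdapter();
    adapter.Fill(table, Dts.Variables["User::ProductList"].Value);

    foreach (DataRow row in table.Rows)
    {
        // e.g. row["ProductNumber"], row["ProductCategoryName"]
    }

    // ScriptResults is the enum generated by the Script Task template
    Dts.TaskResult = (int)ScriptResults.Success;
}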
Return Values, Input/Output Parameters and Returning a Rowset from a Stored Procedure

This example is pretty much going to give us a taste of everything. We have already covered in the previous example how to specify the ResultSet to be a Full result set, so we will not cover it again here. For this example we are going to need 4 variables: one for the return value, one for the input parameter, one for the output parameter and one for the result set. Here is the statement we want to execute. Note how much cleaner it is than if you wanted to do it using the current version of DTS. In the Parameter Mapping tab we are going to Add our variables and specify their direction and datatypes. In the Result Set tab we can now map our final variable to the rowset returned from the stored procedure. It really is as simple as that, and we were amazed at how much easier it is than in DTS 2000.

Passing in the SQL Statement from a Variable

SSIS, as we have mentioned, is hugely more flexible than its predecessor, and one of the things you will notice when moving around the tasks and the adapters is that a lot of them accept a variable as an input for something they need. The Execute SQL Task is no different. It will allow us to pass in a string variable as the SQL Statement. This variable value could have been set earlier on from inside the package, or it could have been populated from outside using a configuration. The ResultSet property is set to single row, and we'll show you why in a second when we look at the variables. Note also the SQLSourceType property. Here's the General tab again. Looking at the variables we have in this package you can see we have only two: one for the return value from the statement and one which is obviously for the statement itself. Again we need to map the Result Name to our variable, and this can be a named Result Name (the column name or alias returned by the query) and not 0. The expected result into our variable should be the number of rows in the Person.Contact table, and if we look in the watch window we see that it is.

Passing in the SQL Statement from a File

The final example we are going to show is a really interesting one. We are going to pass in the SQL statement to the task by using a file connection manager. The file itself contains the statement to run. The first thing we are going to need to do is create our file connection manager to point to our file. Click in the connections tray at the bottom of the designer, right click and choose "New File Connection". As you can see in the graphic below, we have chosen to use an existing file and have passed in the name as well. Have a look around at the other "Usage Type" values available whilst you are here. Having set that up, we can now see in the connection manager tray our file connection manager sitting alongside the OLE-DB connection we have been using for the rest of these examples. Now we can go back to the familiar General tab to set up how the task will accept our file connection as the source. All the other properties in this task are set up exactly as we have been doing for other examples, depending on the options chosen, so we will not cover them again here.

We hope you will agree that the Execute SQL Task has changed considerably in this release from its DTS predecessor. It has a lot of options available, but once you have configured it a few times you get to learn what needs to go where. We hope you have found this article useful.

    Read the article

  • Jolicloud is a Nifty New OS for Your Netbook

    - by Matthew Guay
Want to breathe new life into your netbook?  Here’s a quick look at Jolicloud, a unique new Linux-based OS that lets you use your netbook in a whole new way.

Netbooks have been an interesting category of computers.  When they were first released, most netbooks came with a stripped-down Linux-based operating system designed to let you easily access the internet first and foremost.  Consumers wanted more from their netbooks, so full OSes such as Windows XP and Ubuntu became the standard.  Microsoft worked hard to get Windows 7 working great on netbooks, and today most netbooks run Windows 7 well.  But the Linux community hasn’t stood still either, and Jolicloud is proof of that.  Jolicloud is a unique OS designed to bring the best of both webapps and standard programs to your netbook.  Keep reading to see if this is the perfect netbook OS for you.

Getting Started

Installing Jolicloud on your netbook is easy thanks to the Jolicloud Express installer for Windows.  Since many netbooks run Windows by default, this makes it easy to install Jolicloud.  Plus, your Windows install is left untouched, so you can still easily access all your Windows files and programs. Download and run the roughly 700MB installer (link below) just as a normal installer in Windows. This will first extract the needed files. Click Get started to install Jolicloud on your netbook. Enter a username, password, and nickname for your computer.  Please note that the username must be all lowercase, and the nickname should not contain spaces or special characters.  Now you can review the default installation settings.  By default it will take up 39GB and install on your C:\ drive in English.  If you wish to change this, click Change. We chose to install it on the D: drive on this netbook, as its hard drive was already partitioned into two parts.  Click Save when your settings are all correct, and then click Next in the previous window. Jolicloud will prepare for the installation.  This took about 5 minutes in our test.  Click Next when this is finished. Click Restart now to install and run Jolicloud. When your netbook reboots, it will initialize the Jolicloud setup. It will then automatically finish the installation.  Just sit back and wait; there’s nothing for you to do right now.  The installation took about 20 minutes in our test. Jolicloud will automatically reboot when the setup is finished. Once it’s rebooted, you’re ready to go!  Enter the username, then the password, that you chose earlier when you were installing Jolicloud from Windows. Welcome to your Jolicloud desktop!

Hardware Support

We installed Jolicloud on a Samsung N150 netbook with an Atom N450 processor, 1GB of RAM, a 250GB hard drive, and WiFi b/g/n with Bluetooth.  Amazingly, once Jolicloud was installed, everything was ready to use.  No drivers to install, no settings to hassle with; it was all installed and set up perfectly.  Power settings worked great, and closing the netbook put it to sleep just like in Windows. WiFi drivers have typically been difficult to find and install on Linux, but Jolicloud had our netbook’s WiFi working immediately.  To get online, simply click the Wireless icon on the top right, and select the wireless network you want to connect to. Jolicloud will let you know when it is signed on. Wired LAN networking was also seamless; simply connect your cable and you’re ready to go.  The webcam and touchpad also worked perfectly out of the box.  
The only thing missing was multitouch; this touchpad has two-finger scrolling, pinch zoom, and other nice multitouch features in Windows, but in Jolicloud it only functioned as a standard touchpad.  It did have tap-to-click activated by default, as well as right-side scrolling, which is nice. Jolicloud also supported our video card without any extra work.  The native resolution was already selected, and the only problem we had with the screen was that there was no apparent way to change the brightness.  This is not a major problem, but it would be nice to have.  The Samsung N150 has Intel GMA3150 integrated graphics, and Jolicloud promises 1080p HD video on it.  It did play back 720p H.264 video flawlessly without installing anything extra, but it stuttered on full 1080p HD (which is the exact same as this netbook’s video playback in Windows 7 – 720p works great, but it stutters on 1080p).  We would be excited to see full HD on this netbook, but 720p is definitely fine for most stuff.  Jolicloud supports a wide range of netbooks, and based on our experience we would expect it to work as well on any supported hardware.  Check out the list of supported netbooks to see if your netbook is supported; if not, it still may work but you may have to install special drivers. Jolicloud’s performance was very similar to Windows 7 on our netbook.  It boots in about 30 seconds, and apps load fairly quickly.  In general, we couldn’t tell much difference in performance between Jolicloud and Windows 7, though this isn’t a problem since Windows 7 runs great on the current generation of netbooks.

Using Jolicloud

Ready to start putting Jolicloud to use?  On a fresh Jolicloud install you can run several built-in apps, such as Firefox, a calculator, and the chat client Pidgin.  It also has a media player and file viewer installed, so you can play MP3s or MPG videos, or read PDF ebooks, without installing anything extra.  It also has Flash player installed so you can watch videos online easily. You can also directly access all of your files from the right side of your home screen.  You can even access your Windows files; in our test, the 116.9 GB Media was C: from Windows.  Select it to browse and open any file you had saved in Windows. You may need to enter your password to access it. Once you’ve authenticated, you’ll see all of your Windows files and folders.  Your user files (Documents, Music, Videos, etc.) will be in the Users folder. And, you can easily add files from removable media such as USB flash drives and memory cards.  Jolicloud recognized a flash drive we tested with no trouble at all.

Add new apps

But the best part about Jolicloud is that it makes it very easy to install new apps.  Click the Get Started button on your home screen. You’ll first need to create an account.  You can then use this same account on another netbook if you wish, and your settings will automatically be synced between the two. You can either sign up using your Facebook account, or you can sign up the traditional way with your email address, name, and password.  If you sign up this way, you will need to confirm your email address before your account will be finished. Now, choose your netbook model from the list, and enter a name for your computer. And that’s it!  You’ll now see the Jolicloud dashboard, which will show you updates and notifications from friends who also use Jolicloud. Click the App directory to find new apps for your netbook.  
Here you will find a variety of webapps, such as Gmail, along with native applications, such as Skype, that you can install on your netbook.  Simply click the Install button on the right to add the app to your netbook. You will be prompted to enter your system password, and then the app will install without any further input.  Once an app is installed, a check mark will appear beside its name.  You can remove it by clicking the Remove button, and it will uninstall seamlessly. Webapps, such as Gmail, actually run in a Chrome-powered window that lets the webapp run full screen.  This gives the webapps a native feel, but actually they’re just running the same as they would in a standard web browser.

The Jolicloud Interface

Most apps run maximized, and there is no way to run them smaller.  This in general works well, since with small screens most apps need to run full-screen anyhow. Smaller apps, such as a calculator or the Pidgin chat client, run in a window just like they do on other operating systems. You can switch to another app that’s running by selecting its icon on the top left, or you can go back to the home screen by clicking the home button.  If you’re finished with a program, simply click the red X button on the top right of the window when you’re running it. Or, you can switch between programs using standard keyboard shortcuts such as Alt-Tab. The default page on the home screen is the favorites page, and all of your other programs are organized in their own sections on the left-hand side.  If you want to add one of these to your favorites page, simply right-click on it and select Add to Favorites. When you’re done for the day, you can simply close your netbook to put it to sleep.  Or, if you want to shut down, just press the Quit button on the bottom right of the home screen and then select Shut Down.

Booting Jolicloud

When you install Jolicloud, it will set itself as the default operating system.  Now, when you boot your netbook, it will show you a list of installed operating systems.  You can select either Windows or Jolicloud, but if you don’t make a selection it will boot into Jolicloud after waiting 10 seconds. If you’d prefer to boot into Windows by default, you can easily change this.  First, boot your netbook into Windows.  Open the Start menu, right-click on the Computer button, and select Properties.  Click the “Advanced system settings” link on the left side. Click the Settings button in the Startup and Recovery section. Now, select Windows as the default operating system, and click OK.  Your netbook will now boot into Windows by default, but will give you 10 seconds to choose to boot into Jolicloud when you start your computer. Or, if you decide you don’t want Jolicloud, you can easily uninstall it from within Windows. Please note that this will also remove any files you may have saved in Jolicloud, so be sure to copy them to your Windows drive before uninstalling. To uninstall Jolicloud from within Windows, open Control Panel, and select Uninstall a Program. Scroll down to select Jolicloud, and click Uninstall/Change. Click Yes to confirm that you want to uninstall Jolicloud. After a few moments, it will let you know that Jolicloud has been uninstalled.  Your netbook is now back the same as it was before you installed Jolicloud, with only Windows installed.

Closing

Whether you’re wanting to replace the current OS on your netbook or would simply like to try out a fresh new Linux version, Jolicloud is a great option for you.  
We were very impressed by its solid hardware support and the ease of installing new apps in Jolicloud.  Rather than simply giving us a standard OS, Jolicloud offers a unique way to use your netbook with native programs and webapps.  And whether you’re an IT pro or a new computer user, Jolicloud was easy enough to use that anyone can do it.  Give it a try, and let us know what your favorite netbook OS is!

Link: Download Jolicloud for your netbook

    Read the article

  • Using an alternate search platform in Commerce Server 2009

    - by Lewis Benge
Although Microsoft Commerce Server 2009's architecture is built upon Microsoft SQL Server, and has the full power of the SQL Full Text Indexing search platform, there are times, however, when you may require a richer or alternate search platform. One of these scenarios is when you want to implement a faceted (refinement) search into your site, which provides dynamic refinements based on the search results dataset. Faceted search is becoming popular in most online retail environments as a way of providing an enhanced user experience when browsing a larger catalogue. This is powerful for two reasons. Firstly, with a traditional search it is down to the user to think of a search term suitable for the product they are trying to find. This typically will not return similar products or help in any way to refine a larger dataset. Faceted searches, on the other hand, provide a comprehensive list of product properties, grouped together by similarity to help the user narrow down the results returned; as the user progressively restricts the search criteria by selecting additional criteria to search again, these facets need to continually refresh. The whole experience allows users to explore alternate brands, price ranges, or find products they hadn't initially thought of or weren't looking for, in a bid to enhance cross-sell in the retail environment. The second advantage of this type of search, from a business perspective, is to harvest the search results to start to profile your user. Even though anonymous users may routinely visit your site, and will not necessarily register or complete a transaction to build up marketing data profiling, you can still achieve the same result by recording the search facets used within the search sequence. Below is a faceted search scenario generated from eBay using the search term "server". By creating a search profile of clicking through Computer & Networking -> Servers -> Dell -> New, and recording this information against my user profile, you can start to predict with a lot more certainty what types of products I am interested in. This will allow you to apply shopping-cart analysis against your search data and provide great cross-sale or advertising opportunities, or personalise the user experience based on your prediction of what the user may be interested in. This type of search is extremely beneficial in e-Commerce environments, but achieving it out of the box with Commerce Server and SQL Full Text indexing can be challenging. In many deployments it is often easier to use an alternate search platform such as Microsoft's FAST, Apache Solr, or Endeca; however, you still want these products to integrate natively into Commerce Server to ensure that up-to-date inventory information is presented, profile information is generated, and you provide a consistent API. To do so we make the most of the Commerce Server extensibility points called operation sequence components. In this example I will be talking about Apache Solr hosted on Apache Tomcat; in this specific example I have used the SolrNet C# library to interface to the Java platform. Also, I am not going to talk about Solr configuration for indexing – but in a production environment this would typically happen by using PowerShell to call the Commerce Server management web service to export your catalog as XML, apply an XSLT transform to the file to make it conform to Solr, and use a simple HTTP POST to send it to the search engine for indexing.
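That indexing push can be as simple as an HTTP POST of the transformed XML to Solr's XML update handler, followed by a commit. Here is a minimal, hypothetical sketch (the URL assumes a default local Solr install; your host, port and core will differ):

using System.IO;
using System.Net;
using System.Text;

static class SolrIndexer
{
    // Posts an XML message (<add><doc>...</doc></add>, <commit/>, etc.)
    // to Solr's update handler and returns its XML status response.
    public static string Post(string updateUrl, string xml)
    {
        var request = (HttpWebRequest)WebRequest.Create(updateUrl);
        request.Method = "POST";
        request.ContentType = "text/xml; charset=utf-8";

        byte[] body = Encoding.UTF8.GetBytes(xml);
        request.ContentLength = body.Length;
        using (Stream stream = request.GetRequestStream())
            stream.Write(body, 0, body.Length);

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
            return reader.ReadToEnd();
    }
}

// Usage: push the transformed catalog, then commit so it becomes searchable.
// SolrIndexer.Post("http://localhost:8983/solr/update", transformedCatalogXml);
// SolrIndexer.Post("http://localhost:8983/solr/update", "<commit/>");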
Essentially a sequence component is a step in a serial workflow used to call a data repository (which in most cases is the Commerce Server pipelines or databases) and map to and from a Commerce Entity object whilst enforcing any business rules. So the first step in the process is to add a new class library to your existing Commerce Server site. You will need to use a new library as Sequence Components need to be strongly named to be deployed. Once you are inside your new project, add a new class file and add a reference to the Microsoft.Commerce.Providers, Microsoft.Commerce.Contracts and Microsoft.Commerce.Broker assemblies. Now make your new class derive from the base object Microsoft.Commerce.Providers.Components.OperationSequenceComponent and override the ExecuteQuery method. Your screen will then look something similar to this:

As all we are doing in this component is conducting a search, we are only interested in the ExecuteQuery method. This method accepts three arguments: queryOperation, operationCache, and response. The queryOperation will be the object in which we receive our search parameters, the cache allows access to the Commerce Server cache allowing us to store regularly accessed information, and the response object is the object upon which we will return the result of our search. Inside this method is simply where we are going to inject our logic for our third party search platform. As I am not going to explain the inner workings of actually making a Solr call, I'll simply provide the sample code here. I would however highly recommend looking at the SolrNet wiki as they have some great explanations of how the API works. What you will find, however, is that there are some further extensions required when attempting to integrate a custom search provider. Firstly, out of the box the CommerceQueryOperation you will receive into the method when conducting a search against a catalog is specifically geared towards a SQL Full Text Search, with properties such as a Where clause. To make the operation you receive more relevant you will need to create another class, this time derived from Microsoft.Commerce.Contracts.Messages.CommerceSearchCriteria, and within this you need to detail the properties you will require to submit as parameters to the Solr search API. My example looks like this:

[DataContract(Namespace = "http://schemas.microsoft.com/microsoft-multi-channel-commerce-foundation/types/2008/03")]
public class CommerceCatalogSolrSearch : CommerceSearchCriteria
{
    private Dictionary<string, string> _facetQueries;

    public CommerceCatalogSolrSearch()
    {
        _facetQueries = new Dictionary<String, String>();
    }

    public Dictionary<String, String> FacetQueries
    {
        get { return _facetQueries; }
        set { _facetQueries = value; }
    }

    public String SearchPhrase { get; set; }
    public int PageIndex { get; set; }
    public int PageSize { get; set; }
    public IEnumerable<String> Facets { get; set; }
    public string Sort { get; set; }

    public new int FirstItemIndex
    {
        get { return (PageIndex - 1) * PageSize; }
    }

    public int LastItemIndex
    {
        get { return FirstItemIndex + PageSize; }
    }
}

To allow you to construct a CommerceQueryOperation call within the API you will also need to construct another class, this time derived from Microsoft.Commerce.Common.MessageBuilders.CommerceSearchCriteriaBuilder, which is simply used to construct an instance of the CommerceQueryOperation you have just created and expose the properties you want set.
My message builder looks like this:

public class CommerceCatalogSolrSearchBuilder : CommerceSearchCriteriaBuilder
{
    private CommerceCatalogSolrSearch _solrSearch;

    public CommerceCatalogSolrSearchBuilder()
    {
        _solrSearch = new CommerceCatalogSolrSearch();
    }

    public String SearchPhrase
    {
        get { return _solrSearch.SearchPhrase; }
        set { _solrSearch.SearchPhrase = value; }
    }

    public int PageIndex
    {
        get { return _solrSearch.PageIndex; }
        set { _solrSearch.PageIndex = value; }
    }

    public int PageSize
    {
        get { return _solrSearch.PageSize; }
        set { _solrSearch.PageSize = value; }
    }

    public Dictionary<String, String> FacetQueries
    {
        get { return _solrSearch.FacetQueries; }
        set { _solrSearch.FacetQueries = value; }
    }

    public String[] Facets
    {
        get { return _solrSearch.Facets.ToArray(); }
        set { _solrSearch.Facets = value; }
    }

    public override CommerceSearchCriteria ToSearchCriteria()
    {
        return _solrSearch;
    }
}

Once you have these two classes in place you can now safely cast the CommerceOperation you receive as an argument of the overridden ExecuteQuery method in the sequence component to the CommerceCatalogSolrSearch operation you have just created, e.g.:

public CommerceCatalogSolrSearch TryGetSearchCriteria(CommerceOperation operation)
{
    var searchCriteria = operation as CommerceQueryOperation;
    if (searchCriteria == null)
        throw new Exception("No search criteria present");

    var local = (CommerceCatalogSolrSearch) searchCriteria.SearchCriteria;
    if (local == null)
        throw new Exception("Unexpected Search Criteria in Operation");

    return local;
}

Now you have all of your search parameters present, and you can go off and call the external search platform API. You will of course get proprietary objects returned, so the next step in the process is to convert the results being returned back into CommerceEntities. You do this via another extensibility point within the Commerce Server API called translators. A translator is another separate class, this time implementing the interface Microsoft.Commerce.Providers.Translators.IToCommerceEntityTranslator. As you can imagine, this interface is specific to the conversion of an object TO a CommerceEntity; you will need to implement a separate interface if you also need to go in the opposite direction. If you implement the required method for the interface you will get a single Translate method which has a source object, a destination CommerceEntity, and a collection of properties as arguments. For simplicity's sake in this example I have hard-coded the mappings; however, best practice would dictate you map the objects using your MetadataDefinitions.xml file. 
You will of course get proprietary objects back from the search platform, so the next step in the process is to convert the results into CommerceEntities. You do this via another extensibility point within the Commerce Server API called translators. A translator is another separate class, this time implementing the interface Microsoft.Commerce.Providers.Translators.IToCommerceEntityTranslator. As you can imagine, this interface is specific to converting an object TO a CommerceEntity; you will need to implement a separate interface if you also need to go in the opposite direction. Implementing the interface gives you a single Translate method which takes a source object, a destination CommerceEntity and a collection of properties as arguments. For simplicity's sake I have hard-coded the mappings in this example; best practice would dictate mapping the objects using your MetadataDefinitions.xml file. Once complete, your translator will look something like the following:

public class SolrEntityTranslator : IToCommerceEntityTranslator
{
    #region IToCommerceEntityTranslator Members

    public void Translate(object source, CommerceEntity destinationCommerceEntity, CommercePropertyCollection propertiesToReturn)
    {
        if (source.GetType().Equals(typeof(SearchProduct)))
        {
            var searchResult = (SearchProduct)source;

            destinationCommerceEntity.Id = searchResult.ProductId;
            destinationCommerceEntity.SetPropertyValue("DisplayName", searchResult.Title);
            destinationCommerceEntity.ModelName = "Product";
        }
    }

    #endregion
}

Once you have a translator in place you can safely map the results from your search platform into Commerce Entities and attach them to the CommerceResponse object in a fashion similar to this:

foreach (SearchProduct result in matchingProducts)
{
    var destinationEntity = new CommerceEntity(_returnModelName);

    Translator.ToCommerceEntity(result, destinationEntity, _queryOperation.Model.Properties);
    response.CommerceEntities.Add(destinationEntity);
}

From SOLR I actually have two objects returned, a product and a collection of facets, so I have an additional translator for facets (which maps to a custom facet CommerceEntity), and my facet response from SOLR is passed into the translator helper class separately. When all of this is pieced together you have successfully completed the extensibility-point coding: you will have created a new OperationSequenceComponent, a custom SearchCriteria object and message builder class, and translators to convert the objects into Commerce Entities. Now you simply need to configure them, and you can start calling them from your code.

Make sure you sign your assembly, compile it and identify its signature. Next you need to put a reference to your new assembly into the Channel.Config configuration file, replacing that of the existing SQL full-text component, and you also need to add your translators to the Translators node of Channel.Config (a rough sketch of both entries appears at the end of this post). Lastly, add any custom CommerceEntities you have developed to your MetadataDefinitions.xml file. Your configuration is now complete, and you should be able to make a call to the Commerce Foundation API, which will act as a proxy to your third-party search platform and return CommerceEntities for your search results. If you need data to be enriched or logged, or any other logic applied, simply add further sequence components to the OperationSequence node of your Channel.Config file (obviously keeping the search component first).

Now to call your code you simply request it as per any other CommerceQuery operation, taking into account that you may receive multiple types of CommerceEntity back:

public KeyValuePair<FacetCollection, List<Product>> DoFacetedProductQuerySearch(string searchPhrase, string orderKey, string sortOrder, int recordIndex, int recordsPerPage, Dictionary<string, string> facetQueries, out int totalItemCount)
{
    var products = new List<Product>();
    var query = new CommerceQuery<CatalogEntity, CommerceCatalogSolrSearchBuilder>();

    query.SearchCriteria.PageIndex = recordIndex;
    query.SearchCriteria.PageSize = recordsPerPage;
    query.SearchCriteria.SearchPhrase = searchPhrase;
    query.SearchCriteria.FacetQueries = facetQueries;

    totalItemCount = 0;
    CommerceResponse response = SiteContext.ProcessRequest(query.ToRequest());
    var queryResponse = response.OperationResponses[0] as CommerceQueryOperationResponse;

    // No results: return an empty pair (also guards against a null response)
    if (queryResponse == null || queryResponse.CommerceEntities.Count == 0)
        return new KeyValuePair<FacetCollection, List<Product>>();

    totalItemCount = (int)queryResponse.TotalItemCount;

    // Prepare a multi-operation to retrieve the product variants
    var multiOperation = new CommerceMultiOperation();

    // Add products to the results
    foreach (Product product in queryResponse.CommerceEntities.Where(x => x.ModelName == "Product"))
    {
        var productQuery = new CommerceQuery<Product>(Product.ModelNameDefinition);
        productQuery.SearchCriteria.Model.Id = product.Id;
        productQuery.SearchCriteria.Model.CatalogId = product.CatalogId;

        var variantQuery = new CommerceQueryRelatedItem<Variant>(Product.RelationshipName.Variants);
        productQuery.RelatedOperations.Add(variantQuery);

        multiOperation.Add(productQuery);
    }

    CommerceResponse variantsResponse = SiteContext.ProcessRequest(multiOperation.ToRequest());
    foreach (CommerceQueryOperationResponse queryOpResponse in variantsResponse.OperationResponses)
    {
        if (queryOpResponse.CommerceEntities.Count() > 0)
            products.Add(queryOpResponse.CommerceEntities[0]);
    }

    // Get the facet collection
    FacetCollection facetCollection = queryResponse.CommerceEntities
        .Where(x => x.ModelName == "FacetCollection")
        .FirstOrDefault();

    return new KeyValuePair<FacetCollection, List<Product>>(facetCollection, products);
}

..And that is it: just a few classes and some configuration allow you to extend the Commerce Server query operations to call a third-party search platform, whilst still maintaining a unified API in the remainder of your code. The same approach holds for any extensibility within Commerce Server that requires execution in a serial fashion, such as calls to LOB systems or web services to validate or enrich data. Feel free to use this example in other applications, and if you have any questions please feel free to e-mail me and I'll help out where I can!
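For completeness, here is the rough shape of the Channel.Config changes mentioned above. Treat every element, attribute and type name as an assumption: the exact schema should be copied from the SQL full-text entries already present in your own Channel.Config, and the assembly/type names below are the hypothetical ones used in the sketches earlier in this post:

<!-- Sketch: swap the catalog search processor for the SOLR sequence component -->
<Component name="Catalog Search Processor"
           type="MySite.Search.SolrSearchSequenceComponent, MySite.Search" />

<!-- Sketch: register the translators alongside the built-in ones -->
<Translators>
  <Translator type="MySite.Search.SolrEntityTranslator, MySite.Search" />
  <Translator type="MySite.Search.SolrFacetTranslator, MySite.Search" />
</Translators>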

    Read the article

  • Error installing ZTE AC2738 on Ubuntu 3.0.0-12-generic

    - by Netro
    I am getting this error: ‘struct usb_serial_driver’ has no member named ‘shutdown’. I am installing on 64-bit Ubuntu with kernel 3.0.0-12-generic.

... Beginning Verify CD ...
... Verify CD Succeed!
... Beginning Copy Install Package Files ...
... will take a long time, waiting 5 seconds, please ...
Copy Install Package Files Succeed!
'ztemtApp' previous version not found. and install now
Beginning install ...
... Current linux release version is 'Ubuntu' ...
Checking 'App' process ...
Checking old installation ...
Installing ...
Current Path is : . : /tmp/ztemt_datacard/Linux
1. Checking Previous Version ...
2. Copying Data Bin ...
... will take a few seconds, please waiting ...
/tmp/ztemt_datacard/Linux
3. Auto Load Usb Driver Module ...
Rather than invoking init scripts through /etc/init.d, use the service(8) utility, e.g. service acpid restart
Since the script you are attempting to invoke has been converted to an Upstart job, you may also use the stop(8) and then start(8) utilities, e.g. stop acpid ; start acpid. The restart(8) utility is also available.
acpid stop/waiting
acpid start/running, process 11802
4. Changing pppd Options ...
5. Changing File Permission ...
6. Deleting Qt lib When Local QT Vertion > V4.4.0 ...
... Package 'libqtgui4' exist ... QT_VERSION = 4
7. Deleting process id file: EVDOApp.pid ...
8. Making USB Serial Driver Module : ztemt.ko ...
... will take a few seconds, please waiting ...
make -C /lib/modules/3.0.0-12-generic/build M=/usr/local/bin/ztemtApp/zteusbserial/below2.6.27 modules
make[1]: Entering directory `/usr/src/linux-headers-3.0.0-12-generic'
CC [M] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.o
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘destroy_serial’:
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:159:14: error: ‘struct usb_serial_driver’ has no member named ‘shutdown’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:165:18: error: ‘struct usb_serial_port’ has no member named ‘open_count’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘serial_open’:
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:241:36: error: ‘struct usb_serial_port’ has no member named ‘mutex’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:246:8: error: ‘struct usb_serial_port’ has no member named ‘open_count’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:251:6: error: ‘struct usb_serial_port’ has no member named ‘tty’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:253:10: error: ‘struct usb_serial_port’ has no member named ‘open_count’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:265:3: warning: passing argument 1 of ‘serial->type->open’ from incompatible pointer type [enabled by default]
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:265:3: note: expected ‘struct tty_struct *’ but argument is of type ‘struct usb_serial_port *’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:265:3: warning: passing argument 2 of ‘serial->type->open’ from incompatible pointer type [enabled by default]
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:265:3: note: expected ‘struct usb_serial_port *’ but argument is of type ‘struct file *’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:270:20: error: ‘struct usb_serial_port’ has no member named ‘mutex’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:276:6: error: ‘struct usb_serial_port’ has no member named ‘open_count’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:278:6: error: ‘struct usb_serial_port’ has no member named ‘tty’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:279:20: error: ‘struct usb_serial_port’ has no member named ‘mutex’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘serial_close’:
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:355:18: error: ‘struct usb_serial_port’ has no member named ‘mutex’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:357:10: error: ‘struct usb_serial_port’ has no member named ‘open_count’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:358:21: error: ‘struct usb_serial_port’ has no member named ‘mutex’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:371:8: error: ‘struct usb_serial_port’ has no member named ‘open_count’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:372:10: error: ‘struct usb_serial_port’ has no member named ‘open_count’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:375:3: error: too many arguments to function ‘port->serial->type->close’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:377:11: error: ‘struct usb_serial_port’ has no member named ‘tty’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:378:12: error: ‘struct usb_serial_port’ has no member named ‘tty’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:379:9: error: ‘struct usb_serial_port’ has no member named ‘tty’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:380:8: error: ‘struct usb_serial_port’ has no member named ‘tty’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:386:20: error: ‘struct usb_serial_port’ has no member named ‘mutex’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘serial_write’:
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:407:11: error: ‘struct usb_serial_port’ has no member named ‘open_count’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:413:2: warning: passing argument 1 of ‘port->serial->type->write’ from incompatible pointer type [enabled by default]
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:413:2: note: expected ‘struct tty_struct *’ but argument is of type ‘struct usb_serial_port *’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:413:2: warning: passing argument 2 of ‘port->serial->type->write’ from incompatible pointer type [enabled by default]
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:413:2: note: expected ‘struct usb_serial_port *’ but argument is of type ‘const unsigned char *’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:413:2: warning: passing argument 3 of ‘port->serial->type->write’ makes pointer from integer without a cast [enabled by default]
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:413:2: note: expected ‘const unsigned char *’ but argument is of type ‘int’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:413:2: error: too few arguments to function ‘port->serial->type->write’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘serial_write_room’:
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:429:11: error: ‘struct usb_serial_port’ has no member named ‘open_count’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:435:2: warning: passing argument 1 of ‘port->serial->type->write_room’ from incompatible pointer type [enabled by default]
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:435:2: note: expected ‘struct tty_struct *’ but argument is of type ‘struct usb_serial_port *’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘serial_chars_in_buffer’:
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:451:11: error: ‘struct usb_serial_port’ has no member named ‘open_count’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:457:2: warning: passing argument 1 of ‘port->serial->type->chars_in_buffer’ from incompatible pointer type [enabled by default]
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:457:2: note: expected ‘struct tty_struct *’ but argument is of type ‘struct usb_serial_port *’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘serial_throttle’:
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:472:11: error: ‘struct usb_serial_port’ has no member named ‘open_count’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:479:3: warning: passing argument 1 of ‘port->serial->type->throttle’ from incompatible pointer type [enabled by default]
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:479:3: note: expected ‘struct tty_struct *’ but argument is of type ‘struct usb_serial_port *’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘serial_unthrottle’:
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:491:11: error: ‘struct usb_serial_port’ has no member named ‘open_count’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:498:3: warning: passing argument 1 of ‘port->serial->type->unthrottle’ from incompatible pointer type [enabled by default]
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:498:3: note: expected ‘struct tty_struct *’ but argument is of type ‘struct usb_serial_port *’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘serial_ioctl’:
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:511:11: error: ‘struct usb_serial_port’ has no member named ‘open_count’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:518:3: warning: passing argument 1 of ‘port->serial->type->ioctl’ from incompatible pointer type [enabled by default]
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:518:3: note: expected ‘struct tty_struct *’ but argument is of type ‘struct usb_serial_port *’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:518:3: warning: passing argument 2 of ‘port->serial->type->ioctl’ makes integer from pointer without a cast [enabled by default]
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:518:3: note: expected ‘unsigned int’ but argument is of type ‘struct file *’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:518:3: error: too many arguments to function ‘port->serial->type->ioctl’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘serial_set_termios’:
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:535:11: error: ‘struct usb_serial_port’ has no member named ‘open_count’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:542:3: warning: passing argument 1 of ‘port->serial->type->set_termios’ from incompatible pointer type [enabled by default]
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:542:3: note: expected ‘struct tty_struct *’ but argument is of type ‘struct usb_serial_port *’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:542:3: warning: passing argument 2 of ‘port->serial->type->set_termios’ from incompatible pointer type [enabled by default]
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:542:3: note: expected ‘struct usb_serial_port *’ but argument is of type ‘struct termios *’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:542:3: error: too few arguments to function ‘port->serial->type->set_termios’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘serial_break’:
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:554:11: error: ‘struct usb_serial_port’ has no member named ‘open_count’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:561:3: warning: passing argument 1 of ‘port->serial->type->break_ctl’ from incompatible pointer type [enabled by default]
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:561:3: note: expected ‘struct tty_struct *’ but argument is of type ‘struct usb_serial_port *’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘serial_tiocmget’:
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:629:11: error: ‘struct usb_serial_port’ has no member named ‘open_count’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:635:3: warning: passing argument 1 of ‘port->serial->type->tiocmget’ from incompatible pointer type [enabled by default]
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:635:3: note: expected ‘struct tty_struct *’ but argument is of type ‘struct usb_serial_port *’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:635:3: error: too many arguments to function ‘port->serial->type->tiocmget’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘serial_tiocmset’:
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:651:11: error: ‘struct usb_serial_port’ has no member named ‘open_count’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:657:3: warning: passing argument 1 of ‘port->serial->type->tiocmset’ from incompatible pointer type [enabled by default]
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:657:3: note: expected ‘struct tty_struct *’ but argument is of type ‘struct usb_serial_port *’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:657:3: warning: passing argument 2 of ‘port->serial->type->tiocmset’ makes integer from pointer without a cast [enabled by default]
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:657:3: note: expected ‘unsigned int’ but argument is of type ‘struct file *’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:657:3: error: too many arguments to function ‘port->serial->type->tiocmset’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘usb_serial_port_work’:
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:697:12: error: ‘struct usb_serial_port’ has no member named ‘tty’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘port_release’:
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:709:2: error: ‘struct device’ has no member named ‘bus_id’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘usb_serial_probe’:
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:858:2: error: implicit declaration of function ‘lock_kernel’ [-Werror=implicit-function-declaration]
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:861:3: error: implicit declaration of function ‘unlock_kernel’ [-Werror=implicit-function-declaration]
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1034:3: error: ‘struct usb_serial_port’ has no member named ‘mutex’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1182:23: error: ‘struct device’ has no member named ‘bus_id’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1182:51: error: ‘struct device’ has no member named ‘bus_id’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1183:3: error: ‘struct device’ has no member named ‘bus_id’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘usb_serial_disconnect’:
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1257:13: error: ‘struct usb_serial_port’ has no member named ‘tty’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1258:21: error: ‘struct usb_serial_port’ has no member named ‘tty’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: At top level:
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1280:2: warning: initialization from incompatible pointer type [enabled by default]
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1280:2: warning: (near initialization for ‘serial_ops.ioctl’) [enabled by default]
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1281:2: warning: initialization from incompatible pointer type [enabled by default]
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1281:2: warning: (near initialization for ‘serial_ops.set_termios’) [enabled by default]
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1284:2: warning: initialization from incompatible pointer type [enabled by default]
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1284:2: warning: (near initialization for ‘serial_ops.break_ctl’) [enabled by default]
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1286:2: error: unknown field ‘read_proc’ specified in initializer
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1286:2: warning: initialization from incompatible pointer type [enabled by default]
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1286:2: warning: (near initialization for ‘serial_ops.ioctl’) [enabled by default]
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1287:2: warning: initialization from incompatible pointer type [enabled by default]
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1287:2: warning: (near initialization for ‘serial_ops.tiocmget’) [enabled by default]
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1288:2: warning: initialization from incompatible pointer type [enabled by default]
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1288:2: warning: (near initialization for ‘serial_ops.tiocmset’) [enabled by default]
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘usb_serial_init’:
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1352:2: error: implicit declaration of function ‘info’ [-Werror=implicit-function-declaration]
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c: In function ‘fixup_generic’:
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1406:2: error: ‘struct usb_serial_driver’ has no member named ‘shutdown’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1406:2: error: ‘struct usb_serial_driver’ has no member named ‘shutdown’
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1406:1: error: ‘usb_serial_generic_shutdown’ undeclared (first use in this function)
/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:1406:1: note: each undeclared identifier is reported only once for each function it appears in
cc1: some warnings being treated as errors
make[2]: *** [/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.o] Error 1
make[1]: *** [_module_/usr/local/bin/ztemtApp/zteusbserial/below2.6.27] Error 2
make[1]: Leaving directory `/usr/src/linux-headers-3.0.0-12-generic'
make: *** [modules] Error 2
Install finished!

Any suggestions?

Update: From the installation document: in some special cases the setup package can't automatically compile the driver, so you need to change the configuration yourself and compile the driver manually. Method: enter the directory /usr/local/bin/ztemtApp/zteusbserial and find the directory corresponding to the current system's kernel version. I can see 2.6.27 2.6.28 2.6.29 2.6.30 2.6.31 2.6.32 2.6.33 2.6.34 2.6.35 2.6.36 2.6.37 2.6.38 2.6.39 and below2.6.27 in the directory. Which version should I pick for the installation? The readme says we have to do this to insert ztemt.ko into the kernel.
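From the "expected ‘struct tty_struct *’" notes in the log, the root cause appears to be that the below2.6.27 driver source targets the old usb-serial callback API, which kernel 3.0 no longer provides. My reconstruction of the mismatch, with invented struct names so the sketch stands alone (this is not the actual kernel or ZTE source):

/* Forward declarations so the sketch compiles on its own */
struct usb_serial;
struct usb_serial_port;
struct tty_struct;
struct file;

/* Shape the ZTE "below2.6.27" source was written against: */
struct old_usb_serial_ops {
    int  (*open)(struct usb_serial_port *port, struct file *filp);
    void (*shutdown)(struct usb_serial *serial);
};

/* Approximate shape in kernel 3.0: the tty is passed explicitly, and
 * shutdown was split into separate disconnect()/release() callbacks: */
struct new_usb_serial_ops {
    int  (*open)(struct tty_struct *tty, struct usb_serial_port *port);
    void (*disconnect)(struct usb_serial *serial);
    void (*release)(struct usb_serial *serial);
};

Since kernel 3.0 immediately followed 2.6.39, the 2.6.39 directory is presumably the closest starting point for a manual build, though renamed members like the ones above may still need patching by hand.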

    Read the article
