Search Results

Search found 19796 results on 792 pages for 'bit twiddler'.


  • Collaborate10 – THEconference

    - by jean-pierre.dijcks
    After spending a few days in Mandalay Bay's THEHotel, I guess I now call everything THE... Seriously, they even tag their toilet paper with THEtp... I guess the brand builders in Vegas thought that once you are on to something you keep on doing it, and granted it is a nice hotel with nice rooms.
THEanalytics Most of my collab10 experience was in a room called Reef C, where the BIWA bootcamp was held. Two solid days of BI, Warehousing and Analytics organized by the BIWA SIG at IOUG. I didn't get to see all sessions, but what struck me was the high interest in Analytics. Marty Gubar's OLAP session was full and he did some very nice things with the OLAP option. The cool bit was that he actually gets all the advanced calculations in OLAP to show up in OBI EE without any effort. It was nice to see that the idea from OWB where you generate an RPD is now also in AWM. I think it makes life so much simpler to generate these RPDs from your data model. Even if the end RPD needs some tweaking, it is all a lot less effort to get something going. You can see this stuff for yourself in this demo (click here). OBI EE uses just SQL to get to the calculations, and so, if you prefer APEX, you can build your application there and get the same nice calculations in an APEX application. Marty also showed the Simba MDX driver used with Excel. I guess we should call that THEcoolone... and it is very slick and wonderfully useful for all of you who actually know Excel. The nice thing is that you leverage pure Excel for all operations (no plug-ins). That means no new tools to learn, no new controls, all just pure Excel.
THEdatabasemachine I got some very good questions in my "what makes Exadata fast" session and overall, the interest in Exadata is overwhelming. One of the things that I did try to do in my session is to get people to think in new patterns rather than in patterns based on Oracle 9i running on some random hardware configuration. We talked a little bit about the common habit of over-indexing and how everyone has to unlearn all of that on Exadata. The main thing, however, is that everyone needs to get used to the sheer size of some of the components in a Database Machine V2. 5TB of flash cache is a lot of very fast data storage, and half a TB of memory gets quite interesting as well. So what I did there was really focus on some of the content in these earlier posts on Upward ILM and In-Memory processing. In short, I do believe these newer media point out a trend: in-memory and other fast media will get cheaper and will see more use. Some of that we do automatically by adding new functionality, but in some cases I think the end user of the system needs to start thinking about how to leverage all this new hardware. I think most people got very excited about these new capabilities and opportunities.
THEcoolkids One of the cool things about the BIWA track was the hands-on track. Very cool to see big crowds for both the OLAP and OWB hands-on sessions. Also quite nice to see that the folks at RittmanMead spent so much time on preparing for that session. While all of them put down cool stuff, none was more cool than seeing Data Mining on an Apple iPad... it all just looks great on an iPad! Very disappointing to see that Mark Rittman still wasn't showing OWB on his iPad ;-)
THEend All in all this was a great set of sessions in the BIWA track. Lots of value to our guests (we hope) and we hope they all come again next year!

    Read the article

  • SharePoint 2010 release date - is it that important?

    - by CharlesLee
    There has been lots of excitement in the SharePoint community over the last few days as Microsoft have announced the official release date of SharePoint 2010. May 12th is the date for your diaries (RTM in April). The twittersphere has been telling everyone for the last few days about this news and there is much excitement. The major conferences this year all seem to have a SharePoint 2010 focus and some are entirely focussed on the new product (e.g. the SharePoint Evolution Conference). Now by all accounts Microsoft have plugged some significant functionality gaps that exist in WSS 3.0 and MOSS 2007 and provided some exciting new functionality. You don't need me to tell you about these as the MVPs (and other community members) are doing a sterling job; after all, that is why Microsoft has MVPs in the first place. Let's get real for a second though, as there is a significant investment involved in moving to SharePoint 2010. Firstly, you need 64-bit architecture across the board; for some environments that is no small hurdle, it's a pretty significant roadblock. The development farm, test farm and UAT farm are all going to require the same infrastructure upgrades. To take advantage of the tooling for SP2010 you will need to upgrade to Visual Studio 2010, and your development team is going to require 64-bit hardware/OS too. I would not recommend installing SP 2010 in client installation mode (i.e. for Windows 7) on your developer machines; I would use this for demo machines only. Something that lots of people seem to forget in all their whooping and hollering about the new release is that a large amount of end-user training is going to be required, as the browser UI has now adopted the omnipotent ribbon interface and there are other new and more complicated features. SharePoint Designer has also entirely changed in both look and feel, and some significant feature changes have taken place. Lest we forget, some companies have not long upgraded to MOSS 2007 and are yet to see a significant ROI for that project. Then there is the reticence that most companies feel about implementing v1 Microsoft products. This is only the surface of the deeper issues which would be involved in any upgrade process, so I guess I share a small part of the concern voiced by Mark Miller of EndUserSharePoint.com: is SharePoint 2010 relevant? I don't share this sentiment in its entirety, as I firmly believe that all companies should be looking at SharePoint 2010 from day one; however, most large-scale existing implementations of MOSS 2007 are going to be several years away from a serious upgrade project. So should the conference organisers and the SharePoint community as a whole be a little more understanding of the real-world issues? It's easy to get carried away in the excitement of a new product and new tools to play with, but there needs to be a focus on the real-world issues that most people are facing day to day, and at the moment and for the short-term future (at the very least the next 12 months) that is fairly and squarely in the WSS 2.0/3.0 and SPS 2003/MOSS 2007 camps. Don't get me wrong, I am very, very excited about getting to grips with SharePoint 2010 in the real world and I cannot wait for my first real project to come along, but for now I am just being realistic about the reality for most people who work with SharePoint.
I have been spending a lot of time on www.sharepointoverflow.com recently as there is a community of people building up who are committed to answering the real world questions that folks are dealing with every day.  I urge you to take a look and either ask or answer some questions direct from the front line of the SharePoint world.

    Read the article

  • Silverlight Recruiting Application Part 5 - Jobs Module / View

    Now we starting getting into a more code-heavy portion of this series, thankfully though this means the groundwork is all set for the most part and after adding the modules we will have a complete application that can be provided with full source. The Jobs module will have two concerns- adding and maintaining jobs that can then be broadcast out to the website. How they are displayed on the site will be handled by our admin system (which will just poll from this common database), so we aren't too concerned with that, but rather with getting the information into the system and allowing the backend administration/HR users to keep things up to date. Since there is a fair bit of information that we want to display, we're going to move editing to a separate view so we can get all that information in an easy-to-use spot. With all the files created for this module, the project looks something like this: And now... on to the code. XAML for the Job Posting View All we really need for the Job Posting View is a RadGridView and a few buttons. This will let us both show off records and perform operations on the records without much hassle. That XAML is going to look something like this: 01.<Grid x:Name="LayoutRoot" 02.Background="White"> 03.<Grid.RowDefinitions> 04.<RowDefinition Height="30" /> 05.<RowDefinition /> 06.</Grid.RowDefinitions> 07.<StackPanel Orientation="Horizontal"> 08.<Button x:Name="xAddRecordButton" 09.Content="Add Job" 10.Width="120" 11.cal:Click.Command="{Binding AddRecord}" 12.telerik:StyleManager.Theme="Windows7" /> 13.<Button x:Name="xEditRecordButton" 14.Content="Edit Job" 15.Width="120" 16.cal:Click.Command="{Binding EditRecord}" 17.telerik:StyleManager.Theme="Windows7" /> 18.</StackPanel> 19.<telerikGrid:RadGridView x:Name="xJobsGrid" 20.Grid.Row="1" 21.IsReadOnly="True" 22.AutoGenerateColumns="False" 23.ColumnWidth="*" 24.RowDetailsVisibilityMode="VisibleWhenSelected" 25.ItemsSource="{Binding MyJobs}" 26.SelectedItem="{Binding SelectedJob, Mode=TwoWay}" 27.command:SelectedItemChangedEventClass.Command="{Binding SelectedItemChanged}"> 28.<telerikGrid:RadGridView.Columns> 29.<telerikGrid:GridViewDataColumn Header="Job Title" 30.DataMemberBinding="{Binding JobTitle}" 31.UniqueName="JobTitle" /> 32.<telerikGrid:GridViewDataColumn Header="Location" 33.DataMemberBinding="{Binding Location}" 34.UniqueName="Location" /> 35.<telerikGrid:GridViewDataColumn Header="Resume Required" 36.DataMemberBinding="{Binding NeedsResume}" 37.UniqueName="NeedsResume" /> 38.<telerikGrid:GridViewDataColumn Header="CV Required" 39.DataMemberBinding="{Binding NeedsCV}" 40.UniqueName="NeedsCV" /> 41.<telerikGrid:GridViewDataColumn Header="Overview Required" 42.DataMemberBinding="{Binding NeedsOverview}" 43.UniqueName="NeedsOverview" /> 44.<telerikGrid:GridViewDataColumn Header="Active" 45.DataMemberBinding="{Binding IsActive}" 46.UniqueName="IsActive" /> 47.</telerikGrid:RadGridView.Columns> 48.</telerikGrid:RadGridView> 49.</Grid> I'll explain what's happening here by line numbers: Lines 11 and 16: Using the same type of click commands as we saw in the Menu module, we tie the button clicks to delegate commands in the viewmodel. Line 25: The source for the jobs will be a collection in the viewmodel. Line 26: We also bind the selected item to a public property from the viewmodel for use in code. Line 27: We've turned the event into a command so we can handle it via code in the viewmodel. 
So those first three probably make sense to you as far as Silverlight/WPF binding magic is concerned, but for line 27... This actually comes from something I read onDamien Schenkelman's blog back in the day for creating an attached behavior from any event. So, any time you see me using command:Whatever.Command, the backing for it is actually something like this: SelectedItemChangedEventBehavior.cs: 01.public class SelectedItemChangedEventBehavior : CommandBehaviorBase<Telerik.Windows.Controls.DataControl> 02.{ 03.public SelectedItemChangedEventBehavior(DataControl element) 04.: base(element) 05.{ 06.element.SelectionChanged += new EventHandler<SelectionChangeEventArgs>(element_SelectionChanged); 07.} 08.void element_SelectionChanged(object sender, SelectionChangeEventArgs e) 09.{ 10.// We'll only ever allow single selection, so will only need item index 0 11.base.CommandParameter = e.AddedItems[0]; 12.base.ExecuteCommand(); 13.} 14.} SelectedItemChangedEventClass.cs: 01.public class SelectedItemChangedEventClass 02.{ 03.#region The Command Stuff 04.public static ICommand GetCommand(DependencyObject obj) 05.{ 06.return (ICommand)obj.GetValue(CommandProperty); 07.} 08.public static void SetCommand(DependencyObject obj, ICommand value) 09.{ 10.obj.SetValue(CommandProperty, value); 11.} 12.public static readonly DependencyProperty CommandProperty = 13.DependencyProperty.RegisterAttached("Command", typeof(ICommand), 14.typeof(SelectedItemChangedEventClass), new PropertyMetadata(OnSetCommandCallback)); 15.public static void OnSetCommandCallback(DependencyObject dependencyObject, DependencyPropertyChangedEventArgs e) 16.{ 17.DataControl element = dependencyObject as DataControl; 18.if (element != null) 19.{ 20.SelectedItemChangedEventBehavior behavior = GetOrCreateBehavior(element); 21.behavior.Command = e.NewValue as ICommand; 22.} 23.} 24.#endregion 25.public static SelectedItemChangedEventBehavior GetOrCreateBehavior(DataControl element) 26.{ 27.SelectedItemChangedEventBehavior behavior = element.GetValue(SelectedItemChangedEventBehaviorProperty) as SelectedItemChangedEventBehavior; 28.if (behavior == null) 29.{ 30.behavior = new SelectedItemChangedEventBehavior(element); 31.element.SetValue(SelectedItemChangedEventBehaviorProperty, behavior); 32.} 33.return behavior; 34.} 35.public static SelectedItemChangedEventBehavior GetSelectedItemChangedEventBehavior(DependencyObject obj) 36.{ 37.return (SelectedItemChangedEventBehavior)obj.GetValue(SelectedItemChangedEventBehaviorProperty); 38.} 39.public static void SetSelectedItemChangedEventBehavior(DependencyObject obj, SelectedItemChangedEventBehavior value) 40.{ 41.obj.SetValue(SelectedItemChangedEventBehaviorProperty, value); 42.} 43.public static readonly DependencyProperty SelectedItemChangedEventBehaviorProperty = 44.DependencyProperty.RegisterAttached("SelectedItemChangedEventBehavior", 45.typeof(SelectedItemChangedEventBehavior), typeof(SelectedItemChangedEventClass), null); 46.} These end up looking very similar from command to command, but in a nutshell you create a command based on any event, determine what the parameter for it will be, then execute. It attaches via XAML and ties to a DelegateCommand in the viewmodel, so you get the full event experience (since some controls get a bit event-rich for added functionality). Simple enough, right? 
Viewmodel for the Job Posting View The Viewmodel is going to need to handle all events going back and forth, maintaining interactions with the data we are using, and both publishing and subscribing to events. Rather than breaking this into tons of little pieces, I'll give you a nice view of the entire viewmodel and then hit up the important points line-by-line: 001.public class JobPostingViewModel : ViewModelBase 002.{ 003.private readonly IEventAggregator eventAggregator; 004.private readonly IRegionManager regionManager; 005.public DelegateCommand<object> AddRecord { get; set; } 006.public DelegateCommand<object> EditRecord { get; set; } 007.public DelegateCommand<object> SelectedItemChanged { get; set; } 008.public RecruitingContext context; 009.private QueryableCollectionView _myJobs; 010.public QueryableCollectionView MyJobs 011.{ 012.get { return _myJobs; } 013.} 014.private QueryableCollectionView _selectionJobActionHistory; 015.public QueryableCollectionView SelectedJobActionHistory 016.{ 017.get { return _selectionJobActionHistory; } 018.} 019.private JobPosting _selectedJob; 020.public JobPosting SelectedJob 021.{ 022.get { return _selectedJob; } 023.set 024.{ 025.if (value != _selectedJob) 026.{ 027._selectedJob = value; 028.NotifyChanged("SelectedJob"); 029.} 030.} 031.} 032.public SubscriptionToken editToken = new SubscriptionToken(); 033.public SubscriptionToken addToken = new SubscriptionToken(); 034.public JobPostingViewModel(IEventAggregator eventAgg, IRegionManager regionmanager) 035.{ 036.// set Unity items 037.this.eventAggregator = eventAgg; 038.this.regionManager = regionmanager; 039.// load our context 040.context = new RecruitingContext(); 041.this._myJobs = new QueryableCollectionView(context.JobPostings); 042.context.Load(context.GetJobPostingsQuery()); 043.// set command events 044.this.AddRecord = new DelegateCommand<object>(this.AddNewRecord); 045.this.EditRecord = new DelegateCommand<object>(this.EditExistingRecord); 046.this.SelectedItemChanged = new DelegateCommand<object>(this.SelectedRecordChanged); 047.SetSubscriptions(); 048.} 049.#region DelegateCommands from View 050.public void AddNewRecord(object obj) 051.{ 052.this.eventAggregator.GetEvent<AddJobEvent>().Publish(true); 053.} 054.public void EditExistingRecord(object obj) 055.{ 056.if (_selectedJob == null) 057.{ 058.this.eventAggregator.GetEvent<NotifyUserEvent>().Publish("No job selected."); 059.} 060.else 061.{ 062.this._myJobs.EditItem(this._selectedJob); 063.this.eventAggregator.GetEvent<EditJobEvent>().Publish(this._selectedJob); 064.} 065.} 066.public void SelectedRecordChanged(object obj) 067.{ 068.if (obj.GetType() == typeof(ActionHistory)) 069.{ 070.// event bubbles up so we don't catch items from the ActionHistory grid 071.} 072.else 073.{ 074.JobPosting job = obj as JobPosting; 075.GrabHistory(job.PostingID); 076.} 077.} 078.#endregion 079.#region Subscription Declaration and Events 080.public void SetSubscriptions() 081.{ 082.EditJobCompleteEvent editComplete = eventAggregator.GetEvent<EditJobCompleteEvent>(); 083.if (editToken != null) 084.editComplete.Unsubscribe(editToken); 085.editToken = editComplete.Subscribe(this.EditCompleteEventHandler); 086.AddJobCompleteEvent addComplete = eventAggregator.GetEvent<AddJobCompleteEvent>(); 087.if (addToken != null) 088.addComplete.Unsubscribe(addToken); 089.addToken = addComplete.Subscribe(this.AddCompleteEventHandler); 090.} 091.public void EditCompleteEventHandler(bool complete) 092.{ 093.if (complete) 094.{ 095.JobPosting thisJob = 
_myJobs.CurrentEditItem as JobPosting; 096.this._myJobs.CommitEdit(); 097.this.context.SubmitChanges((s) => 098.{ 099.ActionHistory myAction = new ActionHistory(); 100.myAction.PostingID = thisJob.PostingID; 101.myAction.Description = String.Format("Job '{0}' has been edited by {1}", thisJob.JobTitle, "default user"); 102.myAction.TimeStamp = DateTime.Now; 103.eventAggregator.GetEvent<AddActionEvent>().Publish(myAction); 104.} 105., null); 106.} 107.else 108.{ 109.this._myJobs.CancelEdit(); 110.} 111.this.MakeMeActive(this.regionManager, "MainRegion", "JobPostingsView"); 112.} 113.public void AddCompleteEventHandler(JobPosting job) 114.{ 115.if (job == null) 116.{ 117.// do nothing, new job add cancelled 118.} 119.else 120.{ 121.this.context.JobPostings.Add(job); 122.this.context.SubmitChanges((s) => 123.{ 124.ActionHistory myAction = new ActionHistory(); 125.myAction.PostingID = job.PostingID; 126.myAction.Description = String.Format("Job '{0}' has been added by {1}", job.JobTitle, "default user"); 127.myAction.TimeStamp = DateTime.Now; 128.eventAggregator.GetEvent<AddActionEvent>().Publish(myAction); 129.} 130., null); 131.} 132.this.MakeMeActive(this.regionManager, "MainRegion", "JobPostingsView"); 133.} 134.#endregion 135.public void GrabHistory(int postID) 136.{ 137.context.ActionHistories.Clear(); 138._selectionJobActionHistory = new QueryableCollectionView(context.ActionHistories); 139.context.Load(context.GetHistoryForJobQuery(postID)); 140.} Taking it from the top, we're injecting an Event Aggregator and Region Manager for use down the road and also have the public DelegateCommands (just like in the Menu module). We also grab a reference to our context, which we'll obviously need for data, then set up a few fields with public properties tied to them. We're also setting subscription tokens, which we have not yet seen but I will get into below. The AddNewRecord (50) and EditExistingRecord (54) methods should speak for themselves for functionality, the one thing of note is we're sending events off to the Event Aggregator which some module, somewhere will take care of. Since these aren't entirely relying on one another, the Jobs View doesn't care if anyone is listening, but it will publish AddJobEvent (52), NotifyUserEvent (58) and EditJobEvent (63)regardless. Don't mind the GrabHistory() method so much, that is just grabbing history items (visibly being created in the SubmitChanges callbacks), and adding them to the database. Every action will trigger a history event, so we'll know who modified what and when, just in case. ;) So where are we at? Well, if we click to Add a job, we publish an event, if we edit a job, we publish an event with the selected record (attained through the magic of binding). Where is this all going though? To the Viewmodel, of course! 
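The events being passed around here (AddJobEvent, EditJobEvent, NotifyUserEvent and the two *CompleteEvent classes) are never shown in this post. With Prism's EventAggregator they are normally nothing more than empty marker classes deriving from CompositePresentationEvent<TPayload>, where the payload type matches whatever is handed to Publish. Purely as a sketch (assuming Prism 2 / the Composite Application Library, with the payload types inferred from the Publish calls above rather than taken from the original source), the declarations would look roughly like this:
using Microsoft.Practices.Composite.Presentation.Events;
// Marker event classes for the EventAggregator; it keys purely on the type.
public class AddJobEvent : CompositePresentationEvent<bool> { }               // Publish(true) asks for the add view
public class EditJobEvent : CompositePresentationEvent<JobPosting> { }        // carries the job selected for editing
public class EditJobCompleteEvent : CompositePresentationEvent<bool> { }      // true = commit, false = cancel
public class AddJobCompleteEvent : CompositePresentationEvent<JobPosting> { } // null = add was cancelled
public class NotifyUserEvent : CompositePresentationEvent<string> { }         // message for the notification module
Because publisher and subscriber only ever share these tiny types (typically living in the Infrastructure project), the Jobs module can publish without knowing, or caring, which module is listening.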
XAML for the AddEditJobView This is pretty straightforward except for one thing, noted below: 001.<Grid x:Name="LayoutRoot" 002.Background="White"> 003.<Grid x:Name="xEditGrid" 004.Margin="10" 005.validationHelper:ValidationScope.Errors="{Binding Errors}"> 006.<Grid.Background> 007.<LinearGradientBrush EndPoint="0.5,1" 008.StartPoint="0.5,0"> 009.<GradientStop Color="#FFC7C7C7" 010.Offset="0" /> 011.<GradientStop Color="#FFF6F3F3" 012.Offset="1" /> 013.</LinearGradientBrush> 014.</Grid.Background> 015.<Grid.RowDefinitions> 016.<RowDefinition Height="40" /> 017.<RowDefinition Height="40" /> 018.<RowDefinition Height="40" /> 019.<RowDefinition Height="100" /> 020.<RowDefinition Height="100" /> 021.<RowDefinition Height="100" /> 022.<RowDefinition Height="40" /> 023.<RowDefinition Height="40" /> 024.<RowDefinition Height="40" /> 025.</Grid.RowDefinitions> 026.<Grid.ColumnDefinitions> 027.<ColumnDefinition Width="150" /> 028.<ColumnDefinition Width="150" /> 029.<ColumnDefinition Width="300" /> 030.<ColumnDefinition Width="100" /> 031.</Grid.ColumnDefinitions> 032.<!-- Title --> 033.<TextBlock Margin="8" 034.Text="{Binding AddEditString}" 035.TextWrapping="Wrap" 036.Grid.Column="1" 037.Grid.ColumnSpan="2" 038.FontSize="16" /> 039.<!-- Data entry area--> 040. 041.<TextBlock Margin="8,0,0,0" 042.Style="{StaticResource LabelTxb}" 043.Grid.Row="1" 044.Text="Job Title" 045.VerticalAlignment="Center" /> 046.<TextBox x:Name="xJobTitleTB" 047.Margin="0,8" 048.Grid.Column="1" 049.Grid.Row="1" 050.Text="{Binding activeJob.JobTitle, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}" 051.Grid.ColumnSpan="2" /> 052.<TextBlock Margin="8,0,0,0" 053.Grid.Row="2" 054.Text="Location" 055.d:LayoutOverrides="Height" 056.VerticalAlignment="Center" /> 057.<TextBox x:Name="xLocationTB" 058.Margin="0,8" 059.Grid.Column="1" 060.Grid.Row="2" 061.Text="{Binding activeJob.Location, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}" 062.Grid.ColumnSpan="2" /> 063. 064.<TextBlock Margin="8,11,8,0" 065.Grid.Row="3" 066.Text="Description" 067.TextWrapping="Wrap" 068.VerticalAlignment="Top" /> 069. 070.<TextBox x:Name="xDescriptionTB" 071.Height="84" 072.TextWrapping="Wrap" 073.ScrollViewer.VerticalScrollBarVisibility="Auto" 074.Grid.Column="1" 075.Grid.Row="3" 076.Text="{Binding activeJob.Description, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}" 077.Grid.ColumnSpan="2" /> 078.<TextBlock Margin="8,11,8,0" 079.Grid.Row="4" 080.Text="Requirements" 081.TextWrapping="Wrap" 082.VerticalAlignment="Top" /> 083. 084.<TextBox x:Name="xRequirementsTB" 085.Height="84" 086.TextWrapping="Wrap" 087.ScrollViewer.VerticalScrollBarVisibility="Auto" 088.Grid.Column="1" 089.Grid.Row="4" 090.Text="{Binding activeJob.Requirements, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}" 091.Grid.ColumnSpan="2" /> 092.<TextBlock Margin="8,11,8,0" 093.Grid.Row="5" 094.Text="Qualifications" 095.TextWrapping="Wrap" 096.VerticalAlignment="Top" /> 097. 098.<TextBox x:Name="xQualificationsTB" 099.Height="84" 100.TextWrapping="Wrap" 101.ScrollViewer.VerticalScrollBarVisibility="Auto" 102.Grid.Column="1" 103.Grid.Row="5" 104.Text="{Binding activeJob.Qualifications, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}" 105.Grid.ColumnSpan="2" /> 106.<!-- Requirements Checkboxes--> 107. 
108.<CheckBox x:Name="xResumeRequiredCB" Margin="8,8,8,15" 109.Content="Resume Required" 110.Grid.Row="6" 111.Grid.ColumnSpan="2" 112.IsChecked="{Binding activeJob.NeedsResume, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}"/> 113. 114.<CheckBox x:Name="xCoverletterRequiredCB" Margin="8,8,8,15" 115.Content="Cover Letter Required" 116.Grid.Column="2" 117.Grid.Row="6" 118.IsChecked="{Binding activeJob.NeedsCV, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}"/> 119. 120.<CheckBox x:Name="xOverviewRequiredCB" Margin="8,8,8,15" 121.Content="Overview Required" 122.Grid.Row="7" 123.Grid.ColumnSpan="2" 124.IsChecked="{Binding activeJob.NeedsOverview, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}"/> 125. 126.<CheckBox x:Name="xJobActiveCB" Margin="8,8,8,15" 127.Content="Job is Active" 128.Grid.Column="2" 129.Grid.Row="7" 130.IsChecked="{Binding activeJob.IsActive, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}"/> 131. 132.<!-- Buttons --> 133. 134.<Button x:Name="xAddEditButton" Margin="8,8,0,10" 135.Content="{Binding AddEditButtonString}" 136.cal:Click.Command="{Binding AddEditCommand}" 137.Grid.Column="2" 138.Grid.Row="8" 139.HorizontalAlignment="Left" 140.Width="125" 141.telerik:StyleManager.Theme="Windows7" /> 142. 143.<Button x:Name="xCancelButton" HorizontalAlignment="Right" 144.Content="Cancel" 145.cal:Click.Command="{Binding CancelCommand}" 146.Margin="0,8,8,10" 147.Width="125" 148.Grid.Column="2" 149.Grid.Row="8" 150.telerik:StyleManager.Theme="Windows7" /> 151.</Grid> 152.</Grid> The 'validationHelper:ValidationScope' line may seem odd. This is a handy little trick for catching current and would-be validation errors when working in this whole setup. This all comes from an approach found on theJoy Of Code blog, although it looks like the story for this will be changing slightly with new advances in SL4/WCF RIA Services, so this section can definitely get an overhaul a little down the road. The code is the fun part of all this, so let us see what's happening under the hood. Viewmodel for the AddEditJobView We are going to see some of the same things happening here, so I'll skip over the repeat info and get right to the good stuff: 001.public class AddEditJobViewModel : ViewModelBase 002.{ 003.private readonly IEventAggregator eventAggregator; 004.private readonly IRegionManager regionManager; 005. 006.public RecruitingContext context; 007. 008.private JobPosting _activeJob; 009.public JobPosting activeJob 010.{ 011.get { return _activeJob; } 012.set 013.{ 014.if (_activeJob != value) 015.{ 016._activeJob = value; 017.NotifyChanged("activeJob"); 018.} 019.} 020.} 021. 022.public bool isNewJob; 023. 024.private string _addEditString; 025.public string AddEditString 026.{ 027.get { return _addEditString; } 028.set 029.{ 030.if (_addEditString != value) 031.{ 032._addEditString = value; 033.NotifyChanged("AddEditString"); 034.} 035.} 036.} 037. 038.private string _addEditButtonString; 039.public string AddEditButtonString 040.{ 041.get { return _addEditButtonString; } 042.set 043.{ 044.if (_addEditButtonString != value) 045.{ 046._addEditButtonString = value; 047.NotifyChanged("AddEditButtonString"); 048.} 049.} 050.} 051. 052.public SubscriptionToken addJobToken = new SubscriptionToken(); 053.public SubscriptionToken editJobToken = new SubscriptionToken(); 054. 055.public DelegateCommand<object> AddEditCommand { get; set; } 056.public DelegateCommand<object> CancelCommand { get; set; } 057. 
058.private ObservableCollection<ValidationError> _errors = new ObservableCollection<ValidationError>(); 059.public ObservableCollection<ValidationError> Errors 060.{ 061.get { return _errors; } 062.} 063. 064.private ObservableCollection<ValidationResult> _valResults = new ObservableCollection<ValidationResult>(); 065.public ObservableCollection<ValidationResult> ValResults 066.{ 067.get { return this._valResults; } 068.} 069. 070.public AddEditJobViewModel(IEventAggregator eventAgg, IRegionManager regionmanager) 071.{ 072.// set Unity items 073.this.eventAggregator = eventAgg; 074.this.regionManager = regionmanager; 075. 076.context = new RecruitingContext(); 077. 078.AddEditCommand = new DelegateCommand<object>(this.AddEditJobCommand); 079.CancelCommand = new DelegateCommand<object>(this.CancelAddEditCommand); 080. 081.SetSubscriptions(); 082.} 083. 084.#region Subscription Declaration and Events 085. 086.public void SetSubscriptions() 087.{ 088.AddJobEvent addJob = this.eventAggregator.GetEvent<AddJobEvent>(); 089. 090.if (addJobToken != null) 091.addJob.Unsubscribe(addJobToken); 092. 093.addJobToken = addJob.Subscribe(this.AddJobEventHandler); 094. 095.EditJobEvent editJob = this.eventAggregator.GetEvent<EditJobEvent>(); 096. 097.if (editJobToken != null) 098.editJob.Unsubscribe(editJobToken); 099. 100.editJobToken = editJob.Subscribe(this.EditJobEventHandler); 101.} 102. 103.public void AddJobEventHandler(bool isNew) 104.{ 105.this.activeJob = null; 106.this.activeJob = new JobPosting(); 107.this.activeJob.IsActive = true; // We assume that we want a new job to go up immediately 108.this.isNewJob = true; 109.this.AddEditString = "Add New Job Posting"; 110.this.AddEditButtonString = "Add Job"; 111. 112.MakeMeActive(this.regionManager, "MainRegion", "AddEditJobView"); 113.} 114. 115.public void EditJobEventHandler(JobPosting editJob) 116.{ 117.this.activeJob = null; 118.this.activeJob = editJob; 119.this.isNewJob = false; 120.this.AddEditString = "Edit Job Posting"; 121.this.AddEditButtonString = "Edit Job"; 122. 123.MakeMeActive(this.regionManager, "MainRegion", "AddEditJobView"); 124.} 125. 126.#endregion 127. 128.#region DelegateCommands from View 129. 130.public void AddEditJobCommand(object obj) 131.{ 132.if (this.Errors.Count > 0) 133.{ 134.List<string> errorMessages = new List<string>(); 135. 136.foreach (var valR in this.Errors) 137.{ 138.errorMessages.Add(valR.Exception.Message); 139.} 140. 141.this.eventAggregator.GetEvent<DisplayValidationErrorsEvent>().Publish(errorMessages); 142. 143.} 144.else if (!Validator.TryValidateObject(this.activeJob, new ValidationContext(this.activeJob, null, null), _valResults, true)) 145.{ 146.List<string> errorMessages = new List<string>(); 147. 148.foreach (var valR in this._valResults) 149.{ 150.errorMessages.Add(valR.ErrorMessage); 151.} 152. 153.this._valResults.Clear(); 154. 155.this.eventAggregator.GetEvent<DisplayValidationErrorsEvent>().Publish(errorMessages); 156.} 157.else 158.{ 159.if (this.isNewJob) 160.{ 161.this.eventAggregator.GetEvent<AddJobCompleteEvent>().Publish(this.activeJob); 162.} 163.else 164.{ 165.this.eventAggregator.GetEvent<EditJobCompleteEvent>().Publish(true); 166.} 167.} 168.} 169. 170.public void CancelAddEditCommand(object obj) 171.{ 172.if (this.isNewJob) 173.{ 174.this.eventAggregator.GetEvent<AddJobCompleteEvent>().Publish(null); 175.} 176.else 177.{ 178.this.eventAggregator.GetEvent<EditJobCompleteEvent>().Publish(false); 179.} 180.} 181. 
182.#endregion 183.} 184.} We start seeing something new on line 103- the AddJobEventHandler will create a new job and set that to the activeJob item on the ViewModel. When this is all set, the view calls that familiar MakeMeActive method to activate itself. I made a bit of a management call on making views self-activate like this, but I figured it works for one reason. As I create this application, views may not exist that I have in mind, so after a view receives its 'ping' from being subscribed to an event, it prepares whatever it needs to do and then goes active. This way if I don't have 'edit' hooked up, I can click as the day is long on the main view and won't get lost in an empty region. Total personal preference here. :) Everything else should again be pretty straightforward, although I do a bit of validation checking in the AddEditJobCommand, which can either fire off an event back to the main view/viewmodel if everything is a success or sent a list of errors to our notification module, which pops open a RadWindow with the alerts if any exist. As a bonus side note, here's what my WCF RIA Services metadata looks like for handling all of the validation: private JobPostingMetadata() { } [StringLength(2500, ErrorMessage = "Description should be more than one and less than 2500 characters.", MinimumLength = 1)] [Required(ErrorMessage = "Description is required.")] public string Description; [Required(ErrorMessage="Active Status is Required")] public bool IsActive; [StringLength(100, ErrorMessage = "Posting title must be more than 3 but less than 100 characters.", MinimumLength = 3)] [Required(ErrorMessage = "Job Title is required.")] public bool JobTitle; [Required] public string Location; public bool NeedsCV; public bool NeedsOverview; public bool NeedsResume; public int PostingID; [Required(ErrorMessage="Qualifications are required.")] [StringLength(2500, ErrorMessage="Qualifications should be more than one and less than 2500 characters.", MinimumLength=1)] public string Qualifications; [StringLength(2500, ErrorMessage = "Requirements should be more than one and less than 2500 characters.", MinimumLength = 1)] [Required(ErrorMessage="Requirements are required.")] public string Requirements;   The RecruitCB Alternative See all that Xaml I pasted above? Those are now two pieces sitting in the JobsView.xaml file now. The only real difference is that the xEditGrid now sits in the same place as xJobsGrid, with visibility swapping out between the two for a quick switch. I also took out all the cal: and command: command references and replaced Button events with clicks and the Grid selection command replaced with a SelectedItemChanged event. Also, at the bottom of the xEditGrid after the last button, I add a ValidationSummary (with Visibility=Collapsed) to catch any errors that are popping up. 
Simple as can be, and leads to this being the single code-behind file: 001.public partial class JobsView : UserControl 002.{ 003.public RecruitingContext context; 004.public JobPosting activeJob; 005.public bool isNew; 006.private ObservableCollection<ValidationResult> _valResults = new ObservableCollection<ValidationResult>(); 007.public ObservableCollection<ValidationResult> ValResults 008.{ 009.get { return this._valResults; } 010.} 011.public JobsView() 012.{ 013.InitializeComponent(); 014.this.Loaded += new RoutedEventHandler(JobsView_Loaded); 015.} 016.void JobsView_Loaded(object sender, RoutedEventArgs e) 017.{ 018.context = new RecruitingContext(); 019.xJobsGrid.ItemsSource = context.JobPostings; 020.context.Load(context.GetJobPostingsQuery()); 021.} 022.private void xAddRecordButton_Click(object sender, RoutedEventArgs e) 023.{ 024.activeJob = new JobPosting(); 025.isNew = true; 026.xAddEditTitle.Text = "Add a Job Posting"; 027.xAddEditButton.Content = "Add"; 028.xEditGrid.DataContext = activeJob; 029.HideJobsGrid(); 030.} 031.private void xEditRecordButton_Click(object sender, RoutedEventArgs e) 032.{ 033.activeJob = xJobsGrid.SelectedItem as JobPosting; 034.isNew = false; 035.xAddEditTitle.Text = "Edit a Job Posting"; 036.xAddEditButton.Content = "Edit"; 037.xEditGrid.DataContext = activeJob; 038.HideJobsGrid(); 039.} 040.private void xAddEditButton_Click(object sender, RoutedEventArgs e) 041.{ 042.if (!Validator.TryValidateObject(this.activeJob, new ValidationContext(this.activeJob, null, null), _valResults, true)) 043.{ 044.List<string> errorMessages = new List<string>(); 045.foreach (var valR in this._valResults) 046.{ 047.errorMessages.Add(valR.ErrorMessage); 048.} 049.this._valResults.Clear(); 050.ShowErrors(errorMessages); 051.} 052.else if (xSummary.Errors.Count > 0) 053.{ 054.List<string> errorMessages = new List<string>(); 055.foreach (var err in xSummary.Errors) 056.{ 057.errorMessages.Add(err.Message); 058.} 059.ShowErrors(errorMessages); 060.} 061.else 062.{ 063.if (this.isNew) 064.{ 065.context.JobPostings.Add(activeJob); 066.context.SubmitChanges((s) => 067.{ 068.ActionHistory thisAction = new ActionHistory(); 069.thisAction.PostingID = activeJob.PostingID; 070.thisAction.Description = String.Format("Job '{0}' has been edited by {1}", activeJob.JobTitle, "default user"); 071.thisAction.TimeStamp = DateTime.Now; 072.context.ActionHistories.Add(thisAction); 073.context.SubmitChanges(); 074.}, null); 075.} 076.else 077.{ 078.context.SubmitChanges((s) => 079.{ 080.ActionHistory thisAction = new ActionHistory(); 081.thisAction.PostingID = activeJob.PostingID; 082.thisAction.Description = String.Format("Job '{0}' has been added by {1}", activeJob.JobTitle, "default user"); 083.thisAction.TimeStamp = DateTime.Now; 084.context.ActionHistories.Add(thisAction); 085.context.SubmitChanges(); 086.}, null); 087.} 088.ShowJobsGrid(); 089.} 090.} 091.private void xCancelButton_Click(object sender, RoutedEventArgs e) 092.{ 093.ShowJobsGrid(); 094.} 095.private void ShowJobsGrid() 096.{ 097.xAddEditRecordButtonPanel.Visibility = Visibility.Visible; 098.xEditGrid.Visibility = Visibility.Collapsed; 099.xJobsGrid.Visibility = Visibility.Visible; 100.} 101.private void HideJobsGrid() 102.{ 103.xAddEditRecordButtonPanel.Visibility = Visibility.Collapsed; 104.xJobsGrid.Visibility = Visibility.Collapsed; 105.xEditGrid.Visibility = Visibility.Visible; 106.} 107.private void ShowErrors(List<string> errorList) 108.{ 109.string nm = "Errors received: \n"; 110.foreach (string anerror in 
errorList) 111.nm += anerror + "\n"; 112.RadWindow.Alert(nm); 113.} 114.} The first 39 lines should be pretty familiar, not doing anything too unorthodox to get this up and running. Once we hit the xAddEditButton_Click on line 40, we're still doing pretty much the same things except instead of checking the ValidationHelper errors, we both run a check on the current activeJob object as well as check the ValidationSummary errors list. Once that is set, we again use the callback of context.SubmitChanges (lines 68 and 78) to create an ActionHistory which we will use to track these items down the line. That's all? Essentially... yes. If you look back through this post, most of the code and adventures we have taken were just to get things working in the MVVM/Prism setup. Since I have the whole 'module' self-contained in a single JobView+code-behind setup, I don't have to worry about things like sending events off into space for someone to pick up, communicating through an Infrastructure project, or even re-inventing events to be used with attached behaviors. Everything just kinda works, and again with much less code. Here's a picture of the MVVM and Code-behind versions on the Jobs and AddEdit views, but since the functionality is the same in both apps you still cannot tell them apart (for two-strike): Looking ahead, the Applicants module is effectively the same thing as the Jobs module, so most of the code is being cut-and-pasted back and forth with minor tweaks here and there. So that one is being taken care of by me behind the scenes. Next time, we get into a new world of fun- the interview scheduling module, which will pull from available jobs and applicants for each interview being scheduled, tying everything together with RadScheduler to the rescue. Did you know that DotNetSlackers also publishes .net articles written by top known .net Authors? We already have over 80 articles in several categories including Silverlight. Take a look: here.

    Read the article

  • Sun Fire X4800 M2 Posts World Record x86 SPECjEnterprise2010 Result

    - by Brian
    Oracle's Sun Fire X4800 M2 using the Intel Xeon E7-8870 processor and Sun Fire X4470 M2 using the Intel Xeon E7-4870 processor, produced a world record single application server SPECjEnterprise2010 benchmark result of 27,150.05 SPECjEnterprise2010 EjOPS. The Sun Fire X4800 M2 server ran the application tier and the Sun Fire X4470 M2 server was used for the database tier. The Sun Fire X4800 M2 server demonstrated 63% better performance compared to IBM P780 server result of 16,646.34 SPECjEnterprise2010 EjOPS. The Sun Fire X4800 M2 server demonstrated 4% better performance than the Cisco UCS B440 M2 result, both results used the same number of processors. This result used Oracle WebLogic Server 12c, Java HotSpot(TM) 64-Bit Server 1.7.0_02, and Oracle Database 11g. This result was produced using Oracle Linux. Performance Landscape Complete benchmark results are at the SPEC website, SPECjEnterprise2010 Results. The table below compares against the best results from IBM and Cisco. SPECjEnterprise2010 Performance Chart as of 3/12/2012 Submitter EjOPS* Application Server Database Server Oracle 27,150.05 1x Sun Fire X4800 M2 8x 2.4 GHz Intel Xeon E7-8870 Oracle WebLogic 12c 1x Sun Fire X4470 M2 4x 2.4 GHz Intel Xeon E7-4870 Oracle Database 11g (11.2.0.2) Cisco 26,118.67 2x UCS B440 M2 Blade Server 4x 2.4 GHz Intel Xeon E7-4870 Oracle WebLogic 11g (10.3.5) 1x UCS C460 M2 Blade Server 4x 2.4 GHz Intel Xeon E7-4870 Oracle Database 11g (11.2.0.2) IBM 16,646.34 1x IBM Power 780 8x 3.86 GHz POWER 7 WebSphere Application Server V7 1x IBM Power 750 Express 4x 3.55 GHz POWER 7 IBM DB2 9.7 Workgroup Server Edition FP3a * SPECjEnterprise2010 EjOPS, bigger is better. Configuration Summary Application Server: 1 x Sun Fire X4800 M2 8 x 2.4 GHz Intel Xeon processor E7-8870 256 GB memory 4 x 10 GbE NIC 2 x FC HBA Oracle Linux 5 Update 6 Oracle WebLogic Server 11g Release 1 (10.3.5) Java HotSpot(TM) 64-Bit Server VM on Linux, version 1.7.0_02 (Java SE 7 Update 2) Database Server: 1 x Sun Fire X4470 M2 4 x 2.4 GHz Intel Xeon E7-4870 512 GB memory 4 x 10 GbE NIC 2 x FC HBA 2 x Sun StorageTek 2540 M2 4 x Sun Fire X4270 M2 4 x Sun Storage F5100 Flash Array Oracle Linux 5 Update 6 Oracle Database 11g Enterprise Edition Release 11.2.0.2 Benchmark Description SPECjEnterprise2010 is the third generation of the SPEC organization's J2EE end-to-end industry standard benchmark application. The SPECjEnterprise2010 benchmark has been designed and developed to cover the Java EE 5 specification's significantly expanded and simplified programming model, highlighting the major features used by developers in the industry today. This provides a real world workload driving the Application Server's implementation of the Java EE specification to its maximum potential and allowing maximum stressing of the underlying hardware and software systems. The workload consists of an end to end web based order processing domain, an RMI and Web Services driven manufacturing domain and a supply chain model utilizing document based Web Services. The application is a collection of Java classes, Java Servlets, Java Server Pages, Enterprise Java Beans, Java Persistence Entities (pojo's) and Message Driven Beans. The SPECjEnterprise2010 benchmark heavily exercises all parts of the underlying infrastructure that make up the application environment, including hardware, JVM software, database software, JDBC drivers, and the system network. 
The primary metric of the SPECjEnterprise2010 benchmark is jEnterprise Operations Per Second ("SPECjEnterprise2010 EjOPS"). This metric is calculated by adding the metrics of the Dealership Management Application in the Dealer Domain and the Manufacturing Application in the Manufacturing Domain. There is no price/performance metric in this benchmark. Key Points and Best Practices Sixteen Oracle WebLogic server instances were started using numactl, binding 2 instances per chip. Eight Oracle database listener processes were started, binding 2 instances per chip using taskset. Additional tuning information is in the report at http://spec.org. See Also Oracle Press Release -- SPECjEnterprise2010 Results Page Sun Fire X4800 M2 Server oracle.com OTN Sun Fire X4270 M2 Server oracle.com OTN Sun Storage 2540-M2 Array oracle.com OTN Oracle Linux oracle.com OTN Oracle Database 11g Release 2 Enterprise Edition oracle.com OTN WebLogic Suite oracle.com OTN Disclosure Statement SPEC and the benchmark name SPECjEnterprise are registered trademarks of the Standard Performance Evaluation Corporation. Sun Fire X4800 M2, 27,150.05 SPECjEnterprise2010 EjOPS; IBM Power 780, 16,646.34 SPECjEnterprise2010 EjOPS; Cisco UCS B440 M2, 26,118.67 SPECjEnterprise2010 EjOPS. Results from www.spec.org as of 3/27/2012.

    Read the article

  • Beginner Geek: Scan a Document or Picture in Windows 7

    - by Mysticgeek
    There may come a time when you want to digitize your priceless old pictures, or need to scan receipts and documents for your company. Today we look at how to scan a picture or document in Windows 7. Scanning Your Document In this example we're using an HP PSC 1500 All-In-One printer connected to a Windows 7 Home Premium 32-bit system. Different scanners will vary, however the process is essentially the same. The scanning process has changed a bit since the XP days. To scan a document in Windows 7, place the document or picture in the scanner, click on Start, and go to Devices and Printers. When the Devices and Printers window opens, find your scanning device and double-click on it to get the manufacturer's Printer Actions menu. For our HP PSC 1500 we have a few different options like printing, device setup, and scanner actions. Here we'll click on the Scan a document or photo hyperlink. The New Scan window opens and from here you can adjust the quality of the scanned image and choose the output file type. Then click the Preview button to get an idea of what the image will look like. If you're not happy with the preview, then you can go back and make any adjustments to the quality of the document or photo. Once everything looks good, click on the Scan button. The scanning process will start. The amount of time it takes will depend on your scanner type, and the quality of the settings you choose. The higher the quality…the more time it will take. You will have the option to tag the picture if you want to… Now you can view your scanned document or photo inside Windows Photo Viewer. If you're happy with the look of the document, you can send it off in an email, put it on a network drive, FTP it… whatever you need to do with it. Another method is to place the document or photo you wish to scan in the scanner, open up Devices and Printers, then right-click on the scanning device and select Start Scan from the context menu. This should bypass the manufacturer screen and go directly into the New Scan window, where you can start the scan process. From the Context Menu you can also choose Scan Properties. This will allow you to test the scanner if you're having problems with it and change some of its settings. Or you can choose Scan Profiles, which allows you to use pre-selected settings, create your own, or set one as the default. Although scanning documents and photos isn't as common an occurrence as it was a few years ago, Windows 7 still includes the feature. When you need to scan a document or photo in Windows 7, this should get you started.

    Read the article

  • Installing SharePoint 2010 in one machine with built in database

    - by sreejukg
    It is very easy to deploy SharePoint 2010 on a single server using the built-in database. Normally you would choose this kind of installation for evaluation purposes. When installing with the default settings, setup installs a Microsoft SQL Server 2008 Express database along with SharePoint. After installing SharePoint, you need to run the SharePoint Products and Technologies Configuration Wizard, which installs the Central Administration website and creates the configuration database and content database for your SharePoint sites. Limitations: 1. You cannot perform this installation on a domain controller. 2. The maximum size for an Express edition database is 4 GB. SharePoint 2010 only supports 64-bit operating systems; the installation steps below are for Windows Server 2008 R2 64-bit Enterprise Edition. Installation steps The first screen for the installation is shown below. As a first step you need to install the software prerequisites, so click on the corresponding link. Click next; here you have to agree to the license terms. Select the checkbox and then click next. The installation will start and the progress will be updated on the screen. This may take some time, as some components need to be downloaded from the internet during this process. Make sure you are connected to the internet, otherwise the installation will not succeed. If any error occurs, it will be displayed, and you will need to resolve it in order to continue. If everything is OK you will see the following success page. Click finish to exit the installation window. Now, from the first screen, select Install SharePoint Server. This will install SharePoint and SQL Server 2008 Express edition. First you need to enter the product key for SharePoint. Enter the product key and click continue. Now you need to accept the license agreement: select the checkbox and click continue. Select the installation type you want, then click on the Standalone button. In a production scenario you would select the server farm installation; this article only covers the first option, as installing a server farm is not in its scope. Once you click on Standalone, the installation starts and you can view the progress as shown below. If any error occurs during installation, you will get the details and a link to the log file. Refer to the log file, fix the corresponding issue, and then start the installation again. If the installation completes without any error, you will see the screen below. Make sure the check box "Run the SharePoint Products Configuration Wizard now" is selected and click close. The SharePoint Products Configuration Wizard starts. Click next; you will get the following warning. Click yes and the configuration steps start. You can view the progress for each step. Once completed, the screen below appears. Click finish to complete the installation. Now the SharePoint installation is complete. You can navigate to the SharePoint Central Administration website from Administrative Tools and start building your portal. Good luck!

    Read the article

  • Adobe Photoshop CS5 vs Photoshop CS5 extended

    - by Edward
    Adobe Photoshop has been an industry standard for most web designers and photographers worldwide. Photoshop CS5 has made photo editing much more refined, and the composition process has become easier than ever before. To weigh the advantages of Photoshop CS5 Extended over Photoshop CS5, we have written this comparison article from both a designer's and a photographer's perspective. Hopefully it will help you in your buying/upgrade decision.
Photoshop CS5 Photoshop CS5 pairs refined editing features with powerful photography tools. It makes the editing process easier, as fewer steps are involved to remove noise, add grain, create vignettes, correct lens distortions, sharpen, and create HDR images. It has quick image correction and color and tone control for professional purposes. Intelligent image editing and enhancement, plus extraordinarily advanced compositing, make it a better tool for photographers than earlier versions. It accelerates workflow with fast performance on 64-bit Windows® and Mac hardware and smoother interactions thanks to more GPU-accelerated features. It also boasts state-of-the-art processing with Adobe Photoshop Camera Raw 6, which helps maximize creative impact. It provides tremendous precision and freedom: users can easily select intricate image elements such as hair, create realistic painting effects, and remove any image element and see the space fill in almost magically. It offers easy access to core editing, a streamlined workflow and a flexible working environment, along with creative tools and content.
Photoshop CS5 Extended Photoshop CS5 Extended is quite innovative, incorporating 3D elements into 2D artwork directly within the digital imaging application, which gives users an easy on-ramp to 3D image creation. It also provides 3D editing. It has intelligent image editing and enhancement, offers advanced compositing, and has an extraordinary painting and drawing toolset. It provides for video and animation design, and it helps when working with specialized images for architecture, manufacturing, engineering, science, and medicine.
Where CS5 Extended scores over CS5 CS5 Extended has many features that are not included in CS5: technology for creating 3D extrusions; a 3D material library and picker; depth of field for 3D; 3D merging and scene composition improvements; 3D workflow improvements; customization of 3D features; image-based light sources; a shadow catcher for shadow creation; an enhanced ray tracer; context-sensitive widgets that allow easy control of objects, lights and cameras; and overlays for materials and mesh boundaries. Photoshop CS5 Extended incorporates all the features of CS5 plus these more advanced ones, allowing 3D creation and editing alongside the other advanced tools.
Redefining the Image-Editing Experience: A Photographer's Point of View Photoshop CS5 delivers amazing features and creative options, so even new users can perform advanced image manipulations and compositions. The breathtaking image intelligence behind Content-Aware Fill magically removes any image detail or object, examines the surroundings and seamlessly fills in the space left behind, matching the lighting, tone and noise of the surrounding area. The new Refine Edge makes nearly-impossible image selections possible. Masking was never easier; the toughest types of edges, such as hair and foliage, are now easier to fix.
To sum up, the following are a few advantages of CS5 Extended over previous versions: 64-bit processing; Content-Aware Fill; Refine Edge, which "makes nearly-impossible image selections possible"; HDR Pro, including ghost artifact removal and HDR toning, which gives the look of HDR with a single exposure; new brush options; improved image management with enhanced Adobe Bridge; lens corrections; improved black-and-white conversions; Puppet Warp, to precisely reposition or warp any image element; and Adobe Camera Raw 6. Pricing and Availability Adobe Photoshop CS5 and CS5 Extended are available through Adobe Authorized Resellers and the Adobe Store. The estimated street price for Adobe Photoshop CS5 is US$699, and US$999 for Photoshop CS5 Extended. Upgrade pricing and volume licensing are also available.

    Read the article

  • A temporary disagreement

    - by Tony Davis
    Last month, Phil Factor caused a furore amongst some MVPs with an article that attempted to offer simple advice to developers regarding the use of table variables, versus local and global temporary tables, in their code. Phil makes clear that table variables do come with some fairly major limitations (no distribution statistics, no parallel query plans for queries that modify table variables), but goes on to suggest that for reasonably small-scale strategic uses, and with a bit of due care and testing, table variables are a "good thing". Not everyone shares his opinion; in fact, I imagine he was rather aghast to learn that there were those who felt his article was akin to pulling the pin out of a grenade and tossing it into the database; table variables should be avoided in almost all cases, according to their advice, in favour of temp tables. In other words, a fairly major feature of SQL Server should be more-or-less 'off limits' to developers. The problem with temp tables is that, because they are scoped either to the procedure or the connection, it is easy to allow them to hang around for too long, eating up precious memory and bulking up the shared tempdb database. Unless they are explicitly dropped, global temporary tables, and local temporary tables created within a connection rather than within a stored procedure, will persist until the connection is closed or, with connection pooling, until the connection is reused. It's also quite common with ASP.NET applications to have connection leaks, as Bill Vaughn explains in his chapter in the "SQL Server Deep Dives" book, meaning that the web page exits without closing the connection object, maybe due to an error condition. This will then hang around in the heap for what might be hours before being picked up by the garbage collector. Table variables are much safer in this regard, since they are batch-scoped and so are cleaned up automatically once the batch is complete, which also means that they are intuitive for the developer to use because they conform to scoping rules that are closer to those in procedural code. On the surface, then, they look like an ideal way to deal with issues related to tempdb memory hogging. So why did Phil qualify his recommendation to use table variables? This is another of those cases where, like scalar UDFs and table-valued multi-statement UDFs, developers can sometimes get into trouble with a relatively benign-looking feature, due to the way it's been implemented in SQL Server. Once again, the biggest problem is how they are handled internally by the SQL Server query optimizer, which can make very poor choices for JOIN orders and so on in the absence of statistics, especially when joining to tables with highly-skewed data. The resulting execution plans can be horrible, as will be the resulting performance. If the JOIN is to a large table, that will hurt. Ideally, Microsoft would simply fix this issue so that developers can't get burned in this way; table variables have been around since SQL Server 2000, so Microsoft has had a bit of time to get it right. As I commented in regard to UDFs, when developers discover issues like this with such standard features, the database becomes an alien planet to them, where death lurks around each corner, and they continue to avoid these "killer" features years after the problems have eventually been resolved. In the meantime, what is the right approach?
Is it to say "hammers can kill, don't ever use hammers", or is it to try to explain, as Phil's article and follow-up blog post have tried to do, what the feature was intended for, why care must be applied in its use, and so enable developers to make properly-informed decisions, without requiring them to delve deep into the inner workings of SQL Server? Cheers, Tony.
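For readers who have not used the two constructs side by side, here is a minimal T-SQL sketch of the scoping difference Tony describes; the table and filter values are invented purely for illustration (against the AdventureWorks sample database), not taken from Phil's article.
    -- Table variable: batch-scoped, cleaned up automatically when the batch completes,
    -- but carries no distribution statistics for the optimizer.
    DECLARE @RecentOrders TABLE (OrderID INT PRIMARY KEY, OrderDate DATETIME);
    INSERT INTO @RecentOrders (OrderID, OrderDate)
    SELECT SalesOrderID, OrderDate
    FROM Sales.SalesOrderHeader
    WHERE OrderDate >= '20040101';

    -- Local temporary table: scoped to the procedure or connection, does support statistics,
    -- but lingers in tempdb until it is dropped or the connection is closed or reused.
    CREATE TABLE #RecentOrders (OrderID INT PRIMARY KEY, OrderDate DATETIME);
    INSERT INTO #RecentOrders (OrderID, OrderDate)
    SELECT SalesOrderID, OrderDate
    FROM Sales.SalesOrderHeader
    WHERE OrderDate >= '20040101';
    DROP TABLE #RecentOrders; -- the explicit clean-up that is so easy to forget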

    Read the article

  • Adventures in MVVM &ndash; ViewModel Location and Creation

    - by Brian Genisio's House Of Bilz
    More Adventures in MVVM In this post, I am going to explore how I prefer to attach ViewModels to my Views.  I have published the code to my ViewModelSupport project on CodePlex in case you'd like to see how it works along with some examples.  Some History My approach to View-First ViewModel creation has evolved over time.  I have constructed ViewModels in code-behind.  I have instantiated ViewModels in the resources sectoin of the view. I have used Prism to resolve ViewModels via Dependency Injection. I have created attached properties that use Dependency Injection containers underneath.  Of all these approaches, I continue to find issues either in composability, blendability or maintainability.  Laurent Bugnion came up with a pretty good approach in MVVM Light Toolkit with his ViewModelLocator, but as John Papa points out, it has maintenance issues.  John paired up with Glen Block to make the ViewModelLocator more generic by using MEF to compose ViewModels.  It is a great approach, but I don’t like baking in specific resolution technologies into the ViewModelSupport project. I bring these people up, not to name drop, but to give them credit for the place I finally landed in my journey to resolve ViewModels.  I have come up with my own version of the ViewModelLocator that is both generic and container agnostic.  The solution is blendable, configurable and simple to use.  Use any resolution mechanism you want: MEF, Unity, Ninject, Activator.Create, Lookup Tables, new, whatever. How to use the locator 1. Create a class to contain your resolution configuration: public class YourViewModelResolver: IViewModelResolver { private YourFavoriteContainer container = new YourFavoriteContainer(); public YourViewModelResolver() { // Configure your container } public object Resolve(string viewModelName) { return container.Resolve(viewModelName); } } Examples of doing this are on CodePlex for MEF, Unity and Activator.CreateInstance. 2. Create your ViewModelLocator with your custom resolver in App.xaml: <VMS:ViewModelLocator x:Key="ViewModelLocator"> <VMS:ViewModelLocator.Resolver> <local:YourViewModelResolver /> </VMS:ViewModelLocator.Resolver> </VMS:ViewModelLocator> 3. Hook up your data context whenever you want a ViewModel (WPF): <Border DataContext="{Binding YourViewModelName, Source={StaticResource ViewModelLocator}}"> This example uses dynamic properties on the ViewModelLocator and passes the name to your resolver to figure out how to compose it. 4. What about Silverlight? Good question.  You can't bind to dynamic properties in Silverlight 4 (crossing my fingers for Silverlight 5), but you CAN use string indexing: <Border DataContext="{Binding [YourViewModelName], Source={StaticResource ViewModelLocator}}"> But, as John Papa points out in his article, there is a silly bug in Silverlight 4 (as of this writing) that will call into the indexer 6 times when it binds.  While this is little more than a nuisance when getting most properties, it can be much more of an issue when you are resolving ViewModels six times.  If this gets in your way, the solution (as pointed out by John), is to use an IndexConverter (instantiated in App.xaml and also included in the project): <Border DataContext="{Binding Source={StaticResource ViewModelLocator}, Converter={StaticResource IndexConverter}, ConverterParameter=YourViewModelName}"> It is a bit uglier than the WPF version (this method will also work in WPF if you prefer), but it is still not all that bad.  
Conclusion: This approach works really well (I suppose I am a bit biased). It allows for composability from any mechanism you choose. It is blendable (consider serving up different objects in Design Mode if you wish... or different constructors… whatever makes sense to you). It works in Cider. It is configurable. It is flexible. It is the best way I have found to manage View-First ViewModel hookups. Thanks to the guys mentioned in this article for getting me to something I love using. Enjoy.
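As a rough, container-agnostic illustration of the idea, a resolver that needs no container at all might look like the sketch below, using Activator.CreateInstance and a naming convention; the MyApp.ViewModels namespace and class name are assumptions for the example, not part of the ViewModelSupport project.
    using System;
    using ViewModelSupport; // assumed namespace for IViewModelResolver

    public class ActivatorViewModelResolver : IViewModelResolver
    {
        // Resolve purely by convention: "MainViewModel" -> MyApp.ViewModels.MainViewModel
        // in the current assembly. Swap this body for MEF, Unity, Ninject, etc.
        public object Resolve(string viewModelName)
        {
            var type = Type.GetType("MyApp.ViewModels." + viewModelName);
            return type == null ? null : Activator.CreateInstance(type);
        }
    }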

    Read the article

  • Code is not the best way to draw

    - by Bertrand Le Roy
    It should be quite obvious: drawing requires constant visual feedback. Why is it then that we still draw with code in so many situations? Of course it’s because the low-level APIs always come first, and design tools are built after and on top of those. Existing design tools also don’t typically include complex UI elements such as buttons. When we launched our Touch Display module for Netduino Go!, we naturally built APIs that made it easy to draw on the screen from code, but very soon, we felt the limitations and tedium of drawing in code. In particular, any modification requires a modification of the code, followed by compilation and deployment. When trying to set-up buttons at pixel precision, the process is not optimal. On the other hand, code is irreplaceable as a way to automate repetitive tasks. While tools like Illustrator have ways to repeat graphical elements, they do so in a way that is a little alien and counter-intuitive to my developer mind. From these reflections, I knew that I wanted a design tool that would be structurally code-centric but that would still enable immediate feedback and mouse adjustments. While thinking about the best way to achieve this goal, I saw this fantastic video by Bret Victor: The key to the magic in all these demos is permanent execution of the code being edited. Whenever a parameter is being modified, everything is re-executed immediately so that the impact of the modification is instantaneously visible. If you do this all the time, the code and the result of its execution fuse in the mind of the user into dual representations of a single object. All mental barriers disappear. It’s like magic. The tool I built, Nutshell, is just another implementation of this principle. It manipulates a list of graphical operations on the screen. Each operation has a nice editor, and translates into a bit of code. Any modification to the parameters of the operation will modify the bit of generated code and trigger a re-execution of the whole program. This happens so fast that it feels like the drawing reacts instantaneously to all changes. The order of the operations is also the order in which the code gets executed. So if you want to bring objects to the front, move them down in the list, and up if you want to move them to the back: But where it gets really fun is when you start applying code constructs such as loops to the design tool. The elements that you put inside of a loop can use the loop counter in expressions, enabling crazy scenarios while retaining the real-time edition features. When you’re done building, you can just deploy the code to the device and see it run in its native environment: This works thanks to two code generators. The first code generator is building JavaScript that is executed in the browser to build the canvas view in the web page hosting the tool. The second code generator is building the C# code that will run on the Netduino Go! microcontroller and that will drive the display module. The possibilities are fascinating, even if you don’t care about driving small touch screens from microcontrollers: it is now possible, within a reasonable budget, to build specialized design tools for very vertical applications. Direct feedback is a powerful ally in many domains. Code generation driven by visual designers has become more approachable than ever thanks to extraordinary JavaScript libraries and to the powerful development platform that modern browsers provide. 
I encourage you to tinker with Nutshell and let it open your eyes to new possibilities that you may not have considered before. It’s open source. And of course, my company, Nwazet, can help you develop your own custom browser-based direct feedback design tools. This is real visual programming…

    Read the article

  • Creating my first F# program in my new &ldquo;Expert F# Book&rdquo;

    - by MarkPearl
    So I have a brief hour or so that I can dedicate today to reading my F# book. It’s a public holiday and my wife’s birthday and I have a ton of assignments for UNISA that I need to complete – but I just had to try something in F#. So I read chapter 1 – pretty much an introduction to the rest of the book – it looks good so far. Then I get to chapter 2, called “Getting Started with F# and .NET”. Great, there is a code sample on the first page of the chapter. So I open up VS2010 and create a new F# console project and type in the code which was meant to analyze a string for duplicate words… #light let wordCount text = let words = Split [' '] text let wordset = Set.ofList words let nWords = words.Length let nDups = words.Length - wordSet.Count (nWords, nDups) let showWordCount text = let nWords,nDups = wordCount text printfn "--> %d words in text" nWords printfn "--> %d duplicate words" nDups   So… bad start - VS does not like the “Split” method. It gives me an error message “The value constructor ‘Split’ is not defined”. It also doesn’t like wordSet.Count, telling me that the “namespace or module ‘wordSet’ is not defined”. ??? So a bit of googling and it turns out that there was a bit of shuffling of libraries between the CTP of F# and the Beta 2 of F#. To have access to the Split function you need to download the F# PowerPack and then reference it in your code… I download and install the PowerPack and then add the reference to FSharp.Core and FSharp.PowerPack in my project. Still no luck! Some more googling and the suggestions I got were something like this… #r "FSharp.PowerPack.dll";; #r "FSharp.PowerPack.Compatibility.dll";; So I add the code above to the top of my Program.fs file and still no joy… I now get an error message saying… Error 1 #r directives may only occur in F# script files (extensions .fsx or .fsscript). Either move this code to a script file, add a '-r' compiler option for this reference or delimit the directive with '#if INTERACTIVE'/'#endif'. So what does that mean? If I put the code straight into the F# interactive it works – but I want to be able to use it in a project. The C# equivalent, I would think, would be the “using” keyword. The #r doesn’t seem like it should be in the F# code. So I try what the compiler suggests by doing the following… #if INTERACTIVE #r "FSharp.PowerPack.dll";; #r "FSharp.PowerPack.Compatibility.dll";; #endif No luck, the Split method is still not recognized. So wait a second, it mentioned something about FSharp.PowerPack.Compatibility.dll – I haven’t added this as a reference to my project, so I add it and remove the two lines of #r code. Partial success – the Split method is now recognized and not underlined, but wordSet.Count is still not working. I look at my code again and it was a case error – the original wordset was mistyped compared to the wordSet. Some case correction and the compiler is no longer complaining. So the code now seems to work… listed below… #light let wordCount text = let words = String.split [' '] text let wordSet = Set.ofList words let nWords = words.Length let nDups = words.Length - wordSet.Count (nWords, nDups) let showWordCount text = let nWords,nDups = wordCount text printfn "--> %d words in text" nWords printfn "--> %d duplicate words" nDups  So to recap – if I wanted to use the interactive compiler then I need to put in the #r code. In my mind this is the equivalent of me adding the references to my project. 
If however I want to use the powerpack in a project – I just need to make sure that the correct references are there. I feel like a noob once again!

    Read the article

  • Autoscaling in a modern world&hellip;. last chapter

    - by Steve Loethen
    As we all know as coders, things like logging are never important. Our code will work right the first time. So, you can understand my surprise when, the first time I deployed the autoscaling worker role to the actual Azure fabric, it did not scale. I mean, it worked on my machine. How dare the datacenter argue with that. So, how did I track down the problem? (It turns out it was not so much code as the lack of the right certificate.) When I ran it locally in the developer fabric, I was able to see a wealth of information: lots of periodic status info every time the autoscaler came around to check on my rules and decide to act or not. But that information was not making it to Azure storage. The diagnostics were not being transferred to where I could easily see and use them to track down why things were not being cooperative. After a bit of digging, I discovered the problem. You need to add a bit of extra configuration code to get the correct information stored for you. I added the following to my app.config: <system.diagnostics> <sources> <source name="Autoscaling General" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch"> <listeners> <add name="AzureDiag" /> <remove name="Default"/> </listeners> </source> <source name="Autoscaling Updates" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch"> <listeners> <add name="AzureDiag" /> <remove name="Default"/> </listeners> </source> </sources> <switches> <add name="SourceSwitch" value="Verbose, Information, Warning, Error, Critical" /> </switches> <sharedListeners> <add type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" name="AzureDiag"/> </sharedListeners> <trace> <listeners> <add type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" name="AzureDiagnostics"> <filter type="" /> </add> </listeners> </trace> </system.diagnostics> Suddenly all the rich tracing info I needed was filling up my storage account. After a few cycles of attempting to scale, I identified the cert problem, uploaded a correct certificate, and away it went. I hope this was helpful.

    Read the article

  • Secret Agent Man

    - by Bil Simser
    Just a quick one this morning as we all get started in the week. Something that comes into play (sometimes in a big way) is the user agent string your browser gives off. So for example, using the User-Agent field in the request header, you can determine what browser the user is running and act accordingly. Internet Explorer 9 modified the UA string slightly, so just in case you're looking for it, here are the user agent strings for IE9 (in various modes):
    Internet Explorer 9 Mode: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)
    Internet Explorer 8 Mode: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; MS-RTC LM 8; InfoPath.3; .NET4.0C; .NET4.0E; Zune 4.7)
    Internet Explorer 7 Mode: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; MS-RTC LM 8; InfoPath.3; .NET4.0C; .NET4.0E; Zune 4.7)
    Internet Explorer 9 (Compatibility Mode): Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; MS-RTC LM 8; InfoPath.3; .NET4.0C; .NET4.0E; Zune 4.7)
    A couple of things to note here: this was from a 64-bit Windows 7 client, so that might account for the WOW64 in the agent string (I don't have a 32-bit client to test from). Various applications and platforms add to the UA string just like they do in previous IE releases; so for example you can see I have various .NET versions installed as well as Zune. You can take advantage of this by querying the UA string for compatibilities and present options accordingly to the end user. As applications will continue to add and modify this string, you'll want to query the string for parts, not the entire string. For example, if you want to detect if you're coming from IE running on a Windows Phone 7, just look for "iemobile" in the user agent string. Happy hacking!
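    As a small example of querying for parts of the string rather than the whole value, something along these lines should work inside an ASP.NET page or handler; the fragments tested here are only illustrative, so adjust them to whatever capability you actually care about.
        // Test for fragments, never the full string, since installed software
        // keeps appending tokens to the User-Agent value.
        string ua = Request.UserAgent ?? string.Empty;
        bool isMobileIE = ua.IndexOf("iemobile", StringComparison.OrdinalIgnoreCase) >= 0;
        bool isIE9 = ua.IndexOf("MSIE 9.0", StringComparison.OrdinalIgnoreCase) >= 0
                     && ua.IndexOf("Trident/5.0", StringComparison.OrdinalIgnoreCase) >= 0;
        if (isMobileIE)
        {
            // serve the mobile-friendly markup
        }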

    Read the article

  • How to inhibit suspend temporarily?

    - by Zorn
    I have searched around a bit for this and can't seem to find anything helpful. I have my PC running Ubuntu 12.10 set up to suspend after 30 minutes of inactivity. I don't want to change that, it works great most of the time. What I do want to do is disable the automatic suspend if a particular application is running. How can I do this? The closest thing I've found so far is to add a shell script in /usr/lib/pm-utils/sleep.d which checks if the application is running and returns 1 to indicate that suspend should be prevented. But it looks like the system then gives up on suspending automatically, instead of trying again after another 30 minutes. (As far as I can tell, if I move the mouse, that restarts the timer again.) It's quite likely the application will finish after a couple of hours, and I'd rather my PC then suspended automatically if I'm not using it at that point. (So I don't want to add a call to pm-suspend when the application finishes.) Is this possible? Any advice would be appreciated. Cheers. EDIT: As I noted in one of the comments below, what I actually wanted was to inhibit suspend when my PC was serving files over NFS; I just wanted to focus on the "suspend" part of the question because I already had an idea how to solve the NFS part. Using the 'xdotool' idea given in one of the answers, I have come up with the following script which I run from cron every few minutes. It's not ideal because it stops the screensaver kicking in as well, but it does work. I need to have a look at why 'caffeine' doesn't correctly re-enable suspend later on, then I could probably do better. Anyway, this does seem to work, so I'm including it here in case anyone else is interested. #!/bin/bash # If the output of this function changes between two successive runs of this # script, we inhibit auto-suspend. function check_activity() { /usr/sbin/nfsstat --server --list } # Prevent the automatic suspend from kicking in. function inhibit_suspend() { # Slightly jiggle the mouse pointer about; we do a small step and # reverse step to try to stop this being annoying to anyone using the # PC. TODO: This isn't ideal, apart from being a bit hacky it stops # the screensaver kicking in as well, when all we want is to stop # the PC suspending. Can 'caffeine' help? export DISPLAY=:0.0 xdotool mousemove_relative --sync -- 1 1 xdotool mousemove_relative --sync -- -1 -1 } LOG="$HOME/log/nfs-suspend-blocker.log" ACTIVITYFILE1="$HOME/tmp/nfs-suspend-blocker.current" ACTIVITYFILE2="$HOME/tmp/nfs-suspend-blocker.previous" echo "Started run at $(date)" >> "$LOG" if [ ! -f "$ACTIVITYFILE1" ]; then check_activity > "$ACTIVITYFILE1" exit 0; fi /bin/mv "$ACTIVITYFILE1" "$ACTIVITYFILE2" check_activity > "$ACTIVITYFILE1" if cmp --quiet "$ACTIVITYFILE1" "$ACTIVITYFILE2"; then echo "No activity detected since last run" >> "$LOG" else echo "Activity detected since last run; inhibiting suspend" >> "$LOG" inhibit_suspend fi

    Read the article

  • Dual boot Ubuntu and Windows 7, I Can only boot ubuntu through recovery mode

    - by Alec
    I want to become a new user of Ubuntu, however this problem is preventing me. I have/had Window 7 professional on my computer. I recently looked into getting linux. I discovered dual-booting and decided to give it a try. First I created a bootable flash drive with ubuntu 12.10 64 bit. I then followed the instructions on: https://help.ubuntu.com/community/WindowsDualBoot after I finished going through the setup, my computer rebooted. After the reboot I was able to select Ubuntu, advanced options for Ubuntu, 2 memory tests, and windows 7 (loader). So I chose Windows ( honestly i was more concerned that i still had everything on windows at this point). I then rebooted again and selected Ubuntu. When i selected Ubuntu, the background screen of Grub (the crimson/burgandy color) stayed for a few seconds then the screen went black: video here http://www.youtube.com/watch?v=6kKcG4sT7Lg&feature=plcp I tried again with the same results. so i redid the ubuntu install differently using http://www.liberiangeek.net/2012/10/dual-booting-windows-7-and-ubuntu-12-10-quantal-quetzal/. After rebooting the same thing happened. After that i was stumped, so i figured it could hurt to experiment. after all i backed up my windows 7 stuff, and i have the software disk. I tried booting in recovery mode under "advanced options for Ubuntu" and sure enough, after selecting continue to normal reboot it worked. So i updated and everything but when i rebooted it still wouldn't boot under Ubuntu. It would always boot after recovery mode. So i try installing 12.10 32 bit Ubuntu. the same problem keeps happening. I can still get to Ubuntu through recovery mode. so i went online and tried using the terminal (in ubuntu that i booted through recovery mode) when i was using it i discovered that "Error in sitecustomize; set PYTHONVERBOSE for traceback: EOFError: EOF read where not expected" kept showing up. also i noticed a notification in the top right corner that looked like a do not enter sign. it said "an error occured, please run package manager from the right-click menu or apt-get in a terminal to see what is wrong. the error message was: 'ror in sitecustomize;set PYTHONVERBOSE for traceback: EOFError: EOF read where not expected traceback (most recent called last): File "/usr/bin/lsb_release EOFError: EOF read where not expected 39;0' this usually means that your installed packages have unmet dependencies" Naturally i assumed this was what was causing my boot problems. I downloaded synaptic and updated everything and the error went away. but my boot problem was still a problem. so i go online find some things that have worked for others, like this Try to do this (in your terminal: sudo nano /etc/default/grub Look for: GRUB_CMDLINE_LINUX_DEFAULT="quiet splash" Change it too : GRUB_CMDLINE_LINUX_DEFAULT="quiet" And update Grub: sudo update-grub This should fix stuff.) I did this and i still have the problem. sorry for the excessive explanation, please help.

    Read the article

  • Weblogic 10.3.4 (PS3) nodemanager wont start?

    - by angelo.santagata
    Hi all, well Im back from Australia and one of the things which happened was Oracle announced the PS3 release of oracles SOA & Webcenter products have been released. Now I normally use pre-installed images but I always like to install the products at least once that way I get to see its installation caveats.. Here’s one. Installation on Windows 7 64bit, 64bit JVM, generic weblogic Server installer. All worked fine, EXCEPT I cant start the node manager, I get the following error <08-Feb-2011 17:16:48> <INFO> <Loading domains file: D:\products\wls1034\WLSERV~1.3\common\NODEMA~1\nodemanager.domains> <08-Feb-2011 17:16:48> <SEVERE> <Fatal error in node manager server> weblogic.nodemanager.common.ConfigException: Native version is enabled but nodemanager native library could not be loaded     at weblogic.nodemanager.server.NMServerConfig.initProcessControl(NMServerConfig.java:249)     at weblogic.nodemanager.server.NMServerConfig.<init>(NMServerConfig.java:190)     at weblogic.nodemanager.server.NMServer.init(NMServer.java:182)     at weblogic.nodemanager.server.NMServer.<init>(NMServer.java:148)     at weblogic.nodemanager.server.NMServer.main(NMServer.java:390)     at weblogic.NodeManager.main(NodeManager.java:31) Caused by: java.lang.UnsatisfiedLinkError: D:\products\wls1034\wlserver_10.3\server\native\win\32\nodemanager.dll: Can't load IA 32-bit .dll on a AMD 64-bit platform     at java.lang.ClassLoader$NativeLibrary.load(Native Method)     at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1803)     at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1728)     at java.lang.Runtime.loadLibrary0(Runtime.java:823)     at java.lang.System.loadLibrary(System.java:1028)     at weblogic.nodemanager.util.WindowsProcessControl.<init>(WindowsProcessControl.java:17)     at weblogic.nodemanager.util.ProcessControlFactory.getProcessControl(ProcessControlFactory.java:24)     at weblogic.nodemanager.server.NMServerConfig.initProcessControl(NMServerConfig.java:247)     ... 5 more Ok it appears that the node manager has gotten confused and thinks this is a 32bit install of Weblogic Server whereas it is the 64bit install.. Might have been something I did, or didnt do, on installation (e.g. –d64 on the jvm command line), however the workaround is pretty easy. 1. Create a file called nodemanager.properties in %WL_HOME%\common\nodemanager on my machine it was D:\products\wls1034\wlserver_10.3\common\nodemanager 2. Add the following line to it NativeVersionEnabled=false 3. And start it up!, this will force it not to use .DLL files and use emulation/non native methods instead..  

    Read the article

  • problem with network-manager-pptp

    - by Riuzaki90
    I've a problema with the VPA CAble connection of my university... on the website of the university there's a .sh file that set all the variables of the connection in ETC/PPP/PEERS and another .sh file that call the connection...I'm on ubuntu 11.10 and when I run the setup.sh I have this error: impossible to find network-manager-pptp these are the two file that I had talk about: #!/bin/bash echo "Creazione della connessione in corso attendere........." apt-get update apt-get install pptp-linux network-manager-pptp echo -n "Digitare la propria Username: " read USERNAME echo -n "Digitare la propria Password: " read PASSWORD pptpsetup --create UNICAL_Campus_Access --server 160.97.73.253 --username $USERNAME --password $PASSWORD echo 'pty "pptp 160.97.73.253 --nolaunchpppd"' >/etc/ppp/peers/UNICAL_Campus_Access echo 'require-mppe-128' >>/etc/ppp/peers/UNICAL_Campus_Access echo 'file /etc/ppp/options.pptp'>>/etc/ppp/peers/UNICAL_Campus_Access echo 'name '$USERNAME''>>/etc/ppp/peers/UNICAL_Campus_Access echo 'remotename PPTP'>>/etc/ppp/peers/UNICAL_Campus_Access echo 'ipparam UNICAL_Campus_Access'>>/etc/ppp/peers/UNICAL_Campus_Access echo $USERNAME' PPTP '$PASSWORD' *'>>/etc/ppp/chap-secrets rm /etc/ppp/options.pptp echo '###############################################################################'>/etc/ppp/options.pptp echo '# $Id: options.pptp,v 1.3 2006/03/26 23:11:05 quozl Exp $'>>/etc/ppp/options.pptp echo '#'>>/etc/ppp/options.pptp echo '# Sample PPTP PPP options file /etc/ppp/options.pptp'>>/etc/ppp/options.pptp echo '# Options used by PPP when a connection is made by a PPTP client.'>>/etc/ppp/options.pptp echo '# This file can be referred to by an /etc/ppp/peers file for the tunnel.'>>/etc/ppp/options.pptp echo '# Changes are effective on the next connection. See "man pppd".'>>/etc/ppp/options.pptp echo '#'>>/etc/ppp/options.pptp echo '# You are expected to change this file to suit your system. As'>>/etc/ppp/options.pptp echo '# packaged, it requires PPP 2.4.2 or later from http://ppp.samba.org/'>>/etc/ppp/options.pptp echo '# and the kernel MPPE module available from the CVS repository also on'>>/etc/ppp/options.pptp echo '# http://ppp.samba.org/, which is packaged for DKMS as kernel_ppp_mppe.'>>/etc/ppp/options.pptp echo '###############################################################################'>>/etc/ppp/options.pptp echo '# Lock the port'>>/etc/ppp/options.pptp echo 'lock'>>/etc/ppp/options.pptp echo '# Authentication'>>/etc/ppp/options.pptp echo '# We do not need the tunnel server to authenticate itself'>>/etc/ppp/options.pptp echo 'noauth'>>/etc/ppp/options.pptp echo '#We won"t do PAP, EAP, CHAP, or MSCHAP, but we will accept MSCHAP-V2'>>/etc/ppp/options.pptp echo '#(you may need to remove these refusals if the server is not using MPPE)'>>/etc/ppp/options.pptp echo 'refuse-pap'>>/etc/ppp/options.pptp echo 'refuse-eap'>>/etc/ppp/options.pptp echo 'refuse-chap'>>/etc/ppp/options.pptp echo 'refuse-mschap'>>/etc/ppp/options.pptp echo '# Compression Turn off compression protocols we know won"t be used'>>/etc/ppp/options.pptp echo 'nobsdcomp'>>/etc/ppp/options.pptp echo 'nodeflate'>>/etc/ppp/options.pptp echo '# Encryption'>>/etc/ppp/options.pptp echo '# (There have been multiple versions of PPP with encryption support,'>>/etc/ppp/options.pptp echo '# choose with of the following sections you will use. 
Note that MPPE'>>/etc/ppp/options.pptp echo '# requires the use of MSCHAP-V2 during authentication)'>>/etc/ppp/options.pptp echo '# http://ppp.samba.org/ the PPP project version of PPP by Paul Mackarras'>>/etc/ppp/options.pptp echo '# ppp-2.4.2 or later with MPPE only, kernel module ppp_mppe.o'>>/etc/ppp/options.pptp echo '#{{{'>>/etc/ppp/options.pptp echo '# Require MPPE 128-bit encryption'>>/etc/ppp/options.pptp echo '#require-mppe-128'>>/etc/ppp/options.pptp echo '#}}}'>>/etc/ppp/options.pptp echo '# http://polbox.com/h/hs001/ fork from PPP project by Jan Dubiec'>>/etc/ppp/options.pptp echo '#ppp-2.4.2 or later with MPPE and MPPC, kernel module ppp_mppe_mppc.o'>>/etc/ppp/options.pptp echo '#{{{'>>/etc/ppp/options.pptp echo '# Require MPPE 128-bit encryption'>>/etc/ppp/options.pptp echo '#mppe required,stateless'>>/etc/ppp/options.pptp echo '# }}}'>>/etc/ppp/options.pptp echo "setup di 'UNICAL Campus Access' terminato correttamente" echo "per connettersi eseguire lo script 'UNICAL_Campus_Access.sh' " and the second: #!/bin/bash echo "Connessione alla Rete del Centro Residenziale in corso attendere........." modprobe ppp_mppe pppd call UNICAL_Campus_Access sleep 30 tail -n 8 /var/log/messages echo "Connessione Stabilita" echo -n "Per terminare la connessione premere invio (in alternativa eseguire il commando 'killall pppd'):----> " read CONN killall pppd echo "Connessione terminata" I've correctly installed network-manager-pptp to the latest version...help?

    Read the article

  • OS Development. Only Few Particular Questions

    - by Total Anime Immersion
    I am new to this site as a member but have consulted its answers quite a lot of times. Besides my questions regarding OS Development hasn't been answered in any forum. In OS Dev. we make a bootloader. The org point is 7C00H. Why so? Why not 0000h? What are the last two signatures in the bootloader used for? People on every forum have answered that it is important for the system to recognize it as a bootable media. But I want a specific answer. What do each of those signatures do. I have the basic concept of a kernel. Point is.. it relates to different files required in a system. It sort of binds up everything that is individually developed. Now the thing is that that I have floating ideas in my mind regarding different aspects like keyboard, mouse, etc.. how do I put them all together? Which should I start with first? If possible please provide a step by step procedure of the startups of the kernel. Suppose I have developed my language entirely in C and Assembly. Now questions is will exe files work on my system.. if it doesn't then I have to create my own files and publish them. Which is a bad idea.. next step would be for me to go for a compiler for a language which I have designed myself. Now the point is.. How do I implement the compiler into my OS? After all this my final question is that.. How do you go about multitasking and multithreading? and I don't want to use int 21h as its dos specific.. how do I go about making files, renaming them, etc. and all assembly books teach 16 but programming.. how do i go about doing 32 bit or 64 bit with the knowledge I have.. if the basics and instructions are the same.. I don't mind.. but how do i go about otherwise? Don't tell me to give up the idea because I WON'T. And don't tell me it's too complex because I have a sharp knowledge of working of a system, C, Java, Assembly, C++ and python, C#, visual basic.. and not just basics but full fledged api developments.. but I really want to go deep into the systems part.. so I want professional help.. And I have gone through many OS project files but I want help particularly from this site as there are people with knowledge depth who can guide me the right way. And please don't suggest any books above 20$ and they should be available on flipkart as amazon charges massively for shipping and I prefer free shipping from flipkart.

    Read the article

  • Fake It Easy On Yourself

    - by Lee Brandt
    I have been using Rhino.Mocks pretty much since I started being a mockist-type tester. I have been very happy with it for the most part, but a year or so ago, I got a glimpse of some tests using Moq. I thought the little bit I saw was very compelling. For a long time, I had been using: 1: var _repository = MockRepository.GenerateMock<IRepository>(); 2: _repository.Expect(repo=>repo.SomeCall()).Return(SomeValue); 3: var _controller = new SomeKindaController(_repository); 4:  5: ... some exercising code 6: _repository.AssertWasCalled(repo => repo.SomeCall()); I was happy with that syntax. I didn’t go looking for something else, but what I saw was: 1: var _repository = new Mock(); And I thought, “That looks really nice!” The code was very expressive and easier to read than the Rhino.Mocks syntax. I have gotten so used to the Rhino.Mocks syntax that it made complete sense to me, but to developers I was mentoring in mocking, it was sometimes too obtuse. So I thought I would write some tests using Moq as my mocking tool. But I discovered something ugly once I got into it. The way Mocks are created makes Moq very easy to read, but that only gives you a Mock, not the object itself, which is what you’ll need to pass to the exercising code. So this is what it ends up looking like: 1: var _repository = new Mock<IRepository>(); 2: _repository.Setup(repo => repo.SomeCall()).Returns(SomeValue); 3: var _controller = new SomeKindaController(_repository.Object); 4: ... some exercising code 5: _repository.Verify(repo => repo.SomeCall()); Two things jump out at me: 1) when I set up my mocked calls, do I set it on the Mock or the Mock’s “object”? and 2) What am I verifying on SomeCall? Just that it was called? That it is available to call? Dealing with 2 objects, a “Mock” and an “Object”, made me have to consider naming conventions. Should I always call the mock _repositoryMock and the object _repository? So I went back to Rhino.Mocks. It is the most widely used framework, and showing others how to use it is easier because there is one natural object to use, the _repository. Then I came across a blog post from Patrik Hägne, and that led me to a post about FakeItEasy. I went to the Google Code site and when I saw the syntax, I got very excited. Then I read the wiki page where Patrik stated why he wrote FakeItEasy, and it mirrored my own experience. So I began to play with it a bit. So far, I am sold. The syntax is VERY easy to read and the fluent interface is super discoverable. It basically looks like this: 1: var _repository = A.Fake<IRepository>(); 2: A.CallTo(() => _repository.SomeMethod()).Returns(SomeValue); 3: var _controller = new SomeKindaController(_repository); 4: ... some exercising code 5: A.CallTo(() => _repository.SomeMethod()).MustHaveHappened(); Very nice. But is it mature? It’s only been around a couple of years, so will I be giving up something that I use a lot because it hasn’t been implemented yet? It doesn’t seem so. As I read more examples and posts from Patrik, he has some pretty complex scenarios. He even has support for VB.NET! So if you are looking for a mocking framework that looks and feels very natural, try out FakeItEasy!

    Read the article

  • HPCM 11.1.2.x - Outline Optimisation for Calculation Performance

    - by Jane Story
    When an HPCM application is first created, it is likely that you will want to carry out some optimisation on the HPCM application’s Essbase outline in order to improve calculation execution times. There are several things that you may wish to consider. Because at least one dense dimension for an application is required to deploy from HPCM to Essbase, “Measures” and “AllocationType”, as the only required dimensions in an HPCM application, are created dense by default. However, for optimisation reasons, you may wish to consider changing this default dense/sparse configuration. In general, calculation scripts in HPCM execute best when they are targeting destinations with one or more dense dimensions. Therefore, consider your largest target stage i.e. the stage with the most assignment destinations and choose that as a dense dimension. When optimising an outline in this way, it is not possible to have a dense dimension in every target stage and so testing with the dense/sparse settings in every stage is the key to finding the best configuration for each individual application. It is not possible to change the dense/sparse setting of individual cloned dimensions from EPMA. When a dimension that is to be repeated in multiple stages, and therefore cloned, is defined in EPMA, every instance of that dimension has the same storage setting. However, such manual changes may not be preserved in all cases. Please see below for full explanation. However, once the application has been deployed from EPMA to HPCM and from HPCM to Essbase, it is possible to make the dense/sparse changes to a cloned dimension directly in Essbase. This can be done by editing the properties of the outline in Essbase Administration Services (EAS) and manually changing the dense/sparse settings of individual dimensions. There are two methods of deployment from HPCM to Essbase from 11.1.2.1. There is a “replace” deploy method and an “update” deploy method: “Replace” will delete the Essbase application and replace it. If this method is chosen, then any changes made directly on the Essbase outline will be lost. If you use the update deploy method (with or without archiving and reloading data), then the Essbase outline, including any manual changes you have made (i.e. changes to dense/sparse settings of the cloned dimensions), will be preserved. Notes If you are using the calculation optimisation technique mentioned in a previous blog to calculate multiple POVs (https://blogs.oracle.com/pa/entry/hpcm_11_1_2_optimising) and you are calculating all members of that POV dimension (e.g. all months in the Period dimension) then you could consider making that dimension dense. Always review Block sizes after all changes! The maximum block size recommended in the Essbase Database Administrator’s Guide is 100k for 32 bit Essbase and 200k for 64 bit Essbase. However, calculations may perform better with a larger than recommended block size provided that sufficient memory is available on the Essbase server. Test different configurations to determine the most optimal solution for your HPCM application. Please note that this blog article covers HPCM outline optimisation only. Additional performance tuning can be achieved by methodically testing database settings i.e data cache, index cache and/or commit block settings. For more information on Essbase tuning best practices, please review these items in the Essbase Database Administrators Guide. 
For additional information on the commit block setting, please see the previous PA blog article https://blogs.oracle.com/pa/entry/essbase_11_1_2_commit

    Read the article

  • SQL SERVER – OVER clause with FIRST _VALUE and LAST_VALUE – Analytic Functions Introduced in SQL Server 2012 – ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING

    - by pinaldave
    Yesterday I had discussed two analytical functions FIRST_VALUE and LAST_VALUE. After reading the blog post I received very interesting question. “Don’t you think there is bug in your first example where FIRST_VALUE is remain same but the LAST_VALUE is changing every line. I think the LAST_VALUE should be the highest value in the windows or set of result.” I find this question very interesting because this is very commonly made mistake. No there is no bug in the code. I think what we need is a bit more explanation. Let me attempt that first. Before you do that I suggest you read yesterday’s blog post as this question is related to that blog post. Now let’s have fun following query: USE AdventureWorks GO SELECT s.SalesOrderID,s.SalesOrderDetailID,s.OrderQty, FIRST_VALUE(SalesOrderDetailID) OVER (ORDER BY SalesOrderDetailID) FstValue, LAST_VALUE(SalesOrderDetailID) OVER (ORDER BY SalesOrderDetailID) LstValue FROM Sales.SalesOrderDetail s WHERE SalesOrderID IN (43670, 43669, 43667, 43663) ORDER BY s.SalesOrderID,s.SalesOrderDetailID,s.OrderQty GO The above query will give us the following result: As per the reader’s question the value of the LAST_VALUE function should be always 114 and not increasing as the rows are increased. Let me re-write the above code once again with bit extra T-SQL Syntax. Please pay special attention to the ROW clause which I have added in the above syntax. USE AdventureWorks GO SELECT s.SalesOrderID,s.SalesOrderDetailID,s.OrderQty, FIRST_VALUE(SalesOrderDetailID) OVER (ORDER BY SalesOrderDetailID ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) FstValue, LAST_VALUE(SalesOrderDetailID) OVER (ORDER BY SalesOrderDetailID ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) LstValue FROM Sales.SalesOrderDetail s WHERE SalesOrderID IN (43670, 43669, 43667, 43663) ORDER BY s.SalesOrderID,s.SalesOrderDetailID,s.OrderQty GO Now once again check the result of the above query. The result of both the query is same because in OVER clause the default ROWS selection is always UNBOUNDED PRECEDING AND CURRENT ROW. If you want the maximum value of the windows with OVER clause you need to change the syntax to UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING for ROW clause. Now run following query and pay special attention to ROW clause again. USE AdventureWorks GO SELECT s.SalesOrderID,s.SalesOrderDetailID,s.OrderQty, FIRST_VALUE(SalesOrderDetailID) OVER (PARTITION BY SalesOrderID ORDER BY SalesOrderDetailID ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) FstValue, LAST_VALUE(SalesOrderDetailID) OVER (PARTITION BY SalesOrderID ORDER BY SalesOrderDetailID ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) LstValue FROM Sales.SalesOrderDetail s WHERE SalesOrderID IN (43670, 43669, 43667, 43663) ORDER BY s.SalesOrderID,s.SalesOrderDetailID,s.OrderQty GO Here is the resultset of the above query which is what questioner was asking. So in simple word, there is no bug but there is additional syntax needed to add to get your desired answer. The same logic also applies to PARTITION BY clause when used. Here is quick example of how we can further partition the query by SalesOrderDetailID with this new functions. 
USE AdventureWorks GO SELECT s.SalesOrderID,s.SalesOrderDetailID,s.OrderQty, FIRST_VALUE(SalesOrderDetailID) OVER (PARTITION BY SalesOrderID ORDER BY SalesOrderDetailID ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) FstValue, LAST_VALUE(SalesOrderDetailID) OVER (PARTITION BY SalesOrderID ORDER BY SalesOrderDetailID ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) LstValue FROM Sales.SalesOrderDetail s WHERE SalesOrderID IN (43670, 43669, 43667, 43663) ORDER BY s.SalesOrderID,s.SalesOrderDetailID,s.OrderQty GO The above query will give us a windowed resultset on SalesOrderDetailID as well as the FIRST and LAST value for each window. There is a lot to discuss for these two functions and we have just explored the tip of the iceberg. In a future post I will explore them in more depth. Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • Creating a branch for every Sprint

    - by Martin Hinshelwood
    There are a lot of developers using version control these days, but a feature of version control called branching is very poorly understood and remains unused by most developers in favour of Labels. Most developers think that branching is hard and complicated. Its not! What is hard and complicated is a bad branching strategy. Just like a bad software architecture a bad branch architecture, or one that is not adhered to can prove fatal to a project. We I was at Aggreko we had a fairly successful Feature branching strategy (although the developers hated it) that meant that we could have multiple feature teams working at the same time without impacting each other. Now, this had to be carefully orchestrated as it was a Business Intelligence team and many of the BI artefacts do not lend themselves to merging. Today at SSW I am working on a Scrum team delivering a product that will be used by many hundreds of developers. SSW SQL Deploy takes much of the pain out of upgrading production databases when you are not using the Database projects in Visual Studio. With Scrum each Scrum Team works for a fixed period of time on a single sprint. You can have one or more Scrum Teams involved in delivering a product, but all the work must be merged and tested, ready to be shown to the Product Owner at the the Sprint Review meeting at the end of the current Sprint. So, what does this mean for a branching strategy? We have been using a “Main” (sometimes called “Trunk”) line and doing a branch for each sprint. It’s like Feature Branching, but with only ONE feature in operation at any one time, so no conflicts Figure: DEV folder containing the Development branches.   I know that some folks advocate applying a Label at the start of each Sprint and then rolling back if you need to, but I have always preferred the security of a branch. Like: being able to create a release from Main that has Sprint3 code even while Sprint4 is being worked on. being sure I can always create a stable build on request. Being able to guarantee a version (labels are not auditable) Be able to abandon the sprint without having to delete the code (rare I know, but would be a mess if it happened) Being able to see the flow of change sets through to a safe release It helps you find invalid dependencies when merging to Main as there may be some file that is in everyone’s Sprint branch, but never got checked in. (We had this at the merge of Sprint2) If you are always operating in this way as a standard it makes it easier to then add more scrum teams in the future. Muscle memory of this way of working. Don’t Like: Additional DB space for the branches Baseless merging between sprint branches when changes are directly ported Note: I do not think we will ever attempt this! Maybe a bit tougher to see the history between sprint branches since the changes go up through Main and down to another sprint branch Note: What you would have to do is see which Sprint the changes were made in and then check the history he same file in that Sprint. A little bit of added complexity that you would have to do anyway with multiple teams. Over time, you can end up with a lot of old unused sprint branches. Perhaps destroy with /keephistory can help in this case. Note: We ALWAYS delete the Sprint branch after it has been merged into Main. That is the theory anyway, and as you can see from the images Sprint2 has already been deleted. 
Why take the chance of having a problem rolling back or wanting to keep some of the code, when you can just abandon a branch and start a new one? It just seems easier and less painful to use a branch to me! What do you think?   Technorati Tags: TFS,TFS2010,Software Development,ALM,Branching

    Read the article

  • Skewed: a rotating camera in a simple CPU-based voxel raycaster/raytracer

    - by voxelizr
    TL;DR -- in my first simple software voxel raycaster, I cannot get camera rotations to work, seemingly correct matrices notwithstanding. The result is skewed: like a flat rendering, correctly rotated, however distorted and without depth. (While axis-aligned ie. unrotated, depth and parallax are as expected.) I'm trying to write a simple voxel raycaster as a learning exercise. This is purely CPU based for now until I figure out how things work exactly -- fow now, OpenGL is just (ab)used to blit the generated bitmap to the screen as often as possible. Now I have gotten to the point where a perspective-projection camera can move through the world and I can render (mostly, minus some artifacts that need investigation) perspective-correct 3-dimensional views of the "world", which is basically empty but contains a voxel cube of the Stanford Bunny. So I have a camera that I can move up and down, strafe left and right and "walk forward/backward" -- all axis-aligned so far, no camera rotations. Herein lies my problem. Screenshot #1: correct depth when the camera is still strictly axis-aligned, ie. un-rotated. Now I have for a few days been trying to get rotation to work. The basic logic and theory behind matrices and 3D rotations, in theory, is very clear to me. Yet I have only ever achieved a "2.5 rendering" when the camera rotates... fish-eyey, bit like in Google Streetview: even though I have a volumetric world representation, it seems --no matter what I try-- like I would first create a rendering from the "front view", then rotate that flat rendering according to camera rotation. Needless to say, I'm by now aware that rotating rays is not particularly necessary and error-prone. Still, in my most recent setup, with the most simplified raycast ray-position-and-direction algorithm possible, my rotation still produces the same fish-eyey flat-render-rotated style looks: Screenshot #2: camera "rotated to the right by 39 degrees" -- note how the blue-shaded left-hand side of the cube from screen #2 is not visible in this rotation, yet by now "it really should"! Now of course I'm aware of this: in a simple axis-aligned-no-rotation-setup like I had in the beginning, the ray simply traverses in small steps the positive z-direction, diverging to the left or right and top or bottom only depending on pixel position and projection matrix. As I "rotate the camera to the right or left" -- ie I rotate it around the Y-axis -- those very steps should be simply transformed by the proper rotation matrix, right? So for forward-traversal the Z-step gets a bit smaller the more the cam rotates, offset by an "increase" in the X-step. Yet for the pixel-position-based horizontal+vertical-divergence, increasing fractions of the x-step need to be "added" to the z-step. Somehow, none of my many matrices that I experimented with, nor my experiments with matrix-less hardcoded verbose sin/cos calculations really get this part right. 
Here's my basic per-ray pre-traversal algorithm -- syntax in Go, but take it as pseudocode: fx and fy: pixel positions x and y rayPos: vec3 for the ray starting position in world-space (calculated as below) rayDir: vec3 for the xyz-steps to be added to rayPos in each step during ray traversal rayStep: a temporary vec3 camPos: vec3 for the camera position in world space camRad: vec3 for camera rotation in radians pmat: typical perspective projection matrix The algorithm / pseudocode: // 1: rayPos is for now "this pixel, as a vector on the view plane in 3d, at The Origin" rayPos.X, rayPos.Y, rayPos.Z = ((fx / width) - 0.5), ((fy / height) - 0.5), 0 // 2: rotate around Y axis depending on cam rotation. No prob since view plane still at Origin 0,0,0 rayPos.MultMat(num.NewDmat4RotationY(camRad.Y)) // 3: a temp vec3. planeDist is -0.15 or some such -- fov-based dist of view plane from eye and also the non-normalized, "in axis-aligned world" traversal step size "forward into the screen" rayStep.X, rayStep.Y, rayStep.Z = 0, 0, planeDist // 4: rotate this too -- 0,zstep should become some meaningful xzstep,xzstep rayStep.MultMat(num.NewDmat4RotationY(CamRad.Y)) // set up direction vector from still-origin-based-ray-position-off-rotated-view-plane plus rotated-zstep-vector rayDir.X, rayDir.Y, rayDir.Z = -rayPos.X - me.rayStep.X, -rayPos.Y, rayPos.Z + rayStep.Z // perspective projection rayDir.Normalize() rayDir.MultMat(pmat) // before traversal, the ray starting position has to be transformed from origin-relative to campos-relative rayPos.Add(camPos) I'm skipping the traversal and sampling parts -- as per screens #1 through #3, those are "basically mostly correct" (though not pretty) -- when axis-aligned / unrotated.

    Read the article

  • Access Music from Amie Street in Boxee

    - by Mysticgeek
    One of our favorite sites for discovering new music is Amie Street. Today we take a look at the Amie Street app for Boxee that allows you to access your favorite tunes from the Boxee interface. Amie Street is a cool site that allows you to discover a lot of cool music from independent artists. What makes Amie Street unique is that most of the music starts out free, then the price goes up incrementally as its popularity grows. The Amie Street app for Boxee lets you access music and playlists you’ve created on the site, with more features on the way. For this example we’re using the mouse and keyboard, but of course you can also get to each section using your remote if you have one. Or you can turn your iPod touch into a Boxee remote too. Amie Street in Boxee To access the Amie Street app, launch Boxee and click on Apps from the main menu. Under the Search Sidebar type in Amie Street and select it from the results field. Then you can add it to the My Apps section… and double-click on the icon. Click on Start to begin using it. You’ll be presented with a Welcome screen where you can sign into your account. If you don’t have an account yet, there is also an option to go to the Amie Street site and create one. Enter in your account credentials… Now you’ll be able to access your Library, Playlists, Search for new tunes, and check out your Recommended bands and artists. Hover the pointer over an album to get a bit more info about it such as the music genre. You’ll be able to play the songs from the playlists you created on the Amie Street site. You can browse through the history of the music you’ve played as well. Not all the features of this app seem to work as you’d expect them to, and some of the features are not yet available, like the Browse feature. Conclusion At the time of this writing we weren’t able to purchase music or get additional information about the artists. As development continues on Boxee and this app, you can expect more of a full user experience and the ability to purchase music. Even though some of the features are a bit buggy or not available, if you’re a Boxee user and a fan of Amie Street, this is a cool app to add to your collection. Download Boxee for Windows, Mac, and Ubuntu Learn more about Amie Street on Boxee

    Read the article

  • Implementing features in an Entity System

    - by Bane
    After asking two questions on Entity Systems (1, 2), and reading some articles on them, I think that I understand them much better than before. But, I still have some uncertainties, and mainly they are about building a Particle Emitter, an Input system, and a Camera. I obviously still have some problems understanding Entity Systems, and they might apply to a whole other range of objects, but I chose these three because they are very different concepts and should cover a pretty big ground, and help me understand Entity Systems and how to handle problems like these myself, as they come along. I am building an engine in Javascript, and I've implemented most of the core features, which include: input handling, flexible animation system, particle emitter, math classes and functions, scene handling, a camera and a render, and a whole bunch of other things that engines usually support. Then, I read Byte56's answer that got me interested into making the engine into an Entity System one. It would still remain an HTML5 game engine with the basic Scene philosophy, but it should support dynamic creation of entities from components. These are some of the definitions from the previous questions, updated: An Entity is an identifier. It doesn't have any data, it's not an object, it's a simple id that represents an index in the Scene's list of all entities (which I actually plan to implement as a component matrix). A Component is a data holder, but with methods that can operate on that data. The best example is a Vector2D, or a "Position" component. It has data: x and y, but also some methods that make operating on the data a bit easier: add(), normalize(), and so on. A System is something that can operate on a set of entities that meet the certain requirements, usually they (the entities) need to have a specified (by the system itself) set of components to be operated upon. The system is the "logic" part, the "algorithm" part, all the functionality supplied by components is purely for easier data management. The problem that I have now is fitting my old engine concept into this new programming paradigm. Lets start with the simplest one, a Camera. The camera has a position property (Vector2D), a rotation property and some methods for centering it around a point. Each frame, it is fed to a renderer, along with a scene, and all the objects are translated according to it's position. Then the scene is rendered. How could I represent this kind of an object in an Entity System? Would the camera be an entity or simply a component? A combination (see my answer)? Another issues that is bothering me is implementing a Particle Emitter. For what exactly I mean by that, you can check out my video of it: http://youtu.be/BObargIMQsE. The problem I have with this is, again, what should be what. I'm pretty sure that particles themselves shouldn't be entities, as I want to support 10k+ of them, and creating that much entities would be a heavy blow on my performance, I believe. Or maybe not? Depends on the implementation, but anyone with experience: please, do answer. The last bit I wan't to talk about, which is also bugging me the most, is how input should be handled. In my current version of the engine, there is a class called Input. It's a handler that subscribes to browser's events, such as keypresses, and mouse position changes, and also it maintains an internal state. Then, the player class has a react() method, which accepts an input object as an argument. 
The advantage of this is that the input object could be serialized into JSON and then shared over the network, allowing for smooth multiplayer simulations. But how does this translate into an Entity System?
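    To make those three definitions concrete, here is one small JavaScript sketch of how they can hang together; the names (Scene, MovementSystem, the component layout) are only illustrative, not a prescription for the engine described above.
        // A Scene hands out entity ids and stores components keyed by entity id.
        function Scene() {
          this.nextId = 0;
          this.components = {}; // e.g. components["Position"][entityId] = { x: 0, y: 0 }
        }
        Scene.prototype.createEntity = function () { return this.nextId++; };
        Scene.prototype.addComponent = function (id, name, data) {
          (this.components[name] = this.components[name] || {})[id] = data;
        };

        // A System declares the components it needs and runs its logic over matching entities.
        function MovementSystem() {}
        MovementSystem.prototype.update = function (scene, dt) {
          var positions = scene.components["Position"] || {};
          var velocities = scene.components["Velocity"] || {};
          for (var id in velocities) {
            if (positions[id]) { // the entity has both required components
              positions[id].x += velocities[id].x * dt;
              positions[id].y += velocities[id].y * dt;
            }
          }
        };

        // Usage: a camera could simply be an entity with a Position (and rotation) component,
        // while a particle emitter entity could own its particle pool as plain data inside one
        // component, so the 10k+ individual particles never need to be entities themselves.
        var scene = new Scene();
        var player = scene.createEntity();
        scene.addComponent(player, "Position", { x: 0, y: 0 });
        scene.addComponent(player, "Velocity", { x: 10, y: 0 });
        new MovementSystem().update(scene, 1 / 60);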

    Read the article

< Previous Page | 181 182 183 184 185 186 187 188 189 190 191 192  | Next Page >