Search Results

Search found 66801 results on 2673 pages for 'near real time'.


  • How to set focus on a CustomComboBox in a CellEditingTemplate when entering the page for the first time (MVVM)

    - by Shamin
    XAML:

        PreparingCellForEdit="dg_PreparingCellForEdit" BeginningEdit="dg_BeginningEdit"
        <data:DataGridTemplateColumn MinWidth="300">
          <data:DataGridTemplateColumn.HeaderStyle>
            <Style TargetType="primitives:DataGridColumnHeader" BasedOn="{StaticResource FOTDataGridColumnHeaderStyle}">
              <Setter Property="ContentTemplate">
                <Setter.Value>
                  <DataTemplate>
                    <TextBlock Text="{Binding CancelReasonText2, Source={StaticResource LabelResource}}" Style="{StaticResource TextBlockLabelStandardStyle}"/>
                  </DataTemplate>
                </Setter.Value>
              </Setter>
            </Style>
          </data:DataGridTemplateColumn.HeaderStyle>
          <data:DataGridTemplateColumn.CellTemplate>
            <DataTemplate>
              <TextBlock Text="{Binding CancelReason.CancelCodeDescription}" Style="{StaticResource TextBlockLabelStandardStyle}"/>
            </DataTemplate>
          </data:DataGridTemplateColumn.CellTemplate>
          <data:DataGridTemplateColumn.CellEditingTemplate>
            <DataTemplate>
              <input:AutoCompleteBox x:Name="cBoxCancelReason" FilterMode="StartsWith" IsDropDownOpen="True"
                                     SelectedItem="{Binding CancelReason, Mode=TwoWay}" ItemsSource="{Binding CancelCodes}"
                                     ValueMemberPath="CancelCodeDescription">
                <input:AutoCompleteBox.ItemTemplate>
                  <DataTemplate>
                    <TextBlock Text="{Binding CancelCodeDescription}" Style="{StaticResource TextBlockLabelStandardStyle}"/>
                  </DataTemplate>
                </input:AutoCompleteBox.ItemTemplate>
              </input:AutoCompleteBox>
            </DataTemplate>
          </data:DataGridTemplateColumn.CellEditingTemplate>
        </data:DataGridTemplateColumn>
        </data:DataGrid.Columns>
        </data:DataGrid>

    Code-behind:

        public partial class CancelFlightView : UserControl, ICancelFlightView
        {
            private data.CancelCode DefaultCancelCode
            {
                get
                {
                    data.CancelCode code = new data.CancelCode();
                    code.CancelCd = "-1";
                    code.CancelCodeDescription = "-- Select Cancel Reason --";
                    return code;
                }
            }

            public CancelFlightView()
            {
                InitializeComponent();
                this.dg.LoadingRow += new EventHandler<DataGridRowEventArgs>(dg_LoadingRow);
                //this.Loaded += new RoutedEventHandler(CancelFlightView_Loaded);
            }

            void dg_LoadingRow(object sender, DataGridRowEventArgs e)
            {
                CheckBox checkBox = (CheckBox)dg.Columns[0].GetCellContent(e.Row);
                if (checkBox.IsChecked.Value)
                {
                    FrameworkElement obj = (FrameworkElement)dg.Columns[1].GetCellContent(e.Row);
                    System.Windows.Browser.HtmlPage.Plugin.Focus();
                    DataGridCell cellEdit = (DataGridCell)obj.Parent;
                    cellEdit.Focus();
                    dg.BeginEdit();
                }
            }

            //private void UserControl_Loaded(object sender, RoutedEventArgs e)
            //{
            //    if (DataContext != null)
            //    {
            //        CancelFlightViewModel viewModel = (CancelFlightViewModel)DataContext;
            //        viewModel.View = this;
            //        viewModel.Grid = dg;
            //        //viewModel.InitFocus();
            //    }
            //}

            //void CancelFlightView_Loaded(object sender, RoutedEventArgs e)
            //{
            //    if (dg.SelectedItem != null)
            //    {
            //        CheckBox checkBox = (CheckBox)dg.Columns[0].GetCellContent(dg.SelectedItem);
            //        if (checkBox.IsChecked.Value)
            //        {
            //            DataGridCell cellEdit = ((DataGridCell)((System.Windows.Controls.Primitives.DataGridCellsPresenter)((DataGridCell)checkBox.Parent).Parent).Children[1]);
            //            dg.CurrentColumn = dg.Columns[1];
            //            System.Windows.Browser.HtmlPage.Plugin.Focus();
            //            cellEdit.Focus();
            //            dg.BeginEdit();
            //        }
            //    }
            //}

            public CancelFlightView(CancelFlightViewModel viewModel) : this()
            {
                ViewModel = viewModel;
            }

            private void dg_PreparingCellForEdit(object sender, DataGridPreparingCellForEditEventArgs e)
            {
                object obj = dg.Columns[1].GetCellContent(e.Row);
                if (obj != null && obj.GetType() == typeof(AutoCompleteBox))
                {
                    AutoCompleteBox cBoxCancelReason = (AutoCompleteBox)obj;
                    System.Windows.Browser.HtmlPage.Plugin.Focus();
                    cBoxCancelReason.Focus();
                }
            }

            private void CustomComboBox_SelectionChanged(object sender, SelectionChangedEventArgs e) { }

            private void dg_BeginningEdit(object sender, DataGridBeginningEditEventArgs e) { }

            private void chkFlight_Click(object sender, RoutedEventArgs e)
            {
                CheckBox chkTemp = sender as CheckBox;
                if (!chkTemp.IsChecked.Value)
                {
                }
                else
                {
                    DataGridCell cellEdit = ((DataGridCell)((System.Windows.Controls.Primitives.DataGridCellsPresenter)((DataGridCell)chkTemp.Parent).Parent).Children[1]);
                    dg.CurrentColumn = dg.Columns[1];
                    cellEdit.Focus();
                    dg.BeginEdit();
                }
            }

            private void LayoutRoot_KeyUp(object sender, KeyEventArgs e)
            {
                //if (e.Key == Key.Enter)
                //{
                //}
            }

            #region ICancelFlightView Members

            public CancelFlightViewModel ViewModel
            {
                get { return DataContext as CancelFlightViewModel; }
                set { DataContext = value; }
            }

            #endregion
        }

    Now, when the user clicks the CheckBox, I can set focus on the CustomComboBox, but I can't set focus on the row whose checkBox.IsChecked.Value is true when the page is opened for the first time. Is this possible with the MVVM pattern? Looking forward to your reply; thanks very much.
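    A hedged sketch of one way to handle the first-load case: defer the focus work until after the DataGrid has finished generating and rendering its rows, by scheduling it from the Loaded event via the Dispatcher. The names (dg, the column indexes) follow the question; the Dispatcher deferral itself is an assumption worth verifying against your Silverlight version.

        // Sketch only: assumes the checkbox sits in column 0 and the editor in column 1.
        public CancelFlightView()
        {
            InitializeComponent();
            this.dg.LoadingRow += new EventHandler<DataGridRowEventArgs>(dg_LoadingRow);
            this.Loaded += (s, e) =>
            {
                // Run after the current layout pass so GetCellContent returns real cells.
                Dispatcher.BeginInvoke(() =>
                {
                    foreach (object item in dg.ItemsSource)
                    {
                        CheckBox cb = dg.Columns[0].GetCellContent(item) as CheckBox;
                        if (cb != null && cb.IsChecked == true)
                        {
                            dg.SelectedItem = item;
                            dg.CurrentColumn = dg.Columns[1];
                            dg.ScrollIntoView(item, dg.Columns[1]);
                            System.Windows.Browser.HtmlPage.Plugin.Focus();
                            dg.BeginEdit();
                            break;
                        }
                    }
                });
            };
        }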


  • How can I make a div expand on click with only one open at a time?

    - by imHavoc
    As shown here: http://www.learningjquery.com/2007/03/accordion-madness. But I need help editing it so that it will work for my circumstances.

    Sample HTML:

        <div class="row even">
          <div class="info">
            <div class="class">CK1</div>
            <div class="teacher_chinese">??</div>
            <div class="teacher_english">Teacher Name</div>
            <div class="assistant_chinese">??</div>
            <div class="assistant_english">Assistant Name</div>
            <div class="room">Room 00</div>
            <div class="book"></div>
          </div>
          <div class="chapters">
            <a href="../../curriculum/cantonese/textbook.php?cls=C1&amp;ch=1"><span class="chapter">?</span></a>
            <a href="../../curriculum/cantonese/textbook.php?cls=C1&amp;ch=2"><span class="chapter">?</span></a>
            <a href="../../curriculum/cantonese/textbook.php?cls=C1&amp;ch=3"><span class="chapter">?</span></a>
            <a href="../../curriculum/cantonese/textbook.php?cls=C1&amp;ch=4"><span class="chapter">?</span></a>
            <a href="../../curriculum/cantonese/textbook.php?cls=C1&amp;ch=5"><span class="chapter">?</span></a>
            <a href="../../curriculum/cantonese/textbook.php?cls=C1&amp;ch=6"><span class="chapter">?</span></a>
            <a href="../../curriculum/cantonese/textbook.php?cls=C1&amp;ch=7"><span class="chapter">?</span></a>
            <a href="../../curriculum/cantonese/textbook.php?cls=C1&amp;ch=8"><span class="chapter">?</span></a>
            <a href="../../curriculum/cantonese/textbook.php?cls=C1&amp;ch=9"><span class="chapter">?</span></a>
            <a href="../../curriculum/cantonese/textbook.php?cls=C1&amp;ch=10"><span class="chapter">?</span></a>
            <a href="../../curriculum/cantonese/textbook.php?cls=C1&amp;ch=11"><span class="chapter">??</span></a>
            <a href="../../curriculum/cantonese/textbook.php?cls=C1&amp;ch=12"><span class="chapter">??</span></a>
            <a href="../../curriculum/cantonese/textbook.php?cls=C1&amp;ch=13"><span class="chapter">??</span></a>
          </div>
        </div>

    jQuery (work in progress):

        $(document).ready(function() {
            $('div#table_cantonese .chapters').hide();
            $('div#table_cantonese .book').click(function() {
                var $nextDiv = $(this).next();
                var $visibleSiblings = $nextDiv.siblings('div:visible');
                if ($visibleSiblings.length) {
                    $visibleSiblings.slideUp('fast', function() {
                        $nextDiv.slideToggle('fast');
                    });
                } else {
                    $nextDiv.slideToggle('fast');
                }
            });
        });

    So when the end-user clicks on div.book, div.chapters will expand, and only one div.chapters will be shown at a time. So if a div.chapters is already open, it will close the open one first before animating the one the user clicked on.
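    For what it's worth, a hedged variant of the same idea. Note that in the sample HTML, .book sits inside .info, so the panel to toggle is the .info's sibling rather than the .book's own next sibling; the selectors below assume that structure. If the close animation must finish before the open starts, chain via the slideUp callback as in the question.

        $(document).ready(function() {
            var $chapters = $('div#table_cantonese .chapters').hide();
            $('div#table_cantonese .book').click(function() {
                // The clicked .book lives inside .info; .chapters is .info's sibling.
                var $next = $(this).closest('.info').next('.chapters');
                // Close every other open panel, then toggle the clicked one.
                $chapters.not($next).filter(':visible').slideUp('fast');
                $next.slideToggle('fast');
            });
        });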


  • Play! Framework 1.2.4 --- C3P0 settings to avoid Communications link failure due to idle time

    - by HelpMeStackOverflowMyOnlyHope
    I'm trying to customize my C3P0 settings to avoid the error shown at the bottom of this post. It was suggested at this URL --- http://make-it-open.blogspot.com/2008/12/sql-error-0-sqlstate-08s01.html --- to adjust the settings as follows. In hibernate.cfg.xml, write:

        <property name="c3p0.min_size">5</property>
        <property name="c3p0.max_size">20</property>
        <property name="c3p0.timeout">1800</property>
        <property name="c3p0.max_statements">50</property>

    Then create "c3p0.properties" in your root classpath folder and write:

        c3p0.testConnectionOnCheckout=true
        c3p0.acquireRetryDelay=1000
        c3p0.acquireRetryAttempts=1

    I've tried to make those adjustments following the direction of the Play! Framework documentation, where they say to use "db.pool..." as follows:

        db.pool.timeout=1800
        db.pool.maxSize=15
        db.pool.minSize=5
        db.pool.initialSize=5
        db.pool.acquireRetryAttempts=1
        db.pool.preferredTestQuery=SELECT 1
        db.pool.testConnectionOnCheckout=true
        db.pool.acquireRetryDelay=1000
        db.pool.maxStatements=50

    Are those settings not going to work? Should I be trying to set them in a different way? With those settings I still get the error shown below, which is due to too long an idle time.

    Complete stack trace of the error:

        23:00:44,932 WARN ~ SQL Error: 0, SQLState: 08S01
        2012-04-13T23:00:44+00:00 app[web.1]: 23:00:44,932 ERROR ~ Communications link failure
        2012-04-13T23:00:44+00:00 app[web.1]:
        2012-04-13T23:00:44+00:00 app[web.1]: The last packet successfully received from the server was 274,847 milliseconds ago. The last packet sent successfully to the server was 7 milliseconds ago.
        2012-04-13T23:00:44+00:00 app[web.1]: 23:00:44,934 ERROR ~ Why the driver complains here?
        2012-04-13T23:00:44+00:00 app[web.1]: com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: No operations allowed after connection closed. Connection was implicitly closed by the driver.
        2012-04-13T23:00:44+00:00 app[web.1]: at com.mysql.jdbc.Util.handleNewInstance(Util.java:407)
        2012-04-13T23:00:44+00:00 app[web.1]: at com.mysql.jdbc.Util.getInstance(Util.java:382)
        2012-04-13T23:00:44+00:00 app[web.1]: at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1013)
        2012-04-13T23:00:44+00:00 app[web.1]: at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:987)
        2012-04-13T23:00:44+00:00 app[web.1]: at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:982)
        2012-04-13T23:00:44+00:00 app[web.1]: at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:927)
        2012-04-13T23:00:44+00:00 app[web.1]: at com.mysql.jdbc.ConnectionImpl.throwConnectionClosedException(ConnectionImpl.java:1213)
        2012-04-13T23:00:44+00:00 app[web.1]: at com.mysql.jdbc.ConnectionImpl.getMutex(ConnectionImpl.java:3101)
        2012-04-13T23:00:44+00:00 app[web.1]: at com.mysql.jdbc.ConnectionImpl.setAutoCommit(ConnectionImpl.java:4975)
        2012-04-13T23:00:44+00:00 app[web.1]: at org.hibernate.jdbc.BorrowedConnectionProxy.invoke(BorrowedConnectionProxy.java:74)
        2012-04-13T23:00:44+00:00 app[web.1]: at $Proxy49.setAutoCommit(Unknown Source)
        2012-04-13T23:00:44+00:00 app[web.1]: at play.db.jpa.JPAPlugin.closeTx(JPAPlugin.java:368)
        2012-04-13T23:00:44+00:00 app[web.1]: at play.db.jpa.JPAPlugin.onInvocationException(JPAPlugin.java:328)
        2012-04-13T23:00:44+00:00 app[web.1]: at play.plugins.PluginCollection.onInvocationException(PluginCollection.java:447)
        2012-04-13T23:00:44+00:00 app[web.1]: at play.Invoker$Invocation.onException(Invoker.java:240)
        2012-04-13T23:00:44+00:00 app[web.1]: at play.jobs.Job.onException(Job.java:124)
        2012-04-13T23:00:44+00:00 app[web.1]: at play.jobs.Job.call(Job.java:163)
        2012-04-13T23:00:44+00:00 app[web.1]: at play.jobs.Job$1.call(Job.java:66)
        2012-04-13T23:00:44+00:00 app[web.1]: at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
        2012-04-13T23:00:44+00:00 app[web.1]: at java.util.concurrent.FutureTask.run(FutureTask.java:166)
        2012-04-13T23:00:44+00:00 app[web.1]: at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:165)
        2012-04-13T23:00:44+00:00 app[web.1]: at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:266)
        2012-04-13T23:00:44+00:00 app[web.1]: at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
        2012-04-13T23:00:44+00:00 app[web.1]: at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
        2012-04-13T23:00:44+00:00 app[web.1]: at java.lang.Thread.run(Thread.java:636)
        2012-04-13T23:00:44+00:00 app[web.1]: Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
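    One hedged suggestion: independently of the db.pool.* keys, c3p0 itself reads a c3p0.properties file from the root of the classpath (conf/ in a Play 1.x app). Whether Play's bundled c3p0 honours it in 1.2.4 is an assumption worth testing, but the property names below are standard c3p0 ones; testing idle connections on a period shorter than MySQL's wait_timeout is the usual way to avoid this failure.

        # c3p0.properties (on the classpath, e.g. conf/) -- a sketch, not verified against Play 1.2.4
        c3p0.testConnectionOnCheckin=true
        c3p0.idleConnectionTestPeriod=300
        c3p0.preferredTestQuery=SELECT 1
        c3p0.acquireRetryAttempts=3
        c3p0.acquireRetryDelay=1000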


  • Watching setTimeout loops so that only one is running at a time.

    - by DA
    I'm creating a content rotator in jQuery. 5 items total. Item 1 fades in, pauses 10 seconds, fades out, then item 2 fades in. Repeat. Simple enough. Using setTimeout I can call a set of functions that create a loop and will repeat the process indefinitely.

    I now want to add the ability to interrupt this rotator at any time by clicking on a navigation element to jump directly to one of the content items. I originally started going down the path of pinging a variable constantly (say every half second) that would check to see if a navigation element was clicked and, if so, abandon the loop, then restart the loop based on the item that was clicked. The challenge I ran into was how to actually ping a variable via a timer. The solution is to dive into JavaScript closures... which are a little over my head but definitely something I need to delve into more. However, in the process of that, I came up with an alternative option that actually seems to be better performance-wise (theoretically, at least).

    I have a sample running here: http://jsbin.com/uxupi/14 (it's using console.log, so have Firebug running).

    Sample script:

        $(document).ready(function(){
            var loopCount = 0;
            $('p#hello').click(function(){
                loopCount++;
                doThatThing(loopCount);
            })
            function doThatOtherThing(currentLoopCount) {
                console.log('doThatOtherThing-'+currentLoopCount);
                if(currentLoopCount==loopCount){
                    setTimeout(function(){doThatThing(currentLoopCount)},5000)
                }
            }
            function doThatThing(currentLoopCount) {
                console.log('doThatThing-'+currentLoopCount);
                if(currentLoopCount==loopCount){
                    setTimeout(function(){doThatOtherThing(currentLoopCount)},5000);
                }
            }
        })

    The logic being that every click of the trigger element will kick off the loop, passing into itself a variable equal to the current value of the global variable. That variable gets passed back and forth between the functions in the loop. Each click of the trigger also increments the global variable, so that subsequent calls of the loop have a unique local variable. Then, within the loop, before the next step of each loop is called, it checks to see if the variable it has still matches the global variable. If not, it knows that a new loop has already been activated, so it just ends the existing loop.

    Thoughts on this? Valid solution? Better options? Caveats? Dangers?

    UPDATE: I'm using John's suggestion below via the clearTimeout option. However, I can't quite get it to work. The logic is as such:

        var slideNumber = 0;
        var timeout = null;

        function startLoop(slideNumber) {
            // ...do stuff here to set up the slide based on slideNumber...
            slideFadeIn()
        }

        function continueCheck(){
            if (timeout != null) {
                // cancel the scheduled task.
                clearTimeout(timeout);
                timeout = null;
                return false;
            }else{
                return true;
            }
        };

        function slideFadeIn() {
            if (continueCheck){
                // a new loop hasn't been called yet so proceed...
                // fade in the LI
                $currentListItem.fadeIn(fade, function() {
                    if(multipleFeatures){
                        timeout = setTimeout(slideFadeOut,display);
                    }
                });
            };
        }

        function slideFadeOut() {
            if (continueLoop){
                // a new loop hasn't been called yet so proceed...
                slideNumber=slideNumber+1;
                if(slideNumber==features.length) {
                    slideNumber = 0;
                };
                timeout = setTimeout(function(){startLoop(slideNumber)},100);
            };
        }

        startLoop(slideNumber);

    The above kicks off the looping. I then have navigation items that, when clicked, should stop the above loop and restart it with a new beginning slide:

        $(myNav).click(function(){
            clearTimeout(timeout);
            timeout = null;
            startLoop(thisItem);
        })

    If I comment out 'startLoop...' from the click event, it does indeed stop the initial loop. However, if I leave that last line in, it doesn't actually stop the initial loop. Why? What happens is that both loops seem to run in parallel for a period. So, when I click my navigation, clearTimeout is called, which clears it.
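    A hedged sketch combining both ideas, clearTimeout plus a generation token: every restart cancels the pending timer and bumps the token, and every scheduled callback re-checks the token it was created with, so a callback that was already in flight when the clear happened still bails out instead of running in parallel. Function names and totalSlides are illustrative, not from the question.

        var timeout = null;
        var generation = 0; // bumped on every restart; stale callbacks see a mismatch

        function startLoop(slideNumber) {
            if (timeout !== null) { clearTimeout(timeout); timeout = null; }
            generation++;
            run(slideNumber, generation);
        }

        function run(slideNumber, myGeneration) {
            if (myGeneration !== generation) return; // a newer loop has taken over
            // ...fade the current slide in/out here...
            timeout = setTimeout(function() {
                // totalSlides is assumed (features.length in the question)
                run((slideNumber + 1) % totalSlides, myGeneration);
            }, 10000);
        }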


  • View bound to paged collection view not updating all of the time.

    - by Thomas
    I'm new to Silverlight and trying to build a business application using the MVVM pattern and RIA Services. I have a view model class that contains a PagedCollectionView, and it is set as the item source of a DataGrid. When I update the PagedCollectionView, the DataGrid is only updated the first time; after that, subsequent changes to the data do not show in the view until after another edit. Things seem to be delayed by one edit. Below is a summarized example of my XAML and code-behind.

    This is the code for my view model:

        public class CustomerContactLinks : INotifyPropertyChanged
        {
            private ObservableCollection<CustomerContactLink> _CustomerContact;
            public ObservableCollection<CustomerContactLink> CustomerContact
            {
                get
                {
                    if (_CustomerContact == null)
                        _CustomerContact = new ObservableCollection<CustomerContactLink>();
                    return _CustomerContact;
                }
                set { _CustomerContact = value; }
            }

            private PagedCollectionView _CustomerContactPaged;
            public PagedCollectionView CustomerContactPaged
            {
                get
                {
                    if (_CustomerContactPaged == null)
                        _CustomerContactPaged = new PagedCollectionView(CustomerContact);
                    return _CustomerContactPaged;
                }
            }

            private TicketSystemDataContext _ctx;
            public TicketSystemDataContext ctx
            {
                get
                {
                    if (_ctx == null)
                        _ctx = new TicketSystemDataContext();
                    return _ctx;
                }
            }

            public void GetAll()
            {
                ctx.Load(ctx.GetCustomerContactInfoQuery(), LoadCustomerContactsComplete, null);
            }

            private void LoadCustomerContactsComplete(LoadOperation<CustomerContactLink> lo)
            {
                foreach (var entity in lo.Entities)
                {
                    CustomerContact.Add(entity as CustomerContactLink);
                }
            }

            #region INotifyPropertyChanged Members
            public event PropertyChangedEventHandler PropertyChanged;
            private void RaisePropertyChanged(string propertyName)
            {
                if (PropertyChanged != null)
                {
                    this.PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
                }
            }
            #endregion
        }

    Here are the basics of my XAML:

        <Data:DataGrid x:Name="GridCustomers" MinHeight="100" MaxWidth="1000" IsReadOnly="True" AutoGenerateColumns="False">
          <Data:DataGrid.Columns>
            <Data:DataGridTextColumn Header="First Name" Binding="{Binding Customer.FirstName}" Width="105" />
            <Data:DataGridTextColumn Header="MI" Binding="{Binding Customer.MiddleName}" Width="35" />
            <Data:DataGridTextColumn Header="Last Name" Binding="{Binding Customer.LastName}" Width="105"/>
            <Data:DataGridTextColumn Header="Address1" Binding="{Binding Contact.Address1}" Width="130"/>
            <Data:DataGridTextColumn Header="Address2" Binding="{Binding Contact.Address2}" Width="130"/>
            <Data:DataGridTextColumn Header="City" Binding="{Binding Contact.City}" Width="110"/>
            <Data:DataGridTextColumn Header="State" Binding="{Binding Contact.State}" Width="50"/>
            <Data:DataGridTextColumn Header="Zip" Binding="{Binding Contact.Zip}" Width="45"/>
            <Data:DataGridTextColumn Header="Home" Binding="{Binding Contact.PhoneHome}" Width="85"/>
            <Data:DataGridTextColumn Header="Cell" Binding="{Binding Contact.PhoneCell}" Width="85"/>
            <Data:DataGridTextColumn Header="Email" Binding="{Binding Contact.Email}" Width="118"/>
          </Data:DataGrid.Columns>
        </Data:DataGrid>
        <DataForm:DataForm x:Name="CustomerDetails" Header="Customer Details" AutoGenerateFields="False"
                           AutoEdit="False" AutoCommit="False" CommandButtonsVisibility="Edit"
                           Width="1000" Margin="0,5,0,0">
          <DataForm:DataForm.EditTemplate>
          </DataForm:DataForm.EditTemplate>
        </DataForm:DataForm>

    And here is my code-behind:

        public Customers()
        {
            InitializeComponent();
            BusyDialogIndicator.IsBusy = true;
            Loaded += new RoutedEventHandler(Customers_Loaded);
            CustomerDetails.BeginningEdit += new EventHandler(CustomerDetails_BeginningEdit);
        }

        void CustomerDetails_BeginningEdit(object sender, System.ComponentModel.CancelEventArgs e)
        {
            CustomerContacts.CustomerContactPaged.EditItem(CustomerDetails.CurrentItem);
        }

        private void Customers_Loaded(object sender, RoutedEventArgs e)
        {
            CustomerContacts = new CustomerContactLinks();
            CustomerContacts.GetAll();
            GridCustomers.ItemsSource = CustomerContacts.CustomerContactPaged;
            GridCustomerPager.Source = CustomerContacts.CustomerContactPaged;
            GridCustomers.SelectionChanged += new SelectionChangedEventHandler(GridCustomers_SelectionChanged);
            BusyDialogIndicator.IsBusy = false;
        }

        void GridCustomers_SelectionChanged(object sender, SelectionChangedEventArgs e)
        {
            CustomerDetails.CurrentItem = GridCustomers.SelectedItem as CustomerContactLink;
        }

        private void SaveChanges_Click(object sender, RoutedEventArgs e)
        {
            if (WebContext.Current.User.IsAuthenticated)
            {
                bool commited = CustomerDetails.CommitEdit();
                if (commited && (!CustomerDetails.IsItemChanged && CustomerDetails.IsItemValid))
                {
                    CustomerContacts.Update(CustomerDetails.CurrentItem as CustomerContactLink);
                    CustomerContacts.ctx.SubmitChanges();
                    CustomerContacts.CustomerContactPaged.CommitEdit();
                    CustomerContacts.CustomerContactPaged.Refresh();
                    (GridCustomers.ItemsSource as PagedCollectionView).Refresh();
                }
            }
        }
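    One hedged thing to check for the one-edit-behind symptom: the order of commits in SaveChanges_Click. Closing the PagedCollectionView's edit transaction before submitting and refreshing (rather than after) lets the collection view raise change notifications for the current edit instead of the previous one. A sketch only, using the names from the question; whether this is the actual cause depends on how EditItem was entered.

        private void SaveChanges_Click(object sender, RoutedEventArgs e)
        {
            if (!WebContext.Current.User.IsAuthenticated) return;

            bool committed = CustomerDetails.CommitEdit();
            if (committed && !CustomerDetails.IsItemChanged && CustomerDetails.IsItemValid)
            {
                // Close the collection view's edit transaction first so its
                // notifications reflect this edit, not the previous one.
                CustomerContacts.CustomerContactPaged.CommitEdit();
                CustomerContacts.Update(CustomerDetails.CurrentItem as CustomerContactLink);
                CustomerContacts.ctx.SubmitChanges();
                CustomerContacts.CustomerContactPaged.Refresh();
            }
        }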


  • SharePoint 2010 release date - is it that important?

    - by CharlesLee
    There has been lots of excitement in the SharePoint community over the last few days as Microsoft have announced the official release date of SharePoint 2010. May 12th is the date for your diaries (RTM in April). The twittersphere has been telling everyone about this news for the last few days and there is much excitement. The major conferences this year all seem to have a SharePoint 2010 focus, and some are entirely focussed on the new product (e.g. the SharePoint Evolution Conference). Now, by all accounts Microsoft have plugged some significant functionality gaps that exist in WSS 3.0 and MOSS 2007 and provided some exciting new functionality. You don't need me to tell you about these, as the MVPs (and other community members) are doing a sterling job; after all, that is why Microsoft has MVPs in the first place.

    Let's get real for a second, though, as there is a significant investment involved in moving to SharePoint 2010. Firstly, you need 64-bit architecture across the board; for some environments that is not an inconsequential hurdle but a pretty significant roadblock. The development farm, test farm and UAT farm are all going to require the same infrastructure upgrades. To take advantage of the tooling for SP2010 you will need to upgrade to Visual Studio 2010, and your development team is going to require 64-bit hardware/OS too. I would not recommend installing SP2010 in client installation mode (i.e. for Windows 7) on your developer machines; I would use this for demo machines only.

    Something that lots of people seem to forget in all their whooping and hollering about the new release is that a large amount of end-user training is going to be required, as the browser UI has now adopted the omnipotent ribbon interface and there are other new and more complicated features. SharePoint Designer has also entirely changed in both look and feel, and some significant feature changes have taken place. Let us not forget that some companies have not long upgraded to MOSS 2007 and are yet to see a significant ROI for that project, nor the reticence that most companies feel about implementing v1 Microsoft products.

    This is only the surface of the deeper issues which would be involved in any upgrade process, so I guess I share a small part of the concern voiced by Mark Miller of EndUserSharePoint.com: is SharePoint 2010 relevant? I don't share this sentiment in its entirety, as I firmly believe that all companies should be looking at SharePoint 2010 from day one; however, most large-scale existing implementations of MOSS 2007 are going to be several years away from a serious upgrade project. So should the conference organisers and the SharePoint community as a whole be a little more understanding of the real-world issues? It's easy to get carried away in the excitement of a new product and new tools to play with, but there needs to be a focus on the real-world issues that most people are facing day to day, and at the moment and for the short-term future (at the very least the next 12 months) that is fairly and squarely in the WSS 2.0/3.0 and SPS 2003/MOSS 2007 camps.

    Don't get me wrong, I am very, very excited about getting to grips with SharePoint 2010 in the real world and I cannot wait for my first real project to come along, but for now I am just being realistic about the reality for most people who work with SharePoint.
I have been spending a lot of time on www.sharepointoverflow.com recently as there is a community of people building up who are committed to answering the real world questions that folks are dealing with every day.  I urge you to take a look and either ask or answer some questions direct from the front line of the SharePoint world.


  • SQL SERVER – Guest Posts – Feodor Georgiev – The Context of Our Database Environment – Going Beyond the Internal SQL Server Waits – Wait Type – Day 21 of 28

    - by pinaldave
    This guest post is submitted by Feodor. Feodor Georgiev is a SQL Server database specialist with extensive experience of thinking both within and outside the box. He has wide experience of different systems and solutions in the fields of architecture, scalability, performance, etc. Feodor has experience with SQL Server 2000 and later versions, and is certified in SQL Server 2008. In this article Feodor explains the server-client-server process, and concentrates on the mutual waits between client and SQL Server. This is essential in grasping the concept of waits in a 'global' application plan.

    Recently I was asked to write a blog post about the wait statistics in SQL Server, and since I had been thinking about writing it for quite some time now, here it is. It is a widespread idea that the wait statistics in SQL Server will tell you everything about your performance. Well, almost. Or should I say – barely. The reason for this is that SQL Server is always a part of a bigger system – there are always other players in the game: whether it is a client application, a web service, any other kind of data import/export process and so on. In short, the SQL Server surroundings look like this: This means that SQL Server, aside from its internal waits, also depends on external waits and settings. As we can see in the picture above, SQL Server needs to have an interface in order to communicate with the surrounding clients over the network. For this communication, SQL Server uses protocol interfaces. I will not go into detail about which protocols are best, but you can read this article. Also, review the information about TDS (tabular data stream).

    As we all know, our system is only as fast as its slowest component. This means that when we look at our environment as a whole, SQL Server might be a victim of external pressure, no matter how well we have tuned our database server performance.

    Let's dive into an example: let's say that we have a web server hosting a web application which is using data from our SQL Server, hosted on another server. The network card of the web server for some reason is malfunctioning (think of a hardware failure, driver failure, or just improper setup) and does not send/receive data faster than 10Mbs. On the other end, our SQL Server will not be able to send/receive data at a faster rate either. This means that the application users will notify the support team and will say: "My data is coming very slow."

    Now, let's move on to a bit more exciting example: imagine that there is a similar setup as the example above – one web server and one database server – and the application is not using any stored procedure calls, but instead for every user request the application is sending an 80kb query over the network to the SQL Server. (I really thought this does not happen in real life until I saw it one day.) So, what happens in this case? To make things worse, let's say that the 80kb query text is submitted from the application to the SQL Server at least 100 times per minute, and as often as 300 times per minute in peak times. Here is what happens: in order for this query to reach the SQL Server, it will have to be broken into a number of network packets (according to the packet size settings) and will travel over the network.
    On the other side, our SQL Server network card will receive the packets and pass them to our network layer, the packets will get assembled, and eventually SQL Server will start processing the query – parsing, algebrizing, generating the query execution plan and so on. So far, we have already had a serious network overhead by waiting for the packets to reach our database engine. There will certainly be some processing overhead – until the database engine deals with the 80kb query and its 20 subqueries. The waits you see in the DMVs are actually collected from the point the query reaches the SQL Server and the packets are assembled.

    Let's say that our query is processed and it finally returns 15000 rows. These rows have a certain size as well, depending on the data types returned. This means that the data will be converted to packets (depending on the network packet size settings) and will have to reach the application server. There will also be waits; however, this time you will be able to see a wait type in the DMVs called ASYNC_NETWORK_IO. What this wait type indicates is that the client is not consuming the data fast enough and the network buffers are filling up.

    Recently Pinal Dave posted a blog on Client Statistics. What Client Statistics does is capture the physical flow characteristics of the query between the client (Management Studio, in this case) and the server and back to the client. As you see in the image, there are three categories: Query Profile Statistics, Network Statistics and Time Statistics.

    - Number of server roundtrips – a roundtrip consists of a request sent to the server and a reply from the server to the client. For example, if your query has three select statements and they are separated by the 'GO' command, then there will be three different roundtrips.
    - TDS packets sent from the client – TDS (tabular data stream) is the language which SQL Server speaks, and in order for applications to communicate with SQL Server, they need to pack the requests in TDS packets. TDS packets sent from the client is the number of packets sent from the client; in case the request is large, it may need more buffers, and eventually might even need more server roundtrips.
    - TDS packets received from server – is the number of TDS packets sent by the server to the client during the query execution.
    - Bytes sent from client – is the volume of the data sent to our SQL Server, measured in bytes; i.e. how big a query we have sent to the SQL Server. This is why it is best to use stored procedures, since the reusable code (which already exists as an object in the SQL Server) will only be called as the name of the procedure plus parameters, and this will minimize the network pressure.
    - Bytes received from server – is the amount of data the SQL Server has sent to the client, measured in bytes. Depending on the number of rows and the datatypes involved, this number will vary. But still, think about the network load when you request data from SQL Server.
    - Client processing time – is the amount of time spent in milliseconds between the first received response packet and the last received response packet by the client.
    - Wait time on server replies – is the time in milliseconds between the last request packet which left the client and the first response packet which came back from the server to the client.
    - Total execution time – is the sum of client processing time and wait time on server replies (the SQL Server internal processing time).

    Here is an illustration of the client-server communication model which should help you understand the mutual waits in a client-server environment. Keep in mind that a query with a large 'wait time on server replies' means the server took a long time to produce the very first row. This is usual for queries that have operators that need the entire sub-query to evaluate before they proceed (for example, sort and top operators). However, a query with a very short 'wait time on server replies' means that the query was able to return the first row fast. On the other hand, a long 'client processing time' does not necessarily imply that the client spent a lot of time processing and the server was blocked waiting on the client. It can simply mean that the server continued to return rows from the result, and this is how long it took until the very last row was returned.

    The bottom line is that developers and DBAs should work together and think carefully about resource utilization in the client-server environment. From experience I can say that so far I have seen only cases where the application developers and the database developers are on their own and do not ask questions about the other party's world. I would recommend using the Client Statistics tool during new development to track the performance of the queries, and also to find a synchronous way of utilizing resources between the client – server – client.

    Here is another example: think about a similar setup as above, but add another server to the game. Let's say that we keep our media on a separate server, and together with the data from our SQL Server we need to display some images on the webpage requested by our user. No matter how simple or complicated the logic to get the images is, if the images are 500kb each, our users will get the page slowly and they will still think that there is something wrong with our data.

    Anyway, I don't mean to get carried away too far from SQL Server. Instead, what I would like to say is that DBAs should also be aware of 'the big picture'. I wrote a blog post a while back on this topic, and if you are interested, you can read about the big picture here. And finally, here are some guidelines for monitoring the network performance and improving it:

    - Run a trace and outline all queries that return more than 1000 rows (in Profiler you can actually filter and sort the captured trace by the number of returned rows). This is not a set number; it is more of a guideline. The general thought is that no application user can consume that many rows at once. Ask yourself and your fellow developers: 'why?'.
    - Monitor your network counters in Perfmon: Network Interface: Output queue length, Redirector: Network errors/sec, TCPv4: Segments retransmitted/sec and so on.
    - Make sure to establish a good friendship with your network administrator (buy them coffee, for example :) ) and get into a conversation about the network settings. Have them explain to you how the network cards are set up – are they standalone, are they 'teamed', what are the settings – full duplex and so on.
    - Find some time to read a bit about networking.

    In this short blog post I hope I have turned your attention to 'the big picture' and the fact that there are other factors affecting our SQL Server, aside from its internal workings.
    As further reading I would highly recommend the Wait Stats series on this blog, and I would also recommend you have that coffee-break conversation with your network admin as soon as possible. This guest post is written by Feodor Georgiev. Read all the posts in the Wait Types and Queue series. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL
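    To put a number on the client-side consumption waits Feodor describes, the aggregate wait statistics are exposed through the sys.dm_os_wait_stats DMV; a quick look at ASYNC_NETWORK_IO (cumulative since the last service restart or stats clear) might look like this:

        SELECT wait_type,
               waiting_tasks_count,
               wait_time_ms,
               max_wait_time_ms
        FROM sys.dm_os_wait_stats
        WHERE wait_type = 'ASYNC_NETWORK_IO';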


  • Python: combining two scripts into one

    - by Alex
    I have two separately made Python scripts: one that makes a sine wave sound based on time, and another that produces a sine wave graph based on the same time factors. I need help combining them into one running file. Here's the first:

        from struct import pack
        from math import sin, pi
        import time

        def au_file(name, freq, freq1, dur, vol):
            fout = open(name, 'wb')
            # header needs size, encoding=2, sampling_rate=8000, channel=1
            fout.write('.snd' + pack('>5L', 24, 8*dur, 2, 8000, 1))
            factor = 2 * pi * freq/8000
            factor1 = 2 * pi * freq1/8000
            # write data
            for seg in range(8 * dur):
                # sine wave calculations
                sin_seg = sin(seg * factor) + sin(seg * factor1)
                fout.write(pack('b', vol * 64 * sin_seg))
            fout.close()

        t = time.strftime("%S", time.localtime())
        ti = time.strftime("%M", time.localtime())
        tis = float(t)
        tis = tis * 100
        tim = float(ti)
        tim = tim * 100

        if __name__ == '__main__':
            au_file(name='timeSound.au', freq=tim, freq1=tis, dur=1000, vol=1.0)
            import os
            os.startfile('timeSound.au')

    and the second is this:

        from Tkinter import *
        import math
        import time

        t = time.strftime("%S", time.localtime())
        ti = time.strftime("%M", time.localtime())
        tis = float(t)
        tis = tis / 100
        tim = float(ti)
        tim = tim / 100

        root = Tk()
        root.title("This very moment")

        width = 400
        height = 300
        center = height//2
        x_increment = 1
        # width stretch
        x_factor1 = tis
        x_factor2 = tim
        # height stretch
        y_amplitude = 50

        c = Canvas(width=width, height=height, bg='black')
        c.pack()

        str1 = "sin(x)=white"
        c.create_text(10, 20, anchor=SW, text=str1)
        center_line = c.create_line(0, center, width, center, fill='red')

        # create the coordinate list for the sin() curve, have to be integers
        xy1 = []
        xy2 = []
        for x in range(400):
            # x coordinates
            xy1.append(x * x_increment)
            xy2.append(x * x_increment)
            # y coordinates
            xy1.append(int(math.sin(x * x_factor1) * y_amplitude) + center)
            xy2.append(int(math.sin(x * x_factor2) * y_amplitude) + center)

        sinS_line = c.create_line(xy1, fill='white')
        sinM_line = c.create_line(xy2, fill='yellow')

        root.mainloop()
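    A hedged sketch of one way to merge them: keep au_file() and the canvas-drawing code as functions in a single file, compute the shared time factors once, start the sound first (Tk's mainloop() blocks until the window closes), then show the graph. draw_graph is a hypothetical name for the second script's body wrapped in a function; os.startfile is Windows-only, as in the original.

        import os
        import time

        # ...paste au_file() from script 1 and a draw_graph(x_factor1, x_factor2)
        # function wrapping the Tkinter code from script 2 here...

        def main():
            # shared time factors, read once so sound and graph agree
            t = float(time.strftime("%S", time.localtime()))
            ti = float(time.strftime("%M", time.localtime()))

            # sound uses the scaled-up factors, as in the first script
            au_file(name='timeSound.au', freq=ti * 100, freq1=t * 100, dur=1000, vol=1.0)
            os.startfile('timeSound.au')  # returns immediately; playback runs alongside

            # graph uses the scaled-down factors; mainloop() blocks until closed
            draw_graph(t / 100, ti / 100)

        if __name__ == '__main__':
            main()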


  • Database design for credit based purchases

    - by FreshCode
    I need an elegant way to implement credit-based purchases for an online store with a small variety of products which can be purchased using virtual credit or real currency. Alternatively, products could be priced only in credits.

    Previous work: I have implemented credit-based purchasing before using different product types (e.g. Credit, Voucher or Music) with post-order processing to assign purchased credit to users in the form of real currency, which could subsequently be used to discount future orders' charge totals. This worked fairly well as a makeshift solution, but did not succeed in disconnecting the virtual currency from the real currency, which is what I'd like to do, since spending credits is psychologically easier for customers than spending real currency.

    Design: I need guidance on designing the database correctly with support for the simultaneous bulk purchase of credits at a discount along with real-currency products. Alternatively, should all products be priced in credits, with only credit having a real-currency value?

    Existing database design:

        Partial Products table: ProductId, Title, Type, UnitPrice, SalePrice
        Partial Orders table: OrderId, UserId (related to Users table, not shown), Status, Value, Total
        Partial OrderItems table (similar to CartItems table): OrderItemId, OrderId (related to Orders table), ProductId (related to Products table), Quantity, UnitPrice, SalePrice
        Prospective UserCredits table: CreditId, UserId (related to Users table, not shown), Value (+/- value, summed over time to determine saldo), Date

    I'm using ASP.NET MVC and LINQ to SQL on a SQL Server database.
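    A hedged sketch of the ledger approach the prospective UserCredits table already implies: credit movements are only ever appended as signed rows (purchases positive, spends negative) and the balance is derived, which keeps the virtual currency fully decoupled from real-currency order totals. Table and column names follow the question; the types and the OrderId link are assumptions for illustration.

        -- Append-only credit ledger: never update rows, only insert signed movements.
        CREATE TABLE UserCredits (
            CreditId  INT IDENTITY PRIMARY KEY,
            UserId    INT NOT NULL,             -- FK to Users
            OrderId   INT NULL,                 -- FK to Orders when the movement came from one
            Value     DECIMAL(12,2) NOT NULL,   -- positive: credits bought/granted; negative: credits spent
            Date      DATETIME NOT NULL DEFAULT GETDATE()
        );

        -- The current balance (saldo) is just the sum of a user's movements.
        SELECT UserId, SUM(Value) AS Balance
        FROM UserCredits
        GROUP BY UserId;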


  • How reliable is DateTime.UtcNow in Silverlight applications?

    - by Edward Tanguay
    I have a Silverlight application which users will be running in various time zones. The applications load their data from the server at one time, then cache it in IsolatedStorage. When I make changes to the data on the server, I want to be able to change the "last updated time" so that all applications download the newest data the next time they check this date.

    However, I'm a bit confused as to how to handle the time zone issue, since if the server is in New York and the update time is set to 2010-01-01 17:00:00, a client in Seattle comparing it to its local time of 2010-01-01 14:00:00 won't update and will continue to provide old data for three more hours.

    My solution is to always post the update time in UTC, not the server's local time, and then have the Silverlight app check with DateTime.UtcNow. Is this as easy as it sounds, or are there issues with it, e.g. time zones not being set correctly on computers, so that the Silverlight app does not report the correct UTC time? Can anyone say from experience how likely it is that using DateTime.UtcNow like this for cache refreshing will work in all cases? If DateTime.UtcNow is not reliable, I will just use an incremented "DataVersion" integer, but there are other scenarios in which it would be useful to thoroughly understand how to get time zone synchronization right in Silverlight apps.
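    One hedged observation: if the cached value is the server-issued UTC stamp rather than a client-side reading, the client clock never enters the comparison at all, which sidesteps misconfigured client clocks entirely. A sketch with hypothetical helper names invented for illustration:

        // Both stamps originate on the server and are UTC, so time zones and
        // client clock skew drop out of the comparison.
        DateTime serverLastUpdatedUtc = FetchServerLastUpdatedUtc(); // hypothetical: asks the server
        DateTime cachedLastUpdatedUtc = LoadCachedLastUpdatedUtc();  // hypothetical: reads IsolatedStorage

        if (serverLastUpdatedUtc > cachedLastUpdatedUtc)
        {
            RefreshCacheFromServer(); // hypothetical: re-download and re-stamp the cache
        }

    DateTime.UtcNow would then only matter for deciding how often to poll, where modest clock error is harmless.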


  • How do you perform arithmetic calculations on symbols in Scheme/Lisp?

    - by kunjaan
    I need to perform calculations with a symbol. I need to convert a time of the form hh:mm to the minutes passed.

        ;; (get-minutes symbol) -> number
        ;; convert the time in hh:mm to minutes
        ;; (get-minutes 6:19) -> 6 * 60 + 19
        (define (get-minutes time)
          (let* ((a-time (string->list (symbol->string time)))
                 (hour (first a-time))
                 (minutes (third a-time)))
            (+ (* hour 60) minutes)))

    This code is incorrect: after all that conversion I get a character, and cannot perform the calculation on it. Do you have any suggestions? I can't change the input type. Context: the input is a flight schedule, so I cannot alter the data structure.

    Edit: I figured out an ugly solution. Please suggest something better.

        (define (get-minutes time)
          (let* ((a-time (symbol->string time))
                 (hour (string->number (substring a-time 0 1)))
                 (minutes (string->number (substring a-time 2 4))))
            (+ (* hour 60) minutes)))
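    A hedged alternative to the fixed-index version: scan for the colon instead of hard-coding positions, so both 6:19 and 12:05 parse, using only standard Scheme procedures.

        ;; Sketch: locate the colon, then convert each side with string->number.
        (define (get-minutes time)
          (let* ((s (symbol->string time))
                 (len (string-length s))
                 (colon (let loop ((i 0))
                          (if (char=? (string-ref s i) #\:)
                              i
                              (loop (+ i 1))))))
            (+ (* 60 (string->number (substring s 0 colon)))
               (string->number (substring s (+ colon 1) len)))))

        ;; (get-minutes '6:19)  => 379
        ;; (get-minutes '12:05) => 725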


  • Password value on click is not changing the text to 'Password'

    - by Sam
        window.onload = function() {
            var password = document.getElementById('apassword');
            var real = document.getElementById('password');
            var fake = document.createElement('input');
            fake.setAttribute('type', 'text');
            /*fake.setAttribute('id', 'password');*/
            fake.setAttribute('class', 'contact-input contact-right');
            password.appendChild(fake);
            fake.setAttribute('value', 'Password');
            fake.onfocus = function() {
                this.style.display = 'none';
                real.style.display = '';
                real.focus();
            };
            real.style.display = 'none';
            real.setAttribute('value', '');
            real.onblur = function() {
                if (this.value == '') {
                    this.style.display = 'none';
                    fake.style.display = '';
                }
            };
        };

    AND

        <label id="apassword">
            <input type="password" title="Password" id="password" class="contact-input contact-right" name="password" />
        </label>

    What is supposed to happen is that when you click on the input box, it changes from 'Password' to a blank type="password" input box; however, this doesn't happen. This originally worked, but then I had to change some IDs and classes, etc. I'm not sure how to debug scripts, so hopefully someone can help me with that, and also with my question :). Thank you :).
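    Since the script broke after renaming IDs and classes, a hedged first debugging step is to verify the lookups still find their elements; a silent null here stops the whole handler chain. Something along these lines at the top of the handler:

        window.onload = function() {
            var password = document.getElementById('apassword');
            var real = document.getElementById('password');

            // If either is null, the markup's IDs no longer match the script.
            if (!password || !real) {
                alert('placeholder setup: missing #apassword or #password');
                return;
            }
            // ...rest of the fake/real swap from the question...
        };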


  • XSLT: for each node transform, if A=2 and A=1 were both found do this, else do that

    - by Larry
    Example 1:

        <time>
          <timestamp>01:00</timestamp>
          <event>arrived</event>
        </time>
        <time>
          <timestamp>02:00</timestamp>
          <event>left</event>
        </time>

    Example 2:

        <time>
          <timestamp>02:00</timestamp>
          <event>left</event>
        </time>

    The XSLT needs to do:

        FOR EACH node DO:
          IF event=arrived THEN set eventtype=atdestination
          IF event=left is found AND event=arrived is found THEN set new node type=leftdestination
          ELSE set type=left

    XSLT applied to example 1:

        <event>
          <time>01:00</time>
          <type>atdestination</type>
        </event>
        <event>
          <time>02:00</time>
          <type>leftdestination</type>
        </event>

    XSLT applied to example 2:

        <event>
          <time>02:00</time>
          <type>left</type>
        </event>
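    A hedged XSLT 1.0 sketch of that pseudocode, assuming all the time elements live in one document so "was arrived found anywhere" can be tested with a path expression (element names follow the examples; adjust the // paths to your real root):

        <xsl:template match="time">
          <event>
            <time><xsl:value-of select="timestamp"/></time>
            <type>
              <xsl:choose>
                <xsl:when test="event = 'arrived'">atdestination</xsl:when>
                <xsl:when test="event = 'left' and //time/event = 'arrived'">leftdestination</xsl:when>
                <xsl:otherwise>left</xsl:otherwise>
              </xsl:choose>
            </type>
          </event>
        </xsl:template>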


  • Making more recent items more likely to be drawn

    - by bobo
    There are a few hundred book records in the database, and each record has a publish time. On the homepage of the website, I am required to write some code to randomly pick 10 books and put them there. The requirement is that newer books need to have higher chances of being displayed.

    Since the time is an integer, I am thinking of calculating the probability for each book like this:

        Probability of a book being drawn =
            (current time - publish time of the book)
            / ((current time - publish time of book 1) + (current time - publish time of book 2) + ... + (current time - publish time of book n))

    After a book is drawn, the next round of the loop will subtract its (current time - publish time) term from the denominator and recalculate the probability for each of the remaining books; the loop continues until 10 books have been drawn.

    Is this algorithm correct? By the way, the website is written in PHP. Feel free to suggest some PHP code if you have a better algorithm in mind. Many thanks to you all.
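    One caveat worth flagging before any code: (current time - publish time) is larger for older books, so used directly as a weight it would favour the old ones; to favour newer books the weight has to shrink with age, e.g. weight = 1 / (age + 1). A hedged PHP sketch of weighted sampling without replacement, assuming $books is an array of rows with 'publish_time' keys:

        <?php
        // Pick $count books, newer ones more likely; draws are without replacement.
        function pickBooks(array $books, $count = 10) {
            $now = time();
            // Weight shrinks with age, so newer books are favoured.
            foreach ($books as $i => $book) {
                $books[$i]['weight'] = 1.0 / ($now - $book['publish_time'] + 1);
            }
            $picked = array();
            while (count($picked) < $count && count($books) > 0) {
                $total = 0.0;
                foreach ($books as $book) { $total += $book['weight']; }
                $r = mt_rand() / mt_getrandmax() * $total;
                $chosen = null;
                foreach ($books as $i => $book) {
                    $r -= $book['weight'];
                    $chosen = $i;           // remember the last visited key
                    if ($r <= 0) break;     // cumulative sum crossed the draw
                }
                $picked[] = $books[$chosen];
                unset($books[$chosen]);     // remove so it can't be drawn twice
            }
            return $picked;
        }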


  • What is Inversion of Control and why do we need it?

    - by Jalpesh P. Vadgama
    Most programmers need the Inversion of Control pattern in today's complex real-time application world, so I have decided to write a blog post about it. This blog post will explain what Inversion of Control is and why we need it. We are going to take a real-world example so it will be easier to understand.

    The problem: why do we need Inversion of Control?

    Before giving a definition of Inversion of Control, let's take a simple real-world example to see why we need it. Please have a look at the following code.

        public class class1
        {
            private class2 _class2;

            public class1()
            {
                _class2 = new class2();
            }
        }

        public class class2
        {
            // Some implementation of class2
        }

    I have two classes, "class1" and "class2". If you look at the code, you will see that I have created an instance of the class2 class in the class1 constructor, so the "class1" class is dependent on "class2". I think that is the biggest issue in a real-world scenario: if we change the "class2" class, then we might need to change the "class1" class also. The kind of dependency between these two classes is called tight coupling. Tight coupling causes lots of problems in real-world applications, as things tend to change in the future, and then we have to change all the tightly coupled classes that depend on each other. To avoid this kind of issue we need Inversion of Control.

    What is Inversion of Control?

    According to Wikipedia, the following is a definition of Inversion of Control:

    "In software engineering, Inversion of Control (IoC) is an object-oriented programming practice where the object coupling is bound at run time by an assembler object and is typically not known at compile time using static analysis."

    If you read it carefully, it says that we should have object coupling at run time, not at compile time, where it is known which object will be created, which method will be called and which features will be used. We need to use the same classes in such a way that they are not tightly coupled with each other. There are multiple ways to implement Inversion of Control; you can refer to the Wikipedia link to learn about them. In future posts we are going to see the different ways of implementing Inversion of Control.
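    A hedged sketch of where this is heading: invert the dependency by coding class1 against an abstraction and handing it the implementation from the outside (constructor injection), so class2 can change or be swapped without touching class1. The IWorker interface name is invented for illustration.

        public interface IWorker          // abstraction both classes agree on (hypothetical name)
        {
            void DoWork();
        }

        public class Class2 : IWorker
        {
            public void DoWork() { /* some implementation */ }
        }

        public class Class1
        {
            private readonly IWorker _worker;

            // The dependency is supplied from outside instead of new-ed up inside,
            // so Class1 no longer knows or cares about the concrete type.
            public Class1(IWorker worker)
            {
                _worker = worker;
            }
        }

        // The composition root decides the concrete type at run time:
        // var c1 = new Class1(new Class2());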


  • General monitoring for SQL Server Analysis Services using Performance Monitor

    - by Testas
    A recent customer engagement required the setup of a monitoring solution for SSAS. Due to the time restrictions placed upon this, the native Windows Performance Monitor (Perfmon) and SQL Server Profiler monitoring tools were used, as using a third-party tool would have meant the customer providing an additional monitoring server that was not available.

    I wanted to outline the performance monitoring counters that were used to monitor the system on which SSAS was running. Because of the slow query performance that was occurring during certain scenarios, Perfmon was used to establish whether any pressure was being placed on the disk, CPU or memory subsystems when concurrent connections accessed the same query, and Profiler to pinpoint how the query was being managed within SSAS; Profiler I will leave for another blog post.

    This guide is not designed to provide a definitive list of what should be used when monitoring SSAS; different situations may require the addition or removal of counters. However, I hope that it serves as a good basis for starting your monitoring of SSAS. I would also like to acknowledge Chris Webb's awesome chapters from "Expert Cube Development" that also helped shape my monitoring strategy: http://cwebbbi.spaces.live.com/blog/cns!7B84B0F2C239489A!6657.entry

    Simulating connections: To simulate additional connections to the SSAS server whilst monitoring, I used ascmd to run multiple connections against the typical and worst-performing queries that were identified by the customer. A similar script can be downloaded from CodePlex at http://www.codeplex.com/SQLSrvAnalysisSrvcs. File name: ASCMD_StressTestingScripts.zip.

    Performance Monitor: Within Performance Monitor, a counter log was created that contained the list of counters below. The important point to note when running the counter log is that the RUN AS property within the counter log properties should be changed to an account that has rights to the SSAS instance when monitoring MSAS counters. Failure to do so means that the counter log runs under the system account; no errors or warnings are given while running the counter log, and it is not until you need to view the MSAS counters that you discover they were not collected, because the default account has no rights to SSAS. If your connection simulation takes hours, this could prove quite frustrating if not done beforehand :)

    The counters used, as object \ counter (instance), with the justification for each:

    - System \ Processor Queue Length (N/A): indicates how many threads are waiting for execution against the processor. If this counter is consistently higher than around 5 when processor utilization approaches 100%, then this is a good indication that there is more work (active threads) available (ready for execution) than the machine's processors are able to handle.
    - System \ Context Switches/sec (N/A): measures how frequently the processor has to switch from user- to kernel-mode to handle a request from a thread running in user mode. The heavier the workload running on your machine, the higher this counter will generally be, but over the long term the value of this counter should remain fairly constant. If this counter suddenly starts increasing, however, it may be an indication of a malfunctioning device, especially if the Processor\Interrupts/sec\(_Total) counter on your machine shows a similar unexplained increase.
    - Process \ % Processor Time (sqlservr): should definitely be used if Processor\% Processor Time\(_Total) is maxing at 100%, to assess the effect of the SQL Server process on the processor.
    - Process \ % Processor Time (msmdsrv): as above, to assess the effect of the SSAS process on the processor.
    - Process \ Working Set (sqlservr): if the Memory\Available bytes counter is decreasing, this counter can be run to indicate whether the process is consuming larger and larger amounts of RAM. Process(instance)\Working Set measures the size of the working set for each process, which indicates the number of allocated pages the process can address without generating a page fault.
    - Process \ Working Set (msmdsrv): as above, for the SSAS process.
    - Processor \ % Processor Time (_Total and individual cores): measures the total utilization of your processor by all running processes. If multi-processor, be mindful that only an average is provided.
    - Processor \ % Privileged Time (_Total): to see how the OS is handling basic I/O requests. If kernel-mode utilization is high, your machine is likely underpowered, as it's too busy handling basic OS housekeeping functions to be able to effectively run other applications.
    - Processor \ % User Time (_Total): to see how the applications are interacting from a processor perspective; a high percentage utilisation indicates that the server is dealing with too many applications and may require increased hardware or scaling out.
    - Processor \ Interrupts/sec (_Total): the average rate, in incidents per second, at which the processor received and serviced hardware interrupts. Should be consistent over time, but a sudden unexplained increase could indicate a device malfunction, which can be confirmed using the System\Context Switches/sec counter.
    - Memory \ Pages/sec (N/A): indicates the rate at which pages are read from or written to disk to resolve hard page faults. This counter is a primary indicator of the kinds of faults that cause system-wide delays, and the primary counter to watch for indication of possible insufficient RAM to meet your server's needs. A good idea here is to configure a Perfmon alert that triggers when the number of pages per second exceeds 50 per paging disk on your system. You may also want to review the configuration of the page file on the server.
    - Memory \ Available MBytes (N/A): the amount of physical memory available to processes running on the computer. If this counter is greater than 10% of the actual RAM in your machine, then you probably have more than enough RAM. Monitor it regularly to see if any downward trend develops, and set an alert to trigger if it drops below 2% of the installed RAM.
    - Physical Disk \ Disk Transfers/sec (each physical disk): if it goes above 10 disk I/Os per second, then you've got poor response time for your disk.
    - Physical Disk \ % Idle Time (_Total): if Disk Transfers/sec is above 25 disk I/Os per second, use this counter, which measures the percentage of time that your hard disk is idle during the measurement interval; if you see this counter fall below 20%, then you've likely got read/write requests queuing up for a disk which is unable to service them in a timely fashion.
    - Physical Disk \ Disk Queue Length (the OLAP and SQL physical disks): a value that is consistently less than 2 means that the disk system is handling the I/O requests against the physical disk.
    - Network Interface \ Bytes Total/sec (the NIC): should be monitored over a period of time to see if there is an increase or decrease in network utilisation.
    - Network Interface \ Current Bandwidth (the NIC): an estimate of the current bandwidth of the network interface in bits per second (bps).
    - MSAS 2005: Memory \ Memory Limit High KB (N/A): shows (as a percentage) the high memory limit configured for SSAS in C:\Program Files\Microsoft SQL Server\MSAS10.MSSQLSERVER\OLAP\Config\msmdsrv.ini.
    - MSAS 2005: Memory \ Memory Limit Low KB (N/A): shows (as a percentage) the low memory limit configured for SSAS in the same msmdsrv.ini.
    - MSAS 2005: Memory \ Memory Usage KB (N/A): displays the memory usage of the server process.
    - MSAS 2005: Memory \ File Store KB (N/A): displays the amount of memory that is reserved for the cache. Note that if the total memory limit in msmdsrv.ini is set to 0, no memory is reserved for the cache.
    - MSAS 2005: Storage Engine Query \ Queries from Cache Direct/sec (N/A): displays the rate of queries answered directly from the cache.
    - MSAS 2005: Storage Engine Query \ Queries from Cache Filtered/sec (N/A): displays the rate of queries answered by filtering an existing cache entry.
    - MSAS 2005: Storage Engine Query \ Queries from File/sec (N/A): displays the rate of queries answered from files.
    - MSAS 2005: Storage Engine Query \ Average time/query (N/A): displays the average time of a query.
    - MSAS 2005: Connection \ Current connections (N/A): displays the number of connections against the SSAS instance.
    - MSAS 2005: Connection \ Requests/sec (N/A): displays the rate of query requests per second.
    - MSAS 2005: Locks \ Current Lock Waits (N/A): displays the number of connections waiting on a lock.
    - MSAS 2005: Threads \ Query Pool Job Queue Length (N/A): the number of queries in the job queue.
    - MSAS 2005: Proc Aggregations \ Temp file bytes written/sec (N/A): shows the number of bytes of data processed in a temporary file.
    - MSAS 2005: Proc Aggregations \ Temp file rows written/sec (N/A): shows the number of rows of data processed in a temporary file.
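    As an aside, the same counter log can be scripted rather than clicked together in the Perfmon UI. A hedged logman sketch, with the counter list abbreviated to a few entries from the table above; remember the collector must run as an account with rights to the SSAS instance for the MSAS counters to be collected:

        logman create counter SSASBaseline -si 15 -o C:\PerfLogs\SSASBaseline ^
          -c "\Processor(_Total)\% Processor Time" ^
             "\System\Processor Queue Length" ^
             "\Memory\Available MBytes" ^
             "\Process(msmdsrv)\% Processor Time" ^
             "\MSAS 2005:Memory\Memory Usage KB"
        logman start SSASBaseline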

    Read the article

  • Fast Data: Go Big. Go Fast.

    - by J Swaroop
    Cross-posting Dain Hansen's excellent recap of the Big Data/Fast Data announcement during OOW: For those of you who may have missed it, today’s second full day of Oracle OpenWorld 2012 started with a rumpus. Joe Tucci, from EMC, outlined the human face of big data with real examples of how big data is transforming our world. And no, not the usual tried-and-true weblog examples, but real stories about taxi cab drivers in Singapore using big data to better optimize their routes, as well as folks just trying to get a better haircut. Next we heard from Thomas Kurian, who talked at length about the important platform characteristics of Oracle’s Cloud and, more specifically, Oracle’s expanded Cloud Services portfolio. Especially interesting to our integration customers is the messaging support for Oracle’s Cloud applications. What this means is that Oracle’s Cloud applications now have a lightweight integration fabric that on-premise applications can communicate with via REST APIs using Oracle SOA Suite. It’s an important element of our strategy at Oracle, supporting the idea that whether your requirements are for private or public cloud, Oracle has a solution for all of your applications, and we give you more deployment choice than any vendor. If this wasn’t enough to get the juices flowing, later that morning we heard from Hasan Rizvi, who outlined in his Fusion Middleware session the four most important enterprise imperatives: Social, Mobile, Cloud, and a brand new one: Fast Data. Today, Rizvi made an important step in the definition of this term, explaining that he believes it’s a convergence of four essential technology elements:
    - Event processing for event filtering and business rules – with Oracle Event Processing
    - Data transformation and loading – with Oracle Data Integrator
    - Real-time replication and integration – with Oracle GoldenGate
    - Analytics and data discovery – with Oracle Business Intelligence
    Each of these four elements can be considered (and architected) together on a single integrated platform that can help customers integrate any type of data (structured, semi-structured), leveraging new styles of big data technologies (MapReduce, HDFS, Hive, NoSQL) to process more volume and variety of data at a faster velocity with greater results. Fast data processing (and especially real-time processing) has always been our credo at Oracle with each of these products in Fusion Middleware. For example, Oracle GoldenGate continues to be made even faster: the recent 11g R2 release of Oracle GoldenGate brings even greater optimization for Oracle Database with Integrated Capture, as well as some new heterogeneity capabilities. With Oracle Data Integrator with Big Data Connectors, we’re seeing much improved performance by running MapReduce transformations natively on Hadoop systems. And with Oracle Event Processing we’re seeing some remarkable performance with customers like NTT Docomo. Check out their upcoming session at Oracle OpenWorld on Wednesday to hear more about how this customer is using event processing and big data together. If you missed any of these sessions and keynotes, not to worry: there are on-demand versions available on the Oracle OpenWorld website. You can also check out our upcoming webcast, where we will outline some of these new breakthroughs in data integration technologies for Big Data, Cloud, and real-time in more detail.

    Read the article

  • Windows 7 - traceroute hop with high latency! [closed]

    - by Mac
    I've been experiencing this problem for quite a while, and it's quite frustrating. I'll do a traceroute to www.l.google.com, for example. This is the result (please note: I have replaced some personal information with placeholder text; i.e. ISP.IP is in reality an actual IP address, and ISPNAME replaces the actual ISP name):

    Tracing route to www.l.google.com [173.194.34.212] over a maximum of 30 hops:

      1     1 ms     1 ms    <1 ms  192.168.1.1
      2     9 ms     8 ms    10 ms  ISP.EXCHANGE.NAME [ISP.IP.172.205]
      3   161 ms   171 ms   177 ms  host-ISP.IP.215.246.ISPNAME.net [ISP.IP.215.246]
      4    12 ms     9 ms    10 ms  host-ISP.IP.215.246.ISPNAME.net [ISP.IP.215.246]
      5    10 ms     9 ms    17 ms  host-ISP.IP.224.165.ISPNAME.net [ISP.IP.224.165]
      6    10 ms     9 ms    10 ms  10.42.0.3
      7     9 ms     9 ms    10 ms  host-ISP.IP.202.129.ISPNAME.net [ISP.IP.202.129]
      8    10 ms     9 ms     9 ms  host-ISP.IP.209.33.ISPNAME.net [ISP.IP.209.33]
      9    77 ms   129 ms   164 ms  host-ISP.IP.198.162.ISPNAME.net [ISP.IP.198.162]
     10    43 ms    42 ms    43 ms  72.14.212.13
     11    42 ms    42 ms    42 ms  209.85.252.36
     12    59 ms    59 ms    59 ms  209.85.241.210
     13    60 ms    76 ms    68 ms  72.14.237.124
     14    59 ms    59 ms    58 ms  mad01s08-in-f20.1e100.net [173.194.34.212]

    Trace complete.

    Notice that there is a spike on the 3rd hop, but also notice that the 3rd and 4th hops are to the exact same destination. Furthermore, when I ping the offending hop separately, I get the low latency I would expect from that server:

    Pinging ISP.IP.215.246 with 32 bytes of data:
    Reply from ISP.IP.215.246: bytes=32 time=10ms TTL=253
    Reply from ISP.IP.215.246: bytes=32 time=9ms TTL=253
    Reply from ISP.IP.215.246: bytes=32 time=12ms TTL=253
    Reply from ISP.IP.215.246: bytes=32 time=9ms TTL=253
    Reply from ISP.IP.215.246: bytes=32 time=10ms TTL=253
    Reply from ISP.IP.215.246: bytes=32 time=9ms TTL=253
    Reply from ISP.IP.215.246: bytes=32 time=10ms TTL=253
    Reply from ISP.IP.215.246: bytes=32 time=9ms TTL=253
    Reply from ISP.IP.215.246: bytes=32 time=10ms TTL=253
    Reply from ISP.IP.215.246: bytes=32 time=10ms TTL=253

    Ping statistics for ISP.IP.215.246:
        Packets: Sent = 10, Received = 10, Lost = 0 (0% loss),
    Approximate round trip times in milli-seconds:
        Minimum = 9ms, Maximum = 12ms, Average = 9ms

    I'm baffled as to why or how this is happening, and it seems to "fix itself" at random times. Here is an example of where it was working as expected: http://i.imgur.com/bysno.png Notice how many fewer hops were taken. Please note that all the posted results occurred within 10 minutes of testing. I've tried contacting my ISP, and they seem clueless; in their eyes, as long as "the download speed is not slow", they're doing everything right. Any insight would be very much appreciated, and thanks in advance!
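    One way to narrow this down is to sample each hop directly over time and compare the results with what traceroute reports. The sketch below is a rough illustration, assuming a Windows machine and English-language ping output; the addresses in HOPS are placeholders copied from the trace above, so substitute your real ones. A hop that looks slow in traceroute but answers direct pings quickly is usually just deprioritizing ICMP TTL-expired generation on its control plane, which does not affect traffic passing through it.

    import re
    import subprocess

    # Placeholder hop addresses taken from the trace above; substitute
    # the real addresses from your own traceroute output.
    HOPS = ["192.168.1.1", "ISP.IP.215.246", "ISP.IP.198.162", "173.194.34.212"]

    def average_ping_ms(host, count=10):
        """Ping `host` directly and pull the average RTT out of Windows ping output."""
        out = subprocess.run(["ping", "-n", str(count), host],
                             capture_output=True, text=True).stdout
        match = re.search(r"Average = (\d+)ms", out)
        return int(match.group(1)) if match else None

    # A hop whose direct average stays low while traceroute shows spikes
    # is rate-limiting ICMP replies, not actually delaying your traffic.
    for hop in HOPS:
        print(hop, average_ping_ms(hop))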

    Read the article

  • Secret of SQL Trace Duration Column

    - by Dan Guzman
    Why would a trace of long-running queries not show all queries that exceeded the specified duration filter?  We have a server-side SQL Trace that includes RPC:Completed and SQL:BatchCompleted events with a filter on Duration >= 100000.  Nearly all of the queries on this busy OLTP server run in under this 100 millisecond threshold, so any that appear in the trace are candidates for root cause analysis and/or performance tuning opportunities. After an application experienced query timeouts, the DBA looked at the trace data to corroborate the problem.  Surprisingly, he found no long-running queries in the trace from the application that experienced the timeouts, even though the application’s error log clearly showed detail of the problem (query text, duration, start time, etc.).  The trace did show, however, that there were hundreds of other long-running queries from different applications during the problem timeframe.  We later determined those queries were blocked by a large UPDATE query against a critical table that was inadvertently run during this busy period. So why didn’t the trace include all of the long-running queries?  The reason is that the SQL Trace event duration doesn’t include the time a request was queued while awaiting a worker thread.  Remember that the server was under considerable stress at the time due to the severe blocking episode.  Most of the worker threads were in use by blocked queries, and new requests were queued awaiting a worker to free up (a DMV query on the DAC connection will show this queuing: “SELECT scheduler_id, work_queue_count FROM sys.dm_os_schedulers;”).  Technically, those queued requests had not started.  As worker threads became available, queries were dequeued and completed quickly.  These weren’t included in the trace because the duration was under the 100ms duration filter.  The duration reflected the time it took to actually run the query but didn’t include the time spent queued waiting for a worker thread. The important point here is that duration is not end-to-end response time.  The duration of RPC:Completed and SQL:BatchCompleted events doesn’t include time before a worker thread is assigned, nor does it include the time required to return the last result buffer to the client.  In other words, duration only includes time after the worker thread is assigned until the last buffer is filled.  But be aware that duration does include the time needed to return intermediate result set buffers back to the client, which is a factor when large query results are returned.  Clients that are slow in consuming result sets can increase the duration value reported by the trace “completed” events.
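    The sys.dm_os_schedulers check mentioned above is easy to automate. Here is a minimal sketch, assuming a SQL Server ODBC driver is installed and that you connect over the DAC (the admin: prefix), since on a worker-starved server an ordinary connection may itself end up queued; the server name is a placeholder.

    import pyodbc

    # Placeholder connection string: adjust the driver name and server.
    # The "admin:" prefix requests the Dedicated Admin Connection.
    CONN_STR = (
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=admin:YourServer;"
        "Trusted_Connection=yes;"
    )

    QUERY = """
        SELECT scheduler_id, work_queue_count
        FROM sys.dm_os_schedulers
        WHERE status = 'VISIBLE ONLINE';
    """

    conn = pyodbc.connect(CONN_STR)
    for scheduler_id, queued in conn.execute(QUERY):
        # A sustained non-zero work_queue_count means requests are waiting
        # for a worker thread: time that never shows up in trace Duration.
        print(f"scheduler {scheduler_id}: {queued} queued requests")
    conn.close()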

    Read the article

  • WebCenter Customer Spotlight: Global Village Telecom Ltda

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Solution Summary
    Global Village Telecom Ltda. (GVT) is a leading Brazilian telecommunications company, developing solutions and providing services for corporate and end users. GVT is located in Curitiba, Brazil, employs 6,000 people and has an annual revenue of around US$1 billion. GVT's business objectives were to improve corporate communications, accelerate internal information flow, provide continuous access to all business files, and enable the company’s leadership to provide information to all departments in real time. GVT implemented Oracle WebCenter Content to centralize the company's content, and built a portal to share and find content in real time. Oracle WebCenter Content enabled GVT to quickly and efficiently integrate communication among all company employees, ensuring that GVT maintains a competitive edge in the market. Human Resources reduced the time required for issuing internal statements to all staff from three weeks to one day.

    Company Overview
    Global Village Telecom Ltda. (GVT) is a leading telecommunications company, developing solutions and providing services for corporate and end users. The company offers diverse innovative products and advanced solutions in conventional fixed telephone communications, data transmission, high-speed broadband internet services, and voice over IP (VoIP) services for all market segments. GVT is located in Curitiba, Brazil, employs 6,000 people and has an annual revenue of around US$1 billion.

    Business Challenges
    GVT's business objectives were to improve corporate communications, accelerate internal information flow, provide continuous access to all business files, and enable the company’s leadership to provide information to all departments in real time.

    Solution Deployed
    GVT worked with the Oracle partner IT7 to deploy Oracle WebCenter Content to securely centralize the company's content, such as growth indicators, spreadsheets, and corporate and descriptive project schedules. The solution enabled real-time information sharing through the development of Click GVT, a portal that currently receives 100,000 monthly impressions from employee searches.

    Business Results
    GVT gained a competitive edge in the communications market by accelerating internal information flow, streamlining and standardizing content, and enabling real-time information sharing and discovery. Human Resources reduced the time required for issuing internal statements to all staff from three weeks to one day.

    “The competitive nature of the telecommunication industry demands rapid information in the internal flow of the company. Oracle WebCenter Content enabled us to quickly and efficiently integrate communication among all company employees—ensuring that we maintain a competitive edge in the market.” Marcel Mendes Filho, Systems Manager, Global Village Telecom Ltda.

    Additional Information
    Global Village Telecom Ltda Customer Snapshot Oracle WebCenter Content

    Read the article

  • Advice for a distracted, unhappy, recently graduated programmer? [closed]

    - by Re-Invent
    I graduated 4 months ago. I had offers from a few good places to work at. At the same time I wanted to stick to building a small software business of my own; I still have some ideas with good potential, and some half-done projects frozen in my github. But due to social pressures, I chose a job. The pay is great, but I am only half-passionate about it: a small team of smart folks building a useful product, working out contracts across the world. I've started finding it extremely boring. Boring to the extent that I skip 2-3 days a week altogether, not doing work. Neither do I spend that time progressing any of my own projects. Yes, I feel stupid at the way I'm wasting time, but I don't understand exactly why it's happening. It's as if all the excitement has been drained. What can I do about it? Long version: School - I was in third standard. Only students from 6th grade up had access to the computer labs. I once peeked into the lab through the little door opening. No hard disks; MS-DOS on 5 1/4 inch floppies. I asked a senior student to play some sound in BASIC. He used PLAY to compose a tune. Boy! I was so excited, I was jumping from within. Back home, I asked my brother to teach me some programming. We bought a book, "MODERN All About GW-BASIC for Schools & Colleges". The book had everything, right from printing, to taking input, file I/O, game programming, machine-level support, etc. I was in 6th standard when I wrote my first game - a wheel of fortune, rotating the wheel by manipulating the 16-color palette's definition. Got internet soon, got hooked on the QuickBasic programming community. Made some more games, "007 in Danger" and "Car Crush 2", for submission to the allbasiccode archives. I was extremely excited about all this. My interests now swayed into "hacking" (computer security). Taught myself some Perl, found it annoying; learnt PHP and a bit of SQL. Also taught myself Visual Basic one winter and wrote a Pacman clone with DirectX. By the time I was in 10th standard, I had created some evil tools using Visual Basic, PHP and MySQL, and eventually landed myself an unpaid side-job at a government facility, building evil tools for them. It was a dream come true for crackers of that time. And so was I, still very excited. Things changed soon; the last two years of school were not so great, as I was balancing preps for college, work at the govt. facility, and studies for school at the same time. College - College was the opposite of all I had wished it to be. I imagined it to be a place where I'd spend my 4 years building something awesome. It was rather an epitome of rote learning: attendance, rules, busy schedules, a ban on personal laptops, hardly any hackers surrounding you, and shit like that. We had to take permission to even introduce some cultural/creative activities into our annual schedule. The labs wouldn't be open on weekends because the lab employees had to have their leaves. Yes, a horrible place for someone like me. I still managed to pull off a project with a friend over 2 months. Showed it to people high in the academic hierarchy. They were immensely impressed, and we proposed to allow personal computers for students. They made up half-assed reasons and didn't agree. We felt frustrated. And so on. I still managed to teach myself new languages, do new projects of my own, do an internship at the same govt. facility, start a small business for some time, give a talk at a conference I'm passionate about, win game-dev and hacking contests at most respected colleges, solve a good deal of programming contest problems, etc.
    At the same time I was not content with all these restrictions, the great emphasis on rote learning, and the sheer wastage of time due to college. I never felt I was overdoing it, but now I feel I burnt myself out. During my last days at college, I did an internship at a bigco. While I spent my time building prototypes for a certain LBS, the other interns around me, even a good friend, were just skipping time. I thought that maybe, in a few weeks, he would put in some serious effort at the work assigned to him, but all he did was find creative ways to skip work, hide his face from the manager, engage people in talks if they tried to question his progress, etc. I tried a few times to get him on track, but it seems all he wanted was to "not work hard at all and still reap the fruits". I don't know how others take such people, but I find their vicinity very, very poisonous to one's own motivation and productivity. On top of that, where I come from, HRs don't give much value to what you have done over the past 4 years. So towards the end of our internship, we were all offered work at the bigco, but the slacker, even after not writing more than 200 lines of code, was made a much better offer. I felt enraged instantly: "Is this how the corporate world treats someone who does fruitful, if not extraordinary, work for them for 6 months?" Yes, I did try to negotiate and debate. The bigcos seem blind due to departmentalization of responsibilities and many layers of management. I decided not to stay in touch with any characters of that depressing play. Probably the busy time I had at college, ignoring friends, ignoring fun, and squeezing every bit of free time for myself, is also responsible. Probably this is what has drained all my willingness to work for anyone. I find my day job boring; at the same time I wish to keep it for financial reasons. I feel a bit burnt out, unsatisfied, and at the same time I have an urge to quit working for someone else and start finishing my frozen side-projects (which may be profitable). Though I haven't got much in savings to cover food, office, internet bills, etc. I still have my day job, but I don't find it very interesting; even though the pay is higher than the slacker's, I don't find money to be a great motivator here. I keep comparing myself to my past version. I wonder how to get rid of this and reboot myself back to the way I was in school days - excited about it all, tinkering, building, learning new things daily, and NOT BORED?

    Read the article

  • WebCenter Customer Spotlight: Regency Centers Corporation

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Solution Summary
    Regency Centers Corporation, based in Jacksonville, FL, is a leading national owner, operator, and developer of grocery-anchored and community shopping centers. Regency grew rapidly over much of the last decade. To keep up with the monthly and yearly administrative processes required to manage thousands of tenants, including reconciling yearly pass-through expenses, the customer upgraded to Oracle’s JD Edwards EnterpriseOne Version 9.0 and deployed Oracle WebCenter Imaging and Process Management and Oracle BI Publisher to streamline invoice processing and reporting. Using Oracle WebCenter Imaging, Regency accelerated and improved vendor invoice accuracy, which increases process integrity by identifying potential duplicate bills while enabling rapid approval of electronic invoice documents.

    Company Overview
    Regency Centers Corporation, based in Jacksonville, FL, is a leading national owner, operator, and developer of grocery-anchored and community shopping centers. The company owns 367 centers, totaling nearly 50 million square feet, located in top markets throughout the United States. Founded in 1963 and operating as a fully integrated real estate company, Regency is a qualified real estate investment trust that is self-administered and self-managed, operating from 17 regional offices around the country.

    Business Challenges
    - Ensure continued support of vital business applications that drive the real estate developer’s key business processes, including property management and tenant payment processing
    - Streamline year-end expense recognition and calculation, enabling faster tenant billing
    - Move to a Web-based platform to deliver greater mobility and convenience to employees
    - Minimize system customizations to reduce IT management costs and burden moving forward

    Solution Deployed
    Regency Centers Corporation worked with the Oracle partner ICS to upgrade to Oracle’s JD Edwards EnterpriseOne Version 9.0, migrating to a more user-friendly, Web-based platform and realizing numerous new efficiencies in property management and tenant payment processing. They accelerated and improved vendor invoice accuracy with Oracle WebCenter Imaging, which increases process integrity by identifying potential duplicate bills while enabling rapid approval of electronic invoice documents.

    Business Results
    - Enabled faster and more accurate tenant billing for year-end expenses, accelerating collections of millions of dollars in revenue
    - Gained full audit and drill-down capabilities that facilitate understanding various aspects of calculations for expense participation generation
    - Increased process integrity by identifying potential duplicate bills while enabling rapid approval of electronic invoice documents
    - Helped to ensure on-time payments to hundreds of vendors, including contractors and utilities

    "We have realized numerous efficiencies with Oracle’s JD Edwards EnterpriseOne 9.0, particularly around tenant billings. It accelerates our year-end expense reconciliation process and enables us to create and process billings more quickly.” James Chiang, Vice President of Real Estate Accounting, Regency Centers Corporation

    Additional Information
    Regency Centers Corporation Customer Snapshot Oracle WebCenter Imaging JD Edwards EnterpriseOne Financials 9.0 JD Edwards EnterpriseOne Project Costing JD Edwards EnterpriseOne Real Estate Management Oracle Business Intelligence Publisher Oracle Essbase

    Read the article

  • Avoid random disk names

    - by BarsMonster
    Hi! I have Ubuntu Server 10.04, with 1 system disk and 5 disks in a RAID-5 configuration. The problem is that the names of these disks change from time to time; they get randomly mixed (sda, b, c, d, e, f), so the system disk might be sda at one time and sdc at another, for example. Is there any way to fix the drive names, so that even if a drive is disconnected, for example, no other drive can occupy its letter, based on disk UUID or something?
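    The kernel's sdX names are assigned by probe order and are not stable, but udev already maintains stable aliases under /dev/disk (by-uuid, by-id, by-label); referencing those in /etc/fstab or mdadm.conf sidesteps the shuffle entirely. A small sketch to see the mapping on the current boot (the paths are standard on Ubuntu; the output naturally depends on your disks):

    import os

    BY_UUID = "/dev/disk/by-uuid"

    # Each symlink here is a stable name; its target is whichever sdX
    # device the kernel happened to assign on this boot.
    for uuid in sorted(os.listdir(BY_UUID)):
        target = os.path.realpath(os.path.join(BY_UUID, uuid))
        print(f"{uuid} -> {target}")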

    Read the article

  • How to calculate continuous motion with angular velocity in 2d

    - by Rulk
    I'm really new to physics. Maybe someone would be able to help me solve the following problem: I need to calculate the position of an agent on the plane (2D) at the next time step, where the time step is large (20+ seconds). What I know about the agent's motion:
    - Initial position
    - Direction (normalised vector)
    - Velocity (linear function of time) - the object always moves along its direction
    - Angular velocity (linear function of time)
    Optional:
    - External force direction
    - External force (linear function of time)
    Running a discrete simulation with t→0 is not an option.
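    A sketch of one approach: with angular velocity linear in time, the heading is θ(t) = θ0 + ω0·t + k·t²/2, and integrating cos θ(t) and sin θ(t) leads to Fresnel integrals with no elementary closed form. A practical compromise is to sub-step the integration internally while still committing only one update per 20-second step. The function name and the midpoint scheme below are illustrative choices, not a known library API:

    import math

    def advance(pos, heading, v0, v_rate, w0, w_rate, dt, substeps=200):
        """Advance a 2D position over one large step dt, where speed
        v(t) = v0 + v_rate*t and angular velocity w(t) = w0 + w_rate*t.
        heading is the initial direction angle in radians."""
        x, y = pos
        h = dt / substeps
        t = 0.0
        theta = heading
        for _ in range(substeps):
            tm = t + h / 2                   # sample v and w at the midpoint
            v = v0 + v_rate * tm
            w = w0 + w_rate * tm
            theta_mid = theta + w * h / 2    # heading at the sub-step midpoint
            x += v * math.cos(theta_mid) * h
            y += v * math.sin(theta_mid) * h
            theta += w * h
            t += h
        return (x, y), theta

    # Example: one 20 s step at a constant 5 m/s, turning at 0.1 rad/s.
    print(advance((0.0, 0.0), 0.0, 5.0, 0.0, 0.1, 0.0, 20.0))

    The optional external force could be folded in the same way, by accumulating its acceleration into a separate velocity vector inside the loop.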

    Read the article

  • What to do with DATETIMEOFFSET?

    - by GavinPayneUK
    Someone asked me today if the time zone of a specific instance of SQL Server could be changed to match the country which that instance served. Some database products allow you to set this at an engine level, which made me wonder if your data’s time fields “move” with the time zone setting of the database server instance.  If something was logged as happening at 9am Paris time, it happened then; if I change my database server parameters, did that event now happen at 9am New York time?  Perhaps...(read more)
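    This is exactly the ambiguity that offset-aware types remove: a value that carries its UTC offset pins an instant, so no server-level setting can move it. A small illustration of the idea in Python (the analogy to SQL Server's DATETIMEOFFSET is mine, not from the post, which is truncated above):

    from datetime import datetime
    from zoneinfo import ZoneInfo  # standard library since Python 3.9

    # 9am Paris time, stored with its offset: this pins the instant.
    paris_event = datetime(2012, 10, 1, 9, 0, tzinfo=ZoneInfo("Europe/Paris"))

    # Rendering it in New York changes the wall clock, not the instant.
    ny_view = paris_event.astimezone(ZoneInfo("America/New_York"))
    print(ny_view)                 # 2012-10-01 03:00:00-04:00
    print(paris_event == ny_view)  # True: the same moment in time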

    Read the article
