Search Results

Search found 71854 results on 2875 pages for 'build time'.


  • Fixed strptime exception with a thread lock, but it slows down the program

    - by eWizardII
    I have the following code, which runs inside a thread (the full code is here - https://github.com/eWizardII/homobabel/blob/master/lovebird.py):

        for null in range(0, 1):
            while True:
                try:
                    with open('C:/Twitter/tweets/user_0_' + str(self.id) + '.json', mode='w') as f:
                        f.write('[')
                        threadLock.acquire()
                        for i, seed in enumerate(Cursor(api.user_timeline, screen_name=self.ip).items(200)):
                            if i > 0:
                                f.write(", ")
                            f.write("%s" % (json.dumps(dict(sc=seed.author.statuses_count))))
                            j = j + 1
                        threadLock.release()
                        f.write("]")
                except tweepy.TweepError, e:
                    with open('C:/Twitter/tweets/user_0_' + str(self.id) + '.json', mode='a') as f:
                        f.write("]")
                    print "ERROR on " + str(self.ip) + " Reason: ", e
                    with open('C:/Twitter/errors_0.txt', mode='a') as a_file:
                        new_ii = "ERROR on " + str(self.ip) + " Reason: " + str(e) + "\n"
                        a_file.write(new_ii)
                    break

    Now, without the thread lock, I get the following error:

        Exception in thread Thread-117:
        Traceback (most recent call last):
          File "C:\Python27\lib\threading.py", line 530, in __bootstrap_inner
            self.run()
          File "C:/Twitter/homobabel/lovebird.py", line 62, in run
            for i, seed in enumerate(Cursor(api.user_timeline,screen_name=self.ip).items(200)):
          File "build\bdist.win-amd64\egg\tweepy\cursor.py", line 110, in next
            self.current_page = self.page_iterator.next()
          File "build\bdist.win-amd64\egg\tweepy\cursor.py", line 85, in next
            items = self.method(page=self.current_page, *self.args, **self.kargs)
          File "build\bdist.win-amd64\egg\tweepy\binder.py", line 196, in _call
            return method.execute()
          File "build\bdist.win-amd64\egg\tweepy\binder.py", line 182, in execute
            result = self.api.parser.parse(self, resp.read())
          File "build\bdist.win-amd64\egg\tweepy\parsers.py", line 75, in parse
            result = model.parse_list(method.api, json)
          File "build\bdist.win-amd64\egg\tweepy\models.py", line 38, in parse_list
            results.append(cls.parse(api, obj))
          File "build\bdist.win-amd64\egg\tweepy\models.py", line 49, in parse
            user = User.parse(api, v)
          File "build\bdist.win-amd64\egg\tweepy\models.py", line 86, in parse
            setattr(user, k, parse_datetime(v))
          File "build\bdist.win-amd64\egg\tweepy\utils.py", line 17, in parse_datetime
            date = datetime(*(time.strptime(string, '%a %b %d %H:%M:%S +0000 %Y')[0:6]))
          File "C:\Python27\lib\_strptime.py", line 454, in _strptime_time
            return _strptime(data_string, format)[0]
          File "C:\Python27\lib\_strptime.py", line 300, in _strptime
            _TimeRE_cache = TimeRE()
          File "C:\Python27\lib\_strptime.py", line 188, in __init__
            self.locale_time = LocaleTime()
          File "C:\Python27\lib\_strptime.py", line 77, in __init__
            raise ValueError("locale changed during initialization")
        ValueError: locale changed during initialization

    The problem is that with the thread lock on, each thread basically runs serially, and each loop takes way too long for there to be any advantage to using threads anymore. So if there isn't a way to get rid of the thread lock, is there a way to make the for loop inside the try statement run faster?
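    A common workaround for this specific failure, rather than serializing the whole Cursor loop behind a lock, is to call strptime once in the main thread before any worker threads start: the ValueError comes from Python 2.7 building strptime's locale-dependent cache lazily, and that first-time initialization is not thread-safe. A minimal sketch of that idea (the worker body is a placeholder, not the original fetching code):

        import time
        import threading

        # Warm up strptime's internal cache once, in the main thread, before any
        # workers start. After this, concurrent strptime calls made inside the
        # tweepy Cursor loop no longer race on the cache initialization.
        time.strptime("Mon Jan 01 00:00:00 +0000 2001", "%a %b %d %H:%M:%S +0000 %Y")

        def worker(idx):
            # per-thread Twitter fetching would go here, without threadLock
            # around the Cursor loop
            pass

        threads = [threading.Thread(target=worker, args=(i,)) for i in range(5)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()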

    Read the article

  • Using enums or a set of classes when I know I have a finite set of different options?

    - by devoured elysium
    Let's say I have defined the following class:

        public abstract class Event
        {
            public DateTime Time { get; protected set; }

            protected Event(DateTime time)
            {
                Time = time;
            }
        }

    What would you prefer between this:

        public class AsleepEvent : Event
        {
            public AsleepEvent(DateTime time) : base(time) { }
        }

        public class AwakeEvent : Event
        {
            public AwakeEvent(DateTime time) : base(time) { }
        }

    and this:

        public enum StateEventType { NowAwake, NowAsleep }

        public class StateEvent : Event
        {
            protected StateEventType stateType;

            public StateEvent(DateTime time, StateEventType stateType) : base(time)
            {
                this.stateType = stateType;
            }
        }

    and why? I am generally more inclined to the first option, but I can't explain why. Is it totally the same, or are there any advantages to using one instead of the other? Maybe with the first method it's easier to add more "states", although in this case I am 100% sure I will only ever want two states: now awake and now asleep (they signal the moments when one wakes up and when one falls asleep).
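    The trade-off is largely language-independent: one subclass per state makes it easy to attach state-specific behavior later, while a single class plus an enum keeps a closed, two-value state as plain data. Purely as an illustration (not C#), a rough Python analogue of the two options:

        from enum import Enum
        from datetime import datetime

        class Event:
            def __init__(self, time: datetime):
                self.time = time

        # Option 1: one subclass per state; each state can grow its own behavior.
        class AsleepEvent(Event):
            pass

        class AwakeEvent(Event):
            pass

        # Option 2: a single event class plus an enum; simpler when the states
        # never need distinct behavior.
        class StateType(Enum):
            NOW_AWAKE = 1
            NOW_ASLEEP = 2

        class StateEvent(Event):
            def __init__(self, time: datetime, state_type: StateType):
                super().__init__(time)
                self.state_type = state_type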

    Read the article

  • How to override loading a TImage from the object inspector (at run-time)?

    - by Mawg
    Further to my previous question, which did not get a useful answer despite a bounty, I will try rephrasing the question. Basically, when the user clicks the ellipsis in the object inspector, Delphi opens a file/open dialog. I want to replace this handling with my own, so that I can save the image's path. I would have expected that all I need to do is to derive a class from TImage and override the Assign() function, as in the following code. However, when I do the assign function is never called. So, it looks like I need to override something else, but what? unit my_Image; interface uses Classes, ExtCtrls, Jpeg, Graphics; type Tmy_Image = class(Timage) private FPicture : TPicture; protected procedure OnChange(Sender: TObject); public { Public declarations } Constructor Create(AOwner: TComponent); override; procedure SetPicture(picture : TPicture); procedure Assign(Source: TPersistent); override; published { Published declarations - available in the Object Inspector at design-time } property Picture : TPicture read FPicture write SetPicture; end; // of class Tmy_Image() procedure Register; implementation uses Controls, Dialogs; procedure Register; begin RegisterComponents('Standard', [Tmy_Image]); end; Constructor Tmy_Image.Create(AOwner: TComponent); begin inherited; // Call the parent Create method Hint := 'Add an image from a file|Add an image from a file'; // Tooltip | status bar text AutoSize := True; // Control resizes when contents change (new image is loaded) Height := 104; Width := 104; FPicture := TPicture.Create(); self.Picture.Bitmap.LoadFromResourceName(hInstance, 'picture_poperty_bmp'); end; procedure Tmy_Image.OnChange(Sender: TObject); begin Constraints.MaxHeight := Picture.Height; Constraints.MaxWidth := Picture.Width; Self.Height := Picture.Height; Self.Width := Picture.Width; end; procedure Tmy_Image.SetPicture(picture : TPicture); begin MessageDlg('Tmy_Image.SetPicture', mtWarning, [mbOK], 0); // never called end; procedure Tmy_Image.Assign(Source: TPersistent); begin MessageDlg('Tmy_Image.Assign', mtWarning, [mbOK], 0); // never called end; end.

    Read the article

  • Unable to link to OpenGL libraries? DOS / MSVC

    - by Mark
    Is there something wrong with this link.exe command line? OpenGL32.lib and Glu32.lib are found at both of the LIBPATH directories. Is it possible the libraries are somehow incompatible? Is there a way to have the link.exe say that instead of unresolved external symbol? Googling shows that this error usually means the libraries are not found, but they are there. E:\mvs90\VC\BIN\link.exe /DLL /nologo /INCREMENTAL:no /DEBUG /pdb:None /LIBPATH:E:\code\python\python\py26\libs /LIBPATH:E:\code\python\python\py26\PCbuild opengl32.lib glu32.lib /EXPORT:init_rabbyt build\temp.win32-2.6-pydebug\Debug\rabbyt/rabbyt._rabbyt.obj /OUT:build\lib.win32-2.6-pydebug\rabbyt\_rabbyt_d.pyd /IMPLIB:build\temp.win32-2.6-pydebug\Debug\rabbyt\_rabbyt_d.lib /MANIFESTFILE:build\temp.win32-2.6-pydebug\Debug\rabbyt\_rabbyt_d.pyd.manifest Creating library build\temp.win32-2.6-pydebug\Debug\rabbyt\_rabbyt_d.lib and object build\temp.win32-2.6-pydebug\Debug\rabbyt\_rabbyt_d.exp rabbyt._rabbyt.obj : error LNK2019: unresolved external symbol __imp__glOrtho re ferenced in function ___pyx_f_6rabbyt_7_rabbyt_set_viewport Directory of E:\code\python\python\py26\libs 09/27/2007 02:20 PM 12,672 GlU32.Lib 09/27/2007 02:20 PM 76,924 OpenGL32.Lib

    Read the article

  • How to force Maven to download maven-metadata.xml from the central repository?

    - by Alceu Costa
    What I want to do is to force Maven to download the 'maven-metadata.xml' for each artifact that I have in my local repository. The default Maven behaviour is to download only metadata from remote repositories (see this question). Why I want to do that: Currently I have a remote repository running in a build machine. By remote repository I mean a directory located in the build machine that contains all dependencies that I need to build my Maven projects. Note that I'm not using a repository manager like Nexus, the repository is just a copy of a local repository that I have uploaded to my build machine. However, since my local repository did not contain the 'maven-metadata.xml' files, these metadata files are also missing in the build machine repository. If I could retrieve the metadata files from the central repository, then it would be possible to upload a working remote repository to my build machine.

    Read the article

  • How to set focus for a CustomComboBox in a CellEditingTemplate when entering the page for the first time (MVVM)?

    - by Shamin
    PreparingCellForEdit="dg_PreparingCellForEdit" BeginningEdit="dg_BeginningEdit" <data:DataGridTemplateColumn MinWidth="300"> <data:DataGridTemplateColumn.HeaderStyle> <Style TargetType="primitives:DataGridColumnHeader" BasedOn="{StaticResource FOTDataGridColumnHeaderStyle}"> <Setter Property="ContentTemplate"> <Setter.Value> <DataTemplate> <TextBlock Text="{Binding CancelReasonText2,Source={StaticResource LabelResource}}" Style="{StaticResource TextBlockLabelStandardStyle}"/> </DataTemplate> </Setter.Value> </Setter> </Style> </data:DataGridTemplateColumn.HeaderStyle> <data:DataGridTemplateColumn.CellTemplate> <DataTemplate> <TextBlock Text="{Binding CancelReason.CancelCodeDescription}" Style="{StaticResource TextBlockLabelStandardStyle}"/> </DataTemplate> </data:DataGridTemplateColumn.CellTemplate> <data:DataGridTemplateColumn.CellEditingTemplate> <DataTemplate> <input:AutoCompleteBox x:Name="cBoxCancelReason" FilterMode="StartsWith" IsDropDownOpen="True" SelectedItem="{Binding CancelReason, Mode=TwoWay}" ItemsSource="{Binding CancelCodes}" ValueMemberPath="CancelCodeDescription" > <input:AutoCompleteBox.ItemTemplate> <DataTemplate> <TextBlock Text="{Binding CancelCodeDescription}" Style="{StaticResource TextBlockLabelStandardStyle}"/> </DataTemplate> </input:AutoCompleteBox.ItemTemplate> </input:AutoCompleteBox> </DataTemplate> </data:DataGridTemplateColumn.CellEditingTemplate> </data:DataGridTemplateColumn> </data:DataGrid.Columns> </data:DataGrid> ---CodeBind public partial class CancelFlightView : UserControl,ICancelFlightView { private data.CancelCode DefaultCancelCode { get { data.CancelCode code = new data.CancelCode(); code.CancelCd = "-1"; code.CancelCodeDescription = "-- Select Cancel Reason --"; return code; } } public CancelFlightView() { InitializeComponent(); this.dg.LoadingRow += new EventHandler<DataGridRowEventArgs>(dg_LoadingRow); //this.Loaded += new RoutedEventHandler(CancelFlightView_Loaded); } void dg_LoadingRow(object sender, DataGridRowEventArgs e) { CheckBox checkBox = (CheckBox)dg.Columns[0].GetCellContent(e.Row); if (checkBox.IsChecked.Value) { FrameworkElement obj = (FrameworkElement)dg.Columns[1].GetCellContent(e.Row); System.Windows.Browser.HtmlPage.Plugin.Focus(); DataGridCell cellEdit = (DataGridCell)obj.Parent; cellEdit.Focus(); dg.BeginEdit(); } } //private void UserControl_Loaded(object sender, RoutedEventArgs e) //{ // if (DataContext != null) // { // CancelFlightViewModel viewModel = (CancelFlightViewModel)DataContext; // viewModel.View = this; // viewModel.Grid = dg; // //viewModel.InitFocus(); // } //} //void CancelFlightView_Loaded(object sender, RoutedEventArgs e) //{ // if (dg.SelectedItem != null) // { // CheckBox checkBox = (CheckBox)dg.Columns[0].GetCellContent(dg.SelectedItem); // if (checkBox.IsChecked.Value) // { // DataGridCell cellEdit = ((DataGridCell)((System.Windows.Controls.Primitives.DataGridCellsPresenter)((DataGridCell)checkBox.Parent).Parent).Children[1]); // dg.CurrentColumn = dg.Columns[1]; // System.Windows.Browser.HtmlPage.Plugin.Focus(); // cellEdit.Focus(); // dg.BeginEdit(); // } // } //} public CancelFlightView(CancelFlightViewModel viewModel):this() { ViewModel = viewModel; } private void dg_PreparingCellForEdit(object sender, DataGridPreparingCellForEditEventArgs e) { object obj = dg.Columns[1].GetCellContent(e.Row); if (obj != null && obj.GetType() == typeof(AutoCompleteBox)) { AutoCompleteBox cBoxCancelReason = (AutoCompleteBox)obj; System.Windows.Browser.HtmlPage.Plugin.Focus(); cBoxCancelReason.Focus(); } } 
private void CustomComboBox_SelectionChanged(object sender, SelectionChangedEventArgs e) { } private void dg_BeginningEdit(object sender, DataGridBeginningEditEventArgs e) { } private void chkFlight_Click(object sender, RoutedEventArgs e) { CheckBox chkTemp = sender as CheckBox; if (!chkTemp.IsChecked.Value) { } else { DataGridCell cellEdit = ((DataGridCell)((System.Windows.Controls.Primitives.DataGridCellsPresenter)((DataGridCell)chkTemp.Parent).Parent).Children[1]); dg.CurrentColumn = dg.Columns[1]; cellEdit.Focus(); dg.BeginEdit(); } } private void LayoutRoot_KeyUp(object sender, KeyEventArgs e) { //if (e.Key == Key.Enter) //{ //} } #region ICancelFlightView Members public CancelFlightViewModel ViewModel { get { return DataContext as CancelFlightViewModel; } set { DataContext = value; } } #endregion } Now, when user click CheckBox, I can set focus on CustCombBox, but I can't set focus on Whose checkBox.IsChecked.Value = true when page is opened for the first time. is it possible on MVVM pattern? Looking forward your reply, thanks very much.

    Read the article

  • How can I make a div expand on click with only one open at a time?

    - by imHavoc
    As shown in here: http://www.learningjquery.com/2007/03/accordion-madness. But I need help to edit it so that it will work for my circumstances. Sample HTML <div class="row even"> <div class="info"> <div class="class">CK1</div> <div class="teacher_chinese">??</div> <div class="teacher_english">Teacher Name</div> <div class="assistant_chinese">??</div> <div class="assistant_english">Assistant Name</div> <div class="room">Room 00</div> <div class="book"></div> </div> <div class="chapters"> <a href="../../curriculum/cantonese/textbook.php?cls=C1&amp;ch=1"><span class="chapter">?</span></a> <a href="../../curriculum/cantonese/textbook.php?cls=C1&amp;ch=2"><span class="chapter">?</span></a> <a href="../../curriculum/cantonese/textbook.php?cls=C1&amp;ch=3"><span class="chapter">?</span></a> <a href="../../curriculum/cantonese/textbook.php?cls=C1&amp;ch=4"><span class="chapter">?</span></a> <a href="../../curriculum/cantonese/textbook.php?cls=C1&amp;ch=5"><span class="chapter">?</span></a> <a href="../../curriculum/cantonese/textbook.php?cls=C1&amp;ch=6"><span class="chapter">?</span></a> <a href="../../curriculum/cantonese/textbook.php?cls=C1&amp;ch=7"><span class="chapter">?</span></a> <a href="../../curriculum/cantonese/textbook.php?cls=C1&amp;ch=8"><span class="chapter">?</span></a> <a href="../../curriculum/cantonese/textbook.php?cls=C1&amp;ch=9"><span class="chapter">?</span></a> <a href="../../curriculum/cantonese/textbook.php?cls=C1&amp;ch=10"><span class="chapter">?</span></a> <a href="../../curriculum/cantonese/textbook.php?cls=C1&amp;ch=11"><span class="chapter">??</span></a> <a href="../../curriculum/cantonese/textbook.php?cls=C1&amp;ch=12"><span class="chapter">??</span></a> <a href="../../curriculum/cantonese/textbook.php?cls=C1&amp;ch=13"><span class="chapter">??</span></a> </div> </div> JQUERY [Work In Progress] $(document).ready(function() { $('div#table_cantonese .chapters').hide(); $('div#table_cantonese .book').click(function() { var $nextDiv = $(this).next(); var $visibleSiblings = $nextDiv.siblings('div:visible'); if ($visibleSiblings.length ) { $visibleSiblings.slideUp('fast', function() { $nextDiv.slideToggle('fast'); }); } else { $nextDiv.slideToggle('fast'); } }); }); So when the end-user click on div.book, div.chapters will expand. And only one div.chapters will be shown at a time. So if a div.chapters is already open, then it will close the open one first before animating the one the user clicked on.

    Read the article

  • Play! Framework 1.2.4 --- C3P0 settings to avoid Communications link failure due to idle time

    - by HelpMeStackOverflowMyOnlyHope
    I'm trying to customize my C3P0 settings to avoid the error shown at the bottom of this post. It was suggested at this url --- http://make-it-open.blogspot.com/2008/12/sql-error-0-sqlstate-08s01.html --- to adjust the settings as follows: In hibernate.cfg.xml, write <property name="c3p0.min_size">5</property> <property name="c3p0.max_size">20</property> <property name="c3p0.timeout">1800</property> <property name="c3p0.max_statements">50</property> Then create "c3p0.properties" in your root classpath folder and write c3p0.testConnectionOnCheckout=true c3p0.acquireRetryDelay=1000 c3p0.acquireRetryAttempts=1 I've tried to make those adjustments following the direction of the Play! Framework documentation, where they say use "db.pool..." as follows: db.pool.timeout=1800 db.pool.maxSize=15 db.pool.minSize=5 db.pool.initialSize=5 db.pool.acquireRetryAttempts=1 db.pool.preferredTestQuery=SELECT 1 db.pool.testConnectionOnCheckout=true db.pool.acquireRetryDelay=1000 db.pool.maxStatements=50 Are those settings not going to work? Should I be trying to set them in a different way? With those settings I still get the error shown below, that is due to to long of a idle time. Complete Stack Trace of Error: 23:00:44,932 WARN ~ SQL Error: 0, SQLState: 08S01 2012-04-13T23:00:44+00:00 app[web.1]: 23:00:44,932 ERROR ~ Communications link failure 2012-04-13T23:00:44+00:00 app[web.1]: 2012-04-13T23:00:44+00:00 app[web.1]: The last packet successfully received from the server was 274,847 milliseconds ago. The last packet sent successfully to the server was 7 milliseconds ago. 2012-04-13T23:00:44+00:00 app[web.1]: 23:00:44,934 ERROR ~ Why the driver complains here? 2012-04-13T23:00:44+00:00 app[web.1]: com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: No operations allowed after connection closed.Connection was implicitly closed by the driver. 
2012-04-13T23:00:44+00:00 app[web.1]: at com.mysql.jdbc.Util.handleNewInstance(Util.java:407) 2012-04-13T23:00:44+00:00 app[web.1]: at com.mysql.jdbc.Util.getInstance(Util.java:382) 2012-04-13T23:00:44+00:00 app[web.1]: at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1013) 2012-04-13T23:00:44+00:00 app[web.1]: at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:987) 2012-04-13T23:00:44+00:00 app[web.1]: at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:982) 2012-04-13T23:00:44+00:00 app[web.1]: at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:927) 2012-04-13T23:00:44+00:00 app[web.1]: at com.mysql.jdbc.ConnectionImpl.throwConnectionClosedException(ConnectionImpl.java:1213) 2012-04-13T23:00:44+00:00 app[web.1]: at com.mysql.jdbc.ConnectionImpl.getMutex(ConnectionImpl.java:3101) 2012-04-13T23:00:44+00:00 app[web.1]: at com.mysql.jdbc.ConnectionImpl.setAutoCommit(ConnectionImpl.java:4975) 2012-04-13T23:00:44+00:00 app[web.1]: at org.hibernate.jdbc.BorrowedConnectionProxy.invoke(BorrowedConnectionProxy.java:74) 2012-04-13T23:00:44+00:00 app[web.1]: at $Proxy49.setAutoCommit(Unknown Source) 2012-04-13T23:00:44+00:00 app[web.1]: at play.db.jpa.JPAPlugin.closeTx(JPAPlugin.java:368) 2012-04-13T23:00:44+00:00 app[web.1]: at play.db.jpa.JPAPlugin.onInvocationException(JPAPlugin.java:328) 2012-04-13T23:00:44+00:00 app[web.1]: at play.plugins.PluginCollection.onInvocationException(PluginCollection.java:447) 2012-04-13T23:00:44+00:00 app[web.1]: at play.Invoker$Invocation.onException(Invoker.java:240) 2012-04-13T23:00:44+00:00 app[web.1]: at play.jobs.Job.onException(Job.java:124) 2012-04-13T23:00:44+00:00 app[web.1]: at play.jobs.Job.call(Job.java:163) 2012-04-13T23:00:44+00:00 app[web.1]: at play.jobs.Job$1.call(Job.java:66) 2012-04-13T23:00:44+00:00 app[web.1]: at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) 2012-04-13T23:00:44+00:00 app[web.1]: at java.util.concurrent.FutureTask.run(FutureTask.java:166) 2012-04-13T23:00:44+00:00 app[web.1]: at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:165) 2012-04-13T23:00:44+00:00 app[web.1]: at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:266) 2012-04-13T23:00:44+00:00 app[web.1]: at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) 2012-04-13T23:00:44+00:00 app[web.1]: at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) 2012-04-13T23:00:44+00:00 app[web.1]: at java.lang.Thread.run(Thread.java:636) 2012-04-13T23:00:44+00:00 app[web.1]: Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure

    Read the article

  • Watching setTimeout loops so that only one is running at a time.

    - by DA
    I'm creating a content rotator in jQuery. 5 items total. Item 1 fades in, pauses 10 seconds, fades out, then item 2 fades in. Repeat. Simple enough. Using setTimeout I can call a set of functions that create a loop and will repeat the process indefinitely. I now want to add the ability to interrupt this rotator at any time by clicking on a navigation element to jump directly to one of the content items. I originally started going down the path of pinging a variable constantly (say every half second) that would check to see if a navigation element was clicked and, if so, abandon the loop, then restart the loop based on the item that was clicked. The challenge I ran into was how to actually ping a variable via a timer. The solution is to dive into JavaScript closures...which are a little over my head but definitely something I need to delve into more. However, in the process of that, I came up with an alternative option that actually seems to be better performance-wise (theoretically, at least). I have a sample running here: http://jsbin.com/uxupi/14 (It's using console.log so have fireBug running) Sample script: $(document).ready(function(){ var loopCount = 0; $('p#hello').click(function(){ loopCount++; doThatThing(loopCount); }) function doThatOtherThing(currentLoopCount) { console.log('doThatOtherThing-'+currentLoopCount); if(currentLoopCount==loopCount){ setTimeout(function(){doThatThing(currentLoopCount)},5000) } } function doThatThing(currentLoopCount) { console.log('doThatThing-'+currentLoopCount); if(currentLoopCount==loopCount){ setTimeout(function(){doThatOtherThing(currentLoopCount)},5000); } } }) The logic being that every click of the trigger element will kick off the loop passing into itself a variable equal to the current value of the global variable. That variable gets passed back and forth between the functions in the loop. Each click of the trigger also increments the global variable so that subsequent calls of the loop have a unique local variable. Then, within the loop, before the next step of each loop is called, it checks to see if the variable it has still matches the global variable. If not, it knows that a new loop has already been activated so it just ends the existing loop. Thoughts on this? Valid solution? Better options? Caveats? Dangers? UPDATE: I'm using John's suggestion below via the clearTimeout option. However, I can't quite get it to work. The logic is as such: var slideNumber = 0; var timeout = null; function startLoop(slideNumber) { ...do stuff here to set up the slide based on slideNumber... slideFadeIn() } function continueCheck(){ if (timeout != null) { // cancel the scheduled task. clearTimeout(timeout); timeout = null; return false; }else{ return true; } }; function slideFadeIn() { if (continueCheck){ // a new loop hasn't been called yet so proceed... // fade in the LI $currentListItem.fadeIn(fade, function() { if(multipleFeatures){ timeout = setTimeout(slideFadeOut,display); } }); }; function slideFadeOut() { if (continueLoop){ // a new loop hasn't been called yet so proceed... slideNumber=slideNumber+1; if(slideNumber==features.length) { slideNumber = 0; }; timeout = setTimeout(function(){startLoop(slideNumber)},100); }; startLoop(slideNumber); The above kicks of the looping. I then have navigation items that, when clicked, I want the above loop to stop, then restart with a new beginning slide: $(myNav).click(function(){ clearTimeout(timeout); timeout = null; startLoop(thisItem); }) If I comment out 'startLoop...' 
from the click event, it, indeed, stops the initial loop. However, if I leave that last line in, it doesn't actually stop the initial loop. Why? What happens is that both loops seem to run in parallel for a period. So, when I click my navigation, clearTimeout is called, which clears it.
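    The pattern described here, where each loop carries the counter value it started with and quietly stops as soon as the global counter has moved on, is language-agnostic. For illustration only, here is a rough Python sketch of the same stale-token idea, using threading.Timer in place of setTimeout (all names are made up):

        import threading

        loop_count = 0               # bumped every time a new rotation is requested
        lock = threading.Lock()

        def schedule(fn, delay_seconds):
            t = threading.Timer(delay_seconds, fn)
            t.daemon = True
            t.start()

        def do_that_thing(my_count):
            with lock:
                stale = (my_count != loop_count)
            if stale:
                return               # a newer loop exists; let this one die quietly
            print("step for loop", my_count)
            schedule(lambda: do_that_thing(my_count), 5.0)

        def restart_loop():
            global loop_count
            with lock:
                loop_count += 1
                my_count = loop_count
            do_that_thing(my_count)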

    Read the article

  • View bound to paged collection view not updating all of the time.

    - by Thomas
    I new to silverlight and trying to make a business application using the mvvm pattern and ria services. I have a view model class that contains a PagedCollectoinView and it is set to the item source of a datagrid. When I update the PagedCollectionView the datagrid is only updated the first time then after that subsequent changes to the data to not reflect in the view until after another edit. Things seem to be delayed one edit. Below is a summarized example of my xaml and code behind. This is the code for my view model public class CustomerContactLinks : INotifyPropertyChanged { private ObservableCollection<CustomerContactLink> _CustomerContact; public ObservableCollection<CustomerContactLink> CustomerContact { get { if (_CustomerContact == null) _CustomerContact = new ObservableCollection<CustomerContactLink>(); return _CustomerContact; } set { _CustomerContact = value; } } private PagedCollectionView _CustomerContactPaged; public PagedCollectionView CustomerContactPaged { get { if (_CustomerContactPaged == null) _CustomerContactPaged = new PagedCollectionView(CustomerContact); return _CustomerContactPaged; } } private TicketSystemDataContext _ctx; public TicketSystemDataContext ctx { get { if (_ctx == null) _ctx = new TicketSystemDataContext(); return _ctx; } } public void GetAll() { ctx.Load(ctx.GetCustomerContactInfoQuery(), LoadCustomerContactsComplete, null); } private void LoadCustomerContactsComplete(LoadOperation<CustomerContactLink> lo) { foreach (var entity in lo.Entities) { CustomerContact.Add(entity as CustomerContactLink); } } #region INotifyPropertyChanged Members public event PropertyChangedEventHandler PropertyChanged; private void RaisePropertyChanged(string propertyName) { if (PropertyChanged != null) { this.PropertyChanged(this, new PropertyChangedEventArgs(propertyName)); } } #endregion } Here is the basics of my XAML <Data:DataGrid x:Name="GridCustomers" MinHeight="100" MaxWidth="1000" IsReadOnly="True" AutoGenerateColumns="False"> <Data:DataGrid.Columns> <Data:DataGridTextColumn Header="First Name" Binding="{Binding Customer.FirstName}" Width="105" /> <Data:DataGridTextColumn Header="MI" Binding="{Binding Customer.MiddleName}" Width="35" /> <Data:DataGridTextColumn Header="Last Name" Binding="{Binding Customer.LastName}" Width="105"/> <Data:DataGridTextColumn Header="Address1" Binding="{Binding Contact.Address1}" Width="130"/> <Data:DataGridTextColumn Header="Address2" Binding="{Binding Contact.Address2}" Width="130"/> <Data:DataGridTextColumn Header="City" Binding="{Binding Contact.City}" Width="110"/> <Data:DataGridTextColumn Header="State" Binding="{Binding Contact.State}" Width="50"/> <Data:DataGridTextColumn Header="Zip" Binding="{Binding Contact.Zip}" Width="45"/> <Data:DataGridTextColumn Header="Home" Binding="{Binding Contact.PhoneHome}" Width="85"/> <Data:DataGridTextColumn Header="Cell" Binding="{Binding Contact.PhoneCell}" Width="85"/> <Data:DataGridTextColumn Header="Email" Binding="{Binding Contact.Email}" Width="118"/> </Data:DataGrid.Columns> </Data:DataGrid> <DataForm:DataForm x:Name="CustomerDetails" Header="Customer Details" AutoGenerateFields="False" AutoEdit="False" AutoCommit="False" CommandButtonsVisibility="Edit" Width="1000" Margin="0,5,0,0"> <DataForm:DataForm.EditTemplate> </DataForm:DataForm.EditTemplate> </DataForm:DataForm> And here is my code behind public Customers() { InitializeComponent(); BusyDialogIndicator.IsBusy = true; Loaded += new RoutedEventHandler(Customers_Loaded); CustomerDetails.BeginningEdit += new 
EventHandler(CustomerDetails_BeginningEdit); } void CustomerDetails_BeginningEdit(object sender, System.ComponentModel.CancelEventArgs e) { CustomerContacts.CustomerContactPaged.EditItem(CustomerDetails.CurrentItem); } private void Customers_Loaded(object sender, RoutedEventArgs e) { CustomerContacts = new CustomerContactLinks(); CustomerContacts.GetAll(); GridCustomers.ItemsSource = CustomerContacts.CustomerContactPaged; GridCustomerPager.Source = CustomerContacts.CustomerContactPaged; GridCustomers.SelectionChanged += new SelectionChangedEventHandler(GridCustomers_SelectionChanged); BusyDialogIndicator.IsBusy = false; } void GridCustomers_SelectionChanged(object sender, SelectionChangedEventArgs e) { CustomerDetails.CurrentItem = GridCustomers.SelectedItem as CustomerContactLink; } private void SaveChanges_Click(object sender, RoutedEventArgs e) { if (WebContext.Current.User.IsAuthenticated) { bool commited = CustomerDetails.CommitEdit(); if (commited && (!CustomerDetails.IsItemChanged && CustomerDetails.IsItemValid)) { CustomerContacts.Update(CustomerDetails.CurrentItem as CustomerContactLink); CustomerContacts.ctx.SubmitChanges(); CustomerContacts.CustomerContactPaged.CommitEdit(); CustomerContacts.CustomerContactPaged.Refresh(); (GridCustomers.ItemsSource as PagedCollectionView).Refresh(); } } }

    Read the article

  • How do I remove the time from the print preview dialog?

    - by Albo Best
    Here is my code: Imports System.Data.OleDb Imports System.Drawing.Printing Namespace Print Public Class Form1 Inherits System.Windows.Forms.Form Dim PrintC As PrinterClass Dim conn As OleDb.OleDbConnection Dim connectionString As String = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=..\\db1.mdb" Dim sql As String = String.Empty Dim ds As DataSet Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load FillDataGrid() '//create printerclass object PrintC = New PrinterClass(PrintDocument1, dataGrid) End Sub Private Sub FillDataGrid() Try Dim dt As New DataTable Dim ds As New DataSet ds.Tables.Add(dt) Dim da As New OleDbDataAdapter con.Open() da = New OleDbDataAdapter("SELECT * from klient ", con) da.Fill(dt) con.Close() dataGrid.DataSource = dt.DefaultView Dim dTable As DataTable For Each dTable In ds.Tables Dim dgStyle As DataGridTableStyle = New DataGridTableStyle dgStyle.MappingName = dTable.TableName dataGrid.TableStyles.Add(dgStyle) Next ' DataGrid settings dataGrid.CaptionText = "TE GJITHE KLIENTET" dataGrid.HeaderFont = New Font("Verdana", 12) dataGrid.TableStyles(0).GridColumnStyles(0).Width = 60 dataGrid.TableStyles(0).GridColumnStyles(1).Width = 140 dataGrid.TableStyles(0).GridColumnStyles(2).Width = 140 dataGrid.TableStyles(0).GridColumnStyles(3).Width = 140 dataGrid.TableStyles(0).GridColumnStyles(4).Width = 140 dataGrid.TableStyles(0).GridColumnStyles(5).HeaderText = "" dataGrid.TableStyles(0).GridColumnStyles(5).Width = -1 Catch ex As Exception MessageBox.Show(ex.Message) End Try End Sub Private Sub btnPrint_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles btnPrint.Click 'create printerclass object PrintC = New PrinterClass(PrintDocument1, dataGrid) PrintDocument1.Print() End Sub Private Sub btnPreview_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles btnPreview.Click 'create printerclass object PrintC = New PrinterClass(PrintDocument1, dataGrid) ''preview Dim ps As New PaperSize("A4", 840, 1150) ps.PaperName = PaperKind.A4 PrintDocument1.DefaultPageSettings.PaperSize = ps PrintPreviewDialog1.WindowState = FormWindowState.Normal PrintPreviewDialog1.StartPosition = FormStartPosition.CenterScreen PrintPreviewDialog1.ClientSize = New Size(600, 600) PrintPreviewDialog1.ShowDialog() End Sub Private Sub PrintDocument1_PrintPage(ByVal sender As System.Object, ByVal e As System.Drawing.Printing.PrintPageEventArgs) Handles PrintDocument1.PrintPage 'print grid Dim morepages As Boolean = PrintC.Print(e.Graphics) If (morepages) Then e.HasMorePages = True End If End Sub End Class End Namespace This is how data looks in DataGrid (that's perfect)... and here is how it looks when I click PrintPreview. (I don't want the time to appear there, the "12:00:00" part. in database the date is stored as Short Date (10-Dec-12) Can somebody suggest a way around that? 
Imports System Imports System.Windows.Forms Imports System.Drawing Imports System.Drawing.Printing Imports System.Collections Imports System.Data Namespace Print Public Class PrinterClass '//clone of Datagrid Dim PrintGrid As Grid '//printdocument for initial printer settings Private PrintDoc As PrintDocument '//defines whether the grid is ordered right to left Private bRightToLeft As Boolean '//Current Top Private CurrentY As Single = 0 '//Current Left Private CurrentX As Single = 0 '//CurrentRow to print Private CurrentRow As Integer = 0 '//Page Counter Public PageCounter As Integer = 0 '/// <summary> '/// Constructor Class '/// </summary> '/// <param name="pdocument"></param> '/// <param name="dgrid"></param> Public Sub New(ByVal pdocument As PrintDocument, ByVal dgrid As DataGrid) 'MyBase.new() PrintGrid = New Grid(dgrid) PrintDoc = pdocument '//The grid columns are right to left bRightToLeft = dgrid.RightToLeft = RightToLeft.Yes '//init CurrentX and CurrentY CurrentY = pdocument.DefaultPageSettings.Margins.Top CurrentX = pdocument.DefaultPageSettings.Margins.Left End Sub Public Function Print(ByVal g As Graphics, ByRef currentX As Single, ByRef currentY As Single) As Boolean '//use predefined area currentX = currentX currentY = currentY PrintHeaders(g) Dim Morepages As Boolean = PrintDataGrid(g) currentY = currentY currentX = currentX Return Morepages End Function Public Function Print(ByVal g As Graphics) As Boolean CurrentX = PrintDoc.DefaultPageSettings.Margins.Left CurrentY = PrintDoc.DefaultPageSettings.Margins.Top PrintHeaders(g) Return PrintDataGrid(g) End Function '/// <summary> '/// Print the Grid Headers '/// </summary> '/// <param name="g"></param> Private Sub PrintHeaders(ByVal g As Graphics) Dim sf As StringFormat = New StringFormat '//if we want to print the grid right to left If (bRightToLeft) Then CurrentX = PrintDoc.DefaultPageSettings.PaperSize.Width - PrintDoc.DefaultPageSettings.Margins.Right sf.FormatFlags = StringFormatFlags.DirectionRightToLeft Else CurrentX = PrintDoc.DefaultPageSettings.Margins.Left End If Dim i As Integer For i = 0 To PrintGrid.Columns - 1 '//set header alignment Select Case (CType(PrintGrid.Headers.GetValue(i), Header).Alignment) Case HorizontalAlignment.Left 'left sf.Alignment = StringAlignment.Near Case HorizontalAlignment.Center sf.Alignment = StringAlignment.Center Case HorizontalAlignment.Right sf.Alignment = StringAlignment.Far End Select '//advance X according to order If (bRightToLeft) Then '//draw the cell bounds (lines) and back color g.FillRectangle(New SolidBrush(PrintGrid.HeaderBackColor), CurrentX - PrintGrid.Headers(i).Width, CurrentY, PrintGrid.Headers(i).Width, PrintGrid.Headers(i).Height) g.DrawRectangle(New Pen(PrintGrid.LineColor), CurrentX - PrintGrid.Headers(i).Width, CurrentY, PrintGrid.Headers(i).Width, PrintGrid.Headers(i).Height) '//draw the cell text g.DrawString(PrintGrid.Headers(i).CText, PrintGrid.Headers(i).Font, New SolidBrush(PrintGrid.HeaderForeColor), New RectangleF(CurrentX - PrintGrid.Headers(i).Width, CurrentY, PrintGrid.Headers(i).Width, PrintGrid.Headers(i).Height), sf) '//next cell CurrentX -= PrintGrid.Headers(i).Width Else '//draw the cell bounds (lines) and back color g.FillRectangle(New SolidBrush(PrintGrid.HeaderBackColor), CurrentX, CurrentY, PrintGrid.Headers(i).Width, PrintGrid.Headers(i).Height) g.DrawRectangle(New Pen(PrintGrid.LineColor), CurrentX, CurrentY, PrintGrid.Headers(i).Width, PrintGrid.Headers(i).Height) '//draw the cell text g.DrawString(PrintGrid.Headers(i).CText, 
PrintGrid.Headers(i).Font, New SolidBrush(PrintGrid.HeaderForeColor), New RectangleF(CurrentX, CurrentY, PrintGrid.Headers(i).Width, PrintGrid.Headers(i).Height), sf) '//next cell CurrentX += PrintGrid.Headers(i).Width End If Next '//reset to beginning If (bRightToLeft) Then '//right align CurrentX = PrintDoc.DefaultPageSettings.PaperSize.Width - PrintDoc.DefaultPageSettings.Margins.Right Else '//left align CurrentX = PrintDoc.DefaultPageSettings.Margins.Left End If '//advance to next row CurrentY = CurrentY + CType(PrintGrid.Headers.GetValue(0), Header).Height End Sub Private Function PrintDataGrid(ByVal g As Graphics) As Boolean Dim sf As StringFormat = New StringFormat PageCounter = PageCounter + 1 '//if we want to print the grid right to left If (bRightToLeft) Then CurrentX = PrintDoc.DefaultPageSettings.PaperSize.Width - PrintDoc.DefaultPageSettings.Margins.Right sf.FormatFlags = StringFormatFlags.DirectionRightToLeft Else CurrentX = PrintDoc.DefaultPageSettings.Margins.Left End If Dim i As Integer For i = CurrentRow To PrintGrid.Rows - 1 Dim j As Integer For j = 0 To PrintGrid.Columns - 1 '//set cell alignment Select Case (PrintGrid.Cell(i, j).Alignment) '//left Case HorizontalAlignment.Left sf.Alignment = StringAlignment.Near Case HorizontalAlignment.Center sf.Alignment = StringAlignment.Center '//right Case HorizontalAlignment.Right sf.Alignment = StringAlignment.Far End Select '//advance X according to order If (bRightToLeft) Then '//draw the cell bounds (lines) and back color g.FillRectangle(New SolidBrush(PrintGrid.BackColor), CurrentX - PrintGrid.Cell(i, j).Width, CurrentY, PrintGrid.Cell(i, j).Width, PrintGrid.Cell(i, j).Height) g.DrawRectangle(New Pen(PrintGrid.LineColor), CurrentX - PrintGrid.Cell(i, j).Width, CurrentY, PrintGrid.Cell(i, j).Width, PrintGrid.Cell(i, j).Height) '//draw the cell text g.DrawString(PrintGrid.Cell(i, j).CText, PrintGrid.Cell(i, j).Font, New SolidBrush(PrintGrid.ForeColor), New RectangleF(CurrentX - PrintGrid.Cell(i, j).Width, CurrentY, PrintGrid.Cell(i, j).Width, PrintGrid.Cell(i, j).Height), sf) '//next cell CurrentX -= PrintGrid.Cell(i, j).Width Else '//draw the cell bounds (lines) and back color g.FillRectangle(New SolidBrush(PrintGrid.BackColor), CurrentX, CurrentY, PrintGrid.Cell(i, j).Width, PrintGrid.Cell(i, j).Height) g.DrawRectangle(New Pen(PrintGrid.LineColor), CurrentX, CurrentY, PrintGrid.Cell(i, j).Width, PrintGrid.Cell(i, j).Height) '//draw the cell text '//Draw text by alignment g.DrawString(PrintGrid.Cell(i, j).CText, PrintGrid.Cell(i, j).Font, New SolidBrush(PrintGrid.ForeColor), New RectangleF(CurrentX, CurrentY, PrintGrid.Cell(i, j).Width, PrintGrid.Cell(i, j).Height), sf) '//next cell CurrentX += PrintGrid.Cell(i, j).Width End If Next '//reset to beginning If (bRightToLeft) Then '//right align CurrentX = PrintDoc.DefaultPageSettings.PaperSize.Width - PrintDoc.DefaultPageSettings.Margins.Right Else '//left align CurrentX = PrintDoc.DefaultPageSettings.Margins.Left End If '//advance to next row CurrentY += PrintGrid.Cell(i, 0).Height CurrentRow += 1 '//if we are beyond the page margin (bottom) then we need another page, '//return true If (CurrentY > PrintDoc.DefaultPageSettings.PaperSize.Height - PrintDoc.DefaultPageSettings.Margins.Bottom) Then Return True End If Next Return False End Function End Class End Namespace

    Read the article

  • SQL SERVER – Guest Posts – Feodor Georgiev – The Context of Our Database Environment – Going Beyond the Internal SQL Server Waits – Wait Type – Day 21 of 28

    - by pinaldave
    This guest post is submitted by Feodor. Feodor Georgiev is a SQL Server database specialist with extensive experience of thinking both within and outside the box. He has wide experience of different systems and solutions in the fields of architecture, scalability, performance, etc. Feodor has experience with SQL Server 2000 and later versions, and is certified in SQL Server 2008. In this article Feodor explains the server-client-server process, and concentrated on the mutual waits between client and SQL Server. This is essential in grasping the concept of waits in a ‘global’ application plan. Recently I was asked to write a blog post about the wait statistics in SQL Server and since I had been thinking about writing it for quite some time now, here it is. It is a wide-spread idea that the wait statistics in SQL Server will tell you everything about your performance. Well, almost. Or should I say – barely. The reason for this is that SQL Server is always a part of a bigger system – there are always other players in the game: whether it is a client application, web service, any other kind of data import/export process and so on. In short, the SQL Server surroundings look like this: This means that SQL Server, aside from its internal waits, also depends on external waits and settings. As we can see in the picture above, SQL Server needs to have an interface in order to communicate with the surrounding clients over the network. For this communication, SQL Server uses protocol interfaces. I will not go into detail about which protocols are best, but you can read this article. Also, review the information about the TDS (Tabular data stream). As we all know, our system is only as fast as its slowest component. This means that when we look at our environment as a whole, the SQL Server might be a victim of external pressure, no matter how well we have tuned our database server performance. Let’s dive into an example: let’s say that we have a web server, hosting a web application which is using data from our SQL Server, hosted on another server. The network card of the web server for some reason is malfunctioning (think of a hardware failure, driver failure, or just improper setup) and does not send/receive data faster than 10Mbs. On the other end, our SQL Server will not be able to send/receive data at a faster rate either. This means that the application users will notify the support team and will say: “My data is coming very slow.” Now, let’s move on to a bit more exciting example: imagine that there is a similar setup as the example above – one web server and one database server, and the application is not using any stored procedure calls, but instead for every user request the application is sending 80kb query over the network to the SQL Server. (I really thought this does not happen in real life until I saw it one day.) So, what happens in this case? To make things worse, let’s say that the 80kb query text is submitted from the application to the SQL Server at least 100 times per minute, and as often as 300 times per minute in peak times. Here is what happens: in order for this query to reach the SQL Server, it will have to be broken into a of number network packets (according to the packet size settings) – and will travel over the network. 
On the other side, our SQL Server network card will receive the packets, will pass them to our network layer, the packets will get assembled, and eventually SQL Server will start processing the query – parsing, allegorizing, generating the query execution plan and so on. So far, we have already had a serious network overhead by waiting for the packets to reach our Database Engine. There will certainly be some processing overhead – until the database engine deals with the 80kb query and its 20 subqueries. The waits you see in the DMVs are actually collected from the point the query reaches the SQL Server and the packets are assembled. Let’s say that our query is processed and it finally returns 15000 rows. These rows have a certain size as well, depending on the data types returned. This means that the data will have converted to packages (depending on the network size package settings) and will have to reach the application server. There will also be waits, however, this time you will be able to see a wait type in the DMVs called ASYNC_NETWORK_IO. What this wait type indicates is that the client is not consuming the data fast enough and the network buffers are filling up. Recently Pinal Dave posted a blog on Client Statistics. What Client Statistics does is captures the physical flow characteristics of the query between the client(Management Studio, in this case) and the server and back to the client. As you see in the image, there are three categories: Query Profile Statistics, Network Statistics and Time Statistics. Number of server roundtrips–a roundtrip consists of a request sent to the server and a reply from the server to the client. For example, if your query has three select statements, and they are separated by ‘GO’ command, then there will be three different roundtrips. TDS Packets sent from the client – TDS (tabular data stream) is the language which SQL Server speaks, and in order for applications to communicate with SQL Server, they need to pack the requests in TDS packets. TDS Packets sent from the client is the number of packets sent from the client; in case the request is large, then it may need more buffers, and eventually might even need more server roundtrips. TDS packets received from server –is the TDS packets sent by the server to the client during the query execution. Bytes sent from client – is the volume of the data set to our SQL Server, measured in bytes; i.e. how big of a query we have sent to the SQL Server. This is why it is best to use stored procedures, since the reusable code (which already exists as an object in the SQL Server) will only be called as a name of procedure + parameters, and this will minimize the network pressure. Bytes received from server – is the amount of data the SQL Server has sent to the client, measured in bytes. Depending on the number of rows and the datatypes involved, this number will vary. But still, think about the network load when you request data from SQL Server. Client processing time – is the amount of time spent in milliseconds between the first received response packet and the last received response packet by the client. Wait time on server replies – is the time in milliseconds between the last request packet which left the client and the first response packet which came back from the server to the client. 
Total execution time – is the sum of client processing time and wait time on server replies (the SQL Server internal processing time) Here is an illustration of the Client-server communication model which should help you understand the mutual waits in a client-server environment. Keep in mind that a query with a large ‘wait time on server replies’ means the server took a long time to produce the very first row. This is usual on queries that have operators that need the entire sub-query to evaluate before they proceed (for example, sort and top operators). However, a query with a very short ‘wait time on server replies’ means that the query was able to return the first row fast. However a long ‘client processing time’ does not necessarily imply the client spent a lot of time processing and the server was blocked waiting on the client. It can simply mean that the server continued to return rows from the result and this is how long it took until the very last row was returned. The bottom line is that developers and DBAs should work together and think carefully of the resource utilization in the client-server environment. From experience I can say that so far I have seen only cases when the application developers and the Database developers are on their own and do not ask questions about the other party’s world. I would recommend using the Client Statistics tool during new development to track the performance of the queries, and also to find a synchronous way of utilizing resources between the client – server – client. Here is another example: think about similar setup as above, but add another server to the game. Let’s say that we keep our media on a separate server, and together with the data from our SQL Server we need to display some images on the webpage requested by our user. No matter how simple or complicated the logic to get the images is, if the images are 500kb each our users will get the page slowly and they will still think that there is something wrong with our data. Anyway, I don’t mean to get carried away too far from SQL Server. Instead, what I would like to say is that DBAs should also be aware of ‘the big picture’. I wrote a blog post a while back on this topic, and if you are interested, you can read it here about the big picture. And finally, here are some guidelines for monitoring the network performance and improving it: Run a trace and outline all queries that return more than 1000 rows (in Profiler you can actually filter and sort the captured trace by number of returned rows). This is not a set number; it is more of a guideline. The general thought is that no application user can consume that many rows at once. Ask yourself and your fellow-developers: ‘why?’. Monitor your network counters in Perfmon: Network Interface:Output queue length, Redirector:Network errors/sec, TCPv4: Segments retransmitted/sec and so on. Make sure to establish a good friendship with your network administrator (buy them coffee, for example J ) and get into a conversation about the network settings. Have them explain to you how the network cards are setup – are they standalone, are they ‘teamed’, what are the settings – full duplex and so on. Find some time to read a bit about networking. In this short blog post I hope I have turned your attention to ‘the big picture’ and the fact that there are other factors affecting our SQL Server, aside from its internal workings. 
As a further reading I would still highly recommend the Wait Stats series on this blog, also I would recommend you have the coffee break conversation with your network admin as soon as possible. This guest post is written by Feodor Georgiev. Read all the post in the Wait Types and Queue series. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL

    Read the article

  • Python: combining two scripts into one

    - by Alex
    I have two separately made python scripts one that makes a sine wave sound based off time, and another that produces a sine wave graph that is based off the same time factors. I need help combining them into one running file. Here's the first: from struct import pack from math import sin, pi import time def au_file(name, freq, freq1, dur, vol): fout = open(name, 'wb') # header needs size, encoding=2, sampling_rate=8000, channel=1 fout.write('.snd' + pack('>5L', 24, 8*dur, 2, 8000, 1)) factor = 2 * pi * freq/8000 factor1 = 2 * pi * freq1/8000 # write data for seg in range(8 * dur): # sine wave calculations sin_seg = sin(seg * factor) + sin(seg * factor1) fout.write(pack('b', vol * 64 * sin_seg)) fout.close() t = time.strftime("%S", time.localtime()) ti = time.strftime("%M", time.localtime()) tis = float(t) tis = tis * 100 tim = float(ti) tim = tim * 100 if __name__ == '__main__': au_file(name='timeSound.au', freq=tim, freq1=tis, dur=1000, vol=1.0) import os os.startfile('timeSound.au') and the second is this: from Tkinter import * import math import time t = time.strftime("%S", time.localtime()) ti = time.strftime("%M", time.localtime()) tis = float(t) tis = tis / 100 tim = float(ti) tim = tim / 100 root = Tk() root.title("This very moment") width = 400 height = 300 center = height//2 x_increment = 1 # width stretch x_factor1 = tis x_factor2 = tim # height stretch y_amplitude = 50 c = Canvas(width=width, height=height, bg='black') c.pack() str1 = "sin(x)=white" c.create_text(10, 20, anchor=SW, text=str1) center_line = c.create_line(0, center, width, center, fill='red') # create the coordinate list for the sin() curve, have to be integers xy1 = [] xy2 = [] for x in range(400): # x coordinates xy1.append(x * x_increment) xy2.append(x * x_increment) # y coordinates xy1.append(int(math.sin(x * x_factor1) * y_amplitude) + center) xy2.append(int(math.sin(x * x_factor2) * y_amplitude) + center) sinS_line = c.create_line(xy1, fill='white') sinM_line = c.create_line(xy2, fill='yellow') root.mainloop()
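    One way to merge the two programs is to keep each one's work in its own function (write the .au file, draw the Tkinter plot), then call both from a single entry point, starting the Tkinter main loop only after the sound file has been written and launched. A trimmed sketch of that structure, assuming au_file and the plotting code from the two scripts above live in the same module (the graph body is elided here):

        import os
        import time

        def make_sound():
            t = time.strftime("%S", time.localtime())
            ti = time.strftime("%M", time.localtime())
            # au_file is the function defined in the first script above
            au_file(name='timeSound.au', freq=float(ti) * 100,
                    freq1=float(t) * 100, dur=1000, vol=1.0)
            os.startfile('timeSound.au')

        def make_graph():
            # the Tkinter drawing code from the second script, wrapped in a
            # function that ends with root.mainloop()
            pass

        if __name__ == '__main__':
            make_sound()   # write and start playing the .au file first
            make_graph()   # then open the plot window; mainloop() blocks until closed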

    Read the article

  • How reliable is DateTime.UtcNow in Silverlight applications?

    - by Edward Tanguay
    I have a Silverlight application which users will be running in various time zones. The applications load their data from the server at one time, then cache it in IsolatedStorage. When I make changes to the data on the server, I want to be able to change the "last updated time" so that all applications download the newest data the next time they check this date. However, I'm a bit confused as to how to handle the time zone issue, since if the server is in New York and the update time is set to 2010-01-01 17:00:00, and a client in Seattle compares it to its local time of 2010-01-01 14:00:00, it won't update and will continue to provide old data for three more hours. My solution is to always post the update time in UTC, not in the server's local time, and then have the Silverlight app check it against DateTime.UtcNow. Is this as easy as it sounds, or are there issues with this, e.g. that time zones are not set correctly on computers and hence the Silverlight app does not report the correct UTC time? Can anyone say from experience how likely it is that using DateTime.UtcNow like this for cache refreshing will work in all cases? If DateTime.UtcNow is not reliable, I will just use an incremented "DataVersion" integer, but there are other scenarios in which getting time zone synchronization right would make it useful to thoroughly understand how to solve this in Silverlight apps.
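    The approach works as long as both sides compare UTC instants and never local wall-clock times. One way to sidestep client clock skew entirely is to compare two server-issued timestamps: the "last updated" moment published by the server against the timestamp (also taken from the server) recorded when the client last downloaded the data. A small, language-agnostic sketch of that comparison in Python (the Silverlight version would compare DateTime values obtained the same way; all names here are illustrative):

        from datetime import datetime, timezone

        def needs_refresh(server_last_updated_utc: datetime,
                          cached_at_utc: datetime) -> bool:
            # Both values are UTC instants, so the New York / Seattle offset
            # problem never enters the comparison at all.
            return server_last_updated_utc > cached_at_utc

        cached_at = datetime(2010, 1, 1, 22, 0, tzinfo=timezone.utc)       # when the data was cached
        server_updated = datetime(2010, 1, 1, 22, 5, tzinfo=timezone.utc)  # server republished later
        print(needs_refresh(server_updated, cached_at))                    # True -> download again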

    Read the article

  • How do you perform arithmetic calculations on symbols in Scheme/Lisp?

    - by kunjaan
    I need to perform calculations with a symbol. I need to convert a time in hh:mm form to the number of minutes passed.

        ;; (get-minutes symbol) -> number
        ;; convert the time in hh:mm to minutes
        ;; (get-minutes 6:19) -> 6 * 60 + 19
        (define (get-minutes time)
          (let* ((a-time (string->list (symbol->string time)))
                 (hour (first a-time))
                 (minutes (third a-time)))
            (+ (* hour 60) minutes)))

    This code is incorrect: I get a character back after all that conversion, so I cannot perform the calculation. Do you guys have any suggestions? I can't change the input type. Context: the input is a flight schedule, so I cannot alter the data structure.

    ;; ----------------------------------------------------------------------

    Edit: Figured out an ugly solution. Please suggest something better.

        (define (get-minutes time)
          (let* ((a-time (symbol->string time))
                 (hour (string->number (substring a-time 0 1)))
                 (minutes (string->number (substring a-time 2 4))))
            (+ (* hour 60) minutes)))

    Read the article

  • XSLT: For each node transform, if A =2 and A=1 were both found do this else do that

    - by Larry
    Example 1:

        <time>
          <timestamp>01:00</timestamp>
          <event>arrived</event>
        </time>
        <time>
          <timestamp>02:00</timestamp>
          <event>left</event>
        </time>

    Example 2:

        <time>
          <timestamp>02:00</timestamp>
          <event>left</event>
        </time>

    The XSLT needs to do the following for each node:

        IF event = arrived THEN set eventtype = atdestination
        IF event = left is found AND event = arrived is found THEN set new node type = leftdestination
        ELSE set type = left

    XSLT applied to example 1:

        <event>
          <time>01:00</time>
          <type>atdestination</type>
        </event>
        <event>
          <time>02:00</time>
          <type>leftdestination</type>
        </event>

    XSLT applied to example 2:

        <event>
          <time>02:00</time>
          <type>left</type>
        </event>

    Read the article

  • Making more recent items more likely to be drawn

    - by bobo
    There are a few hundred book records in the database, and each record has a publish time. On the homepage of the website, I am required to write some code to randomly pick 10 books and put them there. The requirement is that newer books need to have higher chances of getting displayed. Since the time is an integer, I am thinking of calculating the probability for each book like this:

        Probability of a book being drawn =
            (current time - publish time of the book) /
            ((current time - publish time of book 1) + (current time - publish time of book 2) + ... + (current time - publish time of book n))

    After a book is drawn, the next round of the loop subtracts that book's (current time - publish time) from the denominator and recalculates the probability for each of the remaining books; the loop continues until 10 books have been drawn. Is this algorithm correct? By the way, the website is written in PHP. Feel free to suggest some PHP code if you have a better algorithm in mind. Many thanks to you all.
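    One thing worth noting: as written, the weight (current time - publish time) grows with a book's age, so it actually favors older books; if newer books should be more likely, an inverse-age weight is one option. Below is a rough Python sketch of weighted sampling without replacement using such an inverse-age weight; the selection logic ports to PHP directly, and all names are illustrative:

        import random
        import time

        def pick_books(books, k=10, now=None):
            """Pick k distinct books at random, with newer books more likely.

            `books` is a list of dicts carrying a 'publish_time' Unix timestamp.
            Each remaining book is drawn with probability proportional to its
            weight, here the inverse of its age, then removed from the pool.
            """
            now = now or time.time()
            remaining = list(books)
            picked = []
            for _ in range(min(k, len(remaining))):
                weights = [1.0 / (now - b['publish_time'] + 1.0) for b in remaining]
                r = random.uniform(0, sum(weights))
                acc = 0.0
                for i, w in enumerate(weights):
                    acc += w
                    if r <= acc:
                        picked.append(remaining.pop(i))
                        break
                else:
                    picked.append(remaining.pop())  # guard against float rounding
            return picked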

    Read the article

  • How to add EXTRA_CFLAGS to indigo eclipse cdt?

    - by jacknad
    I used the instructions here to install Eclipse, and those here to create an Eclipse project, but I suspect the instructions were written for an older version of Eclipse. Specifically, there is no "Build (Incremental Build): build install EXTRA_CFLAGS+=-g..." setting in this version of Eclipse. I have created the project without the EXTRA_CFLAGS and have been poking around looking for a place to add or set them. I see a number of things that look close in Project Properties, but nothing that seems like a match.

    Read the article

  • FrankenUPS Hack Turns a Server UPS into a Whole House UPS

    - by Jason Fitzpatrick
    This well-documented build guide showcases the process of turning a rack-mounted UPS battery device intended for a server bank into a super-charged whole-house UPS system with a massive 14 hours of backup juice. It’s a very ambitious build and, due to the work required in your home's main circuit breaker panel, we highly recommend that only those experienced with electrical work undertake the project. That said, it’s a really clever bit of recycling that yielded an impressive half-day's worth of backup power. Hit up the link below for the detailed build log. FrankenUPS [via Hack A Day]

    Read the article

  • Robotic Arm &ndash; Hardware

    - by Szymon Kobalczyk
    This is first in series of articles about project I've been building  in my spare time since last Summer. Actually it all began when I was researching a topic of modeling human motion kinematics in order to create gesture recognition library for Kinect. This ties heavily into motion theory of robotic manipulators so I also glanced at some designs of robotic arms. Somehow I stumbled upon this cool looking open source robotic arm: It was featured on Thingiverse and published by user jjshortcut (Jan-Jaap). Since for some time I got hooked on toying with microcontrollers, robots and other electronics, I decided to give it a try and build it myself. In this post I will describe the hardware build of the arm and in later posts I will be writing about the software to control it. Another reason to build the arm myself was the cost factor. Even small commercial robotic arms are quite expensive – products from Lynxmotion and Dagu look great but both cost around USD $300 (actually there is one cheap arm available but it looks more like a toy to me). In comparison this design is quite cheap. It uses seven hobby grade servos and even the cheapest ones should work fine. The structure is build from a set of laser cut parts connected with few metal spacers (15mm and 47mm) and lots of M3 screws. Other than that you’d only need a microcontroller board to drive the servos. So in total it comes a lot cheaper to build it yourself than buy an of the shelf robotic arm. Oh, and if you don’t like this one there are few more robotic arm projects at Thingiverse (including one by oomlout). Laser cut parts Some time ago I’ve build another robot using laser cut parts so I knew the process already. You can grab the design files in both DXF and EPS format from Thingiverse, and there are also 3D models of each part in STL. Actually the design is split into a second project for the mini servo gripper (there is also a standard servo version available but it won’t fit this arm).  I wanted to make some small adjustments, layout, and add measurements to the parts before sending it for cutting. I’ve looked at some free 2D CAD programs, and finally did all this work using QCad 3 Beta with worked great for me (I also tried LibreCAD but it didn’t work that well). All parts are cut from 4 mm thick material. Because I was worried that acrylic is too fragile and might break, I also ordered another set cut from plywood. In the end I build it from plywood because it was easier to glue (I was told acrylic requires a special glue). Btw. I found a great laser cutter service in Kraków and highly recommend it (www.ebbox.com.pl). It cost me only USD $26 for both sets ($16 acrylic + $10 plywood). Metal parts I bought all the M3 screws and nuts at local hardware store. Make sure to look for nylon lock (nyloc) nuts for the gripper because otherwise it unscrews and comes apart quickly. I couldn’t find local store with metal spacers and had to order them online (you’d need 11 x 47mm and 3 x 15mm). I think I paid less than USD $10 for all metal parts. Servos This arm uses five standards size servos to drive the arm itself, and two micro servos are used on the gripper. Author of the project used Modelcraft RS-2 Servo and Modelcraft ES-05 HT Servo. I had two Futaba S3001 servos laying around, and ordered additional TowerPro SG-5010 standard size servos and TowerPro SG90 micro servos. However it turned out that the SG90 won’t fit in the gripper so I had to replace it with a slightly smaller E-Sky EK2-0508 micro servo. 
Later it also turned out that Futaba servos make some strange noise while working so I swapped one with TowerPro SG-5010 which has higher torque (8kg / cm). I’ve also bought three servo extension cables. All servos cost me USD $45. Assembly The build process is not difficult but you need to think carefully about order of assembling it. You can do the base and upper arm first. Because two servos in the base are close together you need to put first with one piece of lower arm already connected before you put the second servo. Then you connect the upper arm and finally put the second piece of lower arm to hold it together. Gripper and base require some gluing so think it through too. Make sure to look closely at all the photos on Thingiverse (also other people copies) and read additional posts on jjshortcust’s blog: My mini servo grippers and completed robotic arm  Multiply the robotic arm and electronics Here is also Rob’s copy cut from aluminum My assembled arm looks like this – I think it turned out really nice: Servo controller board The last piece of hardware I needed was an electronic board that would take command from PC and drive all seven servos. I could probably use Arduino for this task, and in fact there are several Arduino servo shields available (for example from Adafruit or Renbotics).  However one problem is that most support only up to six servos, and second that their accuracy is limited by Arduino’s timer frequency. So instead I looked for dedicated servo controller and found a series of Maestro boards from Pololu. I picked the Pololu Mini Maestro 12-Channel USB Servo Controller. It has many nice features including native USB connection, high resolution pulses (0.25µs) with no jitter, built-in speed and acceleration control, and even scripting capability. Another cool feature is that besides servo control, each channel can be configured as either general input or output. So far I’m using seven channels so I still have five available to connect some sensors (for example distance sensor mounted on gripper might be useful). And last but important factor was that they have SDK in .NET – what more I could wish for! The board itself is very small – half of the size of Tic-Tac box. I picked one for about USD $35 in this store. Perhaps another good alternative would be the Phidgets Advanced Servo 8-Motor – but it is significantly more expensive at USD $87.30. The Maestro Controller Driver and Software package includes Maestro Control Center program with lets you immediately configure the board. For each servo I first figured out their move range and set the min/max limits. I played with setting the speed an acceleration values as well. Big issue for me was that there are two servos that control position of lower arm (shoulder joint), and both have to be moved at the same time. This is where the scripting feature of Pololu board turned out very helpful. I wrote a script that synchronizes position of second servo with first one – so now I only need to move one servo and other will follow automatically. This turned out tricky because I couldn’t find simple offset mapping of the move range for each servo – I had to divide it into several sub-ranges and map each individually. The scripting language is bit assembler-like but gets the job done. And there is even a runtime debugging and stack view available. Altogether I’m very happy with the Pololu Mini Maestro Servo Controller, and with this final piece I completed the build and was able to move my arm from the Meastro Control program.   
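    To give a feel for what that synchronization script does, here is a rough C# sketch of the same sub-range mapping idea (all the numbers below are made up for illustration; the real values are the ones I calibrated for my servos, and the real script runs in the Maestro's own scripting language):

        // Maps the commanded position of the first shoulder servo to the position the
        // second (mirrored) servo should take, using a few calibrated sub-ranges.
        public static class ShoulderFollower
        {
            // (first servo, second servo) calibration pairs, ordered by the first value.
            private static readonly (double First, double Second)[] Points =
            {
                (1000, 2000),
                (1300, 1680),
                (1600, 1370),
                (2000, 1000),
            };

            public static double SecondServoTarget(double firstServoTarget)
            {
                // Clamp to the calibrated range.
                if (firstServoTarget <= Points[0].First) return Points[0].Second;
                if (firstServoTarget >= Points[^1].First) return Points[^1].Second;

                // Find the sub-range the target falls into and interpolate linearly inside it.
                for (int i = 1; i < Points.Length; i++)
                {
                    if (firstServoTarget <= Points[i].First)
                    {
                        double t = (firstServoTarget - Points[i - 1].First) /
                                   (Points[i].First - Points[i - 1].First);
                        return Points[i - 1].Second + t * (Points[i].Second - Points[i - 1].Second);
                    }
                }
                return Points[^1].Second; // not reached; keeps the compiler happy
            }
        }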
    The total cost of my robotic arm was:

         $10  laser cut parts
         $10  metal parts
         $45  servos
         $35  servo controller
        -----------------------
        $100  total

    So here you have all the information about the hardware. In the next post I’ll start talking about the software that I wrote in Microsoft Robotics Developer Studio 4. Stay tuned!

    Read the article

  • VS 2012 Code Review &ndash; Before Check In OR After Check In?

    - by Tarun Arora
    “Is Code Review Important and Effective?” There is a consensus across the industry that code review is an effective and practical way to collar code inconsistency and possible defects early in the software development life cycle. Among others some of the advantages of code reviews are, Bugs are found faster Forces developers to write readable code (code that can be read without explanation or introduction!) Optimization methods/tricks/productive programs spread faster Programmers as specialists "evolve" faster It's fun “Code review is systematic examination (often known as peer review) of computer source code. It is intended to find and fix mistakes overlooked in the initial development phase, improving both the overall quality of software and the developers' skills. Reviews are done in various forms such as pair programming, informal walkthroughs, and formal inspections.” Wikipedia No where does the definition mention whether its better to review code before the code has been committed to version control or after the commit has been performed. No matter which side you favour, Visual Studio 2012 allows you to request for a code review both before check in and also request for a review after check in. Let’s weigh the pros and cons of the approaches independently. Code Review Before Check In or Code Review After Check In? Approach 1 – Code Review before Check in Developer completes the code and feels the code quality is appropriate for check in to TFS. The developer raises a code review request to have a second pair of eyes validate if the code abides to the recommended best practices, will not result in any defects due to common coding mistakes and whether any optimizations can be made to improve the code quality.                                             Image 1 – code review before check in Pros Everything that gets committed to source control is reviewed. Minimizes the chances of smelly code making its way into the code base. Decreases the cost of fixing bugs, remember, the earlier you find them, the lesser the pain in fixing them. Cons Development Code Freeze – Since the changes aren’t in the source control yet. Further development can only be done off-line. The changes have not been through a CI build, hard to say whether the code abides to all build quality standards. Inconsistent! Cumbersome to track the actual code review process.  Not every change to the code base is worth reviewing, a lot of effort is invested for very little gain. Approach 2 – Code Review after Check in Developer checks in, random code reviews are performed on the checked in code.                                                      Image 2 – Code review after check in Pros The code has already passed the CI build and run through any code analysis plug ins you may have running on the build server. Instruct the developer to ensure ZERO fx cop, style cop and static code analysis before check in. Code is cleaner and smell free even before the code review. No Offline development, developers can continue to develop against the source control. Cons Bad code can easily make its way into the code base. Since the review take place much later in the cycle, the cost of fixing issues can prove to be much higher. Approach 3 – Hybrid Approach The community advocates a more hybrid approach, a blend of tooling and human accountability quotient.                                                               Image 3 – Hybrid Approach 1. Code review high impact check ins. 
It is not possible to review everything, by setting up code review check in policies you can end up slowing your team. More over, the code that you are reviewing before check in hasn't even been through a green CI build either. 2. Tooling. Let the tooling work for you. By running static analysis, fx cop, style cop and other plug ins on the build agent, you can identify the real issues that in my opinion can't possibly be identified using human reviews. Configure the tooling to report back top 10 issues every day. Mandate the manual code review of individuals who keep making it to this list of shame more often. 3. During Merge. I would prefer eliminating some of the other code issues during merge from Main branch to the release branch. In a scrum project this is still easier because cheery picking the merges is a possibility and the size of code being reviewed is still limited. Let the tooling work for you, if some one breaks the CI build often, put them on a gated check in build course until you see improvement. If some one appears on the top 10 list of shame generated via the build then ensure that all their code is reviewed till you see improvement. At the end of the day, the goal is to ensure that the code being delivered is top quality. By enforcing a code review before any check in, you force the developer to work offline or stay put till the review is complete. What do the experts say? So I asked a few expects what they thought of “Code Review quality gate before Checking in code?" Terje Sandstrom | Microsoft ALM MVP You mean a review quality gate BEFORE checking in code????? That would mean a lot of code staying either local or in shelvesets, and not even been through a CI build, and a green CI build being the main criteria for going further, f.e. to the review state. I would not like code laying around with no checkin’s. Having a requirement that code is checked in small pieces, 4-8 hours work max, and AT LEAST daily checkins, a manual code review comes second down the lane. I would expect review quality gates to happen before merging back to main, or before merging to release.  But that would all be on checked-in code.  Branching is absolutely one way to ease the pain.   Another way we are using is automatic quality builds, running metrics, coverage, static code analysis.  Unfortunately it takes some time, would be great to be on CI’s – but…., so it’s done scheduled every night. Based on this we get, among other stuff,  top 10 lists of suspicious code, which is then subjected to reviews.  If a person seems to be very popular on these top 10 lists, we subject every check in from that person to a review for a period. That normally helps.   None of the clients I have can afford to have every checkin reviewed, so we need to find ways around it. I don’t disagree with the nicety of having all the code reviewed, but I find it hard to find those resources in today’s enterprises. David V. Corbin | Visual Studio ALM Ranger I tend to agree with both sides. I hate having code that is not checked in, but at the same time hate having “bad” code in the repository. I have found that branching is one approach to solving this dilemma. Code is checked into the private/feature branch before the review, but is not merged over to the “official” branch until after the review. I advocate both, depending on circumstance (especially team dynamics)   - The “pre-checkin” is usually for elements that may impact the project as a whole. Think of it as another “gate” along with passing unit tests. 
- The “post-checkin” may very well not be at the changeset level, but correlates to a review at the “user story” level.   Again, this depends on team dynamics in play…. Robert MacLean | Microsoft ALM MVP I do not think there is no right answer for the industry as a whole. In short the question is why do you do reviews? Your question implies risk mitigation, so in low risk areas you can get away with it after check in while in high risk you need to do it before check in. An example is those new to a team or juniors need it much earlier (maybe that is before checkin, maybe that is soon after) than seniors who have shipped twenty sprints on the team. Abhimanyu Singhal | Visual Studio ALM Ranger Depends on per scenario basis. We recommend post check-in reviews when: 1. We don't want to block other checks and processes on manual code reviews. Manual reviews take time, and some pieces may not require manual reviews at all. 2. We need to trace all changes and track history. 3. We have a code promotion strategy/process in place. For risk mitigation, post checkin code can be promoted to Accepted branches. Or can be rejected. Pre Checkin Reviews are used when 1. There is a high risk factor associated 2. Reviewers are generally (most of times) have immediate availability. 3. Team does not have strict tracking needs. Simply speaking, no single process fits all scenarios. You need to select what works best for your team/project. Thomas Schissler | Visual Studio ALM Ranger This is an interesting discussion, I’m right now discussing details about executing code reviews with my teams. I see and understand the aspects you brought in, but there is another side as well, I’d like to point out. 1.) If you do reviews per check in this is not very practical as a hard rule because this will disturb the flow of the team very often or it will lead to reduce the checkin frequency of the devs which I would not accept. 2.) If you do later reviews, for example if you review PBIs, it is not easy to find out which code you should review. Either you review all changesets associate with the PBI, but then you might review code which has been changed with a later checkin and the dev maybe has already fixed the issue. Or you review the diff of the latest changeset of the PBI with the first but then you might also review changes of other PBIs. Jakob Leander | Sr. Director, Avanade In my experience, manual code review: 1. Does not get done and at the very least does not get redone after changes (regardless of intentions at start of project) 2. When a project actually do it, they often do not do it right away = errors pile up 3. Requires a lot of time discussing/defining the standard and for the team to learn it However code review is very important since e.g. even small memory leaks in a high volume web solution have big consequences In the last years I have advocated following approach for code review - Architects up front do “at least one best practice example” of each type of component and tell the team. Copy from this one. This should include error handling, logging, security etc. - Dev lead on project continuously browse code to validate that the best practices are used. Especially that patterns etc. are not broken. You can do this formally after each sprint/iteration if you want. Once this is validated it is unlikely to “go bad” even during later code changes Agree with customer to rely on static code analysis from Visual Studio as the one and only coding standard. 
This has HUUGE benefits - You can easily tweak to reach the level you desire together with customer - It is easy to measure for both developers/management - It is 100% consistent across code base - It gets validated all the time so you never end up getting hammered by a customer review in the end - It is easy to tell the developer that you do not want code back unless it has zero errors = minimize communication You need to track this at least during nightly builds and make sure team sees total # issues. Do not allow #issues it to grow uncontrolled. On the project I run I require code analysis to have run on code before checkin (checkin rule). This means -  You have to have clean compile (or CA wont run) so this is extra benefit = very few broken builds - You can change a few of the rules to compile as errors instead of warnings. I often do this for “missing dispose” issues which you REALLY do not want in your app Tip: Place your custom CA rules files as part of solution. That  way it works when you do branching etc. (path to CA file is relative in VS) Some may argue that CA is not as good as manual inspection. But since manual inspection in reality suffers from the 3 issues in start it is IMO a MUCH better (and much cheaper) approach from helicopter perspective Tirthankar Dutta | Director, Avanade I think code review should be run both before and after check ins. There are some code metrics that are meant to be run on the entire codebase … Also, especially on multi-site projects, one should strive to architect in a way that lets men manage the framework while boys write the repetitive code… scales very well with the need to review less by containment and imposing architectural restrictions to emphasise the design. Bruno Capuano | Microsoft ALM MVP For code reviews (means peer reviews) in distributed team I use http://www.vsanywhere.com/default.aspx  David Jobling | Global Sr. Director, Avanade Peer review is the only way to scale and its a great practice for all in the team to learn to perform and accept. In my experience you soon learn who's code to watch more than others and tune the attention. Mikkel Toudal Kristiansen | Manager, Avanade If you have several branches in your code base, you will need to merge often. This requires manual merging, when a file has been changed in both branches. It offers a good opportunity to actually review to changed code. So my advice is: Merging between branches should be done as often as possible, it should be done by a senior developer, and he/she should perform a full code review of the code being merged. As for detecting architectural smells and code smells creeping into the code base, one really good third party tools exist: Ndepend (http://www.ndepend.com/, for static code analysis of the current state of the code base). You could also consider adding StyleCop to the solution. Jesse Houwing | Visual Studio ALM Ranger I gave a presentation on this subject on the TechDays conference in NL last year. See my presentation and slides here (talk in Dutch, but English presentation): http://blog.jessehouwing.nl/2012/03/did-you-miss-my-techdaysnl-talk-on-code.html  I’d like to add a few more points: - Before/After checking is mostly a trust issue. If you have a team that does diligent peer reviews and regularly talk/sit together or peer review, there’s no need to enforce a before-checkin policy. The peer peer-programming and regular feedback during development can take care of most of the review requirements as long as the team isn’t under stress. 
- Under stress, enforce pre-checkin reviews, it might sound strange, if you’re already under time or budgetary constraints, but it is under such conditions most real issues start to be created or pile up. - Use tools to catch most common errors, Code Analysis/FxCop was already mentioned. HP Fortify, Resharper, Coderush etc can help you there. There are also a lot of 3rd party rules you can add to Code Analysis. I’ve written a few myself (http://fccopcontrib.codeplex.com) and various teams from Microsoft have added their own rules (MSOCAF for SharePoint, WSSF for WCF). For common errors that keep cropping up, see if you can define a rule. It’s much easier. But more importantly make sure you have a good help page explaining *WHY* it's wrong. If you have small feature or developer branches/shelvesets, you might want to review pre-merge. It’s still better to do peer reviews and peer programming, but the most important thing is that bad quality code doesn’t make it into the important branch. So my philosophy: - Use tooling as much as possible. - Make sure the team understands the tooling and the importance of the things it flags. It’s too easy to just click suppress all to ignore the warnings. - Under stress, tighten process, it’s under stress that the problems of late reviews will really surface - Most importantly if you do reviews do them as early as possible, but never later than needed. In other words, pre-checkin/post checking doesn’t really matter, as long as the review is done before the code is released. It’ll just be much more expensive to fix any review outcomes the later you find them. --- I would love to hear what you think!

    Read the article

  • Glibc importance of error ...

    - by Oz123
    Hi everyone, I am following LFS 6.7 and have reached the point where I compile glibc-2.12.1. I mounted the LFS partition with the atime option; here is a confirmation of that, I think: /dev/sdb1 on /mnt/lfs type ext4 (rw). I get the following errors when running the tests, and I have no clue whether I should try to resolve them or just ignore them and carry on:

        rpc/types.h sunrpc/rpc/svc_auth.h sunrpc/rpcsvc/bootparam.h sysvipc/sys/ipc.h \
        sysvipc/sys/msg.h sysvipc/sys/sem.h sysvipc/sys/shm.h termios/termios.h \
        termios/sys/termios.h termios/sys/ttychars.h time/time.h time/sys/time.h \
        time/sys/timeb.h wcsmbs/wchar.h wctype/wctype.h > \
        /sources/glibc-build/begin-end-check.out
        make[1]: Target `check' not remade because of errors.
        make[1]: Leaving directory `/sources/glibc-2.12.1'
        make: *** [check] Error 2
        root:/sources/glibc-build# grep Error glibc-check-log
        make[2]: *** [/sources/glibc-build/math/test-float.out] Error 1
        make[2]: *** [/sources/glibc-build/math/test-ifloat.out] Error 1
        make[1]: *** [math/tests] Error 2
        make[2]: [/sources/glibc-build/posix/annexc.out] Error 1 (ignored)
        make: *** [check] Error 2

    Thanks in advance, Oz

    Read the article

  • Creating Visual Studio projects that only contain static files

    - by Eilon
    Have you ever wanted to create a Visual Studio project that only contained static files and didn’t contain any code? While working on ASP.NET MVC we had a need for exactly this type of project. Most of the projects in the ASP.NET MVC solution contain code, such as managed code (C#), unit test libraries (C#), and Script# code for generating our JavaScript code. However, one of the projects, MvcFuturesFiles, contains no code at all. It only contains static files that get copied to the build output folder: As you may well know, adding static files to an existing Visual Studio project is easy. Just add the file to the project and in the property grid set its Build Action to “Content” and the Copy to Output Directory to “Copy if newer.” This works great if you have just a few static files that go along with other code that gets compiled into an executable (EXE, DLL, etc.). But this solution does not work well if the projects only contains static files and has no compiled code. If you create a new project in Visual Studio and add static files to it you’ll still get an EXE or DLL copied to the output folder, despite not having any actual code. We wanted to avoid having a teeny little DLL generated in the output folder. In ASP.NET MVC 2 we came up with a simple solution to this problem. We started out with a regular C# Class Library project but then edited the project file to alter how it gets built. The critical part to get this to work is to define the MSBuild targets for Build, Clean, and Rebuild to perform custom tasks instead of running the compiler. The Build, Clean, and Rebuild targets are the three main targets that Visual Studio requires in every project so that the normal UI functions properly. If they are not defined then running certain commands in Visual Studio’s Build menu will cause errors. 
    Once you create the class library project, there are a few easy steps to change it into a static file project. The first step in editing the csproj file is to remove the reference to the Microsoft.CSharp.targets file, because the project doesn’t contain any C# code:

        <Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />

    The second step is to define the new Build, Clean, and Rebuild targets to delete and then copy the content files:

        <Target Name="Build">
          <Copy SourceFiles="@(Content)"
                DestinationFiles="@(Content->'$(OutputPath)%(RelativeDir)%(Filename)%(Extension)')" />
        </Target>
        <Target Name="Clean">
          <Exec Command="rd /s /q $(OutputPath)" Condition="Exists($(OutputPath))" />
        </Target>
        <Target Name="Rebuild" DependsOnTargets="Clean;Build">
        </Target>

    The third and last step is to add all the files to the project as normal Content files (as you would do in any project type). To see how we did this in the ASP.NET MVC 2 project you can download the source code and inspect the MvcFuturesFiles.csproj project file. If you’re working on a project that contains many static files I hope this solution helps you out!

    Read the article

  • 24 DIY Softbox Designs for Cheap and Flexible Photography Lighting [DIY]

    - by Jason Fitzpatrick
    If you’re looking for just the right softbox for your budget and photography needs, this collection of 24 great softbox designs is bound to have the perfect fit. DIY Photography hosted a DIY softbox contest, culled the 70 entries down to the top 24 designs, and rounded up the photo tours and build guides for you to browse. You can build the foamcore and CFL model seen in the photo above by following the build guide here. Hit up the link below to check out all the other designs, which range from full-body softboxes to on-camera softboxes. How To Build 24 DIY Softboxes [DIY Photography via Make]

    Read the article

  • Dependent on CVS tagging for automated builds

    - by OMG Ponies
    My current work relies on using tags in CVS for an automated build process (Ant currently) to build for the respective environments (development, QA, production). From our research, neither Git nor Subversion supports tagging in the same manner (please correct me if that's wrong). So how would Ant or Maven know what to pick up for the respective build? Example: for a webapp, viewing our repository history for the web.xml file would look like:

        web.xml v1
        ...
        web.xml v1.2.3   Tag: Prod
        web.xml v1.2.4
        web.xml v1.2.5   Tag: QA
        web.xml v1.2.6
        web.xml v1.2.7   Head

    The Ant build scripts are run as cron jobs, at different times and intervals for the different environments. Each environment's build is based on a checkout of the repository at its tag. Development continues, and eventually the respective tags are moved:

        web.xml v1
        ...
        web.xml v1.2.3
        web.xml v1.2.4
        web.xml v1.2.5
        web.xml v1.2.6   Tag: Prod
        web.xml v1.2.7   Tag: QA
        web.xml v1.2.8   Head

    Read the article

< Previous Page | 279 280 281 282 283 284 285 286 287 288 289 290  | Next Page >