Search Results

Search found 31075 results on 1243 pages for 'change'.


  • Fun tips with Analytics

    - by user12620172
    If you read this blog, I am assuming you are at least familiar with the Analytics functions in the ZFSSA. They are basically amazing: very powerful and deep. However, you may not be aware of some great, hidden functions inside the Analytics screen. Once you open a metric, a toolbar appears above the chart. I'm not going over every tool, as we have done that before, and you can hover your mouse over them and they will tell you what they do. But check this out.

    Open a metric (CPU Percent Utilization works fine) and click on the "Hour" button, which is the 2nd clock icon. That's easy; you are now looking at the last hour of data. Now hold down your Shift key and click it again. Now you are looking at 2 hours of data. Hold down Shift and click it again, and you are looking at 3 hours of data. Are you catching on yet? You can do this not only with the "Hour" button, but also with the "Minute", "Day", "Week", and "Month" buttons. Very cool. It also works with the "Show Minimum" and "Show Maximum" buttons, allowing you to go to the next iteration of either of those.

    One last button you can Shift-click is the handy "Drill" button. This button usually drills down on one specific aspect of your metric. If you Shift-click it, it will display a "Rainbow Highlight" of the current metric. This works best if the metric has many "Range Average" items in the left-hand window. Give it a shot.

    Also, one will sometimes click on a certain second of data in the graph. In this case, I clicked 4:57 and 21 seconds, and the "Range Average" on the left went away and was replaced by the timestamp. At this point some people think they are stuck and cannot get back to an average for the whole chart. However, you can actually click on the timestamp of "4:57:21" right above the chart. Even though your mouse does not change into the typical browser finger that most links give you, you can click it, and it will change your range back to the full metric.

    Another trick you may like is to save a certain view or look of a group of graphs. Most of you know you can save a worksheet, but did you know you can Sync them, Pause them, and then Save it? This will save the paused state, allowing you to view it forever the way you see it now.

    Heatmaps. Heatmaps are cool. Some metrics use them and some don't. If you have one and wish to zoom it vertically, try this: open a heatmap metric (I believe every metric that deals with latency will show as a heatmap), select one or two of the ranges on the left, click the "Change Outlier Elimination" button, then click it again and check out what it does.

    Enjoy. Perhaps my next blog entry will cover the best Analytics metrics to keep your eyes on, and how you can use the Alerts feature to watch them for you.

    Steve

    Read the article

  • Problem with sprite direction and rotation

    - by user2236165
    I have a sprite called Tool that moves at a speed represented as a float and in a direction represented as a Vector2. When I click the mouse on the screen, the sprite changes its direction and starts to move towards the mouse click. In addition, I rotate the sprite so that it is facing in the direction it is heading. However, when I add a camera that is supposed to follow the sprite so that the sprite is always centered on the screen, the sprite won't move in the given direction and the rotation isn't accurate anymore. This only happens when I add Camera.View to the spriteBatch.Begin() call. I was hoping anyone could shed some light on what I am missing in my code; that would be highly appreciated.

    Here is the camera class I use:

        public class Camera
        {
            private const float zoomUpperLimit = 1.5f;
            private const float zoomLowerLimit = 0.1f;
            private float _zoom;
            private Vector2 _pos;
            private int ViewportWidth, ViewportHeight;

            #region Properties

            public float Zoom
            {
                get { return _zoom; }
                set
                {
                    _zoom = value;
                    if (_zoom < zoomLowerLimit) _zoom = zoomLowerLimit;
                    if (_zoom > zoomUpperLimit) _zoom = zoomUpperLimit;
                }
            }

            public Rectangle Viewport
            {
                get
                {
                    int width = (int)(ViewportWidth / _zoom);
                    int height = (int)(ViewportHeight / _zoom);
                    return new Rectangle((int)(_pos.X - width / 2), (int)(_pos.Y - height / 2), width, height);
                }
            }

            public void Move(Vector2 amount)
            {
                _pos += amount;
            }

            public Vector2 Position
            {
                get { return _pos; }
                set { _pos = value; }
            }

            public Matrix View
            {
                get
                {
                    return Matrix.CreateTranslation(new Vector3(-_pos.X, -_pos.Y, 0)) *
                           Matrix.CreateScale(new Vector3(Zoom, Zoom, 1)) *
                           Matrix.CreateTranslation(new Vector3(ViewportWidth * 0.5f, ViewportHeight * 0.5f, 0));
                }
            }

            #endregion

            public Camera(Viewport viewport, float initialZoom)
            {
                _zoom = initialZoom;
                _pos = Vector2.Zero;
                ViewportWidth = viewport.Width;
                ViewportHeight = viewport.Height;
            }
        }

    And here are my Update and Draw methods:

        protected override void Update(GameTime gameTime)
        {
            float elapsed = (float)gameTime.ElapsedGameTime.TotalSeconds;
            TouchCollection touchCollection = TouchPanel.GetState();
            foreach (TouchLocation tl in touchCollection)
            {
                if (tl.State == TouchLocationState.Pressed || tl.State == TouchLocationState.Moved)
                {
                    // direction the tool shall move towards
                    direction = touchCollection[0].Position - toolPos;
                    if (direction != Vector2.Zero)
                    {
                        direction.Normalize();
                    }
                    // change the direction the tool is moving and find the rotation angle
                    // the texture must rotate to point in the given direction
                    toolPos += (direction * speed * elapsed);
                    RotationAngle = (float)Math.Atan2(direction.Y, direction.X);
                }
            }
            if (direction != Vector2.Zero)
            {
                direction.Normalize();
            }
            // move tool in given direction
            toolPos += (direction * speed * elapsed);
            // change camera centre to the tool's position
            Camera.Position = toolPos;
            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            graphics.GraphicsDevice.Clear(Color.Blue);
            spriteBatch.Begin(SpriteSortMode.BackToFront, BlendState.AlphaBlend, null, null, null, null, Camera.View);
            spriteBatch.Draw(tool, new Vector2(toolPos.X, toolPos.Y), null, Color.White, RotationAngle, originOfToolTexture, 1, SpriteEffects.None, 1);
            spriteBatch.End();
            base.Draw(gameTime);
        }
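
    A possible cause worth checking (an assumption on my part, not something stated in the question): once the batch is drawn with Camera.View, toolPos lives in world space while the TouchPanel coordinates stay in screen space, so the direction mixes the two spaces. A minimal sketch of the usual conversion, reusing the names from the question:

        // Sketch: transform the screen-space touch into world space by inverting
        // the camera's view matrix before computing the direction.
        Vector2 screenPos = touchCollection[0].Position;
        Vector2 worldPos = Vector2.Transform(screenPos, Matrix.Invert(Camera.View));
        direction = worldPos - toolPos; // both sides are now world-space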

    Read the article

  • Project Time Tracker

    - by Geertjan
    Based on yesterday's blog entry, let's do something semi-useful and display, in the project popup (available when you right-click a project in the Projects window), the time since the last change was made anywhere in the project. That is, we can listen recursively for any changes done within a project and then update the popup with the newly acquired information, dynamically:

        import java.awt.event.ActionEvent;
        import java.text.DateFormat;
        import java.text.SimpleDateFormat;
        import java.util.ArrayList;
        import java.util.Collection;
        import java.util.List;
        import javax.swing.AbstractAction;
        import org.netbeans.api.project.Project;
        import org.netbeans.api.project.ProjectUtils;
        import org.openide.awt.ActionID;
        import org.openide.awt.ActionReference;
        import org.openide.awt.ActionRegistration;
        import org.openide.awt.StatusDisplayer;
        import org.openide.filesystems.FileAttributeEvent;
        import org.openide.filesystems.FileChangeListener;
        import org.openide.filesystems.FileEvent;
        import org.openide.filesystems.FileRenameEvent;
        import org.openide.util.Lookup;
        import org.openide.util.LookupEvent;
        import org.openide.util.LookupListener;
        import org.openide.util.Utilities;
        import org.openide.util.WeakListeners;

        @ActionID(category = "Demo", id = "org.ptt.TrackProjectSelectionAction")
        @ActionRegistration(lazy = false, displayName = "NOT-USED")
        @ActionReference(path = "Projects/Actions", position = 0)
        public final class TrackProjectSelectionAction extends AbstractAction
                implements LookupListener, FileChangeListener {

            private Lookup.Result<Project> projects;
            private Project context;
            private Long startTime;
            private Long changedTime;
            private DateFormat formatter;
            private List<Project> timedProjects;

            public TrackProjectSelectionAction() {
                putValue("popupText", "Timer");
                formatter = new SimpleDateFormat("HH:mm:ss");
                timedProjects = new ArrayList<Project>();
                projects = Utilities.actionsGlobalContext().lookupResult(Project.class);
                projects.addLookupListener(
                        WeakListeners.create(LookupListener.class, this, projects));
                resultChanged(new LookupEvent(projects));
            }

            @Override
            public void resultChanged(LookupEvent le) {
                Collection<? extends Project> allProjects = projects.allInstances();
                if (allProjects.size() == 1) {
                    Project currentProject = allProjects.iterator().next();
                    if (!timedProjects.contains(currentProject)) {
                        String currentProjectName =
                                ProjectUtils.getInformation(currentProject).getDisplayName();
                        putValue("popupText", "Start Timer for Project: " + currentProjectName);
                        StatusDisplayer.getDefault().setStatusText(
                                "Current Project: " + currentProjectName);
                        timedProjects.add(currentProject);
                        context = currentProject;
                    }
                }
            }

            @Override
            public void actionPerformed(ActionEvent e) {
                refresh();
            }

            protected void refresh() {
                startTime = System.currentTimeMillis();
                String formattedStartTime = formatter.format(startTime);
                putValue("popupText", "Timer started: " + formattedStartTime
                        + " (" + ProjectUtils.getInformation(context).getDisplayName() + ")");
            }

            @Override
            public void fileChanged(FileEvent fe) {
                changedTime = System.currentTimeMillis();
                formatter = new SimpleDateFormat("mm:ss");
                String formattedLapse = formatter.format(changedTime - startTime);
                putValue("popupText", "Time since last change: " + formattedLapse
                        + " (" + ProjectUtils.getInformation(context).getDisplayName() + ")");
                startTime = changedTime;
            }

            @Override
            public void fileFolderCreated(FileEvent fe) {}
            @Override
            public void fileDataCreated(FileEvent fe) {}
            @Override
            public void fileDeleted(FileEvent fe) {}
            @Override
            public void fileRenamed(FileRenameEvent fre) {}
            @Override
            public void fileAttributeChanged(FileAttributeEvent fae) {}
        }

    Some more work needs to be done to complete the above: for each project you somehow need to maintain the start time and last change, and redisplay them whenever the user right-clicks the project.

    Read the article

  • Rethinking Oracle Optimizer Statistics for P6 Part 2

    - by Brian Diehl
    In the previous post (Part 1), I tried to draw some key insights about the relationship between P6 and Oracle Optimizer Statistics. The first is that average cardinality has the greatest impact on query optimization, and that the particular queries generated by P6 are more likely to use this average during calculations. The second is that these statistics are unlikely to change greatly over the life of the application. Ultimately, our goal is to get the best query optimization possible. Or is it?

    Stability

    No application administrator wants to get the call at 9am that their application users cannot get their work done because everything is running slow. This is a possibility with a regularly scheduled nightly collection of statistics. It may not just be slow performance, but a complete loss of service because one or more queries are optimized poorly. Ideally, this should not be the case: the database optimizer should make better decisions with more up-to-date data. Better statistics may give an incremental performance benefit, but this benefit must be balanced against the potential cost of system downtime.

    It is stability that we ultimately desire, not absolute optimal performance. We do want the benefit of more accurate statistics and better query plans, but not at the risk of an unusable system. As a result, I've developed the following methodology around managing database statistics for the P6 database.

    1. No Automatic Re-Gathering - A daily, weekly, or other interval of statistics gathering is unlikely to be beneficial. Quite the opposite: it is more likely to cause problems.

    2. Smart Re-Gathering - The time to collect statistics is when things have changed significantly. For a new installation of P6, this happens more often because the data is growing from a few rows to thousands and more. But for a mature system, the data is not changing significantly from week to week. There are times to collect statistics:
      • New releases of the application
      • Changes in the underlying hardware or software versions (e.g. a new Oracle RDBMS version)
      • When additional user groups are added; the new groups may use the software in significantly different ways
      • After significant changes in the data; this may be monthly, quarterly or yearly

    3. Always Test - If you take away one thing from this post, it would be to always have a plan to test after changing statistics. In reality, statistics can be collected as often as you desire, provided there are tests in place to verify that performance is the same or better. These might be automated tests or simply a manual script of application functions.

    4. Have a Way Out - Never change the statistics without a way to return to the previous set. Think of the statistics as one part of the overall application code, alongside the source code--both application and RDBMS. It would be foolish to change to new code without a way to get back to the previous version.

    In the final post, I will talk about the actual script I created for P6 PMDB and a possible future direction for managing query performance.

    Read the article

  • How to be Agile when new work keeps affecting completed work?

    - by jdln
    The project I'm working on is to re-skin an existing website. The functionality will stay the same; it's just the styles that are changing. The HTML is not changing; I'm only modifying the CSS files. The site is pretty complex. There are dozens of pages. Users can be logged in and have a number of different roles, and depending on their role, the content of the page and the pages they are allowed to see vary. We're using Git and GitHub.

    I'm trying to write CSS that works as components, so that when the same form elements, headings, etc. appear on multiple pages they are already styled and are consistent. Most of the time this works well. Sadly, the format and class names in the HTML are at times messy and unpredictable, and when I fix something on one page it can break another. The job is also harder because no one knows exactly all the variations that are possible due to the user roles; I'm continuously finding new variations as I go along.

    I'm making headway by putting a lot of comments in my CSS. If I need to remove a CSS rule, I'll comment it out so I can still see it with the Chrome dev tools, and I'll put a comment in the CSS saying why I removed it and for what page. This means that if, on another page, I'm about to add the rule back to fix a different problem, there is a better chance I will see how this would break the first page. That lets me either find a different solution that works for both pages, or make the override page-specific. This has been working quite well for me.

    If I had complete free rein and the only deadline was to finish the project by the end, this method would be fine. However, my manager is trying to mitigate risk by breaking the work into areas to be completed per sprint. This runs counter to how I have been approaching things, as something like my typography styles will affect all other pages on the site. The other issue is that the different stakeholders want to sign off each section as I go along, yet once I've finished a section it may still change if I touch CSS that affects both it and a new section I'm working on. I've asked that the stakeholders do a quick unofficial sign-off in stages (e.g. per sprint) and a final official sign-off at the end of the project, but this is being met with resistance. I do understand why it would be higher risk, but the only way to guarantee that a signed-off section will not change is to make ALL future changes page-specific.

    In addition, I'm being told that all work I push to the Git repo should be ready to go live, and as such should not contain any code comments. This is risky for me, as I won't know until I've finished the site whether I will ever benefit from those comments.

    Has anyone else been in a similar situation and managed to find a compromise between this development approach and the desire of management and stakeholders for a more Agile process? A more Agile workflow works great when you can break the work into components and know that once something is done it won't be affected by future work; the nature of this project makes that hard to achieve.

    Read the article

  • Reactive Extensions vs FileSystemWatcher

    - by Joel Mueller
    One of the things that has long bugged me about the FileSystemWatcher is the way it fires multiple events for a single logical change to a file. I know why it happens, but I don't want to have to care - I just want to reparse the file once, not 4-6 times in a row. Ideally, there would be an event that only fires when a given file is done changing, rather than at every step along the way. Over the years I've come up with various solutions to this problem, of varying degrees of ugliness. I thought Reactive Extensions would be the ultimate solution, but there's something I'm not doing right, and I'm hoping someone can point out my mistake.

    I have an extension method:

        public static IObservable<IEvent<FileSystemEventArgs>> GetChanged(this FileSystemWatcher that)
        {
            return Observable.FromEvent<FileSystemEventArgs>(that, "Changed");
        }

    Ultimately, I would like to get one event per filename within a given time period - so that four events in a row with a single filename are reduced to one event, but I don't lose anything if multiple files are modified at the same time. BufferWithTime sounds like the ideal solution:

        var bufferedChange = watcher.GetChanged()
            .Select(e => e.EventArgs.FullPath)
            .BufferWithTime(TimeSpan.FromSeconds(1))
            .Where(e => e.Count > 0)
            .Select(e => e.Distinct());

    When I subscribe to this observable, a single change to a monitored file triggers my subscription method four times in a row, which rather defeats the purpose. If I remove the Distinct() call, I see that each of the four calls contains two identical events - so there is some buffering going on. Increasing the TimeSpan passed to BufferWithTime seems to have no effect - I went as high as 20 seconds without any change in behavior. This is my first foray into Rx, so I'm probably missing something obvious. Am I doing it wrong? Is there a better approach? Thanks for any suggestions...
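
    For comparison, here is a pattern often suggested for this kind of debouncing (a hedged sketch, not from the original question; it assumes the GetChanged() extension above and a FileSystemWatcher named watcher): group the events by file path and throttle each group, so each file fires once after a quiet period instead of being buffered on a global clock.

        // Sketch: one notification per file after 1 second of silence for that file.
        var perFileChange = watcher.GetChanged()
            .Select(e => e.EventArgs.FullPath)
            .GroupBy(path => path)                                  // one inner stream per file
            .SelectMany(g => g.Throttle(TimeSpan.FromSeconds(1)));  // emit when a file goes quiet

        using (perFileChange.Subscribe(path => Console.WriteLine("Reparse: " + path)))
        {
            Console.ReadLine(); // keep the subscription alive while watching
        }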

    Read the article

  • WPF Toolkit DataGridCell Style DataTrigger

    - by KrisTrip
    I am trying to change the color of a cell to yellow if its value has been updated in the DataGrid. My XAML:

        <toolkit:DataGrid x:Name="TheGrid" ItemsSource="{Binding}" IsReadOnly="False"
                          CanUserAddRows="False" CanUserResizeRows="False"
                          AutoGenerateColumns="False" CanUserSortColumns="False"
                          SelectionUnit="CellOrRowHeader" EnableColumnVirtualization="True"
                          VerticalScrollBarVisibility="Auto" HorizontalScrollBarVisibility="Auto">
            <toolkit:DataGrid.CellStyle>
                <Style TargetType="{x:Type toolkit:DataGridCell}">
                    <Style.Triggers>
                        <DataTrigger Binding="{Binding IsDirty}" Value="True">
                            <Setter Property="Background" Value="Yellow"/>
                        </DataTrigger>
                    </Style.Triggers>
                </Style>
            </toolkit:DataGrid.CellStyle>
        </toolkit:DataGrid>

    The grid is bound to a List of arrays (displaying a table of values, kind of like Excel would). Each value in the array is a custom object that contains an IsDirty dependency property, which gets set when the value is changed. When I run this:

      • change a value in column 1 = the whole row goes yellow
      • change a value in any other column = nothing happens

    I want only the changed cell to go yellow, no matter what column it's in. Do you see anything wrong with my XAML?

    Read the article

  • WCF listenBacklog and maxConnections can't be set higher than 10. Why not?

    - by NeedWCFPro
    My service works great under low load, but under high load I start to get connection errors. I know about the other settings, but I am trying to change the listenBacklog parameter in particular for my TCP buffered binding. If I set listenBacklog="10", I am able to telnet into the port where my WCF service is running. If I change listenBacklog to anything higher than 10, it will not let me telnet into my service when it is running. No errors seem to be thrown. What can I do? I get the same problem when I change maxConnections away from 10. All other properties of the binding I can set higher without a problem. Here is what my binding looks like:

        <bindings>
          <netTcpBinding>
            <binding name="NetTcpBinding_IMyService" closeTimeout="00:01:00"
                     openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00"
                     transactionFlow="false" transferMode="Buffered"
                     transactionProtocol="OleTransactions"
                     hostNameComparisonMode="StrongWildcard" listenBacklog="10"
                     maxBufferPoolSize="524288" maxBufferSize="1048576"
                     maxConnections="10" maxReceivedMessageSize="1048576">
              <readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647"
                            maxArrayLength="2147483647" maxBytesPerRead="2147483647"
                            maxNameTableCharCount="2147483647" />
              <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" />
              <security mode="Transport">
                <transport clientCredentialType="Windows" protectionLevel="EncryptAndSign">
                </transport>
                <message clientCredentialType="Windows" />
              </security>
            </binding>
            ...

    I really need to increase the values of maxConnections and listenBacklog.
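
    For reference, the same two knobs can also be set from code on a NetTcpBinding; a minimal sketch (the contract name comes from the config above, the host setup itself is illustrative, and it assumes using System.ServiceModel;):

        // Sketch: configure listenBacklog/maxConnections programmatically.
        var binding = new NetTcpBinding(SecurityMode.Transport)
        {
            ListenBacklog = 200,              // pending-connection queue length
            MaxConnections = 200,             // pooled/active connection limit
            TransferMode = TransferMode.Buffered
        };
        var host = new ServiceHost(typeof(MyService));      // MyService is hypothetical
        host.AddServiceEndpoint(typeof(IMyService), binding,
            "net.tcp://localhost:808/MyService");           // address is hypothetical
        host.Open();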

    Read the article

  • Crystal Reports Reportviewer - Set Datasource Dynamically Not Working :argh:

    - by Albert
    I'm running CR XI and accessing .RPT files through a ReportViewer in my ASP.NET pages. I've already got the following code, which is supposed to set the report datasource dynamically:

        rptSP = New ReportDocument
        Dim rptPath As String = Request.QueryString("report")
        rptSP.Load(rptPath.ToString, 0)
        Dim SConn As New System.Data.SqlClient.SqlConnectionStringBuilder( _
            ConfigurationManager.ConnectionStrings("MyConnectionString").ConnectionString)
        rptSP.DataSourceConnections(SConn.DataSource, SConn.InitialCatalog).SetConnection( _
            SConn.DataSource, SConn.InitialCatalog, SConn.UserID, SConn.Password)
        Dim myConnectionInfo As ConnectionInfo = New ConnectionInfo
        myConnectionInfo.ServerName = SConn.DataSource
        myConnectionInfo.DatabaseName = SConn.InitialCatalog
        myConnectionInfo.UserID = SConn.UserID
        myConnectionInfo.Password = SConn.Password
        'Two new methods to loop through all objects and tables contained in the
        'requested report and set login credentials for each object and table.
        SetDBLogonForReport(myConnectionInfo, rptSP)
        SetDBLogonForSubreports(myConnectionInfo, rptSP)
        Me.CrystalReportViewer1.ReportSource = rptSP

    But when I go into each .RPT file and open up the Database Expert section, there are obviously still server names hardcoded in there, and the code listed above doesn't seem to be able to change them. I say this because I have training and production environments. When the .RPT file is hardcoded with my production server, and I open it on my training server with the code above (and the web.config has the training server in the connection string), I get the ol': Object reference not set to an instance of an object. And then if I go into the .RPT file, change the datasource over to the training server, and try to open it again, it works fine. Why doesn't the code above overwrite the .RPT file's datasource? How can I avoid having to open up each .RPT and change the datasource when migrating reports from server to server? Is there a setting in the .RPT file I'm missing, or something?

    Read the article

  • Problem with cascade delete using Entity Framework and System.Data.SQLite

    - by jamone
    I have a SQLite DB that is set up so that when I delete a Person, the delete is cascaded. This works fine when I manually delete a Person (all records that reference the PersonID are deleted). But when I use Entity Framework to delete the Person, I get an error:

        System.InvalidOperationException: The operation failed: The relationship could not
        be changed because one or more of the foreign-key properties is non-nullable. When
        a change is made to a relationship, the related foreign-key property is set to a
        null value. If the foreign-key does not support null values, a new relationship
        must be defined, the foreign-key property must be assigned another non-null value,
        or the unrelated object must be deleted.

    I don't understand why this is occurring. My trigger is set to clean up all related objects before deleting the object it was told to delete. When I go into the model editor and check the properties of the relationship, it shows no action for the OnDelete property. Why isn't this set correctly by pulling it from the DB? If I change this value to Cascade, everything works properly, but I would rather not rely on this manual change--what if I refresh my model from the DB and it loses that? Here's the relevant SQL for my tables:

        CREATE TABLE [SomeTable] (
            [SomeTableID] INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT,
            [PersonID] INTEGER NOT NULL REFERENCES [Person](PersonID) ON DELETE CASCADE
        )

        CREATE TABLE [Person] (
            [PersonID] INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT
        )
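
    As an aside (my reading, not something confirmed in the question): the exception is raised client-side by EF's state manager before the database cascade ever runs, so if the model setting cannot be relied on, one workaround is to delete the dependents through the context first. A sketch with hypothetical context and navigation-property names:

        // Sketch (EF4 ObjectContext API): delete children in code so the state
        // manager never sees an orphaned non-nullable foreign key.
        using (var ctx = new MyEntities())                  // MyEntities is hypothetical
        {
            var person = ctx.People.First(p => p.PersonID == personId);
            foreach (var row in person.SomeTables.ToList()) // navigation property assumed
                ctx.DeleteObject(row);
            ctx.DeleteObject(person);
            ctx.SaveChanges();
        }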

    Read the article

  • Password Recovery without sending password via email

    - by Brian
    So, I've been playing with asp:PasswordRecovery and discovered I really don't like it, for several reasons:

    1) Alice's password can be reset even without having access to Alice's email. A security question for password resets mitigates this, but does not really satisfy me.

    2) Alice's new password is sent back to her in cleartext. I would rather send her a special link to my page (e.g. a page like example.com/recovery.aspx?P=lfaj0831uefjc), which would let her change her password.

    I imagine I could do this myself by creating some sort of table of expiring password-recovery pages and sending those pages to users who ask for a reset. Somehow those pages could also change user passwords behind the scenes (e.g. by resetting them manually and then using the text of the new password to change the password, since a password cannot be changed without knowing the old one). I'm sure others have had this problem before, and that kind of solution strikes me as a little hacky. Is there a better way to do this? An ideal solution does not violate encapsulation by accessing the database directly but instead uses the existing stored procedures within the database... though that may not be possible.
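
    To make the token idea concrete, here is a minimal sketch of generating an expiring, URL-safe reset token (my own illustration, not an existing API; the storage step is left as a comment because the schema isn't specified in the question):

        // Sketch: create a random, URL-safe token with a one-hour expiry.
        using System;
        using System.Security.Cryptography;

        public static class ResetTokens
        {
            public static string Create(string userName)
            {
                var bytes = new byte[32];
                using (var rng = RandomNumberGenerator.Create())
                    rng.GetBytes(bytes);
                string token = Convert.ToBase64String(bytes)
                    .Replace('+', '-').Replace('/', '_').TrimEnd('='); // safe in a query string
                // TODO: persist (hash of token, userName, DateTime.UtcNow.AddHours(1))
                // in a recovery table, then e-mail example.com/recovery.aspx?P=<token>.
                return token;
            }
        }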

    Read the article

  • Force Postback from code behind? Or reload JavaScript from an Asynchronous Postback?

    - by sah302
    Hi all, I've got a jQuery UI dialog that pops up to confirm the creation of an item after filling out a form. The form is in an UpdatePanel due to various needs, especially because I want validation done on the form without reloading the page. JavaScript appears not to reload on an asynchronous postback. This means that when the form is a success and I change the variable formSubmitPass to true, it does not get passed to the JavaScript via <%= formSubmitPass %>. If I add a trigger to the submit button to do a full postback, it works. However, as I said, I don't want the submit button to do a full postback, so that I can validate the form within the UpdatePanel. How can I have the form validate asynchronously, but have my JavaScript properly reload when the form is completed successfully and the item is saved to the database? JavaScript:

        var formSubmitPass = '<%= formSubmitPass %>';
        var redirectUrl = '<%= redirectUrl %>';

        function pageLoad() {
            $('#formPassBox').dialog({
                autoOpen: false,
                width: 400,
                resizable: false,
                modal: true,
                draggable: false,
                buttons: {
                    "Ok": function() {
                        window.location.href = redirectUrl;
                    }
                },
                open: function(event, ui) {
                    $(".ui-dialog-titlebar-close").hide();
                    var t = window.setTimeout("goToUrl()", 5000);
                }
            });
            if (formSubmitPass == 'True') {
                $('#formPassBox').dialog({ autoOpen: true });
            }
        }

    So how can I force a postback from the code-behind, reload the JavaScript on an asynchronous postback, or otherwise do this in a way that lets me continue to do async form validation?

    Edit: I change formSubmitPass at the very end of the code-behind:

        If errorCount = 0 Then
            formSubmitPass = True
            upForm.Update()
        Else
            formSubmitPass = False
        End If

    So on a full postback, the value does change.
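
    One avenue worth exploring (my suggestion, not from the post): inline <%= %> expressions are only evaluated when the page renders in full, but a script registered through ScriptManager does run when a partial update completes, so the flag can be pushed to the client from inside the async postback. A sketch in C# (the VB equivalent is a direct translation):

        // Sketch: run script once the UpdatePanel's async postback finishes.
        if (errorCount == 0)
        {
            ScriptManager.RegisterStartupScript(
                upForm,                 // the UpdatePanel from the question
                upForm.GetType(),
                "formPass",             // arbitrary script key
                "formSubmitPass = 'True'; $('#formPassBox').dialog('open');",
                true);                  // emit <script> tags
        }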

    Read the article

  • Detect blow in Mic and do something {iPhone SDK}

    - by Momeks
    Hi, I found this tutorial, and it's good, but it doesn't work for me! Here is the code:

        - (void)listenForBlow:(NSTimer *)timer {
            [recorder updateMeters];
            const double ALPHA = 0.05;
            double peakPowerForChannel = pow(10, (0.05 * [recorder peakPowerForChannel:0]));
            lowPassResults = ALPHA * peakPowerForChannel + (1.0 - ALPHA) * lowPassResults;
            if (lowPassResults > 0.95)
                NSLog(@"Mic blow detected"); // change the background color, e.g.
        }

    The console shows me the NSLog result like this (without any blowing!):

        2010-04-11 23:32:27.935 MicBlow[2358:207] Mic blow detected
        2010-04-11 23:32:27.965 MicBlow[2358:207] Mic blow detected
        2010-04-11 23:32:27.995 MicBlow[2358:207] Mic blow detected
        2010-04-11 23:32:28.026 MicBlow[2358:207] Mic blow detected
        2010-04-11 23:32:28.055 MicBlow[2358:207] Mic blow detected
        2010-04-11 23:32:28.086 MicBlow[2358:207] Mic blow detected
        2010-04-11 23:32:28.115 MicBlow[2358:207] Mic blow detected
        2010-04-11 23:32:28.145 MicBlow[2358:207] Mic blow detected
        2010-04-11 23:32:28.175 MicBlow[2358:207] Mic blow detected
        2010-04-11 23:32:28.205 MicBlow[2358:207] Mic blow detected
        2010-04-11 23:32:28.236 MicBlow[2358:207] Mic blow detected

    I changed the condition from if (lowPassResults < 0.95) to if (lowPassResults > 0.95), and that seems to make it run, but it doesn't really change anything: if I put in the background-changing code, my code changes the background without any blowing! What's the problem?

    Read the article

  • Html Select List: why would onchange be called twice?

    - by Mark Redman
    I have a page with a select list (an ASP.NET MVC page). The select list and its onchange event are specified like this:

        <%=Html.DropDownList("CompanyID", Model.CompanySelectList, "(select company)",
            new { @class = "data-entry-field", @onchange = "companySelectListChanged()" })%>

    The companySelectListChanged function is getting called twice. I am using the nifty code in this question to get the caller; both times the caller is the onchange event. However, if I look at the caller's caller using:

        arguments.callee.caller.caller

    the first call returns some system code as the caller (I presume) and the second call returns undefined. I am checking for undefined so that I only react once to onchange, but this doesn't seem ideal. Why would onchange be called twice?

    UPDATE: OK, found the culprit! ...apart from me :-) But the issue of the companySelectListChanged function being called twice still stands. The onchange event is set directly on the select, as mentioned, and calls the companySelectListChanged function. Notice the 'data-entry-field' class: in a separate linked JavaScript file, a change event on all fields with this class is bound to a function that changes the colour of the save button. The additional binding is set as follows:

        $('.data-entry-field').bind('keypress keyup change', function (e) {
            highLightSaveButtons();
        });

    This means there are two events on the onchange, but why is companySelectListChanged called twice? Assuming it's possible to have two change events on the select list, I would assume that setting the keypress and keyup events may be breaking something? Any ideas?

    Read the article

  • IE7 HTML/CSS margin-bottom bug.

    - by mk
    Here is the scenario: I have a table with a margin-bottom of 19px. Below that I have a form that contains some fieldsets, one of which is floated right. The problem is that the margin-bottom is not getting the full 19px in IE7. I've gone through all of the IE7 CSS/margin/float bugs that I can think of and have tried remedies, but have been unsuccessful. I have been googling for a while now and cannot find anything that helps. Here is what I have tried:

      • Wrapping the form or fieldset in an unstyled div. No apparent change.
      • Nixing the margin-bottom on the table and instead wrapping it with a div and giving it a padding-bottom of 19px. No apparent change.
      • Nixing the margin-bottom on the table and adding a div with a fixed height of 19px. No apparent change.
      • Putting a clear between the table and the fieldset.

    I know there are some others that I am forgetting, but those are the things I have tried recently. This happens to each fieldset.

    Read the article

  • Any Other Ideas for prototyping..

    - by davehamptonusa
    I've used Douglas Crockford's Object.beget, but augmented it slightly to:

        Object.spawn = function (o, spec) {
            var F = function () {}, that = {}, node = {};
            F.prototype = o;
            that = new F();
            for (node in spec) {
                if (spec.hasOwnProperty(node)) {
                    that[node] = spec[node];
                }
            }
            return that;
        };

    This way you can "beget" and augment in one fell swoop:

        var fop = Object.spawn(bar, {
            a: 'fast',
            b: 'prototyping'
        });

    In English that means: "Make me a new object called 'fop' with 'bar' as its prototype, but change or add the members 'a' and 'b'." You can even nest the spec to prototype deeper elements, should you choose:

        var fop = Object.spawn(bar, {
            a: 'fast',
            b: Object.spawn(quux, {
                farple: 'deep'
            }),
            c: 'prototyping'
        });

    This can help avoid hopping into an object's prototype unintentionally through a long object name like:

        foo.bar.quux.peanut = 'farple';

    If quux is part of the prototype and not foo's own object, your change to 'peanut' will actually change the prototype, affecting all objects begotten from foo's prototype object. But I digress...

    My question is this. Because your spec can itself be another object, and that object could itself contribute properties from its own prototype to your new object - and you may want those properties (at least you should be aware of them before you decide to use it as a spec) - I want to be able to grab all of the elements from the spec's entire prototype chain, except for the prototype object itself. This would flatten them into the new object. Should I use:

        Object.spawn = function (o, spec) {
            var F = function () {}, that = {}, node = {};
            F.prototype = o;
            that = new F();
            for (node in spec) {
                that[node] = spec[node];
            }
            that.prototype = o;
            return that;
        };

    I would love thoughts and suggestions...

    Read the article

  • How do I deselect grid row when grouping in David Poll's silverlight CollectionPrinter

    - by kpg
    I'm using David Poll's CollectionPrinter with modifications by Fama to perform grouping. I'm using the control to print a DataGrid with grouping, and it works well, if a little slowly.

    Problem: when the grid is displayed, the first row of the grid is selected, and the first cell of the row is also selected. I want to either deselect the row or change the DataGrid template so that selected rows/cells appear unselected.

    I tried to specify a grid template to change the row/cell selection appearance, but when I added the default template I got a COM error, of all things. I concluded that what I was doing was not compatible with the SLaB libraries, or perhaps it failed because the grid was specified in a data template. In any case, I abandoned that approach.

    Since I have the SLaB source, if I understood it better there might be a way to deselect the row from that side of things - but I know the SLaB CollectionPrinter does not require the data template to be a grid, so I'm not sure how to modify the code to accomplish what I want.

    Question: how can I prevent the row from being selected, deselect it once it is, or change the appearance of the selected row when using the CollectionPrinter with grouping? Note that the row-selection problem may occur without grouping as well - I don't know - but it definitely does with grouping.
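
    One low-tech option that might be worth a try (my suggestion, with illustrative event wiring - I haven't checked how the CollectionPrinter exposes its template): clear the selection from the DataGrid itself once it loads inside the print template.

        // Sketch: clear the initial selection when the printed grid loads.
        private void PrintedGrid_Loaded(object sender, RoutedEventArgs e)
        {
            var grid = (DataGrid)sender;
            grid.SelectedItem = null;       // drop the default row selection
            grid.IsHitTestVisible = false;  // optional: a print-only grid needs no input
        }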

    Read the article

  • PrintableArea in C# - Bug?

    - by Brandi
    I am having an issue with the width and height values of PageSettings.PrintableArea. The Width, Height, and Size properties claim to "get or set" the values, and the Inflate() function claims to change the size based on the values passed in. However, all of these attempts to change the value have failed. Inflate() is ignored: there is no error, it just behaves as if it worked, but the values remain unchanged. Attempting to set the Height, Width, or Size gives a compiler error: "Cannot modify the return value of 'System.Drawing.Printing.PageSettings.PrintableArea' because it is not a variable". I get the feeling that this means the "or set" part of the description is a lie.

    Why I want to know this (someone always asks...): I have a printing application (C#, WinForms) that for most things works rather well. I can set the PrinterSettings and PageSettings objects to control what displays in the print dialog's printer properties. However, with the Microsoft Office Document Image Writer, these settings are sometimes ignored, and the paper size comes back as 0, 0 even when it displayed something else. All I really want is for the displayed values to be WYSIWYG, so I change the paper size back to what it should be; but if the printable area is wrong, the resulting image is wonky - it comes out the size of the printable area instead of the value in PaperSize. Just wondering if there is a reason for this, or a way to get it not to do that. Thanks in advance. :)
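
    A plausible explanation for the behavior (my reading, not verified against the framework source): PrintableArea returns a RectangleF, and RectangleF is a struct, so the getter hands back a copy; mutating that copy never reaches the PageSettings. A sketch of the symptom:

        // Sketch: RectangleF is a value type, so only a local copy changes.
        var settings = new System.Drawing.Printing.PageSettings();
        System.Drawing.RectangleF area = settings.PrintableArea; // a copy of the struct
        area.Inflate(10f, 10f);                                  // inflates the copy only
        Console.WriteLine(settings.PrintableArea);               // original, unchanged
        // Direct member assignment, e.g. settings.PrintableArea.Width = 500f, would
        // mutate a temporary copy, which is why the compiler rejects it outright.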

    Read the article

  • DNN: Registered Mark changing to Question Mark

    - by coultertech
    I am having a problem with registered marks in the HTML module. We need to use the registered trademark symbol (®), but some of them are being changed to question marks. I can find no rhyme or reason behind which ones change and which remain correct. I have tried a number of things to fix this issue, including the following:

      • Using ® and ®
      • using ®
      • copy and paste of ® in both source and non-source view
      • using the "insert special character" option from the RTE menu

    Some of the symbols remain, but most revert back to question marks. If I'm in edit mode, the question marks change back to the registered mark. Also, sometimes the first time I view the page not logged in, or in view mode, they look fine; but as soon as I go to edit mode or a new page and come back, they change back to question marks. I am out of ideas as to why this is happening.

    You can see the page at: http://fasttracsc.twif.net/AboutFastTracSC.aspx - anywhere you see Fasttrac? it should be Fasttrac®. Any help anyone can provide would be much appreciated. Thanks in advance.

    Read the article

  • Using db4o with multiple application instances under medium trust

    - by Anders Fjeldstad
    I recently stumbled over the object database engine db4o, which I think looks really interesting. I would like to use it in an ASP.NET MVC application that will be deployed to a shared hosting environment under medium trust. Because of the trust level, I'm restricted to using db4o in embedded/in-process mode. That in itself should be no problem, but the hosting provider also transparently runs each web application in multiple (load-balanced) server instances with shared storage, which I would say is normally a very nice feature for a $10/month hoster. However, since an instance of a db4o server with write access (whether in-process or networked) locks the underlying database file, having multiple instances of the application using the same file won't work (or at least I can't see how it would). So the question is: is it possible to use db4o in this specific environment?

    I have considered letting each application instance have its own database, synchronized with a master database using replication (dRS), but that approach will most likely end up with very frequent bi-directional replication (read master changes at the beginning of each request, write to master after each change), which I don't think will be very efficient.

    Summary of the web application/environment characteristics:

      • Read-intensive (but not entirely read-only)
      • Some delay (a few seconds) is acceptable between the time a change is made and the time it shows up in all the application instances' data
      • Must run in medium trust
      • No guarantee that the load-balancer uses "sticky sessions"

    All suggestions are much appreciated!

    Read the article

  • VSTO outlook data issue through exchange sync

    - by cipheremix
    I wrote an add-in for Outlook that pops up an appointment's LastModificationTime when I click a button. The button's event handler looks like this:

        Outlook.ApplicationClass outlook = new Outlook.ApplicationClass();
        Outlook.NameSpace ns = outlook.GetNamespace("MAPI");
        Outlook.MAPIFolder folder = ns.GetDefaultFolder(Outlook.OlDefaultFolders.olFolderCalendar);
        Outlook.Items FolderItems = folder.Items;
        DateTime MyDate = DateTime.Now;
        List<Outlook.AppointmentItem> Appts =
            (from Outlook.AppointmentItem i in folder.Items
             where i.Start.Month == MyDate.Month && i.Start.Year == MyDate.Year
             select i).ToList();
        foreach (Outlook.AppointmentItem Appt in Appts)
        {
            System.Windows.Forms.MessageBox.Show(Appt.LastModificationTime.ToString());
        }

    The issue happens when I change an appointment on my mobile phone and then sync it to Outlook through Exchange Server.

    Steps that reproduce the issue:
      1. Click the button; LastModificationTime is "time1".
      2. Change the start date to "start1" on my mobile phone; sync to Outlook through Exchange Server.
      3. Click the button; LastModificationTime is still "time1".
      4. Change the start date to "start2" in Outlook; the appointment stays on the "start1" date.
      5. Restart Outlook.
      6. Click the button; LastModificationTime is a new "time2", the appointment is on the "start1" date, and "start2" is gone.

    Steps without the issue:
      1. Click the button; LastModificationTime is "time1".
      2. Restart Outlook.
      3. Change the start date to "start1" on my mobile phone; sync to Outlook through Exchange Server.
      4. Click the button; LastModificationTime is "time2".

    It looks like the Appts list is never refreshed to the latest values if the appointment is changed through Exchange Server. Is there a solution for this issue, or another reason why it happens?
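
    One thing that may be worth trying (a guess on my part, not a confirmed fix): inside a VSTO add-in, creating a fresh Outlook.ApplicationClass can bind to a different, possibly stale session; the usual pattern is to reuse the Application object the add-in already has. A sketch:

        // Sketch: use the VSTO-provided Application instead of a new ApplicationClass.
        Outlook.Application app = Globals.ThisAddIn.Application;
        Outlook.MAPIFolder folder = app.GetNamespace("MAPI")
            .GetDefaultFolder(Outlook.OlDefaultFolders.olFolderCalendar);
        Outlook.Items items = folder.Items;
        items.Sort("[Start]");            // sort first...
        items.IncludeRecurrences = true;  // ...then expand recurring appointments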

    Read the article

  • Powershell: Select-Object with additional calculated difference value

    - by David.Chu.ca
    Let me explain my question. I have a daily report about one PC's free space, in a text file (sp.txt), like:

        DateTime      FreeSpace(MB)
        -----         -------------
        03/01/2010    100.43

        DateTime      FreeSpace(MB)
        -----         -------------
        03/02/2010    98.31
        ....

    I need to use this file as input to generate a report with the difference in free space between dates:

        DateTime      FreeSpace(MB)  Change
        -----         -------------  ------
        03/01/2010    100.43
        03/02/2010    98.31          2.12
        ....

    Here is the code I have to get the above result, without the additional Change calculation:

        Get-Content "C:\report\sp.txt" `
          | where {$_.trim().length -gt 0 -and -not ($_ -like "Date*") -and -not ($_ -like "---*")} `
          | Select-Object @{Name='DateTime'; expression={[DateTime]::Parse($_.SubString(0, 10).Trim())}}, `
                          @{Name='FreeSpace(MB)'; expression={$_.SubString(12, 12).Trim()}} `
          | Sort-Object DateTime `
          | Select-Object DateTime, 'FreeSpace(MB)' | ft -AutoSize
        # how to add an additional calculated Change column to this Select-Object?

    My understanding is that Select-Object returns a collection of objects. Here I create this collection from an input text file (parsing each line into two parts: DateTime and FreeSpace(MB)). In order to calculate the difference between dates, I guess I need to add an additional calculated value to the result of the last Select-Object, or pipe the result to another Select-Object with the calculation as an additional property or column. To do that, I need to get the previous row's FreeSpace value. Not sure if this is possible, and how?

    Read the article

  • Base 128 or 256 Encoding for the Binary Lexical Octet Adhoc Transport Protocol?

    - by Randolpho
    I'm in the process of implementing a network driver for the Binary Lexical Octet Adhoc Transport (BLOAT) protocols in the hopes of replacing the TCP/UDP/IP stack with a much more flexible XML structure. BLOAT is detailed in RFC 3252, so if you're unfamiliar with the protocol I highly recommend you read the entire RFC before providing any comments. Don't worry, it's short and sweet; you might even enjoy it.

    Anyway, my problem is this: BLOAT requires that the payload be Base64 encoded, which doesn't make sense to me. I mean, sure, it's the internet standard for binary payloads, but there are better, more efficient encodings available: Base128 and Base256, for example. That the RFC requires Base64 and doesn't allow for any other payload encoding really bothers me.

    To that end, I'm considering a small optional change to the protocol. Embrace and extend, right? I'd like to modify the payload element to accept an encoding attribute, which can extend the encoding to Base128 or Base256, or even to other encodings I can't conceive of at the moment. If the encoding attribute isn't present, Base64 would be assumed.

    So my question is this: should I? I mean, BLOAT is an accepted standard, even if it isn't exactly omnipresent. If I make this change, will there be compatibility issues? I don't foresee any, but perhaps you, oh great Stack Overflow community, can? If I do implement this change, should I contact the original RFC author? Should I offer a supplemental RFC?

    Read the article

  • castle PerRequestLifestyle not recognize

    - by Herman
    Hi all, new to Castle/Windsor, please bear with me. I am currently using the framework System.Web.Mvc.Extensibility, and in its startup code it registers HttpContextBase like this:

        container.Register(Component.For<HttpContextBase>()
            .LifeStyle.Transient
            .UsingFactoryMethod(() => new HttpContextWrapper(HttpContext.Current)));

    What I want to do is change the lifestyle of HttpContextBase to PerWebRequest, so I changed the code to the following:

        container.Register(Component.For<HttpContextBase>()
            .LifeStyle.PerWebRequest
            .UsingFactoryMethod(() => new HttpContextWrapper(HttpContext.Current)));

    However, when I do this, I get the following error:

        System.Configuration.ConfigurationErrorsException: Looks like you forgot to register
        the http module Castle.MicroKernel.Lifestyle.PerWebRequestLifestyleModule
        Add '<add name="PerRequestLifestyle"
        type="Castle.MicroKernel.Lifestyle.PerWebRequestLifestyleModule, Castle.MicroKernel" />'
        to the <httpModules> section on your web.config

    which I did, under both <system.web> and <system.webServer>. However, I am still getting the same error. Any hints? Thanks in advance.

    Read the article

  • Git branching / rebasing good practices

    - by Pawel Krupinski
    I have the following scenario - three branches:

      • Master
      • MyBranch, branched off Master for the purpose of developing a new feature of the system
      • MyBranchLocal, branched off MyBranch as my local copy of the branch

    MyBranch is being rebased against and pushed to by other developers (who are working on the same feature as I am). As the owner of MyBranch, I want to keep it in sync with Master by rebasing. I also need to merge the changes I make to MyBranchLocal into MyBranch. What is a good way to do that? Here are a couple of possible scenarios I have tried so far:

    I.
      1. Commit change to MyBranchLocal
      2. Rebase MyBranch against Master
      3. Rebase MyBranchLocal against MyBranch
      4. Merge MyBranch with MyBranchLocal

    II.
      1. Commit change to MyBranchLocal
      2. Merge MyBranch with MyBranchLocal
      3. Rebase MyBranch against Master
      4. Rebase MyBranchLocal against MyBranch

    III.
      1. Commit change to MyBranchLocal
      2. Rebase MyBranch against Master
      3. Merge MyBranch with MyBranchLocal
      4. Rebase MyBranchLocal against MyBranch

    I already know that scenario III seems to mess up the commit history a lot, potentially duplicating commits. What is your experience? Which scenarios do you recommend?

    Read the article
