Search Results

Search found 13293 results on 532 pages for 'small ticket'.


  • Record everything on the command line (CentOS / Fedora / Ubuntu)

    - by neolix
    Hello all, we are a 100% Linux shop across the network and we work around the clock. At shift change the next admin takes over and resolves whatever issues come up during his shift, but the admins just clear the terminal history afterwards. We want to record everything done at the terminal, so we can see (or monitor) exactly what was done to resolve an issue. For trouble tickets we have an internal OTRS instance which they update for reporting. Thanks a ton.

    Read the article

  • Problems set-up Single Sign-On using Kerberos authentication

    - by user1124133
    I need to set up authentication for a Ruby on Rails application against Active Directory using Kerberos. Some technical information: I am using Apache with mod_auth_kerb installed. In httpd.conf I added:
        LoadModule auth_kerb_module modules/mod_auth_kerb.so
    In /etc/krb5.conf I added the following configuration:
        [logging]
        default = FILE:/var/log/krb5libs.log
        kdc = FILE:/var/log/krb5kdc.log
        admin_server = FILE:/var/log/kadmind.log
        [libdefaults]
        default_realm = EU.ORG.COM
        dns_lookup_realm = false
        dns_lookup_kdc = false
        ticket_lifetime = 24h
        forwardable = yes
        [realms]
        EU.ORG.COM = {
            kdc = eudc05.eu.org.com:88
            admin_server = eudc05.eu.org.com:749
            default_domain = eu.org.com
        }
        [domain_realm]
        .eu.org.com = EU.ORG.COM
        eu.org.com = EU.ORG.COM
        [appdefaults]
        pam = {
            debug = true
            ticket_lifetime = 36000
            renew_lifetime = 36000
            forwardable = true
            krb4_convert = false
        }
    When I run kinit validuser and enter the password, authentication succeeds. klist returns:
        Ticket cache: FILE:/tmp/krb5cc_600
        Default principal: [email protected]
        Valid starting     Expires            Service principal
        02/08/13 13:46:40  02/08/13 23:46:47  krbtgt/[email protected]
            renew until 02/09/13 13:46:40
        Kerberos 4 ticket cache: /tmp/tkt600
        klist: You have no tickets cached
    In the application's Apache configuration I added:
        <IfModule mod_auth_kerb.c>
          <Location /winlogin>
            AuthType Kerberos
            AuthName "Kerberos Loginsss"
            KrbMethodNegotiate off
            KrbAuthoritative on
            KrbVerifyKDC off
            KrbAuthRealms EU.ORG.COM
            Krb5Keytab /home/crmdata/httpd/apache.keytab
            KrbSaveCredentials off
            Require valid-user
          </Location>
        </IfModule>
    Then I restarted Apache. Now some tests: When I try to access the application from Win7, I get a pop-up message box with the text: "Warning: This server is requesting that your username and password be sent in an insecure manner (basic authentication without a secure connection)". When I enter valid credentials, my application opens successfully and everything works fine. Questions: Is it OK that such a window pops up for the user? With NTLM authentication there is no such pop-up. I checked IE's Internet Options and 'Enable Integrated Windows Authentication' is checked. Why does IE try to send the username and password to the Apache application? If I understand correctly, Windows itself should authenticate via Active Directory using the Kerberos protocol. When I access the application from Win7 and enter incorrect credentials in the pop-up box, the application says authentication failed (this is OK), and in the Apache error log I see: [error] [client 192.168.56.1] krb5_get_init_creds_password() failed: Client not found in Kerberos database. But after that I get no further chance to enter valid credentials; only after restarting IE do I get the pop-up box again. What could be incorrect or missing in my Kerberos setup? I read in a blog post that something probably needs to be done on the Active Directory side. What exactly?

    Read the article

  • What's a good Text Expander software for Windows?

    - by chris.w.mclean
    What's a good text expander out there for Windows? Ideally it needs to work with MS Word and needs to be configurable in how it gets triggered (e.g. the string hdt followed by a space gets transformed into Help Desk Ticket, but hdt without the trailing space gets ignored). It also needs an import option so that a large list of tags and expansions can be loaded. Plugins for UltraEdit/Notepad++ would also be acceptable.

    Read the article

  • any software for managing data center, incidents, SLAs?

    - by coderwhiz
    I have been looking into using OTRS - ITSM to manage all of our services, but I wanted to know if any software exists for this that is not built into a ticket system. Essentially I want to add all our services and tie each one to an SLA. Once that is done, we would manually declare incidents and the system would automatically send out notifications to the proper e-mail lists for downtime/planned maintenance. It would also be nice if it calculated reports against the SLAs, etc. Thanks.

    Read the article

  • Project Management

    - by user311188
    Hi: I've looked at a lot of project management tools, but I haven't found one that has all of these features (open source if possible). Do you know of any?
    - project management for multiple projects
    - task assignment or a ticket system
    - the task owner or the task creator provides an ESTIMATION
    - each user has his own dashboard with "my tasks for today"
    - Gantt charts
    Thank you.

    Read the article

  • Using TypeScript in ASP.NET MVC Projects

    - by shiju
    In the previous blog post, Microsoft TypeScript: A Typed Superset of JavaScript, I gave a brief introduction to TypeScript. In this post, I will demonstrate how to use TypeScript with ASP.NET MVC projects and how we can compile TypeScript within ASP.NET MVC projects.
    Using TypeScript with ASP.NET MVC 3 Projects
    The Visual Studio plug-in for TypeScript provides an ASP.NET MVC 3 project template for TypeScript that lets you compile TypeScript from Visual Studio. The following screenshot shows the TypeScript template for an ASP.NET MVC 3 project. The "TypeScript Internet Application" template is just an ASP.NET MVC 3 internet application project template that compiles TypeScript programs to JavaScript when you build your ASP.NET MVC project. The project template has the following section in the .csproj file:
        <None Include="Scripts\jquery.d.ts" />
        <TypeScriptCompile Include="Scripts\site.ts" />
        <Content Include="Scripts\site.js">
          <DependentUpon>site.ts</DependentUpon>
        </Content>

        <Target Name="BeforeBuild">
          <Exec Command="&quot;$(PROGRAMFILES)\Microsoft SDKs\TypeScript\0.8.0.0\tsc&quot; @(TypeScriptCompile ->'&quot;%(fullpath)&quot;', ' ')" />
        </Target>
    The "BeforeBuild" target compiles the TypeScript files when you build your ASP.NET MVC project. The TypeScript project template also provides a typing reference file for the jQuery library named "jquery.d.ts". The default app.ts file references jquery.d.ts:
        ///<reference path='jquery.d.ts' />

        $(document).ready(function () {
            $(".btn-slide").click(function () {
                $("#main").slideToggle("slow");
                $(this).toggleClass("active");
            });
        });
    Using TypeScript with ASP.NET MVC 4 Projects
    The current preview version of TypeScript does not provide a project template for ASP.NET MVC 4 projects, but you can use TypeScript with ASP.NET MVC 4 projects by editing the project's .csproj file, taking the necessary settings from the ASP.NET MVC 3 project file. I added the following section at the end of the .csproj file of an ASP.NET MVC 4 project, which compiles all TypeScript files when the ASP.NET MVC 4 project is built:
        <ItemGroup>
          <TypeScriptCompile Include="$(ProjectDir)\**\*.ts" />
        </ItemGroup>
        <Target Name="BeforeBuild">
          <Exec Command="&quot;$(PROGRAMFILES)\Microsoft SDKs\TypeScript\0.8.0.0\tsc&quot; @(TypeScriptCompile ->'&quot;%(fullpath)&quot;', ' ')" />
        </Target>

    Read the article

  • Parallelism in .NET – Part 5, Partitioning of Work

    - by Reed
    When parallelizing any routine, we start by decomposing the problem.  Once the problem is understood, we need to break our work into separate tasks, so each task can be run on a different processing element.  This process is called partitioning. Partitioning our tasks is a challenging feat.  There are opposing forces at work here: too many partitions adds overhead, too few partitions leaves processors idle.  Trying to work the perfect balance between the two extremes is the goal for which we should aim.  Luckily, the Task Parallel Library automatically handles much of this process.  However, there are situations where the default partitioning may not be appropriate, and knowledge of our routines may allow us to guide the framework to making better decisions. First off, I’d like to say that this is a more advanced topic.  It is perfectly acceptable to use the parallel constructs in the framework without considering the partitioning taking place.  The default behavior in the Task Parallel Library is very well-behaved, even for unusual work loads, and should rarely be adjusted.  I have found few situations where the default partitioning behavior in the TPL is not as good or better than my own hand-written partitioning routines, and recommend using the defaults unless there is a strong, measured, and profiled reason to avoid using them.  However, understanding partitioning, and how the TPL partitions your data, helps in understanding the proper usage of the TPL. I indirectly mentioned partitioning while discussing aggregation.  Typically, our systems will have a limited number of Processing Elements (PE), which is the terminology used for hardware capable of processing a stream of instructions.  For example, in a standard Intel i7 system, there are four processor cores, each of which has two potential hardware threads due to Hyperthreading.  This gives us a total of 8 PEs – theoretically, we can have up to eight operations occurring concurrently within our system. In order to fully exploit this power, we need to partition our work into Tasks.  A task is a simple set of instructions that can be run on a PE.  Ideally, we want to have at least one task per PE in the system, since fewer tasks means that some of our processing power will be sitting idle.  A naive implementation would be to just take our data, and partition it with one element in our collection being treated as one task.  When we loop through our collection in parallel, using this approach, we’d just process one item at a time, then reuse that thread to process the next, etc.  There’s a flaw in this approach, however.  It will tend to be slower than necessary, often slower than processing the data serially. The problem is that there is overhead associated with each task.  When we take a simple foreach loop body and implement it using the TPL, we add overhead.  First, we change the body from a simple statement to a delegate, which must be invoked.  In order to invoke the delegate on a separate thread, the delegate gets added to the ThreadPool’s current work queue, and the ThreadPool must pull this off the queue, assign it to a free thread, then execute it.  If our collection had one million elements, the overhead of trying to spawn one million tasks would destroy our performance. The answer, here, is to partition our collection into groups, and have each group of elements treated as a single task.  
By adding a partitioning step, we can break our total work into small enough tasks to keep our processors busy, but large enough tasks to avoid overburdening the ThreadPool.  There are two clear, opposing goals here: Always try to keep each processor working, but also try to keep the individual partitions as large as possible. When using Parallel.For, the partitioning is always handled automatically.  At first, partitioning here seems simple.  A naive implementation would merely split the total element count up by the number of PEs in the system, and assign a chunk of data to each processor.  Many hand-written partitioning schemes work in this exactly manner.  This perfectly balanced, static partitioning scheme works very well if the amount of work is constant for each element.  However, this is rarely the case.  Often, the length of time required to process an element grows as we progress through the collection, especially if we’re doing numerical computations.  In this case, the first PEs will finish early, and sit idle waiting on the last chunks to finish.  Sometimes, work can decrease as we progress, since previous computations may be used to speed up later computations.  In this situation, the first chunks will be working far longer than the last chunks.  In order to balance the workload, many implementations create many small chunks, and reuse threads.  This adds overhead, but does provide better load balancing, which in turn improves performance. The Task Parallel Library handles this more elaborately.  Chunks are determined at runtime, and start small.  They grow slowly over time, getting larger and larger.  This tends to lead to a near optimum load balancing, even in odd cases such as increasing or decreasing workloads.  Parallel.ForEach is a bit more complicated, however. When working with a generic IEnumerable<T>, the number of items required for processing is not known in advance, and must be discovered at runtime.  In addition, since we don’t have direct access to each element, the scheduler must enumerate the collection to process it.  Since IEnumerable<T> is not thread safe, it must lock on elements as it enumerates, create temporary collections for each chunk to process, and schedule this out.  By default, it uses a partitioning method similar to the one described above.  We can see this directly by looking at the Visual Partitioning sample shipped by the Task Parallel Library team, and available as part of the Samples for Parallel Programming.  When we run the sample, with four cores and the default, Load Balancing partitioning scheme, we see this: The colored bands represent each processing core.  You can see that, when we started (at the top), we begin with very small bands of color.  As the routine progresses through the Parallel.ForEach, the chunks get larger and larger (seen by larger and larger stripes). Most of the time, this is fantastic behavior, and most likely will out perform any custom written partitioning.  However, if your routine is not scaling well, it may be due to a failure in the default partitioning to handle your specific case.  With prior knowledge about your work, it may be possible to partition data more meaningfully than the default Partitioner. There is the option to use an overload of Parallel.ForEach which takes a Partitioner<T> instance.  The Partitioner<T> class is an abstract class which allows for both static and dynamic partitioning.  
By overriding Partitioner<T>.SupportsDynamicPartitions, you can specify whether a dynamic approach is available.  If not, your custom Partitioner<T> subclass would override GetPartitions(int), which returns a list of IEnumerator<T> instances.  These are then used by the Parallel class to split work up amongst processors.  When dynamic partitioning is available, GetDynamicPartitions() is used, which returns an IEnumerable<T> for each partition.  If you do decide to implement your own Partitioner<T>, keep in mind the goals and tradeoffs of different partitioning strategies, and design appropriately. The Samples for Parallel Programming project includes a ChunkPartitioner class in the ParallelExtensionsExtras project.  This provides example code for implementing your own, custom allocation strategies, including a static allocator of a given chunk size.  Although implementing your own Partitioner<T> is possible, as I mentioned above, this is rarely required or useful in practice.  The default behavior of the TPL is very good, often better than any hand written partitioning strategy.
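    To make the chunking discussion concrete, here is a minimal sketch (not from the original article) that uses the built-in Partitioner.Create range partitioner with Parallel.ForEach, so each task receives a contiguous block of indices rather than a single element; the array contents and the chunk size of 10,000 are placeholder values.
        using System;
        using System.Collections.Concurrent;
        using System.Threading.Tasks;

        class PartitioningSample
        {
            static void Main()
            {
                double[] data = new double[1000000];

                // Hand each task a contiguous [from, to) range of 10,000 indices,
                // trading a little load-balancing flexibility for far less
                // per-element scheduling overhead.
                var ranges = Partitioner.Create(0, data.Length, 10000);

                Parallel.ForEach(ranges, range =>
                {
                    for (int i = range.Item1; i < range.Item2; i++)
                    {
                        data[i] = Math.Sqrt(i); // placeholder per-element work
                    }
                });
            }
        }
    As the article stresses, the default partitioner usually makes this unnecessary; an explicit range partitioner like this is only worth measuring when the per-element work is tiny and uniform.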

    Read the article

  • AD Password About to Expire check problem with ASP.Net

    - by Vince
    Hello everyone, I am trying to write some code to check the AD password age during a user login and notify them of the 15 remaining days. I am using the ASP.Net code that I found on the Microsoft MSDN site and I managed to add a function that checks the if the account is set to change password at next login. The login and the change password at next login works great but I am having some problems with the check for the password age. This is the VB.Net code for the DLL file: Imports System Imports System.Text Imports System.Collections Imports System.DirectoryServices Imports System.DirectoryServices.AccountManagement Imports System.Reflection 'Needed by the Password Expiration Class Only -Vince Namespace FormsAuth Public Class LdapAuthentication Dim _path As String Dim _filterAttribute As String 'Code added for the password expiration added by Vince Private _domain As DirectoryEntry Private _passwordAge As TimeSpan = TimeSpan.MinValue Const UF_DONT_EXPIRE_PASSWD As Integer = &H10000 'Function added by Vince Public Sub New() Dim root As New DirectoryEntry("LDAP://rootDSE") root.AuthenticationType = AuthenticationTypes.Secure _domain = New DirectoryEntry("LDAP://" & root.Properties("defaultNamingContext")(0).ToString()) _domain.AuthenticationType = AuthenticationTypes.Secure End Sub 'Function added by Vince Public ReadOnly Property PasswordAge() As TimeSpan Get If _passwordAge = TimeSpan.MinValue Then Dim ldate As Long = LongFromLargeInteger(_domain.Properties("maxPwdAge")(0)) _passwordAge = TimeSpan.FromTicks(ldate) End If Return _passwordAge End Get End Property Public Sub New(ByVal path As String) _path = path End Sub 'Function added by Vince Public Function DoesUserHaveToChangePassword(ByVal userName As String) As Boolean Dim ctx As PrincipalContext = New PrincipalContext(System.DirectoryServices.AccountManagement.ContextType.Domain) Dim up = UserPrincipal.FindByIdentity(ctx, userName) Return (Not up.LastPasswordSet.HasValue) 'returns true if last password set has no value. End Function Public Function IsAuthenticated(ByVal domain As String, ByVal username As String, ByVal pwd As String) As Boolean Dim domainAndUsername As String = domain & "\" & username Dim entry As DirectoryEntry = New DirectoryEntry(_path, domainAndUsername, pwd) Try 'Bind to the native AdsObject to force authentication. Dim obj As Object = entry.NativeObject Dim search As DirectorySearcher = New DirectorySearcher(entry) search.Filter = "(SAMAccountName=" & username & ")" search.PropertiesToLoad.Add("cn") Dim result As SearchResult = search.FindOne() If (result Is Nothing) Then Return False End If 'Update the new path to the user in the directory. _path = result.Path _filterAttribute = CType(result.Properties("cn")(0), String) Catch ex As Exception Throw New Exception("Error authenticating user. 
" & ex.Message) End Try Return True End Function Public Function GetGroups() As String Dim search As DirectorySearcher = New DirectorySearcher(_path) search.Filter = "(cn=" & _filterAttribute & ")" search.PropertiesToLoad.Add("memberOf") Dim groupNames As StringBuilder = New StringBuilder() Try Dim result As SearchResult = search.FindOne() Dim propertyCount As Integer = result.Properties("memberOf").Count Dim dn As String Dim equalsIndex, commaIndex Dim propertyCounter As Integer For propertyCounter = 0 To propertyCount - 1 dn = CType(result.Properties("memberOf")(propertyCounter), String) equalsIndex = dn.IndexOf("=", 1) commaIndex = dn.IndexOf(",", 1) If (equalsIndex = -1) Then Return Nothing End If groupNames.Append(dn.Substring((equalsIndex + 1), (commaIndex - equalsIndex) - 1)) groupNames.Append("|") Next Catch ex As Exception Throw New Exception("Error obtaining group names. " & ex.Message) End Try Return groupNames.ToString() End Function 'Function added by Vince Public Function WhenExpires(ByVal username As String) As TimeSpan Dim ds As New DirectorySearcher(_domain) ds.Filter = [String].Format("(&(objectClass=user)(objectCategory=person)(sAMAccountName={0}))", username) Dim sr As SearchResult = FindOne(ds) Dim user As DirectoryEntry = sr.GetDirectoryEntry() Dim flags As Integer = CInt(user.Properties("userAccountControl").Value) If Convert.ToBoolean(flags And UF_DONT_EXPIRE_PASSWD) Then 'password never expires Return TimeSpan.MaxValue End If 'get when they last set their password Dim pwdLastSet As DateTime = DateTime.FromFileTime(LongFromLargeInteger(user.Properties("pwdLastSet").Value)) ' return pwdLastSet.Add(PasswordAge).Subtract(DateTime.Now); If pwdLastSet.Subtract(PasswordAge).CompareTo(DateTime.Now) > 0 Then Return pwdLastSet.Subtract(PasswordAge).Subtract(DateTime.Now) Else Return TimeSpan.MinValue 'already expired End If End Function 'Function added by Vince Private Function LongFromLargeInteger(ByVal largeInteger As Object) As Long Dim type As System.Type = largeInteger.[GetType]() Dim highPart As Integer = CInt(type.InvokeMember("HighPart", BindingFlags.GetProperty, Nothing, largeInteger, Nothing)) Dim lowPart As Integer = CInt(type.InvokeMember("LowPart", BindingFlags.GetProperty, Nothing, largeInteger, Nothing)) Return CLng(highPart) << 32 Or CUInt(lowPart) End Function 'Function added by Vince Private Function FindOne(ByVal searcher As DirectorySearcher) As SearchResult Dim sr As SearchResult = Nothing Dim src As SearchResultCollection = searcher.FindAll() If src.Count > 0 Then sr = src(0) End If src.Dispose() Return sr End Function End Class End Namespace And this is the Login.aspx page: sub Login_Click(sender as object,e as EventArgs) Dim adPath As String = "LDAP://DC=xxx,DC=com" 'Path to your LDAP directory server Dim adAuth As LdapAuthentication = New LdapAuthentication(adPath) Try If (True = adAuth.DoesUserHaveToChangePassword(txtUsername.Text)) Then Response.Redirect("passchange.htm") ElseIf (True = adAuth.IsAuthenticated(txtDomain.Text, txtUsername.Text, txtPassword.Text)) Then Dim groups As String = adAuth.GetGroups() 'Create the ticket, and add the groups. Dim isCookiePersistent As Boolean = chkPersist.Checked Dim authTicket As FormsAuthenticationTicket = New FormsAuthenticationTicket(1, _ txtUsername.Text, DateTime.Now, DateTime.Now.AddMinutes(60), isCookiePersistent, groups) 'Encrypt the ticket. Dim encryptedTicket As String = FormsAuthentication.Encrypt(authTicket) 'Create a cookie, and then add the encrypted ticket to the cookie as data. 
Dim authCookie As HttpCookie = New HttpCookie(FormsAuthentication.FormsCookieName, encryptedTicket) If (isCookiePersistent = True) Then authCookie.Expires = authTicket.Expiration End If 'Add the cookie to the outgoing cookies collection. Response.Cookies.Add(authCookie) 'Retrieve the password life Dim t As TimeSpan = adAuth.WhenExpires(txtUsername.Text) 'You can redirect now. If (passAge.Days = 90) Then errorLabel.Text = "Your password will expire in " & DateTime.Now.Subtract(t) 'errorLabel.Text = "This is" 'System.Threading.Thread.Sleep(5000) Response.Redirect("http://somepage.aspx") Else Response.Redirect(FormsAuthentication.GetRedirectUrl(txtUsername.Text, False)) End If Else errorLabel.Text = "Authentication did not succeed. Check user name and password." End If Catch ex As Exception errorLabel.Text = "Error authenticating. " & ex.Message End Try End Sub ` Every time I have this Dim t As TimeSpan = adAuth.WhenExpires(txtUsername.Text) enabled, I receive "Arithmetic operation resulted in an overflow." during the login and won't continue. What am I doing wrong? How can I correct this? Please help!! Thank you very much for any help in advance. Vince
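    One plausible cause of the "Arithmetic operation resulted in an overflow" error described above is the LongFromLargeInteger function: CUInt(lowPart) throws an overflow exception whenever the LowPart of the ADSI large integer happens to be negative. This is only a guess based on the code shown, but a sketch of an overflow-safe conversion (written here in C#; the method name is illustrative) would be:
        // Converts the HighPart/LowPart pair of an ADSI large integer
        // (e.g. pwdLastSet or maxPwdAge) into a single Int64.
        static long ToInt64(int highPart, int lowPart)
        {
            // Casting through uint treats a negative LowPart as the unsigned
            // lower 32 bits instead of throwing an overflow exception.
            return ((long)highPart << 32) | (uint)lowPart;
        }
    In the VB.NET code above, the equivalent fix is to mask lowPart to its unsigned lower 32 bits on a Long (And &HFFFFFFFFL) instead of calling CUInt on a possibly negative value.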

    Read the article

  • Developer’s Life – Disaster Lessons – Notes from the Field #039

    - by Pinal Dave
    [Note from Pinal]: This is a 39th episode of Notes from the Field series. What is the best solution do you have when you encounter a disaster in your organization. Now many of you would answer that in this scenario you would have another standby machine or alternative which you will plug in. Now let me ask second question – What would you do if you as an individual faces disaster?  In this episode of the Notes from the Field series database expert Mike Walsh explains a very crucial issue we face in our career, which is not technical but more to relate to human nature. Read on this may be the best blog post you might read in recent times. Howdy! When it was my turn to share the Notes from the Field last time, I took a departure from my normal technical content to talk about Attitude and Communication.(http://blog.sqlauthority.com/2014/05/08/developers-life-attitude-and-communication-they-can-cause-problems-notes-from-the-field-027/) Pinal said it was a popular topic so I hope he won’t mind if I stick with Professional Development for another of my turns at sharing some information here. Like I said last time, the “soft skills” of the IT world are often just as important – sometimes more important – than the technical skills. As a consultant with Linchpin People – I see so many situations where the professional skills I’ve gained and use are more valuable to clients than knowing the best way to tune a query. Today I want to continue talking about professional development and tell you about the way I almost got myself hit by a train – and why that matters in our day jobs. Sometimes we can learn a lot from disasters. Whether we caused them or someone else did. If you are interested in learning about some of my observations in these lessons you can see more where I talk about lessons from disasters on my blog. For now, though, onto how I almost got my vehicle hit by a train… The Train Crash That Almost Was…. My family and I own a little schoolhouse building about a 10 mile drive away from our house. We use it as a free resource for families in the area that homeschool their children – so they can have some class space. I go up there a lot to check in on the property, to take care of the trash and to do work on the property. On the way there, there is a very small Stop Sign controlled railroad intersection. There is only two small freight trains a day passing there. Actually the same train, making a journey south and then back North. That’s it. This road is a small rural road, barely ever a second car driving in the neighborhood there when I am. The stop sign is pretty much there only for the train crossing. When we first bought the building, I was up there a lot doing renovations on the property. Being familiar with the area, I am also familiar with the train schedule and know the tracks are normally free of trains. So I developed a bad habit. You see, I’d approach the stop sign and slow down as I roll through it. Sometimes I’d do a quick look and come to an “almost” stop there but keep on going. I let my impatience and complacency take over. And that is because most of the time I was going there long after the train was done for the day or in between the runs. This habit became pretty well established after a couple years of driving the route. The behavior reinforced a bit by the success ratio. I saw others doing it as well from the neighborhood when I would happen to be there around the time another car was there. Well. You already know where this ends up by the title and backstory here. 
A few months ago I came to that little crossing, and I started to do the normal routine. I’d pretty much stopped looking in some respects because of the pattern I’d gotten into.  For some reason I looked and heard and saw the train slowly approaching and slammed on my brakes and stopped. It was an abrupt stop, and it was close. I probably would have made it okay, but I sat there thinking about lessons for IT professionals from the situation once I started breathing again and watched the cars loaded with sand and propane slowly labored down the tracks… Here are Those Lessons… It’s easy to get stuck into a routine – That isn’t always bad. Except when it’s a bad routine. Momentum and inertia are powerful. Once you have a habit and a routine developed – it’s really hard to break that. Make sure you are setting the right routines and habits TODAY. What almost dangerous things are you doing today? How are you almost messing up your production environment today? Stop doing that. Be Deliberate – (Even when you are the only one) – Like I said – a lot of people roll through that stop sign. Perhaps the neighbors or other drivers think “why is he fully stopping and looking… The train only comes two times a day!” – they can think that all they want. Through deliberate actions and forcing myself to pay attention, I will avoid that oops again. Slow down. Take a deep breath. Be Deliberate in your job. Pay attention to the small stuff and go out of your way to be careful. It will save you later. Be Observant – Keep your eyes open. By looking around, observing the situation and understanding what your servers, databases, users and vendors are doing – you’ll notice when something is out of place. But if you don’t know what is normal, if you don’t look to make sure nothing has changed – that train will come and get you. Where can you be more observant? What warning signs are you ignoring in your environment today? In the IT world – trains are everywhere. Projects move fast. Decisions happen fast. Problems turn from a warning sign to a disaster quickly. If you get stuck in a complacent pattern of “Everything is okay, it always has been and always will be” – that’s the time that you will most likely get stuck in a bad situation. Don’t let yourself get complacent, don’t let your team get complacent. That will lead to being proactive. And a proactive environment spends less money on consultants for troubleshooting problems you should have seen ahead of time. You can spend your money and IT budget on improving for your customers. If you want to get started with performance analytics and triage of virtualized SQL Servers with the help of experts, read more over at Fix Your SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • TechEd 2010 Day One – How I Travel

    - by BuckWoody
    Normally when I blog on the first day of a conference, well, there hasn’t been a first day yet. So I talk about the value of a conference or some other facet. And normally in my (non-conference) blogs, I show you how I have learned to be a data professional – things I’ve learned how to do over the years. But in all that time, I don’t think I’ve ever talked about a big part of my job – traveling. I’ve traveled a lot throughout the years, when I’ve taught, gone to conferences, consulted and in my current role assisting Microsoft customers with large-scale database system designs.  So I’ll share a few thoughts about what I do. Keep in mind that I travel for short durations, just a day or so, and sometimes I travel internationally. For those I prepare differently – what I’m talking about here is what I do for a multi-day, same-country trip. Hopefully you find it useful. I’ll tag a few other travelers I know to add their thoughts.  Preparing for Travel   When I’m notified of a trip, I begin researching the location. I find the flights, hotel and (if I have to) a car to use while I’m away. We have an in-house system we use to book the travel, but when I travel not-for-Microsoft I use Expedia and Kayak to find what I need.  Traveling on Sunday and Friday is the worst. I have to do it sometimes (like this week) and it’s always a bad idea. But you can blunt the impact by booking as early as you can stand it. That means I have to be up super-early, but the flights are normally on time. I stay flexible, and always have a backup plan in case the flights are delayed or canceled.  For the hotel, I tend to go on the cheaper side, and I look for older hotels that have been renovated, or quirky ones. For instance, in Boise, ID recently I stayed at a 60’s-themed (think Mad-Men) hotel that was very cool. Always I go on the less expensive side – I find the “luxury” hotels nail me for Internet, food, everything. The cheaper places include all kinds of things, and even have breakfasts, shuttles and all kinds of things that start to add up. I even call ahead to make sure there’s an iron and ironing board available, since I’ll need those when I get there.  I find any way I can not to get a car. I use mass-transit wherever possible, and try to make friends and pay their gas to take me places. In a pinch, I’ll use a taxi. It ends up being cheaper, faster, and less stressful all around.  Packing  Over the years I’ve learned never to check luggage whenever I can. To do that, I lay out everything I want to take with me on the bed, and then try and make sure I’m really going to use it. I wear a dark wool set of pants, which I can clean and wear in hot and cold climates. I bring undies and socks of course, and for most places I have to wear “dress up” shirts. I bring at least two print T-Shirts in case I want to dress down for something while I’m gone, but I only bring one set of shoes. All the  clothes are rolled as tightly as possible as I learned in the military. Then I use those to cushion the electronics I take.  For toiletries I bring a shaver, toothpaste and toothbrush, D/O and a small brush. Everything else the hotel will provide.  For entertainment, I take a small Zune, a full PC-Headset (so I can make IP calls on the road) and my laptop. I don’t take books or anything else – everything is electronic. I use E-books (downloaded from our Library), Audio-Books (on the Zune) and I also bring along a Kaossilator (more here) to play music in the hotel room or even on the plane without being heard.  
If I can, I pack into one roll-on bag. There’s not a lot better than this one, but I also have a Bag I was given as a prize for something or other here at Microsoft. Either way, I like something with less pockets and more big, open compartments. Everything gets rolled up and packed in, with all of the wires and charges in small bags my wife made for me. The laptop (and anything I don’t want gate-checked) goes on top or in an outside pouch so I can grab it quickly if I have to gate-check the bag. As much as I can, I try to go in one bag. When I can’t (like this week) I use this bag since it can expand, roll up, crush and even be put away later. It’s super-heavy canvas and worth the price. This allows me to not check a bag.  Journey Logistics The day of the trip, I have everything ready since I’m getting up early. I pack a few small snacks inside a plastic large-mouth water bottle, which protects the snacks and lets me get water in the terminal. I bring along those little powdered drink mixes to add to the water.  At the airport, I make a beeline for the power-outlets. I charge up my laptop and phone, and download all my e-mails so I can work on them off-line in the air. I don’t travel as often as I used to – just every month or so now, so I don’t have a membership to an airline club. If I travel much more, I’ll invest in one again – they are WELL worth the money, for the wifi, food and quiet if for nothing else.  I print out my logistics on paper and put that in my pocket – flight numbers, hotel addresses and phones for everything. That way if I have to make a change, I don’t have to boot up anything or even have power to be able to roll with the punches if things change.  Working While Away  While I’m away I realize I’m going to be swamped with things at the conference or with my clients. So I turn on Out-Of-Office notifications to let people know I won’t be as responsive, and I keep my Outlook calendar up to date so my co-workers know what I’m up to. I even update it with hotel and phone info in case they really need to reach me. I share my calendar with my wife so my family knows what I’m doing as well.  I check my e-mail during breaks, but I only respond to them in the evening or early morning at the hotel. I tweet during conferences. The point is to be as present as possible during the event or when I’m at the clients. Both deserve it.  So those are my initial thoughts. I’ll tag Brent Ozar, Brad McGeHee and Paul Randal, and they can tag whomever they wish. Share this post: email it! | bookmark it! | digg it! | reddit! | kick it! | live it!

    Read the article

  • CodePlex Daily Summary for Saturday, March 20, 2010

    CodePlex Daily Summary for Saturday, March 20, 2010New ProjectsaMaze Mapa Generator: Parte do Projeto aMazeASP.Net RIA Controls: Simple ASP.Net server controls to integrate Flash and Silverlight controls into your web applications. Included controls don't use any JavaScript,...BMap.NET: BMaps.NET is a .NET application written in C#, for access Bing Maps from your computer without web browsers. With it you can access to Bing Maps an...DaliNet: A .NET API for the Tridonic.Atco DALI USB device.Fabrica7: This is the main project of Fabrica 7 Corp.Image Ripper: A Winform application parse & fetch various HD pictures in specific photo galleries.IoCWrap: Provides interfaces which wrap various IoC container implementations so that it is possible to switch to a different provider without changing any ...NetSockets: NetSockets is a .NET class library that provides easy-to-use, multi-threaded, event-based, client and server network communication.Network Backup: Network Backup is a home and small company backup solution for workstations and a backup server. It incorporates a backup service, scheduler, data ...NUnit.Specs: Specification extensions for NUnit.Nutrivida: Sistema para avaliação de especialização.OHTB Snake: OHTB Snake is a multiplayer game. In this incarnation, snakes may eat 3 types of powerups: standard berries, causing them to grow; sawberries, caus...Playground TDrouen: Tjerk's PlaygroundPower Plan Chooser: This is my first endeavor into a C# Windows application with XAML. The program sits in the notification area (task bar) and lets you quickly activa...Search IMDB in C#: In lack of an IMDB API most of us resort to screen scraping utilities to query the Internet Movie Database. This one is written in C# (.NET 2.0 sta...SIGPRO Desktop: FUNCERNSql2008 PerfMonCounter Fix: Small console application to Fix the SQL 2008 Express Edition installation error: Pequena aplicação para Corrigir o seguinte erro de Instalação do...TwiztedTracker: TwiztedTracker designed to make your bug tracking easy.UmbracoXsltLogHelper: I needed a way to easily add log rows from my xslt macros, and added a single-line-extension for that reason. Then I played around with the umbraco...VisualStock: VisualStock is stock data visualization, analysis application build on the Micorsoft Composite Application Library.WHS File Mover: A Windows Home Server Plugin to move files from a local directory ("drop" or "staging" directory to a folder share)XML based Content Deployment in SharePoint: XML based Content Deployment in Sharepoint helps you to easy deploy content into SharePoint, including webs, lists, items, files and folder. You wi...New ReleasesASP.Net RIA Controls: Version 1.0 Beta: The first functionnal version.BMap.NET: BMap.NET 1: This is the 1st version of BMap.NETDigital Media Processing Project 1: Image Processor: Image Processor 1.0: All features implemented. Added: clipping imageFamily Tree Analyzer: Version 1.3.1.0: Version 1.3.1.0 Added a cancel button to marriage and children IGI Searches Opening Results window now automatically shows first record Updated IGI...Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts 3.0.5 Released: Hi, This release contains fix for the following bug: * Chart threw exception if ZoomingEnabled property was set to True at real-time. You ca...Homework Helper: Homework Helper v.1.1: Sorry but the latest release didn't seem to be the latest. 
This should be the right one!Image Ripper: Image Ripper: Image Ripper based on HtmlAgilityPack and GData library.ManPowerEngine: 0.1: UpdatesSound System added. Bitmap Collider in Physics System works now. Improved the performance of HTTP download in images Physics Framework...NIPO Data Processing Component Framework: NIPO 1.0: The first release of NIPO. Includes the NIPO binary dll and documentation. This release does not include a starter application since it is still in...patterns & practices SharePoint Guidance: SPG2010 Drop7: SharePoint Guidance Drop Notes Microsoft patterns and practices ****************************************** ***************************************...Photosynth Point Cloud Exporter: Photosynth Point Cloud Exporter 1.0.2: Photosynth webservice reference updated to work with the new site OBJ file format support added (Note: this format doesn't support vertex colors)Power Plan Chooser: Power Plan Chooser 1.0.0: Power Plan Chooser is a small utility that sits in the notification area (task bar) in Windows 7 and allows the user to quickly activate one of the...Restart Explorer: RestartExplorer Release 1.00.0001: Initial release: Start, stop and restart Windows Explorer with this utility.Search IMDB in C#: Search IMDB 1.0: Source code included with compiled example.SIMD Detector: 3rd Release: Added Intel AES instruction check Added a CSharp Winform NetSIMDDetector application. Changes the red ball and green ball images to red cross a...Sql2008 PerfMonCounter Fix: Sql2008FIx_PerfMonCounter.zip: Small console application to Fix the SQL 2008 Express Edition installation error: http://support.microsoft.com/kb/300956 Rule Name PerfMonCounter...UmbracoXsltLogHelper: 0.9 Working Beta: First version. XsltLogHelper09 is the installable package.VCC: Latest build, v2.1.30319.0: Automatic drop of latest buildWCF RIA Services Contrib: RIA Services Contrib RC Release: This version is recompiled against the RC release of WCF RIA Services.XML based Content Deployment in SharePoint: SPContentDeployment 1.0.0.0: The first link contains the resources and a sample project. The second link contains everything included in the first package and an additional fo...Yet Another GPS: YAGPS Alfa.2: Yet another GPS tracker is a very powerful GPS track application for Windows Mobile Speed Guage, Sat Count number, KML for google map file formatZGuideTV.NET: ZGuideTV.NET 0.92: Vendredi 19 mars 2010 (ZGuideTV.NET bêta 9 build 0.92) - English below Corrections : - Gestion de certains contrôles dans l'écran principal. - Div...Most Popular ProjectsMetaSharpRawrWBFS ManagerSilverlight ToolkitASP.NET Ajax LibraryMicrosoft SQL Server Product Samples: DatabaseAJAX Control ToolkitLiveUpload to FacebookWindows Presentation Foundation (WPF)ASP.NETMost Active ProjectsLINQ to TwitterRawrOData SDK for PHPjQuery Library for SharePoint Web ServicesDirectQPHPExcelpatterns & practices – Enterprise LibraryBlogEngine.NETFarseer Physics EngineNB_Store - Free DotNetNuke Ecommerce Catalog Module

    Read the article

  • Mscorlib mocking minus the attribute

    - by mehfuzh
    Mocking .NET framework members (a.k.a. mscorlib) is always a daunting task. It's a breed of static and final methods, full of surprises. Technically, intercepting mscorlib members is completely different from intercepting other class libraries, which is why it is dealt with differently. Generally, I prefer writing a wrapper around an mscorlib member (e.g. File.Delete("abc.txt")) and exposing it via an interface, but that is not always an easy task if you already have a years-old codebase. When mocking mscorlib members, the first thing that comes to people's mind is DateTime.Now. If you Google around, you will find tons of examples dealing with just that. Maybe it's the most important class that we can't ignore, so I will create an example using JustMock Q2 with the same. In Q2 2012, we got rid of the MockClassAttribute for mocking mscorlib members. JustMock is already attribute-free for mocking class libraries. We strongly believe that vendor-specific attributes only make your code smelly, and therefore decided the same for mscorlib. Now, I want to fake DateTime.Now for the following class:
        public class NestedDateTime
        {
            public DateTime GetDateTime()
            {
                return DateTime.Now;
            }
        }
    It is the simplest one there can be. The first thing here is that I tell JustMock "hey, we have a DateTime.Now in the NestedDateTime class that we want to mock". To do so, during test initialization I write this:
        Mock.Replace(() => DateTime.Now).In<NestedDateTime>(x => x.GetDateTime());
    I can also define it for all the members in the class, but that's just a waste of extra watts:
        Mock.Replace(() => DateTime.Now).In<NestedDateTime>();
    Now the question: why should I bother doing this? The answer is that I am not using an attribute, and with this approach I can mock any framework member, not just File, FileInfo or DateTime. Note that we could already mock beyond those three, but when a member was nested inside a complex class, JustMock was not intercepting it correctly; therefore we decided to get rid of the attribute altogether, fixing the issue. Finally, I write my test as usual:
        [TestMethod]
        public void ShouldAssertMockingDateTimeFromNestedClass()
        {
            var expected = new DateTime(2000, 1, 1);
            Mock.Arrange(() => DateTime.Now).Returns(expected);
            Assert.Equal(new NestedDateTime().GetDateTime(), expected);
        }
    That's it, we are good. Now let me do the same for a random one; let's say I want to mock a member from DriveInfo:
        Mock.Replace<DriveInfo[]>(() => DriveInfo.GetDrives()).In<MsCorlibFixture>(x => x.ShouldReturnExpectedDriveWhenMocked());
    Moving forward, I write my test:
        [TestMethod]
        public void ShouldReturnExpectedDriveWhenMocked()
        {
            Mock.Arrange(() => DriveInfo.GetDrives()).MustBeCalled();
            DriveInfo.GetDrives();
            Mock.Assert(() => DriveInfo.GetDrives());
        }
    Here is one convention: you have to replace the mscorlib member before executing the target method that contains it. Here the call to DriveInfo is within the MsCorlibFixture, therefore the replacement should be defined during test initialization or before executing the test method. Hope this gives you the idea.
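    For completeness, here is a minimal sketch (not from the original post) of the wrapper approach mentioned at the top, where the static mscorlib call is hidden behind an interface so it can be faked with any ordinary mocking technique; the interface and class names are illustrative.
        using System.IO;

        public interface IFileSystem
        {
            void Delete(string path);
        }

        public class FileSystem : IFileSystem
        {
            public void Delete(string path)
            {
                // The only place that touches the static mscorlib API.
                File.Delete(path);
            }
        }
    Code that depends on IFileSystem can then be tested against a fake without intercepting mscorlib at all, which is why the author reserves Mock.Replace for cases where such a wrapper is impractical, such as an old codebase.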

    Read the article

  • Car animations in Frogger on Javascript

    - by Mijoro Nicolas Rasoanaivo
    I have to finish a Frogger game in Javascript for my engineering school degree, but I don't know how to animate the cars. Right now I tried to manipulate the CSS, the DOM, I wrote a script with a setTimeout(), but none of them works.Can I have some help please? Here's my code and my CSS: <html> <head> <title>Image d&eacute;filante</title> <link rel="stylesheet" type="text/css" href="map_style.css"/> </head> <body onload="start()"> <canvas id="jeu" width="800" height="450"> </canvas> <img id="voiture" class="voiture" src="car.png" onload="startTimerCar()"> <img id="voiture2" class="voiture" src="car.png" onload="startTimerCar()"> <img id="voiture3" class="voiture" src="car.png" onload="startTimerCar()"> <img id="bigrig" class="bigrig" src="bigrig.png" onload="startTimerBigrig()"> <img id="bigrig2" class="bigrig" src="bigrig.png" onload="startTimerBigrig()"> <img id="bigrig3" class="bigrig" src="bigrig.png" onload="startTimerBigrig()"> <img id="hotrod" src="hotrod.png" onload="startTimerHotrod()"> <img id="hotrod2" src="hotrod.png" onload="startTimerHotrod()"> <img id="turtle" src="turtles_diving.png" onload="startTimerTurtle()"> <img id="turtle2" src="turtles_diving.png" onload="startTimerTurtle()"> <img id="turtle3" src="turtles_diving.png" onload="startTimerTurtle()"> <img id="small" src="log_small.png" onload="startTimerSmall()"> <img id="small2" src="log_small.png" onload="startTimerSmall()"> <img id="small3" src="log_small.png" onload="startTimerSmall()"> <img id="small4" src="log_small.png" onload="startTimerSmall()"> <img id="med" src="log_medium.png" onload="startTimerMedium()"> <img id="med2" src="log_medium.png" onload="startTimerMedium()"> <img id="med3" src="log_medium.png" onload="startTimerMedium()"> <script type="text/javascript"> var X = 1; var timer; function start(){ setInterval(init,10); document.onkeydown = move; var canvas = document.getElementById('jeu'); var context = canvas.getContext('2d'); var frog = document.getElementById('frog'); var posX_frog = 415; var posY_frog = 400; var voiture = [document.getElementById('voiture'),document.getElementById('voiture2'),document.getElementById('voiture3')]; var bigrig = [document.getElementById('bigrig'),document.getElementById('bigrig2'),document.getElementById('bigrig3')]; var hotrod = [document.getElementById('hotrod'),document.getElementById('hotrod2')]; var turtle = [document.getElementById('turtle'),document.getElementById('turtle2'),document.getElementById('turtle3')]; var small = [document.getElementById('small'),document.getElementById('small2'),document.getElementById('small3'),document.getElementById('small4')]; var med = [document.getElementById('med'),document.getElementById('med2'),document.getElementById('med3')]; function init() { context.fillStyle = "#AEEE00"; context.fillRect(0,0,800,50); context.fillRect(0,200,800,50); context.fillRect(0,400,800,50); context.fillStyle = "#046380"; context.fillRect(0,50,800,150); context.fillStyle = "#000000"; context.fillRect(0,250,800,150); var img= new Image(); img.src="./frog.png"; context.drawImage(img,posX_frog, posY_frog, 46, 38); } function move(event){ if (event.keyCode == 39){ if( posX_frog < 716 ){ posX_frog += 50; } } if(event.keyCode == 37){ if( posX_frog >25 ){ posX_frog -= 50; } } if (event.keyCode == 38){ if( posY_frog > 10 ){ posY_frog -= 50; } } if(event.keyCode == 40){ if( posY_frog <400 ){ posY_frog += 50; } } } } </script> </body> And my map_css: #jeu{ z-index:10; width: 800px; height: 450px; border: 2px black solid; overflow: hidden; position: 
relative; transition:width 2s; -moz-transition:width 2s; /* Firefox 4 */ -webkit-transition:width 2s; /* Safari and Chrome */ } #voiture{ z-index: 100; position: absolute; top: 315px; left: 48px; transition-timing-function: linear; -webkit-transition-timing-function: linear; -moz-transition-timing-function: linear; } #voiture2{ z-index: 100; position: absolute; top: 315px; left: 144px; } #voiture3{ z-index: 100; position: absolute; top: 315px; left: 240px; } #bigrig{ z-index: 100; position: absolute; top: 365px; left: 200px; } #bigrig2{ z-index: 100; position: absolute; top: 365px; left: 400px; } #bigrig3{ z-index: 100; position: absolute; top: 365px; left: 600px; } #hotrod{ z-index: 100; position: absolute; top: 265px; left: 200px; } #hotrod2{ z-index: 100; position: absolute; top: 265px; left: 500px; } #hotrod3{ z-index: 100; position: absolute; top: 265px; left: 750px; } #turtle{ z-index: 100; position: absolute; top: 175px; left: 50px; } #turtle2{ z-index: 100; position: absolute; top: 175px; left: 450px; } #turtle3{ z-index: 100; position: absolute; top: 175px; left: 250px; } #small{ z-index: 100; position: absolute; top: 125px; left: 20px; } #small2{ z-index: 100; position: absolute; top: 125px; left: 220px; } #small3{ z-index: 100; position: absolute; top: 125px; left: 420px; } #small4{ z-index: 100; position: absolute; top: 125px; left: 620px; } #med{ z-index: 100; position: absolute; top: 75px; left: 120px; } #med2{ z-index: 100; position: absolute; top: 75px; left: 320px; } #med3{ z-index: 100; position: absolute; top: 75px; left: 520px; } I had to say that I'm in the obligation to code in HTML5, CSS3, and Javascript but not jQuery, who is way more easier, I already created games in jQuery... It takes me too much time and too much code lines right here.

    Read the article

  • Testing Workflows &ndash; Test-First

    - by Timothy Klenke
    Originally posted on: http://geekswithblogs.net/TimothyK/archive/2014/05/30/testing-workflows-ndash-test-first.aspxThis is the second of two posts on some common strategies for approaching the job of writing tests.  The previous post covered test-after workflows where as this will focus on test-first.  Each workflow presented is a method of attack for adding tests to a project.  The more tools in your tool belt the better.  So here is a partial list of some test-first methodologies. Ping Pong Ping Pong is a methodology commonly used in pair programing.  One developer will write a new failing test.  Then they hand the keyboard to their partner.  The partner writes the production code to get the test passing.  The partner then writes the next test before passing the keyboard back to the original developer. The reasoning behind this testing methodology is to facilitate pair programming.  That is to say that this testing methodology shares all the benefits of pair programming, including ensuring multiple team members are familiar with the code base (i.e. low bus number). Test Blazer Test Blazing, in some respects, is also a pairing strategy.  The developers don’t work side by side on the same task at the same time.  Instead one developer is dedicated to writing tests at their own desk.  They write failing test after failing test, never touching the production code.  With these tests they are defining the specification for the system.  The developer most familiar with the specifications would be assigned this task. The next day or later in the same day another developer fetches the latest test suite.  Their job is to write the production code to get those tests passing.  Once all the tests pass they fetch from source control the latest version of the test project to get the newer tests. This methodology has some of the benefits of pair programming, namely lowering the bus number.  This can be good way adding an extra developer to a project without slowing it down too much.  The production coder isn’t slowed down writing tests.  The tests are in another project from the production code, so there shouldn’t be any merge conflicts despite two developers working on the same solution. This methodology is also a good test for the tests.  Can another developer figure out what system should do just by reading the tests?  This question will be answered as the production coder works there way through the test blazer’s tests. Test Driven Development (TDD) TDD is a highly disciplined practice that calls for a new test and an new production code to be written every few minutes.  There are strict rules for when you should be writing test or production code.  You start by writing a failing (red) test, then write the simplest production code possible to get the code working (green), then you clean up the code (refactor).  This is known as the red-green-refactor cycle. The goal of TDD isn’t the creation of a suite of tests, however that is an advantageous side effect.  The real goal of TDD is to follow a practice that yields a better design.  The practice is meant to push the design toward small, decoupled, modularized components.  This is generally considered a better design that large, highly coupled ball of mud. TDD accomplishes this through the refactoring cycle.  Refactoring is only possible to do safely when tests are in place.  In order to use TDD developers must be trained in how to look for and repair code smells in the system.  Through repairing these sections of smelly code (i.e. 
a refactoring), the design of the system emerges. For further information on TDD, I highly recommend the series “Is TDD Dead?”. It discusses its pros and cons and when it is best used. Acceptance Test Driven Development (ATDD) Whereas TDD focuses on small unit tests that concentrate on a small piece of the system, Acceptance Tests focus on the larger integrated environment. Acceptance Tests usually correspond to user stories, which come directly from the customer. The unit tests focus on the inputs and outputs of smaller parts of the system, which are too low level to be of interest to the customer. ATDD generally uses the same tools as TDD. However, ATDD uses fewer mocks and test doubles than TDD. ATDD often complements TDD; they aren’t competing methods. A full test suite will usually consist of a large number of unit tests (created via TDD) and a smaller number of acceptance tests. Behaviour Driven Development (BDD) BDD is more about audience than workflow. BDD pushes the testing realm out towards the client. Developers, managers and the client all work together to define the tests. Typically, different tooling is used for BDD than for acceptance and unit testing. This is done because the audience is not just developers. Tools using the Gherkin family of languages allow for test scenarios to be described in an English format. Other tools such as MSpec or FitNesse also strive for highly readable behaviour driven test suites. Because these tests are public facing (viewable by people outside the development team), the terminology usually changes. You can’t get away with the same technobabble you can with unit tests written in a programming language that only developers understand. For starters, they usually aren’t called tests. Usually they’re called “examples”, “behaviours”, “scenarios”, or “specifications”. This may seem like a very subtle difference, but I’ve seen this small terminology change have a huge impact on the acceptance of the process. Many people have a bias that testing is something that comes at the end of a project. When you say we need to define the tests at the start of the project, many people will immediately give that a lower priority on the project schedule. But if you say we need to define the specification or behaviour of the system before we can start, you’ll get more cooperation. Keep these test-first and test-after workflows in your tool belt. With them you’ll be able to find new opportunities to apply them.
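To make the red-green-refactor cycle concrete, here is a minimal sketch of a single pass using C# with NUnit-style assertions; the Calculator class and its Add method are hypothetical names used purely for illustration and are not taken from the article above.

using NUnit.Framework;

// Red: write a failing test that states the desired behaviour.
[TestFixture]
public class CalculatorTests
{
    [Test]
    public void Add_ReturnsSumOfTwoNumbers()
    {
        var calculator = new Calculator();
        Assert.AreEqual(5, calculator.Add(2, 3));
    }
}

// Green: write the simplest production code that makes the test pass.
public class Calculator
{
    public int Add(int a, int b)
    {
        return a + b;
    }
}

// Refactor: with the test green, clean up names and structure in small steps,
// re-running the test after each change to keep the bar green.

In practice the whole cycle above takes only a few minutes, which is exactly the cadence the TDD discipline calls for.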

    Read the article

  • Azure Task Scheduling Options

    - by charlie.mott
    Currently, the Azure PaaS does not offer a distributed\resilient task scheduling service.  If you do want to host a task scheduling product\solution off-premise (and ideally use Azure), what are your options? PaaS Option 1: Worker Roles Use a worker role to schedule and execute actions at specific time periods.  There are a few frameworks available to assist with this: http://azuretoolkit.codeplex.com https://github.com/Lokad/lokad-cloud/wiki/TaskScheduler http://blog.smarx.com/posts/building-a-task-scheduler-in-windows-azure - This addresses a slightly different set of requirements. It’s a more dynamic approach for queuing up tasks, but not repeatable tasks (e.g. daily). I found the Azure Toolkit option the most simple to implement.  Step 1 : Create a domain entity implementing IJob for each job to schedule.  In this sample, I asynchronously call a WCF service method. 1: namespace Acme.WorkerRole.Jobs 2: { 3: using AzureToolkit; 4: using ScheduledTasksService; 5: 6: public class UploadEmployeesJob : IJob 7: { 8: public void Run() 9: { 10: // Call Tasks Service 11: var client = new ScheduledTasksServiceClient("BasicHttpBinding_IScheduledTasksService"); 12: client.UploadEmployees(); 13: client.Close(); 14: } 15: } 16: } Step 2 : In the worker role run method, add the jobs to the toolkit engine. 1: namespace Acme.WorkerRole 2: { 3: using AzureToolkit.Engine; 4: using Jobs; 5:   6: public class WorkerRole : WorkerRoleEntryPoint 7: { 8: public override void Run() 9: { 10: var engine = new CloudEngine(); 11:   12: // Add Scheduled Jobs (using CronJob syntax - see http://www.adminschoice.com/crontab-quick-reference). 13:   14: // 1. Upload Employee job - 8.00 PM every weekday (Mon-Fri) 15: engine.WithJobScheduler().ScheduleJob<UploadEmployeesJob>(c => { c.CronSchedule = "0 20 * * 1-5"; }); 16: // 2. Purge Data job - 10 AM every Saturday 17: engine.WithJobScheduler().ScheduleJob<PurgeDataJob>(c => { c.CronSchedule = "0 10 * * 6"; }); 18: // 3. Process Exceptions job - Every 5 minutes 19: engine.WithJobScheduler().ScheduleJob<ProcessExceptionsJob>(c => { c.CronSchedule = "*/5 * * * *"; }); 20:   21: engine.Run(); 22: base.Run(); 23: } 24: } 25: } Pros Cons Azure Toolkit option is simple to implement. For the AzureToolkit option, you are limited to a single worker role.  Otherwise, the jobs will be executed multiple times, once for each worker role instance.   Paying for a continuously running worker role, even if it just processes a single job once a week.  If you only have a few scheduled tasks to run calling asynchronous services hosted in different web roles, an extra small worker role likely to be sufficient.  However, for an extra small worker role this still costs $14.40/month (03/09/2012). Option 2: Use Scheduled Task on Azure Web Role calling a console app Setup a Windows Scheduled Task on the Azure Web Role. This calls a console application that calls the WCF service methods that run the task actions. This design is described here: http://www.ronaldwidha.net/2011/02/23/cron-job-on-azure-using-scheduled-task-on-a-web-role-to-replace-azure-worker-role-for-background-job/ http://www.voiceoftech.com/swhitley/index.php/2011/07/windows-azure-task-scheduler/ http://devlicio.us/blogs/vinull/archive/2011/10/23/moving-to-azure-worker-roles-for-nothing-and-tasks-for-free.aspx Pros Cons Fairly easy to implement. Supportability - I RDC’ed onto the Azure server and stopped the scheduled task. I then rebooted the machine and the task was re-started. 
I also tried deleting the task and rebooting; the same thing occurred. The only way to permanently guarantee that a task is disabled is to do a fresh deployment. I think this is a major supportability concern.   Scalability - multiple instances would trigger multiple tasks. You can only have one instance for the scheduled task web role. The guidance implements setup of the scheduled task as part of a web role instance. But if you have more than one instance in a web role, the task will be triggered multiple times for each scheduled action (once per machine). Workaround: If we wanted to use scheduled tasks for another client with a scalable WCF service, then we could include the console & task scripts in a separate web role (e.g. an empty WCF service with no real purpose to it). SaaS Option 3: Azure Marketplace I thought that someone might be offering this type of service via the Azure Marketplace. At the time of writing this blog post, I did not find anyone doing so. https://datamarket.azure.com/ Pros Cons   Nobody currently offers this on the Azure Marketplace. Option 4: Online Job Scheduling Service Provider There are plenty of online providers that offer this type of service on a pay-as-you-go basis. Some of these are free for small usage.   Many of these providers are listed here: http://en.wikipedia.org/wiki/Webcron Pros Cons No bespoke development for the scheduler. Reliance on a third party. IaaS Option 5: Set up Scheduling Software on Azure IaaS VMs One of the job scheduling software offerings could be installed and configured on Azure VMs. A list of software options is listed here: http://en.wikipedia.org/wiki/List_of_job_scheduler_software Pros Cons Enterprise distributed/resilient task scheduling service VM setup and maintenance   Software licence costs Option 6: VM Gallery At the time of writing this blog post, I did not spot a VM in the gallery that included pre-installation of any of the above software options. Pros Cons   No current VM template. Summary For my current project, which had a small handful of tasks to schedule and a limited budget, I chose Option 1 (a worker role using the Azure Toolkit to schedule tasks). If I were building an enterprise-scale solution for the future, options 4 and 5 would currently be worthy of consideration. Hopefully, Microsoft will include task scheduling in the future as part of their PaaS offerings.
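As a rough sketch of the console application that Option 2 would schedule, the C# stub below simply calls the same WCF service used in Option 1 and picks the task from a command-line argument; ScheduledTasksServiceClient, its endpoint name and the PurgeData operation are names carried over or extrapolated from the Option 1 listing, so substitute your own generated client and service methods.

using System;
using ScheduledTasksService;

namespace Acme.TaskRunner
{
    // Entry point invoked by the Windows Scheduled Task, e.g. "Acme.TaskRunner.exe UploadEmployees".
    internal static class Program
    {
        private static void Main(string[] args)
        {
            var client = new ScheduledTasksServiceClient("BasicHttpBinding_IScheduledTasksService");
            try
            {
                switch (args.Length > 0 ? args[0] : string.Empty)
                {
                    case "UploadEmployees":
                        client.UploadEmployees();
                        break;
                    case "PurgeData":
                        client.PurgeData();
                        break;
                    default:
                        Console.Error.WriteLine("Usage: Acme.TaskRunner.exe <UploadEmployees|PurgeData>");
                        break;
                }
                client.Close();
            }
            catch (Exception)
            {
                client.Abort();  // never leave a faulted WCF client half-closed
                throw;           // let the scheduler record the failure
            }
        }
    }
}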

    Read the article

Organization & Architecture UNISA Studies – Chap 4

    - by MarkPearl
Learning Outcomes Explain the characteristics of memory systems Describe the memory hierarchy Discuss cache memory principles Discuss issues relevant to cache design Describe the cache organization of the Pentium Computer Memory Systems There are key characteristics of memory… Location – internal or external Capacity – expressed in terms of bytes Unit of Transfer – the number of bits read out of or written into memory at a time Access Method – sequential, direct, random or associative From a user's perspective the two most important characteristics of memory are… Capacity Performance – access time, memory cycle time, transfer rate The trade-off for memory happens along three axes… Faster access time, greater cost per bit Greater capacity, smaller cost per bit Greater capacity, slower access time This leads to people using a tiered approach in their use of memory   As one goes down the hierarchy, the following occurs… Decreasing cost per bit Increasing capacity Increasing access time Decreasing frequency of access of the memory by the processor The use of two levels of memory to reduce average access time works in principle, but only if conditions 1 to 4 apply. A variety of technologies exist that allow us to accomplish this. Thus it is possible to organize data across the hierarchy such that the percentage of accesses to each successively lower level is substantially less than that of the level above. A portion of main memory can be used as a buffer to hold data temporarily that is to be read out to disk. This is sometimes referred to as a disk cache and improves performance in two ways… Disk writes are clustered. Instead of many small transfers of data, we have a few large transfers of data. This improves disk performance and minimizes processor involvement. Some data designed for write-out may be referenced by a program before the next dump to disk. In that case the data is retrieved rapidly from the software cache rather than slowly from disk. Cache Memory Principles Cache memory is substantially faster than main memory. A caching system works as follows… When a processor attempts to read a word of memory, a check is made to see if it is in cache memory… If it is, the data is supplied. If it is not in the cache, a block of main memory, consisting of a fixed number of words, is loaded into the cache. Because of the phenomenon of locality of reference, when a block of data is fetched into the cache, it is likely that there will be future references to that same memory location or to other words in the block. Elements of Cache Design While there are a large number of cache implementations, there are a few basic design elements that serve to classify and differentiate cache architectures… Cache Addresses Cache Size Mapping Function Replacement Algorithm Write Policy Line Size Number of Caches Cache Addresses Almost all non-embedded processors support virtual memory. Virtual memory in essence allows a program to address memory from a logical point of view without needing to worry about the amount of physical memory available. When virtual addresses are used the designer may choose to place the cache between the MMU (memory management unit) and the processor or between the MMU and main memory.
The disadvantage of virtual memory is that most virtual memory systems supply each application with the same virtual memory address space (each application sees virtual memory starting at memory address 0), which means the cache memory must be completely flushed with each application context switch or extra bits must be added to each line of the cache to identify which virtual address space the address refers to. Cache Size We would like the size of the cache to be small enough so that the overall average cost per bit is close to that of main memory alone and large enough so that the overall average access time is close to that of the cache alone. Also, larger caches are slightly slower than smaller ones. Mapping Function Because there are fewer cache lines than main memory blocks, an algorithm is needed for mapping main memory blocks into cache lines. The choice of mapping function dictates how the cache is organized. Three techniques can be used… Direct – the simplest technique, maps each block of main memory into only one possible cache line Associative – permits each main memory block to be loaded into any line of the cache Set Associative – exhibits the strengths of both the direct and associative approaches while reducing their disadvantages For detailed explanations of each approach – read the text book (pages 148–154) Replacement Algorithm For associative and set associative mapping a replacement algorithm is needed to determine which of the existing blocks in the cache must be replaced by a new block. There are four common approaches… LRU (Least recently used) FIFO (First in first out) LFU (Least frequently used) Random selection Write Policy When a block resident in the cache is to be replaced, there are two cases to consider If no writes to that block have happened in the cache – discard it If a write has occurred, a process needs to be initiated where the changes in the cache are propagated back to the main memory. There are several approaches to achieve this including… Write Through – all writes to the cache are done to the main memory as well at the point of the change Write Back – when a block is replaced, all dirty bits are written back to main memory The problem is complicated when we have multiple caches; there are techniques to accommodate this, but I have not summarized them. Line Size When a block of data is retrieved and placed in the cache, not only the desired word but also some number of adjacent words are retrieved. As the block size increases from very small to larger sizes, the hit ratio will at first increase because of the principle of locality, which states that the data in the vicinity of a referenced word are likely to be referenced in the near future. As the block size increases, more useful data are brought into the cache. The hit ratio will begin to decrease as the block becomes even bigger and the probability of using the newly fetched information becomes less than the probability of reusing the information that has to be replaced. Two specific effects come into play… Larger blocks reduce the number of blocks that fit into a cache. Because each block fetch overwrites older cache contents, a small number of blocks results in data being overwritten shortly after they are fetched. As a block becomes larger, each additional word is farther from the requested word and therefore less likely to be needed in the near future. The relationship between block size and hit ratio is complex, and no set approach is judged to be the best in all circumstances.
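As a small illustration of the direct mapping function described above, the C# sketch below splits a byte address into its tag, line index and word offset; the 64-byte line size and 1,024-line cache are arbitrary example values rather than figures from the textbook.

using System;

class DirectMappedCacheDemo
{
    // Assumed example geometry: 64-byte lines and 1,024 lines, i.e. a 64 KB cache.
    const uint LineSize = 64;
    const uint LineCount = 1024;

    static void Main()
    {
        uint address = 0x0012F4A8;

        uint offset = address % LineSize;                // byte position within the cache line
        uint index  = (address / LineSize) % LineCount;  // the single line this block can occupy
        uint tag    = address / (LineSize * LineCount);  // identifies which memory block is resident

        Console.WriteLine($"tag=0x{tag:X}, index={index}, offset={offset}");
        // A lookup compares the tag stored at 'index' with 'tag': equal means a hit,
        // otherwise the block containing 'address' is fetched and overwrites that line.
    }
}

Associative and set-associative caches relax the "single possible line" constraint, which is why they need a replacement algorithm such as LRU.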
Pentium 4 and ARM cache organizations The processor core consists of four major components: Fetch/decode unit – fetches program instructions in order from the L2 cache, decodes these into a series of micro-operations, and stores the results in the L1 instruction cache Out-of-order execution logic – Schedules execution of the micro-operations subject to data dependencies and resource availability – thus micro-operations may be scheduled for execution in a different order than they were fetched from the instruction stream. As time permits, this unit schedules speculative execution of micro-operations that may be required in the future Execution units – These units execute micro-operations, fetching the required data from the L1 data cache and temporarily storing results in registers Memory subsystem – This unit includes the L2 and L3 caches and the system bus, which is used to access main memory when the L1 and L2 caches have a cache miss and to access the system I/O resources
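To see the locality-of-reference principle from these notes in practice, the following C# sketch walks the same matrix in row order and then in column order; the matrix size is an arbitrary choice and the timings are only illustrative, but on most hardware the row-order walk is noticeably faster because .NET stores rectangular arrays row-major, so consecutive elements share cache lines.

using System;
using System.Diagnostics;

class LocalityDemo
{
    static void Main()
    {
        const int n = 2048;                   // arbitrary size, large enough to exceed the caches
        var data = new double[n, n];
        double sum = 0;

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < n; i++)           // row-order walk: sequential memory accesses
            for (int j = 0; j < n; j++)
                sum += data[i, j];
        sw.Stop();
        Console.WriteLine($"row order:    {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        for (int j = 0; j < n; j++)           // column-order walk: each access jumps n elements ahead
            for (int i = 0; i < n; i++)
                sum += data[i, j];
        sw.Stop();
        Console.WriteLine($"column order: {sw.ElapsedMilliseconds} ms (checksum {sum})");
    }
}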

    Read the article

  • Search Alternative Search Engines from within Bing’s Search Page

    - by Asian Angel
So you love using Bing Search but may still be curious to see what another search engine will provide if used. Now you can search using another search engine from within the Bing Search page and enjoy numbered results using two simple user scripts. Note: These user scripts may also be added to other browsers as well (i.e. Iron, Opera, etc.). Before Bing Search does nicely on searches, but what if you would like to try the same search with another search engine? Having to manually open a new tab, navigate to the appropriate website, and then start a new search is not too convenient. Another possible frustration for some people may be not knowing just how many search results they have looked through. Well, both of these small problems are easy to fix with two wonderful user scripts. Installing the Scripts The first script that we installed (you may do either one first) was for adding alternative search engine links. Click “Install” to get started… Note: For our example we had the Greasemonkey extension installed. When the confirmation window pops up click on “Install” to finish adding the user script to Firefox. Repeating the same procedure as above, add your second script to Firefox. Confirm the second user script installation and you are ready to enjoy nicer Bing Search results. After As you can see there are two small unobtrusive differences in our search results. The alternative search engine links are conveniently located at the top of the page and now you can easily know just how many search results you have looked through. The results when we decided to transfer the same search over to Yahoo. Our search transferred to Ask Search. The alternative search links can be very helpful if Bing is not providing the kind of search results that you are hoping for. Still going very nicely past the 100 mark… Conclusion If you have been wanting a small boost for searching with Bing then these two scripts will get you on your way. Using Opera Browser? See our how-to for adding user scripts to Opera here. Links Install the Bing (Alternate Search Engine Links) User Script Install the Bing Numbered Search Results User Script Download the Greasemonkey extension for Firefox (Mozilla Add-ons) Download the Stylish extension for Firefox (Mozilla Add-ons)

    Read the article

  • Restricting Input in HTML Textboxes to Numeric Values

    - by Rick Strahl
    Ok, here’s a fairly basic one – how to force a textbox to accept only numeric input. Somebody asked me this today on a support call so I did a few quick lookups online and found the solutions listed rather unsatisfying. The main problem with most of the examples I could dig up was that they only include numeric values, but that provides a rather lame user experience. You need to still allow basic operational keys for a textbox – navigation keys, backspace and delete, tab/shift tab and the Enter key - to work or else the textbox will feel very different than a standard text box. Yes there are plug-ins that allow masked input easily enough but most are fixed width which is difficult to do with plain number input. So I took a few minutes to write a small reusable plug-in that handles this scenario. Imagine you have a couple of textboxes on a form like this: <div class="containercontent"> <div class="label">Enter a number:</div> <input type="text" name="txtNumber1" id="txtNumber1" value="" class="numberinput" /> <div class="label">Enter a number:</div> <input type="text" name="txtNumber2" id="txtNumber2" value="" class="numberinput" /> </div> and you want to restrict input to numbers. Here’s a small .forceNumeric() jQuery plug-in that does what I like to see in this case: [Updated thanks to Elijah Manor for a couple of small tweaks for additional keys to check for] <script type="text/javascript"> $(document).ready(function () { $(".numberinput").forceNumeric(); }); // forceNumeric() plug-in implementation jQuery.fn.forceNumeric = function () { return this.each(function () { $(this).keydown(function (e) { var key = e.which || e.keyCode; if (!e.shiftKey && !e.altKey && !e.ctrlKey && // numbers key >= 48 && key <= 57 || // Numeric keypad key >= 96 && key <= 105 || // comma, period and minus key == 190 || key == 188 || key == 109 || // Backspace and Tab and Enter key == 8 || key == 9 || key == 13 || // Home and End key == 35 || key == 36 || // left and right arrows key == 37 || key == 39 || // Del and Ins key == 46 || key == 45) return true; return false; }); }); } </script> With the plug-in in place in your page or an external .js file you can now simply use a selector to apply it: $(".numberinput").forceNumeric(); The plug-in basically goes through each selected element and hooks up a keydown() event handler. When a key is pressed the handler is fired and the keyCode of the event object is sent. Recall that jQuery normalizes the JavaScript Event object between browsers. The code basically white-lists a few key codes and rejects all others. It returns true to indicate the keypress is to go through or false to eat the keystroke and not process it which effectively removes it. Simple and low tech, and it works without too much change of typical text box behavior.© Rick Strahl, West Wind Technologies, 2005-2011Posted in JavaScript  jQuery  HTML  

    Read the article

  • Open Your Windows - 4/Maio/10

    - by Claudia Costa
This FREE technical briefing is designed to show ISVs/SIs how to leverage the Oracle11g technology, especially in the small to medium business. The briefing focuses on Oracle's 11g platform on Windows & Linux and gives a very comprehensive technical and competitive overview of the products offered by Microsoft. The technical part covers integration and migration aspects of various Microsoft products such as SQL Server, .NET and Active Directory. Register Today! With Oracle11g, Oracle introduced various products (Application Express, Oracle Express Edition, ADF, BPEL) and licenses (Oracle Database Standard Edition One, Application Server Java Edition) specifically targeting the small to medium business market, and to show that Oracle Database and Application Server are as easy to use as Microsoft products and cost less in terms of purchase price and ongoing support & maintenance, and even much, much less when considering the Linux platform. For those ISVs that have already adopted the Microsoft .NET framework and use SQL Server as their database layer, we will demonstrate that Oracle11g Database is as easy as SQL Server to install, configure, and manage. In addition, their .NET application development platform does not require dramatic changes to enable it to run on the Oracle database. Besides the standard functionality, Oracle has enhanced some of the advanced features, such as interMedia, security, REF CURSOR, etc., tightly integrated with the .NET framework so that .NET developers can take full advantage of the Oracle technology without worrying about or programming the complex components themselves. Objectives ·         Understand Oracle's strategy and commitment on Windows & Linux ·         Learn how to migrate from SQL Server to Oracle on Windows AND Linux ·         Understand that Oracle11g is easy to manage and to install on Windows & Linux ·         Learn how to integrate Windows products with the Oracle11g Platform ·         Learn how Oracle products interoperate & integrate with Microsoft .NET ·         Learn how an Oracle database on Windows will easily be ported to a lower cost Linux database platform and interoperate with a .NET application Prerequisites General operating system expertise, including MS-Windows and Linux. Agenda ·         Welcome and Intro ·         Oracle at a glance ·         Strategy: Small to Medium Business, Microsoft and Linux ·         Oracle 11g Architecture on Linux & Windows ·         Managing Oracle 11g on Linux & Windows ·         Application Development ·         Migration ·         Value propositions for ISVs & Wrap-up   ---------------------------------------------------------------------------- For more information or to register, contact: [email protected].

    Read the article

  • Add Spell Checking to Your Favorite Windows Apps

    - by Asian Angel
    Some but not all Windows apps have built-in spell checking of some sort. If you want to add spell checking to all of your apps (or a select group) then join us as we look at tinySpell. Note: There is a paid version of this software (tinySpell+) available as well for those who want extra functionality. tinySpell in Action The installation process is simple and straightforward…as soon as you have finished installing tinySpell you will see your new “System Tray Icon”. You can see tinySpell’s “Context Menu” here. Before going any further you may want to have a look through the settings to make any desired display modifications. During our tests we found it very helpful to modify the Spelling Tip options…it will make for a much nicer and easier to read display when you have a spelling error. Clicking on the Applications… Command in the Context Menu will bring up the following window. You can really finesse how active tinySpell will be here: Create a special list of apps that tinySpell will not monitor Create a custom list of apps that tinySpell will monitor If you have any particular or unique words that you would like to add to tinySpell’s Dictionary ahead of time you can do that by clicking on the Dictionary… Command in the Context Menu. Want to check the spelling of a word ahead of time or find that you are just curious about how it is spelled? Click on Open spelling window in the Context Menu to access a special spell check window. For our example we misspelled “spelling” on purpose…notice that the word has turned red. Clicking on the Check Mark Button will open a drop-down list with suggested spellings for the word that you are inquiring about. Click on the appropriate listing if you intend to copy and paste the word. Next we moved on to Notepad. As we were typing tinySpell alerted us when we typed the word “app”. You will hear a small default system sound and see a small popup as shown here if tinySpell thinks a word has been misspelled. The System Tray Icon will also change to a yellow color. You can access the list of suggested spellings by either left clicking on the small popup or the System Tray Icon. If the word is a properly spelled “abbreviation” (or special/custom) like our word here you can select Add to dictionary. Going further in our text document we once again purposely misspelled “spelling”… Left clicking on the popup gave us access to the drop-down list of suggested spellings… And clicking on the correct spelling automatically inserted it into our document in place of the misspelled word. As you can see here tinySpell was even monitoring file names when we went to save the document. Very thorough indeed. Conclusion If your favorite app does not have built-in spell checking, then tinySpell will definitely be a welcome (and very helpful) addition to your Windows system. They offer a portable version as well so you can take it with you to any PC. Links Download tinySpell *Note: The download link is located approximately half-way down the page. 

    Read the article

  • Reconciling the Boy Scout Rule and Opportunistic Refactoring with code reviews

    - by t0x1n
I am a great believer in the Boy Scout Rule: "Always check a module in cleaner than when you checked it out." No matter who the original author was, what if we always made some effort, no matter how small, to improve the module. What would be the result? I think if we all followed that simple rule, we'd see the end of the relentless deterioration of our software systems. Instead, our systems would gradually get better and better as they evolved. We'd also see teams caring for the system as a whole, rather than just individuals caring for their own small little part. I am also a great believer in the related idea of Opportunistic Refactoring: Although there are places for some scheduled refactoring efforts, I prefer to encourage refactoring as an opportunistic activity, done whenever and wherever code needs to be cleaned up - by whoever. What this means is that at any time someone sees some code that isn't as clear as it should be, they should take the opportunity to fix it right there and then - or at least within a few minutes. Particularly note the following excerpt from the refactoring article: I'm wary of any development practices that cause friction for opportunistic refactoring ... My sense is that most teams don't do enough refactoring, so it's important to pay attention to anything that is discouraging people from doing it. To help flush this out, be aware of any time you feel discouraged from doing a small refactoring, one that you're sure will only take a minute or two. Any such barrier is a smell that should prompt a conversation. So make a note of the discouragement and bring it up with the team. At the very least it should be discussed during your next retrospective. Where I work, there is one development practice that causes heavy friction - Code Review (CR). Whenever I change anything that's not in the scope of my "assignment", I'm rebuked by my reviewers for making the change harder to review. This is especially true when refactoring is involved, since it makes "line by line" diff comparison difficult. This approach is the standard here, which means opportunistic refactoring is seldom done, and only "planned" refactoring (which is usually too little, too late) takes place, if at all. I claim that the benefits are worth it, and that 3 reviewers will work a little harder (to actually understand the code before and after, rather than look at the narrow scope of which lines changed - the review itself would be better due to that alone) so that the next 100 developers reading and maintaining the code will benefit. When I present this argument to my reviewers, they say they have no problem with my refactoring, as long as it's not in the same CR. However, I claim this is a myth: (1) Most of the time you only realize what and how you want to refactor when you're in the midst of your assignment. As Martin Fowler puts it: As you add the functionality, you realize that some code you're adding contains some duplication with some existing code, so you need to refactor the existing code to clean things up... You may get something working, but realize that it would be better if the interaction with existing classes was changed. Take that opportunity to do that before you consider yourself done. (2) Nobody is going to look favorably on you releasing "refactoring" CRs you were not supposed to do. A CR has a certain overhead and your manager doesn't want you to "waste your time" on refactoring. When it's bundled with the change you're supposed to do, this issue is minimized.
The issue is exacerbated by ReSharper, as each new file I add to the change (and I can't know in advance exactly which files will end up changed) is usually littered with errors and suggestions - most of which are spot on and totally deserve fixing. The end result is that I see horrible code, and I just leave it there. Ironically, I feel that fixing such code not only will not improve my standing, but will actually lower it and paint me as the "unfocused" guy who wastes time fixing things nobody cares about instead of doing his job. I feel bad about it because I truly despise bad code and can't stand looking at it, let alone calling it from my methods! Any thoughts on how I can remedy this situation?

    Read the article

  • How John Got 15x Improvement Without Really Trying

    - by rchrd
    The following article was published on a Sun Microsystems website a number of years ago by John Feo. It is still useful and worth preserving. So I'm republishing it here.  How I Got 15x Improvement Without Really Trying John Feo, Sun Microsystems Taking ten "personal" program codes used in scientific and engineering research, the author was able to get from 2 to 15 times performance improvement easily by applying some simple general optimization techniques. Introduction Scientific research based on computer simulation depends on the simulation for advancement. The research can advance only as fast as the computational codes can execute. The codes' efficiency determines both the rate and quality of results. In the same amount of time, a faster program can generate more results and can carry out a more detailed simulation of physical phenomena than a slower program. Highly optimized programs help science advance quickly and insure that monies supporting scientific research are used as effectively as possible. Scientific computer codes divide into three broad categories: ISV, community, and personal. ISV codes are large, mature production codes developed and sold commercially. The codes improve slowly over time both in methods and capabilities, and they are well tuned for most vendor platforms. Since the codes are mature and complex, there are few opportunities to improve their performance solely through code optimization. Improvements of 10% to 15% are typical. Examples of ISV codes are DYNA3D, Gaussian, and Nastran. Community codes are non-commercial production codes used by a particular research field. Generally, they are developed and distributed by a single academic or research institution with assistance from the community. Most users just run the codes, but some develop new methods and extensions that feed back into the general release. The codes are available on most vendor platforms. Since these codes are younger than ISV codes, there are more opportunities to optimize the source code. Improvements of 50% are not unusual. Examples of community codes are AMBER, CHARM, BLAST, and FASTA. Personal codes are those written by single users or small research groups for their own use. These codes are not distributed, but may be passed from professor-to-student or student-to-student over several years. They form the primordial ocean of applications from which community and ISV codes emerge. Government research grants pay for the development of most personal codes. This paper reports on the nature and performance of this class of codes. Over the last year, I have looked at over two dozen personal codes from more than a dozen research institutions. The codes cover a variety of scientific fields, including astronomy, atmospheric sciences, bioinformatics, biology, chemistry, geology, and physics. The sources range from a few hundred lines to more than ten thousand lines, and are written in Fortran, Fortran 90, C, and C++. For the most part, the codes are modular, documented, and written in a clear, straightforward manner. They do not use complex language features, advanced data structures, programming tricks, or libraries. I had little trouble understanding what the codes did or how data structures were used. Most came with a makefile. Surprisingly, only one of the applications is parallel. All developers have access to parallel machines, so availability is not an issue. Several tried to parallelize their applications, but stopped after encountering difficulties. 
Lack of education and a perception that parallelism is difficult prevented most from trying. I parallelized several of the codes using OpenMP, and did not judge any of the codes as difficult to parallelize. Even more surprising than the lack of parallelism is the inefficiency of the codes. I was able to get large improvements in performance in a matter of a few days applying simple optimization techniques. Table 1 lists ten representative codes [names and affiliation are omitted to preserve anonymity]. Improvements on one processor range from 2x to 15.5x with a simple average of 4.75x. I did not use sophisticated performance tools or drill deep into the program's execution character as one would do when tuning ISV or community codes. Using only a profiler and source line timers, I identified inefficient sections of code and improved their performance by inspection. The changes were at a high level. I am sure there is another factor of 2 or 3 in each code, and more if the codes are parallelized. The study’s results show that personal scientific codes are running many times slower than they should and that the problem is pervasive. Computational scientists are not sloppy programmers; however, few are trained in the art of computer programming or code optimization. I found that most have a working knowledge of some programming language and standard software engineering practices; but they do not know, or think about, how to make their programs run faster. They simply do not know the standard techniques used to make codes run faster. In fact, they do not even perceive that such techniques exist. The case studies described in this paper show that applying simple, well known techniques can significantly increase the performance of personal codes. It is important that the scientific community and the Government agencies that support scientific research find ways to better educate academic scientific programmers. The inefficiency of their codes is so bad that it is retarding both the quality and progress of scientific research. # cacheperformance redundantoperations loopstructures performanceimprovement 1 x x 15.5 2 x 2.8 3 x x 2.5 4 x 2.1 5 x x 2.0 6 x 5.0 7 x 5.8 8 x 6.3 9 2.2 10 x x 3.3 Table 1 — Area of improvement and performance gains of 10 codes The remainder of the paper is organized as follows: sections 2, 3, and 4 discuss the three most common sources of inefficiencies in the codes studied. These are cache performance, redundant operations, and loop structures. Each section includes several examples. The last section summaries the work and suggests a possible solution to the issues raised. Optimizing cache performance Commodity microprocessor systems use caches to increase memory bandwidth and reduce memory latencies. Typical latencies from processor to L1, L2, local, and remote memory are 3, 10, 50, and 200 cycles, respectively. Moreover, bandwidth falls off dramatically as memory distances increase. Programs that do not use cache effectively run many times slower than programs that do. When optimizing for cache, the biggest performance gains are achieved by accessing data in cache order and reusing data to amortize the overhead of cache misses. Secondary considerations are prefetching, associativity, and replacement; however, the understanding and analysis required to optimize for the latter are probably beyond the capabilities of the non-expert. Much can be gained simply by accessing data in the correct order and maximizing data reuse. 
Six of the ten codes studied here benefited from such high-level optimizations. Array Accesses The most important cache optimization is the most basic: accessing Fortran array elements in column order and C array elements in row order. Four of the ten codes—1, 2, 4, and 10—got it wrong. Compilers will restructure nested loops to optimize cache performance, but may not do so if the loop structure is too complex, or the loop body includes conditionals, complex addressing, or function calls. In code 1, the compiler failed to invert a key loop because of complex addressing do I = 0, 1010, delta_x IM = I - delta_x IP = I + delta_x do J = 5, 995, delta_x JM = J - delta_x JP = J + delta_x T1 = CA1(IP, J) + CA1(I, JP) T2 = CA1(IM, J) + CA1(I, JM) S1 = T1 + T2 - 4 * CA1(I, J) CA(I, J) = CA1(I, J) + D * S1 end do end do In code 2, the culprit is conditionals do I = 1, N do J = 1, N If (IFLAG(I,J) .EQ. 0) then T1 = Value(I, J-1) T2 = Value(I-1, J) T3 = Value(I, J) T4 = Value(I+1, J) T5 = Value(I, J+1) Value(I,J) = 0.25 * (T1 + T2 + T5 + T4) Delta = ABS(T3 - Value(I,J)) If (Delta .GT. MaxDelta) MaxDelta = Delta endif enddo enddo I fixed both programs by inverting the loops by hand. Code 10 has three-dimensional arrays and triply nested loops. The structure of the most computationally intensive loops is too complex to invert automatically or by hand. The only practical solution is to transpose the arrays so that the dimension accessed by the innermost loop is in cache order. The arrays can be transposed at construction or prior to entering a computationally intensive section of code. The former requires all array references to be modified, while the latter is cost-effective only if the cost of the transpose is amortized over many accesses. I used the second approach to optimize code 10. Code 5 has four-dimensional arrays and its loops are nested four deep. For all of the reasons cited above the compiler is not able to restructure three key loops. Assume C arrays and let the four dimensions of the arrays be i, j, k, and l. In the original code, the index structure of the three loops (outermost to innermost) is L1: for i, for l, for k, for j; L2: for i, for l, for j, for k; L3: for i, for j, for k, for l. So only L3 accesses array elements in cache order. L1 is a very complex loop—much too complex to invert. I brought the loop into cache alignment by transposing the second and fourth dimensions of the arrays. Since the code uses a macro to compute all array indexes, I effected the transpose at construction and changed the macro appropriately. The dimensions of the new arrays are now: i, l, k, and j. L3 is a simple loop and easily inverted. L2 has a loop-carried scalar dependence in k. By promoting the scalar name that carries the dependence to an array, I was able to invert the third and fourth subloops, aligning the loop with cache. Code 5 is by far the most difficult of the four codes to optimize for array accesses; but the knowledge required to fix the problems is no more than that required for the other codes. I would judge this code at the limits of, but not beyond, the capabilities of appropriately trained computational scientists. Array Strides When a cache miss occurs, a line (64 bytes) rather than just one word is loaded into the cache. If data is accessed stride 1, then the cost of the miss is amortized over 8 words. Any stride other than one reduces the cost savings. Two of the ten codes studied suffered from non-unit strides. The codes represent two important classes of "strided" codes.
Code 1 employs a multi-grid algorithm to reduce time to convergence. The grids are every tenth, fifth, second, and unit element. Since time to convergence is inversely proportional to the distance between elements, coarse grids converge quickly, providing good starting values for finer grids. The better starting values further reduce the time to convergence. The downside is that grids of every nth element, n > 1, introduce non-unit strides into the computation. In the original code, much of the savings of the multi-grid algorithm were lost due to this problem. I eliminated the problem by compressing (copying) coarse grids into continuous memory, and rewriting the computation as a function of the compressed grid. On convergence, I copied the final values of the compressed grid back to the original grid. The savings gained from unit stride access of the compressed grid more than paid for the cost of copying. Using compressed grids, the loop from code 1 included in the previous section becomes do j = 1, GZ do i = 1, GZ T1 = CA1(i+0, j-1) + CA1(i-1, j+0) T4 = CA1(i+1, j+0) + CA1(i+0, j+1) S1 = T1 + T4 - 4 * CA1(i+0, j+0) CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1 enddo enddo where CA and CA1 are compressed arrays of size GZ. Code 7 traverses a list of objects selecting objects for later processing. The labels of the selected objects are stored in an array. The selection step has unit stride, but the processing steps have irregular stride. A fix is to save the parameters of the selected objects in temporary arrays as they are selected, and pass the temporary arrays to the processing functions. The fix is practical if the same parameters are used in selection as in processing, or if processing comprises a series of distinct steps which use overlapping subsets of the parameters. Both conditions are true for code 7, so I achieved significant improvement by copying parameters to temporary arrays during selection. Data reuse In the previous sections, we optimized for spatial locality. It is also important to optimize for temporal locality. Once read, a datum should be used as much as possible before it is forced from cache. Loop fusion and loop unrolling are two techniques that increase temporal locality. Unfortunately, both techniques increase register pressure—as loop bodies become larger, the number of registers required to hold temporary values grows. Once register spilling occurs, any gains evaporate quickly. For multiprocessors with small register sets or small caches, the sweet spot can be very small. In the ten codes presented here, I found no opportunities for loop fusion and only two opportunities for loop unrolling (codes 1 and 3). In code 1, unrolling the outer and inner loop one iteration increases the number of result values computed by the loop body from 1 to 4, do J = 1, GZ-2, 2 do I = 1, GZ-2, 2 T1 = CA1(i+0, j-1) + CA1(i-1, j+0) T2 = CA1(i+1, j-1) + CA1(i+0, j+0) T3 = CA1(i+0, j+0) + CA1(i-1, j+1) T4 = CA1(i+1, j+0) + CA1(i+0, j+1) T5 = CA1(i+2, j+0) + CA1(i+1, j+1) T6 = CA1(i+1, j+1) + CA1(i+0, j+2) T7 = CA1(i+2, j+1) + CA1(i+1, j+2) S1 = T1 + T4 - 4 * CA1(i+0, j+0) S2 = T2 + T5 - 4 * CA1(i+1, j+0) S3 = T3 + T6 - 4 * CA1(i+0, j+1) S4 = T4 + T7 - 4 * CA1(i+1, j+1) CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1 CA(i+1, j+0) = CA1(i+1, j+0) + DD * S2 CA(i+0, j+1) = CA1(i+0, j+1) + DD * S3 CA(i+1, j+1) = CA1(i+1, j+1) + DD * S4 enddo enddo The loop body executes 12 reads, whereas the rolled loop shown in the previous section executes 20 reads to compute the same four values.
In code 3, two loops are unrolled 8 times and one loop is unrolled 4 times. Here is the before for (k = 0; k < NK[u]; k++) { sum = 0.0; for (y = 0; y < NY; y++) { sum += W[y][u][k] * delta[y]; } backprop[i++]=sum; } and after code for (k = 0; k < KK - 8; k+=8) { sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0; sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0; for (y = 0; y < NY; y++) { sum0 += W[y][0][k+0] * delta[y]; sum1 += W[y][0][k+1] * delta[y]; sum2 += W[y][0][k+2] * delta[y]; sum3 += W[y][0][k+3] * delta[y]; sum4 += W[y][0][k+4] * delta[y]; sum5 += W[y][0][k+5] * delta[y]; sum6 += W[y][0][k+6] * delta[y]; sum7 += W[y][0][k+7] * delta[y]; } backprop[k+0] = sum0; backprop[k+1] = sum1; backprop[k+2] = sum2; backprop[k+3] = sum3; backprop[k+4] = sum4; backprop[k+5] = sum5; backprop[k+6] = sum6; backprop[k+7] = sum7; } for one of the loops unrolled 8 times. Optimizing for temporal locality is the most difficult optimization considered in this paper. The concepts are not difficult, but the sweet spot is small. Identifying where the program can benefit from loop unrolling or loop fusion is not trivial. Moreover, it takes some effort to get it right. Still, educating scientific programmers about temporal locality and teaching them how to optimize for it will pay dividends. Reducing instruction count Execution time is a function of instruction count. Reduce the count and you usually reduce the time. The best solution is to use a more efficient algorithm; that is, an algorithm whose order of complexity is smaller, that converges quicker, or is more accurate. Optimizing source code without changing the algorithm yields smaller, but still significant, gains. This paper considers only the latter because the intent is to study how much better codes can run if written by programmers schooled in basic code optimization techniques. The ten codes studied benefited from three types of "instruction reducing" optimizations. The two most prevalent were hoisting invariant memory and data operations out of inner loops. The third was eliminating unnecessary data copying. The nature of these inefficiencies is language dependent. Memory operations The semantics of C make it difficult for the compiler to determine all the invariant memory operations in a loop. The problem is particularly acute for loops in functions since the compiler may not know the values of the function's parameters at every call site when compiling the function. Most compilers support pragmas to help resolve ambiguities; however, these pragmas are not comprehensive and there is no standard syntax. To guarantee that invariant memory operations are not executed repetitively, the user has little choice but to hoist the operations by hand. The problem is not as severe in Fortran programs because in the absence of equivalence statements, it is a violation of the language's semantics for two names to share memory. Codes 3 and 5 are C programs. In both cases, the compiler did not hoist all invariant memory operations from inner loops. Consider the following loop from code 3 for (y = 0; y < NY; y++) { i = 0; for (u = 0; u < NU; u++) { for (k = 0; k < NK[u]; k++) { dW[y][u][k] += delta[y] * I1[i++]; } } } Since dW[y][u] can point to the same memory space as delta for one or more values of y and u, assignment to dW[y][u][k] may change the value of delta[y]. 
In reality, dW and delta do not overlap in memory, so I rewrote the loop as for (y = 0; y < NY; y++) { i = 0; Dy = delta[y]; for (u = 0; u < NU; u++) { for (k = 0; k < NK[u]; k++) { dW[y][u][k] += Dy * I1[i++]; } } } Failure to hoist invariant memory operations may be due to complex address calculations. If the compiler cannot determine that the address calculation is invariant, then it can hoist neither the calculation nor the associated memory operations. As noted above, code 5 uses a macro to address four-dimensional arrays #define MAT4D(a,q,i,j,k) (double *)((a)->data + (q)*(a)->strides[0] + (i)*(a)->strides[3] + (j)*(a)->strides[2] + (k)*(a)->strides[1]) The macro is too complex for the compiler to understand and so it does not identify any subexpressions as loop invariant. The simplest way to eliminate the address calculation from the innermost loop (over i) is to define a0 = MAT4D(a,q,0,j,k) before the loop and then replace all instances of *MAT4D(a,q,i,j,k) in the loop with a0[i] A similar problem appears in code 6, a Fortran program. The key loop in this program is do n1 = 1, nh nx1 = (n1 - 1) / nz + 1 nz1 = n1 - nz * (nx1 - 1) do n2 = 1, nh nx2 = (n2 - 1) / nz + 1 nz2 = n2 - nz * (nx2 - 1) ndx = nx2 - nx1 ndy = nz2 - nz1 gxx = grn(1,ndx,ndy) gyy = grn(2,ndx,ndy) gxy = grn(3,ndx,ndy) balance(n1,1) = balance(n1,1) + (force(n2,1) * gxx + force(n2,2) * gxy) * h1 balance(n1,2) = balance(n1,2) + (force(n2,1) * gxy + force(n2,2) * gyy)*h1 end do end do The programmer has written this loop well—there are no loop invariant operations with respect to n1 and n2. However, the loop resides within an iterative loop over time and the index calculations are independent with respect to time. Trading space for time, I precomputed the index values prior to entering the time loop and stored the values in two arrays. I then replaced the index calculations with reads of the arrays. Data operations Ways to reduce data operations can appear in many forms. Implementing a more efficient algorithm produces the biggest gains. The closest I came to an algorithm change was in code 4. This code computes the inner product of K-vectors A(i) and B(j), 0 ≤ i < N, 0 ≤ j < M, for most values of i and j. Since the program computes most of the NM possible inner products, it is more efficient to compute all the inner products in one triply-nested loop rather than one at a time when needed. The savings accrue from reading A(i) once for all B(j) vectors and from loop unrolling. for (i = 0; i < N; i+=8) { for (j = 0; j < M; j++) { sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0; sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0; for (k = 0; k < K; k++) { sum0 += A[i+0][k] * B[j][k]; sum1 += A[i+1][k] * B[j][k]; sum2 += A[i+2][k] * B[j][k]; sum3 += A[i+3][k] * B[j][k]; sum4 += A[i+4][k] * B[j][k]; sum5 += A[i+5][k] * B[j][k]; sum6 += A[i+6][k] * B[j][k]; sum7 += A[i+7][k] * B[j][k]; } C[i+0][j] = sum0; C[i+1][j] = sum1; C[i+2][j] = sum2; C[i+3][j] = sum3; C[i+4][j] = sum4; C[i+5][j] = sum5; C[i+6][j] = sum6; C[i+7][j] = sum7; }} This change requires knowledge of a typical run; i.e., that most inner products are computed. The reasons for the change, however, derive from basic optimization concepts. It is the type of change easily made at development time by a knowledgeable programmer. In code 5, we have the data version of the index optimization in code 6.
Here a very expensive computation is a function of the loop indices and so cannot be hoisted out of the loop; however, the computation is invariant with respect to an outer iterative loop over time. We can compute its value for each iteration of the computation loop prior to entering the time loop and save the values in an array. The increase in memory required to store the values is small in comparison to the large savings in time. The main loop in Code 8 is doubly nested. The inner loop includes a series of guarded computations; some are a function of the inner loop index but not the outer loop index, while others are a function of the outer loop index but not the inner loop index: for (j = 0; j < N; j++) { for (i = 0; i < M; i++) { r = i * hrmax; R = A[j]; temp = (PRM[3] == 0.0) ? 1.0 : pow(r, PRM[3]); high = temp * kcoeff * B[j] * PRM[2] * PRM[4]; low = high * PRM[6] * PRM[6] / (1.0 + pow(PRM[4] * PRM[6], 2.0)); kap = (R > PRM[6]) ? high * R * R / (1.0 + pow(PRM[4]*r, 2.0)) : low * pow(R/PRM[6], PRM[5]); < rest of loop omitted > }} Note that the value of temp is invariant to j. Thus, we can hoist the computation for temp out of the loop and save its values in an array. for (i = 0; i < M; i++) { r = i * hrmax; TEMP[i] = pow(r, PRM[3]); } [N.B. – the case for PRM[3] = 0 is omitted and will be reintroduced later.] We now hoist out of the inner loop the computations invariant to i. Since the conditional guarding the value of kap is invariant to i, it behooves us to hoist the computation out of the inner loop, thereby executing the guard once rather than M times. The final version of the code is for (j = 0; j < N; j++) { R = rig[j] / 1000.; tmp1 = kcoeff * par[2] * beta[j] * par[4]; tmp2 = 1.0 + (par[4] * par[4] * par[6] * par[6]); tmp3 = 1.0 + (par[4] * par[4] * R * R); tmp4 = par[6] * par[6] / tmp2; tmp5 = R * R / tmp3; tmp6 = pow(R / par[6], par[5]); if ((par[3] == 0.0) && (R > par[6])) { for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * tmp5; } else if ((par[3] == 0.0) && (R <= par[6])) { for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * tmp4 * tmp6; } else if ((par[3] != 0.0) && (R > par[6])) { for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * TEMP[i] * tmp5; } else if ((par[3] != 0.0) && (R <= par[6])) { for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * TEMP[i] * tmp4 * tmp6; } for (i = 0; i < M; i++) { kap = KAP[i]; r = i * hrmax; < rest of loop omitted > } } Maybe not the prettiest piece of code, but certainly much more efficient than the original loop. Copy operations Several programs unnecessarily copy data from one data structure to another. This problem occurs in both Fortran and C programs, although it manifests itself differently in the two languages. Code 1 declares two arrays—one for old values and one for new values. At the end of each iteration, the array of new values is copied to the array of old values to reset the data structures for the next iteration. This problem occurs in Fortran programs not included in this study and in both Fortran 77 and Fortran 90 code. Introducing pointers to the arrays and swapping pointer values is an obvious way to eliminate the copying; but pointers are not a feature that many Fortran programmers know well or are comfortable using. An easy solution not involving pointers is to extend the dimension of the value array by 1 and use the last dimension to differentiate between arrays at different times. For example, if the data space is N x N, declare the array (N, N, 2).
The old–new copy problem did not appear in any C program. In programs that had new and old values, the code swapped pointers to reset the data structures. Where unnecessary copying did occur was in structure assignment and parameter passing. Structures in C are handled much like scalars. Assignment causes the data space of the right-hand name to be copied to the data space of the left-hand name. Similarly, when a structure is passed to a function, the data space of the actual parameter is copied to the data space of the formal parameter. If the structure is large and the assignment or function call is in an inner loop, then copying costs can grow quite large. While none of the ten programs considered here manifested this problem, it did occur in programs not included in the study. A simple fix is always to refer to structures via pointers.
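To make that fix concrete, here is a small hedged C sketch; the BigTable structure and both functions are invented for illustration and are not from the codes studied. Passing a pointer transfers a single address instead of copying the whole structure.

#include <stdio.h>

/* hypothetical large structure */
typedef struct {
    double values[4096];
    int    count;
} BigTable;

/* the entire structure is copied onto the stack at every call */
static double sum_by_value(BigTable t)
{
    double s = 0.0;
    int i;
    for (i = 0; i < t.count; i++)
        s += t.values[i];
    return s;
}

/* only a pointer is passed; no copy of the structure is made */
static double sum_by_pointer(const BigTable *t)
{
    double s = 0.0;
    int i;
    for (i = 0; i < t->count; i++)
        s += t->values[i];
    return s;
}

int main(void)
{
    static BigTable table = { {0.0}, 4096 };
    printf("%f %f\n", sum_by_value(table), sum_by_pointer(&table));
    return 0;
}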
Optimizing loop structures Since scientific programs spend almost all their time in loops, efficient loops are the key to good performance. Conditionals, function calls, little instruction-level parallelism, and large numbers of temporary values make it difficult for the compiler to generate tightly packed, highly efficient code. Conditionals and function calls introduce jumps that disrupt code flow. Users should eliminate conditionals or isolate them in their own loops as much as possible. Often logical expressions can be substituted for if-then-else statements. For example, code 2 includes the following snippet MaxDelta = 0.0 do J = 1, N do I = 1, M < code omitted > Delta = abs(OldValue - NewValue) if (Delta > MaxDelta) MaxDelta = Delta enddo enddo if (MaxDelta .gt. 0.001) goto 200 Since the only use of MaxDelta is to control the jump to 200, and all that matters is whether or not it is greater than 0.001, I made MaxDelta a boolean and rewrote the snippet as MaxDelta = .false. do J = 1, N do I = 1, M < code omitted > Delta = abs(OldValue - NewValue) MaxDelta = MaxDelta .or. (Delta .gt. 0.001) enddo enddo if (MaxDelta) goto 200 thereby eliminating the conditional expression from the inner loop. A microprocessor can execute many instructions per instruction cycle. Typically, it can execute one or more memory, floating point, integer, and jump operations. To be executed simultaneously, the operations must be independent. Thick loops tend to have more instruction-level parallelism than thin loops. Moreover, they reduce memory traffic by maximizing data reuse. Loop unrolling and loop fusion are two techniques to increase the size of loop bodies. Several of the codes studied benefitted from loop unrolling, but none benefitted from loop fusion. This observation is not too surprising, since the general tendency of programmers is to write thick loops. As loops become thicker, the number of temporary values grows, increasing register pressure. If registers spill, then memory traffic increases and code flow is disrupted. A thick loop with many temporary values may execute slower than an equivalent series of thin loops. The biggest gain is achieved when a thick loop can be split into a series of independent loops, eliminating the need to write and read temporary arrays. I found such an occasion in code 10, where I split the loop do i = 1, n do j = 1, m A24(j,i)= S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i) B24(j,i)= S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i) A25(j,i)= S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i) B25(j,i)= S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i) C24(j,i)= S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i) D24(j,i)= S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i) C25(j,i)= S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i) D25(j,i)= S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i) end do end do into two disjoint loops do i = 1, n do j = 1, m A24(j,i)= S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i) B24(j,i)= S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i) A25(j,i)= S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i) B25(j,i)= S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i) end do end do do i = 1, n do j = 1, m C24(j,i)= S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i) D24(j,i)= S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i) C25(j,i)= S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i) D25(j,i)= S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i) end do end do Conclusions Over the course of the last year, I have had the opportunity to work with over two dozen academic scientific programmers at leading research universities. Their research interests span a broad range of scientific fields. Except for two programs that relied almost exclusively on library routines (matrix multiply and fast Fourier transform), I was able to significantly improve the single-processor performance of all codes. Improvements range from 2x to 15.5x, with a simple average of 4.75x. Changes to the source code were at a very high level. I did not use sophisticated techniques or programming tools to discover inefficiencies or effect the changes. Only one code was parallel, despite the availability of parallel systems to all developers. Clearly, we have a problem—personal scientific research codes are highly inefficient and not running in parallel. The developers are unaware of simple optimization techniques that would make their programs run faster. They lack education in the art of code optimization and parallel programming. I do not believe we can fix the problem by publishing additional books or training manuals. To date, the developers in question have not studied the books or manuals available, and they are unlikely to do so in the future. Short courses are a possible solution, but I believe they are too concentrated to be of much use. The general concepts can be taught in a three- or four-day course, but that is not enough time for students to practice what they learn and acquire the experience to apply and extend the concepts to their own codes. Practice is the key to becoming proficient at optimization. I recommend that graduate students be required to take a semester-length course in optimization and parallel programming. We would never give someone access to state-of-the-art scientific equipment costing hundreds of thousands of dollars without first requiring them to demonstrate that they know how to use the equipment. Yet the criterion for time on state-of-the-art supercomputers is, at most, an interesting project. Requestors are never asked to demonstrate that they know how to use the system, or that they can use it effectively. A semester course would teach them the required skills. Government agencies that fund academic scientific research pay for most of the computer systems supporting scientific research, as well as for the development of most personal scientific codes.
These agencies should require graduate schools to offer a course in optimization and parallel programming as a condition of funding. About the Author John Feo received his Ph.D. in Computer Science from The University of Texas at Austin in 1986. After graduate school, Dr. Feo worked at Lawrence Livermore National Laboratory, where he was the Group Leader of the Computer Research Group and principal investigator of the Sisal Language Project. In 1997, Dr. Feo joined Tera Computer Company, where he was project manager for the MTA and oversaw the programming and evaluation of the MTA at the San Diego Supercomputer Center. In 2000, Dr. Feo joined Sun Microsystems as an HPC application specialist. He works with university research groups to optimize and parallelize scientific codes. Dr. Feo has published over two dozen research articles in the areas of parallel programming, parallel programming languages, and application performance.

    Read the article

  • Increasing touch surface (#wp7dev)

    - by Laurent Bugnion
When you design for Windows Phone 7 (or for any touch device, for that matter, and most especially small screens), you need to be very careful to give enough surface to your users' fingers. It is easy to miss a touch on such small screens, and that can be horrifyingly frustrating. This is especially true when people are on the move, trying to hit the control while walking and holding their device in one hand, or when the device is mounted in a car and vibrating with the engine. In my experience, a touch surface should ideally be at least 60x60 pixels to be easy to activate on the Windows Phone 7 screen (which is, as we know, 800 pixels x 480 pixels). Whenever possible, I try to make my touch surfaces 80x80 pixels minimum. This causes a few design challenges, of course. Using transparent backgrounds However, one thing is helping us tremendously: some surfaces can be made transparent and yet react to touch. The secret is the following: If you have a panel that has a Null background (i.e. the Background is not set at all), then the empty surface does not react to touch. If however the Background is set to the Transparent color (or any color where the Alpha channel is set to 0), then it will react to touch. Setting a transparent background is easy. For example: <Grid Background="#00000000"> </Grid> or <Grid Background="Transparent"> </Grid> In C#: var grid = new Grid { Background = new SolidColorBrush( Colors.Transparent) }; Using negative margins Having a transparent background reactive to touch is a good start, but in addition, you must make sure that the surface is big enough for clumsy fingers. One way to achieve that is to increase the transparent, touch-reactive surface and reposition the element using negative margins. For example, consider a page where the transparent background of a Settings HyperlinkButton is temporarily set to Red in order to visualize the touch surface. In that layout, the Settings HyperlinkButton is 105 pixels x 31 pixels. This is wide enough, but really small in height and easy to miss. To improve this, we can use negative margins, for instance: <HyperlinkButton Content="Settings" HorizontalAlignment="Right" VerticalAlignment="Bottom" Height="60" Margin="0,0,0,-15" /> Notice the use of a negative bottom margin to bring the HyperlinkButton back to the bottom of the main Grid's first row, where it belongs. The result is a touch surface that is much bigger than before, which makes the HyperlinkButton easier to reach and improves the user experience. With the background set back to normal, the UI looks exactly the same as before, as it should. In summary: Remember to maximize the touch surface for your controls. Plan your design accordingly, reserving enough room around each control to allow its hit surface to be expanded as shown in this article. Do not cram too many controls onto one page. If REALLY needed, use an additional page (or even better: a Pivot control with multiple pivot items) for the controls that don't fit on the first one. This should ensure a smoother user experience and improved touch behavior. Happy coding! Laurent Bugnion (GalaSoft)

    Read the article

  • DataBinder Eval and Indexed properties

    - by erwin21
As you probably know, you can “Eval” an array property like below: <%# Eval("MyArray[0].Title") %> But what if your data object has an indexed property? How do you “Eval” that? Well, it's easier than you think: <%# Eval("[0]") %> And if your indexed property is based on, for example, a NameValueCollection, you can “Eval” it like this: <%# Eval("[key]") %> As you see, it's very easy to “Eval” these kinds of properties in your web application.

    Read the article

  • A tiny Utility to recycle an IIS Application Pool

    - by Rick Strahl
In the last few weeks I've annoyingly been having problems with an area on my Web site. It's basically ancient articles that are using ASP classic pages, and for reasons unknown ASP classic locks up on these pages frequently. It's not an individual page, but ALL ASP classic pages lock up. Ah yes, gotta love old tech gone bad. It's not super critical since the content is really old, but still a hassle since it's linked content that still gets quite a bit of traffic. When it happens, all ASP classic in that AppPool dies. I've been having a hard time tracking this one down - I suspect an errant COM object. I have a Web Monitor running on the server that's checking for failures, and while the monitor can detect the failures when the timeouts occur, I didn't have a good way to just restart that particular application pool. I started putzing around with PowerShell, but - as so often seems the case - I can never get the PowerShell syntax right - I just don't use it enough and have to dig out cheat sheets etc. In any case, after about 20 minutes of that I decided to just create a small .NET Console Application that does the trick instead, and in a few minutes I had this: using System; using System.Collections.Generic; using System.Text; using System.DirectoryServices; namespace RecycleApplicationPool { class Program { static void Main(string[] args) { string appPoolName = "DefaultAppPool"; string machineName = "LOCALHOST"; if (args.Length > 0) appPoolName = args[0]; if (args.Length > 1) machineName = args[1]; string error = null; DirectoryEntry root = null; try { Console.WriteLine("Restarting Application Pool " + appPoolName + " on " + machineName + "..."); root = new DirectoryEntry("IIS://" + machineName + "/W3SVC/AppPools/" + appPoolName); Console.WriteLine(root.InvokeGet("Name")); root.Invoke("Recycle"); Console.WriteLine("Application Pool recycling complete..."); } catch(Exception ex) { error = "Error: Unable to access AppPool: " + ex.Message; } if ( !string.IsNullOrEmpty(error) ) { Console.WriteLine(error); return; } } } } To run it, you basically provide the name of the ApplicationPool and optionally a machine name if it's not on the local box. RecycleApplicationPool.exe "WestWindArticles" And off it goes. What's nice about AppPool recycling versus doing a full IISRESET is that it only affects the AppPool, and more importantly AppPool recycles happen in a staggered fashion - the existing instance isn't shut down until pending requests finish, while a new instance is fired up to handle new requests. So now I can easily plug this executable into my West Wind Web Monitor as an action to take when the site is not responding or timing out, which is a big improvement over hanging for an unspecified amount of time. I'm posting this fairly trivial bit of code just in case somebody (maybe myself a few months down the road) is searching for ApplicationPool recycling code. It's clearly trivial, but I've written batch files for this a bunch of times before, and having a small utility around without having to worry whether PowerShell is installed and configured right is actually an improvement. Next time I think about using PowerShell, remind me that it's just easier to build a small .NET Console app, 'k?
:-) Resources: Download Executable and VS Project. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in IIS7  .NET  Windows

    Read the article

< Previous Page | 66 67 68 69 70 71 72 73 74 75 76 77  | Next Page >