Search Results

Search found 47 results on 2 pages for 'indefinite'.


  • when is a push notification old?

    - by hookjd
    I have noticed that when the iPhone OS receives a push notification, it treats the user tapping the action button as a "response" to the push notification only for some indefinite period of time. If the user lets the push notification sit on screen for a number of seconds, or lets the phone go to sleep, the OS no longer considers the user's action a response to the push notification itself, and therefore does not launch the corresponding app. So my question is: does anyone know the precise definition in the iPhone OS of how long a user action is considered a response to the push? Sorry, I can't find a great way to phrase this question, but I hope it makes sense. From my testing I'm guessing it's something like 20 seconds, but I don't see this specifically documented anywhere.

    Read the article

  • Android - When to use Service

    - by Chris
    This question is related to my previous question: http://stackoverflow.com/questions/2786720/android-service-ping-url So I have an Android app that, on the click of a button, opens up a web page. Now, in the background I want to call another HTTP URL for gathering stats. My question is: does this have to be a Service? I know a Service is for background tasks that run for an indefinite period of time, while the user is busy doing something else. In my case, all I really need is to fetch the URL in the background, not show it to the user, and instead show the web page to the user. Can I not just write code to get the contents of the HTTP URL and fire up the activity that displays the web page? Because all I want is to get the URL in the background and be done with it. Or does this have to be done using the Service class? I am confused. Thanks, Chris
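    For a one-shot request like this, a short-lived background thread is arguably enough; a Service is meant for longer-running work. A minimal sketch of that approach, assuming the code runs inside the Activity's button click handler (the URLs below are hypothetical placeholders, and the android classes are written fully qualified for clarity):

        // Fire the stats request on a throwaway background thread; the
        // thread ends as soon as the request completes. No Service needed.
        new Thread(new Runnable() {
            @Override
            public void run() {
                java.net.HttpURLConnection conn = null;
                try {
                    java.net.URL statsUrl =
                        new java.net.URL("http://example.com/stats?event=click"); // hypothetical stats URL
                    conn = (java.net.HttpURLConnection) statsUrl.openConnection();
                    conn.getResponseCode(); // fire the request; the response body is irrelevant
                } catch (java.io.IOException e) {
                    // Stats are best-effort; ignore failures.
                } finally {
                    if (conn != null) conn.disconnect();
                }
            }
        }).start();

        // Meanwhile, show the web page to the user right away.
        startActivity(new android.content.Intent(android.content.Intent.ACTION_VIEW,
                android.net.Uri.parse("http://example.com/page"))); // hypothetical page URL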

    Read the article

  • iui transition moving in wrong direction

    - by Toddeman
    I am using iUI to build a native-looking web app for iDevices. Whenever I click a link with an href of #something that leads to another div on my page, the transition effect moves (correctly) as if the page were sliding in from the right, like it does in any other iDevice app. A portion of my app requires an indefinite number of sub-pages though, so I generate them on the fly, assign them an id, and set the window location to something like myip/mobile/#_newdiv. This causes the transition effect to move in the wrong direction though (as if the page were sliding in from the LEFT, opposite to native iDevice apps). Is there any way to fix this?

    Read the article

  • Fade between looped background images using jQuery

    - by da5id
    I'm trying to get the background image of a legacy div (by which I mean it already has a background image, which I cannot control & thus have to initially over-write) to smoothly fade between new images indefinitely. Here's what I have so far:

        var images = [
            "/images/home/19041085158.jpg",
            "/images/home/19041085513.jpg",
            "/images/home/19041085612.jpg"
        ];
        var counter = 0;

        setInterval(function() {
            $(".home_banner").css('backgroundImage', 'url("' + images[counter] + '")');
            counter++;
            if (counter == images.length) {
                counter = 0;
            }
        }, 2000);

    Trouble is, it's not smooth (I'm aiming for something like the innerfade plugin).

    EDIT: the question originally said "and it's not indefinite (it only runs once through the array).", but Mario corrected my stupid naming error.

    EDIT2: I'm now using Reigel's answer (see below), which works perfectly, but I still can't find any way to fade between the images smoothly. All help gratefully appreciated :)

    Read the article

  • Loop background images using jQuery

    - by da5id
    I'm trying to get the background image of a legacy div (by which I mean it already has a background image, which I cannot control & thus have to initially over-write) to smoothly rotate indefinitely. Here's what I have so far:

        var images = [
            "/images/home/19041085158.jpg",
            "/images/home/19041085513.jpg",
            "/images/home/19041085612.jpg"
        ];
        var counter = 0;

        setInterval(function() {
            $(".home_banner").css('backgroundImage', 'url("' + images[counter] + '")');
            counter++;
            if (counter == colours.length) { // note: 'colours' is undefined here; it should be images.length
                counter = 0;
            }
        }, 2000);

    Trouble is, it's not smooth (I'm aiming for something like the innerfade plugin), and it's not indefinite (it only runs once through the array). All help gratefully appreciated :)

    Read the article

  • Chain dynamically created dropdowns with JQuery

    - by ricardocasares
    I'm building some kind of indefinite filters for an app, and I'm having this problem when I clone some selects. The thing is, these selects are chained to each other through the Chained Selects jQuery plugin. The problem is that every time I clone the selects, the chaining stops working, and I've tried everything, such as .live(), to make it work, but it seems I'm out of luck :D Here you have a sample of what I'm talking about: http://jsfiddle.net/7K2Eu/63/ At first the selects chain normally, but when I clone the form, they stop working, except for the first row of selects. Thank you!!

    Read the article

  • How can I convert a decimal to a fraction?

    - by CornyD
    How do I convert an indefinite decimal (i.e. .333333333...) to a string fraction representation (i.e. "1/3")? I am using VBA, and the following is the code I used (I get an overflow error at the line "b = a Mod b"):

        Function GetFraction(ByVal Num As Double) As String
            If Num = 0# Then
                GetFraction = "None"
            Else
                Dim WholeNumber As Integer
                Dim DecimalNumber As Double
                Dim Numerator As Double
                Dim Denomenator As Double
                Dim a, b, t As Double

                WholeNumber = Fix(Num)
                DecimalNumber = Num - Fix(Num)
                Numerator = DecimalNumber * 10 ^ (Len(CStr(DecimalNumber)) - 2)
                Denomenator = 10 ^ (Len(CStr(DecimalNumber)) - 2)

                If Numerator = 0 Then
                    GetFraction = WholeNumber
                Else
                    a = Numerator
                    b = Denomenator
                    t = 0
                    While b <> 0
                        t = b
                        b = a Mod b    ' overflow error occurs here
                        a = t
                    Wend
                    If WholeNumber = 0 Then
                        GetFraction = CStr(Numerator / a) & "/" & CStr(Denomenator / a)
                    Else
                        GetFraction = CStr(WholeNumber) & " " & CStr(Numerator / a) & "/" & CStr(Denomenator / a)
                    End If
                End If
            End If
        End Function
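    For the algorithm itself, a continued-fraction expansion is the usual way to recover "1/3" from a truncated repeating decimal like .333333333; plain GCD on the scaled digits cannot (gcd(333333333, 1000000000) is 1). The overflow most likely comes from VBA's Mod operator, which works on 32-bit Long integers, while a numerator scaled by a large power of ten is far outside that range. A sketch of the continued-fraction approach, in Java rather than VBA since the idea is language-neutral (the names and tolerance are illustrative):

        public class Fractions {
            // Expand x as a continued fraction, tracking convergents h/k,
            // and stop at the first convergent within eps of x.
            public static String toFraction(double x, double eps) {
                long sign = x < 0 ? -1 : 1;
                double v = Math.abs(x);
                long h0 = 0, k0 = 1; // convergent h(-2)/k(-2)
                long h1 = 1, k1 = 0; // convergent h(-1)/k(-1)
                while (true) {
                    long a = (long) Math.floor(v); // next continued-fraction term
                    long h2 = a * h1 + h0;
                    long k2 = a * k1 + k0;
                    double frac = v - a;
                    if (Math.abs(x - sign * ((double) h2 / k2)) < eps || frac == 0.0) {
                        return (sign * h2) + "/" + k2;
                    }
                    h0 = h1; k0 = k1; h1 = h2; k1 = k2;
                    v = 1.0 / frac;
                }
            }

            public static void main(String[] args) {
                System.out.println(toFraction(0.333333333, 1e-6)); // prints 1/3
                System.out.println(toFraction(2.25, 1e-9));        // prints 9/4
            }
        }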

    Read the article

  • Android ListView Item Visibility

    - by user1478754
    I'm trying to create a screen that displays multiple items. For this, I've created a ListView, but I'm not sure how to approach adding/removing items from it. Since I'm using custom ListView items, I've created my own ListAdapter, but now I need a way to add items on a button click. Is there a way to create a ListView that can take an indefinite number of items (instead of passing it a fixed-length array of View objects)? Also, what's the best way to remove items from ListViews? setVisibility?
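    An adapter-backed ListView already handles an indefinite number of items: rather than handing the ListView a fixed array of Views, you mutate the adapter's backing list and tell the adapter the data changed. A minimal sketch (the names items, adapter, addButton, position and MyItem are hypothetical stand-ins for your own code; notifyDataSetChanged() is the standard Adapter API):

        // Assuming the usual android.view/android.widget imports, and that
        // 'adapter' (your custom adapter) was built on top of 'items'.
        final List<MyItem> items = new ArrayList<MyItem>();

        addButton.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                items.add(new MyItem("new row")); // grow the backing list
                adapter.notifyDataSetChanged();   // the ListView redraws itself
            }
        });

        // Removing an item works the same way; no setVisibility() involved:
        items.remove(position);
        adapter.notifyDataSetChanged();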

    Read the article

  • Real-Time Multi-User Gaming Platform

    - by Victor Engel
    I asked this question at Stack Overflow but was told it's more appropriate here, so I'm posting it again here. I'm considering developing a real-time multi-user game, and I want to gather some information about possibilities before I do some real development. I've thought about how best to ask the question, and for simplicity, the best way that occurred to me was to make an analogy to the field (or playground) game darebase.

    In the field game of darebase, there are two or more bases. To start, there is one team on each base. The game is a fancy game of tag. When two people meet out in the field, the person who left his base most recently timewise captures the other person. They then return to that person's base. Play continues until everyone is part of the same team.

    So, analogizing this to an online computer game, let's suppose there are an indefinite number of bases. When a person starts up the game, he has a team that is located at, for example, his current GPS coordinates. It could be a virtual world, but for the sake of argument, let's suppose the virtual world corresponds to the player's actual GPS coordinates. The game software then consults the database to see where the closest other base is that is online, and the two teams play their game of virtual tag. Note that the user of the other base could have a different base than the one run by the current user as the closest base to him, in which case he would be in two simultaneous battles, one with each base. When they go offline, the state of their players is saved on a server somewhere. Game logic calls for the players to have automaton logic of some sort, so they can fend for themselves in a limited way using basic rules until their user goes online again. The user doesn't control the players' movements directly, but issues general directives that influence the players' movement logic.

    I think this analogy is good enough to frame my question: what sort of platforms are available to develop this sort of game? I've been looking at SmartFoxServer, but I'm not convinced yet that it is the best option or even that it will work at all. One possibility, of course, would be to roll out my own web server, but I'd rather not do that if there is an existing service out there already that I could tap into. I will be developing for iOS devices at first. So any suggestions would be greatly appreciated. I think I need to establish the architecture first before proceeding with this project. Note that darebase is not the game I intend to implement, but, upon reflection, that might not be a bad idea either.

    Read the article

  • itunes iphone sync stuck on "Waiting for items to copy"

    - by lindes
    superuser crowd, I've found a number of threads about this, but have yet to find a satisfactory answer. Perhaps I'll have better luck on this site?? I hope so...

    I'm having a problem with iTunes where syncing my iPhone ends up seeming to basically finish, but it says "Waiting for items to copy" after everything else, and just stays there for... well, a very long time, if I let it. Oh, and I'm not sure this is always the case, but at the moment it's doing this while "Syncing Genius Data to [my iPhone] (Step 8 of 8)". If I click the little x in iTunes, it then gets stuck on "Canceling sync" for an equally indefinite sort of period. If I simply unplug the phone, everything seems to be synced and happy, but the next time I sync, it happens again.

    I presume this is a bug of some sort on Apple's part, but it seems like people have found workarounds... I'm just having trouble tracking one down that (a) is well described enough that I can actually follow it, (b) has enough detail that I can do it without losing data (i.e. tells me pathnames that I might want to copy first, or the like, before telling me to remove something) (note: see also point (a) -- I can't remove it if it's telling me to remove something that I don't know where it is!), and (c) otherwise seems sane.

    I'm hoping that here perhaps I'll get better luck -- with either a workaround, and/or debugging tips for figuring out how to find myself a workaround. Note: I'm busy with some other things at the moment, but can try to add some additional information later, if necessary.

    Read the article

  • Windows Server 2008 - RAID 5 Fails on Reboot

    - by Adam
    Hey, I've got an install of Windows Server 2008 Enterprise. It's running software RAID-5 with five disks. The disks were originally formatted under Windows Server 2003, but came up fine once I installed Windows Server 2008. The issue I'm having is that every time I reboot the server, the RAID comes up with a "Failed Redundancy" - the data stays available.

    I have 4 disks on a PCI SATA controller, and one of the disks connected to the motherboard's on-board SATA ports. (The other on-board port has the system disk connected.) I was having Disk #4 fail consistently, so I tried swapping the cables on the controller end. I swapped the on-board RAID disk with one on the PCI controller. Same issue now, except with disk #1.

    Once the system's up, I can reactivate the RAID; it will resync for a while, then go to "Healthy", and will stay that way for an indefinite amount of time - until I reboot. As soon as I reboot, the disk drops again.

    I've ruled out disk + cable with the recabling. I don't believe it would be the controller, as it seems to work fine most of the time - only failing on reboot - and the other port on the same controller connects the system disk, which is clearly working. I did look in the event log, but didn't see anything particularly relevant (although I didn't know what I was looking for - just looked for anything with a "Warning" or "Error" symbol that looked disk-related :)).

    I'm not particularly familiar with RAID on Windows. Does anyone have any idea why this might be doing this? Any idea how to fix it? Any suggestions appreciated! -- Adam

    Read the article

  • linux shutdown hang with wifi cifs mounts

    - by Sirex
    Since Fedora 15 (and now with 16) it seems that wireless clients take a long while to shut down when they have network filesystems mounted at shutdown time. I've pushed out a cifs mount via Puppet, and all clients have it, including those on wireless. If, say, a laptop is on a wired connection, it shuts down just fine, but if it's on the wifi at the time (and has no wired connection) it'll hang at the Fedora "f" logo. I'm not sure if it hangs indefinitely or just for a really long while, but I'll give it a test when I shut this machine down in a second.

    Needless to say, it's pretty annoying. So is there a way of causing the machine to shut down even if network connectivity has been lost at unmount time, or an official way to reorder events so the wireless card is kept up until after the unmount happens during the shutdown process (short of writing a custom script for shutdowns, which is a bit of a kludge)?

    It does this on multiple machines, and they all started doing it when we went from Fedora 14 to 15. It was such an obvious issue I'd kind of assumed someone must have reported it or there was an easy fix, but I've not discovered anything yet.

    Additional info: I can confirm that manually unmounting the mounts and then shutting down (sudo shutdown or the xfce shutdown button) will shut down just fine; it only hangs if the mounts are still mounted.

    The Puppet config that sets up the mount looks like this (now with the _netdev entry that is indeed pushed to clients successfully, but makes no difference):

        file { "/mnt/share":
            ensure => directory,
        }

        mount { "/mnt/share":
            atboot   => true,
            ensure   => mounted,
            remounts => false,
            fstype   => cifs,
            device   => "//srv/share",
            options  => "user,gid=shareusers,uid=${user},file_mode=0700,dir_mode=0700,credentials=/root/.smbcreds,_netdev",
            require  => [ File["/mnt/share"], Group["shareusers"] ],
        }

    Read the article

  • Relation between HTTP Keep Alive duration and TCP timeout duration

    - by Suresh Kumar
    I am trying to understand the relation between TCP/IP and HTTP timeout values. Are these two timeout values different or the same? Most web servers allow users to set the HTTP keep-alive timeout value through some configuration. How is this value used by the web server? Is this value just set on the underlying TCP/IP socket, i.e. are the HTTP keep-alive timeout and the TCP/IP keep-alive timeout the same? Or are they treated differently?

    My understanding is (maybe incorrect): the web server uses the default timeout on the underlying TCP socket (i.e. indefinite) regardless of the configured HTTP keep-alive timeout, and creates a worker thread that counts down the specified HTTP timeout interval. When the worker thread hits zero, it closes the connection.

    EDIT: My question is about the relation or difference between the two timeout durations, i.e. what will happen when the HTTP keep-alive timeout duration and the timeout on the socket (SO_TIMEOUT) which the web server uses are different? Should I even worry about whether these two are the same or not?
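    The two knobs live at different layers, which a minimal Java sketch makes concrete (the socket calls are standard java.net API; the header values shown are illustrative only, not from any particular server):

        import java.net.ServerSocket;
        import java.net.Socket;

        public class TimeoutDemo {
            public static void main(String[] args) throws Exception {
                ServerSocket server = new ServerSocket(8080);
                Socket client = server.accept();

                // SO_TIMEOUT is a TCP-socket read timeout: how long a blocking
                // read() on this socket may wait for more bytes before throwing
                // SocketTimeoutException. It knows nothing about HTTP.
                client.setSoTimeout(30000);

                // The HTTP keep-alive timeout is separate, application-level
                // bookkeeping: the server itself decides how long to keep an
                // idle persistent connection open between requests, and may
                // advertise that policy in a response header, e.g.:
                //
                //   Connection: keep-alive
                //   Keep-Alive: timeout=5, max=100

                client.close();
                server.close();
            }
        }

    So the two durations are independent: SO_TIMEOUT guards an in-progress read, while the keep-alive timeout governs how long the server tolerates an idle connection between requests.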

    Read the article

  • using a While loop to control the value of a keyframe in a timeline in Javafx

    - by dragoknight
    I want to create an animation where a ball moves into one of the 4 sectors of a circle, chosen randomly. This happens 5 times, so I create a while loop (i < 5) and call the random function. I then use an if/else chain to set x and y values according to the random function's value. I then create a Timeline object inside the while loop and use these x and y values in the key frames. But when I run the program, all the values are created (the x and y values are visible through println), yet only the animation for the last x and y values (i = 5) is rendered on screen. The animations for the previous values (1 <= i <= 4) are not rendered. Please advise: where have I gone wrong?

        public function run(args: String[]) {
            var i = 0;
            while (i < 5) {
                var z = gety();
                println(z);
                // var relativeTime: Duration = 0s;
                if (z == 1) {
                    xbind = 120;
                    ybind = 80;
                } else if (z == 2) {
                    xbind = 120;
                    ybind = 120;
                } else if (z == 3) {
                    xbind = 80;
                    ybind = 120;
                } else if (z == 4) {
                    xbind = 80;
                    ybind = 80;
                }
                var t: Timeline = Timeline {
                    // time: bind pos with inverse; repeatCount: Timeline.INDEFINITE
                    autoReverse: true
                    keyFrames: [
                        KeyFrame {
                            time: 0s
                            values: [x => 100.0, y => 100.0]
                        },
                        KeyFrame {
                            time: 2s
                            values: [
                                x => xbind tween Interpolator.LINEAR,
                                y => ybind tween Interpolator.LINEAR,
                            ]
                        },
                    ]
                } // end timeline
                i++;
                t.play();
                Thread.sleep(2000);
            } // end while
        }

    Read the article

  • Sun Ray Hardware Last Order Dates & Extension of Premier Support for Desktop Virtualization Software

    - by Adam Hawley
    In light of the recent announcement to end new feature development for Oracle Virtual Desktop Infrastructure Software (VDI), Oracle Sun Ray Software (SRS), Oracle Virtual Desktop Client (OVDC) Software, and Oracle Sun Ray Client hardware (3, 3i, and 3 Plus), there have been questions and concerns regarding what this means for customers with new or existing deployments. The following updates clarify some of these commonly asked questions.

    Extension of Premier Support for Software

    Though there will be no new feature additions to these products, customers will have access to maintenance update releases for Oracle Virtual Desktop Infrastructure and Sun Ray Software, including Oracle Virtual Desktop Client and Sun Ray Operating Software (SROS), until Premier Support ends. To ensure that customer investments for these products are protected, Oracle Premier Support for these products has been extended by 3 years, to the following dates:

      - Sun Ray Software - November 2017
      - Oracle Virtual Desktop Infrastructure - March 2017

    Note that OVDC support is also extended to the above dates, since OVDC is licensed by default as part of the SRS and VDI products. As a reminder, this only affects the products listed above. Oracle Secure Global Desktop and Oracle VM VirtualBox will continue to be enhanced with new features from time to time and, as a result, they are not affected by the changes detailed in this message.

    The extension of support means that customers under a support contract will still be able to file service requests through Oracle Support, and Oracle will continue to provide the utmost level of support to our customers as expected, until the published Premier Support end date. Following the end of Premier Support, Sustaining Support remains an 'indefinite' period of time.

    Sun Ray 3 Series Clients - Last Order Dates

    For Sun Ray Client hardware, customers can continue to purchase Sun Ray Client devices until the following last order dates:

        Product               Marketing Part Number           Last Order Date     Last Ship Date
        Sun Ray 3 Plus        TC3-P0Z-00, TC3-PTZ-00 (TAA)    September 13, 2013  February 28, 2014
        Sun Ray 3 Client      TC3-00Z-00                      February 28, 2014   August 31, 2014
        Sun Ray 3i Client     TC3-I0Z-00                      February 28, 2014   August 31, 2014
        Payflex Smart Cards   X1403A-N, X1404A-N              February 28, 2014   August 31, 2014

    Note the difference in the Last Order Date for the Sun Ray 3 Plus (September 13, 2013) compared to the other products, which have a Last Order Date of February 28, 2014. The rapidly approaching date for the Sun Ray 3 Plus is due to a supplier phasing out production of a key component of the 3 Plus. Given September 13 is unfortunately quite soon, we strongly encourage you to place your last-time buy as soon as possible to maximize Oracle's ability to fulfill your order. Keep in mind you can schedule shipments to be delivered as late as the end of February 2014, but the last day to order is September 13, 2013.

    Customers wishing to purchase other models - Sun Ray 3 Clients and/or Sun Ray 3i Clients - have additional time (until February 28, 2014) to assess their needs and to allow fulfillment of last-time orders. Please note that availability of supply cannot be absolutely guaranteed up to the last order dates, and we strongly recommend placing last-time buys as early as possible.

    Warranty replacements for Sun Ray Client hardware for customers covered by Oracle Hardware Systems Support contracts will be available beyond the last order dates, per Oracle's policy found on Oracle.com here. Per that policy, Oracle intends to provide replacement hardware for up to 5 years beyond the last ship date, but hardware may not be available beyond the 5-year period after the last ship date for reasons beyond Oracle's control. In any case, by design, Sun Ray Clients have an extremely long lifespan and mean time between failures (MTBF) - much longer than PCs - and over the years we have continued to see first- and second-generation Sun Rays still in daily use. This is no different for the Sun Ray 3, 3i, and 3 Plus. Because of this, and in addition to Oracle's continued support for SRS, VDI, and SROS, Sun Ray and Oracle VDI deployments can continue to expand and exist as a viable solution for some time in the future.

    Continued Availability of Product Licenses and Support

    Oracle will continue to offer all existing software licenses, and software and hardware support, including:

      - Product licenses and Premier Support for Sun Ray Software and Oracle Virtual Desktop Infrastructure
      - Premier Support for Operating Systems (for Sun Ray Operating Software maintenance upgrades/support)
      - Premier Support for Systems (for Sun Ray Operating Software maintenance upgrades/support and hardware warranty)
      - Support renewals

    For More Information

    For more information, please refer to the following documents for specific dates and policies associated with the support of these products:

      - Document 1478170.1 - Oracle Desktop Virtualization Software and Hardware Lifetime Support Schedule
      - Document 1450710.1 - Sun Ray Client Hardware Lifetime schedule
      - Document 1568808.1 - Document Support Policies for Discontinued Oracle Virtual Desktop Infrastructure, Sun Ray Software and Hardware and Oracle Virtual Desktop Client Development

    For Sales Orders and Questions

    Please contact your Oracle Sales Representative or Saurabh Vijay ([email protected])

    Read the article

  • About Solaris 11 and UltraSPARC II/III/IV/IV+

    - by nospam(at)example.com (Joerg Moellenkamp)
    I know that I will get the usual amount of comments like "Oh, Jörg, you can't be negative about Oracle" for this article. However, as usual, I want to explain the logic behind my reasoning. Yes, I know that there is a lot of UltraSPARC III, IV and IV+ gear out there. But there are some very basic questions: Does the application you are currently running on this gear stop running just because you can't run Solaris 11 on it? What is the need to upgrade a system already in production to Solaris 11? I have the impression that some people think the systems become useless the moment Oracle releases Solaris 11.

    I know that Sun sold UltraSPARC IV+ systems until 2009. Take the Sun SF490, introduced in 2004, for example: that was a Sun SF480 with UltraSPARC IV and later with UltraSPARC IV+. And yes, Sun made some speedbumps. At that time the systems of the UltraSPARC III to IV+ generations were supported on Solaris 8, on Solaris 9 and on Solaris 10. However, from my perspective we sold them to customers who weren't able to migrate to Solaris 10 because they used applications not supported on Solaris 9, or who just didn't want to migrate to Solaris 10. Believe it or not, I personally know two customers that migrated core systems to Solaris 10 in, well, 2008/9. This was especially true when the M3000 was announced in 2008, closing the darned single-socket gap. It may be different at your site; however, that's what I remember about that time from talking with customers.

    At first: just because there is no Solaris 11 for UltraSPARC III, IV and IV+, it doesn't mean that Solaris 10 will go away anytime soon. I just want to point you to "Expect Lifetime Support - Hardware and Operating Systems". It states about Premier Support: "Maintenance and software upgrades are included for Oracle operating systems and Oracle VM for a minimum of eight years from the general availability date." GA for Solaris 10 was in 2005; plus 8 years gives 2013, at minimum. Then you can still opt for 3 years of "Extended Support", which takes you to 2016, at minimum. In 2016, systems purchased in 2009 are 7 years old - even systems purchased at the very end of the lifetime of that system generation. Those are the rules as written in the linked document. And I said minimum: the actual dates are even further in the future. Premier Support for Solaris 10 ends in 2015, Extended Support ends in 2018, and Sustaining Support is indefinite. You will find this in the document "Oracle Lifetime Support Policy: Oracle Hardware and Operating Systems". So I don't understand it when some people write that Oracle is less protective of hardware investments than Sun. And for hardware it's the same as with Sun: service 5 years after EOL as part of Premier Support.

    I would like to offer a different perspective as well. I have to be a little cautious here, because this is going into the roadmap area, so I will only mention public sources. John Fowler said last year that we should expect at least 3x the single-thread performance of T3 for T4. We have 8 cores in T4, as stated by Rick Hetherington. Let's assume for a moment that a T4 core will have the performance of an UltraSPARC core (just to simplify the math and not to disclose anything about the performance; all existing SPARC cores are considered equal). Given these pieces of information, you could consolidate 8 V215, 4 or 8 V245, 2 full-blown V445, 2 full-blown V490, or 2 full-blown M3000 on a single T4 SPARC processor. The Fowler roadmap presentation talked about 4-socket systems with T4. So that's 32 V215, 16 to 32 V245, 8 full-blown V445, 8 full-blown V490, or 8 full-blown M3000 in a single system image. I think you get the idea.

    That said, most of the systems we are talking about have already been amortized, and perhaps it's just time to invest in new systems to yield other advantages: reduced space consumption, reduced power consumption, some of the neat features sun4v gives you, and, yes, a reduced number of processor licenses for Oracle and less money for Oracle HW/SW support. As much as I dislike it myself that my own UltraSPARC III and UltraSPARC II based systems won't run Solaris 11 (and I have quite a few of them in my personal lab), I really think that the impact on production environments will be much less than most people think now.

    By the way: the reason for this move is a quite significant new feature. I will tell you which feature it was when it's out. I assume telling just a word more could lead to much more time to blog.

    Read the article

  • ProgressDialog does not display until after AsyncTask completes

    - by tedwards
    I am trying to display an indefinite ProgressDialog while an AsyncTask binds to a RemoteService. The RemoteService builds a list of the user's contacts when the service is first created. For a long list of contacts this may take 5~10 seconds. The problem I am having is that the ProgressDialog does not display until after the RemoteService has built its list of contacts. I even tried putting a Thread.sleep in to give the ProgressDialog time to show up. With the sleep statement the ProgressDialog loads and starts spinning, but then locks up as soon as the RemoteService starts doing its work. If I just turn the AsyncTask into dummy code, and just let it sleep for a while, everything works fine. But when the task has to do actual work, it is like the UI just sits and waits. Any ideas on what I'm doing wrong?

        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            Log.d(IM, "Start Me UP!!");
            setContentView(R.layout.main);
            Log.d(IM, "Building List View for Contacts");
            restoreMe();
            if (myContacts == null) {
                myContacts = new ArrayList<Contact>();
                this.contactAdapter = new ContactAdapter(this, R.layout.contactlist, myContacts);
                setListAdapter(this.contactAdapter);
                new BindAsync().execute();
            } else {
                this.contactAdapter = new ContactAdapter(this, R.layout.contactlist, myContacts);
                setListAdapter(this.contactAdapter);
            }
        }

        private class BindAsync extends AsyncTask<Void, Void, RemoteServiceConnection> {
            @Override
            protected void onPreExecute() {
                super.onPreExecute();
                Log.d(IM, "Showing Dialog");
                showDialog(DIALOG_CONTACTS);
            }

            @Override
            protected RemoteServiceConnection doInBackground(Void... v) {
                Log.d(IM, "Binding to service in BindAsync");
                try {
                    Thread.sleep(2000);
                } catch (InterruptedException e) {
                }
                RemoteServiceConnection myCon;
                myCon = new RemoteServiceConnection();
                Intent i = new Intent(imandroid.this, MyRemoteService.class);
                bindService(i, myCon, Context.BIND_AUTO_CREATE);
                startService(i);
                Log.d(IM, "Bound to remote service");
                return myCon;
            }

            @Override
            protected void onPostExecute(RemoteServiceConnection newConn) {
                super.onPostExecute(newConn);
                Log.d(IM, "Storing remote connection");
                conn = newConn;
            }
        };
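    One plausible explanation, offered as a guess consistent with the symptoms: a Service's lifecycle callbacks, including the onCreate() where the contact list is built, run on the application's main thread, not on the AsyncTask's worker thread, so the dialog freezes while the service initializes. A sketch of deferring that work inside the service (buildContactList() is a hypothetical stand-in for the real initialization):

        import android.app.Service;
        import android.content.Intent;
        import android.os.IBinder;

        public class MyRemoteService extends Service {
            @Override
            public void onCreate() {
                super.onCreate();
                // onCreate() runs on the main thread; push the slow contact
                // scan onto a background thread so the UI (and the
                // ProgressDialog) keeps animating while it runs.
                new Thread(new Runnable() {
                    @Override
                    public void run() {
                        buildContactList(); // hypothetical heavy initialization
                    }
                }).start();
            }

            @Override
            public IBinder onBind(Intent intent) {
                return myBinder; // binding logic unchanged from the original service (hypothetical field)
            }
        }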

    Read the article

  • Why is FLD1 loading NaN instead?

    - by Bernd Jendrissek
    I have a one-liner C function that is just return value * pow(1.+rate, -delay); - it discounts a future value to a present value. The interesting part of the disassembly is:

        0x080555b9 : neg    %eax
        0x080555bb : push   %eax
        0x080555bc : fildl  (%esp)
        0x080555bf : lea    0x4(%esp),%esp
        0x080555c3 : fldl   0xfffffff0(%ebp)
        0x080555c6 : fld1
        0x080555c8 : faddp  %st,%st(1)
        0x080555ca : fxch   %st(1)
        0x080555cc : fstpl  0x8(%esp)
        0x080555d0 : fstpl  (%esp)
        0x080555d3 : call   0x8051ce0
        0x080555d8 : fmull  0xfffffff8(%ebp)

    While single-stepping through this function, gdb says (rate is 0.02, delay is 2; you can see them on the stack):

        (gdb) si
        0x080555c6    30    return value * pow(1.+rate, -delay);
        (gdb) info float
          R7: Valid   0x4004a6c28f5c28f5c000 +41.68999999999999773
          R6: Valid   0x4004e15c28f5c28f6000 +56.34000000000000341
          R5: Valid   0x4004dceb851eb851e800 +55.22999999999999687
          R4: Valid   0xc0008000000000000000 -2
        =>R3: Valid   0x3ff9a3d70a3d70a3d800 +0.02000000000000000042
          R2: Valid   0x4004ff147ae147ae1800 +63.77000000000000313
          R1: Valid   0x4004e17ae147ae147800 +56.36999999999999744
          R0: Valid   0x4004efb851eb851eb800 +59.92999999999999972

        Status Word:         0x1861  IE PE SF  TOP: 3
        Control Word:        0x037f  IM DM ZM OM UM PM
                             PC: Extended Precision (64-bits)
                             RC: Round to nearest
        Tag Word:            0x0000
        Instruction Pointer: 0x73:0x080555c3
        Operand Pointer:     0x7b:0xbff41d78
        Opcode:              0xdd45

    And after the fld1:

        (gdb) si
        0x080555c8    30    return value * pow(1.+rate, -delay);
        (gdb) info float
          R7: Valid   0x4004a6c28f5c28f5c000 +41.68999999999999773
          R6: Valid   0x4004e15c28f5c28f6000 +56.34000000000000341
          R5: Valid   0x4004dceb851eb851e800 +55.22999999999999687
          R4: Valid   0xc0008000000000000000 -2
          R3: Valid   0x3ff9a3d70a3d70a3d800 +0.02000000000000000042
        =>R2: Special 0xffffc000000000000000 Real Indefinite (QNaN)
          R1: Valid   0x4004e17ae147ae147800 +56.36999999999999744
          R0: Valid   0x4004efb851eb851eb800 +59.92999999999999972

        Status Word:         0x1261  IE PE SF C1  TOP: 2
        Control Word:        0x037f  IM DM ZM OM UM PM
                             PC: Extended Precision (64-bits)
                             RC: Round to nearest
        Tag Word:            0x0020
        Instruction Pointer: 0x73:0x080555c6
        Operand Pointer:     0x7b:0xbff41d78
        Opcode:              0xd9e8

    After this, everything goes to hell. Things get grossly over- or under-valued, so even if there were no other bugs in my freeciv AI attempt, it would choose all the wrong strategies. Like sending the whole army to the arctic. (Sigh, if only I were getting that far.) I must be missing something obvious, or getting blinded by something, because I can't believe that fld1 should ever possibly fail. Even less that it should fail only after a handful of passes through this function. On earlier passes the FPU correctly loads 1 into ST(0). The bytes at 0x080555c6 definitely encode fld1 - checked with x/... on the running process. What gives?

    Read the article

  • Introduction to Human Workflow 11g

    - by agiovannetti
    Human Workflow is a component of SOA Suite, just like BPEL, Mediator, Business Rules, etc. The Human Workflow component allows you to incorporate human intervention in a business process. You can use Human Workflow to create a business process that requires a manager to approve purchase orders greater than $10,000, or a business process that handles article reviews in which a group of reviewers needs to vote/approve an article before it gets published. Human Workflow can handle the task assignment and routing as well as the generation of notifications to the participants. There are three common patterns or usages of Human Workflow:

      1) Approval scenarios: manage documents and other transactional data through approval chains. For example: approve expense report, vacation approval, hiring approval, etc.
      2) Reviews by multiple users or groups: group collaboration and review of documents or proposals. For example, processing a sales quote which is subject to review by multiple people.
      3) Case management: workflows around work management or case management. For example, processing a service request. This could be routed to various people who all need to modify the task. It may also incorporate ad hoc routing which is unknown at design time.

    SOA 11g Human Workflow includes the following features:

      - Assignment and routing of tasks to the correct users or groups.
      - Deadlines, escalations, notifications, and other features required for ensuring the timely performance of a task.
      - Presentation of tasks to end users through a variety of mechanisms, including a Worklist application.
      - Organization, filtering, prioritization and other features required for end users to productively perform their tasks.
      - Reports, reassignments, load balancing and other features required by supervisors and business owners to manage the performance of tasks.

    Human Workflow Architecture

    The Human Workflow component is divided into 3 modules: the service interface, the task definition and the client interface module. The Service Interface handles the interaction with BPEL and other components. The Client Interface handles the presentation of task data through clients like the Worklist application, portals and notification channels. The task definition module is in charge of managing the lifecycle of a task: Who should get the task assigned? What should happen next with the task? When must the task be completed? Should the task be escalated? Etc.

    Stages and Participants

    When you create a Human Task you need to specify how the task is assigned and routed. The first step is to define the stages and participants. A stage is just a logical group. A participant can be a user, a group of users or an application role. The participants indicate the type of assignment and routing that will be performed. Stages can be sequential or in parallel. You can combine them to create any usage you require. See the diagram below:

    Assignment and Routing

    There are different ways a task can be assigned and routed:

      - Single Approver: the task is assigned to a single user, group or role. For example, a vacation request is assigned to a manager. If the manager approves or rejects the request, the employee is notified with the decision. If the task is assigned to a group, then once one of the managers acts on it, the task is completed.
      - Parallel: the task is assigned to a set of people who must work in parallel. This is commonly used for voting. For example, a task gets approved once 50% of the participants approve it. You can also set it up to be a unanimous vote.
      - Serial: participants must work in sequence. The most common scenario for this is management chain escalation.
      - FYI (For Your Information): the task is assigned to participants who can view it, add comments and attachments, but can not modify or complete the task.

    Task Actions

    The following is the list of actions that can be performed on a task:

      - Claim: if a task is assigned to a group or multiple users, then the task must be claimed first to be able to act on it.
      - Escalate: if the participant is not able to complete a task, he/she can escalate it. The task is reassigned to his/her manager (up one level in a hierarchy).
      - Pushback: the task is sent back to the previous assignee.
      - Reassign: if the participant is a manager, he/she can delegate a task to his/her reports.
      - Release: if a task is assigned to a group or multiple users, it can be released if the user who claimed the task cannot complete the task. Any of the other assignees can claim and complete the task.
      - Request Information and Submit Information: use when the participant needs to supply more information or to request more information from the task creator or any of the previous assignees.
      - Suspend and Resume: if a task is not relevant, it can be suspended. A suspension is indefinite. It does not expire until Resume is used to resume working on the task.
      - Withdraw: if the creator of a task does not want to continue with it, for example, he wants to cancel a vacation request, he can withdraw the task. The business process determines what happens next.
      - Renew: if a task is about to expire, the participant can renew it. The task expiration date is extended one week.

    Notifications

    Human Workflow provides a mechanism for sending notifications to participants to alert them of changes on a task. Notifications can be sent via email, telephone voice message, instant messaging (IM) or short message service (SMS). Notifications can be sent when the task status changes to any of the following:

      - Assigned/renewed/delegated/reassigned/escalated
      - Completed
      - Error
      - Expired
      - Request Info
      - Resume
      - Suspended
      - Added/updated comments and/or attachments
      - Updated outcome
      - Withdraw
      - Other actions (e.g. acquiring a task)

    Here is an example of an email notification:

    Worklist Application

    Oracle BPM Worklist application is the default user interface included in SOA Suite. It allows users to access and act on tasks that have been assigned to them. For example, from the Worklist application, a loan agent can review loan applications or a manager can approve employee vacation requests. Through the Worklist Application users can:

      - Perform authorized actions on tasks, acquire and check out shared tasks, define personal to-do tasks and define subtasks.
      - Filter the tasks view based on various criteria.
      - Work with standard work queues, such as high priority tasks, tasks due soon and so on. Work queues allow users to create a custom view to group a subset of tasks in the worklist, for example, high priority tasks, tasks due in 24 hours, expense approval tasks and more.
      - Define custom work queues.
      - Gain proxy access to part of another user's tasks.
      - Define custom vacation rules and delegation rules.
      - Enable group owners to define task dispatching rules for shared tasks.
      - Collect a complete workflow history and audit trail.
      - Use digital signatures for tasks.
      - Run reports like unattended tasks, task productivity, etc.

    Here is a screenshot of what the Worklist Application looks like. On the right-hand side you can see the tasks that have been assigned to the user and the task's detail.

    References

      - Introduction to SOA Suite 11g Human Workflow Webcast
      - Note 1452937.2 Human Workflow Information Center
      - Using the Human Workflow Service Component 11.1.1.6
      - Human Workflow Samples
      - Human Workflow APIs Java Docs

    Read the article

  • Windows 7 Machine Makes Router Drop -All- Wireless Connections

    - by Hammer Bro.
    Some background: My home network consists of my Desktop, a two-month old Windows 7 (x64) machine which is online most frequently (N-spec), as well as three other Windows XP laptops (all G) that only connect every now and then (one for work, one for Netflix, and the other for infrequent regular laptop uses). I used to have a Belkin F5D8236-4 wireless router, and everything worked great. A week ago, however, I found out that the Belkin absolutely in no way would establish a VPN connection, something that has become important for work. So I bought a Netgear WNR3500v2/U/L. The wireless was acting a little sketchy at first for just the Windows 7 machine, but I thought it had something to do with 802.11N and I was in a hurry, so I just fished up an ethernet cable and disabled the computer's wireless.

    It has now become apparent, though, that whenever the Windows 7 machine is connected to the router, all wireless connections become unstable. I was using my work laptop for a solid six hours today with no trouble, having multiple SSH connections open over VPN and streaming internet radio in the background. Then, within two minutes of turning on this Windows 7 box, I had lost all connectivity over the wireless. And I was two feet away from the router. The same sort of thing happens on all of the other laptops -- Netflix can be playing stuff all weekend, but if I come up here and do things on this (W7) computer, the streaming will be dead within ten minutes.

    So here are my basic observations:

      - If the Windows 7 machine is off, then all connections will have a Signal Strength of Very Good or Excellent and a Speed of 48-54 Mbps for an indefinite amount of time.
      - Shortly after the Windows 7 machine is turned on, all wireless connections will experience a consistent decline in Speed down to 1.0 Mbps, eventually losing their connection entirely. These machines will continue to maintain 70% signal strength, as observed by themselves and the router.
      - Once dropped, a wireless connection will have difficulty reconnecting. And, if a connection manages to become established, it will quickly drop off again.
      - The Windows 7 machine itself will continue to function just fine if it's using a wired connection, although it will experience these same issues over the wireless.
      - All of the drivers and firmwares are up to date, and this happened both with the stock Netgear firmware as well as the (current) DD-WRT.

    What I've tried:

      - Making sure each computer is being assigned a distinct IP. (They are.)
      - Disabling UPnP and Stateful Packet Inspection on the router.
      - Disabling Network Sharing, SSDP Discovery, TCP/IP NetBios Helper and Computer Browser services on the Windows 7 machine.
      - Disabling QoS Packet Scheduler, IPv6, and Link Layer Topology Discovery options on my ethernet controller (leaving only Client for Microsoft Networks, File and Printer Sharing, and IPv4 enabled).

    What I think: It seems awfully similar to the problems discussed in detail at http://social.msdn.microsoft.com/Forums/en/wsk/thread/1064e397-9d9b-4ae2-bc8e-c8798e591915 (which was both the most relevant and concrete information I could dig up on the internet). I still think that something the Windows 7 IP stack (or just the Operating System itself) is doing is giving the router fits. However, I could be wrong, because I have two key differences. One is that most instances of this problem are reported as the entire router dying or restarting, and mine still works just fine over the wired connection. The other is that it's a new router, tested with both the factory firmware and the (I assume) well-maintained DD-WRT project. Even if Windows 7 is still secretly sending IPv6 packets or the TCP Window Scaling implementation that I hear Vista caused some trouble with (even though I've tried my best to disable anything fancy), this router should support those functions.

    I don't want to get a new or a replacement router unless someone can convince me that this is a defective unit. But the problem seems too specific and predictable by my instincts to be a hardware hiccup. And I don't want to deal with the inevitable problems that always seem to take half a day to resolve when getting a new router, since I'm frantically working (including tomorrow) to complete a project by next week's deadline. Plus, I think in the worst case scenario, I could keep this router connected directly to the modem, disable its wireless entirely, and connect the old Belkin to it directly. That should allow me to still use VPN (although I'll have to plug my work laptop directly into that router), and then maintain wireless connections for all of the other computers. But that feels so wrong to me.

    Anyone have any ideas what the cause and possible solution could be?

    Clarifications: The Windows 7 machine is directly connected via an ethernet cable to the router for everything above. But while it is online, all other computers' wireless connections become unusable. It is not an issue of signal strength or interference -- no other devices within scanning range are using Channel 1, and the problem will affect computers that are literally feet away from the router with 95% signal strength.

    Read the article

  • Windows 7 Machine Makes Router Drop -All- Wireless Connections [closed]

    - by Hammer Bro.
    Note: I accidentally originally posted this question over at SuperUser, and I still think the issue is caused by some low-level networking practice of Windows 7, but I think the expertise here would be more apt to figuring it out. Apologies for the cross-post.

    Some background: My home network consists of my Desktop, a two-month old Windows 7 (x64) machine which is online most frequently (N-spec), as well as three other Windows XP laptops (all G) that only connect every now and then (one for work, one for Netflix, and the other for infrequent regular laptop uses). I used to have a Belkin F5D8236-4 wireless router, and everything worked great. A week ago, however, I found out that the Belkin absolutely in no way would establish a VPN connection, something that has become important for work. So I bought a Netgear WNR3500v2/U/L. The wireless was acting a little sketchy at first for just the Windows 7 machine, but I thought it had something to do with 802.11N and I was in a hurry, so I just fished up an ethernet cable and disabled the computer's wireless.

    It has now become apparent, though, that whenever the Windows 7 machine is connected to the router, all wireless connections become unstable. I was using my work laptop for a solid six hours today with no trouble, having multiple SSH connections open over VPN and streaming internet radio in the background. Then, within two minutes of turning on this Windows 7 box, I had lost all connectivity over the wireless. And I was two feet away from the router. The same sort of thing happens on all of the other laptops -- Netflix can be playing stuff all weekend, but if I come up here and do things on this (W7) computer, the streaming will be dead within ten minutes.

    So here are my basic observations:

      - If the Windows 7 machine is off, then all connections will have a Signal Strength of Very Good or Excellent and a Speed of 48-54 Mbps for an indefinite amount of time.
      - Shortly after the Windows 7 machine is turned on, all wireless connections will experience a consistent decline in Speed down to 1.0 Mbps, eventually losing their connection entirely. These machines will continue to maintain 70% signal strength, as observed by themselves and the router.
      - Once dropped, a wireless connection will have difficulty reconnecting. And, if a connection manages to become established, it will quickly drop off again.
      - The Windows 7 machine itself will continue to function just fine if it's using a wired connection, although it will experience these same issues over the wireless.
      - All of the drivers and firmwares are up to date, and this happened both with the stock Netgear firmware as well as the (current) DD-WRT.

    What I've tried:

      - Making sure each computer is being assigned a distinct IP. (They are.)
      - Disabling UPnP and Stateful Packet Inspection on the router.
      - Disabling Network Sharing, SSDP Discovery, TCP/IP NetBios Helper and Computer Browser services on the Windows 7 machine.
      - Disabling QoS Packet Scheduler, IPv6, and Link Layer Topology Discovery options on my ethernet controller (leaving only Client for Microsoft Networks, File and Printer Sharing, and IPv4 enabled).

    What I think: It seems awfully similar to the problems discussed in detail at http://social.msdn.microsoft.com/Forums/en/wsk/thread/1064e397-9d9b-4ae2-bc8e-c8798e591915 (which was both the most relevant and concrete information I could dig up on the internet). I still think that something the Windows 7 IP stack (or just the Operating System itself) is doing is giving the router fits. However, I could be wrong, because I have two key differences. One is that most instances of this problem are reported as the entire router dying or restarting, and mine still works just fine over the wired connection. The other is that it's a new router, tested with both the factory firmware and the (I assume) well-maintained DD-WRT project. Even if Windows 7 is still secretly sending IPv6 packets or the TCP Window Scaling implementation that I hear Vista caused some trouble with (even though I've tried my best to disable anything fancy), this router should support those functions.

    I don't want to get a new or a replacement router unless someone can convince me that this is a defective unit. But the problem seems too specific and predictable by my instincts to be a hardware hiccup. And I don't want to deal with the inevitable problems that always seem to take half a day to resolve when getting a new router, since I'm frantically working (including tomorrow) to complete a project by next week's deadline. Plus, I think in the worst case scenario, I could keep this router connected directly to the modem, disable its wireless entirely, and connect the old Belkin to it directly. That should allow me to still use VPN (although I'll have to plug my work laptop directly into that router), and then maintain wireless connections for all of the other computers. But that feels so wrong to me.

    Anyone have any ideas what the cause and possible solution could be?

    Clarifications: The Windows 7 machine is directly connected via an ethernet cable to the router for everything above. But while it is online, all other computers' wireless connections become unusable. It is not an issue of signal strength or interference -- no other devices within scanning range are using Channel 1, and the problem will affect computers that are literally feet away from the router with 95% signal strength.

    Read the article

  • The Incremental Architect´s Napkin - #2 - Balancing the forces

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/06/02/the-incremental-architectacutes-napkin---2---balancing-the-forces.aspx

    Categorizing requirements is the prerequisite for economic architectural decisions. Not all requirements are created equal. However, to truly understand and describe the requirement forces pulling on software development, I think further examination of the requirements aspects is warranted.

    Aspects of Functionality

    There are two sides to Functionality requirements. It´s about what a software should do. I call that the Operations it implements. Operations are defined by expressions and control structures or calls to frameworks of some sort, i.e. (business) logic statements. Operations calculate, transform, aggregate, validate, send, receive, load, store etc. Operations are about behavior; they take input and produce output by considering state. I´m not using the term "function" here, because functions - or methods or sub-programs - are not necessary to implement Operations. Functions belong to a different sub-aspect of requirements (see below).

    Operations alone are not enough, though, to make a customer happy with regard to his/her Functionality requirements. Only correctly implemented Operations provide full value. This should make clear why testing is so important. And not just manual tests during development of some operational feature, but automated tests. Because only automated tests scale when the number of operations increases over time. Without automated tests there is no guarantee formerly correct operations are still correct after more got added. To retest all previous operations manually is infeasible. So whoever relies just on manual tests is not really balancing the two forces Operations and Correctness. With manual tests more weight is put on the Operations side of the scale. That might be ok for a short period of time - but in the long run it will bite you. You need to plan for Correctness in the long run from the first day of your project on.

    Aspects of Quality

    As important as Functionality is, it´s not the driver for software development. No software has ever been written just to implement some operation in code. We don´t need computers just to do something. Everything computers can do with software we can do without them - well, at least given enough time and resources. We could calculate the most complex formulas without computers. We could run auctions with millions of people without computers. The only reason we want computers to help us with this and a million other Operations is: we don´t want to wait very long for the results. Or we want fewer errors. Or we want easier access to complicated solutions. So the main reason for customers to buy/order software is some Quality. They want some Functionality with a higher Quality (e.g. performance, scalability, usability, security…) than without the software.

    But Qualities come in at least two flavors. Most important are Primary Qualities. Those are the Qualities software truly is written for. Take an online auction website for example. Its Primary Qualities are performance, scalability, and usability, I´d say. Auctions should come within reach of millions of people; setting up an auction should be very easy; finding a suitable auction and bidding on it should be as fast as possible. Only if those Qualities have been implemented does security become relevant. A secure auction website is important - but not as important as a fast auction website.
    Nobody would want to use the most secure auction website if it was unbearably slow. But there would be people willing to use the fastest auction website even if it was lacking security. That´s why security - with regard to online auction software - is not a Primary Quality, but just a Secondary Quality. It´s a supporting quality, so to speak; it does not deliver value by itself. With password manager software this might be different: there, security might be a Primary Quality.

    Don´t get me wrong: I don´t want to denigrate any Quality. There´s a long list of non-functional requirements at Wikipedia. They are all created equal - but that does not mean they are equally important for all software projects. When confronted with Quality requirements, check with the customer which are primary and which are secondary. That will help to make good economic decisions when in a crunch. Resources are always limited - but requirements are a bottomless ocean.

    Aspects of Security of Investment

    Functionality and Quality are traditionally the requirement aspects cared for most - by customers and developers alike. Even today, when pressure rises in a project, tunnel vision will focus on them. Any measures to create and hold up Security of Investment (SoI) will be out of the window pretty quickly. Resistance to customers and/or management is futile. As long as SoI is not placed on equal footing with Functionality and Quality, it´s bound to suffer under pressure. Looking closer at what SoI means will help us become more conscious about it and make customers and management aware of the risks of neglecting it. SoI to me has two facets:

    Production Efficiency (PE) is about speed of delivering value. Customers like short response times. Short response times mean less money spent. So whatever makes software development faster supports this requirement. This must not lead to duct-tape programming and banging out features by the dozen, though, because customers don´t just want Operations and Quality, but also Correctness. So if Correctness gets compromised by focussing too much on Production Efficiency, it will fire back. Customers want PE not just today, but over the whole course of a software´s lifecycle. That means it´s not just about coding speed, but equally about code quality. If code quality leads to rework, PE is at an unsatisfactory level. Also, if code production leads to waste, it´s unsatisfactory, because the effort which went into waste could have been used to produce value. Rework and waste cost money. Rework and waste abound, however, as long as PE is not addressed explicitly with management and customers. Thanks to the Agile and Lean movements that´s increasingly the case. Nevertheless more could and should be done in many teams. Each and every developer should keep in mind that Production Efficiency is as important to the customer as Functionality and Quality - whether he/she states it or not.

    Making software development more efficient is important - but sooner or later even agile projects are going to hit a glass ceiling, at least as long as they neglect the second SoI facet: Evolvability. Delivering correct, high-quality functionality in short cycles today is good. But not just any software structure will allow this to happen for an indefinite amount of time.[1] The less explicitly software was designed, the sooner it´s going to get stuck.
As stated previously, SoI - or to be more precise: Evolvability - can hardly be measured. Plus, the customer rarely states an explicit expectation with regard to it. That's why I think special care must be taken not to neglect it. Postponing it to some large refactorings should not be an option. Rather, Evolvability needs to be a core concern of every single developer day. This should not mean Evolvability is more important than any of the other requirement aspects - but neither is it less important. That's why more effort needs to be invested into it, to bring it on par with the other aspects, which usually are much more in focus.

In closing

As you see, requirements are of quite different kinds. Not taking that into account makes it harder to understand the customer and to make economic decisions. The sub-aspects of requirements are forces pulling in different directions: improving performance might have an impact on Evolvability, increasing Production Efficiency might have an impact on security, etc. No requirement aspect should go unchecked when deciding how to allocate resources. Balancing should be explicit. And it should be possible to trace back each decision to a requirement. Why is there a null-check on parameters at the start of the method? Why are there 5000 LOC in this method? Why are there interfaces on those classes? Why is this functionality running on the thread pool? Why is this function defined on that class? Why is this class depending on three other classes? These and a thousand more questions do not imply that anything should be different in a code base. But it's important to know the reason behind all of these decisions - because not knowing the reason possibly means waste and a suboptimal decision.
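One lightweight way to keep such reasons visible is to record them right where the decision was made. A hypothetical sketch - the requirement IDs (REQ-17, REQ-42) assume some tracker exists and are invented for illustration:

from typing import Optional

def register_bidder(name: Optional[str], email: Optional[str]) -> dict:
    # REQ-17 (Correctness): registration data arrives from an untrusted
    # web form, so reject missing values here instead of failing later.
    if name is None or email is None:
        raise ValueError("name and email are required")
    # REQ-42 (Primary Quality: usability): normalize the address so
    # logins are case-insensitive.
    return {"name": name, "email": email.strip().lower()}

Now the null-check no longer looks arbitrary: anyone reading the method can trace it back to a requirement - and question the requirement instead of guessing at the code.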
And how do we ensure all requirement aspects get balanced? That needs practices and transparency.

Practices mean doing things a certain way and not another, even though another might be possible. We're dealing with dangerous tools here - like a knife is a dangerous tool. Harm can be done if we use our tools in just any way, at the whim of the moment. Over the centuries, rules and practices have been established for how to use knives. You don't put them in people's legs just because you feel like it. You hand over a knife with the handle towards the receiver. You might not even be allowed to cut round food like potatoes or eggs with it. The same should be the case for dangerous tools like object-orientation, remote communication, threads etc. We need practices to use them in a way so that requirements are balanced almost automatically.

In addition, to be able to work on software as a team, we need transparency. We need means to share our thoughts, to work jointly on mental models. So far our tools are focused on working with code: testing frameworks, build servers, DI containers, IntelliSense, refactoring support… That's all nice and well, and I don't want to miss any of it. But I think it's not enough. We're missing mental tools - tools for making thinking and talking about software (independently of code) easier.

You might think enough such tools already exist, like all those UML diagram types or flow charts. But then, isn't it strange that hardly any team is using them to design software? Or is that just due to a lack of education? I don't think so. It's a matter of the value/weight ratio: the current mental tools are too heavyweight compared to the value they deliver. So my conclusion is, we need lightweight tools to really be able to balance requirements.

Software development is complex. We need guidance not to forget important aspects. That's like flying an airplane: pilots don't just jump in and take off for their destination. Yes, there are times when they are "flying by the seats of their pants", when they are just experts doing things intuitively. But most of the time they are going through honed practices called checklists. See "The Checklist Manifesto" for very enlightening details on this. Maybe then I should say it like this: we need more checklists for the complex business of software development.[3]

[1] But that's what software development mostly is about: changing software over an unknown period of time. It needs to be corrected in order to finally provide the promised Operations. It needs to be enhanced to provide ever more Operations and Qualities. All this without knowing when it's going to stop - probably never, until "maintainability" hits a wall because the technical debt is too large, the brownfield too deep. Software development is not a sprint, not a marathon, not even an ultra marathon, because all of those have a foreseeable end. Software development is like continuously and forever running…

[2] And sometimes I dare to think that costs could even decrease over time. Think of it: with each feature a software becomes richer in functionality. So with each additional feature, the chance increases that there already is functionality helping its implementation. That should lead to lower costs for feature X if it's requested later rather than sooner - X requested later could stand on the shoulders of previous features. Alas, reality seems to be far from this, despite 20+ years of admonishing developers to think in terms of reusability.

[3] Please don't get me wrong: I don't want to bog down the "art" of software development with heavyweight practices and heaps of rules to follow. The framework we need should be lightweight. It should not stand in the way of delivering value to the customer. Its purpose is rather to make that easier by helping us to focus and by decreasing waste and rework.

    Read the article

< Previous Page | 1 2