Search Results

Search found 1014 results on 41 pages for 'iteration'.


  • Trying to keep up with Technology and Blogging

    - by Dave Campbell
    A little bit of everything... The heading above got changed a bunch during writing and I finally settled on that because this has become a 'stream of consciousness' post... or maybe a stream of UNconsciousness :) If you've noticed, my blogging has been a tad slow this fall. There's been a lot going on personally. But then again, I haven't skipped anybody either. Rather than go through ALL the blogs I have aggregated, and take a week to get to the bottom, at some point in the last year, I had moved the lists around so I now have "SilverlightMVPs", "Very Prolific", "WP7", and "Top Checks". This is a total of about 250 of the more prolific bloggers. Those 250 bloggers have kept me very busy up through about //BUILD. Sometimes it would take all week to go through just that list putting out 13 posts per blog per day... but not anymore. This weekend I made it all the way through the BIG list... close to 700 blogs, and if you read my blog, you know I had one medium day (Saturday), and yesterday was very short. Why is this? To be honest, I don't know... is everybody busy re-tooling, or churning waiting for direction? I have a short list of WinRT/Metro/W8 folks... maybe I need to be pointed to more of them... but my old favorites are not pumping out posts as they have in the past. I said before that I am attracted to Metro, and I've already got My first Metro app post out there, and were it not for working with the new site, I'd have had another out last weekend... so definitely look for more from me in that area. New Site? Did I say 'new site' ? oops... didn't mean to do that, but now that the cat is out of the bag, I may as well continue... While at //BUILD, I discussed a re-tooling of SilverlightCream with lots of folks... probably more than wanted to hear about it to be honest! ... it's needed a facelift, and there's stuff on there that never worked right, plus there's a lot of manual effort that goes into a blog post. In an effort to alleviate all the above, Michael Washington and I have been working on the next iteration of SilverlightCream. Not wanting to lose that branding or mess with any saved links, I decided to change from a somewhat funky name to something more professional. I also decided to put my blog on the site, and tie my main announcement twitter feed to the site as well. The way things sit today, there are 3 different names in those locations and it's gotta be confusing for folks just stumbling in. We're going to do a series of posts talking about the site and the new backend processing (hint: Michael Washington is responsible for it, so you can take a guess at the technology), but for now, we'd like some eyes on the front end of the site, and some submittals using it to see if it falls over somewhere that we haven't tried. So... I'm going to give it up... the new site is Windows Dev News. The Twitter feed is @WindowsDevNews, and the blog will be on the site as well at Windows Dev News Blog. I've got the RSS Feed on Feedburner too, so I think all the nuts and bolts are good to go. The submittal and search pages work, as does the blog page. You'll notice we used the MasterPage from SilverlightCream to get started. That will probably change, but it's just the visual... the content is the important part. Other missing things are the tracking and 'Skim' page that we will eventually have up and running. There are some formatting issues with the blog posts but if you hang in there with me, those will be taken care of. 
If you're a blogger, please submit through the site and let me know if you find any problems. If you're a reader, please add this feed and site. I'll be duplicating the effort for a while but at some point will stop that foolishness. We won't lose the data from SilverlightCream though, so keep using that as a search resource... I have hopes to pull that database over to WindowsDevNews, or link to it in some manner... that part isn't set in jello yet, but it will not be lost. So there it is... let me know what you think, send me your WinRT/Metro/W8 postings along with your Silverlight and WP7 posts... it's not that different, it's just more. Stay in the 'Light


  • Measuring Usability with Common Industry Format (CIF) Usability Tests

    - by Applications User Experience
    Sean Rice, Manager, Applications User Experience A User-centered Research and Design Process The Oracle Fusion Applications user experience was five years in the making. The development of this suite included an extensive and comprehensive user experience design process: ethnographic research, low-fidelity workflow prototyping, high fidelity user interface (UI) prototyping, iterative formative usability testing, development feedback and iteration, and sales and customer evaluation throughout the design cycle. However, this process does not stop when our products are released. We conduct summative usability testing using the ISO 25062 Common Industry Format (CIF) for usability test reports as an organizational framework. CIF tests allow us to measure the overall usability of our released products.  These studies provide benchmarks that allow for comparisons of a specific product release against previous versions of our product and against other products in the marketplace. What Is a CIF Usability Test? CIF refers to the internationally standardized method for reporting usability test findings used by the software industry. The CIF is based on a formal, lab-based test that is used to benchmark the usability of a product in terms of human performance and subjective data. The CIF was developed and is endorsed by more than 375 software customer and vendor organizations led by the National Institute for Standards and Technology (NIST), a US government entity. NIST sponsored the CIF through the American National Standards Institute (ANSI) and International Organization for Standardization (ISO) standards-making processes. Oracle played a key role in developing the CIF. The CIF report format and metrics are consistent with the ISO 9241-11 definition of usability: “The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use.” Our goal in conducting CIF tests is to measure performance and satisfaction of a representative sample of users on a set of core tasks and to help predict how usable a product will be with the larger population of customers. Why Do We Perform CIF Testing? The overarching purpose of the CIF for usability test reports is to promote incorporation of usability as part of the procurement decision-making process for interactive products. CIF provides a common format for vendors to report the methods and results of usability tests to customer organizations, and enables customers to compare the usability of our software to that of other suppliers. CIF also enables us to compare our current software with previous versions of our software. CIF Testing for Fusion Applications Oracle Fusion Applications comprises more than 100 modules in seven different product families. These modules encompass more than 400 task flows and 400 user roles. Due to resource constraints, we cannot perform comprehensive CIF testing across the entire product suite. Therefore, we had to develop meaningful inclusion criteria and work with other stakeholders across the applications development organization to prioritize product areas for testing. Ultimately, we want to test the product areas for which customers might be most interested in seeing CIF data. We also want to build credibility with customers; we need to be able to make the case to current and prospective customers that the product areas tested are representative of the product suite as a whole. 
Our goal is to test the top use cases for each product. The primary activity in the scoping process was to work with the individual product teams to identify the key products and business process task flows in each product to test. We prioritized these products and flows through a series of negotiations among the user experience managers, product strategy, and product management directors for each of the primary product families within the Oracle Fusion Applications suite (Human Capital Management, Supply Chain Management, Customer Relationship Management, Financials, Projects, and Procurement). The end result of the scoping exercise was a list of 47 proposed CIF tests for the Fusion Applications product suite.  Figure 1. A participant completes tasks during a usability test in Oracle’s Usability Labs Fusion Supplier Portal CIF Test The first Fusion CIF test was completed on the Supplier Portal application in July of 2011.  Fusion Supplier Portal is part of an integrated suite of Procurement applications that helps supplier companies manage orders, schedules, shipments, invoices, negotiations and payments. The user roles targeted for the usability study were Supplier Account Receivables Specialists and Supplier Sales Representatives, including both experienced and inexperienced users across a wide demographic range.  The test specifically focused on the following functionality and features: Manage payments – view payments Manage invoices – view invoice status and create invoices Manage account information – create new contact, review bank account information Manage agreements – find and view agreement, upload agreement lines, confirm status of agreement lines upload Manage purchase orders (PO) – view history of PO, request change to PO, find orders Manage negotiations – respond to request for a quote, check the status of a negotiation response These product areas were selected to represent the most important subset of features and functionality of the flow, in terms of frequency and criticality of use by customers. A total of 20 users participated in the usability study. The results of the Supplier Portal evaluation were favorable and exceeded our expectations. Figure 2. Fusion Supplier Portal Next Studies We plan to conduct two Fusion CIF usability studies per product family over the next nine months. The next product to be tested will be Self-service Procurement. End users are currently being recruited to participate in this usability study, and the test sessions are scheduled to begin during the last week of November.


  • BPM Suite 11gR1 Released

    - by Manoj Das
    This morning (April 27th, 2010), Oracle BPM Suite 11gR1 became available for download from OTN and eDelivery. If you have been following our plans in this area, you know that this is the release unifying BEA ALBPM product, which became Oracle BPM10gR3, with the Oracle stack. Some of the highlights of this release are: BPMN 2.0 modeling and simulation Web based Process Composer for BPMN and Rules authoring Zero-code environment with full access to Oracle SOA Suite’s rich set of application and other adapters Process Spaces – Out-of-box integration with Web Center Suite Process Analytics – Native process cubes as well as integration with Oracle BAM You can learn more about this release from the documentation. Notes about downloading and installing Please note that Oracle BPM Suite 11gR1 is delivered and installed as part of SOA 11.1.1.3.0, which is a sparse release (only incremental patch). To install: Download and install SOA 11.1.1.2.0, which is a full release (you can find the bits at the above location) Download and install SOA 11.1.1.3.0 During configure step (using the Fusion Middleware configuration wizard), use the Oracle Business Process Management template supplied with the SOA Suite11g (11.1.1.3.0) If you plan to use Process Spaces, also install Web Center 11.1.1.3.0, which also is delivered as a sparse release and needs to be installed on top of Web Center 11.1.1.2.0 Some early feedback We have been receiving very encouraging feedback on this release. Some quotes from partners are included below: “I just attended a preview workshop on BPM Studio, Oracle's BPMN 2.0 tool, held by Clemens Utschig Utschig from Oracle HQ. The usability and ease to get started are impressive. In the business view analysts can intuitively start modeling, then developers refine in their own, more technical view. The BPM Studio sets itself apart from pure play BPMN 2.0 tools by being seamlessly integrated inside a holistic SOA / BPM toolset: BPMN models are placed in SCA-Composites in SOA Suite 11g. This allows to abstract away the complexities of SOA integration aspects from business process aspects. For UIs in BPMN tasks, you have the richness of ADF 11g based Frontends. With BPM Studio we architects have a new modeling and development IDE that gives us interesting design challenges to grasp and elaborate, since many things BPMN 2.0 are different from good ol' BPEL. For example, for simple transformations, you don't use BPEL "assign" any more, but add the transformation directly to the service call. There is much less XPath involved. And, there is no translation from model to BPEL code anymore, so the awkward process model to BPEL roundtrip, which never really worked as well as it looked on marketing slides, is obsolete: With BPMN 2.0 "the model is the code". Now, these are great times to start the journey into BPM! Some tips: Start Projects smoothly, with initial processes being not overly complex and not using the more esoteric areas of BPMN, to manage the learning path and to stay successful with each iteration. Verify non functional requirements by conducting performance and load tests early. As mentioned above, separate all technical integration logic into SOA Suite or Oracle Service Bus. And - share your experience!” Hajo Normann, SOA Architect - Oracle ACE Director - Co-Leader DOAG SIG SOA   "Reuse of components across the Oracle 11G Fusion Middleware stack, like for instance a Database Adapter, is essential. It improves stability and predictability of the solution. 
BPM just is one of the components plugging into the stack and reuses all other components." Mr. Leon Smiers, Oracle Solution Architect, Capgemini   “I had the opportunity to follow a hands-on workshop held by Clemens for Oracle partners and I was really impressed of the overall offering of BPM11g. BPM11g allows the execution of BPMN 2.0 processes, without having to transform/translate them first to BPEL in order to be executable. The fact that BPMN uses the same underlying service infrastructure of SOA Suite 11g has a lot of benefits for us already familiar with SOA Suite 11g. BPMN is just another SCA component within a SCA composite and can (re)use all the existing components like Rules, Human Workflow, Adapters and Mediator. I also like the fact that BPMN runs on the same service engine as BPEL. By that all known best practices for making a BPEL  process reliable are valid for BPMN processes as well. Last but not least, BPMN is integrated into the superior end-to-end tracing of SOA Suite 11g. With BPM11g, Oracle offers a very competitive product which will have a big effect on the IT market. Clemens and Jürgen: Thanks for the great workshop! I’m really looking forward to my first project using Oracle BPM11g!” Guido Schmutz, Technology Manager / Oracle ACE Director for Fusion Middleware and SOA, Company:  Trivadis Some earlier feedback were summarized in this post.


  • LINQ und ArcObjects

    - by Marko Apfel
    LINQ and ArcObjects Motivation LINQ [1] (language integrated query) has been a component of the Microsoft .NET Framework since version 3.5. It allows SQL-like queries against various data sources such as SQL, XML and many more. Like SQL, LINQ offers a declarative notation for problem solving - i.e. you do not describe in detail how a task is to be solved, but only what is to be solved. On the query side this frees the developer from error-prone iterator constructs. Ideally, of course, one could use these capabilities for features in ArcObjects programming as well. The following construct would then be conceivable: var largeFeatures = from feature in features where (feature.GetValue("SHAPE_Area").ToDouble() > 3000) select feature; or its equivalent as a lambda expression: var largeFeatures = features.Where(feature => (feature.GetValue("SHAPE_Area").ToDouble() > 3000)); For this, a corresponding provider must be available that manages the required iterator logic. This is easier than it looks at first glance - you only have to deliver the desired entities as IEnumerable<IFeature>. (Note: don't be surprised - I declared the methods GetValue() and ToDouble() as extension methods along the way.) In the background, LINQ automatically builds a state machine [2] whose execution is deferred (deferred execution) [3] - i.e. instantiation and processing only take place when entities are actually requested (foreach, Count(), ToList(), ..), even though the assignment happened at a completely different place. Especially with multiple iterations through the entities, you rub your eyes in astonishment during the first debugging sessions when the execution pointer jumps back into the iterator logic as if by magic. Realization A very concise piece of logic for constructing IEnumerable<IFeature> can be realized by running through an IFeatureCursor. The individual features are returned with yield. For ease of use, I have put the logic into an extension method GetFeatures() for IFeatureClass: public static IEnumerable<IFeature> GetFeatures(this IFeatureClass featureClass, IQueryFilter queryFilter, RecyclingPolicy policy) { IFeatureCursor featureCursor = featureClass.Search(queryFilter, RecyclingPolicy.Recycle == policy); IFeature feature; while (null != (feature = featureCursor.NextFeature())) { yield return feature; } //this is skipped in unit tests with cursor-mock if (Marshal.IsComObject(featureCursor)) { Marshal.ReleaseComObject(featureCursor); } } With this you can now very easily have the IEnumerable<IFeature> generated: IEnumerable<IFeature> features = _featureClass.GetFeatures(RecyclingPolicy.DoNotRecycle); You have to be a little careful when using a recycling cursor. After a deferred execution you must not iterate over the features again in the same context, because then only the content of the last (recycled) feature is delivered and all features within the set are identical. The construct largeFeatures.ToList(). ForEach(feature => Debug.WriteLine(feature.OID)); would therefore be critical, because ToList() already iterates once through the list and the cursor has thus been moved through the features once. The extension method ForEach then always delivers the same feature. In such situations, no recycling cursor may be used.
    Executing foreach several times, on the other hand, is no problem, because the state machine is re-instantiated each time and the cursor therefore runs through anew - that is the magic already mentioned above. Outlook You can now go one step further and tackle your own implementations of the IEnumerable<IFeature> interface. Only the method and the property for accessing the enumerator have to be implemented. In the enumerator itself, the Reset() method triggers the re-execution of the search - for this you pass, for example, a corresponding delegate into the constructor: new FeatureEnumerator<IFeatureClass>( _featureClass, featureClass => featureClass.Search(_filter, isRecyclingCursor)); and call it in Reset(): public void Reset() { _featureCursor = _resetCursor(_t); } In this way, enumerators can be implemented for completely different scenarios and used on the client side in a completely identical manner following the scheme above. Cursors, selection sets and so on thus merge into a single concept, and the reusability of code increases immensely. On top of that, an IEnumerable can be mocked very easily in automated unit tests - a big step towards higher software quality. [4] Conclusion Nevertheless, caution is advised when using these constructs in performance-relevant queries. Because a state machine is managed in the background, a fair amount of overhead arises whose processing costs additional time - roughly 20 to 100 percent. Moreover, working without recycling can quickly become a performance gap. However, declarative LINQ code is far more elegant, less error-prone and easier to maintain than manually iterating, comparing and building a result list. In my experience the code size shrinks by 75 to 90 percent on average! For that I am happy to wait a few milliseconds longer. As so often, maintainability has to be weighed against performance - and for me maintainability is increasingly gaining priority. Usually it is not the code but the user that is the bottleneck in the process anyway. Demo source code support.esri.de   [1] Wikipedia: LINQ http://de.wikipedia.org/wiki/LINQ [2] Wikipedia: Zustandsmaschine http://de.wikipedia.org/wiki/Endlicher_Automat [3] Charlie Calvert's Blog: LINQ and Deferred Execution http://blogs.msdn.com/b/charlie/archive/2007/12/09/deferred-execution.aspx [4] Clean Code Developer - gelber Grad/Automatisierte Unit Tests http://www.clean-code-developer.de/Gelber-Grad.ashx#Automatisierte_Unit_Tests_8
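    To make the recycling pitfall above concrete, here is a minimal hedged sketch (my own illustration, not from the original article): it contrasts a safe non-recycling enumeration with the aliasing effect of a recycling cursor. The filter setup, the SHAPE_Area threshold and the featureClass variable are assumptions for the example.

        // Illustrative sketch only - assumes the GetFeatures() extension method from
        // the article and an open IFeatureClass instance named featureClass.
        // Usings: System.Collections.Generic, System.Diagnostics, System.Linq,
        // ESRI.ArcGIS.Geodatabase.
        IQueryFilter filter = new QueryFilterClass();   // standard ArcObjects coclass
        filter.WhereClause = "SHAPE_Area > 3000";       // hypothetical field/threshold

        // Safe: without recycling, each foreach re-instantiates the state machine,
        // re-opens the cursor, and yields distinct IFeature instances.
        IEnumerable<IFeature> largeFeatures =
            featureClass.GetFeatures(filter, RecyclingPolicy.DoNotRecycle);
        foreach (IFeature feature in largeFeatures)
            Debug.WriteLine(feature.OID);

        // Risky: with recycling, every yielded IFeature is the same re-filled buffer,
        // so materializing the sequence leaves a list of identical entries.
        List<IFeature> snapshot = featureClass
            .GetFeatures(filter, RecyclingPolicy.Recycle)
            .ToList();                                  // all elements now alias the last feature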


  • Oracle Social Network Developer Challenge Winners

    - by kellsey.ruppel
    Originally posted by Jake Kuramoto on The Apps Lab blog. Now that OpenWorld 2012 has wrapped, I have time to tell you all about what happened. Maybe you recall that Noel (@noelportugal) and I were running a modified hackathon during the show, the Oracle Social Network Developer Challenge. Without further ado, congratulations to Dimitri Gielis (@dgielis) and Martin Giffy D’Souza (@martindsouza) on their winning entry, an integration between Oracle APEX and Oracle Social Network that integrates feedback and bug submission with Oracle Social Network Conversations, allowing developers, end-users and project leaders to view and discuss the feedback on their APEX applications from within Oracle Social Network. Update: Bob Rhubart of OTN (@brhubart) interviewed Dimitri and Martin right after their big win. Money quote from Dimitri when asked what he’d buy with the $500 in Amazon gift cards, “Oracle Social Network.” Nice one. In their own words: In the developers perspective it’s important to get feedback soon, so after a first iteration and end-users start to test, they can give feedback of the application. Previously it stopped there, and it was up to the developer to communicate further with email, phone etc. With OSN every feedback and communication gets logged and other people can see the discussion immediately as well. For the end users perspective he can now communicate in a more efficient way to not only the developers, but also between themselves. Maybe many end-users (in different locations) would like to change some behaviour, by using OSN they can see the entry somebody put in with a screenshot and they can just start to chat about it. Some key technical end users can have lighten the tasks of the development team by looking at the feedback first and start to communicate with their peers. For the project manager he has now the ability to really see what communication has taken place in certain areas and can make decisions on that. Later, if things come up again, he can always go back in OSN and see what was said at that moment in time. Integrating OSN in the APEX applications enhances the user experience, makes the lives of the developers easier and gives a better overview to project managers. Incidentally, you may already know Dimitri and Martin, since both are Oracle Ace Directors. I ran into Martin at the Ace Director briefings Friday before the conference started, and at that point, he wasn’t sure he’d have time to enter the Challenge. After some coaxing, he and Dimitri agreed to give it a go and banged out their entry on Tuesday night, or more accurately, very early Wednesday morning, the day of the Challenge judging. I think they said it took them about four hours of hardcore coding to get it done, very much like a traditional hackathon, which is essentially a code sprint from idea to finished product. Here are some screenshots of the workflow they built. I love this idea, i.e. closing the loop between web developers and users, a very common pain point, and so did our judges. 
Speaking of, special thanks to our panel of three judges: Reggie Bradford (@reggiebradford), serial entrepreneur, founder of Vitrue and SVP of Cloud Product Development at Oracle Robert Hipps (@roberthipps), VP of Development for Oracle Social Network and my former boss Roland Smart (@rsmartx), VP of Social Marketing and the brains behind the Oracle Social Developer Community Finally, thanks to everyone who made this possible, including: The three other teams from HarQen (@harqen), TEAM Informatics (@teaminformatics) and Fishbowl Solutions (@fishbowle20) featuring Friend of the ‘Lab John Sim (@jrsim_uix), who finished and presented entries. I’ll be posting the details of their work this week. The one guy who finished an entry, but couldn’t make the judging, Bex Huff (@bex). Bex rallied from a hospitalization due to an allergic reaction during the show; he’s fine, don’t worry. I’ll post details of his work next week, too. The 40-plus people who registered to compete in the Challenge. Noel for all his hard work, sample code, and flying monkey target, more on that to come. The Oracle Social Network development team for supporting this event. Everyone in legal and the beta program office for their help. And finally, the Oracle Technology Network (@oracletechnet) for hosting the event and providing countless hours of operational and moral support. Sorry if I’ve missed some people, since this was a huge team effort. This event was a big success, and we plan to do similar events in the future. Stay tuned to this channel for more. 


  • Some non-generic collections

    - by Simon Cooper
    Although the collections classes introduced in .NET 2, 3.5 and 4 cover most scenarios, there are still some .NET 1 collections that don't have generic counterparts. In this post, I'll be examining what they do, why you might use them, and some things you'll need to bear in mind when doing so. BitArray System.Collections.BitArray is conceptually the same as a List<bool>, but whereas List<bool> stores each boolean in a single byte (as that's what the backing bool[] does), BitArray uses a single bit to store each value, and uses various bitmasks to access each bit individually. This means that BitArray is eight times smaller than a List<bool>. Furthermore, BitArray has some useful functions for bitmasks, like And, Xor and Not, and it's not limited to 32 or 64 bits; a BitArray can hold as many bits as you need. However, it's not all roses and kittens. There are some fundamental limitations you have to bear in mind when using BitArray: It's a non-generic collection. The enumerator returns object (a boxed boolean), rather than an unboxed bool. This means that if you do this: foreach (bool b in bitArray) { ... } Every single boolean value will be boxed, then unboxed. And if you do this: foreach (var b in bitArray) { ... } you'll have to manually unbox b on every iteration, as it'll come out of the enumerator an object. Instead, you should manually iterate over the collection using a for loop: for (int i=0; i<bitArray.Length; i++) { bool b = bitArray[i]; ... } Following on from that, if you want to use BitArray in the context of an IEnumerable<bool>, ICollection<bool> or IList<bool>, you'll need to write a wrapper class, or use the Enumerable.Cast<bool> extension method (although Cast would box and unbox every value you get out of it). There is no Add or Remove method. You specify the number of bits you need in the constructor, and that's what you get. You can change the length yourself using the Length property setter though. It doesn't implement IList. Although not really important if you're writing a generic wrapper around it, it is something to bear in mind if you're using it with pre-generic code. However, if you use BitArray carefully, it can provide significant gains over a List<bool> for functionality and efficiency of space. OrderedDictionary System.Collections.Specialized.OrderedDictionary does exactly what you would expect - it's an IDictionary that maintains items in the order they are added. It does this by storing key/value pairs in a Hashtable (to get O(1) key lookup) and an ArrayList (to maintain the order). You can access values by key or index, and insert or remove items at a particular index. The enumerator returns items in index order. However, the Keys and Values properties return ICollection, not IList, as you might expect; CopyTo doesn't maintain the same ordering, as it copies from the backing Hashtable, not ArrayList; and any operations that insert or remove items from the middle of the collection are O(n), just like a normal list. In short; don't use this class. If you need some sort of ordered dictionary, it would be better to write your own generic dictionary combining a Dictionary<TKey, TValue> and List<KeyValuePair<TKey, TValue>> or List<TKey> for your specific situation. ListDictionary and HybridDictionary To look at why you might want to use ListDictionary or HybridDictionary, we need to examine the performance of these dictionaries compared to Hashtable and Dictionary<object, object>. 
For this test, I added n items to each collection, then randomly accessed n/2 items: So, what's going on here? Well, ListDictionary is implemented as a linked list of key/value pairs; all operations on the dictionary require an O(n) search through the list. However, for small n, the constant factor that big-o notation doesn't measure is much lower than the hashing overhead of Hashtable or Dictionary. HybridDictionary combines a Hashtable and ListDictionary; for small n, it uses a backing ListDictionary, but switches to a Hashtable when it gets to 9 items (you can see the point it switches from a ListDictionary to Hashtable in the graph). Apart from that, it's got very similar performance to Hashtable. So why would you want to use either of these? In short, you wouldn't. Any gain in performance by using ListDictionary over Dictionary<TKey, TValue> would be offset by the generic dictionary not having to cast or box the items you store, something the graphs above don't measure. Only if the performance of the dictionary is vital, the dictionary will hold less than 30 items, and you don't need type safety, would you use ListDictionary over the generic Dictionary. And even then, there's probably more useful performance gains you can make elsewhere.
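    As a follow-up to the BitArray caveats above, here is a minimal hedged sketch (my own, not from the original post) of the kind of wrapper class mentioned there: it exposes a BitArray as an IList<bool> so it can flow through generic code without the boxing of the non-generic enumerator. Add/Insert/Remove throw, mirroring BitArray's fixed-length semantics.

        using System;
        using System.Collections;
        using System.Collections.Generic;

        // Minimal read/write IList<bool> view over a BitArray (illustrative sketch).
        public sealed class BitArrayList : IList<bool>
        {
            private readonly BitArray _bits;
            public BitArrayList(BitArray bits) { _bits = bits; }

            public bool this[int index]
            {
                get { return _bits[index]; }   // no boxing, unlike BitArray's enumerator
                set { _bits[index] = value; }
            }

            public int Count { get { return _bits.Length; } }
            public bool IsReadOnly { get { return false; } }

            public IEnumerator<bool> GetEnumerator()
            {
                for (int i = 0; i < _bits.Length; i++) yield return _bits[i];
            }
            IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }

            public int IndexOf(bool item)
            {
                for (int i = 0; i < _bits.Length; i++) if (_bits[i] == item) return i;
                return -1;
            }
            public bool Contains(bool item) { return IndexOf(item) >= 0; }
            public void CopyTo(bool[] array, int arrayIndex)
            {
                for (int i = 0; i < _bits.Length; i++) array[arrayIndex + i] = _bits[i];
            }

            // Fixed-size collection: mutations that change the length are unsupported.
            public void Add(bool item) { throw new NotSupportedException(); }
            public void Insert(int index, bool item) { throw new NotSupportedException(); }
            public bool Remove(bool item) { throw new NotSupportedException(); }
            public void RemoveAt(int index) { throw new NotSupportedException(); }
            public void Clear() { throw new NotSupportedException(); }
        }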


  • C#/.NET Little Wonders – Cross Calling Constructors

    - by James Michael Hare
    Just a small post today, it’s the final iteration before our release and things are crazy here!  This is another little tidbit that I love using, and it should be fairly common knowledge, yet I’ve noticed many times that less experienced developers tend to have redundant constructor code when they overload their constructors. The Problem – repetitive code is less maintainable Let’s say you were designing a messaging system, and so you want to create a class to represent the properties for a Receiver, so perhaps you design a ReceiverProperties class to represent this collection of properties. Perhaps you decide to make ReceiverProperties immutable, and so you have several constructors that you can use for alternative construction: 1: // Constructs a set of receiver properties. 2: public ReceiverProperties(ReceiverType receiverType, string source, bool isDurable, bool isBuffered) 3: { 4: ReceiverType = receiverType; 5: Source = source; 6: IsDurable = isDurable; 7: IsBuffered = isBuffered; 8: } 9: 10: // Constructs a set of receiver properties with buffering on by default. 11: public ReceiverProperties(ReceiverType receiverType, string source, bool isDurable) 12: { 13: ReceiverType = receiverType; 14: Source = source; 15: IsDurable = isDurable; 16: IsBuffered = true; 17: } 18:  19: // Constructs a set of receiver properties with buffering on and durability off. 20: public ReceiverProperties(ReceiverType receiverType, string source) 21: { 22: ReceiverType = receiverType; 23: Source = source; 24: IsDurable = false; 25: IsBuffered = true; 26: } Note: keep in mind this is just a simple example for illustration, and in some cases default parameters can also help clean this up, but they have issues of their own. While strictly speaking there is nothing wrong with this code, logically it suffers from maintainability flaws.  Consider what happens if you add a new property to the class: you have to remember to guarantee that it is set appropriately in every constructor call. This can cause subtle bugs and becomes even uglier when the constructors do more complex logic, error handling, or there are numerous potential overloads (especially if you can’t easily see them all on one screen’s height). The Solution – cross-calling constructors I’d wager nearly everyone knows how to call your base class’s constructor, but you can also cross-call to one of the constructors in the same class by using the this keyword in the same way you use base to call a base constructor. 1: // Constructs a set of receiver properties. 2: public ReceiverProperties(ReceiverType receiverType, string source, bool isDurable, bool isBuffered) 3: { 4: ReceiverType = receiverType; 5: Source = source; 6: IsDurable = isDurable; 7: IsBuffered = isBuffered; 8: } 9: 10: // Constructs a set of receiver properties with buffering on by default. 11: public ReceiverProperties(ReceiverType receiverType, string source, bool isDurable) 12: : this(receiverType, source, isDurable, true) 13: { 14: } 15:  16: // Constructs a set of receiver properties with buffering on and durability off. 17: public ReceiverProperties(ReceiverType receiverType, string source) 18: : this(receiverType, source, false, true) 19: { 20: } Notice there is much less code.  In addition, the code you have has no repetitive logic.  You can define the main constructor that takes all arguments, and the remaining constructors with defaults simply cross-call the main constructor, passing in the defaults. 
    Yes, in some cases default parameters can ease some of this for you, but default parameters only work for compile-time constants (null, string and number literals).  For example, if you were creating a TradingDataAdapter that relied on an implementation of ITradingDao, which is the data access object to retrieve records from the database, you might want two constructors: one that takes an ITradingDao reference, and a default constructor which constructs a specific ITradingDao for ease of use: 1: public TradingDataAdapter(ITradingDao dao) 2: { 3: _tradingDao = dao; 4:  5: // other constructor logic 6: } 7:  8: public TradingDataAdapter() 9: { 10: _tradingDao = new SqlTradingDao(); 11:  12: // same constructor logic as above 13: }   As you can see, this isn’t something we can solve with a default parameter, but we could with cross-calling constructors: 1: public TradingDataAdapter(ITradingDao dao) 2: { 3: _tradingDao = dao; 4:  5: // other constructor logic 6: } 7:  8: public TradingDataAdapter() 9: : this(new SqlTradingDao()) 10: { 11: }   So in cases like this, where you have constructors with non compile-time constant defaults, default parameters can’t help you and cross-calling constructors is one of your best options. Summary When you have just one constructor doing the job of initializing the class, you can consolidate all your logic and error-handling in one place, thus ensuring that your behavior will be consistent across the constructor calls. This makes the code more maintainable and even easier to read.  There will be some cases where cross-calling constructors may be sub-optimal or not possible (if, for example, the overloaded constructors take completely different types and are not just “defaulting” behaviors). You can also use default parameters, of course, but default parameter behavior in a class hierarchy can be problematic (default values are not inherited and in fact can differ) so sometimes multiple constructors are actually preferable. Regardless of why you may need to have multiple constructors, consider cross-calling where you can to reduce redundant logic and clean up the code.   Technorati Tags: C#,.NET,Little Wonders
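    One aspect worth a quick illustration: because every overload funnels into the main constructor, validation written once there guards all of them. A hedged sketch reusing the hypothetical ReceiverProperties example from above:

        // Cross-called constructor as the single funnel: the argument checks run
        // regardless of which overload the caller picks (illustrative sketch).
        public ReceiverProperties(ReceiverType receiverType, string source, bool isDurable, bool isBuffered)
        {
            if (source == null) throw new ArgumentNullException("source");

            ReceiverType = receiverType;
            Source = source;
            IsDurable = isDurable;
            IsBuffered = isBuffered;
        }

        // Overload with defaults: no duplicated assignments, no duplicated checks.
        public ReceiverProperties(ReceiverType receiverType, string source)
            : this(receiverType, source, false, true)
        {
        }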


  • Adventures in Lab Management Configuration: Part 2 of 3

    - by Enrique Lima
    The first post was the high level overview. Now it is time for the details on what was done to the existing CMMI Project based on CMMI v4.2. The first step was to go into Visual Studio, then to the Team Project Collection Settings and from there to the Process Template Manager.  Once there, it was a matter of selecting the appropriate template (MSF for CMMI Process Improvement v5.0) and downloading it to a location I could reference later (for example C:\Templates). Then on to using the steps from the guidance post. Since I was using an x64 deployment, I will make reference to the path as <toolpath>; however, the actual path to reference in a 64-bit environment is “C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE”. As I mentioned in the previous post, make sure to first perform a backup of the Configuration, Collection and Warehouse DBs.  If you did not apply any changes to the names and such, then you will find those as tfs_Configuration, tfs_DefaultCollection and tfs_Warehouse. Now, the work needed with the witadmin tool. That includes the uploading of the structures that differ from v4.2 to v5.0. There is likely going to be an issue with the naming of some fields. For example, TFS 2010 likes something along the lines of “Area ID”, whereas TFS 2008 would have had it as “AreaID”.  So, this will need to be corrected.  Some posts will have you go through this after the errors pop up; I would recommend doing this process prior to executing the importwitd process.  witadmin listfields /collection:<path to collection> > c:\ListFields.txt Review the following fields: for AreaID, review the Name property, and if it states “AreaID” you will need to rename the Name field to reflect “Area ID”. ExternalLinkCount, RelatedLinkCount, HyperLinkCount, AttachedFileCount and IterationID would be the other fields to check. To correct the issue, execute the following: witadmin changefield /collection:<path to collection> /n:"System.ExternalLinkCount" /name:"External Link Count" Repeat for Area ID, Related Link Count, Hyperlink Count, Attached File Count and Iteration ID.  Once this is done, proceed with the commands below. witadmin importwitd /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\WorkItem Tracking\TypeDefinitions\TestCase.xml" witadmin importwitd /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\WorkItem Tracking\TypeDefinitions\SharedStep.xml" witadmin importcategories /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\WorkItem Tracking\categories.xml" Modifications to the Bug Definition: the first step is to export the existing definition. witadmin exportwitd /collection:<path to collection> /p:<project> /n:bug /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyBug.xml" Make modifications to the recently exported MyBug.xml file.  
    Details for the modification are here: http://msdn.microsoft.com/en-us/library/ff452591.aspx#ModifyTask Once the changes are done, proceed with the import command: witadmin importwitd /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyBug.xml" Repeat the process for the Scenario or Requirement Type Definition: witadmin exportwitd /collection:<path to collection> /p:<project> /n:requirement /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyRequirement.xml" Make modifications to the recently exported MyRequirement.xml file.  Details for the modification are here: http://msdn.microsoft.com/en-us/library/ff452591.aspx#ModifyTask Once the changes are done, proceed with the import command: witadmin importwitd /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyRequirement.xml" Provide the Bug Field Mapping definition, after creating the file as specified here: http://msdn.microsoft.com/en-us/library/ff452591.aspx#TCMBugFieldMapping tcm bugfieldmapping /import /mappingfile:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\bugfieldmappings.xml" /collection:<path to collection> /teamproject:<project name>


  • SSIS Reporting Pack v0.4 – Execution Report updated

    - by jamiet
    SSIS Reporting Pack is a suite of reports that I maintain at http://ssisreportingpack.codeplex.com/ that provide visualisation over the SSIS Catalog in SQL Server 2012 and attempt to add value over the reports that ship in the box. Work on the reports has stalled (my last SSIS Reporting Pack blog post was on 4th September 2011) as I’ve had rather more important things going on in my life of late; however, I have recently checked in a fix that couldn’t really be delayed. I discovered a problem with the Execution report that was causing the report to effectively hang; it was caused by this bit of SQL hidden away in the report definition: [generated_executables] AS ( SELECT [new_executable].[execution_path],[new_executable].[parent_execution_path] FROM ( SELECT [execution_path] = SUBSTRING([loop_iteration].[execution_path] ,1, [loop_iteration].length_exec_path - [loop_iteration].[char_index_close_square] + 1) , [parent_execution_path] = SUBSTRING([loop_iteration].[execution_path] ,1, [loop_iteration].length_exec_path - [loop_iteration].[char_index_open_square]) FROM ( SELECT [execution_path] , [char_index_open_square] = CHARINDEX('[',REVERSE([execution_path]),1) , [char_index_close_square] = CHARINDEX(']',REVERSE([execution_path]),1) , [length_exec_path] = LEN([execution_path]) FROM [exec_stats] es WHERE execution_path LIKE '%\[%]%' ESCAPE '\' ) AS [loop_iteration] ) AS [new_executable] GROUP BY [new_executable].[execution_path],[new_executable].[parent_execution_path]) It was there because SSIS does not currently treat a loop iteration as an executable, yet I figured there was still value in being able to view it as such – this SQL essentially “invents” new executables for those loop iterations; it’s what enabled the following visualisation: where each of the three iterations of a For Each Loop called “FEL Loop over top performing regions” appear in the report. Unfortunately, as I alluded, this could under certain circumstances (most likely when there were many loop iterations) cause the report to hang as it waited for the results to be constructed and returned. The change that I have made eradicates this generation of “fake” executables and thus produces this visualisation instead: Notice that the three “children” of the For Each Loop are no longer the three iterations but actually the task (“EPT Call Data Export Package”) contained within that For Each Loop. The problem here is of course that there is no longer a visual distinction between those three iterations; I have instead made the full execution path viewable via a tooltip. If you preferred the “old” way of presenting this information and are happy to put up with the performance degradation then I have kept the old version of the report hanging around in the reporting pack as “execution loop with iterations”; however, none of the other reports link to it, so you will have to browse to it manually if you want to use it. Please let me know if you ARE using it – I would be very interested to hear about your experiences. The last change to make you aware of in the execution report is that by default I no longer show OnPreValidate or OnPostValidate messages, as I consider them superfluous and they only serve to clutter up the results. If you want to put them back, well, it’s open source, so go right ahead! 
    The latest release of SSIS Reporting Pack that contains all of these changes is v0.4 and can be downloaded from http://ssisreportingpack.codeplex.com/releases/view/88178   Feedback on all of the above changes would be very much appreciated. @Jamiet

  • How to Inspect Javascript Object

    - by Madhan ayyasamy
    You can inspect any JavaScript object and list its members indented and ordered by levels. It shows you the type and property name. If an object property can't be accessed, an error message will be shown. Here is the snippet for inspecting a JavaScript object: function inspect(obj, maxLevels, level){  var str = '', type, msg;    // Start Input Validations    // Don't touch, we start iterating at level zero    if(level == null)  level = 0;    // At least you want to show the first level    if(maxLevels == null) maxLevels = 1;    if(maxLevels < 1)             return '<font color="red">Error: Levels number must be > 0</font>';    // We start with a non null object    if(obj == null)    return '<font color="red">Error: Object <b>NULL</b></font>';    // End Input Validations    // Each Iteration must be indented    str += '<ul>';    // Start iterations for all objects in obj    for(property in obj)    {      try      {          // Show "property" and "type property"          type =  typeof(obj[property]);          str += '<li>(' + type + ') ' + property +                  ( (obj[property]==null)?(': <b>null</b>'):('')) + '</li>';          // We keep iterating if this property is an Object, non null          // and we are inside the required number of levels          if((type == 'object') && (obj[property] != null) && (level+1 < maxLevels))          str += inspect(obj[property], maxLevels, level+1);      }      catch(err)      {        // Is there some properties in obj we can't access? Print it red.        if(typeof(err) == 'string') msg = err;        else if(err.message)        msg = err.message;        else if(err.description)    msg = err.description;        else                        msg = 'Unknown';        str += '<li><font color="red">(Error) ' + property + ': ' + msg +'</font></li>';      }    }      // Close indent      str += '</ul>';    return str;} Method call: inspect(obj [, maxLevels [, level]]) Input vars: * obj: object to inspect * maxLevels: optional; number of levels to inspect inside the object (default maxLevels=1) * level: RESERVED for internal use of the function. Return value: an HTML-formatted string containing all values of the inspected object obj.


  • LINQ and ArcObjects

    - by Marko Apfel
    Motivation LINQ (language integrated query) has been a component of the Microsoft .NET Framework since version 3.5. It allows SQL-like queries against various data sources such as SQL, XML etc. Like SQL, LINQ provides a declarative notation for problem solving – i.e. you don’t need to describe in detail how a task should be solved; you describe only what should be solved. This frees the developer from error-prone iterator constructs. Ideally, of course, one would be able to access features this way. Then this construct would be conceivable: var largeFeatures = from feature in features where (feature.GetValue("SHAPE_Area").ToDouble() > 3000) select feature; or its equivalent as a lambda expression: var largeFeatures = features.Where(feature => (feature.GetValue("SHAPE_Area").ToDouble() > 3000)); This requires an appropriate provider, which manages the corresponding iterator logic. This is easier than you might think at first sight - you only have to deliver the desired entities as IEnumerable<IFeature>. LINQ automatically establishes a state machine in the background, whose execution is deferred (deferred execution) - i.e. instantiation and processing only take place when you really request entities (foreach, Count(), ToList(), ..), although the assignment was already made at a completely different place. Especially when iterating multiple times through the entities, during your first debugging sessions you rub your eyes when the execution pointer magically jumps back into the iterator logic. Realization A very concise logic for constructing IEnumerable<IFeature> can be achieved by running through an IFeatureCursor. You return each feature via yield. For easier usage I have put the logic into an extension method GetFeatures() for IFeatureClass: public static IEnumerable<IFeature> GetFeatures(this IFeatureClass featureClass, IQueryFilter queryFilter, RecyclingPolicy policy) { IFeatureCursor featureCursor = featureClass.Search(queryFilter, RecyclingPolicy.Recycle == policy); IFeature feature; while (null != (feature = featureCursor.NextFeature())) { yield return feature; } //this is skipped in unit tests with cursor-mock if (Marshal.IsComObject(featureCursor)) { Marshal.ReleaseComObject(featureCursor); } } So you can now easily generate the IEnumerable<IFeature>: IEnumerable<IFeature> features = _featureClass.GetFeatures(RecyclingPolicy.DoNotRecycle); You have to be careful with the recycling cursor. After a deferred execution it is not a good idea to re-iterate over the features in the same context. In this case only the content of the last (recycled) feature is provided and all the features in the second pass are the same. Therefore, this expression would be critical: largeFeatures.ToList(). ForEach(feature => Debug.WriteLine(feature.OID)); because ToList() iterates once through the list and so the cursor has already been moved through the features. The extension method ForEach() then always delivers the same feature. In such situations, you must not use a recycling cursor. Repeated executions of ForEach() are not a problem, because each time the state machine is re-instantiated and thus the cursor runs again - that's the magic already mentioned above. Perspective Now you can also go one step further and realize your own implementation of the interface IEnumerable<IFeature>. This requires only that the method and the property to access the enumerator be programmed. In the enumerator itself, the Reset() method re-runs the search. 
    This can be achieved with an appropriate delegate in the constructor: new FeatureEnumerator<IFeatureClass>(_featureClass, featureClass => featureClass.Search(_filter, isRecyclingCursor)); which is called in Reset(): public void Reset() { _featureCursor = _resetCursor(_t); } In this manner, enumerators for completely different scenarios can be implemented and used on the client side in a completely identical way, as described above. Thus cursors, selection sets etc. merge into a single concept and the reusability of code increases immensely. On top of that, an IEnumerable can be mocked very easily in automated unit tests - a major step towards better software quality. Conclusion Nevertheless, caution should be exercised with these constructs in performance-relevant queries. Because a state machine is managed in the background, a lot of overhead is created, and its processing costs additional time - about 20 to 100 percent. In addition, working without a recycling cursor can quickly become a performance gap. However, declarative LINQ code is much more elegant, less error-prone and easier to maintain than manually iterating, comparing and building a list of results. In my experience the code size is reduced by an average of 75 to 90 percent! So I am happy to wait a few milliseconds longer. As so often, maintainability has to be balanced against performance - and for me maintainability is increasingly gaining priority. Usually it is not the code but the user that is the bottleneck in the process anyway. Demo source code The source code for this prototype, with several unit tests, can be downloaded here: https://github.com/esride-apf/Linq2ArcObjects.
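    Since the post points out how easily an IEnumerable<IFeature> can be mocked in automated unit tests, here is a small hedged sketch of that idea (my own illustration; the article names no mocking library, so Moq and the MSTest assertion are assumptions):

        // Because Marshal.IsComObject() is false for a mock proxy, the COM release in
        // GetFeatures() is skipped - exactly what the inline comment in the post notes.
        // Usings: System.Collections.Generic, System.Linq, ESRI.ArcGIS.Geodatabase, Moq.
        var cursor = new Mock<IFeatureCursor>();
        cursor.SetupSequence(c => c.NextFeature())
              .Returns(new Mock<IFeature>().Object)   // first feature
              .Returns(new Mock<IFeature>().Object)   // second feature
              .Returns((IFeature)null);               // signals the end of the cursor

        var featureClass = new Mock<IFeatureClass>();
        featureClass.Setup(fc => fc.Search(It.IsAny<IQueryFilter>(), It.IsAny<bool>()))
                    .Returns(cursor.Object);

        IEnumerable<IFeature> features =
            featureClass.Object.GetFeatures(null, RecyclingPolicy.DoNotRecycle);

        Assert.AreEqual(2, features.Count());         // both mocked features are yielded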


  • Silverlight Confirm Dialog to Pause Thread

    - by AlishahNovin
    I'm trying to do a confirmation dialog using Silverlight's ChildWindow object. Ideally, I'd like it to work like MessageBox.Show(), where the entire application halts until an input is received from the user. For example:

        for (int i = 0; i < 5; i++)
        {
            if (i == 3 && MessageBox.Show("Exit early?", "Iterator", MessageBoxButton.OKCancel) == MessageBoxResult.OK)
            {
                break;
            }
        }

    would stop the iteration at 3 if the user hits OK. However, if I were to do something along the lines of:

        ChildWindow confirm = new ChildWindow();
        confirm.Title = "Iterator";
        confirm.HasCloseButton = false;

        Grid container = new Grid();

        Button closeBtn = new Button();
        closeBtn.Content = "Exit early";
        closeBtn.Click += delegate { confirm.DialogResult = true; confirm.Close(); };
        container.Children.Add(closeBtn);

        Button continueBtn = new Button();
        continueBtn.Content = "Continue!";
        continueBtn.Click += delegate { confirm.DialogResult = false; confirm.Close(); };
        container.Children.Add(continueBtn);

        confirm.Content = container;

        for (int i = 0; i < 5; i++)
        {
            if (i == 3)
            {
                confirm.Show();
                if (confirm.DialogResult.HasValue && (bool)confirm.DialogResult)
                {
                    break;
                }
            }
        }

    this clearly would not work, as the thread isn't halted: confirm.DialogResult.HasValue would be false, and the loop would continue past 3. I'm just wondering how I could go about this properly. Silverlight is single-threaded, so I can't just put the thread to sleep and then wake it up when I'm ready, so I'm wondering if there's anything else people could recommend. I've considered reversing the logic – i.e., passing the actions I want to occur to the Yes/No events – but in my specific case this wouldn't quite work. Thanks in advance!
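    For illustration, here is a sketch of the "reversed logic" approach mentioned at the end of the question: split the loop at the decision point and resume it from the dialog's Closed event instead of blocking. It reuses the confirm window built above; the split-and-resume structure is my own suggestion, not the poster's code.

        // Sketch: run the loop up to the decision point, then hand control back
        // to the UI thread and resume from the dialog's Closed event.
        private void RunLoop(int start)
        {
            for (int i = start; i < 5; i++)
            {
                if (i == 3)
                {
                    int resumeAt = i + 1;
                    EventHandler onClosed = null;
                    onClosed = (s, e) =>
                    {
                        confirm.Closed -= onClosed;          // detach so handlers don't pile up
                        if (confirm.DialogResult != true)    // "Continue!" was clicked
                        {
                            RunLoop(resumeAt);               // pick up where we left off
                        }
                    };
                    confirm.Closed += onClosed;
                    confirm.Show();
                    return;                                  // yield control to the UI thread
                }
                // ... loop body for the other iterations ...
            }
        }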

    Read the article

  • Why do many software projects fail today?

    - by TomTom
    As long as there have been software projects, the world has wondered why they fail so often. I would like to know if there is a list or something equivalent that shows how many software projects fail today. It would be nice if there were a comparison over the last 20-30 years. You can also add your top reason why a software project fails. Mine is "Requirements are poor or not even existing", which also includes "No (real) customer / user involved".

    EDIT: It is nearly impossible to clearly define the term "fail". Let's say that fail means: the project was more than 10% over budget and time. In my opinion 10% +/- is a good range for an offer / tender.

    EDIT: Until now (Feb 11) it seems that most posters agree that a failure of the project is basically a failure of the project management (whatever "fail" means). But it also comes out, in my opinion, that most developers are not happy with this situation. Perhaps because it is not the manager who gets penalized when a project is not successful, but the supposedly lazy, incompetent developer teams? When I read the posts I can also sense that there is a big gap between the developer side and the management side. The expectations (perhaps also the requirements) seem to be so different that a project cannot be successful in the end (over time / budget; users are not happy; not all first-priority features implemented; too many bugs because developers were forced to implement in too-short timeframes...). I'm asking myself: How can we improve it? Or do we even have the possibility to improve it? Everybody seems to be unsatisfied with the way it goes now. Can we close the gap between these two worlds? Should we (the developers) go on strike and fight for "high-quality requirements" and "realistic / iteration-based time schedules"?

    EDIT: Ralph Westphal and Stefan Lieser have founded a new "community" called Clean Code Developer. The aim of the group is to bring more professionalism into software engineering. Independently of whether a developer has a degree or tons of years of experience, anyone can be part of this movement. Clean Code Developers live principles like SOLID every day. A professional developer is the biggest reviewer of his own work, and he has an internal value system which helps him to improve and become better. Check it out at: Clean Code Developer

    EDIT: Our company is currently doing a thing called "Application Development and Maintenance Benchmarking". This is a service offered by IBM to get feedback from someone external on your software engineering process quality etc. When we get the results, I will tell you more about it.

    Read the article

  • Prevent recursive CTE visiting nodes multiple times

    - by bacar
    Consider the following simple DAG:

        1->2->3->4

    And a table, #bar, describing this (I'm using SQL Server 2005):

        parent_id  child_id
        1          2
        2          3
        3          4
        -- ... other edges, not connected to the subgraph above

    Now imagine that I have some other arbitrary criteria that select the first and last edges, i.e. 1-2 and 3-4. I want to use these to find the rest of my graph. I can write a recursive CTE as follows (I'm using terminology from MSDN):

        with foo(parent_id, child_id) as (
            -- anchor member that happens to select first and last edges:
            select parent_id, child_id from #bar where parent_id in (1, 3)
            union all
            -- recursive member:
            select #bar.* from #bar
            join foo on #bar.parent_id = foo.child_id
        )
        select parent_id, child_id from foo

    However, this results in edge 3-4 being selected twice:

        parent_id  child_id
        1          2
        3          4
        2          3
        3          4   -- 2nd appearance!

    How can I prevent the query from recursing into subgraphs that have already been described? I could achieve this if, in the "recursive member" part of the query, I could reference all the data retrieved by the recursive CTE so far (and supply a predicate in the recursive member excluding nodes already visited). However, I believe I can only access data returned by the last iteration of the recursive member. This will not scale well when there is a lot of such repetition. Is there a way of preventing this unnecessary additional recursion? Note that I could use "select distinct" in the last line of my statement to achieve the desired results, but this seems to be applied after all the (repeated) recursion is done, so I don't think it is an ideal solution.

    Edit – hainstech suggests stopping the recursion by adding a predicate to exclude recursing down paths that were explicitly in the starting set, i.e. recurse only where foo.child_id not in (1, 3). That works for the case above only because it is simple – all the repeated sections begin within the anchor set of nodes. It doesn't solve the general case where they may not. E.g., consider adding edges 1-4 and 4-5 to the above set: edge 4-5 will be captured twice, even with the suggested predicate. :(
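    The bookkeeping the poster wants (a predicate over everything visited so far) is hard to express inside a recursive CTE, but easy to see outside SQL. As a point of comparison only – this is a C# sketch of the same traversal with an explicit visited set, not a T-SQL answer:

        // Walk the edge list from the starting parents, never expanding a node twice,
        // so each edge is emitted exactly once even when multiple paths reach it.
        static List<Tuple<int, int>> Expand(List<Tuple<int, int>> edges, int[] startParents)
        {
            var result = new List<Tuple<int, int>>();
            var expanded = new HashSet<int>();            // nodes whose outgoing edges were emitted
            var frontier = new Queue<int>(startParents);
            while (frontier.Count > 0)
            {
                int parent = frontier.Dequeue();
                if (!expanded.Add(parent)) continue;      // Add returns false if already expanded
                foreach (var edge in edges.FindAll(e => e.Item1 == parent))
                {
                    result.Add(edge);
                    frontier.Enqueue(edge.Item2);
                }
            }
            return result;
        }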

    Read the article

  • Storing objects in the array

    - by Ockonal
    Hello, I want to save boost signals objects in a map (association: signal name → signal object). The signals' signatures differ, so the mapped type should be boost::any:

        map<string, any> mSignalAssociation;

    The question is how to store the objects without defining a type for each new signal signature:

        typedef boost::signals2::signal<void (int KeyCode)> sigKeyPressed;
        mSignalAssociation.insert(make_pair("KeyPressed", sigKeyPressed()));

        // This is what I need: passing the object without a type definition
        mSignalAssociation["KeyPressed"] = (typename boost::signals2::signal<void (int KeyCode)>());

        // One more try which won't work. And I don't want to use this:
        sigKeyPressed mKeyPressed;
        mSignalAssociation["KeyPressed"] = mKeyPressed;

    All these attempts produce the same error – the copy constructor required here is private, because boost::signals2::signal derives from boost::noncopyable:

        /usr/include/boost/noncopyable.hpp: In copy constructor 'boost::signals2::signal_base::signal_base(const boost::signals2::signal_base&)':
        /usr/include/boost/noncopyable.hpp:27:7: error: 'boost::noncopyable_::noncopyable::noncopyable(const boost::noncopyable_::noncopyable&)' is private
        /usr/include/boost/signals2/signal_base.hpp:22:5: error: within this context
        /usr/include/boost/signals2/detail/signal_template.hpp: In copy constructor 'boost::signals2::signal1<void, int, boost::signals2::optional_last_value<void>, int, std::less<int>, boost::function<void(int)>, boost::function<void(const boost::signals2::connection&, int)>, boost::signals2::mutex>::signal1(const boost::signals2::signal1<...>&)':
        /usr/include/boost/signals2/detail/signal_template.hpp:578:5: note: synthesized method 'boost::signals2::signal_base::signal_base(const boost::signals2::signal_base&)' first required here
        /usr/include/boost/signals2/preprocessed_signal.hpp: In copy constructor 'boost::signals2::signal<void(int)>::signal(const boost::signals2::signal<void(int)>&)':
        /usr/include/boost/signals2/preprocessed_signal.hpp:42:5: note: synthesized method 'boost::signals2::signal1<...>::signal1(const boost::signals2::signal1<...>&)' first required here

    Read the article

  • C# 4.0 'dynamic' and foreach statement

    - by ControlFlow
    Not long ago I discovered that the new dynamic keyword doesn't work well with C#'s foreach statement:

        using System;

        sealed class Foo
        {
            public struct FooEnumerator
            {
                int value;
                public bool MoveNext() { return true; }
                public int Current { get { return value++; } }
            }

            public FooEnumerator GetEnumerator()
            {
                return new FooEnumerator();
            }

            static void Main()
            {
                foreach (int x in new Foo())
                {
                    Console.WriteLine(x);
                    if (x >= 100) break;
                }

                foreach (int x in (dynamic)new Foo()) // :)
                {
                    Console.WriteLine(x);
                    if (x >= 100) break;
                }
            }
        }

    I expected that iterating over a dynamic variable would work exactly as if the type of the collection variable were known at compile time. Instead, I discovered that the second loop actually compiles as:

        foreach (object x in (IEnumerable) /* dynamic cast */ (object) new Foo()) { ... }

    and every access to the x variable results in a dynamic lookup/cast, so C# ignores that I specified the correct type of x in the foreach statement – that was a bit surprising for me... Also, the C# compiler completely ignores that a collection in a dynamically typed variable may implement IEnumerable<T>! The full foreach statement behavior is described in the C# 4.0 specification, article 8.8.4 The foreach statement. But... it's perfectly possible to implement the same behavior at runtime! One could add an extra CSharpBinderFlags.ForEachCast flag and correct the emitted code to look like:

        foreach (int x in (IEnumerable<int>) /* dynamic cast with the CSharpBinderFlags.ForEachCast flag */ (object) new Foo()) { ... }

    and add some extra logic to CSharpConvertBinder: wrap IEnumerable collections and IEnumerators into IEnumerable<T>/IEnumerator<T>, and wrap collections not implementing IEnumerable<T>/IEnumerator<T> so that they implement these interfaces. So today the foreach statement iterates over dynamic completely differently from iterating over a statically known collection variable and completely ignores the type information specified by the user. All of this results in different iteration behavior (IEnumerable<T>-implementing collections are iterated as if they only implemented IEnumerable) and a more than 150x slowdown when iterating over dynamic. A simple fix yields much better performance:

        foreach (int x in (IEnumerable<int>) dynamicVariable) { ... }

    But why should I have to write code like this? It's very nice to see that sometimes C# 4.0 dynamic works exactly the same as if the type were known at compile time, but it's very sad to see that dynamic works completely differently where it COULD work the same as statically typed code. So my question is: why does foreach over dynamic work differently from foreach over anything else?
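    The slowdown claim is easy to test yourself. Here is a small harness (my sketch, not the poster's measurement) comparing the three iteration paths over a List<int>, which really does implement IEnumerable<int>:

        // Compares: static typing, foreach over dynamic, and dynamic with an
        // explicit generic cast (the "simple fix" from the post).
        using System;
        using System.Collections.Generic;
        using System.Diagnostics;
        using System.Linq;

        class Program
        {
            static void Main()
            {
                List<int> list = Enumerable.Range(0, 1000000).ToList();
                dynamic dyn = list;

                Measure("static",         () => { foreach (int x in list) { } });
                Measure("dynamic",        () => { foreach (int x in dyn) { } });
                Measure("dynamic + cast", () => { foreach (int x in (IEnumerable<int>)dyn) { } });
            }

            static void Measure(string label, Action body)
            {
                var sw = Stopwatch.StartNew();
                body();
                Console.WriteLine("{0}: {1} ms", label, sw.ElapsedMilliseconds);
            }
        }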

    Read the article

  • Python - calculate multinomial probability density functions on large dataset?

    - by Seafoid
    Hi, I originally intended to use MATLAB to tackle this problem, but its inbuilt functions have limitations that do not suit my goal. The same limitation occurs in NumPy. I have two tab-delimited files. The first shows amino acid residue, frequency, and count for an in-house database of protein structures, i.e.:

        A 0.25 1
        S 0.25 1
        T 0.25 1
        P 0.25 1

    The second file consists of quadruplets of amino acids and the number of times they occur, i.e.:

        ASTP 1

    Note, there are 8,000 such quadruplets. Based on the background frequency of occurrence of each amino acid and the count of quadruplets, I aim to calculate the multinomial probability density function for each quadruplet and subsequently use it as the expected value in a maximum likelihood calculation. The multinomial distribution is as follows:

        f(x|n, p) = n!/(x1!*x2!*...*xk!) * ((p1^x1)*(p2^x2)*...*(pk^xk))

    where x is the number of each of k outcomes in n trials with fixed probabilities p. n is 4 in all cases in my calculation. I have created three functions to calculate this distribution:

        # functions for multinomial distribution
        def expected_quadruplets(x, y):
            expected = x * y
            return expected

        # calculates the probabilities of occurrence raised to the number of occurrences
        def prod_prob(p1, a, p2, b, p3, c, p4, d):
            prob_prod = (pow(p1, a)) * (pow(p2, b)) * (pow(p3, c)) * (pow(p4, d))
            return prob_prod

        # factorial() and multinomial_coefficient() work in tandem to calculate C,
        # the multinomial coefficient
        def factorial(n):
            if n <= 1:
                return 1
            return n * factorial(n-1)

        def multinomial_coefficient(a, b, c, d):
            n = 24.0  # i.e. 4!, since n is always 4 here
            multi_coeff = (n / (factorial(a) * factorial(b) * factorial(c) * factorial(d)))
            return multi_coeff

    The problem is how best to structure the data in order to tackle the calculation most efficiently, in a manner that I can read (you guys write some cryptic code :-)) and that will not create an overflow or runtime error. Currently my data is represented as nested lists:

        amino_acids = [['A', '0.25', '1'], ['S', '0.25', '1'], ['T', '0.25', '1'], ['P', '0.25', '1']]
        quadruplets = [['ASTP', '1']]

    I initially intended to call these functions within a nested for loop, but this resulted in runtime or overflow errors. I know that I can reset the recursion limit, but I would rather do this more elegantly. I had the following:

        for i in quadruplets:
            quad = i[0].split(' ')
            for j in amino_acids:
                for k in quadruplets:
                    for v in k:
                        if j[0] == v:
                            multinomial_coefficient(int(j[2]), int(j[2]), int(j[2]), int(j[2]))

    I haven't really gotten to how to incorporate the other functions yet. I think my current nested list arrangement is suboptimal. I wish to compare each letter within the string 'ASTP' with the first component of each sublist in amino_acids. Where a match exists, I wish to pass the appropriate numeric values to the functions using indices. Is there a better way? Can I append the appropriate numbers for each amino acid and quadruplet to a temporary data structure within a loop, pass this to the functions and clear it for the next iteration? Thanks, S :-)
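    As a sanity check of the formula with the sample data above (quadruplet 'ASTP', each residue occurring once, all background frequencies 0.25, n = 4):

        f = 4! / (1! * 1! * 1! * 1!) * (0.25^1 * 0.25^1 * 0.25^1 * 0.25^1)
          = 24 * 0.25^4
          = 24 * 0.00390625
          = 0.09375

    This also explains the hard-coded n = 24.0 in multinomial_coefficient: it is 4!, the numerator for the fixed n of 4.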

    Read the article

  • DynamicMethod for ConstructorInfo.Invoke, what do I need to consider?

    - by Lasse V. Karlsen
    My question is this: if I'm going to build a DynamicMethod object corresponding to a ConstructorInfo.Invoke call, what IL do I need to emit in order to cope with all (or most) types of arguments, given that I can guarantee the right type and number of arguments will be passed in before I make the call?

    Background: I am on my 3rd iteration of my IoC container, and currently doing some profiling to figure out if there are any areas where I can easily shave off large amounts of time. One thing I noticed is that when resolving to a concrete type, I ultimately end up with a constructor being called via ConstructorInfo.Invoke, passing in an array of arguments that I've worked out. What I noticed is that the Invoke method has quite a bit of overhead, and I'm wondering if most of this is just different implementations of the same checks I already do. For instance, due to the constructor-matching code I have – finding a matching constructor for the predefined parameter names, types, and values passed in – there is no way this particular Invoke call will end up with something it can't cope with: it gets the correct number of arguments, in the right order, of the right type, and with appropriate values. In a profiling session containing a million calls to my resolve method, and then one with the Invoke replaced by a DynamicMethod implementation that mimics it, the profiling timings were like this:

        ConstructorInfo.Invoke: 1973 ms
        DynamicMethod:            93 ms

    This accounts for around 20% of the total runtime of this profiling application. In other words, by replacing the ConstructorInfo.Invoke call with a DynamicMethod that does the same, I am able to shave off 20% of the runtime when dealing with basic factory-scoped services (i.e. all resolution calls end up with a constructor call). I think this is fairly substantial, and warrants a closer look at how much work it would be to build a stable DynamicMethod generator for constructors in this context. The dynamic method would take in an object array and return the constructed object, and I already know the ConstructorInfo object in question. Therefore, it looks like the dynamic method would be made up of the following IL:

        l001: ldarg.0      ; the object array containing the arguments
        l002: ldc.i4.0     ; the index of the first argument
        l003: ldelem.ref   ; get the value of the first argument
        l004: castclass T  ; cast to the right type of argument (only if not "Object")

        (repeat l001-l004 for all parameters, l004 only for non-Object types,
         varying the l002 constant from 0 and up for each index)

        l005: newobj ci    ; call the constructor
        l006: ret

    Is there anything else I need to consider? Note that I'm aware that creating dynamic methods will probably not be available when running the application in "reduced access mode" (sometimes the brain just won't give up those terms), but in that case I can easily detect that and just call the original constructor as before, with the overhead and all.
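    One consideration the IL above does miss (a fact about the CLR, not something from the post): for value-type parameters, ldelem.ref followed by castclass is not enough – the value has to be unboxed with unbox.any. Below is a sketch of the generator with that handled; the method shape (object[] in, object out) follows the post, the rest is my assumption:

        // usings assumed: System, System.Reflection, System.Reflection.Emit
        static Func<object[], object> CreateInvoker(ConstructorInfo ctor)
        {
            ParameterInfo[] parameters = ctor.GetParameters();
            var dm = new DynamicMethod(
                "ctor_" + ctor.DeclaringType.Name,
                typeof(object),
                new[] { typeof(object[]) },
                ctor.DeclaringType.Module,
                true);                                // skip visibility checks
            ILGenerator il = dm.GetILGenerator();
            for (int i = 0; i < parameters.Length; i++)
            {
                il.Emit(OpCodes.Ldarg_0);             // the object[] argument
                il.Emit(OpCodes.Ldc_I4, i);           // index of argument i
                il.Emit(OpCodes.Ldelem_Ref);          // load arguments[i]
                Type t = parameters[i].ParameterType;
                if (t.IsValueType)
                    il.Emit(OpCodes.Unbox_Any, t);    // value types must be unboxed, not cast
                else if (t != typeof(object))
                    il.Emit(OpCodes.Castclass, t);    // reference types: plain cast
            }
            il.Emit(OpCodes.Newobj, ctor);            // call the constructor
            il.Emit(OpCodes.Ret);                     // return the new instance as object
            return (Func<object[], object>)dm.CreateDelegate(typeof(Func<object[], object>));
        }

    This sketch assumes the constructed type is a reference type (true for typical IoC registrations); constructing a value type would additionally require boxing the result before ret.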

    Read the article

  • How to do role-based access control for a franchise business?

    - by FreshCode
    I'm building the 2nd iteration of a web-based CRM+CMS for a franchise service business in ASP.NET MVC 2. I need to control access to each franchise's services based on the roles a user is assigned for that franchise. Four examples:

    1. A receptionist should be able to book service jobs for her "Atlantic Seaboard" franchise, but not do any reporting.
    2. A technician should be able to alter service jobs, but not modify invoices.
    3. Managers should be able to apply discounts to invoices for jobs within their stores.
    4. An owner should be able to pull reports for any franchise he owns.

    Where should franchise-level access control fit between the Data, Services, and Web layers? If it belongs in my controllers, how should I best implement it?

    Partial schema:

        // Roles class
        int ID { get; set; }      // primary key for Role
        string Name { get; set; }

        // Franchises class
        short ID { get; set; }    // primary key for Franchise
        string Slug { get; set; } // unique key for URL access, e.g. /{franchise}/{job}
        string Name { get; set; }

        // UserRoles mapping
        short FranchiseID;        // related to Franchises table
        Guid UserID;              // related to Users table
        int RoleID;               // related to Roles table
        DateTime ValidFrom;
        DateTime ValidUntil;

    Background: I built the previous CRM in classic ASP and it runs the business well, but it's time for an upgrade to speed up workflow and leave less room for error. For the sake of proper testing and better separation between data and presentation, I decided to implement the repository pattern as seen in Rob Conery's MVC Storefront series.

    Controller implementation – access control with the [Authorize] attribute: if there were just one franchise involved, I could simply limit access to a controller action like so:

        [Authorize(Roles = "Receptionist, Technician, Manager, Owner")]
        public ActionResult CreateJob(Job job) { ... }

    And since franchises don't just pop up overnight, perhaps this is a strong case for the new Areas feature in ASP.NET MVC 2? Or would this lead to duplicate views?

    Controllers, URL routing & Areas: assuming Areas aren't used, what would be the best way to determine which franchise's data is being accessed? I thought of this:

        {franchise}/{controller}/{action}/{id}

    or is it better to determine a job's franchise in a Details(...) action and limit a user's action with [Authorize]:

        {job}/{id}/{action}/{subaction}
        {invoice}/{id}/{action}/{subaction}

    which makes more sense if any user could potentially have access to more than one franchise, without cluttering the URL with a {franchise} parameter. Any input is appreciated.
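    One way to keep the franchise check declarative in the controllers (a sketch of an approach, not an established answer – the GetRoles lookup is a hypothetical stand-in for a real service): derive from AuthorizeAttribute, pull the {franchise} slug from the route, and intersect the user's per-franchise roles with the allowed ones.

        // usings assumed: System, System.Collections.Generic, System.Linq, System.Web.Mvc
        public class FranchiseAuthorizeAttribute : AuthorizeAttribute
        {
            // hypothetical hook resolving a user's roles for one franchise slug
            public static Func<string, string, IEnumerable<string>> GetRoles;

            public override void OnAuthorization(AuthorizationContext filterContext)
            {
                var user = filterContext.HttpContext.User;
                var slug = filterContext.RouteData.Values["franchise"] as string;
                if (user == null || !user.Identity.IsAuthenticated || slug == null)
                {
                    filterContext.Result = new HttpUnauthorizedResult();
                    return;
                }

                // roles the user holds for this particular franchise
                var franchiseRoles = GetRoles(user.Identity.Name, slug);
                var allowed = Roles.Split(',').Select(r => r.Trim());
                if (!allowed.Intersect(franchiseRoles).Any())
                {
                    filterContext.Result = new HttpUnauthorizedResult();
                }
            }
        }

    Usage would then mirror the single-franchise example above:

        [FranchiseAuthorize(Roles = "Receptionist, Technician")]
        public ActionResult CreateJob(Job job) { ... }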

    Read the article

  • Populate Multiple PDFs

    - by gmcalab
    I am using iTextSharp to populate my PDFs, and I have no issues with that part. Basically, I get the PDF and populate the fields in memory, then pass back the MemoryStream to be displayed on a webpage. All this works with a single-document PDF. What I am trying to figure out now is merging multiple PDFs into one MemoryStream. The catch is that the documents I am populating are identical. So, for example, I have a List<Person> that contains 5 persons, and I want to fill out a PDF for each person and merge them all into one, in memory – the same type of document for each person. The problem is that when I try to add a second copy of the same PDF to be filled out for the second iteration, it just overwrites the first populated PDF, since it's the same document – therefore not adding a second copy for the second person at all. So with the 5 people I end up with a single page containing the data of the 5th person, instead of a PDF with 5 like pages that contain the data of each person respectively. Here's some code...

        MemoryStream ms = new MemoryStream();
        PdfReader docReader = null;
        PdfStamper Stamper = null;
        List<Person> persons = new List<Person>()
        {
            new Person("Larry", "David"),
            new Person("Dustin", "Byfuglien"),
            new Person("Patrick", "Kane"),
            new Person("Johnathan", "Toews"),
            new Person("Marian", "Hossa")
        };

        try
        {
            // Iterate thru all persons and populate a PDF for each
            foreach (var person in persons)
            {
                PdfCopyFields Copier = new PdfCopyFields(ms);
                Copier.AddDocument(GetReader("Person.pdf"));
                Copier.Close();

                docReader = new PdfReader(ms.ToArray());
                Stamper = new PdfStamper(docReader, ms);
                AcroFields Fields = Stamper.AcroFields;
                Fields.SetField("FirstName", person.FirstName);
            }
        }
        catch (Exception e)
        {
            // handle error
        }
        finally
        {
            if (Stamper != null)
            {
                Stamper.Close();
            }
            if (docReader != null)
            {
                docReader.Close();
            }
        }
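    For context, one commonly used pattern with iTextSharp in this situation – a sketch under my assumptions, reusing the poster's GetReader helper and Person list – is to fill each copy into its own buffer first and only then merge, so the identical field names never collide:

        // Fill one stamped copy per person into its own buffer, then merge the
        // buffers. FormFlattening burns the field values into the page, so the
        // identical field names of the copies cannot overwrite each other.
        var output = new MemoryStream();
        var copier = new PdfCopyFields(output);
        foreach (var person in persons)
        {
            var buffer = new MemoryStream();
            PdfStamper stamper = new PdfStamper(GetReader("Person.pdf"), buffer);
            stamper.AcroFields.SetField("FirstName", person.FirstName);
            stamper.FormFlattening = true;   // avoid duplicate-field collisions on merge
            stamper.Close();
            copier.AddDocument(new PdfReader(buffer.ToArray()));
        }
        copier.Close();
        // 'output' now holds one PDF with a filled page per person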

    Read the article

  • Problems doing asynch operations in C# using Mutex.

    - by firoso
    I've tried this MANY ways; here is the current iteration. I think I've just implemented this all wrong. What I'm trying to accomplish is to treat this async result in such a way that until it returns AND I finish my add-thumbnail call, I will not request another call to imageProvider.BeginGetImage. To clarify, my question is two-fold: why does what I'm doing never seem to halt at my Mutex.WaitOne() call, and what is the proper way to handle this scenario?

        /// <summary>
        /// Re-creates a list of thumbnails from a list of TreeElementViewModels (directories).
        /// </summary>
        /// <param name="list">the list of TreeElementViewModels to process</param>
        public void BeginLayout(List<AiTreeElementViewModel> list)
        {
            // *removed code for canceling and cleanup from previous calls*

            // Starts the processing of all folders in parallel.
            Task.Factory.StartNew(() =>
            {
                thumbnailRequests = Parallel.ForEach<AiTreeElementViewModel>(list, options, ProcessFolder);
            });
        }

        /// <summary>
        /// Processes a folder for all of its image paths and loads them from disk.
        /// </summary>
        /// <param name="element">the tree element to process</param>
        private void ProcessFolder(AiTreeElementViewModel element)
        {
            try
            {
                var images = ImageCrawler.GetImagePaths(element.Path);
                AsyncCallback callback = AddThumbnail;
                foreach (var image in images)
                {
                    Console.WriteLine("Attempting Enter");
                    synchMutex.WaitOne();
                    Console.WriteLine("Entered");
                    var result = imageProvider.BeginGetImage(callback, image);
                }
            }
            catch (Exception exc)
            {
                Console.WriteLine(exc.ToString());
                // TODO: Do Something here.
            }
        }

        /// <summary>
        /// Adds a thumbnail to the Browser.
        /// </summary>
        /// <param name="result">an async result used for retrieving state data from the load task.</param>
        private void AddThumbnail(IAsyncResult result)
        {
            lock (Thumbnails)
            {
                try
                {
                    Stream image = imageProvider.EndGetImage(result);
                    string filename = imageProvider.GetImageName(result);
                    string imagePath = imageProvider.GetImagePath(result);
                    var imageviewmodel = new AiImageThumbnailViewModel(image, filename, imagePath);
                    thumbnailHash[imagePath] = imageviewmodel;
                    HostInvoke(() => Thumbnails.Add(imageviewmodel));
                    UpdateChildZoom();
                    //synchMutex.ReleaseMutex();
                    Console.WriteLine("Exited");
                }
                catch (Exception exc)
                {
                    Console.WriteLine(exc.ToString());
                    // TODO: Do Something here.
                }
            }
        }
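    Two CLR facts worth illustrating here (my observation, not from the post): a Mutex has thread affinity – the thread that owns it can call WaitOne() again without blocking, which matches the loop above sailing through "Attempting Enter", and ReleaseMutex() throws if called from a different thread, such as the callback's thread. A Semaphore has no such affinity, so a begin/end pair that spans threads can be sketched like this (System.Threading assumed; imageProvider and the view-model plumbing are the poster's):

        // Sketch: serialize BeginGetImage calls until the matching callback finishes.
        private readonly Semaphore _gate = new Semaphore(1, 1);

        private void RequestImage(string image)
        {
            _gate.WaitOne();                      // blocks until the previous callback released
            imageProvider.BeginGetImage(AddThumbnail, image);
        }

        private void AddThumbnail(IAsyncResult result)
        {
            try
            {
                Stream image = imageProvider.EndGetImage(result);
                // ... build and publish the thumbnail view model as in the post ...
            }
            finally
            {
                _gate.Release();                  // allow the next BeginGetImage
            }
        }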

    Read the article

  • What am I missing about WCF?

    - by Bigtoe
    I've been developing in MS technologies for longer than I care to remember at this stage. When .NET arrived on the scene I thought they hit the nail on the head, and with each iteration and version I thought their technologies were getting stronger and stronger and looked forward to each release. However, having had to work with WCF for the last year, I must say I found the technology very difficult to work with and understand. Initially it's quite appealing, but when you start getting into the guts of it, configuration is a nightmare: having to override behaviors for message sizes and the number of objects contained in messages, the complexity of the security model, disposing of proxies when faulted, and finally moving back to defining interfaces in code rather than in XML. It just does not work out of the box, and I think it should. We found all of the above issues either in our own testing or when our products were out on site. I do understand the rationale behind it all, but surely they could have come up with a simpler implementation mechanism. I suppose what I'm asking is: Am I looking at WCF the wrong way? What strengths does it have over the alternatives? Under what circumstances should I choose to use WCF?

    OK folks, sorry about the delay in responding – work does have a nasty habit of getting in the way sometimes :) Some clarifications. My main pain points with WCF fall into the following areas:

    While it does work out of the box, you're left with some major surprises under the hood. As pointed out above, basic things are restricted until they are overridden:
    - The size of string that can be passed can't be over 8K.
    - The number of objects that can be passed in a single message is restricted.
    - Proxies do not automatically recover from failures.

    The amount of configuration, while a good thing, is hard to fully understand – what to use, and under which circumstances – especially when deploying software on site with different security requirements etc. Speaking of configuration, we've had to hide lots of ours in a back-end database because security and network people on site were trying to change things in configuration files without understanding them.

    The configuration of the interfaces is kept in code rather than moving to explicitly defined interfaces in XML, which could be published and consumed by almost anything. I know we can export the XML from the assembly, but it's full of rubbish and certain code generators choke on it.

    I know the world moves on – I've moved on a number of times over the last (ahem) 22 years I've been developing, and I am actively using WCF, so don't get me wrong: I do understand what it's for and where it's heading. I just think there should be simpler configuration/deployment options available, easier setup, and better management for configuration (an SQL config provider, maybe, rather than just the web.config/app.config files). OK, back to the daily grind. Thanks for all your replies so far. Kind regards, Noel
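    Two of the pain points above are easy to show concretely. A sketch, assuming a basicHttpBinding endpoint and a generated client called ServiceClient with a DoWork operation (both my stand-ins, not from the post): raising the default quotas in code rather than in config, and the close-or-abort dance for faulted proxies.

        // usings assumed: System, System.ServiceModel
        // Raising the quotas that bite first (message size, reader quotas):
        var binding = new BasicHttpBinding
        {
            MaxReceivedMessageSize = 10 * 1024 * 1024        // default is 64 KB
        };
        binding.ReaderQuotas.MaxStringContentLength = 1024 * 1024;  // default is 8192 (the "8K" above)

        // The close-or-abort pattern for faulted channels:
        var client = new ServiceClient(binding, new EndpointAddress("http://example/svc"));
        try
        {
            client.DoWork();
            client.Close();
        }
        catch (CommunicationException)
        {
            client.Abort();   // a faulted proxy throws from Close(); Abort() is required
        }
        catch (TimeoutException)
        {
            client.Abort();
        }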

    Read the article

  • Determining what frequencies correspond to the x axis in aurioTouch sample application

    - by eagle
    I'm looking at the aurioTouch sample application for the iPhone SDK. It has a basic spectrum analyzer implemented when you choose the "FFT" option. One of the things the app is lacking is x-axis labels (i.e. the frequency labels). In the aurioTouchAppDelegate.mm file, in the function - (void)drawOscilloscope at line 652, it has the following code:

        if (displayMode == aurioTouchDisplayModeOscilloscopeFFT)
        {
            if (fftBufferManager->HasNewAudioData())
            {
                if (fftBufferManager->ComputeFFT(l_fftData))
                    [self setFFTData:l_fftData length:fftBufferManager->GetNumberFrames() / 2];
                else
                    hasNewFFTData = NO;
            }

            if (hasNewFFTData)
            {
                int y, maxY;
                maxY = drawBufferLen;

                for (y = 0; y < maxY; y++)
                {
                    CGFloat yFract = (CGFloat)y / (CGFloat)(maxY - 1);
                    CGFloat fftIdx = yFract * ((CGFloat)fftLength);

                    double fftIdx_i, fftIdx_f;
                    fftIdx_f = modf(fftIdx, &fftIdx_i);

                    SInt8 fft_l, fft_r;
                    CGFloat fft_l_fl, fft_r_fl;
                    CGFloat interpVal;

                    fft_l = (fftData[(int)fftIdx_i] & 0xFF000000) >> 24;
                    fft_r = (fftData[(int)fftIdx_i + 1] & 0xFF000000) >> 24;
                    fft_l_fl = (CGFloat)(fft_l + 80) / 64.;
                    fft_r_fl = (CGFloat)(fft_r + 80) / 64.;
                    interpVal = fft_l_fl * (1. - fftIdx_f) + fft_r_fl * fftIdx_f;
                    interpVal = CLAMP(0., interpVal, 1.);

                    drawBuffers[0][y] = (interpVal * 120);
                }
                cycleOscilloscopeLines();
            }
        }

    From my understanding, this part of the code decides which magnitude to draw for each frequency in the UI. My question is how I can determine which frequency each iteration (or y value) represents inside the for loop. For example, if I want to know the magnitude for 6 kHz, I'm thinking of adding a line similar to the following:

        if (yValueRepresentskHz(y, 6))
            NSLog(@"The magnitude for 6kHz is %f", (interpVal * 120));

    Please note that although they chose to use the variable name y, from what I understand it actually represents the x-axis in the visual graph of the spectrum analyzer, and the value of drawBuffers[0][y] represents the y-axis.
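    For reference, the mapping can be derived from the loop itself (my derivation, not from the sample's documentation): yFract runs from 0 to 1 across the drawn columns, and fftIdx = yFract * fftLength indexes FFT bins that span 0 Hz up to the Nyquist frequency, so

        frequency(y) ≈ (y / (maxY - 1)) * (sampleRate / 2)

    Assuming the audio session runs at 44100 Hz, the 6 kHz column would sit at yFract = 6000 / 22050 ≈ 0.27, i.e. y ≈ 0.27 * (maxY - 1).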

    Read the article

  • Most efficient method to query a Young Tableau

    - by Matthieu M.
    A Young Tableau is a 2D matrix A of dimensions M*N such that, for all i,j in [0,M) x [0,N):

    - for each p in (i,M), A[i,j] <= A[p,j]
    - for each q in (j,N), A[i,j] <= A[i,q]

    That is, it's sorted row-wise and column-wise. Since it may contain fewer than M*N numbers, the bottom-right values might be represented either as missing or (in algorithm theory) using infinity to denote their absence. Now the (elementary) question: how do you check whether a given number is contained in the Young Tableau? Well, it's trivial to produce an algorithm in O(M*N) time, of course, but what's interesting is that it is very easy to provide an algorithm in O(M+N) time:

    Bottom-left search: let x be the number we look for, and initialize i,j as M-1, 0 (the bottom-left corner).
    1. If x == A[i,j], return true.
    2. If x < A[i,j]: if i is 0, return false; else decrement i and go to 1.
    3. Else: if j is N-1, return false; else increment j and go to 1.

    This algorithm does not make more than M+N moves. The correctness is left as an exercise. It is possible, though, to obtain a better asymptotic runtime.

    Pivot search: let x be the number we look for, and initialize i,j as floor(M/2), floor(N/2).
    1. If x == A[i,j], return true.
    2. If x < A[i,j], search (recursively) in A[0:i-1, 0:j-1], A[i:M-1, 0:j-1] and A[0:i-1, j:N-1].
    3. Else, search (recursively) in A[i+1:M-1, 0:j], A[i+1:M-1, j+1:N-1] and A[0:i, j+1:N-1].

    This algorithm proceeds by discarding one of the 4 quadrants at each iteration and recursing on the 3 that are left (divide and conquer); the master theorem yields a complexity of O((N+M)**(log 3 / log 4)), which is better asymptotically. However, this is only a big-O estimate... So, here are the questions:

    1. Do you know (or can you think of) an algorithm with a better asymptotic runtime?
    2. As introsort proves, it's sometimes worth switching algorithms depending on the input size or input topology... do you think that would be possible here?

    For 2., I am notably thinking that for small input sizes the bottom-left search should be faster because of its O(1) space requirement / lower constant term.
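    For concreteness, here is the bottom-left search transcribed into a short C# sketch (my transcription of the steps above; missing entries are modeled as int.MaxValue, the "infinity" convention mentioned):

        // Young tableau membership test in O(M+N): start at the bottom-left
        // corner and eliminate one row or one column per step.
        static bool Contains(int[,] a, int x)
        {
            int m = a.GetLength(0), n = a.GetLength(1);
            int i = m - 1, j = 0;                // bottom-left corner
            while (i >= 0 && j < n)
            {
                if (a[i, j] == x) return true;
                if (x < a[i, j]) i--;            // row i holds only values >= A[i,j] from column j on: move up
                else j++;                        // column j holds only values <= A[i,j] above row i: move right
            }
            return false;                        // walked off the tableau
        }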

    Read the article

< Previous Page | 31 32 33 34 35 36 37 38 39 40 41  | Next Page >