Search Results

Search found 59194 results on 2368 pages for 'depth first search'.


  • .NET to iOS: From WinForms to the iPad

    - by RobertChipperfield
    One of the great things about working at Red Gate is getting to play with new technology - and right now, that means mobile. A few weeks ago, we decided that a little research into the tablet computing arena was due, and purely from a numbers point of view, that suggested the iPad as a good target device. A quick trip to iPhoneDevCon in San Diego later, and Marine and I came back full of ideas, and with some concept of how iOS development was meant to work. Here's how we went from there to the release of Stacks & Heaps, our geeky take on the classic "Snakes & Ladders" game.

    Step 1: Buy a Mac
    I've played with many operating systems in my time: from the original BBC Model B, through DOS, Windows, Linux, and others, but I'd so far managed to avoid buying fruit-flavoured computer hardware! If you want to develop for the iPhone, iPad or iPod Touch, that's the first thing that needs to change. If you've not used OS X before, the first thing you'll realise is that everything is different! In the interests of avoiding a flame war in the comments section, I'll only go so far as to say that a lot of my Windows-flavoured muscle memory no longer worked. If you're in the UK, you'll also realise your keyboard is lacking a # key, and that " and @ are the other way around from normal. The wonderful Ukelele keyboard layout editor restores some sanity here, as long as you don't look at the keyboard when you're typing. I couldn't give up the PC entirely, but a handy application called Synergy comes to the rescue - it lets you share a single keyboard and mouse between multiple machines. There are a few limitations: Alt-Tab always seems to go to the Mac, and Windows 7's UAC dialogs require the local mouse for security reasons, but it gets you a long way at least.

    Step 2: Register as an Apple Developer
    You can register as an Apple Developer free of charge, and that lets you download XCode and the iOS SDK. You also get the iPhone / iPad emulator, which is handy, since you'll need to be a paid member before you can deploy your apps to a real device. You can either enroll as an individual, or as a company. They both cost the same ($99/year), but there are a few differences between them. If you register as a company, you can add multiple developers to your team (all for the same $99 - not $99 per developer), and you get to use your company name in the App Store. However, you'll need to send off significantly more documentation to Apple, and I suspect the process takes rather longer than for an individual, where they just need to verify some credit card details. Here's a tip: if you're registering as a company, do so as early as possible. The approval process can take a while to complete, so submit the application in plenty of time.

    Step 3: Learn to love the square brackets!
    Objective-C is the language of the iPad. C and C++ are also supported, and if you're doing some serious game development, you'll probably spend most of your time in C++ talking OpenGL, but for forms-based apps, you'll be interacting with a lot of the Objective-C SDK. Like shifting from Ctrl-C to Cmd-C, it feels a little odd at first, with the familiar string.Format(...) turning into:

        NSString *myString = [NSString stringWithFormat:@"Hello world, it's %@", [NSDate date]];

    Thankfully XCode's auto-complete is normally passable, if not up to Visual Studio's standards, which, coupled with a huge amount of content on Stack Overflow, means you'll soon get to grips with the API. You'll need to get used to some terminology changes, though. Coming from a .NET background, there are some luxuries you no longer have developing Objective-C in XCode:
    - Generics! Remember back in .NET 1.1, when all collections were just objects? Yup, we're back there now.
    - ReSharper. Or, more generally, much in the way of refactoring support. The not-many-keystrokes rename of a class, its file, and all references to it in Visual Studio turns into a much more painful experience in XCode.
    - Garbage collection. This is actually rather less of an issue than you might expect: if you follow the rules, the reference counting provided by Objective-C gets you a long way without too much pain. Circular references are their usual problematic self, though.
    - Decent exception handling. You do have exceptions, but they're nowhere near as widely used. Generally, if something goes wrong, you get nil back. Which brings me on to...
    - Calling a method on a nil object isn't a failure - it just returns nil itself! There are many arguments for and against this, but personally I fall into the "stuff should fail as quickly and explicitly as possible" camp.
    Less specifically, I found that there's more chance of code failing at runtime rather than getting caught at compile-time: using the @selector(...) syntax to pass a method signature isn't (can't be) checked at compile-time, so the first you know about a typo is a crash when you try to call it. The solution to this is of course lots of great testing, both automated and manual, but I still find comfort in provably correct type safety being enforced in addition to testing.

    Step 4: Submit to the App Store
    Assuming you want to distribute to more than a handful of devices, you're going to need to submit your app to the Apple App Store. There are a few gotchas in terms of getting builds signed with the right certificates, and you'll be bouncing around between XCode and iTunes Connect a fair bit, but eventually you get everything checked off the to-do list, and are ready to upload your first binary! With some amount of anticipation, I pressed the Upload button in XCode, ready to release our creation into the world, but was instead greeted by an error informing me my XML file was malformed. Uh oh. A little Googling later, and it turned out that a simple rename from "Stacks&Heaps.app" to "StacksAndHeaps.app" worked around an XML escaping bug, and we were good to go. The next step is to wait for approval (or otherwise). After a couple of weeks of intensive development, this part is agonising. Did we make it? The Apple jury is still out at the moment, but our fingers are firmly crossed! In the meantime, you can see some screenshots and leave us your email address if you'd like us to get in touch when it does go live at the MobileFoo website.

    Step 5: Profit!
    Actually, that wasn't the idea here: Stacks & Heaps is free; there are no adverts, and we're not going to sell all your data either. So why did we do it? We wanted to get an idea of what it's like to move from coding for a desktop environment to something completely different. We don't know whether in a year's time the iPad will still be the dominant force, or whether Android will have smoothed out some bugs, tweaked the performance, and polished the UI, but I think it's a fairly sure bet that the tablet form factor is here to stay. We want to meet people who are using it, start chatting to them, and find out about some of the pain they're feeling.
    What better way to do that than do it ourselves, and get to write a cool game in the process?
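
    For instance, the nil-messaging behaviour described in Step 3 looks like this in practice (a minimal sketch):

        NSString *name = nil;

        // Messaging nil is legal in Objective-C: the call silently does nothing
        // and returns nil (or 0 for scalar return types), rather than throwing
        // the NullReferenceException a .NET developer would expect.
        NSUInteger length = [name length];          // length == 0, no crash
        NSString *upper = [name uppercaseString];   // upper == nil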

    Read the article

  • MySQL Cluster 7.3 Labs Release – Foreign Keys Are In!

    - by Mat Keep
    Summary (aka TL;DR): Support for Foreign Key constraints has been one of the most requested feature enhancements for MySQL Cluster. We are therefore extremely excited to announce that Foreign Keys are part of the first Labs Release of MySQL Cluster 7.3 – available for download, evaluation and feedback now! (Select the mysql-cluster-7.3-labs-June-2012 build.) In this blog, I will attempt to discuss the design rationale, implementation, configuration and steps to get started in evaluating the first MySQL Cluster 7.3 Labs Release.

    Pace of Innovation
    It was only a couple of months ago that we announced the General Availability (GA) of MySQL Cluster 7.2, delivering 1 billion Queries per Minute, with 70x higher cross-shard JOIN performance, a Memcached NoSQL key-value API and cross-data center replication. This release has been a huge hit, with downloads and deployments quickly reaching record levels. The announcement of the first MySQL Cluster 7.3 Early Access labs release at today's MySQL Innovation Day event demonstrates the continued pace in Cluster development, and provides an opportunity for the community to evaluate and give feedback on new features they want to see.

    What's the Plan for MySQL Cluster 7.3?
    Well, Foreign Keys, as you may have gathered by now (!), and this is the focus of this first Labs Release. As with MySQL Cluster 7.2, we plan to publish a series of preview releases for 7.3 that will incrementally add new candidate features for a final GA release (subject to the usual safe harbor statement below*), including:
    - New NoSQL APIs;
    - Features to automate the configuration and provisioning of multi-node clusters, on premise or in the cloud;
    - Performance and scalability enhancements;
    - Taking advantage of features in the latest MySQL 5.x Server GA.

    Design Rationale
    MySQL Cluster is designed as a "Not-Only-SQL" database. It combines attributes that enable users to blend the best of both relational and NoSQL technologies into solutions that deliver web scalability with 99.999% availability and real-time performance, including:
    - Concurrent NoSQL and SQL access to the database;
    - Auto-sharding with simple scale-out across commodity hardware;
    - Multi-master replication with failover and recovery both within and across data centers;
    - Shared-nothing architecture with no single point of failure;
    - Online scaling and schema changes;
    - ACID compliance and support for complex queries, across shards.
    Native support for Foreign Key constraints enables users to extend the benefits of MySQL Cluster into a broader range of use-cases, including:
    - Packaged applications in areas such as eCommerce and Web Content Management that prescribe databases with Foreign Key support.
    - In-house developments benefiting from Foreign Key constraints to simplify data models and eliminate the additional application logic needed to maintain data consistency and integrity between tables.

    Implementation
    The Foreign Key functionality is implemented directly within MySQL Cluster's data nodes, allowing any client API accessing the cluster to benefit from it – whether using SQL or one of the NoSQL interfaces (Memcached, C++, Java, JPA or HTTP/REST). The core referential actions defined in the SQL:2003 standard are implemented: CASCADE, RESTRICT, NO ACTION and SET NULL. In addition, the MySQL Cluster implementation supports the online adding and dropping of Foreign Keys, ensuring the Cluster continues to serve both read and write requests during the operation.
    An important difference to note with the Foreign Key implementation in InnoDB is that MySQL Cluster does not support the updating of Primary Keys from within the Data Nodes themselves - instead the UPDATE is emulated with a DELETE followed by an INSERT operation. Therefore an UPDATE operation will return an error if the parent reference is using a Primary Key, unless the CASCADE action is used, in which case the delete operation will result in the corresponding rows in the child table being deleted. The Engineering team plans to change this behavior in a subsequent preview release. Also note that whereas in InnoDB "NO ACTION" is identical to "RESTRICT", in the case of MySQL Cluster "NO ACTION" means "deferred check", i.e. the constraint is checked before commit, allowing user-defined triggers to automatically make changes in order to satisfy the Foreign Key constraints.

    Configuration
    There is nothing special you have to do here – Foreign Key constraint checking is enabled by default. If you intend to migrate existing tables from another database or storage engine, for example from InnoDB, there are a couple of best practices to observe:
    1. Analyze the structure of the Foreign Key graph and run ALTER TABLE ENGINE=NDB in the correct sequence to ensure constraints are enforced, or
    2. Alternatively, drop the Foreign Key constraints prior to the import process and then recreate them when complete.

    Getting Started
    Read this blog for a demonstration of using Foreign Keys with MySQL Cluster. You can download the MySQL Cluster 7.3 Labs Release with Foreign Keys today (select the mysql-cluster-7.3-labs-June-2012 build). If you are new to MySQL Cluster, the Getting Started guide will walk you through installing an evaluation cluster on a single host (these guides reflect MySQL Cluster 7.2, but apply equally well to 7.3). Post any questions to the MySQL Cluster forum where our Engineering team will attempt to assist you. Post any bugs you find to the MySQL bug tracking system (select MySQL Cluster from the Category drop-down menu). And if you have any feedback, please post it to the Comments section of this blog.

    Summary
    MySQL Cluster 7.2 is the GA, production-ready release of MySQL Cluster. This first Labs Release of MySQL Cluster 7.3 gives you the opportunity to preview and evaluate future developments in the MySQL Cluster database, and we are very excited to be able to share that with you. Let us know how you get along with MySQL Cluster 7.3, and which other features you want to see in future releases.

    * Safe Harbor Statement
    This information is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle.
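
    To make the Implementation section above concrete, here is a minimal sketch of the kind of parent/child schema the new Foreign Key support enables (table and column names are illustrative):

        CREATE TABLE parent (
          id INT NOT NULL PRIMARY KEY
        ) ENGINE=NDB;

        CREATE TABLE child (
          id INT NOT NULL PRIMARY KEY,
          parent_id INT,
          -- CASCADE: deleting a parent row deletes its child rows;
          -- the other supported referential actions are RESTRICT,
          -- NO ACTION (deferred check in Cluster) and SET NULL
          FOREIGN KEY (parent_id) REFERENCES parent(id)
            ON DELETE CASCADE
        ) ENGINE=NDB;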

    Read the article

  • How to Achieve OC4J RMI Load Balancing

    - by fip
    This is an old Oracle SOA and OC4J 10G topic. In fact this is not even a SOA topic per se. Questions of RMI load balancing arise when you develop custom web applications accessing human tasks running off a remote SOA 10G cluster. Having returned from a customer who faced challenges with OC4J RMI load balancing, I felt there are still some confusions in the field about how OC4J RMI load balancing works. Hence I decided to dust off an old tech note that I wrote a few years back and share it with the general public. Here is the tech note:

    Overview
    A typical use case in Oracle SOA is that you are building a web based, custom human tasks UI that will interact with the task services housed in a remote BPEL 10G cluster. Or, in a more generic way, you are just building a web based application in Java that needs to interact with the EJBs in a remote OC4J cluster. In either case, you are talking to an OC4J cluster as an RMI client. Then immediately you must ask yourself the following questions:
    1. How do I make sure that the web application, as an RMI client, evenly distributes its load against all the nodes in the remote OC4J cluster?
    2. How do I make sure that the web application, as an RMI client, is resilient to node failures in the remote OC4J cluster, so that in the unlikely case when one of the remote OC4J nodes fails, my web application will continue to function?
    That is the topic of how to achieve load balancing with an OC4J RMI client.

    Solutions
    You need to configure and code RMI load balancing in two places:
    1. The provider URL can be specified with a comma separated list of URLs, so that the initial lookup will land on one of the available URLs.
    2. Choose a proper value for the oracle.j2ee.rmi.loadBalance property, which, alongside the PROVIDER_URL property, is one of the JNDI properties passed to the JNDI lookup (http://docs.oracle.com/cd/B31017_01/web.1013/b28958/rmi.htm#BABDGFBI).
    More details below.

    About the PROVIDER_URL
    The JNDI property java.naming.provider.url's job is, when the client looks up a new context for the very first time in the client session, to provide a list of RMI contexts. Its value takes the format of a single URL, or a comma separated list of URLs:
    - A single URL. For example: opmn:ormi://host1:6003:oc4j_instance1/appName1
    - A comma separated list of multiple URLs. For example: opmn:ormi://host1:6003:oc4j_instance1/appName, opmn:ormi://host2:6003:oc4j_instance1/appName, opmn:ormi://host3:6003:oc4j_instance1/appName
    When the client looks up a new Context for the very first time in the client session, it sends a query against the OPMN referenced by the provider URL. The OPMN host and port specify the destination of such a query, and the OC4J instance name and appName are effectively the "where clause" of the query.

    When the PROVIDER_URL references a single OPMN server
    Let's consider the case when the provider URL only references a single OPMN server of the destination cluster. In this case, that single OPMN server receives the query and returns a list of the qualified Contexts from all OC4Js within the cluster, even though there is a single OPMN server in the provider URL. A Context represents a particular starting point at a particular server for subsequent object lookups. For example, if the URL is opmn:ormi://host1:6003:oc4j_instance1/appName, then OPMN will return the following contexts (provided that host1, host2 and host3 are all in the same cluster):
    - appName on oc4j_instance1 on host1
    - appName on oc4j_instance1 on host2
    - appName on oc4j_instance1 on host3
    Please note that one OPMN server is sufficient to find the list of all contexts from the entire cluster that satisfy the JNDI lookup query. You can do an experiment by shutting down appName on host1, and observe that OPMN on host1 will still be able to return you appName on host2 and appName on host3.

    When the PROVIDER_URL references a comma separated list of multiple OPMN servers
    When the JNDI property java.naming.provider.url references a comma separated list of multiple URLs, the lookup will return exactly the same thing as with a single OPMN server: a list of qualified Contexts from the cluster. The purpose of having multiple OPMN servers is to provide high availability in the initial context creation, such that if the OPMN at host1 is unavailable, the client will try the lookup via the OPMN on host2, and so on. After the initial lookup returns and caches a list of contexts, the JNDI URL(s) are no longer used in the same client session. That explains why removing the 3rd URL from the list of JNDI URLs will not stop the client from getting the EJB on the 3rd server.

    About the oracle.j2ee.rmi.loadBalance property
    After the client acquires the list of contexts, it caches it at the client side as the "list of available RMI contexts". This list includes all the servers in the destination cluster, and it stays in the cache until the client session (JVM) ends. The RMI load balancing against the destination cluster happens at the client side, as the client switches between the members of the list. Whether and how often the client will refresh the Context from the list of contexts is based on the value of oracle.j2ee.rmi.loadBalance. The documentation at http://docs.oracle.com/cd/B31017_01/web.1013/b28958/rmi.htm#BABDGFBI lists all the available values for oracle.j2ee.rmi.loadBalance:
    - client: If specified, the client interacts with the OC4J process that was initially chosen at the first lookup for the entire conversation.
    - context: Used for a Web client (servlet or JSP) that will access EJBs in a clustered OC4J environment. If specified, a new Context object for a randomly-selected OC4J instance will be returned each time InitialContext() is invoked.
    - lookup: Used for a standalone client that will access EJBs in a clustered OC4J environment. If specified, a new Context object for a randomly-selected OC4J instance will be created each time the client calls Context.lookup().
    Please note that regardless of the setting of the oracle.j2ee.rmi.loadBalance property, the "refresh" only occurs at the client. The client can only choose from the "list of available contexts" that was returned and cached from the very first lookup. That is, the client will merely get a new Context object from the "list of available RMI contexts" in the cache at the client side; the client will NOT go to the OPMN server again to get the list. That also implies that if you add a node to the server cluster AFTER the client's initial lookup, the client will not know about it, because neither the server nor the client will initiate a refresh of the "list of available servers" to reflect the new node.

    About High Availability (i.e. Resilience Against Node Failure of the Remote OC4J Cluster)
    What we have discussed above is about load balancing. Let's also discuss high availability. This is how high availability works in RMI: when the client uses a Context but gets an exception such as a closed socket, it knows that the server referenced by that Context is problematic and will try to get another unused Context from the "list of available contexts". Again, this list is the list that was returned and cached at the very first lookup in the entire client session.
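
    To tie the two configuration points together, here is a minimal sketch of such a client-side lookup (the hosts, credentials and EJB name are illustrative, and the context factory class is an assumption - verify it against the OC4J RMI documentation linked above for your release):

        import java.util.Hashtable;
        import javax.naming.Context;
        import javax.naming.InitialContext;

        public class TaskClient {
            public static void main(String[] args) throws Exception {
                Hashtable<String, String> env = new Hashtable<String, String>();

                // Classic OC4J RMI context factory (assumed; check your OC4J version's docs).
                env.put(Context.INITIAL_CONTEXT_FACTORY,
                        "com.evermind.server.rmi.RMIInitialContextFactory");

                // Comma separated provider URL: the initial lookup can be served by any
                // of these OPMN servers, giving HA for the initial context creation.
                env.put(Context.PROVIDER_URL,
                        "opmn:ormi://host1:6003:oc4j_instance1/appName," +
                        "opmn:ormi://host2:6003:oc4j_instance1/appName," +
                        "opmn:ormi://host3:6003:oc4j_instance1/appName");

                env.put(Context.SECURITY_PRINCIPAL, "oc4jadmin");
                env.put(Context.SECURITY_CREDENTIALS, "password");

                // "context": each new InitialContext() picks a random OC4J instance
                // from the cached list - the value suggested above for web clients.
                env.put("oracle.j2ee.rmi.loadBalance", "context");

                Context ctx = new InitialContext(env);
                Object ejbHome = ctx.lookup("MyTaskServiceBean");  // illustrative JNDI name
                System.out.println("Got: " + ejbHome);
            }
        }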

    Read the article

  • Improved Performance on PeopleSoft Combined Benchmark using SPARC T4-4

    - by Brian
    Oracle's SPARC T4-4 server running Oracle's PeopleSoft HCM 9.1 combined online and batch benchmark achieved a world record 18,000 concurrent users experiencing subsecond response time while executing a PeopleSoft Payroll batch job of 500,000 employees in 32.4 minutes. This result was obtained with a SPARC T4-4 server running Oracle Database 11g Release 2, a SPARC T4-4 server running the PeopleSoft HCM 9.1 application server, and a SPARC T4-2 server running Oracle WebLogic Server in the web tier. The SPARC T4-4 server running the application tier used Oracle Solaris Zones, which provide a flexible, scalable and manageable virtualization environment. The average CPU utilization was 17% on the SPARC T4-2 server in the web tier, 59% on the SPARC T4-4 server in the application tier, and 47% on the SPARC T4-4 server in the database tier (online and batch), leaving significant headroom for additional processing across the three tiers. The SPARC T4-4 server used for the database tier hosted Oracle Database 11g Release 2 using Oracle Automatic Storage Management (ASM) for database file management, with I/O performance equivalent to raw devices.

    Performance Landscape
    Results are presented for the PeopleSoft HRMS Self-Service and Payroll combined benchmark. The new result with 128 streams shows significant improvement in the payroll batch processing time with little impact on the self-service component response time.

    PeopleSoft HRMS Self-Service and Payroll Benchmark
      Systems: SPARC T4-2 (web), SPARC T4-4 (app), SPARC T4-4 (db)
        Users: 18,000 | Ave Response Search: 0.988 sec | Ave Response Save: 0.539 sec | Batch Time: 32.4 min | Streams: 128
        Users: 18,000 | Ave Response Search: 0.944 sec | Ave Response Save: 0.503 sec | Batch Time: 43.3 min | Streams: 64

    The following results are for the PeopleSoft HRMS Self-Service benchmark that was previously run. The results are not directly comparable with the combined results because they do not include the payroll component.

    PeopleSoft HRMS Self-Service 9.1 Benchmark
      Systems: SPARC T4-2 (web), SPARC T4-4 (app), 2x SPARC T4-2 (db)
        Users: 18,000 | Ave Response Search: 1.048 sec | Ave Response Save: 0.742 sec | Batch Time: N/A | Streams: N/A

    The following results are for the PeopleSoft Payroll benchmark that was previously run. The results are not directly comparable with the combined results because they do not include the self-service component.

    PeopleSoft Payroll (N.A.) 9.1 - 500K Employees (7 Million SQL PayCalc, Unicode)
      Systems: SPARC T4-4 (db)
        Users: N/A | Ave Response Search: N/A | Ave Response Save: N/A | Batch Time: 30.84 min | Streams: 96

    Configuration Summary
    Application Configuration: 1 x SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz; 512 GB memory; Oracle Solaris 11 11/11; PeopleTools 8.52; PeopleSoft HCM 9.1; Oracle Tuxedo, Version 10.3.0.0, 64-bit, Patch Level 031; Java Platform, Standard Edition Development Kit 6 Update 32
    Database Configuration: 1 x SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz; 256 GB memory; Oracle Solaris 11 11/11; Oracle Database 11g Release 2; PeopleTools 8.52; Oracle Tuxedo, Version 10.3.0.0, 64-bit, Patch Level 031; Micro Focus Server Express (COBOL v 5.1.00)
    Web Tier Configuration: 1 x SPARC T4-2 server with 2 x SPARC T4 processors, 2.85 GHz; 256 GB memory; Oracle Solaris 11 11/11; PeopleTools 8.52; Oracle WebLogic Server 10.3.4; Java Platform, Standard Edition Development Kit 6 Update 32
    Storage Configuration: 1 x Sun Server X2-4 as a COMSTAR head for data (4 x Intel Xeon X7550, 2.0 GHz; 128 GB memory); 1 x Sun Storage F5100 Flash Array (80 flash modules); 1 x Sun Storage F5100 Flash Array (40 flash modules); 1 x Sun Fire X4275 as a COMSTAR head for redo logs (12 x 2 TB SAS disks with Niwot RAID controller)

    Benchmark Description
    This benchmark combines PeopleSoft HCM 9.1 HR Self Service online and PeopleSoft Payroll batch workloads to run on a unified database deployed on Oracle Database 11g Release 2. The PeopleSoft HRSS benchmark kit is an Oracle standard benchmark kit run by all platform vendors to measure performance. It's an OLTP benchmark where database SQLs are moderately complex. The results are certified by Oracle and a white paper is published. PeopleSoft HR SS defines a business transaction as a series of HTML pages that guide a user through a particular scenario. Users are defined as corporate Employees, Managers and HR administrators. The benchmark consists of 14 scenarios which emulate users performing typical HCM transactions such as viewing a paycheck, promoting and hiring employees, updating an employee profile, and other typical HCM application transactions. All these transactions are well-defined in the PeopleSoft HR Self-Service 9.1 benchmark kit. The benchmark metric is the weighted average response search/save time for all the transactions. The PeopleSoft 9.1 Payroll (North America) benchmark demonstrates system performance for a range of processing volumes in a specific configuration. This workload represents large batch runs typical of an ERP environment during a mass update. The benchmark measures five application business process run times for a database representing a large organization: Paysheet Creation, Payroll Calculation, Payroll Confirmation, Print Advice forms, and Create Direct Deposit File. The benchmark metric is the cumulative elapsed time taken to complete the Paysheet Creation, Payroll Calculation and Payroll Confirmation business application processes. The benchmark metrics are taken for each respective benchmark while running simultaneously on the same database back-end. Specifically, the payroll batch processes are started when the online workload reaches steady state (the maximum number of online users) and overlap with online transactions for the duration of the steady state.

    Key Points and Best Practices
    Two PeopleSoft Domain sets with 200 application servers each on a SPARC T4-4 server were hosted in 2 separate Oracle Solaris Zones to demonstrate consolidation of multiple application servers, ease of administration and performance tuning. Each Oracle Solaris Zone was bound to a separate processor set, each containing 15 cores (total 120 threads). The default set (1 core from the first and third processor socket, total 16 threads) was used for network and disk interrupt handling. This was done to improve performance by reducing memory access latency (using the physical memory closest to the processors) and to offload I/O interrupt handling to the default set's threads, freeing up CPU resources for application server threads and balancing the application workload across 240 threads. A total of 128 PeopleSoft streams server processes were used on the database node to complete the payroll batch job of 500,000 employees in 32.4 minutes.

    See Also
    - Oracle PeopleSoft Benchmark White Papers (oracle.com)
    - SPARC T4-2 Server (oracle.com, OTN)
    - SPARC T4-4 Server (oracle.com, OTN)
    - PeopleSoft Enterprise Human Capital Management (oracle.com, OTN)
    - PeopleSoft Enterprise Human Capital Management (Payroll) (oracle.com, OTN)
    - Oracle Solaris (oracle.com, OTN)
    - Oracle Database 11g Release 2 (oracle.com, OTN)

    Disclosure Statement
    Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 8 November 2012.

    Read the article

  • CodePlex Daily Summary for Sunday, March 28, 2010

    CodePlex Daily Summary for Sunday, March 28, 2010

    New Projects
    - Feed Tracker: Feed Tracker allows you to track your favorite feeds (RSS 2.0 and Atom 1.0) and open them up directly in your browser.
    - FIM 2010 Resource Management Client: The Forefront Identity Management 2010 Resource Management Client is a library to communicate with the FIM 2010 web service. The development langu...
    - Infection Protection: A game about controlling disease outbreak in a city. Developed for OGPC 2010, using Qt.
    - OrthoLab: Homepage of Orthocone open-source laboratory.
    - Paragliding ThermalMarker: Paragliding / Hanggliding Windows Application that receives waypoint files and returns only the thermals that get triggered more often in a place.
    - RSSFalls: RssFalls makes it easier for developers to download RSS or Podcast enclosures.
    - String Library for C++ Language: StrLib++ is a string library for the C++ language. For now it supports only ANSI strings; Unicode support will be added later for UTF8, UTF16 and UTF32 for...
    - Sweeper: Sweeper is a Visual Studio 2008 add-in for C# that takes care of many of the trivial code-formatting issues that developers run into - particularly...
    - System.Common: A .Net library that provides methods, properties and more that the .Net Framework doesn't provide.
    - Tiveriad: The framework is designed to help you more easily build modular Windows applications.
    - T-Shirts Online: Online shop built in Silverlight 4 using DIBS as payment module.

    New Releases
    - ArkSwitch: ArkSwitch v1.1.3: This release has some important changes. Thanks to MichyPrima for helping with some of the code. 1. Improved theming to more easily support multip...
    - Catharsis: Catharsis 2.5 on catarsa.com: The Catharsis framework has finally its own portal http://catarsa.com The latest release version is 2.5 - string names of properties are not any ...
    - EffiProz - A Pure C# Database: EffiProz CF 1.0: EffiProz for .Net compact framework.
    - Encrypted Notes: Encrypted Notes 1.6: This is the latest version of Encrypted Notes (1.6). It has an installer - it will create a directory 'CPascoe' in My Documents. Once you have ext...
    - Extend SmallBasic: Teaching Extensions v.009: Added Pentagon Crazy Recipe Quiz
    - Gapi.NET - .NET (C#) wrapper for Google API: Gapi.NET 0.5.0.0: - Fixed some minor bugs. - Added minor features. - Performance improvement. See code check-ins for detailed information
    - HouseFly experimental controls: HouseFly experimental control: Alpha version of HouseFly experimental controls
    - iTuner - The iTunes Companion: iTuner 1.2.3738 Beta 2: V1.2 allows you to synchronize one or more iTunes playlists to a USB MP3 player. Beta 2 resolves all known issues. This continues the evolution ye...
    - jQuery Library for SharePoint Web Services: SPServices 0.5.4: IMPORTANT NOTE: This release is in an alpha state. You should only download it if you know what you are getting and are interested in testing it f...
    - JSINQ - LINQ to Objects for JavaScript: JSINQ 1.0: This is the first stable release of JSINQ. It is fully compatible with the 0.9 beta release. It contains the following new features: Now supports ...
    - MSBuild Mercurial Tasks: 1.0.0 Beta: First release of the application. This version integrates all the basic functionalities of Mercurial as defined in the Use Case 1.
    - Open Portal Foundation: Open Portal Foundation V1.4.2: What's new? noscript template was updated; naming convention for layout autogenerated controls now uses the "master" prefix. The documentation ce...
    - Open Portal Foundation: Open Portal Foundation V1.4.4: What's new? ASP .NET Master page support for custom aspx integrated pages. New usercontrols for ASCX, ASPX and Master page integration: Link: f...
    - Paint.NET PSD Plugin: 1.5.0: RLE compression is now working fully on save. File sizes are now competitive with Photoshop's. Saving takes about twice as long with RLE compressi...
    - Paragliding ThermalMarker: ThermalMarker_Alfa0.1: Release Alfa 1
    - Simple Service Locator: Simple Service Locator v0.7: The Simple Service Locator is an easy-to-use Inversion of Control library that is a complete implementation of the Common Service Locator interface...
    - String Library for C++ Language: Release 0.9: Version 0.9 beta release. DO NOT USE IN SERIOUS PROJECTS. This release uses the default application heap, and because Visual Studio is using special debug...
    - Sweeper: Sweeper Alpha 1: Sweeper - A Visual Studio Add-in for C# Code Formatting - Visual Studio 2008. Includes: a UI for options, enable or disable any specific task you want ...
    - T-Shirts Online: 1.0: First release of the online shop.
    - Twilio Server Library for .NET (TSL.NET): v0.1.0 Beta: This is the first release of TSL.NET. This v0.1.0 release is a Beta. Subsequent builds will be posted as v0.1.x and release-candidate Betas will be...
    - Vr30 OS: Facebook 1.0: Connects you to Facebook without your web browser.
    - Vr30 OS: SkyBlog 1.0: SkyBlog without web browser.
    - Vr30 OS: YouTube 1.0: YouTube without web browser.
    - Web Image Resize Handler: Web Image Resize, Zoom, Rotate and Greyscale v.1.0: Efficient Web Image Resize, Zoom, Rotate and Greyscale caching handler for ASP.Net.
    - WinXound: WinXound 3.3.0 Beta 1 for Mac OsX: This is the first Beta release for Apple Mac OsX (Universal Binary). DEBUG HELP NEEDED! Please signal bugs, suggestions or feedback to: stefano_b...

    Most Popular Projects
    - MetaSharp
    - Rawr
    - WBFS Manager
    - ASP.NET Ajax Library
    - Microsoft SQL Server Product Samples: Database
    - Silverlight Toolkit
    - AJAX Control Toolkit
    - LiveUpload to Facebook
    - Windows Presentation Foundation (WPF)
    - ASP.NET

    Most Active Projects
    - Rawr
    - jQuery Library for SharePoint Web Services
    - Managed Extensibility Framework
    - BlogEngine.NET
    - Microsoft Biology Foundation
    - patterns & practices: Composite WPF and Silverlight
    - LINQ to Twitter
    - Farseer Physics Engine
    - Table2Class
    - NB_Store - Free DotNetNuke Ecommerce Catalog Module

    Read the article

  • SortedDictionary and SortedList

    - by Simon Cooper
    Apart from Dictionary<TKey, TValue>, there are two other dictionaries in the BCL - SortedDictionary<TKey, TValue> and SortedList<TKey, TValue>. On the face of it, these two classes do the same thing - provide an IDictionary<TKey, TValue> interface where the iterator returns the items sorted by the key. So what's the difference between them, and when should you use one rather than the other? (As in my previous post, I'll assume you have some basic algorithm & datastructure knowledge.)

    SortedDictionary
    We'll first cover SortedDictionary. This is implemented as a special sort of binary tree called a red-black tree. Essentially, it's a binary tree that uses various constraints on how the nodes of the tree can be arranged to ensure the tree is always roughly balanced (for more gory algorithmic details, see the wikipedia link above). What I'm concerned about in this post is how the .NET SortedDictionary is actually implemented. In .NET 4, behind the scenes, the actual implementation of the tree is delegated to a SortedSet<KeyValuePair<TKey, TValue>>. Each node in the tree is stored as a separate SortedSet<T>.Node object (remember, in a SortedDictionary, T is instantiated to KeyValuePair<TKey, TValue>):

        class Node {
            public bool IsRed;
            public T Item;
            public SortedSet<T>.Node Left;
            public SortedSet<T>.Node Right;
        }

    The SortedSet only stores a reference to the root node; all the data in the tree is accessed by traversing the Left and Right node references until you reach the node you're looking for. Each individual node can be physically stored anywhere in memory; what's important is the relationship between the nodes. This is also why there is no constructor to SortedDictionary or SortedSet that takes an integer representing the capacity; there are no internal arrays that need to be created and resized. This may seem trivial, but it's an important distinction between SortedDictionary and SortedList that I'll cover later on. And that's pretty much it; it's a standard red-black tree. Plenty of webpages and datastructure books cover the algorithms behind the tree itself far better than I could. What's interesting is the comparison between SortedDictionary and SortedList, which I'll cover at the end. As a side point, SortedDictionary has existed in the BCL ever since .NET 2. That means that, all through .NET 2, 3, and 3.5, there has been a bona-fide sorted set class in the BCL (called TreeSet). However, it was internal, so it couldn't be used outside System.dll. Only in .NET 4 was this class exposed as SortedSet.

    SortedList
    Whereas SortedDictionary doesn't use any backing arrays, SortedList does. It is implemented just as the name suggests: two arrays, one containing the keys, and one the values. The items in the keys array are always guaranteed to be stored in sorted order, and the value corresponding to each key is stored at the same index as the key in the values array. For example, if the keys array holds 5 and 8, and the value for key 5 is 'z' and the value for key 8 is 'm', then 'z' and 'm' sit at the matching indexes in the values array. Whenever an item is inserted or removed from the SortedList, a binary search is run on the keys array to find the correct index, then all the items in the arrays are shifted to accommodate the new or removed item. For example, if the key 3 was removed, a binary search would be run to find the array index the item was at, then everything above that index would be moved down by one; and then if the key/value pair {7, 'f'} was added, a binary search would be run on the keys to find the index to insert the new item, and everything above that index would be moved up to accommodate it. If another item was then added, both arrays would be resized (to a length of 10) before the new item was added to the arrays. As you can see, any insertions or removals in the middle of the list require a proportion of the array contents to be moved; an O(n) operation. However, if the insertion or removal is at the end of the array (ie the largest key), then it's only O(log n): the cost of the binary search to determine that it does actually need to be added to the end (excluding the occasional O(n) cost of resizing the arrays to fit more items). As a side effect of using backing arrays, SortedList offers IList Keys and Values views that simply use the backing keys or values arrays, as well as various methods utilising the array index of stored items, which SortedDictionary does not (and cannot) offer.

    The Comparison
    So, when should you use one and not the other? Well, here are the important differences:

    Memory usage
    SortedDictionary and SortedList have got very different memory profiles. SortedDictionary...
    - has a memory overhead of one object instance, a bool, and two references per item. On 64-bit systems, this adds up to ~40 bytes, not including the stored item and the reference to it from the Node object.
    - stores the items in separate objects that can be spread all over the heap. This helps to keep memory fragmentation low, as the individual node objects can be allocated wherever there's a spare 60 bytes.
    In contrast, SortedList...
    - has no additional overhead per item (only the reference to it in the array entries); however, the backing arrays can be significantly larger than you need: every time the arrays are resized they double in size. That means that if you add 513 items to a SortedList, the backing arrays will each have a length of 1024. To counteract this, the TrimExcess method resizes the arrays back down to the actual size needed, or you can simply assign list.Capacity = list.Count.
    - stores its items in a continuous block in memory. If the list stores thousands of items, this can cause significant problems with Large Object Heap memory fragmentation as the array resizes, which SortedDictionary doesn't have.

    Performance
    Operations on a SortedDictionary always have O(log n) performance, regardless of where in the collection you're adding or removing items. In contrast, SortedList has O(n) performance when you're altering the middle of the collection. If you're adding or removing from the end (ie the largest item), then performance is O(log n), same as SortedDictionary (in practice, it will likely be slightly faster, due to the array items all being in the same area in memory - also called locality of reference).

    So, when should you use one and not the other? As always with these sorts of things, there are no hard-and-fast rules. But generally, if you:
    - need to access items using their index within the collection
    - are populating the dictionary all at once from sorted data
    - aren't adding or removing keys once it's populated
    then use a SortedList. But if you:
    - don't know how many items are going to be in the dictionary
    - are populating the dictionary from random, unsorted data
    - are adding & removing items randomly
    then use a SortedDictionary. The default (again, there are no definite rules on these sorts of things!) should be to use SortedDictionary, unless there's a good reason to use SortedList, due to the bad performance of SortedList when altering the middle of the collection.
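
    To see the two APIs side by side, here's a minimal sketch (the keys mirror the examples above; both classes expose the same IDictionary<TKey, TValue> surface):

        using System;
        using System.Collections.Generic;

        class SortedComparison
        {
            static void Main()
            {
                // Red-black tree: O(log n) insertion anywhere in the collection.
                var dict = new SortedDictionary<int, char>();
                // Paired sorted arrays: O(n) insertion in the middle of the collection.
                var list = new SortedList<int, char>();

                foreach (int key in new[] { 5, 8, 3, 7 })
                {
                    dict[key] = (char)('a' + key);
                    list[key] = (char)('a' + key);
                }

                // Both enumerate in key order: 3, 5, 7, 8.
                foreach (KeyValuePair<int, char> kvp in dict)
                    Console.WriteLine(kvp.Key + " = " + kvp.Value);

                // Only SortedList offers index-based access to its backing arrays...
                Console.WriteLine(list.Keys[0]);    // 3 (the smallest key)
                Console.WriteLine(list.Values[0]);  // 'd'

                // ...and only SortedList needs trimming after bulk population.
                list.TrimExcess();
            }
        }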

    Read the article

  • Hi, I have a C hashing routine which is behaving strangely?

    - by aks
    Hi. In this hashing routine:
    1.) I am able to add strings.
    2.) I am able to view my added strings.
    3.) When I try to add a duplicate string, it throws me an error saying it is already present.
    4.) But when I try to delete the same string which is already present in the hash table, the delete routine calls the hash function to get an index. At this time, it returns a different hash index from the one the string was added under. Hence, my delete routine is failing.
    I am not able to understand the reason why, for the same string, the hash function calculates a different index each time (whereas the same logic works in the view hash table routine). Can someone help me?
    This is the output I am getting:

        $ ./a
        Press 1 to add an element to the hashtable
        Press 2 to delete an element from the hashtable
        Press 3 to search the hashtable
        Press 4 to view the hashtable
        Press 5 to exit
        Please enter your choice: 1
        Please enter the string :gaura
        enters in add_string
        DEBUG purpose in hash function: str passed = gaura
        Hashval returned in hash func= 1
        hashval = 1
        enters in lookup_string
        str in lookup_string = gaura
        DEBUG purpose in hash function: str passed = gaura
        Hashval returned in hash func= 1
        hashval = 1
        DEBUG ERROR :element not found in lookup string
        DEBUG Purpose NULL
        Inserting...
        DEBUG1 : enters here
        hashval = 1
        String added successfully
        ...
        Please enter your choice: 1
        Please enter the string :ayu
        enters in add_string
        DEBUG purpose in hash function: str passed = ayu
        Hashval returned in hash func= 1
        hashval = 1
        enters in lookup_string
        str in lookup_string = ayu
        DEBUG purpose in hash function: str passed = ayu
        Hashval returned in hash func= 1
        hashval = 1
        returns NULL in lookup_string
        DEBUG Purpose NULL
        Inserting...
        DEBUG2 : enters here
        hashval = 1
        String added successfully
        ...
        Please enter your choice: 1
        Please enter the string :gaurava
        enters in add_string
        DEBUG purpose in hash function: str passed = gaurava
        Hashval returned in hash func= 7
        hashval = 7
        enters in lookup_string
        str in lookup_string = gaurava
        DEBUG purpose in hash function: str passed = gaurava
        Hashval returned in hash func= 7
        hashval = 7
        DEBUG ERROR :element not found in lookup string
        DEBUG Purpose NULL
        Inserting...
        DEBUG1 : enters here
        hashval = 7
        String added successfully
        ...
        Please enter your choice: 4
        Index : i = 1    String = gaura    ayu
        Index : i = 7    String = gaurava
        ...
        Please enter your choice: 2
        Please enter the string you want to delete :gaura
        String entered = gaura
        enters in delete_string
        DEBUG purpose in hash function: str passed = gaura
        Hashval returned in hash func= 0
        hashval = 0
        enters in lookup_string
        str in lookup_string = gaura
        DEBUG purpose in hash function: str passed = gaura
        Hashval returned in hash func= 0
        hashval = 0
        DEBUG ERROR :element not found in lookup string
        DEBUG Purpose
        Item not present. So, cannot be deleted
        Item found in list: Deletion failed

    My routine is pasted below:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        struct list {
            char *string;
            struct list *next;
        };

        struct hash_table {
            int size;              /* the size of the table */
            struct list **table;   /* the table elements */
        };

        struct hash_table *hashtable = NULL;

        struct hash_table *create_hash_table(int size)
        {
            struct hash_table *new_table;
            int i;

            if (size < 1) return NULL; /* invalid size for table */

            /* Attempt to allocate memory for the table structure */
            if ((new_table = malloc(sizeof(struct hash_table))) == NULL) {
                return NULL;
            }
            /* Attempt to allocate memory for the table itself */
            if ((new_table->table = malloc(sizeof(struct list *) * size)) == NULL) {
                return NULL;
            }
            /* Initialize the elements of the table */
            for (i = 0; i < size; i++)
                new_table->table[i] = NULL;

            /* Set the table's size */
            new_table->size = size;
            return new_table;
        }

        unsigned int hash(struct hash_table *hashtable, char *str)
        {
            printf("\n DEBUG purpose in hash function:\n");
            printf("\n str passed = %s", str);

            unsigned int hashval = 0;
            int i = 0;
            for (; *str != '\0'; str++) {
                hashval += str[i];
                i++;
            }
            hashval = hashval % 10;

            printf("\n Hashval returned in hash func= %d", hashval);
            return hashval;
        }

        struct list *lookup_string(struct hash_table *hashtable, char *str)
        {
            printf("\n enters in lookup_string \n");
            printf("\n str in lookup_string = %s", str);

            struct list *new_list;
            unsigned int hashval = hash(hashtable, str);
            printf("\n hashval = %d \n", hashval);

            if (hashtable->table[hashval] == NULL) {
                printf("\n DEBUG ERROR :element not found in lookup string \n");
                return NULL;
            }
            /* Go to the correct list based on the hash value and see if str is
             * in the list. If it is, return a pointer to the list element.
             * If it isn't, the item isn't in the table, so return NULL. */
            for (new_list = hashtable->table[hashval]; new_list != NULL; new_list = new_list->next) {
                if (strcmp(str, new_list->string) == 0)
                    return new_list;
            }
            printf("\n returns NULL in lookup_string \n");
            return NULL;
        }

        int add_string(struct hash_table *hashtable, char *str)
        {
            printf("\n enters in add_string \n");

            struct list *new_list;
            struct list *current_list;
            unsigned int hashval = hash(hashtable, str);
            printf("\n hashval = %d", hashval);

            /* Attempt to allocate memory for list */
            if ((new_list = malloc(sizeof(struct list))) == NULL) {
                printf("\n enters here \n");
                return 1;
            }
            /* Does item already exist? */
            current_list = lookup_string(hashtable, str);
            if (current_list == NULL) {
                printf("\n DEBUG Purpose \n");
                printf("\n NULL \n");
            }
            /* item already exists, don't insert it again. */
            if (current_list != NULL) {
                printf("\n Item already present...\n");
                return 2;
            }
            /* Insert into list */
            printf("\n Inserting...\n");
            new_list->string = strdup(str);
            new_list->next = NULL;
            //new_list->next = hashtable->table[hashval];

            if (hashtable->table[hashval] == NULL) {
                printf("\n DEBUG1 : enters here \n");
                printf("\n hashval = %d", hashval);
                hashtable->table[hashval] = new_list;
            } else {
                printf("\n DEBUG2 : enters here \n");
                printf("\n hashval = %d", hashval);
                struct list *temp_list = hashtable->table[hashval];
                while (temp_list->next != NULL)
                    temp_list = temp_list->next;
                temp_list->next = new_list;
                // hashtable->table[hashval] = new_list;
            }
            return 0;
        }

        int delete_string(struct hash_table *hashtable, char *str)
        {
            printf("\n enters in delete_string \n");

            struct list *new_list;
            struct list *current_list;
            unsigned int hashval = hash(hashtable, str);
            printf("\n hashval = %d", hashval);

            /* Does item already exist? */
            current_list = lookup_string(hashtable, str);
            if (current_list == NULL) {
                printf("\n DEBUG Purpose \n");
                printf("\n Item not present. So, cannot be deleted \n");
                return 1;
            }
            /* item exists, delete it. */
            if (current_list != NULL) {
                struct list *temp = hashtable->table[hashval];
                if (strcmp(temp->string, str) == 0) {
                    if (temp->next == NULL) {
                        hashtable->table[hashval] = NULL;
                        free(temp);
                    } else {
                        hashtable->table[hashval] = temp->next;
                        free(temp);
                    }
                } else {
                    struct list *temp1;
                    while (temp->next != NULL) {
                        temp1 = temp;
                        if (strcmp(temp->string, str) == 0)
                            break;
                        else
                            temp = temp->next;
                    }
                    if (temp->next == NULL) {
                        temp1->next = NULL;
                        free(temp);
                    } else {
                        temp1->next = temp->next;
                        free(temp);
                    }
                }
            }
            return 0;
        }

        void free_table(struct hash_table *hashtable)
        {
            int i;
            struct list *new_list, *temp_list;

            if (hashtable == NULL) return;

            /* Free the memory for every item in the table, including the
             * strings themselves. */
            for (i = 0; i < hashtable->size; i++) {
                new_list = hashtable->table[i];
                while (new_list != NULL) {
                    temp_list = new_list;
                    new_list = new_list->next;
                    free(temp_list->string);
                    free(temp_list);
                }
            }
            /* Free the table itself */
            free(hashtable->table);
            free(hashtable);
        }

        void view_hashtable(struct hash_table *hashtable)
        {
            int i = 0;

            if (hashtable == NULL) return;

            for (i = 0; i < hashtable->size; i++) {
                if ((hashtable->table[i] == 0) || (strcmp(hashtable->table[i]->string, "*") == 0)) {
                    continue;
                }
                printf(" Index : i = %d\t String = %s", i, hashtable->table[i]->string);
                struct list *temp = hashtable->table[i]->next;
                while (temp != NULL) {
                    printf("\t %s", temp->string);
                    temp = temp->next;
                }
                printf("\n");
            }
        }

        int main()
        {
            hashtable = create_hash_table(10);
            if (hashtable == NULL) {
                printf("\n Memory allocation failure during creation of hash table \n");
                return 0;
            }

            int flag = 1;
            while (flag) {
                int choice;
                printf("\n Press 1 to add an element to the hashtable\n");
                printf("\n Press 2 to delete an element from the hashtable\n");
                printf("\n Press 3 to search the hashtable\n");
                printf("\n Press 4 to view the hashtable\n");
                printf("\n Press 5 to exit \n");
                printf("\n Please enter your choice: ");
                scanf("%d", &choice);

                if (choice == 5)
                    flag = 0;
                else if (choice == 1) {
                    char str[20];
                    printf("\n Please enter the string :");
                    scanf("%s", &str);
                    int i;
                    i = add_string(hashtable, str);
                    if (i == 1) {
                        printf("\n Memory allocation failure:Choice 1 \n");
                        return 0;
                    } else if (i == 2) {
                        printf("\n String already prsent in hash table : Couldnot add it again\n");
                        return 0;
                    } else {
                        printf("\n String added successfully \n");
                    }
                } else if (choice == 2) {
                    int i;
                    struct list *temp_list;
                    char str[20];
                    printf("\n Please enter the string you want to delete :");
                    scanf("%s", &str);
                    printf("\n String entered = %s", str);
                    i = delete_string(hashtable, str);
                    if (i == 0)
                        printf("\n Item found in list: Deletion success \n");
                    else
                        printf("\n Item found in list: Deletion failed \n");
                } else if (choice == 3) {
                    struct list *temp_list;
                    char str[20];
                    printf("\n Please enter the string :");
                    scanf("%s", &str);
                    temp_list = lookup_string(hashtable, str);
                    if (!temp_list) {
                        printf("\n Item not found in list: Deletion failed \n");
                        return 0;
                    }
                    printf("\n Item found: Present in Hash Table \n");
                } else if (choice == 4) {
                    view_hashtable(hashtable);
                } else if (choice == 5) {
                    printf("\n Exiting ....");
                    return 0;
                } else
                    printf("\n Invalid choice:");
            };

            free_table(hashtable);
            return 0;
        }
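
    A close look at the hash function suggests the likely culprit: the loop increments both str and i, so str[i] strides through the string two characters at a time and, for longer strings, reads past the terminating '\0'. Reading beyond the end of the buffer is undefined behavior, which would explain why the same string can produce different hash values on different calls. A minimal corrected version of the function:

        unsigned int hash(struct hash_table *hashtable, char *str)
        {
            unsigned int hashval = 0;

            /* Advance one character at a time; do not also index with i. */
            for (; *str != '\0'; str++)
                hashval += (unsigned char)*str;

            return hashval % 10;  /* the table size used in this program */
        }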

    Read the article

  • Converting projects to use Automatic NuGet restore

    - by terje
    Originally posted on: http://geekswithblogs.net/terje/archive/2014/06/11/converting-projects-to-use-automatic-nuget-restore.aspx

    Download tool
    In version 2.7 of NuGet, automatic NuGet restore was introduced, meaning you no longer need to distort your MSBuild project files with NuGet target information. Visual Studio and TFS 2013 build have this enabled by default. However, if your project was created before this was introduced, and/or if you have used the "Enable NuGet Package Restore" function afterwards, you now have a series of unwanted things in your projects, and a series of project files that have been modified – and you neither want nor need this any more! You might also get into some unwanted issues due to these modifications. This is an MSBuild modification that was needed only before NuGet 2.7! So: DON'T USE THIS FUNCTION!!!
    There is an issue (https://nuget.codeplex.com/workitem/4019) on the NuGet project site to get this function removed, renamed or at least moved farther away from the top level (please help vote it up!). The response seems to be that it WILL BE removed, around version 3.0. This function does nothing you need after the introduction of NuGet 2.7. What is also unfortunate is the naming of it – it implies that it is needed, which it is not, and what is worse, there is no corresponding function to remove what it does!
    So, to fix this, use the tool named IFix, which will fix the issue for you – all free of course, and the code is open source. Also report issues there: https://github.com/OsirisTerje/IFix

    IFix information
    DOWNLOAD HERE
    This command line tool installs using an MSI, and adds itself to the system path. If you work in a team, you will probably need to use the tool multiple times: anyone in the team may at any time use the "Enable NuGet Package Restore" function and mess up your project again. The IFix program can be run either in a check mode, where it does not write anything back – it only checks if you have any issues – or in a fix mode, where it will also perform the necessary fixes for you. The IFix program is used like this:

        IFix <command> [-c/--check] [-f/--fix] [-v/--verbose]

    The command in this case is "nugetrestore". It will do a check from the location where it is being called, and run through all subfolders from that location. So "IFix nugetrestore --check" will do the check, and "IFix nugetrestore --fix" will perform the changes, for all files and folders below the current working directory. (Note that --check can be replaced with just -c, and --fix with -f, and so on.)
    BEWARE: When you run the fix option, all solutions to be affected must be closed in Visual Studio! So, if you just want to DO it, then: IFix nugetrestore --check to see if you have issues, then IFix nugetrestore --fix to fix them.

    How does it work
    IFix nugetrestore checks, and optionally fixes, four issues that the older enabling of NuGet restore introduced. The issues are related to the MSBuild process, and are:
    1. Deleting the nuget.targets file.
    2. Deleting the nuget.exe that is located under the .nuget folder.
    3. Removing all references to nuget.targets in the solution file.
    4. Removing all properties and target imports of nuget.targets inside the csproj files.
    IFix fixes these issues in the same sequence. The first step, removing the nuget.targets file, is the most critical one: all instances of the nuget.targets file within the scope of a solution have to be removed, and in addition it has to be done with the solution closed in Visual Studio. If Visual Studio finds a nuget.targets file, the csproj files will be automatically messed up again. This means the removal process above might need to be done multiple times, especially when you're working with a team, and that solution context menu still has the "Enable NuGet Package Restore" function. Someone on the team might inadvertently do this at any time.
    It can be a good idea to add this check to a check-in policy – if you run TFS standard version control – but that will have no effect if you use TFS Git version control, of course. So, better be prepared to run the IFix check from time to time. Or, even better, install IFix on your build servers, and add a call to IFix nugetrestore --check in the TFS build script.

    How does it look
    As a first example I have run the IFix program from the top of a set of git repositories, so it spans multiple repositories with multiple solutions. The result from the check option is as follows: we see four red lines, one for each of the four checks we talked about in the previous section. The fact that they are red means we have that particular issue.
    The first section (above the first red text line) is the nuget.targets section. Notice No.1: it says it has found no paths to copy. What IFix does here is to check if there are any defined paths to other NuGet galleries. If there are, then those are copied over to the nuget.config file, which is where they should live in version 2.7 and above. No.2 says it has found the particular nuget.targets file. No.3 states it HAS found some other NuGet galleries defined in the targets file, which it would then like to copy to the config file. No.4 is the section for nuget.exe files, and lists those it has found, and which it would like to delete. No.5 states it has found a reference to nuget.targets in the solution file. This reference comes from the fact that the .nuget folder is a solution folder, and the items within are described in the solution file.
    It then checks the csproj files, and as can be seen from the last red line, it has found issues in 96 out of 198 csproj files. There are two possible issues in a csproj file. No.6 is the first one, the most common and most important one: an "Import Project" section. This is the section that calls the nuget.targets file. No.7 is another issue, which seems to sometimes be there, sometimes not: a RestorePackages property, which also should go away.
    Now, if we run the IFix nugetrestore --fix command, and then the check again after that, the result is: All green!
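
    For reference, the csproj remnants that IFix hunts for (issues No.6 and No.7 above) typically look something like the following - a sketch of what the old "Enable NuGet Package Restore" function generated; exact paths and conditions vary between NuGet versions:

        <!-- Issue No.7: the RestorePackages property -->
        <PropertyGroup>
          <RestorePackages>true</RestorePackages>
        </PropertyGroup>

        <!-- Issue No.6: the Import Project section that calls nuget.targets -->
        <Import Project="$(SolutionDir)\.nuget\NuGet.targets"
                Condition="Exists('$(SolutionDir)\.nuget\NuGet.targets')" />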

    Read the article

  • More Fun with C# Iterators and Generators

    - by James Michael Hare
    In my last post, I talked quite a bit about iterators and how they can be really powerful tools for filtering a list of items down to a subset of items. This had both pros and cons over returning a full collection, which, in summary, were:

    Pros:
    - If traversal is only partial, it does not have to visit the rest of the collection.
    - If the evaluation method is costly, it only incurs that cost on elements visited.
    - Adds little to no garbage collection pressure.

    Cons:
    - Very slight performance impact if you know the caller will always consume all items in the collection.

    And as we saw in the last post, that con for the cost was very, very small and only really became evident on very tight loops consuming very large lists completely.

    One of the key items to note, though, is the garbage! In the traditional (return a new collection) method, if you have a 1,000,000 element collection and wish to transform or filter it in some way, you have to allocate space for that copy of the collection. That is, say you have a collection of 1,000,000 items and you want to double every item in the collection. Well, that means you have to allocate a collection to hold those 1,000,000 items to return, which is a lot, especially if you are just going to use it once and toss it.

    Iterators, though, don't have this problem. Each time you visit the node, it would return the doubled value of the node (in this example) and not allocate a second collection of 1,000,000 doubled items. Do you see the distinction? In both cases, we're consuming 1,000,000 items. But in one case we pass back each doubled item, which is just an int (for example's sake) on the stack, and in the other case we allocate a list containing 1,000,000 items which then must be garbage collected.

    So iterators in C# are pretty cool, eh? Well, here's one more thing a C# iterator can do that a traditional "return a new collection" transformation can't! It can return **unbounded** collections! I know, I know, that smells a lot like an infinite loop, eh? Yes and no. Basically, you're relying on the caller to put the bounds on the list, and as long as the caller doesn't, you keep going. Consider this example:

        public static class Fibonacci
        {
            // returns the infinite fibonacci sequence
            public static IEnumerable<int> Sequence()
            {
                int iteration = 0;
                int first = 1;
                int second = 1;
                int current = 0;

                while (true)
                {
                    if (iteration++ < 2)
                    {
                        current = 1;
                    }
                    else
                    {
                        current = first + second;
                        second = first;
                        first = current;
                    }

                    yield return current;
                }
            }
        }

    Whoa, you say! Yes, that's an infinite loop! What the heck is going on there? Yes, that was intentional. Would it be better to have a fibonacci sequence that returns only a specific number of items? Perhaps, but that wouldn't give you the power to defer the execution to the caller. The beauty of this function is it is as infinite as the sequence itself! The fibonacci sequence is unbounded, and so is this method. It will continue to return fibonacci numbers for as long as you ask for them. Now that's not something you can do with a traditional method that would return a collection of ints representing each number. In that case you would eventually run out of memory as you got to higher and higher numbers. This method, though, never runs out of memory.
    Now, that said, you do have to know when you use it that it is an infinite collection and bound it appropriately. Fortunately, Linq provides a lot of these extension methods for you! Let's say you only want the first 10 fibonacci numbers:

        foreach (var fib in Fibonacci.Sequence().Take(10))
        {
            Console.WriteLine(fib);
        }

    Or let's say you only want the fibonacci numbers that are less than 100:

        foreach (var fib in Fibonacci.Sequence().TakeWhile(f => f < 100))
        {
            Console.WriteLine(fib);
        }

    So, you see, one of the nice things about iterators is their power to work with virtually any size (even infinite) collections without adding the garbage collection overhead of making new collections. You can also do fun things like this to make a more "fluent" interface for for loops:

        // A set of integer generator extension methods
        public static class IntExtensions
        {
            // Begins counting to infinity; use To() to range this.
            public static IEnumerable<int> Every(this int start)
            {
                // deliberately no end condition: keeps going
                // to infinity for as long as values are pulled.
                for (var i = start; ; ++i)
                {
                    yield return i;
                }
            }

            // Begins counting to infinity by the given step value; use To() to range this.
            public static IEnumerable<int> Every(this int start, int byEvery)
            {
                // deliberately no end condition: keeps going
                // to infinity for as long as values are pulled.
                for (var i = start; ; i += byEvery)
                {
                    yield return i;
                }
            }

            // Counts from the start value up to (and including) the end value.
            public static IEnumerable<int> To(this int start, int end)
            {
                for (var i = start; i <= end; ++i)
                {
                    yield return i;
                }
            }

            // Ranges the count by specifying the upper bound of the count.
            public static IEnumerable<int> To(this IEnumerable<int> collection, int end)
            {
                return collection.TakeWhile(item => item <= end);
            }
        }

    Note that there are two versions of each method: one that starts with an int and one that starts with an IEnumerable<int>. This is to allow more power in chaining from either an existing collection or from an int. This lets you do things like:

        // count from 1 to 30
        foreach (var i in 1.To(30))
        {
            Console.WriteLine(i);
        }

        // count from 0 to 10 by 2s
        foreach (var i in 0.Every(2).To(10))
        {
            Console.WriteLine(i);
        }

        // or, if you want an infinite sequence counting by 5s until something inside breaks you out...
        foreach (var i in 0.Every(5))
        {
            if (someCondition)
            {
                break;
            }
            ...
        }

    Yes, those are kinda play functions and not particularly useful, but they show some of the power of generators and extension methods to form a fluid interface. So what do you think? What are some of your favorite generators and iterators?
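    To make the allocation difference concrete, here is a small side-by-side sketch of my own (not from the original post) of the two approaches to the doubling example:

        using System;
        using System.Collections.Generic;

        public static class DoublingDemo
        {
            // Traditional approach: allocates a second list of source.Count items
            // that must eventually be garbage collected.
            public static List<int> DoubleAll(List<int> source)
            {
                var result = new List<int>(source.Count);
                foreach (var item in source)
                {
                    result.Add(item * 2);
                }
                return result;
            }

            // Iterator approach: yields each doubled value on demand;
            // no second collection is ever allocated.
            public static IEnumerable<int> DoubleEach(IEnumerable<int> source)
            {
                foreach (var item in source)
                {
                    yield return item * 2;
                }
            }
        }

    Both consume the same 1,000,000 items, but only the first pins a second 1,000,000-element list in memory until the collector reclaims it.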

    Read the article

  • Summit Time!

    - by Ajarn Mark Caldwell
    Boy, how time flies! I can hardly believe that the 2011 PASS Summit is just one week away. Maybe it snuck up on me because it's a few weeks earlier than last year. Whatever the cause, I am really looking forward to next week. The PASS Summit is the largest SQL Server conference in the world, with a fantastic networking opportunity thrown in for no additional charge. Here are a few thoughts to help you maximize the week.

    Networking

    As Karen Lopez (blog | @DataChick) mentioned in her presentation for the Professional Development Virtual Chapter just a couple of weeks ago, "Don't wait until you need a new job to start networking." You should always be working on your professional network. Some people, especially technical-minded people, get confused by the term networking. The first image that used to pop into my head was the image of some guy standing, awkwardly, off to the side of a cocktail party, trying to schmooze those around him. That's not what I'm talking about. If you're good at that sort of thing, and you can strike up a conversation with some stranger and learn all about them in 5 minutes, and walk away with your next business deal all but approved by the lawyers, then congratulations. But if you're not, and most of us are not, I have two suggestions for you. First, register for Don Gabor's 2-hour session on Tuesday at the Summit called Networking to Build Business Contacts. Don is a master at small talk, and at teaching others, and in just those two short hours will help you with important tips about breaking the ice, remembering names, and smooth transitions into and out of conversations. Then go put that great training to work right away at the Tuesday night Welcome Reception and meet some new people, which is really my second suggestion…just meet a few new people. You see, "networking" is about meeting new people and being friendly without trying to "work it" to get something out of the relationship at this point. In fact, Don will tell you that a better way to build the connection with someone is to look for some way that you can help them, not how they can help you. There are a ton of opportunities as long as you follow this one key point: Don't stay in your hotel! At the least, get out and go to the free events such as the Tuesday night Welcome Reception, the Wednesday night Exhibitor Reception, and the Thursday night Community Appreciation Party. All three of these are perfect opportunities to meet other professionals with a similar job or interest as you, and you never know how that may help you out in the future. Maybe you just meet someone to say HI to at breakfast the next day instead of eating alone. Or maybe you cross paths several times throughout the Summit and compare notes on different sessions you attended. And you just might make new friends that you look forward to seeing year after year at the Summit. Who knows, it might even turn out that you have some specific experience that will help out that other person a few months from now when they run into the same challenge that you just overcame, or vice-versa. But the point is, if you don't get out and meet people, you'll never have the chance for anything else to happen in the future.
    One more tip for shy attendees of the Summit…if you can't bring yourself to strike up conversation with strangers at these events, then at the least, after you sit through a good session that helps you out, go up to the speaker, introduce yourself, and thank them for taking the time and effort to put together their presentation. Ideally, when you do this, tell them WHY it was beneficial to you (e.g. "Now I have a new idea of how to tackle a problem back at the office."). I know you think the speakers are all full of confidence and are always receiving a ton of accolades and applause, but you're wrong. Most of them will be very happy to hear first-hand that all the work they put into getting ready for their presentation is paying off for somebody.

    Training

    With over 170 technical sessions at the Summit, training is what it's all about, and the training is fantastic! Of course there are the big-name trainers like Paul Randall, Kimberly Tripp, Kalen Delaney, Itzik Ben-Gan and several others, but I am always impressed by the quality of the training put on by so many other "regular" members of the SQL Server community. It is amazing how you don't have to be a published author or otherwise recognized as an "expert" in an area in order to make a big impact on others just by sharing your personal experience and lessons learned. I would rather hear the story of, and lessons learned from, "some guy or gal" who has actually been through an issue and came out the other side, than I would a trained professor who is speaking just from theory or an intellectual understanding of a topic. In addition to the three full days of regular sessions, there are also two days of pre-conference intensive training available. There is an extra cost to this, but it is a fantastic opportunity. Think about it…you're already coming to this area for training, so why not extend your stay a little bit and get some in-depth training on a particular topic or two? I did this for the first time last year. I attended one day of extra training and it was well worth the time and money. One of the best reasons for it is that I am so extremely busy at home with my regular job and family that it was hard to carve out the time to learn about the topic on my own. It worked out so well last year that I am doubling up and doing two days of "pre-cons" this year. And then there are the DVDs. I think these are another great option. I used the online schedule builder to get ready and have an idea of which sessions I want to attend and when they are (much better than trying to figure this out at the last minute every day). But the problem that I have run into (seems this happens every year) is that nearly every session block has two different sessions that I would like to attend. And some of them have three! ACK! That won't work! What is a guy supposed to do? Well, one option is to purchase the DVDs, which are recordings of the audio and projected images from each session, so you can continue to attend sessions long after the Summit is officially over. Yes, many (possibly all) of these also get posted online and attendees can access those for no extra charge, but those are not necessarily all available as quickly as the DVD recordings are, and the DVDs are often more convenient than downloading, especially if you want to share the training with someone who was not able to attend in person. Remember, I don't make any money or get any other benefit if you buy the DVDs or from anything else that I have recommended here.
These are just my own thoughts, trying to help out based on my experiences from the 8 or so Summits I have attended.  There is nothing like the Summit.  It is an awesome experience, fantastic training, and a whole lot of fun which is just compounded if you’ll take advantage of the first part of this article and make some new friends along the way.

    Read the article

  • CodePlex Daily Summary for Friday, August 22, 2014

    CodePlex Daily Summary for Friday, August 22, 2014

    Popular Releases

    QuickMon: Version 3.22: This release adds two important changes: 1. Config variables at the monitor pack level (global to the entire monitor pack for all Collectors). 2. The QuickMon (Windows) service now automatically reloads monitor packs that have been changed since it was started. This means you don't have to restart the service for changes to take effect.

    SSIS ReportGeneratorTask: ReportGenerator Task 1.8: New version of the SSIS Report Generator Task that supports SQL Server 2008, 2012 and 2014. In addition to minor bug fixes, Multi-Value Parameters and Execution Information were integrated. The complete variable and parameter assignment is now a string and can be set dynamically.

    Corefig for Windows Server 2012 Core and Hyper-V Server 2012: Corefig 1.1.2 ISO: Fixes: Updated Hyper-V scripts to use version 2 of the WMI tree. Updated the Hyper-V check for saved VMs to look for the proper identifier. Fixed text issues with the licensing tab (thanks to briangw for rooting this problem out). Enhancements: New (and improved) version number in Corefig.psd1.

    Outlook 2013 Backup Add-In: Outlook Backup Add-In 1.3: Changelog for the new version: Added button in config window to reset the last backup time (this will trigger the backup after closing Outlook). Minimum interval set to 0 (backup at each closing of Outlook). Catch exception when a data store entry is corrupt. Added two parameters (prefix and suffix) to automatically rename the backup file. Updated VSTO runtime to 10.0.50325. Upgraded project to Visual Studio 2013. Added optional command to run after backup (e.g. pack backup files, ...). Add...

    babelua: 1.6.7.0: V1.6.7.0 - 2014.8.21. New feature: added a file search window (Ctrl+1 or Alt+L), like the file search in VC Assistant. Stability improvements: performance improvement when BabeLua loads/unloads; performance improvement when the debugger loads lua files.

    File Explorer for WPF: FileExplorer3_20August2014: Please see Aug14 Update.

    Open NFe: RDI Open NFe 3.0 (alpha): Update to layout 3.10 of the NFe.

    ODBC Connect: v1.0: ODBC Connect executables for both 32-bit and 64-bit ODBC data sources.

    MSSQL Deployment Tool: Microsoft SQL Deploy Tool v1.3.1: MicrosoftSqlDeployTool: v1.3.1.38348. What's changed? Updated namespace and assembly name. Bug fixing.

    SharePoint 2013 Search Query Tool: SharePoint 2013 Search Query Tool v2.1: Layout improvements. Bug fixes. Stores auth method and user name. Moved experimental settings to the Advanced box.

    CtrlAltStudio Viewer: CtrlAltStudio Viewer 1.2.2.41183 Alpha: This alpha of the CtrlAltStudio Viewer provides some preliminary Oculus Rift DK2 support. For more details, see the release notes linked to below. Release notes: http://ctrlaltstudio.com/viewer/release-notes/1-2-2-41183-alpha Support info: http://ctrlaltstudio.com/viewer/support Privacy policy: http://ctrlaltstudio.com/viewer/privacy Disclaimer: This software is not provided or supported by Linden Lab, the makers of Second Life.

    HDD Guardian: HDD Guardian 0.6.1: New: package now includes smartctl 6.3. Removed: standard notification e-mail; now you have to set your mail server to send e-mail alerts. Bugfix: USB detection error; custom e-mail server settings issue; bottom panel displays a wrong ATA error count.

    VG-Ripper & PG-Ripper: VG-Ripper 2.9.62: Changes: NEW: Added support for 'MadImage.org' links. NEW: Added support for 'ImgSpot.org' links. NEW: Added support for 'ImgClick.net' links. NEW: Added support for 'Imaaage.com' links. NEW: Added support for 'Image-Bugs.com' links. NEW: Added support for 'Pictomania.org' links. NEW: Added support for 'ImgDap.com' links. NEW: Added support for 'FileSpit.com' links. FIXED: 'ImgSee.me' links.

    Magick.NET: Magick.NET 7.0.0.0001: Magick.NET linked with ImageMagick 7-Beta.

    CMake Tools for Visual Studio: CMake Tools for Visual Studio 1.2: This release adds the following new features and bug fixes from CMake Tools for Visual Studio 1.1: Added support for CMake 3.0. Added support for word completion. Added IntelliSense support for the CMAKE_HOST_SYSTEM_INFORMATION command. Fixed syntax highlighting for tokens beginning with escape sequences. Fixed issue uninstalling CMake Tools for Visual Studio after Visual Studio has been uninstalled.

    GW2 Personal Assistant Overlay: GW2 Personal Assistant Overlay 1.1: Overview: 1.1 is the second 'stable' release of the GW2 Personal Assistant Overlay. This version includes just a couple of very minor features and some minor bug fixes. For details regarding installation, setup, and general use, see Documentation. Note: If you were using a previous version, you will probably want to copy over the following user settings files: GW2PAO.DungeonSettings.xml, GW2PAO.EventSettings.xml, GW2PAO.WvWSettings.xml, GW2PAO.ZoneCompletionSettings.xml. New Features: Added new "No...

    Fluentx: Fluentx v1.5.3: Added a few more extension methods.

    fastJSON: v2.1.2: 2.1.2 - bug fix: circular references.

    JPush.NET: JPush Server SDK 1.2.1 (For JPush V3): Assembly: 1.2.1.24728. JPush REST API Version: v3. JPush Documentation Reference. .NET framework: v4.0 or above. Sample: class JPushClientV3. 2014 August 15th.

    SEToolbox: SEToolbox 01.043.008 Release 1: Changed ship/station names to use new DisplayName instead of Beacon/Antenna. Fixed issue with updated SE binaries 01.043.018 using new Voxel Material definitions.

    New Projects

    1thManage: GDT for everyone.

    CreateProjectOnCodePlex: This is the first project for CoderCamps.

    HEAD FIRST C# LAB 1: A DAY AT THE RACES: This has been provided for educational purposes and general discussion to improve coding practices associated with the resources detailed within Head First C#.

    Introduce Audit logging to your EF application using Repository & Unit of Work: Introduce auditing in your application that uses Entity Framework by utilizing the Repository and Unit of Work design patterns.

    License Registration (C++): Allows you to create a demo version and activate (or not) a module.

    MS Word SharepointWiki Plugin: The scope of the plugin is to enable a post to a Sharepoint Wiki from within MS Word with formatted text and images.

    Send My Zip: This app will help you send files that were zipped, then send the email with the password information. This project is currently in setup mode and only available

    winhttp: This is a project for http/https download.

    Wix Builder: WixBuilder focuses on easily generating a WiX script from a project output, then compiling and linking it into an msi installer using the WiX Toolset.

    XiamiSig: ????????。

    Read the article

  • Windows Phone Developer Spotlight: Nikolai Joukov

    - by Lori Lalonde
    Originally posted on: http://geekswithblogs.net/lorilalonde/archive/2014/06/04/windows-phone-developer-spotlight-nikolai-joukov.aspx

    As part of an ongoing series, I plan to include a spotlight post on people within the community that are stars in their field and area of expertise. For my first spotlight post, I interviewed Nikolai Joukov, who is a regular attendee at my local area .NET User Group (CTTDNUG), and has also participated in many of the Mobile and Cloud workshops we have hosted over the past few years. Nikolai stood out immediately because of his passion for developing mobile apps, his interest in continuous learning, and his drive to publish quality apps that people will find useful and entertaining.

    Background: Nikolai immigrated to Canada in 1995, and has been working in IT since 1997. He moved on to become an independent contractor in 2005, and has worked at various large scale organizations over the course of his career, including BMO, Enbridge, Economical Insurance, Equitable Life, Manulife and Sun Life. Nikolai is an accomplished Windows Phone and Windows Store publisher, with 11 published Windows Phone apps and 8 published Windows Store apps. He has almost 6000 downloads and favourable reviews.

    Q & A with Nikolai

    How many years have you been developing Windows Phone apps?
    2 years.

    When did you develop your very first Windows Phone app, and what was it about?
    Actually, the very first app I wrote was for the Microsoft "Smart Phone" back in 2004. This phone was given to me by Microsoft during the Developers Days Conference in Toronto. It was some kind of experimental model named Smart Phone, but you had to use VB 3 to develop the applications. Needless to say, this was not very successful at that time. My app was a Stock Trades Calculator. Very primitive, but it was working for me. The phone was heavy and the battery barely lasted 4 hours. Microsoft stopped supporting it a few months later and the phone stopped working shortly after, but I still have it as a souvenir. For Windows Phone, my first app was "Trip Packing Assistant". This is a simple trip packing check list that allows you to list items by category, set the required quantity of items, and mark off the item in the list when it is packed. I designed it for me and my wife Galina, since we love to travel and this program helps manage our list for us.

    How did you get started in Windows Phone development?
    I have to say thanks to our .NET User Group for introducing me to Windows Phone development. I was intrigued and decided to give it a try. In October 2012, during a 2 day training event that ObjectSharp hosted in London, I met Bruce Johnson. On his advice, I registered for Developer Movement, and it was a good push to actually complete the apps that I started.

    You have a great series of travel guide apps both for Windows Phone and Windows Store. Tell us about how you came up with the idea to develop those apps and what process you went through to put it all together.
    Like I said earlier, my wife and I love to travel. Before I created Trip Packing Assistant, every time we were planning to travel somewhere new, Galina would spend 3-4 weeks doing research. She would create a Word document with all of the information. We didn't want to have to carry our laptop with us all the time, so we printed out the Word document she created and would take it with us. After we returned from the trip, we would bring back tons of pictures and materials.
    Then our friends started to ask us about our materials before they planned their trips to the same places we had visited. So I decided to give it a try and started making apps for Windows Phone and for Windows 8. I hope these applications will help people who are planning to travel.

    So, all of the pictures used in the travel apps you created were actually taken by you during these amazing trips?
    Yes.

    Do you have another Windows Store/Windows Phone project in development right now? If so, can you give us a hint at what it will be about?
    I want to stay with travel apps for now. But this time I will try to write an app for us (Galina and me). Usually we go on the trip, then I write the apps after we have all these beautiful pictures in our hands. We are planning a trip to Rome. This app will not have the pictures, but I want to add a map with points of interest and all information that can be useful for us. Then we will go on our trip and test it on location. As well, I am planning to work on my existing apps to make them better.

    What learning resources would you recommend for other developers that want to get started in Windows Store and Windows Phone development?
    I would start with dev.windowsphone.com to get all the tools and samples, plus links to training materials. I like MVA (Microsoft Virtual Academy). Their videos are really useful and it is free. Pluralsight is good too, but it is not free and I do not have a subscription anymore. Our .NET User Group meetings give good insights too. I went to all meetings and full day training events. When you start to develop your app, you need to do research for specific questions that arise during development. The Developer Portal and Nokia Developer are good resources too.

    Wrap Up

    Thanks Nikolai for participating in my first Spotlight blog post! Shown below is Nikolai's publisher page in the Windows Phone Store and his publisher page in the Windows Store. Simply click on it to be taken there to check out his portfolio of apps. Be sure to download his apps and try them out! They are all free!

    Nikolai's Windows Phone apps   Nikolai's Windows Store Apps

    Read the article

  • Testing Workflows – Test-After

    - by Timothy Klenke
    Originally posted on: http://geekswithblogs.net/TimothyK/archive/2014/05/30/testing-workflows-ndash-test-after.aspx

    In this post I'm going to outline a few common methods that can be used to increase the coverage of your test suite. This won't be yet another post on why you should be doing testing; there are plenty of those types of posts already out there. Assuming you know you should be testing, then comes the problem of how do I actually fit that into my day job. When the opportunity to automate testing comes, do you take it, or do you even recognize it?

    There are a lot of ways (workflows) to go about creating automated tests, just like there are many workflows to writing a program. When writing a program you can do it from a top-down approach, where you write the main skeleton of the algorithm and call out to dummy stub functions, or a bottom-up approach, where the low level functionality is fully implemented before it is quickly wired together at the end. Both approaches are perfectly valid under certain contexts. Each approach you are skilled at applying is another tool in your tool belt. The more vectors of attack you have on a problem – the better. So here is a short, incomplete list of some of the workflows that can be applied to increasing the amount of automation in your testing and level of quality in general. Think of each workflow as an opportunity that is available for you to take.

    Test workflows basically fall into 2 categories: test first or test after. Test first is the best approach. However, this post isn't about the one and only best approach. I want to focus more on the lesser known, less ideal approaches that still provide an opportunity for adding tests. In this post I'll enumerate some test-after workflows. In my next post I'll cover test-first.

    Bug Reporting

    When someone calls you up or forwards you an email with a vague description of a bug, it's usually standard procedure to create or verify a reproduction plan for the bug via manual testing and log that in a bug tracking system. This can be problematic. Reproduction plans, when written down, might skip a step that seemed obvious to the tester at the time, or they might be missing some crucial environment setting. Instead of data entry into a bug tracking system, try opening up the test project and adding a failing unit test to prove the bug (a minimal sketch of such a failing test appears at the end of this post). The test project guarantees that all aspects of the environment are set up properly and no steps are missing. The language in the test project is much more precise than the English that goes into a bug tracking system. This workflow can easily be extended to Enhancement Requests as well as Bug Reporting.

    Exploratory Testing

    Exploratory testing comes in when you aren't sure how the system will behave in a new scenario. The scenario wasn't planned for in the initial system requirements and there isn't an existing test for it. By definition the system behaviour is "undefined". So write a new unit test to define that behaviour. Add assertions to the tests to confirm your assumptions. The new test becomes part of the living system specification that is kept up to date with the test suite.

    Examples

    This workflow is especially good when developing APIs. When you are finally done with your production API, then comes the job of writing documentation on how to consume the API. Good documentation will also include code examples. Don't let these code examples merely exist in some accompanying manual; implement them in a test suite.
    Example tests and documentation do not have to be created after the production API is complete. It is best to write the example code (tests) as you go, just before the production code.

    Smoke Tests

    Every system has a typical use case. This represents the basic, core functionality of the system. If this fails after an upgrade, the end users will be hosed and they will be scratching their heads as to how it could be possible that an update got released with this core functionality broken. The tests for this core functionality are referred to as "smoke tests". It is a good idea to have them automated and run with each build in order to avoid extreme embarrassment and angry customers.

    Coverage Analysis

    Code coverage analysis is a tool that reports how much of the production code base is exercised by the test suite. In Visual Studio this can be found under the Test main menu item. The tool will report a total number for the code coverage, which can be anywhere between 0 and 100%. Coverage analysis shouldn't be used strictly for numbers reporting. Companies shouldn't set minimum coverage targets that mandate that all projects must have at least 80% or 100% test coverage. These arbitrary requirements just invite gaming of the coverage analysis, which makes the numbers useless.

    The analysis tool will break down the coverage by the various classes and methods in projects. Instead of focusing on the total number, drill down into this view and see which classes have high or low coverage. If you are surprised by a low number on a class, this is an opportunity to add tests. When drilling through the classes there will generally be two types of reaction to a surprisingly low test coverage number. The first reaction type is a recognition that there is low hanging fruit to be picked: there may be some classes or methods that aren't being tested, which easily could be. The other reaction type is "OMG". This is where you find a critical piece of code that isn't under test. In both cases, go and add the missing tests.

    Test Refactoring

    The general theme of this post up to this point has been how to add more and more tests to a test suite. I'll step back from that a bit and remind you that every line of code is a liability. Each line of code has to be read and maintained, which costs money. This is true regardless of whether the code is production code or test code. Remember that the primary goal of the test suite is that it be easy to read, so that people can easily determine the specifications of the system. Make sure that adding more and more tests doesn't interfere with this primary goal.

    Perform code reviews on the test suite as often as on production code. Hold the test code up to the same high readability standards as the production code. If the tests are hard to read, then change them. Look to remove duplication: duplicate setup code between two or more test methods can be moved to a shared function. Entire test methods can be removed if it is found that the scenario they test is covered by other tests. It's OK to delete a test that isn't pulling its own weight anymore.

    Remember to only start refactoring when all the tests are green. Don't refactor the tests and the production code at the same time. An automated test suite can be thought of as a double entry book keeping system: the unchanging, passing production code serves as the tests for the test suite while refactoring the tests.
    As with all refactoring, it is best to fit this into your regular work rather than asking for time later to get it done. Fit this into the standard red-green-refactor cycle. The refactor step not only applies to the production code but also to the tests, just not at the same time. Perhaps the cycle should be called red-green-refactor production-refactor tests (not quite as catchy). That about covers most of the test-after workflows I can think of. In my next post I'll get into test-first workflows.
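    To make the bug-reporting workflow concrete, here is a minimal sketch of a failing-test-as-bug-report. It assumes an NUnit-style framework; the InvoiceCalculator class, the bug number, and the bug itself are hypothetical, invented only to keep the example self-contained:

        using NUnit.Framework;

        // Hypothetical system under test, included only so the example compiles.
        public class InvoiceCalculator
        {
            private readonly decimal discountPercent;

            public InvoiceCalculator(decimal discountPercent)
            {
                this.discountPercent = discountPercent;
            }

            public decimal Total(decimal orderAmount)
            {
                var discounted = orderAmount - (orderAmount * discountPercent / 100m);
                // The defect: the discount is applied a second time at exactly 100.
                if (orderAmount == 100m)
                {
                    discounted -= orderAmount * discountPercent / 100m;
                }
                return discounted;
            }
        }

        [TestFixture]
        public class BugReports
        {
            // Bug #4711: discount applied twice when the order total is exactly 100.
            // This failing test *is* the bug report: it pins down the setup, the
            // steps, and the expected result far more precisely than prose could.
            [Test]
            public void Discount_IsAppliedOnlyOnce_WhenTotalIsExactlyOneHundred()
            {
                var calculator = new InvoiceCalculator(discountPercent: 10m);

                Assert.AreEqual(90m, calculator.Total(orderAmount: 100m)); // currently returns 80
            }
        }

    Once the production code is fixed, the test goes green and stays in the suite as a regression guard.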

    Read the article

  • BRE (Business Rules Engine) Data Services is out...!!!

    - by Vishal
    A few months ago we at Tellago open sourced the BizTalk Data Services. We were meanwhile working on other artifacts that come along with BizTalk Server, like the Business Rules Engine. We are happy to announce the first version of BRE Data Services. BRE Data Services is the same concept we covered with BTS Data Services: providing a RESTful OData-based API to interact with the Business Rules Engine via HTTP, using the Atom Publishing Protocol or JSON as the encoding mechanism.

    In this first version release, we mainly focused on browsing, querying and searching BRE artifacts via a RESTful interface. Along with that, we provide the functionality to execute Business Rules by inserting the Facts for policies via the IUpdatable implementation of WCF Data Services.

    The BRE Data Services API provides a lightweight interface for managing Business Rules Engine artifacts such as Policies, Rules, Vocabularies, Conditions, Actions, Facts, etc. The following examples detail some of the available features in the current version of the API.

    Basic Querying:

    Querying BRE Policies: http://localhost/BREDataServices/BREMananagementService.svc/Policies
    Querying BRE Rules: http://localhost/BREDataServices/BREMananagementService.svc/Rules
    Querying BRE Vocabularies: http://localhost/BREDataServices/BREMananagementService.svc/Vocabularies

    Navigation:

    The BRE Data Services API also leverages WCF Data Services to enable navigation across different related BRE objects.

    Querying a specific Policy: http://localhost/BREDataServices/BREMananagementService.svc/Policies('PolicyName')
    Querying a specific Rule: http://localhost/BREDataServices/BREMananagementService.svc/Rules('RuleName')
    Querying all Rules under a Policy: http://localhost/BREDataServices/BREMananagementService.svc/Policies('PolicyName')/Rules
    Querying all Facts under a Policy: http://localhost/BREDataServices/BREMananagementService.svc/Policies('PolicyName')/Facts
    Querying all Actions for a specific Rule: http://localhost/BREDataServices/BREMananagementService.svc/Rules('RuleName')/Actions
    Querying all Conditions for a specific Rule: http://localhost/BREDataServices/BREMananagementService.svc/Rules('RuleName')/Conditions
    Querying a specific Vocabulary: http://localhost/BREDataServices/BREMananagementService.svc/Vocabularies('VocabName')

    Implementation:

    With BRE Data Services, we also provide the functionality of executing a particular policy via HTTP. There are a couple of ways you can do that through the API.

    The first is through the Service Operations feature of WCF Data Services, in which you can execute a policy by passing the Facts in the URL itself. This is a very simple implementation of executing the policies, due to the current limitations and restrictions of WCF Data Services Service Operations (only primitive types can be passed as input parameters). Below is a code sample, followed by a traced request/response message.

    The second is through the IUpdatable interface of WCF Data Services. In this method, you first query the rule you want to execute, then insert Facts for that particular Rule, and finally, when you perform the SaveChanges() call on the IUpdatable API, it executes the policy with the facts you inserted at runtime. Below is a sample of client-side code.
    Due to a limitation of the current version of WCF Data Services, there is no way to return updates happening on the service side back to the client via the SaveChanges() method. Here we are executing the rule passing serialized XML as Facts, and there are no changes made to any data that we could query back to fetch the changes. This is overcome through the first way of executing the policies, which is by executing it as a Service Operation call.

    This actually generates an AtomPub message shown as below:

        POST /Tellago.BRE.REST.ServiceHost/BREMananagementService.svc/$batch HTTP/1.1
        User-Agent: Microsoft ADO.NET Data Services
        DataServiceVersion: 1.0;NetFx
        MaxDataServiceVersion: 2.0;NetFx
        Accept: application/atom+xml,application/xml
        Accept-Charset: UTF-8
        Content-Type: multipart/mixed; boundary=batch_6b9a5ced-5ecb-4585-940a-9d5e704c28c7
        Host: localhost:8080
        Content-Length: 1481
        Expect: 100-continue

        --batch_6b9a5ced-5ecb-4585-940a-9d5e704c28c7
        Content-Type: multipart/mixed; boundary=changeset_184a8c59-a714-4ba9-bb3d-889a88fe24bf

        --changeset_184a8c59-a714-4ba9-bb3d-889a88fe24bf
        Content-Type: application/http
        Content-Transfer-Encoding: binary

        MERGE http://localhost:8080/Tellago.BRE.REST.ServiceHost/BREMananagementService.svc/Facts('TestPolicy') HTTP/1.1
        Content-ID: 4
        Content-Type: application/atom+xml;type=entry
        Content-Length: 927

        <?xml version="1.0" encoding="utf-8" standalone="yes"?>
        <entry xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices" xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata" xmlns="http://www.w3.org/2005/Atom">
          <category scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" term="Tellago.BRE.REST.Resources.Fact" />
          <title />
          <author>
            <name />
          </author>
          <updated>2011-01-31T20:09:15.0023982Z</updated>
          <id>http://localhost:8080/Tellago.BRE.REST.ServiceHost/BREMananagementService.svc/Facts('TestPolicy')</id>
          <content type="application/xml">
            <m:properties>
              <d:FactInstance>&lt;ns0:LoanStatus xmlns:ns0="http://tellago.com"&gt;&lt;Age&gt;10&lt;/Age&gt;&lt;Status&gt;true&lt;/Status&gt;&lt;/ns0:LoanStatus&gt;</d:FactInstance>
              <d:FactType>TestSchema</d:FactType>
              <d:ID>TestPolicy</d:ID>
            </m:properties>
          </content>
        </entry>
        --changeset_184a8c59-a714-4ba9-bb3d-889a88fe24bf--
        --batch_6b9a5ced-5ecb-4585-940a-9d5e704c28c7--

    Installation:

    The installation of BRE Data Services is pretty straightforward.
    - Create a new IIS website, say BREDataServices.
    - Download the source code from Tellago Codeplex and copy the content from Tellago.BRE.REST.ServiceHost to the physical location of the above created website.
    - The app pool account running the website should have admin access to the BizTalkRuleEngineDb database.
    - Right-click the BREManagementService.svc in the IIS content view for the website and voila...

    Conclusion:

    The BRE Data Services API is an experiment intended to bring the capabilities of RESTful/OData-based services to traditional BTS/BRE solutions. Future releases will target technologies like BAM and the ESB Toolkit. This version has been tested with various versions of BizTalk Server, and we have uploaded the source code to Tellago's DevLabs workspace at Codeplex. I hope you guys enjoy this release. Keep an eye on our new releases at Tellago Codeplex. We are working on various other BizTalk artifacts like BAM and the ESB Toolkit.

    Till then, happy BizzRuling…!!!

    Thanks,

    Vishal Mody
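    As a rough illustration of consuming the feed from .NET, here is a minimal client sketch of my own (not part of the BRE Data Services release), assuming the service is hosted at the localhost address used in the examples above:

        using System;
        using System.IO;
        using System.Net;

        class BreClient
        {
            static void Main()
            {
                // Request the Policies feed; the Accept header asks for JSON
                // instead of the default AtomPub encoding.
                var request = (HttpWebRequest)WebRequest.Create(
                    "http://localhost/BREDataServices/BREMananagementService.svc/Policies");
                request.Accept = "application/json";

                using (var response = request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                {
                    Console.WriteLine(reader.ReadToEnd()); // dump the raw feed
                }
            }
        }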

    Read the article

  • Sorting/Paginating/Filtering Complex Multi-AR Object Tables in Rails

    - by Matt Rogish
    I have a complex table pulled from a multi-ActiveRecord object array. This listing is a combined display of all of a particular user's "favorite" items (songs, messages, blog postings, whatever). Each of these items is a full-fledged AR object. My goal is to present the user with a simplified search, sort, and pagination interface. The user need not know that the Song has a singer, and that the Message has an author – to the end user both entries in the table will be displayed as "User". Thus, the search box will simply be a dropdown list asking them which field to search on (user name, created at, etc.). Internally, I would need to convert that to the appropriate object search, combine the results, and display.

    I can, separately, do pagination (mislav will_paginate), sorting, and filtering, but together I'm having some problems combining them. For example, if I paginate the combined list of items, the pagination plugin handles it just fine. It is not efficient since the pagination is happening in the app vs. the DB, but let's assume the intended use-case would indicate the vast majority of the users will have less than 30 favorited items, and all other behavior, server capabilities, etc. indicate this will not be a bottleneck.

    However, if I wish to sort the list I cannot sort it via the pagination plugin, because it relies on the assumption that the result set is derived from a single SQL query, and also that the field name is consistent throughout. Thus, I must sort the merged array via Ruby, e.g. @items.sort_by{ |i| i.whatever }. But, since the items do not share common names, I must first interrogate the object and then call the correct sort. For example, if the user wishes to sort by user name: if the sorted object is a message, I sort by author, but if the object is a song, I sort by singer. This is all very gross and feels quite un-ruby-like. This same problem comes into play with the filter. If the user filters on the "parent item" (the message's thread, the song's album), I must translate that to the appropriate collection object method. Also gross.

    This is not the exact set-up but is close enough. Note that this is a legacy app, so changing it is quite difficult, although not impossible. Also, yes, there is some DRY that can be done, but don't focus on the style or elegance of the following code. Style/elegance of the SOLUTION is important, however! :D

    models:

        class User < ActiveRecord::Base
          ...
          has_and_belongs_to_many :favorite_messages, :class_name => "Message"
          has_and_belongs_to_many :favorite_songs, :class_name => "Song"
          has_many :authored_messages, :class_name => "Message"
          has_many :sung_songs, :class_name => "Song"
        end

        class Message < ActiveRecord::Base
          has_and_belongs_to_many :favorite_messages
          belongs_to :author, :class_name => "User"
          belongs_to :thread
        end

        class Song < ActiveRecord::Base
          has_and_belongs_to_many :favorite_songs
          belongs_to :singer, :class_name => "User"
          belongs_to :album
        end

    controller:

        def show
          u = User.find 123
          @items = Array.new
          @items << u.favorite_messages
          @items << u.favorite_songs
          # etc. etc.
          @items.flatten!
          @items = @items.sort_by{ |i| i.created_at }
          @items = @items.paginate :page => params[:page], :per_page => 20
        end

        def search
          # Assume user is searching for username like 'Bob'
          u = User.find 123
          @items = Array.new
          @items << u.favorite_messages.find( :all, :conditions => "LOWER( author ) LIKE LOWER('%bob%')" )
          @items << u.favorite_songs.find( :all, :conditions => "LOWER( singer ) LIKE ... " )
          # etc. etc.
          @items.flatten!
          @items = @items.sort_by{ |i| } # determine appropriate sorting based on user selection
          @items = @items.paginate :page => params[:page], :per_page => 20
        end

    view:

        #index.html.erb
        ...
        <table>
          <tr>
            <th>Title (sort ASC/DESC links)</th>
            <th>Created By (sort ASC/DESC links)</th>
            <th>Collection Title (sort ASC/DESC links)</th>
            <th>Created At (sort ASC/DESC links)</th>
          </tr>
          <% @items.each do |item| %>
            <%= render( :partial => "message", :locals => { :item => item } ) if item.is_a? Message %>
            <%= render( :partial => "song", :locals => { :item => item } ) if item.is_a? Song %>
          <% end %>
        ...
        </table>

        #message.html.erb
        # shorthand, not real ruby
        print out message title, author name, thread title, message created at

        #song.html.erb
        # shorthand
        print out song title, singer name, album title, song created at

    Read the article

  • ApiChange Is Released!

    - by Alois Kraus
    I have been working on a little tool to simplify my life – and perhaps yours as a developer as well. It is basically a command line tool that allows you to execute queries on your compiled .NET code base. The main purpose is to find out how big the impact of an API change would be if you changed this or that. Now you can do high level operations like:

    - Diff public types for breaking changes.
    - Who uses a method?
    - Who uses a type?
    - Who implements an interface?
    - Who references me?
    - What format has the binary (32/64, Managed C++, Pure IL, Unmanaged)?
    - Search for all event subscribers and unsubscribers.

    A unique feature is the check for event subscription imbalances. Forgotten event subscriptions are the 90% cause of managed memory leaks. The check is done at a per-class level: if one class subscribes to an event more often than it unsubscribes, it is treated as a possible event subscription imbalance. Another unique ability is to search for users of string literals, which allows you to track users of a string constant – something that is not possible otherwise. For incremental builds the ShowRebuildTargets command can be used to identify the dependent targets that need a rebuild after you compile one assembly. It has some heuristics in place to determine the impact of breaking changes and finds out which targets need to be recompiled as well. It has a ton of other features and an API to access these things programmatically, so you can build upon these simple queries and create even better tools. Perhaps we get a Visual Studio plug-in? You can download it from CodePlex here. It works via XCopy deployment: simply let it run and check out the command line help. The best feature, in my opinion, is that the output of nearly all commands can be piped to Excel for further analysis. Since it also reads the pdbs, it can show you the source file name and line number for all matches. There is even a hyperlink to each match which will open Visual Studio. The following picture shows the output of a -whousestype query. The following command checks where types from BaseLibraryV1.dll are used inside DependantLibV1.dll. All matches are printed out with the reason and matching item, along with file and line number.

        ApiChange -whousestype "*" BaseLibraryV1.dll -in DependantLibV1.dll -excel

    The "*" is the actual query, which means all types. The syntax is the same as in C#, just that placeholders are allowed ;-). More info can be found in the Codeplex documentation.

    The tool was developed in a TDD-style manner, which means that it is heavily tested and already used by a quite large user base inside the company I work for. Luckily for you I got the permission to make it public, so you can take advantage of it. It is fully instrumented with tracing: if you find bugs, simply add the -trace command line switch to find out what is failing and send me the output.

    How is it done? Your first guess might be that it uses reflection. Wrong. It is based on Mono Cecil, a free IL parser with a fantastic API to access all internals of a managed assembly. The speed is awesome, and to make it even faster I made the tool heavily multithreaded. The query above executed in 1.8s with the Excel output. On a rather slow machine I can analyze over 1500 assemblies in less than 40s with a very low memory consumption. The true power of Mono Cecil is that I can load an assembly like any other data file.
    I have no problem unloading a file, but if I had used reflection I would need to unload a whole AppDomain just to get rid of one assembly in memory. Just to give you a glimpse of how ApiChange.Api.dll can be used, I show you one of the unit tests:

        public void Can_Find_GenericMethodInvocations_With_Type_Parameters()
        {
            // 1. Create an aggregator to collect our matches
            UsageQueryAggregator agg = new UsageQueryAggregator();

            // 2. This is the type we want to search for. Load it via the type query
            var decimalType = TypeQuery.GetTypeByName(TestConstants.MscorlibAssembly, "System.Decimal");

            // 3. Register the type query which searches for uses of the Decimal type
            new WhoUsesType(agg, decimalType);

            // 4. Search for all users of the Decimal type in the DependandLibV1Assembly
            agg.Analyze(TestConstants.DependandLibV1Assembly);

            // Extract matches and assert
            Assert.AreEqual(2, agg.MethodMatches.Count, "Method match count");
            Assert.AreEqual("UseGenericMethod", agg.MethodMatches[0].Match.Name);
            Assert.AreEqual("UseGenericMethod", agg.MethodMatches[1].Match.Name);
        }

    Many thanks go from here to Jb Evian for the creation of Mono.Cecil. Without this fantastic piece of code it would have been much, much harder. There are other options around, like the Common Compiler Infrastructure Metadata API, which should do the same thing, but it was not a real option since the Microsoft reader did fail on even simple assemblies (at least in September 2009 this was the case). Besides this, I found the CCI APIs much harder to use. The only real competitor was Reflector, which does support many things but does not let me access its cool high-level analyze commands. So I decided to dig into the IL specs, and as a result you can query your compiled binaries from the command line or programmatically. The best thing is you try it out for yourself and give me some feedback on what you miss. If you want to contribute or have a cool idea about what should be added, drop me a mail at A Kraus1@___No [email protected]. There is much more inside the tool that I did not talk about (yet).
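    If you are curious what programming against Mono.Cecil feels like, here is a minimal sketch of my own (using the Cecil 0.9-style API, not ApiChange's internal code) that loads an assembly as plain data and lists its types:

        using System;
        using Mono.Cecil;

        class CecilDemo
        {
            static void Main()
            {
                // The assembly is read like any other data file - no AppDomain
                // is involved, so letting go of it again is trivial.
                AssemblyDefinition assembly = AssemblyDefinition.ReadAssembly("BaseLibraryV1.dll");

                foreach (TypeDefinition type in assembly.MainModule.Types)
                    Console.WriteLine(type.FullName);
            }
        }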

    Read the article

  • CodePlex Daily Summary for Thursday, September 06, 2012

    CodePlex Daily Summary for Thursday, September 06, 2012

    Popular Releases

    menu4web: menu4web 0.4.1 - javascript menu for web sites: This release is for those who believe that global variables are evil. menu4web has been wrapped into the m4w singleton object. Added a "Vertical Tabs" example which illustrates object notation.

    WinRT XAML Toolkit: WinRT XAML Toolkit - 1.2.1: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For a compiled version use NuGet. You can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit. Features: AsyncUI extensions, controls and control extensions, converters, debugging helpers, imaging, IO helpers, VisualTree helpers, samples. Recent changes NOTE: Namespace changes DebugConsol...

    iPDC - Free Phasor Data Concentrator: iPDC-v1.3.1: iPDC suite version 1.3.1, modifications and bugs fixed (from v1.3.0). New user manual for iPDC-v1.3.1 available on websites. Bug resolved: PMU Simulator TCP connection error and hung connection for client (PDC). Now the PMU Simulator (server) can communicate with more than one PDC (clients) over TCP and UDP in parallel. The PMU Simulator now sends the exact data frames as mentioned in the data rate by the user. PMU Simulator data rate has been verified by iPDC database entries and PMU Connection Tes...

    Microsoft SQL Server Product Samples: Database: AdventureWorks OData Feed: The AdventureWorks OData service exposes resources based on specific SQL views. The SQL views are a limited subset of the AdventureWorks database that results in several consuming scenarios: CompanySales, Documents, ManufacturingInstructions, ProductCatalog, TerritorySalesDrilldown, WorkOrderRouting. How to install the sample: You can consume the AdventureWorks OData feed from http://services.odata.org/AdventureWorksV3/AdventureWorks.svc. You can also consume the AdventureWorks OData fe...

    Desktop Google Reader: 1.4.6: Sorting feeds alphabetically is now optional (see preferences window).

    DotNetNuke® Community Edition CMS: 06.02.03: Major Highlights: Fixed issue where mailto: links were not working when sending bulk email. Fixed issue where users did not see friendship relationships. Problem is in 6.2, which does not show in the Versions Affected list above. Fixed the issue with cascade deletes in comments in CoreMessaging_Notification. Fixed UI issue when using a date field as a required profile property during user registration. Fixed error when running the product in debug mode. Fixed visibility issue when...

    Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.65: Fixed null-reference error in the build task constructor.

    Active Forums for DotNetNuke CMS: Active Forums 5.0.0 RC: RC release of Active Forums 5.0.

    Droid Explorer: Droid Explorer 0.8.8.7 Beta: Bug in the display icon for apk's, will fix with next release. Added fallback icon if unable to get the image/icon from the Cloud Service. Removed some stale plugins that were either outdated or incomplete.
    Added handler for *.ab files for restoring backups. Added plugin to create device backups. Backups stored in %USERPROFILE%\Android Backups\%DEVICE_ID%\. Added custom folder icon for the android backups directory. Better error handling for installing an apk. Bug fixes for the Runn...

    BI System Monitor: v2.1: Data Audits report and supporting SQL, and SSIS package. Environment Overview report enhancements, improving the appearance, addition of data audit finding indicators. Note: SQL 2012 version coming soon.

    Hidden Capture (HC): Hidden Capture 1.1: Hidden Capture 1.1 by Mohsen E.Dawatgar http://Hidden-Capture.blogfa.com

    Ext Spec: Ext Spec 0.2.1: Refined examples and improved distribution options.

    The Visual Guide for Building Team Foundation Server 2012 Environments: Version 1: --

    Nearforums - ASP.NET MVC forum engine: Nearforums v8.5: Version 8.5 of Nearforums, the ASP.NET MVC Forum Engine. New features include: built-in search engine using Lucene.NET, flood control improvements, notifications improvements (sync option and mail body). View Roadmap for more details. webdeploy package sha1 checksum: 961aff884a9187b6e8a86d68913cdd31f8deaf83

    WiX Toolset: WiX Toolset v3.6: WiX Toolset v3.6 introduces the Burn bootstrapper/chaining engine and support for Visual Studio 2012 and .NET Framework 4.5. Other minor functionality includes: WixDependencyExtension supports dependency checking among MSI packages. WixFirewallExtension supports more features of Windows Firewall. WixTagExtension supports Software Id Tagging. WixUtilExtension now supports recursive directory deletion. Melt simplifies pure-WiX patching by extracting .msi package content and updating .w...

    Iveely Search Engine: Iveely Search Engine (0.2.0): ????ISE?0.1.0??,?????,ISE?0.2.0?????????,???????,????????20???follow?ISE,????,??ISE??????????,??????????,?????????,?????????0.2.0??????,??????????。 Iveely Search Engine ?0.2.0?????????"??????????",??????,?????????,???????,???????????????????,????、????????????。???0.1.0????????????: 1. ??"????" ??。??????????,?????????,???????????????????。??:????????,????????????,??????????????????。??????。 2. ??"????"??。?0.1.0??????,???????,???????????????,?????????????,????????,?0.2.0?,???????...

    GmailDefaultMaker: GmailDefaultMaker 3.0.0.2: Add QQ Mail. Bugfix.

    Smart Data Access layer: Smart Data Access Layer Ver 3: In this version, support for executing inline queries is added. Check the Documentation section for details.

    DotNetNuke® Form and List: 06.00.04: DotNetNuke Form and List 06.00.04. Don't forget to backup your installation before upgrade. Changes in 06.00.04: Fix: SQL scripts for 6.003 missed object qualifiers within stored procedures. Fix: added missing resource "cmdCancel.Text" in form.ascx.resx. Changes in 06.00.03: Fix: MakeThumbnail was broken if the application pool was configured to .NET 4. Change: data is now stored in nvarchar(max) instead of ntext. Changes in 06.00.02: The scripts are now compatible with SQL Azure, tested in a ne...

    Coevery - Free CRM: Coevery 1.0.0.24: Add a sample database, and installation instructions.

    New Projects

    Any-Service: AnyService is a .NET 4.0 Windows service shell. It hosts any windows application in non-GUI mode to run as a service.

    BabyCloudDrives - the multi cloud drive desktop's application: wpf ????

    BLACK ORANGE: Download the HPAD TEXT EDITOR and use it wisely.

    CodePlex New Release Checker: CodePlex New Release Checker is a small library that makes it easy to add "New Version Available!" functionality to your CodePlex project.

    Collect: ????????!

    CSVManager: CSV??CSV?????,????CSV??,??????

    Exam Project: My Exam Project. Computer Vision, C and OpenCV.

    -FTP: Hey guys thanks for checking out my ftp!

    Haushaltsbuch: 1

    ModMaker.Lua: ModMaker.Lua is an open source .NET library that parses and executes Lua code.

    MyJabbr: MyJabbr

    netduinoscope: Design shield and software to use netduino as an oscilloscope.

    NetSurveillance Web Application: Net Surveillance Web Application

    NiconicoApiHelper: ????API?????????

    OStega: A simple library for encrypting text into a bmp or png image.

    OURORM: orm

    TFS Cloud Deployment Toolkit: The TFS Cloud Deployment Toolkit is a set of tools that integrate with TFS 2010 to help manage configuration and deployment to various remote environments.

    The Visual Guide for Building Team Foundation Server 2012 Environments: A step-by-step guide for building Team Foundation Server 2012 environments that include SharePoint Server 2010, SQL Server 2012, Windows Server 2012 and more!

    WinRT LineChart: An attempt at creating a usable LineChart for everyone to use in his/her own Windows 8 apps.

    Read the article

  • Visual Studio 2010 Productivity Tips and Tricks – Part 1: Extensions

    - by ToStringTheory
    I don’t know about you, but when it comes to development, I prefer my environment to be as free of clutter as possible. It may surprise you to know that I have tried ReSharper, and did not like it, for the reason I just stated: in my opinion, it had too much clutter. Don’t get me wrong, there were a couple of features that I did like (inversion of if blocks, code feedback), but for the most part I actually felt that it was slowing me down.

    Introduction

    Another large factor, besides intrusiveness/speed, in my choice to dislike ReSharper would probably be that I have become comfortable with my current setup and extensions. I believe I have a good collection, and am quite happy with what I can accomplish in a short amount of time. I figured I would share some of my tips/findings regarding Visual Studio productivity here, and see what you had to say.

    The first section I would like to cover is Visual Studio Extensions. In case you have been living under a rock for the past several years, extensions are available under the Tools menu in Visual Studio. The Extension Manager provides integrated access to the Microsoft Visual Studio Gallery online, with access to a few thousand different extensions. I have tried many extensions, but for reasons of lacking reliability, usability, or features, have uninstalled almost all of them. However, I have come across several that I cannot do without anymore:

    NuGet Package Manager (Microsoft)
    Perspectives (Adam Driscoll)
    Productivity Power Tools (Microsoft)
    Web Essentials (Mads Kristensen)

    Extensions

    NuGet Package Manager

    To be honest, I debated whether or not to put this in here. Most people seem to have it; however, there was a time when I didn’t, and I was always confused when blogs/posts would say to right-click and “Add Package Reference…”, which with one of the latest updates is now “Manage NuGet Packages”. So, if you haven’t downloaded the NuGet Package Manager yet, or don’t know what it is, I would highly suggest downloading it now!

    Features: Simply put, the NuGet Package Manager gives you a GUI and a command line for pulling in libraries that have been uploaded to NuGet. Some of its features include:
    Ability to search NuGet for packages via the GUI, with information in the detail bar on the right.
    Quick access to see what packages are in a solution, and which have updates available, with easy one-click updating.
    If you download a package that depends on other NuGet packages, they are downloaded and referenced automatically.

    Productivity Tip: If you use any type of source control in Visual Studio as well as NuGet packages, be sure to right-click on the solution and click "Enable NuGet Package Restore". This adds a NuGet package to the solution so that it is checked in alongside your solution, and automatically grabs packages from NuGet on build if needed (see the short console sketch at the end of this post). It is an extremely simple way to manage your package references, instead of having to manually go into TFS and add the Packages folder.

    Perspectives

    I can't stand developing with just one monitor, especially when it comes to debugging. The great thing about Visual Studio 2010 is that all of the panels and windows are floatable, and can dock to other screens. The only bad thing is, I don't use the same toolset for everything I am doing; I don't use the same windows for debugging a web application as I do for coding a WPF application. The trouble is, Visual Studio doesn't save the screen positions for all of the undocked windows. So I got curious one day and decided to check whether there was an extension to help out. This is where I found Perspectives.

    Features: Perspectives gives you the ability to configure window positions across any of your monitors, and then to save the positions in a profile.
    Perspectives offers a panel to manage different presets/favorites, and a toolbar to add to the toolbars at the top of Visual Studio.
    Ability to 'favorite' a profile to add it to the Perspectives toolbar.

    Productivity Tip: Take the time to set up profiles for each of your scenarios - debugging web/WinForms/XAML, coding, maintenance, etc. Try to remember to use the profiles for a few days, and at the end of a week you may find that your productivity was never better.

    Productivity Power Tools

    Ah, the Productivity Power Tools... quite possibly one of my most used extensions, if not my most used. The tool pack gives you a variety of enhancements ranging from key shortcuts and interface tweaks to completely new features for Visual Studio 2010.

    Features: I don't want to bore you with all of the features, so here are my favorites:
    Quick Find - unobtrusive search box in the upper-right corner of the code window. Great for searching in general, especially within a file.
    Solution Navigator - the Solution Explorer on steroids. Easy to search for files, see defined members/properties/methods in files, and my favorite feature is the 'set as root' option.
    Updated 'Add Reference...' dialog - this is probably my favorite enhancement, period: the 'Add Reference...' dialog redone in a manner that resembles the Extension/Package managers. I especially love the ability to search through all of the references.
    "Ctrl-Click" for definition - I am still getting used to this, as I usually try to use my keyboard for everything, but I love the ability to hold Ctrl and turn properties/methods/variables into hyperlinks that you click to see their definitions. Great for travelling down a rabbit hole in an application to research problems.
    While there are other commands/utilities, I find these to be the ones that I lean on the most for their usefulness.

    Web Essentials

    If you do any type of web development in ASP.NET, ASP.NET MVC, or even plain HTML, I highly suggest grabbing Web Essentials right NOW! This extension alone is great for productivity in web development, and greatly decreases my development time on new features.

    Features: Some of its best features include:
    CSS previews - I say 'previews' because of the multiple kinds of previews you get in CSS: font-family, color, and background/background-image previews. Great for tweaking the UI slightly in different ways and seeing how it looks in the CSS window at a glance.
    Live Preview - one word: awesome! This goes well with my multi-monitor setup. I put the site in a Live Preview panel on one monitor, and then as I make changes to CSS/cshtml/aspx/html, the preview window updates with each save/build automatically. For CSS you can even turn on live update, so as you tweak CSS the style changes in real time. Great for tweaking colors or font sizes.
    Outlining - small, but I like being able to collapse regions/declarations that are in the way of new work, or are just distracting.
    Commenting shortcuts - I don't know why it wasn't included by default, but it is nice to have the key shortcuts for commenting working in the CSS editor as well.

    Productivity Tip: When working on a site, hit Ctrl-Alt-Enter to launch the Live Preview window and dock it on another monitor. When you make changes to the document/CSS, just save and glance at the other monitor - no need to alt-tab there and back before continuing to edit.

    Conclusion

    These extensions are only the most useful and least intrusive - the ones that I use every day. The great thing about Visual Studio 2010 is the extensibility it offers developers. Have an extension that you use that isn't intrusive, but isn't listed here? Please feel free to comment; I love trying new things, and am always looking for new additions to my toolset of the most useful. Finally, please keep an eye out for Part 2, on key shortcuts in Visual Studio. Also, if you are visiting my site (http://tostringtheory.com || http://geekswithblogs.net/tostringtheory) from an actual browser and not a feed, please let me know what you think of the new styling!
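    A quick sketch of the package workflow from the NuGet tip above, using the Package Manager Console (the package name NLog is just an example, not something from this post):

      PM> Install-Package NLog        # add a package (and its dependencies) to the current project
      PM> Get-Package -Updates        # list installed packages that have newer versions available
      PM> Update-Package NLog         # update one package; omit the name to update them all

    With package restore enabled on the solution, anything listed in packages.config is re-downloaded on build, so the Packages folder itself never needs to be checked into TFS.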

    Read the article

  • I Know What I Did This Summer: Put Down Trex Decking

    - by thatjeffsmith
    If you’re wondering why I would bore everyone with my pictures and frequent status updates/tweets from the past week - it’s so I could document the process of refurbishing my deck, or what some would call a porch. When we go to take a vacation, buy a car, or do anything, we also read personal blogs to get the real story. So, if you’re curious about what it takes to tackle this sort of project, read on.

    Skills/Equipment/Manpower We Possessed

    I took the old decking out by myself. I’m about 230 lbs, more than 6′ tall, and I’m pretty healthy. This took about 8 hours over two afternoons. Three of us put the deck back together. My wife has two engineering degrees. Her father also has two engineering degrees. Lots of brainpower available here. Also, her dad ran the public works department for a county for more than 20 years - so lots and lots of practical experience on hand. We had a compound mitre saw, a skilsaw, 2-3 crowbars, a framing hammer, 3 cordless drills, a corded drill, lots of sawhorses, a power sander, an angle grinder, a 10×10 Coleman canopy tent, a Ford F-150 pickup truck, outdoor speakers and lots of iTunes playlists, plenty of water, and cold beer.

    Why We Did This

    Our deck was relatively young - it was built in 2005. However, the pressure-treated boards must not have been adequately maintained before we bought the house. I had power-washed the deck every other year and had it stained a few times, but the boards just rotted. We’re going to be in the house for a long time, and we wanted something that would look nice and require little maintenance.
    More bad deck boards
    The deck boards were in bad shape

    Things We Learned

    The two most important things:
    The hidden fasteners have to be put in JUST right. Wedge them into the grooved board, then bend down the bit that is screwed down. We didn’t do this on the first board and couldn’t get the second board to fit nearly close enough. Watching the official Trex YouTube video helped immensely, and we should have watched that first.
    When pre-drilling holes for the boards that need to be screwed down - DO NOT pre-drill through the underlying framing wood; ONLY pre-drill through the Trex itself. Otherwise the screw won’t seat in the board properly: instead of sitting down flush with the board, it will stop at the top of the board and just spin. I had to call the place that sold me the screws to find this out. So about a third of our screws look like crap.
    If it doesn’t look or feel right, stop everything and pick up your computer or your phone. It’s not right, and it will be much easier to stop and find out why. We didn’t do this, and now I’m going to see every screw that’s not flush with the boards and get upset. Oh well.

    The Process

    How much time did it take? Well, I spent about 8 hours taking the deck apart. Then the three of us spent 8 hours the first day, 10 hours the second day, 8 hours the third, and another 6 hours on the fourth day. That’s like 104 man-hours. We supposedly saved four or five thousand dollars in labor, but don’t do the math here or you might get a bit upset. The main thing is that we got what we wanted, and there won’t be any surprises later.

    Now for some pictures…
    This 6”+ pry bar made the destruction of the old deck much easier
    Most of the joists, once exposed, were OK.
    This joist wasn’t sitting on ANYTHING before. We think a lazy gas person cut the board to sneak a gas line in. Awesome…
    These monster lag bolts had to be accounted for when putting in the additional framing
    The border pattern Sheri wanted to put in required a lot more framing
    These were the first boards to go down - we screwed them in, as there was no way to attach clips
    I sat, kicked in the boards, and then drilled these clips in - but my wife was able to go MUCH faster by using her hands to lock the boards in and drilling on her knees
    I liked locking the board in with my feet when they needed to be ‘encouraged’ to go straight
    The first board took FOREVER to go in, but once we got rolling we were able to put in a 20′ board in less than 10 minutes
    This was the end of construction day #2 - we got much further than we thought we would
    Ah, the dreaded last 10% - what to do here?
    Remember those ‘floating’ stringers? Yeah, we fixed that up a bit, too.
    My wife used a website (and her brain) to calculate exactly how to cut the stringers to give us the rise/run we needed, with the proper clearance and all that jazz
    The stairs with stringers and toe kicks - this was worth the effort
    It started raining on us as I screwed down the steps - but we managed to get our shade tent up on the deck to protect us from the rain
    The stairs, finished
    Finished, mostly
    Good corner shot
    The top of the stairs
    Stairs, looking down
    Celebratory beer

    In Summary

    There are a few things we’re not happy with. I think we can fix them up - but later. I have a few things left to finish: rewire the lighting, get the gas grille put back in, and rehang some screen doors. I was expecting this to be a lot worse than it was. If I didn’t have the help, I would never have done it myself. But I’m glad that I did have that help and did do that project. It’s not often you get to spend that kind of quality time with family while building cool stuff.

    Read the article

  • CodePlex Daily Summary for Wednesday, September 05, 2012

    Popular Releases

    - Desktop Google Reader: 1.4.6: Sorting feeds alphabetically is now optional (see preferences window)
    - DotNetNuke® Community Edition CMS: 06.02.03: Major highlights: fixed issue where mailto: links were not working when sending bulk email; fixed issue where users did not see friendship relationships (problem is in 6.2, which does not show in the Versions Affected list above); fixed the issue with cascade deletes in comments in CoreMessaging_Notification; fixed UI issue when using a date field as a required profile property during user registration; fixed error when running the product in debug mode; fixed visibility issue when...
    - Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.65: Fixed null-reference error in the build task constructor.
    - BLACK ORANGE: HPAD TEXT EDITOR 0.9 Beta: How to run the text editor: download the HPAD archived files (in .rar format); extract using WinRAR; make sure the extracted files are in the same folder; double-click the HPAD.exe application file
    - TelerikMvcGridCustomBindingHelper: Version 1.0.15.247-RC2: TelerikMvcGridCustomBindingHelper 1.0.15.247 RC2 release notes: This is an RC version (hopefully the last one); please test and report any error or problem you encounter. This release is all about performance and fixes: support for "Or" and "Does Not contain" filter options; improved BooleanSubstitutes, Custom Aggregates and expressions-to-queryover; added EntityFramework examples in ExampleWebApplication; many other improvements and fixes; fixed invalid cast on CustomAggregates; support for ...
    - ServiceMon - Extensible Real-time, Service Monitoring Utility: ServiceMon Release 0.9.0.44: Auto-uploaded from build server
    - JavaScript Grid: Release 09-05-2012: Release 09-05-2012
    - xUnit.net Contrib: xunitcontrib-dotCover 0.6.1 (dotCover 2.1 beta): xunitcontrib release 0.6.1 for dotCover 2.1 beta. This release provides a test runner plugin for dotCover 2.1 beta, targeting all versions of xUnit.net. (See the xUnit.net project to download xUnit.net itself.) This release adds support for running xUnit.net tests to dotCover 2.1 beta's Visual Studio plugin. PLEASE NOTE: you do NOT need this if you also have ReSharper and the existing 0.6.1 release installed; dotCover will use ReSharper's test runners if available. This release includes th...
    - B INI Sharp Library: B INI Sharp Library v1.0.0.0 released: the first release
    - Active Forums for DotNetNuke CMS: Active Forums 5.0.0 RC: RC release of Active Forums 5.0.
    - Droid Explorer: Droid Explorer 0.8.8.7 Beta: Bug in the display icon for apk's, will fix with next release. Added fallback icon if unable to get the image/icon from the Cloud Service. Removed some stale plugins that were either outdated or incomplete. Added handler for *.ab files for restoring backups. Added plugin to create device backups. Backups stored in %USERPROFILE%\Android Backups\%DEVICE_ID%\. Added custom folder icon for the android backups directory. Better error handling for installing an apk. Bug fixes for the Runn...
    - BI System Monitor: v2.1: Data Audits report and supporting SQL and SSIS package; Environment Overview report enhancements, improving the appearance and adding data audit finding indicators. Note: SQL 2012 version coming soon.
    - The Visual Guide for Building Team Foundation Server 2012 Environments: Version 1: --
    - Nearforums - ASP.NET MVC forum engine: Nearforums v8.5: Version 8.5 of Nearforums, the ASP.NET MVC Forum Engine. New features include: built-in search engine using Lucene.NET; flood control improvements; notifications improvements: sync option and mail body. View the Roadmap for more details. webdeploy package sha1 checksum: 961aff884a9187b6e8a86d68913cdd31f8deaf83
    - WiX Toolset: WiX Toolset v3.6: WiX Toolset v3.6 introduces the Burn bootstrapper/chaining engine and support for Visual Studio 2012 and .NET Framework 4.5. Other minor functionality includes: WixDependencyExtension supports dependency checking among MSI packages; WixFirewallExtension supports more features of Windows Firewall; WixTagExtension supports Software Id Tagging; WixUtilExtension now supports recursive directory deletion; Melt simplifies pure-WiX patching by extracting .msi package content and updating .w...
    - Iveely Search Engine: Iveely Search Engine (0.2.0): release notes not recoverable (the original announcement is in Chinese and arrived mis-encoded in this feed).
    - GmailDefaultMaker: GmailDefaultMaker 3.0.0.2: Add QQ Mail; bugfix
    - Smart Data Access layer: Smart Data Access Layer Ver 3: In this version, support for executing inline queries is added. Check the Documentation section for details.
    - DotNetNuke® Form and List: 06.00.04: DotNetNuke Form and List 06.00.04. Don't forget to back up your installation before upgrading. Changes in 06.00.04 - Fix: SQL scripts for 6.003 missed object qualifiers within stored procedures; Fix: added missing resource "cmdCancel.Text" in form.ascx.resx. Changes in 06.00.03 - Fix: MakeThumbnail was broken if the application pool was configured to .NET 4; Change: data is now stored in nvarchar(max) instead of ntext. Changes in 06.00.02 - The scripts are now compatible with SQL Azure, tested in a ne...
    - Coevery - Free CRM: Coevery 1.0.0.24: Add a sample database, and installation instructions.

    New Projects

    - A Simple Eng-Hindi CMS: A simple English-Hindi dual-language content management system for small business/personal websites.
    - Active Social Migrator: This project is for managing the Active Social migration tool.
    - ANSI Console User Control: Custom console control for .NET WinForms
    - AutoSPInstallerGUI: GUI configuration tool for the SPAutoInstaller CodePlex project
    - Code Documentation Checkin Policy: This checkin policy for Visual Studio 2012 checks whether C# code is documented the way it's configured in the policy's config.
    - Code Dojo/Kata - Free Time Coding: Doing some katas from the Coding Dojo page. http://codingdojo.org/cgi-bin/wiki.pl?KataCatalogue
    - fjycUnifyShow: fjycUnifyShow
    - Hidden Capture (HC): HC is a simple and easy utility to hide and auto-capture the desktop or active window
    - HRC Integration Services: Fake SQL Server Integration Services. LOL
    - Kooboo CMS Sites Switcher: Kooboo CMS Sites Switcher
    - Mod.CookieDetector: Orchard module for detecting whether cookies are enabled
    - MyCodes: Created!
    - MySQL Statement Monitor: MySQL Statement Monitor is a monitoring tool that monitors SQL statements transferred over the network.
    - NeoModulusPIRandom: The idea with PI Random is to use easy string manipulation and simple math to generate a pseudo-random number.
    - Net Core Tech - Medical Record System: This is a Medical Record System project
    - OraPowerShell: PowerShell library for backup and maintenance of an Oracle Database environment under Microsoft Windows 2008
    - PinDNN: PinDNN is a module that imparts Pinterest-like functionality to DotNetNuke sites. This module works with a MongoDB database and uses the built-in social relatio...
    - PyroGen Code Generator: PyroGen is a simple code generator accepting C# as the markup language.
    - restMs: will be deleted soon
    - Script.NET: Script.NET is a script management utility for web forms and MVC, using ScriptJS-like features to link dependencies between scripts.
    - SpringExample-Pagination: Simple Spring example with pagination
    - XNA and Component Based Design: This project includes code for XNA and Component Based Design

    Read the article

  • Java: Switch statements with methods involving arrays

    - by Shane
    Hi guys I'm currently creating a program that allows the user to create an array, search an array and delete an element from an array. Looking at the LibraryMenu Method, the first case where you create an array in the switch statement works fine, however the other ones create a "cannot find symbol error" when I try to compile. My question is I want the search and delete functions to refer to the first switch case - the create Library array. Any help is appreciated, even if its likely from a simple mistake. import java.util.*; public class EnterLibrary { public static void LibraryMenu() { java.util.Scanner scannerObject =new java.util.Scanner(System.in); LibraryMenu Menu = new LibraryMenu(); Menu.displayMenu(); switch (scannerObject.nextInt() ) { case '1': { System.out.println ("1 - Add Videos"); Library[] newLibrary; newLibrary = createLibrary(); } break; case '2': System.out.println ("2 - Search Videos"); searchLibrary(newLibrary); break; case '3': { System.out.println ("3 - Change Videos"); //Change video method TBA } break; case '4': System.out.println ("4 - Delete Videos"); deleteVideo(newLibrary); break; default: System.out.println ("Unrecognized option - please select options 1-3 "); break; } } public static Library[] createLibrary() { Library[] videos = new Library[4]; java.util.Scanner scannerObject =new java.util.Scanner(System.in); for (int i = 0; i < videos.length; i++) { //User enters values into set methods in Library class System.out.print("Enter video number: " + (i+1) + "\n"); String number = scannerObject.nextLine(); System.out.print("Enter video title: " + (i+1) + "\n"); String title = scannerObject.nextLine(); System.out.print("Enter video publisher: " + (i+1) + "\n"); String publisher = scannerObject.nextLine(); System.out.print("Enter video duration: " + (i+1) + "\n"); String duration = scannerObject.nextLine(); System.out.print("Enter video date: " + (i+1) + "\n"); String date= scannerObject.nextLine(); System.out.print("VIDEO " + (i+1) + " ENTRY ADDED " + "\n \n"); //Initialize arrays videos[i] = new Library (); videos[i].setVideo( number, title, publisher, duration, date ); } return videos; } public static void printVidLibrary( Library[] videos) { //Get methods to print results System.out.print("\n======VIDEO CATALOGUE====== \n"); for (int i = 0; i < videos.length; i++) { System.out.print("Video number " + (i+1) + ": \n" + videos[i].getNumber() + "\n "); System.out.print("Video title " + (i+1) + ": \n" + videos[i].getTitle() + "\n "); System.out.print("Video publisher " + (i+1) + ": \n" + videos[i].getPublisher() + "\n "); System.out.print("Video duration " + (i+1) + ": \n" + videos[i].getDuration() + "\n "); System.out.print("Video date " + (i+1) + ": \n" + videos[i].getDate() + "\n "); } } public static Library searchLibrary( Library[] videos) { //User enters values to setSearch Library titleResult = new Library(); java.util.Scanner scannerObject =new java.util.Scanner(System.in); for (int n = 0; n < videos.length; n++) { System.out.println("Search for video number:\n"); String newSearch = scannerObject.nextLine(); titleResult.getSearch( videos, newSearch); if (!titleResult.equals(-1)) { System.out.print("Match found!\n" + newSearch + "\n"); } else if (titleResult.equals(-1)) { System.out.print("Sorry, no matches found!\n"); } } return titleResult; } public static void deleteVideo( Library[] videos) { Library titleResult = new Library(); java.util.Scanner scannerObject =new java.util.Scanner(System.in); for (int n = 0; n < videos.length; n++) { 
System.out.println("Search for video number:\n"); String deleteSearch = scannerObject.nextLine(); titleResult.deleteVideo(videos, deleteSearch); System.out.print("Video deleted\n"); } } public static void main(String[] args) { Library[] newLibrary; new LibraryMenu(); } }
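    For what it's worth, the "cannot find symbol" error described above comes from newLibrary being declared inside the first case block, so it is out of scope in the other cases. A minimal sketch of one way around it, hoisting the declaration above the switch (names follow the question; the structure is illustrative only):

      // Declare the array once, before the switch, so every case can see it.
      Library[] newLibrary = null;
      switch (scannerObject.nextInt()) { // nextInt() returns an int, so match 1, 2, ... rather than '1'
          case 1:
              newLibrary = createLibrary();
              break;
          case 2:
              if (newLibrary != null) { // guard: nothing to search before a library is created
                  searchLibrary(newLibrary);
              }
              break;
          // ... remaining cases follow the same pattern
      }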

    Read the article

  • Pagination panel should remain static

    - by fusion
    I've a search form in which the user enters a keyword, and the results are displayed with pagination. Everything works fine, except that when the user clicks the 'Next' button, the pagination panel also disappears while the page retrieves the data through AJAX. How do I make the pagination panel static while the data is being retrieved?

    search.html:

    <form name="myform" class="wrapper"> <input type="text" name="q" id="q" onkeyup="showPage();" class="txt_search"/> <input type="button" name="button" onclick="showPage();" class="button"/> <p> </p> <div id="txtHint"></div> </form>

    ajax:

    var url="search.php"; url += "?q="+str+"&page="+page+"&list="; url += "&sid="+Math.random(); xmlHttp.onreadystatechange=stateChanged; xmlHttp.open("GET",url,true); xmlHttp.send(null); function stateChanged(){ if (xmlHttp.readyState==4 || xmlHttp.readyState=="complete"){ document.getElementById("txtHint").innerHTML=xmlHttp.responseText; } //end if } //end function

    search.php:

    $self = $_SERVER['PHP_SELF']; $limit = 3; //Number of results per page $adjacents = 2; $numpages=ceil($totalrows/$limit); $query = $query." ORDER BY idQuotes LIMIT " . ($page-1)*$limit . ",$limit"; $result = mysql_query($query, $conn) or die('Error:' .mysql_error()); ?> <div class="search_caption">Search Results</div> <div class="search_div"> <table class="result"> <?php while ($row= mysql_fetch_array($result, MYSQL_ASSOC)) { $cQuote = highlightWords(htmlspecialchars($row['cQuotes']), $search_result); ?> <tr> . . .display results. . . </tr> <?php } ?> </table> </div> <hr> <div class="searchmain"> <?php //Create and print the Navigation bar $nav=""; $next = $page+1; $prev = $page-1; if($page > 1) { $nav .= "<a onclick=\"showPage('','$prev'); return false;\" href=\"$self?page=" . $prev . "&q=" .urlencode($search_result) . "\">< Prev</a>"; $first = "<a onclick=\"showPage('','1'); return false;\" href=\"$self?page=1&q=" .urlencode($search_result) . "\"> << </a>" ; } else { $nav .= "&nbsp;"; $first = "&nbsp;"; } for($i = 1 ; $i <= $numpages ; $i++) { if($i == $page) { $nav .= "<span class=\"no_link\">$i</span>"; }else{ $nav .= "<a onclick=\"showPage('',$i); return false;\" href=\"$self?page=" . $i . "&q=" .urlencode($search_result) . "\">$i</a>"; } } if($page < $numpages) { $nav .= "<a onclick=\"showPage('','$next'); return false;\" href=\"$self?page=" . $next . "&q=" .urlencode($search_result) . "\">Next ></a>"; $last = "<a onclick=\"showPage('','$numpages'); return false;\" href=\"$self?page=$numpages&q=" .urlencode($search_result) . "\"> >> </a>"; } else { $nav .= "&nbsp;"; $last = "&nbsp;"; } echo $first . $nav . $last; ?> </div>
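    One way to keep the panel on screen, sketched under the assumption that the pagination links currently live inside the txtHint element that the AJAX handler rewrites: give the pager its own container that the response handler never blanks out. The results/pager element ids below are invented for illustration, and search.php would need to emit the two wrapper elements.

      // Hypothetical variant of stateChanged(): split the response into a results
      // part and a pager part, and update each container separately so the
      // pagination block is never wiped while a request is in flight.
      function stateChanged() {
          if (xmlHttp.readyState == 4) {
              var wrapper = document.createElement("div");
              wrapper.innerHTML = xmlHttp.responseText;
              document.getElementById("results").innerHTML =
                  wrapper.querySelector("#resultsPart").innerHTML;
              document.getElementById("pager").innerHTML =
                  wrapper.querySelector("#pagerPart").innerHTML;
          }
      }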

    Read the article

  • Where is this function getting its values from?

    - by user295189
    I have the JS file below that I am working on and I have a need to know this specific function pg.getRecord_Response = function(){ } within the file. I need to know where are the values are coming from in this function for example arguments[0].responseText? I am new to javascript so any help will be much appreciated. Thanks var pg = new Object(); var da = document.body.all; // ===== - EXPRESS BUILD [REQUEST] - ===== // pg.expressBuild_Request = function(){ var n = new Object(); n.patientID = request.patientID; n.encounterID = request.encounterID; n.flowSheetID = request.flowSheetID; n.encounterPlan = request.encounterPlan; n.action = "/location/diagnosis/dsp_expressBuild.php"; n.target = popWinCenterScreen("/common/html/empty.htm", 619, 757, ""); myLocationDB.PostRequest(n); } // ===== - EXPRESS BUILD [RESPONSE] - ===== // pg.expressBuild_Response = function(){ pg.records.showHiddenRecords = 0; pg.loadRecords_Request(arguments.length ? arguments[0] : 0); } // ===== - GET RECORD [REQUEST] - ===== // pg.getRecord_Request = function(){ if(pg.records.lastSelected){ pg.workin(true); pg.record.recordID = pg.records.lastSelected.i; var n = new Object(); n.noheaders = 1; n.recordID = pg.record.recordID; myLocationDB.Ajax.Post("/location/diagnosis/get_record.php", n, pg.getRecord_Response); } else { pg.buttons.btnOpen.disable(true); } } // ===== - GET RECORD [RESPONSE] - ===== // pg.getRecord_Response = function(){ //alert(arguments[0].responseText); if(arguments.length && arguments[0].responseText){ alert(arguments[0].responseText); // Refresh PQRI grid when encounter context if(request.encounterID && window.parent.frames['main']){ window.parent.frames['main'].pg.loadQualityMeasureRequest(); } var rec = arguments[0].responseText.split(pg.delim + pg.delim); if(rec.length == 20){ // validate record values rec[0] = parseInt(rec[0]); rec[3] = parseInt(rec[3]); rec[5] = parseInt(rec[5]); rec[6] = parseInt(rec[6]); rec[7] = parseInt(rec[7]); rec[8] = parseInt(rec[8]); rec[9] = parseInt(rec[9]); rec[10] = parseInt(rec[10]); rec[11] = parseInt(rec[11]); rec[12] = parseInt(rec[12]); rec[15] = parseInt(rec[15]); // set record state pg.recordState = { recordID: pg.record.recordID, codeID: rec[0], description: rec[2], assessmentTypeID: rec[3], type: rec[4], onsetDateYear: rec[5], onsetDateMonth: rec[6], onsetDateDay: rec[7], onsetDateIsApproximate: rec[8], resolveDateYear: rec[9], resolveDateMonth: rec[10], resolveDateDay: rec[11], resolveDateIsApproximate: rec[12], commentsCount: rec[15], comments: rec[16] } // set record view pg.record.code.codeID = pg.recordState.codeID; pg.record.code.value = rec[1]; pg.record.description.value = rec[2]; for(var i=0; i<pg.record.type.options.length; i++){ if(pg.record.type.options[i].value == rec[4]){ pg.record.type.selectedIndex = i; break; } } for(var i=0; i<pg.record.assessmentType.options.length; i++){ if(pg.record.assessmentType.options[i].value == rec[3]){ pg.record.assessmentType.selectedIndex = i; break; } } if(rec[5]){ if(rec[6] && rec[7]){ pg.record.onsetDateType.selectedIndex = 0; pg.record.onsetDate.value = rec[6] + "/" + rec[7] + "/" + rec[5]; pg.record.onsetDate.format(); } else { pg.record.onsetDateType.selectedIndex = 1; pg.record.onsetDateMonth.selectedIndex = rec[6]; for(var i=0; i<pg.record.onsetDateYear.options.length; i++){ if(pg.record.onsetDateYear.options[i].value == rec[5]){ pg.record.onsetDateYear.selectedIndex = i; break; } } if(rec[8]) pg.record.chkOnsetDateIsApproximate.checked = true; } } else { pg.record.onsetDateType.selectedIndex = 2; 
} if(rec[9]){ if(rec[10] && rec[11]){ pg.record.resolveDateType.selectedIndex = 0; pg.record.resolveDate.value = rec[10] + "/" + rec[11] + "/" + rec[9]; pg.record.resolveDate.format(); } else { pg.record.resolveDateType.selectedIndex = 1; pg.record.resolveDateMonth.selectedIndex = rec[10]; for(var i=0; i<pg.record.resolveDateYear.options[i].length; i++){ if(pg.record.resolveDateYear.options.value == rec[9]){ pg.record.resolveDateYear.selectedIndex = i; break; } } if(rec[12]) pg.record.chkResolveDateIsApproximate.checked = true; } } else { pg.record.resolveDateType.selectedIndex = 2; } pg.record.lblCommentCount.innerHTML = rec[15]; pg.record.comments.value = rec[16]; pg.record.lblUpdatedBy.innerHTML = "* Last updated by " + rec[13] + " on " + rec[14]; pg.record.lblUpdatedBy.title = "Updated by: " + rec[13] + "\nUpdated on: " + rec[14]; pg.record.linkedNotes.setData(rec[18]); pg.record.linkedOrders.setData(rec[19]); pg.record.updates.setData(rec[17]); return; } } alert("An error occured while attempting to retrieve\ndetails for record #" + pg.record.recordID + ".\n\nPlease contact support if this problem persists.\nWe apologize for the inconvenience."); pg.hideRecordView(); } // ===== - HIDE COMMENTS VIEW - ===== // pg.hideCommentsView = function(){ pg.recordComments.style.left = ""; pg.recordComments.disabled = true; pg.recordComments.comments.value = ""; pg.record.disabled = false; pg.record.style.zIndex = 5500; } // ===== - HIDE code SEARCH - ===== // pg.hidecodeSearch = function(){ pg.codeSearch.style.left = ""; pg.codeSearch.disabled = true; pg.record.disabled = false; pg.record.style.zIndex = 5500; } // ===== - HIDE RECORD - ===== // pg.hideRecord = function(){ if(arguments.length){ pg.loadRecords_Request(); } else if(pg.records.lastSelected){ var n = new Object(); n.recordTypeID = 11; n.patientID = request.patientID; n.recordID = pg.records.lastSelected.i; n.action = "/location/hideRecord/dsp_hideRecord.php"; n.target = popWinCenterScreen("/common/html/empty.htm", 164, 476); myLocationDB.PostRequest(n); } } // ===== - HIDE RECORD VIEW - ===== // pg.hideRecordView = function(){ pg.record.style.left = ""; pg.record.disabled = true; // reset record grids pg.record.updates.state = "NO_RECORDS"; pg.record.linkedNotes.state = "NO_RECORDS"; pg.record.linkedOrders.state = "NO_RECORDS"; // reset linked record tabs pg.record.tabs[0].click(); pg.record.tabs[1].disable(true); pg.record.tabs[2].disable(true); pg.record.tabs[1].all[1].innerHTML = "Notes"; pg.record.tabs[2].all[1].innerHTML = "Orders"; // reset record state pg.recordState = null; // reset record view pg.record.recordID = 0; pg.record.code.value = ""; pg.record.code.codeID = 0; pg.record.description.value = ""; pg.record.type.selectedIndex = 0; pg.record.assessmentType.selectedIndex = 0; pg.record.onsetDateType.selectedIndex = 0; pg.record.chkOnsetDateIsApproximate.checked = false; pg.record.resolveDateType.selectedIndex = 0; pg.record.chkResolveDateIsApproximate.checked = false; pg.record.lblCommentCount.innerHTML = 0; pg.record.comments.value = ""; pg.record.lblUpdatedBy.innerHTML = ""; pg.record.lblUpdatedBy.title = ""; pg.record.updateComment = ""; pg.recordComments.comments.value = ""; pg.record.active = false; pg.codeSearch.newRecord = true; pg.blocker.className = ""; pg.workin(false); } // ===== - HIDE UPDATE VIEW - ===== // pg.hideUpdateView = function(){ pg.recordUpdate.style.left = ""; pg.recordUpdate.disabled = true; pg.recordUpdate.type.value = ""; pg.recordUpdate.onsetDate.value = ""; pg.recordUpdate.description.value = 
""; pg.recordUpdate.resolveDate.value = ""; pg.recordUpdate.assessmentType.value = ""; pg.record.disabled = false; pg.record.btnViewUpdate.setState(); pg.record.style.zIndex = 5500; } // ===== - INIT - ===== // pg.init = function(){ var tab = 1; pg.delim = String.fromCharCode(127); pg.subDelim = String.fromCharCode(1); pg.blocker = da.blocker; pg.hourglass = da.hourglass; pg.pageContent = da.pageContent; pg.blocker.shim = da.blocker_shim; pg.activeTip = da.activeTip; pg.activeTip.anchor = null; pg.activeTip.shim = da.activeTip_shim; // PAGE TITLE pg.pageTitle = da.pageTitle; // TOTAL RECORDS pg.totalRecords = da.totalRecords[0]; // START RECORD pg.startRecord = da.startRecord[0]; pg.startRecord.onchange = function(){ pg.records.startRecord = this.value; pg.loadRecords_Request(); } // RECORD PANEL pg.recordPanel = myLocationDB.RecordPanel(pg.pageContent.all.recordPanel); for(var i=0; i<pg.recordPanel.buttons.length; i++){ if(pg.recordPanel.buttons[i].orderBy){ pg.recordPanel.buttons[i].onclick = pg.sortRecords; } } // RECORDS GRIDVIEW pg.records = pg.recordPanel.all.grid; alert(pg.recordPanel.all.grid); pg.records.sortOrder = "DESC"; pg.records.lastExpanded = null; pg.records.attachEvent("onrowclick", pg.record_click); pg.records.orderBy = pg.recordPanel.buttons[0].orderBy; pg.records.attachEvent("onrowmouseout", pg.record_mouseOut); pg.records.attachEvent("onrowdblclick", pg.getRecord_Request); pg.records.attachEvent("onrowmouseover", pg.record_mouseOver); pg.records.attachEvent("onstateready", pg.loadRecords_Response); // BUTTON - TOGGLE HIDDEN RECORDS pg.btnHiddenRecords = myLocationDB.Custom.ImageButton(3, 751, 19, 19, "/common/images/hide.gif", 1, 1, "", "", da.pageContent); pg.btnHiddenRecords.setTitle("Show hidden records"); pg.btnHiddenRecords.onclick = pg.toggleHiddenRecords; pg.btnHiddenRecords.setState = function(){ this.disable(!pg.records.totalHiddenRecords); } // code SEARCH SUBWIN pg.codeSearch = da.subWin_codeSearch; pg.codeSearch.newRecord = true; pg.codeSearch.searchType = "code"; pg.codeSearch.searchFavorites = true; pg.codeSearch.onkeydown = function(){ if(window.event && window.event.keyCode && window.event.keyCode == 113){ if(pg.codeSearch.searchType == "DESCRIPTION"){ pg.codeSearch.searchType = "code"; pg.codeSearch.lblSearchType.innerHTML = "ICD-9 Code"; } else { pg.codeSearch.searchType = "DESCRIPTION"; pg.codeSearch.lblSearchType.innerHTML = "Description"; } pg.searchcodes_Request(); } } // SEARCH TYPE pg.codeSearch.lblSearchType = pg.codeSearch.all.lblSearchType; // SEARCH STRING pg.codeSearch.searchString = pg.codeSearch.all.searchString; pg.codeSearch.searchString.tabIndex = 1; pg.codeSearch.searchString.onfocus = function(){ this.select(); } pg.codeSearch.searchString.onblur = function(){ this.value = this.value.trim(); } pg.codeSearch.searchString.onkeydown = function(){ if(window.event && window.event.keyCode && window.event.keyCode == 13){ pg.searchcodes_Request(); } } // -- "SEARCH" pg.codeSearch.btnSearch = pg.codeSearch.all.btnSearch; pg.codeSearch.btnSearch.tabIndex = 2; pg.codeSearch.btnSearch.disable = myLocationDB.Disable; pg.codeSearch.btnSearch.onclick = pg.searchcodes_Request; pg.codeSearch.btnSearch.baseTitle = "Search diagnosis codes"; pg.codeSearch.btnSearch.setState = function(){ pg.codeSearch.btnSearch.disable(pg.codeSearch.searchString.value.trim().length < 2); } pg.codeSearch.searchString.onkeyup = pg.codeSearch.btnSearch.setState; // START RECORD / TOTAL RECORDS pg.codeSearch.startRecord = pg.codeSearch.all.startRecord; 
pg.codeSearch.totalRecords = pg.codeSearch.all.totalRecords; pg.codeSearch.startRecord.onchange = function(){ pg.codeSearch.records.startRecord = this.value; pg.searchcodes_Request(); } // RECORD PANEL pg.codeSearch.recordPanel = myLocationDB.RecordPanel(pg.codeSearch.all.recordPanel); pg.codeSearch.recordPanel.buttons[0].onclick = pg.sortcodeResults; pg.codeSearch.recordPanel.buttons[1].onclick = pg.sortcodeResults; // DATA GRIDVIEW pg.codeSearch.records = pg.codeSearch.all.grid; pg.codeSearch.records.orderBy = "code"; pg.codeSearch.records.attachEvent("onrowdblclick", pg.updatecode); pg.codeSearch.records.attachEvent("onstateready", pg.searchcodes_Response); // BUTTON - "CANCEL" pg.codeSearch.btnCancel = pg.codeSearch.all.btnCancel; pg.codeSearch.btnCancel.tabIndex = 4; pg.codeSearch.btnCancel.onclick = pg.hidecodeSearch; pg.codeSearch.btnCancel.title = "Close this search area"; // SEARCH FAVORITES / ALL pg.codeSearch.optSearch = myLocationDB.InputButton(pg.codeSearch.all.optSearch); pg.codeSearch.optSearch[0].onclick = function(){ if(pg.codeSearch.searchFavorites){ pg.codeSearch.searchString.focus(); } else { pg.codeSearch.searchFavorites = true; pg.searchcodes_Request(); } } pg.codeSearch.optSearch[1].onclick = function(){ if(pg.codeSearch.searchFavorites){ pg.codeSearch.searchFavorites = false; pg.searchcodes_Request(); } else { pg.codeSearch.searchString.focus(); } } // -- "USE SELECTED" pg.codeSearch.btnUseSelected = pg.codeSearch.all.btnUseSelected; pg.codeSearch.btnUseSelected.tabIndex = 3; pg.codeSearch.btnUseSelected.onclick = pg.updatecode; pg.codeSearch.btnUseSelected.disable = myLocationDB.Disable; pg.codeSearch.btnUseSelected.baseTitle = "Use the selected diagnosis code"; pg.codeSearch.btnUseSelected.setState = function(){ pg.codeSearch.btnUseSelected.disable(!pg.codeSearch.records.lastSelected); } pg.codeSearch.records.attachEvent("onrowclick", pg.codeSearch.btnUseSelected.setState); // RECORD STATE pg.recordState = null; // RECORD SUBWIN pg.record = da.subWin_record; pg.record.recordID = 0; pg.record.active = false; pg.record.updateComment = ""; // -- TABS pg.record.tabs = myLocationDB.TabCollection( pg.record.all.tab, function(){ if(pg.record.tabs[0].all[0].checked){ pg.record.btnOpen.style.display = "none"; pg.record.chkSelectAll.hitArea.style.display = "none"; pg.record.btnSave.style.display = "block"; pg.record.lblUpdatedBy.style.display = "block"; pg.record.pnlRecord_shim.style.display = "none"; } else { pg.record.pnlRecord_shim.style.display = "block"; pg.record.btnSave.style.display = "none"; pg.record.lblUpdatedBy.style.display = "none"; pg.record.btnOpen.setState(); pg.record.btnOpen.style.display = "block"; if(pg.record.tabs[2].all[0].checked){ pg.record.chkSelectAll.hitArea.style.display = "none"; //pg.record.btnViewLabs.setState(); //pg.record.btnViewLabs.style.display = "block"; } else { pg.record.chkSelectAll.setState(); pg.record.chkSelectAll.hitArea.style.display = "block"; //pg.record.btnViewLabs.style.display = "none"; } } } ); pg.record.tabs[1].disable(true); pg.record.tabs[2].disable(true); pg.record.pnlRecord_shim = pg.record.all.pnlRecord_shim; pg.record.code = pg.record.all.code; pg.record.code.codeID = 0; pg.record.code.tabIndex = -1; // -- CHANGE code pg.record.btnChangecode = myLocationDB.Custom.ImageButton(6, 107, 22, 22, "/common/images/edit.gif", 2, 2, "", "", pg.record.all.pnlRecord); pg.record.btnChangecode.tabIndex = 1; pg.record.btnChangecode.onclick = pg.showcodeSearch; pg.record.btnChangecode.title = "Change the diagnosis code for this 
problem"; pg.record.description = pg.record.all.description; pg.record.description.tabIndex = 2; pg.record.type = pg.record.all.type; pg.record.type.tabIndex = 3; pg.record.assessmentType = pg.record.all.assessmentType; pg.record.assessmentType.tabIndex = 9; // ONSET DATE pg.record.onsetDateType = pg.record.all.onsetDateType; pg.record.onsetDateType.tabIndex = 4; pg.record.onsetDateType.onchange = pg.record.onsetDateType.setState = function(){ switch(this.selectedIndex){ case 1: // PARTIAL pg.record.chkOnsetDateIsApproximate.disable(false); pg.record.onsetDate.style.visibility = "hidden"; pg.record.onsetDateUnknown.style.visibility = "hidden"; pg.record.onsetDate.datePicker.style.visibility = "hidden"; pg.record.onsetDateMonth.style.visibility = "visible"; pg.record.onsetDateYear.style.visibility = "visible"; break; case 2: // UNKNOWN pg.record.chkOnsetDateIsApproximate.disable(true); pg.record.onsetDate.style.visibility = "hidden"; pg.record.onsetDateYear.style.visibility = "hidden"; pg.record.onsetDateMonth.style.visibility = "hidden"; pg.record.onsetDate.datePicker.style.visibility = "hidden"; pg.record.onsetDateUnknown.style.visibility = "visible"; break; default: // "WHOLE" pg.record.chkOnsetDateIsApproximate.disable(true); pg.record.onsetDateMonth.style.visibility = "hidden"; pg.record.onsetDateYear.style.visibility = "hidden"; pg.record.onsetDateUnknown.style.visibility = "hidden"; pg.record.onsetDate.style.visibility = "visible"; pg.record.onsetDate.datePicker.style.visibility = "visible"; break; } } pg.record.onsetDate = myLocationDB.Custom.DateInput(30, 364, 80, pg.record.all.pnlRecord, 1, 1, 0, params.todayDate, 1); pg.record.onsetDate.tabIndex = 5; pg.record.onsetDate.style.textAlign = "LEFT"; pg.record.onsetDate.calendar.style.zIndex = 6000; pg.record.onsetDate.datePicker.style.left = "448px"; pg.record.onsetDate.setDateRange(params.birthDate, params.todayDate); pg.record.onsetDateYear = pg.record.all.onsetDateYear; pg.record.onsetDateYear.tabIndex = 6; pg.record.onsetDateMonth = pg.record.all.onsetDateMonth pg.record.onsetDateMonth.tabIndex = 7; pg.record.onsetDateUnknown = pg.record.all.onsetDateUnknown; pg.record.onsetDateUnknown.tabIndex = 8; pg.record.chkOnsetDateIsApproximate = myLocationDB.InputButton(pg.record.all.chkOnsetDateIsApproximate); pg.record.chkOnsetDateIsApproximate.setTitle("Onset date is approximate"); pg.record.chkOnsetDateIsApproximate.disable(true); // RESOLVE DATE pg.record.lblResolveDate = pg.record.all.lblResolveDate; pg.record.resolveDateType = pg.record.all.resolveDateType; pg.record.resolveDateType.tabIndex = 10; pg.record.resolveDateType.lastSelectedIndex = 0; pg.record.resolveDateType.setState = function(){ switch(this.selectedIndex){ case 1: // PARTIAL pg.record.chkResolveDateIsApproximate.disable(false); pg.record.resolveDate.style.visibility = "hidden"; pg.record.resolveDateUnknown.style.visibility = "hidden"; pg.record.resolveDate.datePicker.style.visibility = "hidden"; pg.record.resolveDateMonth.style.visibility = "visible"; pg.record.resolveDateYear.style.visibility = "visible"; break; case 2: // UNKNOWN pg.record.chkResolveDateIsApproximate.disable(true); pg.record.resolveDate.style.visibility = "hidden"; pg.record.resolveDateYear.style.visibility = "hidden"; pg.record.resolveDateMonth.style.visibility = "hidden"; pg.record.resolveDate.datePicker.style.visibility = "hidden"; pg.record.resolveDateUnknown.style.visibility = "visible"; break; default: // "WHOLE" pg.record.chkResolveDateIsApproximate.disable(true); 
pg.record.resolveDateMonth.style.visibility = "hidden"; pg.record.resolveDateYear.style.visibility = "hidden"; pg.record.resolveDateUnknown.style.visibility = "hidden"; pg.record.resolveDate.style.visibility = "visible"; pg.record.resolveDate.datePicker.style.visibility = "visible"; break; } } pg.record.resolveDateType.onchange = function(){ this.lastSelectedIndex = this.selectedIndex; this.setState(); } pg.record.resolveDate = myLocationDB.Custom.DateInput(55, 364, 80, pg.record.all.pnlRecord, 1, 1, 0, params.todayDate, 1); pg.record.resolveDate.tabIndex = 11; pg.record.resolveDate.style.textAlign = "LEFT"; pg.record.resolveDate.calendar.style.zIndex = 6000; pg.record.resolveDate.datePicker.style.left = "448px"; pg.record.resolveDate.setDateRange(params.birthDate, params.todayDate); pg.record.resolveDate.setState = function(){ if(pg.record.assessmentType.value == 15){ pg.record.chkResolveDateIsApproximate.disable(pg.record.resolveDateType.value != "PARTIAL"); pg.record.resolveDate.disabled = false; pg.record.lblResolveDate.disabled = false; pg.record.resolveDateType.selectedIndex = pg.record.resolveDateType.lastSelectedIndex; pg.record.resolveDateType.setState(); pg.record.resolveDate.datePicker.disable(false); pg.record.resolveDateType.disabled = false; pg.record.resolveDateYear.disabled = false; pg.record.resolveDateMonth.disabled = false; pg.record.resolveDateUnknown.disabled = false; } else { pg.record.resolveDate.datePicker.disable(true); pg.record.chkResolveDateIsApproximate.disable(true); pg.record.resolveDateType.selectedIndex = 2; pg.record.resolveDateType.setState(); pg.record.resolveDate.disabled = true; pg.record.lblResolveDate.disabled = true; pg.record.resolveDateType.disabled = true; pg.record.resolveDateYear.disabled = true; pg.record.resolveDateMonth.disabled = true; pg.record.resolveDateUnknown.disabled = true; } } pg.record.assessmentType.onchange = pg.record.resolveDate.setState; pg.record.resolveDateYear = pg.record.all.resolveDateYear; pg.record.resolveDateYear.tabIndex = 11; pg.record.resolveDateMonth = pg.record.all.resolveDateMonth pg.record.resolveDateMonth.tabIndex = 12; pg.record.resolveDateUnknown = pg.record.all.resolveDateUnknown; pg.record.resolveDateUnknown.tabIndex = 13; pg.record.chkResolveDateIsApproximate = myLocationDB.InputButton(pg.record.all.chkResolveDateIsApproximate); pg.record.chkResolveDateIsApproximate.setTitle("Resolve date is approximate"); pg.record.chkResolveDateIsApproximate.disable(true); // -- UPDATES pg.record.updates = pg.record.all.pnlUpdates.all.grid; pg.record.lblUpdateCount = pg.record.all.lblUpdateCount; pg.record.updates.attachEvent("onstateready", pg.showRecordView); pg.record.updates.attachEvent("onrowdblclick", pg.showUpdateView); // -- "VIEW SELECTED" pg.record.btnViewUpdate = myLocationDB.PanelButton(pg.record.all.btnViewUpdate); pg.record.btnViewUpdate.setTitle("View details for the selected problem update"); pg.record.btnViewUpdate.onclick = pg.showUpdateView; pg.record.btnViewUpdate.setState = function(){ pg.record.btnViewUpdate.disable(!pg.record.updates.lastSelected); } pg.record.updates.attachEvent("onrowclick", pg.record.btnViewUpdate.setState); // -- COMMENTS pg.record.comments = pg.record.all.comments; pg.record.pnlComments = pg.record.all.pnlComments; pg.record.lblCommentCount = pg.record.all.lblCommentCount; // -- UPDATE COMMENTS pg.record.btnUpdateComments = myLocationDB.PanelButton(pg.record.all.btnUpdateComments); pg.record.btnUpdateComments.onclick = pg.showCommentView; pg.record.btnUpdateComments.title = 
"Update this record's comments"; // -- LINKED NOTES pg.record.linkedNotes = pg.record.all.linkedNotes.all.grid; pg.record.linkedNotes.attachEvent("onrowclick", pg.linkedRecordClick); pg.record.linkedNotes.attachEvent("onrowdblclick", pg.openLinkedNote); pg.record.linkedNotes.attachEvent("onstateready", pg.setLinkedNotes_Count); // -- LINKED ORDERS pg.record.linkedOrders = pg.record.all.linkedOrders.all.grid; pg.record.linkedOrders.attachEvent("onrowclick", pg.linkedRecordClick); pg.record.linkedOrders.attachEvent("onrowdblclick", pg.openLinkedOrder); pg.record.linkedOrders.attachEvent("onstateready", pg.setLinkedOrders_Count); // -- "CLOSE" pg.record.btnClose = pg.record.all.btnClose; pg.record.btnClose.tabIndex = 15; pg.record.btnClose.onclick = pg.hideRecordView; pg.record.btnClose.title = "Close this record panel"; // -- LAST UPDATED BY pg.record.lblUpdatedBy = pg.record.all.lblUpdatedBy; // -- "SELECT ALL" pg.record.chkSelectAll = myLocationDB.InputButton(pg.record.all.chkSelectAll); pg.record.chkSelectAll.onclick = function(){ if(pg.record.tabs[1].all[0].checked){ if(pg.record.chkSelectAll.checked){ pg.record.linkedNotes.selectAll(); } else { pg.record.linkedNotes.deselectAll(); } } else { if(pg.record.chkSelectAll.checked){ pg.record.linkedOrders.selectAll(); } else { pg.record.linkedOrders.deselectAll(); } } pg.record.btnOpen.setState(); //pg.record.btnViewLabs.setState(); } pg.record.chkSelectAll.setState = function(){ if(pg.record.tabs[1].all[0].checked){ pg.record.chkSelectAll.checked = pg.record.linkedNotes.selectedRows.length == pg.record.linkedNotes.rows.length; } else { pg.record.chkSelectAll.checked = pg.record.linkedOrders.selectedRows.length == pg.record.linkedOrders.rows.length; } } // -- "OPEN SELECTED" pg.record.btnOpen = pg.record.all.btnOpenSelected; pg.record.btnOpen.tabIndex = 14; pg.record.btnOpen.disable = myLocationDB.Disable; pg.record.btnOpen.title = "Open the selected record"; pg.record.btnOpen.onclick = function(){ if(pg.record.tabs[1].all[0].checked){ pg.openLinkedNote(); } else if(pg.record.tabs[2].all[0].checked){ pg.openLinkedOrder(); } else { pg.record.btnOpen.disable(true); } } pg.record.btnOpen.setState = function(){ if(pg.record.tabs[1].all[0].checked){ pg.record.btnOpen.disable(!pg.record.linkedNotes.lastSelected); } else if(pg.record.tabs[2].all[0].checked){ pg.record.btnOpen.disable(pg.record.linkedOrders.selectedRows.length != 1); } else { pg.record.btnOpen.disable(true); } } // -- "SAVE" pg.record.btnSave = pg.record.all.btnSave; pg.record.btnSave.tabIndex = 14; pg.record.btnSave.onclick = pg.updateRecord_Request; pg.record.btnSave.title = "Save changes to this record"; // RECORD UPDATE SUBWIN pg.recordUpdate = da.subWin_update; pg.recordUpdate.lblUpdatedBy = pg.recordUpdate.all.lblUpdatedBy; pg.recordUpdate.lblUpdateDTS = pg.recordUpdate.all.lblUpdateDTS; pg.recordUpdate.type = pg.recordUpdate.all.type; pg.recordUpdate.onsetDate = pg.recordUpdate.all.onsetDate; pg.recordUpdate.description = pg.recordUpdate.all.description; pg.recordUpdate.resolveDate = pg.recordUpdate.all.resolveDate; pg.recordUpdate.assessmentType = pg.recordUpdate.all.assessmentType; // -- "CLOSE" pg.recordUpdate.btnClose = pg.recordUpdate.all.btnClose; pg.recordUpdate.btnClose.tabIndex = 1; pg.recordUpdate.btnClose.onclick = pg.hideUpdateView; pg.recordUpdate.btnClose.title = "Close this sub-window"; // COMMENTS SUBWIN pg.recordComments = da.subWin_comments; pg.recordComments.comments = pg.recordComments.all.updateComments; pg.recordComments.comment

    Read the article

  • Array sorting efficiency... Beginner needs advice

    - by SoleSoft
    I'll start by saying I am very much a beginner programmer; this is essentially my first real project outside of learning material. I've been making a 'Simon Says' style game (the game where you repeat the pattern generated by the computer) using C# and XNA. The actual game is complete and working fine, but while creating it I also wanted to create a 'top 10' scoreboard. The scoreboard records player name, level (how many 'rounds' they've completed) and combo (how many button presses they got correct), and is sorted by combo score. This led me to XML, my first time using it, and I eventually got to the point of having an XML file that records the top 10 scores. The XML file is managed within a scoreboard class, which is also responsible for adding new scores and sorting scores. Which gets me to the point... I'd like some feedback on the way I've gone about sorting the score list and how I could have done it better, as I have no other way to gain feedback =(. I know .NET features Array.Sort() but I wasn't too sure how to use it, as it's not just a single array that needs to be sorted: when a new score is entered into the scoreboard, the player name and level also have to be added. These are stored within an 'array of arrays' (10 elements, for the 'top 10' scores):

    scoreboardComboData = new int[10];        // Combo
    scoreboardTextData = new string[2][];
    scoreboardTextData[0] = new string[10];   // Name
    scoreboardTextData[1] = new string[10];   // Level as string

    The scoreboard class works as follows:

    - Checks to see if 'scoreboard.xml' exists; if not, it creates it
    - Initialises the above arrays and adds any player data from scoreboard.xml from a previous run
    - When AddScore(name, level, combo) is called, the sort begins
    - Another method can also be called that populates the XML file with the above array data

    The sort checks to see if the new score (combo) is less than or equal to any recorded score within the scoreboardComboData array (if it's greater than a score, it moves on to the next element). If so, it moves all scores below the score it is less than or equal to down one element, essentially removing the last score, and then places the new score in the element below the score it is less than or equal to. If the score is greater than all recorded scores, it moves all scores down one and inserts the new score in the first element. If it's the only score, it simply adds it to the first element. When a new score is added, the Name and Level data are also added to their relevant arrays in the same way. What a tongue twister. Below is the AddScore method; I've added comments in the hope that it makes things clearer O_o. You can get the actual source file HERE. Below the method is an example of the quickest way to add a score to follow through with a debugger.

    public static void AddScore(string name, string level, int combo)
    {
        // If the scoreboard has not yet been filled, this adds another 'active'
        // array element each time a new score is added. The actual array size is
        // defined within PopulateScoreBoard() (set to 10 - for 'top 10').
        if (totalScores < scoreboardComboData.Length)
            totalScores++;

        // Does the scoreboard even need sorting?
        if (totalScores > 1)
        {
            for (int i = totalScores - 1; i > -1; i--)
            {
                // Check to see if the score (combo) is greater than the score
                // stored in this array element
                if (combo > scoreboardComboData[i] && i != 0)
                {
                    // If so, continue to the next element
                    continue;
                }
                // Check to see if the score (combo) is less than or equal to element
                // 'i''s score && that the element is not the last in the array;
                // if it is the last, the score does not need to be added to the scoreboard
                else if (combo <= scoreboardComboData[i] && i != scoreboardComboData.Length - 1)
                {
                    // The score is lower than element 'i' and greater than the last
                    // element within the array, so it needs to be added to the scoreboard.
                    // This is achieved by moving each element under element 'i' down one
                    // element. The new score is then inserted into the array under element 'i'
                    for (int j = totalScores - 1; j > i; j--)
                    {
                        // Name and level data are moved down in their relevant arrays
                        scoreboardTextData[0][j] = scoreboardTextData[0][j - 1];
                        scoreboardTextData[1][j] = scoreboardTextData[1][j - 1];
                        // Score (combo) data is moved down in the relevant array
                        scoreboardComboData[j] = scoreboardComboData[j - 1];
                    }
                    // The new name, level and score (combo) data is inserted into the
                    // relevant array under element 'i'
                    scoreboardTextData[0][i + 1] = name;
                    scoreboardTextData[1][i + 1] = level;
                    scoreboardComboData[i + 1] = combo;
                    break;
                }
                // If the method gets to this point, the score is greater than all
                // scores within the array and therefore cannot be added in the above
                // way, as it is not less than any score within the array.
                else if (i == 0)
                {
                    // All names, levels and scores are moved down within their relevant arrays
                    for (int j = totalScores - 1; j != 0; j--)
                    {
                        scoreboardTextData[0][j] = scoreboardTextData[0][j - 1];
                        scoreboardTextData[1][j] = scoreboardTextData[1][j - 1];
                        scoreboardComboData[j] = scoreboardComboData[j - 1];
                    }
                    // The new number 1 top name, level and score are added into the
                    // first element within each of their relevant arrays
                    scoreboardTextData[0][0] = name;
                    scoreboardTextData[1][0] = level;
                    scoreboardComboData[0] = combo;
                    break;
                }
                // If the method gets to this point, the combo score is not high enough
                // to be on the top 10 score list, so the loop needs to break
                break;
            }
        }
        // As totalScores <= 1, the current score is the first to be added. Therefore
        // no checks need to be made, and the name, level and combo data can be entered
        // directly into the first element of their relevant arrays.
        else
        {
            scoreboardTextData[0][0] = name;
            scoreboardTextData[1][0] = level;
            scoreboardComboData[0] = combo;
        }
    }

    Example for adding a score:

    private static void Initialize()
    {
        scoreboardDoc = new XmlDocument();

        if (!File.Exists("Scoreboard.xml"))
            GenerateXML("Scoreboard.xml");

        PopulateScoreBoard("Scoreboard.xml");

        // ADD TEST SCORES HERE!
        AddScore("EXAMPLE", "10", 100);
        AddScore("EXAMPLE2", "24", 999);

        PopulateXML("Scoreboard.xml");
    }

    In its current state the source file is just used for testing: Initialize is called within Main, and PopulateScoreBoard handles the majority of the other initialisation, so nothing else is needed except to add a test score. I thank you for your time!
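
    For comparison, here is a minimal sketch of the Array.Sort()-style approach asked about above: fold the three parallel arrays into a single type so that one sort keeps name, level and combo in step. ScoreEntry and MaxEntries are illustrative names, not from the original source file.

    using System.Collections.Generic;

    // A hypothetical record type replacing the three parallel arrays.
    struct ScoreEntry
    {
        public string Name;
        public string Level;
        public int Combo;
    }

    static class Scoreboard
    {
        const int MaxEntries = 10;   // top 10
        static readonly List<ScoreEntry> entries = new List<ScoreEntry>();

        public static void AddScore(string name, string level, int combo)
        {
            entries.Add(new ScoreEntry { Name = name, Level = level, Combo = combo });

            // Sort descending by combo; the lambda is a Comparison<ScoreEntry>,
            // so name and level travel with their score automatically.
            entries.Sort((a, b) => b.Combo.CompareTo(a.Combo));

            // Drop anything that fell off the bottom of the top 10.
            if (entries.Count > MaxEntries)
                entries.RemoveRange(MaxEntries, entries.Count - MaxEntries);
        }
    }

    Because each entry carries its own name and level, there are no parallel arrays to shuffle by hand; the comparison delegate alone defines the order. One caveat: Array.Sort and List<T>.Sort are not stable sorts, so entries with equal combo scores may swap positions; if tie order matters, LINQ's OrderByDescending is a stable alternative.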

    Read the article

  • Quick guide to Oracle IRM 11g: Server configuration

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g index

    Welcome to the second article in this quick guide to Oracle IRM 11g. Hopefully you've just finished the first article, which takes you through deploying the software onto a Linux server. This article walks you through the configuration of the new service; it contains a subset of the information in the official documentation and is focused on installing the server on Oracle Enterprise Linux. If you are planning to deploy on a non-Linux platform, you will need to reference the documentation for platform-specific information.

    Contents

    - Introduction
    - Create IRM WebLogic Domain
    - Starting the Admin Server and initial configuration

    Introduction

    In the previous article the database was prepared, the WebLogic Application Server installed and the files required for an IRM server put in place. But we don't actually have a configured system yet. We now need to create a WebLogic domain in which the IRM server will run, then configure some of the settings and cryptography so that we can create a context and be ready to seal some content and test that it all works. This article doesn't cover the configuration of SSL communication from client to server; that is quite a big topic, and a separate article has been dedicated to it. In these articles I use the hostname irm.company.internal to reference the IRM server, and later on the hostname irm.company.com in reference to the public-facing service.

    Create IRM WebLogic Domain

    The first step is creating the WebLogic domain. In a console, switch to the newly created IRM installation folder as shown below and run the domain configuration wizard.

    [oracle@irm /]$ cd /oracle/middleware/Oracle_IRM/common/bin
    [oracle@irm bin]$ ./config.sh

    The first thing the wizard will ask is whether you wish to create a new domain or extend an existing one. This guide is creating a standalone system, so you should select to create a new domain. The next step is to choose which technologies from the Oracle ECM Suite you wish this domain to host. You are only interested in selecting the option "Oracle Information Rights Management". When you select this check box you will notice that it also selects "Oracle Enterprise Manager" and "Oracle JRF", as these are dependencies of the IRM server. You then need to specify where you wish to place the domain files. I usually just change the domain name from base_domain to irm_domain and leave the others with their defaults.

    The domain will have a single user initially, and by default this user is called "weblogic". I usually change this account name to "sysadmin" or "administrator", but in this guide let's just accept the default. With respect to the next dialog, again for evaluation or development purposes, leave the server startup mode as development. The JDK should also be automatically detected.

    We now need to provide details of the database. This guide is using the Oracle 11gR2 database, and the settings I used can be seen in the image to the right.

    There is a lot of configuration that can now be done for the admin server, any managed servers and where the deployments reside. In this guide I am leaving all of these at their defaults, so do not check any of the boxes. However, later on this blog I will detail how you can go back and set up things such as automated startup of an IRM server, which requires changes to these default settings. But for now, let's leave it all alone and just click next. Now we are ready to install. 
    Note that from this dialog you can scroll the left window and see that two servers are going to be created from the defaults: the AdminServer, which is where you modify settings for the WebLogic Server and which also hosts the Oracle Enterprise Manager for IRM, allowing you to monitor IRM service performance and make service-related settings (which we shortly do below); and IRM_server1, which hosts the actual IRM services themselves. So go right ahead and hit create; the process is pretty quick, usually under 10 minutes. When the domain creation ends, it will give you the URL to the admin server. It's worth noting this down; the URL is usually:

    http://irm.company.internal:7001

    Starting the Admin Server and initial configuration

    The first thing to do is start the WebLogic Admin server and review the initial IRM server settings. In this guide we are going to run the Admin server and IRM server in console windows; in another article I will discuss running these as background services. So for now, start a console and run the Admin server as follows.

    cd /oracle/middleware/user_projects/domains/irm_domain/
    ./startWebLogic.sh

    Wait for the server to start; you are looking for the following line to be reported in the console window.

    <BEA-000360> <Server started in RUNNING mode>

    The first step is configuring the IRM service via Enterprise Manager. Now that the Admin server is running, you can point a browser at http://irm.company.internal:7001/em. Log in with the username and password you supplied when you created the domain. In Enterprise Manager the IRM service administrator is able to make server-wide configuration changes; however, finding the pages with these settings can be a bit of a challenge. After logging in, on the left you'll see a tree containing the elements of the Enterprise Manager farm, Farm_irm_domain. Open up Content Management, then Information Rights Management, and finally select the IRM node. On the right, select the IRM menu item and navigate to the Administration section. We now have four options; for now, we are just going to look at General Settings.

    The image on the right proves that a picture is worth a thousand words (or 113 in this case). The General Settings page allows you to set the cryptographic algorithms used for protecting sealed content. Unless you have a burning need to increase the key lengths, or you need to comply with a regulation or government mandate, AES192 is a good start. You can change this later on without worry.

    The most important setting we need to make here is the Server URL. In this blog article I go over why this URL is so important: basically, every single piece of content you protect with Oracle IRM is going to have this URL embedded in it, so if it's wrong or unresolvable, nobody can open the secured documents. Note that in our environment we have yet to do any SSL configuration of the service. If you intend to build a server without SSL, use http as the protocol instead of https. But I would recommend using SSL, and setting it up is described in the next article.

    I would also probably up the device count from 1 to 3. This means that any user can retrieve rights to access content on up to 3 computers at any one time. The default of 1 doesn't really make sense in development, evaluation or even production environments, and my experience is that 3 is a better number.

    The next step is to create the keystore for the IRM server. 
    When a classification (called a context) is created, Oracle IRM generates a unique set of symmetric keys which are used to secure the content itself. These keys are then encrypted with a set of "wrapper" asymmetric cryptography keys which are stored externally to the server, either in a Java Key Store or an HSM. These keys need to be generated, and the following shows my commands and the resulting output. I have greyed out the responses from the commands so you can see the input a little more easily.

    [oracle@irmsrv ~]$ cd /oracle/middleware/wlserver_10.3/server/bin/
    [oracle@irmsrv bin]$ ./setWLSEnv.sh
    CLASSPATH=/oracle/middleware/patch_wls1033/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/oracle/middleware/patch_ocp353/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/usr/java/jdk1.6.0_18/lib/tools.jar:/oracle/middleware/wlserver_10.3/server/lib/weblogic_sp.jar:/oracle/middleware/wlserver_10.3/server/lib/weblogic.jar:/oracle/middleware/modules/features/weblogic.server.modules_10.3.3.0.jar:/oracle/middleware/wlserver_10.3/server/lib/webservices.jar:/oracle/middleware/modules/org.apache.ant_1.7.1/lib/ant-all.jar:/oracle/middleware/modules/net.sf.antcontrib_1.1.0.0_1-0b2/lib/ant-contrib.jar:
    PATH=/oracle/middleware/wlserver_10.3/server/bin:/oracle/middleware/modules/org.apache.ant_1.7.1/bin:/usr/java/jdk1.6.0_18/jre/bin:/usr/java/jdk1.6.0_18/bin:/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/home/oracle/bin
    Your environment has been set.

    [oracle@irmsrv bin]$ cd /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/
    [oracle@irmsrv fmwconfig]$ keytool -genkeypair -alias oracle.irm.wrap -keyalg RSA -keysize 2048 -keystore irm.jks
    Enter keystore password:
    Re-enter new password:
    What is your first and last name?
      [Unknown]: Simon Thorpe
    What is the name of your organizational unit?
      [Unknown]: Oracle
    What is the name of your organization?
      [Unknown]: Oracle
    What is the name of your City or Locality?
      [Unknown]: San Francisco
    What is the name of your State or Province?
      [Unknown]: CA
    What is the two-letter country code for this unit?
      [Unknown]: US
    Is CN=Simon Thorpe, OU=Oracle, O=Oracle, L=San Francisco, ST=CA, C=US correct?
      [no]: yes
    Enter key password for <oracle.irm.wrap>
      (RETURN if same as keystore password):

    At this point we now have an irm.jks in the directory /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig. The reason we store it here is that this folder is backed up as part of a domain backup. As with any cryptographic technology, DO NOT LOSE THESE KEYS OR THIS KEY STORE. Once you've sealed content against a context, the content keys will be wrapped with these keys; lose them, and you can't get access to any secured content. Pretty important.

    Now that we've got the keys created, we need to go back to the IRM Enterprise Manager and set the location of the key store. Back on the General Settings page in Enterprise Manager, scroll down to Keystore Settings. Leave the type as JKS but change the location to:

    /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/irm.jks

    and hit Apply. 
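    If you want to sanity-check the key pair before wiring it into the server, keytool can list what is in the store. This quick check is my addition, not part of the original walkthrough:

    [oracle@irmsrv fmwconfig]$ keytool -list -v -keystore irm.jks -alias oracle.irm.wrap

    After prompting for the keystore password, this should print a single private key entry for the alias oracle.irm.wrap, with a 2048-bit RSA key and the self-signed certificate details entered above.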
    The final step with regard to the key store is to tell the server the password for the Java Key Store so that it can be opened and the keys accessed. Once more, fire up a console window and run these commands (again, I've greyed out the clutter so you can see the commands more easily). You will see dummy passed into the commands; this is because the command asks for a username, but in this instance we don't use one, hence the value dummy is passed and it isn't used.

    [oracle@irmsrv fmwconfig]$ cd /oracle/middleware/Oracle_IRM/common/bin/
    [oracle@irmsrv bin]$ ./wlst.sh
    ... lots of settings fly by ...
    Welcome to WebLogic Server Administration Scripting Shell
    Type help() for help on available commands

    wls:/offline> connect('weblogic','password','t3://irmsrv.us.oracle.com:7001')
    Connecting to t3://irmsrv.us.oracle.com:7001 with userid weblogic ...
    Successfully connected to Admin Server 'AdminServer' that belongs to domain 'irm_domain'.
    Warning: An insecure protocol was used to connect to the server. To ensure on-the-wire security, the SSL port or Admin port should be used instead.

    wls:/irm_domain/serverConfig> createCred("IRM","keystore:irm.jks","dummy","password")
    Location changed to domainRuntime tree. This is a read-only tree with DomainMBean as the root. For more help, use help(domainRuntime)

    wls:/irm_domain/serverConfig> createCred("IRM","key:irm.jks:oracle.irm.wrap","dummy","password")
    Already in Domain Runtime Tree

    wls:/irm_domain/serverConfig>

    At last we are ready to fire up the IRM server itself. The domain creation created a managed server called IRM_server1, and we need to start it with the following commands in a new console window.

    cd /oracle/middleware/user_projects/domains/irm_domain/bin/
    ./startManagedWebLogic.sh IRM_server1

    This starts the server in the console. Unlike the Admin server, you need to provide the username and password for the service to start, so enter your weblogic username and password when prompted. You can change this behavior by putting the password into a boot.properties file; read more about this in the WebLogic Server documentation (a minimal sketch appears at the end of this article). Once running, wait until you see the line:

    <Notice> <WebLogicServer> <BEA-000360> <Server started in RUNNING mode>

    At this point we can now log in to the Oracle IRM Management Website at the URL:

    http://irm.company.internal:16100/irm_rights/

    The server is configured for HTTP only at the moment; no SSL is involved just yet, as we simply want to ensure we can get a working system up and running. You should now see a login page like the image on the right, and you can log in using your weblogic username and password. The next article in this guide goes over adding SSL and then testing your server by actually adding a few users, sealing some content and opening it as a user.
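
    On the boot.properties mention above, here is a minimal sketch. The per-server security directory shown in the comment is an assumption based on standard WebLogic domain layout; create it if it does not exist. WebLogic reads the file at startup instead of prompting, and rewrites both values in encrypted form the first time the server boots.

    # Assumed location (standard WebLogic layout, adjust to your domain):
    # /oracle/middleware/user_projects/domains/irm_domain/servers/IRM_server1/security/boot.properties
    username=weblogic
    password=yourpassword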

    Read the article
