Search Results

Search found 519 results on 21 pages for 'configurable'.

Page 17 of 21

  • Is there any alternative to obfuscation to make it harder to get any string in javascript?

    - by MarceloRamires
    I use Dropbox and I've had some trouble reaching my files from other computers: I don't always want to log in to anything when I'm on a public computer, but I like being able to reach my stuff wherever I am. So I've made a simple little application that, when put in the public folder, run, and given the right UID, creates (still in your public folder) an HTML file with all the content in the folder (including subfolders) as a tree of links. But I didn't risk loading it anywhere, since there are slightly private things in there (yes, I know that the folder's name is "PUBLIC"). So I came up with the idea of making it a simple login page: given the right password, the rest of the page should load. Brilliant! But how? If I did this by redirecting to another HTML file in the same folder, I'd still put the HTML link in the web history and the "URLs accessed" history of the administrator. So I should generate it in the same page. I've done that, and currently the page is a textbox and a button, and only if you type in the right password (asked for by the generator) does the rest of the page load. The flaw is that everything (password, URLs) is easily reachable through the source code.

    Now, assuming I only want to stop silly people from getting it all too easily, not make a bulletproof, all-content-holding, NSA-certified website, I thought about some ways to make this information a bit harder to get. As you may have already figured, I use a StreamWriter to write a .HTM file (head, loop through links, bottom), so it's extremely configurable, and I can come up with pretty messy-but-working C# code, though my JavaScript knowledge is not that good. Public links in Dropbox look like this: http://dl.dropbox.com/u/3045472/img.png

    Summarizing: how do I hide stuff (MAINLY the password, of course) in my source code so that no bumb-a-joe who can read, use a computer, and press CTRL+U can reach my stuff too easily?

    PS: It's not that personal; if someone REALLY wants it, it could never be 100% protected, and if it were that important, I wouldn't put it in the public folder. Also, if the dude really wants to get it that hard, he deserves it.
    PS2: "Use the ultra-3000'tron obfuscator!!11" is not a real answer, since my JavaScript is GENERATED by my C# program.
    PS3: I don't want other solutions like "use a server-side application and host it somewhere to redirect and bla bla" or "compress the links into a .RAR file and put a password on it", since I'm doing this ALSO to learn, and I want the thrill of it =)
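    One common way to avoid embedding the plaintext password is to embed only a hash of it and compare hashes when the user types something in. The generated page would need the equivalent logic in JavaScript; the sketch below only illustrates the idea, in Java, with an invented class name and an example digest (the SHA-256 of the string "password"):

        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;
        import java.util.HexFormat;

        public class PasswordGate {
            // Digest produced once by the generator and embedded in the page
            // instead of the plaintext password. This one is sha256("password").
            private static final String EXPECTED_SHA256 =
                "5e884898da28047151d0e56f8dc6292773603d0d6aabbdd62a11ef721d1542d8";

            static boolean unlock(String typed) throws Exception {
                MessageDigest md = MessageDigest.getInstance("SHA-256");
                byte[] digest = md.digest(typed.getBytes(StandardCharsets.UTF_8));
                return HexFormat.of().formatHex(digest).equals(EXPECTED_SHA256);
            }

            public static void main(String[] args) throws Exception {
                System.out.println(unlock("password")); // true
                System.out.println(unlock("letmein"));  // false
            }
        }

    A hash only hides the password itself; the link URLs would still be readable in the source, so a stronger variant of the same idea derives a decryption key from the password and stores the link list encrypted.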

    Read the article

  • What is a proper way to store site-level global variables in a SharePoint site?

    - by ccomet
    One thing that has driven me nuts about SharePoint 2007 is the apparent inability to have definable settings that apply specifically to a site or site collection itself, and not to the content. I mean, you have some pre-defined settings like the Site Logo, the Site Name, and various other things, but there doesn't appear to be anywhere to add new kinds of settings. The application I am working on needs to be able to create multiple kinds of "project site collections" that all follow a basic template, but have certain additional settings that apply specifically to that site collection and that one alone. In addition to the standard site name, we also need to define the Project Number, the Project Name, and the Client Name. And given the requests of some of our clients, we also reach a point where we have to have configurable settings that alter how some of the workflows work, like whether files are marked with Letters or Numbers.

    Our current solution, which I'm hesitant about, has been to store an XML file on the SharePoint server. This file contains one node for each site collection, identified by the URL of the root site. Inside the node are all of the elements that need to be defined for that site collection. When we need them, we have to access the XML file (which always requires SPSecurity.RunWithElevatedPrivileges to access files right on the server) every time to load it and retrieve the data. There are a lot of automated processes which will have to do this, and I'm hesitant about the stability of this method when we reach hundreds of sites with thousands of files running tens of thousands of workflows, all wanting to access this file. Maybe these are unfounded worries, but I'd rather worry than risk everything breaking in a couple of years.

    I was looking into the SPWeb object and found the AllProperties hashtable. It looks like just the kind of thing that might work, but I don't know how safe it is to be modifying this. I read through both MSDN and the WSS SDK but found nothing that clarified adding completely new properties into AllProperties. Is it safe to use AllProperties for this kind of thing? Or is there yet another feature that I am missing which could handle the concept of global variables at the site collection or site scope?

    Read the article

  • Akka framework support for finding duplicate messages

    - by scala_is_awesome
    I'm trying to build a high-performance distributed system with Akka and Scala. If a message requesting an expensive (and side-effect-free) computation arrives, and the exact same computation has already been requested before, I want to avoid computing the result again. If the computation requested previously has already completed and the result is available, I can cache it and re-use it. However, the time window in which duplicate computation can be requested may be arbitrarily small: I could get a thousand or a million messages requesting the same expensive computation at the same instant for all practical purposes. There is a commercial product called Gigaspaces that supposedly handles this situation. However, there seems to be no framework support for dealing with duplicate work requests in Akka at the moment. Given that the Akka framework already has access to all the messages being routed through the framework, it seems that a framework solution could make a lot of sense here. Here is what I am proposing for the Akka framework to do:

    1. Create a trait to indicate a type of messages (say, "ExpensiveComputation" or something similar) that are to be subject to the following caching approach.
    2. Smartly (hashing etc.) identify identical messages received by (the same or different) actors within a user-configurable time window. Other options: select a maximum buffer size of memory to be used for this purpose, subject to (say LRU) replacement, etc. Akka can also choose to cache only the results of messages that were expensive to process; the messages that took very little time to process can be re-processed again if needed; no need to waste precious buffer space caching them and their results.
    3. When identical messages (received within that time window, possibly "at the same time instant") are identified, avoid unnecessary duplicate computations. The framework would do this automatically, and essentially, the duplicate messages would never get received by a new actor for processing; they would silently vanish, and the result from processing it once (whether that computation was already done in the past, or ongoing right then) would get sent to all appropriate recipients (immediately if already available, and upon completion of the computation if not).

    Note that messages should be considered identical even if the "reply" fields are different, as long as the semantics/computations they represent are identical in every other respect. Also note that the computation should be purely functional, i.e. free from side-effects, for the suggested caching optimization to work and not change the program semantics at all. If what I am suggesting is not compatible with the Akka way of doing things, and/or if you see some strong reasons why this is a very bad idea, please let me know. Thanks, Is Awesome, Scala
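    Akka does not provide this de-duplication out of the box (that is the point of the question), but the core idea the poster describes — key the request, keep one shared in-flight computation per key, and fan the single result out to every requester — can be sketched on the plain JVM. A minimal, hedged illustration in Java; the request key, the compute method, and the numbers are all invented for the example:

        import java.util.Map;
        import java.util.concurrent.CompletableFuture;
        import java.util.concurrent.ConcurrentHashMap;

        public class DedupingCache {
            // One shared future per request key: concurrent duplicates attach to the
            // in-flight computation instead of starting a new one.
            private final Map<String, CompletableFuture<Long>> inFlight = new ConcurrentHashMap<>();

            public CompletableFuture<Long> submit(String requestKey) {
                return inFlight.computeIfAbsent(requestKey,
                        key -> CompletableFuture.supplyAsync(() -> compute(key)));
            }

            private long compute(String key) {
                // Stand-in for the expensive, side-effect-free computation.
                try { Thread.sleep(1000); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
                return key.hashCode();
            }

            public static void main(String[] args) {
                DedupingCache cache = new DedupingCache();
                // A thousand "identical messages" arriving at the same instant share one computation.
                for (int i = 0; i < 1000; i++) {
                    cache.submit("price-model-42").thenAccept(r -> { /* reply to this requester */ });
                }
                cache.submit("price-model-42").join(); // wait for the single shared result
            }
        }

    An Akka version would presumably keep the same map behind a single actor (or a router in front of the workers) and add eviction after the user-configurable time window described in the post.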

    Read the article

  • Techniques for querying a set of objects in-memory in a Java application

    - by Edd Grant
    Hi All, We have a system which performs a 'coarse search' by invoking an interface on another system which returns a set of Java objects. Once we have received the search results, I need to be able to further filter the resulting Java objects based on certain criteria describing the state of their attributes (e.g. from the initial objects, return all objects where x.y > z && a.b == c). The criteria used to filter the set of objects each time are partially user-configurable; by this I mean that users will be able to select the values and ranges to match on, but the attributes they can pick from will be a fixed set. The data sets are likely to contain <= 10,000 objects for each search. The search will be executed manually by the application user base probably no more than 2000 times a day (approx). It's probably worth mentioning that all the objects in the result set are known domain object classes which have Hibernate and JPA annotations describing their structure and relationships. Off the top of my head I can think of 3 ways of doing this:

    1. For each search, persist the initial result set objects in our database, then use Hibernate to re-query them using the finer-grained criteria.
    2. Use an in-memory database (such as hsqldb?) to query and refine the initial result set.
    3. Write some custom code which iterates the initial result set and pulls out the desired records.

    Option 1 seems to involve a lot of toing and froing across a network to a physical database (Oracle 10g), which might result in a lot of network and disk activity. It would also require the results from each search to be isolated from other result sets to ensure that different searches don't interfere with each other. Option 2 seems like a good idea in principle, as it would allow me to do the finer query in memory and would not require the persistence of result data which would only be discarded after the search was complete. Gut feeling is that this could be pretty performant too, but might result in larger memory overheads (which is fine, as we can be pretty flexible on the amount of memory our JVM gets). Option 3 could be very performant but is something I would like to avoid, as any code we write would require such careful testing that the time taken to achieve something flexible and robust enough would probably be prohibitive. I don't have time to prototype all 3 ideas, so I am looking for comments people may have on the 3 options above, plus any further ideas I have not considered, to help me decide which idea might be most suitable. I'm currently leaning toward option 2 (in-memory database) so would be keen to hear from people with experience of querying POJOs in memory too. Hopefully I have described the situation in enough detail, but don't hesitate to ask if any further information is required to better understand the scenario. Cheers, Edd
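    For a fixed set of filterable attributes, option 3 does not have to mean hand-rolled loops: composing java.util.function.Predicate objects and running them through a stream filter keeps the user-selected ranges declarative. A hedged sketch, with an invented Result class standing in for the real Hibernate-annotated domain objects:

        import java.util.List;
        import java.util.function.Predicate;
        import java.util.stream.Collectors;

        public class InMemoryFilter {
            // Invented stand-in for one of the real domain classes.
            record Result(int y, String b) {}

            public static void main(String[] args) {
                List<Result> coarseResults = List.of(
                        new Result(5, "open"), new Result(42, "closed"), new Result(17, "open"));

                // Built from the user's selections, e.g. "y > 10 && b == open".
                Predicate<Result> yAbove = r -> r.y() > 10;
                Predicate<Result> bEquals = r -> "open".equals(r.b());
                Predicate<Result> userCriteria = yAbove.and(bEquals);

                List<Result> refined = coarseResults.stream()
                        .filter(userCriteria)
                        .collect(Collectors.toList());

                System.out.println(refined); // [Result[y=17, b=open]]
            }
        }

    At around 10,000 objects per search, a single in-memory pass like this is usually far cheaper than a round trip to Oracle, which is consistent with the poster's leaning toward options 2 or 3.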

    Read the article

  • What is the most idiomatic way to emulate Perl's Test::More::done_testing?

    - by DVK
    I have to build unit tests in an environment with a very old version of Test::More (perl 5.8 with $Test::More::VERSION being '0.80') which predates the addition of done_testing(). Upgrading to a newer Test::More is out of the question for practical reasons. And I am trying to avoid using no_plan - it's generally a bad idea not to catch when your unit test exits prematurely, say due to some logic not executing when you expected it to. What is the most idiomatic way of running a configurable number of tests, assuming no no_plan or done_testing() is used? Details: My unit tests usually take the form of:

        use Test::More;

        my @test_set = (
             [ "Test #1", $param1, $param2, ... ]
            ,[ "Test #2", $param1, $param2, ... ]
            # ,...
        );

        foreach my $test (@test_set) {
            run_test($test);
        }

        sub run_test {
            # $expected_tests += count_tests($test);
            ok(test1($test)) || diag("Test1 failed");
            # ...
        }

    The standard approach of use Test::More tests => 23; or BEGIN { plan tests => 23 } does not work, since both are obviously executed before @test_set is known. My current approach involves making @test_set global and defining it in the BEGIN {} block as follows:

        use Test::More;

        BEGIN {
            our @test_set = ();   # Same set of tests as above
            my $expected_tests = 0;
            foreach my $test (@test_set) {
                $expected_tests += count_tests($test);
            }
            plan tests => $expected_tests;
        }

        our @test_set;   # Must do!!! Since first "our" was in BEGIN's scope :(

        foreach my $test (@test_set) {   # Same
            run_test($test);
        }

        sub run_test {}   # Same

    I feel this can be done more idiomatically, but I'm not certain how to improve it. Chief among the smells is the duplicate our @test_set declaration - in BEGIN{} and after it. Another approach is to emulate done_testing() by calling Test::More->builder->plan(tests => $total_tests_calculated). I'm not sure if it's any better idiomatically.

    Read the article

  • HTTP 500 Internal Server Error on IIS 7.5 with MVC3

    - by Tor Haugen
    I am trying to install an MVC3 application on our production server with no luck. The application is from a 3rd party (compiled), and so debugging is not available to me. Besides, I strongly suspect the error occurs before any code in the site has a chance to execute. Our staging server is - as far as I can determine - set up exactly like the production server. Both run Windows Server 2008 Standard R2, and both also run a SharePoint 2010 site (though this install doesn't touch that in any way). IIS is version 7.5, and .NET Framework 4.0 (required by the MVC app) was recently installed by me, with a reboot afterwards. The application is very small and simple and, as far as I can tell, sticks to fairly standard functionality - including forms authentication (i.e. it doesn't pull any dirty tricks). The error message shown in the browser is very general:

        HTTP Error 500.0 - Internal Server Error
        An error message detailing the cause of this specific request failure can be found in the
        application event log of the web server. Please review this log entry to discover what caused
        this error to occur.

    The bit about 'An error message detailing the cause' being in the application event log seems to be just speculation - a pious hope that whatever code actually caused the error will log it. Nothing useful is to be found in the event log (only the very same message, logged by IIS):

        Module: AspNetInitClrHostFailureModule
        Notification: BeginRequest
        Handler: StaticFile
        Error Code: 0x80070002
        Requested URL: http://xxxxxx.xxxxxx.xx:80/
        Physical Path: C:\Xxxxxxx\Prod\WebClient
        Logon Method: Not yet determined
        Logon User: Not yet determined

    Using Failed Request Tracing, I have been able to track the error (as also indicated above) to the AspNetInitClrHostFailureModule:

        103. NOTIFY_MODULE_START
             ModuleName: AspNetInitClrHostFailureModule, Notification: BEGIN_REQUEST (1), fIsPostNotification: false
        104. SET_RESPONSE_ERROR_DESCRIPTION
             ErrorDescription: An error message detailing the cause of this specific request failure can be found
             in the application event log of the web server. Please review this log entry to discover what caused
             this error to occur.
        105. MODULE_SET_RESPONSE_ERROR_STATUS
             ModuleName: AspNetInitClrHostFailureModule, Notification: BEGIN_REQUEST (1), HttpStatus: 500,
             HttpReason: Internal Server Error, HttpSubStatus: 0, ErrorCode: 2147942402 -
             The system cannot find the file specified. (0x80070002)

    So there you have it. Seemingly, the AspNetInitClrHostFailureModule fails to find some file. So some questions are:

    What is the AspNetInitClrHostFailureModule? It is not listed in the fairly exhausting list of modules configurable in IIS manager for the site. I have had no success googling it either. Maybe it's secret..

    I access the root URL of the site. This is supposed to be redirected to /Account/LogOn by the FormsAuthenticationModule. Why then is the handler StaticFile? Is that a clue?

    I have tried removing the infamous system.webServer/modules/runAllManagedModulesForAllRequests attribute, and that makes the error go away (but MVC not actually working, of course). I am prepared to specify all necessary modules manually if that's what it takes, but if the AspNetInitClrHostFailureModule is actually needed, I will be just as stuck. Does anyone know, or can anyone direct me to someone who knows, exactly what modules a typical MVC3 application actually needs?
This question might well be a duplicate of this one, but he didn't get any useful answer, and also asked less specific questions. So I'll have my own go. Hoping for some help here :) Edit: I have now tried setting up a trivial MVC 3 project on the server. I created a new project using the MVC Application template, compiled it and deployed it to the server. It behaves in exactly the same way. The server simply cannot run MVC 3 projects.
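    For reference, the attribute the poster experimented with lives in the application's web.config; this is its standard shape, shown only to make the text concrete (it is not a fix confirmed by the post):

        <configuration>
          <system.webServer>
            <!-- The attribute the poster removed while troubleshooting.
                 MVC's extensionless routing on IIS 7.5 commonly relies on this
                 being "true" unless the required modules are registered explicitly. -->
            <modules runAllManagedModulesForAllRequests="true" />
          </system.webServer>
        </configuration>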

    Read the article

  • Rails, Capistrano, Nginx, Unicorn - Application has been already initialized (RuntimeError)

    - by Andy Copley
    Can anyone shed some light on what exactly this error refers to? I'm having trouble deploying new versions of the site. I, INFO -- : reloading config_file=[snip]/current/config/unicorn.rb I, INFO -- : Refreshing Gem list E, ERROR -- : error reloading config_file=[snip]/current/config/unicorn.rb: Application has been already initialized. (RuntimeError) E, ERROR -- : [snip]/shared/bundle/ruby/1.9.1/gems/railties-3.2.3/lib/rails/application.rb:135:in `initialize!' E, ERROR -- : [snip]/shared/bundle/ruby/1.9.1/gems/railties-3.2.3/lib/rails/railtie/configurable.rb:30:in `method_missing' E, ERROR -- : [snip]/releases/20120907085937/config/environment.rb:5:in `<top (required)>' E, ERROR -- : [snip]/shared/bundle/ruby/1.9.1/gems/activesupport-3.2.3/lib/active_support/dependencies.rb:251:in `require' E, ERROR -- : [snip]/shared/bundle/ruby/1.9.1/gems/activesupport-3.2.3/lib/active_support/dependencies.rb:251:in `block in require' E, ERROR -- : [snip]/shared/bundle/ruby/1.9.1/gems/activesupport-3.2.3/lib/active_support/dependencies.rb:236:in `load_dependency' E, ERROR -- : [snip]/shared/bundle/ruby/1.9.1/gems/activesupport-3.2.3/lib/active_support/dependencies.rb:251:in `require' E, ERROR -- : config.ru:4:in `block in <main>' E, ERROR -- : [snip]/shared/bundle/ruby/1.9.1/gems/rack-1.4.1/lib/rack/builder.rb:51:in `instance_eval' E, ERROR -- : [snip]/shared/bundle/ruby/1.9.1/gems/rack-1.4.1/lib/rack/builder.rb:51:in `initialize' E, ERROR -- : config.ru:1:in `new' E, ERROR -- : config.ru:1:in `<main>' E, ERROR -- : [snip]/shared/bundle/ruby/1.9.1/gems/unicorn-4.3.1/lib/unicorn.rb:44:in `eval' E, ERROR -- : [snip]/shared/bundle/ruby/1.9.1/gems/unicorn-4.3.1/lib/unicorn.rb:44:in `block in builder' E, ERROR -- : [snip]/shared/bundle/ruby/1.9.1/gems/unicorn-4.3.1/lib/unicorn/http_server.rb:696:in `call' E, ERROR -- : [snip]/shared/bundle/ruby/1.9.1/gems/unicorn-4.3.1/lib/unicorn/http_server.rb:696:in `build_app!' E, ERROR -- : [snip]/shared/bundle/ruby/1.9.1/gems/unicorn-4.3.1/lib/unicorn/http_server.rb:677:in `load_config!' E, ERROR -- : [snip]/shared/bundle/ruby/1.9.1/gems/unicorn-4.3.1/lib/unicorn/http_server.rb:303:in `join' E, ERROR -- : [snip]/shared/bundle/ruby/1.9.1/gems/unicorn-4.3.1/bin/unicorn:121:in `<top (required)>' E, ERROR -- : [snip]/shared/bundle/ruby/1.9.1/bin/unicorn:23:in `load' E, ERROR -- : [snip]/shared/bundle/ruby/1.9.1/bin/unicorn:23:in `<main>' I, INFO -- : reaped #<Process::Status: pid 3182 exit 0> worker=0 I, INFO -- : reaped #<Process::Status: pid 3185 exit 0> worker=1 I, INFO -- : reaped #<Process::Status: pid 3188 exit 0> worker=2 I, INFO -- : reaped #<Process::Status: pid 3191 exit 0> worker=3 I, INFO -- : worker=0 ready I, INFO -- : worker=3 ready I, INFO -- : worker=1 ready I, INFO -- : worker=2 ready Unicorn.rb root = "/home/[user]/apps/[site]/current" working_directory root pid "#{root}/tmp/pids/unicorn.pid" stderr_path "#{root}/log/unicorn.log" stdout_path "#{root}/log/unicorn.log" listen "/tmp/unicorn.[site].sock", :backlog => 2048 worker_processes 4 preload_app true timeout 30 before_fork do |server, worker| defined?(ActiveRecord::Base) and ActiveRecord::Base.connection.disconnect! end after_fork do |server, worker| defined?(ActiveRecord::Base) and ActiveRecord::Base.establish_connection end Any help appreciated - I can pull more config files if required.

    Read the article

  • Performing mechanical movements using a computer

    - by Vi
    How can I make a computer (in particular, my laptop) perform some mechanical movements without buying anything over $5, soldering things inside the computer, or creating big sophisticated circuits? Traditionally the CD-ROM tray is used to make the computer do some movement IRL by, for example, an SSH command, but in a laptop the tray is one-shot (unless manually reloaded) and also not a very comfortable [mis]usage. Some assistance circuits can be in use too, but nothing complex. For example, there is a little motor that can work on USB power. Devices in my computer:

    DVD-ROM tray: one-time push.
    USB power: continuous power to a motor or LEDs or a relay that turns on something powerful.
    Audio card: 3 outputs (modprobe alsa model=test can set Mic and Line-in as additional outputs). One controllable DC output (microphone) that can power up an LED and some electronic (maybe even mechanical?) relay. With sophisticated additional circuitry it can also control a lot of devices with good precision. Both input and output support. Probably the most useful object in the computer for a radio ham.
    Modem: Don't know much about this one; it doesn't work because hsfmodem crashes the kernel if memory is >= 1 GB. Maybe its "pick up" and "hang up" can switch power taken from the USB port on and off?
    Video card: VGA port? S-Video port? Will they be useful?
    Backlight: Tunable, but probably not useful.
    CardBus (or similar) slot: Probably nothing interesting for the task (is it?).
    AC adapter and battery: Probably nothing programmable here. /* My AC adapter already has additional jacks to connect extra devices */
    Keyboard: No use.
    Touchpad: Good sensor (synclient -m 1), but no output.
    Various LEDs inside the laptop: Probably too weak, and requires soldering.
    Fans inside the laptop: Poor control over them, requires soldering, and dangerous to tinker with.
    HDD (internal and external): can be spun down and up (hdparm -Y, cat /dev/ubb), but connecting anything serially with its power line leaves the HDD underpowered... And too complex.

    Have I missed something? Any ideas how to use the described components? Any other ideas? Maybe there are easily available /* in developing countries */ cheap devices like "enhanced multimeters" that are controllable from a computer and can provide configurable output and measure current and other things? Things to aid pushing many physical buttons with a computer. Isn't this a simple idea and implementation, and a lot of use in good hands?
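    The CD-ROM-tray and HDD tricks mentioned in the question map to ordinary commands, so any remote trigger (SSH, cron) can drive them. A hedged sketch of the usual invocations on Linux; the device paths are examples, not taken from the post:

        # Push the DVD-ROM tray out, then pull it back in (the "one-shot" movement).
        eject /dev/sr0
        eject -t /dev/sr0        # -t closes the tray; many laptop drives cannot do this

        # Spin an external HDD down and wake it up again, as the post suggests.
        hdparm -Y /dev/sdb                            # put the drive to sleep
        dd if=/dev/sdb of=/dev/null bs=512 count=1    # any read wakes it back up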

    Read the article

  • How to set up a centralized backup server with lots of offsite workstations, intermittent internet connectivity, and stubborn users?

    - by Zac B
    This might be an impossible question. Context: We have a bunch of computers across around 1000 users. We have a centralized office where 900 of the users work, most of the time. Most of the computers are laptops. They are very frequently coming on and off the network for hours at a time. Users often take their computers home and do lots of work from home. In addition, there are a handful of users who work elsewhere in the country, who are offline (no internet connection whatsoever) for more than half of the time they use their machines. All of the machines are Windows 7/XP.

    Problem: People are always losing data. One day someone accidentally deletes a bunch of files. The next day someone else installs a bad driver or tries to mess with something in system32 and needs a personal data backup/reinstall of Windows. Because of how many of our business operations are done without an internet connection, and how frequently computers come on- and offline, it's unfeasible to make users use network storage for all of their data. We tried giving them Dropboxes, and they stored their files elsewhere. We bought and deployed Altiris, and they uninstalled it and blamed us when they couldn't get back files that they accidentally deleted while they were offline and hadn't taken a backup in months. We tried teaching them backup best practices and using scheduled sync tools to upload things to the network drives, and they turned them off because they "looked like viruses". It doesn't help that many of these users are pretty high up in the business and are not amenable to any sort of "you need to do something regularly because we say so" solution.

    Question: Other than finding another job where IT is treated differently and users are willing to follow best practices, how would people recommend I implement a file backup solution that supports the following:

    Backs up to a centralized server over LAN or WAN whenever a network link becomes available, or on a schedule.
    Supports interrupted/resumed backups (and hopefully file-delta-only backups), since connections to the network (WAN or LAN) are often slow and only open for half an hour or so.
    Supports relatively rapid, "I accidentally deleted the TPS reports! Oh no!" single-file recovery, ideally administered from the central backup server rather than the client PC.
    Supports local-to-local file-delta backup on a schedule, so that users without a network connection for a few days can still retrieve accidental deletions or whatnot. Ideally, the local stored backups would be pushed up to the server whenever a network link is available.
    Isn't configurable on the clients without certain credentials, because the CFOs (who won't give up their admin rights on the domain) will disable it if they can.
    Backs up the entire hard drive. There are people who are self-righteous about storing things in C:\, or in the recycle bin, or in the C:\Windows dir (yes, I know).

    I'm fine integrating multiple products/solutions, or scripting different programs together myself (I'm a somewhat competent programmer), but I've been drawing a blank on where to start. Dropbox is folder-specific, Altiris doesn't cope with LAN outages or interrupted/resumed backups, and Volume Shadow Copy is awesome for a local-to-local solution, but I don't know how to push days of stored shadow copies up to a server in a 2-hour window of network access. The company is fine with spending decent money on this, thousands (USD) on a server, and hundreds on clients, if necessary.
I want to emphasize that this isn't a shopping list request. While I wish there was a program out there that did what I want, I've looked pretty hard, and not found anything that fits the bill. Instead, I'm hoping for ideas on where to start hacking things together from scratch/from different technologies to make something stable that works. Cheers!

    Read the article

  • Oracle Enterprise Manager 11g Application Management Suite for Oracle E-Business Suite Now Available

    - by chung.wu
    Oracle Enterprise Manager 11g Application Management Suite for Oracle E-Business Suite is now available. The management suite combines features that were available in the standalone Application Management Pack for Oracle E-Business Suite and Application Change Management Pack for Oracle E-Business Suite with Oracle's market leading real user monitoring and configuration management capabilities to provide the most complete solution for managing E-Business Suite applications. The features that were available in the standalone management packs are now packaged into Oracle E-Business Suite Plug-in 4.0, which is now fully certified with Oracle Enterprise Manager 11g Grid Control. This latest plug-in extends Grid Control with E-Business Suite specific management capabilities and features enhanced change management support. In addition, this latest release of Application Management Suite for Oracle E-Business Suite also includes numerous real user monitoring improvements. General Enhancements This new release of Application Management Suite for Oracle E-Business Suite offers the following key capabilities: Oracle Enterprise Manager 11g Grid Control Support: All components of the management suite are certified with Oracle Enterprise Manager 11g Grid Control. Built-in Diagnostic Ability: This release has numerous major enhancements that provide the necessary intelligence to determine if the product has been installed and configured correctly. There are diagnostics for Discovery, Cloning, and User Monitoring that will validate if the appropriate patches, privileges, setups, and profile options have been configured. This feature improves the setup and configuration time to be up and operational. Lifecycle Automation Enhancements Application Management Suite for Oracle E-Business Suite provides a centralized view to monitor and orchestrate changes (both functional and technical) across multiple Oracle E-Business Suite systems. In this latest release, it provides even more control and flexibility in managing Oracle E-Business Suite changes.Change Management: Built-in Diagnostic Ability: This latest release has numerous major enhancements that provide the necessary intelligence to determine if the product has been installed and configured correctly. There are diagnostics for Customization Manager, Patch Manager, and Setup Manager that will validate if the appropriate patches, privileges, setups, and profile options have been configured. Enhancing the setup time and configuration time to be up and operational. Customization Manager: Multi-Node Custom Application Registration: This feature automates the process of registering and validating custom products/applications on every node in a multi-node EBS system. Public/Private File Source Mappings and E-Business Suite Mappings: File Source Mappings & E-Business Suite Mappings can be created and marked as public or private. Only the creator/owner can define/edit his/her own mappings. Users can use public mappings, but cannot edit or change settings. Test Checkout Command for Versions: This feature allows you to test/verify checkout commands at the version level within the File Source Mapping page. Prerequisite Patch Validation: You can specify prerequisite patches for Customization packages and for Release 12 Oracle E-Business Suite packages. Destination Path Population: You can now automatically populate the Destination Path for common file types during package construction. 
OAF File Type Support: Ability to package Oracle Application Framework (OAF) customizations and deploy them across multiple Oracle E-Business Suite instances. Extended PLL Support: Ability to distinguish between different types of PLLs (that is, Report and Forms PLL files). Providing better granularity when managing PLL objects. Enhanced Standard Checker: Provides greater and more comprehensive list of coding standards that are verified during the package build process (for example, File Driver exceptions, Java checks, XML checks, SQL checks, etc.) HTML Package Readme: The package Readme is in HTML format and includes the file listing. Advanced Package Search Capabilities: The ability to utilize more criteria within the advanced search package (that is, Public, Last Updated by, Files Source Mapping, and E-Business Suite Mapping). Enhanced Package Build Notifications: More detailed information on the results of a package build process. Better, more detailed troubleshooting guidance in the event of build failures. Patch Manager:Staged Patches: Ability to run Patch Manager with no external internet access. Customer can download Oracle E-Business Suite patches into a shared location for Patch Manager to access and apply. Supports highly secured production environments that prohibit external internet connections. Support for Superseded Patches: Automatic check for superseded patches. Allows users to easily add superseded patches into the Patch Run. More comprehensive and correct Patch Runs. Removes many manual and laborious tasks, frees up Apps DBAs for higher value-added tasks. Automatic Primary Node Identification: Users can now specify which is the "primary node" (that is, which node hosts the Shared APPL_TOP) during the Patch Run interview process, available for Release 12 only. Setup Manager:Preview Extract Results: Ability to execute an extract in "proof mode", and examine the query results, to determine accuracy. Used in conjunction with the "where" clause in Advanced Filtering. This feature can provide better and more accurate fine tuning of extracts. Use Uploaded Extracts in New Projects: Ability to incorporate uploaded extracts in new projects via new LOV fields in package construction. Leverages the Setup Manager repository to access extracts that have been uploaded. Allows customer to reuse uploaded extracts to provision new instances. Re-use Existing (that is, historical) Extracts in New Projects: Ability to incorporate existing extracts in new projects via new LOV fields in package construction. Leverages the Setup Manager repository to access point-in-time extracts (snapshots) of configuration data. Allows customer to reuse existing extracts to provision new instances. Allows comparative historical reporting of identical APIs, executed at different times. Support for BR100 formats: Setup Manager can now automatically produce reports in the BR100 format. Native support for industry standard formats. Concurrent Manager API Support: General Foundation now provides an API for management of "Concurrent Manager" configuration data. Ability to migrate Concurrent Managers from one instance to another. Complete the setup once and never again; no need to redefine the Concurrent Managers. User Experience Management Enhancements Application Management Suite for Oracle E-Business Suite includes comprehensive capabilities for user experience management, supporting both real user and synthetic transaction based user monitoring techniques. 
This latest release of the management suite include numerous improvements in real user monitoring support. KPI Reporting: Configurable decimal precision for reporting of KPI and SLA values. By default, this is two decimal places. KPI numerator and denominator information. It is now possible to view KPI numerator and denominator information, and to have it available for export. Content Messages Processing: The application content message facility has been extended to distinguish between notifications and errors. In addition, it is now possible to specify matching rules that can be used to refine a selected content message specification. Note this is only available for XPath-based (not literal) message contents. Data Export: The Enriched data export facility has been significantly enhanced to provide improved performance and accessibility. Data is no longer stored within XML-based files, but is now stored within the Reporter database. However, it is possible to configure an alternative database for its storage. Access to the export data is through SQL. With this enhancement, it is now more easy than ever to use tools such as Oracle Business Intelligence Enterprise Edition to analyze correlated data collected from real user monitoring and business data sources. SNMP Traps for System Events: Previously, the SNMP notification facility was only available for KPI alerting. It has now been extended to support the generation of SNMP traps for system events, to provide external health monitoring of the RUEI system processes. Performance Improvements: Enhanced dashboard performance. The dashboard facility has been enhanced to support the parallel loading of items. In the case of dashboards containing large numbers of items, this can result in a significant performance improvement. Initial period selection within Data Browser and reports. The User Preferences facility has been extended to allow you to specify the initial period selection when first entering the Data Browser or reports facility. The default is the last hour. Performance improvement when querying the all sessions group. Technical Prerequisites, Download and Installation Instructions The Linux version of the plug-in is available for immediate download from Oracle Technology Network or Oracle eDelivery. For specific information regarding technical prerequisites, product download and installation, please refer to My Oracle Support note 1224313.1. The following certifications are in progress: * Oracle Solaris on SPARC (64-bit) (9, 10) * HP-UX Itanium (11.23, 11.31) * HP-UX PA-RISC (64-bit) (11.23, 11.31) * IBM AIX on Power Systems (64-bit) (5.3, 6.1)

    Read the article

  • Customize the Default Screensavers in Windows 7 and Vista

    - by Matthew Guay
    Windows 7 and Vista include a nice set of screensavers, but unfortunately most of them aren't configurable by default. Thanks to a free app and some registry changes, however, you can make the default screensavers uniquely yours!

    Customize the default screensavers

    If you've ever pressed the Customize button on most of the default screensavers in Windows 7 and Vista, you were probably greeted with a message saying the screensaver has no options that you can set. A little digging in the registry shows that this isn't fully correct. The default screensavers in Vista and 7 do have options you can set, but they're not obvious. With the help of an app or some registry tips, you can easily customize the screensavers to be uniquely yours. Here's how you can do it with an app or in the registry.

    Customize Windows screensavers with System Screensavers Tweaker

    Download the System Screensavers Tweaker (link below), and unzip the folder. Run nt6srccfg.exe in the folder to tweak your screensavers. This application lets you tweak the screensavers' registry settings graphically, and it works great in all editions of Windows Vista and 7, including x64 versions. Change any of the settings you want in the screensaver tweaker, and click Apply. To preview the changes to your screensaver, open the Screen Saver settings window as normal by right-clicking on the desktop and selecting Personalize, then click the Screensaver button on the bottom right. Now select your modified screensaver and click Preview to see your changes.

    You can change a wide variety of settings for the Bubbles, Ribbons, and Mystify screensavers in Windows 7 and Vista, as well as the Aurora screensaver in Windows Vista. The tweaks to the Bubbles screensaver are especially nice: without transparency the bubbles look solid, and with a little more tweaking you get a screensaver that looks more like a screen full of marbles. Ribbons and Mystify each have fewer settings, but can still produce some unique effects. And if you want to return your screensavers to their default settings, simply run the System Screensavers Tweaker and select Reset to defaults on any screensaver you wish to reset.

    Customize Windows screensavers in the Registry

    If you prefer to roll up your sleeves and tweak Windows under the hood, here's how you can customize the screensavers yourself in the Registry. Type regedit into the search box in the Start menu, browse to the key for each screensaver, and add or modify the DWORD values listed for that screensaver using the Decimal base. Please note: tweaking the Registry can be difficult, so if you're unsure, just use the tweaking application above. Also, you'll probably want to create a System Restore Point.

    Bubbles

    To edit the Bubbles screensaver, browse to the following in regedit:

        HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Screensavers\Bubbles

    Now, add or modify the following DWORD values to tweak the screensaver:

        MaterialGlass – enter 0 for solid or 1 for transparent bubbles
        Radius – enter a number between 1090000000 and 1130000000; the larger the number, the larger the bubbles' radius
        ShowBubbles – enter 0 to show a black background or 1 to show the current desktop behind the bubbles
        ShowShadows – enter 0 for no shadow or 1 for shadows behind the bubbles
        SphereDensity – enter a number from 1000000000 to 2100000000; the higher the number, the more bubbles on the screen
        TurbulenceNumOctaves – enter a number from 1 to 255; the higher the number, the faster the bubble colors will change

    Ribbons

    To edit the Ribbons screensaver, browse to the following in regedit:

        HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Screensavers\Ribbons

    Now, add or modify the following DWORD values to tweak the screensaver:

        Blur – enter 0 to prevent ribbons from fading, or 1 to have them fade away after a few moments
        Numribbons – enter a number from 1 to 100; the higher the number, the more ribbons on the screen
        RibbonWidth – enter a number from 1000000000 to 1080000000; the higher the number, the thicker the ribbons

    Mystify

    To edit the Mystify screensaver, browse to the following in regedit:

        HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Screensavers\Mystify

    Now, add or modify the following DWORD values to tweak the screensaver:

        Blur – enter 0 to prevent lines from fading, or 1 to have them fade away after a few moments
        LineWidth – enter a number from 1000000000 to 1080000000; the higher the number, the wider the lines
        NumLines – enter a number from 1 to 100; the higher the value, the more lines on the screen

    Aurora – Windows Vista only

    To edit the Aurora screensaver in Windows Vista, browse to the following in regedit:

        HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Screensavers\Aurora

    Now, add or modify the following DWORD values to tweak the screensaver:

        Amplitude – enter a value from 500000000 to 2000000000; the higher the value, the slower the motion
        Brightness – enter a value from 1000000000 to 1050000000; the higher the value, the brighter the effect
        NumLayers – enter a value from 1 to 15; the higher the value, the more aurora layers displayed
        Speed – enter a value from 1000000000 to 2100000000; the higher the value, the faster the cycling

    Conclusion

    Although the default screensavers are nice, they can become boring after a while with their default settings. But with these tweaks, you can create a variety of vibrant screensavers that should keep your desktop fresh and interesting.

    Link: Download the System Screensavers Tweaker
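    If you'd rather script the change than click through regedit, the same Bubbles values from the list above can be written with reg add. A hedged batch-file sketch; the particular numbers are just examples chosen inside the ranges the article quotes:

        :: Transparent bubbles over the current desktop, with shadows
        reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Screensavers\Bubbles" /v MaterialGlass /t REG_DWORD /d 1 /f
        reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Screensavers\Bubbles" /v ShowBubbles   /t REG_DWORD /d 1 /f
        reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Screensavers\Bubbles" /v ShowShadows   /t REG_DWORD /d 1 /f
        :: More, larger bubbles (values fall inside the ranges quoted above)
        reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Screensavers\Bubbles" /v SphereDensity /t REG_DWORD /d 2000000000 /f
        reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Screensavers\Bubbles" /v Radius        /t REG_DWORD /d 1120000000 /f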

    Read the article

  • 24+ Coda Alternatives for Windows and Linux

    - by Matt
    Coda plays an important role in designing layout on Mac. There are numerous coda alternatives for windows and Linux too. It is not possible to describe each and everyone so some of the coda alternatives, which work on both windows and Linux platforms, are discussed below. EditPlus $35.00 Good thing about EditPlus is that it highlights URLs and email addresses, activating them when you ‘crtl + double-click’. It also has a built in browser for previewing HTML, and FTP and SFTP support. Also supports Macros and RegEx find and replace. UltraEdit $49.99 It is another good coda alternative for windows and Linux. It is the best suited editor for text, HTML and HEX. It also plays an advanced PHP, Perl, Java and JavaScript editor for programmers. It supports disk-based 64-bit or standard file handling on 32-bit Windows platforms or window 2000 and later versions. HippoEdit $39.95 HippoEDIT has the best autocomplete it gives pop a ‘tooltip’ above your cursor as you type, suggesting words you’ve already typed. It does syntax highlighting for over 2 dozen language. Sublime Text $59.00 Sublime Text awesome ‘zoomed out’ view of the file lets you focus on the area you want. It lets you open a local file when you right-click on its link, and there are a few automation features, so this would make a solid choice of a text editor. Textpad $24.70 TextPad is simple editor with nifty features such as column select, drag-and-drop text between files, and hyperlink support. It also supports large files. Aptana Free Aptana Studio is one of the best editors working on both windows and Linux. It is a complete web development setting that has a nice blend of powerful authoring tools with a collection of online hosting and collaboration services. It is quite helpful as it support for PHP, CSS, FTP, and more. SciTE Free It is a SCIntilla based Text Editor. It has gradually developed as a generally useful editor. It provides for building and running programs. It is best to be used for jobs with simple configurations. SciTE is currently available for Intel Win32 and Linux compatible operating systems with GTK+. It has been run on Windows XP and on Fedora 8 and Ubuntu 7.10 with GTK+ 2.12 E Text Editor $34.96 E Text Editor is a new text editor for Windows, which also works on Linux as well. It has powerful editing features and also some unique abilities. It makes text manipulation quite fast and easy, and makes user focus on his writing as it automatically does all the manual work. It can be extend it in any language. It supports Text Mate bundles, thus allows the user to tap into a huge and active community. Editra Free Editra is an upcoming editor, with some fantastic features such as user profiles, auto-completion, session saving, and syntax highlighing for 60+ languages. Plugins can extend the feature set, offering an integrated python console, FTP client, file browser, and calculator, among others. PSPad Free PSPad is a good Template for writing CSS, as it an internal web browser, and a macro recorder to the table. It also supports hex editing, and some degree of code compiling. JEdit Free It is a mature programmer’s text editor and has taken a good deal of time to be developed as it is today. It is better than many costlier development tools due to its features and simplicity of use. It has been released as free software with full source code, provided under the terms of the GPL 2.0. Which also adds to its attractiveness. 
NEdit Free It is a multi-purpose text editor for the X Window System, which also works on Linux. It combines a standard, easy to use, graphical user interface with the full functionality and stability required by users who edit text for long period a day. It also provides for thorough support for development in various languages. It also facilitates the use of text processors, and other tools at the same time. It can be used productively by anyone who needs to edit text. It is quite a user-friendly tool. Its salient features include syntax highlighting with built in pattern, auto indent, tab emulation, block indentation adjustment etc. As of version 5.1, NEdit may be freely distributed under the terms of the GNU General Public License. MadEdit Free Mad Edit is an Open-Source and Cross-Platform Text/Hex Editor. It is written in C++ and wxWidgets. MadEdit can edit files in Text/Column/Hex modes. It also supports many useful functions, such as Syntax Highlighting, Word Wrap, Encoding for UTF8/16/32,and others. It also supports word count, which makes it quite a useful text editor for both windows and Linux. It has been recently modified on 10/09/2010. KompoZer Free Kompozer is a complete web authoring system that has a combination of web file management and easy-to-use WYSIWYG web page editing. KompoZer has been designed to be completely and extensively easy to use. It is thus an ideal tool for non-technical computer users who want to create an attractive, professional-looking web site without knowing HTML or web coding. It is based on the NVU source code. Vim Free Vim or “Vi IMproved” is an advanced text editor. Its salient features are syntax highlighting, word completion and it also has a huge amount of contributed content. Vim has several “modes” on offer for editing, which adds to the efficiency in editing. Thus it becomes a non-user-friendly application but it is also strength for its users. The normal mode binds alphanumeric keys to task-oriented commands. The visual mode highlights text. More tools for search & replace, defining functions, etc. are offered through command line mode. Vim comes with complete help. NotePad ++ Free One of the the best free text editor for Windows out there; with support for simple things—like syntax highlighting and folding—all the way up to FTP, Notepad++ should tick most of the boxes Notepad2 Free Notepad2 is also based on the Scintilla editing engine, but it’s much simpler than Notepad++. It bills itself as being fast, light-weight, and Notepad-like. Crimson Editor Free Crimson Editor has the ability to edit remote files, using a built-in FTP client; there’s also a spell checker. TotalEdit Free TotalEdit allows file comparison, RegEx search and replace, and has multiple options for file backup / versioning. For cleanup, it offers (X)HTML and XML customizable formatting, and a spell checker. In-Type Free ConTEXT Free SourceEdit Free SourceEdit includes features such as clipboard history, syntax highlighting and autocompletion for a decent set of languages. A hex editor and FTP client. RJ TextED Free RJ TextED supports integration with TopStyle Lite. Provides HTML validation and formatting. It includes an FTP client, a file browser, and a code browser, as well as a character map and support for email. GEDIT Free It is one of the best coda alternatives for windows and Linux. It has syntax highlighting and is best suitable for programming. 
    It has many attractive features such as full support for UTF-8, undo/redo, clipboard support, search and replace, and configurable syntax highlighting for various languages, and it is extensible with plugins. Other notable Coda alternatives for Windows and Linux are Redcar, Bluefish Editor, NVU, RubyMine, SlickEdit, Geany, Editra, txt2html and CSSED. There are many more; it's up to the user to decide which one suits their requirements best.

    Read the article

  • Prioritizing Product Features

    - by Robert May
    A very common task in Agile Environments is prioritization.  Teams that are functioning well will prioritize new features, old features, the backlog, and any other source of stories for the team, and they’ll do it regularly. Not all teams are good at prioritizing according to the real return on investment that building stories will yield to the company.  This is unfortunate.  Too often, teams end up building features that are less valuable, and everyone seems to know it except perhaps the product owner!  Most features built into software are never even used.  Clearly, not much return for features that go unused. So how does a company avoid building features that add little value to the company?  This is a tough question to answer, but usually, this prioritization starts at the top with the executives of the company.  After all, they’re responsible for the overall vision of the company. Here’s what I recommend: Know your market. Know your customers and users. Know where you’re going and what you want to achieve. Implement the Vision Know Your Market We often see companies that don’t know their market.  Personally, I’m surprised by this.  These companies don’t know who their competitors are, don’t know what features make their product desirable in the market, and in many cases, get by with saying, “I’ve been doing this for XX years.  I know what the market wants!”  In many cases, they equate “marketing” with “advertising” and don’t understand the difference. This is almost never true.  Good companies will spend significant amounts of time and money finding out who they’re competing against and what makes their competitors successful in the marketplace.  Good companies understand that marketing involves more than just advertising.  Often, marketing is mostly research and analysis, not sales.  Until you understand your market, you cannot know what features will give you the best return on your investment dollar. Good companies have a marketing department and can answer the next important step which is to know your customers and your users. Know your Customers and Users First, note that I included both customers and users.  They’re often not the same thing.  Users use the product that you build.  Customers buy the product that you build.  It’s a subtle difference, but too often, I’ve seen companies that focus exclusively on one or the other and are not successful simply because they ignore an important part of the group. If your company is doing appropriate marketing, you know that these are two different aspects of your product and that both deserve attention to have a product that is successful in your target market.  Your marketing department should be spending a lot of time understanding these personas and then conveying that information to the company. I’m always surprised when development teams think that they can build a product that people want to use without understanding the users of that product.  Developers think differently than most people in the world.  They know what the computer is doing.  The computer isn’t magic to them.  So when they assume that they know how to build something, they bring with them quite a bit of baggage.  Never assume that you know your customer unless you’re regularly having interaction with them.  Also, don’t just leave this to Marketing or Product Management.  Take them time to get your developers out with the customers as well.  
    Developers are very smart people, and often, seeing how someone uses their software inspires them to make a much better product. Very often, because the users and customers aren't known, teams will spend a significant amount of time building apps that are super flexible and configurable so that any possible combination of features can be used. This demonstrates a clear lack of understanding of the customer. Most configuration questions can quickly be answered by talking to the customer. In most cases, if your software requires significant setup and configuration before it's usable, you probably don't know your customers and users very well. Until you know your customers, you cannot know what features will be most valuable to them, and you cannot build those features in a way that your customers can use.

    Know Where You're Going and What You Want to Achieve

    Many companies suffer from not having a plan. Executives will tell the team to make them a plan. The team, not knowing their market and customers and users, will come up with a plan that doesn't reflect reality and doesn't consider ROI. Management then wonders why the product is doing poorly in the marketplace. Instead of leaving this up to the teams, as executives, work with Marketing to understand what broad categories of features will sell the most product in the marketplace. Then, once you've determined that, give this vision to the team and let them run with it. Revise the vision as needed, but avoid changing course frequently. Sure, sometimes you need to, but often executives will change priorities many times a month, leading to nothing more than confusion. If the team has a vision, they'll be able to execute that vision far better than they could otherwise. By knowing what products are most important, you can set budgetary goals and guidelines that will help you achieve the vision that was created.

    Implement the Vision

    Creating the vision is often where the general executives stop participating in the plan. The team is responsible for implementing that vision. Executives should attend showcases and should remain aware of the progress that the team is making towards meeting the vision, however. Once a broad vision has been created, the team should break that vision down into minimal marketable features (MMFs). These MMFs should be sized using story points so that, using the team's velocity, an estimated cost can be determined for each feature. The product management team should then try to quantify the relative value of the MMFs based on customer feedback and interviews. Once the value and cost of creating a feature are understood, a return on investment can be calculated, and the features should be prioritized with the MMFs that have the highest value and lowest cost rising to the top of the list to implement. Don't let politics get in the way! Once the MMFs have been prioritized, they should go through release planning to schedule them for implementation.

    Conclusion

    By having a good grasp on the strategy of the company, your Agile teams can be much more effective. Each and every story the team is implementing will roll up into features that matter to the company and provide ROI. The steps outlined in this post should be repeated on a regular basis. I recommend reviewing them at least once per quarter to make sure that the vision hasn't shifted and that the teams are still working on what matters most to the company.
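    The prioritization step described above (estimated cost from story points and velocity, relative value from product management, then sort by return) is easy to make concrete. A small illustrative sketch in Java; the feature names and numbers are made up for the example:

        import java.util.Comparator;
        import java.util.List;

        public class MmfPrioritizer {
            // value: relative business value from product management; costPoints: story-point estimate.
            record Mmf(String name, double value, double costPoints) {
                double roi() { return value / costPoints; }
            }

            public static void main(String[] args) {
                List<Mmf> backlog = List.of(
                        new Mmf("Self-service password reset", 80, 8),
                        new Mmf("Custom report builder", 120, 40),
                        new Mmf("CSV export", 30, 2));

                // Highest value per point of cost rises to the top, as the post recommends.
                backlog.stream()
                        .sorted(Comparator.comparingDouble(Mmf::roi).reversed())
                        .forEach(m -> System.out.printf("%-30s ROI=%.1f%n", m.name(), m.roi()));
            }
        }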

    Read the article

  • Product Development Investment: A Measure of Vendor Performance

    - by Jim Mcglothlin
The relationship between a large, complex organization and its key suppliers of information technology is normally more than just "strategic". Expectations about the duration of the relationship typically exceed 20 years. Enterprise applications and technology infrastructure are not expected to be changed out like petunias. So how would you rate the due diligence processes as performed in Higher Education when selecting critical, transformational information technology? My observation: I see a lot of effort put into elaborate demonstrations of basic software functionality. I see a lot of attention paid to the cost element of technology acquisition, including the contracted cost of implementation consulting services. But the factor that receives only cursory analysis and due diligence is long-term performance--the ability of a vendor to grow, expand, and develop, and bring its customers along with it.

So what should you look for in a long-term IT supplier? Oracle has a public track record for product development. The annual investment has been on a run rate of almost $3 billion in organic product development. Oracle's well-publicized acquisitions and mergers have been supplemental to its R&D. This is important for Higher Education. Another meaningful way to evaluate a company is to look at the tangible track record of enhancement. Consider the Oracle-PeopleSoft enterprise business platform since it was acquired by Oracle six years ago. Each product or technology enhancement below is paired with its customer or user impact:

Service Oriented Architecture (SOA): 300+ new web services delivered in versions 9.0 & 9.1 provide flexibility, so that customers can integrate PeopleSoft with other applications. Campus Solutions has added Admissions and Constituent Web Services.

Constituent Relationship Management: PeopleSoft CRM 9.1 for Higher Education introduced new process flows for student recruiting and retention to support "Student Success" initiatives. A 360 view of the constituent is now delivered, and the concept of a single-stop Student Services Center is now in CRM 9.1 with tight integration to PeopleSoft Campus Solutions.

Human Capital Management: Contract Pay for Education, with flexibility for configuration and calculation, has been extended in HCM 9.1. New chartfield integration among Project Costing - Time & Labor - Payroll serves the labor distribution requirements for Grants / Sponsored Research.

Talent Management: PeopleSoft 9.0 and 9.1 feature an integrated talent management approach centered on definitions in "Profile Manager", with all new usability improvements. Internal and external candidate pools, and the entire recruitment process, are driven by delivered configurable selection and on-boarding processes. Interview scheduling and online job offers are newly delivered processes.

Performance Management: PeopleSoft HCM ePerformance 9.1 will include significant new functionality designed to help organizations more effectively align business objectives with employee goals. Using an Organization Chart view, your business goals can flow down to become tangible objectives per employee.

Succession Planning / Workforce Development: New in HCM 9.0, enhanced in 9.1, is a planning capability for regular or unusual (major organizational change) succession of internal or external candidates. PeopleSoft supports employee-based career planning, which ultimately increases the integrity of the succession planning process (employees identify their career needs, plans, preferences, and interests).

Dashboards / Oracle Business Intelligence Application Suite: Oracle Human Resources Analytics provides the workforce information foundation that integrates data from HR functional areas and Finance. Oracle Human Resources Analytics delivers 9 dashboards and over 200 reports. It gives your HR professionals and front-line managers the tools to analyze workforce staffing, retention, and productivity, to better source high-quality applicants, and to reduce absence costs.

Multi-year Planning and Commitment Control: External funding sources, especially Grants, require a multi-year encumbrance business process. PeopleSoft HCM 9.1 adds multi-year funding and commitment control, including budget checking. The newly designed Real Time Budget Checking will provide the customer with an updated snapshot of their budget and encumbrances at any given time.

Position Budgeting with Hyperion: Hyperion Planning world-class products now include delivered integration to PeopleSoft HCM. Position Budgeting is available in the new Public Sector Planning module of Hyperion.

Web 2.0 features for the latest in usability: PeopleSoft 9.1 features a contemporary internet user experience: partial-page refreshing, drag and drop pagelets, a new menu structure, navigation pagelets, modal popup message windows, favorites & recently used links, type-ahead, drag and drop grid columns, and pop-out grids.

Portal Workspaces: Enterprise 2.0 for your collaborative web communities, using new content management, along with wikis, blogs, and discussion forums in PeopleSoft Portal 9.1.

PeopleTools enhanced by Oracle Fusion Middleware: Standards-based tools have been added to the PeopleTools application infrastructure: BI (XML) Publisher and Java tools. Certified for use with PeopleSoft: Oracle Business Intelligence (OBIEE), Oracle Enterprise Manager, Oracle WebLogic Server, and Oracle SOA Suite.

Hosting for PeopleSoft applications: A solid new deployment option: Oracle On Demand remote hosting center for high scalability, security, and continuity of operations.

Business Process Outsourcing (BPO) for HCM / Payroll functions: Partnership with AT&T provides hosting of the HR/Payroll application along with payroll business process operations, and subscription-based service fees (SaaS). The AT&T BPO full service includes pay sheet processing, bank and 3rd party file transfer, payroll tax handling, etc.

Continuous Delivery Model: Feature Packs provide faster time-to-benefit; new features become available in PeopleSoft 9.1 (or Campus Solutions 9.0) without the need to perform an upgrade.

Golden person data model across all campus applications: Oracle Higher Education Constituent Hub provides synchronization and data governance of person data across any application, e.g. HR/Payroll, Student Information System, Housing, Emergency Contact, LMS, CRM.

Oracle's aggressive enhancement plans within the "Applications Unlimited" program continue, as new functionality is under development for a new version of a PeopleSoft release planned for 2012. Meanwhile, new capabilities are planned on an annual basis in Feature Packs. PeopleSoft just delivered the HCM 2010 Feature Pack and another is planned for 2011. In February we plan to have over 100 customers from our Customer Advisory Boards at our PeopleSoft Development Center in California to review designs for all of these releases.

For those of you near New York City: the investment and progressive development story described above is the subject of an Oracle road show event on February 9, 2011. 
Charting Your Course with Oracle Applications is a global event series designed to help business and IT executives assess the impact of new inflection points on their business and applications roadmap: changing workforces, shifting customer and constituent bases, and increased volatility. Learn how innovations ranging from new deployment models like cloud computing to the introduction of social applications and smart devices are delivering results across all areas of business and industry. THIS DOCUMENT IS FOR INFORMATIONAL PURPOSES ONLY AND MAY NOT BE INCORPORATED INTO A CONTRACT OR AGREEMENT.

    Read the article

  • Using the BAM Interceptor with Continuation

    - by Charles Young
Originally posted on: http://geekswithblogs.net/cyoung/archive/2014/06/02/using-the-bam-interceptor-with-continuation.aspx
I've recently been resurrecting some code written several years ago that makes extensive use of the BAM Interceptor provided as part of BizTalk Server's BAM event observation library.  In doing this, I noticed an issue with continuations.  Essentially, whenever I tried to configure one or more continuations for an activity, the BAM Interceptor failed to complete the activity correctly.  Careful inspection of my code confirmed that I was initializing and invoking the BAM Interceptor correctly, so I was mystified.  However, I eventually found the problem.  It is a logical error in the BAM Interceptor code itself.

The BAM Interceptor provides a useful mechanism for implementing dynamic tracking.  It supports configurable 'track points'.  These are grouped into named 'locations'.  BAM uses the term 'step' as a synonym for 'location'.  Each track point defines a BAM action such as starting an activity, extracting a data item, enabling a continuation, etc.  Each step defines a collection of track points.

Understanding Steps
The BAM Interceptor provides an abstract model for handling configuration of steps.  It doesn't, however, define any specific configuration mechanism (e.g., config files, SSO, etc.).  It is up to the developer to decide how to store, manage and retrieve configuration data.  At run time, this configuration is used to register track points which then drive the BAM Interceptor.

The full semantics of a step are not immediately clear from Microsoft's documentation.  They represent a point in a business activity where BAM tracking occurs.  They are named locations in the code.  What is less obvious is that they always represent either the full tracking work for a given activity or a discrete fragment of that work which commences with the start of a new activity or the continuation of an existing activity.  The BAM Interceptor enforces this by throwing an error if no 'start new' or 'continue' track point is registered for a named location.

This constraint implies that each step must be marked with an 'end activity' track point.  One of the peculiarities of BAM semantics is that when an activity is continued under a correlated ID, you must first mark the current activity as 'ended' in order to ensure the right housekeeping is done in the database.  If you re-start an ended activity under the same ID, you will leave the BAM import tables in an inconsistent state.  A step, therefore, always represents an entire unit of work for a given activity or continuation ID.  For activities with continuation, each unit of work is termed a 'fragment'.

Instance and Fragment State
Internally, the BAM Interceptor maintains state data at two levels.  First, it represents the overall state of the activity using a 'trace instance' token.  This token contains the name and ID of the activity together with a couple of state flags.  The second level of state represents a 'trace fragment'.  As we have seen, a fragment of an activity corresponds directly to the notion of a 'step'.  It is the unit of work done at a named location, and it must be bounded by start and end, or continue and end, actions.

When handling continuations, the BAM Interceptor differentiates between 'root' fragments and other fragments.  Very simply, a root fragment represents the start of an activity.  Other fragments represent continuations.  This is where the logic breaks down.  
The BAM Interceptor loses state integrity for root fragments when continuations are defined. Initialization Microsoft’s BAM Interceptor code supports the initialization of BAM Interceptors from track point configuration data.  The process starts by populating an Activity Interceptor Configuration object with an array of track points.  These can belong to different steps (aka ‘locations’) and can be registered in any order.  Once it is populated with track points, the Activity Interceptor Configuration is used to initialise the BAM Interceptor.  The BAM Interceptor sets up a hash table of array lists.  Each step is represented by an array list, and each array list contains an ordered set of track points.  The BAM Interceptor represents track points as ‘executable’ components.  When the OnStep method of the BAM Interceptor is called for a given step, the corresponding list of track points is retrieved and each track point is executed in turn.  Each track point retrieves any required data using a call back mechanism and then serializes a BAM trace fragment object representing a specific action (e.g., start, update, enable continuation, stop, etc.).  The serialised trace fragment is then handed off to a BAM event stream (buffered or direct) which takes the appropriate action. The Root of the Problem The logic breaks down in the Activity Interceptor Configuration.  Each Activity Interceptor Configuration is initialised with an instance of a ‘trace instance’ token.  This provides the basic metadata for the activity as a whole.  It contains the activity name and ID together with state flags indicating if the activity ID is a root (i.e., not a continuation fragment) and if it is completed.  This single token is then shared by all trace actions for all steps registered with the Activity Interceptor Configuration. Each trace instance token is automatically initialised to represent a root fragment.  However, if you subsequently register a ‘continuation’ step with the Activity Interceptor Configuration, the ‘root’ flag is set to false at the point the ‘continue’ track point is registered for that step.   If you use a ‘reflector’ tool to inspect the code for the ActivityInterceptorConfiguration class, you can see the flag being set in one of the overloads of the RegisterContinue method.    This makes no sense.  The trace instance token is shared across all the track points registered with the Activity Interceptor Configuration.  The Activity Interceptor Configuration is designed to hold track points for multiple steps.  The ‘root’ flag is clearly meant to be initialised to ‘true’ for the preliminary root fragment and then subsequently set to false at the point that a continuation step is processed.  Instead, if the Activity Interceptor Configuration contains a continuation step, it is changed to ‘false’ before the root fragment is processed.  This is clearly an error in logic. The problem causes havoc when the BAM Interceptor is used with continuation.  Effectively the root step is no longer processed correctly, and the ultimate effect is that the continued activity never completes!   This has nothing to do with the root and the continuation being in the same process.  It is due to a fundamental mistake of setting the ‘root’ flag to false for a continuation before the root fragment is processed. The Workaround Fortunately, it is easy to work around the bug.  The trick is to ensure that you create a new Activity Interceptor Configuration object for each individual step.  
This may mean filtering your configuration data to extract the track points for a single step, or grouping the configured track points into individual steps and then creating a separate Activity Interceptor Configuration for each group.  In my case, the first approach was required.  Here is what the amended code looks like:

// Because of a logic error in Microsoft's code, a separate ActivityInterceptorConfiguration must be used
// for each location. The following code extracts only those track points for a given step name (location).
var trackPointGroup = from ResolutionService.TrackPoint tp in bamActivity.TrackPoints
                      where (string)tp.Location == bamStepName
                      select tp;

var bamActivityInterceptorConfig =
    new Microsoft.BizTalk.Bam.EventObservation.ActivityInterceptorConfiguration(activityName);

foreach (var trackPoint in trackPointGroup)
{
    switch (trackPoint.Type)
    {
        case TrackPointType.Start:
            bamActivityInterceptorConfig.RegisterStartNew(trackPoint.Location, trackPoint.ExtractionInfo);
            break;
        etc…

I'm using LINQ to filter a list of track points for those entries that correspond to a given step and then registering only those track points on a new instance of the ActivityInterceptorConfiguration class.  As soon as I re-wrote the code to do this, activities with continuations started to complete correctly.
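For completeness, the other option mentioned above (grouping the configured track points by step and building one configuration per group) could look roughly like the sketch below. It reuses the names from the author's snippet (bamActivity, ResolutionService.TrackPoint, activityName, TrackPointType); the TrackPointType.Continue value and the RegisterContinue signature shown here are assumptions for illustration rather than code from the original post, and the remaining track point types would be registered in the same way.

// Sketch only: build a separate ActivityInterceptorConfiguration per step (location),
// so the shared trace instance token's 'root' flag is never flipped before the root
// fragment is processed. Assumes System.Linq is in scope.
var configsByStep = bamActivity.TrackPoints
    .Cast<ResolutionService.TrackPoint>()
    .GroupBy(tp => (string)tp.Location)
    .ToDictionary(
        grp => grp.Key,
        grp =>
        {
            var config = new Microsoft.BizTalk.Bam.EventObservation.ActivityInterceptorConfiguration(activityName);
            foreach (var tp in grp)
            {
                switch (tp.Type)
                {
                    case TrackPointType.Start:
                        config.RegisterStartNew(tp.Location, tp.ExtractionInfo);
                        break;
                    case TrackPointType.Continue:                            // assumed enum value
                        config.RegisterContinue(tp.Location, tp.ExtractionInfo); // assumed overload
                        break;
                    // ...register data extraction, enable-continuation and end-activity
                    // track points here in the same way...
                }
            }
            return config;
        });
// Each BAM Interceptor is then initialized from the configuration for its own step only.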

    Read the article

  • OS Analytics - Deep Dive Into Your OS

    - by Eran_Steiner
    Enterprise Manager Ops Center provides a feature called "OS Analytics". This feature allows you to get a better understanding of how the Operating System is being utilized. You can research the historical usage as well as real time data. This post will show how you can benefit from OS Analytics and how it works behind the scenes. We will have a call to discuss this blog - please join us!Date: Thursday, November 1, 2012Time: 11:00 am, Eastern Daylight Time (New York, GMT-04:00)1. Go to https://oracleconferencing.webex.com/oracleconferencing/j.php?ED=209833067&UID=1512092402&PW=NY2JhMmFjMmFh&RT=MiMxMQ%3D%3D2. If requested, enter your name and email address.3. If a password is required, enter the meeting password: oracle1234. Click "Join". To join the teleconference:Call-in toll-free number:       1-866-682-4770  (US/Canada)      Other countries:                https://oracle.intercallonline.com/portlets/scheduling/viewNumbers/viewNumber.do?ownerNumber=5931260&audioType=RP&viewGa=true&ga=ONConference Code:       7629343#Security code:            7777# Here is quick summary of what you can do with OS Analytics in Ops Center: View historical charts and real time value of CPU, memory, network and disk utilization Find the top CPU and Memory processes in real time or at a certain historical day Determine proper monitoring thresholds based on historical data View Solaris services status details Drill down into a process details View the busiest zones if applicable Where to start To start with OS Analytics, choose the OS asset in the tree and click the Analytics tab. You can see the CPU utilization, Memory utilization and Network utilization, along with the current real time top 5 processes in each category (click the image to see a larger version):  In the above screen, you can click each of the top 5 processes to see a more detailed view of that process. Here is an example of one of the processes: One of the cool things is that you can see the process tree for this process along with some port binding and open file descriptors. On Solaris machines with zones, you get an extra level of tabs, allowing you to get more information on the different zones: This is a good way to see the busiest zones. For example, one zone may not take a lot of CPU but it can consume a lot of memory, or perhaps network bandwidth. To see the detailed Analytics for each of the zones, simply click each of the zones in the tree and go to its Analytics tab. Next, click the "Processes" tab to see real time information of all the processes on the machine: An interesting column is the "Target" column. If you configured Ops Center to work with Enterprise Manager Cloud Control, then the two products will talk to each other and Ops Center will display the correlated target from Cloud Control in this table. If you are only using Ops Center - this column will remain empty. Next, if you view a Solaris machine, you will have a "Services" tab: By default, all services will be displayed, but you can choose to display only certain states, for example, those in maintenance or the degraded ones. You can highlight a service and choose to view the details, where you can see the Dependencies, Dependents and also the location of the service log file (not shown in the picture as you need to scroll down to see the log file). 
The "Threshold" tab is particularly helpful - you can view historical trends of different monitored values and based on the graph - determine what the monitoring values should be: You can ask Ops Center to suggest monitoring levels based on the historical values or you can set your own. The different colors in the graph represent the current set levels: Red for critical, Yellow for warning and Blue for Information, allowing you to quickly see how they're positioned against real data. It's important to note that when looking at longer periods, Ops Center smooths out the data and uses averages. So when looking at values such as CPU Usage, try shorter time frames which are more detailed, such as one hour or one day. Applying new monitoring values When first applying new values to monitored attributes - a popup will come up asking if it's OK to get you out of the current Monitoring Policy. This is OK if you want to either have custom monitoring for a specific machine, or if you want to use this current machine as a "Gold image" and extract a Monitoring Policy from it. You can later apply the new Monitoring Policy to other machines and also set it as a default Monitoring Profile. Once you're done with applying the different monitoring values, you can review and change them in the "Monitoring" tab. You can also click the "Extract a Monitoring Policy" in the actions pane on the right to save all the new values to a new Monitoring Policy, which can then be found under "Plan Management" -> "Monitoring Policies". Visiting the past Under the "History" tab you can "go back in time". This is very helpful when you know that a machine was busy a few hours ago (perhaps in the middle of the night?), but you were not around to take a look at it in real time. Here's a view into yesterday's data on one of the machines: You can see an interesting CPU spike happening at around 3:30 am along with some memory use. In the bottom table you can see the top 5 CPU and Memory consumers at the requested time. Very quickly you can see that this spike is related to the Solaris 11 IPS repository synchronization process using the "pkgrecv" command. The "time machine" doesn't stop here - you can also view historical data to determine which of the zones was the busiest at a given time: Under the hood The data collected is stored on each of the agents under /var/opt/sun/xvm/analytics/historical/ An "os.zip" file exists for the main OS. Inside you will find many small text files, named after the Epoch time stamp in which they were taken If you have any zones, there will be a file called "guests.zip" containing the same small files for all the zones, as well as a folder with the name of the zone along with "os.zip" in it If this is the Enterprise Controller or the Proxy Controller, you will have folders called "proxy" and "sat" in which you will find the "os.zip" for that controller The actual script collecting the data can be viewed for debugging purposes as well: On Linux, the location is: /opt/sun/xvmoc/private/os_analytics/collect On Solaris, the location is /opt/SUNWxvmoc/private/os_analytics/collect If you would like to redirect all the standard error into a file for debugging, touch the following file and the output will go into it: # touch /tmp/.collect.stderr   The temporary data is collected under /var/opt/sun/xvm/analytics/.collectdb until it is zipped. If you would like to review the properties for the Analytics, you can view those per each agent in /opt/sun/n1gc/lib/XVM.properties. 
Find the section "Analytics configurable properties for OS and VSC" to view the Analytics specific values. I hope you find this helpful! Please post questions in the comments below. Eran Steiner

    Read the article

  • Master Data Management and Cloud Computing

    - by david.butler(at)oracle.com
    Cloud Computing is all the rage these days. There are many reasons why this is so. But like its predecessor, Service Oriented Architecture, it can fall on hard times if the underlying data is left unmanaged. Master Data Management is the perfect Cloud companion. It can materially increase the chances for successful Cloud initiatives. In this blog, I'll review the nature of the Cloud and show how MDM fits in.   Here's the National Institute of Standards and Technology Cloud definition: •          Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction.   Cloud architectures have three main layers: applications or Software as a Service (SaaS), Platforms as a Service (PaaS), and Infrastructure as a Service (IaaS). SaaS generally refers to applications that are delivered to end-users over the Internet. Oracle CRM On Demand is an example of a SaaS application. Today there are hundreds of SaaS providers covering a wide variety of applications including Salesforce.com, Workday, and Netsuite. Oracle MDM applications are located in this layer of Oracle's On Demand enterprise Cloud platform. We call it Master Data as a Service (MDaaS). PaaS generally refers to an application deployment platform delivered as a service. They are often built on a grid computing architecture and include database and middleware. Oracle Fusion Middleware is in this category and includes the SOA and Data Integration products used to connect SaaS applications including MDM. Finally, IaaS generally refers to computing hardware (servers, storage and network) delivered as a service.  This typically includes the associated software as well: operating systems, virtualization, clustering, etc.    Cloud Computing benefits are compelling for a large number of organizations. These include significant cost savings, increased flexibility, and fast deployments. Cost advantages include paying for just what you use. This is especially critical for organizations with variable or seasonal usage. Companies don't have to invest to support peak computing periods. Costs are also more predictable and controllable. Increased agility includes access to the latest technology and experts without making significant up front investments.   While Cloud Computing is certainly very alluring with a clear value proposition, it is not without its challenges. An IDC survey of 244 IT executives/CIOs and their line-of-business (LOB) colleagues identified a number of issues:   Security - 74% identified security as an issue involving data privacy and resource access control. Integration - 61% found that it is hard to integrate Cloud Apps with in-house applications. Operational Costs - 50% are worried that On Demand will actually cost more given the impact of poor data quality on the rest of the enterprise. Compliance - 49% felt that compliance with required regulatory, legal and general industry requirements (such as PCI, HIPAA and Sarbanes-Oxley) would be a major issue. When control is lost, the ability of a provider to directly manage how and where data is deployed, used and destroyed is negatively impacted.  There are others, but I singled out these four top issues because Master Data Management, properly incorporated into a Cloud Computing infrastructure, can significantly ameliorate all of these problems. Cloud Computing can literally rain raw data across the enterprise.   
According to fellow blogger, Mike Ferguson, "the fracturing of data caused by the adoption of cloud computing raises the importance of MDM in keeping disparate data synchronized."   David Linthicum, CTO Blue Mountain Labs blogs that "the lack of MDM will become more of an issue as cloud computing rises. We're moving from complex federated on-premise systems, to complex federated on-premise and cloud-delivered systems."    Left unmanaged, non-standard, inconsistent, ungoverned data with questionable quality can pollute analytical systems, increase operational costs, and reduce the ROI in Cloud and On-Premise applications. As cloud computing becomes more relevant, and more data, applications, services, and processes are moved out to cloud computing platforms, the need for MDM becomes ever more important. Oracle's MDM suite is designed to deal with all four of the above Cloud issues listed in the IDC survey.   Security - MDM manages all master data attribute privacy and resource access control issues. Integration - MDM pre-integrates Cloud Apps with each other and with On Premise applications at the data level. Operational Costs - MDM significantly reduces operational costs by increasing data quality, thereby improving enterprise business processes efficiency. Compliance - MDM, with its built in Data Governance capabilities, insures that the data is governed according to organizational standards. This facilitates rapid and accurate reporting for compliance purposes. Oracle MDM creates governed high quality master data. A unified cleansed and standardized data view is produced. The Oracle Customer Hub creates a single view of the customer. The Oracle Product Hub creates high quality product data designed to support all go-to-market processes. Oracle Supplier Hub dramatically reduces the chances of 'supplier exceptions'. Oracle Site Hub masters locations. And Oracle Hyperion Data Relationship Management masters financial reference data and manages enterprise hierarchies across operational areas from ERP to EPM and CRM to SCM. Oracle Fusion Middleware connects Cloud and On Premise applications to MDM Hubs and brings high quality master data to your enterprise business processes.   An independent analyst once said "Poor data quality is like dirt on the windshield. You may be able to drive for a long time with slowly degrading vision, but at some point, you either have to stop and clear the windshield or risk everything."  Cloud Computing has the potential to significantly degrade data quality across the enterprise over time. Deploying a Master Data Management solution prior to or in conjunction with a move to the Cloud can insure that the data flowing into the enterprise from the Cloud is clean and governed. This will in turn insure that expected returns on the investment in Cloud Computing will be realized.       Oracle MDM has proven its metal in this area and has the customers to back that up. In fact, I will be hosting a webcast on Tuesday, April 10th at 10 am PT with one of our top Cloud customers, the Church Pension Group. They have moved all mainline applications to a hosted model and use Oracle MDM to insure the master data is managed and cleansed before it is propagated to other cloud and internal systems. I invite you join Martin Hossfeld, VP, IT Operations, and Danette Patterson, Enterprise Data Manager as they review business drivers for MDM and hosted applications, how they did it, the benefits achieved, and lessons learned. You can register for this free webcast here.  
Hope to see you there.

    Read the article

  • MySQL Cluster 7.2: Over 8x Higher Performance than Cluster 7.1

    - by Mat Keep
Summary
The scalability enhancements delivered by extensions to multi-threaded data nodes enable MySQL Cluster 7.2 to deliver over 8x higher performance than the previous MySQL Cluster 7.1 release on a recent benchmark.

What's New in MySQL Cluster 7.2
MySQL Cluster 7.2 was released as GA (Generally Available) in February 2012, delivering many enhancements to performance on complex queries, a new NoSQL Key / Value API, cross-data center replication and ease-of-use. These enhancements are summarized in the Figure below, and detailed in the MySQL Cluster New Features whitepaper.

Figure 1: Next Generation Web Services, Cross Data Center Replication and Ease-of-Use

One of the key enhancements delivered in MySQL Cluster 7.2 is the set of extensions made to the multi-threading processes of the data nodes.

Multi-Threaded Data Node Extensions
The MySQL Cluster 7.2 data node is now functionally divided into seven thread types:
1) Local Data Manager threads (ldm). Note – these are sometimes also called LQH threads.
2) Transaction Coordinator threads (tc)
3) Asynchronous Replication threads (rep)
4) Schema Management threads (main)
5) Network receiver threads (recv)
6) Network send threads (send)
7) IO threads
Each of these thread types is discussed in more detail below.

MySQL Cluster 7.2 increases the maximum number of LDM threads from 4 to 16. The LDM contains the actual data, which means that when using 16 threads the data is more heavily partitioned (this is automatic in MySQL Cluster). Each LDM thread maintains its own set of data partitions, index partitions and REDO log. The number of LDM partitions per data node is not dynamically configurable, but it is possible, however, to map more than one partition onto each LDM thread, providing flexibility in modifying the number of LDM threads.

The TC domain stores the state of in-flight transactions. This means that every new transaction can easily be assigned to a new TC thread. Testing has shown that in most cases 1 TC thread per 2 LDM threads is sufficient, and in many cases even 1 TC thread per 4 LDM threads is also acceptable. Testing also demonstrated that in some instances where the workload needed to sustain very high update loads it is necessary to configure 3 to 4 TC threads per 4 LDM threads. In the previous MySQL Cluster 7.1 release, only one TC thread was available. This limit has been increased to 16 TC threads in MySQL Cluster 7.2. The TC domain also manages the Adaptive Query Localization functionality introduced in MySQL Cluster 7.2 that significantly enhanced complex query performance by pushing JOIN operations down to the data nodes.

Asynchronous Replication was separated into its own thread with the release of MySQL Cluster 7.1, and has not been modified in the latest 7.2 release.

To scale the number of TC threads, it was necessary to separate the Schema Management domain from the TC domain. 
The schema management thread has little load, so it is implemented as a single thread. The Network receiver domain was bound to 1 thread in MySQL Cluster 7.1. With the increase in threads in MySQL Cluster 7.2 it is also necessary to increase the number of recv threads to 8. This enables each receive thread to service one or more sockets used to communicate with other nodes in the Cluster. The Network send thread is a new thread type introduced in MySQL Cluster 7.2. Previously other threads handled the sending operations themselves, which can provide for lower latency. To achieve the highest throughput, however, it has been necessary to create dedicated send threads, of which 8 can be configured. It is still possible to configure MySQL Cluster 7.2 in a legacy mode that does not use any of the send threads – useful for those workloads that are most sensitive to latency. The IO Thread is the final thread type and there have been no changes to this domain in MySQL Cluster 7.2. Multiple IO threads were already available, which could be configured to either one thread per open file, or to a fixed number of IO threads that handle the IO traffic. Except when using compression on disk, the IO threads typically have a very light load.

Benchmarking the Scalability Enhancements
The scalability enhancements discussed above have made it possible to scale CPU usage of each data node to more than 5x of that possible in MySQL Cluster 7.1. In addition, a number of bottlenecks have been removed, making it possible to scale data node performance by even more than 5x.

Figure 2: MySQL Cluster 7.2 Delivers 8.4x Higher Performance than 7.1

The flexAsynch benchmark was used to compare MySQL Cluster 7.2 performance to 7.1 across an 8-node Intel Xeon x5670-based cluster of dual socket commodity servers (6 cores each). As the results demonstrate, MySQL Cluster 7.2 delivers over 8x higher performance per data node than MySQL Cluster 7.1. More details of this and other benchmarks will be published in a new whitepaper – coming soon, so stay tuned! In a following blog post, I'll provide recommendations on optimum thread configurations for different types of server processor. You can also learn more from the Best Practices Guide to Optimizing Performance of MySQL Cluster.

Conclusion
MySQL Cluster has achieved a range of impressive benchmark results, and set in context with the previous 7.1 release, is able to deliver over 8x higher performance per node. As a result, the multi-threaded data node extensions not only serve to increase the performance of MySQL Cluster, they also enable users to achieve significantly improved levels of utilization from current and future generations of massively multi-core, multi-thread processor designs.
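To tie the thread types discussed above back to configuration, here is a rough sketch of how a data node's threads might be laid out with the ThreadConfig parameter in the cluster configuration file (config.ini). The counts are illustrative only, not tuning advice from this article, and the exact ThreadConfig syntax and supported thread names should be verified against the MySQL Cluster 7.2 reference manual for your specific release:

# Illustration only: one possible multi-threaded data node layout for MySQL Cluster 7.2.
# Thread counts below are examples, not recommendations from the original post.
[ndbd default]
NoOfReplicas=2
ThreadConfig=ldm={count=16},tc={count=8},send={count=8},recv={count=8},main={count=1},rep={count=1},io={count=1}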

    Read the article

  • CodePlex Daily Summary for Sunday, May 23, 2010

    CodePlex Daily Summary for Sunday, May 23, 2010New ProjectsA2Command: Apple 2 port of CBM-Command (http://cbmcommand.codeplex.com)AgUnit: AgUnit is a plugin for Jetbrains ReSharper (R#) that allows you to run and debug Silverlight unit tests from within Visual Studio.BSonPosh Powershell Module: A collection of useful Powershell functions I have written and collected over the years. It is a Powershell v2 Module composed of mostly scripts.DB Restriker: Simple tool for lookup, parsing, searching some standard databases using wildcards and pattern recognition.Entity Framework Repository & Unit of Work Template: T4 Template for Entity Framework 4 for creating a data access layer using the repository and unit of work patterns. Designed to work well with dep...Fiction Catalog: A catalog project designed to store information about fictional literature.Giving a Presentation: Useful for people doing presentations, this application hides desktop icons, disables screensaver, closes chosen programs when presentation starts,...glueless: Glueless is a local message bus which allows architect to design highly decoupled systems and applications. Glueless is a step beyond dependency i...HtmlCodeIt: Take any code and format it so that it can be viewed properly on a web browser, blog post or website.just testproject :): just have a test!KanbanTaskboard: The aim of the project is to design and implement a functional prototype for visualizing and operating a multi-platform virtual "Kanban Taskboard”Life System: Life SystemOaSys Project: Project Oasys is a project that aims to help solve desertification. Scoring of pingPong Game: Scoring of pingPong GameSilverlight Web Comic: The Silverlight Web Comic makes easier for the people create your own comic with your own pictures o drawings, and add the globes of text like the ...TickSharp: C# Wrapper for http://TickSpot.com RESTful API.Traductor: El Traductor es una aplicación de escritorio para traducción de frases entre distintos idiomas basada en la plataforma Silverlight Out Of Browser y...WatchersNET.SkinObjects.ModulActionsMenu: Displays the Module Actions Menu as a Unsorted CSS Menu.xxfd1r4w96: testingNew ReleasesAgUnit: AgUnit 0.1: Initial release of AgUnit. Copy the extracted files from AgUnit-0.1.zip into the "Bin\Plugins\" folder of your ReSharper installation (default C:...ASP.NET MVC | SCAFFOLD: ASP.NET MVC SCAFFOLD - Beta 1.0: Release versão betaBizTalk Server 2006 Documenter: Documenter_v3.4.0.0: This is the new release of the documenter which has the following highlights Support for 64 bit systems Support for SxS scenarios (so now the sys...CassiniDev - Cassini 3.5/4.0 Developers Edition: CassiniDev 3.5.1 Beta 2- VS 2008 Replacement: The CassiniDev Visual Studio build is a fully compatibly Visual Studio 2008/2010 Development server drop-in replacement with all CassiniDev enhance...CBM-Command: 2010-05-22 Beta: Release Notes - 2010-05-22 BetaNew Features Simple text file viewer. 
Now when you use SHIFT-RETURN to open a file, it will ask if you want to view...Easy Validation: Documentation: Documentation for easyVal was created and presented at University of Texas at Austin in May of 2010.Entity Framework Repository & Unit of Work Template: 1.0: Initial ReleaseFrotz.NET: FrotzNet 1.0 beta: Many, many changes, including: - Got Adaptive Palette working for graphics - Got undo working - Implemented all zcodes - Added scripting as well as...Giving a Presentation: CTP: This release includes basic extensibility infrastructure and three extensions: hides desktop icons, disables screensaver, closes chosen programs wh...Gov 2.0 Kit: SharePoint 2010 MyPeeps Mysite Accelerators: SharePoint 2010 MyPeeps Mysite Accelerators. Attached are the installation and documentations files.HKGolden Express: HKGoldenExpress (Build 201005221900): New features: (None) Bug fix: Hong Kong special characters now can be posted without encoding problem. Improvements: (None) Other changes: (None) K...Intellibox - A WPF auto complete textbox search control: Beta 2: Updated the namespace of the Intellibox control from "System.Windows.Controls" to "FeserWard.Controls". Empty binding Path properties now work on...MDownloader: MDownloader-0.15.14.59111: Fixed DepositFile provider. Fixed FileFactory provider. Added simple fakeness detector (can check if .rar, .zip, .7z files have valid signature...Mute4: V1: Initial version of Mute4NLog - Advanced .NET Logging: Nightly Build 2010.05.22.003: Changes since the last build:No changes. Unit test results:Passed 191/191 (100%) Passed 191/191 (100%) Passed 214/214 (100%) Passed 216/216 (100%)...NSIS Autorun: NSIS Autorun 0.1.9: This release includes source code, executable binaries and example materials.Silverlight Gantt Chart: Silverlight Gantt Chart 1.3 (SL4): The latest release mainly makes the Gantt Chart useful in Silverlight 4 applications.SqlServerExtensions: V 0.2 beta: V 0.2 Beta release: New features available TrimStart - trim leading characters TrimEnd - trim trailing characters Remove - remove characters f...Traductor: Version 3.1: Nuevo en esta versión: El Traductor ahora permite escoger entre los motores de Microsoft y Google. El Text to Speech is es ahora habilitado por...VCC: Latest build, v2.1.30522.0: Automatic drop of latest buildVDialer Add-In for Outlook 2007 & 2010 - Dial your Vonage phone from Outlook: VDialer Add-In 1.0.3: This release adds new features related to Journal and use of Vonage API Changes in version 1.0.3 Added configurable option to automatically open J...WatchersNET.SkinObjects.ModulActionsMenu: ModulActionsMenu 01.00.00: First Release For Informations How To Install, the Skin Object Read the DocumentationMost Popular ProjectsCodeComment.NETRawrWBFS ManagerAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)patterns & practices – Enterprise LibraryPHPExcelMicrosoft SQL Server Community & SamplesMost Active ProjectsRawrpatterns & practices – Enterprise LibraryCaliburn: An Application Framework for WPF and Silverlightpatterns & practices: Windows Azure Security GuidanceCassiniDev - Cassini 3.5/4.0 Developers EditionGMap.NET - Great Maps for Windows Forms & PresentationNB_Store - Free DotNetNuke Ecommerce Catalog ModuleSQL Server PowerShell ExtensionsBlogEngine.NETCodeReview

    Read the article

  • Guide to MySQL & NoSQL, Webinar Q&A

    - by Mat Keep
Yesterday we ran a webinar discussing the demands of next generation web services and how blending the best of relational and NoSQL technologies enables developers and architects to deliver the agility, performance and availability needed to be successful. Attendees posted a number of great questions to the MySQL developers, serving to provide additional insights into areas like auto-sharding and cross-shard JOINs, replication, performance, client libraries, etc. So I thought it would be useful to post those below, for the benefit of those unable to attend the webinar.

Before getting to the Q&A, there are a couple of other resources that may be useful to those looking at NoSQL capabilities within MySQL:
- On-Demand webinar (coming soon!)
- Slides used during the webinar
- Guide to MySQL and NoSQL whitepaper
- MySQL Cluster demo, including NoSQL interfaces, auto-sharding, high availability, etc.

So here is the Q&A from the event.

Q. Where does MySQL Cluster fit into the CAP theorem?
A. MySQL Cluster is flexible. A single Cluster will prefer consistency over availability in the presence of network partitions. A pair of Clusters can be configured to prefer availability over consistency. A full explanation can be found on the MySQL Cluster & CAP Theorem blog post.

Q. Can you configure the number of replicas? (the slide used a replication factor of 1)
A. Yes. A cluster is configured by an .ini file. The option NoOfReplicas sets the number of originals and replicas: 1 = no data redundancy, 2 = one copy, etc. Usually there's no benefit in setting it >2.

Q. Interestingly, most (if not all) of the NoSQL databases recommend having 3 copies of data (the replication factor).
A. Yes, with configurable quorum-based reads and writes. MySQL Cluster does not need a quorum of replicas online to provide service. Systems that require a quorum need > 2 replicas to be able to tolerate a single failure. Additionally, many NoSQL systems take liberal inspiration from the original GFS paper which described a 3 replica configuration. MySQL Cluster avoids the need for a quorum by using a lightweight arbitrator. You can configure more than 2 replicas, but this is a tradeoff between incrementally improved availability and linearly increased cost.

Q. Can you have cross node group JOINs? Wouldn't that run into the risk of flooding the network?
A. MySQL Cluster 7.2 supports cross nodegroup joins. A full cross-join can require a large amount of data transfer, which may bottleneck on network bandwidth. However, for more selective joins, typically seen with OLTP and light analytic applications, cross node-group joins give a great performance boost and network bandwidth saving over having the MySQL Server perform the join.

Q. Are the details of the benchmark available anywhere? According to my calculations it results in approx.
350k ops/sec per processor, which is the largest number I've seen lately.
A. The details are linked from Mikael Ronstrom's blog. The benchmark uses a benchmarking tool we call flexAsynch which runs parallel asynchronous transactions. It involved 100 byte reads, of 25 columns each. Regarding the per-processor ops/s, MySQL Cluster is particularly efficient in terms of throughput/node. It uses lock-free minimal copy message passing internally, and maximizes ID cache reuse. Note also that these are in-memory tables, so there is no need to read anything from disk.

Q. Is access control (like table) planned to be supported for NoSQL access mode?
A. Currently we have not seen much need for full SQL-like access control (which has always been overkill for web apps and telco apps). So we have no plans, though especially with memcached it is certainly possible to turn on connection-level access control. But specifically, table level controls are not planned.

Q. How is the performance of the memcached API with MySQL against memcached+MySQL or any other object cache like Ecache with a MySQL DB?
A. With the memcache API we generally see a memcached response in less than 1 ms, and a small cluster with one memcached server can handle tens of thousands of operations per second.

Q. Can .NET access the memcached API?
A. Yes, just use a .NET memcache client such as the enyim or BeIT memcache libraries.

Q. Is row level locking applicable when you update a column through the memcached API?
A. An update that comes through memcached uses a row lock and then releases it immediately. Memcached operations like "INCREMENT" are actually pushed down to the data nodes. In most cases the locks are not even held long enough for a network round trip.

Q. Has anyone published an example using something like PHP? I am assuming that you just use the PHP memcached extension to hook into the memcached API. Is that correct?
A. Not that I'm aware of, but absolutely you can use it with PHP or any of the other drivers.

Q. For beginners, we need more examples.
A. Take a look here for a fully worked example.

Q. Can I access MySQL using Cobol (Open Cobol) or C, and if so where can I find the coding libraries etc?
A. There is a Cobol implementation that works well with MySQL, but I do not think it is Open Cobol. Also there is a MySQL C client library that is a standard part of every MySQL distribution.

Q. Is there a place to go to find help when testing and/or implementing the NoSQL access?
A. If using Cluster then you can use the [email protected] alias or post on the MySQL Cluster forum.

Q. Are there any white papers on this?
A. Yes - there is more detail in the MySQL Guide to NoSQL whitepaper.

If you have further questions, please don't hesitate to use the comments below!
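As a short illustration of the .NET answer above, here is a minimal sketch of storing and reading a value through a memcached endpoint in front of MySQL Cluster using the Enyim client mentioned in the Q&A. The server address, port, and key are placeholders, and the configuration calls reflect the Enyim client's commonly documented API rather than anything shown in the webinar, so treat it as an assumption-laden sketch:

// Minimal sketch: talk to the MySQL Cluster memcached API from .NET via the Enyim client.
// Address, port and key names below are placeholders.
using System;
using Enyim.Caching;
using Enyim.Caching.Configuration;
using Enyim.Caching.Memcached;

class MemcachedApiSketch
{
    static void Main()
    {
        var config = new MemcachedClientConfiguration();
        config.AddServer("127.0.0.1:11211"); // memcached server fronting the data nodes (placeholder)

        using (var client = new MemcachedClient(config))
        {
            client.Store(StoreMode.Set, "town:maidenhead", "SL6"); // write-through to MySQL Cluster
            Console.WriteLine(client.Get<string>("town:maidenhead") ?? "<not found>");
        }
    }
}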

    Read the article

  • Using SSIS to send a HTML E-Mail Message with built-in table of Counts.

    - by Kevin Shyr
    For the record, this can be just as easily done with a .NET class with a DLL call.  The two major reasons for this ending up as a SSIS package are: There are a lot of SQL resources for maintenance, but not as many .NET developers. There is an existing automated process that links up SQL Jobs (more on that in the next post), and this is part of that process.   To start, this is what the SSIS looks like: The first part of the control flow is just for the override scenario.   In the Execute SQL Task, it calls a stored procedure, which already formats the result into XML by using "FOR XML PATH('Row'), ROOT(N'FieldingCounts')".  The result XML string looks like this: <FieldingCounts>   <Row>     <CellId>M COD</CellId>     <Mailed>64</Mailed>     <ReMailed>210</ReMailed>     <TotalMail>274</TotalMail>     <EMailed>233</EMailed>     <TotalSent>297</TotalSent>   </Row>   <Row>     <CellId>M National</CellId>     <Mailed>11</Mailed>     <ReMailed>59</ReMailed>     <TotalMail>70</TotalMail>     <EMailed>90</EMailed>     <TotalSent>101</TotalSent>   </Row>   <Row>     <CellId>U COD</CellId>     <Mailed>91</Mailed>     <ReMailed>238</ReMailed>     <TotalMail>329</TotalMail>     <EMailed>291</EMailed>     <TotalSent>382</TotalSent>   </Row>   <Row>     <CellId>U National</CellId>     <Mailed>63</Mailed>     <ReMailed>286</ReMailed>     <TotalMail>349</TotalMail>     <EMailed>374</EMailed>     <TotalSent>437</TotalSent>   </Row> </FieldingCounts>  This result is saved into an internal SSIS variable with the following settings on the General tab and the Result Set tab:   Now comes the trickier part.  We need to use the XML Task to format the XML string result into an HTML table, and I used Direct input XSLT And here is the code of XSLT: <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:output method="html" indent="yes"/>   <xsl:template match="/ROOT">         <table border="1" cellpadding="6">           <tr>             <td></td>             <td>Mailed</td>             <td>Re-mailed</td>             <td>Total Mail (Mailed, Re-mailed)</td>             <td>E-mailed</td>             <td>Total Sent (Mailed, E-mailed)</td>           </tr>           <xsl:for-each select="FieldingCounts/Row">             <tr>               <xsl:for-each select="./*">                 <td>                   <xsl:value-of select="." 
/>
                </td>
              </xsl:for-each>
            </tr>
          </xsl:for-each>
        </table>
  </xsl:template>
</xsl:stylesheet>

Then a script task is used to send out an HTML email (as we are all painfully aware that the SSIS Send Mail Task only sends plain text):

using System;
using System.Data;
using Microsoft.SqlServer.Dts.Runtime;
using System.Windows.Forms;
using System.Net.Mail;
using System.Net;

namespace ST_b829a2615e714bcfb55db0ce97be3901.csproj
{
    [System.AddIn.AddIn("ScriptMain", Version = "1.0", Publisher = "", Description = "")]
    public partial class ScriptMain : Microsoft.SqlServer.Dts.Tasks.ScriptTask.VSTARTScriptObjectModelBase
    {
        #region VSTA generated code
        enum ScriptResults
        {
            Success = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Success,
            Failure = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Failure
        };
        #endregion

        public void Main()
        {
            String EmailMsgBody = String.Format("<HTML><BODY><P>{0}</P><P>{1}</P></BODY></HTML>"
                                                , Dts.Variables["Config_SMTP_MessageSourceText"].Value.ToString()
                                                , Dts.Variables["InternalStr_CountResultAfterXSLT"].Value.ToString());
            MailMessage EmailCountMsg = new MailMessage(Dts.Variables["Config_SMTP_From"].Value.ToString().Replace(";", ",")
                                                        , Dts.Variables["Config_SMTP_Success_To"].Value.ToString().Replace(";", ",")
                                                        , Dts.Variables["Config_SMTP_SubjectLinePrefix"].Value.ToString() + " " + Dts.Variables["InternalStr_FieldingDate"].Value.ToString()
                                                        , EmailMsgBody);
            //EmailCountMsg.From.
            EmailCountMsg.CC.Add(Dts.Variables["Config_SMTP_Success_CC"].Value.ToString().Replace(";", ","));
            EmailCountMsg.IsBodyHtml = true;

            SmtpClient SMTPForCount = new SmtpClient(Dts.Variables["Config_SMTP_ServerAddress"].Value.ToString());
            SMTPForCount.Credentials = CredentialCache.DefaultNetworkCredentials;

            SMTPForCount.Send(EmailCountMsg);

            Dts.TaskResult = (int)ScriptResults.Success;
        }
    }
}

Note on this code: notice that the email lists go through Replace(";", ",").  This is only here because the list is configurable in the SQL Job Step at Set Values, which does not react well with colons as the email separator, but System.Net.Mail only handles commas as the email separator, hence the extra Replace on the string. The result is a nicely formatted email message with count information:

    Read the article

  • What's new in Solaris 11.1?

    - by Karoly Vegh
    Solaris 11.1 is released. This is the first release update since Solaris 11 11/11; the versioning has been changed from the MM/YY style to 11.1, highlighting that this is Solaris 11 Update 1. Solaris 11 itself has been great. So what's new in Solaris 11.1? Allow me to pick some new features from the What's New PDF that can be found in the official Oracle Solaris 11.1 Documentation. The updates are far too numerous to include them all.

I. New AI (Automated Installer) RBAC profiles have been introduced to enable delegation of installation tasks.

II. The interactive installer now supports installing the OS to iSCSI targets.

III. ASR (Auto Service Request) and OCM (Oracle Configuration Manager) are now enabled by default to proactively provide support information and create service requests, speeding up support processes. This is optional and can be disabled, but it helps a lot in support cases. For further information, see: http://oracle.com/goto/solarisautoreg

IV. The new command svcbundle helps you create SMF manifests without having to struggle with XML editing. (btw, do you know the interactive editprop subcommand in svccfg? The listprop/setprop subcommands are great for scripting and automating, but for an interactive property editing session try, for example, this: svccfg -s svc:/application/pkg/system-repository:default editprop )

V. pfedit: Ever wondered how to delegate editing permissions for certain files? It is well known that "sudo /usr/bin/vi /etc/hosts" is not the right way, for sudo elevates the complete vi process to admin level, and the user can "break" out of the session as root simply by starting a shell from that vi. The new pfedit command provides a solution to exactly this challenge - an auditable, secure, per-user configurable editing facility. See the pfedit man page for examples.

VI. rsyslog, the popular logging daemon (filters, SSL, formattable output, SQL collect...), has been included in Solaris 11.1 as an alternative to syslog.

VII. Zones: Solaris Zones - as a major Solaris differentiator - got lots of love in terms of new features:
- ZOSS (Zones on Shared Storage): placing your zones on shared storage (FC, iSCSI) has never been this easy - via zonecfg.
- Parallel updates: with S11's boot environments, updating zones was no problem and meant no downtime anyway, but now you can update them in parallel, a much faster update if you are running a large number of zones. This is like parallel patching in Solaris 10, but with all the IPS/ZFS/S11 goodness.
- Per-zone fstype statistics: running zones on a shared filesystem complicates I/O debugging, since ZFS collects all the random writes and delivers them sequentially to boost performance. Now, via kstat, you can find out which zone's I/O has an impact on the others; see the examples in the documentation: http://docs.oracle.com/cd/E26502_01/html/E29024/gmheh.html#scrolltoc
- Zones got RDSv3 protocol support for InfiniBand, and IPoIB support with Crossbow's anet (automatic vnic creation) feature.
- NUMA I/O support for Zones: customers can now determine the NUMA I/O topology of the system from within zones.

VIII. Security got a lot of attention too:
- Automated security/audit reporting, with built-in reporting templates, e.g. for PCI (payment card industry) audits.
- PAM is now configurable on a per-user basis instead of system-wide, allowing different authentication requirements for different users.
- SSH in Solaris 11.1 now supports running in FIPS 140-2 mode, that is, in a U.S. government security accredited fashion.
- SHA512/224 and SHA512/256 cryptographic hash functions are implemented in a FIPS-compliant way - and on a T4, implemented in silicon! That is, government-approved cryptography at HW speed.
- Generally, Solaris is currently under evaluation to be both FIPS and Common Criteria certified.

IX. Networking, as one of the core strengths of Solaris 11, has been extended with:
- Data Center Bridging (DCB): not only setups where network and storage share the same fabric (FCoE, anyone?) can have Quality-of-Service requirements. DCB enables peers to distinguish traffic based on priorities. Your NICs have to support DCB; see the documentation, and additional information on Wikipedia.
- DataLink MultiPathing (DLMP) enables link aggregation to span multiple switches, even between those of different vendors. But there are essential differences from the good old bandwidth-aggregating LACP; see the documentation: http://docs.oracle.com/cd/E26502_01/html/E28993/gmdlu.html#scrolltoc
- VNIC live migration is now supported from one physical NIC to another on the fly.

X. Data management:
- FedFS (Federated FileSystem) is new; it relies on Solaris 11's NFS referral mechanism to join separate shares of different NFS servers into a single filesystem namespace. The referral mechanism has been there since S11 11/11; in Solaris 11.1 FedFS uses LDAP as the one global nameservice to bind them all.
- The iSCSI initiator now uses the T4 CPU's HW-implemented CRC32 algorithm, improving iSCSI throughput while reducing CPU utilization on a T4.
- Storage locking improvements are now RAC-aware, speeding up throughput by up to 20% with better locking communication between nodes!

XI. Kernel performance optimizations:
- The new Virtual Memory subsystem ("VM2") now scales to memory ranges of 100+ TB.
- The memory predictor monitors large memory page usage and adjusts memory page sizes to applications' needs.
- OSM, the Optimized Shared Memory feature, allows an Oracle DB's SGA to be resized online.

XII. The Power Aware Dispatcher is now enabled by default, reducing power consumption of idle CPUs. Also, LDoms' Power Management policies and the poweradm settings in the Solaris 11 OS now cooperate.

XIII. x86 boot: upgrade to GRUB2 (the Grand Unified Bootloader). Because grub2's configuration syntax differs from grub1's, you should not edit the new grub configuration (grub.cfg) by hand but use the new bootadm features to update it. GRUB2 adds UEFI support and also support for disks over 2 TB.

XIV. Improved viewing of per-CPU statistics in mpstat. This one might seem of less importance at first, but nowadays having better sorting/filtering possibilities on a periodically updated mpstat output of 256+ vCPUs can be a blessing.

XV. Support for Solaris Cluster 4.1: the What's New document doesn't actually mention this one, since OSC 4.1 had not been released at the time 11.1 was. But it is available now, it requires Solaris 11.1, and it's only a "pkg update" away.

...aand I seriously need to stop here. There's a lot I missed - Edge Virtual Bridging, lofi tuning, ZFS sharing and crypto enhancements, USB 3.0, pulseaudio, trusted extensions updates, etc. - but if I mentioned all of those I would effectively copy the What's New document. Which I recommend reading now anyway; it is a great extract of the 300+ new projects and RFE follow-ups in S11.1, and this blog post is a summary of that extract.

For closing words, allow me to come back to Requests For Enhancement (RFEs). Any customer can request features.
Open a Support Request, explain that it is an RFE, and describe the feature you or your company would like to see implemented in S11. The more SRs collected for an RFE, the better its chance of being implemented. Feel free to provide feedback about the product, as well as about the Solaris 11.1 Documentation using the "Feedback" button there. Both the Solaris engineers and the documentation writers are eager to hear your input. Feel free to comment about this post too. Except that it's too long ;)  wbr, charlie

    Read the article

  • ASP.NET MVC 3 Hosting :: Error Handling and CustomErrors in ASP.NET MVC 3 Framework

    - by C. Miller
    So, what else is new in MVC 3? MVC 3 now has a GlobalFilterCollection that is automatically populated with a HandleErrorAttribute. This default filter attribute brings with it a new way of handling errors in your web applications. In short, you can now handle errors inside of the MVC pipeline. What does that mean? It gives you direct programmatic control over handling your 500 errors, in the same way that ASP.NET and CustomErrors give you configurable control over handling your HTTP error codes. How does that work out? Think of it as a routing table specifically for your exceptions - it's pretty sweet!

Global Filters
The new Global.asax file now has a RegisterGlobalFilters method that is used to add filters to the new GlobalFilterCollection, statically located at System.Web.Mvc.GlobalFilter.Filters. By default this method adds one filter, the HandleErrorAttribute.

public class MvcApplication : System.Web.HttpApplication
{
    public static void RegisterGlobalFilters(GlobalFilterCollection filters)
    {
        filters.Add(new HandleErrorAttribute());
    }
}

HandleErrorAttributes
The HandleErrorAttribute is pretty simple in concept: MVC has already adjusted us to using filter attributes for AcceptVerbs and RequiresAuthorization; now we are going to use them for (as the name implies) error handling, and we are going to do so on a (also as the name implies) global scale. The HandleErrorAttribute has properties for ExceptionType, View, and Master. ExceptionType allows you to specify which exception the attribute should handle. View allows you to specify which error view (page) you want it to redirect to. Last but not least, Master allows you to control which master page (or, as Razor refers to them, layout) you want to render with, even if that means overriding the default layout specified in the view itself.

public class MvcApplication : System.Web.HttpApplication
{
    public static void RegisterGlobalFilters(GlobalFilterCollection filters)
    {
        filters.Add(new HandleErrorAttribute
        {
            ExceptionType = typeof(DbException),
            // DbError.cshtml is a view in the Shared folder.
            View = "DbError",
            Order = 2
        });
        filters.Add(new HandleErrorAttribute());
    }
}

Error Views
All of your views still work like they did in the previous version of MVC (except of course that they can now use the Razor engine). However, a view that is used to render an error cannot have a specified model! This is because it already has a model, and that is System.Web.Mvc.HandleErrorInfo.

@model System.Web.Mvc.HandleErrorInfo

@{
    ViewBag.Title = "DbError";
}

<h2>A Database Error Has Occurred</h2>

@if (Model != null)
{
    <p>@Model.Exception.GetType().Name<br />
    thrown in @Model.ControllerName @Model.ActionName</p>
}

Errors Outside of the MVC Pipeline
The HandleErrorAttribute will only handle errors that happen inside of the MVC pipeline, better known as 500 errors. Errors outside of the MVC pipeline are still handled the way they have always been with ASP.NET: you turn on custom errors, specify error codes and paths to error pages, and so on. It is important to remember that these will fire for anything and everything outside of what the HandleErrorAttribute handles. They will also fire whenever an error is not handled with the HandleErrorAttribute from inside of the pipeline.
<system.web>
  <customErrors mode="On" defaultRedirect="~/error">
    <error statusCode="404" redirect="~/error/notfound"></error>
  </customErrors>
</system.web>

Sample Controllers
public class ExampleController : Controller
{
    public ActionResult Exception()
    {
        throw new ArgumentNullException();
    }
    public ActionResult Db()
    {
        // Inherits from DbException
        throw new MyDbException();
    }
}

public class ErrorController : Controller
{
    public ActionResult Index()
    {
        return View();
    }
    public ActionResult NotFound()
    {
        return View();
    }
}

Putting It All Together
If we have all the code above included in our MVC 3 project, here is how the following scenarios will play out:
1. A controller action throws an Exception. You will remain on the current page and the global HandleErrorAttributes will render the Error view.
2. A controller action throws any type of DbException. You will remain on the current page and the global HandleErrorAttributes will render the DbError view.
3. Go to a non-existent page. You will be redirected to the Error controller's NotFound action by the CustomErrors configuration for HTTP status code 404.
But don't take my word for it, download the sample project and try it yourself.

Three Important Lessons Learned
For the most part this is all pretty straightforward, but there are a few gotchas that you should remember to watch out for:
1) Error views have models, but they must be of type HandleErrorInfo. It is confusing at first to think that you can't control the M in an MVC page, but it's for a good reason. Errors can come from any action in any controller, and no redirect is taking place, so the view engine is just going to render an error view with the only data it has: the HandleErrorInfo model. Do not try to set the model on your error page or pass in a different object through a controller action; it will just blow up and cause a second exception after your first exception!
2) When the HandleErrorAttribute renders a page, it does not pass through a controller or an action. The standard web.config CustomErrors mechanism literally redirects a failed request to a new page. The HandleErrorAttribute is just rendering a view, so it is not going to pass through a controller action. But that's OK! Remember, a controller's job is to get the model for a view, and an error already has a model ready to give to the view, so there is no need to pass through a controller. That being said, the normal ASP.NET custom errors still need to route through controllers. So if you want to share an error page between the HandleErrorAttribute and your web.config redirects, you will need to create a controller action and a route for it. But then, when you render that error view from your action, you can only use the HandleErrorInfo model or the ViewData dictionary to populate your page.
3) The HandleErrorAttribute obeys whether CustomErrors are on or off, but does not use their redirects. If you turn CustomErrors off in your web.config, the HandleErrorAttributes will stop handling errors. However, that is the only configuration these two mechanisms share. The HandleErrorAttribute will not use your defaultRedirect property, or any other errors registered with custom errors.

In Summary
The HandleErrorAttribute is for displaying 500 errors that were caused by exceptions inside of the MVC pipeline. Custom errors are for redirecting to error pages for the other HTTP status codes.
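A natural next step - not covered in the post above - is to make those 500 errors visible to developers as well as users. The sketch below is only an illustration of that idea: the class name LoggingHandleErrorAttribute is invented for this example, and it logs through System.Diagnostics.Trace purely to stay self-contained (log4net, NLog, or any other logger would slot in the same way). It extends the real MVC 3 HandleErrorAttribute by overriding its virtual OnException method, then lets the base class render the error view exactly as described above.

using System.Diagnostics;
using System.Web.Mvc;

// Hypothetical example: record the exception, then let the stock attribute
// decide whether to handle it and which error view to render.
public class LoggingHandleErrorAttribute : HandleErrorAttribute
{
    public override void OnException(ExceptionContext filterContext)
    {
        if (filterContext != null && filterContext.Exception != null)
        {
            // Trace is used here only to keep the sketch dependency-free.
            Trace.TraceError("Unhandled MVC exception: {0}", filterContext.Exception);
        }

        base.OnException(filterContext);
    }
}

// Registered in Global.asax in place of (or alongside) the default attribute:
// filters.Add(new LoggingHandleErrorAttribute());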

    Read the article

  • Learnings from trying to write better software: Loud errors from the very start

    - by theo.spears
    Microsoft made a very small number of backwards-incompatible changes between .NET 1.1 and 2.0, because they wanted to make it as easy and safe as possible to port applications to the new runtime. (Here’s a list.) However, one thing they did change was what happens when a background thread fails with an unhandled exception: in .NET 1.1 nothing happened, the thread terminated, and the application continued oblivious. Try the same trick in .NET 2.0 and the entire application, including all threads, will rudely terminate.

There are three reasons for this. Firstly, if a background thread has crashed, it may have left the entire application in an inconsistent state, in a way that will affect other threads. It is better to terminate the entire application than to continue and have the application perform actions based on a broken state, for example taking customer orders or writing corrupt files to disk.

Secondly, during software development it is far better for errors to be loud and obtrusive. Even if you have unit tests and integration tests (and you should), a key part of ensuring software works properly is to actually try using it, both through systematic testing and through the casual use all software gets from its developers. Subtle errors are easy to miss if you are not actually doing real work with the application; loud errors are obvious.

Thirdly, and most importantly, even if catching and swallowing exceptions indiscriminately doesn't cause any problems in your application, the presence of unexpected exceptions shows you do not fully understand the behavior of your code. The currently released version of your application may be absolutely correct. However, because your mental model of its behavior is wrong, any future change you make to the program could, and probably will, introduce critical errors.

This applies to more than just exceptions causing threads to exit; any unexpected state should make the application blow up in an un-ignorable way. The worst thing you can do is silently swallow errors and continue. And let's be clear: writing to a log file does not count as blowing up in an un-ignorable way.

This is all simple as long as the call stack only contains your code, but when your functions start to be called by third-party or .NET framework code, it's surprisingly easy for exceptions to start vanishing. Let's look at two examples.

1. Windows Forms drag and drop events
Usually, if you throw an exception from a WinForms event handler it will bring up the "application has crashed" dialog with abort and continue options. This is a good default behavior - the error is big and loud, but it is possible for the user to ignore the error and hopefully save their data, if somehow the bug makes it past testing. However, drag and drop events are different - throw an exception from one of these and it will just be silently swallowed with no explanation. (By the way, it's not just drag and drop events; timer events do it too.) You can research how exceptions are treated in each kind of handler and code appropriately, but the safest and most user-friendly approach is to always catch exceptions in your event handlers and show your own error message. I'll talk about one good approach to handling these exceptions at the end of this post.

2. SSMS integration for SQL Tab Magic
A while back I wrote an SSMS add-in called SQL Tab Magic (learn more about the process here). It works by listening to certain SSMS events and remembering which documents are opened and closed.
I deployed it internally and it was used for a few months by a number of people without problems, so I was reasonably confident in its quality. Before releasing it I made a few cleanups, including introducing error reporting. Bam. A few days later I was looking at over 1,000 error reports in my inbox. It turns out I wasn't handling table designers properly. The exceptions were there, but again SSMS was helpfully swallowing them all for me, so I was blissfully unaware. Had I made my errors loud from the start, I would have noticed these issues long before and fixed them.

Handling exceptions
Now that you are systematically catching exceptions throughout your application, you need to do something with them. I've tried three options: log them, alert the user, and automatically send them home.

There are a few good options for logging in .NET. The most widespread is Apache log4net, which provides a very capable and configurable logging framework. There is also NLog, which has a compatible interface with a greater emphasis on fluent rather than XML configuration.

Alerting the user serves two purposes. Firstly, it means they understand their action has failed, so they don't just assume it worked (a silent file copy failure is a problem if you then delete the originals) or keep waiting for a background task to complete. Secondly, it means the users can report the bug to your support team, and then you can fix it. This means the message you show the user should contain the information you need as a developer to identify and fix the problem. And the user will probably just send you a screenshot of the dialog, so it shouldn't be hidden by scroll bars.

This leads us to the third option: automatically sending error reports home. By automatic I mean with minimal effort on the part of the user, rather than doing it silently behind their backs. The advantage of this is that you can send back far more detailed and precise information than you can expect a user to include in an email, and by making it easier to report errors, you make it more likely users will do so.

We do this using a great tool called SmartAssembly (full disclosure: this is a product made by Red Gate). It captures complete stack traces, including the values of all local variables, and then allows the user to send all this information back with a single click. We also capture log files to help understand what led up to the error. We then use the free SmartAssembly Sync for Jira to dedupe these reports and raise them as bugs in our bug tracking system.

The combined effect of loud errors during development and automatic error reporting once the software is deployed allows us to find and fix more bugs, correct misunderstandings about how our software works, and overall is a key piece in delivering higher quality software. However, it is no substitute for having motivated, cunning testers in the building - and we're looking to hire more of those too.

If you found this post interesting you should follow me on Twitter.
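To make the advice about event handlers concrete, here is a minimal WinForms sketch of the "catch it yourself and shout" pattern described above. It is not code from the post: the form, the ImportDroppedData stub, and the ReportError helper are all invented for illustration, and a real application would plug in its own work and its own logging alongside the message box.

using System;
using System.Windows.Forms;

public class DropForm : Form
{
    public DropForm()
    {
        // Drag and drop handlers are one of the places WinForms silently
        // swallows exceptions, so we guard them explicitly.
        AllowDrop = true;
        DragDrop += OnDragDrop;
    }

    private void OnDragDrop(object sender, DragEventArgs e)
    {
        try
        {
            ImportDroppedData(e.Data);
        }
        catch (Exception ex)
        {
            // Loud and obvious, instead of vanishing without a trace.
            ReportError(ex);
        }
    }

    private void ImportDroppedData(IDataObject data)
    {
        // Placeholder for the real work; throwing keeps the sketch honest.
        throw new NotImplementedException("Import not implemented yet.");
    }

    private static void ReportError(Exception ex)
    {
        // Show the full exception text so a screenshot is enough for a bug report.
        MessageBox.Show(ex.ToString(), "Something went wrong",
            MessageBoxButtons.OK, MessageBoxIcon.Error);
    }
}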

    Read the article

  • WPF ListView as a DataGrid – Part 2

    - by psheriff
    In my last blog post I showed you how to create GridViewColumn objects on the fly from the metadata in a DataTable. By doing this you can create columns for a ListView at runtime instead of having to pre-define each ListView for each different DataTable. Many of us use collections of our own classes, and it would be nice to be able to do the same thing for those collection classes as well. This blog post will show you one approach for using collection classes as the source of the data for your ListView.

Figure 1: A List of Data using a ListView

Load Property Names
You could use reflection to gather the property names in your class, but there are two things wrong with that approach. First, reflection is relatively slow, and second, you may not want to display every property of your class in the ListView. Instead of reflection you could create your own custom collection class of PropertyHeader objects. Each PropertyHeader object contains, at a minimum, a property name and a header text value. You could add a width property as well. All you need to do is create a collection of PropertyHeader objects where each object represents one column in your ListView. Below is a simple example:

PropertyHeaders coll = new PropertyHeaders();
coll.Add(new PropertyHeader("ProductId", "Product ID"));
coll.Add(new PropertyHeader("ProductName", "Product Name"));
coll.Add(new PropertyHeader("Price", "Price"));

Once you have created this collection, you can pass it to a method that creates the GridViewColumn objects based on the information it holds. Below is the full code for the PropertyHeader class. Besides the PropertyName and HeaderText properties, there is a constructor that allows you to set both properties when the object is created.

C#
public class PropertyHeader
{
  public PropertyHeader()
  {
  }

  public PropertyHeader(string propertyName, string headerText)
  {
    PropertyName = propertyName;
    HeaderText = headerText;
  }

  public string PropertyName { get; set; }
  public string HeaderText { get; set; }
}

VB.NET
Public Class PropertyHeader
  Public Sub New()
  End Sub

  Public Sub New(ByVal propName As String, ByVal header As String)
    PropertyName = propName
    HeaderText = header
  End Sub

  Private mPropertyName As String
  Private mHeaderText As String

  Public Property PropertyName() As String
    Get
      Return mPropertyName
    End Get
    Set(ByVal value As String)
      mPropertyName = value
    End Set
  End Property

  Public Property HeaderText() As String
    Get
      Return mHeaderText
    End Get
    Set(ByVal value As String)
      mHeaderText = value
    End Set
  End Property
End Class

You can use a generic List class to create a collection of PropertyHeader objects as shown in the following code.

C#
public class PropertyHeaders : List<PropertyHeader>
{
}

VB.NET
Public Class PropertyHeaders
  Inherits List(Of PropertyHeader)
End Class

Create Property Header Objects
You need a method somewhere that creates and returns a collection of PropertyHeader objects representing the columns you wish to add to your ListView, prior to binding your collection class to that ListView. Below is a sample method called GetProperties that builds a list of PropertyHeader objects with properties and headers for a Product object.
C#
public PropertyHeaders GetProperties()
{
  PropertyHeaders coll = new PropertyHeaders();

  coll.Add(new PropertyHeader("ProductId", "Product ID"));
  coll.Add(new PropertyHeader("ProductName", "Product Name"));
  coll.Add(new PropertyHeader("Price", "Price"));

  return coll;
}

VB.NET
Public Function GetProperties() As PropertyHeaders
  Dim coll As New PropertyHeaders()

  coll.Add(New PropertyHeader("ProductId", "Product ID"))
  coll.Add(New PropertyHeader("ProductName", "Product Name"))
  coll.Add(New PropertyHeader("Price", "Price"))

  Return coll
End Function

WPFListViewCommon Class
Now that you have a collection of PropertyHeader objects, you need a method that will create a GridView and a collection of GridViewColumn objects based on this PropertyHeader collection. Below is a static/Shared method that you might put into a class called WPFListViewCommon.

C#
public static GridView CreateGridViewColumns(
  PropertyHeaders properties)
{
  GridView gv;
  GridViewColumn gvc;

  // Create the GridView
  gv = new GridView();
  gv.AllowsColumnReorder = true;

  // Create the GridView Columns
  foreach (PropertyHeader item in properties)
  {
    gvc = new GridViewColumn();
    gvc.DisplayMemberBinding = new Binding(item.PropertyName);
    gvc.Header = item.HeaderText;
    gvc.Width = Double.NaN;
    gv.Columns.Add(gvc);
  }

  return gv;
}

VB.NET
Public Shared Function CreateGridViewColumns( _
    ByVal properties As PropertyHeaders) As GridView
  Dim gv As GridView
  Dim gvc As GridViewColumn

  ' Create the GridView
  gv = New GridView()
  gv.AllowsColumnReorder = True

  ' Create the GridView Columns
  For Each item As PropertyHeader In properties
    gvc = New GridViewColumn()
    gvc.DisplayMemberBinding = New Binding(item.PropertyName)
    gvc.Header = item.HeaderText
    gvc.Width = [Double].NaN
    gv.Columns.Add(gvc)
  Next

  Return gv
End Function

Build the Product Screen
To build the window shown in Figure 1, you might write code like the following:

C#
private void CollectionSample()
{
  Product prod = new Product();

  // Setup the GridView Columns
  lstData.View = WPFListViewCommon.CreateGridViewColumns(
       prod.GetProperties());
  lstData.DataContext = prod.GetProducts();
}

VB.NET
Private Sub CollectionSample()
  Dim prod As New Product()

  ' Setup the GridView Columns
  lstData.View = WPFListViewCommon.CreateGridViewColumns( _
       prod.GetProperties())
  lstData.DataContext = prod.GetProducts()
End Sub

The Product class contains a method called GetProperties that returns a PropertyHeaders collection. You pass this collection to WPFListViewCommon's CreateGridViewColumns method, and it creates a GridView for the ListView. When you then feed the Product collection to the ListView's DataContext property, the appropriate columns have already been created and data bound.

Summary
In this blog post you learned how to create a ListView that acts like a DataGrid using a collection class. While it does take a little code to do this, it is an alternative to creating each GridViewColumn in XAML, and it gives you a lot of flexibility. You could even read in the property names and header text from an XML file for a truly configurable ListView.

NOTE: You can download the complete sample code (in both VB and C#) at my website. http://www.pdsa.com/downloads. Choose Tips & Tricks, then "WPF ListView as a DataGrid – Part 2" from the drop-down.

Good luck with your coding,
Paul Sheriff

** SPECIAL OFFER FOR MY BLOG READERS **
Visit http://www.pdsa.com/Event/Blog for a free eBook on "Fundamentals of N-Tier".
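As a rough sketch of the XML-driven idea mentioned in the summary above (this is not part of the original sample: the file name Columns.xml, its element and attribute names, and the PropertyHeaderLoader class are all invented here for illustration), the column definitions could be loaded from disk and fed straight into the existing PropertyHeaders collection:

C#
// Hypothetical Columns.xml:
// <Columns>
//   <Column PropertyName="ProductId" HeaderText="Product ID" />
//   <Column PropertyName="ProductName" HeaderText="Product Name" />
//   <Column PropertyName="Price" HeaderText="Price" />
// </Columns>
using System.Xml.Linq;

public static class PropertyHeaderLoader
{
    public static PropertyHeaders LoadFromXml(string path)
    {
        PropertyHeaders coll = new PropertyHeaders();
        XDocument doc = XDocument.Load(path);

        // One <Column> element becomes one PropertyHeader (and so one ListView column).
        foreach (XElement col in doc.Root.Elements("Column"))
        {
            coll.Add(new PropertyHeader(
                (string)col.Attribute("PropertyName"),
                (string)col.Attribute("HeaderText")));
        }

        return coll;
    }
}

// Usage, replacing the hard-coded GetProperties call:
// lstData.View = WPFListViewCommon.CreateGridViewColumns(
//     PropertyHeaderLoader.LoadFromXml("Columns.xml"));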

    Read the article

< Previous Page | 13 14 15 16 17 18 19 20 21  | Next Page >