Search Results

Search found 1498 results on 60 pages for 'continuing education'.

Page 29/60 | < Previous Page | 25 26 27 28 29 30 31 32 33 34 35 36  | Next Page >

  • Thread Synchronization and Synchronization Primitives

    When considering synchronization in an application, the decision truly depends on what the application and its worker threads are going to do. I would use synchronization if two or more threads could possibly manipulate the same instance of an object at the same time. In C# this can be demonstrated by storing data in a static object: a static object is initialized once per application, and the data within it can be accessed by all threads. I would use the synchronization primitives to prevent that data from being manipulated by multiple threads simultaneously, which prevents data corruption within the object. On the other hand, if all the threads used non-static objects and were independent of the other tasks, there would be no need for synchronization. Synchronization primitives in C# fall into four groups: basic blocking, locking, signaling, and non-blocking synchronization constructs. The basic blocking methods include Sleep, Join, and Task.Wait. These methods force a thread to wait until other threads have completed, and they can also force a thread to wait a set amount of time before continuing to work. The locking primitives prevent a thread from entering a critical section of code while another thread is in the same critical section; if another thread attempts to enter a locked code block, it will wait until the block is released. The signaling primitives allow a thread to pause work until it receives a notification from another thread that it is OK to continue, which removes the need for polling. The non-blocking synchronization constructs protect access to a common field by calling upon processor primitives.
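
    As a rough illustration of these categories (a sketch of my own, not code from the post), the snippet below guards a shared static counter with lock, uses a ManualResetEventSlim for signaling, and uses Task.WaitAll for basic blocking; the Interlocked comment shows the non-blocking alternative. The SharedCounter class name and the iteration counts are made up for the example.

        using System;
        using System.Threading;
        using System.Threading.Tasks;

        class SharedCounter
        {
            // Static state is shared by every thread in the application.
            static int _count;
            static readonly object _sync = new object();
            static readonly ManualResetEventSlim _ready = new ManualResetEventSlim(false);

            static void Main()
            {
                var workers = new Task[4];
                for (int i = 0; i < workers.Length; i++)
                {
                    workers[i] = Task.Run(() =>
                    {
                        _ready.Wait();              // signaling: pause until another thread says go
                        for (int j = 0; j < 100000; j++)
                        {
                            lock (_sync)            // locking: only one thread in the critical section
                            {
                                _count++;
                            }
                            // Non-blocking alternative: Interlocked.Increment(ref _count);
                        }
                    });
                }

                _ready.Set();                       // signal all workers to begin
                Task.WaitAll(workers);              // basic blocking: wait for the workers to finish
                Console.WriteLine(_count);          // prints 400000 on every run
            }
        }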

    Read the article

  • It's All About Expectation Management

    - by D'Arcy Lussier
    I saw this tweet from Gerald Weinberg today: I’d expand on this – it’s not just managers, it’s our clients as well. With so much focus on “agile” and reducing the amount of wasteful documentation created, those that typically consume traditional deliverables haven’t caught up. For many, there still is a correlation between seeing a mountain of paper, or a 30 page Word document, or a 40 slide PowerPoint, and feeling like some “work” was done. The “Value Driven Development” movement is still in its infancy, even with the adoption and success stories. So, we have two options – we can complain about it, or we can learn how to live with it while continuing to evangelize about the benefits of value over bloat. The reality is that perceived value is still value, so what’s important – especially in a situation as Gerald mentions where management or clients don’t understand the work – is to find out what the manager/client values and deliver to that. That doesn’t mean you don’t discuss it. That doesn’t mean that if you see risks in what a manager/client is asking, you don’t question it and provide alternatives. But it does mean that you don’t slam the door on it – you don’t just toss it aside and ignore what their perceived value is. The world isn’t perfect, primarily because it’s filled with imperfect people. The only way to get better is to engage and not dismiss each other, even if we disagree on value.

    Read the article

  • The new workflow management of Oracle's Hyperion Planning: Define more details with Planning Unit Hierarchies and Promotional Paths

    - by Alexandra Georgescu
    After having been almost unchanged for several years, starting with the 11.1.2 release of Oracle's Hyperion Planning the Process Management has not only got a new name: "Approvals" now offers the possibility to further split Planning Units (comprised of a unique Scenario-Version-Entity combination) into more detailed combinations along additional secondary dimensions, a so-called Planning Unit Hierarchy, and also to pre-define a path of planners, reviewers and approvers, called a Promotional Path. I'd like to introduce you to the changes and enhancements in this new process management and arouse your curiosity to check out more details on it. One reason for using the former process management in Planning was to limit data entry rights to one person at a time based on the assignment of a planning unit. So the lowest level of granularity for this assignment was, for a given Scenario-Version combination, the individual entity. Even if in many cases one person wasn't responsible for all data being entered into that entity, but for only part of it, it was not possible to split the ownership along another additional dimension, for example by assigning ownership to different accounts at the same time. By defining a so-called Planning Unit Hierarchy (PUH) in Approvals this gap is now closed. Complementing this, new Shared Services roles for Planning have been created in order to manage the setup and use of Approvals: the Approvals Administrator, consisting of the following roles: Approvals Ownership Assigner, who assigns owners and reviewers to planning units for which Write access is assigned (including Planner responsibilities); Approvals Supervisor, who stops and starts planning units and takes any action on planning units for which Write access is assigned; and Approvals Process Designer, who can modify planning unit hierarchy secondary dimensions and entity members for which Write access is assigned, can also modify scenarios and versions that are assigned to planning unit hierarchies, and can edit validation rules on data forms for which access is assigned (this includes Planner and Ownership Assigner responsibilities as well). Setup of a Planning Unit Hierarchy is done under the Administration menu, by selecting Approvals, then Planning Unit Hierarchy. Here you create new PUHs or edit existing ones. The following window displays: After providing a name and an optional description, a pre-selection of entities can be made for which the PUH will be defined. Available options are: All, which pre-selects all entities to be included for the definitions on the subsequent tabs; None, where manual entity selections will be made subsequently; and Custom, which offers the selection of an ancestor and the relative generations that should be included for further definitions. Finally a pattern needs to be selected, which will determine the general flow of ownership: Free-form uses the flow/assignment of ownership according to Planning releases prior to 11.1.2. In Bottom-up, data input is done at the leaf member level; ownership follows the hierarchy of approval along the entity dimension, including refinements using a secondary dimension in the PUH, amended by defined additional reviewers in the promotional path. Distributed uses data input at the leaf level, while ownership starts at the top level and then is distributed down the organizational hierarchy (entities); after ownership reaches the lower levels, budgets are submitted back to the top through the approval process.
Proceeding to the next step, a secondary dimension and the respective members from that dimension might now be selected, in order to create more detailed combinations underneath each entity. After selecting the Dimension and a Parent Member, the definition of a Relative Generation below this member assists in populating the field for Selected Members, while the Count column shows the number of selected members. For refining this list, you might click on the icon right beside the selected member field and use the check-boxes in the appearing list for deselecting members. -------------------------------------------------------------------------------------------------------- TIP: In order to reduce maintenance of the PUH due to changes in the dimensions included (members added, moved or removed) you should consider dynamically linking those dimensions in the PUH with the dimension hierarchies in the planning application. For secondary dimensions this is done using the check-boxes in the Auto Include column. For the primary dimension, the respective selection criteria are applied by right-clicking the name of an entity activated as a planning unit, then selecting an item from the shown list of include or exclude options (children, descendants, etc.). In any case, in order to apply dimension changes impacting the PUH, a synchronization must be run. Whether this is really necessary or not is shown on the first screen after selecting from the menu Administration, then Approvals, then Planning Unit Hierarchy: under Synchronized you find the statuses Yes, No or Locked, where the last one indicates that another user is just changing or synchronizing the PUH. Select one of the not yet synchronized PUHs (status No) and click the Synchronize option in order to execute. -------------------------------------------------------------------------------------------------------- In the next step owners and reviewers are assigned to the PUH. Using the icons with the magnifying glass right beside the columns for Owner and Reviewer, the respective assignments can be made in the order that you want them to review the planning unit. While it is possible to assign only one owner per entity or combination of entity + member of the secondary dimension, the selection for reviewers might consist of more than one person. The complete Promotional Path, including the defined owners and reviewers for the entity parents, can be shown by clicking the icon. In addition, optional users might be defined to be notified about promotions for a planning unit. -------------------------------------------------------------------------------------------------------- TIP: Reviewers cannot change data, but can only review data according to their data access permissions and reject or promote planning units. -------------------------------------------------------------------------------------------------------- In order to complete your PUH definitions click Finish - this saves the PUH and closes the window. As a final step, before starting the approvals process, you need to assign the PUH to the Scenario-Version combination for which it should be used. From the Administration menu select Approvals, then Scenario and Version Assignment. Expand the PUH in order to see already existing assignments. Under Actions click the add icon and select scenarios and versions to be assigned. If needed, click the remove icon in order to delete entries. After these steps, setup is completed for starting the approvals process.
Start, stop and control of the approvals process is now done under the Tools menu, then Manage Approvals. The new PUH feature is complemented by various additional settings and features; at least some of them should be mentioned here: Export/Import of PUHs, an Out of Office agent, validation rules that change the promotional/approval path if violated (including the use of User-defined Attributes (UDAs)), and various new and helpful reviewer actions with corresponding approval states. About the Author: Bernhard Kinkel started working for Hyperion Solutions as a Presales Consultant and Consultant in 1998 and moved to Hyperion Education Services in 1999. He joined Oracle University in 2007, where he is a Principal Education Consultant. Based on these many years of working with Hyperion products he has detailed product knowledge across several versions. He delivers both classroom and live virtual courses. His areas of expertise are Oracle/Hyperion Essbase, Oracle Hyperion Planning and Hyperion Web Analysis.

    Read the article

  • Android “open for embedded”? Must-read Ars Technica article

    - by terrencebarr
    A few days ago ars technica published an article “Google’s iron grip on Android: Controlling open source by any means necessary”. If you are considering Android for embedded this article is a must-read to understand the severe ramifications of Google’s tight (and tightening) control on the Android technology and ecosystem. Some quotes from the ars technica article: “Android is open – except for all the good parts“ “Android actually falls into two categories: the open parts from the Android Open Source Project (AOSP) … and the closed source parts, which are all the Google-branded apps” “Android open source apps … turn into abandonware by moving all continuing development to a closed source model.” “Joining the OHA requires a company to sign its life away and promise to not build a device that runs a competing Android fork.” “Google Play Services is a closed source app owned by Google … to turn the “Android App Ecosystem” into the “Google Play Ecosystem” “You’re allowed to contribute to Android and allowed to use it for little hobbies, but in nearly every area, the deck is stacked against anyone trying to use Android without Google’s blessing“ Compare this with a recent Wired article “Oracle Makes Java More Relevant Than Ever”: “Oracle has actually opened up Java even more — getting rid of some of the closed-door machinations that used to be part of the Java standards-making process. Java has been raked over the coals for security problems over the past few years, but Oracle has kept regular updates coming. And it’s working on a major upgrade to Java, due early next year.” Cheers, – Terrence Filed under: Embedded, Mobile & Embedded Tagged: Android, embedded, Java Embedded, Open Source

    Read the article

  • How many different servers are needed to keep a website running with no downtime? [closed]

    - by Mason Wheeler
    Machines go down. It's a fact of life. They may need to be rebooted for some reason, or they may have a hardware failure, or a power outage. So if I wanted to deploy a website with a server backed by a SQL database, putting the whole thing on one server wouldn't be good enough. It obviously needs at least two servers, so that if one goes down, the other can pick up the slack until the first comes back up. Of course, if I have the server software on two machines, either one of which could go down, I can't place the database on either of those two machines, because it could go down. So the database needs its own server. But that server can go down, so I need a backup database server and some sort of replication system to keep it in sync so the main can fail over to it. So far, that's a bare minimum of 4 machines to keep one website running with a reasonable chance of no downtime (assuming no catastrophic events take place that take down both front-end servers at once or both DB servers at once, and no hacks, DDoS attacks, etc.). Am I missing any other factors, or should I consider 4 servers to be the minimum for running a website with a goal of continuing operation without downtime even when a server goes down?

    Read the article

  • SQL SERVER – Puzzle to Win Print Book – Write T-SQL Self Join Without Using FIRST_VALUE and LAST_VALUE

    - by pinaldave
    Last week we asked a puzzle: SQL SERVER – Puzzle to Win Print Book – Functions FIRST_VALUE and LAST_VALUE with OVER clause and ORDER BY. This puzzle got very interesting participation, and the details of the winner are listed here. In this puzzle we received two very important pieces of feedback: the puzzle cleared up the concepts of FIRST_VALUE and LAST_VALUE for the participants, but as it was based on SQL Server 2012, many could not participate because they had not yet installed SQL Server 2012. I really appreciate the feedback of users and decided to come up with something that is fun and helps you learn a new feature of SQL Server 2012. Please read yesterday's blog post SQL SERVER – Introduction to LEAD and LAG – Analytic Functions Introduced in SQL Server 2012 before continuing with this puzzle, as it is based on yesterday's post. Yesterday I ran the following query:

        USE AdventureWorks
        GO
        SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
            FIRST_VALUE(SalesOrderDetailID) OVER (ORDER BY SalesOrderDetailID) FstValue,
            LAST_VALUE(SalesOrderDetailID) OVER (ORDER BY SalesOrderDetailID) LstValue
        FROM Sales.SalesOrderDetail s
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty
        GO

    The above query will give us the following result. Puzzle: Now use a T-SQL self join, where the same table is joined to itself, and get the same result without using the LEAD or LAG functions. Hint: Introduction to JOINs – Basic of JOINs; Self Join; A new analytic functions in SQL Server Denali CTP3 – LEAD() and LAG(). Rules: Leave a comment with your detailed answer by Nov 21. Open world-wide (where Amazon ships books). If you blog about the puzzle's solution and you win, you win an additional surprise gift as well. Prizes: A print copy of my new book SQL Server Interview Questions Amazon|Flipkart. If you already have this book, you can opt for any of my other books SQL Wait Stats [Amazon|Flipkart|Kindle] and SQL Programming [Amazon|Flipkart|Kindle]. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • How do I create a popup banner before login with Lightdm?

    - by Rich Loring
    When Ubuntu was using gnome I was able to create a popup banner like the banner below before the login screen using zenity in the /etc/gdm/Init/Default. The line of code would be like this:

        if [ -f "/usr/bin/zenity" ]; then
            /usr/bin/zenity --info --text="`cat /etc/issue`" --no-wrap;
        else
            xmessage -file /etc/issue -button ok -geometry 540X480;
        fi

    How can I accomplish this with Unity?

    NOTICE TO USERS This is a Federal computer system (and/or it is directly connected to a BNL local network system) and is the property of the United States Government. It is for authorized use only. Users (authorized or unauthorized) have no explicit or implicit expectation of privacy. Any or all uses of this system and all files on this system may be intercepted, monitored, recorded, copied, audited, inspected, and disclosed to authorized site, Department of Energy, and law enforcement personnel, as well as authorized officials of other agencies, both domestic and foreign. By using this system, the user consents to such interception, monitoring, recording, copying, auditing, inspection, and disclosure at the discretion of authorized site or Department of Energy personnel. Unauthorized or improper use of this system may result in administrative disciplinary action and civil and criminal penalties. By continuing to use this system you indicate your awareness of and consent to these terms and conditions of use. LOG OFF IMMEDIATELY if you do not agree to the conditions stated in this warning.

    Read the article

  • Install RT Failed: DateTime >= 0.44 ...MISSING

    - by javano
    I am trying to install RT-4.0.5 (Request Tracker) but I keep getting the following output:

        $ make fixdeps
        <output cut>
        SOME DEPENDENCIES WERE MISSING.
        CORE missing dependencies:
        DateTime >= 0.44 ...MISSING
        make: *** [fixdeps] Error 1

    The full output is here (it's quite long): http://pastebin.com/raw.php?i=Tn7GrkYw

        $ lsb_release -a
        No LSB modules are available.
        Distributor ID: Ubuntu
        Description: Ubuntu 8.04.4 LTS
        Release: 8.04
        Codename: hardy

        $ perl --version
        This is perl 5, version 14, subversion 2 (v5.14.2) built for i686-linux

        $ cpan --version
        /usr/local/bin/cpan version 1.57 calling Getopt::Std::getopts (version 1.06 [paranoid]),
        running under Perl version 5.14.2.
        [Now continuing due to backward compatibility and excessive paranoia.
        See ``perldoc Getopt::Std'' about $Getopt::Std::STANDARD_HELP_VERSION.]
        Nothing to install!

    I can't see why this is a problem:

        $ cpan DateTime
        Going to read '/root/.cpan/Metadata'
        Database was generated on Thu, 08 Mar 2012 16:11:26 GMT
        DateTime is up to date (0.72).

    Read the article

  • Oracle Linux and Oracle VM Hardware Certification Program

    - by Durgam Vahia
    Oracle Linux and Oracle VM are continuing to see growth in the IHV (Independent Hardware Vendor) ecosystem. The Oracle Linux and Oracle VM Hardware Certification Program, also referred to as the HCL, provides a formal means for hardware vendors to work with Oracle to establish high-quality support for certified hardware platforms. Since the beginning of the program, a number of hardware partners have certified a range of server platforms on Oracle Linux and Oracle VM. Currently, the HCL lists over 400 certifications from 10 server vendors and the list continues to grow at a rapid pace. A new hardware certification involves close collaboration between Oracle and the server partner to ensure that adequate testing is performed on the target server and the results are thoroughly reviewed. This rigorous process ensures that when a new hardware platform is listed on the HCL, it has full support from both Oracle and the respective partner. Additionally, once a certification is achieved with Oracle Linux with the current version of the Unbreakable Enterprise Kernel, future minor updates of the software continue to carry over the certification, reducing the need for re-certification. For the complete list of certified hardware, please visit Oracle Linux and Oracle VM Certified Hardware. Also refer to the Frequently Asked Questions for more information.

    Read the article

  • LDAP Structure: dc=example,dc=com vs o=Example

    - by PAS
    I am relatively new to LDAP, and have seen two types of examples of how to set up your structure. One method is to have the base being dc=example,dc=com while other examples have the base being o=Example. Continuing along, you can have a group looking like:

        dn: cn=team,ou=Group,dc=example,dc=com
        cn: team
        objectClass: posixGroup
        memberUid: user1
        memberUid: user2
        ...

    or using the "O" style:

        dn: cn=team, o=Example
        objectClass: posixGroup
        memberUid: user1
        memberUid: user2

    My questions are: Are there any best practices that dictate using one method over the other? Is it just a matter of preference which style you use? Are there any advantages to using one over the other? Is one method the old style, and one the new-and-improved version? So far, I have gone with the dc=example,dc=com style. Any advice the community could give on the matter would be greatly appreciated.

    Read the article

  • A Reusable Builder Class for Javascript Testing

    - by Liam McLennan
    Continuing on my series of builders for C# and Ruby here is the solution in Javascript. This is probably the implementation with which I am least happy. There are several parts that did not seem to fit the language. This time around I didn’t bother with a testing framework, I just append some values to the page with jQuery. Here is the test code:

        var initialiseBuilder = function() {
            var builder = builderConstructor();
            builder.configure({
                'Person': function() { return {name: 'Liam', age: 26}},
                'Property': function() { return {street: '127 Creek St', manager: builder.a('Person') }}
            });
            return builder;
        };

        var print = function(s) {
            $('body').append(s + '<br/>');
        };

        var build = initialiseBuilder();

        // get an object
        liam = build.a('Person');
        print(liam.name + ' is ' + liam.age);

        // get a modified object
        liam = build.a('Person', function(person) { person.age = 999; });
        print(liam.name + ' is ' + liam.age);

        home = build.a('Property');
        print(home.street + ' manager: ' + home.manager.name);

    and the implementation:

        var builderConstructor = function() {
            var that = {};
            var defaults = {};

            that.configure = function(d) { defaults = d; };

            that.a = function(type, modifier) {
                var o = defaults[type]();
                if (modifier) { modifier(o); }
                return o;
            };

            return that;
        };

    I still like javascript’s syntax for anonymous methods, defaults[type]() is much clearer than the Ruby equivalent @defaults[klass].call(). You can see the striking similarity between Ruby hashes and javascript objects. I also prefer modifier(o) to the equivalent Ruby, yield o.

    Read the article

  • Can't add network printer with system-config-printer package

    - by Erick David Ruiz Coronel
    Hello, I'm new here and I don't know if I'm doing this right, but I hope so. I have a printer connected to a Windows 8 machine. I also had Ubuntu 13.04, and printing from Linux to Windows worked fine, but when I upgraded to 13.10 my printer didn't work. I removed it thinking that would fix it, but then I couldn't add the printer again. I reinstalled cups and the system-config-printer-gnome package, but that didn't work either. Here is the terminal log:

        erick@Tauro:~$ system-config-printer
        Caught non-fatal exception. Traceback:
        File "/usr/share/system-config-printer/probe_printer.py", line 255, in _do_find
            fn ()
        File "/usr/share/system-config-printer/probe_printer.py", line 367, in _probe_hplip
            stderr=null)
        File "/usr/lib/python2.7/subprocess.py", line 709, in __init__
            errread, errwrite)
        File "/usr/lib/python2.7/subprocess.py", line 1326, in _execute_child
            raise child_exception
        OSError: [Errno 2] No existe el archivo o el directorio (No such file or directory)
        Continuing anyway..
        Traceback (most recent call last):
        File "/usr/share/system-config-printer/newprinter.py", line 912, in on_btnNPForward_clicked
            self.nextNPTab()
        File "/usr/share/system-config-printer/newprinter.py", line 1064, in nextNPTab
            stderr=file("/dev/null"))
        File "/usr/lib/python2.7/subprocess.py", line 709, in __init__
            errread, errwrite)
        File "/usr/lib/python2.7/subprocess.py", line 1326, in _execute_child
            raise child_exception
        OSError: [Errno 2] No existe el archivo o el directorio (No such file or directory)

    Any suggestions, please? C:

    Read the article

  • Achieving forward compatibility with C++11

    - by mcmcc
    I work on a large software application that must run on several platforms. Some of these platforms support some features of C++11 (e.g. MSVS 2010) and some don't support any (e.g. GCC 4.3.x). I see this situation continuing on for several years (my best guess: 3-5 years). Given that, I would like to set up a compatibility interface such that (to whatever degree possible) people can write C++11 code that will still compile with older compilers with a minimum of maintenance. Overall, the goal is to minimize #ifdef's as much as reasonably possible while still enabling basic C++11 syntax/features on the platforms that support them, and provide emulation on the platforms that don't. Let's start with std::move(). The most obvious way to achieve compatibility would be to put something like this in a common header file:

        #if !defined(HAS_STD_MOVE)
        namespace std { // C++11 emulation
            template <typename T> inline T& move(T& v) { return v; }
            template <typename T> inline const T& move(const T& v) { return v; }
        }
        #endif // !defined(HAS_STD_MOVE)

    This allows people to write things like

        std::vector<Thing> x = std::move(y);

    ... with impunity. It does what they want in C++11 and it does the best it can in C++03. When we finally drop the last of the C++03 compilers, this code can remain as is. However, according to the standard, it is illegal to inject new symbols into the std namespace. That's the theory. My question is, practically speaking, is there any harm in doing this as a way of achieving forward compatibility?

    Read the article

  • Debian Unstable + Postfix 2.6.5 + dkim-filter 2.8.2 issue

    - by kura
    I have Postfix installed on Debian Unstable and, as the title states, the system is completely up-to-date. I have tried to get DKIM signatures working on outgoing mail using dkim-filter 2.8.2. I couldn't use the default Debian way of doing things with sockets; instead I used the Ubuntu way:

        SOCKET="inet:12345@localhost"

    I have the following in my postfix/main.cf:

        milter_default_action = accept
        milter_protocol = 6
        smtpd_milters = inet:localhost:12345
        non_smtpd_milters = inet:localhost:12345

    All is fine except I get the following message when I start DKIM, in mail.log:

        dkim-filter[22029]: can't configure DKIM library; continuing

    And when it tries to sign mails I get the following error:

        postfix/cleanup[22042]: warning: milter inet:localhost:12345: can't read SMFIC_EOH reply packet header: Success

    And then the dkim-filter daemon stops. I've looked through Google but found no actual way to fix this that works for me. I have this working fine on an Ubuntu server but would love to get it working on Debian too.

    Read the article

  • Silverlight Cream for April 12, 2010 -- #837

    - by Dave Campbell
    In this Issue: Michael Washington, Joe McBride, Kirupa, Maurice de Beijer, Brad Abrams, Phil Middlemiss, and CorrinaB. Shoutout: Charlie Kindel has a post up about the incompatibility between VS2010RTM and what we currently have for WP7: Visual Studio 2010 RTM and the Windows Phone Developer Tools CTP and if you want to be notified when that changes, submit your email here. Erik Mork and Co. have their latest This Week in Silverlight 4.9.2010 posted. From SilverlightCream.com: Simplified MVVM: Silverlight Video Player Michael Washington created a 'designable' video player using MVVM that allows any set of controls to implement the player. Great tutorial and all the code. Windows Phone 7 Panorama Behaviors Joe McBride posted a link to a couple WP7 gesture behaviors and a link out to some more by smartyP. Event Bubbling and Tunneling Kirupa has a great article up on Event Bubbling and Tunneling... showing the route that events take through your WPF or Silverlight app. Using dynamic objects in Silverlight 4 Maurice de Beijer has a blog up about binding to indexed properties in Silverlight 4... in other words, you don't have to know what you're binding to at design time. Silverlight 4 + RIA Services - Ready for Business: Ajax Endpoint Brad Abrams is still continuing his RIA series. His latest is on exposing your RIA Services in JSON. Changing Data-Templates at run-time from the VM Looks like I missed Phil Middlemiss' latest post on Changing DataTemplates at run-time. He has a visual of why you might need this right up-front, and is a very common issue. Check out the solution he provides us. Windows System Color Theme for Silverlight - Part Three CorrinaB blogged screenshots and discussion of 3 new themes that are going to be coming up, and what they've done to the controls in general. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight Silverlight 3 Silverlight 4 Windows Phone MIX10

    Read the article

  • A Model for Planning Your Oracle BPM 10g Migration by Kris Nelson

    - by JuergenKress
    As the Oracle SOA Suite and BPM Suite 12c products enter beta, many of our clients are starting to discuss migrating from the Oracle 10g or prior platforms. With the BPM Suite 11g, Oracle introduced a major change in architecture with a strong focus on integration with SOA and an entirely new technology stack. In addition, there were fresh new UIs and a renewed business focus with an improved Process Composer and features like Adaptive Case Management. While very beneficial to both technology and the business, the fundamental change in architecture does pose clear migration challenges for clients who have made investments in the 10g platform. Some of the key challenges facing 10g customers include: managing in-process instance migration and running multiple process engines; migration of User Interfaces and other code within the environment that may not be automated; growing or finding technical staff with both 10g and 12c experience; and managing migration projects while continuing to move the business forward and meet day-to-day responsibilities. As a former practitioner in a mixed 10g/11g shop, I wrestled with many of these challenges as we tried to plan ahead for the migration. Luckily, there is migration tooling on the way from Oracle and several approaches you can use in planning your migration efforts. In addition, you already have a defined and visible process on the current platform, which will be invaluable as you migrate. A Migration Model: This model presents several options across a value and investment spectrum. The goal of the AVIO Migration Model is to kick-start discussions within your company and assist in creating a plan of action to take advantage of the new platform. As with all models, this is a framework for discussion and certain processes or situations may not fit. Please contact us if you have specific questions or want to discuss migration efforts in your situation. Read the complete article here. SOA & BPM Partner Community: For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Technorati Tags: Kris Nelson,ACM,Adaptive Case Management,Community,Oracle SOA,Oracle BPM,OPN,Jürgen Kress

    Read the article

  • Good experiences with bulk rate SMS providers?

    - by jen_h
    We're a pretty popular service; our users are currently sending 100,000+ SMS messages per month (projected 180k this month, and continuing to grow). We're currently using a primary domestic provider that doesn't provide bulk rates and doesn't provide short code access. We're using a few backup providers as well for max redundancy, but aren't thrilled by 'em. We're ideally looking for a service that provides good bulk rates/incentives, good uptime/redundancy/reputation, and easy API integration (including respectable error codes!) ;). Right now, we're looking primarily for a domestic US SMS solution, but aren't averse to using the same provider for both International & US. For those of you using bulk SMS right now - what are your recommendations, experiences, etc. in the bulk SMS domain? It sounds like I'm looking for a golden unicorn here, I know, but any data/recommendations/warnings you've got are helpful!

    Read the article

  • Exchange 2010, multiple accepted domains, UCC and outside webhosts

    - by westbadger
    We have an Exchange 2010 server configured to send and receive mail on several accepted domains for Outlook Anywhere, with a UCC cert addressing each mail.domain.com and autodiscover.domain.com, mail.otherplace.com etc. This worked fine until an SSL domain validation cert for one of the additional domains - where the www.otherplace.com is hosted outside our org - expired. Now Exchange users in mail.otherplace.com get an expired cert warning for otherplace.com when connecting to our mail.domain.com portal. They still get mail, but with a repeated popup in Outlook 2007 and 2010. If I understand it correctly - Outlook autodiscover connects by first polling otherplace.com/autodiscover - which is the outside www server with the expired cert before continuing on to autodiscover.otherplace.com - which is where the MX record points to our in-house Exchange UCC. I'm trying to find out if we should: 1) turn down all mail functions on the outside webserver 2) delete the expired (useless for an informational site) cert on the outside webserver 3) renew the cert for otherplace.com on the outside webserver - or something completely different? Many thanks in advance for your thoughts.

    Read the article

  • Bruce Lee Software development.

    - by DesigningCode
    "Styles tend to not only separate men - because they have their own doctrines and then the doctrine became the gospel truth that you cannot change. But if you do not have a style, if you just say: Well, here I am as a human being, how can I express myself totally and completely? Now, that way you won't create a style, because style is a crystallization. That way, it's a process of continuing growth."- Bruce Lee This is kind of how I see software development. What I enjoyed in the early days of Agile was that things seemed very dynamic; people were working out all manner of ways of doing things. It was technique oriented, it was very fluid, and people were finding all kinds of good ways of doing things. Now when I look at the world of "Agile" it seems more crystallized. In fact that seemed to be a goal: to crystallize the goodness so everyone can share. I think that's mainly because it seems a heck of a lot easier to market. People are more willing to accept a well defined doctrine and drink the Kool Aid. It's more "corporate" or "professional". But the process of crystallizing the goodness actually makes it bad. Luckily, in the world of software development there are still many people who are more focused on "how can I express myself totally and completely". We are seeing expressive languages, expressive frameworks, tooling that helps you to better express yourself, design techniques that allow you to better express your intent. I love that stuff! So beware, be very cautious of anyone offering you new age wisdom based on crystals!

    Read the article

  • How do I manage the technical debate over WCF vs. Web API?

    - by Saeed Neamati
    I'm managing a team of like 15 developers now, and we are stuck at a point on choosing the technology, where the team is split into two completely opposite camps, debating over the usage of WCF vs. Web API. Team A, which supports usage of Web API, brings forward these reasons: Web API is just the modern way of writing services (Wikipedia); WCF is an overhead for HTTP - it's a solution for TCP, named pipes, and other protocols; WCF models are not POCO, because of the [DataContract] and [DataMember] attributes; SOAP is not as readable and handy as JSON; SOAP is an overhead for the network compared to JSON (transport over HTTP); and there is no method overloading. Team B, which supports the usage of WCF, says: WCF supports multiple protocols (via configuration); WCF supports distributed transactions; many good examples and success stories exist for WCF (while Web API is still young); and duplex is excellent for two-way communication. This debate is continuing, and I don't know what to do now. Personally, I think that we should use a tool only in its right place. In other words, we'd better use Web API if we want to expose a service over HTTP, but use WCF when it comes to TCP and duplex. Searching the Internet doesn't get us to a solid result: many posts exist supporting WCF, but on the contrary we also find people complaining about it. I know that the nature of this question might sound arguable, but we need some good hints to decide. We're stuck at a point where choosing a technology by chance might make us regret it later. We want to choose with open eyes. Our usage would be mostly for the web, and we would expose our services over HTTP. In some cases (say 5 to 10 percent) we might need distributed transactions, though. What should I do now? How do I manage this debate in a constructive way?
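
    To make the two styles concrete, here is a minimal, hypothetical C# sketch (the Order type, IOrderService contract and OrdersController names are made up, not from the question): the WCF side is an attributed contract whose transport is chosen in configuration, while the ASP.NET Web API side is a plain controller whose actions map to HTTP verbs and return POCOs serialized to JSON.

        using System.ServiceModel;
        using System.Web.Http;

        // Hypothetical DTO shared by both examples.
        public class Order
        {
            public int Id { get; set; }
            public decimal Total { get; set; }
        }

        // WCF: contract-first; the transport (HTTP, TCP, named pipes) is picked via binding configuration.
        [ServiceContract]
        public interface IOrderService
        {
            [OperationContract]
            Order GetOrder(int id);
        }

        // ASP.NET Web API: HTTP-only; GET /api/orders/5 routes to this action and returns JSON by default.
        public class OrdersController : ApiController
        {
            public Order Get(int id)
            {
                return new Order { Id = id, Total = 99.95m };
            }
        }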

    Read the article

  • What FTP clients securely handle FTP/TLS where the server has a self-signed cert?

    - by billpg
    I'm trying to connect to an FTP server that uses TLS on port 990. Unfortunately, the server uses a self-signed cert. What FTP clients for Windows handle this type of connection securely, such that I can securely verify the cert before continuing with the connection and logging in? (The server admin has supplied me with the expected certificate thumbprint to look for.) As an example of doing it wrongly, Core FTP LE 2.2 presents a dialog with basic information about the cert presented, inviting me to accept-once, accept-always or cancel. The dialog does not include the cert's hash/thumbprint, and without that thumbprint, I can't verify if the cert I'm being presented is the right one.
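
    If no off-the-shelf client turns out to do this properly, the pinning step itself is small enough to script. Below is a rough C# sketch (my own illustration, not a recommendation of a specific client) that pins the server certificate to the admin-supplied thumbprint via ServicePointManager.ServerCertificateValidationCallback. Note that FtpWebRequest only speaks explicit FTPS (AUTH TLS), not implicit TLS on port 990, and the host, credentials and thumbprint value are placeholders; the snippet just illustrates the verification step.

        using System;
        using System.Net;

        class FtpsThumbprintCheck
        {
            // Thumbprint supplied out-of-band by the server admin (placeholder value).
            const string ExpectedThumbprint = "0000000000000000000000000000000000000000";

            static void Main()
            {
                ServicePointManager.ServerCertificateValidationCallback =
                    (sender, certificate, chain, errors) =>
                    {
                        // A self-signed cert always fails chain validation, so pin on the SHA-1 thumbprint instead.
                        string actual = certificate.GetCertHashString();
                        return string.Equals(actual, ExpectedThumbprint, StringComparison.OrdinalIgnoreCase);
                    };

                var request = (FtpWebRequest)WebRequest.Create("ftp://ftp.example.com/");
                request.EnableSsl = true;   // explicit FTPS: AUTH TLS on the control channel
                request.Method = WebRequestMethods.Ftp.ListDirectory;
                request.Credentials = new NetworkCredential("user", "password");

                using (var response = (FtpWebResponse)request.GetResponse())
                {
                    Console.WriteLine(response.StatusDescription);
                }
            }
        }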

    Read the article

  • Focus on Identity Management at Oracle OpenWorld12

    - by Tanu Sood
    Heading to Oracle OpenWorld 2012? Then we have Identity Management and relevant sessions all mapped out for you to help you navigate Oracle OpenWorld. Do make use of Focus On Identity Management document online or if you’d like to have a copy handy, use the pdf version instead. In the meantime, here are the 3 must-attend Identity Management sessions for this year: Trends in Identity Management Monday, October 1, at 10:45 a.m., Moscone West L3, room 3003, (session ID# CON9405) Led by Amit Jasuja, this session focuses on how the latest release of Oracle Identity Management addresses emerging identity management requirements for mobile, social, and cloud computing. It also explores how existing Oracle Identity Management customers are simplifying implementations and reducing total cost of ownership. Mobile Access Management Tuesday, October 2, at 10:15 a.m., Moscone West L3, room 3022, (session ID# CON9437) There are now more than 5 billion mobile devices on the planet, including an increasing number of personal devices being used to access corporate data and applications. This session focuses on ways to extend your existing identity management infrastructure and policies to securely and seamlessly enable mobile user access. Evolving Identity Management Thursday, October 4, at 12:45 p.m., Moscone West L3, room 3008, (session ID# CON9640) Identity management requirements have evolved and are continuing to evolve as organizations seek to secure cloud and mobile access. This session explores emerging requirements and shares best practices for evolving your identity management implementation, including the value of a service-oriented, platform approach. For a complete listing of all identity management sessions, hands-on labs, and more, see Focus on Identity Management now. See you at OOW12. 

    Read the article

  • Does Test Driven Development (TDD) improve Quality and Correctness? (Part 1)

    - by David V. Corbin
    Since the dawn of the computer age, various methodologies have been introduced to improve quality and reduce cost. In this posting, I will be sharing my experiences with Test Driven Development; both its benefits and limitations. To start this topic, we need to agree on what TDD is. The first step is to define each of the three words as used in this context. Test - an item or action which measures something in some quantifiable form. Driven - the primary motivation or focus of a series of activities (process). Development - all phases of a software project/product from concept through delivery. The above are very simple definitions that result in the following: "TDD is a process where the primary focus is on measuring and quantifying all aspects of the creation of a (software) product." There are many places where TDD is used outside of software development, even though it is not known by this name. Consider the (conventional) education process that most of us grew up on. The focus was to get the best grades as measured by different tests. Many of these tests measured rote memorization and not understanding of the subject matter. The result of this is that many people graduated with high scores but without "quality and correctness" in their ability to utilize the subject matter (of course, the flip side is true where certain people DID understand the material but were not very good at taking this type of test). Returning to software development, let us look at some common scenarios. While these items are generally applicable regardless of platform, language and tools, the remainder of this post will utilize Microsoft Visual Studio and Team Foundation Server (TFS) for examples. It should be realized that everyone does at least some aspect of TDD. At the most rudimentary level, getting a program to compile involves a "pass/fail" measurement (is the syntax valid) that drives their ability to proceed further (run the program). Other developers may create "Unit Tests" in the belief that having a test for every method/property of a class and good code coverage is the goal of TDD. These items may be helpful and even important, but really only address a small aspect of the overall effort. To see TDD in a bigger view, let's identify the various activities that are part of the Software Development LifeCycle. These are going to be presented in a Waterfall style for simplicity, but each item also occurs within Iterative methodologies such as Agile/Scrum. The key ones here are: Requirements Gathering, Architecture, Design, Implementation, and Quality Assurance. Can each of these items be subjected to a process which establishes metrics (quantified metrics) that reflect both the quality and correctness of each item? It should be clear that conventional Unit Tests do not apply to all of these items; at best they can verify that a local aspect (e.g. a Class/Method) of the implementation matches the (test writer's perspective of) the appropriate design document. So what can we do? For each area, the goal is to create tests that are quantifiable and durable. The ability to quantify the measurements (beyond a simple pass/fail) is critical to tracking progress (eventually measuring the level of success that has been achieved) and for providing clear information on what items need to be addressed (along with the appropriate time to address them - in varying levels of detail). Durability is important so that the test can be reapplied (ideally in an automated fashion) over the entire cycle.
Returning for a moment back to our "education example", one must also be careful of how the tests are organized and how the measurements are taken. If a test is in a multiple choice format, there is a significant statistical probability that a correct answer might be the result of a random guess. Also, in many situations, having the student simply provide a final answer can obscure many important elements. For example, on a math test, having the student simply provide a numeric answer (rather than showing the methodology) may result in a complete mismatch between the process and the result. It is hard to determine which is worse: the student who makes a simple arithmetic error at one step of a long process (resulting in a wrong answer) or the student who (without providing the "workflow") uses a completely invalid approach, yet still comes up with the right number. The "Wrong Process"/"Right Answer" is probably the single biggest problem in software development. Even very simple items can suffer from this. As an example, consider the following code for a "straight line" calculation... Is it correct? (for Integral Points)

        int Solve(int m, int b, int x) { return m * x + b; }

Most people would respond "Yes". But let's take the question one step further... Is it correct for all possible values of m, b, x??? (no fair if you cheated by being focused on the bolded text!) Without additional information regarding constraints on "the possible values of m, b, x" the answer must be NO; there is the risk of overflow/wraparound that will produce an incorrect result! To properly answer this question (i.e. test the code), one MUST be able to backtrack from the implementation through the design and architecture all the way back to the requirements. And the requirement itself must be tested against the stakeholder(s). It is only when the bounding conditions are defined that it is possible to determine if the code is "Correct" and has "Quality". Yet, how many of us (myself included) have written such code without even thinking about it. In many cases we (think we) "know" what the bounds are, and that the code will be correct. As we all know, requirements change, "code reuse" causes implementations to be applied to different scenarios, etc. This leads directly to the types of system failures that plague so many projects. This approach to TDD is much more holistic than ones which start by focusing on the details. The fundamental concepts still apply: each item should be tested, and the test should be defined/implemented before (or concurrent with) the definition/implementation of the actual item. We also add concepts that expand the scope and alter the style by recognizing: there are many things beside "lines of code" that benefit from testing (measuring/evaluating in a formal way), and Correctness and Quality cannot be solely measured by "correct results". In future parts, we will examine in greater detail some of the techniques that can be applied to each of these areas....
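
    A small C# illustration of the Solve example (my own sketch, not from the article): for inputs inside the assumed bounds the unchecked version looks fine, but once m * x exceeds Int32 it silently wraps to a plausible-looking wrong answer, whereas a checked variant at least fails loudly - which is exactly the kind of bounding condition the requirement has to pin down before the code can be called correct.

        using System;

        class StraightLine
        {
            static int Solve(int m, int b, int x)
            {
                return m * x + b;               // unchecked by default: silently wraps on overflow
            }

            static int SolveChecked(int m, int b, int x)
            {
                checked { return m * x + b; }   // throws OverflowException instead of wrapping
            }

            static void Main()
            {
                Console.WriteLine(Solve(2, 1, 10));               // 21  - correct for small inputs
                Console.WriteLine(Solve(int.MaxValue, 0, 2));     // -2  - wrong answer, no error reported
                try
                {
                    SolveChecked(int.MaxValue, 0, 2);
                }
                catch (OverflowException)
                {
                    Console.WriteLine("Inputs exceed the bounds the requirement must define.");
                }
            }
        }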

    Read the article

  • XBRL US Conference Highlights

    - by john.orourke(at)oracle.com
    Back in early November I had an opportunity to attend the XBRL US National Conference in Philadelphia.  At the event, XBRL US announced that Oracle had joined the initiative, so I had a chance to participate in a press conference and attend a number of sessions.  Oracle joined XBRL US so we can stay ahead of the standard and leverage it in our products, and to help drive awareness with customers and improve adoption of XBRL. There were roughly 250 attendees at the event, about half of which were vendors and consultants and the rest financial reporting staff from corporate filers.  Event sponsors included Ernst & Young, SWIFT and Fujitsu.  There were also a number of XBRL technology and service providers exhibiting at the conference.  On Monday Nov. 8th, the XBRL US Steering Committee meetings and Annual Members meeting and reception were held.  At the Annual Members meeting the big news was that current XBRL US President, Mark Bolgiano, is moving to a new position at Howard Hughes Medical Center.  Campbell Pryde, who had led the Taxonomy Development for XBRL US, is taking over as XBRL US President. Other items that were highlighted at the members meeting included: The US GAAP XBRL taxonomy is being used by over 1500 SEC filers and has now been handed over to the FASB to maintain and enhance 16 filer training events were held in 2010 XBRL Global Magazine was launched Corporate Actions proposal was submitted to the SEC with SWIFT in May XBRL Labs for iPhone, XBRL US Consistency Suite launched ISO 2022 Corporate Actions Alignment with XBRL achieved The XBRL Credit Rating taxonomy was accepted Tuesday Nov. 9th included Keynotes, General Sessions, Innovation Workshop for Governments and Securities Professionals, and an Opening Reception.  General sessions included: Lessons Learned from the SEC's rollout of XBRL.  More than 18,000 errors were identified in reviews of filings between June 2009 and September 2010.  Most of these related to negative values being used where they shouldn't have.  Also, the SEC feels there are too many taxonomy extensions being created - mostly in the Cash Flow Statements.  They emphasize using existing elements in the US GAAP taxonomy and advise filers not to  create extensions to improve the visual formatting of XBRL filings. Investors and XBRL - Setting the Standard for Data Quality.  In this panel discussion, the key learning was that CFA's, academics and the financial community are not using XBRL as expected.  The issues raised include the  accuracy and completeness of filings, number of taxonomy extensions, and limited number of tools available to help analyze XBRL data.  Another big issue that was raised is the lack of historic results in XBRL - most analysts need 10 quarters of historic data.  On the positive side, XBRL has the potential to eliminate re-keying of data and errors here and can improve analytic capabilities for financial analysts once more historic data is available and more companies are providing detailed tagging of their filings. A US Roadmap for XBRL Financial Reporting.  This was a panel discussion featuring Jeff Neumann(SEC), Campbell Pryde(XBRL US), and Louis Matherne(FASB).  Key points included the fact that XBRL is currently used by 1500 companies, with 8000 more companies coming in 2011.  XBRL for Mutual Fund Reporting will start in 2011 for 8000 funds, and a Credit Rating Taxonomy has now been submitted for review.  The XBRL tagging/filing process is improving each quarter - more education is helping here.  
The FASB is looking at extensions to date, and potential additions to US GAAP taxonomy, while the SEC is evaluating filings for accuracy, consistency in tagging, and tools for analyzing data.  The big news is that the FASB 2011 US GAAP Taxonomy has been completed and reviewed by SEC.  The 2011 US GAAP Taxonomy supports new FASB accounting standards issued since 2009, has new taxonomy elements for certain industries (i.e airlines) and the elimination of 500 concepts.  (meaning they can't be used going forward but are still supported for historical comparison)  The 2011 US GAAP Taxonomy will be available for usage with Q2 2011 SEC filings.  More information about this can be found on the FASB web site.  http://www.fasb.org/home Accounting Firms and XBRL.  This session covered the Role of Audit Firms, which includes awareness and education, validation of XBRL filings, and in-house transition planning.  The main advice provided was that organizations should document XBRL mapping process, perform peer comparisons, and risk assessments on a regular basis. Wednesday Nov. 10th included more Keynotes, General Sessions on Corporate Actions, and XBRL Essentials Workshop Training for corporate filers.  The XBRL Essentials Training included: Getting Started Once you Have the Basics Detailed Footnote Tagging and Handling Tables Quality Control and Trust in the XBRL Process Bringing XBRL In-House:  What are the Options, What should you consider? The US GAAP Financial Reporting Taxonomy - Overview of the 2011 release The XBRL Essentials Training was well-attended with about 80 people.  This included a good overview of the SEC's XBRL mandate, limited liability issue, tagging levels, recommended planning process, internal vs. outsourced approach, and how to manage service providers.  I learned a lot from the session on detailed tagging.  This is the requirement that kicks in during a company's second year of XBRL filing with the SEC and applies to financial statements, footnotes and disclosures (it does not apply to MD&A, executive communications and other information).  The review of the Linkbase model, or dimensional table structure, was very interesting and can be complex to understand.  The key takeaway here is that using dimensional tables in XBRL filings can help limit the number of taxonomy extensions that are required.  The slides from this session are posted on the XBRL US web site. (http://xbrl.us/events/Pages/archive.aspx) For me, the main summary points and takeaways from the XBRL US conference are: XBRL for financial reporting has turned the corner and gone mainstream - with 1500 companies currently using it and 8000 more coming in 2011 The expected value is not being achieved by filers or consumers of XBRL data - this will improve when more companies are filing in XBRL, more history is available, and more software tools are available for analysis (hmm, sounds like an opportunity for Oracle) XBRL is becoming the global standard for all business communications beyond just the financials - i.e. adoption for mutual funds, corporate actions and others planned for the future If you would like to learn more about XBRL and the various training programs, services and software tools that are available check out the XBRL US web site and even better - become a member.  Here's a link:  http://xbrl.us/Pages/default.aspx

    Read the article

  • Security risk of JIRA standalone installation running JRE version 1.6.0_26 vs 1.6.0_29 (latest)

    - by kayaker243
    Atlassian recently introduced a standalone installer that installs JIRA, along with its own JRE. Unfortunately the JRE Atlassian bundles with this installer is 1.6.0_26, whereas the current version of the JRE is 1.6.0_29. This is potentially concerning given there were vulnerabilities in _26 that were fixed in the subsequent versions. We are currently using the bundled-installer version of JIRA and one contractor has recommended we ditch this for the system-installed JRE. My question is this: what is the actual security risk of continuing to use the _26 version of the JRE included in the bundled installer? There is no public access to our install of JIRA (only about 20 employees and contractors can login to our JIRA) and it's only accessible on a subdomain of a domain at which there's no publicly-available website. If there's a not insignificant risk inherent in sticking with the older JRE, why hasn't Atlassian upgraded the default JRE?

    Read the article

< Previous Page | 25 26 27 28 29 30 31 32 33 34 35 36  | Next Page >