Search Results

Search found 15798 results on 632 pages for 'authentication required'.


  • Where Next for Google Translate? And What of Information Quality?

    - by ultan o'broin
    Fascinating article in the UK Guardian newspaper called Can Google break the computer language barrier? In it, Andreas Zollman, who works on Google Translate, comments that the quality of Google Translate's output relative to the amount of data required to create that output is clearly now falling foul of the law of diminishing returns. He says: "Each doubling of the amount of translated data input led to about a 0.5% improvement in the quality of the output," he suggests, but the doublings are not infinite. "We are now at this limit where there isn't that much more data in the world that we can use," he admits. "So now it is much more important again to add on different approaches and rules-based models." The Translation Guy has a further discussion on this, called Google Translate is Finished. He says: "And there aren't that many doublings left, if any. I can't say how much text Google has assimilated into their machine translation databases, but it's been reported that they have scanned about 11% of all printed content ever published. So double that, and double it again, and once more, shoveling all that into the translation hopper, and pretty soon you get the sum of all human knowledge, which means a whopping 1.5% improvement in the quality of the engines when everything has been analyzed. That's what we've got to look forward to, at best, since Google spiders regularly surf the Web, which in its vastness dwarfs all previously published content. So to all intents and purposes, the statistical machine translation tools of Google are done. Outstanding job, Googlers. Thanks." Surprisingly, all this analysis hasn't raised that much comment from the fans of machine translation, or its detractors either for that matter. Perhaps, it's the season of goodwill? What is clear to me, however, of course is that Google Translate isn't really finished (in any sense of the word). I am sure Google will investigate and come up with new rule-based translation models to enhance what they have already and that will also scale effectively where others didn't. So too, will they harness human input, which really is the way to go to train MT in the quality direction. But that aside, what does it say about the quality of the data that is being used for statistical machine translation in the first place? From the Guardian article it's clear that a huge humanly translated corpus drove the gains for Google Translate and now what's left is the dregs of badly translated and poorly created source materials that just can't deliver quality translations. There's a message about information quality there, surely. In the enterprise applications space, where we have some control over content this whole debate reinforces the relationship between information quality at source and translation efficiency, regardless of the technology used to do the translation. But as more automation comes to the fore, that information quality is even more critical if you want anything approaching a scalable solution. This is important for user experience professionals. Issues like user generated content translation, multilingual personalization, and scalable language quality are central to a superior global UX; it's a competitive issue we cannot ignore.

    Read the article

  • Fusion Concepts: Fusion Database Schemas

    - by Vik Kumar
    You often read about the FUSION and FUSION_RUNTIME users while dealing with Fusion Applications. There is one more, called FUSION_DYNAMIC. Here are some details on the differences between these three and the purpose of each type of schema. FUSION can be considered the administrator of the Fusion Applications, with all the corresponding rights and powers, such as owning tables and objects and providing grants to FUSION_RUNTIME. It is used for patching and has grants to many internal DBMS functions. FUSION_RUNTIME is used to run the applications and contains no DB objects. FUSION_DYNAMIC owns the objects that are created dynamically through ADM_DDL, a package that acts as a wrapper around DDL statements and supports operations like truncate table, create index, etc. As the above indicates, FUSION owns the tables and objects, including the FND tables, so using FUSION to run applications is insecure: it would be possible to modify security policies and other key information in the base tables (like FND) and break Fusion Applications security via SQL injection, or to write a logon DB trigger and steal credentials. Thus, to keep Fusion Applications secure, FUSION_RUNTIME is granted privileges to execute DML only on APPS tables. Another benefit of having separate users is achieving Separation of Duties (SoD) at the schema level, which is required by auditors. Below are the roles and privileges assigned to the FUSION, FUSION_RUNTIME and FUSION_DYNAMIC schemas.
    FUSION has the following privileges: CREATE SESSION; all types of DDL on objects owned by FUSION; some specific privileges granted on other schemas; and EXECUTE on various EDN_PUBLISH_EVENT objects. Its roles are CTXAPP, for managing Oracle Text objects, and AQ_USER_ROLE and AQ_ADMINISTRATOR_ROLE, for managing Advanced Queues (AQ).
    FUSION_RUNTIME has the following privileges: CREATE SESSION; CHANGE NOTIFICATION; and EXECUTE on various EDN_PUBLISH_EVENT objects. Its roles are FUSION_APPS_READ_WRITE, for performing DML (Select, Insert, Delete) on Fusion Apps tables; FUSION_APPS_EXECUTE, for executing objects such as procedures, functions and packages; and AQ_USER_ROLE and AQ_ADMINISTRATOR_ROLE, for managing Advanced Queues (AQ).
    FUSION_DYNAMIC has the following privileges: CREATE SESSION, PROCEDURE, TABLE, SEQUENCE, SYNONYM and VIEW; UNLIMITED TABLESPACE; ANALYZE ANY; CREATE MINING MODEL; and EXECUTE on specific procedures, functions or packages and SELECT on specific tables. The last set depends on the objects identified by product teams that ADM_DDL needs to access in order to perform dynamic DDL statements. There is one more role, FUSION_APPS_READ_ONLY, which is not attached to any user and has only the SELECT privilege on all Fusion objects. FUSION_RUNTIME does not have any synonyms defined to access objects owned by the FUSION schema; instead, a logon trigger defined in FUSION_RUNTIME sets the current schema to FUSION and eliminates the need for synonyms.
    What does this mean for developers? Fusion Applications developers should use FUSION_RUNTIME for testing and running the Fusion Applications UI and BC, and for connecting from any SQL front end such as SQL*Plus, SQL*Loader, etc.
    When testing ADF BC with the AM tester while connected as FUSION_RUNTIME, you may hit the following error: oracle.jbo.JboException: JBO-29000: Unexpected exception caught: java.sql.SQLException, msg=invalid name pattern: FUSION.FND_TABLE_OF_VARCHAR2_255. The fix is to add the JVM parameter -Doracle.jdbc.createDescriptorUseCurrentSchemaForSchemaName=true to the Run/Debug client property in the Model project properties. More details are discussed in this forum thread.

    Read the article

  • SharePoint OCR image files indexing

    Introduction: This article describes how to set up indexing of image files (including TIFF, PDF, JPEG, BMP...) using OCR technology. The indexing described below utilizes Microsoft IFilter technology and as such is not specific to SharePoint; it can be used with any product that uses Microsoft indexing: Microsoft Search, Desktop Search, SQL Server search, and, through plug-ins, Google Desktop Search. I, however, use it with Microsoft Windows SharePoint Services 2003. For those other products, the registration may need to be slightly different.
    Background: One of the projects I was working on required storage of old documents scanned into PDF files. A separate team of people was then responsible for providing tags for a search engine so those image documents could be found. The whole process was clumsy, labor intensive, and error prone. That was what started me on my exploration path.
    OCR: The first search I fired was for open source OCR products. Pretty quickly, I narrowed it down to Tesseract (http://code.google.com/p/tesseract-ocr/). Tesseract is an orphaned brainchild of HP, which worked on it from 1985 to 1995. Then it was moved to open source, and now, if I understand it correctly, Google is working on it. With credentials like that, it's no wonder that Tesseract scores one of the highest marks on OCR recognition and accuracy. After downloading and struggling just a bit, I got Tesseract to work. The struggling part was that the home page claims its base input format is a TIFF file. Maybe my TIFFs were bad, but I was only able to get it to work with BMP files.
    Image file conversion: So now that I have an OCR engine that can convert BMP files into text, how do I get text out of the image PDF files? One more search, and I settled on ImageMagick (http://www.imagemagick.org/). This is another wonderful open source utility that can convert almost any file into an image. It worked out of the box, converting TIFF files into bitmaps, but to get PDF files converted it requires GhostScript (http://mirror.cs.wisc.edu/pub/mirrors/ghost/GPL/gs864/gs864w32.exe).
    Dealing with text PDFs: With that utility installed, I was cooking - I can convert any file (in particular PDF and TIFF) into a bitmap, and then I can extract the text out of the bitmap. The only consideration was to somehow treat PDFs that already contain text differently - after all, OCR is very computation intensive and somewhat error prone even with perfect image quality and resolution. So another quick search, and I have PDFTOTEXT (ftp://ftp.foolabs.com/pub/xpdf/xpdf-3.02pl4-win32.zip) - thank God for open source! With these guys, I can pull text out of a PDF in the blink of an eye. I would get nothing for pure image PDFs, but I already have a solution for that!
    Batch process: It took another 15 minutes to set up a batch script to automate the process: check the file extension; if the file is a PDF, try to extract text out of it - if there is more than a certain amount of text in the file, we are done; if there is no text, convert the first page into a bitmap and run OCR on it; for any other file type, convert the file into a bitmap and run OCR on it. Once you unzip the attached project, check out the bin\OCR.BAT file. It will create a temporary file in the directory where your source file is, with the same name plus the '.txt' extension.
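    The same flow can be sketched in code. Below is a minimal C# version of the OCR.BAT logic described above; the tool names (pdftotext, ImageMagick's convert, tesseract) come from the utilities mentioned in the article, but the exact executables on the PATH, the command-line switches and the "enough text" threshold are assumptions that would need adjusting to your installation.

        using System;
        using System.Diagnostics;
        using System.IO;

        static class OcrPipeline
        {
            // Launch an external tool and wait for it to finish.
            static void Run(string exe, string args)
            {
                var psi = new ProcessStartInfo(exe, args) { UseShellExecute = false, CreateNoWindow = true };
                using (var p = Process.Start(psi))
                {
                    p.WaitForExit();
                }
            }

            // Mirrors the OCR.BAT flow described above. Tool names come from the
            // utilities mentioned in the article; paths and switches are assumptions.
            public static string ExtractText(string sourceFile)
            {
                string txtFile = sourceFile + ".txt";
                string bmpFile = Path.ChangeExtension(sourceFile, ".bmp");

                if (string.Equals(Path.GetExtension(sourceFile), ".pdf", StringComparison.OrdinalIgnoreCase))
                {
                    // Cheap route first: pull any embedded text straight out of the PDF.
                    Run("pdftotext", "\"" + sourceFile + "\" \"" + txtFile + "\"");
                    if (File.Exists(txtFile) && new FileInfo(txtFile).Length > 100)
                        return File.ReadAllText(txtFile);

                    // Image-only PDF: render the first page to a bitmap (ImageMagick + GhostScript).
                    Run("convert", "\"" + sourceFile + "[0]\" \"" + bmpFile + "\"");
                }
                else
                {
                    // Any other image type: convert it straight to a bitmap.
                    Run("convert", "\"" + sourceFile + "\" \"" + bmpFile + "\"");
                }

                // Tesseract writes its output to <basename>.txt, matching the batch script.
                Run("tesseract", "\"" + bmpFile + "\" \"" + sourceFile + "\"");
                return File.Exists(txtFile) ? File.ReadAllText(txtFile) : string.Empty;
            }
        }

    A call such as OcrPipeline.ExtractText(@"C:\scans\old-document.pdf") (a made-up path) would then return whatever text the pipeline could recover, ready to hand to the indexer.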

    Read the article

  • Using NServiceBus behind a custom web service

    - by Michael Stephenson
    In this post I'd like to talk about an architecture scenario we had recently and how we were able to utilise NServiceBus to help us address the problem. Scenario: Cognos is a reporting system used by one of my clients. A while back we developed a web service façade to allow line-of-business applications to access reports from Cognos to support their various functions. The service was intended to provide access to quick-running or pre-generated reports which could be accessed in real time, on demand. One of the key aims of the web service was to provide a simple, generic interface so applications could get any report without needing to worry about the complex .NET SDK for Cognos. The web service also supported multi-hop Kerberos delegation so that report data could be accessed under the context of the end user. This service worked well for a period of time. The problem: the problem we encountered was that reports were now also required to be available to batch processes. The original design was optimised for low latency so users would enjoy a positive experience; however, when the batch processes started to request 250+ concurrent reports over an extended period of time you can begin to imagine the sorts of problems that come into play. The key problems this new scenario caused were: users could be affected, as the latency of on-demand reports became significantly worse; the Cognos infrastructure was not scaled sufficiently to cope with these long peaks of load; and from a cost perspective it just isn't feasible to scale the Cognos infrastructure to handle load that only occurs for a couple-of-hour window each night. We really needed to introduce a second pattern for accessing this service which would support high-throughput scenarios. We also had little control over the batch process in terms of being able to throttle its load; we could, however, make some changes to the way it accessed the reports. The approach: my idea was to introduce a throttling mechanism between the web service façade and Cognos. This would allow the batch processes to push report requests hard at the web service, which we were confident the web service could handle. The web service would then queue these requests, process them behind the scenes and make a call back to the batch application to provide the report once it had been produced. In terms of technology we had some limitations because we were not able to use WCF or IIS7, where the MSMQ-activated WCF services could have helped, but we did have MSMQ as an option and I thought NServiceBus could do just the job to help us here.
The flow of how this would work was as follows: the batch application sends a request for a report to the web service; the web service uses NServiceBus to send the message to a queue; the NServiceBus generic host runs as a Windows service with a message handler which subscribes to these messages; the message handler gets the message and accesses the report from Cognos; the message handler then calls back to the original batch application - this is decoupled because the calling application provides a callback URL; finally, the report arrives at the batch application and is processed as normal. This approach is illustrated in the diagram in the original post. The key point is that an application wanting to take advantage of the batch-driven reports needs to do the following: implement our callback contract; make a call to the service providing a callback URL; and provide a correlation ID so it knows how to tie each response back to its request. A sketch of the handler side of this flow is shown below. What does NServiceBus offer in this solution? This scenario is not the typical messaging service-bus type of solution people implement with NServiceBus, but it did offer the following: simplified interaction with MSMQ; the ability to configure the number of processes working through the queue, so we could find a balance between load on Cognos and the applications' end-to-end processing time; retries and a way to manage failed messages; and a high-availability setup. The simple thing is that NServiceBus gave us the platform to build the solution on. We just implemented a message handler which functionally processed a message, and we could rely on NServiceBus to do all of the hard work around managing the queues and all of the lower-level things that would have taken ages to write to any kind of robust level. Conclusion: with this approach we were able to deal with a fairly significant performance issue without too much rework. Hopefully this write-up gives people some insight into ideas on how to leverage the excellent NServiceBus framework to help solve integration and high-throughput scenarios.
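    To make the handler side of this flow concrete, here is a minimal C# sketch. Only the IHandleMessages<T> interface (with the synchronous Handle method used by NServiceBus versions of that era) is the real extension point; the message contract, the CognosGateway stub and the correlation header name are assumptions invented for the example.

        using System.Net;
        using NServiceBus;

        // Hypothetical message contract sent by the web service facade.
        public class ReportRequestMessage : IMessage
        {
            public string ReportName { get; set; }
            public string CorrelationId { get; set; }
            public string CallbackUrl { get; set; }
        }

        // Stand-in for the real Cognos SDK call; the actual implementation is not shown here.
        public static class CognosGateway
        {
            public static byte[] RunReport(string reportName)
            {
                return new byte[0];
            }
        }

        // Runs inside the NServiceBus generic host. The number of worker threads
        // configured for the endpoint is what throttles the load placed on Cognos.
        public class ReportRequestHandler : IHandleMessages<ReportRequestMessage>
        {
            public void Handle(ReportRequestMessage message)
            {
                byte[] report = CognosGateway.RunReport(message.ReportName);

                // Call back to the originating batch application with the result,
                // tagging it with the correlation id so the caller can match it up.
                using (var client = new WebClient())
                {
                    client.Headers.Add("X-Correlation-Id", message.CorrelationId);
                    client.UploadData(message.CallbackUrl, report);
                }
            }
        }

    The throttling described in the article then comes from how many worker processes or threads the endpoint is configured to run, not from the handler itself.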

    Read the article

  • SharePoint 2010 release date - is it that important?

    - by CharlesLee
    There has been lots of excitement in the SharePoint community over the last few days as Microsoft have announced the official release date of SharePoint 2010. May 12th is the date for your diaries (RTM in April.) The twittersphere has been telling everyone for the last few days about this news and there is much excitement. The major conferences this year all seem to have a SharePoint 2010 focus and some are entirely focussed on the new product (e.g. SharePoint Evolution Conference.)  Now by all accounts Microsoft have plugged some significant functionality gaps that exist in WSS 3.0 and MOSS 2007 and provided some exciting new functionality.  You don't need me to tell you about these as the MVPs (and other community members) are doing a sterling job, after all that is why Microsoft has MVPs in the first place. Lets get real for a second though as there is a significant investment involved in moving to SharePoint 2010:  Firstly you need 64 bit architecture across the board, now for some environments that is no inconsequential hurdle, that's a pretty significant roadblock.   The development farm, test farm and UAT farm are all going to require the same infrastructure upgrades. To take advantage of the tooling for SP2010 you will need to upgrade to Visual Studio 2010 and your development team is going to require 64 bit hardware/OS too.  I would not recommend installing SP 2010 in client installation mode (i.e. for Windows 7) on your developer machines, I would use this for demo machines only. Something that lots of people seem to forget in all their whooping and hollering about the new release is that there is a large amount of end user training going to be required as the browser UI has now adopted the omnipotent ribbon interface and there are other new and more complicated features. SharePoint Designer has also entirely changed in both look and feel and some significant feature changes have taken place. Lest we should forget that some companies have not long upgraded to MOSS 2007 and are yet to see a significant ROI for that project. And the reticence that most companies feel about implementing v1 Microsoft products.  This is only the surface of the deeper issues which would be involved in any upgrade process, so I guess I share a small part of the concern voiced by Mark Miller of EndUserSharePoint.com.  Is SharePoint 2010 relevant? I don't share this sentiment in its entirety as I firmly believe that all companies should be looking at SharePoint 2010 from day one, however most large scale existing implementations of MOSS 2007 are going to be several years away from a serious upgrade project.  So should the conference organisers and the SharePoint community as a whole be a little more understanding of the real world issues?  It's easy to get carried away in the excitement of a new product and new tools to play with but there needs to be a focus on the real world issues that most people are facing day to day and at the moment and for the short term future (at the very least the next 12 months) that is fairly and squarely in the WSS 2.0/3.0 and SPS 2003/MOSS 2007 camps. Don't get me wrong, I am very very excited about getting to grips with SharePoint 2010 in the real world and I cannot wait for my first real project to come along, but for now I am just being realistic about the reality for most people who work with SharePoint. 
I have been spending a lot of time on www.sharepointoverflow.com recently as there is a community of people building up who are committed to answering the real world questions that folks are dealing with every day.  I urge you to take a look and either ask or answer some questions direct from the front line of the SharePoint world.

    Read the article

  • Redirect all access requests to a domain and subdomain(s) except from specific IP address? [closed]

    - by Christopher
    This is a self-answered question... After much wrangling I found the magic combination of mod_rewrite rules, so I'm posting it here. My scenario is that I have two domains - domain1.com and domain2.com - both of which are currently serving identical content (by way of a global 301 redirect from domain1 to domain2). Domain1 was then chosen to be repurposed as a 'portal' domain - with a corporate CMS-based site leading off from the front page, and the existing 'retail' domain (domain2) left to serve the main web site. In addition, a staging subdomain was created on domain1 in order to prepare the new corporate site without impinging on the root domain's existing operation. I contemplated just rewriting all requests to domain2 and setting up the new corporate site 'behind the scenes' without using a staging domain, but I usually use subdomains when setting up new sites. Finally, I required access to the 'actual' contents of the domains and subdomains - i.e., to not be redirected like all other visitors - in order that I can develop the new site and test it in the staging environment on the live server, as I'm not using a separate development webserver in this case. I also have another test subdomain on domain1 which needed to be preserved. The way I eventually set it up was as follows (10.2.2.1 would be my home WAN IP):
    .htaccess in root of domain1: RewriteEngine On RewriteCond %{REMOTE_ADDR} !^10\.2\.2\.1 RewriteCond %{HTTP_HOST} !^staging.domain1.com$ [NC] RewriteCond %{HTTP_HOST} !^staging2.domain1.com$ [NC] RewriteRule ^(.*)$ http://domain2.com/$1 [R=301]
    .htaccess in staging subdomain on domain1: RewriteEngine On RewriteCond %{REMOTE_ADDR} !^10\.2\.2\.1 RewriteCond %{HTTP_HOST} ^staging.domain1.com$ [NC] RewriteRule ^(.*)$ http://domain2.com/$1 [R=301,L]
    The multiple .htaccess files and multiple rulesets require more processing overhead and longer iteration, as the visitor is potentially redirected twice; however, I find it to be a more granular method of control, as I can selectively allow more than one IP address access to individual staging subdomain(s) without automatically granting them access to everything else. It also keeps the rulesets fairly simple and easy to read (or re-interpret, because I'm always forgetting how I put rules together!). If anybody can suggest a more efficient way of merging all these rules and conditions into just one main ruleset in the root of domain1, please post! I'm always keen to learn; this post is more my attempt to preserve this information for those who are looking to redirect entire domains for all visitors except themselves (for design/testing purposes) and not just denying specific file access for maintenance mode (there are many good examples of simple mod_rewrite rules for 'maintenance mode' style operation easily findable via Google). You can also extend the IP address detection - firstly by using wildcards, e.g. ^10\.2\.2\..* : the last octet's \..* denotes the usual "." and then "zero or more arbitrary characters", signified by the .* - so you can specify ranges of IPs in a subnet, or entire subnets if you wish. You can also use square brackets, bearing in mind that a character class matches a single character, so multi-digit octet ranges have to be spelled out digit by digit: ^10\.2\.[1-9]?[0-9]\. (third octet 0-99); ^10\.2\.1[0-1][0-9]\. (third octet 100-119); etc. The third way, if you wish to specify multiple discrete IP addresses, is to bracket them in the style of ^(1.1.1.1|2.2.2.2|3.3.3.3)$, and you can of course use square brackets to substitute octets or single digits again.
NB: if you're using individual RewriteCond lines to specify multiple IPs/ranges, make sure to put [OR] at the end of each one; otherwise mod_rewrite will interpret them as "if IP address matches 1.1.1.1 AND if IP address matches 2.2.2.2...", which is of course impossible! However, as far as I'm aware this isn't necessary if you're using the ! negator to specify "and is not...". Kudos also to SE: this older question came in useful when I was verifying my own knowledge prior to my futzing around with code. This page was helpful, as were the various other links posted below (can't hyperlink them all due to spam protection... other regex checkers are available). The AddedBytes cheat sheet is useful to pin up on your wall. Other referenced URLs: internetofficer.com/seo-tool/regex-tester/ fantomaster.com/faarticles/rewritingurls.txt addedbytes.com/cheat-sheets/mod_rewrite-cheat-sheet/
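    If you want to sanity-check patterns like these before committing them to a live .htaccess, any regex engine that supports the same basic constructs will do. The following C# snippet tests the sample patterns discussed above against a made-up visitor address; mod_rewrite uses PCRE rather than the .NET engine, but these simple constructs behave the same in both.

        using System;
        using System.Text.RegularExpressions;

        class RewriteCondCheck
        {
            static void Main()
            {
                // Patterns from the discussion above: one exact address (anchored at
                // both ends), a whole 10.2.2.x range, and a third octet restricted to 0-99.
                string[] patterns =
                {
                    @"^10\.2\.2\.1$",
                    @"^10\.2\.2\..*",
                    @"^10\.2\.[1-9]?[0-9]\."
                };

                string visitor = "10.2.2.57"; // made-up test address

                foreach (string pattern in patterns)
                {
                    Console.WriteLine("{0} matches {1}: {2}",
                        pattern, visitor, Regex.IsMatch(visitor, pattern));
                }
            }
        }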

    Read the article

  • Oracle Database 12c Training and Certification: What’s in it for Me?

    - by KJones
    Oracle Database 12c has officially launched! Through Oracle University, our expert instructors can introduce you to the features and functions of this new Oracle Database 12c product. Through training courses and certification exam prep seminars, you can build up your database knowledge and apply this knowledge to advance your career. Already an Oracle Database Expert? Why Oracle Database 12c Training is still a Good Idea Oracle is the industry leader for database technology and takes the release of new products very seriously. We continue to listen to customer needs and add features and functionality to address those needs. Oracle Database 12c is no exception. The following areas have been greatly enhanced and should be considered for your additional training or certification: • Database for Cloud Computing • Compression and Information Lifecycle Management (ILM) • Improved Performance & Scalability • Extreme Availability • Security Defense in Depth • Manageability Oracle Certified Database Administrators Reap Career Rewards Becoming an expert user of database technology through Oracle University's certification program widens your skill set to demonstrate your expertise implementing the most advanced database technology available. By doing so, you'll make yourself a more marketable employee and candidate in the job market.  Reasons to Become an Oracle Certified Database Administrator of Oracle Database 12c: • The new Oracle Database 12c certifications emphasize more advanced skills that align with industry standards, resulting in an even more valuable credential for customers and partners. • The Oracle Certified Associate (OCA) for Oracle Database 12c centers upon certification objectives that measure IT professionals' day-to-day skills, along with your ability to manage challenges. • Building upon all of the competencies incorporated into Oracle's Database 12c OCA certification, the Oracle Certified Professional (OCP) for Oracle Database 12c certification includes advanced knowledge and skills required of top-performing database administrators. • The Oracle Certified Master (OCM) for Oracle Database 12c - a very challenging and elite top-level certification - certifies the most highly skilled and experienced database experts. • Oracle offers 12c upgrade paths for existing Oracle Certified Professionals (OCP) and Oracle Certified Masters (OCM). Database 12c Training and Certification: Built with Your Input When creating Oracle Database 12c training courses and certifications, we wanted to know which tasks are most important in a DBA's day-to-day work. Instead of assuming what those tasks might be, we decided to develop a job task analysis survey for DBAs. The response rate from DBAs from around the world was overwhelming! The survey focused on the following key database areas: • DBA Core Essentials • Database Storage • High Availability • Scalability • Networking • Security • Very Large Database Administration • Distributed Databases After conducting this survey, we took this specific feedback and used it to help mold the new Oracle Database 12c training and certification curriculum. The benefit to you? You now have access to Oracle Database 12c courses and certification exams that were created with your specific on-the-job tasks in mind. Explore Oracle Database 12c Training & Certification Today Investing in Oracle Database 12c training courses and certifications will help you develop a great deal of knowledge, experience and expertise. 
Explore our portfolio of offerings to determine which skills you need as a DBA to get up to speed on Oracle Database 12c technology. Questions or comments about the new Oracle Database 12c offerings? Let us know in the comments below. - Diana Gray, Principal Curriculum Product Manager, and Raza Siddiqui, Senior Principal Curriculum Product Manager

    Read the article

  • Value of SOA Specialization interview with Thomas Schaller IPT - part III

    - by Jürgen Kress
    Recognized by Oracle, Preferred by Customers. We had the great opportunity to interview Thomas Schaller, Partner at our SOA Specialized partner IPT Innovation Process Technology from Switzerland. Why did IPT decide to become SOA Specialized? "SOA Specialization is a great branding for IPT. We are the SOA specialists in the Swiss market, as we focus all our services around SOA. With 65 Swiss consultants focused on SOA Security, SOA Testing, BPM (Business Process Management) and BSM (Business Service Modeling), the partnership with Oracle as the technology leader in SOA is key, so it was important to us to become the first SOA Specialized company in Switzerland. As a result, IPT is mentioned by Gartner as one of eight European SOA consulting firms and is included in the 'Guide to SOA Consulting and System Integration Service Providers'." Can you describe the marketing activities with Oracle? "Once a year we organize the largest SOA conference in Switzerland, the 'SOA, BPM & Integration Forum 2011'. Oracle is much more than a sponsor for the conference: jointly we invite our customer base to attend this key event, and the sales teams jointly address their most important prospects and customers. Oracle supports us with key speakers who present future directions of the Oracle SOA portfolio, like Clemens Utschig-Utschig, who presented details about the Complex Event Processing (CEP) solution in 2009, and James Allerton-Austin, who presented details about the social BPM solution in 2010. Additionally, our key customers presented their Oracle SOA success stories." How did you team with Oracle around the sales activities? "Sales alignment is key for a successful partnership. When we achieved SOA Specialization we celebrated jointly with the Oracle and IPT middleware sales teams. Over an Aperol, many interesting discussions resulted in joint opportunities and business. A key section of our joint business planning is marketing and sales activities: together we define campaign topics and target customers. Matthias Breitschmid, our superb Oracle partner manager, ensures that the defined sales teams align and start the joint business. Regularly we review our joint business plan with the joint management teams and Jürgen Kress, our EMEA Oracle sponsor. It is great to see that both companies profit from each other and that we receive leads from Oracle!" Did you get Oracle support to train your consultants in the Oracle SOA Suite? "Enablement is key for us to deliver successful SOA projects. Together with Ralph Bellinghausen from the Oracle enablement team we defined an Oracle training plan for our consultants. The monthly SOA Partner Community newsletter is a great resource for the latest product updates, webcasts and trainings. As a SOA Specialized partner we also get invited to the SOA Blackbelt trainings; these trainings are hosted by Oracle product management, where we not only get first-hand information but also direct access to the developers, who can support us in critical project phases. Driven by customer success, we have increased our Oracle SOA practice by more than 200% in the last years!" Why did the customer decide for the IPT SOA offering? "SOA Specialization has become a brand for customers; it proves that we have certified SOA skills and that IPT has delivered successful Oracle SOA projects.
Jointly with Oracle, and with all the support we get from marketing, sales, enablement, support and product management, we can ensure that our customers deliver their SOA projects successfully!" What are the next steps for IPT? "SOA Specialization is super beneficial for IPT. We are looking forward to our upcoming SOA, BPM & Integration Forum 2011 and are preparing to become BPM Specialized." Part I: Torsten Winterberg, Opitz Consulting & Part II: Debra Lilley, Fujitsu. For more information on SOA Specialization and the SOA Partner Community please feel free to register at www.oracle.com/goto/emea/soa (OPN account required) Blog Twitter LinkedIn Mix Forum Wiki Website

    Read the article

  • Calling Web Services in classic ASP

    - by cabhilash
    The other day my colleague asked me to provide her with a solution to call a web service from classic ASP. (Yes, classic ASP - people are still using it :D ) We could also call the web service using the SOAP toolkit, but invoking the service using the XMLHTTP object is easier and faster. To create the service I used a normal .NET 2.0 web service with a [WebMethod]:
    public class WebService1 : System.Web.Services.WebService
    {
        [WebMethod]
        public string HelloWorld(string name)
        {
            return name + " Pay my dues :) "; // a reminder to pay my consultation fee :D
        }
    }
    In Web.config, add the following entry under system.web: <webServices><protocols><add name="HttpGet"/><add name="HttpPost"/></protocols></webServices>
    Alternatively, you can enable these protocols for all web services on the computer by editing the <protocols> section in Machine.config. The following example enables HTTP GET, HTTP POST, and also SOAP and HTTP POST from localhost: <protocols> <add name="HttpSoap"/> <add name="HttpPost"/> <add name="HttpGet"/> <add name="HttpPostLocalhost"/> <!-- Documentation enables the documentation/test pages --> <add name="Documentation"/> </protocols>
    By adding these entries I am enabling HTTP GET and HTTP POST (after .NET 1.1 these are disabled by default because of security concerns). The .NET Framework 1.1 defines a new protocol named HttpPostLocalhost, which is enabled by default. This protocol permits invoking web services that use HTTP POST requests from applications on the same computer, provided the POST URL uses http://localhost, not http://hostname. This permits web service developers to use the HTML-based test form to invoke the web service from the same computer where the web service resides.
    Classic ASP code to call the web service:
    <%
    Option Explicit
    Dim objRequest, objXMLDoc, objXmlNode
    Dim strRet, strError, strNome
    Dim strName
    strName= "deepa"
    Set objRequest = Server.createobject("MSXML2.XMLHTTP")
    With objRequest
        .open "GET", "http://localhost:3106/WebService1.asmx/HelloWorld?name=" & strName, False
        .setRequestHeader "Content-Type", "text/xml"
        .setRequestHeader "SOAPAction", "http://localhost:3106/WebService1.asmx/HelloWorld"
        .send
    End With
    Set objXMLDoc = Server.createobject("MSXML2.DOMDocument")
    objXmlDoc.async = false
    Response.ContentType = "text/xml"
    Response.Write(objRequest.ResponseText)
    %>
    First I create an MSXML2.XMLHTTP object. Using the HTTP GET protocol I open a connection to the web service, set the Content-Type and SOAPAction headers, and send the request, passing the parameter strName on the query string; you can pass multiple parameters just like any other query string parameters. I then create an MSXML2.DOMDocument object and write the web service's ResponseText back to the caller as XML. In a similar fashion you can invoke the web service using HTTP POST; you only have to ensure that the form contains all the required parameters for the web method. Happy coding!

    Read the article

  • Tough Decisions

    - by Johnm
    There was once a thriving business that employed two Database Administrators, Sam and Jim. Both DBAs were certified, educated and highly talented in their skill sets. During lunch breaks these two DBAs were often found together discussing best practices, troubleshooting techniques and the latest release notes for the upcoming version of SQL Server. They genuinely loved what they did. The maintenance of the first database was the responsibility of Sam. He was the architect of this server's setup and he was very meticulous in its configuration. He regularly monitored the health of the database, validated backup files and adhered to the best practices that were advocated by well respected professionals. He was very proud of the fact that there was never a database that he managed that lost data or performed poorly. The maintenance of the second database was the responsibility of Jim. He too was the architect of this server's setup. At the time that he built this server, his understanding of the finer details of configuration was not as clear as it is today. The server was built on a shoestring budget and with very little time for testing and implementation. Jim often monitored the health of the database, but in more of a reactionary mode, due to user complaints of slowness or failed transactions. Deadlocks abounded and the backup files were never validated. One day, an announcement revealed that the business had hit financially hard times. Budgets were being cut, limitations on spending were implemented and a reduction in full-time staff was required. Since having two DBAs was regarded as a luxury by many, this meant that either Sam or Jim was about to find himself out of a job. Sam and Jim's boss, Frank, was faced with a very tough decision. Sam's performance was flawless. His techniques and practices were perfection. The databases he managed were reliable and efficient. His solutions are "by the book". When given a task it is certain that, while it may take a little longer, it will be done right the first time. Jim's techniques and practices were not perfect, but effective and responsive. He makes mistakes regularly, but he shows that he learns from them, and they often result in innovative solutions. When given a task it is certain that, while the results may require some tweaking, it will be done on time and under budget. You are Frank's best friend. He approaches you and presents this scenario. He must lay off one of his valued DBAs the very next morning. Frank asks you: "All else being equal, who would you let go, and why?" Another pertinent question is raised: "Regardless of good times or bad, if you had to choose, which DBA would you want on your team when tough challenges arise?" Your response is... (This is where you enter a comment below.)

    Read the article

  • Big Data: Size isn’t everything

    - by Simon Elliston Ball
    Big Data has a big problem; it’s the word “Big”. These days, a quick Google search will uncover terabytes of negative opinion about the futility of relying on huge volumes of data to produce magical, meaningful insight. There are also many clichéd but correct assertions about the difficulties of correlation versus causation, in massive data sets. In reading some of these pieces, I begin to understand how climatologists must feel when people complain ironically about “global warming” during snowfall. Big Data has a name problem. There is a lot more to it than size. Shape, Speed, and…err…Veracity are also key elements (now I understand why Gartner and the gang went with V’s instead of S’s). The need to handle data of different shapes (Variety) is not new. Data developers have always had to mold strange-shaped data into our reporting systems, integrating with semi-structured sources, and even straying into full-text searching. However, what we lacked was an easy way to add semi-structured and unstructured data to our arsenal. New “Big Data” tools such as MongoDB, and other NoSQL (Not Only SQL) databases, or a graph database like Neo4J, fill this gap. Still, to many, they simply introduce noise to the clean signal that is their sensibly normalized data structures. What about speed (Velocity)? It’s not just high frequency trading that generates data faster than a single system can handle. Many other applications need to make trade-offs that traditional databases won’t, in order to cope with high data insert speeds, or to extract quickly the required information from data streams. Unfortunately, many people equate Big Data with the Hadoop platform, whose batch driven queries and job processing queues have little to do with “velocity”. StreamInsight, Esper and Tibco BusinessEvents are examples of Big Data tools designed to handle high-velocity data streams. Again, the name doesn’t do the discipline of Big Data any favors. Ultimately, though, does analyzing fast moving data produce insights as useful as the ones we get through a more considered approach, enabled by traditional BI? Finally, we have Veracity and Value. In many ways, these additions to the classic Volume, Velocity and Variety trio acknowledge the criticism that without high-quality data and genuinely valuable outputs then data, big or otherwise, is worthless. As a discipline, Big Data has recognized this, and data quality and cleaning tools are starting to appear to support it. Rather than simply decrying the irrelevance of Volume, we need as a profession to focus how to improve Veracity and Value. Perhaps we should just declare the ‘Big’ silent, embrace these new data tools and help develop better practices for their use, just as we did the good old RDBMS? What does Big Data mean to you? Which V gives your business the most pain, or the most value? Do you see these new tools as a useful addition to the BI toolbox, or are they just enabling a dangerous trend to find ghosts in the noise?

    Read the article

  • Running ODI 11gR1 Standalone Agent as a Windows Service

    - by fx.nicolas
    ODI 11gR1 introduces the capability to use OPMN to start and protect agent processes as services. Setting up the OPMN agent is covered in the following post and extensively in the ODI Installation Guide. Unfortunately, OPMN is not installed along with ODI, and ODI 10g users who are really at ease with the old Java Wrapper are a little bit puzzled by OPMN and ask: "How can I simply set up the agent as a service?". Well... although the Tanuki Service Wrapper is no longer available for free, and the agentservice.bat script is gone, you can switch to another service wrapper for the same result. For example, Yet Another Java Service Wrapper (YAJSW) is a good candidate. To configure a standalone agent with YAJSW:
    - Download YAJSW and uncompress the zip to a folder (called %YAJSW% in this example).
    - Configure, start and test your standalone agent. Make sure that this agent is loaded with all the required libraries and drivers, as the service will not dynamically load drivers added to the /drivers directory later.
    - Retrieve the PID of the agent process: open Task Manager, select View > Select Columns, tick the PID (Process Identifier) column and click OK. In the list of processes, find the java.exe process corresponding to your agent and note its PID.
    - Open a command line prompt in %YAJSW%/bat and run: genConfig.bat <your_pid>. This command generates a wrapper configuration file for the agent, called %YAJSW%/conf/wrapper.conf.
    - Stop your agent.
    - Edit the wrapper.conf file and modify the configuration of your service; for example, modify the display name and description of the service as shown in the example below. Important: make sure to escape the commas in the ODI encoded passwords with a backslash! In the example below, ODI_SUPERVISOR_ENCODED_PASS contained a comma character which had to be prefixed with a backslash.
    # Title to use when running as a console
    wrapper.console.title=\"AGENT\"
    #********************************************************************
    # Wrapper Windows Service and Posix Daemon Properties
    #********************************************************************
    # Name of the service
    wrapper.ntservice.name=AGENT_113
    # Display name of the service
    wrapper.ntservice.displayname=ODI Agent
    # Description of the service
    wrapper.ntservice.description=Oracle Data Integrator Agent 11gR3 (11.1.1.3.0)
    ...
    # Escape the comma in the password with a backslash.
    wrapper.app.parameter.7 = -ODI_SUPERVISOR_ENCODED_PASS=fJya.vR5kvNcu9TtV\,jVZEt
    - Execute your wrapped agent as a console application by running runConsole.bat from the command line prompt. Check that your agent is running, and test it again. This command starts the agent with the configuration but does not yet install it as a service.
    - To install the agent as a service, call installService.bat. From that point, you can view, start and stop the agent via the Windows services console. Et voilà!
    Two final notes: - To modify the agent configuration, you must uninstall and reinstall the service. For this purpose, run uninstallService.bat to uninstall it and repeat the process above. - To be able to uninstall the agent service, you should keep a backup of the wrapper.conf file. This is particularly important when starting several services with the wrapper.

    Read the article

  • Start/Stop Window Service from ASP.NET page

    - by kaushalparik27
    Last week I needed to complete a task which I am going to blog about in this entry. The task: "Create a control-panel-like web page to control (start/stop) the Windows services which are part of my solution, installed on the computer where the main application is hosted". Here are the important points to accomplish this:
    [1] You need to add a reference to System.ServiceProcess in your application. This namespace holds the ServiceController class used to access Windows services.
    [2] You need to check the status of a Windows service before you explicitly start or stop it.
    [3] By default, an IIS application runs under the ASP.NET account, which doesn't have permission to access Windows services, so a very important part of the solution is impersonation. You need to impersonate the application, or the relevant part of the code, with the credentials of a user that has the proper rights and permissions to access the Windows service; if you try to access the service without this, it will generate an "access denied" error. The alternatives are: you can impersonate the whole application by adding an identity tag in web.config, under the system.web section, such as <identity impersonate="true" userName="" password=""/>, where "userName" and "password" are the credentials of the user with rights to access the Windows service. But this would not be a wise solution, because you should not impersonate the whole website just to access a Windows service (which is going to be a small part of the code). The second alternative is to impersonate only the part of the code where you need to access the Windows service to start or stop it. I opted for this one. But, to be fair, I was really unfamiliar with the impersonation code, so I just googled it and added it to my solution in a separate class file named "Impersonate" with the required static methods. In the Impersonate class, impersonateValidUser() is the method that impersonates a section of code and undoImpersonation() is the method that undoes the impersonation. You need to provide the domain name (which is "." if you are working on your home computer), and the username and password of the appropriate user to impersonate.
    [4] It is very important to note that you need to store the access credentials (username and password) that you are going to use for impersonation in some secured and encrypted format. I have used machineKey encryption to store the encrypted value inside the database.
    [5] Now the real part: starting or stopping a Windows service. You are almost done, because the ServiceController class has simple Start() and Stop() methods, and a parameterized constructor that takes the name of the service as a parameter. The code to start and stop the service is sketched below. Isn't that easy! ServiceController made it easy :) I have attached a working example with this post to start/stop the "SQLBrowser" service, where you need to provide proper credentials with permission to access the Windows service. Hope it helps.
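    The code screenshots from the original post are not reproduced here, so the following is a minimal C# sketch of the start/stop calls the article describes. The 30-second timeouts are an assumption, and the impersonation wrapper is only referenced in the comments because its implementation comes from the separate Impersonate class mentioned above.

        using System;
        using System.ServiceProcess;

        public static class ServiceManager
        {
            // Starts the named Windows service if it is not already running.
            // Call this between Impersonate.impersonateValidUser(...) and
            // Impersonate.undoImpersonation() as described above, so that the
            // code runs under an account with rights on the service.
            public static void StartService(string serviceName)
            {
                using (var controller = new ServiceController(serviceName))
                {
                    if (controller.Status != ServiceControllerStatus.Running)
                    {
                        controller.Start();
                        controller.WaitForStatus(ServiceControllerStatus.Running, TimeSpan.FromSeconds(30));
                    }
                }
            }

            // Stops the named Windows service if it reports that it can be stopped.
            public static void StopService(string serviceName)
            {
                using (var controller = new ServiceController(serviceName))
                {
                    if (controller.CanStop)
                    {
                        controller.Stop();
                        controller.WaitForStatus(ServiceControllerStatus.Stopped, TimeSpan.FromSeconds(30));
                    }
                }
            }
        }

    A call such as ServiceManager.StartService("SQLBrowser"), made from the impersonated section of the page, then starts the service used in the attached example.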

    Read the article

  • Welcome to the SOA & E2.0 Partner Community Forum

    - by Jürgen Kress
    With more than 200 registrations the SOA & E2.0 Partner Community Forum is a huge success!   Conference program Is available online: http://tinyurl.com/soaforumagenda Agenda Tuesday March 15th 2011 12:15 Welcome & Introduction – Hans Blaas & Jürgen Kress, Oracle 12:30 Oracle Middleware Strategy and Information on Application Grid and Exalogic - Andrew Sutherland, Oracle 13:15 Managing Online Customer, Partner and Employee Engagement Oracle E2.0 Solutions - Andrew Gilboy, Oracle 14:00 Coffee Break 14:30 Partner SOA/ BPM Reference Case – Leon Smiers, Capgemini 15:15 Partner WebCenter/ UCM Reference Case – Vikram Setia, Infomentum 16.00 Break 16.30 SOA and BPM 11gR1 PS3 Update – David Shaffer 17:00 Why specialization is important for Partners – Nick Kritikos, Hans Blaas & Jürgen Kress 17:45 Social Event   Wednesday March 16th 2011 09.00 Welcome & Introduction Day II 09.15 Breakout sessions Round 1 SOA Suite 11g PS3 & OSB Importance of ADF & Jdeveloper SOA Security IDM WebCenter PS3, Whats New E2.0 Sales Plays 10.30 Break 10.45 Breakout sessions Round 2 WebCenter PS3, Whats New Applications Management Enterprise Manager and Amberpoint ADF/WebCenter 11g integration with BPM Suite 11g Importance of ADF & Jdeveloper JCAPS & OC4J migration opportunities for service business 12.00 Lunch 13.00 Breakout sessions Round 3 BPM 11g, Whats New Universal Content Management! 11g SOA Security IDM E2.0 Surrounding Products: ATG, Documaker, Primavera Middleware Industry Value Propositions & Sales Plays 14.30 Break 14.45 Fusion Applications, Rajan Krishnan, Oracle 15.30 SOA & E2.0 Summary & Closing, Hans Blaas & Jürgen Kress, Oracle 15.45 Finish & Departure 16:00 Bus departure   Capgemini Nederland BV Papendorpseweg 100 3500 GN Utrecht The Netherlands Tel: +31 30 689 00 00 For a detailed routedescription by car or public transport please visit: http://www.nl.capgemini.com/pdf/Papendorp_UK.pdf Hotel In case you have not booked your hotel yet, please make your own hotel reservation. You can book your hotel room at the 'Hotel Vianen' at a special rate, by using the Oracle booking code: DDG VIA-GF41422. One night package € 110,- for a single room, including breakfast. Kindly secure your hotel room as soon as possible. The number of rooms is limited! Hotel Vianen Prins Bernhardstraat 75 4132 XE Vianen [email protected] The Netherlands [email protected] Arrival on 14th of March and staying at Hotel Vianen. On 15th of March we have arranged a transfer from Hotel Vianen to the Capgemini Offices. The bus is parked in front of the hotel and will leave at 10.15AM (UTC/GMT+1). Logistics Pass with barcode At your arrival you will receive a pass with a barcode. This pass will give you access to the conference building and the different floors within the building. Please make sure to hand in your pass at the registration desk at the end of the day. Arrival by plane Transfer from Schiphol Airport to Capgemini on 15th of March will be arranged by Oracle. A hostess will be welcoming you at the Meeting Point at Schiphol Airport (this is a red and white large cubicle situated next to Delifrance) The buses will depart from Schiphol Airport at 09.00AM, 09.45AM and 10.30AM (UTC/GMT+1).     For future SOA Partner Community Forums  become a member for registration please visit www.oracle.com/goto/emea/soa (OPN account required) Blog Twitter LinkedIn Mix Forum Wiki Website Technorati Tags: SOA Partner Community Forum,Community,SOA Partner Community,Utrecht 03.2011,OPN,Oracle,Jürgen Kress

    Read the article

  • Dude, what’s up with POP Forums vNext?

    - by Jeff
    Yeah, it has been awhile. I posted v9.2 back in January, about five months ago. That’s a real change from the release pace I had there for awhile. Let me explain what’s going on. First off, in the interim, I re-launched CoasterBuzz, which required a lot of my attention for about two of those months. That’s a good thing though, because that site is just about the best test bed I could ask for. The other thing is that I committed to make the next version use ASP.NET MVC 4, which is now at the RC stage. I didn’t think much about when they’d hit their RTW point, but RC is good enough for me. To that end, there is enough change in the next version that I recently decided to make it a major version upgrade, and finish up the loose ends and science projects to make it whole. Here’s what’s in store… Mobile views: I sat on this or a long time. Originally, I was going to use jQuery Mobile, and waited and waited for a new release, but in the end, decided against using it. Sometimes buttons would unexplainably not work, I felt like I was fighting it at times, and the CSS just felt too heavy. I rolled my own mobile sugar at a fraction of the size, and I think you’ll find it easy to modify. And it’s Metro-y, of course! Re-do of background services: A number of things run in the background, and I did quite a bit of “reimagining” of that code. It’s the weirdness of running services in a Web site context, because so many folks can’t run a bona fide service on their host’s box. The biggest change here is that these service no longer start up by default. You’ll need to call a new method from global.asax called PopForumsActivation.StartServices(). This is also a precursor to running the app in a Web farm (new data layer and caching is the second part of that). I learned about this the hard way when I had three apps using the forum library code but only one was actually the forum. The services were all running three times as often with race conditions and hits on the same data. That was particularly bad for e-mail. CSS clean up: It’s still not ideal, but it’s getting better. That’s one of those things that comes with integrating to a real site… you discover all of the dumb things you did. The mobile CSS is particularly easier to live with. Bug fixes: There are a whole lot of them. Most were minor, but it’s feeling pretty solid now. So that’s where I am. I’m going to call it v10.0, and I’m going to really put forth some effort toward finishing the mobile experience and getting through the remaining bugs. The roadmap beyond that will likely not be feature oriented, but rather work on some other things, like making it run in Azure, perhaps using SQL CE, a better install experience, etc. As usual, I’ll post the latest here. Stay tuned!

    Read the article

  • Type Casting variables in PHP: Is there a practical example?

    - by Stephen
    PHP, as most of us know, has weak typing. For those who don't, PHP.net says: "PHP does not require (or support) explicit type definition in variable declaration; a variable's type is determined by the context in which the variable is used." Love it or hate it, PHP re-casts variables on the fly. So, the following code is valid:
    $var = "10";
    $value = 10 + $var;
    var_dump($value); // int(20)
    PHP also allows you to explicitly cast a variable, like so:
    $var = "10";
    $value = 10 + $var;
    $value = (string)$value;
    var_dump($value); // string(2) "20"
    That's all cool... but, for the life of me, I cannot conceive of a practical reason for doing this. I don't have a problem with strong typing in languages that support it, like Java. That's fine, and I completely understand it. Also, I'm aware of, and fully understand the usefulness of, type hinting in function parameters. The problem I have with type casting is explained by the above quote. If PHP can swap types at will, it can do so even after you force cast a type; and it can do so on the fly when you need a certain type in an operation. That makes the following valid:
    $var = "10";
    $value = (int)$var;
    $value = $value . ' TaDa!';
    var_dump($value); // string(8) "10 TaDa!"
    So what's the point? Can anyone show me a practical application or example of type casting, one that would fail if type casting were not involved? I ask this here instead of SO because I figure practicality is too subjective.
    Edit in response to Chris' comment: take this theoretical example of a world where user-defined type casting makes sense in PHP: you force cast variable $foo as int -- (int)$foo. You attempt to store a string value in the variable $foo. PHP throws an exception!! <--- That would make sense. Suddenly the reason for user-defined type casting exists! The fact that PHP will switch things around as needed makes the point of user-defined type casting vague. For example, the following two code samples are equivalent:
    // example 1
    $foo = 0;
    $foo = (string)$foo;
    $foo = '# of Reasons for the programmer to type cast $foo as a string: ' . $foo;
    // example 2
    $foo = 0;
    $foo = (int)$foo;
    $foo = '# of Reasons for the programmer to type cast $foo as a string: ' . $foo;
    UPDATE: Guess who found himself using typecasting in a practical environment? Yours truly. The requirement was to display money values on a website for a restaurant menu. The design of the site required that trailing zeros be trimmed, so that the display looked something like the following:
    Menu Item 1 .............. $ 4
    Menu Item 2 .............. $ 7.5
    Menu Item 3 .............. $ 3
    The best way I found to do that was to cast the variable as a float:
    $price = '7.50'; // a string from the database layer.
    echo 'Menu Item 2 .............. $ ' . (float)$price;
    PHP trims the float's trailing zeros, and then recasts the float as a string for concatenation.

    Read the article

  • CRMIT’s HIGH VALUE CRM++ PLUGINS FOR CRM On DEMAND

    - by Soumo Das
    Customer satisfaction and experience being the two most important factors, businesses these days are on the lookout for automation tools that are world class, agile and keep quality at their core. CRMIT has developed such tools using cutting-edge technologies, abstracting industry best practices and R&D.

    Self Service Portal: With customers being so meticulous about regular updates and reliable access to their data, administrators just cannot afford to walk a thin line. Surviving without a resource that tracks customer requirements for services, available 24 x 7, can severely affect productivity. In such a scenario, CRMIT’s Self Service Portal (SSP) is the best solution. This not only tracks the required customer data, but also allows companies to stay in tune with their employees, vendors and stakeholders. One can directly sign up to become a CRMOD contact and SSP user. One need not use the database, as operations and interactions are performed at run time. This is a fully configurable solution that tracks results periodically, thus making it easy for end users. It also offers better security and data visibility that enables users to progress smoothly.

    Quote and Order Management: When dealing with quotes, contracts and orders becomes complicated, Quote & Order Management can work as a one-stop solution. CRMIT offers this tool for managing all of that information and for taking care of customer orders and service requirements. This CRM On Demand plug-in allows one to create a new quote or copy an existing one. Products can be added directly from the product list of CRMOD and the pricing is calculated automatically. A quote can be generated and mailed to external users in PDF, HTML and XLS formats. This not only allows quotes to be managed in an enhanced manner, but also supports various billing and tax calculation features that make work effortless.

    Report Scheduler: When it comes to analyzing and providing statistics on the various business processes currently running in an organization, one cannot depend on manual updates, which may sometimes be inaccurate or delayed. CRMIT provides a SaaS-based solution, Report Scheduler, that allows CRM users to schedule reports at the desired frequency and then receive them as email attachments at the scheduled time. With this tool, administrators can control the report scheduler to assign specific reports to specific users. After that, users can log in and schedule any assigned report for viewing at particular intervals on a monthly, weekly or daily basis. Additionally, users can copy the mail to external users and choose their preferred format. The best part is that sharing business data with third parties becomes easy, and users need not log into their CRMOD account to view reports.

    CRM On Demand Offline Solution: CRM On Demand Offline is another CRM++ extension that allows one to work in both online and offline modes, and synchronizing the two modes is easy. CRM OD Offline works as an automation tool that not only improves efficiency, but also serves as a backup in most cases. It is readily available as a Windows application installer and requires users to be online only while validating and synchronizing.

    Read the article

  • Get Oracle Linux Certified at Much Reduced Price

    - by Antoinette O'Sullivan
    You have already heard the great news that you can now prove your knowledge of Oracle Linux 5 and 6 with the new Oracle Certified Associate, Oracle Linux 5 and 6 System Administrator exam. Until December 21st, 2013, this exam is in its beta phase, so you can get a fully-fledged certification at a much reduced price; for example, $50 in the United States or 39 euros in the euro zone.

    Establishing What You Need to Know: Your first step is to click on the Exam Topics tab on the certification page. You will see a list of topics that you will be tested on during the certification exam. These are the areas in which you need to improve your knowledge, if you are not already an expert.

    Registering For a Certification Exam: On the certification page, click on Register for this Exam. The Pearson VUE site guides you through signing up for an event at a date and location to suit you.

    Preparing to Take an Exam: On the certification page, click on the Exam Preparation tab. This indicates the recommended training that can help you prepare to sit the exam. The recommended training for this certification is the Oracle Linux System Administration course. You can take this very popular 5-day live instructor-led course as a:

    - Live Virtual Event: Take the training from your own desk, no travel required. Choose from a selection of events already on the schedule to suit different timezones.
    - In-Class: Travel to an education center to take this class. Below is a selection of events already on the schedule.

    Location | Date | Delivery Language
    Brussels, Belgium | 18 November 2013 | English
    London, England | 16 December 2013 | English
    Manchester, England | 27 January 2014 | English
    Reading, England | 12 May 2014 | English
    Milan, Italy | 31 March 2014 | Italian
    Rome, Italy | 10 February 2014 | Italian
    Utrecht, Netherlands | 18 November 2013 | Dutch
    Warsaw, Poland | 9 December 2013 | Polish
    Bucharest, Romania | 20 January 2014 | Romanian
    Ankara, Turkey | 12 January 2014 | Turkish
    Istanbul, Turkey | 16 December 2013 | Turkish
    Panjim, India | 4 November 2013 | English
    Jakarta, Indonesia | 9 December 2013 | English
    Kuala Lumpur, Malaysia | 25 November 2013 | English
    Makati City, Philippines | 11 November 2013 | English
    Singapore | 25 November 2013 | English
    Bangkok, Thailand | 11 November 2013 | English
    Casablanca, Morocco | 16 December 2013 | English
    Muscat, Oman | 2 March 2014 | English
    Johannesburg, South Africa | 17 February 2014 | English
    Tunis, Tunisia | 31 March 2014 | French
    Canberra, Australia | 25 November 2013 | English
    Melbourne, Australia | 19 May 2014 | English
    Sydney, Australia | 20 January 2014 | English
    Mississauga, Canada | 24 February 2014 | English
    Ottawa, Canada | 28 April 2014 | English
    Belmont, CA, United States | 10 February 2014 | English
    Irvine, CA, United States | 12 May 2014 | English
    San Francisco, CA, United States | 18 November 2013 | English
    Chicago, IL, United States | 14 April 2014 | English
    Cambridge, MA, United States | 18 November 2013 | English
    Roseville, MA, United States | 2 December 2013 | English
    Edison, NJ, United States | 10 March 2014 | English
    Pittsburg, PA, United States | 9 December 2013 | English
    Reston, VA, United States | 13 January 2014 | English

    For more information on the Oracle Linux curriculum, go to http://oracle.com/education/linux.

    Read the article

  • Kanban Tools Review

    - by GeekAgilistMercenary
    The first two sessions on Sunday were on collaboration and why it is so hard, and the one that followed, a perfect pairing, was on Kanban. In that second session, two online SaaS-style tools were mentioned: AgileZen and LeanKit. I decided right then and there that I would throw together some first impressions, so I set up an account with each and created some sample projects.

    AgileZen: Account Creation. Setting up the initial account required an e-mail verification, which is understandable. Within a few seconds it was mailed out and I was logged in.

    Setting Up the Kanban Board. The initial setup of the board was pretty easy. I maybe clicked around an extra few times, but overall everything I needed to use the tool was immediately available. The representation of everything was very similar to what one expects of a real Kanban Board, too. This is a HUGE plus, especially if a team is smart and places this tool in a centrally viewable area to allow for visibility. Each of the board items is just like a Post-it, being blue, grey, green, pink, or one of a few other colors. Dragging them onto each swim lane on the board was flawless, making changes as work moves through the process super easy and intuitive. The other thing I really liked about AgileZen is that the Kanban Board had the swim lanes set up immediately. One can change them, but when you know you immediately need a Ready Lane, Working Lane, and a Complete Lane, it is nice to just have them right in front of you in the interface. In addition, the Backlog is simply a little tab on the left-hand side. This is perfect for the Backlog Queue: out of the way, with the focus on the primary items. Once I got the items onto the board, I was easily able to get back to the actual work at hand versus playing around with the tool. The fact that it was so easy to use, with a fast and easy UX and overall a great layout, put me back to work on things I needed to do versus sitting and playing with the tool. That, in the end, is the key to using these tools.

    LeanKit Kanban: Account Creation. Setting up the account got me straight into the online tool. This I thought was pretty cool.

    Setting Up the Kanban Board. Setting up the Kanban Board within LeanKit was a bit of trouble. There were multiple UX issues in regard to process and intuitiveness. LeanKit basically forces one to design the whole board first, making no assumptions about how the board should look. The swim lanes, in my humble opinion, should be set up immediately, without any manipulation, with the most common lanes: ready, working, and complete. The other UX hiccup I had a problem with is that as soon as I managed to get the swim lanes into place, I wanted to remove the redundant Backlog Lane. The Backlog Lane, or Backlog Bucket, was something I had accidentally added as a lane. Then, on top of that, I screwed up and added an item inside the lane, which then prevented me from deleting the lane. I had to go back out of the lane manipulation, remove the item, and then remove the excess lane.

    Summary. LeanKit wasn't a bad interface, it just wasn't as good as AgileZen. The AgileZen interface was just better UX design overall. AgileZen also presents a much better graphical design altogether. It is much closer to what the Kanban Board would look like if it were a physical Kanban Board.
Since one of the HUGE reasons for Kanban is to increase visibility, the fact that the design is similar to what a real Kanban Board looks like is actually a pretty big deal. This is an image (click for larger) that shows the two Kanban Boards side by side. The one on the left is AgileZen and the one on the right is LeanKit. Original Entry

    Read the article

  • Windows Azure Use Case: Agility

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

    Description: Agility in this context is defined as the ability to quickly develop and deploy an application. In theory, the speed at which your organization can develop and deploy an application on available hardware is identical to what you could deploy in a distributed environment. But in practice, this is not always the case. Having an option to use a distributed environment can be much faster for the deployment and even the development process.

    Implementation: When an organization designs code, they are essentially becoming a Software-as-a-Service (SaaS) provider to their own organization. To do that, the IT operations team becomes the Infrastructure-as-a-Service (IaaS) to the development teams. From there, the software is developed and deployed using an Application Lifecycle Management (ALM) process. A simplified view of an ALM process is as follows:

    - Requirements
    - Analysis
    - Design and Development
    - Implementation
    - Testing
    - Deployment to Production
    - Maintenance

    In an on-premise environment, this often equates to the following process map:

    - Requirements: Business requirements formed by Business Analysts, Developers and Data Professionals.
    - Analysis: Feasibility studies, including physical plant, security, manpower and other resources. Request is placed on the work task list if approved.
    - Design and Development: Code written according to organization’s chosen methodology, either on-premise or to multiple development teams on and off premise.
    - Implementation: Code checked into main branch. Code forked as needed.
    - Testing: Code deployed to on-premise Testing servers. If no server capacity available, more resources procured through standard budgeting and ordering processes. Manual and automated functional, load, security, etc. performed.
    - Deployment to Production: Server team involved to select platform and environments with available capacity. If no server capacity available, standard budgeting and procurement process followed. If no server capacity available, systems built, configured and put under standard organizational IT control. Systems configured for proper operating systems, patches, security and virus scans. System maintenance, HA/DR, backups and recovery plans configured and put into place.
    - Maintenance: Code changes evaluated and altered according to need.

    In a distributed computing environment like Windows Azure, the process maps a bit differently:

    - Requirements: Business requirements formed by Business Analysts, Developers and Data Professionals.
    - Analysis: Feasibility studies, including budget, security, manpower and other resources. Request is placed on the work task list if approved.
    - Design and Development: Code written according to organization’s chosen methodology, either on-premise or to multiple development teams on and off premise.
    - Implementation: Code checked into main branch. Code forked as needed.
    - Testing: Code deployed to Azure. Manual and automated functional, load, security, etc. performed.
    - Deployment to Production: Code deployed to Azure. Point-in-time backup and recovery plans configured and put into place. (HA/DR and automated backups already present in Azure fabric)
    - Maintenance: Code changes evaluated and altered according to need.

    This means that several steps can be removed or expedited.
It also means that the business function requesting the application can be held directly responsible for the funding of that request, speeding the process further, since the IT budgeting process may not be involved in the Azure scenario. An additional benefit is the “Azure Marketplace”. In effect, this becomes an app store for enterprises to select pre-defined code and data applications to mesh or bolt in to their current code, possibly saving development time.

    Resources:
    - Whitepaper download - What is ALM? http://go.microsoft.com/?linkid=9743693
    - Whitepaper download - ALM and Business Strategy: http://go.microsoft.com/?linkid=9743690
    - LiveMeeting Recording on ALM and Windows Azure (registration required, but free): http://www.microsoft.com/uk/msdn/visualstudio/contact-us.aspx?sbj=Developing with Windows Azure (ALM perspective) - 10:00-11:00 - 19th Jan 2011

    Read the article

  • The New Social Developer Community: a Q&A

    - by Mike Stiles
    In our last blog, we introduced the opportunities that lie ahead for social developers as social applications reach across every aspect and function of the enterprise. Leading the upcoming JavaOne Social Developer Program October 2 at the San Francisco Hilton is Roland Smart, VP of Social Marketing at Oracle. I got to ask Roland a few of the questions an existing or budding social developer might want to know as social extends beyond interacting with friends and marketing and into the enterprise.

    Why is it smart for developers to specialize as social developers? What opportunities lie in the immediate future that make this a critical, in-demand position?

    Social has changed the way we interact with brands and with each other across the web. As we acclimate to a new social paradigm, we also look to extend its benefits into new areas of our lives. The workplace is a logical next step, and we're starting to see social interactions more and more in this context. But unlocking the value of social interactions requires technical expertise and knowledge of developing social apps that tap into the social graph. Developers focused on integrating social experiences into enterprise applications must be familiar with popular social APIs and must understand how to build enterprise social graphs of their own. These developers are part of an emerging community of social developers and are key to socially enabling the enterprise. Facebook rebranded their Preferred Developer Consultant Group (PDC) and the Preferred Marketing Developers (PMD) to underscore the fact that developers are required inside marketing organizations to unlock the full potential of their platform. While this trend is starting on the marketing side with marketing developers, this is just an extension of the social developer concept that will ultimately drive social across the enterprise.

    What are some of the various ways social will be making its way into every area of enterprise organizations? How will it be utilized and what kinds of applications are going to be needed to facilitate and maximize these changes?

    Check out Oracle’s vision for the social-enabled enterprise. It’s a high-level overview of how social will impact across the enterprise. For example:

    - HR can leverage social in recruiting and retention
    - Sales can leverage social as a prospecting tool
    - Marketing can use social to gain market insight
    - Customer support can use social to leverage community support to improve customer satisfaction while reducing service cost
    - Operations can leverage social to improve systems

    That’s only the beginning. Once sleeves get rolled up and social developers and innovators get to work, still more social functions will no doubt emerge.

    What makes Java one of, if not the most viable platform on which to build these new enterprise social applications?

    Java is certainly one of the best platforms on which to build social experiences because there’s such a large existing community of Java developers. This means you can affordably recruit talent, and it's possible to effectively solicit advice from the community through various means, including our new Social Developer Community. Beyond that, there are already some great proof points that Java is the best platform for creating social experiences at scale. Consider LinkedIn and Twitter.

    Tell us more about the benefits of collaboration and more about what the Oracle Social Developer Community is.
What opportunities does that offer up and what are some of the ways developers can actively participate in and benefit from that community? Much has been written about the overall benefits of collaborating with other developers. Those include an opportunity to introduce yourself to the community of social developers, foster a reputation, establish an expertise, contribute to the advancement of the space, get feedback, experiment with the latest concepts, and gain inspiration. In short, collaboration is a tool that must be applied properly within a framework to get the most value out of it. The OSDC is a place where social developers can congregate to discuss the opportunities and challenges of building social integrations into their applications. What “needs” will this community have? We don't know yet. But we wanted to create a forum where we can engage and understand what social developers are thinking about, excited about, struggling with, etc. The OSDC can then step in if we can help remove barriers and add value in a serious and committed way, so Oracle can help drive practice development.

    Read the article

  • C# 4.0: COM Interop Improvements

    - by Paulo Morgado
    Dynamic resolution as well as named and optional arguments greatly improve the experience of interoperating with COM APIs such as the Office Automation Primary Interop Assemblies (PIAs). But, in order to ease COM interop development even further, a few COM-specific features were also added to C# 4.0.

    Omitting ref: Because of a different programming model, many COM APIs contain a lot of reference parameters. These parameters are typically not meant to mutate a passed-in argument, but are simply another way of passing value parameters. Specifically for COM methods, the compiler allows you to declare the method call passing the arguments by value, and will automatically generate the necessary temporary variables to hold the values in order to pass them by reference, discarding their values after the call returns. From the point of view of the programmer, the arguments are being passed by value. This method call: object fileName = "Test.docx"; object missing = Missing.Value; document.SaveAs(ref fileName, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing); can now be written like this: document.SaveAs("Test.docx", Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value, Missing.Value); And because all parameters that are receiving the Missing.Value value have that value as their default value, the declaration of the method call can even be reduced to this: document.SaveAs("Test.docx");

    Dynamic Import: Many COM methods accept and return variant types, which are represented in the PIAs as object. In the vast majority of cases, a programmer calling these methods already knows the static type of a returned object from the context of the call, but has to explicitly perform a cast on the returned values to make use of that knowledge. These casts are so common that they constitute a major nuisance. To make the developer’s life easier, it is now possible to import the COM APIs in such a way that variants are instead represented using the type dynamic, which means that COM signatures now have occurrences of dynamic instead of object. This means that members of a returned object can now be easily accessed or assigned into a strongly typed variable without having to cast. Instead of this code: ((Excel.Range)(excel.Cells[1, 1])).Value2 = "Hello World!"; this code can now be used: excel.Cells[1, 1] = "Hello World!"; And instead of this: Excel.Range range = (Excel.Range)(excel.Cells[1, 1]); this can be used: Excel.Range range = excel.Cells[1, 1];

    Indexed And Default Properties: A few COM interface features are still not available in C#. At the top of the list are indexed properties and default properties. As mentioned above, these will be possible if the COM interface is accessed dynamically, but they will not be recognized by statically typed C# code.

    No PIAs – Type Equivalence And Type Embedding: For assemblies identified with PrimaryInteropAssemblyAttribute, the compiler will create equivalent types (interfaces, structs, enumerations and delegates) and embed them in the generated assembly. To reduce the final size of the generated assembly, only the used types and their used members will be generated and embedded.
Although this makes development and deployment of applications using the COM components easier because there’s no need to deploy the PIAs, COM component developers are still required to build the PIAs.
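    To see a few of these features together, here is a small sketch against the Excel PIA; it assumes a project reference to Microsoft.Office.Interop.Excel with interop types embedded, and is meant as an illustration rather than production automation code.

    ```csharp
    // A minimal sketch combining optional COM parameters and dynamic import against the Excel PIA.
    // Assumes a reference to Microsoft.Office.Interop.Excel with "Embed Interop Types" enabled.
    using System;
    using Excel = Microsoft.Office.Interop.Excel;

    class ComInteropSketch
    {
        static void Main()
        {
            var excel = new Excel.Application();
            excel.Visible = true;

            // Optional parameters: no Missing.Value noise when adding a workbook.
            excel.Workbooks.Add();

            // Dynamic import: Cells[1, 1] comes back as dynamic, so no cast is needed
            // to assign a value or to store the range in a strongly typed variable.
            excel.Cells[1, 1] = "Hello World!";
            Excel.Range range = excel.Cells[1, 1];

            Console.WriteLine(range.Value2);
        }
    }
    ```

    The same calls without C# 4.0 would need explicit casts and a Missing.Value argument for every optional parameter, as shown in the excerpt above.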

    Read the article

  • BizTalk Server Monitoring &ndash; SharePoint Web Part

    - by SURESH GIRIRAJAN
    I have worked with customers using BizTalk as shared infrastructure in the enterprise, where two or more BizTalk apps run on it for different business groups. These customers are not using the BizTalk ESB portal, even though they are using the BizTalk ESB exception framework. The main issue for all these business groups is that they don’t have visibility into the BizTalk apps running in production, even though they have SCOM and other monitoring in place. So I am trying to address a few issues, listed below, along with how I mitigate them: how to get visibility into production, how to provision access to the BizTalk resources with minimal effort, and how we can take advantage of the resources we have today. I was working on creating REST data services for BizTalk RFID a year ago, which are available on CodePlex. I thought I would extend that idea to take advantage of the BizTalk Data Services available on CodePlex. I extended the BizTalk data services and will upload the updated service soon. Let me walk through how my solution works. As a first step, I am using the BizTalk data service (a REST service), which exposes most of the BizTalk artifacts as resources, such as Applications, Orchestrations, Send ports, Receive ports, Host instances, In-process instances, etc. (Screenshot in the original post: BizTalk Server Monitoring – SharePoint Web Part.) I am hosting the BizTalk data service in IIS, with the application pool configured to run under BizTalk administrator credentials. With this setup, I make the service accessible anonymously. As the next step of this solution, I created a SharePoint Visual Web Part which consumes the BizTalk data service and displays all the BizTalk application and platform settings in read-only mode, even though the BizTalk data services also offer browsing resources as well as performing actions like starting and stopping Orchestrations, Send ports, Receive locations, Host instances, etc. (Screenshots in the original post: Host Instances, BizTalk Applications, BizTalk Running / Suspended Instances.) This BizTalk Monitoring SharePoint web part will be added to SharePoint, which eliminates the need to grant access to BizTalk users explicitly: when a BizTalk contractor or BizTalk application user needs access to the BizTalk environment, all they need is access to the SharePoint website. You can configure the web part to point to a different endpoint based on your environment. I am keeping this read-only to make things easier for the users and for provisioning. This removes the dependency on a BizTalk admin, at least for viewing BizTalk application status, errors, etc. If we need to make any changes to the BizTalk application, then it is the application owner’s responsibility to coordinate with the BizTalk admins. There are options like the BizTalk ESB portal, BizTalk 360, etc., but this is one approach to reduce the number of steps required to give access to BizTalk application users and to maximize the resources we have in the enterprise today. You can also expose this data service through Azure Service Bus and access it from other apps like mobile devices, or create a web site hosted in Azure, etc. One last thing: I have tested this only with BizTalk Server 2010 on an x64 VM, but it should work on other versions. I will try to upload the code shortly with instructions on how to set it up. I welcome thoughts and suggestions. Hope this helps.
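    For a rough picture of the consuming side, here is a minimal sketch of a read-only web part that pulls JSON from the data service and renders it; the service URL, resource path, and JSON field names are placeholders assumed for illustration, not the actual BizTalk Data Services contract.

    ```csharp
    // A hedged sketch of a read-only SharePoint web part consuming the BizTalk REST data service.
    // The endpoint URL, resource path, and JSON shape are assumptions, not the real service contract.
    using System;
    using System.Collections.Generic;
    using System.Net;
    using System.Web.Script.Serialization;
    using System.Web.UI.WebControls;
    using System.Web.UI.WebControls.WebParts;

    public class BizTalkMonitorWebPart : WebPart
    {
        // Hypothetical endpoint; point this at wherever the data service is hosted in IIS.
        private const string ServiceUrl = "http://biztalk-monitor/BizTalkDataServices/Applications";

        protected override void CreateChildControls()
        {
            try
            {
                string json;
                using (var client = new WebClient())
                {
                    json = client.DownloadString(ServiceUrl);
                }

                // Assumed shape: an array of objects, each with "Name" and "Status" properties.
                var apps = new JavaScriptSerializer()
                    .Deserialize<List<Dictionary<string, object>>>(json);

                var list = new BulletedList();
                foreach (var app in apps)
                {
                    list.Items.Add(new ListItem(app["Name"] + " - " + app["Status"]));
                }
                Controls.Add(list);
            }
            catch (Exception ex)
            {
                // Keep the page rendering even if the service is unreachable.
                Controls.Add(new Literal { Text = "Unable to reach the BizTalk data service: " + ex.Message });
            }
        }
    }
    ```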

    Read the article

  • SQL SERVER – How to Compare the Schema of Two Databases with Schema Compare

    - by Pinal Dave
    Earlier I wrote about An Efficiency Tool to Compare and Synchronize SQL Server Databases and it was very well received. Since that blog post I have received quite a few questions asking how, just as with data, we can also compare schema and synchronize it. If you think about comparing schemas manually, it is almost impossible to do so. Table schema is just one concept, but if you really want all the schema of the database (triggers, views, stored procedures and everything else), it is just an impossible task. If you are a developer or database administrator who works in a production environment, then you know that there are many different occasions when we have to compare database schemas. Before deploying any changes to the production server, I personally like to make a note of every single schema change and document it, so in case of any issue I can always go back and refer to my documentation. As discussed earlier, it is almost impossible to do this task without the help of third-party tools. I personally use Devart Schema Compare for this task. This is an extremely easy tool. Let us see how it works.

    First, I have two different databases: a) AdventureWorks2012 and b) AdventureWorks2012-V1. There are a total of three changes between these databases. Here is the list:

    - One of the tables has an additional column
    - One of the tables has a new index
    - One of the stored procedures is changed

    Now let us see how dbForge Schema Compare works in this scenario. First, open dbForge Schema Compare studio. Click on New Schema Comparison. It will bring you to the following screen, where we configure the databases to compare. I have selected the AdventureWorks2012 and AdventureWorks2012-V1 databases. In the next screen we can verify various options, but for this demonstration we will keep them as they are. We will not change anything in the schema mapping screen, as in our case it is not required, but generally if you are comparing across schemas you may need this. This is the most important screen, as on this screen we select which kinds of objects we want to compare. You can see the options which are available to select. The screen lets you select objects from SQL Server 2000 to SQL Server 2012. Once you click on Compare in the previous screen, it will bring you to this screen, which essentially displays the comparative difference between the two databases we selected in the earlier screen. As mentioned above, there are three different changes in the database, and they are listed here. Two of the changes belong to tables and one change belongs to the procedure. Let us click each of them one by one to see the difference between them. In the very first option we can see that there is an additional column in one database which did not exist earlier. In this example we can see that the AdventureWorks2012 database has an additional index. The following example is very interesting: in this case we have changed the definition of the stored procedure, and the result pane contains the difference. dbForge Schema Compare very effectively identifies the changes in schema and lists them neatly for developers. Here is one more screen. This software not only compares the schema, but also provides options to update or drop objects as per your choice. I think this is a brilliant option. Well, I have been using Schema Compare for quite a while and have found it very useful. Here are a few of the things which dbForge Schema Compare can do for developers and DBAs:
    - Compare and synchronize SQL Server database schemas
    - Compare schemas of a live database and a SQL Server backup
    - Generate comparison reports in Excel and HTML formats
    - Eliminate mistakes in schema change propagation across environments
    - Track production database changes and customizations
    - Automate migration of schema changes using the command line interface

    I suggest that you try out dbForge Schema Compare and let me know what you think of this product. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL

    Read the article

  • Plymouth and GRUB do not show at all

    - by WarriorIng64
    I am using Ubuntu 11.04 64-bit as my only OS on my desktop computer, which used to run only Ubuntu 10.04 LTS until I had the time to upgrade it with a fresh install. It uses integrated NVIDIA graphics (listed as a GeForce 6150SE nForce 430 by the NVIDIA X Server Settings utility) with the current proprietary driver as provided by the Additional Drivers utility, and has a VGA connection to a 1680x1050 Acer monitor. I used to get the (ugly-looking version of) Plymouth graphical boot screen while under 10.04. It didn't look that great, but I was fine with it. Now, it doesn't show on 11.04 at all during boot (I just get an error message in a moving gray box from the monitor saying "Input Not Supported"), and only rarely will it show on shutdown, all garbled up. I could not get GRUB to show during boot while holding down Shift, either (same error message), but pressing Enter while it should be up starts the system normally. A picture of the error message I was getting is in the original post. Once fully booted, the system still shows the login screen and desktop just fine. Any information on how to troubleshoot this would be appreciated. If there's any hardware-specific stuff I forgot to include here, let me know the relevant commands to run in a comment below.

    Things that I've tried:

    1. Running plymouth in a framebuffer: no effect
    2. Booting with nomodeset as my grub boot option: no effect
    3. Booting with nomodeset and plymouth in a framebuffer: no effect other than Plymouth showing during shutdown only
    4. Following the Softpedia instructions for fixing Plymouth's resolution: problem mostly solved, except the logo does not show in Plymouth during boot, and both GRUB and Plymouth are slightly off-center
    5. #4 above, but with nomodeset removed as a grub boot option: same effect as #4
    6. #5 above, but with vt.handoff=7 added as a grub boot option: same effect as #4

    I have added the current contents of /etc/default/grub as requested in the comments:

    # If you change this file, run 'update-grub' afterwards to update
    # /boot/grub/grub.cfg.
    # For full documentation of the options in this file, see:
    #   info -f grub -n 'Simple configuration'
    GRUB_DEFAULT=0
    GRUB_HIDDEN_TIMEOUT=0
    GRUB_HIDDEN_TIMEOUT_QUIET=true
    GRUB_TIMEOUT=10
    GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash video=uvesafb:mode_option=1280x1024-24,mtrr=3,scroll=ywrap"
    GRUB_CMDLINE_LINUX=""
    # Uncomment to enable BadRAM filtering, modify to suit your needs
    # This works with Linux (no patch required) and with any kernel that obtains
    # the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
    #GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"
    # Uncomment to disable graphical terminal (grub-pc only)
    #GRUB_TERMINAL=console
    # The resolution used on graphical terminal
    # note that you can use only modes which your graphic card supports via VBE
    # you can see them in real GRUB with the command `vbeinfo'
    GRUB_GFXMODE=1280x1024
    # Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
    #GRUB_DISABLE_LINUX_UUID=true
    # Uncomment to disable generation of recovery mode menu entries
    #GRUB_DISABLE_RECOVERY="true"
    # Uncomment to get a beep at grub start
    #GRUB_INIT_TUNE="480 440 1"

    CURRENT STATUS: I forgot to uncomment one line as per "things that I've tried" #4, so I took care of that. I can now see GRUB during startup when I hold Shift and a normal-looking Plymouth during shutdown... but Plymouth during boot is now just a solid purple screen.
In each case, it's displayed a little off-center to the left, with a thin black bar running down the right side of the monitor. The error pictured above no longer shows. I'd say this problem is about 2/3 solved now. UPDATE: After Natty started freezing up on me, I decided to dual-boot with Oneiric, which unfortunately shows the same problems. Rather than trying all these workarounds though, I decided to do what I should have done from the start and file a pair of bug reports. LAST UPDATE: Bug 850908 has been confirmed as a legitimate nouveaufb bug. I have overwritten my 11.04 partition with 12.04 LTS, and I can confirm at this time that the issue is present there, as well. I will now flag this question to be closed, yet I hope it was helpful for anyone who experienced similar issues; if you are still having the same problem as me, please go there and mark yourself as affected. Thanks!

    Read the article

< Previous Page | 545 546 547 548 549 550 551 552 553 554 555 556  | Next Page >