Search Results

Search found 10463 results on 419 pages for 'required'.

  • Can't install git on Ubuntu 12.10

    - by Lucas Windir
    I'm following these instructions to install git on my laptop: http://git-scm.com/download/linux When I do:

        $ sudo apt-get install git-core

    this is what my terminal shows:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages were automatically installed and are no longer required:
          libasprintf0c2:i386 libcroco3:i386 libgettextpo0:i386 libgomp1:i386 libunistring0:i386
        Use 'apt-get autoremove' to remove them.
        The following extra packages will be installed:
          git git-man liberror-perl
        Suggested packages:
          git-daemon-run git-daemon-sysvinit git-doc git-el git-arch git-cvs git-svn git-email git-gui gitk gitweb
        The following NEW packages will be installed:
          git git-core git-man liberror-perl
        0 upgraded, 4 newly installed, 0 to remove and 0 not upgraded.
        Need to get 6,825 kB of archives.
        After this operation, 15.3 MB of additional disk space will be used.
        Do you want to continue [Y/n]? y
        WARNING: The following packages cannot be authenticated!
          liberror-perl git-man git git-core
        Install these packages without verification [y/N]?
        E: Some packages could not be authenticated

    Running sudo apt-get install git-core a second time and answering y to the verification prompt gives:

        Err http://py.archive.ubuntu.com/ubuntu/ quantal/main liberror-perl all 0.17-1
          Something wicked happened resolving 'py.archive.ubuntu.com:http' (-5 - No address associated with hostname)
        (the same resolution error is reported for git-man, git and git-core)
        Failed to fetch http://py.archive.ubuntu.com/ubuntu/pool/main/libe/liberror-perl/liberror-perl_0.17-1_all.deb Something wicked happened resolving 'py.archive.ubuntu.com:http' (-5 - No address associated with hostname)
        Failed to fetch http://py.archive.ubuntu.com/ubuntu/pool/main/g/git/git-man_1.7.10.4-1ubuntu1_all.deb Something wicked happened resolving 'py.archive.ubuntu.com:http' (-5 - No address associated with hostname)
        Failed to fetch http://py.archive.ubuntu.com/ubuntu/pool/main/g/git/git_1.7.10.4-1ubuntu1_amd64.deb Something wicked happened resolving 'py.archive.ubuntu.com:http' (-5 - No address associated with hostname)
        Failed to fetch http://py.archive.ubuntu.com/ubuntu/pool/main/g/git/git-core_1.7.10.4-1ubuntu1_all.deb Something wicked happened resolving 'py.archive.ubuntu.com:http' (-5 - No address associated with hostname)
        E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

    How could I install git on Ubuntu 12.10? I can't even do it from the Ubuntu Software Center. Thanks in advance!
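    The errors point at a mirror hostname (py.archive.ubuntu.com) that no longer resolves. One possible fix, offered as an editorial sketch rather than as part of the original thread (it assumes the default sources.list layout), is to point apt at the main archive:

        # Replace the broken py.* mirror with the main Ubuntu archive
        sudo sed -i 's/py\.archive\.ubuntu\.com/archive.ubuntu.com/g' /etc/apt/sources.list
        sudo apt-get update
        sudo apt-get install git-core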

    Read the article

  • ASP.NET JavaScript Routing for ASP.NET MVC–Constraints

    - by zowens
    If you haven’t had a look at my previous post about ASP.NET routing, go ahead and check it out before you read this one: http://weblogs.asp.net/zowens/archive/2010/12/20/asp-net-mvc-javascript-routing.aspx And the code is here: https://github.com/zowens/ASP.NET-MVC-JavaScript-Routing

    Anyway, this post is about routing constraints. A routing constraint is essentially a way for the routing engine to filter out route patterns based on the data from the URL. For example, if I have a route where all the parameters are required, I could use a constraint on the required parameters to say that each parameter is non-empty. Such a constraint is a class that implements IRouteConstraint, an interface provided by System.Web.Routing (a sketch appears at the end of this entry). The Match method returns true if the value is a match (and can be further processed by the routing rules) or false if it does not match (and the route will be matched further along the route collection).

    Because routing constraints are so essential to the route-matching process, it was important that they be part of my JavaScript routing engine. But the problem is that we need to somehow represent the constraint in JavaScript. I made a design decision early on that you MUST put this constraint into JavaScript to match a route. I didn’t want to have server interaction for the URL generation, like I’ve seen in so many applications; while that is easy to implement, it causes maintenance issues in my opinion.

    So the way constraints work in JavaScript is that the constraint, as an object type definition, is set on the route manager. When a route is created, a new instance of the constraint is created with the specific parameter. In its current form, the constraint function MUST return a function that takes the route data and returns true or false. Another piece of the puzzle is that you can have the JavaScript exist as a string in your application that is pulled in when the routing JavaScript code is generated. There is a simple interface, IJavaScriptAddition, that I have added that will be used to output custom JavaScript.

    Let’s put it all together with the NotEmpty constraint (again, see the sketch below). There are a few things at work here. The constraint is called “notEmpty” in JavaScript. When you add the constraint to a parameter in your C# code, the route manager generator looks for the JsConstraint attribute to get the name of the constraint type, falling back to the class name; for example, if I didn’t apply the “JsConstraint” attribute, the constraint would be called “NotEmpty”. The JavaScript code essentially adds a function to the “constraintTypeDefs” object on the “notEmpty” property (this is how constraints are added to routes). The function returns another function that will be invoked with routing data. You can then use the NotEmpty constraint on a route in your C# code and it will work with the JavaScript routing generator. The only catch to using route constraints currently is that one form of constraint (shown in the original post) is not supported: it will work in C# but is not supported by my JavaScript routing engine. (I take pull requests, so if you’d like this… go ahead and implement it.)

    I just wanted to take this post to explain a little bit about the background on constraints. I am looking at expanding the current functionality, but for now this is a good start. Thanks for all the support with the JavaScript router. Keep the feedback coming!
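    The code images from the original post are not reproduced here. As a rough sketch of what the NotEmpty constraint could look like: the Match signature below is the real System.Web.Routing one, while JsConstraint and IJavaScriptAddition are the author's own types whose exact shapes (including the ToJavaScript member) are assumed.

        // Hypothetical reconstruction of the NotEmpty constraint.
        [JsConstraint("notEmpty")]
        public class NotEmpty : IRouteConstraint, IJavaScriptAddition
        {
            // True when the routed parameter has a non-empty value.
            public bool Match(HttpContextBase httpContext, Route route, string parameterName,
                              RouteValueDictionary values, RouteDirection routeDirection)
            {
                object value;
                if (!values.TryGetValue(parameterName, out value))
                    return false;
                return !string.IsNullOrEmpty(Convert.ToString(value));
            }

            // JavaScript emitted into the generated routing script (member name assumed).
            public string ToJavaScript()
            {
                return "constraintTypeDefs.notEmpty = function (param) {" +
                       "  return function (routeData) {" +
                       "    var v = routeData[param];" +
                       "    return v !== undefined && v !== null && v !== '';" +
                       "  };" +
                       "};";
            }
        }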

    Read the article

  • How to install SharePoint Server 2013 Preview

    - by ybbest
    The Office 2013 and SharePoint Server 2013 Preview were announced yesterday, and as a SharePoint developer I am really excited to learn all the new features and capabilities. Today I will show you how to install the preview.
    1. Create a service account called SP2013Install and give this account Dbcreator and SecurityAdmin in SQL Server 2012.
    2. Run the following script to set the 'max degree of parallelism' setting to the required value of 1 in SQL Server 2012 (using sysadmin privilege) before configuring the SharePoint farm. Otherwise, you might get the error "This SQL Server instance does not have the required 'max degree of parallelism' setting of 1":

        sp_configure 'show advanced options', 1;
        GO
        RECONFIGURE WITH OVERRIDE;
        GO
        sp_configure 'max degree of parallelism', 1;
        GO
        RECONFIGURE WITH OVERRIDE;
        GO

    3. Download the SharePoint preview from here; I am going to install it on Windows Server 2008 R2 with SQL 2012.
    4. Click Install software prerequisites; this works fine with an internet connection. (However, if you do not have an internet connection, it is a bit tricky to install Windows Azure AppFabric, as it has to be installed using the prerequisite installer. Your computer might reboot a few times in the process.)
    5. After the prerequisites are installed completely, you can then install the Preview. Click Install SharePoint Server and enter the product key you get from the Preview download page.
    6. Accept the license terms and click Next.
    7. Leave the default path for the file location.
    8. You can now start the installation process.
    9. After the binary files are installed, you can configure your farm using the farm configuration wizard.
    10. Specify the database server and the install account.
    11. Specify the SharePoint farm passphrase.
    12. Specify the port number; you should choose your own favorite port number.
    13. Choose Create a New Server Farm and click Next.
    14. Double-check the settings and click Next to configure the farm install.
    15. Finally, your farm is configured successfully and you are now able to go to your Central Admin site at http://sp2010:6666/
    16. You should configure the services manually or automate the configuration using PowerShell (if you'd like to understand why, you can read the blog post here); however, I will use the wizard to configure them automatically here as this is a test machine. A scripted alternative is sketched after this entry. After the configuration is complete, you will be able to see your SharePoint site.
    17. To start evaluating the Preview, you need to install Visual Studio 2012 RC, Microsoft Office Developer Tools for Visual Studio 2012, SharePoint 2013 Designer Preview, and Office 2013 Preview.
    References: Download SharePoint Server 2013 | Download Microsoft Visio Professional 2013 Preview | Install SharePoint 2013 Preview | Hardware and software requirements for SharePoint 2013 Preview | SharePoint 2013 IT Pro and Developer training materials released | Plan for SharePoint 2013 Preview | Microsoft Office Developer Tools for Visual Studio 2012 | SharePoint 2013 Preview | Office365 for the SharePoint 2013 preview | SharePoint Designer 2013 Download | Microsoft Office 2013 Preview Language Pack | Try Office
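    As a scripted alternative to the wizard in step 16, farm creation can be started from the SharePoint 2013 Management Shell. A minimal sketch only; the database names, server, account, and passphrase below are placeholders:

        # Sketch: create the configuration database from PowerShell
        Add-PSSnapin Microsoft.SharePoint.PowerShell
        New-SPConfigurationDatabase -DatabaseName "SP2013_Config" `
            -DatabaseServer "SQL2012" `
            -AdministrationContentDatabaseName "SP2013_AdminContent" `
            -Passphrase (ConvertTo-SecureString "pass@word1" -AsPlainText -Force) `
            -FarmCredentials (Get-Credential "DOMAIN\SP2013Install")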

    Read the article

  • Add Microsoft Core Fonts to Ubuntu

    - by Matthew Guay
    Have you ever needed the standard Microsoft fonts such as Times New Roman on your Ubuntu computer? Here's how you can easily add the core Microsoft fonts to Ubuntu.

    Times New Roman, Arial, and other core Microsoft fonts are still some of the most commonly used fonts in documents and websites. Times New Roman especially is often required for college essays, legal docs, and other critical documents that you may need to write or edit. Ubuntu includes the Liberation fonts, which offer similar alternates to Times New Roman, Arial, and Courier New, but these may not be accepted by professors and others when a certain font is required. But don't worry; it only takes a couple of clicks to add these fonts to Ubuntu for free.

    Installing the Core Microsoft Fonts
    Microsoft has released its core fonts, including Times New Roman and Arial, for free, and you can easily download them from the Software Center. Open your Applications menu, and select Ubuntu Software Center. In the search box enter the following: ttf-mscorefonts. Click Install on the "Installer for Microsoft TrueType core fonts" directly in the search results. Enter your password when requested, and click Authenticate. The fonts will then automatically download and install in a couple of minutes, depending on your internet connection speed. Once the install is finished, you can launch OpenOffice Writer to try out the new fonts. A preview of all the fonts included in this pack shows that, yes, it does include the infamous Comic Sans and Webdings fonts as well as the all-important Times New Roman. Please note: by default in Ubuntu, OpenOffice uses Liberation Serif as the default font, but after installing this font pack, the default font will switch to Times New Roman.

    Adding Other Fonts
    In addition to the Microsoft core fonts, the Ubuntu Software Center has hundreds of free fonts available. Click the Fonts link on the front page to explore them, and install them the same way as above. If you've downloaded another font individually, you can also install it easily in Ubuntu: just double-click it, and then click Install in the preview window.

    Conclusion
    Although you may prefer the fonts that are included with Ubuntu, there are many reasons why having the Microsoft core fonts can be helpful. Thankfully it's easy in Ubuntu to install them, so you'll never have to worry about not having them when you need to edit an important document.
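    For command-line users, the same package can be installed from a terminal. A one-line sketch, assuming the multiverse repository is enabled:

        sudo apt-get install ttf-mscorefonts-installer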

    Read the article

  • Updated Security Baseline (7u45) impacts Java 7u40 and before with High Security settings

    - by costlow
    The Java security baseline has been increased from 7u25 to 7u45. For versions of Java below 7u45, this means unsigned Java applets, or Java applets that depend on JavaScript LiveConnect calls, will be blocked when using the High security setting in the Java Control Panel. This issue only affects Applets and Web Start applications; it does not affect other types of Java applications.

    The Short Answer
    Upgrading to Java 7 update 45 fixes this automatically and is strongly recommended.

    The More Detailed Answer
    There are two items involved, as described on the deployment flowchart:
    - The Security Baseline: a dynamically updated attribute that checks to see which Java version contains the most recent security patches.
    - The Security Slider: the user-controlled setting of when to prompt/run/block applets.

    The Security Baseline
    Java clients periodically check in to understand which version contains the most recent security patches. Versions that contain only bug fixes are released in between. For example, 7u25 (July 2013) was the previous secure baseline. 7u40 contained bug fixes; because it did not contain security patches, users were not required to upgrade and were welcome to remain on 7u25. When 7u45 was released (October 2013), this critical patch update contained security patches and raised the secure baseline, so users are required to upgrade from earlier versions. For users that are not regularly connected to the internet, there is a built-in expiration date. Because of the pre-established quarterly critical patch updates, we are able to determine an approximate date of the next version: a critical patch released in July will have its successor released, at latest, three months later, in October.

    The Security Slider
    The security slider is located within the Java Control Panel and determines which Applets & Web Start applications will prompt, which will run, and which will be blocked. One of the questions used to determine prompt/run/block is, "At or Above the Security Baseline."

    The Combination
    JavaScript calls made from LiveConnect do not reside within signed JAR files, so they are considered to be unsigned code. This is correct within networked systems even if the domain uses HTTPS, because signed JAR files represent signed "data at rest," whereas TLS (often called SSL) stands for "Transport Layer Security" and secures the communication channel, not the contents/code within the channel. The resulting flow for users who click "update later" is:
    - Is the browser plug-in registered and allowed to run? Yes.
    - Does a rule exist for this RIA? No rules apply.
    - Does the RIA have a valid signature? Yes, and not revoked.
    - Which security prompt is needed? The JRE is below the baseline, because 7u45 is the baseline and the user clicked "upgrade later."
    Under the default High setting, unsigned code is set to "Don't Run," so users see a prompt indicating that the application has been blocked.

    Additional Notes
    End users can control their own security slider within the control panel. System administrators can customize the security slider during automated installations. As a reminder, in the future, Java 7u51 (January 2014) will block unsigned and self-signed Applets & Web Start applications by default.
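    On the "rule" question in the flow above: administrators can whitelist known RIAs with a Deployment Rule Set. A minimal sketch of a ruleset.xml (the location value is a placeholder, and in practice the file is packaged and signed into a DeploymentRuleSet.jar):

        <ruleset version="1.0+">
          <rule>
            <id location="https://intranet.example.com" />
            <action permission="run" />
          </rule>
        </ruleset>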

    Read the article

  • Framework 4 Features: Support for Timed Jobs

    - by Anthony Shorten
    One of the new features of the Oracle Utilities Application Framework V4 is the ability for the batch framework to support timed batch. Traditionally, batch is associated with set processing in the background in a fixed time frame, for example, billing customers. Over the last few versions, the products have required a more monitoring-style batch process. A monitor is a batch process that looks for specific business events based upon record status or other pieces of data. For example, the framework contains a fact monitor (F1-FCTRN) that can be configured to look for specific statuses or other conditions; the batch process then uses the instructions on the object to determine what to do. To support monitor-style processing, you need to run the process regularly a number of times a day (for example, every ten minutes). Traditional batch could support this, but it was not as optimal as expected (if you are a site using the old Workflow subsystem, you understand what I mean). The batch framework was extended to add facilities to support timed batch (and continuous batch, which is another new feature for another blog entry). The new facilities include:
    - The batch control now defines the job as Timed or Not Timed. Non-timed batch jobs are traditional batch jobs.
    - The timer interval (the interval between executions) can be specified.
    - The timer can be made active or inactive. Only active timers are executed. Setting Timer Active to inactive will stop the job at the next time interval; setting it to active will start the execution of the timed job.
    - You can specify the credentials, the language used to view the messages, and an email address to send a summary of the execution to. The email address is optional and requires an email server to be specified in the relevant feature configuration.
    - You can specify the thread limits and commit intervals to be used for the multiple executions.
    Once a timer job is defined, it will be executed automatically by the Business Application Server process if the DEFAULT threadpool is active. This threadpool can be started using the online batch daemon (for non-production) or externally using the threadpoolworker utility. At that time, any batch process with Timer Active set to active and a batch control type of Timed will begin executing. As timed jobs are executed automatically, they do not appear in any external schedule and are not managed by an external scheduler (except via the DEFAULT threadpool itself, of course). If the job has no work to do when the timer interval is reached, that instance of the job is stopped and the next instance started at the next timer interval. If there is still work to complete when the timer interval is reached, the instance will continue processing until the work is complete; then the instance will be stopped and the next instance scheduled for the next timer interval. One of the key ways of optimizing this processing is to set the timer interval correctly for the expected workload. This is an interesting new feature of the batch framework, and we anticipate it will come in handy for specific business situations with the monitor processes.

    Read the article

  • Want a headless build server for SSDT without installing Visual Studio? You’re out of luck!

    - by jamiet
    An issue that regularly seems to rear its head on my travels is that of headless build servers for SSDT. What does that mean exactly? Let me give you my interpretation of it.

    A SQL Server Data Tools (SSDT) project incorporates a build process that will basically parse all of the files within the project and spit out a .dacpac file. Where an organisation employs a Continuous Integration process, they will likely want to automate the building of that dacpac whenever someone commits a change to the source control repository. In order to do that, the organisation will use a build server (e.g. TFS, TeamCity, Jenkins), and hence that build server requires all the pre-requisite software that understands how to build an SSDT project. The simplest way to install all of those pre-requisites is to install SSDT itself; however, a lot of folks don't like that approach because it installs a lot of unnecessary components, not least Visual Studio itself. Those folks (of which I am one) are of the opinion that it should be unnecessary to install a heavyweight GUI in order to simply get a few software components required to do something that inherently doesn't even need a GUI. The phrase "headless build server" is often used to describe a build server that doesn't contain any heavyweight GUI tools such as Visual Studio, and is a desirable state for a build server.

    In his blog post Headless MSBuild Support for SSDT (*.sqlproj) Projects, Gert Drapers outlines the steps necessary to obtain a headless build server for SSDT: "This article describes how to install the required components to build and publish SQL Server Data Tools projects (*.sqlproj) using MSBuild without installing the full SQL Server Data Tools hosted inside the Visual Studio IDE." http://sqlproj.com/index.php/2012/03/headless-msbuild-support-for-ssdt-sqlproj-projects/

    Frankly, however, going through these steps is a royal PITA, and folks like myself have longed for Microsoft to support headless builds for SSDT by providing a distributable installer that installs only the pre-requisites for building SSDT projects. Yesterday, in the MSDN forum thread Building a VS2013 headless build server - it's sooo hard, Mike Hingley complained about this very thing, and it prompted a response from Kevin Cunnane of the SSDT product team: "The official recommendation from the TFS / Visual Studio team is to install the version of Visual Studio you use on the build machine." I, like many others, would rather not have to install full-blown Visual Studio, and so I asked: "Is there any chance you'll ever support any of these scenarios: Installation of all build/deploy pre-requisites without installing the VS shell? TFS shipping with all of the pre-requisites for doing SSDT project build/deploys? 3rd party build servers (e.g. TeamCity) shipping with all of the requisites for doing SSDT project build/deploys? I have to say that the lack of a single installer containing all the pre-requisites for SSDT build/deploy puzzles me. Surely the DacFX installer would be a perfect vehicle for that?" Kevin replied again:
    "The answer is no for all 3 scenarios. We looked into this issue, discussed it with the Visual Studio / TFS team, and in the end agreed to go with their latest guidance, which is to install Visual Studio (e.g. VS2013 Express for Web) on the build machine. This is how Visual Studio Online is doing it, and it's the approach recommended for customers setting up their own TFS build servers. I would hope this is compatible with 3rd party build servers but have not verified whether this works with TeamCity etc. Note that the DacFx MSI isn't a suitable release vehicle for this, as we don't want to include Visual Studio/MSBuild dependencies in that package. It's meant to just include the core DacFx DLLs used by SSMS, SqlPackage.exe on the command line, etc. What this means is we won't be providing a separate MSI installer or nuget package with just the necessary build DLLs you need to run your build and tests. If someone wanted to create a script that generated a nuget package based on our DLLs and targets files, then release that somewhere on the web for easier integration with 3rd party build servers, we've no problem with that." Again, here's the link to the thread, and it's worth reading in its entirety if this is something that interests you. So there you have it: Microsoft will not be providing support for headless build servers for SSDT, but if someone in the community wants to go ahead and roll their own, go right ahead. @Jamiet
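    For context, once the pre-requisites from Gert's post are in place, the headless build itself is just an MSBuild invocation. A minimal sketch (the project name is illustrative; the .dacpac typically lands under the project's bin\Release folder):

        msbuild MyDatabase.sqlproj /t:Build /p:Configuration=Release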

    Read the article

  • Where’s my MD.050?

    - by Dave Burke
    A question that I’m sometimes asked is “where’s my MD.050 in OUM?” For those not familiar with an MD.050, it serves the purpose of being a Functional Design Document (FDD) in one of Oracle’s legacy methods. Functional Design Documents have existed for many years, with their primary purpose being to describe the functional aspects of one or more components of an IT system, typically a Custom Extension of some sort.

    So why don’t we have a direct replacement for the MD.050/FDD in OUM? In simple terms, the disadvantage of the MD.050/FDD approach is that it tends to lead practitioners into “design mode” too early in the process, whereas OUM encourages more emphasis on gathering and describing the functional requirements of a system ahead of the formal Analysis and Design process. So that just means more work up front for the Business Analyst or Functional Consultants, right? Well, no… the design of a solution, particularly when it involves a complex Custom Extension, does not necessarily take longer just because you put more thought into the functional requirements. In fact, one could argue the complete opposite: by putting more emphasis on clearly understanding the nuances of the functional requirements early in the process, the overall time and cost incurred during the Analysis to Design process should be less. In short, as your understanding of requirements matures over time, it is far easier (and more cost effective) to update a document or a diagram than to change lines of code.

    So how does that translate into Tasks and Work Products in OUM? Let us assume you have reached a point on a project where a Custom Extension is needed. One of the first things you should consider doing is creating a Use Case, and remember, a Use Case could be as simple as a few lines of text reflecting a “User Story,” or it could be what Cockburn1 describes as a “fully dressed Use Case.” It is worth mentioning at this point the highly scalable nature of OUM, in the sense that “documents” should not be produced just because that is the way we have always done things. Some projects may well be predicated upon a base of electronic documents, whilst other projects may take a much more Agile approach to describing functional requirements, through “User Stories” perhaps.

    In any event, it is quite common for a Custom Extension to involve the creation of several “components,” i.e. some new screens, an interface, a report, etc. Therefore several Use Cases might be required, which in turn can be assembled into a Use Case Package. Once you have the Use Cases attributed to an appropriate (fit-for-purpose) level of detail and assembled into a Package, you can create an Analysis Model for the Package. An Analysis Model is conceptual in nature and, depending on the solution being developed, would involve the creation of one or more diagrams (i.e. Sequence Diagrams, Collaboration Diagrams, etc.) which collectively describe the Data, Behavior, and User Interface requirements of the solution. If required, the various elements of the Analysis Model may be indexed via an Analysis Specification. For Custom Extension projects that follow a pure Object Oriented approach, the Analysis Model will naturally support the development of the Design Model without any further artifacts. However, for projects that are transitioning to this approach, the various elements of the Analysis Model may be represented within the Analysis Specification.

    If we now return to the original question of “Where’s my MD.050?”, the full answer would be:
    - Capture the functional requirements within a Use Case
    - Group related Use Cases into a Package
    - Create an Analysis Model for each Package
    - Consider creating an Analysis Specification (AN.100) as an index to each Analysis Model artifact
    An alternative answer for a relatively simple Custom Extension would be:
    - Capture the functional requirements within a Use Case
    - Optionally, group related Use Cases into a Package
    - Create an Analysis Specification (AN.100) for each package

    1 Cockburn, A., 2000. Writing Effective Use Cases. Addison-Wesley Professional, 1st edition.

    Read the article

  • Amazon Product Advertising API SOAP Namespace Changes

    - by Rick Strahl
    About two months ago (towards the end of February 2012, I think) Amazon decided to change the namespace of the Product Advertising API. The error that would come up was:

        <ItemSearchResponse > was not expected.

    If you've used the Amazon Product Advertising API you probably know that Amazon has made it a habit to break the services every few years or so, and I guess last month was about the time for another one. Basically, the service namespace of the document has been changed, and responses from the service just failed outright even though the rest of the schema looks fine. Now, I looked around for a while trying to find a recent update to the Product Advertising API, something semi-official looking, but everything is dated around 2009. Really??? And it's not just .NET: the newest thing in the samples/APIs is dated early 2011, plus a handful of 2010 samples. There are newer full APIs for the 'cloud' offerings, but the Product Advertising API apparently isn't part of that. After searching for quite a bit trying to trace this down myself and trying some of the newer samples (which also failed), I found an obscure forum post that describes the solution to the namespace issue.

    FWIW, I've been using an old version of the Product Advertising API using the old Microsoft WSE3 services (pre-WCF), which provides some of the WS* security features required by the Amazon service. The fix for this code is to explicitly override the namespace declaration on each of the imported service method signatures. The old service namespace (at least on my build) was http://webservices.amazon.com/AWSECommerceService/2009-03-31 and it should be changed to http://webservices.amazon.com/AWSECommerceService/2011-08-01. Change it on the class header:

        [Microsoft.Web.Services3.Messaging.SoapService("http://webservices.amazon.com/AWSECommerceService/2011-08-01")]
        [System.Xml.Serialization.XmlIncludeAttribute(typeof(Property[]))]
        [System.Xml.Serialization.XmlIncludeAttribute(typeof(BrowseNode[]))]
        [System.Xml.Serialization.XmlIncludeAttribute(typeof(TransactionItem[]))]
        public partial class AWSECommerceService : Microsoft.Web.Services3.Messaging.SoapClient
        {

    and on all method signatures:

        [Microsoft.Web.Services3.Messaging.SoapMethodAttribute("http://soap.amazon.com/ItemSearch")]
        [return: System.Xml.Serialization.XmlElementAttribute("ItemSearchResponse", Namespace="http://webservices.amazon.com/AWSECommerceService/2011-08-01")]
        public ItemSearchResponse ItemSearch(ItemSearch ItemSearch1)
        {
            Microsoft.Web.Services3.SoapEnvelope results = base.SendRequestResponse("ItemSearch", ItemSearch1);
            return ((ItemSearchResponse)(results.GetBodyObject(typeof(ItemSearchResponse), this.SoapServiceAttribute.TargetNamespace)));
        }

    It's easy to do with a search and replace on the above strings.

    Amazon Services <rant>
    FWIW, I've not been impressed by Amazon's service offerings. While the services work well, their documentation and tool support are absolutely horrendous. I was recently working with a customer on an old AWS application, and their old API had been completely replaced with a new API that wasn't even a close match; one old API call resulted in requiring three different APIs to perform the same functionality. We had to re-write the entire piece from scratch, essentially. The documentation was downright wrong and incomplete, and so scattered it was next to impossible to follow. The examples weren't examples at all; they're mockups of real service calls with fake data that didn't even provide everything that was required to make the same service calls work. Additionally, there appears to be just about no public support from Amazon, only peer support, which is sparse at best, and getting a hold of somebody at Amazon, even for pay, seems to be a mythical task. It's a terrible business model they have going. I can't see why anybody would put themselves through this sort of customer and development experience. Sad really, but an experience we see more and more these days. Nobody puts in the time to document anything anymore, leaving it to devs to figure this stuff out over and over again… </rant>

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in CSharp, Web Services.

    Read the article

  • Silverlight Cream for December 12, 2010 -- #1008

    - by Dave Campbell
    In this Issue: Michael Washington, Samuel Jack, Alfred Astort(-2-), Nokola(-2-), Avi Pilosof, Chris Klug, Pete Brown, Laurent Bugnion(-2-), and Jaime Rodriguez(-2-, -3-).
    Above the Fold:
    Silverlight: "Sharing resources and styles between projects in Silverlight" Chris Klug
    WP7: "Windows Phone Application Performance at Silverlight Firestarter" Jaime Rodriguez
    Training: "Silverlight View Model (MVVM) - A Play In One Act" Michael Washington
    Shoutouts: Koen Zwikstra announced the availability of the first Silverlight Spy 4 Preview 1. Gavin Wignall announced the launch of a festive game built with Silverlight 4, hosted on Azure... free to play.
    From SilverlightCream.com:
    - Silverlight View Model (MVVM) - A Play In One Act: Michael Washington has an interesting take on writing a blog post with this 'play' version of Silverlight View Models and Expression Blend, with a heaping dose of Behaviors added in for flavoring.
    - Build a Windows Phone Game in 3 days – Day 1: Samuel Jack is attempting to build a WP7 game in 3 days, including downloading the tools and an XNA book... interesting to see where he's headed with this venture.
    - 4 of 10 - Make sure your finger can hit the target and text is legible: Continuing the series of tips from the folks reviewing apps for the marketplace, via Alfred Astort: touch target size and legible text.
    - 5 of 10 - Give feedback on touch and progress within your UI: Alfred Astort's number 5 is also up and continues the touch discussion with this tip about giving the user feedback on their touch.
    - Fantasia Painter Released for Windows Phone 7 + Tips: Nokola took the release of his Fantasia Painter on WP7 as an opportunity not only to blog about the fact that we can go buy it, but also to share a post full of hints and tips he gathered while working on it.
    - Games for Windows Phone 7 Resources: Reducing Load Times, RPG Kit; Other: Nokola also blogged about the release of the new games education pack, and gives up the cursor he uses in his videos after being asked...
    - The simplest way to do design-time ViewModels with MVVM and Blend: Avi Pilosof attacks the design-time ViewModel issue in Blend with a 'no code' solution.
    - Sharing resources and styles between projects in Silverlight: Chris Klug is talking about sharing resources and styles across a large Silverlight project... near and dear to my heart at this moment.
    - Dynamically Generating Controls in WPF and Silverlight: Pete Brown has a post up that's generated some interest... creating controls at runtime... and he's demonstrating several different ways for both Silverlight and WPF.
    - #twitter for Windows Phone 7 protips (#wp7): Laurent Bugnion was posting these great tips for Twitter for WP7 and rolled all 16 of them up into a blog post... check them and the app out.
    - Increasing touch surface (#wp7dev): Laurent Bugnion's most current post should be of great interest to WP7 devs... providing more touch surface for your users' fat fingers, err, I mean their fat fingerings :) ... great information and samples... and interestingly, it is a fail point as listed by Alfred Astort above.
    - Windows Phone Application Performance at Silverlight Firestarter: This material from Jaime Rodriguez actually hit prior to his Firestarter presentation, but should be required reading for anyone doing a WP7 app... great performance tips from the trenches... slide deck, cheat-sheet, and code.
    - UpdateSourceTrigger on Windows Phone data bindings: Another post from Jaime Rodriguez that has already gone through a couple of revisions... how about a WP7 TextBox that fires notifications to the ViewModel when the text changes? ... would you like a behavior with that?
    - Details on the Push Notification app limits: Jaime Rodriguez has yet another required-reading post up on Push Notification limits... what it really entails and how you can be a good WP7 citizen in the way you program your app.
    Stay in the 'Light!

    Read the article

  • Oracle Fusion Procurement Designed for User Productivity

    - by Applications User Experience
    Sean Rice, Manager, Applications User Experience

    Oracle Fusion Procurement Design Goals
    In Oracle Fusion Procurement, we set out to create a streamlined user experience based on the way users do their jobs. Oracle has spent hundreds of hours with customers to get to the heart of what users need to do their jobs. By designing a procurement application around user needs, Oracle has crafted a user experience that puts the tools that people need at their fingertips. In Oracle Fusion Procurement, the user experience is designed to provide the user with information that will drive navigation rather than requiring the user to find information. One of our design goals for Oracle Fusion Procurement was to reduce the number of screens and clicks that a user must go through to complete frequently performed tasks. The requisition process in Oracle Fusion Procurement (Figure 1) illustrates how we have streamlined workflows. Oracle Fusion Self-Service Procurement brings together billing metrics, descriptions of the order, justification for the order, a breakdown of the components of the order, and the amount, all in one place. Previous generations of procurement software required the user to navigate to several different pages to gather all of this information. With Oracle Fusion, everything is presented on one page. The result is that users can complete their tasks in less time. The focus is on completing the work, not finding the work.

    Figure 1. Creating a requisition in Oracle Fusion Self-Service Procurement is a consumer-like shopping experience.

    Will Oracle Fusion Procurement Increase Productivity?
    To answer this question, Oracle sought to model how two experts working head to head, one in an existing enterprise application and another in Oracle Fusion Procurement, would perform the same task. We compared Oracle Fusion designs to corresponding existing applications using the keystroke-level modeling (KLM) method. This method is based on years of research at universities such as Carnegie Mellon and research labs like Xerox Palo Alto Research Center. The KLM method breaks tasks into a sequence of operations and uses standardized models to evaluate all of the physical and cognitive actions that a person must take to complete a task: what a user would have to click, how long each click would take (not only the physical action of the click or typing of a letter, but also how long someone would have to think about the page when taking the action), and user interface changes that result from the click. By applying standard time estimates for all of the operators in the task, an estimate of the overall task time is calculated. Task times from the model enable researchers to predict end-user productivity. For the study, we focused on modeling procurement business process task flows that were considered business or mission critical: high-frequency tasks and high-value tasks. The designs evaluated encompassed tasks that are currently performed by employees, professional buyers, suppliers, and sourcing professionals in advanced procurement applications. For each of these flows, we created detailed task scenarios that provided the context for each task, conducted task walk-throughs in both the Oracle Fusion design and the existing application, analyzed and documented the steps and actions required to complete each task, and applied standard time estimates to the operators in each task to estimate overall task completion times.

    The Results
    The KLM method predicted that the Oracle Fusion Procurement designs would result in productivity gains in each task, ranging from 13 percent to 38 percent, with an overall productivity gain of 22.5 percent. These performance gains can be attributed to a reduction in the number of clicks and screens needed to complete the tasks. For example, creating a requisition in Oracle Fusion Procurement takes a user through only two screens, while ordering the same item in a previous version requires six screens to complete the task. Modeling user productivity has resulted not only in advances in Oracle Fusion applications, but also in advances in other areas. We leveraged lessons learned from the KLM studies to establish products like Oracle E-Business Suite (EBS). New user experience features in EBS 12.1.3, such as navigational improvements to the main menu, a Google-type search using auto-suggest, embedded analytics, and an in-context list of values tool, help to reduce clicks and improve efficiency. For more information about KLM, refer to the Measuring User Productivity blog.
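    As a back-of-the-envelope illustration of the KLM arithmetic (a toy sketch, not Oracle's actual tooling, using the classic operator estimates from Card, Moran, and Newell):

        // Toy keystroke-level model: a task is a sequence of operators whose
        // standard time estimates (in seconds) are summed.
        using System;
        using System.Collections.Generic;
        using System.Linq;

        class KlmSketch
        {
            // K = keystroke, P = point with mouse, H = home hands, M = mental prep.
            static readonly Dictionary<char, double> Ops = new Dictionary<char, double>
            {
                ['K'] = 0.28, ['P'] = 1.10, ['H'] = 0.40, ['M'] = 1.35
            };

            static double TaskTime(string sequence) => sequence.Sum(op => Ops[op]);

            static void Main()
            {
                // Point at a field, home to the keyboard, think, type ten characters:
                Console.WriteLine(TaskTime("PHM" + new string('K', 10))); // ~5.65 s
            }
        }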

    Read the article

  • Using Lambdas for return values in Rhino.Mocks

    - by PSteele
    In a recent StackOverflow question, someone showed some sample code they'd like to be able to use. The particular syntax they used isn't supported by Rhino.Mocks, but it was an interesting idea that I thought could be easily implemented with an extension method.

    Background
    When stubbing a method return value, Rhino.Mocks supports the following syntax:

        dependency.Stub(s => s.GetSomething()).Return(new Order());

    The method signature is generic, and therefore you get compile-time type checking that the object you're returning matches the return value defined by the "GetSomething" method. You could also have Rhino.Mocks execute arbitrary code using the "Do" method:

        dependency.Stub(s => s.GetSomething()).Do((Func<Order>) (() => new Order()));

    This requires the cast, though. It works, but isn't as clean as the original poster wanted. They showed a simple example of something they'd like to see:

        dependency.Stub(s => s.GetSomething()).Return(() => new Order());

    Very clean, simple, and no casting required. While Rhino.Mocks doesn't support this syntax, it's easy to add it via an extension method. The Rhino.Mocks "Stub" method returns an IMethodOptions<T>. We just need to accept a Func<T> and use that as the return value. At first, this would seem straightforward:

        public static IMethodOptions<T> Return<T>(this IMethodOptions<T> opts, Func<T> factory)
        {
            opts.Return(factory());
            return opts;
        }

    And this would work and would provide the syntax the user was looking for. But the problem with this is that you lose the late-bound semantics of a lambda. The Func<T> is executed immediately and stored as the return value. At the point you're setting up your mocks and stubs (the "Arrange" part of "Arrange, Act, Assert"), you may not want the lambda executing; you probably want it delayed until the method is actually executed and Rhino.Mocks plugs in your return value. So let's make a few small tweaks:

        public static IMethodOptions<T> Return<T>(this IMethodOptions<T> opts, Func<T> factory)
        {
            opts.Return(default(T)); // required for Rhino.Mocks on non-void methods
            opts.WhenCalled(mi => mi.ReturnValue = factory());
            return opts;
        }

    As you can see, we still need to set up some kind of return value or Rhino.Mocks will complain as soon as it intercepts a call to our stubbed method. We use the "WhenCalled" method to set the return value equal to the execution of our lambda. This gives us the delayed execution we're looking for and a nice syntax for lambda-based return values in Rhino.Mocks.

    Technorati Tags: .NET, Rhino.Mocks, Mocking, Extension Methods
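    A quick usage sketch (IOrderService is a hypothetical interface, and the assertion style is NUnit's) showing that the factory now runs on each call rather than at arrange time:

        // Hypothetical service used only for illustration.
        public interface IOrderService { Order GetSomething(); }

        var dependency = MockRepository.GenerateStub<IOrderService>();
        dependency.Stub(s => s.GetSomething()).Return(() => new Order());

        // Each call invokes the factory, yielding a fresh Order instance:
        Assert.AreNotSame(dependency.GetSomething(), dependency.GetSomething());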

    Read the article

  • SharePoint 2010 Sandboxed solution SPGridView

    - by Steve Clements
    If you didn’t know, you probably will soon: the SPGridView is not available in sandboxed solutions. To be honest, there doesn’t seem to be a great deal of information out there about the whys and what nots; basically, it's not part of the sandbox SharePoint API. Of course, the error message from SharePoint is about as useful as a punch in the face…

        An unexpected error has been encountered in this Web Part. Error: A Web Part or Web Form Control on this Page cannot be displayed or imported. You don't have Add and Customize Pages permissions required to perform this action

    …that’s if you have debug=true set; if not, the classic “This webpart cannot be added”!! Love that one! But with a little digging you should find something like this…

        [TypeLoadException: Could not load type Microsoft.SharePoint.WebControls.SPGridView from assembly 'Microsoft.SharePoint, Version=14.900.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c'.]

    Depending on what you want to do with the SPGridView, this may not help at all. But I’m looking to inherit the theme of the site and style it accordingly. After spending a bit of time with Chrome’s FireBug I was able to get the required CSS classes. I created my own class inheriting from GridView (note the lack of a preceding SP!) and simply set the styles in there.

    Inherit from the standard GridView:

        public class PSGridView : GridView

    Set the styles in the constructor:

        public PSGridView()
        {
            this.CellPadding = 2;
            this.CellSpacing = 0;
            this.GridLines = GridLines.None;
            this.CssClass = "ms-listviewtable";
            this.Attributes.Add("style", "border-bottom-style: none; border-right-style: none; width: 100%; border-collapse: collapse; border-top-style: none; border-left-style: none;");
            this.HeaderStyle.CssClass = "ms-viewheadertr";
            this.RowStyle.CssClass = "ms-itmhover";
            this.SelectedRowStyle.CssClass = "s4-itm-selected";
            this.RowStyle.Height = new Unit(25);
        }

    Then, as you can’t override the Columns property setter, you need a custom method to add each column and set its style (see the sketch below). And that should be enough to get a nicely styled grid without the need for the SPGridView. But seriously… get the SPGridView in the sandbox!!!

    Technorati Tags: Sharepoint 2010, SPGridView, Sandbox Solutions, Sandbox
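    The original post's code for the column-adding method did not survive; as a minimal sketch of what such a method could look like (the method name and header CSS class are assumptions):

        // Hypothetical sketch: add a column and apply SharePoint-like header styling.
        public void AddStyledColumn(DataControlField field)
        {
            field.HeaderStyle.CssClass = "ms-vh2";
            this.Columns.Add(field);
        }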

    Read the article

  • How to detect and configure an output with xrandr?

    - by ysap
    I have a DELL U2410 monitor connected to a Compaq 100B desktop equipped with an integrated AMD/ATI graphics card (AMD E-350). The installed O/S is Ubuntu 10.04 LTS. The computer is connected to the monitor via the DVI connection. The problem is that I cannot set the desktop resolution to the native 1920x1200; the maximum allowed resolution is 1600x1200. Doing some research I found out about the xrandr utility. Unfortunately, when trying to use it I cannot configure it to the required resolution. First, it does not report the output name (which is supposed to be DVI-0), saying "default" instead. Without it I cannot use the --fb option. The EDID utility seems to identify the monitor well. Here's the output from get-edid:

        # EDID version 1 revision 3
        Section "Monitor"
        # Block type: 2:0 3:ff
        # Block type: 2:0 3:fc
        Identifier "DELL U2410"
        VendorName "DEL"
        ModelName "DELL U2410"
        # Block type: 2:0 3:ff
        # Block type: 2:0 3:fc
        # Block type: 2:0 3:fd
        HorizSync 30-81
        VertRefresh 56-76
        # Max dot clock (video bandwidth) 170 MHz
        # DPMS capabilities: Active off:yes Suspend:yes Standby:yes
        Mode "1920x1200" # vfreq 59.950Hz, hfreq 74.038kHz
        DotClock 154.000000
        HTimings 1920 1968 2000 2080
        VTimings 1200 1203 1209 1235
        Flags "-HSync" "+VSync"
        EndMode
        # Block type: 2:0 3:ff
        # Block type: 2:0 3:fc
        # Block type: 2:0 3:fd
        EndSection

    but the xrandr -q command returns:

        Screen 0: minimum 640 x 400, current 1600 x 1200, maximum 1600 x 1200
        default connected 1600x1200+0+0 0mm x 0mm
           1600x1200   0.0*
           1280x1024   0.0
           1152x864    0.0
           1024x768    0.0
           800x600     0.0
           640x480     0.0
           720x400     0.0

    When I try to set the resolution, I get:

        $ xrandr --fb 1920x1200
        xrandr: screen cannot be larger than 1600x1200 (desired size 1920x1200)
        $ xrandr --output DVI-0 --auto
        warning: output DVI-0 not found; ignoring

    How can I set the screen resolution to 1920x1200? Why doesn't xrandr identify the DVI-0 output? Note that the same computer running an Ubuntu version higher than 10.04 detects the correct resolution with no problems. On this machine I cannot upgrade due to some legacy hardware compatibility problems. Also, I don't see any optional screen drivers available in the Hardware Drivers dialog.

    UPDATE: Following the answer to this question, I made some progress. Now the required mode is listed in the xrandr -q list, but I can't switch to that mode. Using the Monitors applet (which now shows the new mode), I get the response: "The selected configuration for displays could not be applied. Could not set the configuration to CRTC 262."
    From the command line it looks like this:

        $ cvt 1920 1200 60
        # 1920x1200 59.88 Hz (CVT 2.30MA) hsync: 74.56 kHz; pclk: 193.25 MHz
        Modeline "1920x1200_60.00"  193.25  1920 2056 2256 2592  1200 1203 1209 1245 -hsync +vsync
        $ xrandr --newmode "1920x1200_60.00" 193.25 1920 2056 2256 2592 1200 1203 1209 1245 -hsync +vsync
        $ xrandr -q
        Screen 0: minimum 640 x 400, current 1600 x 1200, maximum 1600 x 1200
        default connected 1600x1200+0+0 0mm x 0mm
           1600x1200   0.0*
           1280x1024   0.0
           1152x864    0.0
           1024x768    0.0
           800x600     0.0
           640x480     0.0
           720x400     0.0
          1920x1200_60.00 (0x120)  193.0MHz
                h: width 1920 start 2056 end 2256 total 2592 skew 0 clock 74.5KHz
                v: height 1200 start 1203 end 1209 total 1245 clock 59.8Hz
        $ xrandr --addmode default 1920x1200_60.00
        $ xrandr -q
        Screen 0: minimum 640 x 400, current 1600 x 1200, maximum 1600 x 1200
        default connected 1600x1200+0+0 0mm x 0mm
           1600x1200   0.0*
           1280x1024   0.0
           1152x864    0.0
           1024x768    0.0
           800x600     0.0
           640x480     0.0
           720x400     0.0
           1920x1200_60.00 59.8
        $ xrandr --output default --mode 1920x1200_60.00
        xrandr: Configure crtc 0 failed

    Another piece of info (if it helps anyone):

        $ sudo lshw -c video
          *-display UNCLAIMED
               description: VGA compatible controller
               product: ATI Technologies Inc
               vendor: ATI Technologies Inc
               physical id: 1
               bus info: pci@0000:00:01.0
               version: 00
               width: 32 bits
               clock: 33MHz
               capabilities: pm pciexpress msi bus_master cap_list
               configuration: latency=0
               resources: memory:c0000000-cfffffff(prefetchable) ioport:f000(size=256) memory:feb00000-feb3ffff

    Read the article

  • InfiniBand Enabled Diskless PXE Boot

    - by Neeraj Gupta
    When you want to bring up a compute server in your environment and need InfiniBand connectivity, you usually go through various installation steps. This could involve an operating system like Linux, followed by a compatible InfiniBand software distribution, associated dependencies, and configurations. What if you just want to run some InfiniBand diagnostics or troubleshooting tools from a test machine? What if something happened to your primary machine and, while recovering in rescue mode, you also need access to your InfiniBand network? Often we use opensource, community-supported small Linux distributions, but they don't come with the required InfiniBand support and tools. In this weblog, I am going to provide instructions on how to add InfiniBand support to a specific Linux image: Parted Magic. This is a free-to-use opensource Linux distro often used to recover or rescue machines. The distribution itself will not be changed at all. Yes, you heard it right! I have built an InfiniBand add-on package that will be passed to the default kernel and initrd to get this all working.

    Pre-requisites
    You will need to have a PXE server ready on your ethernet-based network. The compute server you are trying to PXE boot should have a compatible IB HCA and must be connected to an active IB network.

    Required Downloads
    Download the Parted Magic small distribution for PXE from the Parted Magic website. Download the InfiniBand PXE Add On package (right click and download from here). Do not extract the contents of this file; you need to use it as is.

    Prepare PXE Server
    Extract the contents of the downloaded pmagic distribution into a temporary directory. Inside the directory structure, you will see a pmagic directory containing two files, bzImage and initrd.img. Copy this directory into your TFTP server's root directory. This is usually /tftpboot, unless you have a different setup. For example:

        cp pmagic_pxe_2012_2_27_x86_64.zip /tmp
        cd /tmp
        unzip pmagic_pxe_2012_2_27_x86_64.zip
        cd pmagic_pxe_2012_2_27_x86_64
        # ls -l
        total 12
        drwxr-xr-x  3 root root 4096 Feb 27 15:48 boot
        drwxr-xr-x  2 root root 4096 Mar 17 22:19 pmagic
        cp -r pmagic /tftpboot

    As I mentioned earlier, we don't change anything in the default pmagic distro; we simply provide the add-on package via PXE append options. If you are using a menu-based PXE server, add an entry to your menu. For example, /tftpboot/pxelinux.cfg/default can be appended with the following section (keep the line starting with APPEND as a single line):

        LABEL Diskless Boot With InfiniBand Support
        MENU LABEL Diskless Boot With InfiniBand Support
        KERNEL pmagic/bzImage
        APPEND initrd=pmagic/initrd.img,pmagic/ib-pxe-addon.cgz edd=off load_ramdisk=1 prompt_ramdisk=0 rw vga=normal loglevel=9 max_loop=256
        TEXT HELP
        * A Linux Image which can be used to PXE Boot w/ IB tools
        ENDTEXT

    If you use host-specific files in pxelinux.cfg, you can add the above entry to that specific file instead.

    Boot Computer over PXE
    Now boot your desired compute machine over PXE. This does not have to be over InfiniBand; just use your standard ethernet interface and network. If using menus, pick the new entry that you created in the previous section.

    Enable IPoIB
    After a few minutes, you will be booted into the Parted Magic environment. Open a terminal session and see if InfiniBand is enabled. You can use commands like:

        ifconfig -a
        ibstat
        ibv_devices
        ibv_devinfo

    If you are connected to an InfiniBand network with an active Subnet Manager, your IB interfaces should have come online by now. You can proceed and assign IP addresses to them; this will enable you at the IPoIB layer.
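    A minimal sketch of bringing up an IPoIB interface; the interface name and addresses here are illustrative assumptions:

        # Assign an address to the first IPoIB interface and test reachability
        ifconfig ib0 192.168.10.5 netmask 255.255.255.0 up
        ping -c 3 192.168.10.1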
You can proceed and assign IP address to them. This will enable you at IPoIB layer. Example InfiniBand Diagnostic Tools I have added several InfiniBand Diagnistic tools in this add-on. You can use from following list: ibstat, ibstatus, ibv_devinfo, ibv_devices perfquery, smpquery ibnetdiscover, iblinkinfo.pl ibhosts, ibswitches, ibnodes Wrap Up This concludes this weblog. Here we saw how to bring up a computer with IPoIB and InfiniBand diagnostic tools without installing anything on it. Its almost like running diskless !
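    To make that last step concrete, here is a minimal sketch of bringing up IPoIB from the Parted Magic terminal once ibstat reports an active port. The interface name ib0 and the addresses below are assumptions for illustration - substitute whatever your fabric uses:

    # Confirm the HCA port state before configuring anything
    ibstat | grep -E "State|Rate"
    # Assign an address to the first IPoIB interface (name and subnet assumed)
    ifconfig ib0 192.168.100.10 netmask 255.255.255.0 up
    # Verify reachability of another host on the IB fabric (address assumed)
    ping -c 3 192.168.100.11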

    Read the article

  • Who Makes a Good Product Owner

    - by Robert May
    In general, the best product owners are those that care passionately about the customer of the product.  Note that I didn't say about the product itself.  Actually, people that only care about the product generally do not make good product owners.  Products only matter in relationship to their customers.  If a product doesn't provide value to the customer, then the product has no value, no matter what a person might think of the product, and no matter what cool technologies exist inside it. A good product owner is also a good negotiator.  They recognize that many different priorities exist inside of a corporation, but that there can be only one list that developers work from.  A good product owner recognizes that it's their job to help others around them prioritize (perhaps with a Product Council), but also understands that they alone have the final say about priorities and are willing to make the tough decisions required.  Deciding the priority between two perfectly valid stories is very difficult, especially when the stories are from two different departments! A good product owner is deeply interested in helping the team be successful.  They don't seek to control the team, but instead seek to understand what the team can do and then work with the team to get the best product possible for the customer.  A good product owner is never denigrating to team members, ever.  They recognize that such behavior would damage the trust that needs to be present between team members and product owners and will avoid it at all costs. In general, technical people (i.e. former or current developers) make poor product owners.  In their minds, they can't separate implementation details from user functionality, so their stories end up sounding like implementation details.  For example, "The user enters their username on the password screen" is something that a technical product owner would write.  The proper wording for that story is "A user supplies the system with their credentials."  Because technical people think differently from the rest of the population, they are generally not a good fit. A good product owner is also a good writer.  Writing good stories demands good writing.  The art of persuasion, descriptiveness and just general good grammar are all required.  A good Product Owner must also be well spoken, since most of what will be conveyed will be conveyed with the spoken word, not just the written word. A good product owner is a "People Person."  They like talking to people and are very patient.  They don't mind having questions repeated or fielding many questions, because they want to make sure that the ideas they're conveying are properly understood so the customer gets the best product possible.  They are happy to answer any questions a team member may have and invite feedback and criticism of designs and stories, since they want a good product.  They really have little ego that gets in the way of building a great product. All of these qualities can be hard to find, but if you look closely enough, you'll find the right person in your organization.  Product owners can be found anywhere, not just in upper management.  Some of the best product owners are those that are very close to the customer.  In fact, check your customer support staff.  I'd bet that several great product owners are lurking there. One final note about what makes a good product owner:
You’re probably NOT going to find a good product owner in a manager, especially if they consider themselves a “Manager.”  Product owners don’t manage anything but the backlog, so be especially careful if the person you’re selecting for Product Owner is a manager. Up Next, “Messing with the Team.” Technorati Tags: Scrum,Product Owner

    Read the article

  • Using the SOA-BPM VirtualBox Appliance

    - by antony.reynolds
    Quickstart Guide to Using the Oracle Appliance for SOA/BPM Recently I have been setting up some machines for fellow engineers.  My base setup consists of Oracle Enterprise Linux with Oracle VirtualBox.  Note that after installing VirtualBox I needed to add the VirtualBox Extension Pack to enable RDP access, amongst other features.  In order to get them started quickly with some images I downloaded the pre-built appliance for SOA/BPM from OTN. Out of the box this provides a VirtualBox image that is pre-installed with everything you will need to develop SOA/BPM applications. Specifically, by using the virtual appliance I got the following pre-installed and configured. Oracle Enterprise Linux 5 User oracle, password oracle. User root, password oracle. Oracle Database XE Pre-configured with the SOA/BPM repository. Set to auto-start on OS startup. Oracle SOA Suite 11g PS2 Configured with a "collapsed domain", all services (SOA/BAM/EM) running in the AdminServer. Listening on port 7001. Oracle BPM Suite 11g Configured in the same domain as SOA Suite. Oracle JDeveloper 11g With SOA/BPM extensions. Networking The VM by default uses NAT (Network Address Translation) for network access.  Make sure that the advanced settings for port forwarding allow access through the host to guest ports.  It should be pre-configured to forward requests on the following ports - Purpose / Host Port / Guest Port (VBox image): SSH 2222 22; HTTP 7001 7001; Database 1521 1521. Note that only one VirtualBox image can use a given host port, so make sure you are not clashing if it seems not to work. What's Left to Do? There is still some customization of the environment that may be required. If you need to configure a proxy server, as I did, then for the oracle and root users set up an HTTP proxy: Add "export http_proxy=http://proxy-host:proxy-port" to ~oracle/.bash_profile and ~root/.bash_profile. Add "export http_proxy=http://proxy-host:proxy-port" to /etc/.bashrc. Edit System->Preferences to set the Network Proxy. In Firefox, set Preferences->Network->Connection Settings to "Use system proxy settings". In JDeveloper, set Edit->Preferences->Web Browser and Proxy to the required proxy settings. You may need to configure yum to point to a public OEL yum repository - such as http://public-yum.oracle.com. If you are going to be accessing the SOA server from outside the VirtualBox image then you may want to set the soa-infra Server URLs to be the hostname of the host OS. Snap! Once I had the machine configured how I wanted to use it, I took a snapshot so that I can always get back to the pristine install I have now.  Snapshots are one of the big benefits of putting a development environment into a virtualized environment.  I can make changes to my installation and if I mess it up I can restore the image to a last known good snapshot. Hey Presto! Ready to Go This is the quickest way to get up and running with SOA/BPM Suite.  Out of the box the download will work; I only did extra customization so I could use services outside the firewall and browse outside the firewall from within my SOA VirtualBox image.  I also used yum to update the OS to the latest binaries. So have fun.
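    If the pre-configured forwarding rules ever need to be recreated, VBoxManage can add them from the host command line. A minimal sketch, assuming the appliance was imported under the name "SOA-BPM" (the VM name is an assumption - check yours with the first command):

    # List registered VMs to find the exact appliance name
    VBoxManage list vms
    # Recreate the three NAT port-forwarding rules (run while the VM is powered off)
    VBoxManage modifyvm "SOA-BPM" --natpf1 "ssh,tcp,,2222,,22"
    VBoxManage modifyvm "SOA-BPM" --natpf1 "http,tcp,,7001,,7001"
    VBoxManage modifyvm "SOA-BPM" --natpf1 "db,tcp,,1521,,1521"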

    Read the article

  • HPCM 11.1.2.x - Outline Optimisation for Calculation Performance

    - by Jane Story
    When an HPCM application is first created, it is likely that you will want to carry out some optimisation on the HPCM application's Essbase outline in order to improve calculation execution times. There are several things that you may wish to consider. Because at least one dense dimension is required to deploy an application from HPCM to Essbase, "Measures" and "AllocationType", as the only required dimensions in an HPCM application, are created dense by default. However, for optimisation reasons, you may wish to consider changing this default dense/sparse configuration. In general, calculation scripts in HPCM execute best when they are targeting destinations with one or more dense dimensions. Therefore, consider your largest target stage, i.e. the stage with the most assignment destinations, and choose that as a dense dimension. When optimising an outline in this way, it is not possible to have a dense dimension in every target stage, so testing the dense/sparse settings in every stage is the key to finding the best configuration for each individual application. It is not possible to change the dense/sparse setting of individual cloned dimensions from EPMA: when a dimension that is to be repeated in multiple stages, and therefore cloned, is defined in EPMA, every instance of that dimension has the same storage setting. However, once the application has been deployed from EPMA to HPCM and from HPCM to Essbase, it is possible to make dense/sparse changes to a cloned dimension directly in Essbase. This can be done by editing the properties of the outline in Essbase Administration Services (EAS) and manually changing the dense/sparse settings of individual dimensions. Note that such manual changes may not be preserved in all cases; see below for the full explanation. There are two methods of deployment from HPCM to Essbase from 11.1.2.1: a "replace" deploy method and an "update" deploy method. "Replace" will delete the Essbase application and replace it. If this method is chosen, then any changes made directly on the Essbase outline will be lost. If you use the update deploy method (with or without archiving and reloading data), then the Essbase outline, including any manual changes you have made (i.e. changes to dense/sparse settings of the cloned dimensions), will be preserved. Notes If you are using the calculation optimisation technique mentioned in a previous blog to calculate multiple POVs (https://blogs.oracle.com/pa/entry/hpcm_11_1_2_optimising) and you are calculating all members of that POV dimension (e.g. all months in the Period dimension), then you could consider making that dimension dense. Always review block sizes after all changes! The maximum block size recommended in the Essbase Database Administrator's Guide is 100k for 32-bit Essbase and 200k for 64-bit Essbase. However, calculations may perform better with a larger than recommended block size provided that sufficient memory is available on the Essbase server. Test different configurations to determine the most optimal solution for your HPCM application. Please note that this blog article covers HPCM outline optimisation only. Additional performance tuning can be achieved by methodically testing database settings, i.e. data cache, index cache and/or commit block settings. For more information on Essbase tuning best practices, please review these items in the Essbase Database Administrator's Guide.
For additional information on the commit block setting, please see the previous PA blog article https://blogs.oracle.com/pa/entry/essbase_11_1_2_commit
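    As a quick worked example of the block-size check: a block-storage Essbase cube holds one 8-byte cell for every combination of stored members across the dense dimensions, so the block size is easy to estimate before deploying. The member counts below are assumptions for illustration only:

    # block size (bytes) = 8 x stored members in each dense dimension
    # e.g. Measures dense with 40 stored members, AllocationType dense with 4:
    echo $((8 * 40 * 4))    # 1280 bytes - comfortably under the 100k/200k guidance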

    Read the article

  • Using GMail's SMTP and IMAP servers in Notification Mailer

    - by Saroja Kandepuneni
    Overview GMail offers free, reliable, popular SMTP and IMAP services, which is why many people are interested in using them. GMail can be used when there are no in-house SMTP/IMAP servers for testing or debugging purposes. This blog explains how to install the GMail SSL certificate in the Concurrent Tier, test the connection using a standalone program, run Mailer diagnostics and configure the GMail IMAP and SMTP servers for Workflow Notification Mailer Inbound and Outbound connections. GMail servers configuration SMTP server Host Name  smtp.gmail.com SSL Port  465 TLS/SSL required  Yes User Name  Your full email address (including @gmail.com or @your_domain.com) Password  Your gmail password  IMAP server  Host Name imap.gmail.com  SSL Port 993 TLS/SSL Required Yes  User Name  Your full email address (including @gmail.com or @your_domain.com)  Password Your gmail password GMail SSL Certificate Installation The following is the procedure to install the GMail SSL certificate: Copy the below GMail SSL certificate to a file, e.g. gmail.cer -----BEGIN CERTIFICATE-----MIIDWzCCAsSgAwIBAgIKaNPuGwADAAAisjANBgkqhkiG9w0BAQUFADBGMQswCQYDVQQGEwJVUzETMBEGA1UEChMKR29vZ2xlIEluYzEiMCAGA1UEAxMZR29vZ2xlIEludGVybmV0IEF1dGhvcml0eTAeFw0xMTAyMTYwNDQzMDRaFw0xMjAyMTYwNDUzMDRaMGgxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRYwFAYDVQQHEw1Nb3VudGFpbiBWaWV3MRMwEQYDVQQKEwpHb29nbGUgSW5jMRcwFQYDVQQDEw5pbWFwLmdtYWlsLmNvbTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEAqfPyPSEHpfzvXx+9zGUxoxcOXFrGKCbZ8bfUd8JonC7rfId32t0gyAoLCgM6eU4lN05VenNZUoChL/nrX+ApdMQv9UFV58aYSBMU/pMmK5GXansbXlpHao09Mc8eur2xV+4cnEtxUvzpco/OaG15HDXcr46c6hN6P4EEFRcb0ccCAwEAAaOCASwwggEoMB0GA1UdDgQWBBQj27IIOfeIMyk1hDRzfALz4WpRtzAfBgNVHSMEGDAWgBS/wDDr9UMRPme6npH7/Gra42sSJDBbBgNVHR8EVDBSMFCgTqBMhkpodHRwOi8vd3d3LmdzdGF0aWMuY29tL0dvb2dsZUludGVybmV0QXV0aG9yaXR5L0dvb2dsZUludGVybmV0QXV0aG9yaXR5LmNybDBmBggrBgEFBQcBAQRaMFgwVgYIKwYBBQUHMAKGSmh0dHA6Ly93d3cuZ3N0YXRpYy5jb20vR29vZ2xlSW50ZXJuZXRBdXRob3JpdHkvR29vZ2xlSW50ZXJuZXRBdXRob3JpdHkuY3J0MCEGCSsGAQQBgjcUAgQUHhIAVwBlAGIAUwBlAHIAdgBlAHIwDQYJKoZIhvcNAQEFBQADgYEAxHVhW4aII3BPrKQGUdhOLMmdUyyr3TVmhJM9tPKhcKQ/IcBYUev6gLsB7FH/n2bIJkkIilwZWIsj9jVJaQyJWP84Hjs3kus4fTpAOHKkLqrbIZDYjwVueLmbOqr1U1bNe4E/LTyEf37+Y5hcveWBQduIZnHn1sDE2gA7LnUxvAU=-----END CERTIFICATE----- Install the SSL certificate into the default JRE location or any other location using the commands below. Installing into the default JRE location in an EBS instance:         # keytool -import -trustcacerts -keystore $AF_JRE_TOP/lib/security/cacerts  -storepass changeit -alias gmail-lnx_chainnedcert -file gmail.cer Installing into a custom location:         # keytool -import -trustcacerts -keystore <customLocation>  -storepass changeit -alias gmail-lnx_chainnedcert -file gmail.cer       <customLocation> -- directory in the instance where the certificate needs to be installed. After running the above command you can see the following response         Trust this certificate?
    [no]:  yes        Certificate was added to keystore Running Mailer Command Line Diagnostics Run the Mailer command line diagnostics from the concurrent tier where the Mailer is running, to check the IMAP connection, using the below command: $AFJVAPRG -classpath $AF_CLASSPATH -Dprotocol=imap -Ddbcfile=$FND_SECURE/$TWO_TASK.dbc -Dserver=imap.gmail.com -Dport=993 -Dssl=Y -Dtruststore=$AF_JRE_TOP/lib/security/cacerts -Daccount=<gmail username> -Dpassword=<password> -Dconnect_timeout=120 -Ddebug=Y -Dlogfile=GmailImapTest.log -DdebugMailSession=Y oracle.apps.fnd.wf.mailer.Mailer Run the Mailer command line diagnostics from the concurrent tier where the Mailer is running, to check the SMTP connection, using the below command:   $AFJVAPRG -classpath $AF_CLASSPATH -Dprotocol=smtp -Ddbcfile=$FND_SECURE/$TWO_TASK.dbc -Dserver=smtp.gmail.com -Dport=465 -Dssl=Y -Dtruststore=$AF_JRE_TOP/lib/security/cacerts -Daccount=<gmail username> -Dpassword=<password> -Dconnect_timeout=120 -Ddebug=Y -Dlogfile=GmailSmtpTest.log -DdebugMailSession=Y oracle.apps.fnd.wf.mailer.Mailer Standalone program to verify the IMAP connection Run the below standalone program from the concurrent tier node where the Mailer is running to verify the connection with the GMail IMAP server. It connects to the GMail IMAP server with the given GMail user name and password and lists all the folders that exist in that account. If the GMail IMAP server is not working for the Mailer, check whether the PROCESSED and DISCARD folders exist for the GMail account; if not, create them manually by logging into the GMail account. Sample program to test GMail IMAP connection The standalone program can be run as below:  $java GmailIMAPTest GmailUsername GMailUserPassword Standalone program to verify the SMTP connection Run the below standalone program from the concurrent tier node where the Mailer is running to verify the connection with the GMail SMTP server. It connects to the GMail SMTP server by authenticating with the given user name and password and sends a test email message to the given recipient email address. Sample program to test GMail SMTP connection The standalone program can be run as below:  $java GmailSMTPTest GmailUsername gMailPassword recipientEmailAddress Warnings As gmail.com is an external domain, the Mailer concurrent tier should allow the connection to the GMail server. Please keep in mind, when using this for corporate facilities, that the e-mail data would be stored outside the corporate network.
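    Before running the Mailer diagnostics, a quick way to confirm the certificate import succeeded is to list the trust store and look for the alias used above (a minimal sketch, reusing the alias and default store password from the import step):

    # List the trust store and confirm the gmail alias is present
    keytool -list -keystore $AF_JRE_TOP/lib/security/cacerts -storepass changeit | grep -i gmail-lnx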

    Read the article

  • Uninstall, Disable, or Remove Windows 7 Media Center

    - by Mysticgeek
    Although Windows 7 Media Center has improved a lot over previous versions of Windows, you might want to disable it for different reasons. Here we take a look at a couple of methods to get rid of it. There are a variety of reasons you might want to disable Windows 7 Media Center. Maybe you own a business and don't want it to run on the machines. Or perhaps you don't use it at all and just don't want it around. Turn Off WMC Using Programs and Features Probably the easiest way to get rid of it on all versions of Windows 7 is to open Control Panel and select Programs and Features. This method is similar to disabling Internet Explorer 8 in Windows 7. On the left-hand panel click on Turn Windows Features on or off. Scroll down to Media Features and expand the folder. Then uncheck Windows Media Center… You'll get a verification message making sure you want to disable it; click Yes. Then the box next to Windows Media Center will be empty…click OK. Wait while WMC is disabled… To complete the process a reboot is required. After getting back from the restart, the WMC icon will be gone and there won't be any way to launch it. Re-enable WMC If you want to re-enable it, just go back in and recheck it. Again you'll need to wait while it's configured, but when it's done, a restart is not required. Disable Media Center Using Group Policy Note: This process uses the Group Policy Editor, which is not available in Home versions of Windows 7. Click on the Start menu, type gpedit.msc into the Search box and hit Enter. Now navigate to User Configuration \ Administrative Templates \ Windows Components \ Windows Media Center. Double-click on Do not allow Windows Media Center to run. Then select the radio button next to Enabled, click OK and close out of the Group Policy Editor. Now if a user tries to launch WMC they will get the following message. Conclusion If you're not a fan of Windows Media Center or want to disable it for whatever reason, the process is simple and there are a couple of ways you can do it. WMC is not included in the Starter or Home Basic versions of Windows 7. If you're new to Windows 7 Media Center, you might want to check out our guide on getting started and setting up live TV.
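    For administrators who prefer to script the Programs and Features route, the same feature toggle is exposed through DISM from an elevated prompt. This is a sketch only - the feature name below is an assumption, so confirm it against the /Get-Features output first:

    # List optional features and find the exact Media Center entry
    dism /online /Get-Features | findstr /i media
    # Disable the feature (name assumed; use the one reported above)
    dism /online /Disable-Feature /FeatureName:MediaCenter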

    Read the article

  • So You Want to Be a Social Media Director

    - by Mike Stiles
    Do you want to be a Social Media Director? Some say the title is already losing its relevance; that social should be a basic skill that is required and used no matter what your position is inside the enterprise. I suppose that's visionary, and a fun thing for thought leaders to say. But in the vast majority of business organizations, we're so far away from that reality that the thought of not having someone driving social's implementation and guiding its proper usage conjures up images of anarchy. That said, social media has become so broad, so catch-all, and so extended across business functions, that today's Social Media Director, depending on the size of their staff, must make jacks-of-all-trades look like one-trick ponies. Just as the purview of the CMO has grown all-encompassing, the disciplines required of their heads of social are stacking up. Master of Content Every social pipeline you build must stay filled, with quantity and quality. Content takes time, and the job never stops. Never. And no, it's not true that anybody can write. Master of Customer Experience You must have a passion for hearing from customers and making them really happy. Master of PR You must know how to communicate and leverage the trust you've built when crises strike. Couldn't hurt to be a Master of Politics. Master of Social Technology So many social management tools on the market. You have to know what social tech ecosystem makes sense and avoid piecemeal point solutions. Master of Business Development Social for selling and prospecting is hot, and you have to know how to use social to do it. Master of Analytics Nothing else matters if you can't prove social is helping the brand. That's right, the creative content guy also has to be a math and stats geek. Good luck with that. Master of Paid Media You've got to learn the language, learn the tactics, learn the vendors and learn how to measure results. Master of Education Guess who gets to teach everyone who has no clue how to use social for business. Master of Personal Likability You'll be leading the voice, tone, image and personality of the brand. If you don't instinctively know how to be liked by actual people, the brand will be starting from a deficit. How deep must you go in this parade of masteries? Again, that depends on your employer's maturity level in social. Serious players recognize these as distinct disciplines requiring true experts for maximum effect. Less serious players will need you to execute personally in many of these areas. Do the best you can, and try to grow quickly at each. If you're the sole person executing all social…well…you're in the game of managing expectations and trying to socially educate your employer. The good news is, you should be making a certifiable killing. If you're alone and your salary is modest, it's time to understand how many brands out there crave what you've mastered. Not to push back against thought leaders, but the need for brand social leadership has not gone away…not even a little bit. @mikestiles @oraclesocial Photo: Stefan Wagner, freeimages.com

    Read the article

  • How do developers get rid of silly requirements?

    - by sugar
    Hmm! First of all, let me give you the note of the requirement, so that you can have an idea of what kind of problem I am facing. Words From Project Manager: Hey! Sugar, I am assigning you a task for developing a framework. This framework is supposed to be developed for all iOS applications. Please go through the brief of the required framework. It should be able to detect the thickness of my thumb. It should be able to detect whether the user is using a thumb or fingers. If the user is using thumbs/fingers, the framework should calculate the size of the thumb/fingers. Once the size has been calculated, all elements of the user interface should be arranged & resized automatically. (Not specified how & where, as it's a framework - it should be smart enough to arrange automatically.) If the thumb size is larger, elements should get arranged near the center area of the iPad/iPhone. If the thumb size is smaller, elements should get arranged near the corners of the iPad/iPhone. If the thumb size is larger, fonts of all elements should get smaller. (Assuming larger thumb = aged person.) If the thumb size is smaller, fonts of all elements should get larger. (Assuming smaller thumb = low-aged person.) Summary: This framework is required for creating user-friendly user interfaces programmatically. We need to develop a very developer-friendly framework. The framework should be developed in such a way that we can use it in as many projects as needed. Well, I am a developer. What I want to have as an answer is as follows. How do I describe to them that their way of thinking is a bit ridiculous? How do I explain to them that we can better concentrate on developing actual projects? How do I convince them that this kind of thing, even if possible, is not recommended to develop? How do I say NO to this politely, gently & respectfully? What should I say, so that they cannot point at my experience? (e.g. you are a 3-years-experienced guy & you must have the abilities to develop such things) Feeling horror. Please help. Thanks in advance, Sugar. Note: Please help me to tag this question properly. I am stuck & this is a real situation. Frustrated & tensed. You guys might have faced such requirements from the top level; requesting you to help with your experience. Well! I came to know that those TOP-LEVEL guys don't have any idea of iPad, iPhone, Apple etc. I would do one thing: Sir, before we go further with framework development, it is strongly recommended to read the Apple Human Interface Guidelines.

    Read the article

  • SOA Community Newsletter October 2013

    - by JuergenKress
    Dear SOA & BPM Partner Community member, Our October newsletter edition focuses on Oracle OpenWorld 2013: highlights, keynotes and all presentations. Thanks to all partners who made the conference a huge success. If you could not come to San Francisco, you will find all the details within this newsletter. As the newsletter edition contains a lot of content, we have three sections - SOA, BPM & ACM, and AppAdvantage & UX. Make sure you share your content with the community, best via twitter @soacommunity #soacommunity! What is new in SOA Suite 12c? At OOW the product management team demonstrated some of the key features of the upcoming version. The important SOA topics are mobile integration and cloud integration - make sure you re-use your existing SOA platform! Bruce Tierney showcased the Agilent mobile integration, and you can try the new Mobile Order Management for EBS GSE Demo using middleware technology. On cloud integration the product management team presented several OOW sessions and published two whitepapers. As SOA becomes mature, the awareness of SOA Governance continues to rise: see Introducing Oracle Enterprise Repository Express Workflows and watch Luis Weir: Challenges to Implementing SOA Governance. Thanks to Ronald for the SOA Made Simple | Introduction to SOA series; the next article in the Industrial SOA series is SOA and User Interfaces (UI). Have you achieved a successful BPM implementation? Nominate your customer references for the Gartner Business Process Management Excellence Awards 2014. Do you want to showcase the latest BPM Suite? Make sure you use the hosted BPM PS6 (11.1.1.7) demo. Do you want to become an expert in BPM Suite? Attend one of our BPM Bootcamps in Germany, the Netherlands, Spain or the UK! If you cannot make it, we offer plenty of on-demand content: Advanced BPM Scenarios & BPM Architecture Topics & Process Modeling and Life Cycle & Adaptive Case Management & Smart Application Extensibility with Oracle Process Accelerators. I would also recommend watching the on-demand webcast with Bruce Silver & Ajay Khanna for a great introduction to Adaptive Case Management. Thanks to Mark Foster from the A-team for the ACM article series & to Leon Smiers for his blog posts. If you have accomplished a SOA Suite or BPM Suite project and want to become a certified SOA or BPM expert, we are again offering free vouchers to become a certified SOA & BPM expert (limited to partners in Europe, Middle East and Africa). Don't miss this opportunity and become Specialized! Best regards, Jürgen Kress To read the newsletter please visit http://tinyurl.com/soaNewsOctober2013 (OPN Account required) To become a member of the SOA Partner Community please register at http://www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Technorati Tags: newsletter,SOA Community newsletter,SOA Community,Oracle,OPN,Jürgen Kress

    Read the article

  • Do you want to be an ALM Consultant?

    - by Martin Hinshelwood
    Northwest Cadence is looking for our next great consultant! At Northwest Cadence, we have created a work environment that emphasizes excellence, integrity, and out-of-the-box thinking.  Our customers have high expectations (rightfully so) and we wouldn't have it any other way!   Northwest Cadence has some of the most exciting customers I have ever worked with, and even though I have only been here just over a month I have already: Provided training/consulting for 3 government departments Created and taught courseware for delivering Scrum to teams within a high profile multinational company Started presenting Microsoft's ALM Engagement Program. So if you are interested in helping companies build better software more efficiently, then enquire at [email protected]. Application Lifecycle Management (ALM) Consultant An ALM Consultant with a minimum of 8 years of relevant experience with Application Lifecycle Management, Visual Studio (including Visual Studio Team System) and software design is needed. Must provide thought leadership on best practices for enterprise architecture, understand the Microsoft technology solution stack, and have a thorough understanding of enterprise application integration. The ALM Practice Lead will play a central role in designing and implementing the overall ALM Practice strategy, including creating, updating, and delivering ALM courseware and consultancy engagements. This person will also provide project support, deliverables, and quality solutions on Visual Studio Team System that exceed client expectations. Engagements will vary and will involve providing expert training, consulting, mentoring, formulating technical strategies and policies and acting as a "trusted advisor" to customers and internal teams. A sound sense of business and technical strategy is required. Strong interpersonal skills as well as solid strategic thinking are key. The ideal candidate will be capable of envisioning the solution based on the early client requirements, communicating the vision to both technical and business stakeholders, leading teams through implementation, as well as training, mentoring, and hands-on software development. The ideal candidate will demonstrate successful use of both agile and formal software development methods, enterprise application patterns, and effective leadership on prior projects. Job Requirements Minimum Education: Bachelor's Degree (computer science, engineering, or math preferred). Locale / Travel: The Practice Lead position requires an estimated 50% travel, most of which will be in the continental US (a valid national passport must be maintained).  This is a full-time position and will be based in the Kirkland office. Preferred Education: Master's Degree in Information Technology or Software Engineering; premium Microsoft certifications on .NET (MCSD) or MCPD or relevant experience; Microsoft Certified Trainer (MCT) or relevant experience. Minimum Experience and Skills: 7+ years of experience with business information systems integration or custom business application design and development in a professional technology consulting, corporate MIS or software development environment. Essential Duties & Responsibilities: Provide training, consulting, and mentoring to organizations on topics that include Visual Studio Team System and ALM. Create content, including labs and demonstrations, to be delivered as training classes by Northwest Cadence employees. Lead development teams through the complete ALM and/or Visual Studio Team System solution.
    Be able to communicate in detail how a solution will integrate into the larger technical problem space for large, complex enterprises. Define technical solution requirements. Provide guidance to the customer and project team with respect to technical feasibility, complexity, and level of effort required to deliver a custom solution. Ensure that the solution is designed, developed and deployed in accordance with the agreed upon development work plan. Create and deliver weekly status reports of training and/or consulting progress. Engagement Responsibilities: · Have a strong desire to provide thought leadership related to technology and to help grow the business. · Work effectively and professionally with employees at all levels of a customer's organization. · Have strong verbal and written communication skills. · Have effective presentation, organizational and planning skills. · Have effective interpersonal skills and the ability to work in a team environment. Enquire at [email protected]

    Read the article

< Previous Page | 75 76 77 78 79 80 81 82 83 84 85 86  | Next Page >