Search Results

Search found 37989 results on 1520 pages for 'software as a service'.


  • Controlling server configurations with IPS

    - by barts
    I recently received a customer question regarding how they could best control which packages and which versions were used on their production Solaris 11 servers. They had considered pointing each server at its own software repository - a common initial approach. A simpler method leverages one of the dependency mechanisms we introduced with Solaris 11, but it is not immediately obvious to most people.
    Typically, most internal IT departments qualify particular versions for production use. What this customer wanted to do was ensure that their operations staff only installed internally qualified versions of Solaris on their servers. The easiest way of doing this is to leverage the 'incorporate' type of dependency in a small package defined for each server type. From the reference "Packaging and Delivering Software With the Image Packaging System in Oracle® Solaris 11.1":
    "The incorporate dependency specifies that if the given package is installed, it must be at the given version, to the given version accuracy. For example, if the dependent FMRI has a version of 1.4.3, then no version less than 1.4.3 or greater than or equal to 1.4.4 satisfies the dependency. Version 1.4.3.7 does satisfy this example dependency. The common way to use incorporate dependencies is to put many of them in the same package to define a surface in the package version space that is compatible. Packages that contain such sets of incorporate dependencies are often called incorporations. Incorporations are typically used to define sets of software packages that are built together and are not separately versioned."
    The incorporate dependency is heavily used in Oracle Solaris to ensure that compatible versions of software are installed together. An example incorporate dependency is:
        depend type=incorporate fmri=pkg:/driver/network/ethernet/[email protected],5.11-0.175.0.0.0.2.1
    So, to make sure only qualified versions are installed on a server, create a package that will be installed on the machines to be controlled. This package will contain an incorporate dependency on the "entire" package, which controls the various components used to build Solaris. Every time a new version of Solaris has been qualified for production use, create a new version of this package specifying the new version of "entire" that was qualified. Once this new control package is available in the repositories configured on the production server, the pkg update command will update that system to the specified version. Unless a new version of the control package is made available, pkg update will report that no updates are available, since no version of the control package can be installed that satisfies the incorporate constraint.
    Note that if desired, the same package can be used to specify which packages must be present on the system by adding either "require" or "group" dependencies; the latter permits removal of some of the packages, the former does not. More details on this can be found in either the section 5 pkg man page or the previously mentioned reference document. This technique of using package dependencies to constrain system configuration leverages the SAT solver which is at the heart of IPS, and is basic to how we package Solaris itself.
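    As a rough sketch of what such a control package's manifest could look like - the publisher, package name and version strings below are hypothetical, so substitute the "entire" version your own QA process has actually qualified:
        set name=pkg.fmri value=pkg://mysite/site/production-baseline@1.0,5.11-1.0
        set name=pkg.summary value="Production baseline - pins Solaris to the qualified build"
        depend type=incorporate fmri=pkg:/entire@0.5.11,5.11-0.175.1.0.0.24.2
        depend type=require fmri=pkg:/web/server/apache-22
    Publish a new version of this one small package each time a new Solaris build passes qualification; until then, pkg update on any server carrying it will simply report that nothing newer can be installed.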

    Read the article

  • Sunshine after the iCloud release?

    - by Laila
    "Why should I believe them? They're the ones that brought us MobileMe? It was not our finest hour, but we learned a lot." Steve Jobs June 6th 2011 Apple's new cloud service has been met with uncritical excitement by industry commentators.  It is wonderful what a rename can do.  Apple has had a 'cloud' offering for three years called MobileMe, successor to .MAC and  iTools, so iCloud is now the fourth internet service Apple have attempted. If this had been Microsoft, there would have been catcalls all around the blogosphere.  I'll admit that there is a lot more functionality announced for iCloud than MobileMe has ever managed to achieve, but then almost anything has more functionality than MobileMe.  It's an expensive service (£120 a year in the UK, $90 in the states), launched as far back as  June 9, 2008, that has delivered very little and suffered a string of technical problems; the documentation was mainly  a community effort, built up gradually by the frustrated and angry users. It was supposed to synchronise PC Outlook calendars but couldn't manage Microsoft Exchange (Google could, of course). It used WebDAV to allow Windows users to attach to the filestore, but didn't document how to do it. The method for downloading and uploading files to the cloud-based filestore was ridiculously clunky. It allowed you to post photos on a public site, but forgot to include a way of deleting photos. I could go on with the list, but you can explore the many sites that have flourished to inhabit the support-vacuum left by Apple. MobileMe should have had all the bright new clever things announced for iCloud. Apple dropped the ball, and allowed services such as Flickr to fill the void. However, their PR skills are such that, a name-change later (the .ME.com email address remains), it has turned a rout into a victory, and hundreds of earnest bloggers have been extolling Apple's expertise in cloud matters. This must be frustrating for the other cloud providers who have quietly got the technology working right. I wish iCloud well, even though I resent the expensive mess they made of MobileMe. Apple promise that iCloud will sync files, apps, app data, and media across all the different iOS5 devices, Macs, and PCs. It also hopes to sync music across devices, but not video content. They've offered existing MobileMe users free use of the MobileMe service for a year as the product is morphed, and they will be able to transfer to iCloud when it is launched in the autumn.  On June 30, 2012, MobileMe will die, and Apple's iWeb is also soon to join iTools and .MAC in the hereafter. So why get excited about iCloud? That all depends on the level of PC integration. Whereas iOS5 machines will be full participants in the new world of data-sharing (Sorry iPod Touch users) what about .NET libraries? There is talk of synchronising 'My Pictures' libraries with iOS5 and iMac machines, but little more detail as yet. Apple has a lot to prove with iCloud and anyone with actual experience of their past attempts to get into cloud services will be wary.

    Read the article

  • Communication between state machines with hidden transitions

    - by slartibartfast
    The question emerged for me in embedded programming but I think it can be applied to quite a number of general networking situations, e.g. when a communication partner fails. Assume we have an application logic (a program) running on a computer and a gadget connected to that computer via e.g. a serial interface like RS232. The gadget has a red/green/blue LED and a button which disables the LED. The LED's color can be driven by software commands over the serial interface and the state (red/green/blue/off) is read back and causes a reaction in the application logic. Asynchronous behaviour of the application logic with regard to the LED color, down to a certain delay (depending on the execution cycle of the application), is tolerated.
    What we essentially have is a resource (the LED) which cannot be reserved and handled atomically by software, because the (organic) user can at any time press the button to interfere with/break the software attempt to switch the LED color. Stripping this example of its physical outfit, I dare to say that we have two communicating state machines A (application logic) and G (gadget), where G executes state changes unbeknownst to A (and also the other way round, but this is not significant in our example) and only A can be modified at a reasonable price. A needs to see the reaction and state of G in one piece of information which may be (slightly) outdated but not inconsistent with respect to the short time window when this information was generated on the side of G.
    What I am looking for is a concise method to handle such a situation in embedded software (i.e. no layer/framework like CORBA etc. available): a programming technique which is able to map the complete behaviour of both participants onto classical interfaces of a classical programming language (C in this case). To complicate matters (or rather, to generalize), a simple high-frequency communication cycle of A to G and back (IOW: A is rapidly polling G) is out of focus because of technical restrictions (delay of serial com, A not always active, etc.).
    What I currently see as a general solution is:
    - the application logic A as one thread of execution
    - an adapter object (proxy) PG (presenting G inside the computer), together with the serial driver, as another thread
    - a communication object between the two (A and PG) which is transactionally safe to exchange
    The two execution contexts (threads) on the computer may be multi-core or just interrupt driven or tasks in an RTOS. The com object contains the following data:
    - suspected state (written by A): effectively a member of the power set of states in G (in our case: red, green, blue, off, red_or_green, red_or_blue, red_or_off... etc.)
    - command data (written by A): test_if_off, switch_to_red, switch_to_green, switch_to_blue
    - operation status (written by PG): operation_pending, success, wrong_state, link_broken
    - new state (written by PG): red, green, blue, off
    The idea of the com object is that A writes whichever (set of) state it thinks G is in, together with a command (example: suspected state="red_or_green", command: "switch_to_blue"). Notice that the commands issued by A will not work if the user has switched off the LED, and A needs to know this. PG will pick up such a com object and try to send the command to G, receive its answer (or a timeout) and set the operation status and new state accordingly. A will take back the object once it is no longer at operation_pending and can react to the outcome.
The com object could be separated of course (into two objects, one for each direction) but I think it is convenient in nearly all instances to have the command close to the result. I would like to have major flaws pointed out or hear an entirely different view on such a situation.
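    To make the shape of that com object concrete, here is a minimal C sketch; all type and field names are invented for illustration and are not prescribed by the question:
        /* Com object exchanged between application logic A and proxy PG (illustrative names). */
        typedef enum { G_RED, G_GREEN, G_BLUE, G_OFF } g_state_t;
        typedef enum { CMD_TEST_IF_OFF, CMD_SWITCH_TO_RED, CMD_SWITCH_TO_GREEN, CMD_SWITCH_TO_BLUE } g_command_t;
        typedef enum { OP_PENDING, OP_SUCCESS, OP_WRONG_STATE, OP_LINK_BROKEN } op_status_t;
        typedef struct {
            unsigned    suspected_states; /* bitmask over g_state_t: the "power set" of states A thinks G may be in */
            g_command_t command;          /* written by A */
            op_status_t status;           /* written by PG */
            g_state_t   new_state;        /* written by PG; valid once status != OP_PENDING */
        } com_object_t;
        /* A fills in suspected_states and command, e.g. "red or green -> switch to blue":     */
        /*   obj.suspected_states = (1u << G_RED) | (1u << G_GREEN);                           */
        /*   obj.command = CMD_SWITCH_TO_BLUE;  obj.status = OP_PENDING;                       */
        /* then hands the object to PG (single-slot mailbox or RTOS queue) and reads new_state */
        /* only after status has left OP_PENDING.                                              */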

    Read the article

  • New spreadsheet accompanying SmartAssembly 6.0 provides statistics for prioritizing bug fixes

    - by Jason Crease
    One problem developers face is how to prioritize the many voices providing input into software bugs. If there is something wrong with a function that is the darling of a particular user, he or she tends to want action - now! The developer's dilemma is how to ascertain that the problem is major or minor, and when it should be addressed. Now there is a new spreadsheet accompanying SmartAssembly that provides exactly that information in an objective manner. This might upset those used to getting their way by being the loudest or pushiest, but ultimately it will ensure that the biggest problems get the priority they deserve. Here's how it works: Feature Usage Reporting (FUR) in SmartAssembly 6.0 provides a wealth of data about how your software is used by its end-users, but in the SmartAssembly UI the data isn't mined to its full extent. The new Excel spreadsheet for FUR extracts statistics from that data and presents them in easy-to-understand forms. I developed the spreadsheet feature in Microsoft Excel, using a fair amount of VBA. The spreadsheet connects directly to the database which stores the feature-usage data, and shows a wide variety of statistics and tables extracted from that data.  You want to know what percentage of users have used the 'Export as XML' button?  No problem.  How popular is v5.3 is compared to v5.1?  There's graphs for that. You need to know whether you have more users in Russia or Brazil? There's a big pie chart for that. I recently witnessed the spreadsheet in use here at Red Gate Software. My bug is exposed as minor While testing new features in .NET Reflector, I found a usability bug in the Refresh button and filed it in the Red Gate bug-tracking system. The bug was labelled "V.NEXT MINOR," which means it would be fixed in the next point release. Although I'm a professional tester, I'm not much different than most software users when they discover a bug that affects them personally: I wanted it fixed immediately. There was an ulterior motive at play here, of course. I would get to see my colleagues put the spreadsheet to work. The Reflector team loaded up the spreadsheet to view the feature-usage statistics that SmartAssembly collected for the refresh button. The resulting statistics showed that only 8% of users have ever pressed the Refresh button, and only 2.6% of sessions involve pressing the button. When Refresh is used, it's only pressed on average 1.6 times a session, with a maximum of 8 times during a session. This was in stark contrast to what I was doing as a conscientious tester: pressing it dozens of times per session. The spreadsheet provides evidence that my bug was a minor one. On to more serious things Based on the solid evidence uncovered by the spreadsheet, the Reflector team concluded that my experience does not represent that of the vast majority of Reflector's recorded users. The Reflector team had ample data to send me back to my desk and keep the bug classified as "V.NEXT MINOR." The team then went back to fixing more serious bugs. If I'm in the shoes of the user, I might not be thoroughly happy, but I cannot deny that the evidence clearly placed me in a very small minority. Next time I'm hoping the spreadsheet will prove that my bug is more important. Find out more about Feature-Usage Reporting here. The spreadsheet is available for free download here.

    Read the article

  • Why We Do What We Do. (Part 3 of 5 Part Series on JDE 5G Postponed)

    - by Kem Butller-Oracle
    By Lyle Ekdahl - Oracle JD Edwards Sr. VP General Manager  In the closing of part two of this 5 part blog series, I stated that in the next installment I would explore the expected results of the digital overdrive era and the impact it will have on our economy. While I have full intentions of writing on that topic, I am inspired today to write about something that is top of mind. It’s top of mind because it has come up several times recently conversations with my Oracle’s JD Edwards team members, with customers and our partners, plus I feel passionately about why I do what I do…. It is not what we do but why we do that thing that we do Do you know what you do? For the most part, I bet you could tell me what you do even if your work has changed over the years.  My real question is, “Do you get excited about what you do, and are you fulfilled? Does your work deliver a sense of purpose, a cause to work for, and something to believe in?”  Alright, I guess that was not a single question. So let me just ask, “Why?” Why are you here, right now? Why do you get up in the morning? Why do you go to work? Of course, I can’t answer those questions for you but I can share with you my POV.   For starters, there are several things that drive me. As many of you know by now, I have a somewhat competitive nature but it is not solely the thrill of winning that actually fuels me. Now don’t get me wrong, I do like winning occasionally. However winning is only a potential result of competing and is clearly not guaranteed. So why compete? Why compete in business, and particularly why in this Enterprise Software business?  Here’s why! I am fascinated by creative and building processes. It is about making or producing things, causing something to come into existence. With the right skill, imagination and determination, whether it’s art or invention; the result can deliver value and inspire. In both avocation and vocation I always gravitate towards the create/build processes.  I believe one of the skills necessary for the create/build process is not just the aptitude but also, and especially, the desire and attitude that drives one to gain a deeper understanding. The more I learn about our customers, the more I seek to understand what makes the successful and what difficult issues cause them to struggle. I like to look for the complex, non-commodity process problems where streamlined design and modern technology can provide an easy and simple solution. It is especially gratifying to see our customers use our software to increase their own ability to deliver value to the market. What an incredible network effect! I know many of you share this customer obsession as well as the create/build addiction focused on simple and elegant design. This is what I believe is at the root of our common culture.  Are JD Edwards customers on a whole different than other ERP solutions’ customers? I would argue that for the most part, yes, they are. They selected our software, and our software is different. Why? Because I believe that the create/build process will generally result in solutions that reflect who built it and their culture. And a culture of people focused on why they create/build will attract different customers than one that is based on what is built or how the solution is delivered. In the past I have referred to this idea as character of the customer, and it transcends industry, size and run rate. Now some would argue that JD Edwards has some customers who are characters. But that is for a different post. 
As I have told you before, the JD Edwards culture is unique, and its resulting economy is valuable and deserving of our best efforts. 

    Read the article

  • Why do I need two Instances in Windows Azure?

    - by BuckWoody
    Windows Azure as a Platform as a Service (PaaS) means that there are various components you can use in it to solve a problem:
    - Compute “Roles” - Computers running an OS and optionally IIS - you can have more than one "Instance" of a given Role
    - Storage - Blobs, Tables and Queues for Storage
    - Other Services - Things like the Service Bus, Azure Connection Services, SQL Azure and Caching
    It’s important to understand that some of these services are Stateless and others maintain State. Stateless means (at least in this case) that a system might disappear from one physical location and appear elsewhere. You can think of this as a cashier at the front of a store. If you’re in line, a cashier might take his break, and another person might replace him. As long as the order proceeds, you as the customer aren’t really affected except for the few seconds it takes to change them out. The cashier function in this example is stateless.
    The Compute Role Instances in Windows Azure are Stateless. To upgrade hardware, because of a fault or many other reasons, a Compute Role's Instance might stop on one physical server, and another will pick it up. This is done through the controlling fabric that Windows Azure uses to manage the systems. It’s important to note that storage in Azure does maintain State. Your data will not simply disappear - it is maintained - in fact, it’s maintained three times in a single datacenter and all those copies are replicated to another for safety. Going back to our example, storage is similar to the cash register itself. Even though a cashier leaves, the record of your payment is maintained.
    So if a Compute Role Instance can disappear and re-appear, the things running on that first Instance would stop working. If you wrote your code in a Stateless way, then another Role Instance simply re-starts that transaction and keeps working, just like the other cashier in the example. But if you only have one Instance of a Role, then when the Role Instance is re-started, or when you need to upgrade your own code, you can face downtime, since there’s only one. That means you should deploy at least two of each Role Instance not only for scale to handle load, but so that the first “cashier” has someone to replace them when they disappear. It’s not just a good idea - to gain the Service Level Agreement (SLA) for our uptime in Azure it’s a requirement. We point this out right in the Management Portal when you deploy the application.
    When you deploy a Role Instance you can also set the “Upgrade Domain”. Placing Roles on separate Upgrade Domains means that you have a continuous service whenever you upgrade (more on upgrades in another post) - the process looks like this for two Roles. This example covers the scenario for upgrade, so you have four roles total - One Web and one Worker running the "older" code, and one of each running the new code. In all those Roles you want at least two instances, and this example shows that you're covered for High Availability and upgrade paths.
    The take-away is this - always plan for forward-facing Roles to have at least two copies. For Worker Roles that do background processing, there are ways to architect around this number, but it does affect the SLA if you have only one.
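    For the classic Cloud Service model, that advice comes down to one setting - the instance count in the service configuration. A minimal, abbreviated sketch (the service and role names are hypothetical):
        <!-- ServiceConfiguration.cscfg (abbreviated; names are illustrative) -->
        <ServiceConfiguration serviceName="MyCloudService">
          <Role name="MyWebRole">
            <!-- at least two instances, per the SLA guidance above -->
            <Instances count="2" />
          </Role>
        </ServiceConfiguration>
    The same count can also be changed later through the Management Portal configuration without redeploying the package.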

    Read the article

  • What is Devops and Why You Should Care?

    - by Tanu Sood
    According to Wikipedia, DevOps (a portmanteau of development and operations) is a software development method that stresses communication, collaboration and integration between software developers and information technology (IT) professionals. DevOps is a response to the interdependence of software development and IT operations. It aims to help an organization rapidly produce software products and services.
    That definition of DevOps is the "what." The "why" is even easier. Standardized development methodology, clear communication and documented processes, supported by a standards-based, proven middleware platform, improve application development and management cycles, bring agility and provide greater availability and security to your IT infrastructure. Clearly, DevOps is about connecting people, products and processes. Ultimately, DevOps is about connecting IT to business.
    If you haven’t already seen it, do check out Bob Rhubart’s feature on DevOps in the latest issue of Oracle Magazine. And for more information on Oracle Fusion Middleware, the #1 application infrastructure foundation, visit us on oracle.com.

    Read the article

  • Web Developer - How to enhance my skillset?

    - by atif089
    First of all pardon my English. I am not a native English speaker I have been a Web Developer for the past 4 years. In these 4 years I have spent my time on the internet to learn things. My current skillset comprises of HTML CSS PHP MySQL jQuery (I would not say js and rather say jQuery because I am good at using jQuery and bad with plain javascript.) The above things seemed like an easier part of my life as I quickly learned them. But now I would really like to enhance my skillset and I am pretty confused which way to move ahead considering that I have to learn things using the web and references on my own. Design My first option is towards design. Shall I get started with design and start using Adobe Illustrator, Photoshop, Flash, Flex. Designing along with my previous skills looks like a money maker to me. As both are co-related to each other when web design is considered. And its easier to learn the first 2 and I hope I can get tutorials for the last 2 as well. Marketing A lot of my existing clients asked me if I do SEO. So this looked as a good field to me as well. I cannot estimate the scope of SEO but I assume it has a long future. Since I am business minded as well and there are a lot of tutorials around, should I start with SEO, SEM, Social Media, PPC or whatever it consists of. Software Development The complex plight and hardest thing (perhaps) but the easiest way to find a decent job in my location. If I go for software development what platform should be that I should be ideally going after? Should it be C# for windows development, or ASP.NET (once again enhances my skill set), J2EE (there are a lot of jobs for J2EE developers here) or plain C and C++. Also I think it is difficult to learn software languages right from Hello World, using internet? I have no clue how I learned PHP but I am sort of a pro now, but these other languages seems like a disaster to me? I cant figure out the reason if its because PHP is easier or there was a lot of tutorials around for PHP. Anyways is it also possible to learn software development right from Hello World using the web? Database / Server (Linux) / Network Administration Seems like a job with a decent pay but less number of jobs and a bit harder to learn online. (not sure) What should be the right track I should move ahead. P.S - Age is not a constraint for me as I am between 20-21, and I come from an IT background. I know quite little basics about C (upto structures) C++ (upto objects, I was not able to understand templates) Core Java (some basics and OOP concept) RDBMS Visual Basic 6 (used to do this long back) UNIX (a bunch of commands like who, finger, chmod, ls and a bit of #bash) Or is there anything else that I left out? I need you guys to please give me a feedback and the reason why I should select that field.

    Read the article

  • ARTS Reference Model for Retail

    - by Sanjeev Sharma
    Consider a hypothetical scenario where you have been tasked to set up retail operations for a electronic goods or daily consumables or a luxury brand etc. It is very likely you will be faced with the following questions: What are the essential business capabilities that you must have in place?  What are the essential business activities under-pinning each of the business capabilities, identified in Step 1? What are the set of steps that you need to perform to execute each of the business activities, identified in Step 2? Answers to the above will drive your investments in software and hardware to enable the core retail operations. More importantly, the choices you make in responding to the above questions will several implications in the short-run and in the long-run. In the short-term, you will incur the time and cost of defining your technology requirements, procuring the software/hardware components and getting them up and running. In the long-term, as you grow in operations organically or through M&A, partnerships and franchiser business models  you will invariably need to make more technology investments to manage the greater complexity (scale and scope) of business operations.  "As new software applications, such as time & attendance, labor scheduling, and POS transactions, just to mention a few, are introduced into the store environment, it takes a disproportionate amount of time and effort to integrate them with existing store applications. These integration projects can add up to 50 percent to the time needed to implement a new software application and contribute significantly to the cost of the overall project, particularly if a systems integrator is called in. This has been the reality that all retailers have had to live with over the last two decades. The effect of the environment has not only been to increase costs, but also to limit retailers' ability to implement change and the speed with which they can do so." (excerpt taken from here) Now, one would think a lot of retailers would have already gone through the pain of finding answers to these questions, so why re-invent the wheel? Precisely so, a major effort began almost 17 years ago in the retail industry to make it less expensive and less difficult to deploy new technology in stores and at the retail enterprise level. This effort is called the Association for Retail Technology Standards (ARTS). Without standards such as those defined by ARTS, you would very likely end up experiencing the following: Increased Time and Cost due to resource wastage arising from re-inventing the wheel i.e. re-creating vanilla processes from scratch, and incurring, otherwise avoidable, mistakes and errors by ignoring experience of others Sub-optimal Process Efficiency due to narrow, isolated view of processes thereby ignoring process inter-dependencies i.e. optimizing parts but not the whole, and resulting in lack of transparency and inter-departmental finger-pointing Embracing ARTS standards as a blue-print for establishing or managing or streamlining your retail operations can benefit you in the following ways: Improved Time-to-Market from parity with industry best-practice processes e.g. ARTS, thus avoiding “reinventing the wheel” for common retail processes and focusing more on customizing processes for differentiations, and lowering integration complexity and risk with a standardized vocabulary for exchange between internal and external i.e. 
partner systems Lower Operating Costs by embracing the ARTS enterprise-wide process reference model for developing and streamlining retail operations holistically instead of a narrow, silo-ed view, and  procuring IT systems in compliance with ARTS thus avoiding IT budget marginalization While parity with industry standards such as ARTS business process model by itself does not create a differentiation, it does however provide a higher starting point for bridging the strategy-execution gap in setting up and improving retail operations.

    Read the article

  • What are they buying – work or value?

    - by Jamie Kurtz
    When was the last time you ordered a pizza like this: “I want the high school kid in the back to do the following… make a big circle with some dough, curl up the edges, then put some sauce on it using a small ladle, then I want him to take a handful of shredded cheese from the metal container and spread it over the circle and sauce, then finally I want the kid to place 36 pieces of pepperoni over the top of the cheese” ?? Probably never. My typical pizza order usually goes more like this: “I want a large pepperoni pizza”. In the world of software development, we try so hard to be all things agile. We: Write lots of unit tests We refactor our code, then refactor it some more We avoid writing lengthy requirements documents We try to keep processes to a minimum, and give developers freedom And we are proud of our constantly shifting focus (i.e. we’re “responding to change”) Yet, after all this, we fail to really lean and capitalize on one of agile’s main differentiators (from the twelve principles behind the Agile Manifesto): “Working software is the primary measure of progress.” That is, we foolishly commit to delivering tasks instead of features and bug fixes. Like my pizza example above, we fall into the trap of signing contracts that bind us to doing tasks – rather than delivering working software. And the biggest problem here… by far the most troubling outcome… is that we don’t let working software be a major force in all the work we do. When teams manage to ruthlessly focus on the end product, it puts them on the path of true agile. It doesn’t let them accidentally write too much documentation, or spend lots of time and money on processes and fancy tools. It forces early testing that reveals problems in the feature or bug fix. And it forces lots and lots of customer interaction.  Without that focus on the end product as your deliverable… by committing to a list of tasks instead of a list features and bug fixes… you are doomed to NOT be agile. You will end up just doing stuff, spending time on the keyboard, burning time on timesheets. Doing tasks doesn’t force you to minimize documentation. It makes it much harder to respond to change. And it will eventually force you and the client into contract haggling. Because the customer isn’t really paying you to do stuff. He’s ultimately paying for features and bug fixes. And when the customer doesn’t get what they want, responding with “well, look at the contract - we did all the tasks we committed to” doesn’t typically generate referrals or callbacks. In short, if you’re trying to deliver real value to the customer by going agile, you will most certainly fail if all you commit to is a list of things you’re going to do. Give agile what it needs by committing to features and bug fixes – not a list of ToDo items. So the next time you are writing up a contract, remember that the customer should be buying this: Not this:

    Read the article

  • Fast Data - Big Data's achilles heel

    - by thegreeneman
    At OOW 2013 in Mark Hurd and Thomas Kurian's keynote, they discussed Oracle's Fast Data software solution stack and discussed a number of customers deploying Oracle's Big Data / Fast Data solutions and in particular Oracle's NoSQL Database.  Since that time, there have been a large number of request seeking clarification on how the Fast Data software stack works together to deliver on the promise of real-time Big Data solutions.   Fast Data is a software solution stack that deals with one aspect of Big Data, high velocity.   The software in the Fast Data solution stack involves 3 key pieces and their integration:  Oracle Event Processing, Oracle Coherence, Oracle NoSQL Database.   All three of these technologies address a high throughput, low latency data management requirement.   Oracle Event Processing enables continuous query to filter the Big Data fire hose, enable intelligent chained events to real-time service invocation and augments the data stream to provide Big Data enrichment. Extended SQL syntax allows the definition of sliding windows of time to allow SQL statements to look for triggers on events like breach of weighted moving average on a real-time data stream.    Oracle Coherence is a distributed, grid caching solution which is used to provide very low latency access to cached data when the data is too big to fit into a single process, so it is spread around in a grid architecture to provide memory latency speed access.  It also has some special capabilities to deploy remote behavioral execution for "near data" processing.   The Oracle NoSQL Database is designed to ingest simple key-value data at a controlled throughput rate while providing data redundancy in a cluster to facilitate highly concurrent low latency reads.  For example, when large sensor networks are generating data that need to be captured while analysts are simultaneously extracting the data using range based queries for upstream analytics.  Another example might be storing cookies from user web sessions for ultra low latency user profile management, also leveraging that data using holistic MapReduce operations with your Hadoop cluster to do segmented site analysis.  Understand how NoSQL plays a critical role in Big Data capture and enrichment while simultaneously providing a low latency and scalable data management infrastructure thru clustered, always on, parallel processing in a shared nothing architecture. Learn how easily a NoSQL cluster can be deployed to provide essential services in industry specific Fast Data solutions. See these technologies work together in a demonstration highlighting the salient features of these Fast Data enabling technologies in a location based personalization service. The question then becomes how do these things work together to deliver an end to end Fast Data solution.  The answer is that while different applications will exhibit unique requirements that may drive the need for one or the other of these technologies, often when it comes to Big Data you may need to use them together.   You may have the need for the memory latencies of the Coherence cache, but just have too much data to cache, so you use a combination of Coherence and Oracle NoSQL to handle extreme speed cache overflow and retrieval.   Here is a great reference to how these two technologies are integrated and work together.  Coherence & Oracle NoSQL Database.   On the stream processing side, it is similar as with the Coherence case.  
As your sliding windows get larger, holding all the data in the stream can become difficult and out of band data may need to be offloaded into persistent storage.  OEP needs an extreme speed database like Oracle NoSQL Database to help it continue to perform for the real time loop while dealing with persistent spill in the data stream.  Here is a great resource to learn more about how OEP and Oracle NoSQL Database are integrated and work together.  OEP & Oracle NoSQL Database.

    Read the article

  • 4 Key Ingredients for the Cloud

    - by Kellsey Ruppel
    It's a short week here with the US Thanksgiving Holiday. So, before we put on our stretch pants and get ready to belly up to the dinner table for turkey, stuffing and mashed potatoes, let's spend a little time this week talking about the Cloud (kind of like the feathery whipped goodness that tops the infamous Thanksgiving pumpkin pie!) But before we dive into the Cloud, let's do a side by side comparison of the key ingredients for each.
        Cloud                     Whipped Cream
        Application Integration   1 cup heavy cream
        Security                  1/4 cup sugar
        Virtual I/O               1 teaspoon vanilla
        Storage                   Chilled Bowl
    It’s no secret that millions of people are connected to the Internet. And it also probably doesn’t come as a surprise that a lot of those people are connected on social networking sites. Social networks have become an excellent platform for sharing and communication that reflects real world relationships and they play a major part in the everyday lives of many people. Facebook, Twitter, Pinterest, LinkedIn, Google+ and hundreds of others have transformed the way we interact and communicate with one another. Social networks are becoming more than just an online gathering of friends. They are becoming a destination for ideation, e-commerce, and marketing. But it doesn’t just stop there. Some organizations are utilizing social networks internally, integrated with their business applications and processes, and the possibility of social media and cloud integration is compelling. Forrester alone estimates enterprise cloud computing to grow to over $240 billion by 2020.
    It’s hard to find any current IT project today that is NOT considering cloud-based deployments. Security and quality of service concerns are no longer at the forefront; rather, it’s about focusing on the right mix of capabilities for the business. Cloud vs. On-Premise? Policies & governance models? Social in the cloud? Cloud’s increasing sophistication, security in applications, mobility, transaction processing and social capabilities make it an attractive way to manage information. And Oracle offers all of this through the Oracle Cloud and Oracle Social Network.
    Oracle Social Network is a secure private network that provides a broad range of social tools designed to capture and preserve information flowing between people, enterprise applications, and business processes. By connecting you with your most critical applications, Oracle Social Network provides contextual, real-time communication within and across enterprises. With Oracle Social Network, you and your teams have the tools you need to collaborate quickly and efficiently, while leveraging the organization’s collective expertise to make informed decisions and drive business forward. Oracle Social Network is available as part of a portfolio of application and platform services within the Oracle Cloud. Oracle Cloud offers self-service business applications delivered on an integrated development and deployment platform with tools to rapidly extend and create new services. Oracle Social Network is pre-integrated with the Fusion CRM Cloud Service and the Fusion HCM Cloud Service within the Oracle Cloud.
    If you are looking for something to watch as you veg on the couch in a post-turkey dinner hangover, you might consider watching these how-to videos! And yes, it is perfectly ok to have that 2nd piece of pie.

    Read the article

  • PBCS Hyperion Planning in the Cloud PartnerLab 2-Day Training

    - by Mike.Hallett(at)Oracle-BI&EPM
    Objective of the PartnerLab: To help partners engage the interest and commitment of their clients for Oracle Planning and Budgeting Cloud Service projects. This is your unique opportunity to learn how to expand your business with the PBCS Application. This 2-day PartnerLab workshop will enable your team to understand the fundamental concepts of the PBCS Application, the implications of Oracle Public Cloud deployment, and to effectively present and demonstrate PBCS to prospective clients. Participants must already be competent with the on-premise Hyperion Planning application: this training will build on existing expertise to cover SaaS Cloud specific deployment implications and how best to demonstrate this to clients and win services led PBCS implementation engagements.
    Register here now and see full Agenda for 07-08 July 2014 in Oracle Paris – Colombes 15, bd Charles de Gaulle, 92715 Colombes Cedex France
    Register here now and see full Agenda for 15-16 July 2014 in Oracle Italy via Fulvio Testi 136, Cinisello Balsamo, Milan, Italy
    This training is free of charge to OPN Member Partners. This PartnerLab is a 2 day in-class workshop event led by Oracle Pre-Sales subject matter experts. These 2 days consist of discussions, presentations, demonstration and hands-on exercises. Note: the hands-on exercises are in an already installed environment that you can have access to after the event (see more @ Hyperion Demonstration Systems for Partners). The PartnerLab will be delivered in English or local language.
    Mandatory prerequisites for a participant: Please view material available and complete the assessments before you attend the PartnerLab event. Material and assessments cover foundational information about Oracle Hyperion Planning and Oracle Planning and Budgeting Cloud Service.
    View material prior to live PartnerLab:
    - Oracle Hyperion Planning 11 Sales Specialist guided learning path
    - Oracle Hyperion Planning 11 PreSales Specialist guided learning path
    - Oracle Hyperion Planning 11 Implementation Specialist guided learning path
    - Oracle Planning and Budgeting Cloud Service Specialist guided learning path
    - PBCS How-to Videos
    - Learn More at Oracle Planning and Budgeting Cloud Service
    Take and pass these on-line assessments prior to the live PartnerLab training:
    - Oracle Hyperion Planning 11 Sales Specialist on-line exam
    - Oracle Hyperion Planning 11 PreSales Specialist on-line exam

    Read the article

  • I.T. Chargeback : Core to Cloud Computing

    - by Anand Akela
    Contributed by Mark McGill
    Consolidation and Virtualization have been widely adopted over the years to help deliver benefits such as increased server utilization, greater agility and lower cost to the I.T. organization. These are key enablers of cloud, but in themselves they do not provide a complete cloud solution. Building a true enterprise private cloud involves moving from an admin driven world, where the I.T. department is ultimately responsible for the provisioning of servers, databases, middleware and applications, to a world where the consumers of I.T. resources can provision their infrastructure, platforms and even complete application stacks on demand.
    Switching from an admin-driven provisioning model to a user-driven model creates some challenges. How do you ensure that users provisioning resources will not provision more than they need? How do you encourage users to return resources when they have finished with them so that others can use them? While chargeback has existed as a concept for many years (especially in mainframe environments), it is the move to this self-service model that has created a need for a new breed of chargeback applications for cloud. Enabling self-service without some form of chargeback is like opening a shop where all of the goods are free.
    A successful chargeback solution will be able to allocate the costs of shared I.T. infrastructure based on the relative consumption by the users. Doing this creates transparency between the I.T. department and the consumers of I.T. When users are able to understand how their consumption translates to cost they are much more likely to be prudent when it comes to their use of I.T. resources. This also gives them control of their I.T. costs, as moderate usage will translate to a lower charge at the end of the month. Implementing Chargeback successfully creates a win-win situation for I.T. and the consumers. Chargeback can help to ensure that I.T. resources are used for activities that deliver business value. It also improves the overall utilization of I.T. infrastructure, as I.T. resources that are not needed are not left running idle.
    Enterprise Manager 12c provides an integrated metering and chargeback solution for Enterprise Manager Targets. This solution is built on top of the rich configuration and utilization information already available in Enterprise Manager. It provides metering not just for virtual machines, but also for physical hosts, databases and middleware. Enterprise Manager 12c provides metering based on the utilization and configuration of the following types of Enterprise Manager Target:
    - Oracle VM Host
    - Oracle Database
    - Oracle WebLogic Server
    Using Enterprise Manager Chargeback, administrators are able to create a set of Charge Plans that are used to attach prices to the various metered resources. These plans can contain fixed costs (eg. $10/month/database), configuration based costs (eg. $10/month if OS is Windows) and utilization based costs (eg. $0.05/GB of Memory/hour). The self-service user provisioning these resources is then able to view a report that details their usage and helps them understand how this usage translates into cost. Armed with this information, the user is able to determine if the resources are delivering adequate business value based on what is being charged.
    Figure 1: Chargeback in Self-Service Portal
    Enterprise Manager 12c provides a variety of additional interfaces into this data. The administrator can access summary and trending reports.
Summary reports allow the administrator to drill-down through the cost center hierarchy to identify, for example, the top resource consumers across the organization. Figure 2: Charge Summary Report Trending reports can be used for I.T. planning and budgeting as they show utilization and charge trends over a period of time. Figure 3: CPU Trend Report We also provide chargeback reports through BI Publisher. This provides a way for users who do not have an Enterprise Manager login (such as Line of Business managers) to view charge and usage information. For situations where a bill needs to be produced, chargeback can be integrated with billing applications such as Oracle Billing and Revenue Management (BRM). Further information on Enterprise Manager 12c’s integrated metering and chargeback: White Paper Screenwatch Cloud Management on OTN
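    As a rough, hypothetical illustration of how such a plan adds up (the rates and sizes below are examples only, not Oracle pricing): a self-service database that runs for a full month (about 720 hours) with 16 GB of memory allocated, under the sample rates quoted above, would be metered as
        $10 fixed + (720 h x 16 GB x $0.05/GB/h) = $10 + $576 = $586 for the month
    which is exactly the kind of line item the self-service user sees in their usage report, and the kind of number that encourages returning resources that are no longer needed.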

    Read the article

  • Understanding the Value of SOA

    - by Mala Narasimharajan
    Written By: Debra Lilley, ACE Director, Fusion Applications
    Again I want to talk from my area of expertise, Fusion Applications, and talk about their design fundamentals. If you look at the table below and start at the bottom, Oracle have defined all of the business objects, e.g. accounts, people, customers, invoices etc., used by Fusion Applications; each of these objects contains all of the information required and can be expanded if necessary. Then, for each of these business objects, Oracle have created every action that is needed for the applications, e.g. all the actions to create a new customer: checking to see if it exists, credit checking with D&B (Dun & Bradstreet, http://www.dnb.co.uk/), creating the record, notifying those required etc. Each of these actions is a stand-alone web service. Again you can create new actions or subscribe to an externally provided web service, e.g. the D&B check. The diagram also shows that all of the development of Fusion Applications is from their Fusion Middleware offerings. Then the Intelligent Business Process is the order in which you run these actions, and this is Service Orientated Architecture, SOA.
    Not only is SOA used to orchestrate actions within Fusion Applications, it is also used in the integration of Fusion Applications with the rest of the Oracle stable of applications such as EBS, PeopleSoft, JDE and Siebel. The other applications are written with proprietary development tools, so how do they work with SOA? It's a very simple answer: with the introduction of the Oracle SOA platform, each process within these applications was made available to be called as a web service. I won't go into technical detail about how that is done, but what's known as a wrapper was added to allow each of them to act in this way. Finally, at the top of the diagram are the questions that each Fusion Application process must answer, and this is the 'special' sauce that makes them so good, the User Experience, but that is a topic for another day, or you can read about it in my blog http://debrasoracle.blogspot.co.uk/2014/04/going-on-record-about-fusion-apps-cloud.html or Oracle's own UX blog https://blogs.oracle.com/usableapps/
    The concept behind AppAdvantage is not new; the idea that Oracle technology can add value to your Oracle applications investments is pretty fundamental. Nishit Rao, who is in the AppAdvantage team, provided myself and other ACE Directors with demo kits so that we could demonstrate SOA running with the applications. The example I learnt to build was that of the EBS inventory open interface. The simple concept is that request records can be added to a table and an import run that creates these as transactions in inventory. What SOA allows you to do is to add to the table from any source and then run this process automatically, whereas traditionally you had to run the process at regular intervals because you didn't know if the table was empty or not. This may just sound like a different way of doing the same thing, but if the process is critical for your business then the interval had to be very small and the process was potentially run many times unnecessarily. Using SOA, it only happens when necessary, without any delay.
    So in my post today I've talked about how SOA is used with Fusion Applications and in the linking with more traditional applications, but that is only the tip of the iceberg of potential; your applications are just part of your IT systems and SOA can orchestrate your data across all of them; the beauty of open standards.
Debra Lilley, Fusion Champion, UKOUG Board Member, Fusion User Experience Advocate and ACE Director.  Lilley has 18 years experience with Oracle Applications, with E Business Suite since 9.4.1, moving to Business Intelligence Team Lead and Oracle Alliance Director. She has spoken at over 100 conferences worldwide and posts at debrasoraclethoughts

    Read the article

  • Using 3rd Party JavaScript Plugins Hardwired With ‘document.write’

    - by ToStringTheory
    Introduction
    Have you ever had the need to implement a 3rd party JavaScript plugin, but your needs didn’t fit the model and usage defined by the API or documentation of the plugin? Recently I ran into this issue when I was trying to implement a web snapshot plugin into our site. To use their plugin, you had to include a script tag to the plugin on their server with an API key. The second part of the usage was to include a <script> tag around a function call wherever you wanted a snapshot to appear.
    The Problem
    When trying to use the service, the images did not display. I checked a couple of things and didn’t find anything wrong at first. It wasn’t until I looked at the function that was called by the inline script that I found the issue – a call to the webservice, followed by a call to ‘document.write’ in its callback. The solution in which I was trying to implement the plugin happened to be in response to an AJAX call after the document had completely loaded. After the page has loaded, document.write does nothing.
    My first thought for a solution was to just cache the script from the service, and edit it to do something like a return function or callback that I could use to edit the document from. However, I quickly discovered that there is no way to cache the script from the service, as it had a hash in the function where it would call the server. The hash was updated every few seconds/minutes, expiring old hashes. This meant that I wouldn’t be able to edit the script and upload a new version to my server, as the script would not work after a few minutes from originally getting the script from the service.
    Solution
    The solution eluded me until I realized that this was JavaScript I was dealing with. A language designed so that you could do just about anything to any library, function, or object… At this point, the solution was simple – take control of the document.write function. Using a buffer variable, and a simple function call, it is eerily simple to perform:
        //what would have been output to the document
        var buffer = "";
        //store a reference to the real document.write
        var dw = document.write;
        //redefine document.write to store to our buffer
        document.write = function (str) { buffer += str; };
        //execute the function containing calls to document.write
        eval('{function encapsulated in <script></script> tags}');
        //restore the original document.write function (just in case)
        document.write = dw;
    That’s it. Instead of using the script tags where I wanted to include a snapshot, I called a function passing in the URL to the page I wanted a snapshot of. After that last line of code, what would have been output to the document (or not in the case of the ajax call) was instead stored in buffer.
    Conclusion
    While the solution itself is simple, coming from a background much more footed in the .NET platform, I believe that this is a prime example of always keeping the language that you are working in in mind. While this may seem obvious at first, as I KNEW I was in JavaScript, I never thought of taking control of the document.write function because I am more accustomed to the .Net world. I can’t simply replace the functionality of Console.WriteLine.
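    In case it helps to see the pattern packaged up, here is a rough sketch of how it might be wrapped for the AJAX scenario described above; the function and parameter names are invented for illustration and are not part of the plugin's API:
        // Hypothetical wrapper around the buffer trick described in the post
        function renderSnapshot(snapshotScriptBody, targetElement) {
            var buffer = "";
            var dw = document.write;                          // keep the real document.write
            document.write = function (str) { buffer += str; };
            try {
                eval(snapshotScriptBody);                     // plugin's document.write calls land in buffer
            } finally {
                document.write = dw;                          // always restore the original
            }
            targetElement.innerHTML += buffer;                // put the captured markup where we want it
        }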

    Read the article

  • How can I get TFS2010 to run MSDEPLOY for me through MSBUILD?

    - by Simon_Weaver
    There is an excellent PDC talk available here which describes the new MSDEPLOY features in Visual Studio 2010 - as well as how to deploy an application within TFS. You can use MSBUILD within TFS2010 to call through to MSDEPLOY to deploy your package to IIS. This is done by means of parameters to MSBUILD. The talk explains some of the command line parameters such as : /p:DeployOnBuild /p:DeployTarget=MsDeployPublish /p:CreatePackageOnPublish=True /p:MSDeployPublishMethod=InProc /p:MSDeployServiceURL=localhost /p:DeployIISAppPath="Default Web Site" But where is the documentation for this - I can't find any? I've been spending all day trying to get this to work and can't quite get it right and keep ending up with various errors. If I run the package's cmd file it deploys perfectly. But I want to get the whole deployment running through msbuild using these arguments and not a separate call to msdeploy or running the package .cmd file. How can I do this? PS. Yes I do have the Web Deployment Agent Service running. I also have the management service running under IIS. I've tried using both. Args I'm using : /p:DeployOnBuild=True /p:DeployTarget=MsDeployPublish /p:Configuration=Release /p:CreatePackageOnPublish=True /p:DeployIisAppPath=staging.example.com /p:MsDeployServiceUrl=https://staging.example.com:8172/msdeploy.axd /p:AllowUntrustedCertificate=True giving me : C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\Web\Microsoft.Web.Publishing.targets (2660): VsMsdeploy failed.(Remote agent (URL https://staging.example.com:8172/msdeploy.axd?site=staging.example.com) could not be contacted. Make sure the remote agent service is installed and started on the target computer.) Error detail: Remote agent (URL https://staging.example.com:8172/msdeploy.axd?site=staging.example.com) could not be contacted. Make sure the remote agent service is installed and started on the target computer. An unsupported response was received. The response header 'MSDeploy.Response' was '' but 'v1' was expected. The remote server returned an error: (401) Unauthorized.
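    For reference, a sketch of how those properties are typically passed on a single build invocation; the project file name below is hypothetical and the property values are simply the ones already quoted in the question:
        msbuild MyWebApp.csproj /p:Configuration=Release /p:DeployOnBuild=True ^
            /p:DeployTarget=MsDeployPublish /p:CreatePackageOnPublish=True ^
            /p:MsDeployServiceUrl=https://staging.example.com:8172/msdeploy.axd ^
            /p:DeployIisAppPath=staging.example.com /p:AllowUntrustedCertificate=True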

    Read the article

  • Axis2 embedded in my web app is not working

    - by Shamik
    Okay, I lost almost the whole day on this. I have a webapp to which I would like to add Axis2 and start working. I added the AxisServlet in the web.xml file like:
    <servlet> <servlet-name>AxisServlet</servlet-name> <display-name>Apache-Axis Servlet</display-name> <servlet-class>org.apache.axis2.transport.http.AxisServlet</servlet-class> <load-on-startup>2</load-on-startup> </servlet> <servlet-mapping> <servlet-name>AxisServlet</servlet-name> <url-pattern>/services/*</url-pattern> </servlet-mapping>
    I also added a services.xml file like:
    <service name="ReportViewerService"> <description> This is a sample Web Service for illustrating Attachments API of Axis2 </description> <parameter name="ServiceClass">myclass</parameter> <operation name="getReport"> <actionMapping>urn:getReport</actionMapping> <messageReceiver class="org.apache.axis2.rpc.receivers.RPCMessageReceiver"/> </operation> </service>
    The directory structure is:
    WEB-INF
    |- conf
    |  |- axis2.xml
    |- lib
    |  |- all libs
    |- services
    |  |- ReportViewerService
    |     |- META-INF
    |        |- services.xml
    |- web.xml
    The problem is that after all of this, the service endpoint never comes up; I cannot see the WSDL file at http://localhost:8080/BOReportingServer/services/ReportViewerService?wsdl -- this gives an exception like:
    Throwable occurred: javax.servlet.ServletException: File "/axis2-web/listSingleService.jsp" not found

    Read the article

  • Unable to install SQL 2008 on Windows 7

    - by Axel
    SQL 2008 install hangs on Windows 7.
    The story: trying to install SQL 2008 on Windows 7 hangs on SqlEngineDBStartconfigAction_install_configrc_Cpu32.
    What I tried:
    - Uninstall hangs on validation
    - Manual uninstall using msiinv.exe and msiexec /x works
    - Added SQL service accounts to local admins - no help
    - Turned off UAC - no help
    Last lines in the setup log:
    2010-04-01 16:18:05 SQLEngine: : Checking Engine checkpoint 'GetSqlServerProcessHandle'
    2010-04-01 16:18:05 SQLEngine: --SqlServerServiceSCM: Waiting for nt event 'Global\sqlserverRecComplete' to be created
    2010-04-01 16:18:07 SQLEngine: --SqlServerServiceSCM: Waiting for nt event 'Global\sqlserverRecComplete' or sql process handle to be signaled
    2010-04-01 16:18:07 SQLEngine: : Checking Engine checkpoint 'WaitSqlServerStartEvents'
    2010-04-01 16:18:53 Slp: Sco: Attempting to initialize script
    2010-04-01 16:18:53 Slp: Sco: Attempting to initialize default connection string
    2010-04-01 16:18:53 Slp: Sco: Attempting to set script connection protocol to NotSpecified
    2010-04-01 16:18:53 Slp: Sco: Attempting to set script connection protocol to NamedPipes
    2010-04-01 16:18:53 SQLEngine: --SqlDatabaseServiceConfig: Connection String: Data Source=\\.\pipe\SQLLocal\MSSQLSERVER;Initial Catalog=master;Integrated Security=True;Pooling=False;Network Library=dbnmpntw;Application Name=SqlSetup
    2010-04-01 16:18:53 SQLEngine: : Checking Engine checkpoint 'ServiceConfigConnect'
    2010-04-01 16:18:53 SQLEngine: --SqlDatabaseServiceConfig: Connecting to SQL....
    2010-04-01 16:18:53 Slp: Sco: Attempting to connect script
    2010-04-01 16:18:53 Slp: Connection string: Data Source=\\.\pipe\SQLLocal\MSSQLSERVER;Initial Catalog=master;Integrated Security=True;Pooling=False;Network Library=dbnmpntw;Application Name=SqlSetup
    And now comes the fun part: when I open Configuration Manager I can see the service running. I enabled named pipes and TCP/IP and restarted the service, and I am able to connect to the server using an OLE DB connection, but not with the Native Client. What I find suspicious is the following error in my application log:
    .NET Runtime Optimization Service (clr_optimization_v2.0.50727_32) - Failed to compile: C:\Program Files\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\Tools\VDT\DataProjects.dll . Error code = 0x8007000b
    On Microsoft Connect this is reported as a bug, but Microsoft is unable to reproduce the problem, although a search of the forums shows I'm not the only one with it. Any help is appreciated.

    Read the article

  • Exception "The operation is not valid for the state of the transaction" using TransactionScope

    - by Lanfear
    We have a web service on server #1 and a database on server #2. The web service uses TransactionScope to produce a distributed transaction, and everything works correctly. We also have another database on server #3. We had some problems with that server, so we reinstalled the operating system and software, configured MSDTC, and tried to use the web service on server #1 to communicate with the database on server #3. Now, after the first SELECT statement within the transaction scope, we get: "The operation is not valid for the state of the transaction". This exception occurs on every web service request that uses a transaction scope. Servers #2 and #3 are almost identical; the difference can only be in settings. .NET Framework 3.5 SP1 and SQL Server SP3 are installed on all servers. Full stack trace:
    at System.Transactions.TransactionState.EnlistPromotableSinglePhase(InternalTransaction tx, IPromotableSinglePhaseNotification promotableSinglePhaseNotification, Transaction atomicTransaction)
    at System.Transactions.Transaction.EnlistPromotableSinglePhase(IPromotableSinglePhaseNotification promotableSinglePhaseNotification)
    at System.Data.SqlClient.SqlInternalConnection.EnlistNonNull(Transaction t)
    at System.Data.SqlClient.SqlInternalConnection.Enlist(Transaction t)
    at System.Data.SqlClient.SqlInternalConnectionTds.Activate(Transaction transaction)
    at System.Data.ProviderBase.DbConnectionInternal.ActivateConnection(Transaction transaction)
    at System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject)
    at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
    at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
    at System.Data.SqlClient.SqlConnection.Open()
    at NHibernate.Connection.DriverConnectionProvider.GetConnection()
    at NHibernate.Impl.SessionFactoryImpl.OpenConnection()
    I searched for this message but didn't find an appropriate solution. What settings should I check, and what exactly should I do to fix it?
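    Not a fix, but for context, here is a minimal C# sketch of the calling pattern the question describes; the connection string and table name are hypothetical. The stack trace above corresponds to the conn.Open() call, where the connection enlists in the ambient transaction through a promotable single-phase enlistment:
    using System;
    using System.Data.SqlClient;
    using System.Transactions;
    class TransactionScopeSketch
    {
        static void Main()
        {
            using (var scope = new TransactionScope())
            {
                // Hypothetical connection string for the database on server #3
                using (var conn = new SqlConnection("Data Source=server3;Initial Catalog=Db3;Integrated Security=True"))
                {
                    // Opening the connection enlists it in the ambient transaction;
                    // this is the point at which the exception above is thrown.
                    conn.Open();
                    using (var cmd = new SqlCommand("SELECT COUNT(*) FROM SomeTable", conn))
                    {
                        Console.WriteLine(cmd.ExecuteScalar());
                    }
                }
                scope.Complete();
            }
        }
    }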

    Read the article

  • Java JasperReports NullPointerException

    - by William Welch
    I am new to Java and I am running into an issue that I can't figure out. I inherited this project and I have the following code in one of my scriptlets:
    DefaultLogger.logMessage("DEBUG path: "+ reportFile.getPath());
    DefaultLogger.logMessage("DEBUG parameters: "+ parameters);
    DefaultLogger.logMessage("DEBUG jr: "+ jr);
    bytes = JasperRunManager.runReportToPdf(reportFile.getPath(), parameters, jr);
    And I am getting the following results (the fourth line there is line 287 in FootwearReportsServlet.doGet):
    DEBUG path: C:\Development\JavaWorkspaces\Workspace1\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\RSLDevelopmentStandard\reports\templates\footwear\RslFootwearReport.jasper
    DEBUG parameters: {class_list_subreport=net.sf.jasperreports.engine.JasperReport@b07af1, signature_path=images/logo_reports.jpg, submission_id=20070213154168780, test_results_subreport=net.sf.jasperreports.engine.JasperReport@5795ce, logo_path=images/logos_3.gif, report_connection_secondary=com.mysql.jdbc.JDBC4Connection@2c39d2, testing_packages_subreport=net.sf.jasperreports.engine.JasperReport@1883d5f, signature_path2=images/logo_reports.jpg, tpid_subreport=net.sf.jasperreports.engine.JasperReport@17531fd, first_page_subreport=net.sf.jasperreports.engine.JasperReport@12504e0}
    DEBUG jr: net.sf.jasperreports.engine.data.JRMapCollectionDataSource@1630eb6
    Apr 29, 2010 4:15:13 PM org.apache.catalina.core.StandardWrapperValve invoke
    SEVERE: Servlet.service() for servlet FootwearReportsServlet threw exception
    java.lang.NullPointerException
    at net.sf.jasperreports.engine.fill.JRFiller.fillReport(JRFiller.java:89)
    at net.sf.jasperreports.engine.JasperFillManager.fillReport(JasperFillManager.java:601)
    at net.sf.jasperreports.engine.JasperFillManager.fillReport(JasperFillManager.java:517)
    at net.sf.jasperreports.engine.JasperRunManager.runReportToPdf(JasperRunManager.java:385)
    at com.rsl.reports.FootwearReportsServlet.doGet(FootwearReportsServlet.java:287)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
    at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852)
    at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
    at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
    at java.lang.Thread.run(Unknown Source)
    What I can't figure out is where the null reference is. From the debug lines I can see that each parameter has a value. Could it be referring to a bad path to one of the images? Any ideas? For some reason my server won't start in debug mode in Eclipse, so I am having trouble figuring this out.

    Read the article

  • Webservices error for dev.virtualearth.net/webservices/geocode

    - by Xaisoft
    I am getting the following stack trace and have no idea what I am looking at or how to debug and fix it. Here is the error:
    Description: An error occurred during the parsing of a resource required to service this request. Please review the following specific parse error details and modify your source file appropriately.
    Parser Error Message: Reference.svcmap: Failed to generate code for the service reference 'GeocodeService'.
    Cannot import wsdl:portType
    Detail: An exception was thrown while running a WSDL import extension: System.ServiceModel.Description.DataContractSerializerMessageContractImporter
    Error: Could not load file or assembly 'System.Xml, Version=2.0.5.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e' or one of its dependencies. The system cannot find the file specified.
    XPath to Error Source: //wsdl:definitions[@targetNamespace='http://dev.virtualearth.net/webservices/v1/geocode/contracts']/wsdl:portType[@name='IGeocodeService']
    Cannot import wsdl:binding
    Detail: There was an error importing a wsdl:portType that the wsdl:binding is dependent on.
    XPath to wsdl:portType: //wsdl:definitions[@targetNamespace='http://dev.virtualearth.net/webservices/v1/geocode/contracts']/wsdl:portType[@name='IGeocodeService']
    XPath to Error Source: //wsdl:definitions[@targetNamespace='http://dev.virtualearth.net/webservices/v1/geocode']/wsdl:binding[@name='BasicHttpBinding_IGeocodeService']
    Cannot import wsdl:port
    Detail: There was an error importing a wsdl:binding that the wsdl:port is dependent on.
    XPath to wsdl:binding: //wsdl:definitions[@targetNamespace='http://dev.virtualearth.net/webservices/v1/geocode']/wsdl:binding[@name='BasicHttpBinding_IGeocodeService']
    XPath to Error Source: //wsdl:definitions[@targetNamespace='http://dev.virtualearth.net/webservices/v1/geocode']/wsdl:service[@name='GeocodeService']/wsdl:port[@name='BasicHttpBinding_IGeocodeService']
    Cannot import wsdl:binding
    Detail: There was an error importing a wsdl:portType that the wsdl:binding is dependent on.
    XPath to wsdl:portType: //wsdl:definitions[@targetNamespace='http://dev.virtualearth.net/webservices/v1/geocode/contracts']/wsdl:portType[@name='IGeocodeService']
    XPath to Error Source: //wsdl:definitions[@targetNamespace='http://dev.virtualearth.net/webservices/v1/geocode']/wsdl:binding[@name='CustomBinding_IGeocodeService']
    Cannot import wsdl:port
    Detail: There was an error importing a wsdl:binding that the wsdl:port is dependent on.
    XPath to wsdl:binding: //wsdl:definitions[@targetNamespace='http://dev.virtualearth.net/webservices/v1/geocode']/wsdl:binding[@name='CustomBinding_IGeocodeService']
    XPath to Error Source: //wsdl:definitions[@targetNamespace='http://dev.virtualearth.net/webservices/v1/geocode']/wsdl:service[@name='GeocodeService']/wsdl:port[@name='CustomBinding_IGeocodeService']

    Read the article

  • GWT - occasional com.google.gwt.user.client.rpc.SerializationException

    - by user214984
    Hello, we are haunted by occasional occurrences of exceptions such as:
    com.google.gwt.user.client.rpc.SerializationException: Type 'xxx' was not assignable to 'com.google.gwt.user.client.rpc.IsSerializable' and did not have a custom field serializer. For security purposes, this type will not be serialized.: instance = xxx
    at com.google.gwt.user.server.rpc.impl.ServerSerializationStreamWriter.serialize(ServerSerializationStreamWriter.java:610)
    at com.google.gwt.user.client.rpc.impl.AbstractSerializationStreamWriter.writeObject(AbstractSerializationStreamWriter.java:129)
    at com.google.gwt.user.server.rpc.impl.ServerSerializationStreamWriter$ValueWriter$8.write(ServerSerializationStreamWriter.java:152)
    at com.google.gwt.user.server.rpc.impl.ServerSerializationStreamWriter.serializeValue(ServerSerializationStreamWriter.java:534)
    at com.google.gwt.user.server.rpc.RPC.encodeResponse(RPC.java:609)
    at com.google.gwt.user.server.rpc.RPC.encodeResponseForSuccess(RPC.java:467)
    at com.google.gwt.user.server.rpc.RPC.invokeAndEncodeResponse(RPC.java:564)
    at com.google.gwt.user.server.rpc.RemoteServiceServlet.processCall(RemoteServiceServlet.java:188)
    at de.softconex.travicemanager.server.TraviceManagerServiceImpl.processCall(TraviceManagerServiceImpl.java:615)
    at com.google.gwt.user.server.rpc.RemoteServiceServlet.processPost(RemoteServiceServlet.java:224)
    at com.google.gwt.user.server.rpc.AbstractRemoteServiceServlet.doPost(AbstractRemoteServiceServlet.java:62)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:710)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:803)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
    at org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:96)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:230)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175)
    at org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:179)
    at org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:84)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
    at org.jboss.web.tomcat.service.jca.CachedConnectionValve.invoke(CachedConnectionValve.java:157)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:262)
    at org.apache.coyote.ajp.AjpAprProcessor.process(AjpAprProcessor.java:419)
    at org.apache.coyote.ajp.AjpAprProtocol$AjpConnectionHandler.process(AjpAprProtocol.java:378)
    at org.apache.tomcat.util.net.AprEndpoint$Worker.run(AprEndpoint.java:1508)
    at java.lang.Thread.run(Thread.java:619)
    The application normally runs fine. The indicated class implements Serializable (the whole object graph). So far the only patterns / observations are:
    - we seem to have the issue only when the application is used inside an iframe
    - the problem seems to happen when a new version of the application has been deployed
    - running Firefox in privacy mode (disabling all caches etc.) doesn't fix the problem
    Any ideas?
Holger

    Read the article

  • WCF: Manually configuring Binding and Endpoint causes ServiceChannel Faulted State

    - by Matthias
    Hi there, I've created a ComVisible assembly to be used in a classic ASP application. The assembly should act as a WCF client and connect to a WCF service host (inside a Windows service) on the same machine using named pipes. The WCF service host works fine with other clients, so the problem must be within this assembly. To get things working I added a service reference to the ComVisible assembly, and the proxy classes and corresponding app.config settings were generated for me. Everything was fine so far, except that the app.config is not picked up when the assembly is created via CreateObject in the ASP code. So I tried to hardcode (just for testing) the Binding and Endpoint and pass the two to the constructor of my ClientBase-derived proxy using this code:
    private NetNamedPipeBinding clientBinding = null;
    private EndpointAddress clientAddress = null;
    clientBinding = new NetNamedPipeBinding();
    clientBinding.OpenTimeout = new TimeSpan(0, 1, 0);
    clientBinding.CloseTimeout = new TimeSpan(0, 0, 10);
    clientBinding.ReceiveTimeout = new TimeSpan(0, 2, 0);
    clientBinding.SendTimeout = new TimeSpan(0, 1, 0);
    clientBinding.TransactionFlow = false;
    clientBinding.TransferMode = TransferMode.Buffered;
    clientBinding.TransactionProtocol = TransactionProtocol.OleTransactions;
    clientBinding.HostNameComparisonMode = HostNameComparisonMode.StrongWildcard;
    clientBinding.MaxBufferPoolSize = 524288;
    clientBinding.MaxBufferSize = 65536;
    clientBinding.MaxConnections = 10;
    clientBinding.MaxReceivedMessageSize = 65536;
    clientAddress = new EndpointAddress("net.pipe://MyService/");
    MyServiceClient client = new MyServiceClient(clientBinding, clientAddress);
    client.Open();
    // do something with the client
    client.Close();
    But this causes the following error: "The communication object, System.ServiceModel.Channels.ServiceChannel, cannot be used for communication because it is in the faulted state." The environment is .NET Framework 3.5 / C#. What am I missing here?
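    Not an answer to the question itself, but when a channel can end up faulted, the usual defensive pattern in WCF clients is to Abort() rather than Close() after a communication failure, since closing a faulted channel throws again. A minimal sketch, reusing the MyServiceClient proxy from the question:
    try
    {
        client.Open();
        // call a service operation here
        client.Close();
    }
    catch (CommunicationException)
    {
        // the channel is faulted; Close() would throw again, so abort instead
        client.Abort();
        throw;
    }
    catch (TimeoutException)
    {
        client.Abort();
        throw;
    }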

    Read the article

  • Philosophy of [WebInvoke(ResponseFormat = WebMessageFormat.Json)]

    - by Mikey Cee
    Hi everyone, I'm writing what I'm referring to as a POJ (Plain Old JSON) WCF web service - one that takes and emits standard JSON with none of the crap that ASP.NET Ajax likes to add to it. It seems that there are three steps to accomplish this:
    1. Change to in the endpoint's tag
    2. Decorate the method with [WebInvoke(ResponseFormat = WebMessageFormat.Json)]
    3. Add an incantation of [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)] to the service contract
    This is all working OK for me - I can pass data in and get nice plain JSON back. If I remove the WebInvoke attribute, I get XML returned instead, so it is certainly doing what it is supposed to do. But it strikes me as odd that the option to specify JSON output appears here and not in the configuration file. Say I wanted to expose my method as an XML endpoint too - how would I do this? Currently the only way I can see would be to have a second method that does exactly the same thing but does not have WebMessageFormat.Json specified. Then rinse and repeat for every method in my service? Yuck. Specifying in an attribute that the output should be serialized to JSON seems completely contrary to the philosophy of WCF, where the service is implemented in a transport- and encoding-agnostic manner, leaving the nasty details of how the data will be moved around to the configuration file. Is there a better way of doing what I want to do? Or are we stuck with this awkward attribute? Or do I just not understand WCF deeply enough?
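    As a concrete illustration of steps 2 and 3 above, here is a minimal C# sketch of what such a contract might look like. The service name, operation, and data type are made up, and the duplicated XML-returning method is only the workaround the question describes, not a recommendation:
    using System.ServiceModel;
    using System.ServiceModel.Activation;
    using System.ServiceModel.Web;
    [ServiceContract]
    [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
    public class CustomerService
    {
        // Emits plain JSON when hosted behind a webHttp-style endpoint
        [OperationContract]
        [WebInvoke(ResponseFormat = WebMessageFormat.Json)]
        public Customer GetCustomerJson(string id)
        {
            return new Customer { Id = id, Name = "Example" };
        }
        // The "second method" workaround: same logic, XML response format
        [OperationContract]
        [WebInvoke(ResponseFormat = WebMessageFormat.Xml)]
        public Customer GetCustomerXml(string id)
        {
            return GetCustomerJson(id);
        }
    }
    public class Customer
    {
        public string Id { get; set; }
        public string Name { get; set; }
    }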

    Read the article
