Search Results


  • Mobile Deals: the Consumer Wants You in Their Pocket

    - by Mike Stiles
    Mobile deals offer something we talk about a lot in social marketing: relevant content. If a consumer is already predisposed to liking your product and gets a timely deal for it that's easy and convenient to use, not only do you score on the marketing side, you also generate some of that precious ROI that's being demanded of social.

    First, a quick gut-check on the public's adoption of mobile. Nielsen figures have 55.5% of US mobile owners using smartphones. If young people are indeed the future, you can count on the move to mobile exploding exponentially. Teens are the fastest-growing segment of smartphone users, and 58% of them have one. But the largest demographic of smartphone users is 25-34, at 74%. That tells you a focus on mobile will yield great results now, and even better results straight ahead.

    So we can tell, both from the statistics and from all the faces around you that are buried in their smartphones, that this is where consumers are. But are they looking at you? Do you have a valid reason why they should? Everybody likes a good deal. BIA/Kelsey says US consumers will spend $3.6 billion this year on daily deals (the Groupons and LivingSocials of the world), up 87% from 2011. The report goes on to say over 26% of small businesses are either "very likely" or "extremely likely" to offer up a deal in the next 6 months. Retail Gazette reports 58% of consumers shop with coupons, a 40% increase in 4 years. When you consider that a deal can be the impetus for a real-world transaction, a first-time visit to a store, an online purchase, entry into a loyalty program, a social referral, a new fan or follower, etc., that 26% figure shows us there's a lot of opportunity being left on the table by brands.

    The existing and emerging technologies behind mobile devices make the benefits of offering deals listed above possible. Take how mobile payment systems are being tied into deal delivery and loyalty programs. If it's really easy to use a coupon or deal, it'll get used. If it's complicated, it'll be passed over as "not worth it." When you can pay with your mobile via technologies that connect store and user, you get the deal, you get the loyalty credit, you pay, and your receipt is uploaded, all in one easy swipe. Nothing to keep track of, nothing to lose or forget about. And the store "knows" you, so future offers will be based on your tastes.

    Consider the endgame. A customer who's a fan of your belt buckle store's Facebook Page is in one of your physical retail locations. They pull up your app, because they've gotten used to a loyalty deal being offered when they go to your store. Voila: a 10% discount, active for the next 30 minutes. Maybe the app also surfaces social references to your brand made by friends, so they can check out a buckle someone's raving about. If they aren't a fan of your Page or don't have your app, perhaps they've opted into location-based deal services, so you can still get them that 10% deal while they're in the store. Or maybe they've walked in with a pre-purchased Groupon or LivingSocial voucher. They pay with one swipe, and you've learned about their buying preferences, credited their loyalty account, and can encourage them to share a pic of their new buckle on social. Happy customer. Happy belt buckle company. All because the brand was willing to use the tech that's available to meet consumers where they are, incentivize them, and show them how much they're valued through rewards.

    Read the article

  • The Three-Legged Milk Stool - Why Oracle Fusion Incentive Compensation makes the difference!

    - by Richard Lefebvre
    During the London Olympics, we were exposed to dozens of athletes who worked with sports psychologists to maximize their performance. Executives often hire business psychologists to coach their teams to excellence. In the same vein, Fusion Incentive Compensation can be used to get people to change their sales behavior so we can make our numbers. But what about using incentive compensation solutions in a non-sales scenario to drive change?

    Recently, I was working an opportunity where a company had a low user adoption rate for Salesforce.com, which was causing problems for them. I suggested they use Fusion Incentive Comp to change the reps' behavior. We tossed around the idea of tracking user adoption by creating a variable bonus for reps based on how well they forecasted revenues in the new system. Another thought was to reward the reps for how often they logged into the system, or for the percentage of leads that became opportunities and turned into revenue. A new twist on a great product.

    Fusion CRM's Sweet Spot

    I'm excited about the sales performance management (SPM) tools in Fusion CRM. This trio of Incentive Compensation, Territory Management, and Quota Management sets us apart from the competition, because Oracle is the only vendor that provides all three of these capabilities on a single tech stack, in a single application, and with a single look and feel. The niche vendors offer standalone territory or incentive compensation solutions, but then the customer has to custom build the other tools and can end up with a Frankenstein-type environment. On average, companies overpay sales commissions by three to eight percent. Calculate that number for a company the size of Oracle for one quarter and it makes a pretty air-tight financial case for using SPM tools to figure accurate commissions. Plus, when sales reps get the right compensation, they can be out selling rather than spending precious time figuring out what they didn't get paid or looking for another job.

    And one more thing ... Oracle knows incentive comp. We have been a Gartner MarketScope leader in this space for the last five years. Our solution gets high marks because of its scalability and because of its interoperability with other technologies. And now that we're leading with Fusion, our incentive compensation offering includes the innovations that the Fusion team built, plus enhancements from the E-Business Suite Incentive Comp team. It's a case of making a good thing even better. (See product video.)

    The "Wedge" Apps

    In a number of accounts that I'm working on, there is a non-Oracle CRM system of record. That gives me the perfect opportunity to introduce the benefits of our SPM tools and to get the customer using Fusion. Then the door is wide open for the company to uptake more of Fusion CRM, especially since all the integrations they need are out of the box. I really believe that implementing this wedge of SPM tools is the ticket to taking market share away from other vendors. It allows us to insert ourselves in an environment where no other CRM solution in the market has the extending capabilities of Fusion.

    Not Just Your Usual Suspects

    Usually the stakeholders that I talk to for Territory Management are tightly aligned with the sales management team. When I sell the quota planning tool, I'm talking to finance people on the ERP side of the house who are measuring quotas and forecasting revenue. And then Incentive Comp is of most interest to the sales operations people, and generally these people roll up to either HR or the payroll department. I think of our Fusion SPM tools as a three-legged stool straddling an organization's Sales, Finance, and HR departments. So when you're prospecting for opportunities -- yes, people with a CRM perspective will be very interested -- but don't limit yourself to that constituency. You might find stakeholders in accounting, revenue planning, or HR compensation teams. You just might discover, as I did at United Airlines, that the HR organization is spearheading the CRM project, because incentive compensation is what they need ... and they're the ones with the budget.

    Jason Loh
    Global Solutions Manager, Fusion CRM Sales Planning, Oracle Corporation

    Read the article

  • Web Developer - How to enhance my skillset?

    - by atif089
    First of all, pardon my English; I am not a native English speaker. I have been a web developer for the past 4 years. In these 4 years I have spent my time on the internet learning things. My current skillset comprises: HTML, CSS, PHP, MySQL, and jQuery (I would not say JS; rather, I say jQuery because I am good at using jQuery and bad with plain JavaScript). The above things seemed like an easier part of my life, as I learned them quickly. But now I would really like to enhance my skillset, and I am pretty confused about which way to move ahead, considering that I have to learn things using the web and references on my own.

    Design
    My first option is design. Shall I get started with design and start using Adobe Illustrator, Photoshop, Flash, and Flex? Design, along with my previous skills, looks like a money maker to me, as the two are closely related where web design is concerned. It's easier to learn the first two, and I hope I can get tutorials for the last two as well.

    Marketing
    A lot of my existing clients have asked me if I do SEO, so this looks like a good field as well. I cannot estimate the scope of SEO, but I assume it has a long future. Since I am business-minded and there are a lot of tutorials around, should I start with SEO, SEM, social media, PPC, or whatever else it consists of?

    Software Development
    The most complex and hardest path (perhaps), but the easiest way to find a decent job in my location. If I go for software development, what platform should I ideally go after? Should it be C# for Windows development, ASP.NET (which once again builds on my existing skill set), J2EE (there are a lot of jobs for J2EE developers here), or plain C and C++? Also, is it difficult to learn a software language right from "Hello World" using the internet? I have no clue how I learned PHP, but I am sort of a pro now, while these other languages seem like a disaster to me. I can't figure out whether that's because PHP is easier or because there were a lot of tutorials around for PHP. Anyway, is it possible to learn software development right from "Hello World" using the web?

    Database / Server (Linux) / Network Administration
    Seems like a job with decent pay, but with fewer jobs and a bit harder to learn online (not sure).

    What would be the right track to move ahead? P.S. Age is not a constraint for me, as I am between 20-21, and I come from an IT background. I know a few basics: C (up to structures), C++ (up to objects; I was not able to understand templates), core Java (some basics and OOP concepts), RDBMS, Visual Basic 6 (used to do this long back), and UNIX (a bunch of commands like who, finger, chmod, ls, and a bit of bash). Or is there anything else that I left out? I need you guys to please give me feedback, and the reason why I should select that field.

    Read the article

  • ARTS Reference Model for Retail

    - by Sanjeev Sharma
    Consider a hypothetical scenario where you have been tasked with setting up retail operations for an electronic goods, daily consumables, or luxury brand business. It is very likely you will be faced with the following questions:

    1. What are the essential business capabilities that you must have in place?
    2. What are the essential business activities underpinning each of the business capabilities identified in Step 1?
    3. What are the steps you need to perform to execute each of the business activities identified in Step 2?

    Answers to the above will drive your investments in software and hardware to enable the core retail operations. More importantly, the choices you make in responding to the above questions will have several implications in the short run and in the long run. In the short term, you will incur the time and cost of defining your technology requirements, procuring the software/hardware components, and getting them up and running. In the long term, as you grow in operations organically or through M&A, partnerships, and franchise business models, you will invariably need to make more technology investments to manage the greater complexity (scale and scope) of business operations.

    "As new software applications, such as time & attendance, labor scheduling, and POS transactions, just to mention a few, are introduced into the store environment, it takes a disproportionate amount of time and effort to integrate them with existing store applications. These integration projects can add up to 50 percent to the time needed to implement a new software application and contribute significantly to the cost of the overall project, particularly if a systems integrator is called in. This has been the reality that all retailers have had to live with over the last two decades. The effect of the environment has not only been to increase costs, but also to limit retailers' ability to implement change and the speed with which they can do so." (excerpt taken from here)

    Now, one would think a lot of retailers have already gone through the pain of finding answers to these questions, so why re-invent the wheel? Precisely so: a major effort began almost 17 years ago in the retail industry to make it less expensive and less difficult to deploy new technology in stores and at the retail enterprise level. This effort is called the Association for Retail Technology Standards (ARTS). Without standards such as those defined by ARTS, you would very likely end up experiencing the following:

    - Increased time and cost, due to resource wastage arising from re-inventing the wheel, i.e. re-creating vanilla processes from scratch and incurring otherwise avoidable mistakes and errors by ignoring the experience of others
    - Sub-optimal process efficiency, due to a narrow, isolated view of processes that ignores process inter-dependencies, i.e. optimizing the parts but not the whole, resulting in a lack of transparency and inter-departmental finger-pointing

    Embracing ARTS standards as a blueprint for establishing, managing, or streamlining your retail operations can benefit you in the following ways:

    - Improved time-to-market, from parity with industry best-practice processes such as ARTS, thus avoiding "reinventing the wheel" for common retail processes and focusing more on customizing processes for differentiation, and lowering integration complexity and risk with a standardized vocabulary for exchange between internal and external (i.e. partner) systems
    - Lower operating costs, by embracing the ARTS enterprise-wide process reference model for developing and streamlining retail operations holistically instead of through a narrow, siloed view, and by procuring IT systems in compliance with ARTS, thus avoiding IT budget marginalization

    While parity with an industry standard such as the ARTS business process model does not by itself create differentiation, it does provide a higher starting point for bridging the strategy-execution gap in setting up and improving retail operations.

    Read the article

  • PowerShell Script To Find Where SharePoint 2007 Features Are Activated

    - by Brian T. Jackett
    Recently I posted a script to find where SharePoint 2010 features are activated.  I built the original version to use SharePoint 2010 PowerShell commandlets, as that saved me a number of steps for filtering and gathering features at each level.  If there was ever demand for a 2007 version, I could modify the script to handle that by using the object model instead of commandlets.  Just the other week a fellow SharePoint PFE, Jason Gallicchio, had a customer asking about a version for SharePoint 2007.  With a little bit of work I was able to convert the script to work against SharePoint 2007.

    Solution

    Below is the converted script that works against a SharePoint 2007 farm.

    Note: There appears to be a bug with the 2007 version that does not give accurate results against a SharePoint 2010 farm.  I ran the 2007 version against a 2010 farm and got fewer results than with my 2010 version of the script.  Discussing this with some fellow PFEs, I think the discrepancy may be due to sandboxed features, a new concept in SharePoint 2010.  I have not had enough time to test or confirm.  For the time being, only use the 2007 version against SharePoint 2007 farms and the 2010 version against SharePoint 2010 farms.

    Note: This script is not optimized for medium to large farms.  In my testing it took 1-3 minutes to recurse through my demo environment.  This script is provided as-is with no warranty.  Run it in a smaller dev / test environment first.

    function Get-SPFeatureActivated {
        # see full script for help info, removed for formatting
        [CmdletBinding()]
        param(
            [Parameter(position = 1, valueFromPipeline=$true)]
            [string]
            $Identity
        )#end param
        Begin
        {
            # load SharePoint assembly to access object model
            [void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")
            # declare empty array to hold results; a custom Url member is added to show where each feature is activated
            $results = @()
            $params = @{}
        }
        Process
        {
            if([string]::IsNullOrEmpty($Identity) -eq $false)
            {
                $params = @{Identity = $Identity}
            }
            # get a reference to the local farm to enumerate feature definitions
            $farm = [Microsoft.SharePoint.Administration.SPFarm]::Local
            # check farm features
            $results += ($farm.FeatureDefinitions | Where-Object {$_.Scope -eq "Farm"} |
                Where-Object {[string]::IsNullOrEmpty($Identity) -or ($_.DisplayName -eq $Identity)} |
                % {Add-Member -InputObject $_ -MemberType noteproperty -Name Url -Value ([string]::Empty) -PassThru} |
                Select-Object -Property Scope, DisplayName, Id, Url)
            # check web application features
            $contentWebAppServices = $farm.services | ? {$_.typename -like "Windows SharePoint Services Web Application"}
            foreach($webApp in $contentWebAppServices.WebApplications)
            {
                $results += ($webApp.Features | Select-Object -ExpandProperty Definition |
                    Where-Object {[string]::IsNullOrEmpty($Identity) -or ($_.DisplayName -eq $Identity)} |
                    % {Add-Member -InputObject $_ -MemberType noteproperty -Name Url -Value $webApp.GetResponseUri(0).AbsoluteUri -PassThru} |
                    Select-Object -Property Scope, DisplayName, Id, Url)
                # check site collection features in current web app
                foreach($site in ($webApp.Sites))
                {
                    $results += ($site.Features | Select-Object -ExpandProperty Definition |
                        Where-Object {[string]::IsNullOrEmpty($Identity) -or ($_.DisplayName -eq $Identity)} |
                        % {Add-Member -InputObject $_ -MemberType noteproperty -Name Url -Value $site.Url -PassThru} |
                        Select-Object -Property Scope, DisplayName, Id, Url)
                    # check site features in current site collection
                    foreach($web in ($site.AllWebs))
                    {
                        $results += ($web.Features | Select-Object -ExpandProperty Definition |
                            Where-Object {[string]::IsNullOrEmpty($Identity) -or ($_.DisplayName -eq $Identity)} |
                            % {Add-Member -InputObject $_ -MemberType noteproperty -Name Url -Value $web.Url -PassThru} |
                            Select-Object -Property Scope, DisplayName, Id, Url)
                        $web.Dispose()
                    }
                    $site.Dispose()
                }
            }
        }
        End
        {
            $results
        }
    } #end Get-SPFeatureActivated

    Get-SPFeatureActivated

    Conclusion

    I have posted this script to the TechNet Script Repository (click here).  As always, I appreciate any feedback on scripts.  If anyone is motivated to run this 2007 version against a SharePoint 2010 farm to see if they find any differences in the number of features reported versus what they get with the 2010 version script, I'd love to hear from you.

    -Frog Out

    Read the article

  • Randomly losing wireless connection with Cubuntu 12.04

    - by statquant
    I am presently experiencing random disconnections from my wireless network. It looks like it is more and more frequent (however I have not seen any clear pattern). This is killing me... Here is some information that should help (from ubuntu forums). Thanks for reading.

    Machine: Acer Aspire S3

    statquant@euclide:~$ lsb_release -d
    Description: Ubuntu 12.04.1 LTS

    statquant@euclide:~$ uname -mr
    3.2.0-33-generic x86_64

    statquant@euclide:~$ sudo /etc/init.d/networking restart
    * Running /etc/init.d/networking restart is deprecated because it may not enable again some interfaces
    * Reconfiguring network interfaces...

    statquant@euclide:~$ lspci
    02:00.0 Network controller: Atheros Communications Inc. AR9485 Wireless Network Adapter (rev 01)

    statquant@euclide:~$ lsusb
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
    Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
    Bus 001 Device 004: ID 064e:c321 Suyin Corp.
    Bus 002 Device 003: ID 0bda:0129 Realtek Semiconductor Corp.

    statquant@euclide:~$ ifconfig
    wlan0 Link encap:Ethernet HWaddr 74:de:2b:dd:c4:78
          inet addr:192.168.1.3 Bcast:192.168.1.255 Mask:255.255.255.0
          inet6 addr: fe80::76de:2bff:fedd:c478/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          RX packets:913 errors:0 dropped:0 overruns:0 frame:0
          TX packets:802 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:873218 (873.2 KB) TX bytes:125826 (125.8 KB)

    statquant@euclide:~$ iwconfig
    wlan0 IEEE 802.11bgn ESSID:"Bbox-D646D1"
          Mode:Managed Frequency:2.437 GHz Access Point: 00:19:70:80:01:6C
          Bit Rate=65 Mb/s Tx-Power=16 dBm
          Retry long limit:7 RTS thr:off Fragment thr:off
          Power Management:on
          Link Quality=56/70 Signal level=-54 dBm
          Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0
          Tx excessive retries:0 Invalid misc:71 Missed beacon:0

    statquant@euclide:~$ dmesg | grep "wlan"
    [ 17.495866] ADDRCONF(NETDEV_UP): wlan0: link is not ready
    [ 17.498950] ADDRCONF(NETDEV_UP): wlan0: link is not ready
    [ 20.072015] wlan0: authenticate with 00:19:70:80:01:6c (try 1)
    [ 20.269853] wlan0: authenticate with 00:19:70:80:01:6c (try 2)
    [ 20.272386] wlan0: authenticated
    [ 20.298682] wlan0: associate with 00:19:70:80:01:6c (try 1)
    [ 20.302321] wlan0: RX AssocResp from 00:19:70:80:01:6c (capab=0x431 status=0 aid=1)
    [ 20.302325] wlan0: associated
    [ 20.307307] ADDRCONF(NETDEV_CHANGE): wlan0: link becomes ready
    [ 30.402292] wlan0: no IPv6 routers present

    statquant@euclide:~$ sudo lshw -C network
    [sudo] password for statquant:
    *-network
        description: Wireless interface
        product: AR9485 Wireless Network Adapter
        vendor: Atheros Communications Inc.
        physical id: 0
        bus info: pci@0000:02:00.0
        logical name: wlan0
        version: 01
        serial: 74:de:2b:dd:c4:78
        width: 64 bits
        clock: 33MHz
        capabilities: pm msi pciexpress bus_master cap_list rom ethernet physical wireless
        configuration: broadcast=yes driver=ath9k driverversion=3.2.0-33-generic firmware=N/A ip=192.168.1.3 latency=0 link=yes multicast=yes wireless=IEEE 802.11bgn
        resources: irq:17 memory:c0400000-c047ffff memory:afb00000-afb0ffff

    statquant@euclide:~$ iwlist scan
    wlan0 Scan completed:
        Cell 01 - Address: 00:19:70:80:01:6C
            Channel:6
            Frequency:2.437 GHz (Channel 6)
            Quality=56/70 Signal level=-54 dBm
            Encryption key:on
            ESSID:"Bbox-D646D1"
            Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 6 Mb/s; 9 Mb/s; 12 Mb/s; 18 Mb/s
            Bit Rates:24 Mb/s; 36 Mb/s; 48 Mb/s; 54 Mb/s
            Mode:Master
            Extra:tsf=000000125fb152bb
            Extra: Last beacon: 40020ms ago
            IE: Unknown: 000B42626F782D443634364431
            IE: Unknown: 010882848B960C121824
            IE: Unknown: 030106
            IE: IEEE 802.11i/WPA2 Version 1
                Group Cipher : TKIP
                Pairwise Ciphers (2) : CCMP TKIP
                Authentication Suites (1) : PSK
            IE: WPA Version 1
                Group Cipher : TKIP
                Pairwise Ciphers (2) : CCMP TKIP
                Authentication Suites (1) : PSK
            IE: Unknown: 2A0100
            IE: Unknown: 32043048606C
            IE: Unknown: DD180050F2020101820003A4000027A4000042435E0062322F00
            IE: Unknown: 2D1A4C101BFF00000000000000000000000000000000000000000000
            IE: Unknown: 3D1606080800000000000000000000000000000000000000
            IE: Unknown: DD0900037F01010000FF7F
            IE: Unknown: DD0A00037F04010000000000

    And... finally, please note that I did the following (after looking for fixes of similar problems), but unfortunately it did not work:

    sudo modprobe -r iwlwifi
    sudo modprobe iwlwifi 11n_disable=1

    Read the article

  • How much is a subscriber worth?

    - by Tom Lewin
    This year at Red Gate, we've started providing a way to back up SQL Azure databases and Azure storage. We decided to sell this as a service, instead of a product, which means customers only pay for what they use. Unfortunately for us, it makes figuring out revenue much trickier. With a product like SQL Compare, a customer pays for it, and it's theirs for good. Sure, we offer support and upgrades, but, fundamentally, the sale is a simple, upfront transaction: we've made this product, you need this product, we swap product for money and everyone is happy. With software as a service, it isn't that easy. The money and product don't change hands up front. Instead, we provide a service in exchange for a recurring fee. We know someone buying SQL Compare will pay us $X, but we don't know how long service customers will stay with us, or how much they will spend. How do we find this out? We use lifetime value analysis.

    What is lifetime value?

    Lifetime value, or LTV, is how much a customer is worth to the business. For Entrepreneurs has a brilliant write-up that we followed to conduct our analysis. Basically, it all boils down to this equation: LTV = ARPU x ALC. To make it a bit less of an alphabet soup and a bit more understandable, we can write it out in full: the lifetime value of a customer equals the average revenue per customer per month, times the average time a customer spends with the service. Simple, right? A customer is worth the average spend times the average stay. If customers pay on average $50/month, and stay on average for ten months, then a new customer will, on average, bring in $500 over the time they are a customer!

    Average spend is easy to work out; it's revenue divided by customers. The problem comes when we realise that we don't know exactly how long a customer will stay with us. How can we figure out the average lifetime of a customer, if we only have six months' worth of data? The answer lies in the fact that: Average Lifetime of a Customer = 1 / Churn Rate. The churn rate is the percentage of customers that cancel in a month. If half of your customers cancel each month, then your average customer lifetime is two months. The problem we faced was that we didn't have enough data to make an estimate of one month's cancellations reliable (because barely anybody cancels)! To deal with this data problem, we can take data from the last three months instead. This means we have more data to play with. We can still use the equation above; we just need to multiply the final result by three (as we worked out how many three-month periods customers stay for, and we want our answer to be in months).

    Now these estimates are likely to be fairly unreliable; when there's not a lot of data it pays to be cautious with inference. That said, the numbers we have look fairly consistent, and it's super easy to revise our estimates when new data comes in. At the very least, these numbers give us a vague idea of whether a subscription business is viable. As far as Cloud Services goes, the business looks very viable indeed, and the low cancellation rates are much more than just data points in LTV equations; they show that the product is working out great for our customers, which is exactly what we're looking for!
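
    As a rough illustration of the arithmetic above, here is a minimal sketch in Java. The figures and the three-month churn window are hypothetical stand-ins, not Red Gate's actual numbers:

    // Minimal LTV sketch: LTV = ARPU x ALC, where ALC = 1 / churn rate.
    public class LifetimeValue {

        // Average revenue per customer per month = monthly revenue / active customers.
        static double arpu(double monthlyRevenue, int activeCustomers) {
            return monthlyRevenue / activeCustomers;
        }

        // Estimate churn over a 3-month window to smooth sparse cancellation data,
        // then convert the resulting lifetime (in 3-month periods) back into months.
        static double averageLifetimeMonths(int cancellations, int customersAtStart) {
            double threeMonthChurn = (double) cancellations / customersAtStart;
            return (1.0 / threeMonthChurn) * 3.0;
        }

        public static void main(String[] args) {
            double arpu = arpu(25_000.0, 500);                // $50/month per customer
            double lifetime = averageLifetimeMonths(25, 500); // 5% churn per quarter -> 60 months
            System.out.printf("LTV per customer: $%.2f%n", arpu * lifetime);
        }
    }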

    Read the article

  • WiX, MSDeploy and an appealing configuration/deployment paradigm

    - by alexhildyard
    I do a lot of application and server configuration; I've done this for many years and have tended to view the complexity of this strictly in terms of the complexity of the ultimate configuration to be deployed. For example, specific APIs aside, I would tend to regard installing a server certificate as a more complex activity than, say, copying a file or adding a Registry entry.

    My prejudice revolved around the idea of a sequential deployment script that not only had the explicit prescription to apply a specific server configuration, but also made the implicit presumption that the server in question was in a good known state. Scripts like this fail for hundreds of reasons -- the Default Website didn't exist; the application had already been deployed; the application had already been partially deployed and failed to roll back fully, and so on. And so the problem is that the more complex the configuration activity, the more scope for error in any individual part of that activity, and therefore the greater the chance the server in question will not end up at exactly the desired configuration level.

    Recently I was introduced to a completely different mindset, which, for want of a better turn of phrase, I will call the "make it so" mindset. It's extremely simple both to explain and to implement. In place of the head-down, imperative script you used to use, you substitute a set of checks -- much like exception handlers -- around each configuration activity, starting with a check of the current system state. Thus the configuration logic becomes: "IF these services aren't started then start them, and IF XYZ website doesn't exist then create it, and IF these shares don't exist then create them, and IF these shares aren't permissioned in some particular way, then permission them so." This works. Really well, in my experience. A sketch of the pattern follows after the scenarios below.

    Scenario 1: You want to get a system into a good known state; it's already in a good known state; you quickly realise there is nothing to do.
    Scenario 2: You want to get the system into a good known state; your script is flawed or the system is bust; it cannot be put into that state. You know exactly where (at least part of) the problem is, and why.
    Scenario 3: You want to get the system into a good known state; people are fiddling around with the system just now. That's fine. You do what you can, and later you come back and try it again.
    Scenario 4: No one wants to deploy anything; they want you to prove that the previous deployment was successful. So you re-run the deployment script with the "-WhatIf" flag. It reports that there was nothing to change. There's your proof.

    I mentioned two technologies in the title -- MSI and MSDeploy. I am thinking specifically of the conversation that took place here. Having worked with both technologies, I think Rob Mensching's response is appropriately nuanced, and in essence the difference is this: sometimes your target is either to achieve a specific new server state, or to roll back to a known good one. Then again, your target may be to configure what you can, and to understand what you can't. Implicitly, MSDeploy's "rollback" is simply to redeploy the previous version, whereas a well-crafted MSI will actively put your system into that state without further intervention. Either way, if all goes well it will leave you with a system in one of two states, whereas MSDeploy could leave your system in one of many states. The key is that MSDeploy and MSI are complementary technologies; which suits you best depends as much on operational guidance as on your configuration remit.

    What I wanted to say is that I have always been for atomic, transaction-based configuration, but having worked with the "make it so" paradigm, I have been favourably impressed by the actual results. I'm tempted to put a more technical post up on this in due course.
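
    To make the "make it so" pattern concrete, here is a minimal, hypothetical sketch in Java. The ConfigStep contract and the dry-run flag are illustrative assumptions, not part of MSDeploy or WiX:

    import java.util.List;

    // One idempotent configuration step: check current state first, act only if needed.
    interface ConfigStep {
        String name();
        boolean isSatisfied();  // inspect the current system state
        void apply();           // "make it so"
    }

    public class MakeItSo {
        // Runs each step; with dryRun=true it only reports what *would* change,
        // which is the "-WhatIf" style proof that a deployment is already in place.
        static void converge(List<ConfigStep> steps, boolean dryRun) {
            for (ConfigStep step : steps) {
                if (step.isSatisfied()) {
                    System.out.println("OK        " + step.name());
                } else if (dryRun) {
                    System.out.println("WOULD FIX " + step.name());
                } else {
                    System.out.println("FIXING    " + step.name());
                    step.apply();
                }
            }
        }
    }

    Each step names its own desired state, so a failed run tells you exactly which check could not be satisfied (Scenario 2), and a clean dry run is the proof demanded in Scenario 4.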

    Read the article

  • SQL to select random mix of rows fairly [migrated]

    - by Matt Sieker
    Here's my problem: I have a set of tables in a database populated with data from a client that contains product information. In addition to the basic product information, there is also information about the manufacturer, and categories for those products (a product can be in one or more categories). These categories are referred to as "product categories", and there is also information about which stores these products are available at. These tables are updated once a week from a feed from the customer. Since for our purposes some of the product categories are the same, or closely related, there is another level of categories called "general categories"; a general category can have one or more product categories. For the scope of these tables, here are some rough numbers:

    Data tables:
    - Products: 475,000
    - Manufacturers: 1,300
    - Stores: 150
    - General Categories: 245
    - Product Categories: 500

    Mapping tables:
    - Product Category -> Product: 655,000
    - Stores -> Products: 50,000,000

    Now, for the actual problem: as part of our software, we need to select n random products, given a store and a general category. However, we also need to ensure a good mix of manufacturers, as in some categories a single manufacturer dominates the results, and selecting rows at random causes the results to strongly favor that manufacturer. The solution currently in place, which works for most cases, involves selecting all of the rows that match the store and category criteria, partitioning them by manufacturer, including each row's number within its partition, selecting from that where the row number for that manufacturer is less than n, and using ROWCOUNT to clamp the total rows returned to n. The query looks something like this:

    SET ROWCOUNT 6

    select p.Id, GeneralCategory_Id, Product_Id,
           ISNULL(m.DisplayName, m.Name) AS Vendor,
           MSRP, MemberPrice, FamilyImageName
    from (select p.Id, gc.Id GeneralCategory_Id, p.Id Product_Id,
                 ctp.Store_id, Manufacturer_id,
                 ROW_NUMBER() OVER (PARTITION BY Manufacturer_id ORDER BY NEWID()) AS 'VendorOrder',
                 MSRP, MemberPrice, FamilyImageName
          from GeneralCategory gc
          inner join GeneralCategoriesToProductCategories gctpc ON gc.Id = gctpc.GeneralCategory_Id
          inner join ProductCategoryToProduct pctp on gctpc.ProductCategory_Id = pctp.ProductCategory_Id
          inner join Product p on p.Id = pctp.Product_Id
          inner join StoreToProduct ctp on p.Id = ctp.Product_id
          where gc.Id = @GeneralCategory
            and ctp.Store_id = @StoreId
            and p.Active = 1
            and p.MemberPrice > 0) p
    inner join Manufacturer m on m.Id = p.Manufacturer_id
    where VendorOrder <= 6
    order by NEWID()

    SET ROWCOUNT 0

    (I've tried to somewhat format it to make it cleaner, but I don't think it really helps.) Running this query with an execution plan shows that for the majority of these tables it's doing a Clustered Index Seek. There are two operations that take up roughly 90% of the time:

    - Index Seek (Nonclustered) on StoreToProduct: 17%. This table just contains the key of the store and the key of the product. It seems that NHibernate decided not to make a composite key when making this table, but I'm not concerned about that at this point, compared to the other seek...
    - Clustered Index Seek on Product: 69%. I really have no clue how I could make this one more performant.

    On categories without a lot of products, performance is acceptable (<50ms); however, larger categories can take a few hundred ms, with the largest category taking 3s (it has about 170k products). It seems I have two ways to go from this point:

    1. Somehow optimize the existing query and table indices to lower the query time. As almost every expensive operation is already a clustered index seek, I don't know what could be done there. The inner query could be tuned to not return all of the possible rows for that category, but I am unsure how to do this and maintain the requirements (random products, with a good mix of manufacturers).
    2. Denormalize this data for the purpose of this query during the once-a-week import. However, I am unsure how to do this and maintain the requirements.

    Does anyone have any input on either of these options?
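
    For what it's worth, the fairness logic that the query implements can be sketched outside SQL. Here is a hypothetical Java outline of the same idea (shuffle within each manufacturer, take at most n per manufacturer, then shuffle the union and clamp to n); the Product record and pickFairMix are illustrative stand-ins for rows already filtered by store and general category, the kind of candidate pool option 2 might precompute during the weekly import:

    import java.util.*;
    import java.util.stream.Collectors;

    // Hypothetical row shape: a product id plus its manufacturer id.
    record Product(long id, long manufacturerId) {}

    public class FairMix {
        static List<Product> pickFairMix(List<Product> candidates, int n) {
            Map<Long, List<Product>> byManufacturer = candidates.stream()
                    .collect(Collectors.groupingBy(Product::manufacturerId));

            List<Product> pool = new ArrayList<>();
            for (List<Product> groupRows : byManufacturer.values()) {
                List<Product> group = new ArrayList<>(groupRows);          // defensive copy before shuffling
                Collections.shuffle(group);                                // ORDER BY NEWID() per partition
                pool.addAll(group.subList(0, Math.min(n, group.size()))); // VendorOrder <= n
            }
            Collections.shuffle(pool);                                     // final ORDER BY NEWID()
            return pool.subList(0, Math.min(n, pool.size()));              // SET ROWCOUNT n
        }
    }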

    Read the article

  • Proving What You are Worth

    - by Ted Henson
    Here is a challenge for everyone. Just about everyone has been asked to provide or calculate the return on investment (ROI), so I will assume everyone has a method they use. The problem with stopping once you have an ROI is that those in the C-suite probably do not care about the ROI as much as return on equity (ROE). Shareholders are mostly concerned with the return on the money they invested. Warren Buffett looks at ROE when deciding whether to make a deal or not. This article will outline how you can add more meaning to your ROI and show how you can potentially enhance the ROE of the company.

    First I want to start with the base definitions I am using for ROI and ROE. Return on investment (ROI) and return on equity (ROE) are ways to measure management effectiveness, parts of a system of measures that also includes profit margins for profitability, price-to-earnings ratio for valuation, and various debt-to-equity ratios for financial strength. Without a set of evaluation metrics, a company's financial performance cannot be fully examined by investors. ROI and ROE calculate the rate of return on a specific investment and on equity capital, respectively, assessing how efficiently financial resources have been used. Typically, the best way to improve financial efficiency is to reduce production cost, so that will be the focus.

    Now that the challenge has been made and the terms have been defined, let's go deeper. Most research about implementation stops short at system start-up and seldom addresses post-implementation issues. However, we know implementation is a continuous improvement effort, and continued efforts after system start-up will influence the ultimate success of a system.

    Most UPK ROIs I have seen only include the cost savings from developing the training material. Some also include savings based on reduced help desk calls. Using just those values you get a good ROI. To get an ROE you need to go a little deeper. Typically, the best way to improve financial efficiency is to reduce production cost, which is the purpose of implementing or upgrading an enterprise application. Let's assume the new system is up and running and all users have been properly trained and are comfortable using the system. You provide senior management with your ROI that justifies the original cost. What you want to do now is develop a good baseline against which to measure current efficiency. Using usage tracking, you can look for various patterns. For example, you may find that users who access UPK assistance process a procedure, such as entering an order, 5 minutes faster than those who don't. You do some research and discover each minute saved in processing a transaction saves the company one dollar. That translates to the company saving five dollars on every transaction. Assuming 100,000 transactions are performed a year, and all users improve their performance, the company will be saving $500,000 a year. That $500,000 can be re-invested, used to reduce debt, or paid to the shareholders.

    With continued refinement during the life cycle, you should be able to find more ways to reduce cost. These are the type of numbers and productivity gains that senior management and shareholders want to see. Being able to quantify savings and increase productivity may also help when seeking a raise or promotion.

    Read the article

  • Framework for Everything - Where to begin? [Longer post]

    - by SquaredSoft
    Back story of this question; feel free to skip down for the specific question.

    Hello, I've been very interested in the idea of abstract programming for the last few years. I've made about 30 attempts at creating a piece of software that is capable of almost anything you throw at it. Some of these attempts have taken upwards of a year, and while I've gotten close, I've never released anything beyond my compiler. This is something I've always tried wrapping my head around, and something is always missing. With the title, I'm sure you're assuming, "Yes, of course, you noob! You can't account for everything!" To which I have to reply, "Why not?"

    To give you some background into what I'm talking about: this all started with writing maybe a shade-of-gray-hat SEO software. I found myself constantly having to create similar, but slightly different, sets of code. I've gone through as many iterations of ways to communicate over HTTP as the universe has particles. "How many times am I going to have to write this multi-threaded class?" is something I found myself asking a lot. Sure, I could create a class library and just work with that, but I always felt I could optimize what I had, which often was a large undertaking and typically involved frequent use of the CTRL+A keyboard shortcut, mixed with the delete button.

    It dawned on me that it was time to invest in a plugin system. This would allow me to simply add snippets of code; as time went on, I could keep pieces in Subversion and distribute small chunks of code, rather than something that encompasses only a specific function or design. This comes with its own complexity, of course, and by the time I had finished the software scope for this addition, it hit me that I would want to add to everything in the software, not just a new HTTP method or automation code for a specific website. Great, we're getting more abstract. However, the software that I have in my mind comes down to quite a few questions regarding its execution. I have to have some parameters for what I am going to do. After writing down what the perfect software would do in my mind, I came up with this list of requirements:

    - Should be able to use networking
    - A "macro" or "expression" system which would allow people to do something like: =First(=ParseToList(=GetUrl("http://www.google.com?q=helloworld!"), Template.Google))
    - Multithreaded
    - Able to add UI elements through some type of XML, so people can make their own addons, etc.
    - Can use third-party APIs through the plugins, such as Microsoft CRM, Exchange, etc.

    This would allow the software to be used for essentially everything -- really, any task you wish to automate, in a simple way. Making the UI was also extremely hard. How do you do all of this? It's very difficult.

    So my question: with so many attempts at this, I'm out of ideas how to successfully complete it. I have a very specific idea in my mind, but I keep failing to execute it. I'm a self-taught programmer. I've been doing it for years, and work professionally in it, but I've never encountered something as complex and in-depth as a system which essentially does everything. Where would you start? What are the best practices for design? How can I avoid constantly having to go back and optimize my software? What can I do to generalize this and see it through to completion? These are the things I struggle with. P.S., I'm using C# as my main language. I feel like in this example I might be hitting the outer limit of the language, although I don't know if that is the case or if I'm just a bad programmer. Thanks for your time.
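
    The plugin seam described above can be made concrete with a very small contract. Here is a hypothetical sketch (in Java for illustration; the same shape carries over to C#, where MEF or reflection-based assembly scanning would handle discovery). HostContext, Plugin, and GoogleScraper are all made-up names:

    import java.util.List;

    // Hypothetical host-side contract: the host owns the shared services
    // (HTTP, threading, UI), and plugins only see this narrow interface.
    interface HostContext {
        String fetchUrl(String url);          // the one multi-threaded HTTP stack, written once
        void registerUiElement(String xml);   // UI contributed via XML, as proposed above
    }

    // Every snippet of functionality ships as a plugin against one stable interface.
    interface Plugin {
        String name();
        void activate(HostContext host);
    }

    class GoogleScraper implements Plugin {
        public String name() { return "google-scraper"; }
        public void activate(HostContext host) {
            String page = host.fetchUrl("http://www.google.com?q=helloworld!");
            System.out.println(page.length() + " bytes fetched; parsing would go here");
        }
    }

    class Host {
        // The host simply iterates discovered plugins and hands each the context.
        void start(List<Plugin> plugins, HostContext ctx) {
            for (Plugin p : plugins) p.activate(ctx);
        }
    }

    The point of the narrow HostContext is that the optimizable core lives behind the interface, so improving it no longer means CTRL+A and delete across every tool built on it.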

    Read the article

  • Windows Phone 7 and WS-Trust

    - by Your DisplayName here!
    A question that I often hear these days is: "Can I connect a Windows Phone 7 device to my existing enterprise services?" Well, since most of my services are typically issued-token based, this requires support for WS-Trust and WS-Security on the client. Let's see what's necessary to write a WP7 client for this scenario.

    First I converted the Silverlight library that comes with the Identity Training Kit to WP7. Some things are not supported in WP7 WCF (like message inspectors and some client runtime hooks), but besides that this was a simple copy+paste job. Very nice!

    Next I used the WSTrustClient to request tokens from my STS:

    private WSTrustClient GetWSTrustClient()
    {
        var client = new WSTrustClient(
            new WSTrustBindingUsernameMixed(),
            new EndpointAddress("https://identity.thinktecture.com/…/issue.svc/mixed/username"),
            new UsernameCredentials(_txtUserName.Text, _txtPassword.Password));
        return client;
    }

    private void _btnLogin_Click(object sender, RoutedEventArgs e)
    {
        _client = GetWSTrustClient();

        var rst = new RequestSecurityToken(WSTrust13Constants.KeyTypes.Bearer)
        {
            AppliesTo = new EndpointAddress("https://identity.thinktecture.com/rp/")
        };

        _client.IssueCompleted += client_IssueCompleted;
        _client.IssueAsync(rst);
    }

    I then used the returned RSTR to talk to the WCF service. Due to a bug in the combination of the Silverlight library and the WP7 runtime, symmetric key tokens seem to have issues currently. Bearer tokens work fine. So I created the following binding for the WCF endpoint, specifically for WP7:

    <customBinding>
      <binding name="mixedNoSessionBearerBinary">
        <security authenticationMode="IssuedTokenOverTransport"
                  messageSecurityVersion="WSSecurity11 WSTrust13 WSSecureConversation13 WSSecurityPolicy12 BasicSecurityProfile10">
          <issuedTokenParameters keyType="BearerKey" />
        </security>
        <binaryMessageEncoding />
        <httpsTransport/>
      </binding>
    </customBinding>

    The binary encoding is not necessary, but will speed things up a little for mobile devices. I then call the service with the following code:

    private void _btnCallService_Click(object sender, RoutedEventArgs e)
    {
        var binding = new CustomBinding(
            new BinaryMessageEncodingBindingElement(),
            new HttpsTransportBindingElement());

        _proxy = new StarterServiceContractClient(
            binding,
            new EndpointAddress("…"));

        using (var scope = new OperationContextScope(_proxy.InnerChannel))
        {
            OperationContext.Current.OutgoingMessageHeaders.Add(new IssuedTokenHeader(Globals.RSTR));
            _proxy.GetClaimsAsync();
        }
    }

    This works. (download)

    Read the article

  • 4 Key Ingredients for the Cloud

    - by Kellsey Ruppel
    It's a short week here with the US Thanksgiving holiday. So, before we put on our stretch pants and get ready to belly up to the dinner table for turkey, stuffing, and mashed potatoes, let's spend a little time this week talking about the Cloud (kind of like the feathery whipped goodness that tops the infamous Thanksgiving pumpkin pie!). But before we dive into the Cloud, let's do a side-by-side comparison of the key ingredients for each:

    Cloud                    | Whipped Cream
    Application Integration  | 1 cup heavy cream
    Security                 | 1/4 cup sugar
    Virtual I/O              | 1 teaspoon vanilla
    Storage                  | Chilled bowl

    It's no secret that millions of people are connected to the Internet. And it also probably doesn't come as a surprise that a lot of those people are connected on social networking sites. Social networks have become an excellent platform for sharing and communication that reflects real-world relationships, and they play a major part in the everyday lives of many people. Facebook, Twitter, Pinterest, LinkedIn, Google+ and hundreds of others have transformed the way we interact and communicate with one another.

    Social networks are becoming more than just an online gathering of friends. They are becoming a destination for ideation, e-commerce, and marketing. But it doesn't stop there. Some organizations are utilizing social networks internally, integrated with their business applications and processes, and the possibility of social media and cloud integration is compelling. Forrester alone estimates enterprise cloud computing to grow to over $240 billion by 2020. It's hard to find any current IT project today that is NOT considering cloud-based deployments. Security and quality-of-service concerns are no longer at the forefront; rather, it's about focusing on the right mix of capabilities for the business. Cloud vs. on-premise? Policies and governance models? Social in the cloud? The Cloud's increasing sophistication, security in applications, mobility, transaction processing, and social capabilities make it an attractive way to manage information. And Oracle offers all of this through the Oracle Cloud and Oracle Social Network.

    Oracle Social Network is a secure private network that provides a broad range of social tools designed to capture and preserve information flowing between people, enterprise applications, and business processes. By connecting you with your most critical applications, Oracle Social Network provides contextual, real-time communication within and across enterprises. With Oracle Social Network, you and your teams have the tools you need to collaborate quickly and efficiently, while leveraging the organization's collective expertise to make informed decisions and drive business forward.

    Oracle Social Network is available as part of a portfolio of application and platform services within the Oracle Cloud. Oracle Cloud offers self-service business applications delivered on an integrated development and deployment platform, with tools to rapidly extend and create new services. Oracle Social Network is pre-integrated with the Fusion CRM Cloud Service and the Fusion HCM Cloud Service within the Oracle Cloud.

    If you are looking for something to watch as you veg on the couch in a post-turkey-dinner hangover, you might consider watching these how-to videos! And yes, it is perfectly OK to have that 2nd piece of pie.

    Read the article

  • In the Firing Line: The impact of project and portfolio performance on the CEO

    - by Melissa Centurio Lopes
    What are the primary measurements for rating CEO performance? For corporate boards, business analysts, investors, and the trade press, the metrics they deploy are relatively binary in nature: what is being done to generate earnings, and what is being done to build and sustain high performance? As for the market, interest is primarily aroused when operational and financial performance falls outside planned commitments for the year. When organizations announce better-than-predicted results, they usually experience an immediate increase in share price. Likewise, poor results have an obviously negative impact on the share price and impact the role and tenure of the incumbent CEO.

    The danger for the CEO is that the risk of failure is ever present, ranging from manufacturing delays and supply chain issues to labor shortages and scope creep. This risk is enhanced by the involvement of secondary suppliers providing services critical to overall work schedules, and magnified further across a portfolio of programs and projects underway at any one time -- and all set within a global context. All of this can impact planned return on investment and have an inevitable impact on the share price -- the primary empirical measure of day-to-day performance.

    Read the complete complimentary report, In the Firing Line, and explore the direct link between the health of the portfolio and CEO performance. This report will provide an overview of the responsibility the CEO has for implementing and maintaining a culture of accountability, offer examples of some of the higher-profile project failings in recent years, and detail the capabilities available to the CEO to mitigate the risks residing in their own portfolios.

    Read the article

  • Intermittent internet connectivity

    - by Rob Oplawar
    UPDATED: I recently built a new computer and set it up to dual-boot Windows 7 and Ubuntu 11.10. In Windows, using the same hardware, my LAN connectivity is solid. In Ubuntu, however, my network interface periodically dies and resets itself; I'll have a solid connection for 30 seconds, and then it will go out for 30 seconds. When I tail the log:

    tail -f /var/log/kern.log

    I see "eth0 link up" messages appear periodically, corresponding with the return of connectivity. I posted the original question months ago, and misinterpreted what was going on. With a working Internet connection in Windows, I ignored the problem for some months. See my answer below for the solution (drivers).

    ORIGINAL POST: In Ubuntu, although I maintain a solid connection to my LAN (pinging the router IP address consistently returns a good result), my internet connectivity drops in and out. When I continuously ping 74.125.227.18 (a google.com server), I get responses for a while, then I start getting "Destination Host Unreachable" for a while, then I get responses again. This happens consistently, dropping the connection for about 30 seconds out of every minute or two. Whether I configure my network via the network manager or via /etc/network/interfaces seems to make no difference. I configure with the following settings:

    address 192.168.1.101
    network 192.168.1.0
    gateway 192.168.1.99 (my router's IP address)
    netmask 255.255.255.0 (confirmed as the right netmask for the router)
    broadcast 192.168.1.255 (also confirmed with the router)

    ifconfig confirms that these settings are working:

    eth0 Link encap:Ethernet HWaddr 50:e5:49:40:da:a6
         inet addr:192.168.1.101 Bcast:192.168.1.255 Mask:255.255.255.0
         inet6 addr: fe80::52e5:49ff:fe40:daa6/64 Scope:Link
         UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
         RX packets:11557 errors:0 dropped:11557 overruns:0 frame:11557
         TX packets:13117 errors:0 dropped:211 overruns:0 carrier:0
         collisions:0 txqueuelen:1000
         RX bytes:9551488 (9.5 MB) TX bytes:1930952 (1.9 MB)
         Interrupt:41 Base address:0xa000

    I get the same issue when I use automatic DHCP address settings, although I did confirm that there is no other machine on the network with the static IP address I want to use. As I said, the connection to the local network stays solid -- I never have any trouble pinging 192.168.1.* -- it's internet addresses that I intermittently cannot reach. It's not a DNS issue, because pinging known IP addresses directly shows the same behavior. Also, I don't think it's a hardware issue, as I never have any internet connectivity problems on the same machine in Windows. The network hardware is built into the motherboard: Gigabyte Z68XP-UD3P. I managed to bring the OS fully up to date, according to the update manager, but it didn't fix the issue, and with my limited understanding of network architecture I'm at my wit's end. The only clue I can see is that ifconfig is reporting a lot of dropped packets, but I'm not sure what to do about it.

    UPDATE: It seems my problem is a little more generic than I described; now when I try pinging my router and Google simultaneously, they both go unreachable at the same time. Running ifdown eth0 and then ifup eth0 brings it back temporarily; if I just wait, it comes back after a couple of minutes. I'll broaden my search through intermittent network connectivity problems.

    Read the article

  • Confused about modifying the sprint backlog during a sprint

    - by Maltiriel
    I've been reading a lot about Scrum lately, and I've found what seems to me to be conflicting information about whether or not it's OK to change the sprint backlog during a sprint. The Wikipedia article on Scrum says it's not OK, and various other articles say this as well. My software development professor also taught the same thing during an overview of Scrum. However, I read Scrum and XP from the Trenches, and that describes a section for unplanned items on the taskboard. So then I looked up the Scrum Guide, and it says that during the sprint "No changes are made that would affect the Sprint Goal", and, in the discussion of the Sprint Goal, "If the work turns out to be different than the Development Team expected, then they collaborate with the Product Owner to negotiate the scope of Sprint Backlog within the Sprint." It goes on to say, in the discussion of the Sprint Backlog:

    "The Sprint Backlog is a plan with enough detail that changes in progress can be understood in the Daily Scrum. The Development Team modifies Sprint Backlog throughout the Sprint, and the Sprint Backlog emerges during the Sprint. This emergence occurs as the Development Team works through the plan and learns more about the work needed to achieve the Sprint Goal. As new work is required, the Development Team adds it to the Sprint Backlog. As work is performed or completed, the estimated remaining work is updated. When elements of the plan are deemed unnecessary, they are removed. Only the Development Team can change its Sprint Backlog during a Sprint. The Sprint Backlog is a highly visible, real-time picture of the work that the Development Team plans to accomplish during the Sprint, and it belongs solely to the Development Team."

    So at this point I'm altogether confused. Thinking about it, it makes more sense to me to take the second approach. The individual, specific items in the backlog don't seem to me to be the most important thing; rather, the sprint goal is, so not changing the sprint goal but being able to change the backlog makes sense. For instance, if both the product owner and the team thought they were on the same page about a story, but as the sprint progressed they figured out there was a misunderstanding, it seems like it makes sense to change the tasks that make up that story accordingly. Or if there was some story or task that was forgotten about, but is required to reach the sprint goal, I would think it would be best to add the story or task to the backlog during the sprint.

    However, there are a lot of people who seem quite adamant that any change to the sprint backlog is not OK. Am I misunderstanding that position somehow? Are those folks defining the sprint backlog differently somehow? My understanding of the sprint backlog is that it consists of both the stories and the tasks they're broken down into. Anyway, I would really appreciate input on this issue. I'm trying to figure out both what the idealistic Scrum approach is to changing the sprint backlog during a sprint, and whether people who use Scrum successfully for development allow changing the sprint backlog during a sprint.
    Read the article

  • Webcast On-Demand: Building Java EE Apps That Scale

    - by jeckels
    With some awesome work by one of our architects, Randy Stafford, we recently completed a webcast on scaling Java EE apps efficiently. Did you miss it? No problem. We have a replay available on-demand for you. Just hit the '+' sign drop-down for access. Topics include: domain object caching, service response caching, session state caching, JSR-107, HotCache, and more! Further, we had several interesting questions asked by our audience, and we thought we'd share a sampling of those here for you - just in case you had the same queries yourself. Enjoy!

    What is the largest Coherence deployment out there?
    We have seen deployments with over 500 JVMs in the Coherence cluster, and deployments with over 1000 JVMs using the Coherence jar file, in one system. On the management side there is an ecosystem of monitoring tools from Oracle and third parties with dashboards graphing values from Coherence's JMX instrumentation. For lifecycle management we have seen a lot of custom scripting over the years, but we've also integrated closely with WebLogic to leverage its management ecosystem for deploying Coherence-based applications and managing process life cycles. That integration introduces a new Java EE archive type, the Grid Archive or GAR, which embeds in an EAR and can be seen by a WAR in WebLogic. That integration also doesn't require any extra WebLogic licensing if Coherence is licensed.

    How is Coherence different from a NoSQL database like MongoDB?
    Coherence can be considered a NoSQL technology. It pre-dates the NoSQL movement, having been first released in 2001, whereas the term "NoSQL" was coined in 2009. Coherence has a primarily key-value data model but can also be used for document data models. Coherence currently manages data in memory, though disk persistence is in a future release now in beta testing. Where the data is managed yields a few differences from the best-known NoSQL products: access latency is lower with Coherence, though well-known NoSQL databases can manage more data. Coherence also has features that well-known NoSQL databases lack, such as grid computing, eventing, and data source integration. Finally, Coherence has had 15 years of maturation and hardening from usage in mission-critical systems across a variety of industries, particularly financial services.

    Can I use Coherence for local caching?
    Yes, and you get additional features beyond just a java.util.Map: expiration capabilities, size-limitation capabilities, eventing capabilities, etc.

    Are there APIs available for GoldenGate HotCache?
    It's mostly a black box: you configure it, and it just puts objects into your caches. However, you can treat it as a glass box and use Coherence event interceptors to enhance its behavior - and there are use cases for that.

    Are Coherence caches updated transactionally?
    Coherence provides several mechanisms for concurrency control. If a project insists on full-blown JTA / XA distributed transactions, Coherence caches can participate as resources, but nobody does that because it's a performance and scalability anti-pattern. At finer granularity, Coherence guarantees strict ordering of all operations (reads and writes) against a single cache key if the operations are done using Coherence's "EntryProcessor" feature. And Coherence has a unique feature called "partition-level transactions" which guarantees atomic writes of multiple cache entries (even in different caches) without requiring JTA / XA distributed transaction semantics.
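    To make the "EntryProcessor" point concrete, here is a minimal sketch of an atomic per-key update. The cache name and the discount logic are invented for illustration, but CacheFactory, NamedCache, and AbstractProcessor are the standard Coherence Java API; the processor executes on the cluster member that owns the key, which is why no explicit locking or XA transaction is needed:

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;
        import com.tangosol.util.InvocableMap;
        import com.tangosol.util.processor.AbstractProcessor;

        public class PriceAdjuster {
            // Runs atomically against the entry owning the key; serializable because
            // it is shipped to the storage-enabled cluster member for execution.
            public static class ApplyDiscount extends AbstractProcessor
                    implements java.io.Serializable {
                private final double factor;
                public ApplyDiscount(double factor) { this.factor = factor; }

                @Override
                public Object process(InvocableMap.Entry entry) {
                    Double price = (Double) entry.getValue();
                    if (price != null) {
                        entry.setValue(price * factor); // read-modify-write, no lock needed
                    }
                    return entry.getValue();
                }
            }

            public static void main(String[] args) {
                NamedCache prices = CacheFactory.getCache("prices"); // hypothetical cache name
                prices.put("sku-123", 100.0);
                // Concurrent invocations against "sku-123" are strictly ordered.
                Object newPrice = prices.invoke("sku-123", new ApplyDiscount(0.9));
                System.out.println("New price: " + newPrice);
                CacheFactory.shutdown();
            }
        }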

    Read the article

  • Best way to process a queue in C# (PDF treatment)

    - by Bartdude
    First of all, let me explain what I would like to do: I already have a long-running webapp developed in ASP.NET (C#) 2.0. In this app, users can upload standard PDF files (text + pics). The app runs in production on Windows Server 2003 and has a dedicated database server (SQL Server 2008), also running Windows Server 2003. I myself am a quite experienced web developer, but I have never actually programmed anything non-web (or at least nothing serious).

    I plan on adding a functionality to the webapp for which I would need a JPG snapshot of each page of the PDF. Creating these "thumbnails" isn't the big deal as such; I already do it inside my webapp using Ghostscript. I've only done it on 1-page documents for now, though, and the new functionality will need to process bigger documents. In order for this process to be transparent for the admins as well as the end users, I would like to implement some kind of queue to delay the processing of the thumbnails. There again, no problem creating the queue: it will consist of records in a table, with enough info to find the PDF document back. Then I will need to process this queue, and that's where my questions start. Obviously the best solution to process it isn't an ASP script or so, so I will have to get out of my known environment. No problem, but I have no idea which direction to go. Therefore, a few questions:

    What should I develop? I presumably need something that is on standby on the server, runs when needed, then returns to an idle state until further notice. Should I be looking into a Windows service? Is there another, more appropriate type of project?

    Depending on the first answer, what will be the approach? Should I somehow have SQL Server "tell" the program/service/... to process the queue, or should that program/service/... periodically check the state of the queue and treat new items? In both cases, which functionality can I use? We're not talking about hundreds of PDFs a day (max 50, maybe), so I can totally afford to treat the queue one item at a time.

    Can you confirm I don't have to look much further into threads and so on? (I found a lot of answers talking about threads in queue treatment, but it looks quite overkill for my needs.) The polling-loop sketch after this paragraph illustrates why.

    Maybe linked to the previous question: what about concurrent calls to the program, whatever it is? Let's suppose it is currently running, and a new record comes into the queue; what should the behaviour be?

    I don't need very detailed answers and would already be happy with answers like "You can do the processing with a service, and yes it's possible to have SQL Server on machine A trigger a service start on machine B" or "You have to develop xxx and then use the scheduler to run it every xxx minutes". I don't mind reading articles and so on, but I can hardly afford to spend too much time learning stuff only to realize I went the wrong way for this project, so basically I'm trying to narrow down the scope of matters I need to investigate. Thanks for reading; I hope I'll find some helping hands on here :-)
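    For what it's worth, whichever host is chosen (Windows service or scheduled task), the usual shape of the answer is a scheduled poller that claims one queue row at a time. Below is a minimal sketch of that pattern, written in Java purely for illustration (a C# Windows service with a timer would mirror it line for line); the thumbnail_queue table, its columns, and the connection details are all invented for the example:

        import java.sql.*;
        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;

        public class ThumbnailQueueWorker {
            private static final String JDBC_URL =
                "jdbc:sqlserver://dbhost;databaseName=app"; // hypothetical server/database

            public static void main(String[] args) {
                ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
                // Single thread + fixed delay = items are processed strictly one at a
                // time, and a slow item delays the next poll instead of piling up work.
                scheduler.scheduleWithFixedDelay(
                    ThumbnailQueueWorker::drainQueue, 0, 60, TimeUnit.SECONDS);
            }

            static void drainQueue() {
                try (Connection conn = DriverManager.getConnection(JDBC_URL, "app_user", "secret")) {
                    while (true) {
                        long id;
                        String pdfPath;
                        // Claim the oldest pending row; the status column doubles as a
                        // lock, so a second worker would skip rows already in progress.
                        try (Statement st = conn.createStatement();
                             ResultSet rs = st.executeQuery(
                                 "SELECT TOP 1 id, pdf_path FROM thumbnail_queue "
                                 + "WHERE status = 'PENDING' ORDER BY id")) {
                            if (!rs.next()) return; // queue empty - go back to sleep
                            id = rs.getLong("id");
                            pdfPath = rs.getString("pdf_path");
                        }
                        markStatus(conn, id, "IN_PROGRESS");
                        renderThumbnails(pdfPath); // e.g. shell out to Ghostscript here
                        markStatus(conn, id, "DONE");
                    }
                } catch (SQLException e) {
                    e.printStackTrace(); // the next scheduled run will retry
                }
            }

            static void markStatus(Connection conn, long id, String status) throws SQLException {
                try (PreparedStatement ps = conn.prepareStatement(
                        "UPDATE thumbnail_queue SET status = ? WHERE id = ?")) {
                    ps.setString(1, status);
                    ps.setLong(2, id);
                    ps.executeUpdate();
                }
            }

            static void renderThumbnails(String pdfPath) {
                // Placeholder for the Ghostscript call described in the question.
                System.out.println("Rendering thumbnails for " + pdfPath);
            }
        }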

    Read the article

  • Redundant Interconnect with Highly Available IP (HAIP)

    - by JaneZhang(???)
    Starting with 11.2.0.2, Oracle introduced Redundant Interconnect with Highly Available IP (HAIP) in Grid Infrastructure (GI). From 11.2.0.2 onward, redundancy for the private interconnect no longer has to be built with OS-level NIC bonding: with the HAIP feature, the clusterware itself can use multiple private network interfaces and fail traffic over between them. During the GI installation, more than one network interface can be designated as private.

    HAIP addresses are allocated from the link-local range 169.254.*.*. Depending on how many private interfaces are defined, a minimum of 1 and a maximum of 4 HAIPs are started, and they fail over among the private interfaces. To check:

        $ crsctl stat res -t -init
        NAME           TARGET  STATE        SERVER   STATE_DETAILS
        Cluster Resources
        --------------------------------------------------------------------------------
        ora.cluster_interconnect.haip
               1       ONLINE  ONLINE       node2

        # ifconfig -a
        eth1      Link encap:Ethernet  HWaddr 00:14:22:BD:59:DE   <===== private interface
                  inet addr:192.168.10.2  Bcast:192.168.10.255  Mask:255.255.255.0
                  inet6 addr: fe80::214:22ff:febd:59de/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:54297359 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:58151488 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:837602539 (798.8 MiB)  TX bytes:3809085161 (3.5 GiB)
                  Interrupt:169

        eth1:1    Link encap:Ethernet  HWaddr 00:14:22:BD:59:DE   <===== HAIP on the private interface
                  inet addr:169.254.185.195  Bcast:169.254.255.255  Mask:255.255.0.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  Interrupt:169

    The database instance uses the HAIP for its cluster interconnect:

        Cluster communication is configured to use the following interface(s) for this instance
          169.254.185.195
        cluster interconnect IPC version: Oracle UDP/IP (generic)
        IPC Vendor 1 proto 2

    The ASM instance also uses the HAIP for its cluster interconnect:

        Cluster communication is configured to use the following interface(s) for this instance
          169.254.185.195
        cluster interconnect IPC version: Oracle UDP/IP (generic)
        IPC Vendor 1 proto 2

    Oracle database and ASM instances load-balance interconnect traffic across all of the HAIP addresses. If one of the private interfaces fails, its HAIP transparently fails over to a surviving private interface, so cluster communication continues without interruption.

    For more details about HAIP, see My Oracle Support Note 1210883.1.

    Read the article

  • Using AMDU to extract database files from a DISKGROUP that cannot be MOUNTed

    - by Liu Maclean(???)
    AMDU is a utility Oracle provides for ASM: the ASM Metadata Dump Utility. AMDU serves three main purposes: 1. dump the metadata on ASM disks; 2. extract files stored in ASM and write them out as OS files, whether or not the diskgroup is mounted; 3. print metadata blocks, formatted as C structs, in hexadecimal.

    This post demonstrates extracting files from an ASM diskgroup with AMDU. ASM scatters database files across the ASM disks according to its own allocation rules, so ordinarily they can only be reached through a mounted diskgroup. What makes AMDU so valuable is that it can pull those files straight off the ASM disks even when the diskgroup cannot be MOUNTed, without any help from the RDBMS instance. Note that although AMDU ships with 11g, it can also be used against 10g ASM disks.

    Suppose the database's SPFILE, CONTROLFILE, and DATAFILEs are all stored in an ASM diskgroup, and an ASM ORA-600 error prevents the diskgroup from being MOUNTed. We can use AMDU to extract them from the ASM disks as follows.

    Example 1: extract the SPFILE, CONTROLFILE, and DATAFILEs.

    If you still have the SPFILE, create a PFILE from it and read the control_files parameter:

        SQL> show parameter control_files

        NAME            TYPE     VALUE
        --------------- -------- ------------------------------
        control_files   string   +DATA/prodb/controlfile/current.260.794687955, +FRA/prodb/controlfile/current.256.794687955

    In the control_files value +DATA/prodb/controlfile/current.260.794687955, the 260 is the controlfile's FILE NUMBER inside the +DATA diskgroup. You also need the ASM disk discovery path, which can be taken from the ASM SPFILE's asm_diskstring parameter.

        [oracle@mlab2 oracle.SupportTools]$ unzip amdu_X86-64.zip
        Archive: amdu_X86-64.zip
        inflating: libskgxp11.so
        inflating: amdu
        inflating: libnnz11.so
        inflating: libclntsh.so.11.1

        [oracle@mlab2 oracle.SupportTools]$ export LD_LIBRARY_PATH=./
        [oracle@mlab2 oracle.SupportTools]$ ./amdu -diskstring '/dev/asm*' -extract data.260
        amdu_2009_10_10_20_19_17/
        AMDU-00204: Disk N0006 is in currently mounted diskgroup DATA
        AMDU-00201: Disk N0006: '/dev/asm-disk10'
        AMDU-00204: Disk N0003 is in currently mounted diskgroup DATA
        AMDU-00201: Disk N0003: '/dev/asm-disk5'
        AMDU-00204: Disk N0002 is in currently mounted diskgroup DATA
        AMDU-00201: Disk N0002: '/dev/asm-disk6'

        [oracle@mlab2 oracle.SupportTools]$ cd amdu_2009_10_10_20_19_17/
        [oracle@mlab2 amdu_2009_10_10_20_19_17]$ ls
        DATA_260.f report.txt
        [oracle@mlab2 amdu_2009_10_10_20_19_17]$ ls -l
        total 9548
        -rw-r--r-- 1 oracle oinstall 9748480 Oct 10 20:19 DATA_260.f
        -rw-r--r-- 1 oracle oinstall    9441 Oct 10 20:19 report.txt

    The extracted DATA_260.f is a usable copy of the controlfile. Point the instance at it and mount the database:

        SQL> alter system set control_files='/opt/oracle.SupportTools/amdu_2009_10_10_20_19_17/DATA_260.f' scope=spfile;

        System altered.

        SQL> startup force mount;
        ORACLE instance started.

        Total System Global Area 1870647296 bytes
        Fixed Size                  2229424 bytes
        Variable Size             452987728 bytes
        Database Buffers         1409286144 bytes
        Redo Buffers                6144000 bytes
        Database mounted.

        SQL> select name from v$datafile;

        NAME
        --------------------------------------------------------------------------------
        +DATA/prodb/datafile/system.256.794687873
        +DATA/prodb/datafile/sysaux.257.794687875
        +DATA/prodb/datafile/undotbs1.258.794687875
        +DATA/prodb/datafile/users.259.794687875
        +DATA/prodb/datafile/example.265.794687995
        +DATA/prodb/datafile/mactbs.267.794688457

        6 rows selected.

    With the database mounted, v$datafile lists the datafile names, and the numeric component of each name is that file's FILE NUMBER inside the diskgroup. Extract each datafile with ./amdu -diskstring '/dev/asm*' -extract and then verify it, for example:

        [oracle@mlab2 oracle.SupportTools]$ ./amdu -diskstring '/dev/asm*' -extract data.256
        amdu_2009_10_10_20_22_21/
        AMDU-00204: Disk N0006 is in currently mounted diskgroup DATA
        AMDU-00201: Disk N0006: '/dev/asm-disk10'
        AMDU-00204: Disk N0003 is in currently mounted diskgroup DATA
        AMDU-00201: Disk N0003: '/dev/asm-disk5'
        AMDU-00204: Disk N0002 is in currently mounted diskgroup DATA
        AMDU-00201: Disk N0002: '/dev/asm-disk6'

        [oracle@mlab2 oracle.SupportTools]$ cd amdu_2009_10_10_20_22_21/
        [oracle@mlab2 amdu_2009_10_10_20_22_21]$ ls
        DATA_256.f report.txt
        [oracle@mlab2 amdu_2009_10_10_20_22_21]$ dbv file=DATA_256.f

        DBVERIFY: Release 11.2.0.3.0 - Production on Sat Oct 10 20:23:12 2009

        Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

        DBVERIFY - Verification starting : FILE = /opt/oracle.SupportTools/amdu_2009_10_10_20_22_21/DATA_256.f

        DBVERIFY - Verification complete

        Total Pages Examined         : 90880
        Total Pages Processed (Data) : 59817
        Total Pages Failing (Data)   : 0
        Total Pages Processed (Index): 12609
        Total Pages Failing (Index)  : 0
        Total Pages Processed (Other): 3637
        Total Pages Processed (Seg)  : 1
        Total Pages Failing (Seg)    : 0
        Total Pages Empty            : 14817
        Total Pages Marked Corrupt   : 0
        Total Pages Influx           : 0
        Total Pages Encrypted        : 0
        Highest block SCN            : 1125305 (0.1125305)

    Read the article

  • I get the exception org.hibernate.MappingException: No Dialect mapping for JDBC type: -9

    - by ramesh m
    I am using Hibernate and wrote a native SQL query; the same query executes fine at the SQL Server command prompt:

        try {
            session = HibernateUtil.getInstance().getSession();
            transaction = session.beginTransaction();
            SQLQuery query = session.createSQLQuery(
                "SELECT AP.PROJECT_NAME, AP.SKILLSET, PA.START_DATE, PA.END_DATE, "
                + "RS.EMPLOYEE_ID, RS.EMPLOYEE_NAME, RS.REPORTING_PM "
                + "FROM RESOURCE_MASTER RS, SHARED_PROPOSAL S, ACTUAL_PROPOSAL AP, "
                + "PROJECT_APPROVED PA, PROJECT_ALLOCATION PL "
                + "WHERE RS.EMPLOYEE_ID = PL.EMPLOYEE_ID "
                + "AND PA.PROJECT_ID = PL.PROJECT_ID "
                + "AND PA.SHARED_PROPOSAL_ID = S.SHARED_PROPOSAL_ID "
                + "AND S.ACTUAL_PROPOSAL_ID = AP.ACTUAL_PROPOSAL_ID");
            List<Object[]> rows = query.list();
            for (Object[] row : rows) {
                System.out.println(row[0]); // first column of each result row
            }
            logger.info("In find All searchDeveloper");
        } catch (Exception exception) {
            throw new PPAMException("Contact admin",
                    "Problem retrieving resource master list", exception);
        }

    When I run it I get this exception: org.hibernate.MappingException: No Dialect mapping for JDBC type: -9. I have seven tables mapped; if I remove the ACTUAL_PROPOSAL AP table from the query, it executes correctly. Please help me.
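    A likely diagnosis, offered as a hedged note: JDBC type -9 is java.sql.Types.NVARCHAR, which SQL Server returns for NVARCHAR columns, and older versions of Hibernate's SQLServerDialect never registered it. Assuming Hibernate 3.x on JDK 6+, a small custom dialect fixes the lookup:

        import java.sql.Types;

        import org.hibernate.Hibernate;
        import org.hibernate.dialect.SQLServerDialect;

        // Hypothetical custom dialect: registers the NVARCHAR JDBC type (-9),
        // mapping it to Hibernate's string type so native queries can hydrate it.
        public class SQLServerNvarcharDialect extends SQLServerDialect {
            public SQLServerNvarcharDialect() {
                super();
                registerHibernateType(Types.NVARCHAR, Hibernate.STRING.getName());
            }
        }

    Point hibernate.dialect at this class; alternatively, declaring each selected column's type on the query itself, e.g. query.addScalar("PROJECT_NAME", Hibernate.STRING), sidesteps the dialect lookup entirely.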

    Read the article

  • KnpLabs / DoctrineBehaviors Translatable - currentLocale = null

    - by Ruben
    Using the trait \Knp\DoctrineBehaviors\Model\Translatable\Translation inside an entity, I see that the property currentLocale is never set, so we always obtain the default locale ('en') in all the calls to $this->translate(). If I change getDefaultLocale, all the translations are made correctly, so I don't think there is a problem with the fallback. I tried debugging the currentLocaleCallable: if I put var_dump($this->container->get('request')); in the constructor of currentLocaleCallable, the request has a locale of null, whereas outside it the request has the correct locale. It seems that the container is not in the scope: request, and I don't know how to get it to work. I posted an issue on GitHub: https://github.com/KnpLabs/DoctrineBehaviors/issues/71

    EDITED: This service is defined in vendor/knplabs/doctrine-behaviors/config/orm-services.yml and is:

        knp.doctrine_behaviors.reflection.class_analyzer:
            class: "%knp.doctrine_behaviors.reflection.class_analyzer.class%"
            public: false

        knp.doctrine_behaviors.translatable_listener:
            class: "%knp.doctrine_behaviors.translatable_listener.class%"
            public: false
            arguments:
                - "@knp.doctrine_behaviors.reflection.class_analyzer"
                - "%knp.doctrine_behaviors.reflection.is_recursive%"
                - "@knp.doctrine_behaviors.translatable_listener.current_locale_callable"
            tags:
                - { name: doctrine.event_subscriber }

        knp.doctrine_behaviors.translatable_listener.current_locale_callable:
            class: "%knp.doctrine_behaviors.translatable_listener.current_locale_callable.class%"
            arguments:
                - "@service_container" # lazy request resolution
            public: false

    EDIT 2: My composer.json requires:

        "php": ">=5.3.3",
        "symfony/symfony": "2.3.*",
        "doctrine/orm": ">=2.2.3,<2.4-dev",
        "doctrine/doctrine-bundle": "1.2.*",
        "twig/extensions": "1.0.*",
        "symfony/assetic-bundle": "2.3.*",
        "symfony/swiftmailer-bundle": "2.3.*",
        "symfony/monolog-bundle": "2.3.*",
        "sensio/distribution-bundle": "2.3.*",
        "sensio/framework-extra-bundle": "2.3.*",
        "sensio/generator-bundle": "2.3.*",
        "incenteev/composer-parameter-handler": "~2.0",
        "friendsofsymfony/user-bundle": "1.3.*",
        "avalanche123/imagine-bundle": "v2.1",
        "raulfraile/ladybug-bundle": "~1.0",
        "genemu/form-bundle": "2.2.*",
        "friendsofsymfony/rest-bundle": "0.12.0",
        "stof/doctrine-extensions-bundle": "dev-master",
        "sonata-project/admin-bundle": "dev-master",
        "a2lix/translation-form-bundle": "1.*@dev",
        "sonata-project/user-bundle": "dev-master",
        "psliwa/pdf-bundle": "dev-master",
        "jms/serializer-bundle": "dev-master",
        "jms/di-extra-bundle": "dev-master",
        "knplabs/doctrine-behaviors": "dev-master",
        "sonata-project/doctrine-orm-admin-bundle": "dev-master",
        "knplabs/knp-paginator-bundle": "dev-master",
        "friendsofsymfony/jsrouting-bundle": "~1.1",
        "zendframework/zend-validator": ">=2.0.0-rc2",
        "zendframework/zend-barcode": ">=2.0.0-rc2"

    Read the article

  • How to implement Gmail OAuth API to send email (especially via SMTP)?

    - by Curtis Gibby
    I'm developing a web application that will send emails on behalf of a logged-in user. I'm trying to use the new Gmail OAuth protocol described here to send these emails through the user's Gmail account (preferably using SMTP rather than IMAP, but I'm easy). However, the sample PHP code gives me a couple of problems.

    All of the sample code is based on IMAP, not SMTP. Why "support" the SMTP protocol if you're not going to show people how to use it?

    The sample code gives me a fatal error from an uncaught Zend exception - it can't find the "INBOX" folder:

        Fatal error: Uncaught exception 'Zend_Mail_Storage_Exception' with message 'cannot change folder, maybe it does not exist' in path\to\xoauth-php-samples\Zend\Mail\Storage\Imap.php:467
        Stack trace:
        #0 path\to\xoauth-php-samples\Zend\Mail\Storage\Imap.php(248): Zend_Mail_Storage_Imap->selectFolder('INBOX')
        #1 path\to\xoauth-php-samples\three-legged.php(184): Zend_Mail_Storage_Imap->__construct(Object(Zend_Mail_Protocol_Imap))
        #2 {main}

        Next exception 'Zend_Mail_Storage_Exception' with message 'cannot select INBOX, is this a valid transport?' in path\to\xoauth-php-samples\Zend\Mail\Storage\Imap.php:254
        Stack trace:
        #0 path\to\xoauth-php-samples\three-legged.php(184): Zend_Mail_Storage_Imap->__construct(Object(Zend_Mail_Protocol_Imap))
        #1 {main}
          thrown in path\to\xoauth-php-samples\Zend\Mail\Storage\Imap.php on line 254

    I've verified that I'm getting good OAuth tokens back; I just don't know how to make the actual email transaction happen. This protocol is still rather new, so there's not much unofficial community documentation about it out there, and the official docs are unhelpfully dry stuff about the SMTP RFC. So if anyone can help get this going, I'd greatly appreciate it.

    Note: I've already been able to connect to Gmail's SMTP server via SSL and successfully send an email, provided that the user has given my application his/her Gmail username and password. I'd like to avoid this method, because it encourages phishing and security-minded users won't accept it. This question is not about that.
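    As a latter-day footnote: the XOAUTH mechanism the question refers to has since been superseded by XOAUTH2, where the access token is simply presented in place of a password during SMTP AUTH. Below is a minimal sketch of sending through smtp.gmail.com that way, shown in Java because JavaMail 1.5.2+ supports the XOAUTH2 SASL mechanism natively; the account address and token are placeholders, and obtaining the token is assumed to happen elsewhere:

        import java.util.Properties;
        import javax.mail.Message;
        import javax.mail.Session;
        import javax.mail.Transport;
        import javax.mail.internet.InternetAddress;
        import javax.mail.internet.MimeMessage;

        public class GmailXoauth2Send {
            public static void main(String[] args) throws Exception {
                String user = "user@gmail.com";   // placeholder account
                String accessToken = "ya29.xxxx"; // placeholder OAuth access token

                Properties props = new Properties();
                props.put("mail.smtp.host", "smtp.gmail.com");
                props.put("mail.smtp.port", "587");
                props.put("mail.smtp.auth", "true");
                props.put("mail.smtp.starttls.enable", "true");
                // Use the XOAUTH2 SASL mechanism instead of a password login
                props.put("mail.smtp.auth.mechanisms", "XOAUTH2");

                Session session = Session.getInstance(props);
                Message msg = new MimeMessage(session);
                msg.setFrom(new InternetAddress(user));
                msg.setRecipient(Message.RecipientType.TO,
                        new InternetAddress("recipient@example.com"));
                msg.setSubject("OAuth SMTP test");
                msg.setText("Sent over SMTP without ever seeing the user's password.");

                Transport transport = session.getTransport("smtp");
                // The access token goes where the password normally would
                transport.connect("smtp.gmail.com", user, accessToken);
                transport.sendMessage(msg, msg.getAllRecipients());
                transport.close();
            }
        }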

    Read the article

  • AbstractMethodError on org.apache.xalan.processor.TransformerFactoryImpl

    - by JBristow
    With the following code: private Document transformDoc(Source source) throws TransformerException, IOException { Transformer xslTransformer = TransformerFactory.newInstance().newTransformer(new StreamSource(pdfTransformXslt.getInputStream())); xslTransformer.setParameter("http://apache.org/xml/features/nonvalidating/load-external-dtd", false); xslTransformer.setParameter("http://xml.org/sax/features/validation", false); JDOMResult result = new JDOMResult(); xslTransformer.transform(source, result); return result.getDocument(); } I'm getting the following error: java.lang.AbstractMethodError: org.apache.xalan.processor.TransformerFactoryImpl.setFeature(Ljava/lang/String;Z)V Why is this? Here's my Maven dependency tree: ------------------------------------------------------------------------ Building mc-hub-batch task-segment: [dependency:tree] ------------------------------------------------------------------------ snapshot com.billmelater:mc-test-support:2.0.0.11-SNAPSHOT: checking for updates from repository.jboss.org [dependency:tree {execution: default-cli}] com.billmelater:mc-hub-batch:jar:2.0.0.11-SNAPSHOT +- com.billmelater:mc-hub-core:jar:2.0.0.11-SNAPSHOT:compile | +- commons-lang:commons-lang:jar:2.4:compile | +- commons-collections:commons-collections:jar:3.2.1:compile | +- commons-beanutils:commons-beanutils:jar:1.8.0:compile | +- commons-digester:commons-digester:jar:2.0:compile | | +- (commons-beanutils:commons-beanutils:jar:1.8.0:compile - omitted for duplicate) | | \- (commons-logging:commons-logging:jar:1.1.1:compile - version managed from 1.0.4; omitted for duplicate) | \- (org.springframework.batch:spring-batch-core:jar:2.0.2.RELEASE:compile - omitted for duplicate) +- com.billmelater:mc-test-support:jar:2.0.0.11-SNAPSHOT:test | +- (com.billmelater:mc-hub-core:jar:2.0.0.11-SNAPSHOT:test - omitted for duplicate) | +- (org.springframework:spring:jar:2.5.6:test - omitted for duplicate) | +- org.springframework:spring-jdbc:jar:2.5.6.SEC01:test | | +- (commons-logging:commons-logging:jar:1.1.1:test - omitted for duplicate) | | +- (org.springframework:spring-beans:jar:2.5.6.SEC01:test - omitted for conflict with 2.5.6) | | +- (org.springframework:spring-context:jar:2.5.6.SEC01:test - omitted for conflict with 2.5.6) | | +- (org.springframework:spring-core:jar:2.5.6.SEC01:test - omitted for conflict with 2.5.6) | | \- (org.springframework:spring-tx:jar:2.5.6.SEC01:test - omitted for conflict with 2.5.6) | +- (org.dbunit:dbunit:jar:2.4.5:test - omitted for duplicate) | +- (log4j:log4j:jar:1.2.15:test - omitted for duplicate) | +- (org.slf4j:slf4j-api:jar:1.5.6:compile - version managed from 1.5.8; scope updated from test; omitted for duplicate) | +- (org.slf4j:slf4j-log4j12:jar:1.5.6:test - omitted for duplicate) | +- org.jboss.seam:jboss-seam:jar:2.2.0.GA:test | | +- xstream:xstream:jar:1.1.3:test | | +- (xpp3:xpp3_min:jar:1.1.3.4.O:compile - scope updated from test; omitted for duplicate) | | \- org.jboss.el:jboss-el:jar:1.0_02.CR4:test | +- (org.testng:testng:jar:jdk15:5.8:test - omitted for duplicate) | +- (org.hibernate:hibernate-core:jar:3.3.2.GA:test - version managed from 3.3.0.SP1; omitted for duplicate) | +- org.hibernate:hibernate-entitymanager:jar:3.4.0.GA:test | | +- (org.hibernate:ejb3-persistence:jar:1.0.2.GA:test - omitted for duplicate) | | +- (org.hibernate:hibernate-commons-annotations:jar:3.1.0.GA:test - omitted for duplicate) | | +- (org.hibernate:hibernate-annotations:jar:3.4.0.GA:test - omitted for duplicate) | | +- 
(org.hibernate:hibernate-core:jar:3.3.2.GA:test - version managed from 3.3.0.SP1; omitted for duplicate) | | +- (org.slf4j:slf4j-api:jar:1.5.6:test - version managed from 1.4.2; omitted for duplicate) | | +- (dom4j:dom4j:jar:1.6.1-jboss:test - version managed from 1.6.1; omitted for duplicate) | | +- (javax.transaction:jta:jar:1.0.1B:test - version managed from 1.1; omitted for duplicate) | | \- javassist:javassist:jar:3.4.GA:test | +- (org.hibernate:hibernate-validator:jar:3.1.0.GA:test - omitted for duplicate) | +- (org.apache.velocity:velocity:jar:1.6.2:test - omitted for duplicate) | \- (ojdbc:ojdbc:jar:14:test - omitted for duplicate) +- org.springframework:spring:jar:2.5.6:compile +- org.springframework.batch:spring-batch-core:jar:2.0.2.RELEASE:compile | +- org.springframework.batch:spring-batch-infrastructure:jar:2.0.2.RELEASE:compile | | +- (commons-logging:commons-logging:jar:1.1.1:compile - omitted for duplicate) | | +- (org.springframework:spring-core:jar:2.5.6:compile - omitted for duplicate) | | \- (stax:stax:jar:1.2.0:compile - omitted for duplicate) | +- org.aspectj:aspectjrt:jar:1.5.4:compile | +- org.aspectj:aspectjweaver:jar:1.5.4:compile | +- com.thoughtworks.xstream:xstream:jar:1.3:compile | | \- xpp3:xpp3_min:jar:1.1.4c:compile | +- org.codehaus.jettison:jettison:jar:1.0:compile | +- org.springframework:spring-aop:jar:2.5.6:compile | | +- aopalliance:aopalliance:jar:1.0:compile | | +- (commons-logging:commons-logging:jar:1.1.1:compile - omitted for duplicate) | | +- (org.springframework:spring-beans:jar:2.5.6:compile - omitted for duplicate) | | \- (org.springframework:spring-core:jar:2.5.6:compile - omitted for duplicate) | +- org.springframework:spring-beans:jar:2.5.6:compile | | +- (commons-logging:commons-logging:jar:1.1.1:compile - omitted for duplicate) | | \- (org.springframework:spring-core:jar:2.5.6:compile - omitted for duplicate) | +- org.springframework:spring-context:jar:2.5.6:compile | | +- (aopalliance:aopalliance:jar:1.0:compile - omitted for duplicate) | | +- (commons-logging:commons-logging:jar:1.1.1:compile - omitted for duplicate) | | +- (org.springframework:spring-beans:jar:2.5.6:compile - omitted for duplicate) | | \- (org.springframework:spring-core:jar:2.5.6:compile - omitted for duplicate) | +- org.springframework:spring-core:jar:2.5.6:compile | | \- (commons-logging:commons-logging:jar:1.1.1:compile - omitted for duplicate) | +- org.springframework:spring-tx:jar:2.5.6:compile | | +- (commons-logging:commons-logging:jar:1.1.1:compile - omitted for duplicate) | | +- (org.springframework:spring-beans:jar:2.5.6:compile - omitted for duplicate) | | +- (org.springframework:spring-context:jar:2.5.6:compile - omitted for duplicate) | | \- (org.springframework:spring-core:jar:2.5.6:compile - omitted for duplicate) | \- stax:stax:jar:1.2.0:compile | \- stax:stax-api:jar:1.0.1:compile +- commons-dbcp:commons-dbcp:jar:1.2.2:compile | \- commons-pool:commons-pool:jar:1.3:compile +- org.hibernate:hibernate-core:jar:3.3.2.GA:compile | +- antlr:antlr:jar:2.7.7:compile (version managed from 2.7.6) | +- dom4j:dom4j:jar:1.6.1-jboss:compile (version managed from 1.6.1) | +- javax.transaction:jta:jar:1.0.1B:compile (version managed from 1.1) | \- (org.slf4j:slf4j-api:jar:1.5.6:compile - version managed from 1.4.2; omitted for duplicate) +- org.hibernate:hibernate-validator:jar:3.1.0.GA:compile | +- (org.hibernate:hibernate-core:jar:3.3.2.GA:compile - version managed from 3.3.0.SP1; omitted for duplicate) | +- 
org.hibernate:hibernate-commons-annotations:jar:3.1.0.GA:compile | | \- (org.slf4j:slf4j-api:jar:1.5.6:compile - version managed from 1.4.2; omitted for duplicate) | \- (org.slf4j:slf4j-api:jar:1.5.6:compile - version managed from 1.4.2; omitted for duplicate) +- org.hibernate:hibernate-annotations:jar:3.4.0.GA:compile | +- org.hibernate:ejb3-persistence:jar:1.0.2.GA:compile | +- (org.hibernate:hibernate-commons-annotations:jar:3.1.0.GA:compile - omitted for duplicate) | +- (org.hibernate:hibernate-core:jar:3.3.2.GA:compile - version managed from 3.3.0.SP1; omitted for duplicate) | +- (org.slf4j:slf4j-api:jar:1.5.6:compile - version managed from 1.4.2; omitted for duplicate) | \- (dom4j:dom4j:jar:1.6.1-jboss:compile - version managed from 1.6.1; omitted for duplicate) +- ojdbc:ojdbc:jar:14:compile +- org.slf4j:slf4j-api:jar:1.5.6:compile +- org.slf4j:slf4j-log4j12:jar:1.5.6:compile | \- (org.slf4j:slf4j-api:jar:1.5.6:compile - version managed from 1.4.2; omitted for duplicate) +- log4j:log4j:jar:1.2.15:compile +- org.apache.velocity:velocity:jar:1.6.2:compile | +- (commons-collections:commons-collections:jar:3.2.1:compile - omitted for duplicate) | +- (commons-lang:commons-lang:jar:2.4:compile - omitted for duplicate) | \- oro:oro:jar:2.0.8:compile +- org.testng:testng:jar:jdk15:5.8:test +- org.dbunit:dbunit:jar:2.4.5:test | +- junit:junit:jar:4.7:test (version managed from 3.8.2) | +- (org.slf4j:slf4j-api:jar:1.5.6:test - version managed from 1.4.2; omitted for duplicate) | \- (commons-collections:commons-collections:jar:3.2.1:test - omitted for duplicate) +- hsqldb:hsqldb:jar:1.8.0.7:test +- jboss:javassist:jar:3.3.ga:provided +- org.jdom:jdom:jar:1.1:compile +- jaxen:jaxen:jar:1.1.1:provided +- org.apache.xmlgraphics:fop:jar:0.95:compile | +- (org.apache.xmlgraphics:xmlgraphics-commons:jar:1.3.1:compile - omitted for duplicate) | +- org.apache.xmlgraphics:batik-svg-dom:jar:1.7:compile | | +- (org.apache.xmlgraphics:batik-svg-dom:jar:1.7:compile - omitted for cycle) | | +- org.apache.xmlgraphics:batik-anim:jar:1.7:compile | | | +- (org.apache.xmlgraphics:batik-awt-util:jar:1.7:compile - omitted for duplicate) | | | +- (org.apache.xmlgraphics:batik-dom:jar:1.7:compile - omitted for duplicate) | | | +- (org.apache.xmlgraphics:batik-ext:jar:1.7:compile - omitted for duplicate) | | | \- (org.apache.xmlgraphics:batik-parser:jar:1.7:compile - omitted for duplicate) | | +- (org.apache.xmlgraphics:batik-awt-util:jar:1.7:compile - omitted for duplicate) | | +- org.apache.xmlgraphics:batik-css:jar:1.7:compile | | | +- (org.apache.xmlgraphics:batik-ext:jar:1.7:compile - omitted for duplicate) | | | +- (org.apache.xmlgraphics:batik-util:jar:1.7:compile - omitted for duplicate) | | | \- (xml-apis:xml-apis-ext:jar:1.3.04:compile - omitted for duplicate) | | +- org.apache.xmlgraphics:batik-dom:jar:1.7:compile | | | +- (org.apache.xmlgraphics:batik-css:jar:1.7:compile - omitted for duplicate) | | | +- (org.apache.xmlgraphics:batik-ext:jar:1.7:compile - omitted for duplicate) | | | +- (org.apache.xmlgraphics:batik-util:jar:1.7:compile - omitted for duplicate) | | | +- (org.apache.xmlgraphics:batik-xml:jar:1.7:compile - omitted for duplicate) | | | +- (xalan:xalan:jar:2.6.0:compile - omitted for duplicate) | | | \- (xml-apis:xml-apis-ext:jar:1.3.04:compile - omitted for duplicate) | | +- (org.apache.xmlgraphics:batik-ext:jar:1.7:compile - omitted for duplicate) | | +- org.apache.xmlgraphics:batik-parser:jar:1.7:compile | | | +- (org.apache.xmlgraphics:batik-awt-util:jar:1.7:compile - omitted for 
duplicate) | | | +- (org.apache.xmlgraphics:batik-util:jar:1.7:compile - omitted for duplicate) | | | \- (org.apache.xmlgraphics:batik-xml:jar:1.7:compile - omitted for duplicate) | | +- org.apache.xmlgraphics:batik-util:jar:1.7:compile | | \- xml-apis:xml-apis-ext:jar:1.3.04:compile | +- org.apache.xmlgraphics:batik-bridge:jar:1.7:compile | | +- (org.apache.xmlgraphics:batik-anim:jar:1.7:compile - omitted for duplicate) | | +- (org.apache.xmlgraphics:batik-awt-util:jar:1.7:compile - omitted for duplicate) | | +- (org.apache.xmlgraphics:batik-css:jar:1.7:compile - omitted for duplicate) | | +- (org.apache.xmlgraphics:batik-dom:jar:1.7:compile - omitted for duplicate) | | +- (org.apache.xmlgraphics:batik-ext:jar:1.7:compile - omitted for duplicate) | | +- (org.apache.xmlgraphics:batik-bridge:jar:1.7:compile - omitted for cycle) | | +- (org.apache.xmlgraphics:batik-gvt:jar:1.7:compile - omitted for duplicate) | | +- (org.apache.xmlgraphics:batik-parser:jar:1.7:compile - omitted for duplicate) | | +- (org.apache.xmlgraphics:batik-bridge:jar:1.7:compile - omitted for cycle) | | +- org.apache.xmlgraphics:batik-script:jar:1.7:compile | | +- (org.apache.xmlgraphics:batik-svg-dom:jar:1.7:compile - omitted for duplicate) | | +- (org.apache.xmlgraphics:batik-util:jar:1.7:compile - omitted for duplicate) | | +- org.apache.xmlgraphics:batik-xml:jar:1.7:compile | | | \- (org.apache.xmlgraphics:batik-util:jar:1.7:compile - omitted for duplicate) | | +- xalan:xalan:jar:2.6.0:compile | | \- (xml-apis:xml-apis-ext:jar:1.3.04:compile - omitted for duplicate) | +- org.apache.xmlgraphics:batik-awt-util:jar:1.7:compile | | \- (org.apache.xmlgraphics:batik-util:jar:1.7:compile - omitted for duplicate) | +- org.apache.xmlgraphics:batik-gvt:jar:1.7:compile | | +- (org.apache.xmlgraphics:batik-awt-util:jar:1.7:compile - omitted for duplicate) | | +- (org.apache.xmlgraphics:batik-gvt:jar:1.7:compile - omitted for cycle) | | +- (org.apache.xmlgraphics:batik-bridge:jar:1.7:compile - omitted for duplicate) | | \- (org.apache.xmlgraphics:batik-util:jar:1.7:compile - omitted for duplicate) | +- org.apache.xmlgraphics:batik-transcoder:jar:1.7:compile | | +- (org.apache.xmlgraphics:batik-awt-util:jar:1.7:compile - omitted for duplicate) | | +- (org.apache.xmlgraphics:batik-bridge:jar:1.7:compile - omitted for duplicate) | | +- (org.apache.xmlgraphics:batik-dom:jar:1.7:compile - omitted for duplicate) | | +- (org.apache.xmlgraphics:batik-gvt:jar:1.7:compile - omitted for duplicate) | | +- (org.apache.xmlgraphics:batik-svg-dom:jar:1.7:compile - omitted for duplicate) | | +- org.apache.xmlgraphics:batik-svggen:jar:1.7:compile | | | +- (org.apache.xmlgraphics:batik-awt-util:jar:1.7:compile - omitted for duplicate) | | | \- (org.apache.xmlgraphics:batik-util:jar:1.7:compile - omitted for duplicate) | | +- (org.apache.xmlgraphics:batik-util:jar:1.7:compile - omitted for duplicate) | | +- (org.apache.xmlgraphics:batik-xml:jar:1.7:compile - omitted for duplicate) | | \- (xml-apis:xml-apis-ext:jar:1.3.04:compile - omitted for duplicate) | +- org.apache.xmlgraphics:batik-extension:jar:1.7:compile | | +- (org.apache.xmlgraphics:batik-awt-util:jar:1.7:compile - omitted for duplicate) | | +- (org.apache.xmlgraphics:batik-bridge:jar:1.7:compile - omitted for duplicate) | | +- (org.apache.xmlgraphics:batik-css:jar:1.7:compile - omitted for duplicate) | | +- (org.apache.xmlgraphics:batik-dom:jar:1.7:compile - omitted for duplicate) | | +- (org.apache.xmlgraphics:batik-ext:jar:1.7:compile - omitted for duplicate) | | +- 
(org.apache.xmlgraphics:batik-gvt:jar:1.7:compile - omitted for duplicate) | | +- (org.apache.xmlgraphics:batik-parser:jar:1.7:compile - omitted for duplicate) | | +- (org.apache.xmlgraphics:batik-svg-dom:jar:1.7:compile - omitted for duplicate) | | +- (org.apache.xmlgraphics:batik-util:jar:1.7:compile - omitted for duplicate) | | \- (xml-apis:xml-apis-ext:jar:1.3.04:compile - omitted for duplicate) | +- org.apache.xmlgraphics:batik-ext:jar:1.7:compile | +- commons-logging:commons-logging:jar:1.1.1:compile | +- commons-io:commons-io:jar:1.3.1:compile | \- org.apache.avalon.framework:avalon-framework-api:jar:4.3.1:compile +- org.apache.xmlgraphics:xmlgraphics-commons:jar:1.3.1:compile | +- (commons-io:commons-io:jar:1.3.1:compile - omitted for duplicate) | \- (commons-logging:commons-logging:jar:1.1.1:compile - version managed from 1.0.4; omitted for duplicate) +- org.easymock:easymock:jar:2.0:test \- org.easymock:easymockclassextension:jar:2.2:test +- (org.easymock:easymock:jar:2.2:test - omitted for conflict with 2.0) \- cglib:cglib-nodep:jar:2.2:test (version managed from 2.1_3) Can anyone tell me how to clear out intellij's classpath too?
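    A note on the usual diagnosis, inferred from the tree above rather than from running the project: batik-bridge drags in xalan:xalan:2.6.0, whose TransformerFactoryImpl predates the JAXP 1.3 setFeature(String, boolean) method, so any code path that calls setFeature lands on the missing method as an AbstractMethodError; excluding that artifact or upgrading to Xalan 2.7.x typically clears it. Separately, the two URLs in the snippet are SAX parser features, not transformer parameters, so they belong on the XMLReader that feeds the transform. A sketch of that, assuming a Xerces-based parser for the load-external-dtd feature:

        import java.io.InputStream;

        import javax.xml.parsers.SAXParserFactory;
        import javax.xml.transform.sax.SAXSource;

        import org.xml.sax.InputSource;
        import org.xml.sax.XMLReader;

        public class NonValidatingSourceFactory {
            // Builds a Source whose parser skips external-DTD loading and validation.
            public static SAXSource nonValidatingSource(InputStream in) throws Exception {
                SAXParserFactory spf = SAXParserFactory.newInstance();
                spf.setNamespaceAware(true);
                // Xerces-specific feature: do not fetch external DTDs at all
                spf.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", false);
                // Standard SAX feature: disable validation
                spf.setFeature("http://xml.org/sax/features/validation", false);
                XMLReader reader = spf.newSAXParser().getXMLReader();
                return new SAXSource(reader, new InputSource(in));
            }
        }

    The transform call would then pass nonValidatingSource(xmlStream) instead of setting those URLs on the Transformer.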

    Read the article

  • CA2000 and disposal of WCF client

    - by Mayo
    There is plenty of information out there concerning WCF clients and the fact that you cannot simply rely on a using statement to dispose of the client. This is because the Close method can throw an exception (i.e. if the server hosting the service doesn't respond). I've done my best to implement something that adheres to the numerous suggestions out there.

        public void DoSomething()
        {
            MyServiceClient client = new MyServiceClient(); // from service reference
            try
            {
                client.DoSomething();
            }
            finally
            {
                client.CloseProxy();
            }
        }

        public static void CloseProxy(this ICommunicationObject proxy)
        {
            if (proxy == null) return;
            try
            {
                if (proxy.State != CommunicationState.Closed &&
                    proxy.State != CommunicationState.Faulted)
                {
                    proxy.Close();
                }
                else
                {
                    proxy.Abort();
                }
            }
            catch (CommunicationException)
            {
                proxy.Abort();
            }
            catch (TimeoutException)
            {
                proxy.Abort();
            }
            catch
            {
                proxy.Abort();
                throw;
            }
        }

    This appears to be working as intended. However, when I run Code Analysis in Visual Studio 2010 I still get a CA2000 warning:

        CA2000 : Microsoft.Reliability : In method 'DoSomething()', call System.IDisposable.Dispose on object 'client' before all references to it are out of scope.

    Is there something I can do to my code to get rid of the warning, or should I use SuppressMessage to hide this warning once I am comfortable that I am doing everything possible to be sure the client is disposed of? Related resources that I've found:

        http://www.theroks.com/2011/03/04/wcf-dispose-problem-with-using-statement/
        http://www.codeproject.com/Articles/151755/Correct-WCF-Client-Proxy-Closing.aspx
        http://codeguru.earthweb.com/csharp/.net/net_general/tipstricks/article.php/c15941/

    Read the article

< Previous Page | 165 166 167 168 169 170 171 172 173 174 175 176  | Next Page >