Search Results

Search found 650 results on 26 pages for 'caller'.

Page 22/26 | < Previous Page | 18 19 20 21 22 23 24 25 26  | Next Page >

  • Great Customer Service Example

    - by MightyZot
    A few days ago I wrote about what I consider a poor customer service interaction with TiVo, a company that I have been faithful to for the past 12 years or so. In that post I talked about how they helped me, but I felt like I was doing something wrong at the end of the call – when in reality I was just following through with an offer that TiVo made possible through my cable company. Today I had a wonderful customer service interaction with American Express, another company that I have been loyal to for many years.(I am a Gold Card member.) I like my Amex card because I can use it for big purchases and it forces me to pay them off at the end of the month. Well, the reality is that I’m not always so good at doing that, so sometimes my payments are over a couple of months.  :) A few days ago I received an email from “American Express” fraud detection. The email stated that I should call a toll free number and have the last four digits of my card handy. I grew up during the BBS era with some creative and somewhat mischievous friends. I’ve learned to be extremely cautious with regard to my online life! So, I did what you would expect…I sent them a nice reply that said “Go screw yourself.” For the past couple of days someone has been trying to call me and I assumed it was the same prankster trying to get the last four digits of my card. The last caller left a message indicating that they were from American Express and they wanted to talk to me about my card. After looking up their customer service numbers on the www.americanexpress.com web site, I called and was put through to the fraud detection group. The rep explained that there were some charges on my wife’s card that did not fit our purchase profile. She went through each charge and, for the most part, they looked like charges my wife may have made. My wife had asked to use the card for some Christmas shopping during the same timeframe as the charges. The American Express rep very politely explained that these looked out of character to her. She continued through the charges. She listed a charge for $160 – at this point my adrenaline started kicking in. My wife said she was going to charge about $25 or $30 dollars, not $160. Next, the rep listed a charge for over $1200. Uh oh!! Now I know that my account has been compromised. I informed the rep that we definitely did not make those charges. She replied with, “that’s ok Mr Pope, we declined those charges as well as some others.” We went through the pending charges and there were a couple more that were questionable. The rep very patiently waited while I called my wife on my office phone to verify the charges. Sure enough, my wife had not ordered anything from Netflix or purchased anything with Yahoo Wallet! “No problem Mr Pope, we will remove those charges as well.” “We are going to cancel your wife’s card and send her a new one. She will receive it by 7pm tomorrow via Federal Express. Please watch your statements over the next couple of months. If you notice anything fishy, give us a call and we will take care of it for you.” (Wow, I’m thinking to myself!) “Is there anything else I can help you with Mr Pope?” “Nope, thank you very much for catching this so early and declining those charges!”, I said smiling. Apparently she could hear me smiling on the other end of the phone line because she replied with “keep smiling Mr Pope and have a good rest of your week.” Now THAT’s customer service!  Thank you American Express!!! I shall remain an ever faithful customer. Interesting…

    Read the article

  • Creating a Synchronous BPEL composite using File Adapter

    - by [email protected]
    By default, the JDeveloper wizard generates asynchronous WSDLs when you use technology adapters. Typically, a user follows these steps when creating an adapter scenario in 11g: 1) Create a SOA Application with either "Composite with BPEL" or an "Empty Composite". Furthermore, if the user chooses "Empty Composite", then he or she is required to drop the "BPEL Process" from the "Service Components" pane onto the SOA Composite Editor. Either way, the user comes to the screen below where he/she fills in the process details. Please note that the user is required to choose "Define Service Later" as the template. 2) Create the inbound service and outbound references and wire them with the BPEL component. 3) Finally, create the BPEL process with the initiating <receive> activity to retrieve the payload and an <invoke> activity to write the payload. This is how most BPEL processes that use Adapters are modeled. And, if we scrutinize the generated WSDL, we can clearly see that it is one-way, and that makes the BPEL process asynchronous (see below). In other words, the inbound FileAdapter would poll for files in the directory and, for every file that it finds there, it would translate the content into XML and publish to BPEL. But, since the BPEL process is asynchronous, the adapter would return immediately after the publish and perform the required post-processing, e.g. deletion/archival and so on. The disadvantage with such asynchronous BPEL processes is that it becomes difficult to throttle the inbound adapter. In other words, the inbound adapter would keep sending messages to BPEL without waiting for the downstream business processes to complete. This might lead to several issues including higher memory usage, CPU usage and so on. In order to alleviate these problems, we will manually tweak the WSDL and BPEL artifacts into synchronous processes. Once we have synchronous BPEL processes, the inbound adapter would automatically throttle itself since the adapter would be forced to wait for the downstream process to complete with a <reply> before processing the next file or message and so on. Please see the tweaked WSDL below and note that we have converted the one-way WSDL to a two-way WSDL, thereby making it synchronous. Add a <reply> activity to the inbound adapter partnerlink at the end of your BPEL process, e.g. Finally, your process will look like this: You are done. Please remember that such an exercise is NOT required for Mediator, since the Mediator routing rules are sequential by default. In other words, the Mediator uses the caller thread (inbound file adapter thread) for processing the routing rules. This is the case even if the WSDL for Mediator is one-way.

    Read the article

  • Question on the implementation of my Entity System

    - by miguel.martin
    I am currently creating an Entity System, in C++, it is almost completed (I have all the code there, I just have to add a few things and test it). The only thing is, I can't figure out how to implement some features. This Entity System is based off a bit from the Artemis framework, however it is different. I'm not sure if I'll be able to type this out the way my head processing it. I'm going to basically ask whether I should do something over something else. Okay, now I'll give a little detail on my Entity System itself. Here are the basic classes that my Entity System uses to actually work: Entity - An Id (and some methods to add/remove/get/etc Components) Component - An empty abstract class ComponentManager - Manages ALL components for ALL entities within a Scene EntitySystem - Processes entities with specific components Aspect - The class that is used to help determine what Components an Entity must contain so a specific EntitySystem can process it EntitySystemManager - Manages all EntitySystems within a Scene EntityManager - Manages entities (i.e. holds all Entities, used to determine whether an Entity has been changed, enables/disables them, etc.) EntityFactory - Creates (and destroys) entities and assigns an ID to them Scene - Contains an EntityManager, EntityFactory, EntitySystemManager and ComponentManager. Has functions to update and initialise the scene. Now in order for an EntitySystem to efficiently know when to check if an Entity is valid for processing (so I can add it to a specific EntitySystem), it must recieve a message from the EntityManager (after a call of activate(Entity& e)). Similarly the EntityManager must know when an Entity is destroyed from the EntityFactory in the Scene, and also the ComponentManager must know when an Entity is created AND destroyed. I do have a Listener/Observer pattern implemented at the moment, but with this pattern I may remove a Listener (which is this case is dependent on the method being called). I mainly have this implemented for specific things related to a game, i.e. Teams, Tagging of entities, etc. So... I was thinking maybe I should call a private method (using friend classes) to send out when an Entity has been activated, deleted, etc. i.e. taken from my EntityFactory void EntityFactory::killEntity(Entity& e) { // if the entity doesn't exsist in the entity manager within the scene if(!getScene()->getEntityManager().doesExsist(e)) { return; // go back to the caller! (should throw an exception or something..) } // tell the ComponentManager and the EntityManager that we killed an Entity getScene()->getComponentManager().doOnEntityWillDie(e); getScene()->getEntityManager().doOnEntityWillDie(e); // notify the listners for(Mouth::iterator i = getMouth().begin(); i != getMouth().end(); ++i) { (*i)->onEntityWillDie(*this, e); } _idPool.addId(e.getId()); // add the ID to the pool delete &e; // delete the entity } As you can see on the lines where I am telling the ComponentManager and the EntityManager that an Entity will die, I am calling a method to make sure it handles it appropriately. Now I realise I could do this without calling it explicitly, with the help of that for loop notifying all listener objects connected to the EntityFactory's Mouth (an object used to tell listeners that there's an event), however is this a good idea (good design, or what)? I've gone over the PROS and CONS, I just can't decide what I want to do. Calling Explicitly: PROS Faster? 
Since these functions are explicitly called, they can't be "removed". CONS Not flexible; Bad design? (friend functions). Calling through Listener objects (i.e. ComponentManager/EntityManager inherits from an EntityFactoryListener): PROS More Flexible? Better Design? CONS Slower? (virtual functions); Listeners can be removed, i.e. may be removed and not get called again during the program, which could result in a crash. P.S. If you wish to view my current source code, I am hosting it on BitBucket.

    Read the article

  • History of Mobile Technology

    - by David Dorf
    Over the last ten years, mobile phones have gone through several incremental technology leaps that have added capabilities that impact the retail industry. I've listed the six major ones below, along with their long-lasting impact. 1. Location In the US, the FCC required mobile phones to implement E911 (emergency calls) by 2006, requiring the caller to be located to within 300 meters. Back in 2000, GPS was opened up for civilian use, and by 2004 Qualcomm had figured out how to use GPS in mobile phones. So mobile operators moved from cell tower triangulation to GPS, principally for E911. But then lots of other uses became apparent, especially navigation. The earliest mobile apps from retailers made it easy to find nearby stores, and companies are looking at ways to use WiFi triangulation inside stores. 2. Computer Vision In 1997 Philippe Kahn shared a photo of his newborn using a mobile phone, thus launching the popularity of instant visual communications. Over the years the quality of the cameras got better, reaching the point where barcodes could be read around 2008. That's when Occipital came on the scene with their RedLaser application, which was eventually acquired by eBay. This opened up the ability for consumers to easily price-compare inside stores. Other interesting apps included Tesco's Wine Finder and Amazon's Price Checker, both allowing products to be identified by picture. 3. Augmented Reality Once the mobile phone had GPS, a video camera, and compass functionality, it was suddenly possible to overlay digital information on the screen in real-time. Yelp, which was using GPS to find nearby merchants, created a backdoor called Monocle on the iPhone that showed nearby merchants overlaid on the video camera view. Today AR apps are mostly used by retailers for marketing, like Moosejaw's app that undresses models in their catalog. 4. Geo-Fencing So if we're able to track the location of a mobile phone, why not use that context to offer timely information? My first experience with geo-fencing came courtesy of North Face, the outdoor enthusiast store. When a mobile phone enters a predetermined area, like near a store, a text message is sent to the phone with an offer or useful information. Of course retailers can geo-fence their competitors as well and find out which customers aren't so loyal. 5. Digital Wallet Mobile payments leverage different technologies such as NFC, QR codes, Bluetooth, and SMS to facilitate communication between the consumer's phone and the retailer's point-of-sale. The key here is the potential to consolidate loyalty cards, coupons, and bank cards into the mobile phone and enable faster checkout. Nobody does this better than Starbucks today, but McDonald's and Dunkin' Donuts aren't far behind. Google, Isis, PayPal, Square, and MCX are all vying for leadership in this area. If NFC does finally take off, it will be leveraged by retailers in more places than just the POS. 6. Voice Response Mobile phones have had the ability to interpret simple voice commands for a while, but Google and Amazon were the first to use voice to allow searches for products. Allowing searches by text, barcode, and voice makes it easy to comparison shop in the aisles. Walmart even uses voice to build shopping lists, and if the Siri API is ever opened we could see lots more innovation in this area.

    Read the article

  • Scenarios for Throwing Exceptions

    - by Joe Mayo
    I recently came across a situation where someone had an opinion that differed from mine about when an exception should be thrown. This particular case was an issue opened on LINQ to Twitter for an Exception on EndSession.  The premise of the issue was that the poster didn't feel an exception should be raised, regardless of authentication status.  At first, this sounded like a valid point.  However, I went back to review my code and decided not to make any changes. Here's my rationale: 1. The exception doesn't occur if the user is authenticated when EndAccountSession is called. 2. The exception does occur if the user is not authenticated when EndAccountSession is called. 3. The exception represents the fact that EndAccountSession is not able to fulfill its intended purpose - to end the session.  If a session never existed, then it would not be possible to perform the requested action.  Therefore, an exception is appropriate. To help illustrate how to handle this situation, I've modified the following code in Program.cs in the LinqToTwitterDemo project to illustrate the situation: static void EndSession(ITwitterAuthorizer auth) { using (var twitterCtx = new TwitterContext(auth, "https://api.twitter.com/1/", "https://search.twitter.com/")) { try { //Log twitterCtx.Log = Console.Out; var status = twitterCtx.EndAccountSession(); Console.WriteLine("Request: {0}, Error: {1}" , status.Request , status.Error); } catch (TwitterQueryException tqe) { var webEx = tqe.InnerException as WebException; if (webEx != null) { var webResp = webEx.Response as HttpWebResponse; if (webResp != null && webResp.StatusCode == HttpStatusCode.Unauthorized) Console.WriteLine("Twitter didn't recognize you as having been logged in. Therefore, your request to end session is illogical.\n"); } var status = tqe.Response; Console.WriteLine("Request: {0}, Error: {1}" , status.Request , status.Error); } } } As expected, LINQ to Twitter wraps the underlying WebException in a TwitterQueryException, exposing it as the InnerException.  The TwitterQueryException serves a very useful purpose through its Response property.  Notice in the example above that the response has Request and Error properties.  These properties correspond to the information that Twitter returns as part of its response payload.  This is often useful while debugging to help you understand why Twitter was unable to perform the requested action.  Other times, it's cryptic, but that's another story.  At least you have some way of knowing in your code how to anticipate and handle these situations, along with having extra information to debug with. To sum things up, there are two points to make: when and why an exception should be raised, and when to wrap and re-throw an exception in a custom exception type. I felt it was necessary to allow the exception to be raised because the called method was unable to perform the task it was designed for.  I also felt that it is inappropriate for a general library to do anything with exceptions because that could potentially hide a problem from the caller.  A related point is that it should be the exclusive decision of the application that uses the library on what to do with an exception.  Another aspect of this situation is that I wrapped the exception in a custom exception and re-threw.  This is a tough call because I don't want to hide any stack trace information.
However, the need to make the exception more meaningful by including vital information returned from Twitter swayed me in the direction to design an interface that was as helpful as possible to library consumers.  As shown in the code above, you can dig into the exception and pull out a lot of good information, such as the fact that the underlying HTTP response was a 401 Unauthorized.  In all, trade-offs are seldom perfect for all cases, but combining the fact that the method was unable to perform its intended function, this is a library, and the extra information can be more helpful, it seemed to be the better design. @JoeMayo
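    As a rough illustration of that wrap-and-rethrow trade-off (this is a sketch, not LINQ to Twitter's actual implementation; the type, property, and resource names below are invented), a wrapper exception can expose the service's response details while keeping the original exception, and therefore its stack trace, reachable through InnerException:

    using System;
    using System.Net;

    // Hypothetical wrapper: carries service-specific context for the caller
    // and preserves the original failure as InnerException.
    public class ServiceQueryException : Exception
    {
        public string Request { get; private set; }
        public string Error { get; private set; }

        public ServiceQueryException(string message, string request, string error, Exception inner)
            : base(message, inner)
        {
            Request = request;
            Error = error;
        }
    }

    public static class SessionClient
    {
        public static void EndSession()
        {
            try
            {
                // ... the real HTTP call would go here ...
                throw new WebException("The remote server returned an error: (401) Unauthorized.");
            }
            catch (WebException webEx)
            {
                // Wrap and re-throw: the caller decides how to handle it,
                // and the WebException (with its stack trace) is not lost.
                throw new ServiceQueryException(
                    "Unable to end the account session.",
                    "/account/end_session",
                    "Not authenticated.",
                    webEx);
            }
        }
    }

    A caller can then catch ServiceQueryException, inspect Request and Error, and still drill into InnerException for the raw HTTP failure, which is essentially the pattern the demo code above follows.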

    Read the article

  • OpenWorld: Our (Road) Maps are Looking Good!

    - by Tony Berk
    Wow, only one (or two) days down at Oracle OpenWorld! Are you on overload yet? I'm still trying to figure out how to be in 3 sessions at the same time... I guess everyone needs to prioritize! There was a lot to see in Monday's sessions, especially some great forward-looking roadmap sessions. In case you aren't here or you decided to go to other sessions, this is my quick summary of what I could capture from a couple of the roadmaps: In the Fusion CRM Strategy and Roadmap session, Anthony Lye provided an overview of the Fusion CRM strategy including the key design principles of 3 E's: Easy, Effective and Efficient. After an overview of how Oracle has deployed Fusion CRM internally to 25,000 users worldwide, Anthony discussed the features coming in the next release, the releases in the next 12 months and beyond. I can't detail too much since you haven't read Oracle's Safe Harbor statement, but check out Fusion Tap and look for new features and added functionality for sales prediction, marketing, social and integration with a number of the key Customer Experience products.  In the Oracle RightNow CX Cloud Service Vision and Roadmap session, Chris Hamilton presented the focus areas for the RightNow product. As a result of the large increase in development resources after the acquisition, the RightNow CX team is planning a lot of enhancements to the functionality, infrastructure and integrations. As a key piece of the Oracle Customer Experience (CX) strategy, RightNow will be integrated with Oracle Social Network, Oracle Commerce (ATG and Endeca), Oracle Knowledge, Oracle Policy Automation and, of course, further integration with Fusion Sales and Marketing. Look forward to seeing more on the Virtual Assistant, Smart Interaction Hub and Mobility. In addition to the roadmaps, I was looking forward to hearing from Oracle CRM customers. So, I sat in on two great Siebel customer panels: The Maximizing User Adoption Rates for Siebel Sales and Siebel Partner Relationship Management panel consisted of speakers from CSL Behring, McKesson and Intuit. It was great to get an overview of implementations for both B2B and B2C companies. It was great hearing that all of these companies have more than 1,000 sales users (Intuit has 4,000) and how the 360 degree view of the customer in Siebel is helping these customers improve their customers' experience (CX). They are all great examples of centralized implementations which have standardized processes across the globe and across business units.  Waste Management, Farmers Insurance and the US Citizenship & Immigration Services presented in the Driving Great Customer Experiences with Siebel Service Applications session. Talk about serving large customer bases! Is it possible that Farmers with only 10 million households is the smallest of these 3? All of them provided great examples of how they are improving the customer experience (CX) including 60-70% improvements in efficiency or reducing the number of applications the customer service reps (CSRs) need to use from 10 to 1 (Waste Management) and context aware call transfers to avoid the caller explaining their issue 3 times (USCIS). So that's my wrap up of only 4 sessions from Monday. In between sessions, I stopped by the Oracle DEMOgrounds and CRM Pavilion to visit with a group of great partners and see the products and partner integrations in action. Don't miss a recap of Mark Hurd's Keynote. I can't believe there were another 40+ sessions covering CRM, Fusion, Cloud, etc. that I missed today! 
Anyone else see any great sessions?

    Read the article

  • Design Pattern for building a Budget

    - by Scott
    So I've looked at the Builder Pattern, Abstract Interfaces, other design patterns, etc. - and I think I'm overthinking the simplicity behind what I'm trying to do, so I'm asking you guys for some help with either recommending a design pattern I should use, or an architecture style I'm not familiar with that fits my task. So I have one model that represents a Budget in my code. At a high level, it looks like this: public class Budget { public int Id { get; set; } public List<MonthlySummary> Months { get; set; } public float SavingsPriority { get; set; } public float DebtPriority { get; set; } public List<Savings> SavingsCollection { get; set; } public UserProjectionParameters UserProjectionParameters { get; set; } public List<Debt> DebtCollection { get; set; } public string Name { get; set; } public List<Expense> Expenses { get; set; } public List<Income> IncomeCollection { get; set; } public bool AutoSave { get; set; } public decimal AutoSaveAmount { get; set; } public FundType AutoSaveType { get; set; } public decimal TotalExcess { get; set; } public decimal AccountMinimum { get; set; } } To go into more detail about some of the properties here shouldn't be necessary, but if you have any questions about those I will fill more out for you guys. Now, I'm trying to create code that builds one of these things based on a set of BudgetBuildParameters that the user will create and supply. There are going to be multiple types of these parameters. For example, on the site's homepage, there will be an example section where you can quickly see what your numbers look like, so they would be a much simpler set of SampleBudgetBuildParameters than, say, after a user registers and wants to create a fully filled out Budget using much more information in the DebtBudgetBuildParameters. Now a lot of these builds are going to be using similar code for certain tasks, but might want to also check the status of a user's DebtCollection when formulating a monthly spending report, whereas a Budget that only focuses on savings might not want to. I'd like to reduce code duplication (obviously) as much as possible, but in my head, every way I can think to do this would require using a base BudgetBuilderFactory to return the correct builder to the caller, and then creating, say, a SimpleBudgetBuilder that inherits from a BudgetBuilder, putting all duplicate code in the BudgetBuilder, and letting the SimpleBudgetBuilder handle its own cases. Problem is, a lot of the unique cases are unique to 2/4 builders, so there will be duplicate code somewhere in there obviously if I did that. Can anyone think of a better way to either explain a solution to this that may or may not be similar to mine, or a completely different pattern or way of thinking here? I really appreciate it.
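    To make the builder/factory idea described above concrete, here is a minimal sketch (only the Budget class comes from the post; every other name is hypothetical) where a base builder owns the shared steps, exposes the variable ones as virtual hooks, and a small factory picks the builder from the parameter type:

    public abstract class BudgetBuildParameters { public string Name { get; set; } }
    public class SampleBudgetBuildParameters : BudgetBuildParameters { }
    public class DebtBudgetBuildParameters : BudgetBuildParameters { }

    public abstract class BudgetBuilder
    {
        // Template method: the shared build sequence lives here once.
        public Budget Build(BudgetBuildParameters parameters)
        {
            var budget = new Budget { Name = parameters.Name };
            AddIncome(budget, parameters);      // common to every builder
            AddExpenses(budget, parameters);    // common to every builder
            AddDebtDetail(budget, parameters);  // optional hook, no-op by default
            return budget;
        }

        protected virtual void AddIncome(Budget budget, BudgetBuildParameters p) { /* shared logic */ }
        protected virtual void AddExpenses(Budget budget, BudgetBuildParameters p) { /* shared logic */ }
        protected virtual void AddDebtDetail(Budget budget, BudgetBuildParameters p) { }
    }

    public class SampleBudgetBuilder : BudgetBuilder { /* the defaults are enough */ }

    public class DebtBudgetBuilder : BudgetBuilder
    {
        protected override void AddDebtDetail(Budget budget, BudgetBuildParameters p)
        {
            // inspect the user's DebtCollection here
        }
    }

    public static class BudgetBuilderFactory
    {
        public static BudgetBuilder For(BudgetBuildParameters parameters)
        {
            return parameters is DebtBudgetBuildParameters
                ? (BudgetBuilder)new DebtBudgetBuilder()
                : new SampleBudgetBuilder();
        }
    }

    Logic that is shared by only two of the four builders does not have to live in the base Build sequence; it can sit in a protected helper on BudgetBuilder (or in an intermediate subclass) that just those two builders call from their overrides, which keeps the duplication down without forcing every builder through it.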

    Read the article

  • Don't Throw Duplicate Exceptions

    In your code, you'll sometimes have to write code that validates input using a variety of checks. Assuming you haven't embraced AOP and done everything with attributes, it's likely that your defensive coding is going to look something like this: public void Foo(SomeClass someArgument) { if(someArgument == null) { throw new InvalidArgumentException("someArgument"); } if(!someArgument.IsValid()) { throw new InvalidArgumentException("someArgument"); } // Do Real Work } Do you see a problem here? Here's the deal: exceptions should be meaningful. They have value at a number of levels: in the code, throwing an exception lets the developer know that there is an unsupported condition here; in calling code, different types of exceptions may be handled differently; at runtime, logging of exceptions provides a valuable diagnostic tool. It's this last reason I want to focus on. If you find yourself literally throwing the exact same exception in more than one location within a given method, stop. The stack trace for such an exception is likely going to be identical regardless of which path of execution led to the exception being thrown. When that happens, you or whoever is debugging the problem will have to guess which exception was thrown. Guessing is a great way to introduce additional problems and/or greatly increase the amount of time required to properly diagnose and correct any bugs related to this behavior. Don't Guess, Be Specific. When throwing an exception from multiple code paths within the code, be specific. Virtually every exception allows a custom message; use it and ensure each case is unique. If the exception might be handled differently by the caller, then consider implementing a new custom exception type. Also, don't automatically think that you can improve the code by collapsing the if-then logic into a single call with short-circuiting (e.g. if(x == null || !x.IsValid())); that will guarantee that you can't throw different information into the message as easily as constructing the exception separately in each case. The code above might be refactored like so: public void Foo(SomeClass someArgument) { if(someArgument == null) { throw new ArgumentNullException("someArgument"); } if(!someArgument.IsValid()) { throw new InvalidArgumentException("someArgument"); } // Do Real Work } In this case it's taking advantage of the fact that there is already an ArgumentNullException in the framework, but if you didn't have an IsValid() method and were doing validation on your own, it might look like this: public void Foo(SomeClass someArgument) { if(someArgument.Quantity < 0) { throw new InvalidArgumentException("someArgument", "Quantity cannot be less than 0. Quantity: " + someArgument.Quantity); } if(someArgument.Quantity > 100) { throw new InvalidArgumentException("someArgument", "SomeArgument.Quantity cannot exceed 100. Quantity: " + someArgument.Quantity); } // Do Real Work } Note that in this last example, I'm throwing the same exception type in each case, but with different Message values. I'm also making sure to include the value that resulted in the exception, as this can be extremely useful for debugging. (How many times have you wished NullReferenceException would tell you the name of the variable it was trying to reference?) Don't add work to those who will follow after you to maintain your application (especially since it's likely to be you).
Be specific with your exception messages, and follow DRY when throwing exceptions within a given method by throwing unique exceptions for each interesting case of invalid state.
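    As a rough sketch of that last suggestion (the type name and usage below are invented for illustration, not from the original post), a small custom exception type gives callers something distinct to catch and can carry the offending value along for diagnostics:

    using System;

    public class QuantityOutOfRangeException : ArgumentException
    {
        // Carry the offending value so logs and debuggers see it immediately.
        public int Quantity { get; private set; }

        public QuantityOutOfRangeException(string paramName, int quantity, string message)
            : base(message, paramName)
        {
            Quantity = quantity;
        }
    }

    // Usage inside the validation shown earlier:
    // throw new QuantityOutOfRangeException("someArgument", someArgument.Quantity,
    //     "Quantity must be between 0 and 100. Quantity: " + someArgument.Quantity);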

    Read the article

  • PHP 5.3.2 + Fcgid 2.3.5 + Apache 2.2.14 + SuExec => Connection reset by peer: mod_fcgid: error reading data from FastCGI server

    - by Zigzag
    I'm trying to use PHP 5.3.2 + Fcgid 2.3.5 + Apache 2.2.14 but I always get the error: "Connection reset by peer: mod_fcgid: error reading data from FastCGI server". Apache returns an error 500 each time I try to execute a PHP page. I have compiled Apache with these options: ./configure --with-mpm=worker --enable-userdir=shared --enable-actions=shared --enable-alias=shared --enable-auth=shared --enable-so --enable-deflate \ --enable-cache=shared --enable-disk-cache=shared --enable-info=shared --enable-rewrite=shared \ --enable-suexec=shared --with-suexec-caller=www-data --with-suexec-userdir=site --with-suexec-logfile=/usr/local/apache2/logs/suexec.log --with-suexec-docroot=/home Then PHP: ./configure --with-config-file-path=/usr/local/apache2/php --with-apxs2=/usr/local/apache2/bin/apxs --with-mysql --with-zlib --enable-exif --with-gd --enable-cgi Then Fcgid: APXS=/usr/local/apache2/bin/apxs ./configure.apxs The VHOST is: <Directory /home/website_panel/site/> FCGIWrapper /home/website_panel/cgi/php .php ... ErrorLog /home/website_panel/logs/error.log </Directory> cat /home/website_panel/logs/error.log [Sun Mar 07 22:19:41 2010] [warn] [client xx.xx.xx.xx] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server [Sun Mar 07 22:19:41 2010] [error] [client xx.xx.xx.xx] Premature end of script headers: test.php [Sun Mar 07 22:19:41 2010] [warn] [client xx.xx.xx.xx] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server [Sun Mar 07 22:19:41 2010] [error] [client xx.xx.xx.xx] Premature end of script headers: test.php [Sun Mar 07 22:19:42 2010] [warn] [client xx.xx.xx.xx] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server [Sun Mar 07 22:19:42 2010] [error] [client xx.xx.xx.xx] Premature end of script headers: test.php [Sun Mar 07 22:19:43 2010] [warn] [client xx.xx.xx.xx] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server [Sun Mar 07 22:19:43 2010] [error] [client xx.xx.xx.xx] Premature end of script headers: test.php The Suexec log: root:/usr/local/apache2# cat /var/log/apache2/suexec.log [2010-03-07 22:11:05]: uid: (1001/website_panel) gid: (1001/website_panel) cmd: php [2010-03-07 22:11:15]: uid: (1001/website_panel) gid: (1001/website_panel) cmd: php [2010-03-07 22:11:23]: uid: (1001/website_panel) gid: (1001/website_panel) cmd: php [2010-03-07 22:19:41]: uid: (1001/website_panel) gid: (1001/website_panel) cmd: php [2010-03-07 22:19:41]: uid: (1001/website_panel) gid: (1001/website_panel) cmd: php [2010-03-07 22:19:42]: uid: (1001/website_panel) gid: (1001/website_panel) cmd: php [2010-03-07 22:19:43]: uid: (1001/website_panel) gid: (1001/website_panel) cmd: php root:/usr/local/apache2# cat logs/error_log [Sun Mar 07 22:18:47 2010] [notice] suEXEC mechanism enabled (wrapper: /usr/local/apache2/bin/suexec) [Sun Mar 07 22:18:47 2010] [notice] mod_bw : Memory Allocated 0 bytes (each conf takes 32 bytes) [Sun Mar 07 22:18:47 2010] [notice] mod_bw : Version 0.7 - Initialized [0 Confs] [Sun Mar 07 22:18:47 2010] [notice] Apache/2.2.14 (Unix) mod_fcgid/2.3.5 configured -- resuming normal operations root:/usr/local/apache2# /home/website_panel/cgi/php -v PHP 5.3.2 (cli) (built: Mar 7 2010 16:01:49) Copyright (c) 1997-2010 The PHP Group Zend Engine v2.3.0, Copyright (c) 1998-2010 Zend Technologies If someone has got an idea, I want to hear it ^^ Thanks !

    Read the article

  • Hang while starting several daemons [solved]

    - by Adrian Lang
    I’m running a Debian Squeeze AMD64 server. Target runlevel after boot is runlevel 2, which includes rsyslogd, cron, sshd and some other stuff, but not dovecot, postfix, apache2, etc. The system fails to reach runlevel 2 with several symptoms: The system hangs at trying to start rsyslogd; Booting into runlevel 1 works, then login from the console works; Starting rsyslogd from runlevel 1 via /etc/init.d/rsyslog hangs; Starting runlevel 2 with rsyslogd disabled works, but then logging in via console fails: I get the motd, and then nothing; Starting sshd from runlevel 1 succeeds, but then I cannot login via ssh. Sometimes password ssh login gives me the motd and then nothing, sometimes not even this. Trying to offer a public key seems to annoy the sshd enough to not talk to me any further. When rebooting from runlevel 1, the server hangs at trying to stop apache2 (which is not running, so this really should be trivial). Trying to stop apache2 when logged in in runlevel 1 does hang as well. And that’s just the stuff which fails all the time. RAM has been tested, dmesg shows no problems. I have no clue. Update: (shortened) output from rsyslogd -c4 -d called in runlevel 1: rsyslogd 4.6.4 startup, compatibility mode 4, module path '' caller requested object 'net', not found (iRet -3003) Requested to load module 'lmnet' loading module '/user/lib/rsyslog/lmnet.so' module of type 2 being loaded conf.c requested ref for 'lmnet', refcount 1 rsylog runtime initialized, version 4.6.4, current users 1 syslogd.c requested ref for 'lmnet', refcount now 2 I can kill rsyslogd with Ctrl+C, then. /var/log shows none of the configured log files, though. Update2: Thanks to @DerfK I still have no clue, but at least I narrowed down the problem. I’m now testing with /etc/init.d/apache2 stop (without an apache2 running, of course), which hangs as well and looks like an even more obvious failure. After some testing I found out that a file with one single line: /usr/sbin/apache2ctl configtest > /dev/null 2>&1 hangs, while the same line executed in an interactive shell works. I was not able to reduce this line any further, i.e. every single part, the stream redirections and the command itself, is necessary to reproduce the hang. @DerfK also pointed me to strace which gave a shallow hint about what kind of hang we have here: wait4(-1, ... for the init scripts, and futex(0xsomepointer, FUTEX_WAIT_PRIVATE, 2, NULL) for the rsyslogd / apache2 binaries called by the init scripts. The system was installed as a Debian Lenny by my hoster in autumn 2011, I upgraded it to Squeeze immediately and kept it up to date with Squeeze, which then used to be testing. There were no big changes, though. I guess I never tried to reboot the system before. Update3: I found the problem. My /etc/nsswitch.conf specified ldap as hosts lookup backup, which is not available at that time of the boot. Relying solely on dns fixes my boot problems.

    Read the article

  • How can I change exim's DKIM and SPF for emails sent?

    - by 0pt1m1z3
    I've now spent 2 hours trying to figure out this issue and I am about to give up and go to bed. I've been having issues with Gmail rejecting emails from my VPS server because of false spam alerts (probably caused by lfd sending too many emails). So I changed my Exim config to send emails from a different IP (my VPS comes with 3) and that fixed the issue. I also enabled DKIM and SPF on my domains for added measure. But now, all my emails appear as ("From: Sender Name via server.domain1.com") where server.domain1.com is my VPS hostname. I previously had the same issue in Outlook and turning off "Set SMTP Sender: headers" solved that problem. But I believe adding the DKIM and SPF now makes Gmail add "via server.domain1.com" to my messages. How do I fix this? This is a typical header for a message (as it appears at gmail): Delivered-To: [email protected] Received: by 10.60.44.163 with SMTP id f3csp248622oem; Thu, 29 Mar 2012 21:23:18 -0700 (PDT) Received: by 10.50.106.200 with SMTP id gw8mr452788igb.10.1333081398523; Thu, 29 Mar 2012 21:23:18 -0700 (PDT) Return-Path: <[email protected]> Received: from domain2.com ([X.X.X.X]) by mx.google.com with ESMTPS id y1si810998igb.3.2012.03.29.21.23.18 (version=TLSv1/SSLv3 cipher=OTHER); Thu, 29 Mar 2012 21:23:18 -0700 (PDT) Received-SPF: pass (google.com: domain of [email protected] designates X.X.X.X as permitted sender) client-ip=X.X.X.X; Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates X.X.X.X as permitted sender) [email protected]; dkim=pass [email protected] DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=server.domain1.com; s=default; h=Date:Message-Id:From:Content-type:MIME-Version:Subject:To; bh=wF8bBRgh01EYg4t5DAeVPv1Ps906UVIeRnQCb/HvSYw=; b=k/Pg7lnrO+Ud/z1mOTv+O/3DiJzzQgyBhfIizIaFHM8tF/eNJt5P2k+9yQB224sxYstZIWwVRBJmiqvcM1QhARv1HWqWma0crppZ3JOn+LRHANan634OBi+58SIRA+gu; Received: (Exim 4.77) id 1SDTVE-0005HA-9Y for [email protected]; Fri, 30 Mar 2012 00:31:56 -0400 To: [email protected] Subject: Password Reset Request MIME-Version: 1.0 Content-type: text/html; charset=iso-8859-1 From: Sender Name <[email protected]> Message-Id: <[email protected]> Date: Fri, 30 Mar 2012 00:31:56 -0400 X-AntiAbuse: This header was added to track abuse, please include it with any abuse report X-AntiAbuse: Primary Hostname - server.domain1.com X-AntiAbuse: Original Domain - domain2.com X-AntiAbuse: Originator/Caller UID/GID - [507 504] / [47 12] X-AntiAbuse: Sender Address Domain - server.domain1.com

    Read the article

  • Windows 2008 R2 TS printer security - can't take ownership

    - by Ian
    I have a Windows 2008 R2 server with the Terminal Server role installed. I'm seeing a problem with an ordinary user who is a member of the local printer operators group on the server. If the user opens a cmd window using ‘run as administrator’, they can run printmanager.msc without needing to enter their password again. In printmanager they can change the ownership of redirected (easy print) printers without problems. If, from the same cmd window, they use subinacl to try and change the ownership of the queue to themselves, they get access denied: >subinacl.exe /printer "_#MyPrinter (2 redirected)" /setowner="MyDom\MyUsr" Elapsed Time: 00 00:00:00 Done: 1, Modified 0, Failed 1, Syntax errors 0 Last Done : _#MyPrinter (2 redirected) Last Failed: _#MyPrinter (2 redirected) - OpenPrinter Error : 5 Access denied So, same context, same action, but one works and one doesn't. Any ideas for this odd behaviour? I'm using subinacl x86 on an x64 server as I can't find anything more up to date. I've tried with icacls and others but couldn't get them to do anything with printers. EDIT: added after Greg's comments regarding setacl below. If I log into the TS server as Testusr and open Admin Tools Printer Admin (as administrator) and then type mydomain\testusr and testusr's password, then I can change the ownership of the printer queue and set testusr as the owner. However if I open cmd as administrator and, again, type mydomain\testusr and the user's password, when I try to change the ownership of my redirected printer I get the following: C:\>setacl -on "Bullzip PDF Printer (12 redireccionado)" -ot prn -actn setowner -ownr n:mydom\testusr WARNING: Privilege 'Back up files and directories' could not be enabled. SetACL's powers are restricted. WARNING: Privilege 'Restore files and directories' could not be enabled. SetACL's powers are restricted. INFORMATION: Processing ACL of: <Bullzip PDF Printer (12 redireccionado)> ERROR: Enabling the privilege SeTakeOwnershipPrivilege failed with: No todos los privilegios o grupos a los que se hace referencia son asignados al llamador. [meaning: not all referenced privileges or groups are assigned to the caller] SetACL finished with error(s): SetACL error message: A privilege could not be enabled Maybe I'm getting something wrong, but if the built-in Windows tool can do it with just membership of the 'print operators' group then setacl should be able to as well, no? However setacl seems to depend on other privileges, which in reality are not required to do this.

    Read the article

  • Asterisk: Dropping calls with an "ast_yyerror"

    - by Nick
    I'm having an intermittent issue where asterisk will play our greeting to the caller, and then drop the call instead of making our phones ring. I'm unable to reproduce the problem with any phones I have here, and many callers get through just fine. Some callers though, run into the problem, and I can't find any pattern to it. The bit of information I could find said it was caused by an error in evaluating a dialplan expression. I'm thinking it's this line: exten = START,n,GotoIf($[${FORCE_CLOSED}=TRUE]?CLOSED,1) But I'm not sure what's wrong with it. I see the following error on the console: [Apr 4 16:29:49] WARNING[27038]: ast_expr2.fl:459 ast_yyerror: ast_yyerror(): syntax error: syntax error, unexpected '=', expecting $end; Input:=TRUE^ Surrounding Console output: -- Executing [START@AGInbound:1] Answer("IAX2/AtlantaTeliax-10086", "") in new stack -- Executing [START@AGInbound:2] BackGround("IAX2/AtlantaTeliax-10086", 0000_AG_THANK_YOU_FOR_CALLING_AG") in new stack -- Playing '0000_AG_THANK_YOU_FOR_CALLING_AG.slin' (language 'en') [Apr 4 16:29:49] WARNING[27038]: ast_expr2.fl:459 ast_yyerror: ast_yyerror(): syntax error: syntax error, unexpected '=', expecting $end; Input: =TRUE ^ [Apr 4 16:29:49] WARNING[27038]: ast_expr2.fl:463 ast_yyerror: If you have questions, please refer to doc/tex/channelvariables.tex in the asterisk source. -- Executing [START@AGInbound:3] GotoIf("IAX2/AtlantaTeliax-10086", "?CLOSED,1") in new stack -- Executing [START@AGInbound:4] GotoIfTime("IAX2/AtlantaTeliax-10086", "9:30-17:0|mon-fri|*|*?OPEN,1") in new stack -- Executing [START@AGInbound:5] GotoIfTime("IAX2/AtlantaTeliax-10086", "10:0-18:30|sat|*|*?OPEN,1") in new stack -- Executing [START@AGInbound:6] GotoIfTime("IAX2/AtlantaTeliax-10086", "12:0-17:0|sun|*|*?OPEN,1") in new stack Relevant lines from the dial plan: exten = START,1,Answer() exten = START,n,Background(0000_AG_THANK_YOU_FOR_CALLING_AG) ; See if we're open ; Force Closed if no one's going to be answering exten = START,n,GotoIf($[${FORCE_CLOSED}=TRUE]?CLOSED,1) exten = START,n,GotoIfTime(${AG_WEEKDAY_OPEN_HOUR}:${AG_WEEKDAY_OPEN_MIN}-${AG$ exten = START,n,GotoIfTime(${AG_SATURDAY_OPEN_HOUR}:${AG_SATURDAY_OPEN_MIN}-${$ exten = START,n,GotoIfTime(${AG_SUNDAY_OPEN_HOUR}:${AG_SUNDAY_OPEN_MIN}-${AG_S$ ; ...and we're not. But maybe the time of day has been overridden? exten = START,n,GotoIf($[${OVERRIDE_TIME_OF_DAY}=TRUE]?OPEN,1) ; No override... We're definatly closed. exten = START,n,Goto(CLOSED,1) Any idea what's wrong with the expression? We recently upgraded from 1.4 to 1.6.

    Read the article

  • C# 5 Async, Part 1: Simplifying Asynchrony – That for which we await

    - by Reed
    Today’s announcement at PDC of the future directions C# is taking excites me greatly.  The new Visual Studio Async CTP is amazing.  Asynchronous code – code which frustrates and demoralizes even the most advanced of developers – is taking a huge leap forward in terms of usability.  This is handled by building on the Task functionality in .NET 4, as well as two new keywords being added to the C# language: async and await. The core of the new asynchronous functionality is built upon three key features.  First is the Task functionality in .NET 4, based on Task and Task<TResult>.  While Task was intended to be the primary means of asynchronous programming with .NET 4, the .NET Framework was still based mainly on the Asynchronous Pattern and the Event-based Asynchronous Pattern. The .NET Framework added functionality and guidance for wrapping existing APIs into a Task based API, but the framework itself didn’t really adopt Task or Task<TResult> in any meaningful way.  The CTP shows that, going forward, this is changing. One of the three key new features coming in C# is actually a .NET Framework feature.  Nearly every asynchronous API in the .NET Framework has been wrapped into new, Task-based method calls.  In the CTP, this is done via an external assembly (AsyncCtpLibrary.dll) which uses Extension Methods to wrap the existing APIs.  However, going forward, this will be handled directly within the Framework.  This will have a unifying effect throughout the .NET Framework.  This is the first building block of the new features for asynchronous programming: Going forward, all asynchronous operations will work via a method that returns Task or Task<TResult>. The second key feature is the new async contextual keyword being added to the language.  The async keyword is used to declare an asynchronous function, which is a method that either returns void, a Task, or a Task<T>. Inside the asynchronous function, there must be at least one await expression.  This is a new C# keyword (await) that is used to automatically take a series of statements and break it up to potentially use discontinuous evaluation.  This is done by using await on any expression that evaluates to a Task or Task<T>. For example, suppose we want to download a webpage as a string.  There is a new method added to WebClient: Task<string> WebClient.DownloadStringTaskAsync(Uri).  Since this returns a Task<string>, we can use it within an asynchronous function.  Suppose, for example, that we wanted to do something similar to my asynchronous Task example – download a web page asynchronously and check to see if it supports XHTML 1.0, then report this into a TextBox.
This could be done like so: private async void button1_Click(object sender, RoutedEventArgs e) { string url = "http://reedcopsey.com"; string content = await new WebClient().DownloadStringTaskAsync(url); this.textBox1.Text = string.Format("Page {0} supports XHTML 1.0: {1}", url, content.Contains("XHTML 1.0")); } Let’s walk through what’s happening here, step by step.  By adding the async contextual keyword to the method definition, we are able to use the await keyword on our WebClient.DownloadStringTaskAsync method call. When the user clicks this button, the new method (Task<string> WebClient.DownloadStringTaskAsync(string)) is called, which returns a Task<string>.  By adding the await keyword, the runtime will call this method that returns Task<string>, and execution will return to the caller at this point.  This means that our UI is not blocked while the webpage is downloaded.  Instead, the UI thread will “await” at this point, and let the WebClient do its thing asynchronously. When the WebClient finishes downloading the string, the user interface’s synchronization context will automatically be used to “pick up” where it left off, and the Task<string> returned from DownloadStringTaskAsync is automatically unwrapped and set into the content variable.  At this point, we can use that and set our text box content. There are a couple of key points here: Asynchronous functions are declared with the async keyword, and contain one or more await expressions. In addition to the obvious benefits of shorter, simpler code – there are some subtle but tremendous benefits in this approach.  When the execution of this asynchronous function continues after the first await statement, the initial synchronization context is used to continue the execution of this function.  That means that we don’t have to explicitly marshal the call that sets textBox1.Text back to the UI thread – it’s handled automatically by the language and framework!  Exception handling around asynchronous method calls also just works. I’d recommend every C# developer take a look at the documentation on the new Asynchronous Programming for C# and Visual Basic page, download the Visual Studio Async CTP, and try it out.
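    Building on the example above, here is a small sketch (the handler name and URL are made up, and it assumes the same CTP extension method used earlier) of the point that exception handling around awaited calls just works: the awaited operation sits inside an ordinary try/catch, and the catch block runs back on the UI synchronization context.

    private async void button2_Click(object sender, RoutedEventArgs e)
    {
        try
        {
            // If the download fails, the exception surfaces here just as it would in synchronous code.
            string content = await new WebClient().DownloadStringTaskAsync("http://does-not-exist.example/");
            this.textBox1.Text = content.Length.ToString();
        }
        catch (WebException ex)
        {
            // Back on the UI thread, so updating the TextBox directly is safe.
            this.textBox1.Text = "Download failed: " + ex.Message;
        }
    }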

    Read the article

  • How to propagate http response code from back-end to client

    - by Manoj Neelapu
    Oracle Service Bus can be used for pass-through cases. Some use cases require propagating the http response code back to the caller. http://forums.oracle.com/forums/thread.jspa?messageID=4326052&#4326052 is one such example we will try to accomplish in this tutorial. We will try to demonstrate this feature using Oracle Service Bus (11.1.1.3.0). We will also use commons-logging-1.1.1, httpcomponents-client-4.0.1 and httpcomponents-core-4.0.1 for writing the client to demonstrate. First we create a simple JSP which will always set the response code to 304. The JSP snippet will look like:

<%@ page language="java"
    contentType="text/xml; charset=UTF-8"
    pageEncoding="UTF-8" %>
<%
    System.out.println("Servlet setting Responsecode=304");
    response.setStatus(304);
    response.flushBuffer();
%>

We will now deploy this JSP on a weblogic server with URI=http://localhost:7021/reponsecode/ For this JSP we will create a simple Any XML BS. We will also create a proxy service as shown below. Once the proxy is created we configure the pipeline for the proxy to use a route node, which invokes the BS (JSPCaller) created in the first place. So now we will create an error handler for the route node and will add a stage. When an HTTP BS sends a request, the JSP sends the response back. If the response code is not 200, then the HTTP BS will consider that an error and the above configured error handler is invoked. We will print $outbound to show the response code sent by the JSP. The next actions. To test this I created a simple client:

import org.apache.http.Header;
import org.apache.http.HttpEntity;
import org.apache.http.HttpHost;
import org.apache.http.HttpResponse;
import org.apache.http.HttpVersion;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.conn.ClientConnectionManager;
import org.apache.http.conn.scheme.PlainSocketFactory;
import org.apache.http.conn.scheme.Scheme;
import org.apache.http.conn.scheme.SchemeRegistry;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.impl.conn.tsccm.ThreadSafeClientConnManager;
import org.apache.http.params.BasicHttpParams;
import org.apache.http.params.HttpParams;
import org.apache.http.params.HttpProtocolParams;
import org.apache.http.util.EntityUtils;

/**
 * @author MNEELAPU
 */
public class TestProxy304 {
    public static void main(String arg[]) throws Exception {
        HttpHost target = new HttpHost("localhost", 7021, "http");

        // general setup
        SchemeRegistry supportedSchemes = new SchemeRegistry();
        // Register the "http" protocol scheme, it is required
        // by the default operator to look up socket factories.
        supportedSchemes.register(new Scheme("http", PlainSocketFactory.getSocketFactory(), 7021));

        // prepare parameters
        HttpParams params = new BasicHttpParams();
        HttpProtocolParams.setVersion(params, HttpVersion.HTTP_1_1);
        HttpProtocolParams.setContentCharset(params, "UTF-8");
        HttpProtocolParams.setUseExpectContinue(params, true);

        ClientConnectionManager connMgr = new ThreadSafeClientConnManager(params, supportedSchemes);
        DefaultHttpClient httpclient = new DefaultHttpClient(connMgr, params);

        HttpGet req = new HttpGet("/HttpResponseCode/ProxyExposed");
        System.out.println("executing request to " + target);
        HttpResponse rsp = httpclient.execute(target, req);
        HttpEntity entity = rsp.getEntity();

        System.out.println("----------------------------------------");
        System.out.println(rsp.getStatusLine());
        Header[] headers = rsp.getAllHeaders();
        for (int i = 0; i < headers.length; i++) {
            System.out.println(headers[i]);
        }
        System.out.println("----------------------------------------");
        if (entity != null) {
            System.out.println(EntityUtils.toString(entity));
        }

        // When the HttpClient instance is no longer needed,
        // shut down the connection manager to ensure
        // immediate deallocation of all system resources.
        httpclient.getConnectionManager().shutdown();
    }
}

On compiling and executing this we see the below output in STDOUT, which clearly indicates the response code was propagated from the Business Service to the Proxy Service:

executing request to http://localhost:7021
----------------------------------------
HTTP/1.1 304 Not Modified
Date: Tue, 08 Jun 2010 16:13:42 GMT
Content-Type: text/xml; charset=UTF-8
X-Powered-By: Servlet/2.5 JSP/2.1
----------------------------------------

    Read the article

  • AS11 Oracle B2B Sync Support - Series 2

    - by sinkarbabu.kirubanithi
    In the earlier series, we discussed how to model "Sync Support" in Oracle B2B, but we haven't yet discussed how the response can be consumed synchronously by the back-end application or the initiator of the sync request. In this sequel, we will see how we can extend it to SOA composite applications to model the end-to-end use case; this helps the initiator of the sync request receive the response synchronously. Series 2 is a little lengthier by blog standards, so be prepared before you continue further :). Let's start our discussion with a high-level scenario where one needs to initiate a synchronous request and get the response synchronously. There are various approaches available; we will see one simple approach here. Components Involved: 1. Oracle B2B 2. Oracle JCA JMS Adapter 3. Oracle BPEL 4. All of the above are wrapped up in a single SOA composite application. Oracle B2B: We are skipping the "Sync Support" setup part in B2B, as we have already discussed that in the earlier Series 1. Here we have provided "Sync Support" samples that can be imported to B2B directly, so users can start testing them in a few minutes. Initiator Sample: This requires two JMS queues to be created, one for B2B to receive the initial outbound sync request and the other for B2B to deliver the incoming sync response to the back-end. Please enable the "Use JMS Id" option in both the internal listening and delivery channels. This enables the JCA JMS Adapter to correlate the initial B2B request and response, which in turn is returned as the synchronous response of BPEL. Internal Listening Channel Image: Internal Delivery Channel Image: To get going without much challenge, just create queues in Weblogic with the JNDI names mentioned in the above two screenshots. If you want to use different names, then you may have to change the queue JNDI names in the sample after importing it into B2B. Here are the queue-related JNDI names used in the sample: 1. Internal Listening Channel Queue details, Name: JNDI Name: jms/b2b/syncreplyqueue 2. Internal Delivery Channel Queue details, Name: JNDI Name: jms/b2b/syncrequestqueue Here is the Initiator Sample Acme.zip Note: You may have to adjust the IP address of the GlobalChips endpoint in the Delivery Channel. Responder Sample: Contains the B2B meta-data and the Callout. Just import the sample and place the callout binary under the "/tmp/callout" directory. If you choose to use a different location for the callout, then you may have to change the same in the B2B Configuration after importing the sample. Here are the artifacts: 1. Callout Source SampleCallout.java 2. Callout Binary sample-callout.jar 3. Responder Sample GlobalChips.zip Callout Details: The callout just gives the static response XML that needs to be sent back as the response for the inbound sync request. For sample purposes we have given a static response, but in production you may have to invoke a web service or something similar to get the response. IMPORTANT NOTE: For the Sync Support use case, the responder is not expected to deliver the inbound sync request to the backend, as delivering it and getting the response from the backend are expected to be handled by the Callout. This default behavior can be overridden by enabling the config property "b2b.SyncAppDelivery=true" in the B2B config mbean (b2b-config.xml). This makes B2B deliver the inbound sync request to the backend queue, but the response to be sent to the remote caller still has to come from the Callout. 2. Oracle JCA JMS Adapter: On the initiator side, we have used the JCA JMS Request/Reply pattern to send/receive the synchronous message from B2B. 3. Oracle BPEL: Exposes a WS-SOAP endpoint that takes the payload as input, passes it to B2B, and returns the synchronous response of B2B as the SOAP response. To the outside world it looks as if it is a synchronous web service endpoint, but under the covers it uses JMS to trigger/initiate B2B to send and receive the synchronous response. 4. Composite application: All the components discussed above are wired into a SOA composite application that helps to model an end-to-end synchronous use case. Here's the composite application sca_B2BSyncSample_rev1.0.jar; you may just deploy this to your AS11 SOA to make use of it. For any editing, you can just import the project into your JDEV under any SOA Application. Here are the composite application screenshots: Composite Application: BPEL With JCA JMS Adapter (Request/Reply):

    Read the article

  • Do Not Optimize Without Measuring

    - by Alois Kraus
    Recently I had to do some performance work which included reading a lot of code. It is fascinating what ideas people come up with to solve a problem. Especially when there is no problem. When you look at other people's code you will not be able to tell if it is well performing or not just by reading it. You need to execute it with some sort of tracing or, even better, under a profiler. The first rule of the performance club is not to think and then optimize, but to measure, think, and then optimize. The second rule is to do this in a loop, to prevent bad things from slipping into your code base for too long. If you skip the measure step for some reason and optimize directly, it is like changing the wave function in quantum mechanics. This has no observable effect in our world, since it represents only a probability distribution of all possible values. In quantum mechanics you need to let the wave function collapse to a single value. A collapsed wave function has therefore not many but one distinct value. This is what we physicists call a measurement. If you optimize your application without measuring it, you are just changing the probability distribution of your potential performance values. Which performance your application actually has is still unknown. You only know that it will be within a specific range with a certain probability. As usual there are unlikely values within your distribution, like a startup time of 20 minutes which should only happen once in 100 000 years. 100 000 years is a very short time when the first customer tries to run your heavily distributed networking application over a slow WIFI network… What is the point of this? Every programmer/architect has a mental performance model in his head. A model always has a set of explicit preconditions and a lot more implicit assumptions baked into it. When the model is good it will help you to think of good designs, but it can also be the source of problems. In real world systems not all assumptions of your performance model (implicit or explicit) hold true any longer. The only way to connect your performance model and the real world is to measure it. In the WIFI example the model assumed a low latency, high bandwidth LAN connection. When this assumption became wrong, the system had a drastic change in startup time. Let's look at an example. Let's assume we want to cache some expensive UI resource like font objects. For this undertaking we create a Cache class with the UI themes we want to support. Since fonts are expensive objects, we create them on demand the first time the theme is requested. A simple example of a Theme cache might look like this:

using System;
using System.Collections.Generic;
using System.Drawing;

struct Theme
{
    public Color Color;
    public Font Font;
}

static class ThemeCache
{
    static Dictionary<string, Theme> _Cache = new Dictionary<string, Theme>
    {
        {"Default", new Theme { Color = Color.AliceBlue }},
        {"Theme12", new Theme { Color = Color.Aqua }},
    };

    public static Theme Get(string theme)
    {
        Theme cached = _Cache[theme];
        if (cached.Font == null)
        {
            Console.WriteLine("Creating new font");
            cached.Font = new Font("Arial", 8);
        }
        return cached;
    }
}

class Program
{
    static void Main(string[] args)
    {
        Theme item = ThemeCache.Get("Theme12");
        item = ThemeCache.Get("Theme12");
    }
}

This cache should create font objects only once, since on the first retrieval of the Theme object the font is added to the Theme object. When we let the application run it should print “Creating new font” only once. Right? Wrong!
The vigilant readers have spotted the issue already. The creator of this cache class wanted to get maximum performance, so he decided that the Theme object should be a value type (struct) to not put too much pressure on the garbage collector. The code

Theme cached = _Cache[theme];
if (cached.Font == null)
{
    Console.WriteLine("Creating new font");
    cached.Font = new Font("Arial", 8);
}

works with a copy of the value stored in the dictionary. This means we mutate a copy of the Theme object and return it to our caller. But the original Theme object in the dictionary will always have null for the Font field! The solution is to change the declaration of struct Theme to class Theme, or to update the theme object in the dictionary. Our cache as it currently stands is actually a non-caching cache. The funny thing was that I found this out with a profiler by looking at which objects were finalized. I found way too many font objects being finalized. After a bit of debugging I found that the allocation source for the Font objects was this cache. Since this cache had been there for years, it means that the cache was never needed (I found no perf issue due to the creation of font objects), the cache was never profiled to see if it did bring any performance gain, and to make the cache beneficial it would need to be accessed much more often. That was the story of the non-caching cache. Next time I will write something about measuring.
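One possible fix, shown here as a minimal sketch of my own rather than code from the original post, keeps Theme as a struct but writes the mutated copy back into the dictionary so the created font is actually retained:

using System;
using System.Collections.Generic;
using System.Drawing;

// Reuses the Theme struct from the example above.
static class FixedThemeCache
{
    static Dictionary<string, Theme> _Cache = new Dictionary<string, Theme>
    {
        {"Default", new Theme { Color = Color.AliceBlue }},
        {"Theme12", new Theme { Color = Color.Aqua }},
    };

    public static Theme Get(string theme)
    {
        Theme cached = _Cache[theme];
        if (cached.Font == null)
        {
            Console.WriteLine("Creating new font");
            cached.Font = new Font("Arial", 8);
            // Store the mutated copy back; without this line the dictionary
            // keeps a Theme whose Font field is still null.
            _Cache[theme] = cached;
        }
        return cached;
    }
}

Changing struct Theme to class Theme, as suggested above, removes the need for the write-back, because the dictionary then holds a reference to the same object that is mutated.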

    Read the article

  • Real-world SignalR example, ditching ghetto long polling

    - by Jeff
    One of the highlights of BUILD last week was the announcement that SignalR, a framework for real-time client to server (or cloud, if you will) communication, would be a real supported thing now with the weight of Microsoft behind it. Love the open source flava! If you aren’t familiar with SignalR, watch this BUILD session with PM Damian Edwards and dev David Fowler. Go ahead, I’ll wait. You’ll be in a happy place within the first ten minutes. If you skip to the end, you’ll see that they plan to ship this as a real first version by the end of the year. Insert slow clap here. Writing a few lines of code to move around a box from one browser to the next is a way cool demo, but how about something real-world? When learning new things, I find it difficult to be abstract, and I like real stuff. So I thought about what was in my tool box and then decided to port my crappy long-polling “there are new posts” feature of POP Forums to use SignalR. A few versions back, I added a feature where a button would light up while you were pecking out a reply if someone else made a post in the interim. It kind of saves you from that awkward moment where someone else posts some snark before you. While I was proud of the feature, I hated the implementation. When you clicked the reply button, it started polling an MVC URL asking if the last post you had matched the last one on the server, and it did it every second and a half until you either replied or the server told you there was a new post, at which point it would display that button. The code was not glam:

// in the reply setup
PopForums.replyInterval = setInterval("PopForums.pollForNewPosts(" + topicID + ")", 1500);

// called from the reply setup and the handler that fetches more posts
PopForums.pollForNewPosts = function (topicID) {
    $.ajax({
        url: PopForums.areaPath + "/Forum/IsLastPostInTopic/" + topicID,
        type: "GET",
        dataType: "text",
        data: "lastPostID=" + PopForums.currentTopicState.lastVisiblePost,
        success: function (result) {
            var lastPostLoaded = result.toLowerCase() == "true";
            if (lastPostLoaded) {
                $("#MorePostsBeforeReplyButton").css("visibility", "hidden");
            } else {
                $("#MorePostsBeforeReplyButton").css("visibility", "visible");
                clearInterval(PopForums.replyInterval);
            }
        },
        error: function () { }
    });
};

What’s going on here is the creation of an interval timer to keep calling the server and bugging it about new posts, and setting the visibility of a button appropriately. It looks like this if you’re monitoring requests in FireBug: Gross. The SignalR approach was to call a message broker when a reply was made, and have that broker call back to the listening clients, via a SignalR hub, to let them know about the new post. It seemed weird at first, but the server-side hub’s only method is to add the caller to a group, so new post notifications only go to callers viewing the topic where a new post was made. Beyond that, it’s important to remember that the hub is also the means to calling methods at the client end. Starting at the server side, here’s the hub:

using Microsoft.AspNet.SignalR.Hubs;

namespace PopForums.Messaging
{
    public class Topics : Hub
    {
        public void ListenTo(int topicID)
        {
            Groups.Add(Context.ConnectionId, topicID.ToString());
        }
    }
}

Have I mentioned how awesomely not complicated this is? The hub acts as the channel between the server and the client, and you’ll see how JavaScript calls the above method in a moment.
Next, the broker class and its associated interface:

using Microsoft.AspNet.SignalR;
using Topic = PopForums.Models.Topic;

namespace PopForums.Messaging
{
    public interface IBroker
    {
        void NotifyNewPosts(Topic topic, int lastPostID);
    }

    public class Broker : IBroker
    {
        public void NotifyNewPosts(Topic topic, int lastPostID)
        {
            var context = GlobalHost.ConnectionManager.GetHubContext<Topics>();
            context.Clients.Group(topic.TopicID.ToString()).notifyNewPosts(lastPostID);
        }
    }
}

The NotifyNewPosts method uses the static GlobalHost.ConnectionManager.GetHubContext<Topics>() method to get a reference to the hub, and then makes a call to clients in the group matched by the topic ID. It’s calling the notifyNewPosts method on the client. The TopicService class, which handles the reply data from the MVC controller, has an instance of the broker new’d up by dependency injection, so it took literally one line of code in the reply action method to get things moving:

_broker.NotifyNewPosts(topic, post.PostID);

The JavaScript side of things wasn’t much harder. When you click the reply button (or quote button), the reply window opens up and fires up a connection to the hub:

var hub = $.connection.topics;
hub.client.notifyNewPosts = function (lastPostID) {
    PopForums.setReplyMorePosts(lastPostID);
};
$.connection.hub.start().done(function () {
    hub.server.listenTo(topicID);
});

The important part to look at here is the creation of the notifyNewPosts function. That’s the method that is called from the server in the Broker class above. Conversely, once the connection is done, the script calls the listenTo method on the server, letting it know that this particular connection is listening for new posts on this specific topic ID. This whole experiment enables a lot of ideas that would make the forum more Facebook-like, letting you know when stuff is going on around you.
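For context, here is a rough sketch of what that server-side wiring can look like; the class shape, method name, and Post type are my own assumptions, not the actual POP Forums source:

public class TopicService
{
    private readonly IBroker _broker;

    // The broker is injected by the DI container, so the service itself
    // never has to know anything about SignalR.
    public TopicService(IBroker broker)
    {
        _broker = broker;
    }

    public void Reply(Topic topic, Post post)
    {
        // ... persist the reply ...

        // Notify every client in this topic's group that a new post exists.
        _broker.NotifyNewPosts(topic, post.PostID);
    }
}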

    Read the article

  • Simple Project Templates

    - by Geertjan
    The NetBeans sources include a module named "simple.project.templates": In the module sources, Tim Boudreau turns out to be the author of the code, so I asked him what it was all about, and if he could provide some usage code. His response, from approximately this time last year because it's been sitting in my inbox for a while, is below. Sure - though I think the javadoc in it is fairly complete.  I wrote it because I needed to create a bunch of project templates for Javacard, and all of the ways that is usually done were grotesque and complicated.  I figured we already have the ability to create files from templates, and we already have the ability to do substitutions in templates, so why not have a single file that defines the project as a list of file templates to create (with substitutions in the names) and some definitions of what should be in project properties. You can also add files to the project programmatically if you want. Basically, a template for an entire project is a .properties file.  Any line which doesn't have the prefix 'pp.' or 'pvp.' is treated as the definition of one file which should be created in the new project.  Any such line where the key ends in * means that file should be opened once the new project is created.  So, for example, in the nodejs module, the definition looks like:

{{projectName}}.js*=Templates/javascript/HelloWorld.js
.npmignore=node_hidden_templates/npmignore

So, the first line means:
 - Create a file with the same name as the project, using the HelloWorld template
   - I.e. the left side of the line is the relative path of the file to create, and the right side is the path in the system filesystem for the template to use
     - If the template is not one you normally want users to see, just register it in the system filesystem somewhere other than Templates/ (but remember to set the attribute that marks it as a template)
 - Include that file in the set of files which should be opened in the editor once the new project is created.

To actually create a project, first you just create a new ProjectCreator:

ProjectCreator gen = new ProjectCreator( parentFolderOfNewProject );

Now, if you want to programmatically generate any files, in addition to those defined in the template, you can:

gen.add (new FileCreator("nbproject", "project.xml", false) {
    public DataObject create (FileObject project, Map<String,String> substitutions) throws IOException {
        ...
    }
});

Then pass the FileObject for the project template (the properties file) to the ProjectCreator's createProject method (hmm, maybe it should be the string path to the project template instead, to save the caller trouble looking up the FileObject for the template).  That method looks like this:

public final GeneratedProject createProject(final ProgressHandle handle, final String name, final FileObject template, final Map<String, String> substitutions) throws IOException {

The name parameter should be the directory name for the new project; the map is the strings you gathered in the wizard which should be used for substitutions.  createProject should be called on a background thread (i.e. use a ProgressInstantiatingIterator for the wizard iterator and just pass in the ProgressHandle you are given). The return value is a GeneratedProject object, which is just a holder for the created project directory and the set of DataObjects which should be opened when the wizard finishes.
I'd love to see simple.project.templates moved out of the javacard cluster, as it is really useful and much simpler than any of the stuff currently done for generating projects.  It would also be possible to do much richer tools for creating projects in apisupport - i.e. choose (or create in the wizard) the templates you want to use, generate a skeleton wizard with a UI for all the properties you'd like to substitute, etc. Here is a partial project template from Javacard - for example usage, see org.netbeans.modules.javacard.wizard.ProjectWizardIterator in javacard.project (or the much simpler one in contrib/nodejs).

#This properties file describes what to create when a project template is
#instantiated.  The keys are paths on disk relative to the project root.
#The values are paths to the templates to use for those files in the system
#filesystem.  Any string inside {{ and }}'s will be substituted using properties
#gathered in the template wizard.
#Special key prefixes are
#  pp. - indicates an entry for nbproject/project.properties
#  pvp. - indicates an entry for nbproject/private/private.properties

#File templates, in format [path-in-project=path-to-template]
META-INF/javacard.xml=org-netbeans-modules-javacard/templates/javacard.xml
META-INF/MANIFEST.MF=org-netbeans-modules-javacard/templates/EAP_MANIFEST.MF
APPLET-INF/applet.xml=org-netbeans-modules-javacard/templates/applet.xml
scripts/{{classnamelowercase}}.scr=org-netbeans-modules-javacard/templates/test.scr
src/{{packagepath}}/{{classname}}.java*=Templates/javacard/ExtendedApplet.java
nbproject/deployment.xml=org-netbeans-modules-javacard/templates/deployment.xml

#project.properties contents
pp.display.name={{projectname}}
pp.platform.active={{activeplatform}}
pp.active.device={{activedevice}}
pp.includes=**
pp.excludes=

I will be using the above info in an upcoming blog entry and provide step-by-step instructions showing how to use them. However, anyone else out there should have enough info from the above to get started yourself!

    Read the article

  • More Quick Interview Tips

    - by Ajarn Mark Caldwell
    In the last couple of years I have conducted a lot of interviews for application and database developers for my company, and I can tell you that the little things can mean a lot.  Here are a few quick tips to help you make a good first impression. A year ago I gave you my #1 interview tip: Do some basic research!  And a year later, I am still stunned by how few technical people do the most basic of research.  I can only guess that it is because it is so engrained in our psyche that technical competence is everything (see How to Manage Technical Employees for more on this idea) that we forget or ignore the importance of soft skills and the art of the interview.  Or maybe it is because we have heard the stories of the uber-geek who has zero personal skills but still makes a fortune working for Microsoft.  Well, here’s another quick tip:  You’re probably not as good as he is; and a large number of companies actually run small to medium sized teams and can’t really afford to have the social outcast in the group.  In a small team, everyone has to get along well, and that’s an important part of what I’m evaluating during the interview process. My #2 tip is to act alive!  I typically conduct screening interviews by phone before I bring someone in for an in-person.  I don’t care how laid-back you are or if you have a “quiet personality”, when we are talking, ACT like you are happy I called and you are interested in getting the job.  If you sound like you are bored-to-death and that you would be perfectly happy to never work again, I am perfectly happy to help you attain that goal, and I’ll move on to the next candidate. And closely related to #2, perhaps we’ll call it #2.1 is this tip:  When I call you on the phone for the interview, don’t answer your phone by just saying, “Hello”.  You know that the odds are about 999-to-1 that it is me calling for the interview because we have specifically arranged this time slot for the call.  And you can see on the caller ID that it is not one of your buddies calling, so identify yourself.  Don’t make me question whether I dialed the right number.  Answer your phone with a, “Hello, this is ___<your full name preferred, but at least your first name>___.”.  And when I say, “Hi, <your name>, this is Mark from <my company>” it would be really nice to hear you say, “Hi, Mark, I have been expecting your call.”  This sets the perfect tone for our conversation.  I know I have the right person; you are professional enough and interested enough in the job or contract to remember your appointments; and now we can move on to a little intro segment and get on with the reason for our call. As crazy as it sounds, I’ve actually had phone interviews that went like this: <Ring…> You:  “Hello?” Me:  “Hi, this is Mark from _______” You:  “Yeah?” Me:  “Is this <your name>?” You:  “Yeah.” Me:  “I had this time in my calendar for us to talk…were you expecting my call?” You:  “Oh, yeah, sure…” I used to be nice and would try to go ahead with the interview even after this bad start, thinking I was giving the candidate the benefit of the doubt…a second chance…but more often than not it was a struggle and 10 minutes into what was supposed to be a 45-minute call, I’m looking for a way to hang up without being rude myself.  It never worked out.  I never brought that person in for an in-person interview, much less offered them the job or contract.  Who knows, maybe they were some sort of wunderkind that we missed out on.  
What I know is that they would never fit in with the rest of the team, and around here that is absolutely critical. So, in conclusion… Act alive!  Identify yourself!  And do at least the very basic of research.

    Read the article

  • .NET Security Part 4

    - by Simon Cooper
    Finally, in this series, I am going to cover some of the security issues that can trip you up when using sandboxed appdomains. DISCLAIMER: I am not a security expert, and this is by no means an exhaustive list. If you actually are writing security-critical code, then get a proper security audit of your code by a professional. The examples below are just illustrations of the sort of things that can go wrong. 1. AppDomainSetup.ApplicationBase The most obvious one is the issue covered in the MSDN documentation on creating a sandbox, in step 3: the sandboxed appdomain has the same ApplicationBase as the controlling appdomain. So let’s explore what happens when they are the same, and an exception is thrown. In the sandboxed assembly, Sandboxed.dll (IPlugin is an interface in a partially-trusted assembly, with a single MethodToDoThings on it):

public class UntrustedPlugin : MarshalByRefObject, IPlugin
{
    // implements IPlugin.MethodToDoThings()
    public void MethodToDoThings()
    {
        throw new EvilException();
    }
}

[Serializable]
internal class EvilException : Exception
{
    public override string ToString()
    {
        // show we have read access to C:\Windows
        // read the first 5 directories
        Console.WriteLine("Pwned! Mwuahahah!");
        foreach (var d in Directory.EnumerateDirectories(@"C:\Windows").Take(5))
        {
            Console.WriteLine(d);
        }
        return base.ToString();
    }
}

And in the controlling assembly:

// what can possibly go wrong?
AppDomainSetup appDomainSetup = new AppDomainSetup
{
    ApplicationBase = AppDomain.CurrentDomain.SetupInformation.ApplicationBase
};

// only grant permissions to execute
// and to read the application base, nothing else
PermissionSet restrictedPerms = new PermissionSet(PermissionState.None);
restrictedPerms.AddPermission(
    new SecurityPermission(SecurityPermissionFlag.Execution));
restrictedPerms.AddPermission(
    new FileIOPermission(FileIOPermissionAccess.Read, appDomainSetup.ApplicationBase));
restrictedPerms.AddPermission(
    new FileIOPermission(FileIOPermissionAccess.PathDiscovery, appDomainSetup.ApplicationBase));

// create the sandbox
AppDomain sandbox = AppDomain.CreateDomain("Sandbox", null, appDomainSetup, restrictedPerms);

// execute UntrustedPlugin in the sandbox
// don't crash the application if the sandbox throws an exception
IPlugin o = (IPlugin)sandbox.CreateInstanceFromAndUnwrap("Sandboxed.dll", "UntrustedPlugin");
try
{
    o.MethodToDoThings();
}
catch (Exception e)
{
    Console.WriteLine(e.ToString());
}

And the result? Oops. We’ve allowed a class that should be sandboxed to execute code with fully-trusted permissions! How did this happen? Well, the key is the exact meaning of the ApplicationBase property: The application base directory is where the assembly manager begins probing for assemblies. When EvilException is thrown, it propagates from the sandboxed appdomain into the controlling assembly’s appdomain (as it’s marked as Serializable). When the exception is deserialized, the CLR finds and loads the sandboxed dll into the fully-trusted appdomain. Since the controlling appdomain’s ApplicationBase directory contains the sandboxed assembly, the CLR finds and loads the assembly into a full-trust appdomain, and the evil code is executed. So the problem isn’t exactly that the sandboxed appdomain’s ApplicationBase is the same as the controlling appdomain’s, it’s that the sandboxed dll was in such a place that the controlling appdomain could find it as part of the standard assembly resolution mechanism.
The sandbox then forced the assembly to load in the controlling appdomain by throwing a serializable exception that propagated outside the sandbox. The easiest fix for this is to keep the sandbox ApplicationBase well away from the ApplicationBase of the controlling appdomain, and don’t allow the sandbox permissions to access the controlling appdomain’s ApplicationBase directory. If you do this, then the sandboxed assembly can’t be accidentally loaded into the fully-trusted appdomain, and the code can’t be executed. If the plugin does try to induce the controlling appdomain to load an assembly it shouldn’t, a SerializationException will be thrown when it tries to load the assembly to deserialize the exception, and no damage will be done. 2. Loading the sandboxed dll into the application appdomain As an extension of the previous point, you shouldn’t directly reference types or methods in the sandboxed dll from your application code. That loads the assembly into the fully-trusted appdomain, and from there code in the assembly could be executed. Instead, pull out the methods you want the sandboxed dll to have into an interface or class in a partially-trusted assembly you control, and execute methods via that (similar to the example above with the IPlugin interface). If you need to have a look at the assembly before executing it in the sandbox, either examine the assembly using reflection from within the sandbox, or load the assembly into the reflection-only context in the application’s appdomain. The code in assemblies in the reflection-only context can’t be executed, it can only be reflected upon, thus protecting your appdomain from malicious code. 3. Incorrectly asserting permissions You should only assert permissions when you are absolutely sure they’re safe. For example, this method allows a caller read-access to any file they call this method with, including your documents, any network shares, the C:\Windows directory, etc:

[SecuritySafeCritical]
public static string GetFileText(string filePath)
{
    new FileIOPermission(FileIOPermissionAccess.Read, filePath).Assert();
    return File.ReadAllText(filePath);
}

Be careful when asserting permissions, and ensure you’re not providing a loophole sandboxed dlls can use to gain access to things they shouldn’t be able to. Conclusion Hopefully, that’s given you an idea of some of the ways it’s possible to get past the .NET security system. As I said before, this post is not exhaustive, and you certainly shouldn’t base any security-critical applications on the contents of this blog post. What this series should help with is understanding the possibilities of the security system, and what all the security attributes and classes mean and what they are used for, if you were to use the security system in the future.
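As an illustration of a more defensive shape for that kind of method, here is a sketch of my own (the log-directory scenario and the names are assumptions, not code from the original post) that narrows what the assert can expose:

[SecuritySafeCritical]
public static string GetLogFileText(string fileName)
{
    // Hypothetical safe root; only files under this directory may be read.
    string safeRoot = @"C:\MyApp\Logs";
    string fullPath = Path.GetFullPath(Path.Combine(safeRoot, fileName));

    // Reject anything that escapes the safe root (e.g. "..\..\secret.txt").
    if (!fullPath.StartsWith(safeRoot + Path.DirectorySeparatorChar, StringComparison.OrdinalIgnoreCase))
        throw new ArgumentException("Only files under the log directory may be read.", "fileName");

    // Assert only the narrow permission we actually need, for the validated path.
    new FileIOPermission(FileIOPermissionAccess.Read, fullPath).Assert();
    try
    {
        return File.ReadAllText(fullPath);
    }
    finally
    {
        // Remove the assert as soon as it is no longer needed.
        CodeAccessPermission.RevertAssert();
    }
}

The idea is simply that the asserted permission covers only a resource you have already validated, and is reverted as soon as the protected operation completes.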

    Read the article

  • A very useful custom component

    - by Kevin Smith
    Whenever I am debugging a problem in WebCenter Content (WCC) I often find it useful to see the contents of the internal data binder used by WCC when executing a service. I want to know the value of all parameters passed in by the caller, either by a user in the web GUI or by an application calling the service via RIDC or web services. I also want to know the value of binder variables calculated by WCC as it processes a service. What defaults has it applied based on configuration settings or profile rules? What values has it derived based on the user input? To help with this I created a component that uses a java filter to dump out the contents of the internal data binder to the WCC trace file. It dumps the binder contents using the toString() method. You can register this filter code using many different filter hooks to see how the binder is updated as WCC processes the service. By default, it uses the validateStandard filter hook, which is useful during a CHECKIN service. It uses the system trace section, so make sure that trace section is enabled before looking for the output from this component. Here is some sample output:

system/6    10.09 09:57:40.648    IdcServer-1    filter: postParseDataForServiceRequest, binder start --
system/6    10.09 09:57:40.698    IdcServer-1    *** LocalData ***
system/6    10.09 09:57:40.698    IdcServer-1    (10 keys + 0 defaults)
system/6    10.09 09:57:40.698    IdcServer-1    ClientEncoding=UTF-8
system/6    10.09 09:57:40.698    IdcServer-1    IdcService=CHECKIN_UNIVERSAL
system/6    10.09 09:57:40.698    IdcServer-1    NoHttpHeaders=0
system/6    10.09 09:57:40.698    IdcServer-1    UserDateFormat=iso8601
system/6    10.09 09:57:40.698    IdcServer-1    UserTimeZone=UTC
system/6    10.09 09:57:40.698    IdcServer-1    dDocTitle=Check in from RIDC using Framework Folder
system/6    10.09 09:57:40.698    IdcServer-1    dDocType=Document
system/6    10.09 09:57:40.698    IdcServer-1    dSecurityGroup=Public
system/6    10.09 09:57:40.698    IdcServer-1    parentFolderPath=/folder1/folder2
system/6    10.09 09:57:40.698    IdcServer-1    primaryFile=testfile5.bin
system/6    10.09 09:57:40.698    IdcServer-1    ***  RESULT SETS  ***
system/6    10.09 09:57:40.698    IdcServer-1    binder end --------------------------------------------

See the readme included in the component for more details. You can download the component from here.

    Read the article

< Previous Page | 18 19 20 21 22 23 24 25 26  | Next Page >