Search Results

Search found 4028 results on 162 pages for 'mysqld safe'.


  • Taking the training wheels off: Accelerating the Business with Oracle IAM by Brian Mozinski (Accenture)

    - by Greg Jensen
    Today, technical requirements for IAM are evolving rapidly, and the bar is continuously raised for high-performance IAM solutions as organizations look to roll out high-volume use cases on the back of legacy systems.  Existing solutions were often designed and architected to support offline transactions and manual processes, and business owners today demand globally scalable infrastructure to support the growth their business cases are expected to deliver. To help IAM practitioners address these challenges and make their organizations and themselves more successful, in this series we will outline:
    • Taking the training wheels off: Accelerating the Business with Oracle IAM - The explosive growth in expectations for IAM infrastructure, and the business cases they support to gain investment in new security programs.
    • "Necessity is the mother of invention": Technical solutions developed in the field - Well-proven tricks of the trade, used by IAM gurus to maximize your solution while addressing the requirements of global organizations.
    • The Art & Science of Performance Tuning of Oracle IAM 11gR2 - Real-world examples of performance tuning with Oracle IAM.
    • Nowhere to go but up: Extending the benefits of accelerated IAM - Anything is possible; compelling new solutions organizations are unlocking with accelerated Oracle IAM.
    Let’s get started … by talking about the changing dynamics driving these discussions. Big companies are getting bigger every day, and increasingly organizations operate across state lines, multiple time zones, and in many countries or continents at the same time.  No longer is midnight to 6am a safe time to take down the system for upgrades, to run recons and import or update user accounts and attributes.  Further, IT organizations are operating as shared services with SLAs similar to telephone carrier levels expected by their “clients”.  Workers are moved in and out of roles on a weekly, daily, or even hourly basis, and IAM is expected to support those rapid changes.  End users registering for services during business hours in Singapore expect their access to be green-lighted in custom apps hosted in Portugal within the hour.  The old expectations of asynchronous systems and batched updates are no longer adequate, and the number and types of users keep growing.
    When organizations acted more like independent teams at functional or geographic levels it was manageable to have processes that relied on a handful of people who knew how to make things work …. knew how to get you access to the key systems to get your job done.  Today everyone is expected to do more with less: the finance administrator previously supporting their local Atlanta sales office might now be asked to help close the books for the Johannesburg team, and the access certification process once completed monthly by Joan on the 3rd floor is now done by a shared pool of resources in Sao Paulo.  Fragmented processes that rely on institutional knowledge to get access to systems and get work done quickly break down in these scenarios.  Highly robust processes that have automated workflows for connected or disconnected systems give organizations the dynamic flexibility to share work across these lines and cut costs or increase productivity. As the IT industry’s computing paradigms continue to change with the passing of time, and as mature or proven approaches become clear, it is normal for organizations to adjust accordingly.
Businesses must manage identity in an increasingly hybrid world in which legacy on-premises IAM infrastructures are extended or replaced to support more and more interconnected and interdependent services for a wider range of users. The old legacy IAM implementation models we had relied on to manage identities no longer apply. End users expect to self-request access to services from their tablet, get supervisor approval over mobile devices and email, and launch the application even if it is hosted in the cloud, or run by a partner, vendor, or service provider. While user expectations are higher, they are also simpler … logging into custom desktop apps to request approvals, or going through email or paper based processes for certification, is unacceptable.  Users expect security to operate within the paradigm of the application … i.e. feel like the application they are using. Citizen and customer facing applications have evolved from everywhere, with custom applications, 3rd party tools, and others merged in from acquired entities or 3rd party OEMs resold to expand your portfolio of services.  These all have their own user stores, authentication models, user lifecycles, session management, etc.  Often the designers/developers are no longer accessible and the documentation is limited.  Bringing together underlying directories to scale for growth and improve user experience is critical for revenue … but also for operations.
Job functions are more dynamic.... take the Olympics for example.  Endless organizations, from corporations broadcasting, endorsing, or marketing through the event … to non-profit athletic foundations and public/government entities for athletes and public safety, all operate simultaneously on the world stage.  Each organization needs to spin up short-term teams, often dealing with proprietary information from hot ads to racing strategies or security plans.  IAM is expected to enable teams to spin up, enable new applications, protect privacy, and secure critical infrastructure.  Then it needs to be disabled just as quickly as users go back to their previous responsibilities.
On a more technical level … optimized system directories and clear tuning guidelines and parameters are needed by businesses today. Businesses need to make the right choices (for example, virtual directories) by selecting and tuning the correct architectural pattern (virtual, direct, or replicated); the challenge is that businesses need to assess and choose the correct architectural pattern (centralized, virtualized, or distributed). Today's business organizations have very complex heterogeneous enterprises that contain diverse and multifaceted information. With today's ever changing global landscape, the strategic end goal in challenging times for business is business agility. The business of identity management requires enterprises to be more agile and more responsive than ever before. The continued proliferation of networked devices (PCs, tablets, PDAs, notebooks, etc.) has caused the number of devices, and the users granted access to them, to grow exponentially. Businesses need to deploy an IAM system that can account for the demands for authentication and authorization on these devices. Increased innovation is forcing businesses and organizations to centralize their identity management services.
Access management needs to handle traditional web-based access as well as new innovations around mobile, and it must address insufficient governance processes which can lead to rogue identity accounts, which can then become a source of vulnerabilities within a business’s identity platform. Risk-based decisions present challenges for businesses: an adaptive risk model must make proper access decisions via standard web single sign-on for internal and external customers. Organizations have to move beyond simple login and passwords to address trusted relationship questions such as: Is this a trusted customer, client, or citizen? Is this a trusted employee, vendor, or partner? Is this a trusted device? Without a solid technological foundation, organizational performance, collaboration, constituent services, or any other organizational processes will languish. A single server location presents not only network concerns for a distributed user base, but also identity challenges. The network risks are centered on the latency of the long trip that the traffic has to take. Other risks are around performance and availability: if the single identity server is lost, all access is lost. As you can see, there are many reasons why performance tuning IAM will have a substantial impact on the success of your organization.  In our next installment in the series we roll up our sleeves and get into detailed tuning techniques used every day by thought leaders in the field implementing Oracle Identity & Access Management Solutions.

    Read the article

  • Tailoring the Oracle Fusion Applications User Interface with Oracle Composer

    - by mvaughan
    By Killian Evers, Oracle Applications User Experience Changing the user interface (UI) is one of the most common modifications customers perform to Oracle Fusion Applications. Typically, customers add or remove a field based on their needs. Oracle makes the process of tailoring easier for customers, and reduces the burden for their IT staff, which you can read about on the Usable Apps website or in an earlier VoX post.This is the first in a series of posts that will talk about the tools that Oracle has provided for tailoring with its family of composers. These tools are designed for business systems analysts, and they allow employees other than IT staff to make changes in an upgrade-safe and patch-friendly manner. Let’s take a deep dive into one of these composers, the Oracle Composer. Oracle Composer allows business users to modify existing UIs after they have been deployed and are in use. It is an integral component of our SaaS offering. Using Oracle Composer, users can control:     •    Who sees the changes     •    When the changes are made     •    What changes are made Change for me, change for you, change for all of youOne of the most powerful aspects of Oracle Composer is its flexibility. Oracle uses Oracle Composer to make changes for a user or group of users – those who see the changes. A user of Oracle Fusion Applications can make changes to the user interface at runtime via Oracle Composer, and these changes will remain every time they log into the system. For example, they can rearrange certain objects on a page, add and remove designated content, and save queries.Business systems analysts can make changes to Oracle Fusion Application UIs for groups of users or all users. Oracle’s Fusion Middleware Metadata Services (MDS) stores these changes and retrieves them at runtime, merging customizations with the base metadata and revealing the final experience to the end user. A tailored application can have multiple customization layers, and some layers can be specific to certain Fusion Applications. Some examples of customization layers are: site, organization, country, or role. Customization layers are applied in a specific order of precedence on top of the base application metadata. This image illustrates how customization layers are applied.What time is it?Users make changes to UIs at design time, runtime, and design time at runtime. Design time changes are typically made by application developers using an integrated development environment, or IDE, such as Oracle JDeveloper. Once made, these changes are then deployed to managed servers by application administrators. Oracle Composer covers the other two areas: Runtime changes and design time at runtime changes. When we say users are making changes at runtime, we mean that the changes are made within the running application and take effect immediately in the running application. A prime example of this ability is users who make changes to their running application that only affect the UIs they see. What is new with Oracle Composer is the last area: Design time at runtime.  A business systems analyst can make changes to the UIs at runtime but does not have to make those changes immediately to the application. These changes are stored as metadata, separate from the base application definitions. Customizations made at runtime can be saved in a sandbox so that the changes can be isolated and validated before being published into an environment, without the need to redeploy the application. 
What can I do?Oracle Composer can be run in one of two modes. Depending on which mode is chosen, you may have different capabilities available for changing the UIs. The first mode is view mode, the most common default mode for most pages. This is the mode that is used for personalizations or user customizations. Users can access this mode via the Personalization link (see below) in the global region on Oracle Fusion Applications pages. In this mode, you can rearrange components on a page with drag-and-drop, collapse or expand components, add approved external content, and change the overall layout of a page. However, all of the changes made this way are exclusive to that particular user.The second mode, edit mode, is typically made available to select users with access privileges to edit page content. We call these folks business systems analysts. This mode is used to make UI changes for groups of users. Users with appropriate privileges can access the edit mode of Oracle Composer via the Administration menu (see below) in the global region on Oracle Fusion Applications pages. In edit mode, users can also add components, delete components, and edit component properties. While in edit mode in Oracle Composer, there are two views that assist the business systems analyst with making UI changes: Design View and Source View (see below). Design View, the default view, is a WYSIWYG rendering of the page and its content. The business systems analyst can perform these actions: Add content – including custom content like a portlet displaying news or stock quotes, or predefined content delivered from Oracle Fusion Applications (including ADF components and task flows) Rearrange content – performed via drag-and-drop on the page or by using the actions menu of a component or portlet to move content around Edit component properties and parameters – for specific components, control the visual properties such as text or display labels, or parameters such as RSS feeds Hide or show components – hidden components can be re-shown Delete components Change page layout – users can select from eight pre-defined layouts Edit page properties – create or edit a page’s parameters and display properties Reset page customizations – remove edits made to the page in the current layer and/or reset the page to a previous state. Detailed information on each of these capabilities and the additional actions not covered in the list above can be found in the Oracle® Fusion Middleware Developer's Guide for Oracle WebCenter.This image shows what the screen looks like in Design View.Source View, the second option in the edit mode of Oracle Composer, provides a WYSIWYG and a hierarchical rendering of page components in a component navigator. In Source View, users can access and modify properties of components that are not otherwise selectable in Design View. For example, many ADF Faces components can be edited only in Source View. Users can also edit components within a task flow. This image shows what the screen looks like in Source View.Detailed information on Source View can be found in the Oracle® Fusion Middleware Developer's Guide for Oracle WebCenter.Oracle Composer enables any application or portal to be customized or personalized after it has been deployed and is in use. It is designed to be extremely easy to use so that both business systems analysts and users can edit Oracle Fusion Applications pages with a few clicks of the mouse. 
Oracle Composer runs in all modern browsers and provides a rich, dynamic way to edit JSF application and portal pages.From the editor: The next post in this series about composers will be on Data Composer. You can also catch Killian speaking about extensibility at OpenWorld 2012 and in her Faces of Fusion video.

    Read the article

  • C# Extension Methods - To Extend or Not To Extend...

    - by James Michael Hare
    I've been thinking a lot about extension methods lately, and I must admit I both love them and hate them. They are a lot like sugar: they taste so nice and sweet, but they'll rot your teeth if you eat them too much.   I can't deny that they are useful and very handy. One of the major components of the Shared Component library where I work is a set of useful extension methods. But, I also can't deny that they tend to be overused and abused to willy-nilly extend every living type.
    So what constitutes a good extension method? Obviously, you can write an extension method for nearly anything whether it is a good idea or not. Many times, in fact, an idea seems like a good extension method but in retrospect really doesn't fit.   So what's the litmus test? To me, an extension method should be like in the movies when a person runs into their twin, separated at birth. You just know you're related. Obviously, that's hard to quantify, so let's try to put a few rules-of-thumb around them.   A good extension method should:
    1. Apply to any possible instance of the type it extends.
    2. Simplify logic and improve readability/maintainability.
    3. Apply to the most specific type or interface applicable.
    4. Be isolated in a namespace so that it does not pollute IntelliSense.
    So let's look at a few examples in relation to these rules.
    The first rule, to me, is the most important of all. Once again, it bears repeating, a good extension method should apply to all possible instances of the type it extends. It should feel like the long lost relative that should have been included in the original class but somehow was missing from the family tree.
    Take this nifty little int extension. I saw this once in a blog and at first I really thought it was pretty cool, but then I started noticing a code smell I couldn't quite put my finger on. So let's look:

      public static class IntExtensions
      {
          public static int Seconds(this int num)
          {
              return num * 1000;
          }

          public static int Minutes(this int num)
          {
              return num * 60000;
          }
      }

    This is so you could do things like:

      ...
      Thread.Sleep(5.Seconds());
      ...
      proxy.Timeout = 1.Minutes();
      ...

    Awww, you say, that's cute! Well, that's the problem, it's kitschy and it doesn't always apply (and incidentally you could achieve the same thing with TimeSpan.FromSeconds(5)). It's syntactical candy that looks cool, but tends to rot and pollute the code. It would allow things like:

      total += numberOfTodaysOrders.Seconds();

    which makes no sense and should never be allowed. The problem is you're applying an extension method to a logical domain, not a type domain. That is, the extension method Seconds() doesn't really apply to ALL ints, it applies to ints that are representative of time that you want to convert to milliseconds.
    Do you see what I mean? The two problems, in a nutshell, are that a) Seconds() called off a non-time value makes no sense and b) calling Seconds() off something to pass to something that does not take milliseconds will be off by a factor of 1000 or worse.   Thus, in my mind, you should only ever have an extension method that applies to the whole domain of that type.
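    As an aside, if you really want to keep that fluent feel, one variation worth considering (just a sketch of my own, not something from our shared library) is to have the extension return a TimeSpan instead of a raw int. It doesn't remove the logical-domain objection -- the method still shows up on every int -- but the type system will at least catch the factor-of-1000 mistakes, because a TimeSpan can't be added to an order count or passed where raw milliseconds are expected:

      using System;
      using System.Threading;

      public static class TimeSpanExtensions
      {
          // Returning TimeSpan (rather than an int of milliseconds) lets the compiler
          // reject nonsense like: total += numberOfTodaysOrders.Seconds();
          public static TimeSpan Seconds(this int value)
          {
              return TimeSpan.FromSeconds(value);
          }

          public static TimeSpan Minutes(this int value)
          {
              return TimeSpan.FromMinutes(value);
          }
      }

      // Usage still reads fluently: Thread.Sleep(5.Seconds()) compiles because
      // Thread.Sleep has a TimeSpan overload.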
For example, this is one of my personal favorites:       public static bool IsBetween<T>(this T value, T low, T high)         where T : IComparable<T>     {         return value.CompareTo(low) >= 0 && value.CompareTo(high) <= 0;     }   This allows you to check if any IComparable<T> is within an upper and lower bound. Think of how many times you type something like:       if (response.Employee.Address.YearsAt >= 2         && response.Employee.Address.YearsAt <= 10)     {     ...     }     Now, you can instead type:       if(response.Employee.Address.YearsAt.IsBetween(2, 10))     {     ...     }     Note that this applies to all IComparable<T> -- that's ints, chars, strings, DateTime, etc -- and does not depend on any logical domain. In addition, it satisfies the second point and actually makes the code more readable and maintainable.   Let's look at the third point. In it we said that an extension method should fit the most specific interface or type possible. Now, I'm not saying if you have something that applies to enumerables, you create an extension for List, Array, Dictionary, etc (though you may have reasons for doing so), but that you should beware of making things TOO general.   For example, let's say we had an extension method like this:       public static T ConvertTo<T>(this object value)     {         return (T)Convert.ChangeType(value, typeof(T));     }         This lets you do more fluent conversions like:       double d = "5.0".ConvertTo<double>();     However, if you dig into Reflector (LOVE that tool) you will see that if the type you are calling on does not implement IConvertible, what you convert to MUST be the exact type or it will throw an InvalidCastException. Now this may or may not be what you want in this situation, and I leave that up to you. Things like this would fail:       object value = new Employee();     ...     // class cast exception because typeof(IEmployee) != typeof(Employee)     IEmployee emp = value.ConvertTo<IEmployee>();       Yes, that's a downfall of working with Convertible in general, but if you wanted your fluent interface to be more type-safe so that ConvertTo were only callable on IConvertibles (and let casting be a manual task), you could easily make it:         public static T ConvertTo<T>(this IConvertible value)     {         return (T)Convert.ChangeType(value, typeof(T));     }         This is what I mean by choosing the best type to extend. Consider that if we used the previous (object) version, every time we typed a dot ('.') on an instance we'd pull up ConvertTo() whether it was applicable or not. By filtering our extension method down to only valid types (those that implement IConvertible) we greatly reduce our IntelliSense pollution and apply a good level of compile-time correctness.   Now my fourth rule is just my general rule-of-thumb. Obviously, you can make extension methods as in-your-face as you want. I included all mine in my work libraries in its own sub-namespace, something akin to:       namespace Shared.Core.Extensions { ... }     This is in a library called Shared.Core, so just referencing the Core library doesn't pollute your IntelliSense, you have to actually do a using on Shared.Core.Extensions to bring the methods in. This is very similar to the way Microsoft puts its extension methods in System.Linq. This way, if you want 'em, you use the appropriate namespace. If you don't want 'em, they won't pollute your namespace.   
To really make this work, however, that namespace should only include extension methods and subordinate types those extensions themselves may use. If you plant other useful classes in those namespaces, once a user includes it, they get all the extensions too.   Also, just as a personal preference, extension methods that aren't simply syntactical shortcuts, I like to put in a static utility class and then have extension methods for syntactical candy. For instance, I think it imaginable that any object could be converted to XML:       namespace Shared.Core     {         // A collection of XML Utility classes         public static class XmlUtility         {             ...             // Serialize an object into an xml string             public static string ToXml(object input)             {                 var xs = new XmlSerializer(input.GetType());                   // use new UTF8Encoding here, not Encoding.UTF8. The later includes                 // the BOM which screws up subsequent reads, the former does not.                 using (var memoryStream = new MemoryStream())                 using (var xmlTextWriter = new XmlTextWriter(memoryStream, new UTF8Encoding()))                 {                     xs.Serialize(xmlTextWriter, input);                     return Encoding.UTF8.GetString(memoryStream.ToArray());                 }             }             ...         }     }   I also wanted to be able to call this from an object like:       value.ToXml();     But here's the problem, if i made this an extension method from the start with that one little keyword "this", it would pop into IntelliSense for all objects which could be very polluting. Instead, I put the logic into a utility class so that users have the choice of whether or not they want to use it as just a class and not pollute IntelliSense, then in my extensions namespace, I add the syntactical candy:       namespace Shared.Core.Extensions     {         public static class XmlExtensions         {             public static string ToXml(this object value)             {                 return XmlUtility.ToXml(value);             }         }     }   So now it's the best of both worlds. On one hand, they can use the utility class if they don't want to pollute IntelliSense, and on the other hand they can include the Extensions namespace and use as an extension if they want. The neat thing is it also adheres to the Single Responsibility Principle. The XmlUtility is responsible for converting objects to XML, and the XmlExtensions is responsible for extending object's interface for ToXml().
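To make the consumer's choice concrete, here is a minimal usage sketch (the Order class is just a hypothetical stand-in for whatever you want to serialize):

      using Shared.Core;              // brings in the XmlUtility class only
      using Shared.Core.Extensions;   // opt in to the ToXml() extension as well

      public class XmlUsageExample
      {
          public void Demo(Order order)   // Order is a hypothetical domain class
          {
              string viaUtility = XmlUtility.ToXml(order);   // explicit utility call, no IntelliSense pollution on object
              string viaExtension = order.ToXml();           // syntactical candy, only because we asked for it
          }
      }

Drop the second using directive and the extension simply disappears from IntelliSense, while the utility call keeps working.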

    Read the article

  • Reg Gets a Job at Red Gate (and what happens behind the scenes)

    - by red(at)work
    Mr Reg Gater works at one of Cambridge’s many high-tech companies. He doesn’t love his job, but he puts up with it because... well, it could be worse. Every day he drives to work around the Red Gate roundabout, wondering what his boss is going to blame him for today, and wondering if there could be a better job out there for him. By late morning he already feels like handing his notice in. He got the hacky look from his boss for being 5 minutes late, and then they ran out of tea. Again. He goes to the local sandwich shop for lunch, and picks up a Red Gate job menu and a Book of Red Gate while he’s waiting for his order. That night, he goes along to Cambridge Geek Nights and sees some very enthusiastic Red Gaters talking about the work they do; it sounds interesting and, of all things, fun. He takes a quick look at the job vacancies on the Red Gate website, and an hour later realises he’s still there – looking at videos, photos and people profiles. He especially likes the Red Gate’s Got Talent page, and is very impressed with Simon Johnson’s marathon time. He thinks that he’d quite like to work with such awesome people. It just so happens that Red Gate recently decided that they wanted to hire another hot shot team member. Behind the scenes, the wheels were set in motion: the recruitment team met with the hiring manager to understand exactly what they’re looking for, and to decide what interview tests to do, who will do the interviews, and to kick-start any interview training those people might need. Next up, a job description and job advert were written, and the job was put on the market. Reg applies, and his CV lands in the Recruitment team’s inbox and they open it up with eager anticipation that Reg could be the next awesome new starter. He looks good, and in a jiffy they’ve arranged an interview. Reg arrives for his interview, and is greeted by a smiley receptionist. She offers him a selection of drinks and he feels instantly relaxed. A couple of interviews and an assessment later, he gets a job offer. We make his day and he makes ours by accepting, and becoming one of the 60 new starters so far this year. Behind the scenes, things start moving all over again. The HR team arranges for a “Welcome” goodie box to be whisked out to him, prepares his contract, sends an email to Information Services (Or IS for short - we’ll come back to them), keeps in touch with Reg to make sure he knows what to expect on his first day, and of course asks him to fill in the all-important wiki questionnaire so his new colleagues can start to get to know him before he even joins. Meanwhile, the IS team see an email in SupportWorks from HR. They see that Reg will be starting in the sales team in a few days’ time, and they know exactly what to do. They pull out a new machine, and within minutes have used their automated deployment software to install every piece of software that a new recruit could ever need. They also check with Reg’s new manager to see if he has any special requirements that they could help with. Reg starts and is amazed to find a fully configured machine sitting on his desk, complete with stationery and all the other tools he’ll need to do his job. He feels even more cared for after he gets a workstation assessment, and realises he’d be comfier with an ergonomic keyboard and a footstool. They arrive minutes later, just like that. His manager starts him off on his induction and sales training. 
Along with job-specific training, he’ll also have a buddy to help him find his feet, and loads of pre-arranged demos and introductions. Reg settles in nicely, and is great at his job. He enjoys the canteen, and regularly eats one of the 40,000 meals provided each year. He gets used to the selection of teas that are available, develops a taste for champagne launch parties, and has his fair share of the 25,000 cups of coffee downed at Red Gate towers each year. He goes along to some Feel Good Fund events, and donates a little something to charity in exchange for a turn on the chocolate fountain. He’s looking a little scruffy, so he decides to get his hair cut in between meetings, just in time for the Red Gate birthday company photo. Reg starts a new project: identifying existing customers to up-sell to new bundles. He talks with the web team to generate lists of qualifying customers who haven’t recently been sent marketing emails, and sends emails out, using a new in-house developed tool to schedule follow-up calls in CRM for the same group. The customer responds, saying they’d like to upgrade but are having a licensing problem – Reg sends the issue to Support, and it gets routed to the web team. The team identifies a workaround, and the bug gets scheduled into the next maintenance release in a fortnight’s time (hey; they got lucky). With all the new stuff Reg is working on, he realises that he’d be way more efficient if he had a third monitor. He speaks to IS and they get him one - no argument. He also needs a test machine and then some extra memory. Done. He then thinks he needs an iPad, and goes to ask for one. He gets told to stop pushing his luck. Some time later, Reg’s wife has a baby, so Reg gets 2 weeks of paid paternity leave and a bunch of flowers sent to his house. He signs up to the childcare scheme so that he doesn’t have to pay National Insurance on the first £243 of his childcare. The accounts team makes it all happen seamlessly, as they did with his Give As You Earn payments, which come out of his wages and go straight to his favorite charity. Reg’s sales career is going well. He’s grateful for the help that he gets from the product support team. How do they answer all those 900-ish support calls so effortlessly each month? He’s impressed with the patches that are sent out to customers who find “interesting behavior” in their tools, and to the customers who just must have that new feature. A little later in his career at Red Gate, Reg decides that he’d like to learn about management. He goes on some management training specially customised for Red Gate, joins the Management Book Club, and gets together with other new managers to brainstorm how to get the most out of one to one meetings with his team. Reg decides to go for a game of Foosball to celebrate his good fortune with his team, and has to wait for Finance to finish. While he’s waiting, he reflects on the wonderful time he’s had at Red Gate. He can’t put his finger on what it is exactly, but he knows he’s on to a good thing. All of the stuff that happened to Reg didn’t just happen magically. We’ve got teams of people working relentlessly behind the scenes to make sure that everyone here is comfortable, safe, well fed and caffeinated to the max.

    Read the article

  • Establishing WebLogic Server HTTPS Trust of IIS Using a Microsoft Local Certificate Authority

    - by user647124
    Everyone agrees that self-signed and demo certificates for SSL and HTTPS should never be used in production, and preferably not elsewhere either. Most self-signed and demo certificates are provided by vendors with the intention that they are used only to integrate within the same environment. In a vendor’s perfect world all application servers in a given enterprise are from the same vendor, which makes this lack of interoperability in a non-production environment an advantage. For us working in the real world, where not only do we not use a single vendor everywhere but have to make do with self-signed certificates for all but production, testing HTTPS between an IIS ASP.NET service provider and a WebLogic J2EE consumer application can be very frustrating to set up. It was for me, especially having found many blogs and discussion threads where various solutions were described but did not quite work and were all mostly similar but just a little bit different. To save both you and my future self (who always seems to forget the hardest-won lessons) all of the pain and suffering, I am recording the steps that finally worked here for reference and sanity.
    How You Know You Need This
    The first cold clutch of dread that tells you it is going to be a long day is when you attempt to consume a WSDL published by IIS from WebLogic over HTTPS and you see the following:
    <Jul 30, 2012 2:51:31 PM EDT> <Warning> <Security> <BEA-090477> <Certificate chain received from myserver.mydomain.com - 10.555.55.123 was not trusted causing SSL handshake failure.> weblogic.wsee.wsdl.WsdlException: Failed to read wsdl file from url due to -- javax.net.ssl.SSLKeyException: [Security:090477]Certificate chain received from myserver02.mydomain.com - 10.555.55.123 was not trusted causing SSL handshake failure.
    The above is what started a three day sojourn into searching for a solution. Even people who had solved it before would tell me how they did it, and then shrug when I demonstrated that the steps did not end in the success they claimed I would experience. Rather than torture you with the details of everything I did that did not work, here is what finally did work.
    Export the Certificates from IE
    First, take the offending WSDL URL and paste it into IE (if you have an internal Microsoft CA, you have IE, even if you don’t use it in favor of some other browser). To state the semi-obvious, if you received the error above there is a certificate configured for the IIS host of the service and the SSL port has been configured properly. Otherwise there would be a different error, usually about the site not being found or the connection failing. Once the WSDL loads, to the right of the address bar there will be a lock icon. Click the lock and then click View Certificates in the resulting dialog (if you do not have a lock icon but do have a Certificate Error message, see http://support.microsoft.com/kb/931850 for steps to install the certificate, then you can continue from the point of finding the lock icon).
    Figure 1: View Certificates in IE
    Next, select the Details tab in the resulting dialog.
    Figure 2: Use Certificate Details to Export Certificate
    Click Copy to File, then Next, then select the Base-64 encoded option for the format.
    Figure 3: Select the Base-64 encoded option for the format
    For the sake of simplicity, I chose to save this to the root of the WebLogic domain. It will work from anywhere, but later you will need to type in the full path rather than just the certificate name if you save it elsewhere.
Figure 4: Browse to Save Location
Figure 5: Save the Certificate to the Domain Root for Convenience
This is the point where I ran into some confusion. Some articles mentioned exporting the entire chain of certificates. This supposedly works for some types of certificates, or if you have a few other tools and the time to learn them. The SSL experts out there already have these tools, know how to use them well, and should not be wasting their time reading this article meant for folks who just want to get things wired up and back to unit testing and development. For the rest of us, the easiest way to make sure things will work is to just export all the links in the chain individually and let WebLogic Server worry about re-assembling them into a chain (which it does quite nicely). While perhaps not the most elegant solution, the multi-step process is easy to repeat and uses only tools that are immediately available and require no learning curve. So…
Next, go to Tools, then Internet Options, then the Content tab, and click Certificates. Go to the Trusted Root Certification Authorities tab and find the certificate root for your Microsoft CA cert (look for the Issuer of the certificate you exported earlier).
Figure 6: Trusted Root Certification Authorities Tab
Export this one the same way as before, with a different name.
Figure 7: Use a Unique Name for Each Certificate
Repeat this once more for the Intermediate Certification Authorities tab.
Import the Certificates to the WebLogic Domain
Now, open a command prompt, navigate to [WEBLOGIC_DOMAIN_ROOT]\bin and execute setDomainEnv. You should then be in the root of the domain. If not, CD to the domain root. Assuming you saved the certificate in the domain root, execute the following:
keytool -importcert -alias [ALIAS-1] -trustcacerts -file [FULL PATH TO .CER 1] -keystore truststore.jks -storepass [PASSWORD]
An example with the variables filled in is:
keytool -importcert -alias IIS-1 -trustcacerts -file microsftcert.cer -keystore truststore.jks -storepass password
After several lines of output you will be prompted with:
Trust this certificate? [no]:
The correct answer is ‘yes’ (minus the quotes, of course). You’ll know you were successful if the response is:
Certificate was added to keystore
If not, check your typing, as that is generally the source of an error at this point. Repeat this for all three of the certificates you exported, changing the [ALIAS-1] and [FULL PATH TO .CER 1] values each time. For example:
keytool -importcert -alias IIS-1 -trustcacerts -file microsftcert.cer -keystore truststore.jks -storepass password
keytool -importcert -alias IIS-2 -trustcacerts -file microsftcertRoot.cer -keystore truststore.jks -storepass password
keytool -importcert -alias IIS-3 -trustcacerts -file microsftcertIntermediate.cer -keystore truststore.jks -storepass password
In the above we created a new JKS keystore. You can re-use an existing one by changing the name of the JKS file to one you already have and changing the password to the one that matches that JKS file. For the DemoTrust.jks that is included with WebLogic the password is DemoTrustKeyStorePassPhrase.
An example here would be:
keytool -importcert -alias IIS-1 -trustcacerts -file microsoft.cer -keystore DemoTrust.jks -storepass DemoTrustKeyStorePassPhrase
keytool -importcert -alias IIS-2 -trustcacerts -file microsoftRoot.cer -keystore DemoTrust.jks -storepass DemoTrustKeyStorePassPhrase
keytool -importcert -alias IIS-3 -trustcacerts -file microsoftInter.cer -keystore DemoTrust.jks -storepass DemoTrustKeyStorePassPhrase
Whichever keystore you use, you can check your work with:
keytool -list -keystore truststore.jks -storepass password
Where “truststore.jks” and “password” can be replaced appropriately if necessary. The output will look something like this:
Figure 8: Output from keytool -list -keystore
Update the WebLogic Keystore Configuration
If you used an existing keystore rather than creating a new one, you can restart your WebLogic Server and skip the rest of this section. For those of us who created a new one because those were the instructions we found online…
Next, we need to tell WebLogic to use the JKS file (truststore.jks) we just created. Log in to the WebLogic Server Administration Console and navigate to Servers > AdminServer > Configuration > Keystores. Scroll down to “Custom Trust Keystore:” and change the value to “truststore.jks”, and change the values of “Custom Trust Keystore Passphrase:” and “Confirm Custom Trust Keystore Passphrase:” to the password you used earlier, then save your changes. You will get a nice message similar to the following:
Figure 9: To Be Safe, Restart Anyways
The “No restarts are necessary” is somewhat of an exaggeration. If you want to be able to use the keystore you may need to restart the server(s). To save myself aggravation, I always do. Your mileage may vary.
Conclusion
That should get you there. If there are some erroneous steps included for your situation in particular, I will offer up a semi-apology, as the process described above does not take long at all, and even if there is one step that could be dropped from it, it is still much faster than trying to figure this out from other sources.

    Read the article

  • In Case You Weren’t There: Blogwell NYC

    - by Mike Stiles
    Your roving reporter roved out to another one of Socialmedia.org’s fantastic Blogwell events, this time in NYC. As Central Park and incredible weather beckoned, some of the biggest brand names in the world gathered to talk about how they’re incorporating social into marketing and CRM, as well as extending social across their entire organizations internally. Below we present a collection of the live tweets from many of the key sessions.
    GE @generalelectric - Jon Lombardo, Leader of Social Media COE
    How GE builds and extends emotional connections with consumers around health and reaps the benefits of increased brand equity in the process. GE has a social platform around Healthyimagination to create better health for people. If you and a friend are trying to get healthy together, you’ll do better. Health is inherently social. Get health challenges via Facebook and share with friends to achieve goals together. They’re creating an emotional connection around the health context. You don’t influence people at large. Your sphere of real influence is around 5-10 people. They find relevant conversations about health on Twitter and engage sounding like a friend, not a brand. Why would people share on behalf of a brand? Because you tapped into an activity and emotion they’re already having. To create better habits in health, GE gave away inexpensive, relevant gifts related to their goals. Create the context, give the relevant gift, get social acknowledgment for giving it. What you get when you get acknowledgment for your engagement and gift is user generated microcontent. GE got 12,000 unique users engaged and 1400 organic posts with the healthy gift campaign.
    The Dow Chemical Company @DowChemical - Abby Klanecky, Director of Digital & Social Media
    Learn how Dow Chemical is finding, training, and empowering their scientists to be their storytellers in social media. There are 1m jobs coming open in science. Only 200k are qualified for them. Dow Chemical wanted to use social to attract and talk to scientists. Dow Chemical decided to use real scientists as their storytellers. Scientists are incredibly passionate, the key ingredient of a great storyteller. Step 1 was getting scientists to focus on a few platforms: blog, Twitter, LinkedIn. Dow Chemical social flow is Core Digital Team - #CMs – ambassadors – advocates. The scientists were trained in social etiquette via practice scenarios. It’s not just about sales. It’s about growing influence and the business. Dow Chemical trained about 100 scientists, 55 are active and there’s a waiting list for the next sessions. In person social training produced faster results and better participation. Sometimes you have to tell pieces of the story instead of selling your execs on the whole vision.
    Social Media Ethics Briefing: Staying Out of Trouble - Andy Sernovitz, CEO @SocialMediaOrg
    How do we get people to share our message for us? We have to have their trust. The difference between being honest and being sleazy is disclosure. Disclosure does not hurt the effectiveness of your marketing.
No one will get mad if you tell them up front you’re a paid spokesperson for a company. It’s a legal requirement by the FTC, it’s the law, to disclose if you’re being paid for an endorsement. Require disclosure and truthfulness in all your social media outreach. Don’t lie to people. Monitor the conversation and correct misstatements. Create social media policies and training programs. If you want to stay safe, never pay cash for social media. Money changes everything. As soon as you pay, it’s not social media, it’s advertising. Disclosure, to the feds, means clear, conspicuous, and understandable to the average reader. This phrase will keep you in the clear: “I work for ___ and this is my personal opinion.” Who are you? Were you paid? Are you giving an honest opinion based on a real experience? You as a brand are responsible for what an agency or employee or contractor does on your behalf. SocialMedia.org makes available a Disclosure Best Practices Toolkit. Socialmedia.org/disclosure. The point is to not ethically mess up and taint social media as happened to e-mail. Not only is the FTC cracking down, so are Google and Facebook.
Visa @VisaNews - Lucas Mast, Senior Business Leader, Global Corporate Social Media
Visa built a mobile studio for the Olympics for execs and athletes. They wanted to do postcard-style real-time coverage of Visa’s Olympics sponsorships, and on a shoestring. Challenges included Olympic rules, difficulty getting interviews, time zone trouble, and resourcing. Another problem was they got bogged down with their own internal approval processes. Despite all the restrictions, they created and published a fair amount and variety of content. They amassed 1000+ views of videos posted to the Visa Communication YouTube channel. Less corporate content yields more interest from media outlets and bloggers. They did real world video demos of how their products work in the field vs. an exec doing a demo in a studio. Don’t make exec interview videos dull and corporate. Keep answers short, shoot it in an interesting place, do takes until they’re comfortable and natural. Not everything will work. Not everything will get a retweet. But like the lottery, you can’t win if you don’t play. Promoting content is as important as creating it.
McGraw-Hill Companies @McGrawHillCos - Patrick Durando, Senior Director of Global New Media
McGraw-Hill has 26,000 employees. McGraw-Hill created a social intranet called Buzz. Intranets create operational efficiency, help product dev, facilitate crowdsourcing, and break down geo silos. Intranets help with talent development, acquisition, retention. They replaced the corporate directory with their own version of LinkedIn. The company intranet has really cut down on the use of email. Long email threads become organized, permanent social discussions. The intranet is particularly useful in HR for researching and getting answers surrounding benefits and policies. Using a profile on your company intranet can establish and promote your internal professional brand. If you’re going to make an intranet, it has to look great, work great, and employees are going to have to want to go there. You can’t order them to like it.

    Read the article

  • Premature-Optimization and Performance Anxiety

    - by James Michael Hare
    While writing my post analyzing the new .NET 4 ConcurrentDictionary class (here), I fell into one of the classic blunders that I myself always love to warn about.  After analyzing the differences of time between a Dictionary with locking versus the new ConcurrentDictionary class, I noted that the ConcurrentDictionary was faster with read-heavy multi-threaded operations.  Then, I made the classic blunder of thinking that because the original Dictionary with locking was faster for those write-heavy uses, it was the best choice for those types of tasks.  In short, I fell into the premature-optimization anti-pattern. Basically, the premature-optimization anti-pattern is when a developer is coding very early for a perceived (whether rightly-or-wrongly) performance gain and sacrificing good design and maintainability in the process.  At best, the performance gains are usually negligible and at worst, can either negatively impact performance, or can degrade maintainability so much that time to market suffers or the code becomes very fragile due to the complexity. Keep in mind the distinction above.  I'm not talking about valid performance decisions.  There are decisions one should make when designing and writing an application that are valid performance decisions.  Examples of this are knowing the best data structures for a given situation (Dictionary versus List, for example) and choosing performance algorithms (linear search vs. binary search).  But these in my mind are macro optimizations.  The error is not in deciding to use a better data structure or algorithm, the anti-pattern as stated above is when you attempt to over-optimize early on in such a way that it sacrifices maintainability. In my case, I was actually considering trading the safety and maintainability gains of the ConcurrentDictionary (no locking required) for a slight performance gain by using the Dictionary with locking.  This would have been a mistake as I would be trading maintainability (ConcurrentDictionary requires no locking which helps readability) and safety (ConcurrentDictionary is safe for iteration even while being modified and you don't risk the developer locking incorrectly) -- and I fell for it even when I knew to watch out for it.  I think in my case, and it may be true for others as well, a large part of it was due to the time I was trained as a developer.  I began college in in the 90s when C and C++ was king and hardware speed and memory were still relatively priceless commodities and not to be squandered.  In those days, using a long instead of a short could waste precious resources, and as such, we were taught to try to minimize space and favor performance.  This is why in many cases such early code-bases were very hard to maintain.  I don't know how many times I heard back then to avoid too many function calls because of the overhead -- and in fact just last year I heard a new hire in the company where I work declare that she didn't want to refactor a long method because of function call overhead.  Now back then, that may have been a valid concern, but with today's modern hardware even if you're calling a trivial method in an extremely tight loop (which chances are the JIT compiler would optimize anyway) the results of removing method calls to speed up performance are negligible for the great majority of applications.  Now, obviously, there are those coding applications where speed is absolutely king (for example drivers, computer games, operating systems) where such sacrifices may be made.  
But I would strongly advise against such optimization because of its cost.  Many folks that are performing an optimization think it's always a win-win.  That they're simply adding speed to the application, what could possibly be wrong with that?  What they don't realize is the cost of their choice.  For every piece of straight-forward code that you obfuscate with performance enhancements, you risk introducing bugs and adding to the long-term technical debt of the application.  It will become so fragile over time that maintenance will become a nightmare.  I've seen such applications in places I have worked.  There are times I've seen applications where the designer was so obsessed with performance that they even designed their own memory management system for their application to try to squeeze out every ounce of performance.  Unfortunately, the application stability often suffers as a result and it is very difficult for anyone other than the original designer to maintain. I've even seen this recently where I heard a C++ developer bemoaning that in VS2010 the iterators are about twice as slow as they used to be because Microsoft added range checking (probably as part of the 0x standard implementation).  To me this was almost a joke.  Twice as slow sounds bad, but it's almost never as bad as you think -- especially if you're gaining safety.  The only time twice is really that much slower is when once was too slow to begin with.  Think about it.  2 minutes is slow as a response time because 1 minute is slow.  But if an iterator takes 1 microsecond to move one position and a new, safer iterator takes 2 microseconds, this is trivial!  The only way you'd ever really notice this would be in iterating a collection just for the sake of iterating (i.e. no other operations).  To my mind, the added safety makes the extra time worth it. Always favor safety and maintainability when you can.  I know it can be a hard habit to break, especially if you started out your career early or in a language such as C where they are very performance conscious.  But in reality, these types of micro-optimizations only end up hurting you in the long run. Remember the two laws of optimization.  I'm not sure where I first heard these, but they are so true:
For beginners: Do not optimize.
For experts: Do not optimize yet.
This is so true.  If you're a beginner, resist the urge to optimize at all costs.  And if you are an expert, delay that decision.  As long as you have chosen the right data structures and algorithms for your task, your performance will probably be more than sufficient.  Chances are it will be network, database, or disk hits that will be your slow-down, not your code.  As they say, 98% of your code's bottleneck is in 2% of your code, so premature-optimization may add maintenance and safety debt for a gain that won't have any measurable impact.  Instead, code for maintainability and safety, and then, and only then, when you find a true bottleneck, you should go back and optimize further.
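To put some code behind the ConcurrentDictionary comparison that kicked off this post, here is a minimal sketch (a made-up hit counter, not the benchmark code) of the two approaches. The locked Dictionary may win some write-heavy benchmarks, but every call site has to remember to take the lock; with ConcurrentDictionary the safety lives in the collection itself:

      using System.Collections.Generic;
      using System.Collections.Concurrent;

      public class HitCounters
      {
          // Hand-rolled locking: correctness depends on every reader and writer taking the lock.
          private readonly Dictionary<string, int> _lockedCounts = new Dictionary<string, int>();
          private readonly object _sync = new object();

          public void IncrementLocked(string page)
          {
              lock (_sync)
              {
                  int current;
                  _lockedCounts.TryGetValue(page, out current);
                  _lockedCounts[page] = current + 1;
              }
          }

          // ConcurrentDictionary (.NET 4): no explicit locking at the call site.
          private readonly ConcurrentDictionary<string, int> _concurrentCounts =
              new ConcurrentDictionary<string, int>();

          public void IncrementConcurrent(string page)
          {
              _concurrentCounts.AddOrUpdate(page, 1, (key, current) => current + 1);
          }
      }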

    Read the article

  • The Great Divorce

    - by BlackRabbitCoder
    I have a confession to make: I've been in an abusive relationship for more than 17 years now.  Yes, I am not ashamed to admit it, but I'm finally doing something about it. I met her in college, she was new and sexy and amazingly fast -- and I'd never met anything like her before.  Her style and her power captivated me and I couldn't wait to learn more about her.  I took a chance on her, and though I learned a lot from her -- and will always be grateful for my time with her -- I think it's time to move on. Her name was C++, and she so outshone my previous love, C, that any thoughts of going back evaporated in the heat of this new romance.  She promised me she'd be gentle and not hurt me the way C did.  She promised me she'd clean-up after herself better than C did.  She promised me she'd be less enigmatic and easier to keep happy than C was.  But I was deceived.  Oh sure, as far as truth goes, it wasn't a complete lie.  To some extent she was more fun, more powerful, safer, and easier to maintain.  But it just wasn't good enough -- or at least it's not good enough now. I loved C++, some part of me still does, it's my first-love of programming languages and I recognize its raw power, its blazing speed, and its improvements over its predecessor.  But with today's hardware, at speeds we could only dream to conceive of twenty years ago, that need for speed -- at the cost of all else -- has died, and that has left my feelings for C++ moribund. If I ever need to write an operating system or a device driver, then I might need that speed.  But 99% of the time I don't.  I'm a business-type programmer and chances are 90% of you are too, and even the ones who need speed at all costs may be surprised by how much you sacrifice for that.   That's not to say that I don't want my software to perform, and it's not to say that in the business world we don't care about speed or that our job is somehow less difficult or technical.  There's many times we write programs to handle millions of real-time updates or handle thousands of financial transactions or tracking trading algorithms where every second counts.  But if I choose to write my code in C++ purely for speed chances are I'll never notice the speed increase -- and equally true chances are it will be far more prone to crash and far less easy to maintain.  Nearly without fail, it's the macro-optimizations you need, not the micro-optimizations.  If I choose to write a O(n2) algorithm when I could have used a O(n) algorithm -- that can kill me.  If I choose to go to the database to load a piece of unchanging data every time instead of caching it on first load -- that too can kill me.  And if I cross the network multiple times for pieces of data instead of getting it all at once -- yes that can also kill me.  But choosing an overly powerful and dangerous mid-level language to squeeze out every last drop of performance will realistically not make stock orders process any faster, and more likely than not open up the system to more risk of crashes and resource leaks. And that's when my love for C++ began to die.  When I noticed that I didn't need that speed anymore.  That that speed was really kind of a lie.  Sure, I can be super efficient and pack bits in a byte instead of using separate boolean values.  Sure, I can use an unsigned char instead of an int.  But in the grand scheme of things it doesn't matter as much as you think it does.  The key is maintainability, and that's where C++ failed me.  
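    To pick on just one of the examples above, the cache-on-first-load point is nearly free in C#. A minimal sketch (the country-name table and its loader are made up purely for illustration) using Lazy<T>, which also happens to be thread-safe by default:

      using System;
      using System.Collections.Generic;

      public static class ReferenceData
      {
          // Loaded once, on first use, instead of a database round-trip on every request.
          private static readonly Lazy<IDictionary<string, string>> _countryNames =
              new Lazy<IDictionary<string, string>>(LoadCountryNamesFromDatabase);

          public static IDictionary<string, string> CountryNames
          {
              get { return _countryNames.Value; }
          }

          // Hypothetical loader; in real code this would be the single database hit.
          private static IDictionary<string, string> LoadCountryNamesFromDatabase()
          {
              return new Dictionary<string, string> { { "US", "United States" }, { "CA", "Canada" } };
          }
      }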
I like to tell the other developers I work with that there are two levels of correctness in coding: Is it immediately correct? Will it stay correct? That is, you can hack together any piece of code and make it correct to satisfy a task at hand, but if a new developer can't come in tomorrow and make a fairly significant change to it without jeopardizing that correctness, it won't stay correct. Some people laugh at me when I say I now prefer maintainability over speed.  But that is exactly the point.  If you focus solely on speed you tend to produce code that is much harder to maintain over the long haul, and that's a load of technical debt most shops can't afford to carry, and they end up completely scrapping code before its time.  When good code is written well for maintainability, though, it can be correct both now and in the future. And you know what the best part is?  My new love is nearly as fast as C++, and in some cases even faster -- and better than that, I know C# will treat me right.  Her creators have poured hundreds of thousands of hours of time into making her the sexy beast she is today.  They made her easy to understand and not an enigmatic mess.  They made her consistent and not moody and amorphous.  And they made her perform as fast as I care to go by optimizing her both at compile time and at run-time. Her code is so elegant and easy on the eyes that I'm not worried where she will run to or what she'll pull behind my back.  She is powerful enough to handle all my tasks, fast enough to execute them with blazing speed, maintainable enough so that I can rely on even fairly new peers to modify my work, and rich enough to allow me to satisfy any need.  C# doesn't ask me to clean up her messes!  She cleans up after herself and she tries to make my life easier for me by taking on most of those optimization tasks C++ asked me to take upon myself.  Now, there are many of you who would say that I am the cause of my own grief, that it was my fault C++ didn't behave because I didn't pay enough attention to her.  That I alone caused the pain she inflicted on me.  And to some extent, you have a point.  But she was so high maintenance, requiring me to know every twist and turn of her vast and unrestrained power, that any wrong turn or bout of forgetfulness was met with painful reminders that she wasn't going to watch my back when I made a mistake.  But C#, she loves me when I'm good, and she loves me when I'm bad, and together we make beautiful code that is both fast and safe. So that's why I'm leaving C++ behind.  She says she's changing for me, but I have no interest in what C++0x may bring.  Oh, I'll still keep in touch, and maybe I'll see her now and again when she brings her problems to my door and asks for some attention -- for I always have a soft spot for her, you see.  But she's out of my house now.  I have three kids and a dog and a cat, and all require me to clean up after them; why should I have to clean up after my programming language as well?
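To put the macro-optimization argument above in concrete terms, here is a minimal C# sketch of the "cache unchanging data on first load" idea; the repository class and the LoadCountriesFromDatabase call are hypothetical stand-ins, not code from the post itself:

using System;
using System.Collections.Generic;

// Hypothetical repository: caches unchanging reference data on first load
// instead of going back to the database on every request.
public class CountryRepository
{
    // Lazy<T> defers the expensive load until first use and is thread-safe by default.
    private static readonly Lazy<IReadOnlyList<string>> Countries =
        new Lazy<IReadOnlyList<string>>(LoadCountriesFromDatabase);

    public IReadOnlyList<string> GetCountries()
    {
        // Every call after the first is served from memory -- the macro-optimization
        // that matters far more than packing bits into a byte ever will.
        return Countries.Value;
    }

    private static IReadOnlyList<string> LoadCountriesFromDatabase()
    {
        // Placeholder for the real (slow) database query.
        return new List<string> { "Canada", "Germany", "India" };
    }
}

Lazy<T> keeps the sketch thread-safe without any explicit locking, which is exactly the maintainability-first choice the post argues for.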

    Read the article

  • Thoughts on Build 2013

    - by D'Arcy Lussier
    Originally posted on: http://geekswithblogs.net/dlussier/archive/2013/06/30/153294.aspxAnd so another Build conference has come to an end. Below are my thoughts/perspectives on various aspects of the event. I’ll do a separate blog post on my thoughts of the Build message for developers. The Good Moscone center was a great venue for Build! Easy to get around, easy to get to, and well maintained, it was a very comfortable conference venue. Yeah, the free swag was nice. Build has built up an expectation that attendees will always get something; it’ll be interesting to see how Microsoft maintains this expectation over the next few Build events. I still maintain that free swag should never be the main reason one attends an event, and for me this was definitely just an added bonus. I’m planning on trying to use the Surface as a dedicated 2nd device at work for meetings, I’ll share my experiences over the next few months. The hackathon event was a great idea, although personally I couldn’t justify spending the money on a conference registration just to spend the entire conference coding. Still, the apps that were created were really great and there was a lot of passion and excitement around the hackathon. I wonder if they couldn’t have had the hackathon on the Monday/Tuesday for those that wanted to participate so they didn’t miss any of the actual conference over Wed/Thurs. San Francisco was a great city to host Build. Getting from hotels to the conference center was very easy (well especially for me, I was only 3 blocks away) and the city itself felt very safe. However, if I never have to fly into SFO again I’ll be alright with that! Delays going into and out of SFO and both apparently were due to the airport itself. The Bad Build is one of those oddities on the conference landscape where people will pay to commit to attending an event without knowing anything about the sessions. We got our list of conference sessions when we registered on Tuesday, not before. And even then, we only got titles and not descriptions (those were eventually made available via the conference’s mobile application). I get it…they’re going to make announcements and they don’t want to give anything away through the session titles. But honestly, there wasn’t anything in the session titles that I would have considered a surprise. Breakfasts were brutal. High-carb pastries, donuts, and muffins with fruit and hard boiled eggs does not a conference breakfast make. I can’t believe that the difference between a continental breakfast per person and a hot breakfast buffet would have been a huge impact to a conference fee that was already around $2000. The vendor area was anemic. I don’t know why Microsoft forces the vendors into cookie-cutter booth areas (this year they were all made of plywood material). WPC, TechEd – booth areas there allow the vendors to be creative with their displays. Not so much for Build. Really odd was the lack of Microsoft’s own representation around Bing. In the day 1 keynote Microsoft made a big deal about Bing as an API. Yet there was nobody in the vendor area set up to provide more information or have discussions with about the Bing API. The Ugly Our name badges were NFC enabled. The purpose of this, beyond the vendors being able to scan your info, wasn’t really made clear. An attendee I talked to showed how you could get a reader app on your phone so you can scan other members cards and collect their contact info – which is a kewl idea; business cards are so 1990’s. 
But I was *shocked* at the amount of information that was on our name badges! Here’s what’s displayed on our name badge: - Name - Company - Twitter Handle I’m ok with that. But here’s what actually gets read: - Name - Company - Address Used for Registration - Phone Number Used for Registration So sharing that info with another attendee, they get way more of my info than just how to find me on Twitter! Microsoft, you need to fix this for the future. If vendors want to collect information on attendees, they should be able to collect an ID from the badge, then get a report with corresponding records afterwards. My personal information should not be so readily available, and without my knowledge! Final Verdict Maybe its my older age, maybe its where I’m at in life with family, maybe its where I’m at in my career, but when I consider whether a conference experience was valuable I get to the core reason I attend: opportunities to learn, opportunities to network, opportunities to engage with Microsoft. Opportunities to Learn:  Sessions I attended were generally OK, with some really stand out ones on Day 2. I would love to see Microsoft adopt the Dojo format for a portion of their sessions. Hands On Labs are dull, lecture style sessions are great for information sharing. But a guided hands-on coding session (Read: Dojo) provides the best of both worlds. Being that all content is publically available online to everyone (Build attendee or not), the value of attending the conference sessions is decreased. The value though is in the discussions that take part in person afterwards, which leads to… Opportunities to Network: I enjoyed getting together with old friends and connecting with Twitter friends in person for the first time. I also had an opportunity to meet total strangers. So from a networking perspective, Build was fantastic! I still think it would have been great to have an area for ad-hoc discussions – where speakers could announce they’d be available for more questions after their sessions, or attendees who wanted to discuss more in depth on a topic with other attendees could arrange space. Some people have no problems being outgoing and making these things happen, but others are not and a structured model is more attractive. Opportunities to Engage with Microsoft: Hit and miss on this one. Outside of the vendor area, unless you cornered or reached out to a speaker, there wasn’t any defined way to connect with blue badges. And as I mentioned above, Microsoft didn’t have full representation in the vendor area (no Bing). All in all, Build was a fun party where I was informed about some new stuff and got some free swag. Was it worth the time away from home and the hit to my PD budget? I’d say Somewhat. Build is a great informational conference, but I wouldn’t call it a learning conference. Considering that TechEd seems to be moving to more of an IT Pro focus, independent developer conferences seem to be the best value for those looking to learn and not just be informed. With the rapid development cycle Microsoft is embracing, we’re already seeing Build happening twice within a 12 month period. If that continues, the value of attending Build in person starts to diminish – especially with so much content available online. If Microsoft wants Build to be a must-attend event in the future, they need to start incorporating aspects of Tech Ed, past PDCs, and other conferences so those that want to leave with more than free swag have something to attract them.

    Read the article

  • How to configure TATA Photon+ EC1261 HUAWEI

    - by user3215
    I'm running Ubuntu 10.04. I have a newly purchased TATA Photon+ Internet connection which supports Windows and Mac. On the Internet I found an article saying that it could be configured on Linux. I followed the steps to install it on Ubuntu from this link. I am still not able to get online, and need some help. Also, it is very slow, but I was told that I would see speeds up to 3.1 Mbps. I don't have wvdial installed and cannot install it from apt as I'm not connected to the internet. Booting from Windows, I downloaded the "wvdial" .deb package and tried to install it on Ubuntu, but it ended with a dependency problem. Somehow -- I don't know how -- I got connected to the internet just once. I immediately installed the wvdial package, and after this I followed the tutorials (I could not browse and upload the files here). Since then it shows that the device is connected in the network connections, but there is no internet connection. Once I disable the device, it won't show as connected again and I'll have to restart my system. Sometimes the device itself is not detected (is there any command to re-scan all devices?). Output of wvdialconf /etc/wvdial.conf: #wvdialconf /etc/wvdial.conf Editing `/etc/wvdial.conf'. Scanning your serial ports for a modem. ttyS0<*1>: ATQ0 V1 E1 -- failed with 2400 baud, next try: 9600 baud ttyS0<*1>: ATQ0 V1 E1 -- failed with 9600 baud, next try: 115200 baud ttyS0<*1>: ATQ0 V1 E1 -- and failed too at 115200, giving up. Modem Port Scan<*1>: S1 S2 S3 WvModem<*1>: Cannot get information for serial port. ttyUSB0<*1>: ATQ0 V1 E1 -- failed with 2400 baud, next try: 9600 baud ttyUSB0<*1>: ATQ0 V1 E1 -- failed with 9600 baud, next try: 9600 baud ttyUSB0<*1>: ATQ0 V1 E1 -- and failed too at 115200, giving up. WvModem<*1>: Cannot get information for serial port. ttyUSB1<*1>: ATQ0 V1 E1 -- failed with 2400 baud, next try: 9600 baud ttyUSB1<*1>: ATQ0 V1 E1 -- failed with 9600 baud, next try: 9600 baud ttyUSB1<*1>: ATQ0 V1 E1 -- and failed too at 115200, giving up. WvModem<*1>: Cannot get information for serial port. ttyUSB2<*1>: ATQ0 V1 E1 -- OK ttyUSB2<*1>: ATQ0 V1 E1 Z -- OK ttyUSB2<*1>: ATQ0 V1 E1 S0=0 -- OK ttyUSB2<*1>: ATQ0 V1 E1 S0=0 &C1 -- OK ttyUSB2<*1>: ATQ0 V1 E1 S0=0 &C1 &D2 -- OK ttyUSB2<*1>: ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0 -- OK ttyUSB2<*1>: Modem Identifier: ATI -- Manufacturer: +GMI: HUAWEI TECHNOLOGIES CO., LTD ttyUSB2<*1>: Speed 9600: AT -- OK ttyUSB2<*1>: Max speed is 9600; that should be safe. ttyUSB2<*1>: ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0 -- OK Found a modem on /dev/ttyUSB2. Modem configuration written to /etc/wvdial.conf. ttyUSB2<Info>: Speed 9600; init "ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0" Output of wvdial: #wvdial --> WvDial: Internet dialer version 1.60 --> Cannot get information for serial port. --> Initializing modem. --> Sending: ATZ ATZ OK --> Sending: ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0 ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0 OK --> Sending: AT+CRM=1 AT+CRM=1 OK --> Modem initialized. --> Sending: ATDT#777 --> Waiting for carrier. ATDT#777 CONNECT --> Carrier detected. Starting PPP immediately. 
--> Starting pppd at Sat Oct 16 15:30:47 2010 --> Pid of pppd: 5681 --> Using interface ppp0 --> pppd: (u;[08]@s;[08]`{;[08] --> pppd: (u;[08]@s;[08]`{;[08] --> pppd: (u;[08]@s;[08]`{;[08] --> pppd: (u;[08]@s;[08]`{;[08] --> pppd: (u;[08]@s;[08]`{;[08] --> pppd: (u;[08]@s;[08]`{;[08] --> local IP address 14.96.147.104 --> pppd: (u;[08]@s;[08]`{;[08] --> remote IP address 172.29.161.223 --> pppd: (u;[08]@s;[08]`{;[08] --> primary DNS address 121.40.152.90 --> pppd: (u;[08]@s;[08]`{;[08] --> secondary DNS address 121.40.152.100 --> pppd: (u;[08]@s;[08]`{;[08] Output of log message /var/log/messages: Oct 16 15:29:44 avyakta-desktop pppd[5119]: secondary DNS address 121.242.190.180 Oct 16 15:29:58 desktop pppd[5119]: Terminating on signal 15 Oct 16 15:29:58 desktop pppd[5119]: Connect time 0.3 minutes. Oct 16 15:29:58 desktop pppd[5119]: Sent 0 bytes, received 177 bytes. Oct 16 15:29:58 desktop pppd[5119]: Connection terminated. Oct 16 15:30:47 desktop pppd[5681]: pppd 2.4.5 started by root, uid 0 Oct 16 15:30:47 desktop pppd[5681]: Using interface ppp0 Oct 16 15:30:47 desktop pppd[5681]: Connect: ppp0 <--> /dev/ttyUSB2 Oct 16 15:30:47 desktop pppd[5681]: CHAP authentication succeeded Oct 16 15:30:47 desktop pppd[5681]: CHAP authentication succeeded Oct 16 15:30:48 desktop pppd[5681]: local IP address 14.96.147.104 Oct 16 15:30:48 desktop pppd[5681]: remote IP address 172.29.161.223 Oct 16 15:30:48 desktop pppd[5681]: primary DNS address 121.40.152.90 Oct 16 15:30:48 desktop pppd[5681]: secondary DNS address 121.40.152.100 EDIT 1 : I tried the following sudo stop network-manager sudo killall modem-manager sudo /usr/sbin/modem-manager --debug > ~/mm.log 2>&1 & sudo /usr/sbin/NetworkManager --no-daemon > ~/nm.log 2>&1 & Output of mm.log: #vim ~/mm.log: ** Message: Loaded plugin Option High-Speed ** Message: Loaded plugin Option ** Message: Loaded plugin Huawei ** Message: Loaded plugin Longcheer ** Message: Loaded plugin AnyData ** Message: Loaded plugin ZTE ** Message: Loaded plugin Ericsson MBM ** Message: Loaded plugin Sierra ** Message: Loaded plugin Generic ** Message: Loaded plugin Gobi ** Message: Loaded plugin Novatel ** Message: Loaded plugin Nokia ** Message: Loaded plugin MotoC Output of nm.log: #vim ~/nm.log: NetworkManager: <info> starting... NetworkManager: <info> modem-manager is now available NetworkManager: SCPlugin-Ifupdown: init! NetworkManager: SCPlugin-Ifupdown: update_system_hostname NetworkManager: SCPluginIfupdown: guessed connection type (eth0) = 802-3-ethernet NetworkManager: SCPlugin-Ifupdown: update_connection_setting_from_if_block: name:eth0, type:802-3-ethernet, id:Ifupdown (eth0), uuid: 681b428f-beaf-8932-dce4-678ed5bae28e NetworkManager: SCPlugin-Ifupdown: addresses count: 1 NetworkManager: SCPlugin-Ifupdown: No dns-nameserver configured in /etc/network/interfaces NetworkManager: nm-ifupdown-connection.c.119 - invalid connection read from /etc/network/interfaces: (1) addresses NetworkManager: SCPluginIfupdown: management mode: unmanaged NetworkManager: SCPlugin-Ifupdown: devices added (path: /sys/devices/pci0000:00/0000:00:14.4/0000:02:02.0/net/eth1, iface: eth1) NetworkManager: SCPlugin-Ifupdown: device added (path: /sys/devices/pci0000:00/0000:00:14.4/0000:02:02.0/net/eth1, iface: eth1): no ifupdown configuration found. NetworkManager: SCPlugin-Ifupdown: devices added (path: /sys/devices/virtual/net/lo, iface: lo) @

    Read the article

  • Azure Diagnostics: The Bad, The Ugly, and a Better Way

    - by jasont
    If you’re a .Net web developer today, no doubt you’ve enjoyed watching Windows Azure grow up over the past couple of years. The platform has scaled, stabilized (mostly), and added on a slew of great (and sometimes overdue) features. What was once just an endpoint to host a solution, developers today have tremendous flexibility and options in the platform. Organizations are building new solutions and offerings on the platform, and others have, or are in the process of, migrating existing applications out of their own data centers into the Azure cloud. Whether new application development or migrating legacy, every development shop and IT organization needs to monitor their applications in the cloud, the same as they do on premises. Azure Diagnostics has some capabilities, but what I constantly hear from users is that it’s either (a) not enough, or (b) too cumbersome to set up. Today, Stackify is happy to announce that we fully support Azure deployments, just the same as your on-premises deployments. Let’s take a look below and compare and contrast the options. Azure Diagnostics Let’s crack open the Windows Azure documentation on Azure Diagnostics and see just how easy it is to use. The high level steps are:   Step 1: Import the Diagnostics Oh, I’ve already deployed my app without the diagnostics module. Guess I can’t do anything until I do this and re-deploy. Step 2: Configure the Diagnostics (and multiple sub-steps) Do I want it all? Or just pieces of it? Whoops, forgot to include a specific performance counter, I guess I’ll have to deploy again. Wait a minute… I have to specifically code these performance counters into my role’s OnStart() method, compile and deploy again? And query and consume it myself? Step 3: (Optional) Permanently store diagnostic data Lucky for me, Azure storage has gotten pretty cheap. But how often should I move the data into storage? I want to see real-time data, so I guess that’s out now as well. Step 4: (Optional) View stored diagnostic data Optional? Of course I want to see it. Conveniently, Microsoft recommends 3 tools to do this with. Un-conveniently, none of these are web based and they all just give you access to raw data, and very little charting or real-time intelligence. Just….. data. Nevermind that one product seems to have gotten stale since a recent acquisition, and doesn’t even have screenshots!   So, let’s summarize: lots of diagnostics data is available, but think realistically. Think Dev Ops. What happens when you are in the middle of a major production performance issue and you don’t have the diagnostics you need? You are redeploying an application (and thankfully you have a great branching strategy, so you feel perfectly safe just willy-nilly launching code into prod, don’t you?) to get data, then shipping it to storage, and then digging through that data to find a needle in a haystack. Would you like to be able to troubleshoot a performance issue in the middle of the night, or on a weekend, from your iPad or home computer’s web browser? Forget it: the best you get is this spark line in the Azure portal. If it’s real pointy, you probably have an issue; but since there is no alert based on a threshold your customers have likely already let you know. And high CPU, Memory, I/O, or Network doesn’t tell you anything about where the problem is. The Better Way – Stackify Stackify supports application and server monitoring in real time, all through a great web interface. 
All of the things that Azure Diagnostics provides, Stackify provides for your on-premises deployments, and you don’t need to know ahead of time that you’ll need it. It’s always there, it’s always on. Azure deployments are essentially no different than on-premises. It’s a Windows Server (or Linux) in the cloud. It’s behind a different firewall than your corporate servers. That’s it. Stackify can provide the same powerful tools to your Azure deployments in two simple steps. Step 1 Add a startup task to your web or worker role and deploy. If you can’t deploy and need it right now, no worries! Remote Desktop to the Azure instance and you can execute a Powershell script to download / install Stackify.   Step 2 Log in to your account at www.stackify.com and begin monitoring as much as you want, as often as you want and see the results instantly. WMI? It’s there Event Viewer? You’ve got it. File System Access? Yes, please! Would love to make sure my web.config is correct.   IIS / App Pool Info? Yep. You can even restart it. Running Services? All of them. Start and Stop them to your heart’s content. SQL Database access? You bet’cha. Alerts and Notification? Of course! You should know before your customers let you know. … and so much more.   Conclusion Microsoft has shown, consistently, that they love developers, developers, developers. What every developer needs to realize from this is that they’ve given you a canvas, which is exactly what Azure is. It’s great infrastructure that is readily available, easy to manage, and fairly cost effective. However, the tooling is your responsibility. What you get, at best, is bare bones. App and server diagnostics should be available when you need them. While we, as developers, try to plan for and think of everything ahead of time, there will come times where we need to get data that just isn’t available. And having to go through a lot of cumbersome steps to get that data, and then have to find a friendlier way to consume it…. well, that just doesn’t make a lot of sense to me. I’d rather spend my time writing and developing features and completing bug fixes for my applications, than to be writing code to monitor and diagnose.
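For readers who have not been through it, the "configure counters in OnStart()" step described above looks roughly like the sketch below against the classic Azure SDK diagnostics API; the specific counters, sample rates, and transfer period are illustrative assumptions rather than a recommended configuration:

using System;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Start from the default diagnostics configuration for this role instance.
        DiagnosticMonitorConfiguration config =
            DiagnosticMonitor.GetDefaultInitialConfiguration();

        // Each counter has to be declared explicitly; forget one and you redeploy.
        config.PerformanceCounters.DataSources.Add(new PerformanceCounterConfiguration
        {
            CounterSpecifier = @"\Processor(_Total)\% Processor Time",
            SampleRate = TimeSpan.FromSeconds(30)
        });
        config.PerformanceCounters.DataSources.Add(new PerformanceCounterConfiguration
        {
            CounterSpecifier = @"\Memory\Available MBytes",
            SampleRate = TimeSpan.FromSeconds(30)
        });

        // Nothing shows up until it is transferred to Azure storage on this schedule.
        config.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(5);

        DiagnosticMonitor.Start(
            "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);

        return base.OnStart();
    }
}

Every counter has to be known and coded up front, which is precisely the redeploy-to-get-data problem described above.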

    Read the article

  • Why We Should Learn to Stop Worrying and Love Millennials

    - by HCM-Oracle
    By Christine Mellon Much is said and written about the new generations of employees entering our workforce, as though they are a strange specimen, a mysterious life form to be “figured out,” accommodated and engaged – at a safe distance, of course.  At its worst, this talk takes a critical and disapproving tone, with baby boomer employees adamantly refusing to validate this new breed of worker, let alone determine how to help them succeed and achieve their potential.   The irony of our baby-boomer resentments and suspicions is that they belie the fact that we created the very vision that younger employees are striving to achieve.  From our frustrations with empty careers that did not fulfill us, from our opposition to “the man,” from our sharp memories of our parents’ toiling for 30 years just for the right to retire, from the simple desire not to live our lives in a state of invisibility, came the seeds of hope for something better. One characteristic of Millennial workers that grew from these seeds is the desire to experience as much as possible.  They are the “Experiential Employee”, with a passion for growing in diverse ways and expanding personal and professional horizons.  Rather than rooting themselves in a single company for a career, or even in a single career path, these employees are committed to building a broad portfolio of experiences and capabilities that will enable them to make a difference and to leave a mark of significance in the world.  How much richer is the organization that nurtures and leverages this inclination?  Our curmudgeonly ways must be surrendered and our focus redirected toward building the next generation of talent ecosystems, if we are to optimize what future generations have to offer.   Accelerating Professional Development In spite of our Boomer grumblings about Millennials’ “unrealistic” expectations, the truth is that we have a well-matched set of circumstances.  We have executives-in-waiting who want to learn quickly and a concurrent, urgent need to ramp up their development time, based on anticipated high levels of retirement in the next 10+ years.  Since we need to rapidly skill up these heirs to the corporate kingdom, isn’t it a fortunate coincidence that they are hungry to learn, develop and move fluidly throughout our organizations??  So our challenge now is to efficiently operationalize the wisdom we have acquired about effective learning and development.   We have already evolved from classroom-based models to diverse instructional methods.  The next step is to find the best approaches to help younger employees learn quickly and apply new learnings in an impactful way.   Creating temporary or even permanent functional partnerships among Millennial employees is one way to maximize outcomes.  This might take the form of 2 or more employees owning aspects of what once fell under a single role.  While one might argue this would mean duplication of resources, it could be a short term cost while employees come up to speed.  And the potential benefits would be numerous:  leveraging and validating the inherent sense of community of new generations, creating cross-functional skills with broad applicability, yielding additional perspectives and approaches to traditional work outcomes, and accelerating the performance curve for incumbents through Cooperative Learning (Johnson, D. and Johnson R., 1989, 1999).  
This well-researched teaching strategy, where students support each other in the absorption and application of new information, has been shown to deliver faster, more efficient learning, and greater retention. Alternately, perhaps short term contracts with exiting retirees, or former retirees, to help facilitate the development of following generations may have merit.  Again, a short term cost, certainly.  However, the gains realized in shortening the learning curve, and strengthening engagement are substantial and lasting. Ultimately, there needs to be creative thinking applied for each organization on how to accelerate the capabilities of our future leaders in unique ways that mesh with current culture. The manner in which performance is evaluated must finally shift as well.  Employees will need to be assessed on how well they have developed key skills and capabilities vs. end-to-end mastery of functional positions they have no interest in keeping for an entire career. As we become more comfortable in placing greater and greater weight on competencies vs. tasks, we will realize increased organizational agility via this new generation of workers, which will be further enhanced by their natural flexibility and appetite for change. Revisiting Succession  For many years, organizations have failed to deliver desired succession planning outcomes.  According to CEB’s 2013 research, only 28% of current leaders were pre-identified in a succession plan. These disappointing results, along with the entrance of the experiential, Millennial employee into the workforce, may just provide the needed impetus for HR to reinvent succession processes.   We have recognized that the best professional development efforts are not always linear, and the time has come to fully adopt this philosophy in regard to succession as well.  Paths to specific organizational roles will not look the same for newer generations who seek out unique learning opportunities, without consideration of a singular career destination.  Rather than charting particular jobs as precursors for key positions, the experiences and skills behind what makes an incumbent successful must become essential in succession mapping.  And the multitude of ways in which those experiences and skills may be acquired must be factored into the process, along with the individual employee’s level of learning agility. While this may seem daunting, it is necessary and long overdue.  We have talked about the criticality of competency-based succession, however, we have not lived up to our own rhetoric.  Many Boomers have experienced the same frustration in our careers; knowing we are capable of shining in a particular role, but being denied the opportunity due to how our career history lined up, on paper, with documented job requirements.  These requirements usually emphasized past jobs/titles and specific tasks, versus capabilities, drive and willingness (let alone determination) to learn new things.  How satisfying would it be for us to leave a legacy where such narrow thinking no longer applies and potential is amplified? Realizing Diversity Another bloom from the seeds we Boomers have tried to plant over the past decades is a completely evolved view of diversity.  Millennial employees assume a diverse workforce, and are startled by anything less.  Their social tolerance, nurtured by wide and diverse networks, is unprecedented.  College graduates expect a similar landscape in the “real world” to what they experienced throughout their lives.  
They appreciate and seek out divergent points of view and experiences without needing any persuasion.  The face of our U.S. workforce will likely see dramatic change as Millennials apply their fresh take on hiring and building strong teams, with an inherent sense of inclusion.  This wonderful aspect of the Millennial wave should be celebrated and strongly encouraged, as it is the fulfillment of our own aspirations. Future Perfect The Experiential Employee is operating more as a free agent than a long term player, and their commitment will essentially last as long as meaningful organizational culture and personal/professional opportunities keep their interest.  As Boomers, we have laid the foundation for this new, spirited employment attitude, and we should take pride in knowing that.  Generations to come will challenge organizations to excel in how they identify, manage and nurture talent. Let’s support and revel in the future that we’ve helped invent, rather than lament what we think has been lost.  After all, the future is always connected to the past.  And as so eloquently phrased by Antoine Lavoisier, French nobleman, chemist and politico:  “Nothing is Lost, Nothing is Created, and Everything is Transformed.” Christine has over 25 years of diverse HR experience.  She has held HR consulting and corporate roles, including CHRO positions for Echostar in Denver, a 6,000+ employee global engineering firm, and Aepona, a startup software firm, successfully acquired by Intel. Christine is a resource to Oracle clients, to assist in Human Capital Management strategy development and implementation, compensation practices, talent development initiatives, employee engagement, global HR management, and integrated HR systems and processes that support the full employee lifecycle. 

    Read the article

  • Moving from Tortoise to TFS

    - by MarkPearl
    The Past A few years ago my small software company made the jump from storing code on a shared folder to source code control. At the time we had evaluated a few of the options and settled on Tortoise SVN. The main motivation for going the SVN route was that we found a great plugin for Visual Studio that allowed us to avoid the command prompt for uploading changes (like I said we are windows programmers… command prompt bad!! ) and it was free. Up to now we have been pretty happy with SVN as it removed many of the worries that I had about how safe my code was on a shared folder and also gave us the opportunity to safely have several developers work on the same project at the same time. The only times when we have been unhappy has been when we have had SVN hell days – which pretty much occur when you are doing something out of the norm and suddenly SVN just won’t resolve conflicts or something along those lines. This happens once every 4 or 5 months and is not necessarily a problem caused directly by SVN – but a problem augmented by SVN. When you have SVN hell days you want to curse SVN! With that in mind I recently have been relooking at our source code control. I have explored using GIT and was very impressed by it and have also looked at TFS. From a source code control perspective I don’t want to get into a heated discussion on which one is better – but I do want to mention that I wear two hats in my organization – software developer & manager, and with the manager hat on I tend to sway the TFS route. So when I was given a coupon to test DiscountASP.Net Team Foundation Server Service for a year, I thought it was the perfect opportunity to try TFS in a distributed environment and also make the first step towards having an integrated development management system. Some of the things that appeal to me about DiscountASP’s offering are the following… Basic management / planning facilities like to do lists inside Visual Studio Daily backup of data on the server – we are developers, not IT managers and so the more of this I could outsource the better Distributed solution – all of us work remotely and so this was a big one as well. Registering and Setting Up with DiscountASP.NET The whole registration process was simple and intuitive. The web interface is not the most visually impressive one, but it is functional and a few seconds after I clicked the last submit button a email was sitting in my inbox giving me my control panel username and suggesting that I read the “Getting Started” article. The getting started article was easy to read and understand so no complaints there either. Next to set my dev environment to work. With a few references to the getting started article I had completed the whole setup process in a matter of minutes. Ten minutes after initiating the whole thing I was logged into VS2010 and creating my first TFS project. With the service that I signed up for, I have access for 5 users – which is sufficient for my internal needs. So from what I can tell, to set the rest of us up on the system I just need to supply them with their user credentials and url. My Concerns Resolved 1) Security So, a few concerns I had about the service. First and foremost – is it secure? I would hate for someone to get access to our code and the whole idea of putting it up on the internet is a concern for me. Turning to the Knowledge Base on the DiscountASP website this is one of the first question I can see answered. According to them it is secure. 
I have extracted their comment below regarding this. Our TFS hosting service is secure. We only accept HTTPS connections ensuring that any client-server data transmission is encrypted. At the network level, all of our systems are protected by multiple Juniper firewalls, Tipping Point's Intrusion Detection System (see Tipping Point's case study of our use here), and we also employ DDoS mitigation to add extra layers of security. Additionally, physical access to the servers is tightly restricted. Please see the security section of this Knowledge Base article for further details. 2) Web Portal Access The other big concern I have is regarding web portal access. In the ideal world I would like to be able to give my end users access to a web portal for reporting bugs etc. When I initially read through the FAQ of the site it mentioned that there was web portal access – but from what I can see this is just for “users”. Since I am limited to 5 users for the account, it would not be practical to set up external users that we could get feedback from on bugs etc. I would be interested if this is possible – and if so if someone could post it in the comments it would be much appreciated. If this isn’t possible, it is a slight let down as we rely heavily on end user feedback to get feedback and it would have been ideal to have gotten this within the service. Other than those two items, I didn’t have any real concerns that were unresolved. So where do I go from here? So time passed by from the initial writing of this post and as work whirred in and out of my inbox I have still not had a proper opportunity to give the service a test run. Recently though things have began to slow down and then surprise surprise I had another SVN Hell day. With that experience I had a new found resolve to get our team on TFS and so today we are going to start to use the service as a team. I am hoping that I do not have TFS hell days – but if I do, I will be sure to write about them. In short - the verdict is still out on whether this service is going to be invaluable to my business or whether it will create more headaches than it is worth BUT I am hopping it will be an invaluable service. I will only really be able to determine that in a few months… till then!

    Read the article

  • Learnings from trying to write better software: Loud errors from the very start

    - by theo.spears
    Microsoft made a very small number of backwards incompatible changes between .NET 1.1 and 2.0, because they wanted to make it as easy and safe as possible to port applications to the new runtime. (Here’s a list.) However, one thing they did change was what happens when a background thread fails with an unhandled exception - in .NET 1.1 nothing happened, the thread terminated, and the application continued oblivious. Try the same trick in .NET 2.0 and the entire application, including all threads, will rudely terminate. There are three reasons for this. Firstly, if a background thread has crashed, it may have left the entire application in an inconsistent state, in a way that will affect other threads. It’s better to terminate the entire application than continue and have the application perform actions based on a broken state, for example take customer orders, or write corrupt files to disk.  Secondly, during software development, it is far better for errors to be loud and obtrusive. Even if you have unit tests and integration tests (and you should), a key part of ensuring software works properly is to actually try using it, both through systematic testing and through the casual use all software gets from its developers. Subtle errors are easy to miss if you are not actually doing real work using the application; loud errors are obvious. Thirdly, and most importantly, even if catching and swallowing exceptions indiscriminately doesn't cause any problems in your application, the presence of unexpected exceptions shows you do not fully understand the behavior of your code. The currently released version of your application may be absolutely correct. However, because your mental model of the behavior is wrong, any future change you make to the program could and probably will introduce critical errors.  This applies to more than just exceptions causing threads to exit: any unexpected state should make the application blow up in an un-ignorable way. The worst thing you can do is silently swallow errors and continue. And let's be clear, writing to a log file does not count as blowing up in an un-ignorable way.  This is all simple as long as the call stack only contains your code, but when your functions start to be called by third party or .NET framework code, it's surprisingly easy for exceptions to start vanishing. Let's look at two examples.   1. Windows forms drag drop events  Usually if you throw an exception from a winforms event handler it will bring up the "application has crashed" dialog with abort and continue options. This is a good default behavior - the error is big and loud, but it is possible for the user to ignore the error and hopefully save their data, if somehow this bug makes it past testing. However, drag and drop events are different - throw an exception from one of these and it will just be silently swallowed with no explanation.  By the way, it's not just drag and drop events. Timer events do it too.  You can research how exceptions are treated in different handlers and code appropriately, but the safest and most user friendly approach is to always catch exceptions in your event handlers and show your own error message. I'll talk about one good approach to handling these exceptions at the end of this post.   2. SSMS integration for SQL Tab Magic  A while back I wrote an SSMS add-in called SQL Tab Magic (learn more about the process here). It works by listening to certain SSMS events and remembering what documents are opened and closed. 
I deployed it internally and it was used for a few months by a number of people without problems, so I was reasonably confident in its quality. Before releasing I made a few cleanups, including introducing error reporting. Bam. A few days later I was looking at over 1,000 error reports in my inbox. It turns out I wasn't handling table designers properly. The exceptions were there, but again SSMS was helpfully swallowing them all for me, so I was blissfully unaware. Had I made my errors loud from the start, I would have noticed these issues long before and fixed them.   Handling exceptions  Now that you are systematically catching exceptions throughout your application, you need to do something with them. I've tried 3 options: log them, alert the user, and automatically send them home.  There are a few good options for logging in .NET. The most widespread is Apache log4net, which provides a very capable and configurable logging framework. There is also NLog, which has a compatible interface, with a greater emphasis on fluent rather than XML configuration.  Alerting the user serves two purposes. Firstly, it means they understand their action has failed, so they don't just assume it worked (silent file copy failure is a problem if you then delete the originals) or that they should keep waiting for a background task to complete. Secondly, it means the users can report the bug to your support team, and then you can fix it. This means the message you show the user should contain the information you need as a developer to identify and fix it. And the user will probably just send you a screenshot of the dialog, so it shouldn't be hidden by scroll bars.  This leads us to the third option, automatically sending error reports home. By automatic I mean with minimal effort on the part of the user, rather than doing it silently behind their backs. The advantage of this is you can send back far more detailed and precise information than you can expect a user to include in an email, and by making it easier to report errors, you make it more likely users will do so.  We do this using a great tool called SmartAssembly (full disclosure: this is a product made by Red Gate). It captures complete stack traces including the values of all local variables and then allows the user to send all this information back with a single click. We also capture log files to help understand what led up to the error. We then use the free SmartAssembly Sync for Jira to dedupe these reports and raise them as bugs in our bug tracking system.  The combined effect of loud errors during development and then automatic error reporting once software is deployed allows us to find and fix more bugs, correct misunderstandings on how our software works, and overall is a key piece in delivering higher quality software. However, it is no substitute for having motivated, cunning testers in the building - and we're looking to hire more of those too.   If you found this post interesting you should follow me on twitter.  
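By way of illustration, here is a minimal sketch of the "catch in the handler, log it, and show your own message" approach discussed above, using log4net; the form, the drag-drop handler, and the message text are hypothetical examples rather than code from SQL Tab Magic:

using System;
using System.Windows.Forms;
using log4net;

public class ImportForm : Form
{
    private static readonly ILog Log = LogManager.GetLogger(typeof(ImportForm));

    public ImportForm()
    {
        AllowDrop = true;
        DragDrop += ImportForm_DragDrop; // guarded handler wired up explicitly
    }

    // Drag-drop (and timer) handlers swallow exceptions silently, so guard them yourself.
    private void ImportForm_DragDrop(object sender, DragEventArgs e)
    {
        try
        {
            ProcessDroppedFiles(e);
        }
        catch (Exception ex)
        {
            // Log the full details for the support team, then tell the user what failed.
            Log.Error("Drag-drop import failed", ex);
            MessageBox.Show(this,
                "The dropped files could not be imported.\n\n" + ex,
                "Import failed", MessageBoxButtons.OK, MessageBoxIcon.Error);
        }
    }

    private void ProcessDroppedFiles(DragEventArgs e)
    {
        // Placeholder for the real import logic.
        throw new NotImplementedException();
    }
}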

    Read the article

  • SQL Server Table Polling by Multiple Subscribers

    - by Daniel Hester
    Background Designing Stored Procedures that are safe for multiple subscribers (to call simultaneously) can be challenging.  For example let’s say that you want multiple worker processes to poll a shared work queue that’s encapsulated as a SQL Table. This is a common scenario and through experience you’ll find that you want to use Table Hints to prevent unwanted locking when performing simultaneous queries on the same table. There are three table hints to consider: NOLOCK, READPAST and UPDLOCK. Both NOLOCK and READPAST table hints allow you to SELECT from a table without placing a LOCK on that table. However, SELECTs with the READPAST hint will ignore any records that are locked due to being updated/inserted (or otherwise “dirty”), whereas a SELECT with NOLOCK ignores all locks including dirty reads. For the initial update of the flag (that marks the record as available for subscription) I don’t use the NOLOCK Table Hint because I want to be sensitive to the “active” records in the table and I want to exclude them.  I use an Update Lock (UPDLOCK) in conjunction with a WHERE clause that uses a sub-select with a READPAST Table Hint in order to explicitly lock the records I’m updating (UPDLOCK) but not place a lock on the table when selecting the records that I’m going to update (READPAST). UPDATES should be allowed to lock the rows affected because we’re probably changing a flag on a record so that it is not included in a SELECT from another subscriber. On the UPDATE statement we should explicitly use the UPDLOCK to guard against lock escalation. A SELECT to check for the next record(s) to process can result in a shared read lock being held by more than one subscriber polling the shared work queue (SQL table). It is expected that more than one worker process (or server) might try to process the same new record(s) at the same time. When each process then tries to obtain the update lock, none of them can because another process has a shared read lock in place. Thus without the UPDLOCK hint the result would be a lock escalation deadlock; however with the UPDLOCK hint this condition is mitigated against. Note that using the READPAST table hint requires that you also set the ISOLATION LEVEL of the transaction to be READ COMMITTED (rather than the default of SERIALIZABLE). Guidance In the Stored Procedure that returns records to the multiple subscribers: Perform the UPDATE first. Change the flag that makes the record available to subscribers.  Additionally, you may want to update a LastUpdated datetime field in order to be able to check for records that “got stuck” in an intermediate state or for other auditing purposes. In the UPDATE statement use the (UPDLOCK) Table Hint on the UPDATE statement to prevent lock escalation. In the UPDATE statement also use a WHERE Clause that uses a sub-select with a (READPAST) Table Hint to select the records that you’re going to update. In the UPDATE statement use the OUTPUT clause in conjunction with a Temporary Table to isolate the record(s) that you’ve just updated and intend to return to the subscriber. This is the fastest way to update the record(s) and to get the records’ identifiers within the same operation. Finally do a set-based SELECT on the main Table (using the Temporary Table to identify the records in the set) with either a READPAST or NOLOCK table hint.  
Use NOLOCK if there are other processes (besides the multiple subscribers) that might be changing the data that you want to return to the multiple subscribers; or use READPAST if you're sure there are no other processes (besides the multiple subscribers) that might be updating column data in the table for other purposes (e.g. changes to a person’s last name).  NOLOCK is generally the better fit in this part of the scenario. See the following as an example: CREATE PROCEDURE [dbo].[usp_NewCustomersSelect] AS BEGIN -- OVERRIDE THE DEFAULT ISOLATION LEVEL SET TRANSACTION ISOLATION LEVEL READ COMMITTED -- SET NOCOUNT ON SET NOCOUNT ON -- DECLARE TEMP TABLE -- Note that this example uses CustomerId as an identifier; -- you could just use the Identity column Id if that’s all you need. DECLARE @CustomersTempTable TABLE ( CustomerId NVARCHAR(255) ) -- PERFORM UPDATE FIRST -- [Customers] is the name of the table -- [Id] is the Identity Column on the table -- [CustomerId] is the business document key used to identify the -- record globally, i.e. in other systems or across SQL tables -- [Status] is INT or BIT field (if the status is a binary state) -- [LastUpdated] is a datetime field used to record the time of the -- last update UPDATE [Customers] WITH (UPDLOCK) SET [Status] = 1, [LastUpdated] = GETDATE() OUTPUT [INSERTED].[CustomerId] INTO @CustomersTempTable WHERE ([Id] = (SELECT TOP 100 [Id] FROM [Customers] WITH (READPAST) WHERE ([Status] = 0) ORDER BY [Id] ASC)) -- PERFORM SELECT FROM ENTITY TABLE SELECT [C].[CustomerId], [C].[FirstName], [C].[LastName], [C].[Address1], [C].[Address2], [C].[City], [C].[State], [C].[Zip], [C].[ShippingMethod], [C].[Id] FROM [Customers] AS [C] WITH (NOLOCK), @CustomersTempTable AS [TEMP] WHERE ([C].[CustomerId] = [TEMP].[CustomerId]) END In a system that has been designed to have multiple status values for records that need to be processed in the Work Queue it is necessary to have a “Watch Dog” process by which “stale” records in intermediate states (such as “In Progress”) are detected, i.e. a [Status] of 0 = New or Unprocessed; a [Status] of 1 = In Progress; a [Status] of 2 = Processed; etc.. Thus, if you have a business rule that states that the application should only process new records if all of the old records have been processed successfully (or marked as an error), then it will be necessary to build a monitoring process to detect stalled or stale records in the Work Queue, hence the use of the LastUpdated column in the example above. The Status field along with the LastUpdated field can be used as the criteria to detect stalled / stale records. It is possible to put this watchdog logic into the stored procedure above, but I would recommend making it a separate monitoring function. In writing the stored procedure that checks for stale records I would recommend using the same kind of lock semantics as suggested above. 
The example below looks for records that have been in the “In Progress” state ([Status] = 1) for greater than 60 seconds: CREATE PROCEDURE [dbo].[usp_NewCustomersWatchDog] AS BEGIN -- TO OVERRIDE THE DEFAULT ISOLATION LEVEL SET TRANSACTION ISOLATION LEVEL READ COMMITTED -- SET NOCOUNT ON SET NOCOUNT ON DECLARE @MaxWait int; SET @MaxWait = 60 IF EXISTS (SELECT 1 FROM [dbo].[Customers] WITH (READPAST) WHERE ([Status] = 1) AND (DATEDIFF(s, [LastUpdated], GETDATE()) > @MaxWait)) BEGIN SELECT 1 AS [IsWatchDogError] END ELSE BEGIN SELECT 0 AS [IsWatchDogError] END END Downloads The zip file below contains two SQL scripts: one to create a sample database with the above stored procedures and one to populate the sample database with 10,000 sample records.  I am very grateful to Red-Gate software for their excellent SQL Data Generator tool which enabled me to create these sample records in no time at all. References http://msdn.microsoft.com/en-us/library/ms187373.aspx http://www.techrepublic.com/article/using-nolock-and-readpast-table-hints-in-sql-server/6185492 http://geekswithblogs.net/gwiele/archive/2004/11/25/15974.aspx http://grounding.co.za/blogs/romiko/archive/2009/03/09/biztalk-sql-receive-location-deadlocks-dirty-reads-and-isolation-levels.aspx
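As a companion to the stored procedures above, a subscriber process polling the queue might look roughly like the following C# sketch; the connection string, polling interval, and console output are illustrative assumptions:

using System;
using System.Data;
using System.Data.SqlClient;
using System.Threading;

public static class CustomerQueueWorker
{
    // Hypothetical connection string -- every subscriber runs this same loop.
    private const string ConnectionString =
        "Server=.;Database=SampleQueue;Integrated Security=true";

    public static void PollForever()
    {
        while (true)
        {
            using (var connection = new SqlConnection(ConnectionString))
            using (var command = new SqlCommand("dbo.usp_NewCustomersSelect", connection))
            {
                command.CommandType = CommandType.StoredProcedure;
                connection.Open();

                // The UPDLOCK/READPAST semantics inside the procedure ensure that two
                // subscribers calling at the same moment never claim the same rows.
                using (SqlDataReader reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        string customerId = reader.GetString(reader.GetOrdinal("CustomerId"));
                        Console.WriteLine("Processing customer {0}", customerId);
                        // Real work (and eventually marking [Status] = 2 = Processed) goes here.
                    }
                }
            }

            Thread.Sleep(TimeSpan.FromSeconds(5)); // simple fixed-interval poll
        }
    }
}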

    Read the article

  • Benchmarking MySQL Replication with Multi-Threaded Slaves

    - by Mat Keep
    The objective of this benchmark is to measure the performance improvement achieved when enabling the Multi-Threaded Slave enhancement delivered as part of MySQL 5.6. As the results demonstrate, Multi-Threaded Slaves delivers 5x higher replication performance based on a configuration with 10 databases/schemas. For real-world deployments, higher replication performance directly translates to: · Improved consistency of reads from slaves (i.e. reduced risk of reading "stale" data) · Reduced risk of data loss should the master fail before replicating all events in its binary log (binlog) The multi-threaded slave splits processing between worker threads based on schema, allowing updates to be applied in parallel, rather than sequentially. This delivers benefits to those workloads that isolate application data using databases - e.g. multi-tenant systems deployed in cloud environments. Multi-Threaded Slaves are just one of many enhancements to replication previewed as part of the MySQL 5.6 Development Release, which include: · Global Transaction Identifiers coupled with MySQL utilities for automatic failover / switchover and slave promotion · Crash Safe Slaves and Binlog · Optimized Row Based Replication · Replication Event Checksums · Time Delayed Replication These and many more are discussed in the “MySQL 5.6 Replication: Enabling the Next Generation of Web & Cloud Services” Developer Zone article  Back to the benchmark - details are as follows. Environment The test environment consisted of two Linux servers: · one running the replication master · one running the replication slave. Only the slave was involved in the actual measurements, and was based on the following configuration: - Hardware: Oracle Sun Fire X4170 M2 Server - CPU: 2 sockets, 6 cores with hyper-threading, 2930 MHz. - OS: 64-bit Oracle Enterprise Linux 6.1 - Memory: 48 GB Test Procedure Initial Setup: Two MySQL servers were started on two different hosts, configured as replication master and slave. 10 sysbench schemas were created, each with a single table: CREATE TABLE `sbtest` (    `id` int(10) unsigned NOT NULL AUTO_INCREMENT,    `k` int(10) unsigned NOT NULL DEFAULT '0',    `c` char(120) NOT NULL DEFAULT '',    `pad` char(60) NOT NULL DEFAULT '',    PRIMARY KEY (`id`),    KEY `k` (`k`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 10,000 rows were inserted in each of the 10 tables, for a total of 100,000 rows. When the inserts had replicated to the slave, the slave threads were stopped. The slave data directory was copied to a backup location and the slave threads position in the master binlog noted. 10 sysbench clients, each configured with 10 threads, were spawned at the same time to generate a random schema load against each of the 10 schemas on the master. 
Each sysbench client executed 10,000 "update key" statements: UPDATE sbtest set k=k+1 WHERE id = <random row> In total, this generated 100,000 update statements to later replicate during the test itself. Test Methodology: The number of slave workers to test with was configured using: SET GLOBAL slave_parallel_workers=<workers> Then the slave IO thread was started and the test waited for all the update queries to be copied over to the relay log on the slave. The benchmark clock was started and then the slave SQL thread was started. The test waited for the slave SQL thread to finish executing the 100k update queries, doing "select master_pos_wait()". When master_pos_wait() returned, the benchmark clock was stopped and the duration calculated. The calculated duration from the benchmark clock should be close to the time it took for the SQL thread to execute the 100,000 update queries. The 100k queries divided by this duration gave the benchmark metric, reported as Queries Per Second (QPS). Test Reset: The test-reset cycle was implemented as follows: · the slave was stopped · the slave data directory replaced with the previous backup · the slave restarted with the slave threads replication pointer repositioned to the point before the update queries in the binlog. The test could then be repeated with identical set of queries but a different number of slave worker threads, enabling a fair comparison. The Test-Reset cycle was repeated 3 times for 0-24 number of workers and the QPS metric calculated and averaged for each worker count. MySQL Configuration The relevant configuration settings used for MySQL are as follows: binlog-format=STATEMENT relay-log-info-repository=TABLE master-info-repository=TABLE As described in the test procedure, the slave_parallel_workers setting was modified as part of the test logic. The consequence of changing this setting is: 0 worker threads:    - current (i.e. single threaded) sequential mode    - 1 x IO thread and 1 x SQL thread    - SQL thread both reads and executes the events 1 worker thread:    - sequential mode    - 1 x IO thread, 1 x Coordinator SQL thread and 1 x Worker thread    - coordinator reads the event and hands it to the worker who executes 2+ worker threads:    - parallel execution    - 1 x IO thread, 1 x Coordinator SQL thread and 2+ Worker threads    - coordinator reads events and hands them to the workers who execute them Results Figure 1 below shows that Multi-Threaded Slaves deliver ~5x higher replication performance when configured with 10 worker threads, with the load evenly distributed across our 10 x schemas. This result is compared to the current replication implementation which is based on a single SQL thread only (i.e. zero worker threads). Figure 1: 5x Higher Performance with Multi-Threaded Slaves The following figure shows more detailed results, with QPS sampled and reported as the worker threads are incremented. The raw numbers behind this graph are reported in the Appendix section of this post. Figure 2: Detailed Results As the results above show, the configuration does not scale noticably from 5 to 9 worker threads. When configured with 10 worker threads however, scalability increases significantly. The conclusion therefore is that it is desirable to configure the same number of worker threads as schemas. Other conclusions from the results: · Running with 1 worker compared to zero workers just introduces overhead without the benefit of parallel execution. · As expected, having more workers than schemas adds no visible benefit. 
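To make the measurement step above concrete, here is a rough C# sketch of the timed portion using MySQL Connector/NET; the connection string, binlog coordinates, and worker count are placeholder assumptions, and the sysbench load generation and test-reset steps are not shown:

using System;
using System.Diagnostics;
using MySql.Data.MySqlClient;

public static class SlaveApplyBenchmark
{
    public static void MeasureApplyRate()
    {
        // Placeholder slave connection details and the master binlog coordinates
        // noted before the 100,000 updates were generated on the master.
        const string connectionString = "Server=slave-host;Uid=root;Pwd=secret;";
        const string binlogFile = "master-bin.000003";
        const long binlogPos = 107;
        const int updateCount = 100000;

        using (var connection = new MySqlConnection(connectionString))
        {
            connection.Open();

            // Configure the worker count for this run, then time the apply phase.
            Execute(connection, "SET GLOBAL slave_parallel_workers = 10");

            var clock = Stopwatch.StartNew();
            Execute(connection, "START SLAVE SQL_THREAD");

            // Blocks until the SQL thread has applied everything up to the noted position.
            using (var wait = new MySqlCommand(
                string.Format("SELECT MASTER_POS_WAIT('{0}', {1})", binlogFile, binlogPos),
                connection))
            {
                wait.CommandTimeout = 0; // the apply phase can take a while
                wait.ExecuteScalar();
            }
            clock.Stop();

            double qps = updateCount / clock.Elapsed.TotalSeconds;
            Console.WriteLine("Applied {0} updates in {1:F1}s -> {2:F0} QPS",
                updateCount, clock.Elapsed.TotalSeconds, qps);
        }
    }

    private static void Execute(MySqlConnection connection, string sql)
    {
        using (var command = new MySqlCommand(sql, connection))
        {
            command.ExecuteNonQuery();
        }
    }
}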
    Aside from what is shown in the results above, testing also demonstrated that the following settings had a very positive effect on slave performance: relay-log-info-repository=TABLE master-info-repository=TABLE For 5+ workers, it was up to 2.3 times as fast to run with TABLE compared to FILE. Conclusion As the results demonstrate, Multi-Threaded Slaves deliver significant performance increases to MySQL replication when handling multiple schemas. This, and the other replication enhancements introduced in MySQL 5.6, are fully available for you to download and evaluate now from the MySQL Developer site (select the Development Release tab). You can learn more about MySQL 5.6 from the documentation. Please don’t hesitate to comment on this or other replication blogs with feedback and questions. Appendix – Detailed Results

    Read the article

  • CodePlex Daily Summary for Sunday, August 03, 2014

    CodePlex Daily Summary for Sunday, August 03, 2014Popular ReleasesBoxStarter: Boxstarter 2.4.76: Running the Setup.bat file will install Chocolatey if not present and then install the Boxstarter modules.GMare: GMare Beta 1.2: Features Added: - Instance painting by holding the alt key down while pressing the left mouse button - Functionality to the binary exporter so that backgrounds from image files can be used - On the binary exporter background information can be edited manually now - Update to the GMare binary read GML script - Game Maker Studio export - Import from GMare project. Multiple options to import desired properties of a .gmpx - 10 undo/redo levels instead of 5 is now the default - New preferences dia...Json.NET: Json.NET 6.0 Release 4: New feature - Added Merge to LINQ to JSON New feature - Added JValue.CreateNull and JValue.CreateUndefined New feature - Added Windows Phone 8.1 support to .NET 4.0 portable assembly New feature - Added OverrideCreator to JsonObjectContract New feature - Added support for overriding the creation of interfaces and abstract types New feature - Added support for reading UUID BSON binary values as a Guid New feature - Added MetadataPropertyHandling.Ignore New feature - Improv...SQL Server Dialog: SQL Server Dialog: Input server, user and password Show folder and file in treeview Customize icon Filter file extension Skip system generate folder and fileAitso-a platform for spatial optimization and based on artificial immune systems: Aitso_0.14.08.01: Aitso0.14.08.01Installer.zipVidCoder: 1.5.24 Beta: Added NL-Means denoiser. Updated HandBrake core to SVN 6254. Added extra error handling to DVD player code to avoid a crash when the player was moved.AutoUpdater.NET : Auto update library for VB.NET and C# Developer: AutoUpdater.NET 1.3: Fixed problem in DownloadUpdateDialog where download continues even if you close the dialog. Added support for new url field for 64 bit application setup. AutoUpdater.NET will decide which download url to use by looking at the value of IntPtr.Size. Added German translation provided by Rene Kannegiesser. Now developer can handle update logic herself using event suggested by ricorx7. Added italian translation provided by Gianluca Mariani. Fixed bug that prevents Application from exiti...SEToolbox: SEToolbox 01.041.012 Release 1: Added voxel material textures to read in with mods. Fixed missing texture replacements for mods. Fixed rounding issue in raytrace code. Fixed repair issue with corrupt checkpoint file. Fixed issue with updated SE binaries 01.041.012 using new container configuration.Magick.NET: Magick.NET 6.8.9.601: Magick.NET linked with ImageMagick 6.8.9.6 Breaking changes: - Changed arguments for the Map method of MagickImage. 
- QuantizeSettings uses Riemersma by default.Multiple Threads TCP Server: Project: this Project is based on VS 2013, .net freamwork 4.0, you can open it by vs 2010 or laterAricie Shared: Aricie.Shared Version 1.8.00: Version 1.8.0 - Release Notes New: Expression Builder to design Flee Expressions New: Cryptographic helpers and configuration classes Improvement: Many fixes and improvements with property editor Improvement: Token Replace Property explorer now has a restricted mode for additional security Improvement: Better variables, types and object manipulation Fixed: smart file and flee bugs Fixed: Removed Exception while trying to read unsuported files Improvement: several performance twe...Accesorios de sitios Torrent en Español para Synology Download Station: Pack de Torrents en Español 6.0.0: Agregado los módulos de DivXTotal, el módulo de búsqueda depende del de alojamiento para bajar las series Utiliza el rss: http://www.divxtotal.com/rss.php DbEntry.Net (Leafing Framework): DbEntry.Net 4.2: DbEntry.Net is a lightweight Object Relational Mapping (ORM) database access compnent for .Net 4.0+. It has clearly and easily programing interface for ORM and sql directly, and supoorted Access, Sql Server, MySql, SQLite, Firebird, PostgreSQL and Oracle. It also provide a Ruby On Rails style MVC framework. Asp.Net DataSource and a simple IoC. DbEntry.Net.v4.2.Setup.zip include the setup package. DbEntry.Net.v4.2.Src.zip include source files and unit tests. DbEntry.Net.v4.2.Samples.zip ...Azure Storage Explorer: Azure Storage Explorer 6 Preview 1: Welcome to Azure Storage Explorer 6 Preview 1 This is the first release of the latest Azure Storage Explorer, code-named Phoenix. What's New?Here are some important things to know about version 6: Open Source Now being run as a full open source project. Full source code on CodePlex. Collaboration encouraged! Updated Code Base Brand-new code base (WPF/C#/.NET 4.5) Visual Studio 2013 solution (previously VS2010) Uses the Task Parallel Library (TPL) for asynchronous background operat...Wsus Package Publisher: release v1.3.1407.29: Updated WPP to recognize the very latest console version. Some files was missing into the latest release of WPP which lead to crash when trying to make a custom update. Add a workaround to avoid clipboard modification when double-clicking on a label when creating a custom update. Add the ability to publish detectoids. (This feature is still in a BETA phase. Packages relying on these detectoids to determine which computers need to be updated, may apply to all computers).VG-Ripper & PG-Ripper: PG-Ripper 1.4.32: changes NEW: Added Support for 'ImgMega.com' links NEW: Added Support for 'ImgCandy.net' links NEW: Added Support for 'ImgPit.com' links NEW: Added Support for 'Img.yt' links FIXED: 'Radikal.ru' links FIXED: 'ImageTeam.org' links FIXED: 'ImgSee.com' links FIXED: 'Img.yt' linksAsp.Net MVC-4,Entity Framework and JQGrid Demo with Todo List WebApplication: Asp.Net MVC-4,Entity Framework and JQGrid Demo: Asp.Net MVC-4,Entity Framework and JQGrid Demo with simple Todo List WebApplication, Overview TodoList is a simple web application to create, store and modify Todo tasks to be maintained by the users, which comprises of following fields to the user (Task Name, Task Description, Severity, Target Date, Task Status). 
TodoList web application is created using MVC - 4 architecture, code-first Entity Framework (ORM) and Jqgrid for displaying the data.Waterfox: Waterfox 31.0 Portable: New features in Waterfox 31.0: Added support for Unicode 7.0 Experimental support for WebCL New features in Firefox 31.0:New Add the search field to the new tab page Support of Prefer:Safe http header for parental control mozilla::pkix as default certificate verifier Block malware from downloaded files Block malware from downloaded files audio/video .ogg and .pdf files handled by Firefox if no application specified Changed Removal of the CAPS infrastructure for specifying site-sp...SuperSocket, an extensible socket server framework: SuperSocket 1.6.3: The changes below are included in this release: fixed an exception when collect a server's status but it has been stopped fixed a bug that can cause an exception in case of sending data when the connection dropped already fixed the log4net missing issue for a QuickStart project fixed a warning in a QuickStart projectYnote Classic: Ynote Classic 2.8.5 Beta: Several Changes - Multiple Carets and Multiple Selections - Improved Startup Time - Improved Syntax Highlighting - Search Improvements - Shell Command - Improved StabilityNew ProjectsCreek: Creek is a Collection of many C# Frameworks and my ownSpeaking Speedometer (android): Simple speaking speedometerT125Protocol { Alpha version }: implement T125 Protocol for communicate with a mainframe.Unix Time: This library provides a System.UnixTime as a new Type providing conversion between Unix Time and .NET DateTime.

    Read the article

  • Integration Patterns with Azure Service Bus Relay, Part 3: Anonymous partial-trust consumer

    - by Elton Stoneman
    This is the third in the IPASBR series, see also: Integration Patterns with Azure Service Bus Relay, Part 1: Exposing the on-premise service Integration Patterns with Azure Service Bus Relay, Part 2: Anonymous full-trust .NET consumer As the patterns get further from the simple .NET full-trust consumer, all that changes is the communication protocol and the authentication mechanism. In Part 3 the scenario is that we still have a secure .NET environment consuming our service, so we can store shared keys securely, but the runtime environment is locked down so we can't use Microsoft.ServiceBus to get the nice WCF relay bindings. To support this we will expose a RESTful endpoint through the Azure Service Bus, and require the consumer to send a security token with each HTTP service request. Pattern applicability This is a good fit for scenarios where: the runtime environment is secure enough to keep shared secrets the consumer can execute custom code, including building HTTP requests with custom headers the consumer cannot use the Azure SDK assemblies the service may need to know who is consuming it the service does not need to know who the end-user is Note there isn't actually a .NET requirement here. By exposing the service in a REST endpoint, anything that can talk HTTP can be a consumer. We'll authenticate through ACS which also gives us REST endpoints, so the service is still accessed securely. Our real-world example would be a hosted cloud app, where we have enough room in the app's customisation to keep the shared secret somewhere safe and to hook in some HTTP calls. We will be flowing an identity through to the on-premise service now, but it will be the service identity given to the consuming app - the end user's identity isn't flowed through yet. In this post, we’ll consume the service from Part 1 in ASP.NET using the WebHttpRelayBinding. The code for Part 3 (+ Part 1) is on GitHub here: IPASBR Part 3. Authenticating and authorizing with ACS We'll follow the previous examples and add a new service identity for the namespace in ACS, so we can separate permissions for different consumers (see walkthrough in Part 1). I've named the identity partialTrustConsumer. We’ll be authenticating against ACS with an explicit HTTP call, so we need a password credential rather than a symmetric key – for a nice secure option, generate a symmetric key, copy to the clipboard, then change type to password and paste in the key: We then need to do the same as in Part 2, add a rule to map the incoming identity claim to an outgoing authorization claim that allows the identity to send messages to Service Bus: Issuer: Access Control Service Input claim type: http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier Input claim value: partialTrustConsumer Output claim type: net.windows.servicebus.action Output claim value: Send As with Part 2, this sets up a service identity which can send messages into Service Bus, but cannot register itself as a listener, or manage the namespace. RESTfully exposing the on-premise service through Azure Service Bus Relay The part 3 sample code is ready to go, just put your Azure details into Solution Items\AzureConnectionDetails.xml and “Run Custom Tool” on the .tt files.  But to do it yourself is very simple. We already have a WebGet attribute in the service for locally making REST calls, so we are just going to add a new endpoint which uses the WebHttpRelayBinding to relay that service through Azure. 
It's as easy as adding this endpoint to Web.config for the service:         <endpoint address="https://sixeyed-ipasbr.servicebus.windows.net/rest"                   binding="webHttpRelayBinding"                    contract="Sixeyed.Ipasbr.Services.IFormatService"                   behaviorConfiguration="SharedSecret">         </endpoint> - and adding the webHttp attribute in your endpoint behavior:           <behavior name="SharedSecret">             <webHttp/>             <transportClientEndpointBehavior credentialType="SharedSecret">               <clientCredentials>                 <sharedSecret issuerName="serviceProvider"                               issuerSecret="gl0xaVmlebKKJUAnpripKhr8YnLf9Neaf6LR53N8uGs="/>               </clientCredentials>             </transportClientEndpointBehavior>           </behavior> Where's my WSDL? The metadata story for REST is a bit less automated. In our local webHttp endpoint we've enabled WCF's built-in help, so if you navigate to: http://localhost/Sixeyed.Ipasbr.Services/FormatService.svc/rest/help - you'll see the uri format for making a GET request to the service. The format is the same over Azure, so this is where you'll be connecting: https://[your-namespace].servicebus.windows.net/rest/reverse?string=abc123 Build the service with the new endpoint, open that in a browser and you'll get an XML version of an HTTP status code - a 401 with an error message stating that you haven’t provided an authorization header: <?xml version="1.0"?><Error><Code>401</Code><Detail>MissingToken: The request contains no authorization header..TrackingId:4cb53408-646b-4163-87b9-bc2b20cdfb75_5,TimeStamp:10/3/2012 8:34:07 PM</Detail></Error> By default, the setup of your Service Bus endpoint as a relying party in ACS expects a Simple Web Token to be presented with each service request, and in the browser we're not passing one, so we can't access the service. Note that this request doesn't get anywhere near your on-premise service, Service Bus only relays requests once they've got the necessary approval from ACS. Why didn't the consumer need to get ACS authorization in Part 2? It did, but it was all done behind the scenes in the NetTcpRelayBinding. By specifying our Shared Secret credentials in the consumer, the service call is preceded by a check on ACS to see that the identity provided is a) valid, and b) allowed access to our Service Bus endpoint. By making manual HTTP requests, we need to take care of that ACS check ourselves now. We do that with a simple WebClient call to the ACS endpoint of our service; passing the shared secret credentials, we will get back an SWT: var values = new System.Collections.Specialized.NameValueCollection(); values.Add("wrap_name", "partialTrustConsumer"); //service identity name values.Add("wrap_password", "suCei7AzdXY9toVH+S47C4TVyXO/UUFzu0zZiSCp64Y="); //service identity password values.Add("wrap_scope", "http://sixeyed-ipasbr.servicebus.windows.net/"); //this is the realm of the RP in ACS var acsClient = new WebClient(); var responseBytes = acsClient.UploadValues("https://sixeyed-ipasbr-sb.accesscontrol.windows.net/WRAPv0.9/", "POST", values); rawToken = System.Text.Encoding.UTF8.GetString(responseBytes); With a little manipulation, we then attach the SWT to subsequent REST calls in the authorization header; the token contains the Send claim returned from ACS, so we will be authorized to send messages into Service Bus. 
Running the sample Navigate to http://localhost:2028/Sixeyed.Ipasbr.WebHttpClient/Default.cshtml, enter a string and hit Go! - your string will be reversed by your on-premise service, routed through Azure: Using shared secret client credentials in this way means ACS is the identity provider for your service, and the claim which allows Send access to Service Bus is consumed by Service Bus. None of the authentication details make it through to your service, so your service is not aware who the consumer is (MSDN calls this "anonymous authentication").

    Read the article

  • CodePlex Daily Summary for Monday, August 11, 2014

    CodePlex Daily Summary for Monday, August 11, 2014Popular ReleasesSpace Engineers Server Manager: SESM V1.15: V1.15 - Updated Quartz library - Correct a bug in the new mod managment - Added a warning if you have backup enabled on a server but no static map configuredAspose for Apache POI: Missing Features of Apache POI SS - v 1.2: Release contain the Missing Features in Apache POI SS SDK in comparison with Aspose.Cells What's New ? Following Examples: Create Pivot Charts Detect Merged Cells Sort Data Printing Workbooks Feedback and Suggestions Many more examples are available at Aspose Docs. Raise your queries and suggest more examples via Aspose Forums or via this social coding site.AngularGo (SPA Project Template): AngularGo.VS2013.vsix: First ReleaseTouchmote: Touchmote 1.0 beta 13: Changes Less GPU usage Works together with other Xbox 360 controls Bug fixesPublic Key Infrastructure PowerShell module: PowerShell PKI Module v3.0: Important: I would like to hear more about what you are thinking about the project? I appreciate that you like it (2000 downloads over past 6 months), but may be you have to say something? What do you dislike in the module? Maybe you would love to see some new functionality? Tell, what you think! Installation guide:Use default installation path to install this module for current user only. To install this module for all users — enable "Install for all users" check-box in installation UI ...Modern UI for WPF: Modern UI 1.0.6: The ModernUI assembly including a demo app demonstrating the various features of Modern UI for WPF. BREAKING CHANGE LinkGroup.GroupName renamed to GroupKey NEW FEATURES Improved rendering on high DPI screens, including support for per-monitor DPI awareness available in Windows 8.1 (see also Per-monitor DPI awareness) New ModernProgressRing control with 8 builtin styles New LinkCommands.NavigateLink routed command New Visual Studio project templates 'Modern UI WPF App' and 'Modern UI W...ClosedXML - The easy way to OpenXML: ClosedXML 0.74.0: Multiple thread safe improvements including AdjustToContents XLHelper XLColor_Static IntergerExtensions.ToStringLookup Exception now thrown when saving a workbook with no sheets, instead of creating a corrupt workbook Fix for hyperlinks with non-ASCII Characters Added basic workbook protection Fix for error thrown, when a spreadsheet contained comments and images Fix to Trim function Fix Invalid operation Exception thrown when the formula functions MAX, MIN, and AVG referenc...SEToolbox: SEToolbox 01.042.019 Release 1: Added RadioAntenna broadcast name to ship name detail. Added two additional columns for Asteroid material generation for Asteroid Fields. Added Mass and Block number columns to main display. Added Ellipsis to some columns on main display to reduce name confusion. Added correct SE version number in file when saving. Re-added in reattaching Motor when drag/dropping or importing ships (KeenSH have added RotorEntityId back in after removing it months ago). Added option to export and r...jQuery List DragSort: jQuery List DragSort 0.5.2: Fixed scrollContainer removing deprecated use of $.browser so should now work with latest version of jQuery. 
Added the ability to return false in dragEnd to revert sort order Project changes Added nuget package for dragsort https://www.nuget.org/packages/dragsort Converted repository from SVN to MercurialBraintree Client Library: Braintree 2.32.0: Allow credit card verification options to be passed outside of the nonce for PaymentMethod.create Allow billingaddress parameters and billingaddress_id to be passed outside of the nonce for PaymentMethod.create Add Subscriptions to paypal accounts Add PaymentMethod.update Add failonduplicatepaymentmethod option to PaymentMethod.create Add support for dispute webhooksThe Mario Kart 8 App: V1.0.2.1: First Codeplex release. WINDOWS INSTALLER ONLYAspose Java for Docx4j: Aspose.Words vs Docx4j - v 1.0: Release contain the Code Comparison for Features in Docx4j SDK and Aspose.Words What's New ?Following Examples: Accessing Document Properties Add Bookmarks Convert to Formats Delete Bookmarks Working with Comments Feedback and Suggestions Many more examples are available at Aspose Docs. Raise your queries and suggest more examples via Aspose Forums or via this social coding site.File System Security PowerShell Module: NTFSSecurity 2.4.1: Add-Access and Remove-Access now take multiple accoutsYourSqlDba: YourSqlDba 5.2.1.: This version improves alert message that comes a while after you install the script. First it says to get it from YourSqlDba.CodePlex.com If you don't want to update now, just-rerun the script from your installed version. To get actual version running just execute install.PrintVersionInfo. . You can go to source code / history and click on change set 72957 to see changes in the script.Manipulator: Manipulator: manipulatorXNB filetype plugin for Paint.NET: Paint.NET XNB plugin v0.4.0.0: CHANGELOG Reverted old incomplete changes. Updated library for compatibility with Paint .NET 4. Updated project to NET 4.5. Updated version to 0.4.0.0. INSTALLATION INSTRUCTIONS Extract the ZIP file to your Paint.NET\FileTypes folder.EdiFabric: Release 4.1: Changed MessageContextWix# (WixSharp) - managed interface for WiX: Release 1.0.0.0: Release 1.0.0.0 Custom UI Custom MSI Dialog Custom CLR Dialog External UIMath.NET Numerics: Math.NET Numerics v3.2.0: Linear Algebra: Vector.Map2 (map2 in F#), storage-optimized Linear Algebra: fix RemoveColumn/Row early index bound check (was not strict enough) Statistics: Entropy ~Jeff Mastry Interpolation: use Array.BinarySearch instead of local implementation ~Candy Chiu Resources: fix a corrupted exception message string Portable Build: support .Net 4.0 as well by using profile 328 instead of 344. 
    .Net 3.5: F# extensions now support .Net 3.5 as well .Net 3.5: NuGet package now contains pro...babelua: 1.6.5.1: V1.6.5.1 - 2014.8.7New feature: Formatting code; Stability improvement: fix a bug that pop up error "System.Net.WebResponse EndGetResponse";New ProjectsDouDou: a little project.Dynamic MVC: Dynamically generate views from your model objects for a data centric MVC application.EasyDb - Simple Data Access: EasyDb is a simple library for data access that allows you to write less code.ExpressToAbroad: just go!!!!!Full Silverlight Web Video/Voice Conferencing: The Goal of this project is to provide complete Open Source (Voice/Video Chatting Client/Server) Modules Using SilverlightGaia: Gaia is an app for Windows plataform, Gaia is like Siri and Google Now or Betty but Gaia use only text commands.pxctest: pxctestSTACS: Career Management System for MIT by Team "STACS"StrongWorld: StrongWorld.WebSuiteXevas Tools: Xevas is a professional coders group of 'Nimbuzz'. We make all tools for worldwide users of nimbuzz at free of cost.

    Read the article

  • Looking into the JQuery Overlays Plugin

    - by nikolaosk
    I have been using JQuery for a couple of years now and it has helped me to solve many problems on the client side of web development.  You can find all my posts about JQuery in this link. In this post I will be providing you with a hands-on example on the JQuery Overlays Plugin.If you want you can have a look at this post, where I describe the JQuery Cycle Plugin.You can find another post of mine talking about the JQuery Carousel Lite Plugin here. Another post of mine regarding the JQuery Image Zoom Plugin can be found here.I will be writing more posts regarding the most commonly used JQuery Plugins. With the JQuery Overlays Plugin we can provide the user (overlay) with more information about an image when the user hovers over the image. I have been using extensively this plugin in my websites. In this hands-on example I will be using Expression Web 4.0.This application is not a free application. You can use any HTML editor you like. You can use Visual Studio 2012 Express edition. You can download it here.  You can download this plugin from this link. I launch Expression Web 4.0 and then I type the following HTML markup (I am using HTML 5) <html lang="en"> <head>    <link rel="stylesheet" href="ImageOverlay.css" type="text/css" media="screen" />    <script type="text/javascript" src="jquery-1.8.3.min.js"></script>    <script type="text/javascript" src="jquery.ImageOverlay.min.js"></script>         <script type="text/javascript">        $(function () {            $("#Liverpool").ImageOverlay();        });    </script>   </head><body>    <ul id="Liverpool" class="image-overlay">        <li>            <a href="www.liverpoolfc.com">                <img alt="Liverpool" src="championsofeurope.jpg" />                <div class="caption">                    <h3>Liverpool Football club</h3>                    <p>The greatest club in the world</p>                </div>            </a>        </li>    </ul></body></html> This is a very simple markup. I have added references to the JQuery library (current version is 1.8.3) and the JQuery Overlays Plugin. Then I add 1 image in the element with "id=Liverpool". There is a caption class as well, where I place the text I want to show when the mouse hovers over the image. The caption class and the Liverpool id element are styled in the ImageOverlay.css file that can also be downloaded with the plugin.You can style the various elements of the html markup in the .css file. The Javascript code that makes it all happen follows.   <script type="text/javascript">        $(function () {            $("#Liverpool").ImageOverlay();        });    </script>        I am just calling the ImageOverlay function for the Liverpool ID element.The contents of ImageOverlay.css file follow .image-overlay { list-style: none; text-align: left; }.image-overlay li { display: inline; }.image-overlay a:link, .image-overlay a:visited, .image-overlay a:hover, .image-overlay a:active { text-decoration: none; }.image-overlay a:link img, .image-overlay a:visited img, .image-overlay a:hover img, .image-overlay a:active img { border: none; }.image-overlay a{    margin: 9px;    float: left;    background: #fff;    border: solid 2px;    overflow: hidden;    position: relative;}.image-overlay img{    position: absolute;    top: 0;    left: 0;    border: 0;}.image-overlay .caption{    float: left;    position: absolute;    background-color: #000;    width: 100%;    cursor: pointer;    /* The way to change overlay opacity is the follow properties. 
Opacity is a tricky issue due to        longtime IE abuse of it, so opacity is not offically supported - use at your own risk.         To play it safe, disable overlay opacity in IE. */    /* For Firefox/Opera/Safari/Chrome */    opacity: .8;    /* For IE 5-7 */    filter: progid:DXImageTransform.Microsoft.Alpha(Opacity=80);    /* For IE 8 */    -MS-filter: "progid:DXImageTransform.Microsoft.Alpha(Opacity=80)";}.image-overlay .caption h1, .image-overlay .caption h2, .image-overlay .caption h3,.image-overlay .caption h4, .image-overlay .caption h5, .image-overlay .caption h6{    margin: 10px 0 10px 2px;    font-size: 26px;    font-weight: bold;    padding: 0 0 0 5px;    color:#92171a;}.image-overlay p{    text-indent: 0;    margin: 10px;    font-size: 1.2em;} It couldn't be any simpler than that. I view my simple page in Internet Explorer 10 and it works as expected. I have tested this simple solution in all major browsers and it works fine.Have a look at the picture below. You can test it yourself and see the results in your favorite browser. Hope it helps!!!

    Read the article

  • Secure Your Wireless Router: 8 Things You Can Do Right Now

    - by Chris Hoffman
    A security researcher recently discovered a backdoor in many D-Link routers, allowing anyone to access the router without knowing the username or password. This isn’t the first router security issue and won’t be the last. To protect yourself, you should ensure that your router is configured securely. This is about more than just enabling Wi-Fi encryption and not hosting an open Wi-Fi network. Disable Remote Access Routers offer a web interface, allowing you to configure them through a browser. The router runs a web server and makes this web page available when you’re on the router’s local network. However, most routers offer a “remote access” feature that allows you to access this web interface from anywhere in the world. Even if you set a username and password, if you have a D-Link router affected by this vulnerability, anyone would be able to log in without any credentials. If you have remote access disabled, you’d be safe from people remotely accessing your router and tampering with it. To do this, open your router’s web interface and look for the “Remote Access,” “Remote Administration,” or “Remote Management” feature. Ensure it’s disabled — it should be disabled by default on most routers, but it’s good to check. Update the Firmware Like our operating systems, web browsers, and every other piece of software we use, router software isn’t perfect. The router’s firmware — essentially the software running on the router — may have security flaws. Router manufacturers may release firmware updates that fix such security holes, although they quickly discontinue support for most routers and move on to the next models. Unfortunately, most routers don’t have an auto-update feature like Windows and our web browsers do — you have to check your router manufacturer’s website for a firmware update and install it manually via the router’s web interface. Check to be sure your router has the latest available firmware installed. Change Default Login Credentials Many routers have default login credentials that are fairly obvious, such as the password “admin”. If someone gained access to your router’s web interface through some sort of vulnerability or just by logging onto your Wi-Fi network, it would be easy to log in and tamper with the router’s settings. To avoid this, change the router’s password to a non-default password that an attacker couldn’t easily guess. Some routers even allow you to change the username you use to log into your router. Lock Down Wi-Fi Access If someone gains access to your Wi-Fi network, they could attempt to tamper with your router — or just do other bad things like snoop on your local file shares or use your connection to download copyrighted content and get you in trouble. Running an open Wi-Fi network can be dangerous. To prevent this, ensure your router’s Wi-Fi is secure. This is pretty simple: Set it to use WPA2 encryption and use a reasonably secure passphrase. Don’t use the weaker WEP encryption or set an obvious passphrase like “password”. Disable UPnP A variety of UPnP flaws have been found in consumer routers. Tens of millions of consumer routers respond to UPnP requests from the Internet, allowing attackers on the Internet to remotely configure your router. Flash applets in your browser could use UPnP to open ports, making your computer more vulnerable. UPnP is fairly insecure for a variety of reasons. To avoid UPnP-based problems, disable UPnP on your router via its web interface. 
If you use software that needs ports forwarded — such as a BitTorrent client, game server, or communications program — you’ll have to forward ports on your router without relying on UPnP. Log Out of the Router’s Web Interface When You’re Done Configuring It Cross site scripting (XSS) flaws have been found in some routers. A router with such an XSS flaw could be controlled by a malicious web page, allowing the web page to configure settings while you’re logged in. If your router is using its default username and password, it would be easy for the malicious web page to gain access. Even if you changed your router’s password, it would be theoretically possible for a website to use your logged-in session to access your router and modify its settings. To prevent this, just log out of your router when you’re done configuring it — if you can’t do that, you may want to clear your browser cookies. This isn’t something to be too paranoid about, but logging out of your router when you’re done using it is a quick and easy thing to do. Change the Router’s Local IP Address If you’re really paranoid, you may be able to change your router’s local IP address. For example, if its default address is 192.168.0.1, you could change it to 192.168.0.150. If the router itself were vulnerable and some sort of malicious script in your web browser attempted to exploit a cross site scripting vulnerability, accessing known-vulnerable routers at their local IP address and tampering with them, the attack would fail. This step isn’t completely necessary, especially since it wouldn’t protect against local attackers — if someone were on your network or software was running on your PC, they’d be able to determine your router’s IP address and connect to it. Install Third-Party Firmwares If you’re really worried about security, you could also install a third-party firmware such as DD-WRT or OpenWRT. You won’t find obscure back doors added by the router’s manufacturer in these alternative firmwares. Consumer routers are shaping up to be a perfect storm of security problems — they’re not automatically updated with new security patches, they’re connected directly to the Internet, manufacturers quickly stop supporting them, and many consumer routers seem to be full of bad code that leads to UPnP exploits and easy-to-exploit backdoors. It’s smart to take some basic precautions. Image Credit: Nuscreen on Flickr     

    Read the article

  • Getting Started With Tailoring Business Processes

    - by Richard Bingham
    In this article, and for the sake of simplicity, we will use the term “On-Premise” to mean a deployment where you have design-time development access to the instance, including administration of the technology components, the applications filesystem, and the database. In reality this might be a local development instance that is then supported by a team who can deploy your customizations to the restricted production instance equivalents. Tools Overview Firstly let’s look at the Design-Time tools within JDeveloper for customizing and extending the artifacts of a Business Process. In essence this falls into two buckets; SOA Composite Editor for working with BPEL processes, and the BPM Studio. The SOA Composite Editor As a standard extension to JDeveloper, this graphical design tool should be familiar to anyone previously worked with Oracle SOA Server. With easy-to-use modeling capability, backed-up by full XML source-view (for read-only), it provides everything that is needed to implement the technical design. In simple terms, once deployed to the remote SOA Server the composite components (like Mediator) leverage the Event Delivery Network (EDN) for interaction with the application logic. If you are customizing an existing Fusion Applications BPEL process then be aware that it does support MDS-based customization layers just like Page Composer where different customizations are used based on the run-time context, like for a specific Product or Business Unit. This also makes them safe from patching and upgrades, although only a single active version of the composite is available at run-time. This is defined by a field on the composite record, available in Enterprise Manager. Obviously if you wish to fire different activities and tasks based on the user context then you can should include switches to fork the flows in your custom BPEL process. Figure 1 – A BPEL process in Composite Editor The following describes the simplified steps for making customizations to BPEL processes. This is the most common method of changing the business processes of Fusion Applications, as over 400 BPEL-based composite applications are provided out-of-the-box. Setup your local Fusion Applications JDeveloper environment. The SOA Composite Editor should be installed as part of the Fusion Applications extension. If there are problems you can also find it under the ‘Check for Updates’ help menu option. Since SOA Server is not part of the JDeveloper integrated WebLogic Server, setup a standalone WebLogic environment for deploying and testing. Obviously you might use a Fusion Applications development instance also. Package the existing standard Fusion Applications SOA Composite using Enterprise Manager and export it as a complete SOA Archive (SAR) file, resulting in a local .jar file. You may need to ask your system administrator for this. Import the exported SAR .jar file into JDeveloper using the File menu, under the option ‘SOA Archive into SOA Project’. In JDeveloper set the appropriate customization layer values, and then change from the default role to the Fusion Applications Customization Developer role. Make the customizations and save the application project. Finally redeploy the composite application, either to a direct Application Server connection, or as a fresh SAR (jar) file that can then be re-imported and deployed via Enterprise Manager. 
The Business Process Management (BPM) Suite In addition to the relatively low-level development environment associated with BPEL process creation, Oracle provides a suite of products that allow business process adjustments to be made without the need for some of the programming skills.  The aim is to abstract much of the technical implementation and to provide a Business Analyst tools for immediately implementing organization changes. Obviously there are some limitations on what they can do, however the BPM Suite functionality increases with each release and for the majority of the cases the tools remains as applicable as its developer-orientated sister. At the current time business processes must be explicitly coded to support just one of these use-cases, either BPEL for developer use or BPM for business analyst use. That said, they both run on the same SOA Server in much the same way. The components bundled in each SOA Composite Application can be verified by inspection through Enterprise Manager. Figure 2 – A BPM Process in JDeveloper BPM Suite. BPM processes are written in a standard notation (BPMN) and the modeling tools are very similar to that of BPEL. The steps to deploy a custom BPM process are also essentially much the same, since the BPM process is bundled into a SOA Composite just like a BPEL process. As such the SOA Composite Editor  actually has support for both artifacts and even allows use of them together, such as a calling a BPM process as a partnerlink from a BPEL process. For more details see the references below. Business Analyst Tooling In addition to using JDeveloper extensions for BPM development, there are run-time tools that Business Analysts can use to make adjustments, so that without high costs of an IT project the system can be tuned to match changes to the business operation. The first tool to consider is the BPM Composer, deployed with the middleware SOA Server and accessible online, and for Fusion Applications it is under the Business Process icon on the homepage of the Application Composer. Figure 3 – Business Process Composer showing a CRM process flow. The key difference between this and using JDeveloper is that the BPM Composer has a Business Catalog prepopulated with features and functions that can be used, mostly through registered WebServices. This means no coding or complex interface development is required, simply drag-drop-configure. The items in the business catalog are seeded by either Oracle (as a BPM Template) or added to by your own custom development. You cannot create or generate catalog content from BPM Composer directly. As per the screenshot you can see the Business Catalog content in the BPM Project browser region. In addition, other online tools for use by Business Analysts include the BPM Worklist application for editing business rules and approval management configuration, plus the SOA Composer which focuses on non-approval business rules and domain value maps. At the current time there are only a handful of BPM processes shipped with Fusion Applications HCM and CRM, including on-boarding workers and processing customer registrations.  This also means a limited number of associated BPM Templates provided out-of-the-box, therefore a limited Business Catalog. That said, BPM-based extension is a powerful capability to leverage and will most likely develop going forwards, especially for use in SaaS deployments where full design-time JDeveloper access is not available. 
Further Reading For BPEL – Fusion Applications Extensibility Guide – Section 12 For BPM – Fusion Applications Extensibility Guide – Section 7 The product-specific documentation and implementation guides for Fusion Applications Fusion Middleware Developers Guide for SOA Suite Modeling and Implementation Guide for Oracle Business Process Management User’s Guide for Oracle Business Process Composer Oracle University courses on BPM Suite and SOA Development

    Read the article

  • What's new in EJB 3.2 ? - Java EE 7 chugging along!

    - by arungupta
    EJB 3.1 added a whole ton of features for simplicity and ease-of-use such as @Singleton, @Asynchronous, @Schedule, Portable JNDI name, EJBContainer.createEJBContainer, EJB 3.1 Lite, and many others. As part of Java EE 7, EJB 3.2 (JSR 345) is making progress and this blog will provide highlights from the work done so far. This release has been particularly kept small but include several minor improvements and tweaks for usability. More features in EJB.Lite Asynchronous session bean Non-persistent EJB Timer service This also means these features can be used in embeddable EJB container and there by improving testability of your application. Pruning - The following features were made Proposed Optional in Java EE 6 and are now made optional. EJB 2.1 and earlier Entity Bean Component Contract for CMP and BMP Client View of an EJB 2.1 and earlier Entity Bean EJB QL: Query Language for CMP Query Methods JAX-RPC-based Web Service Endpoints and Client View The optional features are moved to a separate document and as a result EJB specification is now split into Core and Optional documents. This allows the specification to be more readable and better organized. Updates and Improvements Transactional lifecycle callbacks in Stateful Session Beans, only for CMT. In EJB 3.1, the transaction context for lifecyle callback methods (@PostConstruct, @PreDestroy, @PostActivate, @PrePassivate) are defined as shown. @PostConstruct @PreDestroy @PrePassivate @PostActivate Stateless Unspecified Unspecified N/A N/A Stateful Unspecified Unspecified Unspecified Unspecified Singleton Bean's transaction management type Bean's transaction management type N/A N/A In EJB 3.2, stateful session bean lifecycle callback methods can opt-in to be transactional. These methods are then executed in a transaction context as shown. @PostConstruct @PreDestroy @PrePassivate @PostActivate Stateless Unspecified Unspecified N/A N/A Stateful Bean's transaction management type Bean's transaction management type Bean's transaction management type Bean's transaction management type Singleton Bean's transaction management type Bean's transaction management type N/A N/A For example, the following stateful session bean require a new transaction to be started for @PostConstruct and @PreDestroy lifecycle callback methods. @Statefulpublic class HelloBean {   @PersistenceContext(type=PersistenceContextType.EXTENDED)   private EntityManager em;    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)   @PostConstruct   public void init() {        myEntity = em.find(...);   }   @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)    @PostConstruct    public void destroy() {        em.flush();    }} Notice, by default the lifecycle callback methods are not transactional for backwards compatibility. They need to be explicitly opt-in to be made transactional. Opt-out of passivation for stateful session bean - If your stateful session bean needs to stick around or it has non-serializable field then the bean can be opt-out of passivation as shown. @Stateful(passivationCapable=false)public class HelloBean {    private NonSerializableType ref = ... . . .} Simplified the rules to define all local/remote views of the bean. For example, if the bean is defined as: @Statelesspublic class Bean implements Foo, Bar {    . . .} where Foo and Bar have no annotations of their own, then Foo and Bar are exposed as local views of the bean. The bean may be explicitly marked @Local as @Local@Statelesspublic class Bean implements Foo, Bar {    . . 
.} then this is the same behavior as explained above, i.e. Foo and Bar are local views. If the bean is marked @Remote as: @Remote@Statelesspublic class Bean implements Foo, Bar {    . . .} then Foo and Bar are remote views. If an interface is marked @Local or @Remote then each interface need to be explicitly marked explicitly to be exposed as a view. For example: @Remotepublic interface Foo { . . . }@Statelesspublic class Bean implements Foo, Bar {    . . .} only exposes one remote interface Foo. Section 4.9.7 from the specification provide more details about this feature. TimerService.getAllTimers is a newly added convenience API that returns all timers in the same bean. This is only for displaying the list of timers as the timer can only be canceled by its owner. Removed restriction to obtain the current class loader, and allow to use java.io package. This is handy if you want to do file access within your beans. JMS 2.0 alignment - A standard list of activation-config properties is now defined destinationLookup connectionFactoryLookup clientId subscriptionName shareSubscriptions Tons of other clarifications through out the spec. Appendix A provide a comprehensive list of changes since EJB 3.1. ThreadContext in Singleton is guaranteed to be thread-safe. Embeddable container implement Autocloseable. A complete replay of Enterprise JavaBeans Today and Tomorrow from JavaOne 2012 can be seen here (click on CON4654_mp4_4654_001 in Media). The specification is still evolving so the actual property or method names or their actual behavior may be different from the currently proposed ones. Are there any improvements that you'd like to see in EJB 3.2 ? The EJB 3.2 Expert Group would love to hear your feedback. An Early Draft of the specification is available. The latest version of the specification can always be downloaded from here. Java EE 7 Specification Status EJB Specification Project JIRA of EJB Specification JSR Expert Group Discussion Archive These features will start showing up in GlassFish 4 Promoted Builds soon.

    Read the article

  • Restructuring a large Chrome Extension/WebApp

    - by A.M.K
    I have a very complex Chrome Extension that has gotten too large to maintain in its current format. I'd like to restructure it, but I'm 15 and this is the first webapp or extension of it's type I've built so I have no idea how to do it. TL;DR: I have a large/complex webapp I'd like to restructure and I don't know how to do it. Should I follow my current restructure plan (below)? Does that sound like a good starting point, or is there a different approach that I'm missing? Should I not do any of the things I listed? While it isn't relevant to the question, the actual code is on Github and the extension is on the webstore. The basic structure is as follows: index.html <html> <head> <link href="css/style.css" rel="stylesheet" /> <!-- This holds the main app styles --> <link href="css/widgets.css" rel="stylesheet" /> <!-- And this one holds widget styles --> </head> <body class="unloaded"> <!-- Low-level base elements are "hardcoded" here, the unloaded class is used for transitions and is removed on load. i.e: --> <div class="tab-container" tabindex="-1"> <!-- Tab nav --> </div> <!-- Templates for all parts of the application and widgets are stored as elements here. I plan on changing these to <script> elements during the restructure since <template>'s need valid HTML. --> <template id="template.toolbar"> <!-- Template content --> </template> <!-- Templates end --> <!-- Plugins --> <script type="text/javascript" src="js/plugins.js"></script> <!-- This contains the code for all widgets, I plan on moving this online and downloading as necessary soon. --> <script type="text/javascript" src="js/widgets.js"></script> <!-- This contains the main application JS. --> <script type="text/javascript" src="js/script.js"></script> </body> </html> widgets.js (initLog || (window.initLog = [])).push([new Date().getTime(), "A log is kept during page load so performance can be analyzed and errors pinpointed"]); // Widgets are stored in an object and extended (with jQuery, but I'll probably switch to underscore if using Backbone) as necessary var Widgets = { 1: { // Widget ID, this is set here so widgets can be retreived by ID id: 1, // Widget ID again, this is used after the widget object is duplicated and detached size: 3, // Default size, medium in this case order: 1, // Order shown in "store" name: "Weather", // Widget name interval: 300000, // Refresh interval nicename: "weather", // HTML and JS safe widget name sizes: ["tiny", "small", "medium"], // Available widget sizes desc: "Short widget description", settings: [ { // Widget setting specifications stored as an array of objects. These are used to dynamically generate widget setting popups. type: "list", nicename: "location", label: "Location(s)", placeholder: "Enter a location and press Enter" } ], config: { // Widget settings as stored in the tabs object (see script.js for storage information) size: "medium", location: ["San Francisco, CA"] }, data: {}, // Cached widget data stored locally, this lets it work offline customFunc: function(cb) {}, // Widgets can optionally define custom functions in any part of their object refresh: function() {}, // This fetches data from the web and caches it locally in data, then calls render. It gets called after the page is loaded for faster loads render: function() {} // This renders the widget only using information from data, it's called on page load. 
} }; script.js (initLog || (window.initLog = [])).push([new Date().getTime(), "These are also at the end of every file"]); // Plugins, extends and globals go here. i.e. Number.prototype.pad = .... var iChrome = function(refresh) { // The main iChrome init, called with refresh when refreshing to not re-run libs iChrome.Status.log("Starting page generation"); // From now on iChrome.Status.log is defined, it's used in place of the initLog iChrome.CSS(); // Dynamically generate CSS based on settings iChrome.Tabs(); // This takes the tabs stored in the storage (see fetching below) and renders all columns and widgets as necessary iChrome.Status.log("Tabs rendered"); // These will be omitted further along in this excerpt, but they're used everywhere // Checks for justInstalled => show getting started are run here /* The main init runs the bare minimum required to display the page, this sets all non-visible or instantly need things (such as widget dragging) on a timeout */ iChrome.deferredTimeout = setTimeout(function() { iChrome.deferred(refresh); // Pass refresh along, see above }, 200); }; iChrome.deferred = function(refresh) {}; // This calls modules one after the next in the appropriate order to finish rendering the page iChrome.Search = function() {}; // Modules have a base init function and are camel-cased and capitalized iChrome.Search.submit = function(val) {}; // Methods within modules are camel-cased and not capitalized /* Extension storage is async and fetched at the beginning of plugins.js, it's then stored in a variable that iChrome.Storage processes. The fetcher checks to see if processStorage is defined, if it is it gets called, otherwise settings are left in iChromeConfig */ var processStorage = function() { iChrome.Storage(function() { iChrome.Templates(); // Templates are read from their elements and held in a cache iChrome(); // Init is called }); }; if (typeof iChromeConfig == "object") { processStorage(); } Objectives of the restructure Memory usage: Chrome apparently has a memory leak in extensions, they're trying to fix it but memory still keeps on getting increased every time the page is loaded. The app also uses a lot on its own. Code readability: At this point I can't follow what's being called in the code. While rewriting the code I plan on properly commenting everything. Module interdependence: Right now modules call each other a lot, AFAIK that's not good at all since any change you make to one module could affect countless others. Fault tolerance: There's very little fault tolerance or error handling right now. If a widget is causing the rest of the page to stop rendering the user should at least be able to remove it. Speed is currently not an issue and I'd like to keep it that way. How I think I should do it The restructure should be done using Backbone.js and events that call modules (i.e. on storage.loaded = init). Modules should each go in their own file, I'm thinking there should be a set of core files that all modules can rely on and call directly and everything else should be event based. Widget structure should be kept largely the same, but maybe they should also be split into their own files. AFAIK you can't load all templates in a folder, therefore they need to stay inline. Grunt should be used to merge all modules, plugins and widgets into one file. Templates should also all be precompiled. Question: Should I follow my current restructure plan? Does that sound like a good starting point, or is there a different approach that I'm missing? 
Should I not do any of the things I listed? Do applications written with Backbone tend to be more intensive (memory and speed) than ones written in Vanilla JS? Also, can I expect to improve this with a proper restructure or is my current code about as good as can be expected?

    Read the article

  • --log-slave-updates is OFF but some updates are still logged to the slave binary log?

    - by quanta
    MySQL version 5.5.14 According to the document, by default, the slave does not log to its binary log any updates that are received from a master server. Here is my config on the slave: # egrep 'bin|slave' /etc/my.cnf relay-log=mysqld-relay-bin log-bin = /var/log/mysql/mysql-bin binlog-format=MIXED sync_binlog = 1 log-bin-trust-function-creators = 1 mysql> show global variables like 'log_slave%'; +-------------------+-------+ | Variable_name | Value | +-------------------+-------+ | log_slave_updates | OFF | +-------------------+-------+ 1 row in set (0.01 sec) mysql> select @@log_slave_updates; +---------------------+ | @@log_slave_updates | +---------------------+ | 0 | +---------------------+ 1 row in set (0.00 sec) but the slave still logs some changes to its binary logs, let's see the file sizes: -rw-rw---- 1 mysql mysql 37M Apr 1 01:00 /var/log/mysql/mysql-bin.001256 -rw-rw---- 1 mysql mysql 25M Apr 2 01:00 /var/log/mysql/mysql-bin.001257 -rw-rw---- 1 mysql mysql 46M Apr 3 01:00 /var/log/mysql/mysql-bin.001258 -rw-rw---- 1 mysql mysql 115M Apr 4 01:00 /var/log/mysql/mysql-bin.001259 -rw-rw---- 1 mysql mysql 105M Apr 4 18:54 /var/log/mysql/mysql-bin.001260 and here is a sample query when reading these binary files with the mysqlbinlog utility: #120404 19:08:57 server id 3 end_log_pos 110324763 Query thread_id=382435 exec_time=0 error_code=0 SET TIMESTAMP=1333541337/*!*/; INSERT INTO norep_SplitValues VALUES ( NAME_CONST('cur_string',_utf8'118212' COLLATE 'utf8_general_ci')) /*!*/; # at 110324763 Did I miss something? Reply to @RolandoMySQLDBA: If replication brought this over, then the same query has to be in the relay logs. Please go find the relay log that has the INSERT query with the same TIMESTAMP (1333541337). There is no such query with the same TIMESTAMP in the relay logs. If you cannot find it in the relay logs, then look and see if Infobright is posting the INSERT query. In that instance, the INSERT should be recorded in the binary logs of the Slave. Looking more deeply into the binary logs, I see that almost all of the queries are CREATE/INSERT/UPDATE/DROP "temporary" tables, something like this: # at 123873315 #120405 0:42:04 server id 3 end_log_pos 123873618 Query thread_id=395373 exec_time=0 error_code=0 SET TIMESTAMP=1333561324/*!*/; SET @@session.pseudo_thread_id=395373/*!*/; CREATE TEMPORARY TABLE `norep_tmpcampaign` ( `campaignid` INTEGER(11) NOT NULL DEFAULT '0' , `status` INTEGER(11) NOT NULL DEFAULT '0' , `updated` DATETIME, KEY `campaignid` (`campaignid`) )ENGINE=MEMORY /*!*/; # at 123873618 #120405 0:42:04 server id 3 end_log_pos 123873755 Query thread_id=395373 exec_time=0 error_code=0 SET TIMESTAMP=1333561324/*!*/; DROP TABLE IF EXISTS `norep_tmpcampaign1` /* generated by server */ "temporary" here means that they are dropped after the calculation is done. 
I also tell the slave not to replicate any statement that matches the norep_ wildcard pattern:

replicate-wild-ignore-table=%.norep_%

But there is an exception, a table that still shows up in the binary logs:

# at 123828094
#120405  0:37:21 server id 3 end_log_pos 123828495 Query thread_id=395209 exec_time=0 error_code=0
SET TIMESTAMP=1333561041/*!*/;
INSERT INTO sessions (SessionId, ApplicationName, Created, Expires, LockDate, LockId, Timeout, Locked, SessionItems, Flags)
Values('pgv2exo4y4vo4ccz44vwznu0', '/', '2012-04-05 00:37:21', '2012-04-05 00:57:21', '2012-04-05 00:37:21', 0, 20, 0,
'AwAAAP////8IdXNlcm5hbWUGdXNlcmlkCHBlcm1pdGlkAgAAAAQAAAAGAAAAAQABAAEA', 0)
/*!*/;

Description:

mysql> desc reportingdb.sessions;
+-----------------+------------------+------+-----+---------------------+-------+
| Field           | Type             | Null | Key | Default             | Extra |
+-----------------+------------------+------+-----+---------------------+-------+
| SessionId       | varchar(64)      | NO   | PRI |                     |       |
| ApplicationName | varchar(255)     | NO   |     |                     |       |
| Created         | timestamp        | NO   |     | 0000-00-00 00:00:00 |       |
| Expires         | timestamp        | NO   |     | 0000-00-00 00:00:00 |       |
| LockDate        | timestamp        | NO   |     | 0000-00-00 00:00:00 |       |
| LockId          | int(11) unsigned | NO   |     | NULL                |       |
| Timeout         | int(11) unsigned | NO   |     | NULL                |       |
| Locked          | bit(1)           | NO   |     | NULL                |       |
| SessionItems    | varchar(255)     | YES  |     | NULL                |       |
| Flags           | int(11)          | NO   |     | NULL                |       |
+-----------------+------------------+------+-----+---------------------+-------+

I'm sure all these queries are posted by MySQL, not by Infobright:

$ mysql-ib -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 48971
Server version: 5.1.40 build number (revision)=IB_4.0.5_r15240_15370(ice) (static)

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> select * from information_schema.tables where table_name='sessions';
Empty set (0.02 sec)

I've also been trying some INSERT/UPDATE queries against testing tables on the master; they are copied to the relay logs, not to the binary logs, on the slave:

# at 311664029
#120405  0:15:23 server id 1 end_log_pos 311664006 Query thread_id=10458250 exec_time=0 error_code=0
use testuser/*!*/;
SET TIMESTAMP=1333559723/*!*/;
update users set email2='[email protected]' where id=22
/*!*/;

Pay attention to the server id: in the relay logs the server id is the master's (1), and in the binary logs the server id is the slave's (3 in this case).

Reply to @RolandoMySQLDBA: Thu Apr 5 10:06:00 ICT 2012

> Run CREATE DATABASE quantatest; on the Master now, please. Tell me if CREATE DATABASE quantatest; showed up in the Slave's Binary Logs.

As I said above, the INSERT/UPDATE queries I ran against testing tables on the master were copied to the relay logs, not the binary logs, and as you can guess the IO thread copied this one to the relay logs as well, not the binary logs:

#120405 10:07:25 server id 1 end_log_pos 347573819 Query thread_id=10480775 exec_time=0 error_code=0
SET TIMESTAMP=1333595245/*!*/;
/*!\C latin1 *//*!*/;
SET @@session.character_set_client=8,@@session.collation_connection=8,@@session.collation_server=8/*!*/;
create database quantatest
/*!*/;

The question should probably change to: why are some update queries still logged to the slave's binary logs although --log-slave-updates is disabled? Where do they come from?
Here are a few of the last entries:

/*!*/;
# at 27492197
#120405 10:12:45 server id 3 end_log_pos 27492370 Query thread_id=410353 exec_time=0 error_code=0
SET TIMESTAMP=1333595565/*!*/;
CREATE TEMPORARY TABLE norep_SplitValues (
  value VARCHAR(1000) NOT NULL
) ENGINE=MEMORY
/*!*/;
# at 27492370
#120405 10:12:45 server id 3 end_log_pos 27492445 Query thread_id=410353 exec_time=0 error_code=0
SET TIMESTAMP=1333595565/*!*/;
BEGIN
/*!*/;
# at 27492445
#120405 10:12:45 server id 3 end_log_pos 27492619 Query thread_id=410353 exec_time=0 error_code=0
SET TIMESTAMP=1333595565/*!*/;
INSERT INTO norep_SplitValues VALUES ( NAME_CONST('cur_string',_utf8'119577' COLLATE 'utf8_general_ci'))
/*!*/;
# at 27492619
#120405 10:12:45 server id 3 end_log_pos 27492695 Query thread_id=410353 exec_time=0 error_code=0
SET TIMESTAMP=1333595565/*!*/;
COMMIT
/*!*/;
# at 27492918
#120405 10:12:46 server id 3 end_log_pos 27493115 Query thread_id=410353 exec_time=0 error_code=0
SET TIMESTAMP=1333595566/*!*/;
SELECT `reportingdb`.`selfserving_get_locationad`(_utf8'3' COLLATE 'utf8_general_ci',_utf8'' COLLATE 'utf8_general_ci')
/*!*/;
# at 27493199
#120405 10:12:46 server id 3 end_log_pos 27493353 Query thread_id=410353 exec_time=0 error_code=0
SET TIMESTAMP=1333595566/*!*/;
/*!\C utf8 *//*!*/;
SET @@session.character_set_client=33,@@session.collation_connection=33,@@session.collation_server=8/*!*/;
DROP TEMPORARY TABLE IF EXISTS `norep_SplitValues` /* generated by server */
/*!*/;
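One hedged way to pin down where these entries come from (my own sketch, not something from the thread): log_slave_updates only governs whether events applied by the replication SQL thread are written to the slave's binary log. Any statement executed directly on the slave by a local client or stored routine — such as the selfserving_get_locationad() call above, note the slave's own server id 3 on every entry — is written to the binary log whenever log-bin is enabled, and replicate-wild-ignore-table only filters what the SQL thread applies, not what local sessions log. Counting events by originating server id should make the split visible; the file name below is simply the newest binlog from the listing above.

# Events stamped with the master's id (1) were replicated;
# events stamped with the slave's own id (3) were executed locally on the slave.
mysqlbinlog /var/log/mysql/mysql-bin.001260 | grep -c 'server id 1'
mysqlbinlog /var/log/mysql/mysql-bin.001260 | grep -c 'server id 3'

# If the local job that calls the stored function should not be binlogged at all,
# it can disable logging for its own session (requires SUPER) before running:
#   SET SESSION sql_log_bin = 0;

If those counts show the bulk of the traffic carrying server id 3, the growth of the slave's binary logs is coming from local activity and is unrelated to --log-slave-updates.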

    Read the article
