Search Results

Search found 26808 results on 1073 pages for 'storing information'.


  • Page specific CSS or a single css file when developing a mobile (webkit) based site?

    - by Mike
    I am working on a mobile site for WebKit browsers. I have been trying to find information on using multiple style sheets versus a single CSS file. There is a lot of information on this topic, but not a lot of it pertains to mobile browsers. My site will have a bunch of pages that will have page-specific CSS. For a non-mobile site, people generally say that a single file will be faster but that multiple files are easier to develop. However, on a mobile site is that still the case? If you put everything in one file, it will get cached after the first load, but that first load will be slower. If you had page-specific files, the first page would load quicker, but every other page would then take a hit while making the page-specific CSS HTTP request. Does anyone have any thoughts on this? The article below seems to say that one file is better as long as it's under 1 MB (which my files definitely will be): http://www.yuiblog.com/blog/2010/07/12/mobile-browser-cache-limits-revisited/

    Read the article

  • Top things web developers should know about the Visual Studio 2013 release

    - by Jon Galloway
    ASP.NET and Web Tools for Visual Studio 2013 Release Notes

    Summary for lazy readers: Visual Studio 2013 is now available for download on the Visual Studio site and on MSDN subscriber downloads. Visual Studio 2013 installs side by side with Visual Studio 2012 and supports round-tripping between Visual Studio versions, so you can try it out without committing to a switch. Visual Studio 2013 ships with the new version of ASP.NET, which includes ASP.NET MVC 5, ASP.NET Web API 2, Razor 3, Entity Framework 6 and SignalR 2.0. The new ASP.NET release focuses on One ASP.NET, so core features and web tools work the same across the platform (e.g. adding ASP.NET MVC controllers to a Web Forms application). New core features include new templates based on Bootstrap, a new scaffolding system, and a new identity system. Visual Studio 2013 is an incredible editor for web files, including HTML, CSS, JavaScript, Markdown, LESS, CoffeeScript, Handlebars, Angular, Ember, Knockout, etc.

    Top links: Visual Studio 2013 content on the ASP.NET site is in the standard new releases area: http://www.asp.net/vnext; the ASP.NET and Web Tools for Visual Studio 2013 Release Notes; short intro videos on the new Visual Studio web editor features from Scott Hanselman and Mads Kristensen; the "Announcing release of ASP.NET and Web Tools for Visual Studio 2013" post on the official .NET Web Development and Tools Blog; and Scott Guthrie's post: Announcing the Release of Visual Studio 2013 and Great Improvements to ASP.NET and Entity Framework. Okay, for those of you who are still with me, let's dig in a bit.

    Quick web dev notes on downloading and installing Visual Studio 2013: I found Visual Studio 2013 to be a pretty fast install. According to Brian Harry's release post, installing over pre-release versions of Visual Studio is supported. I've installed the release version over pre-release versions, and it worked fine. If you're only going to be doing web development, you can speed up the install if you just select Web Developer tools. Of course, as a good Microsoft employee, I'll mention that you might also want to install some of those other features, like the Store apps for Windows 8 and the Windows Phone 8.0 SDK, but they do download and install a lot of other stuff (e.g. the Windows Phone SDK sets up Hyper-V and downloads several GBs of VMs). So if you're planning just to do web development for now, you can pick just the Web Developer Tools and install the other stuff later. If you've got a fast internet connection, I recommend using the web installer instead of downloading the ISO. The ISO includes all the features, whereas the web installer just downloads what you're installing.

    Visual Studio 2013 development settings and color theme: When you start up Visual Studio, it'll prompt you to pick some defaults. These are totally up to you - whatever suits your development style - and you can change them later. I recommend either the Web Development or Web Development (Code Only) settings. The only real difference is that Code Only hides the toolbars, and you can switch between them using Tools / Import and Export Settings / Reset. Usually I've just gone with Web Development (Code Only) in the past because I just want to focus on the code, although the Standard toolbar does make it easier to switch default web browsers. More on that later.

    Color theme: Sigh.
    Okay, everyone's got their favorite colors. I alternate between Light and Dark depending on my mood, and I personally like how the low contrast on the window chrome in those themes puts the emphasis on my code rather than the tabs and toolbars. I know some people got pretty worked up over that, though, and wanted the Blue theme back. I personally don't like it - it reminds me of ancient versions of Visual Studio that I don't want to think about anymore. So here's the thing: if you install Visual Studio Ultimate, it defaults to Blue. The other versions default to Light. If you use Blue, I won't criticize you - out loud, that is. You can change themes really easily - either Tools / Options / Environment / General, or the smart way: ctrl+q for quick launch, then type Theme and hit enter.

    Signing in: During the first run, you'll be prompted to sign in. You don't have to - you can click the "Not now, maybe later" link at the bottom of that dialog. I recommend signing in, though. It's not hooked in with licensing or tracking the kind of code you write to sell you components. It is doing good things, like syncing your Visual Studio settings between computers. More about that here. So, you don't have to, but I sure do.

    Overview of shiny new things in ASP.NET land: There are a lot of good new things in ASP.NET. I'll list some of my favorites here, but you can read more on the ASP.NET site.

    One ASP.NET: You've heard us talk about this for a while. The idea is that options are good, but choice can be a burden. When you start a new ASP.NET project, why should you have to make a tough decision - with long-term consequences - about how your application will work? If you want to use ASP.NET Web Forms, but have the option of adding in ASP.NET MVC later, why should that be hard? It's all ASP.NET, right? Ideally, you'd just decide that you want to use ASP.NET to build sites and services, and you could use the appropriate tools as you needed them. So, here it is. When you create a new ASP.NET application, you just create an ASP.NET application. Next, you can pick from some templates to get you started... but these are different. They're not "painful decision" templates, they're just some starting pieces. And, most importantly, you can mix and match. I can pick a "mostly" Web Forms template, but include MVC and Web API folders and core references. If you've tried to mix and match in the past, you're probably aware that it was possible, but not pleasant. ASP.NET MVC project files contained special project type GUIDs, so you'd only get controller scaffolding support in a Web Forms project if you manually edited the csproj file. Features in one stack didn't work in others. Project templates were painful choices. That's no longer the case. Hooray! I just did a demo in a presentation last week where I created a new Web Forms + MVC + Web API site, built a model, scaffolded MVC and Web API controllers with EF Code First, added data in the MVC view, viewed it in Web API, then added a GridView to the Web Forms Default.aspx page and bound it to the model. In about 5 minutes. Sure, it's a simple example, but it's great to be able to share code and features across the whole ASP.NET family.

    Authentication: In the past, authentication was built into the templates. So, for instance, there was an ASP.NET MVC 4 Intranet Project template which created a new ASP.NET MVC 4 application that was preconfigured for Windows Authentication.
    All of that authentication stuff was built into each template, so it varied between the stacks, and you couldn't reuse it. You didn't see a lot of changes to the authentication options, since they required big changes to a bunch of project templates. Now, the new project dialog includes a common authentication experience. When you hit the Change Authentication button, you get some common options that work the same way regardless of the template or reference settings you've made. These options work on all ASP.NET frameworks and all hosting environments (IIS, IIS Express, or OWIN for self-host). The default is Individual User Accounts: this is the standard "create a local account, using username / password or OAuth" thing; however, it's all built on the new Identity system. More on that in a second. The one setting that has some configuration to it is Organizational Accounts, which lets you configure authentication using Active Directory, Windows Azure Active Directory, or Office 365.

    Identity: There's a new identity system. We've taken the best parts of the previous ASP.NET Membership and Simple Membership systems, rolled in a lot of feedback and made big enhancements to support important developer concerns like unit testing and extensibility. I've written long posts about ASP.NET Identity, and I'll do it again. Soon. This is not that post. The short version is that I think we've finally got just the right identity system. Some of my favorite features: there are simple, sensible defaults that work well - you can File / New / Run / Register / Login, and everything works. It supports standard username / password as well as external authentication (OAuth, etc.). It's easy to customize without having to re-implement an entire provider. It's built using pluggable pieces, rather than one large monolithic system. It's built using interfaces like IUser and IRole that allow for unit testing, dependency injection, etc. You can easily add user profile data (e.g. URL, twitter handle, birthday): you just add properties to your ApplicationUser model and they'll automatically be persisted (see the short sketch at the end of this post). You get complete control over how the identity data is persisted: by default, everything works with Entity Framework Code First, but it's built to support changes from small (modify the schema) to big (use another ORM, store your data in a document database or in the cloud or in XML or in the EXIF data of your desktop background or whatever). It's configured via OWIN. More on OWIN and Katana later, but the fact that it's built using OWIN means it's portable. You can find out more in the Authentication and Identity section of the ASP.NET site (and lots more content will be going up there soon).

    New Bootstrap based project templates: The new project templates are built using Bootstrap 3. Bootstrap (formerly Twitter Bootstrap) is a front-end framework that brings a lot of nice benefits. It's responsive, so your projects will automatically scale to device width using CSS media queries. For example, menus are full size on a desktop browser, but on narrower screens you automatically get a mobile-friendly menu. The built-in Bootstrap styles make your standard page elements (headers, footers, buttons, form inputs, tables, etc.) look nice and modern. Bootstrap is themeable, so you can reskin your whole site by dropping in a new Bootstrap theme. Since Bootstrap is pretty popular across the web development community, this gives you a large and rapidly growing variety of templates (free and paid) to choose from.
    Bootstrap also includes a lot of very useful things: components (like progress bars and badges), useful glyphicons, and some jQuery plugins for tooltips, dropdowns, carousels, etc. Here's a look at how the responsive part works. When the page is full screen, the menu and header are optimized for a wide screen display. When I shrink the page down (this is all based on page width, not user agent sniffing), the menu turns into a nice mobile-friendly dropdown. For a quick example, I grabbed a new free theme off bootswatch.com. For simple themes, you just need to download the bootstrap.css file and replace the /content/bootstrap.css file in your project. Now when I refresh the page, I've got a new theme.

    Scaffolding: The big change in scaffolding is that it's one system that works across ASP.NET. You can create a new Empty Web project or Web Forms project and you'll get the Scaffold context menus. For release, we've got MVC 5 and Web API 2 controllers. We had a preview of Web Forms scaffolding in the preview releases, but it wasn't fully baked for RTM. Look for it in a future update, expected pretty soon. This scaffolding system wasn't just changed to work across the ASP.NET frameworks, it's also built to enable future extensibility. That's not in this release, but should also hopefully be out soon.

    Project Readme page: This is a small thing, but I really like it. When you create a new project, you get a Project_Readme.html page that's added to the root of your project and opens in the Visual Studio built-in browser. I love it. A long time ago, when you created a new project we just dumped it on you and left you scratching your head about what to do next. Not ideal. Then we started adding a bunch of Getting Started information to the new project templates. That told you what to do next, but you had to delete all of that stuff out of your website. It doesn't belong there. Not ideal. This is a simple HTML file that's not integrated into your project code at all. You can delete it if you want. But it shows a lot of helpful links that are current for the project you just created. In the future, if we add new wacky project types, they can create readme docs with specific information on how to do appropriately wacky things. Side note: I really like that they used the internal browser in Visual Studio to show this content rather than popping open an HTML page in the default browser. I hate that. It's annoying. If you're doing that, I hope you'll stop. What if some unnamed person has 40 or 90 tabs saved in their browser session? When you pop open your "Thanks for installing my Visual Studio extension!" page, all eleventy billion tabs start up and I wish I'd never installed your thing. Be like these guys and pop Visual Studio specific HTML docs in the Visual Studio browser.

    ASP.NET MVC 5: The biggest change with ASP.NET MVC 5 is that it's no longer a separate project type. It integrates well with the rest of ASP.NET. In addition to that and the other common features we've already looked at (Bootstrap templates, Identity, authentication), here's what's new for ASP.NET MVC.

    Attribute routing: ASP.NET MVC now supports attribute routing, thanks to a contribution by Tim McCall, the author of http://attributerouting.net. With attribute routing you can specify your routes by annotating your actions and controllers. This supports some pretty complex, customized routing scenarios, and it allows you to keep your route information right with your controller actions if you'd like.
    Here's a controller that includes an action whose method name is Hiding, but I've used attribute routing to configure it to /spaghetti/with-nesting/where-is-waldo:

    public class SampleController : Controller
    {
        [Route("spaghetti/with-nesting/where-is-waldo")]
        public string Hiding()
        {
            return "You found me!";
        }
    }

    I enable that in my RouteConfig.cs, and I can use it in conjunction with my other MVC routes like this:

    public class RouteConfig
    {
        public static void RegisterRoutes(RouteCollection routes)
        {
            routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
            routes.MapMvcAttributeRoutes();
            routes.MapRoute(
                name: "Default",
                url: "{controller}/{action}/{id}",
                defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional }
            );
        }
    }

    You can read more about attribute routing in ASP.NET MVC 5 here.

    Filter enhancements: There are two new additions to filters: authentication filters and filter overrides. Authentication filters are a new kind of filter in ASP.NET MVC that run prior to authorization filters in the ASP.NET MVC pipeline and allow you to specify authentication logic per-action, per-controller, or globally for all controllers. Authentication filters process credentials in the request and provide a corresponding principal. Authentication filters can also add authentication challenges in response to unauthorized requests. Override filters let you change which filters apply to a given action method or controller. Override filters specify a set of filter types that should not be run for a given scope (action or controller). This allows you to configure filters that apply globally but then exclude certain global filters from applying to specific actions or controllers.

    ASP.NET Web API 2: ASP.NET Web API 2 includes a lot of new features.

    Attribute routing: ASP.NET Web API supports the same attribute routing system that's in ASP.NET MVC 5. You can read more about the attribute routing features in Web API in this article.

    OAuth 2.0: ASP.NET Web API picks up OAuth 2.0 support, using security middleware running on OWIN (discussed below). This is great for features like authenticated Single Page Applications.

    OData improvements: ASP.NET Web API now has full OData support. That required adding in some of the most powerful operators: $select, $expand, $batch and $value. You can read more about OData operator support in this article by Mike Wasson.

    Lots more: There's a huge list of other features, including CORS (cross-origin resource sharing), IHttpActionResult, IHttpRequestContext, and more. I think the best overview is in the release notes.

    OWIN and Katana: I've written about OWIN and Katana recently. I'm a big fan. OWIN is the Open Web Interface for .NET. It's a spec, like HTML or HTTP, so you can't install OWIN. The benefit of OWIN is that it's a community specification, so anyone who implements it can plug into the ASP.NET stack, either as middleware or as a host. Katana is the Microsoft implementation of OWIN. It leverages OWIN to wire up things like authentication, handlers, modules, IIS hosting, etc., so ASP.NET can host OWIN components and Katana components can run in someone else's OWIN implementation. Howard Dierking just wrote a cool article in MSDN Magazine describing Katana in depth: Getting Started with the Katana Project. He had an interesting example showing an OWIN-based pipeline which leveraged SignalR, ASP.NET Web API and NancyFx components in the same stack. If this kind of thing makes sense to you, that's great. If it doesn't, don't worry, but keep an eye on it.
    You're going to see some cool things happen as a result of ASP.NET becoming more and more pluggable.

    Visual Studio Web Tools: Okay, this stuff's just crazy. Visual Studio has been adding some nice web dev features over the past few years, but they've really cranked it up for this release. Visual Studio is by far my favorite code editor for all web files: CSS, HTML, JavaScript, and lots of popular libraries. Stop thinking of Visual Studio as a big editor that you only use to write back-end code. Stop editing HTML and CSS in Notepad (or Sublime, Notepad++, etc.). Visual Studio starts up in under 2 seconds on a modern computer with an SSD. Misspelling HTML attributes or your CSS classes or jQuery or Angular syntax is stupid. It doesn't make you a better developer, it makes you a silly person who wastes time.

    Browser Link: Browser Link is a real-time, two-way connection between Visual Studio and all connected browsers. It's only attached when you're running locally, in debug, but it applies to any and all connected browsers, including emulators. You may have seen demos that showed the browsers refreshing based on changes in the editor, and I'll agree that's pretty cool. But it's really just the start. It's a two-way connection, and it's built for extensibility. That means you can write extensions that push information from your running application (in IE, Chrome, a mobile emulator, etc.) back to Visual Studio. Mads and team have shown off some demonstrations where they enabled edit mode in the browser, which updated the source HTML back in Visual Studio. It's also possible to look at how the rendered HTML performs, check for compatibility issues, watch for unused CSS classes - the sky's the limit.

    New HTML editor: The previous HTML editor had a lot of old code that didn't allow for improvements. The team rewrote the HTML editor to take advantage of the new(ish) extensibility features in Visual Studio, which then allowed them to add in all kinds of features - things like CSS class and ID IntelliSense (so you type style="" and get a list of classes and IDs for your project), smart indent based on how your document is formatted, JavaScript reference auto-sync, etc. Here's a 3 minute tour from Mads Kristensen.

    Lots more Visual Studio web dev features: That's just a sampling - there's a ton of great features for JavaScript editing, CSS editing, publishing, and Page Inspector (which shows real-time rendering of your page inside Visual Studio). Here are some more short videos showing those features.

    Lots, lots more: Okay, that's just a summary, and it's still quite a bit. Head on over to http://asp.net/vnext for more information, and download Visual Studio 2013 now to get started!
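    Footnote to the Identity section above - here's a minimal sketch of what "just add properties to your ApplicationUser model" looks like. It assumes the default ApplicationUser class from the new project templates (which derives from IdentityUser); the extra profile properties are made-up examples, and with the default EF Code First store they get persisted automatically:

    using System;
    using Microsoft.AspNet.Identity.EntityFramework;

    public class ApplicationUser : IdentityUser
    {
        // Hypothetical profile properties; the default EF Code First store persists them automatically.
        public string Url { get; set; }
        public string TwitterHandle { get; set; }
        public DateTime? Birthday { get; set; }
    }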

    Read the article

  • Toorcon14

    - by danx
    Toorcon 2012 Information Security Conference, San Diego, CA, http://www.toorcon.org/ - Dan Anderson, October 2012. It's almost Halloween, and we all know what that means - yes, of course, it's time for another Toorcon conference! Toorcon is an annual conference for people interested in computer security. This includes the whole range of hackers, computer hobbyists, professionals, security consultants, press, law enforcement, prosecutors, FBI, etc. We're at Toorcon 14 - see earlier blogs for some of the previous Toorcons I've attended (back to 2003). This year's "con" was held at the Westin on Broadway in downtown San Diego, California. The following are not necessarily my views - I'm just the messenger - although I could have misquoted or misparaphrased the speakers. Also, I only reviewed some of the talks below, the ones I attended and that interested me.

    Talks covered: MalAndroid - the Crux of Android Infections, Aditya K. Sood; Programming Weird Machines with ELF Metadata, Rebecca "bx" Shapiro; Privacy at the Handset: New FCC Rules?, Valkyrie; Hacking Measured Boot and UEFI, Dan Griffin; You Can't Buy Security: Building the Open Source InfoSec Program, Boris Sverdlik; What Journalists Want: The Investigative Reporters' Perspective on Hacking, Dave Maas & Jason Leopold; Accessibility and Security, Anna Shubina; Stop Patching, for Stronger PCI Compliance, Adam Brand; McAfee Secure & Trustmarks - a Hacker's Best Friend, Jay James & Shane MacDougall.

    MalAndroid - the Crux of Android Infections. Aditya K. Sood, IOActive, Michigan State PhD candidate. Aditya talked about Android smartphone malware. There's a lot of old Android software out there - over 50% Gingerbread (2.3.x) - and most of it has unpatched vulnerabilities. Of 9 Android vulnerabilities, 8 have known exploits (such as the old Gingerbread Global Object Table exploit). Android protection includes sandboxing, a security scanner, app permissions, and a screened Android app market. The Android permission checker has fine-grained resource control and policy enforcement. Android static analysis also includes a static analysis app checker (bouncer) and a vulnerability checker. What security problems does Android have? User-centric security, which depends on the user to grant permission and make smart decisions. But users don't care or think about malware (they're not aware, not paranoid); all they want is functionality, extensibility, mobility. Android had no "proper" encryption before Android 3.0. There is no built-in protection against social engineering and web tricks. Alternative Android app markets are unsafe - simply visiting some markets can infect Android. Aditya classified Android malware types as: Type A - apps. These interact with the Android app framework. For example, a fake Netflix app, or Android Gold Dream (a game), which uploads user files in a stealthy manner to a remote location. Type K - kernel. Exploits underlying Linux libraries or the kernel. Type H - hybrid. These use multiple layers (app framework, libraries, kernel) and are most commonly used by Android botnets, which are popular with Chinese botnet authors. What are the threats from Android malware? These include leaking info (contacts), banking fraud, corporate network attacks, malware advertising, and malware "hacktivism" (the promotion of social causes - for example, promoting specific leaders of the Tunisian or Iranian revolutions). Android malware is frequently "masqueraded", that is, repackaged inside a legit app with malware. To avoid detection, the hidden malware is not unwrapped until runtime.
    The malware payload can be hidden in, for example, PNG files. Less common are Android bootkits - there are not many around. What they do is hijack the Android init framework - altering system programs and daemons - and then delete themselves. For example, the DKF Bootkit (China). Android app problems: no code signing (all apps are self-signed), native code execution permission, an all-or-none sandbox, alternate market places, no robust Android malware detection at the network level, and a delayed patch process.

    Programming Weird Machines with ELF Metadata. Rebecca "bx" Shapiro, Dartmouth College, NH, https://github.com/bx/elf-bf-tools, @bxsays on twitter. Definitions: "ELF" is an executable file format used in linking and loading executables (on UNIX/Linux-class machines). A "weird machine" uses undocumented computation sources (I think of them as unintended virtual machines). Some examples of "weird machines" are those that return to a weird location, do SQL injection, or corrupt the heap. Bx then talked about using ELF metadata as (an unintended) "weird machine". Some ELF background: a compiler takes source code and generates an ELF object file (hello.o). A static linker makes an ELF executable from the object file. A runtime linker and loader takes the ELF executable and loads and relocates it in memory. The ELF file has symbols to relocate functions and variables. ELF has two relocation tables - one at link time and another at loading time: .rela.dyn (link time) and .dynsym (dynamic table). GOT: Global Offset Table of addresses for dynamically-linked functions. PLT: Procedure Linkage Table - works with the GOT. The memory layout of a process (not the ELF file) is, in order: program (+ heap), dynamic libraries, libc, ld.so, stack (which includes the dynamic table loaded into memory). For ELF, the "weird machine" is found and exploited in the loader. ELF can be crafted for executing viruses, by tricking the runtime into executing interpreted "code" in the ELF symbol table. One can inject parasitic "code" without modifying the actual ELF code portions. Think of the ELF symbol table as an "assembly language" interpreter. It has these elements: instructions (add, move, jump if not 0 (jnz)); symbol table entries as "registers"; symbol table values as "contents"; immediate values as constants; and direct values as addresses (e.g., 0xdeadbeef). A move instruction is a relocation table entry; an add instruction is a relocation table "addend" entry; a jnz instruction takes multiple relocation table entries. The ELF weird machine exploits the loader by relocating relocation table entries. The loader will go on forever until told to stop. It stores state on the stack at "end" and uses IFUNC table entries (containing function pointer addresses). The ELF weird machine, called "Brainfu*k" (BF), has 8 instructions - pointer inc, dec, inc indirect, dec indirect, jump forward, jump backward, print - and 3 registers. Bx showed example BF source code that implemented a Turing machine printing "hello, world". More interesting was the next demo, where bx modified ping. Ping runs suid as root, but quickly drops privilege. BF modified the loader to disable the library function call that drops privilege, so ping remained root. Then BF modified the ping -t argument to execute the -t filename as root. It's best to show what this modified ping does with an example:

    $ whoami
    bx
    $ ping localhost -t backdoor.sh    # executes backdoor
    $ whoami
    root

    The modified code increased from 285948 bytes to 290209 bytes.
    A BF tool compiles an "executable" by modifying the symbol table in an existing ELF executable. The tool modifies the .dynsym and .rela.dyn tables, but not code or data.

    Privacy at the Handset: New FCC Rules? "Valkyrie" (Christie Dudley, Santa Clara Law JD candidate). Valkyrie talked about mobile handset privacy. Some background: Senator Franken (also a comedian) became alarmed about CarrierIQ, where the carriers track their customers. Franken asked the FCC to find out what obligations carriers think they have to protect privacy. The carriers' response was that they are doing just fine with self-regulation - no worries! Carriers need to collect data, such as missed calls, to maintain network quality. But carriers also sell data for marketing. Verizon sells customer data and enables this with a narrow privacy policy (only 1 month to opt out, with difficulties). The data sold is not individually identifiable and is aggregated, but Verizon recommends, as a workaround to the aggregation, "recollating" the data with other databases to identify customers indirectly. The FCC has regulated telephone privacy since 1934 and mobile network privacy since 2007. Also, the carriers say mobile phone privacy is an FTC responsibility (not FCC). The FTC is trying to improve mobile app privacy, but it has no authority over carrier / customer relationships. As a side note, Apple iPhones are unique in that carriers have extra control over iPhones that they don't have with other smartphones; as a result iPhones may be more regulated. Who are the consumer advocates? Everyone knows the EFF, but EPIC (Electronic Privacy Information Center), although more obscure, is more relevant. What to do? Carriers must be accountable. Opt-in and opt-out should be available at any time. Carriers need an incentive to grant users control for those who want it, by holding them liable and responsible for breaches on their clock. Location information should be added to current CPNI privacy protection, and a "pen/trap" judicial order should be required to obtain it (still a lower standard than the 4th Amendment). Politics are on a pro-privacy swing now, with many senators and the White House on board. There will probably be new regulation soon, and enforcement will be a problem, but consumers will still see some benefit.

    Hacking Measured Boot and UEFI. Dan Griffin, JWSecure, Inc., Seattle, @JWSdan. Dan talked about hacking measured UEFI boot. First some terms: UEFI is a boot technology that is replacing BIOS (it has whitelisting and blacklisting). UEFI protects devices against rootkits. TPM: a hardware security device to store hashes and hardware-protected keys. "Secure boot" can control at the firmware level which boot images can boot. "Measured boot" is an OS feature that tracks hashes (from the BIOS, boot loader, kernel, and early drivers). "Remote attestation" allows remote validation and control based on policy on a remote attestation server. Microsoft is pushing TPM (Windows 8 required), but Google is not. Intel's TianoCore is the only open source UEFI implementation. Dan has a Measured Boot Tool at http://mbt.codeplex.com/ with a demo where you can also view TPM data. TPM support is already on enterprise-class machines.
    UEFI weaknesses: UEFI toolkits are evolving rapidly, but UEFI has weaknesses - it assumes the user is an ally; it trusts the TPM implicitly, and the TPM is attached to the computer; the hibernate file is unprotected (disk encryption protects against this); protection is migrating from hardware to firmware; there are delays in patching and whitelist updates; and will UEFI really be adopted by the mainstream (smartphone hardware support, bank support, apathetic consumer support)?

    You Can't Buy Security: Building the Open Source InfoSec Program. Boris Sverdlik, ISDPodcast.com co-host. Boris talked about problems typical of current security audits. "IT security" is an oxymoron - IT exists to enable business, uptime, utilization, and reporting, but doesn't care about security - IT has a conflict of interest. There's no magic bullet ("blinky box"), no one-size-fits-all solution (e.g., Intrusion Detection Systems (IDSs)). Regulations don't make you secure. The cloud is not secure (because of shared data and admin access). Defense and pen testing are not sexy. Auditors are not the solution (security is not a checklist) - what's needed is experience and adaptability - you need soft skills. Step 1: the first thing is to Google and learn the company end-to-end before you start. Get to know the management team (not the IT team), and meet as many people as you can. Don't use arbitrary values such as CISSP scores. Quantitative risk assessment is a myth (e.g. AV * EF = SLE). Learn the different business units and the legal/regulatory obligations, learn the business and where the money is made, verify the company is protected from script kiddies (easy), learn the sensitive information (IP, internal use only), and start with the low-hanging fruit (customer service reps and social engineering). Step 2: policies. Keep policies short and relevant. Generic SANS "security" boilerplate policies don't make sense and are not followed. Focus on acceptable use, data usage, communications, and physical security. Step 3: implementation - keep it simple, stupid. Open source, although useful, is not free (there's an implementation cost). Access controls with authentication and authorization for local and remote access: MS Windows has it; otherwise use OpenLDAP, OpenIAM, etc. Application security: everyone tries to reinvent the wheel - use existing static analysis tools. Review high-risk apps and major revisions. Don't run apps with different risk levels on the same system. Assume the host/client is compromised and use app-level security controls. Network security: VLAN != segregated, because there are too many workarounds. Use explicit firewall rules, active and passive network monitoring (snort is free), disallow end user access to the production environment, and have a proxy instead of direct Internet access. Also, SSL certificates are not good two-factor auth and SSL does not mean "safe." Operational controls: have change, patch, asset, and vulnerability management (OSSI is free). For change management, always review code before pushing to production. For logging, have centralized security logging for business-critical systems, separate security logging from administrative/IT logging, and lock down the logs (as they have everything). Monitor with OSSIM (open source). Use intrusion detection, but not just to fulfill a checkbox: build rules from a whitelist perspective (snort). OSSEC has 95% of what you need. Vulnerability management is a QA function when done right: OpenVAS and Seccubus are free. Security awareness: the reality is users will always click everything. Build real awareness, not a compliance-driven checkbox, and have it integrated into the culture.
    Pen test by crowdsourcing - test with logging. COSSP (http://www.cossp.org/) - the Comprehensive Open Source Security Project.

    What Journalists Want: The Investigative Reporters' Perspective on Hacking. Dave Maas, San Diego CityBeat; Jason Leopold, Truthout.org. The difference between hackers and investigative journalists: for hackers, the motivation varies but the method is the same, with technological specialties. For investigative journalists, it's about one thing - the story - and they need broad info-gathering skills. J-School in 60 seconds: the generic formula is a person or issue of public interest, new info, or a new angle; the generic criteria are proximity, prominence, timeliness, human interest, oddity, or consequence. Media awareness of hackers and trends: journalists are becoming extremely aware of hackers through congressional debates (privacy, data breaches), the demand for data-mining journalists, the use of coding and web development by journalists, and journalists busted for hacking (Murdoch). Info gathering by investigative journalists includes public records laws. The federal Freedom of Information Act (FOIA) is good, but slow. The California Public Records Act is a lot stronger. FOIA takes forever because of foot-dragging - it helps to be specific. Often you need to sue (especially the FBI). CPRA is faster, and requests can be vague. There are also dumps and leaks (a la Wikileaks). Journalists want leads, protection for themselves and their sources, and tools adapted for news gathering (Google hacking). Anonymity is important to whistleblowers. They want no digital footprint left behind (e.g., email, web logs). They don't trust encryption; they want to feel safe and secure. Whistleblower laws are very weak - there's no upside for whistleblowers - they have to be very passionate to do it.

    Accessibility and Security, or: How I Learned to Stop Worrying and Love the Halting Problem. Anna Shubina, Dartmouth College. Anna talked about how accessibility and security are related. Accessibility of digital content (not real-world accessibility) mostly refers to blind users and screen readers, for our purposes. Accessibility is about parsing documents, as are many security issues. "Rich" executable content causes accessibility to fail, and often causes security to fail. For example, MS Word is an executable format - it's not a document exchange format - and it's more dangerous than PDF or HTML. Accessibility is often the first and maybe only sanity check on parsing; there is no choice, because someone may want to read what you write. Google, for example, is very particular about which web browser you use and is bad at supporting other browsers. It uses JavaScript instead of links, often requiring mouseover to display content. PDF is a security nightmare: an executable format with embedded Flash, JavaScript, etc., and 15 million lines of code. Google Chrome doesn't handle PDF correctly, causing several security bugs. PDF has an accessibility checker and PDF tagging to help with accessibility. But no PDF checker checks for incorrect tags or untagged content, or validates lists or tables. None check executable content at all. The "Halting Problem" is: can one decide whether a program will ever stop? The answer, in general, is no (Rice's theorem). The same holds true for accessibility checkers. Language-theoretic security says complicated data formats are hard to parse, and the problem cannot be solved in general due to the Halting Problem.
    W3C Web Accessibility Guidelines: "Perceivable, Operable, Understandable, Robust". Not much help, though, except for "Robust", but here are some gems:
    * all information should be parsable (paraphrasing)
    * if not parsable, it cannot be converted to alternate formats
    * maximize compatibility in new document formats
    Executable web pages are bad for security and accessibility. They say it's for a better web experience, but is it necessary to stuff web pages with JavaScript for a better experience? A good example is The Drudge Report - it has hand-written HTML with no JavaScript, yet drives a lot of web traffic due to good content. A bad example is Google News - hidden scrollbars, guessing user input. Solutions: accessibility and security problems come from the same source; expose the "better user experience" myth; keep your corner of the Internet parsable; remember the "Halting Problem" and recognize false solutions (checking and verifying tools).

    Stop Patching, for Stronger PCI Compliance. Adam Brand, Protiviti, @adamrbrand, http://www.picfun.com/. Adam talked about PCI compliance for retail sales. Take an example: for PCI compliance, 50% of Brian's time (an IT guy), 960 hours/year, was spent patching POSs in 850 restaurants. Often applying some patches makes no sense (like fixing a browser vulnerability on a server). "Scanner worship" is overuse of vulnerability scanners - it gives a warm fuzzy feeling and it's simple (red or green results - fix the reds). Scanners give a false sense of security. In reality, breaches from missing patches are uncommon - more common problems are default passwords, cleartext authentication, and misconfiguration (open firewall ports). Patching myths: Myth 1 - install within 30 days of patch release (but PCI §6.1 allows a "risk-based approach" instead). Myth 2 - the vendor decides what's critical (also PCI §6.1; but §6.2 requires user ranking of vulnerabilities instead). Myth 3 - scan and rescan until it passes (but PCI §11.2.1b says this applies only to high-risk vulnerabilities). Adam says good recommendations come from NIST 800-40. Instead, use sane patching and focus on what's really important. From NIST 800-40: Proactive - use a proactive vulnerability management process: use change control and configuration management, and monitor file integrity. Monitor - start with the NVD and other vulnerability alerts, not scanner results. Evaluate - public-facing system? workstation? internal server? (risk rank). Decide - on action and timeline. Test - pre-test patches (stability, functionality, rollback) for change control. Install - notify, change control, tickets.

    McAfee Secure & Trustmarks - a Hacker's Best Friend. Jay James, Shane MacDougall, Tactical Intelligence Inc., Canada. "McAfee Secure Trustmark" is a website seal marketed by McAfee. A website gets this badge if it passes their remote scanning. The problem is that removal of the trustmark acts as a flag that you're vulnerable. It's easy to view status changes by viewing the McAfee list on their website or on Google. "Secure TrustGuard" is similar to McAfee. Jay and Shane wrote Perl scripts to gather sites from McAfee and search engines. If a site's certification image changes to a 1x1 pixel image, then it is no longer certified. Their scripts take deltas of scans to see what changed daily. The bottom line is that a change in TrustGuard status is a flag for hackers to attack your site. The entire idea of seals is silly - you're raising a flag announcing when you're vulnerable.

    Read the article

  • Developing web apps using ASP.NET MVC 3, Razor and EF Code First - Part 1

    - by shiju
    In this post, I will demonstrate web application development using ASP.NET MVC 3, Razor and EF Code First. This post will also cover Dependency Injection using Unity 2.0 and a generic Repository and Unit of Work for EF Code First. The following frameworks will be used for this step by step tutorial: ASP.NET MVC 3, EF Code First CTP 5, and Unity 2.0.

    Define Domain Model

    Let's create the domain model for our simple web application.

    Category class

    public class Category
    {
        public int CategoryId { get; set; }
        [Required(ErrorMessage = "Name Required")]
        [StringLength(25, ErrorMessage = "Must be less than 25 characters")]
        public string Name { get; set; }
        public string Description { get; set; }
        public virtual ICollection<Expense> Expenses { get; set; }
    }

    Expense class

    public class Expense
    {
        public int ExpenseId { get; set; }
        public string Transaction { get; set; }
        public DateTime Date { get; set; }
        public double Amount { get; set; }
        public int CategoryId { get; set; }
        public virtual Category Category { get; set; }
    }

    We have two domain entities - Category and Expense. A single category contains a list of expense transactions and every expense transaction should have a Category. In this post, we will be focusing on CRUD operations for the Category entity and will work with the Expense entity through a View Model object in a later post. The source code for this application will be refactored over time. The above entities are very simple POCO (Plain Old CLR Object) classes, and the Category entity is decorated with validation attributes from the System.ComponentModel.DataAnnotations namespace. Now we want to use these entities to define model objects for Entity Framework 4. Using the Code First approach of Entity Framework, we can first define the entities by simply writing POCO classes without any coupling to any API or database library. This approach lets you focus on the domain model, which enables Domain-Driven Development for applications. EF Code First support is currently enabled with a separate API that runs on top of Entity Framework 4. EF Code First has reached CTP 5 as I am writing this article.

    Creating a Context Class for Entity Framework

    We have created our domain model, so let's create a class for working with Entity Framework Code First. For this, you have to download EF Code First CTP 5 and add a reference to the assembly EntityFramework.dll. You can also use NuGet to download and add the reference to EF Code First.

    public class MyFinanceContext : DbContext
    {
        public MyFinanceContext() : base("MyFinance") { }
        public DbSet<Category> Categories { get; set; }
        public DbSet<Expense> Expenses { get; set; }
    }

    The above class MyFinanceContext is derived from DbContext, which can connect your model classes to a database. The MyFinanceContext class maps our Category and Expense classes to the database tables Categories and Expenses using DbSet<TEntity>, where TEntity is any POCO class. When we run the application for the first time, it will automatically create the database. EF Code First looks for a connection string in web.config or app.config that has the same name as the DbContext class. If it does not find a connection string matching that convention, it will automatically create the database in the local SQL Express instance by default, and the name of the database will be the same as the DbContext class. You can also define the name of the database in the constructor of the DbContext class.
    Unlike NHibernate, we don't have to use any XML-based mapping files or a Fluent interface for mapping between our model and the database. The model classes of Code First work on the basis of conventions, and we can also use a fluent API to refine our model. The convention for the primary key is 'Id' or '<class name>Id'. If primary key properties are detected with type 'int', 'long' or 'short', they will automatically be registered as identity columns in the database by default. Primary key detection is not case sensitive. We can define our model classes with validation attributes from the System.ComponentModel.DataAnnotations namespace, and validation rules are automatically enforced when a model object is updated or saved.

    Generic Repository for EF Code First

    We have created the model classes and the DbContext class. Now we have to create a generic repository for data persistence with EF Code First. If you don't know about the repository pattern, check out Martin Fowler's article on Repository. Let's create a generic repository to work with DbContext and DbSet generics.

    public interface IRepository<T> where T : class
    {
        void Add(T entity);
        void Delete(T entity);
        T GetById(long Id);
        IEnumerable<T> All();
    }

    RepositoryBase - generic repository class

    public abstract class RepositoryBase<T> where T : class
    {
        private MyFinanceContext database;
        private readonly IDbSet<T> dbset;

        protected RepositoryBase(IDatabaseFactory databaseFactory)
        {
            DatabaseFactory = databaseFactory;
            dbset = Database.Set<T>();
        }

        protected IDatabaseFactory DatabaseFactory
        {
            get; private set;
        }

        protected MyFinanceContext Database
        {
            get { return database ?? (database = DatabaseFactory.Get()); }
        }

        public virtual void Add(T entity)
        {
            dbset.Add(entity);
        }

        public virtual void Delete(T entity)
        {
            dbset.Remove(entity);
        }

        public virtual T GetById(long id)
        {
            return dbset.Find(id);
        }

        public virtual IEnumerable<T> All()
        {
            return dbset.ToList();
        }
    }

    DatabaseFactory class

    public class DatabaseFactory : Disposable, IDatabaseFactory
    {
        private MyFinanceContext database;

        public MyFinanceContext Get()
        {
            return database ?? (database = new MyFinanceContext());
        }

        protected override void DisposeCore()
        {
            if (database != null)
                database.Dispose();
        }
    }

    Unit of Work

    If you are new to the Unit of Work pattern, check out Fowler's article on Unit of Work. According to Martin Fowler, the Unit of Work pattern "maintains a list of objects affected by a business transaction and coordinates the writing out of changes and the resolution of concurrency problems." Let's create a class for handling Unit of Work.

    public interface IUnitOfWork
    {
        void Commit();
    }

    UnitOfWork class

    public class UnitOfWork : IUnitOfWork
    {
        private readonly IDatabaseFactory databaseFactory;
        private MyFinanceContext dataContext;

        public UnitOfWork(IDatabaseFactory databaseFactory)
        {
            this.databaseFactory = databaseFactory;
        }

        protected MyFinanceContext DataContext
        {
            get { return dataContext ?? (dataContext = databaseFactory.Get()); }
        }

        public void Commit()
        {
            DataContext.Commit();
        }
    }

    The Commit method of the UnitOfWork calls the Commit method of the MyFinanceContext class, which in turn executes the SaveChanges method of the DbContext class.
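    Note that the MyFinanceContext class shown earlier does not declare a Commit method; for the UnitOfWork above to compile, something along these lines would need to be added to it (a minimal sketch that simply wraps SaveChanges, matching the description in the previous sentence):

    public class MyFinanceContext : DbContext
    {
        public MyFinanceContext() : base("MyFinance") { }
        public DbSet<Category> Categories { get; set; }
        public DbSet<Expense> Expenses { get; set; }

        // Sketch: lets UnitOfWork.Commit() delegate to DbContext.SaveChanges().
        public virtual void Commit()
        {
            base.SaveChanges();
        }
    }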
    Repository class for Category

    In this post, we are focusing on persistence for the Category entity and will work on the other entities in a later post. Let's create a repository for handling CRUD operations for Category, derived from the generic repository base class RepositoryBase<T>.

    public class CategoryRepository : RepositoryBase<Category>, ICategoryRepository
    {
        public CategoryRepository(IDatabaseFactory databaseFactory)
            : base(databaseFactory)
        {
        }
    }

    public interface ICategoryRepository : IRepository<Category>
    {
    }

    If we need additional methods beyond the generic repository for the Category, we can define them in the CategoryRepository (a small sketch of this is shown at the end of this post).

    Dependency Injection using Unity 2.0

    If you are new to Inversion of Control / Dependency Injection or Unity, please have a look at my articles at http://weblogs.asp.net/shijuvarghese/archive/tags/IoC/default.aspx. I want to create a custom lifetime manager for Unity to store the container in the current HttpContext.

    public class HttpContextLifetimeManager<T> : LifetimeManager, IDisposable
    {
        public override object GetValue()
        {
            return HttpContext.Current.Items[typeof(T).AssemblyQualifiedName];
        }

        public override void RemoveValue()
        {
            HttpContext.Current.Items.Remove(typeof(T).AssemblyQualifiedName);
        }

        public override void SetValue(object newValue)
        {
            HttpContext.Current.Items[typeof(T).AssemblyQualifiedName] = newValue;
        }

        public void Dispose()
        {
            RemoveValue();
        }
    }

    Let's create a controller factory for Unity in the ASP.NET MVC 3 application.

    public class UnityControllerFactory : DefaultControllerFactory
    {
        IUnityContainer container;

        public UnityControllerFactory(IUnityContainer container)
        {
            this.container = container;
        }

        protected override IController GetControllerInstance(RequestContext reqContext, Type controllerType)
        {
            IController controller;
            if (controllerType == null)
                throw new HttpException(
                    404, String.Format(
                        "The controller for path '{0}' could not be found " +
                        "or it does not implement IController.",
                        reqContext.HttpContext.Request.Path));

            if (!typeof(IController).IsAssignableFrom(controllerType))
                throw new ArgumentException(
                    string.Format(
                        "Type requested is not a controller: {0}",
                        controllerType.Name),
                    "controllerType");
            try
            {
                controller = container.Resolve(controllerType) as IController;
            }
            catch (Exception ex)
            {
                throw new InvalidOperationException(String.Format(
                    "Error resolving controller {0}",
                    controllerType.Name), ex);
            }
            return controller;
        }
    }

    Configure contract and concrete types in Unity

    Let's configure our contract and concrete types in Unity for resolving our dependencies.
    private void ConfigureUnity()
    {
        // Create the Unity container and register types
        IUnityContainer container = new UnityContainer()
            .RegisterType<IDatabaseFactory, DatabaseFactory>(new HttpContextLifetimeManager<IDatabaseFactory>())
            .RegisterType<IUnitOfWork, UnitOfWork>(new HttpContextLifetimeManager<IUnitOfWork>())
            .RegisterType<ICategoryRepository, CategoryRepository>(new HttpContextLifetimeManager<ICategoryRepository>());

        // Set the container for the controller factory
        ControllerBuilder.Current.SetControllerFactory(
            new UnityControllerFactory(container));
    }

    In the above ConfigureUnity method, we are registering our types with the Unity container using the custom lifetime manager HttpContextLifetimeManager. Let's call the ConfigureUnity method in Global.asax.cs to set the controller factory for Unity and configure the types with Unity.

    protected void Application_Start()
    {
        AreaRegistration.RegisterAllAreas();
        RegisterGlobalFilters(GlobalFilters.Filters);
        RegisterRoutes(RouteTable.Routes);
        ConfigureUnity();
    }

    Developing the web application using ASP.NET MVC 3

    We have created the domain model for our web application, and we have also created repositories and configured dependencies with the Unity container. Now we have to create controller classes and views for doing CRUD operations against the Category entity. Let's create the controller class for Category.

    Category Controller

    public class CategoryController : Controller
    {
        private readonly ICategoryRepository categoryRepository;
        private readonly IUnitOfWork unitOfWork;

        public CategoryController(ICategoryRepository categoryRepository, IUnitOfWork unitOfWork)
        {
            this.categoryRepository = categoryRepository;
            this.unitOfWork = unitOfWork;
        }

        public ActionResult Index()
        {
            var categories = categoryRepository.All();
            return View(categories);
        }

        [HttpGet]
        public ActionResult Edit(int id)
        {
            var category = categoryRepository.GetById(id);
            return View(category);
        }

        [HttpPost]
        public ActionResult Edit(int id, FormCollection collection)
        {
            var category = categoryRepository.GetById(id);
            if (TryUpdateModel(category))
            {
                unitOfWork.Commit();
                return RedirectToAction("Index");
            }
            else return View(category);
        }

        [HttpGet]
        public ActionResult Create()
        {
            var category = new Category();
            return View(category);
        }

        [HttpPost]
        public ActionResult Create(Category category)
        {
            if (!ModelState.IsValid)
            {
                return View("Create", category);
            }
            categoryRepository.Add(category);
            unitOfWork.Commit();
            return RedirectToAction("Index");
        }

        [HttpPost]
        public ActionResult Delete(int id)
        {
            var category = categoryRepository.GetById(id);
            categoryRepository.Delete(category);
            unitOfWork.Commit();
            var categories = categoryRepository.All();
            return PartialView("CategoryList", categories);
        }
    }

    Creating Views in Razor

    Now we are going to create views in Razor for our ASP.NET MVC 3 application. Let's create a partial view CategoryList.cshtml for listing category information and providing links for the Edit and Delete operations.
    CategoryList.cshtml

    @using MyFinance.Helpers;
    @using MyFinance.Domain;
    @model IEnumerable<Category>
    <table>
        <tr>
            <th>Actions</th>
            <th>Name</th>
            <th>Description</th>
        </tr>
        @foreach (var item in Model) {
            <tr>
                <td>
                    @Html.ActionLink("Edit", "Edit", new { id = item.CategoryId })
                    @Ajax.ActionLink("Delete", "Delete", new { id = item.CategoryId }, new AjaxOptions { Confirm = "Delete Expense?", HttpMethod = "Post", UpdateTargetId = "divCategoryList" })
                </td>
                <td>
                    @item.Name
                </td>
                <td>
                    @item.Description
                </td>
            </tr>
        }
    </table>
    <p>
        @Html.ActionLink("Create New", "Create")
    </p>

    The delete link provides Ajax functionality using Ajax.ActionLink. This makes an Ajax request to the Delete action method in the CategoryController class. In the Delete action method, the partial view CategoryList is returned after deleting the record. We are using the CategoryList view for the Ajax functionality and also for the Index view, which displays the list of category information. Let's create the Index view using the partial view CategoryList.

    Index.cshtml

    @model IEnumerable<MyFinance.Domain.Category>
    @{
        ViewBag.Title = "Index";
    }
    <h2>Category List</h2>
    <script src="@Url.Content("~/Scripts/jquery.unobtrusive-ajax.min.js")" type="text/javascript"></script>
    <div id="divCategoryList">
        @Html.Partial("CategoryList", Model)
    </div>

    We can call partial views using the Html.Partial helper method. Now we are going to create view pages for the insert and update functionality for the Category. Both view pages share a common user interface for entering the category information, so I want to create an EditorTemplate for the Category information. We have to create the EditorTemplate with the same name as the entity object so that we can refer to it on the view pages using @Html.EditorFor(model => model). So let's create a template with the name Category. Let's create the view page for inserting Category information.

    @model MyFinance.Domain.Category
    @{
        ViewBag.Title = "Save";
    }
    <h2>Create</h2>
    <script src="@Url.Content("~/Scripts/jquery.validate.min.js")" type="text/javascript"></script>
    <script src="@Url.Content("~/Scripts/jquery.validate.unobtrusive.min.js")" type="text/javascript"></script>
    @using (Html.BeginForm()) {
        @Html.ValidationSummary(true)
        <fieldset>
            <legend>Category</legend>
            @Html.EditorFor(model => model)
            <p>
                <input type="submit" value="Create" />
            </p>
        </fieldset>
    }
    <div>
        @Html.ActionLink("Back to List", "Index")
    </div>

    ViewStart file

    In Razor views, we can add a file named _ViewStart.cshtml in the Views directory, and it will be shared among all the views within the Views directory. The code below in _ViewStart.cshtml sets the Layout page for every view in the Views folder.

    @{
        Layout = "~/Views/Shared/_Layout.cshtml";
    }

    Source Code

    You can download the source code from http://efmvc.codeplex.com/. The source will be refactored over time.

    Summary

    In this post, we created a simple web application using ASP.NET MVC 3 and EF Code First. We discussed technologies and practices such as ASP.NET MVC 3, Razor, EF Code First, Unity 2, the generic Repository and Unit of Work. In my later posts, I will modify the application and discuss more things.
Source Code

You can download the source code from http://efmvc.codeplex.com/. The source will be refactored over time.

Summary

In this post, we created a simple web application using ASP.NET MVC 3 and EF Code First, and discussed technologies and practices such as ASP.NET MVC 3, Razor, EF Code First, Unity 2, the generic Repository pattern and the Unit of Work pattern. In later posts I will modify the application and discuss more topics. Stay tuned to my blog for more posts on step-by-step application building.

    Read the article

  • Scaling-out Your Services by Message Bus based WCF Transport Extension – Part 1 – Background

    - by Shaun
    Cloud computing gives us more flexibility with computing resources: we can provision and deploy an application or service with multiple instances over multiple machines. As the number of service instances grows, balancing the incoming messages and workload becomes a new challenge. Currently there are two approaches we can use to pass incoming messages to the service instances; I'll call them dispatcher mode and pulling mode.

Dispatcher Mode

The dispatcher mode introduces a role that is responsible for finding the best service instance to process each request. The image below describes the shape of this mode. Four clients communicate with the service through the underlying transport. For example, if we are using HTTP, the clients might all connect to the same service URL. On the server side there is a dispatcher listening on this URL that retrieves all incoming messages. When a message comes in, the dispatcher finds a proper service instance to process it. There are three mechanisms for finding that instance:

Round-robin: The dispatcher always sends the message to the next instance. For example, if the dispatcher sent the previous message to instance 2, the next message will be sent to instance 3, regardless of whether instance 3 is busy at that moment.

Random: The dispatcher picks a service instance at random and, as in round-robin mode, does not consider whether the instance is busy.

Sticky: The dispatcher sends all related messages to the same service instance. This approach is typically used when the service methods are stateful or session-based.

As you can see, none of these approaches is really load balanced. The clients send messages at any time, and each message may take a different amount of processing time on the server side. This means that in some cases some service instances are very busy while others are almost idle. For example, if we were using round-robin mode, it could happen that most of the simple task messages were passed to instance 1 while the complex ones were sent to instance 3, even though instance 1 should be idle.

This brings some problems to our architecture. First, the response to the clients might be slower than it should be. As shown in the figure above, messages 6 and 9 could have been processed by instance 1 or instance 2, but in reality they were dispatched to the busy instance 3 because of the round-robin rule. Second, if many requests arrive from the clients in a very short period, the service instances might fill up with pending tasks and some instances might crash. Third, if we are using a cloud platform to host our service instances, for example Windows Azure, the computing resource is billed by the deployment period instead of the actual CPU usage. This means that any idle service instance is wasting our money! Finally, the dispatcher is the bottleneck of our system, since all incoming messages must be routed through it. If we are using HTTP or TCP as the transport, the dispatcher is effectively a network load balancer; if we want more capacity we have to scale it up, or buy a hardware load balancer, which is very expensive, in addition to scaling out the service instances.
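To make that weakness concrete, here is a minimal C# sketch of a round-robin dispatcher. The RoundRobinDispatcher class and its endpoint strings are invented for the example, not part of any real framework; the point is simply that the next target is chosen without ever looking at instance load.

using System;

// Hypothetical sketch: a round-robin dispatcher that never checks instance load.
// Single-threaded for simplicity; a real dispatcher would need synchronization.
public class RoundRobinDispatcher
{
    private readonly string[] instanceEndpoints;
    private int next;

    public RoundRobinDispatcher(string[] instanceEndpoints)
    {
        this.instanceEndpoints = instanceEndpoints;
    }

    public string Dispatch(string message)
    {
        // Pick the next instance and wrap around, regardless of how busy it is.
        string target = instanceEndpoints[next];
        next = (next + 1) % instanceEndpoints.Length;
        Console.WriteLine("Routing '" + message + "' to " + target);
        return target; // a real dispatcher would forward the message here
    }
}

Calling new RoundRobinDispatcher(new[] { "instance1", "instance2", "instance3" }).Dispatch("message 1") and so on simply cycles through the endpoints; a simple task routed to a busy endpoint still waits behind that instance's backlog, which is exactly the imbalance described above.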
Pulling Mode

Pulling mode doesn't need a dispatcher to route the messages. All service instances listen on the same transport and, whenever they are idle, try to retrieve the next suitable message to process.

Since there is no dispatcher in pulling mode, it places some requirements on the transport:

The transport must support multiple clients connecting and multiple servers listening. HTTP and TCP don't allow multiple processes to listen on the same address and port, so they cannot be used in pulling mode directly.

All messages in the transport must be delivered FIFO, which means the oldest message must be received before newer ones.

Message selection would be a plus. With it, both the service and the client can specify selection criteria and receive only certain kinds of messages. This feature is not mandatory, but it is very useful when implementing the request-reply and duplex WCF channel modes; otherwise we must keep an in-memory dictionary to store the reply messages. I will explain more about this in the following articles.

A message bus, or message queue, is the best candidate for the transport when using pulling mode. First, it allows multiple applications to listen on the same queue, and it is FIFO. Some message buses also support message selection, such as TIBCO EMS and RabbitMQ. Others provide an in-memory dictionary that can store the reply messages, for example Redis.

The principle of pulling mode is to let the service instances manage themselves: each instance tries to retrieve the next pending incoming message as soon as it has finished its current task. This gives us several benefits and solves the problems we met in dispatcher mode:

Each incoming message is received by the best available instance, so the load is genuinely balanced. It will not happen that some instances are busy while others are idle, since an idle instance simply retrieves more tasks and keeps itself busy.

Because all instances try their best to stay busy, we can use fewer instances than in dispatcher mode, which is more cost effective.

Since there is no dispatcher in the system, there is no bottleneck. When we introduce more service instances, in dispatcher mode we have to change something so the dispatcher knows about the new instances; in pulling mode, because all service instances are self-managed, no extra change is needed at all.

If many messages arrive at once, the message bus queues them in the transport, so the service instances will not crash under the load.

Those are the benefits of pulling mode, but it introduces some problems as well:

Process tracking and debugging become more difficult. Since the service instances are self-managed, we cannot know in advance which instance will process a given message, so we need more information to support debugging and tracking.

Real-time response may not be supported. Each service instance only picks up the next message after the current one is done, so if we have real-time requirements this may not be a good solution.

Weighing these pros and cons, pulling mode is the better solution for a distributed system architecture, because what we need most is scalability, cost effectiveness and self-management.
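As a contrast to the dispatcher sketch above, the following C# sketch shows the pulling mode in miniature. A BlockingCollection stands in for the message bus (FIFO, multiple listeners), and the three worker tasks play the role of self-managed service instances; all the names here are invented for the example.

using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical sketch: self-managed workers pulling from a shared queue.
public static class PullingModeSketch
{
    public static void Main()
    {
        // The blocking collection stands in for the message bus.
        using (var bus = new BlockingCollection<string>(new ConcurrentQueue<string>()))
        {
            var workers = new Task[3];
            for (int i = 0; i < workers.Length; i++)
            {
                int instanceId = i + 1;
                workers[i] = Task.Run(() =>
                {
                    // Each "service instance" pulls the next message whenever it is idle.
                    foreach (string message in bus.GetConsumingEnumerable())
                    {
                        Console.WriteLine("Instance " + instanceId + " processing " + message);
                        Thread.Sleep(100); // simulate work
                    }
                });
            }

            for (int m = 1; m <= 10; m++)
            {
                bus.Add("message " + m);
            }

            bus.CompleteAdding(); // no more messages; workers drain the queue and exit
            Task.WaitAll(workers);
        }
    }
}

Busy instances simply take longer to come back for the next message, so the work spreads itself out without any routing logic.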
WCF and WCF Transport Extensibility

Windows Communication Foundation (WCF) is a framework for building service-oriented applications, and in the .NET world WCF is the best way to implement a service. In this series I'm going to demonstrate how to implement the pulling mode on top of a message bus by extending WCF. I don't want to go deep into every related area of WCF, but I will highlight its transport extensibility.

When we implement an RPC foundation there are many aspects we need to deal with, for example message encoding, encryption, authentication, and message sending and receiving. In WCF, each aspect is represented by a channel. A message is passed through all the necessary channels and finally sent to the underlying transport; on the other side the message is received from the transport and passed through the same channels up to the business logic. This model is called the "channel stack" in WCF, and the last channel in the stack must always be a transport channel, which is responsible for sending and receiving the messages. Since we are going to implement WCF over a message bus and build the pulling-mode scaling-out solution, we need to create our own transport channel so that the client and service can exchange messages over our bus.

Before we dig into the transport channel, let's look at the message exchange patterns that WCF defines. A message exchange pattern (MEP) defines how client and service exchange messages over the transport. WCF defines three basic MEPs: datagram, request-reply and duplex.

Datagram: Also known as one-way, or fire-and-forget mode. The message is sent from the client to the service, and no reply from the service is needed. The client doesn't care about the result of the message at all.

Request-Reply: A very commonly used pattern. The client sends a request message to the service and waits until the reply message comes back from the service.

Duplex: The client sends a message to the service, and while processing the message the service can call back to the client. During the callback the service acts like a client and the client acts like a service.

In WCF, each MEP has a set of associated channel interfaces:

Datagram: IInputChannel, IOutputChannel
Request-Reply: IRequestChannel, IReplyChannel
Duplex: IDuplexChannel

The channels are created by a ChannelListener on the server side and a ChannelFactory on the client side. The ChannelListener and ChannelFactory are created by the TransportBindingElement, and the TransportBindingElement is created by the Binding, which can be defined as a new binding or composed from a CustomBinding. For more information about the transport channel model, please refer to the MSDN documentation. (The original post includes figures showing the transport channel objects for the request-reply, datagram and duplex MEPs.)

Having investigated the WCF transport architecture, the channel model and the MEPs, we can finally identify what we have to build for our message bus based transport layer:

Binding: (Optional) Defines the channel elements in the channel stack and adds our transport binding element at the bottom of the stack. We could also use the built-in CustomBinding instead.

TransportBindingElement: Defines which MEPs are supported by our transport and creates the related ChannelListener and ChannelFactory. It also defines the scheme of the endpoints that use this transport. (A rough sketch of this extension point follows this list.)

ChannelListener: Creates the server-side channel for a given MEP. We can have one ChannelListener that creates channels for all supported MEPs, or one ChannelListener per MEP. In this series I will use the second approach.

ChannelFactory: Creates the client-side channel for a given MEP. We can have one ChannelFactory that creates channels for all supported MEPs, or one ChannelFactory per MEP. In this series I will use the second approach.

Channels: Based on the MEPs we want to support, we need to implement the corresponding channels. For example, if we want our transport to support request-reply mode we must implement IRequestChannel and IReplyChannel. In this series I will implement all three MEPs listed above, one by one.

Scaffold: To make our transport extension work we also need some scaffolding, for example classes to send and receive messages through our message bus, and code to read and write the WCF message. These are not strictly necessary but will be very useful in our example.
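For orientation, here is a rough C# sketch of the TransportBindingElement extension point referenced above. This is not the implementation the series builds: the scheme name is made up, the build methods only throw placeholders, and the comments refer to hypothetical factory and listener classes standing in for the ones developed in later posts.

using System;
using System.ServiceModel.Channels;

// Hypothetical sketch of a message bus transport binding element that
// advertises support for the request-reply MEP only.
public class MessageBusTransportBindingElement : TransportBindingElement
{
    public MessageBusTransportBindingElement()
    {
    }

    protected MessageBusTransportBindingElement(MessageBusTransportBindingElement other)
        : base(other)
    {
    }

    // The URI scheme used by endpoints of this transport (invented name).
    public override string Scheme
    {
        get { return "net.msgbus"; }
    }

    public override BindingElement Clone()
    {
        return new MessageBusTransportBindingElement(this);
    }

    public override bool CanBuildChannelFactory<TChannel>(BindingContext context)
    {
        // Client side: only the request channel is supported in this sketch.
        return typeof(TChannel) == typeof(IRequestChannel);
    }

    public override bool CanBuildChannelListener<TChannel>(BindingContext context)
    {
        // Server side: only the reply channel is supported in this sketch.
        return typeof(TChannel) == typeof(IReplyChannel);
    }

    public override IChannelFactory<TChannel> BuildChannelFactory<TChannel>(BindingContext context)
    {
        if (!CanBuildChannelFactory<TChannel>(context))
            throw new NotSupportedException(typeof(TChannel).Name);
        // A real implementation would return its custom request channel factory here,
        // e.g. a MessageBusRequestChannelFactory (hypothetical name).
        throw new NotImplementedException("Implemented in later parts of the series.");
    }

    public override IChannelListener<TChannel> BuildChannelListener<TChannel>(BindingContext context)
    {
        if (!CanBuildChannelListener<TChannel>(context))
            throw new NotSupportedException(typeof(TChannel).Name);
        // A real implementation would return its custom reply channel listener here,
        // e.g. a MessageBusReplyChannelListener (hypothetical name).
        throw new NotImplementedException("Implemented in later parts of the series.");
    }
}

The important point is which members WCF calls: Scheme and Clone during binding composition, and the CanBuild/Build pairs when the channel stack is constructed on each side.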
Message Bus

There is only one thing remaining before we can begin to implement our scaling-out WCF transport: the message bus itself. As I mentioned above, the message bus must have certain features in order to support all the WCF MEPs. In my company we will be using TIBCO EMS, which is an enterprise message bus product, but as I said before we can use any message bus product that satisfies our requirements. Here I would like to introduce an interface that separates the message bus from WCF, which allows us to implement the bus operations with whatever kind of bus we decide to use. The interface looks like this:

public interface IBus : IDisposable
{
    string SendRequest(string message, bool fromClient, string from, string to = null);

    void SendReply(string message, bool fromClient, string replyTo);

    BusMessage Receive(bool fromClient, string replyTo);
}

There are only three methods on the bus interface. Let me explain them one by one.

The SendRequest method is responsible for sending a request message onto the bus. Its parameters are:

message: The WCF message content.
fromClient: Indicates whether this message came from the client.
from: The channel ID that this message was sent from. A channel ID is generated whenever a channel is created, which will be explained in the following articles.
to: The channel ID that should receive this message. In the request-reply and duplex MEPs this is necessary, since the reply message must be received by the channel that sent the related request message.

The SendReply method is responsible for sending a reply message. It is very similar to the previous one, but has no "from" parameter, because there is never a need to reply to a reply message in any MEP.

The Receive method is responsible for waiting for an incoming message, which can be a request message or a specific reply message. It returns a BusMessage object, which carries some information about the channel. The BusMessage class is:

public class BusMessage
{
    public string MessageID { get; private set; }
    public string From { get; private set; }
    public string ReplyTo { get; private set; }
    public string Content { get; private set; }

    public BusMessage(string messageId, string fromChannelId, string replyToChannelId, string content)
    {
        MessageID = messageId;
        From = fromChannelId;
        ReplyTo = replyToChannelId;
        Content = content;
    }
}
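To show how the request-reply flow maps onto this interface, here is a small hedged sketch of one round trip written directly against IBus. The channel ID and the exact order of calls are assumptions made for the example (the real channels in the series manage this themselves), and the in-process bus that makes it runnable is implemented next.

using System;

// Hypothetical sketch of a request-reply round trip over the IBus abstraction.
public static class BusRoundTripSketch
{
    public static void Run(IBus bus)
    {
        const string clientChannelId = "client-channel-1"; // invented channel ID

        // Client side: send a request, tagging it with our channel ID so the
        // reply can find its way back to us.
        bus.SendRequest("ping", fromClient: true, from: clientChannelId);

        // Service side: receive the next message that came from a client.
        BusMessage request = bus.Receive(fromClient: true, replyTo: null);
        Console.WriteLine("Service received: " + request.Content);

        // Service side: send the reply addressed to the requesting channel.
        bus.SendReply("pong", fromClient: false, replyTo: request.From);

        // Client side: wait for the reply addressed to our channel.
        BusMessage reply = bus.Receive(fromClient: false, replyTo: clientChannelId);
        Console.WriteLine("Client received: " + reply.Content);
    }
}

Running this against the InProcMessageBus shown below prints the "ping" on the service side and the "pong" back on the client side.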
Now let's implement a message bus based on the IBus interface. Since I don't want you to have to buy and install TIBCO EMS or any other message bus product, I will implement an in-process, in-memory bus. This bus is for test and sample purposes only and can only be used when the service and client are in the same process. It is very straightforward.

public class InProcMessageBus : IBus
{
    private readonly ConcurrentDictionary<Guid, InProcMessageEntity> _queue;
    private readonly object _lock;

    public InProcMessageBus()
    {
        _queue = new ConcurrentDictionary<Guid, InProcMessageEntity>();
        _lock = new object();
    }

    public string SendRequest(string message, bool fromClient, string from, string to = null)
    {
        var entity = new InProcMessageEntity(message, fromClient, from, to);
        _queue.TryAdd(entity.ID, entity);
        return entity.ID.ToString();
    }

    public void SendReply(string message, bool fromClient, string replyTo)
    {
        var entity = new InProcMessageEntity(message, fromClient, null, replyTo);
        _queue.TryAdd(entity.ID, entity);
    }

    public BusMessage Receive(bool fromClient, string replyTo)
    {
        InProcMessageEntity e = null;
        while (true)
        {
            lock (_lock)
            {
                var entity = _queue
                    .Where(kvp => kvp.Value.FromClient == fromClient &&
                                  (kvp.Value.To == replyTo || string.IsNullOrWhiteSpace(kvp.Value.To)))
                    .FirstOrDefault();
                if (entity.Key != Guid.Empty && entity.Value != null)
                {
                    _queue.TryRemove(entity.Key, out e);
                }
            }
            if (e == null)
            {
                Thread.Sleep(100);
            }
            else
            {
                return new BusMessage(e.ID.ToString(), e.From, e.To, e.Content);
            }
        }
    }

    public void Dispose()
    {
    }
}

The InProcMessageBus stores the messages as InProcMessageEntity objects, which can carry some extra information besides the WCF message itself.

public class InProcMessageEntity
{
    public Guid ID { get; set; }
    public string Content { get; set; }
    public bool FromClient { get; set; }
    public string From { get; set; }
    public string To { get; set; }

    public InProcMessageEntity()
        : this(string.Empty, false, string.Empty, string.Empty)
    {
    }

    public InProcMessageEntity(string content, bool fromClient, string from, string to)
    {
        ID = Guid.NewGuid();
        Content = content;
        FromClient = fromClient;
        From = from;
        To = to;
    }
}

Summary

OK, now we have all the necessary pieces ready. The next step is to implement our WCF message bus transport extension. In this post I described two scaling-out approaches on the service side, dispatcher mode and pulling mode, which matter especially if we are using a cloud platform, and compared their pros and cons. Then I introduced the WCF channel stack, the channel model and the transport extension points, and identified what we need to do to create our own WCF transport extension so that our WCF services can use pulling mode on top of a message bus. Finally I provided some classes, working against an in-process memory message bus for demonstration purposes only, that will be used in the future posts. In the next post I will begin to implement the transport extension step by step.

Hope this helps,
Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • apt-get fails to upgrade, install, remove etc

    - by Kieran Peat
    I upgraded from 11.10 to 12.04, had no issues that I noticed. Recently tried to install something via software center, but it was throwing errors. Changed to trying to sudo apt-get install instead but again no luck. I've genuinely tried as much as I know to fix this, but I can't so I figured I'd ask here. I've done sudo apt-get update successfully but sudo apt-get upgrade failed with... You might want to run ‘apt-get -f install’ to correct these. The following packages have unmet dependencies. ia32-libs-multiarch:i386 : Depends: libqtcore4:i386 but it is not installed libqt4-dbus:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-declarative:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-designer:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-network:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-opengl:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-qt3support:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-script:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-scripttools:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-sql:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-sql-mysql:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-svg:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-test:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-xml:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-xmlpatterns:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqtgui4:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqtwebkit4:i386 : Depends: libqtcore4:i386 (>= 4:4.8.0~) but it is not installed libssl1.0.0 : Breaks: libssl1.0.0:i386 (!= 1.0.1-4ubuntu5.2) but 1.0.0e-2ubuntu4.6 is installed libssl1.0.0:i386 : Breaks: libssl1.0.0 (!= 1.0.0e-2ubuntu4.6) but 1.0.1-4ubuntu5.2 is installed E: Unmet dependencies. Try using -f. Using sudo apt-get -f install... The following packages were automatically installed and are no longer required: libgtkmm-2.4-1c2a libgtkhtml3.14-19 libglade2-0 Use 'apt-get autoremove' to remove them. The following extra packages will be installed: libqtcore4:i386 libssl1.0.0:i386 The following NEW packages will be installed libqtcore4:i386 The following packages will be upgraded: libssl1.0.0:i386 1 upgraded, 1 newly installed, 0 to remove and 33 not upgraded. 20 not fully installed or removed. Need to get 0 B/3,063 kB of archives. After this operation, 9,044 kB of additional disk space will be used. Do you want to continue [Y/n]? y E: Internal Error, No file name for libssl1.0.0 I've tried sudo apt-get remove libssl1.0.0 and sudo apt-get remove libssl1.0.0:i386 Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies. 
ia32-libs-multiarch:i386 : Depends: libqtcore4:i386 but it is not going to be installed Depends: libssl1.0.0:i386 but it is not going to be installed libcurl3:i386 : Depends: libssl1.0.0:i386 (>= 1.0.0) but it is not going to be installed libqt4-dbus:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-declarative:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-designer:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-network:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-opengl:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-qt3support:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-script:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-scripttools:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-sql:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-sql-mysql:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-svg:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-test:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-xml:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-xmlpatterns:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqtgui4:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqtwebkit4:i386 : Depends: libqtcore4:i386 (>= 4:4.8.0~) but it is not going to be installed libsasl2-modules:i386 : Depends: libssl1.0.0:i386 (>= 1.0.0) but it is not going to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). I've also tried sudo apt-get dist-upgrade, sudo apt-get autoremove etc without any luck. I also tried to download the .deb and use dpkg -i, but that failed and did not fully understand the method to be honest. Edit This is in response to the comments ref: sudo apt-get install -f doesn't fix broken packages. And now? 
sudo dpkg --configure -a --force-all dpkg: error processing libssl1.0.0 (--configure): libssl1.0.0:amd64 1.0.1-4ubuntu5.2 cannot be configured because libssl1.0.0:i386 is in a different version (1.0.0e-2ubuntu4.6) dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different 
version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') 
dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be 
configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: too many errors, stopping Errors were encountered while processing: libssl1.0.0 libssl1.0.0:i386 ... libssl1.0.0:i386 Processing was halted because there were too many errors. Ref: Package manager doesn't work anymore moving /var/lib/kpkg/info/libssl.. kieran@kieran-EX58-UD3R:~$ sudo mv /var/lib/dpkg/info/libssl1.0.0:i386.postinst /var/lib/dpkg/info/libssl1.0.0:i386.postinst.bad kieran@kieran-EX58-UD3R:~$ sudo mv /var/lib/dpkg/info/libssl1.0.0:amd64.postinst /var/lib/dpkg/info/libssl1.0.0:amd64.postinst.bad kieran@kieran-EX58-UD3R:~$ sudo apt-get --reinstall install libssl Reading package lists... Done Building dependency tree Reading state information... Done Package libssl is not available, but is referred to by another package. 
This may mean that the package is missing, has been obsoleted, or is only available from another source E: Package 'libssl' has no installation candidate kieran@kieran-EX58-UD3R:~$ sudo apt-get --reinstall install libssl1.0.0 Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies. ia32-libs-multiarch:i386 : Depends: libqtcore4:i386 but it is not going to be installed libqt4-dbus:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-declarative:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-designer:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-network:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-opengl:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-qt3support:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-script:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-scripttools:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-sql:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-sql-mysql:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-svg:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-test:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-xml:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-xmlpatterns:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqtgui4:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqtwebkit4:i386 : Depends: libqtcore4:i386 (>= 4:4.8.0~) but it is not going to be installed libssl1.0.0 : Breaks: libssl1.0.0:i386 (!= 1.0.1-4ubuntu5.2) but 1.0.0e-2ubuntu4.6 is to be installed libssl1.0.0:i386 : Breaks: libssl1.0.0 (!= 1.0.0e-2ubuntu4.6) but 1.0.1-4ubuntu5.2 is to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). kieran@kieran-EX58-UD3R:~$ sudo apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following packages were automatically installed and are no longer required: libgtkmm-2.4-1c2a libgtkhtml3.14-19 libglade2-0 Use 'apt-get autoremove' to remove them. The following extra packages will be installed: libqtcore4:i386 libssl1.0.0:i386 The following NEW packages will be installed libqtcore4:i386 The following packages will be upgraded: libssl1.0.0:i386 1 upgraded, 1 newly installed, 0 to remove and 58 not upgraded. 20 not fully installed or removed. Need to get 3,063 kB of archives. After this operation, 9,044 kB of additional disk space will be used. Do you want to continue [Y/n]? 
y Get:1 http://gb.archive.ubuntu.com/ubuntu/ precise-updates/main libssl1.0.0 i386 1.0.1-4ubuntu5.2 [1,002 kB] Get:2 http://gb.archive.ubuntu.com/ubuntu/ precise-updates/main libqtcore4 i386 4:4.8.1-0ubuntu4.1 [2,061 kB] Fetched 3,063 kB in 4s (731 kB/s) E: Internal Error, No file name for libssl1.0.0 ref: libssl Dependencies removing libssl1.0.0:i386 kieran@kieran-EX58-UD3R:~$ sudo apt-get remove libssl1.0.0:i386 Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies. ia32-libs-multiarch:i386 : Depends: libqtcore4:i386 but it is not going to be installed Depends: libssl1.0.0:i386 but it is not going to be installed libcurl3:i386 : Depends: libssl1.0.0:i386 (>= 1.0.0) but it is not going to be installed libqt4-dbus:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-declarative:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-designer:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-network:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-opengl:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-qt3support:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-script:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-scripttools:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-sql:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-sql-mysql:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-svg:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-test:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-xml:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-xmlpatterns:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqtgui4:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqtwebkit4:i386 : Depends: libqtcore4:i386 (>= 4:4.8.0~) but it is not going to be installed libsasl2-modules:i386 : Depends: libssl1.0.0:i386 (>= 1.0.0) but it is not going to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    Read the article

  • Using Stub Objects

    - by user9154181
    Having told the long and winding tale of where stub objects came from and how we use them to build Solaris, I'd like to focus now on the nuts and bolts of building and using them. The following new features were added to the Solaris link-editor (ld) to support the production and use of stub objects:

-z stub
This new command line option informs ld that it is to build a stub object rather than a normal object. In this mode, it accepts the same command line arguments as usual, but will quietly ignore any objects and sharable object dependencies.

STUB_OBJECT Mapfile Directive
In order to build a stub version of an object, its mapfile must specify the STUB_OBJECT directive. When producing a non-stub object, the presence of STUB_OBJECT causes the link-editor to perform extra validation to ensure that the stub and non-stub objects will be compatible.

ASSERT Mapfile Directive
All data symbols exported from the object must have an ASSERT symbol directive in the mapfile that declares them as data and supplies the size, binding, bss attributes, and symbol aliasing details. When building the stub objects, the information in these ASSERT directives is used to create the data symbols. When building the real object, these ASSERT directives will ensure that the real object matches the linking interface presented by the stub.

Although ASSERT was added to the link-editor in order to support stub objects, ASSERT directives are a general purpose feature that can be used independently of stub objects. For instance you might choose to use an ASSERT directive if you have a symbol that must have a specific address in order for the object to operate properly and you want to automatically ensure that this will always be the case.

The material presented here is derived from a document I originally wrote during the development effort, which had the dual goals of providing supplemental materials for the stub object PSARC case and of serving as a set of edits that were eventually applied to the Oracle Solaris Linker and Libraries Manual (LLM). The Solaris 11 LLM contains this information in a more polished form.

Stub Objects
A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be used at runtime. However, an application can be built against a stub object, where the stub object provides the real object name to be used at runtime, and then use the real object at runtime.

When building a stub object, the link-editor ignores any object or library files specified on the command line, and these files need not exist in order to build a stub. Since the compilation step can be omitted, and because the link-editor has relatively little work to do, stub objects can be built very quickly.

Stub objects can be used to solve a variety of build problems:

Speed
Modern machines, using a version of make with the ability to parallelize operations, are capable of compiling and linking many objects simultaneously, and doing so offers significant speedups. However, it is typical that a given object will depend on other objects, and that there will be a core set of objects that nearly everything else depends on. It is necessary to impose an ordering that builds each object before any other object that requires it. This ordering creates bottlenecks that reduce the amount of parallelization that is possible and limits the overall speed at which the code can be built.
Complexity/Correctness
In a large body of code, there can be a large number of dependencies between the various objects. The makefiles or other build descriptions for these objects can become very complex and difficult to understand or maintain. The dependencies can change as the system evolves. This can cause a given set of makefiles to become slightly incorrect over time, leading to race conditions and mysterious rare build failures.

Dependency Cycles
It might be desirable to organize code as cooperating shared objects, each of which draws on the resources provided by the other. Such cycles cannot be supported in an environment where objects must be built before the objects that use them, even though the runtime linker is fully capable of loading and using such objects if they could be built.

Stub shared objects offer an alternative method for building code that sidesteps the above issues. Stub objects can be quickly built for all the shared objects produced by the build. Then, all the real shared objects and executables can be built in parallel, in any order, using the stub objects to stand in for the real objects at link-time. Afterwards, the executables and real shared objects are kept, and the stub shared objects are discarded.

Stub objects are built from a mapfile, which must satisfy the following requirements:

The mapfile must specify the STUB_OBJECT directive. This directive informs the link-editor that the object can be built as a stub object, and as such causes the link-editor to perform validation and sanity checking intended to guarantee that an object and its stub will always provide identical linking interfaces.

All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile.

The mapfile must use symbol scope reduction ('*') to remove any symbols not explicitly listed from the external interface.

All global data exported from the object must have an ASSERT symbol attribute in the mapfile to specify the symbol type, size, and bss attributes. In the case where there are multiple symbols that reference the same data, the ASSERT for one of these symbols must specify the TYPE and SIZE attributes, while the others must use the ALIAS attribute to reference this primary symbol.

Given such a mapfile, the stub and real versions of the shared object can be built using the same command line for each, adding the '-z stub' option to the link for the stub object, and omitting the option from the link for the real object.

To demonstrate these ideas, the following code implements a shared object named idx5, which exports data from a 5 element array of integers, with each element initialized to contain its zero-based array index. This data is available as a global array, via an alternative alias data symbol with weak binding, and via a functional interface.

% cat idx5.c
int _idx5[5] = { 0, 1, 2, 3, 4 };

#pragma weak idx5 = _idx5

int
idx5_func(int index)
{
        if ((index < 0) || (index > 4))
                return (-1);
        return (_idx5[index]);
}

A mapfile is required to describe the interface provided by this shared object.

% cat mapfile
$mapfile_version 2

STUB_OBJECT;

SYMBOL_SCOPE {
        _idx5 {
                ASSERT { TYPE=data; SIZE=4[5] };
        };
        idx5 {
                ASSERT { BINDING=weak; ALIAS=_idx5 };
        };
        idx5_func;
    local:
        *;
};

The following main program is used to print all the index values available from the idx5 shared object.
% cat main.c
#include <stdio.h>

extern int _idx5[5], idx5[5], idx5_func(int);

int
main(int argc, char **argv)
{
        int     i;

        for (i = 0; i < 5; i++)
                printf("[%d] %d %d %d\n", i, _idx5[i], idx5[i], idx5_func(i));

        return (0);
}

The following commands create a stub version of this shared object in a subdirectory named stublib. elfdump is used to verify that the resulting object is a stub. The command used to build the stub differs from that of the real object only in the addition of the -z stub option, and the use of a different output file name. This demonstrates the ease with which stub generation can be added to an existing makefile.

% cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o stublib/libidx5.so.1 -zstub
% ln -s libidx5.so.1 stublib/libidx5.so
% elfdump -d stublib/libidx5.so | grep STUB
      [11]  FLAGS_1           0x4000000          [ STUB ]

The main program can now be built, using the stub object to stand in for the real shared object, and setting a runpath that will find the real object at runtime. However, as we have not yet built the real object, this program cannot yet be run. Attempts to cause the system to load the stub object are rejected, as the runtime linker knows that stub objects lack the actual code and data found in the real object, and cannot execute.

% cc main.c -L stublib -R '$ORIGIN/lib' -lidx5 -lc
% ./a.out
ld.so.1: a.out: fatal: libidx5.so.1: open failed: No such file or directory
Killed
% LD_PRELOAD=stublib/libidx5.so.1 ./a.out
ld.so.1: a.out: fatal: stublib/libidx5.so.1: stub shared object cannot be used at runtime
Killed

We build the real object using the same command as we used to build the stub, omitting the -z stub option, and writing the results to a different file.

% cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o lib/libidx5.so.1

Once the real object has been built in the lib subdirectory, the program can be run.

% ./a.out
[0] 0 0 0
[1] 1 1 1
[2] 2 2 2
[3] 3 3 3
[4] 4 4 4

Mapfile Changes
The version 2 mapfile syntax was extended in a number of places to accommodate stub objects.

Conditional Input
The version 2 mapfile syntax has the ability to conditionalize mapfile input using the $if control directive. As you might imagine, these directives are used frequently with ASSERT directives for data, because a given data symbol will frequently have a different size in 32 or 64-bit code, or on differing hardware such as x86 versus sparc. The link-editor maintains an internal table of names that can be used in the logical expressions evaluated by $if and $elif. At startup, this table is initialized with items that describe the class of object (_ELF32 or _ELF64) and the type of the target machine (_sparc or _x86). We found that there were a small number of cases in the Solaris code base in which we needed to know what kind of object we were producing, so we added the following new predefined items in order to address that need:

Name            Meaning
...             ...
_ET_DYN         shared object
_ET_EXEC        executable object
_ET_REL         relocatable object
...             ...

STUB_OBJECT Directive
The new STUB_OBJECT directive informs the link-editor that the object described by the mapfile can be built as a stub object.

        STUB_OBJECT;

A stub shared object is built entirely from the information in the mapfiles supplied on the command line. When the -z stub option is specified to build a stub object, the presence of the STUB_OBJECT directive in a mapfile is required, and the link-editor uses the information in symbol ASSERT attributes to create global symbols that match those of the real object.
When the real object is built, the presence of STUB_OBJECT causes the link-editor to verify that the mapfiles accurately describe the real object interface, and that a stub object built from them will provide the same linking interface as the real object it represents. All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile. The mapfile must use symbol scope reduction ('*'), to remove any symbols not explicitly listed from the external interface. All global data in the object is required to have an ASSERT attribute that specifies the symbol type and size. If the ASSERT BIND attribute is not present, the link-editor provides a default assertion that the symbol must be GLOBAL. If the ASSERT SH_ATTR attribute is not present, or does not specify that the section is one of BITS or NOBITS, the link-editor provides a default assertion that the associated section is BITS. All data symbols that describe the same address and size are required to have ASSERT ALIAS attributes specified in the mapfile. If aliased symbols are discovered that do not have an ASSERT ALIAS specified, the link fails and no object is produced. These rules ensure that the mapfiles contain a description of the real shared object's linking interface that is sufficient to produce a stub object with a completely compatible linking interface. SYMBOL_SCOPE/SYMBOL_VERSION ASSERT Attribute The SYMBOL_SCOPE and SYMBOL_VERSION mapfile directives were extended with a symbol attribute named ASSERT. The syntax for the ASSERT attribute is as follows: ASSERT { ALIAS = symbol_name; BINDING = symbol_binding; TYPE = symbol_type; SH_ATTR = section_attributes; SIZE = size_value; SIZE = size_value[count]; }; The ASSERT attribute is used to specify the expected characteristics of the symbol. The link-editor compares the symbol characteristics that result from the link to those given by ASSERT attributes. If the real and asserted attributes do not agree, a fatal error is issued and the output object is not created. In normal use, the link editor evaluates the ASSERT attribute when present, but does not require them, or provide default values for them. The presence of the STUB_OBJECT directive in a mapfile alters the interpretation of ASSERT to require them under some circumstances, and to supply default assertions if explicit ones are not present. See the definition of the STUB_OBJECT Directive for the details. When the -z stub command line option is specified to build a stub object, the information provided by ASSERT attributes is used to define the attributes of the global symbols provided by the object. ASSERT accepts the following: ALIAS Name of a previously defined symbol that this symbol is an alias for. An alias symbol has the same type, value, and size as the main symbol. The ALIAS attribute is mutually exclusive to the TYPE, SIZE, and SH_ATTR attributes, and cannot be used with them. When ALIAS is specified, the type, size, and section attributes are obtained from the alias symbol. BIND Specifies an ELF symbol binding, which can be any of the STB_ constants defined in <sys/elf.h>, with the STB_ prefix removed (e.g. GLOBAL, WEAK). TYPE Specifies an ELF symbol type, which can be any of the STT_ constants defined in <sys/elf.h>, with the STT_ prefix removed (e.g. OBJECT, COMMON, FUNC). In addition, for compatibility with other mapfile usage, FUNCTION and DATA can be specified, for STT_FUNC and STT_OBJECT, respectively. 
TYPE is mutually exclusive to ALIAS, and cannot be used in conjunction with it. SH_ATTR Specifies attributes of the section associated with the symbol. The section_attributes that can be specified are given in the following table: Section AttributeMeaning BITSSection is not of type SHT_NOBITS NOBITSSection is of type SHT_NOBITS SH_ATTR is mutually exclusive to ALIAS, and cannot be used in conjunction with it. SIZE Specifies the expected symbol size. SIZE is mutually exclusive to ALIAS, and cannot be used in conjunction with it. The syntax for the size_value argument is as described in the discussion of the SIZE attribute below. SIZE The SIZE symbol attribute existed before support for stub objects was introduced. It is used to set the size attribute of a given symbol. This attribute results in the creation of a symbol definition. Prior to the introduction of the ASSERT SIZE attribute, the value of a SIZE attribute was always numeric. While attempting to apply ASSERT SIZE to the objects in the Solaris ON consolidation, I found that many data symbols have a size based on the natural machine wordsize for the class of object being produced. Variables declared as long, or as a pointer, will be 4 bytes in size in a 32-bit object, and 8 bytes in a 64-bit object. Initially, I employed the conditional $if directive to handle these cases as follows: $if _ELF32 foo { ASSERT { TYPE=data; SIZE=4 } }; bar { ASSERT { TYPE=data; SIZE=20 } }; $elif _ELF64 foo { ASSERT { TYPE=data; SIZE=8 } }; bar { ASSERT { TYPE=data; SIZE=40 } }; $else $error UNKNOWN ELFCLASS $endif I found that the situation occurs frequently enough that this is cumbersome. To simplify this case, I introduced the idea of the addrsize symbolic name, and of a repeat count, which together make it simple to specify machine word scalar or array symbols. Both the SIZE, and ASSERT SIZE attributes support this syntax: The size_value argument can be a numeric value, or it can be the symbolic name addrsize. addrsize represents the size of a machine word capable of holding a memory address. The link-editor substitutes the value 4 for addrsize when building 32-bit objects, and the value 8 when building 64-bit objects. addrsize is useful for representing the size of pointer variables and C variables of type long, as it automatically adjusts for 32 and 64-bit objects without requiring the use of conditional input. The size_value argument can be optionally suffixed with a count value, enclosed in square brackets. If count is present, size_value and count are multiplied together to obtain the final size value. Using this feature, the example above can be written more naturally as: foo { ASSERT { TYPE=data; SIZE=addrsize } }; bar { ASSERT { TYPE=data; SIZE=addrsize[5] } }; Exported Global Data Is Still A Bad Idea As you can see, the additional plumbing added to the Solaris link-editor to support stub objects is minimal. Furthermore, about 90% of that plumbing is dedicated to handling global data. We have long advised against global data exported from shared objects. There are many ways in which global data does not fit well with dynamic linking. Stub objects simply provide one more reason to avoid this practice. It is always better to export all data via a functional interface. You should always hide your data, and make it available to your users via a function that they can call to acquire the address of the data item. 
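A tiny C sketch of that functional-interface pattern (the names here are made up for illustration, not taken from the example library):

/* The data itself stays hidden inside the library (and local in the mapfile). */
static int widget_count;

/* The only exported symbol is a function that hands out the data's address. */
int *
get_widget_count(void)
{
        return (&widget_count);
}

A stub for a library built this way needs no data ASSERT attributes at all, because its public interface consists only of functions.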
However, if you do have to support global data for a stub, perhaps because you are working with an already existing object, it is still easily done, as shown above. Oracle does not like us to discuss hypothetical new features that don't exist in shipping product, so I'll end this section with a speculation. It might be possible to do more in this area to ease the difficulty of dealing with objects that have global data that the users of the library don't need. Perhaps someday... Conclusions It is easy to create stub objects for most objects. If your library only exports function symbols, all you have to do to build a faithful stub object is to add STUB_OBJECT; and then to use the same link command you're currently using, with the addition of the -z stub option. Happy Stubbing!

    Read the article

  • Has Javascript developed beyond what it was originally designed to do?

    - by Elliot Bonneville
I've been talking with a friend about the purpose of Javascript, when and how it should be used, etc. He quoted this: JavaScript was designed to add interactivity to HTML pages [...] JavaScript gives HTML designers a programming tool HTML authors are normally not programmers, but JavaScript is a scripting language with a very simple syntax! Almost anyone can put small "snippets" of code into their HTML pages JavaScript can react to events A JavaScript can be set to execute when something happens, like when a page has finished loading or when a user clicks on an HTML element JavaScript can read and write HTML elements A JavaScript can read and change the content of an HTML element JavaScript can be used to validate data A JavaScript can be used to validate form data before it is submitted to a server. This saves the server from extra processing JavaScript can be used to detect the visitor's browser - A JavaScript can be used to detect the visitor's browser, and - depending on the browser - load another page specifically designed for that browser. JavaScript can be used to create cookies - A JavaScript can be used to store and retrieve information on the visitor's computer. However, it seems like Javascript is getting used to do a lot more than that these days. My friend also advocates against using Javascript's OOP functionality, claiming that "you shouldn't be processing data, merely validating." Is Javascript really limited to validating data and making flashy graphics on a web page? He goes on to claim "you shouldn't be attempting to access databases through javascript" and also says "in general you don't want to be doing your heavy lifting in javascript". I can't say I agree with his opinion, but I'd like to get some more input on this. So, my question: Has Javascript evolved from the definition above to something more powerful, has the way we use it changed, or am I just plain wrong? While I realize this is a subjective question, I can't find any more information on it, so a few links would be good, if nothing else. I'm not looking for a debate, just an answer.
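For reference, the kind of small validation "snippet" the quoted description has in mind looks like this (the field id and message are hypothetical):

// Classic inline form validation, the original modest use case for JavaScript.
function validateForm() {
  var email = document.getElementById("email").value; // hypothetical field id
  if (email.indexOf("@") === -1) {
    alert("Please enter a valid email address.");
    return false; // block the form submission
  }
  return true;
}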

    Read the article

  • IIS: 404 error on every file in a virtual directory.

    - by Scott Chamberlain
    I am trying to write my first WCF service for IIS 6.0. I followed the instructions on MSDN. I created the virtual directory, I can browse the directory fine but anything I click (even a sub-folder in that folder) gives me a 404 error. What am I missing that I can not access any files or folders? Any logs or whatnot you need just tell me where to find them in the comments and I will post them. UPDATE- Found the log, here is what it says when I connect and try to click on a sub folder. #Software: Microsoft Internet Information Services 6.0 #Version: 1.0 #Date: 2010-03-07 19:08:07 #Fields: date time s-sitename s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent) sc-status sc-substatus sc-win32-status 2010-03-07 19:08:07 W3SVC1 74.62.95.101 GET /prx2.php hash=AA70CBCE8DDD370B4A3E5F6500505C6FBA530220D856 80 - 221.192.199.35 Mozilla/4.0+(compatible;+MSIE+6.0;+Windows+NT+5.0) 404 0 2 #Software: Microsoft Internet Information Services 6.0 #Version: 1.0 #Date: 2010-03-07 22:21:20 #Fields: date time s-sitename s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent) sc-status sc-substatus sc-win32-status 2010-03-07 22:21:20 W3SVC1 127.0.0.1 GET /RemoteUserManagerService/ - 80 - 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.2;+WOW64;+Trident/4.0;+.NET+CLR+3.0.04506.30;+.NET+CLR+2.0.50727;+.NET+CLR+3.0.04506.648;+.NET+CLR+3.0.4506.2152;+.NET+CLR+3.5.30729;+.NET4.0C;+.NET4.0E) 401 2 2148074254 2010-03-07 22:21:26 W3SVC1 127.0.0.1 GET /RemoteUserManagerService/ - 80 - 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.2;+WOW64;+Trident/4.0;+.NET+CLR+3.0.04506.30;+.NET+CLR+2.0.50727;+.NET+CLR+3.0.04506.648;+.NET+CLR+3.0.4506.2152;+.NET+CLR+3.5.30729;+.NET4.0C;+.NET4.0E) 401 1 0 2010-03-07 22:21:26 W3SVC1 127.0.0.1 GET /RemoteUserManagerService/ - 80 webinfinity\srchamberlain 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.2;+WOW64;+Trident/4.0;+.NET+CLR+3.0.04506.30;+.NET+CLR+2.0.50727;+.NET+CLR+3.0.04506.648;+.NET+CLR+3.0.4506.2152;+.NET+CLR+3.5.30729;+.NET4.0C;+.NET4.0E) 200 0 0 2010-03-07 22:21:29 W3SVC1 127.0.0.1 GET /RemoteUserManagerService/bin/ - 80 - 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.2;+WOW64;+Trident/4.0;+.NET+CLR+3.0.04506.30;+.NET+CLR+2.0.50727;+.NET+CLR+3.0.04506.648;+.NET+CLR+3.0.4506.2152;+.NET+CLR+3.5.30729;+.NET4.0C;+.NET4.0E) 404 0 2 --Update again I found this here IIS6 Dynamic Content: A 404.2 entry in the W3C Extended Log file is recorded when a Web Extension is not enabled. Use the IIS Microsoft Management Console (MMC) snap-in to enable the appropriate Web extension. Default Web Extensions include: ASP, ASP.net, Server-Side Includes, WebDAV publishing, FrontPage Server Extensions, Common Gateway Interface (CGI). Custom extensions must be added and explicitly enabled. See the IIS 6.0 Help File for more information. I am guessing the 404 0 2 at the end of the log is a 404.2 error. I now know the why, I still don't know the how on how to fix it.
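For what it's worth, a 404.2 on IIS 6 usually means the relevant Web Service Extension is still set to Prohibited. A sketch of the commands commonly used to sort that out is below; the paths assume default framework locations, and the exact extension name must match whatever appears in the IIS Manager Web Service Extensions list, so treat the strings here as assumptions:

REM Register ASP.NET 2.0 with IIS (if it is not already registered).
%SystemRoot%\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe -i

REM Register the WCF .svc handler for IIS 6.
"%SystemRoot%\Microsoft.NET\Framework\v3.0\Windows Communication Foundation\ServiceModelReg.exe" -i

REM List the registered Web Service Extensions, then enable the ASP.NET one.
cscript %SystemRoot%\system32\iisext.vbs /ListExt
cscript %SystemRoot%\system32\iisext.vbs /EnExt "ASP.NET v2.0.50727"

After enabling the extension, an iisreset may be needed before the 404.2 goes away.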

    Read the article

  • Microsoft Management Console stops working when I add snap-in to it

    - by JayaprakashReddy
    I have Windows 7 Ultimate OS. I'm opening mmc.exe as administrator and trying add Certificates or any other snap-in, then while loading that snap-in MMC breaks and displays following message and after that it closes automatically once I click on close button on that message. What could be the problem? I did following to fix the problem but couldn't succeed any of these: I tried to repair the OS I repaired files using this method Even repaired the installation using this link Update: *@oldskool: Here is the debug process output:* Sorry its a long output text. 'mmc.exe': Loaded 'C:\Windows\System32\mmc.exe', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\ntdll.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\kernel32.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\KernelBase.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\gdi32.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\user32.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\lpk.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\usp10.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\msvcrt.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\mfc42u.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\ole32.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\rpcrt4.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\oleaut32.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\odbc32.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\advapi32.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\sechost.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\mmcbase.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\shlwapi.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\uxtheme.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\duser.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\imm32.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\msctf.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\odbcint.dll', Binary was not built with debug information. 
'mmc.exe': Loaded 'C:\Windows\System32\dui70.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\winsxs\x86_microsoft.windows.common-controls_6595b64144ccf1df_6.0.7600.16661_none_420fe3fa2b8113bd\comctl32.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\shell32.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\cryptbase.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\urlmon.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\wininet.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\iertutil.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\crypt32.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\msasn1.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\clbcatq.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\mmcndmgr.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\dwmapi.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\oleacc.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\cryptsp.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\rsaenh.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\RpcRtRemote.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\mlang.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\xmllite.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\version.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\apphelp.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\msi.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Program Files\Microsoft Visual Studio 8\SDK\v2.0\Bin\mscormmc.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\mscoree.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\winsxs\x86_microsoft.vc80.crt_1fc8b3b9a1e18e3b_8.0.50727.4927_none_d08a205e442db5b5\msvcr80.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\azroleui.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\atl.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\secur32.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\netutils.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\dsrole.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\logoncli.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\dsuiext.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\ntdsapi.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\ws2_32.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\nsi.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\activeds.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\adsldpc.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\Wldap32.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\mpr.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\netapi32.dll', Cannot find or open the PDB file 'mmc.exe': 
Loaded 'C:\Windows\System32\srvcli.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\wkscli.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\certmgr.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\certcli.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\CertEnroll.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\cryptui.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\ncrypt.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\bcrypt.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\wintrust.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\imagehlp.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\sspicli.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\aclui.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\IPHLPAPI.DLL', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\winnsi.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\slc.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\comsnap.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\mfc42.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\mycomput.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\devmgr.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\setupapi.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\cfgmgr32.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\devobj.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\devrtl.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\newdev.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\dmdskmgr.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\dmutil.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\dmdskres.dll', Binary was not built with debug information. 'mmc.exe': Loaded 'C:\Windows\System32\dmdskres2.dll', Binary was not built with debug information. 
'mmc.exe': Loaded 'C:\Windows\System32\gpedit.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\dssec.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\authz.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\dfscli.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\samcli.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\gpapi.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\framedynos.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\wtsapi32.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\ipsmsnap.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\winipsec.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\userenv.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\profapi.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\ipsecsnp.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\polstore.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\localsec.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\wdc.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\pdh.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\pdhui.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\comdlg32.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\credui.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\wevtapi.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\pla.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\tdh.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\winsta.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\utildll.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\browcli.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\vdmdbg.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\pmcsnap.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\winspool.drv', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\puiapi.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\wsecedit.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\scecli.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\filemgmt.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Program Files\Microsoft SQL Server\100\Tools\Binn\SqlManager.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\winsxs\x86_microsoft.vc80.mfc_1fc8b3b9a1e18e3b_8.0.50727.4053_none_cbf21254470d8752\mfc80u.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\winsxs\x86_microsoft.vc80.crt_1fc8b3b9a1e18e3b_8.0.50727.4927_none_d08a205e442db5b5\msvcp80.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\winsxs\x86_microsoft.vc80.atl_1fc8b3b9a1e18e3b_8.0.50727.4053_none_d1c738ec43578ea1\ATL80.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 
'C:\Windows\winsxs\x86_microsoft.windows.common-controls_6595b64144ccf1df_5.82.7600.16661_none_ebfb56996c72aefc\comctl32.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\winsxs\x86_microsoft.vc80.mfcloc_1fc8b3b9a1e18e3b_8.0.50727.4053_none_03ca5532205cb096\mfc80ENU.dll', Binary was not built with debug information. 'mmc.exe': Loaded 'C:\Program Files\Microsoft SQL Server\100\Tools\Binn\Resources\1033\SqlManager.rll', Binary was not built with debug information. 'mmc.exe': Loaded 'C:\Windows\System32\msxml6.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Program Files\Microsoft SQL Server\90\Tools\Binn\SqlManager.dll', Cannot find or open the PDB file 'mmc.exe': Loaded 'C:\Windows\System32\wbem\wbemcntl.dll', Cannot find or open the PDB file The thread 'Win32 Thread' (0xf74) has exited with code 0 (0x0). Unhandled exception at 0x774d35e3 in mmc.exe: 0xC0000374: A heap has been corrupted.

    Read the article

  • ASP.NET MVC Paging/Sorting/Filtering using the MVCContrib Grid and Pager

    - by rajbk
    This post walks you through creating a UI for paging, sorting and filtering a list of data items. It makes use of the excellent MVCContrib Grid and Pager Html UI helpers. A sample project is attached at the bottom. Our UI will eventually look like this. The application will make use of the Northwind database. The top portion of the page has a filter area region. The filter region is enclosed in a form tag. The select lists are wired up with jQuery to auto post back the form. The page has a pager region at the top and bottom of the product list. The product list has a link to display more details about a given product. The column headings are clickable for sorting and an icon shows the sort direction. Strongly Typed View Models The views are written to expect strongly typed objects. We suffix these strongly typed objects with ViewModel since they are designed specifically for passing data down to the view.  The following listing shows the ProductViewModel. This class will be used to hold information about a Product. We use attributes to specify if the property should be hidden and what its heading in the table should be. This metadata will be used by the MvcContrib Grid to render the table. Some of the properties are hidden from the UI ([ScaffoldColumn(false)) but are needed because we will be using those for filtering when writing our LINQ query. public ActionResult Index( string productName, int? supplierID, int? categoryID, GridSortOptions gridSortOptions, int? page) {   var productList = productRepository.GetProductsProjected();   // Set default sort column if (string.IsNullOrWhiteSpace(gridSortOptions.Column)) { gridSortOptions.Column = "ProductID"; }   // Filter on SupplierID if (supplierID.HasValue) { productList = productList.Where(a => a.SupplierID == supplierID); }   // Filter on CategoryID if (categoryID.HasValue) { productList = productList.Where(a => a.CategoryID == categoryID); }   // Filter on ProductName if (!string.IsNullOrWhiteSpace(productName)) { productList = productList.Where(a => a.ProductName.Contains(productName)); }   // Create all filter data and set current values if any // These values will be used to set the state of the select list and textbox // by sending it back to the view. var productFilterViewModel = new ProductFilterViewModel(); productFilterViewModel.SelectedCategoryID = categoryID ?? -1; productFilterViewModel.SelectedSupplierID = supplierID ?? -1; productFilterViewModel.Fill();   // Order and page the product list var productPagedList = productList .OrderBy(gridSortOptions.Column, gridSortOptions.Direction) .AsPagination(page ?? 1, 10);     var productListContainer = new ProductListContainerViewModel { ProductPagedList = productPagedList, ProductFilterViewModel = productFilterViewModel, GridSortOptions = gridSortOptions };   return View(productListContainer); } The following diagram shows the rest of the key ViewModels in our design. We have a container class called ProductListContainerViewModel which has nested classes. The ProductPagedList is of type IPagination<ProductViewModel>. The MvcContrib expects the IPagination<T> interface to determine the page number and page size of the collection we are working with. You convert any IEnumerable<T> into an IPagination<T> by calling the AsPagination extension method in the MvcContrib library. It also creates a paged set of type ProductViewModel. The ProductFilterViewModel class will hold information about the different select lists and the ProductName being searched on. 
It will also hold state of any previously selected item in the lists and the previous search criteria (you will recall that this type of state information was stored in Viewstate when working with WebForms). With MVC there is no state storage and so all state has to be fetched and passed back to the view. The GridSortOptions is a type defined in the MvcContrib library and is used by the Grid to determine the current column being sorted on and the current sort direction. The following shows the view and partial views used to render our UI. The Index view expects a type ProductListContainerViewModel which we described earlier. <%Html.RenderPartial("SearchFilters", Model.ProductFilterViewModel); %> <% Html.RenderPartial("Pager", Model.ProductPagedList); %> <% Html.RenderPartial("SearchResults", Model); %> <% Html.RenderPartial("Pager", Model.ProductPagedList); %> The View contains a partial view “SearchFilters” and passes it the ProductFilterViewModel. The SearchFilters view uses this Model to render all the search lists and the textbox. The partial view “Pager” uses the ProductPagedList, which implements the interface IPagination. The “Pager” view contains the MvcContrib Pager helper used to render the paging information. This view is repeated twice since we want the pager UI to be available at the top and bottom of the product list. The Pager partial view is located in the Shared directory so that it can be reused across Views. The partial view “SearchResults” uses the ProductListContainerViewModel model. This partial view contains the MvcContrib Grid which needs both the ProductPagedList and GridSortOptions to render itself. The Controller Action An example request looks like this: /Products?productName=test&supplierId=29&categoryId=4. The application receives this GET request and maps it to the Index method of the ProductController. Within the action we create an IQueryable<ProductViewModel> by calling the GetProductsProjected() method. /// <summary> /// This method takes in a filter list, paging/sort options and applies /// them to an IQueryable of type ProductViewModel /// </summary> /// <returns> /// The return object is a container that holds the sorted/paged list, /// state for the filters and state about the current sorted column /// </returns> public ActionResult Index( string productName, int? supplierID, int? categoryID, GridSortOptions gridSortOptions, int? page) {   var productList = productRepository.GetProductsProjected();   // Set default sort column if (string.IsNullOrWhiteSpace(gridSortOptions.Column)) { gridSortOptions.Column = "ProductID"; }   // Filter on SupplierID if (supplierID.HasValue) { productList = productList.Where(a => a.SupplierID == supplierID); }   // Filter on CategoryID if (categoryID.HasValue) { productList = productList.Where(a => a.CategoryID == categoryID); }   // Filter on ProductName if (!string.IsNullOrWhiteSpace(productName)) { productList = productList.Where(a => a.ProductName.Contains(productName)); }   // Create all filter data and set current values if any // These values will be used to set the state of the select list and textbox // by sending it back to the view. var productFilterViewModel = new ProductFilterViewModel(); productFilterViewModel.SelectedCategoryID = categoryID ?? -1; productFilterViewModel.SelectedSupplierID = supplierID ?? -1; productFilterViewModel.Fill();   // Order and page the product list var productPagedList = productList .OrderBy(gridSortOptions.Column, gridSortOptions.Direction) .AsPagination(page ??
1, 10);     var productListContainer = new ProductListContainerViewModel { ProductPagedList = productPagedList, ProductFilterViewModel = productFilterViewModel, GridSortOptions = gridSortOptions };   return View(productListContainer); } The supplier, category and productname filters are applied to this IQueryable if any are present in the request. The ProductPagedList class is created by applying a sort order and calling the AsPagination method. Finally the ProductListContainerViewModel class is created and returned to the view. You have seen how to use strongly typed views with the MvcContrib Grid and Pager to render a clean lightweight UI with strongly typed views. You also saw how to use partial views to get data from the strongly typed model passed to it from the parent view. The code also shows you how to use jQuery to auto post back. The sample is attached below. Don’t forget to change your connection string to point to the server containing the Northwind database. NorthwindSales_MvcContrib.zip My name is Kobayashi. I work for Keyser Soze.
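One note for readers following along: the text above promises a listing of the ProductViewModel class, but the listing itself did not survive the excerpt. A rough sketch of what such a class looks like is below; the property names are inferred from the Index action and the attribute choices are assumptions, not the author's exact code:

using System.ComponentModel;
using System.ComponentModel.DataAnnotations;

public class ProductViewModel
{
    // Hidden from the grid ([ScaffoldColumn(false)]) but kept for filtering in the LINQ query.
    [ScaffoldColumn(false)]
    public int ProductID { get; set; }

    [ScaffoldColumn(false)]
    public int? SupplierID { get; set; }

    [ScaffoldColumn(false)]
    public int? CategoryID { get; set; }

    // Visible column; DisplayName supplies the heading the grid renders.
    [DisplayName("Product Name")]
    public string ProductName { get; set; }
}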

    Read the article

  • Problem with setup VPN in Ubuntu Server 12.04

    - by Yozone W.
    I have a problem with setup VPN server on my Ubuntu VPS, here is my server environments: Ubuntu Server 12.04 x86_64 xl2tpd 1.3.1+dfsg-1 pppd 2.4.5-5ubuntu1 openswan 1:2.6.38-1~precise1 After install software and configuration: ipsec verify Checking your system to see if IPsec got installed and started correctly: Version check and ipsec on-path [OK] Linux Openswan U2.6.38/K3.2.0-24-virtual (netkey) Checking for IPsec support in kernel [OK] SAref kernel support [N/A] NETKEY: Testing XFRM related proc values [OK] [OK] [OK] Checking that pluto is running [OK] Pluto listening for IKE on udp 500 [OK] Pluto listening for NAT-T on udp 4500 [OK] Checking for 'ip' command [OK] Checking /bin/sh is not /bin/dash [WARNING] Checking for 'iptables' command [OK] Opportunistic Encryption Support [DISABLED] /var/log/auth.log message: Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [RFC 3947] method set to=115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike] meth=114, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-08] meth=113, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-07] meth=112, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-06] meth=111, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-05] meth=110, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-04] meth=109, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-03] meth=108, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-02] meth=107, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-02_n] meth=106, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: ignoring Vendor ID payload [FRAGMENTATION 80000000] Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [Dead Peer Detection] Oct 16 06:50:54 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: responding to Main Mode from unknown peer [My IP Address] Oct 16 06:50:54 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: transition from state STATE_MAIN_R0 to state STATE_MAIN_R1 Oct 16 06:50:54 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: STATE_MAIN_R1: sent MR1, expecting MI2 Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: NAT-Traversal: Result using draft-ietf-ipsec-nat-t-ike (MacOS X): peer is NATed Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: transition from state STATE_MAIN_R1 to state STATE_MAIN_R2 Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: STATE_MAIN_R2: sent MR2, expecting MI3 Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: ignoring informational payload, type 
IPSEC_INITIAL_CONTACT msgid=00000000 Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: Main mode peer ID is ID_IPV4_ADDR: '192.168.12.52' Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: switched from "L2TP-PSK-NAT" to "L2TP-PSK-NAT" Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: deleting connection "L2TP-PSK-NAT" instance with peer [My IP Address] {isakmp=#0/ipsec=#0} Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: transition from state STATE_MAIN_R2 to state STATE_MAIN_R3 Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: new NAT mapping for #5, was [My IP Address]:2251, now [My IP Address]:2847 Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: STATE_MAIN_R3: sent MR3, ISAKMP SA established {auth=OAKLEY_PRESHARED_KEY cipher=aes_256 prf=oakley_sha group=modp1024} Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: Dead Peer Detection (RFC 3706): enabled Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: the peer proposed: [My Server IP Address]/32:17/1701 -> 192.168.12.52/32:17/0 Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: NAT-Traversal: received 2 NAT-OA. using first, ignoring others Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: responding to Quick Mode proposal {msgid:8579b1fb} Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: us: [My Server IP Address]<[My Server IP Address]>:17/1701 Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: them: [My IP Address][192.168.12.52]:17/65280===192.168.12.52/32 Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: transition from state STATE_QUICK_R0 to state STATE_QUICK_R1 Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: STATE_QUICK_R1: sent QR1, inbound IPsec SA installed, expecting QI2 Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: Dead Peer Detection (RFC 3706): enabled Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: transition from state STATE_QUICK_R1 to state STATE_QUICK_R2 Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: STATE_QUICK_R2: IPsec SA established transport mode {ESP=>0x08bda158 <0x4920a374 xfrm=AES_256-HMAC_SHA1 NATOA=192.168.12.52 NATD=[My IP Address]:2847 DPD=enabled} Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: received Delete SA(0x08bda158) payload: deleting IPSEC State #6 Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: ERROR: netlink XFRM_MSG_DELPOLICY response for flow eroute_connection delete included errno 2: No such file or directory Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: received and ignored informational message Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: received Delete SA payload: deleting ISAKMP State #5 Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address]: deleting connection "L2TP-PSK-NAT" instance with peer [My IP Address] {isakmp=#0/ipsec=#0} Oct 16 06:51:16 vpn pluto[3963]: packet from [My IP Address]:2847: received and ignored informational message xl2tpd -D message: xl2tpd[4289]: Enabling IPsec SAref processing for L2TP transport mode SAs xl2tpd[4289]: IPsec SAref does not work with L2TP kernel mode yet, enabling forceuserspace=yes xl2tpd[4289]: setsockopt recvref[30]: Protocol not available xl2tpd[4289]: This binary does not 
support kernel L2TP. xl2tpd[4289]: xl2tpd version xl2tpd-1.3.1 started on vpn.netools.me PID:4289 xl2tpd[4289]: Written by Mark Spencer, Copyright (C) 1998, Adtran, Inc. xl2tpd[4289]: Forked by Scott Balmos and David Stipp, (C) 2001 xl2tpd[4289]: Inherited by Jeff McAdams, (C) 2002 xl2tpd[4289]: Forked again by Xelerance (www.xelerance.com) (C) 2006 xl2tpd[4289]: Listening on IP address [My Server IP Address], port 1701 Then it just stopped here, and have no any response. I can't connect VPN on my mac client, the /var/log/system.log message: Oct 16 15:17:36 azone-iMac.local configd[17]: SCNC: start, triggered by SystemUIServer, type L2TP, status 0 Oct 16 15:17:36 azone-iMac.local pppd[3799]: pppd 2.4.2 (Apple version 596.13) started by azone, uid 501 Oct 16 15:17:38 azone-iMac.local pppd[3799]: L2TP connecting to server 'vpn.netools.me' ([My Server IP Address])... Oct 16 15:17:38 azone-iMac.local pppd[3799]: IPSec connection started Oct 16 15:17:38 azone-iMac.local racoon[359]: Connecting. Oct 16 15:17:38 azone-iMac.local racoon[359]: IPSec Phase1 started (Initiated by me). Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Main-Mode message 1). Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: receive success. (Initiator, Main-Mode message 2). Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Main-Mode message 3). Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: receive success. (Initiator, Main-Mode message 4). Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Main-Mode message 5). Oct 16 15:17:38 azone-iMac.local racoon[359]: IKEv1 Phase1 AUTH: success. (Initiator, Main-Mode Message 6). Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: receive success. (Initiator, Main-Mode message 6). Oct 16 15:17:38 azone-iMac.local racoon[359]: IKEv1 Phase1 Initiator: success. (Initiator, Main-Mode). Oct 16 15:17:38 azone-iMac.local racoon[359]: IPSec Phase1 established (Initiated by me). Oct 16 15:17:39 azone-iMac.local racoon[359]: IPSec Phase2 started (Initiated by me). Oct 16 15:17:39 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Quick-Mode message 1). Oct 16 15:17:39 azone-iMac.local racoon[359]: IKE Packet: receive success. (Initiator, Quick-Mode message 2). Oct 16 15:17:39 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Quick-Mode message 3). Oct 16 15:17:39 azone-iMac.local racoon[359]: IKEv1 Phase2 Initiator: success. (Initiator, Quick-Mode). Oct 16 15:17:39 azone-iMac.local racoon[359]: IPSec Phase2 established (Initiated by me). Oct 16 15:17:39 azone-iMac.local pppd[3799]: IPSec connection established Oct 16 15:17:59 azone-iMac.local pppd[3799]: L2TP cannot connect to the server Oct 16 15:17:59 azone-iMac.local racoon[359]: IPSec disconnecting from server [My Server IP Address] Oct 16 15:17:59 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Information message). Oct 16 15:17:59 azone-iMac.local racoon[359]: IKEv1 Information-Notice: transmit success. (Delete IPSEC-SA). Oct 16 15:17:59 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Information message). Oct 16 15:17:59 azone-iMac.local racoon[359]: IKEv1 Information-Notice: transmit success. (Delete ISAKMP-SA). Anyone help? Thanks a million!

    Read the article

  • Permissions problem running Apache ActiveMQ

    - by Edd
    I'm wanting to use Apache ActiveMQ on Ubuntu 12.04 LTS, but am running into what looks like a permissions problem when I try to run it as follows: edd:~$ sudo activemq --version INFO: Loading '/usr/share/activemq/activemq-options' INFO: Using java '/usr/lib/jvm/java-6-openjdk//bin/java' INFO: changing to user 'activemq' to invoke java Java Runtime: Sun Microsystems Inc. 1.6.0_24 /usr/lib/jvm/java-6-openjdk-amd64/jre Heap sizes: current=502464k free=499842k max=502464k JVM args: -Xms512M -Xmx512M -Dorg.apache.activemq.UseDedicatedTaskRunner=true -Dactivemq.classpath=/var/lib/activemq//conf;; -Dactivemq.home=/usr/share/activemq -Dactivemq.base=/var/lib/activemq/ ACTIVEMQ_HOME: /usr/share/activemq ACTIVEMQ_BASE: /var/lib/activemq ActiveMQ 5.5.0 For help or more information please see: http://activemq.apache.org edd:~$ sudo activemq start INFO: Loading '/usr/share/activemq/activemq-options' INFO: Using java '/usr/lib/jvm/java-6-openjdk//bin/java' INFO: Starting - inspect logfiles specified in logging.properties and log4j.properties to get details INFO: changing to user 'activemq' to invoke java -su: line 2: /var/run/activemq.pid: Permission denied INFO: pidfile created : '/var/run/activemq.pid' (pid '7811') edd:~$ sudo activemq status INFO: Loading '/usr/share/activemq/activemq-options' INFO: Using java '/usr/lib/jvm/java-6-openjdk//bin/java' ActiveMQ not running edd:~$ ps ax | grep 'activemq' 8040 pts/0 S+ 0:00 grep --color=auto activemq I installed ActiveMQ using sudo apt-get install activemq. Apologies if there's any additional information missing - I'm fairly new to Linux as you may well have guessed!
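For anyone hitting the same wall, the failing step is the wrapper trying to write /var/run/activemq.pid after switching to the 'activemq' user, which has no write access there. A possible workaround sketch, assuming the package really did create an 'activemq' user as the output above suggests:

# Pre-create the pid file where the wrapper expects it, hand it to the broker user,
# then try the wrapper again.
sudo touch /var/run/activemq.pid
sudo chown activemq /var/run/activemq.pid
sudo activemq start
sudo activemq status

Keep in mind that /var/run is typically cleared on reboot, so a more durable fix is to point the wrapper at a pid-file location the activemq user owns.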

    Read the article

  • SQLAuthority News – Statistics Used by the Query Optimizer in Microsoft SQL Server 2008 – Microsoft Whitepaper

    - by pinaldave
I recently presented a session on Statistics and Best Practices at Virtual Tech Days on Nov 22, 2010. The session was very popular and I got many questions right after it. The number one question I received was where everybody can get further information. I am very happy that my session created some curiosity about one of the most important features of SQL Server. Statistics are the heart of SQL Server. Microsoft has published a white paper on how statistics are used by the Query Optimizer. Here is the abstract of that white paper from Microsoft. Statistics Used by the Query Optimizer in Microsoft SQL Server 2008 Writers: Eric N. Hanson and Yavor Angelov Microsoft SQL Server 2008 collects statistical information about indexes and column data stored in the database. These statistics are used by the SQL Server query optimizer to choose the most efficient plan for retrieving or updating data. This paper describes what data is collected, where it is stored, and which commands create, update, and delete statistics. By default, SQL Server 2008 also creates and updates statistics automatically, when such an operation is considered to be useful. This paper also outlines how these defaults can be changed on different levels (column, table, and database). In addition, it presents how certain query language features, such as Transact-SQL variables, interact with use of statistics by the optimizer, and it provides guidance for using these features when writing queries so you can obtain good query performance. Link to white paper Statistics Used by the Query Optimizer in Microsoft SQL Server 2008 Reference: Pinal Dave (http://blog.SQLAuthority.com)   Filed under: Pinal Dave, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL White Papers, SQLAuthority News, T SQL, Technology
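If you want to poke at statistics on your own server while reading the paper, these are the kinds of commands it discusses (the table and index names below are placeholders, so substitute your own):

-- Show the histogram and density information the optimizer has for one statistics object.
DBCC SHOW_STATISTICS ('dbo.Orders', 'IX_Orders_CustomerID');

-- List the statistics objects that exist on the table.
SELECT s.name, s.auto_created, s.user_created
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID('dbo.Orders');

-- Rebuild the table's statistics with a full scan after a large data change.
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;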

    Read the article

  • Using Unity – Part 3

    - by nmarun
    The previous blog was about registering and invoking different types dynamically. In this one I’d like to show how Unity manages/disposes the instances – say hello to Lifetime Managers. When a type gets registered, either through the config file or when RegisterType method is explicitly called, the default behavior is that the container uses a transient lifetime manager. In other words, the unity container creates a new instance of the type when Resolve or ResolveAll method is called. Whereas, when you register an existing object using the RegisterInstance method, the container uses a container controlled lifetime manager - a singleton pattern. It does this by storing the reference of the object and that means so as long as the container is ‘alive’, your registered instance does not go out of scope and will be disposed only after the container either goes out of scope or when the code explicitly disposes the container. Let’s see how we can use these and test if something is a singleton or a transient instance. Continuing on the same solution used in the previous blogs, I have made the following changes: First is to add typeAlias elements for TransientLifetimeManager type: 1: <typeAlias alias="transient" type="Microsoft.Practices.Unity.TransientLifetimeManager, Microsoft.Practices.Unity"/> You then need to tell what type(s) you want to be transient by nature: 1: <type type="IProduct" mapTo="Product2"> 2: <lifetime type="transient" /> 3: </type> 4: <!--<type type="IProduct" mapTo="Product2" />--> The lifetime element’s type attribute matches with the alias attribute of the typeAlias element. Now since ‘transient’ is the default behavior, you can have a concise version of the same as line 4 shows. Also note that I’ve changed the mapTo attribute from ‘Product’ to ‘Product2’. I’ve done this to help understand the transient nature of the instance of the type Product2. By making this change, you are basically saying when a type of IProduct needs to be resolved, Unity should create an instance of Product2 by default. 1: public string WriteProductDetails() 2: { 3: return string.Format("Name: {0}<br/>Category: {1}<br/>Mfg Date: {2}<br/>Hash Code: {3}", 4: Name, Category, MfgDate.ToString("MM/dd/yyyy hh:mm:ss tt"), GetHashCode()); 5: } Again, the above change is purely for the purpose of making the example more clear to understand. The display will show the full date and also displays the hash code of the current instance. The GetHashCode() method returns an integer when an instance gets created – a new integer for every instance. When you run the application, you’ll see something like the below: Now when you click on the ‘Get Product2 Instance’ button, you’ll see that the Mfg Date (which is set in the constructor) and the Hash Code are different from the one created on page load. This proves to us that a new instance is created every single time. To make this a singleton, we need to add a type alias for the ContainerControlledLifetimeManager class and then change the type attribute of the lifetime element to singleton. 1: <typeAlias alias="singleton" type="Microsoft.Practices.Unity.ContainerControlledLifetimeManager, Microsoft.Practices.Unity"/> 2: ... 
3: <type type="IProduct" mapTo="Product2"> 4: <lifetime type="singleton" /> 5: </type> Running the application now gets me the following output: Click on the button below and you’ll see that the Mfg Date and the Hash code remain unchanged => the unity container is storing the reference the first time it is created and then returns the same instance every time the type needs to be resolved. Digging more deeper into this, Unity provides more than the two lifetime managers. ExternallyControlledLifetimeManager – maintains a weak reference to type mappings and instances. Unity returns the same instance as long as the some code is holding a strong reference to this instance. For this, you need: 1: <typeAlias alias="external" type="Microsoft.Practices.Unity.ExternallyControlledLifetimeManager, Microsoft.Practices.Unity"/> 2: ... 3: <type type="IProduct" mapTo="Product2"> 4: <lifetime type="external" /> 5: </type> PerThreadLifetimeManager – Unity returns a unique instance of an object for each thread – so this effectively is a singleton behavior on a  per-thread basis. 1: <typeAlias alias="perThread" type="Microsoft.Practices.Unity.PerThreadLifetimeManager, Microsoft.Practices.Unity"/> 2: ... 3: <type type="IProduct" mapTo="Product2"> 4: <lifetime type="perThread" /> 5: </type> One thing to note about this is that if you use RegisterInstance method to register an existing object, this instance will be returned for every thread, making this a purely singleton behavior. Needless to say, this type of lifetime management is useful in multi-threaded applications (duh!!). I hope this blog provided some basics on lifetime management of objects resolved in Unity and in the next blog, I’ll talk about Injection. Please see the code used here.

    Read the article

  • Visual Studio 2013 Static Code Analysis in depth: What? When and How?

    - by Hosam Kamel
    In this post I'll illustrate in details the following points What is static code analysis? When to use? Supported platforms Supported Visual Studio versions How to use Run Code Analysis Manually Run Code Analysis Automatically Run Code Analysis while check-in source code to TFS version control (TFSVC) Run Code Analysis as part of Team Build Understand the Code Analysis results & learn how to fix them Create your custom rule set Q & A References What is static Rule analysis? Static Code Analysis feature of Visual Studio performs static code analysis on code to help developers identify potential design, globalization, interoperability, performance, security, and a lot of other categories of potential problems according to Microsoft's rules that mainly targets best practices in writing code, and there is a large set of those rules included with Visual Studio grouped into different categorized targeting specific coding issues like security, design, Interoperability, globalizations and others. Static here means analyzing the source code without executing it and this type of analysis can be performed through automated tools (like Visual Studio 2013 Code Analysis Tool) or manually through Code Review which already supported in Visual Studio 2012 and 2013 (check Using Code Review to Improve Quality video on Channel9) There is also Dynamic analysis which performed on executing programs using software testing techniques such as Code Coverage for example. When to use? Running Code analysis tool at regular intervals during your development process can enhance the quality of your software, examines your code for a set of common defects and violations is always a good programming practice. Adding that Code analysis can also find defects in your code that are difficult to discover through testing allowing you to achieve first level quality gate for you application during development phase before you release it to the testing team. Supported platforms .NET Framework, native (C and C++) Database applications. Support Visual Studio versions All version of Visual Studio starting Visual Studio 2013 (except Visual Studio Test Professional) check Feature comparisons Create and modify a custom rule set required Visual Studio Premium or Ultimate. How to use? Code Analysis can be run manually at any time from within the Visual Studio IDE, or even setup to automatically run as part of a Team Build or check-in policy for Team Foundation Server. Run Code Analysis Manually To run code analysis manually on a project, on the Analyze menu, click Run Code Analysis on your project or simply right click on the project name on the Solution Explorer choose Run Code Analysis from the context menu Run Code Analysis Automatically To run code analysis each time that you build a project, you select Enable Code Analysis on Build on the project's Property Page Run Code Analysis while check-in source code to TFS version control (TFSVC) Team Foundation Version Control (TFVC) provides a way for organizations to enforce practices that lead to better code and more efficient group development through Check-in policies which are rules that are set at the team project level and enforced on developer computers before code is allowed to be checked in. 
(This is available only if you're using Team Foundation Server) Require permissions on Team Foundation Server: you must have the Edit project-level information permission set to Allow typically your account must be part of Project Administrators, Project Collection Administrators, for more information about Team Foundation permissions check http://msdn.microsoft.com/en-us/library/ms252587(v=vs.120).aspx In Team Explorer, right-click the team project name, point to Team Project Settings, and then click Source Control. In the Source Control dialog box, select the Check-in Policy tab. Click Add to create a new check-in policy. Double-click the existing Code Analysis item in the Policy Type list to change the policy. Check or Uncheck the policy option based on the configurations you need to perform as illustrated below: Enforce check-in to only contain files that are part of current solution: code analysis can run only on files specified in solution and project configuration files. This policy guarantees that all code that is part of a solution is analyzed. Enforce C/C++ Code Analysis (/analyze): Requires that all C or C++ projects be built with the /analyze compiler option to run code analysis before they can be checked in. Enforce Code Analysis for Managed Code: Requires that all managed projects run code analysis and build before they can be checked in. Check Code analysis rule set reference on MSDN What is Rule Set? Rule Set is a group of code analysis rules like the example below where Microsoft.Design is the rule set name where "Do not declare static members on generic types" is the code analysis rule Once you configured the Analysis rule the policy will be enabled for all the team member in this project whenever a team member check-in any source code to the TFSVC the policy section will highlight the Code Analysis policy as below TFS is a very extensible platform so you can simply implement your own custom Code Analysis Check-in policy, check this link for more details http://msdn.microsoft.com/en-us/library/dd492668.aspx but you have to be aware also about compatibility between different TFS versions check http://msdn.microsoft.com/en-us/library/bb907157.aspx Run Code Analysis as part of Team Build With Team Foundation Build (TFBuild), you can create and manage build processes that automatically compile and test your applications, and perform other important functions. Code Analysis can be enabled in the Build Definition file by selecting the correct value for the build process parameter "Perform Code Analysis" Once configure, Kick-off your build definition to queue a new build, Code Analysis will run as part of build workflow and you will be able to see code analysis warning as part of build report Understand the Code Analysis results & learn how to fix them Now after you went through Code Analysis configurations and the different ways of running it, we will go through the Code Analysis result how to understand them and how to resolve them. Code Analysis window in Visual Studio will show all the analysis results based on the rule sets you configured in the project file properties, let's dig deep into what each result item contains: 1 Check ID The unique identifier for the rule. CheckId and Category are used for in-source suppression of a warning.       
2 Title The title of warning message       3 Description A description of the problem or suggested fix 4 File Name File name and the line of code number which violate the code analysis rule set 5 Category The code analysis category for this error 6 Warning /Error Depend on how you configure it in the rule set the default is Warning level 7 Action Copy: copy the warning information to the clipboard Create Work Item: If you're connected to Team Foundation Server you can create a work item most probably you may create a Task or Bug and assign it for a developer to fix certain code analysis warning Suppress Message: There are times when you might decide not to fix a code analysis warning. You might decide that resolving the warning requires too much recoding in relation to the probability that the issue will arise in any real-world implementation of your code. Or you might believe that the analysis that is used in the warning is inappropriate for the particular context. You can suppress individual warnings so that they no longer appear in the Code Analysis window. Two options available: In Source inserts a SuppressMessage attribute in the source file above the method that generated the warning. This makes the suppression more discoverable. In Suppression File adds a SuppressMessage attribute to the GlobalSuppressions.cs file of the project. This can make the management of suppressions easier. Note that the SuppressMessage attribute added to GlobalSuppression.cs also targets the method that generated the warning. It does not suppress the warning globally.       Visual Studio makes it very easy to fix Code analysis warning, all you have to do is clicking on the Check Id hyperlink if you are not aware how to fix the warring and you'll be directed to MSDN online or local copy based on the configuration you did while installing Visual Studio and you will find all the information about the warring including how to fix it. Create a Custom Code Analysis Rule Set The Microsoft standard rule sets provide groups of rules that are organized by function and depth. For example, the Microsoft Basic Design Guidelines Rules and the Microsoft Extended Design Guidelines Rules contain rules that focus on usability and maintainability issues, with added emphasis on naming rules in the Extended rule set, you can create and modify a custom rule set to meet specific project needs associated with code analysis. To create a custom rule set, you open one or more standard rule sets in the rule set editor. Create and modify a custom rule set required Visual Studio Premium or Ultimate. You can check How to: Create a Custom Rule Set on MSDN for more details http://msdn.microsoft.com/en-us/library/dd264974.aspx Q & A Visual Studio static code analysis vs. FxCop vs. StyleCpp http://www.excella.com/blog/stylecop-vs-fxcop-difference-between-code-analysis-tools/ Code Analysis for SharePoint Apps and SPDisposeCheck? This post lists some of the rule set you can run specifically for SharePoint applications and how to integrate SPDisposeCheck as well. Code Analysis for SQL Server Database Projects? This post illustrate how to run static code analysis on T-SQL through SSDT ReSharper 8 vs. Visual Studio 2013? This document lists some of the features that are provided by ReSharper 8 but are missing or not as fully implemented in Visual Studio 2013. 
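To make the Suppress Message action above concrete: the In Source option drops a SuppressMessage attribute on the offending member, and the In Suppression File option writes an assembly-level version into GlobalSuppressions.cs. The rule and justification below are illustrative examples, not taken from a real project:

using System.Collections.Generic;
using System.Diagnostics.CodeAnalysis;

public class ReportBuilder
{
    // In Source: the attribute sits directly above the member that raised CA1002.
    [SuppressMessage("Microsoft.Design", "CA1002:DoNotExposeGenericLists",
        Justification = "Public API already ships this signature; changing it breaks callers.")]
    public List<string> GetReportLines()
    {
        return new List<string>();
    }
}

// In Suppression File (GlobalSuppressions.cs): assembly-level, with Scope/Target pointing at the member.
// [assembly: SuppressMessage("Microsoft.Design", "CA1002:DoNotExposeGenericLists",
//     Scope = "member", Target = "ReportBuilder.#GetReportLines()")]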
References A Few Billion Lines of Code Later: Using Static Analysis to Find Bugs in the Real World http://cacm.acm.org/magazines/2010/2/69354-a-few-billion-lines-of-code-later/fulltext What is New in Code Analysis for Visual Studio 2013 http://blogs.msdn.com/b/visualstudioalm/archive/2013/07/03/what-is-new-in-code-analysis-for-visual-studio-2013.aspx Analyze the code quality of Windows Store apps using Visual Studio static code analysis http://msdn.microsoft.com/en-us/library/windows/apps/hh441471.aspx [Hands-on-lab] Using Code Analysis with Visual Studio 2012 to Improve Code Quality http://download.microsoft.com/download/A/9/2/A9253B14-5F23-4BC8-9C7E-F5199DB5F831/Using%20Code%20Analysis%20with%20Visual%20Studio%202012%20to%20Improve%20Code%20Quality.docx Originally posted at "Hosam Kamel| Developer & Platform Evangelist" http://blogs.msdn.com/hkamel

    Read the article

  • Installation of SAP Web Application Server 6.20 fails

    - by karthikeyan b
    I tried installing a trial version of SAP NetWeaver 7.1 on Windows Vista on my laptop but I couldn't succeed. Then I tried installing SAP Web Application Server 6.20. The installation goes all the way to 91%, at which point I get some errors and the installation stops. Does anyone have any experience with this? Here's the complete error log below: nfo: INSTGUI.EXE Protocol version is 10. Message checksum is 7613888. Info: INSTGUI MessageFile Start message loading... Info: INSTGUI MessageFile Finished message loading. Info: InstController Prepare {} {} R3SETUP Version: Apr 24 2002 Info: InstController Prepare {} {} Logfile will be set to E:\R3SETUP\BSP.log Check E:\R3SETUP\BSP.log for further messages. Info: CommandFileController SyFileVersionSave {} {} Saving original content of file E:\R3SETUP\BSP.R3S ... Warning: CommandFileController SyFileCopy {} {} Function CopyFile() failed at location SyFileCopy-681 Warning: CommandFileController SyFileCopy {} {} errno: 5: Access is denied. Warning: CommandFileController SyFileCreateWithPermissions {} {} errno: 13: Permission denied Warning: CommandFileController SyPermissionSet {} {} Function SetNamedSecurityInfo() failed for E:\R3SETUP\BSP.R3S at location SyPermissionSet-2484 Warning: CommandFileController SyPermissionSet {} {} errno: 5: Access is denied. Error: CommandFileController StoreMasterTableFromCommandFile 2 333 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 333 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 309 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 309 CommandFile could not be updated Info: CDSERVERBASE InternalColdKeyCheck 2 309 The CD KERNEL will not be copied. Info: CDSERVERBASE InternalColdKeyCheck 2 309 The CD DATA1 will not be copied. Info: CDSERVERBASE InternalColdKeyCheck 2 309 The CD DATA2 will not be copied. Error: CommandFileController StoreMasterTableFromCommandFile 2 333 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 333 CommandFile could not be updated Info: CENTRDBINSTANCE_NT_ADA SyCheckHostnameLookup 2 333 checking host name lookup for 'Karthikeyan' Info: CENTRDBINSTANCE_NT_ADA SyCheckHostnameLookup 2 333 offical host name for 'Karthikeyan' is 'Karthikeyan'. Info: CENTRDBINSTANCE_NT_ADA SyCheckHostnameLookup 2 333 host 'Karthikeyan' has ip address '115.184.71.93' Info: CENTRDBINSTANCE_NT_ADA SyCheckHostnameLookup 2 333 offical host name for '115.184.71.93' is 'Karthikeyan'. Info: CENTRDBINSTANCE_NT_ADA SyCheckHostnameLookup 2 333 checking host name lookup for 'Karthikeyan' Info: CENTRDBINSTANCE_NT_ADA SyCheckHostnameLookup 2 333 offical host name for 'Karthikeyan' is 'Karthikeyan'. Info: CENTRDBINSTANCE_NT_ADA SyCheckHostnameLookup 2 333 host 'Karthikeyan' has ip address '115.184.71.93' Info: CENTRDBINSTANCE_NT_ADA SyCheckHostnameLookup 2 333 offical host name for '115.184.71.93' is 'Karthikeyan'. Info: CENTRDBINSTANCE_NT_ADA SyCheckHostnameLookup 2 333 checking host name lookup for 'Karthikeyan' Info: CENTRDBINSTANCE_NT_ADA SyCheckHostnameLookup 2 333 offical host name for 'Karthikeyan' is 'Karthikeyan'. Info: CENTRDBINSTANCE_NT_ADA SyCheckHostnameLookup 2 333 host 'Karthikeyan' has ip address '115.184.71.93' Info: CENTRDBINSTANCE_NT_ADA SyCheckHostnameLookup 2 333 offical host name for '115.184.71.93' is 'Karthikeyan'. 
Info: CENTRDBINSTANCE_NT_ADA SyCheckHostnameLookup 2 333 checking host name lookup for 'Karthikeyan' Info: CENTRDBINSTANCE_NT_ADA SyCheckHostnameLookup 2 333 offical host name for 'Karthikeyan' is 'Karthikeyan'. Info: CENTRDBINSTANCE_NT_ADA SyCheckHostnameLookup 2 333 host 'Karthikeyan' has ip address '115.184.71.93' Info: CENTRDBINSTANCE_NT_ADA SyCheckHostnameLookup 2 333 offical host name for '115.184.71.93' is 'Karthikeyan'. Error: CommandFileController StoreMasterTableFromCommandFile 2 99 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 99 CommandFile could not be updated Warning: ADADBINSTANCE_IND_ADA GetConfirmationFor 2 58 Cleanup database instance BSP for new installation. Error: CommandFileController StoreMasterTableFromCommandFile 2 58 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 58 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 333 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 333 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 1206 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1206 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 75 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 75 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 247 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 247 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 1195 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1195 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 1195 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1195 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 120 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 120 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 816 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 816 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 242 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 242 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 816 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 816 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 816 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 816 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 816 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 816 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 815 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 815 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 223 Command file could not be opened. 
Error: CommandFileController SetKeytableForSect 2 223 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 1 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 10 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 10 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 263 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 263 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 263 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 263 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 263 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 263 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 760 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 760 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 1267 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1267 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 1111 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1111 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 1122 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1122 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 1114 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1114 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 1 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 1 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 54 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 54 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 1146 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1146 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 718 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 718 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 760 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 760 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 1 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1 CommandFile could not be updated Info: InstController MakeStepsDeliver 2 333 Requesting Installation Details Error: CommandFileController StoreMasterTableFromCommandFile 2 333 Command file could not be opened. 
Error: CommandFileController SetKeytableForSect 2 333 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 333 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 333 CommandFile could not be updated Info: CDSERVERBASE InternalWarmKeyCheck 2 309 The CD KERNEL will not be copied. Info: CDSERVERBASE InternalWarmKeyCheck 2 309 The CD DATA1 will not be copied. Info: CDSERVERBASE InternalWarmKeyCheck 2 309 The CD DATA2 will not be copied. Info: InstController MakeStepsDeliver 2 309 Requesting Information on CD-ROMs Error: CommandFileController StoreMasterTableFromCommandFile 2 309 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 309 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 309 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 309 CommandFile could not be updated Info: CENTRDBINSTANCE_NT_ADA InternalWarmKeyCheck 2 333 The installation phase is starting now. Please look in the log file for further information about current actions. Info: InstController MakeStepsDeliver 2 333 Requesting Installation Details Error: CommandFileController StoreMasterTableFromCommandFile 2 333 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 333 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 333 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 333 CommandFile could not be updated Info: InstController MakeStepsDeliver 2 99 Defining Key Values Error: CommandFileController StoreMasterTableFromCommandFile 2 99 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 99 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 99 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 99 CommandFile could not be updated Info: InstController MakeStepsDeliver 2 58 Requesting Setup Details Error: CommandFileController StoreMasterTableFromCommandFile 2 58 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 58 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 58 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 58 CommandFile could not be updated Info: InstController MakeStepsDeliver 2 333 Requesting Installation Details Error: CommandFileController StoreMasterTableFromCommandFile 2 333 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 333 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 333 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 333 CommandFile could not be updated Info: InstController MakeStepsDeliver 2 1206 Setting Users for Single DB landscape Error: CommandFileController StoreMasterTableFromCommandFile 2 1206 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1206 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 1206 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1206 CommandFile could not be updated Info: InstController MakeStepsDeliver 2 75 Stopping the SAP DB Instance Error: CommandFileController StoreMasterTableFromCommandFile 2 75 Command file could not be opened. 
Error: CommandFileController SetKeytableForSect 2 75 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 75 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 75 CommandFile could not be updated Info: InstController MakeStepsDeliver 2 247 Stopping the SAP DB remote server Error: CommandFileController StoreMasterTableFromCommandFile 2 247 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 247 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 247 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 247 CommandFile could not be updated Warning: CDSERVERBASE ConfirmKey 2 1195 Can not read from Z:. Please ensure this path is accessible. Info: LvKeyRequest For further information see HTML documentation: step: SAPDBSETCDPATH_IND_IND and key: KERNEL_LOCATION Info: InstController MakeStepsDeliver 2 1195 Installing SAP DB Software Error: CommandFileController StoreMasterTableFromCommandFile 2 1195 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1195 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 1195 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1195 CommandFile could not be updated Info: InstController MakeStepsDeliver 2 1195 Installing SAP DB Software Info: SAPDBINSTALL_IND_ADA SyCoprocessCreate 2 1195 Creating coprocess E:\\sapdb\NT\I386\sdbinst.exe ... Info: SAPDBINSTALL_IND_ADA ExecuteDo 2 1195 RC code form SyCoprocessWait = 0 . Error: CommandFileController StoreMasterTableFromCommandFile 2 1195 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1195 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 1195 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1195 CommandFile could not be updated Info: InstController MakeStepsDeliver 2 120 Extracting SAP DB Tools Software Info: ADAEXTRACTLCTOOLS_NT_ADA SyCoprocessCreate 2 120 Creating coprocess SAPCAR ... Info: ADAEXTRACTLCTOOLS_NT_ADA SyCoprocessCreate 2 120 Creating coprocess SAPCAR ... Error: CommandFileController StoreMasterTableFromCommandFile 2 120 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 120 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 120 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 120 CommandFile could not be updated Info: InstController MakeStepsDeliver 2 816 Extracting the Database-Dependent SAP system Executables Info: ADAEXTRACTBSPCFG SyCoprocessCreate 2 816 Creating coprocess SAPCAR ... Info: ADAEXTRACTBSPCFG SyCoprocessCreate 2 816 Creating coprocess SAPCAR ... Error: CommandFileController StoreMasterTableFromCommandFile 2 816 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 816 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 816 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 816 CommandFile could not be updated Info: InstController MakeStepsDeliver 2 242 Starting VSERVER Info: ADAXSERVER_NT_ADA SyCoprocessCreate 2 242 Creating coprocess C:\sapdb\programs\bin\x_server.exe ... Info: ADAXSERVER_NT_ADA ExecuteDo 2 242 RC code form SyCoprocessWait = 0 . 
Error: CommandFileController StoreMasterTableFromCommandFile 2 242 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 242 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 242 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 242 CommandFile could not be updated Info: InstController MakeStepsDeliver 2 816 Extracting the Database-Dependent SAP system Executables Info: EXTRACTSAPEXEDBDATA1 SyCoprocessCreate 2 816 Creating coprocess SAPCAR ... Info: EXTRACTSAPEXEDBDATA1 SyCoprocessCreate 2 816 Creating coprocess SAPCAR ... Error: CommandFileController StoreMasterTableFromCommandFile 2 816 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 816 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 816 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 816 CommandFile could not be updated Warning: CDSERVERBASE ConfirmKey 2 816 Can not read from Y:. Please ensure this path is accessible. Info: LvKeyRequest For further information see HTML documentation: step: EXTRACTSAPEXEDBDATA2 and key: DATA1_LOCATION Info: InstController MakeStepsDeliver 2 816 Extracting the Database-Dependent SAP system Executables Info: EXTRACTSAPEXEDBDATA2 SyCoprocessCreate 2 816 Creating coprocess SAPCAR ... Info: EXTRACTSAPEXEDBDATA2 SyCoprocessCreate 2 816 Creating coprocess SAPCAR ... Error: CommandFileController StoreMasterTableFromCommandFile 2 816 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 816 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 816 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 816 CommandFile could not be updated Warning: CDSERVERBASE ConfirmKey 2 816 Can not read from X:. Please ensure this path is accessible. Info: LvKeyRequest For further information see HTML documentation: step: EXTRACTSAPEXEDBDATA3 and key: DATA2_LOCATION Info: InstController MakeStepsDeliver 2 816 Extracting the Database-Dependent SAP system Executables Info: EXTRACTSAPEXEDBDATA3 SyCoprocessCreate 2 816 Creating coprocess SAPCAR ... Info: EXTRACTSAPEXEDBDATA3 SyCoprocessCreate 2 816 Creating coprocess SAPCAR ... Error: CommandFileController StoreMasterTableFromCommandFile 2 816 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 816 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 816 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 816 CommandFile could not be updated Warning: CDSERVERBASE ConfirmKey 2 815 We tried to find the label SAPDB:MINI-WAS-DEMO:620:KERNEL for CD KERNEL in path E:\. But the check was not successfull. Info: LvKeyRequest For further information see HTML documentation: step: EXTRACTSAPEXE and key: KERNEL_LOCATION Info: InstController MakeStepsDeliver 2 815 Extracting the SAP Executables Info: EXTRACTSAPEXE SyCoprocessCreate 2 815 Creating coprocess SAPCAR ... Info: EXTRACTSAPEXE SyCoprocessCreate 2 815 Creating coprocess SAPCAR ... Error: CommandFileController StoreMasterTableFromCommandFile 2 815 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 815 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 815 Command file could not be opened. 
Error: CommandFileController SetKeytableForSect 2 815 CommandFile could not be updated Info: InstController MakeStepsDeliver 2 223 Setting new Rundirectory. Info: ADADBREGISTER_IND_ADA SyCoprocessCreate 2 223 Creating coprocess C:\sapdb\programs\pgm\dbmcli.exe ... Info: ADADBREGISTER_IND_ADA ExecuteDo 2 223 RC code form SyCoprocessWait = 0 . Error: CommandFileController StoreMasterTableFromCommandFile 2 223 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 223 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 223 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 223 CommandFile could not be updated Info: InstController MakeStepsDeliver 2 1 ADASETDEVSPACES_IND_ADA Error: CommandFileController StoreMasterTableFromCommandFile 2 1 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 1 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1 CommandFile could not be updated Info: InstController MakeStepsDeliver 2 10 Performing Service BCHECK Error: CommandFileController StoreMasterTableFromCommandFile 2 10 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 10 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 10 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 10 CommandFile could not be updated Info: InstController MakeStepsDeliver 2 263 Creating XUSER File for the User ADM for Dialog Instance Info: ADAXUSERSIDADM_DEFAULT_NT_ADA SyCoprocessCreate 2 263 Creating coprocess C:\sapdb\programs\pgm\dbmcli.exe ... Info: ADAXUSERSIDADM_DEFAULT_NT_ADA ExecuteDo 2 263 RC code form SyCoprocessWait = 0 . Error: CommandFileController StoreMasterTableFromCommandFile 2 263 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 263 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 263 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 263 CommandFile could not be updated Info: InstController MakeStepsDeliver 2 263 Creating XUSER File for the User ADM for Dialog Instance Info: ADAXUSERSIDADM_COLD_NT_ADA SyCoprocessCreate 2 263 Creating coprocess C:\sapdb\programs\pgm\dbmcli.exe ... Info: ADAXUSERSIDADM_COLD_NT_ADA ExecuteDo 2 263 RC code form SyCoprocessWait = 0 . Error: CommandFileController StoreMasterTableFromCommandFile 2 263 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 263 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 263 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 263 CommandFile could not be updated Info: InstController MakeStepsDeliver 2 263 Creating XUSER File for the User ADM for Dialog Instance Info: ADAXUSERSIDADM_WARM_NT_ADA SyCoprocessCreate 2 263 Creating coprocess C:\sapdb\programs\pgm\dbmcli.exe ... Info: ADAXUSERSIDADM_WARM_NT_ADA ExecuteDo 2 263 RC code form SyCoprocessWait = 0 . Error: CommandFileController StoreMasterTableFromCommandFile 2 263 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 263 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 263 Command file could not be opened. 
Error: CommandFileController SetKeytableForSect 2 263 CommandFile could not be updated Info: InstController MakeStepsDeliver 2 760 Creating the Default Profile Info: DEFAULTPROFILE_IND_IND SyFileVersionSave 2 760 Saving original content of file C:\MiniWAS\DEFAULT.PFL ... Error: CommandFileController StoreMasterTableFromCommandFile 2 760 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 760 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 760 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 760 CommandFile could not be updated Info: InstController MakeStepsDeliver 2 1267 Modifying or Creating the TPPARAM File Info: TPPARAMMODIFY_NT_ADA SyFileVersionSave 2 1267 Saving original content of file C:\MiniWAS\trans\bin\TPPARAM ... Error: CommandFileController StoreMasterTableFromCommandFile 2 1267 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1267 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 1267 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1267 CommandFile could not be updated Info: InstController MakeStepsDeliver 2 1111 Creating the Service Entry for the Dispatcher Info: R3DISPATCHERPORT_IND_IND IaServicePortAppend 2 1111 Checking service name sapdp00, protocol tcp, port number 3200 ... Info: R3DISPATCHERPORT_IND_IND IaServicePortAppend 2 1111 Port name sapdp00 is known and the port number 3200 is equal to the existing port number 3200 Error: CommandFileController StoreMasterTableFromCommandFile 2 1111 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1111 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 1111 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1111 CommandFile could not be updated Info: InstController MakeStepsDeliver 2 1122 Creating the Service Entry for the Message Server Info: R3MESSAGEPORT_IND_IND IaServicePortAppend 2 1122 Checking service name sapmsBSP, protocol tcp, port number 3600 ... Info: R3MESSAGEPORT_IND_IND IaServicePortAppend 2 1122 Port name sapmsBSP is known and the port number 3600 is equal to the existing port number 3600 Error: CommandFileController StoreMasterTableFromCommandFile 2 1122 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1122 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 1122 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1122 CommandFile could not be updated Info: InstController MakeStepsDeliver 2 1114 Creating the Service Entry for the Gateway Service Info: R3GATEWAYPORT_IND_IND IaServicePortAppend 2 1114 Checking service name sapgw00, protocol tcp, port number 3300 ... Info: R3GATEWAYPORT_IND_IND IaServicePortAppend 2 1114 Port name sapgw00 is known and the port number 3300 is equal to the existing port number 3300 Error: CommandFileController StoreMasterTableFromCommandFile 2 1114 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1114 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 1114 Command file could not be opened. 
Error: CommandFileController SetKeytableForSect 2 1114 CommandFile could not be updated Info: InstController MakeStepsDeliver 2 1 PISTARTBSP Info: PISTARTBSP SyDirCreate 2 1 Checking existence of directory C:\Users\Karthikeyan\Desktop\. If it does not exist creating it with user , group and permission 0 ... Error: CommandFileController StoreMasterTableFromCommandFile 2 1 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 1 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1 CommandFile could not be updated Info: InstController MakeStepsDeliver 2 1 PISTARTBSPGROUP Info: PISTARTBSPGROUP SyDirCreate 2 1 Checking existence of directory C:\Users\Karthikeyan\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Mini SAP Web Application Server. If it does not exist creating it with user , group and permission 0 ...

    Read the article

  • Issue 15: Oracle PartnerNetwork Exchange @ Oracle OpenWorld

    - by rituchhibber
         ORACLE FOCUS Oracle PartnerNetwork Exchange@ ORACLE OpenWorld Sylvie MichouSenior DirectorPartner Marketing & Communications and Strategic Programs RESOURCES -- Oracle OpenWorld 2012 Oracle PartnerNetwork Exchange @ OpenWorld Oracle PartnerNetwork Exchange @ OpenWorld Registration Oracle PartnerNetwork Exchange SpecializationTest Fest Oracle OpenWorld Schedule Builder Oracle OpenWorld Promotional Toolkit for Partners Oracle Partner Events Oracle Partner Webcasts Oracle EMEA Partner News SUBSCRIBE FEEDBACK PREVIOUS ISSUES If you are attending our forthcoming Oracle OpenWorld 2012 conference in San Francisco from 30 September to 4 October, you will discover a new dedicated programme of keynotes and sessions tailored especially for you, our valued partners. Oracle PartnerNetwork Exchange @ OpenWorld has been created to enhance the opportunities for you to learn from and network with Oracle executives and experts. The programme also provides more informal opportunities than ever throughout the week to meet up with the people who are most important to your business: customers, prospects, colleagues and the Oracle EMEA Alliances & Channels management team. Oracle remains fully focused on building the industry's most admired partner ecosystem—which today spans over 25,000 partners. This new OPN Exchange programme offers an exciting change of pace for partners throughout the conference. Now it will be possible to enjoy a fully-integrated, partner-dedicated session schedule throughout the week, as well as key social events such as the Sunday night Welcome Reception, networking lunches from Monday to Thursday at the Howard Street Tent, and a fantastic closing event on the last Thursday afternoon. In addition to the regular Oracle OpenWorld conference schedule, if you have registered for the Oracle PartnerNetwork Exchange @ OpenWorld programme, you will be invited to attend a much anticipated global partner keynote presentation, plus more than 40 conference sessions aimed squarely at what's most important to you, as partners. Prominent topics for discussion will include: Oracle technologies and roadmaps and how they fit with partners' business plans; business development; regional distinctions in business practices; and much more. Each session will provide plenty of food for thought ahead of the numerous networking opportunities throughout the week, encouraging the knowledge exchange with Oracle executives, customers, prospects, and colleagues that will make this conference of even greater value for you. At Oracle we always work closely with our partners to deliver solution offerings that improve business value, simplify the IT experience and drive innovation and efficiencies for joint customers. The most important element of our new OPN Exchange is content that helps you get more from technology investments, more from your peer-to-peer connections, and more from your interactions with customers. To this end we've created some partner-specific tools which can be used by OPN members ahead of the conference itself. Crucially, a comprehensive Content Catalog already lists and organises details of every OPN Exchange session, speaker, exhibitor, demonstration and related materials. This Content Catalog can be used by all our partners to identify interesting content that you can add to your own personalised Oracle OpenWorld Schedule Builder, allowing more effective planning and pre-enrolment for vital sessions. 
There are numerous highlights that you will definitely want to include in those personal schedules. On Sunday morning, 30 September we will start the week with partner dedicated OPN Exchange sessions, following our Global Partner Keynote at 13:00 with Judson Althoff, SVP, Worldwide Alliances & Channels and Embedded Sales and senior executives, giving insight into Oracle's partner vision, strategy, and resources—all designed to help build and strengthen market opportunities for you. This will be followed by a number of OPN Exchange general sessions, the Oracle OpenWorld Opening Keynote with Larry Ellison, CEO, Oracle and concluded with the OPN Exchange AfterDark Welcome Reception, starting at 19:30 at the Metreon. From Monday 1 to Thursday 4 October, you can attend the OPN Exchange sessions that are most relevant to your business today and over the coming year. Oracle's top product and sales leaders will be on hand to discuss Oracle's strategic direction in 40+ targeted and in-depth sessions focussing on critical success factors to develop your business. Oracle's dedication to innovation, specialization, enablement and engineering provides Oracle partners with a huge opportunity to create new services and solutions, differentiate themselves and deliver extreme value to joint customers across the globe. Oracle will even be helping over 1000 partners to earn OPN Specialization certification during the Oracle OpenWorld OPN Exchange Test Fest, which will be providing all the study materials and exams required to drive Specialization for free at the conference. You simply need to check the list of current certification tracks available, and make sure you pre-register to reserve a seat in one of the ten sessions being offered free to OPN Exchange registered attendees. And finally, let's not forget those all-important networking opportunities, which can so often provide partners with valuable long-term alliances as well as exciting new business leads. The Oracle PartnerNetwork Lounge, located at Moscone South, exhibition hall, room 100 is the place where partners can meet formally or informally with colleagues, customers, prospects, and other industry professionals. OPN Specialized partners with OPN Exchange passes can also visit the OPN Video Blogging room to record and share ideas, and at the OPN Information Station you will find consultants available to answer your questions. "For the first time ever we will have a full partner conference within OpenWorld. OPN Exchange @ OpenWorld will kick-off on the first Sunday and run the entire week. We'll have over 40 sessions throughout that time and partners will hear from our top development executives, with special sessions dedicated to partnering throughout. It's going to be a phenomenal event, and we look forward to seeing our partners there." Judson Althoff, SVP, Oracle Worldwide Alliances & Channels and Embedded Sales So if you haven't done so already, please register for Oracle PartnerNetwork Exchange @ OpenWorld today or add OPN Exchange to your existing registration for just $100 through My Account. 
And if you have any further questions regarding partner activities at Oracle OpenWorld, please don't hesitate to contact the Oracle PartnerNetwork team at [email protected] will be on hand to share the very latest information about: Oracle's SPARC Superclusters: the latest Engineered Systems from Oracle, delivering radically improved performance, faster deployment and greatly reduced operational costs for mixed database and enterprise application consolidation Oracle's SPARC T4 servers: with the newly developed T4 processor and Oracle Solaris providing up to five times the single threaded performance and better overall system throughput for expanded application versatility Oracle Database Appliance: a new way to take advantage of the world's most popular database, Oracle Database 11g, in a single, easy-to-deploy and manage system. It's a complete package engineered to deliver simple, reliable and affordable database services to small and medium size businesses and departmental systems. All hardware and software components are supported together and offer customers unique pay-as-you-grow software licensing to quickly scale from two to 24 processor cores without incurring the costs and downtime usually associated with hardware upgrades Oracle Exalogic: the world's only integrated cloud machine, featuring server hardware and middleware software engineered together for maximum performance with minimum set-up and operational cost Oracle Exadata Database Machine: the only database machine that provides extreme performance for both data warehousing and online transaction processing (OLTP) applications, making it the ideal platform for consolidating onto grids or private clouds. It is a complete package of servers, storage, networking and software that is massively scalable, secure and redundant Oracle Sun ZFS Storage Appliances: providing enterprise-class NAS performance, price-performance, manageability and TCO by combining third-generation software with high-performance controllers, flash-based caches and disks Oracle Pillar Axiom Quality-of-Service: confidently consolidate storage for multiple applications into a single datacentre storage solution Oracle Solaris 11: delivering secure enterprise cloud deployments with the ability to run hundreds of virtual application with no overhead and co-engineered with other Oracle software products to provide the highest levels of security, manageability and performance Oracle Enterprise Manager 12c: Oracle's integrated enterprise IT management product, providing the industry's only complete, integrated and business-driven enterprise cloud management solution Oracle VM 3.0: the latest release of Oracle's server virtualisation and management solution, helping to move datacentres beyond server consolidation to improve application deployment and management. Register today and ensure your place at the Extreme Performance Tour! Extreme Performance Tour events are free to attend, but places are limited. To make sure that you don't miss out, please visit Oracle's Extreme Performance Tour website, select the city that you'd be interest in attending an event in, and then click on the 'Register Now' button for that city to secure your interest. Each individual city page also contains more in-depth information about your local event, including logistics, agenda and maybe even a preview of VIP guest speakers. -- Oracle OpenWorld 2010 Whether you attended Oracle OpenWorld 2009 or not, don't forget to save the date now for Oracle OpenWorld 2010. 
The event will be held a little earlier next year, from 19th-23rd September, so please don't miss out. With thousands of sessions and hundreds of exhibits and demos already lined up, there's no better place to learn how to optimise your existing systems, get an inside line on upcoming technology breakthroughs, and meet with your partner peers, Oracle strategists and even the developers responsible for the products and services that help you get better results for your end customers. Register Now for Oracle OpenWorld 2010! Perhaps you are interested in learning more about Oracle OpenWorld 2010, but don't wish to register at this time? Great! Please just enter your contact information here and we will contact you at a later date. How to Exhibit at Oracle OpenWorld 2010 Sponsorship Opportunities at Oracle OpenWorld 2010 Advertising Opportunities at Oracle OpenWorld 2010 -- Back to the welcome page

    Read the article

  • SQL SERVER – The Difference between Dual Core vs. Core 2 Duo

    - by pinaldave
    I have decided that I would not write on this subject until I have received a total of 25 questions on this subject. Here are a few questions from the list: Questions: What is the difference between Dual Core and Core 2 Duo? Which one is recommended for SQL Server: Core 2 Duo or Dual Core? Can I upgrade my Dual Core to Core 2 Duo? If Dual Core has 2 CPUs, how many CPUs does Core 2 Duo have? Is it true that Core 2 Duo and Dual Core meant the same thing? Well, let us see the answer. Optimistically, I would be directing everybody to this blog post if I receive a question of the same kind sometime in the future. To verify the information that I provide, visit Intel’s site. For additional information regarding the subject, visit Wikipedia. My Answer: Any computer that has two CPUs or two “cores“ is known as Dual Core. Core Duo is a brand name of Intel for Dual Core. Core 2 Duo is simply a higher version of Core Duo. (e.g. for Pentium brand, it`s like Pentium I, Pentium II, etc.) The computer I am using now has Core 2 Duo. Intel has launched a new brand, which they call i3, i5, and i7.  Here, the numbers are not related to the number of cores; rather, they show the range of the CPU. I3 is of low range and i7 is of high range. Feel free to add more details by adding valuable comments here. And if you still want to ask why I created this blog post, well, I mentioned that I was waiting for 25 questions threshold to hit, before I write about this subject which I didn`t really plan to write about. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • Problem with setup VPN on Ubuntu Server 12.04

    - by Yozone W.
    I have a problem with setup VPN server on my Ubuntu VPS, here is my server environments: Ubuntu Server 12.04 x86_64 xl2tpd 1.3.1+dfsg-1 pppd 2.4.5-5ubuntu1 openswan 1:2.6.38-1~precise1 After install software and configuration: ipsec verify Checking your system to see if IPsec got installed and started correctly: Version check and ipsec on-path [OK] Linux Openswan U2.6.38/K3.2.0-24-virtual (netkey) Checking for IPsec support in kernel [OK] SAref kernel support [N/A] NETKEY: Testing XFRM related proc values [OK] [OK] [OK] Checking that pluto is running [OK] Pluto listening for IKE on udp 500 [OK] Pluto listening for NAT-T on udp 4500 [OK] Checking for 'ip' command [OK] Checking /bin/sh is not /bin/dash [WARNING] Checking for 'iptables' command [OK] Opportunistic Encryption Support [DISABLED] /var/log/auth.log message: Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [RFC 3947] method set to=115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike] meth=114, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-08] meth=113, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-07] meth=112, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-06] meth=111, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-05] meth=110, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-04] meth=109, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-03] meth=108, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-02] meth=107, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-02_n] meth=106, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: ignoring Vendor ID payload [FRAGMENTATION 80000000] Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [Dead Peer Detection] Oct 16 06:50:54 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: responding to Main Mode from unknown peer [My IP Address] Oct 16 06:50:54 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: transition from state STATE_MAIN_R0 to state STATE_MAIN_R1 Oct 16 06:50:54 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: STATE_MAIN_R1: sent MR1, expecting MI2 Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: NAT-Traversal: Result using draft-ietf-ipsec-nat-t-ike (MacOS X): peer is NATed Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: transition from state STATE_MAIN_R1 to state STATE_MAIN_R2 Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: STATE_MAIN_R2: sent MR2, expecting MI3 Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: ignoring informational payload, type 
IPSEC_INITIAL_CONTACT msgid=00000000 Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: Main mode peer ID is ID_IPV4_ADDR: '192.168.12.52' Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: switched from "L2TP-PSK-NAT" to "L2TP-PSK-NAT" Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: deleting connection "L2TP-PSK-NAT" instance with peer [My IP Address] {isakmp=#0/ipsec=#0} Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: transition from state STATE_MAIN_R2 to state STATE_MAIN_R3 Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: new NAT mapping for #5, was [My IP Address]:2251, now [My IP Address]:2847 Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: STATE_MAIN_R3: sent MR3, ISAKMP SA established {auth=OAKLEY_PRESHARED_KEY cipher=aes_256 prf=oakley_sha group=modp1024} Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: Dead Peer Detection (RFC 3706): enabled Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: the peer proposed: [My Server IP Address]/32:17/1701 -> 192.168.12.52/32:17/0 Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: NAT-Traversal: received 2 NAT-OA. using first, ignoring others Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: responding to Quick Mode proposal {msgid:8579b1fb} Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: us: [My Server IP Address]<[My Server IP Address]>:17/1701 Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: them: [My IP Address][192.168.12.52]:17/65280===192.168.12.52/32 Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: transition from state STATE_QUICK_R0 to state STATE_QUICK_R1 Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: STATE_QUICK_R1: sent QR1, inbound IPsec SA installed, expecting QI2 Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: Dead Peer Detection (RFC 3706): enabled Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: transition from state STATE_QUICK_R1 to state STATE_QUICK_R2 Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: STATE_QUICK_R2: IPsec SA established transport mode {ESP=>0x08bda158 <0x4920a374 xfrm=AES_256-HMAC_SHA1 NATOA=192.168.12.52 NATD=[My IP Address]:2847 DPD=enabled} Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: received Delete SA(0x08bda158) payload: deleting IPSEC State #6 Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: ERROR: netlink XFRM_MSG_DELPOLICY response for flow eroute_connection delete included errno 2: No such file or directory Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: received and ignored informational message Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: received Delete SA payload: deleting ISAKMP State #5 Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address]: deleting connection "L2TP-PSK-NAT" instance with peer [My IP Address] {isakmp=#0/ipsec=#0} Oct 16 06:51:16 vpn pluto[3963]: packet from [My IP Address]:2847: received and ignored informational message xl2tpd -D message: xl2tpd[4289]: Enabling IPsec SAref processing for L2TP transport mode SAs xl2tpd[4289]: IPsec SAref does not work with L2TP kernel mode yet, enabling forceuserspace=yes xl2tpd[4289]: setsockopt recvref[30]: Protocol not available xl2tpd[4289]: This binary does not 
support kernel L2TP. xl2tpd[4289]: xl2tpd version xl2tpd-1.3.1 started on vpn.netools.me PID:4289 xl2tpd[4289]: Written by Mark Spencer, Copyright (C) 1998, Adtran, Inc. xl2tpd[4289]: Forked by Scott Balmos and David Stipp, (C) 2001 xl2tpd[4289]: Inherited by Jeff McAdams, (C) 2002 xl2tpd[4289]: Forked again by Xelerance (www.xelerance.com) (C) 2006 xl2tpd[4289]: Listening on IP address [My Server IP Address], port 1701 Then it just stopped here, and have no any response. I can't connect VPN on my mac client, the /var/log/system.log message: Oct 16 15:17:36 azone-iMac.local configd[17]: SCNC: start, triggered by SystemUIServer, type L2TP, status 0 Oct 16 15:17:36 azone-iMac.local pppd[3799]: pppd 2.4.2 (Apple version 596.13) started by azone, uid 501 Oct 16 15:17:38 azone-iMac.local pppd[3799]: L2TP connecting to server 'vpn.netools.me' ([My Server IP Address])... Oct 16 15:17:38 azone-iMac.local pppd[3799]: IPSec connection started Oct 16 15:17:38 azone-iMac.local racoon[359]: Connecting. Oct 16 15:17:38 azone-iMac.local racoon[359]: IPSec Phase1 started (Initiated by me). Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Main-Mode message 1). Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: receive success. (Initiator, Main-Mode message 2). Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Main-Mode message 3). Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: receive success. (Initiator, Main-Mode message 4). Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Main-Mode message 5). Oct 16 15:17:38 azone-iMac.local racoon[359]: IKEv1 Phase1 AUTH: success. (Initiator, Main-Mode Message 6). Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: receive success. (Initiator, Main-Mode message 6). Oct 16 15:17:38 azone-iMac.local racoon[359]: IKEv1 Phase1 Initiator: success. (Initiator, Main-Mode). Oct 16 15:17:38 azone-iMac.local racoon[359]: IPSec Phase1 established (Initiated by me). Oct 16 15:17:39 azone-iMac.local racoon[359]: IPSec Phase2 started (Initiated by me). Oct 16 15:17:39 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Quick-Mode message 1). Oct 16 15:17:39 azone-iMac.local racoon[359]: IKE Packet: receive success. (Initiator, Quick-Mode message 2). Oct 16 15:17:39 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Quick-Mode message 3). Oct 16 15:17:39 azone-iMac.local racoon[359]: IKEv1 Phase2 Initiator: success. (Initiator, Quick-Mode). Oct 16 15:17:39 azone-iMac.local racoon[359]: IPSec Phase2 established (Initiated by me). Oct 16 15:17:39 azone-iMac.local pppd[3799]: IPSec connection established Oct 16 15:17:59 azone-iMac.local pppd[3799]: L2TP cannot connect to the server Oct 16 15:17:59 azone-iMac.local racoon[359]: IPSec disconnecting from server [My Server IP Address] Oct 16 15:17:59 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Information message). Oct 16 15:17:59 azone-iMac.local racoon[359]: IKEv1 Information-Notice: transmit success. (Delete IPSEC-SA). Oct 16 15:17:59 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Information message). Oct 16 15:17:59 azone-iMac.local racoon[359]: IKEv1 Information-Notice: transmit success. (Delete ISAKMP-SA). Anyone help? Thanks a million!

    Read the article

  • Windows Azure: Backup Services Release, Hyper-V Recovery Manager, VM Enhancements, Enhanced Enterprise Management Support

    - by ScottGu
    This morning we released a huge set of updates to Windows Azure.  These new capabilities include: Backup Services: General Availability of Windows Azure Backup Services Hyper-V Recovery Manager: Public preview of Windows Azure Hyper-V Recovery Manager Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn Configuration Active Directory: Securely manage hundreds of SaaS applications Enterprise Management: Use Active Directory to Better Manage Windows Azure Windows Azure SDK 2.2: A massive update of our SDK + Visual Studio tooling support All of these improvements are now available to use immediately.  Below are more details about them. Backup Service: General Availability Release of Windows Azure Backup Today we are releasing Windows Azure Backup Service as a general availability service.  This release is now live in production, backed by an enterprise SLA, supported by Microsoft Support, and is ready to use for production scenarios. Windows Azure Backup is a cloud based backup solution for Windows Server which allows files and folders to be backed up and recovered from the cloud, and provides off-site protection against data loss. The service provides IT administrators and developers with the option to back up and protect critical data in an easily recoverable way from any location with no upfront hardware cost. Windows Azure Backup is built on the Windows Azure platform and uses Windows Azure blob storage for storing customer data. Windows Server uses the downloadable Windows Azure Backup Agent to transfer file and folder data securely and efficiently to the Windows Azure Backup Service. Along with providing cloud backup for Windows Server, Windows Azure Backup Service also provides capability to backup data from System Center Data Protection Manager and Windows Server Essentials, to the cloud. All data is encrypted onsite before it is sent to the cloud, and customers retain and manage the encryption key (meaning the data is stored entirely secured and can’t be decrypted by anyone but yourself). Getting Started To get started with the Windows Azure Backup Service, create a new Backup Vault within the Windows Azure Management Portal.  Click New->Data Services->Recovery Services->Backup Vault to do this: Once the backup vault is created you’ll be presented with a simple tutorial that will help guide you on how to register your Windows Servers with it: Once the servers you want to backup are registered, you can use the appropriate local management interface (such as the Microsoft Management Console snap-in, System Center Data Protection Manager Console, or Windows Server Essentials Dashboard) to configure the scheduled backups and to optionally initiate recoveries. You can follow these tutorials to learn more about how to do this: Tutorial: Schedule Backups Using the Windows Azure Backup Agent This tutorial helps you with setting up a backup schedule for your registered Windows Servers. Additionally, it also explains how to use Windows PowerShell cmdlets to set up a custom backup schedule. Tutorial: Recover Files and Folders Using the Windows Azure Backup Agent This tutorial helps you with recovering data from a backup. Additionally, it also explains how to use Windows PowerShell cmdlets to do the same tasks. Below are some of the key benefits the Windows Azure Backup Service provides: Simple configuration and management. 
Windows Azure Backup Service integrates with the familiar Windows Server Backup utility in Windows Server, the Data Protection Manager component in System Center and Windows Server Essentials, in order to provide a seamless backup and recovery experience to a local disk, or to the cloud. Block level incremental backups. The Windows Azure Backup Agent performs incremental backups by tracking file and block level changes and only transferring the changed blocks, hence reducing the storage and bandwidth utilization. Different point-in-time versions of the backups use storage efficiently by only storing the changes blocks between these versions. Data compression, encryption and throttling. The Windows Azure Backup Agent ensures that data is compressed and encrypted on the server before being sent to the Windows Azure Backup Service over the network. As a result, the Windows Azure Backup Service only stores encrypted data in the cloud storage. The encryption key is not available to the Windows Azure Backup Service, and as a result the data is never decrypted in the service. Also, users can setup throttling and configure how the Windows Azure Backup service utilizes the network bandwidth when backing up or restoring information. Data integrity is verified in the cloud. In addition to the secure backups, the backed up data is also automatically checked for integrity once the backup is done. As a result, any corruptions which may arise due to data transfer can be easily identified and are fixed automatically. Configurable retention policies for storing data in the cloud. The Windows Azure Backup Service accepts and implements retention policies to recycle backups that exceed the desired retention range, thereby meeting business policies and managing backup costs. Hyper-V Recovery Manager: Now Available in Public Preview I’m excited to also announce the public preview of a new Windows Azure Service – the Windows Azure Hyper-V Recovery Manager (HRM). Windows Azure Hyper-V Recovery Manager helps protect your business critical services by coordinating the replication and recovery of System Center Virtual Machine Manager 2012 SP1 and System Center Virtual Machine Manager 2012 R2 private clouds at a secondary location. With automated protection, asynchronous ongoing replication, and orderly recovery, the Hyper-V Recovery Manager service can help you implement Disaster Recovery and restore important services accurately, consistently, and with minimal downtime. Application data in an Hyper-V Recovery Manager scenarios always travels on your on-premise replication channel. Only metadata (such as names of logical clouds, virtual machines, networks etc.) that is needed for orchestration is sent to Azure. All traffic sent to/from Azure is encrypted. You can begin using Windows Azure Hyper-V Recovery today by clicking New->Data Services->Recovery Services->Hyper-V Recovery Manager within the Windows Azure Management Portal.  You can read more about Windows Azure Hyper-V Recovery Manager in Brad Anderson’s 9-part series, Transform the datacenter. To learn more about setting up Hyper-V Recovery Manager follow our detailed step-by-step guide. Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn Today’s Windows Azure release includes a number of nice updates to Windows Azure Virtual Machines.  
These improvements include:

Ability to Delete both VM Instances + Attached Disks in One Operation

Prior to today's release, when you deleted VMs within Windows Azure we would delete the VM instance – but not the drives attached to the VM. You had to manually delete these yourself from the storage account. With today's update we've added a convenience option that now allows you to either retain or delete the attached disks when you delete the VM:

We've also added the ability to delete a cloud service, its deployments, and its role instances with a single action. This can either be a cloud service that has production and staging deployments with web and worker roles, or a cloud service that contains virtual machines. To do this, simply select the Cloud Service within the Windows Azure Management Portal and click the "Delete" button:

Warnings on Availability Sets with Only One Virtual Machine In Them

One of the nice features that Windows Azure Virtual Machines supports is the concept of "Availability Sets". An availability set allows you to define a tier/role (e.g. webfrontends, databaseservers, etc.) that you can map Virtual Machines into – and when you do this, Windows Azure separates them across fault domains and ensures that at least one of them is always available during servicing operations. This enables you to deploy applications in a highly available way.

One issue we've seen some customers run into is where they define an availability set, but then forget to map more than one VM into it (which defeats the purpose of having an availability set). With today's release we now display a warning in the Windows Azure Management Portal if you have only one virtual machine deployed in an availability set, to help highlight this:

You can learn more about configuring the availability of your virtual machines here.

Configuring SQL Server AlwaysOn

SQL Server AlwaysOn is a great feature that you can use with Windows Azure to enable high availability and DR scenarios with SQL Server. Today's Windows Azure release makes it even easier to configure SQL Server AlwaysOn by enabling "Direct Server Return" endpoints to be configured and managed within the Windows Azure Management Portal. Previously, setting this up required using PowerShell to complete the endpoint configuration. Starting today you can enable this simply by checking the "Direct Server Return" checkbox:

You can learn more about how to use direct server return for SQL Server AlwaysOn availability groups here. For comparison, a rough sketch of the older PowerShell approach is shown below.
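As a point of reference, this is roughly what the pre-portal configuration looked like with the service management cmdlets. This is a hedged sketch, not the official walkthrough: the service name, VM name, endpoint name, load-balanced set name and ports are placeholders, and the exact parameters depend on your availability group listener setup.

    # Sketch only (placeholders throughout): add a load-balanced endpoint with
    # Direct Server Return enabled for a SQL Server AlwaysOn availability group listener.
    # Repeat for each VM participating in the availability group.
    Get-AzureVM -ServiceName "myservice" -Name "sqlvm1" |
        Add-AzureEndpoint -Name "ListenerEndpoint" `
                          -Protocol tcp `
                          -LocalPort 1433 `
                          -PublicPort 1433 `
                          -LBSetName "AlwaysOnLBSet" `
                          -ProbePort 59999 `
                          -ProbeProtocol tcp `
                          -DirectServerReturn $true |
        Update-AzureVM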
Active Directory: Application Access Enhancements

This summer we released our initial preview of our Application Access Enhancements for Windows Azure Active Directory. This service enables you to securely implement single sign-on (SSO) support against SaaS applications (including Office 365, SalesForce, Workday, Box, Google Apps, GitHub, etc.) as well as LOB applications (including ones built with the new Windows Azure AD support we shipped last week with ASP.NET and VS 2013).

Since the initial preview we've enhanced our SAML federation capabilities, integrated our new password vaulting system, and shipped multi-factor authentication support. We've also turned on our outbound identity provisioning system and have it working with hundreds of additional SaaS applications:

Earlier this month we published an update on dates and pricing for when the service will be released in general availability form. In that blog post we announced our intention to release the service in general availability form by the end of the year. We also announced that the features below would be available in a free tier with it:

SSO to every SaaS app we integrate with – Users can single sign on to any app we are integrated with at no charge. This includes all the top SaaS apps and every app in our application gallery, whether they use federation or password vaulting.

Application access assignment and removal – IT admins can assign access privileges for web applications to the users in their Active Directory, assuring that every employee has access to the SaaS apps they need. And when a user leaves the company or changes jobs, the admin can just as easily remove their access privileges, assuring data security and minimizing IP loss.

User provisioning (and de-provisioning) – IT admins will be able to automatically provision users in 3rd party SaaS applications like Box, Salesforce.com, GoToMeeting, DropBox and others. We are working with key partners in the ecosystem to establish these connections, meaning you no longer have to continually update user records in multiple systems.

Security and auditing reports – Security is a key priority for us. With the free version of these enhancements you'll get access to our standard set of access reports, giving you visibility into which users are using which applications, when they were using them and where they are using them from. In addition, we'll alert you to unusual usage patterns, for instance when a user logs in from multiple locations at the same time.

Our Application Access Panel – Users are logging in from every type of device, including Windows, iOS and Android. Not all of these devices handle authentication in the same manner, but the user doesn't care: they need to access their apps from the devices they love. Our Application Access Panel will support the ability for users to access and launch their apps from any device, anywhere.

You can learn more about our plans for application management with Windows Azure Active Directory here. Try out the preview and start using it today.

Enterprise Management: Use Active Directory to Better Manage Windows Azure

Windows Azure Active Directory provides the ability to manage your organization in a directory which is hosted entirely in the cloud, or alternatively kept in sync with an on-premises Windows Server Active Directory solution (allowing you to seamlessly integrate with the directory you already have).

With today's Windows Azure release we are integrating Windows Azure Active Directory even more deeply within the core Windows Azure management experience, and enabling an even richer enterprise security offering. Specifically:

1) All Windows Azure accounts now have a default Windows Azure Active Directory created for them. You can create and map any users you want into this directory, and grant administrative rights to manage resources in Windows Azure to these users.

2) You can keep this directory entirely hosted in the cloud – or optionally sync it with your on-premises Windows Server Active Directory. Both options are free. The latter approach is ideal for companies that wish to use their corporate user identities to sign in and manage Windows Azure resources. It also ensures that if an employee leaves an organization, his or her access control rights to the company's Windows Azure resources are immediately revoked.
3) The Windows Azure Service Management APIs have been updated to support using Windows Azure Active Directory credentials to sign in and perform management operations. Prior to today's release, customers had to download and use management certificates (which were not scoped to individual users) to perform management operations. We still support this management certificate approach (don't worry – nothing will stop working). But we think the new Windows Azure Active Directory authentication support enables an even easier and more secure way for customers to manage resources going forward.

4) The Windows Azure SDK 2.2 release (which is also shipping today) includes built-in support for the new Service Management APIs that authenticate with Windows Azure Active Directory, and now allows you to create and manage Windows Azure applications and resources directly within Visual Studio using your Active Directory credentials. This, combined with updated PowerShell scripts that also support Active Directory, enables an end-to-end enterprise authentication story with Windows Azure.

Below are some details on how all of this works:

Subscriptions within a Directory

As part of today's update, we have associated all existing Windows Azure accounts with a Windows Azure Active Directory (and created one for you if you don't already have one). When you log in to the Windows Azure Management Portal you'll now see the directory name in the URI of the browser. For example, in the screenshot below you can see that I have a "scottgu" directory that my subscriptions are hosted within:

Note that you can continue to use Microsoft Accounts (formerly known as Microsoft Live IDs) to sign into Windows Azure. These map just fine to a Windows Azure Active Directory – so there is no need to create new usernames that are specific to a directory if you don't want to. In the scenario above I'm actually logged in using my @hotmail.com based Microsoft ID, which is now mapped to a "scottgu" Active Directory that was created for me. By default everything will continue to work just like it did before.

Manage your Directory

You can manage an Active Directory (including the one we now create for you by default) by clicking the "Active Directory" tab on the left-hand side of the portal. This will list all of the directories in your account. Clicking one for the first time will display a getting-started page that provides documentation and links to perform common tasks with it:

You can use the built-in directory management support within the Windows Azure Management Portal to add/remove/manage users within the directory, enable multi-factor authentication, associate a custom domain (e.g. mycompanyname.com) with the directory, and/or rename the directory to whatever friendly name you want (just click the Configure tab to do this). You can also set up the directory to automatically sync with an on-premises Active Directory using the "Directory Integration" tab.

Note that users within a directory do not, by default, have admin rights to log in to or manage Windows Azure based resources. You still need to explicitly grant them co-admin permissions on a subscription for them to log in or manage resources in Windows Azure. You can do this by clicking the Settings tab on the left-hand side of the portal and then clicking the Administrators tab within it. The same directory credentials can also be used from the updated PowerShell cmdlets; a rough sketch follows below.
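As an aside, here is a hedged sketch of what the Active Directory based sign-in looks like from the Windows Azure PowerShell cmdlets that ship alongside SDK 2.2. The subscription name is a placeholder, and the sign-in prompt shown is the organizational/Microsoft account login dialog rather than a management certificate download.

    # Sketch only: sign in to Windows Azure with Active Directory (or Microsoft account)
    # credentials instead of downloading a management certificate.
    Add-AzureAccount                       # pops up the sign-in dialog and caches a token

    # See which subscriptions the signed-in user can manage, then pick one (name is a placeholder)
    Get-AzureSubscription | Select-Object SubscriptionName, IsDefault
    Select-AzureSubscription -SubscriptionName "My Production Subscription"

    # From here the usual service management cmdlets run under that identity
    Get-AzureVM
    Get-AzureWebsite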
Sign-In Integration within Visual Studio

If you install the new Windows Azure SDK 2.2 release, you can now connect to Windows Azure from directly inside Visual Studio without having to download any management certificates. You can now just right-click on the "Windows Azure" icon within the Server Explorer and choose the "Connect to Windows Azure" context menu option to do so:

Doing this will prompt you to enter the email address of the username you wish to sign in with (make sure this account is a user in your directory with co-admin rights on a subscription):

You can use either a Microsoft Account (e.g. Windows Live ID) or an Active Directory based organizational account as the email. The dialog will update with an appropriate login prompt depending on which type of email address you enter:

Once you sign in you'll see the Windows Azure resources that you have permissions to manage show up automatically within the Visual Studio Server Explorer and be available to start using:

No downloading of management certificates required. All of the authentication was handled using your Windows Azure Active Directory!

Manage Subscriptions across Multiple Directories

If you already have multiple directories and multiple subscriptions within your Windows Azure account, we have done our best to create a good default mapping of your subscriptions to directories as part of today's update. If you don't like the default subscription-to-directory mapping we have done, you can click the Settings tab in the left-hand navigation of the Windows Azure Management Portal and browse to the Subscriptions tab within it:

If you want to map a subscription to a different directory in your account, simply select the subscription from the list, and then click the "Edit Directory" button to choose which directory to map it to. Mapping a subscription to a different directory takes only seconds and will not cause any of the resources within the subscription to recycle or stop working. We've made the directory-to-subscription mapping process self-service so that you always have complete control and can map things however you want.

Filtering By Directory and Subscription

Within the Windows Azure Management Portal you can filter resources in the portal by subscription (allowing you to show/hide different subscriptions). If you have subscriptions mapped to multiple directory tenants, we also now have a filter drop-down that allows you to filter the subscription list by directory tenant. This filter is only available if you have multiple subscriptions mapped to multiple directories within your Windows Azure account:

Windows Azure SDK 2.2

Today we are also releasing a major update of our Windows Azure SDK. The Windows Azure SDK 2.2 release adds some great new features, including:

Visual Studio 2013 Support
Integrated Windows Azure Sign-In support within Visual Studio
Remote Debugging Cloud Services with Visual Studio
Firewall Management support within Visual Studio for SQL Databases
Visual Studio 2013 RTM VM Images for MSDN Subscribers
Windows Azure Management Libraries for .NET
Updated Windows Azure PowerShell Cmdlets and ScriptCenter

I'll post a follow-up blog shortly with more details about all of the above.
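In the meantime, as one small example of a task you can now drive either from the Visual Studio tooling or from a script, here is a hedged sketch of managing a SQL Database server firewall rule with the Windows Azure PowerShell cmdlets. The server name, rule name and IP range are placeholders.

    # Sketch only: open the SQL Database server firewall for a single office IP range.
    # Server name, rule name and addresses are placeholders.
    New-AzureSqlDatabaseServerFirewallRule -ServerName "myserver01" `
                                           -RuleName "OfficeNetwork" `
                                           -StartIpAddress "203.0.113.10" `
                                           -EndIpAddress "203.0.113.20"

    # List the rules that are currently in place
    Get-AzureSqlDatabaseServerFirewallRule -ServerName "myserver01"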
Additional Updates

In addition to the above enhancements, today's release also includes a number of additional improvements:

AutoScale: Richer time and date based scheduling support (set different rules on different dates)
AutoScale: Ability to Scale to Zero Virtual Machines (very useful for Dev/Test scenarios)
AutoScale: Support for time-based scheduling of Mobile Services AutoScale rules
Operation Logs: Auditing support for Service Bus management operations

Today we also shipped a major update to the Windows Azure SDK – Windows Azure SDK 2.2. It has so much goodness in it that I have a whole second blog post coming shortly on it! :-)

Summary

Today's Windows Azure release enables a bunch of great new scenarios, and enables a much richer enterprise authentication offering.

If you don't already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Start small, grow fast your SOA footprint by Edwin Biemond, Ronald van Luttikhuizen and Demed L’Her

    - by JuergenKress
    A set of pragmatic best practices for deploying a simple and sound SOA footprint that can grow with business demand. The paper covers administrative, infrastructure, development and architectural considerations. Authors: Edwin Biemond, Ronald van Luttikhuizen and Demed L’Her.

We are very interested in publishing papers jointly with our partner community. Here is a list of possible SOA whitepapers that I would very much like to see published (note that the list is not exhaustive and I welcome any other topic you would like to volunteer). The format for these whitepapers would ideally be a 5 to 12 page document, possibly with a companion sample (to be hosted on http://java.net/projects/oraclesoasuite11g ). It is not marketing material. We will get them published on OTN, with proper credits, and use social media (Twitter, Facebook, etc.) to promote them. For reference, the "quickstart guide" was downloaded more than 11,000 times over just 2 months, following a similar approach. These papers are a great way to get exposure and build your resume. We would prefer to have 2 people collaborate on each paper (ideally 1 partner or customer and 1 Oracle person). This guarantees some level of peer review and gives greater legitimacy to the paper. Interested? Please contact Demed L’Her. Thank you!

SOA Partner Community

For regular information on Oracle SOA Suite become a member of the SOA Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum

Technorati Tags: Start small grow fast,Edwin Biemond,Ronald van Luttikhuizen,Demed L’Her,SOA Suite,Oracle,OTN,SOA Partner Community,Jürgen Kress

    Read the article

  • web application or web portal? [closed]

    - by klo
    As the title says: what are the differences between those two? I have read the definitions and some articles, but I need information about some other aspects. Here is the thing: we want to build a web site that will contain a site, a database, uploads, and numerous background services that would have to collect information from the uploads and from some other sites, parse it, etc. I doubt that there are portlets that fit our specific needs, so we would have to build them ourselves. So, my questions: 1. Deployment (and the difference in cost, if possible): is deploying a portal much easier than a web app (Java or .NET)? 2. Server load: does a portal consume much server power (and can you strip the portal of the things you do not use)? 3. Implementation and development of portlets: can you do everything you could have done in Java or .NET? 4. General thoughts on when to use a portal and when to use a classic web app. Thanks in advance.

    Read the article

  • What's the significance of this Google Apps error code '0x8004010B' ?

    - by Freddy
    There seems to be a lack of good information around this particular Google Apps error code. Has anyone found common environmental factors that cause this error, or any kind of solution? Uninstalling/re-installing Google Sync doesn't seem to do anything, and I've run Outlook's executable that scans/fixes the PST file, etc. The error reported is: Task 'Google Apps - ::user email address:: - Sending' reported error (0x8004010B) : 'Unknown Error 0x8004010B'. I found a listing of error codes for the GData API but nothing for these types of 'unknown' error codes. Does Google make available a list of common 'unknown' error codes/messages? Thanks in advance.

    Read the article

  • Heaps of Trouble?

    - by Paul White NZ
    If you’re not already a regular reader of Brad Schulz’s blog, you’re missing out on some great material. In his latest entry, he is tasked with optimizing a query run against tables that have no indexes at all. The problem is, predictably, that performance is not very good. The catch is that we are not allowed to create any indexes (or even new statistics) as part of our optimization efforts.

In this post, I’m going to look at the problem from a slightly different angle, and present an alternative solution to the one Brad found. Inevitably, there’s going to be some overlap between our entries, and while you don’t necessarily need to read Brad’s post before this one, I do strongly recommend that you read it at some stage; he covers some important points that I won’t cover again here.

The Example

We’ll use data from the AdventureWorks database, copied to temporary unindexed tables. A script to create these structures is shown below:

CREATE TABLE #Custs
(
    CustomerID   INTEGER NOT NULL,
    TerritoryID  INTEGER NULL,
    CustomerType NCHAR(1) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL
);
GO
CREATE TABLE #Prods
(
    ProductMainID   INTEGER NOT NULL,
    ProductSubID    INTEGER NOT NULL,
    ProductSubSubID INTEGER NOT NULL,
    Name            NVARCHAR(50) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL
);
GO
CREATE TABLE #OrdHeader
(
    SalesOrderID     INTEGER NOT NULL,
    OrderDate        DATETIME NOT NULL,
    SalesOrderNumber NVARCHAR(25) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL,
    CustomerID       INTEGER NOT NULL
);
GO
CREATE TABLE #OrdDetail
(
    SalesOrderID    INTEGER NOT NULL,
    OrderQty        SMALLINT NOT NULL,
    LineTotal       NUMERIC(38,6) NOT NULL,
    ProductMainID   INTEGER NOT NULL,
    ProductSubID    INTEGER NOT NULL,
    ProductSubSubID INTEGER NOT NULL
);
GO
INSERT #Custs (CustomerID, TerritoryID, CustomerType)
SELECT C.CustomerID, C.TerritoryID, C.CustomerType
FROM AdventureWorks.Sales.Customer C WITH (TABLOCK);
GO
INSERT #Prods (ProductMainID, ProductSubID, ProductSubSubID, Name)
SELECT P.ProductID, P.ProductID, P.ProductID, P.Name
FROM AdventureWorks.Production.Product P WITH (TABLOCK);
GO
INSERT #OrdHeader (SalesOrderID, OrderDate, SalesOrderNumber, CustomerID)
SELECT H.SalesOrderID, H.OrderDate, H.SalesOrderNumber, H.CustomerID
FROM AdventureWorks.Sales.SalesOrderHeader H WITH (TABLOCK);
GO
INSERT #OrdDetail (SalesOrderID, OrderQty, LineTotal, ProductMainID, ProductSubID, ProductSubSubID)
SELECT D.SalesOrderID, D.OrderQty, D.LineTotal, D.ProductID, D.ProductID, D.ProductID
FROM AdventureWorks.Sales.SalesOrderDetail D WITH (TABLOCK);

The query itself is a simple join of the four tables:

SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID
FROM #Prods P
JOIN #OrdDetail D
    ON  P.ProductMainID = D.ProductMainID
    AND P.ProductSubID = D.ProductSubID
    AND P.ProductSubSubID = D.ProductSubSubID
JOIN #OrdHeader H
    ON  D.SalesOrderID = H.SalesOrderID
JOIN #Custs C
    ON  H.CustomerID = C.CustomerID
ORDER BY P.ProductMainID ASC
OPTION (RECOMPILE, MAXDOP 1);

Remember that these tables have no indexes at all, and only the single-column sampled statistics SQL Server automatically creates (assuming default settings). The estimated query plan produced for the test query looks like this (click to enlarge):

The Problem

The problem here is one of cardinality estimation – the number of rows SQL Server expects to find at each step of the plan. The lack of indexes and useful statistical information means that SQL Server does not have the information it needs to make a good estimate.
Every join in the plan shown above estimates that it will produce just a single row as output. Brad covers the factors that lead to the low estimates in his post.

In reality, the join between the #Prods and #OrdDetail tables will produce 121,317 rows. It should not surprise you that this has rather dire consequences for the remainder of the query plan. In particular, it makes a nonsense of the optimizer’s decision to use Nested Loops to join to the two remaining tables. Instead of scanning the #OrdHeader and #Custs tables once (as it expected), it has to perform 121,317 full scans of each. The query takes somewhere in the region of twenty minutes to run to completion on my development machine.

A Solution

At this point, you may be thinking the same thing I was: if we really are stuck with no indexes, the best we can do is to use hash joins everywhere.

We can force the exclusive use of hash joins in several ways, the two most common being join and query hints. A join hint means writing the query using the INNER HASH JOIN syntax; using a query hint involves adding OPTION (HASH JOIN) at the bottom of the query. The difference is that using join hints also forces the order of the join, whereas the query hint gives the optimizer freedom to reorder the joins at its discretion.

Adding the OPTION (HASH JOIN) hint results in this estimated plan:

That produces the correct output in around seven seconds, which is quite an improvement! As a purely practical matter, and given the rigid rules of the environment we find ourselves in, we might leave things there. (We can improve the hashing solution a bit – I’ll come back to that later on).

Faster Nested Loops

It might surprise you to hear that we can beat the performance of the hash join solution shown above using nested loops joins exclusively, and without breaking the rules we have been set.

The key to this part is to realize that a condition like (A = B) can be expressed as (A <= B) AND (A >= B). Armed with this tremendous new insight, we can rewrite the join predicates like so:

SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID
FROM #OrdDetail D
JOIN #OrdHeader H
    ON  D.SalesOrderID >= H.SalesOrderID
    AND D.SalesOrderID <= H.SalesOrderID
JOIN #Custs C
    ON  H.CustomerID >= C.CustomerID
    AND H.CustomerID <= C.CustomerID
JOIN #Prods P
    ON  P.ProductMainID >= D.ProductMainID
    AND P.ProductMainID <= D.ProductMainID
    AND P.ProductSubID = D.ProductSubID
    AND P.ProductSubSubID = D.ProductSubSubID
ORDER BY D.ProductMainID
OPTION (RECOMPILE, LOOP JOIN, MAXDOP 1, FORCE ORDER);

I’ve also added LOOP JOIN and FORCE ORDER query hints to ensure that only nested loops joins are used, and that the tables are joined in the order they appear. The new estimated execution plan is:

This new query runs in under 2 seconds.

Why Is It Faster?

The main reason for the improvement is the appearance of the eager Index Spools, which are also known as index-on-the-fly spools. If you read my Inside The Optimiser series you might be interested to know that the rule responsible is called JoinToIndexOnTheFly.

An eager index spool consumes all rows from the table it sits above, and builds an index suitable for the join to seek on. Taking the index spool above the #Custs table as an example, it reads all the CustomerID and TerritoryID values with a single scan of the table, and builds an index keyed on CustomerID. The term ‘eager’ means that the spool consumes all of its input rows when it starts up.
The index is built in a work table in tempdb, has no associated statistics, and only exists until the query finishes executing. The result is that each unindexed table is only scanned once, and just for the columns necessary to build the temporary index. From that point on, every execution of the inner side of the join is answered by a seek on the temporary index – not the base table.

A second optimization is that the sort on ProductMainID (required by the ORDER BY clause) is performed early, on just the rows coming from the #OrdDetail table. The optimizer has a good estimate for the number of rows it needs to sort at that stage – it is just the cardinality of the table itself. The accuracy of the estimate there is important because it helps determine the memory grant given to the sort operation. Nested loops join preserves the order of rows on its outer input, so sorting early is safe. (Hash joins do not preserve order in this way, of course).

The extra lazy spool on the #Prods branch is a further optimization that avoids executing the seek on the temporary index if the value being joined (the ‘outer reference’) hasn’t changed from the last row received on the outer input. It takes advantage of the fact that rows are still sorted on ProductMainID, so if duplicates exist, they will arrive at the join operator one after the other.

The optimizer is quite conservative about introducing index spools into a plan, because creating and dropping a temporary index is a relatively expensive operation. Its presence in a plan is often an indication that a useful index is missing.

I want to stress that I rewrote the query in this way primarily as an educational exercise – I can’t imagine having to do something so horrible to a production system.

Improving the Hash Join

I promised I would return to the solution that uses hash joins. You might be puzzled that SQL Server can create three new indexes (and perform all those nested loops iterations) faster than it can perform three hash joins. The answer, again, is down to the poor information available to the optimizer. Let’s look at the hash join plan again:

Two of the hash joins have single-row estimates on their build inputs. SQL Server fixes the amount of memory available for the hash table based on this cardinality estimate, so at run time the hash join very quickly runs out of memory.

This results in the join spilling hash buckets to disk, and any rows from the probe input that hash to the spilled buckets also get written to disk. The join process then continues, and may again run out of memory. This is a recursive process, which may eventually result in SQL Server resorting to a bailout join algorithm, which is guaranteed to complete eventually, but may be very slow. The data sizes in the example tables are not large enough to force a hash bailout, but they do result in multiple levels of hash recursion. You can see this for yourself by tracing the Hash Warning event using the Profiler tool.

The final sort in the plan also suffers from a similar problem: it receives very little memory and has to perform multiple sort passes, saving intermediate runs to disk (the Sort Warnings Profiler event can be used to confirm this). Notice also that because hash joins don’t preserve sort order, the sort cannot be pushed down the plan toward the #OrdDetail table, as in the nested loops plan.

Ok, so now we understand the problems, what can we do to fix it?
We can address the hash spilling by forcing a different order for the joins:

SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID
FROM #Prods P
JOIN #Custs C
    JOIN #OrdHeader H
        ON  H.CustomerID = C.CustomerID
    JOIN #OrdDetail D
        ON  D.SalesOrderID = H.SalesOrderID
    ON  P.ProductMainID = D.ProductMainID
    AND P.ProductSubID = D.ProductSubID
    AND P.ProductSubSubID = D.ProductSubSubID
ORDER BY D.ProductMainID
OPTION (MAXDOP 1, HASH JOIN, FORCE ORDER);

With this plan, each of the inputs to the hash joins has a good estimate, and no hash recursion occurs. The final sort still suffers from the one-row estimate problem, and we get a single-pass sort warning as it writes rows to disk. Even so, the query runs to completion in three or four seconds. That’s around half the time of the previous hashing solution, but still not as fast as the nested loops trickery.

Final Thoughts

SQL Server’s optimizer makes cost-based decisions, so it is vital to provide it with accurate information. We can’t really blame the performance problems highlighted here on anything other than the decision to use completely unindexed tables, and not to allow the creation of additional statistics.

I should probably stress that the nested loops solution shown above is not one I would normally contemplate in the real world. It’s there primarily for its educational and entertainment value. I might perhaps use it to demonstrate to the sceptical that SQL Server itself is crying out for an index.

Be sure to read Brad’s original post for more details. My grateful thanks to him for granting permission to reuse some of his material.

Paul White
Email: [email protected]
Twitter: @PaulWhiteNZ

    Read the article
